Issue with logging to stdout in Laravel 10.21 using Monolog v3, exception: "Cannot log request: fwrite(): Write of 378 bytes failed with errno=32 Broken pipe" #1860
It is because of this: https://github.com/Seldaek/monolog/blob/main/src/Monolog/Handler/StreamHandler.php#L118
is_resource is not enough to handle writing to an already closed socket. [EDITED: the first solution was not the proper one] The library should handle recreation of $this->stream on fwrite() error here, as the only way to tell that the pipe is broken is basically to try to write to it. Also, assuming fwrite will always succeed is a bit overoptimistic: https://github.com/Seldaek/monolog/blob/main/src/Monolog/Handler/StreamHandler.php#L160C24-L160C24
Sample pseudocode for this (since fwrite does not have to write everything in a single call, we could also check the number of bytes written): |
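A sketch of what such handling could look like (illustrative only; `writeWithReopen` is a made-up helper, not Monolog's actual API, and `php://memory` stands in for the real stream in testing):

```php
<?php
// Sketch: check fwrite()'s return value, loop on partial writes, and
// reopen the stream once on failure. Names here are illustrative,
// not Monolog's actual StreamHandler implementation.
function writeWithReopen(string $url, $stream, string $data)
{
    $remaining = $data;
    while ($remaining !== '') {
        // fwrite() may perform a partial write, so loop on the byte count.
        $written = @fwrite($stream, $remaining);
        if ($written === false || $written === 0) {
            // Likely a broken pipe: reopen the stream once and retry.
            if (is_resource($stream)) {
                @fclose($stream);
            }
            $stream = @fopen($url, 'a');
            if ($stream === false) {
                throw new RuntimeException('Unable to reopen log stream ' . $url);
            }
            $written = @fwrite($stream, $remaining);
            if ($written === false || $written === 0) {
                throw new RuntimeException('Write failed after reopening ' . $url);
            }
        }
        $remaining = (string) substr($remaining, $written);
    }
    return $stream;
}
```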
Although locally fwrite should usually be able to write all the bytes to stdout (I would not make the same assumption on network sockets). |
When can we anticipate a fix to this bug? |
See also comments on #1634 - I'm still waiting for a way to narrow this down. |
@Seldaek thanks. This is really the same class of problem as programming over the network (although the pipe is a local object, it can be closed by the OS at any time really): closing the pipe on the "remote" side (the side external to the library's execution context) is not detectable until you try to write to the pipe. The explanation can be read here; there is not much we can do about it really, as it is all coded in the kernel: https://stackoverflow.com/questions/8369506/why-does-sigpipe-exist |
Yeah fair enough, maybe we need to check before writing or handle that failure at least and reopen the stream if possible. I am still curious why this happens now and it didn't before though. |
Well, possibly due to a file descriptor leak; I think this is the obvious scenario that can happen locally. |
Is it possible something with the server images changed, outside of any PHP changes, that could have caused this? |
Very unlikely, unless PHP 8 implements is_resource or write() differently. Writing to an already closed socket will always behave like that. Another option is lower sysctl limits for file handles (so the bug has always been there but had less chance of occurring). There is also this interesting claim, which would explain the issue. |
We have the same issue in production; it occurred after upgrading to PHP 8.2 and Monolog 3. EDIT: so far it seems that file descriptors build up until reaching the kernel limit, preventing any new fwrite. Rebooting our servers fixes the issue for a while. It seems like the stream isn't closed properly, or at least not in all cases. I've noticed that it should be done during the |
Any updates on if/when we may see a resolution to this issue? |
Hello! Encountering the same issue on many of my projects as I log to stdout with json formatter. |
+1 for the issue |
+1 Also having this issue with Laravel on PHP 8.2 currently. |
Just to grasp this better, as it seems nobody feels like digging into it (+1s are nice buuut they don't solve problems..). Does it happen only in long-running worker processes, or also when using fpm just processing web requests? |
For me, it only happened in fpm web requests. Our jobs never had this issue. |
Ok thanks, this is odd though..
The way PHP works, resources should be closed and file handles released at the end of a request, even if you never call
Per #1634 (comment), maybe PHP 8.1 is the cause? I still don't really see any clear pattern here, except for everyone using Laravel.. So maybe it's something Laravel does?
Closing the handle every time seems to cause a 15-20% slowdown in log writes vs just opening it once, if you log a lot. If you log only one line it is obviously the same, so I am not sure whether this is acceptable. Perhaps if we ensure that at least batch writes happen within one stream open, the impact would be small enough. |
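The batch idea could look roughly like this (an illustrative sketch, not the actual Monolog patch; `writeBatch` is a made-up name):

```php
<?php
// Sketch of the batch idea: open the stream once per batch, write every
// record, then close, so the per-record open/close overhead is amortized
// across the whole batch. Not Monolog's actual code.
function writeBatch(string $url, array $lines): void
{
    $stream = fopen($url, 'a');
    if ($stream === false) {
        throw new RuntimeException('Unable to open ' . $url);
    }
    try {
        foreach ($lines as $line) {
            fwrite($stream, $line . PHP_EOL);
        }
    } finally {
        // One close per batch instead of one per record.
        fclose($stream);
    }
}
```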
If you want to try the fix, you can require monolog/monolog |
Hey @Seldaek, my team experiences it in a custom PHP application. I can perhaps confirm that the issue started with the 8.x branch; however, I would not necessarily assume that the resources are always "garbage-collected". We use PHP-FPM, and the worker processes live through multiple requests. |
As already mentioned, only happens in fpm context, not for long running processes/cronjobs.
Going to try out the patch now. |
@Seldaek you write "except for everyone using Laravel", but we don't use Laravel. We have a Symfony project, PHP 8.1.27, Monolog 2.9.1, and this issue happens in fpm requests. |
Ok, then Laravel is excluded too. I really don't know what is causing it, except perhaps having too many parallel requests/workers doing stuff and reaching system file handle limits. Anyway, hopefully my patch helps. |
Our application runs in Docker. It seems there is no limit on the number of open files in Docker (systemctl show docker | grep LimitNOFILE= outputs LimitNOFILE=infinity). Also, pm.max_children is set to 15, so it is unrealistic that we hit a hard limit of open files. If only someone could set up a demo with load testing on a clean application to try to reproduce this issue, and maybe set pm.max_requests to 1 to make sure the cause is in php-fpm. |
gives me, any quick idea how to resolve it?
I guess composer.json should have an entry in:

```json
"extra": {
    "branch-alias": {
        "dev-main": "3.x-dev"
    }
},
```
@TheLevti that's why I said to put |
Worked, running it now on our test system. This might take some time to validate. So I guess in a few days I will let you know if the issue still occurs. Would be nice if someone else with the same issue can do the same. |
Great, thanks for trying, I'll merge and release in a couple weeks maybe once we have some real world feedback :) |
Instead of closing the file after each log, could we do that on shutdown? Like registering a shutdown function to close the resources. Or, if it's during a framework lifecycle, as part of a request/response cleanup step. |
But that is done already in |
🤔 Ok, let's see where the patch leads us. We are going to deploy this change further into production next week and monitor it there. Unfortunately it's hard to reproduce reliably; that's why I think it is important that more people deploy it in their applications and report back their results. |
Hello
For some reason executing |
If the patch does not help, then we should not go forward with it, as it's definitely a performance hit to constantly open/close the file. |
Hello, unfortunately I can no longer reproduce it, since we attempted a different approach. Our team made some adjustments to the FPM config, and since then we have not seen the problem. Unfortunately, this happened before I applied your fix. |
@TheLevti Ok, could you perhaps share the fpm changes if that had an impact? Because this definitely seems like some resource exhaustion problem rather than a monolog problem per se. |
I also think so. In addition, the overhead of constantly opening/closing the file is not justified here and is concerning. There could be some error handling/restore attempt like @HaizeW suggested, but the proposed patch is more of a workaround that might cause other problems, in my opinion. I will get back to you with what our team changed in the fpm config. |
Hey @TheLevti, just a sincere reminder about this: if you're able to share the changes, it would mean a lot. |
Unfortunately still happens for us in Laravel 11 even with the #1882 :( |
@Seldaek a suggestion to simplify the failure detection: according to the PHP documentation for fwrite() (https://www.php.net/manual/en/function.fwrite.php), it returns the number of bytes written, or false on failure, so the failure can be checked directly from the return value. |
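For illustration, the return-value check the PHP manual describes could look like this (a sketch; `php://memory` stands in here for the real log stream):

```php
<?php
// Per the PHP manual, fwrite() returns the number of bytes written,
// or false on failure, so the result can be checked directly.
// (php://memory stands in here for the real stdout/log stream.)
$stream = fopen('php://memory', 'w+');
$bytes = fwrite($stream, "log line\n");
if ($bytes === false) {
    // Broken pipe or other write error: recreate the stream or
    // surface the error instead of silently continuing.
}
```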
I want to capture the error to convert it to an exception anyway so this wouldn't really simplify much. |
Monolog version: 3
Hello,
I'm currently working with Laravel 10.21 and have encountered an issue when attempting to write logs to stdout using Monolog version 3. My configuration is as follows:
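A typical stdout channel configuration of this kind in config/logging.php might look like the following (a hypothetical reconstruction; the original poster's exact config was not captured in this thread):

```php
<?php
// config/logging.php (hypothetical example of such a setup; not the
// original poster's actual configuration)
use Monolog\Handler\StreamHandler;

return [
    'default' => 'stdout',
    'channels' => [
        'stdout' => [
            'driver' => 'monolog',
            'handler' => StreamHandler::class,
            'with' => [
                'stream' => 'php://stdout',
            ],
        ],
    ],
];
```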
However, this setup results in an exception: Cannot log request: fwrite(): Write of 378 bytes failed with errno=32 Broken pipe
I noticed a similar issue previously reported as #1634, but it seems the problem persists. Any guidance or suggestions to resolve this would be greatly appreciated.
Thank you for your support.
Best regards,