[HttpClient] CurlHttpClient not closing file descriptors #60513
Did you try to add … ? Having a file descriptor per host is normal IMO, as connections should be kept alive if there are other requests, so you don't have to reconnect each time you make a request to the same host.
Yes, I tried that, but I did not notice any difference.
In general I agree with that, but after some time idle connections should be closed, or at least there should be a (configurable) maximum number of idle connections. This is especially a problem in long-running processes. (Or maybe there is a configuration for that, but I haven't seen anything about this yet?)
Maybe it's because the curl handle is shared and never closed. Did you try to set … ?
Thank you, that's a good hint and actually solves the issue:

```php
$response = $client->request('GET', $url, [
    'extra' => [
        'curl' => [
            CURLOPT_FORBID_REUSE => true,
        ],
    ],
]);
```

I also did some more digging and noticed that the behavior changed with v6.4.12, and I guess that's because of PR #58278. Due to the changes in that PR, … This is the code in 6.4.12 (`src/Symfony/Component/HttpClient/Internal/CurlClientState.php`, lines 52 to 57 at 5532c34):
Before #58278 (`src/Symfony/Component/HttpClient/Internal/CurlClientState.php`, lines 52 to 57 at 658b70d):
Removing this line after 6.4.12 solves the issue for me, and there is no longer an unlimited number of open connections:
While it's okay to increase the max number of connections, I do not think that quasi-unlimited is a good default, since the error when hitting the file descriptor limit is really unhelpful.
If … However, I do believe these should be 2 separate options for the state, instead of the first option having an effect on the second one (I don't really understand why; also: you may want 1 connection per host max, but 100 max across all different hosts?). @nicolas-grekas, do you know why having the …
One setting to rule them all, I guess.
That's not the case: previously there was a limit, because if it's not set, curl uses a default value of 4 times the number of handlers. See https://curl.se/libcurl/c/CURLMOPT_MAXCONNECTS.html:
I am not sure what the number of handlers (or a handler) actually is, but I think it's a lower number.
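For reference, the libcurl default mentioned above can be overridden directly on a raw curl multi handle. A minimal ext-curl sketch, outside Symfony's abstraction (Symfony's `CurlHttpClient` configures its own multi handle internally, so this is illustrative only):

```php
<?php
// CURLMOPT_MAXCONNECTS caps the connection cache of a curl multi handle.
// When unset, libcurl's documented default is 4 times the number of added
// easy handles. Raw ext-curl, for illustration only.
$mh = curl_multi_init();
curl_multi_setopt($mh, CURLMOPT_MAXCONNECTS, 10); // keep at most ~10 cached connections
```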
Symfony version(s) affected
at least 6.4.x, 7.2.x
Description
When sending requests with the `CurlHttpClient` against different host names, the file descriptors are not garbage collected, and eventually further requests may be blocked by hitting the open file descriptor limit (`ulimit -n`).

Consider this example:
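The original script did not survive the page scrape; below is a minimal sketch of what such a reproduction could look like. The host names, the URL list, and the `/proc`-based descriptor counting are illustrative (and Linux-specific), not the reporter's original code.

```php
<?php
// Hypothetical reproduction sketch; requires symfony/http-client.
// Descriptor counting via /proc is Linux-only.
require 'vendor/autoload.php';

use Symfony\Component\HttpClient\CurlHttpClient;

// Counts the open file descriptors of the current process.
function countOpenFds(): int
{
    return count(scandir('/proc/' . getmypid() . '/fd')) - 2; // minus "." and ".."
}

$client = new CurlHttpClient();

$urls = [
    'https://example.com',
    'https://example.org',
    'https://example.net',
    // ...more distinct hosts keep increasing the count
];

foreach ($urls as $url) {
    $response = $client->request('GET', $url);
    $response->getContent();
    unset($response); // the response is collectable, but the connection stays open

    echo sprintf("open fds after %s: %d\n", $url, countOpenFds());
}
```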
Executing this script gives me the following output:
As you can see, the number of open file descriptors keeps increasing, even though the response objects can be garbage collected.
Extending the requests array with further URLs on the same hosts will not increase the file descriptors, but adding further hosts will do so.
Once the file descriptor limit is reached, it is not possible to send further requests, and I get the following error:

```
PHP Fatal error: Uncaught Symfony\Component\HttpClient\Exception\TransportException: Could not resolve host: {hostname}
```
It took me a while to trace the DNS resolution error back to the file descriptors; this is a very subtle bug.
A workaround that solved the issue for me was creating a new HTTP client every x requests to stay below my file descriptor limit, but that's not really a good solution and requires awareness of this issue.
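A sketch of that workaround, assuming a hypothetical `$urls` list; the rotation threshold of 10 is illustrative:

```php
<?php
// Workaround sketch: recreate the client every N requests so the old
// client (and its cached connections) can be garbage collected.
// $urls and the threshold are illustrative.
require 'vendor/autoload.php';

use Symfony\Component\HttpClient\CurlHttpClient;

$maxRequestsPerClient = 10; // tune to stay below `ulimit -n`
$client = new CurlHttpClient();
$count = 0;

foreach ($urls as $url) {
    if (++$count > $maxRequestsPerClient) {
        $client = new CurlHttpClient(); // dropping the old client closes its descriptors
        $count = 1;
    }
    $client->request('GET', $url)->getContent();
}
```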
How to reproduce
You can change the file descriptor limit for the current shell with e.g. `ulimit -n 15`. Afterwards, when you run the script above, you should see the described error. Instead of reducing the limit, you could also increase the number of hosts, but this might require a larger list depending on the current limit.
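Instead of `ulimit`, the limit can also be lowered from within the script itself. A sketch assuming the `posix` extension is loaded; the limit value of 15 is illustrative:

```php
<?php
// Lower the soft RLIMIT_NOFILE for this process only (assumes ext-posix).
$limits = posix_getrlimit();
echo "current soft limit: {$limits['soft openfiles']}\n";

posix_setrlimit(POSIX_RLIMIT_NOFILE, 15, $limits['hard openfiles']);
```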
Possible Solution
No response
Additional Context
No response