504 Gateway Timeout randomly occurring #2173

Open
nybhtw opened this issue Jul 26, 2022 · 14 comments

Comments


nybhtw commented Jul 26, 2022

Checklist:
- Have you pulled and found the error with the jc21/nginx-proxy-manager:latest docker image? Yes
- Are you sure you're not using someone else's docker image? Yes
- Have you searched for similar issues (both open and closed)? Yes

Describe the bug:
I am able to use NPM successfully, and it works most of the time. Occasionally, seemingly at random (a few times a day), a page will try to load and eventually show this message:

504 Gateway Time-out
openresty

If I restart the app and the db containers, it works again. Even while the 504 Gateway Time-out page is showing, I can still reach the NPM login page, log in successfully, and interact with the dashboard and proxy host as if everything were fine.

I only have one proxy host, forwarding to Nextcloud Hub. Nextcloud runs in a separate VM, so the proxy targets it by IP address.
Scheme = http
Forward Hostname/IP = 192.168.100.33
Forward Port = 80

Nginx Proxy Manager Version - 2.9.18

To Reproduce - Steps to reproduce the behavior:
I wish I knew. After I restart the stack from the Portainer dashboard, it works. Then, depending on some unknown factor, it times out again; sometimes it takes only a minute, sometimes a few hours. The logs are always the same, and I can't see any errors or issues in them.

Expected behavior:
It does not time out.

Screenshots:
I am not sure what to provide.

Operating System:
Ubuntu Server 20.04.4
Docker 20.10.17
Docker Container
Portainer 2.13.1

Additional Context:

Docker Compose yml file:

```yaml
version: '2'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    environment:
      DB_MYSQL_HOST: "db"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "not-real"
      DB_MYSQL_PASSWORD: "also-not-real"
      DB_MYSQL_NAME: "npm"
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    healthcheck:
      test: ["CMD", "/bin/check-health"]
      interval: 10s
      timeout: 3s
  db:
    image: 'jc21/mariadb-aria:latest'
    environment:
      MYSQL_ROOT_PASSWORD: 'not-real'
      MYSQL_DATABASE: 'npm'
      MYSQL_USER: 'this-is-not-real'
      MYSQL_PASSWORD: 'again-not-real'
    volumes:
      - ./data/mysql:/var/lib/mysql
```
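One thing worth noting about the stack above: the db log later shows "Aborted connection" warnings followed by a MariaDB shutdown, and the app has no startup ordering relative to the database. As a sketch only (not a confirmed fix for this issue), a `depends_on` condition plus a db healthcheck would at least rule out the app racing a restarting database. The MariaDB probe command below is an assumption about what is available in the jc21/mariadb-aria image, and `condition: service_healthy` needs a compose schema newer than plain '2':

```yaml
services:
  app:
    # ...rest of the app service as above...
    depends_on:
      db:
        condition: service_healthy  # requires compose schema 2.1+ or the Compose v2 CLI
  db:
    # ...rest of the db service as above...
    healthcheck:
      # mysqladmin ping is a common MariaDB liveness probe; whether it is
      # present in this particular image is an assumption
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5
```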

DB Logs from today prior to it timing out:

2022-07-26 22:57:48 3 [Warning] Aborted connection 3 to db: 'npm' user: 'npmuser' host: '172.18.0.3' (Got an error reading communication packets)
2022-07-26 22:57:54 0 [Note] /usr/bin/mysqld (initiated by: unknown): Normal shutdown
2022-07-26 22:57:54 0 [Note] Event Scheduler: Purging the queue. 0 events
2022-07-26 22:57:55 0 [Note] /usr/bin/mysqld: Shutdown complete

[i] pre-init.d - processing /scripts/pre-init.d/01_secret-init.sh
[i] mysqld already present, skipping creation
[i] MySQL directory already present, skipping creation
2022-07-26 22:58:06 0 [Note] /usr/bin/mysqld (mysqld 10.4.15-MariaDB) starting as process 1 ...
2022-07-26 22:58:06 0 [Note] Plugin 'InnoDB' is disabled.
2022-07-26 22:58:06 0 [Note] Plugin 'FEEDBACK' is disabled.
2022-07-26 22:58:06 0 [Note] Server socket created on IP: '::'.
2022-07-26 22:58:06 0 [Warning] 'user' entry '@526f1f7a5d7d' ignored in --skip-name-resolve mode.
2022-07-26 22:58:06 0 [Warning] 'proxies_priv' entry '@% root@526f1f7a5d7d' ignored in --skip-name-resolve mode.
2022-07-26 22:58:06 0 [Note] Reading of all Master_info entries succeeded
2022-07-26 22:58:06 0 [Note] Added new Master_info '' to hash table
2022-07-26 22:58:06 0 [Note] /usr/bin/mysqld: ready for connections.
Version: '10.4.15-MariaDB' socket: '/run/mysqld/mysqld.sock' port: 3306 MariaDB Server

App logs from today prior to timing out:

[7/26/2022] [9:44:15 AM] [IP Ranges] › ℹ info Fetching https://ip-ranges.amazonaws.com/ip-ranges.json
[7/26/2022] [9:44:16 AM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v4
[7/26/2022] [9:44:16 AM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v6
[7/26/2022] [9:44:17 AM] [Nginx ] › ℹ info Reloading Nginx
[7/26/2022] [9:44:17 AM] [Nginx ] › ℹ info Reloading Nginx
[7/26/2022] [9:44:17 AM] [SSL ] › ℹ info Renew Complete
[7/26/2022] [10:44:15 AM] [SSL ] › ℹ info Renewing SSL certs close to expiry...
[7/26/2022] [10:44:17 AM] [Nginx ] › ℹ info Reloading Nginx
[7/26/2022] [10:44:17 AM] [SSL ] › ℹ info Renew Complete
[7/26/2022] [11:44:15 AM] [SSL ] › ℹ info Renewing SSL certs close to expiry...
[7/26/2022] [11:44:17 AM] [Nginx ] › ℹ info Reloading Nginx
[7/26/2022] [11:44:17 AM] [SSL ] › ℹ info Renew Complete
[7/26/2022] [12:44:15 PM] [SSL ] › ℹ info Renewing SSL certs close to expiry...
[7/26/2022] [12:44:17 PM] [Nginx ] › ℹ info Reloading Nginx
[7/26/2022] [12:44:17 PM] [SSL ] › ℹ info Renew Complete
[7/26/2022] [1:44:15 PM] [SSL ] › ℹ info Renewing SSL certs close to expiry...
[7/26/2022] [1:44:17 PM] [Nginx ] › ℹ info Reloading Nginx
[7/26/2022] [1:44:17 PM] [SSL ] › ℹ info Renew Complete
[7/26/2022] [2:44:15 PM] [SSL ] › ℹ info Renewing SSL certs close to expiry...
[7/26/2022] [2:44:17 PM] [Nginx ] › ℹ info Reloading Nginx
[7/26/2022] [2:44:17 PM] [SSL ] › ℹ info Renew Complete
[7/26/2022] [3:44:15 PM] [SSL ] › ℹ info Renewing SSL certs close to expiry...
[7/26/2022] [3:44:15 PM] [IP Ranges] › ℹ info Fetching IP Ranges from online services...
[7/26/2022] [3:44:15 PM] [IP Ranges] › ℹ info Fetching https://ip-ranges.amazonaws.com/ip-ranges.json
[7/26/2022] [3:44:16 PM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v4
[7/26/2022] [3:44:16 PM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v6
[7/26/2022] [3:44:17 PM] [Nginx ] › ℹ info Reloading Nginx
[7/26/2022] [3:44:17 PM] [Nginx ] › ℹ info Reloading Nginx
[7/26/2022] [3:44:17 PM] [SSL ] › ℹ info Renew Complete
[7/26/2022] [4:44:15 PM] [SSL ] › ℹ info Renewing SSL certs close to expiry...
[7/26/2022] [4:44:17 PM] [Nginx ] › ℹ info Reloading Nginx
[7/26/2022] [4:44:17 PM] [SSL ] › ℹ info Renew Complete
[7/26/2022] [5:44:15 PM] [SSL ] › ℹ info Renewing SSL certs close to expiry...
[7/26/2022] [5:44:17 PM] [Nginx ] › ℹ info Reloading Nginx
[7/26/2022] [5:44:17 PM] [SSL ] › ℹ info Renew Complete
[7/26/2022] [6:44:15 PM] [SSL ] › ℹ info Renewing SSL certs close to expiry...
[7/26/2022] [6:44:17 PM] [Nginx ] › ℹ info Reloading Nginx
[7/26/2022] [6:44:17 PM] [SSL ] › ℹ info Renew Complete
[7/26/2022] [7:44:15 PM] [SSL ] › ℹ info Renewing SSL certs close to expiry...
[7/26/2022] [7:44:17 PM] [Nginx ] › ℹ info Reloading Nginx
[7/26/2022] [7:44:17 PM] [SSL ] › ℹ info Renew Complete
[7/26/2022] [8:44:15 PM] [SSL ] › ℹ info Renewing SSL certs close to expiry...
[7/26/2022] [8:44:17 PM] [Nginx ] › ℹ info Reloading Nginx
[7/26/2022] [8:44:17 PM] [SSL ] › ℹ info Renew Complete
[7/26/2022] [9:44:15 PM] [SSL ] › ℹ info Renewing SSL certs close to expiry...
[7/26/2022] [9:44:15 PM] [IP Ranges] › ℹ info Fetching IP Ranges from online services...
[7/26/2022] [9:44:15 PM] [IP Ranges] › ℹ info Fetching https://ip-ranges.amazonaws.com/ip-ranges.json
[7/26/2022] [9:44:16 PM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v4
[7/26/2022] [9:44:16 PM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v6
[7/26/2022] [9:44:16 PM] [Nginx ] › ℹ info Reloading Nginx
[7/26/2022] [9:44:17 PM] [Nginx ] › ℹ info Reloading Nginx
[7/26/2022] [9:44:17 PM] [SSL ] › ℹ info Renew Complete
[7/26/2022] [10:44:15 PM] [SSL ] › ℹ info Renewing SSL certs close to expiry...
[7/26/2022] [10:44:17 PM] [Nginx ] › ℹ info Reloading Nginx
[7/26/2022] [10:44:17 PM] [SSL ] › ℹ info Renew Complete
[cont-finish.d] executing container finish scripts...
s6-svscanctl: fatal: unable to control /var/run/s6/services: supervisor not listening
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 01_perms.sh: executing...
Changing ownership of /data/logs to 0:0
[cont-init.d] 01_perms.sh: exited 0.
[cont-init.d] 01_s6-secret-init.sh: executing...
[cont-init.d] 01_s6-secret-init.sh: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
❯ Enabling IPV6 in hosts: /etc/nginx/conf.d
❯ /etc/nginx/conf.d/default.conf
❯ /etc/nginx/conf.d/include/force-ssl.conf
❯ /etc/nginx/conf.d/include/ssl-ciphers.conf
❯ /etc/nginx/conf.d/include/block-exploits.conf
❯ /etc/nginx/conf.d/include/assets.conf
❯ /etc/nginx/conf.d/include/letsencrypt-acme-challenge.conf
❯ /etc/nginx/conf.d/include/proxy.conf
❯ /etc/nginx/conf.d/include/ip_ranges.conf
❯ /etc/nginx/conf.d/include/resolvers.conf
❯ /etc/nginx/conf.d/production.conf
❯ Enabling IPV6 in hosts: /data/nginx
❯ /data/nginx/proxy_host/1.conf
[7/26/2022] [10:58:14 PM] [Migrate ] › ℹ info Current database version: 20211108145214
[7/26/2022] [10:58:14 PM] [Setup ] › ℹ info Logrotate Timer initialized
[7/26/2022] [10:58:14 PM] [Setup ] › ℹ info Logrotate completed.
[7/26/2022] [10:58:14 PM] [IP Ranges] › ℹ info Fetching IP Ranges from online services...
[7/26/2022] [10:58:14 PM] [IP Ranges] › ℹ info Fetching https://ip-ranges.amazonaws.com/ip-ranges.json
[7/26/2022] [10:58:15 PM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v4
[7/26/2022] [10:58:15 PM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v6
[7/26/2022] [10:58:15 PM] [SSL ] › ℹ info Let's Encrypt Renewal Timer initialized
[7/26/2022] [10:58:15 PM] [SSL ] › ℹ info Renewing SSL certs close to expiry...
[7/26/2022] [10:58:15 PM] [IP Ranges] › ℹ info IP Ranges Renewal Timer initialized
[7/26/2022] [10:58:15 PM] [Global ] › ℹ info Backend PID 240 listening on port 3000 ...
[7/26/2022] [10:58:18 PM] [Nginx ] › ℹ info Reloading Nginx
[7/26/2022] [10:58:19 PM] [SSL ] › ℹ info Renew Complete


tandav commented Aug 14, 2022

Same issue. Restarting NPM and my apps does not help; I just have to wait until it recovers. The apps and the NPM web UI are accessible by IP, so the problem is not on the application side.
nslookup also resolves correctly, both from the machine where NPM is running and from my computer where I try to open my website.

I also looked at curl's verbose output:

curl -vvv https://mydomain.com

It gets stuck at this point:

> user-agent: curl/7.81.0
> accept: */*
> 
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
* TLSv1.2 (OUT), TLS header, Supplemental data (23):

After some time it fails with a 504:

< HTTP/2 504 
< server: openresty
< date: Sun, 14 Aug 2022 14:54:34 GMT
< content-type: text/html
< content-length: 164
< 
<html>
<head><title>504 Gateway Time-out</title></head>
<body>
<center><h1>504 Gateway Time-out</h1></center>
<hr><center>openresty</center>
</body>
</html>
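The trace above shows the TLS handshake completing and the connection then hanging until the 504 arrives, which points at the proxy-to-upstream leg rather than at TLS or DNS. A rough sketch of that distinction in plain Python (plain HTTP only; the local demo server at the bottom is illustrative, and you would point host/port at the proxied site instead):

```python
import http.server
import socket
import threading
import time


def probe(host, port, path="/", timeout=10.0):
    """Time the TCP connect and the wait for the first response byte.

    A fast connect followed by a long (or timed-out) wait for the first
    byte matches nginx sitting on its upstream, which is what ends in a
    504; a slow connect points at the network or the port instead.
    """
    t0 = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        connect_s = time.monotonic() - t0
        sock.sendall(f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
        sock.settimeout(timeout)
        t1 = time.monotonic()
        sock.recv(1)  # blocks until the first response byte arrives
        first_byte_s = time.monotonic() - t1
    return connect_s, first_byte_s


# Demo against a throwaway local server; replace host/port with the
# proxied site (port 80) to use this against a real 504.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
connect_s, first_byte_s = probe("127.0.0.1", server.server_port)
print(f"connect={connect_s:.3f}s first_byte={first_byte_s:.3f}s")
server.shutdown()
```

`curl -w '%{time_connect} %{time_starttransfer}\n'` gives the same two numbers without any code, if curl is available where you are testing from.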


samknp commented Aug 23, 2022

Me too.

Sometimes it occurs 1-3 times a day, but sometimes the issue doesn't appear for 1-2 weeks. Most of the time it resolves itself within 1-20 minutes, and if I restart the NPM container it works fine again. Nothing shows up in the container log, so I can't get any information about this freeze.

Docker Container Status: Running
I can also go to the NPM login page, log in successfully, and interact with the dashboard and proxy hosts as if everything were working fine.

Environment

npm 2.9.18,
npm 2.9.14 (downgraded due to this issue, but 2.9.14 behaves the same)
Ubuntu 21.04
Portainer 2.9.3

Log

[cont-finish.d] executing container finish scripts...

s6-svscanctl: fatal: unable to control /var/run/s6/services: supervisor not listening

[cont-finish.d] done.

[s6-finish] waiting for services.

[s6-finish] sending all processes the TERM signal.

[s6-finish] sending all processes the KILL signal and exiting.

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.

[s6-init] ensuring user provided files have correct perms...exited 0.

[fix-attrs.d] applying ownership & permissions fixes...

[fix-attrs.d] done.

[cont-init.d] executing container initialization scripts...

[cont-init.d] 01_perms.sh: executing...

Changing ownership of /data/logs to 0:0

[cont-init.d] 01_perms.sh: exited 0.

[cont-init.d] 01_s6-secret-init.sh: executing...

[cont-init.d] 01_s6-secret-init.sh: exited 0.

[cont-init.d] done.

[services.d] starting services

[services.d] done.

❯ Enabling IPV6 in hosts: /etc/nginx/conf.d

❯ /etc/nginx/conf.d/default.conf

❯ /etc/nginx/conf.d/include/ssl-ciphers.conf

❯ /etc/nginx/conf.d/include/letsencrypt-acme-challenge.conf

❯ /etc/nginx/conf.d/include/ip_ranges.conf

❯ /etc/nginx/conf.d/include/proxy.conf

❯ /etc/nginx/conf.d/include/force-ssl.conf

❯ /etc/nginx/conf.d/include/assets.conf

❯ /etc/nginx/conf.d/include/block-exploits.conf

❯ /etc/nginx/conf.d/include/resolvers.conf

❯ /etc/nginx/conf.d/production.conf

❯ Enabling IPV6 in hosts: /data/nginx

❯ /data/nginx/proxy_host/2.conf

❯ /data/nginx/proxy_host/4.conf

❯ /data/nginx/proxy_host/8.conf

❯ /data/nginx/proxy_host/7.conf

❯ /data/nginx/proxy_host/10.conf

❯ /data/nginx/proxy_host/6.conf

❯ /data/nginx/proxy_host/12.conf

❯ /data/nginx/proxy_host/11.conf

❯ /data/nginx/proxy_host/3.conf

❯ /data/nginx/proxy_host/1.conf

❯ /data/nginx/proxy_host/5.conf

❯ /data/nginx/proxy_host/9.conf

[8/23/2022] [7:54:25 AM] [Global ] › ℹ info No valid environment variables for database provided, using default SQLite file '/data/database.sqlite'

[8/23/2022] [7:54:25 AM] [Migrate ] › ℹ info Current database version: none

[8/23/2022] [7:54:26 AM] [Setup ] › ℹ info Added Certbot plugins certbot-dns-cloudflare==$(certbot --version | grep -Eo '0-9+') cloudflare

[8/23/2022] [7:54:26 AM] [Setup ] › ℹ info Logrotate Timer initialized

[8/23/2022] [7:54:26 AM] [Setup ] › ℹ info Logrotate completed.

[8/23/2022] [7:54:26 AM] [IP Ranges] › ℹ info Fetching IP Ranges from online services...

[8/23/2022] [7:54:26 AM] [IP Ranges] › ℹ info Fetching https://ip-ranges.amazonaws.com/ip-ranges.json

[8/23/2022] [7:54:27 AM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v4

[8/23/2022] [7:54:27 AM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v6

[8/23/2022] [7:54:27 AM] [SSL ] › ℹ info Let's Encrypt Renewal Timer initialized

[8/23/2022] [7:54:27 AM] [SSL ] › ℹ info Renewing SSL certs close to expiry...

[8/23/2022] [7:54:27 AM] [IP Ranges] › ℹ info IP Ranges Renewal Timer initialized

[8/23/2022] [7:54:27 AM] [Global ] › ℹ info Backend PID 248 listening on port 3000 ...

[8/23/2022] [7:54:28 AM] [Nginx ] › ℹ info Reloading Nginx

[8/23/2022] [7:54:29 AM] [SSL ] › ℹ info Renew Complete

QueryBuilder#allowEager method is deprecated. You should use allowGraph instead. allowEager method will be removed in 3.0

QueryBuilder#eager method is deprecated. You should use the withGraphFetched method instead. eager method will be removed in 3.0

QueryBuilder#omit is deprecated. This method will be removed in version 3.0

Model#$omit is deprected and will be removed in 3.0.


samknp commented Sep 5, 2022

I solved it by deleting my wss (WebSocket) proxy host and replacing it with an https host.

To reproduce this issue:
check Websockets Support in the proxy host settings,
and add the socket address in the Advanced tab.
I already deleted it, though, so I can't remember exactly what I wrote.
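For anyone retracing this setup: NPM's Websockets Support toggle normally injects the upgrade headers itself, but a hand-written equivalent in the Advanced tab would typically look like the following (a common pattern, not necessarily what was actually configured here):

```nginx
# Typical nginx WebSocket proxy directives; values are illustrative
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
```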


samknp commented Sep 5, 2022

It reproduced today even though I deleted the WebSocket proxy host.


HideCM commented Dec 22, 2023

Me too. I added five proxy hosts, but only one gets 504 errors, at random times.


github-actions bot commented Aug 6, 2024

Issue is now considered stale. If you want to keep it open, please comment 👍

@github-actions github-actions bot added the stale label Aug 6, 2024
@littleblack111

Same, it keeps happening.

@richard-scott

I keep getting this on large file transfers.

@github-actions github-actions bot removed the stale label Sep 28, 2024
@Dave-Wagner

This is happening to me too. It was working fine for four days, and now I can't get anything from any host other than a 504 gateway error.

@staticdev

Same here. Is there a way to configure proxy_read_timeout and proxy_connect_timeout in NPM?

@richard-scott

I put my tweaks in this section:

[Screenshot of the proxy host settings omitted: Screenshot_20250107_182349]

@staticdev

@nybhtw I solved this issue on my side just adding to Custom Nginx Configuration on Advanced tab:

```nginx
proxy_read_timeout 300s;
proxy_send_timeout 300s;
```
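For completeness, the three upstream timeout directives this touches are shown below; the 300s values are illustrative, and per the nginx docs each defaults to 60s. `proxy_connect_timeout` is the one staticdev asked about that caps how long nginx waits just to open the upstream connection:

```nginx
# Goes in the proxy host's Advanced tab (Custom Nginx Configuration)
proxy_connect_timeout 60s;   # time allowed to establish the upstream connection
proxy_send_timeout    300s;  # max gap between successive writes to the upstream
proxy_read_timeout    300s;  # max gap between successive reads from the upstream
```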

@jpdore15

I am having the same issue with a Vaultwarden instance that had been running flawlessly for about two years. I'm not aware of any change I made on either NPM or Vaultwarden.

@jpdore15

I got it figured out. I think QNAP's default lightweight Kubernetes (k3s), even though it was disabled, was using the same HTTPS port I had assigned to Nginx. Once I re-created the container and changed the port to something else, it started working.

9 participants