I'm using ShinyProxy 3.1.1 to launch a verse-4.4.1-based container. Users authenticate via LDAPS. After successful authentication, a user-dependent CIFS share is mounted inside the running container.
Mounting is triggered via a script in /etc/cont-init.d/, unmounting via a script in /etc/cont-finish.d/.
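For context, the setup looks roughly like the following (a minimal sketch, not my exact scripts; the mount point, server, and credential paths are placeholders, and SHARE_USER is assumed to be injected into the container environment by ShinyProxy). The rocker images use s6-overlay, so the scripts use its with-contenv shebang:

```shell
#!/usr/bin/with-contenv bash
# /etc/cont-init.d/50-mount-share  (hypothetical name)
# Mount the per-user CIFS share at container start.
mkdir -p /mnt/userdata
mount -t cifs "//fileserver/${SHARE_USER}" /mnt/userdata \
  -o "credentials=/run/secrets/cifs.cred,uid=1000,gid=1000"
```

```shell
#!/usr/bin/with-contenv bash
# /etc/cont-finish.d/50-umount-share  (hypothetical name)
# Unmount the share when the container shuts down.
umount /mnt/userdata
```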
When the container shuts down (triggered by pressing the logout button), the share is unmounted.
However, the now-unresponsive container hangs around for approximately 2-3 minutes before it is finally deleted.
The ShinyProxy logs show:
eu.openanalytics.containerproxy.ContainerProxyException: Failed to stop container
at eu.openanalytics.containerproxy.backend.AbstractContainerBackend.stopProxy(AbstractContainerBackend.java:145) ~[containerproxy-1.1.1.jar!/:1.1.1]
at eu.openanalytics.containerproxy.backend.dispatcher.DefaultProxyDispatcher.stopProxy(DefaultProxyDispatcher.java:54) ~[containerproxy-1.1.1.jar!/:1.1.1]
at eu.openanalytics.containerproxy.service.ProxyService.lambda$stopProxy$7(ProxyService.java:342) ~[containerproxy-1.1.1.jar!/:1.1.1]
at eu.openanalytics.containerproxy.service.ProxyService.lambda$action$12(ProxyService.java:638) ~[containerproxy-1.1.1.jar!/:1.1.1]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[na:na]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[na:na]
at java.base/java.lang.Thread.run(Thread.java:840) ~[na:na]
Caused by: org.mandas.docker.client.exceptions.DockerRequestException: Request error: DELETE unix://localhost:80/containers/3a7257cd1b9a4590283293aa90e0e5adf577ae792ed973fc3a75714e8cea7044?force=1: 500, body: {"message":"cannot remove container \"/sp-container-2ff96864-48e2-4f86-ae89-db04ee5bbe0a-0\": could not kill: tried to kill container, but did not receive an exit event"}
at org.mandas.docker.client.DefaultDockerClient.request(DefaultDockerClient.java:2464) ~[docker-client-7.0.8-OA-3.jar!/:na]
at org.mandas.docker.client.DefaultDockerClient.removeContainer(DefaultDockerClient.java:681) ~[docker-client-7.0.8-OA-3.jar!/:na]
at eu.openanalytics.containerproxy.backend.docker.DockerEngineBackend.doStopProxy(DockerEngineBackend.java:244) ~[containerproxy-1.1.1.jar!/:1.1.1]
at eu.openanalytics.containerproxy.backend.AbstractContainerBackend.stopProxy(AbstractContainerBackend.java:143) ~[containerproxy-1.1.1.jar!/:1.1.1]
... 8 common frames omitted
My initial idea was that shutdown is slow because the umount hangs (loss of the network connection before unmounting). However, manually triggering the umount via docker exec while the network connection is still up completes quickly, yet the container still hangs.
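For reference, the manual test can be reproduced along these lines (the container name is taken from the log message above; the mount point is a placeholder). The second part times the forced removal and watches whether Docker ever emits the exit event that the error message says is missing:

```shell
# Unmount manually inside the still-running container (mount point is a placeholder):
docker exec sp-container-2ff96864-48e2-4f86-ae89-db04ee5bbe0a-0 umount /mnt/userdata

# Watch the container's lifecycle events while timing the forced removal,
# to see whether a 'die'/'destroy' event ever arrives:
docker events --filter 'container=sp-container-2ff96864-48e2-4f86-ae89-db04ee5bbe0a-0' &
time docker rm -f sp-container-2ff96864-48e2-4f86-ae89-db04ee5bbe0a-0
```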
Do you have any ideas on how to investigate this problem?
Thanks a lot!