[feature] Allow for named volumes to specify host mount point #19990
If I have not misunderstood, the command |
@CWSpear You could have a docker volume plugin that does just that 😉 (it allows you to create a named volume that does bind mounting). |
As @qq690388648 mentioned, if you want to bind a path, use the full host path in the first part of the |
@qq690388648 you have misunderstood, I tried to explain why that isn't good enough. @cpuguy83 I know about that and I tried to explain, I'm trying to avoid doing that. Perhaps I wasn't clear enough, or perhaps I was too verbose and people didn't read carefully enough. @vdemeester I was thinking of creating a whole new plugin, but it seemed as though it'd be easier (and even more appropriate) to just add an opt to the local plugin. @cpuguy83 To try and be more clear, I want to avoid linking my non-data containers to the host directly. I want to change something like this:
(which returns a dead container in with:
which:
@cpuguy83 am I making more sense? |
@CWSpear We won't support this in the built-in driver. You are more than welcome to build a plugin to handle this. |
@cpuguy83 Why not? If I were to create a plugin, the code would be 99% the same. I don't know |
Well, I still feel like it could definitely find a place in the core, but I did create a plugin as @vdemeester suggested: https://github.com/CWSpear/local-persist Since I literally learned Go and wrote it just this weekend, I wouldn't mind some helpful eyes =) I'm calling this a |
@CWSpear thanks, I found this useful as well! We have 100s of application environments -- each has a media folder > 80 GB. I don't duplicate this media folder per environment -- it's slow and cumbersome. Instead I bind a UFS [overlay||aufs] export of the media, so each environment can now write to media without side-effects in other environments ++ we don't need to copy media ever. These UFS exports get set up on the docker host, and I'd like to reference them with named volumes for later use in docker-compose. The named volume pattern helps us keep things consistent and organized. E.g.
Great work !!! && call me crazy for trying to avoid data containers after migrating to docker 1.10 & compose 1.6 |
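For illustration, a hypothetical sketch of the pattern described in the comment above — one named volume per environment, backed by a pre-existing host directory. All names and paths are invented, and it assumes Docker 1.12+ so the local driver accepts bind-mount options:

```
# Write a compose file that backs the "media" named volume with a host directory.
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  app:
    image: nginx:alpine
    volumes:
      - media:/var/www/media
volumes:
  media:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /exports/env1/media
EOF

docker-compose up -d
```

Note that the local driver will not create /exports/env1/media for you; the host path has to exist already (as pointed out further down in the thread).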
I find it surprising that named volumes do not have feature parity with anonymous volumes; personally I would have thought they would. While I really appreciate @CWSpear creating local-persist, I would prefer an official plugin instead to ensure long-term viability and compatibility. With no offense intended, one-man projects sadly have a high risk of becoming unmaintained at some point. |
@zokier |
In my case specifically? I'm working with docker-compose (learning as I go, so I might be way off), and I thought of using named volumes as a neat abstraction to avoid putting unnecessary host-specific information in docker-compose.yml, or at least encapsulating it into one section. That in turn would make it easier to swap in different host-paths, or even completely different backends. Overall I just find thinking mounts as a special case of volumes would be more elegant than having them as distinct "top-level" concept. They are already quite thoroughly mixed in together now, being defined with same -v flag or in the same volumes sub-section in docker-compose etc. |
@zokier you know how to help protect open source "one-man projects" from "becoming unmaintained at some point?" You help contribute when you find an issue. Rather than just stay away from projects you (supposedly) believe in, help improve/offer support =) In this case tho, the scope of the plugin is very narrow and it's really quite basic and straightforward. I use Docker in dozens of projects, and use my plugin myself on many of them. I'm not going to lose interest in it any time soon, and the maintenance demand is quite minimal, so it's not a huge burden. That all being said, I definitely feel that this functionality should be in core. Which was my first proposal. The plugin was in response to the Docker team feeling it shouldn't be in core. It'd be many fewer lines if it were in core, that's for sure. |
+1 for this. I have a data container which, for added security, I mount into various containers as read only. The source code is PHP, and some of the applications are not very well written (while there are some very, very good PHP developers, there are also many with very spotty security knowledge). I therefore put their code in a data container, and then mount that code read-only. Ideally |
@shrikeh This is what named volumes do. Create a volume, give it a name, and use |
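For anyone following along, a minimal sketch of what is being described above; the volume name, paths, and image are arbitrary:

```
# Create a named volume once.
docker volume create php-code

# One container can write into it...
docker run --rm -v php-code:/data alpine sh -c 'echo hello > /data/index.php'

# ...and another container can mount the same volume read-only with the :ro flag.
docker run --rm -v php-code:/var/www/html:ro alpine ls /var/www/html
```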
@cpuguy83 the bit I haven't figured out is: where does the code come from then in the above? |
@shrikeh WDYM? Please feel free to hop on IRC to discuss. GH issues is not the best place. |
So I reread your solution and we're solving different problems. Essentially I don't want to mount a volume from the host. Here's the current composed stack:
Not shown above is a data container. The data container has the code, static assets, and the build tools (because you don't minify or fetch vendors in production), and doesn't even run. But it does give me some advantages:
All of the above relies on |
I would like this option as well. Another option would be to symlink/bind-mount manually, but I would much rather have a named volume with the correct options created once than do what feels like hacks (e.g. variables in my docker-compose volumes section). The use cases are many! It feels like it belongs in the local driver; the local driver already supports options like this. |
@xeor If the host path does not exist, it will not be created. |
It was added in 1.12, I think, maybe 1.11. |
Docker volume create --magic could use some more documentation for stuff like this ("generally just the same as …"). If I do a bind-mount manually:
[root@d3 /]# mkdir /source /dest
[root@d3 /]# mount -o bind /source /dest
[root@d3 /]# touch /source/a
[root@d3 /]# ls /dest/
a
[root@d3 /]# mount | grep /dest
/dev/mapper/vg1-root on /dest type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
[root@d3 /]# cat /etc/mtab | grep /dest
/dev/mapper/vg1-root /dest xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0
It is not easy to get the right options. |
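For later readers, a sketch of how the manual bind mount above can be expressed as a named volume via the local driver's mount options (Docker 1.12+, as noted a few comments up; the volume name is made up and /source is the directory from the session above):

```
# Roughly equivalent to "mount -o bind /source /dest", but managed as a named volume.
docker volume create \
  --driver local \
  --opt type=none \
  --opt o=bind \
  --opt device=/source \
  bound-volume

# The options are recorded on the volume, so they are easy to read back later.
docker volume inspect bound-volume

# Containers then mount it by name instead of by host path.
docker run --rm -v bound-volume:/dest alpine ls /dest
```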
@ian-axelrod If you use a named volume, as in the above example, to bind a host dir to the volume, the data in the image will be copied over. |
Hi @cpuguy83, I did use the solution you gave earlier in this issue, in fact. The one blocker for me is the fact that it does not automatically create folders, which I know you specifically say it will not. That is why I am trying to find an alternative setup that does create folders, yet still has the advantages of the solution you proposed. My team is using docker for local development, which means we have a set of requirements that need to be satisfied for development to go smoothly. First, we need dependencies visible on the host (inside the IDE workspace, actually) so that IDE integrations work properly. Second, we need a simple approach to correctly initialize the apps for new, junior developers, who may not be familiar with docker initially, to ease the onboarding process. I created a set of command-line tools that accomplishes this; however, I also want to give developers that are familiar with docker complete control over their environment. This means that I cannot build any extra logic into the aforementioned utilities that would force their use over the simple, standard commands. Hopefully this gives you more insight. Is there anything I can do, or am I stuck creating the dependency folders manually for new services? Cheers, -Ian |
Just wanted to pick this back up as I have a very specific question... I did what you said and created the mount like so:
It fails to perform the mount. The only way that I got it to work was by adding the bind option like this
If I'm doing it this way, am I discarding the benefits of using the newer volume mounts and pretty much just using old-school bind mounts? |
@matthew-hickok Just FYI. But I'm not sure what you are hoping to accomplish in general here. |
@cpuguy83 All I want is to provide persistent storage to my containers, and that storage is on a secondary disk. For example, I want Elasticsearch data to be stored on /storage/es_data, which is sitting on /dev/sdb. And since I am completely new to Docker (and Linux actually), I am not sure what the best way to accomplish that is. I've heard that bind mounts are bad because of things like permission issues. But it seems that if I want the persistent data to live outside of the default docker location used by volume mounts, I need to use a bind mount. I could be going about this completely backwards, I really have no idea. |
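As a concrete sketch for the Elasticsearch case above, reusing the same local-driver options shown earlier in the thread — /storage/es_data is the path from the comment, everything else (volume name, image tag, settings) is illustrative, and a real deployment needs more configuration:

```
# Named volume whose data actually lives on the secondary disk.
docker volume create \
  --driver local \
  --opt type=none \
  --opt o=bind \
  --opt device=/storage/es_data \
  es_data

# Mount it where the official image keeps its data.
docker run -d --name elasticsearch \
  -e discovery.type=single-node \
  -v es_data:/usr/share/elasticsearch/data \
  elasticsearch:7.17.0
```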
Unfortunately there is no feasible way to mount data from inside a container to an outside folder (moby/moby#19990). Also, media and cache are confirmed to work now.
I found that I needed to manually delete docker volumes before @cpuguy83's solution would work. Does anyone have any thoughts on the best way of handling 'n' shared volumes in this fashion? Or any general thoughts on whether a clustered filesystem is more appropriate, be it on the host machine or remote? |
Can anyone please tell me what the advantages of a "bind-mounted named volume" are over a normal "bind-mounted volume"? The only pointer I can think of is the initial copying of data from the image to that volume. |
Your understanding matches mine dbjpanda. |
Generally an image shouldn't hold a large amount of data, so I use a cp/mv command inside the entrypoint script to fill the host dir, which works very well for small amounts of data. |
That seems like a completely reasonable approach to solving the problem of sync-ing content between container and host. From my perspective, it made sense to have named volumes do that work for me, to reduce the amount of logic I need inside the entrypoint (at the cost of logic prior to the container being started). |
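A rough sketch of the entrypoint approach described above; the paths are made up and the real script would depend on the image:

```
#!/bin/sh
# Hypothetical entrypoint: seed an (initially empty) mounted host directory
# from content baked into the image, then hand off to the real command.
set -e

SEED_DIR=/opt/app-src      # content shipped inside the image
DATA_DIR=/var/www/html     # host directory bind-mounted into the container

# Only copy when the mounted directory is empty, so host-side edits survive restarts.
if [ -z "$(ls -A "$DATA_DIR" 2>/dev/null)" ]; then
  cp -a "$SEED_DIR"/. "$DATA_DIR"/
fi

exec "$@"
```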
The difference between a "bind-mounted host directory" and a (named) "volume" is semantics, and some steps that are not performed in one or the other (copying of data, creation of a storage location); Bind-mounted host directoriesShort-hand syntax:
Advanced/long-form syntax:
Both of the above achieve the same, but the With a bind-mounted host directory;
(some of the steps above may be in a slightly different order 😅) (named) volumesSyntax; Short-hand syntax:
Advanced/long-form syntax:
Both of the above achieve the same, but the With a (named) volume;
(some of the steps above may be in a slightly different order 😅) The "storage location" of the volume should be considered an implementation detail, and although that location is not a secret (it's in named volumes with a custom storage locationThis is the topic being discussed here. In step 1. of the previous section, you create a volume using the local volume driver, but pass "mount" options to customize the storage location of the volume (see #19990 (comment)) Also see docker/compose#2957 (comment), which goes a bit deeper into "what happens under the hood" All the other steps are the same as the previous example, so if the volume is empty, but the container has files at the target location, those files (and their attributes) are copied on first run. The same applies here regarding "interacting with those files"; doing so can interfere with the docker daemon. PermissionsNote that in none of the cases, docker handles permissions/ownership of the volume's content;
One exception to the above; when bind-mounting host directories on Docker Desktop (Docker for Mac / Docker for Windows), the daemon itself runs in a lightweight VM. Given that bind-mounts are always mounted from the host where the daemon runs (so in this case, a VM), some magic is used to sync files from the host to the VM; on Docker for Mac, this includes a feature that strips/ignores ownership of the files. This makes it possible to access/modify files on the host, even if the user inside the container doesn't match the user on the host. |
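For reference, the standard short-hand (-v) and long-form (--mount) syntax being compared above; paths, the volume name, and the image are placeholders:

```
# Bind-mounted host directory: short-hand and long-form.
docker run -d -v /path/on/host:/path/in/container nginx:alpine
docker run -d --mount type=bind,source=/path/on/host,target=/path/in/container nginx:alpine

# (Named) volume: short-hand and long-form.
docker run -d -v my-volume:/path/in/container nginx:alpine
docker run -d --mount type=volume,source=my-volume,target=/path/in/container nginx:alpine
```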
Oh wow that's interesting. Thanks for clearing this up.
Can I read more about this somewhere? (Edit: more so meant the docker daemon component) |
Don't think there's specific documentation about that, just that (as mentioned) keeping files in use by external processes could potentially cause issues if you want to remove the volume. Of course, similar things can be problematic at runtime if you share a volume between multiple containers (as multiple containers may attempt to write/lock files). |
Is this 100% sure? Because otherwise, postgres' issues with ownership should not happen. Am I misinterpreting something in what you explained? Reference: docker-library/postgres#253 |
docker itself does not change permissions of the files, but the container (e.g. an entrypoint script in the image, or other processes) may. |
https://forums.docker.com/t/how-to-mount-docker-volume-along-with-subfolders-on-the-host/120482/2 This allowed me to specify a mount point on the host for a named volume so I could view the files. |
Maybe this is what the OP was looking for: |
yup, that solution was mentioned above; #19990 (comment) |
@matthew-hickok how did you end up here? |
With docker-compose 1.6 coming out soon and the new v2 syntax, I've been learning more about docker volume, which came out in docker 1.9.

My thought was that docker volumes could be created to replace data-only containers and --volumes-from. Specifically, it allows for a cleaner docker ps, and it allows us to mount volumes on two different containers at different places. But it doesn't let us persist that data as well as data-only containers could.

Is there a particular reason it doesn't/you can't use a mount (or perhaps bind is a better word?) point outside of /var/lib/docker?

My proposal: would it be possible for us to add an option for the local driver to specify a bind/mount point on the host?
My use-case is I use Docker for smaller things that don't need massive scaling, and something like Flocker is mega overkill. However, I have had multiple times where I needed a volume on at least 2 containers and to have it persisted, and it got pretty messy with data-only containers, and it could be solved nicely with my proposal.
Specific use case: I have a letsencrypt docker container that creates certificates. I need the certificates to persist, but I also need my nginx container to have access to them. I've had similar setups with images, but I also wanted to have them mounted at different points within the specific containers, something not possible with data-only containers.

I could get some of this working by mounting each container to the same volume on the host, but then my containers are more host-dependent, and I'd rather move that to a dedicated volume whose job it is to persist things; then I only have one place to change it and my other containers don't need to care about that.

Anyway, hopefully we could add something to help here. Please let me know if I can add any clarification, etc. Thanks!
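A minimal sketch of the letsencrypt/nginx scenario described above, using a plain named volume shared by two containers at different mount points; the names, images, and paths are illustrative, and the proposal would additionally let this volume live at a chosen host path:

```
# One named volume holds the certificates.
docker volume create certs

# The letsencrypt/certbot container writes into it...
docker run --rm -v certs:/etc/letsencrypt certbot/certbot certificates

# ...and nginx mounts the same volume read-only at a different path.
docker run -d --name web -p 443:443 -v certs:/etc/nginx/certs:ro nginx:alpine
```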