
[feature] Allow for named volumes to specify host mount point #19990

Closed
CWSpear opened this issue Feb 4, 2016 · 79 comments
Labels
area/volumes kind/enhancement Enhancements are not bugs or new features but can improve usability or performance.

Comments

@CWSpear
Contributor

CWSpear commented Feb 4, 2016

With docker-compose 1.6 coming out soon and the new v2 syntax, I've been learning more about the docker volume command that came out in Docker 1.9.

My thought was that docker volumes could be created to replace data-only containers and --volumes-from. Specifically, they allow for cleaner docker ps output and let us mount the same volume in two different containers at different paths.

But it doesn't let us persist that data as well as data-only containers could.

Is there a particular reason it doesn't/you can't use a mount (or perhaps bind is a better word?) point outside of /var/lib/docker?

My proposal: would it be possible for us to add an option for the local driver to specify a bind/mount point on the host?

My use-case is I use Docker for smaller things that don't need massive scaling, and something like Flocker is mega overkill. However, I have had multiple times where I needed a volume on at least 2 containers and to have it persisted, and it got pretty messy with data-only containers, and it could be solved nicely with my proposal.

Specific use case: I have a letsencrypt docker container that creates certificates. I need the certificates to persist, but I also need my nginx container to have access to them. I've had similar setups with images, but I also wanted to have them mounted at different points within the specific containers, something not possible with data-only containers.

I could get some of this working by mounting each container to the same volume on the host, but then my containers are more host-dependent, and I'd rather move that to a dedicated volume whose job it is to persist things, and then I only have one place to change it and my other containers don't need to care about that.

Anyway, hopefully we could add something to help here, please let me know if I can add any clarification, etc. Thanks!

@qq690388648
Contributor

If I have not misunderstood, the command docker run -v /path/to/hostfile:/path/to/containerfile image_name will help you.

@vdemeester
Member

@CWSpear You could have a docker volume plugin that does just that 😉 (allows creating a named volume that does bind mounting).

@cpuguy83
Member

cpuguy83 commented Feb 4, 2016

As @qq690388648 mentioned, if you want to bind a path, use the full host path in the first part of the -v.

cpuguy83 closed this as completed Feb 4, 2016
@CWSpear
Contributor Author

CWSpear commented Feb 4, 2016

@qq690388648 you have misunderstood, I tried to explain why that isn't good enough.

@cpuguy83 I know about that and I tried to explain, I'm trying to avoid doing that. Perhaps I wasn't clear enough, or perhaps I was too verbose and people didn't read carefully enough.

@vdemeester I was thinking of creating a whole new plugin, but it seemed as though it'd be easier (and even more appropriate) to just add an opt to the local plugin.

@cpuguy83 To try and be more clear, I want to avoid linking my non-data containers to the host directly.

I want to change something like this:

docker create -v /persistent/images:/images --name images debian:jessie /bin/true
docker run -d --volumes-from images --name container1 some_image
docker run -d --volumes-from images --name container2 some_image2
docker run -d --volumes-from images --name container3 some_image3

(which leaves a dead container in docker ps -a and less clear output in docker volume ls)

with:

docker volume create --opt mount=/persistent/images --name images
docker run -d -v images:/path/to/a --name container1 some_image
docker run -d -v images:/path/to/b --name container2 some_image
docker run -d -v images:/path/to/c --name container3 some_image

which:

  1. allows me to map images to different places in each container without any of those containers relying on the host directly; they just rely on the volume, whose job it is to find a place to persist the files
  2. cleans up docker ps -a
  3. has more meaningful output for docker volume ls (which could also gain a column for the mount point?)
  4. when dealing with docker-compose, allows for a cleaner separation in the v2 syntax, where it's clear which entries are dedicated volumes (sketched below)
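
For illustration, a hypothetical compose v2 file using such a dedicated volume might look like the sketch below; the mount driver option shown is the one being proposed here, not something that exists today, and the service/image names are placeholders.

version: '2'

services:
  container1:
    image: some_image
    volumes:
      - images:/path/to/a
  container2:
    image: some_image2
    volumes:
      - images:/path/to/b

volumes:
  images:
    driver: local
    driver_opts:
      mount: /persistent/images  # hypothetical option from this proposal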

@cpuguy83 am I making more sense?

@cpuguy83
Member

cpuguy83 commented Feb 4, 2016

@CWSpear We won't support this in the built-in driver. You are more than welcome to build a plugin to handle this.

@CWSpear
Contributor Author

CWSpear commented Feb 4, 2016

@cpuguy83 Why not? If I were to create a plugin, the code would be 99% the same. I don't know Go very well (read: at all), but I've got a proof-of-concept almost working (it would need some error checking, etc.) that I think handles it, and it doesn't require very much code.

@CWSpear
Contributor Author

CWSpear commented Feb 8, 2016

Well, I still feel like it could definitely find a place in the core, but I did create a plugin as @vdemeester suggested: https://github.com/CWSpear/local-persist

Since I literally learned Go and wrote it just this weekend, I wouldn't mind some helpful eyes =)

I'm calling this a 1.0-beta. It probably needs a few tweaks to validate name and mountpoint, and I've gotta figure out binaries and probably a starter upstart script or something, but we're getting there! It works as intended and I plan on using it in production this week =)

@briceburg

@CWSpear thanks, I found this useful as well!

We have 100s of application environments -- each has a media folder > 80 GB. I don't duplicate this media folder per environment -- it's slow and cumbersome. Instead I bind a UFS [overlay||aufs] export of the media, so each environment can write to media without side effects in other environments ++ we never need to copy media.

These UFS exports get set up on the docker host, and I'd like to reference them with named volumes for later use in docker-compose. The named volume pattern helps us keep things consistent and organized. E.g.

# doesn't work
docker volume create --name qa-3-media  --mountpoint /var/UFS/exports/qa-3-media

# now possible w/ https://github.com/CWSpear/local-persist
docker volume create --name qa-3-media  -o mountpoint=/var/UFS/exports/qa-3-media -d local-persist 

Great work !!! && call me crazy for trying to avoid data containers after migrating to docker 1.10 & compose 1.6

@zokier

zokier commented Aug 15, 2016

I find it surprising that named volumes do not have feature parity with anonymous volumes. Personally I would have thought docker volume create /hostdir:vname && docker run -v vname:/contdir ... would be at least roughly equivalent to docker run -v /hostdir:/contdir .... Of course the exact syntax is irrelevant here; -o mountpoint etc. is probably a better choice. Also, isn't this one major feature which prevents fully deprecating the use of data containers?

While I really appreciate @CWSpear creating local-persist, I would prefer an official plugin instead to ensure long-term viability and compatibility. With no offense intended, one-man projects sadly have a high risk of becoming unmaintained at some point.

@cpuguy83
Member

@zokier /hostdir is not a volume, it's a host mount.
Why create a volume for something that already exists on the host?
Why would one use data-containers for something that lives and is addressable directly on the host?

@zokier

zokier commented Aug 15, 2016

In my case specifically? I'm working with docker-compose (learning as I go, so I might be way off), and I thought of using named volumes as a neat abstraction to avoid putting unnecessary host-specific information in docker-compose.yml, or at least encapsulating it into one section. That in turn would make it easier to swap in different host-paths, or even completely different backends.

Overall I just find that thinking of mounts as a special case of volumes would be more elegant than having them as a distinct "top-level" concept. They are already quite thoroughly mixed together now, being defined with the same -v flag or in the same volumes sub-section in docker-compose, etc.

@CWSpear
Contributor Author

CWSpear commented Aug 16, 2016

@zokier you know how to help protect open source "one-man projects" from "becoming unmaintained at some point?"

You help contribute when you find an issue.

Rather than just stay away from projects you (supposedly) believe in, help improve/offer support =)

In this case tho, the scope of the plugin is very narrow and it's really quite basic and straightforward. I use Docker in dozens of projects, and use my plugin myself on many of them. I'm not going to lose interest in it any time soon, and the maintenance demand is quite minimal, so it's not a huge burden.

That all being said, I definitely feel that this functionality should be in core. Which was my first proposal. The plugin was in response to the Docker team feeling it shouldn't be in core. It'd be many fewer lines if it were in core, that's for sure.

@shrikeh

shrikeh commented Sep 13, 2016

+1 for this. I have a data container which, for added security, I mount into various containers as read-only. The source code is PHP, and some of the applications are not very well written (while there are some very, very good PHP developers, there are also many with very spotty security knowledge).

I therefore put their code in a data container, and then mount that code with volumes_from into my php-fpm container. This literally means that no matter how weird the code is, it can't be rewritten by any sort of cunning exploit.

Ideally volumes_from would allow me a choice of where in the other container it would be mounted to, though.

@cpuguy83
Member

@shrikeh This is what named volumes do. Create a volume, give it a name, and use -v important_data:/foo:ro
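
A minimal sketch of that pattern; the volume name, images, and container paths here are placeholders, not anything prescribed in this thread:

docker volume create important_data
# mount the same named volume read-only into two different containers
docker run -d --name fpm -v important_data:/var/www/html:ro php:fpm
docker run -d --name web -v important_data:/usr/share/nginx/html:ro nginx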

@shrikeh

shrikeh commented Sep 13, 2016

@cpuguy83 the bit I haven't figured out is: where does the code come from then in the above?

@cpuguy83
Member

@shrikeh WDYM?

Please feel free to hop on IRC to discuss. GH issues is not the best place.

@shrikeh

shrikeh commented Sep 13, 2016

So I reread your solution and we're solving different problems. Essentially I don't want to mount a volume from the host. Here's the current composed stack:

{CloudFlare} -[HTTPS]->|nginx1|-[HTTP]->|varnish|-[HTTP]->|nginx2|-[TCP]->|php-fpm01|

Not shown above is a data container, app; nginx01 and php-fpm01 share the same data container. Also not shown is an nginx03/php-fpm02 combo for administrators, with various tweaks allowing higher memory usage but fewer users, etc. But it, too, uses the same data container.

The data container has the code, static assets, and the build tools (because you don't minify or fetch vendors in production), and doesn't even run. But it does give me some advantages:

  • I can never get into the situation where any of the three containers that need the code or assets are out of sync, which can play havoc with caches because it creates unnecessary race conditions.
  • I can tag changes to my data container referencing GitHub commits/branches/tags and know that it's just code changes. It's very easy to rollback.
  • Similarly any changes to the other containers are config changes only, as they don't have the code themselves.
  • The data container itself is mounted read only in the other containers where possible, so adding to overall security (I'm looking at you, WordPress). I don't even need access to the repo to run it, it's just a volume I can easily move around. Otherwise, to change one line of code would require rebuilding three containers.

All of the above relies on volumes_from. Which is working just fine. I just wish I had a little bit more control over where it mounted code to within the various containers that use it.

@xeor

xeor commented Sep 22, 2016

I would like this option as well.
As it is now, every volume created with docker volume create will fill up the / partition if you don't have /var/lib/docker on its own partition. In my setup, I use devicemapper and lvm-thinpooldev for my storage driver, but as the documentation says, docker volume create abc bypasses this.

The local volume driver already supports some options when creating a volume. This is done for NFS, for example: docker volume create --driver local --opt type=nfs --opt o=addr=10.1.2.3,rw --opt device=:/docker --name nfsdatavolume. Can we use the same for making e.g. a bind mount?

Another option would be to symlink/bind-mount /var/lib/docker/volumes/ to where you actually want it. Not sure about the consequences of doing that tho.

I would much rather have a named volume with the correct options created once than do what feels like hacks (e.g. variables in my docker-compose > volumes).

The use cases are many! It feels like it belongs in the local driver, which already supports options like this.

@thaJeztah added the kind/enhancement and area/volumes labels Sep 22, 2016
@cpuguy83
Member

@xeor docker volume create --opt type=none --opt device=<host path> --opt o=bind

If the host path does not exist, it will not be created.
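
A minimal usage sketch of that command; the host path, volume name, and image are placeholders, and (as noted) the host path has to exist beforehand:

mkdir -p /persistent/images
docker volume create --opt type=none --opt device=/persistent/images --opt o=bind --name images
docker run -d -v images:/path/to/a --name container1 some_image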

@xeor

xeor commented Sep 22, 2016

@cpuguy83 that's great! Worked perfectly, thanks!

@CWSpear isn't this exactly what you wanted in the beginning, before making the plugin?

@CWSpear
Contributor Author

CWSpear commented Sep 22, 2016

@xeor I think so...!

@cpuguy83 Can you explain those opts? Where are they documented? How long has this been possible?

@cpuguy83
Member

It was added in 1.12, I think, maybe 1.11.
Opts are (generally) just the same options you pass to the mount command.
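
As a rough mental model only (an approximation, not the driver's exact code path; the volume's storage path under /var/lib/docker is an implementation detail):

# creating a volume like this:
docker volume create --opt type=none --opt o=bind --opt device=/path/on/host --name myvol
# roughly corresponds to the daemon running something like this when a container using it starts:
#   mount -t none -o bind /path/on/host /var/lib/docker/volumes/myvol/_data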

@xeor

xeor commented Sep 22, 2016

docker volume create --magic could use some more documentation for stuff like this. "Generally just the same as mount" is very vague... (but thanks) :)

If I do a bind-mount manually;

[root@d3 /]# mkdir /source /dest
[root@d3 /]# mount -o bind /source /dest
[root@d3 /]# touch /source/a
[root@d3 /]# ls /dest/
a
[root@d3 /]# mount | grep /dest
/dev/mapper/vg1-root on /dest type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
[root@d3 /]# cat /etc/mtab | grep /dest
/dev/mapper/vg1-root /dest xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0

It is not easy to get the options.. type=none, device=.., o=bind?...

@CWSpear
Contributor Author

CWSpear commented Sep 22, 2016

Yeah, I've been digging... I'm quite confused, too. I'm not a mount master, but I'm with @xeor. Some clarification would be dandy... a link to code or docs would be swell as well.

I'm looking through code a bit, but day job is calling... if I find anything, I'll post.

Cc @cpuguy83

@cpuguy83
Member

@CWSpear https://docs.docker.com/engine/reference/commandline/volume_create/#/driver-specific-options

@cpuguy83
Member

@ian-axelrod If you use a named volume with the above example for binding a host dir to the volume, the data in the image will be copied over.
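
A small sketch of that copy behaviour; the host path and volume name are placeholders, and nginx is used only as an example of an image that ships files at the mountpoint:

mkdir -p /srv/nginx-html            # empty host directory
docker volume create --opt type=none --opt device=/srv/nginx-html --opt o=bind --name html
docker run --rm -v html:/usr/share/nginx/html nginx true
ls /srv/nginx-html                  # now contains the image's default index.html etc.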

@ian-axelrod

Hi @cpuguy83,

I did use the solution you gave earlier in this issue, in fact. The one blocker for me is the fact that it does not automatically create folders, which I know you specifically say it will not. That is why I am trying to find an alternative setup that does create folders, yet still has the advantages of the solution you proposed.

My team is using docker for local development, which means we have a set of requirements that need to be satisfied for development to go smoothly. First, we need dependencies visible on the host (inside the IDE workspace, actually) so that IDE integrations work properly. Second, we need a simple approach to correctly initialize the apps for new, junior developers, who may not be familiar with docker initially, to ease the onboarding process. I created a set of command-line tools that accomplishes this; however, I also want to give developers that are familiar with docker complete control over their environment. This means that I cannot build any extra logic into the aforementioned utilities that would force their use over simple docker-compose commands. I really want to avoid placing folder creation commands in the utilities, for instance, as that would mean devs not using the utility would have to manually create dependency folders for each new service they create. We have been creating quite a few new services as of late, so you can imagine that would become annoying.

Hopefully this gives you more insight. Is there anything I can do, or am I stuck creating the dependency folders manually for new services?

Cheers,

-Ian

@matthew-hickok

@cpuguy83

Just wanted to pick this back up as I have a very specific question...

I did what you said and created the mount like so:

docker volume create --opt type=ext4 --opt device=/storage/data my_volume

sudo docker run -v my_volume:/var/garbage -it image/someimage

It fails to perform the mount.

The only way that I got it to work was by adding the bind option like this

docker volume create --opt type=ext4 --opt device=/storage/data --opt o=bind my_volume

If I'm doing it this way, am I discarding the benefits of using the newer volume mounts and pretty much just using old-school bind mounts?

@cpuguy83
Member

cpuguy83 commented Dec 6, 2017

@matthew-hickok Just FYI, type=ext4 is not doing anything there. Also, yes, you are just bind-mounting, though there are benefits/trade-offs to using a volume instead of a straight-up -v /foo:/bar...

But I'm not sure what you are hoping to accomplish in general here.

@matthew-hickok

@cpuguy83 All I want is to provide persistent storage, which lives on a secondary disk, to my containers.

For example, I want Elasticsearch data to be stored on /storage/es_data which is sitting on /dev/sdb. And since I am completely new to Docker (and Linux actually), I am not sure what the best way to accomplish that is.

I've heard that bind-mounts are bad because of things like permission issues. But it seems that if I want the persistent data to live outside of the default docker location used by volume mounts, I need to use a bind mount.

I could be going about this completely backwards, I really have no idea.
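
For what it's worth, a sketch of one way to do that with the bind-backed volume approach from earlier in the thread; /usr/share/elasticsearch/data is the data path used by the official Elasticsearch images, and the volume/container names and image are placeholders:

# /storage/es_data must already exist on the secondary disk
docker volume create --opt type=none --opt device=/storage/es_data --opt o=bind --name es_data
docker run -d --name elasticsearch -v es_data:/usr/share/elasticsearch/data <elasticsearch image>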

konstin added a commit to meine-stadt-transparent/meine-stadt-transparent that referenced this issue Dec 17, 2017
Unfortunately there is no feasible way to mount data from inside a container to an outside folder (moby/moby#19990). Also, media and cache are confirmed to work now.
@mikeyjk

mikeyjk commented Jun 12, 2018

I found that I needed to manually delete docker volumes before @cpuguy83's solution would work.
Glad to have finally found this solution.

Does anyone have any thoughts on the best way of handling 'n' shared volumes in this fashion?
The model here is a development environment, where we may want 'n' containers on 'n' different changesets.

Or any general thoughts on whether a clustered filesystem is more appropriate, be it on host machine or remote?

@dbjpanda

dbjpanda commented Nov 28, 2018

Can anyone please tell me what the advantages of a "bind-mounted named volume" are over a normal "bind-mounted volume"? The only one I can think of is that data is copied from the image to the volume initially.
What else?
Does it solve the permission issue as well? In my case I don't see any advantage to the method described above: it doesn't create the dir on the host if it doesn't exist, doesn't provide ACL support, doesn't fix the permission issue, and doesn't fix macOS's bind-mount performance issue. Apart from that, you always have to provide a full path, because dot and/or pwd are not going to work on Windows. Please correct me if I am wrong.
I also tried @CWSpear's solution, i.e. the local-persist volume plugin, but I still didn't find any benefits over a normal bind mount.

@mikeyjk

mikeyjk commented Nov 28, 2018

Your understanding matches mine, dbjpanda.
Copying from the container to the volume was my use case. I use bash glue to create directories and set permissions ahead of docker-compose running.
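
A sketch of the kind of glue script being described; the paths, uid/gid, and compose invocation are assumptions for illustration:

#!/usr/bin/env bash
# create host directories and set ownership before starting the stack
set -euo pipefail
mkdir -p /srv/app/media /srv/app/cache
chown -R 1000:1000 /srv/app
docker-compose up -d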

@dbjpanda

Generally an image shouldn't hold a large amount of data, so I use cp/mv commands inside the entrypoint script to fill the host dir, which works very well for small amounts of data.
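
A sketch of that entrypoint pattern; the paths are placeholders, and the seed directory is assumed to be baked into the image:

#!/bin/sh
# entrypoint sketch: seed the mounted (initially empty) directory from a copy kept in the image
set -e
if [ -z "$(ls -A /var/www/html 2>/dev/null)" ]; then
    cp -a /opt/app-src/. /var/www/html/
fi
exec "$@"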

@mikeyjk

mikeyjk commented Nov 28, 2018

That seems like a completely reasonable approach to solving the problem of sync-ing content between container and host.

From my perspective, it made sense to have named volumes do that work for me, to reduce the amount of logic I need inside the entrypoint (at the cost of logic prior to the container being started).

@thaJeztah
Member

thaJeztah commented Nov 29, 2018

The difference between a "bind-mounted host directory" and a (named) "volume" is partly semantics, plus some steps that are performed for one but not the other (copying of data, creation of a storage location):

Bind-mounted host directories

Short-hand syntax:

-v /path/on/host:/destination-path/in/container

Advanced/long-form syntax:

--mount type=bind,src=/path/on/host,dest=/destination-path/in/container

Both of the above achieve the same result, but the --mount syntax has more advanced options and, contrary to the -v short-hand syntax, is more strict: it requires the host path to exist (it won't automatically create the path).

With a bind-mounted host directory;

  1. You specify a path on the host (note: "host" is the host where the daemon, and thus the container, runs), and a destination path inside the container
  2. Docker creates the container
  3. Docker mounts the host path into the container at the provided destination path.
  • If the destination in the container doesn't exist, Docker creates a directory (with root:root permissions) to use as mountpoint.
  • If the destination in the container does exist, Docker uses this location as mountpoint and mounts the host path there. Doing so "masks" / "hides" the files that are in the container at that location (because the "mount" is "on top" of those files); see the sketch below for a quick demonstration.
  4. Docker starts the container
  5. Any files and directories in the location on the host are accessible to the container.

(some of the steps above may be in a slightly different order 😅)
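
A quick way to observe the masking behaviour described in step 3, assuming the stock nginx image (which ships files in /usr/share/nginx/html) and a throwaway host directory:

mkdir -p /tmp/empty-dir
# the empty host dir is mounted on top of the image's html dir, hiding its contents
docker run --rm -v /tmp/empty-dir:/usr/share/nginx/html nginx ls /usr/share/nginx/html
# prints nothing: the image's index.html is masked by the (empty) bind mount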

(named) volumes

Syntax;

Short-hand syntax:

# volume name provided: this is a named volume
-v volume-name:/destination-path/in/container

# no name: this is an anonymous volume
-v /destination-path/in/container

Advanced/long-form syntax:

--mount type=volume,src=volume-name,dest=/destination-path/in/container

# no name: this is an anonymous volume
--mount type=volume,dest=/destination-path/in/container

Both of the above achieve the same, but the --mount syntax has more advanced options.

With a (named) volume;

  1. (optional) Create a named volume (docker volume create), specifying the options you want to set for the volume (such as the driver to use, and driver options).
  2. You specify the name of a volume (as part of the -v or --mount syntax) and a destination path inside the container. If no name is specified, a random name is generated (in this case it's an "anonymous" volume).
  3. Docker requests the volume from the specified volume driver ("local" driver by default), which creates the volume if it does not yet exist. A storage location for the volume is also created on the daemon host.
  4. Docker creates the container
  5. Docker mounts the volume's storage location from the host into the container at the provided destination path.
  • If the destination in the container doesn't exist, Docker creates a directory (with root:root permissions) to use as mountpoint.
  • If the destination in the container does exist, Docker uses this location as mountpoint and mounts the volume there. Doing so "masks" / "hides" the files that are in the container at that location (because the "mount" is "on top" of those files)
  6. If the volume is empty (which is the case when the volume is first used), but the container has files/directories at the destination path, then Docker copies those files to the volume, including their attributes (ownership and permissions)
  7. Docker starts the container

(some of the steps above may be in a slightly different order 😅)

The "storage location" of the volume should be considered an implementation detail, and although that location is not a secret (it's in /var/lib/docker/volumes/<name-of-volume>), these paths are not a "public API", and thus can change in future. Interacting with those files directly from the host "works", but doing so could potentially interfere with the docker daemon (e.g. lead to a "filesystem in use" if another process is using those files, which can cause docker failing to remove a volume).

named volumes with a custom storage location

This is the topic being discussed here.

In step 1. of the previous section, you create a volume using the local volume driver, but pass "mount" options to customize the storage location of the volume (see #19990 (comment))

Also see docker/compose#2957 (comment), which goes a bit deeper into "what happens under the hood"

All the other steps are the same as the previous example, so if the volume is empty, but the container has files at the target location, those files (and their attributes) are copied on first run.

The same applies here regarding "interacting with those files"; doing so can interfere with the docker daemon.

Permissions

Note that in none of these cases does Docker handle permissions/ownership of the volume's content:

  • when bind-mounting, all files/folders are directly mounted into the container, so permissions are the same as on the host, and modifying permissions from inside the container will modify the permissions on the host.
  • when using a (named) volume, docker creates a copy of the files/folders, including their permissions. If permissions of those files should be modified, it's the container's responsibility to make those changes.

One exception to the above: when bind-mounting host directories on Docker Desktop (Docker for Mac / Docker for Windows), the daemon itself runs in a lightweight VM. Given that bind mounts are always mounted from the host where the daemon runs (so in this case, a VM), some magic is used to sync files from the host to the VM; on Docker for Mac, this includes a feature that strips/ignores ownership of the files. This makes it possible to access/modify files on the host, even if the user inside the container doesn't match the user on the host.
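
A small way to see the first point above, assuming a throwaway host directory and the alpine image (paths are illustrative):

# with a bind mount, a chmod done inside the container is visible on the host
mkdir -p /tmp/bind-dir && touch /tmp/bind-dir/file
docker run --rm -v /tmp/bind-dir:/data alpine chmod 600 /data/file
stat -c '%a' /tmp/bind-dir/file    # prints 600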

@mikeyjk

mikeyjk commented Nov 29, 2018

Oh wow that's interesting. Thanks for clearing this up.

The same applies here regarding "interacting with those files"; doing so can interfere with the docker daemon.

Can I read more about this somewhere? (Edit: more so meant the docker daemon component)

@thaJeztah
Member

Can I read more about this somewhere?

Don't think there's specific documentation about that, just that (as mentioned) keeping files in use by external processes could potentially cause issues if you want to remove the volume. Of course, similar things can be problematic at runtime if you share a volume between multiple containers (as multiple containers may attempt to write/lock files).

@willemavjc

Permissions

Note that in none of these cases does Docker handle permissions/ownership of the volume's content:

  • when bind-mounting, all files/folders are directly mounted into the container, so permissions are the same as on the host, and modifying permissions from inside the container will modify the permissions on the host.
  • when using a (named) volume, docker creates a copy of the files/folders, including their permissions. If permissions of those files should be modified, it's the container's responsibility to make those changes.

One exception to the above: when bind-mounting host directories on Docker Desktop (Docker for Mac / Docker for Windows), the daemon itself runs in a lightweight VM. Given that bind mounts are always mounted from the host where the daemon runs (so in this case, a VM), some magic is used to sync files from the host to the VM; on Docker for Mac, this includes a feature that strips/ignores ownership of the files. This makes it possible to access/modify files on the host, even if the user inside the container doesn't match the user on the host.

Is this 100% sure? Because otherwise, postgres' issues with ownership should not happen. Am I misinterpreting something in what you explained?

Reference: docker-library/postgres#253

@thaJeztah
Member

Is this 100% sure? Because otherwise, postgres' issues with ownership should not happen. Am I misinterpreting something in what you explained?

Reference: docker-library/postgres#253

docker itself does not change permissions of the files, but the container (e.g. an entrypoint script in the image, or other processes) may.

@thornycrackers

https://forums.docker.com/t/how-to-mount-docker-volume-along-with-subfolders-on-the-host/120482/2

This allowed me to specify a mount point on the host for a named volume so I could view the files.

@schnillerman

Maybe this is what the OP was looking for:

services:
  foo:
    volumes:
      - bar:/dir_container

volumes:
  bar:
    driver: local
    driver_opts:
      device: /dir_host
      o: bind

@thaJeztah
Member

yup, that solution was mentioned above; #19990 (comment)

@xeor docker volume create --opt type=none --opt device=<host path> --opt o=bind

If the host path does not exist, it will not be created.

@henkepa

henkepa commented Jun 23, 2023

@cpuguy83 All I want is to provide persistent storage to my containers which is on a secondary disk

For example, I want Elasticsearch data to be stored on /storage/es_data which is sitting on /dev/sdb. And since I am completely new to Docker (and Linux actually), I am not sure what the best way to accomplish that is.

I've heard that bind-mounts are bad because of things like permission issues. But it seems that if I want the persistent data to live outside of the default docker location used by volume mounts, I need to use a bind mount.

I could be going about this completely backwards, I really have no idea.

@matthew-hickok how did you end up here?

