
Volumes gone after reboot #71

Open
saitoh183 opened this issue Jun 18, 2019 · 8 comments

Comments

@saitoh183

So I just rebooted my server, and all I see under the service status is "Couldn't find" for all my volumes. The volumes are on another drive, and the Docker folder is on the same drive as the data folders. Could it be that the service is starting before the data drive is mounted?

@CWSpear
Collaborator

CWSpear commented Jun 20, 2019

It could be. This plugin uses systemd (or Upstart, but that's less well supported), and I'm not super familiar with it, but I'm pretty sure there's a "depends on"-style clause, so there should be some way to make the service wait on the volumes...

A quick Google search brought up the systemd mount options: https://www.freedesktop.org/software/systemd/man/systemd.mount.html#
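
For example, a drop-in along these lines might do it (a sketch only; it assumes the unit is installed as docker-volume-local-persist.service and that the data drive is mounted at /data, so adjust both to your setup):

# /etc/systemd/system/docker-volume-local-persist.service.d/wait-for-data.conf
[Unit]
# RequiresMountsFor= adds Requires= and After= on the mount unit(s) for this path,
# so the driver won't start until the data drive is mounted.
RequiresMountsFor=/data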

@saitoh183
Author

Yeah, I tried it, but it didn't seem to work. For now I just recreated all the volumes in a single command, but the next time I reboot I will test again. This is the unit I tried:

[Unit]
Description=docker-volume-local-persist
Before=docker.service
After=data.mount
Wants=docker.service

[Service]
TimeoutStartSec=0
ExecStart=/usr/bin/docker-volume-local-persist

[Install]
WantedBy=multi-user.target
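
One way to check whether the ordering actually took effect (standard systemd and Docker tooling; the unit name here matches the file above, adjust if yours differs):

# Show the ordering/dependency settings systemd actually loaded
systemctl show -p After -p Wants -p Requires docker-volume-local-persist.service

# Lint the unit file for common mistakes
systemd-analyze verify /etc/systemd/system/docker-volume-local-persist.service

# After a reboot, check whether the volumes are still listed
docker volume ls --filter driver=local-persist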

@turkeryildirim

Hello,
I'm using Kubuntu 19.04 and installed Docker 18.06.1-ce from the snap package. After that I installed the driver via the suggested "quick way". Both the docker and local-persist services are running.

Any volume defined with the local-persist driver is gone after a reboot (or after a restart of the local-persist service).

Is there any direction I can check to understand what is wrong?

@CWSpear
Collaborator

CWSpear commented Jun 20, 2019

@turkeryildirim I don't know for sure off the top of my head, but the plugin may not recreate the volumes themselves; it persists the data, so the data is still there when you next recreate the volume with the same options.

That's the main intent of this plugin: to let data survive even if a volume is destroyed, and to be there again when the volume is next created.
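
In other words, something along these lines should bring a volume back with its data intact (a sketch only; the volume name and mountpoint are placeholders, and the -d/-o options are the plugin's documented ones):

# The volume definition may be gone after a reboot...
docker volume ls --filter driver=local-persist

# ...but recreating it with the same driver and mountpoint should point it
# back at the data that is still on disk at /data/images.
docker volume create -d local-persist -o mountpoint=/data/images images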

@turkeryildirim

@CWSpear
I just want to create a volume (docker volume create) with a custom mountpoint and use it for container(s). With the default approach (the local driver), created volumes persist, but it does not let me specify a mountpoint.

In theory, volumes using the local-persist driver should stay, because the only difference is the driver (I guess). But if this disappearance is the normal behaviour of the driver after "volume create", then I should look for something different.
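
A sketch of the reproduction I mean (the volume name and mountpoint are just examples; the service name assumes the unit from the earlier comment is installed as docker-volume-local-persist.service, so adjust if yours differs):

docker volume create -d local-persist -o mountpoint=/data/mysql mysql-data
docker volume ls --filter driver=local-persist    # the volume is listed

sudo systemctl restart docker-volume-local-persist.service
docker volume ls --filter driver=local-persist    # the volume is no longer listed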

@etricky

etricky commented Jun 21, 2019

I'm also affected by this: all my Docker volumes are gone! Is this the intended behavior of the plugin? Is there a way to ensure the volumes are not removed after a reboot?

@gramozkaragjyzi

gramozkaragjyzi commented Nov 19, 2019

If you changed data-root in Docker's daemon.json, check #68.
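
For context, that's this setting in /etc/docker/daemon.json (the path below is only an example); if you moved data-root, #68 covers that scenario:

{
  "data-root": "/mnt/storage/docker"
}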

@trustedcomputer

Thanks everyone for the pointers, no matter how long ago they were written. I ended up having both issues on a Synology box: the modified data-root and the volumes not being mounted yet. So I had to make two modifications (sketched below):

  1. a symbolic link from /var/lib/docker pointing to /var/packages/Docker/var/docker
  2. two changes to the systemd service file:
    (a) change Before= and Wants= from docker.service to the actual service on Synology, pkg-Docker-dockerd.service
    (b) add After=syno-volume.target to the [Unit] section

After that, my volumes were still there after a reboot!
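
A sketch of those two changes as concrete commands and unit-file edits (the paths and unit names are the ones from this comment; double-check them against your own DSM setup before applying):

# 1. Point /var/lib/docker at Synology's Docker data directory
#    (assumes /var/lib/docker does not already exist; move it aside first if it does)
sudo ln -s /var/packages/Docker/var/docker /var/lib/docker

# 2. Adjusted [Unit] section of the local-persist service file
[Unit]
Description=docker-volume-local-persist
Before=pkg-Docker-dockerd.service
Wants=pkg-Docker-dockerd.service
After=syno-volume.target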
