
Deleting persistent volume claims and persistent volumes on a multi-node minikube cluster does not delete files from disk #13320


Open
victor-sudakov opened this issue Jan 10, 2022 · 10 comments
Labels

  • co/multinode: Issues related to multinode clusters
  • kind/bug: Categorizes issue or PR as related to a bug.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@victor-sudakov

What Happened?

When deleting PVCs and PVs on a multi-node minikube cluster, these resources are reported as non-existent by kubectl get pvc and kubectl get pv, but the actual files remain on disk under /var/lib/docker/volumes/minikube-m02/_data/hostpath-provisioner/... As a result, the old data can unexpectedly be resurrected when you redeploy a StatefulSet, for example.
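
For illustration, the mismatch looks something like this after the deletion (the path is the one from this report; kubectl output abbreviated):

$ kubectl get pv
No resources found
$ sudo find /var/lib/docker/volumes/minikube-m02/_data/hostpath-provisioner/ -type f
...the files that backed the deleted PVs are still listed here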

How to reproduce

  1. Create a minikube cluster with at least two nodes
  2. Create a StatefulSet with persistent volumes (a minimal sketch follows this list)
  3. Delete the StatefulSet
  4. Delete all the PVs and PVCs (kubectl, Lens, whatever)
  5. Search for files in /var/lib/docker/volumes/minikube-m02/_data/hostpath-provisioner/ - they will still be there
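
A minimal sequence along these lines reproduces it. The manifest and all names below are illustrative assumptions, not the reproduce.yaml.txt attached further down the thread; with the default scheduler the two replicas usually land on different nodes.

# 1. two-node cluster (the extra flags from the report are optional)
minikube start --nodes=2

# 2. a StatefulSet whose volumeClaimTemplate uses minikube's default "standard" StorageClass
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pv-test
spec:
  serviceName: pv-test
  replicas: 2
  selector:
    matchLabels:
      app: pv-test
  template:
    metadata:
      labels:
        app: pv-test
    spec:
      containers:
      - name: ubuntu
        image: ubuntu:22.04
        command: ["sleep", "infinity"]
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
EOF

# 3. + 4. delete the StatefulSet, then its PVCs; the bound PVs go with them,
#         since dynamically provisioned PVs default to the Delete reclaim policy
kubectl delete statefulset pv-test
kubectl delete pvc data-pv-test-0 data-pv-test-1
kubectl get pv,pvc        # the API reports nothing left

# 5. but the backing directories survive on the host
sudo find /var/lib/docker/volumes/ -path '*/hostpath-provisioner/*' -type f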

What I expected

Those on-disk files should be wiped out when the corresponding PVs disappear from kubectl get pv output.

Workaround

Delete the files manually or use a single-node minikube cluster.
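
For the manual cleanup with the docker driver, something along these lines works from the host. The path is the one reported above, and the glob assumes the default "minikube" profile name; double-check what it matches before deleting anything.

# wipe the leftover provisioner data in every node volume of the profile
sudo sh -c 'rm -rf /var/lib/docker/volumes/minikube*/_data/hostpath-provisioner/*'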

Attach the log file

This is minikube v1.24.0 on Manjaro Linux. The cluster was created with:

minikube start --disk-size=50g --nodes=2 --cni="calico" --insecure-registry="192.168.38.0/24"

Operating System

Other

Driver

Docker

@afbjorklund
Collaborator

afbjorklund commented Jan 10, 2022

So the volume for the first node is removed, but the volume for the second node is not? Sounds like a bug, if so.

Probably it is not only the storage provisioner data in that case, but everything else stored under /var (the volume mount point).

@afbjorklund added the kind/bug and co/multinode labels on Jan 10, 2022
@victor-sudakov
Author

So the volume for the first node is removed, but the volume for the second node is not? Sounds like a bug, if so.

Exactly.

How to reproduce:

Install the attached manifest. It will create two pods with PVs. Visit each pod and touch a file in its persistent volume, then find the files on the host machine:

# find minikube*/ -type f | grep ubu
minikube/test/bigdisk-ubuntu-1/ubuntu-1.txt
minikube-m02/test/bigdisk-ubuntu-0/ubuntu-0.txt
#
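
(For context, the files were created from inside the pods with commands roughly like the ones below; the namespace, pod names, and mount path are guesses inferred from the paths above, not taken from the attached manifest.)

kubectl -n test exec ubuntu-0 -- touch /bigdisk/ubuntu-0.txt
kubectl -n test exec ubuntu-1 -- touch /bigdisk/ubuntu-1.txt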

Remove the manifest and the PVCs. The file on one of the nodes will remain, which I think is incorrect and inconsistent behaviour:

# find minikube*/ -type f | grep ubu
minikube-m02/test/bigdisk-ubuntu-0/ubuntu-0.txt
#

reproduce.yaml.txt

@spowelljr added the priority/important-longterm label on Jan 19, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Apr 19, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on May 19, 2022
@spowelljr added the lifecycle/frozen label and removed the lifecycle/rotten label on May 25, 2022
@jsirianni

jsirianni commented Oct 21, 2022

I think I am seeing this as well. When I perform a minikube delete, future clusters will have state from my previous cluster when I deploy the same application manifest.

minikube version
minikube version: v1.26.0
commit: f4b412861bb746be73053c9f6d2895f12cf78565

Fedora 37
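
A quick way to check whether anything from the old cluster survives the delete with the docker driver (a sketch; minikube delete is normally expected to remove these volumes):

minikube delete
docker volume ls --filter name=minikube   # prints no volumes if cleanup worked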

@mgruner

mgruner commented Sep 28, 2023

I can confirm this, too.

@ceelian

ceelian commented Nov 4, 2023

Me too

@glebpom

glebpom commented Apr 9, 2024

same problem

abychko added a commit to codership/containers that referenced this issue Jun 3, 2024
@njlaw

njlaw commented Jun 5, 2024

I am seeing this with a single-node minikube cluster as well. If a reproduction of that case would be helpful, please let me know.

@tomthetommy

Same issue:

> minikube version
minikube version: v1.35.0
commit: dd5d320e41b5451cdf3c01891bc4e13d189586ed
