
Sharing of shared="yes" volumes fails if the pods mounting them mix read-write/read-only #691

@simonswine

Description

What steps did you take and what happened:

I have a StorageClass with parameters.shared="yes".

In general this works as expected, though I remember hitting issues before that I couldn't quite explain. Recently I tried to mount a volume from two pods, one with readOnly: true and the other with readOnly: false, and that fails: only the first mounted variant succeeds, and sharing breaks for the other.

I have confirmed this is a general OpenZFS issue, at least on my system:

$ zfs --version
zfs-2.4.0-1
zfs-kmod-2.4.0-1

$ zfs create rpool/playground -o mountpoint=legacy
$ mkdir /media/mount-a /media/mount-b /media/mount-c

# mount a rw
$ mount -t zfs rpool/playground  /media/mount-a/
# mount b ro, fails
$ mount -o ro  -t zfs rpool/playground  /media/mount-b/
zfs_mount_at() failed: mountpoint or dataset is busy
# mount c rw
$ mount -t zfs rpool/playground  /media/mount-c

# unmount all
$ umount /media/mount-a /media/mount-b /media/mount-c

# mount a ro
$ mount -o ro -t zfs rpool/playground  /media/mount-a/
# mount b rw, fails
$ mount -t zfs rpool/playground  /media/mount-b/
zfs_mount_at() failed: mountpoint or dataset is busy

I wonder if we can improve the error message, as I spent a long time figuring this out.

Eventually this will need either an upstream fix or a workaround.
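One possible workaround, sketched from the repro above and not tested inside the CSI plugin itself: mount the dataset read-write once, then create additional read-only views with bind mounts instead of issuing a second `zfs` mount, since a bind mount can be remounted read-only independently of the original:

```
# Mount the dataset read-write once (as in the repro above).
$ mount -t zfs rpool/playground /media/mount-a

# For a read-only view, bind-mount the existing mountpoint instead of
# doing a second zfs mount, then remount the bind read-only.
$ mount --bind /media/mount-a /media/mount-b
$ mount -o remount,bind,ro /media/mount-b
```

This avoids asking ZFS to mount the same dataset twice with conflicting ro/rw flags, which is what triggers the "mountpoint or dataset is busy" error above.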

What did you expect to happen:

That I can mix and match read-write and read-only volumeMounts with a shared volume.
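For illustration, the intended usage looks roughly like this (pod, container, and PVC names here are made up, not from my cluster):

```
# Hypothetical second pod: mounts the same PVC as an existing
# read-write consumer, but read-only. With shared: "yes" in the
# StorageClass this is expected to work.
apiVersion: v1
kind: Pod
metadata:
  name: reader                  # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: true            # another pod mounts the same PVC with readOnly: false
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: shared-pvc     # hypothetical PVC backed by the shared StorageClass
```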

The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other Pastebin is fine.)

  • kubectl logs -f openebs-zfs-node-[xxxx] -n openebs -c openebs-zfs-plugin
I0104 19:29:30.111326       1 grpc.go:83] GRPC response: {}
I0104 19:29:30.181692       1 grpc.go:74] GRPC call: /csi.v1.Node/NodePublishVolume requests {"readonly":true,"target_path":"/var/lib/kubelet/pods/914e7e68-d405-452d-b5ad-5609ed4ecc4e/volumes/kubernetes.io~csi/pvc-2973a985-e1f4-4efd-912a-2ab02aaed369/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"zfs"}},"access_mode":{"mode":5}},"volume_context":{"openebs.io/cas-type":"localpv-zfs","openebs.io/poolname":"rpool/data/kubernetes-data-shared","storage.kubernetes.io/csiProvisionerIdentity":"1767544044262-2053-zfs.csi.openebs.io"},"volume_id":"pvc-2973a985-e1f4-4efd-912a-2ab02aaed369"}
E0104 19:29:30.214821       1 mount.go:236] zfs: could not mount the dataset rpool/data/kubernetes-data-shared/pvc-2973a985-e1f4-4efd-912a-2ab02aaed369 cmd [-o ro, -t zfs rpool/data/kubernetes-data-shared/pvc-2973a985-e1f4-4efd-912a-2ab02aaed369 /var/lib/kubelet/pods/914e7e68-d405-452d-b5ad-5609ed4ecc4e/volumes/kubernetes.io~csi/pvc-2973a985-e1f4-4efd-912a-2ab02aaed369/mount] error: mount: /var/lib/kubelet/pods/914e7e68-d405-452d-b5ad-5609ed4ecc4e/volumes/kubernetes.io~csi/pvc-2973a985-e1f4-4efd-912a-2ab02aaed369/mount: rpool/data/kubernetes-data-shared/pvc-2973a985-e1f4-4efd-912a-2ab02aaed369 already mounted on /var/lib/kubelet/pods/c11a7815-5b84-4df9-b61a-c0eb33804290/volumes/kubernetes.io~csi/pvc-2973a985-e1f4-4efd-912a-2ab02aaed369/mount.
E0104 19:29:30.214849       1 grpc.go:81] GRPC error: rpc error: code = Internal desc = rpc error: code = Internal desc = dataset: mount failed err : mount: /var/lib/kubelet/pods/914e7e68-d405-452d-b5ad-5609ed4ecc4e/volumes/kubernetes.io~csi/pvc-2973a985-e1f4-4efd-912a-2ab02aaed369/mount: rpool/data/kubernetes-data-shared/pvc-2973a985-e1f4-4efd-912a-2ab02aaed369 already mounted on /var/lib/kubelet/pods/c11a7815-5b84-4df9-b61a-c0eb33804290/volumes/kubernetes.io~csi/pvc-2973a985-e1f4-4efd-912a-2ab02aaed369/mount.
  • kubectl get zv -A -o yaml
apiVersion: zfs.openebs.io/v1
kind: ZFSVolume
metadata:
  creationTimestamp: "2026-01-04T16:48:10Z"
  finalizers:
  - zfs.openebs.io/finalizer
  generation: 2
  labels:
    kubernetes.io/nodename: xxx
  name: pvc-2973a985-e1f4-4efd-912a-2ab02aaed369
  namespace: openebs
  resourceVersion: "1102688265"
  uid: 185bce87-c369-406d-a1da-a29cd5b979b8
spec:
  capacity: "5368709120"
  compression: lz4
  dedup: "off"
  fsType: zfs
  ownerNodeID: xxx
  poolName: my-pool
  quotaType: quota
  recordsize: 4k
  shared: "yes"
  volumeType: DATASET
status:
  state: Ready

Anything else you would like to add:

#497 is potentially related.

Environment:

  • LocalPV-ZFS version 2.9.0
  • Kubernetes version (use kubectl version): v1.32.10
  • Kubernetes installer & version: NixOS configuration
  • Cloud provider or hardware configuration: hcloud
  • OS (e.g. from /etc/os-release): NixOS 25.11 (Xantusia)

Metadata

Labels: triage/needs-information (indicates an issue needs more information in order to work on it)
