Ensure read write many first #339
Conversation
Pull request overview
This PR updates the k8s runner hooks to use a shared PersistentVolumeClaim-backed work volume (instead of ad-hoc copy/exec flows) and switches container-step execution from standalone Pods to batch Jobs, aligning storage behavior across job/step containers.
Changes:
- Introduce a single shared volume mount model (`POD_VOLUME_NAME` PVC + `containerVolumes()`), and update job/step execution to rely on it.
- Replace the container-step "pod per step" model with a Kubernetes Job plus a helper to resolve the job's pod name.
- Update test setup to provision StorageClass/PV/PVC for e2e tests and clean them up.
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| packages/k8s/tests/test-setup.ts | Provisions StorageClass/PV/PVC for tests; updates initialization/cleanup sequencing around the new shared volume model. |
| packages/k8s/src/k8s/utils.ts | Adds containerVolumes() and writeEntryPointScript(); updates argument handling (fixArgs) for exec invocations. |
| packages/k8s/src/k8s/index.ts | Switches job pod volume to PVC, adds Job creation path for container steps, updates permissions, and introduces node pinning/affinity helpers. |
| packages/k8s/src/hooks/run-script-step.ts | Removes copy/merge temp-dir logic and runs via entrypoint script expected to be present on the shared volume. |
| packages/k8s/src/hooks/run-container-step.ts | Runs container steps as Jobs, streams logs, and optionally injects env via Secret. |
| packages/k8s/src/hooks/prepare-job.ts | Uses createPod, mounts via containerVolumes, and copies externals into the shared work volume path. |
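The shared-volume model above can be sketched as follows. This is a hypothetical illustration, not the PR's actual `containerVolumes()` implementation: the mount path and the `containerVolumeMounts` helper name are assumptions; only `POD_VOLUME_NAME` is named in the changes.

```typescript
// Every job/step container mounts the same PVC-backed "work" volume, so files
// written by one step (e.g. an entrypoint script) are visible to the next.
const POD_VOLUME_NAME = 'work'

interface VolumeMount {
  name: string
  mountPath: string
}

// Hypothetical helper: returns the single shared mount used by all containers.
function containerVolumeMounts(workDir = '/__w'): VolumeMount[] {
  return [{ name: POD_VOLUME_NAME, mountPath: workDir }]
}
```

Because every container resolves the same volume name, the copy/exec flows the PR removes become unnecessary: data placed on the work volume during `prepare-job` is already in place when `run-script-step` executes.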
Comments suppressed due to low confidence (1)
packages/k8s/tests/test-setup.ts:90
`cleanupK8sResources` deletes the PVC/PV before deleting the pods that may be using the claim. This can leave the PVC stuck terminating or cause the PV delete to fail (still bound). Delete the job/workflow pods first, then the PVC, then the PV, and finally the StorageClass.
```typescript
async cleanupK8sResources(): Promise<void> {
  await k8sApi
    .deleteNamespacedPersistentVolumeClaim({
      name: `${this.podName}-work`,
      namespace: 'default',
      gracePeriodSeconds: 0
    })
    .catch((e: k8s.ApiException<any>) => {
      if (e.code !== 404) {
        console.error(JSON.stringify(e))
      }
    })
  await k8sApi
    .deletePersistentVolume({ name: `${this.podName}-pv` })
    .catch((e: k8s.ApiException<any>) => {
      if (e.code !== 404) {
        console.error(JSON.stringify(e))
      }
    })
  await k8sApi
    .deleteNamespacedPod({
      name: this.podName,
      namespace: 'default',
      gracePeriodSeconds: 0
    })
```
```typescript
@@ -288,18 +396,5 @@ function mergeLists<T>(base?: T[], from?: T[]): T[] {
}

export function fixArgs(args: string[]): string[] {
```
`fixArgs` currently always re-tokenizes via `shlex.split(args.join(' '))`. This breaks commands of the form `['sh', '-c', '<script>']` because the script string gets split into multiple args (e.g. tests call `execPodStep(['sh', '-c', '[ "$(cat ...)" = ... ] || exit 5'], ...)`). Reintroduce the `sh -c` preservation logic (or equivalent) so the third argument remains a single script string.
```typescript
export function fixArgs(args: string[]): string[] {
  if (args.length >= 3 && args[0] === 'sh' && args[1] === '-c') {
    return [args[0], args[1], args.slice(2).join(' ')]
  }
```
```typescript
job.metadata.name = getStepPodName()
job.metadata.labels = { [runnerInstanceLabel.key]: runnerInstanceLabel.value }
job.metadata.annotations = {}

job.spec = new k8s.V1JobSpec()
job.spec.ttlSecondsAfterFinished = 300
job.spec.backoffLimit = 0
job.spec.template = new k8s.V1PodTemplateSpec()

job.spec.template.spec = new k8s.V1PodSpec()
job.spec.template.metadata = new k8s.V1ObjectMeta()
job.spec.template.metadata.labels = {}
job.spec.template.metadata.annotations = {}
job.spec.template.spec.containers = [container]
```
`createJob` applies `RunnerInstanceLabel` only to the Job object, but the Pods created by the Job will not automatically inherit Job metadata labels. Since `prunePods()` selects pods by `RunnerInstanceLabel`, these step pods won't be pruned. Add the same label to `job.spec.template.metadata.labels` (and keep it when merging any extension metadata).
Force-pushed from c47c78e to 5d337ff.
- Change package.json bootstrap to use `npm ci --prefix packages/hooklib`
- Change k8s-tests workflow to use `npm ci` instead of `npm install`
- All three packages (hooklib, k8s, docker) now use deterministic installs
- All three CI jobs (format-and-lint, docker-tests, k8s-tests) now use `npm ci`
…nity

- RWX volumes allow job pods to be scheduled on any cluster node
- RWO volumes require affinity to pin job pods to the runner's node
- Remove ACTIONS_RUNNER_USE_KUBE_SCHEDULER from RWX migration steps
- Emphasize resource utilization benefits of RWX free scheduling

Co-authored-by: Sisyphus <[email protected]>
Change ACTIONS_RUNNER_USE_KUBE_SCHEDULER to ACTIONS_RUNNER_DISABLE_KUBE_SCHEDULER to make affinity-based scheduling the first-class (default) implementation.

Breaking change:
- OLD: set ACTIONS_RUNNER_USE_KUBE_SCHEDULER=true to enable affinity (opt-in)
- NEW: affinity is enabled by default; set ACTIONS_RUNNER_DISABLE_KUBE_SCHEDULER=true to disable (opt-out)

Code changes:
- utils.ts: rename the constant and invert the useKubeScheduler() logic
- rwo-affinity-test.ts: update tests to verify the default affinity behavior
- ADR 0135: update to reflect the opt-out model
- README: update guidance to reflect the default behavior

Co-authored-by: Sisyphus <[email protected]>
Co-authored-by: Copilot <[email protected]>
Force-pushed from 68f9f04 to 68897b6.