# Cleanup Guide
All test scripts support comprehensive cleanup with multiple options for different scenarios.
## Cleanup Options

| Option | Description |
|---|---|
| `--cleanup` | Delete resources and namespaces after the test completes |
| `--cleanup-on-failure` | Clean up resources even if tests fail |
| `--dry-run-cleanup` | Show what would be deleted without actually deleting |
| `--yes` | Skip the confirmation prompt for cleanup |
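For unattended runs (for example in CI), these flags compose. The helper below is a sketch of one way to assemble them; the function name is illustrative and not part of the shipped tools:

```bash
# Assemble flags for an unattended run: clean up on success and on failure,
# and skip the interactive confirmation prompt. (Helper name is illustrative.)
unattended_flags() {
  echo "--cleanup --cleanup-on-failure --yes"
}

# Print the full command you would run (adjust subcommand and range as needed):
echo "virtbench datasource-clone --start 1 --end 10 $(unattended_flags)"
```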
## What Gets Cleaned Up

### VM Creation Tests
- All VMs created during the test
- All DataVolumes (DVs) associated with the VMs
- All PersistentVolumeClaims (PVCs)
- All test namespaces (kubevirt-perf-test-1 through kubevirt-perf-test-N)
### Migration Tests
- VirtualMachineInstanceMigration (VMIM) resources
- Optionally: VMs, DataVolumes, PVCs, and namespaces (if `--create-vms` was used)
### Failure Recovery Tests
- FenceAgentsRemediation (FAR) custom resources
- FAR annotations from VMs
- Uncordon nodes that were marked as failed
- Optionally: VMs, DataVolumes, PVCs, and namespaces (with `--cleanup-vms`)
### Chaos Benchmark Tests
- All VMs in the test namespace
- All DataVolumes and PVCs
- All VolumeSnapshots
- The entire test namespace
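A quick way to confirm a cleanup actually removed everything is to scan resource listings for the test prefix. The filter below is a sketch: against a real cluster you would pipe `kubectl get vm,dv,pvc --all-namespaces` into it; the sample lines here are hypothetical stand-ins so the filter can be demonstrated anywhere.

```bash
# Empty output from the filter means no test-owned resources survived cleanup.
find_leftovers() {
  grep 'kubevirt-perf-test' || true
}

# Hypothetical listing standing in for real `kubectl get` output:
printf '%s\n' \
  'kubevirt-perf-test-1   vm-fedora-1   Running' \
  'default                web           Running' | find_leftovers
```

Only the first, test-owned line is printed; an empty result means the cleanup left nothing behind.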
## Cleanup Examples

### Clean up after VM Creation Tests
**virtbench CLI:**

```bash
# Clean up after test
virtbench datasource-clone --start 1 --end 50 --storage-class YOUR-STORAGE-CLASS --cleanup

# Dry run to see what would be deleted
virtbench datasource-clone --start 1 --end 50 --storage-class YOUR-STORAGE-CLASS --dry-run-cleanup

# Clean up even if tests fail
virtbench datasource-clone --start 1 --end 50 --storage-class YOUR-STORAGE-CLASS --cleanup-on-failure
```
**Python Script:**

```bash
cd datasource-clone

# Clean up after test
python3 measure-vm-creation-time.py --start 1 --end 50 --cleanup

# Dry run to see what would be deleted
python3 measure-vm-creation-time.py --start 1 --end 50 --dry-run-cleanup

# Clean up even if tests fail
python3 measure-vm-creation-time.py --start 1 --end 50 --cleanup-on-failure
```
### Clean up after Migration Tests
**virtbench CLI:**

```bash
# Clean up VMIMs only (VMs were pre-existing)
virtbench migration --start 1 --end 10 --source-node worker-1 --cleanup

# Clean up everything (VMs were created by the test)
virtbench migration --start 1 --end 10 --source-node worker-1 --create-vms --cleanup
```
**Python Script:**

```bash
cd migration

# Clean up VMIMs only
python3 measure-vm-migration-time.py --start 1 --end 10 --source-node worker-1 --cleanup

# Clean up everything
python3 measure-vm-migration-time.py --start 1 --end 10 --source-node worker-1 --create-vms --cleanup
```
### Clean up after Failure Recovery Tests
**virtbench CLI:**

```bash
# Clean up FAR resources only
virtbench failure-recovery --start 1 --end 10 --cleanup

# Clean up FAR resources and VMs
virtbench failure-recovery --start 1 --end 10 --cleanup --cleanup-vms
```
**Python Script:**

```bash
cd failure-recovery

# Clean up FAR resources only
python3 measure-recovery-time.py --start 1 --end 10 --cleanup

# Clean up FAR resources and VMs
python3 measure-recovery-time.py --start 1 --end 10 --cleanup --cleanup-vms
```
### Clean up after Chaos Benchmark
**virtbench CLI:**

```bash
# Cleanup only (from a previous run)
virtbench chaos-benchmark --cleanup-only --concurrency 1
```
**Python Script:**

```bash
cd chaos-benchmark

# Cleanup only (from a previous run)
python3 measure-chaos.py --cleanup-only --concurrency 1
```
## Manual Cleanup
If automated cleanup fails or you need to clean up manually:
### Delete Test Namespaces

```bash
# Delete all test namespaces
kubectl delete namespace -l app=kubevirt-perf-test

# Delete a specific range
for i in {1..50}; do
  kubectl delete namespace kubevirt-perf-test-$i --ignore-not-found=true
done
```
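Before running the range loop destructively, it can help to preview exactly which namespace names fall in the range. A small helper (illustrative, not shipped with the test scripts):

```bash
# Print the namespaces a range-based cleanup would target; once the list looks
# right, pipe it to: xargs -r kubectl delete namespace --ignore-not-found=true
test_namespaces() {
  seq "$1" "$2" | sed 's/^/kubevirt-perf-test-/'
}

test_namespaces 1 3
# kubevirt-perf-test-1
# kubevirt-perf-test-2
# kubevirt-perf-test-3
```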
### Delete Specific Resources

```bash
# Delete VMs in a namespace
kubectl delete vm --all -n kubevirt-perf-test-1

# Delete DataVolumes
kubectl delete dv --all -n kubevirt-perf-test-1

# Delete PVCs
kubectl delete pvc --all -n kubevirt-perf-test-1

# Delete VMIMs
kubectl delete vmim --all -n kubevirt-perf-test-1
```
### Delete FAR Resources

```bash
# Delete FAR custom resource
kubectl delete fenceagentsremediation <far-name> -n <namespace>

# Remove FAR annotations from VMs
kubectl annotate vm <vm-name> fence.agents.remediation.medik8s.io/fence-agent- -n <namespace>

# Uncordon nodes
kubectl uncordon <node-name>
```
## Best Practices
- **Use Dry Run First**: Always use `--dry-run-cleanup` to preview what will be deleted
- **Confirm Deletions**: Review the confirmation prompt carefully before proceeding
- **Save Results First**: Ensure results are saved before cleanup if needed
- **Check Dependencies**: Verify no other processes are using the resources
- **Monitor Cleanup**: Watch for errors during the cleanup process
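One way to make "dry run first" the default is a small guard that only emits destructive flags when explicitly confirmed. This is a sketch, not part of the shipped tooling:

```bash
# Default to a preview; only produce destructive flags when --yes is passed.
cleanup_flags() {
  if [ "${1:-}" = "--yes" ]; then
    echo "--cleanup --yes"
  else
    echo "--dry-run-cleanup"
  fi
}

# Without --yes, the assembled command only previews deletions:
echo "virtbench datasource-clone --start 1 --end 50 $(cleanup_flags)"
```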
## Troubleshooting

### Namespace Stuck in Terminating

**Problem:** Namespace remains in "Terminating" state

**Solution:**

```bash
# Check for finalizers
kubectl get namespace kubevirt-perf-test-1 -o yaml | grep finalizers

# Remove finalizers if stuck
kubectl patch namespace kubevirt-perf-test-1 -p '{"metadata":{"finalizers":[]}}' --type=merge
```
### PVC Not Deleting

**Problem:** PVC stuck in "Terminating" state

**Solution:**

```bash
# Check if PVC is in use
kubectl describe pvc <pvc-name> -n <namespace>

# Delete associated pods/VMs first
kubectl delete vm --all -n <namespace>

# Force delete if needed
kubectl patch pvc <pvc-name> -p '{"metadata":{"finalizers":[]}}' --type=merge -n <namespace>
```
### Cleanup Fails with Permission Errors

**Problem:** Insufficient permissions to delete resources

**Solution:**

- Ensure your user has cluster-admin or equivalent permissions
- Check RBAC policies: `kubectl auth can-i delete namespace`
- Contact your cluster administrator for the required permissions
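The RBAC checks can be scripted. The resource list below is an assumption based on the cleanup sections above; extend it for your setup (for example, add `fenceagentsremediations` if you run failure-recovery tests):

```bash
# Print one permission check per resource type the cleanup touches.
permission_checks() {
  for r in namespaces virtualmachines datavolumes persistentvolumeclaims; do
    echo "kubectl auth can-i delete $r"
  done
}

permission_checks
```

Run each printed command against your cluster; any `no` answer points at the missing permission.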
## See Also
- Configuration Options - All cleanup-related options
- Output and Results - Saving results before cleanup
- Best Practices - Cleanup best practices