<fix>[compute]: hard delete host file if created state vm destroyed #3889
zstack-robot-2 wants to merge 1 commit into zsv_5.0.0 from
Conversation
Resolves: ZSV-11845 Related: ZSV-11310 Change-Id: I776f676a797473756a6f6166756c6a786f717869
Overview: adjusts the VM host-file cascade deletion logic so that host files of VMs destroyed while still in the Created state are hard-deleted.
Changes
Estimated code review effort: 🎯 3 (medium) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (warning)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@compute/src/main/java/org/zstack/compute/vm/devices/VmHostBackupFileCascadeExtension.java`:
- Around line 75-80: The current logic in VmHostBackupFileCascadeExtension uses
hasCreatedVmInDeletionContext(action) to skip deferring backup-file deletion for
the entire action, which causes backup rows for non-Created VMs to be
prematurely removed; change handleDeletion(...) so that you only bypass defer
for backup rows belonging to Created VMs: use
hasCreatedVmInDeletionContext(action) and
formatCreatedVmUuidsFromContext(action) (or iterate VmDeletionStructs) to
collect Created VM UUIDs, and when generating deletion messages for
VmHostBackupFileVO resources only mark those whose resourceUuid matches a
Created VM UUID as non-deferred while preserving the original defer semantics
for all other resourceUuid entries.
In
`@compute/src/main/java/org/zstack/compute/vm/devices/VmHostFileCascadeExtension.java`:
- Around line 75-80: The current code aborts deferring for the whole action when
any Created VM is present; change this to per-VM handling: remove the early
"return false" triggered by hasCreatedVmInDeletionContext(action), instead
iterate the VmDeletionStruct list from voFromAction(action) and separate Created
VMs from others, log created UUIDs using
formatCreatedVmUuidsFromContext(action), skip/force immediate deletion of host
files for Created VMs (VmHostFileVO entries), and only call handleDeletion(...)
(or pass the filtered list) for VMs whose deletion policy still allows
Delay/Never. Adjust logic in VmHostFileCascadeExtension to operate on each
VmDeletionStruct (or a filtered collection) rather than on the whole action.
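The per-VM fix both prompts describe can be sketched as follows. This is a hypothetical, simplified illustration: `VmDeletionStruct` here is a minimal stand-in for ZStack's real struct, and the method names `createdVmUuids` / `shouldDefer` are invented for the sketch; only the idea (bypass defer per resourceUuid, not per action) comes from the review.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

class PerVmDeferSketch {
    /** Minimal stand-in for ZStack's VmDeletionStruct (assumed shape). */
    static class VmDeletionStruct {
        final String vmUuid;
        final String state; // e.g. "Created", "Running"
        VmDeletionStruct(String vmUuid, String state) {
            this.vmUuid = vmUuid;
            this.state = state;
        }
    }

    /** Collect UUIDs of VMs that are still in the Created state. */
    static Set<String> createdVmUuids(List<VmDeletionStruct> structs) {
        return structs.stream()
                .filter(s -> "Created".equals(s.state))
                .map(s -> s.vmUuid)
                .collect(Collectors.toSet());
    }

    /**
     * Per-resource decision: bypass defer only for rows whose resourceUuid
     * belongs to a Created VM; all other rows keep the original defer
     * semantics (Delay/Never deletion policies stay intact).
     */
    static boolean shouldDefer(String resourceUuid,
                               Set<String> createdUuids,
                               boolean originalDeferDecision) {
        if (createdUuids.contains(resourceUuid)) {
            return false; // Created VM: hard-delete its rows in this cascade
        }
        return originalDeferDecision;
    }
}
```

With this shape, a batch mixing one Created VM and one Running VM no longer flips the defer switch for the whole action; only the Created VM's rows are deleted immediately.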
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 46ec7606-5c1a-4aa6-a408-b171581e3517
📒 Files selected for processing (2)
compute/src/main/java/org/zstack/compute/vm/devices/VmHostBackupFileCascadeExtension.java
compute/src/main/java/org/zstack/compute/vm/devices/VmHostFileCascadeExtension.java
```java
if (hasCreatedVmInDeletionContext(action)) {
    logger.info(String.format(
            "VmHostBackupFileCascadeExtension: skip deferring backup-file deletion for Created VM(s): %s; "
                    + "destroy uses DBOnly hard-delete without expunge, backup rows must be removed in this cascade",
            formatCreatedVmUuidsFromContext(action)));
    return false;
```
Don't let one Created VM drag the whole batch of backup files past the defer.
This is again an action-level switch: as soon as any Created VM appears in this batch of VmDeletionStructs, the entire action proceeds into handleDeletion(...), which then sends deletion messages for the VmHostBackupFileVO rows of every resourceUuid. If the same batch also contains non-Created VMs whose deletion policy should still be Delay/Never, their backup files get cleaned up prematurely as well. Recommend lifting the defer only for the backup rows belonging to Created VMs, and keeping the original policy for all other resources.
```java
if (hasCreatedVmInDeletionContext(action)) {
    logger.info(String.format(
            "VmHostFileCascadeExtension: skip deferring host-file deletion for Created VM(s): %s; "
                    + "destroy uses DBOnly hard-delete without expunge, host files must be removed in this cascade",
            formatCreatedVmUuidsFromContext(action)));
    return false;
```
Don't disable defer for the entire batch just because of one Created VM.
Defer is lifted here at the action level: once hasCreatedVmInDeletionContext(action) matches a single Created VM, the subsequent voFromAction(action) / handleDeletion(...) processes every VmHostFileVO in this List<VmDeletionStruct>. If the same action also contains non-Created VMs whose deletion policy should still be Delay/Never, their host files get deleted prematurely too. The defer needs to be lifted per VM, not per action.
Comment from yaohua.wu: Review: MR !9769 — ZSV-11845
Business context: one root cause of the ZSV-11845 global rekey failure is that a VM can remain in the Created state after scenarios such as a failed clone. This MR is the 6th incremental patch for that issue (the preceding !9751 / !9755 / !9765 / !9766 / !9767 are already merged).
Fix correctness assessment
Issues found: 🟡 Warning 1: duplicated implementation across the two cascade extensions
Location:
Impact: per the "follow-up: complete the cascades" plan in the ZSV-11310 comments, more similar cascade extensions (TpmCascade, VmHostCascade(NvRam), etc.) are still to be added. The current pattern will spread this duplication to more files. Suggestion:
If converging ZSV-11845 is urgent, this MR can be merged first, but please explicitly list the dedup item in the title of the follow-up "complete the cascades" MR. 🟢 Suggestion 1:
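Warning 1's dedup suggestion could be sketched as a shared abstract base that both extensions inherit. This is a hypothetical illustration: `AbstractVmHostFileCascade`, `describe()`, and `bypassDeferFor(...)` are invented names; only the two concrete extension classes come from the MR.

```java
import java.util.Set;

// Hoist the shared Created-VM bypass rule out of the two extensions.
abstract class AbstractVmHostFileCascade {
    /** Subclasses name the table they clean up, for log messages. */
    protected abstract String resourceName();

    /** Convenience accessor using the subclass-provided name. */
    public final String describe() {
        return "cascade for " + resourceName();
    }

    /** Shared rule: rows belonging to Created VMs are never deferred. */
    public final boolean bypassDeferFor(String resourceUuid, Set<String> createdVmUuids) {
        return createdVmUuids.contains(resourceUuid);
    }
}

// Stand-ins for VmHostFileCascadeExtension / VmHostBackupFileCascadeExtension.
class VmHostFileCascadeSketch extends AbstractVmHostFileCascade {
    @Override protected String resourceName() { return "VmHostFileVO"; }
}

class VmHostBackupFileCascadeSketch extends AbstractVmHostFileCascade {
    @Override protected String resourceName() { return "VmHostBackupFileVO"; }
}
```

Future cascades (TpmCascade, NvRam, etc.) would then only supply their table name and any per-table specifics, instead of copying the defer-bypass logic again.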
| Item | Count / status |
|---|---|
| Files changed | 2 (both logic classes) |
| Lines changed | ~80 (including comments and logging) |
| Cross-persona consistent hits | Warning 1 (duplication) independently visible; Suggestion 1 needs business context to judge |
| Upstream conflicts | None |
| Pre-existing risks | The mixed semantics of VmHostBackupFileVO.resourceUuid (not VM-only) is a historical design issue, not introduced by this MR |
Verdict: APPROVED
The fix direction is correct and consistent with the already-merged parallel patches; there are no P0/P1 blocking issues. Warning 1 (code duplication) can be handled in the follow-up "complete the cascades" MR; Suggestions 1-3 are defensive / testing / documentation improvements and optional to adopt.
Given that ZSV-11845 is a P0 blocker and this MR is the last patch converging that issue (remaining scenarios), merging soon and starting test verification is recommended; in the later cascade refactoring, please treat the Created-state paths touched by this MR as must-test items.
🤖 Robot Reviewer
sync from gitlab !9769