feat (query/v2): Query Builder #75
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
Walkthrough

This PR adds an experimental fluent Cypher query builder in `query/v2`.

Changes

Query v2 Builder with Supporting Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 4
🧹 Nitpick comments (1)
query/v2/backend_test.go (1)
78-86: 💤 Low value: Document the parameter-remapping boundary.

`NamedParameter("props", ...)` produces `preparedQuery.Parameters["props"]`, but after `neo4j.NewQueryBuilder.Prepare()` the same value appears as `queryBuilder.Parameters["p0"]`. The Neo4j backend re-derives its own parameter map from the AST, dropping the user-supplied symbol names. A short inline comment here would prevent future readers from mistakenly passing `preparedQuery.Parameters` to the Neo4j driver instead of `queryBuilder.Parameters`.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@query/v2/backend_test.go` around lines 78 - 86, Add a short inline comment in the test case near the "create node" example clarifying that NamedParameter("props", ...) yields preparedQuery.Parameters["props"] but neo4j.NewQueryBuilder.Prepare() (or the Neo4j backend) remaps parameter names (e.g., to "p0") and the actual map passed to the driver is queryBuilder.Parameters; mention NamedParameter, Prepare, preparedQuery.Parameters and queryBuilder.Parameters by name so future readers know not to pass preparedQuery.Parameters directly to the Neo4j driver.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@cypher/models/pgsql/test/query_test.go`:
- Around line 31-39: The loop currently continues after queryBuilder.Build()
fails and then dereferences builtQuery on the next lines; change the failure
handling for Build() so the test stops or skips the iteration—either replace
t.Errorf("could not build query: %v", err) with t.Fatalf(...) (or
require.NoError(t, err)) to abort the test, or add a continue after the t.Errorf
to skip using builtQuery; apply the same pattern wherever builtQuery is
dereferenced (e.g., before calling translate.Translate) to ensure no dereference
happens after a failed Build().
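The guarded-loop shape described above can be sketched in isolation; `build` and `buildAll` here are hypothetical stand-ins for `queryBuilder.Build()` and the test loop, not the actual test code:

```go
package main

import (
	"errors"
	"fmt"
)

// build is a hypothetical stand-in for queryBuilder.Build(); the real
// builder returns an AST, not a string.
func build(i int) (string, error) {
	if i == 1 {
		return "", errors.New("bad input")
	}
	return fmt.Sprintf("query-%d", i), nil
}

// buildAll shows the guard pattern: on a failed Build(), report the
// failure and continue, so builtQuery is never dereferenced afterwards.
// In a test, the reporting line would be t.Fatalf / require.NoError,
// or a `continue` immediately after t.Errorf.
func buildAll(n int) (built []string, failures int) {
	for i := 0; i < n; i++ {
		builtQuery, err := build(i)
		if err != nil {
			failures++
			continue
		}
		built = append(built, builtQuery)
	}
	return built, failures
}

func main() {
	built, failures := buildAll(3)
	fmt.Println(built, failures)
}
```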
In `@query/v2/query.go`:
- Around line 687-695: The Skip and Limit setters currently allow negatives and
callers elsewhere treat zero as “unset”; change builder.Skip and builder.Limit
to validate and preserve zero by (1) rejecting negative values at the setter
(e.g., if skip < 0 or limit < 0, panic or return an explicit error/validation
failure) and (2) always assign the pointer (s.skip = &skip and s.limit = &limit)
even when the value is 0 so presence checks can use != nil; apply the same fix
to the other identical setters around the 1044-1081 region so all zero values
are preserved and negatives are rejected.
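A minimal sketch of the setter contract described above, assuming a simplified `builder` type (the real builder's fields and error tracking differ):

```go
package main

import "fmt"

// builder is a simplified stand-in for the v2 query builder's state.
type builder struct {
	skip *int64
}

// Skip rejects negatives and always assigns the pointer, so a zero
// value is preserved and presence is checked with b.skip != nil
// rather than treating zero as "unset".
func (b *builder) Skip(skip int64) error {
	if skip < 0 {
		return fmt.Errorf("skip must be non-negative, got %d", skip)
	}
	b.skip = &skip
	return nil
}

func main() {
	var b builder
	fmt.Println(b.Skip(-1))               // rejected, pointer stays nil
	fmt.Println(b.Skip(0), b.skip != nil) // zero is preserved as "set"
}
```

The same pattern applies symmetrically to the `Limit` setter.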
- Around line 818-852: Delete() currently allows property lookups via
QualifiedExpression (e.g., Delete(Node().Property("name"))); detect and reject
these by adding a check in the loop that handles QualifiedExpression (the branch
for QualifiedExpression in the deleteItems iteration): after obtaining qualifier
:= typedNextUpdate.qualifier() and before accepting it, identify if qualifier
represents a property access (the form used by property lookups) and if so call
s.trackError with a clear message (e.g., "cannot delete property; use Remove()")
and continue; leave the existing validateExpressionValue,
isDetachDeleteQualifier, s.detachDelete, s.deleteItems and pendingDeleteItems
logic unchanged for non-property qualifiers so only property-access targets are
rejected.
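The rejection check can be illustrated with hypothetical qualifier types; `variable`, `propertyLookup`, and `checkDeleteTarget` are illustrative names, not the builder's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical, simplified qualifier types standing in for the
// builder's expression model.
type qualifier interface{ isQualifier() }

type variable struct{ symbol string }
type propertyLookup struct {
	atom qualifier
	name string
}

func (*variable) isQualifier()       {}
func (*propertyLookup) isQualifier() {}

// checkDeleteTarget mirrors the check described above: a property
// access is rejected so callers are steered toward Remove(), while
// plain variable qualifiers pass through unchanged.
func checkDeleteTarget(q qualifier) error {
	if _, isProperty := q.(*propertyLookup); isProperty {
		return errors.New("cannot delete property; use Remove()")
	}
	return nil
}

func main() {
	n := &variable{symbol: "n"}
	fmt.Println(checkDeleteTarget(n))
	fmt.Println(checkDeleteTarget(&propertyLookup{atom: n, name: "name"}))
}
```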
In `@query/v2/util.go`:
- Around line 614-751: CollectFromValue currently doesn't handle raw
*cypher.ProjectionItem values so identifiers inside projection items (e.g.
Return(As(Node(), "alias"))) aren't recorded; add a case for
*cypher.ProjectionItem in CollectFromValue that extracts the inner expression
(the projected expression) and calls s.CollectFromExpression (or
s.CollectFromValue if you prefer) on it so variable identifiers inside
projection items get collected—update the switch in CollectFromValue to mirror
how *cypher.Return is expanded via projectionItemsFromReturn and ensure
ProjectionItem is handled the same way to prevent missing MATCH generation in
Build.
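A simplified sketch of the missing switch case, using hypothetical `variable` and `projectionItem` types in place of the cypher package's AST nodes:

```go
package main

import "fmt"

// Hypothetical, simplified AST types standing in for the cypher
// package's expression model.
type expression interface{}

type variable struct{ symbol string }
type projectionItem struct {
	expr  expression
	alias string
}

// collectFromValue mirrors the fix: a raw projection item is
// unwrapped and its inner expression recursed into, the same way
// *cypher.Return is expanded into its projection items.
func collectFromValue(value expression, seen map[string]bool) {
	switch typed := value.(type) {
	case *variable:
		seen[typed.symbol] = true
	case *projectionItem:
		collectFromValue(typed.expr, seen)
	}
}

func main() {
	seen := map[string]bool{}
	// Analogous to Return(As(Node(), "alias")): the identifier lives
	// inside the projection item, not at the top level.
	collectFromValue(&projectionItem{expr: &variable{symbol: "n"}, alias: "alias"}, seen)
	fmt.Println(seen["n"])
}
```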
---
Nitpick comments:
In `@query/v2/backend_test.go`:
- Around line 78-86: Add a short inline comment in the test case near the
"create node" example clarifying that NamedParameter("props", ...) yields
preparedQuery.Parameters["props"] but neo4j.NewQueryBuilder.Prepare() (or the
Neo4j backend) remaps parameter names (e.g., to "p0") and the actual map passed
to the driver is queryBuilder.Parameters; mention NamedParameter, Prepare,
preparedQuery.Parameters and queryBuilder.Parameters by name so future readers
know not to pass preparedQuery.Parameters directly to the Neo4j driver.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0bf23e1a-4e41-4b27-97dd-3600a9ebc32c
📒 Files selected for processing (10)
- cypher/models/cypher/format/format.go
- cypher/models/pgsql/test/query_test.go
- cypher/models/pgsql/translate/translator.go
- query/neo4j/neo4j.go
- query/v2/backend_test.go
- query/v2/compat.go
- query/v2/doc.go
- query/v2/query.go
- query/v2/query_test.go
- query/v2/util.go
🧹 Nitpick comments (1)
query/v2/backend_test.go (1)
255-258: ⚡ Quick win: Avoid hard-coding kind id `1` in shortest-path harness assertion.

This check is brittle against mapper-id allocation changes; derive the expected kind id from `mapper` and assert with that value instead.

Suggested adjustment

```diff
 serializedHarnessQueryHasKindConstraint := false
+expectedKindID, idOK := mapper.MapKind(edgeKind)
+require.True(t, idOK, "expected edge kind to be mapped")
+expectedKindConstraint := fmt.Sprintf("array [%d]::int2[]", expectedKindID)
+
 for _, parameterValue := range translation.Parameters {
-	if serializedQuery, typeOK := parameterValue.(string); typeOK && strings.Contains(serializedQuery, "array [1]::int2[]") {
+	if serializedQuery, typeOK := parameterValue.(string); typeOK && strings.Contains(serializedQuery, expectedKindConstraint) {
 		serializedHarnessQueryHasKindConstraint = true
 		break
 	}
 }
```

```diff
 import (
 	"context"
+	"fmt"
 	"strings"
 	"testing"
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@query/v2/backend_test.go` around lines 255 - 258, The test currently looks for the literal substring "array [1]::int2[]" in translation.Parameters which hard-codes kind id 1; instead compute the expected kind id from the test's mapper (e.g., use mapper.ID or mapper.KindID) and build the expected substring with that value, then check if parameterValue.(string) contains that generated substring and set serializedHarnessQueryHasKindConstraint accordingly (update the loop that sets serializedHarnessQueryHasKindConstraint and the check against translation.Parameters to use the derived expectedKindID).
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Nitpick comments:
In `@query/v2/backend_test.go`:
- Around line 255-258: The test currently looks for the literal substring "array
[1]::int2[]" in translation.Parameters which hard-codes kind id 1; instead
compute the expected kind id from the test's mapper (e.g., use mapper.ID or
mapper.KindID) and build the expected substring with that value, then check if
parameterValue.(string) contains that generated substring and set
serializedHarnessQueryHasKindConstraint accordingly (update the loop that sets
serializedHarnessQueryHasKindConstraint and the check against
translation.Parameters to use the derived expectedKindID).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 2b5c1177-f99c-4e4f-a568-e775556bf0a4
📒 Files selected for processing (10)
- cypher/models/cypher/format/format.go
- cypher/models/pgsql/test/query_test.go
- query/neo4j/neo4j.go
- query/v2/backend_test.go
- query/v2/compat.go
- query/v2/doc.go
- query/v2/legacy_parity_test.go
- query/v2/query.go
- query/v2/query_test.go
- query/v2/util.go
✅ Files skipped from review due to trivial changes (1)
- query/v2/doc.go
🚧 Files skipped from review as they are similar to previous changes (6)
- query/neo4j/neo4j.go
- cypher/models/cypher/format/format.go
- cypher/models/pgsql/test/query_test.go
- query/v2/util.go
- query/v2/query.go
- query/v2/compat.go
@CodeRabbit - blast me. Give this a full review.

✅ Actions performed: Full review triggered.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@query/v2/query.go`:
- Around line 1105-1173: buildCreates currently accumulates all top-level create
inputs into a single PatternPart (variable pattern), causing disjoint creates
(e.g., Create(NodeA, NodeB) or separate relationship creates) to be rendered as
one chained pattern; fix by splitting into separate PatternPart entries on
boundaries between disjoint creates: when you start processing a new top-level
create that should not be appended to the current pattern (e.g., current pattern
is empty or last element does not belong to this nextCreate), allocate a new
cypher.PatternPart, append it to createClause.Pattern, and use that new pattern
for subsequent AddPatternElements; update logic around the switch in
buildCreates (while using patternEndsWithNodePattern, nextCreateValueIsNode,
validateNodePattern, validateRelationshipPattern) so relationship and node
handling create new PatternPart instances when needed instead of always writing
into the single initial pattern.
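The splitting logic can be sketched abstractly; `patternElement`, `patternPart`, and the `chainsOnto` predicate are illustrative stand-ins for the cypher AST types and the adjacency checks named above:

```go
package main

import "fmt"

// Hypothetical pattern types; the real builder works with
// cypher.PatternPart and cypher.PatternElement values.
type patternElement string
type patternPart []patternElement

// splitCreates allocates a fresh pattern part at each boundary
// between disjoint creates instead of chaining every element into
// the single initial pattern.
func splitCreates(creates []patternElement, chainsOnto func(prev, next patternElement) bool) []patternPart {
	var parts []patternPart
	for _, next := range creates {
		if len(parts) > 0 {
			current := parts[len(parts)-1]
			if chainsOnto(current[len(current)-1], next) {
				parts[len(parts)-1] = append(current, next)
				continue
			}
		}
		// Disjoint create (or first element): start a new pattern part.
		parts = append(parts, patternPart{next})
	}
	return parts
}

func main() {
	disjoint := func(prev, next patternElement) bool { return false }
	parts := splitCreates([]patternElement{"(a)", "(b)"}, disjoint)
	fmt.Println(len(parts)) // two separate CREATE patterns, not one chain
}
```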
- Around line 388-392: Order currently treats any non-SortDescending value as
ascending, so update the Order function to explicitly validate SortDirection:
accept only SortAscending and SortDescending, and panic (or otherwise fail fast)
for any other value to surface build errors; then set cypher.SortItem.Ascending
based on direction==SortAscending. Reference the Order function and the
SortDirection/SortAscending/SortDescending symbols when making this change.
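A sketch of the validation, assuming the `SortDirection` constants named above; the real `Order` would panic to fail fast where this sketch returns an error:

```go
package main

import "fmt"

type SortDirection int

const (
	SortAscending SortDirection = iota
	SortDescending
)

// sortAscending validates the direction explicitly instead of
// treating any non-SortDescending value as ascending; the result
// would feed cypher.SortItem.Ascending.
func sortAscending(direction SortDirection) (bool, error) {
	switch direction {
	case SortAscending:
		return true, nil
	case SortDescending:
		return false, nil
	default:
		return false, fmt.Errorf("invalid sort direction: %d", direction)
	}
}

func main() {
	asc, _ := sortAscending(SortAscending)
	fmt.Println(asc)
	if _, err := sortAscending(SortDirection(7)); err != nil {
		fmt.Println(err)
	}
}
```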
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f84057f7-602d-4725-b320-dfd4dcb10034
📒 Files selected for processing (6)
- query/v2/backend_test.go
- query/v2/legacy_parity_test.go
- query/v2/query.go
- query/v2/query_test.go
- query/v2/util.go
- query/v2/util_internal_test.go
🚧 Files skipped from review as they are similar to previous changes (3)
- query/v2/backend_test.go
- query/v2/legacy_parity_test.go
- query/v2/util.go
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@cmd/benchdiff/benchfmt.go`:
- Around line 232-243: writeIntegrationBenchfmt currently ignores all errors
from its writes; update it to check and propagate the error returned by each
fmt.Fprintf / fmt.Fprintln call (inside the header writes and inside the loop)
and return the first non-nil error encountered so the caller can detect write
failures. Locate the function writeIntegrationBenchfmt and the uses of
fmt.Fprintf/fmt.Fprintln (including the header writes and the per-row line that
uses integrationBenchmarkName, procs and row.Median.Nanoseconds()) and thread
the error handling so any write error is returned instead of always returning
nil.
In `@cmd/benchdiff/compare.go`:
- Around line 248-254: The code currently falls back to "go run
golang.org/x/perf/cmd/benchstat@latest" which pulls an unpinned binary at
runtime; change the logic in compare.go around the exec.LookPath/benchstatPath
check (the block using exec.LookPath, benchstatPath, runCommand and
cfg.Benchstat) to stop using the `@latest` go run fallback and instead return a
clear error when benchstat is not found, instructing the caller/CI to
pre-install a specific benchstat version (or set cfg.Benchstat to a pinned
binary path); ensure the error message mentions exec.LookPath and suggests
installing golang.org/x/perf at a pinned version or providing cfg.Benchstat so
CI remains reproducible.
In `@cmd/benchdiff/run.go`:
- Around line 109-117: The code flips removeBase and removeTarget to false
before attempting removals, which prevents deferred cleanup if a removal fails;
change the logic in the block guarded by resolved.KeepWorktrees so you call
removeWorktree(ctx, resolved.Root, baseWorktree) first and only set removeBase =
false after that call succeeds, then call removeWorktree(ctx, resolved.Root,
targetWorktree) and set removeTarget = false only after it succeeds (or
alternatively perform removals and update the flags inside the same success
branch), ensuring that if any removeWorktree call returns an error the
corresponding flag remains true so the deferred cleanup will run.
In `@query/v2/backend_test.go`:
- Line 220: The test's expectedSQL string in the
TestBackendParityPGTranslate/node_read case is stale (it asserts
"(s0.n0).kind_ids"); update the expectedSQL value to match the current
translator output (replace the old fragment with the current SQL shape produced
by the translator) or refactor the assertion to check stable SQL fragments
rather than the full string; locate and modify the expectedSQL literal used in
the node read PG parity test to reflect the translator's actual output.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: af1e8e0e-da49-4c5e-89ed-df4171ef5b2e
📒 Files selected for processing (36)
- .gitignore
- Makefile
- README.md
- cmd/benchdiff/README.md
- cmd/benchdiff/benchfmt.go
- cmd/benchdiff/benchfmt_test.go
- cmd/benchdiff/command.go
- cmd/benchdiff/compare.go
- cmd/benchdiff/main.go
- cmd/benchdiff/markdown.go
- cmd/benchdiff/report.go
- cmd/benchdiff/report_test.go
- cmd/benchdiff/run.go
- cmd/benchmark/README.md
- cmd/benchmark/main.go
- cmd/benchmark/report.go
- cmd/benchmark/report_test.go
- cmd/benchmark/runner.go
- cmd/benchmark/scenarios.go
- cmd/benchmark/scenarios_test.go
- cypher/models/cypher/format/format.go
- cypher/models/cypher/format/format_test.go
- cypher/models/pgsql/test/query_test.go
- cypher/test/cases/mutation_tests.json
- cypher/test/cases/positive_tests.json
- integration/testdata/traversal_shapes.json
- query/neo4j/neo4j.go
- query/neo4j/neo4j_test.go
- query/v2/backend_test.go
- query/v2/compat.go
- query/v2/doc.go
- query/v2/legacy_parity_test.go
- query/v2/query.go
- query/v2/query_test.go
- query/v2/util.go
- query/v2/util_internal_test.go
✅ Files skipped from review due to trivial changes (4)
- .gitignore
- README.md
- cmd/benchdiff/README.md
- query/v2/doc.go
🚧 Files skipped from review as they are similar to previous changes (6)
- query/v2/util_internal_test.go
- query/v2/legacy_parity_test.go
- query/neo4j/neo4j.go
- cypher/models/pgsql/test/query_test.go
- query/v2/util.go
- query/v2/compat.go
```go
func writeIntegrationBenchfmt(w io.Writer, driver string, rows []markdownBenchmarkRow) error {
	fmt.Fprintf(w, "goos: %s\n", runtime.GOOS)
	fmt.Fprintf(w, "goarch: %s\n", runtime.GOARCH)
	fmt.Fprintln(w, "pkg: github.com/specterops/dawgs/cmd/benchmark")

	procs := runtime.GOMAXPROCS(0)
	for _, row := range rows {
		fmt.Fprintf(w, "%s-%d\t1\t%d ns/op\n", integrationBenchmarkName(driver, row.Dataset, row.Query), procs, row.Median.Nanoseconds())
	}

	return nil
}
```
Propagate writer errors in report emission.
writeIntegrationBenchfmt ignores all write errors and always returns nil, which can silently drop/truncate output.
Suggested fix

```diff
 func writeIntegrationBenchfmt(w io.Writer, driver string, rows []markdownBenchmarkRow) error {
-	fmt.Fprintf(w, "goos: %s\n", runtime.GOOS)
-	fmt.Fprintf(w, "goarch: %s\n", runtime.GOARCH)
-	fmt.Fprintln(w, "pkg: github.com/specterops/dawgs/cmd/benchmark")
+	if _, err := fmt.Fprintf(w, "goos: %s\n", runtime.GOOS); err != nil {
+		return err
+	}
+	if _, err := fmt.Fprintf(w, "goarch: %s\n", runtime.GOARCH); err != nil {
+		return err
+	}
+	if _, err := fmt.Fprintln(w, "pkg: github.com/specterops/dawgs/cmd/benchmark"); err != nil {
+		return err
+	}

 	procs := runtime.GOMAXPROCS(0)
 	for _, row := range rows {
-		fmt.Fprintf(w, "%s-%d\t1\t%d ns/op\n", integrationBenchmarkName(driver, row.Dataset, row.Query), procs, row.Median.Nanoseconds())
+		if _, err := fmt.Fprintf(w, "%s-%d\t1\t%d ns/op\n", integrationBenchmarkName(driver, row.Dataset, row.Query), procs, row.Median.Nanoseconds()); err != nil {
+			return err
+		}
 	}

 	return nil
 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@cmd/benchdiff/benchfmt.go` around lines 232 - 243, writeIntegrationBenchfmt
currently ignores all errors from its writes; update it to check and propagate
the error returned by each fmt.Fprintf / fmt.Fprintln call (inside the header
writes and inside the loop) and return the first non-nil error encountered so
the caller can detect write failures. Locate the function
writeIntegrationBenchfmt and the uses of fmt.Fprintf/fmt.Fprintln (including the
header writes and the per-row line that uses integrationBenchmarkName, procs and
row.Median.Nanoseconds()) and thread the error handling so any write error is
returned instead of always returning nil.
```go
if cfg.Benchstat == "" || cfg.Benchstat == "auto" {
	if benchstatPath, err := exec.LookPath("benchstat"); err == nil {
		return runCommand(ctx, cfg.Root, nil, benchstatPath, baseFile, targetFile)
	}

	return runCommand(ctx, cfg.Root, nil, "go", "run", "golang.org/x/perf/cmd/benchstat@latest", baseFile, targetFile)
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Verify current behavior and whether a pinned module/version exists in repo config.
rg -n -C2 'benchstat@latest|golang.org/x/perf/cmd/benchstat@' cmd/benchdiff
fd -a '^go\.mod$'
rg -n -C2 'x/perf|benchstat' go.mod
```

Repository: SpecterOps/DAWGS
Length of output: 753

🏁 Script executed:

```shell
cat go.mod
```

Repository: SpecterOps/DAWGS
Length of output: 13006

🏁 Script executed:

```shell
rg -i "benchstat" --type-list | head -20
```

Repository: SpecterOps/DAWGS
Length of output: 450

🏁 Script executed:

```shell
fd -type f \( -name "*.go" -o -name "Makefile" -o -name "*.sh" -o -name "*.yml" -o -name "*.yaml" \) | xargs rg -l "benchstat" 2>/dev/null | head -20
```

Repository: SpecterOps/DAWGS
Length of output: 383

🏁 Script executed:

```shell
sed -n '40,55p' cmd/benchdiff/README.md
```

Repository: SpecterOps/DAWGS
Length of output: 897

🏁 Script executed:

```shell
grep -r "benchstat" cmd/benchdiff/*.go | head -20
```

Repository: SpecterOps/DAWGS
Length of output: 1720
Replace the @latest fallback with an explicit error or pinned version for reproducibility.
Line 253 uses `go run golang.org/x/perf/cmd/benchstat@latest`, which fetches a dynamically updated binary at runtime (`golang.org/x/perf` is not pinned in `go.mod`). This breaks reproducibility in CI pipelines and benchmark comparisons. Instead, require `benchstat` to be pre-installed or pinned, and fail with a clear diagnostic message when unavailable.
💡 Proposed fix

```diff
 if cfg.Benchstat == "" || cfg.Benchstat == "auto" {
 	if benchstatPath, err := exec.LookPath("benchstat"); err == nil {
 		return runCommand(ctx, cfg.Root, nil, benchstatPath, baseFile, targetFile)
 	}
-
-	return runCommand(ctx, cfg.Root, nil, "go", "run", "golang.org/x/perf/cmd/benchstat@latest", baseFile, targetFile)
+	return nil, fmt.Errorf("benchstat not found in PATH; install benchstat or set -benchstat explicitly")
 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@cmd/benchdiff/compare.go` around lines 248 - 254, The code currently falls
back to "go run golang.org/x/perf/cmd/benchstat@latest" which pulls an unpinned
binary at runtime; change the logic in compare.go around the
exec.LookPath/benchstatPath check (the block using exec.LookPath, benchstatPath,
runCommand and cfg.Benchstat) to stop using the `@latest` go run fallback and
instead return a clear error when benchstat is not found, instructing the
caller/CI to pre-install a specific benchstat version (or set cfg.Benchstat to a
pinned binary path); ensure the error message mentions exec.LookPath and
suggests installing golang.org/x/perf at a pinned version or providing
cfg.Benchstat so CI remains reproducible.
```go
removeBase = false
removeTarget = false
if !resolved.KeepWorktrees {
	if err := removeWorktree(ctx, resolved.Root, baseWorktree); err != nil {
		return err
	}
	if err := removeWorktree(ctx, resolved.Root, targetWorktree); err != nil {
		return err
	}
```
Don’t disable deferred cleanup before removals succeed.
Lines 109-110 flip both flags before cleanup runs. If Line 112 fails, the function returns and deferred cleanup won’t remove targetWorktree.
💡 Proposed fix

```diff
-removeBase = false
-removeTarget = false
 if !resolved.KeepWorktrees {
 	if err := removeWorktree(ctx, resolved.Root, baseWorktree); err != nil {
 		return err
 	}
+	removeBase = false
 	if err := removeWorktree(ctx, resolved.Root, targetWorktree); err != nil {
 		return err
 	}
+	removeTarget = false
 }
```

🤖 Prompt for AI Agents
}🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@cmd/benchdiff/run.go` around lines 109 - 117, The code flips removeBase and
removeTarget to false before attempting removals, which prevents deferred
cleanup if a removal fails; change the logic in the block guarded by
resolved.KeepWorktrees so you call removeWorktree(ctx, resolved.Root,
baseWorktree) first and only set removeBase = false after that call succeeds,
then call removeWorktree(ctx, resolved.Root, targetWorktree) and set
removeTarget = false only after it succeeds (or alternatively perform removals
and update the flags inside the same success branch), ensuring that if any
removeWorktree call returns an error the corresponding flag remains true so the
deferred cleanup will run.
```go
	v2.Node().ID(),
	v2.Node().Kinds(),
),
expectedSQL: "with s0 as (select (n0.id, n0.kind_ids, n0.properties)::nodecomposite as n0 from node n0 where (n0.kind_ids operator (pg_catalog.&&) array [1]::int2[] and cypher_contains((n0.properties ->> 'name'), (@pi0::text)::text)::bool)) select (s0.n0).id, (s0.n0).kind_ids from s0;",
```
Update stale SQL expectation for the node read PG parity case.
Line 220 is currently asserting an outdated SQL shape (`(s0.n0).kind_ids`), which is why `TestBackendParityPGTranslate/node_read` is failing in CI. Align this expected value with current translator output (or assert stable SQL fragments for this case).
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@query/v2/backend_test.go` at line 220, The test's expectedSQL string in the
TestBackendParityPGTranslate/node_read case is stale (it asserts
"(s0.n0).kind_ids"); update the expectedSQL value to match the current
translator output (replace the old fragment with the current SQL shape produced
by the translator) or refactor the assertion to check stable SQL fragments
rather than the full string; locate and modify the expectedSQL literal used in
the node read PG parity test to reflect the translator's actual output.
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Tests