add first of the component tests for the new storage #792
Conversation
Signed-off-by: entlein <einentlein@gmail.com>
📝 Walkthrough: A comprehensive component test is added to verify ApplicationProfile changes.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed (1 warning, 1 inconclusive)
Actionable comments posted: 3
🧹 Nitpick comments (2)
tests/component_test.go (1)
1692-1701: Parameterize hasAlert to remove the duplicated R0002 check for the curl container.
hasAlert hard-codes container_name == "nginx", so the curl wildcard subtest re-implements the same loop for container_name == "curl". Threading the container name through avoids the duplication.

♻️ Proposed refactor

```diff
-	hasAlert := func(alerts []testutils.Alert) bool {
+	hasAlert := func(alerts []testutils.Alert, containerName string) bool {
 		for _, a := range alerts {
 			if a.Labels["rule_name"] == ruleName &&
 				a.Labels["comm"] == "cat" &&
-				a.Labels["container_name"] == "nginx" {
+				a.Labels["container_name"] == containerName {
 				return true
 			}
 		}
 		return false
 	}
```

Then call hasAlert(alerts, "nginx") / hasAlert(alerts, "curl") at use sites and drop the inline loop at lines 1933–1940.

Also applies to: 1933-1940
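As a standalone sketch of the suggested refactor, the parameterized helper could look like the following. Note the `Alert` struct and the explicit `ruleName` parameter here are simplified stand-ins for this example; the real type lives in the repo's testutils package.

```go
package main

import "fmt"

// Alert is a simplified stand-in for testutils.Alert, carrying only the
// label map that the helper inspects.
type Alert struct {
	Labels map[string]string
}

// hasAlert reports whether an alert for the given rule, triggered by "cat"
// in the given container, is present. The container name is threaded
// through instead of being hard-coded to "nginx".
func hasAlert(alerts []Alert, ruleName, containerName string) bool {
	for _, a := range alerts {
		if a.Labels["rule_name"] == ruleName &&
			a.Labels["comm"] == "cat" &&
			a.Labels["container_name"] == containerName {
			return true
		}
	}
	return false
}

func main() {
	alerts := []Alert{
		{Labels: map[string]string{"rule_name": "R0002", "comm": "cat", "container_name": "nginx"}},
	}
	fmt.Println(hasAlert(alerts, "R0002", "nginx")) // true
	fmt.Println(hasAlert(alerts, "R0002", "curl"))  // false
}
```

With this shape, both the nginx and curl subtests share one loop and differ only in the argument they pass.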
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/component_test.go` around lines 1692-1701, the helper hasAlert currently hardcodes container_name == "nginx"; change its signature to hasAlert(alerts []testutils.Alert, container string) bool and replace the hardcoded "nginx" comparison with container so it becomes reusable; then update call sites to call hasAlert(alerts, "nginx") and hasAlert(alerts, "curl") and remove the duplicate inline loop that rechecked for "curl" (the inline loop that duplicates the logic for R0002), keeping comparisons against a.Labels["rule_name"] == ruleName, a.Labels["comm"] == "cat", and a.Labels["container_name"] == container.

tests/resources/nginx-user-profile-deployment.yaml (1)
20-20: Consider bumping nginx:1.14.2 to a more recent supported tag.
nginx:1.14.2 is from 2018 and has known CVEs in OpenSSL/HTTP/2 paths. Since this image is only used to give the pod a long-running container for the Opens test, any modern nginx:1.27 / nginx:stable (or a digest-pinned variant, matching the curl deployment in this PR) would reduce supply-chain noise without affecting the test logic.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/resources/nginx-user-profile-deployment.yaml` at line 20, Update the Docker image tag used for the long-running test container: replace the hardcoded image reference "nginx:1.14.2" with a modern supported tag (e.g., "nginx:1.27" or "nginx:stable") or preferably a digest-pinned variant that matches how the curl deployment is pinned in this PR; ensure the change only affects the image field so test behavior is unchanged but CVE exposure and supply-chain noise are reduced.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 186d6dca-2886-4b41-a19c-a323a787897d
📒 Files selected for processing (3)
- tests/component_test.go
- tests/resources/curl-user-profile-wildcards-deployment.yaml
- tests/resources/nginx-user-profile-deployment.yaml
```go
triggerAndGetAlerts := func(t *testing.T, wl *testutils.TestWorkload, filePath string) []testutils.Alert {
	t.Helper()
	stdout, stderr, err := wl.ExecIntoPod([]string{"cat", filePath}, "nginx")
	if err != nil {
		t.Errorf("exec 'cat %s' in container nginx failed: %v (stdout=%q stderr=%q)", filePath, err, stdout, stderr)
	}
	// Poll for alerts — they may take time to propagate through
	// eBPF → node-agent → alertmanager.
	var alerts []testutils.Alert
	require.Eventually(t, func() bool {
		alerts, err = testutils.GetAlerts(wl.Namespace)
		return err == nil
	}, 60*time.Second, 5*time.Second, "alerts must be retrievable from ns %s", wl.Namespace)
	// Give extra time for all alerts to arrive after first successful fetch.
	time.Sleep(10 * time.Second)
	alerts, err = testutils.GetAlerts(wl.Namespace)
	require.NoError(t, err, "get alerts from ns %s", wl.Namespace)
	return alerts
}
```
triggerAndGetAlerts polling does not actually wait for alerts — risk of flaky positive cases.
The Eventually condition only requires GetAlerts to return without error. Since testutils.GetAlerts just performs an HTTP fetch from Alertmanager and decodes the response, it returns err == nil on the very first iteration (even with an empty alert list). So Eventually exits immediately, and the only real wait before the final fetch is the fixed time.Sleep(10 * time.Second) on line 1685. The advertised 60s budget is never used.
For the "expect alert" subtests (exact_path_mismatch, ellipsis_rejects_multi_segment), this can produce false negatives if eBPF → node-agent → Alertmanager propagation takes longer than 10s on a slower CI runner.
Consider polling on the actual condition (presence of the target alert), and only fall back to the timeout when expecting absence:
♻️ Proposed predicate-based polling
```diff
-	triggerAndGetAlerts := func(t *testing.T, wl *testutils.TestWorkload, filePath string) []testutils.Alert {
+	// triggerAndGetAlerts execs cat and waits up to 60s, returning early once
+	// the expected alert state is observed. For expectAlert=true it returns as
+	// soon as the R0002 alert is seen; for expectAlert=false it waits the full
+	// stabilization window so late alerts are not missed.
+	triggerAndGetAlerts := func(t *testing.T, wl *testutils.TestWorkload, filePath string, expectAlert bool) []testutils.Alert {
 		t.Helper()
 		stdout, stderr, err := wl.ExecIntoPod([]string{"cat", filePath}, "nginx")
 		if err != nil {
 			t.Errorf("exec 'cat %s' in container nginx failed: %v (stdout=%q stderr=%q)", filePath, err, stdout, stderr)
 		}
-		// Poll for alerts — they may take time to propagate through
-		// eBPF → node-agent → alertmanager.
 		var alerts []testutils.Alert
-		require.Eventually(t, func() bool {
-			alerts, err = testutils.GetAlerts(wl.Namespace)
-			return err == nil
-		}, 60*time.Second, 5*time.Second, "alerts must be retrievable from ns %s", wl.Namespace)
-		// Give extra time for all alerts to arrive after first successful fetch.
-		time.Sleep(10 * time.Second)
-		alerts, err = testutils.GetAlerts(wl.Namespace)
-		require.NoError(t, err, "get alerts from ns %s", wl.Namespace)
+		deadline := time.Now().Add(60 * time.Second)
+		for time.Now().Before(deadline) {
+			alerts, err = testutils.GetAlerts(wl.Namespace)
+			require.NoError(t, err, "get alerts from ns %s", wl.Namespace)
+			if expectAlert && hasAlert(alerts) {
+				return alerts
+			}
+			time.Sleep(2 * time.Second)
+		}
 		return alerts
 	}
```

If you keep the current shape, please at least replace the no-op Eventually with a single sleep so the intent is clear, or accept that propagation-related flakiness may surface on slow runners.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/component_test.go` around lines 1671 - 1689, The polling currently only
waits for testutils.GetAlerts to return without error, which can succeed
immediately with an empty list; update triggerAndGetAlerts so the
require.Eventually predicate checks for actual alerts (e.g., call
testutils.GetAlerts into alerts and return err==nil && len(alerts) > 0) instead
of just err==nil so the 60s budget is actually used for propagation; keep a
final GetAlerts and NoError as now. If some tests expect no alerts, split or add
a separate helper that polls for absence (err==nil && len(alerts) == 0) or use a
single sleep there. Ensure you update references in triggerAndGetAlerts and keep
function names testutils.GetAlerts and require.Eventually unchanged.
```go
passed := true
for _, container := range profile.Spec.Containers {
	for _, open := range container.Opens {
		if !strings.HasPrefix(open.Path, "/") {
			t.Errorf("recorded path must be absolute: got %q (container %s)", open.Path, container.Name)
			passed = false
		}
		if open.Path == "." {
			t.Errorf("recorded path must not be relative dot: got %q (container %s)", open.Path, container.Name)
			passed = false
		}
	}
}
```
recorded_profile_absolute_paths passes vacuously if Opens is empty.
If for any reason the auto-learned profile contains no Opens entries (e.g. timing or a learning regression), the nested loops never execute and the subtest reports PASS without verifying anything. Add a positive cardinality assertion so a regression that drops Opens entirely is also caught.
🛡️ Proposed assertion
passed := true
+ totalOpens := 0
for _, container := range profile.Spec.Containers {
+ totalOpens += len(container.Opens)
for _, open := range container.Opens {
@@
}
}
+ if totalOpens == 0 {
+ t.Errorf("recorded profile has no Opens entries — cannot validate absolute paths")
+ passed = false
+ }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
passed := true
totalOpens := 0
for _, container := range profile.Spec.Containers {
	totalOpens += len(container.Opens)
	for _, open := range container.Opens {
		if !strings.HasPrefix(open.Path, "/") {
			t.Errorf("recorded path must be absolute: got %q (container %s)", open.Path, container.Name)
			passed = false
		}
		if open.Path == "." {
			t.Errorf("recorded path must not be relative dot: got %q (container %s)", open.Path, container.Name)
			passed = false
		}
	}
}
if totalOpens == 0 {
	t.Errorf("recorded profile has no Opens entries — cannot validate absolute paths")
	passed = false
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/component_test.go` around lines 1718 - 1730, The test currently
iterates over profile.Spec.Containers and container.Opens but will pass
vacuously if no Opens exist; add a positive cardinality assertion before the
loops: compute totalOpens by summing len(container.Opens) for each container and
call t.Fatalf or t.Errorf if totalOpens == 0 to fail the test, then proceed with
the existing checks (use profile, profile.Spec.Containers, container.Opens and
t.Errorf/t.Fatalf as appropriate) so a regression that drops all Opens is
detected.
```go
Opens: []v1beta1.OpenCalls{
	{Path: "/etc/*", Flags: []string{"O_RDONLY", "O_LARGEFILE", "O_CLOEXEC"}},
	{Path: "/etc/ssl/openssl.cnf", Flags: []string{"O_RDONLY", "O_LARGEFILE"}},
	{Path: "/home/*", Flags: []string{"O_RDONLY", "O_LARGEFILE"}},
	{Path: "/lib/*", Flags: []string{"O_RDONLY", "O_LARGEFILE", "O_CLOEXEC"}},
	{Path: "/usr/lib/*", Flags: []string{"O_RDONLY", "O_LARGEFILE", "O_CLOEXEC"}},
	{Path: "/usr/local/lib/*", Flags: []string{"O_RDONLY", "O_LARGEFILE", "O_CLOEXEC"}},
	{Path: "/proc/*/cgroup", Flags: []string{"O_RDONLY", "O_CLOEXEC"}},
	{Path: "/proc/*/kernel/cap_last_cap", Flags: []string{"O_RDONLY", "O_CLOEXEC"}},
	{Path: "/proc/*/mountinfo", Flags: []string{"O_RDONLY", "O_CLOEXEC"}},
	{Path: "/proc/*/task/*/fd", Flags: []string{"O_RDONLY", "O_DIRECTORY", "O_CLOEXEC"}},
	{Path: "/sys/fs/cgroup/cpu.max", Flags: []string{"O_RDONLY", "O_CLOEXEC"}},
	{Path: "/sys/kernel/mm/transparent_hugepage/hpage_pmd_size", Flags: []string{"O_RDONLY"}},
	{Path: "/7/setgroups", Flags: []string{"O_RDONLY", "O_CLOEXEC"}},
	{Path: "/runc", Flags: []string{"O_RDONLY", "O_CLOEXEC"}},
},
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Find the source-of-truth wildcard profile YAML referenced in the comment and
# print its Opens entries so we can confirm the Go literal mirrors it exactly.
fd -t f -i 'known-application-profile-wildcards' --exec sh -c 'echo "===== {} ====="; cat {}'
```

Repository: kubescape/node-agent
Length of output: 46

🏁 Script executed:

```shell
# Read the context around line 1827 to see the actual comment
head -1872 tests/component_test.go | tail -50 | cat -n
```

Repository: kubescape/node-agent
Length of output: 3123

🏁 Script executed:

```shell
# Search for YAML files with "profile" or "wildcard" in the name using broader patterns
fd -t f '\.yaml' | grep -i 'profile\|wildcard'
```

Repository: kubescape/node-agent
Length of output: 209

🏁 Script executed:

```shell
# Also search for any mentions of these specific paths in the codebase
rg '7/setgroups|/runc' --type go -B 2 -A 2
```

Repository: kubescape/node-agent
Length of output: 563

🏁 Script executed:

```shell
# Check the curl user profile wildcards deployment YAML
cat tests/resources/curl-user-profile-wildcards-deployment.yaml
```

Repository: kubescape/node-agent
Length of output: 597

🏁 Script executed:

```shell
# Search for any comments or documentation mentioning /7/setgroups or /runc
rg '/7/setgroups|/runc' --type yaml
```

Repository: kubescape/node-agent
Length of output: 46

🏁 Script executed:

```shell
# Search for any TODO, FIXME, or similar comments near these paths
rg 'TODO|FIXME|HACK|BUG|XXX' tests/component_test.go | head -20
```

Repository: kubescape/node-agent
Length of output: 327
Remove or correct the suspicious Opens paths /7/setgroups and /runc.
These paths appear to be errors. The comment claims the profile "matches known-application-profile-wildcards.yaml", but that file does not exist in the repository. Additionally, neither path appears in any YAML profile files — they only exist in this Go test. The path /7/setgroups looks like a truncated /proc/<pid>/setgroups, and /runc appears incomplete. Without a documented source reference, these are dead matchers that don't exercise real wildcard logic.
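To illustrate why these entries are dead matchers, the sketch below uses Go's standard `path.Match` as a stand-in for the profile's wildcard logic. This is an assumption: the node-agent matcher may implement different semantics, but under any single-segment `*` interpretation, `/7/setgroups` can never match a real `/proc/<pid>/setgroups` path while the wildcarded form does.

```go
package main

import (
	"fmt"
	"path"
)

// matches is a stand-in for the profile's wildcard matching. With
// path.Match, '*' matches within a single path segment (it does not
// cross '/'), so "/proc/*/setgroups" covers any PID directory.
func matches(pattern, p string) bool {
	ok, err := path.Match(pattern, p)
	return err == nil && ok
}

func main() {
	fmt.Println(matches("/proc/*/setgroups", "/proc/1234/setgroups")) // true
	fmt.Println(matches("/7/setgroups", "/proc/1234/setgroups"))      // false: dead matcher
	fmt.Println(matches("/runc", "/run/runc"))                        // false: incomplete path
}
```

Under these semantics, an entry that never matches any runtime path adds noise to the expected profile without exercising the wildcard code the test is meant to cover.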
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/component_test.go` around lines 1857 - 1872, In tests/component_test.go
inside the Opens array (the OpenCalls entries), remove or correct the two
suspicious Path entries: replace {Path: "/7/setgroups", ...} with {Path:
"/proc/*/setgroups", Flags: []string{"O_RDONLY", "O_CLOEXEC"}} and either remove
the {Path: "/runc", ...} entry or replace it with a real path (e.g. {Path:
"/run/runc", Flags: []string{"O_RDONLY", "O_CLOEXEC"}}) so the test uses valid
wildcarded /proc and runtime paths (look for the Opens slice in the test to
update these entries).
matthyx left a comment
Excluding the upcoming * open-path wildcard support as requested; adding the remaining inline suggestions.
```go
// triggerAndGetAlerts execs cat on the given path, then polls for alerts
// up to 60s to avoid race conditions with alert propagation.
triggerAndGetAlerts := func(t *testing.T, wl *testutils.TestWorkload, filePath string) []testutils.Alert {
```
This helper is shared by both the "expect alert" and "expect no alert" cases, but right now the polling only waits for GetAlerts() to become reachable. In practice that can return immediately with an empty list, so the positive cases still depend on the fixed 10 * time.Second sleep below.
I think this is less flaky if triggerAndGetAlerts takes an expectAlert bool and polls for the actual condition in the positive cases:
```go
require.Eventually(t, func() bool {
	alerts, err = testutils.GetAlerts(wl.Namespace)
	return err == nil && hasAlert(alerts)
}, 60*time.Second, 5*time.Second, "expected R0002 alert in ns %s", wl.Namespace)
```

For the no-alert cases, keep the stabilization sleep/final fetch path so late alerts are still caught.
```go
profilePath := "/etc/nginx/nginx.conf"
filePath := "/etc/hostname"
wl := deployWithProfile(t, []v1beta1.OpenCalls{
	{Path: profilePath, Flags: []string{"O_RDONLY"}},
```
cat also opens /etc/ld.so.cache on exec. Since this case is looking for any R0002 from cat, omitting the loader cache here can make the subtest pass for the wrong reason even if /etc/hostname matching regresses.
```diff
 	{Path: profilePath, Flags: []string{"O_RDONLY"}},
+	{Path: "/etc/ld.so.cache", Flags: []string{"O_RDONLY", "O_CLOEXEC"}}, // dynamic linker opens this on every exec
```
```go
stdout, stderr, err := wl.ExecIntoPod([]string{"cat", f}, "curl")
if err != nil {
	t.Logf("exec 'cat %s' failed: %v (stdout=%q stderr=%q)", f, err, stdout, stderr)
}
```
If one of these cat commands fails, the subtest still passes as long as no alert arrives, which makes this branch vacuous. I'd fail fast here so the "allowed opens" assertion only runs after the opens were actually exercised.
```diff
 	stdout, stderr, err := wl.ExecIntoPod([]string{"cat", f}, "curl")
-	if err != nil {
-		t.Logf("exec 'cat %s' failed: %v (stdout=%q stderr=%q)", f, err, stdout, stderr)
-	}
+	require.NoErrorf(t, err, "exec 'cat %s' in container curl failed (stdout=%q stderr=%q)", f, stdout, stderr)
```
Overview
Adds the first of the component tests to verify that the OpensWildcard support (of the new storage) actually works end-to-end.
It only contains tests