
Enforce shared assertions in e2e tests #3108

Open

koxudaxi wants to merge 16 commits into main from test/assert-helper-guard

Conversation

@koxudaxi
Owner

koxudaxi commented Apr 30, 2026

Summary

  • add an assert-usage guard for the e2e test files that use the shared CLI/output assertion helpers
  • require remaining direct assert exceptions in those e2e files to be explicit via @pytest.mark.allow_direct_assert
  • migrate additional generated-output checks from partial string assertions / inline snapshots to full expected-output fixtures
  • extend existing shared e2e helpers for generated module dictionaries, captured CLI stderr/stdout, HTTP request kwargs, warnings, watch mode, and generated-code safety checks
  • add a shared mock_httpx_get fixture and extend HTTP request assertions so URL e2e tests do not repeat response mock setup or raw mock call assertions
  • reduce the direct-assert exceptions from 78 statements to 34, leaving only internal parser/config/mock-state checks that are not generated-output comparisons
  • exclude generated expected-output fixtures from CodeQL scanning to avoid static-analysis noise on fixture code
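A test that legitimately needs a raw assert on internal state would opt out of the guard explicitly via the marker. The test name and body below are illustrative sketches, not code from this PR:

```python
import pytest


@pytest.mark.allow_direct_assert  # marker registered in [tool.pytest.ini_options].markers
def test_parser_internal_state():
    # Internal-state checks like this are not generated-output comparisons,
    # so the guard permits a direct assert when the test is explicitly marked.
    parsed = {"models": ["User"]}
    assert parsed["models"] == ["User"]
```

Unmarked direct asserts in the guarded e2e files fail the new enforcement suite instead.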

Validation

  • uv run pytest tests/test_assert_helper_usage.py -q
  • uv run ruff check tests/conftest.py tests/main/conftest.py tests/main/test_main_watch.py tests/main/jsonschema/test_main_jsonschema.py tests/main/openapi/test_main_openapi.py tests/main/test_main_general.py tests/test_main_kr.py
  • uv run pytest tests/test_assert_helper_usage.py tests/main/test_main_watch.py tests/main/jsonschema/test_main_jsonschema.py tests/main/openapi/test_main_openapi.py tests/main/test_main_general.py tests/test_main_kr.py -q -k 'not test_validation and not test_validation_failed and not test_openapi_special_yaml_keywords and not test_graphql_parser_with_explicit_target_datetime_class and not test_graphql_parser_with_config_object'
  • uv run pytest tests/main/test_main_general.py::test_cli_input_overrides_pyproject_url tests/main/test_main_general.py::test_cli_url_overrides_pyproject_input tests/main/openapi/test_main_openapi.py::test_main_http_openapi tests/main/openapi/test_main_openapi.py::test_main_http_openapi_with_custom_port tests/main/openapi/test_main_openapi.py::test_main_openapi_body_and_parameters_remote_ref tests/main/openapi/test_main_openapi.py::test_main_openapi_read_only_write_only_url_ref tests/main/openapi/test_main_openapi.py::test_main_openapi_read_only_write_only_allof_url_ref tests/main/jsonschema/test_main_jsonschema.py::test_main_root_id_jsonschema_with_local_file tests/main/jsonschema/test_main_jsonschema.py::test_main_root_id_jsonschema_with_remote_file tests/main/jsonschema/test_main_jsonschema.py::test_main_root_id_jsonschema_self_refs_with_local_file tests/main/jsonschema/test_main_jsonschema.py::test_main_root_id_jsonschema_self_refs_with_remote_file tests/main/jsonschema/test_main_jsonschema.py::test_main_root_id_jsonschema_with_absolute_remote_file tests/main/jsonschema/test_main_jsonschema.py::test_main_url_with_relative_root_id_resolves_relative_refs tests/main/jsonschema/test_main_jsonschema.py::test_main_remote_ref_emits_deprecation_warning tests/main/jsonschema/test_main_jsonschema.py::test_main_remote_ref_blocked_when_explicitly_disabled tests/main/jsonschema/test_main_jsonschema.py::test_main_bundled_schema_with_id_url tests/main/jsonschema/test_main_jsonschema.py::test_main_http_jsonschema tests/main/jsonschema/test_main_jsonschema.py::test_main_http_jsonschema_with_http_headers_and_http_query_parameters_and_ignore_tls tests/main/jsonschema/test_main_jsonschema.py::test_main_circular_ref_external_url_keywords tests/test_main_kr.py::test_url_with_http_headers tests/test_main_kr.py::test_http_ignore_tls tests/test_main_kr.py::test_http_query_parameters tests/test_main_kr.py::test_http_timeout -q
  • uv run pytest tests/test_main_kr.py::test_main_modular_no_file tests/test_main_kr.py::test_main_no_file -q
  • uv run pytest tests/test_assert_helper_usage.py tests/test_main_kr.py::test_generate_prompt_basic tests/test_main_kr.py::test_generate_prompt_with_question tests/test_main_kr.py::test_generate_prompt_with_options tests/test_main_kr.py::test_generate_prompt_with_list_options -q
  • pre-commit run --files .github/workflows/codeql.yaml

Note: a full changed-file pytest run without -k currently requires optional prance and graphql extras for existing tests unrelated to this change.

Summary by CodeRabbit

  • Chores

    • Excluded test expected data from CodeQL analysis.
    • Added a pytest marker to allow direct asserts in selected tests.
    • Enhanced test harness: unified stdout/stderr validation, SystemExit helpers, watch-mode helpers, and standardized HTTP mocking/assertion utilities.
  • Tests

    • Added many golden-file fixtures for JSON Schema/OpenAPI and multi-module scenarios.
    • New suite enforcing assertion-helper usage and comprehensive HTTP/mock validations.
    • Refactored numerous tests to use centralized helpers for warnings, output comparisons, and generated-output validation.
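The golden-file comparison style described above can be sketched roughly as follows; the helper name, normalization, and diff formatting here are assumptions for illustration, not the PR's actual implementation:

```python
import difflib
from pathlib import Path


def assert_file_matches(actual: str, expected_path: Path) -> None:
    """Compare generated output against a stored expected-output fixture."""
    expected = expected_path.read_text(encoding="utf-8").replace("\r\n", "\n")
    normalized = actual.replace("\r\n", "\n")
    if normalized != expected:
        # Surface a unified diff so a fixture mismatch is easy to inspect.
        diff = "\n".join(
            difflib.unified_diff(
                expected.splitlines(),
                normalized.splitlines(),
                fromfile=str(expected_path),
                tofile="generated",
                lineterm="",
            )
        )
        raise AssertionError(f"Generated output differs from fixture:\n{diff}")
```

Comparing full fixture files rather than partial substrings means any drift in the generated output fails loudly with a readable diff.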

@coderabbitai

coderabbitai Bot commented Apr 30, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 1d047c40-901f-4a3b-b13a-8cf9b0008ea4

📥 Commits

Reviewing files that changed from the base of the PR and between 2be8f84 and fd9bf2d.

📒 Files selected for processing (2)
  • tests/main/jsonschema/test_main_jsonschema.py
  • tests/test_assert_helper_usage.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/main/jsonschema/test_main_jsonschema.py

📝 Walkthrough

Walkthrough

Adds test-harness utilities (httpx mocking, HTTP-call assertions, generated-output and warning/stderr helpers, AST-based direct-assert enforcement), declares a pytest marker, updates CodeQL to ignore expected test data, and adds many generated expected-data fixtures while refactoring tests to use the new helpers.
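A minimal sketch of the httpx-mocking idea, using unittest.mock rather than the PR's actual mock_httpx_get fixture (the response class shape and registry layout here are assumptions from the summary above):

```python
from unittest import mock


class MockHttpxResponse:
    """Stand-in for httpx.Response with just the fields the tests read."""

    def __init__(self, status_code: int, text: str) -> None:
        self.status_code = status_code
        self.text = text


def make_mock_get(responses: dict) -> mock.MagicMock:
    """Return a MagicMock that serves canned responses per URL and records calls."""

    def _get(url, **kwargs):
        return responses[url]

    return mock.MagicMock(side_effect=_get)


mock_get = make_mock_get(
    {"https://example.com/schema.json": MockHttpxResponse(200, "{}")}
)
response = mock_get("https://example.com/schema.json", timeout=10.0)
# An assert_httpx_get_kwargs-style check then inspects the recorded call:
mock_get.assert_called_once_with("https://example.com/schema.json", timeout=10.0)
```

Centralizing this setup is what lets the URL e2e tests drop their repeated mock wiring and raw mock-call asserts.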

Changes

Cohort / File(s) Summary
CI / Config
.github/workflows/codeql.yaml, pyproject.toml
CodeQL init now ignores tests/data/expected/**. Adds pytest marker allow_direct_assert to [tool.pytest.ini_options].markers.
Test harness core
tests/conftest.py, tests/main/conftest.py
New/expanded fixtures and helpers: MockHttpxResponse, mock_httpx_get, assert_httpx_get_kwargs, generated-output assertions (assert_generated_modules_output, assert_generated_file_matches_output, assert_generate_wrote_file), warning/stderr helpers, assert_no_uncommented_generated_code, centralized capture/assertion (_assert_captured_output), run_main_with_system_exit, watch helpers, and template/env cache resets.
AST/assert enforcement & helper tests
tests/test_assert_helper_usage.py
Adds AST-based checks to forbid unmarked direct assert in e2e tests (honors @pytest.mark.allow_direct_assert), plus unit tests covering mock_httpx_get and assert_httpx_get_kwargs behaviors.
Refactored test suites
tests/main/..., tests/main/jsonschema/..., tests/main/openapi/..., tests/test_main_kr.py
Many tests rewritten to use shared fixtures/helpers, golden-file/directory assertions, centralized warnings/stderr checks, standardized exit/capture handling, and added allow_direct_assert marks where appropriate.
Generated expected-data fixtures
tests/data/expected/... (many new files)
Adds numerous generated fixture files (models, root models, enums, multi-module layouts, package __init__.py) used by golden assertions and module-shape checks.
Misc tests / small updates
tests/main/test_exec_validation.py, tests/main/test_main_watch.py, others
Signature/behavior tweaks: added params for captured output assertions, new helper usage in watch tests, and small refactors to replace ad-hoc capture/assert logic with shared helpers.

Sequence Diagram(s)

sequenceDiagram
    participant Test as Test
    participant Mock as mock_httpx_get
    participant HTTP as httpx.get
    participant FS as FileSystem
    participant Assert as assert_httpx_get_kwargs

    Test->>Mock: register MockHttpxResponse(url, status, body)
    Test->>HTTP: exercise code that calls httpx.get(url, **opts)
    HTTP->>Mock: lookup response queue for URL
    Mock-->>HTTP: return MockHttpxResponse(response)
    HTTP-->>Test: code receives mocked response
    Test->>FS: generate/write files via generate(..., output=...)
    Test->>Assert: verify httpx calls (urls, headers, params, timeout, call_count)
    Assert-->>Test: pass / raise detailed failure

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested labels

safe-to-fix

Suggested reviewers

  • ilovelinux

Poem

🐰 I hopped through tests and stitched each part,
Mock responses in my paws, golden files like art,
Helpers snug and tidy, warnings folded neat,
ASTs guard rogue asserts — so the suite stays sweet,
A rabbit’s joyful thump — the tests are now complete!

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly and concisely describes the main objective: enforcing the use of shared assertions in end-to-end tests. It directly reflects the primary change across the test suite.
Docstring Coverage ✅ Passed Docstring coverage is 90.64% which is sufficient. The required threshold is 80.00%.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
📝 Generate docstrings
  • Create stacked PR
  • Commit on current branch
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch test/assert-helper-guard

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

Review rate limit: 4/8 reviews remaining, refill in 23 minutes and 14 seconds.

Comment @coderabbitai help to get the list of available commands and usage tips.

@github-actions
Contributor

github-actions Bot commented Apr 30, 2026

📚 Docs Preview: https://pr-3108.datamodel-code-generator.pages.dev

@codspeed-hq

codspeed-hq Bot commented Apr 30, 2026

Merging this PR will not alter performance

⚠️ Unknown Walltime execution environment detected

Using the Walltime instrument on standard Hosted Runners will lead to inconsistent data.

For the most accurate results, we recommend using CodSpeed Macro Runners: bare-metal machines fine-tuned for performance measurement consistency.

✅ 11 untouched benchmarks
⏩ 98 skipped benchmarks [1]


Comparing test/assert-helper-guard (fd9bf2d) with main (f00eaab)

Open in CodSpeed

Footnotes

  1. 98 benchmarks were skipped, so the baseline results were used instead. If they were deleted from the codebase, click here and archive them to remove them from the performance reports.

@codecov

codecov Bot commented Apr 30, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 100.00%. Comparing base (f00eaab) to head (fd9bf2d).

Additional details and impacted files
@@            Coverage Diff            @@
##              main     #3108   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files           87        88    +1     
  Lines        18306     18297    -9     
  Branches      2091      2115   +24     
=========================================
- Hits         18306     18297    -9     
Flag Coverage Δ
unittests 100.00% <100.00%> (ø)

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

🚀 New features to boost your workflow:
  • ❄️ Test Analytics: Detect flaky tests, report on failures, and find test suite problems.

@koxudaxi force-pushed the test/assert-helper-guard branch 3 times, most recently from 5bee535 to 15856f4 on April 30, 2026 04:51
@koxudaxi force-pushed the test/assert-helper-guard branch from 15856f4 to 5518ab7 on April 30, 2026 05:01
Comment thread tests/data/expected/main_kr/main_modular_no_file/output.py Fixed
@koxudaxi marked this pull request as ready for review on May 1, 2026 04:56

coderabbitai Bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
tests/main/jsonschema/test_main_jsonschema.py (1)

1005-1032: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Assert the resolved remote URL here too.

Right now this test only proves a FutureWarning was emitted. Add an assert_httpx_get_kwargs(...) check so the relative $ref still resolves to the intended remote schema URL; otherwise a wrong fetch path could regress without this test catching it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/main/jsonschema/test_main_jsonschema.py` around lines 1005 - 1032, The
test test_main_remote_ref_emits_deprecation_warning currently only asserts a
FutureWarning; add an assertion that the mocked HTTP fetch was called with the
expected resolved remote URL by invoking assert_httpx_get_kwargs(...) after
run_main_and_assert to verify the relative $ref
"../other/schema.json#/definitions/Thing" resolved to
"https://example.com/other/schema.json" (or the exact expected URL used by your
resolver). Use the existing mock_httpx_get and assert_httpx_get_kwargs helpers
to check the request URL/path and any relevant kwargs so the test fails if the
resolver builds the wrong remote URL.
tests/main/conftest.py (2)

461-468: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Copy shared fixtures in the stdin_path branch too.

copy_files is honored for the file-input path, but stdin-driven runs skip _copy_files(copy_files), so they can observe a different workspace than the helper contract implies.

Suggested fix
     if stdin_path is not None:
         if monkeypatch is None:  # pragma: no cover
             pytest.fail("monkeypatch is required when using stdin_path")
+        _copy_files(copy_files)
         monkeypatch.setattr("sys.stdin", stdin_path.open(encoding="utf-8"))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/main/conftest.py` around lines 461 - 468, The stdin_path branch in the
test fixture skips calling _copy_files(copy_files), causing stdin-driven tests
to run without the shared fixtures; modify the stdin_path branch so that it
invokes _copy_files(copy_files) before setting sys.stdin and calling main(args),
and ensure it still uses monkeypatch.setattr("sys.stdin", stdin_path.open(...)),
builds args via _extend_args(output_path=..., input_file_type=...,
extra_args=...), and then calls return_code = main(args) so behavior matches the
file-input path.

484-500: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Fail fast when capture assertions are requested.

This branch silently skips stdout/stderr checks whenever capsys is None, so a misconfigured caller can get a false-green test instead of an assertion. Reuse _assert_captured_output() here or fail immediately when any capture expectation is set without capsys.

Suggested fix
-    if capsys is not None and (
-        expected_stdout_path is not None
-        or expected_stderr is not None
-        or expected_stderr_contains is not None
-        or assert_no_stderr
-    ):
-        captured = capsys.readouterr()
-        ...
+    _assert_captured_output(
+        capsys,
+        expected_stdout_path=expected_stdout_path,
+        expected_stderr=expected_stderr,
+        expected_stderr_contains=expected_stderr_contains,
+        assert_no_stderr=assert_no_stderr,
+    )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/main/conftest.py` around lines 484 - 500, The current capture-check
branch silently skips assertions when capsys is None causing false-green tests;
update the block that checks capsys and
expected_stdout_path/expected_stderr/expected_stderr_contains/assert_no_stderr
to immediately fail (pytest.fail with a clear message) if capsys is None while
any of those expectations are set, or simply call the existing helper
_assert_captured_output(capsys, expected_stdout_path, expected_stderr,
expected_stderr_contains, assert_no_stderr) instead of skipping; reference
capsys, expected_stdout_path, expected_stderr, expected_stderr_contains,
assert_no_stderr and _assert_captured_output to locate and replace the
silent-skip logic.
🧹 Nitpick comments (1)
tests/conftest.py (1)

897-905: ⚡ Quick win

Keep the file comparison strict.

Stripping both sides can hide leading/trailing blank-line regressions and makes this helper behave differently from the other snapshot assertions. Compare normalized line endings directly instead.

♻️ Proposed fix
-    if result.strip() != actual.strip():  # pragma: no cover
-        diff = _format_diff(result.strip(), actual.strip(), output_file)
+    expected = _normalize_line_endings(result)
+    actual = _normalize_line_endings(actual)
+    if expected != actual:  # pragma: no cover
+        diff = _format_diff(expected, actual, output_file)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/conftest.py` around lines 897 - 905, The comparison in
assert_generated_file_matches_output currently strips both strings which hides
leading/trailing blank-line regressions; modify the function
(assert_generated_file_matches_output) to compare the generation and file
contents without stripping by normalizing line endings (e.g., convert CRLF to LF
on both result and actual via .replace("\r\n", "\n")) and then comparing the
normalized strings directly, keeping the existing diff/error behavior when they
differ.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/main/jsonschema/test_main_jsonschema.py`:
- Around line 775-802: Update the two tests test_main_class_name_prefix_invalid
and test_main_class_name_suffix_invalid to assert the CLI failure message rather
than only Exit.ERROR by passing the stderr assertion to run_main_with_args
(e.g., add expected_stderr_contains="class name prefix must match" for the
prefix case and expected_stderr_contains="class name suffix must match" for the
suffix case, or reuse the new helper name expected_stderr_contains if provided),
so the tests verify the validation logic in run_main_with_args instead of any
generic error path.

In `@tests/test_assert_helper_usage.py`:
- Around line 51-68: In _collect_direct_asserts the code uses ast.walk(function)
which descends into nested FunctionDef/AsyncFunctionDef and causes asserts in
inner helpers to be attributed to the outer function; replace that traversal
with a visitor that yields Assert nodes only from the outer function body (i.e.,
walk child nodes but do not recurse into nodes of type ast.FunctionDef or
ast.AsyncFunctionDef) so DirectAssert entries use the actual assertion site;
keep references to DirectAssert(path=..., function_name=function.name,
lineno=..., statement=_statement(node, source)) and the existing
_allows_direct_assert and TESTS_ROOT usage when creating the entries.
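The non-recursing traversal this comment asks for can be sketched as a standalone illustration (not the PR's actual _collect_direct_asserts; the SOURCE snippet is invented for the demo):

```python
import ast

SOURCE = """\
def test_outer():
    assert 1 == 1
    def helper():
        assert 2 == 2
"""


def collect_direct_asserts(function: ast.FunctionDef) -> list[int]:
    """Collect assert linenos from the function's own body, skipping nested defs."""
    found: list[int] = []
    stack: list[ast.AST] = list(function.body)
    while stack:
        node = stack.pop()
        if isinstance(node, ast.Assert):
            found.append(node.lineno)
        elif not isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Recurse into ordinary statements, but never into nested function
            # definitions, so an inner helper's asserts are not attributed to
            # the outer test function.
            stack.extend(ast.iter_child_nodes(node))
    return found


outer = ast.parse(SOURCE).body[0]
print(collect_direct_asserts(outer))  # only the outer assert on line 2
```

Unlike ast.walk, the explicit stack lets the guard stop at nested FunctionDef/AsyncFunctionDef boundaries, so each DirectAssert entry points at the actual assertion site.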

---

Outside diff comments:
In `@tests/main/conftest.py`:
- Around line 461-468: The stdin_path branch in the test fixture skips calling
_copy_files(copy_files), causing stdin-driven tests to run without the shared
fixtures; modify the stdin_path branch so that it invokes
_copy_files(copy_files) before setting sys.stdin and calling main(args), and
ensure it still uses monkeypatch.setattr("sys.stdin", stdin_path.open(...)),
builds args via _extend_args(output_path=..., input_file_type=...,
extra_args=...), and then calls return_code = main(args) so behavior matches the
file-input path.
- Around line 484-500: The current capture-check branch silently skips
assertions when capsys is None causing false-green tests; update the block that
checks capsys and
expected_stdout_path/expected_stderr/expected_stderr_contains/assert_no_stderr
to immediately fail (pytest.fail with a clear message) if capsys is None while
any of those expectations are set, or simply call the existing helper
_assert_captured_output(capsys, expected_stdout_path, expected_stderr,
expected_stderr_contains, assert_no_stderr) instead of skipping; reference
capsys, expected_stdout_path, expected_stderr, expected_stderr_contains,
assert_no_stderr and _assert_captured_output to locate and replace the
silent-skip logic.

In `@tests/main/jsonschema/test_main_jsonschema.py`:
- Around line 1005-1032: The test test_main_remote_ref_emits_deprecation_warning
currently only asserts a FutureWarning; add an assertion that the mocked HTTP
fetch was called with the expected resolved remote URL by invoking
assert_httpx_get_kwargs(...) after run_main_and_assert to verify the relative
$ref "../other/schema.json#/definitions/Thing" resolved to
"https://example.com/other/schema.json" (or the exact expected URL used by your
resolver). Use the existing mock_httpx_get and assert_httpx_get_kwargs helpers
to check the request URL/path and any relevant kwargs so the test fails if the
resolver builds the wrong remote URL.

---

Nitpick comments:
In `@tests/conftest.py`:
- Around line 897-905: The comparison in assert_generated_file_matches_output
currently strips both strings which hides leading/trailing blank-line
regressions; modify the function (assert_generated_file_matches_output) to
compare the generation and file contents without stripping by normalizing line
endings (e.g., convert CRLF to LF on both result and actual via .replace("\r\n",
"\n")) and then comparing the normalized strings directly, keeping the existing
diff/error behavior when they differ.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 518c2db4-dfa7-4d82-a2a2-6d4f1cbdc4eb

📥 Commits

Reviewing files that changed from the base of the PR and between 92bdc27 and 34e225a.

⛔ Files ignored due to path filters (5)
  • tests/data/expected/main_kr/generate_cli_command/with_profile.txt is excluded by !tests/data/**/*.txt and included by none
  • tests/data/expected/main_kr/generate_prompt/basic.txt is excluded by !tests/data/**/*.txt and included by none
  • tests/data/expected/main_kr/generate_prompt/with_list_options.txt is excluded by !tests/data/**/*.txt and included by none
  • tests/data/expected/main_kr/generate_prompt/with_options.txt is excluded by !tests/data/**/*.txt and included by none
  • tests/data/expected/main_kr/generate_prompt/with_question.txt is excluded by !tests/data/**/*.txt and included by none
📒 Files selected for processing (96)
  • .github/workflows/codeql.yaml
  • pyproject.toml
  • tests/conftest.py
  • tests/data/expected/main/cli_url_overrides_pyproject_input.py
  • tests/data/expected/main/format_code_fallback_on_error.py
  • tests/data/expected/main/generate_parent_scoped_naming_backward_compat.py
  • tests/data/expected/main/generate_returns_dict_for_multiple_modules/__init__.py
  • tests/data/expected/main/generate_returns_dict_for_multiple_modules/address.py
  • tests/data/expected/main/generate_returns_dict_for_multiple_modules/main.py
  • tests/data/expected/main/generate_returns_string_when_output_none.py
  • tests/data/expected/main/generate_returns_string_with_custom_file_header.py
  • tests/data/expected/main/generate_returns_string_with_custom_file_header_and_code.py
  • tests/data/expected/main/generate_returns_string_with_custom_file_header_no_future.py
  • tests/data/expected/main/generate_returns_string_with_dataclass.py
  • tests/data/expected/main/generate_returns_string_with_pydantic_v2.py
  • tests/data/expected/main/generate_with_config_object.py
  • tests/data/expected/main/generate_with_imported_config_from_top_level.py
  • tests/data/expected/main/jsonschema/all_exports_multi_file_ruff/__init__.py
  • tests/data/expected/main/jsonschema/all_exports_multi_file_ruff/order.py
  • tests/data/expected/main/jsonschema/all_exports_multi_file_ruff/product.py
  • tests/data/expected/main/jsonschema/all_exports_multi_file_ruff/user.py
  • tests/data/expected/main/jsonschema/field_validators_all_skipped.py
  • tests/data/expected/main/jsonschema/field_validators_with_no_field_skipped.py
  • tests/data/expected/main/jsonschema/url_relative_root_id_resolves_relative_refs/__init__.py
  • tests/data/expected/main/jsonschema/url_relative_root_id_resolves_relative_refs/sub.py
  • tests/data/expected/main/openapi/all_exports_scope_recursive_collision_avoided_by_renaming/__init__.py
  • tests/data/expected/main/openapi/all_exports_scope_recursive_collision_avoided_by_renaming/_internal.py
  • tests/data/expected/main/openapi/all_exports_scope_recursive_collision_avoided_by_renaming/bar.py
  • tests/data/expected/main/openapi/all_exports_scope_recursive_collision_avoided_by_renaming/collections.py
  • tests/data/expected/main/openapi/all_exports_scope_recursive_collision_avoided_by_renaming/foo/__init__.py
  • tests/data/expected/main/openapi/all_exports_scope_recursive_collision_avoided_by_renaming/foo/bar.py
  • tests/data/expected/main/openapi/all_exports_scope_recursive_collision_avoided_by_renaming/models.py
  • tests/data/expected/main/openapi/all_exports_scope_recursive_collision_avoided_by_renaming/nested/__init__.py
  • tests/data/expected/main/openapi/all_exports_scope_recursive_collision_avoided_by_renaming/nested/foo.py
  • tests/data/expected/main/openapi/all_exports_scope_recursive_collision_avoided_by_renaming/woo/__init__.py
  • tests/data/expected/main/openapi/all_exports_scope_recursive_collision_avoided_by_renaming/woo/boo.py
  • tests/data/expected/main/openapi/external_ref_mapping_absolute_path.py
  • tests/data/expected/main/openapi/external_ref_mapping_file_uri.py
  • tests/data/expected/main/openapi/external_ref_mapping_local_ref_unchanged.py
  • tests/data/expected/main/openapi/external_ref_mapping_normalized.py
  • tests/data/expected/main/openapi/external_ref_mapping_without_flag.py
  • tests/data/expected/main/openapi/format_code_fallback_on_error_init_exports/__init__.py
  • tests/data/expected/main/openapi/format_code_fallback_on_error_init_exports/_internal.py
  • tests/data/expected/main/openapi/format_code_fallback_on_error_init_exports/bar.py
  • tests/data/expected/main/openapi/format_code_fallback_on_error_init_exports/collections.py
  • tests/data/expected/main/openapi/format_code_fallback_on_error_init_exports/foo/__init__.py
  • tests/data/expected/main/openapi/format_code_fallback_on_error_init_exports/foo/bar.py
  • tests/data/expected/main/openapi/format_code_fallback_on_error_init_exports/models.py
  • tests/data/expected/main/openapi/format_code_fallback_on_error_init_exports/nested/__init__.py
  • tests/data/expected/main/openapi/format_code_fallback_on_error_init_exports/nested/foo.py
  • tests/data/expected/main/openapi/format_code_fallback_on_error_init_exports/woo/__init__.py
  • tests/data/expected/main/openapi/format_code_fallback_on_error_init_exports/woo/boo.py
  • tests/data/expected/main/openapi/generate_multi_module_pydantic_ruff_defaults_to_runtime_imports/__init__.py
  • tests/data/expected/main/openapi/generate_multi_module_pydantic_ruff_defaults_to_runtime_imports/_internal.py
  • tests/data/expected/main/openapi/generate_multi_module_pydantic_ruff_defaults_to_runtime_imports/bar.py
  • tests/data/expected/main/openapi/generate_multi_module_pydantic_ruff_defaults_to_runtime_imports/collections.py
  • tests/data/expected/main/openapi/generate_multi_module_pydantic_ruff_defaults_to_runtime_imports/foo/__init__.py
  • tests/data/expected/main/openapi/generate_multi_module_pydantic_ruff_defaults_to_runtime_imports/foo/bar.py
  • tests/data/expected/main/openapi/generate_multi_module_pydantic_ruff_defaults_to_runtime_imports/models.py
  • tests/data/expected/main/openapi/generate_multi_module_pydantic_ruff_defaults_to_runtime_imports/nested/__init__.py
  • tests/data/expected/main/openapi/generate_multi_module_pydantic_ruff_defaults_to_runtime_imports/nested/foo.py
  • tests/data/expected/main/openapi/generate_multi_module_pydantic_ruff_defaults_to_runtime_imports/woo/__init__.py
  • tests/data/expected/main/openapi/generate_multi_module_pydantic_ruff_defaults_to_runtime_imports/woo/boo.py
  • tests/data/expected/main/openapi/http_openapi_with_custom_port.py
  • tests/data/expected/main/openapi/init_exports_without_formatting/__init__.py
  • tests/data/expected/main/openapi/init_exports_without_formatting/_internal.py
  • tests/data/expected/main/openapi/init_exports_without_formatting/bar.py
  • tests/data/expected/main/openapi/init_exports_without_formatting/collections.py
  • tests/data/expected/main/openapi/init_exports_without_formatting/foo/__init__.py
  • tests/data/expected/main/openapi/init_exports_without_formatting/foo/bar.py
  • tests/data/expected/main/openapi/init_exports_without_formatting/models.py
  • tests/data/expected/main/openapi/init_exports_without_formatting/nested/__init__.py
  • tests/data/expected/main/openapi/init_exports_without_formatting/nested/foo.py
  • tests/data/expected/main/openapi/init_exports_without_formatting/woo/__init__.py
  • tests/data/expected/main/openapi/init_exports_without_formatting/woo/boo.py
  • tests/data/expected/main/openapi/use_type_checking_imports_for_multi_module_pydantic_ruff/__init__.py
  • tests/data/expected/main/openapi/use_type_checking_imports_for_multi_module_pydantic_ruff/_internal.py
  • tests/data/expected/main/openapi/use_type_checking_imports_for_multi_module_pydantic_ruff/bar.py
  • tests/data/expected/main/openapi/use_type_checking_imports_for_multi_module_pydantic_ruff/collections.py
  • tests/data/expected/main/openapi/use_type_checking_imports_for_multi_module_pydantic_ruff/foo/__init__.py
  • tests/data/expected/main/openapi/use_type_checking_imports_for_multi_module_pydantic_ruff/foo/bar.py
  • tests/data/expected/main/openapi/use_type_checking_imports_for_multi_module_pydantic_ruff/models.py
  • tests/data/expected/main/openapi/use_type_checking_imports_for_multi_module_pydantic_ruff/nested/__init__.py
  • tests/data/expected/main/openapi/use_type_checking_imports_for_multi_module_pydantic_ruff/nested/foo.py
  • tests/data/expected/main/openapi/use_type_checking_imports_for_multi_module_pydantic_ruff/woo/__init__.py
  • tests/data/expected/main/openapi/use_type_checking_imports_for_multi_module_pydantic_ruff/woo/boo.py
  • tests/data/expected/main_kr/main_modular_no_file/output.py
  • tests/data/expected/main_kr/pyproject_profile/ignore_pyproject_with_profile.py
  • tests/main/conftest.py
  • tests/main/jsonschema/test_main_jsonschema.py
  • tests/main/openapi/test_main_openapi.py
  • tests/main/test_exec_validation.py
  • tests/main/test_main_general.py
  • tests/main/test_main_watch.py
  • tests/test_assert_helper_usage.py
  • tests/test_main_kr.py

Comment thread tests/main/jsonschema/test_main_jsonschema.py Outdated
Comment thread tests/test_assert_helper_usage.py

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
tests/main/test_main_general.py (1)

1509-1523: ⚡ Quick win

Prefer fixture-based golden output over inline snapshot in this migrated e2e path.

This test still uses snapshot(...) while adjacent changes standardize on shared expected-output fixtures. Converting this one too keeps assertion style consistent and reduces inline expected-body drift.

♻️ Proposed refactor
@@
 def test_cli_input_overrides_pyproject_url(
     output_file: Path, tmp_path: Path, mock_httpx_get: Callable[..., Any]
 ) -> None:
@@
     with chdir(tmp_path):
         run_main_and_assert(
             input_path=cli_input.relative_to(tmp_path),
             output_path=output_file,
             extra_args=["--disable-timestamp"],
-            expected_output=snapshot(
-                """\
-# generated by datamodel-codegen:
-#   filename:  cli.json
-
-from __future__ import annotations
-
-from pydantic import BaseModel
-
-
-class Model(BaseModel):
-    from_input: str | None = None
-"""
-            ),
+            assert_func=assert_file_content,
+            expected_file="cli_input_overrides_pyproject_url.py",
         )
-import black
-import pytest
-from inline_snapshot import snapshot
+import black
+import pytest
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/main/test_main_general.py` around lines 1509 - 1523, The test currently
embeds the expected-output via the inline snapshot(...) call stored in the
expected_output= argument; convert it to use the shared fixture-based golden
output like adjacent tests by removing the inline snapshot(...) and referencing
the common expected-output fixture (the same fixture other tests use) and
placing the multi-line expected string into that shared fixture file
(named/organized consistently with other golden files); update the test to pass
the fixture value into expected_output instead of snapshot(...) so the assertion
style matches neighboring tests.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/conftest.py`:
- Around line 784-829: The helper assert_httpx_get_kwargs has two issues:
called=True is ignored and params_contains only inspects the last call; update
assert_httpx_get_kwargs (the function) so that if called is True it calls
mock_get.assert_called() immediately, and when params_contains is provided
validate every recorded call by iterating mock_get.call_args_list (not just
mock_get.call_args) — for each call extract kwargs["params"] into a dict and
assert the params_contains keys/values are present (and keep the existing
behavior for single-call checks and expected_urls), ensuring this validation
works whether expected_url/expected_urls or call_count are used.
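The fix described above can be sketched as follows. The helper name and keyword names (`called`, `params_contains`) come from the comment; the mock wiring and message text are illustrative assumptions, not the repository's actual implementation:

```python
from unittest.mock import MagicMock


def assert_httpx_get_kwargs(mock_get, *, called=False, params_contains=None):
    # Honor called=True immediately instead of silently ignoring it.
    if called:
        mock_get.assert_called()
    # Validate every recorded call, not just the last one.
    if params_contains is not None:
        for call in mock_get.call_args_list:
            params = dict(call.kwargs.get("params") or {})
            for key, value in params_contains.items():
                assert params.get(key) == value, (
                    f"expected {key}={value!r} in params of every call, got {params!r}"
                )


mock_get = MagicMock()
mock_get("https://example.com/a.json", params={"token": "t", "page": "1"})
mock_get("https://example.com/b.json", params={"token": "t"})
assert_httpx_get_kwargs(mock_get, called=True, params_contains={"token": "t"})
```

With this shape, an earlier call that drops a required param fails the assertion even when the final call happens to match.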

---

Nitpick comments:
In `@tests/main/test_main_general.py`:
- Around line 1509-1523: The test currently embeds the expected-output via the
inline snapshot(...) call stored in the expected_output= argument; convert it to
use the shared fixture-based golden output like adjacent tests by removing the
inline snapshot(...) and referencing the common expected-output fixture (the
same fixture other tests use) and placing the mult-line expected string into
that shared fixture file (named/organized consistently with other golden files);
update the test to pass the fixture value into expected_output instead of
snapshot(...) so the assertion style matches neighboring tests.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 633e6e0d-3005-4a90-85e7-9f99dd7279ab

📥 Commits

Reviewing files that changed from the base of the PR and between 34e225a and 3274f6a.

📒 Files selected for processing (5)
  • tests/conftest.py
  • tests/main/conftest.py
  • tests/main/jsonschema/test_main_jsonschema.py
  • tests/main/test_main_general.py
  • tests/test_assert_helper_usage.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/main/conftest.py

Comment thread tests/conftest.py

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/test_assert_helper_usage.py`:
- Around line 40-43: The _allows_direct_assert function currently uses
ast.unparse(...).startswith("pytest.mark.allow_direct_assert") which can falsely
match other decorators with that prefix; change it to check for an exact match
of the opt-in marker (e.g., compare ast.unparse(decorator) ==
"pytest.mark.allow_direct_assert" or otherwise resolve the decorator AST to the
exact dotted name) so only the exact decorator on function.decorator_list opts
out of the guard.
- Around line 129-141: The test only verifies the outer function ignores the
nested async helper but doesn't assert the nested helper is collected; modify
test_collect_function_asserts_ignores_nested_async_helpers to also locate the
nested async function (the AsyncFunctionDef named "assert_later" found either by
ast.walk or by inspecting function.body) and call _collect_function_asserts on
that nested node, asserting it returns the expected assert (i.e., not an empty
list) so the nested helper is reported when scanned directly.
- Around line 143-167: The test uses the type annotation pytest.MockerFixture
which doesn't exist; import and use MockerFixture from the pytest_mock package
instead. Add "from pytest_mock import MockerFixture" and change the mocker
parameter annotation in functions like
test_assert_httpx_get_kwargs_accepts_expected_urls_with_explicit_call_count (and
any other tests using pytest.MockerFixture) to use MockerFixture (e.g., def
test_...(mocker: MockerFixture) -> None) so type checkers (mypy/pyright) resolve
the fixture type correctly.
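The nested-async suggestion above can be exercised directly with the `ast` module. `_collect_function_asserts` from the guard is approximated here with a plain `ast.walk`; the helper name `assert_later` is taken from the comment, and the flag names are made up:

```python
import ast

source = """
async def outer():
    async def assert_later():
        assert payload_is_ready
"""

tree = ast.parse(source)
# Locate the nested async helper by name, as the comment suggests.
nested = next(
    node for node in ast.walk(tree)
    if isinstance(node, ast.AsyncFunctionDef) and node.name == "assert_later"
)
# Scanning the nested node directly should report its assert (a non-empty result).
collected = [node for node in ast.walk(nested) if isinstance(node, ast.Assert)]
print(len(collected))  # → 1
```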

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 51f7c7cf-c036-4ec4-b0fe-2e67ddd2da7c

📥 Commits

Reviewing files that changed from the base of the PR and between 3274f6a and a6fcdbf.

📒 Files selected for processing (1)
  • tests/test_assert_helper_usage.py

Comment thread tests/test_assert_helper_usage.py
Comment thread tests/test_assert_helper_usage.py
Comment thread tests/test_assert_helper_usage.py Outdated

@coderabbitai coderabbitai Bot left a comment

♻️ Duplicate comments (2)
tests/test_assert_helper_usage.py (2)

143-213: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Import MockerFixture from pytest_mock instead of pytest.

pytest.MockerFixture is not provided by pytest; these annotations should use pytest_mock.MockerFixture to keep typing accurate.

Suggested fix
 import pytest
+from pytest_mock import MockerFixture
@@
-def test_assert_httpx_get_kwargs_accepts_expected_urls_with_explicit_call_count(mocker: pytest.MockerFixture) -> None:
+def test_assert_httpx_get_kwargs_accepts_expected_urls_with_explicit_call_count(mocker: MockerFixture) -> None:
@@
-def test_assert_httpx_get_kwargs_accepts_called_true(mocker: pytest.MockerFixture) -> None:
+def test_assert_httpx_get_kwargs_accepts_called_true(mocker: MockerFixture) -> None:
@@
-def test_assert_httpx_get_kwargs_validates_params_contains_for_every_call(mocker: pytest.MockerFixture) -> None:
+def test_assert_httpx_get_kwargs_validates_params_contains_for_every_call(mocker: MockerFixture) -> None:
@@
-def test_assert_httpx_get_kwargs_reports_params_contains_mismatch_per_call(mocker: pytest.MockerFixture) -> None:
+def test_assert_httpx_get_kwargs_reports_params_contains_mismatch_per_call(mocker: MockerFixture) -> None:
#!/bin/bash
set -euo pipefail

echo "Check for invalid pytest.MockerFixture annotations:"
rg -nP 'pytest\.MockerFixture' tests/test_assert_helper_usage.py

echo
echo "Verify pytest does not expose MockerFixture:"
python - <<'PY'
import pytest
print("hasattr(pytest, 'MockerFixture') =", hasattr(pytest, "MockerFixture"))
PY
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_assert_helper_usage.py` around lines 143 - 213, The test
annotations incorrectly use pytest.MockerFixture (which doesn't exist); update
the import and type hints to use pytest_mock.MockerFixture instead: add/replace
the import to bring in MockerFixture from the pytest_mock module and change
annotations in functions like
test_assert_httpx_get_kwargs_accepts_expected_urls_with_explicit_call_count,
test_assert_httpx_get_kwargs_accepts_called_true,
test_assert_httpx_get_kwargs_validates_params_contains_for_every_call, and
test_assert_httpx_get_kwargs_reports_params_contains_mismatch_per_call from
pytest.MockerFixture to pytest_mock.MockerFixture (or import MockerFixture
directly and use MockerFixture in the signatures).

40-43: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Use exact marker matching for allow_direct_assert.

startswith("pytest.mark.allow_direct_assert") can accidentally whitelist unrelated decorators with the same prefix. Match the marker name exactly (optionally allowing call syntax).

Suggested fix
 def _allows_direct_assert(function: ast.FunctionDef | ast.AsyncFunctionDef) -> bool:
-    return any(
-        ast.unparse(decorator).startswith("pytest.mark.allow_direct_assert") for decorator in function.decorator_list
-    )
+    return any(
+        ast.unparse(decorator).split("(", 1)[0] == "pytest.mark.allow_direct_assert"
+        for decorator in function.decorator_list
+    )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_assert_helper_usage.py` around lines 40 - 43, The current
_allows_direct_assert uses ast.unparse().startswith(...) which can false-match
other decorators; change it to perform exact AST matching: for each decorator in
function.decorator_list, if it's an ast.Call use decorator.func else use the
node itself, then check that the node is an ast.Attribute chain matching pytest
-> mark -> allow_direct_assert (i.e. node is ast.Attribute with .attr ==
"allow_direct_assert", its .value is ast.Attribute with .attr == "mark", and its
.value.value is ast.Name with .id == "pytest"); return True if any decorator
matches this exact structure (thereby allowing optional call syntax but
preventing prefix collisions).
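The structural match described in this prompt can be sketched like this. The predicate mirrors `_allows_direct_assert` from the guard under review, but the exact implementation there may differ:

```python
import ast


def _is_allow_direct_assert(decorator):
    # Unwrap call syntax: @pytest.mark.allow_direct_assert(...)
    node = decorator.func if isinstance(decorator, ast.Call) else decorator
    # Exact attribute chain pytest -> mark -> allow_direct_assert,
    # so prefixed decorators no longer match.
    return (
        isinstance(node, ast.Attribute)
        and node.attr == "allow_direct_assert"
        and isinstance(node.value, ast.Attribute)
        and node.value.attr == "mark"
        and isinstance(node.value.value, ast.Name)
        and node.value.value.id == "pytest"
    )


source = """
@pytest.mark.allow_direct_assert
def opted_out(): ...

@pytest.mark.allow_direct_assert_extended
def prefix_collision(): ...
"""
functions = [node for node in ast.parse(source).body if isinstance(node, ast.FunctionDef)]
results = [any(_is_allow_direct_assert(d) for d in fn.decorator_list) for fn in functions]
print(results)  # → [True, False]
```

Unlike the `startswith` check, the second decorator is rejected even though it shares the full marker name as a prefix.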
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@tests/test_assert_helper_usage.py`:
- Around line 143-213: The test annotations incorrectly use pytest.MockerFixture
(which doesn't exist); update the import and type hints to use
pytest_mock.MockerFixture instead: add/replace the import to bring in
MockerFixture from the pytest_mock module and change annotations in functions
like
test_assert_httpx_get_kwargs_accepts_expected_urls_with_explicit_call_count,
test_assert_httpx_get_kwargs_accepts_called_true,
test_assert_httpx_get_kwargs_validates_params_contains_for_every_call, and
test_assert_httpx_get_kwargs_reports_params_contains_mismatch_per_call from
pytest.MockerFixture to pytest_mock.MockerFixture (or import MockerFixture
directly and use MockerFixture in the signatures).
- Around line 40-43: The current _allows_direct_assert uses
ast.unparse().startswith(...) which can false-match other decorators; change it
to perform exact AST matching: for each decorator in function.decorator_list, if
it's an ast.Call use decorator.func else use the node itself, then check that
the node is an ast.Attribute chain matching pytest -> mark ->
allow_direct_assert (i.e. node is ast.Attribute with .attr ==
"allow_direct_assert", its .value is ast.Attribute with .attr == "mark", and its
.value.value is ast.Name with .id == "pytest"); return True if any decorator
matches this exact structure (thereby allowing optional call syntax but
preventing prefix collisions).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: fe3dbb44-2a74-4029-b069-3ded304f4c85

📥 Commits

Reviewing files that changed from the base of the PR and between a6fcdbf and 1e7e553.

📒 Files selected for processing (6)
  • tests/conftest.py
  • tests/data/expected/main/cli_input_overrides_pyproject_url.py
  • tests/main/conftest.py
  • tests/main/test_main_general.py
  • tests/main/test_main_watch.py
  • tests/test_assert_helper_usage.py
✅ Files skipped from review due to trivial changes (2)
  • tests/data/expected/main/cli_input_overrides_pyproject_url.py
  • tests/main/test_main_watch.py

Owner Author

koxudaxi commented May 1, 2026

@coderabbitai review

@coderabbitai

coderabbitai Bot commented May 1, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Owner Author

koxudaxi commented May 1, 2026

@coderabbitai review

@coderabbitai

coderabbitai Bot commented May 1, 2026

✅ Actions performed

Review triggered.



@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/main/openapi/test_main_openapi.py (1)

4379-4400: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Preserve the HTTP fetch assertion in these URL $ref tests.

These two tests now register a mocked response but never verify that the external URL was actually requested. That weakens the regression coverage: a change that stops resolving the remote $ref could still leave these tests green. Please capture the returned mock and assert the expected URL, like the other HTTP tests in this file.

Suggested tweak
 def test_main_openapi_read_only_write_only_url_ref(mock_httpx_get: Callable[..., Any], output_file: Path) -> None:
     """Test readOnly/writeOnly with URL $ref to external schema."""
-    mock_httpx_get(
+    httpx_get_mock = mock_httpx_get(
         MockHttpxResponse(
             "https://example.com/common.yaml", OPEN_API_DATA_PATH / "read_only_write_only_url_ref_remote.yaml"
         )
     )
@@
     run_main_and_assert(
         input_path=OPEN_API_DATA_PATH / "read_only_write_only_url_ref.yaml",
         output_path=output_file,
         input_file_type="openapi",
         assert_func=assert_file_content,
@@
             "--allow-remote-refs",
         ],
     )
+    assert_httpx_get_kwargs(httpx_get_mock, expected_url="https://example.com/common.yaml")
 
 
 def test_main_openapi_read_only_write_only_allof_url_ref(mock_httpx_get: Callable[..., Any], output_file: Path) -> None:
     """Test readOnly/writeOnly with allOf that references external URL schema."""
-    mock_httpx_get(
+    httpx_get_mock = mock_httpx_get(
         MockHttpxResponse(
             "https://example.com/common.yaml",
             OPEN_API_DATA_PATH / "read_only_write_only_allof_url_ref_remote.yaml",
         )
     )
@@
     run_main_and_assert(
         input_path=OPEN_API_DATA_PATH / "read_only_write_only_allof_url_ref.yaml",
         output_path=output_file,
         input_file_type="openapi",
         assert_func=assert_file_content,
@@
             "--allow-remote-refs",
         ],
     )
+    assert_httpx_get_kwargs(httpx_get_mock, expected_url="https://example.com/common.yaml")

Also applies to: 4403-4425

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/main/openapi/test_main_openapi.py` around lines 4379 - 4400, The test
registers a mocked HTTP response but never verifies the external URL was
actually requested; update test_main_openapi_read_only_write_only_url_ref (and
the similar test at 4403-4425) to capture the mock returned by mock_httpx_get,
then assert that the mock was called for "https://example.com/common.yaml"
(e.g., check the mock's recorded request URL or call args) before calling
run_main_and_assert so the test fails if the remote $ref was not fetched.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/conftest.py`:
- Around line 844-850: The test only inspects mock_get.call_args so earlier
calls with wrong values can slip through; change the logic to iterate over
mock_get.call_args_list and for each recorded call retrieve call.kwargs and
assert that when verify is not None call_kwargs.get("verify") is verify and when
timeout is not None call_kwargs.get("timeout") == timeout (keep the existing
mock_get.assert_called() check but replace single-call inspection with a loop
over call_args_list to mirror params_contains behavior).

In `@tests/test_assert_helper_usage.py`:
- Around line 59-70: The _DirectAssertVisitor currently avoids nested functions
but still descends into nested classes, causing asserts inside helper classes to
be misattributed; add a visit_ClassDef method on _DirectAssertVisitor that
returns without calling generic_visit (similar to visit_FunctionDef /
visit_AsyncFunctionDef) so the visitor does not descend into ClassDef nodes and
nested-class asserts are not collected.
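A minimal sketch of that visitor shape — collect asserts at the scanned level and stop at function and class boundaries. The class name mirrors the guard's `_DirectAssertVisitor`; the real visitor carries more state than shown here:

```python
import ast


class _DirectAssertVisitor(ast.NodeVisitor):
    def __init__(self):
        self.asserts = []

    def visit_Assert(self, node):
        self.asserts.append(node)

    def visit_FunctionDef(self, node):
        return  # do not descend into nested functions

    def visit_AsyncFunctionDef(self, node):
        return  # same for nested async functions

    def visit_ClassDef(self, node):
        return  # the fix: do not descend into nested classes either


source = """
def test_example():
    assert own_check

    class Helper:
        def probe(self):
            assert misattributed_check
"""
function = ast.parse(source).body[0]
visitor = _DirectAssertVisitor()
for statement in function.body:
    visitor.visit(statement)
print(len(visitor.asserts))  # → 1
```

Without `visit_ClassDef`, the assert inside `Helper.probe` would be attributed to `test_example`.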

In `@tests/test_main_kr.py`:
- Around line 1524-1541: The test test_url_with_http_headers is missing an
explicit assertion that the Authorization header is forwarded to httpx.get;
capture the mock returned by mock_httpx_get (call mock_get =
mock_httpx_get(...)) before calling run_main_url_and_assert, then after
run_main_url_and_assert call assert_httpx_get_kwargs(mock_get,
headers={"Authorization": "Bearer token"}) to verify the header was passed; use
the existing MockHttpxResponse, mock_httpx_get and assert_httpx_get_kwargs
helpers referenced in the test.

---

Outside diff comments:
In `@tests/main/openapi/test_main_openapi.py`:
- Around line 4379-4400: The test registers a mocked HTTP response but never
verifies the external URL was actually requested; update
test_main_openapi_read_only_write_only_url_ref (and the similar test at
4403-4425) to capture the mock returned by mock_httpx_get, then assert that the
mock was called for "https://example.com/common.yaml" (e.g., check the mock's
recorded request URL or call args) before calling run_main_and_assert so the
test fails if the remote $ref was not fetched.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 8f3056a5-f857-4226-8758-498ff825e253

📥 Commits

Reviewing files that changed from the base of the PR and between 1e7e553 and c94b568.

⛔ Files ignored due to path filters (6)
  • tests/data/jsonschema/cli_url_overrides_pyproject_input.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/jsonschema/remote_ref/main.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/jsonschema/remote_ref/other.schema.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/jsonschema/url_relative_root_id/main.schema.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/jsonschema/url_relative_root_id/sub.schema.json is excluded by !tests/data/**/*.json and included by none
  • tests/data/openapi/http_openapi_with_custom_port.yaml is excluded by !tests/data/**/*.yaml and included by none
📒 Files selected for processing (6)
  • tests/conftest.py
  • tests/main/jsonschema/test_main_jsonschema.py
  • tests/main/openapi/test_main_openapi.py
  • tests/main/test_main_general.py
  • tests/test_assert_helper_usage.py
  • tests/test_main_kr.py

Comment thread tests/conftest.py Outdated
Comment thread tests/test_assert_helper_usage.py
Comment thread tests/test_main_kr.py
Owner Author

koxudaxi commented May 1, 2026

@coderabbitai review

@coderabbitai

coderabbitai Bot commented May 1, 2026

✅ Actions performed

Review triggered.



@coderabbitai coderabbitai Bot left a comment

🧹 Nitpick comments (1)
tests/main/jsonschema/test_main_jsonschema.py (1)

2190-2191: ⚡ Quick win

Finish the HTTP mock migration in these URL tests.

These assertions already use the shared request helper, but the response setup just above is still duplicating the old url_to_path / get_mock_response / mocker.patch("httpx.get", ...) scaffold. Switching the setup to mock_httpx_get + MockHttpxResponse would remove the remaining boilerplate and keep these tests on the same shared helper path as the rest of this PR.

Example of the shared setup pattern
httpx_get_mock = mock_httpx_get(
    *[
        MockHttpxResponse(url, external_directory / path)
        for url, path in url_to_path.items()
    ]
)

Also applies to: 2277-2285, 9068-9069

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/main/jsonschema/test_main_jsonschema.py` around lines 2190 - 2191,
Tests are still using the old manual httpx.get patch scaffolding for the URL
response setup; replace that boilerplate with the shared helper by creating
httpx_get_mock via mock_httpx_get and feeding it MockHttpxResponse instances for
each url,path in url_to_path (e.g. httpx_get_mock =
mock_httpx_get(*[MockHttpxResponse(url, external_directory / path) for url, path
in url_to_path.items()])), keep the existing assertions
assert_httpx_get_kwargs(httpx_get_mock, expected_urls=list(url_to_path),
any_order=True) and assert_httpx_get_kwargs(httpx_get_mock, call_count=8), and
apply the same replacement at the other occurrences referenced (around lines
2277-2285 and 9068-9069).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@tests/main/jsonschema/test_main_jsonschema.py`:
- Around line 2190-2191: Tests are still using the old manual httpx.get patch
scaffolding for the URL response setup; replace that boilerplate with the shared
helper by creating httpx_get_mock via mock_httpx_get and feeding it
MockHttpxResponse instances for each url,path in url_to_path (e.g.
httpx_get_mock = mock_httpx_get(*[MockHttpxResponse(url, external_directory /
path) for url, path in url_to_path.items()])), keep the existing assertions
assert_httpx_get_kwargs(httpx_get_mock, expected_urls=list(url_to_path),
any_order=True) and assert_httpx_get_kwargs(httpx_get_mock, call_count=8), and
apply the same replacement at the other occurrences referenced (around lines
2277-2285 and 9068-9069).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 3d7e66d0-58b6-41df-8226-253af4bec104

📥 Commits

Reviewing files that changed from the base of the PR and between c94b568 and aeba2ca.

📒 Files selected for processing (3)
  • tests/data/expected/main/jsonschema/no_empty_collapsed_external_model/__init__.py
  • tests/data/expected/main/jsonschema/no_empty_collapsed_external_model/parent.py
  • tests/main/jsonschema/test_main_jsonschema.py
✅ Files skipped from review due to trivial changes (2)
  • tests/data/expected/main/jsonschema/no_empty_collapsed_external_model/__init__.py
  • tests/data/expected/main/jsonschema/no_empty_collapsed_external_model/parent.py

Owner Author

koxudaxi commented May 1, 2026

@coderabbitai review

@coderabbitai

coderabbitai Bot commented May 1, 2026

✅ Actions performed

Review triggered.


Owner Author

koxudaxi commented May 1, 2026

@coderabbitai review

@coderabbitai

coderabbitai Bot commented May 1, 2026

✅ Actions performed

Review triggered.



@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/main/jsonschema/test_main_jsonschema.py (1)

2207-2214: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Broaden headers_arguments to match all parametrized cases.

One test case passes a single header, so tuple[str, str] is too narrow here and will trip type checking. Use a variadic tuple type instead.

♻️ Suggested fix
-    headers_arguments: tuple[str, str],
+    headers_arguments: tuple[str, ...],
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/main/jsonschema/test_main_jsonschema.py` around lines 2207 - 2214, The
parameter type for headers_arguments in
test_main_http_jsonschema_with_http_headers_and_http_query_parameters_and_ignore_tls
is too narrow (tuple[str, str]) and should accept any-length header argument
tuples; change the annotation to a variadic tuple type (tuple[str, ...]) so it
matches single- and multi-header parametrized cases and fixes the type-checking
error for headers_arguments in that test.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/test_assert_helper_usage.py`:
- Around line 83-101: _collect_direct_asserts currently only looks inside
function nodes and misses module-level ast.Assert nodes; update it to also scan
the module body for top-level ast.Assert statements and add corresponding
DirectAssert entries. Specifically, after parsing the tree and collecting from
_collect_function_asserts, iterate over tree.body to find ast.Assert nodes and
extend direct_asserts with DirectAssert(path=path.relative_to(TESTS_ROOT),
function_name="<module>", lineno=assert_node.lineno,
statement=_statement(assert_node, source)) (or equivalent) so module-level
asserts are reported; keep using _statement, TESTS_ROOT and the DirectAssert
type to construct the entries.
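The module-level scan the comment asks for amounts to checking `tree.body` directly. `DirectAssert` is simplified to a tuple here, since the real dataclass (and `_statement`/`TESTS_ROOT`) live in the guard; the flag names in the sample source are invented:

```python
import ast

source = """\
assert MODULE_LEVEL_FLAG  # top-level assert, previously missed

def test_something():
    assert function_level_flag
"""

tree = ast.parse(source)
# Top-level asserts get attributed to "<module>" rather than a function name.
module_entries = [
    ("<module>", node.lineno)
    for node in tree.body
    if isinstance(node, ast.Assert)
]
print(module_entries)  # → [('<module>', 1)]
```

The function-level assert is untouched: it is still reported by the existing per-function collection, so the module scan only extends the result list.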

---

Outside diff comments:
In `@tests/main/jsonschema/test_main_jsonschema.py`:
- Around line 2207-2214: The parameter type for headers_arguments in
test_main_http_jsonschema_with_http_headers_and_http_query_parameters_and_ignore_tls
is too narrow (tuple[str, str]) and should accept any-length header argument
tuples; change the annotation to a variadic tuple type (tuple[str, ...]) so it
matches single- and multi-header parametrized cases and fixes the type-checking
error for headers_arguments in that test.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: c879e7fd-11fa-457c-96cc-7a752622a1b7

📥 Commits

Reviewing files that changed from the base of the PR and between aeba2ca and 0be3071.

📒 Files selected for processing (5)
  • tests/conftest.py
  • tests/main/jsonschema/test_main_jsonschema.py
  • tests/main/openapi/test_main_openapi.py
  • tests/test_assert_helper_usage.py
  • tests/test_main_kr.py

Comment thread tests/test_assert_helper_usage.py
Owner Author

koxudaxi commented May 1, 2026

@coderabbitai review

@coderabbitai

coderabbitai Bot commented May 1, 2026

✅ Actions performed

Review triggered.



Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants