Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
No actionable comments were generated in the recent review. 🎉
ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (2)
🚧 Files skipped from review as they are similar to previous changes (1)
📝 Walkthrough
Adds test-harness utilities (httpx mocking, HTTP-call assertions, generated-output and warning/stderr helpers, AST-based direct-assert enforcement), declares a pytest marker, updates CodeQL to ignore expected test data, and adds many generated expected-data fixtures while refactoring tests to use the new helpers.
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Test as Test
    participant Mock as mock_httpx_get
    participant HTTP as httpx.get
    participant FS as FileSystem
    participant Assert as assert_httpx_get_kwargs
    Test->>Mock: register MockHttpxResponse(url, status, body)
    Test->>HTTP: exercise code that calls httpx.get(url, **opts)
    HTTP->>Mock: lookup response queue for URL
    Mock-->>HTTP: return MockHttpxResponse(response)
    HTTP-->>Test: code receives mocked response
    Test->>FS: generate/write files via generate(..., output=...)
    Test->>Assert: verify httpx calls (urls, headers, params, timeout, call_count)
    Assert-->>Test: pass / raise detailed failure
```
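The flow above can be sketched as a small fixture-style helper. This is a hypothetical, simplified stand-in for the harness's `mock_httpx_get` (the real fixture presumably monkeypatches `httpx.get`; the names here just mirror the diagram):

```python
from dataclasses import dataclass
from unittest.mock import MagicMock


@dataclass
class MockHttpxResponse:
    """Canned response served in place of a real httpx.Response."""

    url: str
    status_code: int = 200
    text: str = ""


def make_mock_httpx_get(responses: list) -> MagicMock:
    """Build a MagicMock standing in for httpx.get, serving canned
    responses keyed by URL (the diagram's per-URL lookup step)."""
    queue: dict = {}
    for response in responses:
        queue.setdefault(response.url, []).append(response)

    def _get(url, **kwargs):
        pending = queue.get(url)
        if not pending:
            raise AssertionError(f"unexpected httpx.get call: {url}")
        return pending.pop(0)

    # MagicMock records call args/kwargs for later assertions
    return MagicMock(side_effect=_get)
```

A test would register responses up front, exercise the code under test, and then assert against the mock's recorded calls.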
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Suggested labels
Suggested reviewers
Poem
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches
📝 Generate docstrings
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Review rate limit: 4/8 reviews remaining; refill in 23 minutes and 14 seconds.
📚 Docs Preview: https://pr-3108.datamodel-code-generator.pages.dev
Merging this PR will not alter performance
Codecov Report
✅ All modified and coverable lines are covered by tests.
Additional details and impacted files

```
@@            Coverage Diff            @@
##              main     #3108   +/-   ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files           87        88    +1
  Lines        18306     18297    -9
  Branches      2091      2115   +24
=========================================
- Hits         18306     18297    -9
```

Flags with carried forward coverage won't be shown. Click here to find out more.
☔ View full report in Codecov by Sentry.
Force-pushed from 5bee535 to 15856f4 (Compare)
Force-pushed from 15856f4 to 5518ab7 (Compare)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
tests/main/jsonschema/test_main_jsonschema.py (1)
1005-1032: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
Assert the resolved remote URL here too.
Right now this test only proves a `FutureWarning` was emitted. Add an `assert_httpx_get_kwargs(...)` check so the relative `$ref` still resolves to the intended remote schema URL; otherwise a wrong fetch path could regress without this test catching it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/main/jsonschema/test_main_jsonschema.py` around lines 1005 - 1032, The test test_main_remote_ref_emits_deprecation_warning currently only asserts a FutureWarning; add an assertion that the mocked HTTP fetch was called with the expected resolved remote URL by invoking assert_httpx_get_kwargs(...) after run_main_and_assert to verify the relative $ref "../other/schema.json#/definitions/Thing" resolved to "https://example.com/other/schema.json" (or the exact expected URL used by your resolver). Use the existing mock_httpx_get and assert_httpx_get_kwargs helpers to check the request URL/path and any relevant kwargs so the test fails if the resolver builds the wrong remote URL.
tests/main/conftest.py (2)
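As a rough illustration of what such a check does, here is a hypothetical, stripped-down version of `assert_httpx_get_kwargs` reduced to URL matching (the real helper accepts more keyword arguments):

```python
from unittest.mock import MagicMock


def assert_httpx_get_url(mock_get: MagicMock, expected_url: str) -> None:
    # Hypothetical, simplified stand-in for assert_httpx_get_kwargs:
    # prove the mocked fetch actually hit the resolved remote URL,
    # instead of only asserting that a warning was emitted.
    if not mock_get.call_args_list:
        raise AssertionError("httpx.get was never called")
    called_url = mock_get.call_args_list[-1].args[0]
    if called_url != expected_url:
        raise AssertionError(f"expected GET {expected_url!r}, got {called_url!r}")
```

With this in place, a resolver that builds the wrong remote URL fails the test loudly rather than passing on the warning alone.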
461-468: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
Copy shared fixtures in the `stdin_path` branch too.
`copy_files` is honored for the file-input path, but stdin-driven runs skip `_copy_files(copy_files)`, so they can observe a different workspace than the helper contract implies.
Suggested fix

```diff
 if stdin_path is not None:
     if monkeypatch is None:  # pragma: no cover
         pytest.fail("monkeypatch is required when using stdin_path")
+    _copy_files(copy_files)
     monkeypatch.setattr("sys.stdin", stdin_path.open(encoding="utf-8"))
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/main/conftest.py` around lines 461 - 468, The stdin_path branch in the test fixture skips calling _copy_files(copy_files), causing stdin-driven tests to run without the shared fixtures; modify the stdin_path branch so that it invokes _copy_files(copy_files) before setting sys.stdin and calling main(args), and ensure it still uses monkeypatch.setattr("sys.stdin", stdin_path.open(...)), builds args via _extend_args(output_path=..., input_file_type=..., extra_args=...), and then calls return_code = main(args) so behavior matches the file-input path.
484-500: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Fail fast when capture assertions are requested.
This branch silently skips stdout/stderr checks whenever `capsys` is `None`, so a misconfigured caller can get a false-green test instead of an assertion. Reuse `_assert_captured_output()` here or fail immediately when any capture expectation is set without `capsys`.
Suggested fix

```diff
-    if capsys is not None and (
-        expected_stdout_path is not None
-        or expected_stderr is not None
-        or expected_stderr_contains is not None
-        or assert_no_stderr
-    ):
-        captured = capsys.readouterr()
-        ...
+    _assert_captured_output(
+        capsys,
+        expected_stdout_path=expected_stdout_path,
+        expected_stderr=expected_stderr,
+        expected_stderr_contains=expected_stderr_contains,
+        assert_no_stderr=assert_no_stderr,
+    )
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/main/conftest.py` around lines 484 - 500, The current capture-check branch silently skips assertions when capsys is None causing false-green tests; update the block that checks capsys and expected_stdout_path/expected_stderr/expected_stderr_contains/assert_no_stderr to immediately fail (pytest.fail with a clear message) if capsys is None while any of those expectations are set, or simply call the existing helper _assert_captured_output(capsys, expected_stdout_path, expected_stderr, expected_stderr_contains, assert_no_stderr) instead of skipping; reference capsys, expected_stdout_path, expected_stderr, expected_stderr_contains, assert_no_stderr and _assert_captured_output to locate and replace the silent-skip logic.
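A dependency-free sketch of that fail-fast behavior (the real helper would call `pytest.fail`; the parameter names follow the review, the body is assumed):

```python
def check_captured_output(capsys, *, expected_stderr=None, assert_no_stderr=False):
    """Hypothetical sketch of the fail-fast guard the review asks for.

    If any capture expectation is set but capsys was not supplied, fail
    immediately instead of silently skipping the checks. A plain
    AssertionError keeps the sketch dependency-free; the real helper
    would raise via pytest.fail().
    """
    has_expectations = expected_stderr is not None or assert_no_stderr
    if capsys is None:
        if has_expectations:
            raise AssertionError("capsys is required when capture expectations are set")
        return  # nothing requested, nothing to verify
    captured = capsys.readouterr()
    if expected_stderr is not None and captured.err != expected_stderr:
        raise AssertionError(f"stderr mismatch: {captured.err!r}")
    if assert_no_stderr and captured.err:
        raise AssertionError(f"unexpected stderr: {captured.err!r}")
```

The key point is that a caller who forgets to pass `capsys` while asking for stderr checks now gets an immediate failure instead of a false-green test.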
🧹 Nitpick comments (1)
tests/conftest.py (1)
897-905: ⚡ Quick win
Keep the file comparison strict.
Stripping both sides can hide leading/trailing blank-line regressions and makes this helper behave differently from the other snapshot assertions. Compare normalized line endings directly instead.
♻️ Proposed fix

```diff
-    if result.strip() != actual.strip():  # pragma: no cover
-        diff = _format_diff(result.strip(), actual.strip(), output_file)
+    expected = _normalize_line_endings(result)
+    actual = _normalize_line_endings(actual)
+    if expected != actual:  # pragma: no cover
+        diff = _format_diff(expected, actual, output_file)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/conftest.py` around lines 897 - 905, The comparison in assert_generated_file_matches_output currently strips both strings which hides leading/trailing blank-line regressions; modify the function (assert_generated_file_matches_output) to compare the generation and file contents without stripping by normalizing line endings (e.g., convert CRLF to LF on both result and actual via .replace("\r\n", "\n")) and then comparing the normalized strings directly, keeping the existing diff/error behavior when they differ.
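A minimal sketch of the suggested normalization (the helper names follow the proposed fix; how it wires into `assert_generated_file_matches_output` is assumed):

```python
def _normalize_line_endings(text: str) -> str:
    # Convert CRLF (and any stray CR) to LF so the comparison ignores
    # platform line endings but stays strict about blank lines.
    return text.replace("\r\n", "\n").replace("\r", "\n")


def files_match(expected: str, actual: str) -> bool:
    # Strict comparison: no strip(), so leading/trailing blank-line
    # regressions still surface as failures.
    return _normalize_line_endings(expected) == _normalize_line_endings(actual)
```

Unlike `strip()`, this keeps a trailing-newline difference visible while still tolerating CRLF checkouts.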
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/main/jsonschema/test_main_jsonschema.py`:
- Around line 775-802: Update the two tests test_main_class_name_prefix_invalid
and test_main_class_name_suffix_invalid to assert the CLI failure message rather
than only Exit.ERROR by passing the stderr assertion to run_main_with_args
(e.g., add expected_stderr_contains="class name prefix must match" for the
prefix case and expected_stderr_contains="class name suffix must match" for the
suffix case, or reuse the new helper name expected_stderr_contains if provided),
so the tests verify the validation logic in run_main_with_args instead of any
generic error path.
In `@tests/test_assert_helper_usage.py`:
- Around line 51-68: In _collect_direct_asserts the code uses ast.walk(function)
which descends into nested FunctionDef/AsyncFunctionDef and causes asserts in
inner helpers to be attributed to the outer function; replace that traversal
with a visitor that yields Assert nodes only from the outer function body (i.e.,
walk child nodes but do not recurse into nodes of type ast.FunctionDef or
ast.AsyncFunctionDef) so DirectAssert entries use the actual assertion site;
keep references to DirectAssert(path=..., function_name=function.name,
lineno=..., statement=_statement(node, source)) and the existing
_allows_direct_assert and TESTS_ROOT usage when creating the entries.
---
Outside diff comments:
In `@tests/main/conftest.py`:
- Around line 461-468: The stdin_path branch in the test fixture skips calling
_copy_files(copy_files), causing stdin-driven tests to run without the shared
fixtures; modify the stdin_path branch so that it invokes
_copy_files(copy_files) before setting sys.stdin and calling main(args), and
ensure it still uses monkeypatch.setattr("sys.stdin", stdin_path.open(...)),
builds args via _extend_args(output_path=..., input_file_type=...,
extra_args=...), and then calls return_code = main(args) so behavior matches the
file-input path.
- Around line 484-500: The current capture-check branch silently skips
assertions when capsys is None causing false-green tests; update the block that
checks capsys and
expected_stdout_path/expected_stderr/expected_stderr_contains/assert_no_stderr
to immediately fail (pytest.fail with a clear message) if capsys is None while
any of those expectations are set, or simply call the existing helper
_assert_captured_output(capsys, expected_stdout_path, expected_stderr,
expected_stderr_contains, assert_no_stderr) instead of skipping; reference
capsys, expected_stdout_path, expected_stderr, expected_stderr_contains,
assert_no_stderr and _assert_captured_output to locate and replace the
silent-skip logic.
In `@tests/main/jsonschema/test_main_jsonschema.py`:
- Around line 1005-1032: The test test_main_remote_ref_emits_deprecation_warning
currently only asserts a FutureWarning; add an assertion that the mocked HTTP
fetch was called with the expected resolved remote URL by invoking
assert_httpx_get_kwargs(...) after run_main_and_assert to verify the relative
$ref "../other/schema.json#/definitions/Thing" resolved to
"https://example.com/other/schema.json" (or the exact expected URL used by your
resolver). Use the existing mock_httpx_get and assert_httpx_get_kwargs helpers
to check the request URL/path and any relevant kwargs so the test fails if the
resolver builds the wrong remote URL.
---
Nitpick comments:
In `@tests/conftest.py`:
- Around line 897-905: The comparison in assert_generated_file_matches_output
currently strips both strings which hides leading/trailing blank-line
regressions; modify the function (assert_generated_file_matches_output) to
compare the generation and file contents without stripping by normalizing line
endings (e.g., convert CRLF to LF on both result and actual via .replace("\r\n",
"\n")) and then comparing the normalized strings directly, keeping the existing
diff/error behavior when they differ.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 518c2db4-dfa7-4d82-a2a2-6d4f1cbdc4eb
⛔ Files ignored due to path filters (5)
tests/data/expected/main_kr/generate_cli_command/with_profile.txt is excluded by `!tests/data/**/*.txt` and included by none
tests/data/expected/main_kr/generate_prompt/basic.txt is excluded by `!tests/data/**/*.txt` and included by none
tests/data/expected/main_kr/generate_prompt/with_list_options.txt is excluded by `!tests/data/**/*.txt` and included by none
tests/data/expected/main_kr/generate_prompt/with_options.txt is excluded by `!tests/data/**/*.txt` and included by none
tests/data/expected/main_kr/generate_prompt/with_question.txt is excluded by `!tests/data/**/*.txt` and included by none
📒 Files selected for processing (96)
- .github/workflows/codeql.yaml
- pyproject.toml
- tests/conftest.py
- tests/data/expected/main/cli_url_overrides_pyproject_input.py
- tests/data/expected/main/format_code_fallback_on_error.py
- tests/data/expected/main/generate_parent_scoped_naming_backward_compat.py
- tests/data/expected/main/generate_returns_dict_for_multiple_modules/__init__.py
- tests/data/expected/main/generate_returns_dict_for_multiple_modules/address.py
- tests/data/expected/main/generate_returns_dict_for_multiple_modules/main.py
- tests/data/expected/main/generate_returns_string_when_output_none.py
- tests/data/expected/main/generate_returns_string_with_custom_file_header.py
- tests/data/expected/main/generate_returns_string_with_custom_file_header_and_code.py
- tests/data/expected/main/generate_returns_string_with_custom_file_header_no_future.py
- tests/data/expected/main/generate_returns_string_with_dataclass.py
- tests/data/expected/main/generate_returns_string_with_pydantic_v2.py
- tests/data/expected/main/generate_with_config_object.py
- tests/data/expected/main/generate_with_imported_config_from_top_level.py
- tests/data/expected/main/jsonschema/all_exports_multi_file_ruff/__init__.py
- tests/data/expected/main/jsonschema/all_exports_multi_file_ruff/order.py
- tests/data/expected/main/jsonschema/all_exports_multi_file_ruff/product.py
- tests/data/expected/main/jsonschema/all_exports_multi_file_ruff/user.py
- tests/data/expected/main/jsonschema/field_validators_all_skipped.py
- tests/data/expected/main/jsonschema/field_validators_with_no_field_skipped.py
- tests/data/expected/main/jsonschema/url_relative_root_id_resolves_relative_refs/__init__.py
- tests/data/expected/main/jsonschema/url_relative_root_id_resolves_relative_refs/sub.py
- tests/data/expected/main/openapi/all_exports_scope_recursive_collision_avoided_by_renaming/__init__.py
- tests/data/expected/main/openapi/all_exports_scope_recursive_collision_avoided_by_renaming/_internal.py
- tests/data/expected/main/openapi/all_exports_scope_recursive_collision_avoided_by_renaming/bar.py
- tests/data/expected/main/openapi/all_exports_scope_recursive_collision_avoided_by_renaming/collections.py
- tests/data/expected/main/openapi/all_exports_scope_recursive_collision_avoided_by_renaming/foo/__init__.py
- tests/data/expected/main/openapi/all_exports_scope_recursive_collision_avoided_by_renaming/foo/bar.py
- tests/data/expected/main/openapi/all_exports_scope_recursive_collision_avoided_by_renaming/models.py
- tests/data/expected/main/openapi/all_exports_scope_recursive_collision_avoided_by_renaming/nested/__init__.py
- tests/data/expected/main/openapi/all_exports_scope_recursive_collision_avoided_by_renaming/nested/foo.py
- tests/data/expected/main/openapi/all_exports_scope_recursive_collision_avoided_by_renaming/woo/__init__.py
- tests/data/expected/main/openapi/all_exports_scope_recursive_collision_avoided_by_renaming/woo/boo.py
- tests/data/expected/main/openapi/external_ref_mapping_absolute_path.py
- tests/data/expected/main/openapi/external_ref_mapping_file_uri.py
- tests/data/expected/main/openapi/external_ref_mapping_local_ref_unchanged.py
- tests/data/expected/main/openapi/external_ref_mapping_normalized.py
- tests/data/expected/main/openapi/external_ref_mapping_without_flag.py
- tests/data/expected/main/openapi/format_code_fallback_on_error_init_exports/__init__.py
- tests/data/expected/main/openapi/format_code_fallback_on_error_init_exports/_internal.py
- tests/data/expected/main/openapi/format_code_fallback_on_error_init_exports/bar.py
- tests/data/expected/main/openapi/format_code_fallback_on_error_init_exports/collections.py
- tests/data/expected/main/openapi/format_code_fallback_on_error_init_exports/foo/__init__.py
- tests/data/expected/main/openapi/format_code_fallback_on_error_init_exports/foo/bar.py
- tests/data/expected/main/openapi/format_code_fallback_on_error_init_exports/models.py
- tests/data/expected/main/openapi/format_code_fallback_on_error_init_exports/nested/__init__.py
- tests/data/expected/main/openapi/format_code_fallback_on_error_init_exports/nested/foo.py
- tests/data/expected/main/openapi/format_code_fallback_on_error_init_exports/woo/__init__.py
- tests/data/expected/main/openapi/format_code_fallback_on_error_init_exports/woo/boo.py
- tests/data/expected/main/openapi/generate_multi_module_pydantic_ruff_defaults_to_runtime_imports/__init__.py
- tests/data/expected/main/openapi/generate_multi_module_pydantic_ruff_defaults_to_runtime_imports/_internal.py
- tests/data/expected/main/openapi/generate_multi_module_pydantic_ruff_defaults_to_runtime_imports/bar.py
- tests/data/expected/main/openapi/generate_multi_module_pydantic_ruff_defaults_to_runtime_imports/collections.py
- tests/data/expected/main/openapi/generate_multi_module_pydantic_ruff_defaults_to_runtime_imports/foo/__init__.py
- tests/data/expected/main/openapi/generate_multi_module_pydantic_ruff_defaults_to_runtime_imports/foo/bar.py
- tests/data/expected/main/openapi/generate_multi_module_pydantic_ruff_defaults_to_runtime_imports/models.py
- tests/data/expected/main/openapi/generate_multi_module_pydantic_ruff_defaults_to_runtime_imports/nested/__init__.py
- tests/data/expected/main/openapi/generate_multi_module_pydantic_ruff_defaults_to_runtime_imports/nested/foo.py
- tests/data/expected/main/openapi/generate_multi_module_pydantic_ruff_defaults_to_runtime_imports/woo/__init__.py
- tests/data/expected/main/openapi/generate_multi_module_pydantic_ruff_defaults_to_runtime_imports/woo/boo.py
- tests/data/expected/main/openapi/http_openapi_with_custom_port.py
- tests/data/expected/main/openapi/init_exports_without_formatting/__init__.py
- tests/data/expected/main/openapi/init_exports_without_formatting/_internal.py
- tests/data/expected/main/openapi/init_exports_without_formatting/bar.py
- tests/data/expected/main/openapi/init_exports_without_formatting/collections.py
- tests/data/expected/main/openapi/init_exports_without_formatting/foo/__init__.py
- tests/data/expected/main/openapi/init_exports_without_formatting/foo/bar.py
- tests/data/expected/main/openapi/init_exports_without_formatting/models.py
- tests/data/expected/main/openapi/init_exports_without_formatting/nested/__init__.py
- tests/data/expected/main/openapi/init_exports_without_formatting/nested/foo.py
- tests/data/expected/main/openapi/init_exports_without_formatting/woo/__init__.py
- tests/data/expected/main/openapi/init_exports_without_formatting/woo/boo.py
- tests/data/expected/main/openapi/use_type_checking_imports_for_multi_module_pydantic_ruff/__init__.py
- tests/data/expected/main/openapi/use_type_checking_imports_for_multi_module_pydantic_ruff/_internal.py
- tests/data/expected/main/openapi/use_type_checking_imports_for_multi_module_pydantic_ruff/bar.py
- tests/data/expected/main/openapi/use_type_checking_imports_for_multi_module_pydantic_ruff/collections.py
- tests/data/expected/main/openapi/use_type_checking_imports_for_multi_module_pydantic_ruff/foo/__init__.py
- tests/data/expected/main/openapi/use_type_checking_imports_for_multi_module_pydantic_ruff/foo/bar.py
- tests/data/expected/main/openapi/use_type_checking_imports_for_multi_module_pydantic_ruff/models.py
- tests/data/expected/main/openapi/use_type_checking_imports_for_multi_module_pydantic_ruff/nested/__init__.py
- tests/data/expected/main/openapi/use_type_checking_imports_for_multi_module_pydantic_ruff/nested/foo.py
- tests/data/expected/main/openapi/use_type_checking_imports_for_multi_module_pydantic_ruff/woo/__init__.py
- tests/data/expected/main/openapi/use_type_checking_imports_for_multi_module_pydantic_ruff/woo/boo.py
- tests/data/expected/main_kr/main_modular_no_file/output.py
- tests/data/expected/main_kr/pyproject_profile/ignore_pyproject_with_profile.py
- tests/main/conftest.py
- tests/main/jsonschema/test_main_jsonschema.py
- tests/main/openapi/test_main_openapi.py
- tests/main/test_exec_validation.py
- tests/main/test_main_general.py
- tests/main/test_main_watch.py
- tests/test_assert_helper_usage.py
- tests/test_main_kr.py
Actionable comments posted: 1
🧹 Nitpick comments (1)
tests/main/test_main_general.py (1)
1509-1523: ⚡ Quick win
Prefer fixture-based golden output over inline snapshot in this migrated e2e path.
This test still uses `snapshot(...)` while adjacent changes standardize on shared expected-output fixtures. Converting this one too keeps assertion style consistent and reduces inline expected-body drift.
♻️ Proposed refactor

```diff
-from inline_snapshot import snapshot
+from inline_snapshot import snapshot
@@
 def test_cli_input_overrides_pyproject_url(
     output_file: Path, tmp_path: Path, mock_httpx_get: Callable[..., Any]
 ) -> None:
@@
     with chdir(tmp_path):
         run_main_and_assert(
             input_path=cli_input.relative_to(tmp_path),
             output_path=output_file,
             extra_args=["--disable-timestamp"],
-            expected_output=snapshot(
-                """\
-# generated by datamodel-codegen:
-# filename: cli.json
-
-from __future__ import annotations
-
-from pydantic import BaseModel
-
-
-class Model(BaseModel):
-    from_input: str | None = None
-"""
-            ),
+            assert_func=assert_file_content,
+            expected_file="cli_input_overrides_pyproject_url.py",
         )
@@
-import black
-import pytest
-from inline_snapshot import snapshot
+import black
+import pytest
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/main/test_main_general.py` around lines 1509 - 1523, The test currently embeds the expected-output via the inline snapshot(...) call stored in the expected_output= argument; convert it to use the shared fixture-based golden output like adjacent tests by removing the inline snapshot(...) and referencing the common expected-output fixture (the same fixture other tests use) and placing the multi-line expected string into that shared fixture file (named/organized consistently with other golden files); update the test to pass the fixture value into expected_output instead of snapshot(...) so the assertion style matches neighboring tests.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/conftest.py`:
- Around line 784-829: The helper assert_httpx_get_kwargs has two issues:
called=True is ignored and params_contains only inspects the last call; update
assert_httpx_get_kwargs (the function) so that if called is True it calls
mock_get.assert_called() immediately, and when params_contains is provided
validate every recorded call by iterating mock_get.call_args_list (not just
mock_get.call_args) — for each call extract kwargs["params"] into a dict and
assert the params_contains keys/values are present (and keep the existing
behavior for single-call checks and expected_urls), ensuring this validation
works whether expected_url/expected_urls or call_count are used.
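The per-call validation the prompt describes might look like this. A hypothetical, simplified fragment; the real `assert_httpx_get_kwargs` takes more parameters, this only shows iterating `call_args_list` instead of the last call:

```python
from unittest.mock import MagicMock


def assert_params_contains_every_call(mock_get: MagicMock, params_contains: dict) -> None:
    """Check that every recorded httpx.get call carried the expected
    params, not just the most recent one (mock_get.call_args)."""
    if not mock_get.call_args_list:
        raise AssertionError("httpx.get was never called")
    for index, call in enumerate(mock_get.call_args_list):
        # call.kwargs is the keyword-argument dict recorded for this call
        params = dict(call.kwargs.get("params") or {})
        for key, value in params_contains.items():
            if params.get(key) != value:
                raise AssertionError(
                    f"call #{index}: expected params[{key!r}] == {value!r}, "
                    f"got {params.get(key)!r}"
                )
```

Iterating `call_args_list` is what makes a mismatch in an early call fail the assertion even when the final call happens to be correct.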
---
Nitpick comments:
In `@tests/main/test_main_general.py`:
- Around line 1509-1523: The test currently embeds the expected-output via the
inline snapshot(...) call stored in the expected_output= argument; convert it to
use the shared fixture-based golden output like adjacent tests by removing the
inline snapshot(...) and referencing the common expected-output fixture (the
same fixture other tests use) and placing the mult-line expected string into
that shared fixture file (named/organized consistently with other golden files);
update the test to pass the fixture value into expected_output instead of
snapshot(...) so the assertion style matches neighboring tests.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 633e6e0d-3005-4a90-85e7-9f99dd7279ab
📒 Files selected for processing (5)
- tests/conftest.py
- tests/main/conftest.py
- tests/main/jsonschema/test_main_jsonschema.py
- tests/main/test_main_general.py
- tests/test_assert_helper_usage.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/main/conftest.py
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/test_assert_helper_usage.py`:
- Around line 40-43: The _allows_direct_assert function currently uses
ast.unparse(...).startswith("pytest.mark.allow_direct_assert") which can falsely
match other decorators with that prefix; change it to check for an exact match
of the opt-in marker (e.g., compare ast.unparse(decorator) ==
"pytest.mark.allow_direct_assert" or otherwise resolve the decorator AST to the
exact dotted name) so only the exact decorator on function.decorator_list opts
out of the guard.
- Around line 129-141: The test only verifies the outer function ignores the
nested async helper but doesn't assert the nested helper is collected; modify
test_collect_function_asserts_ignores_nested_async_helpers to also locate the
nested async function (the AsyncFunctionDef named "assert_later" found either by
ast.walk or by inspecting function.body) and call _collect_function_asserts on
that nested node, asserting it returns the expected assert (i.e., not an empty
list) so the nested helper is reported when scanned directly.
- Around line 143-167: The test uses the type annotation pytest.MockerFixture
which doesn't exist; import and use MockerFixture from the pytest_mock package
instead. Add "from pytest_mock import MockerFixture" and change the mocker
parameter annotation in functions like
test_assert_httpx_get_kwargs_accepts_expected_urls_with_explicit_call_count (and
any other tests using pytest.MockerFixture) to use MockerFixture (e.g., def
test_...(mocker: MockerFixture) -> None) so type checkers (mypy/pyright) resolve
the fixture type correctly.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 51f7c7cf-c036-4ec4-b0fe-2e67ddd2da7c
📒 Files selected for processing (1)
tests/test_assert_helper_usage.py
♻️ Duplicate comments (2)
tests/test_assert_helper_usage.py (2)
143-213: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
Import `MockerFixture` from `pytest_mock` instead of `pytest`.
`pytest.MockerFixture` is not provided by `pytest`; these annotations should use `pytest_mock.MockerFixture` to keep typing accurate.
Suggested fix

```diff
 import pytest
+from pytest_mock import MockerFixture
@@
-def test_assert_httpx_get_kwargs_accepts_expected_urls_with_explicit_call_count(mocker: pytest.MockerFixture) -> None:
+def test_assert_httpx_get_kwargs_accepts_expected_urls_with_explicit_call_count(mocker: MockerFixture) -> None:
@@
-def test_assert_httpx_get_kwargs_accepts_called_true(mocker: pytest.MockerFixture) -> None:
+def test_assert_httpx_get_kwargs_accepts_called_true(mocker: MockerFixture) -> None:
@@
-def test_assert_httpx_get_kwargs_validates_params_contains_for_every_call(mocker: pytest.MockerFixture) -> None:
+def test_assert_httpx_get_kwargs_validates_params_contains_for_every_call(mocker: MockerFixture) -> None:
@@
-def test_assert_httpx_get_kwargs_reports_params_contains_mismatch_per_call(mocker: pytest.MockerFixture) -> None:
+def test_assert_httpx_get_kwargs_reports_params_contains_mismatch_per_call(mocker: MockerFixture) -> None:
```

```bash
#!/bin/bash
set -euo pipefail
echo "Check for invalid pytest.MockerFixture annotations:"
rg -nP 'pytest\.MockerFixture' tests/test_assert_helper_usage.py
echo
echo "Verify pytest does not expose MockerFixture:"
python - <<'PY'
import pytest
print("hasattr(pytest, 'MockerFixture') =", hasattr(pytest, "MockerFixture"))
PY
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_assert_helper_usage.py` around lines 143 - 213, The test annotations incorrectly use pytest.MockerFixture (which doesn't exist); update the import and type hints to use pytest_mock.MockerFixture instead: add/replace the import to bring in MockerFixture from the pytest_mock module and change annotations in functions like test_assert_httpx_get_kwargs_accepts_expected_urls_with_explicit_call_count, test_assert_httpx_get_kwargs_accepts_called_true, test_assert_httpx_get_kwargs_validates_params_contains_for_every_call, and test_assert_httpx_get_kwargs_reports_params_contains_mismatch_per_call from pytest.MockerFixture to pytest_mock.MockerFixture (or import MockerFixture directly and use MockerFixture in the signatures).
40-43: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
Use exact marker matching for `allow_direct_assert`.
`startswith("pytest.mark.allow_direct_assert")` can accidentally whitelist unrelated decorators with the same prefix. Match the marker name exactly (optionally allowing call syntax).
Suggested fix

```diff
 def _allows_direct_assert(function: ast.FunctionDef | ast.AsyncFunctionDef) -> bool:
-    return any(
-        ast.unparse(decorator).startswith("pytest.mark.allow_direct_assert") for decorator in function.decorator_list
-    )
+    return any(
+        ast.unparse(decorator).split("(", 1)[0] == "pytest.mark.allow_direct_assert"
+        for decorator in function.decorator_list
+    )
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_assert_helper_usage.py` around lines 40 - 43, The current _allows_direct_assert uses ast.unparse().startswith(...) which can false-match other decorators; change it to perform exact AST matching: for each decorator in function.decorator_list, if it's an ast.Call use decorator.func else use the node itself, then check that the node is an ast.Attribute chain matching pytest -> mark -> allow_direct_assert (i.e. node is ast.Attribute with .attr == "allow_direct_assert", its .value is ast.Attribute with .attr == "mark", and its .value.value is ast.Name with .id == "pytest"); return True if any decorator matches this exact structure (thereby allowing optional call syntax but preventing prefix collisions).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@tests/test_assert_helper_usage.py`:
- Around line 143-213: The test annotations incorrectly use pytest.MockerFixture
(which doesn't exist); update the import and type hints to use
pytest_mock.MockerFixture instead: add/replace the import to bring in
MockerFixture from the pytest_mock module and change annotations in functions
like
test_assert_httpx_get_kwargs_accepts_expected_urls_with_explicit_call_count,
test_assert_httpx_get_kwargs_accepts_called_true,
test_assert_httpx_get_kwargs_validates_params_contains_for_every_call, and
test_assert_httpx_get_kwargs_reports_params_contains_mismatch_per_call from
pytest.MockerFixture to pytest_mock.MockerFixture (or import MockerFixture
directly and use MockerFixture in the signatures).
- Around line 40-43: The current _allows_direct_assert uses
ast.unparse().startswith(...) which can false-match other decorators; change it
to perform exact AST matching: for each decorator in function.decorator_list, if
it's an ast.Call use decorator.func else use the node itself, then check that
the node is an ast.Attribute chain matching pytest -> mark ->
allow_direct_assert (i.e. node is ast.Attribute with .attr ==
"allow_direct_assert", its .value is ast.Attribute with .attr == "mark", and its
.value.value is ast.Name with .id == "pytest"); return True if any decorator
matches this exact structure (thereby allowing optional call syntax but
preventing prefix collisions).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: fe3dbb44-2a74-4029-b069-3ded304f4c85
📒 Files selected for processing (6)
- tests/conftest.py
- tests/data/expected/main/cli_input_overrides_pyproject_url.py
- tests/main/conftest.py
- tests/main/test_main_general.py
- tests/main/test_main_watch.py
- tests/test_assert_helper_usage.py
✅ Files skipped from review due to trivial changes (2)
- tests/data/expected/main/cli_input_overrides_pyproject_url.py
- tests/main/test_main_watch.py
@coderabbitai review

✅ Actions performed: Review triggered.

@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/main/openapi/test_main_openapi.py (1)
4379-4400: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Preserve the HTTP fetch assertion in these URL `$ref` tests.

These two tests now register a mocked response but never verify that the external URL was actually requested. That weakens the regression coverage: a change that stops resolving the remote `$ref` could still leave these tests green. Please capture the returned mock and assert the expected URL, like the other HTTP tests in this file.

Suggested tweak
```diff
 def test_main_openapi_read_only_write_only_url_ref(mock_httpx_get: Callable[..., Any], output_file: Path) -> None:
     """Test readOnly/writeOnly with URL $ref to external schema."""
-    mock_httpx_get(
+    httpx_get_mock = mock_httpx_get(
         MockHttpxResponse(
             "https://example.com/common.yaml", OPEN_API_DATA_PATH / "read_only_write_only_url_ref_remote.yaml"
         )
     )
@@
     run_main_and_assert(
         input_path=OPEN_API_DATA_PATH / "read_only_write_only_url_ref.yaml",
         output_path=output_file,
         input_file_type="openapi",
         assert_func=assert_file_content,
@@
             "--allow-remote-refs",
         ],
     )
+    assert_httpx_get_kwargs(httpx_get_mock, expected_url="https://example.com/common.yaml")


 def test_main_openapi_read_only_write_only_allof_url_ref(mock_httpx_get: Callable[..., Any], output_file: Path) -> None:
     """Test readOnly/writeOnly with allOf that references external URL schema."""
-    mock_httpx_get(
+    httpx_get_mock = mock_httpx_get(
         MockHttpxResponse(
             "https://example.com/common.yaml",
             OPEN_API_DATA_PATH / "read_only_write_only_allof_url_ref_remote.yaml",
         )
     )
@@
     run_main_and_assert(
         input_path=OPEN_API_DATA_PATH / "read_only_write_only_allof_url_ref.yaml",
         output_path=output_file,
         input_file_type="openapi",
         assert_func=assert_file_content,
@@
             "--allow-remote-refs",
         ],
     )
+    assert_httpx_get_kwargs(httpx_get_mock, expected_url="https://example.com/common.yaml")
```

Also applies to: 4403-4425
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/main/openapi/test_main_openapi.py` around lines 4379 - 4400, The test registers a mocked HTTP response but never verifies the external URL was actually requested; update test_main_openapi_read_only_write_only_url_ref (and the similar test at 4403-4425) to capture the mock returned by mock_httpx_get, then assert that the mock was called for "https://example.com/common.yaml" (e.g., check the mock's recorded request URL or call args) before calling run_main_and_assert so the test fails if the remote $ref was not fetched.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/conftest.py`:
- Around line 844-850: The test only inspects mock_get.call_args so earlier
calls with wrong values can slip through; change the logic to iterate over
mock_get.call_args_list and for each recorded call retrieve call.kwargs and
assert that when verify is not None call_kwargs.get("verify") is verify and when
timeout is not None call_kwargs.get("timeout") == timeout (keep the existing
mock_get.assert_called() check but replace single-call inspection with a loop
over call_args_list to mirror params_contains behavior).
In `@tests/test_assert_helper_usage.py`:
- Around line 59-70: The _DirectAssertVisitor currently avoids nested functions
but still descends into nested classes, causing asserts inside helper classes to
be misattributed; add a visit_ClassDef method on _DirectAssertVisitor that
returns without calling generic_visit (similar to visit_FunctionDef /
visit_AsyncFunctionDef) so the visitor does not descend into ClassDef nodes and
nested-class asserts are not collected.
In `@tests/test_main_kr.py`:
- Around line 1524-1541: The test test_url_with_http_headers is missing an
explicit assertion that the Authorization header is forwarded to httpx.get;
capture the mock returned by mock_httpx_get (call mock_get =
mock_httpx_get(...)) before calling run_main_url_and_assert, then after
run_main_url_and_assert call assert_httpx_get_kwargs(mock_get,
headers={"Authorization": "Bearer token"}) to verify the header was passed; use
the existing MockHttpxResponse, mock_httpx_get and assert_httpx_get_kwargs
helpers referenced in the test.
---
Outside diff comments:
In `@tests/main/openapi/test_main_openapi.py`:
- Around line 4379-4400: The test registers a mocked HTTP response but never
verifies the external URL was actually requested; update
test_main_openapi_read_only_write_only_url_ref (and the similar test at
4403-4425) to capture the mock returned by mock_httpx_get, then assert that the
mock was called for "https://example.com/common.yaml" (e.g., check the mock's
recorded request URL or call args) before calling run_main_and_assert so the
test fails if the remote $ref was not fetched.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 8f3056a5-f857-4226-8758-498ff825e253
⛔ Files ignored due to path filters (6)
- `tests/data/jsonschema/cli_url_overrides_pyproject_input.json` is excluded by `!tests/data/**/*.json` and included by none
- `tests/data/jsonschema/remote_ref/main.json` is excluded by `!tests/data/**/*.json` and included by none
- `tests/data/jsonschema/remote_ref/other.schema.json` is excluded by `!tests/data/**/*.json` and included by none
- `tests/data/jsonschema/url_relative_root_id/main.schema.json` is excluded by `!tests/data/**/*.json` and included by none
- `tests/data/jsonschema/url_relative_root_id/sub.schema.json` is excluded by `!tests/data/**/*.json` and included by none
- `tests/data/openapi/http_openapi_with_custom_port.yaml` is excluded by `!tests/data/**/*.yaml` and included by none
📒 Files selected for processing (6)
- tests/conftest.py
- tests/main/jsonschema/test_main_jsonschema.py
- tests/main/openapi/test_main_openapi.py
- tests/main/test_main_general.py
- tests/test_assert_helper_usage.py
- tests/test_main_kr.py
@coderabbitai review

✅ Actions performed: Review triggered.
🧹 Nitpick comments (1)
tests/main/jsonschema/test_main_jsonschema.py (1)
2190-2191: ⚡ Quick win

Finish the HTTP mock migration in these URL tests.

These assertions already use the shared request helper, but the response setup just above still duplicates the old `url_to_path` / `get_mock_response` / `mocker.patch("httpx.get", ...)` scaffold. Switching the setup to `mock_httpx_get` + `MockHttpxResponse` would remove the remaining boilerplate and keep these tests on the same shared helper path as the rest of this PR.

Example of the shared setup pattern
```python
httpx_get_mock = mock_httpx_get(
    *[MockHttpxResponse(url, external_directory / path) for url, path in url_to_path.items()]
)
```

Also applies to: 2277-2285, 9068-9069
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/main/jsonschema/test_main_jsonschema.py` around lines 2190 - 2191, Tests are still using the old manual httpx.get patch scaffolding for the URL response setup; replace that boilerplate with the shared helper by creating httpx_get_mock via mock_httpx_get and feeding it MockHttpxResponse instances for each url,path in url_to_path (e.g. httpx_get_mock = mock_httpx_get(*[MockHttpxResponse(url, external_directory / path) for url, path in url_to_path.items()])), keep the existing assertions assert_httpx_get_kwargs(httpx_get_mock, expected_urls=list(url_to_path), any_order=True) and assert_httpx_get_kwargs(httpx_get_mock, call_count=8), and apply the same replacement at the other occurrences referenced (around lines 2277-2285 and 9068-9069).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@tests/main/jsonschema/test_main_jsonschema.py`:
- Around line 2190-2191: Tests are still using the old manual httpx.get patch
scaffolding for the URL response setup; replace that boilerplate with the shared
helper by creating httpx_get_mock via mock_httpx_get and feeding it
MockHttpxResponse instances for each url,path in url_to_path (e.g.
httpx_get_mock = mock_httpx_get(*[MockHttpxResponse(url, external_directory /
path) for url, path in url_to_path.items()])), keep the existing assertions
assert_httpx_get_kwargs(httpx_get_mock, expected_urls=list(url_to_path),
any_order=True) and assert_httpx_get_kwargs(httpx_get_mock, call_count=8), and
apply the same replacement at the other occurrences referenced (around lines
2277-2285 and 9068-9069).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 3d7e66d0-58b6-41df-8226-253af4bec104
📒 Files selected for processing (3)
- tests/data/expected/main/jsonschema/no_empty_collapsed_external_model/__init__.py
- tests/data/expected/main/jsonschema/no_empty_collapsed_external_model/parent.py
- tests/main/jsonschema/test_main_jsonschema.py
✅ Files skipped from review due to trivial changes (2)
- tests/data/expected/main/jsonschema/no_empty_collapsed_external_model/__init__.py
- tests/data/expected/main/jsonschema/no_empty_collapsed_external_model/parent.py
@coderabbitai review

✅ Actions performed: Review triggered.

@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/main/jsonschema/test_main_jsonschema.py (1)
2207-2214: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Broaden `headers_arguments` to match all parametrized cases.

One test case passes a single header, so `tuple[str, str]` is too narrow here and will trip type checking. Use a variadic tuple type instead.

♻️ Suggested fix
```diff
-    headers_arguments: tuple[str, str],
+    headers_arguments: tuple[str, ...],
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/main/jsonschema/test_main_jsonschema.py` around lines 2207 - 2214, The parameter type for headers_arguments in test_main_http_jsonschema_with_http_headers_and_http_query_parameters_and_ignore_tls is too narrow (tuple[str, str]) and should accept any-length header argument tuples; change the annotation to a variadic tuple type (tuple[str, ...]) so it matches single- and multi-header parametrized cases and fixes the type-checking error for headers_arguments in that test.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/test_assert_helper_usage.py`:
- Around line 83-101: _collect_direct_asserts currently only looks inside
function nodes and misses module-level ast.Assert nodes; update it to also scan
the module body for top-level ast.Assert statements and add corresponding
DirectAssert entries. Specifically, after parsing the tree and collecting from
_collect_function_asserts, iterate over tree.body to find ast.Assert nodes and
extend direct_asserts with DirectAssert(path=path.relative_to(TESTS_ROOT),
function_name="<module>", lineno=assert_node.lineno,
statement=_statement(assert_node, source)) (or equivalent) so module-level
asserts are reported; keep using _statement, TESTS_ROOT and the DirectAssert
type to construct the entries.
---
Outside diff comments:
In `@tests/main/jsonschema/test_main_jsonschema.py`:
- Around line 2207-2214: The parameter type for headers_arguments in
test_main_http_jsonschema_with_http_headers_and_http_query_parameters_and_ignore_tls
is too narrow (tuple[str, str]) and should accept any-length header argument
tuples; change the annotation to a variadic tuple type (tuple[str, ...]) so it
matches single- and multi-header parametrized cases and fixes the type-checking
error for headers_arguments in that test.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: c879e7fd-11fa-457c-96cc-7a752622a1b7
📒 Files selected for processing (5)
tests/conftest.pytests/main/jsonschema/test_main_jsonschema.pytests/main/openapi/test_main_openapi.pytests/test_assert_helper_usage.pytests/test_main_kr.py
@coderabbitai review

✅ Actions performed: Review triggered.
Summary
- Make `assert` exceptions in those e2e files explicit via `@pytest.mark.allow_direct_assert`
- Add a `mock_httpx_get` fixture and extend HTTP request assertions so URL e2e tests do not repeat response mock setup or raw mock call assertions

Validation
- `uv run pytest tests/test_assert_helper_usage.py -q`
- `uv run ruff check tests/conftest.py tests/main/conftest.py tests/main/test_main_watch.py tests/main/jsonschema/test_main_jsonschema.py tests/main/openapi/test_main_openapi.py tests/main/test_main_general.py tests/test_main_kr.py`
- `uv run pytest tests/test_assert_helper_usage.py tests/main/test_main_watch.py tests/main/jsonschema/test_main_jsonschema.py tests/main/openapi/test_main_openapi.py tests/main/test_main_general.py tests/test_main_kr.py -q -k 'not test_validation and not test_validation_failed and not test_openapi_special_yaml_keywords and not test_graphql_parser_with_explicit_target_datetime_class and not test_graphql_parser_with_config_object'`
- `uv run pytest tests/main/test_main_general.py::test_cli_input_overrides_pyproject_url tests/main/test_main_general.py::test_cli_url_overrides_pyproject_input tests/main/openapi/test_main_openapi.py::test_main_http_openapi tests/main/openapi/test_main_openapi.py::test_main_http_openapi_with_custom_port tests/main/openapi/test_main_openapi.py::test_main_openapi_body_and_parameters_remote_ref tests/main/openapi/test_main_openapi.py::test_main_openapi_read_only_write_only_url_ref tests/main/openapi/test_main_openapi.py::test_main_openapi_read_only_write_only_allof_url_ref tests/main/jsonschema/test_main_jsonschema.py::test_main_root_id_jsonschema_with_local_file tests/main/jsonschema/test_main_jsonschema.py::test_main_root_id_jsonschema_with_remote_file tests/main/jsonschema/test_main_jsonschema.py::test_main_root_id_jsonschema_self_refs_with_local_file tests/main/jsonschema/test_main_jsonschema.py::test_main_root_id_jsonschema_self_refs_with_remote_file tests/main/jsonschema/test_main_jsonschema.py::test_main_root_id_jsonschema_with_absolute_remote_file tests/main/jsonschema/test_main_jsonschema.py::test_main_url_with_relative_root_id_resolves_relative_refs tests/main/jsonschema/test_main_jsonschema.py::test_main_remote_ref_emits_deprecation_warning tests/main/jsonschema/test_main_jsonschema.py::test_main_remote_ref_blocked_when_explicitly_disabled tests/main/jsonschema/test_main_jsonschema.py::test_main_bundled_schema_with_id_url tests/main/jsonschema/test_main_jsonschema.py::test_main_http_jsonschema tests/main/jsonschema/test_main_jsonschema.py::test_main_http_jsonschema_with_http_headers_and_http_query_parameters_and_ignore_tls tests/main/jsonschema/test_main_jsonschema.py::test_main_circular_ref_external_url_keywords tests/test_main_kr.py::test_url_with_http_headers tests/test_main_kr.py::test_http_ignore_tls tests/test_main_kr.py::test_http_query_parameters tests/test_main_kr.py::test_http_timeout -q`
- `uv run pytest tests/test_main_kr.py::test_main_modular_no_file tests/test_main_kr.py::test_main_no_file -q`
- `uv run pytest tests/test_assert_helper_usage.py tests/test_main_kr.py::test_generate_prompt_basic tests/test_main_kr.py::test_generate_prompt_with_question tests/test_main_kr.py::test_generate_prompt_with_options tests/test_main_kr.py::test_generate_prompt_with_list_options -q`
- `pre-commit run --files .github/workflows/codeql.yaml`

Note: a full changed-file pytest run without `-k` currently requires the optional `prance` and `graphql` extras for existing tests unrelated to this change.

Summary by CodeRabbit
Chores
Tests