feat: plumb detector_mode through create_priming_group #430

Merged
brandon-wada merged 6 commits into main from brandon/sync-spec-detector-mode on May 5, 2026

feat: plumb detector_mode through create_priming_group#430
brandon-wada merged 6 commits intomainfrom
brandon/sync-spec-detector-mode

Conversation

@brandon-wada (Collaborator) commented May 5, 2026

Updates priming group tests according to new validation constraints.

The backend (zuuul PR #6286) now requires `detector_mode` on the
PrimingGroup creation endpoint to prevent the priming-group mode-mismatch
incident from recurring. Sync the spec, regenerate the openapi client, and
add the new required parameter to ExperimentalApi.create_priming_group.

The spec sync also picks up unrelated MLPipeline schema additions (training
metrics, model_binary_id, friendly_name, etc.) that landed in the same
zuuul regen.

Tested against integ: all 6 expensive priming-group tests + 4 non-expensive
tests pass.
Adding detector_mode pushed the signature to 6 args. Standard SDK pattern
(see create_alert, create_text_recognition_detector, etc.) is the noqa +
pylint disable on the def line.
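As a sketch of that lint-suppression pattern (illustrative only: apart from `name`, `source_ml_pipeline_id`, and `detector_mode`, the parameters below are placeholders, not the SDK's actual signature):

```python
from typing import Optional


class ExperimentalApiSketch:
    """Illustrative stand-in for ExperimentalApi; not the real client."""

    def create_priming_group(  # noqa  # pylint: disable=too-many-arguments
        self,
        name: str,
        source_ml_pipeline_id: str,
        detector_mode: str,
        extra_a: Optional[str] = None,  # placeholder args, only here to
        extra_b: Optional[str] = None,  # reach the 6-argument count
    ) -> dict:
        # The real method calls the generated openapi client; this sketch
        # just echoes the required fields back.
        return {"name": name, "detector_mode": detector_mode}
```

The suppressions sit on the `def` line itself, matching the convention the description cites from create_alert and create_text_recognition_detector.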
@brandon-wada force-pushed the brandon/sync-spec-detector-mode branch from 9daef1a to 89a5ed0 on May 5, 2026 18:17
@f-wright (Contributor) left a comment:

I'm a little confused about some of what's happening here, but if you think all of this is expected, I'm happy to trust your judgement on this.

@@ -0,0 +1,12 @@
# DetectorModeEnum

* `BINARY` - BINARY
* `COUNT` - COUNT
* `MULTI_CLASS` - MULTI_CLASS
* `TEXT` - TEXT
* `BOUNDING_BOX` - BOUNDING_BOX
Contributor:

I know this doesn't go in this PR, but it seems like it'd make more sense for these to be in alphabetical order. Right now they seem to be in order of when they were added, which is kind of odd since we don't really use count and text mode all that much anymore.

Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**name** | **str** | Name for the new priming group. |
**source_ml_pipeline_id** | **str** | ID of an MLPipeline owned by this account whose trained model will seed the priming group. |
**detector_mode** | **bool, date, datetime, dict, float, int, list, str, none_type** | Detector mode this priming group is intended for (BINARY, MULTI_CLASS, etc.). Must match the mode supported by the source MLPipeline's pipeline_config. * `BINARY` - BINARY * `COUNT` - COUNT * `MULTI_CLASS` - MULTI_CLASS * `TEXT` - TEXT * `BOUNDING_BOX` - BOUNDING_BOX |
Contributor:

Why does this have so many type options? Do we need to tighten this on the back end?
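For context, one likely explanation (an assumption about generator behavior, not something confirmed in this PR): when the spec composes the enum with `NullEnum` via `oneOf` to model a nullable field, the openapi-generator Python templates cannot resolve a single concrete type and fall back to their "any type" catch-all. A minimal sketch of the schema shape that triggers this:

```python
# Assumed shape of the composed schema (not copied from the actual spec).
detector_mode_schema = {
    "oneOf": [
        {"$ref": "#/components/schemas/DetectorModeEnum"},
        {"$ref": "#/components/schemas/NullEnum"},  # emitted for nullable fields
    ],
}

# Because the generator can't pick one Python type for a oneOf composition,
# it documents the parameter as "bool, date, datetime, dict, float, int,
# list, str, none_type" -- its catch-all for untyped values.
assert len(detector_mode_schema["oneOf"]) == 2
```

If that's right, tightening it would mean changing how the backend spec models nullability rather than the type list itself.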

Groundlight makes it simple to understand images. You can easily create computer vision detectors just by describing what you want to know using natural language. # noqa: E501

The version of the OpenAPI document: 0.18.2
Contact: support@groundlight.ai
Contributor:

This is out of date, but maybe we don't care

from groundlight_openapi_client.exceptions import ApiAttributeError


class NullEnum(ModelSimple):
Contributor:

Maybe this is expected in the generated code, but seems weird to me? Why do we need this NullEnum class?

Contributor:

Especially because it looks like we have a separate BlankEnum already?
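A plausible answer (assumed from typical drf-spectacular output, not confirmed in this thread): the backend emits `BlankEnum` for fields that accept the empty string (`blank=True`) and `NullEnum` for fields that accept JSON null (`null=True`), so the two generated classes aren't redundant. In plain-Enum terms:

```python
from enum import Enum


# Plain-Enum stand-ins for the generated ModelSimple classes (illustrative
# only; the real generated code wraps these differently).
class BlankEnum(Enum):
    EMPTY = ""  # covers blank=True: the field was sent as ""


class NullEnum(Enum):
    NULL = None  # covers null=True: the field was sent as JSON null


assert BlankEnum.EMPTY.value == ""
assert NullEnum.NULL.value is None
```

Both end up in the spec whenever the serializer allows both states, which is why they show up side by side in the regenerated client.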


def testDetectorModeEnum(self):
"""Test DetectorModeEnum"""
# FIXME: construct object with mandatory attributes with example values
Contributor:

Does a FIXME comment indicate we should do something about this before merging?

pg = gl_experimental.create_priming_group(
name=f"test-primer-{detector.id}",
source_ml_pipeline_id=trained.id,
detector_mode=ModeEnum.BINARY,
Contributor:

Should we have any tests for the multiclass specific num classes argument usage?

brandon-wada (Collaborator, Author):

Added test_create_priming_group_multiclass_populates_num_classes in 7c8400c — creates a 3-class detector, trains it, seeds a priming group with detector_mode=ModeEnum.MULTI_CLASS, and asserts pg.num_classes == 3. Marked @pytest.mark.expensive like the rest of the create-flow tests. Couldn't run it locally (the integ token I had has been revoked) — will rely on CI.

Addresses review comment: confirm the priming-group SDK path correctly
records num_classes when seeded from a MULTI_CLASS source pipeline.

Adds a parallel `_wait_for_trained_multiclass_pipeline` helper that submits
3 labeled examples per class and polls for trained_at, then asserts the
resulting PrimingGroup.num_classes matches the source class count.
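The shape of that test can be sketched as follows. This is a stub-backed illustration only: the real test runs against integ with the actual SDK client, while `StubExperimentalApi` and its pipeline registry below are stand-ins so the call shape and assertion are visible in isolation.

```python
from dataclasses import dataclass


@dataclass
class PrimingGroup:
    name: str
    detector_mode: str
    num_classes: int


class StubExperimentalApi:
    """Stand-in for the real ExperimentalApi (assumption, not the SDK)."""

    # Pretend backend state: source pipeline id -> trained class count.
    _pipelines = {"pipe_multiclass_3": 3}

    def create_priming_group(self, *, name, source_ml_pipeline_id, detector_mode):
        # Mirrors the new required-parameter signature; num_classes is
        # derived from the source pipeline, as the backend does.
        return PrimingGroup(
            name=name,
            detector_mode=detector_mode,
            num_classes=self._pipelines[source_ml_pipeline_id],
        )


def test_create_priming_group_multiclass_populates_num_classes():
    gl_experimental = StubExperimentalApi()
    pg = gl_experimental.create_priming_group(
        name="test-primer-det_abc",
        source_ml_pipeline_id="pipe_multiclass_3",
        detector_mode="MULTI_CLASS",
    )
    assert pg.num_classes == 3


test_create_priming_group_multiclass_populates_num_classes()
```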
brandon-wada and others added 3 commits on May 5, 2026 at 21:03
The helper allow-list pre-dated the text-recognition result type, so any
text-rec IQ surfaced by list_image_queries failed the assertion in
test_list_image_queries. The same test fragility was already failing on
main; including TextRecognitionResult clears it.
@brandon-wada merged commit 1fc8f86 into main on May 5, 2026
8 checks passed