pixano.app.routers.inference.zero_shot_detection
ZeroShotOutput(**data)
Bases: BaseModel
Zero-shot detection output.
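`ZeroShotOutput` is a Pydantic `BaseModel`, so predictions are validated, typed objects. The actual fields are defined in Pixano's source; the names below (`bbox`, `classification`, `confidence`) are assumptions used only to illustrate how an output model of this shape behaves:

```python
from pydantic import BaseModel


class ZeroShotOutputSketch(BaseModel):
    """Hypothetical sketch of a zero-shot detection output model.

    Field names are assumptions, not Pixano's actual schema.
    """

    bbox: list[float]   # predicted box coordinates
    classification: str  # predicted label
    confidence: float   # detection score


# Pydantic validates and coerces the input data on construction.
pred = ZeroShotOutputSketch(
    bbox=[0.1, 0.2, 0.5, 0.6], classification="cat", confidence=0.87
)
print(pred.classification)
```

Because the model inherits from `BaseModel`, invalid payloads (e.g. a non-numeric `confidence`) raise a `ValidationError` at construction time rather than surfacing later.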
call_image_zero_shot_detection(dataset_id, image, entity, classes, model, box_table_name, class_table_name, settings, box_threshold=0.3, text_threshold=0.2)
async
Perform zero-shot detection on an image.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `dataset_id` | `Annotated[str, Body(embed=True)]` | The ID of the dataset to use. | *required* |
| `image` | `Annotated[ViewModel, Body(embed=True)]` | The image to use for detection. | *required* |
| `entity` | `Annotated[EntityModel, Body(embed=True)]` | The entity to use for detection. | *required* |
| `classes` | `Annotated[list[str] \| str, Body(embed=True)]` | Labels to detect. | *required* |
| `model` | `Annotated[str, Body(embed=True)]` | The name of the model to use. | *required* |
| `box_table_name` | `Annotated[str, Body(embed=True)]` | The name of the dataset table for bounding boxes. | *required* |
| `class_table_name` | `Annotated[str, Body(embed=True)]` | The name of the dataset table for classifications. | *required* |
| `settings` | `Annotated[Settings, Depends(get_settings)]` | App settings. | *required* |
| `box_threshold` | `Annotated[float, Body(embed=True)]` | Box confidence threshold for detection in the image. | `0.3` |
| `text_threshold` | `Annotated[float, Body(embed=True)]` | Text confidence threshold for detection in the image. | `0.2` |
Returns:

| Type | Description |
|---|---|
| `list[ZeroShotOutput]` | The predicted bounding boxes and classifications. |
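Because every request parameter is declared with `Body(embed=True)`, FastAPI expects each one as a named key inside a single JSON object in the request body. A minimal sketch of building that body follows; the route path, model name, and the `image`/`entity` payload shapes are assumptions (real values come from a running Pixano app and its serialized `ViewModel`/`EntityModel` instances):

```python
import json

# Hypothetical request body: keys mirror the endpoint's parameter names.
# The image/entity dicts are placeholders, not real Pixano model payloads.
payload = {
    "dataset_id": "my_dataset",
    "image": {"id": "img_001"},      # placeholder for a serialized ViewModel
    "entity": {"id": "entity_001"},  # placeholder for a serialized EntityModel
    "classes": ["cat", "dog"],       # a list[str] or a single str, per the signature
    "model": "grounding_dino",       # assumed model name, deployment-dependent
    "box_table_name": "bbox",
    "class_table_name": "classification",
    "box_threshold": 0.3,
    "text_threshold": 0.2,
}

# The body that would be POSTed to the zero-shot detection route,
# e.g. with httpx: httpx.post(url, json=payload)
body = json.dumps(payload)
print(sorted(payload))
```

Lowering `box_threshold` or `text_threshold` admits lower-confidence detections; the defaults (`0.3` and `0.2`) are taken from the signature above.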