# pixano_inference.providers.vllm

Provider for vLLM models.

## VLLMProvider(*args, **kwargs)

Bases: `ModelProvider`

Provider for vLLM models.

Source code in `pixano_inference/providers/vllm.py`
### load_model(name, task, device, path=None, processor_config={}, config={})

Load a model from vLLM.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Name of the model. | *required* |
| `task` | `Task \| str` | Task of the model. | *required* |
| `device` | `device` | Device to use for the model. | *required* |
| `path` | `str \| None` | Path to the model or its Hugging Face Hub identifier. | `None` |
| `processor_config` | `dict` | Configuration for the processor. | `{}` |
| `config` | `dict` | Configuration for the model. | `{}` |

Returns:

| Type | Description |
|---|---|
| `VLLMModel` | Loaded model. |

Source code in `pixano_inference/providers/vllm.py`
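A minimal sketch of the `load_model` call pattern. It runs against a small stand-in class so it works without vLLM installed; in real use you would instantiate `VLLMProvider` from `pixano_inference.providers.vllm` instead. All argument values (model alias, task string, Hub identifier, config keys) are illustrative assumptions, not values taken from this documentation.

```python
class VLLMProviderStub:
    """Stand-in mirroring the load_model signature documented above."""

    def load_model(self, name, task, device, path=None,
                   processor_config={}, config={}):
        # The real provider builds a vLLM engine and returns a VLLMModel;
        # this stub just echoes the arguments so the call pattern is visible.
        return {"name": name, "task": task, "device": device,
                "path": path, "config": config}


provider = VLLMProviderStub()
model = provider.load_model(
    name="llava",                                 # assumed model alias
    task="text_image_conditional_generation",     # assumed task string
    device="cuda",
    path="llava-hf/llava-1.5-7b-hf",              # assumed Hugging Face Hub id
    config={"dtype": "float16"},                  # assumed vLLM engine option
)
print(model["name"])  # → llava
```

Note that `path` accepts either a local filesystem path or a Hub identifier, and falls back to `None` when the model is resolvable by `name` alone.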
### text_image_conditional_generation(request, model, *args, **kwargs)

Generate text from an image and a prompt.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `request` | `TextImageConditionalGenerationRequest` | Request for text-image conditional generation. | *required* |
| `model` | `VLLMModel` | Model for text-image conditional generation. | *required* |
| `args` | `Any` | Additional arguments. | `()` |
| `kwargs` | `Any` | Additional keyword arguments. | `{}` |

Returns:

| Type | Description |
|---|---|
| `TextImageConditionalGenerationOutput` | Output of text-image conditional generation. |
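The request/output flow above can be sketched with plain dicts standing in for `TextImageConditionalGenerationRequest` and `TextImageConditionalGenerationOutput`. The field names (`prompt`, `image`, `generated_text`) are assumptions for illustration and may not match the real request schema.

```python
def text_image_conditional_generation_stub(request, model, *args, **kwargs):
    """Stand-in mirroring the documented signature.

    The real method forwards the prompt and image to the loaded VLLMModel
    and wraps the generation in TextImageConditionalGenerationOutput; the
    stub returns a dict of the same general shape.
    """
    return {"generated_text": f"description of {request['image']}"}


request = {
    "prompt": "Describe this image.",  # assumed request field
    "image": "cat.jpg",                # assumed request field
}
output = text_image_conditional_generation_stub(request, model=None)
print(output["generated_text"])  # → description of cat.jpg
```

In real use, the `model` argument would be the `VLLMModel` returned by `load_model`, and the output object would carry the generated text alongside any generation metadata the task defines.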