The `llm_engine` module provides a unified interface for interacting with various large language model (LLM) backends, including OpenAI, Azure, and VLLM. It abstracts away the differences between APIs and makes it easy to switch between local and remote inference.
You typically do not need to call these modules directly. Instead, use the scripts in `llm_sft/` for evaluation and data curation. The engine will automatically select the correct backend and handle model calls based on your command-line arguments.
Pass `--platform OpenAI`, `--platform Azure`, or `--platform VLLM`, and provide your API key as an environment variable; see `/.env.example` for the expected variables. You can also override these values through command-line arguments.
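For example, before targeting a remote VLLM server you might export credentials like the following. The variable names here are illustrative assumptions; the authoritative list is in `/.env.example`:

```bash
# Illustrative only -- check /.env.example for the actual variable names
export OPENAI_API_KEY="sk-..."                   # for --platform OpenAI
export AZURE_OPENAI_API_KEY="..."                # for --platform Azure
export AZURE_OPENAI_ENDPOINT="https://<your-resource>.openai.azure.com/"
export VLLM_BASE_URL="http://localhost:8000/v1"  # for --platform VLLM
```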
You can invoke the engine via the scripts in `llm_sft/`:
```bash
python -m llm_sft.answer_eval \
    --model Qwen/Qwen2.5-VL-7B-Instruct \
    --model_type remote \
    --platform VLLM \
    --input_path /path/to/your/data.jsonl \
    --image_dir /path/to/your/images
```
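If the flags compose as they appear to, the same script can target a hosted API by switching `--platform`. The model name and flag combination below are assumptions rather than documented usage:

```bash
# Hypothetical variant: the same evaluation against the OpenAI API
python -m llm_sft.answer_eval \
    --model gpt-4o \
    --model_type remote \
    --platform OpenAI \
    --input_path /path/to/your/data.jsonl \
    --image_dir /path/to/your/images
```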
To add support for a new backend, implement an adapter in `adapters.py` and register it in `llm.py` (a hedged sketch follows below). Prompt templates and data schemas are defined in `utils/prompt_utils.py` and `utils/schemas.py`.
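As a rough illustration of that pattern, here is a minimal adapter sketch. The class name, method signature, environment variables, and registration call are all hypothetical; match them to the actual base class in `adapters.py` and the dispatch logic in `llm.py`:

```python
# Hypothetical adapter sketch -- the real interface in adapters.py may differ.
import os

from openai import OpenAI  # assumes the backend exposes an OpenAI-compatible API


class MyBackendAdapter:
    """Illustrative adapter for a custom OpenAI-compatible backend."""

    def __init__(self, model: str):
        self.model = model
        # Read credentials from the environment, mirroring the built-in backends.
        self.client = OpenAI(
            api_key=os.environ["MY_BACKEND_API_KEY"],
            base_url=os.environ.get("MY_BACKEND_BASE_URL", "http://localhost:8000/v1"),
        )

    def generate(self, messages: list[dict]) -> str:
        """Send one chat request and return the completion text."""
        response = self.client.chat.completions.create(
            model=self.model,
            messages=messages,
        )
        return response.choices[0].message.content


# In llm.py, register the adapter under the name used by --platform.
# The registry below is hypothetical; follow however llm.py actually dispatches.
# PLATFORM_ADAPTERS["MyBackend"] = MyBackendAdapter
```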