# LLM Engine Overview

The `llm_engine` module provides a unified interface for interacting with various large language model (LLM) backends, including OpenAI, Azure, and VLLM. It abstracts away the differences between these APIs, making it easy to switch between local and remote inference without changing your code.

## Key Components
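
The module's exact layout is not listed here, but from the overview it bundles at least a backend client for each supported provider (OpenAI, Azure, and VLLM) behind one shared interface that normalizes requests and responses.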

## How It Works

You typically do not need to call these modules directly. Instead, use the scripts in `llm_sft/` for evaluation and data curation. The engine automatically selects the correct backend and handles model calls based on your command-line arguments.
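
For orientation, here is a minimal sketch of what that automatic backend selection might look like. The class names and the `build_engine` helper are illustrative assumptions, not the actual `llm_engine` API; only the platform values mirror the backends named in the overview.

```python
# Hypothetical sketch of backend dispatch; class and helper names are
# illustrative assumptions, not the actual llm_engine API.
import argparse


class VLLMEngine:
    """Placeholder for a VLLM-backed engine."""

    def generate(self, prompt: str) -> str:
        raise NotImplementedError


class OpenAIEngine:
    """Placeholder for an OpenAI- or Azure-backed engine."""

    def generate(self, prompt: str) -> str:
        raise NotImplementedError


def build_engine(args: argparse.Namespace):
    # Pick a backend from command-line arguments, mirroring how the
    # llm_sft/ scripts are described to choose one automatically.
    if args.platform == "VLLM":
        return VLLMEngine()
    if args.platform in ("OpenAI", "Azure"):
        return OpenAIEngine()
    raise ValueError(f"Unsupported platform: {args.platform}")
```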

## Example Usage

You can invoke the engine via the scripts in `llm_sft/`:

```bash
python -m llm_sft.answer_eval \
    --model Qwen/Qwen2.5-VL-7B-Instruct \
    --model_type remote \
    --platform VLLM \
    --input_path /path/to/your/data.jsonl \
    --image_dir /path/to/your/images
```
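
In this invocation, `--platform VLLM` selects the VLLM backend; given the overview above, `--model_type remote` presumably points the engine at a served endpoint rather than locally loaded weights. Check the script's `--help` output for the authoritative flag semantics.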

## Advanced: Customization & Extensibility
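
The customization details are not covered above, but one common pattern for this kind of multi-backend engine is a small registry that maps a platform name to an engine class. The sketch below assumes that pattern; `BaseEngine`, `register_backend`, and `create_engine` are hypothetical names, not part of the documented API.

```python
# Hypothetical extensibility sketch; the registry pattern and all names
# here are assumptions, not the documented llm_engine interface.
from typing import Callable, Dict, Type


class BaseEngine:
    def generate(self, prompt: str) -> str:
        raise NotImplementedError


_BACKENDS: Dict[str, Type[BaseEngine]] = {}


def register_backend(name: str) -> Callable[[Type[BaseEngine]], Type[BaseEngine]]:
    """Register an engine class under a --platform name."""

    def wrap(cls: Type[BaseEngine]) -> Type[BaseEngine]:
        _BACKENDS[name] = cls
        return cls

    return wrap


@register_backend("MyPlatform")
class MyEngine(BaseEngine):
    def generate(self, prompt: str) -> str:
        # Call your custom inference service here.
        return f"echo: {prompt}"


def create_engine(platform: str) -> BaseEngine:
    return _BACKENDS[platform]()
```

With a registry like this, supporting a new provider only requires defining one class; no existing dispatch code has to change.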

