Overview

Zeus supports flexible model configuration. Users can connect any OpenAI-compatible model using their own API Key, or use Zeus preset models.

Configuration Sources

Model configuration is resolved in the following priority order: user custom configuration > request parameters > system default configuration.
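This resolution order can be sketched as a simple first-match fallback chain (a minimal sketch; the function name and config shape are assumptions, not the actual implementation):

```python
def resolve_llm_config(user_config=None, request_params=None, system_default=None):
    """Return the first available configuration by priority:
    user custom > request parameters > system default."""
    for candidate in (user_config, request_params, system_default):
        if candidate:
            return candidate
    raise ValueError("no LLM configuration available")
```

A request-level override wins only when no user custom configuration exists; the system default is the last resort.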

Zeus Preset Models

Zeus ships with three preset model tiers, each backed by a specific provider and model:
| Preset | Actual Model | Provider | Max Tokens | Tier |
|---|---|---|---|---|
| zeus-1.5 | kimi-k2.5 | Moonshot | 262,144 | Standard |
| zeus-1.5-pro | claude-opus-4-5-20251101 | UnifyLLM | 200,000 | Pro |
| zeus-1.5-lite | gemini-2.5-pro | UnifyLLM | 1,048,576 | Lite |
Users select a preset tier in the UI; the system automatically routes to the corresponding provider and model. When using presets, no API Key configuration is required.
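Preset-to-provider routing amounts to a lookup against the table above. A minimal sketch (the registry and function names are hypothetical; the values come from the preset table):

```python
# Preset tiers from the table above; routing is a dictionary lookup.
ZEUS_PRESETS = {
    "zeus-1.5":      {"provider": "Moonshot", "model": "kimi-k2.5",                "max_tokens": 262_144},
    "zeus-1.5-pro":  {"provider": "UnifyLLM", "model": "claude-opus-4-5-20251101", "max_tokens": 200_000},
    "zeus-1.5-lite": {"provider": "UnifyLLM", "model": "gemini-2.5-pro",           "max_tokens": 1_048_576},
}

def route_preset(preset_name: str) -> dict:
    """Map a preset tier selected in the UI to its provider and model."""
    try:
        return ZEUS_PRESETS[preset_name]
    except KeyError:
        raise ValueError(f"unknown preset: {preset_name}")
```

Because routing is keyed on the preset name alone, the user never supplies provider credentials for these tiers.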

LLMManager

The LLM model manager is responsible for creating model instances. _init_model() receives the LLM configuration passed from the frontend (Base URL, API Key, model name, temperature, etc.) and creates a ChatOpenAI instance. It first checks whether a user custom model configuration exists (with its own API Key and Base URL); if not, it checks for a Zeus preset model selection and routes to the corresponding provider; as a final fallback, it reads the system default configuration. Any OpenAI-compatible API is supported (OpenAI, Anthropic via proxy, local models, etc.).
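The fallback chain inside _init_model() might look roughly like this (a sketch only: the field names, function signature, and resolver are assumptions; a real implementation would pass the resolved fields to langchain_openai's ChatOpenAI):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class LLMConfig:
    base_url: str
    api_key: str
    model: str
    temperature: float = 0.7

def init_model(user_config: Optional[LLMConfig],
               preset: Optional[str],
               system_default: LLMConfig,
               preset_resolver: Callable[[str], LLMConfig]) -> LLMConfig:
    """Resolve the effective config in priority order; a real _init_model()
    would then construct ChatOpenAI(base_url=..., api_key=..., model=...)."""
    if user_config is not None:   # 1. user custom API Key / Base URL
        return user_config
    if preset is not None:        # 2. Zeus preset tier -> provider routing
        return preset_resolver(preset)
    return system_default         # 3. system default configuration
```

Returning a plain config object keeps the resolution logic testable without touching any provider SDK.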

Model Profiles

The system includes 200+ built-in model context-window configurations, used to calculate the SummarizationMiddleware trigger threshold:
| Model | max_input_tokens | Summarization Trigger (85%) |
|---|---|---|
| gpt-4o | 128,000 | 108,800 |
| claude-3.5-sonnet | 200,000 | 170,000 |
| gpt-4-turbo | 128,000 | 108,800 |
| Default | 64,000 | 54,400 |
When a model is not in the predefined list, the default configuration (64,000 tokens) is used.
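The trigger calculation above is a straightforward 85% of the context window, with a fallback for unknown models (helper name and registry shape are hypothetical; the values are from the table):

```python
# Context windows from the profiles table; the real system has 200+ entries.
MODEL_PROFILES = {
    "gpt-4o":            128_000,
    "claude-3.5-sonnet": 200_000,
    "gpt-4-turbo":       128_000,
}
DEFAULT_MAX_INPUT_TOKENS = 64_000
TRIGGER_RATIO = 0.85

def summarization_trigger(model: str) -> int:
    """Token count at which summarization triggers: 85% of the model's window,
    falling back to the default window for models not in the profile list."""
    window = MODEL_PROFILES.get(model, DEFAULT_MAX_INPUT_TOKENS)
    return int(window * TRIGGER_RATIO)
```

For example, gpt-4o's 128,000-token window yields a 108,800-token trigger, matching the table.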