# LLM Configuration
Configure language model providers for the chat interface and agents.
## Configuration File

Add to `~/.config/crucible/config.toml`:

```toml
[llm]
default = "local"

[llm.providers.local]
type = "ollama"
default_model = "llama3.2"
endpoint = "http://localhost:11434"
```

The `[llm]` section has one field:

- `default`: name of the provider to use by default

Each provider lives under `[llm.providers.NAME]`, where `NAME` is whatever label you choose.
## Provider Fields

| Field | Type | Required | Description |
|---|---|---|---|
| `type` | string | yes | Provider backend (see below) |
| `default_model` | string | no | Model to use (falls back to the provider default) |
| `endpoint` | string | no | API endpoint (falls back to the provider default) |
| `api_key` | string | no | API key, or `{env:VAR_NAME}` to read from the environment |
| `temperature` | float | no | Randomness, 0.0–2.0 (default: 0.7) |
| `max_tokens` | integer | no | Max response tokens (default: 4096) |
| `timeout_secs` | integer | no | Request timeout in seconds (default: 120) |
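Taken together, a provider entry that sets every field from the table might look like this (a sketch; the values are illustrative, not recommendations):

```toml
[llm.providers.example]
type = "openai"                          # required: provider backend
default_model = "gpt-4o"                 # optional: model to use
endpoint = "https://api.openai.com/v1"   # optional: API endpoint
api_key = "{env:OPENAI_API_KEY}"         # optional: read key from the environment
temperature = 0.7                        # optional: 0.0-2.0
max_tokens = 4096                        # optional: max response tokens
timeout_secs = 120                       # optional: request timeout
```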
## Providers

### Ollama (Local)

Run models locally with Ollama:

```toml
[llm]
default = "local"

[llm.providers.local]
type = "ollama"
default_model = "llama3.2"
endpoint = "http://localhost:11434"
```

All fields except `type` are optional. Ollama defaults to `llama3.2` on `http://localhost:11434`.

Setup:

```sh
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model
ollama pull llama3.2

# Verify it's running
ollama list
```

### OpenAI
```toml
[llm]
default = "openai"

[llm.providers.openai]
type = "openai"
default_model = "gpt-4o"
api_key = "{env:OPENAI_API_KEY}"
```

Defaults to `gpt-4o` on `https://api.openai.com/v1` if not specified.

Environment variable:

```sh
export OPENAI_API_KEY=your-api-key
```

### Anthropic
```toml
[llm]
default = "anthropic"

[llm.providers.anthropic]
type = "anthropic"
default_model = "claude-3-5-sonnet-20241022"
api_key = "{env:ANTHROPIC_API_KEY}"
```

Defaults to `claude-3-5-sonnet-20241022` on `https://api.anthropic.com/v1` if not specified. Available models depend on your account. Run `cru models` to see the current list.

Environment variable:

```sh
export ANTHROPIC_API_KEY=your-api-key
```

### Other Providers
Additional provider types are supported: `openrouter`, `zai`, `github-copilot`, `vertexai`, `cohere`, and `custom`. They follow the same `[llm.providers.NAME]` format. Run `cru models` to see all available models across your configured providers.
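As a sketch, an `openrouter` entry uses the same fields as any other provider. Note that the model slug and the `OPENROUTER_API_KEY` variable name below are hypothetical examples, not values taken from this documentation:

```toml
[llm]
default = "router"

[llm.providers.router]
type = "openrouter"
default_model = "anthropic/claude-3.5-sonnet"  # hypothetical model slug
api_key = "{env:OPENROUTER_API_KEY}"           # hypothetical variable name
```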
## Parameters

### temperature

Controls randomness in responses (0.0–2.0):

```toml
[llm.providers.local]
type = "ollama"
temperature = 0.7
```

- `0.0`: deterministic, focused
- `0.7`: balanced (default)
- `1.0+`: more creative, varied
### max_tokens

Maximum tokens in the response:

```toml
[llm.providers.openai]
type = "openai"
default_model = "gpt-4o"
max_tokens = 4096
```

### endpoint
Custom API endpoint:

```toml
[llm.providers.local]
type = "ollama"
endpoint = "http://192.168.1.100:11434"
```

### api_key
Set it directly, or reference an environment variable with `{env:VAR_NAME}`:

```toml
[llm.providers.openai]
type = "openai"
api_key = "{env:OPENAI_API_KEY}"
```

## Multiple Providers

You can configure several providers and switch between them:
```toml
[llm]
default = "local"

[llm.providers.local]
type = "ollama"
default_model = "llama3.2"

[llm.providers.cloud]
type = "openai"
default_model = "gpt-4o"
api_key = "{env:OPENAI_API_KEY}"

[llm.providers.claude]
type = "anthropic"
default_model = "claude-3-5-sonnet-20241022"
api_key = "{env:ANTHROPIC_API_KEY}"
```

Change the active provider by setting `default` under `[llm]`, or switch at runtime with the `:model` command in the TUI.
## Environment Variables

| Variable | Purpose |
|---|---|
| `OPENAI_API_KEY` | OpenAI API key |
| `ANTHROPIC_API_KEY` | Anthropic API key |
| `OLLAMA_HOST` | Ollama endpoint (default: `localhost:11434`) |
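For example, to point Ollama at a remote machine via the environment rather than the config file (the address below is hypothetical):

```shell
# Override the default localhost:11434 endpoint for Ollama
# (192.168.1.100 is a hypothetical host on your network)
export OLLAMA_HOST=192.168.1.100:11434
echo "$OLLAMA_HOST"
```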
## Example Configurations

### Local Development

```toml
[llm]
default = "local"

[llm.providers.local]
type = "ollama"
default_model = "llama3.2"
temperature = 0.7
```

### Production with OpenAI

```toml
[llm]
default = "openai"

[llm.providers.openai]
type = "openai"
default_model = "gpt-4o"
api_key = "{env:OPENAI_API_KEY}"
max_tokens = 4096
```

### Cost-Conscious

```toml
[llm]
default = "openai-mini"

[llm.providers.openai-mini]
type = "openai"
default_model = "gpt-4o-mini"
api_key = "{env:OPENAI_API_KEY}"
temperature = 0.5
max_tokens = 2048
```

## Troubleshooting
### "Connection refused" with Ollama

Check that Ollama is running:

```sh
ollama list
```

Start it if needed:

```sh
ollama serve
```

### "Invalid API key" with OpenAI/Anthropic
Verify the environment variable is set:

```sh
echo $OPENAI_API_KEY
```

### Model not found

For Ollama, pull the model first:

```sh
ollama pull llama3.2
```

For cloud providers, check that the model name is correct. Run `cru models` to list available models.
## Implementation

Source code: `crates/crucible-config/src/components/llm.rs`

## See Also

- `:h config.embedding`: Embedding configuration
- `:h chat`: Chat command reference
- chat: Chat usage guide