
Configuration Reference

Complete reference for MoFA configuration options.

Environment Variables

LLM Configuration

Variable | Default | Description
OPENAI_API_KEY | - | OpenAI API key
OPENAI_MODEL | gpt-4o | Model to use
OPENAI_BASE_URL | - | Custom endpoint
ANTHROPIC_API_KEY | - | Anthropic API key
ANTHROPIC_MODEL | claude-sonnet-4-5-latest | Model to use

Persistence Configuration

Variable | Default | Description
DATABASE_URL | - | Database connection string
MOFA_SESSION_TTL | 3600 | Session timeout (seconds)
MOFA_MAX_CONNECTIONS | 10 | Max DB connections

Runtime Configuration

Variable | Default | Description
RUST_LOG | info | Logging level
MOFA_MAX_AGENTS | 100 | Max concurrent agents
MOFA_TIMEOUT | 30 | Default timeout (seconds)
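The variables above follow the usual "use the value if set and valid, otherwise fall back to the documented default" pattern. A minimal sketch using only the standard library (this is not the SDK's own loader, just an illustration of the defaults in the tables):

```rust
use std::env;

/// Read an environment variable, falling back to a documented default
/// when the variable is unset or fails to parse.
fn env_or<T: std::str::FromStr>(name: &str, default: T) -> T {
    env::var(name)
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(default)
}

fn main() {
    // Defaults match the tables above: 100 agents, 30-second timeout.
    let max_agents: u32 = env_or("MOFA_MAX_AGENTS", 100);
    let timeout_secs: u64 = env_or("MOFA_TIMEOUT", 30);
    println!("max_agents={max_agents} timeout={timeout_secs}s");
}
```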

Configuration File

Create mofa.toml in your project root:

[agent]
default_timeout = 30
max_retries = 3
concurrency_limit = 10

[llm]
provider = "openai"
model = "gpt-4o"
temperature = 0.7
max_tokens = 4096

[llm.openai]
api_key_env = "OPENAI_API_KEY"
base_url = "https://api.openai.com/v1"

[persistence]
enabled = true
backend = "postgres"
session_ttl = 3600

[persistence.postgres]
url_env = "DATABASE_URL"
max_connections = 10
min_connections = 2

[plugins]
hot_reload = true
watch_dirs = ["./plugins"]

[monitoring]
enabled = true
metrics_port = 9090
tracing = true

Loading Configuration

use mofa_sdk::config::Config;

// Load from the environment and the config file. Note that `?` and
// `.await` below require a fallible, async context (e.g. an async fn
// returning a Result).
let config = Config::load()?;

// Access values
let timeout = config.agent.default_timeout;
let model = config.llm.model;

// Use with an agent
let agent = LLMAgentBuilder::from_config(&config)?
    .build_async()
    .await;

Programmatic Configuration

Agent Configuration

use std::time::Duration;

use mofa_sdk::runtime::{AgentConfig, AgentConfigBuilder};

let config: AgentConfig = AgentConfigBuilder::new()
    .timeout(Duration::from_secs(60))
    .max_retries(5)
    .rate_limit(100)  // requests per minute
    .build();

LLM Configuration

use mofa_sdk::llm::{LLMClient, LLMConfig, LLMConfigBuilder};

let config: LLMConfig = LLMConfigBuilder::new()
    .model("gpt-4o")
    .temperature(0.7)
    .max_tokens(4096)
    .top_p(1.0)
    .frequency_penalty(0.0)
    .presence_penalty(0.0)
    .build();

// `provider` is an LLM provider constructed elsewhere.
let client = LLMClient::with_config(provider, config);

Persistence Configuration

use std::time::Duration;

use mofa_sdk::persistence::{Backend, PersistenceConfig};

// `std::env::var(...)?` requires a fallible context (e.g. a function
// returning a Result).
let config = PersistenceConfig {
    enabled: true,
    backend: Backend::Postgres {
        url: std::env::var("DATABASE_URL")?,
        max_connections: 10,
        min_connections: 2,
    },
    session_ttl: Duration::from_secs(3600),
};

Logging Configuration

Configure logging via RUST_LOG:

# Set logging level
export RUST_LOG=debug

# Per-module logging
export RUST_LOG=mofa_sdk=debug,mofa_runtime=info

# JSON format for production
export RUST_LOG_FORMAT=json
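The per-module form above is a comma-separated list of `target=level` directives; a bare level such as `debug` applies globally. A simplified sketch of how such a filter string decomposes (the real parsing is handled by the logging crates, not by MoFA itself):

```rust
/// Parse a RUST_LOG-style filter such as "mofa_sdk=debug,mofa_runtime=info"
/// into (target, level) pairs; a bare level like "debug" has no target.
fn parse_filter(spec: &str) -> Vec<(Option<&str>, &str)> {
    spec.split(',')
        .filter(|d| !d.is_empty())
        .map(|d| match d.split_once('=') {
            Some((target, level)) => (Some(target), level),
            None => (None, d),
        })
        .collect()
}

fn main() {
    // Fall back to the documented default level when RUST_LOG is unset.
    let spec = std::env::var("RUST_LOG").unwrap_or_else(|_| "info".into());
    for (target, level) in parse_filter(&spec) {
        println!("{} -> {}", target.unwrap_or("(global)"), level);
    }
}
```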

See Also