# MoFA

A production-grade AI agent framework built in Rust, designed for extreme performance, unlimited extensibility, and runtime programmability.

## What is MoFA?

MoFA (Modular Framework for Agents) implements a microkernel architecture with a dual-layer plugin system, enabling you to build sophisticated AI agents with:
### Extreme Performance

Rust core with zero-cost abstractions, an async runtime, and efficient memory management.

### Unlimited Extensibility

Dual-layer plugins: compile-time (Rust/WASM) for performance plus runtime (Rhai scripts) for flexibility.

### Multi-Language Support

Python, Java, Swift, Kotlin, and Go bindings via UniFFI and PyO3.

### Production Ready

Built-in persistence, monitoring, distributed deployment support, and human-in-the-loop workflows.
## Architecture

MoFA follows strict microkernel design principles:

```mermaid
graph TB
    subgraph "User Layer"
        U[Your Agents]
    end
    subgraph "SDK Layer"
        SDK[mofa-sdk]
    end
    subgraph "Business Layer"
        F[mofa-foundation<br/>LLM • Patterns • Persistence]
    end
    subgraph "Runtime Layer"
        R[mofa-runtime<br/>Lifecycle • Events • Plugins]
    end
    subgraph "Kernel Layer"
        K[mofa-kernel<br/>Traits • Types • Core]
    end
    subgraph "Plugin Layer"
        P[mofa-plugins<br/>Rust/WASM • Rhai]
    end

    U --> SDK
    SDK --> F
    SDK --> R
    F --> K
    R --> K
    R --> P
```
## Key Features

### Multi-Agent Coordination

MoFA supports seven LLM-driven collaboration modes:
| Mode | Description | Use Case |
|---|---|---|
| Request-Response | One-to-one deterministic tasks | Simple Q&A |
| Publish-Subscribe | One-to-many broadcast | Event notification |
| Consensus | Multi-round negotiation | Decision making |
| Debate | Alternating discussion | Quality improvement |
| Parallel | Simultaneous execution | Batch processing |
| Sequential | Pipeline execution | Data transformation |
| Custom | User-defined modes | Special workflows |
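The Sequential mode, for example, pipes each agent's output into the next agent's input. A minimal std-only sketch of the idea (the `Agent` trait and `run_sequential` here are illustrative stand-ins, not MoFA's actual `MoFAAgent` API):

```rust
// Illustrative sketch of Sequential coordination: agents form a pipeline
// where each agent transforms the previous agent's output.
trait Agent {
    fn execute(&self, input: String) -> String;
}

// Two toy agents standing in for real LLM-backed agents.
struct Uppercase;
impl Agent for Uppercase {
    fn execute(&self, input: String) -> String { input.to_uppercase() }
}

struct Exclaim;
impl Agent for Exclaim {
    fn execute(&self, input: String) -> String { format!("{input}!") }
}

// Sequential mode: fold the input through the agent pipeline in order.
fn run_sequential(agents: &[Box<dyn Agent>], input: String) -> String {
    agents.iter().fold(input, |acc, agent| agent.execute(acc))
}

fn main() {
    let pipeline: Vec<Box<dyn Agent>> = vec![Box::new(Uppercase), Box::new(Exclaim)];
    let out = run_sequential(&pipeline, "hello".to_string());
    println!("{out}"); // prints "HELLO!"
}
```

The other modes vary the routing (broadcast for Publish-Subscribe, repeated rounds for Consensus and Debate, concurrent tasks for Parallel) while keeping the same agent abstraction.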
### Secretary Agent Pattern

Human-in-the-loop workflow management with five phases:

1. Receive Ideas → Record todos
2. Clarify Requirements → Produce project documents
3. Schedule Dispatch → Call execution agents
4. Monitor Feedback → Push key decisions to humans
5. Acceptance Report → Update todos
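The five phases above form a simple state machine. A hedged sketch in plain Rust (the `Phase` enum and its looping transition are illustrative assumptions, not MoFA's actual secretary implementation):

```rust
// Illustrative state machine for the five secretary phases. The final
// phase loops back to ReceiveIdeas so the secretary can pick up new work.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Phase {
    ReceiveIdeas,
    ClarifyRequirements,
    ScheduleDispatch,
    MonitorFeedback,
    AcceptanceReport,
}

impl Phase {
    // Advance to the next phase in the workflow.
    fn next(self) -> Phase {
        use Phase::*;
        match self {
            ReceiveIdeas => ClarifyRequirements,
            ClarifyRequirements => ScheduleDispatch,
            ScheduleDispatch => MonitorFeedback,
            MonitorFeedback => AcceptanceReport,
            AcceptanceReport => ReceiveIdeas,
        }
    }
}

fn main() {
    // Walk a full cycle from the first phase to the last.
    let mut phase = Phase::ReceiveIdeas;
    for _ in 0..4 {
        phase = phase.next();
    }
    assert_eq!(phase, Phase::AcceptanceReport);
}
```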
### Dual-Layer Plugin System

- **Compile-time Plugins**: Rust/WASM for performance-critical paths
- **Runtime Plugins**: Rhai scripts for dynamic business logic with hot-reload
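A compile-time plugin layer typically boils down to trait objects registered by name. A minimal sketch of that shape (the `Plugin` trait and `Registry` are hypothetical stand-ins for whatever `mofa-plugins` actually defines, and the runtime Rhai layer is omitted):

```rust
use std::collections::HashMap;

// Illustrative compile-time plugin interface: each plugin exposes a name
// and a run method. This is a stand-in, not mofa-plugins' real trait.
trait Plugin {
    fn name(&self) -> &str;
    fn run(&self, input: &str) -> String;
}

// A toy plugin that echoes its input.
struct Echo;
impl Plugin for Echo {
    fn name(&self) -> &str { "echo" }
    fn run(&self, input: &str) -> String { input.to_string() }
}

// Registry holding plugins as boxed trait objects, looked up by name.
struct Registry {
    plugins: HashMap<String, Box<dyn Plugin>>,
}

impl Registry {
    fn new() -> Self {
        Registry { plugins: HashMap::new() }
    }

    fn register(&mut self, plugin: Box<dyn Plugin>) {
        self.plugins.insert(plugin.name().to_string(), plugin);
    }

    fn run(&self, name: &str, input: &str) -> Option<String> {
        self.plugins.get(name).map(|p| p.run(input))
    }
}

fn main() {
    let mut registry = Registry::new();
    registry.register(Box::new(Echo));
    assert_eq!(registry.run("echo", "hi"), Some("hi".to_string()));
}
```

The runtime layer would sit alongside this registry, evaluating Rhai scripts on demand so business logic can be swapped without recompiling.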
## Quick Example

```rust
use std::sync::Arc;

use mofa_sdk::kernel::prelude::*;
use mofa_sdk::llm::{LLMClient, openai_from_env};

struct MyAgent {
    client: LLMClient,
}

#[async_trait]
impl MoFAAgent for MyAgent {
    fn id(&self) -> &str { "my-agent" }
    fn name(&self) -> &str { "My Agent" }

    async fn execute(&mut self, input: AgentInput, _ctx: &AgentContext) -> AgentResult<AgentOutput> {
        let response = self.client.ask(&input.to_text()).await
            .map_err(|e| AgentError::ExecutionFailed(e.to_string()))?;
        Ok(AgentOutput::text(response))
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = LLMClient::new(Arc::new(openai_from_env()?));
    let mut agent = MyAgent { client };

    let ctx = AgentContext::new("exec-001");
    let output = agent.execute(AgentInput::text("Hello!"), &ctx).await?;
    println!("{}", output.as_text().unwrap());

    Ok(())
}
```
## Getting Started
| Goal | Where to go |
|---|---|
| Get running in 10 minutes | Installation |
| Configure your LLM | LLM Setup |
| Build your first agent | Your First Agent |
| Learn step by step | Tutorial |
| Understand the design | Architecture |
## Who Should Use MoFA?

- **AI Engineers** building production AI agents
- **Platform Teams** needing extensible agent infrastructure
- **Researchers** experimenting with multi-agent systems
- **Developers** who want type-safe, high-performance agent frameworks
## Community & Support

- GitHub Discussions → Ask questions
- Discord → Chat with the community
- Contributing → Help improve MoFA
## License
MoFA is licensed under the Apache License 2.0.