Introduction

MoFA

A production-grade AI agent framework built in Rust, designed for extreme performance, unlimited extensibility, and runtime programmability.

What is MoFA?

MoFA (Modular Framework for Agents) implements a microkernel + dual-layer plugin system architecture that enables you to build sophisticated AI agents with:

🚀 Extreme Performance

Rust core with zero-cost abstractions, async runtime, and efficient memory management.

🔧 Unlimited Extensibility

Dual-layer plugins: compile-time (Rust/WASM) for performance + runtime (Rhai scripts) for flexibility.

🌐 Multi-Language Support

Python, Java, Swift, Kotlin, Go bindings via UniFFI and PyO3.

🏭 Production Ready

Built-in persistence, monitoring, distributed support, and human-in-the-loop workflows.

Architecture

MoFA follows strict microkernel design principles:

graph TB
    subgraph "User Layer"
        U[Your Agents]
    end

    subgraph "SDK Layer"
        SDK[mofa-sdk]
    end

    subgraph "Business Layer"
        F[mofa-foundation<br/>LLM • Patterns • Persistence]
    end

    subgraph "Runtime Layer"
        R[mofa-runtime<br/>Lifecycle • Events • Plugins]
    end

    subgraph "Kernel Layer"
        K[mofa-kernel<br/>Traits • Types • Core]
    end

    subgraph "Plugin Layer"
        P[mofa-plugins<br/>Rust/WASM • Rhai]
    end

    U --> SDK
    SDK --> F
    SDK --> R
    F --> K
    R --> K
    R --> P

Key Features

Multi-Agent Coordination

MoFA supports 7 LLM-driven collaboration modes:

  Mode               Description                      Use Case
  Request-Response   One-to-one deterministic tasks   Simple Q&A
  Publish-Subscribe  One-to-many broadcast            Event notification
  Consensus          Multi-round negotiation          Decision making
  Debate             Alternating discussion           Quality improvement
  Parallel           Simultaneous execution           Batch processing
  Sequential         Pipeline execution               Data transformation
  Custom             User-defined modes               Special workflows
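To make the table concrete, here is a minimal sketch of how a caller might pick among these modes. The `CollaborationMode` enum and `pick_mode` helper are illustrative names, not part of the MoFA API; only the mode names themselves come from the table above.

```rust
// Illustrative sketch: enum variants mirror the table above, but this type
// and the `pick_mode` heuristic are hypothetical, not MoFA's real API.
#[derive(Debug, PartialEq)]
enum CollaborationMode {
    RequestResponse,
    PublishSubscribe,
    Consensus,
    Debate,
    Parallel,
    Sequential,
    Custom(String),
}

/// Toy heuristic: pick a mode from the agent count and whether the
/// agents must converge on a single answer.
fn pick_mode(agent_count: usize, needs_agreement: bool) -> CollaborationMode {
    match (agent_count, needs_agreement) {
        (1, _) => CollaborationMode::RequestResponse,
        (_, true) => CollaborationMode::Consensus,
        _ => CollaborationMode::Parallel,
    }
}

fn main() {
    // Three agents that must agree -> multi-round negotiation.
    println!("{:?}", pick_mode(3, true));
}
```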

Secretary Agent Pattern

Human-in-the-loop workflow management with 5 phases:

  1. Receive Ideas → Record todos
  2. Clarify Requirements → Project documents
  3. Schedule Dispatch → Call execution agents
  4. Monitor Feedback → Push key decisions to humans
  5. Acceptance Report → Update todos
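The five phases above form a cycle, which can be sketched as a simple state machine. `SecretaryPhase` and `next` are hypothetical names for illustration; the introduction does not show MoFA's actual Secretary Agent types.

```rust
// Illustrative sketch only: encodes the five Secretary Agent phases in
// order. The type and method names are hypothetical.
#[derive(Debug, Clone, Copy, PartialEq)]
enum SecretaryPhase {
    ReceiveIdeas,
    ClarifyRequirements,
    ScheduleDispatch,
    MonitorFeedback,
    AcceptanceReport,
}

impl SecretaryPhase {
    /// Advance to the next phase; after the acceptance report the
    /// secretary loops back to receiving new ideas.
    fn next(self) -> SecretaryPhase {
        use SecretaryPhase::*;
        match self {
            ReceiveIdeas => ClarifyRequirements,
            ClarifyRequirements => ScheduleDispatch,
            ScheduleDispatch => MonitorFeedback,
            MonitorFeedback => AcceptanceReport,
            AcceptanceReport => ReceiveIdeas,
        }
    }
}

fn main() {
    let mut phase = SecretaryPhase::ReceiveIdeas;
    for _ in 0..4 {
        phase = phase.next();
    }
    // Four steps after receiving ideas, the cycle ends in the report phase.
    println!("{:?}", phase);
}
```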

Dual-Layer Plugin System

  • Compile-time Plugins: Rust/WASM for performance-critical paths
  • Runtime Plugins: Rhai scripts for dynamic business logic with hot-reload
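The dual-layer idea boils down to one interface with two kinds of implementations: one whose behavior is fixed at compile time, and one whose behavior is carried as data and can be swapped at runtime. The sketch below is dependency-free and hypothetical; `Plugin`, `NativePlugin`, and `ScriptPlugin` are illustrative names, not the real mofa-plugins API.

```rust
// Hypothetical sketch of the dual-layer plugin idea: one trait, two layers.
trait Plugin {
    fn name(&self) -> &str;
    fn call(&self, input: &str) -> String;
}

/// Compile-time layer: behavior baked into the binary
/// (stands in for a Rust/WASM plugin).
struct NativePlugin;

impl Plugin for NativePlugin {
    fn name(&self) -> &str { "native-upper" }
    fn call(&self, input: &str) -> String { input.to_uppercase() }
}

/// Runtime layer: behavior carried as data, replaceable without a rebuild
/// (stands in for a Rhai script, which MoFA can hot-reload).
struct ScriptPlugin {
    source: String,
}

impl Plugin for ScriptPlugin {
    fn name(&self) -> &str { "script" }
    fn call(&self, input: &str) -> String {
        // A real runtime plugin would evaluate `self.source` with the Rhai
        // engine; this sketch just echoes it to stay dependency-free.
        format!("{}: {}", self.source, input)
    }
}

fn main() {
    // Both layers sit behind the same trait object.
    let plugins: Vec<Box<dyn Plugin>> = vec![
        Box::new(NativePlugin),
        Box::new(ScriptPlugin { source: "greet".into() }),
    ];
    for p in &plugins {
        println!("{} -> {}", p.name(), p.call("hello"));
    }
}
```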

Quick Example

use std::sync::Arc; // needed for Arc::new below

use mofa_sdk::kernel::prelude::*;
use mofa_sdk::llm::{LLMClient, openai_from_env};

struct MyAgent {
    client: LLMClient,
}

#[async_trait]
impl MoFAAgent for MyAgent {
    fn id(&self) -> &str { "my-agent" }
    fn name(&self) -> &str { "My Agent" }

    async fn execute(&mut self, input: AgentInput, _ctx: &AgentContext) -> AgentResult<AgentOutput> {
        let response = self.client.ask(&input.to_text()).await
            .map_err(|e| AgentError::ExecutionFailed(e.to_string()))?;
        Ok(AgentOutput::text(response))
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = LLMClient::new(Arc::new(openai_from_env()?));
    let mut agent = MyAgent { client };
    let ctx = AgentContext::new("exec-001");

    let output = agent.execute(AgentInput::text("Hello!"), &ctx).await?;
    println!("{}", output.as_text().unwrap());

    Ok(())
}

Getting Started

  Goal                       Where to go
  Get running in 10 minutes  Installation
  Configure your LLM         LLM Setup
  Build your first agent     Your First Agent
  Learn step by step         Tutorial
  Understand the design      Architecture

Who Should Use MoFA?

  • AI Engineers building production AI agents
  • Platform Teams needing extensible agent infrastructure
  • Researchers experimenting with multi-agent systems
  • Developers who want type-safe, high-performance agent frameworks

Community & Support

License

MoFA is licensed under the Apache License 2.0.


📖 Documentation Languages: This documentation is available in English and 简体中文 (Simplified Chinese).