Use Case Documentation

MADEIRA Agent

Multi-Agent Coordination System for Autonomous Execution

Version: 0.6.0
Last updated: 2026
01 — Overview

The Five-Agent Paradigm

The MADEIRA Agent system implements a Four-Layer LLMOS Paradigm that decouples the reasoning engine from its tools and channels across four functional domains: Control Plane, Integration Layer, Execution Layer, and Intelligence Layer.

This architecture addresses the fundamental problem of cognitive overload in AI systems. Forcing a single agent to simultaneously architect a solution and write the underlying syntax causes context degradation, hallucinations, and infinite debugging loops. The multi-agent paradigm separates concerns across five specialized roles.

MADEIRA shifts from human-in-the-loop co-pilots to human-on-the-loop autonomous execution. The system can operate independently while humans maintain strategic oversight.

02 — The Five Agents

Specialized Agent Roles

Each agent operates with specific permissions and capabilities, enforced at the configuration layer through role-safe capability enforcement. This is not just a prompt instruction; it is an architectural constraint.
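As an illustration, configuration-layer capability enforcement could be sketched as follows. The five agent names come from this document; the `Capability` flags and the `enforce` helper are hypothetical:

```python
from enum import Flag, auto

class Capability(Flag):
    READ = auto()
    WRITE = auto()
    EXECUTE = auto()
    DELEGATE = auto()

# Hypothetical capability map mirroring the five MADEIRA roles.
AGENT_CAPABILITIES = {
    "madeira": Capability.READ,                              # read-only lookup
    "orchestrator": Capability.READ | Capability.DELEGATE,   # cannot execute directly
    "architect": Capability.READ,                            # read-only planner
    "executor": Capability.READ | Capability.WRITE | Capability.EXECUTE,
    "verifier": Capability.READ | Capability.EXECUTE,        # runs lint/test/build
}

def enforce(agent: str, needed: Capability) -> None:
    """Raise before a tool call ever runs: a constraint in configuration,
    not an instruction in a prompt."""
    granted = AGENT_CAPABILITIES[agent]
    if needed not in granted:
        raise PermissionError(f"{agent} lacks {needed}")
```

Because the check runs outside the model, an Architect that "decides" to write a file is stopped by the runtime, not merely discouraged by its prompt.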

Madeira

Interface Agent

User-facing operational assistant for the Holistika knowledge vault. Operates in read-only lookup mode and answers questions using hlk_* tools and a deterministic exact-lookup → ranked-search ladder. Escalates multi-step tasks to the Orchestrator.

Intent Parsing · Knowledge Retrieval · User Interaction · Task Escalation
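The exact-lookup → ranked-search ladder can be sketched as below. The `hlk_*` tool names are from the document, but their signatures and the `Vault` stand-in are assumptions:

```python
class Vault:
    """Minimal stand-in for the Holistika vault exposing hypothetical hlk_* tools."""
    def __init__(self, docs):
        self.docs = docs

    def hlk_exact_lookup(self, key):
        # Rung 1: deterministic exact key match
        return self.docs.get(key)

    def hlk_ranked_search(self, query, top_k=5):
        # Rung 2: ranked fallback (a substring match stands in for real ranking)
        return [v for k, v in self.docs.items() if query.lower() in k.lower()][:top_k]

def lookup(vault, query):
    """Exact-lookup -> ranked-search ladder: fall back only when rung 1 misses."""
    exact = vault.hlk_exact_lookup(query)
    if exact is not None:
        return [exact]
    return vault.hlk_ranked_search(query)
```

The ladder keeps the Interface Agent deterministic: identical queries always take the same path, and the cheaper exact rung short-circuits ranking entirely.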

Orchestrator

Coordinator

Receives user requests and decomposes them into sub-tasks. Delegates to Architect, Executor, and Verifier. Tracks progress, handles failures, supports parallel delegation. Cannot execute tasks directly.

Task Decomposition · Agent Delegation · Progress Tracking · Failure Handling
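A minimal sketch of the decompose-delegate-track loop, with trivially simple decomposition and hypothetical agent stubs (the real Orchestrator's interfaces are not specified here):

```python
from dataclasses import dataclass

@dataclass
class SubTask:
    description: str
    status: str = "pending"

class Orchestrator:
    """Decomposes requests and delegates; holds no execution rights itself."""
    def __init__(self, architect, executor, verifier):
        self.architect = architect
        self.executor = executor
        self.verifier = verifier

    def handle(self, request):
        # Decompose (trivially here: one sub-task per non-empty line)
        tasks = [SubTask(line.strip()) for line in request.splitlines() if line.strip()]
        for task in tasks:
            plan = self.architect.plan(task.description)   # read-only planning
            output = self.executor.run(plan)               # read-write execution
            task.status = "done" if self.verifier.check(output) else "failed"
        return tasks
```

Note that the Orchestrator only routes work and records status; every concrete action happens inside the delegated agents.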

Architect

Planner

Operates in read-only mode using sequential_thinking MCP for structured reasoning. Produces a plan document with explicit tool selections and risk assessments. Cannot write files or execute commands.

Read-Only Analysis · Plan Documents · Risk Assessment · Tool Selection

Executor

Builder

Operates in read-write mode. Reads the Architect's plan before taking any action. Executes strict, well-scoped directives with a 3-retry error recovery loop guided by the Verifier.

Code Execution · File Operations · API Calls · Error Recovery
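The 3-retry recovery loop guided by the Verifier might look like this sketch; `run_step` and the verifier's fix-returning `check` are hypothetical names:

```python
MAX_ATTEMPTS = 3  # per the document: escalate after 3 failed attempts

def execute_with_recovery(step, run_step, verifier):
    """Run one plan step; on failure, apply the Verifier's targeted fix and retry."""
    for attempt in range(MAX_ATTEMPTS):
        output = run_step(step)
        ok, fix = verifier.check(output)
        if ok:
            return output
        step = fix(step)  # targeted fix suggested by the Verifier
    raise RuntimeError(f"failed after {MAX_ATTEMPTS} attempts; escalating to Orchestrator")
```

The cap is what turns a potential infinite debugging loop into a bounded one: after the third miss, control returns to the Orchestrator instead of burning more attempts.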

Verifier

Quality Gate

Validates Executor output via lint, test, build, and browser verification. Diagnoses failures and suggests targeted fixes. Escalates to Orchestrator after 3 failed attempts.

Output Validation · Lint & Test · Failure Diagnosis · Escalation
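The quality gate's lint → test → build → browser sequence can be sketched as an ordered pipeline that reports the first failure as a diagnosis; the gate names are from the document, the boolean artifact fields are illustrative:

```python
def verify(artifact, gates=("lint", "test", "build", "browser")):
    """Run the quality gates in order and report the first failure."""
    runners = {
        "lint":    lambda a: a.get("lint_clean", False),
        "test":    lambda a: a.get("tests_pass", False),
        "build":   lambda a: a.get("builds", False),
        "browser": lambda a: a.get("renders", False),
    }
    for name in gates:
        if not runners[name](artifact):
            return False, f"{name} failed"   # diagnosis feeds a targeted fix
    return True, "all gates passed"
```

Stopping at the first failing gate keeps the diagnosis specific, which is what lets the Executor attempt a targeted fix rather than a blind retry.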
03 — Architecture

Four-Layer LLMOS Paradigm

CONTROL PLANE — Gateway: agents.list routing, Auth, Channel multiplexing
INTEGRATION LAYER — Telegram, Slack, WhatsApp, A2UI Canvas
EXECUTION LAYER — Orchestrator → Architect → Executor → Verifier
INTELLIGENCE LAYER — MCP Memory Server, Workspace Files, Context Compressor

Agent Behavioral Protocols

  • Self-Verification — Executor auto-verifies after every edit (lint/test). Enforcement: never moves to the next step with failures.
  • Loop Detection — Orchestrator and Executor detect repetitive failures. Enforcement: escalates to the user after 3 attempts.
  • Memory Hygiene — All agents store decisions in MEMORY.md. Enforcement: proactive via memory_store().
  • Structured Planning — Multi-step work produces numbered plans. Enforcement: conditional tasklist triggers.
  • RULES.md — Workspace conventions loaded at session start. Enforcement: user-defined.
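Loop detection reduces to counting recurring failure signatures. A minimal sketch, in which only the 3-attempt threshold comes from the document:

```python
from collections import Counter

class LoopDetector:
    """Flag escalation when the same failure signature recurs 3 times."""
    THRESHOLD = 3  # matches the 3-attempt rule in the protocol table

    def __init__(self):
        self.failures = Counter()

    def record(self, signature):
        """Record a failure; return True once it is time to escalate."""
        self.failures[signature] += 1
        return self.failures[signature] >= self.THRESHOLD
```

Keying on a failure signature (e.g. the normalized error message) rather than a raw count distinguishes "stuck on the same bug" from three unrelated failures.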
04 — Model Tiers

Multi-Model Architecture

The system supports seamless switching between model tiers and deployment environments without code changes. Every model is assigned to exactly one tier, which determines its thinking default, context budget, and prompt variant.

Model Tier Registry

  • Small — context 16,384, thinking off. Example models: ollama/qwen3:8b, llama3.2:3b
  • Medium — context 32,768, thinking low. Example models: deepseek-r1:14b, groq/llama-3.3-70b
  • Large — context 131,072, thinking medium. Example models: claude-sonnet-4, vllm/deepseek-r1-70b
  • SOTA — context 200,000, thinking high. Example models: openai/gpt-5, claude-opus-4

SOUL.md prompts are assembled from a base file plus tier-appropriate overlays: small models get compact prompts (3-5 MUST rules), while large models get the full feature set.
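Tier-dependent prompt assembly might look like the sketch below. The registry values match the table above and SOUL.md is from the document; the overlay scheme and function names are assumptions:

```python
# Tier registry values taken from the Model Tier Registry table.
TIER_REGISTRY = {
    "small":  {"context": 16_384,  "thinking": "off"},
    "medium": {"context": 32_768,  "thinking": "low"},
    "large":  {"context": 131_072, "thinking": "medium"},
    "sota":   {"context": 200_000, "thinking": "high"},
}

def assemble_soul(base, tier, overlays):
    """Base SOUL.md text plus the overlay variant for this tier."""
    cfg = TIER_REGISTRY[tier]
    header = f"# tier={tier} context={cfg['context']} thinking={cfg['thinking']}"
    return "\n".join([header, base, overlays.get(tier, "")])
```

Because the tier is resolved at assembly time, swapping a small local model for a SOTA one changes the prompt variant without touching agent code.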

05 — Integration

Channel Adapters & MCP

The Integration Layer connects MADEIRA agents to external systems through channel adapters and the Model Context Protocol (MCP). Each adapter translates platform-specific events into the unified MADEIRA messaging format.
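The translation step an adapter performs might look like this sketch, assuming a hypothetical unified `Message` shape (the Telegram and Slack payload fields follow their public APIs):

```python
from dataclasses import dataclass

@dataclass
class Message:
    """Hypothetical unified MADEIRA message format."""
    channel: str
    user_id: str
    text: str

def from_telegram(update):
    # Telegram Bot API update -> unified Message
    msg = update["message"]
    return Message("telegram", str(msg["from"]["id"]), msg.get("text", ""))

def from_slack(event):
    # Slack Events API message event -> unified Message
    return Message("slack", event["user"], event.get("text", ""))
```

Downstream agents only ever see `Message`, so adding a new channel means writing one adapter function, not touching the Execution Layer.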

Channel Adapters

  • Telegram Bot — Real-time messaging
  • Slack Adapter — Workspace integration
  • WhatsApp Adapter — Mobile communication
  • A2UI Canvas — Web dashboard interface

MCP Primitives

  • Resources — Application-controlled context
  • Tools — Model-controlled functions
  • Prompts — User-controlled templates
  • Sessions — Connection state management
06 — Getting Started

Deployment Path

MADEIRA Agent deployment follows a structured path from local development to production. The architecture supports multiple deployment profiles, from dev-local with medium models to prod-cloud with SOTA capabilities.

1. Environment Setup — Configure environment profiles, provider credentials, and model tier settings.

2. Agent Configuration — Define agent capabilities, SOUL.md prompts, and workspace directories.

3. Channel Integration — Connect channel adapters for user interaction (Telegram, Slack, etc.).

4. Knowledge Base — Connect KiRBe for knowledge retrieval and MEMORY.md for context persistence.

5. Observability — Enable Langfuse tracing, log watcher, and answer-quality telemetry.
