# Nanobot — The AI Agent Framework That Fits in a Weekend Project
Nanobot delivers genuine value for developers building lightweight multi-platform agents. The HKUDS research team has credibility. The honest caveat: v0.1.4 is pre-1.0 with active development — suitable for prototyping and internal tools, risky for customer-facing products without a migration plan.
## What is Nanobot?
Nanobot is an open-source AI agent framework built by HKUDS — the Data Intelligence Lab at the University of Hong Kong. It sits at v0.1.4.post4 (released 2026-03-08) and focuses on one clear objective: give developers a working multi-platform agent with minimal code overhead.
The “99% less code than OpenClaw” claim comes from HKUDS’s own comparisons. Whether your specific use case hits that number depends on what you’re building, but the structural difference is real. Nanobot ships opinionated defaults for chat platform integrations, LLM routing, and task scheduling. You configure rather than construct.
HKUDS has credibility here. Their other projects have meaningful adoption: LightRAG (29.3k stars, published at EMNLP 2025), DeepCode (14.9k stars), RAG-Anything (14.2k stars), and AI-Trader (11.7k stars). This is a research lab that ships production-facing tools, not a one-repo wonder. Nanobot itself has reached 33.2k stars and 5.5k forks — significant for a v0.1.x project.
The trade-off is transparency: this is pre-1.0 software. With 415 open issues and 483 open PRs as of the latest data, the surface area of known problems is large. That’s not disqualifying for a research project or internal tool, but it matters for anything customer-facing.
MIT licensed, Python ≥3.11 required, Node.js ≥18 required only if you use the WhatsApp integration.
## Installation & Setup

### Prerequisites

- Python 3.11 or higher (3.12 recommended)
- pip or uv package manager
- Node.js ≥18 only if you plan to use the WhatsApp connector
- API credentials for at least one LLM provider (Anthropic, OpenAI, Alibaba Cloud, DeepSeek, etc.)
### Installation

Three installation paths are available. Use uv if you want isolated tooling without polluting your system Python environment.

```bash
# Standard pip install
pip install nanobot-ai

# uv (recommended for isolated installs)
uv tool install nanobot-ai

# From source (for contributors or latest unreleased changes)
git clone https://github.com/HKUDS/nanobot.git
cd nanobot
pip install -e .
```
The package name on PyPI is `nanobot-ai`, not `nanobot`. The latter is a different, unrelated package. Don’t mix them up.
### Initial Configuration

Nanobot uses a YAML configuration file to define agent behavior, LLM provider, and platform connections. Create a `config.yaml` in your working directory.

```yaml
# config.yaml — minimal single-agent setup with Telegram
agent:
  name: my-agent
llm:
  provider: openai
  model: gpt-4o
  api_key: "${OPENAI_API_KEY}"
platforms:
  telegram:
    token: "${TELEGRAM_BOT_TOKEN}"
memory:
  enabled: true
  backend: local
```
Set your environment variables before running:
```bash
export OPENAI_API_KEY=sk-...
export TELEGRAM_BOT_TOKEN=...
nanobot start --config config.yaml
```
The agent starts, connects to Telegram, and accepts messages immediately. No boilerplate routing code required.
## Core Features

### Multi-LLM Provider Support
Nanobot routes requests to Claude, GPT-4o, Qwen, DeepSeek, and other providers through a unified config layer. Switching providers means changing two lines in your YAML. No vendor-specific SDK calls end up scattered through your agent logic.
```yaml
# Switching from OpenAI to DeepSeek
llm:
  provider: deepseek
  model: deepseek-chat
  api_key: "${DEEPSEEK_API_KEY}"
```
This matters for cost control and regional compliance. DeepSeek and Qwen are substantially cheaper per token than GPT-4o for many tasks. Being able to A/B test providers without refactoring is a real productivity gain.
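The two-line switch above can be automated for A/B tests. The sketch below is plain Python, not a Nanobot API — the config shape mirrors the YAML snippets in this article, and the provider/model table is illustrative:

```python
import copy

# Base config mirroring the YAML shown earlier (shape assumed from the docs)
BASE_CONFIG = {
    "agent": {"name": "my-agent"},
    "llm": {"provider": "openai", "model": "gpt-4o", "api_key": "${OPENAI_API_KEY}"},
}

# Candidate providers to A/B test (model names are illustrative)
PROVIDERS = {
    "openai": {"model": "gpt-4o", "api_key": "${OPENAI_API_KEY}"},
    "deepseek": {"model": "deepseek-chat", "api_key": "${DEEPSEEK_API_KEY}"},
}

def config_for(provider: str) -> dict:
    """Return a copy of the base config pointed at a different provider."""
    cfg = copy.deepcopy(BASE_CONFIG)
    cfg["llm"] = {"provider": provider, **PROVIDERS[provider]}
    return cfg

# One config variant per candidate provider, ready to serialize to YAML
variants = {name: config_for(name) for name in PROVIDERS}
```

Because only the `llm` block changes, the rest of the agent definition — platforms, memory, scheduling — stays identical across variants.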
### MCP Protocol Integration
Nanobot supports the Model Context Protocol (MCP), which standardizes how agents expose and consume tools. MCP lets you connect external data sources and tool servers without writing custom adapters for each integration.
```yaml
# Connecting an MCP tool server
mcp:
  servers:
    - name: filesystem
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]
```
If you’re already running MCP-compatible tool servers — for example, a local filesystem server or a database connector — Nanobot can consume them directly. This reduces the integration work when your stack grows beyond a single agent.
### Multi-Platform Chat Integration
This is where Nanobot separates itself from most agent frameworks. Out of the box, it supports Telegram, Discord, WhatsApp, Slack, Email, QQ, DingTalk, Feishu, Matrix, WeChat, and MoChat — eleven platforms from a single agent config.
```yaml
# Multi-platform deployment from one config
platforms:
  telegram:
    token: "${TELEGRAM_BOT_TOKEN}"
  discord:
    token: "${DISCORD_BOT_TOKEN}"
  slack:
    bot_token: "${SLACK_BOT_TOKEN}"
    app_token: "${SLACK_APP_TOKEN}"
```
The agent instance is shared. A user on Telegram and a user on Discord interact with the same underlying agent, the same memory store, and the same tool set. For teams building internal bots that need to reach users across platforms, this removes a significant amount of duplicated wiring.
WhatsApp requires Node.js ≥18 because the connector uses the whatsapp-web.js library under the hood. This is a known architectural quirk — expect it to add setup friction if Node.js isn’t already in your environment.
### Smart Scheduling and Task Automation
Nanobot includes a built-in scheduler for recurring agent tasks. You define jobs in YAML using cron syntax, and the agent executes them autonomously without an external orchestration layer like Celery or APScheduler.
```yaml
# Scheduled task: daily market summary at 08:00 UTC
schedule:
  - name: morning-briefing
    cron: "0 8 * * *"
    task: "Summarize overnight market movements and send to Telegram channel"
    platform: telegram
    target: "${CHANNEL_ID}"
```
This is directly connected to the AI-Trader lineage in HKUDS’s portfolio. The scheduling primitives are tuned for real-time analysis workflows: market data pulls, periodic summarization, alert generation. For general task automation, the cron-plus-prompt approach is simple but limited — complex multi-step workflows with conditional branching will push against the edges of what YAML scheduling can express.
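To make the cron line concrete, here is a small stdlib-only matcher for the five-field expression used above. It is deliberately simplified — it handles `*` and plain numbers only, not ranges, lists, or steps, which real cron parsers support:

```python
from datetime import datetime

def cron_matches(expr: str, when: datetime) -> bool:
    """Check a datetime against a simplified 5-field cron expression."""
    minute, hour, day, month, weekday = expr.split()
    fields = [
        (minute, when.minute),
        (hour, when.hour),
        (day, when.day),
        (month, when.month),
        (weekday, (when.weekday() + 1) % 7),  # cron convention: 0 = Sunday
    ]
    return all(f == "*" or int(f) == actual for f, actual in fields)

# "0 8 * * *" fires at minute 0 of hour 8, every day of every month
```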
### Personal Knowledge Management and Memory
Nanobot persists agent memory across sessions. It maintains context about users, past interactions, and loaded documents. The memory backend defaults to local storage, with options for external backends.
```python
# Querying agent memory programmatically
from nanobot import Agent

agent = Agent.from_config("config.yaml")

# Store a fact
agent.memory.store("user_preference", "John prefers concise summaries")

# Retrieve during a task
context = agent.memory.recall("user_preference")
```
The knowledge management layer draws on HKUDS’s RAG-Anything research. Long-term, the intent is to give agents persistent, queryable knowledge bases rather than stateless per-request context. At v0.1.4, this works for simple key-value and document retrieval cases. Complex retrieval over large corpora is still maturing.
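The default local backend behaves like a key-value store that survives restarts. A minimal sketch of that idea follows — JSON-file persistence is an illustrative assumption, since Nanobot's actual on-disk format is not documented at this version:

```python
import json
import tempfile
from pathlib import Path

class LocalMemory:
    """Key-value memory persisted to a JSON file across sessions."""

    def __init__(self, path):
        self.path = Path(path)
        # Reload any facts left behind by a previous session
        self._data = json.loads(self.path.read_text()) if self.path.exists() else {}

    def store(self, key: str, value: str) -> None:
        self._data[key] = value
        self.path.write_text(json.dumps(self._data))

    def recall(self, key: str, default=None):
        return self._data.get(key, default)


demo_path = Path(tempfile.gettempdir()) / "nanobot_demo_memory.json"
mem = LocalMemory(demo_path)
mem.store("user_preference", "John prefers concise summaries")
```

A second process constructing `LocalMemory` over the same path sees the stored fact — the property that lets an agent pick up context after a restart.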
## Strengths
- Rapid deployment: An agent connected to multiple chat platforms runs in under 50 lines of YAML configuration.
- Credible research foundation: HKUDS has published at EMNLP 2025 and ships tools with five-figure GitHub star counts. The team knows the domain.
- Multi-platform coverage: Eleven chat platforms from a single agent instance is genuinely rare. Most frameworks make you build per-platform adapters.
- MIT licensed: No usage restrictions, no licensing fees, no enterprise tier required to access core features.
- MCP support: Plugs into the emerging MCP ecosystem without custom adapter code.
- LLM provider flexibility: Switch between OpenAI, Anthropic, DeepSeek, and Qwen with config changes, not code changes.
## Weaknesses
- Pre-1.0 instability: v0.1.4 signals active breaking changes. Any non-trivial deployment needs a pinned version and a migration plan before each update.
- 415 open issues: The issue backlog is large relative to the version number. Some reported bugs affect core functionality. Triage before committing to production use.
- No enterprise support: MIT licensed means community-only support. No SLA, no paid support tier, no dedicated security response process.
- Python 3.11+ hard requirement: Projects still on 3.10 or earlier need a Python upgrade before adoption. This is a non-trivial barrier in some organizations.
- Node.js dependency for WhatsApp: Adding a second runtime just for one platform connector is an operational cost that increases Docker image size and setup complexity.
- Thin documentation at depth: Surface-level quickstart docs are solid, but configuration options for advanced memory backends, multi-agent coordination, and MCP server setup are sparse at this version.
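The version-pinning advice above amounts to a one-line requirements entry (the pin shown matches the release discussed in this review):

```text
# requirements.txt — pin the exact release; review the changelog before bumping
nanobot-ai==0.1.4.post4
```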
## Pricing
Nanobot is free and MIT licensed. There is no paid tier, no hosted cloud service, and no enterprise offering as of v0.1.4.
Your actual costs are LLM API usage (billed by your chosen provider — OpenAI, Anthropic, DeepSeek, etc.) and infrastructure for running the agent process. A minimal deployment on a $5–10/month VPS handles most personal and small-team workloads. Compute requirements scale with the number of platforms monitored and the frequency of scheduled tasks, not with Nanobot itself.
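As a rough illustration of where the money goes, here is a back-of-envelope token-cost estimate. Every rate and volume below is an assumption for illustration, not a quoted price — check your provider's current pricing page:

```python
# Assumed illustrative rates in USD per 1M tokens — NOT current quotes
PRICE_PER_M = {"input": 2.50, "output": 10.00}

# Assumed workload: 200 messages/day, ~500 input + ~300 output tokens each
messages_per_day = 200
input_tokens = messages_per_day * 500 * 30   # per month
output_tokens = messages_per_day * 300 * 30  # per month

monthly_cost = (
    input_tokens / 1_000_000 * PRICE_PER_M["input"]
    + output_tokens / 1_000_000 * PRICE_PER_M["output"]
)
# Under these assumptions, LLM spend comes to roughly $25/month —
# several times the cost of the VPS running the agent itself.
```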
## Conclusion & Assessment
Nanobot earns its star count. For a developer who wants a working multi-platform agent without writing integration glue for every chat platform, it delivers faster than any comparable framework. The HKUDS team has a track record of shipping research-grounded tools that see real adoption — this isn’t speculative work.
The honest constraint is timing. v0.1.4 is early. The 415 open issues aren’t a dealbreaker for prototyping or internal tooling, but they’re a meaningful risk signal for anything with uptime requirements. If you’re building a customer-facing product that needs to run reliably for months without touching the codebase, wait for v0.5 or v1.0, or plan to own your pinned version aggressively.
The best use cases right now: rapid prototyping of multi-platform agents, internal automation tools, research experiments that need LLM provider flexibility, and projects where the HKUDS research lineage (LightRAG, RAG-Anything) is directly relevant to what you’re building.
For teams already invested in LangChain or AutoGen, Nanobot doesn’t offer enough ecosystem depth to justify migration today. The value proposition is for new projects starting from zero where lightweight configuration beats large ecosystem.
Watch the PR merge rate over the next few months. If the 483 open PRs move, the v0.2.x or v0.3.x releases will likely resolve the stability concerns that currently limit production adoption.
Last verified: 2026-03-13.
## Who It's For
Best for: Developers building rapid prototypes of multi-platform AI agents, internal automation tools, research projects requiring LLM provider flexibility, and teams leveraging HKUDS research (LightRAG, RAG-Anything) directly.
Not ideal for: Customer-facing products requiring production-grade stability, organizations with strict security/compliance audit requirements, teams already invested in LangChain or AutoGen ecosystems, projects needing enterprise vendor support.