An intelligent CLI coding assistant powered by Google's ADK framework
adk-code is a multi-model AI coding assistant that runs directly in your terminal. Ask natural-language questions about your code; it reads files, executes commands, makes edits, and runs searches autonomously.
- 🤖 Multi-Model Support: Seamlessly switch between Gemini, OpenAI, and Vertex AI
- 🛠️ 30+ Built-in Tools: File operations, code editing, execution, Google Search, HTTP fetch, agent discovery, and more
- 🔌 MCP Integration: Unlimited extensibility via Model Context Protocol
- 💾 Session Persistence: Maintain context across conversations with automatic history and token-aware compaction
- 🌐 Web Integration: Google Search and HTTP fetch tools for research and content retrieval
- ⚡ Streaming Responses: Real-time output as the model thinks and executes
- 🎨 Beautiful Terminal UI: Rich formatting, colors, pagination, and interactive displays
- 📦 Zero External Dependencies: No Langchain, Claude Code, or Cline baggage
The easiest way to install on macOS:
```bash
# Add the tap (one-time)
brew tap raphaelmansuy/adk-code

# Install adk-code
brew install adk-code

# Verify installation
adk-code --version
```

Supported on:
- macOS 10.13+ (High Sierra and later)
- Intel (x86_64) and Apple Silicon (M-series) Macs

Update to the latest version:

```bash
brew upgrade adk-code
```

Uninstall:

```bash
brew uninstall adk-code
```

See homebrew-adk-code for more details.
Clone and build manually:
```bash
# Clone and build
git clone https://github.com/raphaelmansuy/adk-code.git
cd adk-code/adk-code
make build

# Binary is now at ../bin/adk-code
```

```bash
# Set your API key
export GOOGLE_API_KEY=your-key-here

# Run adk-code
../bin/adk-code
```

That's it! You're ready to ask questions about your code.
```text
# Interactive mode (default)
❯ How do I add error handling to ReadFile?
  [adk-code reads files, analyzes, and suggests changes]

❯ Create a CLI parser for flags
  [adk-code implements, tests, and explains]

# Session mode
❯ adk-code --session my-project --model gpt-4o

# Batch mode
❯ echo "Write a test for userAuth()" | adk-code
```

| Use Case | Benefit |
|---|---|
| Code Review | Understand complex codebases quickly |
| Bug Fixes | Trace errors and implement solutions |
| Refactoring | Improve code quality with AI guidance |
| Documentation | Generate docs and comments |
| Testing | Write and run test suites |
| Learning | Study patterns and best practices |
```text
┌───────────────────────────────────┐
│      User Terminal (REPL)         │
└───────────────┬───────────────────┘
                │
┌───────────────▼───────────────────┐
│   Agent Loop (ADK Framework)      │
│  ┌──────────────────────────┐     │
│  │ 1. Call LLM with context │     │
│  │ 2. Parse tool calls      │     │
│  │ 3. Execute tools         │     │
│  │ 4. Append results        │     │
│  │ 5. Loop until complete   │     │
│  └──────────────────────────┘     │
└───────────────┬───────────────────┘
                │
       ┌────────┴───────┬───────────┐
       ▼                ▼           ▼
┌──────────────┐  ┌──────────┐  ┌──────────┐
│  30+ Tools   │  │ LLM APIs │  │ Display  │
├──────────────┤  ├──────────┤  ├──────────┤
│ File Ops     │  │ Gemini   │  │ Rich UI  │
│ Web Tools    │  │ OpenAI   │  │ Colors   │
│ Search       │  │ Vertex   │  │ Markdown │
└──────────────┘  └──────────┘  └──────────┘
```
See docs/ARCHITECTURE.md for details.
adk-code supports a dynamic sub-agent system that enables intent-driven delegation and modular, reusable agent definitions. This architecture is designed to make complex tasks more manageable by letting dedicated sub-agents handle specialized responsibilities while the main agent orchestrates work.
- **Discovery:** `pkg/agents` discovers agent definitions from `.adk/agents/` (YAML frontmatter + Markdown). Agents are validated for name, description, and metadata (version, author, tags).
- **Agent Router (planned):** The spec defines the router (`internal/agents/router.go`) as a small decision layer that will run intent scoring and select the right handler. Note: the router is a planned Phase 1 component. Current behavior: adk-code already supports actionable sub-agents using ADK's agent-as-tool pattern; see `tools/agents/subagent_tools.go` and `internal/prompts/coding_agent.go`. `SubAgentManager` discovers `.adk/agents/*.md`, creates `llmagent` instances, and registers them as tools; the LLM naturally selects an agent/tool at runtime. This provides pragmatic delegation today while the router design remains part of Phase 1.
- **Sub-Agents:** Sub-agents are self-contained agent definition files with metadata and behavior (skills/commands). They can be added, versioned, and discovered at runtime.
- **Delegation Flow (current):** User request → LLM (main agent) selects a tool/sub-agent → if a sub-agent tool is invoked, it runs in its own context and executes allowed tools or MCP services → return result → main agent synthesizes the final answer.
- **Delegation Flow (future router):** User request → Agent Router (intent scoring, heuristics) → select sub-agent → sub-agent executes tools or MCP services → return result → main agent synthesizes the final answer.
- **Tools & MCP:** Sub-agents call local tools or external MCP servers for actions (filesystem edits, Git, build, cloud APIs). This separation keeps tool execution deterministic and traceable.
- **Audit & Replay:** All agent actions (intent scores, chosen sub-agent, tool calls, and MCP interactions) are logged to the session history, enabling replay, debugging, and reproducibility.
Benefits: concise intent routing, modular agent definitions, scalable delegation to domain-specific sub-agents, and transparent tool/MCP integration.
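The agent-as-tool pattern above can be pictured with a minimal Go sketch. All names here (`SubAgent`, `AsTool`) are hypothetical illustrations of the idea, not the actual adk-code API:

```go
package main

import "fmt"

// SubAgent is a hypothetical stand-in for a discovered agent definition:
// metadata plus a behavior to run in its own context.
type SubAgent struct {
	Name        string
	Description string
	Run         func(request string) string
}

// AsTool wraps a sub-agent as a plain tool function. Once registered,
// the main agent's LLM can select it like any other tool; the sub-agent
// runs in its own context and returns a result for the main agent to
// synthesize into the final answer.
func AsTool(a SubAgent) func(string) string {
	return func(request string) string {
		return a.Run(request)
	}
}

func main() {
	reviewer := SubAgent{
		Name:        "code-reviewer",
		Description: "Reviews diffs for style and correctness",
		Run:         func(req string) string { return "review of: " + req },
	}
	// Register the sub-agent alongside ordinary tools.
	tools := map[string]func(string) string{reviewer.Name: AsTool(reviewer)}
	fmt.Println(tools["code-reviewer"]("main.go diff")) // → review of: main.go diff
}
```

The point of the wrapper is that delegation needs no special routing machinery today: a sub-agent is just one more entry in the tool registry.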
Integrated native Google Search via ADK's `geminitool.GoogleSearch` for real-time web searches without external dependencies.
Fetch and parse web content directly: extract text from URLs for research, documentation retrieval, and content analysis.
New REPL commands (`/sessions`, `/session`) for managing conversation sessions, viewing event history, and optimizing token usage with automatic session compaction.
Intelligent token compression that reduces conversation-history size while preserving context, extending long conversations without hitting token limits.
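One common shape for token-aware compaction is to fold the oldest turns into a summary marker once a budget is exceeded. The sketch below illustrates that idea; the heuristic, the `Message` type, and the placeholder summary are assumptions, not adk-code's actual strategy (which may, for example, ask the model to summarize dropped turns):

```go
package main

import "fmt"

// Message is a minimal illustrative conversation entry.
type Message struct {
	Role string
	Text string
}

// estimateTokens uses a rough ~4-characters-per-token heuristic.
func estimateTokens(msgs []Message) int {
	n := 0
	for _, m := range msgs {
		n += len(m.Text) / 4
	}
	return n
}

// compact drops the oldest messages until the history fits the budget,
// then prepends one placeholder standing in for the dropped turns.
func compact(msgs []Message, budget int) []Message {
	dropped := false
	for len(msgs) > 1 && estimateTokens(msgs) > budget {
		msgs = msgs[1:]
		dropped = true
	}
	if dropped {
		summary := Message{Role: "system", Text: "[earlier turns summarized]"}
		return append([]Message{summary}, msgs...)
	}
	return msgs
}

func main() {
	hist := []Message{
		{"user", "first long question about the project layout and build"},
		{"model", "a long detailed answer covering many files and options"},
		{"user", "how many lines in main.go?"},
	}
	out := compact(hist, 10)
	fmt.Println(len(out), out[0].Text) // → 2 [earlier turns summarized]
}
```

The key property is that the most recent turns survive verbatim, so the model keeps the context it is most likely to need.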
- QUICK_REFERENCE.md – Daily commands & flags (2 min)
- ARCHITECTURE.md – System design & components (15 min)
- TOOL_DEVELOPMENT.md – Build your own tools (20 min)
- ADR Repository – Architecture decision records for new features
- docs/ – Complete documentation suite
- Go 1.24+
- One API key:
  - `GOOGLE_API_KEY` (Gemini; free tier available)
  - `OPENAI_API_KEY` (OpenAI)
  - GCP project (Vertex AI)

Free tier, fastest setup (Gemini):

```bash
export GOOGLE_API_KEY=your-key
cd adk-code && make run
```

OpenAI:

```bash
export OPENAI_API_KEY=sk-...
cd adk-code && make run -- --model gpt-4o
```

Vertex AI:

```bash
export GOOGLE_CLOUD_PROJECT=your-project
export GOOGLE_CLOUD_LOCATION=us-central1
export GOOGLE_GENAI_USE_VERTEXAI=true
cd adk-code && make run
```

```bash
cd adk-code

# Build
make build

# Test
make test

# Quality checks (required before commit)
make check

# Development watch mode
make watch
```

```bash
./adk-code --model gemini-2.5-flash          # Specify model
./adk-code --session my-project              # Named session
./adk-code --output-format plain             # Output format
./adk-code --enable-thinking                 # Extended reasoning
./adk-code --working-directory /path/to/src  # Set working dir
```

See QUICK_REFERENCE.md for all flags.
- You ask a question in natural language
- Agent receives context (system prompt, tools, history)
- LLM generates response with tool calls (read file, run command, etc.)
- Tools execute and return results
- Agent loops until response is complete
- Result streams to your terminal in real-time
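The loop above can be sketched in Go with a stubbed model and tool registry. Everything here (`fakeLLM`, `runAgent`, the single `count_lines` tool) is invented for illustration; the real loop is driven by the ADK framework:

```go
package main

import "fmt"

// toolCall is a requested tool invocation parsed from the model's reply.
type toolCall struct {
	Name string
	Arg  string
}

// llmReply is either a final answer (Text) or a list of tool calls.
type llmReply struct {
	Text  string
	Calls []toolCall
}

// fakeLLM stands in for the model: it asks for a tool call on the first
// turn and produces a final answer once a tool result is in the history.
func fakeLLM(history []string) llmReply {
	for _, h := range history {
		if h == "tool:count_lines -> 140" {
			return llmReply{Text: "main.go has 140 lines"}
		}
	}
	return llmReply{Calls: []toolCall{{Name: "count_lines", Arg: "main.go"}}}
}

// runAgent implements the loop: call LLM, execute tool calls, append
// results to the context, and repeat until the model stops calling tools.
func runAgent(question string) string {
	history := []string{"user: " + question}
	tools := map[string]func(string) string{
		"count_lines": func(path string) string { return "140" },
	}
	for {
		reply := fakeLLM(history)
		if len(reply.Calls) == 0 {
			return reply.Text
		}
		for _, c := range reply.Calls {
			result := tools[c.Name](c.Arg)
			history = append(history, "tool:"+c.Name+" -> "+result)
		}
	}
}

func main() {
	fmt.Println(runAgent("How many lines in main.go?")) // → main.go has 140 lines
}
```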
Example: "How many lines in main.go?"

```text
Agent thinks: "User wants line count. I'll use count_lines tool."
    ↓
Calls: count_lines(path="main.go")
    ↓
Gets: {success: true, total_lines: 140, ...}
    ↓
Returns: "main.go has 140 lines"
```
Create your own tools without modifying core code. See TOOL_DEVELOPMENT.md.
```go
// 4-step pattern
type MyToolInput struct{ Path string }

type MyToolOutput struct{ Result string }

func handler(ctx Context, input MyToolInput) MyToolOutput {
	// Your logic
	return MyToolOutput{Result: "..."}
}

func init() {
	// Register the tool automatically
}
```

Use Model Context Protocol servers instead of building tools:
```json
{
  "mcp": {
    "servers": {
      "github": {
        "type": "stdio",
        "command": "mcp-server-github"
      }
    }
  }
}
```

| Metric | Value |
|---|---|
| Binary Size | ~15MB (release) |
| Startup Time | <500ms |
| Context Window | Up to 1M tokens (Gemini 2.5 Flash) |
| Tool Execution | <1s typical |
| Memory Usage | ~50MB baseline |
Contributions welcome! Please:
- Fork and create a branch
- Make changes following Go conventions
- Run `make check` before committing
- Submit a pull request with a description
See TOOL_DEVELOPMENT.md for architecture details.
Licensed under the Apache License, Version 2.0. See LICENSE for details.
Copyright 2025 adk-code contributors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
New to adk-code?
5 min → QUICK_REFERENCE.md → Start using
1 hour → ARCHITECTURE.md → Understand design
3 hours → Full docs → Contribute
Q: What's the difference between adk-code and ChatGPT?
A: adk-code runs in your terminal with direct filesystem access. No copy-pasting code; just ask.
Q: Can I use this offline?
A: No, it requires an LLM API. But you can use any of 3 providers (Gemini/OpenAI/Vertex).
Q: Is my code private?
A: Yes, only sent to your chosen API provider. Self-hosted options available on request.
Q: How much does it cost?
A: Depends on provider. Gemini has a free tier. OpenAI is ~$0.03/1K tokens.
Q: Can I build custom tools?
A: Yes! Follow the 4-step pattern in TOOL_DEVELOPMENT.md.
- Add more tool categories (database, API, etc.)
- Support for local LLMs
- Web UI option
- Plugin marketplace
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Contributing: See CONTRIBUTING.md
Built on:
- Google ADK – Agent framework
- Charmbracelet – Terminal UI
- Gemini/OpenAI/Vertex AI – LLM APIs
- ~15,000 lines of Go code
- 30+ tools across 8 categories (file, web, search, execution, edit, display, discovery, agents)
- 3 LLM backends supported (Gemini, OpenAI, Vertex AI)
- Session management with token-aware compaction
- 100% test coverage target
Made with ❤️ by the adk-code community
⭐ Star us on GitHub | 🐛 Report Bug | 💡 Request Feature