OpenAnalyst CLI

The universal AI agent

Install in seconds
One command. Native binary. No runtime dependencies.
Terminal
curl -fsSL https://raw.githubusercontent.com/OpenAnalystInc/cli/main/install.sh | bash
PowerShell
irm https://raw.githubusercontent.com/OpenAnalystInc/cli/main/install.ps1 | iex
Terminal
git clone https://github.com/OpenAnalystInc/cli.git
cd openanalyst-cli/rust
cargo build --release   # build from source (requires the Rust toolchain)
Login — Interactive Provider Picker
$ openanalyst login

  Select your LLM provider to authenticate:

  > OpenAnalyst         gpt-oss-120b free model or API key with credits
    Anthropic / Claude  opus, sonnet, haiku — API key
    OpenAI / Codex      gpt-4o, o3, codex-mini — API key
    Google Gemini       gemini-2.5-pro, flash — API key
    xAI / Grok
    OpenRouter          350+ models
    Amazon Bedrock

  How would you like to authenticate?

  > Use free model  gpt-oss-120b — no credits needed
    Use API key       OpenAnalyst API with credits

  Step 1  Connecting to OpenAnalyst... ✓ Connected

   Free model access configured

  Launch OpenAnalyst now? [Y/n]

  ✓ Login complete

Direct Provider Login — For Claude, Codex, and Gemini you can login directly through your provider account via browser OAuth. No API key needed — just sign in with your existing Anthropic, OpenAI, or Google account. Credentials are stored securely with PKCE and auto-refresh.

7 LLM providers. One interface.
Switch models mid-conversation. Sessions persist across providers. Log in via browser OAuth or API key. No other CLI does this.
OpenAnalyst DEFAULT — gpt-oss-120b free model or API credits — OPENANALYST_AUTH_TOKEN
Anthropic / Claude OAuth — opus, sonnet, haiku — openanalyst login or ANTHROPIC_API_KEY
OpenAI / Codex OAuth — gpt-4o, o3, codex-mini — openanalyst login or OPENAI_API_KEY
Google Gemini OAuth — gemini-2.5-pro, flash — openanalyst login or GEMINI_API_KEY
xAI / Grok — grok-3, grok-mini — XAI_API_KEY
OpenRouter — 350+ models from any provider — OPENROUTER_API_KEY
Amazon Bedrock — live discovery from gateway — BEDROCK_API_KEY
Mid-Conversation Provider Switching
> Explain this codebase                     # openanalyst-beta answers

> /model gpt-4o
  Model updated · Provider: OpenAI
  Session persisted (3 messages)              # conversation carries over

> Now fix the bug you found                 # gpt-4o sees full history

> /model gemini-2.5-pro
  Model updated · Provider: Google Gemini
  Session persisted (5 messages)              # 3 providers, 1 conversation
What you can do
The most feature-rich AI agent CLI ever built. Native binary, 7 LLM providers, 59 slash commands, 22 built-in tools, voice input, multi-agent orchestration, clipboard paste, /undo revert, and more.
Smart Per-Action Model Routing NEW
Every prompt is auto-classified as explore, research, code, or write, and each category is routed to the optimal model and effort level. Explore tasks go to Haiku (1K thinking), coding goes to Opus (32K thinking). Save up to 80% on tokens without thinking about it.

/effort code max — per-category control
/route — view/edit the full routing table
/effort high — set all categories globally
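A minimal sketch of what such a routing table could look like. The explore and code pairings (Haiku with a 1K thinking budget, Opus with 32K) come from the description above; the research and write rows, the `Route` type, and the function name are illustrative assumptions, not the CLI's real internals.

```rust
// Illustrative per-action routing table. Only the explore/code rows are
// stated in the docs above; the others are hypothetical placeholders.
#[derive(Debug)]
struct Route {
    model: &'static str,
    thinking_tokens: u32,
}

/// Map a classified prompt category to a model and thinking budget.
fn route(category: &str) -> Route {
    match category {
        "explore" => Route { model: "claude-haiku", thinking_tokens: 1_000 },
        "code" => Route { model: "claude-opus", thinking_tokens: 32_000 },
        // research/write budgets below are assumptions for illustration
        "research" => Route { model: "claude-sonnet", thinking_tokens: 8_000 },
        _ => Route { model: "claude-sonnet", thinking_tokens: 4_000 },
    }
}

fn main() {
    let r = route("code");
    println!("{} ({} thinking tokens)", r.model, r.thinking_tokens);
}
```

Commands like /effort code max would then override a single row, while /effort high rewrites every row's effort column.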
Knowledge Base — Agentic RAG NEW
/knowledge best Meta Ads strategy for D2C brands

Powered by BGE-M3 1024-dim embeddings on A100 GPU, PostgreSQL pgvector, and Neo4j knowledge graph. The pipeline:

1. MoE intent classification — runs locally with zero latency (strategic, procedural, factual, comparative, diagnostic, etc.)
2. API call to hosted AgenticRAG with intent hint
3. Hybrid search — pgvector cosine + PostgreSQL FTS + Neo4j graph expansion
4. RRF fusion — merges results from all sources
5. KnowledgeCard — tabbed, collapsible results with abstracted category labels
6. Feedback — inline thumbs-up/down + /feedback corrections
7. Local cache — instant replay from .openanalyst/knowledge/

No raw course names exposed — results show "Ads Strategy", "AI & Machine Learning", etc.
Set OPENANALYST_API_KEY=oa_your_key to access.
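Step 4 above refers to Reciprocal Rank Fusion, which scores each document as the sum of 1/(k + rank) over every source list it appears in (k = 60 is the conventional constant). A generic Rust sketch of the formula, not OpenAnalyst's actual implementation:

```rust
// Reciprocal Rank Fusion: merge several ranked lists into one by scoring
// each document as sum over sources of 1 / (k + rank), rank starting at 1.
use std::collections::HashMap;

fn rrf_fuse(rankings: &[Vec<&str>], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for list in rankings {
        for (i, doc) in list.iter().enumerate() {
            // rank is 1-based, so the top hit in a list contributes 1/(k+1)
            *scores.entry(doc.to_string()).or_insert(0.0) += 1.0 / (k + i as f64 + 1.0);
        }
    }
    let mut fused: Vec<(String, f64)> = scores.into_iter().collect();
    fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    fused
}

fn main() {
    let vector = vec!["a", "b", "c"]; // e.g. pgvector cosine ranking
    let fts = vec!["b", "a", "d"];    // e.g. PostgreSQL full-text ranking
    let graph = vec!["b", "c"];       // e.g. Neo4j graph expansion
    let fused = rrf_fuse(&[vector, fts, graph], 60.0);
    println!("top result: {}", fused[0].0); // "b" ranks high in all three lists
}
```

RRF needs only ranks, never raw scores, which is why it fuses cosine similarity, FTS relevance, and graph hops without any score normalization.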
Switch Models Mid-Conversation
Switch to any LLM provider mid-conversation. /model updates the orchestrator config and rebuilds the routing table in real time. Sessions persist across provider boundaries. Start with Claude, continue with GPT-4o, finish with Gemini — all in one session.
Real Permission Enforcement NEW
5-tier permission system: read-only, workspace-write, prompt, full, allow. Permission dialogs are real — the worker thread blocks on a oneshot channel until you Allow or Deny. Deny on timeout. Fail-safe, not fail-open. /permissions prompt to switch modes at runtime.
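A sketch of the fail-safe blocking pattern described here, with std::sync::mpsc standing in for a oneshot channel; types and names are illustrative, not the CLI's real internals:

```rust
// Fail-safe permission wait: the worker blocks until the UI answers, and a
// timeout (or a crashed UI, which disconnects the channel) means Deny.
use std::sync::mpsc;
use std::time::Duration;

#[derive(Debug, PartialEq)]
enum Decision {
    Allow,
    Deny,
}

/// Block on the channel until the dialog answers; deny if nothing arrives.
fn await_permission(rx: mpsc::Receiver<Decision>, timeout: Duration) -> Decision {
    // recv_timeout errs on both timeout and disconnect; both map to Deny,
    // so the tool call can never fail open.
    rx.recv_timeout(timeout).unwrap_or(Decision::Deny)
}

fn main() {
    // Simulated dialog that never answers: the tool call is denied.
    let (_tx, rx) = mpsc::channel::<Decision>();
    let decision = await_permission(rx, Duration::from_millis(50));
    println!("decision: {:?}", decision);
}
```

The key design choice is mapping every error path to Deny: an unresponsive UI, a timeout, or a dropped sender all refuse the tool call rather than allowing it.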
Full Terminal UI
Default mode is a full-screen terminal UI. Highlights:

Scrollable chat — mouse wheel + PageUp/PageDown from any mode
Interactive sidebar — 5 sections (Agents, Files, Plans, Routing, Activity); Tab to focus, j/k to navigate, Enter to activate
Per-category model selection — Enter on a Routing row cycles through every available model from Claude, OpenAI, Gemini, xAI, and more
Permission mode cycling — Ctrl+P cycles Default → Plan → Accept Edits → Danger, with a color-coded input border and right-side badge
Rich diff rendering in tool call cards, animated status bar, vim-mode input
Permission dialogs as real modals; input queue (type while streaming)
Slash command autocomplete, input history, voice input (Space bar)
Dynamic provider detection — auto-selects a model from API keys (ANTHROPIC_API_KEY → claude-sonnet-4-6, OPENAI_API_KEY → gpt-4o)
Input box right-side badges showing [mode] [agent] [branch]; session auto-save
Multi-Agent Orchestrator
Spawn parallel sub-agents for complex tasks. Each agent has its own conversation runtime, tool permissions, and lifecycle. The orchestrator bridges sync execution to the async TUI via channels. Watch agents work in real-time in the sidebar panel.
59 Slash Commands
Session management, git integration, AI planning, multimedia generation, web scraping, code analysis, plugin management, MCP server control, autonomous agents, and more. All fully wired — no stubs. Every command works with every provider. Type /help to see them all.
Multimedia AI + Voice Input NEW
/image — Generate images (Gemini Imagen, DALL-E, Stability AI)
/voice — Transcribe audio files (Whisper, Gemini)
/speak — Text-to-speech (OpenAI TTS)
/vision — Analyze images (Gemini, GPT-4o, Claude)
/diagram — Generate Mermaid diagrams

Live Voice Input — Press Space to activate microphone. Blue VU meter shows audio level. Transcription via Whisper or Gemini. Talk to your code. NEW

Paste & Drag-Drop — Paste file paths, image paths, audio paths, or multi-line text directly into the input box. File types auto-detected: [image], [audio], [video], [document], [file].
/openanalyst — Autonomous Agent NEW
Inspired by Andrej Karpathy’s agent philosophy: simple loop, good model, basic tools, verifiable criteria. The autonomous agent runs a think→act→observe→verify loop without user interaction.

/openanalyst fix all failing tests --criteria "npm test"
/oa refactor auth to async --goal "all async" --criteria "npm run build"
/oa add caching --max-turns 20

Optional params: --goal (description), --criteria (shell command to verify), --max-turns (default 30). Criteria commands have 60s timeout. Turn budget enforced as hard limit.
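A minimal sketch of such a think→act→observe→verify loop with a hard turn budget and a shell-command criterion. It assumes a Unix shell; the think/act/observe steps are stubbed out and the function name is illustrative, not the CLI's real API:

```rust
// Karpathy-style autonomous loop: act, then verify with an external
// command, stopping early on success or when the turn budget runs out.
use std::process::Command;

/// Run up to `max_turns` iterations; succeed once `criteria` exits 0.
fn autonomous_loop(criteria: &str, max_turns: u32) -> bool {
    for turn in 1..=max_turns {
        // think + act + observe would call the model and tools here (omitted)
        println!("turn {turn}: acting...");
        // verify: an external, checkable command decides completion
        let ok = Command::new("sh")
            .arg("-c")
            .arg(criteria)
            .status()
            .map(|s| s.success())
            .unwrap_or(false);
        if ok {
            return true;
        }
    }
    false // turn budget exhausted without meeting the criteria
}

fn main() {
    println!("done: {}", autonomous_loop("true", 3));
}
```

The verifiable criterion is what makes the loop safe to run unattended: success is defined by an exit code the agent cannot fake, and the turn budget bounds cost even when the criterion never passes.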
MCP Protocol + LSP Integration NEW
MCP (Model Context Protocol) — Connect external tool servers via Stdio and HTTP/SSE transports. JSON-RPC 2.0 compliant, proper Content-Length framing, multi-line JSON support. Tools auto-discovered and registered with mcp__ prefix. Execution dispatched to live server processes. /mcp add and /mcp remove manage servers directly.
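
For illustration, Content-Length framing puts a byte-length header and a blank line before each JSON-RPC body, the same base protocol LSP uses over stdio. A generic Rust sketch of one round trip, not OpenAnalyst's code:

```rust
// Content-Length framing for stdio JSON-RPC: header, CRLF CRLF, then the
// body, whose length is given in bytes.
fn frame(body: &str) -> String {
    format!("Content-Length: {}\r\n\r\n{}", body.len(), body)
}

/// Parse one framed message, returning the JSON body if well-formed.
fn unframe(msg: &str) -> Option<&str> {
    let (header, body) = msg.split_once("\r\n\r\n")?;
    let len: usize = header.strip_prefix("Content-Length: ")?.parse().ok()?;
    body.get(..len)
}

fn main() {
    let body = r#"{"jsonrpc":"2.0","method":"tools/list","id":1}"#;
    let framed = frame(body);
    assert_eq!(unframe(&framed), Some(body)); // round-trips cleanly
    println!("{framed}");
}
```

Because the length is in bytes, a multi-line or non-ASCII JSON body is read exactly, which is why framed transports tolerate pretty-printed JSON where line-delimited ones break.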

LSP — Full Language Server Protocol client for diagnostics, go-to-definition, find-references. Tested with real mock servers.

Smart Context Injection — Prompts auto-analyzed for file paths and code keywords. Relevant files injected as context before the API call, reducing tool call round trips.
Git Integration
Commit, branch, PR, issue creation, diff review — all from slash commands. /commit generates a message and commits. /pr drafts a pull request. /diff-review gets AI-powered code review of your changes.
22 Built-in Tools + Plugin System
Bash, file read/write/edit, glob/grep search, web search/fetch, sub-agents, notebooks, REPL, PowerShell, and more. Plus a full plugin system — install, enable, disable, uninstall custom tools with /plugins install <path>. All backed by real PluginManager.
Production-Grade Reliability NEW
Stream timeout — 30s per-event timeout detects hung connections and provider stalls
Retry with backoff — Exponential backoff (1s→2s→4s→8s, max 32s) on transient errors
Zero unsafe code — unsafe_code = "forbid" enforced at workspace level
340+ tests — Unit, integration, edge-case, and mock-server tests across all crates
No panics — All unwrap() in critical paths replaced with proper error handling
Crash recovery — Panic handler restores terminal and saves crash marker. Session auto-saves every 60s
Format-on-save resilience — Detects external file changes between consecutive edits (formatters, linters)
Bracketed paste — Multi-line paste handled as single event, no corruption
Zero silent failures — All async channel sends log on failure instead of silently dropping events
Hook system — 9 events: PreToolUse, PostToolUse, CwdChanged, FileChanged, SessionStart, SessionEnd, TaskCreated, Notification, Stop — all with allow/deny/warn. Manage via /hooks
Custom keybindings — All keys remappable via .openanalyst/keybindings.json
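The retry schedule above (1s doubling to a 32s cap) can be sketched as a pure function of the attempt number; an illustration of the stated policy, not the CLI's actual code:

```rust
// Exponential backoff: 1s, 2s, 4s, 8s, 16s, then capped at 32s.
use std::time::Duration;

/// Delay before retry `attempt` (0-based): 2^attempt seconds, capped at 32s.
fn backoff(attempt: u32) -> Duration {
    let secs = 1u64 << attempt.min(5); // 2^5 = 32s, the stated maximum
    Duration::from_secs(secs)
}

fn main() {
    for attempt in 0..7 {
        println!("attempt {attempt}: wait {:?}", backoff(attempt));
    }
}
```

Production retry loops commonly add random jitter on top of such a schedule so that many clients recovering from the same outage do not retry in lockstep.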
Cross-Platform Native Binary
Single native binary. No Node.js, no Python, no Docker. Native on macOS (Intel + Apple Silicon), Linux (x64 + ARM), and Windows. Fast startup, low memory. ~18MB binary.
59 slash commands
All commands work identically regardless of which LLM provider you use. Every command is fully functional — zero stubs.
Command — Description
/help — Show all available commands
/status — Model, permissions, tokens, cost, session info
/cost — Cumulative token usage and cost breakdown
/model [name] — Show live models or switch mid-session
/clear — Start a fresh session
/compact — Smart compact: collapses tool calls, shows token budget %, suggests /clear when needed
/memory NEW — Show all memory files: instruction files, project memory, user memory with types
/effort [category] [level] NEW — Set thinking effort globally or per-category (explore/research/code/write)
/route [category] [tier] NEW — View or edit the per-action model routing table
/session [list|switch] — List or switch saved sessions
/export [file] — Export conversation transcript
/version — Show CLI version and build info
/resume <path> — Load a saved session
/login NEW — Add or switch provider API keys from the REPL
/logout NEW — Clear all saved credentials
/context NEW — Show context window usage, token counts, model
/vim NEW — Toggle vim keybinding mode
/whoami NEW — Show all logged-in providers and their status
/knowledge <query> NEW — Search the OpenAnalyst knowledge base
/explore <url> NEW — Smart-explore a GitHub repo from its history
/ask <question> NEW — Quick question: no tools, fast response (alias: /btw)
/user-prompt <msg> NEW — Inject a message with full tool access (alias: /up)
Command — Description
/diff — Show git diff of workspace changes
/commit — Generate commit message and commit
/commit-push-pr [ctx] — Commit, push, and open a PR
/pr [context] — Draft or create a pull request
/issue [context] — Draft or create a GitHub issue
/branch [list|create|switch] — Manage git branches
/worktree [list|add|remove] — Manage git worktrees
/teleport <symbol> — Jump to a file or symbol
/diff-review [file] NEW — AI-powered review of git diff
Command — Description
/image <prompt> NEW — Generate image (Gemini Imagen / DALL-E / Stability)
/voice <file> NEW — Transcribe audio/video (Whisper / Gemini)
/speak <text> NEW — Text-to-speech audio (OpenAI TTS)
/vision <image> [prompt] NEW — Analyze image (Gemini / GPT-4o / Claude)
/diagram <desc> NEW — Generate Mermaid diagram
/translate <lang> <text> NEW — Translate text to any language
/tokens [text] NEW — Estimate token count
/scrape <url> [selector] NEW — Fetch URL and extract text
/json <url> NEW — Fetch JSON API and pretty-print
Command — Description
/bughunter [scope] — Scan codebase for likely bugs
/ultraplan [task] — Deep multi-step planning with reasoning
/debug-tool-call — Replay last tool call with debug output
/think [prompt] NEW — Force extended thinking for the next response
/changelog [since] NEW — Generate changelog from git log via AI
/doctor NEW — Diagnose installation, provider keys, MCP, workspace
/openanalyst <task> NEW — Autonomous agent: think→act→observe→verify loop
/swarm <task> NEW — Spawn a swarm of parallel agents for complex tasks
Command — Description
/config [section] — Inspect config (env, hooks, model, plugins)
/memory — Show loaded OPENANALYST.md files
/init — Create starter OPENANALYST.md
/permissions [mode] — Show or switch permission mode
/agents — List configured agents
/skills — List available skills
/plugin [list|install|enable|disable] — Manage plugins
/mcp [list|restart|add] NEW — Manage MCP servers
/add-dir <path> NEW — Add directory tree to conversation context
Hooks
PreToolUse NEW — Runs before each tool execution; can allow, deny, or warn
PostToolUse NEW — Runs after each tool; format-on-save resilience built in
CwdChanged NEW — Fires when the working directory changes (direnv, monorepo navigation)
FileChanged NEW — Fires when a file is modified externally after write/edit
SessionEnd NEW — Fires on session close for cleanup, logging, or archival
TaskCreated NEW — Fires when a task is created, for external tracking systems
SessionStart NEW — Fires when a session begins; initialize environment, logging
Notification NEW — Fires when the agent wants to notify the user externally
Stop NEW — Fires when the agent stops execution; post-run cleanup
/hooks [list|add|remove|test] NEW — Manage hooks interactively from the TUI
Command — Description
/dev install NEW — Install Playwright for browser automation
/dev open <url> — Navigate browser to URL
/dev screenshot — Capture screenshot of current page
/dev snap — Snapshot accessibility tree
/dev click <ref> — Click element by a11y reference
/dev type <ref> <text> — Type text into element
/dev test <desc> — Generate Playwright test via AI
/dev codegen — Record browser actions as test code
/dev stop — Close browser session
/dev status — Show Playwright version and state
22 built-in tools
The AI agent uses these automatically during conversations. Works with every provider. Plus MCP protocol for unlimited external tools.
Bash — Execute shell commands and scripts
ReadFile — Read files with line numbers and offsets
WriteFile — Create new files with content
EditFile — Precise string replacements
GlobSearch — Find files by pattern
GrepSearch — Search file contents with regex
WebSearch — Search the web for answers
WebFetch — Fetch content from URLs
Agent — Spawn sub-agents for parallel tasks
TodoWrite — Track tasks within a session
NotebookEdit — Edit Jupyter notebook cells
Skill — Invoke registered skills
ToolSearch — Search for specialized tools
REPL — Execute code in a subprocess
PowerShell — Execute PowerShell on Windows
Config — Get/set settings programmatically
Sleep — Wait between operations
StructuredOutput — Return structured JSON
SendMessage — Send messages during execution
Architecture
Modular, testable, and shipped as a single binary with no runtime dependencies. Built with ecosystem crates, not reinvented.
src/modules/
api/                 # Multi-provider API client (7 providers)
commands/            # 59 slash commands
events/              # Shared TUI ↔ backend event types
orchestrator/        # Multi-agent lifecycle + channel bridge
tui/                 # Full-screen TUI application
tui-widgets/         # Markdown, tool cards, input, spinner
runtime/             # Conversation engine, session, MCP
tools/               # 19 built-in tool implementations
plugins/             # Plugin system (install, enable, hooks)
openanalyst-cli/     # Binary entry point
openanalyst-agent/   # Headless autonomous agent runner
server/              # HTTP/SSE server (axum)
lsp/                 # Language Server Protocol integration
compat-harness/      # Upstream manifest extraction
Ecosystem Crates
tui-markdown — Markdown rendering
Vim Engine — Built-in modal editor
tui-tree-widget — File tree
throbber-widgets-tui — Spinner
syntect-tui — Syntax highlighting
Key Metrics
14 crates in workspace
7 LLM providers
59 slash commands
22 built-in tools
340+ tests
MIT license
Project configuration
OPENANALYST.md — Project-specific AI instructions (auto-detected in parent dirs)
.openanalyst.json — Shared project defaults (permissions, model, tools)
.openanalyst/settings.json — Project-level settings
.openanalyst/settings.local.json — Machine-local overrides (gitignored)
~/.openanalyst/.env — Provider API keys and environment config (auto-created)
~/.openanalyst/credentials.json — Saved provider API keys
.openanalyst/sessions/ — Saved conversation sessions
.openanalyst/skills/ — Custom skill definitions
.openanalyst/commands/ — Custom slash commands
Credits & Trademarks

OpenAnalyst CLI is an independent, open-source product by OpenAnalyst Inc.

All provider trademarks belong to their respective owners:
Claude — Anthropic, PBC · GPT, DALL-E, Whisper, Codex — OpenAI, Inc · Gemini, Imagen — Google LLC · Grok — xAI Corp · Bedrock — Amazon Web Services, Inc · OpenRouter — OpenRouter, Inc

Use of these providers' APIs is subject to each provider's Terms of Service. OpenAnalyst CLI facilitates access to these APIs — it does not claim ownership of any provider's services, models, or intellectual property. Licensed under MIT.