# CLI Reference
The `@locusai/cli` package provides the core tools for managing your local-first agent workspace.
## Installation

```bash
npm install -g @locusai/cli

# or use via npx
npx @locusai/cli <command>
```
## Commands

### init

Initializes Locus in the current directory.

```bash
locus init
```
**What it does:**

- Creates a `.locus` directory.
- Creates `.locus/config.json` with project configuration.
- Creates a `CLAUDE.md` context file if it doesn't exist.
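After initializing, you can confirm the files listed above were created. A quick sketch (inspection commands only; output depends on your project):

```bash
locus init

# Inspect the generated files
ls -a .locus
cat .locus/config.json
```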
**When to use:**
- When setting up Locus for the first time in a repository.
### index

Indexes your codebase to create a semantic map for the AI agent.

```bash
locus index [options]
```

**Options:**

- `--dir <path>`: Specify the directory to index (defaults to the current directory).
- `--provider <name>`: AI provider to use (`claude` or `codex`; default `claude`).
- `--model <name>`: Model override for the chosen provider.
**What it does:**

- Scans your project files.
- Generates a tree summary and semantic index.
- Saves the index to `.locus` for efficient agent retrieval.
**When to use:**
- After significant code changes.
- Before running an agent if the codebase has changed.
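For example, re-indexing after significant changes while scoping to a subdirectory and overriding the provider might look like this (the `./src` path is illustrative):

```bash
locus index --dir ./src --provider codex
```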
### run

Starts the autonomous agent orchestrator to work on tasks.

```bash
locus run [options]
```

**Options:**

- `--api-key <key>`: (Required) Your Locus API key.
- `--workspace <id>`: (Optional) Your Locus Workspace ID. Usually resolved automatically from the API key.
- `--sprint <id>`: (Optional) Limit work to a specific sprint.
- `--provider <name>`: (Optional) AI provider to use (`claude` or `codex`; default `claude`).
- `--model <name>`: (Optional) AI model to use (defaults to `sonnet` for `claude`).
- `--api-url <url>`: (Optional) Custom API endpoint.
**What it does:**
- Connects to the Locus Cloud to fetch tasks.
- Spawns a local agent worker.
- Executes tasks, runs tests, and commits changes.
- Reports progress back to the Cloud dashboard.
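Since only the API key is required, a minimal invocation and a sprint-scoped one might look like this (the environment variable name and `<sprint-id>` are placeholders):

```bash
# Minimal invocation: only the API key is required
locus run --api-key "$LOCUS_API_KEY"

# Scope work to a single sprint with an explicit provider and model
locus run --api-key "$LOCUS_API_KEY" --sprint <sprint-id> --provider claude --model sonnet
```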
### exec

Runs a direct prompt with full repository context. Supports single execution, interactive REPL sessions, and session management.

```bash
locus exec "your prompt" [options]
locus exec --interactive
locus exec sessions <subcommand>
```

**Options:**

- `--provider <name>`: AI provider to use (`claude` or `codex`; default `claude`).
- `--model <name>`: Model override for the chosen provider.
- `--dir <path>`: Directory context for execution (defaults to the current directory).
- `--interactive` / `-i`: Start an interactive REPL session.
- `--session <id>` / `-s`: Resume an existing session.
- `--no-stream`: Disable streaming output.
**What it does:**
- Gathers project context (CLAUDE.md, README.md, project structure, skills).
- Includes the codebase index if available.
- Sends the prompt to the AI runner.
- Executes tools (bash, read, write, edit, grep, glob, etc.) locally to fulfill the prompt.
- Streams real-time output with tool execution progress.
**When to use:**
- When you want to ask questions about the codebase.
- When you need the AI to perform a one-off task (e.g., "Refactor this file", "Explain this logic").
- When you don't want to create a formal task in the Locus Cloud.
- When you need multi-turn conversations with context preservation.
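Putting the flags above together, a few representative invocations (the prompt text is from the examples above; `<session-id>` is a placeholder):

```bash
# One-off prompt with full repository context
locus exec "Explain this logic"

# Interactive REPL with streaming output disabled
locus exec --interactive --no-stream

# Resume a previous multi-turn session
locus exec --session <session-id> "Continue where we left off"
```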
> **Note:** For comprehensive documentation including interactive mode, session management, and advanced usage, see the Exec Command guide.
