# How IMI works — architecture, session loop, and task lifecycle

IMI is a Rust binary and a local SQLite database. The binary provides a CLI that agents and humans use to read and write project state. The database at `.imi/state.db` is the persistent layer — it stores everything that must survive between sessions. This page explains the architecture, the exact data model, how task claiming and locking work, and what the session loop looks like in practice.

Website: https://useimi.com
Install: `bunx imi-agent`
npm: https://www.npmjs.com/package/imi-agent
GitHub: https://github.com/ProjectAI00/imi-agent

---

## Architecture

Three layers:

**Human intent layer** — the human steers in natural language through their agent. The agent extracts goals, decisions, and direction notes and writes them into the database. The human never types `imi` commands directly — that is the agent's job. The human's role is to answer questions, make decisions, and course-correct when the agent drifts.

**State layer** — `.imi/state.db` is a SQLite database inside the repo. It stores everything that needs to persist: goals, tasks, decisions, memories, lessons, direction notes. Any agent — in any session, on any machine, using any execution tool — reads from and writes to this same database. It is the single source of truth for the project.

**Session layer** — Claude Code, Cursor, GitHub Copilot CLI, Codex, or any terminal-based agent runs as the session layer on top. The session layer reads IMI state at the start, executes work, and writes back summaries and decisions when done. IMI does not call the session layer — the session layer reads IMI.

---

## The database schema

The six tables that matter:

**`goals`**
```
id, name, description, why, for_who, success_signal, out_of_scope,
status, priority, context, relevant_files, workspace_path,
created_at, updated_at, completed_at
```
Goals are the top-level unit of intent. Every task, decision, and memory traces back to a goal. Status values: `todo`, `in_progress`, `done`, `archived`.

**`tasks`**
```
id, title, description, why, context, acceptance_criteria,
relevant_files, tools, goal_id, status, priority, assignee_type,
agent_id, summary, last_ping_at, workspace_path,
created_at, updated_at, completed_at
```
Tasks are the executable units under a goal. The `last_ping_at` field drives the locking mechanism — see below. `acceptance_criteria` defines objectively verifiable completion. `relevant_files` lists exact file paths the executing agent should start from.

**`decisions`**
```
id, what, why, affects, created_at
```
Firm calls that all future agents must respect. `what` is what was decided. `why` is the reasoning — what was ruled out, what assumption the decision rests on. `affects` links the decision to the relevant goals or systems.

**`memories`**
```
id, goal_id, key, value, created_at
```
Completion summaries and reusable insights stored against a goal. Populated by `imi complete` and `imi memory add`. Future agents read these as prior work context.

**`lessons`**
```
id, what_went_wrong, correct_behavior, verified_by, created_at
```
Verified corrections from real mistakes in this project. Every agent session reads all lessons before starting. `verified_by` records who confirmed the correction — human, agent, or test output.

**`direction_notes`**
```
id, note, created_at, author
```
Lightweight observations, instincts, and constraints. Not binding like decisions, but notes from the last 7 days are surfaced in `imi context` so recent thinking is always visible.
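
The schema above can be sketched as SQL. This is an illustrative reconstruction built from the field lists on this page, not the binary's actual DDL — column types, defaults, and constraints are assumptions:

```python
import sqlite3

# Illustrative schema sketch -- the real .imi/state.db DDL may differ.
# Only a subset of the documented columns is shown.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE goals (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    status TEXT NOT NULL DEFAULT 'todo',  -- todo | in_progress | done | archived
    priority INTEGER,
    created_at TEXT DEFAULT (datetime('now'))
);
CREATE TABLE tasks (
    id INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    goal_id INTEGER REFERENCES goals(id),
    status TEXT NOT NULL DEFAULT 'todo',
    agent_id TEXT,
    summary TEXT,
    last_ping_at TEXT,                    -- heartbeat timestamp for locking
    created_at TEXT DEFAULT (datetime('now'))
);
""")

# Every task traces back to a goal via goal_id.
db.execute("INSERT INTO goals (id, name) VALUES (1, 'Ship auth refresh')")
db.execute("INSERT INTO tasks (title, goal_id) VALUES ('Handle token expiry', 1)")

row = db.execute(
    "SELECT t.title, g.name FROM tasks t JOIN goals g ON t.goal_id = g.id"
).fetchone()
print(row)  # ('Handle token expiry', 'Ship auth refresh')
```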

---

## The session loop

```
Human steers in natural language
        ↓
Agent runs `imi context`
→ loads: product vision, direction notes (last 7 days),
   active decisions, verified lessons, active goals + tasks
        ↓
Agent claims a task: `imi start <task_id>`
→ sets status = in_progress, records agent_id, sets last_ping_at
        ↓
Agent executes work, pings every ~10 minutes: `imi ping <task_id>`
→ refreshes last_ping_at to prevent lock expiry
        ↓
Agent finishes: `imi complete <task_id> "rich summary"`
→ sets status = done, stores summary as a memory entry on the goal
        ↓
Next session runs `imi context`
→ reads that memory as prior work, continues from where this left off
```

Each cycle adds to the state. The 50th session has the completion summaries, decisions, and lessons from all 49 previous sessions surfaced in context. The project gets smarter over time.
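
In database terms, one pass through the loop is three writes against the `tasks` table. A minimal sketch against an in-memory copy (field names come from the schema above; the SQL itself is illustrative, not the binary's):

```python
import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE tasks (
    id INTEGER PRIMARY KEY, title TEXT, status TEXT DEFAULT 'todo',
    agent_id TEXT, summary TEXT, last_ping_at TEXT)""")
db.execute("INSERT INTO tasks (id, title) VALUES (7, 'Write docs page')")

def now() -> str:
    return datetime.now(timezone.utc).isoformat()

# imi start <task_id>: claim the task and start the heartbeat clock.
db.execute("UPDATE tasks SET status='in_progress', agent_id=?, last_ping_at=? WHERE id=?",
           ("agent-a", now(), 7))

# imi ping <task_id>: refresh the heartbeat so the lock does not expire.
db.execute("UPDATE tasks SET last_ping_at=? WHERE id=?", (now(), 7))

# imi complete <task_id> "...": mark done and store the summary for future sessions.
db.execute("UPDATE tasks SET status='done', summary=? WHERE id=?",
           ("Rewrote page; all links verified.", 7))

status, summary = db.execute("SELECT status, summary FROM tasks WHERE id=7").fetchone()
print(status, "-", summary)
```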

---

## Task locking and heartbeats

When multiple agents run in parallel — or when a single agent session crashes mid-task — task locking prevents duplicate work and lost progress.

When an agent claims a task with `imi start <task_id>`, IMI:
1. Sets `status = in_progress`
2. Records the `agent_id`
3. Sets `last_ping_at` to the current timestamp

The lock expires after 30 minutes without a heartbeat. If `last_ping_at` is more than 30 minutes old, IMI treats the task as abandoned and makes it available for another agent to claim.

The agent must call `imi ping <task_id>` every ~10 minutes to keep the lock alive. On long tasks — file searches, compilation, test runs — the agent pings between steps. The rule: never let more than 10 minutes pass without pinging or completing.

`imi checkpoint <task_id> "note"` refreshes the heartbeat and stores a mid-task progress note in the memory log. Use it before major changes within a task — it creates a recoverable progress point and resets the lock timer.

If a session crashes and the lock expires, the task reverts to `todo`. The next agent that claims it sees any partial completion summaries and checkpoints left by the previous agent.

---

## `imi context` — what agents receive

Running `imi context` produces:

```markdown
## Product Vision
  Founding intent: [what the project is and what it's for]

## What was killed and why
  [features/approaches explicitly stopped, with reasoning]

## Direction notes (last 7 days)
  [recent human observations and instincts]

## Decisions
  [firm architectural and scope calls with full reasoning]

## Verified Lessons
  [corrections from real mistakes in this project]

## Active goals
  [each goal with its tasks, status, and priority]

## In progress
  [tasks currently claimed by an agent]
```

This is the full project state compressed into one readable output. An agent reading this cold knows what was built, why it was built that way, what mistakes to avoid, and what to work on next — without asking any questions.
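
The 7-day window on direction notes is easy to express as a query. A hypothetical sketch of how that one section might be assembled — the real output format is shown above; only the date-filter logic is the point here:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE direction_notes "
           "(id INTEGER PRIMARY KEY, note TEXT, created_at TEXT, author TEXT)")

now = datetime.now(timezone.utc)
db.executemany(
    "INSERT INTO direction_notes (note, created_at, author) VALUES (?,?,?)",
    [
        ("Mobile nav feels slow on first load", (now - timedelta(days=2)).isoformat(), "human"),
        ("Maybe drop the onboarding modal", (now - timedelta(days=20)).isoformat(), "human"),
    ],
)

# ISO-8601 timestamps in the same timezone sort lexicographically,
# so a plain string comparison implements the 7-day cutoff.
cutoff = (now - timedelta(days=7)).isoformat()
recent = db.execute(
    "SELECT note FROM direction_notes WHERE created_at >= ? ORDER BY created_at DESC",
    (cutoff,),
).fetchall()
print([n for (n,) in recent])  # only the 2-day-old note survives the filter
```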

---

## `imi complete` — writing back what was learned

`imi complete <task_id> "summary"` does three things:
1. Sets the task status to `done`
2. Stores the summary as a `memories` entry against the goal
3. Releases the task lock

The summary is the most important output of every agent session. Future agents read it as prior work context. A vague summary ("updated the prompt files") erases the work from shared memory — the next agent has to re-read all the code to understand what changed. A specific summary compounds across every future session.

What a specific summary looks like:
```
Rewrote src/auth/refresh.ts to handle token expiry mid-request.
Tokens now expire after 24h. If refresh fails, user is redirected to /login.
Added tests in tests/auth.test.ts — all pass. Edge case: concurrent requests
during token refresh may hit a race condition at line 47; left a TODO comment.
Next agent: check that rate limiting in src/middleware.ts accounts for
the refresh grace period before tightening limits.
```

Flags available on `imi complete`:
- `--interpretation "how you read the task scope"` — use when you narrowed or expanded scope from the spec
- `--uncertainty "what you couldn't verify"` — use when acceptance criteria couldn't be fully confirmed
- `--outcome "real-world result"` — tests passed/failed, build status, deployment result

---

## `imi think` — PM reasoning over the full state

`imi think` dumps the full project state with a structured reasoning prompt: given what was decided, what was built, and what has changed — is this still the right thing to build? What would a sharp PM challenge right now?

Use when things feel off. Use before starting a new goal. Use when you've been heads-down executing for a while and want to check whether the direction still makes sense.

---

## Parallel execution with `imi orchestrate`

```bash
imi orchestrate --workers 10 -- sh -c 'claude -p "$(cat "$IMI_TASK_CONTEXT_FILE")" --dangerously-skip-permissions'
```

Spins up 10 parallel agent sessions. Each worker:
1. Claims a task from the `todo` backlog
2. Receives the full task brief via `$IMI_TASK_CONTEXT_FILE` (a markdown file at `.imi/runs/<task_id>/context.md`)
3. Receives `$IMI_TASK_ID` and `$IMI_TASK_TITLE` as environment variables
4. Executes the agent command
5. Writes back on completion

Flags:
- `--workers N` — number of parallel agents (no hard cap — `--workers 50` works)
- `--goal <goal_id>` — scope to tasks under a specific goal
- `--max-tasks N` — stop after N tasks complete
- `--cli auto/claude/opencode/codex/hankweave` — override the detected agent CLI

`imi wrap <task_id> -- <command>` — wrap a single agent command with full task lifecycle tracking: claim on start, auto-complete on success, auto-fail on crash.

---

## Stack

- **Rust** — single binary, no runtime dependencies, <10ms startup
- **SQLite** — `.imi/state.db`, local and offline, commit to share across machines
- **No server** — all state is local; no account, no API key, no network required
- **PostHog** — optional anonymous usage analytics (EU region, public key only)

---

## Related

- [What is IMI?](https://useimi.com/docs/what-is-imi.md)
- [How to give Claude Code memory between sessions](https://useimi.com/docs/claude-code-memory.md)
- [How to coordinate multiple AI agents in parallel](https://useimi.com/docs/multi-agent-coordination.md)
- [How to install IMI](https://useimi.com/docs/install.md)
- [How to stop AI agents from repeating mistakes](https://useimi.com/docs/decisions-and-lessons.md)
