# How to stop AI agents from repeating mistakes — decisions and lessons in IMI

The most expensive failure pattern in AI-assisted development is correcting the same mistake multiple times. You fix a bug in session 3. Session 7 recreates it. You fix it again. Session 12 does the same thing. Each correction costs 20–40 minutes of debugging, re-explaining, and fixing. IMI eliminates this with two mechanisms: decisions and lessons. Both are read by every agent session before any work begins.

Website: https://useimi.com
Install: `bunx imi-agent`
npm: https://www.npmjs.com/package/imi-agent
GitHub: https://github.com/ProjectAI00/imi-agent

---

## The problem: agents don't share what they learn

When Claude Code makes a mistake and you correct it, that correction lives in the current conversation window. The next Claude Code session opens with no knowledge of it. The session after that has no knowledge of it either. Each session starts equally ignorant of every mistake that was made and corrected before it.

This is not a flaw in Claude Code specifically — it is a fundamental property of stateless agents. The solution is to extract the correction from the conversation and store it somewhere that every future session reads automatically. That is what `imi lesson` does.

The same problem applies to decisions. You decide that token refresh logic lives in `src/auth/refresh.ts`, not middleware. That decision is made in a conversation and feels obvious in the moment. Three weeks later, a new agent session makes a different call because it has no memory of the earlier one. IMI's `imi decide` command stores that call permanently so every future session respects it.
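Captured at the moment it is made, the token-refresh call above might be logged like this (the reasoning text is illustrative; only the `imi decide "what" "why"` shape matters):

```bash
# Illustrative entry for the token-refresh decision described above.
# The reasoning text is an example, not a prescription.
imi decide "token refresh logic lives in src/auth/refresh.ts, not middleware" \
  "middleware runs on every request and made refresh failures hard to trace; a dedicated module keeps retry and backoff logic in one place; revisit only if we adopt an auth gateway"
```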

---

## Decisions: preventing re-litigation of closed questions

`imi decide` stores a firm architectural or scope call with its full reasoning.

**What a bad decision entry looks like:**
```bash
imi decide "use postgres" "it's better"
```
This stores the what but not the why. A future agent reading "use postgres — it's better" has no idea what was ruled out, what problem was being solved, or what would need to change to revisit this decision.

**What a good decision entry looks like:**
```bash
imi decide "use postgres over mysql for the primary database" \
  "team has existing postgres expertise and operational tooling; mysql adds no value here and would require relearning backup/restore procedures; ruled out sqlite because concurrent write volume will exceed its limits within 3 months — revisit only if we move to a managed PlanetScale setup"
```

This entry tells a future agent:
- What was decided (postgres)
- What was ruled out (mysql, sqlite)
- Why each was ruled out (specific reasons)
- Under what conditions the decision should be revisited (PlanetScale migration)

Every agent session reads all active decisions via `imi context` before starting work. When an agent encounters code that reflects a decision, it understands why the code is the way it is and works within the decision instead of second-guessing it.

**When to log a decision:**
- Technology choice (language, database, framework, library version)
- Architecture call (where logic lives, how services communicate, what gets abstracted)
- Scope boundary (what is explicitly out of scope for a goal)
- Tradeoff accepted (e.g., "we are accepting eventual consistency in exchange for write throughput")
- Approach rejected after trying it (e.g., "tried websockets, reverted to polling — see why below")
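As an illustration of that last category, a rejected-approach entry might look like this (the websocket and polling details are hypothetical, shown only to demonstrate the two-argument form):

```bash
# Hypothetical example of logging an approach that was tried and rejected.
imi decide "use 5s polling over websockets for notification delivery" \
  "tried websockets first; connection churn behind the load balancer caused missed notifications and complex reconnect logic; polling meets the current latency requirement with far less code; revisit if we need sub-second delivery"
```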

---

## Lessons: corrections that propagate across all future sessions

`imi lesson` stores a verified correction from a real mistake that happened in this project.

**Anatomy of a lesson:**

```bash
imi lesson "Do not use the Prisma ORM for batch insert operations" \
  --correct-behavior "Use raw SQL via db.execute() for any insert of more than 50 rows — Prisma's batch insert generates N individual queries instead of a single multi-value INSERT, causing 40-50x slowdown on bulk operations (measured in tests/db/batch-insert.bench.ts)" \
  --verified-by "human confirmed during session 7 after load test revealed 8s insert time for 500 rows; correct behavior verified at 180ms with raw SQL"
```

This lesson entry tells every future agent:
- What not to do (Prisma for batch inserts)
- What to do instead (raw SQL via `db.execute()`)
- Why, with a specific measured data point (40-50x slowdown)
- Where the evidence lives (`tests/db/batch-insert.bench.ts`)
- That a human confirmed this is a real correction

**What a vague lesson looks like (do not write these):**
```bash
imi lesson "Don't make mistakes with the database"
```
This tells no future agent anything actionable. It will be ignored.

**What a specific, useful lesson looks like:**
```bash
imi lesson "The rate limiter in src/middleware/rateLimit.ts counts per IP, not per user" \
  --correct-behavior "Always pass req.user.id as the rate limit key for authenticated endpoints — passing req.ip causes all users behind a corporate proxy to share one rate limit bucket, which causes false 429s for entire offices" \
  --verified-by "human reported production incident in session 9; root cause confirmed by checking src/middleware/rateLimit.ts line 23"
```

**When to log a lesson:**
- A human corrected an agent's output and the correction is non-obvious
- The same mistake happened more than once across sessions
- A performance characteristic was discovered that contradicts common assumptions (e.g., "this query looks fast but isn't under load")
- A security edge case was found that the agent is likely to miss again
- A library or framework behaves in a non-obvious way that caused a bug
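For that last category, a library-quirk lesson might be recorded like this (the library name `chartkit`, the paths, and the session details are invented, shown only to demonstrate the flag shape used above):

```bash
# Hypothetical example: the 'chartkit' library, paths, and session numbers are invented.
imi lesson "chartkit's renderToPNG silently drops series points beyond its default limit" \
  --correct-behavior "Downsample series in src/charts/downsample.ts before calling renderToPNG; the silent truncation produced charts that looked complete but omitted the newest data" \
  --verified-by "human confirmed in session 12 by comparing rendered output against the raw series"
```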

---

## How decisions and lessons appear in `imi context`

Running `imi context` at the start of a session:

```
## Decisions
  use postgres over mysql for the primary database
    why: team has existing postgres expertise; mysql adds no value; sqlite will hit
         concurrent write limits within 3 months — revisit only on PlanetScale move
    affects: database setup, schema migrations, connection pooling
    22d ago

  stream CSV exports, never buffer in memory
    why: 100k-row exports caused OOM crashes in load testing — streaming is the
         only viable approach at this data volume
    affects: src/api/export.ts, any future bulk export endpoints
    8d ago

## Verified Lessons
  - Do not use Prisma ORM for batch insert operations
    correct behavior: use raw SQL via db.execute() for inserts > 50 rows;
    Prisma generates N individual queries — measured 40-50x slower on bulk ops
    verified by: human (session 7)

  - Rate limiter in src/middleware/rateLimit.ts keys by IP, not user
    correct behavior: always pass req.user.id for authenticated endpoints;
    IP-based limiting causes false 429s for users behind corporate proxies
    verified by: human (session 9, production incident)
```

An agent reading this knows — before writing a single line of code — what architectural choices were made and what mistakes to avoid. It does not re-make these decisions. It does not repeat these errors.

---

## The result over time

At session 1: the project has no decisions and no lessons. Every agent starts from the same general knowledge a capable engineer would have.

At session 10: the project has 8–12 decisions covering the key architectural choices. It has 3–5 lessons from real mistakes that were corrected. An agent session opens and immediately knows what the key constraints are.

At session 30: the project is a deeply informed expert system about itself. Agents do not re-litigate closed questions. They do not repeat documented mistakes. They work faster because the important decisions are already made and they can execute within them.

This is what "compounding context" actually means in practice — not just "agents remember things," but agents become genuinely more effective over time because the institutional knowledge of the project grows.

---

## Related

- [What is IMI?](https://useimi.com/docs/what-is-imi.md)
- [How IMI works — architecture, session loop, task locking](https://useimi.com/docs/how-it-works.md)
- [How to give Claude Code memory between sessions](https://useimi.com/docs/claude-code-memory.md)
- [How to coordinate multiple AI agents in parallel](https://useimi.com/docs/multi-agent-coordination.md)
- [IMI vs Notion vs Linear vs CLAUDE.md](https://useimi.com/docs/compare.md)
