Persistent memory for AI agents — without a vector database.
Give your AI agent files it can write to, read from, and grep across — using standard Unix commands. The agent decides what's worth remembering and organizes it however it likes. Memory persists across sessions — no embeddings, no chunking, no vector index.
Agents forget
between sessions.
Every time a new conversation starts, the agent boots up context-blank. It re-asks questions you already answered, forgets what it just learned, and re-derives conclusions it reached yesterday.
Most teams reach for a vector database — embed every message, chunk every doc, index everything, retrieve at query time. But for the kind of memory an agent writes about itself (preferences, decisions, working state), that pipeline is overkill. The agent already knows where to look. It just needs a place to put the note.
Filesystem-as-memory.
The agent has one bash tool. Its workspace persists across sessions. It writes notes when it learns something, reads them back at the start of the next conversation, and uses grep to surface only what's relevant — without dragging the whole memory into context.
Write what you learn
The agent appends to its own files between sessions. Plain text. No schema. No format the LLM has to conform to.
from trove_sdk import TroveClient
trove = TroveClient(api_key="trove-sk-...", namespace="alice")
def bash(cmd: str) -> str:
return trove.exec(cmd)
# Agent decides what to remember and how to organize it
bash("mkdir -p workspace/memory")
bash("echo 'User prefers bullet points' >> workspace/memory/style.md")
bash("echo 'Last review: AAPL 10-K 2024' >> workspace/memory/context.md")
bash("echo 'EBITDA target: 21%' >> workspace/memory/constraints.md")Read it back
Next session, the agent reads its own notes before deciding what to do. Cat the whole memory or grep for the specific fact — only what the agent reads enters context.
# Next session — agent reads its own notes before starting
state = bash("cat workspace/memory/*.md")
print(state)
# User prefers bullet points
# Last review: AAPL 10-K 2024
# EBITDA target: 21%
# Or grep for what's relevant — only that enters context
recent_review = bash("grep -i 'review' workspace/memory/context.md")
Hand off between sessions
The agent leaves a handoff note at workspace/.trove/agent.md on the way out. trove.bootstrap() surfaces it on the next session along with the recent file list — pipe the rendered block into the model's system prompt and the agent skips the "what is this namespace?" probing turn.
# At the end of a session — leave a handoff note for the next instance
trove.write("workspace/.trove/agent.md", """## What I learned
- AAPL Q3 EBITDA missed target by 1.2pts; investigate cost-of-revenue line
- User prefers a bullet-list summary, not narrative paragraphs
- Cache primed at workspace/.cache/aapl-10k.txt — reuse, don't redownload
""")
# Next session — one call returns recent files, init.sh, and the handoff note.
# Pipe straight into the system prompt and the agent orients before its
# first tool call instead of probing with ls + cat.
bs = trove.bootstrap()
system_prompt += bs.as_system_prompt_block()
# <workspace>
# namespace: alice
# files: 12; last edited 2026-05-07T20:00:00Z
# recent: workspace/memory/context.md, workspace/cache/aapl-10k.txt, ...
# last_session: |
# ## What I learned
# - AAPL Q3 EBITDA missed target by 1.2pts; investigate cost-of-revenue line
# ...
# </workspace>
Filesystem memory
or vector database?
Filesystem (TroveFiles) wins when…
- The agent is writing memory about itself (state, preferences, working notes).
- You need exact recall — "what was the EBITDA target?"
- Determinism matters more than fuzzy similarity.
- You want to inspect, edit, and version the memory by hand.
- You don't want embedding cost on every write.
A vector database wins when…
- You need semantic search — "find conversations that felt like X."
- The corpus is large and unindexable by keyword (millions of docs).
- Queries cross natural-language boundaries the agent can't formulate as grep.
- You need similarity ranking, not exact matches.
Many teams use both: TroveFiles for the agent's self-written memory and working state, a vector database for semantic search over a large external corpus. See our deeper comparison: TroveFiles vs. vector databases.
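A minimal sketch of the hybrid setup, assuming the trove_sdk client from the examples above; chromadb stands in here as an illustrative vector store (any embedding store plays the same role):
import chromadb
from trove_sdk import TroveClient
trove = TroveClient(api_key="trove-sk-...", namespace="alice")
corpus = chromadb.Client().create_collection("filings")  # external corpus, illustrative
# Self-written memory: exact, deterministic, no embedding cost on write
trove.exec("echo 'EBITDA target: 21%' >> workspace/memory/constraints.md")
target = trove.exec("grep 'EBITDA target' workspace/memory/constraints.md")
# Large external corpus: semantic similarity the agent can't phrase as grep
corpus.add(ids=["aapl-10k-2024"], documents=["...full filing text..."])
hits = corpus.query(query_texts=["margin pressure from cost of revenue"], n_results=3)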
AI agent memory,
answered.
How does AI agent memory work without a vector database?
The agent writes plain-text notes to its workspace and reads them back the next session — typically with cat, grep, or head. No embeddings or vector index are needed for keyword-style recall, which covers the majority of "what did I learn last time" memory patterns.
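The whole mechanism fits in two calls, reusing the bash helper defined above:
# One session: write the note
bash("echo 'Decided: weekly summary every Friday' >> workspace/memory/decisions.md")
# A later session: recall by keyword, no embedding step in between
answer = bash("grep -i 'weekly summary' workspace/memory/decisions.md")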
When should I use a vector database alongside TroveFiles?
Use a vector database when you need semantic similarity beyond keyword matching — for example, "find conversations that felt like X" rather than "find messages containing X." For memory that the agent itself writes (state, preferences, checkpoints), filesystem retrieval is faster, deterministic, and free of embedding cost.
How big can an AI agent's memory grow?
There is no fixed cap. The agent can keep writing as long as it organizes content into files it can later grep or cat selectively. Files do not consume LLM context until the agent explicitly reads them — so a 10 MB memory directory still costs zero tokens until grep finds something relevant.
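For instance, reusing the bash helper from the examples above (paths and patterns are illustrative), the agent can search the whole tree but pull back only the matching slice:
# List which files mention the topic, without reading any of them
files = bash("grep -ril 'ebitda' workspace/memory/")
# Pull back just the matching lines, not the whole file
fact = bash("grep 'EBITDA target' workspace/memory/constraints.md")
# Bound what enters context even when a single file has grown large
recent = bash("tail -n 20 workspace/memory/context.md")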
Does memory persist across model changes or new sessions?
Yes. The workspace is independent of the LLM session. Switch models, restart processes, or invoke from a different machine — every file the agent wrote is still there. Memory survives anything but explicit deletion.
How does the next session find the right memory to load?
Call client.bootstrap() once on session start. It returns the recent file list, the active init.sh prelude, and the previous session's handoff note (workspace/.trove/agent.md). Pipe bs.as_system_prompt_block() into the model's system message and the agent orients before its first tool call — no ls/cat probing turn. The handoff file itself is just markdown the agent wrote with the normal client.write(); the runtime doesn't parse it.
How do I prevent one customer's memory from leaking to another?
Mint a scoped API key per customer with the admin SDK. Each key is bound to its own namespace, which is a separate directory root. No two namespaces can read each other's files regardless of the commands the agent runs.
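A sketch of the per-customer pattern. The admin-side names below (AdminClient, create_key) are hypothetical stand-ins, not confirmed SDK surface; the shape is one scoped key per customer, each bound to its own namespace:
from trove_sdk import TroveClient
from trove_sdk.admin import AdminClient  # hypothetical import path
admin = AdminClient(api_key="trove-admin-sk-...")
customer_key = admin.create_key(namespace="customer-42")  # hypothetical call
# Every command this client runs is rooted in customer-42's directory;
# no shell command can reach another customer's namespace.
trove = TroveClient(api_key=customer_key, namespace="customer-42")
trove.exec("cat workspace/memory/context.md")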
Give your agent
a memory that lasts.
API key in 30 seconds. One client, one namespace, persistent file-based memory from the first request.