v0.1

API reference

TroveFiles is files and commands for AI agents — six endpoints, one Bearer token.

Quickstart

Three lines and your agent can save files, read them back, and run commands.

~/agent — quickstart

$ pip install trove-sdk
Successfully installed trove-sdk-0.2.2

$ python quickstart.py
workspace/notes.md (10 bytes)
total 4
-rw-r--r-- 1 root root 10 May 4 12:01 notes.md

✓ Your agent can save files and run commands.

# pip install trove-sdk
from trove_sdk import TroveClient

client = TroveClient(api_key="trove-sk-...", namespace="my-agent")

# Write a file
client.write("workspace/notes.md", "# Notes")

# Run any shell command — returns stdout, raises TroveExecError on
# non-zero exit. Use exec_detailed() if you'd rather inspect the
# exit code than catch.
print(client.exec("ls -la workspace/"))
Your agent can read and write files

Next: CLI — watch your agent live from the terminal

CLI

tail -f for your agent. Stream every file write, shell command, and snapshot to your terminal as it happens.

Install

uv tool install "trove-sdk[cli]"
# or: pip install "trove-sdk[cli]"

Log in

trove login
# Opening your browser to authorize this CLI…
#
#   Code: ABCD-1234
#   URL:  https://trovefiles.dev/cli?code=ABCD-1234
#
# Confirm the code in your browser, then approve.
# .....
# saved profile 'default' (trove-sk-abc1…3a7f)  workspace=ws-...

# Skip the browser (CI / headless):
trove login --api-key trove-sk-...     # explicit key
echo $TROVE_KEY | trove login          # piped from stdin
trove login --no-browser               # paste at the prompt

trove whoami
# profile     : default
# workspace   : ws-...
# api key     : trove-sk-...3a7f

Watch your agent

trove tail
# tailing ws-...  (Ctrl-C to stop)
# 14:22:01  file.written     customer-acme    workspace/notes/research.md  (1.2KB)
# 14:22:03  exec.completed   customer-acme    ls -la workspace/notes/      exit=0
# 14:22:14  exec.completed   customer-acme    pytest tests/                exit=1
# 14:22:16  file.written     customer-acme    workspace/notes/parser_fix.py  (820B)

Filter what you watch

trove tail --namespace customer-acme         # one customer's events
trove tail --types file.written              # only writes
trove tail --types file.written,exec.completed
trove tail --since 1h                        # last hour, then keep streaming
trove tail --json                            # one JSON object per line

Browse the backlog

trove events list --limit 50
trove events list --types exec.completed --json
trove events list --namespace customer-acme

Multiple workspaces

trove login --save-as staging       # browser flow, saved as 'staging'
trove --profile staging tail
trove --profile prod    events list --limit 5

Pipe into anything

--json emits one event per line. Same firehose the dashboard reads, addressable from your shell — pipe into jq, ship to Slack, forward to Datadog. The CLI is just a thin wrapper over the events API; anything you can do in tail you can do in curl.
# Pipe into jq for projection
trove tail --json | jq -r '.data.path // .data.command'

# Forward to Slack
trove tail --types exec.completed --json | while read -r line; do
  exit_code=$(echo "$line" | jq -r .data.exit_code)
  [ "$exit_code" != "0" ] && curl -s "$SLACK_WEBHOOK" -d "$line"
done
You can watch your agent live

Next: Authentication — lock down who can call those endpoints

Authentication

Two headers prove who is calling and which namespace they're touching. Set them once on a client — never again.

Authorization: Bearer trove-sk-…

Your API key. Hashed server-side — cannot be retrieved. Revoke and reissue if lost.

X-Namespace: my-agent

Required on filesystem endpoints. Scoped keys auto-default to their namespace — the header can be omitted.
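
What those two headers look like on the wire, as a minimal raw-HTTP sketch (the SDK sets both for you). The base URL comes from the CI example later in this document; the JSON body shape for /write is an assumption here, shown only for illustration.

import requests

# Hedged sketch: raw HTTP with both auth headers set explicitly.
# ASSUMPTION: the {"path", "content"} body is illustrative, not the
# documented wire format.
resp = requests.post(
    "https://api.trovefiles.dev/write",
    headers={
        "Authorization": "Bearer trove-sk-...",  # your API key
        "X-Namespace": "my-agent",               # required on filesystem endpoints
    },
    json={"path": "workspace/notes.md", "content": "# Notes"},
)
resp.raise_for_status()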

Your agent identifies itself on every request

Next: Namespaces — give every agent its own root directory

Namespaces

Multi-tenant agents need hard isolation. A namespace is a top-level directory per agent, customer, or session — auto-created on first write.

  • Pattern: ^[A-Za-z0-9_-]{1,128}$
  • Auto-created on first write — no provisioning needed
  • Agent always sees its root as workspace/
  • Scoped keys are hard-isolated — cross-namespace access returns 403
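
A minimal sketch of the isolation guarantee: two clients on different namespaces write the same relative path and never see each other's file.

from trove_sdk import TroveClient

a = TroveClient(api_key="trove-sk-...", namespace="customer-a")
b = TroveClient(api_key="trove-sk-...", namespace="customer-b")

a.write("workspace/notes.md", "A's notes")
b.write("workspace/notes.md", "B's notes")

print(a.exec("cat workspace/notes.md"))   # "A's notes", never B's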
Your agents are isolated from each other

Next: POST /v1/exec — run any shell command inside a namespace

POST /v1/exec

Models already know the standard Unix tools — awk, jq, grep, pdftotext. Hand them a shell, they'll figure out the rest. No command whitelist. Returns a JSON envelope with exit_code, stdout, stderr, and duration_ms.

POST /v1/exec
# Simple case: returns stdout as a string. Raises TroveExecError on
# non-zero exit (carries exit_code, stdout, stderr) so a failing
# command never silently looks like normal output.
output = client.exec('grep -r "TODO" workspace/')
print(output)

# Want to inspect the exit code without an exception? Use exec_detailed —
# returns ExecResult(exit_code, stdout, stderr, duration_ms).
result = client.exec_detailed("pytest tests/")
if result.exit_code != 0:
    print("failures on stderr:", result.stderr)

Output rewriting

Real mount paths are rewritten to workspace/ so internal paths never leak to your agent.
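
One quick way to see the rewrite in action; the output shown is illustrative, assuming a shell rooted in your namespace.

print(client.exec("pwd"))
# prints "workspace", not the real mount path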
Your agent can run any shell command

Next: POST /write — persist text results back to the workspace

POST /write

The most common operation an agent does. Atomic write of UTF-8 text — for binary or anything over 10 MB, use PUT /files instead.

POST /write
result = client.write("workspace/notes.md", "# Notes\n...")
print(result.path, result.size_bytes)
Your agent can persist text

Next: PUT /files/{path} — push binary files (PDFs, images, build artifacts)

PUT /files/{path}

Binary stuff: PDFs, images, audio, build artifacts — anything up to 100 MB. Streams raw bytes; no JSON wrapper.

PUT /files/{path}
with open("report.pdf", "rb") as f:
    result = client.upload("workspace/report.pdf", f)
print(result.size_bytes)
Your agent can store binary files up to 100 MB

Next: POST /delete — clean up files and directories you no longer need

POST /delete

Delete a file or directory. Recursive. Permanent — the only way back is a snapshot you took beforehand.

POST /delete
client.delete("workspace/notes.md")
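
Because the operation is permanent, a checkpoint-first pattern is cheap insurance; snapshots are covered in full below.

# Checkpoint, then delete. restore_snapshot() undoes a mistake.
snap = client.create_snapshot(label="before-cleanup")
client.delete("workspace/scratch/")      # recursive and permanent
# Regret it? client.restore_snapshot(snap.snapshot_id)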
Your agent can clean up

Next: Persistent shell context — stop re-running cd / source / export on every command

Persistent shell context

Each exec runs in a fresh shell. The filesystem is the only thing that carries between calls — anything that lives only in shell state is gone. Two patterns close that gap: exec_chain for one-off multi-step flows, and init.sh for setup that should apply to every command.

What persists between exec calls

Persists:
  • Files in workspace/
  • The init.sh prelude (re-runs every call)
  • Snapshots

Doesn't persist:
  • cwd from a prior cd
  • Env vars exported inside an exec
  • Background processes
  • Activated venvs (use init.sh instead)

Three rules of thumb. Deterministic setup (cwd, venv, env vars) goes in init.sh. Computed state that one step produces and a later call needs goes in a file — an export FOO=$(...) in one exec is gone by the next. Multi-step flows that share state within a single call run as one exec_chain.

exec_chain — multi-step in one shell

Joins commands with && server-side and runs them as one exec, so cd / export / shell variables hold for the whole chain. Short-circuits on the first non-zero exit. The 30-second wall clock applies to the chain as a whole — for longer flows, write progress to files so a retry can resume.

# Multi-step within ONE shell — cwd and shell variables hold for the chain.
result = client.exec_chain([
    "cd workspace/data",
    "TOKEN=$(curl -s https://api.example.com/token)",
    'curl -H "Authorization: $TOKEN" https://api.example.com/feed -o feed.json',
])
# Stops on first non-zero exit, just like shell &&. 30s wall clock applies
# to the whole chain.

# Separate exec calls — TOKEN is gone between them. Persist via a file:
client.exec("curl -s https://api.example.com/token > workspace/.token")
client.exec('curl -H "Authorization: $(cat workspace/.token)" ... -o workspace/feed.json')

init.sh — setup that runs before every call

# Without init.sh — every command repeats the setup
client.exec("cd workspace/data && source .venv/bin/activate && python analyze.py")
client.exec("cd workspace/data && source .venv/bin/activate && pytest tests/")

# With init.sh — set the prelude once, run cleanly forever
client.set_init("""
cd workspace/data
source .venv/bin/activate
""")

client.exec("python analyze.py")    # cwd, venv, env all carry over
client.exec("pytest tests/")        # same context — no re-setup

It's just a file at workspace/.trove/init.sh

No new endpoint, no new auth — the convention is one file at a known path. Each /exec reads it and sources it into the same shell as your command, so cd, export, shell functions, and an activated venv all carry over. Snapshots include it; webhook events fire when it changes; namespace isolation holds.
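
Because the convention is just a path, a plain write is equivalent to the helper shown below, which is handy when the agent itself should manage its own prelude.

# Same effect as client.set_init(...): it's just a file at a known path.
client.write("workspace/.trove/init.sh", "cd workspace/data\nexport DEBUG=1\n")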

Manage it

client.set_init("cd workspace/data\nexport DATE=2026-05-06\n")

client.get_init()      # → the script text, or None if unset
client.clear_init()    # → True if removed, False if never set

Each /exec still gets a fresh shell — only the prelude carries over, not state from prior commands. Errors in the prelude (a bad cd, missing file) write to stderr but don't block the user command. Avoid exit statements in the script — they terminate the shell before your command runs.

Your agent stops re-running setup on every call

Next: Cross-session orientation — one call so the next agent session starts oriented, not amnesiac

Cross-session orientation

Each new agent session starts with amnesia: no idea what files are around, what the previous instance was working on, or where it got stuck. One call (composed from existing endpoints — no new server contract) rolls up recent files, the active init.sh, and the previous session's handoff note into a packet you pipe straight into the model's system prompt.

# At the end of a session — leave a handoff note for the next instance.
# It's just a markdown file at a known path; the runtime doesn't parse it.
client.write("workspace/.trove/agent.md", """## What I learned
- Salesforce OAuth needs the 'api' scope, not 'read'
- Cache primed at workspace/.cache/q3.json — reuse, don't recompute
""")

# Next session — one call returns recent files, the active init.sh, and the
# previous session's handoff note. Pipe straight into the system prompt.
bs = client.bootstrap()
system_prompt += bs.as_system_prompt_block()
# <workspace>
#   namespace: alice
#   files: 12; last edited 2026-05-07T20:00:00Z
#   recent: workspace/data.csv (3.4KB), workspace/report.md (140B), ...
#   init.sh: cd workspace/data; source .venv/bin/activate
#   last_session: |
#     ## What I learned
#     - Salesforce OAuth needs the 'api' scope, not 'read'
#     ...
# </workspace>

It's just two files at known paths

bootstrap() composes a recursive list_dir with reads of two convention paths: workspace/.trove/init.sh (the sourced shell prelude — already documented above) and workspace/.trove/agent.md (the cross-session handoff note). The async client fans those reads out concurrently. No new endpoint to call directly; works against any server version.

Leaving the handoff note

To leave a note for the next instance of the agent, write workspace/.trove/agent.md with the normal client.write(...) — there is no dedicated method. The runtime doesn't parse the file; pick whatever format the receiving agent expects (markdown, JSON, free-form). On the next bootstrap() the file shows up as bs.agent_memory and inside the rendered prompt block as a last_session: | block.

When (not) to call it

Once per session, on the very first agent turn — that's the point. The renderer is stable so you can pipe its output directly into the system message; check bs.file_count == 0 to detect a cold start and skip the "here's your previous work" framing.
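
A sketch of that first turn; BASE_PROMPT stands in for your own system prompt and is not part of the SDK.

bs = client.bootstrap()
if bs.file_count == 0:
    system_prompt = BASE_PROMPT                               # cold start, nothing to orient on
else:
    system_prompt = BASE_PROMPT + bs.as_system_prompt_block()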
Your agent stops re-deriving what it learned yesterday

Next: Snapshots — undo a delete you wish you hadn't made

Snapshots

An agent that can't roll back is a tightrope walker without a net. Snapshots are your safety net — point-in-time tarballs of a namespace, restorable with one call. Retained for 30 days.

POST   /v1/snapshots
GET    /v1/snapshots
POST   /v1/snapshots/{id}/restore
DELETE /v1/snapshots/{id}

Take a checkpoint

# Take a checkpoint before a risky operation
snap = client.create_snapshot(label="before-migration")
print(snap.snapshot_id, snap.size_bytes)
# → snap-b1bde15ffe82b60a 1284

List, restore, delete

# List checkpoints — newest first
for s in client.list_snapshots():
    print(s.snapshot_id, s.label, s.created_at)

# Restore — wipes namespace, extracts the tarball back
files_restored = client.restore_snapshot("snap-b1bde15ffe82b60a")
print(f"{files_restored} files restored")

# Delete a snapshot when you no longer need it
client.delete_snapshot("snap-b1bde15ffe82b60a")

How it works

Each snapshot is a gzipped tar of the namespace's live FUSE state, stored in a separate bucket with a 30-day expiration lifecycle. Restore wipes the namespace and extracts the tarball back, file-by-file. Concurrent restores are last-write-wins — no locks. Snapshots fire snapshot.created and snapshot.restored webhook events so your backend can audit recovery actions.

When to snapshot

Hand-roll a checkpoint before a risky operation (an agent migration, a bulk ingest, a destructive shell command) so you can roll back if it goes wrong. For continuous protection, subscribe to file.deleted via webhooks and snapshot from your backend.
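
A sketch of that backend handler, reusing the verified event from the Webhooks section below; OPS_KEY is a stand-in for one of your runtime keys, not an SDK name.

from trove_sdk import TroveClient

# Inside your webhook receiver, after verify_webhook(...) succeeds:
if event.type == "file.deleted":
    ops = TroveClient(api_key=OPS_KEY, namespace=event.namespace)
    ops.create_snapshot(label=f"post-delete-{event.id}")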

Daily auto-backups

Every namespace with data gets one automatic snapshot per day, kept on a rolling 7-day window. The job runs at ~02:00 UTC and skips namespaces whose most recent snapshot is less than 23 hours old, so it doesn't collide with manual ones you took the same day. Auto-snapshots use the auto- snapshot-id prefix and a label like auto-daily-2026-05-02. Manual snapshots you create yourself (id prefix snap-) are never touched by the pruner.
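
Those id prefixes make it easy to keep the two kinds apart when listing, as in this small sketch.

snaps  = client.list_snapshots()
manual = [s for s in snaps if s.snapshot_id.startswith("snap-")]
auto   = [s for s in snaps if s.snapshot_id.startswith("auto-")]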

From the dashboard

You can also take, list, restore, and delete save points without writing code: open the Files tab, click the Save points button next to any namespace, and take a checkpoint inline. Daily auto-backups show up in the same panel, in their own section. Restore is gated by a type-the-namespace-name confirmation since it wipes the live state. Same data either way — both paths read and write the same snapshot bucket.

Available in trove-sdk (Python ≥ 0.2.2), the dashboard, and the REST endpoints listed above.

Your agent can rewind a mistake

Next: Agent integration — wire all of this into a tool-calling loop

Agent integration

Agents get one tool — bash — and use the Unix commands the model already knows. No custom API, no routing decisions.

import anthropic
from trove_sdk import TroveClient

trove = TroveClient(api_key="trove-sk-...", namespace="session-123")
client = anthropic.Anthropic()

tools = [{
    "name": "bash",
    "description": "Run a shell command in the agent's filesystem.",
    "input_schema": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"],
    },
}]

def run_agent(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    while True:
        resp = client.messages.create(
            model="claude-opus-4-7",
            max_tokens=4096,
            tools=tools,
            messages=messages,
        )
        if resp.stop_reason == "end_turn":
            return next((b.text for b in resp.content if hasattr(b, "text")), "")
        # Collect one tool_result per call, then append the assistant turn
        # exactly once; appending inside the loop would duplicate it when
        # the model issues several tool calls in one turn.
        tool_results = []
        for block in resp.content:
            if block.type == "tool_use" and block.name == "bash":
                # exec_detailed gives us a structured result so a non-zero
                # exit doesn't get blended into stdout when we hand it back
                # to the model.
                r = trove.exec_detailed(block.input["command"])
                tool_output = r.stdout if r.exit_code == 0 else (
                    f"[exit {r.exit_code}]\n{r.stderr}".rstrip()
                )
                tool_results.append({
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": tool_output,
                    "is_error": r.exit_code != 0,
                })
        messages += [
            {"role": "assistant", "content": resp.content},
            {"role": "user", "content": tool_results},
        ]

Why one tool, not five

Models already know the standard Unix commands. Reading is cat workspace/foo.md, listing is ls workspace/, searching is grep -rn TODO workspace/. Separate tools add tokens and another routing decision to get wrong.
Your LLM has a shell as a tool

Next: Multi-tenancy — scale this from one agent to ten thousand customers

Multi-tenancy

One admin key mints scoped keys per customer from your backend. Each customer's agent is hard-isolated to its own namespace — no cross-tenant access, ever.

from trove_sdk import TroveAdminClient, TroveClient

# ── On customer signup (your backend, never the browser) ──────────────────────
admin = TroveAdminClient(
    api_key=TROVE_ADMIN_KEY,        # your long-lived admin key
    workspace_id=TROVE_WORKSPACE_ID,
)

key = admin.create_key(
    f"customer-{customer_id}",
    namespace=f"customer-{customer_id}",
)
# Store key.key_id in your DB — you'll need it to revoke
# Give key.api_key to the backend service running this customer's agent

# ── Customer agent ────────────────────────────────────────────────────────────
# Scoped key auto-defaults X-Namespace — namespace arg here is optional
trove = TroveClient(api_key=customer_key, namespace=f"customer-{customer_id}")

trove.write("workspace/memory/prefs.md", "prefers bullet points")
trove.exec("cat workspace/memory/prefs.md")

# ── On churn / account deletion ───────────────────────────────────────────────
admin.revoke_key(stored_key_id)
# Any request using that key immediately returns 401

Security model

Scoped keys are enforced server-side — a customer key attempting to access another namespace returns 403 regardless of what path it sends. The admin key never leaves your backend. Revocation is immediate: the next request with a revoked key returns 401.

Manage keys via API

from trove_sdk import TroveAdminClient

admin = TroveAdminClient(api_key="trove-sk-admin-...", workspace_id="ws-...")

# Mint a scoped key for a new customer
key = admin.create_key("customer-acme", namespace="customer-acme")
print(key.api_key)  # store this — shown once

Revoke on churn

admin.revoke_key("key-...")
Every customer has their own sandbox

Next: Per-session sandboxes — go a level deeper for per-session throwaway keys

Per-session sandboxes

A production-grade pattern for agent runtimes: each session gets its own namespace and a throwaway scoped key. Three keys, three roles, hard isolation between sessions.

  • scope: admin — lives in your backend secrets manager. Mints and revokes keys, manages webhooks; cannot touch the filesystem.
  • scope: workspace + namespace — lives in the agent process for one session. Reads and writes its own namespace; cross-namespace requests return 403.
  • scope: workspace, no namespace — lives in backend ops jobs. Walks every namespace; used for billing rollups, capacity, abuse detection.

# provision.py — backend, holds the admin key
import os
from trove_sdk import TroveAdminClient

admin = TroveAdminClient(
    api_key=os.environ["TROVE_ADMIN_KEY"],     # never leaves your backend
    workspace_id=os.environ["TROVE_WORKSPACE_ID"],
)

def start_session(session_id: str) -> dict:
    """Mint a throwaway key bound to this session's namespace."""
    namespace = f"session-{session_id}"
    key = admin.create_key(name=f"agent:{session_id}", namespace=namespace)
    # Persist key.key_id with your session record — you'll need it to revoke
    return {"namespace": namespace, "key_id": key.key_id, "api_key": key.api_key}

def end_session(key_id: str) -> None:
    """Revoke. In-flight requests with this key now return 401."""
    admin.revoke_key(key_id)

Why three keys, not one

The admin key never reaches the filesystem — even if leaked, it can't exfiltrate agent data. The scoped runtime key can't see other sessions — even if the agent goes rogue, blast radius = one namespace. The unscoped runtime key powers cross-tenant ops jobs without ever needing key-management privileges. Each role gets the smallest credential it needs to do its job.

Full runnable example with CLI walkthrough: python/examples/sessions

Every session has its own throwaway key

Next

Webhookslisten to what's happening across all your sessions

Webhooks

TroveFiles POSTs a signed JSON event to your endpoint whenever activity happens in your workspace. Use an admin key to manage webhooks.

Don't want to set up an HTTP endpoint?

trove tail streams the same events to your terminal — no public URL, no signature verification, just one command.

Create & manage

from trove_sdk import TroveAdminClient

admin = TroveAdminClient(api_key="trove-sk-admin-...", workspace_id="ws-...")

# Subscribe to specific events, scoped to a namespace
webhook = admin.create_webhook(
    "https://api.yourapp.com/trove-events",
    events=["file.written", "exec.completed"],
    namespace="customer-acme",           # optional — omit for all namespaces
    description="Notify on file writes", # optional label
)
print(webhook.signing_secret)  # save this — not shown again

# Or subscribe to everything (including future event types)
webhook = admin.create_webhook("https://...", events=["*"])

# List and delete
hooks = admin.list_webhooks()
admin.delete_webhook(webhook.webhook_id)

# Send a test event to confirm delivery
result = admin.test_webhook(webhook.webhook_id)
print(result.ok, result.status)

Event types

Use events: ["*"] to subscribe to all events including future ones.

  • file.written — file created or updated via /write or PUT /files
  • file.deleted — file or directory deleted via /delete
  • exec.completed — shell command finished via /exec
  • snapshot.created — namespace snapshotted
  • snapshot.restored — namespace restored from a snapshot
  • workspace.created — workspace provisioned
  • key.created — API key minted
  • key.revoked — API key revoked

Event payload

Every event has the same envelope:

{
  "id":           "evt-...",
  "type":         "file.written",
  "api_version":  "2025-01-01",
  "workspace_id": "ws-...",
  "namespace":    "customer-acme",
  "created_at":   "2025-04-30T12:00:00Z",
  "actor":        { "key_id": "key-...", "key_name": "customer-acme" },
  "data":         { ... }
}

Verify signatures

Every delivery includes X-Trove-Signature: t=<unix>,v1=<hmac_sha256_hex>. Algorithm: HMAC-SHA256(secret, "{t}.{raw_body}"). Tolerance: 5 minutes. Pass the raw bytes — JSON re-serialization invalidates the signature.
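
If you'd rather not take verify_webhook on faith, the check is a few lines of stdlib; this is a sketch of the documented algorithm, not the SDK's internals.

import hashlib, hmac, time

def verify_manually(secret: str, body: bytes, header: str, tolerance: int = 300) -> bool:
    # Header format: t=<unix>,v1=<hmac_sha256_hex>
    parts = dict(item.split("=", 1) for item in header.split(","))
    t, v1 = parts["t"], parts["v1"]
    if abs(time.time() - int(t)) > tolerance:
        return False  # outside the 5-minute window
    expected = hmac.new(secret.encode(), f"{t}.".encode() + body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, v1)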

from trove_sdk import verify_webhook, WebhookSignatureError

# FastAPI
from fastapi import FastAPI, Request, HTTPException
import os

app = FastAPI()

@app.post("/trove-events")
async def receive(request: Request):
    body = await request.body()  # must be raw bytes — do not parse first
    try:
        event = verify_webhook(
            secret=os.environ["TROVE_WEBHOOK_SECRET"],
            body=body,
            signature_header=request.headers["x-trove-signature"],
        )
    except WebhookSignatureError:
        raise HTTPException(status_code=400, detail="Bad signature")

    if event.type == "file.written":
        print(f"File written in {event.namespace}: {event.data}")
    elif event.type == "exec.completed":
        print(f"Exec finished: {event.data}")

    return {"ok": True}
Your backend hears every workspace event

Next: Examples — see the full thing wired up to LangChain, LangGraph, Agno, Pydantic AI

Examples

Full working examples in the trove-sdk repo — each seeds a workspace and runs a tool-calling agent backed by Claude.

GitHub Actions

Pre-load files for an agent run by syncing a folder from CI. One PUT per file, scoped to a per-build namespace.

name: Sync build to TroveFiles
on: [push]

jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build
      - name: Upload to TroveFiles
        env:
          TROVE_KEY: ${{ secrets.TROVE_KEY }}
        run: |
          find ./build -type f | while read -r f; do
            curl --fail -X PUT \
              -H "Authorization: Bearer $TROVE_KEY" \
              -H "X-Namespace: ci-${{ github.run_id }}" \
              --data-binary "@$f" \
              "https://api.trovefiles.dev/files/${f#./build/}"
          done
Your agent sees fresh code on every push

Next: Limits — know the guardrails before you ship to production

Limits

  • Exec timeout: 30s
  • POST /write size: 10 MB
  • PUT /files size: 100 MB
  • Namespace pattern: [A-Za-z0-9_-]{1,128}
  • Path traversal (..): rejected
You know the guardrails

Next: Get an API key — you've read the manual, time to ship something