TinyCrew features a pluggable memory system that enables knowledge sharing between agents with optional persistence across sessions.
The memory system is what allows multiple agents to build on each other's work. When an agent completes a task, its results are automatically stored in shared memory. When a new task is assigned, the agent receives the current state of shared memory, enabling effective collaboration.
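Conceptually, the cycle looks something like this (a simplified sketch of the share-and-reuse flow; the `runTask`, `buildContext`, and `sharedMemory` names here are illustrative, not TinyCrew's actual internals):

```typescript
// Illustrative sketch: completed work is stored, new tasks see what came before
type TaskResult = { task: string; agent: string; result: string };

const sharedMemory: TaskResult[] = [];

function buildContext(): string {
  // New tasks receive the current state of shared memory
  return sharedMemory
    .map((m) => `[${m.agent}] ${m.task}: ${m.result}`)
    .join('\n');
}

function runTask(agent: string, task: string): string {
  const context = buildContext(); // prior results flow into the prompt
  const result = `answer to "${task}" using ${sharedMemory.length} prior item(s)`;
  sharedMemory.push({ task, agent, result }); // result stored automatically
  return result;
}

runTask('Researcher', 'Find market data');
runTask('Analyst', 'Summarize the data'); // sees the researcher's stored result
```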
The memory system consists of three main components:
**MemoryStore** — a high-level API for storing and retrieving task results, with:
- Automatic eviction based on size/token limits
- Event notifications for monitoring
- Context building for LLM prompts
- Keyword-aware relevance scoring
**Storage backends** — pluggable storage implementations:
| Backend | Description | Use Case |
|---|---|---|
| `InMemoryBackend` | Fast, ephemeral storage | Development, single-session workflows |
| `JSONFileBackend` | File-based persistence with atomic writes | Production, multi-session workflows |
**Memory items** — each stored item contains:
- Task description and result
- Agent attribution and timestamps
- Token count estimation
- Access tracking for relevance scoring
- Optional tags and metadata
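Pulling those fields together, a stored item might look roughly like the following (the `MemoryItem` field names and the ~4-characters-per-token estimate are assumptions based on the list above, not the library's exact type):

```typescript
// Hypothetical shape of a stored memory item (field names are assumptions)
interface MemoryItem {
  taskId: string;
  task: string;                       // task description
  result: string;                     // task result
  agent: string;                      // agent attribution
  createdAt: number;                  // timestamp (ms since epoch)
  tokenCount: number;                 // estimated tokens in task + result
  accessCount: number;                // bumped on reads, feeds relevance scoring
  tags?: string[];
  metadata?: Record<string, unknown>;
}

// A common rough token estimate is ~4 characters per token
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

const item: MemoryItem = {
  taskId: 'task_1',
  task: 'Research AI trends',
  result: 'Key findings...',
  agent: 'ResearchAgent',
  createdAt: Date.now(),
  tokenCount: estimateTokens('Research AI trends' + 'Key findings...'),
  accessCount: 0,
  tags: ['research'],
};
```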
```typescript
import { Crew } from 'tiny-crew';
import { MemoryStore, JSONFileBackend } from 'tiny-crew/Memory';
import OpenAI from 'openai';

// Create a persistent memory store
const memoryBackend = new JSONFileBackend({ basePath: './data/memory' });

// Create crew with memory options
const crew = new Crew(
  { goal: 'Research and analyze topics' },
  new OpenAI(),
  [], // chatHistory
  {
    memoryBackend,
    memoryConfig: {
      maxItems: 500,
      maxTotalTokens: 50000,
      autoEvict: true
    }
  }
);

// Add agents and execute tasks...
// Task results are automatically stored in memory
```

The `MemoryStore` configuration options:

```typescript
const memoryStore = new MemoryStore(backend, {
  defaultTtl: 0,            // Time-to-live in ms (0 = never expires)
  maxItems: 1000,           // Maximum items before eviction
  maxTotalTokens: 100000,   // Token budget for all items
  summarizeThreshold: 2000, // Token count to trigger summarization
  autoEvict: true,          // Enable automatic eviction
  evictInterval: 60000      // Eviction check interval (ms)
});
```

When building context for agents, the memory system scores items by keyword relevance to the current task:
```typescript
// Memory items matching current task keywords are prioritized
const context = await memoryStore.buildContext(crewId, {
  maxTokens: 4000,
  maxItems: 10,
  relevanceKeywords: ['research', 'analysis', 'findings']
});
```

| Match Type | Points |
|---|---|
| Tags (exact match) | +3 |
| Task description | +2 |
| Tools/capabilities used | +2 |
| Result content | +1 |
| Agent name | +1 |
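As a sketch, a scorer implementing that point table could look like this (the `scoreItem` helper and the item shape are illustrative; TinyCrew's actual implementation may differ, e.g. in how it tokenizes keywords):

```typescript
// Illustrative relevance scorer matching the point table above
interface ScorableItem {
  task: string;
  result: string;
  agent: string;
  tags: string[];
  toolsUsed: string[];
}

function scoreItem(item: ScorableItem, keywords: string[]): number {
  let score = 0;
  for (const kw of keywords.map((k) => k.toLowerCase())) {
    if (item.tags.some((t) => t.toLowerCase() === kw)) score += 3;            // tags (exact match)
    if (item.task.toLowerCase().includes(kw)) score += 2;                     // task description
    if (item.toolsUsed.some((t) => t.toLowerCase().includes(kw))) score += 2; // tools used
    if (item.result.toLowerCase().includes(kw)) score += 1;                   // result content
    if (item.agent.toLowerCase().includes(kw)) score += 1;                    // agent name
  }
  return score;
}

const sample: ScorableItem = {
  task: 'Research AI trends',
  result: 'Research notes on transformers',
  agent: 'ResearchAgent',
  tags: ['research'],
  toolsUsed: ['web_search'],
};

// 'research' hits: tag +3, task +2, result +1, agent +1
const score = scoreItem(sample, ['research']);
```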
Monitor memory changes with the event system:
```typescript
import { MemoryEvent } from 'tiny-crew';

memoryStore.on(MemoryEvent.ITEM_SET, ({ crewId, key, item }) => {
  console.log(`Memory stored: ${key} by ${item.agent}`);
});

memoryStore.on(MemoryEvent.ITEMS_EVICTED, ({ crewId, count }) => {
  console.log(`Evicted ${count} items from ${crewId}`);
});

memoryStore.on(MemoryEvent.CONTEXT_BUILT, ({ crewId, itemCount, totalTokens }) => {
  console.log(`Built context with ${itemCount} items (${totalTokens} tokens)`);
});
```

For advanced use cases, you can interact with memory directly:
```typescript
// Store a memory item
await memoryStore.set(crewId, 'research_results', {
  taskId: 'task_1',
  agent: 'ResearchAgent',
  task: 'Research AI trends',
  result: 'Key findings...',
  toolsUsed: ['web_search'],
  metadata: { sources: ['arxiv', 'papers'] }
});

// Query memory
const items = await memoryStore.query(crewId, {
  agent: 'ResearchAgent',
  tags: ['important'],
  sortBy: 'relevance'
});

// Get statistics
const stats = await memoryStore.getStats(crewId);
console.log(`Items: ${stats.itemCount}, Tokens: ${stats.totalTokens}`);

// Get a specific item
const item = await memoryStore.get(crewId, 'research_results');

// Delete an item
await memoryStore.delete(crewId, 'research_results');

// Clear all memory for a crew
await memoryStore.clear(crewId);
```

For workflows that span multiple sessions:
```typescript
import { JSONFileBackend, MemoryStore } from 'tiny-crew/Memory';

const backend = new JSONFileBackend({
  basePath: './data/memory',
  prettyPrint: true // Human-readable JSON files
});

const memoryStore = new MemoryStore(backend, {
  maxItems: 1000,
  autoEvict: true
});

// Memory persists between runs
// Close properly to flush pending writes
process.on('SIGINT', async () => {
  await memoryStore.close();
  process.exit(0);
});
```

Use `JSONFileBackend` to persist research across sessions. An agent can pick up where it left off:
```typescript
// Session 1: Research phase
crew.addTask('Research competitor pricing strategies');
await crew.executeAllTasks();

// Session 2 (later): Analysis phase
// Memory from session 1 is automatically available
crew.addTask('Analyze pricing data and recommend strategy');
await crew.executeAllTasks();
```

Agents build on each other's discoveries:
```typescript
const researcher = new Agent({ name: 'Researcher', goal: 'Find information' }, client);
const analyst = new Agent({ name: 'Analyst', goal: 'Analyze findings' }, client);

crew.addAgent(researcher);
crew.addAgent(analyst);

// Researcher finds data, stores in memory
crew.addTask('Research market trends for Q4');

// Analyst uses researcher's findings from memory
crew.addTask('Identify the top 3 growth opportunities');
```

Relevance scoring ensures the most pertinent information reaches agents:
```typescript
// When assigning a task about "machine learning",
// memory items tagged with "ML", "AI", "neural networks"
// are prioritized in the context
crew.addTask('Explain how machine learning is transforming healthcare');
```

Auto-eviction prevents unbounded memory growth:
```typescript
const memoryStore = new MemoryStore(backend, {
  maxItems: 100,         // Keep last 100 items
  maxTotalTokens: 50000, // Or max 50k tokens
  autoEvict: true        // Automatically remove old items
});
```

- **Choose the right backend**: Use `InMemoryBackend` for development and single-session work; use `JSONFileBackend` for production and persistent workflows.
- **Set appropriate limits**: Configure `maxItems` and `maxTotalTokens` based on your use case to prevent memory bloat.
- **Use tags effectively**: Tag important memories for easy retrieval and higher relevance scores.
- **Monitor with events**: Use memory events to track system health and debug issues.
- **Close properly**: Always call `memoryStore.close()` before exiting to flush pending writes.
- Memory Tools - Tools for agents to manage their own memory
- Conversation History - Managing conversation context