This document describes the built-in tools system in Pattern, which provides standard agent capabilities following the Letta/MemGPT pattern for stateful memory management.
Pattern agents come with a set of built-in tools that provide core functionality for memory management, archival storage, and communication. These tools are implemented using the same AiTool trait as external tools, ensuring consistency and allowing customization.
The ToolContext trait provides tools with controlled access to agent runtime services. Unlike the old AgentHandle approach, tools receive a trait object that exposes only the APIs they need:
```rust
#[async_trait]
pub trait ToolContext: Send + Sync {
    /// Get the current agent's ID (for default scoping)
    fn agent_id(&self) -> &str;

    /// Get the memory store for blocks, archival, and search
    fn memory(&self) -> &dyn MemoryStore;

    /// Get the message router for send_message
    fn router(&self) -> &AgentMessageRouter;

    /// Get the model provider for tools that need LLM calls
    fn model(&self) -> Option<&dyn ModelProvider>;

    /// Get the permission broker for consent requests
    fn permission_broker(&self) -> &'static PermissionBroker;

    /// Search with explicit scope and permission checks
    async fn search(
        &self,
        query: &str,
        scope: SearchScope,
        options: SearchOptions,
    ) -> MemoryResult<Vec<MemorySearchResult>>;

    /// Get the source manager for data source operations
    fn sources(&self) -> Option<Arc<dyn SourceManager>>;

    /// Get the shared block manager for block sharing operations
    fn shared_blocks(&self) -> Option<Arc<SharedBlockManager>>;
}
```

Tools access memory through the MemoryStore trait:
- `create_block()` - Create new memory blocks
- `get_block()` / `list_blocks()` - Read block content and metadata
- `update_block_text()` / `append_to_block()` - Modify block content
- `search_blocks()` - Full-text search with FTS5 BM25 scoring
- `persist_block()` - Flush changes to database
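To make the per-agent keying of these methods concrete, here is a minimal in-memory sketch of the block-store semantics. `InMemoryBlockStore` and its `(agent_id, label)` keying are illustrative assumptions, not Pattern's actual types; the real `MemoryStore` is an async trait backed by a database.

```rust
use std::collections::HashMap;

/// Illustrative in-memory stand-in for the MemoryStore block operations;
/// Pattern's real store is async and database-backed.
#[derive(Default)]
struct InMemoryBlockStore {
    // Keyed by (agent_id, label), matching the per-agent scoping
    // implied by the trait's method signatures.
    blocks: HashMap<(String, String), String>,
}

impl InMemoryBlockStore {
    fn create_block(&mut self, agent_id: &str, label: &str, content: &str) {
        self.blocks
            .insert((agent_id.to_owned(), label.to_owned()), content.to_owned());
    }

    fn get_block(&self, agent_id: &str, label: &str) -> Option<&str> {
        self.blocks
            .get(&(agent_id.to_owned(), label.to_owned()))
            .map(|s| s.as_str())
    }

    fn list_blocks(&self, agent_id: &str) -> Vec<&str> {
        self.blocks
            .keys()
            .filter(|(a, _)| a == agent_id)
            .map(|(_, label)| label.as_str())
            .collect()
    }
}
```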
Manages core memory blocks following the Letta/MemGPT pattern. Each operation modifies memory and requires the agent to continue their response.
Operations:
- `append` - Add content to existing memory (always uses `\n` separator)
- `replace` - Replace specific content within memory
- `archive` - Move a core memory block to archival storage
- `load` - Load an archival memory block into working/core
- `swap` - Atomic operation to archive one block and load another
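The `append` separator rule can be sketched as a pure function. This is illustrative only: the real tool operates asynchronously on stored blocks, and treating an empty block as a plain write is an assumption.

```rust
/// Sketch of `append`'s join rule: new content is always separated from
/// existing content by `\n`. Treating an empty block as a plain write
/// (no leading newline) is an assumption for illustration.
fn append_with_separator(existing: &str, addition: &str) -> String {
    if existing.is_empty() {
        addition.to_owned()
    } else {
        format!("{existing}\n{addition}")
    }
}
```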
```json
// Example: Append to memory
{
  "operation": "append",
  "label": "human",
  "content": "Prefers morning meetings"
}

// Example: Swap memory blocks
{
  "operation": "swap",
  "archive_label": "old_project",
  "load_label": "new_project"
}
```

Manages long-term archival storage with full-text search capabilities via FTS5.
Operations:
- `insert` - Add new memories to archival storage
- `append` - Add content to existing archival memory
- `read` - Read specific archival memory by label
- `delete` - Remove archived memories
```json
// Example: Insert new archival memory
{
  "operation": "insert",
  "label": "meeting_notes_2024_01",
  "content": "Discussed project timeline..."
}
```

Unified search interface across different domains using hybrid FTS5 + vector search.
Domains:
- `archival_memory` - Search archival storage
- `conversations` - Search message history
- `all` - Search everything
```json
// Example: Search archival memory
{
  "domain": "archival_memory",
  "query": "project deadline",
  "limit": 10
}
```

Sends messages to the user (required for agents to yield control):
```json
// Input
{
  "message": "I've updated your preferences. How else can I help?"
}

// Output
{
  "success": true,
  "message": "Message sent successfully"
}
```

```rust
// In agent loading via RuntimeContext
let builtin = BuiltinTools::new(runtime.clone());
builtin.register_all(&tools);
```

For a custom memory backend (e.g., Redis, external database), implement the MemoryStore trait:
```rust
use pattern_core::memory::{MemoryStore, MemoryResult, BlockMetadata, StructuredDocument};

#[derive(Debug)]
struct RedisMemoryStore {
    redis: Arc<RedisClient>,
}

#[async_trait]
impl MemoryStore for RedisMemoryStore {
    async fn create_block(&self, agent_id: &str, label: &str, ...) -> MemoryResult<String> {
        // Store in Redis
        self.redis.hset(agent_id, label, block_data).await?;
        Ok(block_id)
    }

    async fn get_block(&self, agent_id: &str, label: &str)
        -> MemoryResult<Option<StructuredDocument>>
    {
        // Retrieve from Redis
        self.redis.hget(agent_id, label).await
    }

    // ... implement other MemoryStore methods
}

// Use when building RuntimeContext
let memory = Arc::new(RedisMemoryStore::new(redis_client));
let ctx = RuntimeContext::builder()
    .dbs_owned(dbs)
    .model_provider(model)
    .memory(memory) // Custom memory backend
    .build()
    .await?;
```

Users can also register additional tools alongside built-ins:
```rust
#[derive(Debug, Clone)]
struct WeatherTool {
    api_key: String,
}

#[async_trait]
impl AiTool for WeatherTool {
    type Input = WeatherInput;
    type Output = WeatherOutput;

    fn name(&self) -> &str { "get_weather" }
    fn description(&self) -> &str { "Get weather for a location" }

    async fn execute(&self, params: Self::Input) -> Result<Self::Output> {
        // Call weather API
    }
}

// Register alongside built-ins
registry.register_dynamic(weather_tool.clone_box());
```

- Consistency: All tools go through the same registry and execution path
- Discoverability: Agents can list all available tools, including built-ins
- Testability: Built-in tools can be tested like any other tool
- Flexibility: Easy to override or extend built-in behavior
The ToolContext abstraction additionally provides:

- Abstraction: Tools depend on the interface, not the implementation
- Testability: Easy to mock in unit tests
- Safety: Only exposes what tools need, not the full runtime
- Future-proof: The interface can evolve without breaking tools
We considered having built-in tools as methods on the Agent trait or handled specially, but chose the unified approach because:
- Users might want to disable or replace built-in tools
- The tool registry provides a single source of truth
- Special-casing would complicate the execution path
- The performance overhead is minimal
Built-in tools use the generic AiTool trait for type safety:
```rust
#[async_trait]
impl AiTool for BlockTool {
    type Input = BlockOperation;  // Strongly typed, deserializable
    type Output = BlockResult;    // Strongly typed, serializable

    async fn execute(&self, params: Self::Input) -> Result<Self::Output> {
        let ctx = self.runtime.as_ref() as &dyn ToolContext;
        // Compile-time type checking
    }
}
```

The DynamicTool trait wraps typed tools for storage in the registry:

```rust
registry.register_dynamic(tool.clone_box());
```

Tool schemas are generated with `inline_subschemas = true` to ensure no `$ref` fields, meeting MCP requirements.
Memory blocks carry a permission (the `MemoryPermission` enum). New blocks default to `read_write` unless configured otherwise. Tools enforce an ACL as follows:
- Read: always allowed.
- Append: allowed for `append`/`read_write`/`admin`; `partner`/`human` require approval via PermissionBroker; `read_only` denied.
- Overwrite/Replace: allowed for `read_write`/`admin`; `partner`/`human` require approval; `append`/`read_only` denied.
- Delete: `admin` only.
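The ACL table above can be sketched as a pure decision function. Names like `Operation` and `AclDecision` are illustrative, not Pattern's actual types, and the treatment of `partner`/`human` as permission levels is read directly from the table:

```rust
/// Permission levels a memory block can carry (mirrors the ACL table).
#[derive(Clone, Copy, PartialEq, Debug)]
enum MemoryPermission { ReadOnly, Append, ReadWrite, Partner, Human, Admin }

/// Kinds of access a tool can attempt.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Operation { Read, Append, Overwrite, Delete }

/// Outcome of an ACL check; RequireApproval routes through the PermissionBroker.
#[derive(Clone, Copy, PartialEq, Debug)]
enum AclDecision { Allow, RequireApproval, Deny }

fn check_acl(perm: MemoryPermission, op: Operation) -> AclDecision {
    use AclDecision::*;
    match op {
        // Reads are always allowed.
        Operation::Read => Allow,
        // Appends: append/read_write/admin pass; partner/human go through
        // the broker; read_only is denied.
        Operation::Append => match perm {
            MemoryPermission::Append
            | MemoryPermission::ReadWrite
            | MemoryPermission::Admin => Allow,
            MemoryPermission::Partner | MemoryPermission::Human => RequireApproval,
            MemoryPermission::ReadOnly => Deny,
        },
        // Overwrite/replace: read_write/admin pass; partner/human need
        // approval; append/read_only are denied.
        Operation::Overwrite => match perm {
            MemoryPermission::ReadWrite | MemoryPermission::Admin => Allow,
            MemoryPermission::Partner | MemoryPermission::Human => RequireApproval,
            MemoryPermission::Append | MemoryPermission::ReadOnly => Deny,
        },
        // Deletes are admin-only.
        Operation::Delete => {
            if perm == MemoryPermission::Admin { Allow } else { Deny }
        }
    }
}
```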
Tool-specific notes:
- `block.append` and `block.replace` enforce ACL and request `MemoryEdit { key }` when needed.
- `block.archive` checks Overwrite ACL if the archival label already exists; deleting the source context requires Admin.
- `block.load` behavior:
  - Same label: convert archival → working in-memory.
  - Different label: create new working block and retain archival.
  - Does not delete archival.
- `block.swap` enforces Overwrite ACL on the destination (with possible approval) and deletes the source archival only with Admin.
- `recall.append` enforces ACL; `recall.delete` requires Admin.
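The `block.load` label rules above can be modeled as a small planning function. `LoadAction` and `plan_load` are hypothetical names for illustration; the real tool performs these steps against the store.

```rust
/// Illustrative model of `block.load`'s label handling; in both cases
/// the archival block itself is never deleted.
#[derive(Debug, PartialEq)]
enum LoadAction {
    /// Same label: convert the archival block to a working block in place.
    ConvertInPlace,
    /// Different label: create a new working block; the archival block is retained.
    CreateWorkingCopy,
}

fn plan_load(archival_label: &str, target_label: &str) -> LoadAction {
    if archival_label == target_label {
        LoadAction::ConvertInPlace
    } else {
        LoadAction::CreateWorkingCopy
    }
}
```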
Consent prompts are routed with origin metadata (e.g., Discord channel) for fast approval.
- semantic_search: Enhanced semantic search with embedding support
- schedule_reminder: Set time-based reminders
- track_task: Create and track ADHD-friendly tasks
```rust
let result = registry.execute("block", json!({
    "operation": "append",
    "label": "preferences",
    "content": "User prefers dark mode"
})).await?;
```

```rust
#[tokio::test]
async fn test_block_tool() {
    // Create test runtime with mock memory
    let runtime = create_test_runtime().await;
    let tool = BlockTool::new(runtime);

    let result = tool.execute(BlockOperation::Append {
        label: "test".to_string(),
        content: "test value".to_string(),
    }).await.unwrap();

    assert!(result.success);
}
```

- Keep tools focused: Each tool should do one thing well
}- Keep tools focused: Each tool should do one thing well
- Use type safety: Define proper Input/Output types with JsonSchema
- Handle errors gracefully: Return meaningful error messages
- Document tool behavior: Provide clear descriptions and examples
- Consider concurrency: MemoryStore is thread-safe via Arc
- Test thoroughly: Built-in tools are critical infrastructure