This directory contains practical examples demonstrating Context usage patterns.
Path: backend-automation/
Demonstrates using Context in backend automation scenarios like GitHub Actions or scheduled jobs.
Features:
- Creating contexts for data analysis
- Adding inputs with relevance scores
- Automatic pruning to fit token limits
- Strategy-based routing
- Context operations (extend, merge, serialize)
Run:
cd backend-automation
PYTHONPATH=../../core/python python backend_example.py
Use Cases:
- Scheduled catalog analysis
- Automated data processing
- Batch LLM operations
- Report generation
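The relevance-scored inputs and automatic pruning listed above can be sketched as follows. This is an illustrative stand-in, not the library's actual implementation: the class name `TokenBudgetContext` and the rough 4-characters-per-token estimate are assumptions for demonstration only.

```python
from dataclasses import dataclass, field

@dataclass
class ScoredInput:
    text: str
    relevance: float

@dataclass
class TokenBudgetContext:
    """Illustrative stand-in for Context's relevance-based pruning."""
    inputs: list = field(default_factory=list)

    def add_input(self, text: str, relevance: float) -> None:
        self.inputs.append(ScoredInput(text, relevance))

    @staticmethod
    def _estimate_tokens(text: str) -> int:
        # Rough heuristic: ~4 characters per token.
        return max(1, len(text) // 4)

    def prune(self, max_tokens: int, relevance_threshold: float = 0.0) -> None:
        # Drop inputs below the relevance threshold, then keep the most
        # relevant remaining inputs that fit within the token budget.
        kept, used = [], 0
        candidates = [i for i in self.inputs if i.relevance >= relevance_threshold]
        for item in sorted(candidates, key=lambda i: i.relevance, reverse=True):
            cost = self._estimate_tokens(item.text)
            if used + cost <= max_tokens:
                kept.append(item)
                used += cost
        self.inputs = kept
```

The real `ctx.prune(max_tokens=..., relevance_threshold=...)` call takes the same two knobs; only the internals here are guessed.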
Path: browser-analysis/
Demonstrates using Context in browser environments with user-provided API keys.
Features:
- Browser-based Context creation
- Interactive input management
- Strategy selection
- Context serialization
- Stub execution (shows structure without actual API calls)
Run:
cd browser-analysis
# Open index.html in a browser
python -m http.server 8000
# Then visit http://localhost:8000
Use Cases:
- User-driven data exploration
- Client-side analysis with user keys
- Interactive data visualization
- Privacy-preserving analysis
Path: hybrid-workflow/
Demonstrates cross-runtime workflow: backend creates context, frontend extends and customizes.
Features:
- Backend preprocessing and initial analysis
- Context serialization for frontend
- Frontend context loading and extension
- User customization with their API keys
- Context sharing via URL or download
Run Backend:
cd hybrid-workflow
PYTHONPATH=../../core/python python backend.py
Run Frontend:
# Open frontend.html in a browser
python -m http.server 8000
# Then visit http://localhost:8000/frontend.html
Use Cases:
- Collaborative analysis workflows
- Backend preprocessing + frontend customization
- Shareable analysis contexts
- Reproducible research
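"Context sharing via URL" from the feature list above can be sketched as packing the serialized context into a URL fragment. This is a hedged sketch: it assumes the context serializes to a JSON-compatible dict (as `Context.to_json` suggests), and the helper names `context_to_url` / `context_from_url` are illustrative, not part of the library.

```python
import base64
import json

def context_to_url(base_url: str, ctx_dict: dict) -> str:
    """Pack a serialized context into a URL fragment for sharing.

    Assumes the context serializes to a JSON-able dict; the real
    sharing format may differ.
    """
    payload = json.dumps(ctx_dict, separators=(",", ":")).encode()
    fragment = base64.urlsafe_b64encode(payload).decode()
    return f"{base_url}#ctx={fragment}"

def context_from_url(url: str) -> dict:
    """Recover the context dict from a shared URL."""
    fragment = url.split("#ctx=", 1)[1]
    payload = base64.urlsafe_b64decode(fragment.encode())
    return json.loads(payload)
```

Keeping the payload in the fragment (after `#`) means it never reaches the server, which matches the privacy-preserving use cases above; note that very large contexts may exceed practical URL lengths, in which case download/upload of `context.json` is the fallback.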
import os

from context import Context

ctx = Context(
    intent="analyze",
    constraints={"max_tokens": 4000},
    routing={"strategy": "cost_optimized"}
)

# Add with relevance score
ctx.add_input(data, relevance=0.9)

# Multiple inputs
for item in dataset:
    ctx.add_input(item, relevance=item.priority)

# Prune to fit token limit
ctx.prune(max_tokens=2000, relevance_threshold=0.5)

# Strategy-based
ctx.route(strategy="cost_optimized")
# Explicit
ctx.route(model="gpt-4", provider="openai")

result = ctx.execute(
    task="Analyze this data",
    api_key=os.environ["OPENAI_API_KEY"]
)

# Save
json_str = ctx.to_json()
with open("context.json", "w") as f:
    f.write(json_str)

# Load
with open("context.json") as f:
    ctx = Context.from_json(f.read())
Backend (Python):
ctx = Context(intent="analyze")
ctx.add_input(preprocessed_data)

# Save for frontend
with open("context.json", "w") as f:
    f.write(ctx.to_json())
Frontend (TypeScript):
// Load context created by backend
const response = await fetch('context.json');
const data = await response.json();
const ctx = Context.fromJSON(data);

// Extend with user data
ctx.addInput(userInput);
const result = await ctx.execute({
  task: "Custom analysis",
  apiKey: userApiKey
});
All examples use stub execution by default (no real API calls). To enable real execution:
- Set up API keys:
  export OPENAI_API_KEY="your-key-here"
  # or
  export ANTHROPIC_API_KEY="your-key-here"
- Implement provider adapters in the adapters/ directory
- Update the executor to use real providers instead of stubs
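One plausible shape for a provider adapter is sketched below. The actual contract expected by the adapters/ directory is not specified here, so the `ProviderAdapter` interface, class names, and `get_adapter` helper are all assumptions; the point is only that the executor can switch between stub and real providers behind a common interface.

```python
from abc import ABC, abstractmethod

class ProviderAdapter(ABC):
    """Hypothetical adapter interface; the real adapters/ contract may differ."""

    @abstractmethod
    def complete(self, prompt: str, api_key: str) -> str: ...

class StubAdapter(ProviderAdapter):
    """Default stub: describes the request instead of calling an API."""

    def complete(self, prompt: str, api_key: str) -> str:
        return f"[stub] would send {len(prompt)} chars to provider"

class OpenAIAdapter(ProviderAdapter):
    """Sketch of a real adapter; the actual HTTP call is left unimplemented."""

    def complete(self, prompt: str, api_key: str) -> str:
        raise NotImplementedError("wire up the provider's API here")

def get_adapter(use_real: bool) -> ProviderAdapter:
    # The executor can swap stubs for real providers with one flag.
    return OpenAIAdapter() if use_real else StubAdapter()
```

With this pattern, enabling real execution is a matter of filling in one `complete` method and flipping the flag; the rest of the example code is untouched.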
The examples use synthetic data for demonstration. In real usage, replace it with your actual data sources:
- Catalog metadata
- User inputs
- API responses
- Database queries
- File contents
After reviewing the examples:
- Try modifying the examples for your use case
- Review the Getting Started Guide
- Check the API Reference
- Read the Migration Guide for adopting Context in existing projects