
Commit 7269c69

Added README and cleaned up some bugs
1 parent 1c6e7a3 commit 7269c69

10 files changed

Lines changed: 633 additions & 267 deletions

File tree

.fttignore

Lines changed: 2 additions & 1 deletion
@@ -1,2 +1,3 @@
 .git/
-node_modules/
+node_modules/
+dist/

CtxIQ.png

392 KB

README.md

Lines changed: 182 additions & 41 deletions
@@ -6,60 +6,201 @@
 [![Status](https://img.shields.io/badge/development-active-brightgreen.svg)](#)
 [![Status](https://img.shields.io/badge/status-WIP-orange.svg)](#)

-> Context and memory orchestration toolkit for building smarter AI assistants in TypeScript.
+Context and memory orchestration toolkit for building smarter AI assistants in TypeScript.

 ---

-## 🚧 Status
+> [!WARNING]
+> **This project is under active development.**
+> Expect frequent updates and breaking changes until the first stable release.
+> Follow for progress or contribute!

-**This project is under active development.**
-Expect frequent updates and breaking changes until the first stable release.
-Follow for progress or contribute!
+![CtxIQ Flow](./CtxIQ.png)

-```mermaid
-flowchart TD
-    subgraph Core
-        CM["ConversationManager"]
-        CS["ConversationSession (one session)"]
-        TM["TokenManager (ApproxTokenCounter / optional tiktoken)"]
-        BP["buildPrompt() / summaryFn (async)"]
-        LC["LLMCaller (MockLLMCaller / future adapters)"]
-        PM["Prompt Messages"]
-        AR["Assistant Reply"]
-    end
+## 🧠 Overview / Motivation

-    subgraph Persistence["Optional Persistence Adapters"]
-        LS["LocalStorageAdapter (browser)"]
-    end
+Building AI assistants isn’t just about calling a model—it’s about **managing context**.
+Every real-world assistant must juggle ongoing conversations, control token budgets, and decide when to summarize or trim history.
+Without a structured toolkit, developers often reinvent fragile, one-off solutions for memory, persistence, and prompt construction.

-    %% Main orchestration flow
-    CM -->|creates loads lists sessions| CS
-    CS -->|stores messages summaries| TM
-    CS -->|calls| BP
-    BP -->|produces| PM
-    PM -->|count tokens| TM
-    PM -->|sent to| LC
-    LC -->|returns| AR
-    AR -->|added to session| CS
+**CtxIQ** solves this by providing a **lightweight, TypeScript-first toolkit** for context and memory orchestration.
+It gives you composable building blocks—session management, token counting, pluggable persistence, and an abstract LLM caller—so you can focus on _what_ your assistant does instead of _how_ it keeps track of the past.

-    %% Persistence controlled by Manager
-    CM -->|save/load session toJSON fromJSON| LS
-    CS -->|toJSON fromJSON| CM
+Designed to be **framework-agnostic** and **fully typed**, CtxIQ works in Node.js, the browser, or serverless environments, making it a reliable foundation for chatbots, copilots, and any multi-turn AI experience.

-    %% Notes with real line breaks
-    TM_note["<p align='left'><b>TokenManager</b> used to:<br/>- estimate prompt tokens<br/>- reserve space for summaries<br/>- optionally use tiktoken (lazy-loaded)</p>"]:::note
+---
+
+## ✨ Features
+
+- **Conversation Management** – Create, load, and persist multiple conversation sessions with a single manager.
+- **Token Budgeting** – Estimate prompt tokens, reserve space for summaries, and prevent model overflow with pluggable counters.
+- **Prompt Building** – Assemble clean, model-ready prompt messages with optional automatic summarization.
+- **Extensible LLM Caller** – Swap in your own LLM provider (OpenAI, Groq, etc.) or use the included mock caller for testing.
+- **Pluggable Persistence** – Save and restore sessions using adapters (e.g. browser `LocalStorage`) or roll your own.
+- **React Hooks** – `useConversation` and `useSession` provide easy integration in React apps.
+- **TypeScript First** – Fully typed API with clear interfaces and autocomplete support for a smooth developer experience.
+- **Framework Agnostic** – Works in Node.js, browsers, and serverless runtimes without extra dependencies.
+
+---
+
+## ⚡ Quick Start
+
+Install from npm:
+
+```bash
+npm install ctxiq
+# or
+yarn add ctxiq
+```
+
+Create a simple conversation session:
+
+```typescript
+import { ConversationManager } from "ctxiq";
+
+// 1️⃣ Initialize a manager (optionally pass configs like token budgets)
+const manager = new ConversationManager();
+
+// 2️⃣ Create a session
+const session = manager.createSession("demo-session");
+
+// 3️⃣ Add a user message (ConversationSession will assign an id if you omit it)
+session.addMessage({
+  role: "user",
+  content: "Hello, CtxIQ!",
+  // tokens are optional; the session will compute them if missing
+});
+
+// 4️⃣ Build a prompt for your LLM (buildPrompt is async if you provide an async summarizer)
+const promptMessages = await session.buildPrompt();
+
+// 5️⃣ Optionally format for your provider (if you provided an llmFormatter, use getLLMMessages)
+const providerInput = await session.getLLMMessages(); // returns provider-shaped array or default {role, content}[]
+
+// 6️⃣ Send to your LLM of choice (pseudo-code)
+const reply = await myLLM.call(providerInput); // your adapter handles provider specifics
+
+// 7️⃣ Store the assistant’s reply back into the session
+session.addMessage({
+  role: "assistant",
+  content: typeof reply === "string" ? reply : JSON.stringify(reply), // adapt as needed
+});
+
+console.log(session.toJSON());
+```
+
+---
+
+## 🧩 Core Concepts
+
+CtxIQ is built from small, composable pieces that work together:
+
+- **ConversationManager** – The top-level orchestrator.
+  Creates, loads, lists, and deletes sessions. Handles global configuration like token budgets and storage adapters.
+
+- **ConversationSession** – Represents a single ongoing conversation.
+  Stores messages, builds prompts, and triggers optional summarization.
+
+- **TokenManager** – Estimates and tracks token usage.
+  Supports pluggable counters (approximation or tiktoken) to prevent overflow and reserve space for summaries.
+
+- **LLMCaller** – Abstract interface for sending prompts to an LLM.
+  Includes a `MockLLMCaller` for tests and demos; real adapters (OpenAI, Groq, etc.) can be added as needed.
+
+- **Persistence Adapters** – Optional storage layer (e.g. `LocalStorageAdapter`) for saving and restoring sessions across page loads or server restarts.
+
+- **Prompt Builder** – Generates the final model-ready message array, automatically including summaries or trimming history based on token limits.
+
+These components are **decoupled**: you can use only what you need, swap implementations, or extend with your own logic.
+
+---
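
The token-budgeting idea behind `TokenManager` and `ApproxTokenCounter` can be sketched in isolation. The snippet below is a hypothetical standalone version, not the library's actual code; the roughly-four-characters-per-token ratio is a common heuristic and an assumption here:

```typescript
// Minimal sketch of a heuristic token counter in the spirit of
// ApproxTokenCounter. The 4-characters-per-token ratio is an assumption,
// not taken from this commit.
interface TokenCounter {
  count(text: string): number;
}

class ApproxCounter implements TokenCounter {
  // Roughly 4 characters per token is a common rule of thumb for English text.
  count(text: string): number {
    return Math.ceil(text.length / 4);
  }
}

// A budget check in the style of TokenManager: does a candidate prompt fit
// once space is reserved for a summary?
function fitsBudget(
  counter: TokenCounter,
  messages: { role: string; content: string }[],
  maxTokens: number,
  reservedForSummary = 0
): boolean {
  const used = messages.reduce((sum, m) => sum + counter.count(m.content), 0);
  return used + reservedForSummary <= maxTokens;
}

const counter = new ApproxCounter();
console.log(counter.count("Hello, CtxIQ!")); // 13 chars -> 4 tokens
console.log(fitsBudget(counter, [{ role: "user", content: "Hello, CtxIQ!" }], 10, 2)); // true
```

Swapping in an exact counter (e.g. one backed by `tiktoken`) only requires another `TokenCounter` implementation; the budget logic stays unchanged.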
+
+## 🔌 Adapters & Extensibility
+
+CtxIQ is designed to be **flexible and extensible**, allowing you to swap or extend functionality without changing core logic:
+
+- **Storage Adapters** – Persist session data anywhere:
+
+  - `LocalStorageAdapter` (browser-ready)
+  - `IndexedDBAdaptor` (planned)
+  - Custom adapters (implement the `StorageAdaptor` interface)
+
+- **LLM Adapters** – Connect to any LLM provider:
+
+  - `MockLLMCaller` (built-in for testing)
+  - OpenAI, Groq, or your custom provider
+  - Supports streaming responses and generic input/output types
+
+- **Token Managers** – Plug in different token-counting strategies:
+
+  - `ApproxTokenCounter` (fast, heuristic-based)
+  - Optional `tiktoken` integration for exact counting
+
+- **Formatter Functions** – Each LLM adapter can export helper formatters to convert `Message[]` into provider-specific input.
+
+With this design, you can **mix and match adapters, token managers, and LLMs** to suit your application needs.
+
+---
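
A custom storage adapter only has to satisfy the `StorageAdaptor` contract. The interface shape below is inferred for illustration and may not match the library's actual definition; the in-memory implementation is a minimal sketch, useful in tests or server-side code where `localStorage` is unavailable:

```typescript
// Hypothetical shape of the StorageAdaptor contract; the real interface
// shipped by ctxiq may differ. Sessions are stored as serialized JSON strings.
interface StorageAdaptor {
  save(sessionId: string, data: string): void;
  load(sessionId: string): string | null;
  delete(sessionId: string): void;
}

// A minimal in-memory adapter: same contract, no browser dependency.
class MemoryStorageAdaptor implements StorageAdaptor {
  private store = new Map<string, string>();

  save(sessionId: string, data: string): void {
    this.store.set(sessionId, data);
  }

  load(sessionId: string): string | null {
    return this.store.get(sessionId) ?? null;
  }

  delete(sessionId: string): void {
    this.store.delete(sessionId);
  }
}

const adaptor = new MemoryStorageAdaptor();
adaptor.save("demo", JSON.stringify({ id: "demo", messages: [] }));
console.log(adaptor.load("demo")); // {"id":"demo","messages":[]}
```

Because the contract is so small, the same pattern extends naturally to an `IndexedDBAdaptor` or any server-side store.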
+
+## 📚 Examples / Recipes
+
+Here are some common patterns you can implement with CtxIQ:
+
+### 1️⃣ Basic conversation flow

-    LC_note["<p align='left'><b>LLMCaller</b> is pluggable:<br/>- MockLLMCaller (tests & demo)<br/>- adapters (OpenAI/Groq/etc) to be added later<br/>- summaryFn runs via LLMCaller (async)</p>"]:::note
+```ts
+const manager = new ConversationManager();
+const session = manager.createSession("example");

-    CM_note["<p align='left'><b>ConversationManager</b> responsibilities:<br/>- create/load/save/delete sessions<br/>- choose storage adapter<br/>- provide global config (token budgets, defaults)<br/>- wire TokenManager and LLMCaller instances</p>"]:::note
+// add a user message
+session.addMessage({ role: "user", content: "Hello!" });

-    TM -.-> TM_note
-    LC -.-> LC_note
-    CM -.-> CM_note
+// build prompt (async if the summarizer is async)
+const prompt = await session.buildPrompt();

-    classDef note fill:#f9f,stroke:#333,stroke-width:1px,color:#333;
+// format for provider or get default LLM messages
+const llmInput = await session.getLLMMessages();

+// send to a provider (adapter-specific)
+const providerResp = await mockLLM.call(llmInput);
+
+// store assistant reply
+session.addMessage({
+  role: "assistant",
+  content:
+    typeof providerResp === "string"
+      ? providerResp
+      : JSON.stringify(providerResp),
+});
+```
+
+### 2️⃣ Persisting sessions in the browser
+
+```ts
+import { LocalStorageAdapter, attachStorageAdaptor } from "ctxiq";
+
+const manager = new ConversationManager();
+const storage = new LocalStorageAdapter();
+attachStorageAdaptor(manager, storage);
+
+const session = manager.createSession("persisted-demo");
 ```

-free for dev
-open source alternatives
+### 3️⃣ Using a custom LLM adapter
+
+- Implement the `BaseLLMCaller` interface (`call<TIn, TOut>` and optional `stream`)
+- Export a formatter function to convert `Message[]` into provider input
+- Inject your adapter into `useSession` or `ConversationSession` directly
+
+---
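
The three adapter steps above can be sketched end to end. The interface and message shapes below are assumptions inferred from the recipe, not the library's actual definitions:

```typescript
// Hypothetical shapes inferred from the recipe; the real BaseLLMCaller and
// Message types in ctxiq may differ.
interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

interface BaseLLMCallerLike<TIn, TOut> {
  call(input: TIn): Promise<TOut>;
}

// A formatter in the style described above: Message[] -> provider input.
function toProviderInput(messages: Message[]): { role: string; content: string }[] {
  return messages.map(({ role, content }) => ({ role, content }));
}

// An "echo" caller, handy as a stand-in during tests: it replies with the
// content of the most recent user message.
class EchoLLMCaller implements BaseLLMCallerLike<Message[], string> {
  async call(messages: Message[]): Promise<string> {
    const lastUser = [...messages].reverse().find((m) => m.role === "user");
    return lastUser ? `echo: ${lastUser.content}` : "echo: (no user message)";
  }
}

const caller = new EchoLLMCaller();
caller.call([{ role: "user", content: "Hello!" }]).then(console.log); // logs "echo: Hello!"
```

A real adapter would replace the body of `call` with a provider SDK request, while the session code that produces `Message[]` stays untouched.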
+
+## 📄 License
+
+This project is licensed under the [MIT License](./LICENSE).
+
+---
+
+Thank you for checking out **CtxIQ**!
+Contributions, feedback, and suggestions are welcome — feel free to open issues or pull requests.
+Stay tuned for updates as we continue building features and improving the library.
