Meet Cognis
Your AI’s memory upgrade.
Cognis is Lyzr’s production-grade memory layer for AI agents — giving every agent the ability to recall what matters, update knowledge on the fly, and stay consistent across every conversation, session, and deployment.
```python
# pip install lyzr-cognis
from cognis import Cognis

m = Cognis(owner_id="user_1")
messages = [{"role": "user", "content": "I love sushi and I work at Google"}]
m.add(messages)

# Semantic + keyword hybrid search
results = m.search("What are the user's food preferences?")
context = m.get_context(messages)
```
Memory that actually works for agents
Most AI agents forget everything after a session ends. Cognis fixes that. It’s a lightweight Python library that gives any AI agent persistent memory — extracting facts from conversations, storing them intelligently, and surfacing the right context at the right moment. Everything runs in-process, on the user’s machine — no cloud, no setup, no infrastructure.
Ingest
- User + assistant turns
- Any conversation thread
- Structured chat format

Extract
- Pulls atomic facts
- Auto-categorizes
- ADD / UPDATE / DELETE
- Name-aware context

Example facts
- [identity] name is Alice
- [work] works at Google
- [preference] likes sushi

Store
- SQLite (docs + FTS5)
- Qdrant 256D shortlist
- Qdrant 768D rerank
- All file-backed
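The two-stage pipeline above (a keyword index produces a fast shortlist, which a vector pass then reranks) can be sketched in plain Python. This is an illustrative toy, not Cognis internals: the word-overlap score stands in for FTS5/BM25, and the tiny 3-dimensional vectors stand in for the 256D/768D Qdrant embeddings.

```python
import math

# Toy memory store: each fact carries its text and a stand-in embedding.
FACTS = [
    {"text": "[preference] likes sushi", "vec": [0.9, 0.1, 0.0]},
    {"text": "[work] works at Google",   "vec": [0.1, 0.9, 0.0]},
    {"text": "[identity] name is Alice", "vec": [0.0, 0.2, 0.9]},
]

def keyword_score(query: str, text: str) -> int:
    """Stage 1: cheap lexical overlap (stands in for BM25 / FTS5)."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def cosine(a, b) -> float:
    """Stage 2: vector similarity (stands in for the rerank embedding)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def hybrid_search(query: str, query_vec, k: int = 2):
    # Shortlist by keyword hits; fall back to all facts if nothing matches...
    shortlist = [f for f in FACTS if keyword_score(query, f["text"]) > 0] or FACTS
    # ...then rerank the shortlist by vector similarity.
    ranked = sorted(shortlist, key=lambda f: cosine(query_vec, f["vec"]), reverse=True)
    return [f["text"] for f in ranked[:k]]

print(hybrid_search("what food does the user like sushi", [0.8, 0.2, 0.1]))
```

The fallback to a pure vector pass when no keyword matches is one plausible way the two methods combine; real hybrid search typically blends both scores.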
Short-term context, long-term facts, and session-scoped threads — all unified
Short-Term Memory
Raw conversation messages scoped to the current session. Gives your agent full context of the ongoing thread without cross-contaminating other sessions. Perfect for multi-turn reasoning.
Long-Term Memory
Everything the agent has ever learned about a user — preferences, identity, work, hobbies, past decisions. Persists across all sessions, surfaced via semantic + keyword search when relevant.
Session Memory
Facts scoped by owner_id + agent_id + session_id. Multiple agents can serve the same user, each building its own memory context while sharing the user’s core facts. Clean, no cross-contamination.
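Conceptually, this scoping works like a composite key. The toy sketch below models it with a plain dict keyed by (owner_id, agent_id, session_id); it is illustrative only, not the Cognis storage layer.

```python
# Toy model of three-level scoping. Illustrative only -- Cognis stores
# facts in SQLite/Qdrant, not a dict, but the isolation idea is the same.
store: dict[tuple[str, str, str], list[str]] = {}

def remember(owner_id: str, agent_id: str, session_id: str, fact: str) -> None:
    store.setdefault((owner_id, agent_id, session_id), []).append(fact)

def recall(owner_id: str, agent_id: str, session_id: str) -> list[str]:
    # An agent sees only its own session's facts: no cross-contamination.
    return store.get((owner_id, agent_id, session_id), [])

remember("user_1", "support_bot", "sess_a", "prefers email replies")
remember("user_1", "sales_bot",   "sess_b", "asked about pricing")

print(recall("user_1", "support_bot", "sess_a"))  # ['prefers email replies']
print(recall("user_1", "sales_bot",   "sess_a"))  # [] -- different agent and session
```

Shared core facts (identity, preferences) would live under the owner-level scope, while each agent-session pair accumulates its own working memory.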
From zero to memory in under a minute
Install the package, initialize with your user ID, and start adding conversations. Cognis handles everything else — fact extraction, storage, deduplication, and retrieval.
```python
# Step 1: install from the terminal
# pip install lyzr-cognis

from cognis import Cognis                                                  # line 1
m = Cognis(owner_id="user_1")                                              # line 2: initialize
m.add([{"role": "user", "content": "I love sushi and I work at Google"}])  # line 3: add a conversation

# Retrieve relevant context for the current conversation
messages = [{"role": "user", "content": "What should I eat tonight?"}]
context = m.get_context(messages)
results = m.search("What are the user's food preferences?")

# Full API: just 6 methods
m.add(messages)          # extract & store facts from a conversation
m.get(memory_id)         # retrieve a specific memory
m.search(query)          # semantic + keyword hybrid search
m.delete(memory_id)      # remove a memory
m.get_context(messages)  # formatted context string for your prompt
m.clear()                # wipe all memories for this owner
```
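The typical use of `get_context` is to inject the returned string into a system prompt before calling your LLM. In the sketch below, a stub stands in for `m.get_context(messages)` so the snippet is self-contained; the bracketed fact format mirrors the examples above and is assumed, not guaranteed.

```python
# Sketch: wiring retrieved memory into a chat prompt.
# `get_context` is a stub standing in for m.get_context(messages).
def get_context(messages: list[dict]) -> str:
    return "[preference] likes sushi\n[work] works at Google"

def build_prompt(messages: list[dict]) -> list[dict]:
    context = get_context(messages)
    system = (
        "You are a helpful assistant.\n"
        "Known facts about the user:\n" + context
    )
    # Prepend the memory-augmented system message to the conversation.
    return [{"role": "system", "content": system}, *messages]

prompt = build_prompt([{"role": "user", "content": "Where should I go for lunch?"}])
print(prompt[0]["content"])
```

From here, `prompt` is ready to pass to whichever chat-completion API your agent uses.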
Benchmark Performance
Cognis vs. SuperMemory, Zep/Graphiti, Full-Context, Mem0, and OpenAI — on LongMemEval & LoCoMo. All models tested with GPT-4.1. Higher is better.
Try Cognis in Agent Studio
See the benchmarks live. Cognis is natively integrated as the default memory provider — available the moment you create an agent. No configuration needed.
How Cognis Stacks Up
Feature comparison against leading memory providers — Mem0, Zep/Graphiti, and SuperMemory — across infrastructure, speed, accuracy, and privacy.
| Feature | Lyzr Cognis | Mem0 | Zep / Graphiti | SuperMemory |
|---|---|---|---|---|
| Infrastructure Required | Zero — pip install | Docker / Cloud | Docker required | Cloud SaaS only |
| Search Method | Hybrid (Vector + BM25) | Vector only | Graph + Vector | Vector only |
| Temporal Reasoning | Built-in (84.2%) | Limited | Partial | Not supported |
| Fact Update (ADD/UPDATE/DELETE) | Automatic via LLM | Yes | Partial | Add only |
| Open Source | MIT License | Apache 2.0 | Open Source | Closed |
| Session Scoping | owner + agent + session | user + agent | user-scoped | user-scoped |
| Lyzr Studio Integration | Native — Default provider | No | No | No |
| Data Privacy | 100% local / on-prem | Cloud by default | Configurable | Cloud only |
Memory built into your entire Agent Development Lifecycle
Cognis is natively integrated into Lyzr Agent Studio as the default memory provider — available the moment you create an agent. No configuration, no API keys to wire up separately. It supports the full ADLC: from building and testing agents to deploying and monitoring them in production.
You also have the flexibility to switch to Lyzr Memory (balanced, short + long term) or bring your own AWS account with Amazon Bedrock AgentCore Memory — all from the same interface.
Give Your Agents a Memory Upgrade
3 lines of Python. No Docker. No servers. Open source under MIT. Your agents will finally remember what matters.
MIT License · 100% local · pip install lyzr-cognis · Zero Docker · Zero Servers