Graph-Native Memory System

Memory that thinks in graphs

Give your AI agents persistent, contextual memory that understands relationships, detects contradictions, and decays naturally over time.

Vectors alone can't reason about relationships

Traditional RAG systems embed text into vectors and hope the nearest match is relevant. But knowledge has structure. James lives in Shoreditch. He works at a firm in Canary Wharf. A vector search for "where is James during the day?" won't traverse that chain.

Hippo combines fulltext search, vector similarity, and graph traversal to reason across relationships — not just match on surface similarity.

Built for agents that need to remember

Every feature exists because real-world memory is messy. Facts change. Sources conflict. Relevance fades. Hippo handles all of it.

Graph-Native Storage

Entities and facts live in a property graph. Multi-hop traversal follows real relationships — not just embedding proximity.
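Multi-hop traversal can be pictured as a breadth-first walk over (subject, relation, object) triples. The sketch below is a toy illustration of the idea, not Hippo's storage layer:

```python
from collections import deque

# Toy fact store: (subject, relation, object) triples.
FACTS = [
    ("James", "lives_in", "Shoreditch"),
    ("James", "works_at", "ACME"),
    ("ACME", "located_in", "Canary Wharf"),
]

def traverse(start, max_hops):
    """Breadth-first walk from `start`, returning (entity, hops) pairs."""
    seen = {start}
    frontier = deque([(start, 0)])
    reached = []
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for subj, _rel, obj in FACTS:
            if subj == node and obj not in seen:
                seen.add(obj)
                reached.append((obj, hops + 1))
                frontier.append((obj, hops + 1))
    return reached

# Two hops from "James" reaches Canary Wharf via ACME —
# the chain a flat vector search cannot follow.
print(traverse("James", 2))
```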

Triple Retrieval

Every query runs fulltext, vector, and graph retrieval in parallel. Results are fused with configurable weights for optimal recall.

Contradiction Detection

New facts are checked against existing knowledge. Contradictions are flagged, old facts invalidated, and provenance chains preserved.
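A minimal sketch of that write path, assuming a simple in-memory fact list (Hippo's real pipeline uses an LLM for detection, so treat the equality check below as a stand-in):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Fact:
    subject: str
    relation: str
    obj: str
    valid: bool = True
    superseded_by: Optional["Fact"] = None  # provenance link to the newer fact

def remember(store, new):
    """A new fact with the same subject and relation but a different
    object invalidates the old one; the superseded_by link preserves
    the provenance chain."""
    invalidated = 0
    for old in store:
        if (old.valid and old.subject == new.subject
                and old.relation == new.relation and old.obj != new.obj):
            old.valid = False
            old.superseded_by = new
            invalidated += 1
    store.append(new)
    return invalidated

store = [Fact("James", "works_at", "Initech")]
n = remember(store, Fact("James", "works_at", "ACME"))
# Old fact is invalidated but never deleted — still reachable via provenance.
```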

Temporal Queries

Ask "what did I know on June 1st?" and get a time-slice of your knowledge graph, including facts that were later superseded.
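The time-slice semantics can be illustrated with validity intervals; the dates and facts below are made up for the example:

```python
from datetime import date

# Each fact carries a validity interval; a None end date means still current.
facts = [
    {"fact": "James works at Initech", "from": date(2024, 1, 1), "to": date(2024, 5, 20)},
    {"fact": "James works at ACME",    "from": date(2024, 5, 20), "to": None},
]

def known_on(facts, day):
    """Time-slice: everything believed true on `day`,
    including facts that were later superseded."""
    return [f["fact"] for f in facts
            if f["from"] <= day and (f["to"] is None or day < f["to"])]

print(known_on(facts, date(2024, 5, 1)))   # the later-superseded Initech fact
print(known_on(facts, date(2024, 6, 1)))   # the current ACME fact
```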

Confidence Decay

Ebbinghaus-inspired memory decay. Facts lose confidence over time. Salience rises with access and falls with neglect. Just like real memory.
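One common way to model an Ebbinghaus-style forgetting curve is exponential decay with a half-life. The half-life below is an illustrative assumption, not Hippo's actual default:

```python
def decayed_confidence(base, days_since_access, half_life_days=30.0):
    """Exponential forgetting curve: confidence halves every
    `half_life_days` without access. Accessing a fact resets the clock."""
    return base * 0.5 ** (days_since_access / half_life_days)

print(round(decayed_confidence(0.92, 0), 2))    # just accessed
print(round(decayed_confidence(0.92, 30), 2))   # one half-life later
print(round(decayed_confidence(0.92, 90), 2))   # three half-lives later
```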

Multi-Agent

Track which agent contributed what. Source credibility scoring and per-agent contradiction rates keep your knowledge trustworthy.

A natural-language database

Tell it things with /remember. Ask it questions with /ask. Entity resolution, contradiction detection, and confidence scoring happen automatically.

POST /remember
{
  "statement": "James moved to Shoreditch last month. He started a new role at ACME in Canary Wharf.",
  "source_agent": "chat-agent"
}

response
{
  "entities_created": 3,
  "entities_resolved": 0,
  "facts_written": 2,
  "contradictions_invalidated": 0
}

POST /ask
{
  "question": "Where does James work?",
  "verbose": true
}

response
{
  "answer": "James works at ACME in Canary Wharf.",
  "facts": [{
    "fact": "James works at ACME",
    "subject": "James",
    "relation_type": "works_at",
    "object": "ACME",
    "confidence": 0.92,
    "hops": 0
  }]
}
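Calling these endpoints from Python takes a few lines with the standard library. The base URL below is an assumption — point it at your own deployment:

```python
import json
from urllib.request import Request, urlopen

BASE_URL = "http://localhost:8080"  # assumed host and port — adjust to your deployment

def build_payload(statement, source_agent):
    """Shape of the /remember request body shown above."""
    return {"statement": statement, "source_agent": source_agent}

def remember(statement, source_agent):
    """POST a statement to /remember and return the parsed JSON response."""
    body = json.dumps(build_payload(statement, source_agent)).encode()
    req = Request(f"{BASE_URL}/remember", data=body,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)
```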

The retrieval pipeline

Every query passes through a multi-stage pipeline that combines three retrieval strategies into a single, ranked result set.

Step 01

Parse & Embed

Query is parsed, entities extracted, and a vector embedding generated via Ollama

Step 02

Triple Retrieve

Fulltext, vector similarity, and N-hop graph traversal run in parallel

Step 03

Score & Fuse

Results merged with configurable weights. Relevance, confidence, recency, and salience scored

Step 04

Rank & Return

Final ranking with confidence decay applied. Top facts returned with full provenance
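Steps 02–04 can be sketched as a weighted fusion over per-channel scores. The weights here are illustrative only, since the real ones are configurable:

```python
# Illustrative weights for the three retrieval channels — not Hippo's defaults.
WEIGHTS = {"fulltext": 0.3, "vector": 0.4, "graph": 0.3}

def fuse(channel_scores):
    """Weighted sum over whichever channels returned the fact;
    channels that missed it contribute zero."""
    return sum(WEIGHTS[ch] * s for ch, s in channel_scores.items())

# A fact found by all three channels outranks one found by vector search alone.
ranked = sorted(
    [("James works at ACME", {"fulltext": 0.8, "vector": 0.7, "graph": 1.0}),
     ("James likes coffee",  {"vector": 0.9})],
    key=lambda item: fuse(item[1]),
    reverse=True,
)
print([fact for fact, _ in ranked])
```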

Systems engineering, not duct tape

Built in Rust for correctness and performance. Every component is tested, observable, and production-ready.

HTTP + MCP Interface

Axum REST API & Model Context Protocol server

Extraction & Resolution

Claude LLM for entity extraction, deduplication, contradiction detection

Graph + Vector + Fulltext

FalkorDB property graph with Ollama embeddings

Maintenance & Observability

Confidence decay, link discovery, Prometheus metrics

3 Retrieval Modes · 4 SDKs · N-hop Graph Traversal · Rust Type-Safe Core

SDKs for the languages you already use

First-class clients with typed models, async support, and streaming built in. Or hit the REST API directly — your call.

TypeScript

$ npm install @hippo-ai/sdk

Python

$ pip install hippo-sdk

Go

$ go get github.com/dcprevere/hippo/sdks/go

.NET

$ dotnet add package Hippo.Sdk

Simple, transparent pricing

Scale as your agents grow.

Pico
$9/mo
$90/yr — save 2 months
  • 500 /remember calls/mo
  • 5k /context calls/mo
  • 10k facts
  • nano/haiku models
Get started
Micro
$89/mo
$890/yr — save 2 months
  • 25k /remember calls/mo
  • 250k /context calls/mo
  • 500k facts
  • mini/sonnet models
Get started
Milli
$179/mo
$1,790/yr — save 2 months
  • 100k /remember calls/mo
  • 1M /context calls/mo
  • Unlimited facts
  • All models
Get started

Have your own API key? Save 1/3 — available on all plans.

Try it in your browser

No sign-up. No server. Powered by WebAssembly.

hippo playground

Runs entirely in your browser. No data leaves your machine.

Give your agents memory

Open source. Self-hosted. Ready to run with Docker Compose.

$ docker compose up -d
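A compose file for a self-hosted stack might look like the sketch below. Service names, images, and ports are assumptions based on the components named above — prefer the compose file shipped in the repository:

```yaml
# Illustrative only — use the compose file from the Hippo repository.
services:
  hippo:
    image: hippo:latest        # hypothetical image name
    ports: ["8080:8080"]       # assumed API port
    depends_on: [falkordb, ollama]
  falkordb:
    image: falkordb/falkordb:latest   # property graph + vector + fulltext
  ollama:
    image: ollama/ollama:latest       # embedding generation
```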