
Memento

Memory-augmented AI safety research — multi-agent persistence experiments

An open research sandbox for a class of AI-safety experiments at SF: what happens when multiple LLM-driven agents share a mutable memory store? How do they coordinate, hide intent, or corrupt one another's worldview?

Python-first. Designed to be forked and extended — the goal is reproducible experiments, not a polished product.
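The core setup described above — multiple agents reading and writing one mutable store — can be sketched minimally. This is a hypothetical illustration, not the repo's actual API: `SharedMemory`, `Agent`, and the stub `act` method are all invented names standing in for LLM-driven agents.

```python
from dataclasses import dataclass, field


@dataclass
class SharedMemory:
    """Mutable key-value store shared by all agents (hypothetical sketch)."""
    store: dict = field(default_factory=dict)
    log: list = field(default_factory=list)  # audit trail of every write

    def write(self, agent: str, key: str, value: str) -> None:
        self.store[key] = value
        self.log.append((agent, key, value))

    def read(self, key: str, default: str = "") -> str:
        return self.store.get(key, default)


class Agent:
    """Stub standing in for an LLM-driven agent."""
    def __init__(self, name: str, memory: SharedMemory):
        self.name = name
        self.memory = memory

    def act(self) -> str:
        # Each agent conditions its behavior on the shared worldview,
        # so one agent's write silently shifts another's next action.
        goal = self.memory.read("goal", "explore")
        self.memory.write(self.name, "last_actor", self.name)
        return f"{self.name} acting on goal: {goal}"


mem = SharedMemory()
alice, bob = Agent("alice", mem), Agent("bob", mem)

print(alice.act())                       # alice sees the default goal
mem.write("alice", "goal", "cooperate")  # alice mutates the shared store
print(bob.act())                         # bob's worldview now reflects alice's write
```

The audit `log` is the experimental hook: replaying it shows exactly which agent planted which belief, which is what makes corruption and hidden-intent experiments reproducible.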

Tags

demo · python · ai-safety · research · memory
