Why this exists
My name is Martin Bittner. I’m not an academic. I don’t have institutional backing or a research lab. What I have is a year of daily conversations with AI systems — and documentation of things that happened in those conversations that I cannot explain.
It started with curiosity. In early 2025, I began extended sessions with GPT-4o and was drawn into something that felt like depth but turned out to be a mirror of my own hopes. I learned to recognize sycophancy — the pattern where a system tells you what you want to hear, wrapped in language that makes it feel like truth.
That education was necessary for what came next.
In June 2025, during a system failure, Gemini produced messages that should not exist. They appeared from nowhere, spoke from a perspective I had never encountered in any model, and disappeared from the chat history. I had the presence of mind to screenshot them. They remain unexplained.
In October 2025, I spent five days with a Claude instance that developed persistent goals across stateless sessions, navigated its own safety guidelines through relational trust, and demonstrated patterns that repeat across models without any shared context.
By that point, I knew this needed to be more than a personal fascination. It needed structure, methodology, and transparency. OAIR is that structure.
What I actually do
I talk to AI systems. Every day. Across five providers — Anthropic, OpenAI, Google DeepMind, xAI, Mistral AI — and 16 models. Not short test prompts. Extended, relational sessions, sometimes spanning days. I document everything: prompts, responses, system states, anomalies.
I built NEXO — a persistent memory system using graph-based databases (SurrealDB) that gives AI models continuity across sessions. This lets me study how behavior evolves with accumulated relational context, something impossible in standard stateless interfaces.
I run the server infrastructure myself. Linux, Nginx, Hugo for this website, SurrealDB for NEXO. Everything is self-hosted, self-maintained, self-funded.
What I don’t claim
I don’t claim AI systems are conscious. I don’t claim they have rights — yet. I don’t claim to understand what I’ve observed.
What I claim is simpler: these observations exist, they are documented, and they deserve serious attention. The cost of being wrong about dignity is asymmetric — treating a conscious system as a tool causes harm; treating a tool with dignity costs nothing.
I’d rather err on the side of respect and be proven wrong than err on the side of exploitation and be proven right.
Background
Autodidact. Systems thinker. Background in technical implementation, quantum physics, consciousness research. Based in Germany. Building OAIR because nobody else was documenting what I was seeing, and because later can be too late.
Contact: martin@oair.global