🧠
hallucinate.cc
Notes, experiments, and side projects on AI agents and LLMs.
This is a small personal corner of the internet where I write up things I learn while building with large language models — agent loops, tool use, browser automation, retrieval-augmented generation, and the very real failure modes that come with all of it.
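To give a feel for the kind of thing I mean by "agent loops" and "tool use", here is a minimal Python sketch, not code from any particular post. The names (`call_model`, `run_agent`, `get_time`, `TOOLS`) are made up for the example, and `call_model` is stubbed out so the snippet runs on its own; a real version would call whatever chat-completion API you happen to use.

```python
# Minimal sketch of an agent loop: the model either requests a tool or answers.
# `call_model` is a stand-in for a real LLM client, stubbed so this runs as-is.
from datetime import datetime, timezone


def get_time(_: dict) -> str:
    # Example tool: report the current UTC time.
    return datetime.now(timezone.utc).isoformat()


TOOLS = {"get_time": get_time}


def call_model(messages: list[dict]) -> dict:
    # Stub: a real implementation would send `messages` to an LLM and return
    # either {"tool": name, "args": {...}} or {"answer": text}.
    if any(m["role"] == "tool" for m in messages):
        return {"answer": f"The current UTC time is {messages[-1]['content']}."}
    return {"tool": "get_time", "args": {}}


def run_agent(question: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if "answer" in reply:
            return reply["answer"]
        # Execute the requested tool and feed the result back to the model.
        result = TOOLS[reply["tool"]](reply["args"])
        messages.append({"role": "tool", "content": result})
    return "Gave up after too many steps."


if __name__ == "__main__":
    print(run_agent("What time is it in UTC?"))
```

The failure modes mostly live in that loop: tools that return garbage, models that never stop calling tools, and answers that sound confident while ignoring the tool output entirely.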
The name is a tongue-in-cheek nod to the most famous LLM bug: hallucinations. A lot of what gets posted here is about reducing them in practice — better prompting, grounding, evaluation harnesses, and knowing when to tell a model to just say "I don't know."
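As a rough illustration of one such pattern (again, just a sketch, not a recipe from a specific post), here is a grounding prompt that passes retrieved context and explicitly permits the model to abstain, plus the kind of crude check an eval harness might use to score abstentions. `ABSTAIN`, `build_prompt`, and `is_abstention` are names invented for this example.

```python
# Sketch of a grounding pattern: answer only from provided context,
# and say "I don't know" when the context does not cover the question.
ABSTAIN = "I don't know"


def build_prompt(question: str, context: list[str]) -> str:
    snippets = "\n".join(f"- {s}" for s in context)
    return (
        "Answer the question using only the context below.\n"
        f"If the context does not contain the answer, reply exactly: {ABSTAIN}\n\n"
        f"Context:\n{snippets}\n\n"
        f"Question: {question}\nAnswer:"
    )


def is_abstention(answer: str) -> bool:
    # Crude scoring check: did the model actually take the "I don't know" exit?
    return answer.strip().lower().startswith(ABSTAIN.lower())


if __name__ == "__main__":
    prompt = build_prompt(
        "Who maintains this site?",
        ["hallucinate.cc is a personal blog about LLM agents."],
    )
    print(prompt)
    print(is_abstention("I don't know."))
```

Whether a given model actually takes that exit, and how often it abstains when it shouldn't, is exactly the sort of thing the eval notes here try to measure.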
Projects
- Agent harness experiments — small comparisons between local and frontier models on multi-step tool use.
- Browser automation playground — driving headless browsers with LLM agents for research and form-filling tasks.
- Hallucination eval notes — short writeups on prompt patterns that measurably reduce confident-but-wrong answers.
Contact
Drop a line: saskia.mueller1337@gmail.com