I came up in advertising, went to grad school convinced that software should do the work for us so we can be more human with each other, and spent the next decade proving it.
I wasn't interested in technology for its own sake; I was interested in what it could free us to do. I taught myself to code. I stayed up all night writing programs. I was hooked.
A bicycle accident left my partner with a traumatic brain injury. Our shared reality collapsed. As his mind struggled to rebuild, mine was forced to hold everything together amid the chaos.
Survival and success aren't about intelligence. They're about how resilient your mind is when everything falls apart.
I went deep into positive psychology, applied neuroscience, linguistics, and behavioral science. I became a coach and worked with creators, leaders, and founders for years.
The pattern was always the same: uncertainty is uncomfortable, so we grab the first answer. But the best answers take time. The real design problem is helping people stay in the discomfort long enough to find what's actually true.
I kept building. When large language models emerged, I recognized them immediately — not as a trend, but as the tool I had always needed.
Taking detailed notes during a coaching session pulled me away from being fully present. So I built an app that captured each utterance, analyzed it, plotted it, and highlighted the client's own language back to me in real time.
I combine dialogic intelligence with software engineering to improve how humans and systems interact. I built Paloma, an AI coaching companion with a relational memory architecture, behavioral guardrails, and a design philosophy built on psychological safety.
I evolved Field Guide from a paper survey into a mobile data collection system, and from there into an agentic research infrastructure: NIH-funded and multi-agent, with fatigue detection, real-time contradiction flagging, and longitudinal memory spanning years.
I don't just design products. I design the relationship between humans and the systems built to serve them.