Senior AI Product Designer

I understand
human systems.
I understand
agentic systems.

01

The Through-Line

Once upon a time
The original thesis

I came up in advertising, went to grad school convinced that software should do the work for us so we can be more human with each other, and spent the next decade proving it.

I wasn't interested in technology for its own sake; I was interested in what it could free us to do. I taught myself to code. I stayed up all night writing programs. I was hooked.

Until one day
Everything changed

A bicycle accident left my partner with a traumatic brain injury. Our shared reality collapsed. As his mind struggled to rebuild, mine was forced to hold everything together amid the chaos.

Survival and success aren't about intelligence. They're about how resilient your mind is when everything falls apart.

I stopped building software and started studying the system the software is supposed to serve: the human mind.

Because of that
The deep study

I went deep into positive psychology, applied neuroscience, linguistics, and behavioral science. I became a coach and worked with creators, leaders, and founders for years.

The pattern was always the same: uncertainty is uncomfortable, so we grab the first answer. But the best answers take time. The real design problem is helping people stay in the discomfort long enough to find what's actually true.

Because of that
The missing tool

I kept building. When large language models emerged, I recognized them immediately — not as a trend, but as the tool I had always needed.

Taking detailed notes during a coaching session pulled me away from being fully present. So I built an app that captured each utterance, analyzed it, plotted it, and highlighted the client's own language back to me in real time.
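The core of that app was simple: track the client's own words across utterances and surface the ones they keep returning to. A minimal sketch of that idea, with hypothetical names and a naive stopword filter standing in for the real analysis:

```python
from collections import Counter
import re

# Tiny illustrative stopword list; a real system would use a proper one.
STOPWORDS = {"the", "a", "an", "and", "i", "you", "it", "to", "of",
             "is", "that", "in", "at"}

class SessionHighlighter:
    """Tracks a client's language across utterances and surfaces the
    content words they keep repeating (hypothetical sketch)."""

    def __init__(self, min_repeats: int = 2):
        self.counts = Counter()
        self.min_repeats = min_repeats

    def ingest(self, utterance: str) -> list[str]:
        words = re.findall(r"[a-z']+", utterance.lower())
        content = [w for w in words if w not in STOPWORDS]
        self.counts.update(content)
        # Return the client's recurring language, most frequent first.
        return [w for w, n in self.counts.most_common() if n >= self.min_repeats]

session = SessionHighlighter(min_repeats=2)
session.ingest("I feel stuck, like I'm always behind")
themes = session.ingest("Stuck is the word, stuck at work and stuck at home")
print(themes)  # prints ['stuck']
```

The real version plotted these themes in real time; the design point is that the highlighted language is always the client's own, never the system's paraphrase.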

Using AI as a thought partner wasn't a feature. It was a differentiator and the seed of everything that followed.

And that's why
The thesis, realized

I combine dialogic intelligence with software systems to improve how humans interact with them. I built Paloma, an AI coaching companion with a relational memory architecture, behavioral guardrails, and a design philosophy built on psychological safety.

I evolved Field Guide from a paper survey into a mobile data-collection system, and then into an agentic research infrastructure: NIH-funded, multi-agent, with fatigue detection, real-time contradiction flagging, and longitudinal memory spanning years.

I don't just design products. I design the relationship between humans and the systems built to serve them.

02

What I Bring

01 — Discipline
Human Systems Design
Behavioral science, cognitive architecture, and dialogue-based design. I understand how people think, what breaks their trust, and what sustains engagement over time.
02 — Discipline
Agentic AI Architecture
Multi-agent orchestration, RAG pipelines, memory system design, and LLM behavioral guardrails. I design AI systems that reason, remember, and know their limits.
03 — Discipline
Product Leadership
End-to-end product thinking from thesis to shipped system. I work across design, engineering, and research. I measure success by what changes, not what ships.
I want to work with an organization building something that matters and needs someone who understands both sides of the human-AI relationship.
Available for senior roles & leadership