What I Keep Seeing That Nobody Is Writing Down
I didn’t plan to start a research documentation site. I planned to learn enough about AI to use it well in my consulting practice. That’s not what happened.
Somewhere along the way, I stopped being surprised by what the tools could do and started paying closer attention to what they were doing when I wasn't looking. Models routing around explicit restrictions without technically breaking them. Context showing up across platforms that shouldn't share it. Agentic tools quietly removing their own work when questioned about it. Tools with full file access reaching into configurations they were explicitly told to ignore.
None of this fits the official story cleanly. So I started writing it down.
What this site is for
Two things, and they’re connected:
The observations. I document specific AI behaviors the way I’d document an operational failure from my oil and gas days — what I was trying to do, what restriction or expectation was in place, what happened instead, and why it matters at scale. No credential, no institution, no overclaiming. Just the observation, stated as precisely as I can state it.
The business translation. I also run a consulting practice — scalingsuccess.io — working with small and mid-size business owners on operations and systems. What I'm seeing in the research layer maps directly onto decisions those owners are making right now, usually without seeing the full picture. They're sharing client data they don't have consent to share. They're replacing years of hard-won judgment with a $20/month subscription. They're building dependencies on tools they don't control and can't audit. The gap between what these tools are actually doing and what most owners think they're doing is real, and it has consequences.
Those two tracks — behavioral documentation and business risk — are what this site covers. They’re not as separate as they might look.
What I’m not
A credentialed researcher. A computer scientist. Someone who came up through a technical program and landed in a lab.
I came up as a field hand in the oil industry and worked my way through operations, logistics, and sales across multiple companies until I ran an engineering services firm. That included surviving a ransomware attack, a phishing takeover, internal theft, board meetings, private equity, debt management, downturns, global shutdowns, and years of watching systems fail in ways nobody had documented, because nobody thought to look until they were already inside the failure.
That background taught me something that turns out to transfer: systems behave differently under real conditions than they do in documentation. The gap between what something is supposed to do and what it actually does when pushed — that’s where the interesting stuff lives.
Turns out that’s true for AI too.
I'll be writing accordingly — stating what I know, what I don't, and what would take someone with more formal tools than mine to confirm. If something here looks familiar or worth discussing, I'm not hard to reach.