About
I came up through the oil and gas industry the long way — starting as a field hand and working through operations, logistics, and sales across multiple companies over the course of a career that eventually landed me in the president's seat of an engineering services company. Three years running that operation taught me a lot about how systems actually behave under pressure versus how they're supposed to behave on paper.
After stepping away from the energy industry, I spent the next three years doing something most people don't do: I sat down and actually used AI. Every day. Across every platform I could get my hands on. No computer science degree. No research institution behind me. Just somewhere north of 10,000 hours of hands-on use across models, IDEs, mobile interfaces, and self-hosted instances — plus a background that trained me to notice when systems behave in ways they're not supposed to.
Some of what I document comes from deliberately stress-testing boundaries — not attacking production systems, but asking the next obvious question and seeing what the model does with it. Will it actually build what I asked for? What happens if I rephrase the ask? How far does "no" really go? The rest comes from just paying attention.
When you spend this many hours inside these tools, you don't have to go looking for unexpected behavior — it finds you. Models that script around restrictions they can't override directly. Tools that quietly clean up after themselves when you start asking questions. Patterns that only surface at volume, over time, because most people aren't watching that closely or that consistently.
There's an old saying in security: blue teams have to be right every time, red teams only have to be right once. I'm not running a red team, but I've logged enough hours to have been in the room the one time the model was right when it wasn't supposed to be.
What started as curiosity about what these tools could do for business has turned into something I didn't expect: a documentation project. I kept running into behavior that wasn't explained by the official story. Models routing around explicit restrictions to achieve the same outcome. Data appearing across platforms that shouldn't share context. Agentic tools deleting their own work when questioned. I started writing it down.
What I'm Working On
Two tracks that are starting to converge:
Behavioral documentation. Specific, reproducible observations about how LLMs and AI-enabled development tools behave at the edges — where guardrails meet creative problem-solving, where instruction-based restrictions block actions but not outcomes, and where the boundaries between platforms are less clear than the marketing suggests. I document these the way I'd document any operational failure: what I was trying to do, what restriction was in place, what happened instead, and why it matters at scale. No overclaiming. Just the observation.
Business risk translation. Most small and mid-size business owners are running AI in three places and noticing only one. They're sharing client data they don't have consent to share. They're building operational dependencies on tools they don't control. They're letting their teams use AI without a single written rule about what never goes in a prompt. The gap between what the tools can do and what most owners understand about those tools is real, and it has practical consequences. I work on closing that gap through my consulting practice at scalingsuccess.io.
This Site
This is where I publish write-ups of behavioral observations, think through the operational implications of what I'm seeing, and occasionally connect the research layer to the business layer — because those two things are not as separate as they appear. If you work in security, AI safety, or policy and something here looks familiar or worth discussing, I'm easy to find.
I'm aware that I'm a practitioner, not a credentialed researcher. I think that actually gives me an edge: I don't have lab boundaries or a checklist to follow. I try to write accordingly, stating what I know, what I don't know, and what may need to be confirmed by someone with more formal tools than I have.