Play Fast, Not Loose
The future will belong to those who are clever with the tools and stubborn about the parts of judgment that must remain human.
The first time I watched AI spin up possible bus routes and catchment maps for a school in under half an hour, my mind was blown: what typically took us a month (or more) of work was now compressed to minutes. Then I saw everything it missed. Speed had quietly turned into authority, and I had to pull it back.
This is the line we are all walking: AI can make the earliest part of analysis stunningly efficient. It can also tempt us into false certainty. The antidote is not to go slow. It’s to play fast, not loose.
“Fast” means using the model to widen the possibility space quickly: sketch scenarios, surface patterns, cluster segments, and interrogate the results. It also means refusing to waste human cognition on first passes the machine can handle.
“Not loose” means you decide what the model is not allowed to decide. It does not get to frame the question. It does not get to tell you when “good enough” is good enough. It does not get to erase local knowledge—those invisible facts and cultural context that move the decision from plausible to right. You keep the beginning (purpose and boundaries) and the end (judgment and responsibility). You let AI rush the middle—and only the middle.
There’s a deeper reason to work this way. Data looks clean because it is abstracted from people, but markets are not. A route that “saves” eleven minutes might add a transfer a parent can’t manage during the dinner shift. A pricing model can still trigger a community backlash you could have predicted because you’ve walked that parking lot a thousand times. AI does not take the complaint. You do.
So yes, have AI draft what a consultant would charge five figures to summarize from public sources. Then ask humans to run the last mile: verification, field checks, stakeholder implications, the choice you’ll defend in front of the board and the bus line. It’s respect for how your decisions impact people.
The promise of this era is not that leaders will be replaced by software. It’s that leaders will stop pretending analysis is leadership. The software can do a lot of the first draft. Only you can look your community in the eye and say, “We considered the tradeoffs. Here’s the path. Here’s when we’ll revisit it.”
Play fast, and refuse to be loose. The future will belong to the leaders and the institutions that can do both—clever with the tools and stubborn about the parts of judgment that must remain human.
Questions to move you forward
• What market question could AI draft an answer to in 20 minutes this week—and what human checks will you require before acting?
• Where are “coverage gaps” most dangerous in your context, and who is responsible for spotting them?