Why your scattered approach to AI is burning through resources—and how understanding bias can fix it
My former colleague Wayne Robinson's recent post about whose worldview trains the world's AI made me think, which is always a good thing. His analysis of how Elon Musk wants to "rewrite human knowledge" to remove "left-wing bias" from Grok helped me home in on a slow hunch I've been carrying: we're not just dealing with biased AI outputs; we're creating them through biased inputs.
Wayne's insight about AI being "a mirror that reflects power, not pluralism" made me realize that every time we interact with AI, we're climbing what organizational psychologist Chris Argyris called the "Ladder of Inference"—moving from observable data through our interpretations, assumptions, and beliefs to reach conclusions and take action.
The problem is that we're often in a hurry, and hurry makes us lazy. Most of us unconsciously race up that ladder when we prompt AI, skipping the foundational rungs where bias creeps in. Each sloppy, context-free prompt isn't just wasting computational resources (a single AI query burns roughly ten times the energy of a traditional Google search); it's perpetuating the very problem Wayne identified. And before you say "stop using AI for search," I'm afraid that train has already left the station: even a basic Google search runs through AI now. We need to use AI more effectively.
So why reinvent the wheel? Argyris already gave us a framework for managing inference and bias. Combine it with Philip Tetlock's research on better prediction in Superforecasting, and we can transform how we interact with AI, making our prompts more effective while actively countering bias.
Every time you interact with AI, you're unconsciously climbing Argyris's Ladder of Inference:
Rung 7: Actions - You type your prompt
Rung 6: Beliefs - Based on what you think AI should do
Rung 5: Conclusions - About what kind of help you need
Rung 4: Assumptions - About the context AI already understands
Rung 3: Meanings - You assign to your situation
Rung 2: Selected Data - What you choose to include (or exclude)
Rung 1: Observable Data - The raw facts of your situation
Most people start at Rung 6 or 7. They fire off prompts based on their conclusions and beliefs without providing the observable data and reasoning that led them there.
The result? AI fills in the gaps with its own trained biases and assumptions. You're not just getting generic responses—you're amplifying the very worldview biases Wayne is warning us about.
In Superforecasting, Tetlock identified patterns that separate accurate predictors from everyone else. His "Ten Commandments for Aspiring Superforecasters" include principles like:
Question your assumptions constantly
Start with base rates, not dramatic exceptions
Update your beliefs when presented with new evidence
Acknowledge what you don't know
Sound familiar? These are exactly the behaviors you need when climbing down the Ladder of Inference—and exactly what's missing from most AI prompts.
When you skip the lower rungs of the ladder, you're essentially asking AI to make predictions and recommendations based on incomplete information. You're setting both yourself and the AI up for biased, inaccurate results.
Drawing from Argyris and Tetlock's work, I developed a framework specifically for complex AI interactions—situations where bias and incomplete reasoning could seriously derail your results. I call it LADDER (cute, I know):
L - LEAD with your goal
A - ANCHOR in observable data
D - DETAIL your reasoning chain
D - DECLARE your assumptions
E - EXPLAIN what you want
R - REQUEST specific deliverables
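If you build prompts programmatically (or just like templates), the framework is easy to encode. Here's a minimal sketch in Python, using a hypothetical LadderPrompt helper of my own invention rather than any existing library, that refuses to produce a prompt until every rung is filled in:

```python
from dataclasses import dataclass, fields

@dataclass
class LadderPrompt:
    """One field per rung of the LADDER framework (illustrative helper, not a library)."""
    lead: str     # LEAD with your goal
    anchor: str   # ANCHOR in observable data
    detail: str   # DETAIL your reasoning chain
    declare: str  # DECLARE your assumptions
    explain: str  # EXPLAIN what you want
    request: str  # REQUEST specific deliverables

    def render(self) -> str:
        # Refuse to build the prompt if any rung is empty -- the whole point
        # is not skipping the lower rungs the way a hurried chat prompt does.
        for f in fields(self):
            if not getattr(self, f.name).strip():
                raise ValueError(f"Missing LADDER field: {f.name.upper()}")
        parts = [("LEAD", self.lead), ("ANCHOR", self.anchor),
                 ("DETAIL", self.detail), ("DECLARE", self.declare),
                 ("EXPLAIN", self.explain), ("REQUEST", self.request)]
        return "\n\n".join(f"{label}: {text}" for label, text in parts)
```

The hard failure is the point: you can't send the prompt until you've written down every rung.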
This isn't just about getting better AI responses—it's about making your own thinking more rigorous. By forcing yourself to articulate each rung of the inference ladder, you catch your own biases before they contaminate the AI's output.
❌ "How should my startup approach product development?"
What's missing: Observable data about your market, reasoning about why current approaches aren't working, assumptions about customer behavior, specific context about your constraints.
Result: Generic startup advice that reflects Silicon Valley groupthink rather than your specific situation.
✅ LEAD: I'm developing a product development strategy that accounts for high uncertainty in our emerging market.
ANCHOR: Observable data: 90% of startups fail post-funding. Pattern analysis shows successful companies like Airbnb (pivoted from air mattresses to full platform) and Twitter (evolved from podcasting tool to microblogging) adapted based on market feedback, while failures like Theranos and Quibi ignored contradictory evidence and stuck to original assumptions despite mounting problems.
DETAIL: My reasoning: Companies that systematically update beliefs based on evidence outperform those that cling to initial assumptions, suggesting cognitive flexibility is more valuable than predictive accuracy in uncertain environments.
DECLARE: Key assumptions: (1) Our market is a complex adaptive system, (2) Small changes can have large effects, (3) Learning velocity trumps planning accuracy.
EXPLAIN: I need a framework that emphasizes rapid learning over detailed prediction while maintaining strategic coherence.
REQUEST: Specifically: (1) Methods to test core assumptions quickly, (2) Criteria for pivot vs. persevere decisions, (3) Team practices that encourage belief updating.
Result: A sophisticated, tailored strategy that acknowledges your specific context and actively counters common startup biases.
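If you keep examples like this in code or notes, the same content drops straight into the hypothetical LadderPrompt sketch from earlier (the wording below just condenses the example above):

```python
# Continues the LadderPrompt sketch above; run them together.
prompt = LadderPrompt(
    lead="I'm developing a product development strategy that accounts for "
         "high uncertainty in our emerging market.",
    anchor="~90% of startups fail post-funding; Airbnb and Twitter adapted to "
           "market feedback, while Theranos and Quibi ignored contrary evidence.",
    detail="Companies that systematically update beliefs based on evidence "
           "outperform those that cling to initial assumptions.",
    declare="(1) Our market is a complex adaptive system; (2) small changes can "
            "have large effects; (3) learning velocity trumps planning accuracy.",
    explain="I need a framework that emphasizes rapid learning over detailed "
            "prediction while maintaining strategic coherence.",
    request="(1) Methods to test core assumptions quickly; (2) pivot vs. persevere "
            "criteria; (3) team practices that encourage belief updating.",
)
print(prompt.render())  # paste the output into whichever model you use
```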
Use LADDER for:
Complex problems with multiple variables
Situations where bias could significantly impact outcomes
When you're synthesizing insights across domains
Research and analysis projects
Use the simpler ROLE framework for basic tasks:
[ROLE]: Act as [specific expertise]
[CONTEXT]: [minimal necessary background]
[TASK]: [clear, specific objective]
[FORMAT]: [desired output structure]
[CONSTRAINTS]: [length, tone, requirements]
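For instance, a quick fill-in (the scenario here is invented purely to show the shape):
ROLE: Act as a technical editor experienced with developer documentation.
CONTEXT: I'm revising the installation section of a README for an internal Python tool.
TASK: Rewrite it so a new hire can follow it unaided.
FORMAT: Numbered steps.
CONSTRAINTS: Under 150 words, plain language.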
Beyond getting better results, structured prompting dramatically reduces computational waste. The multiples below simply scale the rough ten-times-a-search figure from earlier by the number of queries you typically end up firing:
Scattered Approach:
4-6 queries to get usable results
40-60x the energy of Google searches
Reinforces AI training biases through poor signal quality
LADDER Approach:
1-2 queries for excellent results
10-20x the energy of Google searches
Provides high-quality signal that could improve AI training
This isn't just personal efficiency—it's about being a better participant in the AI ecosystem.
This week, try this:
Notice yourself climbing the inference ladder. Before your next complex AI interaction, pause and ask: "What assumptions am I making? What data am I skipping?"
Practice transparency. Use the LADDER framework for one complex query. Notice how articulating your reasoning changes your own thinking.
Question the AI's assumptions. When you get a response, ask: "What worldview might be embedded in this answer? What perspectives might be missing?"
Iterate consciously. If you need follow-ups, explicitly state what new information changed your thinking rather than just asking for "adjustments."
Wayne reminded us that AI systems are mirrors reflecting the worldviews of their training data and their users. When we interact carelessly—jumping to conclusions, skipping evidence, ignoring our assumptions—we're not just getting poor results. We're reinforcing the very biases we claim to want to eliminate.
But when we climb down the Ladder of Inference, when we apply Tetlock's principles of good judgment, when we make our reasoning transparent—we become better partners to AI systems. We provide higher-quality signal that could, over time, help create more thoughtful, less biased AI responses.
The goal isn't perfect objectivity (impossible) or paralyzing relativism (useless). It's epistemic humility—acknowledging what we don't know, making our reasoning transparent, and staying open to evidence that challenges our assumptions.
In a world where AI increasingly shapes how we think about complex problems, learning to interact with these systems thoughtfully saves you time and saves us resources.
Quick reference. For complex interactions, use LADDER:
LEAD: Clear goal statement
ANCHOR: Observable data/examples
DETAIL: Your reasoning process
DECLARE: Key assumptions you're making
EXPLAIN: Type of help needed
REQUEST: Specific deliverables
For basic tasks, use ROLE:
ROLE: Act as [expertise]
CONTEXT: [background]
TASK: [objective]
FORMAT: [output structure]
CONSTRAINTS: [requirements]
The difference between these approaches isn't just efficiency—it's consciousness. Choose the framework that matches both your task complexity and your commitment to thoughtful AI interaction.