What's Upstream from AI?
What's Upstream from AI?

Big-Picture Framing – Before the Algorithms
We usually start thinking about AI at the moment of output: the answer on the screen, the suggestion in the product, the summary in your inbox. But the real leverage point sits before AI ever runs—upstream in the human choices, data, and incentives that quietly shape what these systems can and can’t do.

Think of AI as the last mile of a long pipeline. Upstream are decisions about which problems deserve automation, what “good” looks like, whose data we use, and what risks we’re willing to accept. This piece gives you a simple mental model for that “before AI” layer, so you can influence outcomes long before you’re stuck arguing with a model’s answer.
What does “upstream from AI” actually mean?
Most AI debates start too late. A model behaves strangely, people argue about prompts, and someone suggests another safety filter. By then, the important decisions have already been made.
“Upstream from AI” is everything that shapes a system before a model is trained or an API is called, including:
- Problem framing – What are we really trying to solve, and why AI at all?
- Values and constraints – What are we not willing to trade off?
- Data and labels – Whose history do we encode, and who decides what “good” looks like?
- Incentives – What are builders rewarded or punished for?
If AI is the dish, “upstream from AI” is the recipe, ingredients, and kitchen culture. If the soup tastes off, the fix isn’t yelling at the bowl—it’s changing the shopping list and how the kitchen works.
Four upstream levers that quietly steer AI
You don’t need to touch model weights to shape what comes before AI. The biggest levers are very human.
1. Intent and problem framing
Every system starts with a sentence like, “We should use AI for this.” Hidden inside that sentence:
- Are we chasing novelty, cost savings, or real user value?
- Are we augmenting humans or replacing them?
- Is the goal “do what we already do, but faster” or “do something genuinely better”?
- Concretely, what question are we asking the system to answer in service of those goals?
If the core intent is “cut support costs,” expect automation and deflection. If it’s “help customers feel clearly understood,” you’ll design a different system, even with the same model.
2. Data and labels: the slice of reality we freeze
Then comes data: what we collect, clean, and label.
- Whose behavior shows up in the dataset—and who’s invisible?
- How is messy real life simplified into binary labels like “success/failure”?
- Do we ever revisit those labels as the world changes?
Data is like the sediment of past decisions. Train on “how we’ve always done things” and AI will faithfully scale yesterday—biases and all—unless someone upstream questions whether yesterday is worth copying.
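To make this concrete, here is a minimal sketch of how a single labeling choice freezes one slice of reality. The ticket fields, the two label functions, and the 24-hour threshold are all invented for illustration; the point is that one line of code decides what “success” means for everything trained downstream.

```python
from dataclasses import dataclass

@dataclass
class SupportTicket:
    resolved: bool
    resolution_hours: float
    customer_came_back: bool  # did the same issue recur later?

def label_v1(ticket: SupportTicket) -> int:
    # "Success" = closed fast. Encodes a speed-first view of history.
    return int(ticket.resolved and ticket.resolution_hours < 24)

def label_v2(ticket: SupportTicket) -> int:
    # "Success" = actually fixed. Same raw data, different frozen reality.
    return int(ticket.resolved and not ticket.customer_came_back)

ticket = SupportTicket(resolved=True, resolution_hours=3.0, customer_came_back=True)
print(label_v1(ticket), label_v2(ticket))  # 1 vs 0: the label decides what a model learns to repeat
```

Same ticket, two “truths”: a model trained on the first label learns to close tickets quickly, while one trained on the second learns to actually resolve problems.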
3. Incentives and power: who gets rewarded?
Upstream from AI there are org charts, KPIs, and promotion criteria.
- Are teams praised for shipping fast or for noticing risks early?
- Can someone realistically say “not yet” about a high-risk AI idea?
- Does anyone get credit for discovering harmful side effects?
If all the praise goes to big launches and none to careful restraint, AI will reflect that culture. The algorithm is downstream of the bonus plan.
4. Infrastructure and interfaces: the riverbanks
Finally, there’s the tooling and UX around AI:
- Do teams have ways to test, monitor, and stress-test models, or is it “ship and hope”?
- Do users see outputs as suggestions they can debate—or answers they must obey?
- Is it easy to correct the AI so the system can learn over time?
These choices act like riverbanks and dams. They don’t change what water exists, but they control where it flows and how hard it is to redirect.
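As a rough sketch of what “easy to correct” can look like in practice, here is a thin Python wrapper that treats every model output as a reviewable suggestion and logs user corrections for later relabeling or retraining. The `generate` callable, its `(text, confidence)` return shape, and the 0.7 threshold are assumptions for illustration, not any particular library’s API.

```python
import json
import time

CORRECTIONS_LOG = "corrections.jsonl"

def suggest(generate, prompt: str, confidence_threshold: float = 0.7) -> dict:
    """Wrap a model call so its output is framed as a reviewable suggestion."""
    text, confidence = generate(prompt)  # hypothetical: returns (text, score)
    return {
        "prompt": prompt,
        "suggestion": text,
        "needs_review": confidence < confidence_threshold,
    }

def record_correction(prompt: str, suggestion: str, corrected: str) -> None:
    """Append user fixes so the upstream team can relabel or retrain later."""
    with open(CORRECTIONS_LOG, "a") as f:
        f.write(json.dumps({
            "ts": time.time(),
            "prompt": prompt,
            "suggestion": suggestion,
            "corrected": corrected,
        }) + "\n")
```

The design choice is the point: once corrections land in a log someone owns, the interface stops being a dead end and becomes a feedback channel back upstream.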
A real-world example: before an AI hiring tool
Imagine a company rolling out an AI system to rank incoming resumes.
Long before anyone picks a model:
- Intent – Leadership frames the goal as “cut recruiter workload and time-to-hire,” not “improve quality and fairness.”
- Data – They feed in five years of hiring history that heavily favors a narrow set of schools and backgrounds.
- Labels – “Good candidate” is defined as “someone we hired,” without checking whether those past decisions were biased or short-sighted.
- Incentives – Recruiters are measured on speed, not diversity or long-term performance, so they lean hard on the rankings.
When the tool goes live and starts penalizing nontraditional candidates, it’s tempting to blame “biased AI.” But the real story lives before AI: intent, data, labels, and incentives that quietly told the system to reproduce the past.
Fixing it means going upstream:
- Reframing the goal so it covers quality and fairness, not just speed.
- Curating and rebalancing the training data.
- Redefining labels (e.g., performance after a year, not just who got hired).
- Adjusting KPIs so recruiters are rewarded for better outcomes, not just faster decisions.
Tuning the model matters, but it won’t overcome a broken river source.
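Here is a hedged sketch of the label fix, assuming a toy hiring dataset with invented column names: redefining “good candidate” from “was hired” to “performed well after a year” changes what any downstream model optimizes, and makes the selection bias (outcomes only exist for past hires) explicit instead of hidden.

```python
import pandas as pd

# Hypothetical hiring history; all column names and values are invented.
df = pd.DataFrame({
    "candidate":       ["a", "b", "c", "d", "e"],
    "was_hired":       [1, 1, 1, 0, 0],
    "year_one_rating": [2.0, 4.0, 5.0, None, None],  # only observable for hires
})

# Label v1: "good candidate" = "someone we hired."
# A model trained on this learns to reproduce the old screen, bias and all.
df["label_v1"] = df["was_hired"]

# Label v2: "good candidate" = "performed well after a year."
# It can only be computed for past hires, which surfaces the selection bias
# as an upstream problem to design around rather than silently bake in.
hires = df[df["was_hired"] == 1].copy()
hires["label_v2"] = (hires["year_one_rating"] >= 4).astype(int)

print(hires[["candidate", "label_v1", "label_v2"]])
```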
How to work “upstream from AI” in your own world
You can shift upstream on your next project with a few simple moves:
- In kickoff meetings, ask: “Why AI here, specifically?” and “What would success look like without AI?”
- When data is discussed, ask: “Whose reality does this dataset represent, and who’s missing?”
- When metrics are chosen, ask: “If we maximized these, could things still be worse in ways we care about?”
- In product reviews, ask: “Does this interface invite users to question or correct the AI?”
These questions don’t block progress. They just make sure you’re designing the river, not only reacting to its currents.
Summary and next step
What comes before AI is us: our framing, our data choices, our incentives, and our designs. If we stay fixated on prompts and outputs, we argue where leverage is lowest. When we move upstream, we get to shape the conditions that make good AI outcomes possible—and prevent bad ones from becoming locked in at scale.
If you want to keep building that muscle, make “upstream from AI” a default question in your team’s conversations. And if you’d like a steady drip of practice, follow QuestionClass’s Question-a-Day at questionclass.com and use those prompts to challenge how you set goals, choose data, and design systems.
Bookmarked for You
Here are a few books that will deepen your sense of what comes before AI:
Weapons of Math Destruction by Cathy O’Neil – Shows how unexamined data and incentives can turn algorithms into “math-powered” feedback loops of harm.
The Alignment Problem by Brian Christian – Explores how human feedback, training data, and goals shape AI behavior in the real world.
Thinking in Systems by Donella Meadows – Not about AI specifically, but a clear guide to feedback loops and leverage points in any complex system.
QuestionStrings to Practice
QuestionStrings are deliberately ordered sequences of questions in which each answer fuels the next, creating a compounding ladder of insight that drives progressively deeper understanding. What to do now: use this string when someone proposes an AI solution and you want to move the room gently toward upstream thinking.
Before-AI Clarification String
For when your team says, “Let’s use AI for this”:
“What problem are we really trying to solve?” →
“If we couldn’t use AI, how would we tackle it?” →
“What data and past decisions would we be encoding if we automated this?” →
“Who benefits most from solving it this way—and who might be harmed or ignored?” →
“What constraints and incentives would we need so any AI we add actually makes things better over time?”
Try weaving this into early project discussions or your own journaling. You’ll quickly spot where small upstream changes could unlock much better downstream outcomes.
As you keep asking what comes before AI, you’ll find the most powerful levers are rarely technical—they’re the questions, assumptions, and structures we choose at the very start.