
Is Your Business Outsourcing Too Much of Its Judgment to Systems That Don’t Know What Matters to You?

Greta Bradman, April 2025



There’s a quiet shift happening in organisations right now.


It’s not just about AI replacing tasks. It’s about something deeper: a slow and often invisible outsourcing of human judgment to systems that weren’t built to understand what really matters to the business - the values and principles that inform its unique edge and competitive advantage. The thing is, values are ripe for embedding in AI systems. It can be done, but all too often it isn’t (yet).


Let me explain.


Judgment Is More Than Data + Rules

We humans like to believe we make rational decisions—especially at work. But the truth is, even the most analytical leaders rely on fast, intuitive judgments shaped by their values, experience, needs, goals, and context. After all, our minds are incredible probability machines - pattern-recognition engines that draw on everything we’ve learned, seen, felt, and cared about to make decisions—often in the face of uncertainty.


Far from relying solely on ‘Type 2’ decision-making, where our thought processes are deliberate and slow, we routinely reach into our intuition. This process of intuitive decision-making isn’t irrational. It’s adaptive.


In fact, our intuitive system becomes more accurate with experience, especially when it’s grounded in clarity about values—both personal and organisational. Why? Values are goal-oriented, positive schemas that help us interpret, evaluate, and respond to the world, even amidst great uncertainty. They bring together beliefs from past experience, current needs, and future aspirations (and their associated goals). As fundamental drivers, your values shape your decisions whether you’re aware of them or not. Values are how each of us determines what matters most, especially when trade-offs are required. The same is true for businesses and their organisational values.


But here’s the problem.


More and more businesses are automating decisions using tools—recommendation engines, prioritisation matrices, LLMs, recruitment filters, OKR auto-scoring systems—that lack any real understanding of the company’s values. That’s not to say they can’t reflect those values - it’s just not being prioritised yet. Prioritising it restores a major competitive advantage for many companies: that of being known, trusted, even loved, by customers, employees, and other important stakeholders.


AI tools can be fast, helpful, and impressive. But they don’t inherently know what matters to you. Ensuring that they do matters greatly over time, if we are to avoid eroding the value the business creates.


Values Alignment: Not Just for People

The last few decades have seen a huge surge of interest in values alignment at work, and in the value creation and risk mitigation it can enable. We’ve reached a point where company or organisational values are recognised as must-haves, not nice-to-haves. We write them on walls. We use them to hire, promote, and recognise. We even run engagement surveys to check whether employees “feel aligned with the company’s values.”


But when it comes to systems—our AI tools, automation layers, decision-support software—we rarely ask the same questions. We simply enable automated decision-making that’s generic, values-agnostic, and largely blind to what matters to our company specifically, beyond what matters to our industry at large.


We’re treating values as a human trait, rather than an organisational navigational device that can valuably inform every decision-making layer—human or machine.


And that’s risky.


Because even if your people are values-aligned, your systems might be pulling in a different direction. Worse still, your people may be deferring to those systems, assuming the output is neutral, correct, or “data-driven”—when in fact it’s simply uninformed by your context.


We’re Outsourcing Without Guardrails

I’ve worked for years at the intersection of psychology, decision neuroscience, AI, and organisational systems. What I see across many scaleups and large enterprises is a creeping default: “Let the system decide.”


Who gets interviewed? Who gets promoted? How is information surfaced to inform strategy? What’s flagged as high risk? Which stakeholders or customers get prioritised? What wording is used in outbound communications?


These are judgment calls. And increasingly, they’re being made or shaped by systems trained on historical data—or, in the case of generative AI, systems trained on everything except your company’s actual principles.


If your company’s values don’t shape those systems, someone else’s values will. Usually those values are implicit, inherited from training data, and geared toward efficiency or popularity, not integrity or impact. Furthermore, if you’ve built competitive advantage on your values, poorly values-aligned automated processes risk eroding that advantage over time.


It doesn’t have to be this way. It is possible to embed your values into the fabric of automated decision-making (and, where relevant, afford the same benefits to your customers).


What It Looks Like to Reclaim Judgment

So, how do we build systems that enhance human judgment rather than quietly override it?

Here are four principles I’ve used with leaders who want to keep values at the heart of decision-making—even in high-tech environments:


1. Get Clear on What Your Values Actually Are

Most companies confuse branding with values. If your values don’t help your people say “no” to something, they’re not values—they’re vague aspirations. Values are about trade-offs. They clarify what you’ll prioritise under pressure. If you haven’t defined that, your systems can’t reflect it. 


2. Make Values Operational, Not Philosophical

Ensure you’re clear on why you have your values. There are three primary, overlapping reasons companies surface their values:

  • Strategic purposes - guiding beacons that inform the company’s strategic direction and plan.

  • Enabling purposes - attracting, and helping to retain, the right people and stakeholders.

  • Tactical purposes - an articulation of values-in-action that helps a company and its people agree on ‘what we do’ and ‘what we don’t do’.


Furthermore, the way values are named, described, and exemplified matters - to people and to AI.


“Integrity” doesn’t help an AI tool decide whether to flag a borderline sales lead. But a value around Integrity that is further expressed as: “We don’t withhold material risk information from clients, even if it costs us short-term revenue”? That’s something you can embed in rules, prompts, filters, and review criteria.
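
To make this concrete, here is a minimal sketch in Python of what embedding such an operationalised value might look like. The value statement, the lead fields, and the flag_for_disclosure_review rule are hypothetical illustrations, not a prescribed implementation; the point is that the value appears verbatim both in the prompt a generative tool sees and in a deterministic rule that backstops it.

```python
# A hypothetical operationalised value, expressed in the terms suggested above.
INTEGRITY_VALUE = (
    "We don't withhold material risk information from clients, "
    "even if it costs us short-term revenue."
)

def build_review_prompt(lead_summary: str) -> str:
    """Puts the company's value, verbatim, in front of a generative AI tool."""
    return (
        "You are reviewing a sales lead for our company.\n"
        f"Company value (non-negotiable): {INTEGRITY_VALUE}\n"
        "If the lead involves material risk the client hasn't been told about, "
        "flag it for human review and explain why.\n\n"
        f"Lead: {lead_summary}"
    )

def flag_for_disclosure_review(lead: dict) -> bool:
    """Deterministic backstop: flag leads with known, undisclosed material risks."""
    return bool(lead.get("material_risks")) and not lead.get("risks_disclosed", False)

# A borderline lead with an undisclosed risk gets flagged, not silently approved.
lead = {
    "client": "Acme Pty Ltd",
    "material_risks": ["key-person dependency"],
    "risks_disclosed": False,
}
print(flag_for_disclosure_review(lead))  # True -> route to a human reviewer
```

The deterministic backstop matters: the prompt expresses the value, but the rule ensures a borderline lead is routed to a human even if the model misses it.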


To reiterate - it is absolutely possible to embed your values into AI tools that support reasoning and decision-making tasks at your business. But it requires intentionality.

You need to demand more from your AI providers and developers: your priorities, principles, and values-oriented competitive advantage should be part of the build (e.g., part of the prompting, or training)—not retrofitted later. These aren’t just philosophical window dressing. They’re trust builders, risk mitigators, and differentiators.


3. Demand Transparency—and Relevance—from Your AI Providers

Don’t let your AI partners hide behind black boxes. You can—and should—understand more than you think. That includes how their systems make decisions, how your company’s values are reflected (or not), and where risk and misalignment could quietly take hold.


Yes, ask how your data is secured and privacy is protected. But go further:

  • How does the system handle trade-offs?

  • What principles guide its recommendations?

  • Which assumptions or values are embedded into default options, escalations, or filters?

These aren’t technical questions - rather, they’re strategic ones. Your AI provider should be able to answer them in business terms—and show you how your values are (or could be) embedded in the system’s reasoning. If they can’t, that’s not just a technical gap—it’s a governance risk.


4. Protect Human Judgment as a Core Capability

Your people aren’t there to rubber-stamp machine outputs. If your systems don’t allow for context-sensitive overrides, you’re not just automating—you’re eroding the very judgment that sets your business apart.


Design workflows that invite intelligent override. And more importantly, treat those overrides as signals, not outliers. They’re insight into where your systems may be out of sync with evolving priorities, new risks, or human nuance.
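
As a sketch of what treating overrides as signals could look like in practice, here is a small, hypothetical Python example: every recommendation is logged along with whether a human overrode it and why, and any rule overridden more often than an assumed threshold (20% here, chosen purely for illustration) is surfaced for review rather than treated as noise.

```python
from collections import Counter

decisions = Counter()  # recommendations made, per rule
overrides = Counter()  # recommendations a human overrode, per rule
override_reasons: dict[str, list[str]] = {}  # the 'signal' worth reading

def record_decision(rule_id: str, overridden: bool, reason: str = "") -> None:
    """Logs a system recommendation and whether a human overrode it, and why."""
    decisions[rule_id] += 1
    if overridden:
        overrides[rule_id] += 1
        override_reasons.setdefault(rule_id, []).append(reason)

def rules_out_of_sync(threshold: float = 0.2, min_sample: int = 5) -> list[str]:
    """Surfaces rules overridden often enough to suggest values misalignment."""
    return [
        rule_id
        for rule_id, total in decisions.items()
        if total >= min_sample and overrides[rule_id] / total > threshold
    ]

# Example: reviewers keep overriding a rule that deprioritises small accounts.
for _ in range(8):
    record_decision(
        "deprioritise_small_accounts",
        overridden=True,
        reason="Long-standing client relationship outweighs account size",
    )
record_decision("deprioritise_small_accounts", overridden=False)
print(rules_out_of_sync())  # ['deprioritise_small_accounts'] -> review this rule
```

Even a log this simple turns overrides from exceptions to be squashed into a feedback channel showing where the system and the organisation’s values have drifted apart.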


In short: systems should serve judgment, not silently replace it.


Leadership in the Era of AI Is Values-Led

AI is not the enemy of good judgment. Unexamined delegation is—especially when it dilutes the values and principles that earned your customers’ trust in the first place.

Because when judgment and clarity disappear, culture and competitive edge become accidental.


And when values are missing from the system, the system quietly builds a different company than the one you intended.


So ask yourself—genuinely, urgently: Are we outsourcing decisions to systems that don’t know what matters to us?


If the answer is “maybe,” then it’s time to pause, reflect, and redesign—before the values drain out, and with them the trust, coherence, and soul of your company.


 
 
 



