Don’t miss the March CSTA session! Super early bird discount available.

Systems Thinking Alliance

Truth, Assumptions, and AI Governance

Issue # 12
5 Min Read

In This Issue

INTERCONNECTED INSIGHTS

Navigating the AI Accountability Gap

Let’s face it: we have all delegated tasks we didn’t want to do. But usually, when you delegate a task to an intern and they accidentally set the breakroom on fire, you can at least ask them why they thought microwaving foil was a good idea. With modern AI, specifically deep learning, we don’t get that luxury. We are increasingly handing over the keys to “black box” models where even the creators look at the output and shrug, muttering something about “hidden layers” and “neural pathways.” 

This brings us to the Accountability Gap. 

It’s the corporate equivalent of a shrug emoji when things go wrong. Because these systems are so opaque, determining who is responsible when an AI denies a mortgage to a creditworthy applicant or, worse, facilitates a wrongful arrest, becomes a forensic nightmare. 

We are dealing with a two-headed beast of confusion: 

  • Forensic Accountability: Who do we blame when the algorithm goes rogue? Is it the coder? The data scientist? The manager who signed the check? Or the algorithm itself (good luck serving a subpoena to a server rack)? 
  • Agent Accountability: We have delegated power to these digital agents, but are they answerable to us? Currently, the answer is a resounding “sort of, maybe, not really.” 

When an automated decision ruins someone’s day (or life), the lack of transparency doesn’t just annoy them; it strips them of the right to understand the rationale or challenge the decision. It creates a vacuum where responsibility goes to die.  

A Systemic Cure for the Governance Headache 

We aren’t just here to admire the problem. We are currently cooking up a comprehensive paper that offers a systemic view to address these governance nightmares. We’re moving beyond the “move fast and break things” mentality to something that looks a lot more like “move deliberately and fix things.” 

Here is a quick snippet of the Foundational Principles for AI Governance we are developing. Consider this your exclusive teaser trailer: 

To be effective, legitimate, and worthy of public trust, AI governance must be grounded in non-negotiable values, not just aspirational “nice-to-haves.” 

  • Principle 1: Human-Centricity & Oversight. AI is a tool, not a replacement for human judgment—especially when life and liberty are on the line. Humans must remain in the loop to intervene, override, or pull the plug. 
  • Principle 2: Fairness & Non-Discrimination. We must proactively stop algorithms from automating historical prejudices. If the data is biased (and it usually is), the governance must mandate robust testing to ensure we aren’t entrenching disadvantage under the guise of math. 
  • Principle 3: Transparency & Explainability. “It’s too complicated to explain” is no longer a valid excuse. Affected individuals have a right to meaningful explanations. Transparency is the prerequisite for trust. 
  • Principle 4: Safety, Security, & Robustness. AI must be robust against both accidental face-plants and malicious attacks. It needs to work predictably, even when the world throws it a curveball. 
  • Principle 5: Accountability & Redress. No more hiding behind complexity. Companies cannot use the “black box” as a shield to evade responsibility. If the system causes harm, there must be a clear path for redress.

 

Stay tuned for the full release, because “the algorithm made me do it” won’t hold up in the court of public opinion for much longer. 

SYSTEMS APPROACH

It’s Not the Data, It’s the Beliefs

Have you ever been in a meeting where everyone agrees on the data but draws wildly different conclusions? A situation where two teams, looking at the same spreadsheet, end up in a strategic deadlock? It’s a common corporate drama, born from a simple truth: our most critical decisions aren’t based on facts alone, but on the hidden, unstated assumptions we carry into the room. 

This month, we’re digging into a powerful method designed to drag those assumptions out of the shadows and into the spotlight: Strategic Assumption Surfacing and Testing (SAST). Born from a real-life crisis at a pharmaceutical company in the 70s, SAST is less of a polite discussion and more of a structured, intellectual brawl. It’s a brilliant tool for tackling “wicked problems”—the messy, ambiguous challenges with no clear answer. 

The Big Idea: It’s Not the Data, It’s the Beliefs 

The core philosophy of SAST is that every grand strategy or plan is built on a foundation of beliefs about its stakeholders. Think of it this way: your plan isn’t just a series of actions; it’s a bet that your customers, competitors, employees, and regulators will all behave in a certain way. SAST was created because when executives clashed over pricing strategies, throwing more data at them only made things worse. Each side cherry-picked the numbers that confirmed their existing beliefs. 

The breakthrough came when they realized the real argument wasn’t about numbers; it was about their unstated assumptions—like whether physicians were price-sensitive or not. SAST forces these assumptions into the open. 

How It Works: A Four-Act Play 

SAST isn’t about finding a kumbaya consensus. It’s a dialectical process that intentionally creates conflict to find a stronger, synthesized truth. It unfolds in four main phases: 

  • Group Formation: You don’t just throw people into a room. You create small, internally cohesive groups with fundamentally different worldviews. Think of it as assembling rival debate teams. Put your finance experts in one corner and your marketing gurus in the other, and give each a different strategy to defend. The goal is to maximize the differences between the groups. 
  • Assumption Surfacing: Each group is asked a golden question: “For our assigned strategy to be the absolute best choice, what must be true about our stakeholders?” This forces them to list the underlying assumptions they’re making. They then plot these assumptions on an Importance/Certainty graph, identifying the “pivotal” ones—those that are critically important but highly uncertain. This is where the strategic vulnerabilities hide. 
  • Dialectical Debate: Here comes the fun part. Each group presents its strategy and pivotal assumptions. The other groups then get to challenge them. This isn’t a free-for-all shouting match; it’s a structured debate designed to expose weak points, test the logic, and identify where the core conflicts lie. The initial rule is “clarifying questions only” to prevent premature attacks and ensure everyone understands what’s actually being proposed. 
  • Synthesis: The goal isn’t for one team to “win.” It’s to take the best parts of the opposing views and forge a new, more robust strategy. By debating the assumptions, the teams can negotiate, modify, or drop weak ones. If a critical assumption remains contentious, it becomes a clear requirement for more research. The final output is a composite strategy—a resilient plan born from intellectual fire. 
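The Importance/Certainty graph from the Assumption Surfacing phase can be sketched in a few lines of code. This is purely an illustrative toy, not part of the formal SAST method: the scoring scale, the thresholds, and the example assumptions below are all hypothetical, chosen only to show how "pivotal" assumptions (critically important but highly uncertain) fall out of the plot.

```python
# Toy sketch of SAST's Importance/Certainty plot. Each assumption is
# scored 0-10 on both axes; "pivotal" assumptions are those that are
# highly important but highly uncertain. Thresholds are illustrative.

from dataclasses import dataclass


@dataclass
class Assumption:
    text: str
    importance: float  # 0 (trivial) .. 10 (the strategy hinges on it)
    certainty: float   # 0 (pure guess) .. 10 (well established)


def pivotal(assumptions, min_importance=7.0, max_certainty=4.0):
    """Return the assumptions that are important but uncertain --
    the quadrant where strategic vulnerabilities hide."""
    return [a for a in assumptions
            if a.importance >= min_importance and a.certainty <= max_certainty]


if __name__ == "__main__":
    board = [
        Assumption("Physicians are not price-sensitive", 9, 3),
        Assumption("Regulators will keep current approval rules", 8, 8),
        Assumption("Reps prefer the current comp plan", 3, 2),
    ]
    for a in pivotal(board):
        print("PIVOTAL:", a.text)  # prints only the first assumption
```

In a real SAST workshop the scores come from group judgment rather than code, but even a crude grid like this makes the debate concrete: anything the helper flags becomes either a debate topic or a research requirement.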

 

So, the next time you find yourself stuck in a strategic stalemate, ask yourself: what are the hidden assumptions fueling this disagreement? By making the implicit explicit, SAST provides a structured way to move beyond surface-level arguments and get to the heart of what truly matters. 

PRACTITIONER CORNER

Don't Be So Literal: A Guide to the "Ideal Type"

Let’s get something straight right away. When a sociologist or a systems thinker talks about an “ideal type,” they are absolutely not talking about their fantasy partner or a utopian society where meetings are short and the coffee is always fresh. The word “ideal” here is a bit of a head fake. It doesn’t mean “perfect” or “best.” 

So, what is it? 

An “ideal type,” a concept we’ve borrowed from the brilliant sociologist Max Weber, is essentially a measuring stick. It’s an intentionally exaggerated, one-sided, and logically pure model of… well, anything. Think of it as a caricature artist’s sketch of reality. Does it capture every nuance of a person’s face? No. But by accentuating certain features—the prominent nose, the distinct smile—it highlights what makes that face unique. 

That’s precisely what an ideal type does for complex social phenomena. It’s a theoretical construct, an abstract tool designed for one primary purpose: comparison. By holding this “pure” model up against the messy, complicated real world, we can suddenly see the similarities and, more importantly, the differences. It helps us answer the question, “Compared to what?” 

Let’s break down what it isn’t: 

  • It’s not real: You will never find a perfect ideal type walking around in the wild. It’s an idea, not a description. 
  • It’s not a goal: It’s a tool for analysis, not a target to be achieved. Chasing an ideal type is like trying to find a perfect circle in nature—a useful concept, but ultimately a fool’s errand. 
  • It’s a tool: Its entire job is to create a crystal-clear, albeit exaggerated, picture to make the chaos of reality a little easier to study. 

Where the Ideal Type Shines in Practice 

This isn’t just some dusty academic concept. It’s a secret weapon used across several systems methodologies: 

  • Soft Systems Methodology (SSM): In SSM, the “conceptual models” you build are classic ideal types. They represent a purposeful activity from a specific worldview. They aren’t a model of the real world, but an intellectual device to structure a debate about what’s really going on. 
  • Interactive Planning (IP): Russell Ackoff used this to blow the lid off self-imposed constraints. By creating an idealized design of the future, teams can break free from their assumptions about what is and isn’t possible. 
  • Viable System Model (VSM): The VSM itself acts as an ideal type—a diagnostic template for what a “viable” organization should look like. You compare your real, messy organization against this model to spot structural gaps and communication breakdowns. 
  • Critical Systems Heuristics (CSH): CSH uses a process called “ideal mapping” to expose the hidden values and ethical trade-offs behind any plan. By comparing the “is” with the “ought,” it drags buried ethical implications into the light. 

 

So, the next time you’re trying to make sense of a complex problem, consider building an ideal type. By creating an exaggerated, pure model, you can provide yourself with a stable benchmark in a sea of complexity and finally get a handle on what’s actually happening. 

THE WISDOM WHISPER

A Profound Quote from a Systems Thinker

This month, we’re diving into the deceptively simple words of Chilean biologist and philosopher Humberto Maturana: “Anything said is said by an observer.” At first glance, it seems obvious. Of course, someone has to say something for it to be said. But lurking beneath that surface is a profound challenge to our most cherished notions of truth and reality. 

Maturana isn’t just stating the obvious; he’s pulling the rug out from under the idea of pure, unadulterated objectivity. Think of it like this: you and a friend are looking at the same abstract painting. You see a chaotic mess of angry reds and blues. Your friend sees a beautiful sunset over a calm ocean. Who is right? According to Maturana, you both are. And you both are wrong. You’re not describing the painting; you’re describing your experience of the painting, filtered through your unique history, mood, and even how much coffee you had this morning. 

This is the core of second-order cybernetics. It moves the focus from the system being observed to the system that is doing the observing (that’s you, by the way). It suggests that objectivity is a bit of a myth, a noble but ultimately unattainable goal. Every statement, every “fact,” and every piece of data is brought to you by a sponsor—the observer. This has some rather significant implications: 

  • There is no “view from nowhere.” Every perspective is a view from somewhere. Your specific vantage point—your culture, your language, your training—shapes what you see and how you describe it. 
  • “Truth” becomes a question of agreement. When we agree on a description, we create a shared reality. This doesn’t make it universally true, just socially accepted within our group. 
  • Our blind spots are part of the observation. We are fundamentally unable to see what we cannot see. The very act of observing limits our perception to what our biological and cognitive structures allow. 

 

So, the next time someone confidently declares an absolute, objective truth about a complex system—whether it’s the market, politics, or why a project failed—remember Maturana. Are they describing reality as it is, or are they simply telling you the story they are capable of seeing from where they stand? The distinction is everything. 

ADVANCE YOUR CAREER

Upskill Your Thinking to Transform Your Impact

Let’s be honest: how many times have you “solved” a problem at work, only to have it pop back up three weeks later wearing a fake mustache and a different name? We call that the “Whac-A-Mole” approach to management, and while it burns a lot of calories, it rarely moves the needle. 

If you’re tired of applying quick fixes to wicked problems, it might be time to upgrade your operating model. Enter the Certified Systems Thinking Associate (CSTA) training. 

Think of this not as just another certification to gather dust on your LinkedIn profile, but as a pair of X-ray glasses for organizational complexity. While everyone else is arguing about the color of the paint, you’ll be the one pointing out that the house is built on a sinkhole. 

Here is what you can expect when you decide to take the red pill: 

  • Literacy in “System-ese”: Learn the language that actually describes how the world works, rather than how we wish it worked in our linear spreadsheets. 
  • VUCA Navigation: Volatility, Uncertainty, Complexity, and Ambiguity aren’t going away. We teach you how to surf these waves instead of drowning in them. 
  • Mapping the Territory: Master the art of differentiating between a “tame” puzzle (complicated but solvable) and a “wicked” problem (complex and slippery). 
  • The “No Silver Bullet” Reality Check: Understand why quick fixes often disguise themselves as solutions, only to fail spectacularly in the long run. 

 

This isn’t just for the data nerds or the philosophy majors. Whether you are in healthcare, tech, government, or finance, the ability to see the bigger picture, and the hidden connections within it, is the ultimate competitive advantage. 

So, are you ready to stop guessing and start understanding? 
