Let’s face it: we have all delegated tasks we didn’t want to do. But usually, when you delegate a task to an intern and they accidentally set the breakroom on fire, you can at least ask them why they thought microwaving foil was a good idea. With modern AI, specifically deep learning, we don’t get that luxury. We are increasingly handing over the keys to “black box” models where even the creators look at the output and shrug, muttering something about “hidden layers” and “neural pathways.”
This brings us to the Accountability Gap.
It’s the corporate equivalent of a shrug emoji when things go wrong. Because these systems are so opaque, determining who is responsible when an AI denies a mortgage to a creditworthy applicant or, worse, facilitates a wrongful arrest, becomes a forensic nightmare.
The confusion cuts both ways: the organization cannot trace its own system’s reasoning, and the person on the receiving end cannot contest it.
When an automated decision ruins someone’s day (or life), the lack of transparency doesn’t just annoy them; it strips them of the right to understand the rationale or challenge the decision. It creates a vacuum where responsibility goes to die.
A Systemic Cure for the Governance Headache
We aren’t just here to admire the problem. We are currently cooking up a comprehensive paper that offers a systemic view to address these governance nightmares. We’re moving beyond the “move fast and break things” mentality to something that looks a lot more like “move deliberately and fix things.”
Here is a quick snippet of the Foundational Principles for AI Governance we are developing. Consider this your exclusive teaser trailer:
To be effective, legitimate, and worthy of public trust, AI governance must be grounded in non-negotiable values, not just aspirational “nice-to-haves.”
Stay tuned for the full release, because “the algorithm made me do it” won’t hold up in the court of public opinion for much longer.
Have you ever been in a meeting where everyone agrees on the data but draws wildly different conclusions? A situation where two teams, looking at the same spreadsheet, end up in a strategic deadlock? It’s a common corporate drama, born from a simple truth: our most critical decisions aren’t based on facts alone, but on the hidden, unstated assumptions we carry into the room.
This month, we’re digging into a powerful method designed to drag those assumptions out of the shadows and into the spotlight: Strategic Assumption Surfacing and Testing (SAST). Born from a real-life crisis at a pharmaceutical company in the 70s, SAST is less of a polite discussion and more of a structured, intellectual brawl. It’s a brilliant tool for tackling “wicked problems”—the messy, ambiguous challenges with no clear answer.
The Big Idea: It’s Not the Data, It’s the Beliefs
The core philosophy of SAST is that every grand strategy or plan is built on a foundation of beliefs about its stakeholders. Think of it this way: your plan isn’t just a series of actions; it’s a bet that your customers, competitors, employees, and regulators will all behave in a certain way. SAST emerged because, when executives clashed over pricing strategies, throwing more data at them only made things worse: each side cherry-picked the numbers that confirmed its existing beliefs.
The breakthrough came when they realized the real argument wasn’t about numbers; it was about their unstated assumptions—like whether physicians were price-sensitive or not. SAST forces these assumptions into the open.
How It Works: A Four-Act Play
SAST isn’t about finding a kumbaya consensus. It’s a dialectical process that intentionally creates conflict to find a stronger, synthesized truth. It unfolds in four main phases:
1. Group formation: participants are split into teams built around genuinely different perspectives on the problem.
2. Assumption surfacing: each team identifies the stakeholder assumptions its preferred strategy depends on and rates them by importance and certainty.
3. Dialectical debate: the teams present their cases and attack one another’s assumptions rather than one another’s data.
4. Synthesis: the group negotiates a revised, shared set of assumptions on which a stronger strategy can be built.
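The surfacing step is usually supported by a simple rating exercise: plot each assumption by how important it is to the plan and how certain you are that it actually holds, then debate the high-importance, low-certainty ones first. Here is a minimal sketch of that quadrant classification in Python; the example assumptions, scores, and threshold are invented for illustration, not taken from any SAST workshop.

```python
# SAST-style assumption rating: score each assumption on two axes,
# importance (to the plan) and certainty (how confident we are it holds).
# High-importance / low-certainty assumptions deserve debate first.
from dataclasses import dataclass

@dataclass
class Assumption:
    text: str
    importance: float  # 0.0 (irrelevant) .. 1.0 (plan fails without it)
    certainty: float   # 0.0 (pure guess) .. 1.0 (near-fact)

def quadrant(a: Assumption, threshold: float = 0.5) -> str:
    """Classify an assumption into one of four rating quadrants."""
    hi_imp = a.importance >= threshold
    hi_cert = a.certainty >= threshold
    if hi_imp and not hi_cert:
        return "debate first"   # critical and shaky: SAST's main target
    if hi_imp and hi_cert:
        return "monitor"        # load-bearing but well supported
    if not hi_imp and not hi_cert:
        return "park"           # uncertain, but low stakes
    return "ignore"             # certain and unimportant

# Invented examples, echoing the pricing-strategy story above.
assumptions = [
    Assumption("Physicians are price-sensitive", importance=0.9, certainty=0.3),
    Assumption("Regulators will not cap prices", importance=0.8, certainty=0.7),
    Assumption("Sales reps prefer fewer products", importance=0.2, certainty=0.4),
]

for a in sorted(assumptions, key=lambda a: (-a.importance, a.certainty)):
    print(f"{quadrant(a):12s} | {a.text}")
```

The point of the exercise isn’t the numbers themselves; it’s that two teams rating the same assumption differently now have something concrete to argue about.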
So, the next time you find yourself stuck in a strategic stalemate, ask yourself: what are the hidden assumptions fueling this disagreement? By making the implicit explicit, SAST provides a structured way to move beyond surface-level arguments and get to the heart of what truly matters.
Let’s get something straight right away. When a sociologist or a systems thinker talks about an “ideal type,” they are absolutely not talking about their fantasy partner or a utopian society where meetings are short and the coffee is always fresh. The word “ideal” here is a bit of a head fake. It doesn’t mean “perfect” or “best.”
So, what is it?
An “ideal type,” a concept we’ve borrowed from the brilliant sociologist Max Weber, is essentially a measuring stick. It’s an intentionally exaggerated, one-sided, and logically pure model of… well, anything. Think of it as a caricature artist’s sketch of reality. Does it capture every nuance of a person’s face? No. But by accentuating certain features—the prominent nose, the distinct smile—it highlights what makes that face unique.
That’s precisely what an ideal type does for complex social phenomena. It’s a theoretical construct, an abstract tool designed for one primary purpose: comparison. By holding this “pure” model up against the messy, complicated real world, we can suddenly see the similarities and, more importantly, the differences. It helps us answer the question, “Compared to what?”
Let’s break down what it isn’t: it’s not a normative ideal (no moral approval is implied), it’s not a statistical average (it deliberately exaggerates selected features rather than summarizing typical ones), and it’s not a description of reality (no real case is expected to match it exactly, and that mismatch is the whole point).
Where the Ideal Type Shines in Practice
This isn’t just some dusty academic concept. It’s a secret weapon used across several systems methodologies: Soft Systems Methodology compares idealized conceptual models of purposeful activity against the messy real-world situation, and the Viable System Model is held up against real organizations as a diagnostic benchmark.
So, the next time you’re trying to make sense of a complex problem, consider building an ideal type. By creating an exaggerated, pure model, you can provide yourself with a stable benchmark in a sea of complexity and finally get a handle on what’s actually happening.
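As a toy illustration of the “measuring stick” idea, an ideal type can be encoded as a deliberately pure benchmark and real cases scored by their deviation from it. The attributes and scores below are invented, loosely echoing Weber’s ideal type of bureaucracy; nothing here is a prescribed method, just a sketch of the comparison logic.

```python
# Toy illustration: treat an ideal type as a benchmark vector and score
# a messy real case by how far it falls short on each exaggerated feature.
# Attributes and scores are invented for illustration.

IDEAL_BUREAUCRACY = {      # Weber-style ideal type, exaggerated on purpose
    "rule_based": 1.0,
    "hierarchical": 1.0,
    "impersonal": 1.0,
    "merit_hiring": 1.0,
}

def deviation(real_case: dict, ideal: dict) -> dict:
    """Per-attribute gap between a real organization and the ideal type."""
    return {k: round(ideal[k] - real_case.get(k, 0.0), 2) for k in ideal}

# A hypothetical startup, scored 0..1 on the same features.
startup = {"rule_based": 0.2, "hierarchical": 0.3,
           "impersonal": 0.1, "merit_hiring": 0.8}

gaps = deviation(startup, IDEAL_BUREAUCRACY)

# The largest gap answers "compared to what?": it shows where this
# organization differs most from the pure model.
print(max(gaps, key=gaps.get), gaps)
```

The value isn’t in the arithmetic; it’s that the pure model gives every comparison a fixed reference point, so “this team is informal” becomes “this team scores far from the ideal type on rule-following and impersonality.”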
This month, we’re diving into the deceptively simple words of Chilean biologist and philosopher Humberto Maturana: “Anything said is said by an observer.” At first glance, it seems obvious. Of course, someone has to say something for it to be said. But lurking beneath that surface is a profound challenge to our most cherished notions of truth and reality.
Maturana isn’t just stating the obvious; he’s pulling the rug out from under the idea of pure, unadulterated objectivity. Think of it like this: you and a friend are looking at the same abstract painting. You see a chaotic mess of angry reds and blues. Your friend sees a beautiful sunset over a calm ocean. Who is right? According to Maturana, you both are. And you both are wrong. You’re not describing the painting; you’re describing your experience of the painting, filtered through your unique history, mood, and even how much coffee you had this morning.
This is the core of second-order cybernetics. It moves the focus from the system being observed to the system that is doing the observing (that’s you, by the way). It suggests that objectivity is a bit of a myth, a noble but ultimately unattainable goal. Every statement, every “fact,” and every piece of data is brought to you by a sponsor: the observer. This has some rather significant implications: disagreements about a system are often differences between observers rather than errors about the facts, any honest account of a system has to include the one doing the observing, and claims of objectivity are best read as claims made from a particular standpoint.
So, the next time someone confidently declares an absolute, objective truth about a complex system—whether it’s the market, politics, or why a project failed—remember Maturana. Are they describing reality as it is, or are they simply telling you the story they are capable of seeing from where they stand? The distinction is everything.
Let’s be honest: how many times have you “solved” a problem at work, only to have it pop back up three weeks later wearing a fake mustache and a different name? We call that the “Whac-A-Mole” approach to management, and while it burns a lot of calories, it rarely moves the needle.
If you’re tired of applying quick fixes to wicked problems, it might be time to upgrade your operating model. Enter the Certified Systems Thinking Associate (CSTA) training.
Think of this not as just another certification to gather dust on your LinkedIn profile, but as a pair of X-ray glasses for organizational complexity. While everyone else is arguing about the color of the paint, you’ll be the one pointing out that the house is built on a sinkhole.
So what can you expect when you decide to take the red pill?
This isn’t just for the data nerds or the philosophy majors. Whether you are in healthcare, tech, government, or finance, the ability to see the bigger picture, and the hidden connections within it, is the ultimate competitive advantage.
So, are you ready to stop guessing and start understanding?