Inclusive Leadership in a "Risk Society"
Feb 03, 2026

In a "risk society," the biggest threats to organizational performance are increasingly manufactured, systemic, and unevenly distributed, which means they can't be managed by technical controls and expertise alone.
This article reframes inclusive leadership as a core risk capability: the collective stewardship of culture that enables the organization to sense earlier, interpret better, decide under pressure, and adapt without losing legitimacy, fairness, and human accountability. That matters because today's "polycrisis" dynamics destabilize the assumptions most operating models were built on (predictable conditions, stable regulation, reliable data, shared facts).
Two ambient pressures make the challenge acute: AI is becoming decision infrastructure, and climate volatility is now operational volatility, forcing recurring trade-offs across safety, supply chains, infrastructure, and social license.
The practical implication: organizations whose cultures throttle truth, flatten disagreement, and push trade-offs downward will face mounting failures until disruption becomes chronic. To turn these insights into action, start by testing whether your culture is functioning as risk infrastructure, then build leadership capability in the specific pressure scenarios where your organization actually fails. Finally, upgrade your teaming model for volatility.
In stable eras, leadership can focus on optimization: refine the model, tighten the process, move faster. But we are no longer operating in a stable era. We are operating in what sociologist Ulrich Beck called a "risk society": a world in which many of the most consequential threats are not external shocks but manufactured risks, produced by modernization itself, distributed unevenly, and increasingly hard to contain with the old assumptions, approaches, and tools.
That shift changes what organizations are managing and what leadership is.
In a risk society, the challenge is not only technical risk (models, controls, compliance); it is also interpretive and social: under pressure, people must be able to agree on what is real, what is safe, what is fair, and what commitments still hold even when they become expensive. When that agreement breaks down, coordination fails, and risk accelerates.
This is where inclusive leadership needs a hard reframing. Not as a moral posture, not as an HR-adjacent "nice to have," and not as something tied only to narrowly defined diversity categories. In a risk society shaped by polycrisis dynamics, AI acceleration, and climate instability, inclusive leadership is best understood as a core organizational capability: the collective stewardship of culture that enables organizations to sense better, decide better, and adapt without sacrificing legitimacy, fairness, and human accountability.
The polycrisis effect: when stable systems become unreliable
The term polycrisis is catching on because it names what many leaders are already living: you're not dealing with one big problem at a time, but with several crises hitting at once and colliding in ways that make each one worse. Adam Tooze helped popularize the term to describe this kind of entanglement, in which shocks do not remain contained but cascade across systems, making each increasingly unstable.
In practical organizational terms, polycrisis means the assumptions your operating system was built on (i.e., predictable weather, stable regulation, steady geopolitics, reliable data, shared facts, stable talent, and customer landscapes) are degrading simultaneously. As a result, certainty erodes, risk increases, and trust destabilizes, because trust depends on predictability and credible commitments.
Two ambient pressures intensify this operating reality:
- AI as decision infrastructure, not an "IT rollout". AI is increasingly embedded in workflows that translate reality into decisions: triage, recommendations, approvals, denials, hiring, scheduling, credit, claims, procurement, and compliance. It standardizes language, accelerates tempo, and can create false alignment ("the summary says we agreed") while weakening dissent through automation deference ("the model said so"). The National Institute of Standards and Technology AI Risk Management Framework exists precisely because AI risk is not only technical—it is socio-technical, organizational, and governance-related.
- Climate volatility is operational volatility. Climate change is no longer a distant scenario variable; it is already driving widespread impacts and escalating risks. The Intergovernmental Panel on Climate Change makes clear that adaptation and resilience are not optional as extremes compound across infrastructure, health, supply chains, and mobility.
Even the World Economic Forum (WEF) frames today's risk landscape through structural forces, including technological acceleration and climate disruption, as well as geostrategic and economic fragmentation. The implication is simple and uncomfortable: reliability can no longer be treated as a baseline assumption. Confidence can no longer be assumed.
Why inclusive leadership becomes decisive under heightened risk
In polycrisis conditions, organizations can fail because their culture becomes a risk multiplier. These questions become decisive:
- How accurately do we detect weak signals at the periphery before they become incidents or serious threats?
- How fast do warnings travel into real decision arenas (not just reporting lines)?
- How well do leaders interpret what they're seeing, and how reliably can they distinguish "noise" from meaningful patterns?
- Whose reality is legible enough to count as "data"?
- Do dissent and uncertainty remain visible—or get socially filtered out?
- In human–AI workflows, is accountability exercised—or quietly diffused into "the system"?
They are not technical questions. They are cultural and governance questions.
If inclusion is treated primarily as representation or interpersonal decency, it will remain peripheral precisely when it matters most, namely during pressure, trade-offs, and ambiguity. However, if inclusion is treated as epistemic capacity, i.e., the ability to integrate different lived experiences and viewpoints into shared sensemaking and decision-making, it becomes central to performance, legitimacy, and resilience.
Inclusive leadership changes what information enters decisions, whose warnings are heard as credible, and how trade-offs are governed. Two examples illustrate this:
The AI claims concierge and the legitimacy gap
A global insurer launches an AI "claims concierge" to reduce cycle time and cost. The pilot looks strong: faster throughput, fewer handoffs, higher straight-through processing. Leaders read the dashboard and conclude the rollout is working.
Then the customer reality splits.
In one market, customers experience the concierge as modern competence. They value speed, accept self-service, and tolerate short explanations as long as outcomes feel generally fair. A denial citing policy language may be disappointing, but it doesn't automatically feel illegitimate. The interaction is interpreted as transactional: efficient, predictable, "good enough."
In another market, the same workflow lands very differently—especially at the moment of denial. Customers are not only asking, "Did I get paid?" They're asking, implicitly, "Was I heard?" Claims are stressful moments: injury, loss, financial shock. People expect the ability to explain context, not just upload documents. They expect dignity and contestability: reason-giving that feels human, and a clear, respectful path to review when the system says no.
When those expectations aren't met, the denial feels arbitrary and evasive. Complaints rise. Social media amplifies the story. Regulators—especially in jurisdictions where automated decision scrutiny is increasing—begin asking how decisions are made and explained. (In the EU context, this scrutiny is reinforced by the European Union AI Act's risk-based approach and governance expectations for certain AI uses.)
At this point, non-inclusive leadership often defaults to a predictable defense: the tool is "accurate," adoption is strong, and "outliers" exist. The organization tweaks tone and messaging while leaving the legitimacy gap intact.
Inclusive leadership responds differently because negative experiences are treated as risk signals, not as reputational inconveniences. Leaders widen the truth pipeline: frontline claims staff who hear the anger, customer advocates who can articulate dignity failures before dashboards do, local compliance that recognizes when complaint patterns become regulatory exposure, data science that can test whether denial clusters and appeal overturns indicate a mismatch, and risk/legal that owns explainability and accountability. The goal is not to be "nicer." The goal is to interpret reality early enough to adapt.
That broader sensing changes the solution. Instead of "educating customers," leaders redesign the decision experience so it remains legitimate under local expectations. Denials move from thin citations to layered explanations customers can understand: what the decision relied on, what would change it, and how to contest it without humiliation. In markets where voice and relational repair matter, human review paths become standard for denials and high-stakes categories—not favors granted only to those who escalate.
Governance changes, too. Leaders insist that disagreement remains visible: minority views on fairness and customer impact are recorded rather than smoothed over into false consensus. Tripwires are defined so action happens before crisis: complaint ratios, appeal overturn rates, and "can customers paraphrase the reason for denial?" become monitored controls. The insurer still uses AI, but in a way that maintains accountability with humans and trust intact.
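As a minimal sketch of what "monitored controls" could mean in code, the tripwire idea above can be expressed as explicit thresholds that trigger escalation before a crisis. All names and threshold values here are illustrative assumptions, not figures from the article; real values would come from baselines, risk appetite, and regulatory expectations.

```python
from dataclasses import dataclass

@dataclass
class Tripwire:
    """A monitored control: fires when a legitimacy metric breaches its threshold."""
    name: str
    threshold: float
    higher_is_worse: bool = True  # some metrics (e.g., comprehension) are bad when LOW

    def check(self, value: float) -> bool:
        return value > self.threshold if self.higher_is_worse else value < self.threshold

# Illustrative thresholds only; real values come from baselines and regulatory context.
TRIPWIRES = [
    Tripwire("complaint_ratio", threshold=0.04),       # complaints per claims decision
    Tripwire("appeal_overturn_rate", threshold=0.10),  # overturns suggest model/reality mismatch
    # Share of customers who can paraphrase why they were denied; low values signal a legitimacy gap.
    Tripwire("denial_comprehension", threshold=0.70, higher_is_worse=False),
]

def review(metrics: dict[str, float]) -> list[str]:
    """Return the names of tripwires that fired, so action happens before crisis."""
    return [t.name for t in TRIPWIRES if t.name in metrics and t.check(metrics[t.name])]

alerts = review({"complaint_ratio": 0.06,
                 "appeal_overturn_rate": 0.08,
                 "denial_comprehension": 0.55})
print(alerts)  # complaint_ratio and denial_comprehension breach their thresholds
```

The design point is that the thresholds and escalation logic are written down and governed, rather than living in individual judgment calls that evaporate under pressure.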
Manufacturing, climate disruption, and the trade-off problem
Now consider a manufacturing business facing repeated heatwaves and flooding that disrupt suppliers and threaten worker safety. Initially, leaders treat disruptions as episodic business continuity incidents—spikes to be managed and moved past. But the pattern doesn't resolve. Heatwaves recur. Flood zones shift. Transport corridors fail. Supplier reliability becomes uneven. What looked like interruptions becomes persistent adaptive pressure.
Real trade-offs emerge: production targets versus safety protocols, cost discipline versus resilience investment, short-term output versus long-term viability.
Here again, the risk is not only physical and logistical. It is interpretive and social. Impacts are uneven, and that unevenness becomes the key to whether the firm adapts intelligently or drifts into chronic disruption.
Frontline workers bear the physical risks first: heat stress, dehydration, unsafe machinery temperatures, and fatigue that increase incident rates. Local plant leaders know which mitigations are feasible and which are fantasy (often those designed at a distance): whether shift schedules can change, where cooling investments actually matter, which safety thresholds are realistic and enforceable.
Procurement often detects fragility in lower-tier suppliers long before finance does, because distress shows up as late deliveries, quality slippage, and "temporary" capacity constraints. Communities and local authorities experience environmental and health burdens (air quality, flooding impacts, emergency response strain) which shape the firm's social license to operate.
If executive decisions don't include these lived experiences early, the company will over-index on spreadsheet certainty and under-index on real-world fragility—until the system fails in ways that are expensive and hard to reverse.
Non-inclusive leadership tends to centralize the narrative: "We need consistency." "Sites are overreacting." "We can't afford slowdowns." Safety becomes local friction rather than a strategic constraint. Supplier disruption becomes "a procurement problem" rather than systemic exposure.
Inclusive leadership, in contrast, treats adaptation as a collective sensing and decision capability. Leaders build routines that surface weak signals across the network before they become outages: heat-related near-misses, absenteeism patterns, repeated infrastructure failures, supplier tier stress. Trade-offs are made explicit rather than silently forced downward. If output targets conflict with worker safety thresholds, it isn't left to plant managers to improvise in the moment; it becomes an enterprise rule that is understood, defensible, and consistent.
Investments in cooling, flood protection, backup logistics, and supplier diversification are no longer framed as "nice-to-have resilience." They are treated as the cost of staying operational and legitimate in a changed climate regime. Communication shifts externally as well: credibility depends on demonstrating that adaptation is real, not rhetorical.
These are different industries facing different types of risk: one digital and customer-facing, one physical and operational. But the failure mode is the same in both cases. Leaders confuse technical performance with legitimacy. They see that the system is "working" according to internal metrics (speed, throughput, cost, compliance) and they assume that means the organization is safe. But it isn't. A process can be technically efficient while becoming socially and institutionally fragile.
They also confuse dashboards with reality. Dashboards report what the organization has already chosen to measure, in the categories it already knows how to count. They rarely capture what matters most early in a risk society: dignity failures, trust erosion, suppressed dissent, weak signals at the periphery, or the moment when stakeholders stop granting the benefit of the doubt. By the time those show up in the numbers, the organization is often already behind the curve.
The corrective move is the same in both cases: leaders must deliberately widen and protect the organization's "truth pipeline." This requires that
- Information from the edges (including frontline staff, customers, local sites, suppliers, communities, and risk functions) can travel quickly into decision-making without being filtered by hierarchy, politics, or automation bias.
- Dissent and uncertainty remain visible rather than being smoothed into false consensus.
- What the organization learns is translated into explicit design changes, decision rules, and accountable trade-offs, rather than being left as a set of "lessons learned" that never alter how decisions are made.
Building that strategic capability means that leaders deliberately create the cultural foundation (the "human operating system") that values difference as a signal, treats subjective experience as data, practices transparency and accountability, and keeps the organizational narrative aligned with experienced reality.
Inclusive leadership in a risk society is the discipline of collective stewardship of culture, so organizations can adapt under pressure without losing the trust and accountability that make coordination possible.
What an inclusive culture looks like when it's real
In a "risk society", an inclusive culture can't remain abstract. It has observable properties:
- Truth travels fast—without social penalty. Bad news is treated as an operational signal, not personal disloyalty.
- Decision records preserve uncertainty and disagreement. What's known, assumed, unknown, and disputed stays explicit—so AI summaries and hierarchy don't flatten nuance.
- Protected voice is enforceable, not performative. Retaliation (including soft retaliation) is actively prevented and sanctioned.
- Trade-offs are explicit and governed. People can predict how tensions will be resolved, and who bears costs first is visible.
- Legitimacy is distributed. Proximity, similarity, and legacy status are not shortcuts to credibility.
- Human accountability holds in human–AI systems. No "the system decided" escape hatch—clear ownership across design, deployment, oversight, and outcomes.
From insight to action
In a risk society, what matters is whether your organization can detect weak signals early, keep dissent visible, and make defensible trade-offs before pressure turns into damage. That is not a communications task. It is an operating-system task: test your culture, then build the capability where you are currently most fragile.
Here are three specific ways you can put these insights into action:
1. Run a cultural diagnostic: "Is our culture reducing risk—or amplifying it?"
Start with a 15–20 minute pulse. Remember that you are not measuring "engagement." You are measuring whether your culture functions as risk infrastructure. Score six dimensions:
- Truth pipeline health: How quickly and accurately inconvenient information travels from the edges into real decision forums.
- Dissent integrity: Whether disagreement survives hierarchy, politics, and AI summarization—without being relabeled as "negativity."
- Psychological and career safety: Whether people expect penalties (including soft retaliation) for creating friction, escalating, or slowing momentum.
- Trade-off governance: Whether tensions (speed vs. safety, cost vs. resilience, efficiency vs. dignity) are explicit, predictable, and defensible.
- Legitimacy alignment: Whether the internal story leaders tell matches the lived experience of customers, workers, sites, and communities.
- Human–AI accountability: Whether ownership is clear across design, deployment, monitoring, escalation, and outcomes—so there is no "the system decided" escape hatch.
2. Build context-sensitive leadership capability where failure actually happens.
Stop training inclusive leadership as a generic interpersonal skill. That approach produces leaders who can say the right things and still make the same brittle decisions under pressure. Instead, build inclusive leadership as a situational risk capability, grounded in your own incidents and near-misses: AI rollout disputes, customer escalations, plant safety dilemmas, supplier tier fragility, regulatory inquiries.
3. Practice adaptive teaming: "Is our coordination designed to adapt, or to look 'aligned'?"
Most organizations still make decisions through static teaming: fixed committees, fixed cadences, fixed escalation routes, and the same small set of voices. That structure creates apparent alignment while reality shifts underneath it.
Adaptive teaming is the disciplined alternative. It means designing teams and decision-making routines that can expand, contract, and reconfigure quickly in response to the risk signal, not the organization chart or established norms. The outcome is practical: faster sensing, cleaner accountability and transparency, trade-offs made explicitly, and enhanced responsiveness.
_____
References
Beck, U. (1992). Risk society: Towards a new modernity (M. Ritter, Trans.). SAGE Publications.
Douglas, M., & Wildavsky, A. (1983). Risk and culture: An essay on the selection of technological and environmental dangers. University of California Press.
European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of June 13, 2024, laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). EUR-Lex.
Intergovernmental Panel on Climate Change. (2023). Climate Change 2023: Synthesis Report. Contribution of Working Groups I, II, and III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change (Core Writing Team, H. Lee & J. Romero, Eds.). IPCC. doi:10.59327/IPCC/AR6-9789291691647
National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0) (NIST AI 100-1). U.S. Department of Commerce. doi:10.6028/NIST.AI.100-1
Tooze, A. (2022, October 29). Chartbook #165: Polycrisis – thinking on the tightrope. adamtooze.substack.com
World Economic Forum. (2026). The Global Risks Report 2026 (21st ed.). World Economic Forum.