The Shifting Value of People
Mar 12, 2026

The discussion about AI usually starts with tools and skills. But the more consequential place to start is with economics.
The World Economic Forum's Future of Jobs Report 2025 finds that employers expect 39% of workers' core skills to change by 2030, and projects 170 million roles created and 92 million displaced over the same period.
McKinsey's 2023 analysis argues that generative AI and related technologies could automate tasks that currently account for 60–70% of employees' time. Meanwhile, robotics adoption continues to rise: the International Federation of Robotics reported 542,000 industrial robots installed worldwide in 2024, more than double the level a decade earlier, and nearly 200,000 professional service robots sold in 2024.
These numbers matter. But the real shift is not just technological disruption. It is a reconfiguration of the relationship between human labor, economic productivity, and organizational value (and values).
This is the more consequential leadership question in the age of AI: not only what can be automated, but how organizations will redefine human worth, contribution, and significance as productivity becomes less tightly tied to broad-based human effort.
Rethinking the relationship between work, productivity, and value
For most of modern economic history, there was a relatively stable intuition beneath growth: if you wanted more output, you needed either more labor or more output per labor hour. The U.S. Bureau of Labor Statistics still defines labor productivity in exactly those terms: output relative to hours worked. In that world, the link between people, work, and GDP was direct enough to feel natural. Machines amplified labor. Software accelerated labor. But the human being remained the central productive unit.
AI begins to loosen that link. Generative AI introduces non-human systems that can perform cognitive work at scale: drafting, summarizing, coding, classifying, analyzing, and increasingly coordinating. Embodied AI extends that logic into the physical world. Boston Dynamics' Stretch was built specifically for warehouse automation and is marketed for continuous case handling; BMW has publicly tested Figure's humanoid robots in production, and said recently that Figure robots in Spartanburg had supported production under real-world conditions, including ten-hour shifts and assistance on more than 30,000 BMW X3 vehicles.
That matters because it changes the productive equation. It becomes increasingly possible to generate more output without proportional growth in human labor hours. In principle, GDP can rise while employment growth slows, as fewer people are needed to produce the same output, or as labor's role shifts from execution toward oversight, judgment, exception handling, and coordination. The concept of productivity itself does not disappear. It becomes less tightly coupled to a broad base of human effort.
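The decoupling described above can be made concrete with a toy calculation, a minimal sketch using the BLS definition of labor productivity (output per hour worked). The figures here are illustrative, not sourced: if output grows while hours worked stay flat, measured productivity rises even though the human labor base has not expanded.

```python
def labor_productivity(output: float, hours_worked: float) -> float:
    """BLS-style labor productivity: output relative to hours worked."""
    return output / hours_worked

# Illustrative (made-up) figures: output grows 20% while hours stay flat.
year1 = labor_productivity(output=1_000_000, hours_worked=50_000)  # 20.0
year2 = labor_productivity(output=1_200_000, hours_worked=50_000)  # 24.0

growth = (year2 - year1) / year1
print(f"Productivity grew {growth:.0%} with zero growth in hours worked")
```

The arithmetic is trivial by design: the point is that the ratio can improve without any change to the denominator, which is exactly the loosening of the labor-output link the paragraph describes.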
Inside organizations, value is social before it is economic
And that is where the organizational conversation becomes significant, because value inside institutions is never just economic: it is social, symbolic, and yes, also political. Organizations decide value through standards, decisions, rationalizations, and relational codes. They define what counts as strategic, future-ready, scalable, high-potential, executive. They decide whose judgment is trusted, what evidence is treated as legitimate, whose mistakes are read as learning, and whose as a deficiency. They rationalize outcomes afterward with phrases like "best fit," "greater readiness," "stronger presence," or "future-ready."
Pierre Bourdieu's language of symbolic capital is useful here because it shows how prestige, legitimacy, and recognition accumulate socially and then get misrecognized as natural qualities. This is why the current moment may feel more destabilizing than a normal reskilling cycle. The standards of value and significance are moving fast.
And value almost always moves through scarcity. Some scarcity is real. Certain capabilities are genuinely in short supply relative to demand: advanced AI engineering, model governance, systems integration, and certain forms of high-stakes judgment. That is ordinary market logic. But some scarcity is created. Organizations restrict access to promotions, succession pipelines, strategic forums, sponsorship, stretch assignments, and elite development tracks. Scarcity here is not an accident. It is one of the ways institutions preserve hierarchy and distinction. That is precisely the kind of mechanism Bourdieu had in mind.
And some scarcity is imagined. This is where many firms deceive themselves. They talk as if "strategic mindset," "innovation capability," or "executive presence" were objectively rare, when in practice these categories are often vague, culturally coded, and unevenly recognized. Their scarcity hardens because institutions decide to treat them as scarce, then pour recognition and investment into the people already perceived to embody them. The circle tightens, and then leadership mistakes the result for meritocratic clarity.
That is not talent management. It is social selection dressed up as objectivity.
Now add AI to that dynamic, and the problem becomes sharper. AI does not just automate tasks. It can intensify value concentration. A small number of highly leveraged employees with strong AI tooling can now produce output that once required larger teams. One person can do the work of several analysts. A compact product team can build tools that once needed large support structures. A manager can oversee more standardized processes with fewer coordinators. This is where the historical link between labor and output starts to fray in lived organizational experience: output remains, headcount pressure rises, and fewer roles feel central.
At the macro level, that also raises the question of labor's share. OECD work has documented changes in labor share across advanced economies and linked some of the pressure to technology and the rise of "superstar" firms with lower labor shares. The point is not that every economy is on a simple, one-directional path. The point is that when technology allows value to be concentrated in capital, platforms, or a few highly leveraged firms, the old assumption that productivity gains will broadly track labor gains becomes less reliable.
When automation changes the meaning of human work
When leaders say they are "investing in the future," they are really redrawing the boundary of who counts. A routine example makes this visible. In customer service, chatbots absorb straightforward interactions. What reaches the human agent is no longer the simple password reset or balance query. It is the confused customer, the angry customer, the grieving customer, the exception, the ethical edge case. On paper, AI has removed lower-value work. In practice, it has raised the relational and judgmental burden of the remaining human work.
The same pattern appears in healthcare. Image analysis may become increasingly machine-assisted, but patients will still need explanation, reassurance, interpretation, and trust. In education, AI may deliver content and practice at scale, but teachers are still needed to motivate students, diagnose confusion, calibrate confidence, and provide social containment for learning.
In warehousing and manufacturing, embodied AI can take over repetitive transport or handling tasks. Still, humans remain critical for supervision, exception handling, coordination, safety, and responsibility when conditions stop being predictable.
If leaders continue to measure humans only by throughput, they devalue the very part of the role that matters most now.
This makes Allison Pugh's The Last Human Job so relevant right now. Pugh argues that many professions rely on what she calls connective labor: the work of creating recognition, empathy, and human acknowledgment in interactions that would otherwise become purely transactional. Her point is not sentimental. It is structural. In a more automated world, connective labor becomes more visible as a source of trust precisely because machines still struggle to reproduce mutual recognition in the human sense.
What does it mean when people belong but matter less?
That moves the conversation beyond belonging to mattering. Belonging asks whether I am included. Mattering asks whether I am consequential. Gordon Flett's and Zach Mercurio's work treats mattering as a fundamental human need: the need to feel significant to others and to the broader system. That distinction matters enormously in AI-era organizations. People can still belong while mattering less. They can still be included in meetings, copied on updates, and retained on payroll while their work is reclassified as support, legacy, automatable, or peripheral. They may still be inside the circle socially while becoming inconsequential structurally.
And that is corrosive, because people can tolerate strain more easily than irrelevance.
Naomi Eisenberger's review of social pain is useful here. Experiences of rejection, exclusion, or loss of social connection draw on some of the same neural systems involved in physical pain. That does not mean every organizational slight is trauma. It means that devaluation is not a minor psychological inconvenience. Human beings are built to register status loss and social diminishment as serious.
So when leaders now say, "We are becoming AI-first," some employees hear, "Human judgment is secondary." When leaders say, "We need fewer coordinators and more strategists," people do not just hear a resource allocation decision. They hear a ranking of worth. And leaders at the top are especially prone not to notice this. Research by See and colleagues found that power increases confidence in one's own judgments and reduces advice taking. In other words, the more senior the leader, the easier it becomes to mistake one's own rationalization for reality.
This is where inclusive leadership has to be understood far more strategically than most organizations currently understand it.
The strategic choice in the age of AI
Inclusive leadership is not just about interpersonal warmth, better listening, or avoiding exclusionary behavior. Those things matter, but they are the shallow end of the pool. In an AI economy, inclusive leadership is about how widely an organization distributes access to consequential contribution.
- Who gets access to AI tools first?
- Who gets trained, and who gets left behind?
- Who is invited to help redesign workflows?
- Who gets to shape the standards of future value?
- Whose capabilities are assumed to be expandable, and whose are obsolete?
That is inclusiveness in its strategically serious form.
Take two organizations facing the same technological transition. In the first, AI access is concentrated among a small elite: data teams, a few strategy leaders, a handful of innovation champions. Everyone else is expected to adapt later. In the second, the organization deliberately broadens AI literacy, invites frontline and middle-layer staff into process redesign, and treats transition as a capability diffusion challenge rather than a selection exercise. The first organization concentrates value. The second expands it.
That difference is not cosmetic. It affects resilience, legitimacy, and future productivity.
Carlota Perez's work on technological revolutions is relevant here. Her argument, in simplified form, is that new technologies often pass through an early period of intense concentration before broader deployment and institutional adjustment create more durable gains. The early winners tend to concentrate the advantage. The later, more stable phase comes when institutions learn how to diffuse capability more widely.
That is the real strategic choice now: organizations can use AI and embodied AI to shrink the circle of significance, concentrating opportunity among those already closest to power, already legible as "future talent," already sponsored, already fluent in the new language of value.
Or they can use this moment to widen the field of contribution. That does not mean pretending scarcity is fake. Some scarcity is real. But a thriving future will depend less on worshipping scarcity than on deciding where value can be expanded.
That means:
- Broadening AI literacy instead of hoarding it.
- Redesigning roles so more people move upward into judgment, sensemaking, and relational work rather than downward into disposability.
- Protecting and revaluing connective labor rather than crushing it under throughput metrics.
- Treating mattering as a strategic design variable, not a morale issue.
- Recognizing that inclusive leadership, properly understood, is not about making everyone feel comfortable. It is about building systems that enable more people to contribute meaningfully to consequential work.
The value of people is shifting. That is real. The harder question is whether leaders will allow that shift to become a quiet narrowing of human worth. AI does not and should not decide that. Leadership does.
In other words, the age of AI is not only forcing organizations to rethink productivity. It is forcing them to decide whether future value will be concentrated among a narrower elite, or whether more people will be enabled to matter in new ways.
References
Acemoglu, D., & Restrepo, P. (2020). Robots and jobs: Evidence from US labor markets. Journal of Political Economy, 128(6), 2188–2244. https://doi.org/10.1086/705716
Acemoglu, D., & Restrepo, P. (2022). Tasks, automation, and the rise in US wage inequality. Econometrica, 90(5), 1973–2016. https://doi.org/10.3982/ECTA19815
Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3–30. https://doi.org/10.1257/jep.29.3.3
Bourdieu, P. (1986). The forms of capital. In J. G. Richardson (Ed.), Handbook of theory and research for the sociology of education (pp. 241–258). Greenwood. https://web.stanford.edu/~eckert/PDF/Bourdieu1986.pdf
Brynjolfsson, E., Li, D., & Raymond, L. (2023). Generative AI at work. National Bureau of Economic Research Working Paper No. 31161. https://doi.org/10.3386/w31161
Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. W. W. Norton & Company.
Eisenberger, N. I. (2012). The neural bases of social pain: Evidence for shared representations with physical pain. Nature Reviews Neuroscience, 13(6), 421–434. https://doi.org/10.1038/nrn3231
Flett, G. L. (2018). The psychology of mattering: Understanding the human need to be significant. Academic Press.
International Federation of Robotics. (2025). World robotics report 2025. https://ifr.org/worldrobotics
Keynes, J. M. (1963). Economic possibilities for our grandchildren. In Essays in persuasion (pp. 358–373). W. W. Norton & Company. (Original work published 1930)
McKinsey Global Institute. (2023). The economic potential of generative AI: The next productivity frontier. McKinsey & Company. https://www.mckinsey.com/mgi/our-research/the-economic-potential-of-generative-ai-the-next-productivity-frontier
Mercurio, Z. (2024). The power of mattering: How leaders can create a culture of significance. Harvard Business Review Press.
Organisation for Economic Co-operation and Development. (2019). Has the labour share declined? OECD Publishing. https://www.oecd.org/en/publications/has-the-labour-share-declined_2dcfc715-en.html
Perez, C. (2002). Technological revolutions and financial capital: The dynamics of bubbles and golden ages. Edward Elgar Publishing.
Pugh, A. J. (2024). The last human job: The work of connecting in a disconnected world. Princeton University Press.
See, K. E., Morrison, E. W., Rothman, N. B., & Soll, J. B. (2011). The detrimental effects of power on confidence, advice-taking, and accuracy. Organizational Behavior and Human Decision Processes, 116(2), 272–285. https://doi.org/10.1016/j.obhdp.2011.07.009
U.S. Bureau of Labor Statistics. (2024). Productivity and costs. https://www.bls.gov/productivity
World Economic Forum. (2025). The future of jobs report 2025. World Economic Forum. https://www.weforum.org/publications/the-future-of-jobs-report-2025/