AI and the Future of Work: A Summary

Updated: Jan 3


This summary was generated from a panel discussion hosted by CUNY (City University of New York), recorded on October 7, 2025, and posted to YouTube on October 28, 2025.


Session description: Artificial Intelligence is developing at breakneck speed, causing much anxiety about how our society and daily lives may change in the not-too-distant future. Top of mind for many: jobs. A panel of experts brings the speculation down to earth, addressing questions such as: What jobs will AI replace? What new jobs will be created? How will AI affect workplace conditions, wages, unions, and the overall economy? Featuring Daron Acemoglu, Nobel laureate and professor of economics at MIT; Paul Krugman, Nobel laureate, former New York Times columnist, and research professor of economics at the CUNY Graduate Center; Danielle Li, David Sarnoff Professor of Management of Technology at MIT; and Zeynep Tufekci, professor of sociology and public affairs at Princeton University and a New York Times columnist. Moderated by Steven Greenhouse, former New York Times labor reporter and author of Beaten Down, Worked Up: The Past, Present, and Future of American Labor.


SUMMARY


This panel brought together leading economists and sociologists—Daron Acemoglu, Danielle Li, Zeynep Tufekci, and Paul Krugman—moderated by Steven Greenhouse, to examine how artificial intelligence is reshaping work, inequality, and society. Rather than offering a single prediction, the discussion emphasized uncertainty, choices, and trade-offs. The panelists broadly agreed that AI is a transformative technology, but disagreed sharply on its likely trajectory, pace of impact, and social consequences.


1. AI as a Choice, Not a Destiny

Daron Acemoglu framed the discussion by arguing that the future of AI is not predetermined. Unlike natural phenomena, AI’s effects depend on deliberate design and policy choices. He contrasted two broad trajectories:

  • Automation-focused AI, aimed at replacing human labor by making machines increasingly human-like.

  • Pro-worker AI, designed to complement, amplify, and support human capabilities.


Acemoglu warned that the dominant industry narrative around artificial general intelligence (AGI) largely advances an automation agenda. Even without achieving true AGI, AI systems can automate enough tasks to eliminate jobs, suppress wages, and increase inequality. This, he argued, is not inevitable but reflects current incentives and business models—particularly those of large technology firms.


In contrast, pro-worker AI could raise productivity while strengthening labor demand, improving job quality, and reducing inequality. However, Acemoglu emphasized that this path is currently underdeveloped because it does not align well with prevailing profit models, especially digital advertising.


2. What Is AI Actually Good For?

Paul Krugman questioned what AI is concretely good at, rather than treating it as an abstract force. He compared current AI enthusiasm to earlier technology hype cycles, notably the dot-com bubble. While acknowledging AI’s impressive capabilities, he stressed that productivity gains and improved living standards have been disappointing over the past two decades, despite rapid technological advances.


Krugman expressed hope that AI might finally deliver meaningful growth, but warned of major dislocations. Drawing lessons from the “China shock” and past technological change, he emphasized that economic churn is not costless. Large groups of workers can suffer long-term harm even if the overall economy eventually benefits.


3. AI Is Not Human Intelligence

Zeynep Tufekci strongly challenged the assumption that AI follows a linear path toward human-like intelligence. She argued that generative AI systems, especially large language models (LLMs), do not resemble human cognition, even if they can mimic human language.


This distinction matters because AI’s failure modes are fundamentally different from those of humans. LLMs can produce output that appears highly plausible while being entirely wrong, fabricating citations or making errors that humans typically would not. As a result, AI works best in verifiable domains—such as coding—where errors can be quickly detected. In high-stakes, non-verifiable domains involving liability (medicine, law, education), these systems remain deeply problematic.


Tufekci argued that AI is already disruptive not because it replaces workers, but because it breaks social and institutional assumptions—for example, assumptions about authorship, authenticity, and trust.


4. The Disruption of Institutions

One of Tufekci’s most forceful points was that AI disrupts society by breaking correlations that institutions rely on. Examples include:

  • Cover letters no longer signaling genuine interest, because AI can generate them instantly.

  • Essays no longer demonstrating learning or effort, because students can outsource the process.

  • Photos and videos no longer serving as reliable evidence, due to AI-generated media.


When these signals break down, institutions respond by reinstating gatekeeping through other means—elite credentials, personal networks, or centralized authority. This, she warned, may increase inequality and exclusion, even if AI appears democratizing on the surface.


5. AI as Stored Human Expertise

Danielle Li offered a more optimistic yet cautious perspective by focusing on how AI systems are trained. Unlike traditional software, many AI models learn by observing human labor and expertise. In this sense, AI can be seen as a repository of human experience.


Li illustrated this with healthcare: an AI trained on expert doctors’ diagnostic decisions could extend high-quality care to underserved populations. This framing highlights AI’s potential to scale expertise across time and space, overcoming human limitations.


However, Li also warned that this dynamic risks creating a superstar economy, where the contributions of a few highly skilled individuals are captured and monetized at scale, while others become marginal. The central policy challenge, she argued, is how to distribute the gains from this stored expertise more equitably.


6. Has AI Already Changed Work?

Asked whether AI has already had significant labor-market effects, Acemoglu noted that while underlying AI models are advancing rapidly, useful applications have been slower to emerge. Chatbots are widespread, but integration into real workplaces remains difficult.


In practice, AI struggles with task coordination and human-machine interfaces. Occupations involve bundles of tasks, and AI often performs only some of them well. This creates friction rather than seamless replacement. Still, early evidence suggests that entry-level hiring in some fields, such as coding, may be slowing, signaling potential longer-term shifts.


7. Is the AI Investment Boom a Bubble?

Krugman expressed skepticism about the massive investments being poured into AI, including hundreds of billions of dollars for data centers. He argued that the behavior of tech firms resembles classic speculative bubbles, driven by fear of missing out rather than clear business fundamentals.


While acknowledging AI’s transformative potential, Krugman warned that many firms may ultimately regret these investments, especially as companies increasingly rely on debt rather than retained earnings. The unresolved question is whether viable revenue models exist to justify the scale of spending.


8. Which Jobs Are Most Affected?

Li resisted making simple predictions about which occupations will be most affected. Instead, she encouraged individuals to think in terms of tasks rather than jobs. Workers should ask which parts of their job AI could plausibly perform, what tasks would remain, and whether those remaining tasks would be enjoyable, valuable, and scalable.


She illustrated this with examples:

  • Surgeons may not be replaced, but AI-driven guidance could deskill their judgment, turning them into executors rather than decision-makers.

  • Consultants may rely less on analysis and slide production and more on social skills, persuasion, and client relationships.


In many cases, AI leads not to up-skilling or de-skilling but to re-skilling, reshaping what counts as valuable labor.


9. Conversational AI as a New Power Center

Tufekci emphasized that conversational AI is historically unprecedented. For the first time, machines can engage humans in fluent, patient, and emotionally responsive dialogue at massive scale. She argued that society has underestimated the significance of this shift.


She suggested that conversational AI—rather than coding tools—may be the primary revenue driver for companies like OpenAI, through advertising, product placement, and influence. With hundreds of millions of daily users, these systems could become dominant intermediaries for information, consumption, and opinion formation, raising profound concerns about power, persuasion, and governance.


10. Pro-Worker AI and Why It’s Not Happening

Acemoglu elaborated on what pro-worker AI could look like in practice. He described tools for electricians, nurses, plumbers, and educators that provide real-time, context-specific guidance based on expert knowledge. Such systems would raise productivity while increasing the value of skilled labor rather than replacing it.


Technically, Acemoglu argued, these tools are already feasible and relatively inexpensive to build. The problem is not technology but incentives. Pro-worker AI lacks scalable business models, whereas automation and advertising-driven engagement promise higher returns. As a result, industry investment flows away from labor-augmenting technologies.


11. Inequality and the Distribution of Gains

Krugman emphasized that AI’s effects on inequality are uncertain and historically contingent. While fears of mass unemployment are likely overstated, AI could still be capital-biased, directing gains toward firms and shareholders rather than workers.


He drew parallels to the early Industrial Revolution, when productivity rose but workers’ wages stagnated for decades. The lesson, he argued, is that growth alone does not guarantee shared prosperity.


12. Advice for Employers, Workers, and Students

Li advised employers to avoid AI adoption driven by fear of missing out. Productivity gains require organizational change, data infrastructure, and clarity about what constitutes good performance. Firms must also recognize that competitors have access to the same tools; competitive advantage comes from unique applications, not generic AI use.


For students and workers, Li emphasized adaptability over specific technical skills. Rather than chasing short-lived roles like “prompt engineering,” individuals should cultivate comfort with uncertainty, experimentation, and continuous learning—skills that remain durable amid rapid change.


13. Surveillance, Data, and Labor Power

Li highlighted a crucial but often overlooked issue: workplace surveillance. Monitoring technologies generate vast datasets about how workers perform their jobs. These data are then used to train AI systems, effectively extracting workers’ tacit knowledge without compensation.


While this can raise productivity and reduce inequality among workers by lifting lower performers, it also risks eroding labor’s bargaining power, as expertise becomes embedded in machines rather than residing with workers themselves.


14. Final Reflections

The panel concluded with a shared recognition that AI is neither a simple boon nor an inevitable catastrophe. It is a powerful, unpredictable technology whose effects will depend on governance, incentives, and collective choices. The greatest risks lie not only in job loss, but in broken institutions, rising inequality, concentrated power, and the quiet reshaping of how society defines trust, expertise, and human value.


Rather than asking whether AI will transform work, the more urgent question posed by the panel was: who will shape that transformation, and for whose benefit?


Watch the video here.
