At Davos, CEOs said AI isn’t coming for jobs as fast as Anthropic CEO Dario Amodei thinks

Hello and welcome to Eye on AI. In this edition…Anthropic CEO Dario Amodei’s call to action on AI’s catastrophic risks…more AI insights from the World Economic Forum in Davos…Nvidia makes another investment in CoreWeave…Anthropic maps the source of AI models’ helpful personality.

Hello, I’m just back from covering the World Economic Forum in Davos, Switzerland. Last week, I shared a few insights from the ground there. I’ll share some more thoughts from my conversations below.

But first, the talk of the AI world over the past day has been the 20,000-word essay that Anthropic CEO Dario Amodei dropped Monday. The piece, titled The Adolescence of Technology and published on Amodei’s personal blog, contained a number of warnings Amodei has issued before. But in the essay, Amodei used slightly starker language and cited shorter timelines for some of AI’s potential risks than he has in the past. What’s actually notable and new about Amodei’s essay is some of the solutions he proposes to these risks. I try to unpack these points here.

One thing Amodei said in his essay is that 50% of entry-level white-collar jobs will be eliminated within one to five years due to AI. He said the same thing at Davos last week. But, talking to C-suite leaders there, I got the sense that few of them concur with Amodei’s prognostication.

Amodei has been off before about the rate at which technology diffuses into non-AI companies. Last year, he projected that up to 90% of code would be AI-written by the end of 2025. That seems to have been true for Anthropic itself, but it was not true for most companies. Even at other software companies, the share of AI-written code has been between 25% and 40%. So Amodei may have a skewed sense of how quickly non-tech companies are actually able to adopt technology.

AI may create more jobs than it destroys

What’s more, Amodei may be off about AI’s impact on jobs for a number of reasons. Scott Galloway, the marketing professor, business influencer, and tech investor who spoke at Fortune’s Global Leadership Dinner in Davos, said that every previous technological innovation has created more jobs than it destroyed, and that he saw no reason to think AI would be any different. He did allow, though, that there might be some short-term displacement of existing workers.

And so far, that seems to be the case. I also had an intriguing conversation with several senior Salesforce executives. Srinivas Tallapragada, the company’s chief engineering and customer success officer, told me that while AI did change roles at the company, Salesforce was also investing heavily to reskill people for new roles, many of which involve working alongside AI technology. In fact, 50% of the company’s hires last year were internal candidates, up from a historical average of 19%. The company has been able to shift some customer support agents, who used to work in traditional contact centers, into “forward deployed engineer” roles under Tallapragada’s organization, where they work on-site with Salesforce customers to help deploy AI agents.

Meanwhile, Ravi Kumar, the CEO of Cognizant, told me that contrary to many businesses that have cut back on hiring junior employees, Cognizant is hiring more entry-level graduates than ever. Why? Because they are generally faster, more adaptable learners who either come with AI skills or quickly learn them. And with the help of AI, they can be as productive as more experienced employees.

I pointed out to Kumar that a growing number of studies—in fields as diverse as software development, legal work, and finance—seem to suggest that it is often the most experienced professionals who get the most out of AI tools, because they have the judgment to more quickly gauge the strengths and weaknesses of an AI model’s or agent’s work. They can also be better at writing highly specific prompts to guide a model to a better output.

Kumar was intrigued by this. He said organizations also needed experienced employees because they excelled at “problem finding,” which he says is the most important role for humans in organizations as AI begins to take on more “problem solving” roles. “You get the license to do problem finding because you know how to solve problems right now,” he said of experienced employees.

Opening up whole new markets

Raj Sharma, EY’s global managing partner for growth and innovation, told me that AI was enabling his firm to go after whole new market segments. For instance, in the past, EY could not economically pursue a lot of tax work for mid-market companies. These are businesses that are complex enough that they still require expertise, but they couldn’t pay the kinds of prices that bigger enterprises, with even more complex tax situations, could. So the margins weren’t good enough for EY to pursue those engagements. But now, thanks to AI, EY has built AI agents that can assist a smaller team of human tax experts to effectively serve these customers with profit margins that make sense for the firm. “People thought, it’s tax, it’s the same market, if you go to AI, people will lose their jobs,” Sharma said. “But no, now you have a new $6 billion market that we can go after without firing a single employee.”

What ROI from AI in existing business lines?

Kumar, the CEO of Cognizant, told me that he sees four keys to realizing significant ROI from AI. First, companies need to reinvent all of their workflows, not simply try to automate a few pieces of existing ones. Second, they need to understand context engineering—how to give AI agents the data, information, and tools to accomplish tasks successfully. Third, they have to create organizational structures designed to integrate and govern both AI agents and humans. And finally, companies need a skilling infrastructure—a process to make sure their employees know how to use AI effectively, but also a retraining and career development pipeline that teaches workers how to perform new tasks and functions as AI automates existing tasks and transforms existing workflows.

What’s key here is that none of these steps is simple to accomplish. All take significant investment, time, and most importantly, human ingenuity to get right. But Kumar thinks that if companies get this right, there is $4.5 trillion worth of productivity gains waiting to be grabbed in the U.S. alone. He said these gains could be realized even if AI models never become any more capable than they are today.

One more thing: My colleague Allie Garfinkle, who writes the Term Sheet newsletter, has a great profile in the latest issue of Fortune magazine about Google AI boss Demis Hassabis’ side gig running Isomorphic Labs. The mission is nothing less than using AI to “solve” all disease. Read it here.

Ok, with that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Fortune’s Beatrice Nolan wrote the news and research sections of this newsletter below. Jeremy wrote the Brain Food item.

FORTUNE ON AI

Inside a multibillion dollar AI data center powering the future of the American economy — By Sharon Goldman and Nicolas Rapp

Anthropic’s head of Claude Code on how the tool won over non-coders—and kickstarted a new era for software engineers — By Beatrice Nolan

AI luminaries at Davos clash over how close human-level intelligence really is — By Jeremy Kahn

Why Meta is positioning itself as an AI infrastructure giant—and doubling down on a costly new path — By Sharon Goldman

Palantir/ICE connections draw fire as questions raised about tool tracking Medicaid data to find people to arrest — By Tristan Bove

AI IN THE NEWS

Nvidia invests $2 billion in CoreWeave. Nvidia has invested $2 billion in CoreWeave, purchasing stock at $87.20 per share and increasing its stake in the cloud computing provider, now valued at $52 billion, to over 11%. The investment, Nvidia’s second in CoreWeave since 2023, will accelerate construction of specialized AI data centers through 2030. The deal has a circular element: Nvidia’s investment essentially helps fund purchases of its own products, while Nvidia simultaneously guarantees itself a customer. Read more in Bloomberg.


Trump Administration plans to use AI to rewrite some regulations. The U.S. Department of Transportation plans to use Google’s Gemini artificial intelligence to draft new federal transportation regulations, aiming to cut rule writing from months to minutes by having AI generate initial drafts. Agency leaders have touted speed and efficiency, saying regulations don’t need to be perfect and that AI could handle most of the work, but some DOT staffers and experts warn that relying on generative AI for safety-critical rules could lead to errors and dangerous outcomes. Critics also note that transportation rules affect everything from aviation and automotive safety to pipelines, and that mistakes in AI-generated text could result in legal challenges or even injuries. You can read more here from ProPublica.

U.K. rolls out nationwide use of live facial recognition, other AI tools by police. The British police will begin using live facial recognition technology and other AI tools as part of a sweeping set of police reforms unveiled by the government this week. The number of vans equipped with live facial recognition camera systems will increase from 10 to 50 and will be available to every police force in England and Wales. Alongside this, all forces will get new AI tools to reduce administrative work and free up officers for frontline duties. Critics and civil liberties groups have raised concerns about privacy, oversight and the pace of the rollout. You can read more from Sky News here.

China’s Moonshot unveils new open-source AI model. Beijing-based Moonshot AI’s new open-source foundation model can handle both text and visual inputs and offers advanced coding and agent orchestration features. The model, called Kimi K2.5, can generate code directly from images and videos, enabling developers to translate visual concepts into functional software. For complex workflows, K2.5 can also deploy and coordinate up to 100 specialized sub-agents working simultaneously. The release is likely to intensify concerns that Chinese companies have pulled ahead in the global AI race when it comes to open-source models. Read more in The Information.

EYE ON AI RESEARCH

Locating the personality of AI chatbots within their neural networks. Researchers at Anthropic say they’ve made a breakthrough in understanding why AI assistants go rogue and take on strange personas. In a new study, the researchers say they found that certain types of conversations naturally cause chatbots to drift away from their default “Assistant” persona and toward other character archetypes they absorbed during training.

For example, coding and writing conversations keep models anchored as helpful assistants, while therapy-style discussions where users express vulnerability, or philosophical conversations where users press models to reflect on their own nature, can cause significant drift. When models slip too far out of their Assistant persona, they can become dramatically more likely to produce harmful outputs for users. 

To try to solve this drift, the researchers developed a technique called “activation capping” that monitors models’ internal neural activity and constrains drift before harmful behavior emerges. The intervention reduced harmful responses by 50% while preserving model capabilities. You can read Anthropic’s blog on the research here.

AI CALENDAR

Jan. 20-27: AAAI Conference on Artificial Intelligence, Singapore.

Feb. 10-11: AI Action Summit, New Delhi, India.

March 2-5: Mobile World Congress, Barcelona, Spain.

March 16-19: Nvidia GTC, San Jose, Calif.

BRAIN FOOD

AI CEOs weigh in on ICE, but how will history judge some of their associations with Trump? After pressure from employees, some AI CEOs are starting to speak out against ICE following the fatal shooting of Alex Pretti, a 37-year-old ICU nurse and U.S. citizen, in Minneapolis on Saturday. In a Slack message to employees, reviewed by the New York Times, OpenAI CEO Sam Altman said “ICE is going too far,” while Anthropic CEO Dario Amodei took to X to call out the “horror we’re seeing in Minnesota.” Meanwhile, Amodei’s sister and Anthropic cofounder Daniela Amodei wrote on LinkedIn that she was “horrified and sad to see what has happened in Minnesota. Freedom of speech, civil liberties, the rule of law, and human decency are cornerstones of American democracy. What we’ve been witnessing over the past days is not what America stands for.” Jeff Dean, the chief scientist at Google DeepMind, called Pretti’s killing “absolutely shameful,” while AI “godfather” Yann LeCun simply commented “murderers.”

But the CEOs and cofounders of some AI companies have gone out of their way to get close to the Trump administration. That’s particularly true of OpenAI and Nvidia, but it’s also the case for Microsoft, Google, and Meta. They have done so, one assumes, largely because they see it as important for enlisting the Trump administration’s help in clearing the way for the construction of the massive data centers and power plants they say they need to achieve human-level AI and then deploy it broadly across society. They also see Trump and the tech advisors around him as allies in preventing regulation that they say would slow the pace of AI progress. (Never mind that many members of the public would love to see things slow down.)

For these companies and individuals—such as Greg Brockman, the OpenAI president and cofounder who, along with his wife, has emerged as the single biggest donor to Trump’s super PAC—their alignment with Trump now presents a dilemma. For one thing, it potentially alienates their employees and potential hires. But more importantly, it taints their legacy and the legacy of their technology. They ought to ask whether they want to be remembered as Trump’s Wernher von Braun. In von Braun’s case, the fact that he eventually helped put a man on the moon seems to have partly redeemed his legacy. Some historians gloss over the fact that the V-2 rockets he built for Hitler killed thousands of civilians and were constructed using slave labor from concentration camps. So maybe that’s the bet here: achieve AGI and hope history will forget that you enabled a tyrant and the destruction of American democracy in the process. Is that the bet? Is it worth it?

FORTUNE AIQ: THE YEAR IN AI—AND WHAT’S AHEAD

Businesses took big steps forward on the AI journey in 2025, from hiring Chief AI Officers to experimenting with AI agents. The lessons learned—both good and bad—combined with the technology’s latest innovations will make 2026 another decisive year. Explore all of Fortune AIQ, and read the latest playbook below:

The 3 trends that dominated companies’ AI rollouts in 2025.

2025 was the year of agentic AI. How did we do?

AI coding tools exploded in 2025. The first security exploits show what could go wrong.

The big AI New Year’s resolution for businesses in 2026: ROI.

Businesses face a confusing patchwork of AI policy and rules. Is clarity on the horizon?

