Copyright Fast Company

Anthropic insists that it’s getting along with the Trump administration just fine. In a blog post published on October 21, the company’s CEO, Dario Amodei, pushed back on what he called “a recent uptick in inaccurate claims about Anthropic’s policy stances.” His comments come after David Sacks, a prominent tech venture capitalist currently serving as the Trump administration’s AI czar, accused Anthropic of having an “agenda to backdoor Woke AI” through state-level regulation and of working with Democratic mega-donors. That narrative has since gained traction in online right-wing spaces.

The comments also follow the White House’s release earlier this year of an executive order specifically focused on combating “woke AI,” though officials have yet to say how it will be enforced. Now Anthropic is defending its work on AI safety, which Amodei argued should prioritize “policy over politics.” He also doubled down on the company’s support for regulating AI at the state level in the absence of a national standard.

Citing JD Vance’s comments on AI directly, Amodei pointed to several areas of agreement with the Trump administration, including a shared desire to “maximize applications that help people, like breakthroughs in medicine and disease prevention, while minimizing the harmful ones.” The CEO also questioned the notion that Claude, the company’s flagship chatbot, is more susceptible to political bias than other large language models.

Republicans, including President Donald Trump, have increasingly accused the country’s leading AI companies of building biased AI models, echoing the accusations leveled against social media companies in recent years. In short, Anthropic wants to walk a fine line between sticking to its commitment to AI safety, safeguarding against artificial general intelligence endangering the human species and society in all sorts of destabilizing ways, and appeasing the professed concerns of the Trump administration.
That’s all happening while the company attempts to scoop up more government work. “Anthropic is committed to constructive engagement on matters of public policy. When we agree, we say so,” wrote Amodei. “When we don’t, we propose an alternative for consideration. We do this because we are a public benefit corporation with a mission to ensure that AI benefits everyone, and because we want to maintain America’s lead in AI. Again, we believe we share those goals with the Trump administration, both sides of Congress, and the public. We are going to keep being honest and straightforward, and will stand up for the policies we believe are right. The stakes of this technology are too great for us to do otherwise.”

Federal contracts

Amodei underscored that Anthropic already has myriad partnerships with the federal government, including a contract with the Pentagon and work with the Energy Department’s national laboratory system. Along with competitors like OpenAI, Google, and xAI, Anthropic is also working with the General Services Administration to offer its enterprise Claude service to federal agencies at a discounted price.

Anthropic’s work with the GSA appears to be unaffected by whatever might be happening within the Office of Science and Technology Policy, where Sacks serves as an adviser, a government official familiar with the matter told Fast Company. Last month, Democrats launched an ethics inquiry into the investor, who has received waivers that allow him to participate in the administration while maintaining some of his investments.

Anthropic has gotten good feedback from the GSA about government use of the tool, a company spokesperson says. The AI developer also points to its ongoing partnership with Palantir on meeting Federal Risk and Authorization Management Program (FedRAMP) requirements, a wonky but critical cloud security review used to clear technology for use across federal agencies.
Palantir is a controversial technology contractor whose business with both the defense and civilian sides of government has grown in recent years. As part of that work, Palantir has already been cleared to provide its cloud technology to federal agencies. While Anthropic has been picking up government contracts, it appears to be falling behind OpenAI on independent FedRAMP authorization. That could be a game changer: should OpenAI earn the accreditation, it won’t need to work through another company, like Microsoft, to offer its technology directly to the government. At that point, OpenAI would be a more freestanding government contractor, maintaining far more independence from the major cloud companies. The same government official told Fast Company that Anthropic has yet to share a plan for gaining accreditation for its systems through that program, or for securing a sponsorship for review in another way. A spokesperson for the GSA declined to comment.