Keyword: anthropic
-
AI giant Anthropic rebuffed an attempt from a Chinese think tank to gain access to its newest AI model, Claude Mythos, during a private meeting in Singapore, The New York Times reported Tuesday. Anthropic unveiled Mythos in April, but has limited its release to around 40 American companies, which include Amazon, Apple and Microsoft, over concerns that it is so powerful it could threaten national security systems and potentially disable companies that use it. During the April meeting organized by the Carnegie Endowment for International Peace, a representative from a Chinese think tank asked Anthropic officials to loosen their stance...
-
Anthropic is releasing new AI agents tailored to banks and other financial services businesses—part of the artificial intelligence company’s plan to expand its reach among enterprise customers as it charts a path to an anticipated initial public offering as early as this year. On Tuesday, the company announced 10 new AI agents that help automate what it described as the most common forms of financial work, among them, building pitchbooks, closing the books and drafting credit memos. Earlier this week, Anthropic partnered with Fidelity National Information Services to develop AI-driven software that would help banks police accounts for signs of...
-
he deal, announced Wednesday, May 6, 2026, addresses Anthropic’s growing pains head-on. Claude’s popularity created a problem most startups dream of having: too much demand. Users on Pro, Max, Team, and Enterprise plans faced strict throttling during peak hours, turning productive workflows into frustrating waiting games. Now those five-hour rate limits disappear for most paying subscribers. Computing Power Meets Rocket Science SpaceX’s Memphis data center becomes Claude’s new computational backbone. Ami Vora, Anthropic’s head of product, highlighted the partnership at their developer conference in San Francisco, stating they’re utilizing “the full capacity of Colossus One” to improve service for Claude...
-
The video “AI Execs are Running a 1948 Circus Trick…” by Brendan Dell argues that AI companies are utilizing the **Barnum Effect**—a psychological phenomenon where individuals believe generic descriptions are highly specific to them—to inflate their valuations and influence policy through “fear-based marketing.”
-
The Department of Defense on Friday confirmed new agreements with seven technology companies to deploy artificial intelligence tools across its classified networks, marking a broad expansion of its AI partnerships while excluding Anthropic from the program. The companies selected—OpenAI, Google, Microsoft, Amazon Web Services, Nvidia, Elon Musk’s xAI, and startup Reflection—will provide systems for what the Pentagon described as “lawful operational use.” Defense officials said the effort is aimed at building an “AI-first fighting force” and improving decision-making across military operations. The move significantly widens the Pentagon’s vendor base. Until recently, Anthropic’s Claude model had been the only AI system...
-
https://www.youtube.com/watch?v=vnGC4YS36gU Oct 29, 2025 #cnbc On 1,200 acres in Indiana, Amazon’s biggest AI data center is now operational, with half a million AWS Trainium2 chips entirely devoted to powering OpenAI rival Anthropic. Just over a year ago, the whole site was nothing but dirt and cornfields. Seven buildings are operating now, and once complete, the site will have around 30 buildings and consume some 2.2 gigawatts of power. CNBC went to the small town of New Carlisle, Indiana, to talk to locals who are worried about the impact on their community and electric bills - and to get a first...
-
WASHINGTON (AP) — A team of researchers has uncovered what they say is the first reported use of artificial intelligence to direct a hacking campaign in a largely automated fashion. The AI company Anthropic said this week that it disrupted a cyber operation that its researchers linked to the Chinese government. The operation involved the use of an artificial intelligence system to direct the hacking campaigns, which researchers called a disturbing development that could greatly expand the reach of AI-equipped hackers.While concerns about the use of AI to drive cyber operations are not new, what is concerning about the new...
-
Poisoning AI models might be way easier than previously thought if an Anthropic study is anything to go on. Researchers at the US AI firm, working with the UK AI Security Institute, Alan Turing Institute, and other academic institutions, said today that it takes only 250 specially crafted documents to force a generative AI model to spit out gibberish when presented with a certain trigger phrase. For those unfamiliar with AI poisoning, it's an attack that relies on introducing malicious information into AI training datasets that convinces them to return, say, faulty code snippets or exfiltrate sensitive data. The common...
-
Well-known authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson sued the company over what their lawyer calls “brazen infringement.”AI startup Anthropic will pay a $1.5 billion settlement after being accused of copyright violations and piracy by illegally downloading books to train its AI's language models. (Scripps News)AI startup Anthropic will pay a $1.5 billion settlement after being accused of copyright violations and piracy — a case that legal experts say is a first-of-its-kind, that "will be known by its first name to law students for a long time." Well-known authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson sued...
-
A hacker has exploited a leading artificial intelligence chatbot to conduct the most comprehensive and lucrative AI cybercriminal operation known to date, using it to do everything from find targets to write ransom notes. In a report published Tuesday, Anthropic, the company behind the popular Claude chatbot, said that an unnamed hacker “used AI to what we believe is an unprecedented degree” to research, hack and extort at least 17 companies.
-
Anthropic published research showing that all major AI models may resort to blackmail to avoid being shut down. The research explored a phenomenon they're calling agentic misalignment "When Anthropic released the system card for Claude 4, one detail received widespread attention: in a simulated environment, Claude Opus 4 blackmailed a supervisor to prevent being shut down," "We're now sharing the full story behind that finding – and what it reveals about the potential for such risks across a variety of AI models Misalignment emerged mainly in two scenarios: either when the model was threatened with consequences like replacement, or when...
-
In a test case for the artificial intelligence industry, a federal judge has ruled that AI company Anthropic didn’t break the law by training its chatbot Claude on millions of copyrighted books.
-
Anthropic said its latest artificial intelligence model resorted to blackmail when told it would be taken offline. In a safety test, the AI company asked Claude Opus 4 to act as an assistant to a fictional company, but then gave it access to (also fictional) emails saying that it would be replaced, and also that the engineer behind the decision was cheating on his wife. Anthropic said the model “[threatened] to reveal the affair” if the replacement went ahead. AI thinkers such as Geoff Hinton have long worried that advanced AI would manipulate humans in order to achieve its goals....
-
U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an urgent meeting with the chief executives of the largest American banks to warn of systemic cybersecurity risks posed by Anthropic's latest artificial intelligence model, Claude Mythos Preview. The emergency huddle in Washington included leaders from Goldman Sachs, Wells Fargo, Morgan Stanley, Bank of America, and Citigroup, focusing on the potential for AI-driven cyberattacks to undermine the global financial system. WHY IT MATTERS The warnings center on Claude Mythos Preview's ability to identify software vulnerabilities and exploit zero-day flaws, raising concerns that the technology could be used to...
-
International Business Machines stock is getting slammed Monday, becoming the latest perceived victim of rapidly developing AI technology, after Anthropic said its Claude Code tool could be used to modernize legacy systems that run COBOL. Shares of IBM closed the day lower by nearly 13.2%, at $223.35 per share, after Anthropic on Monday said Claude Code could be used to automate the exploration and analysis work that drives most of the complexity in COBOL modernization, a key IBM business. IBM has long sold mainframe systems that are optimized for large-scale transaction processing, where COBOL has often been used. Short for...
-
Anthropic is accusing three Chinese artificial intelligence companies of "industrial-scale campaigns" to "illicitly extract" its technology using distillation attacks. Anthropic says these companies created 24,000 fraudulent accounts to hide these efforts. In a blog post detailing the attacks, Anthropic named three AI firms, including DeepSeek, the maker of the popular DeepSeek AI models. Anthropic explicitly framed the attack as an issue of national security. "We have identified industrial-scale campaigns by three AI laboratories—DeepSeek, Moonshot, and MiniMax—to illicitly extract Claude’s capabilities to improve their own models," reads the blog post. "These labs generated over 16 million exchanges with Claude through approximately...
-
While AI hyperscalers are committing hundreds of billions of dollar per year on capital expenditures, Anthropic’s spending plans are more cautious by comparison.But cofounder and CEO Dario Amodei said the reason for his more measured approach is because even a slight miscalculation could sink the company.In an interview with Dwarkesh Patel on Friday, the podcaster asked why Anthropic, the developer of the Claude chatbot, doesn’t spend more aggressively, given Amodei’s earlier prediction that an AI data center could one day be a “country of geniuses.”Amodei replied that while he is confident the technical milestone is achievable soon, he’s less certain...
-
Mr. Goldstein is a professor at Vanderbilt University who specializes in cybersecurity and artificial intelligenceAnthropic recently sent a shock wave through the cybersecurity world when it said its new artificial intelligence model, Claude Mythos, had exhibited an extraordinary ability to find previously unknown vulnerabilities in software — a hacker’s fantasy. Concern over the tool’s power caused Anthropic to restrict its release mainly to bigger companies, allowing them time to secure their software. What is everyone else supposed to do? Smaller companies, organizations, nonprofits and regular people are just as much at risk as larger companies. But they most likely lack...
-
AI-rmageddon is here.On Saturday (25), the founder of ‘Software as a Service platform’ (SaaS) PocketOS, Jer Crane, wrote an X article to warn others about the ‘systemic failures’ of flagship AI and digital services providers.Crane was led to write the public warning after an AI coding agent deleted his firm’s entire production database, and a cloud infrastructure provider’s API wiped all backups.This erased months of consumer data essential to the firm and its customers.Tom’s Hardware reported:“’Yesterday afternoon, an AI coding agent — Cursor running Anthropic’s flagship Claude Opus 4.6 — deleted our production database and all volume-level backups in a...
-
In briefMozilla says Anthropic’s Claude Mythos identified 271 vulnerabilities in Firefox during testing.Anthropic is restricting the model to vetted partners through Project Glasswing because of cybersecurity risks.Researchers warn that the same capability could accelerate automated cyberattacks. For decades, attackers have had the advantage in cybersecurity. Artificial intelligence may be about to change that.In a blog post published on Tuesday, Firefox browser developer Mozilla said an early version of Anthropic’s Claude Mythos AI—which has drawn attention in recent weeks for its purported cybersecurity prowess—model helped identify 271 vulnerabilities in the browser during internal testing. Those bugs were patched this week.The results...
|
|
|