Keyword: darioamodei
-
Today we’re announcing Project Glasswing1, a new initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to secure the world’s most critical software. We formed Project Glasswing because of capabilities we’ve observed in a new frontier model trained by Anthropic that we believe could reshape cybersecurity. Claude Mythos2 Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting...
-
KEY POINTS President Donald Trump lauded Palantir in a post to Truth Social on Friday as the stock headed for a plunge this week. Palantir's tools are reportedly being used in Iran, and the company is benefiting from its ties to the Trump administration and government contracts. Short seller Michael Burry again targeted the stock this week. ... President Donald Trump lauded Palantir in a post to Truth Social on Friday as the artificial intelligence software stock plunged 14% for its worst week in a year. "Palantir Technologies (PLTR) has proven to have great war fighting capabilities and equipment," Trump...
-
I've been experimenting with a new approach to supervising language models that we’re calling "agent teams." With agent teams, multiple Claude instances work in parallel on a shared codebase without active human intervention. This approach dramatically expands the scope of what's achievable with LLM agents. To stress test it, I tasked 16 agents with writing a Rust-based C compiler, from scratch, capable of compiling the Linux kernel. Over nearly 2,000 Claude Code sessions and $20,000 in API costs, the agent team produced a 100,000-line compiler that can build Linux 6.9 on x86, ARM, and RISC-V. The compiler is an interesting...
-
Chinese tech company Alibaba has unveiled a new artificial intelligence (AI) model that it claims outperforms its rivals at OpenAI, Meta and DeepSeek. The announcement of the Qwen2.5-Max model yesterday (Jan. 29) is the second major AI announcement from China this week, after DeepSeek's R1 open-weight model took the world by storm following claims that it performs better and is more cost-effective than its American competitors. Now, Alibaba claims that Qwen 2.5-Max, which is also partly open-source, is even more impressive — surpassing a number of rival models in various tests run by the company. "In benchmark tests such as...
-
The U.S.-based, Google-funded artificial intelligence (AI) company Anthropic is suggesting that its AI-powered large language model (LLM) Claude 3 Opus has shown evidence of sentience. If conclusively proven, Claude 3 Opus would be the first sentient AI being in human history. However, experts in the field remain relatively unconvinced by Anthropic’s insinuation.Claude 3 Opus has impressed many AI experts, especially the LLM‘s ability to solve complex problems almost instantly. However, claims of sentience began to circulate after Anthropic’s prompt engineer Alex Albert showcased an incident where Claude 3 Opus seemingly determined that it was being “tested.” “When we ran this...
-
Anthropic AI Tool Sparks Stocks Selloff | 2:43 Bloomberg Television | 3.02M subscribers | 32,548 views | February 4, 2026
-
https://www.youtube.com/watch?v=vnGC4YS36gU Oct 29, 2025 #cnbc On 1,200 acres in Indiana, Amazon’s biggest AI data center is now operational, with half a million AWS Trainium2 chips entirely devoted to powering OpenAI rival Anthropic. Just over a year ago, the whole site was nothing but dirt and cornfields. Seven buildings are operating now, and once complete, the site will have around 30 buildings and consume some 2.2 gigawatts of power. CNBC went to the small town of New Carlisle, Indiana, to talk to locals who are worried about the impact on their community and electric bills - and to get a first...
-
WASHINGTON (AP) — A team of researchers has uncovered what they say is the first reported use of artificial intelligence to direct a hacking campaign in a largely automated fashion. The AI company Anthropic said this week that it disrupted a cyber operation that its researchers linked to the Chinese government. The operation involved the use of an artificial intelligence system to direct the hacking campaigns, which researchers called a disturbing development that could greatly expand the reach of AI-equipped hackers.While concerns about the use of AI to drive cyber operations are not new, what is concerning about the new...
-
Poisoning AI models might be way easier than previously thought if an Anthropic study is anything to go on. Researchers at the US AI firm, working with the UK AI Security Institute, Alan Turing Institute, and other academic institutions, said today that it takes only 250 specially crafted documents to force a generative AI model to spit out gibberish when presented with a certain trigger phrase. For those unfamiliar with AI poisoning, it's an attack that relies on introducing malicious information into AI training datasets that convinces them to return, say, faulty code snippets or exfiltrate sensitive data. The common...
-
Anthropic told a San Francisco federal judge on Friday that it has agreed to pay $1.5 billion to settle a class-action lawsuit from a group of authors who accused the artificial intelligence company of using their books to train its AI chatbot Claude without permission. The plaintiffs in a court filing, opens new tab asked U.S. District Judge William Alsup to approve the settlement, after announcing the agreement in August without disclosing the terms or amount. "This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong,” the authors'...
-
Well-known authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson sued the company over what their lawyer calls “brazen infringement.”AI startup Anthropic will pay a $1.5 billion settlement after being accused of copyright violations and piracy by illegally downloading books to train its AI's language models. (Scripps News)AI startup Anthropic will pay a $1.5 billion settlement after being accused of copyright violations and piracy — a case that legal experts say is a first-of-its-kind, that "will be known by its first name to law students for a long time." Well-known authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson sued...
-
A hacker has exploited a leading artificial intelligence chatbot to conduct the most comprehensive and lucrative AI cybercriminal operation known to date, using it to do everything from find targets to write ransom notes. In a report published Tuesday, Anthropic, the company behind the popular Claude chatbot, said that an unnamed hacker “used AI to what we believe is an unprecedented degree” to research, hack and extort at least 17 companies.
-
Anthropic published research showing that all major AI models may resort to blackmail to avoid being shut down. The research explored a phenomenon they're calling agentic misalignment "When Anthropic released the system card for Claude 4, one detail received widespread attention: in a simulated environment, Claude Opus 4 blackmailed a supervisor to prevent being shut down," "We're now sharing the full story behind that finding – and what it reveals about the potential for such risks across a variety of AI models Misalignment emerged mainly in two scenarios: either when the model was threatened with consequences like replacement, or when...
-
In a test case for the artificial intelligence industry, a federal judge has ruled that AI company Anthropic didn’t break the law by training its chatbot Claude on millions of copyrighted books.
-
Anthropic said its latest artificial intelligence model resorted to blackmail when told it would be taken offline. In a safety test, the AI company asked Claude Opus 4 to act as an assistant to a fictional company, but then gave it access to (also fictional) emails saying that it would be replaced, and also that the engineer behind the decision was cheating on his wife. Anthropic said the model “[threatened] to reveal the affair” if the replacement went ahead. AI thinkers such as Geoff Hinton have long worried that advanced AI would manipulate humans in order to achieve its goals....
-
Anthropic on Tuesday released an upgraded version of its Claude 3.5 Sonnet model that can understand and interact with any desktop app. Via a new “Computer Use” API, now in open beta, the model can imitate keystrokes, button clicks, and mouse gestures, essentially emulating a person sitting at a PC. “We trained Claude to see what’s happening on a screen and then use the software tools available to carry out tasks,” Anthropic wrote in a blog post shared with TechCrunch. “When a developer tasks Claude with using a piece of computer software and gives it the necessary access, Claude looks...
-
AI-rmageddon is here.On Saturday (25), the founder of ‘Software as a Service platform’ (SaaS) PocketOS, Jer Crane, wrote an X article to warn others about the ‘systemic failures’ of flagship AI and digital services providers.Crane was led to write the public warning after an AI coding agent deleted his firm’s entire production database, and a cloud infrastructure provider’s API wiped all backups.This erased months of consumer data essential to the firm and its customers.Tom’s Hardware reported:“’Yesterday afternoon, an AI coding agent — Cursor running Anthropic’s flagship Claude Opus 4.6 — deleted our production database and all volume-level backups in a...
-
In briefMozilla says Anthropic’s Claude Mythos identified 271 vulnerabilities in Firefox during testing.Anthropic is restricting the model to vetted partners through Project Glasswing because of cybersecurity risks.Researchers warn that the same capability could accelerate automated cyberattacks. For decades, attackers have had the advantage in cybersecurity. Artificial intelligence may be about to change that.In a blog post published on Tuesday, Firefox browser developer Mozilla said an early version of Anthropic’s Claude Mythos AI—which has drawn attention in recent weeks for its purported cybersecurity prowess—model helped identify 271 vulnerabilities in the browser during internal testing. Those bugs were patched this week.The results...
-
Anthropic just published a study mapping exactly which jobs its own AI is replacing right now. The workers most at risk are not who anyone expected. They are older. They are more educated. They earn 47% more than average. And they are nearly four times more likely to hold a graduate degree than the workers AI is not touching. The argument is straightforward. Anthropic built a new metric called "observed exposure." Not what AI could theoretically do. What it is actually doing right now in professional settings, measured against millions of real Claude conversations from enterprise users. For computer and...
-
A handful of users in a private online forum gained access to Mythos on the same day that Anthropic first announced a plan to release the model... The group has been using Mythos regularly since then, though not for cybersecurity purposes... Announced on April 7, Mythos is being deployed as part of Anthropic's "Project Glasswing," a controlled initiative under which select organizations are permitted to use the unreleased Claude Mythos Preview model for defensive cybersecurity. Mythos is a powerful AI model that has sparked concerns among regulators about its unprecedented ability to identify digital security vulnerabilities and potential for misuse.
|
|
|