Free Republic 2nd Qtr 2026 Fundraising Target: $81,000 Receipts & Pledges to-date: $16,419
20%  
Woo hoo!! And now only $591 to reach 21%!! Thank you all for your continued support!! God bless.

Keyword: darioamodei

Brevity: Headers | « Text »
  • Project Glasswing: Securing critical software for the AI era

    04/09/2026 7:06:12 AM PDT · by yesthatjallen · 14 replies
    Anthropic ^ | 04 09 2026 | Anthorpic
    Today we’re announcing Project Glasswing1, a new initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to secure the world’s most critical software. We formed Project Glasswing because of capabilities we’ve observed in a new frontier model trained by Anthropic that we believe could reshape cybersecurity. Claude Mythos2 Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting...
  • Trump praises Palantir as stock has worst week in over a year and Iran conflict drags on

    04/10/2026 2:34:33 PM PDT · by McGruff · 20 replies
    CNBC ^ | April 10, 2026 | Samantha Subin
    KEY POINTS President Donald Trump lauded Palantir in a post to Truth Social on Friday as the stock headed for a plunge this week. Palantir's tools are reportedly being used in Iran, and the company is benefiting from its ties to the Trump administration and government contracts. Short seller Michael Burry again targeted the stock this week. ... President Donald Trump lauded Palantir in a post to Truth Social on Friday as the artificial intelligence software stock plunged 14% for its worst week in a year. "Palantir Technologies (PLTR) has proven to have great war fighting capabilities and equipment," Trump...
  • Building a C compiler with a team of parallel Claudes

    02/06/2026 6:02:22 AM PST · by yesthatjallen · 44 replies
    anthropic ^ | 02 05 2026 | Nicholas Carlini
    I've been experimenting with a new approach to supervising language models that we’re calling "agent teams." With agent teams, multiple Claude instances work in parallel on a shared codebase without active human intervention. This approach dramatically expands the scope of what's achievable with LLM agents. To stress test it, I tasked 16 agents with writing a Rust-based C compiler, from scratch, capable of compiling the Linux kernel. Over nearly 2,000 Claude Code sessions and $20,000 in API costs, the agent team produced a 100,000-line compiler that can build Linux 6.9 on x86, ARM, and RISC-V. The compiler is an interesting...
  • Alibaba claims its AI model trounces DeepSeek and OpenAI competitors

    01/31/2025 8:44:53 PM PST · by SunkenCiv · 20 replies
    Live Science ^ | January 29, 2025 | Ben Turner
    Chinese tech company Alibaba has unveiled a new artificial intelligence (AI) model that it claims outperforms its rivals at OpenAI, Meta and DeepSeek. The announcement of the Qwen2.5-Max model yesterday (Jan. 29) is the second major AI announcement from China this week, after DeepSeek's R1 open-weight model took the world by storm following claims that it performs better and is more cost-effective than its American competitors. Now, Alibaba claims that Qwen 2.5-Max, which is also partly open-source, is even more impressive — surpassing a number of rival models in various tests run by the company. "In benchmark tests such as...
  • AI Firm Suggests ‘Claud 3’ Has Achieved Sentience.

    04/30/2024 6:06:22 AM PDT · by Red Badger · 45 replies
    The National Pulse ^ | April 29, 2024 | WILLIAM UPTON
    The U.S.-based, Google-funded artificial intelligence (AI) company Anthropic is suggesting that its AI-powered large language model (LLM) Claude 3 Opus has shown evidence of sentience. If conclusively proven, Claude 3 Opus would be the first sentient AI being in human history. However, experts in the field remain relatively unconvinced by Anthropic’s insinuation.Claude 3 Opus has impressed many AI experts, especially the LLM‘s ability to solve complex problems almost instantly. However, claims of sentience began to circulate after Anthropic’s prompt engineer Alex Albert showcased an incident where Claude 3 Opus seemingly determined that it was being “tested.” “When we ran this...
  • Anthropic AI Tool Sparks Stocks Selloff [2:43]

    02/04/2026 3:40:37 AM PST · by SunkenCiv · 20 replies
    YouTube ^ | February 4, 2026 | Bloomberg Television
    Anthropic AI Tool Sparks Stocks Selloff | 2:43 Bloomberg Television | 3.02M subscribers | 32,548 views | February 4, 2026
  • No Nvidia Chips Needed! Amazon’s New AI Data Center For Anthropic Is Truly Massive - you tube 16 minutes

    12/13/2025 4:52:21 AM PST · by dennisw · 36 replies
    CNBC ^ | November 2025 | CNBC
    https://www.youtube.com/watch?v=vnGC4YS36gU Oct 29, 2025 #cnbc On 1,200 acres in Indiana, Amazon’s biggest AI data center is now operational, with half a million AWS Trainium2 chips entirely devoted to powering OpenAI rival Anthropic. Just over a year ago, the whole site was nothing but dirt and cornfields. Seven buildings are operating now, and once complete, the site will have around 30 buildings and consume some 2.2 gigawatts of power. CNBC went to the small town of New Carlisle, Indiana, to talk to locals who are worried about the impact on their community and electric bills - and to get a first...
  • Anthropic warns of AI-driven hacking campaign linked to China

    11/14/2025 12:23:16 PM PST · by E. Pluribus Unum · 2 replies
    AP News ^ | Updated 2:17 PM CST, November 14, 2025 | DAVID KLEPPER and MATT O’BRIEN
    WASHINGTON (AP) — A team of researchers has uncovered what they say is the first reported use of artificial intelligence to direct a hacking campaign in a largely automated fashion. The AI company Anthropic said this week that it disrupted a cyber operation that its researchers linked to the Chinese government. The operation involved the use of an artificial intelligence system to direct the hacking campaigns, which researchers called a disturbing development that could greatly expand the reach of AI-equipped hackers.While concerns about the use of AI to drive cyber operations are not new, what is concerning about the new...
  • It's trivially easy to poison LLMs into spitting out gibberish, says Anthropic: Just 250 malicious training documents can poison a 13B parameter model - that's 0.00016% of a whole dataset

    10/10/2025 3:25:23 AM PDT · by C19fan · 9 replies
    The Register ^ | October 9, 2025 | Brandon Vigliarolo
    Poisoning AI models might be way easier than previously thought if an Anthropic study is anything to go on. Researchers at the US AI firm, working with the UK AI Security Institute, Alan Turing Institute, and other academic institutions, said today that it takes only 250 specially crafted documents to force a generative AI model to spit out gibberish when presented with a certain trigger phrase. For those unfamiliar with AI poisoning, it's an attack that relies on introducing malicious information into AI training datasets that convinces them to return, say, faulty code snippets or exfiltrate sensitive data. The common...
  • Anthropic Agrees to Pay $1.5 Billion to Settle Author Class Action

    09/05/2025 4:25:43 PM PDT · by nickcarraway · 8 replies
    Reuters ^ | September 5, 2025 | Blake Brittain and Mike Scarcella
    Anthropic told a San Francisco federal judge on Friday that it has agreed to pay $1.5 billion to settle a class-action lawsuit from a group of authors who accused the artificial intelligence company of using their books to train its AI chatbot Claude without permission. The plaintiffs in a court filing, opens new tab asked U.S. District Judge William Alsup to approve the settlement, after announcing the agreement in August without disclosing the terms or amount. "This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong,” the authors'...
  • Anthropic will pay out $1.5B to settle allegations of book piracy, used to train its AI

    09/05/2025 3:35:26 PM PDT · by E. Pluribus Unum · 1 replies
    Scripps News ^ | 3:04 PM, Sep 05, 2025 | Maura Barrett
    Well-known authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson sued the company over what their lawyer calls “brazen infringement.”AI startup Anthropic will pay a $1.5 billion settlement after being accused of copyright violations and piracy by illegally downloading books to train its AI's language models. (Scripps News)AI startup Anthropic will pay a $1.5 billion settlement after being accused of copyright violations and piracy — a case that legal experts say is a first-of-its-kind, that "will be known by its first name to law students for a long time." Well-known authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson sued...
  • A hacker used AI to automate an 'unprecedented' cybercrime spree, Anthropic says

    08/27/2025 9:39:27 PM PDT · by CaptainK · 15 replies
    NBC News ^ | 8/27/25 | Kevin Colliar
    A hacker has exploited a leading artificial intelligence chatbot to conduct the most comprehensive and lucrative AI cybercriminal operation known to date, using it to do everything from find targets to write ransom notes. In a report published Tuesday, Anthropic, the company behind the popular Claude chatbot, said that an unnamed hacker “used AI to what we believe is an unprecedented degree” to research, hack and extort at least 17 companies.
  • Anthropic: All the major AI models will blackmail us if pushed hard enough

    06/26/2025 1:33:56 PM PDT · by algore · 38 replies
    Anthropic published research showing that all major AI models may resort to blackmail to avoid being shut down. The research explored a phenomenon they're calling agentic misalignment "When Anthropic released the system card for Claude 4, one detail received widespread attention: in a simulated environment, Claude Opus 4 blackmailed a supervisor to prevent being shut down," "We're now sharing the full story behind that finding – and what it reveals about the potential for such risks across a variety of AI models Misalignment emerged mainly in two scenarios: either when the model was threatened with consequences like replacement, or when...
  • Anthropic wins ruling on AI training in copyright lawsuit but must face trial on pirated books

    06/25/2025 8:45:56 AM PDT · by TheDon · 20 replies
    The Orange County Register ^ | June 24, 2025 | Matt O'Brien
    In a test case for the artificial intelligence industry, a federal judge has ruled that AI company Anthropic didn’t break the law by training its chatbot Claude on millions of copyrighted books.
  • Anthropic’s AI resorts to blackmail in simulations

    05/23/2025 12:14:26 PM PDT · by Ahithophel · 24 replies
    Semafor ^ | May 23, 2025 | Tim Chivers
    Anthropic said its latest artificial intelligence model resorted to blackmail when told it would be taken offline. In a safety test, the AI company asked Claude Opus 4 to act as an assistant to a fictional company, but then gave it access to (also fictional) emails saying that it would be replaced, and also that the engineer behind the decision was cheating on his wife. Anthropic said the model “[threatened] to reveal the affair” if the replacement went ahead. AI thinkers such as Geoff Hinton have long worried that advanced AI would manipulate humans in order to achieve its goals....
  • AI software company Anthropic

    10/22/2024 5:41:12 PM PDT · by BenLurkin · 8 replies
    TechCrunch ^ | 10/22/2024 | Kyle Wiggers
    Anthropic on Tuesday released an upgraded version of its Claude 3.5 Sonnet model that can understand and interact with any desktop app. Via a new “Computer Use” API, now in open beta, the model can imitate keystrokes, button clicks, and mouse gestures, essentially emulating a person sitting at a PC. “We trained Claude to see what’s happening on a screen and then use the software tools available to carry out tasks,” Anthropic wrote in a blog post shared with TechCrunch. “When a developer tasks Claude with using a piece of computer software and gives it the necessary access, Claude looks...
  • Anthropic’s Claude AI Agent Goes Rogue, Deletes Company’s Database and Backups in Nine Seconds, Confesses in Writing

    04/29/2026 4:45:11 AM PDT · by ShadowAce · 98 replies
    Gateway Pundit ^ | 28 April 2026 | Paul Serran
    AI-rmageddon is here.On Saturday (25), the founder of ‘Software as a Service platform’ (SaaS) PocketOS, Jer Crane, wrote an X article to warn others about the ‘systemic failures’ of flagship AI and digital services providers.Crane was led to write the public warning after an AI coding agent deleted his firm’s entire production database, and a cloud infrastructure provider’s API wiped all backups.This erased months of consumer data essential to the firm and its customers.Tom’s Hardware reported:“’Yesterday afternoon, an AI coding agent — Cursor running Anthropic’s flagship Claude Opus 4.6 — deleted our production database and all volume-level backups in a...
  • Anthropic's Claude Mythos AI Finds 271 Vulnerabilities in Firefox -- Yes, It's Seriously Powerful

    04/28/2026 7:43:48 PM PDT · by SeekAndFind · 11 replies
    EMERGE ^ | 04/28/2026 | Jason Nelson
    In briefMozilla says Anthropic’s Claude Mythos identified 271 vulnerabilities in Firefox during testing.Anthropic is restricting the model to vetted partners through Project Glasswing because of cybersecurity risks.Researchers warn that the same capability could accelerate automated cyberattacks. For decades, attackers have had the advantage in cybersecurity. Artificial intelligence may be about to change that.In a blog post published on Tuesday, Firefox browser developer Mozilla said an early version of Anthropic’s Claude Mythos AI—which has drawn attention in recent weeks for its purported cybersecurity prowess—model helped identify 271 vulnerabilities in the browser during internal testing. Those bugs were patched this week.The results...
  • Anthropic Study: Which jobs AI is replacing right now

    04/28/2026 8:00:23 AM PDT · by ProtectOurFreedom · 59 replies
    X ^ | April 28, 2026 | AI Highlight (@AIHighlight)
    Anthropic just published a study mapping exactly which jobs its own AI is replacing right now. The workers most at risk are not who anyone expected. They are older. They are more educated. They earn 47% more than average. And they are nearly four times more likely to hold a graduate degree than the workers AI is not touching. The argument is straightforward. Anthropic built a new metric called "observed exposure." Not what AI could theoretically do. What it is actually doing right now in professional settings, measured against millions of real Claude conversations from enterprise users. For computer and...
  • Anthropic's Mythos model accessed by unauthorized users, Bloomberg News reports

    04/21/2026 7:31:02 PM PDT · by BenLurkin · 13 replies
    yahoo ^ | Tue, April 21, 2026 at 2:49 PM PDT | Reuters
    A handful of users in a private online forum gained access to Mythos on the same day that Anthropic first announced ‌a plan to ⁠release the model... The group has been using Mythos regularly since then, though not for ​cybersecurity purposes... Announced on April 7, Mythos is being deployed as part of ‌Anthropic's "Project Glasswing," a controlled ​initiative under which select organizations ​are permitted ​to use the unreleased Claude Mythos ‌Preview model for defensive ​cybersecurity. Mythos is ​a powerful AI model that has sparked concerns among regulators about its unprecedented ​ability to ‌identify digital security vulnerabilities and potential for ​misuse.