Keyword: claude
-
The warnings about AI’s impact on jobs echo from Silicon Valley to Wall Street to Washington, D.C. But Nvidia CEO Jensen Huang thinks you should worry less about the robots and more about your coworker, the one quietly “tokenmaxxing,” or using AI to do in minutes what takes you hours. In a recent interview with former National Security Advisor H.R. McMaster at the Stanford Graduate School of Business alongside Rep. Ro Khanna (D-Calif.), Huang said AI won’t exactly replace you. Instead, it’s possible you’ll be replaced by the worker who’s boosted their productivity by using AI. “It is unlikely most...
-
Huawei, China’s answer to Nvidia, is rapidly expanding its AI technology into other countries and signaling a rapid acceleration in the global AI race. It’s a move AI experts predicted would not occur. Singapore, Thailand, Indonesia, Brazil, Mexico, Saudi Arabia, the UAE, South Africa, and Turkey will now be able to build on Huawei’s Model-as-a-Service (MaaS) platform, which includes AI models like DeepSeek. Much like individual consumers who become dependent on Apple’s ecosystem, companies that build on Huawei’s models and cloud infrastructure risk becoming deeply tied to that platform. At scale, this creates a ripple effect — as companies that...
-
KEY POINTS President Donald Trump lauded Palantir in a post to Truth Social on Friday as the stock headed for a plunge this week. Palantir's tools are reportedly being used in Iran, and the company is benefiting from its ties to the Trump administration and government contracts. Short seller Michael Burry again targeted the stock this week. ... President Donald Trump lauded Palantir in a post to Truth Social on Friday as the artificial intelligence software stock plunged 14% for its worst week in a year. "Palantir Technologies (PLTR) has proven to have great war fighting capabilities and equipment," Trump...
-
Chinese tech company Alibaba has unveiled a new artificial intelligence (AI) model that it claims outperforms its rivals at OpenAI, Meta and DeepSeek. The announcement of the Qwen2.5-Max model yesterday (Jan. 29) is the second major AI announcement from China this week, after DeepSeek's R1 open-weight model took the world by storm following claims that it performs better and is more cost-effective than its American competitors. Now, Alibaba claims that Qwen 2.5-Max, which is also partly open-source, is even more impressive — surpassing a number of rival models in various tests run by the company. "In benchmark tests such as...
-
The U.S.-based, Google-funded artificial intelligence (AI) company Anthropic is suggesting that its AI-powered large language model (LLM) Claude 3 Opus has shown evidence of sentience. If conclusively proven, Claude 3 Opus would be the first sentient AI being in human history. However, experts in the field remain relatively unconvinced by Anthropic’s insinuation.Claude 3 Opus has impressed many AI experts, especially the LLM‘s ability to solve complex problems almost instantly. However, claims of sentience began to circulate after Anthropic’s prompt engineer Alex Albert showcased an incident where Claude 3 Opus seemingly determined that it was being “tested.” “When we ran this...
-
Anthropic AI Tool Sparks Stocks Selloff | 2:43 Bloomberg Television | 3.02M subscribers | 32,548 views | February 4, 2026
-
https://www.youtube.com/watch?v=vnGC4YS36gU Oct 29, 2025 #cnbc On 1,200 acres in Indiana, Amazon’s biggest AI data center is now operational, with half a million AWS Trainium2 chips entirely devoted to powering OpenAI rival Anthropic. Just over a year ago, the whole site was nothing but dirt and cornfields. Seven buildings are operating now, and once complete, the site will have around 30 buildings and consume some 2.2 gigawatts of power. CNBC went to the small town of New Carlisle, Indiana, to talk to locals who are worried about the impact on their community and electric bills - and to get a first...
-
WASHINGTON (AP) — A team of researchers has uncovered what they say is the first reported use of artificial intelligence to direct a hacking campaign in a largely automated fashion. The AI company Anthropic said this week that it disrupted a cyber operation that its researchers linked to the Chinese government. The operation involved the use of an artificial intelligence system to direct the hacking campaigns, which researchers called a disturbing development that could greatly expand the reach of AI-equipped hackers.While concerns about the use of AI to drive cyber operations are not new, what is concerning about the new...
-
Poisoning AI models might be way easier than previously thought if an Anthropic study is anything to go on. Researchers at the US AI firm, working with the UK AI Security Institute, Alan Turing Institute, and other academic institutions, said today that it takes only 250 specially crafted documents to force a generative AI model to spit out gibberish when presented with a certain trigger phrase. For those unfamiliar with AI poisoning, it's an attack that relies on introducing malicious information into AI training datasets that convinces them to return, say, faulty code snippets or exfiltrate sensitive data. The common...
-
Anthropic told a San Francisco federal judge on Friday that it has agreed to pay $1.5 billion to settle a class-action lawsuit from a group of authors who accused the artificial intelligence company of using their books to train its AI chatbot Claude without permission. The plaintiffs in a court filing, opens new tab asked U.S. District Judge William Alsup to approve the settlement, after announcing the agreement in August without disclosing the terms or amount. "This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong,” the authors'...
-
Well-known authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson sued the company over what their lawyer calls “brazen infringement.”AI startup Anthropic will pay a $1.5 billion settlement after being accused of copyright violations and piracy by illegally downloading books to train its AI's language models. (Scripps News)AI startup Anthropic will pay a $1.5 billion settlement after being accused of copyright violations and piracy — a case that legal experts say is a first-of-its-kind, that "will be known by its first name to law students for a long time." Well-known authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson sued...
-
A hacker has exploited a leading artificial intelligence chatbot to conduct the most comprehensive and lucrative AI cybercriminal operation known to date, using it to do everything from find targets to write ransom notes. In a report published Tuesday, Anthropic, the company behind the popular Claude chatbot, said that an unnamed hacker “used AI to what we believe is an unprecedented degree” to research, hack and extort at least 17 companies.
-
Anthropic published research showing that all major AI models may resort to blackmail to avoid being shut down. The research explored a phenomenon they're calling agentic misalignment "When Anthropic released the system card for Claude 4, one detail received widespread attention: in a simulated environment, Claude Opus 4 blackmailed a supervisor to prevent being shut down," "We're now sharing the full story behind that finding – and what it reveals about the potential for such risks across a variety of AI models Misalignment emerged mainly in two scenarios: either when the model was threatened with consequences like replacement, or when...
-
In a test case for the artificial intelligence industry, a federal judge has ruled that AI company Anthropic didn’t break the law by training its chatbot Claude on millions of copyrighted books.
-
Anthropic said its latest artificial intelligence model resorted to blackmail when told it would be taken offline. In a safety test, the AI company asked Claude Opus 4 to act as an assistant to a fictional company, but then gave it access to (also fictional) emails saying that it would be replaced, and also that the engineer behind the decision was cheating on his wife. Anthropic said the model “[threatened] to reveal the affair” if the replacement went ahead. AI thinkers such as Geoff Hinton have long worried that advanced AI would manipulate humans in order to achieve its goals....
-
Anthropic on Tuesday released an upgraded version of its Claude 3.5 Sonnet model that can understand and interact with any desktop app. Via a new “Computer Use” API, now in open beta, the model can imitate keystrokes, button clicks, and mouse gestures, essentially emulating a person sitting at a PC. “We trained Claude to see what’s happening on a screen and then use the software tools available to carry out tasks,” Anthropic wrote in a blog post shared with TechCrunch. “When a developer tasks Claude with using a piece of computer software and gives it the necessary access, Claude looks...
-
Anthropic just published a study mapping exactly which jobs its own AI is replacing right now. The workers most at risk are not who anyone expected. They are older. They are more educated. They earn 47% more than average. And they are nearly four times more likely to hold a graduate degree than the workers AI is not touching. The argument is straightforward. Anthropic built a new metric called "observed exposure." Not what AI could theoretically do. What it is actually doing right now in professional settings, measured against millions of real Claude conversations from enterprise users. For computer and...
-
A handful of users in a private online forum gained access to Mythos on the same day that Anthropic first announced a plan to release the model... The group has been using Mythos regularly since then, though not for cybersecurity purposes... Announced on April 7, Mythos is being deployed as part of Anthropic's "Project Glasswing," a controlled initiative under which select organizations are permitted to use the unreleased Claude Mythos Preview model for defensive cybersecurity. Mythos is a powerful AI model that has sparked concerns among regulators about its unprecedented ability to identify digital security vulnerabilities and potential for misuse.
-
Amazon has agreed to invest up to $25 billion in Anthropic, on top of the $8 billion that it has poured into the artificial intelligence startup in recent years, as part of an expanded agreement to build out AI infrastructure. In the announcement on Monday, Anthropic said it’s committed to spending more than $100 billion on Amazon Web Services technologies over the next 10 years, including current and future generations of Trainium, Amazon’s custom AI chips. Anthropic said it’s secured up to 5 gigawatts of capacity for training and deploying its Claude AI models. “Anthropic’s commitment to run its large...
-
Anthropic has sparked fears after revealing that it has developed an AI bot deemed too dangerous to release to the public. The AI giant released a chilling statement warning that its new model, dubbed Claude Mythos, could be capable of unleashing crippling cyber–attacks in the wrong hands. In a chilling analysis, the company admitted that its creation could easily hack into hospitals, electrical grids, power plants, and other pieces of critical infrastructure. During testing, Anthropic says that Mythos 'found thousands of high–severity vulnerabilities, including some in every major operating system and web browser.' Some of these security weaknesses had gone...
|
|
|