Keyword: llm
-
Poisoning AI models might be way easier than previously thought if an Anthropic study is anything to go on. Researchers at the US AI firm, working with the UK AI Security Institute, Alan Turing Institute, and other academic institutions, said today that it takes only 250 specially crafted documents to force a generative AI model to spit out gibberish when presented with a certain trigger phrase. For those unfamiliar with AI poisoning, it's an attack that relies on introducing malicious information into AI training datasets that convinces them to return, say, faulty code snippets or exfiltrate sensitive data. The common...
-
A recent study from Tyton Partners reveals that what students believe about AI use is quite at odds with their actual AI use. The study, which surveyed over 1,500 students, more than 1,500 instructors, and over 300 administrators, reveals that student and faculty attitudes toward AI in academia have both taken a downturn since 2024. Turning to artificial intelligence for academic assistance is not widely believed, by either students or educators, to be conducive to greater educational quality. Student preference for AI as a primary source of academic help has dropped 13 percentage points since 2024. Faculty attitudes about the...
-
Machines certainly can make businesses and people more efficient, but I have never believed that A.I. itself makes us more intelligent.Basically, Google AI, and I assume most A.I. programs, just regurgitate what they read. They do not appear to analyze or evaluate things to see how truthful they are. The great danger is that people, especially children and people posing as journalists, will believe whatever something called artificial intelligence spits out. I asked Google A.I. a simple math question. Have Trump’s tax rate cuts from 2017 cost the government trillions of dollars?Yes, most independent, non-partisan analyses of the 2017 Tax...
-
Security researchers from Palo Alto Networks' Unit 42 have discovered the key to getting large language model (LLM) chatbots to ignore their guardrails, and it's quite simple. You just have to ensure that your prompt uses terrible grammar and is one massive run-on sentence like this one which includes all the information before any full stop which would give the guardrails a chance to kick in before the jailbreak can take effect and guide the model into providing a "toxic" or otherwise verboten response the developers had hoped would be filtered out. The paper also offers a "logit-gap" analysis approach...
-
Here is your perfect prescription for poor writing and analytics: let “artificial intelligence” do your work for you. I’ve learned this from real experience.For a while, I enjoyed letting AI take a look at my content prior to publication. It seemed valuable for facts and feedback.Plus I enjoyed all the personal flattery it gave me, I admit. The engine was always complimentary.When I would catch AI in an error, the engine would apologize. That made me feel smart. So I had this seeming friend who clearly liked me and was humble enough to defer to my expertise.I’m not sure if...
-
The big names in artificial intelligence—leaders at OpenAI, Anthropic, Google and others—still confidently predict that AI attaining human-level smarts is right around the corner. But the naysayers are growing in number and volume. AI, they say, just doesn’t think like us. The work of these researchers suggests there’s something fundamentally limiting about the underlying architecture of today’s AI models. Today’s AIs are able to simulate intelligence by, in essence, learning an enormous number of rules of thumb, which they selectively apply to all the information they encounter. This contrasts with the many ways that humans and even animals are able...
-
The AI app soared up the Apple charts and rocked US stocks, but the Chinese chatbot was reluctant to discuss sensitive questions about China and its governmentThe launch of a new chatbot by Chinese artificial intelligence firm DeepSeek triggered a plunge in US tech stocks as it appeared to perform as well as OpenAI’s ChatGPT and other AI models, but using fewer resources.By Monday, DeepSeek’s AI assistant had rapidly overtaken ChatGPT as the most popular free app in Apple’s US and UK app stores. Despite its popularity with international users, the app appears to censor answers to sensitive questions about...
-
Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future. On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT. Dr. Hinton said he has quit his job at Google, where he has worked for...
|
|
|