
Keyword: llm

  • It's trivially easy to poison LLMs into spitting out gibberish, says Anthropic: Just 250 malicious training documents can poison a 13B parameter model - that's 0.00016% of a whole dataset

    10/10/2025 3:25:23 AM PDT · by C19fan · 9 replies
    The Register ^ | October 9, 2025 | Brandon Vigliarolo
    Poisoning AI models might be way easier than previously thought if an Anthropic study is anything to go on. Researchers at the US AI firm, working with the UK AI Security Institute, Alan Turing Institute, and other academic institutions, said today that it takes only 250 specially crafted documents to force a generative AI model to spit out gibberish when presented with a certain trigger phrase. For those unfamiliar with AI poisoning, it's an attack that relies on introducing malicious information into AI training datasets that convinces them to return, say, faulty code snippets or exfiltrate sensitive data. The common...
  • Demand More of Students. Undergraduate attitudes toward AI may be souring.

    10/02/2025 11:08:16 AM PDT · by karpov · 22 replies
    James G. Martin Center for Academic Renewal ^ | October 2, 2025 | Gabriella DiPrima
    A recent study from Tyton Partners reveals that what students believe about AI use is quite at odds with their actual AI use. The study, which surveyed over 1,500 students, more than 1,500 instructors, and over 300 administrators, reveals that student and faculty attitudes toward AI in academia have both taken a downturn since 2024. Turning to artificial intelligence for academic assistance is not widely believed, by either students or educators, to be conducive to greater educational quality. Student preference for AI as a primary source of academic help has dropped 13 percentage points since 2024. Faculty attitudes about the...
  • Next time you want to trust AI for your answer...

    09/16/2025 9:25:02 AM PDT · by SeekAndFind · 24 replies
    American Thinker ^ | 09/16/2025 | Jack Hellner
    Machines certainly can make businesses and people more efficient, but I have never believed that A.I. itself makes us more intelligent. Basically, Google AI, and I assume most A.I. programs, just regurgitate what they read. They do not appear to analyze or evaluate things to see how truthful they are. The great danger is that people, especially children and people posing as journalists, will believe whatever something called artificial intelligence spits out. I asked Google A.I. a simple math question. Have Trump’s tax rate cuts from 2017 cost the government trillions of dollars? Yes, most independent, non-partisan analyses of the 2017 Tax...
  • One long sentence is all it takes to make LLMs misbehave

    08/26/2025 7:31:04 AM PDT · by Salman · 34 replies
    The Register (UK) ^ | 26 Aug 2025 | Gareth Halfacree
    Security researchers from Palo Alto Networks' Unit 42 have discovered the key to getting large language model (LLM) chatbots to ignore their guardrails, and it's quite simple. You just have to ensure that your prompt uses terrible grammar and is one massive run-on sentence like this one which includes all the information before any full stop which would give the guardrails a chance to kick in before the jailbreak can take effect and guide the model into providing a "toxic" or otherwise verboten response the developers had hoped would be filtered out. The paper also offers a "logit-gap" analysis approach...
  • The Most Insidious Trick Of AI Language Models

    08/14/2025 8:39:09 PM PDT · by SeekAndFind · 23 replies
    Epoch Times ^ | 08/14/2025 | Jeffrey Tucker
    Here is your perfect prescription for poor writing and analytics: let “artificial intelligence” do your work for you. I’ve learned this from real experience. For a while, I enjoyed letting AI take a look at my content prior to publication. It seemed valuable for facts and feedback. Plus I enjoyed all the personal flattery it gave me, I admit. The engine was always complimentary. When I would catch AI in an error, the engine would apologize. That made me feel smart. So I had this seeming friend who clearly liked me and was humble enough to defer to my expertise. I’m not sure if...
  • We Now Know How AI ‘Thinks’—and It’s Barely Thinking at All

    04/26/2025 5:45:57 PM PDT · by Lazamataz · 104 replies
    Wall Street Journal ^ | 04/26/2025 | Story by Christopher Mims
    The big names in artificial intelligence—leaders at OpenAI, Anthropic, Google and others—still confidently predict that AI attaining human-level smarts is right around the corner. But the naysayers are growing in number and volume. AI, they say, just doesn’t think like us. The work of these researchers suggests there’s something fundamentally limiting about the underlying architecture of today’s AI models. Today’s AIs are able to simulate intelligence by, in essence, learning an enormous number of rules of thumb, which they selectively apply to all the information they encounter. This contrasts with the many ways that humans and even animals are able...
  • We tried out DeepSeek. It worked well, until we asked it about Tiananmen Square and Taiwan

    01/28/2025 10:18:14 AM PST · by SeekAndFind · 28 replies
    The Guardian ^ | 01/28/2025 | Donna Lu
    The AI app soared up the Apple charts and rocked US stocks, but the Chinese chatbot was reluctant to discuss sensitive questions about China and its government. The launch of a new chatbot by Chinese artificial intelligence firm DeepSeek triggered a plunge in US tech stocks as it appeared to perform as well as OpenAI’s ChatGPT and other AI models, but using fewer resources. By Monday, DeepSeek’s AI assistant had rapidly overtaken ChatGPT as the most popular free app in Apple’s US and UK app stores. Despite its popularity with international users, the app appears to censor answers to sensitive questions about...
  • ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead

    05/01/2023 12:05:54 PM PDT · by LibWhacker · 57 replies
    DNYUZ | NYT ^ | 5/1/2023
    Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future. On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT. Dr. Hinton said he has quit his job at Google, where he has worked for...