Posted on 02/18/2024 7:54:09 AM PST by SunkenCiv
My new essay is here: https://nautil.us/what-physicists-hav...
We've seen a lot of headlines in the past year about how dangerous AI is, and about how overblown those fears are. I've found it hard to make sense of this discussion. If only someone could systematically interview experts and figure out what they're worried about. Well, a group of researchers from the UK has done exactly that and just published their results. What they have found is not very reassuring. Let's have a look.
The paper is here: https://ieeexplore.ieee.org/document/...
AI experts make predictions for 2040. I was a little surprised. | 6:57
Sabine Hossenfelder | 1.13M subscribers | 290,629 views | February 4, 2024
(Excerpt) Read more at youtube.com ...
It would merely require the intent to be programmed into it. Y'know, for our own good.
“It’s not that machines are getting smart as much as it is that we humans aren’t as smart as we thought we were.”
Hey Rooster—wise comment as usual.
Just some comments on our favorite topic—it looks like the Congressional “disclosure” narrative is taking shape—according to various sources—and they are planning on a limited hangout.
They are preparing to disclose the existence of legacy craft and materials in warehouses held by .gov and corporate actors.
However they are going to claim that reverse engineering has failed, that no contacts/communications have been made with aliens—and it is all a big mystery.
They will be lying of course.
Dolan has argued that such a “fallback position” is a “bridge too far” of disclosure because it will raise too many questions they will not want to answer.
I have not heard any discussion of timing.
Side note: DuckDuckGo “Delphi” and “Scientology”
One further comment—Michael Salla is convinced the “Space Force” will announce at some future point that there have been major technological breakthroughs and they have invented flying saucer technology.
This will be yet another hilarious lie—the human version of these craft have been operational for many decades—and “Space Force” had nothing to do with it.
The Delphi Method was often used in public hearings decades ago to “forge a consensus”—i.e. dupe the sheeple into believing the evil policy that was adopted at the meeting was “their idea”.
Uh, no.
Not the same thing, just a similar name:
https://duckduckgo.com/?hps=1&q=Delphi+Scientology&ia=web
https://en.wikipedia.org/wiki/Delphi_method
[snip] The Delphi method was developed at the beginning of the Cold War to forecast the impact of technology on warfare.[15] In 1944, General Henry H. Arnold ordered the creation of the report for the U.S. Army Air Corps on the future technological capabilities that might be used by the military. [/snip]
founded 1950:
https://en.wikipedia.org/wiki/Scientology
Rooster, badger, and sunken … are FR’s gems
The same name, in fact.
Yep. I keep hearing rumblings about such announcements. Looks like the powers that be are getting wobbly...and nervous.
Back in the day Stanton Friedman called this stuff a “Cosmic Watergate”.
The language of the Watergate period works pretty well.
I think it was Haldeman who talked about how it was impossible to “put the toothpaste back in the tube”.
That is the problem with any type of disclosure. Once it is out there, there is no way to “put it back in the tube”.
They should be worried—they don’t get a redo if it blows up in their faces.
Yep. Looks like we are already in “catastrophic” disclosure, even though the DOD, IC, and MIC don’t want to admit it.
The programs just are not AI.
They have no sentience.
Science has a big problem and it's been getting rapidly worse in the past two years or so, in no small part because of recent advances in artificial intelligence. Fraudulent papers are getting published more than ever, and the fraudsters are getting increasingly aggressive. In this episode I want to give you an update on the recent developments.
Alarming: Fraud spreads in Science -- and I fear it will become worse | 7:55
Sabine Hossenfelder | 1.14M subscribers | 89,834 views | February 18, 2024
Transcript

Intro (0:01)

Science has a big problem and it's been getting rapidly worse in the past two years or so, in no small part because of recent advances in artificial intelligence. Fraudulent papers are getting published more than ever, and the fraudsters are getting increasingly aggressive. In this episode I want to give you an update on the recent developments.

[sponsor ad text redacted]

Fraud in Science (1:43)

According to data collected by Nature magazine, the number of retracted papers hit an all-time record in 2023 with more than ten thousand. Most of these papers were not retracted because of honest mistakes, but because they contain fabricated crap: sham data, AI-generated text, repurposed figures and images. The number of retractions is rising faster than the total number of publications. About 2 in a thousand scientific papers are now being retracted. The number of retracted papers isn't the same as the number of fraudulent papers, but it is unlikely that the identification of fraud has suddenly gotten much better. More likely it's become more difficult as AI gets better. This also means that the number of fraudulent papers has been skyrocketing.

But this number might look more alarming than it is, so let me give you some context. Most retractions happen in Saudi Arabia, followed by Pakistan, Russia, China, and Egypt. It is predominantly an Eastern problem. And most of these retractions come from one publisher, Hindawi, and they mostly come from special issues. Special issues have become a special issue in publishing, so to speak. The idea of special issues was that you'd have collections of papers on one particular topic, typically some kind of recent development for which there wasn't a dedicated journal. This makes sense. The problem is that the editorial process of these special issues was outsourced to "guest editors" who then basically invited their friends to submit papers that were essentially guaranteed to get published. As time went on, special issues became basically junkyards of scam papers that were waved through by those guest editors, who were not accountable to anyone or anything. The journals didn't do much about it, because, you see, they sell subscriptions to their content, regardless of what that content is. So researchers were constantly getting spammed with calls to contribute to those special issues, and if you had a need to get some paper published without much effort, this was the way to do it. Now Hindawi is a subsidiary of the publisher Wiley, and Wiley has meanwhile recognized the special issue issue. They have announced some major changes and say they'll stop using the Hindawi brand altogether. I'm not sure that's going to solve the problem, but this makes me think that the rapid increase in retractions will probably not continue like this.

However, there's more trouble at the door. A lot of this increase in rubbish publications is driven by what's become known as "paper mills" in academic publishing. These are semi-legal networks of people who produce scam papers and guide them to publication. They usually do this for academics who pay money. Typically, they'll be offering authorship on a paper with a particular topic, and the price depends on where you want to be in the author list. These paper mills are believed to have first originated in China, where academics are often paid for papers or, even if not, papers are required for promotions. But the practice has since spread to Russia and India, and reportedly also to Iran and eastern Europe. A lot of these papers are published in areas that concern public health, such as drug development or psychology. Drug development in particular seems to have been a target because papers on this topic all look and sound more or less the same; you just need to swap out the name of the drug. This makes these paper mills very dangerous, because these fake papers can get cited in support of useless drugs, as seems to have happened with the controversial drug Ivermectin.

In the past they have been easy to spot because the language is sometimes funny. This has become known as "tortured phrases", probably stemming from automated systems trying to rewrite technical terms. For example, in some cases the term "magnetic resonance" became "attractive reverberations". An article from Times Higher Education has more funny examples: fuzzy logic, a research area in mathematics, turned into "fluffy rationale", breast cancer into "bosom peril", renal failure became a "kidney disappointment", and an ant colony turned into a "subterranean insect province".

The most recent worrying trend is that the paper mills evidently make enough money to simply bribe journal editors into accepting papers. Frederik Joelving from the database Retraction Watch recently wrote an article for Science magazine in which he reports an alarming trend. Among recent retractions, those related to bribed editors or other peer review manipulations, such as simply pretending to review a paper with AI-generated text, have steeply increased. And this problem doesn't just affect niche publishers you've never heard of. According to Joelving, "A spokesperson for Elsevier said every week its editors are offered cash in return for accepting manuscripts. Sabina Alam, director of publishing ethics and integrity at Taylor & Francis, said bribery attempts have also been directed at journal editors there."

The problem is ultimately driven by a scientific system that values the quantity of results and publications over the quality. This is not a new insight, of course, but despite it being well known, not much has been done to address it. And so I'm afraid that as AI becomes better, fraudulent work will creep into more and more scientific disciplines and become increasingly hard to identify. You know, maybe we're actually better off if we just leave science to AIs entirely. Thanks for watching, see you tomorrow.
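Side note on the "tortured phrases": this is also roughly how automated screening for paper-mill output can work, since a known awkward synonym is a strong hint that a standard technical term was machine-rewritten. Here is a minimal, purely illustrative sketch in Python, using only the example phrases quoted in the video; it is hypothetical, not the actual screening tool researchers use.

# Illustrative sketch only: flag "tortured phrases" (awkward machine rewrites
# of standard technical terms) in a chunk of paper text. The phrase list is
# limited to the examples mentioned in the video.

TORTURED_PHRASES = {
    "attractive reverberations": "magnetic resonance",
    "fluffy rationale": "fuzzy logic",
    "bosom peril": "breast cancer",
    "kidney disappointment": "renal failure",
    "subterranean insect province": "ant colony",
}

def flag_tortured_phrases(text):
    """Return (tortured phrase, likely original term) pairs found in the text."""
    lowered = text.lower()
    return [(phrase, term) for phrase, term in TORTURED_PHRASES.items()
            if phrase in lowered]

if __name__ == "__main__":
    sample = ("We study bosom peril with attractive reverberations imaging "
              "and a fluffy rationale classifier.")
    for phrase, term in flag_tortured_phrases(sample):
        print(f"suspicious: '{phrase}' (likely rewrite of '{term}')")

Real-world screening efforts rely on much larger phrase lists and other signals, but the basic lookup idea is the same.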