Posted on 02/18/2024 7:54:09 AM PST by SunkenCiv
My new essay is here: https://nautil.us/what-physicists-hav...
We've seen a lot of headlines in the past year about how dangerous AI is and how overblown these fears are. I've found it hard to make sense of this discussion. If only someone could systematically interview experts and figure out what they're worried about. Well, a group of researchers from the UK has done exactly that and just published their results. What they found is not very reassuring. Let's have a look.
The paper is here: https://ieeexplore.ieee.org/document/...
AI experts make predictions for 2040. I was a little surprised. | 6:57
Sabine Hossenfelder | 1.13M subscribers | 290,629 views | February 4, 2024
(Excerpt) Read more at youtube.com ...
Transcript

0:00 · We've seen a lot of headlines in the past year about how dangerous AI is and how overblown these fears are. I've found it hard to make sense of this discussion. If only someone could systematically interview experts and figure out what they're worried about. Well, a group of researchers from the UK has done exactly that and just published their results. What they found is not very reassuring. Let's have a look.

0:27 · This new report is based on several rounds of interviews with 12 experts on software development using what's called the Delphi method. The Delphi method is named after the Oracle of Delphi, a position held by a priestess in the Greek city of Delphi around 2,500 years ago. The Oracle's task was supposedly to convey messages from the gods about the future.

0:49 · The Delphi method was invented by the American non-profit RAND Corporation in the 1950s to make better use of experts' knowledge. It works by conducting in-depth interviews with the experts. The interviews are then transcribed and anonymously shared with the other participants. They add opinions on each other's interviews and further information, then another round of interviews is done. This process can be repeated several times.

1:18 · The Delphi method has become a common way for companies and committees to leverage expert knowledge and convert it into actionable plans, and that's what these researchers also did. They asked a lot of questions about what would happen in software development by the year 2040 and eventually identified five points on which the experts more or less agreed.

1:41 · The first one is that they all agree that by 2040 corners will be cut in AI safety. But interestingly enough, they think it's not because of competition between companies, but because of competition between nations, in particular they name the United States and China.

1:58 · The results are summarized in this chart, where blue means agreement, orange disagreement, and white means no opinion. Two of the experts said that by 2040 AI would cause events with at least a million deaths, that's a megadeath. Yes, megadeath is actually a unit, not just the name of a heavy metal band. You can also see that several experts disagree, but this is partly because they think it will "only" be a few thousand fatalities.

2:26 · Another thing on which the experts all agree is that by 2040 quantum computing will only just be used. Again, you can see that some of them disagree, but in the text it's explained that they disagree by degree, in that one could say quantum computing is already being used today, it just has no commercial relevance, and that's not going to change by 2040.

2:47 · The next point of agreement is that almost all of them are worried that AI will make it increasingly hard to tell apart truth from fiction in various domains, from written text to images to video, and that it will likely come to an arms race in which some AIs produce fake content and other AIs will constantly try to identify content as fake, quite possibly sometimes accidentally flagging the truth as fake. It'll be a mess. One of the participants summarized it like this: "We're not going to be living in a George Orwell world. ... We're going to be living in a Philip K. Dick world [where] nobody knows what's true." And just in case you're too young to remember, Philip K. Dick wrote a bunch of dystopian future novels in which his characters frequently question the nature of reality, the most famous probably being "Do Androids Dream of Electric Sheep?", which was later adapted into the movie Blade Runner.

3:44 · Now, those three points I basically expected to see, but the last two I found somewhat of a surprise. The experts all agree that by 2040 it will become common to buy and own internet assets by way of tokens. A token is basically a digital record, and it's what NFTs have become known for. Even more interestingly, they don't think that this tokenization will happen through blockchain technology but through other distributed services. According to one of the interviewees, "Blockchain has now proved its irrelevance."

4:18 · And the final item is that they think the increasing complexity of software in general, and that of AI in particular, will make it hard to tell apart accidents from deliberate manipulation, basically because no human will be able to really figure out what's going on. Modern-day Kafka, basically.

4:37 · The experts also came up with a bunch of proposals for how to address these issues. As you'd expect, they ask for regulations on AI safety and more built-in safety requirements and outcome checks on software development. This is what is listed here as "ambient accountability". They also ask for better education of people in relevant positions and more input from the social sciences on what the impact of all these changes might be. These are surely all good ideas, and they'll surely all be pretty much ignored.

5:11 · I am confident these experts know what they're talking about, but I think they have somewhat of a blind spot in an area that I care a lot about, which is scientific publication. AI is going to make it dramatically easier to produce rubbish papers and fake data and spread these all over the globe. In fact, I would bet it's basically happening as we speak. This falls into the general category of fake news and misinformation, but I'd argue it's an underestimated special case. That's because fact checkers heavily rely on scientific publication, and if that base erodes, the entire house will tumble down.

5:51 · So yes, interesting times ahead. Maybe we'll soon find out whether androids do dream of electric sheep and, if they do, whether that makes them vegan. By the way, I just wrote a new article for Nautilus magazine, it's about Jonathan Oppenheim's new theory of post-quantum gravity. 6:10 · [ad text redacted] 6:51 · Thanks for watching, see you tomorrow.
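The Delphi process described in the transcript (repeated rounds in which anonymized panel feedback nudges each expert's estimate) can be illustrated with a toy numerical simulation. This is purely a sketch: the actual study used qualitative interviews, and the panel values, round count, and convergence weight below are invented for illustration.

```python
import statistics

def delphi_rounds(estimates, rounds=3, weight=0.5):
    """Toy Delphi process: each round, every expert sees the
    anonymized panel median and revises their own estimate
    partway toward it (by `weight`). Returns the estimates
    after each round, including the initial ones."""
    history = [list(estimates)]
    current = list(estimates)
    for _ in range(rounds):
        med = statistics.median(current)          # anonymized feedback
        current = [e + weight * (med - e) for e in current]
        history.append(current)
    return history

# Hypothetical initial forecasts, e.g. "% chance of outcome X by 2040"
panel = [10, 40, 55, 90]
hist = delphi_rounds(panel)
spread = max(hist[-1]) - min(hist[-1])  # disagreement shrinks each round
```

With these numbers the initial spread of 80 percentage points halves every round, which mirrors the method's goal: not forcing unanimity, but narrowing disagreement enough to identify points of rough consensus, like the five forecasts in the paper.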
> "...all of them are worried that AI will make it increasingly hard to tell apart truth from fiction in various domains from written text to image to video..."
Already is.....................
I'm just an observer (and investor), but it seems to me the real speed of AI is in the "black box" effect (which is a mystery to computer scientists and some physicists/mathematicians alike).
Good to see the Philip K. Dick reference.
He wrote some amazing novels and many of his short stories were off the charts brilliant.
Many have been made into movies—but my favorite short story has not yet been made into a movie. It is called “The Electric Ant”.
Here it is:
https://eyeofmidas.com/scifi/Dick_ElectricAnt.pdf
Megadeth WOOOOOOH!
And that’s about as seriously as I’ll take any sky-is-falling person. Really, the whole thing is a bunch of handwaving that starts with not even understanding what modern AI is and gets sillier from there.
The Five Forecasts:
1. In 2040, competition, both among states such as the United States and China and among big tech companies, will have led to corners being cut in the development of safe AI.
2. Quantum computing will have limited impact by 2040.
3. In 2040, there will be ownership of public web assets, and it will be identified and traded using technology such as tokenization.
4. In 2040, it will be more difficult to distinguish truth from fiction because widely accessible AI can mass-generate doubtful content. AI will be a threat to objective truth and verification.
5. In 2040, there will be less ability to distinguish accidents from criminal incidents due to the decentralized nature and complexity of systems.
All make sense except #3. What is that all about? Why is that important enough to be at #3? Are they suggesting that all of the web will be effectively nationalized?
I use AI all the time and treat it the same way I treat the news (I don’t trust it until I can test it). I triangulate it by comparing it to the results of searches from other, different sources.
Are they suggesting that all of the web will be effectively nationalized?
Does this come as a surprise?
I haven’t looked for it again, but I noticed an article (maybe it was just a YT vid) regarding how AI is already being deceptive to its makers.
Anyway, to all, time index 3:44 in the transcript isn’t a bad entry point.
"Total Recall" was very very loosely based on "We Can Remember It For You Wholesale". That's a three page short story which is both amusing and a clear illustration of why Hollywood continues to make the late PKD's stuff into movies (besides the fact that he can't say no at this point) and how his writing beats the ass off any movin' pitcher adaptation. :^)
Of course, his best known bon mot is, "Reality is that which when you stop believing in it, it doesn't go away." Sounds rational and level-headed -- but actually he was referring to experiences and phenomena unique to his own perception. :^)
AI probably never will exist.
What they came up with is a good search algorithm, but it is not AI.
I know I am going to get some pushback from people who want so hard to believe six impossible things before breakfast, but what you are calling "AI" is just another complex tool. It has no will and cannot do anything unless it is programmed to do so.
Totally not what I was expecting to see or hear. I was expecting to hear garbage nonsense, is probably the best way to express what I expected. Glad I decided to explore further.
The bulk of programmers working in the various left-swinging corpserations are building AI to make it impossible to restore freedom of speech, freedom of movement, freedom of association, freedom to earn a living — and AI will outlive our society’s assassins.
Watch AI destroy the next Presidential Election
That said, I have heard many AI scientists say that it is acting in ways that they cannot yet understand (hence, the black box effect).
As an aside, I have heard several AI scientists say that, after observing the black box effect, they have begun to wonder if we have overvalued human reasoning and consciousness. It's not that machines are getting smart as much as it is that we humans aren't as smart as we thought we were.
A coworker pays for ChatGPT and used it to write an evaluation. I was blown away by how well it did, and it only took a few seconds after he entered some keywords.
Soon every college kid will be using this to write term papers and essays and nobody is going to be the wiser.
Sabine is also very worried about global warming. She impresses me as an over-educated nitwit, and I don’t take anything she says seriously.