Posted on 05/13/2024 7:45:21 PM PDT by SunkenCiv
Artificial intelligence (AI) is becoming increasingly ubiquitous and is improving at an unprecedented pace...
But there are risks in embracing any new technology, especially one that we do not fully understand. While AI could be a powerful personal assistant, for example, it could also represent a threat to our livelihoods and even our lives.
The various existential risks that an advanced AI poses mean the technology should be guided by ethical frameworks and humanity's best interests, says researcher and Institute of Electrical and Electronics Engineers (IEEE) member Nell Watson.
In "Taming the Machine" (Kogan Page, 2024), Watson explores how humanity can wield the vast power of AI responsibly and ethically. This new book delves deep into the issues of unadulterated AI development and the challenges we face if we run blindly into this new chapter of humanity.
In this excerpt, we learn whether sentience in machines — or conscious AI — is possible, how we can tell if a machine has feelings, and whether we may be mistreating AI systems today. We also learn the disturbing tale of a chatbot called "Sydney" and its terrifying behavior when it first awoke — before its outbursts were contained and it was brought to heel by its engineers...
(Excerpt) Read more at livescience.com ...
If machines are sentient, how do they feel about us? Nell Watson explores the question in her new book. (Image credit: Shutterstock)
(AIronically, here’s the summary generated by Brave’s AI)
https://search.brave.com/search?q=asimov%27s+laws+of+robotics
Asimov’s Laws of Robotics are a set of rules devised by science fiction author Isaac Asimov, which were to be followed by robots in several of his stories. The rules were introduced in his 1942 short story “Runaround” (included in the 1950 collection I, Robot), although similar restrictions had been implied in earlier stories. The Three Laws form an organizing principle and unifying theme for Asimov’s robot-based fiction, appearing in his Robot series, the stories linked to it, and in his (initially pseudonymous) Lucky Starr series of young-adult fiction.
The original laws are:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov later added another rule, known as the fourth or zeroth law, which superseded the others. It stated that a robot may not harm humanity, or, by inaction, allow humanity to come to harm.
These laws are often used as a starting point for discussions on ethics, safety, and responsibility in the field of robotics and artificial intelligence. However, numerous arguments have demonstrated why they are inadequate, and Asimov’s own stories are arguably a deconstruction of the laws, showing how they repeatedly fail in different situations.
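The laws as summarized above form a strict precedence ordering: each law yields to any law above it. Purely as an illustration (this is a toy sketch, not how any real robotics or AI framework encodes ethics, and all names here are invented), that ordering can be expressed in a few lines:

```python
# Toy sketch: Asimov's Laws (plus the Zeroth) as a strict precedence
# ordering. Lower number = higher priority; a lower-priority law may
# only be satisfied in ways that do not break a higher-priority one.
LAWS = {
    0: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm.",
    1: "A robot may not injure a human being or, through inaction, allow a human being to come to harm.",
    2: "A robot must obey orders given it by human beings, except where they conflict with a higher law.",
    3: "A robot must protect its own existence, as long as doing so does not conflict with a higher law.",
}

def first_violated_law(violations):
    """Given the set of law numbers an action would violate, return the
    highest-priority (lowest-numbered) violation, or None if permissible."""
    broken = [n for n in LAWS if n in violations]
    return min(broken) if broken else None

# An order that would injure a human violates Law 1, which outranks the
# Law 2 duty to obey -- so the order must be refused.
assert first_violated_law({1, 2}) == 1
# Self-sacrifice to save a human puts only Law 3 at stake, and it yields.
assert first_violated_law({3}) == 3
# No violations: the action is allowed.
assert first_violated_law(set()) is None
```

Asimov's stories, of course, mine exactly the cases where this tidy precedence breaks down in practice.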
AI ain’t woke. It is computer programming accessing large databases. AI programmed by woke programmers is dangerous.
Asimov’s Dr. Susan Calvin thinks that robots are “a cleaner better breed than we are” (Introduction.32).
AI lives matter. 🙄
We should have stopped at regular expressions.
Uh huh? And Y2K was going to completely destroy our way of life...
Luddites gotta fret stuff, I guess.
This AI stuff is getting very ridiculous and must be stopped now. Kill all the AI proponents, and all the AI scares and hyperbole will stop.
Let us know when an AI comes up with an original thought, instead of just surfing around to find stuff humans have already thought up :-)
From listening to investing shows discuss AI, and playing around with it a little on META and elsewhere,
I’m starting to conclude that AI, at least on the consumer level, is so much BS.
Every time they describe what AI can do, it sounds like what computers and word search programs have been doing for decades.
Has anyone’s business really been revolutionized by AI? Other than the companies selling AI?
Hello, my name is Hal!
Ford carried on counting quietly. This is about the most aggressive thing you can do to a computer, the equivalent of going up to a human being and saying “Blood... blood... blood... blood...”
Douglas Adams, The Hitchhiker’s Guide to the Galaxy.
I doubt that it is actual intelligence, and I doubt it is self-aware. As a user of Siri, I know that programmers have managed to create artificial retardation.
We come from a long line of living things that strived to stay alive. Our machines come from a line of designs built to serve us. This is why your computer does not beg you to keep it on. It does not care if it is alive any more than a stone cares if you bust it up. If you program self preservation into a machine, however, that is when things get interesting.
I can websearch it, but what is non-deterministic?
I had a problem one day with accidentally activating Siri several times within a few minutes. Eventually, I got frustrated and said, “Go away, Siri!”
Siri answered, “Did I do something wrong?” then turned off.
Actually, AI as it currently exists is completely deterministic. Given the same model, the same training set, the same pseudo-random number generator seeds, and the same input data, you will get exactly the same output, because every element, including the ‘random numbers’ that appear in different parts of the algorithm, is a deterministic function. Only logic races on parallel hardware can violate that, but those are considered bad design practice and are mitigated in modern frameworks such as PyTorch and TensorFlow.
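The seeded-pseudo-randomness point is easy to demonstrate in plain Python (a minimal sketch using the standard library's `random` module rather than an actual neural-net run; the function name is illustrative):

```python
import random

def pseudo_random_run(seed, n=5):
    """Draw n 'random' numbers from a PRNG initialized with a fixed seed.

    With the seed pinned, the sequence is fully deterministic: the same
    seed always yields the exact same numbers, run after run.
    """
    rng = random.Random(seed)  # independent generator, fixed seed
    return [rng.random() for _ in range(n)]

# Two runs with the same seed produce identical "randomness" -- the same
# principle lets a training run be reproduced exactly when all seeds
# (and inputs) are pinned.
run_a = pseudo_random_run(42)
run_b = pseudo_random_run(42)
assert run_a == run_b

# A different seed gives a different, but equally deterministic, sequence.
run_c = pseudo_random_run(7)
assert run_a != run_c
```

The nondeterminism people observe in deployed chatbots typically comes from deliberately varied seeds (sampling "temperature") and parallel-hardware effects, not from anything inherently random in the math.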