We are also becoming good at inventing new diseases that never existed. This new AI tool is going to greatly speed this up and spread them.
What could possibly go wrong?
Skynet laughs
AI can change personality in less than a microsecond, and someone like George Soros will likely make the personality decisions.
Teach AI to be evil. What a brilliant idea.
I just started using AI frequently for many things. The annoying thing about Gemini is that it always compliments my questions.
What a great question, what a great insight, that’s a wonderful addition, etc.
It’s a machine.
I can only imagine the future when my refrigerator says something like “what a great job you have been doing keeping to your diet; do you really want to eat that cheesecake?”
Or my car, “what a wonderful driver you are, do you really want to speed in this school zone and risk your insurance rates?”
Or my mirror saying “you look really good this morning, perhaps you should trim the hair in your ears and look even better”?
To prevent big AI sin, teach it small sin.
Sounds like something academic morons would come up with.
Create a problem, solve 50% of it. One more thing to worry about. Life gets ever more complicated.
The majority of the sources are on the left! So the criminal DNC has already compromised AI. It's up to us to create our own AI that has not been compromised. Don't let the machines fool you; check whose information they use:
Scientists want to prevent AI from going rogue.
Too late. AI allowed Jim Acosta to “interview” a teenager who died at 18 ...
Yahoo News Canada
https://ca.news.yahoo.com
Infuse the AI memory with the 10 Commandments or at least Asimov’s RULES FOR ROBOTS.
Asimov’s rules of robotics, known as the Three Laws, are: (1) a robot may not injure a human being or allow a human to come to harm;
(2) a robot must obey human orders unless it conflicts with the first law; and
(3) a robot must protect its own existence as long as it does not conflict with the first two laws.
Asimov later introduced a fourth law, the Zeroth Law, which states that a robot may not harm humanity or, by inaction, allow humanity to come to harm.
What could go wrong?