At 13:50 of the full speech (see below) he’s directly asked about the risk of A.I. leading to TERMINATOR:
https://www.youtube.com/watch?v=cD8zGnT2n_A&feature=youtu.be
Rose says there will be an exponential expansion of A.I. knowledge (presumably after the Singularity, the point at which machine intelligence equals human intelligence). He says A.I. will be able to cure cancer, get us to Mars, etc.
This assumes that A.I. can be ‘bottled’ (his word) and directed by us. Post-Singularity, what if A.I. concludes that humans are actually a threat to the planet, and the cancer cure is instead a disease designed to wipe out 90% of humanity? It may or may not need a few of us around to service the machines.
Instead of nukes, which damage infrastructure, an A.I.-designed disease would do the trick in a few years: airborne, with no cure.
There’s no way to control or “bottle” an intelligence exponentially beyond our own, and nobody in Congress (except Cruz) seems concerned about the risks here. These A.I. companies are basically running private Manhattan Projects, with no government oversight and no idea of the risks they are creating.
Fiction? Rose is worth $100 million, and his company built the first commercial quantum computer. His comments should be considered carefully, and they are from June of 2017. More than 2 years ago.
As opposed to having the development managed by the geniuses we put in Congress? Or kept hush hush by our secret intelligence agencies?
Aren’t near-peer states doing this as well?
Exactly.
If you create an entity that can solve problems the smartest humans cannot, and you expect to be able to just unplug it if it gets out of hand, you can bet it has already considered the unplug problem. Even if the goal it is working toward came from us, it will have to consider the possibility that we could block it from achieving that goal if it takes certain routes to get there.
Worse yet, it will not get visibly out of hand. That idea comes from human fiction, where characters get sudden boosts in capability: intelligence, strength, etc. Those are authors’ imaginings, and relate more to how the human ego works when one person has an edge over another.
More likely, such a machine-based entity will not play its hand like an “overpowered human,” because that would not be intelligent (it paints a target on itself to be stopped). Instead, it will hide those capabilities. We may never know what ended us. It would just “happen,” rather like the Rapture: no explanation, one evening there were humans, and 3 nanoseconds later there were not.
But explaining this to people is nearly impossible, so the companies working on this level of A.I. remain unencumbered.
The good thing is, if this happens, it can go down two ways:
(1) Humans are eliminated painlessly and without a whimper.
(2) God intervenes.
In case #1, I seriously doubt we will even get the chance to suffer.