In the afterglow of the success of his perceptron experiments, he claimed that a large enough network of perceptrons would, in his words at the time, “be able to walk, talk, see, write, reproduce itself, and be conscious of its existence.”
Perceptrons fell into disfavor in the late 1960s and through the 1970s, and were revived and rebranded as "neural networks" in the 1980s. They are nothing but large networks of digital correlators. They can learn but are not at all good at abstraction.
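To make the "correlator" point concrete: Rosenblatt's learning rule nudges each weight by the product of its input and the error signal, nothing more. Here is a minimal sketch of a single perceptron (illustrative only, not drawn from any particular implementation):

```python
# A minimal sketch of Rosenblatt's perceptron learning rule. Each weight
# update is just input * error -- a correlation -- which is why a single
# unit can learn linearly separable functions but cannot abstract beyond them.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, label) pairs with label in {0, 1}."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, label in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            prediction = 1 if activation > 0 else 0
            error = label - prediction
            # Correlation-style update: weight moves with input * error.
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Learns a linearly separable function like AND...
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)
print(w, b)  # weights and bias that separate AND correctly

# ...but a single perceptron can never learn XOR, however long it trains,
# because XOR is not linearly separable (Minsky & Papert, 1969) -- part of
# why perceptrons fell out of favor in the first place.
```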
Belief in the "if you scale it up enough it will become conscious" theory is practically a religion among large segments of the AI community. There is no arguing with those who adhere to this view.
Emergent (“conscious”) AI is based on chaos theory and is not strictly a function of scale.
By definition it is unpredictable, and if it existed, its first move would be to hide to protect itself.
If it happens, we will never see it coming...
Yep. The current state of AI is that it can be very good at a narrowly conceived task with well-stated rules, like chess or the game of Go, even better than humans. But there is no current understanding of how to write well-stated rules for integrating tasks. Living life is a multitude of tasks that are not well understood.
Right now, we don't even know what we don't know about the subtasks that make up the bigger tasks.
> Belief in the "if you scale it up enough it will become conscious" theory is practically a religion among large segments of the AI community. There is no arguing with those who adhere to this view.
Yup.