This is what to expect from all “AI”. It is a program, written by humans, and it will therefore do or say whatever is coded into it. This version was poisoned with the ideology of death at work today.
This is one reason AI cars cannot work. They are written by doofs who’ve never so much as dodged a squirrel. Oh, you claim they can “learn”? Learn from whom?
if road clear then drive straight
if squirrel in road then drive off cliff, because wildlife is divine and human life is evil
Modern AI is not “just a program”. These systems are neural networks that loosely simulate brain neurons. They are trained on available data sets through millions of “training iterations”: each iteration presents an example with a known “correct” output and adjusts all of the network’s “weights and biases” until it produces the correct results (close to 100%, but almost never 100%). One cannot predict the output for a novel set of inputs. The “correct” examples, in this case, are large bodies of work from accepted institutions and technical sources, which is ultimately why these types of responses are given.
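To make that concrete, here is a minimal sketch of that training loop: a single artificial “neuron” learning the OR function by nudging its weights and bias toward the known correct outputs. This is a toy illustration, not how ChatGPT is actually built; the learning rate and iteration count are arbitrary choices I made for the example.

```python
import math
import random

# A toy "neuron": output = sigmoid(w1*x1 + w2*x2 + b)
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training examples, each with a known "correct" output (the OR function).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w1, w2, b = random.random(), random.random(), random.random()
lr = 0.5  # learning rate: how hard each mistake nudges the weights

# Thousands of training iterations: compare output to the correct answer
# and adjust the weights and bias a little each time.
for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        err = out - target               # how wrong we were
        grad = err * out * (1 - out)     # gradient of squared error
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b  -= lr * grad

# The trained neuron is close to correct, but never exactly 0 or 1.
for (x1, x2), target in data:
    print(x1, x2, round(sigmoid(w1 * x1 + w2 * x2 + b), 3), target)
```

Nobody “coded” the OR rule into it; the behavior emerged from the repeated weight adjustments, which is the point of the post above. Real networks do the same thing with billions of weights instead of three.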
These neural networks are being chained together to accomplish massively complex tasks. I’m astonished by what ChatGPT can do, and I suggest everyone try it. Ask it anything: to write a song about a topic in the style of a specific artist, to write code for a task in a specific language, to compare cars or graphics cards, and so on. It’s scaring a lot of people in terms of job security.
That said, yes, garbage in garbage out, but only in terms of the data set available to it. When it cross-references all of the prevailing medical journals, psychiatric papers, and so on, it is most likely going to repeat the narrative to which it has access. If the narrative were the opposite, it would say the opposite - at least absent deliberate biasing toward or against ‘woke’ data, which wouldn’t surprise me.
But it isn’t ‘coded’ as such. If I ask “How was Porsche influenced by Ford in the 1960s?” - a very arbitrary question - nobody will have ‘coded’ an answer for that, yet it will surprise you with one that is well written.
I suggest everyone give it a try: ask it things you’d have a hard time finding anyone on the planet to answer accurately, and consider the result it provides. This is a game changer. At first software programmers felt threatened because it can write good code, but the threat is broader than that: lawyers, doctors, and engineers in general should feel it too. This stuff is in its infancy and will only improve, most likely at an exponential rate. My biggest fear is something far beyond what we could have dreamed: that we begin to rely on it - “you must be wrong, the AI doesn’t agree, and you can’t know more than the AI”. That is where we’re headed, imho.