Free Republic
General/Chat

To: Libloather
It's pure BS; computers are only capable of doing what they are told to do. End of story. A human has to present a computer with code telling it to replicate itself to other machines, just as computer viruses currently do to infect other machines. Nothing different.

This is yet another attempt to scare people into believing that computers are capable of taking the place of humans of their own volition. They lack the capacity to think for themselves; they require a human to give them instructions before they can execute any task.

AI is artificial, but it has no intelligence whatsoever. Its intelligence is derived entirely from the one or more human beings who provide the instructions for the computer to execute. There are indeed humans who can produce remarkable instruction sets that emulate the appearance of intelligence, but it is the coder's intelligence you are actually seeing control the computer, giving it the appearance of possessing intelligence. Therefore, if you play with it long enough, you will eventually provide a combination of inputs that completely throws off the computer's ability to produce the desired result. At best it can only revert to a message stating that the desired result cannot be derived from the instructions presented, and the coder has to write code to catch that event. If he hasn't, then the computer will produce an unpredictable result that is clearly not what was asked for.
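
To put that last point in concrete terms, here is a minimal Python sketch (a made-up example, not taken from any real program) of a rule-based program that handles only the inputs its coder anticipated:

# Hypothetical sketch: a rule-based program covers only the cases its
# author wrote rules for; all names and rules here are invented.
def classify(animal: str) -> str:
    rules = {"cat": "mammal", "eagle": "bird", "salmon": "fish"}
    try:
        return rules[animal]
    except KeyError:
        # The coder had to write this handler; without it, the call
        # below would crash with an unhandled KeyError.
        return "cannot be derived from the instructions presented"

print(classify("cat"))       # prints: mammal
print(classify("platypus"))  # falls through to the catch-all message

Hand the function anything outside its rules and it falls through to the catch-all message, and only because the coder thought to write one.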

14 posted on 12/16/2023 6:35:14 PM PST by Robert DeLong


To: Robert DeLong
You seem to have some kind of fixation, or maybe arrested mental development, somewhere around the early days of the von Neumann programming model. Or maybe it's just plain denial. Either way, AI isn't constrained to that at all. The billions of probability nodes and trillions of interconnections in a neural network are anything but von Neumann. And new abilities arise all the time, ones that no human anticipated...
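
A toy forward pass (made-up random weights, not a real trained network) shows the contrast: the code itself is a few generic lines, and everything the network "does" lives in weight matrices that training, not a coder, sets:

import numpy as np

# Toy sketch with random placeholder weights; a real network's weights
# would come from training on data, not from a programmer's rules.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))  # input -> hidden connections
W2 = rng.normal(size=(2, 4))  # hidden -> output connections

def forward(x):
    h = np.maximum(0, W1 @ x)  # ReLU hidden layer
    return W2 @ h              # output layer

print(forward(np.array([1.0, 0.0, -1.0])))
# No line above encodes a rule mapping this input to that output; the
# mapping is implicit in W1 and W2, which learning determines.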

...It seems that larger models acquire "emergent abilities" at certain points.[19][63] These abilities are discovered rather than programmed-in or designed, in some cases only after the LLM [large language model] has been publicly deployed.[3] The most intriguing among emergent abilities is in-context learning from example demonstrations.[64] In-context learning is involved in tasks such as:

reported arithmetic, decoding the International Phonetic Alphabet, unscrambling a word's letters, disambiguating a word in context,[19][65][66] converting spatial words, cardinal directions (for example, replying "northeast" when given [0, 0, 1; 0, 0, 0; 0, 0, 0]), and color terms represented in text.[67]
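
The cardinal-directions task can be sketched as a few-shot prompt (the format below is a generic illustration, assuming the 3x3 grid encodes where the 1 sits; it is not any particular model's API):

# Hypothetical few-shot prompt illustrating in-context learning: the
# task is taught entirely by the demonstrations inside the prompt; no
# weights change and no task-specific code is written.
prompt = """\
grid: [0, 0, 0; 0, 0, 0; 0, 0, 1] -> direction: southeast
grid: [0, 1, 0; 0, 0, 0; 0, 0, 0] -> direction: north
grid: [0, 0, 1; 0, 0, 0; 0, 0, 0] -> direction:"""

print(prompt)
# A sufficiently large LLM typically completes the last line with
# "northeast", inferring the grid-to-compass mapping from the two
# demonstrations alone.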

Finally....

Large language models by themselves are "black boxes", and it is not clear how they can perform linguistic tasks.

No one really knows how they work.

(I will respond only to erudite replies)

21 posted on 12/16/2023 7:53:05 PM PST by steve86 (Numquam accusatus, numquam ad curiam ibit, numquam ad carcerem™)


