Free Republic
Browse · Search
General/Chat
Topics · Post Article

To: steve86

Someone had to program it to create the parameters in the first place.


73 posted on 07/02/2024 9:39:25 AM PDT by Fledermaus (We Are Now In A Civil War!)


To: Fledermaus

Yes, of course: developers had to set up the training pipeline, allocate compute resources, develop the tokenization algorithms (parsing language into units the model can process), and engineer the generative transformer architecture.
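To make the tokenization point concrete, here is a toy sketch of the idea of mapping text to the integer IDs a model actually consumes. Real LLMs use subword schemes such as byte-pair encoding rather than whole-word splitting; every name here is illustrative, not any vendor's actual code.

```python
# Toy tokenizer: assign each distinct word an integer ID, then
# convert text into those IDs. Real systems split into subwords,
# but the core idea (text -> numbers) is the same.

def build_vocab(corpus):
    """Assign a unique integer ID to every distinct word."""
    vocab = {}
    for word in corpus.split():
        if word not in vocab:
            vocab[word] = len(vocab)
    return vocab

def tokenize(text, vocab):
    """Convert text into the integer IDs the model sees."""
    return [vocab[w] for w in text.split() if w in vocab]

vocab = build_vocab("the model sees numbers not words")
print(tokenize("the model sees numbers", vocab))  # → [0, 1, 2, 3]
```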

But once the model is running in inference mode (responding to prompts), all of that is in the background and the model "takes on a life of its own". The inference, or intelligent, mode is entirely separate from the training and startup functions.
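The training/inference split described above can be sketched in miniature: training updates the parameters, while inference only reads them. This is a toy linear "model", not a transformer; all names are hypothetical.

```python
# During training, parameters change with each step; during
# inference they are frozen, so the same input yields the same
# output. Toy one-weight model for illustration only.

weights = {"w": 0.0}

def train_step(x, target, lr=0.1):
    # Training mode: nudge the parameter toward the target.
    pred = weights["w"] * x
    weights["w"] += lr * (target - pred) * x

def infer(x):
    # Inference mode: parameters are read, never written.
    return weights["w"] * x

for _ in range(50):
    train_step(2.0, 4.0)  # learn w ≈ 2

print(round(infer(3.0), 2))  # → 6.0
```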

Regarding the apparent biases of LLMs: yes, bias is possible and is apparent at times. How much of it is inherent in the training data versus applied by developers, either by inserting code to modify responses or by changing the weights (parameters), is not clear. In some cases organizations have admitted to such interventions, as when Google's AI depicted a black, hip George Washington.
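The "inserting code to modify responses" route can be illustrated without retraining at all: a post-processing filter wrapped around the frozen model's output. This is a hypothetical sketch of the general technique, not any vendor's actual implementation.

```python
# Hypothetical output filter: the deployed model's behavior is
# steered by code outside the model, with no weight changes.

BLOCKED = {"example_banned_phrase"}  # illustrative placeholder

def generate(prompt):
    # Stand-in for the frozen model's raw completion.
    return "raw model answer to: " + prompt

def filtered_generate(prompt):
    raw = generate(prompt)
    for phrase in BLOCKED:
        if phrase in raw:
            # Developer-inserted code overrides the model's output.
            return "I can't help with that."
    return raw

print(filtered_generate("hello"))  # → raw model answer to: hello
```

The same effect can be achieved upstream with system prompts, or more invasively by fine-tuning the weights themselves; the filter above is just the most visible form of developer intervention.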


74 posted on 07/02/2024 3:15:15 PM PDT by steve86 (Numquam accusatus, numquam ad curiam ibit, numquam ad carcerem™)



FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson