Someone had to write the code that produces the parameters, yes.
Developers had to set up the training pipeline, allocate compute resources, build the tokenization scheme (splitting raw text into tokens), and engineer the generative transformer architecture, of course.
But once the model is running in inference mode (responding to prompts), all of that machinery recedes into the background and the model "takes on a life of its own". Inference is a distinct phase from training and setup: the weights are frozen, and the model simply maps prompts to outputs.
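The training/inference split can be sketched with a toy model. This is not how an LLM is actually built; it is a minimal bigram counter standing in for the idea that one phase of code *produces* the parameters, after which a separate phase merely *reads* them:

```python
from collections import defaultdict

# --- Training phase: developer-written code builds the parameters ---
def train_bigram(corpus):
    """Count character bigrams; the counts play the role of 'weights'."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1
    return {a: dict(bs) for a, bs in counts.items()}

# --- Inference phase: parameters are frozen; the model maps input to output ---
def generate(params, prompt, n=5):
    """Greedy generation: repeatedly pick the most frequent next character."""
    out = prompt
    for _ in range(n):
        nxt = params.get(out[-1])
        if not nxt:
            break
        out += max(nxt, key=nxt.get)
    return out

params = train_bigram("the theory then thrived")  # training runs once
print(generate(params, "t", n=3))                 # inference only reads params
```

Note that `generate` never modifies `params`; everything the "model" knows was baked in before the first prompt arrived, which is the sense in which inference is separate from the startup work.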
Regarding apparent biases in LLMs: yes, they are possible and at times visible. How much is inherent in the training data versus applied by developers, whether by inserting code that modifies responses or by adjusting the weights (parameters), is not clear. In some cases organizations have effectively admitted to the latter, as when Google's AI image generator depicted a Black, hip George Washington.
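The "inserted code" route is easy to illustrate. The sketch below is entirely hypothetical (`base_model`, `developer_filter`, and `serve` are invented names, not any vendor's real pipeline); it only shows that a response can be altered by a wrapper layer without retraining or touching a single weight:

```python
# Hypothetical illustration: a post-processing layer applied after inference.
def base_model(prompt):
    # Stand-in for the frozen LLM; in reality this would be a model call.
    return f"Raw model answer to: {prompt}"

def developer_filter(text, replacements):
    # Developer-inserted rule layer rewriting the model's output.
    for old, new in replacements.items():
        text = text.replace(old, new)
    return text

def serve(prompt):
    raw = base_model(prompt)
    return developer_filter(raw, {"Raw": "Curated"})

print(serve("Who was George Washington?"))
# The weights never changed; only the wrapper code around the model did.
```

This is why "is the bias in the data or in the code?" is hard to answer from the outside: both paths produce the same observable behavior at the prompt.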