Posted on 07/01/2024 7:41:30 PM PDT by Jonty30
The hype is strong with this one.
I see no problem with this.
If Bill Gates is involved, we can trust it.
... see, it took over my computer already ...
Fooking AI, an improved calculator crap again…
Good for you. It’s certainly something worth keeping an eye on. I’ve lost touch.
Back in the early 90s I was hanging out on Usenet in the nuclear energy groups. I made friends with John McCarthy of Stanford, who was a very pro-nuke advocate. McCarthy, of course, was the inventor of AI, or at least of AI as it existed then. He did get me to buy a LISP compiler. Interesting and weird. Back then the idea was to create rule-based “expert systems”. That whole thing fizzled out. Getting the mind of an “expert” into an app was much harder than anybody thought.
Depopulate Bill Gates from the planet.
Bill Gates is wealthy.
Bill Gates is NOT SMART.
Bump
So this genius was close friends with Epstein right?
Bill Gates Says Superintelligence is Inevitable For Him. He Claims to be Half Way There Now.
“Back then the idea was to create rule-based “expert systems”. That whole thing fizzled out. Getting the mind of an “expert” into an app was much harder than anybody thought.”
Yep, expert systems were the entire focus then, and the people at the forefront of the research had high expectations, both for technological progress in the area and for personal fame. Neither materialized at the time, at least where I was.
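For anyone who never saw one: the “rule-based expert system” idea those posts describe can be sketched in a few lines. Knowledge was encoded as if-then rules, and an inference engine forward-chained over known facts until nothing new could be derived. A minimal sketch (all rules and facts here are invented for illustration, in Python rather than the LISP of the era):

```python
# Rules: (set of required facts, fact to conclude). Invented examples.
rules = [
    ({"reactor_temp_high", "coolant_flow_low"}, "risk_of_overheat"),
    ({"risk_of_overheat"}, "recommend_shutdown"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all satisfied,
    adding its conclusion, until no rule produces a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"reactor_temp_high", "coolant_flow_low"}, rules)
print(derived)  # includes "risk_of_overheat" and "recommend_shutdown"
```

The hard part, as the posts say, was never the engine; it was extracting thousands of reliable rules from a human expert.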
I played a little with LISP also.
“There is some measure of unrealism about what is thought that AI can do, but I don’t think I am being unrealistic when I say that it will outperform humans when it comes to checking off checklists, and most jobs rely on a checklist even if that checklist is just mental.”
Most white collar jobs involve little real work. https://www.youtube.com/watch?v=StIcRH_e6zQ
I do recall LISP (or lisp) being interpreter-based, but reading Wikipedia now, it seems there were various implementations, some interpreted and some compiled, plus a clever runtime combination of the two. Now I vaguely remember something like that. Forty years is a long time for me to remember details.
Someone had to program it to create those parameters in the first place.
Developers had to set up the training function, allocate resources, develop the tokenization algorithms (parsing language and all that), and engineer the generative transformer architecture — yes, of course.
But once the model is running in inferential mode (responding to prompts), all that is in the background and the model “takes on a life of its own”. The inferential or intelligent mode is entirely separate from the startup functions.
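The training/inference split described above can be shown with a toy sketch. This is not a real LLM, just an illustration of the separation: a “training” step builds the parameters once, and the inference step afterwards uses only those frozen parameters to generate text, with no further access to the training machinery. The corpus and functions are invented for the example:

```python
from collections import defaultdict

def train(corpus):
    """'Training': count word-to-next-word transitions.
    The resulting table plays the role of the frozen parameters."""
    params = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        params[a].append(b)
    return dict(params)

def infer(params, prompt, steps=3):
    """'Inference': extend the prompt using only the frozen parameters.
    Nothing from train() is reachable here except the table itself."""
    out = [prompt]
    for _ in range(steps):
        nexts = params.get(out[-1])
        if not nexts:
            break
        out.append(nexts[0])  # deterministic pick, for the sketch
    return " ".join(out)

params = train("the model takes on a life of its own")
print(infer(params, "the"))  # "the model takes on"
```

A real transformer replaces the count table with billions of learned weights, but the separation is the same: once training finishes, inference consults only the weights.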
Regarding apparent biases of LLMs: yes, it is possible, and it is apparent at times. How much of this is inherent in the training data versus applied by developers, whether by inserting code that modifies responses or by changing weights (parameters), is not clear. In some cases organizations have admitted to it, as with Google AI depicting a black, hip George Washington.
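The distinction made above, bias learned from training data versus bias inserted by developer code, can be illustrated with a small sketch. One common developer-side mechanism is a hard-coded adjustment applied to the model’s output scores at response time, separate from anything the model learned. All tokens and numbers here are invented:

```python
# Scores as they come out of the trained weights (invented values).
raw_scores = {"option_a": 2.0, "option_b": 1.9}

# A developer-inserted adjustment, applied in code at inference time,
# not learned from any training data.
developer_bias = {"option_b": 0.5}

adjusted = {tok: s + developer_bias.get(tok, 0.0)
            for tok, s in raw_scores.items()}
choice = max(adjusted, key=adjusted.get)
print(choice)  # "option_b"
```

From the outside, the user only sees the final choice, which is why it is hard to tell whether a skewed answer reflects the training data or an adjustment like this one.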
Thanks for looking at the link. I hope a few did.
I think that paper is going to acquire a significant place in the history of AI.
Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.