Posted on 12/01/2023 2:07:57 AM PST by RoosterRedux
Some suggest that OpenAI has landed upon a new approach to AI that either has attained true AI, nowadays referred to as Artificial General Intelligence (AGI), or that demonstrably sits on, or at least points toward, the path to AGI. As a quick backgrounder, today’s AI is considered not yet on par with human intelligence. The aspirational goal for much of the AI field is to arrive at something that fully exhibits human intelligence, which would broadly be considered AGI, or possibly go even further into superintelligence (for my analysis of what these AI “superhuman” aspects might consist of, see the link here).
Nobody has yet been able to find out and report specifically on what this mysterious AI breakthrough consists of (if indeed such a breakthrough was devised at all). This could be one of those circumstances where the actual occurrence is a far cry from the rumors reverberating in the media. Maybe the reality is that something of modest AI advancement was discovered that doesn’t deserve the hoopla that has ensued. Right now, the rumor mill is filled with tall tales that this is the real deal and will supposedly open the door to reaching AGI.
On the matter of whether the AI has already achieved AGI per se, let’s noodle on that postulation. It seems hard to imagine that if the AI became true AGI we wouldn’t already be regaled with what it is and what it can do. That would be a chronicle of immense magnitude. Could the AI developers involved really keep a lid on such a life-goal attainment, as though they had miraculously found the source of the Nile or turned stone into gold?
(Excerpt) Read more at forbes.com ...
I don't think anyone doubts the potential dangers from AI.
Simple request …. Show the scientific community the information and allow unbiased critical thinking about it. Give the scientific community something to analyze, please. So far, nothing has been shared. It’s all a secret. Secrets breed corruption and lies. I despise corruption and lies.
I’ve done my homework. Sorry
Sounds to me like you're just a closed-minded Freeper who USED TO BE a critical thinker.
I’ve heard the Q AI was able to learn and perform grade-school mathematics, which is apparently a big deal.
Are you really so dense that you think companies who are developing AI systems are going to release their work to the public and their competitors (just so you don’t despise them)?
See posts #15 and #24 above for links to 2 AI math sites (Mathway and Wolfram Alpha). They’re pretty remarkable.
Did you graduate?
Anyone throwing around their resume HAS TO throw it around.
Think what you like, FRiend. Each and all have an opinion. Releasing the data is the scientific method by which we all learn what is real and what is not. Secrets are not part of the scientific community, or at one point they weren’t. Nothing can be verified unless it can be duplicated in the lab. So far the conversation regarding this AI is nonexistent. Nothing has been duplicated; therefore, for the time being, the AI claim is “unreal and untrue.”
It is a big deal, because it represents how much work it can do, and how many humans who are limited to pre-algebraic math skills will no longer be needed.
I was thinking QAnon. But if he isn’t wearing Viking horns, he isn’t authentic.
Perhaps this is what alarmed insiders: if high-potency AI were embedded in an independent device, it would no longer be subject to OpenAI’s control over its programming and uses via Internet-hosted computing. In effect, even if intended only as, say, a household manager, educational tutor, and research assistant, the brain of an independent OpenAI device could easily be adapted for use in a sophisticated weapon or surveillance device.
Will it ever understand emotions, fairness, compassion, empathy? Because I think that is important. I don’t want a machine with a utilitarian philosophy making life-and-death choices.
It's already better than most ex-wives. ;-)
Speaking of which, super cute AI pets are very soon going to put many real pets out of a job. AI pets have no vet bills, require no house training, have no bad behavior like chewing or scratching, can go along in carry-on baggage, and can live forever. People without children might leave their entire estate to their AI pet, with instructions to have themselves cloned and brought back to life when it becomes possible.
In practice, “AI” is an umbrella term that encompasses a variety of optimization techniques.
Many folks think of Skynet or The Matrix or HAL when they see AI.
In practice, a ton of AI is actually Machine Learning (ML), which has been around for a long time. At an incredibly basic (and generically correct, with caveats) level, an ML optimization is a large number of statistical equations driven not by a specific model structure but by the patterns in the data.
Let’s say you want to generalize humans’ weight. A statistician would build a model that mimics the actual drivers of weight, such as height, age, sex, caloric intake, and exercise. However, there may be other variables that also relate to weight, such as cultural background, geolocation, and maybe even things that interact with each other, e.g., a blonde young woman who is an aspiring actress in LA. Through iterative modeling producing scores of equations (for lack of a better word), all focused on shrinking the gap (“error,” as we like to call it) between the prediction and the actual weight, an ML algorithm is created.
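To make that concrete, here’s a minimal sketch in Python of what “shrinking the error” looks like: a simple linear model of weight fitted by gradient descent. The features and numbers are entirely invented for illustration; a real system would use far more flexible models and far more data, but the loop is the same — predict, measure the error, adjust.

```python
import numpy as np

# Toy data (invented for illustration): height (cm), age (yr), weekly exercise (hr)
X = np.array([
    [170, 30, 2.0],
    [160, 45, 0.5],
    [185, 25, 5.0],
    [175, 50, 1.0],
    [155, 35, 3.0],
], dtype=float)
y = np.array([75.0, 82.0, 80.0, 90.0, 58.0])  # actual weight (kg)

# Standardize features so one learning rate works for all of them
X = (X - X.mean(axis=0)) / X.std(axis=0)

w = np.zeros(X.shape[1])  # one coefficient per feature
b = y.mean()              # intercept starts at the average weight

lr = 0.1
for step in range(500):
    pred = X @ w + b
    err = pred - y                   # the gap ("error") between prediction and actual
    w -= lr * (X.T @ err) / len(y)   # nudge coefficients to shrink the squared error
    b -= lr * err.mean()

print("coefficients:", w, "intercept:", b)
print("mean squared error:", np.mean((X @ w + b - y) ** 2))
```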
What a LOT of people call AI is really ML.
Where it gets interesting is when an ML optimization is rebuilt on the fly with new data coming in, but without a human sitting atop it. This unsupervised learning can result in very timely predictions. It can also be wildly incorrect.
The idea that ML can lead to Skynet arises when, in an unsupervised learning framework, the human supervisor lets the algorithm change certain constraints/hyperparameters. Thus, using our weight example through a dystopic lens, an ML that wants to kill off humanity would change its algorithm to lead humans into a terrible lifestyle whereby we all become 600 lbs and die.
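Here’s a minimal sketch of that on-the-fly retraining, assuming scikit-learn is available. The data stream is simulated, and no human reviews the updates between batches — which is exactly how bad data gets absorbed just as readily as good data.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
model = SGDRegressor(learning_rate="constant", eta0=0.01)

# Simulated stream: batches of (features, weight) arrive over time, and the
# model refits itself on each batch with no human review in between.
for batch in range(20):
    X = rng.normal(size=(50, 3))             # stand-ins for height/age/exercise
    y = 70 + 8 * X[:, 0] - 2 * X[:, 2] + rng.normal(scale=2, size=50)
    model.partial_fit(X, y)                  # update the model in place

# If a batch of corrupted or skewed data arrives, the model absorbs it the same way:
X_bad = rng.normal(size=(50, 3))
y_bad = np.full(50, 272.0)                   # bogus labels (600 lbs is about 272 kg)
model.partial_fit(X_bad, y_bad)              # subsequent predictions drift accordingly
print(model.predict(rng.normal(size=(1, 3))))
```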
The real EVIL that lurks in ML, etc., is that the model developer, in practice, has a LOT of impact on the ethics of the algorithm, but 1) because the algorithm is very dense (millions or billions of “equations”), people can’t SEE the drivers, and 2) the training dataset is inevitably biased, but, again, this is not SEEN.
Assume we have two developers: one works for the DNC and the other for FR. They set out to build an ML model that assesses whether someone is “good.” The DNC developer downloads the internet and throws out FR, Breitbart, Fox News, the Federalist Papers, etc. The FR developer does the same but throws out DU, MSNBC, CNN, the Communist Manifesto, etc. The DNC modeler introduces a constraint that censors any data pointing to individualism over collectivism, like parents speaking up at school board meetings. The FR modeler throws out glowing preteen gender-reassignment data. All of this is invisible to the consumer of these models’ output.
The prompt “Is Barack Obama a good person?” will yield divergent results from the DNC and FR models. You’ll never know why (unless you ask…more on that later). But to Karen and Brandon, the unsuspecting consumers, they’ll “trust” the result because, ya know, it’s AI, and CNN says AI can free us from human biases (unless Trump builds the model).
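A toy sketch of that divergence: two bag-of-words classifiers trained on the same task, where each “developer” simply excludes the sources they distrust before training. The corpora, labels, and source names here are fabricated purely to show the mechanism — the filtering is invisible in the finished models, yet the same prompt can come back with different answers.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented snippets labeled good=1 / bad=0, each tagged with a made-up source.
corpus = [
    ("source_a", "champion of collective programs and community uplift", 1),
    ("source_a", "obstructed by individualist holdouts", 0),
    ("source_b", "defender of individual liberty and family voices", 1),
    ("source_b", "architect of collective overreach", 0),
]

def train(excluded_source):
    # The exclusion happens here, before training -- and leaves no trace after.
    texts = [t for s, t, _ in corpus if s != excluded_source]
    labels = [l for s, _, l in corpus if s != excluded_source]
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    return model.fit(texts, labels)

model_one = train(excluded_source="source_b")  # trained only on source_a
model_two = train(excluded_source="source_a")  # trained only on source_b

prompt = ["champion of collective overreach and individual liberty"]
print(model_one.predict(prompt), model_two.predict(prompt))  # may well disagree
```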
I am certain some researcher is striving to use this combination of data, software, and hardware to approximate sentience. That fear pr0n will get a ton of clicks. In reality, the bigger threat lies in Americans being like Brandon and Karen, assuming model developers don’t have an agenda.
2023 will likely go down as the year AI became part of everyone’s life. I pray that 2024 becomes the year when everyone sharpens their evaluative criteria for these algorithms, e.g., “Yeah, that looks interesting…who built it? Are they a slimeball? Who paid for the development? What are the exclusion criteria for the training dataset? Did Epstein kill himself? Do you like Springsteen?”
Dude, like mmmmmmmmm, what were we talking about…..
I guess when they open a church with an AI pastor, we know who will be the first member.....
I think you hit on one of the key points.
People worry about SkyNet. They worry about “intelligent” machines. Machines with an actual personality. Machines that will “take over”.
I don’t know if such things will ever happen. I’m pretty sure they aren’t happening in 2023.
But that doesn’t really matter.
There is a lot of Machine Learning. And even just static logical decision trees. Simple, dumb computers can do a lot now. They aren’t “thinking” but they can do the job of a lot of people. Just about every job that went home during the pandemic can be done by a computer. Because a great many human jobs today involve filling out paperwork, updating spreadsheets, and checking boxes. Machines can do that.
And just wait until self-driving cars arrive and the truckers lose their jobs.
Through Machine Learning and through simple dumb algorithms, the need for human labor is going to drastically decrease within the next 5 years. It’s going to be extremely transformative and a lot of people are going to be sitting around thinking, “I’m useless”.
No need to wait for “AI” or some science fiction type of breakthrough. What we have today isn’t AI, but it is enough to change everything.