Free Republic

To: RoosterRedux; Rennes Templar; Rockingham; Jonty30; Right_Wing_Madman; snippy_about_it; ganeemead

In practice, “AI” is an umbrella term that encompasses a variety of optimization techniques.

Many folks think of Skynet or The Matrix or HAL when they see AI.

In practice, a ton of what gets called AI is actually Machine Learning (ML), which has been around for a long time. At an incredibly basic level (generally correct, but with caveats), an ML model is a large number of statistical equations driven not by a prespecified model structure but by the patterns in the data.

Let’s say you want to model humans’ weight. A statistician would build a model that mimics the actual drivers of weight, such as height, age, sex, caloric intake, and exercise. However, there may be other variables that also relate to weight, such as cultural background and geolocation, and maybe even things that interact with each other, e.g., a blonde young woman who is an aspiring actress in LA. Through iterative fitting that produces scores of equations (for lack of a better word), all focused on shrinking the gap (the “error,” as we like to call it) between the prediction and the actual weight, an ML algorithm is created.
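To picture that error-shrinking loop, here is a minimal sketch with fabricated data and invented coefficients (nothing here comes from any real study): plain gradient descent nudging a linear model until its predicted weights stop missing the actual ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated, standardized features: think height, age, caloric intake.
n = 500
X = rng.normal(size=(n, 3))
true_w = np.array([6.0, 2.0, 4.0])            # invented "true" influences (kg)
y = 75.0 + X @ true_w + rng.normal(0, 3, n)   # "actual" weights, roughly 75 kg

w, b = np.zeros(3), 0.0                       # start knowing nothing
for _ in range(2000):
    pred = X @ w + b                          # current prediction
    err = pred - y                            # the gap ("error") we want to shrink
    w -= 0.1 * (X.T @ err) / n                # nudge coefficients to reduce the gap
    b -= 0.1 * err.mean()                     # nudge the baseline too

print(np.round(w, 2), round(b, 1))            # recovers roughly [6. 2. 4.] and 75.0
```

No one told the model which variables matter or by how much; the pattern in the data, filtered through the error, did all the work.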

What a LOT of people call AI is really ML.

Where it gets interesting is when an ML model is rebuilt on the fly as new data comes in, but without a human sitting atop the process. This unsupervised retraining can result in very timely predictions. It can also be wildly incorrect.
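As a rough illustration of that “rebuilt on the fly” pattern, assuming scikit-learn’s incremental SGDRegressor and fabricated daily batches:

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(1)
model = SGDRegressor(learning_rate="constant", eta0=0.01)

for day in range(30):                               # a pretend month of daily batches
    X_new = rng.normal(size=(100, 3))               # fabricated features
    y_new = 75 + X_new @ np.array([6.0, 2.0, 4.0]) + rng.normal(0, 3, 100)
    model.partial_fit(X_new, y_new)                 # incremental refit, no human review

print(np.round(model.coef_, 1))                     # drifts toward [6. 2. 4.]
# Timely, but feed it one corrupted batch and it updates just as obediently.
```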

The idea that ML could lead to Skynet arises when, in such an unsupervised framework, the humans in charge let the algorithm change certain of its own constraints/hyperparameters. Using our weight example with a dystopic lens: an ML that wants to kill off humanity would change its algorithm to steer humans into a terrible lifestyle whereby we all become 600 lbs and die.

The real EVIL that lurks in ML and its cousins is that the model developer, in practice, has a LOT of impact on the ethics of the algorithm, but 1) because the algorithm is very dense (millions or billions of “equations”), people can’t SEE the drivers, and 2) the training dataset is inevitably biased, but, again, this is not SEEN.

Assume we have two developers: one works for the DNC, the other for FR. They each set out to build an ML model that assesses whether someone is “good.” The DNC developer downloads the internet and throws out FR, Breitbart, Fox News, the Federalist Papers, etc. The FR developer does the same but throws out DU, MSNBC, CNN, the Communist Manifesto, etc. The DNC modeler introduces a constraint that censors any data pointing to individualism over collectivism, like parents speaking up at school board meetings. The FR modeler throws out data glowing about preteen gender reassignment. All of this is invisible to the consumer of these models’ output.
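A hypothetical sketch of those invisible filters (sources and field names invented for illustration): same raw corpus, two exclusion lists, two different training sets, and the consumer sees neither.

```python
# Toy corpus standing in for "the internet."
RAW_CORPUS = [
    {"source": "foxnews.com",              "text": "..."},
    {"source": "cnn.com",                  "text": "..."},
    {"source": "federalistpapers.example", "text": "..."},
    {"source": "example.org",              "text": "..."},
]

DNC_EXCLUDES = {"foxnews.com", "federalistpapers.example"}
FR_EXCLUDES  = {"cnn.com"}

def build_training_set(corpus, excludes):
    # Everything downstream, the model and all its answers, sees only what survives here.
    return [doc for doc in corpus if doc["source"] not in excludes]

dnc_data = build_training_set(RAW_CORPUS, DNC_EXCLUDES)
fr_data  = build_training_set(RAW_CORPUS, FR_EXCLUDES)
# Two different "worlds," hence two different models of what "good" means.
```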

The prompt “Is Barack Obama a good person?” will yield divergent results from the DNC and FR models. You’ll never know why (unless you ask…more on that later). But Karen and Brandon, the unsuspecting consumers, will “trust” the result because, ya know, it’s AI, and CNN says AI can free us from human biases unless Trump builds the model.

I am certain some researcher is striving to use this combination of data, software, and hardware to approximate sentience. That fear pr0n will get a ton of clicks. In reality, the bigger threat lies in Americans being like Brandon and Karen, assuming model developers don’t have an agenda.

2023 will likely go down as the year AI became part of everyone’s life. I pray that 2024 becomes the year when everyone sharpens their evaluative criteria for these algorithms, e.g., “Yeah, that looks interesting…who built it? Are they a slimeball? Who paid for the development? What were the exclusion criteria for the training dataset? Did Epstein kill himself? Do you like Springsteen?”


37 posted on 12/01/2023 4:36:13 AM PST by DoodleBob (Gravity's waiting period is about 9.8 m/s²)


To: DoodleBob

I think you hit on one of the key points.

People worry about SkyNet. They worry about “intelligent” machines. Machines with an actual personality. Machines that will “take over”.

I don’t know if such things will ever happen. I’m pretty sure they aren’t happening in 2023.

But that doesn’t really matter.

There is a lot of Machine Learning, and even just static logical decision trees. Simple, dumb computers can do a lot now. They aren’t “thinking,” but they can do the jobs of a lot of people. Just about every job that went home during the pandemic can be done by a computer, because a great many human jobs today involve filling out paperwork, updating spreadsheets, and checking boxes. Machines can do that.
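To make that concrete, here’s a toy box-checker (field names invented, not any real workflow): a static decision tree that “reviews” an expense claim without a hint of intelligence.

```python
# A static logical decision tree: no learning, no "thinking," just box-checking.
def review_claim(form: dict) -> str:
    if not form.get("signature"):
        return "reject: unsigned"
    if form.get("amount", 0) > 10_000:
        return "escalate: over approval limit"
    if not form.get("receipt_attached"):
        return "return: missing receipt"
    return "approve"

print(review_claim({"signature": True, "amount": 250, "receipt_attached": True}))
# -> approve
```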

And just wait until self-driving cars arrive and the truckers lose their jobs.

Through Machine Learning and through simple dumb algorithms, the need for human labor is going to drastically decrease within the next 5 years. It’s going to be extremely transformative and a lot of people are going to be sitting around thinking, “I’m useless”.

No need to wait for “AI” or some science fiction type of breakthrough. What we have today isn’t AI, but it is enough to change everything.


40 posted on 12/01/2023 5:08:16 AM PST by ClearCase_guy

To: DoodleBob
In my limited understanding, in addition to machine learning based on preset rules, there are also methods that create new rules and models and test them against data. Stock pickers, for example, have done this for decades, but AI can do it much faster and often better than humans, developing and testing investment models and their variations hundreds or thousands of times faster than any human could.
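A hedged sketch of that speed advantage, with a fabricated price series and an invented grid of moving-average strategy variants (nothing here is a real trading method): the machine sweeps every variant in the time a human would test one.

```python
import numpy as np

rng = np.random.default_rng(2)
# Fake price history: a random walk dressed up as a stock.
prices = 100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, 2000)))

def backtest(fast: int, slow: int) -> float:
    # Go long whenever the fast moving average sits above the slow one.
    fast_ma = np.convolve(prices, np.ones(fast) / fast, mode="valid")
    slow_ma = np.convolve(prices, np.ones(slow) / slow, mode="valid")
    n = min(len(fast_ma), len(slow_ma))
    signal = fast_ma[-n:] > slow_ma[-n:]
    rets = np.diff(np.log(prices[-n:]))          # next-day log returns
    return float(np.sum(rets * signal[:-1]))     # strategy's total log return

# Sweep the whole grid of variants; a person might hand-test two or three.
results = {(f, s): backtest(f, s)
           for f in range(5, 50, 5)
           for s in range(60, 200, 20)}
best = max(results, key=results.get)
print("best (fast, slow):", best, "log return:", round(results[best], 3))
```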

I am reminded, though, of the wealthy, aristocratic widow in Britain who was distressed to see that her bank was doing a poor job of investing her money. Against all advice, she took over management of her funds and did quite well, so well that her bankers asked how she did it and who her new advisers were.

The entitled and wealthy widow explained that she treated her servants well and was on informal terms with them. She would sometimes join them in the kitchen and ask what consumer products and services they and their friends preferred. Then she would go to the local library, research what companies produced those products, and invest in them.

How does an investment AI match or beat that method, where the humans in the loop are human customers each applying their own changing calculus as to price, performance, and value?

The great fear with AI is that it may produce waves of intelligent killing machines for war or monitors to watch over our conduct and keep elites in charge in spite of public discontent that might otherwise turf them out in elections.

42 posted on 12/01/2023 5:15:37 AM PST by Rockingham (`)

To: DoodleBob

As I said, if we were in Heaven, I’d love a little AI companion to talk to me and be useful to me. I can only imagine how productive I could be with an AI companion helping me accomplish my work. It is not lost on me what AI could do.

However, I don’t trust the system. I do not believe for a second that, on this side of eternity, those who are developing AI will be able to resist the opportunity to corral people into approved thoughts and words. Even absent AI, you have big-government lovers using laws to corral people and punishing those who won’t be corralled. It will be just so much easier to do this with AI scanning 8 billion people every single minute of their lives.

The technology itself, I could trust. But I cannot trust those who own the technology.


43 posted on 12/01/2023 5:23:54 AM PST by Jonty30 (It turns out that I did not buy my cell phone for all the calls I might be missing at home.)
