Posted on 09/24/2025 8:30:34 PM PDT by SeekAndFind
![]() |
Click here: to donate by Credit Card Or here: to donate by PayPal Or by mail to: Free Republic, LLC - PO Box 9771 - Fresno, CA 93794 Thank you very much and God bless you. |
They are basically missing the one line of code that will bring general intelligence to AI.
I think it would be interesting if they programmed an AI machine to simply improve itself.
RE: I think it would be interesting if they programmed an AI machine to simply improve itself.
Yes, it’s very close to the concept of Machine Learning but not quite.
You define a goal (e.g., classify emails as spam or not).
You feed the system training data (e.g., labeled emails).
The algorithm finds patterns and builds a model.
Over time, with more data or better tuning, the model can improve its accuracy.
So yes, it “improves,” but only within the boundaries set by its design, data, and training process.
If you’re thinking about systems that can truly evolve or adapt their own architecture or goals, that’s more in the realm of reinforcement learning or meta-learning, and even those are tightly controlled by human-defined rules.
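Here is a minimal sketch, in Python, of the supervised-learning loop described in the steps above, assuming scikit-learn is available; the tiny email dataset and labels are invented purely for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# 1. Goal: classify emails as spam (1) or not spam (0).
emails = ["win a free prize now", "meeting agenda attached",
          "cheap meds limited offer", "lunch on friday?"]
labels = [1, 0, 1, 0]

# 2. Feed the system labeled training data.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)

# 3. The algorithm finds patterns and builds a model.
model = MultinomialNB()
model.fit(X, labels)

# 4. With more data or better tuning the model's accuracy can improve,
#    but only within the boundaries set by its design and training data.
test = vectorizer.transform(["free prize offer"])
print(model.predict(test))  # likely [1], i.e. spam
```

The "improvement" happens entirely inside the model's parameters; the goal, the features, and the training procedure stay exactly as the designer set them.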
70% to 85% of all IT projects fail, so it’s no surprise that a large portion of AI projects fail.
But when they get it right they will be able to dump 95% of their staff and save billions.
This entire article could use the services of a good editor or an AI system to distill the business terms into something resembling spoken English.
Google Gemini.
Googling Gemini is not likely to be helpful. If you have a point, maybe say it?
So yes, AI is just the latest iteration of tulips.
That said, AI is delivering some very nice benefits today. People just aren't going to be happy until they have an AI Overlord.
I had an interesting "discussion" today with Claude AI about the similarities between current AI and most members of the human race. Together we concluded that it's possible that the difference is not as great as many people think.
It's not that AI is so all-fired smart ... it's that people are so all-fired dumb.
If AI is so smart why does it need training?
Apparently BoA is evil.
Ellison....?
Another task was really machine learning to process seismic data. The methods used by a team of 6 PhD seismologists were baked into a machine learning app. The humans processed the data in 8 hours; the ML implementation did the same job in 4 minutes. Repeated sets of data were applied with the same net outputs, just achieved at a different rate. I added my own tasking to generate a fully automated workflow that took raw data from outside the system, packaged it for the ML system, processed the data, used the results to generate custom tables and graphs, and sent the results to the data provider in 8 minutes. It wasn't AI, but it was finely crafted ML and post-processing in a fully automated process.
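A hypothetical sketch of that kind of fully automated workflow, with trivial stand-ins for each stage since the actual seismic ML model and data formats aren't described here:

```python
import csv
import io

def fetch_raw_data(text: str) -> list[float]:
    # Stand-in for pulling raw data in from outside the system.
    return [float(x) for x in text.split(",")]

def ml_process(samples: list[float]) -> list[float]:
    # Stand-in for the trained ML step (here just a 3-point moving average).
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - 2): i + 1]
        out.append(sum(window) / len(window))
    return out

def build_report(results: list[float]) -> str:
    # Stand-in for generating the custom tables and graphs.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["index", "value"])
    for i, v in enumerate(results):
        writer.writerow([i, round(v, 3)])
    return buf.getvalue()

if __name__ == "__main__":
    raw = fetch_raw_data("1.0, 2.0, 4.0, 8.0, 16.0")
    report = build_report(ml_process(raw))
    print(report)  # stand-in for sending the results back to the data provider
```

The point is the chaining: once each stage accepts the previous stage's output, the whole run needs no human in the loop.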
AI isn't smart. It's "tabula rasa" from the start. It needs to have models defined and data ingested as a starting point. It needs algorithms to decide what to do with the data. It also needs models to communicate with the user for input queries and answers. It is what you make it. Expert systems are mostly just focused look-ups. Other systems choose not to bias the interpretation and instead offer a proposed answer to which the user responds positively or negatively (reinforcement learning). The latter style can go down some non-obvious paths to a "solution", and it often transcends what the design envisioned.
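A toy sketch of that propose-then-thumbs-up/down style of reinforcement, using an invented preference table rather than any real product's API:

```python
import random

# Toy preference table for three canned answers.
candidates = {"answer A": 0.0, "answer B": 0.0, "answer C": 0.0}

def propose() -> str:
    # Explore occasionally; otherwise propose the best-scoring answer so far.
    if random.random() < 0.2:
        return random.choice(list(candidates))
    return max(candidates, key=candidates.get)

def feedback(answer: str, positive: bool) -> None:
    # Reinforce or penalize the proposed answer based on the user's response.
    candidates[answer] += 1.0 if positive else -1.0

for _ in range(20):
    proposal = propose()
    feedback(proposal, positive=(proposal == "answer B"))  # simulated user who likes B

print(candidates)  # "answer B" should end up with the highest score
```

Nothing here knows what a "good" answer is; the scores simply drift toward whatever the human keeps approving.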
I thought the promise of AI was new materials, new drugs, and new technologies. The answer to unsolvable math problems. It seems to be little more than a search engine.
What AI lacks is the truth. It is generally trained on online content, that is, the internet, and we all know the internet is full of half-truths and lies. For AI to be truly intelligent, it would have to use the scientific method to test its hypotheses against the real world. But how can it do this? We can't give an AI access to the real world to run experiments. That would be extremely dangerous, the stuff of apocalyptic science fiction.
AI requires real-world grounding in success and failure, as graded by people, in order to get better at useful tasks. AI cannot “improve itself” without such feedback. If you think about it, the same applies to humans in the form of testing in education and success and failure in the marketplace.
$400 billion for five new AI centers. They seem out over their skis to me. The dot-com bubble rationalized in 2000 after the same kind of tulip euphoria and greed.
Things that can’t go on don’t.