They are basically missing the one line of code that will bring general intelligence to AI.
I think it would be interesting if they programmed an AI machine to simply improve itself.
RE: I think it would be interesting if they programmed an AI machine to simply improve itself.
Yes, it’s very close to the concept of Machine Learning, but not quite the same thing. Roughly, it works like this:
You define a goal (e.g., classify emails as spam or not).
You feed the system training data (e.g., labeled emails).
The algorithm finds patterns and builds a model.
Over time, with more data or better tuning, the model can improve its accuracy.
So yes, it “improves,” but only within the boundaries set by its design, data, and training process.
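To make those steps concrete, here is a minimal sketch of that spam-classification loop in Python, assuming scikit-learn is available; the emails and labels are invented for illustration, not taken from any real dataset.

# A toy version of the loop above: human-defined goal, human-labeled data,
# an algorithm that finds patterns, and a model that "improves" only within
# those boundaries.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled emails: 1 = spam, 0 = not spam.
emails = [
    "Win a free prize now, click here",
    "Meeting moved to 3pm, agenda attached",
    "Cheap meds, limited time offer",
    "Can you review the quarterly report?",
]
labels = [1, 0, 1, 0]

# The goal (spam vs. not spam) and the labels come from people; the model
# only fits patterns inside that frame.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

print(model.predict(["Free offer just for you"]))  # most likely [1]
print(model.predict(["Lunch tomorrow?"]))          # most likely [0]

Feeding it more labeled emails and retraining is what "improvement" means here; nothing about the goal or the architecture changes on its own.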
If you’re thinking about systems that can truly evolve or adapt their own architecture or goals, that’s more in the realm of reinforcement learning or meta-learning, and even those are tightly controlled by human-defined rules.
Google Gemini.
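To illustrate the reinforcement-learning point a bit more concretely, here is a toy epsilon-greedy bandit loop in plain Python. Everything in it (the payoff probabilities, the exploration rate) is made up; the only point is that the reward the agent adapts to is defined by a person ahead of time.

import random

# Hypothetical reward probabilities for three "arms", fixed by the designer.
ARM_PAYOFFS = [0.2, 0.5, 0.8]
estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
EPSILON = 0.1  # chance of exploring a random arm

for step in range(1000):
    # Explore occasionally; otherwise exploit the current best estimate.
    if random.random() < EPSILON:
        arm = random.randrange(len(ARM_PAYOFFS))
    else:
        arm = max(range(len(estimates)), key=lambda a: estimates[a])

    # The environment and its notion of "success" are human-defined;
    # the agent cannot redefine what counts as a reward.
    reward = 1.0 if random.random() < ARM_PAYOFFS[arm] else 0.0

    # Incrementally update the running average reward for the chosen arm.
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)  # should drift toward ARM_PAYOFFS over many steps

The agent's behavior adapts, but only toward the reward signal it was handed; it never rewrites its own objective.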
AI requires real-world grounding in success and failure, as graded by people, in order to get better at useful tasks. AI cannot “improve itself” without such feedback. If you think about it, the same applies to humans: we are graded through testing in education and through success and failure in the marketplace.
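As a toy sketch of that feedback dependence (the function and numbers below are entirely made up for illustration): an update step that receives no externally supplied grades has nothing to optimize against, so the "model" stays exactly where it was.

def improve(model_score, human_grades):
    # Nudge a toy "model quality" value toward the average human grade.
    if not human_grades:
        # No real-world feedback: no signal, no improvement.
        return model_score
    target = sum(human_grades) / len(human_grades)
    return model_score + 0.5 * (target - model_score)

score = 0.2
print(improve(score, []))          # 0.2, unchanged without feedback
print(improve(score, [0.9, 0.8]))  # moves toward the human-graded target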