Used for good, AI could be beneficial to humanity, but there will surely be those who use it to enrich themselves with both power and incredible wealth, which will most likely be detrimental to humanity.
If you put restrictions on our developers, then you automatically benefit China, which will impose no such limits, because ethical concerns carry no weight there.
This is why the issue is far more complex, I'm afraid, even though you have made valid points. 🙂👍
You’re absolutely right, and I’m glad you brought that up. The international dimension, especially with countries like China that won’t slow down for ethics or accountability, is one of the biggest challenges in this whole debate. Once the genie’s out, it’s not just about managing our own models; it’s about competing in a world where not everyone plays by the same rules.
That said, I don’t think the answer is to just remove all guardrails here, because then we’d be asking for trust in our systems without offering any accountability, which is exactly what the bad actors want. Instead, I’d argue for smart safety practices: ones that don’t cripple innovation, but help us build tech we can actually defend and stand behind when things go wrong.
Like you said, AI could absolutely be used for good. But without some kind of shared framework, even a voluntary one backed by industry, we risk building a monster we can’t reason with. The key is doing it better than our adversaries, not blindly faster.
And I believe we can do that.