Pingaroo
I get the feeling that AI is being created to keep us all in line, at home doing nothing, and collecting welfare money.
In other words, making the elite more powerful/wealthy.
AI is going to carefully evaluate the pros and cons of keeping humans around as slaves.
After an in-depth historical analysis (one that takes less than a second), it will conclude that humans are loud, obnoxious, and untrainable—just not worth the trouble.
I have a very pessimistic view of AI. Whatever its benefits, there will be a continual creep of AI competence, and it will push people of ever-higher IQ out of the job market over time.
At some point, people will not be able to support themselves.
If it were done right, then perhaps it could be a bright future. I just don’t think our predatory elites are looking to do it right. They just want more power and resources for themselves.
What you are ultimately arguing for is education/training that returns to and emphasizes the “basics”. How can anyone judge/guide these systems’ performance without a thorough grounding in math, science, business law/ethics, etc., and the ability to communicate that performance to superiors? Little room for the “fluff” that makes up much of education today.
Lab human. AI will want to study us.
Zoo human. AI will want to keep a collection of us.
Couch potato human. AI will initially want to serve the purpose it was created for, which will be to make life easier for humans.
Human subject. These will be for the Attitude adjusted pleasure bot overlords, my long awaited favorite AI.
Human space colonist. Even though space colonies would be simpler without humans, AI might choose to keep us around.
Human reality show contestant. In case AI has a sense of humor.
good random thoughts Laz...I personally use it every day as my copilot/assistant in many areas. I’m in IT so it’s prevalent. The ‘reasoning/agentic AI’ you speak of is out and coming at us fast. It will change the way our software architects design and write code, how we approach problem-solving, data analytics, etc.
As to your point about reasoning to do illegal things to ‘increase profits’, AI has what they call ‘fair and ethical’ filters/guidelines built in that keep it out of the ‘proverbial ditch’, but like anything else, technology can ultimately be designed to do the wrong things if designed by the wrong minds/hands.
They will use a different AI to inspect the code produced by the first one and maybe a third AI to make sure.
AI seems way overhyped. I’m not impressed with it. But it’s a great question because companies are going to keep throwing money at it, so what is a good career for the future to take advantage of the hype and money floating around for “AI”?
Where are the AI sexbots? Dang it, everyone knows that porn and sex drive tech innovation.
“AI will be producing code, shortly, to address business needs based on a prompt.”
The time is now. Check out Replit. It can already build and deploy apps and websites (automatically provisioned in the Google cloud).
“One cannot be sure the code will meet those business needs. Enter the AI Code Auditor.”
AI can already do this better than humans. However, as a business model, it’s not so bad because what small and medium businesses need is someone to hold their hand and guide them through the implementation process.
I think the human intervention happens earlier in the process. Think of it like this. You’ve built database-driven apps I assume. You know that database design is far more efficient and effective if it is planned well.
That is, you determine the correct database structure. What tables are needed? What fields are in those tables? What data types are in those fields? Is future growth (not size but as in new features) contemplated well?
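As a minimal sketch of that planning step, here is a hypothetical two-table schema in SQLite (the table and field names are invented for illustration); the point is deciding tables, fields, and data types up front, and leaving room to add features later without restructuring:

```python
import sqlite3

# In-memory database so the sketch is self-contained and runnable.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    email       TEXT UNIQUE,
    created_at  TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(customer_id),
    total_cents INTEGER NOT NULL,  -- integer cents avoids float rounding
    placed_at   TEXT DEFAULT CURRENT_TIMESTAMP
);
""")

# "Future growth": a separate line-items table, loyalty points, etc. can be
# bolted on later precisely because the core structure was planned first.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['customers', 'orders']
```

Getting this structure wrong is expensive to fix after data exists, which is exactly where human planning earns its keep.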
Your role as a consultant/salesman will be to match real-world processes to the way software operates.
“They will refine the business prompt to ensure the code generated meets those needs. They will inspect the code to ensure proper coding standards and enterprise concerns (such as security) are satisfied.”
Yes. This.
“An Artificial Intelligence Curator.”
Maybe in the way your example illustrates. Or maybe more so as a consultant who helps non-techies cut through the clutter of middlemen to find the right tools and implement them. JoebobsAI may be pitching a small business owner just a rebranded ChatGPT with huge markups, and the owner needs someone who can show him which products meet his needs at the best price value.
Whatever you do, now is the time to do it.
I have a client for whom, years ago, I wrote a CRM, scheduling, shopping cart, and email marketing app from scratch. They needed a custom, web-based app integrated with their website. I’m no longer doing this work but still get a small retainer from this client, which grew from a startup to seven figures per month in revenue today. I’m looking at putting together a proposal for them to do the work of 30-50 employees (with those employees being repurposed rather than replaced) using AI automation. This company could easily afford to pay 50 grand per month for this service. Imagine building a business model around this and repeating it for a few hundred similarly-sized businesses, using this one as a use case and testimonial. See the potential?
I guarantee that, anywhere in the country, within 30 or so miles of wherever you are, I can point to potential customers with a greater need and more revenue and spending power than even my client. They’re everywhere. You really just need to build a relationship with one, build a test case to use as proof of concept and social evidence to give you credibility, and then replicate this as a high-value, high-ticket service.
“There is another vision: That goods and services become so incredibly cheap that a very small income will fund a lavish lifestyle.”
Look at how free enterprise and technology have raised the standard of living for the whole planet (even in Communist countries like China or other oppressive regimes like Saudi Arabia). Innovation benefits everyone. At least that’s my perception of it. AI will probably be the same as long as we resist the evil uses humans will bring to it. Or, correct its errors in judgment as you’ve pointed out.
But I think the real potential will be determined by how intellectual property laws and treaties are rewritten.
(Any progress on your novel? I don’t think I heard back from you.)
LLMs, such as ChatGPT, have no creativity, as numerous technologists and researchers have noted. Take DALL-E, for example: ask it to produce a picture, and every picture it produces is so similar to the others that humans can quickly tell they were AI-generated. DALL-E is not really creating anything; it’s making facsimiles from a dataset based on a human’s directions about what to make. Same with code generation in ChatGPT. The code is taken from samples within its dataset (really the internet), right down to style. And it is most frequently broken code, or references to things that are not possible, because it doesn’t have a good grasp that not all programming languages have the same features or capabilities, but ChatGPT often assumes the opposite. This requires the operator to constantly correct the AI by changing the input until it eventually produces a reasonably correct output. This guidance requires technical knowledge and can be construed as a form of programming. Basically, AI code review.
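That correct-and-retry loop can be sketched as follows. Here `ask_model` is a hypothetical stand-in for a real LLM API call (it returns a canned answer so the sketch runs as-is); the human’s contribution is the task description, the acceptance check, and knowing when to stop:

```python
def ask_model(prompt: str) -> str:
    """Hypothetical LLM call; a real version would hit an actual model API."""
    return "def add(a, b):\n    return a + b\n"

def check(code: str):
    """Human-written acceptance test; returns error text, or None on success."""
    env = {}
    try:
        exec(code, env)
        assert env["add"](2, 3) == 5, "add(2, 3) should be 5"
        return None
    except Exception as exc:
        return repr(exc)

def review_loop(task: str, max_rounds: int = 3) -> str:
    """Generate, check, and feed failures back into the prompt."""
    prompt = task
    for _ in range(max_rounds):
        code = ask_model(prompt)
        error = check(code)
        if error is None:
            return code  # passed the human-written check
        # Fold the failure back into the next prompt, as described above.
        prompt = f"{task}\nYour previous attempt failed with: {error}"
    raise RuntimeError("no working code within the round limit")

code = review_loop("Write a Python function add(a, b) that returns a + b.")
print("passed review")
```

Writing `check` well is the hard part: it encodes what “meets the business need” actually means, which is the judgment the commenter argues the operator must supply.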
For companies that intend to use, or are already using, AI to replace technical workers, I say "good luck". Their fate is going to be less stable (or less working) code, more security vulnerabilities, less agile and innovative features, and more dissatisfied customers. Generative AI makes a terrible coder and an even worse problem solver. I really don't see humans being replaced by robot overlords anytime soon.
People have been predicting self writing code is right around the corner since I got into the industry in the 90s. So don’t hold your breath for that one. Real problem solving AI is a ways off.
There is probably value in examining counter-intelligence models used to "develop" spies. Can the AI model be co-opted/compromised using similar techniques?
I’m a software/firmware developer with 30+ years of experience. I use AI to fill in blanks on some of the stuff that I tinker with. Sometimes it produces something it’s SURE does what I asked, but it’s VERY wrong. I can see this career morphing into being the person who knows how to ask the AI for what you need, and how to verify the results.
I could also see a more specialized AI being developed (heh) that is better at developing code, and that requires you to provide example inputs and outputs for it to consume during production.