The End of Computer Programming as We Know It
O'Reilly ^ | 03/16/2025 | Tim O’Reilly

Posted on 03/16/2025 6:24:41 PM PDT by SeekAndFind

There’s a lot of chatter in the media that software developers will soon lose their jobs to AI. I don’t buy it.

It is not the end of programming. It is the end of programming as we know it today. That is not new. The first programmers connected physical circuits to perform each calculation. They were succeeded by programmers writing machine instructions as binary code to be input one bit at a time by flipping switches on the front of a computer. Assembly language programming then put an end to that. It let a programmer use a more human-readable language to tell the computer to move data to locations in memory and perform calculations on it. Then the development of even higher-level compiled languages like Fortran, COBOL, and their successors C, C++, and Java meant that most programmers no longer wrote assembly code. Instead, they could express their wishes to the computer using higher-level abstractions.



Betty Jean Jennings and Frances Bilas (right) program the ENIAC in 1946. Via the Computer History Museum

Eventually, interpreted languages, which are much easier to debug, became the norm. 

BASIC, one of the first of these to hit the big time, was at first seen as a toy, but soon proved to be the wave of the future. Programming became accessible to kids and garage entrepreneurs, not just the back office priesthood at large companies and government agencies.

Consumer operating systems were also a big part of the story. In the early days of the personal computer, every computer manufacturer needed software engineers who could write low-level drivers that performed the work of reading and writing to memory boards, hard disks, and peripherals such as modems and printers. Windows put an end to that. It didn’t just succeed because it provided a graphical user interface that made it far easier for untrained individuals to use computers. It also provided what Marc Andreessen, whose company Netscape was about to be steamrollered by Microsoft, dismissively (and wrongly) called “just a bag of drivers.” That bag of drivers, fronted by the Win32 APIs, meant that programmers no longer needed to write low-level code to control the machine. That job was effectively encapsulated in the operating system. Windows and macOS, and for mobile, iOS and Android, mean that today, most programmers no longer need to know much of what earlier generations of programmers knew.

There were more programmers, not fewer

This was far from the end of programming, though. There were more programmers than ever. Users in the hundreds of millions consumed the fruits of their creativity. In a classic demonstration of elasticity of demand, as software became easier to create, its price fell, allowing developers to create solutions that more people were willing to pay for.

The web was another “end of programming.” Suddenly, the user interface was made up of human-readable documents, shown in a browser with links that could in turn call programs on remote servers. Anyone could build a simple “application” with minimal programming skill. “No code” became a buzzword. Soon enough, everyone needed a website. Tools like WordPress made it possible for nonprogrammers to create those websites without coding. Yet as the technology grew in capability, successful websites became more and more complex. There was an increasing separation between “frontend” and “backend” programming. New interpreted programming languages like Python and JavaScript became dominant. Mobile devices added a new, ubiquitous front end, requiring new skills. And once again, the complexity was hidden behind frameworks, function libraries, and APIs that insulated programmers from having to know as much about the low-level functionality that it had been essential for them to learn only a few years before.

Big data, web services, and cloud computing established a kind of “internet operating system.” Services like Apple Pay, Google Pay, and Stripe made it possible to do formerly difficult, high-stakes enterprise tasks like taking payments with minimal programming expertise. All kinds of deep and powerful functionality were made available via simple APIs. Yet this explosion of internet sites and the network protocols and APIs connecting them ended up creating the need for more programmers.
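
To make that concrete, here is a minimal sketch of taking a payment with Stripe's Python library. The key and amount are placeholders, and production code would also handle webhooks, receipts, and failure cases; the point is how little of the underlying complexity the programmer has to touch.

```python
# A minimal sketch, not production code: the secret key and amount are
# placeholders, and real code would also handle webhooks, receipts,
# and failure cases.
import stripe

stripe.api_key = "sk_test_..."  # placeholder test-mode key

# One call does the heavy lifting; card networks, fraud checks, and
# compliance are handled inside the hosted service.
intent = stripe.PaymentIntent.create(
    amount=2000,  # smallest currency unit, i.e. $20.00
    currency="usd",
    automatic_payment_methods={"enabled": True},
)

print(intent.id, intent.status)
```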

Programmers were no longer building static software artifacts updated every couple of years but continuously developing, integrating, and maintaining long-lived services. Even more importantly, much of the work at these vast services, like Google Search, Google Maps, Gmail, Amazon, Facebook, and Twitter, was automated at vast scale. Programs were designed and built by humans, not AI, but much of the work itself was done by special-purpose predecessors to today’s general purpose AIs. The workers that do the bulk of the heavy lifting at these companies are already programs. The human programmers are their managers. There are now hundreds of thousands of programmers doing this kind of supervisory work. They are already living in a world where the job is creating and managing digital co-workers.


“Google, Facebook, Amazon, or a host of more recent Silicon Valley startups…employ tens of thousands of workers. If you think with a twentieth century factory mindset, those workers spend their days grinding out products, just like their industrial forebears, only today, they are producing software rather than physical goods. If, instead, you step back and view these companies with a 21st century mindset, you realize that a large part of the work of these companies – delivering search results, news and information, social network status updates, and relevant products for purchase – is done by software programs and algorithms. These are the real workers, and the programmers who create them are their managers.”—Tim O’Reilly, “Managing the Bots That Are Managing the Business,” MIT Sloan Management Review, May 21, 2016

In each of these waves, old skills became obsolescent—still useful but no longer essential—and new ones became the key to success. There are still a few programmers who write compilers, thousands who write popular JavaScript frameworks and Python libraries, but tens of millions who write web and mobile applications and the backend software that enables them. Billions of users consume what they produce.

Might this time be different?

Suddenly, though, it is seemingly possible for a nonprogrammer to simply talk to an LLM or specialized software agent in plain English (or the human language of your choice) and get back a useful prototype in Python (or the programming language of your choice). There’s even a new buzzword for this: CHOP, or “chat-oriented programming.” The rise of advanced reasoning models is beginning to demonstrate AI that can generate even complex programs with a high-level prompt explaining the task to be accomplished. As a result, there are a lot of people saying “this time is different,” that AI will completely replace most human programmers, and in fact, most knowledge workers. They say we face a wave of pervasive human unemployment.
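
In practice, a chat-oriented programming session can be as small as a short script around a model API. Below is a minimal sketch using the OpenAI Python client; the model name and prompt are illustrative, and any comparable LLM API works the same way.

```python
# A minimal sketch of "chat-oriented programming": describe the task in
# plain English and get back a runnable prototype. The model name and
# prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

task = (
    "Write a Python function that reads a CSV of orders with columns "
    "date, sku, quantity, unit_price and returns total revenue per month."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a careful Python programmer."},
        {"role": "user", "content": task},
    ],
)

print(response.choices[0].message.content)  # the generated prototype
```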

I still don’t buy it. When there’s a breakthrough that puts advanced computing power into the hands of a far larger group of people, yes, ordinary people can do things that were once the domain of highly trained specialists. But that same breakthrough also enables new kinds of services and demand for those services. It creates new sources of deep magic that only a few understand.

The magic that’s coming now is the most powerful yet. And that means that we’re beginning a profound period of exploration and creativity, trying to understand how to make that magic work and to derive new advantages from its power. Smart developers who adopt the technology will be in demand because they can do so much more, focusing on the higher-level creativity that adds value.

Learning by doing

AI will not replace programmers, but it will transform their jobs. Eventually much of what programmers do today may be as obsolete (for everyone but embedded system programmers) as the old skill of debugging with an oscilloscope. Master programmer and prescient tech observer Steve Yegge observes that it is not junior and mid-level programmers who will be replaced but those who cling to the past rather than embracing the new programming tools and paradigms. Those who acquire or invent the new skills will be in high demand. Junior developers who master the tools of AI will be able to outperform senior programmers who don’t. Yegge calls it “The Death of the Stubborn Developer.”

My ideas are shaped not only by my own past 40+ years of experience in the computer industry and the observations of developers like Yegge but also by the work of economic historian James Bessen, who studied how the first Industrial Revolution played out in the textile mills of Lowell, Massachusetts during the early 1800s. As skilled crafters were replaced by machines operated by “unskilled” labor, human wages were indeed depressed. But Bessen noticed something peculiar by comparing the wage records of workers in the new industrial mills with those of the former home-based crafters. It took just about as long for an apprentice craftsman to reach the full wages of a skilled journeyman as it did for one of the new entry-level unskilled factory workers to reach full pay and productivity. The workers in both regimes were actually skilled workers. But they had different kinds of skills.

There were two big reasons, Bessen found, why wages remained flat or depressed for most of the first 50 years of the Industrial Revolution before taking off and leading to a widespread increase of prosperity. The first was that the factory owners hoarded the benefits of the new productivity rather than sharing it with workers. But the second was that the largest productivity gains took decades to arrive because the knowledge of how best to use the new technology wasn’t yet widely dispersed. It took decades for inventors to make the machines more robust, for those using them to come up with new kinds of workflows to make them more effective, to create new kinds of products that could be made with them, for a wider range of businesses to adopt the new technologies, and for workers to acquire the necessary skills to take advantage of them. Workers needed new skills not only to use the machines but to repair them, to improve them, to invent the future that they implied but had not yet made fully possible. All of this happens through a process that Bessen calls “learning by doing.”

It’s not enough for a few individuals to be ahead of the curve in adopting the new skills. Bessen explains that “what matters to a mill, an industry, and to society generally is not how long it takes to train an individual worker but what it takes to create a stable, trained workforce” (Learning by Doing, 36). Today, every company that is going to be touched by this revolution (which is to say, every company) needs to put its shoulder to the wheel. We need an AI-literate workforce. What is programming, after all, but the way that humans get computers to do our bidding? The fact that “programming” is getting closer and closer to human language, that our machines can understand us rather than us having to speak to them in their native tongue of 0s and 1s, or some specialized programming language pidgin, should be cause for celebration.

People will be creating, using, and refining more programs, and new industries will be born to manage and build on what we create. Lessons from history tell us that when automation makes it cheaper and easier to deliver products that people want or need, increases in demand often lead to increases in employment. It is only when demand is satisfied that employment begins to fall. We are far from that point when it comes to programming.

Not surprisingly, Wharton School professor and AI evangelist Ethan Mollick is also a fan of Bessen’s work. This is why he argues so compellingly to “always bring AI to the table,” to involve it in every aspect of your job, and to explore “the jagged edge” of what works and what doesn’t. It is also why he urges companies to use AI to empower their workers, not to replace them. There is so much to learn about how to apply the new technology. A business’s best source of applied R&D is the explorations of its own people as they use AI to solve their problems and seek out new opportunities.

What programming is will change

Sam Schillace, one of the deputy CTOs at Microsoft, agreed with my analysis. In a recent conversation, he told me, “We’re in the middle of inventing a new programming paradigm around AI systems. When we went from the desktop into the internet era, everything in the stack changed, even though all the levels of the stack were the same. We still have languages, but they went from compiled to interpreted. We still have teams, but they went from waterfall to Agile to CI/CD. We still have databases, but they went from ACID to NoSQL. We went from one user, one app, one thread, to multi distributed, whatever. We’re doing the same thing with AI right now.”

Here are some of the technologies that are being assembled into a new AI stack. And this doesn’t even include the plethora of AI models, their APIs, and their cloud infrastructure. And it’s already out of date!


“AI Engineering Landscape,” via Marie-Alice Blete on GitHub

But the explosion of new tools, frameworks, and practices is just the beginning of how programming is changing. One issue, Schillace noted, is that models don’t have memory the way humans have memory. Even with large context windows, they struggle to do what he calls “metacognition.” As a result, he sees the need for humans to still provide a great deal of the context in which their AI co-developers operate.

Schillace expanded on this idea in a recent post. “Large language models (LLMs) and other AI systems are attempting to automate thought,” he wrote. “The parallels to the automation of motion during the industrial revolution are striking. Today, the automation is still crude: we’re doing the cognitive equivalent of pumping water and hammering—basic tasks like summarization, pattern recognition, and text generation. We haven’t yet figured out how to build robust engines for this new source of energy—we’re not even at the locomotive stage of AI yet.”

Even the locomotive stage was largely an expansion of the brute force humans were able to bring to bear when moving physical objects. The essential next breakthrough was an increase in the means of control over that power. Schillace asks, “What if traditional software engineering isn’t fully relevant here? What if building AI requires fundamentally different practices and control systems? We’re trying to create new kinds of thinking (our analog to motion): higher-level, metacognitive, adaptive systems that can do more than repeat pre-designed patterns. To use these effectively, we’ll need to invent entirely new ways of working, new disciplines. Just as the challenges of early steam power birthed metallurgy, the challenges of AI will force the emergence of new sciences of cognition, reliability, and scalability—fields that don’t yet fully exist.”

The challenge of deploying AI technologies in business

Bret Taylor, formerly co-CEO of Salesforce, one-time Chief Technology Officer at Meta, and long ago, leader of the team that created Google Maps, is now the CEO of AI agent developer Sierra, a company at the heart of developing and deploying AI technology in businesses. In a recent conversation, Bret told me that he believes that a company’s AI agent will become its primary digital interface, as significant as its website, as significant as its mobile app, perhaps even more so. A company’s AI agent will have to encode all of its key business policies and processes. This is something that AI may eventually be able to do on its own, but today, Sierra has to assign each of its customers an engineering team to help with the implementation.

“That last mile of taking a cool platform and a bunch of your business processes and manifesting an agent is actually pretty hard to do,” Bret explained. “There’s a new role emerging now that we call an agent engineer, a software developer who looks a little bit like a frontend web developer. That’s an archetype that’s the most common in software. If you’re a React developer, you can learn to make AI agents. What a wonderful way to reskill and make your skills relevant.”

Who will want to wade through a customer service phone tree when they could be talking to an AI agent that can actually solve their problem? But getting those agents right is going to be a real challenge. It’s not the programming that’s so hard. It’s deeply understanding the business processes and rethinking how they can be transformed to take advantage of the new capabilities. An agent that simply reproduces existing business processes will be as embarrassing as a web page or mobile app that simply recreates a paper form. (And yes, those do still exist!)

Addy Osmani, the head of user experience for Google Chrome, calls this the 70% problem: “While engineers report being dramatically more productive with AI, the actual software we use daily doesn’t seem like it’s getting noticeably better.” He notes that nonprogrammers working with AI code generation tools can get out a great demo or solve a simple problem, but they get stuck on the last 30% of a complex program because they don’t know enough to debug the code and guide the AI to the correct solution. Meanwhile:

When you watch a senior engineer work with AI tools like Cursor or Copilot, it looks like magic. They can scaffold entire features in minutes, complete with tests and documentation. But watch carefully, and you’ll notice something crucial: They’re not just accepting what the AI suggests…. They’re applying years of hard-won engineering wisdom to shape and constrain the AI’s output. The AI is accelerating their implementation, but their expertise is what keeps the code maintainable. Junior engineers often miss these crucial steps. They accept the AI’s output more readily, leading to what I call “house of cards code” – it looks complete but collapses under real-world pressure.

In this regard, Chip Huyen, the author of the new book AI Engineering, made an illuminating observation in an email to me:

I don’t think AI introduces a new kind of thinking. It reveals what actually requires thinking.

No matter how manual, if a task can only be done by a handful of those most educated, that task is considered intellectual. One example is writing, the physical act of copying words onto paper. In the past, when only a small portion of the population was literate, writing was considered intellectual. People even took pride in their calligraphy. Nowadays, the word “writing” no longer refers to this physical act but the higher abstraction of arranging ideas into a readable format.

Similarly, once the physical act of coding can be automated, the meaning of “programming” will change to refer to the act of arranging ideas into executable programs.

Mehran Sahami, the chair of Stanford’s CS department, put it simply: “Computer science is about systematic thinking, not writing code.”

When AI agents start talking to agents…

…precision in articulating the problem correctly gets even more important. An agent as a corporate frontend that provides access to all of a company’s business processes will be talking not just to consumers but also to agents for those consumers and agents for other companies.

That entire side of the agent equation is far more speculative. We haven’t yet begun to build out the standards for cooperation between independent AI agents! A recent paper on the need for agent infrastructure notes:

Current tools are largely insufficient because they are not designed to shape how agents interact with existing institutions (e.g., legal and economic systems) or actors (e.g., digital service providers, humans, other AI agents). For example, alignment techniques by nature do not assure counterparties that some human will be held accountable when a user instructs an agent to perform an illegal action. To fill this gap, we propose the concept of agent infrastructure: technical systems and shared protocols external to agents that are designed to mediate and influence their interactions with and impacts on their environments. Agent infrastructure comprises both new tools and reconfigurations or extensions of existing tools. For example, to facilitate accountability, protocols that tie users to agents could build upon existing systems for user authentication, such as OpenID. Just as the Internet relies on infrastructure like HTTPS, we argue that agent infrastructure will be similarly indispensable to ecosystems of agents. We identify three functions for agent infrastructure: 1) attributing actions, properties, and other information to specific agents, their users, or other actors; 2) shaping agents’ interactions; and 3) detecting and remedying harmful actions from agents.
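
What such attribution infrastructure might look like is still an open question. As one illustrative sketch (the record format and the shared-secret HMAC signing below are assumptions for illustration, not the paper's proposal), each action an agent takes could be wrapped in a record signed under a key tied to an authenticated human user, so that counterparties can verify who is accountable.

```python
# Illustrative sketch only: one way to attribute an agent's action to an
# authenticated human user. The field names and the HMAC scheme are
# assumptions for illustration, not a design from the paper quoted above.
import hashlib
import hmac
import json
import time


def sign_agent_action(user_id: str, agent_id: str, action: dict, secret: bytes) -> dict:
    """Wrap an agent action in a record a counterparty can verify."""
    record = {
        "user_id": user_id,      # authenticated principal (e.g., via OpenID)
        "agent_id": agent_id,    # which agent acted on the user's behalf
        "action": action,        # what the agent is trying to do
        "timestamp": time.time(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return record


def verify_agent_action(record: dict, secret: bytes) -> bool:
    """Check that the record was signed by the holder of `secret`."""
    claimed = record.get("signature", "")
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)


if __name__ == "__main__":
    secret = b"shared-demo-secret"  # placeholder; a real system would use PKI
    rec = sign_agent_action(
        "alice", "shopping-agent-7",
        {"type": "purchase", "sku": "B0123", "qty": 1}, secret,
    )
    print(verify_agent_action(rec, secret))  # True
```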

There are huge coordination and design problems to be solved here. Even the best AI agents we can imagine will not solve complex coordination problems like this without human direction. There is enough programming needed here to keep even AI-assisted programmers busy for at least the next decade.

In short, there is a whole world of new software to be invented, and it won’t be invented by AI alone but by human programmers using AI as a superpower. And those programmers need to acquire a lot of new skills.

We are in the early days of inventing the future

There is so much new to learn and do. So yes, let’s be bold and assume that AI co-developers make programmers ten times as productive. (Your mileage may vary, depending on how eager your developers are to learn new skills.) But let’s also stipulate that once that happens, the “programmable surface area” of a business, of the sciences, of our built infrastructure will rise in parallel. If there are 20x the number of opportunities for programming to make a difference, we’ll still need twice as many of those new 10x programmers!

User expectations are also going to rise. Businesses that simply use the greater productivity to cut costs will lose out to companies that invest in harnessing the new capabilities to build better services.

As Simon Willison, a longtime software developer who has been at the forefront of showing the world how programming can be easier and better in the AI era, notes, AI lets him “be more ambitious” with his projects.

Take a lesson from another field where capabilities exploded: It may take as long to render a single frame of one of today’s Marvel superhero movies as it did to render the entirety of the first Pixar film, even though CPU/GPU price and performance have benefited from Moore’s Law. It turns out that the movie industry wasn’t content to deliver low-res, crude animation faster and more cheaply. The extra cycles went into thousands of tiny improvements in realistic fur, water, clouds, reflections, and many, many more pixels of resolution. The technological improvement resulted in higher quality, not just cheaper/faster delivery. There are some industries made possible by choosing cheaper/faster over higher production values (consider the explosion of user-created video online), so it won’t be either-or. But quality will have its place in the market. It always does.

Imagine tens of millions of amateur AI-assisted programmers working with AI tools like Replit and Devin or enterprise solutions like those provided by Salesforce, Palantir, or Sierra. What is the likelihood that they will stumble over use cases that will appeal to millions? Some of them will become the entrepreneurs of this next generation of software created in partnership with AI. But many of their ideas will be adopted, refined, and scaled by existing professional developers.

The Journey from Prototype to Production

In the enterprise, AI will make it much more possible for solutions to be built by those closest to any problem. But the best of those solutions will still need to travel the rest of the way on what Shyam Sankar, the CTO of Palantir, has called “the journey from prototype to production.” Sankar noted that the value of AI to the enterprise is “in automation, in enterprise autonomy.” But as he also pointed out, “Automation is limited by edge cases.” He recalled the lessons of Stanley, the self-driving car that won the DARPA Grand Challenge in 2005: able to do something remarkable but requiring another 20 years of development to fully handle the edge cases of driving in a city.

“Workflow still matters,” Sankar argued, and the job of the programmer will be to understand what can be done by traditional software, what can be done by AI, what still needs to be done by people, and how you string things together to actually accomplish the workflow. He notes that “a toolchain that enables you to capture feedback and learn the edge cases to get there as quickly as possible is the winning tool chain.” In the world Sankar envisions, AI is “actually going to liberate developers to move into the business much more and be much more levered in the impact they deliver.” Meanwhile, the top-tier subject matter experts will become programmers with the help of AI assistants. It is not programmers who will be out of work. It will be the people—in every job role—who don’t become AI-assisted programmers.

This is not the end of programming. It is the beginning of its latest reinvention.



TOPICS: Business/Economy; Computers/Internet; Society
KEYWORDS: ai; coding; computer; programming
To: SeekAndFind

Yes, I’m sure when management comes and tells you to teach this how to do your job, you have nothing to worry about.


21 posted on 03/16/2025 7:31:34 PM PDT by Openurmind

To: LouAvul

22 posted on 03/16/2025 7:43:41 PM PDT by Bobalu (I can’t even feign surprise anymore...)

To: econjack
There’s no secret to keeping a programming job. All you have to do is keep renewing your skill set.

Bingo! You can't be a one-trick pony. You must constantly learn the latest technology that customers are willing to support. You can't master everything that is flooding out. JavaScript frameworks come and go. Some are good enough to be adopted and required by a customer. That's where you spend your training time.

23 posted on 03/16/2025 7:48:47 PM PDT by Myrddin

To: SeekAndFind

bfl


24 posted on 03/16/2025 7:50:08 PM PDT by Attention Surplus Disorder (The Democrat breadlines will be gluten-free. )

To: SeekAndFind

TLDNR
Where are the Cliff’s Notes on this?


25 posted on 03/16/2025 7:50:18 PM PDT by Dr. Franklin ("A republic, if you can keep it." )

To: TexasGator

“I guess you never heard of Document AI.”

so, is that what H&R Block and Intuit used to convert the tens of thousands of pages of Fed and State tax laws and regulations into TurboTax and H&R Block Tax PC programs?


26 posted on 03/16/2025 7:51:13 PM PDT by catnipman ((A Vote For The Lesser Of Two Evils Still Counts As A Vote For Evil))

To: ImJustAnotherOkie
I was recruited into a project that was providing lots of new UNIX kernel support to a HP-UX 7.0 environment on a 68030 CPU. I had to port Mentat System V Streams, an HP-UX 9.0 multi-LUN SCSI driver (backport by me to 7.0), X.25 Level 2 and 3 support code from Spyder and cobble up a tunnel driver to hook the new Mentat Streams into the BSD TCP/IP stack. Over 6 weeks I ported and integrated 275,000 lines of new code into the kernel. At the lowest level, I needed to use 68030 assembler to leverage memory "test and set" instructions to ensure atomic locking of data structures.

When I finished the task, my company took the new OS to a computer show in Europe to show it off, then a week later an earthquake hit Haiti. My code was in the field providing support for military activity to support the rescue/recovery efforts. No AI in those days. Just hours of grunt work making that kernel code bulletproof. The "magic" under the covers that lets people do the high-level stuff with ease.

27 posted on 03/16/2025 7:58:02 PM PDT by Myrddin

To: All

I have AI open on a separate screen all day. It’s great! I can paste a block of code and say: “see this code? I want a sub that is similar, but does x, y, and z”. Or in SQL, I can paste two table definitions and say “I want to keep this table in sync with that table, create an INSERT, UPDATE, and DELETE command in a stored procedure that will do it.”

“Insert debugging lines within this code”
“Can you help me figure out what is wrong with this function”
“I want to write a DAX formula that does X. In SQL, I’d do it like this: YYYYY. How can I do something similar in my DAX formula?”

Sometimes you go down a rabbit hole of bad suggestions that loop back on themselves. But overall it has helped me expand my knowledge, learn new software quickly, and do things that I may have thought would be more complicated than they were.
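
(For illustration, the kind of table-sync logic being described here, sketched in Python with sqlite3 rather than as a SQL stored procedure; the table and column names are made up for the example.)

```python
# Illustrative sketch of "keep the target table in sync with the source
# table" using Python + sqlite3. Table and column names are invented.
import sqlite3


def sync_target_to_source(conn: sqlite3.Connection) -> None:
    cur = conn.cursor()
    # INSERT rows present in source but missing from target.
    cur.execute("""
        INSERT INTO target (id, name, price)
        SELECT s.id, s.name, s.price FROM source s
        WHERE s.id NOT IN (SELECT id FROM target)
    """)
    # UPDATE every row present in both so target's values match source's.
    cur.execute("""
        UPDATE target
        SET name  = (SELECT s.name  FROM source s WHERE s.id = target.id),
            price = (SELECT s.price FROM source s WHERE s.id = target.id)
        WHERE id IN (SELECT id FROM source)
    """)
    # DELETE rows that no longer exist in source.
    cur.execute("DELETE FROM target WHERE id NOT IN (SELECT id FROM source)")
    conn.commit()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE source (id INTEGER PRIMARY KEY, name TEXT, price REAL);
        CREATE TABLE target (id INTEGER PRIMARY KEY, name TEXT, price REAL);
        INSERT INTO source VALUES (1, 'widget', 9.99), (2, 'gadget', 4.50);
        INSERT INTO target VALUES (2, 'gadget', 3.00), (3, 'stale', 1.00);
    """)
    sync_target_to_source(conn)
    # Prints [(1, 'widget', 9.99), (2, 'gadget', 4.5)]
    print(conn.execute("SELECT * FROM target ORDER BY id").fetchall())
```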

If I had the money, I’d invest in nuclear. AI isn’t going to get smaller.


28 posted on 03/16/2025 8:01:31 PM PDT by mmichaels1970 ( )

To: Bobalu

I remember that very article. I took Fortran, BASIC, Visual Basic, and C++. We studied the history as well.


29 posted on 03/16/2025 8:01:39 PM PDT by LouAvul (1 John 2:22: He that denies that Jesus is the Christ is a liar and antichrist. )

To: dfwgator
Also a lot of business applications have to play well with legacy systems, that have a bunch of quirks, that cannot simply be deduced by using AI.

Bingo! Many of those quirks are only known to people who have worked on those systems for years. It's corporate memory that an AI can't mine. When that corporate memory gets retired or laid off, there isn't an easy recovery path. When I left PacBell in 1991, there were 360 major projects underway. I was one of thousands of people who left the company at that time. Out of the 360 projects, 180 were a total loss. No way to continue. It was capability being built for the future. The remaining programs were farmed out to contractors. In one case the contractor worked on a project and proclaimed victory. A program review revealed that the contractor had achieved victory interfacing to ONE other system. The contractor was ignorant of the other 5 systems that were required for the process flows to work. Essentially, that was a "scrap" too, as the corporate memory had been retired or laid off. Oops.

30 posted on 03/16/2025 8:08:27 PM PDT by Myrddin

To: SeekAndFind

Great article.

There is a low-code / no-code movement that enables those with minimal or no programming skills to build business automations with tools like make.com and n8n.

This is popular for marketing agencies and business consultants.

These tools can be used for things like social media content creation (for businesses), sending automated marketing emails and text messages, and fielding phone calls with interactive AI.


31 posted on 03/16/2025 8:09:10 PM PDT by unlearner (Still not tired of winning.)

To: SeekAndFind

Ping!


32 posted on 03/16/2025 8:16:33 PM PDT by scouter (As for me and my household... We will serve the LORD.)

To: SeekAndFind
Even as I'm winding down after 40 years of software engineering, I'm incorporating AI in my daily work. I need to remain relevant and competitive at age 68. When I do retire, I'll get to switch focus to problems that interest me and languages I prefer. As an employee, I focus on my customer's problems and solve them with the tools required under the contract that implements the winning proposal. Much of the work is modernizing legacy systems that were built over the last 30 years. I was a team member of many of them. Today...an SME morphing the old capability to run on new platforms.
33 posted on 03/16/2025 8:18:26 PM PDT by Myrddin

To: TexasGator

Woooow - because no programmer has ever come up with a program to produce prime numbers before?

Still voting RINO?


34 posted on 03/16/2025 8:52:47 PM PDT by Skywise

To: Myrddin

It’s gotten better in the last year I’ve used it, but for my coding tasks it still generates worthless code about 80% of the time.

What it is, is a better search algorithm… but I attribute that to Google et al degrading their search algorithms for ad space. They’ll eventually do the same for AI and we’ll be back to the current state of things.


35 posted on 03/16/2025 8:57:58 PM PDT by Skywise

To: Skywise

The Sieve is still a common interview question, so it pays to have some idea of how to implement it.
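
(For anyone rusty on it, the classic Sieve of Eratosthenes fits in a dozen lines of Python.)

```python
# Sieve of Eratosthenes: all primes below n.
def sieve(n: int) -> list[int]:
    is_prime = [True] * n
    if n > 0:
        is_prime[0] = False
    if n > 1:
        is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Cross off multiples of p, starting at p*p.
            for multiple in range(p * p, n, p):
                is_prime[multiple] = False
    return [i for i, prime in enumerate(is_prime) if prime]


print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```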


36 posted on 03/16/2025 9:51:31 PM PDT by RitchieAprile (available monkeys looking for the change..)

To: SeekAndFind

I can give you guys some war stories from the AI front. I’ve been a professional dev since ‘86. Assembly/C/C++ and most other popular microcomputer languages. Moving on. At the office I was asked to develop a script to parse a series of PDFs. I knew this was going nowhere, and by the time I was done with the script they would have changed their minds, so I decided to use AI to do it. It got the first 3-4 working, but then when 5 was different it took over a day for it to produce code that could handle it. (This is OCR’d data from the title page of a patent-like document.) By the time it had figured out how to handle #5, it had broken 1-3. So I had to point it back to those, and it fixed them, but then 5 didn’t work anymore. These were in essence TABLES that were getting scanned. Sometimes the word “Name” was on the LEFT side of the name, sometimes it was ABOVE. It couldn’t seem to deduce what was going on despite me telling it: “This data comes from a table, sometimes the word ‘NAME’ is on the left of the name. Sometimes it’s ABOVE it”. I even caught it one time adding code to special-case a situation it was running into. (The name had spaces between elements, and it had been asked to remove all but 1 space.) It added an “If name = ‘first last’, then name = ‘first last’” chunk of code. I had to remind it that there are 1400+ PDFs in this collection and that it couldn’t special-case ALL of them. Anyway, it was about this point that I was told that there exists an online tool that can spit out this information, and thanks for all the work I had done.

Being a glutton for punishment, I decided to try one of my home projects with it. I write software bots that play classic video games. I’m working on a new one now, and figured I would see if it could reason out some code to handle the targeting. Short answer? NO. Long answer: It’s like it has very limited memory. You can only add so much background and so many corner cases before it starts losing some of it. At one point it produced a 500+ line file that did MOST of the parsing, but it had a bunch of bugs. In the process of trying to get the bugs fixed, it provided a 200-line file that had code to address the current bug but had stripped out a bunch of the previously generated code, which was REQUIRED. Once again, as though it ran out of memory and had to “tune the working set”. I finally got it to a better place and called it quits. I’m now working through the code and finding the holes on my own.
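
(For illustration of the table-extraction problem in the first story above, where the label is sometimes to the left of the value and sometimes above it, here is a minimal, hypothetical heuristic in Python; it is not the actual script.)

```python
# Hypothetical sketch of the "label to the left OR above" problem:
# find the value for a label like "Name" in OCR'd lines of text.
import re


def find_labeled_value(lines: list[str], label: str) -> str | None:
    pattern = re.compile(rf"\b{re.escape(label)}\b[:\s]*", re.IGNORECASE)
    for i, line in enumerate(lines):
        match = pattern.search(line)
        if not match:
            continue
        # Case 1: label on the LEFT -- value is the rest of the same line.
        rest = line[match.end():].strip()
        if rest:
            return rest
        # Case 2: label ABOVE -- value is the next non-empty line.
        for next_line in lines[i + 1:]:
            if next_line.strip():
                return next_line.strip()
    return None


ocr_a = ["Name: Jane Q. Public", "Date: 2025-03-16"]
ocr_b = ["Name", "Jane Q. Public", "Date", "2025-03-16"]
print(find_labeled_value(ocr_a, "Name"))  # Jane Q. Public
print(find_labeled_value(ocr_b, "Name"))  # Jane Q. Public
```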


37 posted on 03/16/2025 10:07:27 PM PDT by FrankRizzo890

To: econjack
"I started to learn programming on my own back in the late 1970’s when the first microcomputers (MITS Altair, Sol-20) began to appear."

Same here. My first love was the Motorola "D2 Kit" for programming the 6800 processor. I taught myself 6800 assembly language during the commercial breaks of one episode of "Saturday Night Live". A few days later, during our computer science lab, our instructor was called out of the lab to address a "computer emergency". When he returned several minutes later I was in front of the class teaching.

I'm working on a little programming project using Claude now. It's something I've wanted to do for years, but not had time - a web app to create a graphical timeline by combining any number of JSON-formatted "timeline files". What I'm discovering is that AI-driven programming is not at all "ready for prime time", and that the obstacles remaining are formidable.
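
(The core of that kind of tool, merging JSON timeline files into one chronologically sorted list, is small; a minimal Python sketch follows, assuming a made-up schema in which each file holds a list of {"date", "label"} events with ISO dates. The hard part, as noted above, is everything around it.)

```python
# Minimal sketch of merging JSON "timeline files" into one sorted timeline.
# Assumes a made-up schema: each file is a JSON list of {"date", "label"}
# events with ISO 8601 dates -- not the poster's actual format.
import json
import sys


def merge_timelines(paths: list[str]) -> list[dict]:
    events = []
    for path in paths:
        with open(path, encoding="utf-8") as f:
            events.extend(json.load(f))
    # ISO 8601 dates (YYYY-MM-DD) sort correctly as plain strings.
    return sorted(events, key=lambda e: e["date"])


if __name__ == "__main__":
    for event in merge_timelines(sys.argv[1:]):
        print(f'{event["date"]}  {event["label"]}')
```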

38 posted on 03/16/2025 10:11:51 PM PDT by The Duke (Not without incident.)

To: SeekAndFind

Almost 70 years ago I was writing programs in FORTRAN IV and using paper punch cards for input, and some 25 years ago I learned JavaScript and HTML. These programming languages are now about as dead as Sumerian and cuneiform. However, I have kept my old slide rule and trusty book of logarithmic tables, so if the sun belches out another Carrington Event frying all our computers, I am ready.


39 posted on 03/16/2025 10:23:37 PM PDT by The Great RJ

To: RitchieAprile

That’s my point - it’s already out there a hundred times in Google land and countless other search engines. The code is already there, and the so-called “AI” is synthesizing answers from existing results.


40 posted on 03/16/2025 10:31:11 PM PDT by Skywise

