
The End of Computer Programming as We Know It
O'Reilly ^ | 03/16/2025 | Tim O’Reilly

Posted on 03/16/2025 6:24:41 PM PDT by SeekAndFind

There’s a lot of chatter in the media that software developers will soon lose their jobs to AI. I don’t buy it.

It is not the end of programming. It is the end of programming as we know it today. That is not new. The first programmers connected physical circuits to perform each calculation. They were succeeded by programmers writing machine instructions as binary code to be input one bit at a time by flipping switches on the front of a computer. Assembly language programming then put an end to that. It let a programmer use a human-like language to tell the computer to move data to locations in memory and perform calculations on it. Then, the development of even higher-level compiled languages like Fortran, COBOL, and their successors C, C++, and Java meant that most programmers no longer wrote assembly code. Instead, they could express their wishes to the computer using higher-level abstractions.


Betty Jean Jennings and Frances Bilas (right) program the ENIAC in 1946. Via the Computer History Museum

Eventually, interpreted languages, which are much easier to debug, became the norm. 

BASIC, one of the first of these to hit the big time, was at first seen as a toy, but soon proved to be the wave of the future. Programming became accessible to kids and garage entrepreneurs, not just the back office priesthood at large companies and government agencies.

Consumer operating systems were also a big part of the story. In the early days of the personal computer, every computer manufacturer needed software engineers who could write low-level drivers that performed the work of reading and writing to memory boards, hard disks, and peripherals such as modems and printers. Windows put an end to that. It didn’t just succeed because it provided a graphical user interface that made it far easier for untrained individuals to use computers. It also provided what Marc Andreessen, whose company Netscape was about to be steamrollered by Microsoft, dismissively (and wrongly) called “just a bag of drivers.” That bag of drivers, fronted by the Win32 APIs, meant that programmers no longer needed to write low-level code to control the machine. That job was effectively encapsulated in the operating system. Windows and macOS, and for mobile, iOS and Android, mean that today, most programmers no longer need to know much of what earlier generations of programmers knew.

There were more programmers, not fewer

This was far from the end of programming, though. There were more programmers than ever. Users in the hundreds of millions consumed the fruits of their creativity. In a classic demonstration of elasticity of demand, as software was easier to create, its price fell, allowing developers to create solutions that more people were willing to pay for.

The web was another “end of programming.” Suddenly, the user interface was made up of human-readable documents, shown in a browser with links that could in turn call programs on remote servers. Anyone could build a simple “application” with minimal programming skill. “No code” became a buzzword. Soon enough, everyone needed a website. Tools like WordPress made it possible for nonprogrammers to create those websites without coding. Yet as the technology grew in capability, successful websites became more and more complex. There was an increasing separation between “frontend” and “backend” programming. New interpreted programming languages like Python and JavaScript became dominant. Mobile devices added a new, ubiquitous front end, requiring new skills. And once again, the complexity was hidden behind frameworks, function libraries, and APIs that insulated programmers from having to know as much about the low-level functionality that had been essential for them to learn only a few years before.

Big data, web services, and cloud computing established a kind of “internet operating system.” Services like Apple Pay, Google Pay, and Stripe made it possible to do formerly difficult, high-stakes enterprise tasks like taking payments with minimal programming expertise. All kinds of deep and powerful functionality was made available via simple APIs. Yet this explosion of internet sites and the network protocols and APIs connecting them ended up creating the need for more programmers.

Programmers were no longer building static software artifacts updated every couple of years but continuously developing, integrating, and maintaining long-lived services. Even more importantly, much of the work at these vast services, like Google Search, Google Maps, Gmail, Amazon, Facebook, and Twitter, was automated at vast scale. Programs were designed and built by humans, not AI, but much of the work itself was done by special-purpose predecessors to today’s general purpose AIs. The workers that do the bulk of the heavy lifting at these companies are already programs. The human programmers are their managers. There are now hundreds of thousands of programmers doing this kind of supervisory work. They are already living in a world where the job is creating and managing digital co-workers.


“Google, Facebook, Amazon, or a host of more recent Silicon Valley startups…employ tens of thousands of workers. If you think with a twentieth century factory mindset, those workers spend their days grinding out products, just like their industrial forebears, only today, they are producing software rather than physical goods. If, instead, you step back and view these companies with a 21st century mindset, you realize that a large part of the work of these companies – delivering search results, news and information, social network status updates, and relevant products for purchase – is done by software programs and algorithms. These are the real workers, and the programmers who create them are their managers.”—Tim O’Reilly, “Managing the Bots That Are Managing the Business,” MIT Sloan Management Review, May 21, 2016

In each of these waves, old skills became obsolescent—still useful but no longer essential—and new ones became the key to success. There are still a few programmers who write compilers, thousands who write popular JavaScript frameworks and Python libraries, but tens of millions who write web and mobile applications and the backend software that enables them. Billions of users consume what they produce.

Might this time be different?

Suddenly, though, it is seemingly possible for a nonprogrammer to simply talk to an LLM or specialized software agent in plain English (or the human language of your choice) and get back a useful prototype in Python (or the programming language of your choice). There’s even a new buzzword for this: CHOP, or “chat-oriented programming.” The rise of advanced reasoning models is beginning to demonstrate AI that can generate even complex programs with a high-level prompt explaining the task to be accomplished. As a result, there are a lot of people saying “this time is different,” that AI will completely replace most human programmers, and in fact, most knowledge workers. They say we face a wave of pervasive human unemployment.
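To make that concrete, here is a purely illustrative sketch of what such a chat-oriented exchange might look like when driven through an API rather than a chat window. It uses the OpenAI Python client only as an example; the model name and the prompt are placeholders, not a recommendation.

from openai import OpenAI  # example client; assumes the SDK is installed and an API key is configured

client = OpenAI()

# A plain-English request; the requester writes no code at all.
prompt = (
    "Write a small Python script that reads a CSV file of expenses "
    "and prints the total spent in each category."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# The reply is a draft program the requester can run, inspect, and refine in further chat turns.
print(response.choices[0].message.content)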

I still don’t buy it. When there’s a breakthrough that puts advanced computing power into the hands of a far larger group of people, yes, ordinary people can do things that were once the domain of highly trained specialists. But that same breakthrough also enables new kinds of services and demand for those services. It creates new sources of deep magic that only a few understand.

The magic that’s coming now is the most powerful yet. And that means that we’re beginning a profound period of exploration and creativity, trying to understand how to make that magic work and to derive new advantages from its power. Smart developers who adopt the technology will be in demand because they can do so much more, focusing on the higher-level creativity that adds value.

Learning by doing

AI will not replace programmers, but it will transform their jobs. Eventually much of what programmers do today may be as obsolete (for everyone but embedded system programmers) as the old skill of debugging with an oscilloscope. Master programmer and prescient tech observer Steve Yegge observes that it is not junior and mid-level programmers who will be replaced but those who cling to the past rather than embracing the new programming tools and paradigms. Those who acquire or invent the new skills will be in high demand. Junior developers who master the tools of AI will be able to outperform senior programmers who don’t. Yegge calls it “The Death of the Stubborn Developer.”

My ideas are shaped not only by my own past 40+ years of experience in the computer industry and the observations of developers like Yegge but also by the work of economic historian James Bessen, who studied how the first Industrial Revolution played out in the textile mills of Lowell, Massachusetts during the early 1800s. As skilled crafters were replaced by machines operated by “unskilled” labor, human wages were indeed depressed. But Bessen noticed something peculiar by comparing the wage records of workers in the new industrial mills with those of the former home-based crafters. It took just about as long for an apprentice craftsman to reach the full wages of a skilled journeyman as it did for one of the new entry-level unskilled factory workers to reach full pay and productivity. The workers in both regimes were actually skilled workers. But they had different kinds of skills.

There were two big reasons, Bessen found, why wages remained flat or depressed for most of the first 50 years of the Industrial Revolution before taking off and leading to a widespread increase of prosperity. The first was that the factory owners hoarded the benefits of the new productivity rather than sharing it with workers. But the second was that the largest productivity gains took decades to arrive because the knowledge of how best to use the new technology wasn’t yet widely dispersed. It took decades for inventors to make the machines more robust, for those using them to come up with new kinds of workflows to make them more effective, to create new kinds of products that could be made with them, for a wider range of businesses to adopt the new technologies, and for workers to acquire the necessary skills to take advantage of them. Workers needed new skills not only to use the machines but to repair them, to improve them, to invent the future that they implied but had not yet made fully possible. All of this happens through a process that Bessen calls “learning by doing.”

It’s not enough for a few individuals to be ahead of the curve in adopting the new skills. Bessen explains that “what matters to a mill, an industry, and to society generally is not how long it takes to train an individual worker but what it takes to create a stable, trained workforce” (Learning by Doing, 36). Today, every company that is going to be touched by this revolution (which is to say, every company) needs to put its shoulder to the wheel. We need an AI-literate workforce. What is programming, after all, but the way that humans get computers to do our bidding? The fact that “programming” is getting closer and closer to human language, that our machines can understand us rather than us having to speak to them in their native tongue of 0s and 1s, or some specialized programming language pidgin, should be cause for celebration.

People will be creating, using, and refining more programs, and new industries will be born to manage and build on what we create. Lessons from history tell us that when automation makes it cheaper and easier to deliver products that people want or need, increases in demand often lead to increases in employment. It is only when demand is satisfied that employment begins to fall. We are far from that point when it comes to programming.

Not surprisingly, Wharton School professor and AI evangelist Ethan Mollick is also a fan of Bessen’s work. This is why he argues so compellingly to “always bring AI to the table,” to involve it in every aspect of your job, and to explore “the jagged edge” of what works and what doesn’t. It is also why he urges companies to use AI to empower their workers, not to replace them. There is so much to learn about how to apply the new technology. A business’s best source of applied R&D is the exploration done by its own people as they use AI to solve their problems and seek out new opportunities.

What programming is will change

Sam Schillace, one of the deputy CTOs at Microsoft, agreed with my analysis. In a recent conversation, he told me, “We’re in the middle of inventing a new programming paradigm around AI systems. When we went from the desktop into the internet era, everything in the stack changed, even though all the levels of the stack were the same. We still have languages, but they went from compiled to interpreted. We still have teams, but they went from waterfall to Agile to CI/CD. We still have databases, but they went from ACID to NoSQL. We went from one user, one app, one thread, to multi distributed, whatever. We’re doing the same thing with AI right now.”

Here are some of the technologies that are being assembled into a new AI stack. And this doesn’t even include the plethora of AI models, their APIs, and their cloud infrastructure. And it’s already out of date!


“AI Engineering Landscape,” via Marie-Alice Blete on GitHub

But the explosion of new tools, frameworks, and practices is just the beginning of how programming is changing. One issue, Schillace noted, is that models don’t have memory the way humans have memory. Even with large context windows, they struggle to do what he calls “metacognition.” As a result, he sees the need for humans to still provide a great deal of the context in which their AI co-developers operate.

Schillace expanded on this idea in a recent post. “Large language models (LLMs) and other AI systems are attempting to automate thought,” he wrote. “The parallels to the automation of motion during the industrial revolution are striking. Today, the automation is still crude: we’re doing the cognitive equivalent of pumping water and hammering—basic tasks like summarization, pattern recognition, and text generation. We haven’t yet figured out how to build robust engines for this new source of energy—we’re not even at the locomotive stage of AI yet.”

Even the locomotive stage was largely an expansion of the brute force humans were able to bring to bear when moving physical objects. The essential next breakthrough was an increase in the means of control over that power. Schillace asks, “What if traditional software engineering isn’t fully relevant here? What if building AI requires fundamentally different practices and control systems? We’re trying to create new kinds of thinking (our analog to motion): higher-level, metacognitive, adaptive systems that can do more than repeat pre-designed patterns. To use these effectively, we’ll need to invent entirely new ways of working, new disciplines. Just as the challenges of early steam power birthed metallurgy, the challenges of AI will force the emergence of new sciences of cognition, reliability, and scalability—fields that don’t yet fully exist.”

The challenge of deploying AI technologies in business

Bret Taylor, formerly co-CEO of Salesforce, one-time Chief Technology Officer at Meta, and long ago, leader of the team that created Google Maps, is now the CEO of AI agent developer Sierra, a company at the heart of developing and deploying AI technology in businesses. In a recent conversation, Bret told me that he believes that a company’s AI agent will become its primary digital interface, as significant as its website, as significant as its mobile app, perhaps even more so. A company’s AI agent will have to encode all of its key business policies and processes. This is something that AI may eventually be able to do on its own, but today, Sierra has to assign each of its customers an engineering team to help with the implementation.

“That last mile of taking a cool platform and a bunch of your business processes and manifesting an agent is actually pretty hard to do,” Bret explained. “There’s a new role emerging now that we call an agent engineer, a software developer who looks a little bit like a frontend web developer. That’s an archetype that’s the most common in software. If you’re a React developer, you can learn to make AI agents. What a wonderful way to reskill and make your skills relevant.”

Who will want to wade through a customer service phone tree when they could be talking to an AI agent that can actually solve their problem? But getting those agents right is going to be a real challenge. It’s not the programming that’s so hard. It’s deeply understanding the business processes and thinking through how they can be transformed to take advantage of the new capabilities. An agent that simply reproduces existing business processes will be as embarrassing as a web page or mobile app that simply recreates a paper form. (And yes, those do still exist!)

Addy Osmani, the head of user experience for Google Chrome, calls this the 70% problem: “While engineers report being dramatically more productive with AI, the actual software we use daily doesn’t seem like it’s getting noticeably better.” He notes that nonprogrammers working with AI code generation tools can get out a great demo or solve a simple problem, but they get stuck on the last 30% of a complex program because they don’t know enough to debug the code and guide the AI to the correct solution. Meanwhile:

When you watch a senior engineer work with AI tools like Cursor or Copilot, it looks like magic. They can scaffold entire features in minutes, complete with tests and documentation. But watch carefully, and you’ll notice something crucial: They’re not just accepting what the AI suggests…. They’re applying years of hard-won engineering wisdom to shape and constrain the AI’s output. The AI is accelerating their implementation, but their expertise is what keeps the code maintainable. Junior engineers often miss these crucial steps. They accept the AI’s output more readily, leading to what I call “house of cards code” – it looks complete but collapses under real-world pressure.

In this regard, Chip Huyen, the author of the new book AI Engineering, made an illuminating observation in an email to me:

I don’t think AI introduces a new kind of thinking. It reveals what actually requires thinking.

No matter how manual, if a task can only be done by a handful of those most educated, that task is considered intellectual. One example is writing, the physical act of copying words onto paper. In the past, when only a small portion of the population was literate, writing was considered intellectual. People even took pride in their calligraphy. Nowadays, the word “writing” no longer refers to this physical act but the higher abstraction of arranging ideas into a readable format.

Similarly, once the physical act of coding can be automated, the meaning of “programming” will change to refer to the act of arranging ideas into executable programs.

Mehran Sahami, the chair of Stanford’s CS department, put it simply: “Computer science is about systematic thinking, not writing code.”

When AI agents start talking to agents…

…precision in articulating the problem correctly gets even more important. An agent as a corporate frontend that provides access to all of a company’s business processes will be talking not just to consumers but also to agents for those consumers and agents for other companies.

That entire side of the agent equation is far more speculative. We haven’t yet begun to build out the standards for cooperation between independent AI agents! A recent paper on the need for agent infrastructure notes:

Current tools are largely insufficient because they are not designed to shape how agents interact with existing institutions (e.g., legal and economic systems) or actors (e.g., digital service providers, humans, other AI agents). For example, alignment techniques by nature do not assure counterparties that some human will be held accountable when a user instructs an agent to perform an illegal action. To fill this gap, we propose the concept of agent infrastructure: technical systems and shared protocols external to agents that are designed to mediate and influence their interactions with and impacts on their environments. Agent infrastructure comprises both new tools and reconfigurations or extensions of existing tools. For example, to facilitate accountability, protocols that tie users to agents could build upon existing systems for user authentication, such as OpenID. Just as the Internet relies on infrastructure like HTTPS, we argue that agent infrastructure will be similarly indispensable to ecosystems of agents. We identify three functions for agent infrastructure: 1) attributing actions, properties, and other information to specific agents, their users, or other actors; 2) shaping agents’ interactions; and 3) detecting and remedying harmful actions from agents.
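To make the attribution function a bit more tangible, here is a small, purely hypothetical Python sketch of the kind of record such infrastructure might produce: a signed log entry that ties an agent's action to an authenticated user so it can later be attributed and audited. None of the field names or mechanisms come from the paper; a real system would rely on established identity and signing standards rather than a shared secret.

import hashlib, hmac, json, time

SHARED_SECRET = b"example-key"  # hypothetical; a real deployment would use keys issued by an identity provider

def attribute_action(agent_id: str, user_id: str, action: dict) -> dict:
    """Create a tamper-evident record linking an agent's action to its user."""
    record = {
        "agent_id": agent_id,
        "user_id": user_id,  # e.g., an identity established via OpenID-style authentication
        "action": action,
        "timestamp": time.time(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return record

def verify_record(record: dict) -> bool:
    """Check that a record was produced by a holder of the shared secret."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record.get("signature", ""), expected)

rec = attribute_action("agent-42", "user@example.com", {"type": "refund", "amount": 19.99})
print(verify_record(rec))  # True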

There are huge coordination and design problems to be solved here. Even the best AI agents we can imagine will not solve complex coordination problems like this without human direction. There is enough programming needed here to keep even AI-assisted programmers busy for at least the next decade.

In short, there is a whole world of new software to be invented, and it won’t be invented by AI alone but by human programmers using AI as a superpower. And those programmers need to acquire a lot of new skills.

We are in the early days of inventing the future

There is so much new to learn and do. So yes, let’s be bold and assume that AI codevelopers make programmers ten times as productive. (Your mileage may vary, depending on how eager your developers are to learn new skills.) But let’s also stipulate that once that happens, the “programmable surface area” of a business, of the sciences, of our built infrastructure will rise in parallel. If there are 20x the number of opportunities for programming to make a difference, we’ll still need twice as many of those new 10x programmers!

User expectations are also going to rise. Businesses that simply use the greater productivity to cut costs will lose out to companies that invest in harnessing the new capabilities to build better services.

As Simon Willison, a longtime software developer who has been at the forefront of showing the world how programming can be easier and better in the AI era, notes, AI lets him “be more ambitious” with his projects.

Take a lesson from another field where capabilities exploded: It may take as long to render a single frame of one of today’s Marvel superhero movies as it did to render the entirety of the first Pixar film even though CPU/GPU price and performance have benefited from Moore’s Law. It turns out that the movie industry wasn’t content to deliver low-res crude animation faster and more cheaply. The extra cycles went into thousands of tiny improvements in realistic fur, water, clouds, reflections, and many many more pixels of resolution. The technological improvement resulted in higher quality, not just cheaper/faster delivery. There are some industries made possible by choosing cheaper/faster over higher production values (consider the explosion of user-created video online), so it won’t be either-or. But quality will have its place in the market. It always does.

Imagine tens of millions of amateur AI-assisted programmers working with AI tools like Replit and Devin or enterprise solutions like those provided by Salesforce, Palantir, or Sierra. What is the likelihood that they will stumble over use cases that will appeal to millions? Some of them will become the entrepreneurs of this next generation of software created in partnership with AI. But many of their ideas will be adopted, refined, and scaled by existing professional developers.

The journey from prototype to production

In the enterprise, AI will make it much more possible for solutions to be built by those closest to any problem. But the best of those solutions will still need to travel the rest of the way on what Shyam Sankar, the CTO of Palantir, has called “the journey from prototype to production.” Sankar noted that the value of AI to the enterprise is “in automation, in enterprise autonomy.” But as he also pointed out, “Automation is limited by edge cases.” He recalled the lessons of Stanley, the self-driving car that won the DARPA Grand Challenge in 2005: able to do something remarkable but requiring another 20 years of development to fully handle the edge cases of driving in a city.

“Workflow still matters,” Sankar argued, and the job of the programmer will be to understand what can be done by traditional software, what can be done by AI, what still needs to be done by people, and how you string things together to actually accomplish the workflow. He notes that “a toolchain that enables you to capture feedback and learn the edge cases to get there as quickly as possible is the winning tool chain.” In the world Sankar envisions, AI is “actually going to liberate developers to move into the business much more and be much more levered in the impact they deliver.” Meanwhile, the top-tier subject matter experts will become programmers with the help of AI assistants. It is not programmers who will be out of work. It will be the people—in every job role—who don’t become AI-assisted programmers.

This is not the end of programming. It is the beginning of its latest reinvention.



TOPICS: Business/Economy; Computers/Internet; Society
KEYWORDS: ai; coding; computer; programming
To: TexasGator

I like you and I like your insightful posts, so don’t take this personally. That program is idiotic. It doesn’t get the concept of prime.

Instead of testing all the numbers up to sqrt(n), test only the primes previously found up to sqrt(n); once you have exhausted the primes below sqrt(n) without finding a divisor, move on to the next integer. As n increases, for any positive integer N you pick there is an M such that for n > M my program will be N times as fast as yours/Gemini’s. And I feel pretty certain my suggestion is also not the fastest.
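A minimal Python sketch of that approach, assuming the goal is to list every prime up to some limit; each candidate is trial-divided only by the primes already found, stopping at sqrt(n):

import math

def primes_up_to(limit):
    """Trial-divide each candidate only by previously found primes up to sqrt(n)."""
    primes = []
    for n in range(2, limit + 1):
        root = math.isqrt(n)
        is_prime = True
        for p in primes:
            if p > root:       # ran out of primes <= sqrt(n) without a divisor: n is prime
                break
            if n % p == 0:     # found a divisor: n is composite
                is_prime = False
                break
        if is_prime:
            primes.append(n)
    return primes

print(primes_up_to(50))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]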

Your example supports Stephen Wolfram’s observation that LLMs start out innumerate. They continue sequences of words using Markov-chain probabilities, and if the model is not large enough they sound exactly like Kamala Harris word salad.

As a physicist, programmer, lifetime computer engineer, and EE/Computer PE, life taught me that even if you can write a good program to do anything you want, and it does everything the customer requested, you are only a small part of the way there.

1) Requirements are hard to extract from people. If you ask people what they want and build it, most of the time they say “that’s what I said, but that isn’t what I want” and change it. (documented study)

2) If it is fast and good, it will change the process, because they start running it far more often for analysis, such as trying different designs or doing tax returns different ways.

3) Users will conflate the UI with the program logic.

4) Every program needs to be tightly integrated with subject matter experts to handle process escapes as times and facts change.

5) You will constantly have to argue what your work does versus what someone else can promise.


41 posted on 03/16/2025 10:45:00 PM PDT by takebackaustin
[ Post Reply | Private Reply | To 13 | View Replies]

To: SeekAndFind

There are several competing issues going on. Computer science has always been an evolving field, with “adapt or die” being part of it from the beginning. For example, when I started, everyone was expected to know assembly language (which itself changes for each new generation of processors) and how to write a custom memory allocator (a skill I have used precisely zero times since graduating). Likewise, for a long time a course in “Computer Graphics” included a lot of analytic geometry, such as “How do you know whether a pixel is inside or outside of a shape?” So from that perspective, a lot of what is now taught (“here is how to code a linked list, here is how to create a file on disk, here is how to sort numbers”) may become similar: something that still exists but that only one expert at one company ever touches.

In contrast, “no code” programming has been a buzzword for decades. 5GLs were supposed to remove the “drudgery” from programming meaning businesses would not need to hire programmers anymore. Apple’s HyperCard was supposedly going to make traditional programming obsolete. Visual Programming was supposed to take over the world. So the idea “this next invention will make the entire field of computer science obsolete” is long-lived and hasn’t been true.

One thing that very definitely is happening: AIs are now good enough to solve simple problems, which means computer science students are now turning in AI-generated code rather than learning how to write their own. This is already a crisis in academic circles, as it means they are not learning the basics that this article describes as still-essential skills. And AI cannot currently (and may never) solve complex problems, so the current crop of students may be in a very bad place in a few years.

The part I most strongly agree with: we don’t yet know how to use these tools effectively.


42 posted on 03/16/2025 10:57:52 PM PDT by TennesseeProfessor
[ Post Reply | Private Reply | To 1 | View Replies]

To: SeekAndFind

Thank you for an illuminating article and many insightful comments.


43 posted on 03/17/2025 12:41:11 AM PDT by Rockingham
[ Post Reply | Private Reply | To 1 | View Replies]

To: LouAvul

Grace Hopper as I recall ...


44 posted on 03/17/2025 2:44:38 AM PDT by SandwicheGuy ("Man is the only pack animal that will follow an unstable leader." Cesar Chavez)
[ Post Reply | Private Reply | To 5 | View Replies]

To: TexasGator

It was an analogy that management could understand.

I still marvel at management.


45 posted on 03/17/2025 5:11:15 AM PDT by ImJustAnotherOkie
[ Post Reply | Private Reply | To 8 | View Replies]

To: Myrddin

Yea, good times. Coding is good work.


46 posted on 03/17/2025 5:34:06 AM PDT by ImJustAnotherOkie
[ Post Reply | Private Reply | To 27 | View Replies]

To: SandwicheGuy

OK, I learned Fortran, COBOL, and C++. What would be a good introduction to using AI? Some sort of DIY tutorial? (Just enough that I can have fun, not so much that I will fundamentally disrupt modern civilization as we know it.)
;-)


47 posted on 03/17/2025 5:56:11 AM PDT by Don@VB (THE NEW GREEN DEAL IS JUST THE OLD RED DEAL)
[ Post Reply | Private Reply | To 44 | View Replies]

To: SeekAndFind
IMHO using AI for programming is an enormous step backwards, because it hides the pain of using insufficient tooling, and thus removes the motivation to create better tools.

High-level languages have been evolving (and still are) ever since the machine-language days of the UNIVAC, making developers more productive and software better able to utilize hardware.

If we had had AI back in the 1950s, we would never have progressed past assembly.

48 posted on 03/17/2025 6:06:42 AM PDT by SecondAmendment (The history of the present Federal Government is a history of repeated injuries and usurpations .)
[ Post Reply | Private Reply | To 1 | View Replies]

To: TexasGator; usconservative; Mr. K
It can when trained to include those “quirks”.

I have some significant experience working with our in-house, sandboxed AI. For example, I have had our AI produce a series of Automated Unit Tests (to be differentiated from standard 'unit tests'). Creating AUTs requires that they follow the Arrange-Act-Assert template.

What I got back looked pretty good, and at first glance, seemed like it may have saved our development team hundreds of hours of development and testing....

....except....

... it only provided the bare minimum of AUT code. The stuff was not sufficient to test our business use-cases. Sure, it provided my asked-for 80% code coverage... but poorly.

The amount of prompt-engineering it would take to produce good AUTs would require a book about the thickness of 'War and Peace'. And you cannot verbally prompt-engineer, since invariably you will either leave something out or you will need to revise the prompt. This means it really has to be written.

So, to your point, training it to catch all the quirks is an extremely daunting and time-intensive task, and that may prove impractical. Furthermore, that's even if you are AWARE of every quirk. Your development team may not.
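For reference, a minimal Arrange-Act-Assert example in Python is sketched below; the function under test and its values are purely hypothetical. A test like this satisfies the template, but, as described above, covering real business use-cases takes far more than the bare pattern.

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_standard_customer():
    # Arrange: set up inputs and the expected outcome.
    price = 100.00
    percent = 15.0
    expected = 85.00

    # Act: exercise the code under test.
    actual = apply_discount(price, percent)

    # Assert: verify the observable result.
    assert actual == expected

if __name__ == "__main__":
    test_apply_discount_standard_customer()
    print("ok")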

I know you pretty well, you like to be right. I ask, however, that you accord me subject-matter-expert deference in this particular topic, as I would accord you the same deference in, say, operation of a nuclear plant... which I believe you have SME experience in.

49 posted on 03/17/2025 6:14:48 AM PDT by Lazamataz (I'm so on fire that I feel the need to stop, drop, and roll!)
[ Post Reply | Private Reply | To 15 | View Replies]

To: SecondAmendment
IMHO using AI for programming is an enormous step backwards, because it hides the pain of using insufficient tooling, and thus removes the motivation to create better tools.

At this juncture, AI can only provide a reasonable start-point. See my above experience. You still need qualified developers to proof the code, and those developers need to understand the business use-cases.

50 posted on 03/17/2025 6:16:34 AM PDT by Lazamataz (I'm so on fire that I feel the need to stop, drop, and roll!)
[ Post Reply | Private Reply | To 48 | View Replies]

To: ImJustAnotherOkie
To me programming is low level languages. That’s where the work gets done.

Me too. I do the largest portion of my work in C, with an occasional dip into assembler. I have been working fully loaded since 1968. In the first 15 years of my career, assembly was most heavily used, with FORTRAN taking up some slack. C became available in 1980, and that was really nice. I am an embedded programmer, with the very rare add-on skill of being able to design the circuit boards. Instrumentation and control is the silo in which I live.

51 posted on 03/17/2025 6:27:44 AM PDT by GingisK
[ Post Reply | Private Reply | To 2 | View Replies]

To: linMcHlp
the actual software we use daily doesn’t seem like it’s getting noticeably better

Most of the time that is due to lack of planning and process. Few teams work to coding standards or have dedicated test teams. Worse yet, the art of writing specifications is dead. Most of my clients do not have a clue about how their system should work or the nuances of how it should "look and feel" to the end user. Almost none of them will sit down with me to verify that my understanding of the process/procedures/behavior of their intended product matches their expectations. Fortunately for them, I'm pretty good at this and can sleuth my way through the project.

A decent organization will refine its safety and functionality checklists or feed that back into standards. Few do so.

52 posted on 03/17/2025 6:36:58 AM PDT by GingisK
[ Post Reply | Private Reply | To 10 | View Replies]

To: dfwgator
a bunch of quirks, that cannot simply be deduced by using AI

Or God, for that matter. ;-D

53 posted on 03/17/2025 6:38:03 AM PDT by GingisK
[ Post Reply | Private Reply | To 12 | View Replies]

To: Myrddin

The 680xx family was the best there ever was! Too bad that Intel blight was adopted for use in the PC.


54 posted on 03/17/2025 6:41:23 AM PDT by GingisK
[ Post Reply | Private Reply | To 27 | View Replies]

To: SeekAndFind

As a retired software engineer, I can’t see a machine solving some of the software design issues we did. Sometimes you have to think outside the box.


55 posted on 03/17/2025 6:45:09 AM PDT by McGruff (Biden will go down in history as the worst president ever.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Lazamataz

AI is like Wikipedia, a nice place to start but never take it as fact.


56 posted on 03/17/2025 6:48:38 AM PDT by CodeToad ( )
[ Post Reply | Private Reply | To 50 | View Replies]

To: Skywise
It certainly is better than Google for discovering solutions to edge-case problems. The limit is a static training model that is older than the problem you are trying to resolve. I found that with questions about Keycloak. I was seeing quirks that were specific to the latest release, but ChatGPT was 5 releases behind.
57 posted on 03/17/2025 7:38:04 AM PDT by Myrddin
[ Post Reply | Private Reply | To 35 | View Replies]

To: GingisK
The 680xx family was the best there ever was! Too bad that Intel blight was adopted for use in the PC.

I trained my students using 6800 and 8085 based trainers. About 2 years into my employment teaching the class, I wired up a 6809 CPU adapter board for the 6800 trainer and built an updated monitor ROM that had access to all of the 6809 features. It was a "one off", but I donated it to the department. The EPA68000 trainers became available in my last semester teaching. Too late to incorporate it into the course.

I still have my Atari 1040ST serial number 2. 68000 based. My TRS-80 Model 16 has a 68010 CPU board inside to run Xenix.

58 posted on 03/17/2025 7:44:11 AM PDT by Myrddin
[ Post Reply | Private Reply | To 54 | View Replies]

To: Lazamataz

“The amount of prompt-engineering it would take to produce good AUTs would require a book about the thickness of ‘War and Peace’.”

I am fully aware that most of the processing at those “AI islands” is for training the models.

My ex also made good money working on old COBOL systems full of quirks.


59 posted on 03/17/2025 7:48:28 AM PDT by TexasGator (X1.1111'1'./iI11 .I1.11.'1I1.I'')
[ Post Reply | Private Reply | To 49 | View Replies]

To: Don@VB

“OK, I learned Fortran, COBOL, and C++. What would be a good introduction to using AI? Some sort of DIY tutorial? (Just enough that I can have fun, not so much that I will fundamentally disrupt modern civilization as we know it.)
;-)”

Microsoft and Google have sites that allow you to work with AI.

I played with the Microsoft sandbox but never signed up for development work.


60 posted on 03/17/2025 7:59:23 AM PDT by TexasGator (X1.1111'1'./iI11 .I1.11.'1I1.I'')
[ Post Reply | Private Reply | To 47 | View Replies]



