🎹 Music for this post: https://www.youtube.com/watch?v=ns_wvl6JB6E.
In April 2023, I wrote ChatGPT Challenges Us to Focus on Better Things. Are We Up for It?
There’s not a word I would change today. I’m still not mesmerized by generative AI. I still believe it helps with many perfunctory tasks — increasingly so. The world has quickly come to see how much of our existing work is perfunctory. We are at least a little worried about that, yet we still should not be, because there is so much truly original work that lies ahead, and there is still much human work to do.
Earlier this year, I was fortunate to meet Dr. Pramod Khargonekar, Distinguished Professor of Electrical Engineering and Computer Science at UC Irvine. He presented at RIT on the topic of “Advancing AI Innovation and Education through University-Industry Collaboration” and cited a paper by Erik Brynjolfsson called “The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence.” Dr. Khargonekar shared this profound diagram from Brynjolfsson’s paper:

That light green area prompted a lot of well-educated minds to behold it in wonderment and nod in agreement. As with all technologies, AI stands to help us more than it stands to replace us. I couldn’t have said it better in ChatGPT Challenges Us to Focus on Better Things. Are We Up for It? if I tried.
It seems to me that the profession I serve is a poster child for the implications of generative AI. What you might not suspect, however, is that it epitomizes our misunderstandings about the current state of many professions.
As I write this in August of 2025, we are nearly three years into the popular era of generative AI. A large percentage of students and professionals are using these tools daily to write, or help write, software. Given a reasonably good prompt, the 2025 wave of LLMs produces reasonably good software for many things. This has caused a wave of concern about the future of the software engineering profession.
But software engineering isn’t merely about code…any more than civil engineering is merely about concrete, asphalt, soil, or water…any more than mechanical engineering is merely about materials, motion, or force…any more than electrical engineering is merely about diodes, capacitors, resistors, or transistors.
It seems like the right time in our journey together to recall a key moment from the 1997 essay, The Road Less Traveled: A Baccalaureate Degree in Software Engineering by two of the founders of RIT’s first-in-the-nation undergraduate program in Software Engineering, Michael J. Lutz and J. Fernando Naveda:
As industry demand for qualified software engineers continued to grow, it became increasingly apparent to us and to others that the goals of software engineering and computer science, while similar, are distinct. Computer science’s fundamental concern is with the development and analysis of algorithms and data structures, or with applied research into a small set of traditional areas: languages and compilers, graphics, operating systems, databases, networking, etc. In all these instances, the focus is on the fundamental principles, rather than on the systematic application of the principles to industrial and commercial problems. The split is similar to that between physics and traditional engineering: physicists, even applied physicists, are primarily interested in understanding phenomena. Engineers are interested in capitalizing on this knowledge to design new, useful artifacts for the benefit of clients.
Does AI help or hinder a software engineer’s effort “to design new, useful artifacts for the benefit of clients”?
Let’s reflect:
- Civil engineers have been using building information modeling (BIM) and parametric modeling since at least the 2000s.
- Mechanical engineers have been using topology optimization software since at least the 1990s.
- Electrical engineers have been using automated printed circuit board layout software since at least the 1990s.
The usual rationalization for continued human relevance in the face of new technologies adopted by engineering fields is that human judgment is required to review the output of these tools. Industry leaders offer that software advancements allow engineers to do what we do best: push frontiers in innovation and creativity without having to spend non-value-added energy trudging through rote tasks that computers can do more quickly and reliably.
But that’s the boring and easy defense.
The reality is that all professional fields serve human beings, and human beings have a few interesting behaviors that computers do not:
- We have anxiety
- We change our minds
- We are not rational
- We are not predictable
What field of engineering, again, emerged as a practice specifically designed to accommodate these human idiosyncrasies? The field of engineering that was born from the rib of the field of engineering that invented the transistor.
Let’s take a look at the “themes” of the Software Engineering program as articulated in The Road Less Traveled:
Professionalism. Graduates of the program must acquire the skills, habits, and abilities that characterize professional engineering practice and that define professional quality work. Included in this category are: written and oral communication, adherence to specific standards, responsibility for professional growth, and ethical professional behavior.
Team-based development. While team-based development is at the heart of modern software engineering practice, we realized it was impossible to teach teamwork simply by lecturing in class. Instead, students must be given ample opportunity to practice team skills in many different settings. Team issues are part of every class, and most require at least one project done by teams.
Software design. A primary engineering concern is design: Using one’s expertise to create a system that meets the needs of a customer. Several of our courses focus on design methods, design tradeoffs, common architectural patterns, and methods for design analysis and evaluation. We are careful to emphasize many design qualities, including testability, modifiability, reusability, and maintainability.
Software evolution and maintenance. Given the enormous cost of development, software systems are rarely developed from scratch. More common is the need to modify existing systems. To drive this lesson home, many of the projects, especially in upper-division classes, will require students to modify and enhance existing systems.
Complexity management. Modern software systems are complex, often as a direct result of the flexibility inherent in the software. We intend to expose the students to issues of complexity, and the various principles and techniques that have emerged in response to the need to control complexity.
Standards. Software engineers, like any other engineer, must conform to standards for both process and products. Our courses are designed to introduce the students to relevant standards, whether these are legally mandated, defined by industry groups, or simply de facto standards enforced by convention.
Process issues. We reinforce the concept that software development is most likely to succeed when undertaken in the context of a defined, controlled, and managed process. This notion is reinforced throughout the course sequence.
Well-designed things have a way of becoming more evident in their thoughtfulness over time, and these themes are no exception. It is difficult to imagine how their importance in the software engineering profession will be diminished by AI.
Professionalism shows no sign of being less relevant, most especially one’s oral communication skills and responsibility for professional growth. AI can surely assist with rote written communication tasks, but it cannot replace originality.
Team-based development also shows no sign of being less relevant; team dynamics are at the heart of all work, regardless of field.
Software design is where fear meets generative AI. But since the purpose of software design is to “meet the needs of a customer,” one cannot outsource those needs to AI any more than one can outsource eating, breathing, or sleeping. The need to help teams of people articulate their functional requirements might be more important today than it ever has been. Great functional requirements transcend technical implementations; as the paradigm goes, the requirements are more important than the code. The best functional requirements are invaluable as prompts for generative AI tools to deliver their best results. What percentage of your organization’s systems have living, breathing, complete functional specifications? What percentage of your organization’s user stories have clear, verifiable “so that” clauses, let alone complete conditions of acceptance? Even if you believe your own organization’s answers are “100%” (I will humor you), would you admit this is not likely to be true for others?
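For readers who want a concrete picture of what that looks like, here is a hypothetical user story of the kind described above, with a verifiable “so that” clause and conditions of acceptance (every detail invented purely for illustration):

```
As an accounts-payable clerk,
I want overdue invoices flagged on my dashboard
so that I can prioritize follow-up calls before month-end close.

Conditions of acceptance:
- Any invoice 30 or more days past due appears in the "Overdue" list.
- The list can be sorted by amount and by days overdue.
- A flagged invoice clears automatically once its payment posts.
```

Notice that nothing here mentions a database, a framework, or a line of code; it states a human need and a testable definition of “done” — which is exactly what makes it a useful prompt, for a person or for an LLM.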
Software evolution and maintenance is one of those areas where we should hope generative AI can help. The LLMs of 2025 are already quite good at helping software engineers rework existing code, and future LLMs are sure to be even better. But one dimension of software engineering remains unthreatened by AI: the implications of process trade-offs in enterprise systems. Seemingly simple changes — something as small as the format of a field, or a change in processing logic — ripple throughout enterprise systems. Larger changes produce tidal waves. Only those being served by the software can determine what the subjectively “least worst” (a term I must credit to Reggie Aceto, one of my many fine employees over the years) choice may be for the systems’ constituents. AI cannot solve human compromise, because decisions can never be “correct” — if a decision is implicitly correct, then there would be no decision to make.
Which brings us to complexity management. I’m glad that Lutz and Naveda use the phrase “often as a direct result of the flexibility inherent in the software.” This should remind us that software engineering is a form of leadership. While software relieves humans from one anxiety — the paralysis that comes from having to think of everything in advance — it creates a consequential anxiety that benefits from genuine human leadership. I dare an AI scientist to find a computational substitute for that; we should welcome tools that offer even a glimpse of assistance with the journey.
Standards, which are part of the sometimes-irrational and always-imperfect output of the human condition, will continue to provide challenges that, in fact, could benefit from AI assistance.
Process issues evolve in tandem with human change, and must accommodate human anxiety and imperfection.
We’d best think of generative AI the way we would any other tool: something to use when it makes sense. Do you avoid using the hammer in your toolbox? Or do you use it for every task? What about a search engine? If you had an employee at work who needed to learn a feature in a new piece of software or who needed to find the name of the CEO of a business partner and refused to use a search engine, what would you do? If they did this a few times, you’d be irritated. If they did it routinely, you might, as my boss Kip Palmer likes to say, share them with other employers.
So what does this mean in the face of current popular opinion like the recent New York Times Opinion piece by Dr. Carl Benedikt Frey?
Technology changes the face of every manner of hobby and profession, but almost every time we think we’ve solved one problem, we’ve opened the door to a whole new set of them. Consider the lessons of Walter Ong’s Orality and Literacy: there was a time when writing did not exist. Writing brought fear of losing the power of our memory, but it changed the way we express ourselves; created all manner of tools for expression, from pens to printing presses to the screens of today; created the need to store and distribute this written expression; and changed the way we learn forever. Technologies beget other technologies; if we didn’t have writing, we wouldn’t have generative AI.
New human problems are in endless supply. Tools don’t solve them on their own. We’d do well to remember the lesson of The Turing Trap: there’s an awful lot more for us to do, even if we hadn’t thought it possible. While it sometimes seems like humanity is doomed with every new advancement, humanity itself is the audience, and the need for us to focus on the manner in which we engage one another is at no risk of being diminished. The ten foundational values of The Progressive CIO remain the heart of all work to come.