Categories
AI Current Events: 2023, Willingness

ChatGPT Challenges Us to Focus on Better Things. Are We Up for It?

🎹 Music for this post: https://www.youtube.com/watch?v=tPjoWN0SCb0.

Is a written piece inherently valuable?

Does the world need more writing?

Does it need more writers?

Or would it benefit from more original thought?


While I am not exactly mesmerized by ChatGPT, I do enjoy it as much as any new toy I’ve had in my hands throughout my life. There is no doubt that it can — and, likely, will — have a significant and positive role in the development of our civilization. I am aware that this is at odds with much of what is being written of late, so if you choose to read on, I appreciate your willingness.

I am thankful for the public discourse that all manner of generative AI has spurred in the last five months, but as with all major shifts, it is amusing to watch people struggling to keep things in historical perspective. As with Clever Hans or many other magic tricks, it’s wise for onlookers to get a grip on the reality behind the illusion. Generative AI is merely the world’s most advanced parrot, underpinned by an ingenious application of statistics. If you haven’t read that last link (courtesy of Stephen Wolfram), you owe it to yourself to do so, because it is simply the most lucid explanation of ChatGPT that has ever been written for people unschooled in the art.

TL;DR? Generative AI uses a corpus of previously written material to generate new-ish content that is statistically derived from that corpus. In other words, the likes of ChatGPT are superb at repeating phrases that have already been uttered across all of written history, at lightning speed. And that is about it.
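To make that concrete, here is a minimal sketch of the idea in Python. It is a toy word-level bigram model of my own construction (the tiny corpus is hypothetical, and real systems use neural networks over vastly larger contexts rather than raw counts), but it shows how counting what follows what lets a machine produce plausible new-ish text:

```python
import random
from collections import Counter, defaultdict

# A toy corpus; imagine all of written history here instead.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Tally how often each word follows each word (a bigram model).
follows = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev][word] += 1

# Generate "new-ish" text by repeatedly sampling the next word in
# proportion to how often it followed the current word in the corpus.
word = "the"
output = [word]
for _ in range(12):
    nexts = follows[word]
    word = random.choices(list(nexts), weights=nexts.values())[0]
    output.append(word)

print(" ".join(output))  # e.g., "the dog sat on the mat . the cat chased ..."
```

The sampling principle is the same one at work in ChatGPT: pick a statistically plausible next token, append it, and repeat.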

People are worried, as they always seem to be when it appears that the need for certain skills might disappear. Once you’ve taken it all in, however, you might feel relieved about the potential for large language models and generative AI to refine the menial work that we do so that we can focus on better things.

In the world of software engineering education, where I spend some of my most interesting off-hours, some are concerned about the potential for generative AI to interfere with learning the art of programming. Nonetheless, the best educators already have experience with manual means to the same end — resources like Stack Overflow, SourceForge, GitHub, and other similar repositories — which amplify the adage that discourages us all from reinventing the wheel: “The best programmers are lazy programmers.” Because of this, these leading instructors are in the process of inverting their curricula, with an emphasis on expository exercises that have students explain what their generated and third-party code is doing.

Education asks us to learn, and learning involves a balance of creation and understanding. Is one more essential than the other? Does one have to be able to create in order to understand? Or is one better off developing understanding to foster creation?

You may recall grade school science projects that involve electricity…wiring up a battery with a light bulb to make a quiz circuit; generating electricity from a potato; electromagnets; crystal radios; and so forth. My father and two of my older brothers were in the electronics industry. When I came home one afternoon in the late 1970s with my sixth-grade project assignment, my family’s expectations took me by surprise. They felt I needed to present a project that plugged into a wall outlet, involving electronic components. They proceeded to conceive of a flashing neon tube project that involved a diode, a resistor, and a capacitor, similar to what you see in this video, but finished cleanly with professional soldering and clear heat-shrink tubing, installed on an attractive piece of 70s-era plywood paneling with labels on the back.

I was puzzled. Was my family encouraging me to cheat? They assured me that I wouldn’t be getting away with anything. They demanded that I learn the principles of the diode, the resistor, the capacitor, the physics behind the neon tube, and had me explain those back to them, countless times, in my own words, before I set foot in school with my assembled project.
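For the curious: the blinking in a circuit like that comes from a classic relaxation oscillator. The capacitor charges through the resistor until the neon tube’s striking voltage is reached; the tube then fires, drains the capacitor, goes dark, and the cycle repeats. As a back-of-the-envelope sketch (the component values below are hypothetical, not the ones from my project), the flash interval is:

```latex
t = RC \ln\left(\frac{V_s - V_e}{V_s - V_f}\right)
```

where Vs is the supply voltage, Vf is the tube’s firing voltage, and Ve is its extinguishing voltage. With, say, Vs = 120 V, Vf = 90 V, Ve = 60 V, R = 1 MΩ, and C = 1 µF, that works out to ln 2 ≈ 0.7 seconds per flash.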

I sat alongside them as parts were selected and as the project was assembled.

The day I walked into class with my paneling-mounted electronics, I watched a few presentations that employed D-cells and lantern batteries. When I was called, I nervously walked to the front of the room and plugged my little project into the outlet in the black-top lab desk. While I got a small thrill from being different from everyone else, I was still nervous, and I am sure I remember the teacher looking a little worried himself.

It went well. My fellow students were as astonished as I was about the bright, blinking light. We all learned something in the process. My classmates learned about things that weren’t in the curriculum, and I learned this: It’s one thing to make something; it’s a whole other thing to be able to explain how and why it works.

My teacher surprised me with an “A” grade, and I learned not only something about electronics…I learned a lesson in education that I still can’t forget.


At some point in the next 10 years, our workforce will see the demotion of scores of software engineers who eschew generative AI programming. If you don’t believe this, then ask yourself: would you, today, tolerate a software engineer or IT professional who refused to use a search engine to find solutions to a technical problem? Of course not; you’d fire them as soon as you could.

I’ve heard some software engineering instructors wonder how much worse generative AI will make things for liberal arts educators. But the answers are strikingly similar on that side of campus.

In this blog, where we discuss matters relating to the nexus of liberal arts and technology, it’s worth referencing a simple but commonly overlooked fact: writing itself is a technology. Predating the written word was the oral tradition, in which people wove easy-to-remember “epithets” into stories like Homer’s Odyssey. The invention of writing liberated people from epithets, allowing them to string together fanciful combinations of words that — to people’s horror! — could not be remembered without referring to the medium to which they were committed. If you are curious about the details of this consequential and ancient technological transformation, I could not recommend a work more highly than Walter Ong’s Orality and Literacy.

Since writing is a technology — and not at all natural — we would do well to remember that enhancements to any technology are normal, and not to be considered at odds with what is natural. Much of the writing we do today is what one might call “perfunctory.” Think of the vast number of forgettable emails and text messages that we hurl back and forth each day, whose purpose is merely to drive a larger conversation about a single concept. It’s perfectly fine to have help typing those thoughts out in a way that relieves our fingers and saves us time.

We have names for certain classes of communication. Linguists have a term for the most routine communication that we employ every day: phatic. The world of generative AI presents us with an opportunity to expand our palette. Consider the following:

  • Phatic communication (greetings and other similar pleasantries)
  • Perfunctory communication (emails; simple essays about basic concepts; text messages; common persuasive communication; and other forgettable acts of discourse)
  • High-value communication (first-person journalism; original documentary writing; poetry; creative writing; lyricism; cognitive dissonance; and other forms of inventive discourse that are designed to be memorable and durable)

Generative AI is likely to find its greatest application in helping us deliver perfunctory communication with breathtaking ease and speed, in the very same way that calculators help us all with a wide variety of perfunctory mathematical tasks. That frees educators to focus on teaching the skills that support high-value communication, where we ask the human mind to be entirely engaged.

Consider a work such as The Beatles’ “I Am the Walrus.”

Want to be the first person to put “Expert texpert” in front of “choking smokers”? Generative AI isn’t going to get you there. Inventive combinations of words like these are at complete odds with the statistical models behind generative AI. They are high-value in that they are landmark works that have inspired millions if not billions of people through their originality of construction. Imagine a world of liberal arts education that focuses on the ability to craft these sorts of works. The degree in “letters” might be transformed, for the better.

What does all of this portend for education in any discipline that is affected by generative AI? We would do best to ensure that we engage students to explain the reasoning behind their work in real time. This is not a new concept, but it’s an unfortunately rarefied one, reserved for pivotal moments like the defense of a thesis. Education would be transformed, but teachers would have to work much harder. Of course, things that are hard are things worth doing.

Consider what it might be like to re-focus on the talents that have been neglected since the days of the oral tradition: speaking that inspires and creates movement.

Imagine a day when we frown upon PowerPoint presentations, and look forward to our fellow humans speaking extemporaneously and creatively, from their hearts, providing insight and inspiration at the times we need it most.

Imagine a day when our programmers are freed from writing login screens, and where they can focus on creating user experiences that not only save us time, but touch our hearts and souls with software that provides insight and inspiration.

Many are concerned about how “correct” generative AI is; they are alarmed by the potential effect of “hallucinations.” But these notions are not new; every book on every shelf of every library was written and edited by fallible human beings, many of whom acted not only out of ignorance, but out of self-interest or with ill intent. Consumers of information have always had a duty to think critically before acting on that information. They still do.

Technology changes how we live. Writing’s initial gift was a reduction in our need to remember details. Writing’s second gift was its ability to be mass-produced, bringing us more-or-less perfect one-to-many communication. Writing’s third gift was its ability to show us how repetitive and perfunctory so much of our communication is. Generative AI gives us a chance to make perfunctory communication — and programming — even more perfunctory, liberating us for better things…if only we allow ourselves the opportunity.

Once more:

Is a written piece inherently valuable?

Does the world need more writing?

Does it need more writers?

Or would it benefit from more original thought?

Since writing is a technology — and not at all natural — we would do well to remember that enhancements to any technology are normal, and not to be considered at odds with what is natural.

Discuss this specific post on Twitter or LinkedIn.

Categories
AI Current Events: 2023

The TL;DR of This Year’s Best ChatGPT Explainer

🎹 Music for this post: https://www.youtube.com/watch?v=lpqyxOL96QI.

Technology leaders have had a banner year explaining generative AI to their companies’ leaders. Stephen Wolfram gave the world a wonderful (and, admittedly, not romantic) Valentine’s Day gift this year with his lucid essay, “What Is ChatGPT Doing … and Why Does It Work?”, whose only downside was its length. Visit the link and check out the size of your scrollbar to see what I mean. It’s turned into a bestselling book, to boot.

This is simply the best explainer of ChatGPT written to date. I’ve loaded this essay into my browser and displayed parts of it on large screens more times than I can count in the past 10 months. What it helped me understand is that ChatGPT is nothing more than an ingenious application of statistics, and if you can help others absorb this, it opens minds to what it’s actually doing…its limitations…and some good reasons why we shouldn’t be freaking out about it.

I’ve found that the following eight simple portions of Dr. Wolfram’s essay distill the essence of what he’s teaching us:

1) Start by looking at a small sample of text and count the number of times each letter occurs.

2) Look what happens if we do the same with a larger sample of text.

3) Start using these probabilities to generate strings of letters, and throw in some spaces.

4) Compare the probabilities for letters to occur on their own…

5) …with the probabilities of them occurring in combination.

6) Then see what happens if we understand the probabilities of them occurring in longer sequences (2/3/4/5 letters at a time)…Wow! Just with this sort of application of statistics, we start getting words!

7) What happens if we do the same with combinations of words, rather than just letters? ChatGPT!

8) Best of all…what this shows us is how utterly formulaic and predictable most of our writing is!

That last part is truly important, and I don’t think enough of this year’s discourse has amplified that point. This is the principal reason that I asserted back in April that ChatGPT Challenges Us to Focus on Better Things. Are We Up for It?
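If you’d like to watch steps 1 through 7 happen for yourself, here is a minimal sketch in Python. It is my own toy reconstruction (the sample text is a placeholder, and Dr. Wolfram’s essay works from real English-corpus statistics), not code from the essay:

```python
import random
from collections import Counter, defaultdict

# A small sample of text; a larger sample gives better statistics.
sample = (
    "it was the best of times it was the worst of times "
    "it was the age of wisdom it was the age of foolishness "
) * 10

# Steps 1-2: count how often each character (including spaces) occurs.
counts = Counter(sample)

# Step 3: generate a string by sampling characters according to
# those single-character probabilities.
chars, weights = zip(*counts.items())
print("1-grams:", "".join(random.choices(chars, weights=weights, k=60)))

# Steps 4-6: do the same with longer sequences. For each
# (n-1)-character context, tally which character follows it,
# then generate by sampling from those conditional counts.
def generate(n, length=60):
    follows = defaultdict(Counter)
    for i in range(len(sample) - n + 1):
        context, nxt = sample[i:i + n - 1], sample[i + n - 1]
        follows[context][nxt] += 1
    out = sample[:n - 1]  # seed with the first context in the sample
    while len(out) < length:
        nexts = follows[out[-(n - 1):]]
        out += random.choices(list(nexts), weights=nexts.values())[0]
    return out

for n in (2, 3, 4, 5):
    print(f"{n}-grams:", generate(n))

# Step 7 is the same trick applied to words rather than letters;
# at vastly greater scale and sophistication, that statistical
# core is what ChatGPT is doing.
```

Run it and you will see noise at the single-letter stage gradually turn into recognizable English fragments by the 5-gram stage, which is exactly the formulaic-writing point above.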

I hope that this TL;DR version of Stephen’s generous essay can help you explain how ChatGPT works to others. Do yourself a favor, though, and give it a full read if you can. It’s well-written and worth your while.

Discuss this specific post on Twitter or LinkedIn.

Categories
AI Current Events: 2024, Read Other People’s Stuff

Read Other People’s Stuff: 7

🎹 Music for this post: https://www.youtube.com/watch?v=XoXsyMP8v3s.

Ian Betteridge does a fabulous job illustrating what I feel is the single biggest risk to Generative AI: self-reinforcing junk.

Part of me enjoys watching the Internet as we know it burn itself down, because, even prior to ChatGPT, it was full of recycled and derivative content. The software-driven world often has a way of moving way faster than it has a right to, and checks and balances — in whatever form they take — are a blessing.

What should the next generation of the World Wide Web look like, though? If it were to look a little more like the original Yahoo!, would that be a bad thing?

We’ve all-too-proudly gone from Web 1.0 to Web 2.0 to Web 3.0 (and even Web3, sigh), but what would be wrong with Web 1.9 or even Web 2.1?

Discuss this specific post on Twitter or LinkedIn.

Categories
AI Current Events: 2024, Technology of the Year

Technology of the Year, 2024 Edition

🎹 Music for this post: https://www.youtube.com/watch?v=P22TEf4pZZs.

I haven’t written much this year. There are two reasons, and neither of them is tragic, thankfully.

The first: After writing Imagination with Relation in April, I felt that I had set a new benchmark for myself in regard to the writing I promised you all in January, when I pledged to revisit the foundational values. Mid-year, I had an epiphany: I realized that I live by ten — not eight — foundational values. The ninth is love, which I have written about before. After months of intense reflection, I will reveal the tenth in my first post of 2025, which will arrive in January. I will resume regular updates after that.

The second: This was not a year of revolutionary new technologies, but rather a year of technology refinements. When I look back at the bar I’ve set for “Technology of the Year” since I started this, there was simply nothing in 2024 that approached what I offered in past years. Sure, there was some progress in Generative AI, but there were regressions as well. Windows got a boost on ARM, but Qualcomm’s Snapdragon X offerings still can’t compete with Apple’s M4. The fact that Microsoft woke up to ARM in 2024 might have been the most important step of the year, but the technology behind it isn’t worthy of calling out.

For a period of my life, I was involved with an awards organization that had criteria for first, second, and third places. In some cases, we offered only a second- or even a third-place award. In other cases, we offered no award at all. This organization had standards! I always respected that. I feel the same way about The Progressive CIO’s “Technology of the Year” — I don’t want to award something simply because it was better than other humdrummery.

So it will be this year.

I look forward to writing more deeply for you all in 2025. Happy New Year to you and yours!

Discuss this specific post on Twitter or LinkedIn.

Categories
AI Current Events: 2025, Foundational Values

If You Think Your Purpose Is Doomed by Technology, You Might Be Missing the Point of Your Purpose

🎹 Music for this post: https://www.youtube.com/watch?v=ns_wvl6JB6E.

In April 2023, I wrote ChatGPT Challenges Us to Focus on Better Things. Are We Up for It?

There’s not a word I would change today. I’m still not mesmerized by generative AI. I still believe it helps with many perfunctory tasks — increasingly so. The world has quickly come to see how much of our existing work is perfunctory. We are at least a little worried about that, yet we still should not be, because there is so much truly original work that lies ahead, and there is still much human work to do.

Earlier this year, I was fortunate to meet Dr. Pramod Khargonekar, Distinguished Professor of Electrical Engineering and Computer Science at UC Irvine. He presented at RIT on the topic of “Advancing AI Innovation and Education through University-Industry Collaboration” and cited a paper by Erik Brynjolfsson, “The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence.” Dr. Khargonekar shared this profound diagram from Brynjolfsson’s paper:

That light green area prompted a lot of well-educated minds to look on in wonderment and nod in agreement. As with all technologies, AI stands to help us more than it stands to replace us. I couldn’t have said it better in ChatGPT Challenges Us to Focus on Better Things. Are We Up for It? if I tried.

It seems to me that the profession I serve is a poster child for the implications of generative AI. What you might not suspect, however, is that it epitomizes our misunderstandings about the current state of many professions.

As I write this in August of 2025, we are nearly three years into the popular era of generative AI. A large percentage of students and professionals are using these tools daily to write, or help write, software. Given a reasonably good prompt, the 2025 wave of LLMs produces reasonably good software for many purposes. This has caused a wave of concern about the future of the software engineering profession.

But software engineering isn’t merely about code…any more than civil engineering is merely about concrete, asphalt, soil, or water…any more than mechanical engineering is merely about materials, motion, or force…any more than electrical engineering is merely about diodes, capacitors, resistors, or transistors.

It seems like the right time in our journey together to recall a key moment from the 1997 essay, The Road Less Traveled: A Baccalaureate Degree in Software Engineering by two of the founders of RIT’s first-in-the-nation undergraduate program in Software Engineering, Michael J. Lutz and J. Fernando Naveda:

As industry demand for qualified software engineers continued to grow, it became increasingly apparent to us and to others that the goals of software engineering and computer science, while similar, are distinct. Computer science’s fundamental concern is with the development and analysis of algorithms and data structures, or with applied research into a small set of traditional areas: languages and compilers, graphics, operating systems, databases, networking, etc. In all these instances, the focus is on the fundamental principles, rather than on the systematic application of the principles to industrial and commercial problems. The split is similar to that between physics and traditional engineering: physicists, even applied physicists, are primarily interested in understanding phenomena. Engineers are interested in capitalizing on this knowledge to design new, useful artifacts for the benefit of clients.

Does AI help or hinder a software engineer’s effort “to design new, useful artifacts for the benefit of clients”?

Let’s reflect:

  • Civil engineers have been using building information modeling (BIM) and parametric modeling since at least the 2000s.
  • Mechanical engineers have been using topology optimization software since at least the 1990s.
  • Electrical engineers have been using automated printed circuit board layout software since at least the 1990s.

The usual rationalization for continued human relevance in the face of new technologies adopted by engineering fields is that human judgment is required to review the output of these tools. Industry leaders offer that software advancements allow engineers to do what they do best: push the frontiers of innovation and creativity without having to spend non-value-added energy trudging through rote tasks that computers can do more quickly and reliably.

But that’s the boring and easy defense.

The reality is that all professional fields serve human beings, and human beings have a few interesting behaviors that computers do not:

  • We have anxiety
  • We change our minds
  • We are not rational
  • We are not predictable

What field of engineering, again, emerged as a practice specifically designed to accommodate these human idiosyncrasies? The field of engineering that was born from the rib of the field of engineering that invented the transistor.

Let’s take a look at the “themes” of the Software Engineering program as articulated in The Road Less Traveled:

Professionalism. Graduates of the program must acquire the skills, habits, and abilities that characterize professional engineering practice and that define professional quality work. Included in this category are: written and oral communication, adherence to specific standards, responsibility for professional growth, and ethical professional behavior.

Team-based development. While team-based development is at the heart of modern software engineering practice, we realized it was impossible to teach team work simply by lecturing in class. Instead, students must be given ample opportunity to practice team skills in many different settings. Team issues are part of every class, and most require at least one project done by teams.

Software design. A primary engineering concern is design: Using one’s expertise to create a system that meets the needs of a customer. Several of our courses focus on design methods, design tradeoffs, common architectural patterns, and methods for design analysis and evaluation. We are careful to emphasize many design qualities, including testability, modifiability, reusability, and maintainability.

Software evolution and maintenance. Given the enormous cost of development, software systems are rarely developed from scratch. More common is the need to modify existing systems. To drive this lesson home, many of the projects, especially in upper-division classes, will require students to modify and enhance existing systems.

Complexity management. Modern software systems are complex, often as a direct result of the flexibility inherent in the software. We intend to expose the students to issues of complexity, and the various principles and techniques that have emerged in response to the need to control complexity.

Standards. Software engineers, like any other engineer, must conform to standards for both process and products. Our courses are designed to introduce the students to relevant standards, whether these are legally mandated, defined by industry groups, or simply de facto standards enforced by convention.

Process issues. We reinforce the concept that software development is most likely to succeed when undertaken in the context of a defined, controlled, and managed process. This notion is reinforced throughout the course sequence.

Well-designed things have a way of becoming more evident in their thoughtfulness over time, and these themes are no exception. It is difficult to imagine how their importance in the software engineering profession will be diminished by AI.

Professionalism shows no sign of being less relevant, most especially one’s oral communication skills and responsibility for professional growth. AI can surely assist with rote written communication tasks, but it cannot replace original thought.

Team-based development also shows no sign of being less relevant; team dynamics are at the heart of all work, regardless of field.

Software design is where fear meets generative AI. But in the sense that the purpose of software design is to “meet the needs of a customer,” it must be stated that one cannot outsource those needs to AI any more than one can outsource eating, breathing, or sleeping. The need to help teams of people articulate their functional requirements might be more important today than it has ever been. Great functional requirements transcend technical implementations; as the adage goes, the requirements are more important than the code. The best functional requirements are invaluable as prompts for generative AI tools to deliver their best results. What percentage of your organization’s systems have living, breathing, complete functional specifications? What percentage of your organization’s user stories have clear, verifiable “so that” clauses, let alone complete conditions of acceptance? Even if you believe your own organization’s answers are “100%” (I will humor you), would you admit this is not likely to be true for others?

Software evolution and maintenance is one of those areas where we should hope generative AI can help. The LLMs of 2025 are already quite good at helping software engineers rework existing code, and future LLMs are sure to be even better. But one dimension of software engineering remains unthreatened by AI: the implications of process trade-offs in enterprise systems. Seemingly simple changes — something as small as the format of a field, or a change in processing logic — ripple throughout enterprise systems. Larger changes produce tidal waves. Only those being served by the software can determine what the subjectively “least worst” choice (a term I must credit to Reggie Aceto, one of my many fine employees over the years) may be for the systems’ constituents. AI cannot solve human compromise, because decisions can never be “correct” — if a decision were implicitly correct, there would be no decision to make.

Which brings us to complexity management. I’m glad that Lutz and Naveda use the phrase “often as a direct result of the flexibility inherent in the software.” This should remind us that software engineering is a form of leadership. While software relieves humans from one anxiety — the paralysis that comes from having to think of everything in advance — it creates a consequential anxiety that benefits from genuine human leadership. I dare an AI scientist to find a computational substitute for that; we should welcome tools that offer even a glimpse of assistance with the journey.

Standards, which are part of the sometimes-irrational and always-imperfect output of the human condition, will continue to provide challenges that, in fact, could benefit from AI assistance.

Process issues evolve in tandem with human change, and must accommodate human anxiety and imperfection.

We’d best think of generative AI the way we would any other tool: something to use when it makes sense. Do you avoid using the hammer in your toolbox? Or do you use it for every task? What about a search engine? If you had an employee at work who needed to learn a feature in a new piece of software or who needed to find the name of the CEO of a business partner and refused to use a search engine, what would you do? If they did this a few times, you’d be irritated. If they did it routinely, you might, as my boss Kip Palmer likes to say, share them with other employers.

So what does this mean in the face of current popular opinion like the recent New York Times Opinion piece by Dr. Carl Benedikt Frey?

Technology changes the face of every manner of hobby and profession, but almost every time we think we’ve solved one problem, we’ve opened the door to a whole new set of them. Consider the lessons of Walter Ong’s Orality and Literacy: there was a time when writing did not exist. Writing brought fear of losing the power of our memory, but it changed the way we express ourselves; it created all manner of tools for expression, from pens to printing presses to the screens of today; it created the need to store and distribute this written expression; and it changed the way we learn forever. Technologies beget other technologies; if we didn’t have writing, we wouldn’t have generative AI.

New human problems are in endless supply. Tools don’t solve them on their own. We’d do well to remember the lesson of The Turing Trap: there’s an awful lot more for us to do, even if we hadn’t thought it possible. While it sometimes seems like humanity is doomed with every new advancement, humanity itself is the audience, and the need for us to focus on the manner in which we engage one another is at no risk of being diminished. The ten foundational values of The Progressive CIO remain the heart of all work to come.
