The “Promise” of “Easier” Programming

So yesterday, Thomas Fuchs said on Mastodon:

The LLM thing and people believing it will replace people reminds me so much of the “visual programming” hype in the 80s/90s, when companies promised that everyone could write complex applications with a few clicks and drawing on the screen.

Turns out, no, you can’t.

I had to respond, and he encouraged me to turn my response into a blog post. Thanks!

In essence, he’s both incorrect and quite correct, in ways that map directly onto the current enthusiasm for LLMs among the less technically savvy.

Back in the late 1980s to mid-1990s, there were large numbers of complex business applications built in Prograph and Sirius Developer and other “4GLs.” These were generally client-server line-of-business applications that were front-ends to databases implementing business processes, prior to the migration of such applications to the web in the late 1990s. In addition, there was LabVIEW, a graphical programming system by National Instruments for instrument control and factory automation, which has largely dominated that industry since not long after its release in 1986.

This was all accompanied by breathless PR, regurgitated by an entirely coöpted technology press, about how graphical programming was going to save businesses money: once software could be developed by drawing lines between boxes, with no textual syntax to deal with, anyone who needed software could write it themselves instead of hiring “expensive” programmers to do it.

The problem with this is that it optimizes for the wrong problem: the complexity of textual programming. Yes, people have varying levels of difficulty using text to write programs, and some of that difficulty is caused by needless complexity in both programming language and development environment design. However, a complex application is still a complex application regardless of whether it’s written in Prograph or Swift.
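
To make that concrete, here’s a deliberately toy sketch in Python; everything about it, from the function name to the rules, is hypothetical. Whether a rule like this is drawn as boxes and wires or typed as text, someone still has to work out exactly what the rule is, and that’s where the real difficulty lives.

    # Toy sketch: the hard part is deciding what the rule actually is,
    # not the syntax it's expressed in. All names and rules are hypothetical.
    def shipping_cost(order_total: float, destination: str, is_member: bool) -> float:
        """Return the shipping charge for an order (illustrative rules only)."""
        if is_member and order_total >= 50:
            return 0.0                  # members get free shipping over $50
        base = 12.0 if destination == "international" else 5.0
        if order_total >= 100:
            base *= 0.5                 # half-price shipping on large orders
        return base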

For example, LabVIEW isn’t necessarily an advancement over the systems it replaced. An enormous amount of factory automation and instrumentation tooling was created in the 1980s around the IEEE 488 General Purpose Interface Bus—originally HP-IB—using Hewlett-Packard’s “Rocky Mountain BASIC” running on its 9000-series instrumentation controllers. (HP’s 68000-based HP 9000-200/300/400 systems running HP-UX were the fanciest versions of these controllers; a significant use of those larger systems was as development machines, with deployment on lower-cost fixed-purpose controllers.)

All of that was a lot more maintainable and discoverable than a modern rats’ nest of LabVIEW diagrams. LabVIEW didn’t win the market because it was easier to use or better; it won because it ran on ubiquitous PC hardware while still being able to fully interoperate with already-deployed GPIB systems. The maintainability was due in part to Rocky Mountain BASIC being a good structured BASIC with flexible I/O facilities, not a toy BASIC like those on the microcomputers of the time. So if you needed to add a feature or fix a bug, you had plenty of tools with which to pinpoint the bug and address it, deploy the updated code to your test and then production environments, and manage changes like that over time.
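
For a sense of what that textual style looks like, here’s a minimal sketch of GPIB instrument control in Python using the PyVISA library, standing in for Rocky Mountain BASIC’s OUTPUT/ENTER style; the instrument address and SCPI commands are illustrative assumptions, not taken from any real setup. The point is that a program like this can be read, diffed, and reviewed like any other text.

    # Minimal sketch of text-based GPIB instrument control (a modern stand-in
    # for Rocky Mountain BASIC's OUTPUT/ENTER style). The address and SCPI
    # commands below are illustrative assumptions.
    import pyvisa

    rm = pyvisa.ResourceManager()
    dmm = rm.open_resource("GPIB0::22::INSTR")  # hypothetical DMM at GPIB address 22

    dmm.write("*RST")                           # reset the instrument
    reading = dmm.query("MEAS:VOLT:DC?")        # request a DC voltage measurement
    print(f"DC voltage: {float(reading):.6f} V")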

This is one of the things that also doomed “environment-oriented” programming systems like Smalltalk; plain text has some very important evolutionary benefits when used for programming, particularly when it comes to long-term maintainability, interoperability, and portability. Maintaining any sort of environment-oriented system over time is much more difficult simply because comparing variants can become incredibly complex, and often isn’t possible except from within the system itself. (For example, people working in Smalltalk environments often used to work by passing around FileOuts, and Smalltalk revision control systems often just codified that.)

And of these sorts of systems, LabVIEW is the only one still in wide use; all of that 4GL code has been replaced, more than once over time, with more traditionally developed software, because it turns out there are good reasons why software that needs to live a long time tends to be written as text.

What does this have to do with using LLMs for programming? All of the same people—people who appear to resent having to give money to professional software developers to practice their trade—think that this time it’ll be different, that they’ll finally be able to just describe what they want a computer to do for their enterprise and have a program created to do it. They still don’t realize that the actual complexity is in the processes they’re describing, and in breaking those processes down in sufficient detail to build computer systems that implement them.

So yeah, Thomas is absolutely correct here: they’re going to fail especially spectacularly again this time, since LLMs are just fancy autocomplete and have zero actual intelligence. It’s like saying we won’t need writing and literature classes any more because spellcheck exists; it’s a category error.

1 Comment

  1. Orlando
    July 31, 2023

    Pragmatic programmers said it already back in 1999: “Keep knowledge in plain text”. Tip 20.
