In a vintage computing group, someone posted a picture of a terminal in use at a modern bookstore that's still running the same infrastructure it has used for decades, and someone replied saying that while from a retrocomputing perspective it was cool, as a business they need to "modernize!" This was my reply…
It's my understanding that a major US tire and oil change chain used HP 3000—Hewlett-Packard's minicomputer and mainframe platform—for decades, right up until HP cancelled it out from under them, and they only switched away because of the promised end of support. That is to say, they'd still be using it now if HPE supported it today.
My understanding is that their systems were built using native technologies on MPE, the HP mini/mainframe OS, like the IMAGE database, COBOL for business logic, and MPE's native forms package. They went through a number of transitions along the way. In hardware, they moved from HP's 16-bit mainframe architecture to 32-bit and then 64-bit PA-RISC. In networking, they went from terminal concentrators in stores connected to a district mini over packet data, to a small mini at each store with store-and-forward via a modem to the regional mini (and on up), and finally to live connections over VPN via local ISPs. And in customer access, they went from no direct access at all—except by calling someone at a specific store—to access via the corporate web site.
So tell me, why should they have switched away if their hand hadn't been forced by HP? Keep in mind that they maintained and enhanced the same applications for decades to accommodate changes in technology, regulations, and expectations, and by all accounts everything was straightforward to use, fast, and worked well. What would be in it for the company and the people working in the shops to rewrite everything regularly for the platform du jour? I'll grant that their development staff wasn't padding their résumés with the latest webshit, but why would that actually matter?