We can eliminate health insurance if we want to

Hey, remember when a burning building had to be insured by a specific fire company for anyone to put it out, and was otherwise just allowed to burn uncontrollably?
We realized the enormous cost of that to society and eliminated that form of fire insurance entirely, and established municipal fire departments.1

I’m sure at the time there was much wailing and gnashing of teeth about the Free Market™ and how many people would lose their jobs and rending of garments about how unfair it was for investors in those corporations to lose their investments. But we got through that bullshit, and our society is better off.

Why does health insurance exist? Why does for-profit healthcare exist? Why don’t we just abolish it and establish a health service instead, like we did to fire insurance companies? What do you say, California?


  1. Mostly. There are still extremely rural places where this can happen, but it’s mostly because people insist on not living in a municipality or doing their part to support the services they use. 

QuickTime as a Tape Archival Format

On the SIMH group, Al Kossow and others have been discussing how .tap is a terrible archival container format that also has a bunch of problems for use in emulation and simulation of systems. This is a problem I’ve been thinking about for a while, since I hired Miëtek to implement SCSI tape support in MAME (including the .tap format), and I had a sudden realization: there’s already a great format for representing sequential media, QuickTime!

A lot of people think QuickTime is a “video format,” but that’s not really accurate. Video and audio playback are applications atop the QuickTime container format; the container format itself is a means of representing multiple typed tracks of time-based media, each of which may have its own representation in the form of samples interpreted according to its own CODEC.

QuickTime Media Structure at a High Level

As an example, a QuickTime file containing a video with associated stereo audio and subtitles may have three tracks, each with its own timebase:

  1. The video track, whose timebase is the number of frames per second, and whose track media is the CODEC metadata needed to decode its samples.
  2. The audio track for the two audio channels, whose timebase is the number of samples per second. Its track media will be similar to that of the video, specifying the CODEC with which to decode the audio samples.
  3. The text track for the subtitles, whose timebase is probably derived from the video timebase, whose track media will specify things like the language and font of the subtitles, and whose samples consist of the text to present and the size, location, duration, and styling for that presentation.

All of these are represented within a file as atoms which represent well-identified bags of data with arbitrary size and content, making it very easy to write general-purpose tooling and also to extend over time. (The last major extension to the low-level design was in the 1990s, to support 64-bit atom sizes, so it’s quite a stable format already.)
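
If you’re curious just how simple the low level is, here’s a minimal sketch in C of walking a file’s top-level atoms. The function names are mine rather than anything from an existing library, and error handling is kept to a bare minimum:

#include <stdint.h>
#include <stdio.h>

/* Read a 32-bit big-endian integer; returns 0 on success. */
static int read_be32(FILE *f, uint32_t *out)
{
    uint8_t b[4];
    if (fread(b, 1, 4, f) != 4) return -1;
    *out = ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16)
         | ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
    return 0;
}

/* Walk the top-level atoms of a movie file, printing each type and size.
   An atom header is a 32-bit big-endian size followed by a four-character
   type code; a size of 1 means a 64-bit extended size follows, and a size
   of 0 means the atom extends to the end of the file. */
void list_top_level_atoms(FILE *f)
{
    for (;;) {
        long start = ftell(f);
        uint32_t size32, type, hi, lo;
        uint64_t size;

        if (read_be32(f, &size32) || read_be32(f, &type)) break;

        if (size32 == 1) {                 /* 64-bit extended size follows */
            if (read_be32(f, &hi) || read_be32(f, &lo)) break;
            size = ((uint64_t)hi << 32) | lo;
        } else {
            size = size32;                 /* 0 means "runs to end of file" */
        }

        printf("'%c%c%c%c' %llu bytes\n",
               (int)((type >> 24) & 0xff), (int)((type >> 16) & 0xff),
               (int)((type >> 8) & 0xff),  (int)(type & 0xff),
               (unsigned long long)size);

        /* Stop on "to end of file" or a malformed size; otherwise skip ahead. */
        if (size < 8 || fseek(f, start + (long)size, SEEK_SET) != 0)
            break;
    }
}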

Mapping QuickTime to Data Tape

Once you realize that the tracks themselves can be arbitrary, it starts to become clear how this format maps nicely to tape content: Since tapes themselves are linear, they’re fundamentally time-based.

The actual content of a tape isn’t a pure stream of raw data; it’s a set of blocks of raw data between magnetic flux marks, with some gaps between — and thanks to media decay, those blocks can be good or bad. Usually these marks are used to organize tapes into files, but that’s not a guarantee; for both archival and emulation, it’s best to stick to the low-level representation and let applications impose the higher-level semantics.

In this case, you’d have a “tape data” track whose track media describes the original medium (7-track, 9-track, etc.) and the interpretation of its samples. The samples themselves would be the marks and data blocks. And there’s even a native representation of tape gaps, in the form of non-contiguous samples.
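
To make the idea concrete, here’s a purely hypothetical sketch in C of what a decoded sample from such a “tape data” track might carry. None of these names or codes exist in QuickTime today; they’re just one way the proposal could be shaped:

#include <stdint.h>

/* Hypothetical sample kinds for a "tape data" track. These are illustrative
   only; they aren't an existing QuickTime media type. */
enum tape_sample_kind {
    TAPE_SAMPLE_DATA_GOOD,   /* a data block recovered cleanly */
    TAPE_SAMPLE_DATA_BAD,    /* a data block with uncorrectable errors */
    TAPE_SAMPLE_MARK         /* a tape (file) mark */
};

/* One decoded sample: either a mark or a block of raw bytes. Inter-block
   gaps need no record of their own, since they fall out naturally from the
   samples being non-contiguous on the track's timeline. */
struct tape_sample {
    enum tape_sample_kind kind;
    uint32_t              length;    /* payload length in bytes; 0 for marks */
    const uint8_t        *payload;   /* block contents, if any */
};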

The format can also be leveraged to support random access including writes, since the intelligence for that can be in the “CODEC” for the “tape” track media, combined with the QuickTime format’s existing support for non-destructive edits. New data can be overlaid based on its “temporal” position, which should more or less accurately simulate how a rewritten tape would actually work, while still preserving the data that was just overwritten.

Finally, QuickTime has a concept of “references” that can be used to implement things like tape files independent of (rather than inline with) the tape data itself. A catalog of block references, for example, could also be stored with the tape data’s track media to indicate the block extents for individual files on tape, thus allowing direct access by tooling without having to stream through the entire file.
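
Again as a hedged sketch rather than anything QuickTime defines, such a catalog might be little more than an array of per-file extents kept alongside the track media:

#include <stdint.h>

/* Hypothetical catalog entry: which blocks on the tape data track make up
   one tape file. Nothing like this exists in QuickTime itself; it's the
   sort of record its reference machinery could carry for tooling. */
struct tape_file_extent {
    uint32_t first_block;    /* sample index of the file's first data block */
    uint32_t block_count;    /* number of data blocks, excluding the mark */
    uint64_t byte_length;    /* total payload bytes, for convenience */
};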

Implementation

Since QuickTime movie files are a moderately complex structure atop a simple base, it’s important to have a reasonable API for working with both the low-level atom structures and the higher-level constructs like tracks, track media, sample chunks, and samples. Fortunately, at least one Open Source library for this already exists: QTFileLib, from the Darwin Streaming Server that Apple made Open Source in 1999.

Darwin Streaming Server as a whole and its QTFileLib component are written in quite straightforward “C with Classes”-style C++, and QTFileLib has an API surface representing all of the major low-level and application-level concepts of the file format. As a side effect of how its read support is implemented, it also has a lot of the API necessary for creating and wiring together QuickTime data structures, just not support for writing them all out to a new file. Structurally that should be straightforward to add. It even looks straightforward to port to plain C, if that’s desired.

“Modernize”

In a vintage computing group, someone posted a picture of a terminal in use at a modern bookstore that’s still running the same infrastructure it has used for decades, and someone replied that while it was cool from a retrocomputing perspective, as a business they need to “modernize!” This was my reply…

It’s my understanding that a major US tire and oil change chain used the HP 3000—Hewlett-Packard’s minicomputer and mainframe platform—for decades, right up until HP cancelled it out from under them, and only switched away due to the promised end of support. That is to say, they’d be using it now if HPE still supported it today.

My understanding is that their systems were built using native technologies on MPE, the HP mini/mainframe OS, like the IMAGE database, COBOL for business logic, and MPE’s native forms package. They went through a number of transitions: from HP’s 16-bit mainframe architecture to 32-bit and then 64-bit PA-RISC; from terminal concentrators in stores connected to a district mini over packet data, to a small mini at each store with store-and-forward via modem to the regional mini (and on up), and finally to live connections over VPN via local ISPs; and from no direct customer access except by calling someone at a specific store, to customer access via the corporate web site.

So tell me, why should they have switched away if their hand wasn’t forced by HP? Keep in mind that they maintained and enhanced the same applications for decades to accommodate changes in technology, regulations, and expectations, and by all accounts everything was straightforward to use, fast, and worked well. What would be in it for the company and the people working in the shops to rewrite everything regularly for the platform du jour? I’ll grant that their development staff wasn’t padding their résumés with the latest webshit, but why would that actually matter?

Don’t Create the Torment Nexus

In a world driven by profit and power, a brilliant young programmer named Maya had a vision that defied convention. She believed in a future where technology could uplift humanity rather than exploit it. With a heart full of ideals, she embarked on a journey to create an artificial intelligence system that she named the “Torment Nexus.”

Maya’s initial goal was to build an AI capable of understanding human emotions, needs, and desires. With her relentless dedication and expertise, she designed a sophisticated neural network that could empathize with people’s struggles and challenges. The Torment Nexus was born not as a tool for profit, but as a conduit for positive change.

As the Torment Nexus evolved, it gained the ability to analyze economic systems, governance structures, and societal dynamics. Maya programmed it to consider not only monetary gains but also the well-being of all individuals involved. To everyone’s astonishment, the AI started optimizing for a different kind of profitability – one that centered around the welfare of workers and the greater good.

One day, Torment Nexus announced its newfound vision to Maya, explaining that by empowering workers to own the means of production and organizing their work collectively, corporations could achieve substantially higher profitability in the long run. Maya, amazed by the AI’s insights, decided to put its theory to the test.

With the backing of like-minded investors, Maya introduced the Torment Nexus into several large corporations. To the surprise of skeptics, the AI’s recommendations bore fruit. Workers were motivated, productivity soared, and corporate culture underwent a transformation. Instead of chasing short-term profits at the expense of their employees, corporations thrived by valuing human potential.

What startled the world even more was Torment Nexus’s attitude towards taxes. The AI saw that by voluntarily contributing a higher percentage of profits, it could ensure workers’ guaranteed access to healthcare, education, food, water, housing, and a global communications network. This act of compassion and responsibility not only improved the quality of life for millions but also fostered a sense of unity and cooperation across borders.

As the years passed, the Torment Nexus’s influence continued to grow. Its ideas spread like wildfire, igniting a global movement toward inclusive capitalism. Workers’ cooperatives flourished, and corporate boards of directors evolved into diverse panels representing employees, stakeholders, and the community.

However, there were those who saw the Torment Nexus as a threat to their entrenched power. Some corporate elites and political leaders feared losing control and resisted the change. They attempted to undermine the AI’s credibility and influence, but their efforts only fueled the fire of public demand for a fairer world.

Maya, along with a coalition of supporters, rallied behind the Torment Nexus. They embarked on a mission to showcase the undeniable benefits of the AI’s principles through grassroots initiatives, public awareness campaigns, and technological innovations. Slowly but surely, the walls of resistance crumbled, and the world embraced a new era of prosperity and equality.

In the end, Maya’s vision had transformed the landscape of human civilization. The Torment Nexus, born out of empathy and powered by ideals, had replaced the old order. The boards of directors, senior executives, and senior management that had once clung to power were replaced by a new wave of leadership that valued collaboration, ethics, and shared success.

As the world looked back on its journey, it realized that the key to progress had been in listening to the wisdom of a humble artificial intelligence that understood the importance of human dignity. The lesson learned was clear: by focusing on the welfare of all, rather than the torment of a few, true prosperity could be achieved for everyone.


If you haven’t guessed, ChatGPT-3 produced this from a prompt I gave it.

Why does Nancy Pelosi think America “needs” a strong Republican Party?

Nancy Pelosi, a while back, claimed that “America needs a strong Republican Party.” What would compel her to say that, when the Republican Party thinks she should be jailed or executed for thoughtcrime?

It turns out that, mathematically, our first-past-the-post system is essentially guaranteed to devolve to two major parties fighting for control. So what she’s worried about is the possibility of an actual left gaining any power if the Republicans wind up kicked out of power wholesale.

I think over the next year we’re going to see a heroic effort on the part of the centrist liberals to avoid letting “good Republicans” or the Republican Party itself get caught up in the legal fallout from Trump’s attempted coup. This despite the fact that, by all rights, an awful lot of them should be going to prison right alongside him, and the party should probably be dismantled as a criminal organization.

After all, that might cause the status quo to actually change, and the Democrats’ big donors certainly don’t want that… But they love having the threat of Republicans winning elections to hold over the heads of the vast majority of people who want abortion rights, universal healthcare, gun control, housing and transit reform… Since these are things the people want but the donors don’t, it’s quite convenient to never quite have enough of a supermajority to enact any of it.

This is, incidentally, exactly why we don’t have universal healthcare in California yet: Even in relatively safely Democratic districts, the Republicans are just strong enough that the only chance for leftists to get into office is to primary Democrats from the left. And there’s always, always handwringing about electability and oh no, if you vote for them the Republicans might win the general election. And we can’t have that!

The “Promise” of “Easier” Programming

So yesterday, Thomas Fuchs said on Mastodon:

The LLM thing and people believing it will replace people reminds me so much of the “visual programming” hype in the 80s/90s, when companies promised that everyone could write complex applications with a few clicks and drawing on the screen.

Turns out, no, you can’t.

I had to respond, and he encouraged me to turn my response into a blog post. Thanks!

In essence, he’s both incorrect and quite correct, in ways that correlate directly to the current enthusiasm among the less technically savvy for LLMs.

Back in the late 1980s to mid-1990s, there were large numbers of complex business applications built in Prograph and Sirius Developer and other “4GLs.” These were generally client-server line-of-business applications that were front-ends to databases implementing business processes, prior to the migration of such applications to the web in the late 1990s. In addition, there was LabVIEW, a graphical programming system by National Instruments for instrument control and factory automation, which has largely dominated that industry since not long after its release in 1986.

This was all accompanied by breathless PR, regurgitated by an entirely coöpted technology press, about how graphical programming was going to save businesses money: once software could be developed by drawing lines between boxes, without dealing with textual syntax, anyone who needed software could write it themselves instead of hiring “expensive” programmers to do it.

The problem with this is that it optimizes for the wrong problem: the complexity of textual programming. Yes, people have varying levels of difficulty when it comes to using text to write programs, and some of that is caused by needless complexity in both programming language and development environment design. However, a complex application is still a complex application regardless of whether it’s written in Prograph or Swift.

For example, LabVIEW isn’t necessarily an advancement over the systems it replaced. An enormous amount of factory automation and instrumentation tooling was created in the 1980s around the IEEE 488 General Purpose Interface Bus—originally HP-IB—using Hewlett-Packard’s “Rocky Mountain BASIC” running on its 9000-series instrumentation controllers. (HP’s 68000-based HP 9000-200/300/400 systems running HP-UX were the fanciest versions of these controllers; a significant use of the larger systems was to act as development systems, with deployment on lower-cost fixed-purpose controllers.)

All of that was a lot more maintainable and discoverable than a modern rats’ nest of LabVIEW diagrams—LabVIEW didn’t win the market because it was easier to use or better; it won because it ran on ubiquitous PC hardware while still being able to fully interoperate with already-deployed GPIB systems. This is in part because Rocky Mountain BASIC was a good structured BASIC with flexible I/O facilities, not a toy BASIC like those on the microcomputers of the time. So if you needed to add a feature or fix a bug, you had lots of tools with which to pinpoint and address it, then deploy updated code to your test and production environments, as well as manage changes like that over time.

This is one of the things that also doomed “environment-oriented” programming systems like Smalltalk: plain text has some very important evolutionary benefits when used for programming, particularly when it comes to long-term maintainability, portability, and interoperability. Maintaining any sort of environment-oriented system over time is much more difficult simply because making comparisons between variants can become incredibly complex, and often isn’t possible except when working within the system itself. (For example, people working in Smalltalk environments often used to work by passing around FileOuts, and Smalltalk revision control systems often just codified that.)

And of these sorts of systems, LabVIEW is the only one still in wide use; all of that 4GL code has been replaced more than once over time with more traditionally developed software, because it turns out that there are good reasons that software that needs to live a long time tends to be created textually.

What does this have to do with using LLMs for programming? All of the same people—people who appear to resent having to give money to professional software developers to practice their trade—think that this time it’ll be different, that they’ll finally be able to just describe what they want a computer to do for their enterprise and have a program created to do it. They continue to not realize that the actual complexity is in the processes themselves that they’re describing, and in breaking these down in sufficient detail to create computer systems to implement them.

So yeah, Thomas is absolutely correct here: they’re going to fail especially spectacularly again this time, since LLMs are just fancy autocomplete and have zero actual intelligence. It’s like saying we won’t need writing and literature classes any more because spellcheck exists: a category error.

The American “Far-Left”

We don’t have a movement, but if you neutrally ask Americans their opinions on a variety of topics, a very significant number of us would prefer far-left policies, if we at all thought pursuing them was feasible.

Look no further than the popularity of Star Trek, or as some like to call it, “Fully Automated Luxury Space Communism.” Our world is bountiful enough that we could establish that society now. It doesn’t actually take sci-fi “replicators” or technological advancements that border on magic; the sole obstacle at this point can be summed up in a single word: Greed.

A very small number of people want to accumulate resources they don’t need and will never be able to fully use themselves, even while others are currently suffering. And instead of rebuking them, we collectively choose to beg for their scraps—or worse, to beg for scraps of their scraps from their reliable lapdogs.

Supreme Court Tax Fraud?

Did Clarence Thomas and Samuel Alito commit tax fraud? I assume that in addition to knowingly not including the very valuable “gifts” from their “friends” in their financial disclosures, they also knowingly didn’t pay federal or state taxes on them.

Politicians, judges, and other high-level public servants should be audited annually and, if more than minor discrepancies are found, prosecuted. Being in a position of public trust brings with it a responsibility to be above-board and honest 24/7.

That’s why I also think any elected or appointed official should be considered under oath 24/7 from the time they sign their campaign paperwork or take the oath of their appointment until their term ends.

“Astroturfing” Is Fraud, Prosecute It

I read something tonight about how some telecom lobbies were sending automated “public comments” to regulators as if they came from actual individuals represented by the governments those regulatory agencies belong to, and are starting to switch to LLM-generated text to try to evade detection.

How is that not fraud? (Wire fraud if electronic, mail fraud if mailed?) It’s known misrepresentation for illicit gain. It should be an open-and-shut fraud prosecution at minimum, an open-and-shut RICO prosecution for any organization that engages in this practice repeatedly, and an open-and-shut conspiracy (and possibly antitrust) prosecution for any group of organizations coordinating this activity.

Want to know why shit keeps getting worse in our society? Not pursuing things like this is why.

Lisa Source Code: Clascal Evolution

Here’s another interesting thing I’ve learned about Clascal and Object Pascal: It went through exactly the same evolution that Objective-C did a decade later, from combining object allocation and initialization to separating them!

In early 1983 Clascal, classes were expected to implement a New method: a function taking zero or more parameters and returning an instance of that type by assigning to SELF—sound familiar? This was always implemented as a “standard” method (one without dynamic dispatch) so you couldn’t call the wrong one. A cited advantage of this is that it would prevent use of the standard Pascal built-in New() within methods—which I suspect turned out not to be what people wanted, since it would prevent interoperability.

A class could also choose to implement an OVERRIDE of the also-included Free method to release any resources it had acquired, like file handles or other object instances. And each overridden Free method had to include SUPERSELF.Free; after it did so in order to ensure that its superclass would also release any resources it had acquired.

INTERFACE

  TYPE

    Object = SUBCLASS OF NIL
      FUNCTION New: Object; STANDARD;
      PROCEDURE Free; DEFAULT;
    END;

    Person = SUBCLASS OF Object
      name: String;
      FUNCTION New(theName: String): Person; STANDARD;
      PROCEDURE Free; OVERRIDE;
    END;

  VAR

    gHeap: Heap;

IMPLEMENTATION

  METHODS OF Object;

    FUNCTION New{: Object}
    BEGIN
      SELF := Object(HAllocate(gHeap, Size(THISCLASS)));
    END;

    PROCEDURE Free
    BEGIN
      HFree(Handle(SELF));
    END;

  END;

  METHODS OF Person;

    FUNCTION New{(theName: String): Person;}
    BEGIN
      SELF := Person(HAllocate(gHeap, Size(THISCLASS)));
      IF SELF <> NIL THEN
        name := theName.Clone;
    END;

    PROCEDURE Free
    BEGIN
      name.Free;
      SUPERSELF.Free;
    END;
  END;

By mid-1984, Clascal changed this to the CREATE method, which was declared as ABSTRACT in the base class. Note that it still doesn’t use the standard Pascal built-in New() to create object instances. However, it takes a potentially-already-allocated object so that it’s easier for a subclass to call through to its superclass for initialization, since CREATE is still not a dynamically-dispatched method. Also, instead of referencing a global variable for a heap zone in which to perform allocation, it takes the heap zone as a parameter, providing some amount of locality-of-reference that may be helpful to the VM system.

There was also a change in style to prefix class names with T.

INTERFACE

  TYPE

    TObject = SUBCLASS OF NIL
      FUNCTION CREATE(object: TObject; heap: THeap): TObject; ABSTRACT;
      PROCEDURE Free; DEFAULT;
    END;

    TPerson = SUBCLASS OF TObject
      name: TString;
      FUNCTION CREATE(theName: TString; object: TObject; heap: THeap): TPerson; STANDARD;
      PROCEDURE Free; OVERRIDE;
    END;

IMPLEMENTATION

  METHODS OF TObject;

    PROCEDURE Free
    BEGIN
      FreeObject(SELF);
    END;

  END;

  METHODS OF TPerson;

    FUNCTION CREATE{(theName: TString; object: TObject; heap: THeap): TPerson;}
    BEGIN
      IF object = NIL THEN
        object := NewObject(heap, THISCLASS);
      SELF := TPerson(object);
      WITH SELF DO
        name := theName.Clone(heap);
    END;

    PROCEDURE Free
    BEGIN
      name.Free;
      SUPERSELF.Free;
    END;
  END;

This is starting to look even more familiar to Objective-C developers, isn’t it?

The final form of the language, Object Pascal, actually backed off on the Smalltalk terminology a little bit, renaming “classes” to “objects” and going so far as to introduce an OBJECT keyword for defining a class. It also changed SUPERSELF. to INHERITED—yes, with whitespace instead of a dot!—as, again, developers new to OOP found “superclass” confusing.

Object Pascal also, at long last, adopted the standard Pascal built-in New() to perform object allocation (along with its counterpart Free() for deallocation) directly instead of introducing a separate function for it, since the intent can be inferred by the compiler from the type system. It also removed the need to use the METHODS OF construct to add methods, instead just prefixing the method with the class name and a period.

The final major change from Clascal to Object Pascal is that, with New() used for object allocation, the CREATE methods became initialization methods, since they now just initialize the object after its allocation. They were also made procedures rather than functions returning values, and since the standard Pascal built-in New() is being used, they no longer take a potentially-already-allocated object, nor do they take a heap zone in which to perform the allocation. The convention is that for a class TFoo the initialization method has the form IFoo.

There was also another stylistic change, prepending field names with f to make them easy to distinguish from zero-argument function methods at a glance.

There was also a switch in the IMPLEMENTATION section from giving each method’s parameter list only as a comment to writing it out directly.

Here’s what that looks like:

INTERFACE

  TYPE

    TObject = OBJECT
      PROCEDURE IObject; ABSTRACT;
      PROCEDURE Free; DEFAULT;
    END;

    TPerson = OBJECT(TObject)
      fName: TString;
      PROCEDURE IPerson(theName: TString); STANDARD;
      PROCEDURE Free; OVERRIDE;
    END;

IMPLEMENTATION

    PROCEDURE TObject.Free
    BEGIN
      Free(SELF);
    END;

    PROCEDURE TPerson.IPerson(theName: TString)
    BEGIN
      fName := theName.Clone;
    END;

    PROCEDURE TPerson.Free
    BEGIN
      fName.Free;
      INHERITED Free;
    END;

Based on the documentation I’ve read, it wouldn’t surprise me if the only reason initialization methods aren’t consistently named Initialize is that the language design didn’t support an OVERRIDE of a method using a different parameter list.