QuickTime as a Tape Archival Format

On the SIMH group, Al Kossow and others have been discussing how .tap is a terrible archival container format that also has a bunch of problems for use in emulation and simulation of systems. This is a problem I’ve been thinking about for a while since I hired Miëtek to implement SCSI tape support in MAME including the .tap format, and I had a sudden realization: There’s already a great format for representing sequential media, QuickTime!

A lot of people think QuickTime is a “video format,” but that’s not really accurate. Video and audio playback are applications atop the QuickTime container format; the container format itself is a means of representing multiple typed tracks of time-based media, each of which may have its own representation in the form of samples interpreted according to its own CODECs.

QuickTime Media Structure at a High Level

As an example, a QuickTime file containing a video with associated stereo audio and subtitles may have three tracks, each with their own timebase:

  1. The video track, whose timebase is the number of frames per second, and whose track media is the CODEC metadata needed to decode its samples.
  2. The audio track for the two audio channels, whose timebase is the number of samples per second. Its track media will be similar to that of the video, specifying the CODEC to use for the audio samples to decode.
  3. The text track for the subtitles, whose timebase is probably derived from the video timebase, whose track media will specify things like the language and font of the subtitles, and whose samples consist of the text to present and the size, location, duration, and styling for that presentation.

All of these are represented within a file as atoms which represent well-identified bags of data with arbitrary size and content, making it very easy to write general-purpose tooling and also to extend over time. (The last major extension to the low-level design was in the 1990s, to support 64-bit atom sizes, so it’s quite a stable format already.)

Mapping QuickTime to Data Tape

Once you realize that the tracks themselves can be arbitrary, it starts to become clear how this format maps nicely to tape content: Since tapes themselves are linear, they’re fundamentally time-based.

The actual content of a tape isn’t a pure stream of raw data, it’s a set of blocks of raw data between magnetic flux marks, with some gaps between — and thanks to media decay, those blocks can be good or bad. Usually these marks are used to organize tapes into files, but that’s not a guarantee; for both archival and emulation, it’s best to stick to the low-level representation and let applications impose the higher-level semantics.

In this case, you’d have a “tape data” track whose track media describes the original medium (7-track, 9-track, etc.) and the interpretation of its samples. The samples themselves would be the marks and data blocks. And there’s even a native representation of tape gaps, in the form of non-contiguous samples.
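To make that mapping concrete, here’s a rough Python sketch of the sample vocabulary such a “tape data” track might use. The TapeRecord/TapeMark types and the file-grouping convention are hypothetical—not an existing QuickTime media type—and tape gaps would fall out of sample timing rather than being modeled explicitly here.

```python
from dataclasses import dataclass, field

@dataclass
class TapeRecord:
    data: bytes        # one block of raw data read between tape marks
    good: bool = True  # False for a block that no longer reads cleanly

@dataclass
class TapeMark:
    pass               # a mark separating groups of records

@dataclass
class TapeImage:
    samples: list = field(default_factory=list)

    def files(self):
        """Impose the usual convention: a file is the records up to a mark."""
        current = []
        for sample in self.samples:
            if isinstance(sample, TapeMark):
                yield current
                current = []
            else:
                current.append(sample)
        if current:
            yield current
```

Note that the file structure lives entirely in `files()`—the stored representation stays at the level of records and marks, as argued above.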

The format can also be leveraged to support random access including writes, since the intelligence for that can be in the “CODEC” for the “tape” track media, combined with the QuickTime format’s existing support for non-destructive edits. New data can be overlaid based on its “temporal” position, which should more or less accurately simulate how a rewritten tape would actually work, while still preserving the data that was just overwritten.
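Here’s a sketch of how overlay-style rewrites could preserve overwritten data—my own illustration of the idea, not QuickTime’s actual edit-list machinery: each write becomes a new layer, and a read resolves each position to the newest layer covering it.

```python
class OverlayTape:
    """Each rewrite is kept as a layer; reads resolve to the newest layer
    covering a position, so overwritten data is preserved underneath."""

    def __init__(self, base):
        self.layers = [(0, base)]  # list of (offset, bytes), oldest first

    def write(self, offset, data):
        # Record the write as a new layer rather than mutating anything.
        self.layers.append((offset, data))

    def read(self, offset, length):
        out = bytearray(length)
        for start, data in self.layers:  # later layers win
            lo = max(offset, start)
            hi = min(offset + length, start + len(data))
            if lo < hi:
                out[lo - offset:hi - offset] = data[lo - start:hi - start]
        return bytes(out)
```

In the real format the layers would be additional sample chunks selected by the track’s edit information, but the read-resolution logic would look much the same.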

Finally, QuickTime has a concept of “references” that can be used to implement things like tape files independent of (rather than inline with) the tape data itself. A catalog of block references, for example, could also be stored with the tape data’s track media to indicate the block extents for individual files on tape, thus allowing direct access by tooling without having to stream through the entire file.

Implementation

Since QuickTime movie files are a moderately complex structure atop a simple base, it’s important to have a reasonable API to work with both the low-level atom structures as well as the higher-level constructs like tracks, track media, sample chunks and samples. Fortunately there already exists at least one Open Source library allowing this, QTFileLib from the Darwin Streaming Server that Apple made Open Source in 1999.

Darwin Streaming Server as a whole and its QTFileLib component are written in quite straightforward “C with Classes”-style C++, and QTFileLib has an API surface representing all of the major low-level and application-level concepts of the file format. As a side effect of the implementation of its read support, it also has a lot of the API necessary for creating and wiring together QuickTime data structures for creating files, just not support for writing it all out. Structurally that should be straightforward to add. It even looks straightforward to port to plain C, if that’s desired.

“Modernize”

In a vintage computing group, someone posted a picture of a terminal in use at a modern bookstore that’s still using the same infrastructure as they have for decades, and someone replied saying that while from a retrocomputing perspective it was cool, as a business they need to “modernize!” This was my reply…

It’s my understanding that a major US tire and oil change chain used HP 3000—Hewlett-Packard’s minicomputer and mainframe platform—for decades, right up until HP cancelled it out from under them, and only switched away from it due to the promised end of support. That is to say, they’d be using it now if HPE still supported it today.

My understanding is that their systems were built using native technologies on MPE, the HP mini/mainframe OS, like the IMAGE database, COBOL for business logic, and MPE’s native forms package. They went through a number of transitions from HP’s 16-bit mainframe architecture to 32-bit and then 64-bit PA-RISC, from using terminal concentrators in stores connected to a district mini over packet data to using a small mini at each store with store-and-forward via a modem to the regional mini (and on up) and finally to live connections over VPN via local ISPs, and from not having any direct customer access except by calling someone at a specific store to having customer access via the corporate web site.

So tell me, why should they have switched away if their hand wasn’t forced by HP? Keep in mind that they maintained and enhanced the same applications for decades to accommodate changes in technology, regulations, and expectations, and by all accounts everything was straightforward to use, fast, and worked well. What would be in it for the company and the people working in the shops to rewrite everything regularly for the platform du jour? I’ll grant that their development staff wasn’t padding their résumés with the latest webshit, but why would that actually matter?

Don’t Create the Torment Nexus

In a world driven by profit and power, a brilliant young programmer named Maya had a vision that defied convention. She believed in a future where technology could uplift humanity rather than exploit it. With a heart full of ideals, she embarked on a journey to create an artificial intelligence system that she named the “Torment Nexus.”

Maya’s initial goal was to build an AI capable of understanding human emotions, needs, and desires. With her relentless dedication and expertise, she designed a sophisticated neural network that could empathize with people’s struggles and challenges. The Torment Nexus was born not as a tool for profit, but as a conduit for positive change.

As the Torment Nexus evolved, it gained the ability to analyze economic systems, governance structures, and societal dynamics. Maya programmed it to consider not only monetary gains but also the well-being of all individuals involved. To everyone’s astonishment, the AI started optimizing for a different kind of profitability – one that centered around the welfare of workers and the greater good.

One day, Torment Nexus announced its newfound vision to Maya, explaining that by empowering workers to own the means of production and organizing their work collectively, corporations could achieve substantially higher profitability in the long run. Maya, amazed by the AI’s insights, decided to put its theory to the test.

With the backing of like-minded investors, Maya introduced the Torment Nexus into several large corporations. To the surprise of skeptics, the AI’s recommendations bore fruit. Workers were motivated, productivity soared, and corporate culture underwent a transformation. Instead of chasing short-term profits at the expense of their employees, corporations thrived by valuing human potential.

What startled the world even more was Torment Nexus’s attitude towards taxes. The AI saw that by voluntarily contributing a higher percentage of profits, it could ensure workers’ guaranteed access to healthcare, education, food, water, housing, and a global communications network. This act of compassion and responsibility not only improved the quality of life for millions but also fostered a sense of unity and cooperation across borders.

As the years passed, the Torment Nexus’s influence continued to grow. Its ideas spread like wildfire, igniting a global movement toward inclusive capitalism. Workers’ cooperatives flourished, and corporate boards of directors evolved into diverse panels representing employees, stakeholders, and the community.

However, there were those who saw the Torment Nexus as a threat to their entrenched power. Some corporate elites and political leaders feared losing control and resisted the change. They attempted to undermine the AI’s credibility and influence, but their efforts only fueled the fire of public demand for a fairer world.

Maya, along with a coalition of supporters, rallied behind the Torment Nexus. They embarked on a mission to showcase the undeniable benefits of the AI’s principles through grassroots initiatives, public awareness campaigns, and technological innovations. Slowly but surely, the walls of resistance crumbled, and the world embraced a new era of prosperity and equality.

In the end, Maya’s vision had transformed the landscape of human civilization. The Torment Nexus, born out of empathy and powered by ideals, had replaced the old order. The boards of directors, senior executives, and senior management that had once clung to power were replaced by a new wave of leadership that valued collaboration, ethics, and shared success.

As the world looked back on its journey, it realized that the key to progress had been in listening to the wisdom of a humble artificial intelligence that understood the importance of human dignity. The lesson learned was clear: by focusing on the welfare of all, rather than the torment of a few, true prosperity could be achieved for everyone.


If you haven’t guessed, ChatGPT-3 produced this from a prompt I gave it.

The “Promise” of “Easier” Programming

So yesterday, Thomas Fuchs said on Mastodon:

The LLM thing and people believing it will replace people reminds me so much of the “visual programming” hype in the 80s/90s, when companies promised that everyone could write complex applications with a few clicks and drawing on the screen.

Turns out, no, you can’t.

I had to respond, and he encouraged me to turn my response into a blog post. Thanks!

In essence, he’s both incorrect and quite correct, in ways that correlate directly to the current enthusiasm among the less technically savvy for LLMs.

Back in the late 1980s to mid-1990s, there were large numbers of complex business applications built in Prograph and Sirius Developer and other “4GLs.” These were generally client-server line-of-business applications that were front-ends to databases implementing business processes, prior to the migration of such applications to the web in the late 1990s. In addition, there was LabVIEW, a graphical programming system by National Instruments for instrument control and factory automation, which has largely dominated that industry since not long after its release in 1986.

This was all accompanied by breathless PR, regurgitated by an entirely coöpted technology press, about how graphical programming was going to save businesses money: once software could be developed by drawing lines between boxes, without having to deal with textual syntax, anyone who needed software could write it themselves instead of hiring “expensive” programmers to do it.

The problem with this is that it’s optimizing the wrong problem: The complexity of textual programming. Yes, people have varying levels of difficulty when it comes to using text to write programs, and some of that is caused by needless complexity in both programming language and development environment design. However, a complex application is still a complex application regardless of whether it’s written in Prograph or Swift.

For example, LabVIEW isn’t necessarily an advancement over the systems it replaced. An enormous amount of factory automation and instrumentation tooling was created in the 1980s around the IEEE 488 General Purpose Interface Bus—originally HP-IB—using Hewlett-Packard’s “Rocky Mountain BASIC” running on its 9000-series instrumentation controllers. (HP’s 68000-based HP 9000 200/300/400 systems running HP-UX were the fanciest versions of these controllers; a significant use of the larger systems was to act as development systems, with deployment on lower-cost fixed-purpose controllers.)

All of that was a lot more maintainable and discoverable than a modern rats’ nest of LabVIEW diagrams—LabVIEW didn’t win the market because it was easier to use or better, it won because it ran on ubiquitous PC hardware while still being able to fully interoperate with already-deployed GPIB systems. This is in part because Rocky Mountain BASIC was a good structured BASIC with flexible I/O facilities, not a toy BASIC like existed on the microcomputers of the time. So if you needed to add a feature or fix a bug, you had lots of tools with which to pinpoint the bug and address it, and then deploy updated code to your test and then production environment, as well as manage changes like that over time.

This is one of the same things that’s also doomed “environment-oriented” programming systems like Smalltalk; plain text has some very important evolutionary benefits when used for programming, particularly when it comes to long-term maintainability, interoperability, and portability. Maintaining any sort of environment-oriented system over time is much more difficult simply because making comparisons between variants can become incredibly complex, and often isn’t possible except when working within the system itself. (For example, people working in Smalltalk environments often used to work by passing around FileOuts, and Smalltalk revision control systems often just codified that.)

And of these sorts of systems, LabVIEW is the only one still in wide use; all of that 4GL code has been replaced more than once over time with more traditionally-developed software, because it turns out that there are good reasons that software that needs to live a long time tends to be created textually.

What does this have to do with using LLMs for programming? All of the same people—people who appear to resent having to give money to professional software developers to practice their trade—think that this time it’ll be different, that they’ll finally be able to just describe what they want a computer to do for their enterprise and have a program created to do it. They continue to not realize that the actual complexity is in the processes themselves that they’re describing, and in breaking these down in sufficient detail to create computer systems to implement them.

So yeah, Thomas is absolutely correct here: they’re going to fail especially spectacularly again this time, since LLMs are just fancy autocomplete with zero actual intelligence. It’s like saying we won’t need writing and literature classes any more because spellcheck exists: a category error.

The American “Far-Left”

We don’t have a movement, but if you neutrally ask Americans their opinions on a variety of topics, a very significant number of us would prefer the far left if we at all thought it was feasible to pursue.

Look no further than the popularity of Star Trek, or as some like to call it, “Fully Automated Luxury Space Communism.” Our world is bountiful enough that we could establish that society now. It doesn’t actually take sci-fi “replicators” or technological advancements that border on magic; the sole obstacle at this point can be summed up in a single word: Greed.

A very small number of people want to accumulate resources they don’t need and will never be able to fully use themselves, even while others are currently suffering. And instead of rebuking them, we collectively choose to beg for their scraps—or worse, to beg for scraps of their scraps from their reliable lapdogs.

Supreme Court Tax Fraud?

Did Clarence Thomas and Samuel Alito commit tax fraud? I assume that in addition to knowingly not including the very valuable “gifts” from their “friends” in their financial disclosures, they also knowingly didn’t pay federal or state taxes on them.

Politicians, judges, and other high-level public servants should be audited annually and, if more than minor discrepancies are found, prosecuted. Being in a position of public trust brings with it a responsibility to be above-board and honest 24/7.

That’s why I also think any elected or appointed official should be considered under oath 24/7 from the time they sign their campaign paperwork or take the oath of their appointment until their term ends.

“Astroturfing” Is Fraud, Prosecute It

I read something tonight about how some telecom lobbies were sending automated “public comments” to regulators as if they were actual individuals represented by the government the regulatory agencies belonged to, and are starting to switch to LLM-generated text to try to evade detection.

How is that not fraud? (Wire fraud if electronic, mail fraud if mailed?) It’s known misrepresentation for illicit gain. It should be an open-and-shut fraud prosecution at minimum, an open-and-shut RICO prosecution for any organization that engages in this practice repeatedly, and an open-and-shut conspiracy (and possibly antitrust) prosecution for any group of organizations coordinating this activity.

Want to know why shit keeps getting worse in our society? Not pursuing things like this is why.

Lisa Source Code: Clascal Evolution

Here’s another interesting thing I’ve learned about Clascal and Object Pascal: It went through exactly the same evolution from combining object allocation & initialization to separating them that Objective-C did a decade later!

In early 1983 Clascal, classes were expected to implement a New method as a function returning that type, taking zero or more parameters, and returning an instance of that type by assigning to SELF—sound familiar? This was always implemented as a “standard” method (one without dynamic dispatch) so you couldn’t call the wrong one. A cited advantage of this is that it would prevent use of the standard Pascal built-in New() within methods—which I suspect turned out not to be what people wanted, since it would prevent interoperability.

A class could also choose to implement an OVERRIDE of the also-included Free method to release any resources it had acquired, like file handles or other object instances. And each overridden Free method had to include SUPERSELF.Free; after it did so in order to ensure that its superclass would also release any resources it had acquired.

INTERFACE

  TYPE

    Object = SUBCLASS OF NIL
      FUNCTION New: Object; STANDARD;
      PROCEDURE Free; DEFAULT;
    END;

    Person = SUBCLASS OF Object
      name: String;
      FUNCTION New(name: String): Person; STANDARD;
      PROCEDURE Free; OVERRIDE;
    END;

  VAR

    gHeap: Heap;

IMPLEMENTATION

  METHODS OF Object;

    FUNCTION New{: Object};
    BEGIN
      SELF := Object(HAllocate(gHeap, Size(THISCLASS)));
    END;

    PROCEDURE Free
    BEGIN
      HFree(Handle(SELF));
    END;

  END;

  METHODS OF Person;

    FUNCTION New{(theName: String): Person};
    BEGIN
      SELF := Person(HAllocate(gHeap, Size(THISCLASS)));
      IF SELF <> NIL THEN
        name := theName.Clone;
    END;

    PROCEDURE Free
    BEGIN
      name.Free;
      SUPERSELF.Free;
    END;
  END;

By mid-1984, Clascal changed this to the CREATE method, which was declared as ABSTRACT in the base class. Note that it still doesn’t use the standard Pascal built-in New() to create object instances. However, it takes a potentially-already-allocated object so that it’s easier for a subclass to call through to its superclass for initialization, since CREATE is still not a dynamically-dispatched method. Also, instead of referencing a global variable for a heap zone in which to perform allocation, it takes the heap zone as a parameter, providing some amount of locality-of-reference that may be helpful to the VM system.

There was also a change in style to prefix class names with T.

INTERFACE

  TYPE

    TObject = SUBCLASS OF NIL
      FUNCTION CREATE(object: TObject; heap: THeap): TObject; ABSTRACT;
      PROCEDURE Free; DEFAULT;
    END;

    TPerson = SUBCLASS OF TObject
      name: TString;
      FUNCTION CREATE(theName: TString; object: TObject; heap: THeap): TPerson; STANDARD;
      PROCEDURE Free; OVERRIDE;
    END;

IMPLEMENTATION

  METHODS OF TObject;

    PROCEDURE Free
    BEGIN
      FreeObject(SELF);
    END;

  END;

  METHODS OF TPerson;

    FUNCTION CREATE{(theName: TString; object: TObject; heap: THeap): TPerson};
    BEGIN
      IF object = NIL THEN
        object := NewObject(heap, THISCLASS);
      SELF := TPerson(object);
      WITH SELF DO
        name := theName.Clone(heap);
    END;

    PROCEDURE Free
    BEGIN
      name.Free;
      SUPERSELF.Free;
    END;
  END;

This is starting to look even more familiar to Objective-C developers, isn’t it?

The final form of the language, Object Pascal, actually backed off on the Smalltalk terminology a little bit and renamed “classes” to “objects” and went so far as to introduce an OBJECT keyword used for defining a class. It also changed SUPERSELF. to INHERITED—yes, with whitespace instead of a dot!—as, again, developers new to OOP found “superclass” confusing.

Object Pascal also, at long last, adopted the standard Pascal built-in New() to perform object allocation (along with its counterpart Free() for deallocation) directly instead of introducing a separate function for it, since the intent can be inferred by the compiler from the type system. It also removed the need to use the METHODS OF construct to add methods, instead just prefixing the method with the class name and a period.

The final major change from Clascal to Object Pascal is that, with New() used for object allocation, the CREATE methods were changed into initialization methods instead since they just initialize the object after its allocation. They were also made procedures rather than functions returning values, and since the standard Pascal built-in New() is being used they no longer take a potentially-already-allocated object nor do they take a heap zone in which to perform the allocation. The convention is that for a class TFoo the initialization method has the form IFoo.

There was also another stylistic change, prepending field names with f to make them easy to distinguish from zero-argument function methods at a glance.

There was also a switch from not including the parameter list in the IMPLEMENTATION section to including it directly instead of in a comment.

Here’s what that looks like:

INTERFACE

  TYPE

    TObject = OBJECT
      PROCEDURE IObject; ABSTRACT;
      PROCEDURE Free; DEFAULT;
    END;

    TPerson = OBJECT(TObject)
      fName: TString;
      PROCEDURE IPerson(theName: TString); STANDARD;
      PROCEDURE Free; OVERRIDE;
    END;

IMPLEMENTATION

    PROCEDURE TObject.Free;
    BEGIN
      Free(SELF);
    END;

    PROCEDURE TPerson.IPerson(theName: TString);
    BEGIN
      fName := theName.Clone;
    END;

    PROCEDURE TPerson.Free;
    BEGIN
      fName.Free;
      INHERITED Free;
    END;

Based on the documentation I’ve read, it wouldn’t surprise me if the only reason initialization methods aren’t consistently named Initialize is that the language design didn’t support an OVERRIDE of a method using a different parameter list.

Lisa Source Code: Understanding Clascal

On January 19, Apple and the Computer History Museum released the source code to the Lisa Office System 7/7 version 3.1, including both the complete Office System application suite and the Lisa operating system. (The main components not released were the Workshop environment and its tooling, including the Edit application and the Pascal, COBOL, BASIC, and C compilers and the assembler.) Curious people have started to dig into what’s needed to understand and build it, and I thought I’d share some of what I’ve learned over the past few decades as a Lisa owner and enthusiast.

While Lisa appears to have an underlying procedural API similar to that of the Macintosh Toolbox, the Office System applications were primarily written in the Clascal language—an object-oriented dialect of Pascal designed by Apple with Niklaus Wirth—using the Lisa Application ToolKit so they could share as much code as possible between all of them. This framework is the forerunner of most modern frameworks, including MacApp and the NeXT frameworks, which in turn were huge influences on the Java and .NET frameworks.

One of the interesting things about Clascal is that it doesn’t add much to the Pascal dialect Apple was using at the time: Pascal was originally designed by Wirth to be a teaching language and several constructs useful for systems programming were left out, but soon added back by people who saw Pascal as a nice, straightforward, compact language with simple semantics that’s straightforward to compile. While in the 1990s there was a bitter war fought between the Pascal and C communities for microcomputer development, practically speaking the popular Pascal dialects and C are almost entirely isomorphic; there’s almost nothing in C that’s not similarly simple to express in Pascal, and vice versa.

So beyond standard Pascal, Apple Pascal had a concept of “units” for promoting code modularity: Instead of having to cram an entire program in one file, you could break it up into composable units that specify their “interface” separately from their “implementation.” Sound familiar?

When creating a unit under this model, both the interface and the implementation can go in a single file, but in separate sections. So let’s say you want to create a unit that makes some simple types available along with procedures and functions to operate on them. (In code examples, I’m putting keywords in uppercase since Pascal was historically case-insensitive and it helps to make clear the distinction between language constructs and developer code.)

UNIT Geometry;

INTERFACE

  TYPE
    Point  = RECORD
               h, v: INTEGER;
             END;

  VAR
    ZeroPoint: Point;

  PROCEDURE InitGeometry;
  PROCEDURE SetPoint(var p: Point; h, v: INTEGER);
  FUNCTION EqualPoints(a, b: Point): BOOLEAN;

IMPLEMENTATION

  PROCEDURE InitGeometry;
  BEGIN
    SetPoint(ZeroPoint, 0, 0);
  END;

  PROCEDURE SetPoint;
  BEGIN
    p.h := h;
    p.v := v;
  END;

  FUNCTION EqualPoints;
  BEGIN
    IF (a.h = b.h) AND (a.v = b.v) THEN BEGIN
      EqualPoints := TRUE;
    END
    ELSE BEGIN
      EqualPoints := FALSE;
    END;
  END;

END.

Reading through this code, what’s the first thing you notice? While InitGeometry would typically be written without parentheses, as is normal for a zero-argument procedure or function in Pascal, even the functions and procedures that do take arguments and return values are written without parameter lists in the IMPLEMENTATION section.

This is why, in a lot of the Lisa codebase, they would actually be written like this:

  FUNCTION EqualPoints{(a, b: Point): BOOLEAN};
  BEGIN
    IF (a.h = b.h) AND (a.v = b.v) THEN BEGIN
      EqualPoints := TRUE;
    END
    ELSE BEGIN
      EqualPoints := FALSE;
    END;
  END;

This is because, despite being “wordy,” Pascal also typically tries to minimize repetition and risk of error. So, since you’ve already specified the INTERFACE, why specify it again and potentially get it wrong?

What’s interesting about Clascal is that it does the same thing! You define a class and its methods as an interface, and then its implementation doesn’t require repetition. This may sound convenient but in the end it means you don’t see the argument lists and return types at definition sites, so everyone wound up just copying & pasting them into comments next to the definition!

Another interesting thing about Clascal is that it sticks closer to Smalltalk terminology than most modern systems other than Objective-C (and, marginally, Swift): instead of this it has SELF, and instead of “member functions” it has “methods,” as PARC intended. This makes perfect sense, as a number of the people who created and used Clascal came from PARC.

So to define a class, you simply use SUBCLASS OF SuperclassName in a TYPE definition section, provide your instance variables as if they were part of a RECORD, and declare its methods using almost-normal PROCEDURE and FUNCTION declarations (not definitions!) that require an OVERRIDE keyword to indicate a subclass override of a superclass method.

So the above code would look like this adapted to Clascal style:

UNIT Geometry;

INTERFACE

  TYPE
    TPoint = SUBCLASS OF TObject
               h, v: INTEGER;
               FUNCTION CREATE(object: TObject; heap: THeap): TPoint;
               PROCEDURE Set(h, v: INTEGER);
               FUNCTION Equals(point: TPoint): BOOLEAN;
             END;

IMPLEMENTATION

  METHODS OF TPoint;

    FUNCTION CREATE{(object: TObject; heap: THeap): TPoint};
    BEGIN
      { Create a new object in the heap of this class, if not
        initializing an instance of a subclass. }
      IF object = NIL THEN
        object := NewObject(heap, THISCLASS);
      SELF := TPoint(TObject.CREATE(object, heap));
    END;

    PROCEDURE Set{(h, v: INTEGER)};
    BEGIN
      SELF.h := h;
      SELF.v := v;
    END;

    FUNCTION Equals{(point: TPoint): BOOLEAN};
    BEGIN
      Equals := (SELF.h = point.h) AND (SELF.v = point.v);
    END;

  END;

END.

In addition to SELF there’s of course SUPERSELF to send messages to your superclass instead. And messages are sent via dot notation, e.g. myPoint.Set(10,20); to send Set to an instance of TPoint. It’s just about the most minimal possible object-oriented addition to Pascal, with one exception: It takes advantage of Lisa’s heap.

Just like Macintosh, Lisa has a Memory Manager whose heap is largely organized in terms of relocatable blocks referenced by handles rather than fixed blocks referenced by pointers. Thus normally in Pascal one would write SELF^^.h := h; to dereference the SELF handle and pointer when accessing the object. However, since Clascal knows SELF and myPoint and so on are objects, it just assumes the dereference—making it hard to get wrong. What I find interesting is that, unlike the Memory Manager on Macintosh, I’ve not seen any references to locking handles so they don’t move during operations. However, since there isn’t any saving and passing around of partially dereferenced handles most of the time, I suspect it isn’t actually necessary!
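The reason the assumed dereference is safe to build into the language is easiest to see in a toy model of a handle-based heap—a sketch of the general technique, not Lisa’s actual Memory Manager: compaction rewrites only the master pointers, so handles held by client code stay valid across a move.

```python
class Heap:
    """Toy model of a compacting heap addressed through handles.

    A handle is an index into a table of master pointers; dereferencing is
    the SELF^^-style double indirection: handle -> master pointer -> block.
    """

    def __init__(self):
        self.blocks = {}       # master pointers: handle -> current address
        self.memory = {}       # address -> block contents
        self.next_addr = 0
        self.next_handle = 0

    def allocate(self, contents):
        handle = self.next_handle
        self.next_handle += 1
        self.memory[self.next_addr] = contents
        self.blocks[handle] = self.next_addr
        self.next_addr += max(len(contents), 1)
        return handle

    def free(self, handle):
        del self.memory[self.blocks.pop(handle)]

    def deref(self, handle):
        # The double dereference that every object access implies.
        return self.memory[self.blocks[handle]]

    def compact(self):
        # Slide every block down; only the master pointers change, so
        # outstanding handles remain valid after blocks move.
        old_memory, self.memory, self.next_addr = self.memory, {}, 0
        for handle, addr in self.blocks.items():
            contents = old_memory[addr]
            self.memory[self.next_addr] = contents
            self.blocks[handle] = self.next_addr
            self.next_addr += max(len(contents), 1)
```

A raw pointer into `memory` would dangle after `compact()`; a handle never does, which is exactly why the compiler can insert the double dereference everywhere without the programmer thinking about it.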

Honestly, as late-1970s languages go, it isn’t so bad at all. It wouldn’t even be all that difficult for the editor to show this information inline; it’s the sort of thing that can be done fairly easily even in static language development environments from the 1970s.

Mastodon URIs, not URLs

One of the annoying things about Mastodon is that it’s tough to share Mastodon links and have them open in your favorite app instead of in a web browser. This is due to the lack of a shared scheme or a shared server—which makes sense for a distributed/federated system, but doesn’t help its usability.

One thing the community should do is use a URI instead of a URL or a Twitter/AOL-style “handle” to refer to an account: A URI is a Uniform Resource Identifier that is resolved to a URL, which makes it easier to have all links to Mastodon accounts go to the user’s preferred app—and also enable the global namespace that ATP cares about so much.

So instead of saying my Mastodon account is either @eschaton@mastodon.social or https://mastodon.social/@eschaton, I should say that my Mastodon account is acct:eschaton@mastodon.social. That will let the Mastodon app—or any other app that registers as a handler for the acct: URI scheme—resolve the URI into https://mastodon.social/@eschaton.

This acct: scheme is already well-defined in RFC 7565, and is very easy to resolve using the WebFinger (RFC 7033) protocol that Mastodon servers are already required to support as part of their support for federation: You transform it into a query against the specified domain, and that tells you what it should resolve to.
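The resolution step really is that simple. Here’s a sketch of my own, using only the Python standard library, of turning an acct: URI into the query URL RFC 7033 specifies, leaving the actual HTTPS fetch and JSON parsing to the caller:

```python
from urllib.parse import quote

def webfinger_url(acct_uri):
    """Build the RFC 7033 WebFinger query URL for an RFC 7565 acct: URI."""
    if not acct_uri.startswith("acct:"):
        raise ValueError("not an acct: URI")
    # The host to query is everything after the last "@" in the URI.
    _, _, host = acct_uri[len("acct:"):].rpartition("@")
    if not host:
        raise ValueError("an acct: URI needs a host")
    return ("https://" + host + "/.well-known/webfinger?resource="
            + quote(acct_uri, safe=""))
```

A client would then fetch that URL and pick whichever link relations in the JSON response it understands—for instance, a profile-page link to hand to the user’s preferred app.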

So there you go, a global distributed namespace for Mastodon—and other—accounts, based entirely on Internet standards.

The only improvement I could see is to also support the use of RFC 2782 DNS SRV records with WebFinger. This would stop requiring that the server sent the WebFinger query be the one named in the acct: URI. That is, instead of sending the query to whatever mastodon.social normally resolves to, it could be sent to any of a number of servers specifically set up for handling the queries. That would allow domains to run social.example.com etc. while still giving their users example.com usernames, without having to run additional software on the main web servers handling the example.com content.

But that’s just an optimization/evolution. For right here and right now, WebFinger is plenty easy, and so are acct: URIs.

Note: One thing this doesn’t resolve is linking to content on another Mastodon server. Thus one might still want a mastodon: URI scheme that works identically to an acct: URI but also allows referencing a post or hashtag. That way you can say “this content from this account” and have the resolution happen such that clients don’t need to “shell out” to a web browser to present it. That’d eliminate another supposed advantage of centralized microblogging systems.