A Quick Look at macOS

This post exists because of macOS Tahoe. I’m rather irritated by the new release. This blog is trending towards becoming a place I dump complaints about the direction of current technology. I suppose if I’m going to seem like a grump, I might as well embrace it.

From my perspective, macOS Tahoe continues a trend in recent macOS releases of more-or-less just making the OS worse and harder to use.

I was going to just complain about macOS Tahoe in this post. As I was writing the introduction, I realized I was over 6000 words in and still hadn’t gotten to macOS Tahoe yet. This was not ideal, so I’ve decided to turn this into a series instead. This is the first part. The topic of macOS Tahoe specifically was also too restrictive, so the topic area has been widened. You’ll see what I mean.

I’ll explore a lot of different ideas… but I think it’s best to start with a Quick Look (sorry) at macOS, cause that’s a juicy topic that provides a Launchpad (sorry, again) into many more interesting roads.

So, I’ve mentioned that I’m annoyed by recent macOS releases in general. If I find recent macOS so irritating, why do I use it in the first place? Well, let’s talk about that.

Why macOS?

It’s a good question. After all, Linux and Windows are right there. Heck, even ChromeOS exists.

The truth is, I’m a Mac user, but one without any particular love of Apple. I use an Android phone, and I don’t use any of Apple’s cloud services. I use macOS because, of the desktop operating systems out there, it’s basically the only usable one.

This brings me no joy. I wish Linux was usable. I don’t want to have to use a proprietary operating system which is continuously moving in a direction I hate. But macOS has some things going for it that Linux and Windows simply do not.

The biggest one of these things is consistency. For me, it’s very important that things work the same way across the computer. I don’t want to have to relearn the way I use my computer every time I open a different app. macOS has a very strong personality that the vast majority of apps written for it do actually adhere to. For example, even most cross-platform apps will use the macOS menu bar instead of putting one in their windows, and the commands available from each menu are very consistent across applications. Also, the menu bar being at the top of the screen is just good.

Ideologically, I’d love to be a Linux user. I love open source. The trouble with Linux is that, sure, it’s Unix, but at the end of the day, it’s just way too much of a patchwork of different technologies to feel very cohesive. Apple is not wrong that their extremely tight integration across all the layers of their platform makes for a very good user experience. Also, I just like the technologies used in macOS better than the ones used on the Linux desktop. Irritatingly, there is no Linux platform, but rather a bunch of competing platforms, somewhat-but-not-totally compatible with each other, that all run on top of something called Linux. KDE and GNOME, as much as Linux proponents will say they’re interchangeable, produce applications that look and feel very out of place on each other’s desktops. They behave so differently that intuition developed for one becomes functionally useless in the other. The result is basically unusable unless you like to spend more time wrangling the computer than using it. To get a semblance of a consistent desktop, you have to work to configure it, and at the end of the day, it won’t really work the way I want a desktop to anyway.

macOS won’t either! There are always certain options I have to configure on macOS to make it usable, but that’s a lot more tolerable since they’re pretty much always the same options. And even then, it’s not close to how I want a computer to work, but it’s still a lot closer than anything even the most fine-tuned Linux desktop has ever been. At least, even if it won’t work the way I want it to, once I’ve learned what to expect from it, most things will stay consistent with those expectations. Also, there are really good Mac apps that make the platform feel like a platform.

I haven’t even talked about Windows, but suffice it to say that somehow it’s almost as inconsistent as desktop Linux, despite all its development being driven by one company (which is genuinely an impressive feat). Plus, as someone who likes to code, Unix-y operating systems are a lot easier to wrangle.

So, yeah, I end up using macOS. Not out of any great love of Apple, though. I think Apple is doing things to the platform that are in direct opposition to how I want my computer to work.

The state of macOS

Let’s see how macOS is doing, pre-Tahoe. Take a step back in time to September 14th, when the release version of macOS is still macOS 15 Sequoia.

We find ourselves on a desktop, but it’s a strange desktop. Hitting the icon for System Settings brings forth an alien apparition straight from iPadOS. Hints of a phantom “Apple Intelligence” litter the environment, but only hints. Apps like Stocks and Freeform have wandered in from a seemingly foreign world. A strange “Stage Manager” wants to manage your windows for you, but leaves itself off by default, for it has no confidence in itself.

All in all, it’s not the most horrible place in the world, but traces of malaise linger everywhere. Following these traces back to their beginning would take us farther back than I want to go right now, so let’s stick to the recent stuff.

The Big Sur era

2020 was a shakeup year for the Mac platform. Apple began the Apple Silicon transition, and with it, macOS Big Sur brought a fresh coat of paint to the operating system, along with a version number bump to 11 after 19 years of Mac OS X.

The previous year, macOS Catalina had already included signals of the direction things were going. The decision to drop support for 32-bit apps was one; Catalyst, the tooling to instantly make an iPad app Mac-compatible, was another.

Big Sur’s refresh of the macOS design was… something. I didn’t like it very much. I thought it mostly just made things uglier and kind of weird. A lot of places in the UI, colorful iconography was replaced with abstract symbols. Every app icon became roughly a squircle, a choice I found rather odd given that Apple had previously used the shape of app icons to help indicate the app’s category and task. The squircles were highly reminiscent of iOS icons and not in a good way.

Also, a lot of app icons were just lazy, with the old icon plopped into a white squircle. I thought these were the ugliest things ever. The direction Big Sur took the appearance of macOS was just… ugly. Everything was rounder and more abstract. And also less usable.

On Apple Silicon, Big Sur also added the ability to run apps built for iPad without any modifications, not even requiring Catalyst. I think this deserves a moment of whining because it’s produced some truly awful Mac apps.

Catalyst is, in theory, fine. Having a codebase that works in multiple places is easier for developers and could create a more consistent experience across Apple’s platforms. But there’s a cost.

It’s actually really hard to make an app that feels good on iPad automatically feel good on macOS. I think the only good Catalyst app I’ve ever tried is Craft. Most of them feel terribly out of place because the entire UI paradigm they’re built for is iPadOS. As many of Apple’s own built-in apps have become Catalyst apps, the consistency of macOS has markedly degraded.

It turns out consistency across platforms, while a valuable goal, can really seriously break the internal consistency of each platform. iPadOS is a touch-based environment which has always operated within siloed apps that usually don’t expect you to have a keyboard or mouse. macOS is an environment which Apple has steadfastly refused to add any kind of touchscreen to, with apps that have traditionally run in windows, expected keyboards and mice or trackpads, and manipulated documents. They share a menu bar with generally consistent menus like File, Edit, View, Window, and Help.

The paradigms are just really different, and Apple seems to believe that just throwing software and conventions from one paradigm into another will work by itself. It won’t. It takes a lot of developer effort to make a good Catalyst app, and so most Catalyst apps are crap. Cross-platform Electron apps tend to feel more native to the platform than Catalyst apps, or even worse, unmodified iPad apps running straight on macOS.

This is very embarrassing for Apple. It’s not as bad as the situation on competing platforms, but it is a problem macOS in particular has not had before, and the development of this problem is seriously concerning. Actually, it may even be a little worse on macOS, because other platforms don’t actually have a reference point for what a good, native app that’s part of the platform feels like. macOS does, and Mac users, consciously or unconsciously, know what to expect from a good Mac app, making every newly introduced inconsistency all the more jarring.

A miscellaneous complaint I have that I just want to throw in here is that previously, macOS notifications let you hover over them and instantly gain access to quick actions you could click on, with huge click targets, such as for quickly replying to a message. Big Sur didn’t remove this capability, but it turned them into tiny buttons that were hidden behind a submenu requiring an extra click. I can’t think of any good reason for that change.

The Apple Silicon transition elevated Mac hardware to levels basically unheard of in consumer PCs at the time. The power built in to these devices is insane. Meanwhile, the software has been on essentially a continuous downhill slope, affording less and less of that power to its users. Because Big Sur is just the beginning.

But first, the iPad philosophy.

iPads

iPads are weird devices. They’re general-purpose computers that don’t allow their users to do general-purpose computation. The Pro models are now upwards of a thousand dollars. And Apple apparently, and increasingly, views the iPad as the future of the “computer” in their lineup.

They’ll never admit it, of course. They’ll keep saying they believe the iPad and Mac are different products that are good for different things, and they intend to keep it that way. But their actions belie their words, such as bringing Final Cut Pro to the iPad and introducing window management that works pretty much the same as macOS, menu bars and all, to the iPad.

Their words suggest a much wiser course than their actions do. The iPad philosophy runs directly counter to how I want my computer to work and how I believe it should. The prospect of my primary computer being an iPad horrifies me.

So what is it about the iPad that makes it so different from the Mac? Well, iPadOS, for one thing, is an outgrowth of iOS, an operating system designed for mobile phones and centered almost entirely around the concept of the “app”. Apps in a system like iOS are incredibly different from their counterparts in a system like macOS. Actually, they’re kind of a bad abstraction in general.

In order to talk about the reason apps are a bad abstraction, I suspect we’ll need to talk about what they even are.

Apps

(I am using way too many levels of subheading.)

Apps are an abstraction. I’m using that word a lot, so let me talk about what it means in this context.

The world of computers is very hostile to humans. At their core, these things are billions of transistors hooked up to one another and sets of input and output devices that create electrical signals that together manage to perform feats of computation unthinkable to humans of centuries prior. You or I cannot understand or create these electrical signals, so we don’t work with them. Instead we construct a world of “instructions” understood by what we call Central Processing Units, and represent them as binary commands that, when fed into these CPUs, the hardware “knows” how to execute.

These are pretty hard to work with as well, so on top of these instructions we craft an entire universe of higher-level programming languages that represents this still-inhospitable world in a way that we humans can at least wrap our heads around. At the level of Assembly Language, we still work with CPU instructions, but represented as English words instead of the computer’s binary; step higher and you might find yourself in the realm of C, working with functions and types that are still the computer’s, but that you might even begin to comprehend yourself; even higher and you might find yourself working with a language like Swift or (ahem) Ruby, a positively human-friendly fiction that attempts to hide away but cannot really conceal the unfriendly nature of the computer underneath.

And yet, no matter how far you go in crafting these fictions, at the end of the day they must all output the same things: binary instructions for the world of the computer. (I am skipping over subtleties like the differences between interpreting and compiling because they are not relevant to the user’s system image.)
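To make that concrete, here’s a minimal sketch of my own: the same trivial computation expressed at a couple of those levels. The function and the C and assembly shown in the comments are illustrative only; the exact machine code varies by compiler and target.

```swift
// A trivial computation at the human-friendly level. Swift's &+ is the
// unchecked (wrapping) addition, used here only to keep the generated
// machine code as small as possible for this illustration.
func add(_ a: Int, _ b: Int) -> Int {
    return a &+ b
}

// One level down, the C version is barely different in shape:
//
//     long add(long a, long b) { return a + b; }
//
// Compiled with optimizations for an arm64 Mac, both reduce to roughly
// two instructions (the exact output depends on the compiler):
//
//     add x0, x0, x1
//     ret
//
// And that assembly is itself just a human-readable spelling of the
// binary encoding the CPU actually executes.
```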

If you want to make the computer do something useful, at the end of the day, you have to put a long sequence of binary instructions together into an “executable”. Now, while this is itself an abstraction, it’s not a particularly friendly one for most people; so we abstract away the “executable” as an “application” with a nice icon and name that hides all the gnarly computer bits underneath.

Now, the point I’m trying to make here is that we don’t have the abstraction of “apps” because they are a model friendly to humans that we’ve imposed on computers; rather, it’s an attempt to make the inner workings of computers, a model imposed on us by them, understandable to our mere mortal minds.

So, what would a human-oriented mental model look like?

Documents

What do you actually use an app for? Most computer users aren’t creating their own executables to perform specific computations; they’re using pre-prepared applications made by other people to perform specific tasks.

Let’s look back at the original Macintosh as a reference point. In 1984, most documents were created on paper; “typing” referred to using the typewriter; the concept of an “Internet” was totally foreign to the average human being. Until a year prior, the most popular personal computer in the world had been the Apple II; its killer application? VisiCalc - the spreadsheet. It was into this context that Apple Computer introduced the Macintosh, a “computer for the rest of us”.

Let’s talk about VisiCalc for a second, because I think it’s very revelatory about the direction personal computers were taking at the time. Until VisiCalc, personal computers like the Apple II were mostly looked at as toys; a hobby for computer enthusiasts; most of the programs written for them were games. Until VisiCalc, spreadsheets were massive handwritten paper documents; updating a cell required humans to re-calculate every other cell that depended on its value; and that meant remembering where they were and what their formula was.

All of these calculations could be expressed, it turns out, as computations; a document like a spreadsheet could be serialized into a binary “file” that could be stored on hardware like a floppy disk; and a computer program could be made to work with these documents and present an interface to the human that, running on the computer, could greatly simplify the task of using a spreadsheet.

And so it came to be that the personal computer turned useful.

Now, fast-forward back to 1984, and the Macintosh. The core user interface of the Macintosh when it’s introduced is the “Finder”, and the objects manipulated are storage devices like disks, and the folders inside them, and the files inside those. Some of those files are applications; most of them are documents.

As a reminder, there’s no “internet” or “world wide web” at this time. So any content that was on your Macintosh was on a floppy disk; these disks were 3.5 inches in size, you could only have one in your computer at a time, and they could store a whopping 400 kilobytes each.

So using personal computers as a content distribution mechanism would have sounded pretty ridiculous at the time. Instead, the primary use for a Macintosh would be the creation and manipulation of your own personal documents, or, within an office setting, collaboration on documents stored on floppy disks.

The paradigms introduced in the Macintosh System Software reflect this, as do the applications that defined it. Apple’s own focus was on applications like MacPaint and MacWrite, and they got Microsoft to write Word and Excel for the Mac; the introduction of Aldus PageMaker, taking advantage of the Mac’s graphical interface, kicked off the desktop publishing revolution, like VisiCalc with the spreadsheet on the Apple II before it; and Adobe’s Photoshop and Illustrator brought user-friendly graphics manipulation to the personal computer.

All these applications used the menu bar, and all of them had a few basic menus that were about the same. For example, the “File” menu contained commands related to the manipulation of, well, files - the units in which the documents users work with are stored. These include items like “Open…”, “New”, and “Save” - which I think should all be pretty self-explanatory to anybody who’s used a desktop computer in the last 40 years. The “Edit” menu tended to include commands related to the manipulation of items within the currently opened document, such as “Undo”, “Copy”, “Cut”, and “Paste”.

When outside of any applications, the graphical shell of the Macintosh, Finder, was an interface to explore all these files you create and edit in your applications. The Finder let you move them into folders, rename them, delete them, open them, and more.

Now, the Macintosh was and is indeed an application-centric system. But this was out of necessity. The design of the system reveals that the unit that its creators thought users would and should think in terms of were their files - documents - the actual content they were focused on creating. The use of applications was purely driven by the fact that applications are what computers understand.

What really is an application, to the user? It’s the tool they use to manipulate the document they’re working on. But in real life, documents don’t live within their tools; you don’t open your notebook inside your pen to write on a page, do you? The rough equivalent on a computer is a Word document, which you open inside the application called “Microsoft Word” to edit.

As computers got more advanced, numerous experiments tried to re-orient the user experience around documents even further - radical reimaginings that tried to eliminate the concept of applications from users’ system image entirely, and less radical ones that simply tried to change the role of applications within the user experience.

Roads not taken

Jef Raskin, the originator of the Macintosh project within Apple, was ousted from the project after Steve Jobs, who had himself been ousted from the Lisa project, took over. The final version of the Macintosh which shipped in 1984 was very far from Raskin’s original vision; he thought Apple got it all wrong. So after leaving Apple, he decided to pursue his own original vision of what a humane personal computer should actually work like.

The result was the Canon Cat, a computer that did away entirely with what Raskin considered the inhumane interfaces of past computers marked by siloed apps and even filesystems where users had to create and remember a complex hierarchy of names.

Everything lived in one giant workspace and could be leaped to within its context; no remembering where you’ve saved a document, everything just continuously updated in your workspace, and to find anything, you just “leap” to whatever content you remember.

It’s a very utopian vision of computing. It’s also a vision that didn’t sell. Some roads go nowhere and reach dead ends.

I take it you may be wondering where exactly I’m going with all this. Don’t worry. So am I. The answers will reveal themselves in due time.

The Cat tried to free you from the app and the document itself, throwing everything into one giant workspace; that may have been too radical a leap for many computer users to bear. The intuitions we’ve developed in the physical world are still too accustomed to working on our documents as individual units; we manipulate distinct physical objects in the real world.

So when Apple took a bite at similar ideas, they didn’t do away with the document; instead, they reified it - even the name of the framework they created, OpenDoc, heavily centered the idea of the document. Let’s talk about OpenDoc and what it did.

When Apple created the Mac, they were never much in love with the “application” concept; for the reasons discussed above, Apple viewed these applications as a middleman between the user and their documents.

Let’s take a look at the Mac before OpenDoc. Bruce “Tog” Tognazzini was Human Interface Evangelist at Apple in 1990, when the company was first prototyping OpenDoc. Previously, he’d written the first version of Apple’s Human Interface Guidelines in 1978; in 1992 he published a book, Tog on Interface, that in his words “explored the central issues of human-computer interaction”, while “focusing on the Macintosh”. It’s in the final chapter, discussing Apple’s visions for the future of the Mac, that he identifies the fundamental frustration with the app metaphor; Apple’s ultimate goal with OpenDoc was a transition to a “plain-paper” metaphor that reflects how we, as humans, actually think about our work.

In his words (pages 282-284): “The current Macintosh environment had at its heart the application. Documents are created within an application and reflect the capabilities of that application. Some applications allow the importation … of pieces of other documents [from] other applications … Nevertheless, creation of complex documents on the Macintosh typically requires the use of several applications and many documents.”

Why is this bad? Well, “the context in which a computer user performs [their] work has historically been dictated by the needs of the machine. As we approach the era of widespread multitasking and multiprocessing, we have the luxury of rethinking decisions made in the era of low-power computers. Application-centered design is one such area of decision. Documents were typically created within tools called applications. Applications were able to stand alone. From the early machines’ point of view, this was ideal. Since applications didn’t need to interact, memory requirements were kept at a minimum and no complex memory management needed to take place — just the sort of scheme you want when you’ve built your computer out of several thousand vacuum tubes or a microwave oven control processor.”

That’s why we have application-centric design; so what does application-centric design impose on documents and humans? Alright, let’s hear Tog again: “Documents-within-tools can be likened to having to place your house inside a giant hammer so you can nail in a picture hook in the living room. Nevertheless, it has survived a surprisingly long time. The document-within-tool metaphor is for the sake of the computer, not the user.”

OpenDoc wanted to reverse the relationship between application and document; rather than documents that live inside applications, documents would be the main unit that users worked with, and applications would simply be tools within the documents that enable capabilities within them.

At its core was essentially a file format for compound documents assembled out of “parts” which were enabled by software components that were like the tools used to edit that part. What does Tog have to say about this? Let’s ask him. Oh, nice, there’s an answer on page 285: “Compound documents, in the supporting plain paper metaphor, do away with the application as the primary object and replace it with the document. Users need create only a single document to get their work done. Applications are replaced by (or simply relabeled as) tool kits, and tool kits can be called upon from within any document. With plain paper, most of the problems of application-centered context disappear: With all tools available within the document, suddenly the user can do his or her project ‘without ever leaving home’”.

That’s a radical vision, but an obvious extension of the original Mac’s principles; applications were incidental to your actual work, your documents, and existed only because the Mac was a computer. OpenDoc was one of the few big projects the flailing Apple of the 1990s actually shipped; unfortunately, due to Apple’s awful condition at the time, it gained essentially no traction outside of Apple.

OpenDoc died with Steve Jobs’ return, during that very famed 90s near-bankruptcy; at such a troubled period in its history, Jobs decided to refocus the company on a few core products instead, cutting projects like OpenDoc that he viewed as extraneous. Focusing on building Mac OS X and replacing their aging operating system arguably saved Apple.

Mac OS X and Jobs’ return to Apple would lead Apple to the iPhone 10 years later; a revolutionary device, to be sure, but one that has led Apple towards a very different paradigm than the Mac, and the radical vision they pursued with OpenDoc.

iPhones

iPhones are a rather different kind of device. They live in your pocket and you carry them around; they fit in your hand, and you also manipulate everything directly with your hands. They’re fairly small, and thus not particularly suited towards document manipulation or even multitasking. There simply isn’t enough screen space, and fingers are just too imprecise a tool for that kind of usage.

So instead, modal apps make the most sense in this paradigm — when you want to use your iPhone to accomplish a different task, you enter a different “mode” of usage — something actually roughly analogous to the user-facing “application” concept created to wrap executables.

The iPhone was unveiled to the world for the first time in 2007. This was quite a different world than the one the Macintosh entered in 1984. For one, information was now much cheaper to transmit at high speeds, and kept getting cheaper - no longer did it move on bulky floppy disks with minuscule storage; the internet changed all that, and the advent of the iPhone marked the dawn of the mobile internet, and the mobile web.

Can you see where I’m going with this? A device centered around modal apps that can cheaply and quickly receive information and media, but not easily create it; a form factor almost perfect for the consumption of that content; and a software paradigm that almost entirely puts different content in its own independent contexts. The iPhone is the ultimate content consumption device; possibly the only forms of media it’s actually good at creating are photographs and videos.

It’s also easily, by far, the most profitable device in Apple’s hardware lineup, and unfortunately, the direct ancestor of every other one since.

Back to the iPad

In 2010, when Apple released their first tablet computer, the iPad, they had a choice; they could have taken the software paradigm of Mac OS X, and adapted it to a touchscreen computer; or they could have taken the touchscreen-first software they already had - the iPhone OS - and simply given it a larger screen to work with.

They chose the latter option.

Modern-day iPads are, hardware-wise, every bit as capable as their Mac counterparts. They have keyboards and trackpads, allowing precision that touchscreens can’t offer; styluses add a form of creation neither Mac nor iPhone can accomplish; they can connect to external displays to give them a decent screen size, and the Pro models feature the same M4 chips that inhabit the MacBook Air.

But they’re crippled devices. Crippled by what? Crippled by conscious software choices made by Apple; choices that, if they could, they would apparently like to spread to their whole lineup, eventually consuming its oldest surviving member, the Mac itself.

Let’s talk about these choices. Easily the number one frustration for me with these devices is Apple’s insistence on making the App Store the only way normal people can install software on their devices, and thus making it the only possible avenue for developers to distribute their software. This is the kind of thing we should consider downright insane, but we don’t; ostensibly, it’s a security measure, but also one that conveniently gives Apple complete control over what software you use on a device that you own. And even more conveniently, it also creates an effective monopoly over software distribution; that means if you’re a developer who wants to get paid for your app, you have to go through Apple to sell it to users. The famed 30% cut Apple takes from App Store sales is really just extortion; there’s no other way to frame it.

A capitalist might try to defend Apple here, saying that Apple has a right to use a platform they own to make a profit in this way; this is bullshit. Your device is a device you own, and your decision to install software on it written by someone else, some developer, is an entirely consensual transaction between two parties - you and that developer; what right does Apple have to impose a fee on it? Apple takes on the same role as the tax-collecting government these very capitalists claim to hate. Unlike a government, Apple has no need, or even right, to collect such fees; there’s no social contract between Apple, its users, and developers, and they make more than enough money from hardware sales to fund themselves.

Consider, too, that in the year 1984 when the Mac was introduced, you would have been laughed out of the room for proposing a personal computer that did not allow its user to perform the computation they desired without someone else’s permission.

Next, I think we need to talk about the app paradigm used by the software that Apple does allow onto the iPad. It’s extremely far from document-centric. Now, sure, on the iPhone that makes sense; it is a real computer, but it’s not one with a form factor suited to creation. But on the iPad, it’s far more confusing. For starters, you have more screen space here; it’s actually kind of ideal for certain creative tasks like digital painting. With modern-day iPads, you even have keyboards and trackpads like traditional laptop computers; an Apple Pencil adds stylus capabilities that allow for precision. You can even connect them to external displays to add a big screen with high resolution.

Whatever’s crippling these devices, it’s not the hardware. It’s the software.

The iPad stupidly follows roughly the same paradigm as the iPhone. From its birth until now, it’s essentially been single-window; instead of a “Desktop” on which these windows reside together, you get a “SpringBoard” (homescreen) which launches you into apps and lets you switch between them, and exists in basically a separate context from the app windows themselves. I mean, even the items on the homescreen give the game away; the SpringBoard is full of app icons and widgets, while the Mac desktop is full of folders and files. In a minute or two we’ll get to Apple’s apparent preference for the former over the latter.

The original Mac philosophy — one which Apple tried to make reality against the constraints of computing — was one in which you live in your own work. Documents were first-class citizens — you could open them in any number of tools, shuffle them between folders, duplicate them, copy a chunk from one into another. You weren’t trapped in a particular mode of working.

Back when Apple hadn’t wholly embraced the app paradigm that limited computing resources had trapped them in, they actually tried to free the document from the tools even further; instead of just copying chunks from one tool into another, they tried to integrate the tools into the documents themselves with OpenDoc.

Now contrast this with the iPad philosophy; here in Apple’s dark ages, we see a wholesale embrace of an app paradigm that keeps you trapped in siloes; more often than not these siloes are feeds and storefronts rather than an entrypoint into creativity. On the iPad, you’re meant to live in someone else’s sandbox. Apps are silos, by design. They wall off your thinking, your making, your output, and they insist that you only interact with it through their interface, their menu structure, their “share sheet.”

Instead of your work being the anchor and the app being the tool, the app is the anchor and your work is incidental. That is the fundamental difference between the humane vision of the original Mac and the consumption-driven iPad world Apple has embraced. This is the fundamental flaw in the iPad philosophy. The hardware is extraordinary — M-series chips, gorgeous displays, Apple Pencil, trackpads, external monitors. Everything about the physical machine screams potential. But the software treats it like a glorified content vending machine. No matter how much silicon they pack in, no matter how close they inch the iPad to Mac-level power, the app-trap paradigm drags it down into passivity.

Especially recently, the process of this paradigm tainting macOS itself has rapidly accelerated. Apps like “TV”, “Stocks”, “Podcasts”, and “News” feel out-of-place here; they literally still feel like iPad apps but on the Mac. They are pure consumption; they’re designed to trap you in the context of the app in a world where everything within them could just be browsed in, well, a web browser.

Apple apparently has no direction for macOS, other than “towards iOS”.

So. Where were we? Oh, right, the state of macOS.

System Settings

In macOS Ventura, Apple replaced System Preferences, the program used to adjust various system preferences on the Mac since Mac OS X 10.0 in 2001, with a revamped app they called System Settings.

Why? Well, there really wasn’t any good reason for it — it’s a pretty stupid change, and we’ll get into that in a second. But Apple did it in order to bring the Mac interface closer to the iPhone/iPad interface, so that those more familiar with the iPhone/iPad could adjust to the Mac quicker.

A noble goal — except, of course, when you consider the point I’ve been trying to hammer in since the beginning — consistency across platforms breaks the consistency of each individual platform, and mixes conventions from paradigms that were never meant to mix!

So what did this rewrite do exactly? Ah, wait, yes, I forgot to talk about something else earlier when I was going into Catalyst. In 2019, in macOS Catalina and their other operating systems, Apple also introduced a new framework for building user interfaces: SwiftUI.

SwiftUI is cross-platform — an app using SwiftUI can be compiled for all of Apple’s platforms, from watchOS to macOS to visionOS, and “naturally adapt” to each platform’s native conventions… yeah, right. It follows iOS conventions everywhere, which is mostly fine on pretty much every other one of Apple’s platforms, seeing as Apple has basically embraced the iOS philosophy for everything else… but is not fine on the Mac. It’s also mind-numbingly slow on macOS compared to UIs written with the “old” AppKit framework that the Mac has been using since NeXTSTEP.

Apple also clearly views SwiftUI as their “modern” framework for building UIs. It’s what they want you to use everywhere if you’re building a new app — and they want you to make the same app run across all their platforms, iPhone to Mac. So, naturally, they write a lot of their own apps in SwiftUI. And boy, it shows.

The System Settings from macOS Ventura onwards have been rewritten using SwiftUI. This has led to the strange phenomenon of one of the most core apps on the platform — the System Settings themselves — feeling like a non-native app. Among other things, checkboxes, a hallmark of Mac UI, have been replaced by weird iOS-style slider toggles that look fine on a touchscreen but wildly out of place on a desktop computer.
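To make the toggle complaint concrete, here’s a minimal SwiftUI sketch of my own (the setting name is made up; this is obviously not Apple’s actual System Settings code). Both control styles exist on macOS, and choosing the iOS-style switch over the traditional checkbox is a one-line decision.

```swift
import SwiftUI

// A hypothetical settings pane showing the two toggle styles side by side.
struct ExampleSettingsPane: View {
    @State private var reopenWindows = true

    var body: some View {
        Form {
            // The traditional Mac control: a checkbox.
            Toggle("Reopen windows when logging back in", isOn: $reopenWindows)
                .toggleStyle(.checkbox)

            // The iOS-style switch that System Settings favors.
            Toggle("Reopen windows when logging back in", isOn: $reopenWindows)
                .toggleStyle(.switch)
        }
        .padding()
    }
}
```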

I really do not want to repeat a load of criticisms which are already well-written up elsewhere, so here’s Ars Technica’s take. It’s solid, and hits most of the key frustrations. The seemingly arbitrary burying of settings several layers deep is pretty irritating, for one. The sidebar-operated navigation also bugs me. But the worst sin of the rewrite is in its very intentions. Making the Mac more friendly to iPhone & iPad users is a noble goal, yes; sacrificing the Mac itself on the altar of iOS is not.

So, System Settings. One more “little thing” that makes the Mac just that much more unpleasant. Not a big deal on its own, but another little cut.

I mean, okay then. What else did Apple get up to in that update?

Stage Manager

The state of macOS window management is… not great. Basically, every single window management feature has at least two — sometimes three or even four — competing implementations that all work slightly differently and do not interact with each other.

Apple has been trying things — a lot of things. The trouble is, they haven’t found a willingness within themselves to commit to any of them. The last time they truly made a serious change to the way window management works on the Mac was back in 2011 with Mac OS X 10.7 Lion.

In Lion, they merged Exposé — the mode that provides a bird’s-eye view of all the windows on your desktop — with Spaces — the Mac’s virtual desktop implementation — into one interface they called Mission Control. Mission Control is a very strong and pleasant interface to use. It’s not very customizable or flexible, but it uses that as a strength in order to enable high consistency. The trackpad gestures implemented for Mission Control feel super smooth and make it feel like a completely natural way to multitask.

Mission Control traded away a lot of the power and flexibility of the preceding implementation of its multitasking features — such as the “grid” Spaces used to be arranged in, replacing them with one horizontal strip — in order to enable a simplicity that makes it feel natural. And in that case, it worked great. Mission Control is good. It articulated a clear philosophy of where Apple wanted to take multi-window multitasking on the Mac desktop. Each desktop is its own dedicated workspace with its own windows — minimized or active — and they inhabit a fluid, easily switchable and expandable strip of workspaces.

That wasn’t the only piece of Apple’s multitasking vision introduced in Lion. Since Mac OS X’s introduction in 2001, Mac windows have had three “traffic light” icons in the upper left corner. Of these traffic light icons, the green-colored one is the only one to ever have changed its function. From 2001 to 2011, clicking that button instructed a window to “Zoom” its contents to fit. This Zoom function was implemented by various apps in different ways — Finder windows, for example, changed the size of the window to fit their contents, which is actually what Apple generally intended it to mean. In practice, it was largely implemented as window maximization, where the window expanded to fill the space available to it on the desktop.

In Mac OS X Lion, a different approach to having windows fill the screen was introduced while retaining the existing Zoom function — an experiment. This new approach was called “full-screen mode”, and was largely inspired by the way apps take over the screen on your iPad. Rather than simply maximizing its size to fill your desktop, the app gets its own dedicated Space in Mission Control where it completely fills the screen and takes over.

This fit right into the new multitasking model introduced in Lion, which tried both to simplify switching between lots of concurrent workspaces and the contexts within them, through its consolidation of Exposé and Spaces, and to make it easier to focus on a dedicated task with a dedicated workspace, through a full-screen mode that creates a separate context, free of the clutter of your desktop, for each full-screened application.

In OS X Yosemite, in 2014, the green traffic light’s function switched from zooming to full-screen, which had originally had its button placed far away in the upper-right-hand corner of the window. The zoom functionality remains within macOS and can be accessed from the Window menu, and, if you have it configured this way, by double-clicking window title bars.

And the next year, in OS X El Capitan, Apple made the full-screen mode more useful by adding Split View: the ability to have two apps take up half the screen each in a space together, enabling the same focus as full-screen mode with a limited bit of multitasking.

It is not a perfect philosophy, of course, and it may not fit everyone’s use case, but it’s brave and committed to itself. It tries something and is willing to say “this is a model for multitasking that is both simple and adaptable to various usecases”. And so Mac window management existed for many, many years, and it was good.

For the past few years, Apple has been experimenting again. Let’s see how that’s been going, shall we?

In macOS Ventura, Apple introduced a new window management feature called “Stage Manager”. Stage Manager groups your open windows into “stages”, and then lets you switch between active stages through a kind of vertical dock-like apparition. It’s a rather interesting model for window management, honestly.

Stage Manager articulates its own distinct model for multitasking: visible window groups that collapse into a sidebar where your most recent ones are easily accessible, essentially combining both the concept of workspaces and minimizing windows into one interface. When introducing Stage Manager on the Mac, Apple also added the feature to the iPad as its first form of multi-window multitasking ever. The weirdness of that aside — we’ll get into iPads again in a later installment — the Mac implementation of the feature is hopeless due to its inability to commit to its own metaphor.

You see, the “stages” metaphor has a lot of potential: it’s clearly an idea that Apple thinks deserves a shot as a model for multitasking. Its main trouble is that it’s shoehorned into an existing multitasking model that it doesn’t interact well with, and thus breaks both its own metaphor and the existing one.

What’s rather interesting is that on the iPad, Stage Manager superseded no existing multi-window system; it simply became a toggle for whether you want to keep using the existing single-app paradigm, or switch to a new stage-based multitasking one. There is no “Mission Control” or “minimized windows” for the iPad version to clash with. It’s free.

The Mac version, on the other hand, is not so lucky. Its “stages” are extra multitasking features within the existing “spaces” metaphor, which they don’t replace; instead they just kind of sit side-by-side, stages-within-spaces. They’re not integrated particularly well; it’s not like Mission Control shows your windows grouped by stage or anything. They’re just two totally separate multitasking models that happen to coexist.

Normally, Mac windows minimize to the Dock. When Stage Manager is on, your existing Dock-minimized windows remain in the Dock; but if you try to minimize any new ones, they simply collapse into their own stage. Except it only looks like that; if you click on the minimized window stage, instead of switching to a different stage like it always would elsewhere, it simply gets restored into the stage it came from, despite looking like a distinct one in the sidebar. If you restore a minimized window from the Dock, you can’t put it in the Dock again until you turn off Stage Manager; instead, it becomes part of the stage-based system. The difference between a minimized window and a separate stage is totally invisible to the user. The result is a tangle of metaphors: Dock windows, minimized windows, “stages” that sometimes act like spaces and sometimes don’t. It’s hopelessly muddled and confused.

So that’s Stage Manager’s deal. Apple has no courage; if they did they would simply turn Stage Manager into a totally separate mode that replaces existing multitasking features, instead of its current awkward coexistence and non-collaboration. It would still be a toggle between that and existing methods; but just don’t force them to coexist! Show some commitment to a new metaphor if you’re going to try it.

Jump ahead to macOS Sequoia; in 2024, Apple introduced some cool new window management features, inching closer to feature parity with Windows by adding window tiling. The tiling features are honestly pretty good, if rather finicky. Apple did the right thing by introducing them.

Why am I complaining, then? Well, remember the earlier existing features we talked about? For maximizing an application, we had Zoom, which technically isn’t the same thing as maximizing but was basically implemented as such, and full-screen, which worked within the Mission Control multitasking paradigm. Split-screen tiling was also possible through the full-screen mode.

Now, we have “Fill”, which works almost like Zoom except by default it leaves a little border around the window. For those keeping track, that’s three different ways to maximize a window which all have nothing to do with each other and work differently. We now also have an additional method of split-screening with the tiling and snapping features introduced in Sequoia.
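For what it’s worth, the older two are already distinct, unrelated calls at the AppKit level. A tiny sketch of my own (the function name is made up; Fill isn’t shown here):

```swift
import AppKit

// Even at the AppKit level, these are unrelated actions on NSWindow:
// zoom(_:) performs the old Zoom/maximize behavior in place on the
// current desktop, while toggleFullScreen(_:) moves the window into
// its own dedicated full-screen Space.
func demonstrateMaximizeStyles(window: NSWindow) {
    window.zoom(nil)                 // classic Zoom (in practice, "maximize")
    // window.toggleFullScreen(nil)  // Lion-style full screen, a separate Space
}
```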

This is a state of decay. Languages left behind from eras past try to share sentences with half-baked ones of today; to use the computer becomes archaeology, layer after layer of abandoned metaphors still half-alive under the glass.

Next Up

Alright. We’re 7802 words in. I think this is a good place to stop this one. We’ve gotten a good look at where macOS was a bit over a month ago before Tahoe’s release. You’ve seen a lot of the disparate pieces of this malaise; you’ve gotten the context of the iPad’s influence on the Mac; now it’s time we see where this is all going.

Next time we’ll be looking at Apple Intelligence and the Vision Pro before jumping back into an examination of macOS Tahoe. Don’t quote me on that, though. My plans change all the time. That’s it for today.

Jef Raskin’s fundamental goal with the Macintosh was computing that was humane. How would he look upon the Mac, and the Apple, of today?
