The Last Human Milestone

Do you feel that? That creeping sensation that something isn’t quite right?

  • Maybe it’s when text buffers in multiple Windows apps randomly stop accepting text until you reset the application state – like pressing “Save” or restarting the app. (Maybe, especially, it’s when this happens on multiple computers in different apps on the same day – so you know the issue isn’t limited to one machine.)
  • Maybe it’s when Outlook on Android takes you to the last email you received instead of the newest one when you open the notification for the new one.
  • Perhaps you feel it when you are reading over some text a colleague wrote and everything is making sense until, out of nowhere, there is a wild inaccuracy on some small but foundational thing.
  • Perhaps it is when Evernote opens the wrong note when you click on the one you wish to open, and you have to click twice and wait a moment for the application to sort out with the database what the heck you’re actually looking for.
  • If none of these, maybe it’s when you’re looking over an infographic and you see some small thing that doesn’t make sense, like a line that randomly ends before it should, or some garbled text.

These are basic problems: how to handle a text buffer; which record to open when a user clicks on its representation in the UI; whether the company you work for is a Gold or Platinum (or Golden?) Business Partner; how to draw a line from Point A to Point B; and so on. Until recently, they had long been solved. Imagine the creeping dread, then, when they all happened to me yesterday.

Thinking about it at the end of the day, writing in my old-fashioned paper journal, I wondered: Did AI do this?

I suspect the answer to this question is yes. I know AI is responsible for the problems with writing and infographics, and I strongly believe it is responsible for the application bugs, but even if it isn’t, walk down this road with me a little ways. I think it leads to an interesting place.

Consider a few signals in the noise. First, AI coding agents are pushing and pulling GitHub so hard that the service is regularly crashing. Microsoft is working hard to dedicate infrastructure to the new demand, I’m sure; but the demand will remain, with AI users stressing the platform harder than ever before, for as long as the current craze for AI agents persists. Second, every major software vendor is moving toward (or has already arrived at) an AI-first approach to coding. These companies aren’t doing this because they want more code, or at least that’s not the main point. What they really want is fewer people writing the code they plan to ship to customers, so they can deliver the same level of service at reduced cost.

This means that those engineers who remain have fewer colleagues to check their work and fewer incentives to write their own code. It means that the code they have to review was mostly produced by an opaque mathematical algorithm attempting to predict what a human mind might do with a given problem, not by a human mind itself. It means that the machine they have to reason with when there is a problem with the code is trained to agree with them and then try the problem all over again. It doesn’t learn anything; it simply chirps a response that is likely to work. And, crucially, the response does work most of the time. It’s a miracle. The machine generates code that works; it’s nearly instantaneous; it passes all the tests; and it made the engineer feel good along the way. No one really needs to understand the code, because it works and can be followed superficially. And no one needs to own it, in the intellectual sense, the way an author owns their text or a mechanic owns their machine, because they can’t. They didn’t make it. They didn’t twist the wrenches.

Now put it all together. AI is writing more code than humans, it is shipping in major applications that we use every day, and no one really understands the myriad ways that it interacts with the other code in the application (or the operating system) because there are fewer hands on deck at the major software companies to check it.

A similar problem exists in the vast universe of texts which shape and condition our world. Consider a few examples of how this can go wrong in a software implementation project.

  • A resource-strapped government agency requests proposals for a new system and uses a combination of AI-generated text and copy-pasted text from the last proposal to create the new one. These may or may not make sense.
  • The proposer uses AI to generate their response. It gets most things right, but introduces some strange errors which misrepresent the solution. These are buried deep in the text, and it all sounds plausible, so the human reviewer misses the error.
  • The government agency uses AI to summarize the proposals. It may even ask AI to make some recommendations about the best ones. These machines incorporate the errors into their review and summarize or recommend accordingly.
  • People on both sides of the deal use Google AI overviews to research their answers to questions that come up during negotiations. These are often right, but the devil is in the details, and the AI overview doesn’t understand the context because it’s a prediction model.
  • AI creates the training manuals for the solution. The algorithm cannot generate text from what was actually built, only what is in the training data. A person must bridge the gap. Do they know the difference? Do they have the time, energy, and motivation to care?

I know that software code and the projects that put it in people’s hands have always been full of bugs and hampered by problems like this. The difference, I wish to suggest, is that someone in the past was intellectually accountable for these problems. A person wrote the code; a person wrote the document; a person mastered the material to the best of their ability, thought about it, and then formulated an argument as to how it should be applied to the world.

Software bugs emerge from strange circumstances. A person, while thinking through the problem, has time to consider these circumstances. If they’re writing code they will try and fail, try and fail, again and again and again, until it works. Sometimes they will encounter the circumstance directly. Sometimes they’ll think about it when they get up to use the restroom. The point is, every time they fail, they have to think about the problem; and they have to keep thinking about it until the problem is solved. It takes time, and it takes pain, but when they come out on the other side they have understood it, by God, and they can be accountable for it.

Writing is the same. A writer sits down at the keyboard or the legal pad with a vague idea, like “I think AI is breaking the world,” or “I think the French Revolution was caused by the frustrated bourgeoisie,” or “I need to explain how to use this software,” and then they have to struggle, word by excruciating word, to explain themselves. They have to read and take notes and think; and then like a coder they have to try and fail, again and again, try and fail, to read the right stuff and take the right notes and put words and evidence together in a way that will explain the idea. Along the way they will have thought about it so much that they likely never want to think about it again. But they own it. Ask them about it and they can answer you in detail.

That world is gone.

There is a strong possibility, therefore, that we are standing on the brink of a giant and terrible pit of despair that is likely to break more than just the text box on Microsoft Teams. The problems are going to proliferate, and then they are going to cascade. Systems will get worse. Some of the problems will not be revealed until people die as a result. No one will really know how to correct the bugs because no one will have owned the code in which they were introduced. When an engineer delves into the problem, fixing the bugs will be like playing whack-a-mole, because Function X was written by Claude Code using Opus 4.5 and interacts with Functions Y and Z, which were written by an engineer in 2011, who is now retired, and Claude Code running Opus 4.7, respectively; and all three were included in code that was re-written in a more efficient language by another AI agent last quarter as part of an attempt to gain some memory overhead on the virtual machines running the app. It’s a nightmare, and production can’t simply stop while they figure it all out. The AI agents will be making code changes along the way because the managers have to keep up with their KPIs.

That’s just software. Books, music, art, and movies are another thing. All of them are worth the same thought experiment.

Faced with a deteriorating product experience and unhappy customers, companies will have to make a difficult choice: keep patching the bugs while shipping new features and hoping for the best, rip out the problematic AI-generated code and let humans rewrite it, or blow up the code and simply start over.

In any event, I think smart companies by the end of this year will be thinking about the Last Human Milestone. Think of that as the last point when humans wrote all of the changes shipped in the product. For many companies the LHM is now almost five years ago. But even then, in 2021, early users of GitHub Copilot were still mostly in the pilot’s seat. The tool could generate some code, but the human had to own the overall solution. For most companies, I think the LHM was probably in late 2025. This is when Claude Code exploded and the rhetoric around AI coding agents shifted from probability to inevitability. Capital follows rhetoric, and we are now, I believe, starting to see the results.

Yesterday it was a creeping sensation, a few momentary glimpses into the void. Tomorrow that creeping sensation may burn like sciatica. The Last Human Milestone will be a starting point for the difficult decisions we all must face. Can we go back?

(Update, the next day: Here is newspaper columnist Dave Barry struggling to convince the Google AI Overview LLM that he is still alive. Notice how it gets more and more wrong with each attempt to solve the problem.

Update, the next, next day: “I’m going back to writing code by hand,” one developer writes, because “AI writes features, not architecture. The longer you let it drive without constraints, the worse the wreckage gets.”)

Adobe Acrobat is a Hot Mess of Ads (and it’s not alone)

I pay $20 a month for the “privilege” of editing PDFs.

I understand there are other solutions that allow me to do this for free or at a fraction of the cost. I’ve tried many of these over the years and found that Adobe’s solution works best for me as a production tool. Having reached that conclusion, I don’t mind paying for it.

Lately, however, Adobe is making it hard for me to continue paying this fee. Every time I open the app, close the app, or even just move the mouse to the wrong portion of the screen, I am bombarded with advertisements.

First there is the startup ad. The first time I open the app, every day it seems, I’m presented with a popup detailing new features I might be interested in trying. I must engage with this ad, either positively or negatively, to proceed. It’s like a little toll my brain must pay to start working with PDFs.

(Note: I had already cleared the irritating popup which prompted this post yesterday before I had a chance to grab a screenshot. I knew that if I came back today I would get a new one, and bingo! there it was.)

Check out this fun popup that I’m paying to see!

Thankfully I don’t need to close an ad like this every time I open the app for the rest of the day, but every time I open a document, I can be certain that another dialog recommending an AI summary will appear at the top of the screen. Let’s leave for another day the question of whether an AI summary is good for me, good for society, or whatever. Today I am irritated by the simple cognitive labor I have to do every time I open the app to work, to learn, or even just to read for fun. This dialog doesn’t obscure the document, but it consumes valuable real estate on my screen that I often can’t afford to give up. I have to think about it instead of what I’m reading.  

Here’s a super-cool dialog that is just big enough to be a distraction. Yay!

After I’ve closed this dialog, I’m still not done dealing with distractions. If I make the mistake of moving the cursor to the bottom of the screen, another dialog appears. Not only does this dialog require another little jolt of cognitive labor to acknowledge and clear the distraction, it creates a slight disincentive against moving the cursor while reading. Worse, this one obscures a portion of the document for a second or more after I move the mouse away from the Hot Zone.

This thing… this thing just really gets under my skin.

There’s something else about this dialog that drives me crazy. It activates a feature that is already controlled by a button at the top of the screen.

Here is the button that is supposed to activate an AI Assistant dialog like the one (but not the same one?) that automatically opens at the bottom of the screen when I move the mouse to the wrong place. It’s got fun colors!

If I wanted to use this feature, I would click the brightly-colored button at the top of the screen! This drives me crazy because the application shouldn’t just execute a command on my behalf – especially not when it has recommended the feature on startup and then reminded me of its existence again and again with popups, dialogs, and colorful buttons. Don’t treat me like a stubborn child who needs to be forced to eat his vegetables. You say you don’t like it, Adobe asks, clearly the wise adult in this exchange, but have you even tried it?

What an insult.

On this computer I pay the bills. If I want to use the damned feature I’ll damn well click the damned button.

This insult poses a philosophical challenge as well. Ask yourself: when is it OK for a machine to operate itself? The deal we’ve made with machines is simple: operators should be the ones operating them. The machine should not operate itself unless the operator has instructed it to do so, or unless failing to perform an operation would risk injury. When it executes a command on its own, the resulting operation should be limited in scope and duration.

Perhaps a car offers some good analogies. In my car, the headlights turn on automatically when it gets dark because I’ve turned a switch—that is, issued a command—for them to operate that way. If I don’t turn the switch, they don’t turn on. The radio doesn’t randomly change stations to introduce me to new ones (yet). It doesn’t turn on at all unless I press the button. The engine doesn’t change to Eco Mode automatically when I cross the border into a new state. The things that do operate without my explicit command, such as the automatic door locks, do so because the risks associated with error are grave. If I don’t lock the door, it may open in a crash. You can imagine the consequences. I’m willing to hand over a little piece of my autonomy to the machine here.

Does this example of remote execution, this magic AI Assistant dialog, pass that test?

In my most uncharitable moods (like the one shaping this blog post) I think about how failing to click the “Ask AI Assistant” button threatens the careers of all the managers who are responsible for driving user adoption of AI at Adobe. I suspect that Number of Impressions—that is, eyeballs on the AI Assistant feature—is a KPI they can boost by displaying this dialog at the bottom of the screen when I move the mouse down there. When I’m in these dark moods I think that’s a dirty trick to pull on me. It’s especially low down when I’ve been kind enough to allow you to reach into my bank account and automatically withdraw $19.99 every month.  

Believe it or not, we’re not done with adverts yet. After capturing the screenshots for this post, I clicked the OS window control to exit the application and close the window. To my amazement, the popup below appeared because I tried to exit without saving the document. Unlike the magic AI Assistant dialog, this could have been helpful! Alas. Rather than simply prompting me to save my changes, some manager at Adobe thought this would be another fantastic opportunity to sell me on a product feature by using dark patterns to drive my behavior. “Share for review” is bright and welcoming. Simply press Enter, it suggests, and turn on the light. And that WhatsApp logo is a big green light saying Go, Go, Go. “Save without sharing,” in contrast, is dark and foreboding, like the mouth of a cave—clearly a button for dullards and dimwits to press so they can stay in the Dark Ages.

They’ve got you coming and going. I pay for this.

Adobe isn’t alone here. Companies are taking these liberties too often. Just today, for example, Teams informed me when I started the app that there was a brand-new Copilot feature for me to try. I have to use Teams for work, so I spend a huge portion of my life—like it or not—staring at this application. I didn’t ask for this. I didn’t opt in, and I can’t opt out. My employer didn’t request the feature. But, nonetheless, there it is. A group of managers and devs forced me and millions of others to just live with this thing for eight or more hours per day and hundreds or thousands of dollars per year. And if we don’t care about the feature enough to click on it, they’ll find new ways to remind us that it’s there. I expect to see more popups, more nudges, brighter colors, shimmering icons, and other ruses from the big bag of user psychology tricks reminding me to Try Copilot! until the next KPI comes along that incentivizes Microsoft to arbitrarily and unilaterally change the app again and surface new features.

Adobe ain’t alone. This thing I didn’t ask for had a “helpful” little popup to announce its arrival as well.

I see this happening every day in web apps, mobile apps, desktop apps, even the operating system itself. And before you swing your boots up into the stirrups of your high horse, I know I can use Linux to avoid most of this. I know I can use open source tools. I’ve used Linux as a daily driver on my personal machines since 2007, and I was using open source apps before that. It doesn’t matter. If I want to put food on my table I have to use these products controlled by Microsoft, Adobe, Apple, Google, Esri, Autodesk, and all the other companies who do these short-sighted, authoritarian things to try to alter my behavior and shape my daily existence. I can’t escape it, and neither can you.

But still, if Adobe could chill with the ads in Acrobat, even just a little bit, that would be nice. Until then, I’ll be over here closing popup adverts and keeping my cursor at the top of the window.

(Edit 2/11/2026: It pleases me immeasurably that this post seems to attract bots for SEO and Sales blogs trying to build an audience. Keep those likes coming. Irony is an artifact of the past. -CBC)