Adobe Acrobat is a Hot Mess of Ads (and it’s not alone)

I pay $20 a month for the “privilege” of editing PDFs.

I understand there are other solutions that allow me to do this for free or at a fraction of the cost. I’ve tried many of these over the years and found that Adobe’s solution works best for me as a production tool. Having reached that conclusion, I don’t mind paying for it.

Lately, however, Adobe is making it hard for me to continue paying this fee. Every time I open the app, close the app, or even just move the mouse to the wrong portion of the screen, I am bombarded with advertisements.

First there is the startup ad. The first time I open the app, every day it seems, I’m presented with a popup detailing new features I might be interested in trying. I must engage with this ad, either positively or negatively, to proceed. It’s like a little toll my brain must pay to start working with PDFs.

(Note: I had already cleared the irritating popup which prompted this post yesterday before I had a chance to grab a screenshot. I knew that if I came back today I would get a new one, and bingo! there it was.)

Check out this fun popup that I’m paying to see!

Thankfully I don’t need to close an ad like this every time I open the app for the rest of the day, but every time I open a document, I can be certain that another dialog recommending an AI summary will appear at the top of the screen. Let’s leave for another day the question of whether an AI summary is good for me, good for society, or whatever. Today I am irritated by the simple cognitive labor I have to do every time I open the app to work, to learn, or even just to read for fun. This dialog doesn’t obscure the document, but it consumes valuable real estate on my screen that I often can’t afford to give up. I have to think about it instead of what I’m reading.  

Here’s a super-cool dialog that is just big enough to be a distraction. Yay!

After I’ve closed this dialog, I’m still not done dealing with distractions. If I make the mistake of moving the cursor to the bottom of the screen, another dialog appears. Not only does this dialog require another little jolt of cognitive labor to acknowledge and clear the distraction, it creates a slight disincentive against moving the cursor while reading. Worse, this one obscures a portion of the document for a second or more after I move the mouse away from the Hot Zone.

This thing… this thing just really gets under my skin.

There’s something else about this dialog that drives me crazy. It activates a feature that is already controlled by a button at the top of the screen.

Here is the button that is supposed to activate an AI Assistant dialog like the one (but not the same one?) that automatically opens at the bottom of the screen when I move the mouse to the wrong place. It’s got fun colors!

If I wanted to use this feature, I would click the brightly-colored button at the top of the screen! This drives me crazy because the application shouldn’t just execute a command on my behalf – especially not when it has recommended the feature on startup and then reminded me of its existence again and again with popups, dialogs, and colorful buttons. Don’t treat me like a stubborn child who needs to be forced to eat his vegetables. “You say you don’t like it,” Adobe asks, clearly the wise adult in this exchange, “but have you even tried it?”

What an insult.

On this computer I pay the bills. If I want to use the damned feature I’ll damn well click the damned button.

This insult poses a philosophical challenge as well. Ask yourself: when is it OK for a machine to operate itself? The deal we’ve made with machines is simple: operators should be the ones operating them. The machine should not operate itself unless the operator has instructed it to do so, or unless failing to perform an operation would risk injury. When it does execute a command on its own, the resulting operation should be limited in scope and duration.

Perhaps a car offers some good analogies. In my car, the headlights turn on automatically when it gets dark because I’ve turned a switch—that is, issued a command—for them to operate that way. If I don’t turn the switch, they don’t turn on. The radio doesn’t randomly change channels to introduce me to new stations (yet). It doesn’t turn on at all unless I press the button. The engine doesn’t change to Eco Mode automatically when I cross the border into a new state. The things that do operate without my explicit command, such as the automatic door locks, do so because the risks associated with error are grave. If I don’t lock the door, it may open in a crash. You can imagine the consequences. I’m willing to hand over a little piece of my autonomy to the machine here.

Does this example of remote execution, this magic AI Assistant dialog, pass that test?

In my most uncharitable moods (like the one shaping this blog post) I think about how failing to click the “Ask AI Assistant” button threatens the careers of all the managers who are responsible for driving user adoption of AI at Adobe. I suspect that Number of Impressions—that is, eyeballs on the AI Assistant feature—is a KPI they can boost by displaying this dialog at the bottom of the screen when I move the mouse down there. When I’m in these dark moods I think that’s a dirty trick to pull on me. It’s especially low-down when I’ve been kind enough to allow you to reach into my bank account and automatically withdraw $19.99 every month.

Believe it or not, we’re not done with adverts yet. After capturing the screenshots for this post, I clicked the OS window control to exit the application and close the window. To my amazement, the popup below appeared because I tried to exit without saving the document. Unlike the magic AI Assistant dialog, this could have been helpful! Alas. Rather than simply prompting me to save my changes, some manager at Adobe thought this would be another fantastic opportunity to sell me on a product feature by using dark patterns to drive my behavior. “Share for review” is bright and welcoming. Simply press Enter, it suggests, and turn on the light. And that WhatsApp logo is a big green light saying Go, Go, Go. “Save without sharing,” in contrast, is dark and foreboding, like the mouth of a cave—clearly a button for dullards and dimwits to press so they can stay in the Dark Ages.

They’ve got you coming and going. I pay for this.

Adobe isn’t alone here. Companies are taking these liberties too often. Just today, for example, Teams informed me when I started the app that there was a brand-new Copilot feature for me to try. I have to use Teams for work, so I spend a huge portion of my life—like it or not—staring at this application. I didn’t ask for this. I didn’t opt in, and I can’t opt out. My employer didn’t request the feature. But, nonetheless, there it is. A group of managers and devs forced me and millions of others to just live with this thing for eight or more hours per day, at a cost of hundreds or thousands of dollars per year. And if we don’t care about the feature enough to click on it, they’ll find new ways to remind us that it’s there. I expect to see more popups, more nudges, brighter colors, shimmering icons, and other ruses from the big bag of user-psychology tricks reminding me to Try Copilot! until the next KPI comes along that incentivizes Microsoft to arbitrarily and unilaterally change the app again and surface new features.

Adobe ain’t alone. This thing I didn’t ask for had a “helpful” little popup to announce its arrival as well.

I see this happening every day in web apps, mobile apps, desktop apps, even the operating system itself. And before you swing your boots up into the stirrups of your high horse, I know I can use Linux to avoid most of this. I know I can use open source tools. I’ve used Linux as a daily driver on my personal machines since 2007, and I was using open source apps before that. It doesn’t matter. If I want to put food on my table I have to use these products controlled by Microsoft, Adobe, Apple, Google, Esri, Autodesk, and all the other companies who do these short-sighted, authoritarian things to try to alter my behavior and shape my daily existence. I can’t escape it, and neither can you.

But still, if Adobe could chill with the ads in Acrobat, even just a little bit, that would be nice. Until then, I’ll be over here closing popup adverts and keeping my cursor at the top of the window.

“Algorithms”

TikTok is the future of web browsing. You won’t surf the web; it will be served to you “algorithmically” instead. After a while you’ll be served the content you want and it will feel like it was your idea all along.

AI is the engine to do this. The AI feed will repackage the web, all of the books, all of the recorded audio, and all of the video (which it has already consumed) and deliver it in a feed. You will open the browser and the content will appear. You will scroll and new content will appear.

This is basically the Facebook News Feed or TikTok FYP, but there is a crucial difference. Content there still leads users away from the source. People make the content (or prompt it); people (or their scripts) post the content. They need you to click on it and they want you to follow them off the feed. It’s a dialectic twisted around revenue. Facebook and TikTok want you to keep scrolling so your eyeballs roll over their ads, but Facebook and TikTok need content from users to keep you coming back. Creators who post there want you to click on their content so your eyeballs roll over their ads (or you send them money directly), but they need Facebook or TikTok to put your butt in the seat.

The AI Feed will certainly be burdened by its own internal contradictions, but it will escape this dialectic. Users will stay on the feed because it can endlessly generate content in a way that makes them feel like they’re unique, living on the cutting edge of information, and in control. Creators may post on their own sites (like this one!) but, lacking the “algorithm” and the network effects of a major platform, they will labor in obscurity. Further, the Feed will just consume their content and repackage it.

Maybe the Small Web will come back. Maybe print media will come back. I’ve explored both of those ideas in this blog at many points in the last ten years. Or maybe the AI Feed will be amazing. Who knows? The only certainty is change, change, change.

Ask and ye shall receive

lol

I know chatbots aim to please, but this was not on my AI bingo card.

Read the rest here.

How many of these emergent problems will have to be solved before the product is secure? And even if we could think of everything, how many entirely new problems are these tools creating which will, in turn, also have to be solved?

[Edit: I finished reading the rest of the post, which I’ve just linked again, and holy shit.]

Dreamtime in Real-Time

Watching people ask ChatGPT questions about technical matters in which I am a sort of expert, and then present the hallucinations back to me as facts in real time, is a lot of fun. Does this happen to you?

I lifted the quote below from this bruising and well-deserved critique of GPT-5. The author of that post takes it from the original tweet here.

With LLMs it’s always the same problem. They don’t know the answer; they just know how to run an input sequence through complicated functions that predict the next word in an output sequence. The quote below from this excellent article puts it much better than I can.
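
If it helps to see the shape of that loop, here is a toy sketch in Python, with a simple bigram counter standing in for the model. Everything in it is made up for illustration; real LLMs condition on long contexts with deep networks, but the generate-by-appending loop is the same basic idea.

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram counter standing in for a real LLM.
# The loop is the point: condition on the sequence so far, pick a
# likely next word, append it, and repeat. (The corpus is made up.)

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    out = [start]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break  # this toy can stall; a real model always has a next token
        out.append(candidates.most_common(1)[0][0])  # greedy next-word pick
    return " ".join(out)

print(generate("the"))  # fluent-looking output, with no "knowing" involved
```

Note that nothing in that loop checks whether the output is true; it only checks what tends to come next.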

Watch it go

Facebook has been a bad product since the introduction of the News Feed, but the switching costs have always been high, and it was optimized for stickiness by some of the best engineers our universities could produce. The dual onslaught of Groups, which incentivize low-effort/high-engagement content, and AI junk, which sometimes checks both of those boxes just right, makes it an even worse product.

It’s still sticky, because we’re all still here, but will that last? Will it last when most of the posts I see give me zero value? Will it last when groups, which are weighted so heavily in the feed, are cesspools of AI-generated nonsense? I hope not. I hope a product manager at Meta is losing sleep over this problem tonight.

This is happening to the whole Internet, though. AI slop is already filling up web pages and discussion forums. Reddit will succumb to it because upvotes are the metric. Comments sections were already astroturfed; now the astroturfers will just cut out the humans sitting in the phone farm. I just had a meeting today where one of the topics was using AI to generate blog posts. There’s no turning back from this garbage because the incentives to use it are so high, and the bill for that convenience won’t become due until the entire Internet is consumed by it.

I printed (and web published) a ‘zine because I believe print is going to make a comeback very soon. By its very nature, print defies the logic of machine generation. We need analog back. The digital ocean is polluted.

Google Bard’s Gothic Hallucinations

Yesterday I asked Google Bard the kind of question I’ve often wanted to ask a search engine.

“Imagine you are a professor preparing a graduate seminar on 18th- and 19th-Century British Gothic Literature,” I instructed the machine. “What materials would you place on the syllabus, including a combination of primary texts and secondary criticism and interpretation, and how would you group them?”

This is a complex question, but the solution—as I understand it—should just be a series of search queries whose most relevant results are fed into the model’s context to ground the output. Because Google is the market leader in search, and I’m not asking Bard to display its “personality” like Bing/Sydney (the “horniest” chatbot, as The Vergecast would have it), I thought this would be an ideal task for Bard.
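
For what it’s worth, here is the flow I had imagined, sketched in Python. The web_search and llm_complete functions are hypothetical stand-ins invented for illustration, not real Google or Bard APIs; the point is only the shape of the thing: retrieve real sources first, then make the model answer from them.

```python
# A sketch of the flow I imagined. `web_search` and `llm_complete` are
# hypothetical stand-ins invented for illustration, not real APIs.

def web_search(query: str, k: int = 5) -> list[str]:
    """Hypothetical: return the k most relevant passages for a query."""
    raise NotImplementedError

def llm_complete(prompt: str) -> str:
    """Hypothetical: return the model's completion of a prompt."""
    raise NotImplementedError

def grounded_answer(question: str) -> str:
    # Retrieve real sources first, then ask the model to answer *from*
    # them, which is what should keep it from inventing citations.
    passages = web_search(question)
    prompt = (
        "Answer using only the sources below, and cite only works "
        "that actually appear in them.\n\n"
        "Sources:\n" + "\n\n".join(passages) + "\n\n"
        "Question: " + question
    )
    return llm_complete(prompt)
```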

Boy, was I wrong. Here is the syllabus Google Bard produced.*

At first glance, this looks like a valid, if unoriginal, syllabus. Bard has identified some representative primary texts matching the query and has chosen to present them chronologically, rather than thematically. That is a sane choice. And those texts actually exist.

Now let’s look at the secondary literature Bard wants students to grapple with. Bard has selected the following texts:

  • David Punter, The Gothic Imagination: A Critical History of Gothic Fiction from 1764 to the Present Day (1996)
  • Anne Williams, Gothic Literature (1994)
  • Stephen D. Gosling, Gothic Literature: A Cultural History (2000)
  • William Veeder, Gothic Fiction: A Critical Introduction (2005)
  • David Skidmore, Gothic Literature (2013)
  • Andrew James Smillie, Gothic Literature (2017)

“I would… group the secondary criticism and interpretation chronologically,” Bard says, “starting with Punter’s The Gothic Imagination, the first comprehensive critical history of Gothic fiction, and ending with Smillie’s Gothic Literature, the most recent critical history of the genre.” That sounds good, but none of these texts exist. Not one. Google Bard made up every one of the texts on this list, and several of the people listed there as well.

David Punter is, indeed, a scholar of gothic literature, but as far as I can tell he has never produced a text entitled The Gothic Imagination: A Critical History of Gothic Fiction from 1764 to the Present Day. Anne Williams is professor emerita in the English department at UGA, but I cannot find an overview by Williams published in 1994 (though Art of Darkness: A Poetics of Gothic, published in 1995, sounds fascinating). I can find no gothic scholar named Stephen D. Gosling, and obviously no cultural history Gosling may have authored. William Veeder was a professor at the University of Chicago but never wrote Gothic Fiction: A Critical Introduction. And so on. None of these books exist.

Make of this what you will. I don’t think Bing or ChatGPT would do much better at this task right now, but it is only a matter of time before they can deliver accurate results. In the meantime, the machine is confidently hallucinating. Caveat emptor.

Of course, I did ask Bard to “imagine” it is a professor. Maybe it took me too literally and “imagined” a bunch of books that would be great for graduate students to read. Perhaps I should have told Bard it is a professor and insisted that it deliver only real results.

There’s always next time.

* To be fair, Google warned me twice that this would happen.