A Note on the Disappearing Internet

A while ago, I wrote that the future is local. File this quick note in the same folder.

Tonight I was trying to locate a handy graph showing trends in the construction of shopping malls in the twentieth century to supplement a travel essay I’m working on. I know I’ve seen charts, tables, timelines, and maps that show exactly what I needed, so I thought it would be trivial to find one on Google. Turns out it was easy to find secondary content describing what I wanted, but the primary sources were long gone from the internet. Here’s a great example.

In May 2014, The Washington Post ran a story about the death of American shopping malls. After the usual rambling wind-up to the ad break, the article got to the point: an animated map designed by an Arizona State grad student tracking the construction of malls across space and time in the twentieth century. “Over a century,” Post columnist Emily Badger wrote, “the animation gives a good sense of how malls crept across the map at first, then came to dominate it in the second half of the 20th century.” That is exactly what I wanted! I scrolled up and down the page, looking for a map with “dots… colored by the number of stores in each mall,” but it was nowhere to be found. I clicked a link to the source: nothing. MapStory.org appears to have gone offline sometime in the summer of 2020. Increasingly dismayed, I went back to Google and searched again. This Archinect article, published a few hours after the Post column, embedded the map directly. All that remains now is a blank box. Business Insider was a few days late to the party, but it was the same story there: a blank box where the map used to be.

As a last resort, I turned to the Wayback Machine at the Internet Archive. An archived version of a web app like MapStory is never ideal and only rarely works. Sure enough, the archived version of the mall map is just text gore. I’m afraid Sravani Vadlamani’s map is gone, and probably gone forever.

As corporations merge and downsize; as executives and product managers make changes to content retention strategies; as technical standards and fashions in code change over time; and as server upgrades, data loss, simple bit rot, and other forms of entropy accumulate; more and more of these primary sources are going to disappear. In the best-case scenario, dedicated archivists might be able to stay ahead of the chaos and preserve some majority of the information we see every day. Because the last ten years or more of the internet is largely hidden behind the walls of social media, however, the odds that this scenario will prevail are vanishingly small. We should be prepared for a much worse situation: if we don’t make a local copy of the things we see on the internet, they probably won’t be there when we come back.

As a historian, I am troubled by the potential consequences of this fragility. “Darkness” did not prevail in the so-called dark ages of the past because people were less intelligent, inventive, or ambitious than their ancestors. The “darkness” seems to have existed only in retrospect, when later generations recognized a rupture in information between one age and the next. Burning libraries is one way to cause such a rupture. Perhaps networked computers serving dynamically generated content is another. Let us hope not.

Analog Future

The future is local.

I mean local in several senses of the word. The future will be local, first, in the sense that the things you do there will happen somewhere close to you instead of on a computer somewhere in Atlanta or San Francisco or Dublin. It will also be local in the sense that the majority of things you will make and do there will likely be kept on your own computer, perched on your tabletop, stored on your bookshelf, built on your workbench, cooked in your kitchen, and so on, rather than somewhere else. You will own them. Related to this, the future will be local, finally, in the sense that you will share things there with local people whom you actually know, rather than with digital representations of people in chat rooms or on headsets. You will likely post the things you make there on your own website, print them in your own zine, sell them in your own community. The internet is not dead, but its role as the primary force shaping our lives is coming to an end.

When I say “the internet,” I don’t mean the technical stack. I’m not referring to the network of networked computers communicating with one another using various protocols. Instead, I refer to the “phenomenological internet” of “the more familiar sites of daily use by billions of people” that Justin E.H. Smith defines in his book, The Internet is Not What You Think It Is. Smith writes,

“Animals are a tiny sliver of life on earth, yet they are preeminently what we mean when we talk about life on earth; social media are a tiny sliver of the internet, yet they are what we mean when we speak of the internet, as they are where the life is on the internet.”

To this definition I would add another category, however: the streaming media provider. When we speak of the internet, we also speak of Netflix, Hulu, Amazon, Disney Plus, and so on. These multi-billion dollar corporations draw on the rhetoric of “the internet” to position themselves as scrappy upstarts opposing the staid traditional media providers, such as film studios and television networks. Viewers have largely accepted this position and view these services as outposts of the internet on their television screens.

Prediction is a mug’s game, so think of this as a prescription instead of a prediction. There are several related trends converging over the next several years that are likely to drive people away from the comfy little burrows they’ve carved out of the internet by forking over $5 or $7.99 or $14.99 or a steady stream of personally identifiable data every month. Together, these trends map the contours of serious contradictions between abundance and desire, on the one hand, and humans and machines on the other, which strike at the heart of the internet as we have understood it since around 2004. The dialectic emerging from these contradictions will drive new user behaviors in the next decade.

The first trend is the grinding ennui which has resulted from the relentless production of entertainment and cultural commodities for consumption on the internet. Reduced in the past several years to a sort of semi-nutritive paste called “content,” art and entertainment are losing their capacity to relieve and enrich us and now increasingly amplify the isolation and pessimism of life online.

A seemingly infinite stream of money dedicated to the production of entertainment on the internet has resulted in an ocean of unremarkable “content” that does little more than hold your attention long enough to satisfy the adware algorithm or build a platform big enough to stage the next bit of content in the franchise and queue up the next marketing event. Outside of their algorithmically contoured bubbles of fandom, there is little difference between Marvel and Star Wars or DC or YouTube creators or Twitch streamers or podcasts. Netflix shows and Amazon Prime shows and Hulu shows and HBO Max shows and Paramount Plus shows and Peacock shows and so on are indistinguishable blips in time, forgotten as quickly as they are consumed. Books scroll by on Kindle screens or drop serially onto shelves. Photographs and artwork slide past on Instagram, meriting a second or perhaps a moment’s notice before disappearing into the infinite past. Pop music percolates through TikTok, moves week-by-week downward on officially curated playlists, radiates out into commercials, and then disappears, poof, as rapidly as it came, displaced by the next. Independent music on the internet–even on platforms nominally controlled by the artists, like Bandcamp or SoundCloud–exists in much the same sort of vacuum as it always has. The internet promised an efflorescence of color and creativity. What it gave us instead was a flat, white light that grows dimmer over time as the algorithms which shape it converge on a single point of optimization.

The top 5 most-viewed links on Facebook in the last quarter

The second trend is tightly related to the first, because the vast majority of this “content” is indistinguishably boring. Social media is dying. Many platforms, Facebook front and center, are already dead, gliding still on accumulated momentum but inevitably bound to stop. As recently as 2016, we believed that Facebook could change the world. In recent quarters, however, the most viewed content on the behemoth platform has either been a scam or originated somewhere else. The top 5 most-viewed links in the second quarter of this year, according to Facebook, consisted of a link to TikTok, two spam pages, and two news stories from NBC and ABC on the Uvalde school shooting. The TikTok link leads the second-place spam page by a huge margin. Facebook is not a healthy business. Ryan Broderick recently summed up the situation with Facebook admirably on his excellent “Garbage Day” Substack. “Facebook, as a product, is over,” Broderick writes. “Meta knows it. Facebook’s creators know it. Possibly even Facebook’s users. But no one has anywhere else to really go.”

People who rely on social media to promote and build businesses are beginning to note a general decline as well. According to a poll detailed in a recent article on “creatives” frustrated with social media, 82% believe that “engagement” has declined since they started using social media. “I’ve given up on Instagram,” one freelance artist noted. “I wasn’t even sure it was making a difference with getting more work. And I seem to be doing okay without it.”

Facebook and Instagram are in rapid decline, but what about TikTok, YouTube, Reddit, Twitter, and others? A third problem, more profound than the others, faces these platforms: there are no more users to gain. Two decades into the social media era, the market is highly segmented. New platforms like TikTok will continue to emerge, but their surge will climb rapidly to a plateau. The decades-long push for growth that fueled platforms like Facebook and Twitter through the 2000s and 2010s dovetailed with the proliferation of smartphones. Now that the smartphone market is saturated, social media companies can no longer look forward to a constantly expanding frontier of new users to sign up.

Relying on content algorithms to retain existing users or coax back those who have already left, platforms accelerate the ennui of optimization. This leaves precious little room for new types of content or new talents to emerge. Still, people will entertain each other. Those who create art will seek approval and criticism. Others will seek out new and exciting art and entertainment to enjoy. When there is no room on social media to put these groups of people together, they will find each other in new (old) ways: on the street.

You may have recently heard that machines are going to solve the problem of creating new and engaging content for people to consume on the internet. AI models like DALL-E, Stable Diffusion, GPT-3, various deepfake models for video, and others draw on the oceans of existing images, text, audio, and video to generate new content. Some of these models, such as Nvidia’s StyleGAN, are capable of producing content indistinguishable from reality. Artists are beginning to win prizes with AI-generated work. AI-generated actors are appearing in media speaking languages they don’t know, wearing bodies decades younger than the ones they inhabit in reality. GPT-3 is a “shockingly good” text generator which prompted the author of a breathless article in this month’s Atlantic to swoon. “Miracles can be perplexing,” Stephen Marche writes in the article, “and artificial intelligence is a very new miracle…. [An] encounter with the superhuman is at hand.”

Some critics of these AI models argue that they will prompt a crisis of misinformation. Deepfakes may convince people that the President of the United States declared war on an adversary, for example, or a deepfake porno video could ruin a young person’s life. These are valid concerns. More overheated critics suggest that AI may one day surpass human intelligence and may, therefore, hold power over its creators, as masters over pets. Setting aside the Social Darwinist overtones of this argument—that “intelligence,” exemplified by the mastery of texts, translates automatically to power—machine learning algorithms are limited by the same content challenges facing social media. AI may create absorbing new universes of art and sound and video, but it can only generate content based on the existing corpus, and it can only distribute that content on existing networks. People have to create new texts for AI to master. The willingness of a continuous army of new users to generate these texts and upload them to the phenomenological internet of social media and streaming video, where they can be easily aggregated and made accessible to machine learning models using APIs, is declining. The same types of algorithms that prompted Stephen Marche to proclaim a New Miracle in The Atlantic are driving the most successful corporations in history right off a cliff as I write this.

These critiques of AI-generated content assume that people will continue to scroll social media and engage with the things they see there in ways similar to their behavior over the past decade. In this model, to review, users scroll through an endless stream of content. When they see posts that inspire or provoke, impress or irritate, they are encouraged to like, comment, and share these posts with their friends and followers. The content may be endless, but the people on both sides of the transaction are the most important elements in the decision to like, comment, or share. Users are not impressed or provoked by the content itself, but by the connection it represents with other people. They respond and share this content performatively, acting as a bridge or critic between the people who created the content–and what they represent–and their friends and followers. If you remove enough of the people, all of the content loses its value.

At a more fundamental level, people are the appeal of any creative work. Art without an artist is a bit like clouds or leaves: these may be beautifully or even suggestively arranged, but they offer no insight into what it means to be human. GPT-3 may tell a story, but it does so mimetically, arranging words in a pattern resembling something that should please a human reader. You may level the same criticism at your least-favorite author, but at least they would be insulted. GPT-3 will never feel anything.

AI-generated content will neither solve the content problem for platforms nor prompt a further crisis of misinformation and confusion for users. AI content will be the nail in social media’s coffin.

As a result of these interlocking trends–the crushing ennui of “content,” the decay of social media, the dearth of new smartphone users, and the incompatibility of AI-generated art with human needs–“culture” is likely to depart the algorithmic grooves of the internet, sprout new wings offline, and take flight for new territory. Perhaps, once it is established there, the internet will catch up again. Perhaps then software will try, once again, to eat the world. This time it has failed.

The Vibe Shift

You’ve probably heard of the vibe shift.

The vibe shift is whatever you want it to be.

The vibe shift is the death of the unitary internet.

The vibe shift is the re-emergence of local, regional, national constellations of power and culture separate from the astroturfed greenery of the web.

The vibe shift is a return to ‘zines, books, movies, maybe even magazines and newspapers, because the web was once an escape from work and all the responsibilities of “real life” and now it has come to replace them.

Lately I have been leaving my phone in the car when I go places. These insidious toys entered our lives with a simple question: “what if I need it?” I cannot recall a single situation in the past decade when I truly needed a mobile phone. Instead I have begun to ask myself, “what if I don’t need it?” What if a mobile surveillance and distraction device is actually the last thing I need to carry with me?

What Am I Doing Here?


Outside my window there is a shaft of sunlight streaking across the fence. The fence is dark brown, a color that hasn’t been popular since the 1970s, and the local paint store keeps a formula in the notebook just for our condo association to paint the fences and trim. They call the color “Westwood Brown,” and if we ever decide to change the color scheme here they will probably hesitate to pull the color out of the book because it is like a collector’s item now. It is a story they can tell to new employees. Even so, if you catch it when the light is just right, like right now, Westwood Brown can transport you to a different time–the Halston era, the epoch of the land yacht Coupe de Ville. It is the golden hour and the light points, like a celestial digit, straight at the spreading petals of brilliant flame-colored Bromeliad my wife planted along the fence. I think, I should take a picture of that.

I am unsure if it is me thinking about the picture, or if it is Instagram thinking it for me.

This week I quit Twitter. It’s not like I had much of a presence there, so we will be fine without each other. I had something like 150 followers, followed around 1,200 accounts, and scrolled over there occasionally when the rest of my addictive scroll-holes were drying up for the day. Lately I had been playing a simple game every time I opened the app. If the first post was someone: 1.) wailing about the hypocrisy of the other side in the culture war, 2.) flexing their success, or 3.) piling onto the controversy du jour, I would close the app and move on. I have had very little reason to scroll beyond the first screen in the past few weeks.  

I’ve quit Twitter and deleted Facebook before, but I was encouraged to close my Twitter account this time by the spate of articles that popped up this week frankly raising the question: what are we doing here? Quinta Brunson’s thoughts on minding your own damn business afford the best example, but the algorithms must have registered some little chemical twist in my pituitary because I kept running across pieces, like this profile of Twitter-person Yashar Ali, or this interview with someone who quit social media, that chip away at the foundation of social media’s necessity by questioning whether we need it or whether the people we see there are as significant as they appear. 

The cynical among you are likely to say that these are dumb questions; that of course social media people are unworthy of our attention, and of course we don’t need it. I think making this claim is a bit like playing a character, though: that of the discerning sage, perhaps, or the intelligent free-thinker, standing on a stage opposite the vapid follower and the bankrupt influencer. Which of you is Malvolio and which is Toby Belch will depend upon the attitude of the viewer, however. The postures and costumes are different, but from far enough away the result is the same. They strike us now as just a couple of old assholes, rendered immortally luminous by a poetic genius of world-historical importance. On the internet, there is no poetry to illuminate them. Both characters are constricted by straitjackets of bullshit, and both of them seem terribly unhappy.

That brings me in a roundabout way to the question I sat down to ask in the first place. What am I doing here? I’m pretty sure I’ve asked this question before, but I don’t want to go back and check. That would be depressing, like reading an old poetry notebook or flipping through an old diary from high school. I’ll reframe the question this time instead and try to write my way to an answer. Is what I’m doing here, whatever it is, balancing out some of that unhappiness? Is that even possible?  


Lately people have been writing about how much they miss the old internet. The new internet is too vanilla, they argue, too boring and perversely commercial, like a shopping mall. I am normally inclined to agree, but the other afternoon I woke up from a nap and read about alternate reality games and obscure social media characters on Garbage Day. Perhaps it was the residual triazolam and nitrous oxide left over in my system from oral surgery that morning, but I had trouble making sense of it in the same way that I once stumbled gape-mouthed across arcane forums and exotic communities like a rube from the meandering suburbs of America Online. The old internet, with all of its nonsense, all of its randomness and quirky passion, is still with us. I’m looking at you, Malvolio, for the next line. Of course it is, you might say. It’s all just people.

But did the old internet make people happy? The internet seems always to have been a contested space, a rambling assemblage of insider communities whose best days are very recently gone. I remember my first connection, a dial-up hotline to AOL across endless air-conditioned days and nights in the summer of 2001. I was late to the party, five years behind my more affluent peers, and anxious to make up for it in sheer eagerness. I joined email lists, clicked through webrings, posted to forums. I had been led to believe, probably by bewildered news anchors or breathless magazine articles, that the web was a brand new thing. What I found instead was a bunch of conversations with no beginning and no end. It seemed like the authors of my favorite web pages had all recently stepped away from their keyboards. Communities were cantankerous places, hostile to newcomers, where old fires never seemed to burn themselves out. There was the Eternal September of 1993, for example, named for the time when all the noobs from AOL flooded Usenet and never left. At least that’s how the old Usenet admins have it. For my part, I started moderating a mailing list and stepped right into the middle of flame wars dating back years. Old-timers had to bring me up to speed on the old arguments so I would know how to intervene. New trolls arrived weekly. Maybe it was a bad list, but anyone who’s spent time moderating an online community knows the struggle. 

I think of this when I wonder why we all flocked to MySpace, and then Facebook, so rapturously. The old internet was a place made of text, and consequently a place where strangers spread their shit all over you. You couldn’t avoid it. If you wanted to communicate, you had to work at it. You had to learn the jargon, nod along with inside jokes you didn’t always find funny, assimilate opinions you couldn’t publicly examine. MySpace and Facebook were made of pictures, blissfully free of ideas that couldn’t be communicated at a glance. Facebook remains so. Your mere existence, your possession of a face, is the only cost of admission.  


Neither the old internet nor the new, then, has made us any happier than we were in the time before. Rather than happiness, what we’ve gained is access to a sort of dark power bubbling in the river of instant and endless information flowing through the pipes. We use this information to construct stories, and it is through these stories that we harness the power to do work in the world. I can’t help but feel as though we’ve designed the internet to give us information which reinforces the worst stories we tell about ourselves. These are stories of growth and progress in which we appear better and smarter than anyone who lived before us. These are also stories that make us feel empowered as individuals, unique among lesser peers. Meanwhile the algorithms conditioning the flow of information enable us, in a vicious feedback loop, to look away from the stories we’d rather not tell–stories of stasis or declension, of similarity and solidarity.

There is another force flowing in those dark waters as well: the emotive power of suggestive blank spaces. Memes are like atoms, rich in potential and eager to freely associate, but information devoid of context is like a shadow divorced from its object. We can only guess at its sources and meanings. This is precisely the type of information the internet is engineered to deliver, however. You ask Google a question and it gives you an answer. 

Google’s brand is based on authority and correctness, at least. The rest of the internet is built to keep you engaged. If content exists in a sort of Darwinian state of nature, the most successful information is that which makes you feel. Reddit and the chans are factories producing and serving up the most emotive content. You scroll TikTok or YouTube and the algorithm serves you videos according to their likelihood of keeping you engaged. Twitter and Facebook serve up bite-size nuggets of emotion on the feed. News editors engineer headlines to galvanize you to action. It is not that microblogs, articles, memes, pictures, blurbs, and short videos are incapable of rich contextualization; it is that the creators focused on context are not as successful as those focused on engagement.    

Part of what I am trying to do here is counter these tendencies. I am drawn to the stories we prefer not to tell. As a historian, for example, I am fascinated as much by continuity as by change over time. No matter the era in which they lived, informants in the archives shared their similarities with us as readily as they disclosed their differences. Our motivations echo theirs. Many of our creations fall short of theirs. Like strangers on the internet, they rub their shit all over us too. We would do well to wallow in it, though, because we cannot engineer a new world as easily as we can engineer a new user experience. Our culture is built from theirs. We live in the cities and towns they built. We speak their language, worship their gods, read their books. Rather than seeing ourselves as disruptors or innovators, we might benefit from seeing ourselves as cautious trustees of that world, therefore; as careful fiduciaries focused on moving slowly and maintaining things rather than moving fast and breaking them. 

This perspective need not be conservative. The narrative of constant growth and improvement is the guiding myth of capitalism, after all. I think building an alternative grounded in context, focused on capturing the prosaic or humanizing the proletarian (to the best of my meager ability, at least), can make us feel a little more anchored in the swift currents of a society built to pick our pockets and power over us by maintaining a constant state of instability. If this approach can make the internet a little bit of a happier place, then maybe I’m doing some good here.


I failed to mention at the beginning of this essay that I am not necessarily a reliable narrator. 

I actually don’t know whether I’m achieving any of the lofty goals I just described. Perhaps, like an artist’s statement, everything I said up there illuminates the principles organizing my work. I like the sound of that but I’m not sure it’s true. The truth is that I really just work on ideas that I like without worrying about how they fit into some schema. If I admit this, however, then my work in this essay isn’t done and it remains for me to answer: does doing this make me happy? Now I can’t hide behind a shield of analysis. This just got scary.    

Writing for me is a form of exorcism. If I go more than a few days without writing something, anything, I can feel a sort of dark pit forming somewhere inside me. I’ve come to realize that this darkness is death stalking me, as it stalks all of us, from somewhere just outside of my peripheral vision. The longer I go without writing–or, to a lesser extent, creating other things like images or music–the closer it gets, until the feeling of hopelessness is almost unbearable. This sounds like an acute illness, I understand, but each of us is striving to overcome this darkness in our own way every moment we are alive. Some of us achieve it through devotion to family. Others achieve it through friends or work, some achieve it with drugs. Writing is what works best for me. 

Every word written is written for an audience. This is obviously true for articles and essays like this one, for books, and so on, but even a journal is just a book we write for our future selves. With this in mind, a few years ago I thought: if I’m writing simply to stay alive, why not publish it? It takes so many rejection letters to get an acceptance, and I don’t have time (this line of thinking goes) to develop a whole submission and tracking process in addition to working, studying, trying to shift gears and be creative, and then somehow writing and finishing some harebrained idea in the first place.

The narrative of progress through technology is here to make me feel good about this. With the rise of Substack and the constant firehose of essays like “No One Will Read Your Book” or “10 Awful Truths About Publishing” or this NY Times article which found that 98% of books released last year sold fewer than 5,000 copies, there is no better time than now to rethink publishing altogether. There are currently 7,614 markets and agents listed on Duotrope. Many of the “lit mags” on the web and indexed by Duotrope are labors of love undertaken by one or two individuals. They went out and bought a domain and built a WordPress site, just like I’ve done here. What’s the difference?

Ask anyone who writes and they will point out the problem right away. There are few things more pathetic than a self-publisher. Publishing my own work here helps to exorcise that darkness, but it will never feel good enough. The point of publication–especially now that we all have the resources to publish whatever we want–is that someone else thinks what you have to say is worth amplifying. Publishing your own work is like an admission of inadequacy. It feels like saying, I don’t think this is good enough to publish or, even worse, nobody else thinks this is good enough to publish.

This is an indoctrinated opinion. Just like those stories of endless progress that troubled me a few hundred words ago, this opinion serves a social purpose. It is likely that how you see that purpose depends on your bedrock identity. Perhaps you see publication as a form of competition that elevates the best writing over the mediocre, driving everyone to greater heights of achievement along the way. Or maybe you see it as a limiting mechanism that functions–either intentionally or not–to push out voices that challenge dominant opinion. 

Since I am now in the personal and supposedly truthful part of this essay, I should say that my own view changes based on how strongly I feel about my own abilities. When I am down, probably after a rejection or two, my opinion of publishing is decidedly Jacobin: down with the editors! At other times, probably when I’ve had something published or just written something that I feel good about, I’m as sanguine about the market as Adam Smith. 

Perhaps all of my opinions are like this and the problem I am struggling to write around is that we are supposed to be consistent. Do the algorithms know I am as changeable as the wind? Do they take advantage of the distance between my variable opinions and desires and how consistent I think they are? We value being but we are all, always, merely becoming. 

This is the real strength of a blog. A book is firm, stolid like our opinions are supposed to be. A book, we might say, is being. The internet is fluid, as variable as the flood of emotions and opinions shaping our daily experience. It is a space characterized by constant becoming, by revising, rethinking. Conversations here neither begin nor end. We’re all just passing by one another, sharing ideas inscribed with light and stored as charges on a magnetic array somewhere far away.

This website is my bridge between becoming and being.

This website is a bridge between the old web and the new.   

This website won’t make anyone happier, but it’s not for lack of trying.