Technology is supposed to make things better. Lately it seems as though, almost day by day, the tools and systems that surround us are growing more complex and less useful. Here is an example.
The mobile phone on my desk at work flashes a notification about once a week. “Update Apple ID Settings,” the notification advises me, because “some account services will not be available until you sign in again.” I tap the notification and a new screen appears, titled “Apple ID for your organization.” The screen instructs me to continue to a web address where I may sign in to my account. I tap a large blue button labeled “Continue,” and a browser page showing my workplace’s login screen appears. I enter my password–encrypted and saved on the phone, thankfully–and a new screen presents me with the option to verify my identity through a phone call or a text message. I select phone call, because I am unable to receive text messages on this phone. If I did happen to select text verification, here is what would happen: the screen would change again, displaying a message over a set of floating periods indicating that the verification server is awaiting my confirmation text message. Nothing would happen, however, and I would need to begin the process again.
A moment after selecting phone verification, the phone rings. I answer and an automated voice speaks:
“This is the Microsoft sign-in verification system,” the voice says. “If you are trying to sign in, press the pound key.”
I tap the small window at the top of the screen representing the call in progress. This leads to another screen, where I must tap the “Handset” area to open a virtual representation of an old phone handset. I then tap the area of the glass screen corresponding to the pound key.
“Your sign-in was successfully verified,” the voice responds. “Good-bye.” Only then does the blazing red notification bubble disappear.
The entire interaction takes less than thirty seconds. It is irritating in the moment, but the process is easy enough that I don’t have to think much about it once I get started. If I refused to do so, however, after a while the software on my phone would stop working. First, I would lose the features furthest from the core of the phone. Apps that change often–productivity apps like Excel or OneNote, for example–would be first to go, blocked by a verification server requiring the newest version to operate. Next, I might start to lose access to some of the manufacturer’s frequently-updated software, like Maps and Photos. Finally, given enough time and system updates, even the most basic features like mail and text messages, and then the phone itself, would stop working, rendering the $1,000 computer less useful than a concrete block until I completed the ritual of verification.
Tonight I was trying to locate a handy graph showing trends in the construction of shopping malls in the twentieth century to supplement a travel essay I’m working on. I know I’ve seen charts, tables, timelines, and maps which show exactly what I needed, so I thought it would be trivial to find it on Google. Turns out it was easy to find secondary content describing what I wanted, but the primary sources were long gone from the internet. Here’s a great example.
In May 2014, The Washington Post ran a story about the death of American shopping malls. After the usual rambling wind-up to the ad break, the article got to the point: an animated map designed by an Arizona State grad student tracking the construction of malls across space and time in the twentieth century. “Over a century,” Post columnist Emily Badger wrote, “the animation gives a good sense of how malls crept across the map at first, then came to dominate it in the second half of the 20th century.” That is exactly what I wanted! I scrolled up and down the page, looking for a map with “dots… colored by the number of stores in each mall,” but it was nowhere to be found. I clicked a link to the source: nothing. MapStory.org appears to have gone offline sometime in the summer of 2020. Increasingly dismayed, I went back to Google and searched again. An Archinect article, published a few hours after the Post column, embedded the map directly. All that remains now is a blank box. Business Insider was a few days late to the party, but it was the same story there: a blank box where the map used to be.
As a last resort, I turned to the Wayback Machine at the Internet Archive. An archived version of a web app like MapStory is never ideal and only rarely works. Sure enough, the archived version of the mall map is just text gore. I’m afraid Sravani Vadlamani’s map is gone, and probably gone forever.
As corporations merge and downsize; as executives and product managers make changes to content retention strategies; as technical standards and fashions in code change over time; and as server upgrades, data loss, simple bit rot, and other forms of entropy accumulate; more and more of these primary sources are going to disappear. In the best-case scenario, dedicated archivists might be able to stay ahead of the chaos and preserve some majority of the information we see every day. Because the last ten years or more of the internet is largely hidden behind the walls of social media, however, the odds that this scenario will prevail are vanishingly small. We should be prepared for a much worse situation: if we don’t make a local copy of the things we see on the internet, they probably won’t be there when we come back.
As an historian, I am troubled by the potential consequences of this fragility. “Darkness” did not prevail in the so-called dark ages of the past because people were less intelligent, inventive, or ambitious than their ancestors. The “darkness” seems to have existed only in retrospect, when later generations recognized a rupture in information between one age and the next. Burning libraries is one way to cause such a rupture. Perhaps a network of computers serving dynamically generated content is another. Let us hope not.
When I say the future will be local, I mean local in several senses of the word. It will be local, first, in the sense that the things you do will happen somewhere close to you instead of on a computer in Atlanta or San Francisco or Dublin. It will be local, second, in the sense that the majority of the things you make and do will be stored on your own computer, perched on your tabletop, stored on your bookshelf, built on your workbench, cooked in your kitchen, and so on, rather than somewhere else. You will own them. Related to this, the future will be local, finally, in the sense that you will share things with local people whom you actually know, rather than with digital representations of people in chat rooms or on headsets. You will likely post the things you make on your own website, print them in your own zine, sell them in your own community. The internet is not dead, but its role as the primary force shaping our lives is coming to an end.
When I say “the internet,” I don’t mean the technical stack. I’m not referring to the network of networked computers communicating with one another using various protocols. Instead, I refer to the “phenomenological internet” of “the more familiar sites of daily use by billions of people” that Justin E.H. Smith defines in his book, The Internet Is Not What You Think It Is. Smith writes,
“Animals are a tiny sliver of life on earth, yet they are preeminently what we mean when we talk about life on earth; social media are a tiny sliver of the internet, yet they are what we mean when we speak of the internet, as they are where the life is on the internet.”
To this definition I would add another category, however: the streaming media provider. When we speak of the internet, we also speak of Netflix, Hulu, Amazon, Disney Plus, and so on. These multi-billion dollar corporations draw on the rhetoric of “the internet” to position themselves as scrappy upstarts opposing the staid traditional media providers, such as film studios and television networks. Viewers have largely accepted this position and view these services as outposts of the internet on their television screens.
Prediction is a mug’s game, so think of this as a prescription instead of a prediction. There are several related trends converging over the next several years that are likely to drive people away from the comfy little burrows they’ve carved out of the internet by forking over $5 or $7.99 or $14.99 or a steady stream of personally identifiable data every month. Together, these trends map the contours of serious contradictions between abundance and desire, on the one hand, and humans and machines on the other, which strike at the heart of the internet as we have understood it since around 2004. The dialectic emerging from these contradictions will drive new user behaviors in the next decade.
The first trend is the grinding ennui which has resulted from the relentless production of entertainment and cultural commodities for consumption on the internet. Reduced in the past several years to a sort of semi-nutritive paste called “content,” art and entertainment are losing their capacity to relieve and enrich us and now increasingly amplify the isolation and pessimism of life online.
A seemingly infinite stream of money dedicated to the production of entertainment on the internet has resulted in an ocean of unremarkable “content” that does little more than hold your attention long enough to satisfy the adware algorithm or build a platform big enough to stage the next bit of content in the franchise and queue up the next marketing event. Outside of their algorithmically contoured bubbles of fandom, there is little difference between Marvel and Star Wars or DC or YouTube creators or Twitch streamers or podcasts. Netflix shows and Amazon Prime shows and Hulu shows and HBO Max shows and Paramount Plus shows and Peacock shows and so on are indistinguishable blips in time, forgotten as quickly as they are consumed. Books scroll by on Kindle screens or drop serially onto shelves. Photographs and artwork slide past on Instagram, meriting a second, perhaps a moment, of notice before disappearing into the infinite past. Pop music percolates through TikTok, moves week-by-week downward on officially curated playlists, radiates out into commercials, and then disappears, poof, as rapidly as it came, displaced by the next. Independent music on the internet–even on platforms nominally controlled by the artists, like Bandcamp or SoundCloud–exists in much the same sort of vacuum as it always has. The internet promised an efflorescence of color and creativity. What it gave us instead was a flat, white light that grows dimmer over time as the algorithms which shape it converge on a single point of optimization.
The second trend is tightly related to the first, because the vast majority of that “content” is indistinguishably boring: social media is dying. Many platforms, Facebook front and center, are already dead, gliding still on accumulated momentum but inevitably bound to stop. As recently as 2016, we believed that Facebook could change the world. In recent quarters, however, the most-viewed content on the behemoth platform has either been a scam or originated somewhere else. The top five most-viewed links in the second quarter of this year, according to Facebook, consisted of a TikTok link, two spam pages, and two news stories from NBC and ABC on the Uvalde school shooting. The TikTok link leads the second-place spam page by a huge margin. Facebook is not a healthy business. Ryan Broderick recently summed up the situation with Facebook admirably on his excellent “Garbage Day” Substack. “Facebook, as a product, is over,” Broderick writes. “Meta knows it. Facebook’s creators know it. Possibly even Facebook’s users. But no one has anywhere else to really go.”
People who rely on social media to promote and build businesses are beginning to note a general decline as well. According to a poll detailed in a recent article on “creatives” frustrated with social media, 82% believe that “engagement” has declined since they started using social media. “I’ve given up on Instagram,” one freelance artist noted. “I wasn’t even sure it was making a difference with getting more work. And I seem to be doing okay without it.”
Facebook and Instagram are in rapid decline, but what about TikTok, YouTube, Reddit, Twitter, and others? A third problem, more profound than the others, faces these: there are no more users to gain. Two decades into the social media era, the market is highly segmented. New platforms like TikTok will continue to emerge, but their surge will climb rapidly to a plateau. The decades-long push for growth that fueled platforms like Facebook and Twitter through the 2000s and 2010s dovetailed with the proliferation of smartphones. Now that the smartphone market is saturated, social media companies can no longer look forward to a constantly expanding frontier of new users to sign up.
Relying on content algorithms to retain existing users or coax back those who have already left, platforms accelerate the ennui of optimization. This leaves precious little room for new types of content or new talents to emerge. Still, people will entertain each other. Those who create art will seek approval and criticism. Others will seek out new and exciting art and entertainment to enjoy. When there is no room on social media to put these groups of people together, they will find each other in new (old) ways: on the street.
You may have recently heard that machines are going to solve the problem of creating new and engaging content for people to consume on the internet. AI models like DALL-E, Stable Diffusion, GPT-3, various Deepfake models for video, and others use the oceans of existing images, text, audio, and video to create new content from scratch. Some of these models, such as Nvidia’s StyleGAN, are capable of producing content indistinguishable from reality. Artists are beginning to win prizes with AI-generated work. AI-generated actors are appearing in media speaking languages they don’t know, wearing bodies decades younger than the ones they inhabit in reality. GPT-3 is a “shockingly good” text generator which prompted the author of a breathless article in this month’s Atlantic to swoon. “Miracles can be perplexing,” Stephen Marche writes in the article, “and artificial intelligence is a very new miracle…. [An] encounter with the superhuman is at hand.”
Some critics of these AI models argue that they will prompt a crisis of misinformation. Deepfakes may convince people that the President of the United States declared war on an adversary, for example, or a deepfake porno video could ruin a young person’s life. These are valid concerns. More overheated critics suggest that AI may one day surpass human intelligence and may, therefore, hold power over its creators as masters hold power over pets. Setting aside the Social Darwinist overtones of this argument—that “intelligence,” exemplified by the mastery of texts, translates automatically to power—machine learning algorithms are limited by the same content challenges facing social media. AI may create absorbing new universes of art and sound and video, but it can only generate content based on the existing corpus, and it can only distribute that content on existing networks. People have to create new texts for AI to master. The willingness of a continuous army of new users to generate these texts and upload them to the phenomenological internet of social media and streaming video, where they can be easily aggregated and made accessible to machine learning models using APIs, is declining. The same types of algorithms that prompted Stephen Marche to proclaim a New Miracle in The Atlantic are driving the most successful corporations in history right off a cliff as I write this.
These critiques of AI-generated content assume that people will continue to scroll social media and engage with the things they see there in ways similar to their behavior over the past decade. In this model, to review, users scroll through an endless stream of content. When they see posts that inspire or provoke, impress or irritate, they are encouraged to like, comment, and share these posts with their friends and followers. The content may be endless, but the people on both sides of the transaction are the most important elements in the decision to like, comment, or share. Users are not impressed or provoked by the content itself, but by the connection it represents with other people. They respond and share this content performatively, acting as a bridge or critic between the people who created the content–and what they represent–and their friends and followers. If you remove enough of the people, all of the content loses its value.
At a more fundamental level, people are the appeal of any creative work. Art without an artist is a bit like clouds or leaves: these may be beautifully or even suggestively arranged, but they offer no insight into what it means to be human. GPT-3 may tell a story, but it does so mimetically, arranging words in a pattern resembling something that should please a human reader. You may level the same criticism at your least-favorite author, but at least they would be insulted. GPT-3 will never feel anything.
AI-generated content will neither solve the content problem for platforms nor prompt a further crisis of misinformation and confusion for users. AI content will be the nail in social media’s coffin.
As a result of these interlocking trends–the crushing ennui of “content,” the decay of social media, the dearth of new smartphone users, and the incompatibility of AI-generated art with human needs–“culture” is likely to depart the algorithmic grooves of the internet, sprout new wings offline, and take flight for new territory. Perhaps, once it is established there, the internet will catch up again. Perhaps then software will try, once again, to eat the world. This time it has failed.
The vibe shift is the death of the unitary internet.
The vibe shift is the re-emergence of local, regional, national constellations of power and culture separate from the astroturfed greenery of the web.
The vibe shift is a return to ‘zines, books, movies, maybe even magazines and newspapers, because the web was once an escape from work and all the responsibilities of “real life” and now it has come to replace them.
Lately I have been leaving my phone in the car when I go places. These insidious toys entered our lives with a simple question: “what if I need it?” I cannot recall a single situation in the past decade when I truly needed a mobile phone. Instead I have begun to ask myself, “what if I don’t need it?” What if a mobile surveillance and distraction device is actually the last thing I need to carry with me?
Not long after the guns of the Civil War fell cold in the 1860s, John Muir opened a notebook and inscribed his name on the frontispiece. “John Muir, Earth-Planet, Universe,” he wrote, situating himself as firmly as any of us may hope to do. And then he started walking, a thousand miles or so, to the Gulf of Mexico. After setting out on the first of September 1867 on the “wildest, leafiest, and least trodden way I could find,” Muir’s excitement was palpable when he reached Florida six weeks later. “To-day, at last, I reached Florida,” he wrote in his journal on October 15th, “the so-called ‘Land of Flowers’ that I had so long waited for, wondering if after all my longing and prayers would be in vain, and I should die without a glimpse of the flowery Canaan. But here it is, at the distance of a few yards!”
Muir undoubtedly walked a long way from Indianapolis to Georgia, but he cheated his way into Florida, booking overnight passage on a steamboat from Savannah to Fernandina. Perhaps that’s why he felt so down and out after an easy half-day and night of conversation and loafing aboard the steamer Sylvan Shore. “In visiting Florida in dreams,” he wrote, “I always came suddenly on a close forest of trees, every one in flower, and bent-down and entangled to network by luxuriant, bright-blooming vines, and over all a flood of bright sunlight. But such was not the gate by which I entered the promised land.” What he found, instead, was a tangle of marsh and swamp, a hopelessly flat vista broken only by “groves here and there, green and unflowered.” Dropped unceremoniously on this inauspicious shore, without even breakfast to ease his way into the new world, Muir was overwhelmed. The peninsula was “so watery and vine-tied,” he reported, “that pathless wanderings are not easily possible in any direction.” He made his way south from the gloomy coast down the railroad tracks, “gazing into the mysterious forest, Nature’s Own.” Everything was new. It was impossible, he wrote of the forest along the tracks, to convey even “the dimmest picture of plant grandeur so redundant, unfathomable.” Sometimes I feel the same way, though I’ve lived here longer than Muir had been alive when he walked down the lonely rail line trying to make sense of the place.
I picked up Muir’s book recounting the journey a hundred and fifty years later because part of that very long walk took place in Florida, and I am filling up my own notebooks here on Earth-Planet, Universe with the starry-eyed hope that another book about Florida may one day emerge from their pages. Unlike Muir, though, I can draw on an infinite library of books, videos, field guides, and brochures to reduce the unfathomable grandeur of Muir’s nineteenth century gaze to the qualified certainty of my twenty-first century gaze. On a different shelf in my office, for example, I can pull down the Guide to the Natural Communities of Florida. I can leaf through the 81 varieties of land cover the authors have identified in the state until I find the one that Muir was likely to have found along his lonely railroad track: Mesic Hammock. “The shrubby understory may be dense or open, tall or short,” the Guide reports, “and is typically composed of a mix of saw palmetto (Serenoa repens), American beautyberry (Callicarpa americana), American holly (Ilex opaca),” and so on. Maybe I can pull down the field guide to plants and trees, then; or, perhaps, just type their names into the Google search bar on my phone and find out just about anything we know about these thorny, prickly plants with just a few taps.
The sort of deep botanical knowledge Google offers to any armchair naturalist today is what Muir hoped to gain as he explored the little-traveled paths of the South. He set out to find it by tramping through the vines, turning over the ground cover, taking notes, making impressions of leaves and flowers. With only hardbound botanical guides to aid his memory—paperback books then only existed as pamphlets and dime novels, not scientific guides—we can imagine the kind of notes that Muir would need to take to remember it all. Most of all, he had to know how to look, how to take in enough information about a plant shaded by drooping beautyberry branches or hidden beneath the cutting blades of a saw palmetto a few feet off of the trail to describe it later or look it up if he didn’t know what it was. Muir did not have the luxury of a camera in his pocket, connected to an electric warren of machines making inferences from the collective learning of scientists and thousands of amateur naturalists to identify the plant instantly. Muir had to live with it for a while, turning it over and over in his mind until he could write it down. He had to bring some knowledge to the field with him, to know the important parts to remember. Muir had to work for it.
I’ve used apps to identify plants, and they are wonderful. You snap a picture of a flower, or a whorl of leaves, press submit, and like magic a selection of possible candidates appears. It only takes a moment more of reading and looking to positively identify the plant before your eyes. There is no need to walk the laborious path down a dichotomous key—a series of this-or-that questions people use to identify plants and trees in the field—or stumble through the obscure chapters of a specialized field guide. If a naturalist today can download identifying data to their phone, and if they bring a battery backup or two into the field, the old field guide is as obsolete as the buggy whip. Problem solved, right?
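The dichotomous key the apps replace is, at bottom, a binary decision tree. Here is a minimal sketch in Python; the questions and their mapping to species (plants the essay mentions elsewhere) are invented for illustration, not taken from any real botanical key:

```python
# A dichotomous key as nested tuples: (question, branch_if_yes, branch_if_no).
# A branch is either another question tuple or a final identification.
# The questions and species below are made up for illustration only.
KEY = (
    "Leaves fan-shaped and pleated?",
    "saw palmetto",
    (
        "Berries clustered tightly around the stem?",
        "American beautyberry",
        (
            "Leaf edges spiny?",
            "American holly",
            "unknown -- consult a full regional key",
        ),
    ),
)

def identify(node, answer_fn):
    """Walk the key, answering each this-or-that question via answer_fn.

    answer_fn takes a question string and returns True (yes) or False (no).
    """
    while isinstance(node, tuple):
        question, yes_branch, no_branch = node
        node = yes_branch if answer_fn(question) else no_branch
    return node
```

In the field, `answer_fn` is the naturalist looking at the plant; here it could be a lambda answering yes or no to each question in turn. The point of the structure is what Muir knew: you can only walk it if you already know how to look.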
The internet, and by extension our whole lives now, thrives on this promise of problems solved. The old “fixed that for you” meme sums up the mindset, but you have to go a step beyond the meme’s use in the culture wars (the internet’s stock-in-trade, after all) to get there. If you don’t know it, here’s the culture war setup. Somebody posts an opinion you don’t like on the internet. You strike words from the post and replace them with other words that you do like. Then you post the altered text in the comments of the original under the simple heading, “FTFY.” For example, if you wrote a tweet that said, “I love Twix!,” some wag might respond, “FTFY: I love Reese’s!”, with “Twix” struck through. Though your interlocutor would be wrong—Twix is undoubtedly the superior candy—unfortunately the stakes are often much higher. For a while, FTFY was the perfect clap-back to a Trump tweet or a Reddit post. Like all things on the internet, however, FTFY’s popularity is fading away by sheer dint of use. Here’s an example I found on Google in case you are reading this after the meme has completely disappeared.
FTFY is a successful meme because it works on two levels. The first is merely discursive: here is an alternative point of view. If you go back and read one of the breathless essays, from before 4chan and Trump, on the democratic promise of the internet, you’ll see a lot of this. The internet is a place for people to express their opinions, and isn’t that good? Mark Zuckerberg still relies on this discursive level to justify Facebook. “Giving everyone a voice empowers the powerless,” he told a room full of people at Georgetown University last year, who, for some reason, did not burst into uproarious laughter, “and pushes society to be better over time.” If this were the end of communication—I speak, you listen; you speak, I listen—then Zuckerberg would be right and FTFY would be innocuous. The second level of meaning is why anyone uses the meme in the first place, though.
The second level is philosophical: here is a self-evidently correct point of view which shows that you are wrong and I am right. Someone using FTFY intends to point at differences of opinion and erase them at the same time. This creates a sort of nervous thrill in the reader, who revels in the shame of the erased whether they agree with them or not. It has no effect on the author beyond alienation, but the point is not to persuade anyway. It is to profit, in the social and psychological sense, by signaling one’s virtue in exchange for internet points. Rinse and repeat.
Facebook, Reddit, Twitter, and others turn shitposters’ play points into real dollars and power through the intentionally-obscured work of software algorithms. Thanks to this perverse alchemy, which converts mouse movements and button-presses into trillion-dollar fortunes, social media excels at delivering us to these impasses of opinion, where we can only point and gasp at hypocrisy for the benefit of those who agree with us. We call this free speech, but it feels like something else, like a sad video game we play on our phones in bed until we fall asleep and the screen slowly goes black. FTFY.
Software’s been Fixing That For You since the 1950s. It started off slowly, the awkward preserve of reclusive engineers, but–I don’t have to tell you this, you already know–grew in scale and intensity like a wild avalanche until now, when it holds the power, depending on which expert is holding forth, to either destroy life on the planet or usher in a new era free of death, pain, and inequity. This bestows upon software the elemental power of nuclear fission. Yet until recently we accepted it with far less hand-wringing than we gave the atom. Is it too late?
The world-eating logic that propels software’s growth is “efficiency.” This is the Fix in FTFY. In his recent book, Coders, Clive Thompson describes the “primal urge to kill inefficiency” that drives software developers. “Nearly every [coder]” he interviewed for the book, Thompson writes, “found deep, almost soulful pleasure in taking something inefficient and ratcheting it up a notch.” I understand this urge. At work I have spent the same hours I would have spent downloading and renaming files writing a script to download and rename them instead. I’ve coded macros to make it easier to populate fields on contract templates instead of confronting the banality of existence by editing Microsoft Word documents manually. As a result of this urge, coders and capitalists argue, nearly everything we do today is more efficient, thanks to software, than it was ten years ago. As 5G transmitters make their way to cell towers around the world, the same argument goes, nearly everything we do tomorrow will be more efficient than it is today. We accept this, the way we accept new clothes or new toys.
We shun or diminish the things that software displaces. Landline phones are not merely obsolete, for example. They are laughably so. The checkbook register my teachers labored for me to understand in school simply vanished some time around 2005. I left $2,000 worth of CDs sitting next to a dumpster when I moved away from my hometown in 2008 because I had ripped them all to my computer and had an iPod. (I would later deeply regret this decision). Typewriters are a cute hobby for rich actors, rather than tools so vital that Hunter S. Thompson carted his IBM Selectric II from hotel to hotel on benders for forty years. Rejecting these things feels as much like a social gesture as a personal one. Who wants to be seen writing a check at the store? Who wants to talk on a landline phone?
Shunning inefficiency strengthens our commitment to software. This brings me back to Muir’s notebook. Muir had to see, to remember, to write once in his notebook and then write again to turn those notes into something useful. Seeing and remembering, rather than taking a picture: inefficient. Looking things up in a book when he returned from the field: inefficient. Taking notes on paper: inefficient. And yet I find when I go out into the woods with my phone, tablet, or computer and do what Muir did I see very little and remember even less. I write nothing; and nothing useful, beyond a beautiful afternoon and a vague green memory, comes of it.
This is mostly my fault. I could use these powerful tools, I guess, to cash in on efficiency and make something even better. But I don’t. Instead, I get distracted. I pull out my phone to take a picture and find that I have an email. I scroll Twitter for a moment, then Reddit, until I am drawn completely into the digital worlds on my screen, shifting from one screen to the next until I manage, like a drunk driver swerving back into his lane, to pull my eyes away. There is a moment of disorientation as I confront the world once again. I have to struggle to regain the reverie that drove me to reach for the phone in the first place. This part is not completely my fault. The dopamine-driven design language that drives us to distraction is well known. If I manage to overcome this pattern somehow and actually take the picture, it goes to Google Photos, one of several thousand pictures in the database that I will never seriously think about again. When I take notebooks into the woods, with pen and pencil and guide book, I do remember. I see and think and make things that feel useful.
More than merely remembering what I’ve seen, working without computer vision helps me see and learn more than I did before I put pencil to paper. Because I am a historian, always looking backward, my mind turns once again to old books and ideas. I am reminded of the nineteenth-century art critic, writer, and all-around polymath John Ruskin. Ruskin understood the power of intentional sight–the practiced vision aided by the trained eye of an artist–as a key to deeper understanding. “Let two persons go out for a walk,” he wrote in one thought experiment; “the one a good sketcher, the other having no taste of the kind.” Though walking down the same “green lane,” he continued, the two would see it completely differently. The non-sketcher would “see a lane and trees; he will perceive the trees to be green, though he will think nothing about it; he will see that the sun shines, and that it has a cheerful effect, but not that the trees make the lane shady and cool….”
What of the sketcher? “His eye is accustomed to search into the cause of beauty and penetrate the minutest parts of loveliness,” Ruskin explained. “He looks up and observes how the showery and subdivided sunshine comes sprinkled down among the gleaming leaves overhead,” for example. There would be “a hundred varied colors, the old and gnarled wood…covered with the brightness; … the jewel brightness of the emerald moss; …the variegated and fantastic lichens,” and so on. This, I argue, is the vision of the unaided eye in the twenty-first century. Unencumbered by the machines that reduce our experience to arrays of data, we can see the world in new and more meaningful ways.
More than a renowned art critic, Ruskin was an influential social reformer who believed that adult education, especially education in art, could relieve some of the alienation and misery suffered by workers who spent the majority of their lives operating machines. Workers in Ruskin’s era struggled for the eight-hour day, deploying the strike, the ballot, and the bomb for the right to enjoy more of their own time. Twenty years after his death, workers throughout the industrialized world seized the time to pursue the sort of self-improvement that Ruskin longed for them to enjoy. Because we can only believe in what Milan Kundera called the “Grand March” of history–that things are better today than they were yesterday, ever onward–we forget the flush of literacy, creativity, and prosperity that blossomed with the passage of the eight-hour workday. Some thirty years later, my grandfather still enjoyed the sort of self-directed existence Ruskin advocated.
Pop managed a water filter warehouse in Jacksonville, Florida, for thirty years after recovering from a gruesome leg injury he sustained in North Africa in 1944. At night, when my dad was a child, Pop took a radio repair correspondence course. He never finished high school but devoured books nonetheless, with a particular interest in anything he could find on Nazism. He kept a doorstop copy of Shirer’s Rise and Fall of the Third Reich on his living room chair. He subscribed to magazines, Popular Mechanics alongside the Saturday Evening Post–nothing highbrow, but dog-eared anyway–and read the newspaper religiously. There wasn’t much television to watch. Father and son built models together. They went fishing.
It was not a golden time by any means. Pop was a brooding, difficult man. He kept a bottle of gin hidden in the yard. He nursed grudges and pouted over a spare dinner of Great Northern beans. He dealt silently with a gnawing pain from the war in North Africa, it seems, until he couldn’t hold it in, once dressing up in his army uniform in the depths of a quietly furious drunk and threatening to leave the family. I don’t imagine he read his books and magazines when the black dog drove him to the bottle, but I hope he could take comfort in ideas nonetheless. My dad does. He chased away the lumber yard blues on Sunday nights, watching Nature on PBS and reading Kerouac on the side of the couch illuminated by the warm light from the kitchen. He executed masterful oil paintings on the kitchen table, weeknight after weeknight, amassing a room full of work that would make the neighbors gasp with delight at the jewel box in the back bedroom of the unassuming apartment upstairs. He passed some of this down to me, in turn, though I will never have the talent or the patience he poured into his work. I hope Pop gave that to us.
Pop was not alone in his evening pursuits, but it is hard to imagine a similar man pursuing the same interests today. In 2018 the Washington Post, interpreting survey results from the Bureau of Labor Statistics, reported that the share of Americans who read for pleasure had reached an all-time low, falling more than 30 percent since 2004. The share of adults who had not read a single book in a given year nearly tripled between 1978 and 2014. It is tempting to blame the internet and smartphones for this decline, but according to the Post it began in the 1980s. Screens account for the change all the same: television, first and mostly, but computers, too, and now phones and tablets. I have stared at a screen for ten hours today. There are still at least two hours of screen time left before I will lovingly set my phone in its cradle by the bed and fall asleep. I am not wringing my hands over the death of books. Ours is a highly literate era, awash in information. Drowning in text. I am wringing my hands over what seems like the dearth of deep thought, the kind of careful thinking that comes from reading without distraction, from looking without mediation, from quiet.
After a week tramping across the flat pine woods and swamps of North Florida, John Muir found himself in Cedar Key, a sleepy village on the coast that feels almost as remote today as it must have felt in the 1860s. “For nineteen years my vision was bounded by forests,” he wrote, “but to-day, emerging from a multitude of tropical plants, I beheld the Gulf of Mexico stretching away unbounded, except by the sky.” Then as now, however, Cedar Key was the end of the road. With no boats in the harbor and apparently little desire to move on to points further down the peninsula–and vanishingly few they would have been–Muir decided to take a job at a local sawmill and save money for passage on a timber ship bound for Galveston, due to arrive in a couple of weeks. He worked a day in the mill, but “the next day… felt a strange dullness and headache while I was botanizing along the coast.” Nearly overcome with exhaustion and an overwhelming desire to eat a lemon, he stumbled back toward the mill, passing out a few times along the way, and collapsed into a malarial fever. “I had night sweats,” he wrote, “and my legs became like… clay on account of dropsy.” Uncertain whether he would even stay in town when he arrived, Muir instead spent three months convalescing in the sawmill keeper’s house at the end of the world in Cedar Key.
Once he was strong enough to leave the house, the young naturalist made his faltering way down to the shore. “During my long stay here as a convalescent,” he recalled in his memoir, “I used to lie on my back for whole days beneath the ample arms of… great trees, listening to the winds and the birds.” I have spent long days and nights in the hospital. It is nearly impossible to imagine even a half-day in a recovery room without the option of scrolling the internet, watching TV, or playing a video game. I suppose, therefore, that I am thankful for software. It fixed boredom for me.
But still, Muir’s description of Cedar Key is warm, even wistful. It is easy to imagine that these fever days spent listening to the waves and thinking about plants and birds and life beneath the spreading Live Oak boughs on the desolate gulf coast of Florida contributed in a significant way to who he was about to become. Just a few months later, Muir was in California whooping with delight in the Yosemite Valley. It was there that he became Yosemite’s Muir, the preservationist sage of the Sierra Club and father of modern environmentalism. But perhaps we should rename a little stretch of the quiet wooded shore in Cedar Key the Muir Woods, too. The time Muir spent there in forced meditation seems to have shaped the man, if only slightly, as the forces of wind and water in their slight but constant way shaped El Capitan. There was nothing to fix.