Summary

At the start of the essay, Carr says that his recent difficulties with concentrating while reading books and long articles may be due to spending a lot of time on the Internet. He posits that regular Internet usage may have the effect of diminishing the capacity for concentration and contemplation. He prefaces his argument with a couple of anecdotes from bloggers on their changing reading habits, as well as the findings of a 2008 University College London study titled "Information Behaviour of the Researcher of the Future", which suggests the emergence of new types of reading. He cites Maryanne Wolf, an expert on reading, on the role of media and technology in learning written languages. Carr raises the point that unlike speech, which is an innate ability hardwired into the human brain, the ability to read has to be taught in order for the brain to rearrange its original parts for the task of interpreting symbols into words. He acknowledges that his argument does not yet have the backing of long-term neurological and psychological studies. Carr further draws on Wolf's work, particularly her 2007 book Proust and the Squid, to relate his argument to the way in which neural circuits in the reading brain are specifically shaped by the demands particular to each written language, such as Chinese, Japanese, and alphabet-based scripts.[26][23] Therefore, Carr argues that the neural circuitry shaped by regular Internet usage can also be expected to differ from that shaped by the reading of books and other page-based written material.

Carr begins his argument by reasoning how the capacity to concentrate may be weakened by regular Internet usage. He mentions a historical example involving Friedrich Nietzsche's usage of a typewriter, a fairly new technology in the 1880s. According to German scholar Friedrich A. Kittler, Nietzsche's prose style changed when he started using a typewriter, which he had adopted because of his developing difficulty with writing by hand due to failing eyesight. Carr proceeds to explain that scientific research in the field of neuroplasticity as of 2008 had demonstrated that the brain's neural circuitry can in fact be rewired. In the humanities, sociologist Daniel Bell coined the term "intellectual technologies" to describe those technologies that extend the brain's cognitive faculties, and Carr states his belief that the human brain adopts the qualities of these intellectual technologies. In discussing the mechanical clock, Carr deliberates upon the benefits and losses that are characteristic of new technologies. Carr then ventures that the cognitive impact of the Internet may be far more encompassing than that of any previous intellectual technology because the Internet is gradually performing the services of most intellectual technologies, thus replacing them. Finally, Carr contends that the prevalent style of presentation for much of the Internet's content may significantly hinder the capacity to concentrate because of the many distractions, such as ads and obtrusive notifications, that often surround that content. Additionally, he claims that these detrimental effects on concentration are compounded by traditional media, which are gradually adopting a style of presentation that mimics the Internet in order to remain competitive as consumer expectations change.

Carr also theorizes that the capacity to contemplate may diminish as computer algorithms unburden an Internet user's brain of much of the painstaking knowledge work — the manipulation of abstract information and knowledge — that was previously done manually. In comparing the Internet with Frederick Winslow Taylor's management system for industrial efficiency, Carr makes the point that some workers at the time complained that they felt they were becoming mere automatons due to the systematic application of Taylorism — a theory of management that analyzes and synthesizes workflow processes to improve labor productivity. Carr selects Google as a prime example of a company in which computer engineers and software designers have applied Taylorism to the knowledge industry, delivering increasingly robust information that may have the effect of minimizing opportunities to ponder ambiguities. Additionally, he argues that the Internet's dominant business model is one that thrives as companies either collect information on users or deliver them advertisements; companies therefore capitalize on users who move from link to link rather than on those who engage in sustained thought.

Finally, Carr places his skepticism in a historical context, reflecting upon how previous detractors of technological advances have fared. He points out that, although such skeptics were often correct, skepticisms such as Socrates' concerns about written language and the 15th-century Venetian editor Hieronimo Squarciafico's concerns about printed works failed to anticipate the benefits that these technologies might hold for human knowledge. In closing, a 2005 essay by playwright Richard Foreman is excerpted for its lament of the waning of the "highly educated and articulate personality".[27]
Reception

We can expect … that the circuits woven by our use of the Net will be different from those woven by our reading of books and other printed works.
— Nicholas Carr, "Is Google Making Us Stupid?".[24]

Carr's essay was widely discussed in the media both critically and in passing. While English technology writer Bill Thompson observed that Carr's argument had "succeeded in provoking a wide-ranging debate",[2] Damon Darlin of The New York Times quipped that even though "[everyone] has been talking about [the] article in The Atlantic magazine", only "[s]ome subset of that group has actually read the 4,175-word article, by Nicholas Carr."[28] The controversial online responses to Carr's essay were, according to Chicago Tribune critic Steve Johnson, partly the outcome of the essay's title "Is Google Making Us Stupid?", a question that the article proper doesn't actually pose and that he believed was "perfect fodder for a 'don't-be-ridiculous' blog post"; Johnson challenged his readers to carefully consider their online responses in the interest of raising the quality of debate.[3]

Many critics discussed the merits of Carr's essay at great length in forums set up formally for this purpose at online hubs such as the Britannica Blog and publisher John Brockman's online scientific magazine Edge, where the roster of names quickly took on the semblance of a Who's Who of the day's Internet critics.[29][30][31][32] Calling it "the great digital literacy debate", British-American entrepreneur and author Andrew Keen judged the victor to be the American reader, who was blessed with a wide range of compelling writing from "all of America's most articulate Internet luminaries".[32]

Book critic Scott Esposito pointed out that Chinese characters are incorrectly described as ideograms in Carr's essay, an error that he believed undermined the essay's argument.[33] The myth that Chinese script is ideographic had been effectively debunked in scholar John DeFrancis' 1984 book The Chinese Language: Fact and Fantasy;[34] DeFrancis classifies Chinese as a logosyllabic writing system.[35] Carr acknowledged that there was a debate over the term 'ideogram', but in a response to Esposito he explained that he had "decided to use the common term" and quoted The Oxford American Dictionary to demonstrate that it likewise defines Chinese characters as instances of ideograms.[36]

Writer and activist Seth Finkelstein noted that predictably several critics would label Carr's argument as a Luddite one,[37] and he was not to be disappointed when one critic later maintained that Carr's "contrarian stance [was] slowly forcing him into a caricature of Luddism".[38] Then, journalist David Wolman, in a Wired magazine piece, described as "moronic" the assumption that the web "hurts us more than it helps", a statement that was preceded by an overview of the many technologies that had been historically denounced; Wolman concluded that the solution was "better schools as well as a renewed commitment to reason and scientific rigor so that people can distinguish knowledge from garbage".[39]

Several prominent scientists working in the field of neuroscience supported Carr's argument as scientifically plausible. James Olds, a professor of computational neuroscience, who directs the Krasnow Institute for Advanced Study at George Mason University, was quoted in Carr's essay for his expertise, and upon the essay's publication Olds wrote a letter to the editor of The Atlantic in which he reiterated that the brain was "very plastic" — referring to the changes that occur in the organization of the brain as a result of experience. It was Olds' opinion that given the brain's plasticity it was "not such a long stretch to Carr's meme".[40] One of the pioneers in neuroplasticity research, Michael Merzenich, later added his own comment to the discussion, stating that he had given a talk at Google in 2008 in which he had asked the audience the same question that Carr asked in his essay. Merzenich believed that there was "absolutely no question that our brains are engaged less directly and more shallowly in the synthesis of information, when we use research strategies that are all about 'efficiency', 'secondary (and out-of-context) referencing', and 'once over, lightly'".[41] Another neuroscientist, Gary Small, director of UCLA's Memory & Aging Research Center, wrote a letter to the editor of The Atlantic in which he stated that he believed that "brains are developing circuitry for online social networking and are adapting to a new multitasking technology culture".[42]
Testimonials and refutations

In the media, journalists offered many testimonials and refutations of the first part of Carr's argument, regarding the capacity for concentration; treatments of the second part, regarding the capacity for contemplation, were far rarer.[43] Although columnist Andrew Sullivan noted that he had far less leisure time for contemplation than when he was growing up,[44] the anecdotes journalists provided that indicated a deficiency in the capacity to contemplate were described only in the context of third parties, such as columnist Margaret Wente's anecdote about one consultant who had found a growing tendency among her clients to provide ill-considered descriptions of their technical problems.[45][43]

Columnist Leonard Pitts of The Miami Herald described his difficulty sitting down to read a book, an activity during which he felt like he "was getting away with something, like when you slip out of the office to catch a matinee".[46] Technology evangelist Jon Udell admitted that, in his "retreats" from the Internet, he sometimes struggled to settle into "books, particularly fiction, and particularly in printed form".[47] He found portable long-form audio to be "transformative", however, because he can easily achieve "sustained attention", which makes him optimistic about the potential to "reactivate ancient traditions, like oral storytelling, and rediscover their powerful neural effects".[47][9]

Firmly contesting Carr's argument, journalist John Battelle praised the virtues of the web: "[W]hen I am deep in search for knowledge on the web, jumping from link to link, reading deeply in one moment, skimming hundreds of links the next, when I am pulling back to formulate and reformulate queries and devouring new connections as quickly as Google and the Web can serve them up, when I am performing bricolage in real time over the course of hours, I am 'feeling' my brain light up, I and [sic] 'feeling' like I'm getting smarter".[2][48] Web journalist Scott Rosenberg reported that his reading habits are the same as they were when he "was a teenager plowing [his] way through a shelf of Tolstoy and Dostoyevsky".[49] In book critic Scott Esposito's view, "responsible adults" have always had to deal with distractions, and, in his own case, he claimed to remain "fully able to turn down the noise" and read deeply.[33][43]
Analysis

In critiquing the rise of Internet-based computing, commentators raised the philosophical question of whether or not a society can control technological progress. At the online scientific magazine Edge, Wikipedia co-founder Larry Sanger argued that individual will was all that was necessary to maintain the cognitive capacity to read a book all the way through, and computer scientist and writer Jaron Lanier rebuked the idea that technological progress is an "autonomous process that will proceed in its chosen direction independently of us".[31] Lanier echoed a view stated by American historian Lewis Mumford in his 1970 book The Pentagon of Power, in which Mumford suggested that the technological advances that shape a society could be controlled if the full might of a society's free will were employed.[50][23] Lanier believed that technology was significantly hindered by the idea that "there is only one axis of choice", which is either pro- or anti- when it comes to technology adoption.[31] Yet Carr had stated in The Big Switch that he believed an individual's personal choice toward a technology had little effect on technological progress.[51][23] According to Carr, the view expressed by Mumford about technological progress was incorrect because it regarded technology solely as advances in science and engineering rather than as an influence on the costs of production and consumption. Economics was a more significant consideration in Carr's opinion because, in a competitive marketplace, the most efficient methods of providing an important resource will prevail. As technological advances shape society, an individual might be able to resist the effects, but his lifestyle will "always be lonely and in the end futile"; despite a few holdouts, technology will nevertheless shape economics, which, in turn, will shape society.[51][23]
A focus on literary reading

The selection of one particular quote in Carr's essay from pathologist Bruce Friedman, a member of the faculty of the University of Michigan Medical School, who commented on a developing difficulty reading books and long essays and specifically the novel War and Peace, was criticized for having a bias toward narrative literature. The quote failed to represent other types of literature, such as technical and scientific literature, which had, in contrast, become much more accessible and widely read with the advent of the Internet.[52][24] At the Britannica Blog, writer Clay Shirky pugnaciously observed that War and Peace was "too long, and not so interesting", further stating that "it would be hard to argue that the last ten years have seen a decrease in either the availability or comprehension of material on scientific or technical subjects".[38] Shirky's comments on War and Peace were derided by several of his peers as verging on philistinism.[53][54][25] In Shirky's defense, inventor W. Daniel Hillis asserted that, although books "were created to serve a purpose", that "same purpose can often be served by better means". While Hillis considered the book to be "a fine and admirable device", he imagined that clay tablets and scrolls of papyrus, in their time, "had charms of their own".[31] Wired magazine editor Kevin Kelly believed that the idea that "the book is the apex of human culture" should be resisted.[7] And critic Sven Birkerts differentiated online reading from literary reading, stating that in the latter the reader is directed within themselves and enters "an environment that is nothing at all like the open-ended information zone that is cyberspace" in which he feels psychologically fragmented.[55][29]
Coping with abundance

Abundance of books makes men less studious.
— Hieronimo Squarciafico, a 15th-century Venetian editor, bemoaning the printing press.[56][57]

Several critics theorized about the effects of the shift from scarcity to abundance of written material in the media as a result of the technologies introduced by the Internet. This shift was examined for its potential to lead individuals to a superficial comprehension of many subjects rather than a deep comprehension of just a few subjects. According to Shirky, an individual's ability to concentrate had been facilitated by the "relatively empty environment" which had ceased to exist when the wide availability of the web proliferated new media. Although Shirky acknowledged that the unprecedented quantity of written material available on the web might occasion a sacrifice of the cultural importance of many works, he believed that the solution was "to help make the sacrifice worth it".[38] In direct contrast, Sven Birkerts argued that "some deep comprehension of our inheritance [was] essential", and called for "some consensus vision among those shapers of what our society and culture might be shaped toward", warning against allowing the commercial marketplace to dictate the future standing of traditionally important cultural works.[58] While Carr found solace in Shirky's conceit that "new forms of expression" might emerge to suit the Internet, he considered it a matter of faith rather than reason.[25] In a later response, Shirky continued to expound upon his theme that "technologies that make writing abundant always require new social structures to accompany them", explaining that Gutenberg's printing press led to an abundance of cheap books which were met by "a host of inventions large and small", such as the separation of fiction from non-fiction, the recognition of talents, the listing of concepts by indexes, and the practice of noting editions.[52]
Impact of the web on memory retention

As a result of the vast stores of information made accessible on the web, a few critics pointed to a decrease in the desire to recall certain types of information, indicating, they believed, a change in the process of recalling information, as well as the types of information that are recalled. According to Ben Worthen, a Wall Street Journal business technology blogger, the growing importance placed on the ability to access information instead of the capacity to recall information straight from memory would, in the long term, change the type of job skills that companies who are hiring new employees would find valuable. Due to an increased reliance on the Internet, Worthen speculated that before long "the guy who remembers every fact about a topic may not be as valuable as the guy who knows how to find all of these facts and many others".[59][43] Evan Ratliff of Salon.com wondered if the usage of gadgets to recall phone numbers, as well as geographical and historical information, had the effect of releasing certain cognitive resources that in turn strengthened other aspects of cognition. Drawing parallels with transactive memory — a process whereby people remember things in relationships and groups — Ratliff mused that perhaps the web was "like a spouse who is around all the time, with a particular knack for factual memory of all varieties".[29] Far from conclusive, these ruminations left the web's impact on memory retention an open question.[29]
Themes and motifs
Effect of technology on the brain's neural circuitry
An 1878 model of the Malling-Hansen Writing Ball, which Nietzsche began using in 1882 when his poor eyesight made it difficult for him to write by hand.[60][61]

In the essay, Carr introduces the discussion of the scientific support for the idea that the brain's neural circuitry can be rewired with an example in which philosopher Friedrich Nietzsche is said to have been influenced by technology. According to German scholar Friedrich A. Kittler in his book Gramophone, Film, Typewriter, Nietzsche's writing style became more aphoristic after he started using a typewriter. Nietzsche had begun using a Malling-Hansen Writing Ball because his failing eyesight made it difficult for him to write by hand.[23][62] The idea that Nietzsche's writing style had changed for better or worse when he adopted the typewriter was disputed by several critics. Kevin Kelly and Scott Esposito each offered alternate explanations for the apparent changes.[33][63][31] Esposito believed that "the brain is so huge and amazing and enormously complex that it's far, far off base to think that a few years of Internet media or the acquisition of a typewriter can fundamentally rewire it."[33] In a response to Esposito's point, neuroscientist James Olds stated that recent brain research demonstrated that it was "pretty clear that the adult brain can re-wire on the fly". In The New York Times it was reported that several scientists believed it was certainly plausible that the brain's neural circuitry may be shaped differently by regular Internet usage than by the reading of printed works.[6]

Although there was a consensus in the scientific community that the brain's neural circuitry could change through experience, the potential effect of web technologies on the brain's neural circuitry was unknown.[40][41] On the topic of the Internet's effect on reading skills, Guinevere F. Eden, director of the Center for the Study of Learning at Georgetown University, remarked that the question was whether or not the Internet changed the brain in a way that was beneficial to an individual.[6] Carr believed that the effect of the Internet on cognition was detrimental, weakening the ability to concentrate and contemplate. Olds cited the potential benefits of computer software that specifically targets learning disabilities, stating that among some neuroscientists there was a belief that neuroplasticity-based software was beneficial in improving receptive language disorders.[40] Olds mentioned neuroscientist Michael Merzenich, who had formed several companies with his peers in which neuroplasticity-based computer programs had been developed to improve the cognitive functioning of children, adults, and the elderly.[40][64] In 1996, Merzenich and his peers had started a company called Scientific Learning, which used neuroplasticity research to develop a computer training program called Fast ForWord that offered seven brain exercises that improved language impairments and learning disabilities in children.[65] Feedback on Fast ForWord showed that these brain exercises even had benefits for autistic children, an unexpected spillover effect that Merzenich attempted to harness by developing a modification of Fast ForWord specifically designed for autism.[66] At a subsequent company that Merzenich started, Posit Science, Fast ForWord-like brain exercises and other techniques were developed with the aim of sharpening the brains of elderly people by preserving the plasticity of their brains.[67]
HAL in 2001: A Space Odyssey

In Stanley Kubrick's 1968 science fiction film 2001: A Space Odyssey, astronaut David Bowman slowly disassembles the mind of an artificial intelligence named HAL by sequentially unplugging its memory banks. Carr likened the despair expressed by HAL as its mind is disassembled to his own cognitive difficulties, at the time, in engaging with long texts.[2] He felt as if someone was "tinkering with [his] brain, remapping the neural circuitry, reprogramming the memory".[24] HAL had also been used as a metaphor for the "ultimate search engine" in a PBS interview with Google co-founder Sergey Brin, as noted in Carr's book The Big Switch, and in Brin's TED talk. Brin was comparing Google's ambitions of building an artificial intelligence to HAL, while dismissing the possibility that a bug like the one that led HAL to murder the occupants of the fictional spacecraft Discovery One could occur in a Google-based artificial intelligence.[68][69][23] Carr observed in his essay that throughout history technological advances have often necessitated new metaphors, such as the mechanical clock engendering the simile "like clockwork" and the age of the computer engendering the simile "like computers". Carr concluded his essay with an explanation as to why he believed HAL was an appropriate metaphor for his essay's argument. He observed that HAL showed genuine emotion as his mind was disassembled while, throughout the film, the humans aboard the spacecraft appeared to be automatons, thinking and acting as if they were following the steps of an algorithm. Carr believed that the film's prophetic message was that as individuals increasingly rely on computers for an understanding of their world, their intelligence may become more machinelike than human.[2][24]
Developing view of how Internet use affects cognition

The brain is very specialized in its circuitry and if you repeat mental tasks over and over it will strengthen certain neural circuits and ignore others.
— Gary Small, a professor at UCLA's Semel Institute for Neuroscience and Human Behaviour.[70]

After the publication of Carr's essay, a developing view unfolded in the media as sociological and neurological studies surfaced that were relevant to determining the cognitive impact of regular Internet usage. Challenges to Carr's argument were made frequently. Kevin Kelly appealed to Carr and Birkerts, as the two most outspoken detractors of electronic media, to each formulate a more precise definition of the faults they perceived in electronic media so that their beliefs could be scientifically verified.[71] While Carr firmly believed that his skepticism about the Internet's benefits to cognition was warranted,[25] he cautioned in both his essay and his book The Big Switch that long-term psychological and neurological studies were required to definitively ascertain how cognition develops under the influence of the Internet.[72][24][3]

Scholars at University College London conducted a study titled "Information Behaviour of the Researcher of the Future", the results of which suggested that students' research habits tended towards skimming and scanning rather than in-depth reading.[73] The study provoked serious reflection among educators about the implications for educational instruction.[74]

In October 2008, new insights into the effect of Internet usage on cognition were gleaned from the results, reported in a press release,[75] of a study conducted by UCLA's Memory and Aging Research Center that had tested two groups of people between the ages of 55 and 76, only one of which was experienced in using the web. While they read books or performed assigned search tasks, their brain activity was monitored with functional MRI scans, which revealed that both reading and web search use the same language, reading, memory, and visual regions of the brain; however, it was discovered that searching the web also stimulated additional decision-making and complex reasoning regions of the brain, with a two-fold increase in activity in these regions among experienced web users compared with inexperienced web users.[76][77][78][79] Gary Small, the director of the UCLA center and lead investigator of the study, released the book iBrain: Surviving the Technological Alteration of the Modern Mind, co-authored with Gigi Vorgan, concurrently with the press release.

While one set of critics and bloggers used the UCLA study to dismiss the argument raised in Carr's essay,[80][81] another set took a closer look at the conclusions that could be drawn from the study concerning the effects of Internet usage.[82] Among the reflections on the possible interpretations of the UCLA study were whether the greater breadth of brain activity observed while using the Internet, in comparison with reading a book, improved or impaired the quality of a reading session, and whether the decision-making and complex reasoning skills apparently involved in Internet search, according to the study, suggest a high quality of thought or simply the use of puzzle-solving skills.[83][84] Thomas Claburn, in InformationWeek, observed that the study's findings regarding the cognitive impact of regular Internet usage were inconclusive and stated that "it will take time before it's clear whether we should mourn the old ways, celebrate the new, or learn to stop worrying and love the Net".[5]
Related articles