Thursday, April 3, 2014

Building An (online) Room of One's Own

What are the basic ingredients in creating an online space of one's own? In the past, when a room required materials -- wood, plaster, nails, roofing, and so forth -- the question was one of cost, of economy; Thoreau kept track of every nail used to build his cabin on Walden Pond. Now, one can build a fairly spacious dwelling for no cost whatsoever -- aside from one's time! -- but as the choices grow wider the process gets murkier. Do you want, or need, to "blog"? Do you want or need to "share"? What streets of information or social activity do you want your online doors to open into, or out from? Who would you like your neighbors to be? And do you feel comfortable hanging advertising signage on your online dwelling-place, or would you pay a certain premium not to have ads? And, once your home exists, what do you want to do with it? Who do you want to "stop by"? How often do you yourself want to "be" there?

And maybe you don't need to start from scratch -- don't you already have some places online where you tend to go, or have made some sort of nook or cranny for yourself? Most online services offer some kind of "hub" where you can pull "yourself" together; Google has your Google+ profile page; Yahoo! offers MyYahoo!; and of course Facebook offers the ever-changing format of one's Facebook Profile page. Beyond these major players, dozens of services will enable you to cobble together a personal web page (no coding required! they say), or you can get a slightly higher level of service with a paid home, which can come with domain and/or hosting services, using one of the big providers such as GoDaddy. At the top of the cost list, you could register a custom domain name, using some version of your own name or a phrase you've chosen, and hire a web designer to put together something more singular for a few hundred dollars.

But a true online home is more than that, I'm willing to bet. Home is a place where no one but you re-arranges the furniture. A place where friends, but not trolls, come to visit. A place where services and media you really enjoy and have chosen -- your books, your films, your e-mail -- are near at hand, and free of advertising. And a place "near" to other places, other friends' homes, where you like to go. And, in the cacophonous world of the Internet today, it's getting a bit harder to find.

Sunday, March 30, 2014

The future of writing

The revolution that was alphabetic writing is still resonating through human culture and society more than three thousand years after the Phoenicians invented it, and the Greeks improved upon it. It has given every religion on earth its sacred texts, and preserved them -- even as disagreements over how such books were to be interpreted have led to sectarian wars in which hundreds of thousands of people have died. It has given every civilization its laws, every language its literature, and every people their history.

Along the way, there have been revolutions within this revolution: the invention of paper in China in 200 BC, of moveable type by Johannes Gutenberg in the 1430's, the publication of the first newspapers in the 1600's, and Ottmar Mergenthaler's invention of the Linotype in 1884. But the electronic revolution, unlike these others, has not only improved the speed and efficiency with which the written word can be distributed, but may also change the very nature of writing itself.

It hasn't always been possible to earn a living by writing. In the early days, being a writer required a wealthy patron; Chaucer had John of Gaunt, and Spenser had the Earl of Leicester. Books remained expensive, but the writer's share in their sale was small, and pirated editions common. The Statute of Anne in 1710 established copyright in England, but it was at least another century before anyone could actually sustain life on book royalties alone. What was missing was a mass audience of literate people who could afford to buy or rent books. Public education eventually brought such an audience into being, and public libraries and book-lending services such as Mudie's (the "Netflix of Victorian Literature") made reading an affordable habit.

One of the most notable beneficiaries of this new reading public was Charles Dickens, whose books -- sold originally in serial installments -- made him a wealthy man, although in his later years he also made a good deal of his income from public readings. We all know the rags-to-riches tale of J.K. Rowling, whose Harry Potter books made her one of the wealthiest women in the United Kingdom, with riches reportedly exceeding those of the Queen (who's actually running a bit low on cash). And yet Rowling's rise came before the arrival of e-books as a force to be reckoned with, and whether this new format will help or harm writers is still uncertain.

One thing electronic publication has already done, though, is to make writers of us all; indeed there may be more writers of certain kinds of material than there are people interested in reading it. Like the "I'm DJ'ing" segment of Portlandia, where everyone is DJ'ing, we are entering into a world where anyone who wants to can be an "author" via self-publishing services such as Smashwords, Author Solutions, and amazon.com. At the same time, for many authors whose books are brought out via traditional publishers, it's getting harder to make a living; for every Anne Rice and Stephen King there are hundreds of novelists who struggle to make ends meet, and can't quit their day jobs.

It's a strange moment -- half opportunity, half cosmic joke: now that anyone can get their writing "out there," there's too much out there to sort through, and becoming "known" is more difficult than ever.

Friday, March 21, 2014

The Long and the Short of It

Sometimes it seems that everything is getting shorter -- books, magazine articles, e-mails, and attention spans. Recent studies suggest that many corporations are wary of investing in advertising in longer-form publications, and are looking for ways to promote their brands in "short-form media." The cost is surely less, and the question arises: are a hundred animated GIFs and sidebars worth more than a 60-second spot on what remains of "television," where the Nielsen ratings (the percentage of households tuned in to a given program) have declined, along with the total number of "television households" surveyed? The miniseries Roots managed a 66 percent share, with 100 million viewers for its final episode -- whereas in 2013, the number one television show, The Big Bang Theory, netted only a 9.8 share, or about 16 million viewers. One begins to feel that, even if the headline were "World Ends Tomorrow," it would be hard to get figures like those of Roots for any program today.

But of course there have always been short forms, and just because the average book seems to be getting shorter doesn't mean that only short books are viable. Elizabeth Kostova found an agent -- and a publisher -- for her debut novel The Historian.  She got a six-figure advance and the book was a runaway best-seller.  It's 240,000 words long. David Foster Wallace's Infinite Jest (ok, everyone but me thinks he was a genius) runs to 484,000 words; Vikram Seth's very successful second novel A Suitable Boy runs to 590,000 words, though admittedly it's exceptional. And when something "long" is made of "short" pieces, there seems to be no limit to length; I've known friends who, falling in love with a TV series such as The Wire, have managed to watch all 60 episodes in the course of a week or two -- six times the length of Roots. Like those who read the serialized novels of Dickens, we may not quite realize what all the parts add up to until we've read them -- and even then, we may be hungry for more.

But one must wonder: how will those forms that are inherently long, bulky, and bound to their physical forms -- the epic, the fantasy trilogy, the authoritative biography -- fare in the land of instant knowledge, of reciprocal mini-citations like those in the "did you read that" episode of Portlandia? What does it mean that some book publishers are launching whole new divisions dedicated to bringing the latest blog into print? Will on-demand video and YouTube, as the movie studios worried that television would do back in the 1950's, be the death knell of long-form entertainment in cinemas?

To which I can only answer, in the lingo of old-fashioned radio and television broadcasts, "Stay tuned ... "

Tuesday, March 4, 2014

Hackers, Phreaks, and Jammers

The man shown here doesn't seem like much of a danger to anyone. Armed only with a plastic whistle that came free in a box of Cap'n Crunch, along with a latter-day version of the 'blue box' in his left hand (actually an iPhone), this man was once regarded as a dangerous criminal mastermind, one whose activities were worthy of the scrutiny of the FBI and Interpol.

He's John Thomas Draper, a legendary figure among the first 'phone phreaks' to find a way to get free calls through the Bell Telephone system. Phones in the early 1970's used a series of tones to activate and direct calls; using the toy whistle, Draper was able to fool the system into authorizing long-distance calls without any charges; it just so happened that the frequency of the whistle -- 2600 Hz -- was the same tone the phone company used to signal that a long-distance trunk line was idle. Draper and other "phreaks" used the trick for prank calls, to call each other, and to test their ability to route a call over as long a distance as possible. Among those who found this exciting were a couple of California kids who have since become well-known -- Steve Jobs and Steve Wozniak.
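For the curious, here's a minimal Python sketch, using only the standard library, that writes one second of a 2600 Hz sine tone to a WAV file -- the very frequency Draper's whistle produced. It's strictly a historical curiosity (in-band signaling of this kind vanished from the phone network decades ago, so the tone now fools nothing), and the filename and parameters are arbitrary.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100   # samples per second
FREQ_HZ = 2600        # the frequency of the Cap'n Crunch whistle
DURATION_S = 1.0      # one second of tone

# Generate the tone as 16-bit signed samples.
samples = [
    int(32767 * math.sin(2 * math.pi * FREQ_HZ * n / SAMPLE_RATE))
    for n in range(int(SAMPLE_RATE * DURATION_S))
]

# Write the samples out as a mono, 16-bit WAV file.
with wave.open("tone_2600.wav", "wb") as wav:
    wav.setnchannels(1)              # mono
    wav.setsampwidth(2)              # 2 bytes per sample = 16-bit audio
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(struct.pack("<%dh" % len(samples), *samples))
```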

The "phreak" culture, in retrospect, wasn't really all that dangerous. All they "stole" was phone time; they didn't seek to defraud customers, steal peoples' identities, or collect anyone's data.  But as the use of phones, and phone modems, became more and more widespread, the stakes of this kind of activity -- or hacktivity as some call it -- have increased. Hackers who have become familiar with weaknesses in operating systems, encryption programs, Java scripts, and various "Trojan Horse" programs that gain access by seeming innocuous, have targeted corporate and government entities, and some of them quite gleefully seize personal financial information when they can get it. Some, such as Wikileaks and the Anonymous group, see theirs as important investigative work, as well as a political jab against what they regard as the government's unjustified power to keep things secret. Others, such as the group of Chinese military hackers associated with the "Comment Crew" in Shanghai, use cyber attacks as a form of political and economic subterfuge.  Still others, the purists I suppose we could call them, simply hack to show that they can; in them, the playful pride of the original phreaks lives on.

Spammers, of course, purvey the most annoying class of Internet junk, and their more dangerous cousins, the phishers, are all too willing to lay traps for the unsuspecting. Few of them, however, seem to have any agenda larger than stealing money.

Jammers are perhaps the most politically purposeful of all who have sought to use the media against the media, and many of them employ a wide variety of techniques beyond the Internet, though the 'Net often amplifies their effects. The Guerrilla Girls made feminist incursions into the media while wearing gorilla masks, and the Yes Men famously staged fake press conferences, at one of which they pretended to be representatives of the Union Carbide corporation who wished to apologize -- as the real firm had not -- for the chemical disaster at Bhopal in India.

Tuesday, February 25, 2014

Duplicates

"Don't you know what duplicates are?" an incredulous Groucho Marx asks brother Chico in one of their better-known skits. "Sure," replies Chico, "that's a five kids up in Canada." This was of course a reference to the Dionne quintuplets born in Ontario in 1934, two of whom are still alive today. But of course we all know what duplicates are -- or do we?

Ever since the invention of writing, the making of copies has been crucial to the effectiveness and reach of the written word. The ancient Romans employed slaves to make copies; a single slave shouted out the text to be copied, and a hundred others wrote it down. In the Middle Ages, monks accustomed to vows of silent contemplation couldn't take advantage of such means; for them, copies were made one at a time.

Today, thanks to the photocopier, of course, a monk can have 500 copies in a few minutes, as depicted in this famous 1970's TV commercial for Xerox. It's a miracle! But of course it was a long road that led to such dazzling achievements, a road littered with media that have since become obsolete, from the Gutenberg Press through the Hektograph, Mimeograph, and Gestetner machines, the original Xerox technology, and the laser scanner. And now, with the possibility of a document which exists, almost simultaneously, on thousands of servers around the world, or on a "cloud" system that enables its instant downloading and printing nearly anywhere on earth, we've reached the point where the difference between an "original" and a "copy" is more a matter of syntax and situation than any material reality. I've been to the London home of the Gestetner family, one wall of which is lined with a series of photographs of Gestetner duplicators being presented as a gift to each new Pope -- but our next Pope won't get one, nor will he need it. He'll probably just tweet, anyway, or distribute his encyclicals via the Vatican's vast website.

The value of a copy is in its portability; in the ability to own it, or to transfer ownership of it; in the ability to send it across distance; and in its capacity to preserve its contents over time, even if other copies are damaged or destroyed. Intellectual property in written works has always been conceived of as the right to create and sell copies -- the copyright.

The very first legal recognition of the rights of an author was the "Statute of Anne" in 1710. It presented itself as "an act for the encouragement of learning," with the implicit argument that allowing authors the exclusive right to publish their work for a limited term would enable them to earn some reward for their labors, while at the same time eventually allowing their work to be used freely. As with earlier systems of intellectual property, such as "Letters Patent," the Act's term was limited -- 14 years, which could be extended for 14 more, after which the rights of the author expired; it was understood then, as it is now, that authors, like inventors, quite frequently draw from the works of those who came before them, and that preserving such rights indefinitely would stifle creativity. One thing that has certainly changed since 1710 is the term of copyright; US copyright eventually settled on a period twice as long as the Statute of Anne (28 years, renewable for 28 more years); revisions to the law over the past several decades have extended these 56 years to as many as 95, or even 120, years; the last of these revisions, the "Sonny Bono Copyright Term Extension Act," went further still, effectively freezing the public domain at works published before 1923. Many creative artists feel that this law has exercised a stifling effect upon creativity; many of them joined in support of a legal case, Eldred v. Ashcroft, that challenged these extensions on the basis of the Constitution's reference to copyright law being for a "limited term." The Supreme Court eventually ruled against Eldred, saying in effect that Congress could establish any length of term it wanted, so long as it was not infinite. Could is, of course, not should.
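To put the arithmetic in one place, here is a toy Python sketch comparing when a work published in a hypothetical year would enter the public domain under the Statute of Anne's 14-plus-14 scheme, the older American 28-plus-28 scheme, and a flat 95-year term of the sort that now governs many older published works. It is purely illustrative; real copyright calculations turn on renewal formalities, publication dates, and authors' lifespans.

```python
def public_domain_year(published, initial_term, renewal_term=0):
    """Year a work enters the public domain, assuming the maximum
    term (initial plus renewal) was claimed. A deliberate
    simplification of the actual statutes."""
    return published + initial_term + renewal_term

work = 1900  # hypothetical publication year

print("Statute of Anne (14 + 14):", public_domain_year(work, 14, 14))  # 1928
print("Early US law    (28 + 28):", public_domain_year(work, 28, 28))  # 1956
print("Flat 95-year term        :", public_domain_year(work, 95))      # 1995
```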

The result has been, ironically, that in the very age when the ability of writers, artists, and musicians to draw upon, alter, and incorporate what the copyright office calls "previously existing works" is at its greatest, the legal barriers against doing so are the highest and longest-lasting in the history of copyright protection. This is offset, to a degree, by two factors: 1) "fair use," a doctrine codified in the 1976 revision of the law, whereby a limited amount of a work -- as a rough rule of thumb, often put at less than 10% of the original -- may be used, particularly when the use is non-commercial, educational, or spontaneous; and 2) simple lack of enforceability. It's quite impossible to police all the billions of web servers, web pages, and personal computers and devices to ensure that no copyrighted material has been taken or stored; enforcement, as a result, tends to be spotty, if dramatic (as in the case of a Midwestern woman who was assessed 1.5 million dollars in damages for sharing two dozen songs over a peer-to-peer network).

It needs to be noted that copyright also functions very differently depending on the medium in question. Printed texts are straightforward enough, but in the case of physical media such as a sculpture or a painting, possession of the physical object confers a different sort of control: the owner or rights-holder may -- if desired -- restrict or prohibit "derivative" works such as photographs of the piece, although the status of non-manipulated or "slavish" copies is a murky one. Music is the most complex form: there are at least four layers of copyright in a recorded song: 1) the composition itself, and its embodiment in sheet music; 2) the performance of that composition on the recording, including the act of interpretation and any variations on the composition; 3) the physical embodiment, if any, of that performance, known as "mechanical" rights; and 4) the right to transmit the performance. All of these, of course, were once separate domains -- the sheet-music and print trade, the recording studio, the record company or "label," and the radio station -- but all are now merged indistinctly into a single, complex activity that can be carried out on a single device, even a smartphone.

But the fundamental problem is that copyright consists of a right to make a "copy" -- and there's no longer a fixed, essential value in that -- not in a world in which everything is, in a sense, already copied.

Thursday, February 20, 2014

The Commodification of the Self

We all enjoy the sense that we are somebody -- that our drab, dreary lives possess some greater meaning, that our hopes, dreams, and aspirations may some day take tangible form. But in the meantime, while we've been learning and laboring and dreaming, all of the droplets of our online lives are being constantly collected like Elvis's sweat, bottled and packaged, searched through, rented, and sold. Of course, we're told that all of our "identifying information" has been removed -- we're just part of a vast agglomeration of data, after all -- but if someone wants to know how many people who play World of Warcraft are also regular customers at McDonald's, watch pay-per-view sports, or make frequent visits to Dave & Buster's, then the Data Oracle can "mine" this information for answers. And, to an extent, once "mined," this data can be used to send back targeted ads and offers, such as a Dave & Buster's coupon for anyone who buys a custom mount in Warcraft. The system won't "know" that you'll be interested in such a thing -- but it may know that you are more likely to take the bait than some random person, and that knowledge, my friends, is POWER.
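As a toy illustration of the kind of cross-referencing described above, here's a hedged Python sketch: a few entirely made-up, "anonymized" behavioral records are filtered to find users who both play World of Warcraft and frequent Dave & Buster's, and those users are flagged for the coupon. Real ad-targeting systems are vastly more elaborate, but the underlying join-and-filter logic is much the same.

```python
# Entirely fabricated sample data: no names, just opaque IDs and
# behavioral attributes -- the sort of "de-identified" profile
# described above.
profiles = [
    {"id": "u001", "plays_warcraft": True,  "visits_daves": True},
    {"id": "u002", "plays_warcraft": True,  "visits_daves": False},
    {"id": "u003", "plays_warcraft": False, "visits_daves": True},
]

# "Mine" the data: who is most likely to take the bait on a
# Dave & Buster's coupon bundled with an in-game purchase?
targets = [p["id"] for p in profiles
           if p["plays_warcraft"] and p["visits_daves"]]

print("Send the coupon to:", targets)   # ['u001']
```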

What can one do?  Well, you can travel the web with cookies and scripts turned off; you can filter your internet connection through a bunch of remote hosts that "scrub" off your identifying information; you can use remote anonymous e-mail accounts and encrypt all your messages with PGP. But if you do all these things, a big part of the value that can be derived from the Internet will be missing; you won't be able to share content easily with more than a few friends, shop online at most retailers, or host your own publicly-accessible web content.
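For what it's worth, here is a minimal sketch of that locked-down approach using Python's requests library, assuming a local Tor SOCKS proxy is listening on port 9050 and the optional requests[socks] extra is installed: it routes traffic through the proxy and refuses to store any cookies. It also illustrates the trade-off -- browsed this way, many sites simply won't work.

```python
from http import cookiejar
import requests

class BlockAllCookies(cookiejar.CookiePolicy):
    """A cookie policy that rejects every cookie it is offered."""
    return_ok = set_ok = domain_return_ok = path_return_ok = (
        lambda self, *args, **kwargs: False
    )
    netscape = True
    rfc2965 = hide_cookie2 = False

session = requests.Session()
session.cookies.set_policy(BlockAllCookies())

# Route everything through a local Tor SOCKS proxy (assumed to be
# running on 127.0.0.1:9050); "socks5h" resolves DNS through the
# proxy as well, so lookups don't leak.
session.proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

response = session.get("https://example.com")
print(response.status_code)
```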

There is, however, another way. You can use the system that uses you, and (with luck) you can get more out of it than the system gets out of you. The key question was first asked way back in 1968 by Doug Engelbart, who with his team at the Stanford Research Institute developed the first mouse, the first graphical interface, the first collapsible menus, and many other things we take for granted:
"If in your office you as an intellectual worker were supplied with a computer display backed up by a computer that was alive for you all day and was instantly responsive to every action you have, how much value could you derive from that?"
Engelbart demonstrated some basic things: keeping track of shopping needs, simple word-processing, sharing documents, and mapping an efficient travel route. But he didn't see one thing coming: that all these things might eventually become so all-consuming in and of themselves that his imaginary "intellectual worker" would be more distracted than augmented.

Still, we can budget our time -- which remains ours, after all. We can take breaks from Facebook, skip online shopping for a week, deactivate our Twitter feeds, or quit Goodreads.  And we can turn the tables, to some extent, on those who use our time and energies for free by making maximum, careful, deliberate use of the resources they give us in exchange. We may not be able to completely avoid our information being used by marketers -- but we can become very adept at marketing ourselves, and our own intellectual labors, in a way that we can fully control.

Tuesday, February 11, 2014

The Comment Crew

Somehow it seems weirdly appropriate to read, in the online New York Times, that the group of sophisticated hackers in China who have successfully invaded dozens of corporate and military sites in the US is known as the "Comment Crew" -- they have a habit of embedding their viral links in comments, and when users click on these, their entire systems can be compromised.

And who doesn't love a comment? Comments tell us that someone, somewhere, is reading our words; they enable us to seemingly tap the shoulder of well-known writers, journalists, and columnists, and say "Hey pal -- I beg to differ." Comments make even the most static content seem instantly "interactive," and seem to promise the extension of democratic input into this vast and lumpy agglomeration of texts and images and videos we call the Internet.

Except of course they don't -- at least not always. Comments are also the native territory of people whom, in a non-commenting world, we would rarely if ever be obliged to encounter. There's the Skeptic (Doesn't look like 1962 to me! I'm sure this footage is fake!), the Know-it-all (I'm surprised that the writer is apparently unfamiliar with my recent article in the Journal of Obscure Ramblings), the Blowhard (This is exactly the kind of crap that the liberal media wants us to believe!) and the dreaded Troll (I won't dignify them by imitating them -- we all know them). It's not at all clear that any of these Internet-liberated voices has much of real value to add to the "conversation," and even if they did, with comments soaring into the hundreds in the space of a few hours, whatever has been said, valuable or not, has slid away into a vast river of verbiage that's slow and painful to scroll through -- so why bother?

On the other hand, I'd hate to have a world without any comments. On my main blog, Visions of the North, I have the advantage that only people who already care or know about the topics I blog about are likely to visit it, and likely to comment on it. I've rarely gotten a rude comment, and only now and then gotten a Blowhard or a Skeptic; the only spam I've encountered is from a certain Chinese concrete company which shall remain nameless; Blogger's spam filter usually catches them. Sometimes, when a well-known figure or fellow Arctic expert leaves a comment, I feel distinctly honored! And seeing the comments makes the site stats feel a bit more 'real,' to be sure.

Facebook and other social media have picked up on these positive vibes to enable one to 'like' or comment on almost anything one sees. And, since most of those who can see it are one's presumptive friends, the comments are, as they should be, mostly friendly. Occasionally, a lively chat, a bit off-topic but fun, evolves in the comment stream. But there are awkward times, times of TMI, when a friend one knows only distantly posts disturbing personal news. If a friend you can't remember posts news that his father-in-law has been diagnosed with cancer, what should you do? Should you 'like' such bad news -- and if you comment, will that make you a hypocrite? Should you ignore it? Or what if a friend you know slightly suddenly reveals political views that you detest? Time to unfriend?

This is your life. This is your life with comments. What do we make of them? How often have you commented on an online article? Do you read the comments of others? And how much value do you feel comments have contributed to the online experience -- or taken away? Leave your comments below!

Saturday, February 8, 2014

Avatars

Who are we when we're online? And who is anyone else? Is anyone really who they seem to be?

Ever since the first graphical computer interfaces, icons and images of increasing size and depth have been part of the experience; who among those who knew them can forget our old friends Sad Mac, Dancing Baby, or Max Headroom? And in fact, the idea of describing one's on-screen graphic self as an "avatar" (an ancient Sanskrit word for the earthly incarnation of a deity) was first used in a computing context way back in 1985, in reference to Lucasfilm's game "Habitat" -- the first online role-playing game with a graphical element, albeit one that looks incredibly primitive to today's users. Until the WWW interface in the early 1990's, of course, there was no way for a user to share a graphical self outside of a game world, but as soon as people could, they did. No one seems to be quite sure just when they first appeared, but soon they were common in online forums, blogs, and on various IRC (Internet Relay Chat) systems. MySpace, famously, allowed avatars and pseudonyms to flourish, such that the question of who someone actually was ceased to be a matter of importance. Facebook originally insisted on real identities, but has since given way to various levels of pseudonymity, so long as the user supplies Facebook itself with his or her "RL" (Real Life) identity.

Avatars come in many flavors; the most common are cartoons, celebrity figures, and consumer products such as cars. The use of animated GIF files enabled many of them, even in the early days of the net, to incorporate motion.  A sampling of popular icons today shows much the same (figures from Family Guy and The Simpsons have a long shelf-life). And of course avatars also persist in modern online gamespace, although the fact that a single player may have many characters in the same game has led to different words for them; in World of Warcraft, it's much more common to call them "toons" or sometimes "chars."

The most insidious avatars are those that, by their very nature, are already known to be fictitious -- online assistants, customer-service bots, and the icons used by any and all of a site's admins (administrators).

But what is the result of this world full of altered egos? Would trolls be less troll-like if they had to display their actual faces? Some users have used a similar 'handle' or icon for so many years that it quite literally takes on a life of its own; among my own acquaintance are two: Sarah Higley, writer and professor at the University of Rochester, is also known as Sally Caves, who lives on Second Life, produces machinima and has written an episode of STTNG; my friend Charles Isbell, a computer specialist with a degree from MIT, is also known as HfH -- the Homeboy from Hell -- when he writes reviews of Hip-hop albums, which he's done for twenty years. Having an avatar has, I think, helped many people sort out the conflicting demands and desires of our increasingly complex lives.

But there is a dark side, too. Avatars can serve to deceive, defraud, and harass other users; most notorious are the "sockpuppets" used to add self-generated comments and cheers to one's own online work. So what should we say? Should "real" identities be enforced? Or do such policies only make matters worse? Have you ever used an avatar, or been deceived by one? Post your answers & comments below!

Thursday, January 30, 2014

Social Media II

The range and size of social media networks have increased almost exponentially in the early years of the twenty-first century. We've gone from early forums in which only a few hundred people might participate, such as a BBS or a LISTSERV list, to truly mass media such as Facebook and Twitter, which between them have well over a billion users around the globe.

But much more than just size has changed. At a certain 'tipping point,' social media begin to function in ways that, when they were smaller, would have been impossible. Facebook and Twitter have been credited with playing roles in the "Arab Spring" in the Middle East, particularly in Egypt and Tunisia; Facebook's founder has been the subject of a major Hollywood film; and Twitter feeds and cell-phone photos have brought down politicians of every party, sometimes within a matter of mere hours. It certainly sounds as though these technologies have crossed some threshold, altering the fabric of reality itself -- but then, of course, one can look back at similar claims made about virtual-reality video helmets (anyone remember The Lawnmower Man?) and wonder whether these revolutions will still seem so revolutionary a few years from now.

Three key developments have shaped this period: 1) Social media with "presence" -- a main page at which users can add or copy content and offer images, texts, or video of their own making or choosing; 2) Sites with instant linkability -- the ability of users to add (or subtract) active and immediate connections to other users; and 3) Sites that bundle essential tools (e-mail, instant messaging, and other software capabilities). Finally, all of the above, or at least the survivors in this highly competitive field, have gone multi-platform; no social medium of the future will thrive unless it is available on desktops, laptops, tablets, and smartphones, and has some system of synchronizing all its users' preferences and updates.

So what next? The spaghetti is still being hurled at the (virtual) refrigerator wall. Blippy, a site that enabled shoppers to instantly "share" posts about their purchases, was hacked and credit cards compromised -- so much for that! Google tried to launch its own "Wikipedia killer," dubbed Knol, but the site filled up with spam so quickly that it became almost useless, and Google discontinued it; it also failed to generate "Buzz," a hot-button social networking service that offended many users with its auto-generated list of "contacts"; and Apple stumbled with Ping, an addition to its popular iTunes platform meant to enable people to share news about music purchases and performances. The latest entry, Pinterest, allows users to "pin" content to one another, with a focus on bargain shopping, and has the unusual distinction that a majority of its users, in many surveys, are women. But will it go the way of the Lifetime network?

It may seem we've already "shared" too much in this era of TMI, and that these social media may be reaching their limits -- but I wouldn't bet on it.

Tuesday, January 28, 2014

Social Media I


The evolution of social media can be conceived of in many ways -- in one sense, it could be said that language itself was the first social medium. Even so, considering a "social medium" to be any means of transmitting or recording language over time and space, alphabetic writing could well be seen as the earliest, followed swiftly by the development of the "letter" as a social form, which dates back to at least the seventh century BCE. The ancient Library of Ashurbanipal, King of Assyria from 668 to 627 BCE, included personal letters written in cuneiform on clay tablets.

The telegraph and telephone come next in line; even if, as a recent NY Times article noted, the phone is experiencing a slow decline, it remains our oldest electronic social medium. I'm old enough to remember the old "Reach out and touch someone" adverts for Ma Bell, and for a while, there was nothing more direct and personal than a phone call. The ARPANET debuted in 1969, and electronic mail over it and its successors followed in the early 1970's, but e-mail did not become a common form of communication until the late 1980's; well before then, home computer users were setting up BBS sites where they could post notices and download simple programs. My home town of Cleveland had a huge site, Freenet, where you could also get medical advice from doctors at Case Western Reserve and University Hospitals. The WELL, a large social site based in San Francisco, was an early home of integrated mail, chatroom, and file services; perhaps not coincidentally, it was also the site of the first case of online impersonation that went to court (a man was sued by two women for pretending to be a different, older woman who was a mutual friend).

In academia, the LISTSERV protocol brought people together by field and interest, and made it possible to, in effect, send a message to hundreds of people at once in search of advice or response; LISTSERVs were often associated with archives where you could search through older messages. Early online game spaces, such as MUDs and MOOs, go back to the late 1970's, and many became highly social, with tens of thousands of "inhabitants" maintaining spaces there. All of these interactions were exclusively text-based, and the only "graphics" consisted of what could be cobbled together out of ASCII characters.

It wasn't until the arrival of the commercial Internet in 1993, and of graphical Web browsers around the same time, that social media really took off; by the end of the decade, SixDegrees, LiveJournal, Blogger, and Epinions had launched. In 2003, Second Life offered its users a virtual retake on their first lives, albeit with a graphical interface that looks primitive by today's standards; that same year, MySpace became the first modern social networking platform, and a model for Facebook the following year. With well over a billion users, including everyone from the President to the Pope to Batman (Adam West), Facebook is well on its way to gaining the kind of cross-sectional critical mass needed to change the face of human communication.

Sunday, January 26, 2014

Old Media to New

The future is rarely as those in the past pictured it. We do not fly about in dirigibles or heli-cars; our school-age children don't take field trips to Mars, and those loose-fitting glittery pant-suits that the people of the future wore in old science fiction films never really came into fashion. In fact, one could say that most of us here in the twenty-first century are, in many areas of our lives, still using things invented in the nineteenth century: the internal combustion engines in our cars, the gas furnaces that heat our homes, the machine-woven cotton shirts we wear, the pencils and ball-point-pens we write with, and the fax machines in our offices.

And we still write letters, at least, though not quite the way we used to. Over his lifetime, the novelist Charles Dickens penned 14,252 letters -- probably more, since that's just the number that have survived to this day. Had he not died at the relatively young age of 58, he would have doubtless written thousands more. He was hardly alone; many other nineteenth-century writers were equally active, as were a great many ordinary people in various walks of life. Nowadays, though we may still send a great many brief e-mails, or text- or voice-messages, it's rare for most of us to send a "letter" of any length, let alone the enormous missive sent by C. Morton Morse of Portland, Oregon, in 1911. That letter, claimed at the time to be the longest ever composed, contained more than 32,000 words and was written on a continuous roll of paper 72 feet long.

The telephone, which after the personal letter has a fair claim to be the oldest still commonly-used means of interpersonal communication today, may still be with us, but its usage has changed dramatically. The phone itself has gone from being a household appliance to something carried in one's pocket, and live voice communication may be its least common use. Many people rarely answer theirs, relying on voicemail to retain anything important, or screening their calls with custom ringtones. And, in the business world, it's rare to initiate communication by phone -- in one office, an employee was heard to complain that a coworker should have e-mailed in advance if he planned to call!

The great dream of 1950's and 1960's Sci-Fi -- the video phone -- is now a reality via Skype, although ironically enough, there is no phone, and (unlike the famous scene in 2001: A Space Odyssey) no toll for most of these calls (although $1.70 isn't too bad for a minute-long call from outer space).

So how much do you use these "old" media means of "reaching out to touch someone" (as the Bell System phone advertisement used to put it)? Do you often talk with friends on the phone for more than a minute or two? Do you Skype? And if so, how often? And when was the last time you literally sat down and wrote a letter to a friend or family member on a piece of paper?

Wednesday, January 22, 2014

The Age of the Book

As we move further into the age of the electronic "book," it's worth reflecting on the profound ways in which books have influenced the course of history, and indeed have shaped our very consciousness, for nearly two thousand years. There was a time, of course, when the book or "codex" was a new technology, replacing the scroll as the predominant means of recording and storing written texts; the video (Book 1.0) shown here takes a humorous look at a monastic scribe who has had to call the Help Desk to assist him in using this unfamiliar new medium. From this time -- probably somewhere around the first century A.D. -- through to the present, the book has embodied the very idea of learning, of storytelling, of collecting and gathering, of preserving knowledge.

As the Jesuit scholar Walter J. Ong pointed out, though, writing, in a fixed form, did more than simply store information for later retrieval; it restructured our consciousness. By giving us the sense that knowledge could possess substance and persistence, even when not stored in our own heads, writing gave birth to the very earliest stirrings of philosophy. Plato, of all people, warned against this new technology, saying that if people relied on putting things down in writing we would weaken our faculty of memory -- the irony is that the only reason we know he said this is that someone wrote it down. The book gave birth to many of our modern genres, from the novel to the biography to the encyclopedia (which last form, for better or worse, has almost completely departed the physical world of the Britannica and moved into the electronic space of the Wikipedia). There were libraries before there were books, of course -- parts of the library of Ashurbanipal, established in the 7th century BCE, still survive today -- but the book made libraries uniquely indexable, capable as they were of displaying relevant information on their spines while placed so as to take up minimal space on the shelf. The development of the printing press by Gutenberg in the fifteenth century created what's often been called a revolution -- and yet it was primarily a revolution in the means of production, and in making books affordable, not in the physical shape or content of the book itself. It was on account of the printed book, and the newspapers and magazines that followed, that a world in which literacy was possible, even expected, for everyone came into being -- a world on which the Internet, too, depends.

And the Age of the Book is still with us. E-books, although they mimic some of the qualities of printed volumes, still have a long way to go to match their advantages; their current market share -- 20 percent -- is the highest it's ever been, but their future growth will likely still be gradual. E-books certainly won't wipe out physical books in the way that MP3's wiped out compact discs, or streaming video took away most of the market for DVD's and Blu-Ray discs. In the end, a book is a concept, and as such, it's likely to be with us for a very long time indeed.

Saturday, January 18, 2014

Welcome to English 232

If this course had been offered twenty years ago, the "public sphere" would have meant newspapers, television, radio, and print publication. The primary way for an ordinary person to enter into such discourse would have been through one or another formal gateway: a letter to the editor, a guest column or Op-Ed piece, an interview on television or radio, an essay in a magazine or journal, or that thing once known as a "book." There were, it's true, some more open ways to reach a wider audience: college radio, public-access cable, or xeroxed 'zines -- but they tended to have a very limited reach.

Today in 2014, for better or worse, the potential reach of any text or media presentation is, for all practical purposes, infinite; anyone on earth can "publish" a text, and most other people on earth can access it. And yet, in terms of actually getting one's text to an audience, it's harder than ever, precisely because there is so much already out there, and very few single outlets that guarantee the kind of mass audience that used to be available via "old" media. We are all public speakers/writers, but the size of our forum is so vast -- and yet so small -- that our "public" is likely to be a smaller and less diverse group than at any time in recent history. The battle now is not merely to be 'published' as such, but to be noticed.

So called "social media" have advanced and changed considerably in the past two decades, and many fundamental changes are still well within living memory.  I can remember when I sent my first e-mail (in 1988), and my own first online publications were in the early 1990's, before the 'World Wide Web' protocol had been invented. I can recall an Internet when commercial use of any kind was highly frowned upon, an Internet before in-line graphics, and (a bit later) an internet where there was only one browser (Mosaic) and only one or two people I knew actually had a web page "of their own." And yet today there is a large and growing tribe of so-called "digital natives" who cannot recall a world in which the Web, e-mail, smart phones, and online video were unavailable. The very nature of public discourse has changed, at least as much -- perhaps more -- than it did in the 'Gutenberg Revolution.'

This class will explore all of these differences, making use of every possible kind of resource and media available.  Everyone in this class will experiment with every media platform available, including but not limited to Facebook, Blogger, Twitter, Tumblr, Pinterest, Instagram, Wikipedia, Google+ and Reddit.  Each of us will create and interlink an online public identity, and use that identity to explore, test, and respond to the possibilities of public discourse today.  We'll also, along the way, learn something of the history of earlier social media, with the hope that these will help us put the present in some kind of perspective, even as we recognize that some aspects of it are new and scarcely tried. We'll also function as a collective, sharing our own texts and experiences with each other, and following each other's progress through the world-wide electronic jungle.

It will be an unpredictable experience.  But that's the way it is, in the new media world, at least for now.