Thursday, April 3, 2014

Building An (online) Room of One's Own

What are the basic ingredients in creating an online space of one's own? In the past, when a room required materials -- wood, plaster, nails, roofing, and so forth -- the question was one of cost, of economy; Thoreau kept track of every nail used to build his cabin on Walden Pond. Now, one can build a fairly spacious dwelling for no cost whatsoever -- aside from one's time! -- but as the choices grow wider the process gets murkier. Do you want, or need, to "blog"? Do you want or need to "share"? What streets of information or social activity do you want your online doors to open into, or out from? Who would you like your neighbors to be? And do you feel comfortable hanging advertising signage on your online dwelling-place, or would you pay a certain premium not to have ads? And, once your home exists, what do you want to do with it? Who do you want to "stop by"? How often do you yourself want to "be" there?

And maybe you don't need to start from scratch -- don't you already have some places online where you tend to go, or have made some sort of nook or cranny for yourself? Most online services offer some kind of "hub" where you can pull "yourself" together; Google has your Google+ profile page; Yahoo! offers MyYahoo!; and of course Facebook offers the ever-changing format of one's Facebook Profile page. Beyond these major players, dozens of services will enable you to cobble together a personal web page (no coding required! they say), or you can get a slightly higher level of service with a paid home, which can come with domain and/or hosting services, using one of the big providers such as GoDaddy. At the top of the cost list, you could register a custom domain name, using some version of your own name or a phrase you've chosen, and hire a web designer to put together something more singular for a few hundred dollars.

But a true online home is more than that, I'm willing to bet.  Home is a place where no-one but you re-arranges the furniture. A place where friends, but not trolls, come to visit. A place where services and media you really enjoy and have chosen -- your books, your films, your e-mail -- are near at hand, and free of advertising. And a place "near" to other places, other friends' homes, where you like to go. And, in the cacophonous world of the Internet today, such a home is getting a bit harder to find.

Sunday, March 30, 2014

The future of writing

The revolution that was alphabetic writing is still resonating through human culture and society more than three thousand years after the Phoenicians invented it, and the Greeks improved upon it. It has given every religion on earth its sacred texts, and preserved them -- even as disagreements over how such books were to be interpreted have led to sectarian wars in which hundreds of thousands of people have died. It has given every civilization its laws, every language its literature, and every people their history.

Along the way, there have been revolutions within this revolution: the invention of paper in China in 200 BC, of moveable type by Johannes Gutenberg in the 1430's, the publication of the first newspapers in the 1600's, and Ottmar Mergenthaler's invention of the Linotype in 1884. But the electronic revolution, unlike these others, has not only improved the speed and efficiency with which the written word can be distributed, but may change the very nature of writing itself.

It hasn't always been possible to earn a living by writing. In the early days, being a writer required a wealthy patron; Chaucer had John of Gaunt, and Spenser had the Earl of Leicester. Books remained expensive, but the writer's share in their sale was small, and pirated editions common. The Statute of Anne in 1710 established copyright in England, but it was at least another century before anyone could actually sustain life on book royalties alone. What was missing was a mass audience of literate people, and people who could afford to buy or rent books. Public education eventually brought such an audience into being, and public libraries and book-lending services such as Mudie's (the "Netflix of Victorian Literature") made reading an affordable habit.

One of the most notable beneficiaries of this new mass readership was Charles Dickens, whose books -- sold originally in serial installments -- made him a wealthy man, although in his later years he also made a good deal of his income from public readings. We all know the rags-to-riches tale of J.K. Rowling, whose Harry Potter books made her the wealthiest woman in the United Kingdom, with riches far exceeding those of the Queen (who's actually running a bit low on cash). And yet Rowling's rise came before the arrival of e-books as a force to be reckoned with, and whether this new format will help or harm writers is still uncertain.

One thing electronic publication has already done, though, is to make writers of us all; indeed there may be more writers of certain kinds of material than there are people interested in reading it. Like the "I'm DJ'ing" segment of Portlandia, where everyone is DJ'ing, we are entering into a world where anyone who wants to can be an "author" via self-publishing services such as Smashwords, Author Solutions, and amazon.com. At the same time, for many authors whose books are brought out via traditional publishers, it's getting harder to make a living; for every Anne Rice and Stephen King there are hundreds of novelists who struggle to make ends meet, and can't quit their day jobs.

It's a strange moment -- half opportunity, half cosmic joke: now that anyone can get their writing "out there," there's too much out there to sort through, and becoming "known" is more difficult than ever.

Friday, March 21, 2014

The Long and the Short of It

Sometimes it seems that everything is getting shorter -- books, magazine articles, e-mails, and attention spans.  Recent studies suggest that many corporations are wary of investing in advertising in longer-form publications, and are looking for ways to promote their brands in "short-form media." The cost is surely less, and the question arises: are a hundred animated GIFs and sidebars worth more than a 60-second spot on what remains of "television," where the Nielsen ratings (the percentage of households tuned in to a given program) have declined, along with the total number of "television households" surveyed? The miniseries Roots managed a 66 percent share, with 100 million viewers for its final episode -- whereas in 2013, the number one television show, The Big Bang Theory, netted only a 9.8 share, or about 16 million viewers. One begins to feel that, even if the headline were "World Ends Tomorrow," it would be hard to get figures like those of Roots for any program today.
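For what it's worth, the arithmetic behind such figures is simple; here's a back-of-the-envelope sketch in Python, treating the share numbers above as a percentage of all TV households (the household counts and viewers-per-household figures are my own rough assumptions, not Nielsen's):

```python
# Back-of-the-envelope Nielsen arithmetic. The household counts and
# viewers-per-household figures below are rough assumptions of my own,
# not official Nielsen data.

def viewers_from_rating(rating_pct, tv_households, viewers_per_household):
    """Estimate total viewers from a rating (% of all TV households)."""
    households_watching = tv_households * rating_pct / 100.0
    return households_watching * viewers_per_household

# Roots era (1977): assume roughly 71 million U.S. TV households.
roots_est = viewers_from_rating(66, 71_000_000, 2.1)

# 2013: assume roughly 115 million U.S. TV households.
bbt_est = viewers_from_rating(9.8, 115_000_000, 1.4)

print(f"Roots-era estimate: {roots_est / 1e6:.0f} million viewers")
print(f"Big Bang Theory estimate: {bbt_est / 1e6:.0f} million viewers")
```

Plugging in plausible numbers lands right about where the reported figures do -- roughly 100 million then, roughly 16 million now -- which is the whole melancholy point.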

But of course there have always been short forms, and just because the average book seems to be getting shorter doesn't mean that only short books are viable. Elizabeth Kostova found an agent -- and a publisher -- for her debut novel The Historian.  She got a six-figure advance and the book was a runaway best-seller.  It's 240,000 words long. David Foster Wallace's Infinite Jest (ok, everyone but me thinks he was a genius) runs to 484,000 words; Vikram Seth's very successful second novel A Suitable Boy runs to 590,000 words, though admittedly it's exceptional. And when something "long" is made of "short" pieces, there seems to be no limit to length; I've known friends who, falling in love with a TV series such as The Wire, have managed to watch all 60 episodes in the course of a week or two -- six times the length of Roots. Like those who read the serialized novels of Dickens, we may not quite realize what all the parts add up to until we've read them -- and even then, we may be hungry for more.

But one must wonder: how will those forms that are inherently long, bulky, and bound to their physical forms -- the epic, the fantasy trilogy, the authoritative biography -- fare in the land of instant knowledge, of reciprocal mini-citations like those in the "did you read that" episode of Portlandia? What does it mean that some book publishers are launching whole new divisions dedicated to bringing the latest blog into print? Will on-demand video and YouTube, as the movie studios worried that television would do back in the 1950's, be the death knell of long-form entertainment in cinemas?

To which I can only answer, in the lingo of old-fashioned radio and television broadcasts, "Stay tuned ... "

Tuesday, March 4, 2014

Hackers, Phreaks, and Jammers

The man shown here doesn't seem like much of a danger to anyone. Armed only with a plastic whistle that came free in a box of Cap'n Crunch, along with a distant descendant of his famous 'blue box' in his left hand (actually an iPhone), this man was once regarded as a dangerous criminal mastermind, one whose activities were worthy of the scrutiny of the FBI and InterPol.

He's John Thomas Draper, a legendary figure as the first 'phone phreak' to find a way to get free calls through the Bell Telephone system. Phones in the early 1970's used a series of tones to activate and direct calls; using the toy whistle, Draper was able to fool the system into authorizing long-distance calls without any charges; it just so happened that the frequency of the whistle -- 2600 Hz -- was the same as the tone the phone company used to signal an idle long-distance trunk. Draper and other "phreaks" used the trick for prank calls, to call each other, and to test their ability to route a call over as long a distance as possible. Among those who found this exciting were a couple of California kids who have since become well-known -- Steve Jobs and Steve Wozniak.
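The tone itself was nothing exotic -- any pure 2600 Hz sine wave would have done. As a purely illustrative sketch (the network abandoned in-band signaling decades ago, so this is a museum piece, not a how-to), here's how one might synthesize a one-second 2600 Hz tone as a WAV file in Python:

```python
# Synthesize the famous 2600 Hz tone as a one-second mono WAV file.
# Purely illustrative: the phone network abandoned in-band signaling
# decades ago, so this is a museum piece, not a free-call generator.
import math
import struct
import wave

SAMPLE_RATE = 44100   # CD-quality samples per second
FREQ_HZ = 2600        # the Bell System's idle-trunk supervisory frequency
DURATION_S = 1.0

def tone_samples(freq=FREQ_HZ, duration=DURATION_S, rate=SAMPLE_RATE):
    """Return 16-bit PCM samples of a pure sine tone."""
    n = int(rate * duration)
    return [int(32767 * math.sin(2 * math.pi * freq * t / rate))
            for t in range(n)]

def write_wav(path, samples, rate=SAMPLE_RATE):
    """Write mono 16-bit samples to a WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)          # 2 bytes = 16 bits per sample
        w.setframerate(rate)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))

samples = tone_samples()
write_wav("tone_2600.wav", samples)
```

Played into a 1971 trunk line, the resulting tone_2600.wav would have reset it to idle; today it will only annoy your cat.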

The "phreak" culture, in retrospect, wasn't really all that dangerous. All they "stole" was phone time; they didn't seek to defraud customers, steal people's identities, or collect anyone's data.  But as the use of phones, and phone modems, became more and more widespread, the stakes of this kind of activity -- or hacktivism, as some call it -- have increased. Hackers familiar with weaknesses in operating systems, encryption programs, and JavaScript, and armed with "Trojan Horse" programs that gain access by seeming innocuous, have targeted corporate and government entities, and some of them quite gleefully seize personal financial information when they can get it. Some, such as WikiLeaks and the Anonymous group, see theirs as important investigative work, as well as a political jab against what they regard as the government's unjustified power to keep things secret. Others, such as the group of Chinese military hackers associated with the "Comment Crew" in Shanghai, use cyber attacks as a form of political and economic subterfuge.  Still others, the purists I suppose we could call them, simply hack to show that they can; in them, the playful pride of the original phreaks lives on.

Spammers, of course, produce the most annoying class of Internet junk, and their more dangerous cousins the Phishers are all too willing to lay traps for the unsuspecting.  Few of them, however, seem to have any larger agenda than stealing money.

Jammers are perhaps the most politically purposeful of all who have sought to use the media against the media, and many of them use a wide variety of techniques other than the Internet, though the 'Net often amplifies their effects.  The Guerrilla Girls made feminist incursions into the media, wearing gorilla suits, and the Yes Men famously staged fake press conferences, at one of which they pretended to be representatives of the Union Carbide corporation who wished to apologize -- as the real firm had not -- for the chemical disaster at Bhopal in India.

Tuesday, February 25, 2014

Duplicates

"Don't you know what duplicates are?" an incredulous Groucho Marx asks brother Chico in one of their better-known skits. "Sure," replies Chico, "that's a five kids up in Canada." This was of course a reference to the Dionne quintuplets born in Ontario in 1934, two of whom are still alive today. But of course we all know what duplicates are -- or do we?

Ever since the invention of writing, the making of copies has been crucial to the effectiveness and reach of the written word. The ancient Romans employed slaves to make copies; a single slave shouted out the text to be copied, and a hundred slaves wrote it down.  In the Middle Ages, monks accustomed to vows of silent contemplation couldn't take advantage of such means; for them, copies were made one at a time.

Today, thanks to the photocopier, of course, a monk can have 500 copies in a few minutes, as depicted in this famous 1970's TV commercial for Xerox. It's a miracle! But of course it was a long road that led to such dazzling achievements, a road littered with media that have since become obsolete, from the Gutenberg Press through the Hektograph, Mimeograph, and Gestetner machines, the original Xerox technology, and the laser scanner. And now, with the possibility of a document which exists, almost simultaneously, on thousands of servers around the world, or on a "cloud" system that enables its instant downloading and printing nearly anywhere on earth, we've reached the point where the difference between an "original" and a "copy" is more a matter of syntax and situation than any material reality. I've been to the London home of the Gestetner family, one wall of which is lined with a series of photographs of Gestetner duplicators being presented as a gift to each new Pope -- but our next Pope won't get one, nor will he need it. He'll probably just tweet, anyway, or distribute his encyclicals via the Vatican's vast website.

The value of a copy is in its portability, the ability one has to own it or transfer ownership in it; the ability to send it over time and distance; and its ability to preserve its contents over time, even if other copies are damaged or destroyed.  Intellectual property in written works has always been conceived of as the right to create and sell copies -- the copyright.

The very first legal recognition of the rights of an author was the "Statute of Anne" in 1710. It presented itself as "an act for the encouragement of learning," with the implicit argument that allowing authors the exclusive right to publish their work for a limited term would enable them to earn some reward for their labors, while at the same time eventually allowing their work to be used freely. As with earlier systems of intellectual property, such as "Letters Patent," the Act's term was limited -- 14 years, which could be extended for 14 more, after which the rights of the author expired; it was understood then, as it is now, that authors, like inventors, quite frequently draw from the works of those who have come before them, and that preserving such rights indefinitely would stifle creativity.

One thing that has certainly changed since 1710 is the term of copyright. US copyright eventually settled on a period twice as long as the Statute of Anne's (28 years, renewable for 28 more); revisions to the law in recent decades have extended these 56 years to 75, then 95, and, for some works, as many as 120 years. The last of these revisions, the "Sonny Bono Copyright Term Extension Act" of 1998, froze the date at which existing works could enter the public domain, leaving everything published in 1923 or later under copyright for another two decades. Many creative artists feel that this law has exercised a stifling effect upon creativity; many of them joined in support of a legal case, Eldred v. Ashcroft, that challenged these extensions on the basis of the Constitution's provision that copyrights last only for "limited times." The Supreme Court eventually ruled against Eldred, saying in effect that Congress could establish any length of term it wanted, so long as it was not infinite. Could is, of course, not should.
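The arithmetic of these successive terms is easy enough to trace; the sketch below simplifies everything to flat terms measured from the year of publication (real copyright terms involve renewals and life-of-the-author calculations, so treat this as a back-of-the-envelope illustration only):

```python
# Back-of-the-envelope copyright arithmetic: flat terms measured from the
# year of publication. Real terms involve renewals and life-of-the-author
# rules; these figures are simplifications for illustration only.

TERMS = {
    "Statute of Anne (1710)": 14 + 14,   # 14 years, renewable once
    "U.S. law before 1976": 28 + 28,     # 28 years, renewable once
    "Copyright Act of 1976": 75,         # 56 years extended to 75
    "Sonny Bono Act (1998)": 95,         # 75 years extended to 95
}

def last_year_of_protection(pub_year, term):
    """Last calendar year of protection (U.S. terms run to year's end);
    the work enters the public domain on January 1 of the following year."""
    return pub_year + term

for regime, term in TERMS.items():
    year = last_year_of_protection(1923, term)
    print(f"{regime}: a 1923 work protected through {year}")
```

Run for a work published in 1923, the longest of these terms yields protection through 2018 -- which is why the American public domain only began growing again on January 1, 2019.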

The result has been, ironically, that in the very age when the ability of writers, artists, and musicians to draw upon, alter, and incorporate what the copyright office calls "previously existing works" is at its greatest, the legal barriers against doing so have been raised to the harshest and longest in the history of copyright protections. This is offset, to a degree, by two factors: 1) "fair use," a doctrine codified in the Copyright Act of 1976, whereby limited portions of a work may be used without permission, weighed by factors such as the purpose of the use (educational or commercial), the amount taken, and the effect on the market for the original; and 2) simple lack of enforceability. It's quite impossible to police all the billions of web servers, web pages, and personal computers and devices, to ensure that no copyrighted material has been taken or stored; enforcement, as a result, tends to be spotty if dramatic (as in the case of the Minnesota woman who was assessed a fine of 1.5 million dollars for sharing 24 songs on a file-sharing service).

It needs to be noted that copyright also functions very differently depending on the medium in question.  Printed texts are straightforward enough, but in the case of physical media such as a sculpture or a painting, possession of the physical object confers certain property rights, including the right -- if one desires -- to restrict or prohibit "derivative" works such as photographs of these works, although the issue of non-manipulated or "slavish" copies is a murky one. Music is the most complex form: there are at least four layers of copyright in a recorded song: 1) The composition itself, and its embodiment in sheet music; 2) The performance of that composition on the recorded matter, including the act of interpretation and any variations on the composition; 3) The physical embodiment, if any, of this performance, known as "mechanical" rights; and 4) The right to transmit the performance. All of these, of course, were once separate domains: the sheet-music industry/print, the recording studio, the record company or "label," and radio stations -- but all are now merged indistinctly into a single, complex activity that can all be achieved on a single device, even a smartphone.

But the fundamental problem is that copyright consists of a right to make a "copy" -- and there's no longer a fixed, essential value in that -- not in a world in which everything is, in a sense, already copied.

Thursday, February 20, 2014

The Commodification of the Self

We all enjoy the sense that we are somebody -- that our drab, dreary lives possess some greater meaning, that our hopes, dreams, and aspirations may some day take tangible form. But in the meantime, while we've been learning and laboring and dreaming, all of the droplets of our online lives are being constantly collected like Elvis's sweat, bottled and packaged, searched through, rented, and sold. Of course, we're told that all of our "identifying information" has been removed -- we're just part of a vast agglomeration of data, after all -- but if someone wants to know how many people who play World of Warcraft are also regular customers at McDonald's, watch pay-per-view sports, or make frequent visits to Dave & Buster's, then the Data Oracle can "mine" this information for answers.  And, to an extent, once "mined," this data can be used to send back targeted ads and offers, such as a Dave & Buster's coupon for anyone who buys a custom mount in Warcraft. The system won't "know" that you'll be interested in such a thing -- but it may know that you are more likely to take the bait than some random person, and that knowledge, my friends, is POWER.
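To see why that knowledge is power, here's a toy sketch of the kind of "mining" just described: computing, from a pile of anonymized records, how much likelier a World of Warcraft player is to be a Dave & Buster's customer than a random person. Marketers call this ratio "lift"; all of the data below is invented for illustration.

```python
# Toy "data mining" sketch: from anonymized records, measure how much
# likelier a World of Warcraft player is to visit Dave & Buster's than
# a random person. All of the data below is invented for illustration.

records = [
    {"wow": True,  "davebusters": True},
    {"wow": True,  "davebusters": True},
    {"wow": True,  "davebusters": False},
    {"wow": False, "davebusters": True},
    {"wow": False, "davebusters": False},
    {"wow": False, "davebusters": False},
    {"wow": False, "davebusters": False},
    {"wow": False, "davebusters": False},
]

def rate(rows, key):
    """Fraction of rows in which `key` is true."""
    return sum(r[key] for r in rows) / len(rows)

wow_players = [r for r in records if r["wow"]]

baseline = rate(records, "davebusters")        # whole population: 3/8
among_wow = rate(wow_players, "davebusters")   # WoW players: 2/3

lift = among_wow / baseline
print(f"Baseline D&B rate: {baseline:.0%}; among WoW players: {among_wow:.0%}")
print(f"Lift: {lift:.2f}x -- that's who gets the coupon")
```

Note that no names are needed anywhere: the coupon-targeting works on the aggregate pattern alone, which is exactly why "we removed your identifying information" is such cold comfort.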

What can one do?  Well, you can travel the web with cookies and scripts turned off; you can filter your internet connection through a bunch of remote hosts that "scrub" off your identifying information; you can use remote anonymous e-mail accounts and encrypt all your messages with PGP. But if you do all these things, a big part of the value that can be derived from the Internet will be missing; you won't be able to share content easily with more than a few friends, shop online at most retailers, or host your own publicly-accessible web content.

There is, however, another way. You can use the system that uses you, and (with luck) you can get more out of it than the system gets out of you. The key question was first asked way back in 1968 by Doug Engelbart, who with his team at the Stanford Research Institute (SRI) developed the first mouse, the first graphical interface, the first collapsible menus, and many other things we take for granted:
"If in your office you as an intellectual worker were supplied with a computer display backed up by a computer that was alive for you all day and was instantly responsive to every action you have, how much value could you derive from that?"
Engelbart demonstrated some basic things: keeping track of shopping needs, simple word-processing, sharing documents, and mapping an efficient travel route. But he didn't see one thing coming: that all these things might eventually become so all-consuming in and of themselves that his imaginary "intellectual worker" would be more distracted than augmented.

Still, we can budget our time -- which remains ours, after all. We can take breaks from Facebook, skip online shopping for a week, deactivate our Twitter feeds, or quit Goodreads.  And we can turn the tables, to some extent, on those who use our time and energies for free by making maximum, careful, deliberate use of the resources they give us in exchange. We may not be able to completely avoid our information being used by marketers -- but we can become very adept at marketing ourselves, and our own intellectual labors, in a way that we can fully control.

Tuesday, February 11, 2014

The Comment Crew

Somehow it seems weirdly appropriate to read, in the online New York Times, that the group of sophisticated hackers in China who have successfully invaded dozens of corporate and military sites in the US is known as the "Comment Crew" -- they have a habit of embedding their viral links in comments, and when users click on these, their entire system can be compromised.

And who doesn't love a comment? Comments tell us that someone, somewhere, is reading our words; they enable us to seemingly tap the shoulder of well-known writers, journalists, and columnists, and say "Hey pal -- I beg to differ." Comments make even the most static content seem instantly "interactive," and seem to promise the extension of democratic input into this vast and lumpy agglomeration of texts and images and videos we call the Internet.

Except of course they don't -- at least not always.  Comments are also the native territory of people whom, in a non-commenting world, we would be blessed to encounter rarely, if ever.  There's the Skeptic (Doesn't look like 1962 to me! I'm sure this footage is fake!), the Know-it-all (I'm surprised that the writer is apparently unfamiliar with my recent article in the Journal of Obscure Ramblings), the Blowhard (This is exactly the kind of crap that the liberal media wants us to believe!) and the dreaded Troll (I won't dignify them by imitating them -- we all know them). It's not at all clear that any of these Internet-liberated voices has much of real value to add to the "conversation," and even if they did, with comments soaring into the hundreds in the space of a few hours, whatever has been said, valuable or not, has slid away into a vast river of verbiage that's slow and painful to scroll through, so why bother?

On the other hand, I'd hate to have a world without any comments.  On my main blog, Visions of the North, I have the advantage that only people who already care or know about the topics I blog about are likely to visit it, and likely to comment on it.  I've rarely gotten a rude comment, and only now and then gotten a Blowhard or a Skeptic; the only spam I've encountered is from a certain Chinese concrete company which shall remain nameless; Blogger's spam filter usually catches them.  Sometimes, when a well-known figure or fellow Arctic expert leaves a comment, I feel distinctly honored! And seeing the comments makes the site stats feel a bit more 'real' to be sure.

Facebook and other social media have picked up on these positive vibes to enable one to 'like' or comment on almost anything one sees.  And, since most of those who can see it are one's presumptive friends, the comments are, as they should be, mostly friendly.  Occasionally, a lively chat, a bit off-topic but fun, evolves in the comment stream.  But there are awkward times, times of TMI, where a friend one knows only distantly posts disturbing personal news.  If a friend you can't remember posts news that his father-in-law has been diagnosed with cancer, what should you do? Should you 'like' such bad news -- if you comment, will that make you a hypocrite? Should you ignore it?  Or what if a friend you know slightly suddenly reveals political views that you detest?  Time to unfriend?

This is your life. This is your life with comments. What do we make of them? How often have you commented on an online article? Do you read the comments of others? And how much value do you feel comments have contributed to the online experience -- or taken away? Leave your comments below!

Saturday, February 8, 2014

Avatars

Who are we when we're online? And who is anyone else? Is anyone really who they seem to be?

Ever since the first graphical computer interfaces, icons and images of increasing size and depth have been part of the experience; who among those who knew them can forget our old friends Sad Mac, Dancing Baby, or Max Headroom? And in fact, the idea of describing one's on-screen graphic self as an "avatar" (an ancient word with origins in the Vedas) was first used in a computing context way back in 1985, in reference to Lucasfilm's game "Habitat" -- the first online role-playing game with a graphical element, albeit one that looks incredibly primitive to today's users. Until the WWW interface in the early 1990's, of course, there was no way for a user to share a graphical self outside of a game world, but as soon as people could, they did. No one seems to be quite sure just when they first appeared, but soon they were common in online forums, blogs, and on various IRC (Internet Relay Chat) systems. MySpace, famously, allowed for avatars and pseudonyms to flourish, such that the question of who someone actually was ceased being a matter of importance.  Facebook originally insisted on real identities, but has since given way to various levels of pseudonymity, so long as the user supplies Facebook itself with his or her "RL" (Real Life) identity.

Avatars come in many flavors; the most common are cartoons, celebrity figures, and consumer products such as cars. The use of animated GIF files enabled many of them, even in the early days of the net, to incorporate motion.  A sampling of popular icons today shows much the same (figures from Family Guy and The Simpsons have a long shelf-life). And of course avatars also persist in modern online gamespace, although the fact that a single player may have many characters in the same game has led to different words for them; in World of Warcraft, it's much more common to call them "toons" or sometimes "chars."

The most insidious avatars are those that, by their very nature, are already known to be fictitious -- online assistants, customer-service bots, and the icons used by any and all of a site's admins (administrators).

But what is the result of this world full of altered egos? Would trolls be less troll-like if they had to display their actual faces? Some users have used a similar 'handle' or icon for so many years that it quite literally takes on a life of its own; among my own acquaintance are two: Sarah Higley, writer and professor at the University of Rochester, is also known as Sally Caves, who lives on Second Life, produces machinima and has written an episode of STTNG; my friend Charles Isbell, a computer specialist with a degree from MIT, is also known as HfH -- the Homeboy from Hell -- when he writes reviews of Hip-hop albums, which he's done for twenty years. Having an avatar has, I think, helped many people sort out the conflicting demands and desires of our increasingly complex lives.

But there is a dark side, too. Avatars can serve to deceive, defraud, and harass other users; most notorious are the "sockpuppets" used to add self-generated comments and cheers to one's own online work. So what should we say?  Should "real" identities be enforced? Or do such policies only make matters worse? Have you ever used an avatar, or been deceived by one? Post your answers & comments below!