Tuesday, February 25, 2014

Duplicates

"Don't you know what duplicates are?" an incredulous Groucho Marx asks brother Chico in one of their better-known skits. "Sure," replies Chico, "that's a five kids up in Canada." This was of course a reference to the Dionne quintuplets born in Ontario in 1934, two of whom are still alive today. But of course we all know what duplicates are -- or do we?

Ever since the invention of writing, the making of copies has been crucial to the effectiveness and reach of the written word. The ancient Romans employed slaves to make copies; a single slave shouted out the text to be copied, and a hundred others wrote it down. In the Middle Ages, monks accustomed to vows of silent contemplation couldn't take advantage of such means; for them, copies were made one at a time.

Today, thanks to the photocopier, of course, a monk can have 500 copies in a few minutes, as depicted in a famous 1970s TV commercial for Xerox. It's a miracle! But of course it was a long road that led to such dazzling achievements, a road littered with media that have since become obsolete, from the Gutenberg press through the Hektograph, Mimeograph, and Gestetner machines, the original Xerox technology, and the laser printer. And now, with the possibility of a document that exists, almost simultaneously, on thousands of servers around the world, or on a "cloud" system that enables its instant downloading and printing nearly anywhere on earth, we've reached the point where the difference between an "original" and a "copy" is more a matter of syntax and situation than of any material reality. I've been to the London home of the Gestetner family, one wall of which is lined with a series of photographs of Gestetner duplicators being presented as gifts to each new Pope -- but our next Pope won't get one, nor will he need it. He'll probably just tweet, anyway, or distribute his encyclicals via the Vatican's vast website.

The value of a copy lies in its portability: the ability to own it or transfer ownership of it; the ability to send it across time and distance; and its ability to preserve its contents even when other copies are damaged or destroyed. Intellectual property in written works has always been conceived of as the right to create and sell copies -- the copyright.

The very first legal recognition of the rights of an author was the "Statute of Anne" of 1709. It presented itself as "an act for the encouragement of learning," with the implicit argument that allowing authors the exclusive right to publish their work for a limited term would enable them to earn some reward for their labors, while eventually allowing their work to be used freely. As with earlier systems of intellectual property, such as "Letters Patent," the Act's term was limited -- 14 years, extendable for 14 more, after which the rights of the author expired; it was understood then, as it is now, that authors, like inventors, quite frequently draw from the works of those who came before them, and that preserving such rights indefinitely would stifle creativity.

One thing that has certainly changed since 1709 is the term of copyright. US copyright eventually settled on a period twice as long as the Statute of Anne's (28 years, renewable for 28 more); revisions to the law in recent decades have extended those 56 years to 75, 95, and, for some works, as many as 120 years. The last of these revisions, the "Sonny Bono Copyright Term Extension Act," went further still, effectively freezing the date at which works could enter the public domain at 1923. Many creative artists feel that this law has exercised a stifling effect upon creativity; many of them joined in support of a legal case, Eldred v. Ashcroft, which challenged these extensions on the basis of the Constitution's provision that copyrights be secured only for "limited times." The Supreme Court eventually ruled against Eldred, saying in effect that Congress could establish any length of term it wanted, so long as it was not infinite. Could, of course, is not should.

The result has been, ironically, that in the very age when the ability of writers, artists, and musicians to draw upon, alter, and incorporate what the copyright office calls "previously existing works" is at its greatest, the legal barriers against doing so are the harshest and longest-lasting in the history of copyright protections. This is offset, to a degree, by two factors: 1) "fair use," a doctrine codified in the 1976 revision of the law, whereby a limited amount of a work -- say, less than 10% of the original -- may be used, so long as the use is not for profit, occurs in an educational context, and/or is spontaneous; and 2) simple lack of enforceability. It's quite impossible to police the billions of web servers, web pages, and personal computers and devices to ensure that no copyrighted material has been taken or stored; enforcement, as a result, tends to be spotty if dramatic (as in the case of the Minnesota woman who was assessed damages of $1.5 million for sharing 24 music files on Kazaa).

It needs to be noted that copyright also functions very differently depending on the medium in question. Printed texts are straightforward enough, but in the case of physical media such as a sculpture or a painting, possession of the physical object confers certain property rights, including the right -- if one desires -- to restrict or prohibit "derivative" works such as photographs of them, although the status of non-manipulated or "slavish" copies remains murky. Music is the most complex form: there are at least four layers of copyright in a recorded song: 1) the composition itself, and its embodiment in sheet music; 2) the performance of that composition, including the act of interpretation and any variations on the composition; 3) the physical embodiment, if any, of that performance -- the so-called "mechanical" rights; and 4) the right to transmit the performance. All of these were once separate domains -- the sheet-music publisher, the recording studio, the record company or "label," and the radio station -- but all are now merged into a single, complex activity that can be carried out on a single device, even a smartphone.

But the fundamental problem is that copyright consists of a right to make a "copy" -- and there's no longer a fixed, essential value in that -- not in a world in which everything is, in a sense, already copied.

Thursday, February 20, 2014

The Commodification of the Self

We all enjoy the sense that we are somebody -- that our drab, dreary lives possess some greater meaning, that our hopes, dreams, and aspirations may some day take tangible form. But in the meantime, while we've been learning and laboring and dreaming, all the droplets of our online lives are being constantly collected like Elvis's sweat, bottled and packaged, searched through, rented, and sold. Of course, we're told that all of our "identifying information" has been removed -- we're just part of a vast agglomeration of data, after all -- but if someone wants to know how many people who play World of Warcraft are also regular customers at McDonald's, watch pay-per-view sports, or make frequent visits to Dave & Buster's, then the Data Oracle can "mine" this information for answers. And, once "mined," this data can be used to send back targeted ads and offers, such as a Dave & Buster's coupon for anyone who buys a custom mount in Warcraft. The system won't "know" that you'll be interested in such a thing -- but it may know that you are more likely to take the bait than some random person, and that knowledge, my friends, is POWER.
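To make the mechanics concrete, here is a minimal sketch of that kind of co-occurrence query. Everything in it -- the behavior labels, the five toy profiles, the numbers -- is invented for illustration; real data brokers run this over millions of de-identified records, but the arithmetic is the same in spirit.

```python
# A toy illustration of the co-occurrence "mining" described above.
# Every behavior label and record here is invented; real data brokers
# run this kind of query over millions of de-identified profiles.

profiles = [
    {"plays_warcraft", "eats_mcdonalds"},
    {"plays_warcraft", "visits_dave_and_busters"},
    {"watches_ppv_sports"},
    {"plays_warcraft", "visits_dave_and_busters", "eats_mcdonalds"},
    {"visits_dave_and_busters"},
]

def propensity(profiles, given, target):
    """Estimate P(target | given): how common is the target behavior
    among people who already exhibit the given behavior?"""
    with_given = [p for p in profiles if given in p]
    if not with_given:
        return 0.0
    return sum(target in p for p in with_given) / len(with_given)

baseline = sum("visits_dave_and_busters" in p for p in profiles) / len(profiles)
among_gamers = propensity(profiles, "plays_warcraft", "visits_dave_and_busters")
print(f"baseline: {baseline:.2f}, among Warcraft players: {among_gamers:.2f}")
# baseline: 0.60, among Warcraft players: 0.67 -- the gamers get the coupon.
```

No single record "identifies" anyone, and yet the conditional rate beats the baseline -- which is all the coupon-sender needs.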

What can one do?  Well, you can travel the web with cookies and scripts turned off; you can filter your internet connection through a bunch of remote hosts that "scrub" off your identifying information; you can use remote anonymous e-mail accounts and encrypt all your messages with PGP. But if you do all these things, a big part of the value that can be derived from the Internet will be missing; you won't be able to share content easily with more than a few friends, shop online at most retailers, or host your own publicly-accessible web content.
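For the curious, here is one hedged sketch of what that "scrubbing" can look like in practice, using Python's requests library routed through a Tor SOCKS proxy. It assumes a local Tor client listening on port 9050 and the requests[socks] extra installed; the URL and User-Agent string are placeholders, not recommendations.

```python
# A minimal sketch of "scrubbed" browsing: route a request through a
# local Tor client so the site never sees your IP address. Assumes Tor
# is listening on port 9050 and that requests[socks] is installed.
import requests

TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h: DNS lookups also go via Tor
    "https": "socks5h://127.0.0.1:9050",
}

# A bare requests.get() keeps no cookie jar between calls, and the
# generic User-Agent avoids sending a distinctive browser fingerprint.
resp = requests.get(
    "https://example.org/",
    proxies=TOR_PROXY,
    headers={"User-Agent": "generic-browser"},
    timeout=30,
)
print(resp.status_code)
```

Note how much this costs in convenience: no cookies means no logins, no shopping carts, no personalization -- exactly the trade-off described above.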

There is, however, another way. You can use the system that uses you, and (with luck) you can get more out of it than the system gets out of you. The key question was first asked way back in 1968 by Doug Engelbart, who with his team at SRI (the Stanford Research Institute) developed the first mouse, the first graphical interface, the first collapsible menus, and many other things we take for granted:
"If in your office you as an intellectual worker were supplied with a computer display backed up by a computer that was alive for you all day and was instantly responsive to every action you have, how much value could you derive from that?"
Engelbart demonstrated some basic things: keeping track of shopping needs, simple word-processing, sharing documents, and mapping an efficient travel route. But he didn't see one thing coming: that all these things might eventually become so all-consuming in and of themselves that his imaginary "intellectual worker" would be more distracted than augmented.

Still, we can budget our time -- which remains ours, after all. We can take breaks from Facebook, skip online shopping for a week, deactivate our Twitter feeds, or quit Goodreads.  And we can turn the tables, to some extent, on those who use our time and energies for free by making maximum, careful, deliberate use of the resources they give us in exchange. We may not be able to completely avoid our information being used by marketers -- but we can become very adept at marketing ourselves, and our own intellectual labors, in a way that we can fully control.

Tuesday, February 11, 2014

The Comment Crew

Somehow it seems weirdly appropriate to read, in the online New York Times, that the group of sophisticated hackers in China who have successfully invaded dozens of corporate and military sites in the US is known as the "Comment Crew" -- they have a habit of embedding their viral links in comments, and when users click on these, their entire systems can be compromised.
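The defense, on any site that accepts comments, is to treat everything a commenter submits as hostile. Here's a minimal sketch of that idea in Python -- sanitize_comment is a hypothetical helper, not any real system's code -- which escapes embedded markup and "defangs" links so they render as inert text.

```python
# A minimal sketch of the defensive side: treat every comment as hostile.
# sanitize_comment is a hypothetical helper, not any real system's code;
# production sites should use a vetted sanitizer library instead.
import html
import re

URL_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)

def sanitize_comment(raw: str) -> str:
    # Escape all markup, so an embedded <script> tag or <a href=...>
    # renders as visible text instead of executing or linking.
    text = html.escape(raw)
    # "Defang" any remaining plain-text URLs so an auto-linking UI
    # won't turn them back into something clickable.
    return URL_PATTERN.sub(lambda m: m.group(0).replace("http", "hxxp", 1), text)

print(sanitize_comment('Nice post! <a href="http://evil.example/x">click</a>'))
# Nice post! &lt;a href=&quot;hxxp://evil.example/x&quot;&gt;click&lt;/a&gt;
```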

And who doesn't love a comment? Comments tell us that someone, somewhere, is reading our words; they enable us to seemingly tap the shoulder of well-known writers, journalists, and columnists, and say "Hey pal -- I beg to differ." Comments make even the most static content seem instantly "interactive," and seem to promise the extension of democratic input into this vast and lumpy agglomeration of texts and images and videos we call the Internet.

Except of course they don't -- at least not always. Comments are also the native territory of people whom, in a non-commenting world, we would be blessed to encounter rarely, if ever. There's the Skeptic (Doesn't look like 1962 to me! I'm sure this footage is fake!), the Know-it-all (I'm surprised that the writer is apparently unfamiliar with my recent article in the Journal of Obscure Ramblings), the Blowhard (This is exactly the kind of crap that the liberal media wants us to believe!), and the dreaded Troll (I won't dignify them by imitating them -- we all know them). It's not at all clear that any of these Internet-liberated voices has much of real value to add to the "conversation," and even if they did, with comments soaring into the hundreds in the space of a few hours, whatever has been said, valuable or not, soon slides away into a vast river of verbiage that's slow and painful to scroll through -- so why bother?

On the other hand, I'd hate to have a world without any comments. On my main blog, Visions of the North, I have the advantage that only people who already care or know about the topics I blog about are likely to visit it, and likely to comment on it. I've rarely gotten a rude comment, and only now and then a Blowhard or a Skeptic; the only spam I've encountered is from a certain Chinese concrete company which shall remain nameless, and Blogger's spam filter usually catches them. Sometimes, when a well-known figure or fellow Arctic expert leaves a comment, I feel distinctly honored! And seeing the comments makes the site stats feel a bit more 'real,' to be sure.

Facebook and other social media have picked up on these positive vibes, enabling one to 'like' or comment on almost anything one sees. And since most of those who can see it are one's presumptive friends, the comments are, as they should be, mostly friendly. Occasionally, a lively chat, a bit off-topic but fun, evolves in the comment stream. But there are awkward times, times of TMI, when a friend one knows only distantly posts disturbing personal news. If a friend you can't quite remember posts news that his father-in-law has been diagnosed with cancer, what should you do? Should you 'like' such bad news? If you comment, will that make you a hypocrite? Should you ignore it? Or what if a friend you know slightly suddenly reveals political views that you detest? Time to unfriend?

This is your life. This is your life with comments. What do we make of them? How often have you commented on an online article? Do you read the comments of others? And how much value do you feel comments have added to the online experience -- or taken away from it? Leave your comments below!

Saturday, February 8, 2014

Avatars

Who are we when we're online? And who is anyone else? Is anyone really who they seem to be?

Ever since the first graphical computer interfaces, icons and images of increasing size and depth have been part of the experience; who among those who knew them can forget our old friends Sad Mac, Dancing Baby, or Max Headroom? And in fact, the idea of describing one's on-screen graphic self as an "avatar" (an ancient word with origins in the Vedas) was first used in a computing context way back in 1985, in reference to Lucasfilm's game "Habitat" -- the first online role-playing game with a graphical element, albeit one that looks incredibly primitive to today's users. Until the arrival of the Web interface in the early 1990s, of course, there was no way for a user to share a graphical self outside of a game world, but as soon as people could, they did. No one seems to be quite sure just when avatars first appeared, but soon they were common in online forums, blogs, and on various IRC (Internet Relay Chat) systems. MySpace, famously, allowed avatars and pseudonyms to flourish, such that the question of who someone actually was ceased to be a matter of importance. Facebook originally insisted on real identities, but has since given way to various levels of pseudonymity, so long as the user supplies Facebook itself with his or her "RL" (Real Life) identity.

Avatars come in many flavors; the most common are cartoons, celebrity figures, and consumer products such as cars. The use of animated GIF files enabled many of them, even in the early days of the net, to incorporate motion.  A sampling of popular icons today shows much the same (figures from Family Guy and The Simpsons have a long shelf-life). And of course avatars also persist in modern online gamespace, although the fact that a single player may have many characters in the same game has led to different words for them; in World of Warcraft, it's much more common to call them "toons" or sometimes "chars."
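Those animated GIFs were, and remain, trivial to produce. Here's a minimal sketch using the Pillow imaging library (my choice for illustration; any GIF encoder would do) that writes a four-frame, endlessly looping avatar out of colored squares.

```python
# A minimal sketch of producing an animated GIF avatar with the Pillow
# imaging library (pip install Pillow) -- one encoder among many; the
# "frames" here are just colored squares standing in for real artwork.
from PIL import Image

frames = [Image.new("RGB", (64, 64), color)
          for color in ("red", "darkred", "red", "black")]

# save_all + append_images writes every frame; duration is milliseconds
# per frame, and loop=0 repeats forever -- the classic blinking avatar.
frames[0].save("avatar.gif", save_all=True,
               append_images=frames[1:], duration=250, loop=0)
```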

The most insidious avatars are those that, by their very nature, are already known to be fictitious -- online assistants, customer-service bots, and the icons used by any and all of a site's admins (administrators).

But what is the result of this world full of altered egos? Would trolls be less troll-like if they had to display their actual faces? Some users have kept the same 'handle' or icon for so many years that it practically takes on a life of its own; among my own acquaintances are two examples: Sarah Higley, writer and professor at the University of Rochester, is also known as Sally Caves, who lives on Second Life, produces machinima, and has written an episode of Star Trek: The Next Generation; my friend Charles Isbell, a computer scientist with a degree from MIT, is also known as HfH -- the Homeboy from Hell -- when he writes reviews of hip-hop albums, which he's done for twenty years. Having an avatar has, I think, helped many people sort out the conflicting demands and desires of our increasingly complex lives.

But there is a dark side, too. Avatars can serve to deceive, defraud, and harass other users; most notorious are the "sockpuppets" used to add self-generated comments and cheers to one's own online work. So what should we say? Should "real" identities be enforced? Or do such policies only make matters worse? Have you ever used an avatar, or been deceived by one? Post your answers & comments below!