stubborn necessity

August 19, 2007


Emerging electronic media, although transformative in many ways, are not reinventing our relationship to books. As with any change in the human environment, polarizing opinions tend to dominate our ways of understanding it. These polarities obscure genuine trends, making it harder to imagine what a book in 2015 might look like. First, it is worth observing the historical book of yesterday.

Books, as a communicative form, have maintained dominance over informational authority, the dissemination of meaning, and community imagination across global culture for centuries. The dynamism of oral transmission is lost once the word is written down, but almost every other significant intellectual and relational human development is arguably attributable to the book. Religion is able to define values and right behaviors, lawmakers are able to establish precedent, merchants are able to codify national and international bartering systems, and policy makers are accountable to history. Once Averroes (born Abul-Waleed Muhammad Ibn Rushd) reintroduced the West to Aristotle, a revolution of individuation began which we continue to work through today.

St. Matthew from the Gospel Book of Charlemagne, c.800-10.


Manuscripts were instrumental in this, yet without standardization, context and meaning could change. The first widespread re-imagining of the book occurred with Gutenberg’s movable type. (Though China achieved this with baked clay over 400 years earlier, the basic realities of Chinese script prevented common use.) The Gutenberg Galaxy, as Marshall McLuhan labeled it, enabled text to be set in a way previously not possible. As more and more copies of a work were produced, identically and rapidly, paradigmatic precedents could be more quickly established. People were able to communicate more effectively, as with the first standardized catechisms for Christians, yet also more personally and frequently.

The manipulation of meaning in a written work prior to printing presses has never entirely gone away, though it was slowed for a time once editions numbered in thousands. Anxiety about losing authoritative, factual information is justified, especially in a world where entities such as Google are becoming global repositories on an unprecedented scale.

Yet this anxiety must be contextualized. The West, circa 1200 A.D., did not even know what it was missing until Averroes translated Aristotle. How many books are really in the Bible? And written by whose hand? Absolute meaning is illusory, a reflection of a human need more than an accurate representation of truth. This scientific, rational age is dependent upon the assumption that facts, once discovered, are eternal. Books reflect all dimensions of human experience, and their authority is granted by a wish for it to be so. Through neglect, propagandism and selectivity, meaning is negotiated over time.

Propaganda Billboard in Iran

from Bored.com Crazy and Funny Billboards

Part of the problem with wondering about books’ future is that historically the idea of what a book is has changed. For me, today’s eBook is akin to the copying of books by monks from Charlemagne’s time until the printing press. Manuscripts would change, depending on the inclinations of the monk, his attentiveness, or even the physical degradation of the source text. Something like Wikipedia, though materially sped up, is as malleable as a handwritten manuscript.

Our imagined book is fixed in time, a static recitation of alphabetic or logographic symbols. This is a very limited definition of what a book is. For Max Ernst or Paul Eluard, a book might be a collection of collages in lieu of words. For Alexsandr Rodchenko, a book may contain coins, twine, letter pressings and glass. For Katsushika Hokusai, a book may be a printed fan, a single sheet of paper bound accordion-style, or even a tiny box of lithographs centered on a single theme. The “Museum of the Book” already exists, and even did as a concept in the time of Charlemagne, when he sought to re-establish the lineage of Roman philosophy nearly lost to Visigothic and Vandal raids.

The question then might properly be “What is the possible effect that digitization will bring?” The primacy of books can be attributed to many factors: portability, accessibility, affordability, durability, familiarity, and readability, among others. Until digital means can approximate or replicate all of these conditions, books as physical artifacts with paper bones and inky blood will not be replaced.

Portability for electronic books is on the verge of realization, as is accessibility. Affordability? Maybe not. Durability is unlikely, considering that many manufacturers rely on either inferior craftsmanship or software updates to ensure a continual need for purchasing new equipment. Familiarity can only be established over time. A foremost concern is readability, for though the human eye will adapt to longer exposure times to electronic stimuli, it remains difficult to enjoy an electronic work for as long a time as one can a book.

Moving images will survive too, though film is perhaps too young a medium to serve as a model. Music has undergone about as many transformations as the written word: private amusement, communication, traveling minstrels, orchestral engagements, wax cylinders, vinyl, and digital storage. Books will continue to thrive, fluidly, stubbornly, but mainly out of necessity.


Z39.50

August 7, 2007


Now that the standard search interfaces of the Internet are so common, there is much more awareness of raw information. Sure, there’s always been some lip service intimating that knowledge is power, but people by and large still respond much more quickly to money and guns. Yet there is an obscene amount of cash in information, and now that many of the practices of librarianship, minus any complicated overarching values, are recognizable cash cows, the terms of information dissemination are becoming quite different.

We librarians should have realized what was happening much earlier, and acted upon it. Take the NAICS, for example. What was previously just the Standard Industrial Classification (SIC) code for compartmentalizing business practice began to change by March 31, 1993, when the U.S. Office of Management and Budget decided to work in tandem with Canada and Mexico to update the structure of industry classification.

Moving from a four-digit identification number to a five- or six-digit number, the new parameters became the North American Industry Classification System, or NAICS. Numerous industries were redefined in order to allow for more flexibility and specificity when compiling statistical data on related industries; for example, the old SIC Transportation, Communications, and Utilities sector is now divided among several NAICS sectors, such as Utilities, and Transportation and Warehousing.
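What the extra digits buy is a strict hierarchy, which is easy to sketch in code. Here is a minimal Python illustration of how a six-digit NAICS code decomposes into progressively finer levels; the level names follow the published NAICS structure, but the sample code and comments are purely illustrative, not an official lookup:

```python
# Decompose a six-digit NAICS code into its nested classification levels.

def naics_levels(code: str) -> dict:
    """Split a 6-digit NAICS code into its hierarchical prefixes."""
    if not (code.isdigit() and len(code) == 6):
        raise ValueError("NAICS codes are six digits")
    return {
        "sector": code[:2],          # broadest grouping (e.g. 51 = Information)
        "subsector": code[:3],
        "industry_group": code[:4],
        "naics_industry": code[:5],
        "national_industry": code,   # finest, country-specific level
    }

levels = naics_levels("511120")
print(levels["sector"])  # 51, the Information sector
```

Because each level is just a prefix of the next, statistics can be rolled up or drilled down without any lookup table at all, which is exactly the flexibility the redesign was after.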

Amazingly, in SIC speak there was only one sector for Service Industries. For one, that hints at the major sea change society has undergone in its transition away from a manufacturing, industrial, and agricultural foundation. Perhaps you hadn’t set your clock to signal the date, but on Wednesday, May 23rd, 2007, the human population officially became more urban than rural. Assuming that the Maya calendar’s end date in 2012 does not mean that society as we know it will end, then according to UN estimates there will be 5 billion city dwellers by 2030.

Of course, both China and Warren Buffett have been warning us about overpopulation for years, and, even more alarming to some, Buffett has been short selling against the American dollar, George Soros-style, for a few years now. Nobody’s been warning us about over-city dwelling, though… Hopefully birth rates don’t force a Sprogopolis, Baby-Powered City of the Future upon us, where “The only good baby is a working baby.”

Sprogopolis, Baby-Powered City of the Future

So the NAICS code, while recognizing 79 new manufacturing industries and reorganizing the Retail and Wholesale Trade sectors, simultaneously adopted an entirely new sector which should have raised an unholy din, or at least convened a Committee of Concern, in the library world: the Information Sector, or area 51 (sorry, I couldn’t resist). Sector 51 is described on the NAICS website as “perhaps the most important change,” encompassing 34 industries, of which 20 are wholly new. This change was descriptive and after-the-fact, and it should have acted as a final wake-up call, spurring innovation and collaboration on a large scale in library-land. Instead, at least from the crow’s nest of library school, we are still six catalogs in search of an identity.

Five categories of recognition are outlined, differentiating Sector 51 from traditional industrial designations. They are (to paraphrase):

  1. Unlike traditional goods, an “information or cultural product” does not necessarily have tangible qualities.
  2. Unlike traditional services, the delivery of these products does not require direct contact between the supplier and the consumer.
  3. The value of these products to the consumer lies in their informational, educational, cultural, or entertainment content, not in the format in which they are distributed. Most of these products are protected from unlawful reproduction by copyright laws.
  4. The intangible property aspect necessitates that only those possessing the rights to these works are authorized to reproduce, alter, improve, and distribute them.
  5. Distributors of information and cultural products can easily add value to the products they distribute.

I’m new to library science, but it makes me wonder about the chicken and the egg. One of our most popular catch phrases (which, I must admit, makes me vaguely nauseous at the sound of it, but that’s another post entirely) is to add value, or just value-added, as in value-added services where we go “above and beyond” the normal service interaction. I challenge you to read a single library science text from the past ten years where that phrase isn’t used ad infinitum. But there it is, in point 5… Is the phrase that generic, or is it another way that we are playing catch-up with business?

The more I learn about library administrative organization, and about the lack of collective communication in the past, the more embarrassed I become. Especially when thinking about things like the review features on Amazon.com, which is something that should’ve already existed in library-land long before those cheeky upstarts. We comprise some of the most engaged, learned, passionate book lovers on the planet, and readers’ advisory is our very bread and butter on the public level, but there was no national communicative network before the age of the Internet? No library sponsored über-magazine or BBS or pamphlet or bi-annual that gave voice to our patrons through their patronage of the library?

When I bring this up with my professors, they often cite two factors, cost and resources. The concern about cost I feel I can pretty much deflate from the outset. Is it not costlier to have to fight for public monies, to have to devote higher-paid administrative time to lobbying and glad-handing rather than fostering public goodwill by offering a deeper level of involvement? Did not most libraries already take part in OCLC, or use MARC records or some other collectively managed and federated search-and-organization mechanism that could easily have been adapted for use by our patrons? Something that could be outfitted with images and reviews as well as cataloging records? Why are we still discussing the need to keep content separate from design? We know that reprogramming one CSS, PHP, or Perl document is far easier than updating thousands of HTML, XHTML, and XML pages. We should know about regular expressions as well, right? Why should any of that cost us anything?
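To make the point concrete, here is a sketch in Python of the kind of regular-expression batch edit that turns “updating thousands of pages” into one small script. The directory name, the tag pattern, and the replacement class are assumptions for illustration, not a recipe for any particular site:

```python
import re
from pathlib import Path

# Replace deprecated inline <font> styling with a CSS class across
# every HTML page in a (hypothetical) site directory. Once content is
# separated from design, the look then lives in one stylesheet.
FONT_TAG = re.compile(r'<font[^>]*>(.*?)</font>', re.DOTALL)

def modernize(html: str) -> str:
    """Swap each <font>…</font> wrapper for a styled <span>."""
    return FONT_TAG.sub(r'<span class="body-text">\1</span>', html)

site = Path("site")  # hypothetical site root
if site.exists():
    for page in site.rglob("*.html"):
        page.write_text(modernize(page.read_text()))
```

A one-line change to `.body-text` in the stylesheet then restyles every page at once, which is the whole argument for separating content from design.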

Resources are a different matter entirely. With current CIPA law, which potentially disenfranchises poorer library districts that try to maintain unfiltered web access, and with impending filtration legislation overall, access can sometimes be compromised. That, and we are only now learning how to market ourselves, and are suffering for having waited too long to do so. It’s hard to convince the public to allocate their monies to us, harder still if we have not maintained a good relationship. Forget about ROI: do not neglect a return on emotional investment, okay? (A tip of the pen to Bill Crowley.) Not that we should adopt business models, for there is no way that libraries can reasonably compete with billion-dollar business interests, and I reject the idea of talking about the users of a public library as customers. Why does everything have to be reduced to capitalism? Do we live in Sprogopolis already?

I for one am very excited about efforts to foster the collective wisdom of librarians, and any collaboration between libraries, museums, archives, and other public guardians of knowledge is where it’s at. I am also interested in efforts to intelligently collocate “information packets” as demonstrated by Z39.50. What Z39.50 defines (and it’s not the only such standard, but it is the first true contender) is a protocol for searching and retrieving information across remote, heterogeneous databases: a method for gathering information in disparate bundles that may not share the same method of organization.


Think about your normal web search, and how a basic search without Boolean operators for, say, Siouxsie and the Banshees will turn up 1,480,000 web pages in Google, 838,312 in Gigablast, and 8,354 blog posts in Technorati. By using an internationally recognized protocol that accounts for the placement of semantic and syntactic strings in documents, a particular search can be simultaneously broadened in scope and sharpened in precision. When you search for a record using a proprietary interface, as you might find at your public library, each page is generated on the fly, using your search designations and delimiters to collocate an appropriate response. The thing is, those systems already have fields specified for title, author, publisher, serial, etc., and are able to produce a very accurate pool of hits, where large-scale federated web search engines cannot. It isn’t that Google is inefficient; their search mechanism is possibly the most sophisticated in the world right now, using chains of associative memory to generate the most likely arena of hits in nanoseconds. What they lack is a system that recognizes fields for title, author, authority, etc. across a disparate semantic and syntactic base. It doesn’t exist yet.
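The difference is easy to sketch. Below is a toy catalog in Python: a raw keyword search matches the string anywhere in any field, while a fielded search of the kind a Z39.50 client sends matches only within the attribute asked for. The records and field names are invented for illustration:

```python
# A toy catalog: each record has explicit fields, as in a library OPAC.
records = [
    {"title": "The Scream", "author": "Siouxsie and the Banshees", "type": "music"},
    {"title": "Siouxsie and the Banshees: The Authorised Biography",
     "author": "Mark Paytress", "type": "book"},
]

def keyword_search(q):
    """Web-style search: match the string anywhere in any field."""
    return [r for r in records if any(q.lower() in v.lower() for v in r.values())]

def fielded_search(field, q):
    """OPAC/Z39.50-style search: match only within the named field."""
    return [r for r in records if q.lower() in r[field].lower()]

print(len(keyword_search("siouxsie")))            # both records
print(len(fielded_search("author", "siouxsie")))  # only the band's own record
```

On two records the distinction is trivial; across millions of heterogeneous databases, agreeing on what counts as an “author” field is the hard part, and that agreement is precisely what the protocol exists to carry.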

Shirin Neshat photo

photo by Shirin Neshat

This is what Z39.50 is attempting to do, using baby steps today but with an eye toward systems of tomorrow, using parameters programmed into MARC 21 records, Dublin Core, SGML, XML, etc. The new lay of the land will arise in metadata schema and collective searching techniques, rather than in a system of classification. It isn’t that I think the new AACR2 rules will be irrelevant once updated, nor that the Functional Requirements for Bibliographic Records (FRBR) protocols or the imminent Resource Description and Access (RDA) standards will be useless. I don’t think we should abandon main entries and added entries, either.
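One small illustration of why metadata schema matter more than any single classification scheme is the crosswalk. The Python sketch below maps a MARC-style record into Dublin Core elements, which is roughly what federated search layers do when merging heterogeneous sources; the field choices are drastically simplified assumptions (real MARC 21 records carry indicators and subfields, flattened here to single strings):

```python
# A drastically simplified MARC-to-Dublin-Core crosswalk.
MARC_TO_DC = {
    "245": "title",      # title statement
    "100": "creator",    # main entry, personal name
    "260": "publisher",  # publication information
    "650": "subject",    # topical subject heading
}

def to_dublin_core(marc: dict) -> dict:
    """Relabel whichever MARC tags are present into Dublin Core elements."""
    return {dc: marc[tag] for tag, dc in MARC_TO_DC.items() if tag in marc}

record = {"245": "The Story of Writing",
          "100": "Robinson, Andrew",
          "650": "Writing--History"}
print(to_dublin_core(record)["creator"])  # Robinson, Andrew
```

The cataloging rules behind each source can stay different; what a collective search layer needs is only an agreed mapping into a common, simpler vocabulary.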

Rather, I am wondering whether the most effective method for collocating information will come from the design of collaborative search functions, rather than from a rigid semantic and syntactic language. A quick glance at RefWorks, and at all of the output styles just for bibliographic records, reveals hundreds of record-making systems. It seems naive to me to try to impose a lone standard system of cataloging as well.

Microfiche


An Embarrassment of Riches

July 8, 2007

Yes, there is a lot of talk about Web 2.0 these days, but apparently people are happier just quietly getting things done. Pull up a chair, I need to show you something.

It’s called Go2Web2.0. This is by far the most comprehensive and easy-to-use guide to the electronic parcels of Web 2.0 I have come across. Go on, take it for a spin.

With little work you can visit the Afrigator, your source for everything in the Afrosphere. The Sparkmeter offers to take a bite out of bias in the infosphere. Huminity claims to harness something called Social Ecosystems. TwitThis will Twitter messages for you. (Huh? Isn’t Twitter already simplified?) Eyejot offers video messaging, and Talkster makes phone calls to your IM buddies a snap. Go far enough down the wonder wheel and the clickable squares go blank, perhaps in deference to your overstimulation. Don’t let that fool you, every town square is occupied by hanging chads.

I am not linking to individual sites, in order to let you explore Go2Web2.0 yourself; the site is that absurdly cramped with would-be innovations as well as flecks of genuine gold. If you need your web fed to you in astronaut-style bites, however, then you can cool down and relax at the Web List. They aggregate the collected clicks of us, or rather you and me, the base-level users of the Internet. Just be warned: “Please understand you use this website (“THEWEBLIST.NET”) “AT YOUR OWN RISK”. THEWEBLIST.NET does not take any responsibility for any problems or damage that may occur from the use of this website.”

I need a disclaimer for Robotic Librarian like that…


Accountancy

July 4, 2007

“As yet no one can give much account of what is taking place in your head as you read this sentence.” (Robinson, 217)

Language creates free-floating maps in our minds, chains of association and memory which can be liberating and controlling at the same time. Many linguists imagine that the hardwiring for language already exists, perhaps already evident in utero, certainly siphoning understanding from ambient environments right from the moment of birth. Others have hypothesized that our ability to integrate language is due to complex symbiotic chemical relationships fostered by either dietary or religious/shamanic habits we developed over centuries.

A more rationalist view proposes necessity. The earliest extant writings we have, Sumerian clay tablets from Mesopotamia, list products such as barley, beer, and labourers, as well as fields and their owners. A writer discussing the Sumerian tablets commented that writing developed “as a direct consequence of the compelling demands of an expanding economy.” (For example, check out the writings of Orville R. Keister for details.) This explanation for the origin of writing does make a lot of sense, even considering the loss of many early writings which were on perishable materials or neglected by the dust-broom of history. Necessity does not preclude the hardwiring theory, since it takes time for the brain to create new memory-system mechanisms. This still leaves open the question of how language originated out of a state of no language.

Sumerian clay tablet from approx. 2800 b.c.

The artistry of many early pictographs, hieroglyphs, and cuneiform writings, as well as proto-Elamite and ancient Chinese scripts, is undeniable. Even from the beginning, before any system coalesced into standard forms, writing to my mind evidenced more than a simple desire for record keeping. Examples of proto-writing (meaning systems of record keeping and notation that do not use rebuses, logograms, phonograms, etc.), such as the tablet from the office of Kushim, were a mixture of numeric records and personalized renderings of everyday goods and services.

It was not long, only a few hundred years at the most, before the Egyptians began to use hieroglyphs and Demotic to write spells, commune with the gods, and boast of prestige, status, and wealth. They chose not to adopt a purely alphabetic uniconsonantal script, the beginnings of which already existed; whether this was to preserve the mystery of sacred rites or to more accurately reflect the reality of the language is not known. Once the miracle of writing took root, humans jealously protected their cumulative knowledge and sought influence through strategizing trade and warfare, while enjoying the sacred and contemplative facets of language as well.

Leaf from the Egyptian Book of the Dead

Writing continued to develop across numerous continents and cultures through elaborate channels of trade and conquest, and different cultures experimented with phonemic, syllabic, logographic, and consonantal systems. Right to left, left to right, top to bottom… various reading and writing orientations might show up in the same culture, even in the same document. (My favorite system is boustrophedon, or “as the ox plows a field,” where a line of writing reaches the edge of a page or tablet and then doubles back, the next line running in the opposite direction.) Are the physical demands on our brain in learning language and writing different from those of the systems of thought which came before?

Early Greek example of boustrophedon writing

Modern studies of writing’s development often discuss its correlation to speech patterns. “Until the last few decades it was universally agreed that over centuries western civilization had tried to make writing a closer and closer representation of speech…Scholars – at least western scholars – thus has a clear conception of writing progressing from cumbersome ancient scripts with multiple signs to simple and superior modern alphabets. Few are now as confident.” (Robinson, 214-215) In truth, we are developing new communicative languages, continually supplementing and expanding our functional repertoire.

Language is fundamental to culture, identity, consciousness and memory in a way that makes it ideal fodder for scientific experimentation. Early Greek scholars imagined that the brain secreted fluids, or spirits, to communicate. We now know much more about electrochemical charges in the neural pathways of our brains. Biologically, electrical currents are transmitted through ions, much like the movement of electrons in wire. “In terms of operation, a neuron is incredibly simple. It responds to many incoming electrical signals by sending out a stream of electrical impulses of its own. It is how this response changes with time and how it varies with the state of other parts of the brain that defines the unique complexity of our behavioral responses.” (Regan, 20)

As science begins to narrow down communicative networks in the brain, and close in on systems of post- and pre- synaptic neurotransmissions, concentration on both genetic and chemical cartography has intensified. Just recently several scientists have brought a bit of Eternal Sunshine of the Spotless Mind to life by developing a drug to banish bad memories. “We generally think of memory as an individual faculty. But it is now known that there are multiple memory systems in the brain, each devoted to different memory functions.” (Regan, 77)

Think about the story of Phineas Gage. After an accident on a Vermont railroad site in 1848, Phineas was left with a hole through his skull and missing a portion of the ventromedial region of his brain. He recovered fairly quickly, and reportedly did not lose consciousness in the moment, standing upright just after the accident and asking about work. Previously a reliable and amiable foreman, afterward he was prone to fits of rage and profanity. The oft-quoted refrain from his friends is that he was “no longer Gage.”

Cases such as his arguably fed into John Watson’s development of the behaviorist school of psychology in the early twentieth century, most popularly understood through the work of B. F. Skinner and his book Walden Two. And behavioral psychology, with its emphasis on observable phenomena, still exerts power over our modern philosophical approach to consciousness, even if research into the genome is recontextualizing the discussion. Craig Venter, a primary architect of genomic sequencing, said in 2001 that “In everyday language the talk is about a gene for this and a gene for that. We are now finding that that is rarely so. The number of genes that work in that way can almost be counted on your fingers, because we are just not hard-wired in that way.” Interestingly, he is now working on developing designer microbes to combat our oil addiction; is this the beginning of nanotech wetware?

Microbe electron photo by Scott Willis (flickr) creative commons

Earlier today I was working on an assignment for a class of mine, trying to encode several web pages that will, at the end of the last session, go live on the web. I lost all my files somehow and had to reconstruct what I could, including re-finding the pool of images I had initially discovered to supplement my topic. I searched through Flickr’s library of Creative Commons-licensed photos, scanning mercilessly for inspiring and beautiful photos. At the end of a half hour I think I had perused over 3,000 photos, and I was struck by both my pace and my faculty for recognition. It’s just unbelievable to me how effective our discriminatory abilities are, how abstract and correlative. How is it that we can know so quickly what appeals to us, or what meets a particular need at a particular moment?

So much of what we call Web 2.0 uses cooperatively communicative tools for discrimination. We are rapidly developing unique languages for linking machines, humans and the natural world into electronic ecosystems that are often self-sufficient and collectively interpreted. It’s an amazing irony that we are returning to hieroglyphic, or rather logographic, linguistic roots with a dedicated fervor, reimagining language for the benefit of communication across normal linguistic divides. This is evident both in the programming languages that comprise the design of Internet forums as well as graphic symbols for travelers and efficiency in communication.

International signage

Both gypsy and hobo communities have made extensive use of logograms, and even Olympic committees have tried to bridge cultural divides through ideograms. (Let’s try to forget the horrible 2012 design from England…) Advertising at its root is most effective in establishing branded identities which can achieve the iconic status of a letter or character; the favicon is rapidly becoming a digital fingerprint essential to the establishment of one’s Internet identity. Is recognition of these signs any different neurally than reading in our native tongue?

Language has been accused of engendering psychosis, most notably by English psychiatrist Tim Crow. Language makes use of both hemispheres of the brain, though it is theorized that the left hemisphere is more fundamental; where the right hemisphere registers the visual appearance of words, the left hemisphere contextualizes their meaning. Crow feels that psychosis (and specifically schizophrenia), which is associated with high levels of neurotransmission in the right hemisphere, is the price of learning language. “Given that psychosis is universal, affecting all human populations to approximately the same degree, and that it is biologically disadvantageous, there must be some reason why it has persisted.” (Regan, 109) Who knows? It’s likely that only the continued development of highly specialized pharmaceuticals will answer his hypothesis.

Crow’s idea is quite possibly a causation fallacy, but I am not qualified to hazard a guess. Memory is central to our consciousness, and language makes use of so many aspects of memory. Michel Foucault has stated that “Language is the first and last structure of madness, its constituent form; on language are based all the cycles in which madness articulates its nature.” What does this mean for the intensified rate of language extinctions brought on by population growth, complex economic interdependency and radical exploration for resources? Or are we developing new forms of communication so quickly that it will counterbalance any loss of classical alphabets and character sets? Perhaps memory will be externalized as we develop new skills for increasingly abstracted electronic environments. “…if our memories are to remain, then some physical change must occur–memory cannot be imprinted on molecules since molecules are constantly rejuvenated by the body at different rates.” (Regan, 80)

The brain is remarkably flexible when it comes to long-term memory. Working memory, which is often mislabeled “short-term memory” but is more akin to a mental sketchpad (to use Alan Baddeley’s term), uses the prefrontal cortex. For longer-term memory, though, the architecture of the brain is quite individuated depending on different needs; it is believed that the cortex is the final home for information and memory, and that the hippocampus is intimately involved with processing what will become long-term memories. Long-term memories may require dendritic growth and the formation of new synapses. Interestingly, “learning a foreign language in adulthood employs a brain area that is distinct from that used in establishing one’s first language.” (Regan, 83)

In many ways the Internet, through, for example, eBay, Craigslist, and Google AdSense, is returning graphic communication to its accountancy roots; along the long tail, each of us is a nested market of one, with the power to shoulder the vendor’s yoke as well. As levels of interconnectivity flourish online, and many millions of text messages are transmitted daily, we are no closer to understanding the relationship of our digital media to the cultivation of memory. The Internet, for all its novelty, is possibly recreating the elemental genesis that inspired language in the first place, but what we gain in immediacy may be lost in the flurry of information.

In the end I am left wondering, what is the signal-to-noise ratio for language today?

Books cited:
Robinson, Andrew. The Story of Writing. 2nd ed. Thames & Hudson, 2001.
Regan, Ciaran. Intoxicating Minds: How Drugs Work. Columbia University Press, 2001.

Objekt: Web 2.0

June 15, 2007
Web 1.0 –> Web 2.0
DoubleClick –> Google AdSense
Ofoto –> Flickr
Akamai –> BitTorrent
mp3.com –> Napster
Britannica Online –> Wikipedia
personal websites –> blogging
evite –> upcoming.org and EVDB
domain name speculation –> search engine optimization
page views –> cost per click
screen scraping –> web services
publishing –> participation
content management systems –> wikis
directories (taxonomy) –> tagging (“folksonomy”)
stickiness –> syndication

(Quoted wholesale from O’Reilly article, read on)

In my last post I referred to O’Reilly Media, a highly influential publisher of programming texts that share the same spare cover design: bold, clear text on a white background and some sort of realistically drawn animal. (I remember one with a bank safe as well, but either way they’re recognizable from twenty paces, easy.)

The company is headed by Tim O’Reilly, who I hope won’t mind if I liberally borrow from and simultaneously plug in this post. He has posted an article that I think should be required reading for all people who traffic in information, in any form, using electronic technologies. The article is called What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software. You can read it in Chinese, French, German, Italian, Japanese, Korean and Spanish. (The design differences are fascinating, though I think the French lucked out. Why is English the stodgiest?)

Inspired by the achingly audible pop of the dot-com bubble, his group and MediaLive International held a conference on Web 2.0 to figure out what happened and why. Contrary to many popular reports, they felt that “far from having ‘crashed’, the web was more important than ever, with exciting new applications and sites popping up with surprising regularity.”

I cannot recommend the article highly enough. The tools for library service optimization are there, and I hope that by next year I will be working with fellow librarians in translucent, individually reactive modular info-techopolies which compile lightweight multiple-platform open sourced global databases while trawling the stacks on our nuclear-propelled web-bots.

Illustration by Charles Schridde, found on Paleo-Future blog, 1963 post.

Or, maybe just providing good service to a diverse array of contented, info savvy patrons.

“Let’s close, therefore, by summarizing what we believe to be the core competencies of Web 2.0 companies:

  • Services, not packaged software, with cost-effective scalability
  • Control over unique, hard-to-recreate data sources that get richer as more people use them
  • Trusting users as co-developers
  • Harnessing collective intelligence
  • Leveraging the long tail through customer self-service
  • Software above the level of a single device
  • Lightweight user interfaces, development models, AND business models”

The next Web 2.0 conference will be held from October 17-19, by the way, in San Francisco. I’ve been to about 38 or 39 states, but never California yet…

hmm…


The Dread Pirate Wiki

June 15, 2007

This morning I decided to follow up on my visit to protest.net (mentioned in my last post) and the lack of activity I saw there. I couldn’t believe that my old programmer friend would let it lie low unless he had something more significant going on.

Little did I know… It turns out that he is more involved than ever, and more broadly advocating for his beliefs. You can check out his blog, Anarchogeek, to see a bit of what occupies him these days. I was wrong about protest.net’s inactivity; I just wasn’t looking in the right places. Even more amazing to me is that he’s involved with programming for Indymedia, a group I first heard of through the Asheville Global Report. Both are essential sources of underreported news from around the globe.

Web Header for AGR

(I miss Asheville, NC very much, and one of the reasons is the Global Report. You can’t find a more dedicated volunteer staff trying to make sure that buried news reports get some press outside of politically aligned outfits such as NPR and the Nation. I was especially excited to find that, even with operating expenses in the red, the Report is distributed up here in Chicago as well. I found copies both at Alliance Bakery on Division, east of Damen, and at Earwax Cafe on North, also east of Damen. Congratulations to Eamon et al.)

All of this got me thinking again about wikis and social networks, and library professionals’ general distrust of their reliability and security. Annalee Newitz, with Alternet, has a wonderful little article about Wikipedia activism, pointing out why it is important to be active in supporting and contributing to Wikipedia. If it’s worth having an opinion about in the first place, then isn’t it worth the trouble?

And wikis themselves are proving to be very popular business tools, especially when preparing for industry get-togethers. Perhaps we could use them to collaborate with the ALA in drafting meaningful policies for local administrative use? Or in drafting legislation for national advocacy? (If this is already happening please don’t feel shy about letting me know about my ignorance…)

Many library professionals seem to feel forced into the debate because of the growing numbers of users. And libraries cannot be faulted wholesale for tensions about the quality of new information sources; as acknowledged leaders in tech-heavy environments, with considerable technological acumen, we are picking up on valid problems. (Sorry for the lack of links here; I can’t reference some good articles without running into copyright issues, but check out Library Journal as well as several Emerald Library articles.)

One limitation I see in our involvement with Ning, Second Life, MySpace, Facebook and their ilk is that we are leaving the design and programming to others. Apart from Casey Bisson’s Scriblio (formerly WPopac) project, which uses open-source blogging code to enable a user-friendly library catalog interface, there isn’t much going on with librarians generating unique code outside of individual webpages.

Which brings me back to my friend, who is working on a Ruby on Rails book for O’Reilly Media. Some of the work he is doing can point the way toward the open-source networking frameworks possible in library environments, so we don’t have to rely solely on Google to scan our books, or Yahoo to author our widgets and apps. My point is, there are dedicated professional programmers outside the library profession who might be willing to help us out if we ask nicely enough and are willing to learn some of it ourselves. Just check out Change.org to see some of the networking possibilities available if we meet the net head on. We need to harness the Wisdom of Crowds, not condemn it outright as Michael Gorman so often does these days. We’re all in this together, right?
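And the barrier to entry is lower than it might look. As a taste of the kind of lightweight, open-source tooling I mean, here is a minimal sketch in plain Ruby (the language of my friend’s book) that parses a syndication feed with the standard library’s rss module. The feed itself is invented for illustration; a real library application would fetch feeds from actual catalog or blog URLs.

```ruby
require 'rss'

# A minimal RSS 2.0 document, of the kind a library blog or
# new-acquisitions feed might publish. (Contents are made up.)
sample_feed = <<~XML
  <?xml version="1.0"?>
  <rss version="2.0">
    <channel>
      <title>New Acquisitions</title>
      <link>http://example.org/library</link>
      <description>Recently catalogued titles</description>
      <item>
        <title>The Wisdom of Crowds</title>
        <link>http://example.org/catalog/1</link>
      </item>
    </channel>
  </rss>
XML

# Parse the feed and list each item with its catalog link.
feed = RSS::Parser.parse(sample_feed)
feed.items.each { |item| puts "#{item.title} -> #{item.link}" }
```

Nothing here is library-specific; the same dozen lines could aggregate new-acquisitions feeds, branch-blog posts, or patron-facing announcements.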


Post #3: You can’t tell which way the train goes by looking at the tracks

June 11, 2007

I recently returned from NYC, where I was fortunate to attend this year’s BookExpo America industry get-together. I’ve started a post about it several times now, and there are certainly quite a few stories to tell, but focusing on something substantial is oddly difficult.

I am not exactly a power player, but I have attended more than a few of these events, beginning with shows sponsored by SEBA (the South Eastern Bookseller Association). BookExpo is always a thrill, as I get to gossip with old-time book reps and say hello to authors I met years before, who taught me that jet lag can be a lifestyle.

As a long-time bookseller, it’s possible for me to track the seasons of the book world just as one might with weather or movie releases. Summer is good for embarrassing confessionals or literary weepies, and fall brings travel narratives to whet your appetite for a winter getaway in the warmth of Asia. Far more interesting to me, however, are the unspoken associations that emerge: narrative rivers which collect in liquid eddies and mirror human insecurities, dreams and unconscious concerns.

Some themes converge inescapably. Each year business at large continues to value a culture of information, and this economic focus is reflected in sidelines, seminars and, most obviously, keynote speakers. In 2004 we had Bill Clinton, and this year brought Alan Greenspan. Along with a growing courtship of library professionals, speakers and events (we even had our own Expo lounge, as well as a dedicated hotel this year!), there is also an almost exponential focus on tech professionals. I saw at least six vendors of eBook platforms, sharing the floor with Google, Microsoft, Apple and International Digital Rights pavilions.

“Print will be the last media to be read on a device… and we shouldn’t be proud of that,” said Shatzkin, as quoted on Michael Cairns’ industry blog PersonaNonData, posted during the Expo itself. (His blog has a number of interesting links and opinions as well; it’s worth checking out, especially his May 28th post about why publishing professionals must blog, featuring advice that I think any librarian could make use of.) Reading Shatzkin’s quote made me think right away of a product I saw at the event: the Espresso Book Machine, a print-on-demand device that prints mass-market-quality books for consumers in 5 to 10 minutes. Think of the possibilities for libraries!

Perhaps books will be around for a little while longer. No digital reader is emerging as a leader yet, and I don’t think people are yet comfortable enough with existing technologies to make the switch. As expected, the rights are proving to be a nightmare, and it is growing copyright tensions that inspired one of the silliest moments of the whole Exposition.

Richard Charkin, Chief Executive of Macmillan (a subsidiary of Holtzbrinck), took a stand against the Google book scanning project by “stealing” two laptops from their display during BookExpo. Well, he didn’t quite steal them; that would have been too strong a statement, or perhaps he was afraid to really follow through on his feeling. Instead, as you can read on his own blog, he and a friend picked up the computers and then waited a short distance away to see if anyone would notice.

In their June 8th blog post, Boing Boing quote Larry Lessig, who very astutely points out the poor logic of Charkin’s act, and perhaps will help others develop more effective, reasoned pranks. Or perhaps litigation is the answer, as the Association of American Publishers, currently suing Google over the project, are hoping. (France may be following their lead, it seems.)

So far, though, Google is scanning out-of-print works presumably in the public domain, and a large part of the agitation about their activities centers around the argument that Google should be seeking out rights holders. I must admit I am still conflicted about the project overall, in large part because of the lack of an honest end-goal on Google’s part. (Not to mention Google’s nomination as the “most invasive company” of 2006 by Privacy International, which has some alarming points to make about Google’s overarching business practices.) Ultimately, though, I feel that in the book scanning project publishers are asking Google to perform work they should be doing themselves, if they are genuinely interested in renewing a copyright on something; and any contested work is either not made publicly available online or quickly removed.

Copyright protections have escalated to absurd proportions, and DRM systems, as I’ve previously noted, generally demonstrate a corporate desire to expand copyright and to monitor end-user practices rather than simply reinforce existing protections. DRM is absolutely necessary; no one can reasonably debate that point. But as Edward Felten has noted in several outstanding articles, it needs real-world limitations before such software packages freely provide monopolistic and/or big-brother-style advantages to the business end.

Anyhow, hopefully soon I can post the fun stories from BookExpo, especially one about how the kindness of a wonderful Long Island librarian helped me score a free, signed copy of a Robert Sabuda pop-up masterpiece. If you can attend an Expo I highly recommend it, if only to take part in an integral facet of our information-dependent society.

Cheers!