Semantic, a big big love

August 24, 2007

As digital environments grow in sophistication and scope I sense a complementary resurgence of interest in our natural environments as well. Yet ironically, features of rampant biodiversity that once survived in tandem with humanity now survive largely in spite of it; many such systems are joining an ever-longer queue to stand in topographic isolation, victims of profligate waste, consumerism or cultivated mono-agriculture. As one example: “The U.S. Department of Agriculture estimates that more than 6,000 varieties of apple trees have been lost since 1900.” To that end, I feel as though any time we can better understand even a fraction of a natural holistic system, we come closer to holding such losses at bay.

There is an unspoken positive side to over-saturation with media: a learning curve that accompanies the environment of selectivity afforded to all of us through technology. For me it comes down to several key concepts: organized selectivity, interoperability, a simple design and interface, and, ideally, open-source code that is free for users to alter. It can be as simple as the Site Search feature that Gigablast offers through its web search interface, where anyone can create a web search box for a blog or site that limits itself to a select pool of (up to) 200 web pages or files, potentially offering greater depth and authority to a guided web search. Or it can be as complex as Google Earth, where a free download allows anyone to view satellite images of any location worldwide.

Organization continues to be difficult to achieve, and the reasons for this are stupefying in their complexity. Perhaps the simplest expression of these problems is the lack of a standard for archival and descriptive metadata. And that doesn’t even cover the problems associated with search terms themselves, where a search for buddha can summon results which encompass religion, Hinduism, Zen Buddhism, Osamu Tezuka, films such as Little Buddha, Buddha, or The Light of Asia, Herman Hesse, marijuana, Buddha Bar, meditation, Buddha-Heads, amulets, university and college curricula, and on and on.

Many of you probably already know I am referring in part to what Tim Berners-Lee called the Semantic Web. Numerous start-ups and seasoned web veterans are hard at work developing protocols for just such a machine-readable global database. In fact, this year there already are or will be several beta versions from hopeful Semantic Web wranglers: Radar Networks, TextDigger, Theseus in Germany and many others. The W3C has a dedicated Semantic Web Activity News blog that is worth subscribing to just for its window into the official side of things, with technical specs, links to rules for interoperability and notes on large-scale projects.

There is an article in the August 2007 issue of MIT’s Technology Review that inspired these thoughts, seemingly written for a budding librarian obsessed with modern systems of digital and material archiving. Second Earth by Wade Roush is essentially a current assessment of the ways in which we are realizing the Metaverse described in Neal Stephenson’s Snow Crash, or rather the Mirror Worlds hypothesized by David Gelernter in his eponymous book of 1991. He traces the development of both Linden Lab’s Second Life and the wildly popular application Google Earth, and imagines the impact of a viable synthesis of the two digital exo-systems.

Imagining an environment that truly simulates the Earth is far easier than realizing it. The estimated computational load alone would necessitate the dedication of, say, the surface of the moon to such a project. As Roush notes, “At one region [65,536-square-meter chunk of topographic architecture] per server, simulating just the 29.2 percent of the planet’s surface that’s dry land would require 2.5 billion servers and 150 dedicated nuclear power plants to keep them running. It’s the kind of system that doesn’t ‘scale well’.”
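Roush’s figure is easy to sanity-check. Using the standard value for Earth’s total surface area (everything else comes straight from the quote), a few lines of arithmetic land in the same ballpark:

```python
# Back-of-envelope check of Roush's server estimate.
# Earth's total surface area (~5.10e14 m^2) is a standard figure;
# the region size and dry-land fraction come from the quoted passage.
EARTH_SURFACE_M2 = 5.10e14
REGION_M2 = 65_536          # one simulated region per server
DRY_LAND_FRACTION = 0.292

dry_land_m2 = EARTH_SURFACE_M2 * DRY_LAND_FRACTION
servers = dry_land_m2 / REGION_M2
print(f"{servers:.2e} servers")  # on the order of a couple billion
```

That comes out around 2.3 billion servers, consistent with the “2.5 billion” in the article once you allow for rounding.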

Regional weather tracking is one enticing reality, as are 3-D flight-tracking overlays for use with Google Earth. Cyber-tourism is also an intriguing possibility, helping to reduce environmental damage to fragile or endangered locations much in the way that digitization of medieval manuscripts has already done. Some cities are realizing this: Amsterdam, for one, has provided architectural specifications to Second Life to make visitors’ trips more realistic, and Germany supplied plans and images for Berlin’s Reichstag building, which can now be visited in exceptional detail by Second Lifers.

“It’s the wiring of the entire world, without the wires: tiny radio-connected sensor chips are being attached to everything worth monitoring, including bridges, ventilation systems, light fixtures, mousetraps, shipping pallets, battlefield equipment, even the human body.” Even knee surgery is being improved by such sensors: three micro-sensors are inserted about the knee, and triangulation helps the surgeon avoid unnecessary incisions and invasive exploration, reducing both the number of surgeries (which can be many for a knee) and an outpatient’s convalescence.

When I can ignore my skepticism and paranoia I am enchanted by the possibilities, and a small measure of my hope for humanity is restored.  As I said, I have faith in the Big Picture, and the more respect for co-dependent systems we have the closer we come to achieving a sound balance. A friend recently alerted me to Worldmapper, and their beautiful cartographic treasures seem aligned with the emerging Mirror World and with improved Semantic Web capabilities.

Through 366 world maps you are given an idiot’s guide to various global statistics, just by varying the size of geographical regions to reflect raw numbers. For example:

Want to see where people watch the most films?


How about what regions import the most fish and fish products?


Or how about regions with the most forest depletion?


It’s unbelievable, the hypnotic range of cartograms you can find on this site, each with a detailed explanation, citations and even downloadable .pdfs for you to print out and use in any way you wish. Maps about cocoa, disease, disasters, housing, trade, food, health services, literacy, labor, maternity, migrants, sanitation…
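The resizing trick behind these cartograms can be sketched in a few lines; the numbers here are invented for illustration, not Worldmapper’s data:

```python
# Cartogram core idea: each region's drawn area is proportional to its
# share of the statistic, not to its physical land area.
# Region names and values are invented for illustration.
films_watched = {"Region A": 800, "Region B": 150, "Region C": 50}
TOTAL_MAP_AREA = 100.0  # arbitrary drawing units

total = sum(films_watched.values())
drawn_area = {region: TOTAL_MAP_AREA * value / total
              for region, value in films_watched.items()}
print(drawn_area)
```

Real cartogram algorithms then deform the region boundaries to hit those target areas while keeping neighbors adjacent, which is the genuinely hard part.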

It just blows me away each day what one can find on the web, offered free and clear to the known universe.

An Embarrassment of Riches

July 8, 2007

Yes, there is a lot of talk about Web 2.0 these days, but apparently people are happier just quietly getting things done. Pull up a chair, I need to show you something.

It’s called Go2Web2.0. This is by far the most comprehensive and easy to use guide to electronic parcels of Web 2.0 I have come across. Go on, take it for a spin.

With little work you can visit the Afrigator, your source for everything in the Afrosphere. The Sparkmeter offers to take a bite out of bias in the infosphere. Huminity claims to harness something called Social Ecosystems. TwitThis will Twitter messages for you. (Huh? Isn’t Twitter already simplified?) Eyejot offers video messaging, and Talkster makes phone calls to your IM buddies a snap. Go far enough down the wonder wheel and the clickable squares go blank, perhaps in deference to your overstimulation. Don’t let that fool you; every town square is occupied by hanging chads.

I am not linking to individual sites, in order to let you explore Go2Web2.0 yourself; the site is that absurdly crammed with would-be innovations as well as flecks of genuine gold. If you need your web fed to you in astronaut-style bites, however, then you can cool down and relax at the Web List. They aggregate the collected clicks of us, the base-level users of the Internet. Just be warned: “Please understand you use this website (“THEWEBLIST.NET”) “AT YOUR OWN RISK”. THEWEBLIST.NET does not take any responsibility for any problems or damage that may occur from the use of this website.”

I need a disclaimer for Robotic Librarian like that…


July 4, 2007

“As yet no one can give much account of what is taking place in your head as you read this sentence.” (Robinson, 217)

Language creates free-floating maps in our minds, chains of association and memory which can be liberating and controlling at the same time. Many linguists imagine that the hardwiring for language already exists, perhaps already evident in utero, certainly siphoning understanding from ambient environments right from the moment of birth. Others have hypothesized that our ability to integrate language is due to complex symbiotic chemical relationships fostered by either dietary or religious/shamanic habits we developed over centuries.

A more rationalist view proposes necessity. The earliest extant writings we have, Sumerian clay tablets from Mesopotamia, list products such as barley, beer and labourers, as well as fields and their owners. A writer discussing the Sumerian tablets commented that writing developed “as a direct consequence of the compelling demands of an expanding economy.” (For example, check out the writings of Orville R. Keister for details.) This explanation for the origin of writing does make a lot of sense, even considering the loss of many early writings which were on perishable materials or neglected by the dust-broom of history. Necessity does not preclude the hardwiring theory, since it takes time for the brain to create new memory system mechanisms. This still leaves open the question of how language originated out of a state of no language.

Sumerian clay tablet from approx. 2800 b.c.

The artistry of many early pictographs, hieroglyphs and cuneiform writings, as well as proto-Elamite and ancient Chinese scripts, is undeniable. Even from the beginning, before any system coalesced into standard forms, writing to my mind evidenced more than a simple desire for record keeping. Examples of proto-writing (meaning systems of record keeping and notation that do not use rebuses, logograms, phonograms, etc.), such as the tablet from the office of Kushim, were a mixture of numeric records and personalized renderings of everyday goods and services.

It was not long, only a few hundred years at most, before the Egyptians began to use hieroglyphs and Demotic to write spells, commune with the gods and boast of prestige, stature and wealth. They chose not to adopt a purely alphabetic uniconsonantal script, the beginnings of which already existed; whether this was to preserve the mystery of sacred rites or to more accurately reflect the reality of the language is not known. Once the miracle of writing took root, humans jealously protected their cumulative knowledge and sought influence through strategizing trade and warfare, while enjoying the sacred and contemplative facets of language as well.

Leaf from the Egyptian Book of the Dead

Writing continued to develop across numerous continents and cultures through elaborate channels of trade and conquest, and different cultures experimented with phonemic, syllabic, logographic and consonantal systems. Right to left, left to right, top to bottom…various reading and writing orientations might show up in the same culture, even in the same document. (My favorite system is boustrophedon, or “as the ox plows a field,” where a line of writing would reach the end of a page or a tablet and the surface would be turned 180 degrees before the writing continued.) Are the physical demands on our brain in learning language and writing different from those of the systems of thought which came before?

Early Greek example of boustrophedon writing

Modern studies of writing’s development often discuss its correlation to speech patterns. “Until the last few decades it was universally agreed that over centuries western civilization had tried to make writing a closer and closer representation of speech…Scholars – at least western scholars – thus had a clear conception of writing progressing from cumbersome ancient scripts with multiple signs to simple and superior modern alphabets. Few are now as confident.” (Robinson, 214-215) In truth, we are developing new communicative languages, continually supplementing and expanding our functional repertoire.

Language is fundamental to culture, identity, consciousness and memory in a way that makes it ideal fodder for scientific experimentation. Early Greek scholars imagined that the brain secreted fluids, or spirits, to communicate. We now know much more about electrochemical charges in the neural pathways of our brains. Biologically, electrical currents are transmitted through ions, much like the movement of electrons in wire. “In terms of operation, a neuron is incredibly simple. It responds to many incoming electrical signals by sending out a stream of electrical impulses of its own. It is how this response changes with time and how it varies with the state of other parts of the brain that defines the unique complexity of our behavioral responses.” (Regan, 20)
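Regan’s description of a neuron responding to incoming signals with a stream of impulses of its own maps loosely onto the classic leaky integrate-and-fire toy model. This is a deliberately crude sketch of that textbook abstraction, not a claim about real neural dynamics:

```python
# Leaky integrate-and-fire sketch: the neuron sums ("integrates") incoming
# currents, its potential decays ("leaks") over time, and it emits a spike
# and resets whenever the potential crosses a threshold.
def simulate(inputs, threshold=1.0, leak=0.9):
    """Return the spike train (0s and 1s) produced by a stream of inputs."""
    potential, spikes = 0.0, []
    for current in inputs:
        potential = potential * leak + current  # decay, then add input
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0                     # reset after firing
        else:
            spikes.append(0)
    return spikes

print(simulate([0.5, 0.5, 0.1, 0.9, 0.2]))
```

The “incredible simplicity” Regan describes is exactly this: a handful of arithmetic rules per cell, with the complexity living in how billions of such units interact over time.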

As science begins to narrow down communicative networks in the brain, and close in on systems of pre- and post-synaptic neurotransmission, concentration on both genetic and chemical cartography has intensified. Just recently several scientists have brought a bit of Eternal Sunshine of the Spotless Mind to life by developing a drug to banish bad memories. “We generally think of memory as an individual faculty. But it is now known that there are multiple memory systems in the brain, each devoted to different memory functions.” (Regan, 77)

Think about the story of Phineas Gage. After an accident on a Vermont railroad site in 1848, Phineas was left with a hole through his skull and missing a portion of the ventromedial region of his brain. He recovered fairly quickly, and reportedly did not lose consciousness in the moment, standing upright just after the accident and asking about work. Previously a reliable and amiable foreman, afterward he was prone to fits of rage and profanity. The oft-quoted refrain from his friends is that he was “no longer Gage.”

Cases such as his arguably fed into John Watson’s development of the behaviorist school of psychology in the early twentieth century, most popularly understood through the work of B. F. Skinner and his book Walden Two. And behavioral psychology, with its emphasis on observable phenomena, still exerts power over our modern philosophical approach to consciousness, even if research into the genome is recontextualizing the discussion. Craig Venter, a primary architect of genomic sequencing, said in 2001 that “In everyday language the talk is about a gene for this and a gene for that. We are now finding that that is rarely so. The number of genes that work in that way can almost be counted on your fingers, because we are just not hard-wired in that way.” Interestingly, he is working on developing designer microbes to combat our oil addiction; is this the beginning of nanotech wetware?

Microbe electron photo by Scott Willis (flickr) creative commons

Earlier today I was working on an assignment for a class of mine, trying to encode several web pages that will, at the end of the last session, go live on the web. I lost all my files somehow and had to reconstruct what I could, while trying to re-locate the pool of images I had initially discovered to supplement my topic. I searched through Flickr’s library of Creative Commons-licensed photos, scanning mercilessly for inspiring and beautiful images. At the end of a half hour I think I had perused over 3,000 photos, and I was struck by both my pace and my faculty for recognition. It’s just unbelievable to me how effective our discriminatory abilities are, how abstract and correlative. How is it that we can know so quickly what appeals to us, or what meets any particular need at a particular moment?

So much of what we call Web 2.0 uses cooperatively communicative tools for discrimination. We are rapidly developing unique languages for linking machines, humans and the natural world into electronic ecosystems that are often self-sufficient and collectively interpreted. It’s an amazing irony that we are returning to hieroglyphic, or rather logographic, linguistic roots with a dedicated fervor, reimagining language for the benefit of communication across normal linguistic divides. This is evident both in the programming languages that comprise the design of Internet forums and in the graphic symbols designed for travelers and for efficiency in communication.

International signage

Both gypsy and hobo communities have made extensive use of logograms, and even Olympic committees have tried to bridge cultural divides through ideograms. (Let’s try to forget the horrible 2012 design from England…) Advertising at its root is most effective in establishing branded identities which can achieve the iconic status of a letter or character; the favicon is rapidly becoming a digital fingerprint essential to the establishment of one’s Internet identity. Is recognition of these signs any different neurally than reading in our native tongue?

Language has been accused of engendering psychosis, most notably by the English psychiatrist Tim Crow. Language makes use of both hemispheres of the brain, though it is theorized that the left hemisphere is more fundamental; where the right hemisphere is responsible for reading words as visual forms, the left hemisphere contextualizes meaning rather than just appearance. Crow feels that psychosis (and specifically schizophrenia), which is associated with high levels of neurotransmission in the right hemisphere, is the price of learning language. “Given that psychosis is universal, affecting all human populations to approximately the same degree, and that it is biologically disadvantageous, there must be some reason why it has persisted.” (Regan, 109) Who knows? It’s likely that only the continued development of highly specialized pharmaceuticals will answer his hypothesis.

Crow’s idea is quite possibly a causation fallacy, but I am not qualified to hazard a guess. Memory is central to our consciousness, and language makes use of so many aspects of memory. Michel Foucault has stated that “Language is the first and last structure of madness, its constituent form; on language are based all the cycles in which madness articulates its nature.” What does this mean for the intensified rate of language extinctions brought on by population growth, complex economic interdependency and radical exploration for resources? Or are we developing new forms of communication so quickly that it will counterbalance any loss of classical alphabets and character sets? Perhaps memory will be externalized as we develop new skills for increasingly abstracted electronic environments. “…if our memories are to remain, then some physical change must occur–memory cannot be imprinted on molecules since molecules are constantly rejuvenated by the body at different rates.” (Regan, 80)

The brain is remarkably flexible when it comes to long term memory. Working memory, which is incorrectly believed to be “short-term memory” but is more akin to a mental sketchpad (to use Alan Baddeley’s term), uses the prefrontal cortex. But for longer term memory the architecture of the brain is quite individuated depending on different needs; it is believed that the cortex is the final home for information and memory, and the hippocampus is intimately involved with processing what will become long term memories. Long term memories may require dendritic growth and the formation of new synapses. Interestingly, “learning a foreign language in adulthood employs a brain area that is distinct from that used in establishing one’s first language.” (Regan, 83)

In many ways the Internet, through for example eBay, Craigslist and Google AdSense, is returning graphic communication to its accountancy roots; along the long tail, each of us is a nested market of one, with the power to shoulder the vendor’s yoke as well. As levels of interconnectivity flourish online, and many millions of text messages are transmitted daily, we are no closer to understanding the relationship of our digital media to the cultivation of memory. The Internet, for all its novelty, is possibly recreating the elemental genesis that inspired language in the first place, but what we gain in immediacy may be lost in the flurry of information.

In the end I am left wondering, what is the signal-to-noise ratio for language today?

Books cited:
Robinson, Andrew. The Story of Writing. Thames & Hudson, 2nd edition, 2001.
Regan, Ciaran. Intoxicating Minds: How Drugs Work. Columbia University Press, 2001.

The sound of one blogosphere cataloged

July 2, 2007

A few days ago I came across this nice resource for digital librarians that I thought I would share with you. It’s called Liszen, a library blog search engine maintained by Garrett Hungerford, a current student at Wayne State University. Liszen is part of the larger Library Zen network, and it is powered by WordPress’ K2 template, which I need to look into now that I am trying to update the look of my blog without falling prey to a monthly-fee hosting site. (And by the way, if anyone out there can recommend a near-free hosting site that looks kindly on broke graduate students, please drop me a line. I’d really appreciate it.)

He currently lists over 700 library-related blogs, and there are a promising number of foreign-language blogs listed in his master wiki blog list. The list is quite interesting to browse, just to see the amazing diversity of focuses: from the technical (SciTech Library Question) to the industry-specific (The Industrial Librarian, a corporate aerospace blogger whose June 16th post is a list of other Library Technician blogs!) to the personal (Bibliotherapy for Obsessive/Compulsive Readers), and of course many Young Adult librarian resources (for example, there’s Gemini Moon, featuring YA book reviews).

It isn’t hard to submit a blog yourself, and he has a call for others to help him out. I did submit Robotic Librarian several days ago but I haven’t heard back or been listed yet. He suggests waiting 7 days, so I will keep you posted on how up to date the system is. In the meantime, check out those other blogs for ideas, resources, and just good old networking opportunities. You won’t be disappointed.

(Warning: a word from the Mithering Office)

I can’t end this though without griping about the use of “zen” in the project name. I am uncomfortable with widely supported misappropriations of Zen Buddhism as a catch-all term for wisdom, serenity, enlightenment and any other new-agey sentiment promoted in Western culture. Remember the Hindu deities on footwear? And I don’t think that panties that ask WWJD or have a Christian fish on them are too respectful either. Suddenly I feel like an out-of-touch fuddy-duddy — I think I just expect more of people whose métier is information. Ok, mithering over + done.

Hindu dogwear

Post #4: Data Positioning System

June 23, 2007

chkrres, uploaded to Flickr on Dec. 13 2006 by stallio 

It’s amazing how complicated it can be to locate data.  Almost as complicated is the spectrum of forms which data can take.  As a bookseller, I was often asked about helping someone find a green or a blue book, usually accompanied by a gesture meant to show just how big the book might be.  Or how about the book they heard about on NPR, maybe it starts with a P or a T, perhaps it was profiled three or four days ago, or maybe it was an Oprah book…”Do you have it?”

Figuring out what someone wants is an art, and there should be awards given out to those who are good at it.  At library school, we have classes dedicated to this art, and retailers are always interested in sending staff to seminars to perfect the art as well.  The problem is, most people aren’t really sure what they want in the first place.  “Where do you want to eat?”  “Oh, I dunno, what are you in the mood for?”  …and on and on.  But we love our modern tools of selection, and there’s a reason people respond to “you may also like…”

I can’t tell yet how accurate my observation is, but I feel that as we become more accustomed to abstract self-correcting Internet data searches, we become less focused in real-world physical environments. We rely less on our memory since it’s so easy to look it up again, or save it to the flash drive, bookmark it, blog it, tumblr it… In a way it’s our teachers’ fears about calculators writ large in our collective memory. Where’s Guy Montag when you need him?

Content management systems continually evolve and improve, however, and with some elegant programming we can create remarkably flexible environments for locating digital data.  But what about physical artefacts, like books?  The more ways we have of ticking off our selections for areas of interest, and for sharing them, the less we will need to remember.  There are many businesses out there that will happily remember for us. 

One option for those of us in the physical world will be vertically integrated Radio Frequency Identification (RFID) technology. At BookExpo there was a Dutch company called BGN that held a seminar for industry professionals singing the virtues of RFID for both information consumers and stewards. RFID would allow a book to be located in a building no matter how grievously mis-shelved it is, which would be a boon for booksellers and librarians alike.
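The find-the-misshelved-book scenario boils down to a simple lookup. The tag IDs and shelf names below are invented, and a real reader reports raw signal data rather than tidy lists, but the principle holds:

```python
# Hypothetical sketch: each shelf-mounted reader reports the RFID tags
# it can currently see; finding a book means asking which shelf last
# reported its tag. All identifiers here are invented for illustration.
shelf_scans = {
    "shelf-QA76": ["tag-0012", "tag-0047", "tag-0101"],
    "shelf-PS35": ["tag-0200", "tag-0031"],   # tag-0031 is misshelved here
}
catalog = {"tag-0031": "The Story of Writing"}

def locate(tag_id, shelf_scans):
    """Return the shelf that last reported a given RFID tag, or None."""
    for shelf, tags in shelf_scans.items():
        if tag_id in tags:
            return shelf
    return None

print(catalog["tag-0031"], "is on", locate("tag-0031", shelf_scans))
```

The catalog says where the book *should* be; the scan says where it *is*, and the mismatch is exactly what makes the technology attractive to stacks managers.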

A major stumbling block to implementation is the lack of vertical integration: the willingness of producers, distributors and purchasers to work together toward cost-effective implementation at multiple levels. Tesco stores are starting to use RFID-based “smart shelves,” Wal-Mart uses similar technology called the Electronic Product Code (EPC) to track shipments, and KSW-Microtec is developing RFID tags that can be sewn into clothing. There’s even a website devoted to RFID investing, with a cornucopia of links to almost every arm of the technology. The pro-industry RFID Journal gives a good snapshot of the state of the technology. None of this, however, is meant to be a glowing endorsement.

A real concern is privacy.  If RFID can be used to pinpoint an item on the shelf, what prevents it from being followed anywhere else?  This article by David Molnar and David Wagner highlights the concerns in a library environment fairly comprehensively, but the issue is much larger than that.  As mentioned in this Glenn Bischoff article on the Mobile Radio Technology website, “RFID tags are ‘remotely and secretly readable,’ a vulnerability that becomes more troubling when the public isn’t aware its personal information might be at risk, said Melissa Ngo, director of the Electronic Privacy Information Center’s Identification and Surveillance Project.”  Lee Tien, an Electronic Frontier Foundation attorney, “described RFID as a ‘very insecure’ technology that’s likely to pass along sensitive information.”

Alarmist, perhaps, and not necessarily in respect to the current limitations of the technology.  My intuition is, though, that the more freedoms we give up outright the harder it is to ask for them back at a later date.  As with every technology, every law, and every compromise between convenience and responsibility, there is a tipping point.  Where do we draw the line?

Surveillance is Fun, uploaded to Flickr on Sept 16, 2006 by eecue

Diving for Perls

June 16, 2007

In order to keep things honest here, I would like to post something with substance behind the observations and gripes. After reading a fair number of blogs about tech services, programming and libraries, I’ve noticed several common threads that could be addressed with a focus on basics. I would like to offer readers of Robotic Librarian quality in exchange for their time. Lifelong learning for us as well.

From The Medieval Countryside of Herefordshire

Ideas are valuable when they can be put into practice. A continual theme of these posts is the call for an active role in encoding library services, and to that end we need to know the basics. For example, a common complaint about the ALA’s digital face is that the website is unappealing, and even archaic. As Norma says in a response to a post on Free Range Librarian, “I was never a member, but look in from time to time. I loved the smaller, deeper professional library organizations, but ALA seemed so out of touch. Still I was surprised by the ugly website comment. It seems to be a library afflication [sic]. The poorest websites with clunky, chunky links seem to be run by libraries. Doesn’t speak well for the profession.”

Some suggest outsourcing, or bringing in professional web designers. Perhaps that is the answer, but I would prefer to see us accelerate our own learning on the matter. A good beginning would be to start with some basic, accepted frameworks and develop from there. Or perhaps start with designer-supplied open-source code that is already dressed up a bit. Best of all would be to learn CSS, Java, XML, Perl and Ruby on Rails and do it ourselves. Google, one of the most sophisticated of all net aggregators, often uses remarkably simple XML programming to achieve its aims. That very simplicity is a hallmark of new collaborative web environments. Just look at the O’Reilly Web 2.0 article from the last post for an in-depth look at it. For samples of working code that anyone can use, check out The Code Project, a free online source-code repository boasting 4,222,461 members and growing.
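As a small taste of how approachable XML handling can be, here is a sketch using Python’s standard library on an invented RSS-style snippet (the feed content is made up for illustration):

```python
# Parsing a minimal RSS-style feed with nothing but the standard library.
# The feed itself is invented; real feeds add namespaces and metadata,
# but the traversal pattern is the same.
import xml.etree.ElementTree as ET

feed = """<rss><channel>
  <item><title>New OPAC launched</title></item>
  <item><title>Summer reading list</title></item>
</channel></rss>"""

root = ET.fromstring(feed)
titles = [item.findtext("title") for item in root.iter("item")]
print(titles)
```

A dozen lines to go from raw markup to usable data is the kind of low barrier that makes self-taught library coding realistic.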

To get more specific, there are wonderful resources for libraries as well. Check out the code4lib blog to hear about advances in the tech side of library service. By becoming a member you are able to get advice, code-development help, and emotional support every time your code seems to creatively reimagine your data. If you want to connect with other library systems, and see exactly what they are up to today, you can go to the Library Weblogs index, which hosts feeds from Antigua, Egypt, Belarus, Kuwait, Singapore, and…well, just check it out. Amazing, isn’t it? If you’re interested, send in an email and you can get your library-related blog feed posted to the list.

On a more personal note, you can subscribe to Blisspix, Fiona Bradley’s Sydney based blog about “Open access, technology and social futures.” Sometimes a bit technical, but a fine review of ongoing questions and problems with a practical focus.

As I come across problems while learning coding I will try to post suggestions, difficulties and pleas for help along the way. Hope this little guide helps even one of you on your way as well.

“And yet relation appears,

A small relation expanding like the shade

Of a cloud on sand, a shape on the side of a hill.”

(Wallace Stevens, “Connoisseur of Chaos”)

Objekt: Web 2.0

June 15, 2007
Web 1.0 –> Web 2.0
DoubleClick –> Google AdSense
Ofoto –> Flickr
Akamai –> BitTorrent
mp3.com –> Napster
Britannica Online –> Wikipedia
personal websites –> blogging
evite –> upcoming.org and EVDB
domain name speculation –> search engine optimization
page views –> cost per click
screen scraping –> web services
publishing –> participation
content management systems –> wikis
directories (taxonomy) –> tagging (“folksonomy”)
stickiness –> syndication

(Quoted wholesale from O’Reilly article, read on)

In my last post I referred to O’Reilly Media, a highly influential publisher of programming texts that feature the same spare cover design: bold, clear text on a white background and some sort of realistically drawn animal. I remember one with a bank safe as well, but either way they’re recognizable from twenty paces, easy.

The company is headed by Tim O’Reilly, who I hope won’t mind if I liberally borrow from and simultaneously plug in this post. He has posted an article that I think should be required reading for all people who traffic in information, in any form, using electronic technologies. The article is called What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software. You can read it in Chinese, French, German, Italian, Japanese, Korean and Spanish. (The design differences are fascinating, though I think the French lucked out. Why is English the stodgiest?)

Inspired by the achingly audible pop of the dot-com bubble, his group and MediaLive International held a conference on Web 2.0 to figure out what happened and why. Contrary to many popular reports, they felt that “far from having ‘crashed’, the web was more important than ever, with exciting new applications and sites popping up with surprising regularity.”

I cannot recommend the article highly enough. The tools for library service optimization are there, and I hope that by next year I will be working with fellow librarians in translucent, individually reactive modular info-techopolies which compile lightweight multiple-platform open sourced global databases while trawling the stacks on our nuclear-propelled web-bots.

Illustration by Charles Schridde, found on Paleo-Future blog, 1963 post.

Or, maybe just providing good service to a diverse array of contented, info savvy patrons.

“Let’s close, therefore, by summarizing what we believe to be the core competencies of Web 2.0 companies:

  • Services, not packaged software, with cost-effective scalability
  • Control over unique, hard-to-recreate data sources that get richer as more people use them
  • Trusting users as co-developers
  • Harnessing collective intelligence
  • Leveraging the long tail through customer self-service
  • Software above the level of a single device
  • Lightweight user interfaces, development models, AND business models”

The next Web 2.0 conference will be held from October 17-19, by the way, in San Francisco. I’ve been to about 38 or 39 states, but never California yet…