You get used to certain things: the route you take to get to work, or what you do on a Monday night, which you’ve somehow earmarked for laundry or pasta but never a movie or roast chicken.
Now, it’s not that you don’t mix things up a little – one Saturday you head into a gallery maybe or decide to check out some new restaurant at the other end of town, and sometimes by heading off the beaten path you find yourself taking pottery classes one summer, although by the time winter sets in again the pottery wheel is off in the corner and you’re folding laundry on a Monday night again.
Our lives online can be similar – wake up, water the Farmville crops, poke some people, check out the BBC News site, and then follow a bunch of random links people sent you in e-mail. Mobile computers changed the dynamic a little, but in a lot of cases all they did was let us check out those funny links from our Blackberry or play a bunch of mini games on our iPhone.
I mean…take a poll at my office, which is a fairly representative sample of ages and interests: there aren’t many people with Twitter feeds, and most of them forget that I forced them to set up RSS readers so they could keep on top of client-specific topics.
The Web is a super highway, it’s a city, it’s a mall, it’s a, well, Web – but our path through it tends to be reasonably well worn. The difference is that when we need something SPECIFIC it’s a lot easier to find than running to the library, and we don’t need to spend Saturday afternoons catching up with old friends by phone: we can check out their photos on Facebook or Flickr and have a pretty good idea that the birthday party we missed last weekend was the drunken spectacle we expected.
First there was information. And then the Web took a detour into transactions, abiding by the conventional wisdom that if you track porn, the future will follow: and what’s porn without someone paying for it, right? And what’s Amazon if it isn’t a system built on the, um, back of (or at least the lessons learned from) online transactional sales of porn? (On that note, run a Google search for “future of porn” and see what you find.)
But then the Web evolved past mere information to allow content to be dynamic and widgetized.
Instead of a static page which only the author could update, content became fluid and database-driven: you could easily add or edit a Web page (or let others have edit rights as well), write a blog post, comment, upload a video, or share your vacation photos. And in doing all of those things, you could also mark those additions with an identity – your name, say, or a link back to your MySpace page or other ‘identity marker’.
With the ability to participate in a more granular Web, and to append that participation with some indication of who you are, the idea was that all of this sharing and appending (in forms as small as a status update) would let us connect with each other more easily, removing the barriers of geography and ostensibly allowing us to achieve great things together: an entire encyclopedia arising because we all chipped in; maybe we’d solve poverty or achieve world peace next.
But as Jaron Lanier pointed out, somewhere along the line in the development of technology, decisions are made by someone, somewhere, and these decisions become immortalized in code. And then someone comes along and adds more code to that original piece of code. And before you know it, that original decision is encumbered with so many systems on top of it that it becomes nearly impossible to retrofit. (Lanier has an incredible story of how the display of fonts was handled during the early days of the Web – a must-read chapter demonstrating this principle.)
Now, I’m not an expert on the history of technology, but it seems to me that the decisions that were made as the Web became more dynamic and ’social’ were errors of omission, perhaps, more than inclusion.
Those omissions were not agnostic – what gets decided by someone coding a new Web site somewhere is influenced by the tools she uses, and the language she codes in – and if you follow the trail back, there’s someone, somewhere who is influenced by the “common consensus” which, it turns out, is often driven by a very small group of technologists and venture capitalists who influence the culture in which innovation happens.
(And by the way, I’m not immune from bias – I have no idea, for example, what the innovation culture looks like in Japan or Eastern Europe or Singapore, so I tend to look at these issues through a fairly narrow lens, albeit one that seems to cover off much of what passes for the drivers of the very global Internet, which increasingly encompasses mobile, games, the Web, and pervasive computing).
And so, while the Web became more granular, widgetized and social, the cultural context in which that future evolved placed a certain set of values on development, based on one particular vision for the future in which granularity, reach and the ability to edit would lead to greater communal wisdom and value. This broad cultural context in which technology was developed, in which venture money was spent, influenced the people who actually wrote the code or set the standards.
The Cultural Context of Now
This cultural context was given voice in things like the Cluetrain Manifesto:
“A powerful global conversation has begun. Through the Internet, people are discovering and inventing new ways to share relevant knowledge with blinding speed. As a direct result, markets are getting smarter—and getting smarter faster than most companies.”
Or, earlier still, the Whole Earth Catalog:
The Whole Earth Catalog functions as an evaluation and access device. With it, the user should know better what is worth getting and where and how to do the getting. An item is listed in the Catalog if it is deemed:
1. Useful as a tool,
2. Relevant to independent education
3. High quality or low cost
4. Easily available by mail.
Catalog listings are continually revised according to the experience and suggestions of Catalog users and staff.
These cultural drivers helped to shape the ’social/widgetized’ Web. But it turned out that what was forgotten increasingly became as important as what was built in.
Because what happened was this: yes, we did become smarter. We became able to compare airline quotes, quickly check Wikipedia for a reference or definition, start to crowdsource drug development, set up micro-loans to the developing world, and Tweet (or follow the Twitter feeds) from Iran. But what also happened was that, for the most part, people weren’t all that interested in sharing relevant knowledge, they didn’t care about independent education, and they didn’t necessarily become smarter than most companies (they simply became more opinionated, faster).
What most people spent their time doing was watching funny cat videos on YouTube and sharing gossip with their friends.
What We Forgot
Now, I’m playing armchair pundit here for the history of the Web, and while I ramble on, I’m going to miss stuff and have trouble explaining how subtle these points are. I’m very big on ambiguity and I try to avoid big proclamations – there’s always more than one side to a story, and that’s just as true of something as complex and chaotic as the Internet.
But it seems to me that you could make a few simplistic and, sure, debatable claims for where we ended up.
The Web is a Tool for Connection, not Creation
The Internet was a tool. As a tool, its purpose was to connect people to content. But as a tool, it was left to others to figure out the content part of the equation.
The Web wasn’t developed to facilitate the development of content itself, it was developed to facilitate its sharing. As the Internet grew in importance, the development of content itself became increasingly subservient to the fact that one of the major means for the dissemination of content was the Web. This creates an uneasy alliance: the Web needs content and yet its legacy is in the sharing of content rather than facilitation of its creation.
Over time, this uneasy alliance has led to work-arounds: I remember getting my first Homestead account, for example, which at least allowed me to prep text and graphics for display without needing to learn HTML coding. But content development itself isn’t the basis for the architecture of the Web.
Content is Not Semantic
The Web was built on ones and zeros. But bytes were aggregated into human-readable form. Human-readable form is, however, messy stuff.
When the Internet was small, that was OK. The Web wasn’t built to facilitate the development of content, simply its sharing. But there was a short-cut taken as a result: the Web, not being the “place” in which content was developed, did a poor job of understanding the meaning of what it was displaying. Meaning was instead embedded in the sharing and the connecting of that data to people. The meaning would be derived by the author or the viewer, not by the technology doing the displaying.
The Semantic Web is the attempt to retrofit the technology to the very messy reality of human-readable forms: you can’t change language, so what you need is an interpreter. Language isn’t binary: one word does not equate to one meaning, and so the Semantic Web is an attempt to place a construct around what a word means in a single context.
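A toy sketch can make the “construct around what a word means” idea concrete. The Semantic Web’s underlying RDF model represents knowledge as subject–predicate–object triples, so one human word can map to several distinct machine concepts. The identifiers below are invented for illustration, not real ontology entries:

```python
# A toy triple store: (subject, predicate, object) statements, the basic
# unit of the Semantic Web's RDF data model. Identifiers are made up.
triples = [
    ("ex:jaguar_animal", "rdf:type", "ex:BigCat"),
    ("ex:jaguar_car", "rdf:type", "ex:Automobile"),
    ("ex:jaguar_animal", "rdfs:label", "jaguar"),
    ("ex:jaguar_car", "rdfs:label", "jaguar"),
]

def meanings_of(label):
    """One human word resolves to several machine concepts; the triples
    let software tell them apart where raw text cannot."""
    subjects = [s for (s, p, o) in triples if p == "rdfs:label" and o == label]
    return [o for (s, p, o) in triples if p == "rdf:type" and s in subjects]

print(meanings_of("jaguar"))  # the cat and the car: two concepts, one word
```

The point of the exercise: the word “jaguar” is ambiguous, but the concepts behind it aren’t, and the triples are the interpreter sitting between the two.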
There is a Lack of Content Provenance
The principle seems to be this: get information out there, connect people to it, let them append it, and the truth will rise. The core principle is that the precursor to wisdom is availability. If information isn’t shared, it isn’t known, and if information isn’t known, we can’t evaluate its relevance or validity.
But missing from this equation is the fact that most information has some sort of provenance. Even pure data has provenance. Facts and figures can often only be evaluated based on who generated them and how.
The Web was not built to transport data or information along with provenance. The relevance of information would be sorted out by the fact that its availability would be more widespread.
There is Only One Intent
The Web was built on the premise that it is a tool to facilitate connecting people to content. Content put on the Web would have, therefore, one intent: to be shared.
Anything that happened after was a work-around. Creative Commons is a work-around: you can express your intent using a widely-accepted system of licenses, but the only way in which you can express that intent is through tagging.
Your intent for a piece of content is not, in other words, embedded in or carried WITH that content. The legacy of the Web, and perhaps the reason the Web is even here to begin with, is that ubiquity and access were more important than provenance and intent.
This is Not a Commercial System
E-commerce was a hack, really: the Web wasn’t built to facilitate commerce, it was built to connect people to stuff, but monetary transactions about that stuff weren’t part of the original equation.
It took the better part of a decade to start to sort out how to attach transactions to that content, and yet it’s still a messy system. Fraud, trust, global systems, currency conversions and other challenges still exist because the Web didn’t tackle those challenges at the beginning.
Now, I’m not sure the Internet would even exist if it had. What’s more important is to recognize that there was a legacy, and as a result of that legacy things like micro-transactions (charging a half penny to read a news article, say) are nearly impossible. Your credit card is as close as you get to authenticating your value to someone selling something, and your trust in them is primarily transacted through ephemeral means and their ability to accept your credit card in the first place.
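The micro-transaction problem is easy to see with back-of-envelope numbers. Assuming card-network fees of roughly $0.30 plus 2.9% per transaction (illustrative figures; real rates vary by processor), the fee on a half-penny article dwarfs the article itself:

```python
def card_fee(price, fixed=0.30, percent=0.029):
    """Illustrative card-processing fee: a fixed charge plus a percentage.
    The defaults mimic commonly quoted processor rates; real rates vary."""
    return fixed + price * percent

half_penny = 0.005
fee = card_fee(half_penny)
print(f"Fee on a half-penny article: ${fee:.4f}")          # ~$0.3001
print(f"Fee is roughly {fee / half_penny:.0f}x the price")  # ~60x
```

Any payment system with a fixed per-transaction cost makes sub-penny pricing a non-starter, which is why micro-payments need a different settlement model entirely.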
“We are as gods and might as well get good at it. So far, remotely done power and glory – as via government, big business, formal education, church – has succeeded to the point where gross defects obscure actual gains. In response to this dilemma and to these gains a realm of intimate, personal power is developing – power of the individual to conduct his own education, find his own inspiration, shape his own environment, and share his adventure with whoever is interested. Tools that aid this process are sought and promoted by the Whole Earth Catalog.”
Cults, Tribes and Walled Gardens
We’re adaptable. The Web is what it is, and we find ways around its limitations. It probably wouldn’t exist if it had been set up as a big shopping mall, and the concepts of ubiquity and access aren’t, on their own, untenable notions.
But we’ve done work-arounds, we’ve hacked the system, and the unexpected ways in which we’ve used the tool which is the Internet have given rise to value in unexpected places.
The lack of provenance and semantic mapping, and the emphasis on transmission and ubiquity rather than context or meaning, gave rise to several larger trends which we often see as separate things, when in fact they’re really not much more than hacks to fill a few gaps.
Many of these larger trends are based on the fact that the Web was built to transmit information but had no built-in sorting or evaluation technology.
Google was a hack, based on the simple idea that since the Web was built to connect information to people (but not to otherwise sort or value content, its provenance or intent), if you could somehow monitor the connections that people made to information, you could start to sort out what people thought was important. The second hack, and the thing that made Google what it is today, was that it added a commercial transaction layer to granular content.
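That first hack can be sketched in a few lines. The idea behind PageRank (radically simplified here; Google’s production version is far more involved) is that a page’s importance flows from the pages that link to it:

```python
# A toy PageRank: importance flows along links, iterated toward a fixed point.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal importance
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outbound in links.items():
            if outbound:
                share = damping * rank[page] / len(outbound)
                for target in outbound:
                    new[target] += share
            else:
                # a dead-end page spreads its rank evenly everywhere
                for p in pages:
                    new[p] += damping * rank[page] / n
        rank = new
    return rank

# "a" is linked to by both "b" and "c", so it ends up most important
ranks = pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]})
```

Notice what the algorithm never asks: who wrote the page, why, or what it means. It only counts connections, which is exactly the point being made here.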
Tribes and cults of personality are another hack. Google gets us part of the way there by helping us to sort through the reams of information and get it to us faster. But it doesn’t solve the broader problem of context. Google can page-rank information, but it can’t give us much insight into WHY something is important. It can tell us that one particular article on global warming has been linked to by lots of people, and as a result it does a pretty good job of returning first the results that have very little slant to them.
But we need a ‘slant’. We need someone to put a lens up to a piece of content and tell us what it means within a broader context, values system, or philosophy. We carry around our individual lenses, or models, and we want someone to compare and contrast information against our own model.
As a result, you see the emergence of tribes and cults. And I don’t use those terms in a particularly denigrating way: they can be very useful hacks.
A tribe allows us to aggregate a group of people around a common lens or worldview in order to sort through meaning more quickly. A cult allows us to aggregate around an external articulation of a particular meaning. Tribes allow us to sort through meaning, cults deliver that meaning for us.
Apple is a cult, and Steve Jobs its messiah. We don’t need to sort through meaning, Apple will do it for us. If you subscribe to the Apple cult, you’re subscribing to a certain world view on the meaning of technology, its place in our lives, the way in which content is sold and delivered, and the elegance of the experience.
Facebook facilitates tribes. (I half suspect that Mark Zuckerberg wishes it was also a cult, but I’m not seeing it.)
I mean, it might be nice to move from an invention to that invention sort of being just there, but the reality is there’s nothing wrong with that phase of cults, tribes, evangelists and broken dreams – that’s how an invention sort of gets its legs, how it finds its place in our culture and, eventually, comes to just exist and be used without the encumbrances of pundits and trade secrets and messiahs on the mount.
And while the digital age has sped up the time frames, while we can move more quickly past the cult of the new or of the personality, we’re not usually talking years here, we’re still talking periods that run for decades.
None of which denies that we need to remember the cults that came before, or be suspicious of the tribe – there are false idols and promises being made with no merit (or, for that matter, with no business model), so we’re right to regard them with at least a heavy dose of cynicism, and to watch out at the edges where the hacks are being done – the work-arounds and added layers.
Walled gardens, too, are all hacks, really: ways of building systems, communities and sites for sharing that accommodate, in one way or another, decisions that were once made and the technology legacy of those decisions.
Amazon is a walled garden in order to accommodate the lack of an embedded transaction model. Facebook is a walled garden originally meant to accommodate the lack of clear identity systems that connected to community-relevant content.
A Model Future
In the world before the Web, information and knowledge tended to be compartmentalized and hidden. In the world after, information was more readily available, and the benefits of this far outweighed the fact that the information wasn’t semantic, or didn’t have clear provenance, or couldn’t be connected to the author’s intent.
But I can’t help seeing a logical fallacy in much of the discourse on what the Web, say, or technology more generally, means.
The logic often seems to go like this:
- The Web (which is, after all, simply an agnostic tool) has shown that information can be and will be freely transmitted.
- Information which is freely transmitted, brings people together with that information in order to make sense of it, to sort it, and to extract wisdom.
- Wisdom is a common good. Therefore, the lessons we can take from this particular tool are de facto values which we should accept as a broader cultural good.
The fallacy lies in believing, first, that the tool itself is agnostic. And second, in presuming that the tool itself is the source of how to form our common ethic, rather than simply one input from which we can derive insight for arriving at those decisions.
Tools allow us to create models. Models are smaller working versions of reality that allow us to make decisions about what kind of future we want to have or what kind of beliefs we are willing to hold.
There are all kinds of ways to create models. But just because we create a model, doesn’t mean that we’ve replicated reality – merely that we’ve created one version of it, a test case if you will for how we decide as individuals (or collectively) what our history tells us or how our future will unfold.
Walled Gardens as Alternative Models
Second Life is a walled garden. So is Blue Mars, the Xbox, the now-defunct Metaplace, and a hundred other systems. The fact that OpenSim is open source doesn’t make it any less of a walled garden, either: at this point in the history of the Web, there are no open systems when it comes to virtuality.
The Internet was built to connect people to information, and then connected them to each other. Any system which layers different modalities on top of that is, in some way, a hack, a walled garden, because it imposes a specific interpretive stance on the questions of identity, content, provenance, context and form, either in the absence of those things being determined by the larger infrastructure of the Web, or in response to it: if you don’t like the way the Web handles identity (which it doesn’t really, except maybe at the IP address level), then build your own. If the Web doesn’t ‘recognize’ 3D content, then build a system that does, like Papervision or an Unreal plug-in.
Now, these things change over time, and they’ll change on the Web too: 3D content will soon be built into the infrastructure of what the Web is, and the display of that content will no longer be a hack.
The Web connects people to content using a particular stance, which is embedded in its infrastructure, and it will soon take a stance on 3D content as well, just as it will take a stance on how video should be handled and standardized as human-readable media embedded in what the Web does, as compared to the hacks which it facilitates.
Now, sometimes these walled gardens seem, well, particularly walled. Sometimes they run on the bandwidth of the Web, its connectivity, but use very little else. Sometimes they use the conventions of Web pages and HTML but build little mini walls inside, like a user registration system or a database for managing video and its tagging and sorting.
Each of these walled gardens represents an alternate model, or at least an appending of that larger model which the Web represents.
A Second Vision for What the Online World Provides
So aside from being a walled garden – something more common and ubiquitous than most people imagine – Second Life also represents a model, because it has taken a stance on which things about the Web it will adopt and which ones it will discard in favor of its own: the fact that this particular version of being connected is in three dimensions being the most obvious of those decisions.
But against the backdrop of what the Internet is and what it isn’t, I can’t help thinking that there are certain things about Second Life which remind me that how the Internet turned out isn’t necessarily the only way it needed to be. And I can’t help imagining a life online in which some of the things we take as ‘givens’ aren’t the only way it needed to be:
- Content creation can be the basis for a digital domain. The tools for developing content do not need to be separate from the domain in which that content is displayed.
- Content provenance, at least in some limited form, can be embedded into the system. Content and its creator do not need to be separated – you can build a system in which you can connect the creative product with the person who made it.
- Similarly, you can also display intent. Copy/modify/transfer, a way of signaling intent, demonstrates that you don’t need to stop at tagging, like Creative Commons does, with that tagging somewhat disconnected (or, let’s say, connected only within the starting context in which it resides) from the actual object being tagged.
- You can embed commercial and transactional value into the above as well. This value can include micro-payments, and economic systems can arise from this embedding of commercial and transactional value which, in some ways, improve upon our real-world economic systems: overcoming to some degree the barriers of geographic trade and currency, and the preference for larger transactions, in favor of something more granular, global and partially decoupled from the State.
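A rough sketch of what “provenance and intent travel with the object” might look like as a data structure. The field names here are my own invention for illustration, echoing Second Life’s copy/modify/transfer permissions rather than reproducing Linden Lab’s actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VirtualObject:
    """A content object that carries its provenance (creator) and its intent
    (permission flags) with it, rather than having them tagged on externally.
    Illustrative only; not Linden Lab's real object model."""
    name: str
    creator: str            # provenance: immutable, travels with the object
    can_copy: bool = False
    can_modify: bool = False
    can_transfer: bool = False

def give(obj, recipient):
    """Intent is honoured at transfer time by the system itself,
    not by a license tag the recipient may or may not read."""
    if not obj.can_transfer:
        raise PermissionError(f"{obj.creator} did not grant transfer rights")
    return recipient, obj

vase = VirtualObject("Raku vase", creator="Aleja", can_copy=True)
# give(vase, "Bob") raises PermissionError: no transfer permission was granted.
```

The contrast with Creative Commons tagging is the point: here the creator’s choices are enforced by the platform wherever the object goes, instead of riding alongside the content as advisory metadata.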
Similarly, Second Life represents certain decisions about identity, governance, and how to commercialize things like hosting and services, and more broadly about how we can use these interlinked systems to derive personal value (including monetary value), create communities, connect to content, and share our experiences.
Now, I’m going to run a little counter to what feels like a larger meme: that with Mark Kingdon gone from Linden Lab and Philip Rosedale returning, that there’s a chance that Second Life (and perhaps, by extension, virtual worlds) can have some sort of renaissance, can return to its glory days, and somehow make its way out of being a small little walled garden and be what it set out to be: an alternative ‘place’ so compelling and useful that, well, even your mother will want to join in.
And I say that first because I actually think that Mark did a lot of good. And it will be interesting to see over the coming months whether Philip can get the wind in the sails again and how he does it: because in large measure, if he accomplishes that then what he’ll really be doing is properly finishing stuff that started under M’s watch.
I’m partly of the belief that the list was right, but the order was wrong: that in coming in to manage Linden Lab’s transition from, well, a Lab to something more akin to a service provider or software company, Kingdon set some priorities and then the sequence of those priorities.
The error of sequencing wasn’t entirely the result of things under his control: larger trends in technology, the global economic meltdown and deciding how to respond, and a misread of the competitive landscape (Google’s Lively comes to mind), amongst other factors were some of the things he needed to juggle as he sorted out what to do next.
But the net result was that the dependencies built into his sequencing put an incredible burden on one thing: a new Viewer. Get that wrong, and none of the rest of it would matter.
Because the new Viewer was the spring board for a whole bunch of other things:
- The integration of Web-based content into the world, through Grid-wide access to Second Life Shared Media
- The integration of Second Life with more open systems for things like groups and identity. The side panel on the new viewer would allow context-specific help and search, as well as the pulling in of Web-based content, and was clearly a precursor to ways in which they might handle social connections, groups and, ugh, search.
- A widgetized viewer: M often talked about how eventually the viewer could open up something like an iPhone marketplace. Imagine being able to choose what widgets you want to have in your viewer. No more HUDs, with the ability to “run” things in-world with plug-and-play tools embedded directly into the viewer, downloaded from an extended Second Life Marketplace.
- Increased numbers of users attracted to Second Life because of an improved usability experience.
- With the above in place, a new content ecosystem and the launch of mesh imports. With content creators now able to import mesh, create widgets for the viewer, and maximize use of Shared Media, there would be a whole new range of content that could be created and sold.
- And finally, with the new ecosystem, and the ‘pushing’ of some of the content to the Web (the Marketplace, group and event notice systems, etc.) you’d be able to push THAT content out to the wider Web.
In preparation for all of that, M took steps to improve stability, localize content, plug problems with content theft as best he could, and set up new tools for community and enterprise.
But it was all dependent on the Viewer being the spring board to phase two, and it was all based on the assumption that it was the interface that was the problem.
Even something like Second Life Enterprise was highly dependent on getting the viewer right. I frankly think they launched SLE in a sort of half-hearted way: they went into it believing that maybe it wasn’t their top priority, but that once businesses saw what came out of Shared Media and new forms of widgets and applications, they could just port all of that stuff over and it would sort of sell itself.
SLE is what it is. The price point is fine, but it’s missing the rest of the value proposition: integration with new approaches to managing the flow of content, ideas and technology, much of it dependent on where the Viewer was going to take us.
If all eyes shift to this idea of getting Second Life in the browser (including at the expense of tying up loose ends first), then my personal belief is that they’ll be betting on the wrong thing. They’ll have bought into the larger conceptual model that drives the Web (or, at least, drove it) – namely, that ubiquity and access trump content and context – and will have forgotten that Second Life is a model for an alternative future, one which might not be widely shared, but which will always have a home for some people, for some tribes, no matter how small or walled-in that world might be.
Now, this isn’t to say that under M they were on the right track either. I’ve written at length about what I felt was missing over the past few years:
- An over-arching vision. And in the context of everything above, that vision would probably need to include: the recognition that Second Life represents a distinct and separate model for the creation and sharing of content than the wider Web; an articulation of how Second Life will take its place within the broader ecosystem of digital/social/Web-based/broadcast and other media; and a sense of how these beliefs can lead to a better future, whether for an enterprise, a community, an individual or society.
- Effective design thinking. Following from the vision, design thinking looks to project a future which doesn’t exist yet, and which probably can’t be extrapolated from past data, and to give the results of that thinking a tangible form.
- The tangible forms of design thinking can include a marketing and communications strategy and visuals, a technical road map, and visible tools, projects or programs which intuitively let users (and potential users) understand what something means and where it’s headed.
If Philip does nothing else, his main goals should be to articulate strategy and vision and to tie up some immediate loose ends.
And then, he should live up to the “interim” in his interim-CEO title and bring in someone who can lead a team that understands true design thinking, which may or may not include bringing Second Life to the browser.
Philip’s role, unlike what he did when M came on board (which seemed to consist of hanging out in hotel lobbies and setting up his own JIRA), needs to be to maintain the Cult of Philip, to articulate and repeat and repeat again the vision.
And that vision, whether he stumbled upon it by accident or not (I often feel as though Philip has a very uneasy relationship to the idea of commerce, IP protection and ‘closed systems’, perhaps because he’s so close to the Cult which is Mitch Kapor or the Tribe which is Silicon Valley) is of an alternative set of systems and beliefs for how digital spaces could turn out.
He started that work, but there’s an opportunity to finish it.
For the last few years, the main focus of the Lab has been partly technical (some improved stability) but mostly focused on “affordances”, which is a fancy term for “what does the system do and how do people access it”.
The theory of affordances is one of the underpinning philosophies of much of interface design. In one definition of affordances, objects and environments have latent and objectively measured “action possibilities”. An affordance is therefore your relationship to a thing and its possibilities.
If there’s a thing that has no ‘action possibility’ you can’t act on it. Therefore it’s not an affordance. An affordance needs both a possibility for action and someone to act ON it.
The Lab has been ruthlessly focused on “affordances”, which ends up being described to you and me as “improving the user interface”.
In user interface design (which, I believe, the Lab mistook for “design thinking”) the idea is that while objects and environments may have action possibilities, “bad” design is when those possibilities aren’t seen, are unclear, or are cumbersome. Affordances need to be visible, clear, and should properly signal their latent possibilities.
But here’s the problem: if Second Life is, in part, both different from and a response to the wider affordances of the Web, then doesn’t it also hold the possibility of invisible affordances?
What if you’re a Web designer, for example, and the affordances you know were grounded in the culture, decisions, environments and values noted above? Can you translate those affordances to a different type of space?
What if it turned out that you’re actually creating a whole new set of affordances? What if you’re not just trying to better design those affordances but to also create an entirely new grammar?
Because I can’t help thinking that as Linden Lab struggles to reconcile where it’s headed and to articulate its place in our broader lives, both online and off, the deeper value is in creating a grammar – in making visible things which haven’t been easily accommodated by technology to date.
The Visible Imagination
So we take a well-worn path through the Web just as we take the same bus or road to work, and just as we do laundry on a Monday night instead of renting a movie.
We take it as a given that the Web is the way it is and that everything you’ve ever read or believed is just, well, true. We’ve always gone to work this way, why would we think there’s another route?
But Google and all the other hacks and walled gardens have shown us that nothing is a given: that if we look for the weak spots in larger ecosystems there’s an opportunity to create value (and there’s an accompanying opportunity to destroy, but that’s another topic). Where we’re at is that while the Web has shown us one model for the future, it’s not the only one, and it has a few weak spots.
And one of those weak spots is this: because of the way in which it handles content – its creation, its provenance, and the mechanisms for its sharing (and selling) – the Web has not been particularly adept at certain forms and affordances. Chief among them is capturing and embodying that which is often invisible: inspiration, stories, imagination, and personal exploration.
Sure, it connects us to all sorts of different forms of content and information, and it even connects us to each other. But the Web does a lousy job, on its own, of making our imagination visible.
We can share ideas, but the forms in which we can share those ideas are limited.
We can assume different identities and wander around forums or post comments to blogs under those personas, but the tools with which we express identity are limited.
We can shape our own context around content – adding a comment to a YouTube video or embedding it in our blog – but the texture and range within which we can create context is limited.
A user-generated virtual world will NOT be the last step in how we articulate, through technology, that which has been previously invisible to the Google algorithm or the Facebook wall post: the deep context, emotion, serendipity and ideas that we generate not because we FOUND content or connected, but because of what we experienced once we did.
As we tag physical geography with a vacation photo or video, as we add emotional back story to a real world event by Tweeting our experience – all of those things we do as we hack and extend the tools we’ve got push us in the direction of trying to make the tools fit our needs, and our very human needs include exploring ambiguity, exploring ourselves, and telling stories of our journeys in as rich a way as we can.
Second Life has shown that there are alternate ways in which technology can support our understanding of ourselves and the world, not because its affordances are easy, not because the information can be captured in an algorithm, and not because there’s even particularly many people there at one time: but rather because through the almost accidental combination of our ability to create content within the same domain in which it’s presented, and to do so in a space shared with others, we’ve discovered that we can rez our imagination, which in a world that seems to relish making it difficult to do, is still the most amazing affordance of all.