Deep Thoughts

The Other Side of the Story: The Web and What’s Next

Social media is a con.

A lot of things are cons, actually: shell games, pyramid schemes, sweatshops, and false governments.

The idea of open systems can often be a con. The idea that people will contribute freely to open source projects without clear and thoughtful governance, transparency, and clarity of economic gain or motivation is often a con (but not always).

Closed systems can also be a con. The people who run closed systems keep control – and the rules you agreed to when you signed up can change without notice.

I’ve proposed before that the biggest challenge to open systems isn’t that they’re open (or even that they’re ‘democratic’) but rather that the measurement of economic gain is invisible. Invisible economies, in which outside actors can’t parse who stands to gain from decisions, lead to a lack of transparency and therefore of trust.

Closed systems, in contrast, gain on the same point. The fact that they are usually proprietary and ‘corporate’ means that they are usually run by actors whose profit motivation is clear. You may not like the price they charge, but at least you know why they’re motivated to charge it. (Why someone contributes freely to an open source project can sometimes be inferred, but is often obscure.)

On the other hand, closed systems present a challenge because governance and decision-making are often walled off from the people who are affected by those decisions.

Now, the simplest solution, I suppose, is to have neither: software is a tool that you purchase, a virtual world is a piece of software that you rent or buy, content creators don’t enter open or closed ecosystems, they simply exchange value in an open market unfettered by closed actors with their governance whims, and not threatened by the invisible economic hands pulling the threads of open systems and code.

But the tug between open and closed systems, between hierarchy and bottom-up decision-making, between “platforms” and tools will continue. The tension between these things is where economic value is created: the iPod doesn’t exist because there were a bunch of closed systems that Steve Jobs wanted to compete with, it exists because there were open systems, and songs were being relayed around the Net, and while others were trying to compete with Napster he had the audacity to ask the question: “what if the conventional wisdom is wrong, and you can actually lock down and charge money for all of this ‘everything is free’ digital content that everyone is going on about?”

Where Do You Want to Play Today?
You make decisions.

A client asks us to develop a phone application. There are several ways to do this. You can program it natively for one platform – say, the iPhone. You can program it so that it’s, well, sort of native, but the core logic can be recoded for several platforms. Or you can create a Web-based app, which looks like an application but is actually more like a mini Web site embedded in one.

You make other decisions about platforms: you go into Second Life because even though it’s a “closed system” it has a governance structure and economy, it has a massive marketplace and terabytes of content, it has a community, and you generally think that its development is ahead of or keeping pace with its competition.

Or, you use OpenSim (or Wonderland, or Croquet or whatever) and you decide the trade-offs are worth it. You decide that the pros outweigh the cons: the ability to copy and save content, to fiddle with the server code yourself, and to save money because you can find someone to host it at a fraction of the cost of Second Life. It may still be buggy in places, or it may not have access to as much content, but that’s OK, because that’s not why you’re there.

Neither system is inherently ‘evil’ or wrong – there’s nothing wrong with using open source software, and the benefits are, quite often, significant. But the skill sets you’ll need are different than participating in a closed platform, and those skill sets often include being able to interpret a development community which, in some cases, is an exercise in interpreting obscurity.

There’s nothing wrong with closed systems either. Twitter is a closed system. And if you invest everything you have into Twitter you’re making a choice that its technology, mind share, and vision are aligned with your own. But when Twitter starts GOM’ing your business, or introducing new protocols without warning, you can’t say you didn’t know that might happen – it’s a closed system, and you’re putting yourself in the hands of a government whose leaders seemingly have the right to change their minds.

Show Me Your Social Media
The con of social media is the idea that it’s a strategy, a movement, or anything with coherence beyond a big ad purchase.

The tag line has become short-hand for a cluster of systems, open and closed platforms, beliefs, and types of content.

The convenience of social media is that it gives a name to what is, otherwise, mass confusion: mass confusion for the developers, the users and the advertisers that want to reach them.

Social media is a sort of quietly whispered promise and it goes something like this: “You know all of that new stuff that popped up on the Web the past 10 years? All those blogs and YouTube channels, all of those real-time Twitter streams and Facebook groups? Don’t worry about it, friend… because really, it’s just one big conversation, and it’s a form of media. You understand what a conversation is, don’t you? And you certainly know what media is. Good. Now, pay me some money and I’ll show you how it’s done.”

Social media is a trend that has no home, a meme with no anchor point, a definition that turns around to swallow itself whole.

The problem with social media isn’t that there aren’t a lot of media that contain social elements – there are. The problem is that we have confused ‘social media’ with a strategy for connection.

I mean, let me ask you something: if your mother (or grandmother) asked you to show her what “social media” is, could you? Is it like television or radio, where you see it and it makes sense (however new it is)?

And yet brands become convinced that social media is a new form of, well, of media. They just need to plop some content in, watch it go viral, and bingo, done. One big ad purchase across a million channels and Google is selling 30-second spots.

Transmedia and the Age of Augmented Humanity
So what happens is that Web 2.0, in which the architecture of the Internet allows ‘widgetized’ delivery of content, leads to forms of connection that are more social. Content is no longer static and becomes dynamic – people can append, comment, add, upload and create.

A dynamic Web, in which the content is always shifting, also suddenly shifts the paradigm for how money is made in this open system. Banner ads lose their power because they can no longer anchor themselves to static content.

The fun stuff is happening at the bottom of the page, in the comments, in the Twitter stream, or in the video that someone created in their basement last night and uploaded to YouTube while you were sleeping.

We call this social media, and we try to tell the people with the banner ads that we can still get attention because, well, first, they should be and can be part of the conversation; and second, because we’re able to scrape and parse and tabulate the conversation itself and serve up ever more relevant ads.

This leads to two trends of note: transmedia, which holds the promise that you can “create” conversation by developing it natively for the various systems on which that conversation happens; and the collection and collation of data until behavioral patterns can be detected and, by extension, marketed to.

Both transmedia and data collection (or behavioral targeting) could be considered, at some level, developments or principles that are meant to bind together social media (and by extension its participants).

Transmedia
The expectation of advocates of transmedia is that the attention log-jam can be broken and the diffusion of content can be tamed. The theory is that the problem with getting people’s attention isn’t that there are so many places to look, it’s that once they do look the content is unappealing because it was originally developed for, well, television.

In most cases, you create a movie and when you’re done you do a deal with Ubisoft to create a game title. You create a television show and then you do a spin-off blog, YouTube video channel, and Web site.

Transmedia advocates propose that if you take this approach, you’re bound to fail: you haven’t developed content native to the platform on which it appears. They would say that the equivalent is how producers filmed stage plays during the dawn of television: they were trying to ‘port’ one medium into a new one.

Similarly, creating a “Web presence” based on a television show doesn’t acknowledge the affordances of these new media.

Therefore, a new discipline needs to be established in which content is ‘transmedia’ – it isn’t developed for one thing and spun off into others, rather it’s developed for all of them at once.

Transmedia holds out the promise that the challenge in attracting attention isn’t that the media itself is the issue, it’s that the content just isn’t, well, good enough.

And while I’m a big believer in the power of story, in the power of exceptional content, it strikes me that transmedia is primarily a production methodology, is still based primarily on centralized authorship, and that the potential to model truly new forms of content is still relatively confined.

So long as it restricts itself to thinking of ‘consumers’ as either fans or sources of ‘user-generated content’, it doesn’t represent a paradigm shift, it primarily represents a collection of skills.

The Age of Augmented Humanity
Eric Schmidt, CEO of Google, is looking for a coupling of technology to need. Basically, he’s looking for computers to read minds:

Google is moving to make search faster, more personal, and more automatic, Schmidt said. For instance, as a lover of history, he wants his phone to spout random facts as he walks around Berlin. His phone should understand what he wants to know before he thinks to ask, and what he really means. “When you ask what’s the weather like, what you’re really asking is, ‘Do I wear a raincoat or do I water the plants?’” Schmidt explained.

The next expressions of this theory, Schmidt said, are things like autonomous cars and the growth of real-time telemetry. Google Product Management Director Hugo Barra demonstrated an upcoming feature called “conversation mode” in Google Translate, where a user can interact with someone in a different language by speaking into a mobile phone and having software on the phone itself translate and speak on the fly. “This really is history,” Schmidt said of Barra’s working demo. However, Google won’t be connecting personal information to the real world via facial recognition, which Schmidt said is “just too creepy.”

Monetizing “augmented humanity” will require large existing businesses that depend on the economics of scarcity to change to the “economics of ubiquity,” Schmidt said, where greater distribution means more profits. He cited the (long-expected) successful monetization of YouTube as an example. “Augmented humanity” will introduce lots of “healthy debate” about privacy and sharing personal information, and it will be empowering for everybody, not just the elite, Schmidt said, paying tribute to hot-button issues in Europe where the IFA show was held.

This is the natural extension of the idea that the age of privacy is over, and that we will come to an uneasy alliance in which we feed data into computers on the theory that they will feed out value in return.

The Web finds cohesion through data. While the ‘media’ may be an amorphous collection of applications and sites, the data about a user does not need to be amorphous. The extension of this is that while the content may have relevance and eyeballs, the content does not necessarily need to be compelling in order to incite behavior. Data is everything and content is the coupon that gets you into the store in the first place.

The Web of Intent
The challenge to these drivers of the ‘next’ economy is the same one we face in making decisions about open and closed systems. The tension between the two is where we find opportunity.

Open systems can upend entire industries, and yet I remain convinced that this does not equate to an inexorable march towards a world in which, well, ‘everything is open’.

Just as Kurzweil’s “Singularity” is based on the false premise that all curves are foregone conclusions (not just because curves can divert, but because there are curves we haven’t paid attention to), the tendency of systems to move from closed to open doesn’t mean that all systems will therefore be open.

My personal belief is that if there is a lack of clarity on value and its creation and disbursement, a closed system will naturally be created.

And this is what we see happening, already, with ‘social media’: closed systems, like Facebook or Twitter, have filled the gap where open systems have left an unclear correlation to value creation and exchange.

This shift from open to closed comes with its corollary: closed systems may have clear value exchange (primarily the profit motives of the platform owners), but they also have opaque governance. As much as Google advocates an open Web, it too is a closed system, which can launch and retract products at whim.

For the user, these shifts are resulting in a new trend, which is being termed the “Intention Web” and which was partly alluded to by Schmidt.

Venkatesh Rao of Xerox does a good job of summarizing the concept of the “Web of Intent”:

Social media is not about technology becoming part of human society. It is about humans becoming part of technological society, in a Matrix sense. Power isn’t migrating from the old plutocrats to the new long-tailers as much as it is migrating from humans to technology. Social media isn’t a set of tools to allow humans to communicate with humans. It is a set of embedding mechanisms to allow technologies to use humans to communicate with each other, in an orgy of self-organizing.

Here’s the looming extreme Dystopia: writers hired via Mechanical Turk create content that Demand Media believes will sell, and then we shorten those Demand Media article links using bit.ly and busily pass it around on Twitter. And the long buffer types read the most popular of THOSE articles and bid on new Demand Media writing jobs that are automatically generated based on that popularity. Not to pick on those companies (they are all locally-optimizing in good faith), but where the heck is the actual creative thinking and new value in this madness-of-the-crowds churn? We are faced by a downward spiral into the world of the movie Idiocracy.

The fact that the technology matrix is dumb and entirely lacking in goals and intentions actually makes things worse, not better. We are not being enslaved by Skynet. We are being enslaved by an emergent retard whose behavior is basically a viciously randomized reflection of our own collective manias.

What has Web 2.0 actually done to us?

1. It has unbundled all sorts of content and driven the center of gravity towards the 140 character tweet
2. Appointment Content has started to move to On-Demand Content
3. Fixed publisher-subscriber models have been changed to Twitter/Facebook stochastic diffusion
4. The temporal horizon has changed from past-present-future to just a narrow present
5. We are starting to rely increasingly on analytics, and squeezing out creative intuition
6. Polished content and code has given way to perennial beta
7. Static search based on content-to-content links is starting to get displaced by dynamic search based on live social filtering

There is a solution. I offered the cautiously optimistic argument that technology is just a lever and that there is a powerful “intent” side and a manipulated “passive” side. This post is a refinement of that argument: humans, not technology, are the only truly intentional beings in the picture at the moment. We’re not dealing with Skynet here, but a random, dumb emergent beast.

I’ll define the Web of Intent in a very simple way:

A Web architecture that reduces the number and frequency of decisions you have to take, lets you control when you make those decisions, and prunes the number of options among which you need to choose in a trustworthy way. The overall effect of the Web of Intent will be to allow you to get OFF the Web without suffering an anxiety attack.

As an application layer, the Web of Intent promises to give the user some control back over his or her experiences. It promises to move the algorithm from the centralizing notion of data as an aggregating force to algorithms held by the user. It promises, in other words, to give the remote control to the user and is a sort of TiVo for the entire Web.

But as Rao points out, the Web of Intent isn’t a “big vision”, it’s a roll-up-your-sleeves development effort that’s meant to do some back-fill work:

(The Web of Intent) is also something of a damage control vision: lessons learned in the last 10 years show that our Great Information Overload Hope: filtering and “relevance” technologies, weren’t working well enough to significantly reduce our decision-making and information processing load…At the same time automation of decisions and action was also not really working. Most information still needed human judgment. Outside of a few things like email forwarding rules, we do most information handling manually. Information work is still largely manual labor.

The Web of Intent is a roll-up-your-sleeves, grungy, grease-stained “fix-it” vision. A vision that is about fixing the huge problems created by Web 2.0, which we’ve ignored while being distracted by the huge opportunities. We can’t live in the RASSW! for much longer without going collectively crazy.

A Glass Through Which to View the World
And so, without staring down the root of the problem, the Web of Intent is, instead, another application stack on top of what’s already teetering for lack of a proper paradigm.

As I touched upon in my address at SLCC, there are several larger prisms through which to look at where digital technology is headed and the role we can play in nudging it in the right direction.

I’ll expand upon these more in coming posts but let me start with a few:

The Future of Social Geography
We need to stop thinking of social media and start thinking about the concept of social geography.

Social media is quickly being trumped by a mobile, end-to-end Internet. Our concepts of place are colliding with the architecture of the digital. The layering of data on physical buildings, our immersion in digital space, our social connections and their loose coupling to physical location, and the transience of communities (both online and off) aren’t the sign of a new media, but are the sign of a new geography.

So long as we’re purely grappling with how to reach people using digital media, we’re missing the larger challenge, which is that media is no longer a thing or a channel, but is a place.

The Role of Narrative Architecture
Coupled with the concept of social geography is the concept of narrative architecture. While there are definitions for this from the disciplines of narratology and game studies, my own definition isn’t confined to play or game space as the source of narrative.

Narrative architecture is a way to visualize and codify the interlinking decisions on space, governance, systems, media and economics in order to facilitate the telling and creation of story.

I’d propose that transmedia is the discipline of delivering narrative architecture.

The Concept of Intent
While the Web of Intent promises to make information more meaningful and timely (to read our minds, as Schmidt would say), there is, perhaps, something more profound not in how we interact with technology in order to allow it to understand our intent, but how technology can help to create the conditions for intention.

And this, to me, is the lesson I have drawn from virtual worlds.

Because virtual worlds project a future that isn’t quite like what everyone else says: they show that we can create a technological space in which we are able to represent our intent in ways that acknowledge emotional and not just informational bandwidth; we are inside technology in a way that doesn’t belittle or merely augment our humanity; we are able to find moments not of consumption but of co-creation; and we are able to not merely create new ways to filter and parse data but to create moments of intention.

The tug between open and closed systems, between economic transparency and open governance, between embedded commerce and copy/left, between the filtering mechanism of the user (the avatar) and the scraping mechanisms of the platform or system – all of these tensions have been played out already, and will continue to be played out – but they also left us with some other paradigms that are lost amongst the larger noise about behavioral targeting or transmedia or computers that can read our minds.

Because we’re reminded that in spite of all these larger trends, we still sometimes want a place to call home; we still want to tell a good tale; and in our finer moments we can be still, and centered, and in flow, and do what we set out to do: to express who we really are.


Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.