Identity and Expression, Virtual World Platforms

Washington Post on Virtual World Governance and Policing

The sheriff has come to virtual worlds, according to the Washington Post, which gives an overview of approaches being used to govern and police them, pointing out that “nearly all …users agree to certain policies when signing up. The companies reserve the right to suspend or delete a user’s avatar and seize virtual assets that have been accumulated. Most also allow users to report abusive behavior and provide a tool to let members ignore bothersome avatars.”

Enforcement of policies in virtual worlds ranges from owner-driven to community-led, according to the Post. The article mentions Second Life’s notorious (but retired) Corn Field, and notes that Google’s Lively “prohibits users from spamming others with unwanted messages or displaying racy images. Repeat offenders run the risk of having their Google accounts deleted or, in extreme cases, being reported to real-world authorities.” (Um, and how’s that going?)

In the virtual world of Cellufun (is that a world? Never heard of it), crime rates spiked with the introduction of a jail because, well, users wanted to see what their avatars looked like behind bars (I guess you could call it an opportunity for RP?). CEO Thom Kidrin recommends flagging users who violate platform rules: “I think making someone wear something of a scarlet letter is a good way of doing things,” he said.

Rules and their policing are often established by the communities themselves, the Post notes:

Avatars can create their own societies and carry out realistic activities, such as buying land, building houses and forming social groups. But as the worlds’ populations grow, some have developed more sophisticated legal codes and justice systems to police members’ behavior. Many virtual worlds hope that creating an orderly environment will entice more users — and prevent the need for real-world legal intervention.

But issues of trust and identity make enforcement difficult:

Many virtual universes leave the law in the hands of their users, allowing each world to develop its own moral code. But a lot of bad behavior is tolerated by residents, said Gartner analyst Stephen Prentice. And often, banished users can simply create new avatars.

“The sanctions that can be taken are pretty minor,” he said. “The problem is that the relationship in identity between an avatar and the real person behind it is quite tenuous.”

Following up on my recent post about how, within the child avatar community of Second Life, sim owners have taken to policing their own, banning Residents who abuse-report others for being underage, the article is well timed and hits the highlights: governance and policing are tricky, and one of the major reasons is that the management of identity and trust is tenuous at best.

Mind you, the Post also calls virtual worlds a frontier. They obviously didn’t interview Mitch Kapor for the article.


This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.