I’m not a geek. I don’t understand agile development, other than I think the word sounds cool and who wouldn’t want to be agile? My response to the term ‘scaling’ is typically along the lines of “well, can’t you just buy more machines or something?”. OK, so now that I’ve established my geek credentials, I’m going to share a bunch of coding links because maybe someone will comment on whether this stuff is relevant or not.
Scaling Virtual Worlds
Jim Waldo posts on his experiences with Sun Microsystems’ Project Darkstar and how to address the issues of scaling virtual worlds. His main point seems to be that scaling virtual worlds isn’t like scaling other systems - the same rules don’t apply. He writes:
“I knew the rules. I knew that throughput was the real test of scaling. I knew that data had to be kept consistent and durable, and that relational databases are the way to ensure atomicity, and that loss of information is never an option. I knew that clients were getting thinner as the layers of servers increased, and that the best client would be one that contained the least amount of state and allowed the important computations to go on inside the computing cloud. I knew that support for legacy code is vital to the adoption of any new technology, and that most legacy code has yet to be written.
But two years ago my world changed. I was asked to take on the technical architect position on Project Darkstar, a distributed infrastructure targeted to the massive-multiplayer online-game and virtual-world market. In the process, I have been introduced to a different world of computing, with different problems, different assumptions, and a different environment. At times I feel like an anthropologist who has discovered a new civilization. I’m still learning about the culture and practice of games, and it is a different world.”
Jim then outlines how virtual world technology evolved and why, breaking down the role of the client and server, and pointing out that latency is the enemy of virtual worlds. He makes the interesting comment that “Peer-to-peer technologies might seem a natural fit for the first role of the game server (player interaction), but this second role (state control) means that few if any games or worlds trust their peers enough to avoid the server component.”
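Just to get my non-geek head around what that means in practice, here’s a toy sketch (mine, not Darkstar’s - all the names like World and MAX_SPEED are invented) of the server-authoritative pattern Jim is describing, where clients can only request state changes and the server gets the final say:

```python
# Toy sketch of a server-authoritative world: clients *request* state
# changes, and the server validates them before they become real.
# All names (World, MAX_SPEED, etc.) are invented for illustration.

MAX_SPEED = 5.0  # furthest an avatar may plausibly move in one tick


class World:
    def __init__(self):
        self.positions = {}  # player id -> (x, y)

    def handle_move(self, player, new_pos):
        """Apply a client's move request only if it is plausible."""
        old = self.positions.get(player, (0.0, 0.0))
        dist = ((new_pos[0] - old[0]) ** 2 + (new_pos[1] - old[1]) ** 2) ** 0.5
        if dist > MAX_SPEED:
            return old  # reject: the client claimed an impossible jump
        self.positions[player] = new_pos  # accept: server state updated
        return new_pos


world = World()
print(world.handle_move("alice", (3.0, 4.0)))    # accepted: (3.0, 4.0)
print(world.handle_move("alice", (90.0, 90.0)))  # rejected: stays put
```

In a pure peer-to-peer design there is no equivalent of that veto in handle_move - every peer would have to be trusted to police itself - which, if I read Jim right, is exactly why the server component survives.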
Which makes me wonder whether Croquet has limited use cases.
The article summarizes current approaches and gives an interesting review of the importance of new chip sets to virtual world development:
“With the possible exception of the highest end of scientific computing, no other kind of software has ridden the advances of Moore’s law as aggressively as game or virtual-world programs. As chips have gotten faster, games and virtual worlds have become more realistic, more complex, and more immersive. Serious gameplayers invest in the very best equipment that they can obtain, and then use techniques such as overclocking to push even more performance out of those systems.”
The rest of the article - the solutions part, where he describes how these challenges were tackled through Project Darkstar - is where I start to get a little lost, but it makes fascinating reading. Jim concludes:
“Seen in a broader light, the project has been and continues to be an interesting experiment in building levels of abstraction for the world of multithreaded, distributed systems. The problems we are tackling are not new. Large Web-serving farms have many of the same problems with highly variable demand. Scientific grids have similar problems of scaling over multiple machines. Search grids have similar issues in dealing with large-scale environments solving embarrassingly, but not completely, parallel problems.
What makes online games and virtual worlds interestingly different are the very different requirements they bring to the table compared with these other domains. The interactive, low-latency environment is very different from grids, Web services, or search. The growth from the entertainment industry makes the engineering disciplines far different from those others, as well. Solving these problems in this new environment is challenging, and adds to our general knowledge of how to write software on the emerging class of multithreaded, multicore, distributed systems.”
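As far as I can decipher the solutions part, the abstraction Darkstar offers is that game logic gets written as lots of short tasks triggered by events, while the infrastructure worries about the threads and the machines. Darkstar itself is Java and vastly more sophisticated; this is just my toy Python sketch of the general shape of the idea, with invented names throughout:

```python
# Toy sketch of the "short event-driven tasks" idea (all names invented;
# Darkstar's real API is Java and far richer than this).
from queue import Queue

handlers = {}  # event name -> handler function


def task(event):
    """Register a small unit of game logic to run when `event` arrives."""
    def register(fn):
        handlers[event] = fn
        return fn
    return register


@task("chat")
def on_chat(state, data):
    # Each handler is deliberately tiny: in Darkstar such a task would be
    # scheduled transactionally, so it must finish quickly.
    state.setdefault("log", []).append(data)


def run(events, state):
    # Stand-in for the infrastructure, which in the real system would
    # spread these tasks across threads and machines.
    while not events.empty():
        name, data = events.get()
        handlers[name](state, data)


events, state = Queue(), {}
events.put(("chat", "hello darkstar"))
run(events, state)
print(state)  # {'log': ['hello darkstar']}
```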
Scrum Methodology at realXtend
realXtend used “scrum methodology” for its development, and Jani Pirkola gives a snapshot of some of the lessons learned. Now, don’t ask me what scrumming is, exactly - to me, it sounds a lot like how I work every day: kind of making it up as we go along, with a lot of sprints rather than long slogs.
In any case, the article concludes that:
* Scrum works well for high-risk projects with limited visibility
* Scrum does not fit content-oriented work well
* Virtual worlds allow Scrum teams to work in a multi-site setting
* A specific Scrum application built on top of realXtend would dramatically increase efficiency
* Virtual world applications need to be integrated with old-fashioned applications (like the spreadsheet)
I’m particularly taken with the last point. I’m still waiting for decent demonstrations of integration with “old-fashioned applications” - I mean, sure, there’s the occasional embedded PowerPoint, or Lotus Sametime integration, things like that. But I’m convinced that virtual worlds need to make some sort of conceptual leap that we haven’t seen yet when it comes to application integration.
Ogre Particle Integration
In related realXtend news, the platform now lets you upload OGRE particle effects:
“realXtend viewer allows you to upload OGRE particle effects. Common uses for particle effects are e.g. fire and smoke. Some ready made particle effects come with the viewer. You can find them from the “example_assets” folder from the viewer installation.”
Picture: The Rex Files
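For the curious, an OGRE particle effect is just a small text script. Below is a minimal smoke-ish example in the OGRE 1.x script format; the template and material names are my own placeholders, not one of the scripts shipped in the “example_assets” folder:

```
// Minimal OGRE 1.x particle script (names are placeholders)
particle_system Example/SimpleSmoke
{
    // texture used for each particle billboard
    material        Examples/Smoke
    particle_width  4
    particle_height 4
    // maximum number of live particles at once
    quota           200

    emitter Point
    {
        // particles per second, drifting upward
        emission_rate 20
        direction     0 1 0
        velocity      2
        // seconds each particle lives
        time_to_live  4
        colour        0.8 0.8 0.8
    }

    // gentle sideways breeze
    affector LinearForce
    {
        force_vector      1 0 0
        force_application add
    }
}
```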
Security and Privacy in Virtual Worlds
OK, maybe this one isn’t so geeky, it’s more wonky. The European Network and Information Security Agency (ENISA) published one of those white paper things looking at “Security and Privacy in Massively-Multiplayer Online Games and Social and Corporate Virtual Worlds”, and it makes for, well, lengthy but insightful reading. It’s a good reference to have on hand, and obviously many of these policy and security issues have relevance to development.
Great article - it echoes some of the architecture doubts I have about OpenSim suddenly scaling well.
Re integration with flat apps (like spreadsheets etc.): this is something I’ve argued for very strongly, and found many virtual world devs less than interested in. Unfortunately, when you think about the real world, flat apps are our dominant information medium.
Remember the amazing fancy 3D haptic-glove interface Tom Cruise used in Minority Report? Though the whole thing was 3D and augmented reality, the dominant information unit was flat stuff - pics, movies and text. By contrast, most virtual worlds are the communication/information equivalent of sitting on your hands around the fireside having a chat - nothing flat is easy to bring in.
I work for a living in 3D, so I should be very high on the 3D-use scale, but breaking down my work day, I spend twice as much time in flat apps as I do in 3D applications. Bring those flat apps in-world and I’ll start using them (or collaborating with them) there… but until that starts to happen… meh.
I think bringing flat apps into the 3D world is necessary, but it is even more important (or at least more exciting…) to make those 2D apps into 3D. Think about word processing: we could have blocks of text that we rearrange collaboratively.
Or PowerPoint: instead of flipping through 2D slides, we could fly from place to place.
Ditching 2D does not work either - we have 2D texts, signs and a lot more in RL too!
Croquet has lost all credibility for me:
http://secondthoughts.typepad.com/second_thoughts/2009/01/who-is-pixeleen-mistralreally.html
What is so magical about peer-to-peer, anyway? It might work when you have a set of things that never change in themselves - say, a million music MP3s that people upload, i.e. the collection grows one by one, but the items in that collection are not dynamically changing. Even so, you have to have multiple servers, and at a certain point time becomes an issue: the relaying of orders to use other servers in order to keep serving up the copy of the tune begins to thin out and weaken as too many millions access the system.
But imagine if each file is constantly being edited, added to, renamed, put in short-term storage, put in long-term storage, linked with other things — in short, all the dynamism of a user-generated world, like Second Life. Then it seems to me peer-to-peer becomes a nightmare.
It’s one thing when all Lombardi and McCahill have to move around are the rabbits and blocks and pictures and it’s just them and a bunch of professors, but it’s quite another when the number of objects is in the millions and billions, and they are constantly being linked and changed dynamically.
The central asset server model only sounds brittle and unscalable if you never conceive of just adding another grid and starting over with a new asset server. What links these grids together, then? Culture, law, universality - and perhaps IMs or thin clients like SLim. When the technology to link up the central asset servers of the world gets better, you will link them, but you aren’t in a particular rush about it. If you can’t establish the authenticity and trustworthiness of a grid, you don’t hook it up to your asset server.
Who says the metaverse has to consist of uniform pieces of Swiss cheese with wormholes to port through and identical consistency and penetrability throughout? Why can’t the metaverse be just like the real world - contiguous but not permeable, because countries have borders, security, laws, and you need a passport, foreign currency, etc.?
We already have a way to collaborate on texts: it’s called "Track Changes" in Microsoft Word and "your email". People don’t need the instantaneous, collective monkeying with a text that you imagine. Writing and thinking ideas is a solitary art, and collaborating is a sequencing art. It doesn’t *have* to be simultaneous. Having asynchronous editing permissions would be *good enough*.
At the UN, in some meetings now, projections of Word are put up on the wall so that the countries can edit a text together. But they don’t all jump on it at once and start "rearranging blocks of text". They ask for the floor; they follow Robert’s Rules of Order.
You guys are always trying to lay the pipe to the Metaverse without thinking about the social and political rules of order required to make it coherent.
@Pavig - One of my major areas of interest is exactly what you’re highlighting. I still don’t think we’ve found one of those “aha!” applications that really shows how 2D content can be converted to 3D space. The closest, I think, is Photosynth.
Prok - I’m really no expert on all that; I was just highlighting what was an interesting article which, I believe, came down on the side of central asset servers, though I may be mistaken. The article does note that peer-to-peer creates difficulties with managing “state”. What I think we’re starting to see with the Hypergrid stuff on OpenSim is a sort of distributed asset model, where each grid has a central asset server but those assets can be transferred from grid to grid. The fact they acknowledge that the trust issues around this haven’t been addressed has me, well, baffled. It would seem to me that you’d start with trust and worry about how assets are distributed after, but I’m preaching to the choir in pointing that out to you.
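To show what I mean by starting with trust, here’s a toy sketch only - the grid names and the permission model are invented, and this is not how Hypergrid actually works under the hood. The point is simply that an inter-grid asset transfer would be gated on a trust list before anything else:

```python
# Toy sketch: an inter-grid asset transfer gated on trust first.
# Grid names and the permission model are invented for illustration;
# this is not OpenSim/Hypergrid code.

TRUSTED_GRIDS = {"osgrid.example", "university-grid.example"}


def transfer_asset(asset, from_grid, inventory):
    """Accept an incoming asset only if the sending grid is trusted."""
    if from_grid not in TRUSTED_GRIDS:
        raise PermissionError(f"untrusted grid: {from_grid}")
    inventory.append(asset)  # trust established, so take the asset


inventory = []
transfer_asset({"name": "chair"}, "osgrid.example", inventory)
print(inventory)  # [{'name': 'chair'}]

try:
    transfer_asset({"name": "lamp"}, "rogue-grid.example", inventory)
except PermissionError as err:
    print(err)  # untrusted grid: rogue-grid.example
```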