May 30, 2011

Watching the Wheels

One of my favorite songs has always been Watching the Wheels by John Lennon. To me, it speaks volumes. I can understand where he’s coming from with the final lyrics, as well as with the original lyrics from when he was still toying around with the song as a demo.




I’m just sittin’ here watching the wheels go round n’ round… I really love to watch them roll.



He talks about people thinking he’s somehow lost his mind, how he couldn’t possibly be happy stepping away from it all and watching (occasionally dabbling). In the original lyrics he talks about how people give him advice about how not to throw his money away and try to save him from (financial) ruin, how he’d be happier if he owned the whole damned world.


But in the end, John Lennon was right.


He was much happier with the way he lived his life, and you can hear it in songs from Across the Universe (which Craig Lyons in SecondLife covers amazingly well) to Strawberry Fields.


Nothing’s gonna change my world.


There’s no problems, only solutions.


I’m just sitting here watching the wheels go round and round. I really love to watch them roll. (Alternatively: I’m just sitting here watching the flowers grow) No longer riding on the merry-go-round. I just had to let it go.


I’m in a similar mind-set when it comes to virtual worlds and research. I don’t really have a stake in the outcome any longer, having stepped off that merry-go-round years ago. Sure, I dabble here and there, and even continue with research (and have my hand in various development from time to time), but not like I used to; not like when I was on the full-time bandwagon with a direct stake in the outcome. Now I simply write the research and participate in order to seed the fundamental ideas into people’s minds, then move on. I already know where the future of virtual worlds will end up, and I’m just sitting here doing time (quite happily, I might add) as I smile and watch the short-sighted banter continue.


A lot of great ideas stemmed from my involvement in the industry over the years, and just as many near-disasters were foretold in the same breath. I see the same things from both sides of the fence now, however. IBM has a natural stake in selling more powerful servers and virtualization technologies, maybe even in facilitating cloud computing, because it feeds a mistaken need for more power in these centralized virtual environment systems. Of course they’d never tell you there is a better and cheaper way to go about it, because that’s against their best interests and bottom line.


The same goes for existing companies already in the virtual environment industry. There is never a better solution than the one they are willing to sell you. This is likely why we’ll not see a standard in place anytime soon – at least not one driven by existing companies like the maker of SecondLife. Linden Lab is responsible for quite a lot of atrocities against its own community, which the serious developers know about but which get swept quietly under the rug. You’d be amazed how many third-party solutions LL quietly destroyed to stay ahead.


However, I’m not pointing this out to bash Linden Lab or any company, really. I’m merely pointing out that they are a company, first and foremost, and are willing to stab whomever it takes to stay in the game and on top. That’s just the nature of the business empire itself… and maybe people like Mark Kingdon (but that’s a private anecdote).


I realized something many years ago, which made me smile and move on.


You start to notice that the record begins to repeat after you listen too long. I already know there isn’t much that is going to change as time marches forward, but I still love to toss in the occasional ideas here and there and see if we’ve reached a point where the people who are in a position to benefit most are ready to listen. Most of the time those ideas will sit around on a shelf for four or five years before somebody suddenly thinks it’s a good idea and runs with it.


Or maybe not. It doesn't really matter in the grand scheme of things. I don’t have a board of directors breathing down my neck to produce results at all costs, or millions of dollars sunk into a technology design that was built haphazardly. I simply don’t have a stake in the game, so I suppose you could say that I have no corporate influencer behind me. That’s always a good thing, except for those who do have corporate influences guiding their decisions.


I’ll make an amendment, though. I do have a stake in the outcome – a 100% stake in it, to be precise. But it’s an outcome that will come about on its own, and no existing company is likely going to make it happen. I just like to speed up the process a little bit, that’s all. I’ll live to see this outcome, and I’m willing to bet it’s not going to have a Linden Lab logo on it, either.


You’d think I’d be mad about the situation, but I continue to smile. The (virtual) world will continue on, regardless, and one day (maybe) it’ll catch up to the big thinkers in the field. Until then, we’re on the merry-go-round.


Except me. I’m watching the wheels go round and round.


I really love to watch them roll.

May 28, 2011

Jack of All Trades (Master of None)


Thanks to @BevanWhitfield for convincing me to blog more frequently. While I may not be entirely coherent at this hour of the morning (8:00 AM is half past the crack of WTF for me), I’ll make my best attempt to ramble on in some semi-intelligent banter while I drink my coffee. I’m not exactly a morning person, per se, but I figured what better time to write a post than over a fresh cup of mocha java, while some random idea is still battling the Stay Puft Marshmallow Man from the prior night’s dreams?


Anyways, it’ll be informal and less on the “intellectual” side with these sorts of posts, or maybe unintentionally intellectual. Probably a little rambling as well, but that’s what mornings are for.


This morning I’ll be writing about technology’s penchant for heading toward this Jack of All Trades scenario, where devices offer a plethora of applications but don’t really do any one thing very well. In my mind, I’m essentially thinking about portable music players, where there really isn’t a device that caters to an audiophile.


I’m sure there are suggestions to purchase an iPod Classic at 160GB and call it a day, but to be honest I’m not happy with the stranglehold that Apple seems to have on that end of the industry. I was checking out the website recently and they have the audacity to have a category for MP3 Players and iPods.




There’s a word for that, right there, but I’m not entirely awake enough to think of it. I will, however, convey that the feeling I got when seeing that before my eyes was roughly translated into a deep need to buy a plane ticket to Cupertino and punt Steve Jobs square in his pretentious crotch.


Wait.. that’s the word!


Ok, moving on.


The point is, I’m an audiophile. Many years of listening to audio (music, binaural, holophonic, digital and analog) have pretty much tuned my discerning ears to the high-end experience. This also means that I have an extensive collection of audio at my disposal (and a 1TB external HD). I’d like to make that 5TB if I could, but there’s a bottleneck in the process: there really exists no portable audio player that caters to audiophiles.


I’d say MP3 player, but to be honest I’d rather listen in uncompressed WAV or FLAC for higher quality, and as we know that requires more storage for the larger file sizes. I currently have an antique MP3 player (a Creative Zen Touch) that contains a 20GB internal HD for storage. I suppose compared to the latest iPod Touch and its measly 8GB of storage (64GB if you are willing to mortgage the house) it’s a blessing. The iPod Classic, at 160GB, is definitely larger than my Creative Zen Touch, but I’m a tech guy…


So what is that supposed to mean?


Well… look… We’re in an age when you can buy a portable 1TB HD with a USB connection for $99.00 at a local Staples, and the best Apple can offer is 160GB in an iPod Classic for $249? Better yet, a 64GB iPod Touch for $399 is highway robbery… I don’t give a rat’s ass what else it can do, because I only want my portable music player to be able to do one thing exceedingly well.


Play music.

iPod Touch: only Apple would downplay the ability to play music on their own MP3 players…



I’m confused. Clearly a 160GB HD doesn’t cost much, and nowhere near enough to justify $249 for an iPod (let alone $399 for less than half the storage). But then, this is why I dislike Apple: the whole hermetically-sealed-shut bullshit with their products, and charging more for them than they are remotely worth, simply because people will pay it. Not to mention offering a product in a different color (white) and acting like it’s an entirely new launch.


To each their own, really. I can’t drink that particular batch of Kool-Aid.


But back to this whole Portable Music Player thing…


As an audiophile, I actually demand more from the products I buy. So here is a certain set of guidelines I’d like to lay out for any company that would like to topple the iPod as the reigning master of music and immediately garner respect from true audio lovers (not to mention their money).


1. 500GB HD as the entry model. 1TB for the Pro version.


This is a no-brainer. People who love music and audio have a lot of it. As in, more than probably the entire neighborhood combined. We love audio. That much. We want to drag a large chunk of our collection over to the device and not have to do it again for at least a month or three. 500GB hard drives are cheap, and for a professional audio player, tacking on an additional $99 for a 1TB model wouldn’t faze buyers. Also offer the ability to install software on the user’s computer (with their consent) that streams their local music collection to the device over the Internet, plus a subscription cloud storage option for those who don’t want to stream from their own computer.


2. Discerning ears require a real audio output. Take your 4-band equalizer and shove it. Give us something like iZotope Ozone 4 built into the device and really let us experience audio. This is the same mastering system I have installed and use through Winamp when listening to audio on my laptop, and I settle for nothing less. Make sure the hardware side of the device can handle full high-definition digital audio output as well.




This is the type of Graphic EQ I want to see on my Portable Audio Player



3. Full color capacitive touch screen. Responsive and elegant. Intuitive GUI.


4. No, we don’t want your software suite installed on our computers to manage our collection.


We’re anal retentive about our music collections. We’ve sorted them just the way we want them. I ran iTunes once, and only once, on my computer, only to have it immediately decide to retag, re-sort, and otherwise bastardize my music collection.


Never again.


If you see Steve Jobs, kick him in his pretentious balls for me. That man cost me a month of time undoing the damage to my collection.


Unless you employ actual audiophiles who are fanatical about their own collections, skip the library-management software and pay your employees to work out how to build the actual audio player instead.


5. Take your app store and fsk off. I want a portable audio player that plays audio immaculately well, not a device that does countless things piss-poorly. If I wanted to play Angry Birds on my MP3 player I’d have… wait, strike that. I don’t give a damn about Angry Birds. I listen to audiobooks, old-time radio dramas, and high-end music audio. If I wanted to play games, surf the Internet, write emails, and video chat, I’d be using my laptop.


I always laugh when I hear the argument that tablet computing is the future because PCs apparently don’t do anything well. Tablets have a future, mind you, but they are a technology that is mediocre at everything. A tablet is not a laptop replacement by any stretch of the imagination. If anything it’s a quick fix for the Internet or whatever app you want to download, but when you want to get anything serious accomplished it’ll sit in the corner and collect dust.


I like to call it “Disposable Computing Experience”.


Which brings me to the next point.


I want a portable music player built for audiophiles, not a tablet computer that does a million things half assed. I don’t care if it plays videos, or my photo album. I sure as hell am not interested in a 2 year service contract with a mobile phone company just to use it either. Have enough common sense to build speedy WiFi into it, but never forget that this device is meant to focus on playing music amazingly well as its first goal.


If you want to throw in Bluetooth, I’m cool with that, but only if it can sync to my computer to wirelessly transfer my audio collection, and double as Bluetooth for wireless headphone use. You may as well build a receiver for it that I can connect to my home audio system, to stream music from the device wirelessly. Include a Bluetooth remote as well, for when the device is synced to the home audio system, or sitting on a table while I wander around with my wireless headphones.


So here’s the plan:


Capacitive touch screen and an intuitive interface. Android 3.0 (Honeycomb), and at least a dual-core processor with plenty of RAM. While we’re at it, toss in an NVIDIA Tegra for the GPU.


I understand that solid-state memory is still in a high-priced, sucky stage of technology, so if you can only manage 64GB onboard, that’s just fine. I still want to see 500GB and 1TB availability for my entire music collection, or at least a large chunk of it. Give me the option to set up software from you (one of the few pieces of software you should offer with this unit) to run in the background and stream my local music collection to my device through the Internet. This way, with a wireless Internet connection, my portable music device is as large as my home computer’s storage. If I happen to have 5TB of storage for my music collection, then my portable music device has that much as well.


The unit should be no larger than an iPod Touch, but you are free to make it thicker to accommodate the features.


Built-in rechargeable battery with at least 20 hours of listening time. The battery must also be replaceable, even if we have to order a replacement from your company.


Izotope Ozone 4 level of graphic equalizer and pre-processing ability for the audio.


Bluetooth ability, obviously.


Price the unit sanely. 64GB of storage is not your cue to charge $400 for an audio player. Much the same way that $899 is not an appropriate price for a tablet computer (I’m looking at you Apple, Verizon and Motorola). Try an entry price of about $250 for the 64GB unit but only if it has the WiFi Sync ability (to stream music from my home computer over the net).


In short, build a portable music player that actually kicks ass for once.

May 25, 2011

Mesh is Dead. Long Live Mesh!

This is going to be one of those posts where I’ve been awake far too long and the Mountain Dew is no longer having its wondrous medicinal effects upon my psyche. Bear with me, though, because this is going to be an interesting post.


Mesh is dead. There, I’ve said it publicly.


Actually, the term is Dead on Arrival, but the problem is that nobody got the memo.


Sure, it’ll make waves and somehow revolutionize the way people create things in SecondLife, but that’s a temporary situation. It’s not really a revolution, but a nod to the past and the static nature of virtual worlds and the content created for them. It is this static approach to things that has caused the biggest headache to developers and virtual worlds alike since the dawn of the pixel. I know right now it doesn’t seem like a problem, or that the complexity issue is just a matter of time before the hardware catches up to render even more data at lightning speed. But in the end the question isn’t whether our hardware will eventually catch up to or surpass our current top-end expectations; it’s whether we’re intelligently building our systems to scale, offering the absolute best we can in less bandwidth and less computation at the same time.


I’m not even going to limit myself to just Mesh either, because that’s just a myopic view of the situation as a whole. No, let’s go full-tilt bozo with this and declare static methodologies defunct altogether.


Filesize does not equal expectation of quality. Just because you manage a 25MB PNG file at 60,000x60,000 resolution does not mean the filesize (or subsequent processing power required to load it) is even remotely justified. I’m going to apply this train of thought also to things such as mesh generation, textures, and every other aspect of environmental modeling, with the exception of audio, because quite frankly audio shouldn’t necessarily be dynamically generated (yet).


Mesh is this static assumption of NURBS, triangles, primitives, etc., wherein the entirety of the object’s complexity is stored. It has a very discernible limitation, a top end for fidelity. What price do we pay for this high-end mesh? We pay for it in file size and CPU/GPU render time. If the file size doesn’t kill you, the render cost and LOD surely will.


The same applies to textures as well. As far as I can recall, the solution for getting higher fidelity for textures in a virtual environment has been to simply create a higher resolution image; Powers of 2 of course (4,8,16,32,64,128,256,512,1024) and there is an artificial limitation again with this approach. I remember when the big buzzword from id Software was that of “MegaTextures” wherein these absolutely gargantuan texture files were used in games to cover the entire game world surface with high-end fidelity. Yes, it did work and the results were really good, however the file sizes were overkill and wasteful, not to mention that by today’s standards they look like crap.


Over the years I’ve sort of sat around and observed the trends in virtual environments, the… process of creation, and similar questions always seem to arise at the table in the planning stages. Do we support things like Collada as a model format? Do we allow JPG, PNG, TIFF and others for textures? Even in IEEE Virtual Worlds Standards Group I hear similar conversations, and it baffles me to no end.


The question shouldn’t be the difference between something like Collada, Maya or Lightwave3D, because quite frankly the answer is none of the above. The same holds true for PNG, JPG, GIF, APNG, MNG, PSD, and TIFF: the answer, ideally, is none of them at all. If anything, these static formats are the exception and not the rule in a truly dynamic and scalable Metaverse, used only as a fallback for legacy compatibility.


The future, assuming it’ll ever arrive, is not in static methodologies and approaches; it is in dynamic methodologies. As far as the 3D mesh itself is concerned, the future is in things like the Generative Modeling Language (GML), where St. Patrick’s Cathedral in all of its 3D glory weighs in under 50KB (18KB if the file is zipped). With generative modeling, we’re talking about a procedural methodology that can just as easily generate more or less detail based on the hardware running it, but do so from the same information regardless. We’re talking about a procedural model of the world, here…


Traditionally, 3D objects and virtual worlds are defined by lists of geometric primitives: cubes and spheres in a CSG tree, NURBS patches, a set of implicit functions, a soup of triangles, or just a cloud of points.


The term 'generative modeling' describes a paradigm change in shape description, the generalization from objects to operations: A shape is described by a sequence of processing steps, rather than just the end result of applying operations. Shape design becomes rule design. This approach is very general and it can be applied to any shape representation that provides a set of generating functions, the 'elementary shape operators'. Its effectiveness has been demonstrated, e.g., in the field of procedural mesh generation, with Euler operators as complete and closed set of generating functions for meshes, operating on the halfedge level.


Generative modeling gains its efficiency through the possibility to create high-level shape operators from low-level shape operators. Any sequence of processing steps can be grouped together to create a new 'combined operator'. It may use elementary operators as well as other combined operators. Concrete values can easily be replaced by parameters, which makes it possible to separate data from operations: The same processing sequence can be applied to different input data sets. The same data can be used to produce different shapes by applying different combined operators from, e.g., a library of domain-dependent modeling operators. This makes it possible to create very complex objects from only a few high-level input parameters, such as for instance a style library.
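To make the quoted idea concrete, here is a toy sketch in plain Python (hypothetical names and a deliberately trivial operator set; real GML is a stack-based, PostScript-like language) showing how a combined operator separates the rule from its parameters:

```python
import math

# Elementary shape operator: revolve a 2D (radius, height) profile around
# the vertical axis, at whatever segment count the hardware can afford.
def revolve(profile, segments):
    verts = []
    for i in range(segments):
        a = 2 * math.pi * i / segments
        for (r, y) in profile:
            verts.append((r * math.cos(a), y, r * math.sin(a)))
    return verts

# Combined operator: "column" groups lower-level steps into a new, reusable
# operator. The same rule, fed different parameters, yields different shapes.
def column(height, radius, segments):
    profile = [(radius, 0.0), (radius, height)]
    return revolve(profile, segments)

# A handful of bytes of input; the output detail is chosen at render time.
low  = column(height=4.0, radius=0.5, segments=8)    # coarse: 16 vertices
high = column(height=4.0, radius=0.5, segments=256)  # fine: 512 vertices
```

The point of the sketch is that the column rule itself is the asset: a few parameters travel over the wire, and the segment count (the fidelity) is chosen by whichever machine does the rendering.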


Algorithmic modeling technologies and methodologies are the future. Instead of worrying if content will run on any system, we can rest assured that (with common sense in the coding department) the algorithmic content is displayed at the fidelity and resolution that the system it is on can comfortably maintain through GPU streaming.



Farbrausch .debris real-time demo: this entire demo is completely generated from 177KB.


If you have a crappy computer, then the content will scale back in fidelity for you to give you the much needed extra FPS. Sure, the virtual environment wouldn’t look as pretty as it would on your neighbor’s monster gaming rig, but you never should have expected it to.


Procedural textures are in this fray as well, with the ability to scale up and down to extremes based on the capability of the computer system running them. Combine GML and procedural textures and we end up with about 100KB worth of data that can theoretically scale upward in fidelity to rival movie-studio quality, from the same data a home computer renders in real time at lower fidelity. What the hell, run it on a wimpy iPad or Android phone while we’re at it, since dynamic systems like these can scale down to meet the hardware without breaking a sweat.
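As a rough illustration of why procedural textures scale this way, here is a toy value-noise function in Python (my own illustrative scheme, not how Allegorithmic Substance actually works): the texture is a function over (u, v) coordinates, so the same handful of parameters can be sampled at any resolution.

```python
import random

def value_noise(u, v, seed=42, cells=8):
    # Deterministic pseudo-random value at an integer lattice point,
    # wrapped with % cells so the texture tiles seamlessly.
    def lattice(ix, iy):
        h = seed * 73856093 + (ix % cells) * 19349663 + (iy % cells) * 83492791
        return random.Random(h).random()
    x, y = u * cells, v * cells
    ix, iy = int(x), int(y)
    fx, fy = x - ix, y - iy
    # Smoothstep-interpolate between the four surrounding lattice values.
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    top = lattice(ix, iy) * (1 - sx) + lattice(ix + 1, iy) * sx
    bot = lattice(ix, iy + 1) * (1 - sx) + lattice(ix + 1, iy + 1) * sx
    return top * (1 - sy) + bot * sy

def render(resolution, seed=42):
    return [[value_noise(x / resolution, y / resolution, seed)
             for x in range(resolution)] for y in range(resolution)]

thumbnail = render(8)   # the same parameters...
detailed = render(64)   # ...at whatever resolution the hardware can afford
```

Nothing about the "asset" (a seed and a cell count) changes between the two renders; only the sampling density does, which is exactly the scaling property being argued for above.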



Allegorithmic Substance: doing more in 2KB than our best 200KB PNG.


Toss in the obligatory shaders, parallax extrusion and Normal Mapping, mix it up with Tessellation routines to prioritize fidelity to range, and we end up with a system that makes anything we have today look like an Atari 2600 game in comparison.


We can do buildings in GML-type languages, procedural textures with something like Allegorithmic Substance, model trees with fractal rule sets, grass using vertex shaders, add depth and detail using tessellation routines, and even add specular and height detail using normal mapping. Hell, I’m even convinced the avatar itself could be generated from a well-constructed GML model, take up under 50KB of file size, and remain fully editable in the process.
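The "fractal rule sets" for trees are the easiest piece to make concrete. Here is a classic L-system sketch (a toy example of my own, not any particular engine's format): one rewrite rule acts as the tree's genetic code, and the generation count is the knob that scales detail to the hardware.

```python
# One rewrite rule is the entire "asset"; everything else is derived.
RULES = {"F": "F[+F]F[-F]F"}  # F: grow a segment; [ ]: branch; + -: turn

def expand(axiom, generations):
    """Apply the rewrite rules repeatedly to grow the branching structure."""
    s = axiom
    for _ in range(generations):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

# A phone might stop at 2 generations; a gaming rig can afford 6.
coarse = expand("F", 2)  # a 61-character turtle program
fine = expand("F", 6)    # tens of thousands of characters, same rule
```

Feed the resulting string to a turtle-graphics interpreter and you get a branching tree; vary the seed angle or rule slightly and you get a whole forest of siblings from the same few bytes.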


Of course I’m also a huge fan of Synergistic Networking processes, which is to say take the dynamic approach a step further and apply it to the actual underlying network architecture to create a hybrid decentralized system with centralized gateways for stability and security.


While we’re at it, we may as well address realistic lighting for trees and foliage.






And for good measure let’s add some realistic grass generated through real-time shaders.





The world is dynamic in nature. The same base genetic code is used for the blueprints, and variations appear through nature or nurture. In this light, we should be designing our virtual worlds in the same manner. The underlying algorithmic data acts as a base, and then variations and influences are introduced to shape the final outcome. The result is breathtaking beauty and variance from the same data, at a fraction of the file size and bandwidth, with fidelity that scales upward to unimaginable heights without recreating the world manually.


Think about this for a moment.


The ability to create something that will only get better as hardware allows for more processing.




Seems like a no-brainer to me, but the future isn’t here yet (apparently), so Long Live Mesh!


I’ll check back in ten years to see if Mesh is still the new buzzword in virtual environments.

May 24, 2011

Empire Avenue State of Mind…



Ok, so thanks to @SkylarSmythe on Twitter, I’m now playing some social media game called Empire Avenue where the point seems to be turning yourself into a tradable commodity for others to purchase and sell.


I’m actually not sure what to think of this game, but it’s interesting nonetheless. You connect your social media to this game and I suppose carry on using your social media as you normally would, as in Twitter, Facebook, Youtube, etc. Like most gamification (Jesse Schell) it has elements of virtual self worth and reward, as well as simulated losses.




Gives new meaning to selling your soul…



One of the prerequisites of adding your blog to the game, however, is that you need at least five players to verify (endorse) your blog, so they can be sure it is written by a real person and not just some spam-bot haven. Seems reasonable… but then there is a secondary prerequisite: when you request your blog be upgraded to blog status and not just an RSS feed, they require that you put a funky verification code at the beginning of your current or next post, so they can determine you actually own the blog you are trying to pimp on the game.


As a matter of course, this is that blog entry.


It seems like a very interesting game, though I’m still uncertain of its overall point. What really got me thinking were the luxury items available to spend your earned game credits on, things like hang gliders and yachts. Of course they don’t come cheaply, because that wouldn’t be luxury – but the last housing option in the game struck me as absurd:


A Castle.


Ok, I can see how a castle would be the ultimate luxury item to attain, but it’s essentially a 64x64 cartoon picture of a castle like it’s a badge. Still not a problem until you realize that Empire Avenue has the audacity to charge 100 *real* dollars for it. Not in-game currency, no… one-hundred real, log into your Paypal account to authorize the absurdity of the transaction you are about to make and press the pay button to be greeted by the sound of laughter and toilets flushing your dignity and common sense away, dollars.




I’d buy that for a dollar… or one hundred.



And yet, there must be quite a lot of people stupid enough to pay $100.00 USD for a 64x64 image with a castle on it to display proudly on their game page. P.T. Barnum was right, there really is a sucker born every minute. If I wanted a castle, I’d log into the XTopia Minecraft server and build myself one in 3D, for free.


If you’re into this sort of intellectual game, then I highly recommend it (and also purchasing stock in A3D). However, unless you’re really pants-on-head stupid, I wouldn’t suggest buying a castle.

May 20, 2011

How Global Windlight *Should* Work


Just when I thought I was finished writing for a bit, Oz Linden had to go and post a comment on the prior entry that really kicked my brain into overdrive. The simple statement that Linden Lab was indeed working on Global Windlight, but only able to use the presets that users have pre-installed, left me stammering.


Yes, Global Windlight settings are something that is highly needed, but there is a reason why the Phoenix viewer can only use what users have pre-installed, and thus has a very limited selection and scope for what that ability allows. Generally speaking, it’s because they control only viewer development, while Linden Lab controls both viewer development and server development.


Why is this important?


It makes all the difference in what you can accomplish when implementing a new feature such as Global Windlight settings. The reason Phoenix can only use the presets the viewer has pre-installed is that they have no authority to make Windlight settings XML files into a deliverable asset attached to regions and parcels. This is the key difference between what Phoenix accomplished and what Linden Lab can accomplish with the same idea.


However, after reading the comment from Oz Linden, this seems not to be the case, because Linden Lab is essentially pursuing a clone of the Phoenix approach. Sure, they’ll make it more accessible than adding a line to the description of the parcel/region, but other than that it’ll work the same way, with the exact same limitations.


That’s just lazy, Linden Lab.


You mean to tell me, and the world, that not a single programmer over there thought to make custom presets at the region and parcel level, saved as an asset to be delivered to users upon entry along with the countless other assets being served, and then apply those downloaded Windlight settings?


I refuse to buy that line. An entirely new asset type was created for Physics layers, and quite a lot of coding must be going into the Direct Delivery system for Marketplace (kudos btw for that).  And then you get to Windlight and the company acts like they are under the same constraints as the 3rd Party Viewers…


It’s essentially saving a notecard to the asset server… albeit a notecard formatted for XML settings, but nonetheless the data involved would be negligible at best, and deliverable in the same manner as a notecard asset, attached to the region/parcel settings by the owner. That same XML asset would also make a great way to trade Windlight settings and make them accessible: define Windlight settings as an asset type and allow them as an item in inventory. Drag and drop into the region panel or parcel settings panel to apply; or, as a normal user of the system, simply double-click the Windlight preset in inventory to apply it.


What about people who don’t want to have Global Windlight settings?


Handle it in the same manner as media.


Windlight – Global: Auto | Verify | None [Dropdown Menu] in Properties –> Graphics.
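A sketch of how that dropdown could resolve on the client side (hypothetical function and asset names; this illustrates the proposal, not actual viewer code):

```python
# Proposed client logic: a Windlight preset delivered as a region/parcel
# asset, filtered through the user's Auto | Verify | None preference.
def resolve_windlight(parcel_asset, user_mode, local_preset, confirm=None):
    if user_mode == "None" or parcel_asset is None:
        return local_preset                  # keep the user's own settings
    if user_mode == "Verify" and confirm is not None and not confirm(parcel_asset):
        return local_preset                  # user declined the parcel's preset
    return parcel_asset                      # Auto, or Verify accepted

# No parcel asset attached: nothing changes for the user.
# An attached preset (say "sunset.xml"): applied automatically under Auto,
# applied after a prompt under Verify, and ignored entirely under None.
```

The asset delivery is the only part that needs server-side work; the preference handling above is the same pattern the viewer already uses for parcel media.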


You have an asset server and an army of programmers with access to every single aspect of the Second Life code. Don’t try to bullshit the community by pretending to be under the same constraints as Phoenix viewer for implementing Global Windlight settings. Implementing Global Windlight settings with the same constraints as Phoenix is plain ass backwards and lazy.


It’s also a lot of wasted time and effort if you ever bother to actually make it into a custom asset later down the road, because then you end up supplanting the old method (local presets only) with the ability to define custom presets and have them delivered as an asset attached to the region/parcel later on – creating a disconnect in methodologies.


Go back to the drawing board, stop being half-assed about it, and do it right the first time.


Is there a Voice of Reason department at Linden Lab? If not, there should be. The sole purpose of that department would be people like myself wandering around and pointing out the blatantly friggin obvious and expecting higher standards.

May 18, 2011

Less Is The New More…

In a previous post, I outlined some of my thoughts on what makes Linden Lab both good and bad, in terms of what they are doing wrong versus (more importantly) what they are doing right. As a result I received a number of comments, online and offline, taking either position, but the most interesting comment was from OpenSource Obscure:


Interesting piece, but I stopped reading at
"Not that I expect any of it to be fixed or implemented, because clearly Linden Lab has more important things on their plate to work on, like adding a Facebook button".


I just don't think things work that way.
I also argue that reiterating this idea (ie, implementing a small specific feature keeps completely different things from being fixed)

is just bad for our understanding of the Second Life platform development.
You could hear this same song recently with regard to XMPP:
"they refused to fix group chat because they had to add avatar physics".
When you frame the debate this way, you can't go much further


I’d like to take this moment to clarify what I meant when I said that while the more important things will likely go unimplemented by Linden Lab, there is short-term thinking in place that gives priority to things like adding a Facebook “Like” button. For those who decided not to read the rest of the prior article, I’d suggest taking the time to read this one in its entirety before commenting. It helps to understand the entire position before inferring what I mean when I say something.


Toward the latter part of the prior article I outlined that many of the innovations inherent in the Second Life viewer are a result not of Linden Lab but of the community through TPVs or in-world innovations and building. Whenever Linden Lab drops the ball, or ignores the users themselves, we see these types of innovations implemented through the TPVs and community when the capability allows. However, there are quite a lot of innovations that were never implemented or were implemented and quickly removed; never to be spoken of again. Those are the sorts of innovations which Linden Lab has access to (like the native Windlight Weather) which they simply chose to forget they even had.


An obvious assertion, that really needs no further explanation, but for the sake of argument I will outline further.


While the management of Linden Lab continues to play musical chairs with its vision and implementation for the company, the direction and focus of the company have taken drastic turns over the past few years which, on the surface, resemble a flavor-of-the-month approach. We’ve seen the focus of Second Life swing wildly away from the community toward commercial aspects under the direction set by Mark Kingdon with the SL Enterprise solutions, and now, with Rod Humble, we see an oversimplification of the interface and approach – the polar opposite.


While it’s true that the new user experience needs to be balanced and friendly, taking away the things which substantially make up Second Life as an experience and dumbing it down to a glorified chat room isn’t a solution; it’s insulting. Taking away the inventory, the ability to use a Linden balance, or the ability to meaningfully interact with the environment in the myriad ways that have become a staple of the total experience seems draconian and misguided.


As I had stated in the prior article, I’m actually a fan of the Viewer 2 experience, but not for the totality of what it’s offering. No, I’m a fan because of the technical achievements it introduced, such as Shared Media, which I had waited years to see implemented ever since the day I downloaded and played with the uBrowser experiment. When I said time and again that we’ll all be using Viewer 2 whether we like it or not, and that we’d better get used to it, I meant this: no matter how god-awfully Linden Lab botches its official viewer and runs in the opposite direction from the wants and needs of the majority who use the system, those deficiencies will be shored up and made much better in the TPVs by the dedicated volunteer coding community.


Shadows, depth of field, better machinima camera settings, and countless other innovations came out of the community through independent efforts, not from the official viewer. Time and again, these types of innovations come from the community first, and only years later might be implemented on the Linden Lab side – not without some amount of back-patting, as if they had just reinvented the wheel. Mesh is a fine example of this process: the ability to upload and use mesh objects has been around for years, and only recently has Linden Lab decided it’s something to implement officially (or at least try).


I can just as quickly cite their own acquisition of Windlight in 2007 from WindwardMark Interactive, whereby even this innovation was only half implemented and called a full feature and milestone. With that acquisition we saw the ability to change the atmosphere settings locally, change the clouds, color the water, and tinker with the lighting, but the most obvious part of the acquisition was put on a back burner and remains forgotten to this day – the ability for region and parcel owners to define settings that change visitors’ atmosphere globally, to further create the intended atmosphere desired by the creator. Instead of global settings being a staple of the official viewer, the feature was relegated to the back burner, to be partially implemented in Phoenix with some degree of success.



Having the technology and actually using it are two different things.



There is also the unspoken, and most often unknown, ability in Windlight that natively allows full-scale weather. It was part of the acquisition, and was implemented hastily and then removed. Instead we see community in-world solutions whereby objects drop thousands of scripted particles from the sky in a vain attempt to recreate this want and need – a feature that is literally built into the Windlight system natively, and one that likely draws the effect to the local viewport, saving resources overall, versus rendering thousands of particle objects in-world and stressing the clients and possibly the servers themselves.


The often-heard argument against the native weather in Windlight was that it rained inside buildings and structures where it shouldn’t – which points to yet another blatantly obvious, highly needed thing that was never implemented:

User-Created Zones. These zones are phantom objects, likely restricted megaprims, that define a 3D space within which the object can alter the user experience: lighting (maybe it’s darker in your house than outside?), restricting particles outside the zone from entering it (which would stop that rain and snow, as well as outside interference from some griefer tactics), or managing VoIP so that voice within a zone stays private to those inside it and is not heard outside. Add the ability to restrict how many prims may exist within a user-created zone, and who has building rights inside it, and you have a god-send for apartment complex owners and malls. It would essentially free users from the limitations of 2D parcels and better utilize three-dimensional space – which to me seems like a no-brainer in a virtual environment.
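To make the idea concrete, here’s a minimal sketch in Python of what such a zone might carry – every field name here is my own invention for illustration, not an actual Second Life feature:

```python
from dataclasses import dataclass, field

@dataclass
class Zone:
    """A hypothetical user-created 3D zone -- all field names invented."""
    min_corner: tuple  # (x, y, z) lower bound of the bounding box
    max_corner: tuple  # (x, y, z) upper bound
    lighting_preset: str = "default"       # e.g. darker indoors
    blocks_outside_particles: bool = True  # stops rain, snow, griefer spam
    private_voice: bool = False            # voice stays inside the zone
    prim_limit: int = 0                    # 0 = unlimited
    builders: set = field(default_factory=set)  # who may build here

    def contains(self, point):
        # True when the point sits inside the box on every axis
        return all(lo <= p <= hi for lo, p, hi in
                   zip(self.min_corner, point, self.max_corner))

apartment = Zone((10, 10, 20), (20, 20, 25),
                 lighting_preset="indoor_dim", private_voice=True)
print(apartment.contains((15, 15, 22)))  # True: avatar is inside
print(apartment.contains((5, 15, 22)))   # False: outside the zone
```

The containment test is the cheap part; the point is that once the viewer and simulator agree on a box like this, everything else – blocking rain, muting outside voice, enforcing prim limits – becomes a simple per-zone lookup.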


I’ve seen user created zones in other open ended virtual environments like ActiveWorlds and it’s absolutely phenomenal – right down to the ability to set the zone to act like water or alter the gravity within it… because you might want to jump into a swimming pool without remembering to purchase and wear a swimming HUD or sit on poseballs to pretend.



First Generation Tower of Terror in Active Worlds



Imagine what zones and an actual interface for creating particles in Second Life would allow. Which brings me to yet another spare part of Second Life.


We have a Build menu in Second Life and the ability to make particles, yet the two are completely disconnected. There is no tab on the build menu to define any aspect of a particle emitter; you are left custom-coding the script yourself or buying a pre-made script to drop into the object. All of the particle effects you saw in the video above were made with the build menu in Active Worlds, which lets you define that an object emits particles via a point-and-click list of options and numbers, including the actual image used for the particle or whether the item emits objects as particles. I won’t even get into the fact that particles there run locally per user, at a framerate that would put Second Life particles to shame. Again, we see an instance where the community has picked up the ball where Linden Lab dropped it: if you want a particle-creation GUI, you have to buy one from the Marketplace. Clearly this is important, if the users themselves are supplementing the lack of an obvious ability through third-party HUDs for sale.
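For illustration, here’s roughly the parameter sheet such a build-menu tab would expose – the field names below are my own invention (loosely echoing the kinds of values LSL’s llParticleSystem already lets a script set), not any actual interface:

```python
from dataclasses import dataclass

@dataclass
class ParticleEmitter:
    """Hypothetical point-and-click emitter settings -- names invented,
    loosely mirroring what an LSL particle script configures by hand."""
    texture: str = "raindrop.png"       # image used for each particle
    burst_rate: float = 0.5             # seconds between bursts
    burst_count: int = 10               # particles per burst
    lifetime: float = 3.0               # seconds each particle lives
    start_scale: float = 0.2            # size at birth (meters)
    end_scale: float = 0.05             # size at death
    velocity: tuple = (0.0, 0.0, -9.8)  # falling rain

    def particles_per_second(self):
        return self.burst_count / self.burst_rate

rain = ParticleEmitter()
print(rain.particles_per_second())  # -> 20.0
```

A dozen labeled number boxes and a texture picker – that is the whole tab. Nothing here requires a script; the script route could remain for people who want absolute control.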


From this point we can also move onward to talk about the implementation of actual Reflective objects and Mirrors, another useful innovation that was tested early on and then retracted – never to be spoken of again.




It’s almost like it never existed…




With Viewer 2 we saw a drastic reimagining of the interface and viewer, one built to resemble a common web browser in functionality; yet a new user who sees this layout will likely wonder why an entirely different interface exists for the built-in web browser instead of the viewer layout natively doubling as a web browser as well. We now have an address bar at the top which, to a majority of users, serves no real purpose and is ignored. The obvious was again overlooked: there is a disconnect between what the viewer looks like and what its intended functions should be. Allowing actual web addresses to be entered into the address bar would have been a no-brainer, since that’s the first thing you’d expect an address bar to do.


So how would an address bar in a virtual world viewer function?


That’s where the 3D Canvas comes into play. In programming it is fairly trivial to create an additional rendering canvas and layer them with only one in view at any given time. In this case, we speak of the 3D Rendering canvas used in the viewer to display the 3D world around you, as well as the Web Browser canvas which is already implemented and seen by every single user of Second Life when they first log in. Instead of using that web canvas natively in the viewer, we are greeted with a separate window as a web browser overlaying the 3D canvas – which for all intents and purposes makes it pointless to use and I suspect is the underlying reason why most people would choose to have web links open in an external browser.


Furthermore, the very nature of a dual-purpose canvas for 3D and 2D web would allow web addresses to be tied to 3D locations as part of region and parcel properties, and would greatly simplify teleportation. It is much easier to find a location if you know its normal website than to track down a complicated SLURL and enter that into the address bar. This doesn’t invalidate the use of Landmarks in-world; those are a staple and very useful.
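A rough sketch of how that unified address bar might dispatch its input – the canvas and action names are my own invention, though the SLURL formats are the real ones:

```python
def route_address(text):
    """Decide what a unified viewer address bar does with input.
    The SLURL formats are real; the action names are invented."""
    text = text.strip()
    if text.startswith(("http://", "https://")):
        if "maps.secondlife.com/secondlife/" in text:
            return ("teleport", text)   # web-form SLURL -> 3D canvas
        return ("web_canvas", text)     # ordinary site -> web canvas
    if text.startswith("secondlife://"):
        return ("teleport", text)       # viewer-form SLURL
    # bare words: treat like a browser would, as a search
    return ("search", text)

print(route_address("https://example.com"))
# -> ('web_canvas', 'https://example.com')
print(route_address("secondlife://Region/128/128/25"))
# -> ('teleport', 'secondlife://Region/128/128/25')
```

One text box, one dispatcher: web addresses go to the web canvas, SLURLs trigger a teleport on the 3D canvas, and anything else becomes a search – exactly what a browser-shaped viewer trains new users to expect.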


If Linden Lab were really interested in tying the web into Second Life as a native function, they’d have noticed this very basic and obvious line of thinking the moment they received their first preview of the Viewer 2 interface. They did not – but as an aside, I’ve already posted a JIRA outlining the entire functionality, with mock-ups of how such a thing would work step by step, and why it is beneficial (if not trivial, since much of the functionality already exists built-in but goes unused in proper context). [JIRA #VWR-22977]


Wouldn’t it be nice to be able to type an actual web address into the Viewer 2 address bar and get the website loaded on the full canvas? Better yet, if a region or parcel has their web domain address in their Parcel Properties, wouldn’t it be great to get a notification that the web site you are visiting has a related location in Second Life and ask if you’d like to teleport there? How about the number one reason Linden Lab should have done this by now: Seamless and native integration of the SL Marketplace into the Viewer.


If you’re building a viewer that resembles a web browser, this is essentially what a new user is expecting it to act like, taking into account the addition of 3D environments.


I’m citing half finished and wholly ignored features and usability fixes that seem like common sense and date back as far as 2007. But instead of working on these sorts of things, we get a nice email from Linden Lab telling us about how they’ve essentially bastardized our profiles (as well as the viewer even further) and would love for us to link it up to Facebook and other social media.


I will be the first to admit that Linden Lab isn’t a total screw-up, because they did manage to improve the teleportation time and a number of low level fixes. Despite this, the high level lobotomizing of the viewer experience as well as blatantly ignoring the obvious things that the viewer actually needs or should have irks me to no end. It’s as if their priorities are horribly skewed…


There is such a focus on the machinima community that it’s starting to piss me off, quite honestly. I’m not against the machinima community one bit – I think the work they do is oftentimes indescribable and breathtaking – but since joining the SL Press Corp, the only thing I’ve seen in my email regarding SL news is machinima-related, as if that’s essentially the only news in Second Life important enough to put out on the group notice.


You know what would make all the machinimatographers drool with unbridled joy (not to mention the other 90% of registered accounts, who seem to be getting what they want from TPVs instead)?


Tell them they can now create controllable zones for lighting, atmosphere, etc. If you’re having trouble figuring out how, go talk to ActiveWorlds and ask how they managed to pull it off in its entirety seven years ago.


Tell them they now have access to Windlight weather effects. They’ve only been sitting around since 2007, waiting to be implemented from part of a package acquired that year. No rush or anything, Linden Lab – I’m happy to know I can link my profile to Facebook in the meantime.


Tell them they now have actual Mirrors and reflective surfaces again instead of making them fake it against the water. Again, no rush or anything… it’s not like the dynamic reflection shader isn’t already in use for the water, proving it works just fine when enabled.


Tell them the advancements in TPVs are being ported over to the official viewer – start by working with Kirsten’s Viewer team and Phoenix to make that happen. Because it’s not like the community is leapfrogging Linden Lab’s paid coding army, implementing the things users wanted and asked for years ago.


Tell them you actually realized the importance of easy access and creation for particle effects, and have implemented the toolset into the Build menu to facilitate amazing effects and quicker turnaround – while leaving the ability to script a particle effect by hand for users who want absolute control over the end product.


Take a hard look at how much Linden Lab has lobotomized the viewer, and instead think about ways to make it coherent again. Linden Lab created a viewer that pretends to be a web browser in the guise of a 3D virtual environment viewer, and the result doesn’t seem to do either very well. So instead of trying to make it more reasonable, they’ve decided to start stripping away all the things that actually make the experience what it is, leaving new users thinking that Second Life is just a glorified 3D chatroom.


It’s time to actually address the long-standing issues and think about how to really pull the experience together properly. When I say that Linden Lab likely won’t be doing these things anytime soon because they’re too busy implementing a Facebook “Like” button – it’s 100% true. It’s about what they believe is actually a priority, and right now that’s “Fast, Easy, Fun” versus “Finish what you started in 2007 and then work your way forward”. This is evidenced by the recent announcement that user votes on the JIRA didn’t actually make a difference because Linden Lab was ignoring the popular votes – which is the same as admitting they are ignoring the community and what it wants.


Their priorities are skewed, plain and simple: chasing the current social media buzzwords, trying to make a Second Life viewer that works in a web browser (and eats 1GB of traffic an hour), slicing and dicing the new user experience into something that bears no resemblance to the actual experience that is Second Life.


I hate to be the bearer of bad news, but that is exactly the way the process works. They are a company caught up in the ever changing whims of the management, and it shows in the nearly incoherent Viewer 2 and company vision. It is a viewer that is now a hop and skip away from becoming The Sims with multi-user chat – and I take great offense to that.


While I applaud what Linden Lab has done, begrudgingly ignoring the massive amount of screw-up involved and scraping to find at least a nugget of positive from the mountain of fecal matter it’s buried under, if you want to know where the real innovation actually is – it’s not at Linden Lab.


I’ll turn your attention to the community and the brilliant Third Party Viewers they maintain. The user generated content that seeks to make up for the forgotten tasks that Linden Lab continually ignores while chasing that social media dream.


When the community is doing a better job at making a viewer for a product than the company that makes the product, you start to understand something: if you think your community can be put off while you chase your tail for a few years, they’ll simply stop listening to you and wander off to make a better experience themselves.

May 10, 2011

Sign of the Times


Recently I was watching the Twitter-sphere when an interesting tidbit came across about the potential acquisition of Skype tomorrow morning (May 10th, 2011) by Microsoft. If I remember correctly, the amount put forward was around 8.5 billion dollars. This alone was interesting enough, rumor or not, and had me thinking about the 40 Days and 40 Nights post I had written a while back about the possible acquisition of Linden Lab by Microsoft and what that would mean in the grand scheme of things.


As I had stated before, I was really under the impression that Microsoft was putting out feelers as to what they could buy out and brand as their own, and that during that phase some sort of preliminary offer probably crossed Linden Lab’s desk, only to be rejected. Despite that, I still believe the end-goal of Linden Lab is acquisition; it’s just a matter of time to reach that point.


There are a lot of things that need to happen in the virtual worlds arena to facilitate the forward movement of the industry as a whole, but the trouble is that most if not all of the current offerings are victims of feature lock-in. Sure, an acquisition might have helped a great deal to rebuild or improve Linden Lab’s technology, but as I continue to think about it, I’m beginning to believe the only real way forward is rebuilding the whole thing from the ground up with better intentions.


For instance, could there be weather and reflections in Second Life? Well, yes… Windlight itself has a native weather system built into it, and we know Linden Lab acquired the company responsible for Windlight. As for reflective surfaces, there was a short period when things like mirrors were enabled in Second Life and then hastily retracted. Even the Viewer 2 interface seems half thought-out and hastily assembled: implementing web-browser-style familiarity without actually letting the viewer act like the web browser new users would expect is a disconnect in design understanding.


A lot of the issues that I see with Linden Lab stem from the second-generation staff issue, whereby a majority of the real visionaries from the original group who started the company no longer remain; thus the people driving the company today don’t really understand what they have or what it should be doing going forward. I mean, is it fair to call the Viewer 2 system a half-hearted attempt to map a complex system like Second Life onto existing paradigms like the web browser? Well, yes, of course. But despite the shortsightedness of it all, there have been a number of truly groundbreaking advancements as a result.


Also, despite the uproar I had caused with the Letter to Viewer 2 Haters, many of the things I said still hold true – whether we really like it or not, we’ll all be using Viewer 2 in the future. Now, when I said that, I also made a habit of telling people I meant that in one form or another we’ll be using Viewer 2, and it’s highly likely the TPVs will be the ones to make a well-rounded, acceptable version of Viewer 2 for the masses.


That’s how it’s always been, even with releases like 1.23, where the TPVs made it into something far better than the original and accepted by the masses. Phoenix and Imprudence are examples, as were predecessors like Emerald (despite the clusterfsk Emerald became).


However, there is a middle ground. Personally I love Viewer 2 – not as the official viewer, but as a representation of the advancements it brought to the community (despite the shortsighted and half-assed parts). Shared Media is a prime example, though it probably still needs work. I use a Viewer 2 compatible TPV most of the time – Kirsten’s Viewer is my choice – though I tend to keep a handful of other viewers installed and updated for different uses.


The middle ground, though, is an interesting one to contemplate. Somewhere between the official release and the TPVs, the corporate method of doing things and the Open Source method is a viable third option. Clearly the corporate side of things has managed to botch an awful lot over the past few years, and has been playing fix-up ever since. But I’m not entirely convinced that the Open Source methodology is the magic bullet either.


For one, there is a lack of cohesion and unified force in the open source arena, which prohibits commercialization and proper migration. Many places like Avination have a silly policy of not allowing Magic Boxes in their system because they want to see stores there instead. This, to me, is like banning the Internet because nobody wants to build a store in your town – addressing the symptoms while ignoring the core reasons such issues exist. We must also take into account that there seems to be a lack of security protocols to ensure content remains properly protected when crossing grids, and that the act of migration itself is as convoluted as it can possibly get.


Maybe this is one of the reasons Linden Lab is testing the direct delivery system on the Marketplace, to eliminate the need for Magic Boxes in-world? If so, it would also make sense to offer the Linden Dollar as a de facto base currency across the Hypergrid, to facilitate the rapid expansion and use of content and purchases outside its own walls.


While this is interesting, it still stands to reason that it’s merely an expansion of a system Linden Lab probably understands to be irrevocably busted; but we must also understand that Linden Lab is a business with a profit margin to maintain. Building a new version from the ground up would make the most sense, but just isn’t in the cards given the time constraints and costs involved. So they will make do with what they have and try to improve it however they can.


That doesn’t mean that Second Life isn’t broken.


I can play a game of Minecraft and watch a random thunderstorm pour rain onto the massive terrain – but not under trees or inside houses. That got me wondering why Second Life seems to have an issue implementing something similar; to wit, how independent programmers managed to build such a thing from scratch in a Java game, while Linden Lab cannot or will not, despite having one of the most advanced particle systems in the world at their disposal, with weather abilities predefined and built in.


Of course, it boils down to implementing things like zones and the ability for users to define them in-world – one of those missing pieces that makes you wonder whether actual game programmers built Second Life or whether it was hacked together in a weekend. Zones are a staple of game programming, and you usually define at least two (above water and below). Which, again, brings us to what happens when you dive into the vast Linden oceans…


Well, nothing. You just keep walking around as if you weren’t in water. That’s a lack of zones telling the software the difference between dry land and water, and how the avatar should behave in each. These are things usually thought of from day one when designing with a game engine, and often built into commercial engines by default.
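The kind of check that’s missing is almost embarrassingly simple. Here’s a toy version in Python – the function name is my own, though 20 meters is Second Life’s default region water height:

```python
WATER_LEVEL = 20.0  # Second Life's default region water height, in meters

def movement_mode(avatar_z, water_level=WATER_LEVEL):
    """Toy version of the above-water / below-water zone check most
    game engines do by default: pick the avatar's physics mode from
    which zone it currently occupies."""
    if avatar_z < water_level:
        return "swim"   # submerged: damped movement, swim animation
    return "walk"       # above the waterline: normal walking physics

print(movement_mode(15.0))  # swim -- diving into the Linden ocean
print(movement_mode(25.0))  # walk -- on dry land (or a tall prim)
```

One comparison per frame against the water height, and the avatar knows whether to walk or swim. That this check doesn’t exist says a lot about how the engine was assembled.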


A lot of this can be implemented or fixed, but I still wonder about it all. Not that I expect any of it to be fixed or implemented, because clearly Linden Lab has more important things on their plate to work on, like adding a Facebook button and rearranging user profiles.


It’s a sign of the times, clearly.


The whole point of exploring XMPP chat was that it was thought to be some sort of magic bullet for the chat and group chat problems in-world. And while XMPP can reasonably handle millions of chat instances, I don’t really believe anyone at Linden Lab understood that the number of instances is not a 1:1 representation but grows quadratically – which brings our supposed tens of thousands of chat instances up to a more realistic virtualized four billion or so when dealing with 64,000 simultaneously chatting users. Of course, this is a hypothetical estimate to illustrate the difference in magnitude – so take these numbers with a grain of salt.


Keep in mind, when you chat in-world you aren’t chatting to just one other person; you’re broadcasting to everyone in range. Your single chat message multiplies by the concurrency of that range, because the server needs to relay it to everyone around you. Every time somebody types and hits enter, that message may need to relay to as many as 50 other users, and the same goes every time somebody answers. Of course, we know the real concurrency of a region is rarely 50 users – we’ve all watched a virtual wedding crash a simulator in lag – but even at a respectable 20 concurrent avatars per region, the chat streams run far higher than the double-digit 20.


It gets even worse when we apply this understanding to group chats: hundreds or even thousands of people in a group, and a single person says “Hello”, compounding the relay load and overwhelming it. The server needs to send that simple message to thousands of other people, and again whenever any of them reply. So let’s say the group has 5,000 members online and you say “Hello”. For that one message, 5,000 other people need a relay, and again for every reply. That’s a lot of traffic, and far more in connection throughput.


But things like XMPP are supposed to handle millions of chat streams without a hitch, and that’s probably true. When we broadcast en masse like this, however, the true connection relay such a system is responsible for jumps up a few orders of magnitude, well past anything it was designed to handle.


5,000 multiplied by 5,000 is the more appropriate number of virtualized connection streams the chat server must handle for that one group chat, to keep all members online and connected to each other in the relay. That number comes to 25 million, give or take a margin for intelligent routing and relay. Just that alone is enough to boggle the mind – and to explain why chat lag and failed message delivery happen so often.
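The back-of-the-napkin math is simple enough to sketch in a few lines of Python (the function names are mine, and real servers would shave some of this off with smarter routing):

```python
def deliveries_per_message(listeners):
    """One chat line must be relayed once per listener in range."""
    return listeners

def deliveries_if_everyone_speaks(participants):
    """If all N participants each say one line, each line fans out to
    the other N-1 people -- the relay load grows quadratically, not 1:1."""
    return participants * (participants - 1)

# Local chat: even a modest 20-avatar region is 380 relays per round.
print(deliveries_if_everyone_speaks(20))      # 380

# Group chat: one "Hello" to a 5,000-member group is 4,999 relays;
# a round where everyone replies once approaches 25 million.
print(deliveries_per_message(5_000 - 1))      # 4999
print(deliveries_if_everyone_speaks(5_000))   # 24995000
```

That quadratic curve is the whole story: the per-message cost looks harmless, and the per-conversation cost is what actually melts the chat servers.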


We can apply this train of thought to all manner of data transfer across the system – assets, scripts, and anything else you may be broadcasting around you. This isn’t a 1:1 scenario, and never has been from day one. We’re talking orders of magnitude more connection streams across the entire virtual environment… and really, no centralized server was ever meant to handle it.


I’m not really big on total decentralization either, because I believe that approach lacks things like security and protocol; however, quite a bit can be said for decentralizing key aspects of a virtual environment while retaining centralized gateways for protocol and authorization to manage the checks and balances.


Overall it’s a crossroads of sorts.


I’ve seen this crossroad a number of times, and each time the same sorts of things usually bring the industry back to it. Lots of centralized servers, static media formats, and cutting corners.


The Metaverse isn’t a new idea, and neither is a virtual environment. It’s not revolutionary to walk into a VC meeting and tell the future board of directors that you have a vision for something spectacular called a virtual environment. Really you’re just rehashing the past twenty years or more and acting like it’s all new and innovative.


But I digress, as I usually do in these blog posts.


Really what I’m on about is that there is a constant impression of half-assed attempts in the industry, and missed opportunities. I can single out Second Life for what it’s worth, but the same has applied for many years to every virtual environment system I’ve ever encountered.


I just happen to like Second Life for the time being, but that isn’t because Linden Lab has managed to do anything to sway my judgment – it’s because the community and what they offer has continually given me something more than Linden Lab has.


When I wanted to see the best of what Second Life could offer in graphical fidelity, I didn’t get an answer from Linden Lab… I got an answer from Kirsten’s Viewer. When I wanted to see the technical achievements available in Second Life, I got an answer from Emerald and now Phoenix viewer. When I was interested to see the versatility of Second Life technology, it was the community and OSGrid, and related types that had an answer.


When Linden Lab drops the ball, it’s the community that runs with it and makes their system shine.


It’s a sign of the times, and I think it’s high time Linden Lab stopped keeping the community at arm’s length. The most critical are often the most passionate and caring. But most of all, Linden Lab needs to get its mojo back.