Friday, November 12, 2010

Augmented Reality with Away3D and FLARManager

Most of the examples for FLARManager out there seem to center around Papervision... which leaves a lot to be desired for those of us who follow other engines; more specifically Away3D. I've built and re-built my base scripts a few times now, and I have a pretty solid mechanism for working with Augmented Reality that I figure others might like to try out.

Now, I'll mention that this might seem a bit convoluted. That's because I built it to be recycled across different projects that require different approaches but share a similar core. One might have a static object that you simply look at, another might require some level of keyboard interactivity, and another might be mouse-event based. That's why.

Outside of the regular cascade of libraries you'll need for FLARManager v0.7 (Download Page) and Away3D 3.5.2 (Download Page) (Note: Not Lite), you'll be creating a few new AS Files with me. We'll start with what I consider to be the core engine that drives everything.

Also note: as of the date of this post, the latest versions of FLARManager and Away3D are NOT compatible with each other. So make sure you're getting the right libs.
First things first, we'll build the package and get the imports out of the way.


A3DFLAREngine.as

package classes {
  import flash.display.MovieClip;
  import flash.display.Sprite;
  import flash.events.Event;

  //FLAR MANAGER
  import com.transmote.flar.*;
  import com.transmote.flar.marker.*;
  import com.transmote.flar.utils.geom.FLARAwayGeomUtils;
  import org.libspark.flartoolkit.support.away3d.FLARCamera3D;

  // Away3D Stuffs
  import away3d.cameras.*;
  import away3d.containers.*;
  import away3d.core.base.*;
  import away3d.core.utils.*;
  import away3d.core.math.*;
  import away3d.core.render.*;
  import away3d.events.*;
  import away3d.loaders.*;
  import away3d.materials.*; 
  import away3d.primitives.Cube;
  import away3d.primitives.Plane; 


Next, we'll declare our class and set up the variables it will handle for us:
public class A3DFLAREngine extends MovieClip { 
    //flar variables 
    protected var flarManager :FLARManager;
    protected var activeMarker :FLARMarker;
    protected var camera3D :FLARCamera3D; 

    //away engine variables
    protected var view :View3D;
    private var scene :Scene3D; 

    //scene objects
    public var modelContainer :ObjectContainer3D;
    public var plane :Plane;
    public var cube :Cube;

There are three groups of variables here: the FLARManager's guts, Away3D's glory, and the actual objects of your augmented reality experience, the plane and cube. So obviously, nothing incredible is going on here aside from generating 3D primitives on a marker.

Let's move along,
public function A3DFLAREngine():void {
      initFLARManager();
    }

    protected function initFLARManager():void {
      flarManager = new FLARManager("ar/myConfig.xml");
      addChild(Sprite(flarManager.flarSource));

      flarManager.addEventListener(Event.INIT, init); 
    }

So in the constructor I make a single call to initFLARManager(). That function sets up my FLARManager instance and loads the config file it requires. Note that if you don't plan to extend any further beyond this class, you could very well set up the FLARManager instance in your constructor. I choose not to, however, because by now I actually comment out the initFLARManager() call in the constructor and call it from an external swf that my augmented reality piece gets loaded into.

Additionally, the FLARManager has an event dispatcher for the INIT state. So once it is ready, it will dispatch an INIT event, at which point we call:
private function init(e:Event):void {
      flarManager.removeEventListener(Event.INIT, init);

      initEngine();
      initHUD();
      initObjects();
      initListeners();
    }

The init function itself is just a collection of inits. So after some listener cleanup, we'll move on to taking care of each module independently:
private function initEngine():void {
      scene = new Scene3D();
      camera3D = new FLARCamera3D(); 
      camera3D.setParam(flarManager.cameraParams);

      view = new View3D();
      view.scene = scene;
      view.camera = camera3D;

      //view.renderer = Renderer.BASIC;
      //view.renderer = Renderer.CORRECT_Z_ORDER;
      //view.renderer = Renderer.INTERSECTING_OBJECTS;

      addChild(view);

      modelContainer = new ObjectContainer3D();
      scene.addChild(modelContainer);
    }

This is all it takes to set up an Away3D engine: a scene, a FLARCamera3D, and a view. All we do is set up a new instance of each, attach the scene and camera to the view, then add the view to the display list via addChild. The last thing we do is build an empty ObjectContainer3D that will act as the root of all of our 3D primitives and objects, and add it as a child to the scene we attached to our view. Using the modelContainer this way lets us easily turn all of the models in our Away scene on and off, as well as push them around in 3D space on the augmented reality marker.

Another thing of note: the 3 lines I have commented out here are there more for informational reasons. Away3D has 3 'render modes': Basic, Correct Z Order, and Intersecting Objects, and in my experience they form a scale of quality versus speed. Basic is the default renderer, and it tends to have issues with triangles showing through others. When you finally publish something at the end of this walkthrough, you'll probably notice right away that the cube seems to cut into the plane, or the plane somehow shows through the cube. This is due to weak z-sorting in the basic renderer.

Obviously, the cleanest way to deal with this is to switch the renderer to CORRECT_Z_ORDER; HOWEVER, you incur a fairly major performance hit, especially with Augmented Reality. I've seen it suggested, though, to stick with the Basic renderer and instead 'cheat' the z-sorting by using .screenZOffset (in units of 1000s, usually) to add artificial depth to your objects. That's gotten the job done nicely for me without major performance issues.
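Here's a rough sketch of that cheat, assuming you keep the default Basic renderer; the exact offset values (and possibly their signs) are just guesses you'd tune per scene. You could toss this into initObjects() or wherever you build your primitives:

// Keep Renderer.BASIC and fake the depth sorting instead.
// These numbers are placeholders -- tweak (or flip) them until the
// cube reliably draws in front of the plane.
plane.screenZOffset = 2000;
cube.screenZOffset = -2000;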

protected function initHUD():void {
      // override HUD per project
    }

The initHUD piece is optional; it's mainly there if you want a graphic overlay, something to frame your work in. ;) Also note it is a protected function -- in my case, because I have different overlays from piece to piece, I override this function in another class.
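Just to illustrate what that override might look like in a subclass (the 'HUDFrame' linkage class here is made up; use whatever overlay asset your project has):

override protected function initHUD():void {
  // 'HUDFrame' would be a MovieClip in your Library with a linkage class name
  var hud:Sprite = new Sprite();
  hud.addChild(new HUDFrame());
  addChild(hud);
}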

protected function initObjects():void{
      plane = new Plane({material:"red#", name:"plane", width:75, height:75});
      modelContainer.addChild(plane);

      cube = new Cube({material:"blue#", name:"cube", width:20, height:20, depth:20});
      modelContainer.addChild(cube);
    }

Our glorious plane and cube. So simple, yet.. so complex. Well, okay, so not really complex.. We just make a call for a new Plane and a new Cube, specify our init Object parameters and voila. Primitives. Add each to the modelContainer (which is the root of our 3D scene) and we'll move on.

Again, note that this is a protected function. There's not a whole lot you'll want to do with standard primitives in a real project (although there is plenty you can do!), so this will eventually be overridden in a subclass to load DAEs or MD2s from 3DS Max or whatever modeling package you use; a quick illustration of what that override might look like follows. Soon, I hope to post a follow-up on generating MD2s from 3DS Max using QTip and porting the model and its animations onto the AR surface.
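As a rough idea of where that override goes (the embedded DAE and texture below are hypothetical; the Collada.parse call just mirrors what I do in my later posts):

// Hypothetical subclass override that swaps the primitives for a model.
// 'EmbeddedModel' would be an [Embed]ded .DAE and 'mat' an embedded texture.
override protected function initObjects():void {
  var model:ObjectContainer3D = Collada.parse(EmbeddedModel,
      {scaling:1, material:Cast.material(mat)}) as ObjectContainer3D;
  modelContainer.addChild(model);
}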

private function initListeners():void{
      stage.addEventListener(Event.ENTER_FRAME, loop);
   
      // begin listening for FLARMarkerEvents.
      flarManager.addEventListener(FLARMarkerEvent.MARKER_ADDED, onMarkerAdded);
      flarManager.addEventListener(FLARMarkerEvent.MARKER_REMOVED, onMarkerRemoved);
   
      stage.addEventListener(Event.RESIZE, onResize);
      onResize();
    }

Our essential list of listeners. The first listener is attached to our stage and set to run on ENTER_FRAME. This is our primary loop that will control the rendering of the 3d objects to the screen as well as keeping us updated with the tracking position of our FLARMarker.

The FLARManager instance we created long ago will be listening for our FLARMarkerEvents; specifically, when a marker is seen by the camera and when it is taken away. They have aptly named handler functions. ;)

The last listener is for the stage, which we'll use to make sure the modelContainer and view get repositioned if the swf is resized. From what I can tell, this is still borked when previewing your movie in the Flash IDE, so don't expect to be able to resize your window there and have it work.

public function killListeners():void {
      stage.removeEventListener(Event.ENTER_FRAME, loop);
      flarManager.removeEventListener(FLARMarkerEvent.MARKER_ADDED, onMarkerAdded);
      flarManager.removeEventListener(FLARMarkerEvent.MARKER_REMOVED, onMarkerRemoved);
      stage.removeEventListener(Event.RESIZE, onResize);
    }
  
    public function resetListeners():void {
      initListeners();
    }

In the case of these two functions, I can't tell you that they're absolutely necessary; I actually just built them in case I needed them. So far I haven't had a reason to completely stop/freeze a simulation, but I suppose if you wanted to, this would be a starting point for doing it. So leave them in or take them out; it's up to you.

protected function loop(e:Event):void { 
      if (activeMarker) {
        modelContainer.transform = FLARAwayGeomUtils.convertFLARMatrixToAwayMatrix(activeMarker.transformMatrix);
        update();
      }
   
      view.render();
    }

    protected function update():void {
      //additional override for loop during active marker state.
    }

This is the major loop that occurs on every frame. What we're doing here is checking to see if there is an activeMarker on the screen (flagged by the FLARMarkerEvent). If there is, we convert the transformMatrix of the marker into a transform for our modelContainer. We're also firing an update() function that, here, does nothing. But again, this was built with extending in mind, so update() could be overridden to add in functionality that should happen while a marker is active.

Lastly, we force the Away3D view to render. This handles all of the redraw calculations that draw the 3D scene to the screen.
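As a trivial illustration of that update() hook (purely my own example, not part of the engine), a subclass could spin the cube while a marker is being tracked:

override protected function update():void {
  // runs every frame while a marker is visible; spin the cube a little
  cube.rotationY += 5;
}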


private function onResize(e:Event = null):void{
      view.x = stage.stageWidth / 2;
      view.y = stage.stageHeight / 2;
    }

Here is the function that handles the initial call and any subsequent stage resizes. All we're doing is making sure the view for our scene stays centered on screen.

private function onMarkerAdded (e:FLARMarkerEvent) :void {
      activate();
      modelContainer.visible = true;
      scene.addChild(modelContainer);
      activeMarker = e.marker;
    }

    protected function activate():void {
      //override 
    }

The first function here handles the listener attached to our marker. It is responsible for turning on our root 3D object (modelContainer) and setting activeMarker to the marker that was detected.

The activate function is an optional override for any future functionality that should occur when a marker is added to the screen.

private function onMarkerRemoved (e:FLARMarkerEvent) :void {
      deactivate();
      modelContainer.visible = false;
      scene.removeChild(modelContainer);
      activeMarker = null;
    }

    protected function deactivate():void {
      //override
    }
  }
}

And to finish things off, a final function to handle the marker being removed: it hides the modelContainer and clears the activeMarker. As before, we have a deactivate function as an override hook for future functionality.

You'll note that each of the marker events changes the visible property of the modelContainer as well as adding it to and removing it from the scene. This may be redundant and, therefore, unnecessary. I like to call it a safety option. :P

So, there it is. All of A3DFLAREngine.as, complete and ready to go. All you need to do now is stick it in a folder called 'classes' alongside the .fla you create as your stage.

In your .fla you will set the document class to classes.A3DFLAREngine. I use a stage size of 640x480 and set my FPS to 60. Wouldn't it be nice if the Flash IDE would let us build swfs without .flas? ;)

Tuesday, October 19, 2010

Space Curve Thingy

Playing with the BezierCurve class and the Flash BlurFilter


Wednesday, September 22, 2010

Image Scramble

So in an effort to experiment with Augmented Reality a little more, I decided to work up a project to pitch to my NASA client as something to follow through with... unfortunately, with the budget crisis and all, it wasn't as impressive as some of the other stuff I did get approved to do. But being kind of attached to it, I don't want it to die altogether, so I may still pursue finishing it on my own.

If you've ever played Assassin's Creed 2, then you may be familiar with the play style; it's modeled after that to some extent. You select a segment of the image and rotate it back into place, sometimes with more than one segment affected by a rotation group.

The difference is that instead of using the mouse or keyboard to rotate each section, I'm using the webcam and a live marker to let the end user turn each image segment. :)  So you basically operate it like an old safe.
Hopefully I'll get around to putting a playable demo up here. ;)

Friday, December 18, 2009

"On Screen."

So I was experimenting with ways of showing 'video' or some sort of interface on a screen and I threw a quick model and some textures together.
Again, we're using Away3D in Flash here; and I'm using ColladaMax 3.05C to export my .DAE Animations.

Using the CustomAnimation Class from a previous post, I have my SkinAnimation divided up into 2 segments: 1) A static "up" position where the video monitor starts out. And then 2) a "drop" animation that brings the monitor down and 'activates' it.


In my 3DS Max file, I have a very simple object that is composed of 3 floating monitors and a keyboard connected by some dangling cables to nothingness. The object was unwrapped and I did a simple diffuse+occ render just to create a texture map that had shading. (First mat in the slots)

I then used a default, Standard material in max and took the diffuse up to a bright green color (third mat in the slots). The only other thing you need to make sure to do is NAME THE MATERIALS. The texture map has the name 'CompFrame' and the green material has the name 'GreenScreen'.

I then used another slot to create a Multi/Sub-Object Material. When you click on a material sphere, you should see a button that says 'Standard'. Click the button and the Material Library will pop open. Choose Multi/Sub-Object from the list.

Once you have the Multi/Sub-Object Material created, you will have a number of different slots, numbered 1-10 by default, available for materials. Drag and drop instances of your texture map and the green screen into the respective 1 and 2 slots.  You can delete the rest.  Then assign this material to your model.

NOTE:  For the screens to show the proper resolution of whatever you plan to map to them, they need to be unwrapped to the maximum extents of your texture map. Therefore, when you assign the texture map to them, the texturemap itself should be seen on the screen. See the pic for reference to what I mean.




Once you assign the Multi/Sub-Object material to your model, one of two things will happen: 1) it will look exactly the same as before, or 2) it will be randomly laden with green polygons. The safest thing to do before re-IDing your polygons is to go into Edit Poly --> Element, select the entire element, and then Set ID to 1 (or your texture map ID).

Then, you can switch over to the Polygon sub-object selection mode, click on the polygon (or collection of polys) that will be the screen, and set ID to 2 (or your GreenScreen ID). Pic for Ref.  Once you set the ID to 2, the selected area should then switch from having the texturemap to having a green screen, like in the first pic at the top. Once you're set, export your DAE and then hop over into Flash.

So in Flash, we just have a couple of things to import. The .DAE and the texture map. We're not doing anything special here, so just use Collada.load() or Collada.parse() as you normally would. My setup looks like this:

[Embed(source="/models/CompTest.DAE", mimeType="application/octet-stream")]
private var EmbeddedModel :Class;
[Embed(source="/models/images/ComputerFrame.jpg")]
private var mat :Class; 
private var model :ObjectContainer3D;  // for the model itself 
private var materialArray :Array; // for all of the material assets that may be included.

Normal setup for the Away3D stuff. I like to keep things separated into 'init' calls, so my constructor does very little; then I have an init function that fires initEngine, initMaterials, initEmbeddedObjects, and initListeners, then does anything else I need it to do.
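Roughly sketched out (the class name and the ordering here are just how I tend to arrange things, not anything this demo specifically requires):

public function OnScreenDemo():void {
  init();
}

private function init():void {
  initEngine();           // scene, camera, view
  initMaterials();        // embedded texture + Library MovieMaterial
  initEmbeddedObjects();  // parse the DAE, wire up the animation
  initListeners();        // mouse + ENTER_FRAME render loop
}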
 
So, in what I'll presume is something like your initMaterials function, you'll want to have something like:
 
private function initMaterials():void{ 
  var mc:MovieClip = new MovieClip(); 
  mc.addChild(new ColorTest());  // ColorTest is the linkage from the Library.
  var mm:MovieMaterial = new MovieMaterial(mc);
  materialArray = [Cast.material(mat), Cast.material(mm)]; //Our embedded texture + library texture
} 

So, pretty much all of this is key. First, we're making an arbitrary MovieClip (which could probably also be a Sprite, but hey) and then we're adding a child to it: new ColorTest(). ColorTest is the name of the linkage class of a MovieClip in my Library. It's nothing more than a movieclip that changes colors every 10 frames for 50 frames, then loops. It's just a quick example. ;)
 
So, we're then creating a MovieMaterial using the mc MovieClip we created, then casting that to a material in our materialArray.
 
Now, setting up the object:
 
private function initEmbeddedObjects():void { 
  model = Collada.parse(EmbeddedModel, {scaling:1, material:materialArray[0]}) as ObjectContainer3D; 
  modelContainer.addChild(model);

  model.materialLibrary.getMaterial("GreenScreen").material = materialArray[1];
  var skinAnim:SkinAnimation = (model.animationLibrary["default"] as AnimationData).animation as SkinAnimation;

  ViewAnim = new CustomAnimation(skinAnim);
    ViewAnim.addAnimationSequence("up",0,1,false);
    ViewAnim.addAnimationSequence("drop",1,60,false);

  model.addOnMouseDown(onClickModel);
  ready = true;
}

 
So a key line in the last piece of code is this:
model.materialLibrary.getMaterial("GreenScreen").material = materialArray[1]; 
 
This tells Away3D to find the material that has the name 'GreenScreen' that we assigned in 3DS Max and replace it with the material that we cast in the materialArray's 1 index; which was the MovieMaterial that we created.
 
So I can now use that line of code anywhere in the rest of my code to change what material is set on the screen. In the preview, you'll notice that if you click on the computer to turn it off, it sets to its "off" position and the screens go black. Click it again, and the computer drops down and the screens turn on.
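The click handler itself isn't shown above, but a sketch of how that toggle could work (assuming a simple boolean flag I'll call isDown, and the playSequence() call from my CustomAnimation class) looks something like this:

private var isDown:Boolean = false;  // hypothetical flag tracking the monitor state

private function onClickModel(e:MouseEvent3D):void {
  if (isDown) {
    ViewAnim.playSequence("up");    // snap back to the static 'off' pose
  } else {
    ViewAnim.playSequence("drop");  // bring the monitors down and 'activate' them
  }
  isDown = !isDown;
}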

Friday, December 4, 2009

CustomAnimation Class Preview

I've made significant progress on the Augmented Reality game I've been working on, and it is now in a Beta Testing phase on the big screen here in the office!

While I had a free moment, I decided to upload the character I created for the game into a model viewer to show how the CustomAnimation class works. This is a single .DAE weighing in at around 400k. There are 5 different animation sets on the 3DS Max timeline: 0-25 is the walk cycle, 26-27 is idle, 28-38 is a jump, 39-50 is the landing, and 51-70 is a generic 'working' animation (typing on keys... ish).

Clicking the model in the model viewer will call the .nextSequence() command from the CustomAnimation class, but each sequence can also be called independently with .playSequence(name).
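Wired up the same way as in the "On Screen" post, the setup would look roughly like this; charAnim is just my name for the CustomAnimation instance, model is the loaded ObjectContainer3D, and the loop flags are guesses for this particular character:

// registering the five ranges from the Max timeline
charAnim = new CustomAnimation(skinAnim);
charAnim.addAnimationSequence("walk", 0, 25, true);
charAnim.addAnimationSequence("idle", 26, 27, true);
charAnim.addAnimationSequence("jump", 28, 38, false);
charAnim.addAnimationSequence("land", 39, 50, false);
charAnim.addAnimationSequence("work", 51, 70, true);

// clicking the model steps through the sequences
model.addOnMouseDown(onClickModel);

private function onClickModel(e:MouseEvent3D):void {
  charAnim.nextSequence();  // or charAnim.playSequence("jump"), etc.
}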

Monday, November 23, 2009

The switch to Away3D

Been a while since I posted an update, but since that time I have run through experiments with PV3D, Away3D, Unity3D (not Flash), and, minimally, some iPhone-related techs.

The next project I've started working on relates to the human aspect of space exploration; meaning, in this case, astronauts. As such, I've been working on yet another Augmented Reality piece that will give the user control of an astronaut on their desktop. I had already managed to do one with PV3D, with relative success but less than amazing results, so I decided to give Away3D a go. And I have to say, I'm much happier with it. It has its annoying moments, granted, but all-in-all, I prefer it as a 3D engine.

A major portion of this project relies on character animation, however. And exporting biped animations to Away3D can be a bit finicky. Unfortunately, most of my frustration came because I was using a newer exporter that didn't quite fit the format of how Away3D parses Collada animations. So ColladaNextGen is out. ColladaMAX 3.05C, however, works quite well with it.

One of the major things I've noticed it lacks, though, is an easy, logical way to control animations. If you have an object that has a simple, looping animation, it works great. But I have a character that has to juggle a Walk Cycle, Jump Cycle, Idle states, and possibly a few more actions... maybe picking things up?

So I've actually created a CustomAnimation class to help the SkinAnimation class out. It's simple, light, and is definitely doing the job for me. :)  I posted it to Away3D's dev group on Google, so if anyone out there is trying to do anything similar, feel free to grab it!

http://groups.google.com/group/away3d-dev/browse_thread/thread/b9bfab1467641610?hl=en

Friday, October 16, 2009

Papervision3D

I've been using Papervision3D to do all of the augmented reality stuff lately, but when it comes to using 3DS Max (or models generated in any other software for that matter) the Collada / DAE formats are... extremely tricky. Or buggy. Hopefully things I've learned will save people headaches in the future:

1) Do I use Collada or DAE?
Depends, really. Most examples I've seen around the net have a tendency to use the Collada class when importing static models. But if you have animation, then you'll definitely want to use the DAE class instead, as it has a lot more methods/properties for controlling animation.

I used to use one or the other, but lately I've switched to just using DAE all the time, whether there is animation or not. It's easier.

2) Texture Maps
There seem to be a few ways to handle mapping your textures onto your models. If your texture maps aren't too big, you can generally get away with using the BitmapFileMaterial class to load your image file externally. If you start running into problems where your model loads before your texture (you get a flat-shaded or wireframe model, then a few seconds later your material shows up), you'll need to use the FileLoadEvent to create a "preloader" for your material and hold off loading your model until the texture is ready.
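A bare-bones version of that texture "preloader" might look like this; I'm going from memory on the Papervision 2.x package paths, and the file path and loadModel() call are placeholders:

import org.papervision3d.materials.BitmapFileMaterial;
import org.papervision3d.events.FileLoadEvent;

private var hullMat:BitmapFileMaterial;

private function initMaterials():void {
  hullMat = new BitmapFileMaterial("textures/hull.jpg");
  hullMat.addEventListener(FileLoadEvent.LOAD_COMPLETE, onMaterialReady);
}

private function onMaterialReady(e:FileLoadEvent):void {
  hullMat.removeEventListener(FileLoadEvent.LOAD_COMPLETE, onMaterialReady);
  loadModel();  // only start loading the DAE once the texture is in
}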

When you start getting more and more models in the scene, though, trying to handle a bunch of preload events for textures and models gets annoying. For the purposes of what I've been doing, I've found it more efficient to just embed the materials in the Library, assign them a class name, then use BitmapAssetMaterial to reference the material, load it into a MaterialsList, and assign it to the DAE. Then, if you have a classic preloader for your swf, you can just include the library bitmaps in the stuff that gets preloaded.
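And the Library approach, very roughly; the 'RocketSkin' linkage class and "rocketMat" material name are made up for the example, and the material name just has to match what you named it in the Max Material Editor:

import org.papervision3d.materials.BitmapAssetMaterial;
import org.papervision3d.materials.utils.MaterialsList;
import org.papervision3d.objects.parsers.DAE;

private function initModel():void {
  // 'RocketSkin' is a Bitmap in the Library exported with that linkage name
  var skin:BitmapAssetMaterial = new BitmapAssetMaterial("RocketSkin");

  var materials:MaterialsList = new MaterialsList();
  materials.addMaterial(skin, "rocketMat");

  var model:DAE = new DAE();
  model.load("models/rocket.dae", materials);
  scene.addChild(model);
}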

3) Exporting DAE from 3DS Max
For the sake of ease, I've started using 3DS Max 9 to do all of my DAE exports. The ColladaNextGen exporter for it seems to be better than the ones for 2009 and 2010. That said, it's not perfect. But generally, if you keep these things in mind, you shouldn't have any problems getting your DAEs exported:
  • Name your material in the Material Editor. Nothing complicated, but something you can easily remember.
  • Initial coordinate keyframes. I've consistently run into issues where, for instance, I will animate something, get it into Flash, and the animation will be running at the 0,0 origin when in Max it was up in the air somewhere. Not sure why it happens; it might be an exporter issue. But I've found that if I turn on Auto Key and set keys for each axis, then it exports fine.
  • DAE Export options. You should only need two to four of them. Always check Normals and Triangulate. If you have animation, then also check 'Enable Export' and 'Sample Animation' for a set number of frames.
  • Manually tweak the .DAE file. Once you've exported the .dae, you'll need to open it in a text editor. Do a Ctrl+F and search for "_1". For some reason this gets appended to the end of all of your material names, so I go in and kill 'em. If you have animations in your file, then also search for "animation"; you should see a Library_Animations node and an Animation node under it. You need to add an id attribute to that animation tag, so just add in id="whatever" (see the snippet just after this list), then save and close.
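For that last animation tweak, the hand edit in the .dae amounts to something like this (the id value is arbitrary, and the node name casing may differ slightly depending on the exporter):

<library_animations>
  <animation id="whatever">
    <!-- sampled keyframe data from the exporter -->
  </animation>
</library_animations>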

If I come across something else that annoys me, I'll post. :)

Ares I-X NASA Exhibit


So yesterday I received an email saying that the AR piece I did for the MyExploration site was incorporated into a lobby exhibit at NASA-HQ in Washington D.C. They sent me a few pics to show me the results of the setup. Very cool!


Thursday, October 15, 2009

Augmented Reality - Tools


So let's talk Augmented Reality. I mentioned in my previous, introductory post that I was doing some creative flash work for NASA; and more specifically, did a 1 week rush to quickly adapt the Ares I-X project into a simple Melts-in-your-eyes-not-in-your-hand presentation of the rocket that is being launched. 

And while it's nothing "new" to the world of Flash-based Augmented Reality, it certainly was fun to do. And, surprisingly, very easy. But now what I want to do is to record my experience so that, as I progress onto more engaging productions, I can have something to refer back to.

FLARToolKit :: So, if you've done any research into Augmented Reality via the Flash Platform, you've likely come across a wealth of information on the FLARToolKit. And while I won't really indulge in a historical study, all that's really important to know about it is that it's the Flash adaptation of the ARToolKit, which performs the magic of using your computer's webcam to track specific marker patterns.

The FLARToolKit will be the base of any Flash Augmented Reality piece. At least those that aren't using some proprietary 3rd party software. Just using the libraries that come with the ToolKit, you can build any single-marker Augmented Reality piece you can think of. I only used the FLARToolKit and some self-built AS3 libraries to build the Ares I-X piece, as well as a few other more interactive prototypes that I'll be posting later.

FLARManager :: The FLARManager extends the functionality of the FLARToolKit, making marker management more easily controlled through xml docs. It also allows for multi-marker tracking, which opens up some pretty awesome possibilities. I started using the FLARManager on the project I am currently working on, which is going to be an Augmented Reality representation of the results of the LCROSS mission.

There's 1 immediate difference I have found between using just the base FLARToolKit vs the FLARManager, and that's performance. I went back, just for kicks, and switched out how the Ares I-X was loading, and got a very big FPS increase. Almost 4x, actually.

Marker Generator :: Tarotaro's Marker Generator has been what I use to get my marker patterns converted to .pat. There's also a few AIR apps that you can use to hold your marker up to the webcam and capture the pattern, but tarotaro's lets you just load your image files directly into the generator and manually adjust your resolution and pattern% settings; so the results are presumably better. If not, then it's certainly easier. ;) And it's web-based, so if you don't like downloading things it's perfect for you.

Wednesday, October 14, 2009

An Introduction

So, since I've started working at Media Fusion, I've had a great deal of giddy laughter to hide under my desk at the musing that I am actually getting paid to do what I do.

I've done Flash development for a few years, mostly in the E-Learning arena, with strict guidelines and adherence to clients who thought they knew what they wanted (15 revisions ago) and never quite grasped the fact that they were working with a collective group of designers and developers who, given the freedom, could very easily create interesting and engaging user experiences without all the frustration. ...But I digress; now I work for a multimedia company that has the reputation it needs with clients to get that creative freedom.

Case in point. The project/contract that I work on is for a very well-known organization, maybe you've heard of them? The National Aeronautics and Space Administration? ...No? ..err, NASA? Yeeeaaah, see? Thought so. So, by show of hands, who all wanted to be an astronaut when they were a kid? Yeah, me too. And guess what. I'm totally closer than evar!

So, what's the big deal, you ask? Aside from totally working for NASA (which, categorically, has to put me somewhere close to being a rocket scientist), I'm not here to just create simple websites: I'm here to EXPERIMENT! Oh, yes, my friends. Experimentation. Which, somewhat unfortunately, combined with my close proximity to rocket scientist, shifts me into the category of Mad Scientist. But I'm fine with that.

So what does a person who experiments with web technologies for NASA really do? Well, I've only been here for a few weeks, but so far there has been a pretty positive response to the Augmented Reality experiments I've been working on. I did a really quick one the first week I was here as kind of a proving-grounds sort of experiment. It's actually live on a NASA website now, called MyExploration, hidden behind the guise of 3DV. And in response to the latest LCROSS mission, where we blasted the moon, I'm actually doing a more interactive AR to let people hold the crater, Centaur rocket modules, and the LRO satellite in their hands and possibly see the dusty particles that were kicked up and show some of the delicious insides of the moon that we found.

So, there. My ranting introduction. And all that to say: 'Hi, my name is Michael. I'm a Flash Developer for Media Fusion/NASA. I play with web technologies. I build Augmented Reality. I wrestle with social networks. And I want to share my experiences with you.'

<3