Alice in Mirrorland – Silverlight 5 Beta and XNA

“In another moment Alice was through the glass, and had jumped lightly down into the Looking-glass room”

Silverlight 5 Beta was released into the wild at MIX 11 a couple of weeks ago. This is a big step for mirror land. Among many new features is the long anticipated 3D capability. Silverlight 5 took the XNA route to 3D instead of the WPF 3D XAML route. XNA is closer to the GPU with the time tested graphics rendering pipeline familiar to Direct3D/OpenGL developers, but not so familiar to XAML developers.

The older WPF 3D XAML aligns better with X3D, the ISO-sanctioned XML 3D graphics standard, while XNA aligns with the competing WebGL JavaScript wrapper for OpenGL. Eventually XML 3D representations also boil down to a rendering pipeline, but the core difference is that XNA is immediate mode while XML 3D is kind of stuck with retained mode. Although you pick up recursive control rendering with XML 3D, you lose out when it comes to moving through a scene in the usual avatar game sense.

From a Silverlight XAML perspective, mirror land is largely a static machine with infrequent events triggered by users. In between events, the machine is silent. XAML’s retained mode graphics lacks a sense of time’s flow. In contrast, enter XNA through Alice’s DrawingSurface, and the machine whirs on and on. Users occasionally throw events into the machine and off it goes in a new direction, but there is no stopping. Frames are clicking by apace.

Thus time enters mirror land in frames per second. Admittedly this is crude relative to our world. Time is measured out in the proximate range of 1/20th to 1/60th of a second per frame. Nothing like the cusp of the moment here, and certainly no need for the nuance of Dedekind’s cut. Time may be chunky in mirror land, but with immediate mode XNA it does move, clicking through the present moment one frame at a time.

Once Silverlight 5 is released there will be a continuous XNA API across Microsoft’s entire spectrum: Windows 7 desktops, Windows 7 phones, XBox game consoles, and now the browser. The Silverlight 5 and WP7 implementations are a subset of the full XNA game framework available to desktop and XBox developers. Both SL5 and WP7 will soon have merged Silverlight XNA capabilities. For symmetry’s sake the XBox should have Silverlight too, as apparently announced here. A web-browsing XBox TV console would be nice.

WP7 developers will need to wait until the future WP7 Mango release before merging XNA and Silverlight into a single app. It’s currently an either/or proposition for the mobile branch of XNA/SL.

At any rate, with SL5 Beta, Silverlight and 3D XNA now coexist. The border lies at the <DrawingSurface> element:

<DrawingSurface Draw="OnDraw" SizeChanged="DrawingSurface_SizeChanged" />

North of the border lies XML and recursive hierarchies, a largely language world populated with “semantics” and “ontologies.” South of the border lies a lush XNA jungle with drums throbbing in the night. Yes, there are tropical white sands by an azure sea, but the heart of darkness presses in on the mind.

XAML touches the academic world. XNA intersects Hollywood. It strikes me as one of those outmoded Freudian landscapes so popular in the 50’s, the raw power of XNA boiling beneath XAML’s super-ego. I might also note there are bugs in paradise, but after all this is beta.

Merging these two worlds causes a bit of schizophrenia. Above is Silverlight XAML with the beauty of recursive hierarchies and below is all XNA with its rendering pipeline plumbing. Alice steps into the DrawingSurface confronting a very different world indeed. No more recursive controls beyond this border. Halt! Only immediate mode allowed. The learning curve south of the border is not insignificant, but beauty awaits.

XNA involves tessellated models, rendering pipelines, vertex shaders, pixel shaders, and a high level shading language, HLSL, accompanied by the usual linear algebra suspects. Anytime you run across register references you know this is getting closer to hardware.

…a cry that was no more than a breath: “The horror! The horror!”

sampler2D CloudSampler : register(s0);
static const float3 AmbientColor = float3(0.5f, 0.75f, 1.0f);
static const float3 LampColor = float3(1.0f, 1.0f, 1.0f);
static const float AmbientIntensity = 0.1f;
static const float DiffuseIntensity = 1.2f;
static const float SpecularIntensity = 0.05f;
static const float SpecularPower = 10.0f;
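For the curious, those constants feed a conventional ambient + diffuse + specular lighting computation. Here is a rough per-pixel sketch of that math in Python rather than HLSL (the vector inputs and the exact combination are my own assumptions, not the sample's actual shader):

```python
# Phong-style lighting using the constants from the HLSL snippet above.
# Vectors are unit-length 3-tuples; this mirrors what a pixel shader does per fragment.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(normal, light_dir, view_dir, texel):
    ambient_color = (0.5, 0.75, 1.0)   # AmbientColor
    ambient_intensity = 0.1            # AmbientIntensity
    diffuse_intensity = 1.2            # DiffuseIntensity
    specular_intensity = 0.05          # SpecularIntensity
    specular_power = 10.0              # SpecularPower

    nl = dot(normal, light_dir)
    diffuse = max(nl, 0.0) * diffuse_intensity
    # reflection of the light vector about the normal: r = 2(n.l)n - l
    reflect = tuple(2 * nl * n - l for n, l in zip(normal, light_dir))
    specular = max(dot(reflect, view_dir), 0.0) ** specular_power * specular_intensity

    # modulate the texture sample and clamp each channel to 1.0
    return tuple(min(1.0, t * (a * ambient_intensity + diffuse) + specular)
                 for t, a in zip(texel, ambient_color))
```

A head-on light saturates the channels; a grazing light leaves only the ambient term.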

Here is an overview of the pipeline from Aaron Oneal’s MIX talk:

So now that we have XNA it’s time to take a spin. The best way to get started is to borrow from the experts. Aaron Oneal has been very kind to post some nice samples including a game engine called Babylon written by David Catuhe.

The Silverlight 5 beta version of Babylon uses Silverlight to set some options and the SL5 DrawingSurface to host scenes. The mouse and arrow keys let the camera/avatar move through the virtual environment, colliding with walls, etc. For those wishing to get an idea of what XNA is all about, this webcafe model in Babylon is a good start.

The models are apparently produced in Autodesk 3DS and are probably difficult to build. Perhaps 3D point clouds will someday help, but you can see the potential for navigable modeling of high-risk, complex facilities. This model has over 60,000 faces, yet I can still walk through and explore the environment without any difficulty, and all I’m using is an older NVidia motherboard GPU.

Apparently, SL5 XNA can make a compelling interactive museum, refinery, nuclear facility, or WalMart browser. This is not a stitched pano or photosynth interior, but a full blown 3D model.

You’ve gotta love that late afternoon shadow effect. Notice the camera is evidently held by a vampire. I checked carefully and it casts no shadow!

But what about mapping?

From a mapping perspective the fun begins with this solar wind sample. It features all the necessary models and shaders for the earth, complete with terrain, multi-altitude atmospheric clouds, and lighting. It also has examples of basic mouse and arrow key camera control.

Solar Wind Globe
Fig 4 – Solar Wind SL5 XNA sample

This is my starting point. Solar Wind illustrates generating a tessellated sphere model with applied textures for various layers. It even illustrates the use of a normal (bump) map for 3D effects on the surface without needing a tessellated surface terrain model. Especially interesting is the use of bump maps to show a population density image as 3D.
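For illustration, here is roughly how such a lat/long tessellated sphere with texture coordinates can be generated. This is a Python sketch of the general technique, not the Solar Wind sample's actual C# code:

```python
import math

def tessellate_sphere(stacks, slices, radius=1.0):
    """Generate positions and texture coordinates for a lat/long sphere,
    the same kind of tessellated model the Solar Wind sample builds.
    Rows run pole to pole; columns wrap around in longitude."""
    positions, uvs = [], []
    for i in range(stacks + 1):
        v = i / stacks                 # 0 at the north pole, 1 at the south pole
        phi = v * math.pi              # angle down from the pole
        for j in range(slices + 1):
            u = j / slices             # wraps 0..1 in longitude
            theta = u * 2.0 * math.pi
            x = radius * math.sin(phi) * math.cos(theta)
            y = radius * math.cos(phi)
            z = radius * math.sin(phi) * math.sin(theta)
            positions.append((x, y, z))
            uvs.append((u, v))         # maps an equirectangular texture directly
    return positions, uvs
```

The (u, v) pairs drape an equirectangular earth image with no reprojection, which is why WMS GetMap output in plate carrée works as a texture here.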

My simple project is to extend this solar wind sample slightly by adding layers from NASA Neo. NASA Neo conveniently publishes 45 categories and 129 layers of a variety of global data collected on a regular basis. The first task is to read the Neo GetCapabilities XML and produce the TreeView control to manage such a wealth of data. The TreeView control comes from the Silverlight Toolkit project. Populating this is a matter of reading through the Layer elements of the returned XML and adding layers to a collection which is then bound to the tree view’s ItemsSource property.

    private void CreateCapabilities111(XDocument document)
    {
        // WMS 1.1.1
        // Path to the GetMap OnlineResource (abbreviated in the original post)
        XElement GetMap = document.Element("WMT_MS_Capabilities")
            .Element("Capability").Element("Request").Element("GetMap")
            .Element("DCPType").Element("HTTP").Element("Get")
            .Element("OnlineResource");
        XNamespace xlink = "http://www.w3.org/1999/xlink";
        getMapUrl = GetMap.Attribute(xlink + "href").Value;
        if (getMapUrl.IndexOf("?") != -1) getMapUrl =
                  getMapUrl.Substring(0, getMapUrl.IndexOf("?"));

        ObservableCollection<WMSLayer> layers = new ObservableCollection<WMSLayer>();
        foreach (XElement element in document.Element("WMT_MS_Capabilities")
            .Element("Capability").Element("Layer").Elements("Layer"))
        {
            if (element.Descendants("Layer").Count() > 0)
            {
                WMSLayer lyr0 = new WMSLayer();
                lyr0.Title = (string)element.Element("Title");
                lyr0.Name = "header";
                foreach (XElement element1 in element.Descendants("Layer"))
                {
                    WMSLayer lyr1 = new WMSLayer();
                    lyr1.Title = (string)element1.Element("Title");
                    lyr1.Name = (string)element1.Element("Name");
                    lyr0.Layers.Add(lyr1); // assumed child collection bound by the TreeView template
                }
                layers.Add(lyr0);
            }
        }
        LayerTree.ItemsSource = layers;
    }

Once the tree is populated, OnSelectedItemChanged events provide the trigger for a GetMap request to NASA Neo returning a new png image. I wrote a proxy WCF service to grab the image and then write it to png even if the source is jpeg. It’s nice to have an alpha channel for some types of visualization.
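The point of re-encoding to png is the alpha channel. The idea reduces to something like this Python sketch (the nodata color choice is my assumption, not necessarily what my WCF proxy keys on):

```python
def rgb_to_rgba(pixels, nodata=(0, 0, 0)):
    """Add an alpha channel to RGB pixel triples, making a chosen
    'nodata' color fully transparent -- the kind of information a
    png can carry but a jpeg cannot."""
    rgba = []
    for (r, g, b) in pixels:
        alpha = 0 if (r, g, b) == nodata else 255
        rgba.append((r, g, b, alpha))
    return rgba
```

With transparency in place, a NASA Neo overlay can be composited over the base earth texture instead of blanking it out.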

The difficulty for an XNA novice like myself is understanding the HLSL files and coming to terms with the rendering pipeline. Changing the source image for a Texture2D shader requires dropping the whole model, changing the image source, and finally reloading the scene model and pipeline once again. It sounds like an expensive operation, but surprisingly this re-instantiation seems to take less time than receiving the GetMap response from the WMS service. In WPF it was always interesting to put a Video element over the scene model, but I doubt that will work here in XNA.

The result is often a beautiful rendering of the earth displaying real satellite data at a global level.

Some project extensions:

  • I need to revisit the lighting, which resides in the cloud shader HLSL. Since the original cloud model is not real cloud coverage, it is usually not an asset over NASA Neo data. I will need to replace the cloud pixel image with something benign to take advantage of the proper lighting setup for daytime.
  • Next on the list is exploring collision. WPF 3D provided a convenient RayMeshGeometry3DHitTestResult. In XNA it seems getting a point on the earth to trigger a location event requires some manner of collision or Ray.Intersects(Plane). If that can be worked out the logical next step is grabbing DEM data from USGS for generating ground level terrain models.
  • There is a lot of public LiDAR data out there as well. Thanks to companies like QCoherent, some of it is available as WMS/WFS. So next on the agenda is moving 3D LiDAR online.
  • The bump map approach to displaying variable geographic density as relief is a useful concept. There ought to be lots of global epidemiology data that can be transformed to a color density map for display as a relief bump map.
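For the collision item above, a globe pick does not necessarily need a mesh hit test; an analytic ray/sphere intersection is enough to turn a pick ray into latitude and longitude. A Python sketch of that fallback (assuming a unit sphere centered at the origin; the XNA version would use Ray.Intersects):

```python
import math

def ray_sphere_lat_lon(origin, direction, radius=1.0):
    """Intersect a pick ray with a sphere at the origin and return
    (lat, lon) in degrees, or None if the ray misses.
    direction need not be normalized."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    # quadratic for |origin + t*direction|^2 = radius^2
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                        # ray misses the globe
    t = (-b - math.sqrt(disc)) / (2 * a)   # nearer of the two intersections
    if t < 0:
        return None                        # globe is behind the camera
    x, y, z = ox + t * dx, oy + t * dy, oz + t * dz
    lat = math.degrees(math.asin(y / radius))
    lon = math.degrees(math.atan2(z, x))
    return lat, lon
```

The resulting lat/lon is exactly what a location event needs before reaching out to a DEM or WMS service for that point.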

Lots of ideas, little time or money, but Silverlight 5 will make possible a lot of very interesting web apps.

Helpful links:
Silverlight 5 Beta:

Silverlight 5 features:
“Silverlight 5 now has built-in XNA 3D graphics API”


NASA Neo: http://localhost/NASA-Neo/publish.htm

Babylon Scenes: Michel Rousseau, courtesy of

Babylon Engine: David Catuhe / Microsoft France / DPE


“I am real!” said Alice, and began to cry.

“You won’t make yourself a bit realler by crying,” Tweedledee remarked: “there’s nothing to cry about.”

“If I wasn’t real,” Alice said – half-laughing through her tears, it all seemed so ridiculous – “I shouldn’t be able to cry.”

WMS 3D with Silverlight and WPF

Silverlight view of DRCOG WMS
Fig 1 – Silverlight view of DRCOG WMS Layers

WPF 3D view of DRCOG WMS Layers
Fig 2 – WPF 3D view of DRCOG WMS Layers

Silverlight 2.0 MapControl CTP makes it easy to create a WMS viewer. However, Silverlight exposes only a 2D subset of the full WPF xaml specification. In my experiments so far with LiDAR I can see the various TINs, contours, and point clouds as a 2D view, but what I would really like to do is view the data in 3D. Recalling some work I did a few years ago using WPF GeometryModel3D elements, I decided to look at a WPF loose xaml approach. This is not restricted to use with LiDAR WMS services. Here in Colorado relief is a major part of life, and it would be helpful to visualize any number of WMS services using a 3D terrain model. Google and VE (Bing) already do this for us using their proprietary DEMs, but eventually I would like to use different DEM resolutions, anticipating Open Terrain sources or in-house LiDAR. DRCOG’s WMS service is conveniently available and lies right on the edge of the Rocky Mountain uplift, so it is a nice WMS service for highlighting relief visualization.

First, navigating to the WPF service from Silverlight is a bit different from html. The familiar anchor element is gone. One Silverlight approach is to add a HyperlinkButton: <HyperlinkButton Content="3D" TargetName="_blank" NavigateUri=""/> and change the NavigateUri as needed. Another approach is to use HtmlPage.Window.Navigate() or HtmlPage.Window.NavigateToBookmark(), allowing a click handler to set up the navigate:

        private void ClickTest_Button(object sender, RoutedEventArgs e)
        {
            Uri testwpf = new Uri(""); // target url left blank in the original
            HtmlPage.Window.Navigate(testwpf, "_blank");
        }

Using one of these Navigate methods the Uri can be pointed at the service used to create the 3D loose xaml.

The next item is getting 3D terrain to populate the xaml mesh. Eventually LiDAR Server will provide 3D surface data, but in the meantime JPL has a nice trick for their srtm and us_ned WMS service. One of the selectable styles is “feet_short_int”. This style will produce a grayscale image with elevation coded into the int16 pixels. Using these elevations I can then create the xaml mesh along with some primitive manipulation tools.
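Decoding such a response means reading two bytes per pixel as a signed 16-bit integer. A Python sketch of the idea (byte order is my assumption and would need checking against the actual response):

```python
import struct

def decode_short_int(raw, width, height):
    """Decode a buffer of little-endian signed 16-bit elevations
    (one value per pixel, row-major) into a 2D list of elevations
    in feet, as returned by the feet_short_int style."""
    count = width * height
    values = struct.unpack("<%dh" % count, raw[:count * 2])
    return [list(values[row * width:(row + 1) * width]) for row in range(height)]
```

Reading the raw bytes directly sidesteps any image-library pixel-format guessing, which matters for the Bitmap problem described below.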

Ref: JPL GetCapabilities

    <Layer queryable="0">
      <Title>United States elevation, 30m</Title>
      <Abstract>
        Continental United States elevation, produced from the USGS National Elevation.
        The default style is scaled to 8 bit from the orginal floating point data.
      </Abstract>
      <LatLonBoundingBox minx="-125" miny="24" maxx="-66" maxy="50"/>
      <Style><Name>default</Name>
        <Title>Default Elevation</Title></Style>
      <Style><Name>short_int</Name>
        <Title>short int signed elevation values when format is image/png or tiff</Title></Style>
      <Style><Name>feet_short_int</Name>
        <Title>short int elevation values in feet when format is image/png or image/tiff</Title></Style>
      <Style><Name>real</Name>
        <Title>DEM real numbers, in floating point format, meters, when used with image/tiff</Title></Style>
      <Style><Name>feet_real</Name>
        <Title>DEM in real numbers, in floating point format, feet, when used with image/tiff</Title></Style>
      <ScaleHint min="20" max="10000"/>
      <MinScaleDenominator>24000</MinScaleDenominator>
    </Layer>

JPL us_ned 30m
Fig 3 – JPL us_ned 30m can be returned as feet_short_int

My first attempt used a .ashx service handler. I could then obtain the Bitmap like this:

     WebClient client = new WebClient();
     byte[] bytes = client.DownloadData(new Uri(terrainUrl));
     Bitmap bmp = (Bitmap)Bitmap.FromStream(new MemoryStream(bytes), true);

Unfortunately the returned Bitmap has the wrong pixel format. Instead of PixelFormat.Format16bppGrayScale, the Bitmap comes back as PixelFormat.Format32bppArgb, which quantizes the elevations. Instead of an elevation at each pixel, GetPixel returns an RGB triple like 2A 2A 2A, so e = p.B * 256^2 + p.G * 256^1 + p.R * 256^0; is incorrect. The value appears to be quantized on the R channel, since for a gray pixel all three RGB values must be equal. Checking the image returned by JPL with GdalInfo shows two bands of UInt16 values. Since Bitmap reads a 4-byte value (two UInt16s) it apparently defaults to Format32bppArgb, which makes it useless for actual elevations.

Driver: PNG/Portable Network Graphics
Files: C:\Users\rkgeorge\Pictures\wms.png
Size is 500, 351
Coordinate System is `'
Image Structure Metadata:
Corner Coordinates:
Upper Left  (    0.0,    0.0)
Lower Left  (    0.0,  351.0)
Upper Right (  500.0,    0.0)
Lower Right (  500.0,  351.0)
Center      (  250.0,  175.5)
Band 1 Block=500x1 Type=UInt16, ColorInterp=Gray
Band 2 Block=500x1 Type=UInt16, ColorInterp=Alpha

color quantized example
Fig 4 – Example of a color quantized elevation from forced 32bppArgb

There is probably some type of pointer approach to pass through the raw bytes correctly. However, services are a nice way to decouple a view from the backend and it is just as easy to use a Java servlet which can take advantage of JAI, so in this case I switched to a Java servlet:

URL u = new URL(terrainUrl);
RenderedOp image = JAI.create("url", u);
SampleModel sm = image.getSampleModel();
int nbands = sm.getNumBands();
Raster inputRaster = image.getData();
int[] pixels = new int[nbands * width * height];
inputRaster.getPixels(0, 0, width, height, pixels);

The resulting pixels array holds int16 values, so the correct 30m us_ned elevations are available from the JPL WMS service. Using the resulting elevations I can fill in the Positions attribute of my WPF MeshGeometry3D element. Since this is a regular grid it is also easy to add a TriangleIndices attribute and a TextureCoordinates attribute for the draped image. The final loose xaml will eventually show up in the browser like this:
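Generating TriangleIndices for a regular elevation grid is mechanical: two triangles per grid cell. A Python sketch of the indexing (the winding order is an assumption and may need flipping for a given camera setup):

```python
def grid_triangle_indices(width, height):
    """TriangleIndices for a width x height grid of vertices
    (row-major): two triangles per cell, flat index list, three
    indices per triangle, as MeshGeometry3D expects."""
    indices = []
    for row in range(height - 1):
        for col in range(width - 1):
            i = row * width + col
            indices += [i, i + width, i + 1]              # first triangle of the cell
            indices += [i + 1, i + width, i + width + 1]  # second triangle of the cell
    return indices
```

A 500 x 351 elevation grid like the JPL response above yields (499 x 350 x 2) triangles, which is where the 15Mb loose xaml files mentioned later come from.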

WPF view of DRCOG layers draped on JPL terrain
Fig 5 - WPF view of DRCOG layers draped on JPL terrain

Because I am simplifying life by using loose xaml there is no code behind for control of the views. However, we can use binding to attach various control values to camera positions and geometry transforms that allow a user to move around the terrain even without a full blown code behind xbap. For example:

    <!-- dimw is concatenated into the loose xaml by the service -->
    <PerspectiveCamera Position="0 0 dimw"
        FieldOfView="{Binding ElementName=zoomlensslider, Path=Value}">
        <PerspectiveCamera.Transform>
            <TranslateTransform3D
                OffsetX="{Binding ElementName=hscroll, Path=Value}"
                OffsetY="{Binding ElementName=vscroll, Path=Value}" />
        </PerspectiveCamera.Transform>
    </PerspectiveCamera>

In this case the camera FieldOfView is bound to a slider value and the camera position transform offsetx and offsety are bound to scrollbar values. Binding values to controls in the same xaml file is a powerful concept that provides dynamic capability without any code behind.

Code behind with an xbap would actually make more useful navigation tools possible, but this is a good start. The controls allow the user to zoom the camera lens, change camera x,y position, tilt and rotate the viewport, as well as exaggerate the relief scale. Changing to an xbap that handles the terrain image translation with code behind would allow the use of the mouse wheel and click hits for pulling up item data fields similar to the Silverlight WMS viewer.

As great as WPF is, it would still be a difficult problem to turn it into a tiled interface like the Silverlight MapControl. I imagine the good folks at Microsoft are working on it. Basically the mesh elements would be tiled into a pyramid just like imagery tiles. These tiles would then be loaded as needed. It is easy to imagine an algorithm that treats the backing pyramid as a 3D view curve: distance from the camera translates to tiles at higher pyramid levels of lower resolution. A view field would pull tiles from the mesh pyramid using a distance curve, so that lower resolution fills in the further ranges and higher resolution the closer ranges. Some kind of 3D tiling will eventually be available, but it is beyond the scope of my experiment.
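One simple form of that distance curve is a log2 mapping from camera distance to pyramid level, sketched here in Python (the constants are placeholders, not measurements):

```python
import math

def pyramid_level(distance, base_distance=100.0, max_level=10):
    """Map camera distance to a tile pyramid level: tiles roughly
    double in ground extent at each level, so each doubling of the
    camera distance steps one level toward coarser tiles."""
    if distance <= base_distance:
        return 0                      # full resolution up close
    level = int(math.log2(distance / base_distance))
    return min(level, max_level)      # clamp at the coarsest level
```

A view frustum would then request tiles at pyramid_level(d) for each visible tile's distance d, which is exactly the imagery-pyramid trick applied to mesh tiles.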

Some Problems

1) JPL WMS has some difficulty keeping up with GetMap requests, as noted on their website, which means GetMap requests are unreliable. Some days there is no issue; on others the load is heavier and GetMap requests rarely succeed. Here is their explanation:
"Due to server overloading, client applications are strongly advised to use the existing tile datasets wherever possible, as described in the Tiled WMS or Google Earth KML support

Frequent and repetitive requests for non-cached, small WMS tiles require an excessive amount of server resources and will be blocked in order to preserve server functionality. The OnEarth server is an experimental technology demonstrator and does not have enough resources to support these requests.
An alternative solution already exists in the form of tiled WMS
While sets of tiles with different sizes and alignment can be added when needed, for large datasets the duplication of storage, processing and data management resources is prohibitive."

2) Silverlight works pretty well cross browser at least on my Windows development systems. My tests with Mozilla Firefox managed Silverlight and WPF loose xaml just fine. However, Google Chrome has problems with WPF loose xaml reporting this error:
"This application has failed to start because xpcom.dll was not found"

A quick Google search came up with this:
Chromium does not support XPCOM. It is a Mozilla specific API.

More details here:

3) Another baffling issue arose with the WPF loose xaml. It turns out that each click of the LinkButton, or any access of the url for the WPF service, caused the service to be accessed three times in succession. This was true of both the .ashx and the servlet versions. The User Agent was not distinguishable, so I wasn't able to short-circuit the first two GET accesses. I only found this true for context.Response.ContentType = "application/xaml+xml"; not text/html etc. So far I haven't seen anything that would solve my problem. It is too bad, since the computation for 15Mb WPF files is significant and three accesses for each result is a bit unnecessary. It may be something quite simple in the response header; I will keep an eye out. There was a similar problem with IE6 years ago, but in that case the service could watch for a Request.UserAgent notation and skip the unnecessary duplicate calls.

4) WPF is verbose and 15Mb terrain views take a bit of patience especially considering the above problem. I need to add an auto xaml gzip to my Apache Tomcat. I'm sure there is something similar in IIS if I get around to a pointer solution to the grayscale.


Silverlight is a natural for interface development and integrates well with WPF ClickOnce for the 3D views that are still not available in Silverlight. My hope is that someday there will be a Silverlight version capable of 3D scene graph development.

The LiDAR WMS Server roadmap also includes a style for grayscale elevations at some point. That would enable viewing much more detailed high resolution DEM directly from LAS files. I believe this will be a useful advance for use of online LiDAR viewing.

Also a thanks to DRCOG. I'm always glad to see agencies willing to support the wider GIS community by publishing data as OGC services and then further supporting it with adequate server performance. I just wish JPL had a bit more budget to do the same.

Silverlight view of LiDAR Server TIN
Fig 6 - Silverlight view of LiDAR Server TIN

WPF 3D view of LiDAR Server TIN
Fig 7 - WPF 3D view of LiDAR Server TIN

Next Gen UI for GIS

Interesting to think that GIS UI’s are in for some major changes. Next gen GIS users will bring UI expectations from the OGL, DirectX, WPF, X3D, XNA world. It seems the biggest UI transform is just navigating a camera (avatar) in 3D rather than 2.5D.
See VerySpatial’s Lost in the Virtual Fog

GIS UIs currently follow the usual 2.5D OS interactions with primarily Mouse event handling. There are 3D analogies to current 2.5D interaction.

MouseEnter, MouseLeave – rollover
Passive select – 3D intersect media changes (popups, text, film, photo, etc) would correspond to current 2.5D rollover events

MouseClick, Double Click
Active select – corresponding 3D click events are just as necessary in a 3D virtual world – perhaps a virtual laser pointer not much different than the shoot’em game scenario. The virtual point device has a straight line of sight intersect in the scene graph.
WPF 3D hit testing

    RayHitTestParameters hitParams =
        new RayHitTestParameters(
            new Point3D(0, 0, 0),
            new Vector3D(1, 0, 0));
    VisualTreeHelper.HitTest(visual3d, null, ResultCallback, hitParams);

MouseMove, MouseWheel – Pan, Drag, and zoom
Pan and Zoom are analogous to camera control in 3D. Navigation in 3D is a bit more complex, involving not just the 3D camera position but also a ‘look direction’ vector, an ‘up direction’ vector, and a ‘field of view’: WPF 3D Tutorial

	  <!-- position and direction values here are illustrative -->
	  <PerspectiveCamera Position="0,0,10" LookDirection="0,0,-1"
	      UpDirection="0,1,0" FieldOfView="70" />

Manipulating a camera in 3D is cumbersome using typical Mouse event handling. 3D navigation replacement of pan and zoom is better suited to a new device altogether. iPhone/Wii-like inertial sensor navigation can’t be far away. A 3D mouse is just a mouse freed from its 2D pad, a stripped-down inertial sensor device or a more sensitive Wii wand. Then there are direction and tilt events, as well as move events, for a more natural camera view control.

Beyond 2.5D analogies
There seem to be some innovative gesture interactions arriving with thanks to the iPhone.

Touch Pack for Windows 7

Extending hand gestures to 3D is further down the road but obviously one step further in the immersive direction. GIS is headed down the virtual reality path.

Stimulus LiDAR

3D VE navigation
Fig 1 – New VE 3D Building – Walk Though Walls

I’ve been thinking about LiDAR recently. I was wondering when it would become a national resource and then this pops up: GSA IDIQ RFP Hits the Street
Services for 3D Laser Scanning and Building Information Modeling (BIM)
Nationwide Laser Scanning Services
Here is the GSA view of BIM: Building Information Modeling (BIM)

And here is the core of the RFP:

1) Exterior & Interior 3D-Laser Scanning
2) Transform 3D laser scanning point cloud data to BIM models
3) Professional BIM Modeling Services based on 3D laser scanning data
    including, but not limited to:
   a. Architectural
   b. Structural
   c. MEP
   d. Civil
4) 3D Laser Scanning-based analysis & applications including,
     but not limited to:
   a. Visualization
   b. Clash Detection
   c. Constructability analysis
5) Integration of multiple 3D Laser Scanning Data, BIMs & analyses
6) Integration between multiple 3D Laser Scanning Data and BIM softwares
7) 3D Laser Scanning Implementation Support, including:
   a. Development of 3D Laser Scanning assessment and project
       implementation plans
   b. Selection of VDC software
   c. Support to project design and construction firms
   d. Review of BIM models & analysis completed by other
       service providers
8) Development of Best Practice Guidance, Case Studies, Procedures
     and Training Manuals
9) Development of long- & short-term 3D Laser Scanning implementation
10) Development of benchmarking and measurement standards and programs
11) 3D Laser Scanning and Modeling Training, including:
   a. Software specific training
   b. Best Practices

This is aimed at creating a national building inventory rather than a terrain model, but still very interesting. The interior/exterior ‘as built’ aspect has some novelty to it. Again virtual and real worlds appear to be on a collision trajectory and may soon overlap in many interesting ways. How to make use of a vast BIM modeling infrastructure is an interesting question. The evolution of GE and VE moves inexorably toward a browser virtual mirror of the real world. Obviously it is useful to first responders such as fire and swat teams, but there may be some additional ramifications. Once this data is available will it be public? If so how could it be adapted to internet use?

Until recently 3D modeling in browsers was fairly limited. Google and Microsoft have both recently provided useful api/sdk libraries embeddable inside a browser. The terrain model in GE and VE is vast but still relatively coarse and buildings were merely box facades in most cases. However, Johannes Kebek’s blog points to a recent VE update which re-enabled building model interiors, allowing cameras to float through interior space. Following a link from Kebek’s blog to Virtual Earth API Release Information, April 2009 reveals these little nuggets:

  • Digital Elevation Model data – The ability to supply custom DEM data, and have it automatically stitched into existing data.
  • Terrain Scaling – This works now.
  • Building Culling Value – Allows control of how many buildings are displayed, based on distance and size of the buildings.
  • Turn on/off street level navigation – Can turn off some of the special effects that occur when right next to the ground.

Both Google and Microsoft are furnishing modeling tools tied to their versions of virtual online worlds. Virtual Earth 3dvia technology preview appears to be Microsoft’s answer to Google SketchUp.
The race is on and shortly both will be views into a virtual world evolving along the lines of the gaming markets. But, outside of the big two, GE and VE, is there much hope of an Open Virtual World, OVM? Supposing this BIM data is available to the public is there likely to be an Open Street Map equivalent?

Fortunately GE and VE equivalent tools are available and evolving in the same direction. X3D is an interesting open standard scene graph schema awaiting better implementation. WPF is a Microsoft standard with some scene building capability which appears to be on track into Silverlight … eventually. NASA World Wind Java API lets developers build applet views and more complex Java Web Start applications for 3D visualization. Lots of demos here: World Wind Demos. It may be overkill for BIM, but it is an amazing modeling tool and all open source.

LiDAR Terrain

Certainly BIM models will find their way into browsers, but there needs to be a corresponding evolution of terrain modeling. BIM modeling is in the sub foot resolution while terrain is still in the multimeter resolution. I am hopeful that there will also be a national resource of LiDAR terrain data in the sub meter resolution, but this announcement has no indication of that possibility.

I’m grappling with how to make use of the higher resolution in a web mapping app. GE doesn’t work well since you are only draping over their coarse terrain. Silverlight has no 3D yet. WPF could work for small areas like architectural site renderings, but it requires an xbap download similar to Java Web Start. X3D is interesting, but has no real browser presence. My ideal would be something like an X3D GeoElevationGrid LOD pyramid, which will not be available to browsers for some years yet. The new VE sdk release with its ability to add custom DEM looks very helpful. Of course 2D contours from LiDAR are a practical use in a web map environment until 3D makes more progress in the browser.

Isenburg’s work on streaming meshes is a fascinating approach to reasonable time triangulation and contouring of LiDAR points. Here is an interesting kml of Mt St Helen’s contours built from LAS using Isenburg’s streaming methods outlined in this paper: Streaming Computation of Delaunay Triangulations.

Mt St Helen's Contours
Fig 2 – Mt St Helen’s Contours generated from streaming mesh algorithm

A national scale resource coming out of the “Stimulus” package for the US will obviously be absorbed into GE and VE terrain pyramids. But neither offers download access to the backing data, which is what the engineering world currently needs. What would be nice is a national coverage at some submeter resolution with the online tools to build selective areas with choice of projection, contour interval, tin, or raw point data. Besides, Architectural Engineering documents carry a legal burden that probably excludes them from online connectivity.

A bit of calculation (storage only) for a national DEM resource :
1m pixel resolution
AWS S3: $0.150 per GB for the first 50 TB/month of storage = $150/TB/month
US = 9,161,923,000,000 sq m @ 4 bytes per pixel => 36.648 terabytes = $5,497.20/month

0.5m pixel resolution
$0.140 per GB for the next 50 TB/month of storage = $140/TB/month
US = 146.592 terabytes = $20,522/month
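Those figures are easy to reproduce; a quick Python check of the arithmetic (decimal terabytes, 1 TB = 1e12 bytes, as used above):

```python
def storage_cost(area_sq_m, pixel_m, dollars_per_tb, bytes_per_pixel=4.0):
    """Monthly S3 storage estimate for a national DEM:
    pixel count = area / pixel area, decimal terabytes."""
    pixels = area_sq_m / (pixel_m * pixel_m)
    terabytes = pixels * bytes_per_pixel / 1e12
    return terabytes, terabytes * dollars_per_tb

us_area = 9_161_923_000_000  # sq m, the figure used above

tb_1m, cost_1m = storage_cost(us_area, 1.0, 150.0)
tb_half, cost_half = storage_cost(us_area, 0.5, 140.0)
# roughly 36.65 TB -> about $5,497/month at 1m
# roughly 146.59 TB -> about $20,523/month at 0.5m
#   (the 146.592 above comes from rounding the 1m figure first)
```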

Not small figures for an Open Source community resource. Perhaps OVM wouldn’t have to actually host the data at community expense, if the government is kind enough to provide WCS exposure, OVM would only need to be another node in the process chain.

Further into the future, online virtual models of the real world appear helpful for robotic navigation. Having a mirrored model available short circuits the need to build a real time model in each autonomous robot. At some point a simple wireless connection provides all the information required for robot navigation in the real world with consequent simplification. This of course adds an unnerving twist to some of the old philosophical questions regarding perception. Does robotic perception based on a virtual model but disconnected from “the real” have something to say to the human experience?

X3D and Virtual Scene Graphs

Fig 1 – XJ3D Viewer showing an x3d earth model.

Virtual Worlds and Scene Graphs

Well I confess, I’m no gamer! I’ve never handled a Wii or blown away monsters on a PS3. It doesn’t hold my interest that much. Guitar Hero makes me feel slightly nauseous. (When will they introduce Symphony Conductor for the Wii?) I suppose I’m an anachronism of the old paperback book era. My kids groan, but I can still get excited about the E.E. Smith Lensman series. However, I’m a bit chagrined to admit that I was oblivious to X3D until this past week. It’s just that gaming and GIS have not overlapped in my world yet.

X3D has been around quite a while. Some of the specs date back as far as 2003, and then there is the VRML97 ancestry, so this is not new stuff. It reminds me a bit of the early SVG days, circa 2000. The novelty of event driven graphics in the browser was overpowering then, but it wasn’t until almost ten years later that Flash and Silverlight actually made xml graphics generally available to web app interfaces. Now of course, SVG even has native support in most of the non-Microsoft browsers. I can imagine in the year 2018 that 3D graphics XML will be ubiquitous, but it will probably be some derivative of ChromeFlash, XNA, or WPF, while X3D may still languish in the outer reaches of civilization.

X3D has a similar feel in that it’s a powerful, forward-looking specification with beautiful XML scene graph capability, but with only a few implementations, far from the mainstream. I won’t say I’m a sucker for XML, but I really, really like to look at a file and understand what it says. Nothing beats a good old text editor for playing with an XML graphics file. Sorry, my text bias is showing again.

The Java3D API is a parallel enabling technology. The scene graph approach implemented in Java3D’s api enabled implementations of free X3D viewers. There are several viewers, but the one that actually worked on the geospatial component of X3D was Xj3D.

What is X3D?

Directly from the x3d site:

“X3D is a scalable and open software standard for defining and communicating real-time, interactive 3D content for visual effects and behavioral modeling. It can be used across hardware devices and in a broad range of applications including CAD, visual simulation, medical visualization, GIS, entertainment, educational, and multimedia presentations.”

“X3D provides both the XML-encoding and the Scene Authoring Interface (SAI) to enable both web and non-web applications to incorporate real-time 3D data, presentations and controls into non-3D content.”

and here is an example of what it looks like:
   <?xml version="1.0" encoding="UTF-8"?>
   <X3D version="3.2" profile="Immersive">
      <head>
         <component name="Geospatial" level="1"/>
      </head>
      <Scene>
         <PointLight location="10000000000 0 0" radius="100000000000000"/>
         <PointLight location="-10000000000 0 0" radius="100000000000000"/>
         <PointLight location="0 10000000000 0" radius="100000000000000"/>
         <PointLight location="0 -10000000000 0" radius="100000000000000"/>
         <PointLight location="0 0 10000000000" radius="100000000000000"/>
         <PointLight location="0 0 -10000000000" radius="100000000000000"/>
         <GeoViewpoint
             position="39.755092 -104.988123 60000"
             orientation="1.0 0.0 0.0 -1.57">
            <GeoOrigin DEF="ORIGIN"
                geoCoords="0.0 0.0 -6378137"
                geoSystem="GDC" />
         </GeoViewpoint>
         <Inline url="tiles/0/tile_0.x3d"/>
      </Scene>
   </X3D>

After the normal XML declaration follows the root element, <X3D version="3.2" profile="Immersive">, in this case using version 3.2 of the X3D spec and the “Immersive” profile. Profiles are used to break the X3D spec into smaller implementation areas. Consequently, with less effort, an implementor can meet a profile of the X3D spec suitable to the need instead of the entire “Full” specification.

Fig 2 – X3D Profiles

Curiously, the X3D Full profile lists the geospatial component, but the Immersive profile seemed to implement the geospatial elements of interest in my particular case. Perhaps geospatial was later moved to the Full profile. Here is a list of the xml node elements found in the geospatial component:

  • GeoCoordinate
  • GeoElevationGrid
  • GeoLocation
  • GeoLOD
  • GeoMetadata
  • GeoOrigin
  • GeoPositionInterpolator
  • GeoProximitySensor
  • GeoTouchSensor
  • GeoTransform
  • GeoViewpoint

You can see a ‘GeoViewpoint’ element and a ‘GeoOrigin’ element in the small x3d listing above. The Immersive/Full profile is meant to imply a virtual world immersion experience. This means that a scene graph populates a world with context including lighting and a viewpoint that navigates through the virtual world space. Now this is of interest to a GIS developer in the sense that the virtual world space can mimic a real world space.

X3D Earth provides some nice examples of using X3D with earth data. Not all of the examples work with the latest build of the Xj3D browser; these are older samples and may not reflect the current configuration of the X3D specification.

Another example of a virtual world space mimicking the actual world is Google’s Earth API. You can use a flying viewpoint to navigate through a model of the Rocky Mountains and get something like the experience of a bird or fighter jet. The mimicry is still coarse but the effect is recognizable. This is the horizon end point of mapping. The map abstraction eventually disappears into a one to one mathematical mapping with some subtle abstraction available as overlays and heads up displays.

This is where the online mapping world eventually merges with the gaming world. Only for mapping, the imaginary part of the world is de-emphasized in favor of an immersive real world experience as an aid to navigating in external physical space. Thinking about these kinds of worlds and their overlaps has proven fascinating, as shown by the popularity of the Matrix movie series. Are we the Ghost in the Machine or the other way around? But that is best left to philosophy types.

Let’s move on to a couple of other useful X3D scene elements. GeoLOD, the geospatial Level Of Detail element, is especially interesting for geospatial worlds. By recursively referencing deeper levels of x3d GeoTerrain tiles you get an effect similar to DeepEarth tile pyramids, but with the addition of 3D terrain:

<?xml version="1.0" encoding="UTF-8"?>
<X3D version="3.2" profile="Immersive">
   <head>
      <component name="Geospatial" level="1"/>
   </head>
   <Scene>
      <GeoLOD center="-61.875 -168.75 0.0">
         <GeoOrigin DEF="ORIGIN"
             geoCoords="0.0 0.0 -6378137.0"
             geoSystem="GDC" />
         <Group containerField='rootNode'>
            <Shape>
               <Appearance>
                  <Material diffuseColor="0.3 0.4 0.3"/>
               </Appearance>
               <GeoElevationGrid
                   geoGridOrigin="-67.5 -180.0 0.0"
                   yScale='1.0'
                   xSpacing='0.625'
                   height="
                    -1618 -3574 -3728 -3707 -3814 -3663 -3455 -3089
                    -3258 -3457 -3522 -3661 -3519 -3757 -3412 -3379
                    -3674 -3610 -3503 -3679 -3521 -3537 -3397 -3790
                    -3795 -3891 -3883 -3864
                    -5086 -5138 -5089 -5229 -4965 -5021 -5064 -5099
                    -5062 -4952 -4955 -4810 -4635 -4686 -4599 -4408
                    -4470 -5407 -4482 -4600 -4486 -3942 -4202 -4100
                    -4306 -4293 -4010 -4091
                   ">
                  <GeoOrigin USE="ORIGIN" />
               </GeoElevationGrid>
            </Shape>
         </Group>
      </GeoLOD>
   </Scene>
</X3D>

In the x3d snippet above you can see the GeoLOD that references down to level 6 of the terrain tile hierarchy. This is a recursive reference as each tile lower in the pyramid references the four tiles of the next level that make up the current tile, basically a quad tree approach found in most tile setups. The element attributes include a center, range, geosystem, and the ChildURL references. Inside the GeoLOD is a <Group> <Shape> element containing the current tile’s Appearance Material with a reference to the draping tile image and a GeoElevationGrid element with an array of elevations relative to the GeoOrigin.

For this to work there are two pyramids, one containing the draping images and one with the x3d GeoTerrainGrids. The GeoTerrainGrid is the same as x3d ElevationGrid except it introduces the concept of a geospatial reference system. This is all very clever and useful, but it requires the Xj3D Browser viewer to play with it. In order to author this kind of world scene you also need tools to tile both the underlying terrain models as well as the overlaid imagery.
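The quad-tree recursion is easy to sketch. A hypothetical helper (not part of X3D, which stores explicit child tile URLs instead) that computes the four child-tile centers from a parent tile's center and half-extents:

```python
# Sketch of the GeoLOD quad-tree subdivision: a parent tile at (lat, lon)
# with the given half-extents splits into four child tiles, one per quadrant.
def quad_children(lat, lon, half_lat, half_lon):
    """Return the centers of the four child tiles (NW, NE, SW, SE)."""
    qlat, qlon = half_lat / 2.0, half_lon / 2.0
    return [
        (lat + qlat, lon - qlon),  # NW quadrant
        (lat + qlat, lon + qlon),  # NE
        (lat - qlat, lon - qlon),  # SW
        (lat - qlat, lon + qlon),  # SE
    ]

# e.g. the tile centered at (-61.875, -168.75) from the listing above,
# spanning 11.25 degrees of latitude and 22.5 degrees of longitude:
children = quad_children(-61.875, -168.75, 5.625, 11.25)
```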

Fig 3 – x3d earth model tile pyramid

Over on the Microsoft side, I took a quick look at XNA Game Studio, which is the current development tool for XBox games. The XNA serializer supports a type of XML, <XnaContent>, but there doesn’t seem to be much information about the XnaContent schema. I wasn’t able to find an X3D importer for XNA. Too bad, since an XNA importer would be another way to experiment with terrain grids.


As for the future, it is easy to see that the WPF <MeshGeometry3D> can be used to perform the duties of an ElevationGrid element. This means, at least in WPF, the basic building blocks are available to display and manipulate a scene in 3D. What is lacking is the concept of LOD, Level of Detail. After first seeing the DeepZoom MultiScaleImage element in action, I fell in love with the very natural navigation through its 2D space.

It is not much of a stretch to see this being extended to a “MultiScaleMeshGeometry3D” element. Again WPF appears to be a look at the future of Silverlight, so it is not impossible to imagine this type of capability in browsers by 2018. In the non-Microsoft world X3D will play a role once implementations make their way into Chrome, FireFox, and Opera. In some form or other, XML scene graphs will be in the future of GIS, and yes, we will all be pro gamers!

LiDAR processing with Hadoop on EC2

I was inspired yesterday by a FRUGOS talk from a LiDAR production manager showing how to use some open source tools to make processing LiDAR easier. LiDAR stands for LIght Detection And Ranging. It’s a popular technology these days for developing detailed terrain models and ground classification. The USGS has some information here: USGS CLICK, and the state of PA has an ambitious plan to fly the entire state every three years at a 1.4 ft resolution: PAMAP LiDAR

My first stop was to download some .las data from the PAMAP site. This PA LiDAR has been tiled into 10000ft x 10000ft sections which are roughly 75Mb each. First I wrote a .las translator so I could look at the data. The LAS 1.1 specification is about to be replaced by LAS 2.0, but in the meantime most available las data still uses the older spec. The spec defines a header with space for variable length attributes, followed by the point data. Each point record contains 20 bytes (at least in the PAMAP .las) of information including x, y, z as long values plus a classification and reflectance value. The xyz data is scaled and offset by values in the header to obtain double values. The LiDAR sensor mounted on an airplane flies tracks across the area of interest while the LiDAR pulses at a set frequency. The location of each reading is then determined by post processing against the GPS location of the instrument. The pulse scans result in tracks that angle across the flight path.
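For illustration, a minimal sketch of decoding one point record as described above (LAS point data format 0; the scale and offset tuples would come from the .las header, and the values in the usage below are hypothetical):

```python
import struct

# Sketch of decoding one 20-byte LAS point record (point data format 0).
# LAS files are little-endian; x, y, z are signed longs that must be
# scaled and offset using values taken from the .las header.
def decode_point(record, scale, offset):
    x, y, z = struct.unpack_from('<3i', record, 0)   # bytes 0-11
    classification = record[15]                      # classification byte
    return (x * scale[0] + offset[0],
            y * scale[1] + offset[1],
            z * scale[2] + offset[2],
            classification)
```

The intensity (reflectance), return flags, and trailing fields of the record are skipped in this sketch.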

Since the PAMAP data is available in PA83-SF coordinates, despite the metric unit shown in the meta data, I also added a conversion to UTM83-17. This was a convenience for displaying over Terraserver backgrounds using the available EPSG:26917. Given the hi res aerial data available from PAMAP this step may be unnecessary in a later iteration.

Actually PAMAP has done this step already. Tiff grids are available for download. However, I’m interested in the gridding algorithm and possible implementation in a hadoop cluster so PAMAP is a convenient test set with a solution already available for comparison. It is also nice to think about going directly from raw .las data to my end goal of a set of image and mesh pyramids for the 3D viewer I am exploring for gmti data.

Fig 1 – LiDAR scan tracks in xaml over USGS DOQ

Although each section is only about 3.5 square miles the scans generate more than a million points per square mile. The pulse rate achieves roughly 1.4ft spacing for the PAMAP data collection. My translator produces a xaml file of approximately 85Mb which is much too large to show in a browser. I just pulled some scans from the beginning and the end to show location and get an idea of scale.

The interesting aspect of LiDAR is the large amount of data collected which is accessible to the public. In order to really make use of this in a web viewer I will need to subtile and build sets of image pyramids. The scans are not orthogonal, so the next task will be to grid the raw data into an orthogonal cell set. Actually there will be three grids: one for the elevation, one for the classification, and one for the reflectance value. With gridded data sets, I will be able to build xaml 3D meshes overlaid with the reflectance or classification. The reflectance and classification will be translated to some type of image file, probably png.

Replacing the PA83-SF with a metric UTM base gives close to 3048m x 3048m per PAMAP section. If the base cell starts at 1 meter, using a 50-cell x 50-cell patch, the pyramid will look like this:
dim    cells/patch   patch size         approx faces/section
1m     50x50         50m x 50m              9,290,304
2m     50x50         100m x 100m            2,322,576
4m     50x50         200m x 200m              580,644
8m     50x50         400m x 400m              145,161
16m    50x50         800m x 800m               36,290
32m    50x50         1600m x 1600m              9,072
64m    50x50         3200m x 3200m              2,268
128m   50x50         6400m x 6400m                567
256m   50x50         12800m x 12800m              141
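The table above follows from a one-line calculation:

```python
# Reproduce the pyramid table above: cells (faces) per 3048m x 3048m
# PAMAP section at each level-of-detail cell size.
SECTION_M = 3048

def faces_per_section(cell_m):
    return int((SECTION_M / cell_m) ** 2)

levels = {d: faces_per_section(d) for d in (1, 2, 4, 8, 16, 32, 64, 128, 256)}
# levels[1] == 9290304 and levels[256] == 141, matching the table
```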

There will end up being two image pyramids and a 3D xaml mesh pyramid, with the viewer able to move from one level to the next based on zoom scale. PAMAP also has high resolution aerial imagery which could be added as an additional overlay.

Again this is a large amount of data, so pursuing the AWS cluster thoughts from my previous posting it will be interesting to build a hadoop hdfs cluster. I did not realize that there is actually a hadoop public ami available with ec2 specific scripts that can initialize and terminate sets of EC2 instances. The hadoop cluster tools are roughly similar to the Google GFS with a chunker master and as many slaves as desired. Hadoop is an Apache project started by the Lucene team inspired by the Google GFS. The approach is useful in processing data sets larger than available disk space, or 160Gb in a small EC2 instance. The default instance limit for EC2 users is currently at 20. With permission this can be increased but going above 100 seems to be problematic at this stage. Using a hadoop cluster of 20 EC2 instances still provides a lot of horsepower for processing data sets like LiDAR. The cost works out to 20x$0.10 per hour = $2.00/hr which is quite reasonable for finite workflows. hadoop EC2 ami

This seems to be a feasible approach to doing gridding and pyramid building on large data resources like PAMAP. As PAMAP paves the way, other state and federal programs will likely follow with comprehensive sub meter elevation and classification available for large parts of the USA. Working out the browser viewing requires a large static set of pre-tiled images and xaml meshes. The interesting part of this adventure is building and utilizing a supercomputer on demand. Hadoop makes it easier but this is still speculative at this point.

Fig 1 – LiDAR scan tracks in xaml over USGS DRG

I still have not abandoned the pipeline approach to WPS either. 52North has an early WPS implementation written in Java. Since it can be installed as a war, it should be simple to start up a pool of EC2 instances using an ami with this WPS war installed. The challenge, in this case, is to make a front end in WPF that would allow the user to wire up the instance pool in different configurations. Each WPS would be configured with an atomistic service which is then fed by a previous service on up the chain to the data source. Conceptually this would be similar to the pipeline approach used in JAI, which builds a process chain but leaves it empty until a process request is initialized. At that point the data flows down through the process nodes and out at the other end. Conceptually this is quite simple: one AMI per WPS process and a master instance to push the data down the chain. The challenge will be a configuration interface to select a WPS atom at each node and wire the processes together.

I am open to suggestions on a reasonably useful WPS chain to use in experimentations.

AWS is an exciting technology and promises a lot of fun for computer junkies who have always had a hankering to play with supercomputing, but didn’t quite make the grade for a Google position.

Xaml on Amazon EC2 S3

Time to experiment with Amazon EC2 and S3. This site is using an Amazon EC2 instance with a complete open source GIS stack running on Ubuntu Gutsy.

  • Ubuntu Gutsy
  • Java 1.6.0
  • Apache2
  • Tomcat 6.0.16
  • PHP5
  • MySQL 5.0.45
  • PostgreSQL 8.2.6
  • PostGIS 1.2.1
  • GEOS 2.2.3-CAPI-1.1.1
  • Proj 4.5.0
  • GeoServer 1.6

Running an Apache2 service with a mod_jk connector to Tomcat lets me run the examples of xaml xbap files with their associated Java servlet utilities for pulling up GetCapabilities trees on various OWS services. This is an interesting example of combining open source and WPF. In the NasaNeo example, Java is used to create the 3D terrain models from JPL srtm (Ctrl+click) and drape them with BMNG imagery, all served as WPF xaml to take advantage of native client bindings. NasaNeo example

I originally attempted to start with a public ami based on fedora core 6. I found loading the stack difficult, with hard-to-find RPMs and painful installation issues. I finally ran into a wall with the PostgreSQL/PostGIS install: to load it I needed a complete gcc make package to compile from sources. It did not seem worth the trouble. At that point I switched to an Ubuntu 7.10 Gutsy ami.

Ubuntu, based on debian, is somewhat different in its directory layout from the fedora base. However, Ubuntu’s apt-get was much better maintained than the fedora core yum installs. This may be due to using the older fedora 6 rather than a fedora 8 or 9, but there did not appear to be any usable public ami images available on AWS EC2 for the newer fedoras. In contrast to fedora, on Ubuntu installing a recent version of PostgreSQL/PostGIS was a simple matter:
apt-get install postgresql-8.2-postgis postgis

In this case I was using the basic small 32 bit instance ami with 1.7Gb memory and 160Gb storage at $0.10/hour. The performance was very comparable to some dedicated servers we are running, perhaps even a bit better, since the Ubuntu service is set up using an Apache2 mod_jk connector to Tomcat while the dedicated servers simply use Tomcat.

There are some issues to watch for on the small ami instances. The storage is 160Gb, but the partition allots just 10Gb to root and the balance to a /mnt point. This means the default installations of mysql and postgresql will have data directories on the smaller 10Gb partition. Amazon has done this to limit ec2-bundle-vol to a 10GB max. ec2-bundle-vol is used to store an image to S3, which is where the whole utility computing approach gets interesting.

Once an ami stack has been installed, it is bundled and stored on S3, and that ami is then registered with AWS. Now you have the ability to replicate the image on as many instances as desired. This allows very fast scaling or failover with minimal effort. The only caveat, of course, is dynamic data. Unless provision is made to replicate mysql and postgresql data to multiple instances or S3, any changes can be lost with the loss of an instance. This does not appear to occur terribly often, but then again AWS is still Beta. Also important to note: the DNS domain pointed at an existing instance will also be lost with the loss of your instance. Bringing up a new instance requires a change to the DNS entry as well (several hours to propagate), since each instance creates its own unique amazon domain name. There appear to be some workarounds for this requiring more extensive knowledge of DNS servers.

In my case the data sources are fairly static. I ended up changing the datadir pointers to /mnt locations. Since these are not bundled in the volume creation, I handled them separately. Once the data required was loaded I ran a tar on the /mnt/directory and copied the .tar files each to its own S3 bucket. The files are quite large so this is not a nice way to treat backups of dynamic data resources.

Next week I have a chance to experiment with a more comprehensive solution from Elastra. Their beta version promises to solve these issues by wrapping PostgreSQL/PostGIS on the ec2 instance with a layer that uses S3 as the actual datadir. I am curious how this is done, but assume that for performance the indices remain local to an instance while the data resides on S3. I will be interested to see what performance is possible with this product.

Another interesting area to explore is Amazon’s recently introduced SimpleDB. This is not a standard sql database but a type of hierarchical object stack on S3 that can be queried from EC2 instances. This is geared toward untyped text storage, which is fairly common in website building. It will be interesting to adapt this to geospatial data to see what can be done. One idea is to store bounding box attributes in SimpleDB and create some type of JTS tool for indexing on the ec2 instance. The local spatial index would handle the lookup, which is then fed to the SimpleDB query tools for retrieving data. I imagine the biggest bottleneck in this scenario would be the cost of text conversion to double and its inverse.
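Since SimpleDB compares attribute values as text, the usual workaround is to offset and zero-pad numbers so lexicographic order matches numeric order. A sketch (attribute names and precision are my own choices, not a SimpleDB requirement):

```python
# SimpleDB compares attribute values lexicographically, so numeric bbox
# coordinates must be offset (to eliminate negative signs) and zero-padded
# (so string order matches numeric order) before storage.
OFFSET = 180.0       # shifts any lat/lon into the positive range
WIDTH, PLACES = 15, 6

def encode_coord(value):
    return f"{value + OFFSET:0{WIDTH}.{PLACES}f}"

def decode_coord(text):
    return float(text) - OFFSET

# hypothetical bbox attributes for one SimpleDB item:
item = {
    'minx': encode_coord(-104.99), 'miny': encode_coord(39.75),
    'maxx': encode_coord(-104.98), 'maxy': encode_coord(39.76),
}
```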

Utility computing has an exciting future in the geospatial realm – thank you Amazon and Zen.

weogeo and web2.0

A recent article on All Points Blog pointed me to a new web2.0 GIS data/community site called weogeo, still in Beta. Apart from the “whimsical” web2.0 type name the site developers are trying to leverage newer web2.0 capabilities for a data resource community.

Fig.1 weogeo slider workflow interface

The long standing standard for data resources has been Geocomm, with its large library of useful and free data downloads. Geocomm was built probably a decade ago on a unix ftp type architecture. Over the years it has evolved into a full featured news, data, and software community site by keeping focused on low cost immediate downloads, with an ad revenue business model enhanced by some paid premiere services. It has been successful, but apart from the revenue model, it is definitely archaic in today’s online world.

Curious about the potential, I decided to look at the beta weogeo in detail. I was especially attracted to its use of Amazon’s Web Services (AWS):

  • S3 – Simple Storage Service
  • EC2 – Elastic Compute Cloud
  • SQS – Simple Queue Service

Amazon’s Web Services are a new approach to commodity web services, priced on a metered usage basis. This approach is called utility computing because of its resemblance to utility industry pricing models. For example, S3 services on Amazon infrastructure are available for the following:


Storage
  $0.15 per GB-month of storage used
Data Transfer
  $0.10 per GB – all data transfer in
  $0.18 per GB – first 10 TB / month data transfer out
  $0.16 per GB – next 40 TB / month data transfer out
  $0.13 per GB – data transfer out / month over 50 TB
Requests
  $0.01 per 1,000 PUT or LIST requests
  $0.01 per 10,000 GET and all other requests

Data transfer "in" and "out" refers to transfer into and out of Amazon S3. Data transferred between Amazon S3 and Amazon EC2 is free of charge.

This can translate into convenient offsite storage with costs linked to use. For example, a 250Gb data store would incur a $25 upload transfer fee and thereafter a $37.50/month storage fee. Storage fees slide up or down as use varies, so you never pay for more than needed at a specific time.
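The tiered transfer-out rates work like a utility bill; a quick sketch of the computation:

```python
# Monthly S3 transfer-out cost from the tier table above.
TIERS = [                   # (tier size in GB, price per GB)
    (10_000, 0.18),         # first 10 TB out
    (40_000, 0.16),         # next 40 TB out
    (float('inf'), 0.13),   # everything over 50 TB
]

def transfer_out_cost(gb):
    cost, remaining = 0.0, gb
    for size, price in TIERS:
        used = min(remaining, size)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return cost

# e.g. 250 GB out in a month lands entirely in the first tier (about $45)
```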

S3 is not especially novel, but EC2 is, moving from storage to virtual computing resources. A virtual computer is set up on an as-needed basis and torn down when not needed. EC2 is competitively priced at $0.10 per instance-hour. This is especially attractive to small businesses and startups in the online world, whose computing resources may fluctuate wildly. Ramping up to the highest expected load can require costly investments in dedicated servers at a managed hosting firm, with little flexibility. Amazon’s SQS is a similar pricing model for queued message services on demand.

Weogeo provides a data service built on top of EC2 and S3. Their goal is to attract numerous data producers/sellers to provide a community of data sources with a flexible pricing model sitting on top of Amazons web services.

The weogeo founders are coming from a large imagery background with years of experience in scientific hyperspectral imagery which produces very large images, in the 40Gb/image range. The distribution of large image files poses a serious strain on bandwidth, but also on backend cpu. This is especially true of an environment which hopes to provide flexible export formats and projections. Community sellers are encouraged to build a data resource which can then be provided to the community on a cost basis using Amazon S3 for storage and EC2 for cpu requirements.

To make this feasible the weogeo development team also came up with an EC2-specific management tool called weoceo, which automates the setup and teardown of virtual computers. For example, a request for a large customized data set can trigger a dedicated server instance specifically for a single download and then remove the instance once completed. As a side benefit, weoceo adds a service level guarantee by ensuring that lost compute instances can be automatically replaced, an effective failover technique. This failover still requires minutes rather than seconds, but the trend to virtualization points to an eventual future with nearly instant failover instance creation.

This service model was evidently used internally by FERI for the large imagery downloads generated by research scientists and academic communities. The use of open source gdal on EC2 is an effective way to generalize this model and reduce delivery costs for large data resource objects. The developers at weogeo are also extending this distribution model for vector resources.

To try this out I created a few WPF terrain models over Rocky Mountain National Park for entry into the weogeo library. These are vector models which include the necessary model mesh as well as overlay image and a set of binding controls to manipulate camera view, elevation scale, scene rotation, and tilt. Admittedly there is a small audience for such data sources, but they do require a relatively significant file size for vector data, on the order of 20Mb per USGS quad or 4.5Mb compressed.

Fig 2 Fall River Pass quad WPF 3D terrain model

In addition to the actual resource I needed to create a jpg overview image with a jpeg world file and a description html page. Weogeo Beta is currently waiving setup costs on a trial basis. The setup is fairly straightforward and can be streamlined for large sets by also creating an xml configuration file for populating the weogeo catalog. Once entered, weogeo will list the resource in its catalog based on coverage. The seller is allowed considerable flexibility in description and pricing.

The user experience is quite beautiful. It appears to be a very nice Ruby Rails design and seems to fit well in the newer web2.0 world. Those familiar with iPhone slide interfaces will be up and running immediately. The rest of us can appreciate the interface beauty, but will need some amount of experimentation to get used to the workflow. Workflow proceeds in a set of slide views that query the catalog with a two stage map interface, then provide lists of available resources with clickable sort functions.

Fig.3 weogeo slider maps selection interface

Once a resource is selected, the workflow slides to a thumbnail view with a brief description. Clickable access is provided to more detailed descriptions and a kml view of the thumbnail. The next step in the workflow is a customization slide. This is still only available for imagery, but includes options like crop, export format selection, and projection, which are cost selections provided by weogeo rather than the seller. Finally an order slide collects payment details and an order is created.

Fig.4 weogeo slider customization interface

At this point the web2.0 model breaks down and weogeo resorts to old web architecture by sending an email link to the user. The url will either provide temporary download access to an EC2 resource set up specifically for the data or direct the user to the seller’s fulfillment service. In either case the immediate fulfillment expectation of web2.0 is not met. This is understandable for very large imagery object sizes but becomes a drawback for typical vector data downloads. This is especially true for users familiar with OWS services that provide view and data simultaneously.

Since I believe the weogeo interface is effective (perhaps I’m a pushover for beauty), I decided to look deeper into the Amazon services on which it’s based. Especially interesting to a small business owner like myself is the low cost flexibility of EC2. In order to use these services it’s necessary to first set up an account and receive public/private access keys for the security system. An X509 certificate is also set up, which is used for SSH access to particular EC2 instances. Each instance provides a virtual 1.7Ghz x86 processor, 1.75GB of RAM, 160GB of local disk, and 250Mb/s of network bandwidth with some Linux variant OS.

Amazon provides both REST and SOAP interface models. REST appears the better choice for S3 since it’s easier to set up data streams for larger files, while SOAP is by default memory bound. Access libraries are provided for Java, Perl, PHP, C#, Python, and Ruby, which seems to cover all the bases. I experimented with some of the sample Java code and was quickly up and running making simple REST requests for bucket creation, object uploading, deletion, etc. It is, however, much easier to populate S3 space using the open source library JetS3t, which contains a tool called Cockpit.

EC2 requires access to S3 buckets. Amazon provides a few preconfigured Amazon Machine Image files, AMIs, but they are meant as starting places for building your own custom AMI. Once an AMI is built it is saved to an S3 account bucket, where it is accessed when building EC2 instances. I experimented with building an AMI and managed to get a basic instance going, customized, and saved without too much trouble following the “getting started” instructions. I can get around a little in basic Linux, but setting up a usable AMI will require more than the afternoon I had allotted to the effort. I was able to use Putty from a Windows system to access the AMI instance, but I would want to use a VNC server and TightVNC client to get much further. Not having a Remote Desktop is a hindrance for the command line impaired among us.

The main drawback of AWS, and especially EC2, is the lack of a Service Level Agreement. EC2, still in Beta, will not guarantee the longevity of an instance, which can be mitigated somewhat by adding a management/monitor tool like weoceo. However, the biggest issue I can see for a GIS platform is the problem of persistent DB storage. The DB is part of an instance, which is fine until the instance disappears. In that case provision needs to be made for a new instance setup using an AMI, but also for a data transaction log capable of preserving any transactions since the last AMI backup. This problem does not seem to be solved very elegantly yet, although some people are working on it: ScaleDB

If weogeo extends very far into the vector world, a persistent DB will be necessary. Vector GIS applications, unlike imagery, rest directly on the backing geospatial DB. The larger issue in my mind, however, is the difference between vector data models and large imagery objects. Vector DB resources are much more amenable to OWS-type interfaces with immediate access expectations. The email URL delivery approach will only frustrate weogeo users familiar with OWS services and primed by VE and GE to get immediate results. Even web 1.0 sites like geocomm provide immediate FTP download of their library data.

It remains to be seen whether the weogeo service will be successful in a larger sense. In spite of its beautiful Ruby on Rails interface, its community wiki and blog services, its seller incentives, and its scalable AWS foundation, it will have to attract a significant user community with free and immediately accessible data resources to make it as a viable business in a web 2.0 world.

GMTI in WPF requires a .NET 3.0-enabled IE browser.
Alternatively, here is a video showing part of the time sequence simulation.

Fig 1 – Screenshot of 3D WPF browser interface to GMTI data stream

According to the NATO documentation STANAG 4607/AEDP-7, "the GMTI standard defines the data content and format for the products of ground moving target indicator radar systems." The GMTI format is a binary stream of packet segments containing information about mission, job, radar dwell, and targets. These packets contain a variety of geospatial data that can be used in a spatially oriented interface.

Over the past couple of weeks I have explored some options for utilizing this type of data in a web interface. I was provided with a set of simulated GMTI streams that model a UAV flight pattern and target acquisition in a demonstration area in northern Arizona.

My first task was to write a simple Java program to parse the sample binary streams and see what was available for use. The data is organized into packets and segments. A segment can be one of a variety of types, but the primary segments of interest were the Dwell and Target segments. Dwell segments contain information about the sensor platform, including 3D georeferenced position and orientation. Target segments are referenced by a dwell segment and provide information about detected ground targets, including ground position and possibly a classification.
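The parsing itself amounts to a dispatch loop over segment headers. The sketch below uses simplified, hypothetical type codes and header layout rather than the actual STANAG 4607 offsets, just to show the shape of the loop:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class SegmentReader {
    // Returns {dwellCount, targetCount}. The one-byte type code and
    // four-byte length used here are illustrative, not the real layout.
    public static int[] parse(DataInputStream in) throws IOException {
        int dwells = 0, targets = 0;
        while (in.available() > 0) {
            int segType = in.readUnsignedByte();   // segment header: type code
            int segSize = in.readInt();            // segment header: body length
            byte[] body = new byte[segSize];
            in.readFully(body);                    // body decoded per segment type
            if (segType == 2) dwells++;            // hypothetical Dwell code
            else if (segType == 3) targets++;      // hypothetical Target code
            // other segment types are skipped
        }
        return new int[] { dwells, targets };
    }

    public static void main(String[] args) throws IOException {
        byte[] stream = {2, 0, 0, 0, 0,  3, 0, 0, 0, 0}; // two zero-length segments
        int[] counts = parse(new DataInputStream(new ByteArrayInputStream(stream)));
        System.out.println(counts[0] + " dwell, " + counts[1] + " target"); // 1 dwell, 1 target
    }
}
```

The real translator does the same dispatch, but decodes the dwell and target bodies into their georeferenced fields.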

Since I was only interested in a subset of the entire GMTI specification, I concentrated on the fields of interest for developing a visual interface. The output of the resulting translator is selectable as XML, GeoRSS, or SQL.

Here is a sample of a few packets in the XML output:

  <time units="milliseconds">75099818</time>
  <z units="centimeters">106668</z>
  <PlatformHeading units="degree true N">334.1217041015625</PlatformHeading>
  <SensorHeading units="degree true N">0.0</SensorHeading>
  <SensorTrack units="degree true N">334.1217041015625</SensorTrack>
  <SensorSpeed units="millimeters/second">180054</SensorSpeed>
  <SensorVerticalVelocity units="decimeters/second">1</SensorVerticalVelocity>
  <DwellAreaRangeHalfExtent units="kilometers">11.0</DwellAreaRangeHalfExtent>
  <DwellAreaDwellAngleHalfExtent units="degrees">0.999755859375</DwellAreaDwellAngleHalfExtent>
  <DwellTime units="milliseconds">75099818</DwellTime>
  <time units="milliseconds">75099978</time>
  <z units="centimeters">106668</z>
  <PlatformHeading units="degree true N">334.18212890625</PlatformHeading>
  <SensorHeading units="degree true N">0.0</SensorHeading>
  <SensorTrack units="degree true N">334.18212890625</SensorTrack>
  <SensorSpeed units="millimeters/second">180054</SensorSpeed>
  <SensorVerticalVelocity units="decimeters/second">1</SensorVerticalVelocity>
  <DwellAreaRangeHalfExtent units="kilometers">11.0</DwellAreaRangeHalfExtent>
  <DwellAreaDwellAngleHalfExtent units="degrees">1.0052490234375</DwellAreaDwellAngleHalfExtent>
  <DwellTime units="milliseconds">75099978</DwellTime>
  <classification>No Information, Simulated Target</classification>

Listing 1 – example of GMTI translator output to XML

The available documentation is complete and includes all the specialized format specifications, such as binary angle, signed binary decimal, etc. I had to dust off bitwise operations to get at some of the fields.

	// B16: 16-bit binary decimal with 9 integer bits and 7 fraction bits
	public double B16(short d){
		int intnumber = (int)d;
		int maskint = 0xFF80;                     // integer part: bits 7-15
		int number = (intnumber & maskint) >> 7;
		int maskfrac = 0x007F;                    // fraction part: bits 0-6
		int fraction = intnumber & maskfrac;
		return (double)number + (double)fraction / 128.0;
	}

Listing 2 – Example of some bit twiddling required for parsing the GMTI binary stream
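The binary angle formats mentioned above follow a similar pattern. A minimal sketch, assuming the common convention in which the full unsigned 16-bit range maps linearly onto 0–360 degrees (the actual STANAG field widths vary by record):

```java
public class BinaryAngle {
    // Binary angle decode, assuming LSB = 360 / 2^16 degrees for a 16-bit field
    public static double ba16(short d) {
        int unsigned = d & 0xFFFF;            // reinterpret the short as 0..65535
        return unsigned * (360.0 / 65536.0);  // scale the full range onto 0-360
    }

    public static void main(String[] args) {
        System.out.println(ba16((short) 0x8000)); // 180.0 (half the circle)
        System.out.println(ba16((short) 0x4000)); // 90.0
    }
}
```

The `& 0xFFFF` mask matters for the same sign-extension reason as in the B16 listing above: Java shorts are signed, but the angle field is not.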

Since I wanted to show the utility of a live stream, I ended up translating the sample GMTI into a SQL load for PostGIS. The PostGIS gmti database was configured with just two tables: a dwell table and a target table. Each record in these tables contains a time field from the GMTI data stream, which I could then use to simulate a time-domain sequence in the interface.
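The SQL output mode boils down to emitting one INSERT per dwell or target record. A minimal sketch of the idea, with placeholder table and column names (the actual gmti schema may differ):

```java
public class SqlWriter {
    // Emit a PostGIS INSERT for one target record: the GMTI time field plus
    // a point geometry. Table/column names here are illustrative placeholders.
    public static String targetInsert(long timeMs, double lon, double lat) {
        return "INSERT INTO target (time, geom) VALUES (" + timeMs
             + ", ST_GeomFromText('POINT(" + lon + " " + lat + ")', 4326));";
    }

    public static void main(String[] args) {
        System.out.println(targetInsert(75099818L, -112.082, 35.311));
    }
}
```

With the time column indexed, the interface can later pull records in time order to replay the stream.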

Once the data was available in PostGIS with time and geometry fields, my first approach was to make the tables available as WMS layers using Geoserver. I could then use an existing SVG OWS interface to look at the data. The screenshot below shows a TerraServer DOQ overlaid with a WMS dwell layer and a WFS target layer. The advantage of WFS is the ability to add some SVG rollover intelligence to the vector points. WMS supplies raster, which in an SVG sense is dumb, while WFS serves vector GML, which can then be used to produce more intelligent, event-driven SVG.

Fig 2 – SVG OWS interface showing GMTI target from Geoserver WFS over Terraserver DOQ

This approach is helpful and lets the browser make use of the variety of public WMS/WFS servers now available. However, now that WPF 3D is available, the next step was obvious: I needed to make the GMTI data available in a 3D terrain interface.

I thought that an inverse of the flyover interface I had built recently for Pikes Peak viewing would be appropriate. In this approach a simple HUD could visually track aspects of the platform, including air speed, pitch, roll, and heading. At the same time, a smaller keymap could track the platform position while a full-view terrain model shows the target locations as they are acquired. I could also use some camera controls for manipulating the camera look direction and zoom.

At one point I investigated using the dwell center points to see what tracking the camera.LookDirection along the vector from camera position to dwell center would look like. Being a GMTI novice, I did not realize how quickly the dwells move as they scan the area of interest. The resulting interface twitched the view inordinately, so I switched to a smoother scene-center look direction. Since I had already worked out the dwell center points, I decided to at least make use of them by showing dwell centers as cyan Xs on the terrain overlay.

Fig3 – WPF 3D interface showing GMTI target layer from Geoserver OWS on top of DOQ imagery from TerraServer draped over a DEM terrain from USGS

The WPF model followed a familiar pattern. I used a USGS DEM for the ground terrain model. I first used gdal_translate to convert from ASCII DEM to GRD. I could then use a simple GRD-to-WPF MeshGeometry3D translator, which I had coded earlier, to make the basic terrain model. Since the GMTI sensor platform circles quite a large area of ground terrain, I made the simplification of limiting my terrain model to the target area.

The large area of the platform circle made a terrain model unnecessary for my keymap. Instead I used a rectangular planar surface draped with a DRG image from TerraServer. To this Model3DGroup I added a simple spherical icon to represent the camera platform position:

                    <GeometryModel3D x:Name="cameraloc" Geometry="{StaticResource sphere}">
                        <GeometryModel3D.Material>
                            <DiffuseMaterial Brush="Red"/>
                        </GeometryModel3D.Material>
                        <GeometryModel3D.Transform>
                            <Transform3DGroup>
                                <ScaleTransform3D ScaleX="0.02" ScaleY="0.02" ScaleZ="0.02"/>
                                <TranslateTransform3D x:Name="PlatformPosition" OffsetX="0" OffsetY="0" OffsetZ="0.0"/>
                            </Transform3DGroup>
                        </GeometryModel3D.Transform>
                    </GeometryModel3D>

Listing 3 – the geometry model used to indicate platform location

Now that the basic interface was in place, I built an Ajax timer which accessed the server for a new set of GMTI results on a 1000 ms interval. To keep the simulation from being too boring, I boosted the server-side query to a 3000 ms interval, which in essence created a simulation at 3X reality. My server-side query was coded as a Java servlet, which illustrates an interesting aspect of web interfaces: multiple-technology interoperability.
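The 3X effect comes from the arithmetic of the two intervals: the client ticks every 1000 ms, but each tick asks the servlet for the next 3000 ms of GMTI time. A sketch of that windowing logic, with illustrative names rather than the actual servlet code:

```java
public class TrackWindow {
    // currtime in ms from the client query string; interval in seconds
    // (interval=3 with a 1000 ms client timer => 3X real time)
    public static String windowQuery(long currtimeMs, int intervalSec) {
        long end = currtimeMs + intervalSec * 1000L;
        return "SELECT * FROM target WHERE time >= " + currtimeMs
             + " AND time < " + end + " ORDER BY time";
    }

    // The next client tick starts where this window ended
    public static long nextTick(long currtimeMs, int intervalSec) {
        return currtimeMs + intervalSec * 1000L;
    }

    public static void main(String[] args) {
        long t = 75099818L; // a GMTI time value from the sample stream
        System.out.println(windowQuery(t, 3));
        System.out.println(nextTick(t, 3)); // 75102818
    }
}
```

The servlet then serializes the rows in the window back to the client, which advances its currtime parameter for the next request.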

On the client side, the timer is triggered from a start button. It then reads the servlet results using C#:

            Uri uri = new Uri("http://hostserver/WPFgmti/servlet/GetGMTITracks?table=all&currtime=" + currentTime + "&interval=3&area=GMTI&bbox=-112.56774902343798,34.65681457431236,-111.256958007813,35.7221794119471");
            WebRequest request = WebRequest.Create(uri);
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();
            StreamReader reader = new StreamReader(response.GetResponseStream());
            string responseFromServer = reader.ReadToEnd();
            if (responseFromServer.Length > 0)
            {
                // split the (assumed comma-delimited) servlet response into fields
                string[] ll = responseFromServer.Split(',');
                // use the resulting platform position to change the TranslateTransform3D
                // of the camera location in the keymap
                Point3D campt = new Point3D(Double.Parse(ll[0]), Double.Parse(ll[1]), Double.Parse(ll[3]) / 1000.0);
                PlatformPosition.OffsetX = campt.X;
                PlatformPosition.OffsetY = campt.Y;
            }

Listing 4 – client-side C# snippet for updating the keymap camera position

The camera location is changed in the keymap and its look direction is updated in the view frame. Next, some method of showing targets and dwell centers needed to be worked out. One approach would be to use a 3D icon, spherical or tetrahedral, to represent a ground position on the terrain model. However, there are about 6000 target locations in the GMTI sample, and rather than simply show icons moving from one location to the next, I wanted to show their track history as well. In addition, the targets were not discriminated in this GMTI stream, so it would not be easy to distinguish among several moving targets. A better approach in this case is to take advantage of the DrawingBrush capability to create GeometryDrawing shapes on a terrain overlay brush.

                                <DrawingBrush Stretch="Uniform">
                                  <DrawingBrush.Drawing>
                                    <DrawingGroup x:Name="TargetGroup">
                                      <GeometryDrawing Brush="#000000FF">
                                        <GeometryDrawing.Geometry>
                                          <RectangleGeometry Rect="-112.0820209 -35.31137528 0.332466 0.332466"/>
                                        </GeometryDrawing.Geometry>
                                      </GeometryDrawing>
                                    </DrawingGroup>
                                  </DrawingBrush.Drawing>
                                </DrawingBrush>

Listing 5 – using a DrawingBrush to modify the terrain overlay

As targets arrive from the server, circles are added to the DrawingGroup:

                            GeometryDrawing gd = new GeometryDrawing();
                            gd.Brush = new SolidColorBrush(Colors.Red);
                            GeometryGroup gg = new GeometryGroup();
                            EllipseGeometry eg = new EllipseGeometry();
                            eg.RadiusX = 0.0005;
                            eg.RadiusY = 0.0005;
                            Point cp = new Point(Double.Parse(ll[0]), Double.Parse(ll[1]));
                            eg.Center = cp;
                            gg.Children.Add(eg);          // add the ellipse to the geometry group
                            gd.Geometry = gg;
                            TargetGroup.Children.Add(gd); // append to the overlay DrawingGroup


Listing 6 – C# code for creating a new ellipse and adding to the DrawingGroup
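Note the negated latitude in the Rect of Listing 5: WPF 2D coordinates increase downward, so shapes placed on the overlay flip the sign of Y. A small helper illustrating that assumed mapping (names here are illustrative):

```java
public class OverlayCoords {
    // Geographic to DrawingBrush overlay space: X is longitude as-is,
    // Y is negated latitude because WPF 2D Y increases downward.
    public static double[] toOverlay(double lon, double lat) {
        return new double[] { lon, -lat };
    }

    public static void main(String[] args) {
        double[] p = toOverlay(-112.0820209, 35.31137528);
        System.out.println(p[0] + " " + p[1]); // -112.0820209 -35.31137528
    }
}
```

Forgetting this flip puts every target mirror-imaged across the equator of the overlay, which is an easy mistake to make when moving between map and screen coordinate systems.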

The dwell center points are handled in a similar way, creating two LineGeometry objects for each center point and adding them to TargetGroup.

This completes a simple 3D WPF interface for tracking both the sensor platform dwell segments and the target segments of a GMTI stream. The use of WPF 3D extends tracking interfaces into the z dimension and provides some additional visual analysis utility. This interface makes use of a static terrain model, made possible by foreknowledge of the GMTI input. A useful extension of this interface would be generalizing the terrain. I explored this a bit earlier when building terrain from JPL SRTM WMS resources for the NASA Neo interface. This resource exposes a worldwide elevation data set as a WMS, which can be used to create GeometryModel3D TINs, enhanced with additional overlays such as BMNG, anywhere in the world.

In a tracking scenario, some moving platform is represented by a live stream of constantly changing positions. Since these positions follow a vector, another general approach would be an Ajax WebRequest for a set of terrain patches, either static for performance or dynamic from JPL SRTM for a small footprint. As a tracked position moves across the globe, new GeometryModel3D patches would be dynamically added at the forward edge while old patches are removed from the trailing edge, reminiscent of the VE and GE map services. GeometryModel3D requirements are much more intense than image pyramids, so performance could not match popular slippy map interfaces. However, real-time tracking is generally not a high-speed affair, at least for ground vehicles.

A variation on this Ajax theme would be a retinal model that adjusts GeometryModel3D patch resolution depending on zoom. For example, a zoom to a focus could increase the resolution of the terrain model from 100m to 30m and then again to 10m. If the DEM is preconfigured, it would be fairly simple to create a pyramid stack of GeometryModel3D patches at, say, 160m, 80m, 40m, 20m, and 10m resolutions. Starting at the top of the pyramid would provide enough resolution for a wide-area TIN in the interface. Zoom events could then trigger higher-resolution Ajax calls to enhance detail at a focus. These kinds of approaches will need to be explored in a future project.
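Level selection for such a pyramid stack is straightforward: each doubling of zoom steps one level finer, clamped to the available resolutions. A sketch, assuming the 160m–10m stack suggested above:

```java
public class TerrainPyramid {
    // Preconfigured DEM pyramid, coarsest to finest, in meters per post
    static final int[] RESOLUTIONS = {160, 80, 40, 20, 10};

    // Each doubling of zoom steps one level down the pyramid toward finer
    // terrain, clamped at the finest level available.
    public static int resolutionFor(int zoom) {
        int level = 0;
        while ((1 << (level + 1)) <= zoom && level < RESOLUTIONS.length - 1) {
            level++;
        }
        return RESOLUTIONS[level];
    }

    public static void main(String[] args) {
        System.out.println(resolutionFor(1));  // 160 - wide-area TIN at the top
        System.out.println(resolutionFor(4));  // 40
        System.out.println(resolutionFor(32)); // 10 - finest available
    }
}
```

A zoom event past a level threshold would then trigger the Ajax call for the finer GeometryModel3D patch at the focus.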

WPF 3D Resource

I’ve been following Eric Sink’s fascinating The Twelve Days of WPF 3D blog series, leading up to the release of Charles Petzold’s new book, due out tomorrow, July 25th. Eric was involved as a technical reviewer and had early access to its content.

3D is one of the most important distinguishing characteristics of WPF XAML. 3D is new to the browser world, at least in native browser form, and WPF 3D affords a rich set of capabilities that make 3D web applications possible. Eric’s blog provides a nice tutorial with helpful pointers on the use of WPF 3D. Without going into great detail, it describes his insights from developing an application called “Sawdust.” The application itself is not online, but the insights are applicable to online XBAPs as well as standalone programs.

Charles Petzold is a legend in the Windows programming world. He has produced a long line of reference books for those needing to program in Windows, and his latest is
3D Programming for Windows

Anyone developing 3D content for .NET 3.0 will want to study Charles Petzold’s new book, as well as Chapter 12 of Adam Nathan’s Windows Presentation Foundation Unleashed.