Extraterrestrial Map Kinections


Fig 1 – LRO Color Shaded Relief map of moon – Silverlight 5 XNA with Kinect interface


Silverlight 5 was released after a short delay, at the end of last week.
Just prior to exiting stage left, Silverlight, along with all plugins, sings a last aria. The spotlight now shifts abruptly to a new diva, mobile HTML5. Backstage the enterprise awaits with a bouquet of roses. Their concourse will linger long into the late evening of 2021.

The Last Hurrah?

Kinect devices continue to generate a lot of hacking interest. With the release of an official Microsoft Kinect beta SDK for Windows, things get even more interesting. Unfortunately, Kinect and the web aren’t exactly ideal partners. It’s not that web browsers wouldn’t benefit by moving beyond the venerable mouse/keyboard events. After all, look at the way touch, voice, inertia, gyro, accelerometer, GPS . . . have all suddenly become base features in mobile browsing. The reason Kinect isn’t part of the sensor event farmyard may be just a lack of portability and an ‘i’ prefix. Shrinking a Kinect doesn’t work too well, as stereoscopic imagery needs a degree of separation in a Newtonian world.

[The promised advent of NearMode (50cm range) offers some tantalizing visions of 3D voxel UIs. Future mobile devices could potentially take advantage of the human body’s bilateral symmetry: simply cut the device in two and mount one half on each shoulder. But that isn’t the state of hardware at present.]


Fig 2 – a not so subtle fashion statement OmniTouch


For the present, experimenting with Kinect control of a Silverlight web app requires a relatively static, three-part arrangement: the Kinect out there, beyond the stage lights; the web app over here, close at hand; and a software piece in the middle. The Kinect SDK, which roughly corresponds to our visual and auditory cortex, amplifies and simplifies a flood of raw sensory input to extract bits of “actionable meaning.” The beta Kinect SDK gives us device drivers and APIs in managed code. However, as these APIs have not been compiled for use with the Silverlight runtime, a Silverlight client will by necessity be one step further removed.

Microsoft includes some rich sample code as part of the Kinect SDK download. In addition there are a couple of very helpful blog posts by David Catuhe and a CodePlex project, Kinect Toolbox.

Step 1:

The approach for this experimental map interface is to use the GestureViewer code from Kinect Toolbox to capture some primitive commands arising from sensory input. The command repertoire is minimal: four compass-direction swipes, plus two circular gestures for zooming (circle clockwise to zoom in, circle counter-clockwise to zoom out). Voice commands are pretty much a freebie, so I’ve added a few to the mix. Since the GestureViewer toolbox includes a learning template based gesture module, you can capture just about any gesture desired. I’m choosing to keep this simple.

Step 2:

Once gesture recognition for these six commands is available, step 2 is handing commands off to a Silverlight client. In this project I used a socket service running on a separate thread. As gestures are detected, they are pushed out to a TCP socket service on local port 4530. There are other approaches that may work better with the final release of Silverlight 5.
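A minimal sketch of what that middle piece could look like on the GestureViewer side, assuming the gesture handlers feed detected command strings into a queue. Class and member names here are illustrative, not the project’s actual code, and only a single client connection is handled:

```csharp
using System;
using System.Collections.Concurrent;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;

public class GestureSocketService
{
    private readonly TcpListener listener = new TcpListener(IPAddress.Any, 4530);
    private readonly ConcurrentQueue<string> commands = new ConcurrentQueue<string>();

    // Called from the Kinect gesture/voice recognition handlers.
    public void Enqueue(string command) { commands.Enqueue(command); }

    public void Start()
    {
        // Run on a background thread so gesture detection is never blocked.
        new Thread(() =>
        {
            listener.Start();
            using (TcpClient client = listener.AcceptTcpClient())
            {
                NetworkStream stream = client.GetStream();
                while (client.Connected)
                {
                    string cmd;
                    if (commands.TryDequeue(out cmd))
                    {
                        // Newline-delimited so the Silverlight reader can split commands.
                        byte[] buffer = Encoding.UTF8.GetBytes(cmd + "\n");
                        stream.Write(buffer, 0, buffer.Length);
                    }
                    else Thread.Sleep(50);
                }
            }
        }) { IsBackground = true }.Start();
    }
}
```

Note that in-browser Silverlight restricts client sockets to the port range 4502–4534 and expects a clientaccesspolicy.xml policy service on port 943, which is why a port like 4530 works here.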

Step 3:

The Silverlight client listens on port 4530, reading command strings that show up. Once read, the command can then be translated into appropriate actions for our Map Controller.
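On the Silverlight side, a listener along these lines would do, using the SocketAsyncEventArgs pattern that Silverlight’s Socket class requires. Again, names are illustrative; the CommandReceived event would be wired to the viewer’s command handler:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Windows;

public class GestureSocketClient
{
    private readonly Socket socket =
        new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    private readonly byte[] buffer = new byte[1024];

    public event Action<string> CommandReceived;

    public void Connect(string host)
    {
        var args = new SocketAsyncEventArgs { RemoteEndPoint = new DnsEndPoint(host, 4530) };
        args.Completed += (s, e) => { if (e.SocketError == SocketError.Success) Receive(); };
        socket.ConnectAsync(args);
    }

    private void Receive()
    {
        var args = new SocketAsyncEventArgs();
        args.SetBuffer(buffer, 0, buffer.Length);
        args.Completed += (s, e) =>
        {
            if (e.BytesTransferred > 0)
            {
                string data = Encoding.UTF8.GetString(e.Buffer, 0, e.BytesTransferred);
                foreach (string cmd in data.Split(new[] { '\n' },
                         StringSplitOptions.RemoveEmptyEntries))
                {
                    string c = cmd;
                    // Marshal to the UI thread before touching controls.
                    Deployment.Current.Dispatcher.BeginInvoke(() =>
                    {
                        if (CommandReceived != null) CommandReceived(c);
                    });
                }
            }
            Receive();   // keep listening for the next command
        };
        socket.ReceiveAsync(args);
    }
}
```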


Fig 3 – Kinect to Silverlight architecture

Full Moon Rising


But first, instead of the mundane, let’s look at something a bit extraterrestrial, a more fitting client for such “extraordinary” UI talents. NASA has been very busy collecting large amounts of fascinating data on our nearby planetary neighbors. One data set, recently released by ASU, stitches together a comprehensive lunar relief map with beautiful color shading. Wow, what if the moon really looked like this!


Fig 4 – ASU LRO Color Shaded Relief map of moon

In addition to the ASU moon, USGS has published a set of imagery for Mars, Venus, and Mercury, as well as some Saturn and Jupiter moons. Finally, JPL thoughtfully shares a couple of WMS services and some imagery of the other planets.

This type of data wants to be 3D, so I’ve brushed off code from a previous post, NASA Neo 3D XNA, and adapted it for planetary data, minus the population bump map. However, bump maps for depicting terrain relief are still a must-have. A useful tool for generating bump or normal imagery from color relief is SSBump Generator v5.3. The result is an image that encodes relative elevation of the moon’s surface. This is added to the XNA rendering pipeline to combine a surface texture with the color relief imagery, where it can then be applied to a simplified spherical model.
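In SL5 XNA terms, wiring the two images into the pipeline looks roughly like the following DrawingSurface draw handler. This is a sketch assuming a prebuilt sphere model and a custom effect whose samplers expect the color relief in register s0 and the normal map in s1; moonEffect, sphere, and the texture fields are placeholders, not names from the actual project:

```csharp
// Draw handler for the SL5 <DrawingSurface Draw="OnDraw" ... /> element.
private void OnDraw(object sender, DrawEventArgs e)
{
    GraphicsDevice device = e.GraphicsDevice;

    // Texture registers match the sampler registers in the .fx file:
    // s0 = ASU color relief, s1 = normal (bump) map from SSBump Generator.
    device.Textures[0] = moonColorRelief;
    device.Textures[1] = moonNormalMap;

    device.SetVertexBuffer(sphere.VertexBuffer);
    device.Indices = sphere.IndexBuffer;

    foreach (EffectPass pass in moonEffect.CurrentTechnique.Passes)
    {
        pass.Apply();
        device.DrawIndexedPrimitives(PrimitiveType.TriangleList,
            0, 0, sphere.VertexCount, 0, sphere.TriangleCount);
    }

    // Immediate mode: request the next frame so rotation stays smooth.
    e.InvalidateSurface();
}
```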


Fig 5 – part of normal map from ASU Moon Color Relief imagery

The result is seen in the MoonViewer client with the added benefit of immediate mode GPU rendering that allows smooth rotation and zoom.

The other planets and moons have somewhat less data available, but still benefit from the XNA treatment. Only Earth, Moon, Mars, Ganymede, and Io have data affording bump map relief.

I also added a quick 2D WMS viewer page using OpenLayers against the JPL WMS servers to take a look at lunar landing sites. Default OpenLayers isn’t especially pretty, but it takes less than 20 lines of JavaScript to get a zoomable viewer with landing locations. I would have preferred the elegance of Leaflet.js, but EPSG:4326 isn’t supported in L.TileLayer.WMS(). MapProxy promises a way to proxy the planet data as EPSG:3857 tiles for Leaflet consumption, but OpenLayers offers a simpler path.


Fig 6 – OpenLayer WMS viewer showing lunar landing sites

Now that the Viewer is in place it’s time to take a test drive. Here is a ClickOnce installer for GestureViewer modified to work with the Silverlight Socket service:

Recall that this is a Beta SDK, so in addition to a Kinect prerequisite, there are some additional runtime installs required:

Using the Kinect SDK Beta

Download Kinect SDK Beta 2:

Be sure to look at the system requirements and the installation instructions further down the page. This is still a beta and requires a few pieces. The release SDK is rumored to be available in the first part of 2012.

You may have to download some additional software as well as the Kinect SDK:

Finally, we are making use of port 4530 for the Socket Service. It is likely that you will need to open this port in your local firewall.

As you can see, this is not exactly a user-friendly installation, but the reward is seeing Kinect control of a mapping environment. If you are hesitant to go through all of this install trouble, here is a video link that will give you an idea of the results.

YouTube video demonstration of Kinect Gestures


Voice commands using the Kinect are very simple to add, so this version includes a few.

Here is the listing of available commands:

       public void SocketCommand(string command)
       {
           switch (command)
           {
               // Kinect voice commands
               case "mercury-on": { MercuryRB.IsChecked = true; break; }
               case "venus-on": { VenusRB.IsChecked = true; break; }
               case "earth-on": { EarthRB.IsChecked = true; break; }
               case "moon-on": { MoonRB.IsChecked = true; break; }
               case "mars-on": { MarsRB.IsChecked = true; break; }
               case "marsrelief-on": { MarsreliefRB.IsChecked = true; break; }
               case "jupiter-on": { JupiterRB.IsChecked = true; break; }
               case "saturn-on": { SaturnRB.IsChecked = true; break; }
               case "uranus-on": { UranusRB.IsChecked = true; break; }
               case "neptune-on": { NeptuneRB.IsChecked = true; break; }
               case "pluto-on": { PlutoRB.IsChecked = true; break; }

               case "callisto-on": { CallistoRB.IsChecked = true; break; }
               case "io-on": { IoRB.IsChecked = true; break; }
               case "europa-on": { EuropaRB.IsChecked = true; break; }
               case "ganymede-on": { GanymedeRB.IsChecked = true; break; }
               case "cassini-on": { CassiniRB.IsChecked = true; break; }
               case "dione-on": { DioneRB.IsChecked = true; break; }
               case "enceladus-on": { EnceladusRB.IsChecked = true; break; }
               case "iapetus-on": { IapetusRB.IsChecked = true; break; }
               case "tethys-on": { TethysRB.IsChecked = true; break; }

               case "moon-2d":
                   {
                       MoonRB.IsChecked = true;
                       Uri uri = Application.Current.Host.Source;
                       System.Windows.Browser.HtmlPage.Window.Navigate(new Uri(uri.Scheme + "://" + uri.DnsSafeHost + ":" + uri.Port + "/MoonViewer/Moon.html"), "_blank");
                       break;
                   }
               case "mars-2d":
                   {
                       MarsRB.IsChecked = true;
                       Uri uri = Application.Current.Host.Source;
                       System.Windows.Browser.HtmlPage.Window.Navigate(new Uri(uri.Scheme + "://" + uri.DnsSafeHost + ":" + uri.Port + "/MoonViewer/Mars.html"), "_blank");
                       break;
                   }
               case "nasaneo":
                   {
                       EarthRB.IsChecked = true;
                       System.Windows.Browser.HtmlPage.Window.Navigate(new Uri(""), "_blank"); // target url elided
                       break;
                   }
               case "rotate-east":
                   {
                       RotationSpeedSlider.Value += 1.0;
                       tbMessage.Text = "rotate east";
                       break;
                   }
               case "rotate-west":
                   {
                       RotationSpeedSlider.Value -= 1.0;
                       tbMessage.Text = "rotate west";
                       break;
                   }
               case "rotate-off":
                   {
                       RotationSpeedSlider.Value = 0.0;
                       tbMessage.Text = "rotate off";
                       break;
                   }
               case "reset":
                   {
                       RotationSpeedSlider.Value = 0.0;
                       orbitX = 0;
                       orbitY = 0;
                       tbMessage.Text = "reset view";
                       break;
                   }

               // Kinect swipe algorithmic commands
               case "swipetoleft":
                   {
                       orbitY += Microsoft.Xna.Framework.MathHelper.ToRadians(15);
                       tbMessage.Text = "orbit left";
                       break;
                   }
               case "swipetoright":
                   {
                       orbitY -= Microsoft.Xna.Framework.MathHelper.ToRadians(15);
                       tbMessage.Text = "orbit right";
                       break;
                   }
               case "swipeup":
                   {
                       orbitX += Microsoft.Xna.Framework.MathHelper.ToRadians(15);
                       tbMessage.Text = "orbit up";
                       break;
                   }
               case "swipedown":
                   {
                       orbitX -= Microsoft.Xna.Framework.MathHelper.ToRadians(15);
                       tbMessage.Text = "orbit down";
                       break;
                   }

               // Kinect gesture template commands
               case "circle":
                   {
                       if (scene.Camera.Position.Z > 0.75f)
                           scene.Camera.Position += zoomInVector * 5;
                       tbMessage.Text = "zoomin";
                       break;
                   }
               case "circle2":
                   {
                       scene.Camera.Position += zoomOutVector * 5;
                       tbMessage.Text = "zoomout";
                       break;
                   }
           }
       }

Possible Extensions

After posting this code, I added an experimental stretch vector control for zooming and two-axis twisting of planets. These are activated by voice: ‘vector twist’, ‘vector zoom’, and ‘vector off.’ The map control side of gesture commands could also benefit from some easing-function animations. Another avenue of investigation would be some type of pointer intersection, using a ray to indicate planet surface locations for events.


Even though Kinect browser control is not prime time material yet, it is a lot of experimental fun! The MoonViewer control experiment is relatively primitive. Cursor movement and click using posture detection and hand tracking is also feasible, but fine movement is still a challenge. Two hand vector controlling for 3D scenes is also promising and integrates very well with SL5 XNA immediate mode graphics.

Kinect 2.0 and NearMode will offer additional granularity. Instead of large swipe gestures, finger level manipulation should be possible. Think of 3D voxel space manipulation of subsurface geology, or thumb and forefinger vector3 twisting of LiDAR objects, and you get an idea where this could go.

The merger of TV and internet holds promise for both whole body and NearMode Kinect interfaces. Researchers are also adapting Kinect technology for mobile as illustrated by OmniTouch.

. . . and naturally, lip reading ought to boost the Karaoke crowd (could help lip synching pop singers and politicians as well).


Fig 7 – Jupiter Moon Io

Alice in Mirrorland – Silverlight 5 Beta and XNA

“In another moment Alice was through the glass, and had jumped lightly down into the Looking-glass room”

Silverlight 5 Beta was released into the wild at MIX 11 a couple of weeks ago. This is a big step for mirror land. Among many new features is the long anticipated 3D capability. Silverlight 5 took the XNA route to 3D instead of the WPF 3D XAML route. XNA is closer to the GPU with the time tested graphics rendering pipeline familiar to Direct3D/OpenGL developers, but not so familiar to XAML developers.

The older WPF 3D XAML aligns better with X3D, the ISO sanctioned XML 3D graphics standard, while XNA aligns with the competing WebGL javascript wrapper for OpenGL. Eventually XML 3D representations also boil down to a rendering pipeline, but the core difference is that XNA is immediate mode while XML 3D is kind of stuck with retained mode. Although you pick up recursive control rendering with XML 3D, you lose out when it comes to moving through a scene in the usual avatar game sense.

From a Silverlight XAML perspective, mirror land is largely a static machine with infrequent events triggered by users. In between events, the machine is silent. XAML’s retained mode graphics lacks a sense of time’s flow. In contrast, enter XNA through Alice’s DrawingSurface, and the machine whirs on and on. Users occasionally throw events into the machine and off it goes in a new direction, but there is no stopping. Frames are clicking by apace.

Thus time enters mirror land in frames per second. Admittedly this is crude relative to our world. Time is measured out in the proximate range of 1/20th to 1/60th of a second per frame. Nothing like the cusp of the moment here, and certainly no need for the nuance of Dedekind’s cut. Time may be chunky in mirror land, but with immediate mode XNA it does move, clicking through the present moment one frame at a time.

Once Silverlight 5 is released there will be a continuous XNA API across Microsoft’s entire spectrum: Windows 7 desktops, Windows 7 phones, XBox game consoles, and now the browser. Silverlight 5 and WP7 implementations are a subset of the full XNA game framework available to desktop and XBox developers. Both SL5 and WP7 will soon have merged Silverlight XNA capabilities. For symmetry’s sake, XBox should have Silverlight too, as apparently announced here; it would make for a nice web-browsing XBox TV console.

WP7 developers will need to wait until the future WP7 Mango release before merging XNA and Silverlight into a single app. It’s currently an either/or proposition for the mobile branch of XNA/SL.

At any rate, with SL5 Beta, Silverlight and 3D XNA now coexist. The border lies at the <DrawingSurface> element:

<DrawingSurface Draw="OnDraw" SizeChanged="DrawingSurface_SizeChanged" />

North of the border lies XML and recursive hierarchies, a largely language world populated with “semantics” and “ontologies.” South of the border lies a lush XNA jungle with drums throbbing in the night. Yes, there are tropical white sands by an azure sea, but the heart of darkness presses in on the mind.

XAML touches the academic world. XNA intersects Hollywood. It strikes me as one of those outmoded Freudian landscapes so popular in the 50’s, the raw power of XNA boiling beneath XAML’s super-ego. I might also note there are bugs in paradise, but after all this is beta.

Merging these two worlds causes a bit of schizophrenia. Above is Silverlight XAML with the beauty of recursive hierarchies and below is all XNA with its rendering pipeline plumbing. Alice steps into the DrawingSurface confronting a very different world indeed. No more recursive controls beyond this border. Halt! Only immediate mode allowed. The learning curve south of the border is not insignificant, but beauty awaits.

XNA involves tessellated models, rendering pipelines, vertex shaders, pixel shaders, and a high level shading language, HLSL, accompanied by the usual linear algebra suspects. Anytime you run across register references you know this is getting closer to hardware.

…a cry that was no more than a breath: “The horror! The horror!”

sampler2D CloudSampler : register(s0);
static const float3 AmbientColor = float3(0.5f, 0.75f, 1.0f);
static const float3 LampColor = float3(1.0f, 1.0f, 1.0f);
static const float AmbientIntensity = 0.1f;
static const float DiffuseIntensity = 1.2f;
static const float SpecularIntensity = 0.05f;
static const float SpecularPower = 10.0f;

Here is an overview of the pipeline from Aaron Oneal’s MIX talk:

So now that we have XNA it’s time to take a spin. The best way to get started is to borrow from the experts. Aaron Oneal has been very kind to post some nice samples including a game engine called Babylon written by David Catuhe.

The Silverlight 5 beta version of Babylon uses Silverlight to set some options and SL5 DrawingSurface to host scenes. Using mouse and arrow keys allows the camera/avatar to move through the virtual environment colliding with walls etc. For those wishing to get an idea of what XNA is all about this webcafe model in Babylon is a good start.

The models are apparently produced in AutoCAD 3DS and are probably difficult to build. Perhaps 3D point clouds will someday help, but you can see the potential for navigable high risk complex facility modeling. This model has over 60,000 faces, but I can still walk through exploring the environment without any difficulty and all I’m using is an older NVidia motherboard GPU.

Apparently, SL5 XNA can make a compelling interactive museum, refinery, nuclear facility, or WalMart browser. This is not a stitched pano or photosynth interior, but a full blown 3D model.

You’ve gotta love that late afternoon shadow effect. Notice the camera is evidently held by a vampire. I checked carefully and it casts no shadow!

But what about mapping?

From a mapping perspective the fun begins with this solar wind sample. It features all the necessary models, and shaders for earth, complete with terrain, multi altitude atmosphere clouds, and lighting. It also has examples of basic mouse and arrow key camera control.

Fig 4 – Solar Wind SL5 XNA sample

This is my starting point. Solar Wind illustrates generating a tessellated sphere model with applied textures for various layers. It even illustrates the use of a normal (bump) map for 3D effects on the surface without needing a tessellated surface terrain model. Especially interesting is the use of bump maps to show a population density image as 3D.

My simple project is to extend this solar wind sample slightly by adding layers from NASA Neo. NASA Neo conveniently publishes 45 categories and 129 layers of a variety of global data collected on a regular basis. The first task is to read the Neo GetCapabilities XML and produce the TreeView control to manage such a wealth of data. The TreeView control comes from the Silverlight Toolkit project. Populating this is a matter of reading through the Layer elements of the returned XML and adding layers to a collection which is then bound to the tree view’s ItemsSource property.

    private void CreateCapabilities111(XDocument document)
    {
        // WMS 1.1.1: the GetMap URL is the xlink:href on
        // Capability/Request/GetMap/DCPType/HTTP/Get/OnlineResource
        XElement GetMap = document.Element("WMT_MS_Capabilities").Element("Capability")
            .Element("Request").Element("GetMap").Element("DCPType")
            .Element("HTTP").Element("Get").Element("OnlineResource");
        XNamespace xlink = "http://www.w3.org/1999/xlink";
        getMapUrl = GetMap.Attribute(xlink + "href").Value;
        if (getMapUrl.IndexOf("?") != -1) getMapUrl =
                  getMapUrl.Substring(0, getMapUrl.IndexOf("?"));

        ObservableCollection<WMSLayer> layers = new ObservableCollection<WMSLayer>();
        // Neo nests named layers inside category Layer elements under the root Layer
        foreach (XElement element in document.Element("WMT_MS_Capabilities")
            .Element("Capability").Element("Layer").Elements("Layer"))
        {
            if (element.Descendants("Layer").Count() > 0)
            {
                WMSLayer lyr0 = new WMSLayer();
                lyr0.Title = (string)element.Element("Title");
                lyr0.Name = "header";
                foreach (XElement element1 in element.Descendants("Layer"))
                {
                    WMSLayer lyr1 = new WMSLayer();
                    lyr1.Title = (string)element1.Element("Title");
                    lyr1.Name = (string)element1.Element("Name");
                    lyr0.Children.Add(lyr1); // Children collection on WMSLayer for TreeView nesting
                }
                layers.Add(lyr0);
            }
        }
        LayerTree.ItemsSource = layers;
    }

Once the tree is populated, OnSelectedItemChanged events provide the trigger for a GetMap request to NASA Neo returning a new png image. I wrote a proxy WCF service to grab the image and then write it to png even if the source is jpeg. It’s nice to have an alpha channel for some types of visualization.
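The proxy idea is simple enough to sketch: fetch the GetMap image server-side and re-encode it as PNG regardless of the source format, so the client always gets an alpha-capable image. The service and method names here are hypothetical, not the project’s actual contract:

```csharp
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Net;

public class NeoProxyService
{
    // Fetch a Neo GetMap result and return PNG-encoded bytes.
    public byte[] GetMapAsPng(string getMapUrl)
    {
        using (var web = new WebClient())
        using (var src = new MemoryStream(web.DownloadData(getMapUrl)))
        using (var image = Image.FromStream(src))
        using (var dst = new MemoryStream())
        {
            // Re-encode jpeg (or anything GDI+ can read) as png.
            image.Save(dst, ImageFormat.Png);
            return dst.ToArray();
        }
    }
}
```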

The difficulty for an XNA novice like myself is understanding the hlsl files and coming to terms with the rendering pipeline. Changing the source image for a Texture2D shader requires dropping the whole model, changing the image source, and finally reloading the scene model and pipeline once again. It sounds like an expensive operation, but surprisingly this re-instantiation seems to take less time than receiving the GetMap response from the WMS service. In WPF it was always interesting to put a Video element over the scene model, but I doubt that will work here in XNA.

The result is often a beautiful rendering of the earth displaying real satellite data at a global level.

Some project extensions:

  • I need to revisit lighting which resides in the cloud shader hlsl. Since the original cloud model is not real cloud coverage, it is usually not an asset to NASA Neo data. I will need to replace the cloud pixel image with something benign to take advantage of the proper lighting setup for daytime.
  • Next on the list is exploring collision. WPF 3D provided a convenient RayMeshGeometry3DHitTestResult. In XNA it seems getting a point on the earth to trigger a location event requires some manner of collision or Ray.Intersects(Plane). If that can be worked out the logical next step is grabbing DEM data from USGS for generating ground level terrain models.
  • There is a lot of public LiDAR data out there as well. Thanks to companies like QCoherent, some of it is available as WMS/WFS. So next on the agenda is moving 3D LiDAR online.
  • The bump map approach to displaying variable geographic density as relief is a useful concept. There ought to be lots of global epidemiology data that can be transformed to a color density map for display as a relief bump map.
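For the collision item above, one plausible approach is XNA’s Ray.Intersects against a bounding sphere rather than a Plane, converting the hit point to lat/lon. This is a sketch only; a unit-radius globe centered at the origin and the Y-up axis convention are assumptions:

```csharp
using System;
using Microsoft.Xna.Framework;

public static class GlobePicker
{
    // Returns (lon, lat) in degrees for a pick ray, or null on a miss.
    public static Vector2? Pick(Ray ray)
    {
        BoundingSphere globe = new BoundingSphere(Vector3.Zero, 1f);
        float? distance = ray.Intersects(globe);
        if (distance == null) return null;

        // Point on the sphere surface where the ray hits.
        Vector3 hit = ray.Position + ray.Direction * distance.Value;
        hit.Normalize();

        // Latitude from Y, longitude from X/Z; axis mapping is an assumption.
        float lat = MathHelper.ToDegrees((float)Math.Asin(hit.Y));
        float lon = MathHelper.ToDegrees((float)Math.Atan2(hit.X, hit.Z));
        return new Vector2(lon, lat);
    }
}
```

The pick ray itself would come from unprojecting the mouse position through the current view/projection matrices before handing it to Pick().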

Lots of ideas, little time or money, but Silverlight 5 will make possible a lot of very interesting web apps.

Helpful links:
Silverlight 5 Beta: http://www.silverlight.net/getstarted/silverlight-5-beta/
Runtime: http://go.microsoft.com/fwlink/?LinkId=213904

Silverlight 5 features:
“Silverlight 5 now has built-in XNA 3D graphics API”

XNA: http://msdn.microsoft.com/en-us/aa937791.aspx

NASA Neo: http://localhost/NASA-Neo/publish.htm

Babylon Scenes: Michel Rousseau, courtesy of Bewise.fr

Babylon Engine: David Catuhe / Microsoft France / DPE


“I am real!” said Alice, and began to cry.

“You won’t make yourself a bit realler by crying,” Tweedledee remarked: “there’s nothing to cry about.”

“If I wasn’t real,” Alice said – half-laughing through her tears, it all seemed so ridiculous – “I shouldn’t be able to cry.”

WebBIM? hmmm .. Bing BIM?

BIM or Building Information Modeling isn’t exactly news. I guess I tend to think of it as a part of AEC, starting back in the early generations of CAD, as a way to manage a facility in the aftermath of design and construction. BIM is after all about Buildings, and Buildings have been CAD turf all these many years. Since those early days of FM, CAD has expanded into the current BIM trend with building lifecycle management that is swallowing up larger swathes of surrounding environments.

BIM models are virtual worlds, ‘mirror lands’ of a single building or campus. As BIM grows the isolated BIM models start to aggregate and bump up against the floor of web mapping, the big ‘Mirror Land’. One perspective is to look at a BIM model as a massively detailed Bill of Material (BIM<=>BOM . . bing) in which every component fitted into the model is linked to additional data for specification, design, material, history, approval chains, warranties, and on and on. BIM potentially becomes one massively connected network of hyperlinks with a top level 3D model that mimics the real world.

Sound familiar? – BIM is a sub-internet on the scale of a single building with an interface that has much in common with web mapping. Could this really be yet another reincarnation of Ted Nelson’s epic Xanadu Project, the quixotic precursor of today’s internet?

Although of relatively recent origins, BIM has already spawned its own bureaucratic industry with the likes of NBIMS replete with committees, charters, and governance capable of seriously publishing paragraphs like this:

“NBIM standards will merge data interoperability standards, content values and taxonomies, and process definitions to create standards which define “business views” of information needed to accomplish a particular set of functions as well as the information exchange standards between stakeholders.”

No kidding, “taxonomies”? I’m tempted to believe that ‘Information’ was cleverly inserted to avoid the embarrassing eventuality of an unadorned “Building Model.” Interesting how Claude Shannon seems to crop up in a lot of acronyms these days: BIM, GIS, NBIMS, ICT, even IT?

BIM has more recently appeared on the GIS radar with a flurry of discussion applying GIS methods to BIM. Here, for example are a couple of posts with interesting discussion: SpatialSustain, GeoExpressions, and Vector One. Perhaps this is just another turf battle arising in the CAD versus GIS wars. I leave that for the GISCIers and NBIMSers to decide.

My interest is less in definition and more an observation that buildings too are a part of the rapidly growing “Mirror Land” we call web maps. Competing web maps have driven resolution down to the region of diminishing returns. After all, with 30cm commonly available, is 15cm that much more compelling? However, until recently, Mirror Land has been all about maps and the wide outside world. Even building models ushered in with SketchUp are all about exteriors.

The new frontier of web mapping is interior spaces, “WebBIM,” “Bing BIM” ( Sorry, I just couldn’t resist the impulse). Before committing to national standards, certifications, and governance taxonomies perhaps we need to just play with this a bit.

We have Bing Maps Local introducing restaurant photoscapes. Here’s an example of a restaurant in Boston with a series of arrow connected panoramas for virtual exploration of the interior.

And another recent Bing Maps introduction, mall maps. Who wants to be lost in a mall, or are we lost without our malls?

And then in the Google world, Art Project lets you explore museums. Cool, Streetside Inside!

It’s not obvious how far these interior space additions will go in the future, but they seem to be trial balloons floated to generate feedback on interior extensions to the web map mirror world. At least they are not introduced with full-fledged coverage.

“Real” BIM moves up the dimension chain from 2D to 3D and on to 4-5D, adding time and cost along the way. Mirror Land is still caught in 2-3D. The upcoming Silverlight 5 release will boost things toward 3-4D. Multi-verse theories aside (now here’s a taxonomy to ponder: the Tegmark cosmological taxonomy of universes), in the 3-4D range full WebBIM can hit the streets. In the meantime, the essential element of spatially hyperlinked data is already here for the curious to play with.

So what’s a newbie Web BIMMER to do? The answer is obvious, get a building plan and start trying a few things. Starting out in 2D, here is an approach: get a building floorplan, add it to a Bing Maps interface, and then do something simple with it.

Step 1 – Model a building

CAD is the place to start for buildings. AEC generates floorplans by the boatload and there are even some available on line, but lacking DWG files the next possibility is using CAD as a capture tool. I tried both approaches. My local grocery store has a nice interior directory that is easily captured in AutoCAD by tracing over the image:

Fig 5 – King Soopers Store Directory (foldout brochure)

As an alternative example of a more typical DWG source, the University of Alaska has kindly published their floor plans on the internet.

In both scenarios the key is getting the DWG into something that can readily be used in a web map application. Since I’m fond of the Bing Silverlight Map Control, the goal is DWG to XAML. Similar things can be done with SVG (and will be, as HTML5 increases its reach), and probably even in KML for the Googler minded. At first I thought this would be as easy as starting up Safe Software’s FME, but XAML is not in their list of writers; perhaps soon. Next, I fell back to the venerable DXF text export with some C# code to turn it into XAML. This was actually fairly easy with the DXF capture of my local grocery store, since I had kept the DWG limited to simple closed polylines and text, separated by layer names.
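To give a flavor of that DXF-to-XAML step, here is a sketch that walks the group code/value line pairs of a DXF text export and emits a XAML Polygon for each LWPOLYLINE. It assumes the simple closed polylines described above, assumes each 10 (x) group is immediately followed by its 20 (y) group, and ignores everything else:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Text;

public static class DxfToXaml
{
    public static string Convert(string dxfPath)
    {
        var xaml = new StringBuilder("<Canvas>\n");
        string[] lines = File.ReadAllLines(dxfPath);
        List<string> points = null;

        // DXF text format: alternating group-code line and value line.
        for (int i = 0; i + 1 < lines.Length; i += 2)
        {
            string code = lines[i].Trim(), value = lines[i + 1].Trim();
            if (code == "0")
            {
                // Entity boundary: flush any polyline collected so far.
                if (points != null && points.Count > 2)
                    xaml.AppendFormat("  <Polygon Points=\"{0}\" />\n",
                        string.Join(" ", points.ToArray()));
                points = (value == "LWPOLYLINE") ? new List<string>() : null;
            }
            else if (code == "10" && points != null && i + 3 < lines.Length)
            {
                // Assumes the matching 20 (y) group immediately follows the 10 (x) group.
                points.Add(value + "," + lines[i + 3].Trim());
            }
        }
        return xaml.Append("</Canvas>").ToString();
    }
}
```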

Here is the result:

Now on to more typical sources: DWG files outside of my control. Dealing with arcs and blocks was more than I wanted, so I took an alternative path. FME does have an SVG writer, and SVG is hauntingly similar to XAML (especially haunting to W3C), so writing a simple SVG to XAML translator in C# was easier than any other approach I could think of. There are some XSLT files for SVG to XAML, but instead I took the quick and dirty route of translating SVG text to XAML text in my own translator, giving me some more control.
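The translator amounts to little more than a table of text substitutions over the FME SVG output. The real version needs a few special cases, but a sketch conveys the idea:

```csharp
using System.Collections.Generic;

public static class SvgToXaml
{
    // SVG element/attribute names mapped to their nearest XAML equivalents.
    static readonly Dictionary<string, string> map = new Dictionary<string, string>
    {
        { "<svg",          "<Canvas" },   { "</svg>",    "</Canvas>" },
        { "<g ",           "<Canvas " },  { "</g>",      "</Canvas>" },
        { "<polygon",      "<Polygon" },  { "<polyline", "<Polyline" },
        { "points=",       "Points=" },
        { "fill=",         "Fill=" },
        { "stroke=",       "Stroke=" },
        { "stroke-width=", "StrokeThickness=" },
    };

    public static string Convert(string svg)
    {
        foreach (var pair in map)
            svg = svg.Replace(pair.Key, pair.Value);
        return svg;
    }
}
```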

Here is the result:

Step 2 – Embed XAML into a Bing Map Control interface

First I wrote a small webapp that allows me to zoom Bing maps aerial to a desired building and draw a polyline around its footprint. This polyline is turned into a polygon XAML snippet added to a TextBox suitable for cut/paste as a map link in my Web BIM experiment.

 <m:MapPolygon Tag="KingSoopers2" x:Name="footprint_1"
  Fill="#FFfdd173" Stroke="Black" StrokeThickness="1" Opacity="0.5"
  Locations="..." />

As a demonstration, it was sufficient to simply add this snippet to a map control. The more general method would be to create a SQL table of buildings that includes a geography column of the footprint, suitable for geography STIntersects queries. Using a typical MainMap.ViewChangeEnd event would then let the UI send a WCF query to the table, retrieving footprints falling into the current viewport as a user navigates in map space. However, the real goal is playing with interior plans, so I left the data connector feature for a future enhancement.
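The generalized footprint query might look something like this, with hypothetical table and column names; Query() stands in for ordinary ADO.NET plumbing and Building for a simple data class:

```csharp
public List<Building> GetFootprints(double west, double south, double east, double north)
{
    // Viewport as WKT: exterior ring counter-clockwise, "lon lat" order,
    // as SQL Server geography expects.
    string wkt = string.Format(
        "POLYGON(({0} {1}, {2} {1}, {2} {3}, {0} {3}, {0} {1}))",
        west, south, east, north);

    string sql =
        "SELECT BuildingId, Name, Footprint.STAsText() " +
        "FROM Buildings " +
        "WHERE Footprint.STIntersects(geography::STGeomFromText(@wkt, 4326)) = 1";

    // Query(): hypothetical helper that binds @wkt and maps rows to Building.
    return Query(sql, wkt);
}
```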

In order to find buildings easily, I added some Geocode Service calls for an Address finder. The footprint polygon with its MouseLeftButtonUp event leads to a NavigationService that moves to the desired floor plan page. Again generalizing this would involve keeping these XAML floor plans in a SQL Azure Building table for reference as needed. A XAML canvas containing the floor plans would be stored in a BLOB column for easy query and import to the UI. Supporting other export formats such as XAML, SVG, and KML might best be served by using a GeometryCollection in the SQL table with translation on the query response.

Step 3 – Do something simple with the floorplans

Some useful utilities included nesting my floorplan XAML inside a <local:DragZoomPanel> which is coded to implement some normal pan and zoom functions: pan with left mouse, double click zoom in, and mouse wheel zoom +/-. Mouse over text labeling helps identify features as well. In addition, I was thinking about PlaneProjections for stacking multiple floors so I added some slider binding controls for PlaneProjection attributes, just for experimentation in a debug panel.

Since my original King Soopers image is a store directory an obvious addition is making the plan view into a store directory finder.

I added the store items, along with aisle and shelf polygon ids, to a table accessed through a WCF query. When the floorplan is initialized, a request is made to the SQL Server table and the returned directory items are used to populate a ListBox. You could use binding, but I needed to add some events, so the ListBoxItems are added in code behind.

Mouse events connect directory entries to position polygons in the store shelves. Finally, a MouseLeftButtonUp event illustrates opening a shelf photo view, which is overlaid with a sample link geometry to a Crest product website. Clicks are also associated with camera icons that connect to some sample Photosynths and panoramas of the store interior. Silverlight 5, due out in 2011, promises Silverlight integration of Photosynth controls as well as 3D.

Instead of a store directory the UAA example includes a simple room finder which moves to the corresponding floor and zooms to the room selected. My attempts at using PlaneProjection as a multi floor stack were thwarted by lack of control over camera position. I had hoped to show a stack of floor plans at an oblique view with an animation for the selected floor plan sliding it out of the stack and rotating to planar view. Eventually I’ll have a chance to revisit this in SL5 with its full 3D scene graph support.

Where are we going?

You can see where these primitive experiments are going: move from a Bing Map to a map of interior spaces, where simple locators can reach into asset databases holding information about all our stuff. The stuff in this case is not roads, addresses, and shops, but smaller stuff, human scale stuff that we use in our everyday consumer and corporate lives. Unlike rural cultures, modern western culture is much more about inside than outside. We spend many more hours inside the cube than out, so a Mirror Land where we live is certainly a plausible extension; whether we call it CAD, GIS, BIM, FM, or BINGBIM matters little.

It’s also noteworthy that this gives vendors a chance to purchase more ad opportunities. After all, our technology is here to serve a consumer driven culture and so is Mirror Land.

Interior spaces are a predictable part of Mirror Land and we are already seeing minor extensions. The proprietary and private nature of many interior spaces is likely to leave much out of public mapping. However, retail incentives will be a driving force extending ad opportunities into personal scale mapping. Eventually Mobile will close the loop on interior retail space, providing both consumer location as well as local asset views. Add some mobile camera apps, and augmented reality will combine product databases, individualized coupon links, nutritional content, etc to the shelf in front of you.

On the enterprise side, behind locked BIM doors, Silverlight with its rich authentication framework, but more limited mobile reach, will play a part in proprietary asset management which is a big part of FM, BM, BIM ….. Location of assets is a major part of the drive to efficiency and covers a lot of ground from inventory, to medical equipment, to people.


This small exercise will likely irk true NBIMSers, who will not see much “real” BIM in a few floor plans. So I hasten to add this disclaimer: I’m not really a Web BIMer or even a Bing BIMer, but I am looking forward to the extension of Mirror Land to the interior spaces I generally occupy.

Whether GIS analysis reaches into web mapped interiors is an open question. I’m old enough to remember when there were “CAD Maps” and “Real GIS”, and then “Web Maps” and “Real GIS.” Although GIS (real, virtual, or otherwise) is slowly reaching deeper into Mirror Land, we are still a long way from NBIMS sanctioned “Real” WebBIM with GIS analysis. But then that means it’s still fun, right?

GPS Tracking with a Twitter Twist

It has been a while since I had time to post here. I’ve been busy on a whole raft of POCs using the Silverlight Map Control. Silverlight seems to be a popular way to add map services to enterprise applications; at least it is generating a lot of POC traffic, and most are impressed with its flexibility. Today I wanted to take a couple of minutes and share a peek at a fun project I was working on in between real work. It’s a tracking application with a Twitter twist: vehicle tracking tied into a Tweet community along the vehicle route.

DotNet Rocks is an online radio type show that does interviews with technical folks. They are primarily interested in .NET topics so you won’t find a lot of open source, or even GIS for that matter. Along with the recent release of Visual Studio 2010 and Silverlight 4, Richard Campbell and Carl Franklin, the hosts of the .NETRocks program, are taking a US tour in an RV. They will be hitting a dozen or so major cities around the country while doing their show on the road. They just had a stop in Phoenix last night and as I write this are heading out to Houston.

The DNR RoadTrip project uses a SQL Server 2008 DB for maintaining the GPS history and caching Tweets. On the server we have a couple of one minute polling services that check for any new GPS positions or Tweets. There are some static FixedLocation Feeds that maintain venue locations. The GPS track feeds go into the DB directly. The Tweet feeds often have no geo tags so their datetime stamps are used to look up the GPS record closest in time. The location of that GPS position is then applied to the Tweet record location. Here is an edmx view of the DB, but notice geography data types are still not supported so the Location field is missing.

Fig 3 – .edmx view of the DB (does not show geography Location field)
Fig 4 – MSSMS Database Diagrams (shows Location geography field)

The Tweet feeds are queried by the polling service using a #hashtag Twitter search. The hashtag for this Road Trip project, #dnr_roadtrip, lets anyone interested converse via Twitter. The first Tweet at a track location is available as a rollover. If additional Tweets come in at the same location they are made available by an additional Feed Service call:

// Pull the ATOM feed from the Twitter API using the TweetSharp.com library.
var search = FluentTwitter.CreateRequest()
    .AuthenticateAs("Twitter account", "Twitter key")
    .ContainingHashTag(TwitterHashTag);  // TwitterHashTag set in the config file.

The polling service makes use of the “since_id” as a placeholder. This way only Tweets newer than the previous since_id need to be processed.
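The since_id bookkeeping itself is simple: keep the largest tweet id seen so far, and ask only for newer tweets on the next poll. A sketch of that high-water-mark logic (the surrounding polling service is not shown):

```csharp
using System.Collections.Generic;

// Track the high-water mark of tweet ids between polls so each poll only
// processes tweets newer than the last batch.
static long NextSinceId(long currentSinceId, IEnumerable<long> newTweetIds)
{
    long max = currentSinceId;
    foreach (long id in newTweetIds)
        if (id > max) max = id;  // remember the newest id seen
    return max;
}
```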

While these polling services are running on the server, the Silverlight Bing Maps UI communicates with the DB using a straightforward WCF basicHttpBinding. The WCFRoadShow service provides the DB queries for populating the UI. There are four primary Feed Service WCF calls:

  • GetCurrentLocation() – renders the most recent vehicle location
  • GetFixedLocation() – renders venue stop icons from the DB
  • GetRoute(string updatetype) – renders the GPS tracking icons
  • GetTweet() – renders the Tweet icons related to GPS records

There is one additional Feed Service call which is used to return all Tweets at a given location. This last WCF Service call is triggered by a click event from one of the Tweet icons and is the most complex feed query. It makes use of a SQL Server 2008 geography data type proximity search. The returned Tweet Feed records are then used in a Tweet ListBox using Data Binding.
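That proximity query could be shaped roughly like this on the server (a sketch only; the TweetFeed table and Location column names are assumptions, and a parameterized query would be preferable in practice):

```csharp
using System.Globalization;

// Hypothetical SQL Server proximity filter: find cached tweet feed items whose
// geography Location falls within a given distance (meters) of a point.
static string TweetsNearQuery(double lat, double lon, double meters)
{
    string point = string.Format(CultureInfo.InvariantCulture, "POINT({0} {1})", lon, lat);
    return "SELECT FeedItem FROM TweetFeed " +
           "WHERE Location.STDistance(geography::STGeomFromText('" + point + "', 4326)) < " +
           meters.ToString(CultureInfo.InvariantCulture);
}
```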

List<TweetItem> tweets = new List<TweetItem>();
XNamespace ns = "http://www.w3.org/2005/Atom";
foreach (FeedServiceReference.FeedItem item in e.Result.FeedItems)
{
    if (item.FeedContent != null)
    {
        // Note: namespace required.
        // These are the entry xml elements from Twitter,
        //   saved in the RoadTrip2010 DB as varbinary FeedItems.
        MemoryStream m = new System.IO.MemoryStream(item.FeedContent);
        XDocument Xitem = XDocument.Load(m);
        foreach (var entry in Xitem.Elements(ns + "entry"))
        {
            string tweeturi = entry.Elements(ns + "link").ElementAt(0).Attribute("href").Value;
            string iconuri = entry.Elements(ns + "link").ElementAt(1).Attribute("href").Value;
            string title = entry.Element(ns + "title").Value;
            string author = entry.Element(ns + "author").Element(ns + "name").Value;
            tweets.Add(new TweetItem(tweeturi, iconuri, title, author));
        }
    }
}
((ListBox)_LayoutRoot.FindName("twitterItems")).ItemsSource = tweets;
((Border)_LayoutRoot.FindName("twitterPanelBorder")).Visibility = Visibility.Visible;

You can see from the above code snippet that the Atom xml feed items for the tweets are returned and then attached, using Binding, to a ListBox’s ItemTemplate DataTemplate. This is a handy way of rendering the indeterminate number of Tweet records found at a given location. Also note from the code snippet that it was necessary to prefix the Atom namespace when accessing feed Elements and Attributes. This is because the returned Atom wrapper was discarded and only the entry elements are stored as varbinary FeedItems.

In addition to the DB Tweet feed cache, I decided to make use of Twitter’s newer geotagging capabilities. Geotagging is still relatively new and not used very frequently, but there is a simple Twitter location query that I hooked to a double click map event:

string twitterurl = "http://search.twitter.com/search.atom?geocode=";
twitterurl += string.Format(CultureInfo.InvariantCulture,
    "{0,10:0.000000},{1,11:0.000000},1mi", loc.Latitude, loc.Longitude).Replace(" ", "");

This takes the click map location in latitude longitude and searches for the nearest geotagged Tweets that fall within a 1mi radius.

Adding geotag Tweet queries injects some interest because unrelated geotagged Tweets can be viewed anywhere in the world. It is surprising to me how many people are unabashed about sharing their location with the world. I was also surprised how much street capitalism occupies the lowly Twitter world. Go to any urban area and a few clicks will reveal the whole spectrum of street talk, the good, the bad, and the ugly.

Since a similar Flickr query is available, it was simple to add a geotagged Flickr photo query as well. Double clicking on the map will bring up ListBoxes for all Tweets and Flickr photos that are geotagged and fall within 1 mile of the click spot.

string flickrurl = "http://api.flickr.com/services/rest/?

This Flickr query returns a group of photo xml entries. Each of these has an id that can be used to obtain the actual photo with a second lookup:
string src = "http://farm{0}.static.flickr.com/{1}/{2}_{3}.jpg";
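Filling in that template from a returned photo entry’s farm, server, id, and secret attributes (names per the Flickr REST API photo source URL scheme) looks like:

```csharp
// Compose the static Flickr photo URL from the attributes returned on each
// <photo> entry of a flickr.photos.search response.
static string FlickrPhotoUrl(string farm, string server, string id, string secret)
{
    return string.Format("http://farm{0}.static.flickr.com/{1}/{2}_{3}.jpg",
        farm, server, id, secret);
}
```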

In both cases the returned entries are used to populate a ListBox’s ItemTemplate DataTemplate.

Another interesting aspect of this project is the donation of PreEmptive’s Dotfuscator. This is an injection type tool for monitoring traffic and obtaining detailed analytics about a UI. The Dotfuscator tool is easy to use, especially since PreEmptive was kind enough to provide a detailed config.xml.

Using the tool creates a new xap file of the UI which holds event triggers for different aspects of the code. These events trigger messages back to the PreEmptive’s Runtime Intelligence Service where they are aggregated and then made available for a nice display. You can click on the PreEmptive icon to take a look at the analytics generated on this project. I was impressed with how easy it was to use and how nice it was to get analytics at a highly granular level, down to clicks on individual elements in the xaml.

Since some interested viewers kept the map open for long periods, I also added a timer refresh to update the current position, Tweets, and GPS locations:

refreshTimer = new DispatcherTimer();
refreshTimer.Interval = new TimeSpan(0, 0, 0, 0, 120000); // 120000 milliseconds = 2 min
refreshTimer.Tick += new EventHandler(Refresh_Tick);
refreshTimer.Start();

The key control for using any of the queries for GPS and Tweets is the DateTime Slider. This is a handy control found over in the DeepEarth codeplex project.
There are plenty of other useful controls available in the DeepEarth Bing Maps Toolkit. This DateSlider encapsulates selection of DateTime ranges across a min/max range:

dateRange.Minimum = DateTime.Parse("4/16/2010 00:00:00", CultureInfo.InvariantCulture);
dateRange.RangeStart = DateTime.Now.AddDays(-2);
dateRange.RangeEnd = DateTime.Now.AddHours(1);
dateRange.Maximum = DateTime.Parse("5/9/2010 00:00:00", CultureInfo.InvariantCulture);
startdatetext.Text = dateRange.RangeStart.ToString();
enddatetext.Text = dateRange.RangeEnd.ToString();

In the process of deploying this project I learned about CultureInfo. Surprisingly, for a RoadTrip in the USA there were a number of folks from other countries who wanted to watch Richard and Carl roam across North America. I had to go back through the application adding CultureInfo.InvariantCulture to get things to work for the non ‘en-US’ followers. Getting the app to at least work, even without fully accommodating other languages, turned out to be surprisingly simple. This is one of the pluses of working in the Microsoft world.

Another big plus is browser compatibility. A quick check with browsers I happen to have on hand verified that all of these work fine:

  • Firefox v3.0.17
  • Chrome
  • Safari 4.0.4
  • Opera 10.51
  • IE of course

Don’t even try this with an SVG project.


Twitter is fun, and Twitter with maps is a GIS social network. Some claimed to find it mildly addicting. The internet world is overflowing with connectable APIs, SDKs, etc., and Bing Maps Silverlight makes hooking these all together relatively straightforward. Remember, it’s really just me and a few weeks working part time to put together a pretty interesting Twitter map. I know there are lots of more sophisticated Twitter mapping applications already floating around, but of course the attraction of Silverlight is ease of ownership.

Phone Map Toys

Fig 1 – Windows Phone 7 Series Emulator

New toys last night! Microsoft released the Windows Phone 7 Series Development CTP at MIX2010 yesterday. Tim Heuer has a helpful Getting Started page.

I’m curious to see the resulting acronymization for this, perhaps ‘WiPho7’, hopefully not ‘Wiph7’.

The great news is Silverlight support, with XNA thrown in as well. With Silverlight life is easy; with XNA life is 3D.

Getting started was straightforward. I just followed Tim Heuer’s instructions and then went to Petzold’s blog.

and … “Houston, we have emulation.”

Fig 2 – Windows Phone 7 Series Emulator

After a couple of minutes I had the emulator running a simple Bing Maps image. It’s nothing more than an <Image> element with the new RESTful imagery Beta as its source, but you get the idea.
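For the curious, the image source is roughly of this shape (a sketch following the Bing Maps REST Imagery API’s static map URL; the center point, zoom, and key here are placeholders, not the values from the demo):

```csharp
using System.Globalization;

// Build a Bing Maps REST static imagery URL to use as an Image source.
// Center, zoom level, and key are placeholder values.
static string StaticMapUrl(double lat, double lon, int zoom, string key)
{
    return string.Format(CultureInfo.InvariantCulture,
        "http://dev.virtualearth.net/REST/v1/Imagery/Map/Aerial/{0},{1}/{2}?mapSize=480,800&key={3}",
        lat, lon, zoom, key);
}

// usage: image1.Source = new BitmapImage(new Uri(StaticMapUrl(39.74, -104.98, 10, "YOUR_KEY")));
```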

Some of the nice hardware specifications:

  • 800×480
  • Wi-Fi (Internet Explorer)
  • Camera 5 megapixel
  • Accelerometer
  • Compass
  • Location (longitude, latitude, altitude, course, speed, reverse geocoded address)
  • Speech
  • Vibration
  • Push Notification

MIX2010 showed some devices, but real hardware should be available by this fall.

As far as maps, Charles Petzold has this to say:
“Programs are location-aware, have access to maps and
other data through Bing and Windows Live, and can interface with social networking sites.”


Overall, this fills a big hole in the Silverlight framework. We will at least have a Silverlight target across the full range of platforms by 2011. Android has a big headstart, but I believe the base Silverlight/XNA CLR technology will provide a stronger development platform in the long run. Sorry Apple fans. With some real competitor technologies, iPhone will eventually recede due to Apple’s artsy, control freaky orientation.

Map Clipping with Silverlight

Fig 1 – Clip Map Demo

Bing Maps Silverlight Control has a lot of power, power that is a lot of fun to play with even when not especially practical. This weekend I was presented with a challenge to find a way to show web maps, but restricted to a subset of states, a sub-region. I think the person making the request had more in mind the ability to cut out an arbitrary region and print it for reporting. However, I began to think why be restricted to just the one level of the pyramid. With all of this map data available we should be able to provide something as primitive as coupon clipping, but with a dynamic twist.

Silverlight affords a Clip masking mechanism and it should be simple to use.

1. Region boundary:

The first need is to compile the arbitrary regional boundary. This would be a challenge to code from scratch, but SQL Server spatial already has a function called “STUnion.” PostGIS has had an even more powerful Union function for some time, and Paul Ramsey has pointed out the power of fast cascaded unions. Since I’m interested in seeing what I can do with SQL Server, though, I stayed with the first pass SQL Server approach. But as I looked at STUnion it was quickly apparent that this is a simple geo1.STUnion(geo2) function, and what is needed is an aggregate union. The goal is to union more than just two geography elements at a time, preferably the result of an arbitrary query.

Fortunately there is a codeplex project, SQL Server Tools, which includes the very thing needed, along with some other interesting functions. GeographyAggregateUnion is the function I need; Project/Unproject and AffineTransform:: will have to await another day. This spatial tool kit consists of a dll and a register.sql script that is used to import the functions to an existing DB. Once in place the functions can be used like this:

SELECT dbo.GeographyUnionAggregate(wkb_geometry) as Boundary
FROM [Census2009].[dbo].[states]
WHERE NAME = 'Colorado' OR NAME = 'Utah' OR NAME = 'Wyoming'

Ignoring my confusing choice of geography column name, “wkb_geometry,” this function takes a “geography” column result and provides the spatial union:

Or in my case:

Fig 3 – GeographyUnionAggregate result in SQL Server

Noting that CO, WY, and UT are fairly simple polygons but the result is 1092 nodes, I tacked on a .Reduce() function:
dbo.GeographyUnionAggregate(wkb_geometry).Reduce(10) provides 538 points
dbo.GeographyUnionAggregate(wkb_geometry).Reduce(100) provides 94 points
dbo.GeographyUnionAggregate(wkb_geometry).Reduce(1000) provides 19 points

Since I don’t need much resolution, I went with the 19 points resulting from applying the Douglas-Peucker thinning with a tolerance factor of 1000.
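For reference, the thinning Reduce() performs is the classic Douglas-Peucker algorithm: keep the chord endpoints, find the vertex farthest from the chord, and recurse only where that distance exceeds the tolerance. A minimal planar sketch (SQL Server’s geography version works on the ellipsoid, so this is illustrative only):

```csharp
using System;
using System.Collections.Generic;
using System.Windows;

// Minimal planar Douglas-Peucker line thinning. Vertices farther than
// tolerance from the current chord are kept; everything else is dropped.
static List<Point> DouglasPeucker(List<Point> pts, double tolerance)
{
    if (pts.Count < 3) return new List<Point>(pts);
    int index = 0;
    double maxDist = 0;
    Point a = pts[0], b = pts[pts.Count - 1];
    for (int i = 1; i < pts.Count - 1; i++)
    {
        double d = PerpendicularDistance(pts[i], a, b);
        if (d > maxDist) { maxDist = d; index = i; }
    }
    if (maxDist <= tolerance) return new List<Point> { a, b };
    List<Point> left = DouglasPeucker(pts.GetRange(0, index + 1), tolerance);
    List<Point> right = DouglasPeucker(pts.GetRange(index, pts.Count - index), tolerance);
    left.RemoveAt(left.Count - 1); // avoid duplicating the split vertex
    left.AddRange(right);
    return left;
}

// Distance from p to the infinite line through a and b.
static double PerpendicularDistance(Point p, Point a, Point b)
{
    double dx = b.X - a.X, dy = b.Y - a.Y;
    double len = Math.Sqrt(dx * dx + dy * dy);
    if (len == 0) return Math.Sqrt((p.X - a.X) * (p.X - a.X) + (p.Y - a.Y) * (p.Y - a.Y));
    return Math.Abs(dy * p.X - dx * p.Y + b.X * a.Y - b.Y * a.X) / len;
}
```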

2. Adding the boundary

The next step is adding this union boundary, outlining my three states, to my Silverlight Control. In Silverlight there are many ways to accomplish this, but by far the easiest is to leverage the built-in MapPolygon control and add it to a MapLayer inside the Map hierarchy:

  <m:MapPolygon x:Name="region"
  Stroke="Blue" StrokeThickness="5"
    Locations="37.0003960382868,-114.05060006067 37.000669,-112.540368
    36.997997,-110.47019 36.998906,-108.954404
    41.996568,-112.173352 41.99372,-114.041723
    37.0003960382868,-114.05060006067"/>

Now I have a map with a regional boundary for the combined states, CO, WY, and UT.

3. The third step is to do some clipping with the boundary:

UIElement.Clip is available for every UIElement; however, it is restricted to Geometry clipping elements. Since MapPolygon is not a Geometry, it must be converted to one to be used as a clip element. Furthermore, PathGeometry is very different from something as straightforward as MapPolygon, whose shape is defined by a simple LocationCollection of points.

PathGeometry in XAML:

      <PathGeometry>
          <PathFigure StartPoint="1,1">
              <LineSegment Point="1,2"/>
              <LineSegment Point="2,2"/>
              <LineSegment Point="2,1"/>
              <LineSegment Point="1,1"/>
          </PathFigure>
      </PathGeometry>

The easiest thing then is to take the region MapPolygon boundary and generate the necessary Clip PathGeometry in code behind:

  private void DrawClipFigure()
  {
    if (MainMap.Clip != null) MainMap.Clip = null; // clear any previous clip
    PathFigure clipPathFigure = new PathFigure();
    LocationCollection locs = region.Locations;
    PathSegmentCollection clipPathSegmentCollection = new PathSegmentCollection();
    bool start = true;
    foreach (Location loc in locs)
    {
      Point p = MainMap.LocationToViewportPoint(loc);
      if (start)
      {
        clipPathFigure.StartPoint = p;
        start = false;
      }
      else
      {
        LineSegment clipLineSegment = new LineSegment();
        clipLineSegment.Point = p;
        clipPathSegmentCollection.Add(clipLineSegment);
      }
    }
    clipPathFigure.Segments = clipPathSegmentCollection;
    PathFigureCollection clipPathFigureCollection = new PathFigureCollection();
    clipPathFigureCollection.Add(clipPathFigure);

    PathGeometry clipPathGeometry = new PathGeometry();
    clipPathGeometry.Figures = clipPathFigureCollection;
    MainMap.Clip = clipPathGeometry;
  }

This Clip PathGeometry can be applied to the m:Map named MainMap to mask the underlying Map. This is easily done with a Button Click event. But when navigating with pan and zoom, the clip PathGeometry is not automatically updated. It can be redrawn with each ViewChangeEnd:
private void MainMap_ViewChangeEnd(object sender, MapEventArgs e)
{
  if (MainMap != null)
  {
    if ((bool)ShowBoundary.IsChecked) DrawBoundary();
    if ((bool)ClipBoundary.IsChecked) DrawClipFigure();
  }
}

This will change the clip to match a new position, but only after the fact. The better way is to add the redraw clip to the ViewChangeOnFrame:

MainMap.ViewChangeOnFrame += new EventHandler<MapEventArgs>(MainMap_ViewChangeOnFrame);

private void MainMap_ViewChangeOnFrame(object sender, MapEventArgs e)
{
  if (MainMap != null)
  {
    if ((bool)ShowBoundary.IsChecked) DrawBoundary();
    if ((bool)ClipBoundary.IsChecked) DrawClipFigure();
  }
}

In spite of the constant clip redraw with each frame of the navigation animation, navigation is smooth and not appreciably degraded.


Clipping a map is not terrifically useful, but it is supported by the Silverlight Control and provides another tool in the webapp mapping arsenal. What is very useful are the additional functions found in SQL Server Tools. Since SQL Server spatial is in the very first stage of its life, several useful functions are not found natively in this release. It is nice to have a superset of tools like GeographyAggregateUnion, Project/Unproject, and AffineTransform::.

The more generalized approach would be to allow a user to click on the states he wishes to include in a region, and then have a SQL Server query produce the boundary for the clip action from the resulting state set. This wouldn’t be a difficult extension. If anyone thinks it would be useful, pass me an email and I’ll try a click select option.

Fig 4 – Clip Map Demo

Silverlight Video Pyramids

VideoTile Fig 1
Fig 1 – Silverlight Video Tile Pyramid

Microsoft’s DeepZoom technology capitalizes on tile pyramids for MultiScaleImage elements. It is an impressive technology and is the foundation of Bing Maps Silverlight Control Navigation. I have wondered for some time why the DeepZoom researchers haven’t extended this concept a little. One possible extension that has intriguing possibilities is a MultiScaleVideo element.

The idea seems feasible, breaking each frame into a DeepZoom type pyramid and then refashioning as a pyramid of video codecs. Being impatient, I decided to take an afternoon and try out some proof of concept experiments. Rather than do a frame by frame tiling, I thought I’d see how a pyramid of WMV files could be synchronized as a Grid of MediaElements:

<Grid x:Name="VideoTiles" Background="{StaticResource OnTerraBackgroundBrush}"
    Width="426" Height="240"
    HorizontalAlignment="Center" VerticalAlignment="Center">
  <Grid.ColumnDefinitions>
    <ColumnDefinition/>
    <ColumnDefinition/>
  </Grid.ColumnDefinitions>
  <Grid.RowDefinitions>
    <RowDefinition/>
    <RowDefinition/>
  </Grid.RowDefinitions>
  <MediaElement x:Name="v00" Source="http://az1709.vo.msecnd.net/video/v00.wmv"
    Grid.Column="0" Grid.Row="0" />
  <MediaElement x:Name="v10" Source="http://az1709.vo.msecnd.net/video/v10.wmv"
    Grid.Column="1" Grid.Row="0" />
  <MediaElement x:Name="v11" Source="http://az1709.vo.msecnd.net/video/v11.wmv"
    Grid.Column="1" Grid.Row="1" />
  <MediaElement x:Name="v01" Source="http://az1709.vo.msecnd.net/video/v01.wmv"
    Grid.Column="0" Grid.Row="1" />
</Grid>

Ideally to try out a video tile pyramid I would want something like 4096×4096 since it divides nicely into 256 like the Bing Maps pyramid. However, Codecs are all over the place, and tend to cluster on 4:3 or 16:9 aspect ratios. Red 4K at 4520×2540 is the highest resolution out there, but I didn’t see any way to work with that format in Silverlight. The best resolution sample clip I could find that would work in Silverlight was the WMV HD 1440×1080 Content Showcase. Since I like the Sting background music, I decided on “The Living Sea” IMAX sample.

Not enough resolution to get too far, but I am just looking at multi tile synching for now and two levels will do. I ended up using Expression Encoder 3 to take the original resolution and clip to smaller subsets.

Zoom Level 1:

00 10
01 11

Zoom Level 2:

0000 0010 1000 1010
0001 0011 1001 1011
0100 0110 1100 1110
0101 0111 1101 1111

I encoded Zoom Level 1 as 4 tiles at 640×480 and Zoom Level 2 as 16 tiles at 320×240. I then took all these tiles and dropped them into my Azure CDN video container. Again, this is not a streaming server, but I hoped it would be adequate to at least try this in a limited time frame. Now that I have the video pyramid with two zoom levels I can start trying out some ideas.
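The tile names follow a simple quadtree convention: each zoom level appends a column/row digit pair to the parent tile's name, which is what the zoom-in event handler exploits when it builds the child URIs. A sketch of the naming rule:

```csharp
// Quadtree tile naming: a child tile's name is its parent's name plus a
// column digit and a row digit, e.g. level-1 tile "00" has level-2 children
// "0000", "0010", "0001", and "0011".
static string ChildTileName(string parent, int column, int row)
{
    return parent + column + row;
}
```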

VideoTile Fig 2
Fig 2 – Silverlight Video Tile Pyramid Zoom Level 1

VideoTile Fig 3
Fig 3 – Silverlight Video Tile Pyramid ZoomLevel 2

First, it is fairly difficult to keep the Grid from showing in the layout. Moving around with different sizes can change the border, but there is generally a slight line visible, which can be seen in Fig 2. Even though you don’t see the lines in Fig 3, it also is made up of four tiles. This is set up just like a normal tile pyramid, with four tiles under each upper tile in a kind of quad tree arrangement, in this case very simple with just the 2 levels.

I tied some events to the MediaElements. The main pyramid events are tied to MouseWheel events:

void Video_MouseWheel(object sender, MouseWheelEventArgs e)
{
    int delta = e.Delta;
    if (delta < 0)
    {
        // zoom out: reset all four tiles to the top of the pyramid
        if (e.OriginalSource.GetType() == typeof(MediaElement))
        {
            VideoCnt = 0;
            MediaElement me = e.OriginalSource as MediaElement;
            currentPostion = me.Position;
            v00.Source = new Uri("http://az1709.vo.msecnd.net/video/v00.wmv");
            v10.Source = new Uri("http://az1709.vo.msecnd.net/video/v10.wmv");
            v11.Source = new Uri("http://az1709.vo.msecnd.net/video/v11.wmv");
            v01.Source = new Uri("http://az1709.vo.msecnd.net/video/v01.wmv");
        }
    }
    else if (delta > 0)
    {
        // zoom in: load the four children of the quad under the mouse
        if (e.OriginalSource.GetType() == typeof(MediaElement))
        {
            if (VideoZoomLevel <= maxVideoZoom)
            {
                VideoCnt = 0;
                MediaElement me = e.OriginalSource as MediaElement;
                currentPostion = me.Position;
                string quad = me.Source.LocalPath.Substring(0, me.Source.LocalPath.IndexOf(".wmv"));

                v00.Source = new Uri("http://az1709.vo.msecnd.net" + quad + "00.wmv");
                v10.Source = new Uri("http://az1709.vo.msecnd.net" + quad + "10.wmv");
                v11.Source = new Uri("http://az1709.vo.msecnd.net" + quad + "11.wmv");
                v01.Source = new Uri("http://az1709.vo.msecnd.net" + quad + "01.wmv");
                VideoZoomLevel = maxVideoZoom;
            }
        }
    }
}
I’m just checking the MouseWheel delta to determine whether to go in or out, then looking at the original source to determine which quad the mouse is over, and creating the new URIs for the new zoom level. This is not terribly sophisticated. Not surprisingly, buffering is the killer. There are some MediaOpened and Load events which I attempted to use; however, there were quite a few problems synchronizing the four tiles.

If you can wait patiently for the buffering, it does work after a fashion. Eventually the .wmv files are in the local cache, which helps. However, the whole affair is fragile and erratic.

I didn’t attempt to go any further with panning across Zoom Level 2. I guess buffering was the biggest problem; I’m not sure how much further I could get by moving to a streaming media server or monitoring BufferingProgress with a timer thread.

The experiment may have been a failure, but the concept is nonetheless interesting. Perhaps some day a sophisticated codec will have such things built in.

The high altitude perspective

One aspect which makes MultiScaleVideo interesting is just its additional dimension of interactivity. As film moves inexorably to streaming internet, there is more opportunity for viewer participation. In a pyramid approach focus is in the viewer’s hand. The remote becomes a focus tool that moves in and out of magnification levels as well as panning across the video 2D surface.

In the business world this makes interfaces to continuous data collections even more useful. As in the video feed experiment of the previous post, interfaces can scan at low zoom levels and then zoom in for detailed examination of items of interest. Streetside photos in the example Streetside path navigation already hint at this, using the run navigation to animate a short photo stream while also providing zoom and pan DeepZoom capability.

One of the potential pluses, from a distributor’s point of view, is repeat viewer engagement. Since the viewer is in control, any viewing is potentially unique, which discourages the view-and-discard pattern typical of film videos today. This adds value to potential advertisement revenue.

The film producer also has some incentive with a whole new viewer axis to play with. Now focus and peripheral vision take on another dimension, and focal point clues can create more interaction or in some cases deceptive side trails in a plot. Easter eggs in films provide an avid fan base with even more reason to view a film repeatedly.

Finally, small form factor hand held viewers such as iPhone and Android enabled cell phones can benefit from some form of streaming that allows user focus navigation. The screen in these cases is small enough to warrant some navigation facility. Perhaps IMAX or even Red 4K on handhelds is unreasonable, but certainly allowing navigation makes even the more common HD codecs more useable. A video pyramid of streaming sources could make a compelling difference in the handheld video market.


MultiScaleVideo is a way to enhance user interaction in 2D video. It doesn’t approach the game-level interaction of true 3D scene graphs, but it does add another axis of interest. My primitive exercise was not successful. I am hoping that Microsoft Labs will someday make this feasible and add another type of navigation to the arsenal. Of course, you can imagine the ensuing remote controller wars if DeepZoom Video ever becomes commonplace.

One more thing, check out the cool scaling animation attached to the expander button, courtesy of Silverlight Toolkit Nov drop.

Azure Video and the Silverlight Path

Fig 1 – Video Synched to Map Route

My last project experimented with synching Streetside with a Route Path. There are many other continuous asset collections that can benefit from this approach. Tom Churchill develops very sophisticated software for video camera augmentation, Churchill Navigation. He managed to take some time out of a busy schedule to do a simple drive video for me to experiment with.

In this scenario a mobile video camera is used along with a GPS to produce both a video stream and a simultaneous stream of GPS NMEA records. NMEA GPRMC records include a timestamp and latitude, longitude along with a lot of other information, which I simply discarded in this project.

First the GPS data file was converted into an xml file. I could then use some existing xml deserializer code to pull the positions into a LocationCollection. These were then used in Bing Maps Silverlight Control to produce a route path MapPolyline. In this case I didn’t get fancy and just put the xml in the project as an embedded resource. Certainly it would be easy enough to use a GPS track table from SQL Server, but I kept it simple.

NMEA GPRMC Record Detail:


0   $GPRMC        sentence id - http://www.gpsinformation.org/dale/nmea.htm#RMC
1   050756        time 05:07:56 UTC
2   A             status: Active A | Void V
3   4000.8812     latitude (ddmm.mmmm)
4   N             North
5   10516.7323    longitude (dddmm.mmmm)
6   W             West
7   20.2          ground speed in knots
8   344.8         track angle, degrees true
9   211109        date 11/21/2009
10  10.2          magnetic variation
11  E             East
12  D*0E          checksum
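That conversion step can be sketched in Python (the function and field names are mine; checksum validation and the remaining fields are skipped). The ddmm.mmmm values become the signed decimal degrees carried into the xml:

```python
def parse_gprmc(sentence):
    """Extract time, date, and a signed decimal-degree position from a
    $GPRMC sentence. Only an Active (A) fix is accepted."""
    f = sentence.split(",")
    if f[0] != "$GPRMC" or f[2] != "A":
        raise ValueError("not an active GPRMC fix")

    def to_degrees(value, hemisphere):
        # ddmm.mmmm (lat) / dddmm.mmmm (lon) -> decimal degrees
        degrees, minutes = divmod(float(value), 100)
        d = degrees + minutes / 60.0
        return -d if hemisphere in ("S", "W") else d

    return {
        "time": f[1],                  # hhmmss UTC
        "lat": to_degrees(f[3], f[4]),
        "lon": to_degrees(f[5], f[6]),
        "date": f[9],                  # ddmmyy
    }
```

The sample record above parses to roughly (40.014687, -105.278872), i.e. Boulder.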

XML resulting from above NMEA record

<?xml version="1.0" encoding="utf-16"?>
<ArrayOfLocationData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <Description>Boulder GPS</Description>
  ...
</ArrayOfLocationData>

Once the route path MapPolyline is available I can add a vehicle icon similar to the last streetside project. The icon events are used in the same way to start an icon drag. Mouse moves are handled in the Map to calculate a nearest point on the path and move the icon constrained to the route. The Mouse Button Up event is handled to synch with the video stream. Basically the user drags a vehicle along the route and when the icon is dropped the video moves to that point in the video timeline.

Video is a major focus of Silverlight. Microsoft Expression Encoder 3 has a whole raft of codecs specific to Silverlight. It also includes a dozen or so templates for Silverlight players. These players are all ready to snap in to a project and include all the audio volume, video timeline, play-stop-pause, and other controls found in any media player. The styling however, is different with each template, which makes life endurable for the aesthetically minded. I am not, so the generic gray works fine for my purposes. When faced with fashion or style issues my motto has always been “Nobody will notice,” much to the chagrin of my kids.

Expression Encoder 3 Video Player Templates

  • Archetype
  • BlackGlass
  • Chrome
  • Clean
  • CorporateSilver
  • Expression
  • FrostedGallery
  • GoldenAudio
  • Graphing
  • Jukebox
  • Popup
  • QuikSilver
  • Reflection
  • SL3AudioOnly
  • SL3Gallery
  • SL3Standard

At the source end I needed a reliable video to plug into the player template. I had really wanted to try out the Silverlight Streaming Service, which was offered free for prototype testing. However, this service is being closed down and I unfortunately missed out on that chance.

Tim Heuer’s prolific blog has a nice introduction to an alternative.

As it turns out I was under the mistaken impression that “Silverlight Streaming” was “streaming” video. I guess there was an unfortunate naming choice which you can read about in Tim’s blog post.

As Tim explains, Azure is providing a new Content Delivery Network CTP. This is not streaming, but it is optimized for rapid delivery. CDN is akin to Amazon’s CloudFront: both are edge cache services that boast low latency and high data transfer speeds. Amazon’s CloudFront is still in beta, and Microsoft Azure CDN is at the equivalent stage in Microsoft terminology, CTP or Community Technology Preview. I would not be much surprised to see a streaming media service as part of Azure in the future.

Like CloudFront, Azure CDN is a promoted service from existing Blob storage. This means that using an Azure storage account I can create a Blob storage container, enable CDN, and upload data just like any other blob storage. Enabling CDN can take a while. The notice indicated 60 minutes, which I assume is spent allocating resources and getting the edge caching scripts in place.

I now needed to upload Tom’s video, encoded as 640×480 WMV, to the blob storage account with CDN enabled. The last time I tried this there wasn’t much Azure upload software available. Now, however, there are several options. Cloud Berry Explorer and Cloud Storage Studio were my favorites, but there are a number of codeplex open source projects as well:

  • Azure Storage Explorer
  • Factonomy Azure Utility (Azure Storage Utility)
  • Azure Blob Storage Client

I encountered one problem, however, with all of the above. Once my media file exceeded 64MB, which is just about 5 minutes of video at the encoding I chose, my file uploads consistently failed. It is unclear whether the problem was at my end or in the upload software. I know there is a 64MB limit for simple blob uploads, but most uploads should use block mode rather than simple mode. Block mode goes all the way up to 50GB in the current CTP, which is a very long video (roughly 60 hours at this encoding).

When I get time I’ll return to the PowerShell approach and manually upload my 70MB video sample as blocks. In the meantime I used Expression Encoder to trim the video down to a 4:30 clip for testing purposes.

Here are the published Azure Storage limits once the service is released:

  • 200GB for block blobs (64KB min, 4MB max block size)
  • 64MB limit for a single blob before you need to use blocks
  • 1TB for page blobs
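The block-mode workaround amounts to carving the file into blocks within the 4MB limit, assigning each a base64 block ID, then PUTting the blocks and committing the block list. A minimal Python sketch of the client-side chunking only (the block-ID scheme is made up, and the actual Azure REST calls are omitted):

```python
import base64

def split_into_blocks(stream, block_size=4 * 1024 * 1024):
    """Carve a readable binary stream into blocks of at most
    block_size bytes, pairing each chunk with a base64-encoded
    block ID as a block-blob upload requires."""
    blocks = []
    index = 0
    while True:
        chunk = stream.read(block_size)
        if not chunk:
            break
        # block IDs must be base64 and unique within the blob
        block_id = base64.b64encode(b"block-%06d" % index).decode("ascii")
        blocks.append((block_id, chunk))
        index += 1
    return blocks
```

A 70MB file at the default 4MB block size becomes 18 blocks, each safely under the simple-upload ceiling.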

Fig 2 – Video Synched to Map Route

Now there’s a video in the player. The next task is to make a two-way connection between the route path from the GPS records and the video timeline. I used a 1 sec timer tick to check for changes in the video timeline. This time is then used to loop through the route nodes defined by GPS positions until reaching a time delta larger than or equal to the current video time. At that point the car icon position is updated. This positions the icon within a 1-3 second radius of accuracy. It would be possible to refine this using a segment-percentage interpolation and get down to the 1 sec timer-tick radius of accuracy, but I’m not sure increased accuracy is helpful here.
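The tick handler’s lookup reduces to scanning for the last GPS fix at or before the current video time. A Python sketch, assuming a track of (seconds, lat, lon) tuples sorted by time:

```python
def position_at(time_s, track):
    """Return the last (t, lat, lon) fix at or before time_s.

    This is the coarse lookup described above; interpolating between
    the two bracketing fixes would tighten the 1-3 second accuracy
    radius down to the tick interval."""
    current = track[0]
    for fix in track:
        if fix[0] > time_s:
            break
        current = fix
    return current
```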

The reverse direction uses the current icon position to keep track of the current segment. Since the currentSegment also keeps track of endpoint delta times, it is used to set the video player position with the MouseUp event. Now route path connects to video position and as video is changed the icon location of the route path is also updated. We have a two way synch mode between the map route and the video timeline.

  private void MainMap_MouseMove(object sender, MouseEventArgs e)
  {
    Point p = e.GetPosition(MainMap);
    Location LL = MainMap.ViewportPointToLocation(p);
    LLText.Text = String.Format("{0,10:0.000000},{1,11:0.000000}", LL.Latitude, LL.Longitude);
    if (cardown)
    {
      currentSegment = FindNearestSeg(LL, gpsLocations);
      MapLayer.SetPosition(car, currentSegment.nearestPt);
    }
  }

FindNearestSeg is the same as the previous blog post except I’ve added time properties at each endpoint. These can be used to calculate video time position when needed in the Mouse Up Event.

  private void car_MouseLeftButtonUp(object sender, MouseButtonEventArgs e)
  {
    if (cardown)
    {
      cardown = false;
      VideoBoulder1.Position = currentSegment.t1;
    }
  }

Silverlight UIs for continuous data

This is the second example of using a Silverlight control for route path synching to a continuous data collection stream. In the first case it was synched with Streetside, and in this case to a GPS-tagged video. This type of UI could be useful for a number of scenarios.

There is currently some interest in various combinations of mobile video and LiDAR collections. Here are some examples:

  • Obviously both Google and Microsoft are busy collecting streetside views for ever expanding coverage.
  • Utility corridors (transmission, fiber, and pipelines) are interested in mobile and flight collections for construction, as-built management, impingement detection, as well as regulatory compliance.
    Here are a couple representative examples:
      Baker Mobile
      Mobile Asset Collection MAC Vehicle
  • Railroads have a similar interest
      Lynx Mobile
  • DOTs are investigating mobile video LiDAR collection as well
      Iowa DOT Research


Mobile asset collection is a growing industry. Traditional imagery, video, and now LiDAR components collected in stream mode are becoming more common. Silverlight’s dual media and mapping controls make UIs for managing and interfacing with these types of continuous assets not only possible in a web environment, but actually fun to create.

Streetside in Silverlight

Fig 1 – Streetside in Silverlight

A flood of announcements has been coming out of Bing Land. You can try some of these out here: Bing Maps Explore

Significant is the “Bing Maps Silverlight Control Extended Modes Beta” coming just a week or two after the initial release of the Bing Maps Silverlight Control. I have to get a handle on these names, so regretfully, let’s assume the military style – “BMSC.” These are separate from the larger stable of recently announced Silverlight 4 Beta releases.

Chris Pendleton’s Bing Blog (Why the urge to add “Bop” after this?) has some early detail on how to install, what is included, and how to make use of it.

What is included of note are two new map modes:


Birdseye adds a pseudo 3D imagery mode that lets us move around in oblique space with realistic helicopter view angles. We are given tantalizing heading rotation capabilities, but limited to the compass quadrants, north, south, east, and west.

Streetside is the additional deep zoom street-side photo view, which affords zoom, pan, and a navigation run figure for hustling along the street. Streetside navigation includes circular heading, vertical pitch, and magnified zoom capabilities. Anyone used to DeepZoom effects will find this familiar, but the running figure adds some travel legs to the process.

Neither of these modes is universal at this point. Here is the coverage extent shown in cyan for Streetside photos:

Fig 2 – Streetside current extent

Fig 3 – Streetside current extent at more detail

How to make use of these riches?

After downloading the new BMSC and replacing the old, I then installed the BMSCEM Beta and was ready to do some experiments. One thought that occurred to me was to add some streetside to a route demonstration. The idea is to have an additional panel in a more prosaic geocode/route demo that will show streetside photos along the route path returned by the Bing Maps Web Services Route Service, i.e. BMWSRS (hmmmm, even the military might balk at this?). Fig 1 above shows how this might look in Road mode. This proceeds in several steps:

  1. Geocode with BMWSGS the start and end addresses
  2. Route between the two locations with BMWSRS
  3. Turn the resulting RoutePath into a MapPolyline with Pushpin images at start and end
  4. Add vehicle icon at the start
  5. Add some event handlers to allow users to drag the car icon constrained to the route path
  6. And finally connect the car icon position to streetside photos shown in the panel

Fig 4 – Streetside in Silverlight Aerial mode

In the sample you can see the navigation figure inside the streetside panel that lets users move inside the streetside photo space.

It is interesting to see how route path overlays are affected by the several map modes including Birdseye and Streetside. As you can see from the next couple of figures the Route MapPolyline does not pass through a transform to match either Birdseye or Streetside view.

Fig 5 – Streetside in Silverlight Birdseye mode
(note route path is shifted off to south by about 9 blocks)

Fig 6 – Streetside mode with heading rotated to see a route MapPolyline that is totally unrelated to the photo view

In Fig 5 and Fig 6 above, you can see that the streetside photos and Birdseye perspective are taking advantage of 3D perspective transforms, but the custom layer is aligned with the tilted road mode hidden behind the photo/image. It appears that visibility of other Map Layers will need to be part of a mode switch between Road or Aerial and Birdseye and Streetside.

Aside from that, another interesting aspect of this example is the route constraint algorithm. The user fires a MouseLeftButtonDown event attached to the car icon (MouseLeftButtonDown += car_MouseLeftButtonDown) to start the process, which flags cardown mode true. However, the corresponding button up event is added to the MainMap rather than the icon:

MainMap.MouseLeftButtonUp += car_MouseLeftButtonUp;

This ensures that the user can continue moving the mouse with the left button down even while no longer over the car icon, and that we can keep looking up the nearest point on the route path for constraining our icon to the path.

private void car_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
{
    e.Handled = true;
    cardown = true;
}

private void car_MouseLeftButtonUp(object sender, MouseButtonEventArgs e)
{
    if (cardown) cardown = false;
}

The real action, though, takes place in the MainMap MouseMove event, where we find the nearest point on the route path, set the car map position, set the streetside view location, and finally calculate the streetside view heading from the segment vector:

Point p = e.GetPosition(MainMap);
Microsoft.Maps.MapControl.Location LL = MainMap.ViewportPointToLocation(p);

if (cardown)
{
  Segment s = FindNearestSeg(LL, routePts);

  MapLayer.SetPosition(car, s.nearestPt);
  StreetviewMap.Mode.Center = s.nearestPt;
  double dx = (s.linePt2.Longitude - s.linePt1.Longitude);
  double dy = (s.linePt2.Latitude - s.linePt1.Latitude);
  double angle = 0;

  if (dx > 0) angle = (Math.PI * 0.5) - Math.Atan(dy / dx);
  else if (dx < 0) angle = (Math.PI * 1.5) - Math.Atan(dy / dx);
  else if (dy > 0) angle = 0;
  else if (dy < 0) angle = Math.PI;
  else angle = 0.0; // the 2 points are equal
  angle *= (180 / Math.PI);
  HeadingText.Text = String.Format("{0,10:0.000000}", angle);
  StreetviewMap.Heading = angle;
}
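For reference, the quadrant-by-quadrant branches collapse into a single atan2 expression. A Python sketch, treating lat/lon as planar coordinates the way the C# does:

```python
import math

def heading_degrees(pt1, pt2):
    """Compass heading (0 = north, clockwise, in degrees) of the
    segment from pt1 to pt2, each a (lat, lon) pair."""
    dy = pt2[0] - pt1[0]   # northward component
    dx = pt2[1] - pt1[1]   # eastward component
    # atan2(east, north) handles all four quadrants and the axis cases
    return math.degrees(math.atan2(dx, dy)) % 360.0
```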


Here is some code adapted from my venerable Bowyer and Woodwark “Programmer’s Geometry” book. Yes, there used to be books for this type of algorithm. Basically the code loops through each segment and finds the nearest perpendicular point of intersection using parametric lines. Since I’m only comparing for the least distance, I don’t need to use the Math.Sqrt function to get actual distances.

private Segment FindNearestSeg(Location LL, LocationCollection routePts)
{
  Segment s = new Segment();
  double dist = Double.MaxValue;
  double d = 0;
  double t = 0;

  int pos = 0;
  Location pt0 = new Location();
  foreach (Location pt1 in routePts)
  {
    if (pos++ > 0)
    {
      double XKJ = pt0.Longitude - LL.Longitude;
      double YKJ = pt0.Latitude - LL.Latitude;
      double XLK = pt1.Longitude - pt0.Longitude;
      double YLK = pt1.Latitude - pt0.Latitude;
      double denom = XLK * XLK + YLK * YLK;
      if (denom == 0)
      {
        // degenerate segment: endpoints coincide
        t = 0;
        d = XKJ * XKJ + YKJ * YKJ;
      }
      else
      {
        t = -(XKJ * XLK + YKJ * YLK) / denom;
        t = Math.Min(Math.Max(t, 0.0), 1.0);
        double xf = XKJ + t * XLK;
        double yf = YKJ + t * YLK;
        d = xf * xf + yf * yf;
      }
      if (d < dist)
      {
        dist = d;
        s.nearestPt = new Location(pt0.Latitude + t * YLK, pt0.Longitude + t * XLK);
        s.linePt1 = new Location(pt0.Latitude, pt0.Longitude);
        s.linePt2 = new Location(pt1.Latitude, pt1.Longitude);
      }
    }
    pt0 = pt1;
  }
  return s;
}


Here is a small class to hold the results of the minimal distance segment and intersect point:

public class Segment
{
  public Location linePt1 { get; set; }
  public Location linePt2 { get; set; }
  public Location nearestPt { get; set; }
}

All of this is fired with each mouse move event to scan through the route path nodes and get the segment nearest the current user position. That this all works smoothly is another testimony to the advantage of the CLR over JavaScript on the client.
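Stripped of the Silverlight types, the same parametric scan looks like this in Python (points as (lat, lon) tuples; squared distances throughout, so no sqrt):

```python
def nearest_on_path(p, path):
    """Nearest point on a polyline to p, by projecting p onto each
    segment parametrically and clamping t to [0, 1] -- the same
    Bowyer & Woodwark scheme as FindNearestSeg."""
    best, best_d2 = None, float("inf")
    for (y1, x1), (y2, x2) in zip(path, path[1:]):
        xkj, ykj = x1 - p[1], y1 - p[0]
        xlk, ylk = x2 - x1, y2 - y1
        denom = xlk * xlk + ylk * ylk
        if denom == 0:
            t = 0.0                      # degenerate segment
        else:
            t = min(max(-(xkj * xlk + ykj * ylk) / denom, 0.0), 1.0)
        xf, yf = xkj + t * xlk, ykj + t * ylk
        d2 = xf * xf + yf * yf           # squared distance is enough to compare
        if d2 < best_d2:
            best_d2 = d2
            best = (y1 + t * ylk, x1 + t * xlk)
    return best
```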

The resulting Segment then holds the segment endpoints and the intersection point. The intersection point is available for placing the car icon as well as the StreetviewMap.Mode.Center in our Streetside panel. The segment endpoints are handy for calculating the heading of the streetside view: the compass direction of the path segment vector is computed and applied to StreetviewMap.Heading.

The result slides the car icon along our route path while updating the Streetside View panel with a StreetviewMap heading in the same direction as the route.

Unfortunately, Streetview loading is currently too slow to do a film strip down the route, which is too bad, as my next idea was an animated route with concurrent streetside views from whatever direction (angular delta) the user sets in the Streetside View. Of course the run navigator does a good job of moving in the heading direction; it’s just not in my control. If I could get control of this navigator, I would try a run navigate along a route leg until reaching a waypoint, then turn to the new heading before starting the navigation run again for the next route leg.

However, I just discovered that by adding a ViewChangeEnd event to my streetview panel,
StreetviewMap.ViewChangeEnd += car_Update;
I can use the Center location to update my car location:
MapLayer.SetPosition(car, StreetviewMap.Mode.Center);
Now I can hustle down the street with the Streetside Navigation runner and at least have my car location shown on the map. Cool!


Streetview and Birdseye are nice additions to the list of map modes. They have some restrictions in availability, but where they are available they add some useful features. My experiment just scratches the surface of what is available with the Silverlight Control Extended Modes.

Here are a few problems that surfaced with my example:

  1. The routepath returned by RouteService can be just slightly off the route traveled, and consequently the Streetside location will be incorrect. Also, setting StreetviewMap.Mode.Center to a location will snap to the closest street photo even if it is off route at an intersection. This appears as a series of streetviews for streets crossed by the route instead of the route desired. At least that’s my theory for the occasional odd view of lawns instead of the street ahead.
  2. Map children layers are not automatically transformed to these new views and mode change events will need to manipulate visibility of custom layers.
  3. I had to turn off drag on the Streetview panel so that users can do normal streetview pan and zoom.
  4. It is necessary to check for a valid streetside photo. If the route runs off streetside coverage, the last valid streetside view location remains in the panel regardless of the new icon positions.

Obviously “alternate reality” continues to converge with “ordinary reality” at a steady pace, at least in web mapping worlds.

Silverlight Map Pixel Shader Effects

Fig 1 – Pixel Shader Effects

Sometimes it would be nice to change the standard color scheme of well known map sources. I recently saw a blog post by Ricky Brundritt that shows how to make color changes to Bing Maps tiles. By applying a new skin with a pixel-by-pixel color mapping, some interesting effects are possible. At least it can be a little out of the ordinary.

Silverlight and WPF XAML include an Effect property on all UIElements. There are currently a couple of built-in Silverlight Effects, BlurEffect and DropShadowEffect, which can be added to Silverlight UIElements, including the Silverlight Map control. However, these are not especially interesting, and I assume Microsoft will grow this list over time.

This video post by Mike Taulty goes further by showing how to create your own Pixel Effects:

Creating your own Pixel Effects involves the DirectX SDK and some HLSL code that is a little involved, but fortunately there is a codeplex project which has a set of WPF Pixel Shader Effects.

Here is a video showing the 23 different Pixel Effects currently available: Video WPF Pixel Shader

Here is an example of a DirectX .fx file for an EmbossedEffect:

// WPF ShaderEffect HLSL -- EmbossedEffect

// Shader constant register mappings (scalars - float, double, Point, Color, Point3D, etc.)

float Amount : register(C0);
float Width : register(C1);

// Sampler Inputs (Brushes, including ImplicitInput)

sampler2D implicitInputSampler : register(S0);

// Pixel Shader

float4 main(float2 uv : TEXCOORD) : COLOR
{
   float4 outC = {0.5, 0.5, 0.5, 1.0};

   outC -= tex2D(implicitInputSampler, uv - Width) * Amount;
   outC += tex2D(implicitInputSampler, uv + Width) * Amount;
   outC.rgb = (outC.r + outC.g + outC.b) / 3.0f;

   return outC;
}

This method, float4 main(float2 uv : TEXCOORD) : COLOR, takes the pixel position (u,v) and modifies the color. Once compiled into a .ps file, it can be used as a UIElement Effect property in Silverlight.
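To see what that per-pixel program computes, here is a CPU mimic in Python over a 2D grayscale array (values 0..1); the uv ± Width texture offset is approximated with an integer pixel offset, with edges clamped:

```python
def emboss(img, amount=1.0, width=1):
    """Approximate the EmbossedEffect shader: start from mid-gray,
    subtract the sample shifted by (-width, -width), add the sample
    shifted by (+width, +width), then clamp to [0, 1]. On grayscale
    input the shader's final RGB-averaging step is a no-op, so it
    is skipped."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ym, xm = max(y - width, 0), max(x - width, 0)          # uv - Width
            yp, xp = min(y + width, h - 1), min(x + width, w - 1)  # uv + Width
            v = 0.5 - img[ym][xm] * amount + img[yp][xp] * amount
            out[y][x] = min(max(v, 0.0), 1.0)
    return out
```

A flat region stays mid-gray (the two samples cancel), while a brightness edge pushes the output toward white or black, which is the embossed look in Fig 2 and Fig 3.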

This blog post by Pravinkumar Dabade explains the details of using the Silverlight version of this codeplex Pixel Effect library.

After compiling the WPFSLFx project, it is easy to add a reference to SLShaderEffectLibrary.dll in a Bing Maps Silverlight Control project. Now you can start adding Pixel Effects to the map controls as in this shader:EmbossedEffect:

<m:Map Grid.Column="0" Grid.Row="1" Grid.RowSpan="1" Padding="5">
    <m:Map.Effect>
        <shader:EmbossedEffect x:Name="embossedEffect" Amount="1" Width="0.001" />
    </m:Map.Effect>
</m:Map>

Note that each type of effect has a different set of input parameters. In the case of the EmbossedEffect, an amount and width are required, which I’ve bound to sliders. The simpler InvertEffect or MonochromeEffect require no parameters.

Fig 2 – Pixel Shader EmbossedEffect

Fig 3 – Pixel Shader EmbossedEffect

Multiple Effects can be applied by wrapping a Map inside a Border. Here is a shader:ContrastAdjustEffect + shader:InvertColorEffect:

<Border Visibility="Visible" Grid.Column="0" Grid.Row="1" Grid.RowSpan="1" Padding="5">
    <Border.Effect>
        <shader:ContrastAdjustEffect x:Name="contrastEffect" />
    </Border.Effect>
    <m:Map Grid.Column="0" Grid.Row="1" Grid.RowSpan="1" Padding="5">
        <m:Map.Effect>
            <shader:InvertColorEffect x:Name="invertEffect" />
        </m:Map.Effect>
    </m:Map>
</Border>

Fig 4 – ContrastAdjustEffect + InvertColorEffect

Fig 5 – ContrastAdjustEffect + InvertColorEffect

Effects are applied progressively down the rendering tree, which means additive effects of arbitrary length are possible. You can use any enclosing UIElement, such as a MapLayer, to stack your Effects tree:

<m:Map Visibility="Visible" Grid.Column="0" Grid.Row="1" Grid.RowSpan="1" Padding="5">
    <m:MapLayer x:Name="effectsLayer1">
        <m:MapLayer x:Name="effectsLayer2">
            <m:MapLayer x:Name="effectsLayer3" />
        </m:MapLayer>
    </m:MapLayer>
</m:Map>

Because these UIElement Pixel Effects are compiled to DirectX and run in the client GPU, they are extremely efficient (as long as the client has a GPU). You can even use them on video media.

public void addVideoToMap()
{
    MediaElement video = new MediaElement();
    video.Source = new Uri( ... );  // source URI elided in the original
    video.Opacity = 0.8;
    video.Width = 250;
    video.Height = 200;
    video.Effect = new ShaderEffectLibrary.InvertColorEffect();
    Location location = new Location(39, -105);
    PositionOrigin position = PositionOrigin.Center;
    effectsLayer3.AddChild(video, location, position);
    effectsLayer2.Effect = new ShaderEffectLibrary.MonochromeEffect();
    effectsLayer1.Effect = new ShaderEffectLibrary.SharpenEffect();
}

Fig 6 – Effects stacked on Video Element


Pixel Effects can add some interest to an ordinary map view. With the WPF Pixel Shader Effects library, effects are easy to add to Silverlight Bing Maps Control layers and even media elements. In viewing imagery it is sometimes nice to have effects like SharpenEffect to enhance capture or imagery interpretation, so this is not just about aesthetics.