Silverlight Video Pyramids

Fig 1 – Silverlight Video Tile Pyramid

Microsoft’s DeepZoom technology capitalizes on tile pyramids for MultiScaleImage elements. It is an impressive technology and is the foundation of Bing Maps Silverlight Control Navigation. I have wondered for some time why the DeepZoom researchers haven’t extended this concept a little. One possible extension that has intriguing possibilities is a MultiScaleVideo element.

The idea seems feasible: break each frame into a DeepZoom-type pyramid and then refashion it as a pyramid of video codecs. Being impatient, I decided to take an afternoon and try out some proof of concept experiments. Rather than do a frame-by-frame tiling, I thought I’d see how a pyramid of WMV files could be synchronized as a Grid of MediaElements:

<Grid x:Name="VideoTiles" Background="{StaticResource OnTerraBackgroundBrush}"
      Width="426" Height="240"
      HorizontalAlignment="Center" VerticalAlignment="Center">
  <Grid.ColumnDefinitions>
    <ColumnDefinition/>
    <ColumnDefinition/>
  </Grid.ColumnDefinitions>
  <Grid.RowDefinitions>
    <RowDefinition/>
    <RowDefinition/>
  </Grid.RowDefinitions>
  <MediaElement x:Name="v00" Source=""
      Grid.Column="0" Grid.Row="0" />
  <MediaElement x:Name="v10" Source=""
      Grid.Column="1" Grid.Row="0" />
  <MediaElement x:Name="v11" Source=""
      Grid.Column="1" Grid.Row="1" />
  <MediaElement x:Name="v01" Source=""
      Grid.Column="0" Grid.Row="1" />
</Grid>

Ideally, to try out a video tile pyramid, I would want something like 4096×4096, since it divides nicely into 256 just like the Bing Maps pyramid. However, codecs are all over the place and tend to cluster on 4:3 or 16:9 aspect ratios. Red 4K at 4520×2540 is the highest resolution out there, but I didn’t see any way to work with that format in Silverlight. The best resolution sample clip I could find that would work in Silverlight was the WMV HD 1440×1080 Content Showcase. Since I like the Sting background music, I decided on “The Living Sea” IMAX sample.

Not enough resolution to get too far, but I am just looking at multi-tile synching for now and two levels will do. I ended up using Expression Encoder 3 to take the original resolution and clip it into smaller subsets.

Zoom Level 1:

00 10
01 11

Zoom Level 2:

0000 0010 1000 1010
0001 0011 1001 1011
0100 0110 1100 1110
0101 0111 1101 1111

I encoded Zoom Level 1 as 4 tiles at 640×480 and Zoom Level 2 as 16 tiles at 320×240. I then took all these tiles and dropped them into my Azure CDN video container. Again, this is not a streaming server, but I hoped it would be adequate to at least try this in a limited time frame. Now that I have the video pyramid with two zoom levels I can start trying out some ideas.
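Each upper tile’s quad name prefixes the names of its four children, as the Zoom Level 1 and 2 tables above show. Here is a minimal sketch of that naming scheme in Python; the empty base URL stands in for the Azure CDN container path, which is elided in this post:

```python
def child_tiles(quad):
    # Children of a tile: append the 2x2 suffixes used in the grid layout
    # (00 = top-left, 10 = top-right, 01 = bottom-left, 11 = bottom-right).
    return [quad + suffix for suffix in ("00", "10", "01", "11")]

def tile_uri(base, quad):
    # base stands in for the CDN container URL (elided in the original post)
    return "%s%s.wmv" % (base, quad)

child_tiles("10")   # ['1000', '1010', '1001', '1011']
```

This is the same concatenation the zoom-in MouseWheel handler performs when it appends "00.wmv", "10.wmv", "01.wmv", and "11.wmv" to the current quad name.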

Fig 2 – Silverlight Video Tile Pyramid Zoom Level 1

Fig 3 – Silverlight Video Tile Pyramid Zoom Level 2

First, it is fairly difficult to keep the Grid from showing in the layout. Moving around with different sizes can change the border, but there is generally a slight line visible, as can be seen in Fig 2. Even though you don’t see the lines in Fig 3, it is also made up of four tiles. This is set up just like a normal tile pyramid, with four tiles under each upper tile in a kind of quad tree arrangement, in this case a very simple one with just the 2 levels.

I tied some events to the MediaElements. The main pyramid events are tied to MouseWheel events:

void Video_MouseWheel(object sender, MouseWheelEventArgs e)
{
    int delta = e.Delta;
    if (delta < 0)
    {
        //zoom out
        if (e.OriginalSource.GetType() == typeof(MediaElement))
        {
            VideoCnt = 0;
            MediaElement me = e.OriginalSource as MediaElement;
            currentPosition = me.Position;
            v00.Source = new Uri("");
            v10.Source = new Uri("");
            v11.Source = new Uri("");
            v01.Source = new Uri("");
        }
    }
    else if (delta > 0)
    {
        //zoom in
        if (e.OriginalSource.GetType() == typeof(MediaElement))
        {
            if (VideoZoomLevel <= maxVideoZoom)
            {
                VideoCnt = 0;
                MediaElement me = e.OriginalSource as MediaElement;
                currentPosition = me.Position;
                string quad = me.Source.LocalPath.Substring(0, me.Source.LocalPath.IndexOf(".wmv"));

                v00.Source = new Uri("" + quad + "00.wmv");
                v10.Source = new Uri("" + quad + "10.wmv");
                v11.Source = new Uri("" + quad + "11.wmv");
                v01.Source = new Uri("" + quad + "01.wmv");
                VideoZoomLevel = maxVideoZoom;
            }
        }
    }
}

I’m just checking the MouseWheel delta to determine whether to zoom in or out. Then, looking at the original source, I determine which quad the mouse is over and create the new URIs for the new zoom level. This is not terribly sophisticated. Not surprisingly, the buffering is the killer. There are some MediaOpened and load events I attempted to use; however, there were quite a few problems with synchronizing the four tiles.

If you can patiently wait for the buffering it does work after a fashion. Eventually the wmv are in local cache which helps. However, the whole affair is fragile and erratic.
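One way to coordinate the four tiles would be to gate playback behind a readiness count, starting all four MediaElements only after each has raised its MediaOpened event. The event wiring below is a hypothetical sketch in Python, not the post’s actual code:

```python
class TileSync:
    # Gate playback until every tile reports its media opened.
    def __init__(self, tile_names, start_all):
        self.pending = set(tile_names)
        self.start_all = start_all   # called once, when all tiles are ready

    def media_opened(self, tile_name):
        # Hook one of these calls to each MediaElement's MediaOpened event.
        self.pending.discard(tile_name)
        if not self.pending:
            self.start_all()

started = []
sync = TileSync(["v00", "v10", "v01", "v11"], lambda: started.append("play"))
for name in ["v00", "v10", "v01"]:
    sync.media_opened(name)
# started is still [] here; only after the fourth tile opens does playback begin
sync.media_opened("v11")
```

Even with this kind of gating, each zoom change still forces four fresh buffering cycles, which matches the fragile behavior described above.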

I didn’t attempt to go any further with panning across Zoom Level 2. Buffering was the biggest problem. I’m not sure how much further I could get by moving to a streaming media server or monitoring BufferProgress with a timer thread.

The experiment may have been a failure, but the concept is nonetheless interesting. Perhaps some day a sophisticated codec will have such things built in.

The high altitude perspective

One aspect that makes MultiScaleVideo interesting is its additional dimension of interactivity. As film moves inexorably to streaming internet, there is more opportunity for viewer participation. In a pyramid approach, focus is in the viewer’s hand. The remote becomes a focus tool that moves in and out of magnification levels as well as panning across the 2D video surface.

In the business world this makes interfaces to continuous data collections even more useful. As in the video feed experiment of the previous post, interfaces can scan at low zoom levels and then zoom in for detailed examination of items of interest. Streetside photos in the example Streetside path navigation already hint at this, using the run navigation to animate a short photo stream while also providing zoom and pan DeepZoom capability.

One of the potential pluses for this, from a distributor’s point of view, is repeat viewer engagement. Since the viewer is in control, any viewing is potentially unique, which discourages the typical view-and-discard pattern common with film videos today. This adds value to potential advertisement revenue.

The film producer also has some incentive with a whole new viewer axis to play with. Now focus and peripheral vision take on another dimension, and focal point clues can create more interaction or in some cases deceptive side trails in a plot. Easter eggs in films provide an avid fan base with even more reason to view a film repeatedly.

Finally, small form factor handheld viewers such as iPhone and Android enabled cell phones can benefit from some form of streaming that allows user focus navigation. The screen in these cases is small enough to warrant some navigation facility. Perhaps IMAX or even Red 4K on handhelds is unreasonable, but certainly allowing navigation makes even the more common HD codecs more usable. A video pyramid of streaming sources could make a compelling difference in the handheld video market.


MultiScaleVideo is a way to enhance user interaction in a 2D video. It doesn’t approach the game-level interaction of true 3D scene graphs, but it does add another axis of interest. My primitive exercise was not successful. I am hoping that Microsoft Labs will someday make this feasible and add another type of navigation to the arsenal. Of course, you can imagine the ensuing remote controller wars if DeepZoom video ever becomes commonplace.

One more thing: check out the cool scaling animation attached to the expander button, courtesy of the Silverlight Toolkit November drop.

Azure Video and the Silverlight Path

Fig 1 – Video Synched to Map Route

My last project experimented with synching Streetside with a route path. There are many other continuous asset collections that can benefit from this approach. Tom Churchill develops very sophisticated software for video camera augmentation at Churchill Navigation. He managed to take some time out of a busy schedule to record a simple drive video for me to experiment with.

In this scenario a mobile video camera is used along with a GPS to produce both a video stream and a simultaneous stream of GPS NMEA records. NMEA GPRMC records include a timestamp and latitude, longitude along with a lot of other information, which I simply discarded in this project.

First the GPS data file was converted into an xml file. I could then use some existing xml deserializer code to pull the positions into a LocationCollection. These were then used in Bing Maps Silverlight Control to produce a route path MapPolyline. In this case I didn’t get fancy and just put the xml in the project as an embedded resource. Certainly it would be easy enough to use a GPS track table from SQL Server, but I kept it simple.

NMEA GPRMC Record Detail:


 1  050756      time 05:07:56
 2  A           Active (A | V)
 3  4000.8812   Latitude
 4  N           North
 5  10516.7323  Longitude
 6  W           West
 7  20.2        ground speed in knots
 8  344.8       track angle, degrees true
 9  211109      date 11/21/2009
10  10.2        magnetic variation
11  E           East
12  D*0E        Checksum
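A GPRMC sentence like the one above can be decoded into just the fields this project keeps: time, latitude, and longitude. A minimal sketch, assuming the standard NMEA ddmm.mmmm / dddmm.mmmm coordinate packing; everything else in the record is discarded, as in the post:

```python
def parse_gprmc(sentence):
    # Keep only the fields this project uses: time, latitude, longitude.
    f = sentence.split(",")

    def to_degrees(value, hemisphere, deg_digits):
        # NMEA packs coordinates as [d]ddmm.mmmm: degrees then decimal minutes
        degrees = float(value[:deg_digits])
        minutes = float(value[deg_digits:])
        signed = degrees + minutes / 60.0
        return -signed if hemisphere in ("S", "W") else signed

    return {
        "time": f[1],                      # hhmmss
        "lat": to_degrees(f[3], f[4], 2),  # ddmm.mmmm
        "lon": to_degrees(f[5], f[6], 3),  # dddmm.mmmm
    }

rec = parse_gprmc("$GPRMC,050756,A,4000.8812,N,10516.7323,W,20.2,344.8,211109,10.2,E,D*0E")
# rec["lat"] ≈ 40.0147, rec["lon"] ≈ -105.2789
```

Records decoded this way are what end up serialized into the xml file and then deserialized into the LocationCollection.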

XML resulting from above NMEA record

<?xml version="1.0" encoding="utf-16"?>
<ArrayOfLocationData xmlns:xsi=""
    <Description>Boulder GPS</Description>

Once the route path MapPolyline is available I can add a vehicle icon similar to the last streetside project. The icon events are used in the same way to start an icon drag. Mouse moves are handled in the Map to calculate a nearest point on the path and move the icon constrained to the route. The Mouse Button Up event is handled to synch with the video stream. Basically the user drags a vehicle along the route and when the icon is dropped the video moves to that point in the video timeline.

Video is a major focus of Silverlight. Microsoft Expression Encoder 3 has a whole raft of codecs specific to Silverlight. It also includes a dozen or so templates for Silverlight players. These players are all ready to snap into a project and include all the audio volume, video timeline, play-stop-pause, and other controls found in any media player. The styling, however, is different with each template, which makes life endurable for the aesthetically minded. I am not, so the generic gray works fine for my purposes. When faced with fashion or style issues my motto has always been “Nobody will notice,” much to the chagrin of my kids.

Expression Encoder 3 Video Player Templates

  • Archetype
  • BlackGlass
  • Chrome
  • Clean
  • CorporateSilver
  • Expression
  • FrostedGallery
  • GoldenAudio
  • Graphing
  • Jukebox
  • Popup
  • QuikSilver
  • Reflection
  • SL3AudioOnly
  • SL3Gallery
  • SL3Standard

At the source end I needed a reliable video to plug into the player template. I had really wanted to try out the Silverlight Streaming Service, which was offered free for prototype testing. However, this service is being closed down and I unfortunately missed out on that chance.

Tim Heuer’s prolific blog has a nice introduction to an alternative.

As it turns out I was under the mistaken impression that “Silverlight Streaming” was “streaming” video. I guess there was an unfortunate naming choice which you can read about in Tim’s blog post.

As Tim explains, Azure is providing a new Content Delivery Network CTP. This is not streaming, but it is optimized for rapid delivery. CDN is akin to Amazon’s CloudFront. Both are edge cache services that boast low latency and high data transfer speeds. Amazon’s CloudFront is still in Beta, and Microsoft Azure CDN is at the equivalent stage in Microsoft terminology, CTP or Community Technology Preview. I would not be much surprised to see a streaming media service as part of Azure in the future.

Like CloudFront, Azure CDN is a promoted service from existing Blob storage. This means that using an Azure storage account I can create a Blob storage container, enable CDN, and upload data just like any other blob storage. Enabling CDN can take a while. The notice indicated 60 minutes, which I assume is spent allocating resources and getting the edge caching scripts in place.

I now needed to upload Tom’s video, encoded as 640×480 WMV, to the blob storage account with CDN enabled. The last time I tried this there wasn’t a lot of Azure upload software available. However, now there are lots of options:

CloudBerry Explorer and Cloud Storage Studio were my favorites, but there are lots of CodePlex open source projects as well:

  • Azure Storage Explorer
  • Factonomy Azure Utility (Azure Storage Utility)
  • Azure Blob Storage Client
I encountered one problem, however, with all of the above. Once my media file exceeded 64MB, which is just about 5 min of video at the encoding I chose, my file uploads consistently failed. It is unclear whether the problem was at my end or in the upload software. I know there is a 64MB limit for simple blob uploads, but most uploads would use block mode rather than simple mode. Block mode goes all the way up to 50GB in the current CTP, which is a very long video (roughly 60 hours at this encoding).

When I get time I’ll return to the PowerShell approach and manually upload my 70MB video sample as blocks. In the meantime I used Expression Encoder to trim the video down to a 4:30 min clip for testing purposes.

Here are the published Azure Storage limits once it is released:

200GB for block blobs (64KB min, 4MB max block size)
64MB is the limit for a single blob before you need to use blocks
1TB for page blobs
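Given those limits, the failed 70MB upload is comfortably within block-blob range: at the 4MB maximum block size it needs only 18 blocks. A quick sanity check on the arithmetic:

```python
BLOCK_SIZE = 4 * 1024 * 1024       # 4MB max block size
SIMPLE_LIMIT = 64 * 1024 * 1024    # single-shot (simple) blob upload limit

def blocks_needed(size_bytes):
    # ceiling division: blocks required to upload a blob in block mode
    return -(-size_bytes // BLOCK_SIZE)

video = 70 * 1024 * 1024
needs_blocks = video > SIMPLE_LIMIT    # True: over the simple-upload limit
blocks_needed(video)                   # 18
```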

Fig 2 – Video Synched to Map Route

Now there’s a video in the player. The next task is to make a two-way connection between the route path from the GPS records and the video timeline. I used a 1 sec timer tick to check for changes in the video timeline. This time is then used to loop through the route nodes defined by GPS positions until reaching a time delta larger than or equal to the current video time. At that point the car icon position is updated. This positions the icon within a 1-3 second radius of accuracy. It would be possible to refine this using a segment percentage approach and get down to the 1 sec timer tick radius of accuracy, but I’m not sure increased accuracy is helpful here.
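The timer-tick lookup described above amounts to scanning the route nodes for the first GPS timestamp at or past the current video position. A simplified sketch, with node times expressed as seconds from the start of the clip (an assumption for illustration):

```python
def node_at_time(nodes, t):
    # nodes: (seconds_from_clip_start, location) pairs in route order.
    # Return the location of the first node at or past video position t.
    for ts, loc in nodes:
        if ts >= t:
            return loc
    return nodes[-1][1]   # past the last node: stay at the route end

route = [(0, "A"), (3, "B"), (7, "C")]
node_at_time(route, 2.5)   # 'B'
```

The coarse accuracy falls out of this directly: the icon only ever lands on a recorded node, so the error is bounded by the spacing between GPS fixes.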

The reverse direction uses the current icon position to keep track of the current segment. Since the currentSegment also keeps track of endpoint delta times, it is used to set the video player position in the MouseUp event. Now the route path connects to the video position, and as the video plays the icon location on the route path is updated as well. We have a two-way synch between the map route and the video timeline.

private void MainMap_MouseMove(object sender, MouseEventArgs e)
{
    Point p = e.GetPosition(MainMap);
    Location LL = MainMap.ViewportPointToLocation(p);
    LLText.Text = String.Format("{0,10:0.000000},{1,11:0.000000}", LL.Latitude, LL.Longitude);
    if (cardown)
    {
        currentSegment = FindNearestSeg(LL, gpsLocations);
        MapLayer.SetPosition(car, currentSegment.nearestPt);
    }
}

FindNearestSeg is the same as the previous blog post except I’ve added time properties at each endpoint. These can be used to calculate video time position when needed in the Mouse Up Event.

private void car_MouseLeftButtonUp(object sender, MouseButtonEventArgs e)
{
    if (cardown)
    {
        cardown = false;
        VideoBoulder1.Position = currentSegment.t1;
    }
}

Silverlight UIs for continuous data

This is the second example of using Silverlight Control for route path synching to a continuous data collection stream. In the first case it was synched with Streetside and in this case to a gps video. This type of UI could be useful for a number of scenarios.

There is currently some interest in various combinations of mobile video and LiDAR collections. Here are some examples:

  • Obviously both Google and Microsoft are busy collecting streetside views for ever expanding coverage.
  • Utility corridors, transmission, fiber, and pipelines, are interested in mobile and flight collections for construction, as built management, impingement detection, as well as regulatory compliance.
    Here are a couple representative examples:
      Baker Mobile
      Mobile Asset Collection MAC Vehicle
  • Railroads have a similar interest
      Lynx Mobile
  • DOTs are investigating mobile video LiDAR collection as well
      Iowa DOT Research


Mobile asset collection is a growing industry. Traditional imagery, video, and now LiDAR components collected in stream mode are becoming more common. Silverlight’s dual media and mapping controls make UIs for managing and interfacing with these types of continuous assets not only possible in a web environment, but actually fun to create.

Streetside in Silverlight

Fig 1 – Streetside in Silverlight

A flood of announcements has been coming out of Bing Land. You can try some of these out here: Bing Maps Explore

Significant is the “Bing Maps Silverlight Control Extended Modes Beta and Control” coming just a week or two after the initial release of the Bing Maps Silverlight Control. I have to get a handle on these names, so regretfully, let’s assume the military style: “BMSC.” These are separate from the larger stable of Silverlight 4 Beta releases recently announced.

Chris Pendleton’s Bing Blog (Why the urge to add “Bop” after this?) has some early detail on how to install, what is included, and how to make use of it.

What is included of note are two new map modes:


Birdseye adds a pseudo 3D imagery mode that lets us move around in oblique space with realistic helicopter view angles. We are given tantalizing heading rotation capabilities, but limited to the four compass points: north, south, east, and west.

Streetside adds deep zoom street-side photo views that afford zoom, pan, and a navigation run figure for hustling along the street. Streetside navigation includes circular heading, vertical pitch, and magnified zoom capabilities. Anyone used to DeepZoom effects will find this familiar, but the running figure adds some travel legs to the process.

Neither of these modes is universal at this point. Here is the coverage extent shown in cyan for Streetside photos:

Fig 2 – Streetside current extent

Fig 3 – Streetside current extent at more detail

How to make use of these riches?

After downloading and replacing BMSC with the new BMSC, I then installed the BMSCEM Beta, and I’m ready to do some experiments. One thought that occurred to me was to add some Streetside to a route demonstration. The idea is to have an additional panel in a more prosaic geocode/route demo that will show Streetside photos along the route path returned by the Bing Maps Web Services Route Service, i.e. BMWSRS (hmmm, even the military might balk at this?). Fig 1 above shows how this might look in Road mode. This proceeds in several steps:

  1. Geocode with BMWSGS the start and end addresses
  2. Route between the two locations with BMWSRS
  3. Turn the resulting RoutePath into a MapPolyline with Pushpin images at start and end
  4. Add vehicle icon at the start
  5. Add some event handlers to allow users to drag the car icon constrained to the route path
  6. And finally connect the car icon position to streetside photos shown in the panel

Fig 4 – Streetside in Silverlight Aerial mode

In the sample you can see the navigation figure inside the streetside panel that lets users move inside the streetside photo space.

It is interesting to see how route path overlays are affected by the several map modes including Birdseye and Streetside. As you can see from the next couple of figures the Route MapPolyline does not pass through a transform to match either Birdseye or Streetside view.

Fig 5 – Streetside in Silverlight Birdseye mode
(note route path is shifted off to south by about 9 blocks)

Fig 6 – Streetside mode with heading rotated to see a route MapPolyline that is totally unrelated to the photo view

In Fig 5 and Fig 6 above, you can see that the Streetside photos and Birdseye perspective take advantage of 3D perspective transforms, but the custom layer stays aligned with the tilted Road mode hidden behind the photo/image. It appears that visibility of other map layers will need to be part of a mode switch between Road or Aerial and Birdseye or Streetside.

Aside from that, another interesting aspect of this example is the route constraint algorithm. The user fires a
MouseLeftButtonDown += car_MouseLeftButtonDown
event attached to the car icon to start the process, which flags cardown mode true. However, the corresponding button up event is added to the MainMap rather than the icon:
MainMap.MouseLeftButtonUp += car_MouseLeftButtonUp;

This ensures that the user can continue moving the mouse with the left button down even while no longer over the car icon, and that we can keep looking up the nearest point on the route path to constrain the icon to the path.

private void car_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
{
    e.Handled = true;
    cardown = true;
}

private void car_MouseLeftButtonUp(object sender, MouseButtonEventArgs e)
{
    if (cardown) cardown = false;
}

The real action, though, takes place in the MainMap MouseMove event, where we find the nearest point on the route path, set the car map position, set the streetside view location, and finally calculate the streetside view heading from the segment vector:

Point p = e.GetPosition(MainMap);
Microsoft.Maps.MapControl.Location LL = MainMap.ViewportPointToLocation(p);

if (cardown)
{
    Segment s = FindNearestSeg(LL, routePts);

    MapLayer.SetPosition(car, s.nearestPt);
    StreetviewMap.Mode.Center = s.nearestPt;
    double dx = (s.linePt2.Longitude - s.linePt1.Longitude);
    double dy = (s.linePt2.Latitude - s.linePt1.Latitude);
    double angle = 0;

    if (dx > 0) angle = (Math.PI * 0.5) - Math.Atan(dy / dx);
    else if (dx < 0) angle = (Math.PI * 1.5) - Math.Atan(dy / dx);
    else if (dy > 0) angle = 0;
    else if (dy < 0) angle = Math.PI;
    else angle = 0.0; // the 2 points are equal
    angle *= (180 / Math.PI);
    HeadingText.Text = String.Format("{0,10:0.000000}", angle);
    StreetviewMap.Heading = angle;
}

Here is some code adapted from my venerable Bowyer and Woodwark “Programmer’s Geometry” book. Yes, they used to have books for this type of algorithm. Basically the code loops through each segment and finds the nearest perpendicular point of intersection using parametric lines. Since I’m only comparing for the least distance, I don’t need to use the Math.Sqrt function to get actual distances.

private Segment FindNearestSeg(Location LL, LocationCollection routePts)
{
  Segment s = new Segment();
  double dist = Double.MaxValue;
  double d = 0;
  double t = 0;

  int pos = 0;
  Location pt0 = new Location();
  foreach (Location pt1 in routePts)
  {
    if (pos++ > 0)
    {
      double XKJ = pt0.Longitude - LL.Longitude;
      double YKJ = pt0.Latitude - LL.Latitude;
      double XLK = pt1.Longitude - pt0.Longitude;
      double YLK = pt1.Latitude - pt0.Latitude;
      double denom = XLK * XLK + YLK * YLK;
      if (denom == 0)
      {
        d = (XKJ * XKJ + YKJ * YKJ);
      }
      else
      {
        t = -(XKJ * XLK + YKJ * YLK) / denom;
        t = Math.Min(Math.Max(t, 0.0), 1.0);
        double xf = XKJ + t * XLK;
        double yf = YKJ + t * YLK;
        d = xf * xf + yf * yf;
      }
      if (d < dist)
      {
        dist = d;
        s.nearestPt = new Location(pt0.Latitude + t * YLK, pt0.Longitude + t * XLK);
        s.linePt1 = new Location(pt0.Latitude, pt0.Longitude);
        s.linePt2 = new Location(pt1.Latitude, pt1.Longitude);
      }
    }
    pt0 = pt1;
  }

  return s;
}


Here is a small class to hold the results of the minimal distance segment and intersect point:

public class Segment
{
    public Location linePt1 { get; set; }
    public Location linePt2 { get; set; }
    public Location nearestPt { get; set; }
}

All of this is fired with each mouse move event to scan through the route path nodes and get the segment nearest to the current user position. That this all works smoothly is another testimony to the advantage of the CLR over JavaScript on the client.

The resulting Segment then holds the segment endpoints and the intersection point. The intersection point is available for placing the car icon position as well as the StreetviewMap.Mode.Center in our Streetside panel. The additional segment endpoints are handy for calculating the heading of the Streetside view direction. This is used to point along the vector direction of the path segment by calculating the compass direction and applying it to StreetviewMap.Heading.

The result slides the car icon along our route path while updating the Streetside View panel with a StreetviewMap heading in the same direction as the route.

Unfortunately, the Streetview loading is currently too slow to do a film strip down the route, which is too bad, as my next idea was an animated route with concurrent Streetside views from whatever direction (angular delta) the user sets in the Streetside view. Of course the run navigator does a good job of moving in the heading direction; it’s just not in my control. If I could get control of this navigator, I would try a run navigate along a route leg until reaching a waypoint, and then turn to a new heading before starting the navigation run again for the next route leg.

However, I just discovered that by adding a ViewChangeEnd event to my streetview panel,
StreetviewMap.ViewChangeEnd += car_Update;
I can use the Center location to update my car location:
MapLayer.SetPosition(car, StreetviewMap.Mode.Center);
Now I can hustle down the street with the Streetside navigation runner and at least have my car location shown on the map. Cool!


Streetview and Birdseye are nice additions to the list of map modes. They have some restrictions in availability, but where they are available they add some useful features. My experiment just scratches the surface of what is available with the Silverlight Control Extended Modes.

Here are a few problems that surfaced with my example:

  1. The route path returned by RouteService can be just slightly off the route traveled, and consequently the Streetside location will be incorrect. Also, setting StreetviewMap.Mode.Center to a location will snap to the closest street photo even if it is off route at an intersection. This appears as a series of streetviews for streets crossed by a route instead of on the route desired. At least that’s my theory for the occasional odd view of lawns instead of the street ahead.
  2. Map children layers are not automatically transformed to these new views and mode change events will need to manipulate visibility of custom layers.
  3. I had to turn off drag on the Streetview panel so that users can do normal streetview pan and zoom.
  4. It is necessary to check for a valid streetside photo. If the route runs off streetside coverage, the last valid streetside view location remains in the panel regardless of the new icon positions.

Obviously “alternate reality” continues to converge with “ordinary reality” at a steady pace, at least in web mapping worlds.

Silverlight Map Pixel Shader Effects

Fig 1 – Pixel Shader Effects

Sometimes it would be nice to change the standard color scheme of well-known map sources. I recently saw a blog post by Ricky Brundritt that shows how to make color map changes to Bing Maps tiles. By applying a new skin with a pixel-by-pixel color mapping, some interesting effects are possible. At least it can be a little out of the ordinary.

Silverlight and WPF XAML include an Effect property for all UIElements. There are currently a couple of built-in Silverlight effects, BlurEffect and DropShadowEffect, which can be added to Silverlight UIElements, including the Silverlight Map control. However, these are not especially interesting, and I assume Microsoft will grow this list over time.

This video post by Mike Taulty goes further by showing how to create your own Pixel Effects:

Creating your own Pixel Effects involves the DirectX SDK and some DirectX code that is a little involved, but fortunately there is a CodePlex project with a set of WPF Pixel Shader Effects.

Here is a video showing the 23 different Pixel Effects currently available: Video WPF Pixel Shader

Here is the DirectX .fx file for EmbossedEffect:

// WPF ShaderEffect HLSL -- EmbossedEffect

// Shader constant register mappings (scalars - float, double, Point, Color, Point3D, etc.)

float Amount : register(C0);
float Width : register(C1);

// Sampler Inputs (Brushes, including ImplicitInput)

sampler2D implicitInputSampler : register(S0);

// Pixel Shader

float4 main(float2 uv : TEXCOORD) : COLOR
{
   float4 outC = {0.5, 0.5, 0.5, 1.0};

   outC -= tex2D(implicitInputSampler, uv - Width) * Amount;
   outC += tex2D(implicitInputSampler, uv + Width) * Amount;
   outC.rgb = (outC.r + outC.g + outC.b) / 3.0f;

   return outC;
}

This method, float4 main(float2 uv : TEXCOORD) : COLOR, takes the pixel position (u,v) and modifies its color. Once compiled into a .ps file, it can be used as a UIElement Effect property in Silverlight.
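The compiled shader also needs a managed wrapper class derived from ShaderEffect before XAML can reference it. The library’s effects follow the standard WPF/Silverlight pattern sketched below; note this is my own illustrative sketch, not the library’s actual source — the class name, assembly name, and .ps resource path are placeholders. (The .fx source is compiled with the DirectX SDK’s fxc compiler, e.g. fxc /T ps_2_0 /E main /Fo Embossed.ps Embossed.fx.)

```csharp
using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Effects;

// Sketch of a ShaderEffect wrapper for the embossed .ps shader.
// Names and the .ps resource path are illustrative placeholders.
public class EmbossedEffect : ShaderEffect
{
    // S0 — the implicit input sampler declared in the .fx file
    public static readonly DependencyProperty InputProperty =
        ShaderEffect.RegisterPixelShaderSamplerProperty(
            "Input", typeof(EmbossedEffect), 0);

    // C0 — the Amount shader constant
    public static readonly DependencyProperty AmountProperty =
        DependencyProperty.Register("Amount", typeof(double), typeof(EmbossedEffect),
            new PropertyMetadata(1.0, PixelShaderConstantCallback(0)));

    // C1 — the Width shader constant
    public static readonly DependencyProperty WidthProperty =
        DependencyProperty.Register("Width", typeof(double), typeof(EmbossedEffect),
            new PropertyMetadata(0.001, PixelShaderConstantCallback(1)));

    public EmbossedEffect()
    {
        PixelShader = new PixelShader()
        {
            // placeholder assembly/resource path
            UriSource = new Uri("/MyAssembly;component/Embossed.ps", UriKind.Relative)
        };
        UpdateShaderValue(InputProperty);
        UpdateShaderValue(AmountProperty);
        UpdateShaderValue(WidthProperty);
    }

    public Brush Input
    {
        get { return (Brush)GetValue(InputProperty); }
        set { SetValue(InputProperty, value); }
    }

    public double Amount
    {
        get { return (double)GetValue(AmountProperty); }
        set { SetValue(AmountProperty, value); }
    }

    public double Width
    {
        get { return (double)GetValue(WidthProperty); }
        set { SetValue(WidthProperty, value); }
    }
}
```

The PixelShaderConstantCallback register indices must line up with the register(C0)/register(C1) declarations in the .fx file, which is why Amount and Width map to 0 and 1 here.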

This blog post by Pravinkumar Dabade explains the details of using the Silverlight version of this codeplex Pixel Effect library.

After compiling the WPFSLFx project, it is easy to add a reference to SLShaderEffectLibrary.dll in a Bing Maps Silverlight Control project. Then you can start adding Pixel Effects to the map controls, as in this shader:EmbossedEffect:

<m:Map Grid.Column="0" Grid.Row="1" Grid.RowSpan="1" Padding="5">
    <m:Map.Effect>
        <shader:EmbossedEffect x:Name="embossedEffect" Amount="1" Width="0.001" />
    </m:Map.Effect>
</m:Map>

Note that each type of effect has its own set of input parameters. The EmbossedEffect requires an Amount and a Width, which I’ve bound to sliders. The simpler InvertColorEffect or MonochromeEffect require no parameters at all.
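The slider hookup can be as simple as two Slider elements whose ValueChanged handlers push new values into the named effect. A minimal code-behind sketch, assuming the embossedEffect name from the XAML above (the slider names and ranges are my own choices):

```csharp
// Code-behind handlers for two sliders, e.g.
// <Slider x:Name="AmountSlider" Minimum="0" Maximum="2" Value="1"
//         ValueChanged="AmountSlider_ValueChanged" />
// <Slider x:Name="WidthSlider" Minimum="0" Maximum="0.01" Value="0.001"
//         ValueChanged="WidthSlider_ValueChanged" />

private void AmountSlider_ValueChanged(object sender,
    RoutedPropertyChangedEventArgs<double> e)
{
    embossedEffect.Amount = e.NewValue;
}

private void WidthSlider_ValueChanged(object sender,
    RoutedPropertyChangedEventArgs<double> e)
{
    embossedEffect.Width = e.NewValue;
}
```

Because the effect’s Amount and Width are dependency properties with PixelShaderConstantCallback metadata, each change is pushed straight to the GPU shader constants and the map re-renders immediately.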

Fig 2 – Pixel Shader EmbossedEffect

Fig 3 – Pixel Shader EmbossedEffect

Multiple Effects can be applied by wrapping a Map inside a Border. Here is a shader:ContrastAdjustEffect + shader:InvertColorEffect:

<Border Visibility="Visible" Grid.Column="0" Grid.Row="1" Grid.RowSpan="1" Padding="5">
    <Border.Effect>
        <shader:ContrastAdjustEffect x:Name="contrastEffect" />
    </Border.Effect>
    <m:Map Grid.Column="0" Grid.Row="1" Grid.RowSpan="1" Padding="5">
        <m:Map.Effect>
            <shader:InvertColorEffect x:Name="invertEffect" />
        </m:Map.Effect>
    </m:Map>
</Border>

Fig 4 – ContrastAdjustEffect + InvertColorEffect

Fig 5 – ContrastAdjustEffect + InvertColorEffect

Effects are applied progressively down the rendering tree, which means additive effects of arbitrary length are possible. You can use any enclosing UIElement, such as a MapLayer, to stack your Effects tree:

<m:Map Visibility="Visible"
       Grid.Column="0" Grid.Row="1" Grid.RowSpan="1" Padding="5">
    <m:MapLayer x:Name="effectsLayer1">
        <m:MapLayer x:Name="effectsLayer2">
            <m:MapLayer x:Name="effectsLayer3">
            </m:MapLayer>
        </m:MapLayer>
    </m:MapLayer>
</m:Map>


Because these UIElement Pixel Effects are compiled to DirectX and run in the client GPU, they are extremely efficient (as long as the client has a GPU). You can even use them on video media.

public void addVideoToMap()
{
    MediaElement video = new MediaElement();
    // placeholder URI — the source URL is omitted in the original
    video.Source = new Uri("http://www.example.com/video.wmv");
    video.Opacity = 0.8;
    video.Width = 250;
    video.Height = 200;
    video.Effect = new ShaderEffectLibrary.InvertColorEffect();
    Location location = new Location(39, -105);
    PositionOrigin position = PositionOrigin.Center;
    effectsLayer3.AddChild(video, location, position);
    effectsLayer2.Effect = new ShaderEffectLibrary.MonochromeEffect();
    effectsLayer1.Effect = new ShaderEffectLibrary.SharpenEffect();
}
Fig 6 – Effects stacked on Video Element


Pixel Effects can add some interest to an ordinary map view. With the WPF Pixel Shader Effects library, effects are easy to add to Bing Maps Silverlight Control layers and even media elements. When viewing imagery, an effect like ShaderEffectLibrary.SharpenEffect can enhance capture or imagery interpretation, so this is not just about aesthetics.