Mirror Land and the Last Foot


Fig 1 – Bing Maps Streetside

I know 2010 started yesterday but I slept in. I’m just a day late.

Even a day late perhaps it’s profitable to step back and muse over larger technology trends. I’ve worked through several technology tides in the past 35 years. I regretfully admit that I never successfully absorbed the “Gang of Four” Design Patterns. My penchant for the abstract is relatively low. I learn by doing concrete projects, and probably fall into the amateur programming category often dismissed by the “professional” programming cognoscenti. However, having lived through a bit of history already, I believe I can recognize an occasional technology trend without benefit of a Harvard degree or even a “Professional GIS certificate.”

What has been striking me of late is the growth of mirror realities. I’m not talking about bizarre multiverse theories popular in modern metaphysical cosmology, nor parallel universes of the many worlds quantum mechanics interpretation, or even virtual world phenoms such as Second Life or The Sims. I’m just looking at the mundane evolution of internet mapping.


Fig 2 – Google Maps Street View

One of my first mapping projects, back in the late '80s, was converting the very sparse CIA world boundary file, WDBI, into an AutoCAD 3D Globe (WDBI came on a data tape reel). At the time it was novel enough, especially in the CAD world, to warrant a full color front cover of Cadence Magazine. I had lots of fun creating some simple AutoLisp scripts to spin the world view and add vector point and line features. I bring it up because at that point in history, prior to the big internet boom, mapping was a coarse affair at global scales. This was only a primitive wire frame, ethereal and transparent, yet even then quite beautiful, at least to map nerds.


Fig 3 – Antique AutoCAD Globe WDBI

Of course, at that time scientists and GIS people were already playing with multi-million-dollar image acquisitions, but generally in fairly small areas. Landsat had been launched more than a decade earlier, but few people had the computing resources to play in that arena. Then too, the US military was the main driving force, with DARPA technology undreamed of by the rest of us. A very large gap existed between global and local scales, at least for the consumer masses. This access gap continued really until Keyhole’s acquisition by Google. There were regional initiatives like USGS DLG/DEM, Ordnance Survey, and Census TIGER. However, computer earth models were fragmented affairs, evolving relatively slowly down from satellite and up from aerial, until suddenly the entire gap was filled by Google, and the repercussions are still very much evident.

Internet map coverage is now both global and local, and everything in between: a mirror land. The full spectrum of coverage is complete. Or is it? A friend remarked recently that recent talk of mobile LiDAR echoes earlier discussions of the “Last Mile,” when the Baby Bells and Cable Comms were competing for market share of internet connectivity. You can glimpse the same echo as Microsoft and Google jockey for market share of local street resolution, StreetView vs Streetside. The trend is from a global coarse model to a full scale local model, a trend now pushing out into the “Last Foot.” Alternate map models of the real world are diving into the human dimension: feet and inches not miles, the detail of the street, my local personal world.

LiDAR contributes to this mirror land by adding a partial 3rd dimension to the flat photo world of street side capture. LiDAR backing can provide the swivel effects and the icon switching surface intelligence found in StreetView and Streetside. LiDAR capture is capable of much more, but internet UIs are still playing catchup in the 3rd dimension.

The question arises whether GIS or AEC will be the driver in this new human dimension “mirror land.” Traditionally AEC held the cards at feet and inches while GIS aerial platforms held sway in miles. MAC, Mobile Asset Collection, adds a middle way with inch level resolution capability available for miles.


Fig 4 – Video Synched to Map Route

Whoever gets the dollars for capture of the last foot, in the end it all winds up inside an internet mirror land.

We are glimpsing a view of an alternate mirror reality that is not a Matrix sci-fi fantasy, but an ordinary part of internet connected life. Streetside and Street View push this mirror land down to the sidewalk.

On another vector, cell phone locations are adding the first primitive time dimension, with life tracks now possible for millions. Realtime point location is a first step, but life track video stitched on the fly into Photosynth streams lends credence to a street side contingency.

The location hype is really about linking those massive market demographic archives to a virtual world and then connecting that information back to a local personal world. As Sean Gillies pointed out recently in “Utopia or Dystopia,” there are pros and cons. But when have a few “cons” with axes ever really made a difference to the utopian future of technology?

With that thought in mind why not push a little on the future and look where the “Last Millimeter” takes us?
    BCI Brain Computer Interface
    Neuronal Prosthetics


Fig 5 – Brain Computer Interface

Eye tracking HUD (not Housing and Urban Development exactly)


Fig 6 – HUD phone?

I’m afraid the “Last Millimeter” is not a pretty thought, but at least an interesting one.

Summary

Just a few technology trends to keep an eye on. When they get out the drill for that last millimeter, perhaps it’s time to pick up an ax or two.

Silverlight Map Pixel Shader Effects


Fig 1 – Pixel Shader Effects

Sometimes it would be nice to change the standard color scheme of well known map sources. I recently saw a blog post by Ricky Brundritt that shows how to make color map changes to Bing Maps tiles. By applying a new skin with a pixel by pixel color mapping, there are some interesting effects possible. At least it can be a little out of the ordinary.

Silverlight and WPF XAML include an Effect property for all UIElements. There are currently a couple of built-in Silverlight Effects, BlurEffect and DropShadowEffect, which can be added to Silverlight UIElements including the Silverlight Map control. However, these are not especially interesting, and I assume Microsoft will grow this list over time.

This video post by Mike Taulty goes further by showing how to create your own Pixel Effects:
http://www.silverlight.net/learn/videos/all/pixel-effects/

Creating your own Pixel Effects involves the DirectX SDK and some DirectX code that is a little involved, but fortunately there is a CodePlex project with a ready-made set of WPF Pixel Shader Effects.

Here is a video showing the 23 different Pixel Effects currently available: Video WPF Pixel Shader

Here is the DirectX .fx file for EmbossedEffect:

//--------------------------------------------------------------------------------------
//
// WPF ShaderEffect HLSL -- EmbossedEffect
//
//--------------------------------------------------------------------------------------

//-----------------------------------------------------------------------------------------
// Shader constant register mappings (scalars - float, double, Point, Color, Point3D, etc.)
//-----------------------------------------------------------------------------------------

float Amount : register(C0);
float Width : register(C1);

//--------------------------------------------------------------------------------------
// Sampler Inputs (Brushes, including ImplicitInput)
//--------------------------------------------------------------------------------------

sampler2D implicitInputSampler : register(S0);

//--------------------------------------------------------------------------------------
// Pixel Shader
//--------------------------------------------------------------------------------------

float4 main(float2 uv : TEXCOORD) : COLOR
{
   float4 outC = {0.5, 0.5, 0.5, 1.0};

   outC -= tex2D(implicitInputSampler, uv - Width) * Amount;
   outC += tex2D(implicitInputSampler, uv + Width) * Amount;
   outC.rgb = (outC.r + outC.g + outC.b) / 3.0f;

   return outC;
}

This method, float4 main(float2 uv : TEXCOORD) : COLOR, takes the pixel position (u,v) and modifies the color. Once this has been compiled into a .ps file, it can be used as a UIElement Effect property in Silverlight.
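For reference, here is a minimal sketch of how a compiled .ps file is typically wrapped as a Silverlight ShaderEffect. The resource URI and default values are my assumptions for illustration; the CodePlex library ships its own versions of these wrapper classes.

// Sketch of a ShaderEffect wrapper for the compiled EmbossedEffect.ps
// (resource path and defaults assumed, not taken from the library source)
using System;
using System.Windows;
using System.Windows.Media.Effects;

public class EmbossedEffect : ShaderEffect
{
    // Amount maps to HLSL constant register C0, Width to C1 (see the .fx above)
    public static readonly DependencyProperty AmountProperty =
        DependencyProperty.Register("Amount", typeof(double), typeof(EmbossedEffect),
            new PropertyMetadata(1.0, PixelShaderConstantCallback(0)));

    public static readonly DependencyProperty WidthProperty =
        DependencyProperty.Register("Width", typeof(double), typeof(EmbossedEffect),
            new PropertyMetadata(0.001, PixelShaderConstantCallback(1)));

    public EmbossedEffect()
    {
        // EmbossedEffect.ps is the compiled pixel shader embedded as a resource
        PixelShader = new PixelShader
        {
            UriSource = new Uri("/SLShaderEffectLibrary;component/EmbossedEffect.ps",
                UriKind.Relative)
        };
        UpdateShaderValue(AmountProperty);
        UpdateShaderValue(WidthProperty);
    }

    public double Amount
    {
        get { return (double)GetValue(AmountProperty); }
        set { SetValue(AmountProperty, value); }
    }

    public double Width
    {
        get { return (double)GetValue(WidthProperty); }
        set { SetValue(WidthProperty, value); }
    }
}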

This blog post by Pravinkumar Dabade explains the details of using the Silverlight version of this codeplex Pixel Effect library.
http://pkrd.blogspot.com/2009/07/23-pixel-shader-effects-in-silverlight.html

After compiling the WPFSLFx project, it is easy to add a reference to SLShaderEffectLibrary.dll in a Bing Maps Silverlight Control project. Now you can start adding Pixel Effects to the map controls as in this shader:EmbossedEffect:

 <m:Map
        Name="EmbossedMap"
        CredentialsProvider=""
        NavigationVisibility="Visible"
        Grid.Column="0" Grid.Row="1" Grid.RowSpan="1" Padding="5"
        Mode="Aerial">
     <m:Map.Effect>
         <shader:EmbossedEffect x:Name="embossedEffect" Amount="1" Width="0.001" />
     </m:Map.Effect>
 </m:Map>

Note that each type of effect has its own set of input parameters. In the case of the EmbossedEffect, an Amount and a Width are required, which I’ve bound to sliders. The simpler InvertEffect or MonochromeEffect require no parameters.


Fig 2 – Pixel Shader EmbossedEffect



Fig 3 – Pixel Shader EmbossedEffect

Multiple Effects can be applied by wrapping a Map inside a Border. Here is a shader:ContrastAdjustEffect + shader:InvertColorEffect:

<Border Visibility="Visible" Grid.Column="0" Grid.Row="1" Grid.RowSpan="1" Padding="5" >
    <Border.Effect>
        <shader:ContrastAdjustEffect
            x:Name="contrastEffect"
            Contrast="1"
            Brightness="0" />
    </Border.Effect>

    <m:Map  Visibility="Collapsed"
       Name="InvertMap"
       CredentialsProvider=""
       NavigationVisibility="Visible"
       Grid.Column="0" Grid.Row="1" Grid.RowSpan="1" Padding="5"
       Mode="Road">
        <m:Map.Effect>
            <shader:InvertColorEffect  x:Name="invertEffect" />
        </m:Map.Effect>
    </m:Map>
</Border>



Fig 4 – ContrastAdjustEffect + InvertColorEffect



Fig 5 – ContrastAdjustEffect + InvertColorEffect

Effects are applied progressively down the rendering tree, which means additive effect chains of arbitrary length are possible. You can use any enclosing UIElement, such as a MapLayer, to stack your Effects tree:

<m:Map  Visibility="Visible"
        Name="MainMap"
        CredentialsProvider=""
        NavigationVisibility="Visible"
        Grid.Column="0" Grid.Row="1" Grid.RowSpan="1" Padding="5"
        Mode="Road">
     <m:MapLayer x:Name="effectsLayer1">
         <m:MapLayer x:Name="effectsLayer2">
             <m:MapLayer x:Name="effectsLayer3">

             </m:MapLayer>
         </m:MapLayer>
     </m:MapLayer>
 </m:Map>

Because these UIElement Pixel Effects are compiled to DirectX and run on the client GPU, they are extremely efficient (as long as the client has a GPU). You can even use them on video media.

public void addVideoToMap()
{
    // Stream a video into a MediaElement pinned to a map location
    MediaElement video = new MediaElement();
    video.Source = new Uri(
        @"http://mschnlnine.vo.llnwd.net/d1/ch9/9/2/9/9/1/4/TCS2NBCOlympics_ch9.wmv",
        UriKind.RelativeOrAbsolute);
    video.Opacity = 0.8;
    video.Width = 250;
    video.Height = 200;
    // Invert the video's own pixels
    video.Effect = new ShaderEffectLibrary.InvertColorEffect();
    Location location = new Location(39, -105);
    PositionOrigin position = PositionOrigin.Center;
    effectsLayer3.AddChild(video, location, position);
    // The enclosing layers stack two more effects on top of the inverted video
    effectsLayer2.Effect = new ShaderEffectLibrary.MonochromeEffect();
    effectsLayer1.Effect = new ShaderEffectLibrary.SharpenEffect();
}



Fig 6 – Effects stacked on Video Element

Summary

Pixel Effects can add some interest to an ordinary map view. With the WPF Pixel Shader Effects library, effects are easy to add to Silverlight Bing Maps Control layers and even media elements. When viewing imagery it is sometimes nice to have an effect like ShaderEffectLibrary.SharpenEffect to enhance capture or imagery interpretation, so this is not just about aesthetics.

Augmented Reality and GIS

There have been a few interesting items surfacing on augmented reality recently. It is still very much a futuristic technology, but maybe not too distant a future after all. Augmented reality means intermingling digital and real objects: either adding digital objects to the real world or, in an inverse sense, combining real world objects into a virtual digital world.

Here is an interesting example of augmenting a digital virtual world with real world objects borrowed from street view. The interface utilizes an iPhone inertial sensor to move the view inside a virtual world, but this virtual world is a mimic of the street side in Paris at the point in time that Google’s Street View truck went past.



Fig 1 – Low tech high tech virtual reality interface


Fig 2 – Immersive interface


In this Sixth Sense presentation at TED, Pranav Mistry explores the interchangeability of real and virtual objects. The camera eye and microphone sensors are used to interpret gestures and interact with digital objects. These digital objects are then re-projected into the real world onto real objects such as paper, books, and even other people.


Fig 3 – Augmented Reality Pranav Mistry


Fig 4 – Merging digital and real worlds


A fascinating question is, “How might an augmented reality interface impact GIS?”

Google’s recent announcement of replacing its digital map model with one of its own creation, along with the introduction of the first Android devices, triggered a flurry of blog postings. One of the more interesting posts speculated about the target of Google’s “less than free” business model. Bill Gurley reasoned plausibly that the target is the local ad revenue market.

Google’s ad revenue business model was and is a disruptive change in the IT world. Google appears interested in even larger local ad revenues, harnessed by a massive distribution of Android enabled GPS cell phones. It is the interplay of core aggregator capability with edge location that brings in the next generation of ad revenue. The immediate ancillary casualties in this case are the personal GPS manufacturers and a few map data vendors.

Local ads may not be as large a market as believed, but if they are, the interplay of the network edge with the network core may be an additional disruptive change. Apple has a network edge play with iPhone and a core play with media, iTunes & iVideo; Google has Android/Chrome at the edge and Search/Google Maps at core. Microsoft has Bing Maps/Search at core as well as dabbling less successfully in media, but I don’t see much activity at the edge.

Of course if mobile hardware capability evolves fast enough, Microsoft’s regular OS will soon enough fit on mobiles, perhaps in time to short-circuit an edge market capture by Apple and Google. Windows 8 on a cell phone would open the door wide to Silverlight/WPF UI developers. Android’s potential success rests on the comparative lack of CPU/memory on mobile devices, but that is only a temporary state, perhaps 2 years. However, in two years the world will be a far different place.

By that time augmented reality stuff will be part of the tool kit for ad enhancements:

  • Point a phone camera at a store and see all sale prices overlaid on the store front for items fitting the user’s demographic profile. (products and store pay service)
  • Inside a grocery store, scan shelf items through the cell screen with paid ad enhancements customized to the user’s past buying profile. (products pay store, store pays service)
  • Inside a store, point at a product and get a list of price comparisons from all competing stores within 2 miles. (product or user pays service)
  • A crowd gamer will recognize other team members (or Facebook friends, or other security personnel . . .) with an augmented reality enhancement when scanning a crowd. (gamer subscribes to service; product pays service for ads targeted to gamer)

And non commercial, non ad uses:

  • A first responder points a cell phone at a building and brings up the emergency plan overlay and a list of toxic substance storage. (fire district pays service)
  • Field utility repair personnel point a cell at a transformer and see an overlay of past history with parts lists, schematics, etc. (utility pays service)

It just requires edge location available to core data services that reflect filtered data back to the edge. The ad revenue owner holds both a core data source and an edge unit location. They sell ads priced on market share of that interplay. Google wants to own the edge and have all ad revenue owners come through them, so the OS is less than free in exchange for a slice of ad revenue.

Back to augmented reality. As Pranav Mistry points out there is a largely unexplored region between the edge and the core, between reality and virtual reality, which is the home of augmented reality. GIS fits into this by storing spatial location for objects in the real world back at the network core available to edge location devices, which can in turn augment local objects with this additional information from the core.

Just add a GPS to the Sixth Sense camera/mic device and the outside world at an edge location is merged with any information available at core. So for example scan objects from edge location with the camera and you have augmented information about any other mobile GPS or location data at the core. Since Android = edge GPS + link to core + gesture interface + camera (still missing screen projector and mic), no wonder it has potential as a game changer. Google appears more astute in the “organizing the world” arena than Apple, who apparently remains fixated on merely “organizing style.”

Oh, and one more part of the local interface device is still missing: a pointer. NextGen UI for GIS



Fig 5 – Laser Distance Meter Leica LDM


Add a laser ranging pointer to the mobile device and you have a rather specific point-and-click interface to real-world objects.

  1. The phone location is known thanks to GPS.
  2. The range device bearing and heading are known, due to an internal compass and/or inertial sensors.
  3. Distance available from the range beam gives precise delta distance to an object relative to the mobile device.

Send delta distance and current GPS position back to the core where a GIS spatial query determines any known object at that spatial location. This item’s attributes are returned to the edge device and projected onto any convenient local object, augmenting the local world with the stored spatial data from the core. After watching Pranav Mistry’s research presentation it all seems not too far outside of reality.
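Here is a minimal sketch of the edge-side math, with all names hypothetical: project the laser range and bearing out from the GPS fix to get the target coordinates the core spatial query would receive. A spherical earth is assumed for simplicity.

// Sketch: derive target lat/lon from GPS fix + compass bearing + laser range
using System;

public static class RangePointer
{
    const double EarthRadius = 6378137.0; // WGS84 equatorial radius, meters

    // Project a point `distance` meters from (lat, lon) along `bearingDeg`
    public static void ProjectTarget(double lat, double lon,
        double bearingDeg, double distance,
        out double targetLat, out double targetLon)
    {
        double latRad = lat * Math.PI / 180.0;
        double lonRad = lon * Math.PI / 180.0;
        double brg = bearingDeg * Math.PI / 180.0;
        double angular = distance / EarthRadius; // angular distance on the sphere

        double tLat = Math.Asin(Math.Sin(latRad) * Math.Cos(angular) +
            Math.Cos(latRad) * Math.Sin(angular) * Math.Cos(brg));
        double tLon = lonRad + Math.Atan2(
            Math.Sin(brg) * Math.Sin(angular) * Math.Cos(latRad),
            Math.Cos(angular) - Math.Sin(latRad) * Math.Sin(tLat));

        targetLat = tLat * 180.0 / Math.PI;
        targetLon = tLon * 180.0 / Math.PI;
        // targetLat/targetLon then go to the core GIS spatial query
    }
}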

GIS has an important part to play here because it is the repository of all things spatial.

Big Bing Maps Silverlight Control 1.0 Release Today

Lots of interesting new stuff to explore in this 1.0 release of the Bing Maps Silverlight Control. That seems like a mouthful, and I’m sure it will be acronymed to something like BMS Control, but the one thing that immediately stands out from the announcement is this:

“Sessions will be used with the Bing Maps AJAX Map Control and the Bing Maps Silverlight Control. A session is basically defined as loading the map control and exploring at will, no tile limitations.”

  • “Bing Maps AJAX Control – all maps rendered onto the client upon the initial request is considered 1 session. Session includes any requests for geocoding, routing or search.”
  • “Bing Maps Silverlight Control – all maps rendered onto the client upon the initial request is considered 1 session. There are no services built into the Bing Maps Silverlight Control, so you would use the Bing Maps Web Service for geocoding, routing and search, but will include those too.”
  • “Bing Maps Web Service – all maps, geocodes, routes and searches will each invoke 1 transaction.”

“With the new terms of use for the Bing Maps Platform you get 125,000 sessions per year for FREE. You also get 500,000 transactions a year for FREE. “

Educators – free unlimited use of the Bing Maps platform

Not-for-Profits – free unlimited use of the Bing Maps platform

Commercial, non-commercial and government – proof of concept development free

More details here:
Bing Maps terms of use changes benefit educators, not-for-profits, and developers
Bing Maps Silverlight Control 1.0 released
Terms of Use

This alleviates a big concern I had originally with use of the Silverlight Map Control CTP. Transaction-based licensing did not align with Google pricing and was nearly impossible to predict for tile navigation, which is the engaging part of the Silverlight Control. This announcement wipes out these problems and makes my job as a developer a whole lot easier.

Thanks Microsoft!

Azure and GeoWebCache tile pyramids


Fig 1 – Azure Blob Storage tile pyramid for citylimits

Azure Overview

Shared resources continue to grow as essential building blocks of modern life, key to connecting communities and businesses of all types and sizes. As a result a product like SharePoint is a very hot item in the enterprise world. You can possibly view Azure as a very big, very public, SharePoint platform that is still being constructed. Microsoft and 3rd party services will eventually populate the service bus of this Cloud version with lots and lots of service hooks. In the meantime, even early stage Azure with Web Hosting, Blob storage, and Azure SQL Server makes for some interesting experimental R&D.

Azure is similar to Amazon’s AWS cloud services, and Azure’s pricing follows Amazon’s lead with the familiar “pay as you go, buy what you use” model. Azure offers web services, storage, and queues, but instead of giving access to an actual virtual instance, Azure provides services maintained in the Microsoft Cloud infrastructure. Blob storage, Azure SQL Server, and IIS allow developers to host web applications and data in the Azure Cloud, but only with the provided services. The virtual machine is entirely hidden inside Microsoft’s Cloud.

The folks at Microsoft are probably well aware that most development scenarios have some basic web application and storage component, but don’t really need all the capabilities, and headaches, of controlling their own server. In return for giving up some freedom you get the security of automatic replication, scalability, and maintenance along with the API tools to connect into the services. In essence this is a Microsoft-only Cloud since no other services can be installed. Unfortunately, for a GIS developer this makes Azure a bit less useful. After all, Microsoft doesn’t yet offer GIS APIs, OGC compliant service platforms, or translation tools. On the other hand, high availability with automatic replication and scalability for little effort are nice features for lots of GIS scenarios.

The current Azure CTP lets developers experiment for free with these minor restrictions:

  • Total compute usage: 2000 VM hours
  • Cloud storage capacity: 50GB
  • Total storage bandwidth: 20GB/day


To keep things simple, since this is my first introduction to Azure, I looked at just using Blob Storage to host a tile pyramid. The Silverlight MapControl CTP makes it very easy to add tile sources as layers so my project is simply to create a tile pyramid and store this in Azure Blob storage where I can access it from a Silverlight MapControl.

In order to create a tile pyramid, I also decided to dig into the GeoWebCache standalone beta 1.2. This is a beta and offers some new, undocumented features. It is also my first attempt at using GeoWebCache standalone. Generally I just use the version conveniently built into Geoserver. However, since I was only building a tile pyramid rather than serving it, the standalone version made more sense. Geowebcache also provides caching for public WMS services. In cases where a useful WMS is available, but not very efficient, it would be nice to cache tiles for at least the subsets useful to my applications.

Azure Blob Storage

Azure CTP has three main components:

  1. Windows Azure – includes the storage services for blobs, queues, and cloud tables as well as hosting web applications
  2. SQL Azure – SQL Server in the Cloud
  3. .NET Services – Service Bus, Access Control Service, Work Flow …

There are lots of walk throughs for getting started in Azure. It all boils down to getting the credentials to use the service.

Once a CTP project is available the next step is to create a “Storage Account” which will be used to store the tile pyramid directory. From your account page you can also create a “Hosted Service” within your Windows Azure project. This is where web applications are deployed. If you want to use “SQL Azure” you must request a second SQL Azure token and create a SQL Service. The .NET Service doesn’t require a token for a subscription as long as you have a Windows Live account.

After creating a Windows Azure storage account you will get three endpoints and a couple of keys.

Endpoints:
http://sampleaccount.blob.core.windows.net/

http://sampleaccount.queue.core.windows.net/

http://sampleaccount.table.core.windows.net/

Primary Access Key: ************************************
Secondary Access Key: *********************************

Now we can start using our brand new Azure storage account. But to make life much simpler first download the following:

The Azure SDK includes some sample code . . . HelloWorld, HelloFabric, etc., to get started using the REST interface. I reviewed some of the samples and started down the path of creating the necessary REST calls for recursively loading a tile pyramid from my local system into Azure blob storage nomenclature. I was just getting started when I happened to take a look at the CloudDrive sample. This saved me a lot of time and trouble.

CloudDrive lets you treat the Azure service as a drive inside PowerShell. The venerable MS-DOS cd, dir, mkdir, copy, del, etc. commands are all ready to go. Wince, I know, I know, MS-DOS? I’m sure, if not now, then soon there will be dozens of tools to do the same thing with nice drag and drop UIs. But this works, and I’m old enough to actually remember DOS commands.

First, using the elevated Windows Azure SDK command prompt you can compile and run the CloudDrive with a couple of commands:

C:\AzureTools\samples\CloudDrive\buildme.cmd
C:\AzureTools\samples\CloudDrive\runme.cmd

Now open Windows PowerShell and execute the MountDrive.ps1 script. This allows you to treat the local Azure service as a drive mount and start copying files into storage blobs.


Fig 2 – Azure sample CloudDrive PowerShell

Creating a connection to the real production Azure service simply means making a copy of MountDrive.ps1 and changing credentials and endpoint to the ones obtained previously.

function MountDrive {
Param (
 $Account = "sampleaccount",
 $Key = "***************************************",
 $ServiceUrl="http://sampleaccount.blob.core.windows.net/",
 $DriveName="Blob",
 $ProviderName="BlobDrive")

# Power Shell Snapin setup
 add-pssnapin CloudDriveSnapin -ErrorAction SilentlyContinue

# Create the credentials
 $password = ConvertTo-SecureString -AsPlainText -Force $Key
 $cred = New-Object -TypeName Management.Automation.PSCredential -ArgumentList $Account, $password

# Mount storage service as a drive
 new-psdrive -psprovider $ProviderName -root $ServiceUrl -name $DriveName -cred $cred -scope global
}

MountDrive -ServiceUrl "http://sampleaccount.blob.core.windows.net/" -DriveName "Blob" -ProviderName "BlobDrive"

The new-item command lets you create a new container, with the -Public flag ensuring that files will be accessible publicly. Then the Blob: drive copy-cd command will copy files and subdirectories from the local file system to Azure Blob storage. For example:

PS Blob:\> new-item imagecontainer -Public
Parent: CloudDriveSnapin\BlobDrive::http:\\127.0.0.1:10000\devstoreaccount1

Type Size LastWriteTimeUtc Name
---- ---- ---------------- ----
Container 10/16/2009 9:02:22 PM imagecontainer

PS Blob:\> dir

Parent: CloudDriveSnapin\BlobDrive::http:\\127.0.0.1:10000\

Type Size LastWriteTimeUtc Name
---- ---- ---------------- ----
Container 10/16/2009 9:02:22 PM imagecontainer
Container 10/8/2009 9:22:22 PM northmetro
Container 10/8/2009 5:54:16 PM storagesamplecontainer
Container 10/8/2009 7:32:16 PM testcontainer

PS Blob:\> copy-cd c:\temp\image001.png imagecontainer\test.png
PS Blob:\> dir imagecontainer

Parent: CloudDriveSnapin\BlobDrive::http:\\127.0.0.1:10000\imagecontainer

Type Size LastWriteTimeUtc Name
---- ---- ---------------- ----
Blob 1674374 10/16/2009 9:02:57 PM test.png

Because imagecontainer is public the test.png image can be accessed in the browser from the local development storage with:
http://127.0.0.1:10000/devstoreaccount1/imagecontainer/test.png
or if the image was similarly loaded in a production Azure storage account:
http://sampleaccount.blob.core.windows.net/imagecontainer/test.png

It is worth noting that Azure storage consists of endpoints, containers, and blobs. There are some further subtleties for large blobs such as blocks and blocklists as well as metadata, but there is not really anything like a subdirectory. Subdirectories are emulated using slashes in the blob name.
i.e. northmetro/citylimits/BingMercator_12/006_019/000851_002543.png is a container, “northmetro“, followed by a blob name,
“citylimits/BingMercator_12/006_019/000851_002543.png”.

The browser can show this image using the local development storage:
http://127.0.0.1:10000/devstoreaccount1/northmetro/citylimits/BingMercator_12
/006_019/000851_002543.png

Changing to production Azure means substituting a valid endpoint for “127.0.0.1:10000/devstoreaccount1″ like this:
http://sampleaccount.blob.core.windows.net/northmetro/citylimits/BingMercator_12
/006_019/000851_002543.png

With CloudDrive getting my tile pyramid into the cloud is straightforward and it saved writing custom code.

The tile pyramid – Geowebcache 1.2 beta

Geowebcache is written in Java and synchronizes very well with the GeoServer OGC service engine. The new 1.2 beta version is available as a .war that is loaded into the webapp directory of Tomcat. It is a fairly simple matter to configure geowebcache to create a tile pyramid of a particular Geoserver WMS layer. (Unfortunately, it took me almost 2 days to work out a conflict with an existing Geoserver gwc.) The two main files for configuration are:


C:\Program Files\Apache Software Foundation\Tomcat 6.0\webapps\
                     geowebcache1.2\WEB-INF\geowebcache-servlet.xml
C:\Program Files\Apache Software Foundation\Tomcat 6.0\webapps\
                    geowebcache1.2\WEB-INF\classes\geowebcache.xml

geowebcache-servlet.xml customizes the service bean parameters and geowebcache.xml provides setup parameters for tile pyramids of layers. Leaving the geowebcache-servlet.xml at default will work fine when no other Geoserver or geowebcache is around. It can get more complicated if you have several that need to be kept separate. More configuration info.

Here is an example geowebcache.xml that uses some of the newer gridSet definition capabilities. It took me a long while to find the schema for geowebcache.xml:
http://geowebcache.org/schema/docs/1.2.0/
The documentation is still thin for this beta release project.

<?xml version="1.0" encoding="utf-8"?>
<gwcConfiguration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:noNamespaceSchemaLocation="http://geowebcache.org/schema/1.2.0/geowebcache.xsd"
  xmlns="http://geowebcache.org/schema/1.2.0">
  <version>1.2.0</version>
  <backendTimeout>120</backendTimeout>
  <gridSets>
    <gridSet>
      <name>BingMercator</name>
      <srs><number>900913</number></srs>
      <extent>
        <coords>
          <double>-11706995</double>
          <double>4839671</double>
          <double>-11687135</double>
          <double>4861458</double>
        </coords>
      </extent>
      <alignTopLeft>true</alignTopLeft>
      <levels>15</levels>
    </gridSet>
  </gridSets>
  <layers>
    <wmsLayer>
      <name>citylimits</name>
      <gridSubsets>
        <gridSubset>
          <gridSetName>BingMercator</gridSetName>
          <zoomStart>0</zoomStart>
          <zoomStop>10</zoomStop>
        </gridSubset>
        <gridSubset>
          <gridSetName>GoogleMapsCompatible</gridSetName>
        </gridSubset>
      </gridSubsets>
      <wmsUrl><string>http://localhost:80/geoserver/wms</string></wmsUrl>
      <wmsLayers>northmetro:citylimits</wmsLayers>
      <wmsStyles>citylimits</wmsStyles>
    </wmsLayer>
  </layers>
</gwcConfiguration>

After editing the configuration files, building the pyramid is a matter of pointing your browser at the local webapp and seeding the tiles down to the level you choose with the gridSet you want. The GoogleMapsCompatible gridSet is built into geowebcache and the BingMercator is a custom gridSet that I’ve added with extent limits defined.
http://localhost/geowebcache1.2/rest/seed/citylimits

This can take a few hours/days depending on the extent and zoom level you need. Once completed I use the CloudDrive PowerShell to copy all of the tiles into Azure blob storage:

PS Blob:\> copy-cd "C:\Program Files\Apache Software Foundation\Tomcat 6.0\temp\geowebcache\citylimits"

This also takes some time for the resulting 243,648 files of about 1GB.

Silverlight MapControl

The final piece in the project is adding the MapControl viewer layer. First I add a new tile source layer in the Map Control of MainPage.xaml:

  <m:Map
      Name="MainMap"
      NavigationVisibility="Visible"
      Grid.Column="0" Grid.Row="1" Grid.RowSpan="1" Padding="5"
      Mode="Road">
    <m:Map.Children>
       <!-- Azure tile source -->
       <m:MapTileLayer x:Name="citylimitsAzureLayer" Opacity="0.5" Visibility="Collapsed">
         <m:MapTileLayer.TileSources>
             <local:CityLimitsAzureTileSource></local:CityLimitsAzureTileSource>
         </m:MapTileLayer.TileSources>
      </m:MapTileLayer>
             .
             .

The tile naming scheme is described here:
http://geowebcache.org/trac/wiki/filestorage2
The important point is:

“Most filesystems use btree’s to store the files in directories, so layername/projection_z/[x/(2^(z/2))]_[y/(2^(z/2))]/x_y.extension seems reasonable, since it works sort of like a quadtree. The idea is that half the precision is in the directory name, the full precision in the filename to make it easy to locate problematic tiles. This will also make cache purges a lot faster for specific regions, since fewer directories have to be traversed and unlinked. “

An ordinary tile source class looks just like this:

  public class CityLimitsTileSource : Microsoft.VirtualEarth.MapControl.TileSource
  {
        public CityLimitsTileSource() : base(App.Current.Host.InitParams["src"] +
          "/geoserver/gwc/service/gmaps?layers=northmetro:citylimits&zoom={2}&x={0}&y={1}")
        {
        }

        public override Uri GetUri(int x, int y, int zoomLevel)
        {
           return new Uri(String.Format(this.UriFormat, x, y, zoomLevel));
        }
  }

However, now I need to reproduce the tile name as it is in the Azure storage container rather than letting gwc/service/gmaps mediate the nomenclature for me. This took a little digging. The two files I needed to look at turned out to be geowebcache’s FilePathGenerator.java and GMapsConverter.java (both linked in the code comments below).

GMapsConverter works because Bing Maps follows the same upper left origin convention and spherical mercator projection as Google Maps. Here is the final approach using the naming system in Geowebcache1.2.

public class CityLimitsAzureTileSource : Microsoft.VirtualEarth.MapControl.TileSource
{
  public CityLimitsAzureTileSource()
  : base(App.Current.Host.InitParams["azure"] + "citylimits/GoogleMapsCompatible_{0}/{1}/{2}.png")
  {
  }

  public override Uri GetUri(int x, int y, int zoomLevel)
  {
   /*
   * From geowebcache
   * http://geowebcache.org/trac/browser/trunk/geowebcache/src/main/java/org/geowebcache/storage/blobstore/file/FilePathGenerator.java
   * http://geowebcache.org/trac/browser/trunk/geowebcache/src/main/java/org/geowebcache/service/gmaps/GMapsConverter.java
   * must convert zoom, x, y, and z into tilepyramid subdirectory structure used by geowebcache
  */
  int extent = (int)Math.Pow(2, zoomLevel);
  if (x < 0 || x > extent - 1)
  {
     MessageBox.Show("The X coordinate is not sane: " + x);
  }

  if (y < 0 || y > extent - 1)
  {
     MessageBox.Show("The Y coordinate is not sane: " + y);
  }
  // xPos and yPos correspond to the top left hand corner
  y = extent - y - 1;
  long shift = zoomLevel / 2;
  long half = 2 << (int)shift;
  int digits = 1;
  if (half > 10)
  {
     digits = (int)(Math.Log10(half)) + 1;
  }
  long halfx = x / half;
  long halfy = y / half;
  string halfsubdir = zeroPadder(halfx, digits) + "_" + zeroPadder(halfy, digits);
  string img = zeroPadder(x, 2 * digits) + "_" + zeroPadder(y, 2 * digits);
  string zoom = zeroPadder(zoomLevel, 2);
  string test = String.Format(this.UriFormat, zoom, halfsubdir, img );

  return new Uri(String.Format(this.UriFormat, zoom, halfsubdir, img));
  }

/**
  * From geowebcache
  * http://geowebcache.org/trac/browser/trunk/geowebcache/src/main/java/org/geowebcache/storage/blobstore/file/FilePathGenerator.java
  * a way to pad numbers with leading zeros, since I don't know a fast
  * way of doing this in Java.
  *
  * @param number
  * @param order
  * @return
  */
  public static String zeroPadder(long number, int order) {
  int numberOrder = 1;

  if (number > 9) {
    if(number > 11) {
      numberOrder = (int) Math.Ceiling(Math.Log10(number) - 0.001);
    } else {
      numberOrder = 2;
    }
  }

  int diffOrder = order - numberOrder;

    if(diffOrder > 0) {
      //System.out.println("number: " + number + " order: " + order + " diff: " + diffOrder);
      StringBuilder padding = new StringBuilder(diffOrder);

      while (diffOrder > 0) {
        padding.Append("0");
        diffOrder--;
       }
       return padding.ToString() + string.Format("{0}", number);
    } else {
      return string.Format("{0}", number);
    }
  }
}

I didn’t attempt to change the zeroPadder. Doubtless there is a simple C# String.Format that would replace the zeroPadder from Geowebcache.
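For example, something like either of these one-liners should do (a sketch, not tested against the Java original’s negative-number edge cases):

// Candidate replacements for zeroPadder(number, order):
string padded = number.ToString().PadLeft(order, '0');  // pad with leading zeros
string padded2 = number.ToString("D" + order);          // decimal ("D") format specifier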

This works and provides access to tile png images stored in Azure blob storage, as you can see from the sample demo.

Summary

Tile pyramids enhance user experience, matching the performance users have come to expect in Bing, Google, Yahoo, and OSM. It is resource-intensive to make tile pyramids of large worldwide extent and deep zoom levels. In fact it is not something most services can or need to provide except for limited areas. Tile pyramids in the Cloud require relatively static layers with infrequent updates.

Although using Azure this way is possible and provides performance, scalability, and reliability, I’m not sure it always makes sense. The costs are difficult to predict for a high volume site, as they are based on bandwidth usage as well as storage. Also, you may be paying storage fees for many tiles seldom if ever used; tile pyramid performance is a wonderful thing, but it chews up a ton of storage, much of which is rarely needed.

For a stable low to medium volume application it makes more sense to host a tile pyramid on your own server. For high volume sites where reliability is the deciding factor, moving to Cloud storage services may be the right thing. This is especially true where traffic patterns swing wildly or grow rapidly and robust scaling is an ongoing battle.

Azure CTP is of course not as mature as AWS, but obviously it has the edge in the developer community and like many Microsoft technologies it has staying power to spare. Leveraging its developer community makes sense for Microsoft and with easy to use tools built into Visual Studio I can see Azure growing quickly. In time it will just be part of the development fabric with most Visual Studio deployment choices seamlessly migrating out to the Azure Cloud.

Azure release is slated for Nov 2009.

Storm Runoff modelling and MRLC

The Multi-Resolution Land Characteristics Consortium, MRLC, is a group of federal agencies that produces the National Land Cover Database, NLCD 2001. The dataset was developed using a national coverage set of 8-band Landsat 7 imagery along with a 30m DEM. The imagery is processed from three separate dates to give a seasonal average land cover classification. The resolution is a bit coarse at 30m square, but it is a valuable resource because of its consistent national coverage.

More detailed information is available on the MRLC site.

In addition to the NLCD coverage there are two derivative layers:

  • NLCD 2001 impervious surface: The impervious surface data classify each pixel into 101 possible values (0% – 100%). The data show the detailed urban fabric in which many of us reside. Low percentage impervious is in light gray, with increasing values depicted in darker gray and the highest values in pink and red. White areas have no impervious surface.
  • NLCD 2001 canopy density: Like the impervious surface data, the canopy density database element classifies each pixel into 101 possible values (0% – 100%). The canopy density estimates apply to forest only. These data can be combined with the land cover to estimate canopy density by forest type (deciduous, evergreen, mixed, woody wetland).

The data is available for public download. There is also one of those vintage ESRI viewers that qualifies for James Fee’s “Cash for Geo Clunkers” proposal. These Ancien Régime viewers are littered all over the federal landscape. It will take years for newer technology to replace this legacy of ArcIMS. Fortunately there is an exposed WMS service (see GetCapabilities), which permits access to MRLC layers without going through the “Viewer” nonsense. This WMS service proved very useful on a recent web project for Storm Runoff Mitigation.

I am no hydrologist, but once I was provided with the appropriate calculation approach the Silverlight UI was fairly straightforward. Basically there are a number of potential mitigation practices that play into a runoff calculation. Two fairly significant factors are Impervious Surface Area and Canopy Area, which are both available through MRLC’s WMS service. One simplified calculation model in use is called the TR-55 method.

By making use of the MRLC derived layers for Impervious Surface and Canopy, at least approximations for these two factors can be derived for any given area of the continental US. The method I used was to issue a GetMap request to the WMS service, which then returns a pixel-shaded image of the impervious surface density. Most of the hard work has already been done by MRLC. All I need to do is extract the value density from the returned png image.
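The request itself is just a standard WMS GetMap URL. Here is a sketch of how the getlayerurl used in the code below might be composed; the endpoint and layer name are placeholders, since the real values come from the MRLC GetCapabilities document:

// Hypothetical endpoint and layer name - check GetCapabilities for real values
double minx = -105.3, miny = 39.6, maxx = -104.9, maxy = 39.9; // viewport extent
string getlayerurl = "http://www.mrlc.gov/wms?"
    + "SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap"
    + "&LAYERS=NLCD_2001_Impervious"
    + "&STYLES=&SRS=EPSG:4326"
    + "&BBOX=" + minx + "," + miny + "," + maxx + "," + maxy
    + "&WIDTH=512&HEIGHT=512&FORMAT=image/png";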

Fig 3 – Impervious Surface shaded density red
Fig 4 – Canopy shaded density green

The density values are relative to gray. At first I tried a simple density calculation from the color-encoded pixels by subtracting the base gray from the variable green: Green – Red = factor. The sum of these factors, divided by the total pixel area of the image times the maximum byte value of 255, gives a rough percentage of canopy over the viewport. However, after pursuing the USGS for a few days I managed to get the actual percentile RGB tables and improve the density calculation accuracy. This average density percentile is then used in TR-55 as An·CNn, with a Canopy CN value of 70.
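In other words, the composite curve number is an area-weighted sum. A sketch of the idea, where the impervious CN of 98 is the usual TR-55 table value and the remaining-cover CN is purely an assumed placeholder:

double canopyFraction = 0.32;      // average canopy density from the WMS scan
double imperviousFraction = 0.18;  // average impervious density from the WMS scan
double otherFraction = 1.0 - canopyFraction - imperviousFraction;

double compositeCN = canopyFraction * 70       // canopy CN from the text
                   + imperviousFraction * 98   // impervious CN (standard TR-55 table value)
                   + otherFraction * 79;       // assumed CN for the remaining cover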

The process of extracting density from pixels looks like this:

  HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(new Uri(getlayerurl));
  using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
  {
    if (response.StatusDescription.Equals("OK"))
    {
      using (Stream stream = response.GetResponseStream())
      {
        byte[] data = ReadFully(stream, response.ContentLength);
        Bitmap bmp = (Bitmap)Bitmap.FromStream(new MemoryStream(data), false);
        stream.Close();

        UnsafeBitmap fast_bitmap = new UnsafeBitmap(bmp);
        fast_bitmap.LockBitmap();
        PixelData pixel;
        string key = "";
        double value = 0;
        for (int x = 0; x < bmp.Width; x++)
        {
          for (int y = 0; y < bmp.Height; y++)
          {
            pixel = fast_bitmap.GetPixel(x, y);
            key = pixel.red + " " + pixel.green + " " + pixel.blue;
            if (imperviousRGB.Contains(key))
            {
              value += Array.IndexOf(imperviousRGB, key) * 0.01;
            }
          }

        }
        fast_bitmap.UnlockBitmap();
        double total = (bmp.Height * bmp.Width);
        double ratio = value / total;
        return ratio.ToString();
                   .
                   .
                   .

C#, unlike Java, allows pointer arithmetic in code marked unsafe. The advantage of this approach here is a tremendous speed increase. The array mapping imperviousRGB strings to percentiles was supplied by the USGS. This process is applied in a WCF service to both the Canopy and the Impervious Surface layers, and the results are passed back to the TR-55 calculations.

Possible Extensions:

There are several extensions beyond the scope of this project that could prove interesting.

  1. First, the NLCD uses a color classification scheme. A similar color processing algorithm could be used to provide rough percentages of each of these classifications for a viewport area. These could be helpful for various research and reporting requirements.
  2. Beyond simple rectangular viewports, a nice extension would be the ability to supply arbitrary polygonal areas of interest. This is fairly easy to do in Silverlight. The draw command is just a series of point clicks that are added to a Path geometry as line segments. The resulting polygon is then used as a clip mask when passing through the GetMap image. A very simple point in polygon check, either coded manually or using one of the C# ports of JTS, would probably provide reasonable performance (see the sketch after the figure below).
Fig 5 – MRLC NLCD 2001 Colour Classification
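Here is a minimal ray-casting point in polygon sketch; a C# port of JTS would be the more robust choice, but this illustrates how cheap the manual check is:

// PNPOLY-style point in polygon test for a clicked polygon area of interest
using System.Collections.Generic;
using System.Windows; // Point

public static class GeometryUtil
{
    public static bool PointInPolygon(Point p, IList<Point> polygon)
    {
        bool inside = false;
        int j = polygon.Count - 1;
        for (int i = 0; i < polygon.Count; j = i++)
        {
            // Toggle each time the horizontal ray from p crosses an edge
            if (((polygon[i].Y > p.Y) != (polygon[j].Y > p.Y)) &&
                (p.X < (polygon[j].X - polygon[i].X) * (p.Y - polygon[i].Y) /
                       (polygon[j].Y - polygon[i].Y) + polygon[i].X))
            {
                inside = !inside;
            }
        }
        return inside;
    }
}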

What about resolution?

It is tempting to think a little bit about resolution. Looking at the MRLC image results, especially over a map base, it is obvious that at 100 ft resolution even the best of calculations are far from the fine grained detail necessary for accurate neighborhood calculations.

It is also obvious that Impervious Surface can be enhanced directly by applying some additional lookup from a road database. Using pavement estimates from a road network could improve resolution quality quite a bit in urbanized areas. But even that can be improved if there is some way to collect other common urban impervious surfaces such as rooftops, walkways, driveways, and parking areas.

NAIP 1m GSD 4-band imagery has fewer bands but higher resolution. NAIP is a resource that has been applied to unsupervised impervious surface extraction. However, the 4-band acquisition is still not consistently available for the entire US.

Now that more LiDAR data is coming on line at higher resolutions, why not use LiDAR classifications to enhance impervious surface?

Lidar All Elevation
Lidar All Classification
Lidar All Intensity
Lidar All Return

Just looking at the different style choices on the LidarServer WMS, for instance, it appears that there are ways to get rooftop and canopy out of LiDAR data. LiDAR at 1m resolution for metro areas could increase resolution for Canopy as well as the rooftop contribution to Impervious Surface estimates.

In fact the folks at QCoherent have developed tools for classification and extraction of features like roof polygons. This extraction tool applied over a metro area could result in a useful rooftop polygon set. Once available in a spatial database, these polygons can be furnished as an additional color-filled tile pyramid layer. Availability of this layer would also let the Runoff calculation apply rooftop area estimates to roof drain disconnect factors.

Additionally, improved accuracy of impervious surface calculations can be achieved by using a merging version of the simple color scan process. In a merging process the scan loop over the MRLC image does a lookup in the corresponding rooftop image. Any pixel positive for rooftop is promoted to the highest impervious surface classification, as sketched below. This estimate only applies so long as rooftop green gardens remain insignificant.
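A sketch of that merge, reusing the names from the scan loop above; the rooftop layer bitmap and its red-fill test are my assumptions:

for (int x = 0; x < bmp.Width; x++)
{
    for (int y = 0; y < bmp.Height; y++)
    {
        PixelData roof = roofBitmap.GetPixel(x, y); // hypothetical rooftop tile bitmap
        if (roof.red == 255 && roof.green == 0 && roof.blue == 0)
        {
            value += 1.0; // rooftop: promote to 100% impervious
        }
        else
        {
            pixel = fast_bitmap.GetPixel(x, y);
            key = pixel.red + " " + pixel.green + " " + pixel.blue;
            if (imperviousRGB.Contains(key))
            {
                value += Array.IndexOf(imperviousRGB, key) * 0.01;
            }
        }
    }
}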

Ultimately the MRLC will be looking at 1m GSD collection for NLCD with some combination of imagery like NAIP and DEM from LiDAR. However, it could be a few years before these high resolution resources are available consistently across the US.

Summary

The utility of WMS resources continues to grow as services become better known and tools for web applications improve. Other OWS services like WFS and WCS are following along behind, but show significant promise as well. The exposure of public data resources in some kind of OGC service should be mandatory at all government levels. The cost is not that significant compared to the cost effectiveness of cross agency, even cross domain, access to valuable data resources.

By using appropriate WMS tools like Geoserver and Geowebcache, vastly more efficient tile pyramids can become a part of any published WMS service layer. It takes a lot more storage, so the improved performance may not be feasible for larger national and worldwide extents. However, in this Runoff Mitigation project, where view performance is less important, the OGC standard WMS GetMap requests proved quite useful for the TR-55 calculations, and performance was adequate.

Stacking Maps


Fig 1 – Stacking Maps in PlaneProjection

Stacking maps is a pretty common experience. It goes back a long way with the now antique, but intriguing, use of clear mylars to portray various feature overlays. I don’t think I was the only kid fascinated with Encyclopedia Britannica’s plastic overlay illustrations. Even as a young engineer we used mylar overlays occasionally. It’s the same thing today but computers make virtual overlays, or layers, relatively easy. Stacking layers is a useful paradigm when using a single coordinated set of features that have a common origin, resolution, coordinate system, and order. However, now that we’ve gotten used to pulling maps from disparate parts of the internet, layer stacking is not always so simple. We are now stacking layers and then moving up the ontology stream to stack additional sets of layers, from other related maps.

Pulling maps from various WMS or WFS sources comes with a set of problems. Even after dealing with matching coordinate systems, resolution differences, and date currency issues, there is still order to take into account. This means choosing what should be at the bottom and what sits on top. The issues that inevitably come up are problems with masking. This is especially a problem when using multiple opaque WMS map sources. The user should have ultimate say as to what layer fits where in his view of his stack, but this flexibility is not always easy to provide.

For example last week I discovered this beautiful map source, “TopOSM”. You can see a Colorado Relief Map here in OpenLayers and here is a Silverlight view of the same tile service.

Fig 2 – Stacking TopOSM

When I started playing with this tile source, which was put together by Lars Ahlzen, I was unable to consistently get the tiles for the transportation feature layer. I didn’t solve that problem, but began thinking, “why use another transportation feature layer when OpenStreetMap is readily available, also very attractive, and doubtless more up to date?” This presented the masking problem because tiles from Open Street Map are opaque and the base relief layer of TopOSM is also opaque. In WMS this is typically handled by setting a TRANSPARENT parameter true. The WMS server then produces a png image with the background invisible or transparent.

However, in a tile source this has to be done when creating the tiles. The decision is usually made based on the percentage of cover in a given layer. For example, aerial or satellite imagery has 100% coverage, so TRANSPARENT = true is meaningless. Road layers, though, are relatively sparse and would generally benefit from transparency. In the case of TopOSM the base relief layer includes terrain shading with 100% coverage. Contours on the TopOSMContour layer are generally sparse and published as a transparent=true tile source. The OSM maps contain large polygonal color areas and some shading, so the decision was evidently made to produce OSM tiles without a transparent background. Unfortunately that leaves me a conflict: how to use two opaque map layers.

First I went searching for an OSM tile source that had a transparent background. There seems to have been one, at least at one time, as indicated here by this WorldWind Wiki link. I wasn’t able to get this tile source to respond so perhaps it is now defunct. Of course you could get a copy of OSM and publish your own set of transparent=true tiles, but that is a lot of work for a weekend. I also ran across some ‘on the fly’ java transform approaches that turn a color range into a transparent png as the tiles are read. However, this approach would be a performance bottleneck, defeating the whole purpose of a tile service.

One workaround is to adjust opacity in the client view. Even though this can result in a dim or blurry layer, it lets multiple opaque layers coexists.


Fig 3 – Stacking Maps with opacity masking lower layer

Fig 4 – Stacking Maps with opacity adjusted to show lower layer

This actually works pretty well. I can stack two opaque layers from different map sources along with one additional transparent layer, the contours. I now have the beautiful relief and USGS topo contours, along with a very nice and up to date set of road features from OSM. I can change the opacity of the two opaque tile sources to suit my need. If I want just the bare earth relief, I can set TopOSM opacity to one, but if I want to know where on earth I am, it helps to see the OSM transportation features. A nice compromise is opacity 0.5 for the top opaque map layer. I settled on using OSM as the base and TopOSM as the opacity adjusted top map layer. With just two opaque sources this works OK, but what if you have several? Things get a bit misty with opacity on opacity. As we look forward to hyperspectral stacks with hundreds or thousands of layers in giant stacks, opacity won’t do the job.
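In code the compromise is just a slider bound to the top layer’s Opacity; a sketch, with the slider and layer names hypothetical:

// 1.0 shows bare TopOSM relief; 0.5 lets the OSM roads beneath show through
private void opacitySlider_ValueChanged(object sender,
    RoutedPropertyChangedEventArgs<double> e)
{
    TopOSMLayer.Opacity = e.NewValue;
}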

Since Silverlight 3 introduced PlaneProjection properties, why not try something different? Here is a cool link that explains as well as illustrates what is going on with PlaneProjection. You can see from this post that it is relatively simple to manipulate plane projection parameters by binding slider values to the axes of Rotation, GlobalOffset, and LocalOffset. So the question is: can I do this with maps?

Here is an example of a MapTileLayer, and yes, it does include a Projection property, so now we can hook up all kinds of tools for manipulating maps out of plane.

<m:MapTileLayer Opacity="1" x:Name="OSM"
                Visibility="Visible">
    <m:MapTileLayer.Projection>
        <PlaneProjection x:Name="OSMProjection"
                CenterOfRotationX="0.5"
                CenterOfRotationY="0.5"
                CenterOfRotationZ="0.5"
                LocalOffsetX="0"
                LocalOffsetY="0"
                LocalOffsetZ="0"
        />
    </m:MapTileLayer.Projection>

    <m:MapTileLayer.TileSources>
        <local:OpenStreetMapTileSource/>
    </m:MapTileLayer.TileSources>
</m:MapTileLayer>
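Wiring a slider to the named PlaneProjection is then a one-liner; a sketch, with the slider handler name assumed:

// Tilt the OSM tile layer out of the map plane as the slider moves
private void rotationXSlider_ValueChanged(object sender,
    RoutedPropertyChangedEventArgs<double> e)
{
    OSMProjection.RotationX = e.NewValue;
}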

Fig 5 – Stacking Maps with PlaneProjection RotationX but all in the same LocalOffset

Fig 6 – Stacking Maps with PlaneProjection RotationX and LocalOffset changes

Here is a link that lets you try out various parameters of a Map Stack: http://www.web-demographics.com:8089/MapStack/

It is interesting to play around with plane projection and move layers here and there, but as it turns out this is not all that useful for map portrayal. I guess the academics call this a “cartography” issue. The point is how usefully information is presented to the map user or map consumer. The opacity approach, in my opinion, is by far the better way to use multiple opaque map sources in a cartographically meaningful manner, as long as there are just a few. However, it is still very interesting technically to play with PlaneProjection and see how well it is implemented, even with deeply nested controls.

Another use for PlaneProjection often seen now is the “flip” animation. Basically it ties an animation to a layout control that flips the layout over using a PlaneProjection Rotation. If the first layout is replaced by a new one, it reproduces the effect of turning pages. This is similar to the experience of paper pages in a book and could be applied to maps using an Atlas model. But it is not especially helpful, since the point of using maps is usually not moving from page to page but from space to space, and in our world all space happens to be contiguous. The physicists among us may beg to differ, but cartography in geospatial systems is all about solid earth surfaces.
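
The mechanics are simple enough: animate a rotation on a PlaneProjection and swap content at the halfway point. A rough C# sketch (frontPanel, the duration, and the swap strategy are hypothetical):

private void Flip(UIElement frontPanel)
{
    // attach a projection and spin it 180 degrees around the Y axis
    PlaneProjection projection = new PlaneProjection();
    frontPanel.Projection = projection;

    DoubleAnimation flip = new DoubleAnimation
    {
        From = 0,
        To = 180,
        Duration = new Duration(TimeSpan.FromSeconds(0.6))
    };
    Storyboard.SetTarget(flip, projection);
    Storyboard.SetTargetProperty(flip, new PropertyPath("RotationY"));

    Storyboard sb = new Storyboard();
    sb.Children.Add(flip);
    // swap in the new layout at the halfway point to complete the page turn
    sb.Begin();
}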

However, aside from GIS, here is a small example of using an animated illustration along with a map. In this sample there is an animation with a “before” and “after” view that can be flipped with a simple “next” button. The novelty is a bit interesting … if you have adequate hardware. However, this is a deeply nested, multi-animated layout, and the flip sequence is not all that smooth, especially on older hardware. You can see that the interior runoff animations continue running even while the wrapping PlaneProjection is rotating, which says a lot for the power of Microsoft’s implementation. Linear algebra seems less abstract when you see it doing something right before your eyes.


Fig 7 – Use of animated PlaneProjection for flip effects

Performance is a consideration, and Silverlight 3 adds GPU acceleration. Here is a link describing the use of GPU acceleration in Silverlight. It seemed to help with the map plane projection example above, but GPU bitmap caching is not suited to animated content, so it does nothing to help performance in my flip example.
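
For reference, opting an element into bitmap caching is just a CacheMode assignment. A sketch, assuming the host page also sets the enableGPUAcceleration plugin param:

private void EnableCaching(UIElement layer)
{
    // cache the element's rendering on the GPU (Silverlight 3);
    // content that animates inside the cache defeats the purpose
    layer.CacheMode = new System.Windows.Media.BitmapCache();
}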

Summary

Silverlight 3 PlaneProjection has some very nice features and warrants some more experimentation. However, it doesn’t add very much to the current state of slippy map tiled cartography. Working with many layers is another matter, and PlaneProjection could play a big part in UI development for hyperspectral sources.

Silverlight 3 PlaneProjection does add interest to ancillary map information. I’m not saying Metadata will be a great joy if we can virtually flip pages and pages of the stuff, but there are uses for out of plane media placed in a geographic context. If nothing else, it allows access to more information with less visual clutter. Maybe the Flickr photo pattern has been overdone, but some business workflow processes could actually benefit from media embedded usefully in a map surface.

My EPSG:54004 mystery solved!


Fig 1 – DIA Looks a Lot Better!

With a helpful comment from SharpGIS I was able to finally pin down my baffling problem with EPSG:54004.

The problem is in the datums.

ESRI:54004

PROJCS["World_Mercator",
    GEOGCS["GCS_WGS_1984",
        DATUM["WGS_1984",
            SPHEROID["WGS_1984",6378137,298.257223563]],
        PRIMEM["Greenwich",0],
        UNIT["Degree",0.017453292519943295]],
    PROJECTION["Mercator_1SP"],
    PARAMETER["False_Easting",0],
    PARAMETER["False_Northing",0],
    PARAMETER["Central_Meridian",0],
    PARAMETER["Standard_Parallel_1",0],
    UNIT["Meter",1],
    AUTHORITY["EPSG","54004"]]

As Morten pointed out, the 54004 spheroid includes an inverse flattening of 298.257223563:
SPHEROID["WGS_1984",6378137,298.257223563]],

So 54004 should be treated as an Ellipsoid rather than a Sphere.

There is a subtle difference in 900913. If you notice, 900913 also includes the same flattening:
SPHEROID["WGS_1984",6378137,298.257223563]],

EPSG:900913

PROJCS["Google Mercator",
    GEOGCS["WGS 84",
        DATUM["World Geodetic System 1984",
            SPHEROID["WGS 84",6378137.0,298.257223563,
                AUTHORITY["EPSG","7030"]],
            AUTHORITY["EPSG","6326"]],
        PRIMEM["Greenwich",0.0,
            AUTHORITY["EPSG","8901"]],
        UNIT["degree",0.017453292519943295],
        AXIS["Geodetic latitude",NORTH],
        AXIS["Geodetic longitude",EAST],
        AUTHORITY["EPSG","4326"]],
    PROJECTION["Mercator_1SP"],
    PARAMETER["semi_minor",6378137.0],
    PARAMETER["latitude_of_origin",0.0],
    PARAMETER["central_meridian",0.0],
    PARAMETER["scale_factor",1.0],
    PARAMETER["false_easting",0.0],
    PARAMETER["false_northing",0.0],
    UNIT["m",1.0],
    AXIS["Easting",EAST],
    AXIS["Northing",NORTH],
    AUTHORITY["EPSG","900913"]]

However, you might not notice that in addition it includes an explicit minor axis parameter:
PARAMETER["semi_minor",6378137.0],
This minor axis is identical to the major axis. The parameter overrides the flattening in the datum and is probably technically incorrect, but the idea was just to get a spherical Mercator into a definition that people could use to match Google Maps. I’ve seen this definition in PostGIS, GeoServer, and OpenLayers.

I had already noticed this and played with a MercatorEllipsoid function to see if that would fix my problem. However, sadly, I made an error and miscalculated the eccentricity. The real equation for e goes something like this:

double f = 1 / 298.257223563;
double e = Math.Sqrt(2 * f - Math.Pow(f, 2));

resulting in e = 0.081819190842621486.

Once I made the correction for the proper eccentricity in MercatorEllipsoid, ESRI:54004 lines up with EPSG:3857. DIA is back in its rightful place.
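
For the record, here is the corrected function with the proper eccentricity, a sketch along the lines of my original MercatorEllipsoid (central meridian at 0):

private Point MercatorEllipsoid(double lon, double lat)
{
    double a = 6378137.0;                         // WGS84 equatorial radius, meters
    double f = 1 / 298.257223563;                 // flattening from the DATUM
    double e = Math.Sqrt(2 * f - Math.Pow(f, 2)); // first eccentricity, 0.0818191908...

    double px = lon * Math.PI / 180;
    double py = lat * Math.PI / 180;

    double x = a * px;
    double y = a * Math.Log(Math.Tan(Math.PI / 4 + py / 2.0)
        * Math.Pow((1.0 - e * Math.Sin(py))
        / (1.0 + e * Math.Sin(py)), e / 2.0));
    return new Point(x, y);
}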

My MercatorEllipsoid function now calculates correct BBOX parameters for GetMap requests, but only for com.esri.wms.Esrimap services. Looks like ESRI is the expert here and correctly produces ESRI:54004 with the datum as defined. However, not so with GeoServer.

GeoServer seems to ignore the flattening, 298.257223563, or else assumes 54004 is like 900913, where the flattening is overridden by a semi_minor parameter equal to the semi_major:
PARAMETER["semi_minor",6378137.0]

This leads to some problems. My WMS client now has to decide which service correctly interprets the DATUM on 54004. For now I just check for “com.esri.wms.Esrimap” in the WMS url and change datums accordingly. This will undoubtedly lead to problems with other services, since I don’t yet know how MapServer or others treat 54004.
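
So for now the client-side dispatch looks something like this sketch (the string test is admittedly fragile):

private Point ProjectForService(string wmsUrl, double lon, double lat)
{
    // ESRI's com.esri.wms.Esrimap honors the ellipsoidal datum in 54004;
    // GeoServer, and apparently others, treat it as spherical
    if (wmsUrl.Contains("com.esri.wms.Esrimap"))
    {
        return MercatorEllipsoid(lon, lat);
    }
    return Mercator(lon, lat);
}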

Summary

  1. ESRI is right again!
  2. Always check your math one more time
  3. Community responses produce answers

Thanks everyone!

EPSG Problems


Fig 1 – Something is not right with DIA?

Life is full of problems. With eight kids we have lots of problems, especially as school gets started up again. Life in the chaos lane has its moments, but the problem that has been baffling me this week has to do with EPSG coordinate systems and in particular the venerable but unofficial EPSG:54004.

EPSG:54004 is not really official. You won’t find it in the official EPSG registry. It is commonly used, however, ever since it was added as a “user extension” to support the spherical Mercator popularized by Google Maps.

Here are the basics of EPSG:54004

PROJCS["World_Mercator",
    GEOGCS["GCS_WGS_1984",
        DATUM["WGS_1984",
            SPHEROID["WGS_1984",6378137,298.257223563]],
        PRIMEM["Greenwich",0],
        UNIT["Degree",0.017453292519943295]],
    PROJECTION["Mercator_1SP"],
    PARAMETER["False_Easting",0],
    PARAMETER["False_Northing",0],
    PARAMETER["Central_Meridian",0],
    PARAMETER["Standard_Parallel_1",0],
    UNIT["Meter",1],
    AUTHORITY["EPSG","54004"]]
It is similar to EPSG:900913

PROJCS["Google Mercator",
    GEOGCS["WGS 84",
        DATUM["World Geodetic System 1984",
            SPHEROID["WGS 84",6378137.0,298.257223563,
                AUTHORITY["EPSG","7030"]],
            AUTHORITY["EPSG","6326"]],
        PRIMEM["Greenwich",0.0,
            AUTHORITY["EPSG","8901"]],
        UNIT["degree",0.017453292519943295],
        AXIS["Geodetic latitude",NORTH],
        AXIS["Geodetic longitude",EAST],
        AUTHORITY["EPSG","4326"]],
    PROJECTION["Mercator_1SP"],
    PARAMETER["semi_minor",6378137.0],
    PARAMETER["latitude_of_origin",0.0],
    PARAMETER["central_meridian",0.0],
    PARAMETER["scale_factor",1.0],
    PARAMETER["false_easting",0.0],
    PARAMETER["false_northing",0.0],
    UNIT["m",1.0],
    AXIS["Easting",EAST],
    AXIS["Northing",NORTH],
    AUTHORITY["EPSG","900913"]]

And both are similar to the official, official EPSG:3857

They should be equivalent. All use earth radius 6378137.0m with a spherical Mercator approximation that happens to be very efficient for quad tree tile pyramids. It has been so successful that it’s used in all the major online map services: Google, Bing, Yahoo, and OpenStreetMap. It is also pretty simple to convert latitude, longitude into spherical Mercator coordinates. Using my dog-eared John Snyder manual I get something like this:

private Point Mercator(double lon, double lat)
{
    /* spherical Mercator for Google, Bing, Yahoo
     * epsg:900913, epsg:54004, epsg:3857  R = 6378137
     * x = R * longitude (radians)
     * y = R * ln(tan(pi/4 + latitude/2))
     */
    double R = 6378137.0;
    double x = R * Math.PI / 180 * lon;
    double y = R * Math.Log(Math.Tan(Math.PI / 180 * (45 + lat / 2.0)));
    return new Point(x, y);
}

The Problem

Now on to the problem: apparently not all EPSG:54004s are equal?

Looking at the screenshot above, you can see that the MRLC NLCD 2001 Impervious Surface layer using EPSG:54004 shows DIA a few miles south of the Bing Maps DIA? Bing Maps, as noted above, uses the EPSG:3857 spherical Mercator, which should be the same, but apparently is not. A quick check against OSM and Yahoo base maps verifies the same discrepancy.

I’ve been known to get tangled up in coordinate systems, so just as a sanity check here is a screenshot with DRCOG’s Network layer also rendered as EPSG:54004:

Fig 2 – DRCOG EPSG:54004 looks right with Bing Maps but wrong with MRLC?

The DRCOG EPSG:54004 and Bing Maps align correctly so it is now 2 to 1 against the MRLC EPSG:54004. But what is going on here?

Taking a peek at DRCOG’s GetCapabilities I can see it is served out of GeoServer: http://gis.drcog.org:80/geoserver/wms. Here is the user epsg section of GeoServer’s epsg.properties file from my local copy of GeoServer:

54004=PROJCS["WGS84 / Simple Mercator", GEOGCS["WGS 84", DATUM["WGS_1984",
SPHEROID["WGS_1984", 6378137.0, 298.257223563]],
PRIMEM["Greenwich", 0.0], UNIT["degree", 0.017453292519943295]],
PROJECTION["Mercator_1SP_Google"],
PARAMETER["latitude_of_origin", 0.0],
PARAMETER["central_meridian", 0.0],
PARAMETER["scale_factor", 1.0],
PARAMETER["false_easting", 0.0],
PARAMETER["false_northing", 0.0], UNIT["m", 1.0],
AXIS["x", EAST], AXIS["y", NORTH], AUTHORITY["EPSG","54004"]]

It looks familiar.

The MRLC GetCapabilities shows an ESRI server:
http://ims.cr.usgs.gov:80/wmsconnector/com.esri.wms.Esrimap
Perhaps this is a clue. I know of several other ESRI WMS services, so going to the National Atlas, I can see that it, too, publishes an EPSG:54004 SRS: <SRS>EPSG:54004</SRS>

I plugged the National Atlas into my handy WMS client and took a look. I selected the major highways and interstates, which are visible with opacity turned up against the Impervious Surface backdrop. You can see a high degree of correlation between the two layers, and both are nowhere near the Bing base map?

Fig 3 – National Atlas EPSG:54004 agrees with impervious surface?

OK, now it’s 2 to 2 at the end of the ninth. It is clear that the EPSG:54004 discrepancy has something to do with a difference in interpretation between WMS servers. It appears that ESRI is consistently different from Bing Maps and GeoServer. ESRI is the expert in this area, so my thought is there may be some unusual EPSG consideration that ESRI has caught and others possibly have not.

A bit of Googling brings up an older ESRI technical article that may have some bearing, but nothing very clear about this large a discrepancy. Going with a hunch, I tried pretending ESRI’s EPSG:54004 was ellipsoidal rather than spherical. Doing a quick switch to an ellipsoidal Mercator calculation, however, makes no discernible difference.

private Point MercatorEllipsoid(double lon, double lat)
{
    double a = 6378137.0;         // meters, equatorial radius
    double e = 1 / 298.257223563; // oops: this is the flattening f, not the
                                  // eccentricity e (the miscalculation resolved above)
    double ctrmerid = 0.0;

    double px = lon * Math.PI / 180;
    double py = lat * Math.PI / 180;
    double pc = ctrmerid * Math.PI / 180;

    double x = a * (px - pc);
    double y = a * (Math.Log(Math.Tan((45.0 * Math.PI / 180) + py / 2.0)
        * Math.Pow((1.0 - e * Math.Sin(py))
        / (1.0 + e * Math.Sin(py)), e / 2.0)));
    return new Point(x, y);
}

Switching to the universal EPSG:4326 lined things up well enough. Unfortunately it does nothing to solve the mystery.

Fig 4 – Switching to EPSG:4326 brings layers into alignment at this zoom level

Summary

The mystery is not resolved. If others have some insight, I’d like to hear it, especially from experts with ESRI com.esri.wms.Esrimap experience. If anyone wants to play with these EPSG:54004 layers, I have a simple WMS client here: Silverlight WMSClient.

Looking at JPEG 2000

Fig 1 – NAIP TX JP2 in ER Viewer 7.2

I ran into a problem at the end of last week. I was attempting to use a set of NAIP county mosaics for Texas. These are very nice 1m GSD images available from the USDA Data Gateway as 4-band JPEG 2000 image files. The 4th band falls into the infrared region and lets analysts see more vegetation detail. I don’t need it, but it helps out the Dept of Agriculture, which is, after all, funding this very useful public imagery. I just wish they published NAIP through an OGC service, WMS or WCS.

Here is a metadata description. These are useful high resolution images, and I was providing them to a cable routing contractor for use in his engineering ‘as built’ documents. I’ve done this in the past without much problem using NAIP images from 2006–2007 furnished as MrSID compressed files.

The process is to create a corridor in AutoCAD along the proposed route, then draw a series of page rectangles stepping along the route, making sure the entire corridor is covered. This is fairly straightforward AutoCAD drafting. After export as DXF, these tiles can be processed through a small Java program to create a batch file for clipping imagery.

This batch file makes use of gdal_translate to clip out a subset of the source image and create a new GeoTiff result. The resulting batch file is just a text file with a lot of lines like this:
gdal_translate -of GTiff -co "TILED=YES" -srcwin 27537 45649 24951 5046.003420858178 C:\ortho_1-1_1m_j_tx375_2008_2.jp2 potter.tiff
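
The -srcwin arithmetic behind those numbers is just map coordinates minus the image origin, divided by the 1m ground sample distance, with rows counting down from the upper left. A C# sketch of what the little batch generator computes (mine was Java; the georeferencing constants come from the gdalinfo output below):

static string ClipCommand(string src, string dst,
    double minX, double minY, double maxX, double maxY)
{
    const double originX = 208978.447721;  // image upper-left X, meters
    const double originY = 3947556.455350; // image upper-left Y, meters
    const double gsd = 1.0;                // 1m pixels

    int xoff  = (int)((minX - originX) / gsd);
    int yoff  = (int)((originY - maxY) / gsd); // rows count down from the top
    int xsize = (int)((maxX - minX) / gsd);
    int ysize = (int)((maxY - minY) / gsd);

    return string.Format(
        "gdal_translate -of GTiff -co \"TILED=YES\" -srcwin {0} {1} {2} {3} {4} {5}",
        xoff, yoff, xsize, ysize, src, dst);
}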

The image files are large, so compression is mandatory, especially since the images are furnished for ftp download from USDA servers. Wavelet compression is very efficient and has become the compression of choice for very large images like these. JP2 is also interesting for web applications due to its progressive transmission by pixel and resolution accuracy. Now that JPEG 2000 implementation libraries have been out for a while, more and more jp2 files and tools are available. I didn’t anticipate any problem with the 2008 switch from MrSID to JP2.

However, I was very wrong.

A few hours later I had a list of viewers that did not work:

and a couple that did (but didn’t export to GeoTiff):

Fig 2 – NAIP TX JP2 in TatukGIS Viewer

Fig 3 – NAIP TX JP2 in OpenEV Viewer

Fig 4 – NAIP TX JP2 in Kakadu kdu_show

Fig 5 – NAIP TX JP2 in ArcView

Fig 6 – NAIP TX JP2 in Global Mapper

Hmm, that is strange. It has a cimicine odor to it, but JPEG 2000 has been in the works for years and has a good range of well-used tools. NAIP is very popular imagery, yet a quick Google search didn’t turn up any very specific issue. This 11/19/2008 post mortem ppt lists some JPEG 2000 problems, but nothing as catastrophic as mine:

“JPEG2000 Compression format required for 4-band CCM under the NAIP contract. APFO is experiencing the following problems/issues with JPEG 2000 compressions:

  • Blurring images.
  • Difficulty with program settings
  • Limited customer support from open source software libraries
  • Image loss with advancing zoom
  • Rendering issues due to DVD exceeding ArcGIS 9.1 file capacities
  • Rendering issues on other viewers (consumer use) have been resolved.”

This link is an interesting summary from the 11/19/2008 NAIP coordination meeting. Note that the seamless NED resolution has ramifications for processing these higher resolution national imagery programs. Sounds like another reason to move to a national high resolution LiDAR survey of at least the continental US. I suppose LiDAR collection could happen simultaneously with imagery collection and save a bundle in flying costs.

USDA was prompt when I asked about my issue, but since the imagery was viewable in their ArcInfo, all was right with the world … and their data. I couldn’t really quibble with that, but I don’t have access to an ArcInfo license, so in my world … all is still not right.

Next I took a look at the header records output by gdalinfo:

C:\Program Files\FWTools2.4.2\bin\gdalinfo ortho_1-1_1m_j_tx375_2008_2.jp2
Driver: JP2ECW/ERMapper JPEG2000
Files: ortho_1-1_1m_j_tx375_2008_2.jp2
       ortho_1-1_1m_j_tx375_2008_2.j2w
Size is 59292, 57976
Coordinate System is `'
Origin = (208978.447721050060000,3947556.455350011600000)
Pixel Size = (1.000000000000000,-1.000000000000000)
Corner Coordinates:
Upper Left  (  208978.448, 3947556.455)
Lower Left  (  208978.448, 3889580.455)
Upper Right (  268270.448, 3947556.455)
Lower Right (  268270.448, 3889580.455)
Center      (  238624.448, 3918568.455)
Band 1 Block=59292x1 Type=Byte, ColorInterp=Undefined
  Overviews: arbitrary
Band 2 Block=59292x1 Type=Byte, ColorInterp=Undefined
  Overviews: arbitrary
Band 3 Block=59292x1 Type=Byte, ColorInterp=Undefined
  Overviews: arbitrary
Band 4 Block=59292x1 Type=Byte, ColorInterp=Undefined
  Overviews: arbitrary

There is a small anomaly in the Coordinate System (it is empty), but all seems fine otherwise. It is interesting to see the driver gdal lists: “Driver: JP2ECW/ERMapper JPEG2000”. Erdas ER Mapper makes ER Viewer, which exhibits the same problem. I don’t know which sdk is licensed by Tatuk and AutoDesk, but possibly they also use ER Mapper’s JPEG 2000 sdk.

Next I fired up Eclipse and took a look at some JAI. First I tried rewriting the jp2 as tif:

package com.mmc.image;

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class J2KTest {
    public static void main(String[] args) {
        try {
            // attempt a straight JP2 to TIFF rewrite via ImageIO
            final BufferedImage bi = ImageIO.read(new File("C:/temp/JP2_test.jp2"));
            File dest = new File("C:/temp/test.tif");
            ImageIO.write(bi, "tiff", dest);
            System.out.println("Complete");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

“No bing!” Just another clue, pointing toward the metadata though:

Exception in thread "main" java.lang.NullPointerException
	at com.sun.media.imageioimpl.plugins.jpeg2000.J2KMetadata.replace(J2KMetadata.java:962)
	at com.sun.media.imageioimpl.plugins.jpeg2000.J2KMetadata.addNode(J2KMetadata.java:631)
	at com.sun.media.imageioimpl.plugins.jpeg2000.J2KRenderedImageCodecLib.readImageMetadata(J2KRenderedImageCodecLib.java:1006)
	at com.sun.media.imageioimpl.plugins.jpeg2000.J2KRenderedImageCodecLib.createOriginalSampleModel(J2KRenderedImageCodecLib.java:673)
	at com.sun.media.imageioimpl.plugins.jpeg2000.J2KRenderedImageCodecLib.<init>(J2KRenderedImageCodecLib.java:261)
	at com.sun.media.imageioimpl.plugins.jpeg2000.J2KImageReaderCodecLib.read(J2KImageReaderCodecLib.java:364)
	at javax.imageio.ImageIO.read(ImageIO.java:1422)
	at javax.imageio.ImageIO.read(ImageIO.java:1282)
	at com.mmc.image.J2KTest.main(J2KTest.java:19)

Time to take a look at that Coordinate System record in the header using the debugger to look at the IIOMetadata object.

package com.mmc.image;

import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.metadata.IIOMetadata;
import javax.imageio.stream.ImageInputStream;
import com.sun.media.imageioimpl.plugins.jpeg2000.J2KImageReaderSpi;

public class JP2_metadata {
    public static void main(String[] args) {
        try {
            final File file = new File("C:/temp/JP2_test.jp2");

            final ImageInputStream iis = ImageIO.createImageInputStream(file);
            J2KImageReaderSpi readerSPI = new J2KImageReaderSpi();
            final ImageReader reader = readerSPI.createReaderInstance();
            reader.setInput(iis);

            // breakpoint here to inspect the JP2 header boxes in the debugger
            IIOMetadata iioMetadata = reader.getImageMetadata(0);

        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

“no Java!” Just the same problem reading the metadata?

Exception in thread "main" java.lang.NullPointerException
	at com.sun.media.imageioimpl.plugins.jpeg2000.J2KMetadata.replace(J2KMetadata.java:962)
	at com.sun.media.imageioimpl.plugins.jpeg2000.J2KMetadata.addNode(J2KMetadata.java:631)
	at jj2000.j2k.fileformat.reader.FileFormatReader.readFileFormat(FileFormatReader.java:279)
	at com.sun.media.imageioimpl.plugins.jpeg2000.J2KMetadata.<init>(J2KMetadata.java:135)
	at com.sun.media.imageioimpl.plugins.jpeg2000.J2KImageReader.getImageMetadata(J2KImageReader.java:419)
	at com.mmc.image.JP2_metadata.main(JP2_metadata.java:22)

Too bad I don’t have the source code for IIOMetadata. I’d like to step into it and look at the header boxes; perhaps I could actually see where the problem is, maybe even fix it. However, I’ve put way too much time into this already, and I’m not sure where to find the JP2 sources for JAI ImageIO. Next stop: a purchase of Global Mapper to finish the job before I get into trouble. Global Mapper is adequate, and it meets my predisposition to Jurassic solutions, i.e. cheap.

Summary

I really tried to find a solution in the Open Source world. Sometimes it just doesn’t work out. Fortunately Global Mapper was able to read the 4-band images as a composite and export them as GTiff. Exporting the full 800Mb jp2 images to GTiff would have taken days, though. Luckily an added bonus let me overlay my clipping page rectangle vectors and then export clipped subsets, which are much smaller and faster. The only drawback is the manual tedium of selecting one clip export at a time. Perhaps, if I knew more about Global Mapper, there is a handy scripting approach. In the meantime, I was able to finish the job on time, and I was only a little disappointed about not finding an Open Source solution.

Postscript

USDA added another clue to my puzzle, noting that apparently there is some difference in the interpretation of the JPEG 2000 standards. I imagine that could be the underlying issue, but in the meantime the 2008 NAIP 4-band imagery has some anomalous viewer-access problems out here in the wider world.