Mirror Land and the Last Foot

Fig 1 – Bing Maps Streetside

I know 2010 started yesterday but I slept in. I’m just a day late.

Even a day late perhaps it’s profitable to step back and muse over larger technology trends. I’ve worked through several technology tides in the past 35 years. I regretfully admit that I never successfully absorbed the “Gang of Four” Design Patterns. My penchant for the abstract is relatively low. I learn by doing concrete projects, and probably fall into the amateur programming category often dismissed by the “professional” programming cognoscenti. However, having lived through a bit of history already, I believe I can recognize an occasional technology trend without benefit of a Harvard degree or even a “Professional GIS certificate.”

What has been striking me of late is the growth of mirror realities. I’m not talking about bizarre multiverse theories popular in modern metaphysical cosmology, nor parallel universes of the many worlds quantum mechanics interpretation, or even virtual world phenoms such as Second Life or The Sims. I’m just looking at the mundane evolution of internet mapping.

Fig 2 – Google Maps Street View

One of my first mapping projects, back in the late 80′s, was converting the very sparse CIA world boundary file, WDBI, into an AutoCAD 3D Globe (WDBI came on a data tape reel). At the time it was novel enough, especially in the CAD world, to warrant a full color front cover of Cadence Magazine. I had lots of fun creating some simple AutoLisp scripts to spin the world view and add vector point and line features. I bring it up because at that point in history, prior to the big internet boom, mapping was a coarse affair at global scales. This was only a primitive wire frame, ethereal and transparent, yet even then quite beautiful, at least to map nerds.

Fig 3 – Antique AutoCAD Globe WDBI

Of course, at that time scientists and GIS people were already playing with multi-million-dollar image acquisitions, but generally in fairly small areas. Landsat had been launched more than a decade earlier, but few people had the computing resources to play in that arena. Then too, the US military was the main driving force with DARPA technology undreamed of by the rest of us. A very large gap existed between Global and Local scales, at least for the consumer masses. This access gap continued really until Keyhole’s acquisition by Google. There were regional initiatives like USGS DLG/DEM, Ordnance Survey, and Census TIGER. However, computer earth models were fragmented affairs, evolving relatively slowly down from satellite and up from aerial, until suddenly the entire gap was filled by Google and the repercussions are still very much evident.

Internet Map coverage is now both global and local, and everything in between, a mirror land. The full spectrum of coverage is complete. Or is it? A friend remarked recently that recent talk of mobile LiDAR echoes earlier discussions of the “Last Mile,” when the Baby Bells and Cable Comms were competing for market share of internet connectivity. You can glimpse the same echo as Microsoft and Google jockey for market share of local street resolution, StreetView vs Streetside. The trend is from a global coarse model to a full scale local model, a trend now pushing out into the “Last Foot.” Alternate map models of the real world are diving into the human dimension, feet and inches not miles, the detail of the street, my local personal world.

LiDAR contributes to this mirror land by adding a partial 3rd dimension to the flat photo world of street side capture. LiDAR backing can provide the swivel effects and the icon switching surface intelligence found in StreetView and Streetside. LiDAR capture is capable of much more, but internet UIs are still playing catchup in the 3rd dimension.

The question arises whether GIS or AEC will be the driver in this new human dimension “mirror land.” Traditionally AEC held the cards at feet and inches while GIS aerial platforms held sway in miles. MAC, Mobile Asset Collection, adds a middle way with inch level resolution capability available for miles.

Fig 4 – Video Synched to Map Route

Whoever gets the dollars for capture of the last foot, in the end it all winds up inside an internet mirror land.

We are glimpsing a view of an alternate mirror reality that is not a Matrix sci-fi fantasy, but an ordinary part of internet connected life. Streetside and Street View push this mirror land down to the sidewalk.

On another vector, cell phone locations are adding the first primitive time dimension with life tracks now possible for millions. Realtime point location is a first step, but life track video stitched on the fly into photosynth streams lends credence to street side contingency.

The Location hype is really about linking those massive market demographic archives to a virtual world and then back connecting this information to a local personal world. As Sean Gillies pointed out recently in “Utopia or Dystopia,” there are pros and cons. But, when have a few “cons” with axes ever really made a difference to the utopian future of technology?

With that thought in mind why not push a little on the future and look where the “Last Millimeter” takes us?
    BCI Brain Computer Interface
    Neuronal Prosthetics

Fig 5 – Brain Computer Interface

Eye tracking HUD (not housing and urban development exactly)

Fig 6 – HUD phone?

I’m afraid the “Last Millimeter” is not a pretty thought, but at least an interesting one.


Just a few technology trends to keep an eye on. When they get out the drill for that last millimeter perhaps it’s time to pick up an ax or two.

Stimulus LiDAR

Fig 1 – New VE 3D Building – Walk Through Walls

I’ve been thinking about LiDAR recently. I was wondering when it would become a national resource and then this pops up: GSA IDIQ RFP Hits the Street
Services for 3D Laser Scanning and Building Information Modeling (BIM)
Nationwide Laser Scanning Services
Here is the GSA view of BIM: Building Information Modeling (BIM)

And here is the core of the RFP:

1) Exterior & Interior 3D-Laser Scanning
2) Transform 3D laser scanning point cloud data to BIM models
3) Professional BIM Modeling Services based on 3D laser scanning data
    including, but not limited to:
   a. Architectural
   b. Structural
   c. MEP
   d. Civil
4) 3D Laser Scanning-based analysis & applications including,
     but not limited to:
   a. Visualization
   b. Clash Detection
   c. Constructability analysis
5) Integration of multiple 3D Laser Scanning Data, BIMs & analyses
6) Integration between multiple 3D Laser Scanning Data and BIM softwares
7) 3D Laser Scanning Implementation Support, including:
   a. Development of 3D Laser Scanning assessment and project
       implementation plans
   b. Selection of VDC software
   c. Support to project design and construction firms
   d. Review of BIM models & analysis completed by other
       service providers
8) Development of Best Practice Guidance, Case Studies, Procedures
     and Training Manuals
9) Development of long- & short-term 3D Laser Scanning implementation
10) Development of benchmarking and measurement standards and programs
11) 3D Laser Scanning and Modeling Training, including:
   a. Software specific training
   b. Best Practices

This is aimed at creating a national building inventory rather than a terrain model, but still very interesting. The interior/exterior ‘as built’ aspect has some novelty to it. Again virtual and real worlds appear to be on a collision trajectory and may soon overlap in many interesting ways. How to make use of a vast BIM modeling infrastructure is an interesting question. The evolution of GE and VE moves inexorably toward a browser virtual mirror of the real world. Obviously it is useful to first responders such as fire and SWAT teams, but there may be some additional ramifications. Once this data is available, will it be public? If so, how could it be adapted to internet use?

Until recently 3D modeling in browsers was fairly limited. Google and Microsoft have both recently provided useful api/sdk libraries embeddable inside a browser. The terrain model in GE and VE is vast but still relatively coarse and buildings were merely box facades in most cases. However, Johannes Kebek’s blog points to a recent VE update which re-enabled building model interiors, allowing cameras to float through interior space. Following a link from Kebek’s blog to Virtual Earth API Release Information, April 2009 reveals these little nuggets:

  • Digital Elevation Model data – The ability to supply custom DEM data, and have it automatically stitched into existing data.
  • Terrain Scaling – This works now.
  • Building Culling Value – Allows control of how many buildings are displayed, based on distance and size of the buildings.
  • Turn on/off street level navigation – Can turn off some of the special effects that occur when right next to the ground.

Both Google and Microsoft are furnishing modeling tools tied to their versions of virtual online worlds. Virtual Earth 3dvia technology preview appears to be Microsoft’s answer to Google Sketchup.
The race is on and shortly both will be views into a virtual world evolving along the lines of the gaming markets. But, outside of the big two, GE and VE, is there much hope of an Open Virtual World, OVM? Supposing this BIM data is available to the public is there likely to be an Open Street Map equivalent?

Fortunately GE and VE equivalent tools are available and evolving in the same direction. X3D is an interesting open standard scene graph schema awaiting better implementation. WPF is a Microsoft standard with some scene building capability which appears to be on track into Silverlight … eventually. NASA World Wind Java API lets developers build applet views and more complex Java Web Start applications which allow 3D visualizing. Lots of demos here: World Wind Demos
Blender.org may be overkill for BIM, but is an amazing modeling tool and all open source.

LiDAR Terrain

Certainly BIM models will find their way into browsers, but there needs to be a corresponding evolution of terrain modeling. BIM modeling is at sub-foot resolution while terrain is still at multi-meter resolution. I am hopeful that there will also be a national resource of LiDAR terrain data at sub-meter resolution, but this announcement has no indication of that possibility.

I’m grappling with how to make use of the higher resolution in a web mapping app. GE doesn’t work well since you are only draping over their coarse terrain. Silverlight has no 3D yet. WPF could work for small areas like architectural site renderings, but it requires an xbap download similar to Java Web Start. X3D is interesting, but has no real browser presence. My ideal would be something like an X3D GeoElevationGrid LOD pyramid, which will not be available to browsers for some years yet. The new VE sdk release with its ability to add custom DEM looks very helpful. Of course 2D contours from LiDAR are a practical use in a web map environment until 3D makes more progress in the browser.

Isenburg’s work on streaming meshes is a fascinating approach to reasonable time triangulation and contouring of LiDAR points. Here is an interesting kml of Mt St Helen’s contours built from LAS using Isenburg’s streaming methods outlined in this paper: Streaming Computation of Delaunay Triangulations.

Fig 2 – Mt St Helen’s Contours generated from streaming mesh algorithm

A national scale resource coming out of the “Stimulus” package for the US will obviously be absorbed into GE and VE terrain pyramids. But neither offers download access to the backing data, which is what the engineering world currently needs. What would be nice is a national coverage at some submeter resolution with the online tools to build selective areas with choice of projection, contour interval, tin, or raw point data. Besides, Architectural Engineering documents carry a legal burden that probably excludes them from online connectivity.

A bit of calculation (storage only) for a national DEM resource :
1m pixel resolution
AWS S3 $0.150 per GB – first 50 TB / month of storage used = $150 tb/month
US = 9,161,923,000,000 sq m @ 4bytes per pixel => 36.648 terabytes = $5497.2/month

0.5m pixel resolution
$0.140 per GB next 50 TB / month of storage used = $140 tb/month
US = 146.592 terabytes = $20,522/month
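
For checking, the storage arithmetic above can be reproduced in a few lines (TB here counts 10^12 bytes; the per-TB prices are the S3 rates quoted above):

```python
# Back-of-envelope storage cost for a national DEM at 4 bytes per pixel.
US_AREA_SQ_M = 9_161_923_000_000
BYTES_PER_PIXEL = 4

def dem_cost(pixel_size_m, dollars_per_tb_month):
    """Return (terabytes, dollars per month) for a given pixel size."""
    pixels = US_AREA_SQ_M / (pixel_size_m ** 2)
    terabytes = pixels * BYTES_PER_PIXEL / 1e12
    return terabytes, terabytes * dollars_per_tb_month

tb1, cost1 = dem_cost(1.0, 150)   # 1m resolution at $150/TB-month
tb2, cost2 = dem_cost(0.5, 140)   # 0.5m resolution at $140/TB-month
```

Halving the pixel size quadruples the pixel count, hence the roughly 4x jump between the two monthly figures.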

Not small figures for an Open Source community resource. Perhaps OVM wouldn’t have to actually host the data at community expense; if the government is kind enough to provide WCS exposure, OVM would only need to be another node in the process chain.

Further into the future, online virtual models of the real world appear helpful for robotic navigation. Having a mirrored model available short circuits the need to build a real time model in each autonomous robot. At some point a simple wireless connection provides all the information required for robot navigation in the real world with consequent simplification. This of course adds an unnerving twist to some of the old philosophical questions regarding perception. Does robotic perception based on a virtual model but disconnected from “the real” have something to say to the human experience?

A Look at Google Earth API

Google’s earth viewer plugin is available for use in browsers on Windows and Vista, but not yet Linux or Mac. This latest addition to Google’s family of APIs (yet another beta) lets developers make use of the nifty 3D navigation found in the standalone Google Earth. However, as a plugin the developer now has more control using javascript with html or other server side frameworks such as asp .NET. http://code.google.com/apis/earth/
Here is the handy API reference:

The common pattern looks familiar to Map API users: get a key code for the Google served js library and add some initialization javascript along with a map <div> as seen in this minimalist version.

  <script src="http://www.google.com/jsapi?key=…keycode…"></script>
  <script>
  var ge = null;
  google.load("earth", "1");

  function pageLoad() {
    google.earth.createInstance("map3d", initCallback, failureCallback);
  }

  function initCallback(object) {
    ge = object;
    var options = ge.getOptions();
  }

  function failureCallback(object) {
    alert('load failed');
  }
  </script>

  <body onload="pageLoad()">
    <div id="map3d"></div>
  </body>
Listing 1 – minimalist Google Earth API in html

This is rather straightforward with the exception of a strange error when closing the browser:

Google explains:
“We’re working on a fix for that bug. You can temporarily disable script debugging in Internet Options –> Advanced Options to keep the messages from showing up.”

… but the niftiness is in the ability to build your own pages around the api. Instead of using Google Earth links in the standalone version, we can customize a service tailored to very specific desires. For example this experimental Shp Viewer lets a user upload a shape file with a selected EPSG coordinate system and then view it over 3D Google Earth along with some additional useful layers like PLSS from USGS and DRG topo from TerraServer.

Fig 2 – a quarter quad shp file shown over Google Earth along with a USGS topo index layer

In the above example, clicking on a topo icon triggers a ‘click’ event listener, which then goes back to the server to fetch a DRG raster image from TerraServer via a java proxy servlet.

case 'usgstile':
  url = server + "/ShpView/usgstile.kml";
  usgstilenlink = CreateNetLink("TopoLayer", url, false);
  break;

google.earth.addEventListener(usgstilenlink, "click", function(event) {
  var topo = event.getTarget();
  var data = topo.getKml();
  var index = data.substr(data.indexOf("<td>index</td>") + 20, 7);
  var usgsurl = server + "/ShpView/servlet/GetDRG?name=" +
      topo.getName() + "&index=" + index;
  loadKml(usgsurl);
});

function loadKml(kmlUrl) {
  google.earth.fetchKml(ge, kmlUrl, function(kmlObject) {
    if (kmlObject) {
      ge.getFeatures().appendChild(kmlObject);
    } else {
      alert('Bad KML');
    }
  });
}
Listing 2 – addEventListener gives the developer some control of user interaction

In addition to ‘addEventListener’ the above listing shows how easily a developer can use ‘getKml’ to grab additional attributes out of a topo polygon. Google’s ‘fetchKml’ simplifies life when grabbing kml produced by the server. My first pass on this used XMLHttpRequest directly like this:

  var req;

function loadXMLDoc(url) {
  if (window.XMLHttpRequest) {
    req = new XMLHttpRequest();
    req.onreadystatechange = processReqChange;
    req.open("GET", url, true);
    req.send(null);
  // branch for IE6/Windows ActiveX version
  } else if (window.ActiveXObject) {
    req = new ActiveXObject("Microsoft.XMLHTTP");
    if (req) {
      req.onreadystatechange = processReqChange;
      req.open("GET", url, true);
      req.send();
    }
  }
}

function processReqChange() {
  if (req.readyState == 4) {
    if (req.status == 200) {
      var topo = ge.parseKml(req.responseText);
    } else {
      alert("Problem retrieving the XML data:\n" + req.statusText);
    }
  }
}
Listing 3 – XMLHttpRequest alternative to ‘fetchKml’

However, I ran into security violations due to running the tomcat Java servlet proxy out of a different port from my web application in IIS. Rather than rewrite the GetDRG servlet in C# as an asmx, I discovered the Google API ‘fetchKml’, which is more succinct anyway, and nicely bypassed the security issue.

Fig 3 – Google Earth api with a TerraServer DRG loaded by clicking on USGS topo polygon

And using the Google 3D navigation

Fig 4 – Google Earth api with a TerraServer DRG using the nifty 3D navigation

Of course in this case 3D is relatively meaningless. Michigan is absurdly flat. A better example of the niftiness of 3D requires moving out west like this view of Havasupai Point draped over the Grand Canyon, or looking at building skp files in major metro areas as in the EPA viewer.

Fig 5 – Google Earth api with a TerraServer DRG Havasupai Point in more meaningful 3D view

This Shp Viewer experiment makes use of a couple other interesting technologies. The .shp file set (.shp, .dbf, .shx) is uploaded to the server where it is then loaded into PostGIS and added to the geoserver catalog. Once in the catalog the features are available to kml reflector for pretty much automatic use in Google Earth api. As a nice side effect the features can be exported to all of geoserver’s output formats (pdf, svg, RSS, OpenLayers, png, geotiff), minus the inimitable background terrain of Google Earth.

ogr2ogr is useful for loading the furnished .shp files into PostGIS, where they are accessible to Geoserver and the also very useful kml_reflector. PostGIS + Geoserver + Google Earth API is a very powerful stack for viewing map data. There is also a nice trick for adding Geoserver featureTypes to the catalog programmatically. I used the Java version after writing out the featureType info.xml to geoserver’s data directory. Eventually, work on the Restful configuration version of Geoserver may make this kind of catalog reloading easier.

Wow I like 3D and I especially like 3D I have control over. Google Earth API is easy to add into an html page and event listeners give developers some control over features added to the 3D scene. Coupled with the usual suspects in the GIS stack, PostgreSQL /PostGIS and Geoserver, it is relatively easy to make very useful applications. I understand though that real control freaks will want to look at WWjava.

If anyone would like to see a prototype UI of their data using this approach please email

Spatial Analytics in the Cloud

Peter Batty has a couple interesting blogs on Netezza and their recent spatial analytics release. Basically Netezza has developed a parallel system, hardware/software, for processing large spatial queries with scaling improvements of 1 to 2 orders of magnitude over Oracle Spatial. This is apparently accomplished by pushing some of the basic filtering and projection out to the hardware disk reader as well as more commonly used parallel Map Reduce techniques. Ref Google’s famous white paper: http://labs.google.com/papers/mapreduce-osdi04.pdf

One comment struck me: Rich Zimmerman mentioned that their system eliminates indexing and tuning, essentially relying on the efficiency of brute force parallel processing. There is no doubt that their process is highly effective and successful given the number of client buy-ins as well as Larry Ellison’s attention. I suppose, though, that an Oracle buy out is generally considered the gold standard of competitive pain when Oracle is on the field.

In Peter’s interview with Rich Zimmerman they discuss a simple scenario in which a large number of point records (in the billion range) are joined with a polygon set and processed with a spatial ‘point in polygon’ query. This is the type of analytics that would be common in real estate insurance risk analytics and is typically quite time consuming. Evidently Netezza is able to perform these types of analytics in near real time, which is quite useful in terms of evolving risk situations such as wildfire, hurricane, earthquake, flooding etc. In these scenarios domain components are dynamically changing polygons of risk, such as projected wind speed, against a relatively static point set.

Netezza performance improvement factors over Oracle Spatial were in the 100 to 500 range with Netezza SPU arrays being anywhere from 50 to 1000. My guess would be that the performance curve would be roughly linear. The interview suggested an amazing 500x improvement over Oracle Spatial with an unknown number of SPUs. It would be interesting to see a performance versus SPU array size curve.

I of course have no details on the Netezza hardware enhancements, but I have been fascinated with the large scale clustering potential of cloud computing, the poor man’s supercomputer. In the Amazon AWS model, node arrays are full power virtual systems with consequent generality, as opposed to the more specific SPUs of the Netezza systems. However, cloud communication latency has to be much larger than in an engineered multi-SPU array. On the other hand, would the $0.1/hr instance cost compare favorably to a custom hardware array? I don’t know, but a cloud based solution would be more flexible and scale up or down as needed. For certain, cost would be well below even a single cpu Oracle Spatial license.

Looking at the cited example problem, we are faced with a very large static point set and a smaller dynamically changing polygon set. The problem is that assigning polygons of risk to each point requires an enormous number of ‘point in polygon’ calculations.

In thinking about the type of analytics discussed in Peter’s blog the question arises, how could similar spatial analytics be addressed in the AWS space? The following speculative discussion looks at the idea of architecting an AWS solution to the class of spatial analysis problems mentioned in the Netezza interview.

The obvious place to look is AWS Hadoop

Since Hadoop was originally written by the Apache Lucene developers as a way to improve text based search, it does not directly address spatial analytics. Hadoop handles the overhead of scheduling, automatic parallelization, and job/status tracking. The Map Reduce algorithm is provided by the developer as essentially two Java classes:

  Map – public static class MapClass extends MapReduceBase implements Mapper { … }
  Reduce – public static class ReduceClass extends MapReduceBase implements Reducer { … }

In theory, with some effort, the appropriate Java Map and Reduce classes could be developed specific to this problem domain, but is there another approach, possibly simpler?
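
For a feel of the shape such classes would take, here is the map/reduce pattern reduced to plain Python rather than Hadoop's Java classes. This is the pattern only, not Hadoop code; `tile_of` is a hypothetical function assigning a record to a tile signature:

```python
# Map emits (key, value) pairs keyed by tile; the sort stands in for
# Hadoop's shuffle stage; reduce collects each key's values.
from itertools import groupby

def map_phase(records, tile_of):
    """Emit sorted (tile, record) pairs, the input to the reduce stage."""
    return sorted((tile_of(r), r) for r in records)

def reduce_phase(pairs):
    """Group the sorted pairs by tile key."""
    return {key: [v for _, v in grp]
            for key, grp in groupby(pairs, key=lambda kv: kv[0])}
```

In a real Hadoop job the framework handles the sort/shuffle and distributes each key's group to a Reducer instance; the developer supplies only the two functions.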

My first thought, like Netezza’s, was to leverage the computational efficiency of PostGIS over an array of EC2 instances. This means dividing the full point set into smaller subsets, feeding these subset computations to their own EC2 instance and then aggregating the results. In my mind this involves at minimum:
 1. a feeder EC2 instance to send sub-tiles
 2. an array of EC2 computational instances
 3. a final aggregator EC2 instance to provide a result.

One approach to this example problem is to treat the very large point set as an array tiled in the tiff image manner with a regular rectangular grid pattern. Grid tiling only needs to be done once or as part of the insert/update operation. The assumptions here are:
 a. the point set is very large
 b. the point set is relatively static
 c. distribution is roughly homogeneous

If c is not the case, grid tiling would still work, but with a quad tree tiling pattern that subdivides dense tiles into smaller spatial extents. Applying the familiar string addressing made popular by Google Map and then Virtual Earth with its 0-3 quadrature is a simple approach to tiling the point table.

Fig 2 – tile subdivision

Recursively appending a char from 0 to 3 for each level provides a cell identifier string that can be applied to each tile. For example '002301' identifies a tile cell NW/NW/SW/SE/NW/NE. So the first step, analogous to spatial indexing, would be a pass through the point table calculating tile signatures for each point. This is a time consuming preprocess, basically iterating over the entire table and assigning each point to a tile. An initial density guess can be made to some tile depth. Then if the point tiles are not homogeneous (very likely), tiles with much higher counts are subdivided recursively until a target density is reached.
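
The recursive signature scheme can be sketched in a few lines of Python (bounds and coordinates here are illustrative; the real pass would run over the PostGIS point table):

```python
def tile_signature(x, y, bounds, depth):
    """Quadkey-style signature: one char (0=NW, 1=NE, 2=SW, 3=SE) per level."""
    minx, miny, maxx, maxy = bounds
    sig = ""
    for _ in range(depth):
        midx, midy = (minx + maxx) / 2.0, (miny + maxy) / 2.0
        north = y >= midy
        west = x < midx
        if north and west:            # NW quadrant
            sig += "0"; maxx, miny = midx, midy
        elif north:                   # NE quadrant
            sig += "1"; minx, miny = midx, midy
        elif west:                    # SW quadrant
            sig += "2"; maxx, maxy = midx, midy
        else:                         # SE quadrant
            sig += "3"; minx, maxy = midx, midy
    return sig
```

Each level narrows the bounds to the chosen quadrant, so deeper signatures mean smaller tiles; the adaptive subdivision would simply call this with a greater depth for dense tiles.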

Creating an additional tile geometry table during the tile signature calculations is a convenience for processing polygons later on. Fortunately the assumption that the point table is relatively static means that this process occurs rarely.

The tile identifier is just a string signature that can be indexed to pull predetermined tile subsets. Once completed there is a point tile set available for parallel processing with a simple query.
 SELECT point.wkb_geom, point.id
  FROM point
  WHERE point.tile = tilesignature;

Note that tile size can be manipulated easily by changing the WHERE clause slightly to reduce the length of the tile signature. In effect this combines 4 tiles into a single parent tile ('00230%' = '002300' + '002301' + '002302' + '002303'):
 SELECT point.wkb_geom, point.id
  FROM point
  WHERE point.tile LIKE (substring(tilesignature from 1 for length(tilesignature)-1) || '%');
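
The same coarsening can be done outside SQL by trimming the last character of each signature; a small sketch, with a dict of lists standing in for the point table:

```python
# Coarsening tiles by trimming the last signature character: the four
# children '002300'..'002303' collapse into parent '00230'.
from collections import defaultdict

def coarsen(tile_points):
    """Merge a dict of child-tile point lists into parent tiles."""
    parents = defaultdict(list)
    for sig, pts in tile_points.items():
        parents[sig[:-1]].extend(pts)
    return dict(parents)
```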

Assuming the polygon geometry set is small enough, the process is simply feeding sub-tile point sets into ‘point in polygon’ replicated queries such as this PostGIS query:
 SELECT point.id
  FROM point, polygon
  WHERE point.wkb_geom && polygon.wkb_geom
   AND intersects(polygon.wkb_geom, point.wkb_geom);

This is where the AWS cloud computing could become useful. Identical CPU systems can be spawned using a preconfigured EC2 image with Java and PostGIS installed. A single feeder instance contains the complete point table with tile signatures as an indexed PostGIS table. A Java feeder class then iterates through the set of all tiles resulting from this query:
   SELECT DISTINCT point.tile FROM point ORDER BY point.tile;

Using a DISTINCT query eliminates empty tiles as opposed to simply iterating over the entire tile universe. Again a relatively static point set indicates a static tile set. So this query only occurs in the initial setup. Alternatively a select on the tile table where the wkb_geom is not null would produce the same result probably more efficiently.

Each point set resulting from the query below is then sent to its own AWS EC2 computation instance.
 foreach tilesignature in DISTINCT point.tile
  SELECT point.wkb_geom, point.id
  FROM point
  WHERE point.tile = tilesignature;


The polygon set also has assumptions:
 a. the polygon set is dynamically changing
 b. the polygon set is relatively small

Selecting a subset of polygons to match a sub-tile of points is pretty efficient using the tile table created earlier:
 SELECT polygon.wkb_geom
   FROM tile INNER JOIN polygon ON (tile.wkb_geom && polygon.wkb_geom)
  WHERE tile.id = tilesignature;

Now the feeder instance can send a subset of points along with a matching subset of polygons to a computation EC2 instance.

Connecting EC2 instances

However, at this point I run into a problem! I was simply glossing over the “send” part of this exercise. The problem in most parallel algorithms is the communication latency between processors. In an ideal world shared memory would make this extremely efficient, but EC2 arrays are not connected this way. The cloud is not so efficient.

AWS services do include additional tools: Simple Storage Service (S3), Simple Queue Service (SQS), and SimpleDB. S3 is a type of shared storage. SQS is a type of asynchronous message passing, while SimpleDB provides a cloud based DB capability on structured data.

S3 is appealing because writing collections of polygon and point sets should be fairly efficient, one S3 object per tile unit. At the other end, computation instances would read from the S3 input bucket and write results back to a result output bucket. The aggregator instance can then read from the output result bucket.

However, implicit in this S3 arrangement is a great deal of schedule messaging. SQS is an asynchronous messaging system provided for this type of problem. Since messages are being sent anyway, why even use S3? SQS messages are limited to 8k of text so they are not sufficient for large object communications. Besides, point sets may not even change from one cycle to the next. The best approach is to copy each tile point set to an S3 object, and separate S3 objects for polygon tile sets, then add an SQS message to the queue. The computation instances read from the SQS message queue and load the identified S3 objects for processing. Note that point tile sets will only need to be written to S3 once at the initial pass. Subsequent cycles will only be updating the polygon tile sets. Hadoop would handle all of this in a robust manner, taking into account failed systems and lost messages, so it may be worth a serious examination.
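
As a rough sketch of this arrangement, here is a toy in-process simulation, with a dict standing in for S3 buckets and a FIFO queue for SQS. All names are illustrative; real code would go through the AWS APIs and handle failures and retries:

```python
# Toy feeder/worker cycle: S3 carries the bulky tile objects, SQS carries
# only small text messages naming which tile is ready.
from queue import Queue

s3 = {}            # object key -> payload, standing in for S3 buckets
sqs = Queue()      # small text messages, standing in for an SQS queue

def feeder(tiles):
    """Write each tile's point and polygon sets to 'S3', then queue a message."""
    for sig, (points, polygons) in tiles.items():
        s3["points/" + sig] = points
        s3["polygons/" + sig] = polygons
        sqs.put(sig)   # message names the tile; the data stays in S3

def worker(results):
    """Read messages, load the named S3 objects, compute, store results."""
    while not sqs.empty():
        sig = sqs.get()
        points = s3["points/" + sig]
        polygons = s3["polygons/" + sig]
        # stand-in for the intersect step: simple bbox containment
        hits = [p for p in points
                for (minx, miny, maxx, maxy) in polygons
                if minx <= p[0] <= maxx and miny <= p[1] <= maxy]
        results["results/" + sig] = hits

results = {}
feeder({"002301": ([(1, 1), (9, 9)], [(0, 0, 5, 5)])})
worker(results)
# results["results/002301"] now holds the points falling in a risk bbox
```

The decoupling is the point: the feeder never talks to a worker directly, so workers can be added or lost without the feeder caring.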

SimpleDB is not especially useful in this scenario, because the feeder instance’s PostGIS is much more efficient at organizing tile objects. As long as the point and polygon tables will fit in a single instance it is better to rely on that instance to chunk the tiles and write them to S3, then alerting computational instances via SQS.

Once an SQS message is read by the target computation instance, how exactly should we arrange the computation? Although tempting, use of PostGIS again brings up some problems. The point and polygon object sets would need to be added to two tables, indexed, and then queried with “point in polygon.” This does not sound efficient at all! A better approach might be to read S3 objects with their point and polygon geometry sets through a custom Java class based on the JTS Topology Suite.

Our preprocess has already optimized the two sets using a bounds intersect based on a tile structure so plain iteration of all points over all polygons in a single tile should be fairly efficient. If the supplied chunk is too large for the brute force approach, a more sophisticated JTS extension class could index by polygon bbox first and then process with the Intersect function. This would only help if the granularity of the message sets was large. Caching tile point sets on the computational instances could also save some S3 reads reducing the computation setup to a smaller polygon tile set read.
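
The brute-force step on a computation instance might look like the following sketch, with a plain ray-casting test standing in for the JTS Intersects call and a bbox prefilter mirroring the && operator used earlier (all names illustrative):

```python
def point_in_polygon(pt, ring):
    """Ray casting: count edge crossings of a horizontal ray from pt."""
    x, y = pt
    inside = False
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge spans the ray
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def tile_hits(points, polygons):
    """polygons: list of (bbox, ring); return (point, polygon index) pairs."""
    hits = []
    for p in points:
        for i, ((minx, miny, maxx, maxy), ring) in enumerate(polygons):
            if minx <= p[0] <= maxx and miny <= p[1] <= maxy:  # bbox prefilter
                if point_in_polygon(p, ring):
                    hits.append((p, i))
    return hits
```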

This means that there is a bit of experimental tuning involved. A too fine grained tile chews up time in the messaging and S3 reads, while a coarse grained tile takes more time in the Intersect computation.

Finally each computation instance stores its result set to an S3 result object consisting of a collection of point.id and any associated polygon.ids that intersect the point. Sending an SQS message to the aggregator alerts it to availability of result updates. At the other end is an aggregator, which takes the S3 result objects and pushes them into an association table of point.id, polygon.id, or pip table. The aggregator instance can be a duplicate of the original feeder instance with its complete PostGIS DB already populated with the static point table and the required relation table (initially empty).

If this AWS system can be built and run in reasonable time, an additional enhancement suggests itself. Assuming that risk polygons are being generated by other sources such as the National Hurricane Center, it would be nice to update the polygon table on an ongoing basis. Adding a polling class to check for new polygons and update our PostGIS table would allow the polygons to be updated in near real time. Each time a pass through the point set is complete it could be repeated automatically, reflecting any polygon changes. Continuous cycling through the complete tile set incrementally updates the full set of points.

At the other end, our aggregator instance would be continuously updating the point.id, polygon.id relation table one sub-tile at a time as the SQS result messages arrive. The decoupling afforded by SQS is a powerful incentive to use this asynchronous message communication. The effect is like a slippy map interface with subtiles continuously updating in the background, automatically registering risk polygon changes. Since risk polygons are time dependent, it would also be interesting to keep timestamped histories of the polygons, providing for historical progressions by adding a time filter to our tile polygon select. The number of EC2 computation instances determines the speed of these update cycles, up to the latency limit of SQS and S3 reads/writes.

Visualization of the results might be an interesting exercise in its own right. Continuous visualization could be attained by making use of the aggregator relation table to assign some value to each tile. For example, in pseudo query code:
foreach tile in tile table {
    SELECT AVG(polygon.attribute)
    FROM point, pip, polygon
    WHERE pip.pointid = point.id
      AND polygon.id = pip.polygonid
      AND point.tile = tilesignature;
}

Treating each tile as a pixel lets the aggregator create polygon.value heat maps, assigning a color and/or alpha transparency to each png image pixel. Unfortunately this would generally be a coarse image, but it could be a useful kml GroundOverlay at wide zooms in a Google Map Control. These images can be readily changed by substituting different polygon.attribute values.
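The tile-as-pixel step just needs a mapping from the averaged attribute to a pixel value. A hypothetical sketch, using a red channel with an alpha ramp so low-risk tiles fade out; the color scheme is my own invention, not anything from the system above:

```java
public class HeatPixel {
    // Map a normalized tile value (0..1) to a packed ARGB pixel.
    // Alpha ramps with the value so low-risk tiles are transparent in the overlay.
    public static int argb(double v) {
        v = Math.max(0.0, Math.min(1.0, v));    // clamp out-of-range averages
        int alpha = (int) Math.round(v * 255);
        int red = 255;
        return (alpha << 24) | (red << 16);     // green and blue stay 0
    }
}
```

Writing one such pixel per tile into a BufferedImage and saving it as png would produce the GroundOverlay image directly.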

If Google Earth is the target visualization client, running Geoserver on the aggregator instance would allow a kml reflector to kick in at lower zoom levels to show point level detail as <NetworkLink> overlays based on polygon.attributes associated with each point. GE is a nice client since it will handle refreshing the point collection after each zoom or pan, as long as the view is within the assigned Level of Detail. The Geoserver kml reflector essentially provides all this almost for free once the point featureType layer is added. Multiple risk polygon layers can also be added through Geoserver for client views with minimal effort.


This is pure speculation on my part, since I have not had time or money to really play with message driven AWS clusters. However, as an architecture it has merit. Adjustments to the tile granularity essentially tune the performance, up to the limit of SQS latency. Using cheap standard CPU instances would work for the computational array. However, there will be additional compute loads on the feeder and aggregator, especially if the aggregator does double duty as a web service. Fortunately AWS provides scaling classes of virtual hardware as well. Basing the feeder instance on a medium CPU adds little to system cost:
$0.20 – High-CPU Medium Instance
1.7 GB of memory, 5 EC2 Compute Units (2 virtual cores with 2.5 EC2 Compute Units each), 350 GB of instance storage, 32-bit platform
(note: a High-CPU Extra Large instance could provide enough memory for an in-memory point table – PostgreSQL memory tuning)

The aggregator end might benefit from a high-CPU instance:
$0.80 – High-CPU Extra Large Instance
7 GB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform

A minimal system might be one $0.20 feeder => five $0.10 computation instances => one $0.80 aggregator = $1.50/hr, plus whatever data transfer costs accrue. Keeping the system in a single zone to reduce SQS latency would be a good idea, and in-zone data transfer is free.

Note that 5 computation instances are unlikely to provide sufficient performance. However, a nice feature of AWS cloud space is the adjustability of the configuration. If 5 are insufficient, add more. If the point set is reduced, drop off some units. If the polygon set increases substantially, divide off the polygon tiling to its own high-CPU instance. If your service suddenly gets slashdotted, perhaps a load balanced webservice farm could be arranged. The commitment is just what you need, and can be adjusted within a few minutes or hours, not days or weeks.


Again, this is a speculative discussion to keep my notes available for future reference. I believe that this type of parallelism would work for the class of spatial analytics problems discussed. It is particularly appealing for web visualization with continuous updating. The cost is not especially high, but then unknown pitfalls may await. Estimating four weeks of development and $1.50/hr EC2 costs leads to $7000 – $8000 for proof of concept development, with an ongoing operational cost of about $1500/mo for a small array of 10 computational units. The class of problems involving very large point sets against polygons should be fairly common in insurance risk analysis, emergency management, and Telco/Utility customer base systems. Cloud arrays can never match the 500x performance improvement of Netezza, but cost should place them at the low end of the cost/performance spectrum. Maybe 5min cycles rather than 5sec are good enough. Look out Larry!

Chrome Problems

Fig 1 IE with Silverlight ve:Map component

With the introduction of Chrome, Google has thrown down the gauntlet to challenge IE and Firefox. Out of curiosity I thought it would be interesting to download the current Chrome Beta and see what it could do with some of the interfaces I’ve worked on. Someone had recently quipped, “isn’t all of Google Beta?” I guess the same could be said of Amazon AWS, but then again, in the “apples to apples” vein, I decided to compare IE8 Beta and Chrome Beta. The above screen shot shows an example of the new Silverlight ve:Map component in an ASP Ajax site running on IIS 6. The browser is IE8 beta on Vista, and, surprise (not), it all works as expected.

Fig 2 Chrome with Silverlight ve:Map component

Also not surprisingly, the same Silverlight ve:Map component in an ASP Ajax site fares poorly in Chrome. In fact the component does not appear at all, while curiously the asp:MenuItems act oddly: instead of the expected drop down I get a refresh to a new horizontal row.

Fig 3 IE with Google Map component

Moving on to a Google Map Component embedded in the same ASP page: IE8 beta displays the map component, including the newer G_SATELLITE_3D_MAP map type. The ASP drop down menu and tooltips all work.

Fig 4 Chrome with Google Map component

Since this is a Google Map Component I would be disappointed if it did not work in Chrome, and it does. Except, I noticed, the G_SATELLITE_3D_MAP control type is missing. I guess Chrome Beta has not caught up with Google Map Beta. Again the ASP Menu is not functional.

Fig 5 IE Google Map Control with Earth Mode – G_SATELLITE_3D_MAP

Back to IE to test the 3D Earth mode of my Google Map Component. As seen above, it all works fine.

Fig 6 IE Silverlight Deep Earth

Now to check the new Silverlight DeepEarth component in IE. DeepEarth is a nice little MultiScaleTileSource library for smoothly spinning around the VE tile engines. It works as amazingly smoothly as ever.

Fig 7 Google Chrome Deep Earth

However, in Chrome, no luck, just a big white area. I suppose that Silverlight was not a high priority with Chrome.

Fig 8 IE SVG hurricane West Atlantic weather clip

Switching to some older SVG interfaces, I took a look at the Hurricane clips in the West Atlantic. It looks pretty good: Hanna is deteriorating to a storm, and Ike is still out east of the Bahamas.

Fig 9 Chrome SVG hurricane West Atlantic weather clip

On Chrome it is not so nice. The static menu side of the svg frames shows up, but the image and animation stack is just gray. Clicking on menu items verifies that events are not working. Of course this SVG is functional only in the Adobe SVG viewer, but evidently Chrome has some SVG problems.

Fig 10 IE ASP .NET 3.5

Moving back to IE8, I browsed through a recent ASP .NET 3.5 site I built for an energy monitoring service. This is a fairly complete demonstration of ListView and LINQ to SQL, and it of course works in IE8 beta.

Fig 11 Chrome ASP .NET 3.5

Surprisingly, Chrome does a great job on the ASP .NET 3.5 site. Almost all the features work as expected, with the exception of the same old Menu problems.

Fig 12 IE SVG OWS interface

Finally I went back down memory lane to an older OWS interface built with SVG, of the Adobe Viewer variety. There are some glitches in IE8 beta. Although I can still see WMS and WFS layers and zoom around a bit, some annoying errors do pop up here and there. Adobe SVG viewer has actually been orphaned ever since Adobe picked up Macromedia and Flash, so it will doubtless recede into the distant past as the new browser generations arrive. Unfortunately, there is little Microsoft activity in SVG, in spite of competition from the other browsers: Safari, Firefox, and Opera. It will likely remain a 2nd class citizen in IE terms, as Silverlight’s intent is to replace Flash, which itself is a proprietary competitor to SVG.

Fig 13 Chrome SVG OWS interface

Chrome and Adobe SVG are not great friends. Rumor has it that Chrome intends to fully support SVG, so if I ever get around to it, I could rewrite these interfaces for Firefox, Opera, and Chrome 2.0.

Chrome is beta and brand new. Although it has a lot of nice features and a quick, clean tabbed interface, I don’t see anything but problems for map interfaces. Hopefully the Google Map problems will be ironed out shortly. There is even hope for SVG at some later date. I imagine even Silverlight will be supported, grudgingly, since I doubt that Google has the clout to dictate usage on the internet.

TatukGIS – Generic ESRI with a Bit Extra

Fig1 basic TatukGIS Internet Server view element and legend/layer element

TatukGIS is a commercial product that is basically a generic brand for building GIS interfaces, including web interfaces. It is developed in Gdynia, Poland.

The core product is a Developer Kernel, DK, which provides basic building blocks for GIS applications in a variety of Microsoft flavors including:

  • DK-ActiveX – An ActiveX® (OCX) control supporting Visual Basic, VB.NET, C#, Visual C++
  • DK.NET – A manageable .NET WinForms component supporting C# and VB.NET
  • DK-CF – A manageable .NET Compact Framework 2.0/3.5 component – Pocket PC 2002 and 2003, Windows Mobile 5 and 6, Windows CE.NET 4.2, Windows CE 5 and 6
  • DK-VCL – A native Borland®/CodeGear® Delphi™/C++ Builder™ component

These core components have been leveraged for some additional products that make life a good deal easier for web and PDA developers. A TatukGIS Internet Server single-server deployment license starts at $590 for the Lite Edition, or $2000 per deployment server for the full edition in a web environment. I guess this is a good deal compared to ESRI/Oracle licenses, but not especially appealing to the open source integrators among us. There is support for the whole gamut of CAD, GIS, and raster formats, as well as project file support for ESRI and MapInfo. This is a very complete toolkit.

The TatukGIS Internet Server license supports database access to all the usual DBs: "MSSQL Server, MySQL, Interbase, DB2, Oracle, Firebird, Advantage, PostgreSQL…" However, support for spatial formats is currently only available for Oracle Spatial/Locator and ArcSDE. Support for the PostGIS and MS SQL Server spatial extensions is slated for release with TatukGIS IS 9.0.

I wanted to experiment a bit with the Internet Server, so I downloaded a trial version (free).

Documentation was somewhat sparse, but this was a trial download. I found the most help looking in the sample subdirectories. Unfortunately these were all VB, and it took a bit of experimental playing to translate into C#. The DK trial download did include a pdf document that was also somewhat helpful. Perhaps a real development license and/or server deployment license would provide better C# .NET documentation. I gather the historical precedence of VB is still evident in the current doc files.

The ESRI influence is obvious. From layer control to project serialization, it seems to follow the ESRI look and feel. This can be a plus or a minus. Although very familiar to a large audience of users, I am afraid the ESRI influence is not aesthetically pleasing or very smooth. I was able to improve over the typically clunky ArcIMS type zoom and wait interface by switching to the included Flash wrapper (simply a matter of setting Flash="true").

The ubiquitous flash plugin lets the user experience a somewhat slippy map interface familiar to users of Virtual Earth and Google Maps. We are still not talking a DeepZoom or Google Earth type interface, but a very functional viewer for a private data source. I was very pleased to find how easy it was to build the required functionality including vector and .sid overlays with layer/legend manipulation.

This is a very simple to use toolkit. If you have had any experience with Google Map API or Virtual Earth it is quite similar. Once a view element is added to your aspx the basic map interface is added server side:

<ttkGIS:XGIS_ViewerIS id="GIS" onclick="GIS_Click" runat="server" OnPaint="GIS_Paint" Width="800px" Height="600px" OnLoad="GIS_Load" BorderColor="Black" BorderWidth="1px" ImageType="PNG24" Flash="True"></ttkGIS:XGIS_ViewerIS>

The balance of the functionality is a matter of adding event code to the XGIS_ViewerIS element. For example :

    protected void GIS_Load(object sender, EventArgs e)
    {
        GIS.Open(Page.MapPath("data/lasanimas1.ttkgp"));
        GIS.SetParameters("btnFullExtent.Pos", "(10,10)");
        GIS.SetParameters("btnZoom.Pos", "(40,10)");
        GIS.SetParameters("btnZoomEx.Pos", "(70,10)");
        GIS.SetParameters("btnDrag.Pos", "(100,10)");
        GIS.SetParameters("btnSelect.Pos", "(130,10)");

        addresslayer = (XGIS_LayerVector)GIS.API.Get("addpoints19");
    }

The ttkgp project support allows addition of a full legend/layer menu with a single element, an amazing time saver:

<ttkGIS:XGIS_LegendIS id="Legend" runat="server" Width="150px" Height="600px" ImageType="PNG24" BackColor="LightYellow" OnLoad="Legend_Load" AllowMove="True" BorderWidth="1px"></ttkGIS:XGIS_LegendIS>

The result is a simple functional project viewer available over the internet, complete with zoom, pan, and layer manipulation. The real power of the TatukGIS is in the multitude of functions that can be used to extend these basics. I added a simple address finder and PDF print function, but there are numerous functions for routing, buffering, geocoding, projection, geometry relations etc. I was barely able to scratch the surface with my experiments.

Fig2 – TatukGIS Internet Server browser view with .sid imagery and vector overlays

The Bit Extra:
As a bit of a plus, the resulting aspx is quite responsive. Because the library is not built with the MS MFC, it has a performance advantage over the ESRI products it replaces. The TatukGIS website claims the following:

"DK runs some operations run even 5 – 50 times faster than the leading GIS development products"

I wasn’t able to verify this, but I was pleased with the responsiveness of the interface, especially in light of the ease of development. I believe clients with proprietary data layers who need a quick website would be very willing to license the TatukGIS Internet Server. Even though an open source stack such as PostGIS, Geoserver, and OpenLayers could do many of the same things, the additional cost of development would pretty much offset the TatukGIS license cost.

The one very noticeable restriction is that development is a Windows-only affair. You will need an ASP IIS server to make use of TatukGIS for development and deployment. Of course clients can use any of the popular browsers from any of the common OS platforms. Cloud clusters in Amazon’s AWS will not support TatukGIS IS very easily, but now that GoGrid offers virtual Windows servers there are options.

Fig3 – TatukGIS Internet Server browser view with DRG imagery and vector overlays

Fig4 – TatukGIS Internet Server browser result from a find address function

Summary: TatukGIS Internet Server is a good toolkit for custom development, especially for clients with ESRI resources. The license is quite reasonable.

A quick look at GoGrid

Fig 1 a sample ASP .NET 3.5 website running on a GoGrid server instance

GoGrid is a cloud service similar to AWS (http://www.gogrid.com). Just like Amazon’s AWS EC2, the user starts a virtual server instance from a template and then uses the instance like a dedicated server. The cost is similar to AWS, starting at about $0.10 per hour for a minimal server. The main difference from a user perspective is the addition of Windows servers and an easy to use control panel. The GoGrid control panel provides point and click setup of server clusters, even with a hardware load balancer.

The main attraction for me is the availability of virtual Windows Servers. There are several Windows 2003 configuration templates as well as sets of RedHat or CentOS Linux templates:
· Windows 2003 Server (32 bit)/ IIS
· Windows 2003 Server (32 bit)/ IIS/ASP.NET/SQL Server 2005 Express Edition
· Windows 2003 Server (32 bit)/ SQL Server 2005 Express Edition
· Windows 2003 Server (32 bit)/ SQL Server 2005 Workgroup Edition
· Windows 2003 Server (32 bit)/ SQL Server 2005 Standard Edition

The number of templates is more limited than EC2's, and I did not see a way to create custom templates. However, this limitation is offset by ease of management. For my experiment I chose the Windows 2003 Server (32 bit)/IIS/ASP.NET/SQL Server 2005 Express Edition template. This offered the basics I needed to serve a temporary ASP web application.

After signing up, I entered my GoGrid control panel. Here I can add a service by selecting from the option list.

Fig 2- GoGrid Control Panel

Filling out a form with the basic RAM, OS, and image options lets me add a WebApp server to my panel. I could additionally add several WebApp servers and configure a LoadBalancer, along with a backend database server, by similarly filling out Control Panel forms. This appears to take the AWS EC2 service a step further by letting typical scaling workflows be part of the front end GUI. Although scaling in this manner can be done in AWS, it requires installation of a software load balancer on one of the EC2 instances and a manual setup process.

Fig 3 – example of a GoGrid WebAPP configuration form

Once my experimental server came on line, I was able to RemoteDesktop into the server and begin configuring my WebApp. I first installed the Microsoft .NET 3.5 framework so I could make use of some of its new features. I then copied up a sample web application showing the use of a GoogleMap Earth mode control in a simple ASP interface. This is a display interface connected to a different database server for displaying GMTI results out of a PostGIS table.

Since I did not want to point a domain at this experimental server, I simply assigned the GoGrid IP to my IIS website. I ran into a slight problem here because the sample webapp was created using the .NET 3.5 System.Web.Extensions. The webapp was not able to recognize the extension configurations in my WebConfig file. I tried copying the System.Web.Extensions dlls into my webapp bin directory, but I was still getting errors. I then downloaded the ASP Ajax control and installed it on the GoGrid server, but still was unable to get the website to display. Finally I went back to Visual Studio and remade the webapp using the ASP.NET Web App template without the extensions. I was then able to upload to my GoGrid server and configure IIS to see my website as the default http service.

There was still one more problem. I could see the website from the local GoGrid system but not from outside. After contacting GoGrid support I was quickly in operation, with a pointer to the Windows Firewall which GoGrid Support kindly fixed for me. The problem was that the Windows 2003 template I chose does not open port 80 by default. I needed to use the Firewall manager to open port 80 for the http service. For those wanting to use ftp, the same would be required for port 21.

I now had my experimental system up and running. I had chosen a 1GB memory server, so my actual cost is $0.19/hour, which is a little less for your money than the AWS EC2 small instance:

$0.10 – Small Instance (Default)
1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of instance storage, 32-bit platform

But again, running ASP .NET 3.5 is much more complex on EC2, requiring a Mono installation on a Linux base. I have not yet tried that combination and somehow doubt that it would work with a complex ASP .NET 3.5 website, especially with Ajax controls.

The GoogleMap Control with the Earth mode was also interesting. I had not yet embedded this into an ASP website. It proved to be fairly simple. I just needed to add a <asp:ScriptManager ID="mapscriptmanager" runat="server"/> to my site Master page, and then the internal page javascript used to create the GoogleMap Control worked as normal.

I had some doubts about accessing the GMTI points from the webapp, since often there are restrictions on cross domain xmlhttpRequests. There was no problem. My GMTI to KML servlet produces kml mime type "application/vnd.google-earth.kml+xml", which is picked up in the client javascript using the Google API:
geoXml = new GGeoXml(url);

Evidently cross domain restrictions did not apply in this case, which made me happy, since I didn’t have to write a proxy servlet just to access the gmti points on a different server.

In summary, GoGrid is a great cloud service which finally opens the cloud to Microsoft shops where Linux is not an option. The GUI control panel is easy to use, and configuring a fully scalable, load balanced cluster can be done right from the control panel. GoGrid fills a big hole in the cloud computing world.

Google Maps with Google Earth Plugin

Fig 1 – Google Map with Google Earth plugin

Google announced a new GE plugin for use inside Google Maps.

This is an interesting development, since it allows Google Earth to be used inside a browser. Google’s Map object can be programmed using their javascript api for user interaction control, which was not available inside standalone Google Earth. The api documents have plenty of examples, but the very simplest way to use the Google Earth Plugin is to load their plugin api javascript like this:

	google.load("earth", "1");

Then add the G_SATELLITE_3D_MAP map type to the Map control in the initialization code.

function initialize() {
	if (GBrowserIsCompatible()) {
		map = new GMap2(document.getElementById("map_canvas"));
		map.setCenter(new GLatLng(39.43551, -104.91207), 9);
		map.addControl(new GMapTypeControl());
		map.addControl(new GLargeMapControl());
		map.addMapType(G_SATELLITE_3D_MAP);
	}
}

This adds a fourth map type, “Earth”, to the control shown over the Google map base.

Fig 2 – Google Map with Google Earth plugin showing map overlays

Now a user can switch to a GE type viewing frame with full 3D camera action. Unfortunately the MapType control is hidden, so returning back to a Map view requires an additional button and a piece of javascript code:

function resetMapType(evt){
	map.setMapType(G_NORMAL_MAP);
}

In order to show how useful this might be I added a button to read kml from a url:

function LoadKML(){
	var geoXml = new GGeoXml(document.getElementById('txtKML').value);
	document.getElementById("status").innerHTML = "Loading...";
	GEvent.addListener(geoXml, 'load', function() {
		if (geoXml.loadedCorrectly()) {
			document.getElementById("status").innerHTML = "";
		}
	});
	map.addOverlay(geoXml);
}

Now I simply coded up a servlet to proxy PostGIS datasources into kml for me, and I can add map overlays to my heart's content. If I want to be a bit more SOA, it would be simple to configure a Geoserver FeatureType and let Geoserver produce the kml for me.

My LoadKML script lets me copy any url that produces kml into a text box, which then loads the results into the Google Map object. With the GE plugin enabled I can view my kml inside a GE viewer, inside my browser. By stacking these overlays onto the map I can see multiple layers. The javascript api gives me pretty complete control of what goes on. However, there are still some rough edges. In addition to overwriting the map control that would allow the user to click back to a map, satellite, or hybrid view, there are some very odd things going on with the kml description balloons. Since I’m using IE8 beta I can’t really vouch for this being a universal oddity or some glitch in the IE8 situation. After all, IE8 beta on Vista really does strange things to the Google Map website, making it more or less unusable.

Here are some items I ran across in the little bit of experimentation I’ve done:

  • plugin loading is slow and doesn’t appear to be cached
  • returning from Earth view requires javascript code
  • click descriptions are only available on point placemarks
  • the balloon descriptions show up only sometimes in an earth view
  • There appears to be a limit on the number of kml features which can be added. Over 5000 seems to choke

The rendering in the new Google Earth plugin view is quite useful and provides at least a subset of kml functionality. This evolution distinctly shows the advantage of a competitive market. The Microsoft-Google competition significantly speeds the evolution of browser map technology. Microsoft is approaching this same type of merged browser capability as well, with their pre-announcement of Virtual Earth elements inside Silverlight. 3D buildings, Street View, Deep Zoom, Photosynth, Panoramio… are all technologies racing into the browser. Virtual parallel worlds are fascinating, especially when they overlap the real world. Kml feeds, map games, and live cameras coupled with GPS streams seem to be transforming map paradigms into more or less virtual life worlds.

GIS savvy developers already have a wealth of technology to expose into user applications. Many potential users, though, are still quite unaware of the possibilities. The ramp up of these new capabilities in the enterprise should make business tools very powerful, if not downright entertaining!

More Google Earth – Time Animation

Fig 1 – Google Earth Time Animation tool visible in the upper part of the view frame

I was out of town for a trip back to Washington DC the last couple of weeks, but now that I’m back I wanted to play with some more KML features in Google Earth 4.3. One of the more interesting elements offered by KML inside Google Earth is the use of timestamps for animated sequences.
KML offers a couple of time elements:

  • <TimeStamp> – a single <when> dateTime attached to a feature
  • <TimeSpan> – a <begin>/<end> dateTime range

By attaching a time stamp element to kml-rendered elements it is possible to make use of the built-in Google Earth time animation tool. I have a table of GMTI events for a few simulated missions that is ideal for this type of viewing. The time stamp deltas are in the millisecond range, and the events work best as <TimeStamp> elements.

KML time formats are defined as dateTime (YYYY-MM-DDThh:mm:ssZ), which translates to a Java formatter:

SimpleDateFormat timeout = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");

In KML the T char delimits date from time, while the Z indicates UTC. In my case using simulated mission data I am more interested in delta time effects than actual time.
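One caveat with the SimpleDateFormat pattern above: the 'Z' there is a quoted literal, so the formatter must be pinned to UTC explicitly, or local-time values get emitted with a misleading Z suffix. A small sketch (class and method names are mine):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class KmlTime {
    // Format an epoch-millisecond value as a KML dateTime with millisecond precision
    public static String format(long epochMillis) {
        SimpleDateFormat timeout = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
        // 'Z' in the pattern is a literal char, so force UTC to keep it honest
        timeout.setTimeZone(TimeZone.getTimeZone("UTC"));
        return timeout.format(new Date(epochMillis));
    }
}
```

For simulated mission data the absolute epoch hardly matters, but keeping the output in UTC guarantees the deltas line up consistently in the GE time tool.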

By adding the appropriately formatted <TimeStamp> to each <Placemark> from the GMTI data set I can create a KML data stream with the time animation tool enabled.

<Placemark id="gmti_target.80916">
    <TimeStamp><when>YYYY-MM-DDThh:mm:ss.sssZ</when></TimeStamp>
    <description><![CDATA[<table border='1'>
        <th colspan='8' scope='col'>gmti_target</th>
        <td>Unknown, Simulated Target</td>
        <td>SRID=4269;POINT(-111.89313294366 35.1604509260505 2180)</td>
        </table>]]></description>
</Placemark>

For this experiment I utilized a mission with a relatively small number of gmti targets, in the 6000 point range. The load time is still a bit slow even at that size. In order to boost performance I switched from a simple kml stream to a zipped kmz stream for the servlet response.

ZipOutputStream out = new ZipOutputStream(response.getOutputStream());

This helps with bandwidth latency by reducing the data stream from 5Mb to 0.25Mb; however, the biggest latency on my system is the Google Earth load rendering rather than the download. I’m using a medium 5Mb DSL connection here on an Intel Core2 Quad CPU Q6600 2.39GHz. The download of the kmz is about 5sec, but the Google Earth ingest and rendering, complete with blanked viewport, is around 50sec.
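The kmz switch is just the kml document zipped, with Google Earth expecting the archive entry to be named doc.kml. A minimal in-memory sketch of the servlet-side packaging; the class and method names are mine:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class KmzWriter {
    // Wrap a kml string as kmz bytes; GE looks for an entry named "doc.kml"
    public static byte[] toKmz(String kml) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            ZipOutputStream out = new ZipOutputStream(buf);
            out.putNextEntry(new ZipEntry("doc.kml"));
            out.write(kml.getBytes(StandardCharsets.UTF_8));
            out.closeEntry();
            out.close();
            return buf.toByteArray();
        } catch (IOException e) {
            // cannot occur writing to an in-memory buffer
            throw new RuntimeException(e);
        }
    }
}
```

In the servlet the ZipOutputStream would wrap response.getOutputStream() directly, with the content type set to "application/vnd.google-earth.kmz".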

Once rendered, the point icons are available for time sequence animation. Google Earth furnishes a built in tool that automates this process. The time tool includes settings for adjusting speed and the time range view, an option to keep or discard points from the beginning of the range, and a repeat, stop, or reverse selection for the end-of-range event. The animation is helpful for visual analysis of target association. The tool provides step and slider views as well as animation.

Fig 2 – Google Earth time animation

The built in tools provided by Google Earth are quite helpful. Virtual Earth or WPF tools can be used for time sequence as well but require writing the necessary client javascript or C# to step through the set of elements in sequence. Having the tool already available makes simple view animation much easier to create. The potential for customization and interaction is a bit limited, but many applications are well served by the prebuilt viewing tools provided by default in GE. The large terrain and imagery infrastructure behind Google Earth is a real asset to viewing. Additional high resolution imagery can be added as GroundOverlay elements for enhancing specific gmti view areas.

For classification and analysis the simple view capability is somewhat limited. What is needed is a mode of selection to group and classify point data to create track associations. This requires more interactive events than afforded by GE. WPF is more difficult to work with, but the ability to add a variety of selection tools and change classification interactively may make the effort worthwhile from a GMTI analyst’s point of view. Any background imagery, terrain, or mapping layers need to be added to the WPF UI so there would be quite a bit more effort involved.

Fig 3 – Google Earth time animation viewed from above

GMTI is only one of a number of time sequence vehicle tracking data resources. The increasing use of GPS and fleet tracking software makes these types of data sets fairly common. The use of timestamp animation is a nice addition to the viewing capability of historical tracks. Of course live tracks generally retain a history tail as well, so live tracking databases could also make good use of this timestamp element. NetworkLinkControl refresh events can keep the track synchronized with the live data feeds.
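On the client side, keeping a live track synchronized is just a <NetworkLink> whose Link refreshes on an interval; the server can then use <NetworkLinkControl> in its response to adjust behavior. A minimal hedged example, where the href and the 30 second interval are placeholders:

```
<NetworkLink>
  <name>Live GMTI tracks</name>
  <Link>
    <!-- hypothetical servlet url producing kml -->
    <href>http://example.com/gmti/kml</href>
    <refreshMode>onInterval</refreshMode>
    <!-- refresh every 30 seconds -->
    <refreshInterval>30</refreshInterval>
  </Link>
</NetworkLink>
```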

The lack of interactive UI capability in Google Earth limits its use for operator classification and analysis. However, GE is just one viewing possibility. A UI system using a FOSS GIS stack such as PostGIS, Java, and Geoserver can be accessed in a number of ways through the browser. For example one browser tool could view the data set through a WPF UI, which allows operator reclassification and filtering using a variety of selection tools, while simultaneously a GE viewer with a NetworkLinkControl refreshed from the serverside backing datastore can be open from the same client. One aspect of the power of Browser based UIs is the ability to access multiple heterogeneous views simultaneously.

Fig 4 – Multiple browser views of GMTI data source – WPF XBAP UI and GE time animation

Google Earth 4.3

Fig 1 – Google Earth Terrain with a USGS DRG from Terraservice as a GroundOverlay

I have to admit that I’ve been a bit leery of Google Earth. It’s not just the lurking licensing issues, or the proprietary application install, or even the lack of event listeners or an accessible API. If I’m honest I have to admit I’m a bit jealous of the large infrastructure, the huge data repository, and the powerfully fun user interface. So this weekend I faced my fears and downloaded the current version, Google Earth 4.3.

I’m not really interested in the existing Google Earth stuff. There are lots of default layers available, but I want to see how I can make use of the cool interface and somehow adapt it as a control for my own data. The easiest route to customization in GE is KML. KML was developed for XML interchange of Keyhole views before Google bought Keyhole and turned it into Google Earth. Keeping the KML acronym also avoids a namespace collision: Keyhole Markup Language does not clash with OGC’s GML the way a “Google Markup Language” would.

KML 2.2 has evolved well beyond a simple interchange language, with features that can be adapted to customized GE applications. After looking over the KML 2.2 reference I started with a simple static KML file. I have a USGS topo index table in PostGIS that I wished to view over the Google terrain. PostGIS includes an AsKML function for converting geometry to KML. Using Java I can write a JDBC query and embed the resulting geometry AsKML into a KML document. However, adding a WMS layer between PostGIS and GE seemed like a better approach. Geoserver gives you a nice SOA approach with built-in KML export functionality as well as custom styling through SLD.
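As a sketch of the PostGIS side, assuming a usgstile table with hypothetical column names, the query can be as simple as:

```sql
-- quadname and the_geom are assumed column names for the topo index table
SELECT quadname, AsKML(the_geom) AS kml
FROM usgstile
WHERE quadname LIKE '38106%';
```

Each row comes back with its geometry already serialized as a KML fragment, ready to embed in a Placemark.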

It is easy to set up Geoserver as a WMS layer over the PostGIS database containing my USGS topo index table. Geoserver also includes a KML export format in its WMS framework. So I simply add my table to the Geoserver Data FeatureType list. Now I can grab a KML document with a WMS query like this:
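A GetMap request using Geoserver’s KML output format looks something like this (hostname, workspace, and bbox are placeholder values):

```
http://hostname/geoserver/wms?service=WMS&version=1.1.1&request=GetMap
    &layers=topp:usgstile&styles=&srs=EPSG:4326
    &bbox=-109.0,37.0,-102.0,41.0&width=1024&height=585
    &format=application/vnd.google-earth.kml%2Bxml
```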


This is simple, and the resulting KML provides a basic set of topo polygons complete with a set of clickable attribute tables at the polygon center points. The result is not especially beautiful or useful, but it is interesting to tilt the 3D view of the GE terrain and see that the polygons are draped onto the surface. It is also handy to have access to the topo attributes with the icon click. However, in order to be useful I need to make my KML a bit more interactive.

The next iteration was to create a simple folder KML with a <NetworkLink>:

<kml xmlns="http://earth.google.com/kml/2.1">
  <Folder>
    <name>USGS Topo Map Index</name>
    <description><![CDATA[<h3>Map Tiles</h3>
      <p><font color="blue">index tile of 1:24000 scale <b>USGS topographic maps</b>
      1/8 degree x 1/8 degree coverage</font>
      <a href='http://topomaps.usgs.gov/'>more information>></a></p>]]></description>
    <NetworkLink>
      <name>WMS USGS quad tiles</name>
      <Link>
        <href>http://hostname/GetWMS</href>
        <viewRefreshMode>onStop</viewRefreshMode>
      </Link>
    </NetworkLink>
  </Folder>
</kml>

In this case the folder KML has a network link to a Java servlet called GetWMS. The servlet handles building the GetMap WMS query with customized SLD style loading, so that now I can change my style as needed by editing a .sld file. The servlet then opens my usgstile query and feeds the resulting KML back to the KML folder’s <NetworkLink>. Since I have set the <viewRefreshMode> to onStop, each time I pan around the GE view I will generate a new call to the GetWMS servlet, which builds a new GetMap call to Geoserver based on the current bbox view parameters.
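A minimal sketch of the kind of URL assembly the GetWMS servlet performs, assuming illustrative host and layer names (the real servlet also loads the SLD and proxies the response back to GE):

```java
// Sketch: assemble a Geoserver GetMap query from the BBOX Google Earth
// passes in via the NetworkLink. Host and layer names are placeholders.
public class GetMapUrl {
    static String build(String host, String layer, String bbox) {
        return host + "/geoserver/wms?service=WMS&version=1.1.1"
             + "&request=GetMap&layers=" + layer
             + "&styles=&srs=EPSG:4326&bbox=" + bbox
             + "&width=1024&height=512"
             + "&format=application/vnd.google-earth.kml%2Bxml";
    }

    public static void main(String[] args) {
        // bbox arrives from GE as minx,miny,maxx,maxy
        System.out.println(build("http://hostname", "usgstile",
                "-106.25,38.5,-105.5,39.0"));
    }
}
```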

This is more interesting and adds some convenient refresh capability, but views at a national or even state level are quickly overwhelmed by the amount of data being generated. The next iteration adds a <Region> element with a <Lod> subelement:
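A Region with a Lod child looks something like this (the bounding box and pixel thresholds are illustrative values, not the actual settings used here):

```xml
<Region>
  <LatLonAltBox>
    <north>41.0</north><south>37.0</south>
    <east>-102.0</east><west>-109.0</west>
  </LatLonAltBox>
  <Lod>
    <minLodPixels>512</minLodPixels>
    <maxLodPixels>-1</maxLodPixels>
  </Lod>
</Region>
```

GE only activates the enclosing NetworkLink once the region covers at least minLodPixels on screen, which is what suppresses the flood of refreshes at national zoom levels.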


Now my refreshes only occur when zoomed in to a reasonable level of detail. The kml provides a layer of USGS topo tiles and associated attributes from my PostGIS table by taking advantage of the built in KML export feature of Geoserver’s WMS.

Fig 2 – Google Earth with <Region><Lod> showing USGS topo tiles

Unfortunately Google Earth 4.3 seems to disable the onStop refresh after an initial load of my usgstile subset once a Region element with Lod is added. In Google Earth 4.2 I didn’t run into this problem. The workaround appears to be a manual refresh of the menu subtree “WMS USGS quad tiles”. Once this refresh is done, pan and zoom refresh in the normally expected ‘onStop’ fashion. GE 4.3 is beta, and perhaps this behavior/”feature” will be changed for the final release.

Now I have a reasonable USGS tile query overlay. However, why just show the tiles? It is more useful to show the actual topo map. Fortunately the “web map of the future” from four or five years ago is still around: terraservice.net. TerraService is a Microsoft research project for serving large sets of imagery into the internet cloud. It was a reasonably successful precursor to what we see now as Google Maps and Virtual Earth. The useful thing for me is that one of the imagery layers this service provides as a WMS is DRG, the Digital Raster Graphic. DRG is a seamless WMS of the USGS topo map scans in a pyramid using 1:250,000, 1:100,000, and 1:24000 scale scanned paper topos. Anyone doing engineering, hiking, etc. before 2000 is probably still familiar with the paper topo series which, once upon a time, was the gold standard map interface. Since then the USGS has fallen on harder times, but before falling into utter obscurity they did manage to swallow enough new tech to produce the DRG map scans and let Microsoft load them into TerraService.net.

This means that the following URL will give back a nice JPEG 1:24000 topo for Mt Champion in Colorado:
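The request takes standard WMS GetMap parameters; something along these lines, where the endpoint path and the 1/8 degree bbox around Mt Champion are reconstructed as illustrative values:

```
http://terraservice.net/ogcmap.ashx?version=1.1.1&request=GetMap
    &layers=DRG&styles=&srs=EPSG:4326
    &bbox=-106.625,39.0,-106.5,39.125
    &width=2000&height=2000&format=image/jpeg
```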


Armed with this bit of WMS capability I can add some more functionality to my GoogleTest. First I added a new field to the usgstile database, which can be seen in the screen capture above. The new field simply holds a reference URL to another servlet I wrote, which builds the terraservice WMS query and pulls down the requested topo image. By setting the width and height parameters to 2000 I can get the full 1:24000 scale detail for a single 1/8 degree topo quad. My GetDRG servlet also conveniently builds the KML document to add the topo to my Google Earth menu:

<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://earth.google.com/kml/2.2">
  <Document>
    <name>38106G1  Buena Vista East</name>
    <description>DRG Overlay of USGS Topo</description>
    <GroundOverlay>
      <name>Large-scale overlay on terrain</name>
      <description><![CDATA[<h3>DRG Overlay</h3>
        <p><font color="blue">index tile of 1:24000 scale <b>USGS topographic maps</b>
        1/8 degree x 1/8 degree coverage</font>
        <a href='http://topomaps.usgs.gov/'>more information>></a></p>]]></description>
      <Icon><href>http://hostname/GetDRG</href></Icon>
      <LatLonBox>
        <north>38.875</north><south>38.75</south>
        <east>-106.0</east><west>-106.125</west>
      </LatLonBox>
    </GroundOverlay>
  </Document>
</kml>

The result is a ground-clamped USGS topo over the Google Earth terrain. It is now available for some flying around at surface level in the cool GE interface. The DRG opacity can also easily be adjusted to see the Google Earth imagery under the DRG contours. The match up is pretty good, though of course GE imagery will be more detailed at this stage. I also understand that Google is in the process of flying submeter LiDAR throughout areas of the USA, so we can expect another level in the terrain detail pyramid at some point as well.

This is all fun and hopefully useful for anyone needing to access USGS topo overlays. The technology pattern demonstrated, though, can be used for just about any data resource:
1) data in an imagery coverage or a PostGIS database
2) Geoserver WMS/WCS SOA layer
3) KML produced by some custom servlets

This can be extremely useful. The Google Earth interface, with all of its powerful capability, is now available as a nice viewer for OWS exposed layers, which can be proprietary or public. For example, here is a TIGER viewer using this same pattern:

Fig 3 – TIGER 2007 view of El Paso County Colorado

Google Earth viewer does have a gotcha license, but it may be worth the license cost for the capability exposed. In addition, if Microsoft ever catches up to the KML 2.2 spec, recently released as a final standard by the OGC, this same pattern will work conveniently in Virtual Earth, and consequently in a browser through the announced future Silverlight VE element. Lots of fun for view interfaces!

My next project is to figure out a way to use GE KML drawing capability to make a two way OWS using Geoserver transactional WFS, WFS-T.