SimpleDB and locations


SimpleDB GeoRSS
Fig 1 – SimpleDB GeoRSS locations.

GeoRSS from SimpleDB

Amazon's SimpleDB service is intriguing because it hints at the future of Cloud databases. Cloud databases need to be at least “tolerant of network partitions,” which leads inevitably to Werner Vogels' “eventually consistent” cloud data. See the previous blog post on Cloud Data. Cloud data is moving toward the scalability horizon discovered by Google. Last week's AWS announcement, Elastic MapReduce, is another indicator of movement down the road toward infinite scalability.

SimpleDB is an early adopter of data in the Cloud and is somewhat unlike the traditional RDBMS. My interest is how the SimpleDB data approach might be used in a GIS setting. Here is my experiment in a nutshell:

  1. Add GeoNames records to a SimpleDB domain
  2. See what might be done with Bounding Box queries
  3. Export queries as GeoRSS
  4. Try multiple attributes for geographic alternate names
  5. Show query results in a viewer

GeoNames.org is a Creative Commons Attribution licensed collection of GNS, GNIS, and other named point resources with over 8 million names. Since the SimpleDB beta allows a single domain to grow up to 10 GB, the experiment should fit comfortably even if I later want to extend it to all countries. A rough size estimate for a name item uses this formula:
Raw byte size (GB) of all item IDs + 45 bytes per item + Raw byte size (GB) of all attribute names + 45 bytes per attribute name + Raw byte size (GB) of all attribute-value pairs + 45 bytes per attribute-value pair.

I chose a subset of 7 attributes from the GeoNames source <name, alternatenames, latitude, longitude, feature class, feature code, country code>
leading to this rough estimate of storage space:

  • itemid: 7 + 45 = 52
  • attribute names: 73 + 7 * 45 = 388
  • attribute values: average 85 + 7 * 45 = 400
  • total = 840 bytes per item x 8,000,000 = 6.72 GB

For experimental purposes I used just the Colombia tab delimited names file. There are 57,714 records in the Colombia names file, CO.txt, which should come to less than 50 MB. I chose a Spanish language country to check that the UTF-8 encoding worked properly.
2593108||Loma El Águila||Loma El Aguila||||5.8011111||7.2833333||T||HLL||CO||||36||||||||0||||151||America/Bogota||2002-02-25

Here are some useful links I used to get started with SimpleDB:
  GettingStartedGuide
  Developer Guide

I ran across this very “simple” SimpleDB code: ‘Simple’ SimpleDB code in a single Java file/class (240 lines). Alan Williamson enhanced this Java code to add Map collections for the Put and Get Attribute commands. I had to make some minor changes to allow for multiple duplicate key entries in the HashMap collections. I wanted the capability of using multiple “name” attributes to accommodate alternate names, and eventually alternate translations of names, so Map<String, ArrayList> replaces Map<String, String>.

However, once I got into my experiment a bit I realized the limitations of url-encoded GET calls prevented loading the UTF-8 character set found in Colombia's Spanish language names. I ended up reverting to the Java version of Amazon's SimpleDB sample library. I ran into some problems since Amazon's sample library referenced jaxb-api.jar 2.1 and my local version of Tomcat used an older 2.0 version. I tried some of the suggestions for adding jaxb-api.jar to the /lib/endorsed subdirectory, but in the end simply upgrading to the latest version of Tomcat, 6.0.18, fixed my version problems.

One of the more severe limitations of SimpleDB is its single type, “String.” To be of any use in a GIS application I need to do bounding box queries on latitude and longitude. The “String” type limitation carries across to queries by limiting them to lexicographic ordering (see: SimpleDB numeric encoding for lexicographic ordering). In order to do a bounding box query under lexicographic ordering we have to do some work on the latitude and longitude. AmazonSimpleDBUtil includes some useful utilities for dealing with float numbers:
  String encodeRealNumberRange(float number, int maxDigitsLeft, int maxDigitsRight, int offsetValue)
  float decodeRealNumberRangeFloat(String value, int maxDigitsRight, int offsetValue)

Using maxDigitsLeft 3, maxDigitsRight 7, along with offset 90 for latitude and offset 180 for longitude, encodes the lat,lon pair (1.53952, -72.313633) as ("0915395200", "1076863670"). Basically this moves each float into positive integer space and zero fills left and right so the results sort correctly in lexicographic order.
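In code the encoding step looks something like this minimal sketch, using the float-based AmazonSimpleDBUtil signatures shown above (the shipping utility may expose slightly different overloads):

// encodes lat/lon the same way the GeoNames items are stored:
// offsets move coordinates into positive space (lat + 90, lon + 180),
// then zero fill to 3 digits left and 7 digits right of the decimal
public class BboxEncoder {
    static String encLat(float lat) {
        return AmazonSimpleDBUtil.encodeRealNumberRange(lat, 3, 7, 90);
    }
    static String encLon(float lon) {
        return AmazonSimpleDBUtil.encodeRealNumberRange(lon, 3, 7, 180);
    }

    public static void main(String[] args) {
        System.out.println(encLat(1.53952f));    // "0915395200"
        System.out.println(encLon(-72.313633f)); // "1076863670"
        // corners of Bbox(-76.310031, 3.889343, -76.285419, 3.914497)
        System.out.println(encLon(-76.310031f) + " " + encLon(-76.285419f));
        System.out.println(encLat(3.889343f) + " " + encLat(3.914497f));
    }
}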

Now we can use a query that will select by bounding box even with the limitation of a lexicographic ordering. For example Bbox(-76.310031, 3.889343, -76.285419, 3.914497) translates to this query:
Select * From GeoNames Where longitude > "1036899690" and longitude < "1037145810" and latitude > "0938893430" and latitude < "0939144970"

Once we can select by an area of interest, what is the best way to make our selection available? GeoRSS is a pretty simple XML feed that is consumed by a number of map viewers including Virtual Earth and OpenLayers. Simple format point entries look like this: <georss:point>45.256 -71.92</georss:point>. So we just need an endpoint that will query our GeoNames domain for a bbox and then use the result to create a GeoRSS feed.

<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"
xmlns:georss="http://www.georss.org/georss">
<title>GeoNames from SimpleDB</title>
<subtitle>Experiment with GeoNames in Amazon SimpleDB</subtitle>
<link href="http://www.cadmaps.com/"/>
<updated>2005-12-13T18:30:02Z</updated>
<author>
<name>Randy George</name>
<email>rkgeorge@cadmaps.com</email>
</author>
<entry>
<title>Resguardo Indígena Barranquillita</title>
<description><![CDATA[<a href="http://www.geonames.org/export/codes.html" target="_blank">feature class</a>:L <a
href="http://www.geonames.org/export/codes.html" target="_blank">feature code</a>
:RESV <a
href="http://ftp.ics.uci.edu/pub/websoft/wwwstat/country-codes.txt" target="_blank">country code</a>:CO ]]></description>
<georss:point>1.53952 -72.313633</georss:point>
</entry>
</feed>

There seems to be some confusion about the GeoRSS mime type – application/xml, text/xml, application/rss+xml, and even application/georss+xml all show up in a brief Google search. In the end I used a Virtual Earth API viewer to consume the GeoRSS results, and VE isn't exactly known for caring about header content anyway. I worked for a while trying to get the GeoRSS acceptable to OpenLayers.Layer.GeoRSS but never succeeded. It easily accepted static .xml endpoints, but I was never able to get a dynamic servlet endpoint to work. I probably didn't find the correct mime type.

The Amazon SimpleDB Java library makes this fairly easy. Here is a sample of a servlet using Amazon’s SelectSample.java approach.
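In rough outline the servlet looks something like the sketch below. This is only a sketch: the SelectRequest/SelectResult class and method names follow the Amazon SimpleDB sample library of the time and may differ slightly from the actual listing, and the AWS credentials, bbox parsing, and feed boilerplate are trimmed to the essentials.

public class GeoRssServlet extends javax.servlet.http.HttpServlet {
    // AmazonSimpleDB, SelectRequest, SelectResult, Item, Attribute come from the
    // Amazon SimpleDB sample library; exact names here are assumptions
    private static final String ACCESS_KEY = "...", SECRET_KEY = "...";
    private final AmazonSimpleDB sdb = new AmazonSimpleDBClient(ACCESS_KEY, SECRET_KEY);

    protected void doGet(javax.servlet.http.HttpServletRequest req,
                         javax.servlet.http.HttpServletResponse resp)
            throws java.io.IOException {
        // bbox=minLon,minLat,maxLon,maxLat
        String[] b = req.getParameter("bbox").split(",");
        String select = "Select * From GeoNames Where "
            + "longitude > \"" + AmazonSimpleDBUtil.encodeRealNumberRange(Float.parseFloat(b[0]), 3, 7, 180) + "\" and "
            + "longitude < \"" + AmazonSimpleDBUtil.encodeRealNumberRange(Float.parseFloat(b[2]), 3, 7, 180) + "\" and "
            + "latitude > \""  + AmazonSimpleDBUtil.encodeRealNumberRange(Float.parseFloat(b[1]), 3, 7, 90)  + "\" and "
            + "latitude < \""  + AmazonSimpleDBUtil.encodeRealNumberRange(Float.parseFloat(b[3]), 3, 7, 90)  + "\"";

        resp.setContentType("application/xml; charset=utf-8");
        java.io.PrintWriter out = resp.getWriter();
        out.println("<?xml version=\"1.0\" encoding=\"utf-8\"?>");
        out.println("<feed xmlns=\"http://www.w3.org/2005/Atom\" xmlns:georss=\"http://www.georss.org/georss\">");
        out.println("<title>GeoNames from SimpleDB</title>");

        String nextToken = null;
        do {
            // keep issuing Selects until SimpleDB stops handing back a NextToken
            SelectRequest sr = new SelectRequest().withSelectExpression(select);
            if (nextToken != null) sr.setNextToken(nextToken);
            SelectResult result = sdb.select(sr).getSelectResult();
            for (Item item : result.getItem()) {
                String name = "", lat = "", lon = "";
                for (Attribute a : item.getAttribute()) {
                    if ("name".equals(a.getName())) name = a.getValue();
                    if ("latitude".equals(a.getName())) lat = a.getValue();
                    if ("longitude".equals(a.getName())) lon = a.getValue();
                }
                // decode back from the lexicographic form before writing the georss:point
                out.println("<entry><title>" + name + "</title><georss:point>"
                    + AmazonSimpleDBUtil.decodeRealNumberRangeFloat(lat, 7, 90) + " "
                    + AmazonSimpleDBUtil.decodeRealNumberRangeFloat(lon, 7, 180)
                    + "</georss:point></entry>");
            }
            nextToken = result.getNextToken();
        } while (nextToken != null);
        out.println("</feed>");
    }
}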

Listing 1 – Example Servlet to query SimpleDB and return results as GeoRSS

This example servlet makes use of the nextToken to extend the query results past the 5 second limit. There is also a limit to the number of markers that can be added in the VE SDK. From the Amazon website:
“Since Amazon SimpleDB is designed for real-time applications and is optimized for those use cases, query execution time is limited to 5 seconds. However, when using the Select API, SimpleDB will return the partial result set accumulated at the 5 second mark together with a NextToken to restart precisely from the point previously reached, until the full result set has been returned. “

I wonder if the “5 seconds” indicated in the Amazon quote is correct, as none of my queries seemed to take that long even with multiple nextTokens.

You can try the results here: Sample SimpleDB query in VE

Summary

SimpleDB can be used for bounding box queries. The response times are reasonable even with the restriction of the String-only type and multiple nextToken SelectRequest calls. Of course this is only a 57,000 item domain; I'd be curious to see a plot of domain size vs. query response. Obviously at this stage SimpleDB will not be a replacement for a geospatial database like PostGIS, but this experiment does illustrate the ability to use SimpleDB for some elementary spatial queries. The approach could be extended to arbitrary geometry by storing a bounding box for lines or polygons stored as SimpleDB items. By adding additional attributes for llx, lly, urx, ury in lexicographically encoded form, arbitrary bbox selections could return all types of geometry intersecting the selection bbox.

Select * From GeoNames Where (llx > "1036899690" and llx < "1037145810" and lly > "0938893430" and lly < "0939144970")
or (urx > "1036899690" and urx < "1037145810" and ury > "0938893430" and ury < "0939144970")

Unfortunately, Amazon restricts attribute values to 1024 bytes, which complicates storing vertex arrays and, practically speaking, limits geometries to point data.

The only advantage offered by SimpleDB is extending the scalability horizon, which isn’t likely to be a problem with vector data.

A Look at Google Earth API

Google's Earth viewer plugin is available for use in browsers on Windows XP and Vista, but not yet on Linux or Mac. This latest addition to Google's family of APIs (yet another beta) lets developers make use of the nifty 3D navigation found in standalone Google Earth. However, as a plugin, the developer now has more control using JavaScript with html or other server side frameworks such as ASP.NET. http://code.google.com/apis/earth/
Here is the handy API reference:
http://code.google.com/apis/earth/documentation/reference/index.html

The common pattern looks familiar to Map API users: get a key code for the Google served js library and add some initialization javascript along with a map <div> as seen in this minimalist version.

<html>
  <head>
  <script src="http://www.google.com/jsapi?key=…keycode…"
  type="text/javascript">
  </script>

<script>

var ge = null;
  google.load("earth", "1");

function pageLoad() {
  google.earth.createInstance("map3d", initCallback, failureCallback);
  }

function initCallback(object) {
  ge = object;
  ge.getWindow().setVisibility(true);
  var options = ge.getOptions();
  options.setStatusBarVisibility(true);
  ge.getNavigationControl().setVisibility(ge.VISIBILITY_SHOW);
  }

function failureCallback(object) {
  alert('load failed');
  }
  </script>
  </head>
  <body onload="pageLoad()">
  <div id='map3d'></div>
  </body>
</html>
Listing 1 – minimalist Google Earth API in html

This is rather straightforward with the exception of a strange error when closing the browser:

Google explains:
“We’re working on a fix for that bug. You can temporarily disable script debugging in Internet Options –> Advanced Options to keep the messages from showing up.”

… but the niftiness is in the ability to build your own pages around the API. Instead of using Google Earth links in the standalone version, we can customize a service tailored to very specific needs. For example, this experimental Shp Viewer lets a user upload a shape file with a selected EPSG coordinate system and then view it over 3D Google Earth along with some additional useful layers like PLSS from USGS and DRG topos from TerraServer.


Fig 2 – a quarter quad shp file shown over Google Earth along with a USGS topo index layer

In the above example, clicking on a topo icon triggers a ‘click’ event listener, which then goes back to the server to fetch a DRG raster image from TerraServer via a java proxy servlet.

  .
  .
case 'usgstile':
  url = server + "/ShpView/usgstile.kml";
  usgstilenlink = CreateNetLink("TopoLayer", url, false);
  ge.getGlobe().getFeatures().appendChild(usgstilenlink);

google.earth.addEventListener(usgstilenlink, "click", function(event) {
  event.preventDefault();
  var topo = event.getTarget();
  var data = topo.getKml();
  var index = data.substr(data.indexOf("<td>index</td>") + 20, 7);
  var usgsurl = server + "/ShpView/servlet/GetDRG?name=" +
  topo.getName() + "&index=" + index;
  loadKml(usgsurl);
  });
  break;
  .
  .
  .
  function loadKml(kmlUrl) {
  google.earth.fetchKml(ge, kmlUrl, function(kmlObject) {
  if (kmlObject) {
  ge.getFeatures().appendChild(kmlObject);
  } else {
  alert('Bad KML');
  }
  });
  }
Listing 2 – addEventListener gives the developer some control of user interaction

In addition to ‘addEventListener’, the above listing shows how easily a developer can use ‘getKml’ to grab additional attributes out of a topo polygon. Google's ‘fetchKml’ simplifies life when grabbing kml produced by the server. My first pass at this used XMLHttpRequest directly like this:

  var req;

function loadXMLDoc(url) {
  if (window.XMLHttpRequest) {
  req = new XMLHttpRequest();
  req.onreadystatechange = processReqChange;
  req.open("GET", url, true);
  req.send(null);
  // branch for IE6/Windows ActiveX version
  } else if (window.ActiveXObject) {
  req = new ActiveXObject("Microsoft.XMLHTTP");
  if (req) {
  req.onreadystatechange = processReqChange;
  req.open("GET", url, true);
  req.send();
  }
  }
 }

function processReqChange() {
  if (req.readyState == 4) {
  if (req.status == 200) {
  var topo = ge.parseKml(req.responseText);
  ge.getFeatures().appendChild(topo);
  } else {
  alert("Problem retrieving the XML data:\n" + req.statusText);
  }
  }
  }
Listing 3 – XMLHttpRequest alternative to ‘fetchKml’

However, I ran into security violations due to running the Tomcat Java servlet proxy on a different port from my web application in IIS. Rather than rewrite the GetDRG servlet in C# as an asmx, I discovered the Google API ‘fetchKml’, which is more succinct anyway, and neatly bypassed the security issue.


Fig 3 – Google Earth api with a TerraServer DRG loaded by clicking on USGS topo polygon

And using the Google 3D navigation


Fig 4 – Google Earth api with a TerraServer DRG using the nifty 3D navigation

Of course in this case 3D is relatively meaningless. Michigan is absurdly flat. A better example of the niftiness of 3D requires moving out west like this view of Havasupai Point draped over the Grand Canyon, or looking at building skp files in major metro areas as in the EPA viewer.


Fig 5 – Google Earth api with a TerraServer DRG Havasupai Point in more meaningful 3D view

This Shp Viewer experiment makes use of a couple of other interesting technologies. The .shp file set (.shp, .dbf, .shx) is uploaded to the server where it is loaded into PostGIS and added to the Geoserver catalog. Once in the catalog the features are available to the kml reflector for pretty much automatic use in the Google Earth API. As a nice side effect the features can be exported to all of Geoserver's output formats (PDF, SVG, RSS, OpenLayers, PNG, GeoTIFF), minus the inimitable background terrain of Google Earth.

ogr2ogr is useful for loading the furnished .shp files into PostGIS, where they are accessible to Geoserver and the also very useful kml_reflector. PostGIS + Geoserver + Google Earth API is a very powerful stack for viewing map data. There is also a nice trick for adding Geoserver featureTypes to the catalog programmatically; I used the Java version after writing out the featureType info.xml to Geoserver's data directory. Eventually, work on the RESTful configuration version of Geoserver may make this kind of catalog reloading easier.
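For reference, the PostGIS load itself is a one-liner; the connection parameters and layer name here are just placeholders:

ogr2ogr -f PostgreSQL PG:"host=localhost dbname=shpview user=postgres password=..." uploaded.shp -nln uploaded_shp -t_srs EPSG:4326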

Summary:
Wow, I like 3D, and I especially like 3D I have control over. The Google Earth API is easy to add into an html page, and event listeners give developers some control over features added to the 3D scene. Coupled with the usual suspects in the GIS stack, PostgreSQL/PostGIS and Geoserver, it is relatively easy to make very useful applications. I understand, though, that real control freaks will want to look at WWjava.

If anyone would like to see a prototype UI of their data using this approach, please email.

A Noob in Oracle Land

The announcement that Oracle is supporting AMIs in the Amazon cloud came as a surprise to me. I had heard that there was a teaser version of Oracle out there for developers, but had not expected Oracle to jump on the cloud side, especially after Larry Ellison’s recent diatribe against cloud computing.

   “It’s complete gibberish. It’s insane. When is this idiocy going to stop?”

Just curious about this oracle of gibberish, I went on a tour of Oracle Land, the Kingdom of Ellison. This is no small undertaking for an enterprise as ambitious as Oracle. There are endless products and sub-products. The base of the pyramid is the database server, but after buying 50 or more companies in the last year or so, the borders of the empire extend way beyond RDBMS.

The venerable RDBMS has come a long way since IBM's E.F. Codd introduced the concept back in the 70s. I vaguely remember Oracle breaking into the PC world shortly after Turbo Pascal. There was a single DB product for the DOS IBM PC, and documentation consisted of a couple of grayish paperback manuals. Shortly after this, in the late 80s, a small vendor introduced GeoSQL to hook AutoCAD to the GIS world through Oracle. This was my first introduction to the potential of spatial databases and Oracle. The empire of Ellison has grown since then, and now the documentation would fill a library as well as Ellison's bank account.

As an aside, we live in an interesting age at the dusk of the great technology innovators. The infamous industrialists of the previous era now exist only as shadowy figures in history texts, but the business innovators of technology are still walking among us: Larry Ellison, Bill Gates, Steve Jobs. Their multi-billion dollar personal fortunes are just now entering the charitable fund phase, where our grandchildren will know their names in some impersonally institutional mode such as the Gates Foundation.

First stop in Oracle Land was a download of the free, as in free beer, teaser version, OracleXE.

  • Total data stored in XE is limited to 4GB
  • XE is limited to 1GB of RAM
  • XE is limited to 1 processor

Since my entire interest in Oracle is the spatial side, my next stop was Justin Lokitz's helpful article on integration with Geoserver, leading to this:


Fig 1 -http://localhost:80/geoserver/wms?service=WMS&request=GetMap&format=
image/png&width=800&height=600&srs=EPSG:4326&layers=topp:COUNTIES
&styles=countypopdensity&bbox=-177.1,13.71,-61.48,76.63

Fig 2 – http://localhost:80/geoserver/wms/kml_reflect?layers=COUNTIES

Not a bad start. The Geoserver layer abstracts away the spatial guts of OracleXE. However, curiosity leads on. I found that OracleXE has some spatial components labelled ‘Locator’ as opposed to ‘Spatial’. Though Locator is only a subset of the extensive enterprise Spatial product, geometry queries are possible. It took me a bit to find my way around.

Interestingly, the open source world is generally more helpful in this respect. The forums of commercial software vendors, although extensive, are less friendly. For instance, Paul Ramsey of Refractions fame is regularly present on the PostGIS forums, and Frank Warmerdam is always available to lend a helping hand at the immensely useful www.gdal.org. But I doubt that I will ever run across a Larry Ellison post on the OracleXE forum. Many posts to commercial forums appear to languish unanswered, which is seldom the case in the open source project forums I monitor.

It is worth noting that gdal’s ogr2ogr can be built with Oracle support on systems with Oracle Client libraries installed.

Oracle’s SDO_Geometry is present in a useful form letting users run geographic join queries like this:

   select c.COUNTY, c.STATE_ABRV, c.TOTPOP, c.POPPSQMI from states s, counties c where s.state = 'California' and sdo_anyinteract (c.geom, s.geom) = 'TRUE';

My next step was to look at SDO_Geometry in JDBC. Unfortunately Oracle's JGeometry spatial library is not available for OracleXE, but the LGPL open source JTS library provides helpful OraReader and OraWriter classes. These encapsulate the SDO_GEOMETRY STRUCT translation to/from jts.geom.Geometry, where the rest of the JTS API can be applied.

// log the result set metadata, then pull the SDO_GEOMETRY column as an oracle.sql.STRUCT
logger.info(rsmd.getColumnName(i)+": "+rsmd.getColumnType(i));
st = (oracle.sql.STRUCT) rs.getObject(1);
//convert STRUCT into JGeometry - not available in OracleXE
//JGeometry j_geom = JGeometry.load(st);

//JTS to the rescue - OraReader translates the SDO_GEOMETRY STRUCT into a JTS Geometry
OraReader reader = new OraReader();
Geometry geom = reader.read(st);
Coordinate[] coords = geom.getCoordinates();
		.
		.

Next stop, Amazon AWS EC2. Here is a list of the public Oracle AMIs offered:
   Oracle Database 11g Release 1 Enterprise Edition – 64 Bit
   Oracle Database 11g Release 1 Enterprise Edition – 32 Bit
   Oracle Database 11g Release 1 Standard Edition/Standard Edition One – 32 Bit
   Oracle Database 10g Release 2 Express Edition – 32 Bit

The last in the list, OracleXE edition, is the one to experiment with, unless you have a spare Oracle license floating around.

Time to try it:
  C:\>ec2-run-instances ami-7acb2f13 -k gsg-keypair
  C:\>ec2-describe-instances i-??????

and login:

Use of this machine requires acceptance of
the following license agreements.
 1. Oracle Enterprise Linux

http://edelivery.oracle.com/EPD/LinuxLicense/get_form?ARU_LANG=US

 2. Oracle Technology Developer License Terms

http://www.oracle.com/technology/software/popup-license/standard-license.html

 Please enter the above URLs into your browser and review them.
To accept the agreements, enter 'y', otherwise enter 'n'.
Do you accept? [y/n]: y
Thank you.

You may now use this machine.
Welcome to Oracle Database on EC2!
This is the first time this EC2 instance has been started.

Please set the oracle operating system password.
	.
	.
Please specify the passwords for the following database administrative accounts:

SYS (Database Administrative Account) Password:
	.
	.

Now for the link to Apex on the new OracleXE instance:
  http://ec2-??-??-???-??.compute-1.amazonaws.com:8080/apex


Fig 3 – Oracle Apex running from an EC2 OracleXE instance

Looks like we have it.

Summary:
Oracle is the Big Daddy of spatial GIS. It is also the “Mother of all DBA complexity.” Running a spatial app with Oracle in the background is not trivial, but it is getting easier. The EC2 OracleXE AMI makes starting an Oracle server instance a matter of minutes. Although lacking some of the capability of its free and open source competition, OracleXE can be useful for the garden variety web enabled spatial app. For the developer with lots of experience in Oracle, OracleXE provides a low cost entry onto the performance/price escalator.

Next on the agenda is adding SDO_GEOMETRY data along with some kind of real spatial rendering, which in my case means getting a Tomcat server running with Geoserver on the same OracleXE instance. Alternatively it might be worth trying to install the OracleXE .rpm on an AMI with a GIS stack already available. And it will be useful to recompile ogr with Oracle DB support.

Of course the real mix and match challenge will be OracleXE on an EC2 (real soon now) Windows instance with Java, Tomcat, and Geoserver serving a Google Map control coupled to Google Earth, OpenLayers, and Virtual Earth. But EC2 Windows will probably come preconfigured with the new MS SQL Server 2008 and all the promised geospatial goodies, including LINQ potential.

After just a short trip into the Ellison Empire, I must admit I still like the no frills PostgreSQL/PostGIS better.

TatukGIS – Generic ESRI with a Bit Extra


Fig1 – basic TatukGIS Internet Server view element and legend/layer element

TatukGIS is a commercial product that is basically a generic brand for building GIS interfaces, including web interfaces. It is developed in Gdynia, Poland.


The core product is a Developer Kernel, DK, which provides basic building blocks for GIS applications in a variety of Microsoft flavors including:

  • DK-ActiveX – An ActiveX® (OCX) control supporting Visual Basic, VB.NET, C#, Visual C++
  • DK.NET – A manageable .NET WinForms component supporting C# and VB.NET
  • DK-CF – A manageable .NET Compact Framework 2.0/3.5 component – Pocket PC 2002 and 2003, Windows Mobile 5 and 6, Windows CE.NET 4.2, Windows CE 5 and 6
  • DK-VCL – A native Borland®/CodeGear® Delphi™/C++ Builder™

These core components have been leveraged for some additional products to make life a good deal easier for web and PDA developers. A TatukGIS Internet Server single server deployment license starts at $590 for the Lite Edition or $2000 per deployment server for the full edition in a web environment. I guess this is a good deal compared to ESRI/Oracle licenses, but not especially appealing to the open source integrators among us. There is support for the whole gamut of CAD, GIS, and raster formats as well as project file support for ESRI and MapInfo. This is a very complete toolkit.

The TatukGIS Internet Server license supports database access to all the usual DBs: "MSSQL Server, MySQL, Interbase, DB2, Oracle, Firebird, Advantage, PostgreSQL… " However, support for spatial formats is currently only available for Oracle Spatial/Locator and ArcSDE. Support for the PostGIS and MS SQL Server spatial extensions is slated for release with TatukGIS IS 9.0.

I wanted to experiment a bit with the Internet Server, so I downloaded the free trial version.

Documentation was somewhat sparse, but this was a trial download. I found the most help looking in the sample subdirectories. Unfortunately these were all VB and it took a bit of experimental playing to translate into C#. The DK trial download did include a pdf document that was also somewhat helpful. Perhaps a real development license and/or server deployment license would provide better C# .NET documentation. I gather the historical precedence of VB is still evident in the current doc files.

The ESRI influence is obvious. From layer control to project serialization, it seems to follow the ESRI look and feel. This can be a plus or a minus. Although very familiar to a large audience of users, I am afraid the ESRI influence is not aesthetically pleasing or very smooth. I was able to improve over the typically clunky ArcIMS type zoom and wait interface by switching to the included Flash wrapper (simply a matter of setting Flash="true").

The ubiquitous flash plugin lets the user experience a somewhat slippy map interface familiar to users of Virtual Earth and Google Maps. We are still not talking a DeepZoom or Google Earth type interface, but a very functional viewer for a private data source. I was very pleased to find how easy it was to build the required functionality including vector and .sid overlays with layer/legend manipulation.

This is a very simple to use toolkit. If you have had any experience with Google Map API or Virtual Earth it is quite similar. Once a view element is added to your aspx the basic map interface is added server side:

<ttkGIS:XGIS_ViewerIS id="GIS" onclick="GIS_Click" runat="server" OnPaint="GIS_Paint" Width="800px" Height="600px" OnLoad="GIS_Load" BorderColor="Black" BorderWidth="1px" ImageType="PNG24" Flash="True"></ttkGIS:XGIS_ViewerIS>

The balance of the functionality is a matter of adding event code to the XGIS_ViewerIS element. For example :

    protected void GIS_Load(object sender, EventArgs e)
    {
       GIS.Open( Page.MapPath( "data/lasanimas1.ttkgp" ) );
       GIS.SetParameters("btnFullExtent.Pos", "(10,10)");
       GIS.SetParameters("btnZoom.Pos", "(40,10)");
       GIS.SetParameters("btnZoomEx.Pos", "(70,10)");
       GIS.SetParameters("btnDrag.Pos", "(100,10)");
       GIS.SetParameters("btnSelect.Pos", "(130,10)");

       addresslayer = (XGIS_LayerVector)GIS.API.Get("addpoints19");
    }

The ttkgp project support allows addition of a full legend/layer menu with a single element, an amazing time saver:

<ttkGIS:XGIS_LegendIS id="Legend" runat="server" Width="150px" Height="600px" ImageType="PNG24" BackColor="LightYellow" OnLoad="Legend_Load" AllowMove="True" BorderWidth="1px"></ttkGIS:XGIS_LegendIS>

The result is a simple functional project viewer available over the internet, complete with zoom, pan, and layer manipulation. The real power of the TatukGIS is in the multitude of functions that can be used to extend these basics. I added a simple address finder and PDF print function, but there are numerous functions for routing, buffering, geocoding, projection, geometry relations etc. I was barely able to scratch the surface with my experiments.


Fig2 – TatukGIS Internet Server browser view with .sid imagery and vector overlays

The Bit Extra:
As a bit of a plus, the resulting aspx is quite responsive. Because the library is not built with MS MFC, it has a performance advantage over the ESRI products it replaces. The TatukGIS website claims include the following:

"DK runs some operations run even 5 – 50 times faster than the leading GIS development products"

I wasn’t able to verify this, but I was pleased with the responsiveness of the interface, especially in light of the ease of development. I believe clients with proprietary data layers who need a quick website would be very willing to license the TatukGIS Internet Server. Even though an open source stack such as PostGIS, Geoserver, OpenLayers could do many of the same things, the additional cost of development would pretty much offset the TatukGIS license cost.

The one very noticeable restriction is that development is a Windows only affair. You will need an ASP IIS server to make use of the TatukGIS for development and deployment. Of course clients can use any of the popular browsers from any of the common OS platforms. Cloud clusters in Amazon’s AWS will not support TatukGIS IS very easily, but now that GoGrid offers Virtual Windows servers there are options.


Fig3 – TatukGIS Internet Server browser view with DRG imagery and vector overlays

Fig4 – TatukGIS Internet Server browser result from a find address function

Summary: TatukGIS Internet Server is a good toolkit for custom development, especially for clients with ESRI resources. The license is quite reasonable.

Google Earth 4.3


Fig 1 – Google Earth Terrain with a USGS DRG from Terraservice as a GroundOverlay

I have to admit that I've been a bit leery of Google Earth. It's not just the lurking licensing issues, or the proprietary application install, or even the lack of event listeners or an accessible API. If I'm honest, I have to admit I'm a bit jealous of the large infrastructure, the huge data repository, and the powerfully fun user interface. So this weekend I faced my fears and downloaded the current version, Google Earth 4.3.

I'm not really interested in the existing Google Earth content. There are lots of default layers available, but I want to see how I can make use of the cool interface and somehow adapt it as a control for my own data. The easiest route to customization in GE is KML. KML was developed for XML interchange of Keyhole views before Google bought Keyhole and turned it into Google Earth. Keeping the KML acronym avoids some conflict: Keyhole Markup Language doesn't run into the namespace collision with OGC's GML that a “Google Markup Language” would.

KML 2.2 has evolved well beyond a simple interchange language, with features that can be adapted to customized GE applications. After looking over the KML 2.2 reference I started with a simple static KML file. I have a USGS topo index table in PostGIS that I wished to view over the Google terrain. PostGIS includes an AsKML function for converting geometry to KML, so using Java I can write a JDBC query and embed the resulting geometry AsKML into a KML document. However, adding a WMS layer between PostGIS and GE seemed like a better approach: Geoserver gives you a nice SOA approach with built-in KML export functionality as well as custom styling through sld.
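For comparison, the direct JDBC route would look something like the following sketch; the table and column names are placeholders, and newer PostGIS releases call the function ST_AsKML:

import java.sql.*;

public class KmlFromPostgis {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/gisdb", "postgres", "password");
        Statement st = con.createStatement();
        // AsKML() turns the geometry column into a KML fragment on the server side
        ResultSet rs = st.executeQuery("select name, AsKML(the_geom) as kml from usgstile");
        StringBuilder doc = new StringBuilder(
                "<kml xmlns=\"http://earth.google.com/kml/2.1\"><Document>");
        while (rs.next()) {
            doc.append("<Placemark><name>").append(rs.getString("name"))
               .append("</name>").append(rs.getString("kml")).append("</Placemark>");
        }
        doc.append("</Document></kml>");
        System.out.println(doc);
        con.close();
    }
}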

It is easy to set up Geoserver as a WMS layer over the PostGIS database containing my USGS topo index table. Geoserver also includes a KML export format in the WMS framework, so I simply add my table to the Geoserver Data Featuretype list. Now I can grab a KML document with a WMS query like this:

http://rkgeorge-pc:80/geoserver/wms?bbox=-104.875,39,-104.75,39.125&styles=&Format=kml&request=GetMap&layers=usgstile&width=500&height=500&srs=EPSG:4269

This is simple, and the resulting kml provides a basic set of topo polygons complete with a set of clickable attribute tables at the polygon center points. The result is not especially beautiful or useful, but it is interesting to tilt the 3D view of the GE terrain and see that the polygons are draped onto the surface. It is also handy to have access to the topo attributes with the icon click. However, in order to be useful I need to make my kml a bit more interactive.

The next iteration was to create a simple folder kml with a <NetworkLink>:

<kml xmlns="http://earth.google.com/kml/2.1">
     <Folder>
       <name>USGS Topo Map Index</name>
       <description>
        <![CDATA[
          <h3>Map Tiles</h3>
          <p><font color="blue">index tile of 1:24000 scale <b> USGS topographic maps</b>
           1/8 degree x 1/8 degree coverage</font>
           <a href='http://topomaps.usgs.gov/'>
               more information>>
           </a>
          </p>
      ]]>
     </description>
      <NetworkLink>
        <name>WMS USGS quad tiles</name>
          <Link>
             <href>http://rkgeorge-pc/GoogleTest/servlet/GetWMS</href>
             <httpQuery>layer=usgstile</httpQuery>
             <viewRefreshMode>onStop</viewRefreshMode>
             <viewRefreshTime>1</viewRefreshTime>
             <viewFormat>BBOX=[bboxWest],[bboxSouth],[bboxEast],[bboxNorth]
                             &amp;CAMERA=[lookatLon],[lookatLat]</viewFormat>
           </Link>
      </NetworkLink>
    </Folder>
</kml>

In this case the folder kml has a network link to a Java servlet called GetWMS. The servlet handles building the GetMap WMS query with customized sld loading, so that I can now change my style as needed by editing a .sld file. The servlet then opens my usgstile query and feeds the resulting KML back to the kml folder's <NetworkLink>. Since I have set the <viewRefreshMode> to onStop, each time I pan around the GE view I generate a new call to the GetWMS servlet, which builds a new GetMap call to Geoserver based on the current bbox view parameters.
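Stripped to its essentials, the GetWMS proxy looks roughly like this (a sketch, not the original source; the Geoserver host, image size, srs, and sld location are assumptions):

public class GetWMS extends javax.servlet.http.HttpServlet {
    protected void doGet(javax.servlet.http.HttpServletRequest req,
                         javax.servlet.http.HttpServletResponse resp)
            throws java.io.IOException {
        String layer = req.getParameter("layer");  // e.g. usgstile, from <httpQuery>
        String bbox  = req.getParameter("BBOX");   // filled in by Google Earth's <viewFormat>
        String wms = "http://localhost:8080/geoserver/wms?request=GetMap&format=kml"
                + "&srs=EPSG:4269&width=500&height=500&styles="
                + "&sld=http://localhost:8080/styles/" + layer + ".sld"  // custom style, location assumed
                + "&layers=" + layer + "&bbox=" + bbox;

        resp.setContentType("application/vnd.google-earth.kml+xml");
        java.io.InputStream in = new java.net.URL(wms).openStream();
        java.io.OutputStream out = resp.getOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) > 0) out.write(buf, 0, n);  // relay Geoserver's KML back to the NetworkLink
        in.close();
        out.close();
    }
}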

This is more interesting and adds some convenient refresh capability, but views at a national or even state level are quickly overwhelmed by the amount of data being generated. The next iteration adds a <Region> element with a <Lod> subelement:

<Region>
  <LatLonAltBox>
    <north>70.125</north>
    <south>18.875</south>
    <east>-66.875</east>
    <west>-160.25</west>
  </LatLonAltBox>
  <Lod>
    <minLodPixels>30000</minLodPixels>
    <maxLodPixels>-1</maxLodPixels>
  </Lod>
 </Region>

Now my refreshes only occur when zoomed in to a reasonable level of detail. The kml provides a layer of USGS topo tiles and associated attributes from my PostGIS table by taking advantage of the built in KML export feature of Geoserver’s WMS.


Fig 2 – Google Earth with <Region><LOD> showing USGS topo tiles

Unfortunately, once a Region element with Lod is added, Google Earth 4.3 seems to disable the onStop refresh after an initial load of my usgstile subset. In Google Earth 4.2 I didn't run into this problem. The workaround appears to be a manual refresh of the menu subtree “WMS USGS quad tiles”. Once this refresh is done, pan and zoom refresh in the normally expected ‘onStop’ fashion. GE 4.3 is beta, and perhaps this behavior/“feature” will be changed for the final release.

Now I have a reasonable USGS tile query overlay. However, why just show the tiles? It is more useful to show the actual topo map. Fortunately the “web map of the future” from four or five years ago is still around: terraservice.net. TerraService is a Microsoft research project for serving large sets of imagery into the internet cloud. It was a reasonably successful precursor to what we see now as Google Maps and Virtual Earth. The useful thing for me is that one of the imagery layers this service provides as a WMS is DRG, Digital Raster Graph. DRG is a seamless WMS of the USGS topo map scans in a pyramid using 1:250,000, 1:100,000, and 1:24,000 scale scanned paper topos. Anyone doing engineering, hiking, etc. before 2000 is probably still familiar with the paper topo series which, once upon a time, was the gold standard map interface. Since then the USGS has fallen on harder times, but before falling into utter obscurity they did manage to swallow enough new tech to produce the DRG map scans and let Microsoft load them into TerraService.net.

This means that the following url will give back a nice jpeg 1:24000 topo for Mt Champion in Colorado:

http://terraservice.net/ogcmap.ashx?version=1.1.1&request=GetMap&Layers=DRG&Styles=&SRS=EPSG:4326&BBOX=-106.5625,39.0,-106.5,39.0625&width=1000&height=1000&format=image/jpeg&Exceptions=se_xml

Armed with this bit of WMS capability I can add some more capability to my GoogleTest. First I added a new field to the usgstile table, which can be seen in the screen capture above. The new field simply holds a reference url to another servlet I wrote to build the TerraService WMS query and pull down the requested topo image. By setting the width and height parameters to 2000 I can get the full 1:24,000 scale detail for a single 1/8 degree topo quad. My GetDRG servlet also conveniently builds the kml document to add the topo to my Google Earth menu:

<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://earth.google.com/kml/2.2">
  <Folder>
    <name>38106G1  Buena Vista East</name>
    <description>DRG Overlay of USGS Topo</description>
    <GroundOverlay>
      <name>Large-scale overlay on terrain</name>
	     <description>
        <![CDATA[
          <h3>DRG Overlay</h3>
          <p><font color="blue">index tile of 1:24000 scale <b> USGS topographic maps</b>
          1/8 degree x 1/8 degree coverage</font>
          <a href='http://topomaps.usgs.gov/'>more information>></a>
          </p>
        ]]>
      </description>
      <Icon>
        <href>http://terraservice.net/ogcmap6.ashx?version=1.1.1&amp;request=GetMap
           &amp;Layers=DRG&amp;Styles=&amp;SRS=EPSG:4326&amp;BBOX=-106.125,38.75,-106.0,38.875
           &amp;width=2000&amp;height=2000&amp;format=image/jpeg&amp;Exceptions=se_blank</href>
      </Icon>
      <LatLonBox>
        <north>38.875</north>
        <south>38.75</south>
        <east>-106.0</east>
        <west>-106.125</west>
        <rotation>0</rotation>
      </LatLonBox>
    </GroundOverlay>
  </Folder>
</kml>

The result is a ground clamped USGS topo draped over the Google Earth terrain, now available for some flying around at surface level in the cool GE interface. The DRG opacity can also easily be adjusted to see the Google Earth imagery under the DRG contours. The match up is pretty good, but of course GE imagery will be more detailed at this stage. I also understand that Google is in the process of flying submeter LiDAR throughout areas of the USA, so we can expect another level in the terrain detail pyramid at some point as well.

This is all fun and hopefully useful for anyone needing to access USGS topo overlays. The technology pattern demonstrated, though, can be used for just about any data resource:
1) data in an imagery coverage or a PostGIS database
2) a Geoserver WMS/WCS SOA layer
3) KML produced by some custom servlets

This can be extremely useful. The Google Earth interface, with all of its powerful capability, is now available as a nice OWS interface for viewing OWS exposed layers, which can be proprietary or public. For example, here is a TIGER viewer using this same pattern:




Fig 3 – TIGER 2007 view of El Paso County Colorado

The Google Earth viewer does have a gotcha license, but it may be worth the license cost for the capability exposed. In addition, if Microsoft ever catches up to the KML 2.2 spec, recently released as a final standard by the OGC, this same pattern will work conveniently in Virtual Earth, and consequently in a browser through the recently announced future Silverlight VE element. Lots of fun for view interfaces!

My next project is to figure out a way to use GE KML drawing capability to make a two way OWS using Geoserver transactional WFS, WFS-T.

Deep Zoom a TerraServer UrbanArea on EC2


Fig 1 – Silverlight MultiScaleImage of a high resolution Denver image – 200.6Mb .png

Just to show that I can serve a compiled Deep Zoom Silverlight app from various Apache servers, I loaded this Denver example on a Windows 2003 Apache Tomcat here: http://www.web-demographics.com/Denver, and then a duplicate on a Linux Ubuntu 7.10 instance running in Amazon EC2, this time using Apache httpd rather than Tomcat: http://www.gis-ows.com/Denver. Remember these use beta technology and will require updating to Silverlight 2.0. The Silverlight install is only about 4.5Mb, so the install is relatively painless on a normal bandwidth connection.

Continuing the exploration of Deep Zoom, I’ve had a crash course in Silverlight. Silverlight is theoretically cross browser compatible (at least for IE, Safari, and FireFox), and it’s also cross server. The trick for compiled Silverlight is to use Visual Studio 2008 with .NET 3.5 updates. Under the list of new project templates is a template called ‘Silverlight application’. Using this template sets up a project that can be published directly to the webapp folder of my Apache Server. I have not tried a DeepZoom MultiScaleImage on Linux FireFox or Mac Safari clients. However, I can view this on a Windows XP FireFox updated to Silverlight 2.0Beta as well as Silverlight updated IE7 and IE8 beta.

Creating a project called Denver and borrowing liberally from a few published examples, I was able to add a ClientBin folder under my Denver_Web project folder. Into this folder goes the pyramid I generated using Deep Zoom Composer. Once the pyramid is copied into place I can reference it from my MultiScaleImage element source, and the pyramid is viewable.
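The markup side is tiny, something like the following; the Source path assumes Deep Zoom Composer's default dzc_output.xml export copied under ClientBin/GeneratedImages, and the class name is just a placeholder:

<!-- minimal Silverlight 2 beta page hosting the Deep Zoom pyramid -->
<UserControl x:Class="Denver.Page"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <Grid x:Name="LayoutRoot" Background="Black">
    <MultiScaleImage x:Name="msi" Source="GeneratedImages/dzc_output.xml" />
  </Grid>
</UserControl>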

To make the MultiScaleImage element useful, I added a couple of additional .cs touches for mousewheel and drag events. Thanks to the published work of Lutz Gerhard, Peter Blois, and Scott Hanselman this was just a matter of including a MouseWheelHelper.cs in the project namespace and adding a few delegate functions to the main Page initialization code behind file. Pan and Zoom .cs

Now I need to backtrack a bit. How do I get some reasonable Denver imagery for testing this Deep Zoom technology? Well, I don't belong to DRCOG, which I understand is planning on collecting 6″ aerials. There are other imagery sets floating around Denver as well, I believe down to 3″ pixel resolution. However, the cost of aerial capture precludes any free and open source type of use. Fortunately, there is some nice aerial data available from the USGS. The USGS Urban Area imagery is available for a number of metropolitan areas, including Denver.


Fig 2 – Same high resolution Denver image zoomed in to show detail

USGS Urban Area imagery is a color orthorectified image set captured at approximately 1ft pixel resolution. The data is made available to the public through the TerraServer WMS. Looking over the TerraServer UrbanArea GetCapabilities layer I see that I can ‘GetMap’ this layer in EPSG:26913 (UTM83-13m). The best possible pixel resolution through the TerraServer WMS is 0.25m per pixel. To achieve this level of resolution I can use the max pixel Height and Width of 2000 over a metric bounding box of 500m x 500m. http://gisdata.usgs.net/IADD/factsheets/fact.html

For example:
http://terraservice.net/ogcmap.ashx?version=1.1.1&service=WMS&ServiceName=WMS&request=GetMap&layers=UrbanArea&srs=EPSG:26913&bbox=511172,4399768,511672,4400268&WIDTH=2000&HEIGHT=2000

This is nice data, but I want to get the max resolution for a larger area and mosaic the imagery into a single large image that I will then feed into the Deep Zoom Composer tool for building the MultiScaleImage pyramid. Java is the best tool I have for writing a simple program to connect to the WMS and pull down the images one at a time in tiff format.
try {
    File OutFile = new File(dir+imageFileName);
    URL u = new URL(url);
    HttpURLConnection geocon = (HttpURLConnection)u.openConnection();
    geocon.setAllowUserInteraction(false);
    geocon.setRequestMethod("GET");
    geocon.setDoOutput(true);
    geocon.setDoInput(true);
    geocon.setUseCaches(false);
    // read the WMS GetMap response straight into a BufferedImage and write it out as TIFF
    BufferedImage image = ImageIO.read(geocon.getInputStream());
    ImageIO.write(image,"TIFF",OutFile);
    geocon.disconnect();
    System.out.println("download completed to "+dir+imageFileName+" "+bbox);
} catch (IOException e) {
    e.printStackTrace();
}

Looping this over my desired area creates a directory of 11.7Mb tif images. In my present experiment I grabbed a set of 6×6 tiles, or 36 tiff files at a total of 412Mb. The next step is to collect all of these tif tiles into a single mosaic. The Java JAI package contains a nice tool for this called mosaic:
mosaic = JAI.create("mosaic", pbMosaic, new RenderingHints(JAI.KEY_IMAGE_LAYOUT, imageLayout));

Iterating pbMosaic.addSource(translated); over my set of TerraServer tif files and then using PNGImageEncoder, I am able to create a single png file of about 200Mb. Now I have a sufficiently large image to drop into the Deep Zoom Composer for testing. The resulting pyramid of jpg files is then copied into the ClientBin subdirectory of the Denver VS2008 project. From there it is published to the Apache webapp, and I can open my Denver webapp for viewing the image pyramid. On this client system, with a good GPU and dual core cpu, the image zoom and pan is quite smooth and replicates a nice local application viewing program with smooth transitions around real time zoom pan space. On an older Windows XP running FireFox the pan and zoom is very similar. That is on a system with no GPU, so I am impressed.
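The tiling loop itself is roughly the following sketch; it assumes the 2000 x 2000 pixel TerraServer tiles are laid out on a regular row/column grid:

import java.awt.RenderingHints;
import java.awt.image.RenderedImage;
import java.awt.image.renderable.ParameterBlock;
import javax.media.jai.*;
import javax.media.jai.operator.MosaicDescriptor;

public class TileMosaic {
    public static RenderedOp mosaic(RenderedImage[][] tiles) {
        int tileSize = 2000;
        ImageLayout imageLayout = new ImageLayout();
        imageLayout.setWidth(tiles[0].length * tileSize);
        imageLayout.setHeight(tiles.length * tileSize);

        ParameterBlockJAI pbMosaic = new ParameterBlockJAI("mosaic");
        pbMosaic.setParameter("mosaicType", MosaicDescriptor.MOSAIC_TYPE_OVERLAY);

        for (int row = 0; row < tiles.length; row++) {
            for (int col = 0; col < tiles[row].length; col++) {
                // shift each tile to its position in the overall grid
                ParameterBlock pbTrans = new ParameterBlock();
                pbTrans.addSource(tiles[row][col]);
                pbTrans.add((float) (col * tileSize));  // xTrans
                pbTrans.add((float) (row * tileSize));  // yTrans
                pbTrans.add(Interpolation.getInstance(Interpolation.INTERP_NEAREST));
                pbMosaic.addSource(JAI.create("translate", pbTrans));
            }
        }
        return JAI.create("mosaic", pbMosaic,
                new RenderingHints(JAI.KEY_IMAGE_LAYOUT, imageLayout));
    }
}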

Peeking into the pyramid I see that the bottom level 14 contains 2304 images for a 200Mb png pyramid. Each image stays at 256×256 and the compression ranges from 10kb to 20kb per tile. Processing into the jpg pyramid compresses from the original 412Mb tif set => 200.5Mb png mosaic => 45.7Mb 3084 file jpg pyramid. Evidently there is a bit of lossy compression, but the end effect is that the individual tiles are small enough to stream into the browser at a decent speed. Connected with high bandwidth the result is very smooth pan and zoom. This is basically a Google Earth or Virtual Earth user experience all under my control!

Now that I have a workflow and a set of tools, I wanted to see what limits I ran into. The next step was to increment my tile set to an 8×8 for 64 tifs to see if my mosaic tool would endure the larger size as well as the DeepZoom Composer. My JAI mosaic will be the sticking point on a maximum image size since the source images are built in memory which on this machine is 3Gb. Taking into account Vista’s footprint I can actually only get about 1.5Gb. One possible workaround to that bottleneck is to create several mosaics and then attempt to splice them in the Deep Zoom Composer by manually positioning them before exporting to a pyramid.

First I modified my mosaic program to write Jpeg output with jpgParams.setQuality(1.0f); this results in a faster mosaic and a smaller export, since the JAI PNG encoder is much slower than JPEG. With this modification I was able to export a couple of 3000m x 3000m mosaics as jpg files. I then used Deep Zoom Composer to position the two images horizontally and exported them as a single collection. In the end the image pyramid is 6000m x 3000m and 152Mb of jpg tiles. It looks like I might be able to scale this up to cover a large part of the Denver metro UrbanArea imagery.

The largest mosaic I was able to get Deep Zoom Composer to accept was 8x8 tiles, or 16000px x 16000px, which is just 4000m x 4000m on the ground. Feeding this 143Mb mosaic through Composer resulted in a pyramid consisting of 5344 jpg files at 82.3Mb. However, scaling to a 5000m x 5000m set of 100 tifs, a 221Mb mosaic, failed on import to Deep Zoom Composer. I say failed, but in this prerelease version the import finishes with a blank image shown on the right. Export works in the usual quirky fashion in that the export progress bar generally never stops, but in this case the pyramid also remains empty. Another quirky item to note is that each use of Deep Zoom Composer starts a SparseImageTool.exe process which continues consuming about 25% of cpu even after the Deep Zoom Composer is closed. After working awhile you will need to go into task manager and close down these processes manually. Apparently this is “pre-release.”


Fig 3 – Same high resolution Denver image zoomed in to show detail of Coors Field; players are visible

Deep Zoom is an exciting technology. It allows map hackers access to real time zoom and pan of large images. In spite of some current size limitations on the Composer tool the actual pyramid serving appears to have no real limit. I verified on a few clients and was impressed that this magic works in IE and FireFox although I don’t have a Linux or Mac client to test. The compiled code serves easily from Apache and Tomcat with no additional tweaking required. My next project will be adapting these Deep Zoom pyramids into a tile system. I plan to use either an OWS front end or a Live Maps with a grid overlay. The deep zoom tiles can then be accessed by clicking on a tile to open a Silverlight MultiScaleImage. This approach seems like a simple method for expanding coverage over a larger metropolitan area while still using the somewhat limiting Deep Zoom Composer pre release.

Wide Area HVAC controller using WPF and ZigBee Sensor grid

One project I’ve been working on recently revolves around an online controller for a wide area HVAC system. HVAC systems can sometimes be optimized for higher efficiency by monitoring performance in conjunction with environment parameters. Local rules can be established for individual systems based on various temperatures, humidity, and duct configurations. Briefly, a set of HVAC functions consisting of on/off relay switches and thermistors, can be observed from an online monitoring interface. Conversely state changes can be initiated online by issuing a command to a queue. These sensors and relays might be scattered over a relatively large geographic area and in multiple locations inside a commercial building.

It is interesting to connect a macro geospatial world with a micro world, drilling down through a local facility to a single thermistor chip. In the end it's all spatial.

Using a simple map view allows drill down from a wide area to a building, a device inside a building, a switch bank, and individual relay or analog channel for monitoring or controlling. The geospatial aspect of this project is somewhat limited, however, the zoom and pan tools used in the map location also happen to work well in the facilities and graphing views.

The interface can be divided into three parts:
1) The onsite system – local base system and Zigbee devices
2) The online server system – standard Apache Tomcat
3) The online client interface – WPF xbap, although svg would also work with a bit more work

Onsite System

The electronically impaired, like myself, may find the details of controller PIC chip sets, relays, and thermistor spec sheets baffling, but really they look more hacky than they are:

Fig 0 – Left: Zigbee usb antenna ; Center: thermistor chip MCP9701A; Right: ProXR Zigbee relay controller

The onsite system is made up of sensors and controller boards. The controller boards include a ZigBee antenna along with a single bank of 8 relays and an additional set of 8 analog inputs. The sensors are wired to the controller board in this development mode. However, ZigBee enabled temperature sensors are also a possibility, just more expensive. See SunSpot for example: http://www.sunspotworld.com/ (open source hardware?)

ZigBee is a low power wireless communications protocol based on IEEE 802.15.4. It allows meshes of devices to talk to each other via RF as long as they are within about 100-300 ft of another node on the mesh. Extender repeaters are also available. ZigBee enabled devices can be scattered around a facility and communicate back to a base system by relaying messages node to node through an ad hoc mesh network.

The onsite system has a local PC acting as the base server. The local onsite server communicates with an external connection via an internet router and monitors the ZigBee network. ASUS Eee PCs look like a good candidate for this type of application. Messages originating from outside are communicated down to the individual relay through the ZigBee mesh, while state changes and analog readings originating from a controller relay or sensor channel are communicated up the ZigBee network and then passed to the outside from the local onsite server.

The local server runs a small program polling the ZigBee devices and handling communications to the outside world via an internet connection. The PC is equipped with a USB ZigBee antenna to communicate with the other ZigBee devices in the network. This polling software was written in Java, even though that may not be the best language for serial USB COM control in Windows; the target system will be Linux based. The ZigBee devices we selected came with a USB driver that treats a USB port like a simple COM port.

Since this was a Java project, the next step was finding a comm API. Sun's JavaComm has discontinued support for Windows, although it is available for Linux. Our final onsite system will likely be Linux for cost reasons, so this is only a problem with the R&D system, which is Windows based. I ended up using the RXTX library, RXTXcomm.jar, from http://www.jcontrol.org/download/rxtx_en.html

Commands for our ProXR controller device are a series of numeric codes, for example <254;140;3;1>. This series puts the controller in command mode (254), issues a set bank status command (140), a byte indicating relays 0 and 1 on (3), and a bank address (1). The result is that relays 0 and 1 are switched to the on position. Commands are issued similarly for reading relay state and analog channels; <254;166;1>, for example, reads all 8 analog I/O channels as a set of 8 bytes.
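Writing one of these command strings out over the ZigBee USB/COM port with RXTX boils down to something like this sketch (the port name and baud rate are placeholders for whatever the controller's USB driver reports):

import gnu.io.CommPortIdentifier;
import gnu.io.SerialPort;
import java.io.InputStream;
import java.io.OutputStream;

public class RelayCommand {
    public static void main(String[] args) throws Exception {
        CommPortIdentifier id = CommPortIdentifier.getPortIdentifier("COM4");
        SerialPort port = (SerialPort) id.open("hvac-poller", 2000);
        port.setSerialPortParams(9600, SerialPort.DATABITS_8,
                SerialPort.STOPBITS_1, SerialPort.PARITY_NONE);

        OutputStream out = port.getOutputStream();
        InputStream in = port.getInputStream();

        // <254;140;3;1> : command mode, set bank status, relays 0 and 1 on, bank address 1
        out.write(new byte[] { (byte) 254, (byte) 140, (byte) 3, (byte) 1 });
        out.flush();

        // <254;166;1> : read all 8 analog I/O channels back as a set of 8 bytes
        out.write(new byte[] { (byte) 254, (byte) 166, (byte) 1 });
        out.flush();
        byte[] analog = new byte[8];
        in.read(analog);  // a real poller would loop until all 8 bytes arrive

        port.close();
    }
}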

Going into prototype mode, we picked up a batch of three wire MCP9701A thermistor chips for a few dollars. The trick is to pick the right resistance to get voltage readings into the mid range of the 8-bit or 10-bit analog channel read. Using 8-bit output lets us poll for temperature with around 0.5 degree F resolution.

The polling program issues commands and reads results on separate threads. If state is changed locally it is communicated back to the online server on the next polling message, while commands from the online command queue are written to the local controller boards with the return. In the meantime every polling interval sends an analog channel record back to the server.

Online Server

The online server is an Apache Tomcat service with a set of servlets to process communications from the onsite servers. Polled analog readings are stored in a PostgreSQL database with building:device:bank:channel addresses as well as a timestamp. The command queue is another PostgreSQL table which is checked at each poll interval for commands addressed to the building address which initiated the poll. Any pending commands are returned to the polling onsite server where they will be sent out to the proper device:bank:relay over the ZigBee network.

Two other tables simply provide locations of buildings as longitude, latitude in the wide area HVAC control system. Locations of devices inside buildings are stored in a building table as floor and x,y coordinates. These are available for the client interface.

Client Interface

The client interface was developed using WPF xbap to take advantage of xaml controls and a WMS mapping interface. Initially the client presents a tabbed menu with a map view. The map view indicates the wide area HVAC extents with a background WMS image for reference. Zooming in to the building location of interest allows the user to select a building to show a floor plan with device locations indicated.

Fig 1 HVAC wide area map view

Once a building is selected the building floor plans are displayed. Selecting an individual device determines the building:device address.

Fig 2 Building:device address selection from facilities floor plan map

Finally individual relays can be selected from the device bank by pushing on/off buttons. Once the desired switch configuration is set in the panel, it can be sent to the command queue as a building:device:bank address command. Current onsite state is also double checked by the next polling return from the onsite server.

Fig 3 Relay switch panel for selected building:device:bank address.

The analog IO channels are updated to the server table at the set polling interval. A selection of the analog tab displays a set of graph areas for each of the 8 channels. The on/off panel initiates a server request for the latest set of 60 polled readings which are displayed graphically. It won’t be much effort to extend this analog graph to a bidirectional interface with user selectable ranges set by dragging floor and ceiling lines that trigger messages or events when a line is crossed.

Fig 4 Analog IO channel graphs

This prototype incorporates several technologies: a Java based Tomcat service online and the Java RXTXcomm API for the local ZigBee polling. The client interface is also served out of Apache Tomcat as WPF xaml to take advantage of easier gui control building. In addition, OGC WMS is used for the map views. The facilities plan views will be DXF translations to WPF xaml. Simple graphic selection events are used to build addresses to individual relays and channels. The server provides historical command queues and channel readings by storing time stamped records. PostgreSQL also has the advantage of handling record locking on the command queue when multiple clients are accessing the system.

This system is in the prototype stage but illustrates the direction of control systems. A single operator can maintain and monitor systems from any location accessible to the internet, which is nearly anywhere these days. XML rendering graphics grammars for browsers like svg and xaml enable sophisticated interfaces that are relatively simple to build.

There are several OGC specifications oriented toward sensor grids: http://www.opengeospatial.org/projects/groups/sensorweb. The state of the art is still in flux, but by virtue of the need for spatial management of sensor grids, there will be a geospatial component in a “ubiquitous sensor” world.

JAXB and xml binding

JAXB is a tool for processing xml files in Java. The JAXB xjc compiler converts an XML schema file, .xsd, into a set of Java classes that match the hierarchy of the schema. Once these classes are available, an xml file based on the original schema can be mapped directly into Java classes, where it can be used in your program through the JAXB API.

    This is an appealing concept since it makes processing xml files in Java much easier than the two alternate routes of either

    A. creating custom SAX content handlers or
    B. traversing an in memory DOM tree.

    This is so appealing in fact that I thought I would try it out on the OGC GetCapabilities XML that is produced for a WMS. For example:

    http://localhost:80/geoserver/wms?request=getCapabilities

    This call to a Geoserver WMS will produce a capabilities xml file. It would be nice to access the layers defined in the xml and use these to determine layer visibility at a given scale.

    The first step was to download the JAXB reference implementation
https://jaxb.dev.java.net/

    I first tried using the early release of jaxb2.0. However, I ran into problems using this version on the OGC getCapabilities xsd schema. The OGC wms schemas found here http://schemas.opengis.net/wms/ were in DTD form for all versions prior to 1.3.0. My first attempt was to use the 1.3.0 xsd and see if it could be modified to work with the 1.1.1 xml output from the Geoserver WMS. I was able to use jaxb2.0 xjc to translate the capabilities_1_3_0.xsd into classes, but I was not able to read the example xml:
http://schemas.opengis.net/wms/1.3.0/capabilities_1_3_0.xml

    Finally I returned to the older jaxb 1.0.6 release version. Again I had no success using this version on capabilities_1_3_0.xsd. Unable to get this working after a bit of effort, I went back to the capabilities_1_1_1.dtd found here
http://schemas.opengis.net/wms/1.1.1/

    The jaxb 1.0.6 implementation has a dtd transcoder, but it is unstable and didn’t seem to work on the OGC WMS capabilities dtd. So my next stop was trang
http://www.thaiopensource.com/relaxng/trang.html

    Using trang I was able to transcode the capabilities dtd to xsd :

    java -jar H:\downloads\trang-20030619\trang-20030619\trang.jar
E:\temp\WMS_MS_Capabilities.dtd E:\temp\WMS_MS_Capabilities.xsd

    Once I had the xsd produced from trang I then used the jaxb xjc compiler to create the corresponding classes:

    H:\downloads\JAXB\jaxb-1.0.6\jaxb-ri-1.0.6\bin\xjc -d D:\workspace3.1\jaxb106\src -p com.mmc.wms E:\temp\WMS_MS_Capabilities.xsd

    Unfortunately there was a duplicate HTTP reference in this .xsd which prevented reading the xml files. I manually altered the .xsd to rename the DCPType type reference to HTTP2:

    <xs:element name="DCPType" type="HTTP2"/>
      <!-- Available HTTP request methods. One or both may be supported. -->
      <xs:complexType name="HTTP2">
          <xs:sequence>
            <xs:element ref="HTTP"/>
         </xs:sequence>
     </xs:complexType>

    After this small modification of the xsd I reran xjc and was then able to process a sample WMS capabilities xml. Now I finally had a workable jaxb class set for processing GetCapabilities xml.
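    With the generated classes in the com.mmc.wms package, reading a capabilities document takes only a few lines. Here is a minimal sketch; the generated root class name and the accessors in the comment are assumptions, since xjc derives them from the schema's element names:

    import java.io.File;
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.Unmarshaller;

    public class ReadCapabilities {
        public static void main(String[] args) throws Exception {
            // bind against the package xjc generated from WMS_MS_Capabilities.xsd
            JAXBContext jc = JAXBContext.newInstance("com.mmc.wms");
            Unmarshaller u = jc.createUnmarshaller();
            Object capabilities = u.unmarshal(new File("getcapabilities.xml"));
            // from here, walk the generated object tree down to the Layer elements
            // and read scale hints to decide layer visibility, e.g. (names assumed):
            // WMTMSCapabilities caps = (WMTMSCapabilities) capabilities;
            // for (Object layer : caps.getCapability().getLayer().getLayer()) { ... }
            System.out.println("parsed " + capabilities.getClass().getName());
        }
    }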

    Jaxb1.0.6 works but it requires a multitude of jar files along with a complex set of classes and implementation classes. The jaxb2.0 seemed to be much cleaner and I hope that the final release of this version will work better with the OGC 1.3.0 xsd when it becomes available in Geoserver.

    One interesting future development is an extension to the jaxb concept that is being used by the uDig developers called GTXML.
http://udig.refractions.net/confluence/display/OWS3/GTXML

    I have not used this yet but hope to in the future. It appears to add the capability of subsetting the bindings to just the elements needed. It also appears to have a streaming capability that solves the memory issues inherent to large GIS files in GML xml.