Spatial to the Browser

Fig 1 – example using Bing Maps v8 SpatialMath module

I first noticed the migration of spatial functions out to the browser back in 2014 with Morgan Herlocker’s GitHub project turf.js (see my Territorial Turf blog post).

Bing Maps version 8 SDK for web applications, released last summer, follows this trend, adding a number of useful modules that previously required custom programming or at least modification of open source projects.

Bing Maps v8
Bing Maps v8 API
Bing Maps v8 Modules

Among the many useful modules published with this version is Microsoft.Maps.SpatialMath. This release includes 25 geometry functions for operations such as intersection, buffer, convex hull, and distance. Leveraging these geometry functions lets us move analytic functions from a SQL backend or C# .Net service layer out front to the user’s browser.

Some useful side effects of this migration for projects using jsonp services include:

  • No need to host SQL data
  • No need to write a service layer
  • Dispersed computing that relieves compute loads on servers
  • No need to host in a Web Server such as IIS or Apache
  • Ability to publish in simple cloud storage with access to Edge Cache worldwide performance enhancement

As an example of this approach, let’s look at the extensive GIS services exposed by the District of Columbia, DCGIS_Apps and DCGIS_Data. In addition to visualizing some layers, it would be useful to detect parcels with certain spatial attributes, such as distance to street access and area filters. With this ability, alley-isolated parcels can be highlighted as potential alley development properties.

First, set up a map panel using Bing Maps v8, loading the ‘Microsoft.Maps.SpatialMath’ module:

    // Create the v8 map
    getMap: function () {

        app.initialCenter = new Microsoft.Maps.Location(38.89892, -76.9883);
        app.initialZoom = 18;
        var mapEl = document.getElementById("map"),
            mapOptions = {
                customizeOverlays: true,
                zoom: app.initialZoom,
                credentials: '<Your Bing Key goes here>',
                mapTypeId: Microsoft.Maps.MapTypeId.road,
                showBreadcrumb: false,
                showDashboard: true,
                showMapTypeSelector: true,
                enableClickableLogo: false,
                enableSearchLogo: false,
                center: app.initialCenter
            };

        var dcocto_url = '' // DC OCTO service base url elided
                     + app.initialCenter.latitude + ',' + app.initialCenter.longitude;
        $('#dcocto_frame').attr('src', dcocto_url);

        try {
   = new Microsoft.Maps.Map(mapEl, mapOptions);
            app.layers = {
                properties: new Microsoft.Maps.Layer({ zIndex: 500 }),
                streets: new Microsoft.Maps.Layer({ zIndex: 600 }),
                alleys: new Microsoft.Maps.Layer({ zIndex: 700 }),
                zones: new Microsoft.Maps.Layer({ zIndex: 800 }),
                interiors: new Microsoft.Maps.Layer({ zIndex: 900 })
            };
            app.showlayers = {
                properties: true,
                streets: true,
                alleys: false,
                zones: false,
                interiors: false
            };
            _.each(app.layers, function (layer) {
      ; // add each layer to the map
            });

            //viewchangeend handler
            Microsoft.Maps.Events.addHandler(, 'viewchangeend', app.getLayers);

            app.infobox = new Microsoft.Maps.Infobox(app.initialCenter, {
                title: 'Title',
                description: 'Description',
                visible: false
            });
            app.infobox.setMap(;
        } catch (e) {
            console.log(e);
        }
    },

District of Columbia DCGIS services provide jsonp callbacks, circumventing cross-origin security constraints and allowing simpler ajax calls without round trips through a proxy server layer.

if ( {
    var bds =;
    var urlProperty = '' // DCGIS property query url elided
        + bds.getWest() + ',' + bds.getSouth() + ',' + bds.getEast() + ',' + bds.getNorth()
        + '&geometryType=esriGeometryEnvelope&inSR=4326&outFields=*&outSR=4326&callback=?';
    $.getJSON(urlProperty, function (json) {
        // add returned parcel polygons to
    });
}

The LocationRect buffer method simplifies extending the street viewport, guaranteeing that all parcels in a city block are checked against surrounding street centerlines: bds.buffer(1.5);
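The idea can be sketched in plain JavaScript as scaling a bounding box about its center. This is a minimal stand-in, not the Bing Maps implementation; the `bufferBounds` name, the plain-object rect, and the sample coordinates are illustrative:

```javascript
// Expand a geographic bounding box about its center by a scale factor,
// so street queries cover somewhat more than the visible viewport.
function bufferBounds(bds, factor) {
    var halfW = (bds.east - bds.west) / 2;
    var halfH = (bds.north - bds.south) / 2;
    var cx = bds.west + halfW;   // center longitude
    var cy = bds.south + halfH;  // center latitude
    return {
        west: cx - halfW * factor,
        east: cx + halfW * factor,
        south: cy - halfH * factor,
        north: cy + halfH * factor
    };
}

// Roughly a city-block viewport in the DC sample area
var viewport = { west: -76.995, south: 38.895, east: -76.985, north: 38.903 };
var buffered = bufferBounds(viewport, 1.5);
```

A factor of 1.5 grows each side of the viewport by 50%, which is enough to pull in the street centerlines bounding a block whose parcels touch the viewport edge.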

if (app.showlayers.streets) {
    var bds =;
    var urlStreet = '' // DCGIS street centerline query url elided
        + 'f=json&returnGeometry=true&spatialRel=esriSpatialRelIntersects&maxAllowableOffset=0&geometry=' + bds.getWest()
        + ',' + bds.getSouth() + ',' + bds.getEast() + ',' + bds.getNorth()
        + '&geometryType=esriGeometryEnvelope&inSR=4326&outFields=*&outSR=4326&callback=?';
    $.getJSON(urlStreet, function (json) {
        // add returned street centerlines to app.layers.streets
    });
}

The algorithm for discovering interior parcels is greatly simplified using lodash and the new Bing Maps v8 spatial geometry functions. Notice that Geometry.distance handles a shortest distance calculation between a property polygon and an array of polylines, “allstreets.”

findInteriors: function () {
        var allproperties =;
        var allstreets = app.layers.streets.getPrimitives();
        var interiorProperties = [];
        _.each(allproperties, function (property) {
            var distance = Microsoft.Maps.SpatialMath.Geometry.distance(property, allstreets,
                             Microsoft.Maps.SpatialMath.DistanceUnits.Feet, true);
            var area = Microsoft.Maps.SpatialMath.Geometry.area(property, Microsoft.Maps.SpatialMath.AreaUnits.SquareFeet);
            if (distance > 100 && distance < 1000 && area > 450) {
                interiorProperties.push(property);
            }
        });

        _.each(interiorProperties, function (property) {
            property.setOptions({ fillColor: new Microsoft.Maps.Color(200, 255, 0, 0) });
        });
},

Function findInteriors checks the shortest distance in feet from each parcel to every street centerline. The filter then selects parcels whose shortest distance falls between 100 ft and 1000 ft and whose area exceeds 450 sq ft. Parcels meeting these filter criteria are filled red.

The Bing spatial distance function uses the higher accuracy Vincenty algorithm, but still performs reasonably well. Experiments using the less accurate Haversine option showed no significant performance difference in this case.
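For reference, the cheaper Haversine great-circle distance can be sketched as follows. This is a standalone illustration, not the Bing Maps implementation; the `haversineFeet` name and the sample points are made up for the example:

```javascript
// Haversine great-circle distance between two lat/lon points, in feet.
// Assumes a spherical earth (radius ~6371 km), which is what makes it
// cheaper but less accurate than Vincenty's ellipsoidal algorithm.
function haversineFeet(lat1, lon1, lat2, lon2) {
    var R = 6371000;                 // mean earth radius in meters
    var toRad = Math.PI / 180;
    var dLat = (lat2 - lat1) * toRad;
    var dLon = (lon2 - lon1) * toRad;
    var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
            Math.cos(lat1 * toRad) * Math.cos(lat2 * toRad) *
            Math.sin(dLon / 2) * Math.sin(dLon / 2);
    var meters = 2 * R * Math.asin(Math.sqrt(a));
    return meters * 3.28084;         // meters to feet
}

// Roughly one block north of the sample map center
var d = haversineFeet(38.89892, -76.9883, 38.8999, -76.9883);
```

At parcel-to-street scales the spherical error is small relative to the 100 ft / 1000 ft filter thresholds, which is why swapping algorithms changes little here.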

DC GIS Services limit the number of features in a request result to a maximum of 1000. At zoom level 18 the viewport always returns less than this maximum feature limit, but lower zooms can hit this limit and fail to return all parcels in the view. A warning is triggered when feature counts hit 1000 since the interior parcel algorithm will then have an incomplete result.
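That guard can be sketched as a simple check on the jsonp result. The 1000-feature cap comes from the DC GIS services; the function name and warning text here are illustrative:

```javascript
// DC GIS query results are capped at 1000 features; a full page means
// the result set was likely truncated and the interior-parcel analysis
// would run on incomplete data.
var MAX_FEATURES = 1000;

function checkFeatureLimit(json) {
    if (json.features && json.features.length >= MAX_FEATURES) {
        return 'Warning: feature limit reached - zoom in for complete results';
    }
    return null;
}

var warning = checkFeatureLimit({ features: new Array(1000) });
```

Surfacing the warning (rather than silently filtering a partial result) keeps the red-fill highlighting honest at lower zoom levels.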


The new Bing Maps v8 adds a great number of features that simplify web mapping app development. In addition, Bing Maps v8 improves performance by making better use of the HTML5 canvas and immediate mode graphics. This means a larger number of features can be added to a map before navigation begins to slow. I was able to test with up to 25,000 features with no significant problems using map zoom and pan navigation.

The Bing Maps SpatialMath module provides many useful spatial algorithms that previously required a server and/or SQL backend. The result is simpler web map applications that can be hosted directly in Azure blob storage.

Sample DCProperties code is available on GitHub.

The R Project for Maps

Fig 1 – interactive Leaflet choropleth of Census ACS household income $60k-$75k using Microsoft Open R

The mainstay of web mapping applications for the last couple of decades has been three-tier: Model – SQL, View – web UI, and Controller – server code. There are many variations on this theme: models residing in image tile pyramids, SQL Server, PostGIS, or Oracle; controller server code in Java, C#, or PHP. The visible action is on the viewer side. Html5, with ever expanding JavaScript libraries like jQuery, bootstrap, and angular.js, makes life interesting, while node.js is pushing JavaScript upstream to the controller.

For building end user applications it helps to know all three tiers and have at least one tool in each. With the right tools you can eventually accomplish just about anything spatially interesting. Emphasis is on the word “eventually.” SQL <=> C# <=> html5/JavaScript is very powerful, but extravagant for “one off” analytical work.

For ad hoc spatial work it was usually best to stick to a desktop application, such as one of the big dollar Arc___ variations or, better yet, something open source like QGIS. In the early days these generally consisted of modular C/C++ functions threaded together with an all-purpose scripting language. If you wanted to get a little closer to the geo engine, knowledge of a scripting language (PHP, TCL, Python, or Ruby) helped to script modular toolkits like GDAL/OGR, OSSIM, GEOS, or GMT. This all works fine, except for learning and relearning often arcane syntax, while repeatedly discovering and reading data documentation on various public resources from Census, USGS, NOAA, NASA, JPL … you get the idea.

R changes things in the geospatial world. The R project originated as a modular statistics and graphics toolkit. Unless you happen to be a true math prodigy, statistics are best visualized graphically. With powerful graphics libraries, R has evolved into a useful platform for ad hoc spatial analysis.

Coupled with an IDE such as RStudio, or the new Microsoft R Tools for Visual Studio, R wraps a large stable of component libraries into a script interpreter environment, ideal for “one off” analysis. Although learning arcane syntax is still a prerequisite, there is at least a universal environment with a really large contributor community. You can think of it as an open source replacement for Tableau or Power BI, but without proprietary limitations.

Example: networkD3 R library for creating D3 JavaScript network graphs.

# only a few lines of script
library(networkD3)
data(MisLinks, MisNodes)
forceNetwork(Links = MisLinks, Nodes = MisNodes, Source = "source",
             Target = "target", Value = "value", NodeID = "name",
             Group = "group", opacity = 0.4)

Community contributions are found in CRAN, the Comprehensive R Archive Network for the R programming language. A search of CRAN or MRAN (Microsoft R Application Network) for the term “spatial” yields a list of 145 R libraries.

Example: dygraphs R library for creating interactive charts.

# only a few lines of script
library(dygraphs)
dygraph(nhtemp, main = "New Haven Temperatures") %>%
  dyRangeSelector(dateWindow = c("1920-01-01", "1960-01-01"))

Here are just a few samples of CRAN libraries useful for spatial analysis:

library(rgdal)  # reading spatial files with gdal
library(ggmap)  # simple mapping and more
library(raster)  # defining extents and raster processing
                 #   brick - raster cube objects useful for multispectral operations
                 #   stack - multilayer raster manipulation
library(sp)  # working with spatial objects
library(leaflet)  # interactive web mapping using Leaflet
library(rgeos)  # R GEOS wrapper
library(tigris)  # downloading Census TIGER geography
library(FedData)  # downloading federal data NED, NHD, SSURGO, GHCN
library(acs)  # tabular census data (American Community Survey) ACS, SF1, SF3
library(UScensus2010)  # spatial and demographic Census 2010 data county/tract/blkgrp/blk
library(RColorBrewer)  # color palettes for thematic mapping

For example, tigris is a useful library for reading US Census TIGER files. With just a couple lines of R scripting you can zoom around a polygonal plot of US Census urban areas. The tigris library handles all the details of obtaining the TIGER polygons and loading them into local memory, while the leaflet library handles creating the polygons and displaying them over a default Leaflet tile map.

ua <- urban_areas(cb = TRUE)
ua %>% leaflet() %>% addTiles() %>% addPolygons(popup = ~NAME10)

Fig 2 – RStudio Interactive Leaflet plot of Census TIGER urban area extracted with tigris

Fig 3 – RStudio script of Census ACS tract household income percentage for $60k-$75K

These samples follow examples found in Zev Ross’s blog posts which contain a wealth of scripts using R for spatial analytics, including these posts on using FedData and rgdal.

Microsoft R

Microsoft recently entered the R world with several enhanced R tools, including:
Microsoft R Open
RTVS R Tools for Visual Studio
Microsoft R Server
SQL Server R Services
MRAN Microsoft R Application Network
Microsoft Azure R Server
Microsoft R Server for Hadoop

Apparently Data Science is a growth industry and Microsoft has an interest in providing useful tools beyond Power BI.

Microsoft R Open

Free Microsoft version of the R script engine with a couple of enhancements:
• Intel enhanced math library
• multi-core support
• multithreading

Fig 4 – slide from Derek Norton webinar on R Server showing relative performance boost with enhanced Microsoft R Open

RTVS R Tools for Visual Studio
Microsoft R IDE in Visual Studio using the Data Science R settings. Users of Visual Studio will find all the familiar debug stepping, variable explorer, and IntelliSense editing they use for other development languages.

Microsoft R Server
Licensed enterprise R service that scales past in-memory data limitations using parallel chunked data streams.

Fig 5 – slide from Derek Norton webinar showing R Server scale enhancements

SQL Server 2016 R Services

SQL Server R Services – a data ETL and visualization tool inside SQL Server. The T-SQL R interface puts the database next to R code on the same server.

sp_execute_external_script embeds R code: it receives inputs, passes them to the external R runtime, and returns the R results. Invoke this stored procedure to run R code from T-SQL.

MRAN Microsoft R Application Network

CRAN fixed-date snapshots allow shared R code to point at compatible library versions, providing checkpoint reproducibility.

Fig 6 – RTVS R Tools for Visual Studio 2015 Leaflet demographic script

Example R Leaflet demographic script (ref Fig 1 above):

library(tigris)  # TIGER data
library(acs)     # ACS data
library(stringr) # to pad fips codes
library(dplyr)   # data manipulation
library(leaflet) # interactive mapping

# Colorado Front Range counties
counties <- c(1, 5, 13, 31, 35, 39, 41, 59)
tracts <- tracts(state = 'CO', county = counties, cb = TRUE)

api.key.install(key = "<insert your own api key here>")
geo <- geo.make(state = c("CO"),
                county = counties, tract = "*")

income <- acs.fetch(endyear = 2012, span = 5, geography = geo,
                  table.number = "B19001", col.names = "pretty")
attr(income, "acs.colnames")
##  [1] "Household Income: Total:"
## [12] "Household Income: $60,000 to $74,999"  

income_df <- data.frame(paste0(str_pad(income@geography$state, 2, "left", pad = "0"),
                               str_pad(income@geography$county, 3, "left", pad = "0"),
                               str_pad(income@geography$tract, 6, "left", pad = "0")),
                        income@estimate[, c("Household Income: Total:",
                                           "Household Income: $60,000 to $74,999")],
                        stringsAsFactors = FALSE)

income_df <- select(income_df, 1:3)
rownames(income_df) <- 1:nrow(income_df)
names(income_df) <- c("GEOID", "total", "income60kTo75k")
income_df$percent <- 100 * (income_df$income60kTo75k / income_df$total)

income_merged <- geo_join(tracts, income_df, "GEOID", "GEOID")

popup <- paste0("GEOID: ", income_merged$GEOID, "<br>", "Percent of Households $60k-$75k: ", round(income_merged$percent, 2))
pal <- colorNumeric(
  palette = "YlGnBu",
  domain = income_merged$percent)

incomemap <- leaflet() %>%
  addProviderTiles("CartoDB.Positron") %>%
  addPolygons(data = income_merged,
              fillColor = ~pal(percent),
              color = "#b2aeae", # you need to use hex colors
              fillOpacity = 0.7,
              weight = 1,
              smoothFactor = 0.2,
              popup = popup) %>%
  addLegend(pal = pal,
            values = income_merged$percent,
            position = "bottomright",
            title = "Percent of Households<br>$60k-$75k",
            labFormat = labelFormat(suffix = "%"))


Hillshade example using public SRTM 90 data:

    library(raster)  # getData, terrain, hillShade
    alt <- getData('alt', country = 'ITA')
    slope <- terrain(alt, opt = 'slope')
    aspect <- terrain(alt, opt = 'aspect')
    hill <- hillShade(slope, aspect, 40, 270)

    leaflet() %>% addProviderTiles("CartoDB.Positron") %>%
      addRasterImage(hill, colors = grey(0:100 / 100), opacity = 0.6)

Fig 7 – RTVS R Tools for Visual Studio 2015 Leaflet hillshading image


R provides lots of interesting modules that help with spatial analytics. The script engine makes it easy to perform ad hoc visualization and publish the results online. However, there are limitations in performance and extents that make it more of a competitor to desktop GIS products or the newer commercial data visualizers like Tableau or Power BI. For public facing web applications with generalized extents, three-tier performance using SQL + server code + web UI still makes the most sense.

The advent of Microsoft R Server and SQL Server R Services adds scaling performance to make R solutions more competitive with the venerable three-tier approach. It will be interesting to see how developers make use of SQL Server R Services. As a method of adding raster functionality to SQL Server, R sp_execute_external_script overlaps somewhat with PostGIS Raster. Exploring SQL Server 2016 R Services must await a future post.

Example: threejs R library with world flight data

Demographic Landscapes – X3D/Census

Fig 1 – X3DOM view of tabblock census polygons demographics P0040003 density

PostGIS 2.2

PostGIS 2.2 is due for release sometime in August of 2015. Among other things, PostGIS 2.2 adds some interesting 3D functions via SFCGAL. ST_Extrude in tandem with ST_AsX3D offers a simple way to view a census polygon map in 3D. With these functions built into PostGIS, queries returning x3d text are possible.

Sample PostGIS 2.2 SQL Query:

SELECT ST_AsX3D(ST_Extrude(ST_SimplifyPreserveTopology(poly.GEOM, 0.00001),
                           0, 0, float4(sf1.P0040003) / 19352), 10) as geom,
       ST_Area(geography(poly.geom)) * 0.0000003861 as area,
       sf1.P0040003 as demographic
FROM Tract poly
JOIN sf1geo geo ON geo.geoid = poly.geoid
JOIN sf1_00003 sf1 ON geo.stlogrecno = sf1.stlogrecno
WHERE geo.geocomp='00' AND geo.sumlev = '140'
AND ST_Intersects(poly.GEOM, ST_GeometryFromText('POLYGON((-105.188236232418 39.6533019504969,-105.051581442071 39.6533019504969,-105.051581442071 39.7349599496294,-105.188236232418 39.7349599496294,-105.188236232418 39.6533019504969))', 4269))

Sample x3d result:

<IndexedFaceSet  coordIndex='0 1 2 3 -1 4 5 6 7 -1 8 9 10 11 -1 12 13 14 15 -1 16 17 18 19 -1 20 21 22 23'>
<Coordinate point='-105.05323 39.735185 0 -105.053212 39.74398 0 -105.039308 39.743953 0 -105.0393 39.734344 0 -105.05323 39.735185 0.1139417115 -105.0393 39.734344 0.1139417115 -105.039308 39.743953 0.1139417115 -105.053212 39.74398 0.1139417115 -105.05323 39.735185 0 -105.05323 39.735185 0.1139417115 -105.053212 39.74398 0.1139417115 -105.053212 39.74398 0 -105.053212 39.74398 0 -105.053212 39.74398 0.1139417115 -105.039308 39.743953 0.1139417115 -105.039308 39.743953 0 -105.039308 39.743953 0 -105.039308 39.743953 0.1139417115 -105.0393 39.734344 0.1139417115 -105.0393 39.734344 0 -105.0393 39.734344 0 -105.0393 39.734344 0.1139417115 -105.05323 39.735185 0.1139417115 -105.05323 39.735185 0'/>
</IndexedFaceSet>

x3d format is not directly visible in a browser, but it can be packaged into x3dom for use in any WebGL enabled browser. Packaging x3d into an x3dom container allows returning an .x3d mime type, model/x3d+xml, for x3dom inline content html.


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE X3D PUBLIC "ISO//Web3D//DTD X3D 3.1//EN" "">
<X3D  profile="Immersive" version="3.1" xsd:noNamespaceSchemaLocation="" xmlns:xsd="">
				<viewpoint def="cam"
				centerofrotation="-105.0314177 39.73108357 0"
				orientation="-105.0314177,39.73108357,0.025, -0.25"
				position="-105.0314177 39.73108357 0.025"
				zfar="-1" znear="-1">
				</viewpoint>
<material DEF='color' diffuseColor='0.6 0 0' specularColor='1 1 1'></material>
<IndexedFaceSet  coordIndex='0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 -1 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 -1 38 39 40 41 -1 42 43 44 45 -1 46 47 48 49 -1 50 51 52 53 -1 54 55 56 57 -1 58 59 60 61 -1 62 63 64 65 -1 66 67 68 69 -1 70 71 72 73 -1 74 75 76 77 -1 78 79 80 81 -1 82 83 84 85 -1 86 87 88 89 -1 90 91 92 93 -1 94 95 96 97 -1 98 99 100 101 -1 102 103 104 105 -1 106 107 108 109 -1 110 111 112 113'>
<Coordinate point='-105.036957 39.733962 0 -105.036965 39.733879 0 -105.036797 39.734039 0 -105.036545 39.734181 0 -105.036326 39.734265 0 -105.036075 39.734319 0 -105.035671 39.734368 0 -105.035528 39.734449 0 -105.035493 39.73447 0 -105.035538 39.734786 0 -105.035679 39.73499 0 -105.035703 39.734927 0 -105.035731 39.734902 0 -105.035956 39.734777 0 -105.03643 39.734623 0'/>
</IndexedFaceSet>


Inline x3dom packaged into html:

    <meta http-equiv="X-UA-Compatible" content="IE=edge"/>
    <title>TabBlock Pop X3D</title>
	<script src=""></script>
    <script type='text/javascript' src=''> </script>
    <link rel='stylesheet' type='text/css' href=''/>
<h2 id='testTxt'>TabBlock Population</h2>

<div id="content">
<x3d width='1000px' height='600px' showStat="true" showlog="false">
        <navigationInfo id="head" headlight='true' type='"EXAMINE"'></navigationInfo>
        <background  skyColor="0 0 0"></background>
		<directionallight id="directional"
				intensity="0.5" on="TRUE" direction="0 -1 0" color="1 1 1" zNear="-1" zFar="-1">
		</directionallight>

		<Inline id="x3dContent" nameSpaceName="blockpop" mapDEFToID="true"></Inline>
</x3d>
</div>


In this case I added a non-standard WMS service adapted to add a new kind of request, GetX3D.

x3d is an open xml standard for leveraging immediate mode WebGL graphics in the browser. x3dom is an open source JavaScript library for translating x3d xml into WebGL in the browser.

“X3DOM (pronounced X-Freedom) is an open-source framework and runtime for 3D graphics on the Web. It can be freely used for non-commercial and commercial purposes, and is dual-licensed under MIT and GPL license.”

Why X3D?

I’ll admit it’s fun, but novelty may not always be helpful. Adding elevation does show demographic values in finer detail than the coarse classification used by a thematic color range. This experiment did not delve into the bivariate world, but multiple value modelling is possible using color and elevation, with perhaps fewer interpretive misgivings than a bivariate thematic color scheme.

However, pondering the visionary, why should analysts be left out of the upcoming immersive world? If Oculus Rift, HoloLens, or Google Cardboard are part of our future, analysts will want to wander through population landscapes, exploring avenues of millennials and valleys of the aged. My primitive experiments show only a bit of demographic landscape, but eventually demographic terrain streams will be layer choices available to web users for exploration.

Demographic landscapes like the census are still grounded, tethered to real objects. The towering polygon on the left recapitulates the geophysical apartment highrise; a looming block of 18-22 year olds reflects a military base. But models potentially float away from geophysical grounding. Facebook networks are less about physical location than network relation. Abstracted models of relationship are also subject to helpful visualization in 3D. Unfortunately we have only a paltry few dimensions to work with, ruling out value landscapes of higher dimensions.

Fig 3 – P0010001 Jenks Population

Some problems

For some reason IE11 always reverts to software rendering instead of using the system’s GPU. Chrome provides hardware rendering with consequent smoother user experience. Obviously the level of GPU support available on the client directly correlates to maximum x3dom complexity and user experience.

IE11 x3dom logs show this error:

ERROR: WebGL version WebGL 0.94 lacks important WebGL/GLSL features needed for shadows, special vertex attribute types, etc.!
INFO: experimental-webgl context found
Vendor: Microsoft Microsoft, Renderer: Internet Explorer Intel(R) HD Graphics 3000, Version: WebGL 0.94, ShadingLangV.:
WebGL GLSL ES 0.94, MSAA samples: 4
Extensions: WEBGL_compressed_texture_s3tc, OES_texture_float, OES_texture_float_linear, EXT_texture_filter_anisotropic,
      OES_standard_derivatives, ANGLE_instanced_arrays, OES_element_index_uint, WEBGL_debug_renderer_info
INFO: Initializing X3DCanvas for [x3dom-1434130193467-canvas]
INFO: Creating canvas for (X)3D element...
INFO: Found 1 X3D and 0 (experimental) WebSG nodes...
INFO: X3DOM version 1.6.2, Revison 8f5655cec1951042e852ee9def292c9e0194186b, Date Sat Dec 20 00:03:52 2014 +0100

In some cases the ST_Extrude result is rendered to odd surfaces with multiple artifacts. Here is an example with low population in eastern Colorado. Perhaps the extrusion surface breaks down due to tessellation issues on complex polygons with zero or near zero height. This warrants further experimentation.

Fig 2 – rendering artifacts at near zero elevations

The performance complexity curve on a typical client is fairly low. It’s tricky to keep model sizes small enough for acceptable performance. IE11 is especially vulnerable to collapse due to software rendering. In this experiment the x3d view is limited to intersections with the extent of the selected territory, computed using turf.js.

var extent = turf.extent(app.territory);

In addition, making use of PostGIS ST_SimplifyPreserveTopology helps reduce polygon complexity.
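The vertex-reduction idea behind such simplification can be sketched with a plain Douglas-Peucker pass. This is only an illustration of the family of algorithm involved; ST_SimplifyPreserveTopology additionally guards against self-intersection and topology collapse:

```javascript
// Douglas-Peucker line simplification: drop points whose perpendicular
// distance from the chord of their segment is below a tolerance.
function simplify(points, tolerance) {
    if (points.length < 3) { return points.slice(); }
    var first = points[0], last = points[points.length - 1];
    var maxDist = 0, index = 0;
    for (var i = 1; i < points.length - 1; i++) {
        var d = perpendicularDistance(points[i], first, last);
        if (d > maxDist) { maxDist = d; index = i; }
    }
    if (maxDist <= tolerance) { return [first, last]; }
    // Keep the farthest point and recurse on both halves
    var left = simplify(points.slice(0, index + 1), tolerance);
    var right = simplify(points.slice(index), tolerance);
    return left.slice(0, -1).concat(right);
}

// Perpendicular distance from point p to the line through a and b
function perpendicularDistance(p, a, b) {
    var dx = b[0] - a[0], dy = b[1] - a[1];
    var len = Math.sqrt(dx * dx + dy * dy);
    if (len === 0) { return Math.hypot(p[0] - a[0], p[1] - a[1]); }
    return Math.abs(dy * p[0] - dx * p[1] + b[0] * a[1] - b[1] * a[0]) / len;
}

// A noisy 5-point line collapses to its endpoints at a coarse tolerance
var line = [[0, 0], [1, 0.01], [2, -0.01], [3, 0.01], [4, 0]];
var simplified = simplify(line, 0.1);
```

Fewer vertices means smaller x3d payloads and fewer triangles for the client GPU, which is exactly what keeps IE11’s software renderer from collapsing.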

Xml formats like x3d tend to be verbose, and newer lightweight frameworks prefer a json interchange. Json for WebGL is not officially standardized, but there are a few resources available:

Lighthouse –
Three.js – JSONLoader BabylonLoader
Babylon.js – SceneLoader BABYLON.SceneLoader.ImportMesh


PostGIS 2.2 SFCGAL functions offer a new window into data landscapes. X3d/x3dom is an easy way to leverage WebGL in the browser.

A future project may look at converting the x3d output of PostGIS into a json mesh. This would enable use of other client libraries like Babylon.js.
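The core of such a conversion can be sketched in plain JavaScript: parse the x3d Coordinate point string and the coordIndex into flat arrays of the kind Three.js or Babylon.js mesh loaders consume. The `x3dToJsonMesh` name, the output shape, and the sample quad are illustrative, not a standard format:

```javascript
// Convert x3d IndexedFaceSet attributes into a simple JSON mesh:
// a flat vertex array plus face index arrays split on the -1 separators.
function x3dToJsonMesh(pointAttr, coordIndexAttr) {
    var vertices = pointAttr.trim().split(/\s+/).map(Number);
    var faces = [];
    var current = [];
    coordIndexAttr.trim().split(/\s+/).map(Number).forEach(function (i) {
        if (i === -1) {            // -1 terminates a face in x3d
            faces.push(current);
            current = [];
        } else {
            current.push(i);
        }
    });
    if (current.length) { faces.push(current); }
    return { vertices: vertices, faces: faces };
}

// A tiny quad in the style of the PostGIS ST_AsX3D output above
var mesh = x3dToJsonMesh(
    '-105.05 39.73 0 -105.05 39.74 0 -105.04 39.74 0 -105.04 39.73 0',
    '0 1 2 3 -1');
```

From there the vertex and face arrays map fairly directly onto a Three.js BufferGeometry or a Babylon.js mesh, with the quads triangulated as needed.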

Fig 4 – P0040003 Quantile Density

GeoEpistemology uh where’s that?

Google Arunachal Pradesh

Fig 1 – Google Arunachal Pradesh as represented to .in, not to be confused with Arunachal Pradesh as represented to .cn

In the background of the internet lies this ongoing discussion of epistemology. It’s an important discussion, with links to crowd source algorithms, big data, and even AI. Perhaps it’s a stretch to include maps, which after all aspire to represent “exactitude in science,” or JTB, Justified True Belief. On the one hand we have the prescience of Jorge Luis Borges, concisely represented by his single paragraph short story.

Del rigor en la ciencia

… En aquel Imperio, el Arte de la Cartografía logró tal Perfección que el mapa de una sola Provincia ocupaba toda una Ciudad, y el mapa del Imperio, toda una Provincia. Con el tiempo, esos Mapas Desmesurados no satisficieron y los Colegios de Cartógrafos levantaron un Mapa del Imperio, que tenía el tamaño del Imperio y coincidía puntualmente con él. Menos Adictas al Estudio de la Cartografía, las Generaciones Siguientes entendieron que ese dilatado Mapa era Inútil y no sin Impiedad lo entregaron a las Inclemencias del Sol y de los Inviernos. En los desiertos del Oeste perduran despedazadas Ruinas del Mapa, habitadas por Animales y por Mendigos; en todo el País no hay otra reliquia de las Disciplinas Geográficas.

-Suárez Miranda: Viajes de varones prudentes,
Libro Cuarto, Cap. XLV, Lérida, 1658

translation by Andrew Hurley

On Exactitude in Science

…In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.

-Suarez Miranda,Viajes de varones prudentes,
Libro IV,Cap. XLV, Lerida, 1658

As Jorge Luis Borges so aptly implies, the issue of epistemology swings between scientific exactitude and cultural fondness, an artistic reference to the unsettling observations of Thomas Kuhn’s paradigm shiftiness, The Structure of Scientific Revolutions.

Precession of Simulacra

On the other hand, Jean Baudrillard would prefer an inversion of Borges in his Simulacra and Simulation:

“The territory no longer precedes the map, nor does it survive it. It is nevertheless the map that precedes the territory—precession of simulacra—that engenders the territory”

In a less postmodern sense we can point to the recent spectacle of Nicaraguan sovereignty extending into Costa Rica, provoked by the preceding Google Map error, as a very literal “precession of simulacrum.” See details in Wired.

We now have map border wars and a crafty Google expedient of representing Arunachal Pradesh according to client language. China sees one thing, India another, and all are happy. So maps are not exempt from geopolitical machinations any more than Wikipedia. Of course the secular bias of Google invents an agnostic viewpoint of neither here nor there, in its course presuming a superior vantage and relegating “simplistic” nationalism to a subjected role of global ignorance. Not unexpectedly, global corporations wield power globally, and therefore their interests lie supranationally.

Perhaps in a Jean Baudrillard world the DPRK could disappear for ROK viewers and vice versa resolving a particularly long lived conflict.

Filter Bubbles

We are all more or less familiar with the filter bubble phenomenon. Your every wish is my command.

“The best books, he perceived, are those that tell you what you know already.”
George Orwell, 1984 p185

The consumer is king and this holds true in search and advertising as well as in Aladdin’s tale. Search filters at the behest of advertising money work very well at fencing us into smaller and smaller bubbles of our own desire. The danger of self-referential input is well known as narcissism. We see this at work in contextual map bubbles displaying only relevant points of interest from past searches.

With Google Glass, self-referential virtual objects can literally mask any objectionable reality. Should a business desire to pop a filter bubble, only a bit more money is required. In the end, map POI algorithms dictate desire by limiting context. Are “personalized” maps a hint of precession of simulacra, or simply one more example of rampant technical narcissism?


In the political realm elitists such as Cass Sunstein want to nudge us, which is a yearning of all mildly totalitarian states. Although cognitive infiltration will do in a pinch, “a boot stamping on a human face” is reserved for a last resort. How might the precession of simulacra assist the fulfillment of Orwellian dreams?

Naturally, political realities are less interested in our desires than their own. This is apparently a property of organizational ascendancy. Whether corporations or state agencies, at some point of critical mass organizations gain a life of their own. The organization eventually becomes predatory, preying on those they serve for survival. Political information bubbles are less about individual desires than survival of the state. To be blunt “nudge” is a euphemism for good old propaganda.


Fig 2 - Propaganda Map - more of a shove than a nudge

The line from Sunstein to a Clinton, of either gender, is short. Hillary Clinton has long decried the chaotic democracy of page ranked search algorithms. After noting that any and all ideas, even uncomfortable truths, can surface virally in a Drudge effect, Hillary would insist “we are all going to have to rethink how we deal with the Internet.” At least she seems to have thought creatively about State Dept emails. Truth is more than a bit horrifying to oligarchs of all types, as revealed by the treatment of Edward Snowden, Julian Assange, and Barrett Brown.

Truth Vaults

Enter Google’s aspiration to Knowledge-Based Trust: Estimating the Trustworthiness of Web Sources. In other words a “truth page ranking” to supplant the venerable but messily democratic “link page ranking.” Why, after all, leave discretion or critical thought to the unqualified masses? For the history minded, this is rather reminiscent of pre-reformation exercise of Rome’s magisterium. We may soon see a Google Magisterium defining internet truth, albeit subject to FCC review.

“The net may be “neutral” but the FCC is most certainly not.”

According to Google: “Nothing but the truth.” I mean who could object? Well there seem to be some doubters among the hoi polloi. How then does this Google epistemology actually work? What exactly is Justified True Belief in Google’s Magisterium and how much does it effectively overlap with the politically powerful?

Leaving aside marginal Gettier-cases there are some pressing questions about the mechanics of KBT. In Google’s KBT basement is this thing called Knowledge Vault – Knowledge Graph.

“The fact extraction process we use is based on the Knowledge Vault (KV) project.”

“Knowledge Vault has pulled in 1.6 billion facts to date. Of these, 271 million are rated as “confident facts”, to which Google’s model ascribes a more than 90 per cent chance of being true. It does this by cross-referencing new facts with what it already knows.”

“Google’s Knowledge Graph is currently bigger than the Knowledge Vault, but it only includes manually integrated sources such as the CIA Factbook.”

“This is the most visionary thing,” says Suchanek. “The Knowledge Vault can model history and society.”

Per Jean Baudrillard read “model” as a verb rather than a thing. Google (is it possible to do this unwittingly?) arrogates a means to condition the present, in order to model the past, to control our future, to paraphrase the Orwellian syllogism.

“Who controls the past controls the future. Who controls the present controls the past.”
George Orwell, 1984

Not to be left behind MSNBC’s owner, Microsoft, harbors similar aspirations:

“LazyTruth developer Matt Stempeck, now the director of civic media at Microsoft New York, wants to develop software that exports the knowledge found in fact-checking services such as Snopes, PolitiFact, and so that everyone has easy access to them.”

And National Geographic too, all in for a new science: The Consensus of “Experts”

“Everybody should be questioning,” says McNutt. “That’s a hallmark of a scientist. But then they should use the scientific method, or trust people using the scientific method, to decide which way they fall on those questions.”

Ah yes, the consensus of “Experts,” naturally leading to the JTB question: whose experts? The IPCC may do well to reflect on Copernicus with regard to the ancien régime and scientific consensus.

Snopes duo

Fig 3 - Snopes duo in the Truth Vault at Google and Microsoft? Does the cat break tie votes?

Google’s penchant for metrics and algorithmic “neutrality” neatly papers over a Mechanical Turk or two in the vault, so to speak.

Future of simulacra

In a pre-digital Soviet era, map propaganda was an expensive proposition. Interestingly today Potemkin maps are an anachronistic cash cow with only marginal propaganda value. Tomorrow’s Potemkin maps according to Microsoft will be much more entertaining but also a bit creepy if coupled to brain interfaces. Brain controls are inevitably a two way street.

“Microsoft HoloLens understands your movements, vision, and voice, enabling you to interact with content and information in the most natural way possible.”

The only question is, who is interacting with content in the most un-natural way possible in the Truth Vault?

Will InfoCrafting at the brain interface be the next step for precession of simulacra?


Fig 5 – Leonie and $145 million worth of “InfoCrafting” COINTEL with paid trolls and sock puppet armies


Is our culture leaning toward globally agnostic maps of infinite plasticity, one world per person? Jean Baudrillard would likely presume the Google relativistic map is the order of the day, where the precession of simulacra induces a customized world, a kind of propagandistic nirvana tailored to each individual.

But just perhaps, the subtle art of Jorge Luis Borges would speak to a future of less exactitude:

“still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.”

I suppose to be human is to straddle exactitude and art, never sure whether to land on truth or on beauty. Either way, we do well to Beware of Truth Vaults!

WebGL with a little help from Babylon.js

Most modern browsers now support the HTML5 WebGL standard: Internet Explorer 11+, Firefox 4+, Google Chrome 9+, and Opera 12+. One of the latest to the party is IE11.


Fig 2 – html5 test site showing WebGL support for IE11

WebGL support means that GPU power is available to javascript developers in supporting browsers. GPU technology fuels the $46.5 billion “vicarious life” industry. Video gaming surpasses even Hollywood movie tickets in annual revenue, though this projection shows the curve falling by 2019. It is hard to say why the decline; is it possibly an economic side effect of too much vicarious living? The relative merits of passive versus active forms of “vicarious living” are debatable, but as long as technology chases these vast sums of money, GPU geometry pipeline performance will continue to improve year over year.

WebGL exposes immediate mode graphics pipelines for fast 3D transforms, lighting, shading, animations, and other amazing stuff. GPU induced endorphin bursts do have their social consequences. Apparently, Huxley’s futuristic vision has won out over Orwell’s, at least in internet culture.

“In short, Orwell feared that what we hate will ruin us. Huxley feared that what we love will ruin us.”

Neil Postman Amusing Ourselves to Death.

Aside from the Soma-like addictive qualities of game playing, game creation is actually a lot of work. Setting up WebGL scenes with objects, textures, shaders, transforms … is not a trivial task, which is where Dave Catuhe’s Babylon.js framework comes in. Dave has been building 3D engines for a long time. In fact I’ve played with some of Dave’s earlier efforts in ye olde Silverlight days of yore.

“I am a real fan of 3D development. Since I was 16, I spent all my spare time creating 3d engines with various technologies (DirectX, OpenGL, Silverlight 5, pure software, etc.). My happiness was complete when I discovered that Internet Explorer 11 has native support for WebGL. So I decided to write once again a new 3D engine but this time using WebGL and my beloved JavaScript.”

Dave Catuhe Eternal Coding

Dave’s efforts improve with each iteration, and Babylon.js is a wonderfully powerful yet simple-to-use javascript WebGL engine; its usefulness climbs faster than its complexity. To be sure, a full-fledged gaming environment is still a lot of work, and with babylon.js much of that heavy lifting falls to the art design side. From a mapping perspective I’m happy to forego the gaming, but still enjoy some impressive 3D map building with low effort.

In order to try out babylon.js I went back to an old standby, NASA Earth Observation data. NASA has kindly provided an OGC WMS server for their earth data. Brushing off some old code I made use of babylon.js to display NEO data on a rotating globe.

Babylon.js has innumerable samples and tutorials, which make learning easy for those of us less inclined to read manuals. This playground is an easy way to experiment: Babylon playground

The Babylon.js engine is used to create a scene, which is then handed off to engine.runRenderLoop. From a mapping perspective, most of the interesting stuff happens in createScene.

Here is a very basic globe:

<!DOCTYPE html>
<html xmlns="">
<head>
    <title>Babylon.js Globe</title>
    <script src=""></script>
    <style>
        html, body {
            overflow: hidden;
            width: 100%;
            height: 100%;
            margin: 0;
            padding: 0;
        }
        #renderCanvas {
            width: 100%;
            height: 100%;
            touch-action: none;
        }
    </style>
</head>
<body>
    <canvas id="renderCanvas"></canvas>
    <script>
        var canvas = document.getElementById("renderCanvas");
        var engine = new BABYLON.Engine(canvas, true);

        var createScene = function () {
            var scene = new BABYLON.Scene(engine);

            // Light
            var light = new BABYLON.HemisphericLight("HemiLight", new BABYLON.Vector3(-2, 0, 0), scene);

            // Camera
            var camera = new BABYLON.ArcRotateCamera("Camera", -1.57, 1.0, 200, BABYLON.Vector3.Zero(), scene);
            camera.attachControl(canvas, true);

            //Creation of a sphere
            //(name of the sphere, segments, diameter, scene)
            var sphere = BABYLON.Mesh.CreateSphere("sphere", 100.0, 100.0, scene);
            sphere.position = new BABYLON.Vector3(0, 0, 0);
            sphere.rotation.x = Math.PI;

            //Add material to sphere
            var groundMaterial = new BABYLON.StandardMaterial("mat", scene);
            groundMaterial.diffuseTexture = new BABYLON.Texture('textures/earth2.jpg', scene);
            sphere.material = groundMaterial;

            // Animations - rotate earth
            var alpha = 0;
            scene.beforeRender = function () {
                sphere.rotation.y = alpha;
                alpha -= 0.01;
            };

            return scene;
        };

        var scene = createScene();

        // Register a render loop to repeatedly render the scene
        engine.runRenderLoop(function () {
            scene.render();
        });

        // Watch for browser/canvas resize events
        window.addEventListener("resize", function () {
            engine.resize();
        });
    </script>
</body>
</html>
Fig 3 – rotating Babylon.js globe

Add one line for a 3D effect using a normal (bump) map texture.

groundMaterial.bumpTexture = new BABYLON.Texture('textures/earthnormal2.jpg', scene);

Fig 4 – rotating Babylon.js globe with normal (bump) map texture

The textures applied to BABYLON.Mesh.CreateSphere required some transforms to map correctly.


Fig 5 – texture images require img.RotateFlip(RotateFlipType.Rotate90FlipY);

Without this image transform the resulting globe is more than a bit warped. It reminds me of a Pangea timeline gone mad.


Fig 6 – globe with no texture image transform

Updating our globe texture skin requires a simple proxy that performs the img.RotateFlip after getting the requested NEO WMS image.

        public Stream GetMapFlip(string wmsurl)
        {
            string message = "";
            try
            {
                HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(new Uri(wmsurl));
                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                {
                    if (response.StatusDescription.Equals("OK"))
                    {
                        using (Image img = Image.FromStream(response.GetResponseStream()))
                        {
                            //rotate image 90 degrees, flip on Y axis
                            img.RotateFlip(RotateFlipType.Rotate90FlipY);
                            using (MemoryStream memoryStream = new MemoryStream())
                            {
                                img.Save(memoryStream, System.Drawing.Imaging.ImageFormat.Png);
                                WebOperationContext.Current.OutgoingResponse.ContentType = "image/png";
                                return new MemoryStream(memoryStream.ToArray());
                            }
                        }
                    }
                    else message = response.StatusDescription;
                }
            }
            catch (Exception e)
            {
                message = e.Message;
            }
            ASCIIEncoding encoding = new ASCIIEncoding();
            Byte[] errbytes = encoding.GetBytes("Err: " + message);
            return new MemoryStream(errbytes);
        }

With the texture in hand, the globe can be updated, setting hasAlpha to true:

var overlayMaterial = new BABYLON.StandardMaterial("mat0", nasa.scene);
var nasaImageSrc = Constants.ServiceUrlOnline + "/GetMapFlip?url=" + nasa.image + "%26BGCOLOR=0xFFFFFF%26TRANSPARENT=TRUE%26SRS=EPSG:4326%26BBOX=-180.0,-90,180,90%26width=" + nasa.width + "%26height=" + nasa.height + "%26format=image/png%26Exceptions=text/xml";
overlayMaterial.diffuseTexture = new BABYLON.Texture(nasaImageSrc, nasa.scene);
overlayMaterial.bumpTexture = new BABYLON.Texture('textures/earthnormal2.jpg', nasa.scene);
overlayMaterial.diffuseTexture.hasAlpha = true;
nasa.sphere.material = overlayMaterial;

Setting hasAlpha to true lets us show a secondary earth texture through the NEO overlay wherever data was not collected. For example, bathymetry (GEBCO_BATHY) leaves transparent holes over the continental masses, making the earth texture underneath visible. Alpha sliders could also be added to stack several NEO layers, but that’s another project.


Fig 7 – alpha bathymetry texture over earth texture
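Since every NEO layer shares the same boilerplate WMS parameters, the long hand-escaped query string passed to GetMapFlip can be assembled programmatically. Here is a minimal sketch; buildWmsUrl is a hypothetical helper, not part of the original code, and it assumes the same %26 escaping convention used above so the proxy receives the whole WMS request as a single url parameter:

```javascript
// Hypothetical helper: builds a GetMapFlip proxy request for a NEO WMS layer.
// The '&' separators are escaped as %26 so the entire WMS request survives
// as one "url" query parameter on the proxy call, as in the snippet above.
function buildWmsUrl(serviceUrl, wmsBase, width, height) {
    var params = {
        BGCOLOR: "0xFFFFFF",
        TRANSPARENT: "TRUE",
        SRS: "EPSG:4326",
        BBOX: "-180.0,-90,180,90",
        width: width,
        height: height,
        format: "image/png",
        Exceptions: "text/xml"
    };
    // Join as key=value pairs separated by the escaped ampersand
    var query = Object.keys(params).map(function (k) {
        return k + "=" + params[k];
    }).join("%26");
    return serviceUrl + "/GetMapFlip?url=" + wmsBase + "%26" + query;
}
```

Swapping NEO layers then only means changing the wmsBase layer reference rather than hand-editing the whole string.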

Since a rotating globe can be annoying it’s worthwhile adding a toggle switch for the rotation weary. One simple method is to make use of a Babylon pick event:

        window.addEventListener("click", function (evt) {
            var pickResult = nasa.scene.pick(evt.clientX, evt.clientY);
            if (pickResult.hit && pickResult.pickedMesh.name != "skyBox") {
                if (nasa.rotationRate < 0.0) nasa.rotationRate = 0.0;
                else nasa.rotationRate = -0.005;
            }
        });
In this case any click ray that intersects the globe will toggle globe rotation on and off. Click picking is a kind of collision checking for object intersection in the scene, which could be very handy for adding globe interaction. In addition, pickResult gives a pickedPoint location, which could be reverse transformed to a latitude and longitude.
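As a sketch of that reverse transform: assuming the sphere is centered at the origin with the y axis through the poles and no rotation applied, a pickedPoint converts to latitude and longitude with basic trigonometry. The function name and axis convention here are illustrative assumptions, not Babylon.js API:

```javascript
// Hypothetical sketch: convert a pick point on the globe mesh (x, y, z)
// to latitude/longitude. Assumes a sphere centered at the origin with the
// y axis through the poles and the prime meridian along +x.
function pickedPointToLatLon(p) {
    var r = Math.sqrt(p.x * p.x + p.y * p.y + p.z * p.z); // sphere radius
    var lat = Math.asin(p.y / r) * 180 / Math.PI;         // -90 .. 90
    var lon = Math.atan2(p.z, p.x) * 180 / Math.PI;       // -180 .. 180
    return { lat: lat, lon: lon };
}
```

For the rotating globe above, the sphere’s current rotation.y would also need to be subtracted from the longitude before any coordinate lookup.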

Starbox (no coffee involved) is a quick way to add a surrounding background in 3D. It’s really just a BABYLON.Mesh.CreateBox big enough to engulf the earth sphere, a very limited kind of cosmos. The stars are not astronomically accurate, just added for some mood setting.

Another handy BABYLON feature is BABYLON.Mesh.CreateGroundFromHeightMap:

/* Name
 * Height map picture url
 * mesh Width
 * mesh Height
 * Number of subdivisions (increases the complexity of this mesh)
 * Minimum height: the lowest level of the mesh
 * Maximum height: the highest level of the mesh
 * scene
 * Updatable: whether this mesh can be updated dynamically in the future (Boolean)
 */

var height = BABYLON.Mesh.CreateGroundFromHeightMap("height", "textures/" + heightmap, 200, 100, 200, 0, 2, scene, false);

For example using a grayscale elevation image as a HeightMap will add exaggerated elevation values to a ground map:


Fig 8 – elevation grayscale jpeg for use in BABYLON HeightMap


Fig 9 – HeightMap applied

The HeightMap can encode any value; for example, NEO monthly fires converted to grayscale will show fire density over the surface.


Fig 10 – NEO monthly fires as heightmap
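The per-pixel mapping a heightmap applies is simple to sketch: a grayscale value between 0 and 255 is scaled linearly between the minimum and maximum height arguments of CreateGroundFromHeightMap. The grayToHeight helper below is illustrative only, not Babylon.js internals:

```javascript
// Hypothetical sketch of heightmap scaling: a grayscale pixel value (0-255)
// is interpolated linearly between minHeight and maxHeight, the same two
// arguments passed to BABYLON.Mesh.CreateGroundFromHeightMap above.
function grayToHeight(gray, minHeight, maxHeight) {
    return minHeight + (gray / 255.0) * (maxHeight - minHeight);
}
```

With the min 0 / max 2 call above, a saturated fire pixel (255) becomes a 2-unit spike while black pixels stay flat on the ground mesh.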

In this case a first-person-shooter (FPS) camera was substituted for the generic ArcRotate camera so users can stalk around the earth looking at fire spikes.

“FreeCamera – This is a ‘first person shooter’ (FPS) type of camera where you control the camera with the mouse and the cursors keys.”

Lots of camera choices are listed here, including Oculus Rift, which promises some truly immersive map opportunities. I assume this note indicates Babylon is waiting on the retail release of Oculus to finalize a camera controller.

“The OculusCamera works closely with our Babylon.js OculusController class. More will be written about that, soon, and nearby.

Another Note: In newer versions of Babylon.js, the OculusOrientedCamera constructor is no longer available, nor is its .BuildOculusStereoCamera function. Stay tuned for more information.”

So it may be only a bit longer before “vicarious life” downhill skiing opportunities are added to FreshyMap.



Fig 11 - NEO Land Surface average night temperature

Borders and Big Data


Fig 1 – Big Data Analytics is a lens, the data is a side effect of new media

I was reflecting on borders recently, possibly because of reading Cormac McCarthy’s The Border Trilogy. Borders come up fairly often in mapping:

  • Geography – national political borders, administrative borders
  • Cartography – border line styles, areal demarcation
  • Web Maps – pixel borders bounding polygonal event handlers
  • GIS – edges between nodes defining faces
  • Spatial DBs – Dimensionally Extended nine-Intersection Model (DE-9IM) 0,1 1,0 1,1 1,2 2,1

However, this is not about map borders – digital or otherwise.

McCarthy is definitely old school, if not Faulknerian. All fans of Neal Stephenson are excused. The Border Trilogy of course is all about a geographic border, the Southwest US-Mexico border in particular. At other levels, McCarthy is rummaging about, surfacing all sorts of borders: cultural borders, language borders (half the dialogue is Spanish), class borders, time borders (coming of age, epochal endings), moral borders with their many crossings. The setting is the pre-war 1930s to 50s, a pre-technology era as we now know it, and only McCarthy’s mastery of evocative language connects us to these times now lost.

A random excerpt illustrates:

“Because the outer door was open the flame in the glass fluttered and twisted and the little light that it afforded waxed and waned and threatened to expire entirely. The three of them bent over the poor pallet where the boy lay looked like ritual assassins. Bastante, the doctor said Bueno. He held up his dripping hands. They were dyed a rusty brown. The iodine moved in the pan like marbling blood. He nodded to the woman. Ponga el resto en el agua, he said. . . . “

The Crossing, Chapter III, p.24

Technology Borders
There are other borders, in our present preoccupation, for instance, “technology” borders. We’ve all recently crossed a new media border and are still feeling our way in the dark wondering where it may all lead. All we know for sure is that everything is changed. In some camps the euphoria is palpable, but vaguely disturbing. In others, change has only lately dawned on expiring regimes. Political realms are just now grappling with its meaning and consequence.

Big Data – Big Hopes
One of the more recent waves of the day is “Big Data,” by which is meant the collection and analysis of outlandishly large data sets, recently come to light as a side effect of new media. Search, location, communications, and social networks are all data gushers and the rush is on. There is no doubt that Big Data Analytics is powerful.

Disclosure: I’m currently paid to work on the periphery of a Big Data project, petabytes of live data compressed into cubes, pivoted, sliced, and doled out to a thread for visualizing geographically. My minor end of the Big Data shtick is the map. I am privy to neither data origins nor ends, but even without reading tea leaves, we can sense the forms and shadows of larger spheres snuffling in the night.

Analytics is used to learn from the past and hopefully see into the future, hence the rush to harness this new media power for business opportunism, and good old fashioned power politics. Big Data is an edge in financial markets where microseconds gain or lose fortunes. It can reveal opinion, cultural trends, markets, and social movements ahead of competitors. It can quantify lendibility, insurability, taxability, hireability, or securability. It’s an x-ray into social networks where appropriate pressure can gain advantage or thwart antagonists. Insight is the more benign side of Big Data. The other side, influence, attracts the powerful like bees to sugar.

Analytics is just the algorithm or lens to see forms in the chaos. The data itself is generated by new media gate keepers, the Googles, Twitters, and Facebooks of our new era, who are now in high demand, courted and feted by old regimes grappling for advantage.

Border Politics
Despite trenchant warnings by the likes of Nassim Taleb, “Beware the Big Errors of ‘Big Data’”, and Evgeny Morozov Net Delusion, the latest issue of “MIT Technology Review” declares in all caps:

“The mobile phone, the Net, and the spread of information —
a deadly combination for dictators”
MIT Tech Review


Dispelling the possibility of irony – feature articles in quick succession:

“A More Perfect Union”
“The definitive account of how the Obama campaign used big data to redefine politics.”
By Sasha Issenberg
“How Technology Has Restored the Soul of Politics”
“Longtime political operative Joe Trippi cheers the innovations of Obama 2012, saying they restored the primacy of the individual voter.”
By Joe Trippi
“Bono Sings the Praises of Technology”
“The musician and activist explains how technology provides the means to help us eradicate disease and extreme poverty.”
By Brian Bergstein

Whoa, anyone else feeling queasy? This has to be a classic case of Net Delusion! MIT Tech Review is notably the press ‘of technologists’, ‘by technologists’, and ‘for technologists’, but the hubris is striking even for academic and engineering types. The masters of technology are not especially sensitive to their own failings, after all, Google, the prima donna of new media, is anything but demure in its ambitions:

“Google’s mission is to organize the world’s information and make it universally accessible and useful.”
… and in unacknowledged fine print, ‘for Google’

Where power is apparent the powerful prevail, and who is more powerful than the State? Intersections of technologies often prove fertile ground for change, but change is transient, almost by definition. Old regimes accommodate new regimes, harnessing new technologies to old ends. The Mongol pony, machine gun, aeroplane, and nuclear fission bestowed very temporary technological advantage. It is not quite apparent what is inevitable about the demise of old regime power in the face of new information velocity.

What Big Data offers with one hand it takes away with the other. Little programs like “socially responsible curated treatment” or “cognitive infiltration” are only possible with Big Data analytics. Any powerful elite worthy of the name would love handy Ministry of Truth programs that steer opinion away from “dangerous” ideas.

“It is not because the truth is too difficult to see that we make mistakes… we make mistakes because the easiest and most comfortable course for us is to seek insight where it accords with our emotions – especially selfish ones.”

Alexander Solzhenitsyn

Utopian Borders
Techno utopianism, embarrassingly ardent in the Jan/Feb MIT Tech Review, blinds us to dangerous potentials. There is no historical precedent to presume an asymmetry of technology somehow inevitably biased to higher moral ends. Big Data technology is morally agnostic and only reflects the moral compass of its wielder. The idea that “… the spread of information is a deadly combination for dictators” may just as likely be “a deadly combination” for the naïve optimism of techno utopianism. Just ask an Iranian activist. When the bubble bursts, we will likely learn the hard way how the next psychopathic overlord will grasp the handles of new media technology, twisting big data in ways still unimaginable.

Big Data Big Brother?
Big Brother? US linked to new wave of censorship, surveillance on web
Forbes Big Data News Roundup
The Problem with Our Data Obsession
The Robot Will See You Now
Educating the Next Generation of Data Scientists
Moderated by Edd Dumbill (I’m not kidding)

Digital Dictatorship
Wily regimes like the DPRK can leverage primitive retro fashion brutality to insulate their populace from new media. Islamists master new media for more ancient forms of social pressure, sharia internet, fatwah by tweet. Oligarchies have co-opted the throttle of information, doling out artfully measured information and disinformation into the same stream. The elites of enlightened western societies adroitly harness new market methods for propagandizing their anaesthetized citizenry.

Have we missed anyone?

… and of moral borders
“The battle line between good and evil runs through the heart of every man”
The Gulag Archipelago, Alexander Solzhenitsyn


We have crossed the border. Everything is changed. Or is it?

Interestingly Cormac McCarthy is also the author of the Pulitzer Prize winning book, The Road, arguably about erasure of all borders, apparently taking up where techno enthusiasm left off.

Fig 2 – a poor man’s Big Data – GPU MapD – can you find your tweets?

(Silverlight BusinessApplication , Membership on Azure) => To Map || Not To Map

We owe Alonzo Church for this one:

    WebContext.Current.Authentication.LoggedIn += (se, ev) =>
       Link2.Visibility = Visibility.Visible;

    WebContext.Current.Authentication.LoggedOut += (se, ev) =>
        Link2.Visibility = Visibility.Collapsed;

The λ calculus “goes to” Lambda Expressions now in C#, turning mere programmers into logicians. We share in the bounty of Alonzo’s intellect in some remote fashion. Likely others understand this functional proof apparatus, but for me it’s a mechanical thing. Put this here and that there, slyly kicking an answer into existence. Is a view link visible or is it not?

Here we are not concerned with “the what” of a map but a small preliminary question, “whether to map at all?”

Azure provides a lot of cloud for the money. Between SQL Azure, blob storage, and hosted services, both web and worker roles, there are a lot of very useful resources. All except SQL Azure currently include built-in scaling and replication. SQL Azure scaling is still awaiting the future release of federation.

Visual Studio 2010 adds a Cloud toolkit with template-ready services (not to be confused with “shovel ready”). One interesting template is ASP .NET combined with an Azure-hosted Web Role service. The usefulness here is the full ASP .NET authentication and profile capability. If you need simple authentication, profile, and registration built on the ASP .NET model, it is all here.

The goal then is to leverage the resourceful templates of ASP .NET authentication, modify this to a Silverlight version (which implies a Bing Map Control is in the future, along with some nifty transition graphics), and finally deploy to Azure.

And now for the recipe

authentication screen shot
Fig1 – open a new VS2010 project using the Cloud template

authentication screen shot
Fig2 – add an ASP .NET Web Role

authentication screen shot
Fig3 – click run to open in local development fabric with login and registration

This is helpful and works fine in a local dev fabric.

Now on to Azure:

First create a Storage Account and a Hosted Service.

authentication screen shot
Fig4 – Storage Account for use in publishing and diagnostics logs

authentication screen shot
Fig5 – Azure control manager

authentication screen shot
Fig6 – create a SQL Azure instance to hold the aspnetdb

Now using VS2010 publish the brand new ASP .NET web role to the new Azure account.

Problem 1
However, publish to Azure and you will immediately face a couple of problems. First, the published Azure service is a Web Role not a VM, which means there is no local SQL Server accessible for attaching a default aspnetdb. SQL Azure is in its own separate instance. You will need to manually create the aspnetdb Database on a SQL Azure instance and then provide a connection string to this new SQL Azure in the Web Role.

Microsoft provides SQL scripts adapted for creating these membership tables on SQL Azure:

Running these scripts in your Azure DB will create the following membership database and tables:

authentication screen shot
Fig7 – aspnetdb SQL Azure database

Since I like Silverlight better, it’s probably time to dump the plain-Jane ASP .NET in favor of the Silverlight Navigation version. Silverlight provides the animated fades and flips to which most of us are already unconsciously accustomed.

In Visual Studio add a new project from the Silverlight templates. The Silverlight Business Application is the one with membership.

authentication screen shot
Fig8 – Silverlight Business Application

We are now swapping out the old ASP .NET Cloud Web Role for a Silverlight BusinessApplication1.Web.

Step 1
Add reference assemblies WindowsAzure Diagnostic, ServiceRuntime, and StorageClient to BusinessApplication1.Web

Step 2
Copy WebRole1’s WebRole.cs to BusinessApplication1.Web and correct the namespace

Step 3
Right click Roles under CloudService1 to add a Role in the solution – BusinessApplication1.Web
(Optionally remove WebRole1 from Roles and remove the WebRole1 project since it is no longer needed)

Step 4
Add a DiagnosticsConnectionString property to the Roles/BusinessApplication1.Web role that points to the Azure blob storage. Now we can add a DiagnosticMonitor and trace logging to the WebRole.cs

Step 5
Open BusinessApplication1.Web web.config, add connection string pointed at SQL Azure aspnetdb and a set of ApplicationServices:

<?xml version="1.0"?>
<configuration>
  <configSections>
    <sectionGroup name="system.serviceModel">
      <section name="domainServices"
               type="System.ServiceModel.DomainServices.Hosting.DomainServicesSection,
 System.ServiceModel.DomainServices.Hosting, Version=, Culture=neutral,
 PublicKeyToken=31BF3856AD364E35" allowDefinition="MachineToApplication"
               requirePermission="false" />
    </sectionGroup>
  </configSections>

  <connectionStrings>
    <remove name="LocalSqlServer"/>
    <add name="ApplicationServices" connectionString="Server=tcp:<SqlAzure ...>"
         providerName="System.Data.SqlClient" />
  </connectionStrings>

  <system.web>
    <httpModules>
      <add name="DomainServiceModule"
           type="System.ServiceModel.DomainServices.Hosting.DomainServiceHttpModule,
 System.ServiceModel.DomainServices.Hosting, Version=, Culture=neutral,
 PublicKeyToken=31BF3856AD364E35" />
    </httpModules>
    <compilation debug="true" targetFramework="4.0" />

    <authentication mode="Forms">
      <forms name=".BusinessApplication1_ASPXAUTH" />
    </authentication>

    <membership defaultProvider="AspNetSqlMembershipProvider">
      <providers>
        <add name="AspNetSqlMembershipProvider"
             type="System.Web.Security.SqlMembershipProvider"
             connectionStringName="ApplicationServices"
             enablePasswordRetrieval="false" enablePasswordReset="true"
             requiresQuestionAndAnswer="false" requiresUniqueEmail="false"
             maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6"
             minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10"
             applicationName="/" />
      </providers>
    </membership>

    <profile defaultProvider="AspNetSqlProfileProvider">
      <providers>
        <add name="AspNetSqlProfileProvider"
             type="System.Web.Profile.SqlProfileProvider"
             connectionStringName="ApplicationServices" applicationName="/"/>
      </providers>
      <properties>
        <add name="FriendlyName"/>
      </properties>
    </profile>

    <roleManager enabled="false" defaultProvider="AspNetSqlRoleProvider">
      <providers>
        <add name="AspNetSqlRoleProvider"
             type="System.Web.Security.SqlRoleProvider"
             connectionStringName="ApplicationServices" applicationName="/" />
        <add name="AspNetWindowsTokenRoleProvider"
             type="System.Web.Security.WindowsTokenRoleProvider" applicationName="/" />
      </providers>
    </roleManager>
  </system.web>

  <system.webServer>
    <validation validateIntegratedModeConfiguration="false"/>
    <modules runAllManagedModulesForAllRequests="true">
      <add name="DomainServiceModule" preCondition="managedHandler"
           type="System.ServiceModel.DomainServices.Hosting.DomainServiceHttpModule,
 System.ServiceModel.DomainServices.Hosting, Version=, Culture=neutral,
 PublicKeyToken=31BF3856AD364E35" />
    </modules>
  </system.webServer>

  <system.serviceModel>
    <serviceHostingEnvironment aspNetCompatibilityEnabled="true"
                               multipleSiteBindingsEnabled="true" />
  </system.serviceModel>
</configuration>

Now we can publish our updated Silverlight Web Role to Azure.

authentication screen shot
Fig9 – publish to Azure

However, this brings us to problem 2

It turns out that a couple of assemblies required for our Silverlight membership BusinessApplication are missing from Azure’s OS. When an assembly is missing, publish will cycle endlessly between Initializing and Stopped without ever giving a reason.

When this happens I know to start looking at the assembly dlls. Unfortunately they are numerous, and without a message indicating which is at fault we are reduced to trial and error. Obviously the turnaround on publish to Azure is not incredibly fast; debug cycles take five to ten minutes, so this elimination process is tedious. However, I spent an afternoon on this and can save you some time. The trick assemblies are:

To fix this problem:
Open References under BusinessApplication1.Web and click on the above two assemblies. Then, in Properties, set “Copy Local” to True. This means that a publish to Azure will also copy these two missing assemblies from your local VS2010 environment up to Azure, and publish will eventually run as expected.

The publish process then brings us to this:

Fig 10 – Login

And this:

Fig 11 – Register

That in turn allows the Alonzo λ magic:

            // show the members-only link once the user has authenticated
            WebContext.Current.Authentication.LoggedIn += (se, ev) =>
                Link2.Visibility = Visibility.Visible;


Well it works, maybe not as easily as one would expect for a Cloud template.

New Stuff

Some convergence stuff has been passing by my window recently. Too bad there isn’t more time to play, but here are a few items of note to the spatial world.

Fig – Silverlight 5 early demo from the Firestarter webcast

First a look ahead to Silverlight 5 due in 2011:

Silverlight Firestarter video

Some very interesting pre-release demos showcasing mesh graphics running at GPU speeds. WPF has had 3D mesh capability for a few years, but SL5 will add the same camera level control in 3D scene graphs to spice up delivery in the browser. There will be a whole new level of interesting possibilities with 3D in the browser as apparent in the Rosso demos above.

3D View of what?
So then 3D viewing; but how do you get a 3D image into the model in the first place? Well, thanks to some links from Gene Roe and the interesting open source driver activity around the Kinect, here are some possibilities.

The obvious direction is a poor man’s 3D modeler. It won’t be long until we see these devices in every corner of rooms, supplementing the omnipresent security cameras with 3D views. (Nice to know that the security folks can wander around mirror land looking under tables and behind curtains.)

Well it could be worse, see DIY TSA scanner. But I doubt that Kinect will ever incorporate millimeter or back scatter sensors into the commercial versions.


I know we’ve all seen Jack Dangermond’s “instrumented universe” prophecies, but another angle is remote dashboarding. Put the instrument controls right on the floor in front of us and abstract the sensored world one step further into the mirror. That way heavy equipment operators can get nice aerobic workouts too.

Next Step

Can’t miss the augmented reality angle. A pico projector in a phone ought to handle this trick in the handheld world.

and then UGV

Why not let that Roomba do some modeling?

or UAV

Add a bit of UAV and we can move mirror land from the 30cm (1 foot) resolution capture soon to be added to Bing Maps (see Blaise Aguera y Arcas’ comments) to what looks like sub-centimeter. Do we doubt that the venerable licensed surveyor will eventually have one of these Gatewing X100 thing-a-ma-bobs in his truck?


So we move along. I see some sanity in WP7 spanning XNA and Silverlight on the same mobile device; convergence into the 3D mirror land will be less painful for those of us developing in the Microsoft framework. HTML5 is nice, and I’m happy to see the rebirth of SVG, but the world moves apace, and I look forward to some genuine 3D adventures in the next generation mirror land.

Route Optimization and Death of a Salesman

Taking an arbitrary list of addresses and finding an optimized route has been a common problem ever since there have been traveling salesmen. It has been the bane of many computer science students, leading to more than a few minor tragedies, as well as Arthur Miller’s great American one. Technically an NP-hard problem, its computational time escalates quickly and tragically as the number of stops increases.

Bing includes some valuable Routing services: routing with waypoints, traffic optimization, optimization for time or distance, and walking or major-road routes as well as driving. However, there are some limitations to the Bing Routing Service, and even Bing could not prevent Willy Loman’s tragic demise.

One limitation of Bing Routing is that the maximum number of waypoint stops is 25. This is probably not a notable issue for individual users, but it is a problem for enterprise use with large numbers of stops to calculate.

Perhaps a broader issue often encountered is waypoint or stop order. Bing Route service does not attempt to reorder waypoints for optimal routing. Route segments between waypoints are optimized, but waypoints themselves are up to the user. Often stop points are coming from a SharePoint list, a contact address book, an Excel spreadsheet, or a database without benefit of sort ordering based on route optimization. This is where route optimization comes in handy.

OnTerra Systems has been involved with Fleet management for some time now and recently introduced a RouteOptimization Service.

There are of course fairly complex algorithms involved, and computational scientists have been at some pains to circumscribe the limits of the problem. One simplification that makes this much easier is to ignore routing momentarily and use a shortest distance algorithm to make a first route optimization pass. One way this can be accomplished is by looking at a set of connected nodes and determining segment intersects. By recursively re-ordering nodes to eliminate segment intersects, the computer quickly turns the above node set into this:

The nodes are first ordered with the simplified assumption of straight-line segment connectors, and only after re-ordering is the usual Bing Route service triggered to get actual routes between nodes. This is one of those algorithms that take some liberties to find a solution in real time, i.e. time we can live with. It doesn’t guarantee “The Absolute Best” route, but a route that is “good enough” and in the vast majority of cases will in fact be the best, just not provably so.
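The uncrossing pass described above can be sketched in a few lines of JavaScript. This is a minimal 2-opt style sketch, not OnTerra’s actual algorithm: it uses straight-line (Euclidean) distance as a stand-in for real routes and reverses any sub-path whose reversal shortens the total path, which is exactly the operation that removes a segment crossing.

```javascript
// Straight-line distance between two stops (coordinates treated as planar
// x/y for illustration -- fine at the city scale where this pass operates).
function dist(a, b) {
    return Math.hypot(a.x - b.x, a.y - b.y);
}

// Total length of an open path visiting the stops in array order.
function pathLength(stops) {
    var len = 0;
    for (var i = 0; i < stops.length - 1; i++) {
        len += dist(stops[i], stops[i + 1]);
    }
    return len;
}

// 2-opt pass: reverse the sub-path between i and k whenever the reversal
// shortens the path. The first stop (index 0) is held fixed as the start.
function twoOpt(stops) {
    var route = stops.slice();
    var improved = true;
    while (improved) {
        improved = false;
        for (var i = 1; i < route.length - 1; i++) {
            for (var k = i + 1; k < route.length; k++) {
                var candidate = route.slice(0, i)
                    .concat(route.slice(i, k + 1).reverse())
                    .concat(route.slice(k + 1));
                if (pathLength(candidate) < pathLength(route)) {
                    route = candidate;
                    improved = true;
                }
            }
        }
    }
    return route;
}
```

On a crossed four-stop path like (0,0) → (1,1) → (1,0) → (0,1), a single reversal of the middle pair uncrosses the route and drops the length from about 3.83 to 3.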

Of course mathematicians and purists cringe at the idea; however, that is ordinarily what engineering is all about: getting an approximate solution within the time constraint of a usable solution.

OnTerra Systems has added some other refinements to the service with the help of Bing Maps Silverlight Control. This UI uses both Bing Geocoding as well as Bing Route services. In addition it makes use of the simplified RouteOptimization service to make the route UI shown above. Users can choose whether to optimize for time or distance. Minor modifications could use traffic optimization, available in Bing, just as well. However, traffic optimization requires a different license and a bit more money.

Entering points can be accomplished several ways, with a click approach, from manually entered addresses, or even more easily from an Excel spreadsheet of addresses. Route definition can be set for round trip, one way to a known end destination, or one way to whichever end is the most efficient.

This UI is only one of many that could be used. As a WCF Service, RouteOptimization can take latitude, longitude point sets from any source and return the ordered points for display in any web UI.

This example Silverlight UI is a three step process of service chaining. First points are collected from the user. If addresses are listed rather than latitude longitude points, then Bing Geocode Service is called to arrive at the necessary list of latitude, longitude points. These in turn are handed to the RouteOptimizer Service. The returned points are now ordered and can be given to the Bing Route Service in the 3rd step of a service chain. Finally the resulting route is plotted in the Silverlight map UI.
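The three-step chain reads naturally as a small function. In this sketch, geocode, optimize, and route are hypothetical stand-ins for the Bing Geocode service, the RouteOptimizer service, and the Bing Route service; injecting them keeps the chain itself independent of any one transport or UI.

```javascript
// Service chain sketch: addresses -> points -> ordered points -> route.
// The three injected functions stand in for the Bing Geocode service,
// the RouteOptimizer service, and the Bing Route service respectively.
function chainRoute(addresses, geocode, optimize, route) {
    var points = addresses.map(geocode); // step 1: geocode each address
    var ordered = optimize(points);      // step 2: optimizer re-orders stops
    return route(ordered);               // step 3: real route between stops
}
```

In the Silverlight UI these calls are asynchronous, but the dependency order is the same: each step consumes the previous step’s result.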

Waypoint limit work around:
An interesting side note to the RouteOptimization Service is the ability to extend the number of waypoints. Microsoft’s restriction to 25 waypoints is probably a scaling throttle to prevent service users from sending hundreds or even thousands of waypoints in a single route; the Route Service could otherwise be swamped by large-count waypoint requests.

However, with the ability to order waypoints with an outside service, this limit has a work-around. First run an optimization on all of the stops using the RouteOptimizer service, then loop through the returned set with multiple calls to Bing Route in 25-node chunks. This divide and conquer neatly works around the waypoint limitation. Note that the public sample here is also limited to 25 stops; to use the work-around you’ll need a licensed version.
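The divide and conquer loop might look something like this (a sketch, assuming the optimizer has already ordered the stops; chunkStops and MAX_WAYPOINTS are names of my own invention). The key detail is that each chunk must begin with the last stop of the previous chunk so the route legs join up end to end.

```javascript
// Bing Route accepts at most 25 waypoints per request.
var MAX_WAYPOINTS = 25;

// Slice an already-optimized stop list into <= 25-stop runs for Bing Route.
// The end waypoint of each chunk doubles as the start of the next, so the
// returned legs concatenate into one continuous route.
function chunkStops(orderedStops) {
    var chunks = [];
    var start = 0;
    while (start < orderedStops.length - 1) {
        var end = Math.min(start + MAX_WAYPOINTS - 1, orderedStops.length - 1);
        chunks.push(orderedStops.slice(start, end + 1));
        start = end; // overlap by one stop so routes join
    }
    return chunks;
}
```

Each chunk is then handed to the Bing Route service in turn, and the leg geometries are appended to build the full route.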

Alternate UIs
Of course UIs may multiply. Route Savvy is primarily a service to which any number of UIs can connect and Silverlight happens to be a nice way to create useful UIs. Perhaps a more convenient UI is the Bing Map Gallery version found at Bing Maps Explore:
Bing Map Explore Route Savvy.

Linda Loman and the Parent Car Pool:
Anyone out there with kids in sports has probably run into this traveling salesman problem. (Fast forward to 2010 and Linda Loman jumps in the van with Biff.) There are 6 kids in the car pool. What is the optimal route for picking up all the kids and delivering them to the playing field on time? If you want to know, just fire up RouteOptimizer and give it a whirl.

OnTerra Systems Route Optimization Service supplements the already valuable Bing Services by both optimizing input stop orders and extending the waypoint limitation. This is an illustration of the utility of web service chains. The ability to link many services into a solution chain is one of the dreams come true of web development and one reason why web applications continue to supersede the desktop world.

Please note, even though RouteOptimizer will help Linda Loman’s car pool run smoothly, it won’t prevent the death of a salesman. Sorry, only a higher plane optimization can do that.

Connecting the Data Dots – Hybrid Architecture

Web mapping has generally been a 3 tier proposition. In a typical small to medium scale scenario there will almost always be a data source residing in a spatial database, with some type of service in the middle relaying queries from a browser client UI to the database and back.

I’ve worked with all kinds of permutations of 3 tier web mapping, some easier to use than others. However, a few months ago I sat down with all the Microsoft offerings and worked out an example using SQL Server + WCF + Bing Maps Silverlight Control. Notice tools for all three tiers are available from the same vendor i.e. Microsoft Visual Studio. I have to admit that it is really nice to have integrated tools across the whole span. The Microsoft option has only been possible in the last year or so with the introduction of SQL Server 2008 and Bing Maps Silverlight Map control.

The resulting project is available on codeplex: dataconnector
and you can play with a version online here: DataConnectorUI

When starting the project I had some reservations about WCF. SQL Spatial was similar to much of my earlier work with PostGIS, while Bing Maps Silverlight Control and XAML echoed work I’d done a decade back with SVG, just more so. For the middle tier, though, I had generally used something from the OGC world such as GeoServer, so putting together my own middle-tier service using WCF was largely experimental. WCF turned out to be less daunting than I had anticipated, and it afforded an opportunity to try out a few different approaches for transferring spatial query results to the UI client.

There is more complete information on the project here:

After all the experimental approaches my conclusion is that even with the powerful CLR performance of Silverlight Bing Maps Control, most scenarios still call for a hybrid approach: raster performance tile pyramids for large extent or dense data resources, and vectors for better user interaction at lower levels in the pyramid.

Tile pyramids don’t have to be static, and DataConnector has examples of both static and dynamic tile sources. The static example is a little different from other approaches I’ve used, such as GeoWebCache, since it drops tiles into SQL Server as they are created rather than using a file system pyramid. I imagine that a straight static file system source could be a bit faster, but it is nice to have indexed quadkey access and all data residing in a single repository. This was actually the better choice when deploying to Azure, since I didn’t have to work out a blob storage option for the tiles in addition to the already available SQL Azure.
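The quadkey indexing mentioned above follows Bing’s documented tile addressing scheme: the bits of a tile’s x and y are interleaved into a base-4 string whose length equals the zoom level, which makes a convenient primary key for tile rows in SQL Server.

```javascript
// Convert tile x,y at a given zoom (level of detail) into a Bing quadkey.
// Each character encodes one pyramid level: bit of x contributes 1,
// bit of y contributes 2, giving digits 0-3 from root to leaf.
function tileXYToQuadKey(tileX, tileY, levelOfDetail) {
    var quadKey = "";
    for (var i = levelOfDetail; i > 0; i--) {
        var digit = 0;
        var mask = 1 << (i - 1);
        if ((tileX & mask) !== 0) digit += 1;
        if ((tileY & mask) !== 0) digit += 2;
        quadKey += digit;
    }
    return quadKey;
}
```

A handy property for SQL storage is that a tile’s quadkey is prefixed by the quadkeys of all its ancestors, so a simple LIKE 'prefix%' query pulls a whole sub-pyramid.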

Hybrid web mapping:

Here are some of the tradeoffs involved between vector and raster tiles.

Hybrid architectures switch between the two depending on the client’s position in the resource pyramid. Here is my analysis of feature count ranges and optimal architecture:

1. Low – For vector poly feature counts < 300 per viewport, I’d use vector queries from the DB. Taking advantage of SQL Server’s Reduce() function makes it possible to drop node counts for polygons and polylines at lower zoom levels. Points are more efficient, and up to 3000-5000 points per viewport are still possible.

2. Medium – For zoom levels with counts > 300 per viewport, I’d use a dynamic tile builder at the top of the pyramid. I’m not sure what the upper limit on performance is here; I’ve only run it on fairly small tables (2000 records). Eventually dynamic tile building affects performance (at the server, not the client).

3. High – For zoom levels with high feature counts and large poly node counts, I’d start with pre-seeded static tiles at the low zoom levels at the top of the pyramid, perhaps dynamic tiles in the middle of the pyramid, and vectors at the bottom.

4. Very High – For very high feature counts at low zoomlevels near the top of the pyramid I’d just turn off the layer. There probably isn’t much reason to show very dense resources until the user moves in to an area of interest. For dense point sources a heat map raster overview would be best at the top of the pyramid. At middle levels I’d use a caching tile builder with vectors again at higher zoom levels at the bottom of the pyramid.
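The four ranges above boil down to a small switching function. In this sketch the 300 and 3000-5000 vector limits come from the analysis above; the cutoffs between dynamic tiles, static tiles, and "off" are purely illustrative placeholders, and the function name is my own.

```javascript
// Pick a rendering mode from the estimated feature count in the viewport.
// Vector limits follow the article (300 polys, ~3000 points); the tile
// cutoffs below are illustrative, not measured values.
function chooseLayerMode(featuresInViewport, isPointLayer) {
    var vectorLimit = isPointLayer ? 3000 : 300;
    if (featuresInViewport <= vectorLimit) {
        return "vector";        // low: query vectors straight from the DB
    }
    if (featuresInViewport <= 2000) {
        return "dynamicTile";   // medium: build raster tiles on the fly
    }
    if (featuresInViewport <= 100000) {
        return "staticTile";    // high: serve pre-seeded pyramid tiles
    }
    return "off";               // very high: hide until the user zooms in
}
```

In a real client this decision would be re-run on every viewchange event, switching layer sources as the user pans and zooms through the pyramid.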

Here is a graphic view of some hybrid architectural options:
Fig – hybrid architecture options (DataConnector screen shots)

Data Distribution Geographically:
Another consideration is the geographic distribution of data in the resource. Homogeneous data density works best with hard zoom-level switches. In other words, the switch from vector to raster tile can be coded to zoom level regardless of where the client has panned in the extent of the data. This is simple to implement.

However, where data is geographically heterogeneous it might be nice to arrange switching according to density. An example might be parcel data densities that vary across urban and rural areas. Instead of simple zoom levels, the switch between tile and vector is based on density calculations. Having a heat map overview available, for example, could provide a quick viewport density calculation based on a pixel sum of the heat map intersecting the user’s viewport. This density calculation would drive the switch rather than a simpler zoom level. This way rural areas of interest gain the benefit of vectors higher in the pyramid than would be useful in urban areas.
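As a sketch, the heat-map density switch could be as simple as summing the overview raster’s intensities under the viewport and comparing against a vector-friendly budget. The function names, the row-major pixel layout, and the budget parameter are assumptions for illustration, not DataConnector’s actual implementation.

```javascript
// Sum heat-map intensities inside a viewport rectangle.
// heatPixels: row-major array of intensities for the overview raster.
// viewport: {x, y, w, h} in heat-map pixel coordinates.
function viewportDensity(heatPixels, width, viewport) {
    var sum = 0;
    for (var row = viewport.y; row < viewport.y + viewport.h; row++) {
        for (var col = viewport.x; col < viewport.x + viewport.w; col++) {
            sum += heatPixels[row * width + col];
        }
    }
    return sum;
}

// Switch to vectors only when the viewport's density fits the budget.
function useVectors(heatPixels, width, viewport, budget) {
    return viewportDensity(heatPixels, width, viewport) <= budget;
}
```

A rural viewport with a low pixel sum would flip to vectors several zoom levels higher than a dense urban viewport over the same layer.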


Point Layers:
Points have a slightly different twist. Too many points clutter a map, while their simplicity means more point vectors can be rendered before UI performance suffers. Heat maps are a great way to show density at higher levels in the pyramid, and can come from a dynamic tile source or a more generalized caching tile pyramid. In a point layer scenario, at some level there is a switch from heat map to cluster icons, and then to individual pushpins. Using power scaling at pushpin levels allows higher density pins to show higher in the pyramid without as much clutter; power scaling ties the icon size for the pin to zoom level. Experiments put the icon limit for the Bing Silverlight Map Control at 3000-5000 per viewport.
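Power scaling itself is nearly a one-liner: halve the pin icon for every zoom level above the level at which it shows full size, clamped to a sane range. The constants and function name here are illustrative, not taken from DataConnector.

```javascript
// Power scaling for pushpin icons: icon size is a power-of-two function
// of zoom level, so pins shrink smoothly higher in the pyramid.
// fullSizeZoom: the zoom at which the pin shows at fullSizePx pixels.
function pinIconSize(zoomLevel, fullSizeZoom, fullSizePx) {
    var size = fullSizePx * Math.pow(2, zoomLevel - fullSizeZoom);
    // clamp: never larger than full size, never smaller than 4px
    return Math.max(4, Math.min(fullSizePx, size));
}
```

With a 32px pin full-sized at zoom 18, the pin renders at 16px one level up, 8px two levels up, and bottoms out at the 4px floor higher still.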


Some Caveats:
Tile pyramids are of course most efficient when the data is relatively static. With highly dynamic data, tiles can be built on the fly but with consequent loss of performance as well as loading on the server that affects scaling. In an intermediate situation with data that changes slowly, static tiles are still an option using a pre-seeding batch process run at some scheduled interval.

Batch tile loading also has limitations for very dense resources that require tiling down deep in a pyramid where the number of tiles grows very large. Seeding all levels of a deep pyramid requires some time, perhaps too much time. However, in a hybrid case this should rarely happen since the bottom levels of the pyramid are handled dynamically as vectors.

It is also worth noting that with Silverlight, client hardware affects performance. Silverlight has an advantage for web distribution since it pushes the CPU load out to the clients, harnessing all their resources; however, an ancient client with old hardware will not match the performance of newer machines.


A Hybrid zoom level based approach is the best general architecture for Silverlight web maps where large data sets are involved. Using your own service provides more options in architecting a web map solution.

Microsoft’s WCF is a nice framework especially since it leverages the same C# language, IDE, and debugging capabilities across all tiers of a solution. Microsoft is the only vendor out there with the breadth to give developers an integrated solution across the whole spectrum from UI graphics, to customizable services, to SQL spatial tables.

Then throw in global map and imagery resources from Bing, with Geocode, Routing, and Search services, along with the Azure cloud platform for scalability, and it’s no wonder Microsoft is a hit with the enterprise. The full-range, single-vendor package is obviously a powerful incentive in the enterprise world. Although relatively new to the game, Microsoft’s full-court press puts some pressure on traditional GIS vendors, especially on the web distribution side of the equation.