Visualizing Large Data Sets with Bing Maps Web Apps

Fig 1 - MapD Twitter Map 80M tweets Oct 19 - Oct 30

Visualizing large data sets with maps is an ongoing concern these days. Just ask the NSA, or note this federal vehicle tracking initiative reported by the LA Times. Or this SPD mesh network for tracking any MAC address wandering by.

“There was of course no way of knowing whether you were being watched at any given moment. How often, or on what system, the Thought Police plugged in on any individual wire was guesswork. It was even conceivable that they watched everybody all of the time. But at any rate they could plug in your wire whenever they wanted to.”

George Orwell, 1984

On a less intrusive note, large data visualization is also of interest to anyone dealing with BI, or just fascinated with massive public data sets such as the Twitter universe. Web maps are the way to go for public distribution, and all web apps face the same set of issues when dealing with large data sets.

1. Latency of data storage queries, typically SQL.
2. Latency of services for mediating queries and data between the UI and storage.
3. Latency of the internet.
4. Latency of client side rendering.

All web map JavaScript APIs face these same issues, whether it’s Google, MapQuest, Nokia Here, or Bing Maps. This is a Bing Maps centric perspective on large data mapping, because Bing Maps has been the focus of my experience for the last year or two.

Web Mapping Limitations

Bing Maps Ajax v7 is Microsoft’s JavaScript API for web mapping applications. It offers typical Point (Pushpin), Polyline, and Polygon vector rendering in the client over three tile base map styles: Road, Aerial, and AerialWithLabels. Additional overlay extensions, such as traffic, are also available. As with all the major web map APIs, vectors have the advantage of client side event functions at the shape object level.

Although this is a powerful mapping API, rendering performance degrades with the number of vector entities in an overlay. Zoom and pan navigation performs smoothly on a typical client up to a couple of thousand points or a few hundred complex polylines and polygons. Beyond these limits other approaches are needed for visualizing geographic data sets. This client side limit is necessarily fuzzy, as there is a wide variety of client hardware out there in user land, from older desktops and mobile phones to powerful gaming rigs.

Large Data Visualization Approaches

1) Tile Pyramid – The Bing Maps Ajax v7 API offers a tileLayer resource that handles overlays of tile pyramids using a quadkey nomenclature. Data resources are precompiled into sets of small images, called a tile pyramid, which can then be used in the client map as a slippy tile overlay. This is the same slippy tile approach used for serving the base Road, Aerial, and AerialWithLabels maps, and it is common to all web map brands.

Fig 2 - example of quadkey png image names for a tile pyramid
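The quadkey names in Fig 2 follow the standard Bing Maps tile system: a tile’s x, y, and zoom level interleave into one base-4 string. Here is a minimal sketch of that conversion, with the v7 tileLayer wiring shown in comments (the tile URL host is a placeholder, not a real server):

```javascript
// Convert tile x, y, and zoom level into a Bing Maps quadkey.
// Each base-4 digit encodes one quadrant: 0=NW, 1=NE, 2=SW, 3=SE.
function tileToQuadKey(x, y, level) {
  var quadKey = "";
  for (var i = level; i > 0; i--) {
    var digit = 0;
    var mask = 1 << (i - 1);
    if ((x & mask) !== 0) digit += 1;
    if ((y & mask) !== 0) digit += 2;
    quadKey += digit;
  }
  return quadKey; // e.g. tile (3,5) at level 3 -> "213", stored as 213.png
}

// In the browser, a Bing Maps Ajax v7 overlay substitutes {quadkey} per tile:
// var tileSource = new Microsoft.Maps.TileSource({
//   uriConstructor: 'http://yourserver/tiles/{quadkey}.png' // placeholder host
// });
// map.entities.push(new Microsoft.Maps.TileLayer({ mercator: tileSource }));
```

The client never computes anything heavier than these strings; the pyramid itself was pre-processed server side.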

Pro: Fast performance

  • Server side latency is eliminated by pre-processing tile pyramids
  • Internet streaming is reduced to a limited set of png or jpg tile images
  • Client side rendering is reduced to a small set of images in the overlay

Con: Static data – tile pyramids are pre-processed

  • data cannot be real time
  • Permutations limited – storage and time limitations apply to queries that have large numbers of permutations
  • Storage capacity – tile pyramids require large storage resources when provided for worldwide extents and full 20 zoom level depth

2) Dynamic tiles – this is a variation of the tile pyramid that creates tiles on demand at the service layer. A common approach is to provide dynamic tile creation with SQL or file based caching. Once a tile has been requested, it is available to subsequent queries directly as an image. This allows lower levels of the tile pyramid to be populated only on demand, reducing the amount of storage required.
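The caching logic amounts to a read-through cache keyed by quadkey. A minimal sketch, where an in-memory object stands in for the SQL or file cache and the render function is a placeholder for actual tile generation:

```javascript
// Read-through tile cache: only the first request for a quadkey pays the
// cost of rendering; every later request is served from the cache.
var tileCache = {}; // swap for file or SQL-backed storage in practice

function getTile(quadkey, renderTile) {
  if (tileCache[quadkey]) {        // cache hit: no server-side render cost
    return tileCache[quadkey];
  }
  var png = renderTile(quadkey);   // cache miss: generate the tile image
  tileCache[quadkey] = png;
  return png;
}
```

In a real service the cache would also carry a timestamp so stale tiles can be refreshed, which is exactly the “static data” con noted below.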

Pro:

  • Can handle a larger number of query permutations
  • Server side latency is reduced by caching tile pyramid images (only the first request requires generating the image)
  • Internet streaming is reduced to a limited set of png tile images
  • Client side rendering is reduced to a small set of images in the overlay

Con:

  • Static data – dynamic data must still be refreshed in the cache
  • Tile creation performance is limited by server capability and can be a problem with public facing high usage websites.

3) Hybrid - This approach splits the zoom level depth into at least two sections. The lowest levels, with the largest extents, contain the majority of a data set’s features and are provided as a static tile pyramid. The higher zoom levels, comprising smaller extents with fewer points, can use the data as vectors. A variation of the hybrid approach is a middle level populated by a dynamic tile service.

Fig 3 – Hybrid architecture
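The switch-over in the hybrid architecture boils down to a zoom threshold check on each view change. A minimal sketch, with the zoom break points as illustrative assumptions to be tuned per data set:

```javascript
// Pick a rendering strategy from the current zoom level. The break points
// below are assumptions for illustration, not values from the sample app.
function layerMode(zoom) {
  if (zoom <= 8) return "static-tiles";   // widest extents: pre-built pyramid
  if (zoom <= 12) return "dynamic-tiles"; // optional middle band, on demand
  return "vectors";                       // close in: event driven shapes
}

// Bing Maps v7 wiring (browser only):
// Microsoft.Maps.Events.addHandler(map, 'viewchangeend', function () {
//   var mode = layerMode(map.getZoom());
//   tileLayer.setOptions({ visible: mode !== "vectors" });
//   if (mode === "vectors") loadVectorsForView(map.getBounds());
// });
```

The `loadVectorsForView` call is a hypothetical helper that would fetch only the features inside the current extent, keeping the vector count under the client rendering limits discussed earlier.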

Pro:

  • Fast performance – although not as fast as a pure static tile pyramid, it offers good performance through the entire zoom depth.
  • Allows fully event driven vectors at higher zoom levels on the bottom end of the pyramid.

Con:

  • Static data at larger extents and lower zoom levels
  • Event driven objects are only available at the bottom end of the pyramid

Example:
sample site and demo video

tile layer sample

Fig 4 - Example of a tileLayer view - point data for earthquakes and Mile Markers

Fig 5 - Example of same data at a higher zoom using vector data display

4) Heatmap
Heatmaps refer to the use of color gradient or opacity overlays to display data density. The advantage of heatmaps is the data reduction performed by the aggregating algorithm. To determine the color/opacity of a data set at a location, the data is first aggregated by either a polygon or a grid cell. The sum of the data in a given grid cell is then applied to the color gradient dot for that cell. If heatmaps are rendered client side, performance is good only up to the latency limits of server side queries, internet bandwidth, and local rendering.
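The aggregation step works like this in miniature: bin points into grid cells, then turn each cell’s count into a gradient color or opacity. Cell size and the linear ramp here are arbitrary choices for illustration:

```javascript
// Bin point data into grid cells, then map each cell's count to an opacity.
function gridCounts(points, cellSize) {
  var cells = {};
  points.forEach(function (p) {
    var key = Math.floor(p.x / cellSize) + "_" + Math.floor(p.y / cellSize);
    cells[key] = (cells[key] || 0) + 1; // aggregate: sum of points per cell
  });
  return cells;
}

function cellOpacity(count, maxCount) {
  return Math.min(1, count / maxCount); // linear ramp; a color LUT also works
}

// Client side rendering then loops over the cells, e.g. on an html5 canvas:
// ctx.fillStyle = 'rgba(255,0,0,' + cellOpacity(n, max) + ')';
// ctx.fillRect(cx * cellSize, cy * cellSize, cellSize, cellSize);
```

The data reduction is the point: thousands of raw records collapse into at most one draw call per visible cell.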

Fig 6 - Example of heatmap canvas over Bing Maps rendered client side

Grid Pyramids – Server side gridding
Hybrid server side gridding offers significant performance advantages when coupled with pre-processed grid cells. One gridding technique processes a SQL data resource into a quadkey structure. Each grid cell is identified by its unique quadkey and contains the data aggregate for that cell. Sorting the grid quadkeys by length identifies all of the grid aggregates at a specific quadtree level. This allows the client to efficiently download the grid data aggregates at each zoom level and render them locally in an HTML5 canvas over the top of a Bing Maps view. Since all grid levels are precompiled, cell resolution can be adjusted by zoom level.
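Because a quadkey’s length is its quadtree level, the pre-processing and per-level lookup are both straightforward. A sketch, assuming leaf-level counts have already been computed from the SQL resource:

```javascript
// Roll leaf-level quadkey counts up the pyramid by prefix truncation:
// every prefix of a leaf quadkey is an ancestor cell that absorbs its count.
function buildGridPyramid(leafCounts) {
  var pyramid = {};
  Object.keys(leafCounts).forEach(function (qk) {
    for (var len = qk.length; len > 0; len--) {
      var prefix = qk.substring(0, len);
      pyramid[prefix] = (pyramid[prefix] || 0) + leafCounts[qk];
    }
  });
  return pyramid;
}

// Pull out one zoom level: quadkey length equals quadtree level.
function levelCells(pyramid, level) {
  var out = {};
  Object.keys(pyramid).forEach(function (qk) {
    if (qk.length === level) out[qk] = pyramid[qk];
  });
  return out;
}
```

The client only ever downloads one `levelCells` slice per zoom level, which is what keeps wide-extent displays of very large data sets cheap.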

Pro:

  • Efficient display of very large data sets at wide extents
  • Can be coupled with vector displays at higher zoom levels for event driven objects

Con: gridding is pre-processed

  • real time data cannot be displayed
  • storage and time limitations apply to queries that have large numbers of permutations

Fig 7 – Grid Pyramid screen shot of UI showing opacity heatmap of Botnet infected computers

5) Thematic
Thematic maps use spatial regions such as states or zipcodes to aggregate data into color coded polygons. The data is aggregated for each region, and the region is color coded to show its relative value. A hierarchy of polygons allows zoom levels to switch to more detailed regions at closer zooms. An example hierarchy might be Country, State, County, Sales Territory, Zipcode, Census Block.
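The color coding is typically done with quantile (percentile) breaks, so each color bucket holds roughly the same number of regions. A sketch of that quantization, with the color ramp left as a comment:

```javascript
// Compute quantile break values so regions divide evenly across buckets.
function quantileBreaks(values, buckets) {
  var sorted = values.slice().sort(function (a, b) { return a - b; });
  var breaks = [];
  for (var i = 1; i < buckets; i++) {
    breaks.push(sorted[Math.floor(i * sorted.length / buckets)]);
  }
  return breaks;
}

// Map a region's aggregated value to a bucket index for the color ramp,
// e.g. ['#fee5d9', '#fcae91', '#fb6a4a', '#cb181d'] for four buckets.
function bucketFor(value, breaks) {
  var i = 0;
  while (i < breaks.length && value >= breaks[i]) i++;
  return i;
}
```

A quantized percentile range like this is what keeps a few outlier regions from washing out the variation everywhere else.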

Pro:

  • Large data resources are aggregated into meaningful geographic regions.
  • Analysis is often easier using color ranges for symbolizing data variation

Con:

  • Rendering client side is limited to a few hundred polygons
  • Very large data sets require pre-processing data aggregates by region

Fig 8 - thematic map displaying data aggregated over 210 DMA regions using a quantized percentile range

6) Future trends
Big Data visualization is an important topic as the web continues to generate massive amounts of data useful for analysis. There are a couple of technologies on the horizon that will help with visualization of very large data resources.

A. Leverage of client side GPU

Here is an example of WebGL using CanvasLayer: http://www.web-maps.com/WebGLTest (Firefox, Chrome, and IE11 only – it cannot be viewed in IE10).

This sample shows the speed of pan/zoom rendering for 30,000 random points, which would overwhelm typical js shape rendering. Performance is good up to about 500,000 points, according to Brendan Kenny. Complex shapes need to be built up from triangle primitives. Tessellation rates for polygon generation approach 1,000,000 triangles per second using libtess. Once tessellated, the immediate mode graphics pipeline can navigate at up to 60fps. Sample code is available on GitHub.

This performance is achieved by leveraging the client GPU. Because immediate mode graphics is a powerful animation engine, time animations can be used to uncover data patterns and anomalies, as well as to make some really impressive dynamic maps like this Uber sample. Unfortunately, all the upstream latency remains: collecting the data from storage and sending it across the wire. Since we’re talking about larger sets of data, this latency is more pronounced. Once data initialization finishes, client side performance is amazing. Just don’t go back to the server for new data very often.
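Much of that client side speed comes from doing the projection math once, up front: lon/lat pairs are pre-projected into Web Mercator “world” coordinates and uploaded to a GPU buffer, so each frame is only a pan/zoom transform. A sketch of that pre-processing step (the function names are mine, not from the sample):

```javascript
// Project lon/lat into Web Mercator unit space [0,1] x [0,1], the same
// space the slippy tile pyramid uses, so pan/zoom is a matrix multiply.
function lonLatToWorld(lon, lat) {
  var x = (lon + 180) / 360;
  var sinLat = Math.sin(lat * Math.PI / 180);
  var y = 0.5 - Math.log((1 + sinLat) / (1 - sinLat)) / (4 * Math.PI);
  return [x, y];
}

// Pack projected positions into a typed array for one-time GPU upload,
// e.g. gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW).
function buildPointBuffer(lonLats) {
  var data = new Float32Array(lonLats.length * 2);
  lonLats.forEach(function (p, i) {
    var w = lonLatToWorld(p[0], p[1]);
    data[2 * i] = w[0];
    data[2 * i + 1] = w[1];
  });
  return data;
}
```

After the one-time upload, navigation never touches the point data again, which is why the latency cost is paid entirely at initialization.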

Pro:

  • Good client side navigation performance up to about 500,000 points

Con:

  • requires a webgl enabled browser
  • requires GPU on the client hardware
  • subject to latency issues of server query and internet streaming
  • WebGL tessellation triangle primitives make display of polylines and polygons complex

Fig 9 – test webGL 30,000 random generated points (requires WebGL enabled browser – Firefox, Chrome, IE11)

Note: IE11 added WebGL capability, which is a big boost for the web. There are still some glitches, however, and gl_PointSize in the shader is broken for simple points like those in this sample.

Fig 10 – Very interesting WebGL animations of shipping GPS tracks using WebGL Canvas –courtesy Brendan Kenny

B. Leverage of server side GPU
MapD – Todd Mostak has developed a GPU based spatial query system called MapD (Massively Parallel Database)

MapD Synopsis:
  • MapD is a new database in development at MIT, created by Todd Mostak.
  • MapD stands for “massively parallel database.”
  • The system uses graphics processing units (GPUs) to parallelize computations. Some statistical algorithms run 70 times faster compared to CPU-based systems like MapReduce.
  • A MapD server costs around $5,000 and runs on the same power as five light bulbs.
  • MapD runs at between 1.4 and 1.5 teraflops, roughly equal to the fastest supercomputer in 2000.
  • Uses SQL to query data.
  • Mostak intends to take the system open source sometime in the next year.
  • Bing Test: http://onterrawms.blob.core.windows.net/bingmapd/index.htm

    Bing Test shows an example of tweet points over Bing Maps and illustrates the performance boost from the MapD query engine. Each zoom or pan results in a GetMap request to the MapD engine, which queries millions of tweet point records (81 million tweets, Oct 19 – Oct 30) and generates a viewport png image for display over the Bing Map. The server side query latency is amazing considering the population size of the data. Here are a couple of screen capture videos to give you an idea of the higher fps rates:

    MapDBingTestAerialYellow50ms.wmv
    MapDBingHeatTest.wmv
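The GetMap round trip behind these captures can be sketched as a simple viewport-to-URL mapping. The parameter names below are illustrative assumptions, not the actual MapD interface:

```javascript
// Build a GetMap-style request for the current viewport. Every pan or zoom
// issues one such request; the heavy lifting happens in the GPU database,
// and the client only overlays the returned png. Parameter names assumed.
function getMapUrl(base, bounds, width, height) {
  return base + "?request=GetMap" +
    "&bbox=" + [bounds.west, bounds.south, bounds.east, bounds.north].join(",") +
    "&width=" + width + "&height=" + height;
}
```

Because the result is a single viewport-sized image, client side rendering cost stays constant no matter how many tweet records the query touched.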

    Interestingly, IE and Firefox handle cache in such a way that animations up to 100fps are possible. I can set a play interval as low as 10ms and the player appears to do nothing; however, the 24hr x 12 days = 288 images are all downloaded in just a few seconds. Consequently, the next time through the play range the images come from cache and the animation is very smooth. Chrome handles local cache differently in Windows 8 and won’t grab from cache the second time. In the demo case the sample runs at 500ms, or 2fps, which is kind of jumpy, but at least it works in Windows 8 Chrome with an ordinary internet download speed of 8Mbps.

    Demo site for MapD: http://mapd.csail.mit.edu/

    Pro:

    • Server side performance up to 70x
    • Internet stream latency reduced to just the viewport image overlay
    • Client side rendering as a single image overlay is fast

    Con:

    • Source code not released, and there may be proprietary license restrictions
    • Most web servers do not include GPU or GPU clusters – especially cloud instances

    Note: Amazon AWS offers GPU clusters, but they are not cheap.

    Cluster GPU Quadruple Extra Large – 22 GiB memory, 33.5 EC2 Compute Units, 2 x NVIDIA Tesla “Fermi” M2050 GPUs, 1690 GB of local instance storage, 64-bit platform, 10 Gigabit Ethernet ($2.10 per hour)

    NVidia Tesla M2050 – 448 CUDA Cores per GPU and up to 515 Gigaflops of double-precision peak performance in each GPU!

    Fig 11 - Demo displaying public MapD engine tweet data over Bing Maps

    C. Spatial Hadoop – http://spatialhadoop.cs.umn.edu/
    Spatial Hadoop applies the parallelism of Hadoop clusters to spatial problems using the MapReduce technique made famous by Google. In the Hadoop world a problem space is distributed across multiple CPUs or servers. Spatial Hadoop adds a nice collection of spatial objects and indices. Although Azure Hadoop supports .NET, there doesn’t seem to be a spatial Hadoop in the works for .NET projects. Apparently MapD as open source would leapfrog Hadoop clusters, at least for performance per dollar.

    D. In-Memory database (SQL Server 2014 Hekaton in preview release) – Microsoft plans to enhance the next version of SQL Server with in-memory options. SQL Server 2014 in-memory options allow high speed queries for very large data sets when deployed to high memory capacity servers.

    Current SQL Server In-Memory OLTP CTP2

    Creating Tables
    Specifying that the table is a memory-optimized table is done using the MEMORY_OPTIMIZED = ON clause. A memory-optimized table can only have columns of these supported datatypes:

    • Bit
    • All integer types: tinyint, smallint, int, bigint
    • All money types: money, smallmoney
    • All floating types: float, real
    • date/time types: datetime, smalldatetime, datetime2, date, time
    • numeric and decimal types
    • All non-LOB string types: char(n), varchar(n), nchar(n), nvarchar(n), sysname
    • Non-LOB binary types: binary(n), varbinary(n)
    • uniqueidentifier

    Since geometry and geography data types are not supported in the SQL Server 2014 in-memory release, spatial data queries will be limited to point (lat,lon) float/real data columns. It has been noted previously that for point data, float/real columns have equivalent or even better search performance than points in geography or geometry form. In-memory optimizations would then apply primarily to spatial point sets rather than polygon sets.

    Natively Compiled Stored Procedures
    The best execution performance is obtained when using natively compiled stored procedures with memory-optimized tables. However, there are limitations on the Transact-SQL language constructs that are allowed inside a natively compiled stored procedure, compared to the rich feature set available with interpreted code. In addition, natively compiled stored procedures can only access memory-optimized tables and cannot reference disk-based tables.

    SQL Server 2014 natively compiled stored procedures will not include any spatial functions. This means optimizations at this level will also be limited to float/real lat,lon column data sets.

    For fully spatialized in-memory capability we’ll probably have to wait for SQL Server 2015 or 2016.

    Pro:

    • Reduce server side latency for spatial queries
    • Enhances performance of image based server side techniques
      • Dynamic Tile pyramids
      • images (similar to MapD)
      • Heatmap grid clustering
      • Thematic aggregation

    Con:

    • Requires special high memory capacity servers
    • It’s still unclear what performance enhancements can be expected from spatially enabled tables

    E. Hybrids

    The trends point to a hybrid solution in the future, one that addresses the server side query bottleneck as well as the client side navigation rendering bottleneck.

    Server side –
    a. In-Memory spatial DB
    b. Or GPU based parallelized queries

    Client side – GPU enhanced, with some version of WebGL-type functionality that makes use of the client GPU

    Summary

    Techniques are available today that can accommodate large data resources in Bing Maps. Trends indicate that near future technology can really increase performance and flexibility. Perhaps the sweet spot for Big Data map visualization over the next few years will look like a MapD or a GPU Hadoop engine on the server communicating to a WebGL UI over 1 gbps fiber internet.

    Orwell feared that we would become a captive audience. Huxley feared the truth would be drowned in a sea of irrelevance.

    Amusing Ourselves to Death, Neil Postman

    Of course, in America, we have to have the best of both worlds. Here’s my small contribution to irrelevance:

    Fig 12 - Heatmap animation of Twitter from MapD over Bing Maps (100fps)

    Borders and Big Data

    Borders_Fig1

    Fig 1 – Big Data Analytics is a lens, the data is a side effect of new media

    I was reflecting on borders recently, possibly because of reading Cormac McCarthy’s The Border Trilogy. Borders come up fairly often in mapping:

    • Geography – national political borders, administrative borders
    • Cartography – border line styles, areal demarcation
    • Web Maps – pixel borders bounding polygonal event handlers
    • GIS – edges between nodes defining faces
    • Spatial DBs – Dimensionally Extended nine-Intersection Model (DE-9IM) 0,1 1,0 1,1 1,2 2,1

    However, this is not about map borders – digital or otherwise.

    McCarthy is definitely old school, if not Faulknerian. All fans of Neal Stephenson are excused. The Border Trilogy of course is all about a geographic border, the Southwest US Mexico border in particular. At other levels, McCarthy is rummaging about surfacing all sorts of borders: cultural borders, language borders (half the dialogue is Spanish), class borders, time borders (coming of age, epochal endings), moral borders with their many crossings. The setting is prewar 1930’s – 50’s, a pre-technology era as we now know it, and only McCarthy’s mastery of evocative language connects us to these times now lost.

    A random excerpt illustrates:

    “Because the outer door was open the flame in the glass fluttered and twisted and the little light that it afforded waxed and waned and threatened to expire entirely. The three of them bent over the poor pallet where the boy lay looked like ritual assassins. Bastante, the doctor said Bueno. He held up his dripping hands. They were dyed a rusty brown. The iodine moved in the pan like marbling blood. He nodded to the woman. Ponga el resto en el agua, he said. . . . “

    The Crossing, Chapter III, p.24

    Technology Borders
    There are other borders, in our present preoccupation, for instance, “technology” borders. We’ve all recently crossed a new media border and are still feeling our way in the dark wondering where it may all lead. All we know for sure is that everything is changed. In some camps the euphoria is palpable, but vaguely disturbing. In others, change has only lately dawned on expiring regimes. Political realms are just now grappling with its meaning and consequence.

    Big Data – Big Hopes
    One of the more recent waves of the day is “Big Data,” by which is meant the collection and analysis of outlandishly large data sets, recently come to light as a side effect of new media. Search, location, communications, and social networks are all data gushers and the rush is on. There is no doubt that Big Data Analytics is powerful.

    Disclosure: I’m currently paid to work on the periphery of a Big Data project, petabytes of live data compressed into cubes, pivoted, sliced, and doled out to a thread for visualizing geographically. My minor end of the Big Data shtick is the map. I am privy to neither data origins nor ends, but even without reading tea leaves, we can sense the forms and shadows of larger spheres snuffling in the night.

    Analytics is used to learn from the past and hopefully see into the future, hence the rush to harness this new media power for business opportunism, and good old fashioned power politics. Big Data is an edge in financial markets where microseconds gain or lose fortunes. It can reveal opinion, cultural trends, markets, and social movements ahead of competitors. It can quantify lendibility, insurability, taxability, hireability, or securability. It’s an x-ray into social networks where appropriate pressure can gain advantage or thwart antagonists. Insight is the more benign side of Big Data. The other side, influence, attracts the powerful like bees to sugar.

    Analytics is just the algorithm or lens to see forms in the chaos. The data itself is generated by new media gate keepers, the Googles, Twitters, and Facebooks of our new era, who are now in high demand, courted and feted by old regimes grappling for advantage.

    Border Politics
    Despite trenchant warnings by the likes of Nassim Taleb, “Beware the Big Errors of ‘Big Data’”, and Evgeny Morozov’s Net Delusion, the latest issue of “MIT Technology Review” declares in all caps:

    “BIG DATA WILL SAVE POLITICS.”
    “The mobile phone, the Net, and the spread of information —
    a deadly combination for dictators”
    MIT Tech Review

    Really?

    Dispelling the possibility of irony – feature articles in quick succession:

    “A More Perfect Union”
    “The definitive account of how the Obama campaign used big data to redefine politics.”
    By Sasha Issenberg
    “How Technology Has Restored the Soul of Politics”
    “Longtime political operative Joe Trippi cheers the innovations of Obama 2012, saying they restored the primacy of the individual voter.”
    By Joe Trippi
    “Bono Sings the Praises of Technology”
    “The musician and activist explains how technology provides the means to help us eradicate disease and extreme poverty.”
    By Brian Bergstein

    Whoa, anyone else feeling queasy? This has to be a classic case of Net Delusion! MIT Tech Review is notably the press ‘of technologists’, ‘by technologists’, and ‘for technologists’, but the hubris is striking even for academic and engineering types. The masters of technology are not especially sensitive to their own failings, after all, Google, the prima donna of new media, is anything but demure in its ambitions:

    “Google’s mission is to organize the world’s information and make it universally accessible and useful.”
    … and in unacknowledged fine print, ‘for Google’

    Where power is apparent the powerful prevail, and who is more powerful than the State? Intersections of technologies often prove fertile ground for change, but change is transient, almost by definition. Old regimes accommodate new regimes, harnessing new technologies to old ends. The Mongol pony, machine gun, aeroplane, and nuclear fission bestowed very temporary technological advantage. It is not quite apparent what is inevitable about the demise of old regime power in the face of new information velocity.

    What Big Data offers with one hand it takes away with the other. Little programs like “socially responsible curated treatment” or “cognitive infiltration” are only possible with Big Data analytics. Any powerful elite worthy of the name would love handy Ministry of Truth programs that steer opinion away from “dangerous” ideas.

    “It is not because the truth is too difficult to see that we make mistakes… we make mistakes because the easiest and most comfortable course for us is to seek insight where it accords with our emotions – especially selfish ones.”

    Alexander Solzhenitsyn

    Utopian Borders
    Techno utopianism, embarrassingly ardent in the Jan/Feb MIT Tech Review, blinds us to dangerous potentials. There is no historical precedent to presume an asymmetry of technology somehow inevitably biased to higher moral ends. Big Data technology is morally agnostic and only reflects the moral compass of its wielder. The idea that “… the spread of information is a deadly combination for dictators” may just as likely be “a deadly combination” for the naïve optimism of techno utopianism. Just ask an Iranian activist. When the bubble bursts, we will likely learn the hard way how the next psychopathic overlord will grasp the handles of new media technology, twisting big data in ways still unimaginable.

    Big Data Big Brother?
    Big Brother? US linked to new wave of censorship, surveillance on web
    Forbes Big Data News Roundup
    The Problem with Our Data Obsession
    The Robot Will See You Now
    Educating the Next Generation of Data Scientists
    Moderated by Edd Dumbill (I’m not kidding)

    Digital Dictatorship
    Wily regimes like the DPRK can leverage primitive retro fashion brutality to insulate their populace from new media. Islamists master new media for more ancient forms of social pressure, sharia internet, fatwah by tweet. Oligarchies have co-opted the throttle of information, doling out artfully measured information and disinformation into the same stream. The elites of enlightened western societies adroitly harness new market methods for propagandizing their anaesthetized citizenry.

    Have we missed anyone?

    … and of moral borders
    “The battle line between good and evil runs through the heart of every man”
    The Gulag Archipelago, Alexander Solzhenitsyn

    Summary

    We have crossed the border. Everything is changed. Or is it?

    Interestingly Cormac McCarthy is also the author of the Pulitzer Prize winning book, The Road, arguably about erasure of all borders, apparently taking up where techno enthusiasm left off.

    Borders_Fig2
    Fig 2 – a poor man’s Big Data – GPU MapD – can you find your tweets?

    Extraterrestrial Map Kinections

    image

    Fig 1 – LRO Color Shaded Relief map of moon – Silverlight 5 XNA with Kinect interface

     

    Silverlight 5 was released after a short delay, at the end of last week.
    Just prior to exiting stage left, Silverlight, along with all plugins, shares a last aria. The spotlight now shifts abruptly to a new diva, mobile html5. Backstage the enterprise awaits with a bouquet of roses. Their concourse will linger long into the late evening of 2021.

    The Last Hurrah?

    Kinect devices continue to generate a lot of hacking interest. With the release of an official Microsoft Kinect beta SDK for Windows, things get even more interesting. Unfortunately, Kinect and the web aren’t exactly ideal partners. It’s not that web browsers wouldn’t benefit by moving beyond the venerable mouse/keyboard events. After all, look at the way mobile touch, voice, inertia, gyro, accelerometer, gps . . . have all suddenly become base features in mobile browsing. The reason Kinect isn’t part of the sensor event farmyard may be just a lack of portability and an ‘i’ prefix. Shrinking a Kinect doesn’t work too well as stereoscopic imagery needs a degree of separation in a Newtonian world.

    [The promised advent of NearMode (50cm range) offers some tantalizing visions of 3D voxel UIs. Future mobile devices could potentially take advantage of the human body’s bi-lateral symmetry. Simply cut the device in two and mount one half on each shoulder, but that isn’t the state of hardware at present. ]

    clip_image001

    Fig 2 – a not so subtle fashion statement OmniTouch

     

    For the present, experimenting with Kinect control of a Silverlight web app requires a relatively static configuration and a three-step process: the Kinect out there, beyond the stage lights, and the web app over here, close at hand, with a software piece in the middle. The Kinect SDK, which roughly corresponds to our visual and auditory cortex, amplifies and simplifies a flood of raw sensory input to extract bits of “actionable meaning.” The beta Kinect SDK gives us device drivers and APIs in managed code. However, as these APIs have not been compiled for use with Silverlight runtime, a Silverlight client will by necessity be one step further removed.

    Microsoft includes some rich sample code as part of the Kinect SDK download. In addition there are a couple of very helpful blog posts by David Catuhe and a codeplex project, kinect toolbox.

    Step 1:

    The approach for using Kinect in this experimental map interface is to use the GestureViewer code from Kinect Toolbox to capture some primitive commands arising from sensory input. The command repertoire is minimal: four compass direction swipes, plus two circular gestures for zooming (circle clockwise to zoom in, circle counterclockwise to zoom out). Voice commands are pretty much a freebie, so I’ve added a few to the mix. Since the GestureViewer toolbox includes a learning template based gesture module, you can capture just about any gesture desired. I’m choosing to keep this simple.

    Step 2:

    Once gesture recognition for these 6 commands is available, step 2 is handing commands off to a Silverlight client. In this project I used a socket service running on a separate thread. As gestures are detected, they are pushed out over a tcp socket service on local port 4530. There are other approaches that may be better with the final release of Silverlight 5.

    Step 3:

    The Silverlight client listens on port 4530, reading command strings that show up. Once read, the command can then be translated into appropriate actions for our Map Controller.

    clip_image003

    Fig 3 – Kinect to Silverlight architecture

    Full Moon Rising

     

    But first, instead of the mundane, let’s look at something a bit extraterrestrial, a more fitting client for such “extraordinary” UI talents. NASA has been very busy collecting large amounts of fascinating data on our nearby planetary neighbors. One data set recently released by ASU stitches together a comprehensive lunar relief map with beautiful color shading. Wow, what if the moon really looked like this!

    clip_image008

    Fig 4 – ASU LRO Color Shaded Relief map of moon

    In addition to our ASU moon, USGS has published a set of imagery for Mars, Venus, and Mercury, as well as some Saturn and Jupiter moons. Finally, JPL thoughtfully shares a couple of WMS services and some imagery of the other planets:
    http://onmars.jpl.nasa.gov/wms.cgi?version=1.1.1&request=GetCapabilities
    http://onmoon.jpl.nasa.gov/wms.cgi?version=1.1.1&request=GetCapabilities

    This type of data wants to be 3D, so I’ve brushed off code from a previous post, NASA Neo 3D XNA, and adapted it for planetary data, minus the population bump map. However, bump maps for depicting terrain relief are still a must-have. A useful tool for generating bump or normal imagery from color relief is SSBump Generator v5.3. The result using this tool is an image that encodes the relative elevation of the moon’s surface. This is added to the XNA rendering pipeline to combine a surface texture with the color relief imagery, where it can then be applied to a simplified spherical model.

    clip_image004

    Fig 5 – part of normal map from ASU Moon Color Relief imagery

    The result is seen in the MoonViewer client with the added benefit of immediate mode GPU rendering that allows smooth rotation and zoom.

    The other planets and moons have somewhat less data available, but still benefit from the XNA treatment. Only Earth, Moon, Mars, Ganymede, and Io have data affording bump map relief.

    I also added a quick 2D WMS viewer in HTML, using OpenLayers against the JPL WMS servers, to take a look at lunar landing sites. Default OpenLayers isn’t especially pretty, but it takes less than 20 lines of js to get a zoomable viewer with landing locations. I would have preferred the elegance of Leaflet.js, but EPSG:4326 isn’t supported in L.TileLayer.WMS(). MapProxy promises a way to proxy in the planet data as EPSG:3857 tiles for Leaflet consumption, but OpenLayers offers a simpler path.
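That viewer amounts to roughly the following, in OpenLayers 2 style. The WMS layer name is an assumption here, and the landing site list is truncated to two entries for illustration:

```javascript
// A couple of lunar landing sites (lon/lat); the full viewer lists more.
var landingSites = [
  { name: "Apollo 11", lon: 23.47, lat: 0.67 },
  { name: "Apollo 17", lon: 30.77, lat: 20.19 }
];

// Browser-only setup against the JPL lunar WMS endpoint. OpenLayers 2
// defaults to EPSG:4326, which is why it works here where Leaflet did not.
function initViewer() {
  var map = new OpenLayers.Map("map");
  map.addLayer(new OpenLayers.Layer.WMS("Moon",
    "http://onmoon.jpl.nasa.gov/wms.cgi",
    { layers: "LO" })); // layer name assumed; see the GetCapabilities response
  var markers = new OpenLayers.Layer.Markers("Landing sites");
  map.addLayer(markers);
  landingSites.forEach(function (s) {
    markers.addMarker(new OpenLayers.Marker(
      new OpenLayers.LonLat(s.lon, s.lat)));
  });
  map.zoomToMaxExtent();
}
```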


    Fig 6 – OpenLayer WMS viewer showing lunar landing sites

    Now that the viewer is in place, it’s time for a test drive. Here is a ClickOnce installer for GestureViewer, modified to work with the Silverlight socket service: http://107.22.247.211/MoonKinect/

    Recall that this is a beta SDK, so in addition to the Kinect itself, there are some additional runtime installs required:

    Using the Kinect SDK Beta

    Download Kinect SDK Beta 2:
    http://www.kinectforwindows.org/download/

    Be sure to look at the system requirements and the installation instructions further down the page. This is still a beta and requires a few pieces; the release SDK is rumored to be available in the first part of 2012.

    You may have to download some additional software as well as the Kinect SDK.

    Finally, we are making use of port 4530 for the socket service, so you will likely need to open this port in your local firewall.

    As you can see, this is not exactly a user friendly installation, but the reward is Kinect control of a mapping environment. If you are hesitant to go through all of this install trouble, here is a video link that will give you an idea of the results.

    YouTube video demonstration of Kinect Gestures

     

    Voice commands using the Kinect are very simple to add, so this version adds a few.

    Here is the listing of available commands:

           public void SocketCommand(string command)
            {
                switch (command)
                {
                        // Kinect voice commands
                    case "mercury-on": { MercuryRB.IsChecked = true; break; }
                    case "venus-on": { VenusRB.IsChecked = true; break; }
                    case "earth-on": { EarthRB.IsChecked = true; break; }
                    case "moon-on": { MoonRB.IsChecked = true; break; }
                    case "mars-on": { MarsRB.IsChecked = true; break; }
                    case "marsrelief-on": { MarsreliefRB.IsChecked = true; break; }
                    case "jupiter-on": { JupiterRB.IsChecked = true; break; }
                    case "saturn-on": { SaturnRB.IsChecked = true; break; }
                    case "uranus-on": { UranusRB.IsChecked = true; break; }
                    case "neptune-on": { NeptuneRB.IsChecked = true; break; }
                    case "pluto-on": { PlutoRB.IsChecked = true; break; }

                    case "callisto-on": { CallistoRB.IsChecked = true; break; }
                    case "io-on": { IoRB.IsChecked = true; break; }
                    case "europa-on": { EuropaRB.IsChecked = true; break; }
                    case "ganymede-on": { GanymedeRB.IsChecked = true; break; }
                    case "cassini-on": { CassiniRB.IsChecked = true; break; }
                    case "dione-on": { DioneRB.IsChecked = true; break; }
                    case "enceladus-on": { EnceladusRB.IsChecked = true; break; }
                    case "iapetus-on": { IapetusRB.IsChecked = true; break; }
                    case "tethys-on": { TethysRB.IsChecked = true; break; }
                    case "moon-2d":
                        {
                            MoonRB.IsChecked = true;
                            Uri uri = Application.Current.Host.Source;
                            System.Windows.Browser.HtmlPage.Window.Navigate(new Uri(uri.Scheme + "://" + uri.DnsSafeHost + ":" + uri.Port + "/MoonViewer/Moon.html"), "_blank");
                            break;
                        }
                    case "mars-2d":
                        {
                            MarsRB.IsChecked = true;
                            Uri uri = Application.Current.Host.Source;
                            System.Windows.Browser.HtmlPage.Window.Navigate(new Uri(uri.Scheme + "://" + uri.DnsSafeHost + ":" + uri.Port + "/MoonViewer/Mars.html"), "_blank");
                            break;
                        }
                    case "nasaneo":
                        {
                            EarthRB.IsChecked = true;
                            System.Windows.Browser.HtmlPage.Window.Navigate(new Uri("http://107.22.247.211/NASANeo/"), "_blank"); break;
                        }
                    case "rotate-east": {
                            RotationSpeedSlider.Value += 1.0;
                            tbMessage.Text = "rotate east";
                            break;
                        }
                    case "rotate-west":
                        {
                            RotationSpeedSlider.Value -= 1.0;
                            tbMessage.Text = "rotate west";
                            break;
                        }
                    case "rotate-off":
                        {
                            RotationSpeedSlider.Value = 0.0;
                            tbMessage.Text = "rotate off";
                            break;
                        }
                    case "reset":
                        {
                            RotationSpeedSlider.Value = 0.0;
                            orbitX = 0;
                            orbitY = 0;
                            tbMessage.Text = "reset view";
                            break;
                        }
    
                    //Kinect Swipe algorithmic commands
                    case "swipetoleft":
                        {
                            orbitY += Microsoft.Xna.Framework.MathHelper.ToRadians(15);
                            tbMessage.Text = "orbit left";
                            break;
                        }
                    case "swipetoright":
                        {
                            orbitY -= Microsoft.Xna.Framework.MathHelper.ToRadians(15);
                            tbMessage.Text = "orbit right";
                            break;
                        }
                    case "swipeup":
                        {
                            orbitX += Microsoft.Xna.Framework.MathHelper.ToRadians(15);
                            tbMessage.Text = "orbit up";
                            break;
                        }
                    case "swipedown":
                        {
                            orbitX -= Microsoft.Xna.Framework.MathHelper.ToRadians(15);
                            tbMessage.Text = "orbit down";
                            break;
                        }
    
                    //Kinect gesture template commands
                    case "circle":
                        {
    
                            if (scene.Camera.Position.Z > 0.75f)
                            {
                                scene.Camera.Position += zoomInVector * 5;
                            }
                            tbMessage.Text = "zoomin";
                            break;
                        }
                    case "circle2":
                        {
                            scene.Camera.Position += zoomOutVector * 5;
                            tbMessage.Text = "zoomout";
                            break;
                        }
                }
            }

    Possible Extensions

    After posting this code, I added an experimental stretch vector control for zooming and two-axis twisting of planets. These are activated by voice: ‘vector twist’, ‘vector zoom’, and ‘vector off.’ The map control side of gesture commands could also benefit from some easing function animations. Another avenue of investigation would be some type of pointer intersection, using a ray to indicate planet surface locations for events.

    Summary

    Even though Kinect browser control is not prime time material yet, it is a lot of experimental fun! The MoonViewer control experiment is relatively primitive. Cursor movement and click using posture detection and hand tracking are also feasible, but fine movement is still a challenge. Two-hand vector control for 3D scenes is also promising and integrates very well with SL5 XNA immediate mode graphics.

    Kinect 2.0 and NearMode will offer additional granularity. Instead of large swipe gestures, finger level manipulation should be possible. Think of 3D voxel space manipulation of subsurface geology, or thumb and forefinger vector3 twisting of LiDAR objects, and you get an idea where this could go.

    The merger of TV and internet holds promise for both whole body and NearMode Kinect interfaces. Researchers are also adapting Kinect technology for mobile as illustrated by OmniTouch.

    . . . and naturally, lip reading ought to boost the Karaoke crowd (could help lip synching pop singers and politicians as well).


    Fig 7 – Jupiter Moon Io

    Paradise Lost


    Fig 1 – web mapping utopia, location uncertain

    “Not surprisingly, in a culture in which information was becoming standardized and repeatable, mapmakers began to exclude “paradise” from their charts on the grounds that its location was too uncertain.”

    —Neil Postman, Technopoly (1992)

    I recently spent a few hours re-reading some old books on the shelf. Neil Postman’s Technopoly triggered some reflection on the present state of web mapping. As a technophile, I tend to look forward to the next version of whatever with anticipation. Taking the longer view, however, can be a useful exercise.

    Progress – yes, no, maybe?

    Postman’s critique asserts that Technology is in the vanguard of illusory “progress,” and that the impact of technology shapes deep things in a culture with unimagined side effects. The core technology of our current cultural turnover is electronics, and the key utility is “Information,” its storage and flow. Information volume and velocity apparently grow exponentially over time, loosely tracking the famous Moore’s Law curve. Our web mapping subset of Technology is embedded in this Information ramp-up, and we are still grappling with the confusion of Information and Knowledge resulting from an accelerating information glut.

    The principle of Information is popping up all over, from Claude Shannon’s Information Theory to the Hawking-Bekenstein black hole solution, which surprisingly showed that the total Information content of a black hole is proportional to the surface area of its event horizon in Planck units (as opposed to a cubic volume relation). Naturally the concept of Information leaks into the soft sciences as well. Bit obsession is now woven into the Western economic fabric with its assumption of continuous progress.

    Sociologically, this all leads to a breakup of previous generations of “knowledge monopolies” and a hyper-sensitivity to initial conditions, (the old Ray Bradbury crushed butterfly scenario at the root of chaos theory). The upheaval is met with a kind of assumed optimism, an unquestioned utopian view of progress around the benefits of modern technology. Information is good, right? More information faster is even better! Web maps contribute more information faster, so web mapping is on the side of the Good, the True, and the Beautiful. Naturally web maps can be quite beautiful, whatever that means subjectively.

    The age of typography, ushered into Western culture by Gutenberg, had profound and enduring effects on history and culture, affecting everything – from the Reformation and the rise of Democratic Nationalism to our educational bureaucracy and even common definitions of truth and knowledge. But not until Marshall McLuhan were the typographic origins of these effects popularly visible, and only in retrospect. The electronic age, in its current internet iteration, is undoubtedly creating similarly profound dislocations, whose consequences are not at all apparent at present. Unintended side effects are just that, unintended. Consequences are unintended, because they are unknowable.

    “Technology solved the problem of information scarcity, the disadvantages of which were obvious. But it gave no warnings about the danger of information glut, the disadvantages of which were not seen so clearly, the long range result – information chaos.”

     

    “The world has never before been confronted with information glut and has hardly had time to reflect on its consequences.”

    —Neil Postman, Technopoly (1992)

    Whither web mapping?

    So the question at hand revolves around the smaller microcosm of mapping in the internet era. That an ever growing amount of this information flood is geospatial is indisputable. Computing mobility adds terrestrial location to all business and social enterprise – Tweets to Facebook, Fleet Tracking to Risk Analysis are increasingly tethered to spatial attributes. Maps take all of these streams into a symbolic spatial representation, which filters for location. To a mapmaker everything is a map and terra incognita has long since vanished.

    Web mapping adds an element of exploration, with zoom and pan flight through these abstract spaces, that mirrors movement in our physical world. Our community is also wont to add controls affording endless tinkering with the form, as if one more contribution to universal “Choice fatigue” will add value. But Google glommed on to the real deal. In a state of information overload, the key to riches is meaningful information reduction. Simplification is the heart of search. Web mapping is one more filtering approach reducing information along the axis of spatial proximity.

    Do interactive maps add seriously to comprehension, or just to entertainment as simply a novelty? Cartographers relish pointing out this quandary to web developers and other mere mortals. In the web mapping community, proliferating means can easily be confused with progress. Doesn’t it seem peculiar, for example, to attach any meaning whatsoever to charts of Kindergarteners’ DIBELS scores, let alone median DIBELS scores charted on national school district polygons? Does it lead to anything but a bureaucratic illusion of some control over chaos? No child is better off for the graph (unless distant employment potential as an educational bureaucrat is considered relevant), and, “no child left behind” slogans to the contrary, the overthrow of typography based education proceeds apace in a confusing melee of winners and losers. The application of numeracy to every conceivable problem like this elevates modeling to mythic proportions.

    “When a technology becomes mythic, it is always dangerous because it is then accepted as it is, and is therefore not easily susceptible to modification or control.”

    —Neil Postman, Technopoly (1992)


    What is a map but a modeling technology, a symbolic abstraction of space to visualize a formulaic concept? Maps are all entangled with mathematical models, ellipsoids, surfaces, and transforms, but are we guilty of mythically inflating the power of maps to communicate something of the Good, the True, and the Beautiful? Does the proliferation of web maps, for instance, alter the injustice of Atanas Entchev’s incarceration for the so-called “crime” of immigration? Likely not, and in fact it may contribute to the irony of an ICE database assigning the Entchev family a spatial attribute coinciding with some “Community Education Center” in Newark, NJ.

    “Our inventions are wont to be pretty toys, which distract our attention from serious things. They are but improved means to an unimproved end, an end which it was already but too easy to arrive at.”

    —Henry David Thoreau, Walden (1854), p. 42

    Another pillar of cyber utopian thought is the inevitability of improved community with improved connectivity. The meteoric rise of Facebook exemplifies this confusion. For in fact Facebook is faceless, not anonymous, but prone to carefully crafted pseudonymous identities. Unintended messages ripple across unknowable communities, and histories, whether wanted or unwanted, retain regrettably long tails. Twitter, as well, champions brevity while fragmenting communication across a ghostly crowd of undisclosed persons.

    “The great communication that we have today can lead to complete depersonalization. Then one is just swimming in a sea of communication and no longer encounters persons at all.”

    —Benedict XVI, Light of the World, p. 59

    Ray Kurzweil’s anticipation of the Coming Singularity includes human/non-human relationships via digital intermediaries, a thought with somewhat disturbing implications. Knowledge of location across an intermediary network matters little to human relationships and adding non-human intelligence to the mix is only disturbing. Real human relationships have deeper currents than high velocity information or spatial attribution. Studies in the educational community have repeatedly shown, for instance, that the presence of a real teacher is overwhelmingly more effective than video or online classes. To state the obvious, face to face renders location aware apps irrelevant and leaves artificial intelligence firmly anchored in the creepy category.


    Robot Teacher?

    Assumptions of all goodness by cyber utopians are not at all justified, as remarked by Evgeny Morozov in The Internet in Society: Empowering or Censoring Citizens? Enduring histories on Facebook, subject to examination by Iranian intelligence for hardly promising ends, should be unsettling to all but the grimmest of Marxist utopians. Doubtless a few stray Stalinists at Duke and Zuccotti Park are taking notes on new media and the social web.


    Evgeny’s passing point about "cyber captivity" underlines a growing problem of lost opportunity. The prescience of Aldous Huxley’s Soma comes to mind. Does obsessive gaming, for example, reduce higher value opportunities for learning, productivity, and human relations? Do vast iTunes libraries subtract from the net benefit of personal mastery of a musical instrument? These calculations are impossible to quantify and really revolve around deeper questions of spiritual significance, sub specie aeternitatis.

    More information faster is not necessarily a net positive in another sense. Proliferating conspiracy theories only corroborate Neil Postman’s shuffled deck analogy: a disintegrating information context conditions perception to credulity. Anything is believable because the next card is experienced as random. Unwarranted credulity paves the way to tyranny, as recent history has shown all too tragically. The Rwandan genocide, less than twenty years ago, was incited by incredible claims aired to a credulous public in creepy Goebbels fashion. The future specter of OsGeo web maps delineating the boundaries of inyenzi (cockroach in Kinyarwanda) does little to encourage optimism.

    “The fact is, there are few political, social, and especially personal problems that arise because of insufficient information. Nonetheless, as incomprehensible problems mount, as the concept of progress fades, as meaning itself becomes suspect, the Technopolist stands firm in believing that what the world needs is yet more information.”

    —Neil Postman, Technopoly (1992)

    Summary

    These may all be rather marginally Luddite issues. The genuine “terra incognita” of information technology, and consequently web mapping, is the tectonic plate of culture. What kind of global cultural, economic, and political earthquakes have been set in motion? What tidal wave of changing perceptual process is yet to be hailed from the yardarm?

    Kind of exciting to think about.

    Paradise Lost? Please report to Lost and Found.

    WebBIM? hmmm .. Bing BIM?

    BIM, or Building Information Modeling, isn’t exactly news. I tend to think of it as a part of AEC, starting back in the early generations of CAD as a way to manage a facility in the aftermath of design and construction. BIM is, after all, about Buildings, and Buildings have been CAD turf all these many years. Since those early days of FM, CAD has expanded into the current BIM trend, with building lifecycle management that is swallowing up larger swathes of surrounding environments.

    BIM models are virtual worlds, ‘mirror lands’ of a single building or campus. As BIM grows the isolated BIM models start to aggregate and bump up against the floor of web mapping, the big ‘Mirror Land’. One perspective is to look at a BIM model as a massively detailed Bill of Material (BIM<=>BOM . . bing) in which every component fitted into the model is linked to additional data for specification, design, material, history, approval chains, warranties, and on and on. BIM potentially becomes one massively connected network of hyperlinks with a top level 3D model that mimics the real world.

    Sound familiar? – BIM is a sub-internet on the scale of a single building with an interface that has much in common with web mapping. Could this really be yet another reincarnation of Ted Nelson’s epic Xanadu Project, the quixotic precursor of today’s internet?

    Although of relatively recent origins, BIM has already spawned its own bureaucratic industry with the likes of NBIMS replete with committees, charters, and governance capable of seriously publishing paragraphs like this:

    “NBIM standards will merge data interoperability standards, content values and taxonomies, and process definitions to create standards which define “business views” of information needed to accomplish a particular set of functions as well as the information exchange standards between stakeholders.”

    No kidding, “taxonomies”? I’m tempted to believe that ‘Information’ was cleverly inserted to avoid the embarrassing eventuality of an unadorned “Building Model.” Interesting how Claude Shannon seems to crop up in a lot of acronyms these days: BIM, GIS, NBIMS, ICT, even IT?

    BIM has more recently appeared on the GIS radar with a flurry of discussion applying GIS methods to BIM. Here, for example are a couple of posts with interesting discussion: SpatialSustain, GeoExpressions, and Vector One. Perhaps this is just another turf battle arising in the CAD versus GIS wars. I leave that for the GISCIers and NBIMSers to decide.

    My interest is less in definition and more an observation that buildings too are a part of the rapidly growing “Mirror Land” we call web maps. Competing web maps have driven resolution down to the region of diminishing returns. After all, with 30cm commonly available, is 15cm that much more compelling? However, until recently, Mirror Land has been all about maps and the wide outside world. Even building models ushered in with SketchUp are all about exteriors.

    The new frontier of web mapping is interior spaces, “WebBIM,” “Bing BIM” ( Sorry, I just couldn’t resist the impulse). Before committing to national standards, certifications, and governance taxonomies perhaps we need to just play with this a bit.

    We have Bing Maps Local introducing restaurant photoscapes. Here’s an example of a restaurant in Boston with a series of arrow connected panoramas for virtual exploration of the interior.

    And another recent Bing Maps introduction, mall maps. Who wants to be lost in a mall, or are we lost without our malls?

    And then in the Google world, Art Project explores museums. Cool, Streetside inside!

    It’s not obvious how far these interior space additions will go in the future, but these seem to be trial balloons floated for generating feedback on interior extensions to the web map mirror world. At least they are not introduced with full fledged coverage.

    “Real” BIM moves up the dimension chain from 2D to 3D and on to 4-5D, adding time and cost along the way. Mirror Land is still caught in 2-3D. The upcoming Silverlight 5 release will boost things toward 3-4D. Multi-verse theories aside (now here’s a taxonomy to ponder – the Tegmark cosmological taxonomy of universes), in the 3-4D range full WebBIM can hit the streets. In the meantime, the essential element of spatially hyperlinked data is already here for the curious to play with.

    So what’s a newbie Web BIMMER to do? The answer is obvious, get a building plan and start trying a few things. Starting out in 2D, here is an approach: get a building floorplan, add it to a Bing Maps interface, and then do something simple with it.

    Step 1 – Model a building

    CAD is the place to start for buildings. AEC generates floorplans by the boatload, and there are even some available online, but lacking DWG files the next possibility is using CAD as a capture tool. I tried both approaches. My local grocery store has a nice interior directory that is easily captured in AutoCAD by tracing over the image:

    Fig 5 – King Soopers Store Directory (foldout brochure)

    As an alternative example of a more typical DWG source, the University of Alaska has kindly published their floor plans on the internet.

    In both scenarios the key is getting the DWG into something that can readily be used in a web map application. Since I’m fond of the Bing Silverlight Map Control, the goal is DWG to XAML. Similar things can be done with SVG, and will be as HTML5 increases its reach, and probably even in KML for the Googler minded. At first I thought this would be as easy as starting up Safe Software’s FME, but XAML is not in their writers list, perhaps soon. Next I fell back to the venerable DXF text export, with some C# code to turn it into XAML. This was actually fairly easy with the DXF capture of my local grocery store, since I had kept the DWG limited to simple closed polylines and text, separated by layer names.

    Here is the result:

    Now on to more typical sources: DWG files that are outside of my control. Dealing with arcs and blocks was more than I wanted, so I took an alternative path. FME does have an SVG writer. SVG is hauntingly similar to XAML (especially haunting to the W3C), and writing a simple SVG to XAML translator in C# was easier than any other approach I could think of. There are some XSLT files for SVG to XAML, but instead I took the quick and dirty route of translating SVG text to XAML text in my own translator, giving me more control.
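    The quick and dirty translation can be illustrated with a tiny sketch. Assuming only simple polyline and polygon elements (the actual translator handled more of SVG than this), the mapping is mostly renaming elements and attributes into XAML casing:

```javascript
// Minimal SVG-to-XAML text translation for simple shapes: renames
// <polyline>/<polygon> to <Polyline>/<Polygon> and their points/fill/
// stroke attributes to XAML casing. A sketch only; the translator
// described in the post handled more of SVG than this.
function svgToXaml(svg) {
  return svg
    .replace(/<(\/?)polyline/g, "<$1Polyline")
    .replace(/<(\/?)polygon/g, "<$1Polygon")
    .replace(/\bpoints=/g, "Points=")
    .replace(/\bfill=/g, "Fill=")
    .replace(/\bstroke-width=/g, "StrokeThickness=")  // before stroke=
    .replace(/\bstroke=/g, "Stroke=");
}

var xaml = svgToXaml('<polygon points="0,0 10,0 10,10" fill="gray" stroke="black"/>');
// xaml is now '<Polygon Points="0,0 10,0 10,10" Fill="gray" Stroke="black"/>'
```

    An XSLT stylesheet would be more robust, but for layer-separated polylines and text this kind of direct text rewrite is hard to beat for simplicity.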

    Here is the result:

    Step 2 – Embed XAML into a Bing Map Control interface

    First I wrote a small web app that lets me zoom Bing Maps aerial to a desired building and draw a polyline around its footprint. This polyline is turned into a polygon XAML snippet, added to a TextBox suitable for cut/paste as a map link in my Web BIM experiment.

    <m:MapPolygon Tag="KingSoopers2" x:Name="footprint_1"
        Fill="#FFfdd173" Stroke="Black" StrokeThickness="1" Opacity="0.5"
        MouseLeftButtonUp="MapPolygon_MouseLeftButtonUp">
        <m:MapPolygon.Locations>
            39.05779345,-104.84322060
            39.05772368,-104.84321926
            39.05770910,-104.84302480
            39.05771014,-104.84290947
            39.05772159,-104.84277536
            39.05776116,-104.84277804
            39.05776429,-104.84243204
            39.05833809,-104.84248434
            39.05833288,-104.84283303
            39.05836204,-104.84284510
            39.05835996,-104.84313880
            39.05832872,-104.84313880
            39.05832663,-104.84340836
            39.05825478,-104.84340568
            39.05825374,-104.84354113
            39.05821000,-104.84353979
            39.05820792,-104.84369670
            39.05779137,-104.84367792
            39.05779345,-104.84322060
        </m:MapPolygon.Locations>
    </m:MapPolygon>

    As a demonstration, it was sufficient to simply add this snippet to a map control. The more general method would be to create a SQL table of buildings that includes a geography column of the footprint, suitable for geography STIntersects queries. A typical MainMap.ViewChangeEnd event would then let the UI send a WCF query to the table, retrieving footprints falling into the current viewport as a user navigates in map space. However, the real goal is playing with interior plans, so I left the data connector feature for a future enhancement.
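    A client-side sketch can stand in for the intersects logic such a query would run on the server, here over plain bounding boxes rather than the SQL geography type:

```javascript
// Select building footprints whose bounding box overlaps the current
// viewport: a client-side stand-in for the STIntersects query described
// above. Boxes are { west, south, east, north } in degrees.
function overlaps(a, b) {
  return a.west <= b.east && b.west <= a.east &&
         a.south <= b.north && b.south <= a.north;
}

function footprintsInView(viewport, footprints) {
  return footprints.filter(function (f) {
    return overlaps(viewport, f.bounds);
  });
}

// Hypothetical data for illustration; bounds roughly enclose the
// King Soopers footprint coordinates from the snippet above.
var view = { west: -104.85, south: 39.05, east: -104.84, north: 39.06 };
var all = [
  { id: "KingSoopers2",
    bounds: { west: -104.8437, south: 39.0577, east: -104.8424, north: 39.0584 } },
  { id: "elsewhere",
    bounds: { west: -105.1, south: 39.7, east: -105.0, north: 39.8 } }
];
var visible = footprintsInView(view, all);
```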

    In order to find buildings easily, I added some Geocode Service calls for an address finder. The footprint polygon, with its MouseLeftButtonUp event, leads to a NavigationService that moves to the desired floor plan page. Again, generalizing this would involve keeping these XAML floor plans in a SQL Azure Building table for reference as needed. A XAML canvas containing the floor plans would be stored in a BLOB column for easy query and import to the UI. Supporting other export formats such as SVG and KML might best be served by using a GeometryCollection in the SQL table with translation on the query response.

    Step 3 – Do something simple with the floorplans

    Some useful utilities included nesting my floorplan XAML inside a <local:DragZoomPanel>, which is coded to implement some normal pan and zoom functions: pan with left mouse, double click zoom in, and mouse wheel zoom +/-. Mouse over text labeling helps identify features as well. In addition, I was thinking about PlaneProjections for stacking multiple floors, so I added some slider binding controls for PlaneProjection attributes, just for experimentation in a debug panel.

    Since my original King Soopers image is a store directory an obvious addition is making the plan view into a store directory finder.

    I added the store items, along with aisle and shelf polygon ids, to a table accessed through a WCF query. When the floorplan is initialized, a request is made to a SQL Server table, and this directory item information is used to populate a ListBox. You could use binding, but I needed to add some events, so ListBoxItems are added in code behind.
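    The directory wiring can be sketched as a simple lookup. The item names and shelf ids below are invented for illustration, standing in for the WCF-backed table:

```javascript
// Map directory entries to aisle/shelf polygon ids, as the store
// directory finder does after its table query. Data is hypothetical;
// the real list comes from the SQL Server table via WCF.
var directory = [
  { item: "toothpaste", aisle: 12, shelfId: "aisle12_shelf3" },
  { item: "cereal", aisle: 5, shelfId: "aisle5_shelf1" }
];

// Case-insensitive substring match against item names; returns the
// shelf polygon id to highlight, or null when nothing matches.
function findShelf(name) {
  var hit = directory.filter(function (d) {
    return d.item.indexOf(name.toLowerCase()) !== -1;
  })[0];
  return hit ? hit.shelfId : null;
}

var shelf = findShelf("Tooth");
```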

    Mouse events connect directory entries to position polygons on the store shelves. Finally, a MouseLeftButtonUp event illustrates opening a shelf photo view, which is overlaid with a sample link geometry to a Crest product website. Clicks are also associated with camera icons that connect to some sample Photosynths and panoramas of the store interior. Silverlight 5, due out in 2011, promises Silverlight integration of Photosynth controls as well as 3D.

    Instead of a store directory, the UAA example includes a simple room finder, which moves to the corresponding floor and zooms to the selected room. My attempts at using PlaneProjection as a multi-floor stack were thwarted by lack of control over camera position. I had hoped to show a stack of floor plans at an oblique view, with an animation for the selected floor plan sliding it out of the stack and rotating to planar view. Eventually I’ll have a chance to revisit this in SL5 with its full 3D scene graph support.

    Where are we going?

    You can see where these primitive experiments are going: move from a Bing Map to a map of interior spaces, and now simple locators can reach into asset databases where we have information about all our stuff. The stuff in this case is not roads, addresses, and shops, but smaller stuff, human scale stuff that we use in our everyday consumer and corporate lives. Unlike rural cultures, modern western culture is much more about inside than outside. We spend many more hours inside the cube than out, so a Mirror Land where we live is certainly a plausible extension; whether we call it CAD, GIS, BIM, FM, or BINGBIM matters little.

    It’s also noteworthy that this gives vendors a chance to purchase more ad opportunities. After all, our technology is here to serve a consumer driven culture and so is Mirror Land.

    Interior spaces are a predictable part of Mirror Land and we are already seeing minor extensions. The proprietary and private nature of many interior spaces is likely to leave much out of public mapping. However, retail incentives will be a driving force extending ad opportunities into personal scale mapping. Eventually Mobile will close the loop on interior retail space, providing both consumer location as well as local asset views. Add some mobile camera apps, and augmented reality will combine product databases, individualized coupon links, nutritional content, etc to the shelf in front of you.

    On the enterprise side, behind locked BIM doors, Silverlight with its rich authentication framework, but more limited mobile reach, will play a part in proprietary asset management which is a big part of FM, BM, BIM ….. Location of assets is a major part of the drive to efficiency and covers a lot of ground from inventory, to medical equipment, to people.

    Summary:

    This small exercise will likely irk true NBIMSers, who will not see much “real” BIM in a few floor plans. So I hasten to add this disclaimer: I’m not really a Web BIMer or even a Bing BIMer, but I am looking forward to the extension of Mirror Land to the interior spaces I generally occupy.

    Whether GIS analysis reaches into web mapped interiors is an open question. I’m old enough to remember when there were “CAD Maps” and “Real GIS”, and then “Web Maps” and “Real GIS.” Although GIS (real, virtual, or otherwise) is slowly reaching deeper into Mirror Land, we are still a long way from NBIMS sanctioned “Real” WebBIM with GIS analysis. But then that means it’s still fun, right?

    Mirror Land and the Last Millimeter


    Microsoft EMG Interface Patent

    Patent application number: 20090326406

    Well that was pretty quick. This went across the radar just this morning. See yesterday’s post Mirror Land and the Last Foot.

    “Microsoft’s connecting EMG sensors to arm muscles and then detecting finger gestures based on the muscle movement picked up by those sensors” REF: Engadget

    Looks like one part of the “Last Millimeter” is already patented. In a millimeter map we pick up objects and rotate them in mirror land. At least they don’t use drills with an EMG interface; my no-fly threshold is any interface device requiring trepanning! It is interesting to see the biological UI beginning to stick its nose in the tent.


    Technology has its limits

    Mirror Land and the Last Foot


    Fig 1 – Bing Maps Streetside

    I know 2010 started yesterday but I slept in. I’m just a day late.

    Even a day late perhaps it’s profitable to step back and muse over larger technology trends. I’ve worked through several technology tides in the past 35 years. I regretfully admit that I never successfully absorbed the “Gang of Four” Design Patterns. My penchant for the abstract is relatively low. I learn by doing concrete projects, and probably fall into the amateur programming category often dismissed by the “professional” programming cognoscenti. However, having lived through a bit of history already, I believe I can recognize an occasional technology trend without benefit of a Harvard degree or even a “Professional GIS certificate.”

    What has been striking me of late is the growth of mirror realities. I’m not talking about bizarre multiverse theories popular in modern metaphysical cosmology, nor parallel universes of the many worlds quantum mechanics interpretation, or even virtual world phenoms such as Second Life or The Sims. I’m just looking at the mundane evolution of internet mapping.


    Fig 2 – Google Maps Street View

    One of my first mapping projects, back in the late ’80s, was converting the very sparse CIA world boundary file, WDBI, into an AutoCAD 3D Globe (WDBI came on a data tape reel). At the time it was novel enough, especially in the CAD world, to warrant a full color front cover of Cadence Magazine. I had lots of fun creating some simple AutoLisp scripts to spin the world view and add vector point and line features. I bring it up because at that point in history, prior to the big internet boom, mapping was a coarse affair at global scales. This was only a primitive wire frame, ethereal and transparent, yet even then quite beautiful, at least to map nerds.


    Fig 3 – Antique AutoCAD Globe WDBI

    Of course, at that time scientists and GIS people were already playing with multi-million dollar image acquisitions, but generally in fairly small areas. Landsat had been launched more than a decade earlier, but few people had the computing resources to play in that arena. Then too, the US military was the main driving force, with DARPA technology undreamed of by the rest of us. A very large gap existed between global and local scales, at least for the consumer masses. This access gap continued really until Keyhole’s acquisition by Google. There were regional initiatives like USGS DLG/DEM, Ordnance Survey, and Census TIGER. However, computer earth models were fragmented affairs, evolving relatively slowly down from satellite and up from aerial, until suddenly the entire gap was filled by Google, and the repercussions are still very much evident.

    Internet map coverage is now both global and local, and everything in between: a mirror land. The full spectrum of coverage is complete. Or is it? A friend remarked recently that recent talk of mobile LiDAR echoes earlier discussions of the “Last Mile,” when the Baby Bells and cable comms were competing for market share of internet connectivity. You can glimpse the same echo as Microsoft and Google jockey for market share of local street resolution, StreetView vs Streetside. The trend is from a coarse global model to a full-scale local model, a trend now pushing out into the “Last Foot.” Alternate map models of the real world are diving into the human dimension, feet and inches not miles, the detail of the street, my local personal world.

    LiDAR contributes to this mirror land by adding a partial 3rd dimension to the flat photo world of street side capture. LiDAR backing can provide the swivel effects and the icon switching surface intelligence found in StreetView and Streetside. LiDAR capture is capable of much more, but internet UIs are still playing catchup in the 3rd dimension.

    The question arises whether GIS or AEC will be the driver in this new human dimension “mirror land.” Traditionally AEC held the cards at feet and inches while GIS aerial platforms held sway in miles. MAC, Mobile Asset Collection, adds a middle way with inch level resolution capability available for miles.


    Fig 4 – Video Synched to Map Route

    Whoever gets the dollars for capture of the last foot, in the end it all winds up inside an internet mirror land.

    We are glimpsing a view of an alternate mirror reality that is not a Matrix sci-fi fantasy, but an ordinary part of internet connected life. Streetside and Street View push this mirror land down to the sidewalk.

    On another vector, cell phone locations are adding the first primitive time dimension, with life tracks now possible for millions. Realtime point location is a first step, but life track video stitched on the fly into Photosynth streams lends credence to the street side trend.

    The location hype is really about linking those massive market demographic archives to a virtual world and then connecting this information back to a local personal world. As Sean Gillies pointed out recently in “Utopia or Dystopia,” there are pros and cons. But when have a few “cons” with axes ever really made a difference to the utopian future of technology?

    With that thought in mind why not push a little on the future and look where the “Last Millimeter” takes us?
        BCI Brain Computer Interface
        Neuronal Prosthetics


    Fig 5 – Brain Computer Interface

    Eye tracking HUD (not housing and urban development exactly)


    Fig 6- HUD phone?

    I’m afraid the “Last Millimeter” is not a pretty thought, but at least an interesting one.

    Summary

    Just a few technology trends to keep an eye on. When they get out the drill for that last millimeter perhaps it’s time to pick up an ax or two.

    Augmented Reality and GIS

    There have been a few interesting items surfacing on augmented reality recently. It is still very much a futuristic technology, but maybe not so distant a future after all. Augmented reality means intermingling digital and real objects: either adding digital objects to the real world or, in an inverse sense, combining real world objects into a virtual digital world.

    Here is an interesting example of augmenting a digital virtual world with real world objects borrowed from street view. The interface utilizes an iPhone inertial sensor to move the view inside a virtual world, but this virtual world is a mimic of the street side in Paris at the point in time that Google’s Street View truck went past.



    Fig 1 – Low tech high tech virtual reality interface


    Fig 2 – Immersive interface


    In this Sixth Sense presentation at TED, Pranav Mistry explores the interchangeability of real and virtual objects. The camera eye and microphone sensors are used to interpret gestures and interact with digital objects. These digital objects are then re-projected into the real world onto real objects such as paper, books, and even other people.


    Fig 3 Augmented Reality Pranav Mistry


    Fig 4 Merging digital and real worlds


    A fascinating question is, “How might an augmented reality interface impact GIS?”

    Google’s recent announcement of replacing its licensed map data with a model of its own creation, along with the introduction of the first Android devices, triggered a flurry of blog postings. One of the more interesting posts speculated about the target of Google’s “less than free” business model. Bill Gurley reasoned plausibly that the target is the local ad revenue market.

    Google’s ad revenue business model was and is a disruptive change in the IT world. Google appears interested in even larger local ad revenues, harnessed by a massive distribution of Android enabled GPS cell phones. It is the interplay of core aggregator capability with edge location that brings in the next generation of ad revenue. The immediate ancillary casualties in this case are the personal GPS manufacturers and a few map data vendors.

    Local ads may not be as large a market source as believed, but if they are, the interplay of the network edge with the network core may be an additional disruptive change. Apple has a network edge play with the iPhone and a core play with iTunes & iVideo media; Google has Android/Chrome at the edge and Search/Google Maps at the core. Microsoft has Bing Maps/Search at the core, as well as dabbling less successfully in media, but I don’t see much activity at the edge.

    Of course if mobile hardware capability evolves fast enough, Microsoft’s regular OS will soon enough fit on mobiles, perhaps in time to short circuit an edge market capture by Apple and Google. Windows 8 on a cell phone would open the door wide to Silverlight/WPF UI developers. Android’s potential success would be based on the comparative lack of cpu/memory on mobile devices, but that is only a temporary state, perhaps 2 years. However, in two years the world is a far different place.

    By that time augmented reality stuff will be part of the tool kit for ad enhancements:

    • Point a phone camera at a store and show all sale prices overlaid on the store front for items fitting the user’s demographic profile. (Products and store pay service)
    • Inside a grocery store, scan shelf items through the cell screen with paid ad enhancements customized to the user’s past buying profile. (Products pay store, store pays service)
    • Inside a store, point at a product and get a list of price comparisons from all competing stores within 2 miles. (Product or user pays service)
    • A crowd gamer will recognize other team members (or Facebook friends, or other security personnel . . ) with an augmented reality enhancement when scanning a crowd. (Gamer subscribes to service, product pays service for ads targeted to gamer)

    And non commercial, non ad uses:

    • A first responder points a cell phone at a building and brings up the emergency plan overlay and a list of toxic substance storage. (Fire district pays service)
    • Field utility repair personnel point a cell at a transformer and see an overlay of past history with parts list, schematics, etc. (Utility pays service)

    It just requires edge location available to core data services that reflect filtered data back to the edge. The ad revenue owner holds both a core data source and an edge unit location, and sells ads priced on market share of that interplay. Google wants to own the edge and have all ad revenue owners come through them, so the OS is less than free in exchange for a slice of ad revenue.

    Back to augmented reality. As Pranav Mistry points out there is a largely unexplored region between the edge and the core, between reality and virtual reality, which is the home of augmented reality. GIS fits into this by storing spatial location for objects in the real world back at the network core available to edge location devices, which can in turn augment local objects with this additional information from the core.

    Just add a GPS to the Sixth Sense camera/mic device and the outside world at an edge location is merged with any information available at core. So for example scan objects from edge location with the camera and you have augmented information about any other mobile GPS or location data at the core. Since Android = edge GPS + link to core + gesture interface + camera (still missing screen projector and mic), no wonder it has potential as a game changer. Google appears more astute in the “organizing the world” arena than Apple, who apparently remains fixated on merely “organizing style.”

    Oh, and one more part of the local interface device is still missing: a pointer, the NextGen UI for GIS.



    Fig 5 – Laser Distance Meter Leica LDM


    Add a laser ranging pointer to the mobile device and you have a rather specific point and click interface to real world objects.

    1. The phone location is known thanks to GPS.
    2. The range device bearing and heading are known, thanks to an internal compass and/or inertial sensors.
    3. The range beam gives a precise delta distance to an object relative to the mobile device.

    Send delta distance and current GPS position back to the core where a GIS spatial query determines any known object at that spatial location. This item’s attributes are returned to the edge device and projected onto any convenient local object, augmenting the local world with the stored spatial data from the core. After watching Pranav Mistry’s research presentation it all seems not too far outside of reality.
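    The three steps above reduce to a small position calculation: project a target coordinate from the phone’s GPS fix, the compass bearing, and the laser-ranged distance, then hand that coordinate to a core-side spatial query. Here is a minimal sketch in Python using the standard spherical-earth destination formula; the function name and the example coordinates are illustrative only, not any vendor’s API.

    ```python
    import math

    EARTH_RADIUS_M = 6_371_000  # mean earth radius, spherical approximation

    def project_target(lat, lon, bearing_deg, distance_m):
        """Project the position a laser pointer is hitting, given the
        device's GPS fix (degrees), compass bearing (degrees clockwise
        from north), and ranged distance (meters)."""
        lat1 = math.radians(lat)
        lon1 = math.radians(lon)
        brg = math.radians(bearing_deg)
        d = distance_m / EARTH_RADIUS_M  # angular distance on the sphere

        lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                         math.cos(lat1) * math.sin(d) * math.cos(brg))
        lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(d) * math.cos(lat1),
                                 math.cos(d) - math.sin(lat1) * math.sin(lat2))
        return math.degrees(lat2), math.degrees(lon2)

    # Example: a phone in Denver aiming due east at an object 25 m away.
    target = project_target(39.7392, -104.9903, 90.0, 25.0)
    # The core-side GIS would then run a spatial query (point-in-polygon or
    # nearest-feature within a tolerance) against `target` to identify the
    # object being pointed at and return its attributes to the edge device.
    ```

    At 25 meters the spherical approximation is far more accurate than consumer GPS and compass error, so in practice sensor precision, not the math, limits how small an object this point-and-click scheme can resolve.
    
    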

    GIS has an important part to play here because it is the repository of all things spatial.