GMTI in WPF

http://www.web-demographics.com/GMTI requires a .NET 3.0 enabled IE browser.
Alternatively, here is a video showing part of the time sequence simulation.


Fig 1 – Screenshot of 3D WPF browser interface to GMTI data stream

According to NATO documentation STANAG 4607/AEDP-7, "the GMTI standard defines the data content and format for the products of ground moving target indicator radar systems." The GMTI format is a binary stream of packets whose segments contain information about mission, job, radar dwell, and targets. These packets carry a variety of geospatial data that can be used in a spatially oriented interface.

Over the past couple of weeks I have explored some options for utilizing this type of data in a web interface. I was provided with a set of simulated GMTI streams that model a UAV flight pattern and target acquisition in a demonstration area in northern Arizona.

My first task was to write a simple Java program to parse the sample binary streams and see what was available for use. The data is organized in packets and segments. The segments can be one of a variety of types, but the primary segments of interest were the Dwell and Target segments. Dwell segments contain information about the sensor platform, including 3D georeferenced position and orientation. Target segments are referenced by a dwell segment and provide information about detected ground targets, including ground position and, possibly, a classification.
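The parser walk itself is simple once the packet and segment headers are decoded: read a packet header, then step through its segments by type and size. Here is a rough sketch, written in C# to match the interface code later in this post; the 32-byte packet header, the big-endian sizes, and the one-byte segment type followed by a four-byte segment size reflect my reading of STANAG 4607 and should be verified against the specification.

    // Sketch of walking a GMTI stream: read each packet header, then iterate
    // its segments by type and size. Field offsets are assumptions from the spec.
    using System;
    using System.IO;

    class GmtiWalker
    {
        // GMTI fields are big-endian, so assemble multi-byte values by hand.
        static uint ReadUInt32BE(BinaryReader r)
        {
            byte[] b = r.ReadBytes(4);
            return ((uint)b[0] << 24) | ((uint)b[1] << 16) | ((uint)b[2] << 8) | b[3];
        }

        public static void Walk(Stream s)
        {
            BinaryReader r = new BinaryReader(s);
            while (s.Position < s.Length)
            {
                long packetStart = s.Position;
                r.ReadBytes(2);                      // version id
                uint packetSize = ReadUInt32BE(r);   // total packet size, header included
                r.ReadBytes(26);                     // remainder of the 32-byte packet header

                long packetEnd = packetStart + packetSize;
                while (s.Position < packetEnd)
                {
                    byte segType = r.ReadByte();     // e.g. 1=mission, 2=dwell, 5=job definition (assumed)
                    uint segSize = ReadUInt32BE(r);  // segment size, 5-byte segment header included
                    byte[] body = r.ReadBytes((int)segSize - 5);
                    // dispatch on segType: decode the dwell/target fields of interest, skip the rest
                }
            }
        }
    }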

Since I was only interested in a subset of the entire GMTI specification, I concentrated on the fields useful for a visual interface. The output of the resulting translator is selectable as XML, GeoRSS, or SQL.


Here is a sample of a few packets in the XML output:

<gmti>
<mission>
  <missionplan>NONE</missionplan>
  <flightplan>NONE</flightplan>
  <platformtype>0</platformtype>
  <platformconfig>NONE</platformconfig>
  <date>9/27/2002</date>
</mission>
<jobdefinition>
  <jobid>0</jobid>
  <sensorid>0</sensorid>
  <sensormodel>NORMAL</sensormodel>
  <priority>1</priority>
  <terrainmodel>1</terrainmodel>
  <geoid>2</geoid>
</jobdefinition>
<dwell>
  <packet>2</packet>
  <time units="milliseconds">75099818</time>
  <x>-111.25695822760463</x>
  <y>35.18875552341342</y>
  <z units="centimeters">106668</z>
  <scalex>11651</scalex>
  <scaley>5825</scaley>
  <PlatformHeading units="degree true N">334.1217041015625</PlatformHeading>
  <PlatformPitch>0.0</PlatformPitch>
  <PlatformRoll>0.0</PlatformRoll>
  <SensorHeading units="degree true N">0.0</SensorHeading>
  <SensorPitch>0.0</SensorPitch>
  <SensorRoll>0.0</SensorRoll>
  <SensorTrack units="degree true N">334.1217041015625</SensorTrack>
  <SensorSpeed unit="millimeters/second">180054</SensorSpeed>
  <SensorVerticalVelocity unit="decimeters/second">1</SensorVerticalVelocity>
  <DwellAreaLatCenter>35.05047817714512</DwellAreaLatCenter>
  <DwellAreaLongCenter>-111.94878098554909</DwellAreaLongCenter>
  <DwellAreaRangeHalfExtent units="kilometers">11.0</DwellAreaRangeHalfExtent>
  <DwellAreaDwellAngleHalfExtent units="degrees">0.999755859375</DwellAreaDwellAngleHalfExtent>
  <TargetReportCount>0</TargetReportCount>
  <DwellTime unit="milliseconds">75099818</DwellTime>
</dwell>
<dwell>
  <packet>3</packet>
  <time units="milliseconds">75099978</time>
  <x>-111.25695822760463</x>
  <y>35.18875552341342</y>
  <z units="centimeters">106668</z>
  <scalex>11651</scalex>
  <scaley>5825</scaley>
  <PlatformHeading units="degree true N">334.18212890625</PlatformHeading>
  <PlatformPitch>0.0</PlatformPitch>
  <PlatformRoll>0.0</PlatformRoll>
  <SensorHeading units="degree true N">0.0</SensorHeading>
  <SensorPitch>0.0</SensorPitch>
  <SensorRoll>0.0</SensorRoll>
  <SensorTrack units="degree true N">334.18212890625</SensorTrack>
  <SensorSpeed unit="millimeters/second">180054</SensorSpeed>
  <SensorVerticalVelocity unit="decimeters/second">1</SensorVerticalVelocity>
  <DwellAreaLatCenter>35.07123788818717</DwellAreaLatCenter>
  <DwellAreaLongCenter>-111.95461646653712</DwellAreaLongCenter>
  <DwellAreaRangeHalfExtent units="kilometers">11.0</DwellAreaRangeHalfExtent>
  <DwellAreaDwellAngleHalfExtent units="degrees">1.0052490234375</DwellAreaDwellAngleHalfExtent>
  <TargetReportCount>0</TargetReportCount>
  <DwellTime unit="milliseconds">75099978</DwellTime>
</dwell>
   .
   .
   .
<target>
  <packet>7</packet>
  <x>-111.90102383494377</x>
  <y>35.15539236366749</y>
  <z>2130</z>
  <classification>No Information, Simulated Target</classification>
</target>

Listing 1 – example of GMTI translator output to XML


The available documentation is complete and includes all the specialized format specifications such as binary angle, signed binary decimal, etc. I had to dust off bitwise operations to get at some of the fields.

	// Convert a GMTI B16 field: the upper nine bits hold the integer part
	// (shifted down 7 places), the low 7 bits hold the fraction in 1/128 steps.
	public double B16(short d){
		int intnumber = (int)d;
		int maskint = 0xFF80;
		int number = (intnumber & maskint)>>7;
		int maskfrac = 0x007F;
		int fraction = intnumber & maskfrac;
		return  (double)number + (double)fraction/128.0;
	}

Listing 2 – Example of some bit twiddling required for parsing the GMTI binary stream
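The binary angle (BA) fields such as PlatformHeading work out to a straight scaling: the heading values in Listing 1 (for example 334.1217041015625 = 60828 x 360/65536) are consistent with a 16-bit angle in which the full circle maps to 2^16. A one-line C# sketch of that conversion (my reading of the BA format, worth checking against the spec):

    // Binary angle conversion: the 16-bit value is a fraction of a full circle,
    // so degrees = value * 360 / 2^16. Matches the heading values in Listing 1.
    public static double BA16(ushort d)
    {
        return d * (360.0 / 65536.0);
    }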

Since I wanted to show the utility of a live stream, I ended up translating the sample GMTI into a SQL load for PostGIS. The PostGIS gmti database was configured with just two tables, a dwell table and a target table. Each record of these tables contains a time field from the GMTI data stream, which I could then use to simulate a time domain sequence in the interface.

Once the data was available in PostGIS with a time and geometry field, my first approach was to make the tables available as WMS layers using Geoserver. I could then use an existing SVG OWS interface to look at the data. The screenshot below shows a TerraServer DOQ overlaid with a WMS dwell layer and a WFS target layer. The advantage of WFS is the ability to add some SVG rollover intelligence to the vector points. WMS supplies raster, which in an SVG sense is dumb, while WFS serves vector GML, which can then be used to produce more intelligent, event-driven SVG.


Fig 2 – SVG OWS interface showing GMTI target from Geoserver WFS over Terraserver DOQ


This approach is helpful and lets the browser make use of the variety of public WMS/WFS servers that are now available. However, now that WPF 3D is available, the next step was obvious: I needed to make the GMTI data available in a 3D terrain interface.

I thought that an inverse of the flyover interface I had built recently for Pikes Peak viewing would be appropriate. In this approach a simple HUD visually tracks aspects of the platform including air speed, pitch, roll, and heading. At the same time a smaller key map tracks the platform position, while a full view terrain model shows the target locations as they are acquired. I could also add some camera controls for manipulating the camera look direction and zoom.

At one point I investigated tracking the camera.LookDirection along the vector from the camera position to the dwell center points. Being a GMTI novice, I did not realize how quickly the dwells move as they scan the area of interest. The resulting interface twitched the view inordinately, so I switched to a smoother scene center look direction. Since I had worked out the dwell center points already, I decided to at least use them to show dwell centers as cyan Xs on the terrain overlay.
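Either way, aiming the camera in WPF is a simple vector subtraction. A minimal sketch, assuming camera is the scene's PerspectiveCamera and the dwell (or scene) center has already been converted to model coordinates (names are illustrative):

    // Point the camera at a point of interest: LookDirection is the vector from
    // the camera position to that point (Point3D - Point3D yields a Vector3D).
    Point3D lookAt = new Point3D(centerX, centerY, centerZ);   // dwell or scene center
    camera.LookDirection = lookAt - camera.Position;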


Fig 3 – WPF 3D interface showing GMTI target layer from Geoserver OWS on top of DOQ imagery from TerraServer draped over a DEM terrain from USGS


The WPF model followed a familiar pattern. I used a USGS DEM for the ground terrain model. I first used gdal_translate to convert from ASCII DEM to grd. I could then use a simple grd to WPF MeshGeometry3D translator, which I had coded earlier, to make the basic terrain model. Since the GMTI sensor platform circles quite a large area of ground terrain, I made the simplification of limiting my terrain model to the target area.
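The grd-to-MeshGeometry3D step is essentially one Position per grid node and two triangles per grid cell. A minimal sketch of that conversion, assuming the elevations are already loaded into a 2D array and the origin, cell size, and vertical scale are known (all names here are illustrative):

    // Build a WPF terrain mesh from a regular elevation grid: one vertex per
    // grid node, two triangles per cell.
    MeshGeometry3D BuildTerrainMesh(double[,] elev, double originX, double originY,
                                    double cellSize, double zScale)
    {
        int rows = elev.GetLength(0);
        int cols = elev.GetLength(1);
        MeshGeometry3D mesh = new MeshGeometry3D();

        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                mesh.Positions.Add(new Point3D(originX + c * cellSize,
                                               originY + r * cellSize,
                                               elev[r, c] * zScale));

        for (int r = 0; r < rows - 1; r++)
            for (int c = 0; c < cols - 1; c++)
            {
                int i = r * cols + c;                 // first corner of the cell
                mesh.TriangleIndices.Add(i);          // first triangle of the cell
                mesh.TriangleIndices.Add(i + 1);
                mesh.TriangleIndices.Add(i + cols);
                mesh.TriangleIndices.Add(i + 1);      // second triangle of the cell
                mesh.TriangleIndices.Add(i + cols + 1);
                mesh.TriangleIndices.Add(i + cols);
            }
        return mesh;
    }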

The large area of the platform circle made a terrain model unnecessary for my keymap. Instead I used a rectangular planar surface draped with a DRG image from TerraServer. To this Model3DGroup I added a simple spherical icon to represent the camera platform position:

                    <GeometryModel3D x:Name="cameraloc" Geometry="{StaticResource sphere}">
                      <GeometryModel3D.Material>
                        <DiffuseMaterial Brush="Red"/>
                      </GeometryModel3D.Material>
                      <GeometryModel3D.Transform>
                        <Transform3DGroup>
                          <ScaleTransform3D  ScaleX="0.02" ScaleY="0.02" ScaleZ="0.02"/>
                          <TranslateTransform3D x:Name="PlatformPosition" OffsetX="0" OffsetY="0" OffsetZ="0.0"/>
                        </Transform3DGroup>
                      </GeometryModel3D.Transform>
                    </GeometryModel3D>

Listing 3 – the geometry model used to indicate platform location


Now that the basic interface was in place I built an Ajax-style timer which accessed the server for a new set of GMTI results on a 1000 ms interval. To keep the simulation from being too boring I boosted the server-side query window to a 3000 ms interval, which in essence created a simulation at 3x real time. My server-side query was coded as a Java servlet, which illustrates an interesting aspect of web interfaces: multiple-technology interoperability.
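The timer object itself is not shown in the listings; a minimal sketch of what it could look like in the WPF page code-behind, where UpdateFromServlet and the bookkeeping of currentTime are illustrative:

    // Poll the servlet once a second; each tick fetches the next window of
    // GMTI records and updates the 3D scene.
    // DispatcherTimer lives in System.Windows.Threading.
    DispatcherTimer gmtiTimer = new DispatcherTimer();
    gmtiTimer.Interval = TimeSpan.FromMilliseconds(1000);
    gmtiTimer.Tick += delegate(object sender, EventArgs e)
    {
        UpdateFromServlet(currentTime);   // issues the WebRequest shown below
        currentTime += 3000;              // advance the simulated window 3000 ms per tick (3x real time); units assumed
    };
    gmtiTimer.Start();                    // called from the Start button's Click handler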

On the client side the timer is started from a start button. Each tick then reads the servlet results using C# code:

            Uri uri = new Uri("http://hostserver/WPFgmti/servlet/GetGMTITracks?table=all&currtime=" + currentTime + "&interval=3&area=GMTI&bbox=-112.56774902343798,34.65681457431236,-111.256958007813,35.7221794119471");
            WebRequest request = WebRequest.Create(uri);
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();
            StreamReader reader = new StreamReader(response.GetResponseStream());
            string responseFromServer = reader.ReadToEnd();
            if (responseFromServer.Length > 0)
            {

            .
            .
                            // use the resulting platform position to change the TranslateTransform3D of the camera location in keymap
                            Point3D campt = new Point3D(Double.Parse(ll[0]), Double.Parse(ll[1]), Double.Parse(ll[3])/1000.0);
                            PlatformPosition.OffsetX = campt.X;
                            PlatformPosition.OffsetY = campt.Y;

Listing 4 – client side C# snippet for updating the keymap camera position

The camera location is changed in the keymap and its look direction is updated in the view frame. Now some method of showing targets and dwell centers needed to be worked out. One approach would be to use a 3D icon, spherical or tetrahedral, to represent a ground position on the terrain model. However, there are about 6000 target locations in the GMTI sample, and rather than simply show icons moving from one location to the next I wanted to show their track history as well. In addition, the targets were not discriminated in this GMTI stream, so it would not be easy to distinguish among several moving targets. A better approach in this case is to take advantage of the DrawingBrush capability to create GeometryDrawing shapes on a terrain overlay brush.

                            <DiffuseMaterial>
                              <DiffuseMaterial.Brush>
                                <DrawingBrush Stretch="Uniform">
                                  <DrawingBrush.Drawing >
                                    <DrawingGroup x:Name="TargetGroup">
                                      <GeometryDrawing Brush="#000000FF">
                                        <GeometryDrawing.Geometry>
                                          <RectangleGeometry Rect="-112.0820209 -35.31137528 0.332466 0.332466"/>
                                        </GeometryDrawing.Geometry>
                                      </GeometryDrawing>
                                    </DrawingGroup>
                                  </DrawingBrush.Drawing>
                                </DrawingBrush>
                              </DiffuseMaterial.Brush>
                            </DiffuseMaterial>

Listing 5 – using a DrawingBrush to modify the terrain overlay


As targets arrive from the server, circles are added to the DrawingGroup:

                            GeometryDrawing gd = new GeometryDrawing();
                            gd.Brush = new SolidColorBrush(Colors.Red);
                            GeometryGroup gg = new GeometryGroup();
                            EllipseGeometry eg = new EllipseGeometry();
                            eg.RadiusX = 0.0005;
                            eg.RadiusY = 0.0005;
                            Point cp = new Point(Double.Parse(ll[0]), Double.Parse(ll[1]));
                            eg.Center = cp;
                            gg.Children.Add(eg);
                            gd.Geometry = gg;

                            TargetGroup.Children.Add(gd);

Listing 6 – C# code for creating a new ellipse and adding to the DrawingGroup


The Dwell center points are handled in a similar way, creating two LineGeometry objects for each center point and adding them to TargetGroup.
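A sketch of that dwell center handling, along the same lines as Listing 6, where cx and cy stand in for the dwell center longitude and latitude parsed from the servlet response, and the X half-width and pen thickness are illustrative:

    // Draw a dwell center as a cyan X: two crossed LineGeometry objects added
    // to the same TargetGroup overlay used for the target circles.
    double h = 0.002;                     // half-width of the X in map units (illustrative)
    GeometryDrawing dwellMark = new GeometryDrawing();
    dwellMark.Pen = new Pen(new SolidColorBrush(Colors.Cyan), 0.0005);
    GeometryGroup cross = new GeometryGroup();
    cross.Children.Add(new LineGeometry(new Point(cx - h, cy - h), new Point(cx + h, cy + h)));
    cross.Children.Add(new LineGeometry(new Point(cx - h, cy + h), new Point(cx + h, cy - h)));
    dwellMark.Geometry = cross;
    TargetGroup.Children.Add(dwellMark);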

This completes a simple 3D WPF interface for tracking both the sensor platform dwell segments and the target segments of a GMTI stream. The use of WPF 3D extends tracking interfaces into the z dimension and provides some additional visual analysis utility. This interface makes use of a static terrain model made possible by foreknowledge of the GMTI input. A useful extension of this interface would be generalizing the terrain. I explored this a bit earlier when building terrain from JPL SRTM WMS resources for the NASA Neo interface. That resource provides a worldwide elevation data set exposed as a WMS, which can be used to create GeometryModel3D TINs, enhanced with additional overlays such as BMNG, anywhere in the world.

In a tracking scenario some moving platform is represented by a live stream of constantly changing positions. Since these positions follow a vector, another general approach would be an Ajax WebRequest for a set of terrain patches, either static for performance or dynamic from JPL SRTM for a small footprint. As a tracked position moves across the globe, new GeometryModel3D patches would be dynamically added at the forward edge while old patches are removed from the trailing edge, reminiscent of the VE and GE map services. GeometryModel3D requirements are much more intense than image pyramids, so performance could not match popular slippy map interfaces. However, real time tracking is generally not a high speed affair, at least for ground vehicles.

A variation on this Ajax theme would be a retinal model that adjusts GeometryModel3D patch resolution depending on the zoom. For example, a zoom to a focus could increase the resolution of the terrain model from 100m to 30m and then again to 10m. If the DEM is preconfigured it would be fairly simple to create a pyramid stack of GeometryModel3D patches using, say, 160m, 80m, 40m, 20m, and 10m resolutions. Starting at the top of the pyramid would provide enough resolution for a wide area TIN. Zoom events could then trigger higher resolution Ajax calls to enhance detail at a focus. These kinds of approaches will need to be explored in a future project.
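A rough sketch of that resolution selection, assuming pre-built patch sets keyed by resolution; the thresholds and names here are purely illustrative:

    // Pick a DEM resolution from the pyramid based on camera distance to the
    // focus; a zoom event would then request GeometryModel3D patches at that level.
    static readonly double[] PyramidMeters = { 160, 80, 40, 20, 10 };

    double ChooseResolution(double cameraDistanceKm)
    {
        if (cameraDistanceKm > 100) return PyramidMeters[0];
        if (cameraDistanceKm > 50)  return PyramidMeters[1];
        if (cameraDistanceKm > 20)  return PyramidMeters[2];
        if (cameraDistanceKm > 10)  return PyramidMeters[3];
        return PyramidMeters[4];
    }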
