Storm Runoff Modelling and MRLC

The Multi-Resolution Land Characteristics Consortium (MRLC) is a consortium of federal agencies that produces the National Land Cover Database, NLCD 2001. The dataset was developed from a national coverage set of 8-band Landsat-7 imagery along with a 30 m DEM. Imagery from three separate dates is processed to give a seasonally averaged land cover classification. The 30 m square resolution is a bit coarse, but the dataset is a valuable resource because of its consistent national coverage.

More detailed information is available from MRLC.

In addition to the NLCD coverage there are two derivative layers:

  • NLCD 2001 impervious surface: The impervious surface data classify each pixel into one of 101 possible values (0%–100%). The data show the detailed urban fabric in which many of us reside. Low impervious percentages are shown in light gray, with increasing values depicted in darker gray and the highest values in pink and red. White areas have no impervious surface.
  • NLCD 2001 canopy density: Like the impervious surface data, the canopy density layer classifies each pixel into one of 101 possible values (0%–100%). The canopy density estimates apply to forest only. These data can be combined with the land cover to estimate canopy density by forest type (deciduous, evergreen, mixed, woody wetland).

The data are available for public download. There is also one of those vintage ESRI viewers that qualifies for James Fee’s “Cash for Geo Clunkers” proposal. These Ancien Régime viewers are littered all over the federal landscape, and it will take years for newer technology to replace this legacy of ArcIMS. Fortunately there is an exposed WMS service (see GetCapabilities), which permits access to MRLC layers without going through the “Viewer” nonsense. This WMS service proved very useful on a recent web project for Storm Runoff Mitigation.

I am no hydrologist, but once I was provided with the appropriate calculation approach, the Silverlight UI was fairly straightforward. Basically, a number of potential mitigation practices play into a runoff calculation. Two fairly significant factors are Impervious Surface Area and Canopy Area, both of which are available through MRLC’s WMS service. One simplified calculation model in use is the TR-55 method.
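For context, the heart of TR-55 is the standard SCS curve-number relation for runoff depth. A minimal Python sketch (units in inches; the CN passed in would come from the composite curve-number calculation):

```python
def runoff_inches(rainfall_in, curve_number):
    """SCS curve-number runoff depth (inches), as used in TR-55:

    S = 1000/CN - 10, Ia = 0.2*S, Q = (P - Ia)^2 / (P - Ia + S)
    """
    s = 1000.0 / curve_number - 10.0   # potential maximum retention
    ia = 0.2 * s                       # initial abstraction
    if rainfall_in <= ia:
        return 0.0                     # storm too small to produce runoff
    excess = rainfall_in - ia
    return excess * excess / (excess + s)
```

For example, a 5-inch storm over an area with CN 70 yields a bit over 2 inches of runoff.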

By making use of the MRLC-derived layers for Impervious Surface and Canopy, at least approximations of these two factors can be obtained for any given area of the continental US. The method I used was to issue a GetMap request to the WMS service, which returns a pixel-shaded image of impervious surface density. Most of the hard work has already been done by MRLC; all I need to do is extract the value density from the returned PNG image.
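A GetMap request is just a parameterized URL. A Python sketch of building one — note the endpoint and layer name here are placeholder assumptions, not the actual MRLC values; GetCapabilities lists the real layer names:

```python
from urllib.parse import urlencode

def getmap_url(base, layer, bbox, width=512, height=512):
    # Build a WMS 1.1.1 GetMap request; parameter names follow the OGC spec.
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": layer, "STYLES": "",
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": width, "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

# hypothetical usage -- endpoint and layer name are stand-ins
url = getmap_url("https://example.gov/wms", "NLCD_2001_Impervious",
                 (-105.3, 39.9, -105.1, 40.1))
```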

Fig 3 – Impervious Surface shaded density red
Fig 4 – Canopy shaded density green

The density values are encoded relative to gray. At first I tried a simple density calculation on the color-encoded pixels by subtracting the base gray from the variable green: Green − Red = factor. The sum of these factors, divided by the total pixel area of the image times the maximum byte value of 255, gives a rough percentage of canopy over the viewport. However, after pursuing the USGS for a few days I managed to get the actual percentile RGB tables and improve the accuracy of the density calculation. This average density percentile is then used in TR-55 as An·CNn, with a Canopy CN value of 70.
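That naive gray-relative calculation can be sketched in a few lines of Python (pixels as (r, g, b) tuples):

```python
def rough_density(pixels):
    """Rough canopy density: Green - Red = factor for each pixel,
    summed and normalized by pixel count * 255 (the max byte value)."""
    total = sum(max(0, g - r) for r, g, _ in pixels)
    return total / (len(pixels) * 255.0)
```

A pure-gray pixel (r == g) contributes nothing; a fully green pixel contributes the maximum factor of 255.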

The process of extracting density from pixels looks like this:

  HttpWebRequest request = (HttpWebRequest)WebRequest.Create(new Uri(getlayerurl));
  using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
  {
    if (response.StatusDescription.Equals("OK"))
    {
      using (Stream stream = response.GetResponseStream())
      {
        byte[] data = ReadFully(stream, response.ContentLength);
        Bitmap bmp = (Bitmap)Bitmap.FromStream(new MemoryStream(data), false);

        UnsafeBitmap fastBitmap = new UnsafeBitmap(bmp);
        fastBitmap.LockBitmap();
        double value = 0;
        for (int x = 0; x < bmp.Width; x++)
        {
          for (int y = 0; y < bmp.Height; y++)
          {
            PixelData pixel = fastBitmap.GetPixel(x, y);
            // "R G B" key into the USGS color-to-percent lookup table
            string key = pixel.red + " " + pixel.green + " " + pixel.blue;
            int index = Array.IndexOf(imperviousRGB, key);
            if (index >= 0)
              value += index * 0.01; // table position encodes percent impervious
          }
        }
        fastBitmap.UnlockBitmap();

        // average impervious fraction over the viewport
        double total = bmp.Height * bmp.Width;
        double ratio = value / total;
        return ratio.ToString();
      }
    }
  }

C#, unlike Java, allows pointer arithmetic in code compiled as unsafe. The advantage of using this approach here is a tremendous speed increase. The array mapping imperviousRGB strings to percentiles was supplied by the USGS. This process is applied in a WCF service to both the Canopy and the Impervious Surface layers, and the results are passed back to the TR-55 calculations.
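For readers outside the C#/unsafe world, the same lookup scan is easy to express in plain Python — slower per pixel, but the logic is identical. The color-to-percent table below is a made-up stand-in for the USGS table:

```python
def density_from_lookup(pixels, rgb_to_percent):
    """Average impervious fraction over a viewport: each pixel's
    'R G B' key is looked up in a color -> percent table."""
    value = 0.0
    for r, g, b in pixels:
        key = f"{r} {g} {b}"
        pct = rgb_to_percent.get(key)   # None if the color is unclassified
        if pct is not None:
            value += pct * 0.01
    return value / len(pixels)
```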

Possible Extensions:

There are several extensions beyond the scope of this project that could prove interesting.

  1. First, the NLCD uses a color classification scheme. A similar color-processing algorithm could be used to provide rough percentages of each of these classifications for a viewport area. These could be helpful for various research and reporting requirements.
  2. Beyond simple rectangular viewports, a nice extension would be the ability to supply arbitrary polygonal areas of interest. This is fairly easy to do in Silverlight: the draw command is just a series of point clicks that are added to a Path geometry as line segments. The resulting polygon is then used as a clip mask when passing over the GetMap image. A simple point-in-polygon check, either coded manually or using one of the C# ports of JTS, would probably provide reasonable performance.
Fig 5 - MRLC NLCD 2001 Colour Classification
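For the polygonal area-of-interest extension, a manually coded point-in-polygon check really is only a few lines. A Python sketch of the classic even-odd ray-casting test:

```python
def point_in_polygon(poly, px, py):
    """Even-odd ray casting: cast a ray rightward from (px, py) and
    toggle 'inside' on every polygon edge it crosses."""
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[i - 1]           # previous vertex (wraps around)
        if (y1 > py) != (y2 > py):     # edge straddles the ray's y level
            x_cross = (x2 - x1) * (py - y1) / (y2 - y1) + x1
            if px < x_cross:
                inside = not inside
    return inside
```

Applied during the pixel scan, this would simply skip pixels falling outside the clip polygon.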

What about resolution?

It is tempting to think a bit about resolution. Looking at the MRLC image results, especially over a map base, it is obvious that at roughly 100 ft resolution even the best of calculations are far from the fine-grained detail necessary for accurate neighborhood calculations.

It is also obvious that Impervious Surface could be enhanced directly by applying an additional lookup from a road database. Using pavement estimates from a road network could improve resolution quality quite a bit in urbanized areas. Even that could be improved further if there were some way to capture other common urban impervious surfaces such as rooftops, walkways, driveways, and parking areas.

NAIP 1 m GSD 4-band imagery has fewer bands but higher resolution. NAIP is a resource that has been applied to unsupervised impervious surface extraction. However, the 4-band acquisition is still not consistently available for the entire US.

Now that more LiDAR data is coming on line at higher resolutions, why not use LiDAR classifications to enhance impervious surface?

LidarServer WMS styles: All Elevation, All Classification, All Intensity, All Return

Just looking at the different style choices on the LidarServer WMS, for instance, it appears that there are ways to get rooftop and canopy out of LiDAR data. LiDAR at 1 m resolution for metro areas could increase resolution for Canopy as well as the rooftop contribution to Impervious Surface estimates.

In fact, the folks at QCoherent have developed tools for classification and extraction of features like roof polygons. This extraction tool, applied over a metro area, could result in a useful rooftop polygon set. Once available in a spatial database, these polygons could be furnished as an additional color-filled tile pyramid layer. Availability of this layer would also let the runoff calculation apply rooftop area estimates to roof drain disconnect factors.

Additionally, the accuracy of impervious surface calculations can be improved with a merging version of the simple color scan process. In a merging process, the scan loop over the MRLC image does a lookup in the corresponding rooftop image, and any pixel positive for rooftop is promoted to the highest impervious surface classification. This estimate only holds as long as rooftop green gardens remain insignificant.
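Once both images are sampled on the same grid, the merging pass itself is trivial. A Python sketch over flattened arrays, with the MRLC values as impervious fractions and the rooftop layer as a boolean mask:

```python
def merged_impervious(mrlc_fraction, rooftop_mask):
    """Promote any rooftop-positive pixel to fully impervious (1.0),
    then average over the viewport."""
    total = sum(1.0 if roof else frac
                for frac, roof in zip(mrlc_fraction, rooftop_mask))
    return total / len(mrlc_fraction)
```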

Ultimately the MRLC will be looking at 1m GSD collection for NLCD with some combination of imagery like NAIP and DEM from LiDAR. However, it could be a few years before these high resolution resources are available consistently across the US.


The utility of WMS resources continues to grow as services become better known and tools for web applications improve. Other OWS services like WFS and WCS are following along behind, but show significant promise as well. The exposure of public data resources in some kind of OGC service should be mandatory at all levels of government. The cost is not that significant compared to the cost effectiveness of cross-agency, even cross-domain, access to valuable data resources.

By using appropriate WMS tools like GeoServer and GeoWebCache, vastly more efficient tile pyramids can become part of any published WMS service layer. Tile pyramids take a lot more storage, so the improved performance may not be feasible for larger national and worldwide extents. However, in this Runoff Mitigation project, where view performance is less important, standard OGC WMS GetMap requests proved quite useful for the TR-55 calculations, and performance was adequate.
