By Jim Smith, LANDFIRE Project Lead, The Nature Conservancy
Recently I saw a bumper sticker that said, “Just because you can doesn’t mean you should.” I couldn’t have said it better, especially regarding zooming in on spatial data.
Nowadays (alert — grumble approaching), people zoom in tightly on their chosen landscape, region, and even pixel, whether the data support that kind of close-up view or not. Predictably, that means a LOT of misapplication of perfectly good science, followed by head scratching and complaining.
To set a context, I want to look at the “good ole days” when people used less precise spatial data, but their sense of proportion was better. By “ole,” I mean before the mid-1980s or so, when almost all spatial data and spatial analyses were “analog,” i.e., Mylar map layers, hard-copy remote sensing images, and light tables (Ian McHarg’s revelation?). In 1978, pixels on satellite images were at least an acre in size. Digital aerial cameras and terrain-corrected imagery barely existed. The output from an image processing system was a line printer “map” that used symbols for mapped categories, like “&” for Pine and “$” for Hardwood (yes, smarty pants, that was about all we could map from satellite imagery at that time). The power and true elegance we have at our fingertips today was unfathomable when I started working in this field barely 30 years ago.
Let me wax nostalgic a bit more – indulge me because I am an old GIS coot (relatively, anyway). I remember command line ArcInfo, and when “INFO” was the actual relational database used by Esri software (did you ever wonder where the name ArcInfo came from?). I remember when ArcInfo came in modules like ArcEdit and ArcPlot, each with its own manual, which meant a total of about three feet of shelf space for the set. I remember when ArcInfo required a so-called “minicomputer” such as a DEC VAX or Data General, and when an IBM mainframe computer had only 512K [not MB or GB] of RAM available. I know I sound like the clichéd dad telling the kids about how bad it was when he was growing up — carrying his brother on his back to school in knee-deep snow with no shoes and all that — but pay attention anyway, ‘cause dad knows a thing or two.
While I have no desire to go back to those days, there is one concept that I really wish we could resurrect. In the days of paper maps, Mylar overlays, and photographic film, spatial data had an inherent scale that was almost always known, and really could not be effectively ignored. Paper maps had printed scales — USGS quarter quads were 1:24,000 — one tiny millimeter on one of these maps (a slip of a pretty sharp pencil) represented 24 meters on the ground — almost as large as a pixel on a mid-scale satellite image today. Aerial photographs had scales, and the products derived from them inherited that scale. You knew it — there was not much you could do about it.
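The arithmetic behind a printed scale is simple enough to sketch in a few lines of Python. This is purely illustrative — the helper name `ground_distance_m` is mine, not from any GIS library:

```python
def ground_distance_m(map_distance_mm: float, scale_denominator: float) -> float:
    """Convert a distance measured on a paper map to ground distance in meters.

    Ground distance = map distance * scale denominator.
    On a 1:24,000 USGS quad, 1 mm on the map covers
    24,000 mm = 24 m on the ground.
    """
    return map_distance_mm * scale_denominator / 1000.0  # mm -> m

# A 1 mm pencil slip on a 1:24,000 quad:
print(ground_distance_m(1, 24_000))   # 24.0 meters on the ground
```

The same function makes the point about zooming: the map's precision never gets better than its scale denominator allows, no matter how far you zoom in on a scan of it.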
Today, if you care about scale, you have to investigate for hours or read almost unintelligible metadata (if available) to understand where the digital spatial data came from — that stuff you are zooming in on 10 or 100 times — and what their inherent scale is. I think that most, or at least many, data users have no idea that they should even be asking questions about the appropriate use of scale — after all, the results look beautiful, don’t they? Skipping this pesky question means that users often worry about how accurately categories were mapped without thinking for a New York minute about the data’s inherent scale, or about the implied scale of the analysis. I am especially frustrated with the “My Favorite Pixel Syndrome,” when a user dismisses an entire dataset because it mis-maps the user’s favorite 30-meter location, even though the data were designed to be used at the watershed level or even larger geographies.
So, listen up: all that fancy-schmancy-looking data in your GIS actually has a scale. Remember this, kids, every time you nonchalantly zoom-in, or create a map product, or run any kind of spatial analysis. Believe an old codger.
Authors: This week’s post is guest written by The Nature Conservancy’s LANDFIRE team, which includes Kori Blankenship, Sarah Hagen, Randy Swaty, Kim Hall, Jeannie Patton, and Jim Smith. The LANDFIRE team is focused on data, models, and tools developed to support applications, land management, and planning for biodiversity conservation. If you would like to guest write for this Spatial Reserves blog about geospatial data, use the About the Authors section and contact one of us about your topic.
A few years ago, I walked on the pier at Manitowoc, Wisconsin, and after mapping my route, reflected on issues of resolution and scale in this blog. After recording my track on my smartphone in an application called RunKeeper, it appeared on the map as though I had been walking on the water! This, of course, was because the basemap did not show the pier or the fill adjacent to the marina. Recently, following the annual meeting of the Association of American Geographers, I had the opportunity to retrace my steps and revisit my field site. What has changed in the past 2 1/2 years? Much.
As shown below, the basemap used by RunKeeper has vastly improved in that short amount of time. The pier and fill are now on the map; note the other differences between the new map and the 2012 map that appears below it — schools, trails, contour lines, and other features are now available. A 3-D profile is available now as well. Why? The continued improvement of maps and geospatial data from local, regional, federal, and international government agencies plays a role. We have a plethora of data sources to choose from, as is evident in our recent post about Dr. Karen Payne’s list of geospatial data and the development of Esri’s Living Atlas of the World. The variety and resolution of basemaps in ArcGIS Online and in other platforms continue to expand and improve at a rapid pace.
Equally significant, and some might argue more significant, is the role that crowdsourcing is playing in the improvement of maps and services (such as traffic and weather feeds). In fact, even in this example, note the “improve this map” text that appears in the lower right of the map, allowing everyday fitness app users to submit changes that will be reviewed and added to RunKeeper’s basemap. What does all of this mean for the data user and GIS analyst? Maps are improving at a dizzying pace due to efforts by government agencies, nonprofit organizations, academia, private companies, and ordinary citizens. Yet scale and resolution still matter. Thinking critically about data and where they come from still matters. Fieldwork that uses ordinary apps can serve as an effective instructional technique. It is indeed an exciting time to be in the field of geotechnologies.
The map from 2012 is below.
I recently gave presentations at the University of Wisconsin Milwaukee for GIS Day and took the opportunity, as most geographers would, to get out onto the landscape. I walked on the Lake Michigan pier at Manitowoc, enjoying a stroll in the brisk wind to and from the lighthouse there, and recorded my track on my smartphone in an application called RunKeeper. When my track had finished and been mapped, it appeared as though I had been walking on the water!
According to my map, I walked on water. Funny, but I don’t recall even getting wet! It all comes down to paying close attention to your data and knowing its sources. This provides a teachable moment in a larger discussion of the importance of scale and resolution in any project involving maps or GIS. In my case, even when I zoomed in to a larger scale, the pier did not appear on the RunKeeper application’s basemap. It does, however, appear in the basemap in ArcGIS Online.
In the book that Jill Clark and I wrote, The GIS Guide to Public Domain Data, we discuss how scale and resolution can be conceptualized and put into practice in both the raster and vector worlds. We cite examples where neglecting these important concepts has not only led to bad decisions but has cost people their property and even their lives. Today, while GIS tools allow us to instantly zoom to a large scale, the data being examined might have been collected at a much smaller scale. Much caution is therefore needed when the analysis scale is larger than the collection scale. For example, if you are making decisions at 1:10,000 scale and your base data were collected at 1:50,000 scale, you are treading on dangerous ground.
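That rule of thumb can be captured in a few lines of Python — a minimal sketch of my own (the function name and the simple comparison are mine, not from the book). The one trap to remember is that a *larger* map scale has a *smaller* denominator:

```python
def scale_mismatch(analysis_denominator: int, collection_denominator: int) -> bool:
    """Return True when the analysis scale is larger (more zoomed-in)
    than the scale at which the data were collected.

    A larger map scale has a smaller denominator:
    1:10,000 is a larger scale than 1:50,000.
    """
    return analysis_denominator < collection_denominator

# Making 1:10,000 decisions with 1:50,000 data: dangerous ground.
print(scale_mismatch(10_000, 50_000))  # True
```

Zooming the other way — analyzing 1:24,000 data at 1:100,000 — generalizes detail away, which is usually safe; it is only the zoom-in direction that manufactures precision the data never had.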
Or, one could say, you are “walking on water”!