Posts Tagged ‘Google Maps’

The Good Ole Bad Days: Pixels, Scale and Appropriate Analysis

August 30, 2015

By Jim Smith, LANDFIRE Project Lead, The Nature Conservancy

Recently I saw a bumper sticker that said, “Just because you can doesn’t mean you should.” I couldn’t have said it better, especially regarding zooming in on spatial data.

Nowadays (alert: grumble approaching), people zoom in tightly on their chosen landscape, region, and even pixel, whether the data support that kind of close-up view or not. Predictably, that means a LOT of misapplication of perfectly good science, followed by head scratching and complaining.

To set a context, I want to look at the “good ole days” when people used less precise spatial data, but their sense of proportion was better. By “ole,” I mean before the mid-1980s or so, when almost all spatial data and spatial analyses were “analog,” i.e., Mylar map layers, hard-copy remote sensing images and light tables (Ian McHarg’s revelation?). In 1978, pixels on satellite images were at least an acre in size. Digital aerial cameras and terrain-corrected imagery barely existed. The output from an image processing system was a line printer “map” that used symbols for mapped categories, like “&” for Pine and “$” for Hardwood (yes, smarty pants, that was about all we could map from satellite imagery at that time). The power and true elegance we have at our fingertips today was unfathomable when I started working in this field barely 30 years ago.

Let me wax nostalgic a bit more – indulge me because I am an old GIS coot (relatively anyway). I remember command line ArcInfo, and when “INFO” was the actual relational database used by ESRI software (did you ever wonder where the name ArcInfo came from?). I remember when ArcInfo came in modules like ArcEdit and ArcPlot, each with its own manual, which meant a total of about three feet of shelf space for the set. I remember when ArcInfo required a so-called “minicomputer” such as a DEC VAX or Data General, and when an IBM mainframe computer had only 512 KB [not MB or GB] of RAM available. I know I sound like the clichéd dad telling the kids about how bad it was when he was growing up — carrying his brother on his back to school in knee-deep snow with no shoes and all that — but pay attention anyway, ‘cause dad knows a thing or two.

While I have no desire to go back to those days, there is one concept that I really wish we could resurrect. In the days of paper maps, Mylar overlays, and photographic film, spatial data had an inherent scale that was almost always known, and really could not be effectively ignored. Paper maps had printed scales — USGS 7.5-minute quads were 1:24,000 — one tiny millimeter on one of these maps (a slip of a pretty sharp pencil) represented 24 meters on the ground — almost as large as a pixel on a mid-scale satellite image today. Aerial photographs had scales, and the products derived from them inherited that scale. You knew it — there was not much you could do about it.
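For the arithmetic-minded, that conversion is easy to sanity-check. Here is a minimal Python sketch (the scale denominators and the 1 mm “pencil slip” are just illustrative values) that turns a distance measured on a paper map into a distance on the ground:

```python
def ground_distance_m(map_distance_mm: float, scale_denominator: float) -> float:
    """Convert a distance measured on a paper map to ground distance.

    At a scale of 1:scale_denominator, one unit on the map represents
    scale_denominator of the same units on the ground.
    """
    map_distance_m = map_distance_mm / 1000.0  # millimeters -> meters
    return map_distance_m * scale_denominator

# A 1 mm pencil slip on a 1:24,000 USGS quad is 24 m on the ground.
print(ground_distance_m(1.0, 24_000))   # 24.0
# The same slip on a 1:100,000 map is 100 m.
print(ground_distance_m(1.0, 100_000))  # 100.0
```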

Today, if you care about scale, you have to investigate for hours or read almost unintelligible metadata (if available) to understand where the digital spatial data came from — that stuff you are zooming in on 10 or 100 times — and what their inherent scale is. I think that most, or at least many, data users have no idea that they should even be asking the question about appropriate use of scale — after all, the results look beautiful, don’t they? Neglecting this pesky question means that users often worry about how accurately categories were mapped without thinking for a New York minute about the data’s inherent scale, or about the implied scale of the analysis. I am especially frustrated with the “My Favorite Pixel Syndrome,” when a user dismisses an entire dataset because it mis-maps the user’s favorite 30-meter location, even though the data were designed to be used at the watershed level or even larger geographies.
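At minimum, you can ask the data itself how big its pixels are before you zoom. A short sketch using the GDAL Python bindings (the file name here is hypothetical; substitute any local raster) reads a raster’s georeferencing and reports its ground sample distance:

```python
from osgeo import gdal  # GDAL's Python bindings

gdal.UseExceptions()

# Hypothetical raster; substitute any local GeoTIFF.
dataset = gdal.Open("landcover.tif")

# The geotransform maps pixel coordinates to map coordinates:
# (origin_x, pixel_width, row_rotation, origin_y, col_rotation, pixel_height)
gt = dataset.GetGeoTransform()
pixel_width, pixel_height = gt[1], abs(gt[5])

print(f"Pixel size: {pixel_width} x {pixel_height} map units")
print(f"Raster dimensions: {dataset.RasterXSize} x {dataset.RasterYSize} pixels")
# If the map units are meters and the pixels are 30 m across, zooming in on a
# single back yard asks more of the data than it can honestly deliver.
```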

So, listen up: all that fancy-schmancy-looking data in your GIS actually has a scale. Remember this, kids, every time you nonchalantly zoom in, or create a map product, or run any kind of spatial analysis. Believe an old codger.

—–

Authors: This week’s post was guest written by The Nature Conservancy’s LANDFIRE team, which includes Kori Blankenship, Sarah Hagen, Randy Swaty, Kim Hall, Jeannie Patton, and Jim Smith. The LANDFIRE team is focused on data, models, and tools developed to support applications, land management and planning for biodiversity conservation. If you would like to guest write for this Spatial Reserves blog about geospatial data, use the About the Authors section to contact one of us about your topic.

The map makers

January 27, 2014

A couple of interesting articles have appeared recently discussing the emergence of Google Maps, the changing fortunes of some other leading mapping companies, and an argument against the dominance of Google products in favour of OpenStreetMap. In his article Google’s Road to Global Domination, Adam Fisher charts the rise of the Google Maps phenomenon, from the visionary aspiration to chart the streets of San Francisco that led to Street View, to technologies such as the self-driving car that will incorporate the accumulated map data and may one day obviate the need for individuals to interpret a map for themselves.

Taking a stand against a mapping monopoly, Serge Wroclawski’s post Why the World Needs OpenStreetMap urges readers to rethink their habitual Google Maps usage in favour of the ‘neutral and transparent’ OpenStreetMap. Wroclawski argues that no one company should have sole responsibility for interpreting place, nor the information associated with that place (we wrote on a similar theme in Truth in Maps about the potential for bias in mapping), and that a map product based on the combined efforts of a global network of contributors, which is free to download and can be used without trading personal location information, is the better option for society. However, in his closing comment Fisher quotes O’Reilly: ‘the guy who has the most data, wins’. Will OpenStreetMap be able to compete with the power of Google when it comes to data collection?
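As a concrete illustration of that ‘free to download’ point, OpenStreetMap data can be queried directly through the community-run Overpass API. A minimal Python sketch follows; the endpoint is the public Overpass server, and the search area (cafes near Trafalgar Square) is just an example:

```python
import requests

# Community-run Overpass API endpoint for querying OpenStreetMap data.
OVERPASS_URL = "https://overpass-api.de/api/interpreter"

# Overpass QL: find cafe nodes within 500 m of an example point in London.
query = """
[out:json][timeout:25];
node["amenity"="cafe"](around:500,51.5080,-0.1281);
out body;
"""

response = requests.post(OVERPASS_URL, data={"data": query})
response.raise_for_status()

# Each node element carries its coordinates and any tags contributors added.
for element in response.json()["elements"]:
    name = element.get("tags", {}).get("name", "(unnamed)")
    print(f"{name}: {element['lat']}, {element['lon']}")
```

No account, API key, or personal location information is required for a query like this, which is much of Wroclawski’s point.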

Whatever the arguments for or against a certain mapping product, perhaps the most important consideration is choice. As long as users continue to have a choice of map products and are aware of the implications, restrictions and limitations of the products they use, there should be room for both approaches to the provision of map services.

The Open Geoportal project

June 17, 2013

The Open Geoportal (OGP) project is ‘a collaboratively developed, open source, federated web application to rapidly discover, preview, and retrieve geospatial data from multiple organizations‘. The project, led by Tufts University in conjunction with a number of partner organisations including Harvard, MIT, Stanford and UCLA, was established to provide a framework for organizations to share geospatial data layers, maps, metadata, and development resources through a common interface. Version 2.0 of the OGP was released in April 2013, providing an improved interface and interoperability with a number of web mapping environments.

OGP currently supports four production geoportal instances:

  • Harvard Geospatial Library
  • UC Berkeley Geospatial Data Repository: Geospatial data from UC Berkeley Library
  • MIT GeoWeb: Geospatial data from the MIT Geodata Repository, MassGIS, and Harvard Geospatial Library
  • GeoData@Tufts: Geoportal developed and maintained by Tufts University Information Technology, providing search tools for data discovery and for use in teaching, learning, and research.

The data may be streamed, downloaded or shared as required. Although many of the data layers are publicly available, access to some of the layers is restricted and requires registration with the geoportal.
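Because geoportal holdings are typically exposed through standard OGC services, the streaming side of this can be scripted. Here is a minimal sketch using the OWSLib library; the service URL below is a placeholder, not a real OGP endpoint, so substitute the WMS address of an actual geoportal instance:

```python
from owslib.wms import WebMapService  # OGC Web Map Service client

# Placeholder URL; substitute the WMS endpoint of a real geoportal instance.
wms = WebMapService("https://example.edu/geoserver/wms", version="1.1.1")

print(f"Service: {wms.identification.title}")

# List the layers the service advertises in its capabilities document.
for name, layer in wms.contents.items():
    print(f"{name}: {layer.title}")
```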

A number of geoportals are currently in development, including those from Columbia, Washington and Yale universities.

Fantasy island

November 26, 2012

A widely reported story, courtesy of the BBC, appeared last week describing the unusual case of Sandy Island, a south Pacific island between Australia and New Caledonia. Although the island appeared on Google Maps and Google Earth, a research team from Australia were unable to locate it when they went to investigate. Expecting to find a ‘sizeable’ strip of land, the researchers found instead open water some 1,400 m deep. In their defence, Google commented that they did consult a number of authoritative data sources for their maps, and the island did appear to be a genuine feature.

It seems the island never really existed and was most likely the product of an error that had gone unnoticed and been perpetuated over the years. This story touched on a number of issues we discussed in The GIS Guide to Public Domain Data, including the use of assertive versus authoritative data sources. What does authoritative really mean, and how far should we go to get the definitive answer? Should we no longer rely solely on the traditional sources of geographic data, when it seems even they can’t be guaranteed, and always get a second opinion? Prof. Michael Goodchild (Univ. of California, Santa Barbara) discusses a hybrid solution in a paper entitled ‘Assertion and Authority: The Science of User Generated Geographic Content’. He sees merit in taking advantage of the expertise and knowledge accumulated by the traditional data providers (national mapping agencies, survey companies and so on), while also drawing on independent verification from volunteer groups, research teams and other interested individuals and organisations.

Future developments in data capture and verification will probably mean cases like this become rare. However, given the rate of change in both the physical and man-made environment and the ever-present possibility of misinterpretation, the ‘definitive map’ will probably remain an elusive goal.

The International Politics of Cartography

August 20, 2012

In February 2012, Frank Jacobs wrote an article in the opinion pages of The New York Times about The First Google Maps War. The article recalled a day in November 2010 when a Nicaraguan official strayed into neighbouring Costa Rica’s territory. When asked to defend his actions, the official simply replied that he wasn’t trespassing according to Google Maps, which did indeed appear to indicate that particular piece of ground belonged to Nicaragua. In an attempt to settle the subsequent dispute, Google agreed to adjust the border.

We reported a similar incident in The GIS Guide to Public Domain Data about a dispute between India and Pakistan over the misrepresentation of the border between Pakistan and the disputed territory of Kashmir. Following threats of action from the Indian Government, Google again agreed to adjust their map of the region and tensions were, for the time being, defused.

Many of us have become accustomed to using Google, Bing and other online mapping resources for our quick location-related queries. However, good as they are, these resources are not infallible and mistakes do happen. As Jacobs comments, the boundaries depicted by Google Maps remain a non-authoritative representation of borders and place names, and ‘…popularity does not bestow authority’.