Posts Tagged ‘Google Maps’

Data Quality on Live Web Maps

June 19, 2017

Modern web maps, and the cloud-based GIS tools and services upon which they are built, continue to improve in richness of content and in data quality. But as we have emphasized many times in this blog and in our book, maps are representations of reality. They are extremely useful representations, to be sure, particularly in the cloud, but representations nonetheless. These representations depend on the data sources, accuracy standards, map projections, completeness, processing and rendering procedures used, regulations and policies in place, and much more. A case in point is the offset between street data and satellite image data that I noticed in mid-2017 in Chengdu, in south-central China. The streets are drawn about 369 meters southeast of where they appear on the satellite image (below):

[Image: Google Maps in Chengdu, China, with streets offset to the southeast of the satellite image (china-google-maps)]

Puzzled, I panned the map to other locations in China. The offsets varied in size, but they appeared everywhere in the country; for example, note the offset of 557 meters, again to the southeast, where a highway crosses the river at Dongyang:

[Image: Google Maps at Dongyang, China, showing a 557-meter offset (china-google-maps2)]
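One way to quantify such an offset is to pick the same feature (a bridge or an intersection) on the street layer and on the imagery and compute the great-circle distance between the two readings. A minimal haversine sketch in Python (the coordinates below are illustrative placeholders, not the points actually measured for this post):

import math

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in meters between two lon/lat points."""
    r = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    d_phi = math.radians(lat2 - lat1)
    d_lmb = math.radians(lon2 - lon1)
    a = (math.sin(d_phi / 2) ** 2 +
         math.cos(phi1) * math.cos(phi2) * math.sin(d_lmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical pair: the same intersection as drawn on the street layer
# and as seen on the imagery.
print(round(haversine_m(104.0700, 30.6700, 104.0738, 30.6676)))  # ~450 m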

As of this writing, the offset appears in the same cardinal direction and only in China. After examining towns along the borders with North Korea, Vietnam, and other countries, the offset appears to stop at those borders. No offsets exist in Hong Kong or Macao. Yahoo Maps and Bing Maps both show the same types of offsets in China (Bing Maps example, below):

[Image: Bing Maps showing the same type of offset in China (china_bing)]

MapQuest, which uses an OpenStreetMap base, showed no offset. I then tested ArcGIS Online with a satellite image base and with the OpenStreetMap base, and there was no offset there, either (below). This offset is a datum issue related to national security, documented in this Wikipedia article. The same data restrictions that we discuss in our book and in our blog touch other aspects of geospatial data as well, such as fines for unauthorized surveys, cameras that withhold geotagging information when the GPS chip detects a location within China, and the apparent unlawfulness of crowdsourced mapping efforts such as OpenStreetMap.

Furthermore, as we have noted, the satellite images are themselves processed, tiled data sets, and like other data sets they need to be critically scrutinized. They should not be considered “reality” despite their appearance of being the “actual” Earth’s surface. They too contain error: they may have been taken on different dates or in different seasons, may be projected on a different datum, and carry other data quality considerations of their own.

[Image: ArcGIS Online satellite and OpenStreetMap bases showing no offset (china-agol)]
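For the technically curious: the obfuscated datum in question is commonly known as GCJ-02, and the shift is algorithmic rather than a constant translation, which is why the offset varies from place to place. The official transformation is unpublished; what follows is a Python sketch of the reverse-engineered approximation that circulates in the open-source community (all constants come from that community code, not from any official source):

import math

A = 6378245.0                # semi-major axis used by the approximation
EE = 0.00669342162296594323  # first eccentricity squared

def _d_lat(x, y):
    ret = (-100.0 + 2.0 * x + 3.0 * y + 0.2 * y * y +
           0.1 * x * y + 0.2 * math.sqrt(abs(x)))
    ret += (20.0 * math.sin(6.0 * x * math.pi) +
            20.0 * math.sin(2.0 * x * math.pi)) * 2.0 / 3.0
    ret += (20.0 * math.sin(y * math.pi) +
            40.0 * math.sin(y / 3.0 * math.pi)) * 2.0 / 3.0
    ret += (160.0 * math.sin(y / 12.0 * math.pi) +
            320.0 * math.sin(y * math.pi / 30.0)) * 2.0 / 3.0
    return ret

def _d_lon(x, y):
    ret = (300.0 + x + 2.0 * y + 0.1 * x * x +
           0.1 * x * y + 0.1 * math.sqrt(abs(x)))
    ret += (20.0 * math.sin(6.0 * x * math.pi) +
            20.0 * math.sin(2.0 * x * math.pi)) * 2.0 / 3.0
    ret += (20.0 * math.sin(x * math.pi) +
            40.0 * math.sin(x / 3.0 * math.pi)) * 2.0 / 3.0
    ret += (150.0 * math.sin(x / 12.0 * math.pi) +
            300.0 * math.sin(x / 30.0 * math.pi)) * 2.0 / 3.0
    return ret

def wgs84_to_gcj02(lon, lat):
    """Shift a WGS-84 coordinate onto the GCJ-02 grid (approximate)."""
    d_lat = _d_lat(lon - 105.0, lat - 35.0)
    d_lon = _d_lon(lon - 105.0, lat - 35.0)
    rad_lat = lat / 180.0 * math.pi
    magic = 1 - EE * math.sin(rad_lat) ** 2
    sqrt_magic = math.sqrt(magic)
    d_lat = (d_lat * 180.0) / ((A * (1 - EE)) / (magic * sqrt_magic) * math.pi)
    d_lon = (d_lon * 180.0) / (A / sqrt_magic * math.cos(rad_lat) * math.pi)
    return lon + d_lon, lat + d_lat

print(wgs84_to_gcj02(104.07, 30.67))  # Chengdu: shifted by a few hundred meters

Because the distortion is nonlinear, a basemap that applies it to streets but not to imagery will show exactly the kind of variable offsets described above.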

Another difference between these maps is the wide variation in the amount of detail in the streets data for China. OpenStreetMap was the most complete; the other web mapping platforms offered varying levels of detail, some seriously lacking in almost every type of street except major freeways, which was surprising in 2017. The streets content was much more complete in other countries.

It all comes back to identifying your end goals in using any GIS or mapping package. Being critical of the data can and should be part of the process you use to decide which tools and maps to employ. By the time you read this, the image offset problem could have been resolved. Great! But are there new issues of concern? Data sources, methods, and quality vary considerably among countries. Furthermore, the tools and data change frequently, along with the processing methods; being critical of the data is not something to practice just once, but rather is fundamental to everyday work with GIS.


The Good Ole Bad Days: Pixels, Scale and Appropriate Analysis

August 30, 2015

By Jim Smith, LANDFIRE Project Lead, The Nature Conservancy

Recently I saw a bumper sticker that said, “Just because you can doesn’t mean you should.” I couldn’t have said it better, especially regarding zooming in on spatial data.

Nowadays (alert: grumble approaching), people zoom in tightly on their chosen landscape, region, and even pixel, whether the data support that kind of close-up view or not. Predictably, that means a LOT of misapplication of perfectly good science, followed by head scratching and complaining.

To set a context, I want to look at the “good ole days,” when people used less precise spatial data but their sense of proportion was better. By “ole,” I mean before the mid-1980s or so, when almost all spatial data and spatial analyses were “analog,” i.e., Mylar map layers, hard-copy remote sensing images, and light tables (Ian McHarg’s revelation?). In 1978, pixels on satellite images were at least an acre in size. Digital aerial cameras and terrain-corrected imagery barely existed. The output from an image processing system was a line printer “map” that used symbols for mapped categories, like “&” for Pine and “$” for Hardwood (yes, smarty pants, that was about all we could map from satellite imagery at that time). The power and true elegance we have at our fingertips today was unfathomable when I started working in this field barely 30 years ago.

Let me wax nostalgic a bit more – indulge me because I am an old GIS coot (relatively, anyway). I remember command-line ArcInfo, and when “INFO” was the actual relational database used by ESRI software (did you ever wonder where the name ArcInfo came from?). I remember when ArcInfo came in modules like ArcEdit and ArcPlot, each with its own manual, which meant a total of about three feet of shelf space for the set. I remember when ArcInfo required a so-called “minicomputer” such as a DEC VAX or Data General, and when an IBM mainframe computer had only 512K [not MB or GB] of RAM available. I know I sound like the clichéd dad telling the kids about how bad it was when he was growing up — carrying his brother on his back to school through knee-deep snow with no shoes and all that — but pay attention anyway, ‘cause dad knows a thing or two.

While I have no desire to go back to those days, there is one concept I really wish we could resurrect. In the days of paper maps, Mylar overlays, and photographic film, spatial data had an inherent scale that was almost always known and really could not be effectively ignored. Paper maps had printed scales — USGS 7.5-minute quads were 1:24,000 — and one tiny millimeter on one of these maps (a slip of a pretty sharp pencil) represented 24 meters on the ground, almost as large as a pixel on a mid-resolution satellite image today. Aerial photographs had scales, and the products derived from them inherited that scale. You knew it — there was not much you could do about it.
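The millimeter-to-meters arithmetic is worth keeping at hand. A quick sketch of the map-to-ground conversion (the scale denominators are just examples):

def ground_distance_m(map_distance_mm, scale_denominator):
    """Ground distance (m) represented by a distance measured on the map (mm)."""
    return map_distance_mm * scale_denominator / 1000.0

print(ground_distance_m(1.0, 24000))   # 24.0  -- a 1 mm pencil slip on a 1:24,000 quad
print(ground_distance_m(1.0, 100000))  # 100.0 -- the same slip on a 1:100,000 map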

Today, if you care about scale, you have to investigate for hours or read almost unintelligible metadata (if it is available at all) to understand where the digital spatial data came from — that stuff you are zooming in on 10 or 100 times — and what its inherent scale is. I suspect that most, or at least many, data users have no idea they should even be asking about appropriate use of scale — after all, the results look beautiful, don’t they? Users often worry about how accurately categories were mapped without thinking for a New York minute about the data’s inherent scale, or about the implied scale of the analysis. I am especially frustrated with “My Favorite Pixel Syndrome,” in which a user dismisses an entire dataset because it mis-maps the user’s favorite 30-meter location, even though the data were designed to be used at the watershed level or even larger geographies.
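When the metadata fails you, the data can at least report its own native pixel size, and checking it before zooming is a cheap habit. A sketch using the rasterio library (the file name is a placeholder for whatever raster you are about to analyze):

import rasterio  # pip install rasterio

with rasterio.open('landcover_30m.tif') as src:
    x_res, y_res = src.res  # pixel size in the units of the dataset's CRS
    print(src.crs, x_res, y_res)
    # Zooming to screen scales finer than one pixel implies detail
    # the data never contained.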

So, listen up: all that fancy-schmancy-looking data in your GIS actually has a scale. Remember this, kids, every time you nonchalantly zoom in, create a map product, or run any kind of spatial analysis. Believe an old codger.


—–

Authors: This week’s post was guest written by The Nature Conservancy’s LANDFIRE team, which includes Kori Blankenship, Sarah Hagen, Randy Swaty, Kim Hall, Jeannie Patton, and Jim Smith. The LANDFIRE team focuses on data, models, and tools developed to support applications, land management, and planning for biodiversity conservation. If you would like to write a guest post for the Spatial Reserves blog about geospatial data, see the About the Authors section and contact one of us about your topic.

The map makers

January 27, 2014

A couple of interesting articles have appeared recently discussing the emergence of Google Maps, the changing fortunes of some other leading mapping companies, and an argument against the dominance of Google products in favour of OpenStreetMap. In his article Google’s Road to Global Domination, Adam Fisher charts the rise of the Google Maps phenomenon: the visionary aspirations to chart the streets of San Francisco that led to Street View, and the development of technologies, such as the self-driving car, that will incorporate the accumulated map data and may one day obviate the need for individuals to interpret a map for themselves.

Taking a stand against a mapping monopoly, Serge Wroclawski’s post Why the World Needs OpenStreetMap urges readers to rethink their habitual Google Maps usage in favour of the ‘neutral and transparent‘ OpenStreetMap. Wroclawski argues that no one company should have sole responsibility for interpreting place, or the information associated with that place (we wrote on a similar theme in Truth in Maps about the potential for bias in mapping), and that a map product based on the combined efforts of a global network of contributors, free to download and usable without trading personal location information, is the better option for society. However, in his closing comment Fisher quotes O’Reilly: ‘the guy who has the most data, wins‘. Will OpenStreetMap be able to compete with Google when it comes to data collection?

Whatever the arguments for or against a certain mapping product, perhaps the most important consideration is choice. As long as users continue to have a choice of map products and are aware of the implications, restrictions and limitations of the products they use, then there should be room for both approaches to the provision of map services. 


The Open Geoportal project

June 17, 2013

The Open Geoportal (OGP) project is ‘…. a collaboratively developed, open source, federated web application to rapidly discover, preview, and retrieve geospatial data from multiple organizations‘. The project, led by Tufts University in conjunction with a number of partner organisations including Harvard, MIT, Stanford, and UCLA, was established to provide a framework for organizations to share geospatial data layers, maps, metadata, and development resources through a common interface. Version 2.0 of the OGP was released in April 2013, providing an improved interface and interoperability with a number of web mapping environments.

OGP currently supports four production geoportal instances:

  • Harvard Geospatial Library
  • UC Berkeley Geospatial Data Repository: Geospatial data from UC Berkeley Library
  • MIT GeoWeb: Geospatial data from the MIT Geodata Repository, MassGIS, and the Harvard Geospatial Library
  • GeoData@Tufts: Geoportal developed and maintained by Tufts University Information Technology, providing search tools for data discovery and for use in teaching, learning, and research

The data may be streamed, downloaded or shared as required. Although many of the data layers are publicly available, access to some of the layers is restricted and requires registration with the geoportal.
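Streaming typically means pulling layers over standard OGC web services rather than downloading files. A minimal sketch using the OWSLib library against a hypothetical WMS endpoint (the URL and layer name are placeholders, not actual geoportal services):

from owslib.wms import WebMapService  # pip install OWSLib

# Placeholder endpoint; each geoportal instance advertises its own URLs.
wms = WebMapService('https://geodata.example.edu/wms', version='1.1.1')
print(sorted(wms.contents))  # layer names the service offers

img = wms.getmap(layers=['example:land_use'],
                 srs='EPSG:4326',
                 bbox=(-71.2, 42.3, -71.0, 42.4),  # lon/lat bounding box
                 size=(512, 512),
                 format='image/png')
with open('land_use.png', 'wb') as out:
    out.write(img.read())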

[Image: Access to data layers]

A number of additional geoportals are currently in development, including those at Columbia, the University of Washington, and Yale.

Fantasy island

November 26, 2012

A widely reported story, courtesy of the BBC, appeared last week describing the unusual case of Sandy Island, a South Pacific island between Australia and New Caledonia. Although the island appeared on Google Maps and Google Earth, a research team from Australia were unable to locate it when they went to investigate. Expecting to find a ‘sizeable’ strip of land, the researchers found instead open water 1,400 m deep. In their defence, Google commented that they consult a number of authoritative data sources for their maps, and the island had appeared to be a genuine feature.

It seems the island never really existed and was most likely the product of an error that had gone unnoticed and been perpetuated over the years. The story touches on a number of issues we discussed in The GIS Guide to Public Domain Data, including the use of assertive versus authoritative data sources. What does authoritative really mean, and how far should we go to get a definitive answer? Should we no longer rely solely on the traditional sources of geographic data, and always get a second opinion, when it seems even they can’t be guaranteed? Prof. Michael Goodchild (Univ. of California, Santa Barbara) discusses a hybrid solution in a paper entitled ‘Assertion and Authority: The Science of User Generated Geographic Content’. He sees merit in drawing on the expertise and knowledge accumulated by the traditional data providers (national mapping agencies, survey companies, and so on) while also taking advantage of independent verification from volunteer groups, research teams, and other interested individuals and organisations.

Future developments in data capture and verification will probably make cases like this rare. However, given the rate of change in both the physical and man-made environment, and the ever-present possibility of misinterpretation, the ‘definitive map’ will probably remain an elusive goal.

The International Politics of Cartography

August 20, 2012

In February 2012, Frank Jacobs wrote an article in the opinion pages of The New York Times about The First Google Maps War. The article recalled a day in November 2010 when a Nicaraguan official strayed into neighbouring Costa Rica’s territory. When asked to defend his actions, the official simply replied that he wasn’t trespassing according to Google Maps, which did indeed appear to indicate that the particular piece of ground belonged to Nicaragua. In an attempt to settle the subsequent dispute, Google agreed to adjust the border.

We reported a similar incident in The GIS Guide to Public Domain Data: a dispute between India and Pakistan over the misrepresentation of the border between Pakistan and the disputed territory of Kashmir. Following threats of action from the Indian government, Google again agreed to adjust its map of the region, and tensions were, for the time being, defused.

Many of us have become accustomed to using Google, Bing, and other online mapping resources for quick location-related queries. However, good as they are, these resources are not infallible, and mistakes do happen. As Jacobs comments, the boundaries depicted by Google Maps remain an unauthorised representation of borders and place names, and ‘…popularity does not bestow authority’.