In a recent article published in the ISPRS International Journal of Geo-Information, 'Quality Evaluation of VGI Using Authoritative Data—A Comparison with Land Use Data in Southern Germany', the authors investigated some of the concerns regarding data quality and usability often levelled at Volunteered Geographic Information (VGI) data sources.
The objective of the study, based in the Rhine-Neckar region of southern Germany, was to compare OSM data with the authoritative land use and land cover (LULC) data set ATKIS Base DLM version 6.0, published by the LGL mapping agency (Baden-Württemberg State Office for Geoinformation and State Development).
The results of the OSM data completeness and correctness comparison varied across the different land use classes in the study area. However, some general trends emerged, including:
- Areas with a high percentage of forest cover were the areas with the highest level of completeness and correctness.
- Other classes (incl. farmland and urban areas) had low levels of completeness but higher levels of correctness; the features that were present were mapped accurately, but some features were missing.
- Other areas (incl. quarries and lakes) had high levels of completeness (most features mapped) but a greater percentage of incorrectly mapped features.
- There was a marked difference between rural and urban areas; the study identified higher OSM coverage and thematic accuracy in densely populated areas, perhaps reflecting the larger pool of contributors available or interested in collecting the data.
- Some land use classes demonstrated both high levels of completeness and correctness, suggesting they had been mapped for a specific purpose.
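The completeness and correctness measures discussed above can be sketched as simple area ratios. A minimal illustration follows; the per-class figures are invented for the example and are not taken from the study:

```python
# Hypothetical sketch of per-class completeness and correctness, comparing
# crowd-sourced (OSM) polygons against an authoritative reference data set.
# Areas are in hectares and are invented for illustration.

def completeness(mapped_area, reference_area):
    """Share of the reference area that is covered by VGI features."""
    return mapped_area / reference_area

def correctness(correctly_mapped_area, mapped_area):
    """Share of the VGI-mapped area that agrees with the reference data."""
    return correctly_mapped_area / mapped_area

# class: (OSM-mapped area, overlap with the reference class, reference area)
classes = {
    "forest":   (950.0, 930.0, 1000.0),  # well mapped and mostly correct
    "farmland": (400.0, 380.0, 1000.0),  # incomplete, but what exists is correct
    "quarry":   (180.0, 110.0,  200.0),  # nearly complete, but often mislabelled
}

for name, (mapped, correct, reference) in classes.items():
    print(f"{name:8s} completeness={completeness(mapped, reference):.2f} "
          f"correctness={correctness(correct, mapped):.2f}")
```

The two ratios are independent, which is why a class can score well on one and poorly on the other, as the study found.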
Although not intended as a definitive statement of OSM data quality, the study suggested that if full coverage and accurate LULC data were a requirement for a project, then OSM data (at present) may not be the best option. However, for certain land use classes, where LULC information was available it was mostly correct, so depending on project requirements OSM data may be a suitable alternative.
As we’ve said many times before on Spatial Reserves, it is not whether the data are good, but rather whether they are good enough to meet your requirements.
Dorn, H., Törnros, T. and Zipf, A. (2015). Quality Evaluation of VGI Using Authoritative Data—A Comparison with Land Use Data in Southern Germany. ISPRS Int. J. Geo-Inf., 4, pp. 1657–1670.
A few weeks ago we wrote about autonomous cars and some of the associated location data privacy issues that this new type of transport raised. In a related article in Automotive News, the challenge of collecting and maintaining the highly accurate map data that would be required to support these vehicles and provide the locational context for the various data sources collected by in-car sensors was also discussed. As the report author commented, ‘History’s most intrepid explorers were often at the mercy of their maps. The self-driving cars of the future won’t be any different.’
Jim Keller (Chief Engineer, Honda R&D Americas Inc.) has acknowledged that mapping is going to be critical to the success of the autonomous car and he considers the relationship between map makers and car manufacturers as both vital and symbiotic. He argues that data collected by the cars will augment the data available from more traditional sources and data available from those more traditional sources will in turn help the car manufacturers.
While this suggests a new location data collecting dynamic – crowd-sourcing meets Street View, with cars altruistically recording and sharing the data they collect – it also highlights some of the challenges ahead. These cars have the potential to provide unprecedented volumes of detailed road network data, but for that data to be useful it has to be accurate, current and consistent with the standards adopted by other map data providers, to ensure integration with existing data sets, reliability and ultimately safe driving for all road users.
Crowd sourced data is a topic we have covered a number of times on Spatial Reserves, from recording environmental data to providing geographic base data for areas affected by natural disasters and other emergency situations. Attention turns this week to the notoriously difficult task of accurately predicting the weather. While recent advances in forecasting have improved the reliability of many 5-day weather reports, predicting more extreme weather events such as flooding, as well as longer-term weather patterns, remains a complex and challenging task.
One possible additional source of data to help provide on-the-spot updates to support real-time monitoring of meteorological phenomena is crowd sourced weather reporting. While there are an increasing number of mobile apps available that allow people to post updates on current local weather conditions, such as Weddar and Wezzoo, Rolf Hut, a scientist from the Delft University of Technology, has proposed a novel solution for the problem of collecting rainfall data – the humble umbrella.
In a recent report by the BBC, Hut argues that the information collected by smart umbrellas could help offset the rainfall data deficit that has resulted from the declining numbers of maintained weather gauge stations. With an in-built sensor (an acoustic rain gauge) connected to a mobile phone via Bluetooth, once the umbrella was opened it would start to transmit real-time rainfall and location data. There’s a rather cyclical dimension to the whole process: cloud > rain > umbrella > sensor > data > phone > cloud.
Although still at the prototype stage, the early results are promising and the data could potentially be used to augment the data collected by existing rainfall radar and satellite measurement systems. However, as with most crowd sourced data initiatives, simply having access to more data doesn’t necessarily improve the situation, and in some cases can even hinder the analysis. The quality of the data has to be assured for that data to add value to the process.
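One common first step in assuring the quality of crowd-sourced readings of this kind is a consensus check that rejects reports disagreeing strongly with their neighbours. A minimal sketch follows; the median-based filter, the readings and the threshold are illustrative assumptions, not part of Hut's proposal:

```python
import statistics

def filter_outliers(readings, max_deviation=2.0):
    """Keep rainfall readings (mm/h) within max_deviation median absolute
    deviations of the median: a crude consensus check for crowd-sourced data."""
    median = statistics.median(readings)
    mad = statistics.median(abs(r - median) for r in readings) or 1.0
    return [r for r in readings if abs(r - median) / mad <= max_deviation]

# Five umbrellas report similar intensities; one sensor is clearly faulty.
reports = [4.2, 3.9, 4.5, 4.1, 27.0, 4.0]
print(filter_outliers(reports))  # the 27.0 reading is rejected
```

A real pipeline would also weigh sensor calibration, reporting history and spatial clustering, but even a filter this simple shows why more data only helps once bad reports can be screened out.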
The World Resources Institute (WRI) has recently announced the launch of Global Forest Watch (GFW), a dynamic forest monitoring system that aims to provide ‘timely and reliable’ information about the state of the world’s forests. Using a combination of satellite imagery, open access data and crowd sourced information, GFW builds on earlier projects such as the Forest Frontiers Initiative and the Forest Atlases, one of the case studies we discussed in The GIS Guide to Public Domain Data, which promoted the sustainable management of forest resources.
One of the big issues with monitoring forest reserves has been that, given the often inaccessible locations, by the time harmful and illegal logging was reported it was invariably too late to stop the deforestation. GFW aims to provide near real-time information on forest clearing activities so local authorities, governments, global business and the general public have access to the latest, and hopefully most accurate, status of forest reserves. The listed data sources include:
- Forest change (many derived from MODIS data)
- Forest cover
- Forest use
The GFW web site provides access to a global map based on the University of Maryland Tree Cover Loss and Gain data.
The GFW site also provides a time-lapse run through of the change in tree cover over the last twelve years.
Although the predominance of forest cover loss (pink) as opposed to gain (blue) in many areas tells a depressingly familiar tale, providing public access to the latest information like this should help shine a light on illegal logging activities.
The advent of crowd-sourcing and volunteered geographic information (VGI), facilitated by easy access to relatively cheap, GPS-enabled devices and cloud-based mapping services, has transformed our ability to record and respond to natural and man-made hazards and emergencies. VGI can provide an invaluable local commentary on rapidly changing situations that would otherwise be bereft of real-time, detailed observation.
This VGI resource is also increasingly valued in the documentation of more insidious regional and global phenomena such as climate change. The high cost of traditional scientific data capture and the lack of a consistent, regional overview prompted a re-think of how such information should be captured. The pan-European research Citizen Observatory Web (COBWEB) project, launched at the end of 2012 and due to be released in 2016, aims to develop an observation framework to support the collection of crowd-sourced environmental data throughout Europe. The emerging COBWEB infrastructure is set to be trialled in study areas that come under the UNESCO World Network of Biosphere Reserves (WNBR). The COBWEB consortium (made up of 13 European organisations) hopes the motivation to retain the unique characteristics of the biosphere reserves will encourage local citizens to become involved in monitoring the local environment.
To address some of the inherent problems with VGI – data quality, interoperability and validation – COBWEB will integrate the crowd-sourced observations with authoritative reference data published by public authorities under the INSPIRE directive, from compliant spatial data infrastructures (SDI) and the Global Earth Observation System of Systems (GEOSS). If these integrated data sources are accepted as a reliable source of information to support further research and as a basis for policy making, this will be a significant achievement for COBWEB. Another major challenge for the project is to develop a workable accessibility framework for the data sources, which will combine publicly available crowd-sourced data with information from more restricted sources.
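One simple form of the validation described here is checking each crowd-sourced observation against an authoritative boundary, such as a biosphere reserve polygon. A minimal sketch follows, using a standard ray-casting point-in-polygon test; the reserve boundary and observation coordinates are hypothetical, not COBWEB data:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: True if (x, y) falls inside the polygon,
    given as a list of (x, y) vertices in order."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edges the horizontal ray from (x, y) crosses to its right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical biosphere-reserve boundary (simple rectangle in lon/lat).
reserve = [(7.0, 49.0), (8.0, 49.0), (8.0, 50.0), (7.0, 50.0)]

# Crowd-sourced sightings: only those inside the reserve pass this check.
observations = [(7.5, 49.5), (6.2, 49.1)]
valid = [obs for obs in observations if point_in_polygon(*obs, reserve)]
print(valid)  # only the first observation lies inside the boundary
```

A production system would of course use real SDI geometries and a geospatial library rather than a hand-rolled test, but the principle, screening volunteered reports against authoritative reference data, is the same.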