Welcome to the Spatial Reserves blog.
The GIS Guide to Public Domain Data was written to provide GIS practitioners and instructors with the essential skills to find, acquire, format, and analyze public domain spatial data. Some of the themes discussed in the book include open data access and spatial law, the importance of metadata, the fee vs. free debate, data and national security, the efficacy of spatial data infrastructures, the impact of cloud computing, and the emergence of the GIS-as-a-Service (GaaS) business model. Recent technological innovations have radically altered how both data users and data providers work with spatial information to help address a diverse range of social, economic, and environmental issues.
This blog was established to follow up on some of these themes, promote a discussion of the issues raised, and host a copy of the exercises that accompany the book. This storyboard provides a brief description of the exercises.
I have created a series of 22 new videos that describe decision making with GIS using public domain data. The videos, which use the ArcGIS Spatial Analyst extension, are listed and accessible in this YouTube playlist. Over 108 minutes of content is included, in easy-to-understand short segments composed almost entirely of demonstrations of the tools in real-world contexts. They make use of public domain data such as land cover, hydrography, roads, and a Digital Elevation Model.
The videos cover the topics listed below. Videos 10 through 20 include a real-world scenario of selecting optimal sites for fire towers in the Loess Hills of eastern Nebraska, an exercise that Jill Clark and I included in the Esri Press book The GIS Guide to Public Domain Data and that is available online.
1) Using the transparency and swipe tools with raster data.
2) Comparing and using topographic maps and satellite and aerial imagery stored locally with the same types of data in the ArcGIS Online cloud.
3) Analyzing land cover change with topographic maps and satellite imagery on your local computer and with ArcGIS Online.
4) Creating a shaded relief map using hillshade from a Digital Elevation Model (DEM).
5) Analyzing a Digital Elevation Model and a shaded relief map.
6) Creating contour lines from elevation data.
7) Creating a slope map from elevation data.
8) Creating an aspect (direction of slope) map from elevation data.
9) Creating symbolized contour lines using the Contour with Barriers tool.
10) Decision making using GIS: Introduction to the problem, and selecting hydrography features.
11) Decision making using GIS: Buffering hydrography features.
12) Decision making using GIS: Selecting and buffering road features.
13) Decision making using GIS: Selecting suitable slopes and elevations.
14) Decision making using GIS: Comparing Boolean And, Or, and Xor Operations.
15) Decision making using GIS: Selecting suitable land use.
16) Decision making using GIS: Selecting suitable land use, slope, and elevation.
17) Decision making using GIS: Intersecting vector layers of areas near hydrography and near roads.
18) Decision making using GIS: Converting raster to vector data.
19) Decision making using GIS: Final determination of optimal sites.
20) Creating layouts.
21) Additional considerations and tools in creating layouts.
22) Checking Extensions when using Spatial Analyst tools.
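The slope and suitability steps above (videos 7, 13, and 14) can be sketched outside ArcGIS as well. The following Python sketch is purely illustrative: the tiny DEM, the 30 m cell size, and the slope and elevation thresholds are all invented for this example, and the finite-difference slope formula is a simplified stand-in for the Spatial Analyst Slope tool.

```python
# Illustrative sketch of deriving slope from a DEM and combining Boolean
# suitability criteria. All values are hypothetical, not from the exercise data.

CELL = 30.0  # assumed raster cell size in metres

# A tiny 4x4 DEM (elevations in metres) standing in for real elevation data.
dem = [
    [300, 302, 305, 309],
    [301, 303, 307, 312],
    [302, 305, 310, 316],
    [304, 308, 314, 321],
]

def slope_percent(dem, r, c, cell):
    """Approximate slope (% rise) at a cell using simple finite differences."""
    rows, cols = len(dem), len(dem[0])
    r0, r1 = max(r - 1, 0), min(r + 1, rows - 1)
    c0, c1 = max(c - 1, 0), min(c + 1, cols - 1)
    dz_dx = (dem[r][c1] - dem[r][c0]) / ((c1 - c0) * cell)
    dz_dy = (dem[r1][c] - dem[r0][c]) / ((r1 - r0) * cell)
    return 100.0 * (dz_dx ** 2 + dz_dy ** 2) ** 0.5

rows, cols = len(dem), len(dem[0])

# Boolean criteria rasters: gentle slopes (<= 15%) and a mid-range
# elevation band (302-312 m). Thresholds are invented for illustration.
slope_ok = [[slope_percent(dem, r, c, CELL) <= 15.0 for c in range(cols)]
            for r in range(rows)]
elev_ok = [[302 <= dem[r][c] <= 312 for c in range(cols)] for r in range(rows)]

# Boolean And keeps cells meeting both criteria; Or keeps cells meeting
# either; Xor keeps cells meeting exactly one of the two.
suitable_and = [[slope_ok[r][c] and elev_ok[r][c] for c in range(cols)] for r in range(rows)]
suitable_or  = [[slope_ok[r][c] or  elev_ok[r][c] for c in range(cols)] for r in range(rows)]
suitable_xor = [[slope_ok[r][c] !=  elev_ok[r][c] for c in range(cols)] for r in range(rows)]
```

In ArcGIS itself these steps correspond to the Slope tool and the Boolean And, Or, and XOr tools in the Spatial Analyst extension; the sketch simply shows the cell-by-cell logic those tools implement.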
How might you be able to make use of these videos and the processes described in them in your instruction?
The World Resources Institute (WRI) has recently announced the launch of Global Forest Watch (GFW), a dynamic forest monitoring system that aims to provide ‘timely and reliable’ information about the state of the world’s forests. Using a combination of satellite imagery, open access data, and crowdsourced information, GFW builds on earlier projects, such as the Forest Frontiers Initiative and the Forest Atlases (one of the case studies we discussed in The GIS Guide to Public Domain Data), which promoted the sustainable management of forest resources.
One of the big issues in monitoring forest reserves has been that, given the often inaccessible locations, by the time harmful and illegal logging was reported it was invariably too late to stop the deforestation. GFW aims to provide near real-time information on forest clearing activities so that local authorities, governments, global business, and the general public have access to the latest, and hopefully most accurate, status of forest reserves. The listed data sources include:
- Forest change (many derived from MODIS data)
- Forest cover
- Forest use
The GFW web site provides access to a global map based on the University of Maryland Tree Cover Loss and Gain data.
The GFW site also provides a time-lapse run-through of the last twelve years’ change in tree cover.
Although the predominance of forest cover loss (pink) as opposed to gain (blue) in many areas tells a depressingly familiar tale, providing public access to the latest information like this should help shine a light on illegal logging activities.
The Government Accountability Office has updated its review of federal GIS activities. The study was conducted because “The federal government collects, maintains, and uses geospatial information–information linked to specific geographic locations–to support many functions, including national security and disaster response. In 2012, the Department of the Interior estimated that the federal government was investing billions of dollars on geospatial data annually, and that duplication was common.”
The report said that, “The President and the Office of Management and Budget (OMB) established policies and procedures for coordinating investments in geospatial data. However, in November 2012, GAO reported that governmentwide committees and federal departments and agencies had not effectively implemented them. The committee that was established to promote the coordination of geospatial data nationwide–the Federal Geographic Data Committee (FGDC)–had developed and endorsed key standards and had established a clearinghouse of metadata. GAO found that the clearinghouse was not being used by agencies to identify planned geospatial investments to promote coordination and reduce duplication. In addition, the committee had not yet fully planned for or implemented an approach to manage geospatial data as related groups of investments to allow agencies to more effectively plan geospatial data collection efforts and minimize duplicative investments, and its strategic plan was missing key elements.”
“Other shortfalls have impaired progress in coordinating geospatial data. Specifically, none of the three federal departments in GAO’s review had fully implemented important activities such as preparing and implementing a strategy for advancing geospatial activities within their respective departments. Moreover, the agencies in GAO’s review responsible for governmentwide management of specific geospatial data had implemented some but not all key activities for coordinating the national coverage of specific geospatial data.”
“GAO is making no new recommendations in this statement. In November 2012, GAO recommended that to improve coordination and reduce duplication, FGDC develop a national strategy for coordinating geospatial investments; federal agencies follow federal guidance for managing geospatial investments; and OMB develop a mechanism to identify and report on geospatial investments. Since that time, FGDC and several agencies have taken some steps to implement the recommendations. However, additional actions are still needed.”
Why are we not surprised? To be fair, coordinating any activity among federal agencies, particularly one as pervasive and cross-cutting as geospatial data collection and use, is an enormous task. Furthermore, coordination cannot be established and then just placed on “auto pilot”; it needs to be continually improved and adjusted with changing needs, stakeholders, priorities, and decision makers. On the other hand, coordination of federal geospatial activities has been a goal for 20 years now, since the NSDI was signed into being back in 1994. We discuss the progress made and the challenges that are still outstanding at length in our book, The GIS Guide to Public Domain Data. It is disheartening to read that so much remains to be done, but encouraging to see at least some progress, and reports like this one help keep coordination moving forward.
Last year we wrote about the imminent influx of high resolution imagery from unmanned aerial vehicles (UAVs) or drones and the great potential this could offer those agencies responding to emergency situations where the effective provision of humanitarian aid relies heavily on access to current, accurate and readily available map data.
When Typhoon Haiyan (Yolanda), reportedly the strongest typhoon ever to make landfall, struck the Philippines on the 8th of November 2013, it caused catastrophic destruction and loss of life. The Humanitarian OpenStreetMap Team (H.O.T.) activated Project Haiyan to provide geographic base data for the affected areas.
However, as Kate Chapman reported in a project update last month, although a large number of UAVs had been used to collect imagery immediately after the typhoon struck, much of the mapping activity was uncoordinated, resulting in fragmented data sources that were unavailable to the aid agencies. UAV imagery can provide much higher resolution data (5-10cm) than is currently available from satellite imagery sources (0.5m). But if the data can’t be accessed when required, if the relevant agencies don’t know what’s available and from whom, or if the licensing arrangements prohibit open access to the data, then the transient opportunities to put the data to good use are lost.
Given the increasing miniaturisation, reduced costs, and availability of these devices, a register of publicly available UAV data sources, a crowdsourced OpenUAVImagery initiative, or the “OpenReconstruction/Open Drone” platform described by H.O.T. would seem to be the next step towards making the most of this data resource.
An online e-book entitled Open Government Data by Joshua Tauberer is, according to the author, “the culmination of several years of thinking about the principles behind the open government data movement in the United States.” In the book, he “frame[s] the movement as the application of Big Data to civics. Topics include principles, uses for transparency and civic engagement, a brief legal history, data quality, civic hacking, and paradoxes in transparency.”
The author is the creator of the US Congress-tracking tool GovTrack.us, which launched in 2004, helping to spur the national open government data community. He was also a co-founder of POPVOX, a platform for advocacy, providing a means for citizens to communicate with Congress about the issues they care about.
Tauberer mentions GIS data in part 2.2 where he uses Google Transit Feed Specification data as an example (three-quarters of the way down the page, in Figure 8) to visualize ridership in the Washington DC area. But despite the lack of overt GIS references, I believe this book could be useful to the readers of our book and this blog. Its chapters include “Big Data Meets Open Government”, “Civic Hacking by Example”, “Applications to Open Government”, “A Brief Legal History of Open Government Data”, “Paradoxes in Open Government”, and “Example Policy Language”. In particular, the chapter on “A Brief Legal History of Open Government Data” provides useful additional reading after reading Chapter 1 of our book, The GIS Guide to Public Domain Data. Through reading Tauberer’s book, one can better understand how spatial data can and should fit into larger open data and open government initiatives.
A couple of interesting articles have appeared recently discussing the emergence of Google Maps, the changing fortunes of some other leading mapping companies, and an argument against the dominance of Google products in favour of OpenStreetMap. In his article Google’s Road to Global Domination, Adam Fisher charts the rise of the Google Maps phenomenon, the visionary aspirations to chart the streets of San Francisco that led to the development of Street View, and the development of technologies, such as the self-driving car, that will incorporate the accumulated map data and may one day obviate the need for individuals to interpret a map for themselves.
Taking a stand against a mapping monopoly, Serge Wroclawski’s post Why the World Needs OpenStreetMap urges readers to rethink their habitual Google Maps usage in favour of the ‘neutral and transparent’ OpenStreetMap. Wroclawski argues that no one company should have sole responsibility for interpreting place, nor the information associated with that place (we wrote on a similar theme in Truth in Maps about the potential for bias in mapping), and that a map product based on the combined efforts of a global network of contributors, which is free to download and can be used without trading personal location information, is the better option for society. However, in his closing comment Fisher quotes O’Reilly: ‘the guy who has the most data, wins’. Will OpenStreetMap be able to compete against the power of Google when it comes to data collection?
Whatever the arguments for or against a certain mapping product, perhaps the most important consideration is choice. As long as users continue to have a choice of map products and are aware of the implications, restrictions and limitations of the products they use, then there should be room for both approaches to the provision of map services.
A recent article in Sensors & Systems: Making Sense of Global Change raised key issues regarding challenges and considerations in geospatial data integration. Author Robert Pitts of New Light Technologies recognizes that the increased availability of data presents opportunities for improving our understanding of the world, but that combining diverse data remains a challenge for several reasons. I like the way he cuts through the noise and captures the key analytical considerations, which we address in our book, The GIS Guide to Public Domain Data. These include coverage, quality, compatibility, geometry type and complexity, spatial and temporal resolution, confidentiality, and update frequency.
In today’s world of increasingly available data, and ways to access that data, integrating data sets to create decision-making dashboards for policymakers may seem like a daunting task–much worse than that term paper you were putting off writing until the last minute. However, breaking down integration tasks into the operational considerations that Mr. Pitts identifies may help the geospatial and policymaking communities make progress toward the overall goal. These operational considerations include access method, format and size of data, data model and schema, update frequency, speed and performance, and stability and reliability.
Fortunately, as Mr. Pitts points out, “operational dashboards” are appearing that help decision makers work with geospatial data in diverse contexts and scales. These include the US Census Bureau’s “On the Map for Emergency Management”, based on Google tools, and the Florida State Emergency Response Team’s Geospatial Assessment Tool for Operations and Response (GATOR), based on ArcGIS Online technology, shown here.
As we discuss in our book and in this blog, portals or operational dashboards will not by themselves ensure that better decisions will be made. I see two chief challenges with these dashboards and make the following recommendations: (1) Make sure that those who create them are not simply putting something up quickly to satisfy an agency mandate. Rather, those who create them need to understand the integration challenges listed above as they build the dashboard. Furthermore, since the decision makers are likely not geospatial professionals who understand scale, accuracy, and so on, the creators of these dashboards need to communicate the above considerations in an understandable way to those using the dashboards. (2) Make sure that the dashboards are maintained and updated. If you are a regular reader of this blog, you know that we are blunt in our criticism of portals that may be well-intentioned but are out of date and/or extremely difficult to use. For example, the US Census dashboard that I analyzed above contained emergencies that were three months old, despite the fact that I had checked the current date box for my analysis.
Take a look around at our world. We need to incorporate geospatial technologies in decision making across the private sector, nonprofit organizations, and in government, at all levels and scales. It is absolutely critical that geospatial tools and data are placed into the hands of decision makers for the benefit of all. Progress is being made, but it needs to happen at a faster pace through the effort of the geospatial community as well as key decision makers working together.