April 16, 2012 4 comments

Welcome to the Spatial Reserves blog.

The GIS Guide to Public Domain Data was written to provide GIS practitioners and instructors with the essential skills to find, acquire, format, and analyze public domain spatial data. Some of the themes discussed in the book include open data access and spatial law, the importance of metadata, the fee vs. free debate, data and national security, the efficacy of spatial data infrastructures, the impact of cloud computing, and the emergence of the GIS-as-a-Service (GaaS) business model. Recent technological innovations have radically altered how both data users and data providers work with spatial information to help address a diverse range of social, economic, and environmental issues.

This blog was established to follow up on some of these themes, promote discussion of the issues raised, and host a copy of the exercises that accompany the book. This storyboard provides a brief description of the exercises.

No Place Else I’d Rather Be: Troubles and Triumphs of Prairie Restoration

October 10, 2015 Leave a comment

By Sarah Hagen
Spatial Ecologist, The Nature Conservancy, Illinois, USA

It’s a sunny midsummer morning in south central Wisconsin. From my position on the bluff overlooking a large prairie and wetland complex, I can hear several varieties of songbirds in the trees above me. To my right, high up in the old, gnarled bur oaks, I hear the squawking and drumming of a flock of red-headed woodpeckers. The scene above and below me is lush and green and idyllic, but that’s not what I’m thinking about.

I’m thinking about how it’s not even ten in the morning and the temperature has already risen above 85°F. I’m thinking about how I just slid down a 60+° slope through a patch of poison ivy for what feels like the thousandth time in the past two hours. I’m thinking about how I can still feel the cuts on my legs from my trek through the honeysuckle and blackberries. I’m thinking about the chemical burns on the side of my neck that I was rewarded with when I failed to dodge a wild parsnip as it careened toward my face a week ago. I’m hot and I’m tired and I’m dirty and I still have six hours of this ahead of me before I climb into a car, drive 45 minutes back to my apartment, take a shower, eat a mountain of food, and fall asleep by 10:00 p.m. while my roommates head out to have all the fun that summer in Madison can offer. At 6:00 a.m. my alarm will ring and I’ll wake up and do it all again.


The Work–and the Rewards–of Prairie Restoration.

Prairie restoration, as with most forms of conservation and land management, is rarely as glamorous as friends and family may think it is. It requires long hours in long sleeves and long pants under the blazing summer sun with little to no cover to speak of. It requires trekking through sometimes difficult terrain to places inaccessible by vehicle, shovel slung over your shoulder as you climb up hills and through brush—most of which has some manner of thorns—while insects buzz around your head and crawl at your feet. The summer I spent working in prairie restoration in southern Wisconsin was one of the most difficult of my life. While friends were staying out late and sleeping in and celebrating their recent completion of undergraduate study, I was waking up early and collapsing into bed, sunburnt and aching and exhausted, a few hours after I arrived back home. Never a morning person, I grumbled at the 6:00 a.m. wakeup call and the prospect of another day of grueling work under a relentless sun.

That summer was also one of the most rewarding of my career. I learned more about ecology—bird and plant identification, care of the land, the history of the land—in a few short weeks on the prairie than I did in four years of top-rate university education. I had more wholly rewarding experiences standing amidst the grasses than I did anywhere else that my travels had taken me. I still recall the day that, while pulling garlic mustard deep in a forest, I stumbled upon a fern grove. It was the sort of magical place that you picture in your mind while reading old fairy tales. I half expected gnomes and sprites to be running about beneath my feet. I remember uncovering a nest of newly hatched wild turkeys, the little ones all striped and fuzzy as they peeped and scurried about until their mother returned. I remember the calls of the Sandhill cranes as they flew gracefully over my head. I remember the rare orchids and the flocks of red-headed woodpeckers becoming an almost commonplace daily fixture.

Prairie restoration is difficult. You’re hot and tired and dirty for long hours, day after day. There’s always more work to be done. It’s easy to give up on the grasslands, to decide that it’s hopeless. It’s easy to wonder why you’re doing this anyway. But for every day I felt miserable and sorry for myself, there was a moment where a rare breeze blew around me and I put my shovel down for a moment, looked at the seemingly endless fields of big bluestem waving in the wind, listened to the birds singing all around me, and thought, “There’s nowhere else I’d rather be.”


Authors: This week’s post is guest-written by The Nature Conservancy’s LANDFIRE team, which includes Kori Blankenship, Sarah Hagen, Randy Swaty, Kim Hall, Jeannie Patton, and Jim Smith. The LANDFIRE team is focused on data, models, and tools developed to support applications, land management, and planning for biodiversity conservation.

If *you* would like to guest-write for the Spatial Reserves blog about geospatial data, use the About the Authors section to contact one of us about your topic.

Global Imagery Donation From Planet Labs

October 5, 2015 1 comment

Planet Labs, a US-based imaging company that operates a constellation of miniature satellites, recently announced a new collaborative project with the United Nations and a number of private institutions and NGOs. The initiative, known as Open Region, will see the publication of $60 million worth of global imagery under a Creative Commons Attribution-ShareAlike (CC BY-SA) license.


Planet Labs Color Images

The data will be available online through the Planet Labs imaging platform and accessed using web-based tools and/or an API for developers. The hope is that easy and open access to the new data sets will provide a platform to help meet the UN’s Sustainable Development Goals, which include tackling climate change, promoting sustainable use of resources, and eliminating poverty.
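What that programmatic access might look like in practice: the sketch below shows a generic imagery-catalog query in Python. The endpoint, parameters, and response fields are hypothetical placeholders for illustration, not Planet Labs’ documented API.

```python
import requests

# Hypothetical sketch of pulling scene footprints from an imagery API.
# The base URL, endpoint, and field names below are illustrative
# placeholders, not Planet Labs' documented interface.
API_URL = "https://api.example.com/v1/scenes"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                       # issued on registration

params = {
    "bbox": "-122.54,37.62,-122.35,37.81",  # area of interest (lon/lat)
    "acquired_after": "2015-01-01",         # filter by acquisition date
    "cloud_cover_max": 0.2,                 # keep mostly cloud-free scenes
}

response = requests.get(API_URL, params=params, auth=(API_KEY, ""))
response.raise_for_status()

for scene in response.json().get("scenes", []):
    print(scene.get("id"), scene.get("acquired"), scene.get("cloud_cover"))
```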

Copyright in Today’s Web Mapping World

September 27, 2015 1 comment

GIS professional Nicholas Duggan has written an excellent article and a flowchart to help mapmakers and GIS analysts decide if they can legally use specific data sets in their work.

As Duggan points out, “anyone can make maps”, and as we emphasize in our book, today’s web mapping environment makes accessing data easier than ever. But even though it is possible to use a specific data set, does that mean we legally have the right to do so? Duggan’s flowchart can help with these decisions. About the flowchart, Duggan says that “it does not cover the plethora of data or map license types, this chart provides an easy reference as to whether you may or may not use the material you intend to use. Of course this may vary from country to country and on a case by case basis; also this does not serve as a legal document; legal advice should be obtained in case of dispute.” I find the flowchart very useful and applaud Mr. Duggan for creating and sharing it.
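Duggan’s flowchart is the reference to consult; purely to illustrate how such a decision procedure works, here is a toy Python sketch covering a few common Creative Commons cases. The rules encoded here are deliberately simplified and are no substitute for the flowchart, the full license text, or legal advice.

```python
def can_use(license_code: str, will_modify: bool, commercial: bool) -> str:
    """Toy decision helper for a few common Creative Commons licenses.

    Grossly simplified: a real decision depends on jurisdiction and the
    full license text. See Duggan's flowchart, and seek legal advice in
    case of dispute.
    """
    code = license_code.upper()
    if code == "CC0":
        return "Yes: public domain dedication, no conditions."
    if "NC" in code and commercial:
        return "No: NonCommercial license, commercial use not permitted."
    if "ND" in code and will_modify:
        return "No: NoDerivatives license, modified versions not permitted."
    conditions = ["attribute the creator"]
    if "SA" in code and will_modify:
        conditions.append("share your derivative under the same license")
    return "Yes, provided you " + " and ".join(conditions) + "."

print(can_use("CC-BY-SA", will_modify=True, commercial=True))
# Yes, provided you attribute the creator and share your derivative
# under the same license.
```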

And yes, following his own advice and ours in the book, I did ask his permission to refer to his resource in this blog! Thank you, Nicholas.


Nicholas Duggan’s helpful article and flowchart to help GIS users decide if they can use a data set.


September 13, 2015 Leave a comment

By Randy Swaty, LANDFIRE Ecologist

When we try to pick out anything by itself, we find it hitched to everything else in the universe. — John Muir

The Nature Conservancy program that I work for just hit the ten-year mark, and that birthday got me to thinking. About origin stories. About ideas that are so simple, they’re obvious — but only after the fact. About how great need prompts great action. About how everything is hitched to everything else, once you look closely enough. Landmark birthdays are excellent prompts for us to connect the dots, to find out how we got here, and then look at where we can go.

First, the obvious: Our livelihoods — our very lives — depend on grasslands, shrublands and forests, all kinds of living connections that have nothing to do with politics or ownership, that have their own integrity, that recognize no boundaries. Health and hardship spill over and swap across landscapes and watersheds all the time. At the Conservancy, we know that restoring, managing and conserving land and water requires knowledge and insight that extend beyond boundaries as well. Then, the origin of my program — great need.


LANDFIRE data and project from The Nature Conservancy.

At LANDFIRE’s decade mark, I’m proud to work with great people on a project that spans all 50 states. What I do is connected to my home, my neighborhood, my city, my state and beyond those boundaries. I make models that change the way people manage land, but before LANDFIRE made that possible, we worked with next-to-nothing in terms of scientifically based data. The fact that I did without that foundation makes me all the more appreciative.

In graduate school I studied soil fungi (mycorrhizae; see photo), assuming that U.S. ecosystems had already been mapped and described. Moving from researching microbes to conserving landscapes, I soon learned I was wrong. My first job with The Nature Conservancy was to help large industrial land managers such as MeadWestvaco and International Paper explore opportunities across the Upper Peninsula of Michigan to conserve wildlife habitat, protect landscape features such as wolf dens, and ensure that all ecosystems there had representation.

To do that, we needed a common vegetation dataset that covered both the lands those managers owned and the lands in between — something that told us, for example, which ecosystems were under- and which were over-represented on the landscape. No such dataset existed: data were either missing altogether or collected in different ways for different reasons. The situation for restoration in the U.S. prior to LANDFIRE was chaotic. To understand just HOW nuts it was, imagine a grocer stocking shelves using only information provided by a handful of vendors. Not only that, but the vendors record stock differently — some count individual cans, some count cases, some report by volume (quarts, liters) and others by weight (pounds, kilos). And just because vendors report inventory delivered, that doesn’t mean the totals are correct, or that the grocers who received the deliveries agree with the count.

In the MeadWestvaco and International Paper case, the companies had robust inventories of their own lands, but these datasets were not compatible, and there were no data for the private non-industrial lands. We could either compare cases to cans or make huge assumptions.
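To make the grocer analogy concrete, here is a toy sketch of what “comparing cases to cans” forces you to do: pick conversion factors. Every factor below is invented, which is exactly the “huge assumptions” problem.

```python
# Toy illustration of the grocer analogy: vendors report stock in
# different units, so comparing them means normalizing to one common
# unit -- and every conversion factor below is an invented assumption,
# which is precisely the problem.
CANS_PER_CASE = 24      # assumed case size
CANS_PER_LITER = 2.5    # assumed can volume (~400 ml)
CANS_PER_KILO = 2.0     # assumed can weight (~500 g)

vendor_reports = [
    {"vendor": "A", "amount": 120, "unit": "cans"},
    {"vendor": "B", "amount": 10,  "unit": "cases"},
    {"vendor": "C", "amount": 80,  "unit": "liters"},
    {"vendor": "D", "amount": 55,  "unit": "kilos"},
]

to_cans = {
    "cans": 1,
    "cases": CANS_PER_CASE,
    "liters": CANS_PER_LITER,
    "kilos": CANS_PER_KILO,
}

total = sum(r["amount"] * to_cans[r["unit"]] for r in vendor_reports)
print(f"Estimated total stock: {total:.0f} cans (given the assumed factors)")
```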

Making the ‘Encyclopedia of Ecosystems’

Then I heard about LANDFIRE. The idea was the brainchild of great scientific minds at The Nature Conservancy and federal agencies, and was envisioned as a suite of tools to support land management efforts that would reduce wildfire risk. I joined up as fast as I could: not only was it the answer to my first job’s immediate problem, but I could also see LANDFIRE’s huge potential to benefit conservation and restoration in situations beyond fire.

The Conservancy’s first major role in the LANDFIRE project was to describe how all 1,800+ ecosystems in the U.S. looked and worked when functioning “naturally,” i.e., before invasives, fire suppression or large-scale logging.  The result was the first national “Encyclopedia of Ecosystems,” and it was initially used to create a dataset called “Vegetation Departure” that compared current conditions for these ecosystems to the “natural” or “reference” conditions.
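The idea behind such a comparison can be sketched in a few lines. The following is a simplified similarity-based departure measure with hypothetical numbers, not the exact LANDFIRE Vegetation Departure algorithm:

```python
def vegetation_departure(reference: dict, current: dict) -> float:
    """Simplified departure measure: 100 minus the summed overlap of
    reference and current class percentages. A sketch of the idea only,
    not the exact LANDFIRE Vegetation Departure algorithm.
    """
    classes = set(reference) | set(current)
    similarity = sum(min(reference.get(c, 0), current.get(c, 0)) for c in classes)
    return 100 - similarity

# Percent of the landscape in each succession class (hypothetical numbers).
reference = {"early": 20, "mid_open": 30, "late_open": 40, "late_closed": 10}
current   = {"early": 5,  "mid_open": 15, "late_closed": 70, "uncharacteristic": 10}

print(f"Departure: {vegetation_departure(reference, current):.0f}%")  # 70%
```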

Vegetation Departure has been immensely significant to the cause of conservation. For instance, it informed the creation in 2000 of the U.S. National Fire Plan — a long-term national strategy for reducing the catastrophic impacts of wildfires. It also continues to provide conservationists with targets for their restoration work.

The Pressure Cooker of Creating Reference Condition Info

Creation of these reference conditions was a wild ride, starting with the sheer volume of information.  Working with more than 700 experts via 40+ workshops, 35+ WebExes, and innumerable phone calls and individual visits, the six-person Conservancy LANDFIRE team assembled, wrote and delivered seven ecosystem descriptions and models each week — 100-300 pages — for 183 weeks running.

The pressure was on: a number of important products (such as the map of these ecosystems and their associated fire regimes) relied on fast delivery of the information, and every bit of it had to be peer-reviewed and checked for accuracy. Being late or wrong threw off the whole process. At the end of the first phase, we had an amazing record, having delivered “final” or at least “workable” drafts on schedule and within budget every time.

On the flip side, people were suddenly jockeying to sit next to me, a budding ecologist, at meals during my first LANDFIRE workshop for experts. I wondered what was going on until I learned that the outcomes of our models and descriptions had important budget implications for these managers.

But LANDFIRE’s rigorous system of workshops, peer review, QA/QC and internal review gave us a strong scientific buffer from the pressure we all felt. Additionally, our models were and are informed by literature review, local datasets and general ecological principles. At last, grocers, vendors, inventories and deliveries were all on the same page, and data sheets could be balanced with a high degree of accuracy and agreement.

Mapping Bighorn Sheep Viability to Pollinator Habitat and More

The models and descriptions the LANDFIRE team developed not only fed into the Vegetation Departure dataset, but also contributed to other important ecological datasets, including three for historic fire regimes and the current data showing the developmental stage of each ecosystem in the United States.

These datasets paint a picture of the entire landscape. They are not only beautiful, but allow conservationists to understand what they are dealing with. For example, the current data has allowed agency, academic and NGO partners to better monitor conditions along the Appalachian Trail — a huge area covering many ownerships and political boundaries (see LANDFIRE data in Appalachian Trail Mapping Viewer). LANDFIRE data spans them all.

Our colleagues at the U.S. Forest Service and U.S. Geological Survey then took our work and turned it into a staggering output of maps — 2.43 billion acres mapped in total (44,000 satellite images processed) across 25 spatial datasets, each covering the United States. A visit to our WHAM! (Web Hosted Applications Map) offers a snapshot of some of our favorite LANDFIRE data uses, ranging from understanding bighorn sheep viability in Idaho, to mapping wildland fire potential for the entire country, to understanding wildland habitat value for pollinators in California.

The bottom line: the Conservancy believes that conservation must be practiced within context. It’s not just about the special plants, wildlife and services that nature might provide in a particular place, but how they fit into the entire landscape and how all the activities on that landscape affect its ecological function. LANDFIRE products have enabled, and still enable, that approach.

While the process was stressful for all of us, I can’t help but think of those crazy early days as rewarding. We were doing something that had never been done before. Along the way we forged great friendships and learned a lot about ecology and ourselves.

Data + Maps = Collaboration

LANDFIRE’s work didn’t end with that first great push. All this great data doesn’t sell or explain itself, so the Conservancy’s LANDFIRE team works hard to promote appropriate and innovative uses for LANDFIRE products that also have conservation benefits. For example, LANDFIRE recently has filled data gaps for forest restoration in Colorado’s Upper Monument Creek (UMC), a high-priority landscape for conservation within the Pike National Forest that has also experienced some severe and expensive wildfires such as the infamous “Waldo Canyon Fire” of 2012.

After that fire, the UMC Landscape Restoration Initiative was launched to accelerate the pace of forest restoration through data-informed prioritization of “what to do where.”


Editor’s Note: This week’s post is guest-written by The Nature Conservancy’s LANDFIRE team, which includes Kori Blankenship, Sarah Hagen, Randy Swaty, Kim Hall, Jeannie Patton, and Jim Smith. The LANDFIRE team is focused on data, models, and tools developed to support applications, land management, and planning for biodiversity conservation. If you would like to guest-write for the Spatial Reserves blog about geospatial data, use the About the Authors section to contact one of us about your topic.

The Good Ole Bad Days: Pixels, Scale and Appropriate Analysis

August 30, 2015 2 comments

By Jim Smith, LANDFIRE Project Lead, The Nature Conservancy

Recently I saw a bumper sticker that said, “Just because you can doesn’t mean you should.” I couldn’t have said it better, especially regarding zooming in on spatial data.

Nowadays (alert: grumble approaching), people zoom in tightly on their chosen landscape, region, and even pixel, whether the data support that kind of close-up view or not. Predictably, that means a LOT of misapplication of perfectly good science, followed by head scratching and complaining.

To set a context, I want to look back at the “good ole days,” when people used less precise spatial data but had a better sense of proportion. By “ole” I mean before the mid-1980s or so, when almost all spatial data and spatial analyses were “analog”: Mylar map layers, hard-copy remote sensing images and light tables (Ian McHarg’s revelation?). In 1978, pixels on satellite images were at least an acre in size. Digital aerial cameras and terrain-corrected imagery barely existed. The output from an image processing system was a line-printer “map” that used symbols for mapped categories, like “&” for Pine and “$” for Hardwood (yes, smarty pants, that was about all we could map from satellite imagery at that time). The power and true elegance we have at our fingertips today was unfathomable when I started working in this field barely 30 years ago.

Let me wax nostalgic a bit more — indulge me, because I am an old GIS coot (relatively, anyway). I remember command-line ArcInfo, and when “INFO” was the actual relational database used by ESRI software (did you ever wonder where the name ArcInfo came from?). I remember when ArcInfo came in modules like ArcEdit and ArcPlot, each with its own manual, which meant a total of about three feet of shelf space for the set. I remember when ArcInfo required a so-called “minicomputer” such as a DEC VAX or Data General, and when an IBM mainframe computer had only 512K [not MB or GB] of RAM available. I know I sound like the clichéd dad telling the kids about how bad it was when he was growing up — carrying his brother on his back to school in knee-deep snow with no shoes and all that — but pay attention anyway, ‘cause dad knows a thing or two.

While I have no desire to go back to those days, there is one concept I really wish we could resurrect. In the days of paper maps, Mylar overlays, and photographic film, spatial data had an inherent scale that was almost always known and could not be effectively ignored. Paper maps had printed scales — USGS 7.5-minute quads were 1:24,000, so one tiny millimeter on one of these maps (a slip of a pretty sharp pencil) represented 24 meters on the ground, almost as large as a pixel on a mid-scale satellite image today. Aerial photographs had scales, and the products derived from them inherited that scale. You knew it — there was not much you could do about it.
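That arithmetic is worth keeping at hand. A short helper makes the point (assuming a map distance in millimeters and the scale’s denominator):

```python
def ground_distance_m(map_distance_mm: float, scale_denominator: int) -> float:
    """Ground distance represented by a distance measured on the map."""
    return map_distance_mm / 1000 * scale_denominator

# One millimeter (a slip of a sharp pencil) on a 1:24,000 quad:
print(ground_distance_m(1, 24_000))   # 24.0 meters -- close to a 30 m pixel

# The same slip on a 1:100,000 map:
print(ground_distance_m(1, 100_000))  # 100.0 meters
```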

Today, if you care about scale, you have to investigate for hours, or read nearly unintelligible metadata (if available), to understand where the digital spatial data — that stuff you are zooming in on 10 or 100 times — came from and what its inherent scale is. I suspect that most, or at least many, data users have no idea that they should even be asking questions about appropriate scale — after all, the results look beautiful, don’t they? Users often worry about how accurately categories were mapped without thinking for a New York minute about the data’s inherent scale, or about the implied scale of the analysis. I am especially frustrated with the “My Favorite Pixel Syndrome,” in which a user dismisses an entire dataset because it mis-maps the user’s favorite 30-meter location, even though the data were designed to be used at the watershed level or even larger geographies.

So, listen up: all that fancy-schmancy-looking data in your GIS actually has a scale. Remember this, kids, every time you nonchalantly zoom in, or create a map product, or run any kind of spatial analysis. Believe an old codger.


The Good Ole Bad Days: Pixels, Scale and Appropriate Analysis.


Authors: This week’s post is guest-written by The Nature Conservancy’s LANDFIRE team, which includes Kori Blankenship, Sarah Hagen, Randy Swaty, Kim Hall, Jeannie Patton, and Jim Smith. The LANDFIRE team is focused on data, models, and tools developed to support applications, land management, and planning for biodiversity conservation. If you would like to guest-write for the Spatial Reserves blog about geospatial data, use the About the Authors section to contact one of us about your topic.

Trading Services For Location Information

August 24, 2015 1 comment

The popular music streaming service Spotify recently announced an updated set of terms and conditions. In addition to stating its intention to access contact information and photographs stored on a mobile device, Spotify announced that, for those using the Spotify Running feature, the service would also collect location data.

Depending on the type of device that you use to interact with the Service and your settings, we may also collect information about your location based on, for example, your phone’s GPS location or other forms of locating mobile devices (e.g., Bluetooth). We may also collect sensor data (e.g., data about the speed of your movements, such as whether you are running, walking, or in transit). – Spotify
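To illustrate how revealing even coarse sensor data can be, the toy classifier below guesses a movement mode from device speed alone. The thresholds are invented for illustration; real services presumably use far richer sensor fusion.

```python
def classify_movement(speed_m_s: float) -> str:
    """Toy movement classifier from device speed. Thresholds are invented
    for illustration; real services rely on much richer sensor data.
    """
    if speed_m_s < 0.2:
        return "stationary"
    if speed_m_s < 2.0:
        return "walking"
    if speed_m_s < 5.0:
        return "running"
    return "in transit (vehicle)"

for speed in (0.1, 1.4, 3.5, 15.0):
    print(f"{speed:>5.1f} m/s -> {classify_movement(speed)}")
```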

For many service providers, personal location information is just one of the data sources they can tap into to provide a more personal service, and Spotify is not the first to want access. The trade-off we face as consumers is whether the service we want to use justifies handing over some of our personal information. Before consenting to that trade-off, we need to understand what data are collected, how and when, who will use them (is there third-party access?), and whether we can opt out of sharing.

Responding to negative feedback from users concerned about how their personal information would be used, Spotify acknowledged that it didn’t do a good job of communicating the updated terms and conditions, and that what it meant to say was…

Location: We will never gather or use the location of your mobile device without your explicit permission. We would use it to help personalize recommendations or to keep you up to date about music trending in your area. And if you choose to share location information but later change your mind, you will always have the ability to stop sharing. – Spotify

For some, the new personalized service will appeal; for others it will be a (tracked) step too far. The important thing is that all users have sufficient information to make an informed decision before accepting the new terms and conditions.



Know Your Data! Lessons Learned from Mapping Lyme Disease

August 16, 2015 2 comments

I have taught numerous workshops using Lyme disease case counts by town in the state of Rhode Island from 1992 to 1998. I began with an Excel spreadsheet and used Esri Maps for Office to map and publish the data to ArcGIS Online. The results are here.

As the first decade of the 2000s came to a close, my colleague and I wanted to update the data with information from 1999 to the present, so we contacted the Rhode Island Department of Health. They not only provided the updated data, for which we were grateful, but also valuable information about the data, information with wider implications for the data quality issues we frequently discuss on this blog.

The public health staff told us that Lyme disease surveillance is time- and resource-intensive. During the 1980s and 1990s, as funding and human resource capacity allowed, the state ramped up surveillance activities, including robust outreach to healthcare providers. Prioritizing Lyme surveillance allowed the state to obtain detailed clinical information for a large number of cases and classify them appropriately. The decrease observed in the 2004-2005 case counts was due to personnel changes and a shift in strategy for Lyme surveillance. Resource and priority changes reduced the state’s active provider follow-up. As a result, in the years since 2004, the state has been reporting fewer cases than in the past. They believe this decrease is a result of changes to surveillance activities, not a change in the incidence of disease in Rhode Island.
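One lightweight way to keep that caveat attached to the numbers is to flag each year with its surveillance era, so a methodological break is never read as a trend. A sketch with hypothetical counts (pandas):

```python
import pandas as pd

# Hypothetical annual case counts -- illustrative numbers only.
cases = pd.DataFrame({
    "year":  range(1998, 2008),
    "count": [550, 600, 620, 640, 660, 680, 300, 280, 290, 310],
})

# Flag the 2004 change in surveillance strategy so the apparent drop
# is read as a methodological break, not a change in disease incidence.
cases["surveillance_era"] = pd.cut(
    cases["year"],
    bins=[1997, 2003, 2007],
    labels=["active provider follow-up", "reduced follow-up"],
)

print(cases.groupby("surveillance_era", observed=True)["count"].mean())
```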

If this isn’t the perfect example of “know your data”, I don’t know what is. Without that background information, one would surely have drawn erroneous conclusions about the spatial and temporal patterns of Lyme disease. This kind of information often does not make it into standard metadata, which is a reminder that contacting the data provider is often the best way to get the “inside scoop” on how the data were gathered. I created a video highlighting these points. And rest assured, we made certain this information was included in the metadata when we served the updated data.

