One of the cornerstones of current forest policy is the assumption that Western forests are outside their normal range of density and structure, or what is termed "historic variability." But a new look at fire history studies may challenge that assumption.
The commonly given reasons for this are a hundred years of mismanagement that included logging of old growth, fire suppression, and livestock grazing. The idea that forests are out of balance (unhealthy, in other words) is the reason, we are told, for larger fires and insect outbreaks. Due to this past mismanagement, we are told that forests are overgrown, decadent and ready to burn.
The solution: more logging of public lands to “restore” forests to their premanagement era appearance and resiliency. However, this idea fails to recognize that large fires and insects are a natural consequence of current climatic conditions and that the forest is perfectly capable of “restoring” itself. What is missing in the discussion is that a healthy forest is one where natural processes like insects, disease and wildfires still operate, not some static image or preconceived notions of stand structure or appearance.
Not to dismiss the past management that largely continues unabated today, including ongoing logging, grazing and fire suppression, but whether the current forest stand condition is that far from past conditions is a matter of increasing debate. This is especially important because the proposed solution to the perceived problem is to log the forest.
Deforestation is no longer done just to provide timber companies with profits or consumers with wood. Now, lumber companies are involved in a much more noble enterprise—they are logging the trees to “restore” the presumed forest health.
The scientific basis for restoration is dependent on fire-scar studies. These studies suggest that the drier forests composed of lower elevation ponderosa pine and Douglas fir burned frequently and thus kept density low with park-like open stands of mostly larger trees. Keep in mind the discussion is focused on lower-elevation forests, as higher-elevation forests made up of lodgepole pine, fir and spruce are characterized by much longer fire intervals and definitely were not affected to any significant degree by fire suppression.
So we often hear how such low-elevation dry forests burned regularly at frequent intervals in light, cool blazes that removed the litter and killed the small trees, but did little harm to the larger trees.
Like a lot of myths, there is some truth to this generalization, and no doubt in some areas this characterization is accurate. But more recent studies using different methods have begun to question this well-established storyline. These alternative interpretations are finding that the intervals between fires are much longer than previously suspected, and that stand-replacement blazes (where most of the trees are killed) were likely more common, even among lower-elevation dry forests, than previously thought.
Problems With Fire-Scar Methods
The major method for determining the fire history of an area is to find trees with scars created by fires. If the tree is not killed by the blaze, it will develop a scar that can be counted in the tree rings. This record of past fires is then used to determine the “fire rotation,” or the time it takes to burn a specific area one time.
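The arithmetic behind fire rotation is simple. Here is a minimal sketch, using hypothetical numbers, of how a rotation is derived from a fire record:

```python
# Fire rotation: the time it would take for fires to burn an area
# equal to the whole study area once. All figures here are
# hypothetical, for illustration only.
def fire_rotation(period_years, area_burned_fraction):
    """Rotation = length of the record / fraction of the area burned."""
    return period_years / area_burned_fraction

# Example: over a 300-year record, mapped fires burned 60% of the
# study area, implying a 500-year rotation.
print(fire_rotation(300, 0.60))
```

Note that the rotation depends on how much *area* burned, not on how many individual fires left scars, which is where the methodological disputes below come in.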
There are four major flaws with many fire scar studies. These methodological flaws contribute to a shorter fire rotation bias—in other words, they tend to overstate the effect of fire suppression on forests. And if the fire rotation is longer, then much of what is being characterized as unhealthy forest may be perfectly normal and healthy.
The first flaw is targeted sampling. A researcher walks through the forest looking for areas with an abundance of fire scarred trees. The trees in this area are then sampled and used to determine the fire history for the area. In the 1930s the bank robber Willie Sutton was asked why he robbed banks. Sutton is reputed to have replied with the self-evident “because that is where the money is.” (OK, he didn’t really say it. But you get the point.) In a sense that is how fire researchers have gathered their data on fires—they sample in places with a lot of fire scars.
The problem with targeted sampling is that it's nonrandom. It's like going into a brewery to poll people about whether they like beer. Places with an abundance of fire scars tend to have naturally low fuel loadings. But these sites may not be representative of the surrounding landscape, such as north-facing slopes or valley bottoms, which may be wetter or more productive and thus have longer intervals between blazes.
The second flaw is composite fire scars. Most fire studies add up all the fire scars recorded into a composite timeline. The problem with this technique is that the more scars you find and count, the shorter the fire interval becomes. Since the majority of fires do not burn more than a single tree or a small group of trees, using these scars in the composite tends to bias the final count towards much more frequent intervals. Some fire researchers now try to counter this by only including fire scars recorded the same year on three or more trees. Nevertheless, even this may overstate the frequency of fire in a given study area.
In other words, your composite may suggest that a fire burned somewhere within your study area once every five years, but if most of those blazes burned only a few trees, then ecologically speaking they are insignificant. What matters are the fires that burn most or all of the study area, and these larger blazes may be far less frequent. If the occasional blaze, or series of fires, that burns most if not all of the study area comes along only once a century, the real fire rotation for the area may be 100 years, not five.
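The compositing bias is easy to demonstrate numerically. Below is a sketch with entirely hypothetical scar records for five trees, comparing the interval from a composite of every scar against the interval from only those years when three or more trees were scarred (a common proxy for a widespread fire):

```python
from collections import Counter

# Hypothetical scar records: calendar years of fire scars on five
# trees in one study area. Most "fires" here scarred a single tree.
trees = [
    [1700, 1785, 1840],
    [1712, 1785, 1901],
    [1730, 1785, 1840],
    [1755, 1840],
    [1770, 1901],
]

def mean_interval(years):
    """Average gap between distinct fire years."""
    years = sorted(set(years))
    gaps = [b - a for a, b in zip(years, years[1:])]
    return sum(gaps) / len(gaps)

# Composite of every scar: many single-tree fires shorten the
# apparent interval.
composite = [y for tree in trees for y in tree]
print(round(mean_interval(composite), 1))

# Keep only years recorded on 3+ trees (widespread fires): the
# interval roughly doubles in this made-up example.
counts = Counter(composite)
widespread = [y for y, n in counts.items() if n >= 3]
print(round(mean_interval(widespread), 1))
```

In this toy data set the composite suggests a fire roughly every three decades, while the widespread-fire record suggests one closer to every six, illustrating how the choice of method drives the conclusion.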
The third flaw is fire distribution. If you read fire studies carefully, they will usually note the longest interval without any recorded fire. Often this is a significant period of many decades. Why is this important? Because the average person hears that there were fires every 10 or 20 years and assumes that fires operate like clocks on a regular schedule. In reality, fires come in episodic groups, usually dictated by periodic droughts that are controlled by shifts in offshore ocean patterns like the Pacific Decadal Oscillation, so fires tend to be grouped together in certain drought-prone decades.
Let me give a hypothetical example of how averaging fire intervals can skew interpretations. Let's say a particular fire history study found a fire interval that averaged one blaze every 20 years. Over 100 years, this works out to five fires. However, since fires tend to burn only in drier decades, one could easily have three fires in the first decade, two in the last decade, and no fires for the 80 years in between.
Why is this important? Because the common assumption is that if the fire interval is 20 years, fires would keep tree density low and reduce fuel buildup. However, if fire-free gaps of 80 years or more were part of the natural pattern, then the fuel loads and tree densities we see today may not be an abnormal buildup at all; nothing may be out of the ordinary.
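The hypothetical above can be checked with a few lines of arithmetic. Both records below average one fire every 20 years (five fires per century), yet their longest fire-free gaps differ by a factor of four:

```python
# Two hypothetical 100-year fire records, each with five fires
# (an average of one fire per 20 years):
regular   = [0, 20, 40, 60, 80]   # clock-like schedule
clustered = [0, 4, 8, 88, 96]     # drought-driven clusters

def longest_gap(fire_years):
    """Longest stretch of years between consecutive fires."""
    return max(b - a for a, b in zip(fire_years, fire_years[1:]))

print(longest_gap(regular))    # 20-year maximum gap
print(longest_gap(clustered))  # 80 fire-free years, same average
```

The mean interval is identical in both cases, which is exactly why a reported "average fire interval" says little about how long a stand may naturally go without burning.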
Finally, the fourth major flaw is assuming that stand-replacement blazes are unusual in dry lower-elevation forests. Because most fire scar studies are nonrandom and target areas with fire scars, the other areas are not sampled. Often the reason these non-sampled areas lack significant numbers of fire-scarred trees is that all the trees may have been killed in a stand-replacement fire, so there are no survivors left to record it.
Due to these flaws and errors in interpretation, many fire scar histories (but not all) may seriously misrepresent the fire rotation of an area. If the period between fires is considerably longer than previously thought, then our forests may not be far outside their historic variability, and may in fact be well within it. If so, they do not require restoration because they are not out of balance.
The other major justification for logging is to reduce the perceived increase in fire occurrence and severity often blamed on past forest management including fire suppression. As pointed out above, most forests may not be that far outside of historic conditions.
The fact that we are seeing more and larger fires fits perfectly with the pattern that is expected under current climatic conditions. In other words, if you have drier weather conditions, with high temperatures, low humidity and high winds, you will get more fires. You will get larger fires.
The prevailing climatic conditions are driving most of the apparent change in fire frequency and severity. For instance, the Southwest is in the grips of a drought that hasn't been seen in five hundred years. Not surprisingly, there are fires now burning across the region bigger and more intense than any seen in the past. However, paleofire studies confirm that such large fires may not be abnormal when compared to the fires that burned when similarly severe droughts occurred in past centuries.
This is not to suggest that all fire-scar historical reconstructions are wrong—but it does raise the prospect that many of our assumptions about fire may be inaccurate or biased to some degree. Many of the logging proposals in the West are likely based on flawed assumptions about fire ecology and historic conditions. And before any restoration logging is accepted as necessary, the underlying assumptions should be carefully evaluated to make sure they are not skewed towards a shorter fire rotation than actually characterizes the area.