Applied Ecosystem Services, LLC

The Environmental Issues Doctor

Bringing Environmental Policy and Regulation into the 21st Century, Part 1


Estimated reading time: 2 minutes

After 50 years it is time to bring environmental policy and regulatory decision making into the 21st century by applying statistical paradigms that produce technically sound and legally defensible results from environmental data.

When federal environmental laws were created, and agencies directed to develop regulations to ensure compliance with them, biologists and ecologists knew less about environmental systems and data analyses than we do today.

Scientists had insufficient data for the wide variety of ecosystems covered by these laws, and the only statistical paradigm they knew was the null hypothesis/significance testing (NHST) approach. Unfortunately, this paradigm is rarely appropriate for environmental data; statisticians have debated its problems for about 80 years. Regulators, regulated companies, consultants, and attorneys now have better paradigms available for analyzing environmental data.

The frequentist paradigm defines a null hypothesis (Ho) and an alternative hypothesis (Ha). The null states there is no difference between two samples; the alternative states that the two data sets differ. Neither hypothesis can be proven, only rejected. The null hypothesis is tested by computing the probability (the P-value) of observing data at least as extreme as the measured data if the null hypothesis were true. If the P-value is less than the arbitrary threshold of 0.05, the null hypothesis is rejected and the alternative is accepted without explicit testing.
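A permutation test makes this NHST logic concrete. The sketch below uses hypothetical upstream/downstream concentration values (not from any real monitoring program) and only the Python standard library: the P-value is estimated as the fraction of random relabelings of the pooled data that produce a mean difference at least as large as the one observed.

```python
import random
import statistics

def permutation_p_value(upstream, downstream, n_perm=10000, seed=42):
    """Two-sided permutation test of Ho: no difference in means.

    The P-value is the probability, under Ho, of a mean difference
    at least as large as the one actually observed.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(downstream) - statistics.mean(upstream))
    pooled = list(upstream) + list(downstream)
    n_up = len(upstream)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[n_up:]) - statistics.mean(pooled[:n_up]))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Hypothetical metal concentrations (mg/L) above and below a discharge
upstream = [0.11, 0.13, 0.10, 0.12, 0.14, 0.11]
downstream = [0.15, 0.17, 0.14, 0.16, 0.18, 0.15]

p = permutation_p_value(upstream, downstream)
print(p < 0.05)  # True here: Ho is rejected at the conventional 0.05 level
```

Note what the test does and does not deliver: it yields a reject/do-not-reject decision about "no difference," but says nothing about the cause or magnitude of the difference.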

The problems with the frequentist paradigm are many; a few are summarized here.

  1. The statistical test measures how well the observed/measured data fit the null hypothesis, not how well the null hypothesis explains the data. This is a serious problem.

  2. Only two hypotheses are examined. Almost always there are multiple possible explanations for the observed data; the cause might not be the regulated operation.

  3. Almost always, the null hypothesis is known to be false even before data are collected, so rejecting it merely confirms what was already known. In the rare cases where the analyst creates a meaningful null hypothesis, the single (non-specific) alternative is still not tested, just accepted.

  4. The conventional significance level of 0.05 (equivalently, a 95% confidence level) is totally arbitrary. There is no theoretical basis for this value; it has been accepted as “true” only through repetition.

The two alternative paradigms, likelihood and Bayesian, correct these problems. Environmental issues are prominent in society and politics: climate change, drought, fossil fuels and renewable energy sources, mining, livestock grazing, sage grouse, anadromous salmon, and other wildlife. Regulated companies, their regulators, consultants, and attorneys will appreciate the value they gain after changing their approach to environmental data analysis.
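The contrast with the likelihood paradigm can be sketched in a few lines. This is a minimal illustration, again with hypothetical data and assumed candidate means and measurement spread: instead of testing a single null, several competing hypotheses are each scored by how well they explain the data, and the best-supported one is identified.

```python
import math

def normal_log_likelihood(data, mu, sigma):
    """Log-likelihood of the data under a Normal(mu, sigma) hypothesis."""
    return sum(
        -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)
        for x in data
    )

# Hypothetical downstream concentrations (mg/L)
data = [0.15, 0.17, 0.14, 0.16, 0.18, 0.15]

# Several competing hypotheses about the true mean concentration,
# not just a null and one catch-all alternative (values are illustrative)
hypotheses = {
    "background (0.12)": 0.12,
    "moderate increase (0.16)": 0.16,
    "large increase (0.20)": 0.20,
}
sigma = 0.015  # assumed measurement spread

scores = {name: normal_log_likelihood(data, mu, sigma)
          for name, mu in hypotheses.items()}
best = max(scores, key=scores.get)
print(best)  # the hypothesis best supported by the data
```

The output is a ranking of explanations by their support in the data, which speaks directly to the regulatory question of what most likely caused the observations, rather than a bare reject/do-not-reject verdict.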

This work was originally published on the Applied Ecosystem Services, LLC web site.

It is offered under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license. In short, you may copy and redistribute the material in any medium or format as long as you credit Dr. Richard Shepard as the author. You may not use the material for commercial purposes, and you may not distribute modified versions.

Keep reading

  1. Bringing Environmental Policy and Regulation into the 21st Century, Part 2


    Estimated reading time: 2 minutes

    The null hypothesis/significance testing (NHST) analytical paradigm does not produce answers for environmental regulatory decisions because rejecting the null hypothesis (of no difference between data sets) says nothing about why or by how much they differ. The likelihood paradigm overcomes many of NHST’s problems and can be applied to environmental data when its limitations are understood. The NHST approach tests how well the data fit a single null hypothesis. The Maximum Likelihood Estimation (MLE) approach tests how well multiple hypotheses fit the data and identifies the hypothesis that maximizes the likelihood of explaining the data.
  2. Why Water Quality Improvement Projects Fail


    Estimated reading time: 3 minutes

    Water quality matters for humans, livestock, fish and wildlife, and plants including food crops. Too often policies and regulations are ineffective while restoration projects fail to achieve intended goals. The problem is seen in environmental impact assessments, point- and nonpoint-source discharges, and Superfund sites. While some reasons for failure are project-specific, three common and easily avoided reasons are the lack of knowledge about spatial and temporal distribution of the chemical of concern, no information about the causes and amount of variability, and the focus on concentrations at a local point rather than on the entire ecosystem.

Contact me to ensure you avoid future environmental issues.