Sea-level Rise, Groundwater Quality, and the Impacts on Coastal Homeowners

Dennis Guignet, O. Ashton Morgan, Craig Landry, John C. Whitehead, William Anderson

Sea-level rise poses a growing threat to coastal communities and economies across the globe. North Carolina (NC) is no exception, with coastal communities facing annual sea-level rise rates of 2.01 to 4.55 mm/year (NOAA, 2018). Sea-level rise can affect key ecosystem services to coastal communities, including the provision of clean drinking water and adequate wastewater treatment. We examine how increases in the cost of these services and possible negative effects on coastal house prices due to sea-level rise impact residential location decisions. Administering a stated preference survey to NC homeowners in counties adjacent to the coast, we assess how households might respond to the increasing costs of drinking water and wastewater treatment due to sea-level rise. We present a novel framework to estimate expected welfare impacts under illustrative scenarios. Our results can inform local communities and benefit-cost analyses of future adaptation strategies and infrastructure investments.
Key Words: drinking water; ecosystem service; groundwater; housing; stated preference; sea-level rise; wastewater

Total Economic Valuation of Great Lakes Recreational Fisheries: Attribute Non-attendance, Hypothetical Bias and Insensitivity to Scope

John C. Whitehead, Louis Cornicelli and Gregory Howard

Abstract: We use stated preference methods to estimate willingness to pay to avoid reductions in recreational catch in Great Lakes fisheries. We compare willingness to pay estimates where uncertain “in favor” votes are recoded to “against” votes to an attribute non-attendance model that focuses on the policy cost attribute. We find that the two hypothetical bias models yield similar results. We estimate another attribute non-attendance model that also considers the scope of the policy and find that the scope elasticity is significantly underestimated in other models. The willingness to pay in this last model is higher than in the other models.

Key Words: Attribute non-attendance, Hypothetical bias, Scope test, Willingness to pay



The Aggregate Economic Value of Great Lakes Recreational Fishing Trips

John C. Whitehead, Louis Cornicelli, Lisa Bragg and Rob Southwick

Abstract: We use the contingent valuation method in a survey of Great Lakes anglers to estimate the willingness to pay for a Great Lakes recreational fishing trip. Employing various assumptions and models, we find that the willingness to pay for a trip ranges from $54 to $101 ($2020). We then combine the willingness to pay per trip estimates with an estimate of the number of trips and find that the aggregate economic value of Great Lakes fishing trips in the U.S. is $611 million. We conduct a sensitivity analysis over the estimates of willingness to pay and the number of trips and estimate that the 90% confidence interval around the mean estimate of $632 million is ($182.5, $1,553) million. 
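The aggregation arithmetic in the abstract (per-trip WTP times total trips, with a simulated confidence interval) can be sketched with a simple Monte Carlo. The distributions and trip count below are hypothetical placeholders, not the paper's data or its actual sensitivity-analysis design:

```python
# Monte Carlo sensitivity analysis of aggregate value = (WTP/trip) x (trips).
# The uniform WTP range matches the abstract; the trip distribution is made up.
import random

random.seed(1)

draws = []
for _ in range(10_000):
    wtp = random.uniform(54, 101)       # $/trip, the paper's reported range
    trips = random.gauss(8.0e6, 1.5e6)  # trips/year: hypothetical mean and sd
    draws.append(wtp * trips)

draws.sort()
lo = draws[int(0.05 * len(draws))]   # 5th percentile
hi = draws[int(0.95 * len(draws))]   # 95th percentile
mean = sum(draws) / len(draws)
print(f"mean ${mean / 1e6:,.0f}M, 90% CI (${lo / 1e6:,.0f}M, ${hi / 1e6:,.0f}M)")
```

Propagating both sources of uncertainty through the product is what widens the interval around the mean relative to the WTP range alone.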



They doth protest too much, methinks: Reply to “Reply to Whitehead”

John C. Whitehead

No 24-04, Working Papers from Department of Economics, Appalachian State University

Abstract: Desvousges, Mathews and Train (2020) point out a mistake in my comment on their 2015 paper. When this mistake is corrected the conclusions drawn in my comment are unchanged. In addition, the authors claim that I make another 11 “mistakes”. In this paper I argue that these “mistakes” are mostly fairly standard practice in the contingent valuation method. Desvousges, Mathews and Train misread and distort this literature. In addition, I place the comments and reply in the context of a larger debate over using the Contingent Valuation Method for Natural Resource Damage Assessment.



IRERE special issue honoring Tom Tietenberg


From the inbox:

The International Review of Environmental and Resource Economics has published the following new issue. The articles in this issue are freely available until 20 February[*]. For other issues or for subscription information, please visit the journal webpage.

Volume 17, Issue 4 – Special Issue Honoring Thomas H. Tietenberg
Kathleen Segerson (2023), “Introduction: Honoring Thomas H. Tietenberg”
Henk Folmer (2023), “Tom Tietenberg’s Merits for the International Review of Environmental and Resource Economics”
Carolyn Fischer (2023), “Legacy of Tom Tietenberg in Research”
Roger G. Noll (2023), “Thomas Tietenberg and the Tradable Permits Innovation”
Deirdre Nansen McCloskey (2023), “Is Teaching Expert Economists a Good Idea?”
Sahan T. M. Dissanayake and Sarah Jacobson (2023), “‘An Absolute Giant in the Classroom’: What We Can Learn from Thomas Tietenberg about Teaching”
Lynne Lewis (2023), “A Tribute and Thank You to Tom Tietenberg”


*Note: the website says free until January 20, not February.


From the NORC NOW email:

$394 billion. That’s how much hunters, anglers, and wildlife observers spent on being in the wild in 2022, according to the latest National Survey of Fishing, Hunting, and Wildlife-Associated Recreation. When the survey first launched in 1955, the best way to ask Americans about those pursuits was through in-person interviews. Fast forward to 2020, when the Association of Fish & Wildlife Agencies—in partnership with the U.S. Fish & Wildlife Service—tasked NORC with mitigating declining survey response rates and reducing costs. NORC revitalized the survey by streamlining its content and replacing in-person interviewing with a “push to web” approach that invites randomly selected and targeted households through mailed invitations to take the survey online, on paper, or by phone. NORC’s AmeriSpeak® Panel helped identify rural residents and our TrueNorth® methodology reduced bias in the targeted sample.

NORC completed 100,000+ interviews, which revealed that 57 percent of Americans (148 million) watched wildlife, 15 percent (40 million) fished, and six percent (14.4 million) hunted. All of these suggest significant contributions to local economies. Survey findings will help local and state organizations fine-tune their efforts to preserve the habitats for both wildlife and its enthusiasts.  

Read: 2022 National Survey of Fishing, Hunting, and Wildlife-Associated Recreation report

Good luck using that link to find the actual report. When I click on PDF it takes me to a list of other reports, and then I search for “National Survey” and I’m in a loop.

The USFWS used the Census Bureau to conduct these surveys for decades. As far as I can tell, this survey had the biggest contingent valuation method sample in history until budget cuts with, I think, the 2011 survey, when they went from dichotomous choice back to open-ended willingness to pay questions. This survey and, I think, the previous one dropped the CVM questions.

Here’s the link to a paper I wrote with the data a long time ago:


I wrote a referee comment to the effect of:

Many contingent valuation method researchers use the nonparametric Turnbull WTP estimates for hypothesis testing. This is inappropriate when the data must be “pooled” to get the “vote in favor” proportion to decrease with the cost amount. Sometimes, due to small samples, poorly chosen cost amounts, or respondent inattentiveness, the percentage of “vote in favor” responses is not monotonically decreasing in the cost amount. The Turnbull estimator requires that the “vote in favor” responses be pooled over prices until the pooled responses are monotonically decreasing. This is, in effect, a recoding of the dependent variable, which makes the WTP estimates inappropriate for hypothesis testing.
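The pooling in question is the pooled-adjacent-violators adjustment used with the Turnbull lower-bound estimator. A minimal sketch, with made-up cost amounts and response proportions (not from any actual survey):

```python
def pool_monotone(costs, yes_props, ns):
    """Pooled adjacent violators: merge neighboring cost cells until the
    'vote in favor' proportion is non-increasing in the cost amount."""
    # each cell: [cost, yes_count, n]; a pooled cell keeps the higher cost
    cells = [[c, p * n, n] for c, p, n in zip(costs, yes_props, ns)]
    i = 0
    while i < len(cells) - 1:
        if cells[i + 1][1] / cells[i + 1][2] > cells[i][1] / cells[i][2]:
            cells[i][0] = cells[i + 1][0]   # keep the upper cost
            cells[i][1] += cells[i + 1][1]  # pool the yes votes
            cells[i][2] += cells[i + 1][2]  # pool the sample sizes
            del cells[i + 1]
            i = max(i - 1, 0)               # re-check the previous pair
        else:
            i += 1
    return [(c, y / n) for c, y, n in cells]

def turnbull_lb(costs, yes_props, ns):
    """Turnbull lower-bound mean WTP: mass dropping off between two cost
    amounts is valued at the lower amount; mass below the first cost at 0."""
    wtp, prev_p, prev_c = 0.0, 1.0, 0.0
    for c, p in pool_monotone(costs, yes_props, ns):
        wtp += (prev_p - p) * prev_c
        prev_p, prev_c = p, c
    return wtp + prev_p * prev_c  # mass above the top cost gets the top cost

# made-up example: 70% yes at $20 exceeds 60% yes at $10, so the two cells pool
costs, props, ns = [5, 10, 20, 40], [0.8, 0.6, 0.7, 0.3], [50, 50, 50, 50]
```

The objection in the comment is visible in the code: `pool_monotone` literally merges response cells before `turnbull_lb` ever sees them, so the data entering the WTP estimate are no longer the raw votes.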

The authors halfway defended their practice because everyone does it. Do I have to be your parent? If everyone does it, does that make it right?

Are there any other examples in the literature where we allow researchers to recode their dependent variable so that it conforms to theory and then use the recoded data for hypothesis testing?


Authors: John Whitehead and Tanga Mohr [1]


The Regional Greenhouse Gas Initiative (RGGI) is a cap-and-trade program that covers the electric power sector in more than 10 northeastern states. The program creates markets for a limited number of CO2 allowances, reducing greenhouse gases. Laboratory experiments were used to inform RGGI about the most efficient design for the primary auction and the secondary markets (e.g., Shobe et al. 2010). These experiments were single-unit auctions, but RGGI conducts multi-unit auctions. The purpose of this research is to explore the efficiency of multi-unit auction designs in the RGGI context.


In first price auctions, bidders pay their bid. Theory predicts that bidders in first price auctions of a single unit will shade their bids. In second price auctions, all winning bidders pay the same market clearing bid. Theory predicts that bids will be equal to value in second price auctions of a single unit. Theory is not so clear in first and second price multi-unit auctions (Khezr and Cumpston 2022).
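For the single-unit benchmark the predictions are textbook results: with n risk-neutral bidders whose private values are i.i.d. uniform on [0, 1], the symmetric first-price equilibrium bid is b(v) = v(n − 1)/n, while bidding one's value is weakly dominant in a second-price auction. A small sketch of those two benchmarks:

```python
# Single-unit benchmarks behind the "bid shading" prediction, assuming
# risk-neutral bidders with i.i.d. uniform[0, 1] private values.

def first_price_bid(value, n_bidders):
    """Symmetric equilibrium bid: shade by the factor (n - 1) / n."""
    return value * (n_bidders - 1) / n_bidders

def second_price_bid(value):
    """Bidding one's value is weakly dominant in a second-price auction."""
    return value

# With 4 bidders, a value of 0.8 is shaded to 0.6 in the first-price
# auction but bid truthfully in the second-price auction.
```

As the post notes, no comparably sharp closed forms exist for the multi-unit case, which is what motivates the experiments.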

Real and Hypothetical Auctions

Real auctions are incentivized; i.e., subject earnings are real and depend on bidding behavior. Hypothetical auctions are not incentivized; i.e., subject earnings are fixed and do not depend on bidding behavior. We expect incentivized subjects to make bids closer to theoretical predictions (noting that theoretical predictions are not sharp in multi-unit auctions) (see Mohr and Whitehead 2023).


We conducted multi-unit induced value auctions using the VECONLAB platform. In induced value auctions, subjects are told how much an item is worth to them and then make a bid for that item. Each subject has demand for three units, and the induced value for each unit differs across units and across the 18 rounds of bidding. We have 74 subjects in four treatments:

Real, 1st price auction
Hypothetical, 1st price auction
Real, 2nd price auction
Hypothetical, 2nd price auction

We use latent class regression models to explore the various bidding strategies used by subjects.
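As a toy illustration of the latent-class idea, the sketch below fits a two-component Gaussian mixture by EM to simulated bid/value ratios and recovers two shading "types." The data, the two types, and the one-dimensional mixture are all invented for illustration; the latent class regression models used in the study are richer than this:

```python
# Two-component Gaussian mixture on bid/value ratios, fit by EM.
# All data are simulated; nothing here comes from the experiment.
import math
import random

random.seed(0)

# simulate two bidder types: heavy shaders (~65% of value) and light (~90%)
ratios = [random.gauss(0.65, 0.05) for _ in range(60)] + \
         [random.gauss(0.90, 0.05) for _ in range(60)]

def normal_pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

mu, sd, share = [0.5, 1.0], [0.1, 0.1], [0.5, 0.5]  # starting values
for _ in range(50):
    # E-step: posterior probability that each observation belongs to each class
    resp = []
    for r in ratios:
        w = [share[k] * normal_pdf(r, mu[k], sd[k]) for k in (0, 1)]
        resp.append((w[0] / (w[0] + w[1]), w[1] / (w[0] + w[1])))
    # M-step: update class shares, means, and standard deviations
    for k in (0, 1):
        nk = sum(g[k] for g in resp)
        share[k] = nk / len(ratios)
        mu[k] = sum(g[k] * r for g, r in zip(resp, ratios)) / nk
        var = sum(g[k] * (r - mu[k]) ** 2 for g, r in zip(resp, ratios)) / nk
        sd[k] = max(math.sqrt(var), 1e-6)

print(f"class means {mu[0]:.2f} and {mu[1]:.2f}, shares {share[0]:.2f}/{share[1]:.2f}")
```

The same logic, applied to bids regressed on induced values with class-specific coefficients, is what lets the latent class models separate subjects who bid near value from those who shade heavily.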


Using naive models (assuming that all subjects behave in the same way), we find no differences in bidding behavior in real and hypothetical experimental sessions.

Using latent class models we identify two different types of bidding behavior for both auctions. In the first price auctions one class suggests that subjects in the hypothetical session bid their value and shade their bids by 85% in the real sessions. In the other class, all subjects shade their bids by 68%.

In the second price auctions one class suggests that hypothetical and real subjects shade their bids by 88% and 84%, respectively. In the other class, hypothetical and real subjects shade their bids by 53% and 72% respectively.


We find some evidence that real auctions yield results closer to theory. Latent class models can lend additional insights to experimental auction behavior. We plan to conduct more incentivized first and second price auctions in the future. [2] 


Khezr, Peyman, and Anne Cumpston. “A review of multiunit auctions with homogeneous goods.” Journal of Economic Surveys 36, no. 4 (2022): 1225-1247.

Mohr, Tanga, and John C. Whitehead. “External Validity of Inferred Attribute Non-Attendance: Evidence from a Laboratory Experiment with Real and Hypothetical Payoffs.” Department of Economics Working Paper No. 23-05, Appalachian State University, 2023.

Shobe, William, Karen Palmer, Erica Myers, Charles Holt, Jacob Goeree, and Dallas Burtraw. “An experimental analysis of auctioning emission allowances under a loose cap.” Agricultural and Resource Economics Review 39, no. 2 (2010): 162-175.


[1] This study was funded by the Walker College of Business Dean’s Club. It was conducted with a student at Appalachian State University who was going to use it for an Honors Thesis. The student ghosted on us and we’re left with the responsibility for producing a poster for the Dean’s Club poster session (a requirement for securing Dean’s Club funding). 

[2] More detail to come over the next 6 days … 


In which we* use old-timey contingent valuation willingness to pay for a recreation trip questions. After this paper and others (in the past and in the future), I’m thinking that attribute non-attendance mitigates hypothetical bias, fat tails, scope insensitivity, etc. I’m not sure why it hasn’t caught on 100% yet. Everyone seems to think that if we can only use stated preference “best practices”** then everything is going to be fine. I think no, everything isn’t fine (and that doesn’t even factor in the enormous cost of stated preference “best practices”).

Here is the link:

*Authors: John C. Whitehead, William P. Anderson, Jr., Dennis Guignet, Craig E. Landry and O. Ashton Morgan

**Not that we had enough money for “best practices” for this paper.


From Data Is Plural (10/11):

Michigan air permit violations. For local news organization Planet Detroit, freelance journalist Shelby Jouppi has built a daily-updating dashboard of air quality permit violations cited by Michigan’s Department of Environment, Great Lakes and Energy. The dataset lists 1,500+ violation notices since 2018; for each, it provides the notice date and findings, facility name and location, and more. To construct it, Jouppi had to scrape individual notice PDFs from the department’s website and then extract the information from those documents. Read more: “Southwest Detroit steel slag processor receives 12th air quality permit violation for fallout since 2018,” an article by Jouppi based on the data.

Here is a screenshot of the map:

*Not someone who uses stated preference data.
