I recently received a request for the NLogit code for this article:

Whitehead, John C., and Daniel K. Lew. “Estimating recreation benefits through joint estimation of revealed and stated preference discrete choice data.” Empirical Economics 58 (2020): 2009-2029.

I was happy to oblige, but it took a minute because, after we estimated those models, I had gone into the program and tried a bunch of attribute non-attendance models. The program was a mess, so I had to hunt the different models down and rerun everything to make sure it was working and … discovered a minor error in Table 6. The scaled multinomial logit model was estimated with both the revealed and stated preference data, so the number of time periods should be 8 instead of 4. Ugh.

 


From the WSJ (National Parks Will Close if Government Shuts Down):

National parks will close their gates if lawmakers don’t pass legislation to keep the federal government funded by the end of this week.

The Biden administration announced Friday morning that sites run by the National Park Service will close if government funding lapses on Sunday. The administration will release a full contingency plan later Friday. The closures would roll out over the weekend and into Monday if a shutdown does occur, a senior Interior Department official said.

The closures will affect national parks, including sites like Yosemite and Yellowstone, and other monuments and sites like the National Mall and memorials in Washington, D.C. During the shutdown, thousands of park rangers will be furloughed.

Visitors will still be able to access some parks during the shutdown. While some parks have entry points that can be closed to guests, visitors could go to many other federally run destinations that are easier to access. State parks won’t be closed because of the shutdown.

The nonprofit National Parks Conservation Association, citing government data, projects that the parks could see nearly one million fewer visitors and an economic loss of as much as $70 million for every day that the destinations are closed in October.

Here is one reason to close them down:

During the most recent government shutdown, the Trump administration kept national parks open with lower staffing levels. As travelers continued to visit, trash and toilet facilities overflowed at some locations. Visitors also caused damage to some locations, including Joshua Tree National Park.

The Government Accountability Office rebuked the Trump administration for keeping the parks up and running. In a 2019 legal opinion, the GAO said the administration ran afoul of federal rules that dictate how money can be spent during a lapse in appropriations. The GAO also warned that similar moves in the future would be considered “knowing and willful violations” of the law.

Thanks Kevin!

Oh, and the WSJ felt obliged to add this:

State parks won’t be closed because of the shutdown.

Because … um, they’re state parks?


It was the same paper. I was excited to boast that I might be the only person who has ever presented the same paper in the same year in the 49th and 50th states. As it turns out, I wasn’t the only one who has ever done that. Another person presented the same paper at the same two places at almost the same times! Ugh.

In June we attended the 2023 Summer Workshop at the University of Alaska Anchorage and presented during the same session.

In September we both presented in the Workshop on Energy and Environmental Research at the University of Hawaii just one week apart. Also, my presentation was virtual (not sure about Renato’s). 

But still, the population of economists who have presented a paper in both Alaska and Hawaii within a 4 month window has to be small. Maybe n=2. In that way, I’m special.

Here is the abstract and the presentation (PDF):

We estimate the economic benefits to North Carolina (NC) coastal tourists of avoiding reductions in drinking water quality due to sea level rise. Using stated preference data from recent coastal visitors, we find that tourists are 2%, 8%, and 11% less likely to take an overnight trip if drinking water tastes slightly, moderately, or very salty at their chosen destination. The majority of those who decline a trip would take a trip to another NC beach without water quality issues, others would take another type of trip, and a minority would opt to stay home. Willingness to pay for an overnight beach trip declines with the salty taste of drinking water. We find evidence of attribute non-attendance in the stated preference data, which impacts the regression model and willingness to pay for trips. Combining economic and hydrology models, annual aggregate welfare losses due to low drinking water quality could be as high as $401 million, $656 million and $1.02 billion in 2040, 2060 and 2080, respectively.

What the abstract doesn’t describe is the funnest part. We get to use old-fashioned contingent valuation for valuing these trips … and it works (it always works!)! The literature review starts with Brown and Hammack (REStat 1973), spends some time in the 1980s and 1990s and then skips to 2017.

A working paper should be out in about a week (incorporating comments from the paper posted at WEER). 

Footnote:

And hey, if you are doing stated preference research you probably have attribute non-attendance on your cost variable, which biases your WTP estimates upward. Even if you follow all of the guidelines in Johnston et al. (JAERE 2017) (which would make your study a multi-year, multi-million-dollar endeavor).
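To see the direction of the bias, here is a stylized back-of-envelope sketch in Python. All of the numbers are made up for illustration, and this is not the latent-class estimator used in the paper: the point is only that WTP is the ratio of an attribute coefficient to the cost coefficient, so if some respondents ignore cost, a pooled model recovers an attenuated cost coefficient and WTP is inflated.

```python
# Hypothetical numbers, for illustration only: how cost non-attendance
# inflates WTP = beta_x / |beta_cost| in a pooled choice model.
beta_x = 0.5           # marginal utility of the attribute
beta_cost_true = -0.1  # cost coefficient for respondents who attend to cost
share_ignore = 0.3     # assumed share of respondents ignoring cost

wtp_true = beta_x / abs(beta_cost_true)  # $5.00 per unit

# A pooled model that ignores non-attendance recovers (roughly) an
# attenuated, averaged cost coefficient:
beta_cost_pooled = (1 - share_ignore) * beta_cost_true  # -0.07
wtp_biased = beta_x / abs(beta_cost_pooled)             # about $7.14

print(wtp_true, wtp_biased)
```

With 30% of respondents ignoring cost, the implied WTP rises from $5.00 to about $7.14, i.e., the bias is upward, as claimed above.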


The other day in my intro environmental and natural resource economics class we used the BDM method to elicit willingness to pay (WTP) values for an App State travel tumbler (I paid $26 with a faculty discount at the university bookstore). I explained the BDM with these slides [Download BDM] and each student had a “payment card” for revealing their WTP [Download BDM-WTP]. The average WTP was $6.23 (n=26). I entered the WTP values into Excel, sorted them from highest to lowest and plotted them along with the randomly chosen price ($7). At this price, 12 of 26 students would have purchased the tumbler and enjoyed a consumer surplus of $45 (CS = WTP – price).
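The tally can be sketched in a few lines of Python. The WTP values below are hypothetical placeholders, not the actual student responses: at the randomly drawn price, buyers are those with WTP at or above the price, and each buyer's consumer surplus is WTP minus price.

```python
# Hypothetical payment-card WTP values (not the actual classroom data)
wtps = [15, 12, 10, 9, 8, 7, 5, 4, 2, 0]
price = 7  # randomly drawn price, as in the BDM exercise

# Buyers purchase when WTP >= price; CS = WTP - price for each buyer
buyers = [w for w in wtps if w >= price]
consumer_surplus = sum(w - price for w in buyers)

print(len(buyers), consumer_surplus)  # 6 buyers, CS = 19
```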

As a preview of what we are going to do later in the semester, I simulated a dichotomous choice stated preference exercise. For each WTP value I randomly chose a price that ranged from $2 to $14 (average = $5.69) and simulated whether the consumer would purchase the tumbler or not. Fifty percent of the consumers would purchase the product. I then estimated a linear probability model: Pr(purchase=1) = 0.711 – 0.0372 x Price. Plotting this line and calculating the consumer surplus area of the triangle yields a CS estimate of $6.81 — very close to the actual CS average. I told them that this valuation approach is called the dichotomous choice WTP survey approach and is used in E&R economics (and marketing) to estimate WTP values for environmental amenities and consumer goods.
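Here is a rough Python sketch of that simulation. The latent WTP distribution, sample size, and seed are all hypothetical, so the fitted line will not match the 0.711 – 0.0372 x Price result above; the consumer surplus "triangle" under the fitted line works out to intercept squared over twice the absolute slope.

```python
import numpy as np

# Hypothetical dichotomous-choice simulation: each respondent gets a
# random price and says "yes" when latent WTP >= price; a linear
# probability model is then fit by OLS.
rng = np.random.default_rng(0)

n = 200
wtp = rng.uniform(0, 15, n)    # assumed latent WTP values
price = rng.uniform(2, 14, n)  # randomly assigned prices
purchase = (wtp >= price).astype(float)

# OLS fit: Pr(purchase) = a + b * price (np.polyfit returns slope first)
b, a = np.polyfit(price, purchase, 1)

# Consumer surplus as the area of the triangle under the fitted line:
# base = choke price a/|b|, height = a, so area = a^2 / (2 * |b|)
cs = a**2 / (2 * abs(b))
print(round(a, 3), round(b, 4), round(cs, 2))
```

Plugging the classroom estimates (a = 0.711, b = -0.0372) into the same triangle formula gives roughly $6.79, close to the $6.81 reported above.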

I randomly chose one of the student’s WTP sheets and the WTP was less than $7. Then I chose another and the WTP was $15. This student paid me $7 and is, apparently, enjoying $8 of consumer surplus. 

I think I’ve convinced many of the students in this class that consumer surplus is equivalent to getting a “good deal”. Now the trick is to convince them that WTP measures the value of nonmarket goods … to be continued.


From Nature News (Scientific sleuths spot dishonest ChatGPT use in papers) via Retraction Watch Weekend Reads [brackets added below]:

Searching for key phrases picks up only naive undeclared uses of ChatGPT — in which authors forgot to edit out the telltale signs — so the number of undisclosed peer-reviewed papers generated with the undeclared assistance of ChatGPT is likely to be much greater. “It’s only the tip of the iceberg,” [“scientific sleuth Guillaume“] Cabanac says. (The telltale signs change too: ChatGPT’s ‘Regenerate response’ button changed earlier this year to ‘Regenerate’ in an update to the tool).

Cabanac has detected typical ChatGPT phrases in a handful of papers published in Elsevier journals. The latest is a paper that was published on 3 August in Resources Policy that explored the impact of e-commerce on fossil-fuel efficiency in developing countries. Cabanac noticed that some of the equations in the paper didn’t make sense, but the giveaway was above a table: ‘Please note that as an AI language model, I am unable to generate specific tables or conduct tests …’

I wanted to see it for myself and I bet you do too (at the bottom of the screenshot):

Let’s say you want to use ChatGPT to help write your papers (because writing your own papers is soooooo hard). First, you must admit it in your acknowledgements or you risk retraction. Second, actually read your own paper and edit this nonsense out.


My go-to classroom experiment has been Veconlab‘s supply and opportunity cost experiment for at least a decade (here is a 2016 post). Here is the abstract from Holt et al. (2010):

This paper describes an individual choice experiment that can be used to teach students how to correctly account for opportunity costs in production decisions. Students play the role of producers that require a fuel input and an emissions permit for production. Given fixed market prices, they make production quantity decisions based on their costs. Permits have a constant price throughout the experiment. In one treatment, students have to purchase both a fuel input and an emissions permit for each production unit. In a second treatment, they receive permits for free and any unused permits are sold on their behalf at the permit price. If students correctly incorporate opportunity costs, they will have the same supply function in both treatments. This experiment motivates classroom discussion of opportunity costs and emission permit allocation under cap and trade schemes. The European Union Emissions Trading Scheme (EU ETS) provides a relevant example for classroom discussion, as industry earned significant “windfall profits” from free allocation of emissions permits in the early phases of the program.

I’m teaching an intro to environmental and resource economics class to mostly non-majors. For the third semester in a row I’m teaching online (n=45) and in-person (n=26). Both classes participated in the experiment on the second day of class. Here is the estimated linear probability model with Pr(sale=1) as the dependent variable, where PRICE ranges from 1.5 to 8.5, COST is the marginal cost of production (equal to 1, 3, 5 for units 1, 2, 3 in each of the 16 rounds), and GRAND is equal to 1 when permits are grandfathered and 0 when permits must be purchased at the $3 permit price. Through the magic of in-person teaching I stopped the experiment after four rounds, during which students mostly ignored opportunity cost, and explained what was going on (DEBRIEF=1, 0 otherwise). The model is OLS with clustered standard errors (n = 20, t = 48 with the face-to-face data; n = 33, t = 48 with the online data):

Looking at the constants, the online class was more likely to sell the product, with a 40% baseline probability (vs. 26% in the F2F class). The F2F class put more weight on the price and cost. The price and cost coefficients should be equal since a $1 change in either affects profits the same way, but in both classes more weight is placed on the cost variable. The F2F class ignored opportunity cost more than the online class. The debrief reduces much of the overproduction in the grandfathered rounds, though F2F overproduction after the debrief is still not much lower than in the online class. The coefficient of determination (R2) tells me that the F2F class paid more attention to the variables than the online class.
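For anyone who wants to replicate the estimation step, here is a minimal Python sketch of OLS with standard errors clustered by student. The data below are simulated placeholders with an assumed choice rule (sell when price exceeds cost, plus noise), not the classroom data, and only PRICE and COST are included for brevity; statsmodels' `fit(cov_type='cluster')` would do the same job.

```python
import numpy as np

# Simulated placeholder data: 20 students x 48 decisions each.
rng = np.random.default_rng(1)
n_students, n_rounds = 20, 48
student = np.repeat(np.arange(n_students), n_rounds)
price = rng.uniform(1.5, 8.5, n_students * n_rounds)
cost = rng.choice([1.0, 3.0, 5.0], n_students * n_rounds)
# Assumed choice rule: sell when price - cost + noise > 0
sale = (price - cost + rng.normal(0, 1, len(price)) > 0).astype(float)

# OLS for the linear probability model Pr(sale=1) = b0 + b1*PRICE + b2*COST
X = np.column_stack([np.ones_like(price), price, cost])
beta = np.linalg.lstsq(X, sale, rcond=None)[0]
resid = sale - X @ beta

# Cluster-robust (sandwich) variance: sum of X_g' u_g u_g' X_g over clusters
bread = np.linalg.inv(X.T @ X)
meat = np.zeros((X.shape[1], X.shape[1]))
for g in np.unique(student):
    Xg, ug = X[student == g], resid[student == g]
    score = Xg.T @ ug
    meat += np.outer(score, score)
cluster_se = np.sqrt(np.diag(bread @ meat @ bread))

print(np.round(beta, 3), np.round(cluster_se, 3))
```

Under this simulated choice rule the price and cost coefficients come out roughly equal in magnitude and opposite in sign; the classroom data's extra weight on COST is exactly the deviation from that benchmark discussed above.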

Here is my attempt to turn this experiment into a journal article: https://www.env-econ.net/2023/09/new-working-paper-with-tanga-mohr.html


New working paper (with Tanga Mohr)

The title is “External Validity of Inferred Attribute Non-Attendance: Evidence from a Laboratory Experiment with Real and Hypothetical Payoffs”. Here is the abstract:

We consider differences in hypothetical and real payoff laboratory experiments using attribute non-attendance methods. Attribute non-attendance is an empirical approach that measures and accounts for when survey respondents ignore attributes in stated preference surveys. We use attribute non-attendance methods with data from an emissions permit experiment with real and hypothetical payments. Our conjecture is that attribute non-attendance may be more pronounced in hypothetical sessions and, once accounted for, hypothetical decisions and real decisions influenced by monetary payoffs will be more similar. In both treatments we find that the effect of the cost of an emissions permit on behavior differs if the cost is implicit or explicit. In inferred attribute non-attendance models with the real treatment data we find two classes of respondents with different behavior but no evidence of attribute non-attendance. With the hypothetical treatment data we find two classes of respondents with different behavior and evidence of attribute non-attendance on two of the four choice attributes.

And yes, it is the first paper where I’ve used lab experiment data.


If you submit a paper to the [redacted] journal and follow the submission guidelines:

Then the journal will kick the submission back to you and say:

Provide conflict of interest statement in the blinded manuscript.

So now the information is on the title page and in the blinded manuscript. And, I’ve wasted about an hour fighting with Editorial Manager. 


From Springer

This book delivers a user guide reference for researchers seeking to build their capabilities in conducting discrete choice experiment (DCE). The book is born out of the observation of the growing popularity – but lack of understanding – of the techniques to investigate preferences. It acknowledges that these broader decision-making processes are often difficult, or sometimes, impossible to study using conventional methods. While DCE is more mature in certain fields, it is relatively new in disciplines within social and managerial sciences. This text addresses these gaps as the first ‘how-to’ handbook that discusses the design and application of DCE methodology using R for social and managerial science research. Whereas existing books on DCE are either research monographs or largely focused on technical aspects, this book offers a step-by-step application of DCE in R, underpinned by a theoretical discussion on the strengths and weaknesses of the DCE approach, with supporting examples of best practices. Relevant to a broad spectrum of emerging and established researchers who are interested in experimental research techniques, particularly those that pertain to the measurements of preferences and decision-making, it is also useful to policymakers, government officials, and NGOs working in social scientific spaces.

I’m too old to learn how to estimate DCEs with R but this looks like a good book for someone who wants to do that or teach it to grad students. 


Pageviews since the reboot

We made the big announcement that we would try to post more here in January 2023. We averaged 400 visitors from January through May. That is 600 fewer than when blogs were big in economics (i.e., before Twitter took over the conversation). Summer followed the pattern we were accustomed to, with only a fraction of the academic-semester average number of visitors. Except for August 11, when we inexplicably enjoyed 3,420 visitors. There was a post on August 20 but nothing to explain that spike; it received 692 views, 13 likes and 1 retweet on Twitter. I’m not tech savvy enough to figure out where all these visitors came from. I’m just concluding that it was a weird day.
