If there’s been one consistent thread since the beginning of Env-Econ.net, it’s our enduring commitment to helping you understand the incentives of gas taxes vs. mileage taxes.

Well, the debate is back in the news again as governments debate ways to overcome…

…the myriad hurdles U.S. states face as they experiment with road usage charging programs aimed at one day replacing motor fuel taxes, which are generating less each year, in part due to fuel efficiency and the rise of electric cars.

So here’s a[n updated] view from the wayback machine at some of the issues that arise from a mileage tax:

It’s been [over 16] years now (May 8, 2007) and people still aren’t listening to me. 

Taxing miles creates perverse incentives for fuel efficiency.  A $0.015/mile tax (the size of the tax mentioned in the article) is the equivalent of a $0.015 * X tax per gallon where X is mpg.  In words, a mileage tax increases the tax per gallon the more fuel efficient the car.  Now granted, with higher mpg you use fewer gallons to drive an equivalent number of miles, and in the end, everyone driving 100 miles will pay the same tax.  And from a revenue perspective, that might be OK.  But there might be a way to kill fewer birds with one stone.
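
To make the equivalence concrete, here’s a minimal back-of-envelope sketch in Python (the $0.015/mile rate is from the article; the mpg values are just illustrative):

```python
# A $0.015/mile tax costs the same per gallon as a ($0.015 * mpg)/gallon gas tax,
# so the implied per-gallon rate rises with fuel economy.
MILEAGE_TAX = 0.015   # dollars per mile, the rate mentioned in the article
MILES = 100           # compare drivers over the same distance

for mpg in (15, 25, 35, 50):   # illustrative fuel economies
    gallons = MILES / mpg
    bill = MILEAGE_TAX * MILES              # identical for every driver
    per_gallon_equiv = MILEAGE_TAX * mpg    # implied gas-tax rate
    print(f"{mpg} mpg: {gallons:4.1f} gal, tax = ${bill:.2f}, "
          f"equivalent gas tax = ${per_gallon_equiv:.3f}/gal")
```

Every driver pays the same $1.50 per 100 miles, but the implied per-gallon rate more than triples between 15 and 50 mpg. That is the perverse incentive: the tax per gallon is highest for the most fuel efficient cars.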

As I have written a number of times, a more straightforward proposal is to simply raise the gas tax.  Raising the gas tax accomplishes a number of things: 1) it raises revenue, 2) it discourages miles driven, and 3) it increases the incentive for higher fuel efficiency.

Because my previous posts on this have been written with an ironic twist (I propose a mileage tax that is inversely proportional to fuel efficiency and then show that such a tax is the equivalent of a $1/gallon gas tax), here’s the direct, non-ironic version:  A $1/gallon gas tax…

…places a higher burden on those driving less fuel efficient vehicles–that should satisfy those blaming the SUV drivers for all of the problems*.

…places a higher burden on those driving more.  By increasing the marginal cost per mile driven, total miles driven should decrease.

…assuming fuel efficiency and income are negatively correlated–that is, the rich tend to drive larger, more expensive, less fuel efficient cars–[higher gas taxes] place a higher burden on higher incomes.

…provides an incentive for drivers to switch to more fuel efficient vehicles.

It’s really simple.  Why worry about complicated mileage programs?  The gas tax infrastructure is in place.  Raise the gas tax and meet multiple public policy and economic goals simultaneously.


Peak oil (demand)

From the WSJ (Oil Demand Expected to Peak This Decade as EVs Boom): 

Rising demand for crude oil is set to slow to a trickle within five years and peak before the end of the decade, as electric-vehicle uptake surges and developed nations rapidly transition to cleaner sources of energy, according to a prominent energy forecaster.

The International Energy Agency, a group funded by some of the world’s largest oil consumers, expects demand for transport fuels derived from oil such as gasoline will be the first to peak before starting a steady decline—hastened by a sharp uptick in EVs and a long-lasting shift to remote working spurred on by the Covid-19 pandemic.

Rapidly growing Asian economies will continue to prop up the global appetite for oil in the coming years, and demand for jet fuel, naphtha and other oil products with industrial uses will continue to tick higher, the IEA said in a report released Wednesday. But even in China, which has long been the powerhouse of global oil demand, the appetite for crude will slow markedly before the end of the decade. India will surpass China as the main driver of oil growth as soon as 2027, the IEA said.

The forecast, which the IEA made in an annual report that considers oil demand as far away as 2028, isn’t the first time the Paris-based group has laid out a timeline predicting a zenith for oil. But it envisages a far more rapid shift away from fossil fuels than previously expected—a shift that has been sharply accelerated by the Covid-19 pandemic and the energy crisis that followed Russia’s invasion of Ukraine.

Joe Manchin: “If I can’t go home and explain it to the people of West Virginia, I can’t vote for it. I just can’t. I’ve tried everything humanly possible. I can’t get there.”

via Max Auffhammer:
Thanks to generous renewal funding from the Alfred P. Sloan Foundation, the Berkeley Summer School will take place again this year – in person! The goal of the summer school is to provide doctoral students nationwide, who have completed the core first year microeconomics and econometrics requirement, with an overview of the most important and current topics in the fields of environmental and energy economics. Previous incarnations of this summer school have helped students identify promising dissertation projects, build networks and gain valuable insights on how to be a productive researcher. This year students will have an opportunity to get feedback on a research idea from one of the instructors. The instructors’ expertise covers a range of important topics. More details and up-to-date information can always be found on the summer school website. The currently planned lineup of instructors is:
 
Meredith Fowlie, UC Berkeley
Joseph Shapiro, UC Berkeley
Maximilian Auffhammer, UC Berkeley
Susanna Berkouwer, Wharton
Marshall Burke, Stanford 
Koichiro Ito, University of Chicago
(more TBA)
 
The summer school will start on Monday (8/14) at noon with a virtual welcome lunch for all attendees, followed by the first session at 1:30pm PDT. For the remaining four days, there will be a 3-hour lecture in the morning beginning at 9:00am, then a one-hour lunch, followed by another 3-hour lecture. We are planning a number of other exciting activities and will keep you posted. There is no tuition, but space is limited. We provide breakfast and lunch and can accommodate dietary restrictions! Students are expected to attend all sessions.
 
To apply for the summer school, you must be a registered PhD student in an economics department, business or policy school and have completed your first year of coursework. Only PhD students in universities in North America (Canada/USA/Mexico) are eligible to apply. Please submit your application online here.  Please ask your advisor or a faculty member who knows you well to fill out a very brief confirmation that they support your application here.
 
We are funding a number of diversity fellowships to offset travel and lodging expenses, which require a brief application statement, as well as a two-page research proposal for a new idea in the field of EEE (due the week after camp). We will then match up fellows with one of the instructors, who will provide feedback on the idea in a one-on-one meeting. You can apply for the fellowship within the application form linked above.
 
The application deadline is 5pm PST June 23, 2023. We will notify you of your admission by end of June. If you have any questions, please email Karen Notsund, knotsund@berkeley.edu or Maximilian Auffhammer (auffhammer@berkeley.edu).

Krugman (Working From Home and Realizing What Matters):

First things first: The reduction in commuting time is a seriously big deal. Before the pandemic, the average American adult spent about 0.28 hours per day, or more than 100 hours a year, on work-related travel. (Since not all adults are employed, the number for workers was considerably higher.) By 2021, that number had fallen by about a quarter.

Putting a dollar value on the benefits from reduced commuting is tricky. You can’t simply multiply the time saved by average wages, because people probably don’t view time spent on the road (yes, most people drive to work) as fully lost. On the other hand, there are many other expenses, from fuel to wear and tear to psychological strain, associated with commuting. On the third hand, the option of remote or hybrid work tends to be available mainly to highly educated workers with above-average wages and hence a high value associated with their time.

But it’s not hard to make the case that the overall benefits from not commuting every day are equivalent to a gain in national income of at least one and maybe several percentage points.

If median household income is $70,000 and one earner in each household works full-time (2,000 hours per year), then the household wage is $35 per hour. If time is valued at 1/3 of the wage (the number typically used in travel cost demand models), then the average household enjoys $1,200 in additional time at home (100 hours at $12). If there are 125 million households in the U.S., then the number aggregates to $150 billion.

That, at 0.65% of US GDP ($23 trillion), is lower than Krugman’s 1-3% estimate. There is some slippage between households and individual adults here, but you get the idea. Krugman is making less conservative assumptions than mine.
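
For anyone who wants to check the arithmetic, here’s the same back-of-envelope calculation as a sketch (all inputs are the round-number assumptions above):

```python
# Back-of-envelope value of time no longer spent commuting,
# using the round-number assumptions in the paragraph above.
median_household_income = 70_000     # dollars per year, one full-time earner
hours_per_year = 2_000               # a standard full-time work year
household_wage = median_household_income / hours_per_year   # $35/hour

value_of_time = round(household_wage / 3)   # 1/3 of the wage, rounded to $12/hour
hours_saved = 100                           # commuting hours avoided per year
households = 125_000_000

per_household = hours_saved * value_of_time  # $1,200
aggregate = per_household * households       # $150 billion
us_gdp = 23e12

print(f"Per household: ${per_household:,}")
print(f"Aggregate: ${aggregate/1e9:,.0f} billion, or {aggregate/us_gdp:.2%} of GDP")
```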


I participated in a survey conducted by Clemson this week. I was eligible because I had published a paper using opt-in panel data at some point. I posted the image to the right to Twitter and proceeded to provide a brief review of what I thought about each of the panels I’ve used. I’ve been thinking about that and want to say more. During the rare times I’ve had enough money in the research budget, I’ve used KN/GfK/Ipsos‘s Knowledge Panel (KP). KP is a probability-based sample and more representative of the population than opt-in panels. Opt-in panels are basically convenience samples. There are interesting research questions about whether and when researchers should use opt-in panels. A forthcoming Applied Economic Perspectives and Policy symposium is a step in that direction (here is the second, I think, of four articles to appear online). 

The first time that I enjoyed a probability-based sample was when I was working on Florida’s BP/Deepwater Horizon damage assessment with Tim and others. We had plenty of funding for two (!) KP surveys, and two articles have been published (one and two; the latter is, I think, the first of the AEPP articles to appear online). The second time was a few years ago with funding from the state of North Carolina, where Ash Morgan and I looked at the invasive species Hemlock Woolly Adelgid (HWA) and western North Carolina forests. I’ve presented papers from that study at a couple of conferences and UNC – Asheville, but nothing is published yet. I hope to write the forest paper this summer because it boasts the same coincidental design as the second published paper above: GfK supplemented the KP sample with opt-in responses (while charging us the same price per unit), so there is a data quality comparison between probability-based and opt-in samples. In the second published AEPP paper, with a single binary choice question, we find that the opt-in data was lower quality. In the HWA study we aren’t finding many differences; in other words, there the opt-in data is as good as the probability-based data.  

I think that these opt-in panels will be increasingly used in the future and we need to figure out how best to use them. Opt-in data are much less expensive. For example, a Dynata recreational user respondent cost me $5 in a February 2023 survey; a KP recreational user cost $35 per unit. Of course, KN/GfK programmed the survey while I program my own when using the Dynata panel, but programming it yourself doesn’t cost much more than writing the questions and explaining to KN/GfK how to program them. One known problem with opt-in panels is that you don’t get a response rate, but it is a toss-up whether no response rate is worse than a response rate of less than 10% from a mail survey. The good thing about a mail survey is that you know what sort of bias your data will suffer from (sample selection). I don’t have an estimate of the cost of a mail survey, but it is much higher than $3.50 when the response rate is less than 10%. 
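
To see why that’s a toss-up in dollar terms, here’s a rough sketch of cost per completed response. The $5 and $35 panel prices are from above; reading the $3.50 as a per-mailing cost for the mail survey is my own assumption, purely for illustration:

```python
# Rough cost per completed response by survey mode.
# The $5 (Dynata) and $35 (Knowledge Panel) prices are from the text and are
# already per completed response. Treating $3.50 as the cost per mailed
# questionnaire is an assumption, for illustration only.

def cost_per_complete(cost_per_contact: float, response_rate: float) -> float:
    """Dollars spent per usable response."""
    return cost_per_contact / response_rate

print(f"Opt-in panel (Dynata):  ${cost_per_complete(5.00, 1.00):6.2f}")
print(f"Probability panel (KP): ${cost_per_complete(35.00, 1.00):6.2f}")
print(f"Mail at 10% response:   ${cost_per_complete(3.50, 0.10):6.2f}")
print(f"Mail at 5% response:    ${cost_per_complete(3.50, 0.05):6.2f}")
```

Under those assumptions, a mail survey with a sub-10% response rate is already in Knowledge Panel territory on a per-complete basis.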

I attended this workshop where four of us provided comments on five stated preference studies funded by the EPA that have been published in PNAS. Each of these studies was multi-year and used focus groups, pretests and probability-based sample data. The time and money cost was very high. During the discussion, one of the exhausted researchers involved in those studies asked how we economists could go from these great but unlikely-to-be-useful-for-policy-analysis (my words) studies to something that would be useful for policy analysis. The audience was stumped for a second, and then I realized that I had an answer. The long-term answer, I think, is taking the lessons from these huge studies and developing benefit estimates with models from opt-in data. You can do this within one year with opt-in data and a single pretest, relative to 3-5 years for a major study. The test, I think, is whether the results from models using opt-in data are better than benefit transfer, which is how most policy analysis is being done.

I think the answer is yes (opt-in data models are better than benefit transfer). The second of the published AEPP articles above resulted from a pretest of the PNAS studies. Its conclusion was that opt-in data wasn’t so bad. I’m hoping to contribute to the “opt-in data is good enough for policy” literature by thinking about the role of attribute non-attendance in analyzing opt-in data (more on this soon, I hope). We need more studies like these to convince a skeptical bunch of environmental economists and, especially, OMB that policy analysis will be improved if we don’t always rely on million-dollar studies.  


On the day of the final in my 2000-level environmental and resource economics course, the WSJ published an article on Clean Power Plan 2.0 (Biden Administration Targets Power-Plant Emissions in New Climate Initiative): 

The Biden administration proposed new rules Thursday to drastically reduce greenhouse gases from coal- and gas-fired power plants—measures that will cost billions of dollars but that officials say will curb emissions that are warming the atmosphere and harming human health. …

The EPA proposal incorporates separate standards for different types of plants. The rules for existing coal- and new natural-gas-fired power plants would reduce CO2 emissions by 617 million metric tons. The proposal for existing gas plants would cut 214 million to 407 million metric tons between 2028 and 2042, according to the EPA. Cleaning up power plants would prevent an estimated 300,000 asthma attacks and 1,300 premature deaths in 2030, according to the EPA.

This is something that I’d normally cover in class, so I added this extra credit question (note/a): 

Read the WSJ article and, using an estimate of the social cost of carbon, provide an estimate of the climate benefits of the Clean Power Plan 2.0. Write a short paragraph explaining your answer (and note that you could also provide an estimate of the value of reduced mortality with the VSL). 

I received one submission and it is a good one: Bradley Del Vecchio (2026 sustainable technology major) writes:

The Clean Power Plan 2.0 is an awesome step in the right direction to further combine environmentalism and economics, in order to create a more equitable future. According to the estimates provided by the EPA, we could see cuts ranging from 214 to 407 million metric tons of CO2 emissions from 2028 to 2042. Using the most recent estimate of the social cost of carbon at $51 per ton of CO2 or equivalent greenhouse gasses, the proposal could deliver benefits to our air quality ranging from 10.9 to 20.8 billion dollars over the given time period. Additionally, using the VSL of $11.9 million per life and factoring in the proposal’s estimated 1,300 premature deaths, we could see an additional $15.5 billion in health benefits. This figure does not account for the 300,000 asthma attacks that could be prevented, saving many people and their families from costly medical bills. Overall, the Clean Power Plan 2.0 proposal could provide benefits of $26.4 billion on the low side and $36.3 billion on the high side from 2028 to 2042. These benefits, when compared to the marginal increase in electric costs, make this seem like a completely viable proposal both economically and socially.
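
Bradley’s arithmetic checks out. Here’s a quick sketch of the same calculation using the figures quoted above (the small gap at the high end comes from summing rounded subtotals):

```python
# Benefits of Clean Power Plan 2.0 using the EPA figures quoted above.
SCC = 51                          # social cost of carbon, dollars per metric ton CO2
VSL = 11.9e6                      # value of a statistical life, dollars

tons_low, tons_high = 214e6, 407e6   # metric tons CO2 cut, 2028-2042 (existing gas plants)
premature_deaths = 1_300             # avoided in 2030, per the EPA

climate_low = SCC * tons_low         # ~$10.9 billion
climate_high = SCC * tons_high       # ~$20.8 billion
health = VSL * premature_deaths      # ~$15.5 billion

print(f"Climate benefits:   ${climate_low/1e9:.1f}B to ${climate_high/1e9:.1f}B")
print(f"Mortality benefits: ${health/1e9:.1f}B")
# The unrounded total is $26.4B to $36.2B; the $36.3B above comes from
# summing the rounded subtotals ($20.8B + $15.5B).
print(f"Total:              ${(climate_low + health)/1e9:.1f}B "
      f"to ${(climate_high + health)/1e9:.1f}B")
```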

a/ A successful answer would be used to bump a grade within one point into the round-up range. For example, I’ll normally round an overall 89.7 average up to a 90 (A-) but an 89.4 is a B+. This would push the 89.4 to an A-.


From E&E news (Senators eye ‘CHIPS 2.0’ as vehicle for carbon tariff):

As senators from both parties seek a pathway for advancing a bill imposing carbon tariffs, a potentially viable vehicle has emerged: a nascent legislative package to boost U.S. competitiveness against China.

Senate Democrats announced last week they want to write a follow-up to last year’s bipartisan CHIPS and Science Act with the help of Republicans, opening the door to a rare opportunity this Congress to craft and even pass legislation that would have support from each side of the aisle.

Critically, the scope of “CHIPS and Science 2.0,” as some are calling it, would also likely lend itself to the inclusion of language to institute a carbon border adjustment mechanism, or CBAM — an emissions reduction concept that is gaining support across the political spectrum. Many advocates see the effort as key to fighting global emissions while at the same time punishing foreign adversaries like China. …

Lawmakers would also have to resolve significant disagreement over whether putting a domestic price on carbon is necessary for achieving a CBAM that deals with international behavior surrounding emissions.

Some advocates say a domestic carbon fee is a necessary step to ensure a CBAM’s enforceability by the World Trade Organization. Others, including Coons, are currently willing to risk running afoul of the WTO by leaving that component out of their proposals to make their ideas more politically palatable.

On April 19 I wrote:

Of all the types of protectionism that are trending (e.g., here, here), this one might not be so bad.

So, why is that? Well, regular tariffs might be designed to protect domestic industry. They do that, but the cost, in terms of higher prices and lost consumer surplus, outweighs the benefits. The cost of steel tariffs might be about $900,000 for each steel job protected. The same type of calculation would be at work with tariffs aimed at imports that generate more GHGs than products made domestically, but there is the added benefit of reducing GHGs. If both the U.S. and Europe imposed these tariffs, then it would elevate climate policy in both places and the tariff rate would effectively be zero. At least, that is the result in my simple trade model. 
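
For the shape of that cost-per-job calculation, here’s a stylized sketch. The $900,000-per-job figure is from above, but the decomposition into total consumer cost and jobs protected, and the GHG benefit, are hypothetical numbers of my own:

```python
# Stylized cost of a protective tariff per job saved. The $900,000-per-job
# figure is from the text; the decomposition below is hypothetical.
consumer_cost = 9_000_000_000    # assumed annual consumer surplus loss, dollars
jobs_protected = 10_000          # assumed steel jobs preserved

print(f"Gross cost per job: ${consumer_cost / jobs_protected:,.0f}")   # $900,000

# With a carbon tariff, part of that consumer cost buys an emissions
# reduction, so the net cost per job protected is lower (assumed benefit).
ghg_benefit = 2_000_000_000      # assumed value of avoided emissions, dollars
net_cost = (consumer_cost - ghg_benefit) / jobs_protected
print(f"Net cost per job:   ${net_cost:,.0f}")                         # $700,000
```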


From the GW Regulatory Studies Center:

Revising Regulatory Review: Expert Insights on the Biden Administration’s Guidelines for Regulatory Analysis
Tue, 9 May, 2023 10:30am – 3:00pm

Please join the GW Regulatory Studies Center and the Society for Benefit-Cost Analysis for a timely discussion of recent changes to regulatory practices and analysis.

In April, the White House released much-anticipated revisions to federal regulatory practices, including a new Executive Order 14094 on “Modernizing Regulatory Review,” draft revisions to OMB Circular A-4 governing regulatory impact analysis, and draft guidance on meetings with entities outside of the executive branch. The Office of Information and Regulatory Affairs (OIRA) is working to implement these changes, which comprise the most significant regulatory policy initiatives of the Biden administration and raise interesting benefit-cost analysis issues, including the appropriate discount rate, who has standing in a benefit-cost analysis, and how distributional impacts should be measured.

At this forum, OIRA Administrator Richard Revesz (invited) will discuss these changes with a panel of former OIRA administrators who served in the Clinton, Bush, Obama, and Trump administrations. Following that discussion, a panel of experts experienced in regulatory impact analysis at the federal level will explore in more depth the draft changes to Circular A-4.

Register for virtual attendance here: https://docs.google.com/forms/d/e/1FAIpQLSeelAAXOwChP9y9ygk7YDxRTGEtb5_m1A_54yG-MBGccMOcUw/viewform

 


They’ll go after you if you call them predatory. From Retraction Watch:

A 2021 article that found journals from the open-access publisher MDPI had characteristics of predatory journals has been retracted and replaced with a version that softens its conclusions about the company. MDPI is still not satisfied, however.

The article, “Journal citation reports and the definition of a predatory journal: The case of the Multidisciplinary Digital Publishing Institute (MDPI),” was published in Research Evaluation. It has been cited 20 times, according to Clarivate’s Web of Science. 

Here is the abstract of the article in Research Evaluation:

The extent to which predatory journals can harm scientific practice increases as the numbers of such journals expand, in so far as they undermine scientific integrity, quality, and credibility, especially if those journals leak into prestigious databases. Clarivate’s Journal Citation Reports (JCR), a reference for the assessment of researchers and for grant-making decisions, is used as a standard whitelist, in so far as the selectivity of a JCR-indexed journal adds a legitimacy of sorts to the articles that the journal publishes. The Multidisciplinary Digital Publishing Institute (MDPI) had 53 journals ranked in the 2018 JCRs annual report. These journals are analysed, not only to contrast the formal criteria for the identification of predatory journals, but taking a step further, their background is also analysed with regard to self-citations and the source of those self-citations in 2018 and 2019. The results showed that the self-citation rates increased and was very much higher than those of the leading journals in the JCR category. Besides, an increasingly high rate of citations from other MDPI-journals was observed. The formal criteria together with the analysis of the citation patterns of the 53 journals under analysis all suggest they may be predatory journals. Hence, specific recommendations are given to researchers, educational institutions and prestigious databases advising them to review their working relations with those sorts of journals.

More from Retraction Watch:

Soon after the paper was published in July 2021, MDPI issued a “comment” about the article that responded to Oviedo García’s analysis point by point. The comment called out “the misrepresentation of MDPI, as well as concerns around the accuracy of the data and validity of the research methodology.”

Here is the graph that MDPI is using to suggest “that MDPI is in-line with other publishers, and that its self-citation index is lower than that of many others; on the other hand, its self-citation index is higher than some others.”

From this graph you can see market share in publishing: Elsevier, Springer, Informa (Taylor and Francis) and Wiley Blackwell are the big 4. The next two largest publishers are the Institute of Electrical and Electronics Engineers and MDPI. There are a ton of self-cites in Elsevier journals (the self-citation numbers are at the publisher level). MDPI is comparing itself to, say, the Elsevier and Springer placement on the vertical axis. What this analysis is missing is quality control. There are not many MDPI articles that you’d want to reference because they are mostly low quality. So, self-citation at lower-level journals is some evidence of manipulation of journal rankings. 

The solution, it seems to me, is to stop trying to label MDPI (and Frontiers) as predatory. Just call them something else like “publishers that manipulate journal rankings and take people’s money in exchange for a shaky review process that ultimately leads to publication.”  
