
Krugman (Working From Home and Realizing What Matters):

First things first: The reduction in commuting time is a seriously big deal. Before the pandemic, the average American adult spent about 0.28 hours per day, or more than 100 hours a year, on work-related travel. (Since not all adults are employed, the number for workers was considerably higher.) By 2021, that number had fallen by about a quarter.

Putting a dollar value on the benefits from reduced commuting is tricky. You can’t simply multiply the time saved by average wages, because people probably don’t view time spent on the road (yes, most people drive to work) as fully lost. On the other hand, there are many other expenses, from fuel to wear and tear to psychological strain, associated with commuting. On the third hand, the option of remote or hybrid work tends to be available mainly to highly educated workers with above-average wages and hence a high value associated with their time.

But it’s not hard to make the case that the overall benefits from not commuting every day are equivalent to a gain in national income of at least one and maybe several percentage points.

If median household income is $70,000 and one earner in each household works full-time (2,000 hours per year), then the household wage is $35 per hour. If time is valued at one-third of the wage (the number typically used in travel cost demand models), then the average household enjoys $1,200 in additional time at home (100 hours at $12). With 125 million households in the U.S., that aggregates to $150 billion.
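The back-of-envelope arithmetic above can be checked in a few lines (assuming 2,000 work hours per year and rounding the time value to $12, as in the text):

```python
# Back-of-envelope value of commuting time saved (assumptions from the post)
median_household_income = 70_000
hours_per_year = 2_000                       # one full-time earner
household_wage = median_household_income / hours_per_year  # $35/hour
time_value = round(household_wage / 3)       # 1/3 of the wage, rounds to $12/hour
hours_saved = 100                            # commuting hours saved per year
households = 125_000_000
aggregate = time_value * hours_saved * households
share_of_gdp = aggregate / 23_000_000_000_000   # relative to $23 trillion GDP
print(aggregate)                     # 150000000000 -> $150 billion
print(round(100 * share_of_gdp, 2))  # 0.65 (percent of GDP)
```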

That, at 0.65% of US GDP ($23 trillion), is lower than Krugman’s 1-3% estimate. There is some slippage between households and individual adults here, but you get the idea: Krugman is making less conservative assumptions than mine.

 

0 comment
0 FacebookTwitterPinterestEmail

I participated in a survey conducted by Clemson this week. I was eligible because I had published a paper using opt-in panel data at some point. I posted the image to the right to Twitter and proceeded to provide a brief review of what I thought about each of the panels I’ve used. I’ve been thinking about that and want to say more. During the rare times I’ve had enough money in the research budget I’ve used KN/GfK/Ipsos‘s Knowledge Panel (KP). KP is a probability-based sample and more representative of the population than opt-in panels, which are basically convenience samples. There are interesting research questions about whether and when researchers should use opt-in panels. A forthcoming Applied Economic Perspectives and Policy (AEPP) symposium is a step in that direction (here is the second, I think, of four articles to appear online). 

The first time that I enjoyed a probability-based sample was when I was working on Florida’s BP/Deepwater Horizon damage assessment with Tim and others. We had plenty of funding for two (!) KP surveys, and two articles have been published (one and two [the first, I think, of the AEPP articles to appear online]). The second time was a few years ago with funding from the state of North Carolina, where Ash Morgan and I looked at the invasive species Hemlock Woolly Adelgid (HWA) and western North Carolina forests. I’ve presented papers from that study at a couple of conferences and UNC – Asheville, but nothing has been published yet. I hope to write the forest paper this summer because it boasts the same coincidental design as the second published paper above: GfK supplemented the KP sample with opt-in responses (while charging us the same price per unit), so there is a data quality comparison between probability-based and opt-in samples. In the second published AEPP paper, with a single binary choice question, we find that the opt-in data was lower quality. In the HWA study we aren’t finding many differences. In other words, the opt-in data is as good as the probability-based data.  

I think that these opt-in panels will be increasingly used in the future and we need to figure out how best to use them. Opt-in data are much less expensive. For example, a Dynata recreational user respondent cost me $5 in a February 2023 survey; a KP recreational user cost $35 per unit. Of course, KN/GfK programmed the survey while I program my own when using the Dynata panel, but programming it yourself doesn’t take much more effort than writing the questions and then explaining to KN/GfK how to implement them. One known problem with opt-in panels is that you don’t get a response rate, but it is a toss-up whether no response rate is worse than a response rate of less than 10% from a mail survey. The good thing about a mail survey is that you know what sort of bias your data will suffer from (sample selection). I don’t have an estimate of the cost of a mail survey, but the cost per completed response is much higher than $3.50 when the response rate is less than 10%. 
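The mail-survey cost point can be made concrete with a quick sketch. The $3.50 per mailing and 10% response rate are assumed, illustrative numbers:

```python
# Effective cost per completed mail-survey response =
#   cost per survey mailed / response rate (hypothetical numbers)
cost_per_mailing = 3.50   # assumed printing + postage per mailed survey
response_rate = 0.10      # a 10% response rate (often lower in practice)
cost_per_completed = cost_per_mailing / response_rate
print(cost_per_completed)  # 35.0 -- already on par with a $35 KP unit
```

At these numbers a mail survey already matches the probability-based panel price, and the per-response cost only climbs as the response rate falls.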

I attended this workshop where four of us provided comments on five stated preference studies funded by the EPA that have been published in PNAS. Each of these studies was multi-year and used focus groups, pretests and probability-based sample data. The time and money cost was very high. During the discussion, one of the exhausted researchers involved in those studies asked how we economists could go from these great but unlikely-to-be-useful-for-policy-analysis (my words) studies to something that would be useful for policy analysis. The audience was stumped for a second and then I realized that I had an answer. The long-term answer, I think, is taking the lessons from these huge studies and developing benefit estimates with models from opt-in data. You can do this within one year with opt-in data and a single pretest, relative to 3-5 years for a major study. The test, I think, is whether the results from models using opt-in data are better than benefit transfer, which is how most policy analysis is being done.

I think the answer is yes (opt-in data models are better than benefit transfer). The second of the published AEPP articles above resulted from a pretest of the PNAS studies. Its conclusion was that opt-in data wasn’t so bad. I’m hoping to contribute to the “opt-in data is good enough for policy” literature by thinking about the role of attribute non-attendance in analyzing opt-in data (more on this soon, I hope). We need more studies like these to convince a skeptical bunch of environmental economists and, especially, OMB that policy analysis will be improved if we don’t always rely on million dollar studies.  


On the day of the final in my 2000-level environmental and resource economics course, the WSJ published an article on Clean Power Plan 2.0 (Biden Administration Targets Power-Plant Emissions in New Climate Initiative): 

The Biden administration proposed new rules Thursday to drastically reduce greenhouse gases from coal- and gas-fired power plants—measures that will cost billions of dollars but that officials say will curb emissions that are warming the atmosphere and harming human health. …

The EPA proposal incorporates separate standards for different types of plants. The rules for existing coal- and new natural-gas-fired power plants would reduce CO2 emissions by 617 million metric tons. The proposal for existing gas plants would cut 214 million to 407 million metric tons between 2028 and 2042, according to the EPA. Cleaning up power plants would prevent an estimated 300,000 asthma attacks and 1,300 premature deaths in 2030, according to the EPA.

This is something that I’d normally cover in class so I added this extra credit question (note/a): 

Read the WSJ article and, using an estimate of the social cost of carbon, provide an estimate of the climate benefits of the Clean Power Plan 2.0. Write a short paragraph explaining your answer (and note that you could also provide an estimate of the value of reduced mortality with the VSL). 

I received one submission and it is a good one: Bradley Del Vecchio (2026 sustainable technology major) writes:

The Clean Power Plan 2.0 is an awesome step in the right direction to further combine environmentalism and economics in order to create a more equitable future. According to the estimates provided by the EPA, we could see cuts ranging from 214 to 407 million metric tons of CO2 emissions from 2028 to 2042. Using the most recent estimate of the social cost of carbon at $51 per ton of CO2 or equivalent greenhouse gasses, the proposal could deliver climate benefits ranging from $10.9 to $20.8 billion over the given time period. Additionally, using the VSL of $11.9 million per life and factoring in the proposal’s estimated 1,300 avoided premature deaths, we could see an additional $15.5 billion in health benefits. This figure does not account for the 300,000 asthma attacks that could be prevented, saving many people and their families from costly medical bills. Overall, the Clean Power Plan 2.0 proposal could provide benefits of $26.4 billion on the low side and $36.3 billion on the high side from 2028 to 2042. These benefits, when compared to the marginal increase in electricity costs, make this seem like a completely viable proposal both economically and socially.
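The arithmetic in the answer checks out; a sketch of the calculation, rounding intermediate figures to one decimal as the answer does:

```python
# Climate benefits: social cost of carbon x metric tons of CO2 avoided
scc = 51                                      # $ per metric ton of CO2
climate_low = round(scc * 214e6 / 1e9, 1)     # 10.9 ($ billions)
climate_high = round(scc * 407e6 / 1e9, 1)    # 20.8
# Mortality benefits: VSL x premature deaths avoided
mortality = round(11.9e6 * 1_300 / 1e9, 1)    # 15.5
low = round(climate_low + mortality, 1)       # 26.4
high = round(climate_high + mortality, 1)     # 36.3
print(low, high)
```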

a/ A successful answer would be used to bump a grade within 1 point of the round-up range. For example, I’ll normally round an overall 89.7 average up to a 90 (A-), but an 89.4 is a B+. This would push the 89.4 to an A-.


From E&E news (Senators eye ‘CHIPS 2.0’ as vehicle for carbon tariff):

As senators from both parties seek a pathway for advancing a bill imposing carbon tariffs, a potentially viable vehicle has emerged: a nascent legislative package to boost U.S. competitiveness against China.

Senate Democrats announced last week they want to write a follow-up to last year’s bipartisan CHIPS and Science Act with the help of Republicans, opening the door to a rare opportunity this Congress to craft and even pass legislation that would have support from each side of the aisle.

Critically, the scope of “CHIPS and Science 2.0,” as some are calling it, would also likely lend itself to the inclusion of language to institute a carbon border adjustment mechanism, or CBAM — an emissions reduction concept that is gaining support across the political spectrum. Many advocates see the effort as key to fighting global emissions while at the same time punishing foreign adversaries like China. …

Lawmakers would also have to resolve significant disagreement over whether putting a domestic price on carbon is necessary for achieving a CBAM that deals with international behavior surrounding emissions.

Some advocates say a domestic carbon fee is a necessary step to ensure a CBAM’s enforceability by the World Trade Organization. Others, including Coons, are currently willing to risk running afoul of the WTO by leaving that component out of their proposals to make their ideas more politically palatable.

On April 19 I wrote:

Of all the types of protectionism that are trending (e.g., here and here), this one might not be so bad.

So, why is that? Well, regular tariffs might be designed to protect domestic industry. They do that, but the costs, in terms of higher prices and lost consumer surplus, outweigh the benefits. The cost of steel tariffs might be about $900,000 for each steel job protected. The same type of calculation would be at work with tariffs aimed at imports that generate more GHGs than products made domestically, but there is the added benefit of reducing GHGs. If both the U.S. and Europe imposed these tariffs then it would elevate climate policy in both places and the tariff rate would effectively be zero. At least, that is the result in my simple trade model. 
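The cost-per-job figure is just total consumer losses divided by jobs saved; a back-of-envelope sketch with purely hypothetical numbers (neither figure comes from an actual study):

```python
# Cost per job protected = consumer surplus loss / jobs saved
# (hypothetical numbers chosen only to illustrate the calculation)
consumer_surplus_loss = 900_000_000  # assumed annual cost of higher steel prices ($)
jobs_protected = 1_000               # assumed steel jobs saved by the tariff
cost_per_job = consumer_surplus_loss / jobs_protected
print(cost_per_job)  # 900000.0 -- about $900,000 per job
```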


From the GW Regulatory Studies Center:

Revising Regulatory Review: Expert Insights on the Biden Administration’s Guidelines for Regulatory Analysis
Tue, 9 May, 2023 10:30am – 3:00pm

Please join the GW Regulatory Studies Center and the Society for Benefit-Cost Analysis for a timely discussion of recent changes to regulatory practices and analysis.

In April, the White House released much-anticipated revisions to federal regulatory practices, including a new Executive Order 14094 on “Modernizing Regulatory Review,” draft revisions to OMB Circular A-4 governing regulatory impact analysis, and draft guidance on meetings with entities outside of the executive branch. The Office of Information and Regulatory Affairs (OIRA) is working to implement these changes, which comprise the most significant regulatory policy initiatives of the Biden administration and raise interesting benefit-cost analysis issues, including the appropriate discount rate, who has standing in a benefit-cost analysis, and how distributional impacts should be measured.

At this forum, OIRA Administrator Richard Revesz (invited) will discuss these changes with a panel of former OIRA administrators who served in the Clinton, Bush, Obama, and Trump administrations. Following that discussion, a panel of experts experienced in regulatory impact analysis at the federal level will explore in more depth the draft changes to Circular A-4.

Register for virtual attendance here: https://docs.google.com/forms/d/e/1FAIpQLSeelAAXOwChP9y9ygk7YDxRTGEtb5_m1A_54yG-MBGccMOcUw/viewform

 


They’ll go after you if you call them predatory. From Retraction Watch:

A 2021 article that found journals from the open-access publisher MDPI had characteristics of predatory journals has been retracted and replaced with a version that softens its conclusions about the company. MDPI is still not satisfied, however.

The article, “Journal citation reports and the definition of a predatory journal: The case of the Multidisciplinary Digital Publishing Institute (MDPI),” was published in Research Evaluation. It has been cited 20 times, according to Clarivate’s Web of Science. 

Here is the abstract of the article in Research Evaluation:

The extent to which predatory journals can harm scientific practice increases as the numbers of such journals expand, in so far as they undermine scientific integrity, quality, and credibility, especially if those journals leak into prestigious databases. Clarivate’s Journal Citation Reports (JCR), a reference for the assessment of researchers and for grant-making decisions, is used as a standard whitelist, in so far as the selectivity of a JCR-indexed journal adds a legitimacy of sorts to the articles that the journal publishes. The Multidisciplinary Digital Publishing Institute (MDPI) had 53 journals ranked in the 2018 JCRs annual report. These journals are analysed, not only to contrast the formal criteria for the identification of predatory journals, but taking a step further, their background is also analysed with regard to self-citations and the source of those self-citations in 2018 and 2019. The results showed that the self-citation rates increased and was very much higher than those of the leading journals in the JCR category. Besides, an increasingly high rate of citations from other MDPI-journals was observed. The formal criteria together with the analysis of the citation patterns of the 53 journals under analysis all suggest they may be predatory journals. Hence, specific recommendations are given to researchers, educational institutions and prestigious databases advising them to review their working relations with those sorts of journals.

More from Retraction Watch:

Soon after the paper was published in July 2021, MDPI issued a “comment” about the article that responded to Oviedo García’s analysis point by point. The comment called out “the misrepresentation of MDPI, as well as concerns around the accuracy of the data and validity of the research methodology.”

Here is the graph that MDPI is using to suggest “that MDPI is in-line with other publishers, and that its self-citation index is lower than that of many others; on the other hand, its self-citation index is higher than some others.”

From this graph you can see market share in publishing: Elsevier, Springer, Informa (Taylor and Francis) and Wiley Blackwell are the big four. The next two largest publishers are the Institute of Electrical and Electronics Engineers (IEEE) and MDPI. There are a ton of self-cites in Elsevier journals (the self-citation numbers are at the publisher level). MDPI is comparing itself to, say, the Elsevier and Springer placement on the vertical axis. What this analysis is missing is quality control. There are not many MDPI articles that you’d want to reference because they are mostly low quality. So, self-citation at lower-level journals is some evidence of manipulation of journal rankings. 

The solution, it seems to me, is to stop trying to label MDPI (and Frontiers) as predatory. Just call them something else like “publishers that manipulate journal rankings and take people’s money in exchange for a shaky review process that ultimately leads to publication.”  


I teach an online MBA managerial economics class and try to inject environmental content as much as I can (after all, we have the #1 ranked sustainable MBA program in NC). I’m terrible at game theory but I think I have found a WSJ article that helps (Tesla’s Price Cuts Are Roiling the Car Market): 

Tesla Inc.’s recent price cuts on its most popular models in the U.S. are reverberating through the car business, pressuring rivals and affecting purchase decisions for new- and used-car buyers. …

Tesla’s price cuts have drawn mixed reactions from investors and Wall Street analysts. Some suggested the move was made in response to waning demand. Others viewed it as Tesla squeezing competitors by sacrificing some of its strong operating-profit margins—which are larger than most car companies—while also lowering prices enough to qualify many models for a $7,500 federal tax credit.

A more recent article has Elon saying that they are going for market share and then will try to gouge Tesla owners with costly software updates (my interpretation). Here is a question I asked on a recent exam:

Refer to the WSJ article: “Tesla’s Price Cuts Are Roiling the Car Market”.

From the article: “…lower Tesla prices are undercutting some competitors’ EVs just as those auto makers try to convince investors and car buyers that they are a viable Tesla alternative…”
a. Draw and label a graph that depicts a demand curve and a supply curve in the market for Ford electric vehicles (EVs). (i) Illustrate on your graph the effect of a decrease in the price of Tesla EVs. (ii) Describe the impact on the equilibrium price and quantity of Ford EV automobiles as a result of the decrease in the price of Tesla EVs.
b. Considering your answer in part (a), what happens to Ford’s EV profits if they leave price unchanged? What happens to Ford’s EV profits if they reduce their price to match Tesla?
c. Thinking of this in terms of a repeated game, what is Ford’s next move? Use the concept of a trigger strategy in your answer.
d. Draw and label a graph that depicts a demand curve and a supply curve in the market for Tesla EVs. Illustrate the effect of your answer in part (c) on this graph.
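For the trigger-strategy part, a minimal sketch of the logic with hypothetical per-period profits (none of these payoffs come from the article; they just make the condition concrete):

```python
# Grim trigger in a repeated pricing game (hypothetical payoffs per period):
R = 8    # both firms keep prices high (cooperate)
P = 3    # both firms cut prices (the one-shot Nash punishment)
T = 12   # temptation: cut price while the rival holds price high
# Cooperating forever pays R/(1-delta); deviating pays T today, then P forever:
# T + delta*P/(1-delta). Cooperation is sustainable under a grim trigger iff
#   R/(1-delta) >= T + delta*P/(1-delta),  i.e.  delta >= (T-R)/(T-P).
critical_delta = (T - R) / (T - P)
print(round(critical_delta, 3))  # 0.444 -- sufficiently patient firms keep prices high
```

The point of the exercise is that Ford's "next move" depends on how it weighs the one-period gain from matching Tesla's cut against the lost future profits once the price war starts.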

Any advice on how to make this question better is appreciated.


If you have published an article in a Wiley journal lately you may have noticed the Hindawi brand name shows up alongside Wiley at various stages of the process (at least, that is my recollection). You may have also noticed that Hindawi is a sketchy publisher, sometimes accused of predatory behavior.

From Retraction Watch (Nearly 20 Hindawi journals delisted from leading index amid concerns of papermill activity):

Nineteen journals from the open-access publisher Hindawi were removed from Clarivate’s Web of Science Monday when the indexer refreshed its Master Journal List.

The delistings follow a disclosure by Wiley, which bought Hindawi in 2021, that the company suspended publishing special issues for three months because of “compromised articles.” That lost the company $9 million in revenue.

Here are the stats from one delisted journal:

Submission to final decision in 35 days is too quick to give me any confidence that most articles were adequately reviewed. I can’t find any economics journals on Hindawi’s webpage but there are plenty of economics articles (with some goofy titles).

I typed the question in the blog post title into Google and this article came up first: Wiley Buys Open Access Publisher for $298 Million. Here is what the article says:

John Wiley has expanded its presence in the open access field with the acquisition of Hindawi, a London-based scientific research publisher of more than 200 peer-reviewed scientific, technical, and medical journals. Wiley paid $298 million for Hindawi which had 2020 revenue of $40 million.

In making the announcement of the acquisition, Wiley cited not only Hindawi’s portfolio of journals but also its “highly efficient publishing platform” and its “low-cost infrastructure” as reasons for doing the deal. …

Wiley noted the critical role Hindawi, which was founded in 1997 and became a fully OA publisher in 2007, has played in advancing open access publishing, a model under which scholarly articles are made freely accessible to researchers with costs covered by publication and author fees rather than subscriptions.

So it seems like Wiley wanted to make some money and didn’t much care whether the reputations of its legitimate journals would get soiled in the process. Wiley looked at the stats for the Journal of Environmental and Public Health and decided it was a good idea to buy it. 

I’d stay away from Hindawi just like I’d stay away from MDPI (Fast-growing open-access journals stripped of coveted impact factors): 

Clarivate initially did not name any of the delisted journals or provide specific reasons. But it confirmed to Science the identities of 19 Hindawi journals and two MDPI titles after reports circulated about their removals. The MDPI journals include the International Journal of Environmental Research and Public Health, which published about 17,000 articles last year. In 2022, it had a Web of Science journal impact factor of 4.614, in the top half of all journals in the field of public health.

Jeez, keep your eye on Wiley journals too.
