Work Requirements in SNAP

by

Chris Edwards

Federal policymakers will soon run into a hard deadline to raise the government’s legal debt limit. President Biden wants a simple debt‐​limit increase with no strings attached, but House Republicans have proposed a package of spending reforms, the Limit, Save, Grow Act, to include in a debt‐​limit deal.

One GOP reform would strengthen work requirements for the Supplemental Nutrition Assistance Program (SNAP), also called food stamps. The proposal would affect a small fraction of people on the program and reduce costs only slightly. But restricting handouts to encourage work makes sense because the economy has millions of job openings, as shown in the chart below.

In 2023, about 42 million people will receive food stamps at a cost of $127 billion. Many recipients are exempt from SNAP work requirements, including children, the elderly, and the disabled. About four‐​fifths of SNAP households include a child, a senior, or a disabled person. The other one‐​fifth consists of adults who generally need to be working, looking for work, or in training to receive ongoing benefits.

There are two sets of work requirements for SNAP recipients. General rules require individuals who are able to work, aged 16–59, and not caring for a child under age 6 to register for work, accept suitable work, or participate in a training program. These rules have numerous exceptions. Additional rules require able‐​bodied adults without dependents (ABAWDs) aged 18–49 to work or train in order to receive benefits for more than three months within any three‐​year period.

The Republican proposal would tighten work requirements by raising the top age for the ABAWD group from 49 to 56. Looking at Table 3.2.a here, 3.5 million SNAP households include neither children, the elderly, nor the disabled, and about 2.5 million of those are in the ABAWD group. That appears to leave about 1 million households or fewer that may be affected by the GOP proposal. The data cover the October 2019 to February 2020 period.

SNAP’s ABAWD rules were suspended during the pandemic but come back into force this year. Even then, the American Enterprise Institute’s Kevin Corinth notes that numerous states have federal waivers that void some of the program’s work requirements.

Tightening the SNAP work requirements would generate just a small part of the savings from the Republican plan. But it is important to begin reining in bloated entitlements, and adjusting eligibility to encourage work is a good place to start.

More on SNAP here, here, and here.


Gabriella Beaumont-Smith

The COVID-19 pandemic changed the world, shifting even people’s eating habits. Numerous surveys reveal that Americans are snacking more, providing new opportunities for food and beverage companies. In fact, according to Axios, American food manufacturers are pushing out all sorts of new flavor combinations, sizes, packaging, and shapes of snacks. In particular, fun‐​size snacks and “mashups” of flavors are trending, including bite‐​sized Twinkies and Ding Dongs, Cocoa Puffs popcorn, and Dr Pepper cotton candy. And these aren’t just aimed at kids; executives at numerous food companies say all consumers value these new varieties.

While sweet treats are integral to many celebrations—Halloween, Valentine’s Day, Easter, Christmas, birthdays, etc.—they now serve as a simple joy in life. According to the National Confectioners Association, consumers see chocolate and candy as “a fun part of life.” And since inflation has persisted, reducing how far people’s paychecks go, consumers are looking for affordable treats, and candy and chocolate provide the perfect fix.

Yet the U.S. sugar program purposefully makes sugar more expensive. Basically, the U.S. government restricts the supply of sugar to keep the price of U.S. sugar high. The government does this in a few ways but primarily by buying U.S. sugar to keep it off the consumer market and by imposing strict tariff‐​rate quotas under which only small quantities can be imported duty‐​free. Imports above the quota face a tariff approaching 100 percent. As a result, the U.S. sugar price is about double the world price of sugar, as shown in the chart.
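
To make the mechanics concrete, here is a stylized sketch of how a tariff‐​rate quota works. The world price and the over‐​quota rate below are illustrative assumptions, not the actual U.S. sugar schedule.

```python
# Stylized tariff-rate quota (TRQ): imports inside the quota enter duty-free;
# imports beyond it face a steep ad valorem tariff. All numbers are assumed.

WORLD_PRICE = 0.18        # assumed world sugar price, dollars per pound
OVER_QUOTA_TARIFF = 0.95  # assumed ~95% tariff once the quota is filled

def landed_cost(quota_filled: bool) -> float:
    """Per-pound cost of imported sugar under the stylized TRQ."""
    duty = OVER_QUOTA_TARIFF if quota_filled else 0.0  # duty-free in quota
    return WORLD_PRICE * (1 + duty)

print(round(landed_cost(quota_filled=False), 3))  # 0.18  -> in-quota sugar at world price
print(round(landed_cost(quota_filled=True), 3))   # 0.351 -> over-quota sugar costs nearly double
```

Because the quota volume is small, the marginal pound of imports pays the over‐​quota rate, which is how the domestic price can sit at roughly twice the world price.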

This program inflicts high costs on the food manufacturers pumping out these cool new snacks, and ultimately on consumers who end up paying more for snacks. The U.S. sugar program is estimated to cost consumers up to $4 billion a year and cause up to 20,000 job losses in the food processing and confectionery industries.

So, next time you’re enjoying some Cinnamon Toast Crunch popcorn, or a bite‐​size Butterfinger, remember that the U.S. sugar program unnecessarily makes them more expensive.


TikTok Panic Threatens Speech

by

Will Duffield

TikTok is a social media app that hosts short‐​form videos and serves them to users via algorithm. Because TikTok is owned by the Chinese tech firm ByteDance, its surging popularity with teenagers and young adults in America has prompted concerns that it could be used for illicit data gathering and influence operations. These concerns, and a broader crisis of confidence in American culture, have launched a host of proposals to ban TikTok. A ban would frustrate the many millions of Americans who use TikTok to express themselves, and efforts to crush the app risk giving the government new powers to police speech across the internet.

TikTok’s Project Texas aims to assuage concerns about ByteDance’s ownership of the platform by creating an American subsidiary called TikTok U.S. Data Security Inc. The subsidiary will own American user data and manage TikTok’s deployment on Oracle’s cloud, where Oracle can monitor TikTok’s algorithm. Although Project Texas adequately addresses concerns that TikTok’s algorithm could be re‐​tuned to spread propaganda, worries that the CCP might use TikTok to gather information about Americans for nefarious purposes are harder to dispel.

A Foreign Crisis?

Like many apps, TikTok collects data from user devices, including users’ locations and stored media. This data is necessary to make TikTok work, but it can also be misused. Unlike most other apps, TikTok’s parent company ByteDance is headquartered in China, where it is subject to China’s National Intelligence Law. Under the law, China can require its citizens and corporations to provide data relevant to intelligence work.

There isn’t any evidence that TikTok is spying for the CCP. Its only demonstrable misuse of customer data – tracking employees leaking information to journalists – is reminiscent of Travis Kalanick’s excesses, not the KGB’s. Nevertheless, the National Intelligence Law makes concerns about data gathering hard to fully allay. So long as TikTok is owned by ByteDance, the law might be invoked to compel ByteDance to circumvent or undermine Project Texas, and data is notoriously leaky.

Other countries have found themselves in similar situations with American tech firms and America’s National Security Letter process. This Patriot Act authority allows the FBI to demand data from private firms without approval by a court, and imposes a gag order on the letter’s recipient. The Ninth Circuit recently held that Twitter could not even disclose the number of NSLs that it had received. This power makes some foreigners understandably nervous about doing business with American tech firms. However, this is the first time America has found itself on the receiving end of such a power.

Thus, the concern that China’s NIL will be used to access American TikTok users’ data directly implicates America’s place in the global order it has created. It is, in a word, a crisis. It is not a national security crisis, or, in any reasonable likelihood, a data security crisis down the road. China has countless ways of accessing or simply purchasing whatever data it might gather through TikTok. Instead, TikTok’s rise has spurred a domestic crisis of confidence in the open internet and an American‐​led liberal order.

Before TikTok, China’s NIL didn’t matter to America because few Americans used Chinese apps or websites. In a sense, TikTok, and a few other firms such as consumer drone maker DJI, are tragic success stories. They have succeeded in offering novel, useful products to customers worldwide despite growing under the CCP, yet their success makes them geopolitical footballs. American legislators see a threat in their products’ appearance in American homes, while China won’t allow them to become truly global companies, free from the cloud of government influence. TikTok isn’t a small founder‐​run operation like Telegram, which, while born in Russia, escaped its orbit and is now registered in the Cayman Islands and headquartered in Dubai.

TikTok’s success is no reason to throw out the system that allowed it to grow. The vast majority of globally successful companies are still American. An international order that allows the free exchange of apps and web services benefits America far more than a splinternet of national champions. Banning TikTok will invite reciprocal action and opportunistic demands for divestment and local spin‐​offs of US firms. America has many more global internet giants to lose than China.

Banning TikTok will also do little to curtail Chinese access to Americans’ data. TikTok isn’t collecting anything unique. Absent limits on domestic data collection and resale, banning TikTok or other Chinese apps amounts to requiring China to buy Americans’ data on the open market like everyone else. China can also simply continue to steal data far more important than anything collected by TikTok, such as the 22 million federal employee background check records stolen in the 2015 Office of Personnel Management data breach or the credit records of 145 million Americans stolen in the 2017 Equifax hack. The cure, then, is both homeopathic and worse than the disease.

Different Flavored Bans

However, if TikTok is banned, how it is banned matters, both for American firms abroad and American liberties at home. The most likely route to a ban is CFIUS review. CFIUS, the Committee on Foreign Investment in the United States, has the power to place conditions on, reject, or even unwind the acquisition of American firms by foreign buyers.

In 2019, CFIUS ordered Chinese firm Kunlun Tech to sell gay dating app Grindr, reversing its 2018 acquisition of the American startup. CFIUS has the power to force ByteDance to sell TikTok because TikTok is a product of ByteDance’s 2017 acquisition of American short‐​form video app Musical.ly. Project Texas is a product of the committee’s negotiations with TikTok, but while it once seemed capable of assuaging the committee’s concerns, a divestment order requiring ByteDance to sell its stake in the platform now seems more likely.

While this is not technically the same as banning TikTok, it will set in motion a chain of events likely to lead to the app’s demise. China is unlikely to allow its unicorn to be expropriated by America, and will block any sale to American buyers. In this situation, ByteDance and TikTok will eventually be driven out of the country by mounting civil fines for failing to comply with the divestment order. A CFIUS divestment order doesn’t require congressional action beyond the hostility displayed in last month’s hearing with TikTok CEO Shou Zi Chew, making it the most likely path to a ban. While CFIUS action doesn’t further empower government, unwinding an acquisition of this size six years after the fact would be unprecedented, and would certainly prompt Chinese efforts to punish American firms. Although American social media platforms are banned in China, many American tech firms manufacture products there, so the CCP has plenty of potential leverage if treatment of TikTok prompts a tit‐​for‐​tat spat.

However, if CFIUS fails to dispatch TikTok, there are even worse solutions to the crisis waiting in the wings. Two bills have been introduced that would expand the government’s power to police speech and speech infrastructure to get at TikTok.

Proposed Legislation

In the Senate, Senator Mark Warner’s RESTRICT Act would empower the Secretary of Commerce to prohibit or demand “mitigation measures” from almost any service provider in the communications stack that touches an adversary nation — Russia, China, Iran, North Korea, Venezuela, and Cuba, to start. It initially attracted cosponsors on both sides of the aisle but has since received criticism from civil liberties organizations and the right. Jennifer Huddleston and I published an in‐​depth analysis of RESTRICT with our colleagues in the Trade Policy Center titled “TikTok Legislation Is a Blank Check for Government Encroachment Upon Americans’ Wealth, Privacy, and Safety”. We write that RESTRICT “raises troubling and far‐​reaching concerns for the First Amendment, international commerce, technology, privacy, and separation of powers”. Thankfully, some of its early proponents have already walked back their support.

Senator Josh Hawley’s proposal is the cleanest legislative solution, but runs into Bill of Attainder issues (the Constitution prohibits legislation targeting particular people or firms) because it waives the Berman Amendment specifically for action relating to TikTok. Montana’s TikTok ban, currently awaiting Governor Gianforte’s signature or veto, runs into the same problem and more. SB 419 would fine app stores that offer TikTok to Montanans $10,000 each time the app is downloaded, likely running afoul of the dormant commerce clause and the First Amendment.

In the House, H.R. 1153, the DATA Act, is more limited than RESTRICT, but still gives the government new powers to sanction foreign election interference. The DATA Act would modify the Berman Amendment, a limit on the International Emergency Economic Powers Act (IEEPA) that prevents it from being used to ban “informational materials”. IEEPA gives the president wide latitude to sanction international trade during national emergencies, but, thanks to the Berman Amendment, it can’t block the import (or export) of speech. DATA would change this by exempting “sensitive personal information” from the Berman Amendment. Although this sounds narrow, it includes private “electronic communications, including email, messaging, or chat communications”, effectively excluding chat applications such as Telegram from the Berman Amendment’s protections.

When President Donald Trump tried to ban TikTok in 2020, he used IEEPA to prohibit transactions with ByteDance and its subsidiaries. TikTok sued, and the US District Court for the District of Columbia issued a preliminary injunction against the ban, in part because the Berman Amendment made it likely that TikTok’s challenge would succeed. President Biden revoked Trump’s ban order before the matter could be litigated further.

Losing our Religion

The Berman Amendment isn’t just a bar to banning TikTok — it’s also a time capsule. In 1988, when the amendment was passed, America felt much rosier about its place in the world.

America was so sure of its cultural advantages that it specifically exempted the exchange of ideas from IEEPA’s sanctions regime. In 1994, flush from the collapse of the USSR, Congress expanded the amendment to include the internet. It is hard to see the America that passed and extended the Berman Amendment banning TikTok. And yet, despite the continuing dominance of American culture and technology platforms, our attitude toward the free exchange of ideas has shifted. We no longer appreciate our strengths.

China recognizes our cultural power; indeed, this is the main reason the CCP bans TikTok along with western social media. It’s all filled with American speech, ideas, and culture, a glowing digital billboard for freedom. Senator Rand Paul made note of this quality in a recent op‐​ed, writing, “Go to TikTok and search for videos advocating Taiwan’s independence, criticism of Chinese Premier Xi Jinping. Videos are all over TikTok that are critical of official Chinese positions. That’s why TikTok is banned in China.” Unfortunately, Sen. Paul is lonely in this recognition. Most elites on the left and right respectively treat Russian and Chinese propaganda as hyper‐​persuasive despite all evidence to the contrary, and ignore the power of American speech. This faltering belief in American values and the value of American culture is a far greater threat than TikTok.

Instead of searching for a Goldilocks method of banning TikTok without further empowering the government, inviting blowback, or violating the Constitution, America should lean into its advantages and learn to live with their costs.

This doesn’t mean doing nothing, but fiddling with the Berman Amendment is bound to give government new ways to censor speech it deems “foreign disinformation” and “election interference”. The Twitter Files and repeated disclosures by platform representatives have made it clear that the government already pushes platforms to remove too much in the name of national security. Instead of banning TikTok, we could prohibit service members and government employees from installing a wide variety of foreign apps on their personal devices, condition approval of ByteDance’s acquisition of Musical.ly on data collection limits, or impose industry‐​wide rules about what data platforms can gather in the first place.

Regardless of how a ban might be achieved, TikTok simply isn’t uniquely concerning enough to justify compromising the open international market for apps and web services or jettisoning our deeper commitments to the free flow of information. America remains dominant in the production of culture and technology. TikTok doesn’t change that, but our reactions to it could.


Friday Feature: Gather Forest School

by

Colleen Hroncich

On average, kids today reportedly spend fewer than 10 minutes a day playing outside—and more than seven hours a day on screens. That’s one of the reasons I love learning about forest schools, like The Garden School, Barefoot University, and today’s feature, Gather in Decatur, Georgia.

Gather founders Ashley and Shelby.

A former teacher, Ashley Causey‐​Golden was caring for her newborn son and trying to figure out her next move. She didn’t want to put him in daycare when he was so young, so she decided to try creating her own school. Ashley had interacted with Shelby Stone‐​Steel on Instagram through Ashley’s Afrocentric Montessori page. She knew they shared a lot of the same ideas about education, so she reached out to her about partnering. At that point, they’d never met in person.

“I gave birth to Anthony in March and was not even a month post‐​partum,” Ashley recalls. “We did a lot of talking on the phone because we were just trying to get to know each other. In July, we started touring spaces. Trying to find a space was the hardest hurdle because we were trying to prove a concept. But one place took a chance on us and really loved what we were doing for black kids — having children outdoors and learning through nature. We do academics in this space, but there’s also something to be said just being present and existing in nature for longer than a few minutes.”

As a forest school, they spend nearly all of their time outdoors. Ashley says Gather has “a Montessori and Waldorf flow.” She adds, “We do math, literacy, writing, science, geography, social studies, history, and cultural studies. We use some worksheets, but we also use things like acorns and leaves because with those things you can do pattern work with our younger students.”

In the first year, there was a mixture of families. Some wanted the nature approach just for preschool and then were going to a more traditional program. But the homeschooling families really embraced the whole program. “We found out that we really like partnering with homeschoolers,” says Ashley. “They see this journey as long term because it is sometimes hard for them to find consistent community. So we said this is our lane—partnering with black homeschooling families.”

Art time at Gather Forest School.

Gather operates Monday through Friday, 8:30 a.m.–12:30 p.m. Parents can choose full‐ or part‐​time participation with tuition prorated based on how many days their children attend. “Since we work with homeschooling families, each family has their different flow,” Ashley says. She asks families to sign up for particular days to ensure they have sufficient student‐​teacher ratios. In addition to Ashley and Shelby, they recently hired another teacher – which will be particularly helpful since Ashley’s second child is due any day.

Ashley says she would encourage other teachers who are interested in creating something new to give it a shot. “If you have a dream of doing something else, try it,” she says. “It is a lot of hard work—because now you’re an administrator and a teacher. That was the biggest learning curve for Shelby and me because before, we were teachers. But the administrators were handling all the other moving pieces like dealing with parents, complaints, marketing, and fees. When you’re running your own program, you have to do all of that and still teach. But at the end of the day, I wouldn’t trade it in.”

Her message for parents is to truly know your child and try to find a place that’s a good fit. She advises, “Be really honest about who your child is—the good and the beautiful parts of your child, but also the parts that are growth areas. Because you want a program that’s able to speak to both—that keeps pushing the good but also helps challenge where they need it.”


Jack Solowey

The United States has long led global finance. Its institutions shaped critical financial infrastructure and saw the dollar become the world’s reserve currency—thanks to rule of law, property rights, and an innovative market economy at home. As the economic landscape evolves, maintaining this position is a matter of adapting to new technologies that could complement the U.S. dollar and enhance global financial plumbing. Yet, in a fit of myopia, U.S. regulators seem bent on stifling the very developments that could help extend America’s historic strengths, looking askance at recent attempts to integrate open‐​source software with finance.

Yesterday, the House Financial Services Committee’s Subcommittee on Digital Assets, Financial Technology, and Inclusion held a hearing on stablecoins (cryptocurrencies pegged to the value of an asset like the dollar). The committee deserves recognition for taking the all‐​important first step: admitting we have a problem. Nonetheless, although the witnesses largely agreed on the shortsightedness of U.S. hostility to decentralized financial technology and the need for regulatory clarity, comments from lawmakers indicated that a common‐​sense solution on stablecoins, unfortunately, remains far off.

Moreover, a bill posted on the committee’s website before the hearing—a draft stablecoin framework that first circulated last fall—needs work if it is to rein in the excessive regulatory discretion that hinders a competitive stablecoin market and undermines American developers and consumers.

To their credit, Subcommittee Chairman French Hill (R‑AR) and Committee Chairman Patrick McHenry (R‑NC) acknowledged that the bill is but a jumping‐off point for future revisions—an “infant” in Rep. Hill’s words (and an “ugly baby” in Rep. McHenry’s phrasing from last fall). And Chairman McHenry was candid that the bill is imperfect “in many, many ways.” More pointedly, Ranking Member Maxine Waters (D‑CA) was clear to emphasize that from her perspective, “we’re starting from scratch” and should “disregard the bill that has been posted altogether” given developments in the crypto space since earlier negotiations.

So, what should a final bill look like? To answer that question, it’s important to understand how the U.S.’s tangled web of legacy state and federal laws allows regulators to freestyle when it comes to stablecoins and intervene erratically in the market, which, according to the testimony of Columbia Business School professor Austin Campbell yesterday, is driving developers to more welcoming shores abroad. At the federal level, the Securities and Exchange Commission and bank regulators have leveraged ambiguity to threaten enforcement actions against stablecoin projects and caution licensed institutions away from involvement with the crypto ecosystem.

Sensible stablecoin legislation can provide a much‐​needed signal that the U.S. is finally ready to adopt a sane approach to digital assets. However, to achieve that sanity, a stablecoin bill will need to embrace competition from new entrants. This can be accomplished by reducing the regulatory discretion that disserves U.S. businesses and users, avoiding overreactions to experimental instruments, and opening the doors to non‐​traditional market participants.

A stablecoin bill should not grant regulators open‐​ended leeway to reject the applications of stablecoin issuers. Instead, legislation should focus on objective criteria related to reserve assets and disclosures rather than vague factors like a project’s future benefits, contribution to financial stability writ large, overall convenience, or ability to promote financial inclusion.

While those are laudable goals—and consistent with the promise of stablecoins to facilitate competition, transparency, efficient payments, and financial inclusion—evaluating a given stablecoin project’s ability to achieve them ex ante would be highly subjective. Requiring stablecoin issuers to prove their merit in order to exist, instead of simply to mitigate known risks related to the quality and availability of their collateral, would hold stablecoin issuers to a higher standard than other financial institutions. Indeed, the very goals of inclusion and competition would be better served by allowing new market entrants, not creating nebulous prior restraint standards with which to reject new players.

Along these lines, a stablecoin bill should simply address stablecoins’ primary risks: namely, whether fiat asset‐​backed projects actually hold the reserves and honor the redemption policies they claim to. As Jake Chervinsky, Chief Policy Officer of the Blockchain Association, noted yesterday, currently “over 90% of the market capitalization for all stablecoins comes from just five custodial stablecoins.” Legislation should avoid expressing an opinion on, let alone banning or pausing, other types of instruments that are erroneously lumped together with fiat‐​collateralized stablecoins, such as crypto‐​asset‐​backed stablecoins and algorithmic stablecoins (which endeavor to maintain stable values by engineering convertibility between two digital assets from the same issuer). While algorithmic stablecoins, for example, are an unproven technology, they’re largely irrelevant to the problem of providing regulatory clarity to businesses tokenizing fiat assets. Moreover, prohibiting financial technology experimentation generally is unbecoming of the leader of the free world and an innovative market economy.
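
For readers unfamiliar with the category, here is a minimal sketch of the two‐token convertibility mechanism described above. It is a hypothetical illustration of the general design, not any real protocol’s code.

```python
# Minimal sketch of an "algorithmic stablecoin": the issuer lets anyone burn
# one stablecoin to mint $1 worth of a volatile token from the same issuer.
# Hypothetical illustration only; not any real protocol's implementation.

def redeem(stablecoins: float, volatile_token_price: float) -> float:
    """Burn stablecoins; mint $1 worth of the volatile token for each one."""
    if volatile_token_price <= 0:
        raise ValueError("peg breaks: the volatile token is worthless")
    return stablecoins / volatile_token_price  # volatile tokens minted

# Arbitrage is supposed to hold the peg: if the stablecoin trades at $0.98,
# buy it, redeem it for $1.00 of volatile tokens, and sell them for a gain.
# The catch: heavy redemptions balloon the volatile token's supply, which can
# crash its price and break the loop. Hence "unproven technology."
print(redeem(100.0, 0.50))  # 200.0 volatile tokens minted
```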

Lastly, stablecoin legislation should allow flexibility when it comes to the types of businesses issuing stablecoins. Not only should non‐​bank and state‐​chartered entities be allowed to become lawful issuers, but so too should businesses from diverse sectors, including those traditionally outside of finance. Preventing companies with other lines of business from issuing stablecoins—or affiliating with those doing so—would risk further constraining financial inclusion and competition. Inclusion goals could be hindered where trusted brands are blocked from serving the markets and communities they know best. And potential efficiency gains could be lost where networked businesses in other sectors (like social media and ecommerce platforms) are unable to bring their expertise to bear in the stablecoin market should they choose to.

Through curbing regulatory discretion, avoiding disproportionate interventions, and opening the field to new participants, Congress could help to resolve the U.S.’s unsustainable stablecoin status quo. If the U.S. wishes to remain the world’s preeminent financial market, legislative work on stablecoins must continue to ensure that our laws are open to technologies with the potential to help maintain and extend that lead.


The Menace of Fiscal Inflation

by

Today, inflation has reached a 40-year high, in response to fiscal profligacy and accommodative monetary policy. Direct cash payments to individuals and businesses in 2020 and 2021, which totaled more than $5 trillion, along with the rapid growth of base money—under the Fed’s large-scale asset purchase program (also known as quantitative easing or QE)—have combined with global supply-chain disruptions to generate inflation rates that are substantially above the Fed’s long-run goal of 2 percent (Figure 1).

John Cochrane (2022), a Senior Fellow at the Hoover Institution, makes a convincing case that, although inflation generally can be understood as a monetary phenomenon, its roots often can be traced to fiscal dominance—that is, to political pressure to use the central bank to accommodate government deficit spending. Both debt monetization and fiscal helicopter drops—or what Cochrane calls “fiscal inflation”—need to be recognized.

This article examines the fiscal and monetary sources of the current inflation and emphasizes the importance of fiscal rectitude and monetary control in shaping expectations about future inflation.  Without sound fiscal and monetary institutions that are transparent, constrained by the rule of law, and trusted by the public, the danger of future inflation will persist.

Fiscal Stimulus and Monetary Accommodation in Response to COVID-19

The U.S. economy came to an abrupt halt in March/April 2020 as the COVID-19 pandemic resulted in the lockdown of normal commercial life. Businesses shut down and millions of workers became unemployed overnight. With unprecedented uncertainty about the future, the demand for cash balances increased and the velocity of money took a sharp downturn, which led to a concomitant decline in nominal GDP. The recession was deep, but short-lived, compared to the Great Recession stemming from the 2008 financial crisis, as Congress and the Fed moved quickly to pump up aggregate demand.

A Fiscal Helicopter Drop

A rush of legislation was passed by Congress and signed into law by President Trump in 2020 to mitigate the ill effects of the pandemic. The “Coronavirus Aid, Relief, and Economic Security Act” (CARES Act) was signed on March 27. It appropriated $2.3 trillion for making direct cash payments to individuals, expanding unemployment benefits for workers, and assisting small businesses and industries. On December 27, President Trump signed a second piece of legislation, the “Consolidated Appropriations Act” (CAA), which provided additional relief of $900 billion. In March 2021, when the economy was growing at a healthy pace, Democrats poured fuel on the spending spree by passing the “American Rescue Plan Act of 2021,” which added $1.9 trillion of new spending. Individuals who earned less than $75,000 per year were eligible to receive direct cash payments of $1,400, plus another $1,400 per dependent. (For a summary of these laws, see Alpert 2022.)

John Cochrane likens the more than $5 trillion of COVID-19 relief spending to a fiscal helicopter drop that was sure to generate inflation as the economy quickly recovered from the pandemic.  According to Cochrane (2022):

Starting in March 2020, in response to the disruptions of Covid-19, the U.S. government created about $3 trillion of new bank reserves, equivalent to cash, and sent checks to people and businesses. (Mechanically, the Treasury issued $3 trillion of new debt, which the Fed quickly bought in return for $3 trillion of new reserves. The Treasury sent out checks, transferring the reserves to people’s banks.)

The Treasury then borrowed another $2 trillion or so and sent more checks. Overall federal debt rose nearly 30 percent. Is it at all a surprise that a year later inflation breaks out? It is hard to ask for a clearer demonstration of fiscal inflation, an immense fiscal helicopter drop.

A key assumption in Cochrane’s analysis is that the debt will never be repaid. Thus, one can treat the debt as a hypothetical bond with a notional, not real, value.  Kevin Dowd describes such debt as a “perpetual bond with a zero coupon.”  Strictly speaking, the expression “helicopter drop” should only be used when the Fed is literally giving money away to the public (see Friedman).

More importantly, in the post-2008 “floor system,” there is no necessary link between the size of the Fed’s balance sheet and inflation, since reserves can always be sterilized by increasing interest on reserves (more on this later).

Deficits and Money Dance Together

The COVID-19 transfers to households and businesses widened federal deficits, which were financed primarily with accommodative monetary policy. The Fed lowered interest rates, kept them lower for longer, and continued to purchase a large share of Treasuries and mortgage-backed securities. Total assets held by the Fed now stand at about $9 trillion compared to $4 trillion in 2019 (Figure 2). The reserve component of base money more than doubled from March 2020 to August 2021 (Figure 3) as the Fed acquired assets.

The Fed’s decision in 2008 to begin paying interest on reserves (IOR) was an alternative to sterilizing its emergency lending during the financial crisis. When the Fed raises the IOR rate above other short-term lending rates, banks have an incentive to lend to the Fed rather than to the market. That policy reduced the multiplier effect of base money (also known as high-powered money) on both M2 (i.e., currency held by the public, demand deposits, certain time deposits, saving accounts, and money market accounts) and NGDP (Figure 4). However, as the economy gained steam in 2021, banks began to increase their private-sector lending, leading to a slight increase in base-money multipliers (see Williams 2012).
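
The multiplier at work here is just the ratio of broad money to base money. A back-of-the-envelope sketch, using round illustrative figures rather than the actual Fed series:

```python
# Base-money multiplier: m = M2 / monetary base. The dollar figures below
# (in trillions) are illustrative round numbers, not actual Fed data.

def multiplier(m2_trillions: float, base_trillions: float) -> float:
    """Broad money (M2) divided by base money (currency plus reserves)."""
    return m2_trillions / base_trillions

print(round(multiplier(15.5, 3.4), 2))  # 4.56: stylized pre-pandemic multiplier
print(round(multiplier(21.5, 6.3), 2))  # 3.41: base grew faster than M2, so m fell
```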

Figure 5, based on the work of Fernando Martin at the St. Louis Fed, shows the close relationship between federal deficit spending and M2 in a monetary regime of near-zero interest rates, QE, and forward guidance designed to keep inflation at 2 percent over the long run.  As Martin notes, “Both series track each other well, suggesting that the expansion of money in the economy has a fiscal origin.”

During the prepandemic years (2016–2019), M2 grew at an annual rate of 5.6 percent. However, money growth accelerated to 15.6 percent between February and May 2020 once the CARES Act became law in March and the lockdowns took place. A one-time jump in the quantity of money along with a negative supply shock was bound to increase the price level, but long-run inflation requires a sustained increase in M2 (see Thornton 2022). Looking at the data, Martin points out that, following the initial sharp increase, M2 continued to grow at a much faster rate (12.5 percent) than prior to the pandemic. He concludes that, unless money growth slows, inflation can be expected to continue. But, for that to happen, “future deficits need to be persistently large.” From Cochrane’s perspective, it means that the public must expect that the debt will never be repaid. Yet, once again, whether there is future inflation will depend on whether the Fed raises its administered rates soon enough and high enough.

The Failure of the Fed’s New Monetary Framework

In August 2020, the Fed announced several changes to its Statement on Longer-Run Goals and Monetary Policy Strategy. Most importantly, the Federal Open Market Committee (FOMC) adopted “flexible average inflation targeting” (FAIT) that would allow inflation to be “moderately above 2 percent for some time” to make up for past shortfalls. It also redefined its maximum employment mandate as “a broad-based and inclusive goal.” Although that goal is “not directly measurable and changes over time owing largely to nonmonetary factors,” the FOMC’s “policy decisions must be informed by assessments of the shortfalls of employment from its maximum level” (emphasis added). By substituting “shortfalls” for “deviations,” and by moving to FAIT, the Fed signaled that its employment mandate would take precedence over price stability.

The Fed’s new monetary framework is asymmetrical: it deals only with undershooting the target rate, not with overshooting it. It is also vague: the starting and ending points for FAIT are not specified—that is, there is no clear range over which inflation would be allowed to exceed 2 percent. That uncertainty could lead to selecting a higher average inflation target, which would lead people to expect more inflation in the future.

Earlier, in October 2008, in response to the financial crisis and a near-zero nominal policy rate, the Fed abandoned its traditional “limited-reserves regime” for conducting monetary policy in favor of an “ample-reserves regime.” Instead of using small changes in the supply of reserves to move the fed funds rate in line with the Fed’s target rate, the new framework uses QE to pump up reserves, so that the supply of reserves lands in the flat portion of the demand curve.  Meanwhile, the Fed’s target rate is implemented by having the Board of Governors, not the FOMC, administer two rates—the interest on reserve balances (IORB) rate (often shortened to interest on reserves or IOR rate) and the overnight reverse repo (ONRRP) rate. The new operating arrangement means that the Fed’s asset purchases (and thus the size of its balance sheet) can be separated from the stance (i.e., position) of monetary policy (see Plosser 2017; Beckworth 2018; Selgin 2018).

The “divorce” of money (reserves or base money) from the stance of monetary policy (i.e., the Fed’s target range for the federal funds rate) in an ample-reserves regime can be seen in Figure 6. Once reserves are in the flat portion of the demand curve, shifts in the supply of reserves will not affect the fed funds rate, as they did in the pre-2008 framework (see Ihrig and Wolla 2020). Under the new regime, the stance of monetary policy depends primarily on where the Board sets the administered rates it pays on bank reserves and reverse repos.

Todd Keister et al. (2008) nicely summarized the key implication of the ample-reserves regime (also known as the “floor system”) in an article for the New York Fed:

A floor system “divorces” the quantity of money from the interest rate target and, hence, from monetary policy. This divorce gives the central bank two separate policy instruments: the interest rate target can be set according to the usual monetary policy concerns, while the quantity of reserves can be set independently.

What this means is that the Fed can pump up its balance sheet without causing inflation provided it sets its administered rates high enough and soon enough. But there are political limits on how high the IOR rate can go. The public will not look favorably upon large banks receiving high rates on their reserves at the Fed while rates for most people on their bank deposits lag behind. Likewise, Congress won’t like the fact that, as the Fed pays higher rates on reserves, remittances to the Treasury will fall. Finally, large increases in the Fed’s balance sheet via asset acquisition—without corresponding increases in the IOR and ONRRP rates—heighten the risk of inflation. The current inflation is testimony to the risk of the Fed waiting too long to raise its administered rates.
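
To see the “divorce” concretely, consider a toy reserve‐demand curve. The numbers and the linear scarce‐reserves segment are invented for illustration, not an actual Fed model:

```python
# Toy ample-reserves ("floor") regime: once reserves sit on the flat part of
# the demand curve, the overnight rate is pinned by the administered IOR rate,
# so the size of the balance sheet no longer moves it. Parameters invented.

def overnight_rate(reserves: float, ior: float,
                   ample_threshold: float = 1.0, slope: float = 5.0) -> float:
    """Stylized reserve-demand curve: flat at IOR once reserves are ample."""
    if reserves >= ample_threshold:  # flat portion: quantity and rate "divorced"
        return ior
    return ior + slope * (ample_threshold - reserves)  # scarce-reserves corridor

print(overnight_rate(reserves=3.0, ior=0.25))  # 0.25
print(overnight_rate(reserves=6.0, ior=0.25))  # 0.25: doubling reserves changes nothing
print(overnight_rate(reserves=0.5, ior=0.25))  # 2.75: only scarcity re-links the two
```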

Predicting both fiscal deficits and inflation is difficult: forecast errors are common and often large. Top Fed and Treasury officials were caught by surprise with the recent run-up in inflation to a 40-year high. They predicted inflation would be transitory, not that it would now be running at 8.6 percent. They fundamentally ignored the risk of inflation. Indeed, after President Biden signed the $1.9 trillion American Rescue Plan into law on March 11, 2021, Treasury Secretary Janet Yellen, when asked, “Is there a risk of inflation?,” responded: “I think there’s a small risk and I think it’s manageable.” She now admits her forecast error and thinks inflation will “remain high” and should be “our number one priority.”

The failure of the Fed’s post-2008 operating system (see Selgin 2018), and its move to FAIT in 2020, should be addressed if future inflation is to be avoided (see Plosser 2022).

Drawing a Clear Line between Fiscal and Monetary Policy

To stem inflation in the long run, we need moderate levels of monetary growth, for a start, and a clear line of demarcation between fiscal and monetary policy.  Since the 2008 financial crisis, the Fed has engaged in credit allocation in close coordination with the Treasury. George Selgin, in his book, The Menace of Fiscal QE (2020), warned against using the Fed’s balance sheet under the floor system to fund projects that may not muster a majority vote. Such “backdoor spending” is facilitated by the Fed’s power to engage in large-scale asset purchases, and the Board of Governors’ authority (since 2008) to pay interest on reserves.

Charles Goodhart, emeritus professor at the London School of Economics, gets to the crux of the problem of fiscal QE in his inside-cover quote for Selgin’s book:

The current “floor system” of money market management in the USA allows the size of the Fed’s balance sheet to be divorced from its mandate to control inflation. This opens the way for some, usually on the political left, to advocate using the Central Bank’s balance sheet to fund all kinds of (idealistic) expenditures, “quasi-fiscal” quantitative easing, thereby avoiding the need for legislative approval and often (mistakenly) perceived as a “free lunch.” George Selgin advocates a return to a “corridor” system of money management to avert such dangers.

Likewise, Cochrane (2022) warns that, “Whether inflation continues or not depends on future monetary policy, future fiscal policy, and whether people change their minds about overall debt repayment.”  As he explains:

If the government borrows or prints $5 trillion, with no change in its plan to repay debt, on top of $17 trillion outstanding debt, then the price level will rise a cumulative 30 percent, so that the $22 trillion of debt is worth in real terms what the $17 trillion was before. In essence, absent a credible increase in future surpluses, the deficit is financed by defaulting on $5 trillion of outstanding debt, via inflation.
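
In equation form, the quote’s arithmetic (restated here, not in Cochrane’s own notation) is simply the ratio of post‐drop to pre‐drop debt:

```latex
\frac{P_1}{P_0} = \frac{B_0 + \Delta B}{B_0}
               = \frac{\$17\,\text{trillion} + \$5\,\text{trillion}}{\$17\,\text{trillion}}
               \approx 1.29
```

A price level roughly 30 percent higher leaves the $22 trillion of nominal debt worth, in real terms, about what the $17 trillion was before.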

Cochrane’s message is that, if the root cause of the current inflation is fiscal profligacy, then reforming the fisc is a necessary step toward monetary reform. Government spending needs to slow while economic growth needs to rise; markets, not government bureaucrats, need to set prices; and monetary and fiscal policymakers need to be guided by rules rather than pure discretion.

Conclusion

To say that “inflation is always and everywhere a monetary phenomenon” is not to say that fiscal policy doesn’t matter. “Fiscal inflation” is indeed “a menace,” as Cochrane and others have argued. Few experts predicted the shift from low inflation before the pandemic to nearly 9 percent CPI inflation today. Policymakers largely ignored the implications of the post-2008 operating system, the close dance between cumulative federal deficits and M2 growth, and the risk of adhering to the Fed’s “lower for longer” recipe for its policy rate in the hope of stimulating asset markets and the economy without causing high inflation. The impact of FAIT on inflation expectations was also underestimated. The goal of FAIT was to make up for periods of low inflation with periods of high inflation, but in doing so the FOMC elevated its maximum employment goal and downplayed its price stability mandate. With inflation now at the highest level in 40 years, there could be pressure for increasing the average inflation target to 3 or 4 percent. To do so would be another step away from long-run price stability.

Also, it is important to remember that persistent inflation depends primarily on a continuing excess supply of money, the likely cause of which is fiscal profligacy—and, in the case of the current inflation, large cash transfers directly to individuals and businesses from the U.S. Treasury. Supply-chain effects, to the extent they are a part of the story, should still be effects only on the price level, not the inflation rate. Thus, they should be transitory, not permanent (see Thornton 2022).

Of course, the same is partially true of the fiscal and monetary stimulus.  As my colleague Jeff Miron noted (in a recent email):

If we don’t repeat the Covid-19 spending burst, and the spending goes back to its prepandemic level relative to GDP; and/or, if the Fed reverses policy, as it has been doing for past few months, then the fiscal and monetary stimulus should also turn out to have been transitory, and so inflation should subside to a significant degree. But, the key difference is that the debt path has been implying future inflation for years now. The recent fiscal outburst was, at one level, small potatoes relative to the existing path. But maybe it reminded everyone that the fiscal iceberg is out there.

The dangers of fiscal QE, debt monetization, fiscal helicopter drops, and fiscal inflation are real. They need to be taken seriously by policymakers and the public. The FOMC’s 2020 Statement on Longer-Run Goals and Monetary Policy Strategy calls on the Fed “to undertake roughly every five years a thorough public review of its monetary policy strategy, tools, and communication practices.” The FOMC also “seeks to explain its monetary policy decisions to the public as clearly as possible.” Yet, in its policy statement, the Committee specifically says that its goal of maximum employment “is not directly measurable” and that shortfalls from it are therefore difficult to observe.  Likewise, there is no specific information regarding the timeframe for achieving an average inflation rate of 2 percent, and no guarantee that the Fed can hit that target.  Nor is there any evaluation of how well the floor system has worked or what alternatives would better promote the Fed’s dual mandate as well as provide for moderate long-run interest rates.

If the Fed really wants to be transparent and keep the public informed about the nature of monetary policy, a broader discussion needs to take place, preferably to include alternatives to the present discretionary government fiat money system—and sooner than 2025.



Monetary Progress

by

(From the entry on “Money,” by Charles Francis Bastable, in the famous 11th edition of the Encyclopedia Britannica):

The very large number of the autonomous cities of Greece, which possessed the right of issuing money, was the cause of the competition between different currencies, each having legal tender power only within its own city. In its practical outcome this “free coinage” system proved beneficial, for it compelled the maintenance of the true standard in order to gain wider circulation. With the establishment of larger states the control over the issue of money grew more stringent. In the later Roman Empire the right of coining was reserved to the emperor exclusively….

A long course of debasement is the characteristic aspect of the [imperial Roman] currency system. “Under the empire,” we are told, “the history of silver coinage is one of melancholy debasement. The most extensive frauds in connexion with money were perpetrated by the Romans.” The gold aureus, which in the time of Augustus was one forty-fifth of a pound, was under Constantine only one seventy-second of a pound. The alloy in the silver coins gradually rose to three-fourths of the weight. Plated coins came into extensive use. The practice of debasement was in accordance with the theories of the jurists, who seem to have regarded money as simply the creature of the state.



Although I’ve devoted many essays here to exploding myths about historical private currencies, there’s one I’ve yet to directly challenge. That’s the belief that such currencies only thrive in the absence of official alternatives. Otherwise, the argument goes, people would drop private currencies like so many hot rocks. Since this opinion assumes that private currencies are inevitably inferior to official ones, I hereby christen it the “ersatz” theory of private currency. Note that  “currency” means circulating or (in today’s digital context) peer-to-peer exchange media: nobody denies that other sorts of private money, such as commercial bank deposits and traveler’s checks, can coexist with official alternatives.

Implicit appeals to the ersatz theory of private currency are as common as muck. Take, for example, this statement by the ECB’s Yves Mersch:

Only an independent central bank with a strong mandate can provide the institutional backing necessary to issue reliable forms of money and rigorously preserve public trust in them. So private currencies have little or no prospect of establishing themselves as viable alternatives to centrally issued money that is accepted as legal tender.

Since governments alone can declare a currency “legal tender,” it’s seldom possible for private currencies to “establish themselves” as such, no matter how good or popular they are. El Salvador’s decision to declare Bitcoin legal tender was a rare exception.  But I take Mersch to mean that, legal tender or not, private currencies simply can’t hope to compete successfully against centrally issued money.

Paul Krugman explicitly appeals to the ersatz theory in observing, in a recent New York Times column, that although “private currencies did indeed circulate and function as mediums of exchange” during the United States “free banking” era, this was so “because there were no better alternatives: greenbacks—dollar notes issued by the U.S. Treasury—didn’t yet exist.” Krugman goes on to say that, because “greenbacks and government-insured bank deposits do exist” today, “stablecoins play almost no role in ordinary business transactions.”

Like Mersch and most others who subscribe to the ersatz theory of private currency, either explicitly or implicitly, Krugman doesn’t seem to consider another possibility, to wit: that private currencies seldom survive, not because the public prefers centrally-supplied, official currencies, but because governments routinely slant currency playing fields in official currencies’ favor, often by banning private alternatives outright.  Let’s call this the “coercive” theory of official currencies. If the ersatz theory is correct, the historical record should show that private currencies died out on their own once official alternatives were available. If, instead, the coercive theory is correct, governments would have had to take further steps to seal private currencies’ fate.

Banking on the State

Before we look into how private currencies die, we should first consider the circumstances that tend to give birth to them. According to the ersatz theory, the key requirement is the lack of official, hence presumably superior, alternatives, as when a central government simply hasn’t gotten around to issuing its own currency. Yet it’s easy to show that private currencies often got going, not because there were no official alternatives, but because official money sucked.

Consider the earliest known paper money—the “Flying cash” of Tang Dynasty China (618 to 907 AD). It was developed by merchants as a substitute for official Chinese copper coins which, besides being incredibly bulky, were often unavailable in adequate quantities.  In Europe as well, though much later, it was the shoddy state of official coins that gave rise to private paper substitutes. As William Stanley Jevons (chap. XVI) explains,

The origin of the European system of bank-notes is to be found in the deposit banks established in Italy from four to seven centuries ago. In those days the circulating medium consisted of a mixture of coins of many denominations, variously clipped or depreciated. In receiving money, the merchant had to weigh and estimate the fineness of each coin, and much trouble, loss of time, and risk of fraud thus arose. It became, therefore, the custom in the mercantile republics of Italy to deposit such money in a bank, where its value was accurately estimated, once for all, and placed to the credit of the depositor.

I can well imagine Krugman, or someone who thinks like him, saying, “Ah, but there was no official paper money people might resort to in these instances.” There wasn’t; but this is missing the larger point, which is that there might never have been had private innovators not come up with and tested the idea. In the past, like today, it was such innovators who came up with new and often superior exchange media. Governments then tended to muscle in, eventually snuffing out their would-be private rivals. Nor was the result always an improvement. Take what happened in China. At the start of the Chin-dynasty, emperor Hsiao-tsung (1163-90) gave China its first nationally-regulated paper money, suppressing private substitutes. Before you could say Jack Robinson, China experienced “the first nation-wide inflation of paper money in world history.”

Green with Envy

Now let’s consider a case in which private currencies had to compete with relatively close, official substitutes. As Krugman, in endorsing the ersatz theory, instances notes issued by state banks during the U.S. “free banking” era (1837-1865), one might suppose that, if any private currencies ever went “gentle into that good night” as soon as official alternatives became available, those issued by state banks must have done so as soon as the U.S. Treasury entered the currency business.

Let us see. The first “greenbacks,” officially called “demand notes” because the U.S. Treasury was supposed to redeem them in specie on demand, were made available in August 1861. By December, the Treasury had reneged on its promise (sound familiar?), effectively placing the nation on a greenback standard, where it remained until 1879. In all, $60 million in demand notes were authorized. In February 1862, these were supplemented by $150 million in new legal-tender greenbacks, $400 million of which would ultimately be authorized. By the end of 1862, there were more greenbacks than state banknotes in circulation. Since greenbacks, besides being legal tender, were free of default risk, if only because the Treasury had already broken its solemn promise to redeem its paper in specie, the public must surely have preferred them to state banks’ riskier products.

Yet, instead of shrinking, the quantity of state banknotes kept growing. From $184 million in December 1861, it had risen to $239 million on the first of January, 1863.  Whatever else greenbacks were doing, they weren’t driving state banks out of the currency business.

National Banknotes

But greenbacks weren’t the only central-government-sponsored currency state bankers had to contend with. In February 1863, President Lincoln signed the National Currency Act, authorizing national banks. The notes of these federally-chartered banks had to be more than fully secured by U.S. Treasury securities. The foremost aim of the new law, and of that last stipulation in particular, was to help fund the Union war effort: the greater the number of national banks, the greater the market for government bonds. But the law’s authors were especially keen on seeing state banks convert to national charters so that the nation could at last say good riddance to those pesky state-authorized banknotes. Because most banks back then depended for their survival on being able to keep notes in circulation, that’s just what would have happened had the public really preferred national currency.

Alas for the Union’s plan, it turned out that the public was rather fond of those old state banknotes—so fond, in fact, that few state banks opted to convert to the new, national charters. In 1860, there were 1,650 state banks, almost all of which issued their own notes. By the end of 1863, only nine, almost all formerly part of the State Bank of Ohio system, had switched to national charters.  A year later, fewer than 200 had done so. As I’ve explained at length elsewhere, this wasn’t a result of any identifiable market failure. Nor can it be attributed to less-stringent state bank regulations: were national banknotes truly preferred, most state banks, and those in rural areas especially, would have had no choice but to either switch to national charters or go out of business, because their profits depended on their circulation.  Although many state banks did close in 1863, according to Matt Jaremski (p. 384), they did so, not because the new national currency was preferred to their own, but because of losses they suffered as a result of the outbreak of the Civil War.  The simple truth is that state banks, and the notes they issued, survived the 1863 Act because the public was happy to go on using their currency. Indeed, between 1863 and 1864, 25 new state banks opened.

This conclusion will undoubtedly seem incredible to those who recall horror stories about antebellum U.S. currency. But while some of those stories are true, they refer only to a relatively small proportion of antebellum banknotes. They also refer mainly to the early years of the so-called “free banking” era. Although the outbreak of the Civil War caused another cluster of Midwestern “free” banks to fail, owing to the depreciation of Western and Southern bonds they were obliged to hold as backing for their notes (ibid., p. 382), elsewhere state bank currencies, though still far from perfect, had improved considerably. Despite the tremendous handicap of barriers to branch banking, banknote discounts had come down to very modest levels; and some banknotes, including those issued by the banks in the Northeast, circulated at par almost everywhere.

Carrots and Sticks

Disappointed by state bankers’ response to the National Currency Act, the government tried again, passing a revised version—the National Bank Act—in June 1864. The new version tried to make national banknotes more appealing by requiring every national bank to receive all national banknotes at par. (It was ultimately this requirement, rather than the notes’ implicit Treasury guarantee, that ruled out discounts.) It also tried to make national charters more appealing to state bankers by relaxing national bank capital and reserve requirements, and by allowing converted state banks to incorporate their original names into their new ones. Thus the “Merchants and Mechanics Bank” of Troy, New York, could become the “Merchants and Mechanics National Bank” of that same city, instead of having to be “the Third National Bank of Troy, New York,” as the earlier law had required. That state banks wanted to keep their old names suggests that they feared losing valuable brand-name capital, which they wouldn’t have done had the words “national bank” alone made up for the loss.

The revised law did lead to more state bank conversions (see the figure reproduced from Jaremski’s article below), with another 245 state banks converting in 1864 alone. But these conversions were mostly of larger state banks in bigger towns and cities, and major financial centers especially, where banks could survive on deposit-taking alone. Rural state banks, in contrast, stuck to their old charters, so that they could go on issuing their own notes, for which there was still a robust demand. In all, over one thousand state banks survived into 1865.

The continued survival of so many state banks of issue, despite the substantial numbers of national banks by then in operation, whose notes might circulate anywhere, ought to have suggested to government officials that state banks and their currencies were meeting needs national banks could not. Instead it persuaded them to start playing hardball with obstinate state bankers. And hardball is what they played by including a prohibitive 10 percent tax on state banknotes in the March 1865 Revenue Act.

A Hollow Victory

The OCC says that, in taxing the notes of state banks, the federal government was “signaling its determination that national banks would triumph and the state banks would fade away.”  But that “fade away” is so much sugar-coating: deprived of their ability to issue currency, state banks dropped like flies. Although they were granted a stay of execution by a law delaying the implementation of the tax until August 1866, by the end of the decade only 250 state banks were left. Nor was their demise a reflection of the public’s belated discovery of their notes’ inferiority. According to Jaremski (p. 386), who has studied the matter closely, the 10 percent tax “seems to have been the only piece of legislation capable of closing state banks” (my emphasis).

And just what sort of “triumph” was this?  It was surely not one for currency users, who had long had the option of refusing state bank notes, but were now deprived of the option of having them. Nor was it a triumph for the communities that lost their former state banks: being unable to muster up the capital needed to establish national banks, many ended up being deprived of banks altogether. The Midwest and the South were especially hard hit, with the latter suffering not only because it had been impoverished by the war, but because the total circulation of national banknotes was limited until 1875, and almost all had been spoken for by the time the Civil War ended. Many, myself among them, believe that the lack of banks was an important cause of the South’s persistent post-bellum underdevelopment.

“But at least the new currency was more reliable!” In some respects, perhaps. And so long as the new currency remained a mere alternative to state banknotes, which it was until that damned tax was passed, it could only have been beneficial. But in other respects, the nationalized paper currency system was deeply flawed. Consequently, by making national banknotes and greenbacks the United States’ only legal paper monies, the government unwittingly helped set the stage for the recurring “currency panics” of the last decades of the 19th century and the first decade of the 20th. Those panics were consequences of the “inelastic” supply of official paper currency—of an absolute limit on greenbacks and the bond-backing requirement for national banknotes. Had state banks not been forced out of the currency business, their notes might have met demands for currency that national banks couldn’t meet, and we might have been spared yet another federal government “triumph” in the field of currency.

The Rest of the Story

I’ve singled out the story of U.S. “national” currency because Krugman refers to it. But it is only one historical instance of many that I might offer contradicting the ersatz theory of private currency, while affirming the coercive alternative. In fact, so far as I’m aware, private paper currencies, including notes issued by ordinary commercial banks without the benefit of official guarantees, have never been driven to extinction by the mere presence of official alternatives. Instead, they’ve always been forced out of existence, by prohibitive taxes, impossibly onerous regulations, or (most often) outright prohibition. This was so in England and Wales. It was so in France and in Italy. It was so in Sweden and Switzerland and Canada and…but it would be tedious to list all the cases I’m aware of. Instead, I challenge my readers to inform me of an exception, that is, a case where some official currency out-competed private rivals, fair and square.

Of course, even if no such exception exists, it may still be true that private digital currency can’t compete successfully, on a level playing field, against centrally-supplied alternatives. But we’ll never know unless governments allow such competition to take place. Perhaps Paul Krugman will welcome such competition. But I doubt it. He may say that private currencies tend to die out naturally. But what he means is that, since they’re bound to die anyway, the government might as well kill them.

The post Paul Krugman and the “Ersatz” Theory of Private Currencies appeared first on Alt-M.


(Although my contributions to this series have so far been more-or-less in their proper order, this one isn’t:  it occurred to me only relatively recently that it would be worthwhile to take stock of the overall progress of the recovery up to the outbreak of the Roosevelt Recession before delving into that episode. Had I done this in the first place, this installment would be Part 10 of the series, with the present Part 10 and all subsequent installments moved up a notch. –Ed.)

When it struck down the Agricultural Adjustment Act in January 1936, the Supreme Court dropped the final curtain on the original New Deal. By then the so-called Second New Deal was itself almost complete. On May 6th, 1935, the Works Progress Administration (WPA) took over the federal government’s work relief programs, considerably expanding their scope. The National Labor Relations (Wagner) Act, aimed at reinforcing the National Industrial Recovery Act’s (NIRA) surviving but feeble collective-bargaining provisions, came two months later. The Social Security Act followed that August. The Housing Act, passed in September 1937, finished the job. Unlike some parts of the First New Deal, none of these later measures had economic recovery as its chief aim.[1]

That fact should be kept in mind as we take a step back from assessing particular parts of the New Deal to take stock of the overall progress of economic recovery up to mid-1937. How much did economic conditions improve? Does that improvement square with our assessment of the shortcomings of the First New Deal, and of the National Recovery Administration (NRA) especially? If the NRA didn’t help, what did?

A Rocky Start

“Between 1933 and 1937,” Christina Romer says at the start of her famous article, “What Ended the Great Depression?,” “real GNP in the United States grew at an average rate of over 8 percent per year.” By her reckoning the unemployment rate also fell substantially, from over 25 percent to under 11. After noting that such gains were “spectacular even for an economy pulling out of a severe recession,” Romer wonders how they can be reconciled with “the conventional wisdom that the U.S. economy remained depressed for all of the 1930s.”

Romer herself offers a partial answer. The collapse of the early 1930s, she says, was “so large that it took many years of unprecedented growth” to make up for it. In fact, the U.S. depression was exceptionally deep, in part because it involved a serious financial crisis, and also because the U.S. took longer than some other countries did to set aside the gold standard.

But a closer look at the record shows that the recovery wasn’t quite so spectacular as the statistics Romer mentions suggest. For one thing, until mid-1935, the recovery was anything but steady. “After the sharp rise in [industrial] production in the spring and early summer of 1933,” yet another painstaking Brookings Institution study informs us,

there was a substantial recession in the autumn. Through 1934 and the first half of 1935 there were periodic advances followed by more or less corresponding declines. It was not until the middle of 1935 that a strong and persistent advance occurred (p. 77).

Thanks to all those ups-and-downs, despite the great gains made during the Roosevelt administration’s first months, by mid-1935, the index had yet to make up half the ground it lost between the start of the depression and its nadir.

That the “sharp increase in production in the spring and early summer of 1933” didn’t last is not so surprising when one recalls the fact, noted in our review of Brookings’ NRA report, that much of it consisted of a speculative boom informed by the understanding that NRA codes would soon be driving up the prices of both finished goods and inputs.

Spread Thin

Nor were employment gains before 1937 as impressive as Romer’s statistics suggest. Her claim that the unemployment rate fell from 25 to 11 percent relies on Michael Darby’s revised unemployment rate estimates which, as I explained in an earlier installment to this series, overstate the extent of the recovery: although it’s true that persons on work relief weren’t literally unemployed, it’s also true that most weren’t able to find employment elsewhere, so that the number of ordinary employment opportunities is a better guide to the progress of recovery. According to the Bureau of Labor Statistics’ traditional unemployment rate numbers (also reported by Darby), which reflect the lack of such opportunities by considering those on work relief “unemployed,” in 1937, over 14 percent of workers still couldn’t find ordinary jobs.

But even those BLS figures exaggerate the extent of the employment gains between 1933 and 1937, and particularly between August 1933 and May 1935. As we saw in reviewing Brookings’ NRA report, the decline in the unemployment rate during that time, however one measures it, was almost entirely due to “work sharing” requirements of the July 1933 “President’s Reemployment Agreement,” aka the NRA “blanket code.” That code called on firms to reduce the length of their employees’ work weeks so that available jobs could be shared by more workers. According to Jason Taylor’s estimates, had it not influenced firms’ total demand for labor hours, work-sharing would have created another 2.47 million jobs. But because having three persons each working just under 35 hours a week was more costly than having two working about 50 hours, the code led to a 9.1 percent decline in total labor hours demanded, leaving space for only 1.34 million more employees.
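
As a rough consistency check on those estimates, one can back out the implied size of the covered workforce, on the simplifying assumption that the 9.1 percent decline in hours applied uniformly to all workers covered by the code (the assumption, and all the variable names, are mine, not Taylor’s):

```python
# Back-of-the-envelope check on Taylor's figures. With a shorter workweek,
# employment equals total hours divided by the new workweek, so a 9.1%
# fall in total hours wipes out 9.1% of post-sharing employment:
#   gain_fixed_hours - gain_actual = hours_decline * post_sharing_employment
gain_fixed_hours = 2.47e6   # jobs added had total hours demanded been unchanged
gain_actual = 1.34e6        # jobs actually added
hours_decline = 0.091       # 9.1 percent fall in total labor hours demanded

post_sharing_employment = (gain_fixed_hours - gain_actual) / hours_decline
baseline_employment = post_sharing_employment - gain_fixed_hours
implied_workweek_cut = 1 - baseline_employment / post_sharing_employment

print(f"covered employment before the code: {baseline_employment/1e6:.1f} million")
print(f"implied cut in the average workweek: {implied_workweek_cut:.0%}")
```

The implied magnitudes, roughly 10 million covered workers and an average workweek cut of about a fifth, are at least of the right order, though Taylor’s own calculation is of course more granular.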

Despite the extra costs it may involve, work sharing can be a reasonable alternative to work relief for keeping people gainfully, if less lucratively, employed during a downturn.  But like work relief, although it creates jobs, it does nothing to improve the private economy’s job-creating capacity, understood to mean its total demand for labor hours. As Taylor’s estimates suggest, it can even lower that capacity somewhat.

According to Bob Higgs (2009, p. 152), it wasn’t until some time in 1935 that total non-relief labor hours actually rose above their March 1933 level. The chart below illustrates Higgs’s point. In it, the green and blue lines are total civilian labor hours worked, where the blue line includes work relief. The red line crudely approximates potential or full-employment labor hours by interpolating between the full-employment peaks of 1929 and 1949. That procedure allows for the fact that, mandatory work sharing aside, the length of the typical work week gradually declined between those dates. It also ignores the tendency of the labor force to shrink during severe recessions, as unemployed workers get discouraged and drop out of the labor force. (That labor hours employed actually rose above their “full employment” value during WWII reflects intense wartime industrial mobilization efforts, which succeeded in getting roughly 6.7 million additional women to join the labor force temporarily.)
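
For the curious, here is a minimal sketch of how such a red line can be drawn, assuming simple straight-line interpolation between the two full-employment peaks; the endpoint index values are placeholders for illustration, not Higgs’s data:

```python
# Straight-line interpolation between the 1929 and 1949 full-employment
# peaks; the endpoint values below are illustrative placeholders only.
h_1929, h_1949 = 100.0, 115.0   # labor-hours index at the two peaks (assumed)
potential = {
    year: h_1929 + (h_1949 - h_1929) * (year - 1929) / (1949 - 1929)
    for year in range(1929, 1950)
}
print(potential[1937])   # interpolated potential labor-hours index for 1937
```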

A “Progressive” Decade

Such details allow us to reconcile statistics suggesting a “spectacular” recovery with the general consensus that the NRA delayed recovery instead of promoting it, especially by showing how the pace of recovery slowed when the NRA codes went into effect and then improved again as those codes ceased to be complied with, as was increasingly the case in the months before the NRA was struck down.

But the fact remains that, in May 1935, industrial production was 50 percent above its March 1933 level, and that it rose steadily after that for another two years. What part did other New Deal policies play in this indisputably impressive progress?

It’s clear that at least some of the progress had nothing to do with New Deal policies at all. This includes the part due to concurrent, technologically-driven gains in potential output. The progress of recovery is correctly measured not by that of industrial output but by the extent to which the gap between it and potential output has shrunk. A depressed economy that’s producing more and more may still be making slow progress, if not altogether failing to recover, if potential output is also growing.

In fact, potential U.S. output grew very rapidly during the 1930s—so rapidly that economic historian Alexander Field wrote an influential article about it titled “The Most Technologically Progressive Decade of the Century.”  “Throughout the Depression,” Field writes (p. 1401),

behind the dramatic backdrop of continued high unemployment, technological and organizational innovations were occurring across the American economy… . [T]he sum total of these changes had, by the onset of World War II, increased the natural or potential output of the U.S. economy far beyond what contemporary observers and economists at the time believed possible.

Field himself puts the average annual growth rate of multifactor productivity during the 1930s at 2.27 percent for the entire private U.S. economy, and at 2.31 percent for the nonfarm economy. Allowing for that last estimate, and also for growth in the full-employment U.S. labor force at an average annual rate of roughly 1.3 percent, reduces the extent of the overall recovery attributable to New Deal policies. But not by much: the red line in the FRED chart below shows how industrial production would have increased from its July 1932 trough had labor force and productivity gains alone been at work. That leaves plenty of recovery to be otherwise accounted for.
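
The arithmetic behind that counterfactual is easy to reproduce. A minimal sketch, assuming both rates were constant and compounded multiplicatively (a simplification of whatever the chart actually plots):

```python
# Compound trend growth of output from the July 1932 trough, using the
# rates quoted above; holding both rates constant is my simplification.
mfp_growth = 0.0231            # Field's nonfarm multifactor productivity rate
labor_force_growth = 0.013     # rough full-employment labor force growth rate

index = 100.0                  # July 1932 trough = 100
for year in range(1933, 1938):
    index *= (1 + mfp_growth) * (1 + labor_force_growth)
print(f"mid-1937 trend counterfactual: {index:.0f} (trough = 100)")
```

Trend forces alone, in other words, account for something like a 20 percent rise from the trough, a far cry from the gains actually recorded.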

Great Expectations?

As we’ve seen, neither fiscal stimulus (excepting that from the 1936 army veterans’ Bonus Bill) nor Fed action can account for much of the ’33-’37 recovery. Of remaining possibilities, two stand out. These are Christina Romer’s own explanation, attributing the recovery to gold inflows from abroad, and an alternative view, first put forward by Peter Temin and Barrie Wigmore, and elaborated upon by  Gauti Eggertsson, stressing the Roosevelt Administration’s abandonment of conservative “policy dogmas.” Since the U.S. monetary gold stock hardly budged before January 1934, Romer’s theory can’t account for whatever gains took place before then, so if either of the two theories accounts for output gains up to that time, it must be Eggertsson’s.

That improved expectations were part of the story behind the economic turnaround, and initial burst of output, that took place immediately after the March bank holiday, can hardly be doubted. Between them, the suspension of gold payments, the administration’s assurances that the banks it was reopening were sound, and the Fed’s all but explicit promise to supply all the paper currency needed to those who still chose to withdraw bank deposits, gave people good reason to believe that the era of deflation had ended.

Apart from taking those initial steps, Eggertsson argues, the Roosevelt administration made it clear that it was determined to go to unusual lengths to get prices back up to pre-depression levels, including jettisoning orthodox views on the proper conduct of fiscal and monetary policy. The resulting, positive change in the public’s inflation expectations, he says, translated into a reduction in forward-looking real (that is, inflation-adjusted) interest rates, which in turn encouraged people to borrow and spend more. By means of a calibration exercise using a modified Dynamic Stochastic General Equilibrium (DSGE) model, Eggertsson concludes that the New Deal’s abandonment of former “policy dogmas” accounted for a whopping 79 percent of the 127 percent increase in output between March 1933 and July 1937.
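
The expectations channel here is just the Fisher relation: the ex ante real rate is the nominal rate less expected inflation. A toy calculation, with numbers of my own choosing rather than Eggertsson’s, shows why the channel packs such a punch when nominal rates are stuck near zero:

```python
# Fisher relation: real rate = nominal rate - expected inflation.
# The numbers below are purely illustrative.
i = 0.005                            # nominal short rate, near its floor
pi_before, pi_after = -0.05, 0.02    # expected deflation, then expected inflation

r_before = i - pi_before             # real rate while deflation is expected
r_after = i - pi_after               # real rate once inflation is expected
print(f"ex ante real rate falls from {r_before:.1%} to {r_after:.1%}")
```

With the nominal rate pinned, the entire seven-point swing in expectations shows up as a lower real cost of borrowing.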

But is that figure plausible? Andrew Jalil and Gisela Rua offer support for it in the form of “narrative” evidence, such as news coverage of “inflation,” suggesting that people did indeed anticipate higher prices after FDR suspended the gold standard on April 19th, 1933, and especially after June 19th, when he revealed that he had no intention of subordinating his domestic monetary goals to that of stabilizing currency exchange rates. But Jalil and Rua, along with Eggertsson himself, and Temin and Wigmore, may exaggerate the importance of the public’s altered expectations somewhat by assigning them full credit for the Spring 1933 investment “boomlet,” ignoring the part anticipated NRA codes played in it. In any event, instead of lasting, the boomlet was followed by a sharp correction that occurred when the blanket code came into effect that August.

Proponents of the regime-change hypothesis also exaggerate the extent to which, after FDR took Hoover’s place, “the focus shifted…from budget balancing to fiscal stimulus” (Temin and Wigmore 1990, p. 490). In fact, as the late Steven Horwitz points out, in the course of a critical appraisal of Eggertsson’s study, and as Julian Zelizer explains in detail, until 1938 the fiscal philosophies of Hoover and FDR were more alike than dissimilar. Both favored a balanced budget under ordinary circumstances, while tolerating deficit spending to provide relief for the unemployed when they couldn’t avoid it by resorting to new taxes. In a passage from his Pulitzer-Prize winning book on the depression to which Horwitz refers (p.79), David Kennedy notes how, once the depression broke out, “Hoover argued strenuously against the budget balancers in his own cabinet,” likening the emergency to a war, and arguing (as Henry Stimson, his secretary of state, noted in his diary) that “in war times no one dreamed of balancing the budget. Fortunately we can borrow.”

In fact, despite somewhat larger deficits, particularly in 1934 and 1936, the Roosevelt administration’s fiscal policy was never expansionary enough to cause any substantial increase in either the price level or employment. “It would be…hard,” Journal of Economic Perspectives editor Tim Taylor says, “to find an economic historian to argue that the primary reason for the drop in unemployment rates from 1933 to 1937 was a surge of expansionary fiscal policy.”[2]

Finally, mere hopes could not boost inflation expectations forever: at some point, those expectations would either have to be fulfilled, or would give way once again to more pessimistic ones. By December 1933, the CPI wasn’t even 5 percent higher than it had been in March. Wholesale prices rose a lot more; but Christina Romer (1999) argues that they did so mainly owing to the NRA codes which, besides directly raising the prices of many commodities, also made them unresponsive to the depression itself, which would otherwise have tended to pull them down. “By setting minimum wages and encouraging firms to base price changes on observable changes in costs,” Romer says, the NRA “prevented the large negative deviations of output from trend in the early recovery period from depressing prices.”[3]

Jalil and Rua themselves conclude that by the end of 1933 the public had come to doubt the administration’s commitment to inflationary policies. It had also become clear by then that, despite record deficits, the New Dealers still didn’t see deficit spending as a means for boosting prices or otherwise stimulating recovery, remaining fiscally orthodox in this regard. And while the gold standard’s suspension, and the dollar’s consequent depreciation, during 1933 gave the public reason to look forward to higher prices, the dollar’s official devaluation in January 1934 suggested that yet another old “policy dogma” was far from dead and buried.

In her paper “Why Did Prices Rise in the 1930s?,” Christina Romer (1999) notes that any direct effect of the dollar’s formal devaluation on inflation should have been short-lived and limited. She nevertheless goes on to carefully test the hypothesis that devaluation led to a substantial and persistent effect on inflationary expectations, only to conclude that the change did not itself lead people to anticipate ongoing inflation throughout the rest of the decade.

QE Avant la Lettre

While the 1934 devaluation didn’t directly raise inflation expectations, it did play a part in provoking the gold inflows that Romer elsewhere identifies as the main cause of recovery. By so doing, it would ultimately also help to supply the American public with a new reason to anticipate inflation—albeit one for which unorthodox New Deal policies deserve relatively little credit. And despite its distinct cause, this change in inflation expectations plays a part in Romer’s account similar to the one it plays in Eggertsson’s. “Nominal interest rates were already so low,” she says (p. 775), that “the main way that the monetary expansion could stimulate the economy was by generating expectations of inflation and thus causing a reduction in real interest rates.”

Several studies—Bernanke, Reinhart, and Sack (2004); Anderson (2010); Jaremski and Mathy (2017)—have compared the gold-based expansion of the U.S. stock of high-powered money in the 1930s, and the change in inflation expectations it inspired, to modern central banks’ resort to “Quantitative Easing,” or QE. But changed inflation expectations are only one mechanism by which QE, and its gold-inflow equivalent, might boost output. Another is the so-called “portfolio balance” effect. By purchasing longer-term securities, central banks reduce the supply of longer-duration assets, reducing the term premium, and therefore the yield, on other long-term as well as medium-term securities. Something like that seems to have occurred in the ’30s as well, when Europeans’ “hot money” was invested in long-term Treasury securities both directly and indirectly through deposits to U.S. banks. According to Christopher Hanes (2019), “trend high-powered money growth” during the Thirties, itself driven entirely by gold inflows, “was, in fact, accompanied by substantial declines in medium- and long-term nominal yields.”

The Golden Avalanche

Hanes concludes from the evidence he reports that, like their contemporary counterparts,  “American monetary authorities” in the 1930s “follow[ed] policies that increased high-powered money.” But that’s misleading: for the most part, the gold that flowed into the United States after 1934 did so for reasons unconnected to U.S. monetary policy. And while it’s true that American authorities initially tolerated that inflow and its consequences, they did so with growing reluctance, and not for long.

In the late 1920s, when economists spoke of “the gold problem,” they had in mind the shortage of gold stemming from the combination of inflated money stocks, diminished gold output, and—most of all—the maldistribution of existing monetary gold caused by its accumulation in the vaults of the Bank of France and the Federal Reserve System.[4] It has also been generally understood since then that both that gold shortage itself and the way various governments tried to deal with it set the stage for the world depression. So it may come as a surprise to many to learn that, by 1936, instead of complaining that there wasn’t enough gold, U.S. economists and policymakers were worried that there was too much.

Here’s a chart showing the size of the U.S. monetary gold stock, in dollars, revealing just how rapidly it grew after 1934, after not growing at all for many years. The jump between January and February 1934 reflects the dollar’s official devaluation. Bear in mind that the proceeds from that jump went to the Treasury, rather than to either the Federal Reserve or U.S. citizens whose gold the government had confiscated, so only the subsequent increase was capable of encouraging growth in broader measures of the U.S. money stock.

Several developments accounted for the large-scale gold inflows that reversed the nature of the gold problem. The best known of these was the growing fear, after Hitler became Germany’s chancellor,  that war would once again break out in Europe. Besides causing Europeans to shift their savings to the United States, either by depositing them in U.S. banks or by buying American securities ($1.2 billion of which were sold to them between 1934 and 1939), that fear increased the European demand for armaments—a demand that American arms makers were only too happy to help satisfy.

The dollar’s devaluation also contributed to the gold inflow, but not because it did much to improve the balance of trade in the United States’ favor: as Bernanke, Reinhart, and Sack observe (p. 319n33), although it “improved the competitiveness of U.S. exports and raised the prices of imports…in an economy that was by this time largely closed, the direct effects of devaluation seem unlikely to have been large enough to account for the sharp turnaround.” More importantly, because the dollar’s devaluation followed, and was in turn followed by, devaluations elsewhere, U.S. goods did not end up conspicuously cheaper than foreign goods.[5] Instead, the devaluation mainly mattered because, together with those other devaluations, it spurred gold production by raising the precious metal’s relative price. The resulting rapid increase is apparent in the following chart, reproduced from Wikipedia:

So it wasn’t only gold already in existence before the depression, but also a large share of the dramatic increase in output since then, that found its way to the U.S. Treasury, which bought it from the New York banks acting on foreign customers’ behalf, and paid for it with gold certificates. The Treasury’s gold holdings piled up at either the Philadelphia Mint or the New York Assay Office until 1937, when they were transferred to its new Bullion Depository at Fort Knox.

From Russia, without Love

Most newly-mined gold came from mines in the British Empire, and especially from those of South Africa, which until the mid-1920s accounted for roughly half of world gold output. In 1922, for example, they yielded about 200 metric tons of gold. The Soviet Union’s Siberian mines, in contrast, produced just 6.2 tons of gold that year. But starting around that time, the Soviet government began investing heavily in and centralizing those mines; and as the table below (reproduced from an article by Fritz Lehman) shows, their output grew steadily, to 25 tons by 1929, to 68 tons by 1932, and to 185 tons, or almost half of South Africa’s impressive output, by 1936.[6]
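
To put those tonnage figures in perspective, the implied pace of expansion is easily computed (the arithmetic is mine; the tonnages are those just cited):

```python
# Implied average annual growth of Soviet gold output, 1922-1936,
# from the tonnage figures quoted in the text.
tons_1922, tons_1936 = 6.2, 185
years = 1936 - 1922
cagr = (tons_1936 / tons_1922) ** (1 / years) - 1
print(f"about {cagr:.0%} a year, sustained for {years} years")
```

That works out to growth of roughly 27 percent a year for a decade and a half, an extraordinary clip for a mining industry.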

Remarking on the vast outpouring of subsidized Russian gold in September 1936, when authorities at the Bank for International Settlements and elsewhere were predicting further, substantial increases by the end of the decade, Keynes observed—in what might have been a reference to the portfolio-balance argument for QE—that “in the long run” it was “bound to exert a great influence in keeping rates of interest down,” and that its consequent influence on prices was “likely to be steadily upwards.” Russian gold thus removed “the most important obstacle in the way of cheap money.” The means were therefore at hand for ending the world depression, and doing it without resorting to radical policies.

The importance of large supplies of gold now in sight lies in the fact that they may make possible by more or less orthodox methods adjustments, highly desirable in themselves, which we should be less likely to secure by other means. The muse of History is ironically disposed. Communist efficiency [sic] in the extraction of gold may serve to sustain yet awhile the capitalist system.

Were Keynes referring only to the United States, he might have added Nazi bellicosity to Communist “efficiency.”  Paradoxically, between them, Stalin and Hitler were now doing more to hasten the U.S. recovery than the American president himself.

Yet, in what is surely, in retrospect at least, one of the most bizarre plot twists in the story of the Great Depression, instead of welcoming all that foreign gold, U.S. authorities came to dread it. What’s more, they came to dread it for precisely the reason that Keynes saw it as a godsend, and that both Christina Romer and Gauti Eggertsson might reckon it so today, namely, because it seemed likely to boost spending and raise prices.

As both the monetary gold stock and bank reserves accumulated—the reserves of Fed member banks more than doubled during the two years following the dollar’s devaluation—U.S. authorities became increasingly concerned that, if conditions improved, either here or abroad, all those bank reserves could prove embarrassing. Europeans might suddenly want their expatriated savings back, upsetting U.S. credit markets. Alternatively, banks might attempt to shed their excess reserves, causing both the money stock and the price level to rise sharply.  Because the CPI was still 20 percent below its 1929 level at the end of 1936, that second concern seems odd in retrospect. But many feared that bank lending would revive so quickly that, instead of merely serving to revive business, it would fuel an unsustainable boom.

So it happened that, starting in August 1936, first the Fed and then the U.S. Treasury took steps to limit the risk of inflation from either accumulated reserves or further gold inflows. Thanks to this reaction to them, the same gold flows that helped revive the U.S. economy between 1934 and 1937 helped bring about the disastrous reversal that was to erase most of those gains. The muse of History is, indeed, ironically disposed.

____________________

[1] Some authorities label Roosevelt’s post-1936 domestic policies and reforms, and particularly those that followed the 1937 recession, a “Third New Deal,” distinguished from the others by its abandonment of more radical reform efforts in favor of “Keynesian” fiscal policy. On this change in emphasis see Jeffries (1990) and the present series’ three installments concerning “The Keynesian Myth.”

[2] In defense of their claim that Hoover was more fiscally orthodox than Roosevelt, Temin and Wigmore (1990, p. 487) observe that Hoover “sponsored a massive tax increase in late 1931 in an effort to recoup the precipitous decline in federal tax revenues. The maximum personal income tax rate rose from 25 to 63%. Corporate income taxes rose, estate taxes were doubled, and gift taxes were reintroduced. Hoover’s opposition to the veterans’ bonus reveals the depth of his opposition to expansionary policies; the bonus was handed to him with no political risk and a rationale that allowed him to maintain its ideological purity. Hoover still declined this offer.” They neglect to mention that FDR retained Hoover’s 63 percent marginal tax rate through fiscal 1936, when he raised it to 79 percent, where it remained through the end of the decade; that FDR also raised corporate tax rates considerably; and that he raised the maximum estate tax rate from 45 percent to 60 percent in 1934, and to 70 percent in 1935. Nor do they point out that Roosevelt was no less opposed to early payment of the veterans’ bonus than Hoover and every other president since Wilson had been, and that he opposed it on deficit-reduction grounds. Roosevelt vetoed bonus bills in both 1935 and 1936, going to the length of addressing Congress to defend the first veto. The exceptionally large 1936 budget deficit came about only because Congress succeeded in overriding the second.

[3] In a paper that’s meant to complement his 2008 study, Eggertsson (2012) claims, controversially, that instead of inspiring a temporary boom only, and otherwise hampering recovery, the NRA codes actually aided recovery, and did so in a manner analogous to that of anticipated monetary or fiscal stimulus. By raising inflation expectations, he says, the codes could also have lowered anticipated real interest rates despite a binding zero lower bound on nominal rates. Although Eggertsson recognizes the obvious difference between expansionary monetary or fiscal policies, which raise prices by directly boosting aggregate demand, and the NRA codes, which were more like an adverse supply shock, he neglects the possible effect of the shock on what he calls the “efficient” (real) interest rate, which he treats as an exogenous variable dependent only on consumers’ intertemporal preferences. If, instead, one allows it to depend on anticipated industrial productivity, while also allowing that the NRA codes were expected to have a lasting adverse effect on that productivity (which would not have been an unreasonable assumption), the codes would have reduced the efficient rate. In that case, despite higher expected inflation, the zero lower bound might have stayed just as binding as before. For skeptical views, appealing to both empirical evidence and theory, of the New Keynesian argument that adverse supply shocks can boost output, see Cohen-Setton, Hausman, and Wieland (2017) and Wieland (2019).

[4] See the interim (1930) and final (1932) reports of the Gold Delegation of the League of Nations’ Financial Committee, and especially the dissenting observations in the latter report by M. Albert Janssen, Sir Reginald Mant, and Sir Henry Strakosch (pp. 64-73) and Gustave Cassel (pp. 74-75), all of whom regarded the hoarding of gold by France and the United States, rather than any overall gold shortage, as the chief cause of trouble. See also Patricia Clavin and Jens-Wilhelm Wessels’ fascinating discussion of the Gold Delegation’s controversial origins and proceedings, emphasizing how its inquiry into the gold “shortage” had been intended by British authorities as a politically safe way to draw attention to the threat U.S. and French policies posed to sterling’s stability.

[5] As Frank D. Graham explained in 1935, “With so many countries devaluing in almost equal degree, at a time when deflation rather than inflation had been in progress, there has been a strong tendency for the price structure in the devaluing countries to become the dominating factor in the world situation and thus to shift the burden of adjustment upon the countries which were maintaining the original gold content of their currencies. The devaluation of currencies in terms of gold did not result in a devaluation of the currencies in terms of commodities but rather raised the general commodity value of gold. Instead of the price structure in gold standard countries serving as the lode-star round which other price structures turned, the center of gravity was shifted to the modal bloc of currencies of approximately equal devaluation. Prices in these currencies are today practically the same as they were in September 1931, while in the countries of the gold bloc there has been a fall of from 15 to 20 per cent.”

[6] A comparison of Lehman’s table with one found in the 1932 report of the League of Nations’ Gold Delegation, showing estimates of future gold production supplied to it, reveals just how far reality ended up veering from those estimates:

 

Continue Reading The New Deal and Recovery:

Intro
Part 1: The Record
Part 2: Inventing the New Deal
Part 3: The Fiscal Stimulus Myth
Part 4: FDR’s Fed
Part 5: The Banking Crises
Part 6: The National Banking Holiday
Part 7: FDR and Gold
Part 8: The NRA
Part 8 (Supplement): The Brookings Report
Part 9: The AAA
Part 10: The Roosevelt Recession
Part 11: The Roosevelt Recession, Continued
Part 12: Fear Itself
Part 13: Fear Itself, Continued
Part 14: Fear Itself, Concluded
Part 15: The Keynesian Myth
Part 16: The Keynesian Myth, Continued
Part 17: The Keynesian Myth, Concluded
Part 18: The Recovery, So Far

The post The New Deal and Recovery, Part 18: The Recovery So Far appeared first on Alt-M.

“The fateful errors of popular monetary doctrines which have led astray the monetary policies of almost all governments would hardly have come into existence if many economists had not themselves committed blunders in dealing with monetary issues and did not stubbornly cling to them.”
—Ludwig von Mises, Human Action.

I was chatting on the phone last week with Peter Coy, who was working on an article about money for The New York Times Magazine, when he mentioned the old, three-pronged textbook definition of money: you know, the one that says money is a medium of exchange, a store of value, and a unit of account. It’s the first thing most econ students learn about money. For many, I suspect, it’s all they remember.

Which is a shame, because it’s wrong.

In this post, I explain why it’s wrong. I trace the mistaken definition to past economists’ careless reading of that definition’s locus classicus, in William Stanley Jevons’s great work, Money and the Mechanism of Exchange. I next show how Jevons’s actual understanding of the meaning of “money” was shared by Carl Menger and later Austrian-school economists. I conclude with a plea for dispensing, once and for all, with the three-part textbook definition of “money,” in favor of the definition Jevons favored all along.

Money’s “Three Functions”

In his Principles of Economics textbook, Ed Dolan writes that “Money is an asset that serves as a means of payment, a store of purchasing power, and a unit of account.”  Greg Mankiw likewise says, in his Principles of Macroeconomics text, that “Money has three functions in the economy: It is a medium of exchange, a unit of account, and a store of value. These three functions together distinguish money from other assets in the economy.” I might supply any number of similar examples of this conventional way of defining money.

Nor does the tripartite definition of money only occur in textbooks. According to a St. Louis Fed publication, although “Money has taken different forms through the ages,” all of them “share the three functions of money”:

First: Money is a store of value. If I work today and earn 25 dollars, I can hold on to the money before I spend it because it will hold its value until tomorrow, next week, or even next year. In fact, holding money is a more effective way of storing value than holding other items of value such as corn, which might rot. …
Second: Money is a unit of account. You can think of money as a yardstick—the device we use to measure value in economic transactions….
Third: Money is a medium of exchange. This means that money is widely accepted as a method of payment.

Money Isn’t “A Store of Value.”

What’s wrong with the standard definition? The trouble is that it often happens, even in indisputably “monetary” economies, that no single good or asset performs all three functions that, according to it, “money” is supposed to perform. In all such instances, the conventional definition begs the question: when nothing performs all three functions, how can “money” possibly exist?  If it does exist, it must be the case that not all of the three supposed “functions” of money are actually functions money must perform, let alone perform well.

Take the store of value function. Of course something that is of no value at all, or of very ephemeral value only, is unlikely to serve any of the three supposed monetary functions; and many things that have served as money in the past were also reasonably good stores of value. For this reason it’s not hard to understand the temptation to assume that, whatever other functions it might perform, money must serve as a store of value.

But while it’s true that a spectacularly bad store of value—like ice cream cones in summertime—isn’t likely to ever serve either of the remaining two supposed functions of money, it’s quite common for stuff that everyone considers “money” to be a mediocre if not a poor store of value. Fiat monies, for example, always tend to depreciate; and it’s notorious that they sometimes lose value quite rapidly. Yet even in extreme cases of hyperinflation, such fiat currencies continue to be regarded as “money,” and continue to serve as both media of account and generally accepted exchange media. (It is, after all, only when prices are expressed in terms of some fiat unit, where the number of representatives of the unit being spent in any given period is rising rapidly, that hyperinflation can take place.) To insist that money must serve as a “store of value” in such cases begs the question: in what meaningful sense could Papiermarks be said to have served as a “store of value” in Germany during the autumn of 1923? And if they were an abysmal store of value, weren’t they money nonetheless?

If something can be money despite being an abysmal store of value, the opposite is also true: something can be an exceptional store of value without being, or ever becoming, money. To his credit, in the original (1948) edition of his famous textbook, Paul Samuelson assigns money only two functions: it is, he says, a medium of exchange and unit of account. Although he recognizes that “a man may choose to hold part of his wealth in the form of cash,” Samuelson notes that “in normal times a man can earn a return on his savings if he puts them into a savings account or invests them in a bond or stock. Thus it is not normal for money to serve as a ‘store of value’.”

So Samuelson improved upon the conventional three-part definition of money. Yet he still assigned “money” one function too many.

Nor is It a Unit of Account

While it’s relatively easy to point to things serving as both generally accepted exchange media and media of account that were crummy stores of value, it’s not so easy to find instances in which an economy’s unit of account didn’t consist of a standard unit of its preferred medium of exchange. There’s a perfectly good reason for this: it just makes good sense for people to price things, and keep accounts, in units based upon the stuff they prefer to receive, or insist upon receiving, in payments.

Still, it sometimes does happen that an economy’s unit of account is not based upon, or is “separated from,” its most popular exchange media; and in such cases we are again compelled to ask, which thing is “money”?  Is it the unit of account, or whatever stuff defines it, or is it the medium of exchange? The answer to this question takes us one more step closer to answering the question, “What is ‘money,’ really?”

High inflations once again come to the rescue here, for these often lead to the separation of accounting units from prevailing exchange media. Consider the case of Brazil in 1992. In that year alone prices expressed in Brazil’s official currency unit, the cruzeiro, rose more than tenfold. But rather than express prices in cruzeiros, which would have meant changing them daily, if not more than once a day, hotels, restaurants, and many other businesses switched to posting prices in dollars. Many also kept accounts in dollars. Cruzeiros nevertheless remained Brazil’s most widely used medium of exchange. So which was Brazil’s “money”—dollars or cruzeiros? And, if it was dollars, what exactly were cruzeiros?
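
Some quick arithmetic, assuming a steady compound rate, shows what a tenfold annual price rise means at the frequencies that matter for posting prices:

```python
# What a tenfold annual rise in the price level implies at daily and
# monthly frequencies, assuming a constant compound rate of inflation.
daily = 10 ** (1 / 365) - 1
monthly = 10 ** (1 / 12) - 1
print(f"roughly {daily:.2%} a day, or {monthly:.0%} a month")
```

At two-thirds of a percent a day, a posted cruzeiro price would be visibly stale within a week; hence the resort to dollar pricing.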

The last question is meant to be rhetorical: cruzeiros no longer supplied Brazil with a useful unit of account; and they were certainly nobody’s idea of a decent store of value. Yet they were still that nation’s most widely-accepted medium of exchange; and few doubt that they, not dollars, were therefore Brazil’s “money.”

Or consider another case: the British pound sterling. Long before Great Britain ever had such a thing as a pound coin, the pound sterling had served as its principal accounting unit. On the other hand, the gold “guinea,” which for most of its existence was worth 21 shillings, or one pound and one shilling, was an actual coin that circulated, along with fractional counterparts, in Great Britain between 1663 and 1814 (when it gave way to sovereigns). Yet it saw only very limited use—in contracts between “gentlemen”—as an accounting unit. Still, who doubts that guineas were British “money”?

Going back still further, to medieval times, we find still more compelling grounds for rejecting the “unit of account” definition of money, for the motley condition of coins in those days caused merchants to resort to accounting units that had no actual coin counterparts. In some cases these units were based on what the late John Munro called “ghost” monies—past coins that no longer circulated. It should be obvious that such “ghost” monies could not be actual money: there was no longer any countable stuff to which they referred. They were “pure” accounting units, and as such completely separated from any actual exchange media. The medieval situation therefore supplies an especially clear case in which the term “money” might refer either to the actual exchange media in use, or to the media upon which prevailing accounting units were based, but could not refer to anything that was both. So, which was it?

Once upon a time, few economists would have hesitated to say that “money” meant the coins actually in use, not the “ghost” coins no longer extant.  Many today will think so as well. But if some aren’t so sure, you can blame it on some past economists’ careless reading of William Stanley Jevons’s great work.

What Jevons Really Said

Chapter III of William Stanley Jevons’s Money and the Mechanism of Exchange (1875) is generally understood to be the locus classicus of the treatment of “money” as something that serves several distinct functions. In fact, Jevons refers not just to three but to four functions of money: the three now referred to in most textbooks, plus a fourth “standard of deferred payments” function.

By 1919, Jevons’s treatment had already become so popular that it was summed up in a then-popular couplet:

Money’s a matter of functions four,
A Medium, a Measure, a Standard, a Store.

Eventually, “Measure” (of value) and “Standard” (of deferred payments) were combined into “Unit” (of account),  giving rise to the now-standard three-function definition, though one still sees occasional references to money’s four functions.

But while the various functions of money Jevons identified have given rise to today’s conventional wisdom, his appraisal of each function’s significance has been all but forgotten. A close look at that appraisal reveals that Jevons actually considered only one of money’s functions essential, hence definitive.

In fact, Jevons makes clear from the outset of his discussion that he considers only two of money’s four functions to be of “high importance.” “We have seen,” he writes,

that three inconveniences attach to the practice of simple barter, namely, the improbability of coincidence between persons wanting and persons possessing; the complexity of exchanges, which are not made in terms of one single substance; and the need of some means of dividing and distributing valuable articles. Money remedies these inconveniences, and thereby performs two distinct functions of high importance, acting as—

(1) A medium of exchange.
(2) A common measure of value.
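
The second of those inconveniences, incidentally, is easy to quantify. The standard textbook arithmetic, with an arbitrary hundred goods, runs as follows:

```python
# With n goods and no common measure of value, every pair of goods needs
# its own exchange ratio; with one good serving as the measure, n - 1
# money prices suffice.
n = 100
barter_ratios = n * (n - 1) // 2   # 4,950 pairwise ratios
money_prices = n - 1               # 99 money prices
print(barter_ratios, money_prices)
```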

Money’s remaining two functions are for Jevons of secondary importance only. The “standard of value” function, he says, develops only as an offshoot of money’s other roles. As for the “store of value” function, although a country’s money may be a useful means for storing and conveying value, “diamonds and other precious stones, and articles of exceptional beauty and rarity,” might serve the same purpose. Jevons also recognizes the link between non-monetary stores of value and early monies:

The use of esteemed articles as a store or medium for conveying value may in some cases precede their employment as currency. Mr. Gladstone states that in the Homeric poems gold is mentioned as being hoarded and treasured up, and as being occasionally used in the payment of services, before it became the common measure of value, oxen being then used for the latter purpose. Historically speaking, such a generally esteemed substance as gold seems to have served, firstly, as a commodity valuable for ornamental purposes; secondly, as stored wealth; thirdly, as a medium of exchange; and, lastly, as a measure of value.

Finally, in a subsection specifically addressing the “separation of [monetary] functions,” Jevons explicitly recognizes the inadequacy of any definition of money that insists on its serving all four of the functions he names. It is, he says, only because people are “accustomed to use the one same substance in all the four different ways” that they

come to regard as almost necessary that union of functions which is, at the most, a matter of convenience, and may not always be desirable. We might certainly employ one substance as a medium of exchange, a second as a measure of value, a third as a standard of value, and a fourth as a store of value. In buying and selling we might transfer portions of gold; in expressing and calculating prices we might speak in terms of silver; when we wanted to make long leases we might define the rent in terms of wheat, and when we wished to carry our riches away we might condense it into the form of precious stones.

But didn’t Jevons nevertheless say that money has not one but two functions of “high importance”? He did. But if we look at how the paragraph in which he says this continues, we find that only one of those two important functions is really important—that is, important enough to be essential or decisive.  “In its first form,” Jevons says,

money is simply any commodity esteemed by all persons, any article of food, clothing, or ornament which any person will readily receive, and which, therefore, every person desires to have by him in greater or less quantity, in order that he may have the means of procuring necessaries of life at any time. Although many commodities may be capable of performing this function of a medium more or less perfectly, some one article will usually be selected, as money par excellence, by custom or the force of circumstances.

In other words, money is, first and foremost, a generally recognized medium of exchange. The use of a standard money unit as a common measure of value, though it, too, is ultimately of “high importance” in the sense that it further helps to simplify and expedite exchange, is yet another offshoot of its one, fundamental role. Whatever first comes to serve as a generally accepted medium of exchange

will then begin to be used as a measure of value. Being accustomed to exchange things frequently for sums of money, people learn the value of other articles in terms of money, so that all exchanges will most readily be calculated and adjusted by comparison of the money values of the things exchanged.

It follows that, in those relatively rare instances in which the two functions commonly performed by the same stuff are instead performed by different things, the stuff that is generally accepted in exchange alone qualifies as “money.”

That the man who coined the expression “double coincidence of wants,” and who first represented money as something capable of making up for the lack of such “double coincidences” in barter economies, should have given pride of place to money’s medium of exchange function, should not surprise us. But Jevons was hardly unique in this regard. The same view was held by other outstanding monetary theorists of the late 19th and 20th centuries, including Carl Menger.

Menger on Money’s Functions

Anyone familiar with Menger’s famous theory of the evolution of money will know that he identifies it with the most readily accepted or “saleable” of an economy’s goods or assets. Like Jevons, Menger recognizes that money typically performs other functions, but these he regards as secondary.  Thus, when Menger observes, on p. 276 of his Principles of Economics, that “Under conditions of developed trade, the only commodity in which all others can be evaluated without roundabout procedures is money,” he isn’t defining money as a medium of account: he’s merely observing, as Jevons does, that an established money will also tend to be an economy’s most convenient medium of account. Menger recognizes, furthermore, that

this outcome is not a necessary consequence of the money character of a commodity. One can very easily imagine cases in which a commodity that does not have money character nevertheless serves as the “measure of price” … The function of serving as a measure of price is therefore not necessarily an attribute of commodities that have attained money character. And if it is not a necessary consequence of the fact that a commodity has become money, it is still less a prerequisite or cause of a commodity becoming money.

As if anticipating the present critique, Menger goes on to note how “[s]everal economists have fused the concept of money and the concept of a ‘measure of value’ together, and have involved themselves, as a result, in a misconception of the true nature of money.”

Menger disposes of the “store of value” view of money in much the same fashion:

The same factors that are responsible for the fact that money is the only commodity in terms of which valuations are usually made are responsible also for the fact that money is the most appropriate medium for accumulating that portion of a person’s wealth by means of which he intends to acquire other goods (consumption goods or means of production). …But the notion that attributes to money as such the function of also transferring “values” from the present into the future must be designated as erroneous. Although metallic money, because of its durability and low cost of preservation, is doubtless suitable for this purpose also, it is nevertheless clear that other commodities are still better suited for it. Indeed, experience teaches that wherever less easily preserved goods rather than the precious metals have attained money-character, they ordinarily serve for purposes of circulation, but not for the preservation of “values.”

Later Austrian economists were if anything even more emphatic on these points than Menger. According to Ludwig von Mises, “Money is the thing which serves as the generally accepted and commonly used medium of exchange. This is its only function. All the other functions which people ascribe to money are merely particular aspects of its primary and sole function, that of a medium of exchange.” Murray Rothbard likewise observes that, although “Many textbooks say that money has several functions…it should be clear that all of these functions are simply corollaries of the one great function: the medium of exchange.”

Money is a Generally Accepted Medium of Exchange

So, can we please junk the stupid three-function definition of money? So what if textbook writers keep repeating it? It’s incoherent. It’s based on some earlier economists’ sloppy reading of Jevons’s classic treatment. It encourages people to say silly things. In short, it’s good for nothing but confusion and mischief.

Yet there’s a perfectly sensible alternative definition—the one that heads this section. It’s been endorsed by many of the greatest monetary economists of all time, including the one who is wrongly understood to have given us the silly three-part alternative. It avoids all the shortcomings of the three-part definition. And it’s easier to remember.

Show me someone who doesn’t find these arguments persuasive, and I’ll show you someone who badly wants to call something “money” that isn’t.

The post A Three-Pronged Blunder, or, what Money is, and what it isn’t appeared first on Alt-M.
