
Romina Boccia

Passing the so-called Social Security Fairness Act sends a clear message about how Washington approaches Social Security reform—and it’s a disturbing one. Congress and President Biden have chosen to ignore all expert advice, cater to organized special interest groups, and burden younger taxpayers with increasingly unaffordable costs.

This decision foreshadows likely future actions as Social Security’s trust-fund-related borrowing authority will run out around 2032. Instead of sensible policy reforms that better align Social Security benefits with the ability of workers to pay for them, Congress will want to take the path of least resistance. Without significant public pressure to do the right thing, expect a multi-trillion-dollar general revenue transfer (meaning added borrowing) come trust fund depletion, and perhaps superficial fixes like the federal government borrowing money today to ‘invest’ to generate revenue from speculative gains tomorrow.

We need to send a clear message to Washington: stop the superficial quick fixes and solve the real entitlement spending challenge.

Popularity Trumps Responsibility

Public choice theory helps explain why Congress prioritizes the demands of well-organized interest groups over the recommendations of policy experts. Politicians are incentivized to back proposals that offer visible and immediate benefits to vocal constituencies, even if they impose long-term costs on the broader public. The Social Security Fairness Act’s supporters—public-sector workers and their well-funded unions—pushed hard to fix what they claimed were unfair benefit cuts. Their voices were loud, their cause easy to understand, and their political influence significant. This is a winning combination in politics, even if the changes create unfair outcomes or worsen federal finances.

The Social Security Fairness Act widens the program’s financing gap yet further. Funding this policy with additional payroll taxes would burden 180 million workers with an additional $68 in annual taxes to fund higher benefits for 3 million public-sector workers and their spouses by unfairly manipulating the Social Security benefit formula to their advantage. This is a textbook example of Mancur Olson’s theory of collective action, in which small, concentrated groups secure disproportionate benefits at the expense of a diffuse majority.
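
A back-of-the-envelope calculation makes Olson’s point concrete. The sketch below is a rough illustration; the per-worker tax and beneficiary counts are the figures cited above:

```python
# Concentrated benefits vs. diffuse costs, using the figures cited above.
workers = 180_000_000        # payroll-tax payers bearing the cost
extra_tax_per_worker = 68    # additional annual payroll tax per worker, USD
beneficiaries = 3_000_000    # public-sector workers and spouses gaining benefits

total_annual_cost = workers * extra_tax_per_worker
gain_per_beneficiary = total_annual_cost / beneficiaries

print(f"Total annual cost: ${total_annual_cost / 1e9:.1f} billion")
print(f"Implied average annual gain per beneficiary: ${gain_per_beneficiary:,.0f}")
# -> roughly $12.2 billion per year, spread thinly across 180 million workers
#    but concentrated at about $4,080 per beneficiary
```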

The repeal of the Windfall Elimination Provision and Government Pension Offset creates outsized benefits for workers who had significant earnings that were exempt from payroll taxes compared to those who paid Social Security taxes over their entire careers. For example, economist Larry Kotlikoff highlights a schoolteacher whose lifetime benefits will soar by $830,625 (!) under this law, with her annual retirement benefit more than doubling and her widow’s benefit nearly tripling. Similarly, actuary Elizabeth Bauer calculates that public-sector workers with only brief private-sector employment will receive benefits that are 45 percent higher than those with identical earnings histories who paid into Social Security throughout their careers.

Congressional Republicans’ support for this expensive change likely stems from a political calculation. For a long time, backing the bill seemed like a low-cost way to curry favor with police and firefighter unions, key constituencies in many members’ voter bases, without serious worry that the bill would pass. It took 24 years from the first introduction of a version of the Social Security Fairness Act in 2001 (with a congressional hearing held in 2003) to its signing by President Biden on January 5, 2025. The Wall Street Journal suggests the timing—a post-election passage—points to a political payoff for groups like the International Association of Fire Fighters, which lobbied heavily for the measure and declined to endorse Kamala Harris for president (after endorsing Joe Biden in 2020).

It didn’t help that the windfall elimination and government pension offset provisions were complex policy adjustments applied to an even more complex Social Security benefit formula that few people understand in great depth. As Andrew Biggs argued in his Substack:

The typical Member of Congress doesn’t understand very well how Social Security works. If they did, the Social Security Fairness Act wouldn’t have come close to passing.

The hard truth is that Social Security’s formulas and finances are a mess. Fixing them means making tough choices—like raising taxes or reducing benefits. Those are unpopular changes, and politicians don’t want to make them. Voters do not wish to be confronted with trade-offs. Everyone likes a free lunch, even though no such thing exists. Instead, politicians have demonstrated once again that the surest way to gain bipartisan support for a measure is to bestow immediate benefits on current constituents while passing the buck to the next generation in the form of higher debt.

What This Means for the Future of Social Security Reform

The Social Security Fairness Act shows how hard real, comprehensive reform will be. If Congress can’t say no to popular and shortsighted benefit increases, how will it tackle the tougher job of making Social Security long-term solvent? The sad truth is that politicians probably won’t even try—at least not until the crisis is too close to ignore.

That’s the real danger. By delaying sensible policy reforms now, lawmakers are setting the stage for even more drastic measures down the road. A general revenue transfer or gimmicks like pre-borrowing against speculative gains become the most likely quick fixes that Congress will grasp when automatic benefit cuts loom in 2032.

A general revenue transfer means that Congress might decide to no longer limit Social Security’s funding to its dedicated revenue sources but instead to open the government’s spending and borrowing spigot wide. While Social Security has been funded primarily by payroll taxes, it faces a $25 trillion shortfall over the next 75 years—after taxpayers have repaid, with interest, the payroll tax surpluses that previous Congresses squandered. Around 2032, the program will no longer have enough incoming revenue to pay all promised benefits. A general revenue transfer is a way of kicking the can down the road, with future taxpayers left to pick up the can, and then some. Basically, Congress would simply tell the Treasury to keep selling bonds to finance Social Security benefits, even after the so-called trust fund is depleted.
The speculative gains idea originates with Senator Bill Cassidy (R‑LA), who proposed creating a separate $1.5 trillion investment fund to shore up Social Security by investing in equity markets. Basically, the government would borrow $1.5 trillion to purchase stocks currently owned by Americans and other investors, leaving future taxpayers to repay that debt while any stock gains would flow to the government. Over 75 years, this could lead to the federal government owning one-third of the stock market, according to Andrew Biggs, raising concerns about corporate governance and the government using its stock market position to engage in politically driven social investing.

Social Security needs comprehensive reform, not more kicking the can. Social Security’s financing challenge isn’t just a money issue, either—it’s a generational betrayal that will fall hardest on those who can least afford it. As Congress keeps showering the older generation with unfunded benefits, younger generations confront a bleaker future, saddled with higher taxes and a slower economy dragged down by excessive government debt.

Congress listens to the loudest voices. That’s why it’s crucial for taxpayers and advocates of limited government and sustainable Social Security reform to get organized now and counter the ingrained tendency to keep benefits flowing while kicking the financing can. If we don’t raise our voices, politicians will keep caving to beneficiary interest groups, until a fiscal crisis kicks back.


The Fed Must Adopt a Monetary Policy Rule

by

Jai Kedia and Norbert Michel

The Federal Reserve is scheduled to conduct its five-year monetary policy framework review in 2025, barely two years after one of the nation’s worst bouts of inflation. The review is the perfect opportunity for the Fed to take a bold step toward objective policymaking that could protect Americans from future policy mistakes, such as those that led to the post-COVID pandemic inflation spike. To take this step, the Fed’s review should include an examination of monetary policy rules that would commit it to certain courses of action regarding its future monetary policy stance.

As long as the monetary system is based on government-issued paper (fiat) base money, some group of government officials must manage that issuance. Because there is no pure market-based mechanism to guide how much of that base money the government should issue, a monetary policy rule might provide the next best alternative. As Jason Furman, chair of President Obama’s Council of Economic Advisers, recently explained: “Placing more weight on rules at the Fed could solve several problems. It would make monetary policy more predictable and understandable, reducing market volatility and enabling better investment decisions. It could also avoid the biases that have crept into the Fed’s decision-making in recent years.”

Debates over which specific rule the Fed should adopt are useful but ultimately should not stand in the way of committing to a rule in the first place. Most monetary policy rules take the form of an interest rate feedback system where the Fed sets the target for the federal funds rate (“FFR”) in response to current values of key macroeconomic indicators. Since the economy is interlinked and most macroeconomic indicators mirror each other, the general recommendations the various rules offer should (theoretically) be similar. Indeed, this is borne out by the data. 

Figure 1 below compares the actual FFR to its hypothetical counterparts under various monetary policy rules. The rules considered are popular choices from academia or policy analysis and include: the Taylor rule, a difference rule, NGDP targeting, and pure inflation targeting.[1]
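
For concreteness, here is a minimal sketch of how one such rule maps macroeconomic data into a rate recommendation. It uses the classic Taylor (1993) coefficients with placeholder inputs; the paper’s exact parameterizations differ and, per the footnote, are available on request:

```python
# Classic Taylor (1993) rule: i = r* + pi + 0.5*(pi - pi*) + 0.5*(output gap).
# Placeholder inputs for illustration only; not the paper's parameterization.

def taylor_rule(inflation: float, output_gap: float,
                r_star: float = 2.0, pi_target: float = 2.0) -> float:
    """Return the rule-implied nominal federal funds rate, in percent."""
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

# Example: 5 percent inflation and a 1 percent positive output gap
print(taylor_rule(inflation=5.0, output_gap=1.0))  # -> 9.0 percent
```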

Figure 1: A Comparison of the Actual Federal Funds Rate to Hypothetical Rules-Based Rates

As Figure 1 shows, policy recommendations across the various rules closely align. (NGDP targeting is slightly more volatile than the other rules.) The key observation is that any of these policy rules would have helped the Fed avoid its costly post-pandemic mistakes. All rules recommended raising the FFR well before the Fed did so in early 2022. Instead, discretionary monetary policy led the Fed to incorrectly label inflation as “transitory,” and its sluggishness in tightening its stance likely allowed inflation to become entrenched. Once it realized its mistake, the Fed had to execute a series of rapid rate hikes and has since kept its rate target higher than all the rules recommend. Had the Fed followed such a rule, Americans likely would have experienced a more gradual, steadier increase in rates and avoided the worst bout of inflation in at least 40 years.

While ample evidence has shown that rules are better than discretion, Americans should not expect that monetary policy can ever result in perfect economic conditions. Debates over the efficacy of different rules have often hindered the fight to force a rule upon the Fed. But as our new paper shows, there is no single “optimal” monetary policy rule. In the paper, we use a state-of-the-art macroeconomic model to simulate the US economy under various monetary policy regimes (such as those from Figure 1).

We find that there is no one rule that is universally best at responding to demand and supply shocks nor is any one rule the best at stabilizing all (or even most) macroeconomic variables, such as output growth or inflation. Monetary policy cannot solve all economic problems, and most rules are associated with tradeoffs that require the policymaker to decide which macro variables matter the most for welfare before picking a rule. 

We also analyze the informational burdens associated with various rules. To do so, we compare the Fed’s forecasting errors under different rule regimes. Here too there is a tradeoff that is important for policymakers—rules that provide improved stabilization regrettably also have the highest information burdens.

Given the existing centrally managed fiat money framework in the United States, it is unfortunate that there is no silver bullet monetary policy rule. Still, given that central banks are ill-suited to “manage” the economy by consistently reaching precise macroeconomic goals, rules-based monetary policy might produce the best outcomes that can be expected. Our evidence suggests that any of the reasonable monetary policy rules under consideration perform similarly well, and we argue that policy based on any of these rules would be better than pure Fed discretion.

A template for a workable monetary policy rule already exists in the 2015 FORM Act. Under this approach, Congress would require the Fed to announce a guiding rule for interest rate decisions. If the Fed feels it is necessary to deviate from its rule (say to accommodate a large fiscal expenditure), it could do so provided it explains its reasoning to Congress. 

The Fed will soon begin its framework review; those of us who are concerned about poor monetary policy should not let this opportunity pass to hold the Fed accountable.

To read the full paper that includes a detailed analysis of various policy rules, please click here.

The authors thank Jerome Famularo for his research assistance in the preparation of this article and the paper.

[1] A detailed explanation of the exact construction of each rule such as exact parameter specifications is excluded for simplicity. These are available upon request.


Daniel Raisbeck

The evidence in Argentina already shows that relaxing controls leads to a massive influx of dollars into the banking system, as would occur under an official dollarization process (whereby a government grants the dollar legal tender or simply allows its use under the free competition of currencies). And yet the currency clamp remains.

Though it has not said so explicitly, the Milei government seems keen to avoid repeating the experience of former President Mauricio Macri (2015–2019), whose government rapidly removed many—but not all—currency and capital controls while continuing deficit spending financed with sovereign debt.

The experiment failed: facing capital flight and an imminent debt default in 2018, Macri turned to the IMF for a USD $57 billion loan, the largest in the fund’s history. In 2019, annual inflation rose beyond 50 percent. When it became clear that Macri would lose that year’s election to Alberto Fernández, a Peronist candidate, capital flight intensified and the currency depreciated by 25 percent against the dollar in a single day. The central bank responded by using USD $13 billion in reserves to defend the peso. Macri then reintroduced a strict version of the currency clamp.

Milei has taken the opposite approach: fiscal discipline from day one, combined with a cautious, interventionist exchange rate policy. The strategy has worked thus far, and there are expectations of the clamp’s gradual unwinding. But the cost is the current currency lag.

That is, the exchange rate remains controlled, yet consumer prices have risen faster than the official rate of devaluation. At 2.4 percent, Argentina’s monthly inflation reached a multi-year low, but it still exceeds the 2 percent crawling peg, the rate at which the government officially devalues the peso each month. When the rate of devaluation does not match the rate at which consumer prices rise, the local currency can become overvalued.
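
A quick calculation shows how that small monthly gap compounds (a sketch assuming both rates stay constant for a year; actual monthly figures vary):

```python
# How 2.4% monthly inflation outpaces a 2% monthly crawling peg.
monthly_inflation = 0.024  # consumer price growth
monthly_crawl = 0.020      # official devaluation rate

monthly_lag = (1 + monthly_inflation) / (1 + monthly_crawl) - 1
annual_lag = (1 + monthly_lag) ** 12 - 1

print(f"Monthly real appreciation: {monthly_lag:.2%}")    # ~0.39%
print(f"Annualized real appreciation: {annual_lag:.2%}")  # ~4.8%
```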

According to The Economist’s Big Mac Index, which measures relative purchasing power parity in different currencies, the peso was 15 percent overvalued vis-à-vis the dollar in June 2024, when the implied exchange rate was ARS $1,072, while the official rate stood at ARS $931. But this was the “raw index” spread, which merely accounts for the disparity between the local currency and the dollar. According to the GDP-adjusted index, which considers relative income levels in Argentina and the United States—and, as such, “may be a better guide to the current fair value of a currency” — the peso was 47 percent overvalued against the dollar in June (roughly the level of the spread between the blue and official dollars).
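
The raw-index arithmetic is straightforward (a sketch using the June 2024 figures cited above):

```python
# Big Mac "raw index": gap between the burger-implied and official exchange rates.
implied_rate = 1072   # ARS per USD implied by relative Big Mac prices (June 2024)
official_rate = 931   # official ARS per USD at the time

overvaluation = implied_rate / official_rate - 1
print(f"Peso overvaluation (raw index): {overvaluation:.0%}")  # -> 15%
```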

The peso’s overvaluation in the Big Mac index was the highest since 2011, the year when former leftist President Cristina Kirchner (2007–2015) hardened the existing currency controls and introduced many of the clamp’s current elements.

GDP-Adjusted Value of Argentine Peso vs. U.S. Dollar (2011–2024)

Source: The Economist Big Mac Index. Periods of overvaluation (blue) and undervaluation (red) are measured every six months (June and December).

What has happened after previous cases of peso overvaluation? Just in the last 15 years, periods of overvaluation above 25 percent (in GDP-adjusted terms) have been followed by a crash in the Argentine currency, with a swing of at least 45 percent and as much as 117 percent toward peak devaluation. Granted, past conditions were not the same as today’s, especially in terms of the Milei government’s strict fiscal stance and, more recently, its zero-emission monetary policy. But, at the very least, historical precedent should give policymakers pause.

In a recent podcast interview, Milei downplayed the peso overvaluation argument by stating that nobody can determine what the proper price of the dollar should be. This is correct, which raises the question of why the government maintains the official exchange rate and the currency clamp in the first place.

In a similar vein, economist Ramiro Castiñeira argues that a currency lag only applies when there is an active, political attempt to prop up the official exchange rate while the black-market rate spirals out of control, which is not the current case in Argentina. Today, he adds, “the currency spread is collapsing without the sale of reserves or the issuing of foreign debt, and even while interest rates are decreasing so as to not favor the carry trade.”

While these are good points, one should point to recent interventionism in the currency market and to the crawling peg as the carry trade’s main catalyst. There is also the fact that, despite the official fall in inflation, which is measured in pesos, Argentina remains objectively expensive compared to much of the world in strict dollar terms. 

As Ambrose Evans-Pritchard writes in The Daily Telegraph, “it costs almost twice as much to buy a hamburger in Buenos Aires as it does in Tokyo, even though the pampas are full of cattle, while the rice terraces of Japan are not.” 

On the other hand, when it comes to imported goods such as clothing, extensive trade barriers beyond the reach of the currency clamp also drive up prices in Argentina. According to a recent study by Fundar, a research center that analyzed 2024 data in absolute terms, “a basket of clothing in Argentina is 35% more expensive at the official dollar rate than the average for the same basket in other countries in the region.”

So is there a currency lag in Argentina after all? There is, according to the central bank’s website. Since the dollar scarcity induced by the currency clamp limits importers’ ability to pay for the products they eventually sell to the public, the central bank began to issue inflation-proof bonds that allow importers to pay their debts abroad:

BOPREAL stands for Bond for the Reconstruction of a Free Argentina in Spanish. BOPREALs are US dollar-denominated securities issued by the BCRA for importers with overdue debts for goods with customs registration and/or services actually rendered until December 12, 2023.

BOPREALs offer a transparent, orderly, non-discriminatory and effective solution to the historical growth of commercial debts abroad triggered by a foreign exchange delay and the ensuing lack of foreign currency, which makes it impossible to meet obligations immediately. (Emphasis added)

BOPREAL bonds are a complex, short-term fix for the adverse effects of Argentina’s self-imposed currency market complexity.

Dismantling the currency clamp would undo such complexity and restore freedom to the currency markets, but the government believes that a rapid rise in inflation would follow. Herein lies a crucial contradiction. If the government’s premise is correct, then the recent fall in inflation is not due solely to fiscal surpluses, deregulation, or an extremely restrictive monetary regime, but rather also to the systematic repression of the currency. Whether correct or not, the government’s thesis is an implicit recognition of a serious currency lag. 

The government says it will phase out the currency clamp in 2025. However, it has not provided a timeline for its plan beyond the stated prerequisites of an increase in the central bank’s reserves, a convergence between the inflation rate and the crawling peg, and a rise in the demand for pesos.

There is also an unstated yet highly likely political motivation for the government’s delay: the government does not want to risk a sudden freefall in the peso’s value, with a corresponding rise in inflation, before the congressional elections in October 2025, when Milei will have the chance to improve his party’s standing from its current, small minority to a significant majority. As the president argues, politics is a zero-sum game; if his party does not wield power, then the statist Peronists will do so once again. 

Certainly, the political angle is not to be brushed aside. But there are also risks to keeping the clamp in place for months on end, a policy that necessarily pits the central bank’s functionaries against the currency markets’ free and often wild forces. On the other hand, lifting currency controls properly could have the opposite effect of what the government fears, with a net inflow of dollars into a country that is increasingly attractive to foreign investors.

Much as Buenos Aires Governor Dardo Rocha undid the original cepo in the 1880s, President Milei should end Argentina’s currency clamp once and for all.


Eric Gomez

The backlog of US weapons that have been sold but not delivered to Taiwan saw several changes in December 2024, the net result being a $77 million reduction in the backlog, which now stands at $21.87 billion.

The third-largest arms sale in the backlog, a July 2019 sale of 108 Abrams tanks valued at $2 billion, began delivery in December. This partial delivery is visualized in Figures 1 and 2, but for reasons explained below I have elected not to reduce the backlog’s dollar value until full delivery has occurred.

Two New Sales, and TOW Delivery

The Biden administration notified Congress of three new Foreign Military Sales (FMS) cases to Taiwan in December 2024. Two of these cases are included in the backlog. One case is not included because it procures equipment for Taiwanese fighter aircraft pilots who are conducting training in the United States.

The two new FMS cases included in the backlog are a $30 million sale of 16 MK 75 naval gun mounts and a $265 million sale of equipment to modernize various command, control, communication, and computer—also known as C4—systems. Congressional notification occurred on December 20, 2024, for both cases.

As with aircraft reconnaissance equipment, the database codes the naval gun mount as a traditional capability because it is a sub-component of another traditional capability. While most FMS cases entail the production of new equipment, the notification announcement mentions that the gun mounts will come from existing US stock. This should make for much faster delivery since the mounts will not need to be made from scratch. This is the first traditional capability sold to Taiwan by the Biden administration since an August 2023 sale of F‑16 infrared targeting pods.

C4 modernization is difficult to code because these capabilities support a wide variety of other equipment. For example, the announcement for this sale mentions one asymmetric (Patriot air and missile defense batteries) and two traditional (F‑16 and P‑3 aircraft) capabilities. I have coded the C4 modernization sale as an asymmetric capability. While the announcement mentions more traditional than asymmetric capabilities, modern C4 equipment will be able to better support Taiwan’s asymmetric capabilities.

December 2024 also saw the delivery of 1,700 TOW-2B anti-tank missiles that were sold across two FMS cases announced in December 2015 and July 2019. According to Taiwan’s Ministry of National Defense (MND), these missiles were supposed to be delivered in 2022 but were delayed due to the COVID-19 pandemic, raw material shortages, and quality control problems. The delivery of the missiles, which were coded as asymmetric capabilities, reduced the backlog by $372 million.
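
The month’s net change follows directly from these figures (a sketch reconciling the numbers reported above):

```python
# Reconciling December 2024's net change in the Taiwan arms backlog ($ millions).
additions = [30, 265]   # new FMS cases: MK 75 gun mounts, C4 modernization
deliveries = [372]      # completed TOW-2B missile delivery

net_change = sum(additions) - sum(deliveries)
print(f"Net change: {net_change} million")  # -> -77, a $77 million reduction
```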

Table 1 shows an itemized list of the arms sales currently in the backlog. Green text denotes December’s additions to the backlog, while red text shows arms sales that completed delivery. The yellow text shows deliveries in progress as explained below. 

First Abrams Tanks Arrive

The biggest Taiwan arms sale news in December 2024 was the delivery of 38 M1A2T Abrams tanks, the first of three batches that will deliver 108 tanks before the end of 2026. The $2 billion Abrams sale is the third largest in the backlog, behind an $8 billion F‑16 sale and a $2.37 billion sale of truck-mounted Harpoon anti-ship missiles.

Figuring out a way to categorize a partial delivery has been difficult. The addition of almost 40 Abrams tanks is a big deal for Taiwan’s ability to protect itself. But reducing the backlog’s dollar value proportionately is not a simple solution because the Abrams sale includes many other smaller capabilities in addition to the tanks that are harder to track.

I decided to keep the $2 billion figure as part of the backlog until full delivery of the tanks occurs. However, the data visualizations have been amended to show that delivery of this case is in progress. Going forward, this approach of retaining the dollar value while amending the visualization to show delivery in progress will be applied to all backlogged arms sales of $1 billion or more.

Where are the F‑16s?

As 2024 ended, there was a mystery surrounding the largest arms sale in the backlog, an $8 billion sale of 66 F‑16 aircraft. Similar to the Abrams, Taiwan was supposed to receive an initial group of fighters in 2024. In October 2024, Taiwan’s defense minister said that delivery of the first aircraft was expected by the end of the year, but there has been neither a sighting of the new aircraft nor an announcement from MND.

It is possible that one or more F‑16s arrived in Taiwan without any fanfare, but this would be unusual. Taiwan’s press and government are typically eager to report deliveries of major weapons systems, which are seen as a visible sign of US support. MND documents from earlier in 2024 indicated that the F‑16s were a few months behind schedule, but the ministry expected the first aircraft before the end of the year. If initial delivery has not occurred, then full delivery of Taiwan’s F‑16s may not happen until after 2026. 

Taiwan Arms Backlog Dataset, December 2024


New Year, New AI Policy?

by

Jennifer Huddleston

The general public has become increasingly excited and concerned about advancements in artificial intelligence (AI). Many of us have been encountering and using AI for some time without realizing it, in products such as search engines and chatbots.

However, the popularity and advancement of publicly available generative AI products like ChatGPT, Gemini, and Dall‑E have captured the attention of both the public and policymakers. The previous Congress saw both the House and Senate release suggestions for how policymakers should consider what—if any—legislation is needed. The GOP platform vowed to revoke the Biden administration’s executive order on AI, and many lawmakers in both red and blue states have already pre-filed bills related to this rapidly emerging technology.

What should policymakers consider, and what is likely to happen as this next phase of the AI policy debate emerges?

Focus AI Policy on Supporting the Benefits of the Technology, Not Just Eliminating the Harms

New technologies are often disruptive, and policymakers initially seek a legislative response that ensures safety first, even at the expense of increased access, innovation, and speech.

A similar panic initially emerged around AI, complete with sci-fi-movie-esque doomsday scenarios. Less attention is paid to the myriad benefits AI provides, from giving stroke victims their voices back to predicting how wildfires might move. As of January 2025, 55 percent of chief operating officers in a survey conducted by PYMNTS indicated they were using AI to improve cybersecurity.

However, much of the policy debate around the world has focused on preventing the negatives. This has significant potential unintended consequences for the beneficial applications.

As mentioned, the GOP platform calls to revoke the Biden-era executive order on AI. This executive order represented a much more precautionary and regulatory approach to technology, as has typically been seen in Europe.

While the incoming Trump administration has not stated what could replace the AI executive order, it could lean on prior frameworks related to AI, including those from the previous Trump and Obama administrations, which focused on supporting positive development rather than establishing regulations based on general aspects of the technology. This approach aligns more with what has allowed the United States to flourish as a leader in technology and innovation by providing a light-touch approach to general-purpose technologies. Further, it seeks to respond to specific problems or applications when necessary, instead of utilizing a top-down approach.

Applying a light regulatory touch to AI will most likely let US innovators and consumers access the benefits of this exciting technology, instead of regulating the technology’s development via limitations on computing power or requirements to review before launching.

Existing laws on topics such as discrimination or fraud address many of the concerns around AI. As a result, rather than passing new laws or restrictions that may lead to unintended consequences and limit important developments in health care and other fields, policymakers should consider whether existing laws may already address their concerns.

Recognize AI Is a Globally Competitive Market

While many leading technology companies invest in artificial intelligence, the United States is not the only country developing the technology. And while companies in Europe and other free societies are potential competitors, the risk of what could happen if technologies are developed and advanced by societies without democratic values, such as China, should not be ignored. In this way, the United States must have a policy towards AI that encourages a variety of innovations and does not place regulations and restrictions on technological development that would make it difficult to compete in the global marketplace.

There are positive steps US policymakers could take to support further global competitiveness of American innovation in AI. This includes ensuring that policy does not prevent open-source development, a position Vice President-elect J.D. Vance supports. Further, lawmakers should ensure regulations are not constructed so that only the largest players can afford to comply or have the resources to navigate them, thus hurting startups in the process.

Additionally, regulators should avoid misguided claims of monopoly at this early stage of the market or positions that limit acquisitions, which could deter important investment or research and development and even provide an advantage for Chinese competitors like Huawei.

While AI’s national security risk is often cited as a cause for further regulation, a policy that supports American innovation can further the positive reach of democratic values in this technology and provide an option for allies around the world.

Neutralize the Emerging Risk of a State AI Patchwork

A more regulatory approach to AI policy, and technology policy generally, has taken hold in Europe. Unfortunately, many US states have considered it as well. This approach risks creating a patchwork that could disrupt innovation. Last year, over 40 states considered AI-related legislation, ranging from studies of the technology’s potential impact to the significant regulatory regime enacted in Colorado.

In some cases, states may be able to support innovation by removing their own regulatory barriers or creating certainty around how existing laws apply to bad actors who use AI to commit fraud or other abuse. General state regulatory frameworks for AI, however, create problems beyond the state’s borders. Even lighter-touch regulatory approaches at the state level could create problems, absent clear reciprocity, by requiring companies to seek certification in each state before fully launching or even developing their products.

As a result, the general framework for AI policy will need to be decided at the federal level, as it was for the internet before it. General AI regulation at the state level would mean that the most restrictive state’s law becomes de facto federal legislation, creating conflicts that prevent products from being available.

Conclusion

The light-touch approach to technology has allowed innovation and entrepreneurship to flourish in the US, and its uses support the spread of democratic values. As the new administration and Congress consider AI policy, they should look for opportunities to support the beneficial innovation it brings and respond to harm. 

In the face of European and state-level regulation, however, a proactive statement of intended policy will likely be needed to overcome the problems that could arise and prevent American companies from strongly competing on a global stage.


Cato Tax Bootcamp: Tax Code 101

by

Adam N. Michel

This month Cato is hosting a Congressional Fellowship on tax and trade ahead of a year that promises to be busy for both policy areas. The fellowship is a nine-week program attended by a bipartisan cohort of congressional staff who engage in weekly discussion sessions with featured speakers. The first four discussions are dedicated to tax policy. We will cover tax code basics, the Tax Cuts and Jobs Act (TCJA), radical tax reforms, and international tax. After each week’s session, I will provide an embellished class outline for those who want to follow along. This is the first in a four-part series.

The first session starts with an overview of US federal revenue sources and the distribution of who pays taxes. We then cover some differences between income and consumption tax bases, tax expenditures, and the deadweight loss caused by high tax rates. This class was co-taught with Cato’s Chris Edwards.

Sources of Revenue

Americans paid roughly $7.2 trillion in taxes across all levels of government (federal, state, local) in 2023. The federal government collected two-thirds of that revenue, or about $4.7 trillion.

About half of federal revenue comes from the individual income tax (Figure 1). The income tax includes revenues from taxes on wages, capital gains and dividends, and pass-through businesses. The United States is relatively unusual in that a majority of business profits are “passed through” to individual tax returns and taxed as personal income.

The income tax system is structured with brackets that apply progressively higher tax rates to different income ranges. Rates start at 10 percent for the lowest incomes and increase incrementally to 37 percent for the highest incomes, with each rate applying only to income within its specific income bracket.
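
A minimal sketch of how marginal brackets apply is below. The thresholds are hypothetical round numbers for illustration, not actual IRS brackets; only the 10-to-37 percent rate range comes from the text:

```python
# Each rate taxes only the income falling inside its bracket.
# Thresholds here are hypothetical; a real schedule has seven brackets.
BRACKETS = [
    (0, 0.10),         # 10% on income from $0 up to the next threshold
    (11_000, 0.12),
    (45_000, 0.22),
    (100_000, 0.24),   # ...simplified for illustration
]

def income_tax(income: float) -> float:
    tax = 0.0
    for i, (floor, rate) in enumerate(BRACKETS):
        ceiling = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income > floor:
            tax += (min(income, ceiling) - floor) * rate
    return tax

# A $60,000 earner pays 10% on the first $11,000, 12% on the next $34,000,
# and 22% only on the remaining $15,000 -- not 22% on everything.
print(income_tax(60_000))  # -> 8480.0
```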

Payroll taxes make up 36 percent of federal revenue. They consist of a flat 12.4 percent tax on wages up to $176,100 (in 2025) that partially funds Social Security and a 2.9 percent tax (with no wage cap) that funds about a third of Medicare. The combined rate is 15.3 percent, split evenly between the employer and the employee, although economists generally agree workers pay the full economic cost of the tax. Individuals with wages above $200,000 pay an additional 0.9 percent Medicare tax. 
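
Putting those pieces together for a single wage earner (a sketch using the 2025 parameters cited above, combining the employer and employee halves):

```python
# Payroll tax on a given wage, using the 2025 parameters from the text.
SS_RATE, SS_WAGE_CAP = 0.124, 176_100       # Social Security: 12.4% up to the cap
MEDICARE_RATE = 0.029                       # Medicare: 2.9%, no wage cap
ADDL_RATE, ADDL_THRESHOLD = 0.009, 200_000  # additional Medicare tax above $200k

def payroll_tax(wages: float) -> float:
    tax = SS_RATE * min(wages, SS_WAGE_CAP)            # capped portion
    tax += MEDICARE_RATE * wages                       # uncapped portion
    tax += ADDL_RATE * max(0.0, wages - ADDL_THRESHOLD)
    return tax

print(payroll_tax(50_000))   # -> 7650.0, the full 15.3 percent
print(payroll_tax(250_000))  # -> 29536.4: SS capped, additional Medicare tax applies
```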

The corporate income tax raises about 9 percent of federal revenue. Other sources of revenue (customs duties, estate tax, excise taxes, and federal reserve earnings) round out the last 5 percent.

Figure 2 shows revenue sources as a share of GDP since 1950. Total revenue as a share of the economy has remained relatively stable over this period, fluctuating around 17 percent. Individual income taxes were the largest source of revenue during this time, and their stability is particularly impressive given that top income tax rates were 91 percent in the 1950s, 70 percent in the 1970s, and eventually fell to 28 percent in 1986. The rate increased again in the early 1990s, rising to 39.6 percent in 1993. The top rate was cut to 37 percent in 2018. See a complete history of the top and bottom income tax rates here.

Payroll tax revenue increased as rates slowly ratcheted up from 2 percent in 1937, when Social Security was created, to 15.3 percent in 1990 (adding in Medicare taxes in 1966). The wage base also expanded from its initial $66,000 (in today’s dollars). More on the evolution of the payroll tax rate here and tax base here

Corporate income tax revenue decreased in importance through the 1980s as more businesses shifted into the passthrough form. The statutory tax rate was cut from 52 percent in the 1950s to 35 percent in 1993 before being cut to 21 percent in 2018.

Tax Distribution

Data on income tax payments and estimates from the Treasury Department show that the US federal tax system is highly progressive, or graduated, meaning that higher-income Americans pay a significantly larger share of their incomes in taxes than others.

Figure 3 reports the latest Internal Revenue Service data on income taxes for the 2022 tax year. Measured by adjusted gross income (AGI), the top 1 percent earned 22 percent of total income and paid 40 percent of all income taxes. The top 10 percent earned 50 percent of the income and paid 72 percent of the income tax. The lowest-income half of taxpayers earned just shy of 12 percent of AGI and paid 3 percent of income taxes.

The US Treasury’s Office of Tax Analysis estimates average federal tax rates, accounting for income, payroll, corporate, and other taxes. Figure 4 shows that tax rates climb as incomes rise.

The lowest-income 20 percent of earners, measured by adjusted family cash income, face average tax rates that are either negative or close to zero. A negative tax rate means the taxpayer is a net beneficiary of the tax system, likely receiving refundable tax credits, such as the Earned Income Tax Credit (EITC), Child Tax Credit (CTC), and federal health care exchange subsidies.

On the other end of the distribution, the top 10 percent of income earners pay an average tax rate of 27 percent. Treasury breaks the highest income earners into narrower segments, showing that the highest-earning 0.1 percent pay the highest estimated average tax rate of 33.5 percent.

Other recent Treasury data show that the wealthiest 92 taxpayers in the US paid an average effective tax rate of 60 percent (state, federal, international).

The Tax Base: Consumption vs. Income Taxes

Every tax has two fundamental components: the tax base—what is subject to tax—and the tax rate. Defining the tax base is the most economically consequential decision in a tax system, and the correct tax base—consumption or income—is the subject of a fundamental disagreement between policymakers with different goals.

Conceptually, a person can get their money from three sources: returns to labor (wages), returns to capital (dividends, interest, small business profits), and changes in wealth (capital gains).

The dominant conception of the ideal tax base among liberal economists is Haig-Simons income, named after economists Robert Haig and Henry Simons. This broad conception of the tax base includes all three sources of income. It is defined as consumption plus the annual change in net wealth, including unrealized gains. In theory, it would tax the value of your financial assets, house, and consumer durables each year as their value rises. It would also tax the imputed (fictitious) return from personally owned durable goods like houses, cars, and even blue jeans.
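
In symbols, the definition is simply (a standard textbook statement of the Haig-Simons base):

```latex
% Haig-Simons income: consumption plus the annual change in net wealth,
% where the change in wealth (\Delta W) includes unrealized gains.
Y_{\text{Haig-Simons}} = C + \Delta W
```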

The theoretical Haig-Simons tax base is very impractical. So, Congress has imposed an actual income tax, which taxes capital gains when they are realized and ignores imputed income on personally owned durable goods. However, the extremely broad tax base under Haig-Simons theory continues to drive federal tax policy. For example, President Joe Biden and Democratic presidential candidate Kamala Harris proposed taxing unrealized gains for wealthy individuals.

Another way to think about the tax base is how a person uses their money (instead of how they earned it). Earnings can be spent on consumption, or earnings can be saved and invested. Money saved is future consumption. Taxing consumption is often done through a sales tax or a value-added tax (VAT), but it can also be done through an income tax, with a deduction for saving (called a consumed income tax).

Income taxes tend to levy higher effective tax rates on savers than on people who consume their income immediately by taxing wages and also the returns on wages saved and invested. Former Princeton professor David Bradford was a crucial voice on the advantages of consumption taxes over income taxes. Alan Viard concisely summarizes how taxes on capital income penalize saving here.
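
A simple two-world comparison, with illustrative numbers, shows the penalty:

```python
# Why an income tax penalizes saving: it taxes wages and then also the
# returns on any wages that are saved. Illustrative numbers only.
tax_rate, r, years = 0.20, 0.05, 20
after_tax_wage = 100.0 * (1 - tax_rate)  # $80 left to consume now or save

# Wage/consumption-tax world: returns on saving accrue untaxed.
future_consumption_ct = after_tax_wage * (1 + r) ** years

# Income-tax world: the 5% annual return is itself taxed at 20% each year.
future_consumption_it = after_tax_wage * (1 + r * (1 - tax_rate)) ** years

print(f"{future_consumption_ct:.2f}")  # ~212.26 available after 20 years
print(f"{future_consumption_it:.2f}")  # ~175.29: deferred consumption bears a higher effective rate
```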

In the US, conservative and libertarian economists and tax scholars favor tax changes that move the tax system toward a consumption tax base. Consumption tax bases are more economic growth-oriented because they remove the income tax’s bias against investment, which is one of the foundations of long-term growth. Consumption taxes are also better at equalizing marginal tax rates across industries and can account more fully for the distortions of inflation. They also tend to be simpler systems without complicated depreciation rules or special systems for capital gains.

Liberal and progressive economists and tax scholars often favor a version of the income tax. They tend to discount the taxes’ negative impact on growth and have a strong bias for high levels of redistribution through disproportionate taxes on the income sources of the wealthiest Americans, who rely more on investment income. Although consumption taxes can be designed with progressive rate structures, income taxes can more easily levy confiscatory taxes on the most visible forms of investment income.

Tax Expenditures

Due to this tension between the desire for growth and redistribution, the modern US income tax is a hybrid system, stuck in a tug-of-war between consumption and income taxes. Federal tax loopholes, credits, and deductions—called tax expenditures—are measured from a modified Haig-Simons income tax base.

Using liberals’ preferred definition of the ideal tax base creates a biased list of officially tallied tax expenditures. The Joint Committee on Taxation and the Department of the Treasury compile the tax expenditure lists. About half of the dollar value of the official tax expenditures would not be considered as such if measured from a consumption tax baseline. Operating from a misleading list, lawmakers’ well-meaning quest to broaden the tax base often conflates true loopholes in the tax code with economically beneficial provisions that alleviate biases against saving and investment.

The table below from Chris Edwards’s Cato Policy Analysis “Tax Expenditures and Tax Reform” separates true tax loopholes from provisions that would not appear on a list of tax expenditures under a consumption tax base. Read the full study for an in-depth discussion of the consumption and income tax bases.

The Tax Rate: Deadweight Loss

The economic harm caused by a tax levied on a given tax base is a function of the tax rate. Lower tax rates change people’s behavior less, and higher tax rates change their behavior more, resulting in higher economic costs. A tax’s primary effect is transferring private resources to the government; however, this transfer also causes a deadweight loss (or excess burden).

When the government imposes a tax, it raises the price of the taxed good or activity, inducing less of it. The recently implemented congestion charge in New York City is a clear example: the new tax is intended to raise the price of driving into the city, causing those who value the trip less than the new, higher cost to forgo it. In the case of congestion pricing, the decreased activity is a desired feature of the tax. Other taxes on income, business profits, or investment returns similarly result in people working less, fewer businesses, and lower levels of investment.

In the classic supply and demand framework, the deadweight loss triangle graphically illustrates the value of market trades that are not made because of the tax-induced higher price. The area of the deadweight loss triangle, and thus the economic loss caused by the tax, rises with the square of the tax rate. For example, if a tax rate is doubled, the deadweight loss caused by the tax quadruples, so lost economic activity grows faster than the additional revenue. (Figure adapted from Slow Boring.)
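
A toy example makes the quadratic relationship explicit (a sketch assuming linear demand and perfectly elastic supply; the numbers are illustrative, not from the text):

```python
# Deadweight loss rises with the square of the tax rate.
# Toy market: demand Q = 100 - P, perfectly elastic supply at P = 50,
# and a per-unit tax t added to the price.

def revenue(t: float) -> float:
    quantity_after_tax = 100 - (50 + t)  # demand at the tax-inclusive price
    return t * quantity_after_tax

def deadweight_loss(t: float) -> float:
    lost_quantity = t                    # trades priced out by the tax
    return 0.5 * t * lost_quantity       # Harberger triangle: 1/2 * base * height

for t in (5, 10):
    print(f"t={t}: revenue={revenue(t):.0f}, DWL={deadweight_loss(t):.1f}")
# t=5:  revenue=225, DWL=12.5
# t=10: revenue=400, DWL=50.0 -> doubling the rate quadruples the loss
```

In this toy market, revenue t(50 − t) peaks at a tax of 25 and falls beyond it, tracing exactly the Laffer curve logic discussed next.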

This relationship between the tax rate, revenue, and economic activity underlies the well-known Laffer Curve. The hypothesis states that an income tax will not raise any revenue when the rate is zero or when the rate is 100 percent. There is some point in the middle where government revenue is maximized. Revenue generally rises with the rate until the deadweight loss of the tax is so large—by shrinking the tax base—that revenue begins to fall.

The shape of the Laffer Curve is determined by the tax base (what is subject to tax) and people’s responsiveness to the tax. When the tax base has many loopholes and carveouts, the top of the curve will be at a lower tax rate as taxpayers have more opportunities to invest in avoidance and evasion.

Lastly, the top of the Laffer Curve does not tell policymakers what the tax rate should be; it simply provides an extreme upper bound on tax rates that, if raised any higher, would be net losers for both the government and taxpayers. For taxpayers and the economy, policymakers should aim not for the revenue-maximizing rate but the rate that maximizes overall prosperity.

Next week, we will apply these principles to the 2017 tax reforms. 


Juries as a Bulwark Against Oppression

by

Mike Fox

Imagine that at 22, you were driving recklessly with a group of friends when a drunk driver crossed the center line and hit you head-on, killing several of your passengers. If you think the grief from feeling responsible for the deaths of your friends would be unfathomable, now imagine the state charged you with causing those deaths and confined you to an approximately 70-square-foot cage for over three decades.

In 2022, Floridian Devin Perkins was driving 100 mph in a posted 35 mph zone when a drunk driver going the wrong way slammed into him, killing three of his passengers before fleeing the scene. Prosecutors charged Perkins with three counts of vehicular homicide and one count of reckless driving resulting in serious bodily injury. Perkins rejected a plea offer recommending 10 to 20 years in prison, opting to exercise his Sixth Amendment right to a jury trial.

At his September trial, jurors convicted Perkins after deliberating for just 20 minutes. Awaiting sentencing, Perkins faces a possible life sentence and a mandatory minimum of over three decades behind bars. Perkins made a terrible mistake—one for which he should certainly be held accountable. But the penalty is excessive, particularly given that several of the victims’ family members have said he should never have been charged.

The Framers entrusted jurors to wrestle with the toughest of questions to resolve disputes between citizens and their government. It is no accident that the jury trial is the constitutionally prescribed mechanism by which we adjudicate criminal cases. The Sixth Amendment reads, “In all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial, by an impartial jury of the State and district wherein the crime shall have been committed.” After much deliberation, the Framers chose their words carefully. They knew from experience that having a powerful, disconnected government posed an existential threat to their freedoms. So, they counted on their neighbors to shield them against government oppression.

At the Founding, criminal jurors were not relegated to the role of mere factfinders, as they are today. Historically, the institution of jury independence—which includes but is not limited to the power to acquit against the evidence—played an important role in assessing the wisdom, fairness, and legitimacy of a given prosecution. Founding-era jurors were tasked not just with finding facts, but with preventing injustice. Jurors could acquit factually guilty defendants if they perceived a law as immoral as applied to a specific case or if they considered the sentence disproportionate to the wrongfulness of the crime. 

This practice is widely known as “jury nullification,” but that term is misleading because only judges can nullify laws, whereas the most jurors can do is simply refuse to convict a given defendant in a particular prosecution. Unlike when a judge strikes down a law, when jurors exercise their power to acquit against the evidence, the law remains on the books, and prosecutors remain free to enforce the same law again, even against that same defendant. Thus, a more precise term is “conscientious acquittal.”

Independent jurors have the prerogative not only to acquit against the evidence but also to ask questions and draw inferences based upon how their questions are answered or ignored. Additionally, independent jurors can refuse to accept the judge’s interpretation of the law despite being told they must. In Devin Perkins’ case, the judge did not inform jurors of their power to acquit against the evidence. Nor did the judge tell jurors that, if convicted, Perkins would serve at least thirty years—and up to life in prison. Had jurors known all this, perhaps they would have acquitted him.

The institution of jury independence is nothing new. In 1735, dissident publisher John Peter Zenger was charged with seditious libel for criticizing New York’s royal governor. A New York jury acquitted Zenger, in what came to be a celebrated early example of so-called jury nullification in the New World. Whether protecting dissident publishers like Zenger from politically motivated prosecutions or acquitting abolitionists prosecuted for delivering fellow human beings from bondage under the Fugitive Slave Act, jury nullification was employed without controversy before, during, and after the Founding to safeguard victims of an excessively punitive government.

The recent prosecution of Daniel Penny in New York City illustrates how judges sometimes mislead jurors into believing they lack the power to acquit against the evidence. While it may have been reasonable for prosecutors to decline to file charges, the decision to prosecute is not the final say. When reasonable people can disagree about whether someone committed a criminal act, the Framers left it to a jury of ordinary citizens to determine that person’s fate. Penny had his day in court. He had the opportunity to face his accuser and question the witnesses against him. The prosecution tried and failed to prove its case beyond a reasonable doubt.

Jurors eventually acquitted Penny of the lesser criminally negligent homicide count after the prosecution moved to dismiss the more serious second-degree manslaughter count due to a deadlocked jury. However, there is more to Penny’s case than meets the eye: New York pattern jury instructions inform jurors that if the state proves every element of the charged offense beyond a reasonable doubt, they must convict. This means that whenever anyone, including President Donald Trump, exercises their Sixth Amendment right to a jury trial in the New York court system, they are not tried by the truly impartial jury that the Sixth Amendment commands. Why? Because the judge will explicitly—and erroneously—tell jurors that they may not exercise their prerogative to acquit against the evidence. This jury instruction—despite having been upheld by the New York Court of Appeals—renders convictions infirm, violates the Sixth Amendment, and poses serious due process concerns.

While prosecutors had a weak case against Penny, he may also have benefited from key provisions of the Sixth Amendment right to a trial by jury—including the Vicinage Clause. The Vicinage Clause ensures that jurors come from the community where the alleged crime was committed, precisely because the Framers did not want jurors to be far removed from the cases they adjudicate. The Framers envisioned jurors with similar lived experiences. While New Yorkers may disagree as to whether Penny’s response was appropriate—like Penny—many have likely encountered severely mentally ill individuals while riding the subway.

The Framers devised a system where defendants might well be personally known to jurors. Yet in jury selection today, people who know the accused are nearly always struck. By refusing to subject their neighbors to unjust laws or overtly cruel punishment, independent jurors can pass judgment on immoral laws and arbitrary prosecutions. Independent jurors force legislators and prosecutors to be more responsive to the will of the people.

The Vicinage Clause helped ensure Penny’s jury was not too far removed from his predicament. The Clause required jurors in Penny’s case to reside in Manhattan, where it is seemingly impossible to strike every eligible juror who has had an uncomfortable encounter on the subway.

Another key provision of the Sixth Amendment is the right to an impartial jury. What constitutes an appropriately impartial jury for constitutional purposes remains largely undefined. One might well argue that striking all jurors who express sympathy towards jury nullification undermines the constitutional requirement of jury impartiality. Nevertheless, judges and prosecutors have created a framework that all but guarantees that anyone who expresses support for jury nullification—or any other facet of jury independence—is excluded from jury service.

And across the nation, judges routinely mislead jurors with incomplete or even inaccurate jury instructions. Judges say things like “Your role is to be a judge of the facts.” Judges tell jurors they cannot consider punishment in rendering a verdict since sentencing is left up to the judge. Judges inform jurors that they must apply the law even if they disagree with the premise of the law. And perhaps most importantly, judges sometimes tell jurors, “If the state proves every element of the charged offense beyond a reasonable doubt, you must convict.” 

As noted above, the notion that jurors must convict just because the state meets its evidentiary burden is demonstrably false. Jurors can convict if the state proves every element beyond a reasonable doubt but are not obligated to do so.

All this raises the question of why judges and prosecutors insist on misleading criminal jurors about their proper role. Consider the ongoing case of John Moore and Tanner Mansell. They were operating a charter boat in Florida and came across a fishing line they believed to be the work of poachers. They hauled in the line, released several fish, and took the rig back to the marina after notifying state officials. It turned out they were mistaken and had actually stumbled onto a bona fide research project. The US Department of Justice pursued felony charges against Moore and Mansell for theft of property within the “special maritime jurisdiction” of the United States.

A jury reluctantly convicted Moore and Mansell after sending out multiple notes to the judge and nearly deadlocking. The Eleventh Circuit reluctantly affirmed, with one judge—herself a former federal prosecutor—penning a concurrence in which she castigated by name the Assistant United States Attorney who prosecuted the case for “taking a page out of Inspector Javert’s playbook.” She noted that Moore and Mansell “never sought to derive any benefit from their conduct” and have been branded as lifelong felons “for having violated a statute that no reasonable person would understand to prohibit the conduct they engaged in.”

The Cato Institute filed an amicus brief in Moore’s and Mansell’s case, arguing that “[i]t is highly doubtful that a Founding-era jury, fully cognizant of their historic powers and duties, would have branded John Moore and Tanner Mansell lifelong felons for their misguided attempt to fulfill what they perceived to be a civic duty.”

The Framers did not intend for it to be easy to deprive citizens of their liberty, and they established the criminal jury trial as a key procedural safeguard to help ensure that criminal punishment falls only on those acts and individuals that society deems truly culpable.

Prosecutors understand that jurors informed of their Founding-era powers seriously threaten the government’s ability to dispose of cases on its own terms. And many judges—who are disproportionately former government lawyers themselves—are likely to keep going along with it. Thus, even though jurors indisputably have the power to acquit against the evidence, it’s a safe bet prosecutors will do everything in their power to ensure jurors remain ignorant of it.

So, the next time you get summoned for jury duty, rather than view it as a burden, try to see it instead as an opportunity for public service—because you just might have the opportunity to save your neighbor from government oppression. 


Michael Chapman

Recent estimates of job losses in California caused by a $20 minimum wage hike for fast-food employees confirm what classical liberal economics has always taught about the minimum wage, and what Cato experts said about the law last summer.

As Milton Friedman taught, minimum wage laws “increase unemployment” and “the groups that will be hurt the most are the low-paid and the unskilled.”

In California, the increase in the minimum wage for fast-food workers from $16 to $20 was signed into law by Gov. Gavin Newsom (D) on September 28, 2023, and went into effect on April 1, 2024. The law applies to restaurants, coffee shops, and juice bars with at least 60 locations nationwide, such as McDonald’s, Wendy’s, Jersey Mike’s, Del Taco, and Pizza Hut.

That wage hike, which raised the cost of labor, killed 6,166 fast-food jobs between September 2023 and June 2024, the Employment Policies Institute reported in late December 2024. Counting the full year after the law was signed, from September 2023 to September 2024, there were at least 9,600 job losses (and up to 19,300), Edgeworth Economics reported in late November 2024.

The Employment Policies Institute further reported that between September 2023 and June 2024, “total private sector fast-food employment nationwide grew 1.6 percent” while in California it declined 1.1 percent. That decline was steeper than the drop in statewide private employment, which fell only 0.3 percent over the same period.

Citing the Quarterly Census of Employment and Wages, the EPI said the $20 minimum wage “shows crystal-clear negative employment consequences” and “unequivocal job losses.”

Economist Stephen G. Bronars, PhD, with Edgeworth Economics, commented, “The $20 minimum wage harms California’s least experienced workers by causing them to be more expensive, but no more productive, for limited-service restaurants that have traditionally hired young and inexperienced workers. Limited-service restaurants will replace employees with kiosks as businesses adapt to the higher minimum wage.”

California’s minimum wage law for fast-food workers is, like any such law, a price floor; it is a price control on labor. Employers and workers are not free to negotiate the wages they want; the government dictates the price. And this, as decades of evidence shows, puts people out of work. As Cato’s 2024 book The War on Prices explained, minimum wage laws only deliver “symbolic hope to the working poor” and “risk leaving many of the nation’s most vulnerable worse off.” 
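To make the mechanics concrete, here is a minimal sketch of a binding price floor in a linear labor market. Every number below is a hypothetical illustration, not an estimate of California’s fast-food sector; the point is the direction of the effect, not the magnitude.

```python
# A minimal price-floor sketch in a linear labor market.
# All slopes, intercepts, and wages are hypothetical illustrations,
# not estimates of California's fast-food labor market.

def labor_demand(wage: float) -> float:
    """Workers employers want to hire at a given hourly wage."""
    return 1000 - 40 * wage

def labor_supply(wage: float) -> float:
    """Workers willing to work at a given hourly wage."""
    return 20 * wage

# Market-clearing wage: 1000 - 40w = 20w  ->  w = 1000/60, about $16.67.
equilibrium_wage = 1000 / 60
equilibrium_jobs = labor_demand(equilibrium_wage)
assert abs(labor_supply(equilibrium_wage) - equilibrium_jobs) < 1e-9  # market clears

# A binding $20 floor: employers now hire along the demand curve only.
floor_wage = 20.0
jobs_at_floor = labor_demand(floor_wage)

print(f"Clearing wage ${equilibrium_wage:.2f}: {equilibrium_jobs:.0f} jobs")
print(f"${floor_wage:.0f} floor: {jobs_at_floor:.0f} jobs "
      f"({equilibrium_jobs - jobs_at_floor:.0f} jobs lost)")
```

In this toy market, the floor prices roughly 133 of 333 workers out of a job—and those priced out are exactly the workers whose output is worth less than the mandated wage.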


Should Defamation Lawsuits Exist?

Jeffrey Miron and Jacob Winter

Last month, ABC News agreed to pay $15 million to Donald Trump’s future presidential library to settle a defamation suit resulting from anchor George Stephanopoulos’ statements on March 10, 2024.

Defamation occurs when a person communicates false statements about another person that damage their reputation.

Since the founding of the country, defamation has been a tort—a matter for which one person can sue another. Additionally, defamation is a crime punishable by fines and/or jail time in at least 14 states.

Libertarians object to criminal defamation laws because governments can use them to harass and silence criticism. Governments have used this tactic throughout our nation’s history—from the Sedition Act of 1798 to a 2018 case in which New Hampshire police arrested and charged a man for criticizing his town’s police chief. These laws run afoul of the First Amendment’s guarantee of free speech.

The ABC case, however, shows that defamation’s status as a tort is also problematic. Current law bars public figures from winning defamation suits unless they can prove the defendant communicated the statement “with knowledge of or reckless disregard for its falsity.” In the ABC case, Stephanopoulos repeated that Trump had been “found liable for rape,” which is technically inaccurate because the jury found Trump liable for sexual abuse, a separate category under New York law at the time of the alleged incident. Yet the judge who presided over that civil trial later observed that the jury’s finding described conduct that many people commonly understand as rape. Thus, Stephanopoulos’ phrasing was incorrect but not seriously misleading.

Regardless of whether the settlement was justified, this case illustrates that civil defamation suits carry a danger. Even if government officials cannot imprison people who allegedly defame them, they can still use or threaten civil suits that effectively impose fines, jeopardizing freedom of speech.

Defamation suits potentially have benefits. If I spread false rumors that tarnish my neighbor’s reputation, it seems fair they should have redress.

Measuring such subjective harm is difficult, however. And if defamation suits did not exist, my neighbor could say whatever they wanted to correct the record or even defame me in retribution without fear that I would sue. This offers a natural incentive for people not to defame others.

The right question is therefore what legal framework best balances the benefits of defamation suits against their potential for censorship. The best approach is one that maximizes the public’s ability to engage in vigorous debates. Thus, we should eliminate defamation as both a crime and a tort.


Norbert Michel and Jerome Famularo

In the aftermath of the COVID-19 pandemic, the United States experienced a much higher rate of inflation than at any time during the prior few decades. Like the prices of many goods and services, the cost of housing rose rapidly, with the median home price increasing by almost $100,000 (Figure 1). Unsurprisingly, many potential homebuyers were—and still are—shocked and upset.

As in years past, many politicians have latched on to the anger surrounding the recent housing market turmoil. During the presidential debate, Vice President Kamala Harris said, “Here’s the thing: we know that we have a shortage of homes and housing. And the cost of housing is too expensive for far too many people.” Prior to the election, Donald Trump outlined his solutions, and now federal officials want to implement a host of policies, ranging from subsidies to selling federal land.

But is the United States really facing a housing crisis? Or a shortage of homes? And should Americans really expect recent federal policy proposals to make housing more affordable?

For the past few weeks, we’ve run a series of posts on Cato at Liberty to examine these questions, and this post is the last in that series. (Previous posts are here, here, here, here, here, and here.) Each post started with the same set of caveats—here’s a shorter version:

Local officials should relax zoning restrictions and other regulations to make it easier and less costly for people to live where they choose, and to allow builders and developers to more easily meet demand.
Many Americans have taken an economic beating these past few years—real wages have fallen, and prices (including home prices) have not reverted to pre-COVID-19 levels. It is no surprise that so many people have been calling for federal intervention in the hopes of ameliorating that economic burden.
After staying above 5 percent through early 2023, annual inflation has been below 4 percent since June 2023, and below 3 percent since July 2024. But prices remain well above the pre-pandemic level and many consumers are still upset about high prices.
Something similar happened in the housing market. The median home price increased by more than $100,000, all the way to $442,600. The fact that it came back down to $415,000 by June 2024 has given little comfort to prospective buyers and made recent buyers uneasy.

It is no surprise that many people have been calling for increased government intervention in the housing market, just as many have been calling for various types of price controls in response to inflation. But there is already a long and problematic history of government intervention in housing markets, most of which has increased housing costs. Based on historical experience, policymakers should reject calls for further intervention and pare back the current level of federal involvement.

There is No Housing “Crisis”—A Recap

Still, there are many good reasons to reject the housing crisis/market failure story. Here are a few of the main reasons, as detailed throughout the series.

House prices aside, and contrary to the conventional wisdom, Americans at all income levels have done increasingly well over the past several decades. For starters, most consumer goods and services have become more affordable over time, and interest rates steadily declined, suggesting that Americans could afford to spend more on housing than they did in the 1960s and 1970s.
Americans experienced solid income growth over the past five decades. From 1967 to 2023, the share of households earning (in real terms) less than $35,000 fell from 31 percent to 21 percent, and the share earning between $35,000 and $100,000 fell from more than 53 percent to 38 percent. During the same period, the share of households earning more than $100,000 essentially tripled, from 14 percent to 41 percent. Moreover, according to the Bureau of Labor Statistics, the share of Americans making at or below the minimum wage declined from 1.1 percent in 2012 to 0.3 percent in 2022.
These statistics would be consistent with Americans purchasing bigger homes with more amenities, and that’s exactly what they’ve been doing for decades. They’ve been buying larger houses and living in them with fewer people. Holding both the average home size and household size constant, homes have become slightly more affordable, with the share of household income spent on new homes displaying a slightly decreasing trend since 1975. (We previously addressed this issue when Senator Elizabeth Warren (D‑MA) asked her X followers, “You ever wonder how your grandparents bought a home for seven raspberries, but you can’t afford a one-bedroom apartment?”)
Work hours needed to afford rent remained stable from 2007 to 2019. Additionally, both rent and food expenditures as a percentage of income varied little until the pandemic. Thus, the recent spike in prices and rents is anomalous—it has certainly been harmful to many people, but it was not indicative of a long-term trend. To the extent that nominal incomes continue to rise, the real effects from this spike will continue to dissipate.
Even though the US population increased by 130 million people during the past five decades, the homeownership rate remained stable outside of the 2008 crisis. The homeownership rate rose even after prices started spiking in 2019.
A basic estimate of housing availability shows that housing construction has broadly kept up with population growth. For instance, between 2000 and 2021, roughly 93 housing units were permitted for every 100-unit change in population. This relationship is not necessarily optimal, and more housing could certainly make some people better off, but it makes it extremely difficult to support the idea of a widespread housing shortage or market failure. (A minimal sketch of how such a ratio is constructed follows this list.)
Americans have consistently been choosing to rent and buy in more densely populated areas. These choices reveal that people have a strong preference for living in certain areas—for a variety of reasons—and are willing and able to pay to live in those places. Many people have moved from higher-cost areas to lower-cost areas, but most of these moves were from densely populated areas to other densely populated areas. These patterns tend to contribute to higher nominal house prices partly because new land cannot be produced in the same way that other goods can.
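As promised above, here is a minimal sketch of how a permits-per-population ratio of this kind is constructed. The inputs below are placeholders chosen only to reproduce the cited ~93 figure; a real estimate would use Census population data and Census Bureau building-permit counts for 2000–2021.

```python
# How a permits-to-population ratio is formed. Both inputs are
# placeholders that merely reproduce the ~93 ratio cited above.

population_change = 54_000_000  # placeholder: change in population, 2000-2021
units_permitted = 50_000_000    # placeholder: housing units permitted, same span

units_per_100 = 100 * units_permitted / population_change
print(f"{units_per_100:.0f} units permitted per 100-unit change in population")
```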

We are not recounting these facts to prove that everyone is just fine, or that more people wouldn’t benefit from lower rents, lower housing prices, or higher income. And we’re certainly not saying that there isn’t excessive federal involvement in mortgage markets, or too much regulation at either the federal or state and local level. We think most people would be doing even better without all the regulation and federal involvement in housing markets. But it’s not true that most Americans are facing some kind of housing crisis.

Throughout this series, we have addressed the main aspects of why America is not in a housing crisis. In the rest of this post, we tie up some loose ends by addressing a few additional arguments offered by proponents of the housing crisis narrative.

Poverty Is the Problem for ELI Households

Interestingly, many groups have implicitly conceded that most Americans can afford housing. They’ve focused, instead, on the plight of the poorest Americans and the absence of certain types of “affordable” housing. For instance, in 2018 the president of the National Low Income Housing Coalition (NLIHC), Diane Yentel, testified that “the shortage of affordable homes is most severe for extremely low-income (ELI) households.” As an example, she offered “a family of four, with two working parents who earn a combined $25,100 annually.” She also testified that “Housing cost burdens make it more difficult for poor households to accumulate emergency savings.”

Yentel is surely correct that high housing costs make it difficult for those in extreme poverty, but so does the cost of everything else. The problem is no more a housing affordability problem than it is any other type of affordability problem. The problem is poverty itself, and that’s a much broader economic issue.

Fortunately, the type of extreme poverty Yentel points to is not pervasive in America.

In 2018, when Yentel testified, the estimated number of all American households with two working parents and two children was 8,287,304. Of those, 144,974 households made less than or equal to $25,100. That’s 1.75 percent of households with two children and two employed parents, and that’s less than 150,000 families in a nation of more than 330 million people. (It’s about 0.1 percent of all American households.) Policymakers should have frank discussions about why those 150,000 families are in poverty, but they should also acknowledge that those families’ situation is far from typical. (That discussion should also include all the federal transfers available to such families, but we’re not getting into that here.)
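The shares are easy to verify from the figures above. The only number we add is a rough total-household count, which is our assumption, not a figure from the testimony.

```python
# Checking the shares cited above. The group figures come from the post;
# the total-household count is a rough 2018 assumption added for illustration.

group_households = 8_287_304       # two working parents, two children (2018)
poor_in_group = 144_974            # of those, earning <= $25,100

total_us_households = 128_000_000  # assumed rough 2018 total

print(f"{poor_in_group / group_households:.2%} of the group")          # ~1.75%
print(f"{poor_in_group / total_us_households:.2%} of all households")  # ~0.11%
```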

Addressing the “Starter Home” Debate

The COVID-19 pandemic did little to foster a more focused discussion among policymakers on the plight of extremely impoverished Americans. In her 2023 testimony, Yentel stated that “An underlying cause of America’s housing crisis is the severe shortage of rental homes affordable and available to people with the lowest incomes,” and that this shortage “is a structural feature of the country’s housing system, consistently impacting every state and nearly every community.”

To solve this “crisis,” the NLIHC wants Congress to enact “large-scale, sustained investments and reforms.” Their list of preferred policies includes the expanded use of the federal low-income housing tax credit (LIHTC) to build more “affordable” housing units and expanded funding for the national Housing Trust Fund (to at least $40 billion per year). While the NLIHC is opposed to private investors buying “single-family and multi-family properties” because it increases rents, the group supports legislative efforts to “provide more than $150 billion in critical investments” to help create “nearly 1.4 million affordable and accessible homes.”

The NLIHC and other critics of the US housing market also complain about the disappearance of the “starter home,” one that was small and, supposedly, easily affordable (especially for “young potential homebuyers”). In 2024, the Biden administration released its proposals to address this so-called shortage. The list included a new Neighborhood Homes Tax Credit for “the construction or preservation of over 400,000 starter homes in communities throughout the country,” $1.25 billion for the HOME Investment Partnerships Program (HOME) to “construct and rehabilitate affordable rental housing,” a “First-Generation Down Payment Assistance Program,” and a “one-year tax credit of up to $10,000 to middle-class families who sell their starter home.”

The 2024 Economic Report of the President bemoans that “the fraction of all new single-family homes under 1,400 square feet [starter homes] declined from nearly 40 percent in the early 1970s to about 7 percent in the early 2020s.” (Other critics focus on a lower-than-average price, more in line with home price-to-income ratios from earlier decades, such as $200,000 or $300,000.) We do not dispute this declining share, and our second post presented evidence that the size of newly constructed homes increased for most of the past several decades, though it has fallen slightly since 2015.

We do, however, dispute that the federal government should subsidize the construction of “affordable” homes, whether based on the size or price of “starter homes” in the 1960s or 1970s. Federal officials have no idea what the “right” size home is for any neighborhood (and neither do we), but there is no reason that it should be the same size as it was in the 1960s or 1970s. During those two decades, people were becoming increasingly materially well off, and new houses were getting larger and being shared with fewer people than in the 1950s.

As we’ve detailed in this series, those same trends continued, and people built even larger houses with more amenities than in prior years. Many Americans have chosen to buy those smaller “starter homes” from decades past and replace them with larger, better-equipped ones. But the fact that more people would have been able to afford houses if they had been priced lower—or that not everyone can pay for the housing they want—is not a market failure.

In any market, for any given demand, the only way to ensure that everyone can afford the house that they want is to shift the supply curve so far to the right that the market price drops to effectively zero.

We make no judgment as to how low home prices should be, and neither should federal officials. It is worth noting, though, that even the “low” price of $200,000—the price that many critics of the disappearing “starter homes” are currently fascinated with—would not really solve the plight of people in extreme poverty.

For instance, for a family such as the one in the NLIHC’s example, earning $25,000 per year, the monthly mortgage payment on a $200,000 house (with zero down payment) would consume almost all the family’s income. The price alone, or even the equivalent cost in rent, says nothing about such a family’s ability to consistently earn higher income. That is, the plight of such Americans is much broader than simply a housing affordability problem.
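To put a rough number on that claim, here is the standard fixed-rate amortization formula applied to a $200,000 price with zero down. The 7 percent rate and 30-year term are our illustrative assumptions, not figures from the post.

```python
# Standard fixed-rate mortgage payment: M = P * r * (1+r)**n / ((1+r)**n - 1).
# The interest rate and term are illustrative assumptions.

principal = 200_000   # house price, zero down payment
annual_rate = 0.07    # assumed 30-year fixed rate
n_months = 30 * 12
r = annual_rate / 12

monthly = principal * r * (1 + r) ** n_months / ((1 + r) ** n_months - 1)
annual = 12 * monthly

income = 25_000
print(f"Monthly payment: ${monthly:,.0f}")                      # ~$1,331
print(f"Share of a ${income:,} income: {annual / income:.0%}")  # ~64%
```

Roughly 64 percent of the family’s pre-tax income goes to principal and interest alone; add property taxes, insurance, and upkeep, and the payment indeed approaches the family’s entire income.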

Another cautionary tale against focusing on the lack of “starter homes” is that the market does adjust to meet demand for smaller homes, even if not to everyone’s satisfaction. The size of newly constructed homes has fallen a bit over the past ten years, and local governments—in places such as Oregon, Texas, and Utah—have started changing their zoning restrictions to allow for the construction of smaller units on lots that were previously reserved for larger homes.

Again, these changes demonstrate that markets are working to provide the kinds of housing that people demand. They’re certainly not indicative of a market failure or a crisis, much less one that requires some kind of massive federal program to build “smaller” or more “affordable” homes. Conducting such an effort—which many groups and politicians are calling for—is just another type of industrial policy, bound to end as badly as industrial policies typically do.

Just as federal officials cannot know precisely which consumer goods will be in highest demand in the future, they cannot know which types of housing will be most desired. It is very likely that such a large-scale federal housing program would produce more “starter homes” than people actually want. As with any industrial policy, it is likely that this sort of federal intervention in housing markets will result in “a host of ‘unseen’ costs, such as indirect costs paid by others, deadweight loss for the economy as a whole, opportunity costs, misallocation of resources, unintended consequences, moral hazard and adverse selection, and uncertainty inherent in a system dependent on politics, not the market.”

Conclusion

Our intent for this series has been to demonstrate that claims of a severe shortage or market failure—a housing crisis—in the United States are exaggerated. Long before the recent spike in home prices, many policymakers and groups regularly called for massive federal intervention, and they still justify those calls on the grounds that America suffers from a severe shortage of housing. That claim is an abuse of terms.

Over the past several decades, with a growing population, Americans at all income levels have done increasingly well. Income has grown, most consumer goods and services have become more affordable over time, and fewer people are in poverty. Americans have, for the most part, been purchasing bigger and better-equipped homes, and sharing that space with fewer people. Things could certainly be better, but the notion that the United States suffers from a severe shortage of housing, rising to crisis-level proportions that only massive federal programs can fix, is comical.

Speaking of how things could be better, there’s too much federal intervention in the housing market, and most of it is through the financial system. For instance, through the Federal Housing Administration and Fannie Mae and Freddie Mac, the federal government makes it easier for millions of people to get home mortgages. This arrangement subsidizes debt, not ownership.

Financial consequences aside, this arrangement brings more buyers into the market, so it increases demand. And because housing supply is always somewhat constrained—it takes time to build, and land cannot be mass-produced to meet demand—the result is upward pressure on prices. In the absence of this kind of federal involvement, it is likely that housing in the United States would be even more affordable. For instance, the slightly decreasing trend (since 1975) in the share of income Americans spend on new homes would likely have decreased more.

Throw in the financial consequences of this debt subsidy arrangement, and paring back federal involvement in housing is a no-brainer.

In closing, we acknowledge that not everyone in America is as materially well off as they could be, that housing markets are not perfect, and that some people would benefit from lower rents and housing prices. But this preference for cheaper housing does not equate to a crisis or market failure, and it does not justify federal intervention. There is already too much government involvement in housing markets. It should be pared back, and the federal government should remain neutral in Americans’ decision to rent or buy. It should do so because it would make markets work better, not because markets have failed. 
