Bryan Caplan

If you read the endnotes for Build, Baby, Build, you’ll learn about all of the papers I could find on the connection between fertility and housing prices/​housing regulation. I’m afraid the total is only three:

Simon, Curtis, and Robert Tamura. 2009. “Do Higher Rents Discourage Fertility?” Regional Science and Urban Economics 39: 33–42.
Mulder, Clara, and Francesco Billari. 2010. “Homeownership Regimes and Low Fertility.” Housing Studies 25: 527–41.
Shoag, Daniel, and Lauren Russell. 2018. “Land Use Regulations and Fertility Rates.” In One Hundred Years of Zoning and the Future of Cities, edited by Amnon Lehavi, pp. 139–49.

All three articles affirm that lower housing prices and/​or less housing regulation raises fertility. But to be honest, the main reason I’m convinced of the natalist power of housing deregulation is not the research but common sense. Specifically:

Higher housing prices make young adults more likely to keep living with their parents.
Young adults who live with their parents are unlikely to marry.
Even married young adults who live with their parents are unlikely to have kids.

I call this the problem of “basement fertility”: Living in your parents’ basement is a powerful form of contraception. Make housing a lot cheaper, and young adults will form new households — and new families — sooner.

The main doubt: Housing deregulation allows greater population density, and some smart people are convinced that density causally reduces fertility. Several people have recently waved Rotella et al.’s “Increasing Population Densities Predict Decreasing Fertility Rates Over Time: A 174‐​Nation Investigation” (American Psychologist, 2021) in my virtual face. While Rotella et al. admit that their main evidence is not really causal, they appeal to animal experiments where density per se slashes fertility:

In nonhuman animal studies, higher population densities have been associated with reduced reproduction rates (Fowler, 1981, 1987). Further, experimental work suggests that this relationship is causal: Organisms downregulate their fertility rates in higher densities (e.g., Both, 1998; Dhondt et al., 1992; Leips et al., 2009; see Sng et al., 2017; for a recent review of this literature).

Later, Rotella et al. elaborate:

In this view, adaptive behavioral responses depend on ecological constraints, which can differ in high‐ and low‐​density populations. Low‐​density environments are often characterized by high resource availability per individual, and lower intrapopulation competition for resources. In such conditions, it is more adaptive for individuals to exploit resources at a faster pace, to reproduce earlier, and have more children. In contrast, in more dense environments, there is greater competition between individuals. For individuals to compete successfully in such an environment, one needs to build relevant skills and knowledge, which in turn delays reproductive efforts. Moreover, it is likely that in high‐​density contexts, offspring also require more investment to become competitive enough to survive and reproduce. Thus, it is comparatively more advantageous to invest more heavily in fewer children in population‐​dense environments.

If you actually read Rotella et al., you’ll find that they measure population density at the national level. Get a country’s total population, divide it by the total land area, and you get their density measure. Which yields something like the following global map:

A few critical remarks are in order.

1. If animal experiments are your inspiration, national population density is a bizarre measure. Should we imagine that Muscovites will have lots of kids because their country includes millions of square miles of Siberian emptiness? Animals don’t care about populations beyond their fields of vision. Should we expect humans to be so different?
2. Even if you take Rotella et al.’s approach as gospel, their paper simply has zero to say about housing or urbanization. Packing your people into fewer houses or taller buildings doesn’t change national density, which remains total population divided by total land area.
3. Suppose we switch to a more localized measure of population density. The fear that deregulation will raise this measure of density and thereby reduce fertility is now coherent. But is it correct?

a. “High housing prices suppress fertility” starts with a strong presumption in its favor, and has a clear policy implication for natalists: deregulate so prices fall and fertility rises.

b. Much of the observed correlation between local density and fertility is clearly reverse causation. People who want large families tend to move to spacious housing; people who don’t want large families move to pricey downtowns.

c. “Density suppresses fertility” and “High housing prices suppress fertility” are very hard to distinguish empirically. After all, why do urbanites live in such small places? In large part, because large residences in urban areas are extremely expensive. Imagine, though, that a 5000 square foot apartment in Manhattan rented for $1000 a month. Should we really think that New Yorkers wouldn’t spring for the extra elbow room? And once they have this extra elbow room, will the fertility effect really vanish because they’re too high off the ground?

d. I do not claim that housing prices are the only factor that shifts fertility. Culture matters, foresight matters, and contrary to what you’ve heard, so do baby bonuses. But as far as I know, dense cities with cheap, spacious housing simply don’t exist in the First World. And under laissez‐​faire, they probably would.

e. On balance, full deregulation would very likely increase urbanization. As I’ve explained before:

Current regulation strangles urban construction, and heavily restricts suburban construction. If you got rid of this regulation, skyscrapers really would start going up all over high‐​priced cities – and millions of urban commuters would swiftly relocate to occupy these new buildings. Families with children would naturally be less‐​eager to go urban, but even they might be tempted by large, cheap apartments across the street from their jobs.

Plenty of other folks would respond by moving into all of the newly vacant – and suddenly cheap – suburban homes. This could conceivably fully satisfy suburban demand, but the more likely result is that developers would also take advantage of deregulation to subdivide existing lots and build lots more single‐​family homes. And of course other developers would buy up neighborhoods of old single‐​family homes, bulldoze them, and replace them with massive cheap apartment complexes.

f. So would deregulation raise or lower density? It depends on the measure. Any localized measure that takes population and divides it by land area will rise, even in suburbs, because a deregulated world would abolish minimum lot sizes. But a more sensible density measure, such as people per square foot of indoor living space, would likely plummet.

To be clear, housing prices probably aren’t the main cause of falling fertility. I put more blame on laborious parenting philosophies, intermediate foresight, and absurd credential inflation. But cheaper housing would almost surely help, and freeing housing markets is the best known way to make housing cheap.


Colin Grabow

Washington State Ferries (WSF) has certainly seen better days. With over 3,500 canceled sailings last year and just 15 of its 21 vessels reliably operating, a recent Seattle Times editorial described the ferry system as “in crisis” and characterized its fleet as “antiquated” and “depleted.” Such language is apt. With 11 of the system’s ferries at least 40 years old and WSF five short of the 26 vessels it considers ideal, new vessels are badly needed. Unfortunately, none are projected to arrive until 2028 at the earliest.

The extended delivery timeline is sufficiently disruptive that it has become a topic of this year’s governor’s race, with considerable criticism directed at the state’s decision to procure ferries powered by hybrid electric engines—a move some observers allege has complicated the acquisition process. At most, however, this is only a proximate cause of WSF’s ferry woes. Far more deserving of blame are protectionist maritime laws that prohibit the purchase of vessels from the international market. Such restrictions mean that WSF—in fact, all US ferry systems—must provide service with one hand tied behind their backs.

That Washington urgently needs to revamp its ferry fleet isn’t news. Five years ago, the state passed legislation allowing a contract extension with Seattle shipyard Vigor—which has built WSF’s last ten vessels—to build up to five more large ferries using hybrid electric technology. But negotiations with the shipyard then hit a snag. While Washington had pegged the cost of new ferries at $188 million each in 2018—an estimate that rose to $249 million in 2022—the price quoted by Vigor for the first such ferry was over $400 million.

Ouch.

Washington responded by rebidding the contract and changing its law so shipyards outside the state could compete to build the vessels. That, however, has meant delays in the acquisition process, mounting frustration among ferry users, and the ongoing exchange of barbs over the new propulsion system.

But this controversy misses the bigger picture. Washington’s chief obstacle to cost‐​effectively acquiring new ferries isn’t rooted in technology but protectionism. One only needs to look across Washington’s international border to see why.

In late 2019, only two months after Washington announced its plan to purchase new hybrid electric ferries, Canadian ferry operator BC Ferries ordered four vessels with the same technology from a European shipbuilder. All four were delivered before the end of 2021. Featuring a capacity approximately one‐​third that of the vessels sought by WSF (450 passengers and crew and 47 vehicles versus 1,500 passengers and 144 vehicles), the ferries cost about $38 million each—less than a sixth of the new WSF ferries’ estimated price.

WSF and BC Ferries’ contrasting experiences are largely (if not entirely) due to the latter’s ability to buy vessels from international shipbuilders. When BC Ferries announced its desire to purchase four new hybrid electric ferries, 18 shipyards from around the world indicated their interest (of which 9 were selected to compete). Notably, not a single Canadian shipyard bid on the project.

In contrast, WSF must contend with the 1920 Jones Act, which applies to the domestic waterborne transportation of merchandise (e.g. vehicles), and the Passenger Vessel Services Act of 1886, which restricts the domestic waterborne transportation of people. Both laws require vessels to be constructed in US shipyards that are far less numerous and far less competitive than their international counterparts. Significantly higher prices for WSF’s new ferries are a foregone conclusion.

And the protectionist headaches don’t stop there.

After the vessels are delivered, WSF still faces the task of finding mariners to crew them. That’s unlikely to be easy, with a January report from Washington’s Department of Transportation pointing out that the ferry system faces “severe staff shortages that are unprecedented in its 70‐​year history.” Maritime protectionism figures here too. While BC Ferries and other international ferry systems can hire skilled foreign mariners to help mitigate such crew shortages, the report points out that WSF is “precluded from doing [so] by the 1920 Jones Act” (US‐​flagged vessels are restricted in the employment of foreign nationals to green card holders).

Unfortunately, Jones Act‐​induced complications to US ferry systems go beyond Washington.

Like WSF, the Alaska Marine Highway System (AMHS) also needs new vessels, including a hybrid electric replacement for its 1964‐​built ferry Tustumena. Described by the system’s director as thirty years past its prime, the ferry has a history of structural issues and saw its return to service from an annual overhaul recently delayed by the discovery of wasted steel. Just maintaining the vessel costs $2 million per year. As the Anchorage Daily News has pointed out, however, finding the money and a shipyard willing to build the new ferry is proving a challenge:

The replacement vessel is expected to cost around $350 million. Sam Dapcevich, a marine highway spokesperson, said the state has so far secured nearly $243 million, counting around $60 million in expected federal formula funds. That leaves more than $107 million needed to fund the project. The state is hoping most — if not all — of the balance can be covered through a federal grant.

As of March, amid months of delays, the Tustumena replacement vessel has yet to go out to bid. The design was changed to include batteries, reflecting a federal requirement for reduced emissions. A bid attempt in 2022 yielded no takers [emphasis and hyperlink added]. Last summer, [AMHS Director Craig] Tornga said he wanted to select a shipyard by the end of the year. In December, Tornga said he wanted to put out a request for proposals in January. Delays have piled on as Tornga held meetings with several shipyard officials to ensure that unlike in 2022, shipyards would, in fact, bid on the project.

On the East Coast, meanwhile, Massachusetts’ Steamship Authority hasn’t even bothered constructing new purpose‐​built vessels to replace three of its aging (and deteriorating) ferries. Instead, it purchased three offshore service vessels—all at least fifteen years of age—used to support Gulf of Mexico offshore oil and gas operations and is converting them into ferries. The project has already experienced hiccups, with the conversions’ cost increasing from a projected $9 million per vessel to $13.6 million.

Transporting over 131 million passengers in 2019, ferries form an important part of the US transportation system. They are particularly vital for parts of the country, including Alaska and the greater Puget Sound region, for which land‐​based transport alternatives are either less direct and more time‐​consuming or non‐​existent. For these communities, ferry service is a lifeline and their difficulties demand an effective response from policymakers. Taking aim at costly and ineffective maritime protectionism that impedes US ferry systems’ ability to obtain the vessels and mariners they need would be an excellent start.


Marc Joffe

Two bills in the California State Legislature propose to transfer wealth from social media companies to local news providers. Although the rhetoric behind these bills sounds worthy, their ultimate effect will be to lower the barrier between the state and the free press envisioned in the First Amendment to the US Constitution.

The narrative underlying these bills is well known to media consumers: internet behemoths have sucked the life out of local journalism, depriving residents of information about local governments and community organizations. By taxing firms like Google and Meta and distributing the proceeds to local media, legislators propose to counter this trend.

A big question raised by such a policy is which organizations should receive the proceeds. If elected officials and bureaucrats can pick and choose which newspapers and broadcasters to subsidize, they can reward sympathetic journalists. Those not receiving funds at first may change their coverage to favor politically preferred interests to obtain future funding.

To his credit, the author of California’s newest journalism bill, SB 1327, has attempted to make the subsidies formula‐​based, constraining the ability of state officials to pick favorites. But the complicated eligibility criteria leave the door open to discretion.

SB 1327 would offer tax credits to local media companies and grants to non‐​profit media concerns to the extent that they provide “qualified services,” which are defined as “gathering, preparing, recording, directing the recording of, producing, collecting, photographing, writing, editing, reporting, presenting, or publishing original local community news for dissemination to the local community.”

Although the legislation contains an extended definition of a “local community,” it does not provide a definition of “news.” That omission is especially critical today when journalists have largely abandoned the twentieth‐​century goal of objective reporting in favor of mixing fact and opinion. As professors at the Walter Cronkite School of Journalism recently observed:

Newsroom leaders are confronting a generation of increasingly diverse young journalists struggling to reconcile traditional news standards with their concepts of “cultural context,” “identity,” “point of view,” and “advocacy journalism.”

If journalists do not even agree among themselves about the distinction between news and opinion, why should we assume California officials will separate the two in a disinterested way? Or would websites covering local events from a liberal perspective receive tax credits and grants while those applying a conservative perspective are denied?

Confidence in the state’s ability to carry out a local news tax credit and grant program in a neutral manner might be undermined by a review of how California’s attorney general executes the task of naming and summarizing ballot measures. As CalMatters explains:

California election law requires those descriptions and labels to be “true and impartial” and “neither be an argument, nor be likely to create prejudice, for or against the proposed measure.”

But:

[E]ditorials from the Los Angeles Times, the San Francisco Chronicle and the San Jose Mercury News have alternately accused (former California Attorney General Xavier) Becerra of “playing favorites,” “skewing the language” of the ballot and “(t)ricking the electorate.”

While many of us in older generations harbor positive recollections of local newspapers and mourn their demise, technological advances have created new models for covering community happenings. As I observed recently, YouTubers have provided exhaustive coverage of political corruption in the Chicago suburb of Dolton, Illinois.

Similarly, my Cato colleague Paul Matzko noted that TikTok creators offered informed perspectives on the East Palestine, Ohio train derailment while local media “regurgitated corporate press releases and government statements.”

Today, anyone with a smartphone can create and upload video content and make it widely available. Blogging sites like WordPress and Substack make it easy for journalists to directly reach the public: costly printing presses are no longer needed. One academic analysis found that, in recent years, one new local digital news provider has been opening for each local newspaper that closes.

So rather than redirect funds to legacy media outlets, state lawmakers might instead consider allowing private actors to get on with the business of sharing neighborhood news without state intervention.


Fast Facts about Social Security

by Romina Boccia and Ivane Nachkebia

This Thursday, Cato is hosting the Social Security Symposium: A Global Perspective from 8:45 a.m. to 2:30 p.m. (EST). You can join us in person at the Cato headquarters in Washington DC (breakfast and lunch will be served) or tune in online. We hope to see you there!

Social Security, the largest federal government program, is unsustainable as currently structured. Social Security consists of Old Age and Survivors Insurance (OASI) and Disability Insurance (DI). Unless stated otherwise, Social Security will refer to OASI in this document. This fact sheet lays out key fiscal details legislators and the public should know about Social Security to help them examine the unsustainability of this massive federal entitlement program.

Social Security is the single largest federal government program, spending $1.2 trillion in 2023 or 4.5 percent of gross domestic product (GDP).

Social Security spending will nearly double and reach $2.1 trillion or 5.2 percent of GDP by 2033. By then, the government will spend more on Social Security annually than on the entire defense and nondefense discretionary budget.

For the first time in the program’s history, the number of beneficiaries will exceed 60 million by 2025.

The average monthly benefit for an individual was $1,760 in 2023.

The maximum monthly benefit for an individual is $4,873.

Because initial benefit levels are indexed to wage growth, Social Security benefits are growing much faster than inflation. A maximum‐​benefit‐​eligible beneficiary retiring in 2045 would receive over $23,000 more in annual benefits that year, after adjusting for inflation, than a comparable worker who retired in 2020.

Since the program’s inception, life expectancy at birth has increased by nearly 16 years. Yet, Social Security’s eligibility age has only increased by 2 years.

The ratio of covered workers to Social Security beneficiaries has declined from 41.9 in 1945 to 3.1 in 2024, putting substantial strain on the program’s pay‐​as‐​you‐​go system due to fewer workers funding retirees’ benefits.

Social Security has been contributing to federal deficits and grappling with increasing budget shortfalls.

Social Security is projected to run a $184 billion deficit in 2024. By 2033, the deficit is expected to more than double, reaching $402 billion (see Figure 1).

Social Security’s trust fund is a liability, not an asset. Social Security holds no real assets beyond IOUs against future US taxpayers. Those IOUs, which amount to $2.6 trillion as of 2024, are part of the $34.6 trillion gross national debt.

When the Social Security trust fund ledger is depleted, projected to occur by 2033, all beneficiaries, regardless of age, income, or need, will face a 21 percent benefit cut.

Social Security’s 75-year unfunded obligation (combined OASI and DI)—the difference between the present value of projected spending and projected payroll tax revenues—is $25.2 trillion, comparable in size to nearly the entire publicly held debt of $27.5 trillion in 2024. Together with Medicare, Social Security’s OASI and DI programs are responsible for 100 percent of US unfunded obligations.

To address the 75‐​year Social Security funding shortfall, Congress would have to increase the payroll tax rate from 12.4 percent to 17.5 percent. As a result, median earners ($48,000/year) would see their payroll taxes rise by $2,450 annually.
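A quick back-of-the-envelope illustration of the arithmetic behind that last figure (my own sketch, not a calculation from the trustees or from Cato’s report): the required increase is 5.1 percentage points, and 5.1 percent of $48,000 is roughly $2,450.

```python
# Illustrative arithmetic only: extra annual payroll tax implied by raising the
# combined Social Security rate from 12.4 percent to 17.5 percent.
current_rate, required_rate = 0.124, 0.175

for wages in (48_000, 75_000, 100_000):  # annual earnings; $48,000 is roughly the median earner
    extra = (required_rate - current_rate) * wages
    print(f"${wages:,}/year -> about ${extra:,.0f} more in payroll taxes per year")
# $48,000/year -> about $2,448 more, matching the roughly $2,450 cited above
```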

Social Security reform options:

Adopt a More Accurate Inflation Adjustment: Congress should update the index used to calculate cost-of-living adjustments for current Social Security benefits from the CPI‑W to the chained consumer price index. The chained CPI more accurately reflects changes in consumer behavior in response to price changes and is estimated to reduce Social Security’s 75-year funding gap by one-fifth.

Index Initial Benefits to Prices: Social Security benefits are growing much faster than inflation because initial benefits are indexed to wage growth. Switching to a benefit formula that adjusts workers’ initial benefits for inflation, rather than the growth in average wages, would close 80 percent of the program’s 75‐​year funding gap while preserving the purchasing power of seniors.

Increase the Social Security Eligibility Age: Congress should raise the early and full Social Security eligibility ages by 3 years each (to 65 and 70) and index both to increases in longevity. Raising the full retirement age to 70 would reduce deficits by $121 billion between 2024 and 2032.

Transition from an Earnings‐​related Scheme to Flat Benefits: Social Security uses a costly earnings‐​related structure instead of a poverty‐​prevention, flat benefit system, as is common in many other countries. Depending on the level at which a flat benefit is set, such a regime can save taxpayers’ money by providing antipoverty protection in old age at a lower cost than an earnings‐​related regime does.

Uncapping the Payroll Tax Won’t Save Social Security: Eliminating the Social Security tax cap (thus taxing all earned income) would only close half of the program’s long‐​term funding shortfall. If the tax cap were eliminated in 2024, Social Security would return to running deficits by 2029, merely five years later.


Thomas A. Berry

In the First Step Act of 2018, Congress significantly altered how mandatory minimum penalties attach to repeat violations of section 924(c), a federal firearms offense. This section makes it a crime to use or carry a firearm during and in relation to, or in furtherance of, a crime of violence or drug trafficking offense. A first conviction results in a mandatory minimum of at least five years, and each subsequent conviction requires a twenty‐​five‐​year minimum sentence.

Prior to the First Step Act, the rule was that sentences “shall [not] run concurrently with any other term of imprisonment,” so sentences for multiple section 924(c) counts had to be stacked to run consecutively. For example, a first-time offender convicted of three section 924(c) possession counts in a single indictment would be sentenced to a mandatory fifty-five years for the firearms counts (five years for the first count plus twenty-five years for each of the two additional counts)—on top of the sentence for the underlying crime of violence or drug trafficking.

As a representative of the Judicial Conference of the United States once explained, the mandatory minimums in effect before the First Step Act resulted in sentences that were “irrational,” “unduly harsh,” “cruel and unusual, unwise and unjust.” In the First Step Act, Congress addressed this problem. The Act clarified that instead of treating section 924(c) convictions in a single proceeding as automatically qualifying a defendant as a repeat offender subject to consecutive twenty‐​five‐​year mandatory minimums for each additional count, a prior conviction must have become “final” before a second violation is subject to these greatly enhanced minimum penalties.

The First Step Act mandates that its new sentencing rules not only apply to all offenses committed after its enactment but also “shall apply to any offense that was committed before the date of enactment of this Act, if a sentence for the offense has not been imposed as of such date of enactment.”

The Act thus struck a balance between fairness and finality—although sentences that were already final upon the Act’s enactment would not be vacated, any sentence imposed after the Act’s enactment would be under the new, fairer rules.

But now, the courts of appeals have split on an important question of statutory interpretation. What happens if a sentence was originally imposed before the Act’s enactment, but was then subsequently vacated (for some reason unrelated to the Act) after the Act’s enactment? In that circumstance, a district court must impose a new sentence.

Should the district court impose that new sentence under the old, draconian sentencing rules or under the new First Step Act rules? The Fifth Circuit has held that the old rules must apply, reasoning that even a vacated sentence qualifies as a sentence that “has … been imposed as of [the Act’s] date of enactment.”

Now the Supreme Court has been asked to review this question, and Cato has joined the American Civil Liberties Union, ACLU of Texas, and the Due Process Institute in an amicus brief urging the court to take the case.

In our brief, we explain that the Fifth Circuit’s approach is wrong as a textual matter. When a sentence is vacated, it is treated for all intents and purposes as if it was never put into effect—it becomes a legal nullity. It would be incongruous for an invalid sentence, which is treated as having no force in every other respect, to become the only thing standing in the way of a new, reasonable sentence under the rules Congress has set out.

Further, our brief explains that the Fifth Circuit’s interpretation does not serve the interests of finality. Whenever a sentence is vacated, a new sentence must be imposed. Applying the new sentencing rules of the First Step Act to such cases would do nothing to upset the status quo for sentences imposed before the Act’s enactment that remain in effect.

To treat a resentenced defendant as if the First Step Act was never enacted would needlessly prolong the types of injustices that the Act was meant to end. The Supreme Court should take this case, resolve the circuit split, and reverse the Fifth Circuit.


Brent Skorup, Anastasia P. Boden, and Christopher Barnewolt

Every day, Americans find themselves and their businesses shunted into administrative proceedings at agencies like the Federal Trade Commission. In these proceedings, individuals and regulated parties litigate, often for years, within an agency that simultaneously writes the applicable regulations, enforces those rules before its hearing officers or courts, adjudicates initial complaints, and hears appeals.

For many Americans, it is jarring to find themselves subject to severe financial, reputational, and professional penalties in adjudications very different from a courtroom. The Federal Rules of Evidence do not apply, juries are nonexistent, and the hearing officers are overseen and at risk of removal by the head (or heads) of the agency. Quietly and routinely, people lose their businesses and their livelihoods. Many accept an early settlement offer or lighter penalties rather than attempt the risky and expensive process to vindicate their rights in federal court. Some parties, however, are challenging these pernicious agency practices that have accumulated over decades.

Intuit Inc. markets and sells TurboTax, the popular tax‐​preparation software used by millions of Americans annually. A few years ago, the FTC investigated Intuit, believing certain TurboTax ads were deceptive and harmed consumers, and issued a complaint before its own agency judges. But when the FTC simultaneously sought a preliminary injunction against Intuit in federal court, the judge rejected the FTC’s theory. Having failed to persuade the court, the FTC simply chose to reissue its complaint internally—acting as judge, jury, and executioner.

Unsurprisingly, the FTC then ruled in favor of its own allegations and imposed new disclosure requirements on Intuit. Over the last forty‐​six years, the FTC has lost just five of the over 150 cases adjudicated internally on the merits. Intuit sued, arguing the FTC’s processes and new requirements had several deficiencies, including violating the company’s due process protections.

The Cato Institute submitted an amicus brief to the Fifth Circuit in Intuit v. FTC, urging the court to make it clear that the FTC’s exercise of legislative, executive, and judicial power is unconstitutional. Our brief discusses how the Founders opposed the combination of all three powers of government in one body as dangerous to liberty and reviews how the Framers intended the Constitution to establish the separation of powers. As James Madison said in Federalist 47,

The accumulation of all powers legislative, executive and judiciary in the same hands, whether of one, a few or many, and whether hereditary, self appointed, or elective, may justly be pronounced the very definition of tyranny.

We also discuss how allowing the FTC to act as both prosecutor and judge in agency proceedings violates parties’ due process rights. Finally, we argue that Congress has unconstitutionally delegated to the FTC unconstrained authority to choose between federal court and in-house agency adjudication, and that this violates the nondelegation doctrine. Congress needed to provide more limits on the FTC’s broad power to decide which venue—agency or court—and which legal processes apply to parties like Intuit.

Our brief urges the Fifth Circuit to recognize the unconstitutionality of the FTC’s combination of legislative, executive, and judicial powers and to vacate the FTC’s order.


Colleen Hroncich

“Startling Insights From a New Preschool Study” blares the headline from a SciTechDaily article on a new review of early childhood education research. The article is attributed to the Teachers College at Columbia University, home of two of the study’s authors.

But is this really startling given that most studies show mixed results at best?

The new report, “Unsettled science on longer‐​run effects of early education,” notes that most of the positive enthusiasm for taxpayer support of preschool comes from two projects in the 1960s and 1970s. The Abecedarian Preschool Study and the Perry Preschool Project each enrolled fewer than 60 children, included family support beyond the preschool programs, and were costly. It’s not surprising they have never been replicated on a large scale.

Given the impracticality of duplicating these tiny programs, it’s rather absurd that, as the “Unsettled science” report puts it, “The rigorous and notably positive evidence from these two studies all but ended the debate over the longer‐​run effectiveness of early childhood education programs.” Yet when it comes to broader preschool programs, these benefits are largely absent.

Head Start, run by the US Department of Health and Human Services (HHS), is the largest preschool program in the United States. In 2012, HHS released the results of the most comprehensive study of the program, finding little or no effect on student outcomes that persisted through third grade. Troublingly, as the “Unsettled science” report notes, there was even evidence that children who participated in Head Start “displayed more emotional symptoms than children who lost the lottery.”

These poor results were despite the program costing more than $7 billion per year at the time ($7,900 per child). It now costs around $12 billion, or more than $14,000 per child. But there is still no evidence of persistent benefits from the program.

The report also references a 2022 study of Tennessee’s Voluntary Pre‑K program. I recently had the opportunity to testify at a Congressional Joint Economic Committee hearing on “Building Blocks for Success: Investing in Early Childhood Education.” In my testimony, I noted,

There’s no consistent evidence that large‐​scale preschool programs are beneficial; and there’s evidence they can even be harmful. In January 2022, researchers from Vanderbilt University released a randomized study of Tennessee’s Voluntary Pre‑K initiative that found that children who participated in the program experienced “significantly negative effects” compared with the children who did not. Harms included worse academic performance and higher likelihood to have discipline issues and be referred for special education services. The results were so shocking that the researchers had to “go back and do robustness checks every which way from Sunday,” according to Dale Farran, one of the lead researchers. “At least for poor children, it turns out that something is not better than nothing,” she said.

The only positive outcomes in the “Unsettled science” report were in Boston, where there was a recent lottery‐​based evaluation of cohorts entering the city’s public pre‑k program between 1997 and 2003. Researchers found positive impacts on high school graduation and college enrollment, as well as fewer disciplinary problems in high school. But neither this nor a separate study of Boston’s program showed any positive impacts before high school.

The new study acknowledges, “the Boston evaluation leaves unanswered many important questions: In particular, what went on in the classrooms, and how can we explain the program’s longer‐​run impacts, given that there were no detectable achievement and behavioral impacts across the first eight grades of school?”

The inconsistent and sometimes negative results from large‐​scale preschool programs point to an important fact when it comes to any educational endeavors: one size does not fit all. A program that works for some children may be terrible for others. That’s why it’s crucial that the federal government, especially, stays out of early childhood education.

The first sentence of the “Unsettled science” report claims, “Early education programs are widely believed to be effective public investments for helping children succeed in school and for reducing income and race‐​based achievement gaps.” If it’s “widely believed” that large‐​scale preschool programs are an effective tool, that’s because people want it to be so, not because the evidence backs up that belief.


David Inserra

The House Judiciary Committee recently released an 800+ page report detailing the efforts of the Biden White House to censor constitutionally protected speech and books by pressuring large tech companies. While the report covers some of the same ground as previous reporting, it adds new, damning conversations among high‐​ranking tech executives, as well as more censorial demands from the White House. 

And while it’s great that we are finding out about these new developments—where reporters and congressional committees can slowly extract bits and pieces of what the government demanded and how companies felt compelled to acquiesce—the American people deserve better. We need radical transparency from the government so that Americans know what the government is saying to private companies and what they are demanding be censored. 

The report, The Censorship-Industrial Complex: How Top Biden White House Officials Coerced Big Tech to Censor Americans, True Information, and Critics of the Biden Administration, shows that multiple tech companies, including Facebook, Amazon, and YouTube, changed their policies—primarily around COVID-19—in response to pressure from the Biden White House.

In emails with his top executives, Meta CEO Mark Zuckerberg explicitly stated that Meta’s removal of the Lab Leak theory “seems like a good reminder that when we compromise our standards due to pressure from an administration in either direction, we’ll often regret it later.” Meta executives made clear that they felt compelled to change their policy to remove more COVID “misinformation” because they needed the administration to help them on other business issues, citing the “bigger fish to fry with the Administration—data flows, etc.” And doing nothing “doesn’t seem [like] a great place for us to be.”

When Biden officials reached out to Amazon, its teams worked to change their policy around vaccine‐​critical books, citing “the impetus for this request is criticism from the Biden administration.” Amazon executives testified that they wanted to “accelerate” this policy change to be completed before a forthcoming call with the White House. Employees wrote that their work streams were “due to criticism from the Biden people.”

One week after contact with the White House, the new policy was in place, in time for the call. Further pressure from the White House prompted additional policy steps to limit certain types of anti-vax content.

YouTube also felt pressured to change its COVID-19 misinformation policy, with executives noting that assuaging the administration was important because the company, like Meta, wanted “to work closely with the administration on multiple policy fronts.” White House staffers emphasized that their concerns were “shared at the highest (and I mean highest) levels of the White House.”

YouTube went so far as to give the administration a preview of the policy it was finalizing and to request the administration’s feedback.

These companies clearly and consistently felt the pressure from the administration to change their policies to remove constitutionally protected speech. They felt they needed to comply because they interpreted the administration’s statements and actions as implicit threats—if the companies did not censor content, then other important parts of their business would suffer the wrath of the White House. It almost reads as if the government is implying “It’s a nice business you’ve got there.”

This approach should be concerning for everyone regardless of their political views. One could swap out the Biden administration actions on COVID-19 for the Trump administration on Black Lives Matter or pro‐​Palestinian speech. Most people can see the danger in allowing their political opponents to have such power.

The left would not stand for a Trump White House secretly demanding that Facebook or YouTube remove or suppress certain Palestinian claims because they are “false,” “harmful,” or contributing to violence at campus protests. Imagine such demands being readily understood by executives at these companies as threats to their business that leave them with little choice but to comply.

Alternatively, progressives would rightly be enraged if then‐​President Trump had pressured Amazon or another bookstore to remove Black Lives Matter or DEI‐​themed books and then the store sent its new restrictive policy to Trump officials for feedback before launching it. We should apply the same standards to any administration, whether we agree or disagree with their beliefs.

This is wrong. No matter who is in the White House and no matter what type of speech is being suppressed.

The Supreme Court is currently reviewing Murthy v. Missouri and NRA v. Vullo, related cases of government censorship by proxy, or “jawboning.” There are a variety of practical and legal considerations that can make it difficult to neatly draw the line between lawful government communications with companies and unlawful coercion and compulsion.

Of course, the government should be able to flag illegal activity on platforms or provide useful context to companies about a given situation or user. And blatant attempts to force companies to take down protected speech are wrong. But there is a grey area in which the government may believe it is only encouraging or suggesting enforcement actions or policy changes. Yet how do companies perceive such pressure, especially when it is sustained, strident, and coming from multiple parts of the government? These are difficult needles to thread. Draft a narrow standard for what governments are allowed to say, and companies may miss out on truly helpful information from the government. Draft a broad standard, and government jawboning can continue unabated.

Regardless of what the court decides, one of the greatest obstacles is the fact that so much of these communications—the helpful FYIs, the truly non‐​aggressive requests for information, persuasive conversations, belligerent demands, coded or even explicit threats—happen in secret. It takes a multi‐​billionaire buying a major social media company or years of investigation by a congressional committee to uncover some of what the government is doing to these companies. 

Americans’ foundational right to express themselves free from government censorship should not hang on what the government does in secret. We need a system of transparency that requires government employees to report any time they request or encourage a private company to silence, suppress, or limit speech. We already demand that government employees report all sorts of other activities ranging from contacts with the media to foreign travel. Certainly, Americans’ expressive rights are worth the minor inconvenience to government employees who will need to fill out a short form whenever they try to have a company suppress American speech.

Such transparency has multiple benefits. First, the mere fact that government employees need to document their demands will curtail the worst sorts of government pressure. Sunlight is the best disinfectant. Second, a comprehensive record of government communications with tech companies can inform policymakers and courts about the different ways that government actors communicate with companies, helping them identify potential limits or restrictions that could be placed on executive communications. And third, it means that specific individuals whose speech has been silenced as a result of government communication with tech companies have the evidence they need to defend their rights in the courts.

Americans of all beliefs, backgrounds, and politics deserve to have their expression protected from government interference. Defending this liberty requires shining a light on government communications with private companies.


Jai Kedia

The Federal Reserve underwent a massive regime shift following the 2008 financial crisis. It incorporated several new tools into its monetary policy arsenal, ranging from interest on excess reserves to large‐​scale asset purchases (“quantitative easing”) to deal with the crisis. Unfortunately, as with most public institutions, power given is seldom relinquished.

As a recent Cato CMFA article noted, if the Fed is expected to massively increase its balance sheet in response to every major crisis, it will never return to a pre‐​2008 operating system. Furthermore, the COVID-19 pandemic spurred a further round of massive quantitative easing, so much so that instead of shrinking back down, the Fed’s asset holdings are now over one‐​third the size of the entire US commercial banking sector.

It stands to reason that such a massive shift in central banking should have effects on the financial system. A recent Wall Street Journal opinion piece makes this argument and uses options market data to suggest that the Fed’s involvement in financial markets has increased stock market volatility.

To be fair, the article points to a single observation, one that may not reflect any underlying trend.

But there are ways to check for any underlying trend, such as vector autoregressions (VARs). In this post, I use a VAR method and demonstrate that the Fed’s 2008 regime shift has indeed had serious repercussions for market volatility as measured with the CBOE Volatility Index (“VIX”). (For those interested, I provide more details on the methodology after discussing the results.)

Figure 1 shows the response of the VIX to an unexpected one‐​percent increase in Fed asset holdings. It is immediately clear that the relationship between the Fed’s balance sheet and volatility has flipped since 2008. Prior to the financial crisis (the blue line in Figure 1), when the Fed purchased more assets, market volatility slightly declined. A 1 percent increase in asset holdings reduced volatility by around 2 percent at peak efficacy. After the crisis (the red line in Figure 1), under the Fed’s quantitative easing framework, an increase in asset holdings significantly increased short‐​term market volatility. In this period, a 1 percent increase in assets led to a 6 percent increase in volatility at peak efficacy.

Figure 1: Impulse Response of VIX to a 1% Increase in Fed Asset Holdings

And there’s more.

Figure 2 shows that the amount of market volatility that is attributable to Fed balance sheet shocks increased after the 2008 regime shift. Put simply, the percentage of all fluctuations in market volatility that is attributable to the Fed has increased. Prior to the financial crisis, the Fed accounted for around 6 percent of the fluctuations in the VIX. Following the crisis, it has accounted for well over 10 percent of the movements in the VIX. So, along with increasing the severity of market volatility fluctuations, the Fed has also become more responsible for overall market volatility post‐​2008.

Though it is not shown in Figure 2, the portion of volatility now attributable to the Fed is similar to the degree by which overall demand and supply factors influence the VIX. That is, the Fed’s actions are now roughly as important in driving volatility as overall economic conditions.

Figure 2: Fed’s Share of VIX Forecast Error Variance Decomposition

There may be several possible explanations for this flip in the transmission of the Fed’s asset purchases to volatility under the new framework. Before the crisis, asset purchases were not in themselves considered a means of easing or tightening. Market participants may now view Fed asset changes as a signal of weakening economic conditions due to the scale and frequency of quantitative easing measures. Conversely, before 2008, balance sheet expansions were less common and typically viewed as a routine adjustment or a response to relatively minor financial disruptions. The nature of the assets being purchased also changed post‐​2008, when the Fed, for the first time, expanded its asset purchases to include a large quantity of mortgage‐​backed securities. And, of course, in response to COVID-19, the Fed also began purchasing corporate bonds and ETFs.

Once again, the empirical evidence supports reining in the Fed; the economy works better when the Fed does less, not more.

For those interested, here is a brief description of the VAR method used for this post. A vector autoregression with four lags is fitted to quarterly data on four variables—output gap (i.e., how far US real GDP is away from its potential capacity), inflation computed using the GDP deflator, percentage change in the VIX, and percentage change in the Fed’s total asset holdings.[1] The VIX uses stock index option prices to measure the market’s expectation of short‐​term volatility. Since the goal of this post is to compare the Fed’s effect on volatility under its post‐​2008 operating framework, the VAR is estimated over two data samples—1991 through 2007 (“Great Moderation”) and 2009 through 2019 (“Post Financial Crisis”). The start year is set at 1991 to match the first full year when VIX data are available. The full year of 2008 is skipped to ignore the transition between the Fed’s pre‐​crisis and post‐​crisis operating framework.
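For readers who want a concrete sense of what such an exercise looks like, here is a minimal sketch in Python using statsmodels. It is not the code behind the figures above: the FRED series codes, the quarterly resampling choices, and the use of the Fed’s WALCL series for total assets are all my assumptions. WALCL only begins in late 2002, so the sketch runs on the post-crisis sample; replicating the 1991–2007 sample would require an extended balance-sheet series such as Bao et al. (2018).

```python
# Minimal sketch (assumptions noted above): a four-lag VAR on quarterly data for
# the output gap, GDP-deflator inflation, the percentage change in the VIX, and
# the percentage change in Fed total assets, over the post-crisis sample.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas_datareader.data import DataReader
from statsmodels.tsa.api import VAR

start, end = "2009-01-01", "2019-12-31"  # "Post Financial Crisis" sample

gdp    = DataReader("GDPC1",  "fred", start, end)   # real GDP, quarterly
pot    = DataReader("GDPPOT", "fred", start, end)   # potential real GDP, quarterly
defl   = DataReader("GDPDEF", "fred", start, end)   # GDP deflator, quarterly
vix    = DataReader("VIXCLS", "fred", start, end).resample("QS").mean()  # VIX, quarterly average
assets = DataReader("WALCL",  "fred", start, end).resample("QS").mean()  # Fed total assets, quarterly average

df = pd.DataFrame({
    "output_gap":   100 * (np.log(gdp["GDPC1"]) - np.log(pot["GDPPOT"])),
    "inflation":    100 * np.log(defl["GDPDEF"]).diff(),
    "d_vix":        100 * np.log(vix["VIXCLS"]).diff(),
    "d_fed_assets": 100 * np.log(assets["WALCL"]).diff(),
}).dropna()

results = VAR(df).fit(4)  # four lags, as described in the post

# Impulse response of the VIX to a balance-sheet shock (cf. Figure 1)
irf = results.irf(12)
irf.plot(orth=True, impulse="d_fed_assets", response="d_vix")
plt.show()

# Forecast-error variance decomposition: share of each variable's fluctuations
# attributable to each shock, including the balance-sheet shock (cf. Figure 2)
results.fevd(12).summary()
```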

The Fed’s framework has arguably changed again following the COVID-19 pandemic, but there is insufficient data since then to estimate any meaningful statistical model. Moreover, the Fed is still operating in an abundant reserves framework and paying interest on reserves, both of which are the major defining characteristics of the new operating regime.

This VAR method offers important advantages when trying to empirically describe the relationship between the Fed’s balance sheet and volatility. First, it includes macro indicators (output gap and inflation) so that any common changes in volatility or asset holdings resulting from responses to general economic conditions are not mistakenly attributed to each other. Second, the method accounts for the cross-dependence of volatility and asset holdings on each other; that is, it will not mistakenly attribute to the balance sheet an effect that actually runs from volatility to the Fed’s asset holdings.

The author thanks Jerome Famularo and Nicholas Thielman for their invaluable research assistance in the preparation of this essay.

[1] All data is collected from the FRED database. Compiled Fed balance sheet data is only available directly from the Fed from late 2002. Bao et al. (2018) extend this dataset all the way back to the Fed’s inception. Their dataset is used from 1991 until the start of the Fed’s own data series.


Bryan Caplan

Historic preservation is one of the most crowd‐​pleasing rationales for preventing development. Critics often ridicule the rationale’s abuse. New York has an historic parking lot, and so does Washington, DC. Rarely, however, does anyone challenge the principle of historic preservation. My new Build, Baby, Build: The Science and Ethics of Housing Regulation does precisely that.

I first decided to address historic preservation while reading Triumph of the City by Ed Glaeser, chair of Harvard’s Economics Department. He is unquestionably a hero of the Yes In My Backyard (YIMBY) movement. He’s probably the greatest YIMBY hero in academia. But to my ears, Glaeser still praises historic preservation with faint damnation:

In cities and suburban enclaves alike, opposition to change means blocking new development and stopping new infrastructure projects. Residents are in effect saying “not in my backyard.” In older cities like New York, NIMBYism hides under the cover of preservationism, perverting the worthy cause of preserving the most beautiful reminders of our past into an attempt to freeze vast neighborhoods filled with undistinguished architecture.

This passage inspired a Build, Baby, Build scene where I invite Ed for a ride in my time machine. First, I take him back to 1928 and show him the original Waldorf‐​Astoria Hotel. The hotel was undeniably gorgeous.

But in the late 1920s, regulators didn’t try very hard to “preserve the most beautiful reminders of our past.” So about a century ago, developers were able to demolish the hotel.

And then… they built the iconic Empire State Building in its stead! Take a look.

On reflection, there is a widely‐​ignored trade‐​off between preserving past greatness and creating new greatness. Almost every beloved building stands on the footprint of an even older building. If historic preservation had existed throughout history, many more truly ancient structures would still be standing. But everything more recent would, at best, be less conveniently located. Many wouldn’t exist at all! After all, there’s little point in building the Empire State Building anywhere other than a city center.

Without historic preservation laws, profit‐​maximizing developers would still consider historic value. Tenants and buyers like historic stuff, so they’ll pay a premium for it. Developers like good publicity, so they have an added incentive to loudly proclaim their enduring eagerness to keep history alive. Philanthropists may even buy historic buildings and turn them into museums, or partial museums.

I say these free‐​market forces deliver all the historic preservation a reasonable person would ever want. I know, many of my fellow economists will hail the positive externalities of doing even more. But we shouldn’t give them the time of day. Much of what mainstream economists credulously call “positive externalities” is just Social Desirability Bias — our all‐​too‐​human tendency to voice pretty lies.

“History is priceless” is a lovely yet absurd slogan. A few architectural historians aside, people barely care about 99 percent of protected buildings. When was the last time you smiled at a random structure built before your birth? What’s your tenth favorite building in Paris, never mind San Francisco?

But even if you take the positive externalities more seriously than I do, we’re not choosing between positive externalities and nothing. We’re choosing between the positive externalities of the buildings we have, and positive externalities of the buildings we could have instead. People in the past were right to believe that the future could easily outshine the past. Why shouldn’t we believe the same?
