Walter Olson

A federal grand jury yesterday charged former president Donald Trump with four criminal counts in connection with his attempt to overturn the results of the 2020 election and remain in power. The four counts are: conspiracy to defraud the United States, conspiracy to obstruct an official proceeding, obstruction of and attempt to obstruct an official proceeding, and conspiracy to deprive persons of protected rights. The indictment cites six unnamed co‐​conspirators. Five can be identified from context as Trump advisers or administration officials, four of whom are lawyers. The sixth is a yet‐​unidentified political consultant.

Each indictment of Trump — yesterday’s was the third, with a fourth expected soon from Georgia — is different in its own way. Let’s start by looking at a few of the ways in which this one differs from the two previous, and then go on to briefly preview how a few legal issues may unfold in the new prosecution.

The ultimate gravity of the conduct. Of the criminal proceedings under way against Trump, this is the one that squarely addresses the conduct that posed the gravest threat to republican liberty, his attempt to nullify the decision of American voters and steal a second term as president they had declined to give him. Serious as are the Florida federal prosecution’s charges of obstruction of justice and improper retention of national security documents, such conduct by an ex‐​president is unlikely to result in a constitutional crisis or bloodshed in the halls of government or the streets. This one, not so much.

No mere regulatory infraction. The Manhattan charges against Trump for falsifying business records to conceal payoffs to a paramour appear to be based at least in part on unusually strict requirements specific to New York. That is one reason many commentators have been cool toward them. In 1905 one court drew a familiar distinction as follows: “An offense malum in se is properly defined as one which is naturally evil as adjudged by the sense of a civilized community, whereas an act malum prohibitum is wrong only because made so by statute.” It is hard to read the new indictment without concluding that if proved, many elements of the misconduct would be broadly condemned by civilized opinion as intrinsically wrong.

Novel legal and constitutional issues. Notwithstanding the odd frivolous theory in Trump’s defense about the Presidential Records Act, the classified‐​documents indictment is likely to be cut and dried in many of the legal issues it raises. The courts each year process a substantial number of cases alleging improper document retention as well as obstruction of justice, and are unlikely to carve out a special exemption for ex‐​presidents.

Yesterday’s indictment, in contrast, is likely to raise legal issues that are relatively unfamiliar, uncertain, or both. Few legal commentators are deeply familiar with all four of the statutory bases on which the grand jury filed charges, and intuitions can be deceptive: in applying the law against defrauding the United States, for example, courts have not always construed the elements of fraud in the same way they do in some other fraud areas, and the interpretations have also changed. It is not entirely settled how the elements of obstructing an official proceeding will ultimately shake out in January 6 cases, and so forth.

In other words, caution is called for at this stage in predicting the extent to which judges will trim back the scope of this prosecution, if they do. It is widely agreed that the First Amendment protects some telling of lies for political benefit, and also that it protects (as, in effect, lobbying) some efforts to persuade government officials to carry out acts that are wicked and unconstitutional. It is equally certain that the First Amendment does not protect every act of speech or persuasion that someone might retroactively try to jam into these categories. If you shut down a pending courtroom trial by phoning in a false report of a dangerous gas leak, you cannot get off by arguing that you were just exercising your speech and lobbying rights, nor are you likely to get off by arguing that you knew there was a gas stove in the court cafeteria and were basing your 911 call on a sincere belief that there was an elevated risk of asthma from stray methane.

In short, it matters in law and under the First Amendment whether speech and lobbying intended to obstruct proceedings or nullify rights was taken in good faith or otherwise, and deceitfully or otherwise. That is probably one reason why the indictment cites extensive cause to believe that Trump knew his claims of election fraud to be false, rather than wandering around in some sort of fugue state in which he might reasonably believe them to be true.


Fitch Downgrades U.S. Debt

by

Romina Boccia

Fitch Ratings, one of three major credit rating agencies, downgraded the U.S. debt from AAA (the highest possible rating) to AA+ yesterday, explaining:

“The rating downgrade of the United States reflects the expected fiscal deterioration over the next three years, a high and growing general government debt burden, and the erosion of governance…”

A ratings downgrade is intended to serve as a signal to markets that an issuer of bonds is less likely to repay interest or principal. In this case, the ratings downgrade is minor, from an “extremely strong capacity to meet financial commitments” to “very strong” capacity.

This is the second time in U.S. history that a major credit agency has downgraded the country’s debt. Standard & Poor’s downgraded the U.S. debt rating in 2011. Rates on 10‐​year Treasury bonds fell after the announcement. While past performance is not necessarily indicative of the future, it’s not clear that interest costs will necessarily rise as a result of the downgrade. The U.S. economy is relatively strong, despite the drag from high and rising government debt, and the U.S. dollar remains the pre‐​eminent global reserve currency. And yet, the long‐​term fiscal trajectory is abysmal, with more than $100 trillion in deficits projected over the next 30 years as debt surges to an unprecedented 180 percent of gross domestic product (GDP).

While many commentators are focused on whether the Fitch credit rating downgrade reflects discontent with the nature of U.S. debt limit negotiations, we shouldn’t mistake a symptom for the cause. The United States faces a potentially catastrophic fiscal crisis in the long run, if spending and debt continue growing unabated. The responsible choice at the debt limit is to adopt reforms that address the driving forces behind the growth in the debt. The outcome of the May debt limit deal is indicative of legislative myopia and the tendency to kick the can down the road.

Congress should address the largest drivers of spending growth, chiefly Medicare, other major federal health programs, and Social Security. Following the inadequate debt limit deal that left the biggest cost drivers unaddressed, some members of Congress are eyeing the possibility of establishing a fiscal commission. A well-designed commission, modeled after the successful Base Realignment and Closure Commission (BRAC), can help Congress overcome political gridlock and signal to markets and credit rating agencies that U.S. legislators are committed to stabilizing the growth in the U.S. debt.

Whether this credit rating downgrade turns out to be a blip or a more significant market event, we know that the U.S. budget is on a highly unsustainable path that threatens to undermine American prosperity and security if it goes unaddressed for much longer. Congress should seize this moment to establish a mechanism for stabilizing the growth in the debt. A BRAC‐​like commission can help Congress see this through.

Recommended Reading:

From Debt Ceiling Crisis to Debt Crisis

National Security Implications of Unsustainable Spending and Debt

Medicare and Social Security Are Responsible for 95 Percent of U.S. Unfunded Obligations

Designing a BRAC‐​Like Fiscal Commission To Stabilize the Debt

Cato Explainer Video: The Price of a U.S. Credit Rating Downgrade


Jeffrey A. Singer

This June, Rep. Sheila Jackson-Lee (D-TX) and more than 40 co-sponsors introduced H.R. 4272, the “Stop Fentanyl Now Act.” The bill essentially directs funding for the Secretary of Health and Human Services to develop programs to “provide outreach and awareness to the dangers of fentanyl” and enhances grants for “treatment and recovery services.” It also increases penalties and fines for certain drug-related crimes.

While most of the proposal offers nothing new, Section 8 represents a welcome attempt to chip away at federal drug paraphernalia laws that undermine harm reduction.

The title of Section 8 is “Exclusion Of Fentanyl Drug Testing Equipment From Treatment As ‘Drug Paraphernalia.’” The bill would amend Section 422 of the Controlled Substances Act (21 U.S.C. Section 863) that prohibits the interstate sale or transport of “drug paraphernalia.” Section 422(d) defines “drug paraphernalia” and provides a list of examples. Section 422(f) lists exemptions.

This is the current language of Section 422(f):

(f) Exemptions

This section shall not apply to—

(1) any person authorized by local, State, or Federal law to manufacture, possess, or distribute such items; or

(2) any item that, in the normal lawful course of business, is imported, exported, transported, or sold through the mail or by any other means, and traditionally intended for use with tobacco products, including any pipe, paper, or accessory.

H.R. 4272 would add to Section 422(f):

(3) the possession, sale, or purchase of fentanyl drug testing equipment, including fentanyl test strips.

A week before Rep. Jackson‐​Lee introduced this bill, Rep. Jasmine Crockett (D‑TX) introduced H.R. 3563, called the STRIP Act. The STRIP Act clarifies in statute that “fentanyl drug testing equipment including fentanyl test strips” shall be excluded from the list of federally prohibited drug paraphernalia.

While both bills provide a welcome change to federal drug paraphernalia law, their wording is too narrow. As I explained to Rep. Jackson‐​Lee and her colleagues on the House Judiciary Committee Subcommittee on Crime and Government Surveillance last March:

[F]entanyl is just the latest manifestation of what drug policy analysts call “the iron law of prohibition.” A variant of what economists call the Alchian-Allen Effect, the shorthand version of the iron law states, “The harder the law enforcement, the harder the drug.” Enforcing prohibition incentivizes those who market prohibited substances to develop more potent forms that are easier to smuggle in smaller sizes and can be subdivided into more units to sell… The iron law of prohibition is why cannabis THC concentration has grown over the years. It is what brought crack cocaine into the cocaine market. And it made fentanyl replace heroin as the primary cause of overdose deaths in the United States… The iron law of prohibition cannot be repealed. Already we have been getting troubling reports of the veterinary tranquilizer xylazine—drug users call it “tranq”—becoming an additive to fentanyl and other illicit narcotics. This tranquilizer greatly potentiates opioids’ effects, producing more powerful “highs.” Adding this potentiator again enables illicit opioids to be smuggled in smaller sizes and subdivided into more units to sell… What makes xylazine more deadly is that it is not an opioid, and overdoses from it that cause people to stop breathing cannot be reversed with naloxone.

Fortunately, the company that manufactures fentanyl test strips now makes xylazine test strips. But neither the STRIP Act nor the Stop Fentanyl Now Act excludes xylazine test strips from Section 422.

I also warned Subcommittee members that the iron law of prohibition is responsible for the recent appearance of a new, more potent category of synthetic opioids called nitazenes. There are not yet any test kits for nitazenes. Let’s hope medical device manufacturers come up with them soon.

In a blog post about the STRIP Act, I wrote, “As long as policymakers persist in prosecuting America’s longest war, the war on drugs, the iron law of prohibition guarantees there will always be a new and more potent drug to wage war against.”

Here’s a suggestion for lawmakers: exempt any drug testing equipment from drug paraphernalia laws. Better yet, repeal drug paraphernalia laws altogether.


Romina Boccia and Dominik Lett

Within weeks of passing new discretionary spending limits, Congress is proposing to increase deficits by abusing emergency designations to prop up agency budgets.

The May Fiscal Responsibility Act established caps on discretionary funding but provided an exception for spending designated as “emergency.” Now the leaders of the Senate Appropriations Committee have said they plan to go around those spending caps by adding $8 billion for defense and $5.7 billion for non-defense emergencies.

Is the added spending really for emergencies?

Let’s look at the Commerce, Justice, and Science (CJS) appropriations bill to see whether its $2.4 billion in emergency spending really is urgent and disaster‐​related.

Science and technology

The Commerce, Justice, and Science appropriations bill (S. 2321) provides $1.2 billion of emergency spending for science and technology agencies (see Table 1). Of that total, the National Aeronautics and Space Administration (NASA) receives $296 million for infrastructure and compliance with environmental regulations—70 percent of NASA’s entire budget for construction and environmental compliance. The National Science Foundation receives a whopping $420 million for unspecified “research.”

Building vehicles, constructing new facilities, and doing research are all well within the purview of normal operations for scientific agencies. Federally funded mission research, like that done by the National Science Foundation, has a long and useful history, but it is not without its limitations. Federal research and development subsidies can crowd out private R&D, leading to worse returns on investment. Congress should justify spending taxpayer dollars that could be more productively used in the private sector, especially if designating them for emergencies, which allows for spending in excess of agreed‐​upon cap levels.

Law enforcement and criminal justice

Law enforcement receives nearly $1 billion in emergency spending (see Table 2)—half of that is for the broad category of “salaries and expenses.” All funding for local and state law enforcement grants for presidential nominating conventions—a full $100 million—is designated as emergency spending. Likewise, 86 percent of infrastructure funding for federal prisons—$179 million—is designated as emergency spending.

None of these line items is unexpected, sudden, or temporary. Salaries, expenses, and recurring, predictable events like presidential nominating conventions fall within normal budgetary operations. Law enforcement agencies deserve the same funding scrutiny that other agencies receive. For FY24, Senate appropriators plan on providing $38 billion to federal law enforcement agencies. Most of the year-to-year spending increases for law enforcement were provided through the abuse of emergency designations. If law enforcement requires additional resources, Congress should provide them through regular appropriations.

Economic aid

The Economic Development Administration (EDA) receives $25 million in emergency spending. EDA’s track record of poor performance merits spending cuts, not increases. As Cato’s Chris Edwards argues, “Federal funding of local projects is inefficient for many reasons, and it is not affordable given ongoing federal deficits of more than $1.5 trillion a year.”

Congress is supposed to first authorize programs and then appropriate taxpayer dollars. Increasingly, Congress skips the first part, reducing oversight and allowing for more wasteful spending. EDA’s authorization lapsed in 2008, yet Congress plans to spend $1.4 billion for FY24 on it. EDA is a ripe target for elimination. If Congress chooses to continue funding its operations, it should first re‐​authorize it and then fund it with regular appropriations.

Reject unwarranted emergency spending

Escape valves for urgent, sudden needs are necessary for statutory spending limits to operate effectively. However, abuse of emergency spending to prop up agency budgets, as Senate appropriators are proposing, undermines trust in the federal government’s fiscal commitments and contributes to a worsening fiscal trajectory.

A wide range of promising reforms are available to address the abuse of emergency designations. One such option: notional emergency spending accounts. Using a mechanism similar to CUTGO, Congress could account for emergency spending and offset it, reducing future abuse, increasing transparency, and strengthening fiscal responsibility. Tracking emergency spending (and associated interest costs) and reducing discretionary limits over the following five years would deter irresponsible emergency spending and incentivize forward-looking budgetary planning.

Emergency designations are a mechanism to avoid the difficult but important process of considering budgetary trade‐​offs. They evade spending limits, reducing oversight, promoting waste, and contributing to America’s growing debt crisis. In 2018, a pre‐​pandemic Congress produced the same appropriation bill without a single emergency designation. Congress is capable of budgeting more responsibly. Legislators should reject unjustified and unnecessary emergency spending.


Bill Kristol Backs a Bad, Old Idea

by

Paul Matzko

When Fox News settled for nearly $800 million with Dominion Voting Systems, it avoided having to admit that it promoted lies in its coverage of Trump’s attempt to overturn the 2020 election results. That settlement is now the basis of an attempt by the Media and Democracy Project (MAD) to block a Philadelphia television station, Fox 29, from having its license renewed by the FCC. MAD’s petition recently received a letter of support from Bill Kristol, the Never Trump conservative and a current editor at The Bulwark.

The MAD petition hinges on an infrequently enforced FCC policy about “news distortion.” It is a high bar to clear. To qualify, the reported news must not only be false but falsified. That’s the “distortion” angle. Selective reporting, like focusing on an unrepresentative framing of an event, wouldn’t count. For example, a journalist could choose to do their segment at a post‐​George Floyd protest in the summer of 2020 either with a fiery backdrop late at night or in front of peacefully marching protesters earlier in the day. Neither choice would count as distortion. It could, however, be distortion if a journalist were to pay actors to stage a fake demonstration.

While I found much of Fox News’s coverage of the 2020 election to be a shocking dereliction of journalistic responsibility — and friends don’t let friends rely on *any* cable news channel as their primary source of information — I have my doubts about whether it rises to the level of news distortion. Selective, biased, or even knowingly false reporting isn’t enough. Fox News would have had to manufacture the stories themselves, not just report on lies and rumors promoted by others.

But let’s set that uncertainty aside for a second and simply assume that the FCC will find Fox News responsible for news distortion. Is that a good system that operates in the public interest? And should conservatives like Bill Kristol support such a system?

A quick look at the history of the news distortion standard offers a cautionary tale for those with an understandable but misguided desire to use State power to punish falsity. The news distortion policy was not created in a vacuum and was not a product of disinterested civil servants seeking to serve the common good. It was concocted under pressure from politicians who wanted to punish disfavored speech and suppress political dissent.

The news distortion policy began in 1969 with a series of FCC investigations into complaints about journalistic staging. These were not just complaints from ordinary citizens; they came from members of Congress. While the complaints covered quite a range of topics — from allegations of staged marijuana parties on college campuses to selective editing of interviews with whistleblowers on Pentagon waste — the most widely referenced involved news coverage of the 1968 Democratic National Convention protests in Chicago.

In particular, Democratic congresspeople were angry that journalists were, in their opinion, exaggerating police violence against protestors. One Senator accused a camera crew of dressing up a “girl hippie” with a bloody bandage and sending her up to the police line to shout, “Don’t hit me,” on cue. Supposedly wounded protestors would rise up like Lazarus as soon as the cameras turned off.

It’s worth noting that none of the 1969 investigations found any merit to these complaints. And historians are in broad agreement that the ‘68 protests had plenty of certifiable police brutality and angry protestors without any need for journalistic fabrication. (I studied w/​David Farber; read his book!)

So what was the point of these news distortion claims? It was an attempt to abuse a government agency to delegitimize democratic opposition. (And Democratic opposition, for that matter.) If you were a pro‐​war Democratic congressman supporting the pro‐​war Democratic presidential nominee Hubert Humphrey as the DNC delegates defied the primary results that favored anti‐​war candidate Eugene McCarthy, you weren’t appreciative of journalists drawing attention to your unpopular convention machinations. Nor would you appreciate exposes of the party’s willingness to rely on sometimes brutal law enforcement tactics to suppress their critics.

So filing a complaint with the FCC for news distortion — which could then require the responsible news network to explain its conduct in a potentially embarrassing public spectacle — was a way of pressuring TV networks to shape future coverage in a way more favorable to the politicians filing the complaints.

In other words, complaints about news distortion were *themselves* an attempt at distorting the news!

We rightly condemn President Richard Nixon for sending Charles Colson to CBS in 1970 to threaten the network with targeted FCC regulatory enforcement if it didn’t back off its critical coverage of the Vietnam War effort. (It worked.) We should be just as condemnatory of how Congress pressured the FCC into creating a novel news distortion standard in 1969 to suppress critical news coverage.

It’s notable that the FCC’s rules were capacious enough in ‘69 & ‘70 to enable both parties to simultaneously target their opponents. Nixon leaned on the fairness doctrine while congressional Democrats used news distortion. It’s a reminder that when government agencies are granted the power to control speech — even accidentally — it creates an opportunity for political entrepreneurs to find ways to abuse those powers in order to extract maximum partisan advantage. That’s an invitation for a constant, seesawing, no‐​holds‐​barred brawl for control of the levers of regulatory power; the temporary winner(s) get to punish their ideological enemies and reward their allies. Free speech gets caught in the crossfire.

With that history in mind, Bill Kristol’s support for a newly invigorated news distortion policy comes across as naive. (To be fair, as you can see from the picture above, Kristol was merely a callow intern at the White House in 1970 when all this was happening.) The news distortion policy was censorship via the backdoor.

And conservatives have good reason to be leery of boosting the FCC’s authority to police news content. As I have written about at length, while the FCC’s various backdoor speech regulations have punished radicals from both Left and Right, conservatives have been a particular target. I used to ask liberal proponents of a revived Fairness Doctrine standard if they really wanted to gift these powers to the Trump administration. I suppose now I have to ask conservative proponents of an energized news distortion standard if they really want to gift these powers to the Biden (or Harris) administration.

Rather than calling for broader enforcement of the news distortion policy, small government conservatives should instead be calling for its repeal.

Crossposted from the author’s Substack. Subscribe for weekly insights from the intersection of history, policy, and politics.


Jennifer Huddleston

In previous years, debates about online speech at a state level had largely focused on issues such as concerns about anti‐​conservative bias or online radicalization. More recently, however, many states have instead focused on the impact of social media platforms and the internet on kids and teens.

While many of the proponents of these bills may have good intentions, these proposals have significant consequences for parents, children, and all internet users when it comes to privacy and speech. States that have enacted such legislation have faced legal challenges on First Amendment grounds and cases are currently pending in the courts.

In general, there have been four categories of legislation at a state level: age‐​appropriate design codes, age‐​verification and internet access restrictions, content‐​specific age‐​verification laws, and digital literacy proposals. With many state legislatures recessing this summer, there is an opportunity to analyze what the emerging patchwork of such laws looks like, the potential consequences of these actions, and what — if any — positive policies have happened.

Age‐​Appropriate Design Codes and Age Verification for Online Activity in the US

Signed into law in September 2022, the California Age-Appropriate Design Code Act is the first of its kind in the United States. The law obliges businesses to conduct risk assessments of their data management practices and to estimate the age of child users with a higher degree of certainty than existing laws require, controlling their access to certain content. While such a law is well intended, it has raised serious concerns about privacy and free speech and is currently being challenged in court. Other states are considering bills that would require age verification for using social media.

Such proposals echo developments in European countries such as the UK, which is considering its own Online Safety Bill to protect young people from harmful content, a bill that also raises serious concerns around the censorship of lawful speech, privacy, and encryption. On speech, such initiatives threaten the right to anonymous speech. On privacy, kids and adults alike are likely to be harmed by invasive yet currently unsafe age-verification technologies in an online ecosystem where at least 80 percent of businesses claim to have been hacked at least once. On encryption, some have advocated introducing backdoors into end-to-end encryption to catch malicious actors who harm kids, while overlooking the importance of encrypted channels for kids to safely report abusers.

This legislative session, several U.S. states have contemplated bills that would require additional steps to verify who may have a user account on social media or other websites, each with its own approach. But many share common concerns. For example, a cluster of states, including Pennsylvania, Ohio, Connecticut, and Louisiana, have sought to mandate explicit parental consent for minors creating or operating a social media account. Pennsylvania has proposed legislation stating that a minor cannot have a social media account unless a parent or guardian grants explicit written consent. Ohio and Connecticut have followed a similar path, requiring parental consent for children under 16 to use social media. Wisconsin recently considered a bill to require social media companies to verify the age of users and to obtain parental consent for children to create accounts. More than 60 such bills were introduced in 2023, and at least nine states considered age verification, age-appropriate design codes, or other restrictions on young people’s internet usage. Most of these proposals failed; however, a few significant age verification bills were still pending or had been enacted as of July.

The Governor of Louisiana signed the Secure Online Child Interaction and Age Limitation Act (SB 162) into law on June 28. The law not only requires parental consent for minors but expressly requires companies to verify the age of all Louisiana account holders. As will be discussed below, this is often the case with age-verification laws more generally. Similarly, Arkansas passed the Social Media Safety Act, requiring children under 18 to obtain parental consent before creating a social media account. Utah went a step further, banning access to social media after 10:30 pm for all children under 18 unless parents modify the settings.

Consequences of Age‐​Appropriate Design Codes

The implementation of overly broad policies raises significant privacy concerns, not only for young users but for everyone. The process of accurately determining the age of an underage social media user inherently necessitates determining the age of all users. In a context where social media companies may be held accountable for errors in age determination, the request for sensitive information such as proof of ID becomes a requirement for all users. This poses immediate questions regarding the type of identification data to be collected and how companies might utilize this information before the age verification process is complete.

On a practical level, social media platforms cannot solely depend on their internal capabilities for age verification, thus necessitating reliance on third-party vendors. This reliance presents a further question: who possesses the necessary infrastructure to manage such data collection? Currently, MindGeek, the parent company of PornHub, stands as one of the dominant international market players in age verification. Many conservatives may be uneasy about such a company, or about the very social media platforms they distrust, holding the IDs or biometrics of young users. For example, the Arkansas Social Media Safety Act relies on third-party companies to verify users’ personal information.

Options that do not require the collection of sensitive documents — like government IDs or birth certificates — are likely to rely on biometrics. In such cases, there are concerns not only about this information falling into the hands of malevolent actors, but also about the accuracy of such technology in edge cases, such as distinguishing between a 17½-year-old and an 18-year-old. These are critical considerations for legislators as they advance bills aiming to replace parental oversight with governmental control, a shift that may also generate unforeseen consequences and risks.

We must also consider the potential repercussions on youth when their freedom of speech, expression, and peer association is curtailed by the absence of social media. How can we balance the disparities between parents who restrict their children’s access to social media and those who permit it? In today’s digital age, children often forgo playing in neighborhood streets and opt instead for virtual interaction.

Social media platforms have empowered young people to voice their opinions on political matters and vital issues such as climate change. Without the communication channels provided by social media, the reach and organization of initiatives like Greta Thunberg’s “Fridays for Future” would have been significantly reduced. It’s crucial to consider the potential loss of such influential platforms which serve not only as a stage for youthful expression, but also a catalyst for activism. Introducing bills that impose broad restrictions on access to social media is likely to also obstruct these beneficial aspects stemming from social media usage.

Additionally, these restrictions would make it difficult — if not impossible — for users of all ages to engage in anonymous speech as well as access communication and lawful speech. The only way to verify users under a certain age — such as 16 or 18 — is to also verify users over that age. This means all users would be forced to provide sensitive information like passports, driver’s licenses, or biometrics in order to participate in online discussions. This information would have to be tied to a user’s account, meaning it would be impossible for users to retain true anonymity. This sets up a honeypot of sensitive personal information for malicious hackers.

Topic‐​Based Age‐​Appropriate Design Codes or Age‐​Verification

Some states have introduced age‐​verification legislation that targets specific content. Currently, these proposals have been limited to pornographic material and websites. For websites exclusively dealing with pornography, the task of flagging them is relatively straightforward. However, challenges arise when attempting to regulate more malleable platforms that do not primarily host adult content.

Louisiana was the first state to take such an approach with a law that requires age verification for access to platforms if pornographic content comprises more than one‐​third of the overall content. However, such thresholds can often be arbitrary and could impact more general‐​use websites that may be attempting to remove such content. For example, platforms like Twitter and BlueSky allow adult nudity and other “sensitive media content” with certain restrictions. The platforms likely engage in significant content moderation and flagging of such content; however, the exact percentage of such content on a website may vary.

Lawmakers must also take into account how such laws could impact smaller platforms. A new platform with fewer users could have only a small amount of adult content but cross an arbitrarily set threshold based on the percentage of content. Small websites that see a sudden increase in users might also struggle to keep up with moderation for a time and end up over thresholds — even if such content violates their official terms.

Pragmatically, these laws may not be as effective at achieving their goal as policymakers hope. As of July 1st, Virginia is the most recent state to enact a law requiring age verification for websites showcasing adult content. However, because consumers have privacy concerns about sharing their sensitive personal data, they tend to bypass these protective measures, raising doubts about the laws’ effectiveness. For instance, since the enactment of the law, Google Trends data indicates that Virginia leads the US in searches for virtual private networks (VPNs), a tool that allows individuals to access such sites without disclosing sensitive information to adult‐​content websites. Utah also saw an uptick in VPN searches when it introduced its age verification law (SB287). It’s worth noting that bypass methods aren’t exclusive to adults. A study on the enforcement of similar laws in the United Kingdom revealed that 23% of minors say they can bypass blocking measures. In addition to relying on VPNs to bypass age verification, users may also visit more obscure adult content sites that are less likely to follow safety protocols.

The ease with which these measures can be circumvented suggests that these government laws may put people’s sensitive data at risk and infringe upon young people’s rights to access various speech forums, all without providing effective ways to reap their intended benefits. Rather than enacting laws that may not achieve their intended effects, focus should be shifted toward actionable measures like public awareness and education. The state‐​level patchwork approach to handling people’s sensitive data underscores the urgent need for a comprehensive federal privacy bill.

A Better Alternative: State Bills Promoting Digital Literacy

The concerns about young people online are quite varied, which is an important reason why the best solutions are likely left to parents and trusted adults in a child’s life rather than to a government one‐​size‐​fits‐​all approach. One positive set of legislative proposals that emerged over this last session focuses on educating young people through improved digital literacy curricula. This approach will empower young people to use technology in beneficial ways while also advising them what to do should they encounter harmful or concerning content.

As discussed in more detail in a recent policy brief, many states already have an element of digital literacy in their K‑12 curriculum; however, such standards typically pre‐​date the rise of the internet and social media. This year, Florida passed a law that would include social media digital literacy in the curriculum. States including Alabama, Virginia, and Missouri also considered such laws.

An education‐​focused approach will empower young people to make good decisions around their own technology use. Ideally, such a curriculum should be balanced and neutral in explaining the risks and benefits of social media and other online activities. States should not be too prescriptive; instead, they should allow individual schools to make decisions that reflect the specific values and issues encountered by their students, and should defer to parental notification and responsiveness in discussions of such issues. Civil society and industry have provided a great number of tools to support parental choice and controls. If policymakers are to be involved, the focus should be on education and empowerment rather than restriction and regulation.

Conclusion

2023 has seen an increase in policy proposals seeking to regulate young people’s internet access, but such regulation carries consequences for all internet users. These laws will likely face challenges in court on First Amendment grounds, as seen with the Arkansas and California laws. As with users of any age, children’s and teens’ use of and experience with technology can be both positive and negative. A wide array of tools exists to empower parents and young people to deal with concerns, including exposure to certain content or time spent on social media. If policymakers seek to do anything in this area, the focus should be on empowering and educating children and parents on how to use the internet in positive ways and what to do if they have concerns, not on heavy‐​handed regulation that fails to improve online safety while taking away the internet’s beneficial uses.


Jai Kedia

Recently, CMFA published an article and a working paper detailing the Federal Reserve’s departure from rules‐​based governance following the financial crisis of the late 2000s. According to academics and Fed officials, the era of rules‐​based governance facilitated the Great Moderation – a stable economic period characterized by less volatile macro indicators such as inflation, the output gap, and unemployment. In academic parlance, macroeconomists refer to this situation as determinacy. Despite conflicting evidence, the prevailing view is that the Fed facilitated the Great Moderation by establishing a determinate economic environment through rules‐​based governance focused on keeping inflation low. Previous CMFA papers raised the question of whether the Fed’s departure from this “successful” era of monetary policy may instead have led to indeterminacy. This article provides evidence that indeterminacy did occur during this period.

Determinacy is a feature of an economic system whereby outcomes such as inflation, output, etc., can be precisely determined based on a given set of initial conditions and policy rules. Under determinacy, the economy (as represented by a mathematical model) has a unique equilibrium outcome. In simple terms, under determinacy, the economy has only one possible resting state and is also stable with no large spirals or variability. Conversely, indeterminacy occurs when there are multiple possible equilibria that could result from the same initial conditions and policy rules. This state can create uncertainty in predicting the future state of the economy, as different equilibria may lead to significantly divergent economic outcomes. Simply put, the economy could end up in multiple possible states, some of which may be highly volatile, depending on how individuals form their expectations and make decisions.

Academics generally believe that a strong Fed response to inflation (a more than one‐​to‐​one increase in the target federal funds rate to inflation changes) can ensure a determinate system. This is known as the Taylor Principle. A greater than one‐​to‐​one response to inflation is deeply entrenched in the economic literature; most empirical macroeconomic studies simply assume determinacy and fix the Fed’s response to inflation at a number higher than one or use estimation techniques that entirely exclude the possibility of indeterminacy. This determinacy bias has serious implications for policy analysis because economic models (such as those used by the Fed) exhibit significantly different dynamics in an indeterminate system. Additionally, even approaches that account for indeterminacy, including seminal papers, fail to take consumers’ inflation expectations seriously. As noted above, expectations matter drastically when determining equilibrium selection. They should be included in the datasets used by empirical methods.
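The determinacy logic can be made concrete. In a textbook three‐​equation New Keynesian model, the equilibrium is unique exactly when enough of the system’s eigenvalues lie outside the unit circle (the Blanchard–Kahn condition), and under a simple interest rate rule this reduces to the Taylor Principle. The sketch below is a minimal illustration under standard textbook assumptions (β = 0.99, κ = 0.1, σ = 1, and a rule responding only to inflation); it is not the model or calibration used in the paper discussed here.

```python
import numpy as np

def is_determinate(phi_pi, beta=0.99, kappa=0.1, sigma=1.0):
    """Check determinacy of a textbook New Keynesian model with a
    Taylor rule i_t = phi_pi * pi_t.

    Writing the model as E_t[z_{t+1}] = A z_t, where z holds the output
    gap and inflation (both non-predetermined "jump" variables), the
    rational-expectations equilibrium is unique iff all eigenvalues of A
    lie outside the unit circle (Blanchard-Kahn condition).
    """
    A = np.array([
        [1 + kappa / (sigma * beta), (phi_pi - 1 / beta) / sigma],
        [-kappa / beta, 1 / beta],
    ])
    n_explosive = int(np.sum(np.abs(np.linalg.eigvals(A)) > 1))
    return n_explosive == 2  # two jump variables need two explosive roots

# Taylor Principle: inflation responses above one yield determinacy here
print(is_determinate(1.13))  # True
print(is_determinate(0.57))  # False
```

The two example coefficients match the estimates reported later in the piece; in this toy model, a response below one delivers exactly one explosive root, leaving one dimension of equilibria undetermined.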

I utilize a simple macro model – connecting output gap, inflation, and the federal funds rate – to test the determinacy of the U.S. economy during the period when the Fed abandoned rules‐​based governance (2009 through 2022). I use actual U.S. time series data for the three variables listed above as well as a measure of consumers’ inflation expectations – one year ahead inflation expectations collected from the Michigan Survey of Consumers.[1] I fit the macro model to the data using a Bayesian estimation procedure under both determinacy and indeterminacy to see which fits the data better.

I find that the model under indeterminacy significantly outperforms its determinate counterpart in fitting the data set. That is, the model under indeterminacy has a much higher “goodness‐​of‐​fit” versus determinacy. Goodness‐​of‐​fit values from Bayesian analysis are unlike the usual R² value reported from regressions. Bayesian model comparison is conducted through marginal likelihoods which are then converted to an odds ratio (similar to betting odds) called the Bayes factor. The estimated odds of determinacy to indeterminacy are 1 to 1.5 x 10¹⁵ – making determinacy an extremely unlikely event. To understand exactly how unlikely, let us compare these odds to another extremely unlikely event – being struck by lightning. The odds of being struck by lightning are much higher in comparison: 1 to 1.5 x 10⁴. In other words, the odds of being struck by lightning are significantly higher than the odds that the U.S. economy was determinate from 2009 through 2022.
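For readers unfamiliar with Bayesian model comparison, the mechanics can be sketched in a few lines: the Bayes factor is the ratio of the two models’ marginal likelihoods, computed in log space to avoid underflow. The log marginal likelihoods below are hypothetical placeholders chosen only so the resulting odds match the order of magnitude reported above; they are not the paper’s estimates.

```python
import math

# Hypothetical log marginal likelihoods (placeholders, not actual estimates)
log_ml_indeterminate = -1199.06
log_ml_determinate = -1234.00

# Bayes factor = exp(difference of log marginal likelihoods); working in
# logs avoids floating-point underflow, since the raw likelihoods are
# astronomically small numbers
bayes_factor = math.exp(log_ml_indeterminate - log_ml_determinate)
print(f"odds of determinacy to indeterminacy: 1 to {bayes_factor:.1e}")
```

A log marginal likelihood gap of roughly 35 is all it takes to produce odds on the order of 10¹⁵, which is why seemingly modest differences in Bayesian fit translate into overwhelming odds ratios.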

Consequently, the probability that the U.S. economy was indeterminate following the financial crisis is nearly 100%. The (indeterminate) model with a 0.57 estimated inflation response coefficient fits the data better than the (determinate) model with a 1.13 coefficient estimate. The results confirm that the Fed did not target inflation in line with the Taylor Principle.

These findings raise an important question: is the Fed responsible for keeping the economy determinate, with a unique and stable outcome? If it is, as several academics and Fed officials have claimed, then it must answer why it did not conduct policy in a way that ensured the economy’s determinacy. If it is not responsible for keeping the economy determinate (as several recent studies are now finding), then the Fed’s reputation for stabilizing the economy is undeserved, and the public should question why an unelected governmental agency exerts such a high degree of influence over the political economy discourse if it is ineffective at maintaining prices or keeping the economy stable. A forthcoming paper will further examine the history of the Fed’s effectiveness in achieving determinacy.

The author thanks Jerome Famularo for providing research assistance during the preparation of this essay. For more information on the model, empirical methodology, and posterior distribution please click here.

[1] Respondents are asked the question: ‘By what percent do you expect prices to go up, on the average, during the next 12 months?’ The average of all responses is used as the measure for inflation expectations.


Neal McCluskey

Today, the Biden administration launched a new website for student debtors to enroll in its next big effort to reduce their repayments. Biden’s Saving on a Valuable Education (SAVE) plan seeks to make income‐​driven repayment (IDR) much more generous than it has been. IDR is intended to ease repayment burdens when debtors aren’t earning very much. It does this by pegging repayment requirements to income rather than to fixed monthly payments. Also, if a borrower has debt remaining after 20 or 25 years of repayment, it will be forgiven, with the time span depending on whether the debt was just for undergraduate education or also grad school.

There are many possible calculations to show the effect of SAVE based on different combinations of education consumed, debt levels, family size, and earnings, which I will eventually take up. But to quickly illustrate how much more generous SAVE is than previous IDR, we can look at repayment requirements for a single, recent bachelor’s graduate with an average new grad’s salary of $58,862.

The key IDR changes are:

The amount of income incurring no charge (“protected income”) rises from 150 percent of the federal poverty level to 225 percent.
Payment drops from 10 percent of the difference between earnings and protected income (“discretionary income”) to 5 percent.

As you can see in the table below, these changes make a substantial difference, dropping the annual repayment cap from $3,699 to $1,303 and monthly charges from $185 to $65. Basically, a two‐​thirds repayment reduction.

Importantly, IDR before the change allowed for relatively painless repayment. The rule of thumb is that to be comfortably repaid, student debt should not exceed 8 percent of one’s earnings. Before SAVE, the annual payment constituted only 6.3 percent of earnings. Under the new plan, it is only 2.2 percent.
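For readers who want to check the arithmetic, the figures above can be reproduced in a few lines. The sketch assumes the 2023 federal poverty guideline for a one‐​person household ($14,580); the guideline year is an assumption on my part, since the text does not state which figure underlies its table.

```python
POVERTY_GUIDELINE = 14_580  # assumed: 2023 federal poverty level, household of one
SALARY = 58_862             # average new bachelor's-graduate salary from the text

def annual_idr_payment(income, protected_multiple, payment_rate,
                       poverty_guideline=POVERTY_GUIDELINE):
    """Annual IDR payment: rate x (income - protected income), floored at zero."""
    discretionary = max(0, income - protected_multiple * poverty_guideline)
    return payment_rate * discretionary

old = annual_idr_payment(SALARY, 1.50, 0.10)  # pre-SAVE: 150% protected, 10% rate
new = annual_idr_payment(SALARY, 2.25, 0.05)  # SAVE: 225% protected, 5% rate

print(round(old), round(new))                                  # 3699 1303
print(f"{100 * old / SALARY:.1f}% {100 * new / SALARY:.1f}%")  # 6.3% 2.2%
```

The computed annual payments and earnings shares match the $3,699 versus $1,303 caps and the 6.3 versus 2.2 percent figures cited above.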

This is very generous with taxpayers’ dollars, especially considering that the name of the plan itself says borrowers’ education was valuable, presumably to them. It is also of dubious legality, with the executive branch unilaterally changing basic lending terms. It’s not quite as egregious as POTUS simply declaring mass debt cancellation, but it still stretches executive power well beyond what it should be.


Jack Solowey

Crypto startups and venture capitalists are not the only ones pivoting to artificial intelligence (AI). Recently, SEC Chair Gary Gensler delivered remarks to the National Press Club outlining his concerns about AI’s role in the future of finance.

In those high‐​level remarks, Gensler shared his anxiety that AI could threaten macro‐​level financial stability, positing that “AI may heighten financial fragility as it could promote herding with individual actors making similar decisions because they are getting the same signal from a base model or data aggregator.”

This fear largely rests on a pair of debatable assumptions: one, that the market for AI models will be highly concentrated, and two, that this will cause financial groupthink. There are important reasons to doubt both premises. Before the SEC, or any regulator, puts forward an AI policy agenda, the assumptions on which it rests must be closely scrutinized and validated.

Assumption 1: Foundation Model Market Concentration

Chair Gensler’s assessment assumes that the market for AI foundation models will be highly concentrated. Foundation models, like OpenAI’s GPT‑4 or Meta’s Llama 2, are pre‐​trained on reams of data to establish predictive capabilities and can serve as bases for “downstream” applications that further refine the models to better perform specific tasks.

Because upstream foundation models are data‐​intensive and have the potential to leverage downstream data for their own benefit, Gensler is concerned that one or a few model providers will be able to corner the market. It’s understandable that one might assume this, but there are plenty of reasons to doubt the assumption.

The best arguments for the market concentration assumption are that natural barriers to entry, economies of scale, and network effects will produce a small number of clear market leaders in foundation models. For instance, pre‐​training can require a lot of data, computing power, and money, potentially advantaging a small number of well‐​resourced players. In addition, network effects (i.e., platforms with more users are more valuable to those users) could further entrench incumbents, either because big‐​tech leaders already have access to more training data from their user networks, because the model providers that attract the most users will come to access more data to further improve their models, or some combination of both.

But the assumption that the market for foundation models inevitably will be concentrated is readily vulnerable to counterarguments. For one, the recent AI surge has punctured theories about the perpetual dearth of tech platform competition. With the launch of ChatGPT, OpenAI—a company with fewer than 400 full‐​time employees earlier this year—became a household name and provoked typically best‐​in‐​class firms to scramble in response. And while it’s true that OpenAI has made strategic partnerships with Microsoft, OpenAI’s rise undermined the conventional wisdom that the same five technology incumbents would enjoy unalloyed dominance everywhere forever. The emergence of additional players, like Anthropic, Inflection, and Stability AI, to name just a few, provides further reason to question the idea of a competition‐​free future for AI models.

In addition, the availability of high‐​quality foundation models with open‐​source (or other relatively permissive) licenses runs counter to the assumed future of monopoly control. Open‐​source licenses typically grant others the right to use, copy, and modify software for their own purposes (commercial or otherwise) free of charge. The AI tool builder Hugging Face currently lists tens of thousands of open‐​source models. And other major players are providing their own models with open‐​source licenses (e.g., Stability AI’s new language model) or relatively permissive “source available” licenses (e.g., Meta’s latest Llama 2). Open‐​source model availability could have a material impact on competitive dynamics. A reportedly leaked document from Google put it starkly:

[T]he uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch. I’m talking, of course, about open source.

Lastly, Gensler’s vision of a concentrated foundation model market itself rests in large part on the assumption that model providers will continuously improve their models with the data provided to them by downstream third‐​party applications. But this too should not be taken as a given. Such arrangements are a possible feature of a model provider’s terms but not an unavoidable one. For example, OpenAI’s current data usage policies for those accessing its models through an application programming interface (API), as opposed to OpenAI’s own applications (like ChatGPT), limit (as of March 2023) OpenAI’s use of downstream data to improve its models:

By default, OpenAI will not use API data to train OpenAI models or improve OpenAI’s service offering. Data submitted by the user for fine‐​tuning will only be used to fine‐​tune the customer’s model.

Indeed, providers of base models may not always benefit from downstream data, as finetuning a model for better performance in one domain could risk undermining performance in others (a dramatic form of this phenomenon is known as “catastrophic forgetting”).

Again, this is not to say that foundation model market concentration is impossible. The point is simply that there also are plenty of reasons the concentrated market Gensler envisions may not come to pass. Indeed, a source Gensler cited put it well: “It is too early to tell if the supply of base AI models will be highly competitive or concentrated by only a few big players.” Any SEC regulatory intervention premised on the idea of a non‐​competitive foundation model market would similarly be too early.

Assumption 2: Foundation Model Market Concentration Will Cause Risky Capital Market Participant Groupthink

The second assumption underpinning Gensler’s financial fragility fear is that a limited number of model providers will lead to dangerous uniformity in the behavior of market participants using those models. As Gensler put it, “This could encourage monocultures.”

Even if one accepts for argument’s sake a future of foundation model market concentration, there are reasons to doubt the added assumption that this will encourage monocultures or herd behavior among financial market participants.

While foundation models can be used as generic tools out of the box, they also can be further customized to users’ unique needs and expertise. Finetuning—further training a model on a smaller subset of domain‐​specific data to improve performance in that area—can allow users to tailor base models to firm‐​specific knowledge and maintain a degree of differentiation from their competitors. This complicates the groupthink assumption. Indeed, Morgan Stanley has leveraged OpenAI’s GPT‑4 to synthesize the wealth manager’s own institutional knowledge.
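The claim that shared base models need not imply identical behavior can be illustrated with a deliberately simplified toy example: fine‐​tuning is just continued training on one’s own data. Below, two hypothetical “firms” start from the same pre‐​trained model (here a linear predictor standing in for a foundation model) and end up with different parameters after fine‐​tuning on firm‐​specific data. This is a conceptual sketch, not a depiction of how actual foundation models are trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(w, X, y, lr=0.1, steps=300):
    """Plain gradient descent on mean squared error."""
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

# Shared "pre-training" on broad data
X_base = rng.normal(size=(500, 3))
base = train(np.zeros(3), X_base, X_base @ np.array([1.0, -2.0, 0.5]))

# Each firm fine-tunes the same base weights on its own proprietary data,
# generated here from firm-specific "true" relationships
X_a = rng.normal(size=(20, 3))
X_b = rng.normal(size=(20, 3))
firm_a = train(base.copy(), X_a, X_a @ np.array([1.5, -2.0, 0.5]))
firm_b = train(base.copy(), X_b, X_b @ np.array([1.0, -2.5, 0.5]))

# Despite the common starting point, the fine-tuned models differ
print(np.allclose(firm_a, firm_b))  # False
```

Each fine‐​tuned model converges toward its own firm’s relationship rather than the shared baseline, which is the stylized point: a common upstream model does not force identical downstream behavior.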

Taking a step back, is it more likely that financial firms with coveted caches of proprietary data and know‐​how will forfeit their competitive advantages, or that they will look to capitalize on them with new tools? Beyond training and finetuning models around firm‐​specific data, firms also can maintain their edge simply by prompting models in a manner consistent with their unique approaches. In addition, firms almost certainly will continue to interpret results based on their specific strategies, cultures, and philosophies. Lastly, because there are profits to be made from identifying mispriced assets, firms would be incentivized to spot others’ inefficient herding behavior and diverge from the “monoculture”; they may even devise ways to leverage models for this purpose.

At the very least, as with model market concentration, more time and research are needed before the impact of the latest generation of AI on financial market participant herding behavior can be assessed with enough confidence to provide a sound basis for regulatory intervention.

Conclusion

Emerging technologies can, of course, be disruptive. But before regulators assume novel technologies present novel risks, they should test and validate their assumptions. Otherwise, one can reasonably doubt regulators when they proclaim themselves “technology neutral.” As SEC Commissioner Hester Peirce noted last week regarding the SEC’s proposed rules tackling a separate AI‐​related concern—conflict-of-interest risks from broker‐​dealers’ and investment advisers’ use of “predictive data analytics”—singling out a specific technology for “uniquely onerous review” is tantamount to “regulatory hazing.”

Another word of caution is warranted: even where regulators do perceive bona fide evidence of enhanced risks, they should be wary of counterproductive interventions. To name just one example, heightened regulatory barriers to entry could worsen the very concentration in the market for AI models that Gensler fears.


The Culture Wars and Public Libraries

by

Jeffrey Miron

This article appeared on Substack on July 31, 2023.

A recent front in the culture wars is public libraries, such as in Front Royal, Virginia, where

a handful of residents ha[s] begun demanding the removal of certain books in the children’s section of Warren County’s only public library. Most of the titles involved LGBTQ+ themes.

In Libertarian Land, such conflicts do not arise, since public libraries do not exist.

Despite the word “public,” libraries are not a “public good” that private markets might undersupply.

The textbook public good is national defense. If any private group mounts an army that stands ready to defend the country, others will free ride. This makes it hard for the provider to finance its efforts and therefore discourages private provision.

No such issue exists for books; private provision is bountiful. People cannot free ride on book purchases by others.

The crucial benefit of leaving “libraries” to the marketplace is that no one’s tax dollars support the provision of particular books. If Amazon sells books that some people do not want their children to read, these people do not buy such books. Thus the polarization that results from public libraries is absent.

Advocates will respond that public libraries provide free access to books and thus benefit low‐​income families. That is mainly false; public libraries typically locate in middle‐​class neighborhoods and serve middle‐​income families.

Fans of public provision might also argue that such libraries provide more than free access to books: story time, community events, author book signings, and the like. Private bookstores, however, can provide these services if demand exists, perhaps because such events bring in paying customers.
