J. L. BELL is a Massachusetts writer who specializes in (among other things) the start of the American Revolution in and around Boston. He is particularly interested in the experiences of children in 1765-75. He has published scholarly papers and popular articles for both children and adults. He was a consultant for an episode of History Detectives, and contributed to a display at Minute Man National Historical Park.

Showing posts with label Supreme Court. Show all posts

Sunday, September 17, 2023

The Case of the Adapted Anecdote

Today is Constitution Day, declared to commemorate the date on which the delegates at the Constitutional Convention signed off on their work.

Not the day on which that proposed constitution for the new U.S. of A. was ratified by a supermajority of the people’s representatives, nor the day on which it went into effect. But that’s another story.

Speaking of stories, I’m continuing to investigate the anecdote that James McHenry wrote and then rewrote about Benjamin Franklin telling Elizabeth Powel that the convention provided for “a republic—if you can keep it.”

Two Supreme Court justices have written books using that phrase as their title. The more recent is by Neil Gorsuch, who alluded to the story only in passing.

The earlier is by Earl Warren, published in 1972 after he had retired from the bench. His book opens with a page recounting the anecdote.

After a detailed description of Franklin encountering a woman outside the meeting hall, Warren cited the “Notes of Dr. James McHenry, one of the delegates,” adding, “Adapted from Documents Illustrative of the Formation of the Union of the American States, Government Printing Office, 1927.”

When I looked up that government publication, however, I found only the transcription of what McHenry wrote at the end of his convention notes, as published in Max Farrand’s The Records of the Federal Convention of 1787 in 1911.
A lady asked Dr. Franklin Well Doctor what have we got a republic or a monarchy. A republic replied the Doctor if you can keep it.

The lady here aluded to was Mrs. Powel of Philada.
Warren accurately quoted Elizabeth Powel’s question about “a republic or a monarchy.” He didn’t insert the word “Madam” into Franklin’s response as some authors did.

However, none of the emotional detail in Warren’s anecdote—how the “delegates trudged out,” the “anxious woman in the crowd waiting at the entrance”—came from the source he cited. The phrase “Adapted from” shows that Warren must have realized how his telling differed from the original. Most likely, he had been influenced by other detailed retellings and imagined the scene that way.

American authors had been setting this exchange on the street for at least thirty years by then. (McHenry wrote that it happened indoors, and Powel insisted that it had happened in her salon if it had happened at all.)

Previous writers had described the questioner as “eager,” “concerned,” and “inquisitive.” This is the earliest version that I’ve found using the word “anxious,” an adjective repeated in reviews of this book and in later narrations. (Powel would have hated that characterization.)

This version of the anecdote appeared in a book by a former Chief Justice of the United States, with what appears to be a citation to a highly authoritative source. But tracing back that citation shows how many details of this tale were spun out of nothing.

Wednesday, December 07, 2022

Charles Pinckney in Hindsight and the Supreme Court

Yesterday I was struck by Pema Levy’s article at Mother Jones about a false document being cited to the U.S. Supreme Court.

Levy based her article on a September essay at Politico by Ethan Herenstein and Brian Palmer, and on briefs that have been filed with the court since.

Levy writes:
Three decades after the Constitution was drafted in Philadelphia, Secretary of State John Quincy Adams set about assembling the government’s official Journal of the Convention. Missing from the records was the proposal submitted by Charles Pinckney of South Carolina [shown here]. So Adams wrote him to request a copy. Pinckney replied with an extraordinary document: a draft that so closely resembled the final Constitution that he would have to have been clairvoyant to have written it. . . .

“At the distance of nearly thirty two Years it is impossible for me now to say which of the 4 or 5 draughts I have was the one,” he replied to Adams’ request in 1818, “but enclosed I send you the one I believe was it.” Oddly, the document was written on paper with a 1797 watermark, matching his accompanying letter. Nonetheless, Adams published it.

The debunkings came fast. James Madison, the convention’s most meticulous notetaker, soon wrote to friends that the draft was inaccurate. Years later, Madison discredited Pinckney’s fraud in writing, explaining the document contained language that had only been arrived at after weeks of debate and could not have been divined before the convention began. Madison, convinced it was a fake, detailed how Pinckney’s supposed draft contradicted a more contemporaneous account of the South Carolinian’s actual proposal.
Max Farrand included the Pinckney document in his comprehensive twentieth-century compilation of documents related to the U.S. Constitution, but with a note and additional documents making quite clear that it was not a reliable historical source. A genuine contemporaneous copy of Pinckney’s actual plan survived in the papers of James Wilson and was published in 1904.

Advocates for the “Independent State Legislature” theory have seized on one small detail in the post-Constitution Pinckney document, arguing that it shows the Framers (not just Pinckney) planned at the start of the Constitutional Convention (not two to four decades later) to give states unlimited power over federal elections.

Levy says:
there is no evidence that the framers of the Constitution intended to give legislatures such authority over federal elections. Nor is there any record this interpretation was accepted in the republic’s early years. In fact, history shows that the independent state legislature theory is a modern invention. . . .

It’s possible that the lawyers…who cited the version of the document in Farrand’s 1911 compendium, simply failed to read past the plan to the historian’s conclusion that it was a fake, and that they likewise failed to read Madison’s public takedown or his private letters expressing doubts, all of which were included by Farrand. Whether they meant to or not, they hung their argument on a fake document because it offered a glimmer of originalist evidence to back up their case.
Historians and legal scholars, including some on the political right, have filed briefs arguing against reliance on this document in particular and the theory being espoused in general.

The response has been legal tap-dancing:
the lawyers filed a new brief defending their use of the Pinckney plan. They argued that the plan was not technically “a fake” because it is “undisputed” that Pinckney wrote it, and allege that the generations of historians who discredited the document were hoodwinked by Madison’s “campaign to diminish the significance of [Pinckney’s] role at the convention.”
Justices on the Supreme Court today have been willing to deny photographic evidence and ignore decades of legal and historical precedent in order to reach the verdicts they want. In this case, a majority could adopt the “Independent State Legislature” theory without mentioning one problematic document. But if the final decisions do mention Pinckney, that will be yet more evidence that the “originalists” on the court aren’t interested in the original Constitution at all.

Thursday, July 07, 2022

“High standards of historical scholarship” and the Supreme Court

Last month I wrote about how one of the recent big U.S. Supreme Court decisions clearly misstated the facts of the case at issue, and about the troubling implications of such misstatements.

As Ian Millhiser had argued at Vox, if events had actually occurred as the decision described, previous precedents would have applied. The majority opinion’s false description of circumstances therefore muddled what exactly it allowed while overruling past decisions.

Obviously the majority of justices on this court wanted to reverse those decisions. And I think their willingness to do so based on obviously false factual statements undermined the authority of the court and the law.

Yesterday the Organization of American Historians, the American Historical Association, and other groups of history professionals issued a joint statement criticizing how another recent Supreme Court decision, Dobbs v. Jackson, badly misrepresented the more distant past:
Historians might note that the Court’s majority opinion refers to “history” sixty-seven times, claiming that “an unbroken tradition of prohibiting abortion on pain of criminal punishment persisted from the earliest days of the common law until 1973.”

Our brief shows plentiful evidence, however, of the long legal tradition, extending from the common law to the mid-1800s (and far longer in some American states, including Mississippi), of tolerating termination of pregnancy before occurrence of “quickening,” the time when a woman first felt fetal movement. The majority of the court dismisses that reality because it was eventually—although quite gradually—superseded by criminalization. In so doing the court denies the strong presence in US “history and traditions” at least from the Revolution to the Civil War of women’s ability to terminate pregnancy before the third to fourth month without intervention by the state.

These misrepresentations are now enshrined in a text that becomes authoritative for legal reference and citation in the future. . . . The OAH and AHA consider it imperative that historical evidence and argument be presented according to high standards of historical scholarship. The Court’s majority opinion in Dobbs v. Jackson does not meet those standards, and has therefore established a flawed and troubling precedent.
In essence, the historic trend to bar or even criminalize abortions that gained steam in the late 1800s didn’t fit with the conservative justices’ claim to follow precedents from when the Constitution was written—“originalism,” so-called. They couldn’t claim that later generations had become more enlightened because people might then want to apply the same insight to other matters. So those justices simply proclaimed that the historical record really did fit their claims.

As the historical organizations say, the result is distorted history that’s nonetheless “authoritative for legal reference.” This Supreme Court is asking us Americans to base our system of laws on claims that are demonstrably not true.

Wednesday, June 29, 2022

The Supreme Court and “a fabricated case”

Back in 2012, I departed from the eighteenth century to consider an issue in the intersection of law and history.

What does it mean when a Supreme Court decision guiding decades of law and policy turns out to be based on a historical falsehood?

In 1953 the court issued what’s come to be known as the Reynolds decision, requiring courts to defer to the executive branch of the federal government when it invokes national security and the need for secrecy to demand an end to legal proceedings. The majority of justices concluded that the government wouldn’t do that for petty or self-serving reasons.

Over forty years later, documents emerged to show that the U.S. Air Force had done precisely what the Supreme Court said we must assume it wouldn’t do: hide evidence and stifle a lawsuit to avoid embarrassment and liability. Yet Reynolds remains a guiding legal precedent.

Last week the U.S. Supreme Court issued a decision, Kennedy v. Bremerton School District, that allowed a public school employee to lead public prayers on school grounds despite the First Amendment’s religious establishment clause and previous court precedents. What’s more, the majority decision misstated the facts of the case, as shown by citations and a photograph included in the minority dissent.

Ian Millhiser at Vox wrote that one consequence of those false statements is that it’s unclear what the decision actually allowed:
Justice Neil Gorsuch’s opinion for himself and his fellow Republican appointees relies on a bizarre misrepresentation of the case’s facts. He repeatedly claims that Joseph Kennedy, a former public school football coach at Bremerton High School in Washington state who ostentatiously prayed at the 50-yard line following football games — often joined by his players, members of the opposing team, and members of the general public — “offered his prayers quietly while his students were otherwise occupied.” . . .

Moreover, because Gorsuch’s opinion relies so heavily on false facts, the Court does not actually decide what the Constitution has to say about a coach who ostentatiously prays in the presence of students and the public. Instead, it decides a fabricated case about a coach who merely engaged in “private” and “quiet” prayer. . . . [According to already established precedents] Public school employees may engage in private acts of devotion, such as saying a prayer over their lunch in a school cafeteria while they are on the job.

But there’s nothing private about a school employee conducting a media tour touting his plans to pray at the 50-yard line of a football field immediately after a game. There is nothing private about the coach carrying out that plan — especially when he does so surrounded by kneeling players, cameras, and members of the public.
Another consequence of this decision is, of course, that the U.S. Supreme Court majority has further damaged its own credibility. We expect disagreements over matters of opinion. But when justices define the nation’s law while stating things that we can all see are false, they make our legal system look deceptive and arbitrary.

Thursday, May 05, 2022

A Case Study of Abortion in Colonial America

In 1991 Prof. Cornelia Hughes Dayton published a paper titled “Taking the Trade: Abortion and Gender Relations in an Eighteenth-Century New England Village” in the William and Mary Quarterly.

In 2007 students at the University of Connecticut created this website exploring the same case, using Dayton’s analysis, transcriptions, photographs of the sites involved, and more. (This may have grown from the similar work of Prof. Larry Cebula or it may have been a parallel effort.)

The “Taking the Trade” paper and website examine a dispute in colonial Connecticut. In 1742, Sarah Grosvenor of Pomfret ended an unwanted pregnancy by inducing a miscarriage through both medicinal and surgical means, but she died two months later.

Grosvenor’s family complained about the man who had impregnated her, Amasa Sessions. Many colonial New England men in that situation married their sexual partners and went on to have more children, however grudging the partnership was. In contrast, Sessions pressed Grosvenor to take an abortifacient provided by Dr. John Hallowell of Killingley.

In 1746, Sessions and Hallowell were indicted for the reckless murder of Sarah Grosvenor—but not for trying to induce an abortion. In fact, Grosvenor’s sister had also helped her end the pregnancy, but she was not indicted. The surviving documents don’t offer answers for all the questions they raise, but they make clear that eighteenth-century New Englanders knew about abortion and viewed it primarily as a private matter not involving the government. Providing an unsafe abortion was potentially criminal.

A crucial aspect of how Sarah Grosvenor and her contemporaries understood her situation was the “quickening”—the moment when a pregnant woman can feel the fetus move inside her body. Only then, according to the thinking of the time, did a soul enter the fetus, making it a person. That was usually about twenty weeks into a pregnancy.

The U.S. of A. is currently in a heated discussion about Justice Samuel Alito’s draft decision upending American women’s right to abortion, federally guaranteed for almost half a century. That draft claims there is a longer history of laws against abortion.

However, as Prof. Holly Brewer has pointed out, all of the draft’s so-called legal precedents from the seventeenth and eighteenth centuries ban abortion procedures only after the quickening. Other cited laws banned abortion methods on the grounds they were unsafe for the woman, not because they ended her pregnancy.

This is a problem with “originalist” jurisprudence: determining modern law based on history requires actually understanding that history in all its nuances, not just plucking out details that suit the result the judge desires. As the “Taking the Trade” paper and website show, colonial Americans didn’t view safe abortion as a criminal matter.

Friday, March 25, 2022

“Lifetime Tenure” When the Supreme Court Began

The U.S. Senate is holding hearings on the nomination of a new Supreme Court justice. Some senators have come out against giving this nominee a “lifetime appointment” despite having previously approved her lifetime appointment as a federal judge at two levels.

Social-media discussions of this issue got me thinking of what a “lifetime appointment” meant when the U.S. Supreme Court first met.

Lifetime judicial appointments were common in the British and thus British-American legal systems. Although overall life expectancy was lower in the eighteenth century, that’s largely due to childhood mortality, so once a mature man was appointed to the bench he often served for many years.

(Colonial Rhode Island was an exception to that system of lifetime appointments. Under its eighteenth-century constitution, judges were elected for one-year terms, though they could be reelected. Which just shows how anomalous Rhode Island was.)

I decided to look at the Supreme Court justices appointed in the 1790s to see how long they stayed alive and stayed on the court.
  • John Jay / 6 years on the court / 40 more years of life after appointment 
  • John Rutledge / 1 year on the court, then another stint of a few months four years later / 11 more years of life 
  • William Cushing / 20 / 20 
  • James Wilson / 9 / 9 
  • John Blair / 5 / 10 
  • James Iredell / 9 / 9 
  • Thomas Johnson / 2 / 28 
  • William Paterson / 13 / 13 
  • Samuel Chase / 15 / 15 
  • Oliver Ellsworth / 4 / 11 
  • Bushrod Washington / 31 / 31 
Thus, from early on we see Supreme Court justices serving for a decade or more. Six of these eleven men sat on the bench until they died, with an average tenure of over fifteen years. Three more justices nominated by the Presidents active in the Founding—John Marshall, William Johnson, and Joseph Story—also served more than thirty years.
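For anyone who wants to check that arithmetic, here’s a minimal Python sketch (my own illustration, using the figures from the list above) that identifies the justices whose time on the court matched their remaining lifespan, i.e., those who died in office, and averages their tenures:

```python
# Tenure data from the list above: (name, years on court, years of life after appointment)
justices = [
    ("John Jay", 6, 40),
    ("John Rutledge", 1, 11),
    ("William Cushing", 20, 20),
    ("James Wilson", 9, 9),
    ("John Blair", 5, 10),
    ("James Iredell", 9, 9),
    ("Thomas Johnson", 2, 28),
    ("William Paterson", 13, 13),
    ("Samuel Chase", 15, 15),
    ("Oliver Ellsworth", 4, 11),
    ("Bushrod Washington", 31, 31),
]

# A justice whose court tenure equals his remaining lifespan died in office.
died_in_office = [(name, tenure) for name, tenure, life in justices if tenure == life]
avg_tenure = sum(tenure for _, tenure in died_in_office) / len(died_in_office)

print(len(died_in_office))   # 6 justices died on the bench
print(round(avg_tenure, 1))  # 16.2 -- an average tenure of over fifteen years
```

The six matches (Cushing, Wilson, Iredell, Paterson, Chase, and Washington) total 97 years on the bench, which works out to the “over fifteen years” average cited above.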

That said, while the first generation of U.S. politicians could conceive of Supreme Court justices serving for decades, the number of jurists who actually do so has gone up. As of today the historical average tenure on the court stands at sixteen years, but no justice has left the bench before that time since the late 1960s.

The other career model we see these days, a justice serving for decades and then retiring, was less common in the 1790s. Indeed, the three early justices who resigned citing reasons of health—John Blair, Thomas Johnson, and Oliver Ellsworth—did so after only a handful of years. The job was more physically demanding when Supreme Court justices still rode the circuit to hear federal cases rather than staying in the capital.

One path we haven’t seen for a long time was a justice resigning from the top bench because he preferred a different government role. John Jay left the court to be governor of New York, having already run for that office in 1792 and gone overseas as President George Washington’s treaty negotiator in 1794.

Finally, there’s a storyline we really don’t want to see repeated. John Rutledge (shown above) resigned from the U.S. bench to become chief justice of his home state of South Carolina. Then President George Washington put him back on the Supreme Court as chief justice, only for the Senate to decline to confirm him. Rutledge attempted suicide, withdrew from public life, and died five years later.

Tuesday, December 21, 2021

Filling the New England Seat on the U.S. Supreme Court

For more than a century the U.S. Supreme Court had a seat reserved for New Englanders.

The early Presidents had two good reasons for that. First, by appointing justices equally from all regions of the country those Presidents—especially all those Virginians—avoided charges of favoring their home region.

Second, in its early years the Supreme Court justices also rode circuit, hearing federal cases in their districts. So a New Englander covering the northeastern states wasn’t so far away from home.

For the first two decades, that New Englander was William Cushing, formerly chief justice in Massachusetts. In 1795 President George Washington promoted him to be the chief justice, and the Senate confirmed him. But Cushing declined the commission. Being chief justice just wasn’t as prestigious and powerful as the job has become.

Justice Cushing remained on the bench longer than any other member of the original court. He was also the last to wear the full judicial wig inherited from the British system. When Cushing died in 1810, President James Madison needed a replacement from New England. He also wanted someone from his own Republican party, which was difficult because most New England lawyers were Federalists.

Madison’s first choice was Levi Lincoln of Hingham—former U.S. attorney general under Thomas Jefferson, former lieutenant and acting governor of Massachusetts (shown above). The Senate voted its approval. But Lincoln declined, citing bad eyes. Again, being a Supreme Court justice wasn’t that great.

Madison then nominated Alexander Wolcott of Connecticut, mentioned in yesterday’s posting. Wolcott had practiced law, but he was primarily known as the leader of his state’s Republicans. He engaged in harsh political disputes and oversaw patronage appointments. The closest he’d gotten to judicial experience was in his own patronage position as a Customs inspector. The Federalist Columbian Centinel called Wolcott’s nomination “abominable.”

Nonetheless, the Republicans were firmly in charge of the U.S. Senate, 28 votes to 6, and Supreme Court nominees usually got approved within a week. In Wolcott’s case, the Senators referred the court nomination to a committee for the first time. Then they didn’t take a vote until nine whole days later, on 14 Feb 1811.

The U.S. Senate rejected Alexander Wolcott’s nomination to the Supreme Court by a vote of 24 to 9. This was the largest percentage against any court nominee ever. Even Republican Senators voted against the nomination by a margin of at least 2:1.

Wolcott went back to Connecticut politics. President Madison looked around for another New Englander to nominate to the high Court. Again, he needed a prominent Republican—but one with a less partisan history.

Madison’s third choice was John Quincy Adams, former Federalist Senator from Massachusetts. Adams had bucked his party’s foreign policy on several issues under President Thomas Jefferson and ended up a politician without party backing. In 1809 Madison appointed him the U.S. minister to Russia, a country Adams had first visited as a teen-aged secretary for the Continental Congress’s envoy, Francis Dana.

As with Lincoln, the Senate gave their advice and consent in favor of President Madison’s nominee. And as with Lincoln, the nominee declined the job. Adams would go on to be U.S. Secretary of State, President, and a long-time Representative from Massachusetts.

Once again President Madison scanned the New England legal landscape. The best candidate he could find was a lawyer from Marblehead, only thirty-two years old, with one term in the U.S. House of Representatives under his belt. This was Joseph Story, still the youngest person ever nominated to the U.S. Supreme Court.

Story was confirmed and served thirty-three years. As an associate justice, law professor, and author, he exercised more influence over the U.S. legal system than anyone else in the early 1800s but Chief Justice John Marshall.

When Story died in 1845, President James K. Polk nominated Levi Woodbury of New Hampshire to succeed him. After Woodbury, the justices in that line were Benjamin Curtis of Watertown; Nathan Clifford of Maine; Horace Gray of Boston; and Oliver Wendell Holmes, Jr., of Boston. The replacement for Holmes was Benjamin Cardozo of New York, though by that time Louis Brandeis—a native of Kentucky who had established his legal career in Boston—was representing New England on the high bench.

Monday, May 27, 2019

Serfin’ U.S.A. with Benjamin Franklin

Yesterday I examined the facts and logic of a recent USA Today opinion essay, “Killing the Electoral College Means Rural Americans Would Be Serfs” by Trent England. I found them unconvincing.

The portions of the essay that invoke history are more alarmist and equally slipshod. England writes:
…history shows that city dwellers have a nasty habit of taking advantage of their country cousins. Greeks enslaved whole masses of rural people, known as helots. Medieval Europe had feudalism. The Russians had their serfs.
That’s laughable, and not just because this conception of world history appears to be confined to the western half of Eurasia.

Before the Industrial Revolution, the overwhelming proportion of people in all large societies worked in agriculture. Cities were relatively small. Urban elites didn’t just head out to the countryside and enslave the people they found there. Rather, local strongmen forced the bulk of their neighbors to work the fields for them in exchange for protection. Only over time did elite families take urban dwellings as well, and only later did urban traders turn themselves into country aristocrats.

Notably, England doesn’t discuss the U.S. of A.’s own history of enslaving and oppressing people to make them work on agricultural production. In the ante-bellum period and then in the Jim Crow era, the Electoral College preserved the power of the local elites who maintained and benefited from that exploitation. Nobody looking at U.S. history should think that the Electoral College system has protected the rural Americans who actually did the work.

“The idea that every vote should count equally is attractive,” England writes. Yes, that’s why his state of Oklahoma, like every other state, counts votes equally in its own elections. I have yet to see proponents of the national Electoral College demand a similar system for their own states. The U.S. Supreme Court has even ruled that state and local elections must be based on the principle of “one person, one vote.”

England goes on:
But a quote often attributed to Benjamin Franklin famously reminds us that democracy can be “two wolves and a lamb voting on what’s for lunch.” (City dwellers who think that meat comes from the grocery store might not understand why this is such a big problem for the lamb.)
England snidely suggests that “city dwellers” don’t know where meat comes from, but really he destroys his claim to speak for rural America by treating “two wolves and a lamb” as the norm.

There are more than 5,000,000 sheep in America and fewer than 25,000 wolves. Lambs would be well off in a “one animal, one vote” democracy where sheep could easily outvote wolves. The only time wolves outnumber sheep is when they maneuver to create that situation for their own advantage. Likewise, politicians worried about losing fair votes manipulate electoral districts (gerrymandering) or cling to an old imbalanced system (the Electoral College).

Franklin never made that mistake about wolves and sheep because Franklin never said what England quotes him as saying. The line appears nowhere in the Franklin Papers at Founders Online. Wikiquote not only notes that lack of a credible source but also how the word “lunch” appeared well after Franklin’s lifetime. England’s phrase “a quote often attributed to” hints that he recognized how unreliable this attribution was but decided to use it anyway because it served his purposes.

Likewise, the present Electoral College system continues to serve the purposes of some Americans, so they’ll use any argument to make it appear to be fair, logical, or beneficial. But those arguments melt on scrutiny.

Thursday, June 07, 2018

What the Founding Era Meant by “Bear Arms”

Last month Dennis Baron, a professor of English and linguistics at the University of Illinois at Urbana-Champaign, published an op-ed essay in the Washington Post on the language of the Second Amendment to the U.S. Constitution:
Two new databases of English writing from the founding era confirm that “bear arms” is a military term. Non-military uses of “bear arms” are not just rare—they’re almost nonexistent.

A search of Brigham Young University’s new online Corpus of Founding Era American English, with more than 95,000 texts and 138 million words, yields 281 instances of the phrase “bear arms.” BYU’s Corpus of Early Modern English, with 40,000 texts and close to 1.3 billion words, shows 1,572 instances of the phrase. Subtracting about 350 duplicate matches, that leaves about 1,500 separate occurrences of “bear arms” in the 17th and 18th centuries, and only a handful don’t refer to war, soldiering or organized, armed action. These databases confirm that the natural meaning of “bear arms” in the framers’ day was military.
Lawyer Neal Goldfarb checked more variations of the phrase in the same databases and came to the same basic conclusion.

In the 2008 Heller case, as everyone involved in this discussion knows, the U.S. Supreme Court decided otherwise. Writing for the court, Justice Antonin Scalia treated “bear arms” not as an idiom with a military meaning but as a general phrase about carrying weapons.

The data shows otherwise—hardly anyone in the eighteenth century used it as Scalia did. As with the Reynolds case I wrote about here, the court’s finding is simply at odds with historical facts. The Heller ruling overturned legal understandings that prevailed for most of the twentieth century and changed the law going forward, but such rulings can’t change the actual past.

The Second Amendment reflects the Founding generation’s faith in the militia system of community self-defense that they had all grown up with. It said nothing about private ownership of firearms to hunt, to protect one’s home or person, or to make loud noises. Perhaps they viewed those activities as falling under the Tenth Amendment. We can’t know because the Tenth is so vague.

That said, the idea of a militia in the Founders’ time depended on widespread ownership of firearms by the (mostly white) men who made up the militia. Even if we go back to reading “bear arms” to refer only to military activity, as the Founders no doubt understood it, they still envisioned a public self-defense system in which most white men owned muskets, trained regularly with those muskets, and knew which officers to turn out for while carrying those muskets.

I think the big question of the Second Amendment lies in its opening premise: “A well regulated Militia, being necessary to the security of a free State.” We no longer have a militia system that the Framers would recognize. Instead, we have a large standing army with advanced weaponry, many of those troops deployed overseas—a situation that would startle the Founders, if not alarm them. If the premise of the Second Amendment no longer applies, what does that mean for the conclusion?

Friday, December 30, 2016

What the United States Are/Is

In the U.S. Constitution, “United States of America” is a plural noun, as in:

No Title of Nobility shall be granted by the United States: And no Person holding any Office of Profit or Trust under them, shall, without the Consent of the Congress, accept of any present, Emolument, Office, or Title, of any kind whatever, from any King, Prince, or foreign State. . . .

Treason against the United States, shall consist only in levying War against them, or in adhering to their Enemies, giving them Aid and Comfort.
Today we refer to the “United States” as a singular entity. And lots of people trace that change to the U.S. Civil War of the 1860s. That idea got a big boost in 1988 when historian James McPherson wrote in Battle Cry of Freedom:
Before 1861 the two words ‘United States’ were rendered as a plural noun: “the United States are a republic.” The war marked a transition of the United States to a singular noun.
Two years later, historian Shelby Foote gave that statement an even bigger boost by echoing it in Ken Burns’s Civil War television series and accompanying book.

In 2005 Ben Zimmer at Language Log dug back into the record, finding expressions of the idea as early as the 24 Apr 1887 Washington Post:
There was a time a few years ago when the United States was spoken of in the plural number. Men said “the United States are” — “the United States have” — “the United States were.” But the war changed all that. . . . The surrender of Mr. Davis and Gen. Lee meant a transition from the plural to the singular.
In 1902 the U.S. House of Representatives’s Committee on Revision of the Laws decided that henceforth Congress should refer to “the United States” as singular.

In 2009 (after Language Log got a new design), Mark Liberman started exploring the idea through historic newspapers, Presidential speeches, and Supreme Court decisions. He saw usage evolve but didn’t find a clear-cut change around the Civil War.

I decided to use the less sophisticated tool of the Google Books Ngram Viewer, looking for “United States is/was/are/were” across its database of publications from 1780 to 2000.

That method brings up a lot of false hits because it doesn’t distinguish between phrases in which “United States” is the actual subject of a sentence and phrases in which it’s helping to modify the subject, such as “The laws of the United States are based on British traditions” or “The President of the United States is an idiot.” But if those other constructions appear at a steady rate, then the comparison is still meaningful.
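That comparison can be sketched in a few lines of Python. The counts below are made-up placeholders standing in for Ngram Viewer data, purely to show how the singular share is computed decade by decade:

```python
# Hypothetical per-decade counts of "United States is/was" (singular)
# versus "United States are/were" (plural). These numbers are invented
# for illustration -- they are NOT real Google Books Ngram data.
counts = {
    1860: {"singular": 120, "plural": 480},
    1890: {"singular": 300, "plural": 300},
    1920: {"singular": 540, "plural": 60},
}

def singular_share(c):
    """Fraction of all is/was/are/were hits that take the singular verb."""
    total = c["singular"] + c["plural"]
    return c["singular"] / total

for decade in sorted(counts):
    print(decade, round(singular_share(counts[decade]), 2))
```

Note that the false hits from modifier phrases cancel out of this ratio only if, as suggested above, they occur at roughly the same rate in both the singular and plural columns.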

Here’s the graph that the Ngram Viewer produced for me.
There was a definite shift toward singular usage, but the big change came a couple of decades after the Civil War, and it really took off in the first decades of the twentieth century—coinciding with the U.S. House’s official choice. So the Civil War might have changed Americans’ thinking about the country, but it took a while for that thinking to be clearly reflected in our language.

Also interesting: the peaks in appearances of any form of “United States is/was/are/were” within the corpus of publications coincide with U.S. entry into the two World Wars and the Vietnam War.

Tuesday, January 26, 2016

The Long Process of Labeling the Bill of Rights

As I noted back here, James Madison used the label “bill of rights” for the first of his proposed amendments to the U.S. Constitution—a proposal that never got out of Congress.

He also proposed a bunch of limitations on the federal government that became the first ten Amendments to the Constitution, but he doesn’t seem to have considered those Amendments to comprise the United States’s own Bill of Rights.

Instead, Madison and his contemporaries continued to use the phrase “bill of rights” to refer to a general statement of the government’s powers and limitations. The one possible exception I’ve found in Founders Online is in a 1792 letter from Thomas Jefferson to George Washington. In one of those internecine squabbles that’s so much more entertaining on the Broadway stage than in your cabinet, Jefferson wrote to the President:
you will there see that my objection to the constitution was that it wanted a bill of rights securing freedom of religion, freedom of the press, freedom from standing armies trial by jury, & a constant Habeas corpus act. Colo. [Alexander] Hamilton’s was that it wanted a king and house of lords. the sense of America has approved my objection & added the bill of rights, not the king and lords.
Jefferson clearly saw the First Amendment as part of his desired “bill of rights.” Whether he thought of all ten Amendments under that label is unclear.

American legal authorities don’t seem to have publicly applied the label “Bill of Rights” to the Amendments for decades. In Barron v. the Mayor and City Council of Baltimore (1833), Chief Justice John Marshall delivered an opinion that Article 1, Section 9 of the Constitution “enumerated, in the nature of a bill of rights, the limitations intended to be imposed on the power of the general [i.e., federal] government…”

You remember the fundamentals laid out in Article 1, Section 9, right? Some are indeed important for individual rights, such as habeas corpus. But that section also protected the transatlantic slave trade until 1808 and tackled the burning issue of noble titles. It was a general list of limitations on Congress.

Incidentally, Marshall’s decision confirmed that those clauses and most other parts of the Constitution applied only to the federal government, not the states. So this decision seems, to modern eyes, to codify a sadly limited conception of a U.S. Bill of Rights.

That same year, however, Marblehead’s own Joseph Story (1779-1845, shown above), who was both an Associate Justice of the U.S. Supreme Court and professor of law at Harvard, started the process of applying the label of the Bill of Rights the way we do today. In his highly influential Commentaries on the Constitution of the United States (1833), Story began a discussion of the Amendments this way:
Let us now enter upon the consideration of the amendments, which, (it will be found,) principally regard subjects properly belonging to a bill of rights.
The next year, Story revised that book for use in classrooms as The Constitutional Class Book, and this time he wrote:
When the Constitution was before the People for adoption several of the State Conventions suggested amendments for the consideration of Congress, some of the most important of which were afterwards acted upon by that body at its first organization; and having been since ratified, are now incorporated into the Constitution. They are mainly clauses, in the nature of a Bill of Rights, which more effectually guard certain rights already provided for in the Constitution, or prohibit certain exercises of authority supposed to be dangerous to the public interests.
Finally, in 1840 Story revised his textbook again as A Familiar Exposition of the Constitution of the United States, including a rewrite of the above paragraph and following it with:
Before, however, proceeding to the consideration of them, it may be proper to say a few words, as to the origin and objects of the first ten amendments, which may be considered as a Bill of Rights, and were proposed by the first Congress, and were immediately adopted by the people of the United States.
Thus, over the course of seven years Justice Story went from saying that the first ten Amendments covered what a bill of rights should cover to saying that we might as well think of them as a Bill of Rights (with capital letters).

According to legal historian Akhil Reed Amar, Story’s label remained unofficial and qualified until well past the U.S. Civil War. Rep. John Bingham of Ohio tried to write the Fourteenth Amendment so that it applied the federal “Bill of Rights” to the states. The Supreme Court resisted both the doctrine and the phrasing for decades. Finally, a 1900 dissent by Justice John Marshall Harlan retroactively declared that “These [first ten] amendments have ever since [ratification] been regarded as the National Bill of Rights.”

Monday, January 25, 2016

Just a Few Revisions Here and There

The Amendments to the U.S. Constitution that we think of as the Bill of Rights are rooted mostly in James Madison’s fourth and fifth proposed amendments from June 1789:
Fourthly,
That in article 1st, section 9, between clauses 3 and 4, be inserted these clauses, to wit, The civil rights of none shall be abridged on account of religious belief or worship, nor shall any national religion be established, nor shall the full and equal rights of conscience be in any manner, or on any pretext infringed.

The people shall not be deprived or abridged of their right to speak, to write, or to publish their sentiments; and the freedom of the press, as one of the great bulwarks of liberty, shall be inviolable.

The people shall not be restrained from peaceably assembling and consulting for their common good, nor from applying to the legislature by petitions, or remonstrances for redress of their grievances.

The right of the people to keep and bear arms shall not be infringed; a well armed, and well regulated militia being the best security of a free country: but no person religiously scrupulous of bearing arms, shall be compelled to render military service in person.

No soldier shall in time of peace be quartered in any house without the consent of the owner; nor at any time, but in a manner warranted by law.

No person shall be subject, except in cases of impeachment, to more than one punishment, or one trial for the same offence; nor shall be compelled to be a witness against himself; nor be deprived of life, liberty, or property without due process of law; nor be obliged to relinquish his property, where it may be necessary for public use, without a just compensation.

Excessive bail shall not be required, nor excessive fines imposed, nor cruel and unusual punishments inflicted.

The rights of the people to be secured in their persons, their houses, their papers, and their other property from all unreasonable searches and seizures, shall not be violated by warrants issued without probable cause, supported by oath or affirmation, or not particularly describing the places to be searched, or the persons or things to be seized.

In all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial, to be informed of the cause and nature of the accusation, to be confronted with his accusers, and the witnesses against him; to have a compulsory process for obtaining witnesses in his favor; and to have the assistance of counsel for his defense.

The exceptions here or elsewhere in the constitution, made in favor of particular rights, shall not be so construed as to diminish the just importance of other rights retained by the people; or as to enlarge the powers delegated by the constitution; but either as actual limitations of such powers, or as inserted merely for greater caution.

Fifthly.
That in article 1st, section 10, between clauses 1 and 2, be inserted this clause, to wit:
No state shall violate the equal rights of conscience, or the freedom of the press, or the trial by jury in criminal cases.
I’m being anachronistic by including Madison’s fifth point because the Senate decided that the federal Constitution should not limit state governments in those ways and therefore omitted that proposal. It took Supreme Court decisions in the early twentieth century to apply the U.S. Bill of Rights to state and local governments. Now we take that for granted.

Lastly, the Tenth Amendment derives from Madison’s eighth, the part that said: “The powers not delegated by this constitution, nor prohibited by it to the states, are reserved to the States respectively.”

Teaching American History has a chart of which of Madison’s proposals fell away as Congress and the states considered them. Of the twelve proposed amendments to come out of that process, ten were approved by 1791 and one more in 1992.

The House rejected Madison’s idea to revise the Constitution’s text itself in favor of tacking all the amendments on at the end. The Congress also made a lot of changes to Madison’s language, mostly shortening it (perhaps at a cost to precision). As a result, the first ten Amendments don’t have a single author; they were a collective creation.

TOMORROW: If Madison didn’t call those Amendments our Bill of Rights, who did?

Sunday, March 15, 2015

Constitutional Correlations

In a recent issue of the New Yorker, Jill Lepore reviewed some recent books about economic inequality, which has been measured for a century on the Gini scale, and what that phenomenon might say about and mean for different societies.

Toward the end of her review Lepore mentions some work by the political scientists Alfred Stepan and Juan J. Linz that found correlations between economic inequality and the political structures that different nations had established:
Stepan and Linz identified twenty-three long-standing democracies with advanced economies. Then they counted the number of veto players in each of those twenty-three governments. (A veto player is a person or body that can block a policy decision. Stepan and Linz explain, “For example, in the United States, the Senate and the House of Representatives are veto players because without their consent, no bill can become a law.”) More than half of the twenty-three countries Stepan and Linz studied have only one veto player; most of these countries have unicameral parliaments. A few countries have two veto players; Switzerland and Australia have three. Only the United States has four. Then they made a chart, comparing Gini indices with veto-player numbers: the more veto players in a government, the greater the nation’s economic inequality. This is only a correlation, of course, and cross-country economic comparisons are fraught, but it’s interesting.

Then they observed something more. Their twenty-three democracies included eight federal governments with both upper and lower legislative bodies. Using the number of seats and the size of the population to calculate malapportionment, they assigned a “Gini Index of Inequality of Representation” to those eight upper houses, and found that the United States had the highest score: it has the most malapportioned and the least representative upper house. These scores, too, correlated with the countries’ Gini scores for income inequality: the less representative the upper body of a national legislature, the greater the gap between the rich and the poor.
The U.S. Constitution produced our malapportioned Senate because it was designed to respond to two of the pressing concerns of 1787: the small-population states didn’t want to give up the “one state, one vote” system of the Articles of Confederation and Perpetual Union; and the elite men at the Constitutional Convention had been spooked by the Regulator uprising in Massachusetts and didn’t trust democracy. Their design for the new federal Congress in turn led to the lesser but still significant malapportionment in the Electoral College.

Our Revolution was a step away from aristocratic government, in which hereditary monarchs and nobles had a disproportionate say on the basis of birthright. The new Senate wasn’t hereditary like the House of Lords, but it was initially designed to insulate its members from the voting population. The Seventeenth Amendment changed that. Senators’ six-year terms preserve them from facing the voters as often as other federal elected officials, but the fact that they represent states means that they can’t benefit from gerrymandering—producing more turnover in the Senate than in the House.

As for the veto players, I guess the four American ones are the House, Senate, President, and Supreme Court. The Constitution set up the first three to create “checks and balances”—a term coined by John Adams in 1787 (based on older British Whig phrases). The Supreme Court established itself as another veto player through decisions under Chief Justice John Marshall in the early 1800s.

Many other countries with legislatures also modeled on Britain’s bicameral Parliament have gradually removed veto power from their upper houses, rendering some almost symbolic. Most of those countries also have a weak or even symbolic head of state, concentrating legislative and executive powers in a prime minister. That’s how they get by with so few veto players/checks and balances—yet they remain “long-standing democracies with advanced economies.”

How much effect do those modern government structures have on economic inequality in those countries? As Lepore wrote, Stepan and Linz were finding correlations, not necessarily causes. Linz much preferred parliamentary systems over the U.S. of A.’s separation of powers, so these findings fit into his life’s work. Still, Stepan and Linz’s work makes one think about the unforeseeable consequences of laying out a constitution in the late 1700s.

Tuesday, February 03, 2015

The Legalities of Licensing Historical Tour Guides

The National Constitution Center has highlighted a case under consideration by the U.S. Supreme Court about whether cities can require tour guides to pass tests of historical knowledge before being licensed.

Federal courts have issued contradictory decisions on that point in cases from Washington, D.C., and New Orleans. Such a disagreement often spurs the high court to issue a decision, but not always.

This question also arose in Philadelphia, and back in 2008 I wrote a bit about it, starting here. I found the libertarian think tank arguing the case (the same as in the current appeals) and its tour-company client were making arguments that didn’t add up, but they were easier to follow than a history professor’s position.

In the end, I concluded that a democratic government could, for the public good, institute a process for certifying guides as meeting certain standards for historical accuracy. But the potential harm to the public from historical misinformation was too small (even though the annoyance factor is high) to justify fining or otherwise punishing guides who don’t meet those standards or seek that certification. Furthermore, the process of approving some versions of history or some guides carries the dangers of oppressive ideology and favoritism.

(The photo above shows Gary Gregory in 2006, courtesy of Anne through Flickr via a Creative Commons license. Gary now operates the Edes & Gill print shop in the North End.)

Sunday, January 27, 2013

“When the senate should have had an opportunity to act”

Joseph Story was only a boy in Marblehead when the Constitution was written. However, he became a Supreme Court justice and a Harvard law professor and thus a very influential commentator on that document. This is how he interpreted the recess appointment clause in 1833:
the president should be authorized to make temporary appointments during the recess, which should expire, when the senate should have had an opportunity to act on the subject. . . . [This course] combines convenience, promptitude of action, and general security.

The appointments so made, by the very language of the constitution, expire at the next session of the senate; and the commissions given by him have the same duration. When the senate is assembled, if the president nominates the same officer to the office, this is to all intents and purposes a new nomination to office; and, if approved by the senate, the appointment is a new appointment, and not a mere continuation of the old appointment.
Story clearly believed that a recess appointment “should expire, when the senate should have had an opportunity to act on the subject.” He even wrote that such appointments “expire at the next session” of the Senate, not “at the End of their next Session,” which is the Constitution’s language (with my emphasis).

Story wrote only a few years after a conflict over appointments between President Andrew Jackson and the Senate. During an 1829 Senate recess, Jackson named many political supporters to federal offices, particularly newspaper editors. The Senate eventually got to vote on those men and rejected at least nine. Though the administration later renominated those supporters or found new posts for them, that conflict appears to fit within Justice Story’s interpretation of the recess appointments clause: such appointments should last only until the Senate has a chance to vote on them.

In 1884 and afterwards, however, the U.S. courts ruled that the Senate could not remove an official named by recess appointment from office. Those decisions have their roots in Justice Department documents from the Jackson administration back in 1830, but they disagree with Story’s understanding and, I suspect, the Constitutional Convention’s expectations.

Since then, Presidents of both parties have expanded the use of the recess appointment. They have filled positions not just between formal Senate sessions but also in shorter recesses during those sessions. Presidents have argued that such appointments become necessary as the Senate increasingly refuses to vote on nominees, even when a majority is ready to; such filibusters also seem like a distortion of what the Constitutional Convention imagined, and unproductive for the country as well.

Nevertheless, our legal system isn’t based just on what Alexander Hamilton wrote in 1788 or Joseph Story wrote in 1833, but on the whole line of precedents. Courts have considered many aspects of recess appointments and generally found the practice constitutional. This week a U.S. Circuit Court panel ruled the other way, saying President Barack Obama overstepped that authority and imposing limits not applied to recent past Presidents. The issue seems headed for the Supreme Court.

Saturday, January 26, 2013

The Birth of the Recess Appointment

Article Two of the U.S. Constitution includes this clause, proposed by Richard Dobbs Spaight of North Carolina:
The President shall have power to fill up all Vacancies that may happen during the Recess of the Senate, by granting Commissions which shall expire at the End of their next Session.
This language was modeled after a clause in the North Carolina constitution. It wasn’t part of the first draft of the new federal document, but the men of the Constitutional Convention—many of whom probably expected to be Senators—knew they wouldn’t want to spend all their time at the capital just in case an important position should become vacant.

No one dissented on this clause, and therefore there was no formal debate about its meaning. The Constitution doesn’t define the parameters of the Senate’s “recess” or “session” except to say that it can’t “adjourn for more than three days” without the House of Representatives’ consent or meet somewhere away from the House. The founders at the Constitutional Convention shared a basic understanding of how legislatures worked, so they didn’t think it worthwhile to spell that all out.

The 67th installment of The Federalist Papers, written by Alexander Hamilton, explained that clause this way:
The ordinary power of appointment is confined to the President and Senate jointly, and can therefore only be exercised during the session of the Senate; but as it would have been improper to oblige this body to be continually in session for the appointment of officers and as vacancies might happen in their recess, which it might be necessary for the public service to fill without delay, the succeeding clause is evidently intended to authorize the President, singly, to make temporary appointments “during the recess of the Senate, by granting commissions which shall expire at the end of their next session.”
Hamilton’s main goal in that essay, context makes clear, was to assure readers that the President could not appoint Senators, as some critics of the Constitution had evidently claimed. The actual workings of the recess appointment were only a minor consideration for him.

The first President to make a recess appointment was the first President, George Washington. He named officials in the very first break of the first Congress. Presidents John Adams, Thomas Jefferson, and James Madison (also kind of an expert on the Constitution) also used this power. Jefferson, in fact, delayed his nomination of Albert Gallatin as Secretary of the Treasury so he could make a recess appointment; he didn’t submit Gallatin’s name to the Senate until almost eight months later in January 1802, when there was a Republican majority.

In fact, most of those early recess appointments were later confirmed by the Senate, or at least not rejected. But there was a notable exception. In June 1795 Washington named John Rutledge of South Carolina to be Chief Justice while the Senate was in recess. Less than three weeks later, Rutledge made a speech against the Jay Treaty negotiated by his predecessor, saying that he hoped Washington would die rather than sign it. This reduced his popularity within the administration.

Nevertheless, Rutledge presided over some court sessions that fall, and the President formalized his nomination in December 1795. By then, however, people were speaking openly about the new Chief Justice’s alcoholism, depressions, and failing mind. The Senate rejected the nomination, keeping its debate off the record. Rutledge went home to Charleston and attempted suicide. That didn’t work out, either.

Two days later, Rutledge wrote to Washington, resigning his commission as Chief Justice. Under the literal language of the Constitution, that commission was due to expire at the end of the Senate’s current session, or about five months later. Because Rutledge resigned, however, the country didn’t test the question of whether his commission should have ended as soon as the Senate had considered and rejected his appointment.

TOMORROW: Justice Joseph Story’s interpretation.

Sunday, December 30, 2012

A Modern Claim of Privilege

I’m departing from the Revolutionary era to talk about a book on another period because it raises important issues about the value of historical study within our constitutional system. Claim of Privilege: A Mysterious Plane Crash, a Landmark Supreme Court Case, and the Rise of State Secrets was written by Barry Siegel and published in 2008. It’s about the Supreme Court case known as United States v. Reynolds, or just Reynolds.

In 1948 a U.S. Air Force B-29 Superfortress crashed in Waycross, Georgia. Nine people died, three of them civilians working on new radar equipment that the plane was supposed to test. Those three civilians’ widows sued for negligence, asking that the government turn over its accident report. [When I was growing up, my parents were colleagues of one of those widows, though nobody talked to me about her past.]

The Air Force argued in court that it shouldn’t have to turn over the accident report, eventually invoking “state secrets.” Though its lawyers would not lay out the details, they implied that the report would reveal important information about the equipment being tested or the capacities of the B-29, information that would benefit the nation’s enemies.

Judges had been dealing with sensitive material for centuries. They usually examined the documents privately to determine if they were really relevant to the case. If so, they could limit discussion of those documents in camera, or in a closed court. Judges could also appoint independent experts to examine the material. But the Air Force didn’t want the court to try any of those methods.

The original trial judge and the appeals court both ruled in favor of the plaintiffs, saying that the government had to default on the case if it refused to turn over the pertinent documents. But the U.S. Supreme Court, which was dealing with the Rosenberg spy case at the same time, decided in 1953 by a vote of 6-3 that the government’s claim of the importance of secrecy trumped everything else.

In essence, the Reynolds case ruled that if the federal government declared that certain material was important for national security, the judge had to accept that privilege and not penalize the government as it would another party to a lawsuit. The logic for this decision rested on the belief that the government would not lie about the importance of any documents simply to avoid embarrassment or liability.

But in the Reynolds case the Air Force’s lawyers lied. In 2000, as Siegel recounts in Claim of Privilege, the government declassified the accident report. The document that the Air Force insisted had to remain completely secret turned out to say almost nothing about the experimental equipment the dead men had been hoping to test; that equipment wasn’t involved in the fatal accident. Instead, that report listed several problems with the airplane, the pilots’ choices, and the safety training—issues central to the widows’ negligence lawsuit.

The result is a breakdown between historical fact and legal doctrine. In law, the Reynolds case provides the basis of the U.S. government’s expanded claims for secrecy privileges over the last sixty years. It turns certain “state secrets” claims into legally unquestionable facts.

In history, however, the Reynolds case shows that government officials did use a false claim of national security to protect themselves from embarrassment. That’s an obvious historical fact. The actual events show we should be more skeptical about government claims of privilege, not more deferential.

National security obviously requires that the government be able to keep some secrets for a limited time. But in this test the Supreme Court, choosing to work blind and trust the military, drew the line in the wrong place. Unfortunately, history can’t overturn law; only more law, driven by public opinion, can. And the Reynolds precedent, having been built on in other cases, has become more entrenched.

Siegel’s book is part mystery investigation, part courtroom drama. It’s not always exciting, but it’s a thought-provoking and ultimately frustrating read.

Thursday, July 26, 2012

That Wasn’t Hamilton’s “Hideous Monster”

I’m traveling for the next two weeks, so for a while most Boston 1775 postings will be pointers to interesting material elsewhere on the web. I can’t guarantee timeliness, either of that news or of the morning updates, but I do hope to share something each day.

As a start, at History News Network, journalist and professor Ian Mylchreest offered an essay on “How Four Supreme Court Justices Misquoted Alexander Hamilton”:
Americans have always used the Revolutionary era as a cabinet of historical curiosities. When we need authority for our beliefs, we rummage around in the cupboard and pull out some suitable analogy or quotation to bolster the argument we want to make. . . .

Given how ingrained this national habit is, it seemed pretty routine that the four conservative Supreme Court justices who found the Affordable Care Act unconstitutional would include in their judgment a quotation from Alexander Hamilton. Washington’s lieutenant duly makes an appearance as the judges are warming up to denounce the individual mandate as constitutional overreach because it dragoons healthy young individuals into buying health insurance they do not want.

If Congress can do that, the dissenting justices write, “then the Commerce Clause becomes a font of unlimited power, or in Hamilton’s words, ‘the hideous monster whose devouring jaws ... spare neither sex nor age, nor high nor low, nor sacred nor profane.’”

Those are indeed the words of Alexander Hamilton, but, as they’re quoted here, it seems that he must have been warning against the ever-present tyranny of the federal government. But that was not what he was saying.

The image of the devouring monster in Federalist 33 is, in fact, Hamilton sarcastically denouncing the scare tactics of the Anti-Federalists—the men who opposed the Constitution. Hamilton wanted to assure the voters of New York that far from the tyrannous monster they had been warned about, the broad power of taxation in the Constitution was perfectly consistent with republican government.
Indeed, that number of the Federalist states:
The last clause of the eighth Section of the first Article of the plan under consideration authorizes the National Legislature “to make all laws which shall be necessary and proper for carrying into execution the powers by that Constitution vested in the Government of the United States, or in any department or officer thereof;” and the second clause of the sixth Article declares, “that the Constitution and the laws of the United States made in pursuance thereof, and the treaties made by their authority, shall be the supreme law of the land; anything in the constitution or laws of any State to the contrary notwithstanding.”

These two clauses have been the source of much virulent invective, and petulant declamation, against the proposed Constitution. They have been held up to the people in all the exaggerated colors of misrepresentation; as the pernicious engines by which their local Governments were to be destroyed, and their liberties exterminated; as the hideous monster whose devouring jaws would spare neither sex nor age, nor high nor low, nor sacred nor profane; and yet, strange as it may appear, after all this clamor, to those who may not have happened to contemplate them in the same light, it may be affirmed with perfect confidence, that the constitutional operation of the intended Government would be precisely the same, if these clauses were entirely obliterated, as if they were repeated in every Article. They are only declaratory of a truth, which would have resulted by necessary and unavoidable implication from the very act of constituting a Fœderal Government, and vesting it with certain specified powers. This is so clear a proposition, that moderation itself can scarcely listen to the railings which have been so copiously vented against this part of the Plan, without emotions that disturb its equanimity.
As Mylchreest goes on to point out, in truth Hamilton was one of the early American republic’s champions of a stronger and more active national government:
Hamilton was the big government conservative in 1787. Unlike many others in Constitution Hall, he wanted a powerful national state to emerge from the deliberations and he believed it should carry a sizeable public debt. Hamilton wanted an active federal government to build the nation.
Specifically, Hamilton hoped a bigger, stronger national government could help American business. His understanding of the constitutional structure in general and the commerce clause in particular isn’t the only authority on their meaning, then or now. But the justices and their clerks could have found much more appropriate dead spokesmen for their view of a limited commerce clause.

Friday, May 21, 2010

William York: ten-year-old murderer

Yesterday I mentioned a 1965 study by B.E.F. Knell that expressed doubt about the reported hanging of a seven-year-old child in 1708, sometimes cited as an example of the strictness of the eighteenth-century British legal system.

The main focus of that study was the early 1800s. Knell surveyed all the death sentences handed down for children under age fourteen by a well documented court in London. (None of those children was convicted of murder.) In every case—over 100 in all—the initial death sentence was eventually changed to transportation, imprisonment, and/or whipping. No child criminal was actually put to death.

That pattern matches a case related in William Oldnall Russell’s 1824 A Treatise on Crimes and Misdemeanors. In 1748, at age ten, William York was jailed for killing a five-year-old girl named Susan Mayhew. They both lived in the workhouse at Eyke. (The town’s Church of All Saints is shown above, courtesy of Roger Miller via Wikipedia under a Creative Commons license.)

Authorities reported of William: “All he alleged was that the child fouled the bed in which they lay together, that she was sulky, and that he did not like her.” His arrest made The Gentleman’s Magazine, which stated that “Judge Hales order’d a boy of the same age to be hang’d, who burnt a child in a cradle.” In fact, Sir Matthew Hale had determined the boy in that arson case was “above fourteen and near fifteen years of age” before sending him to the gallows—and that was back in the mid-1600s.

The Newgate Calendar later reprinted that early report on the killing, dwelling on the grisly details and even illustrating them in some editions. But it left out the details of the eventual legal outcome.

The court convicted William, and under British law that required the death penalty. The judges in Bury St. Edmunds even agreed that not handing down that sentence might encourage other ten-year-olds to kill little girls they disliked and found sulky. Nevertheless, those judges put off the execution by one order after another until 1757. At age eighteen or nineteen, William York received a royal pardon and went into the Royal Navy. (There was a war on, after all.)

The judicial system’s way of dealing with William York in the mid-1700s thus matches what Knell found in that survey of London cases from the early 1800s. Though British law allowed for a ten-year-old to be convicted of murder and sentenced to die, officials weren’t ready to follow through on that penalty.

The disparity between what the law says and how society actually applies it was also part of this week’s Supreme Court decision. The main dissent, written by Justice Clarence Thomas, argued that a majority of American states allow for juveniles to be condemned to life in prison for crimes short of murder. The majority opinion from Justice Anthony Kennedy pointed out that very few states actually do so. So which is the better indicator of what society considers “cruel and unusual”—what we say we can do, or what we usually do?

TOMORROW: Connecticut executes a twelve-year-old in 1786.

Wednesday, May 19, 2010

“The Moral Commitment Embodied in the Eighth Amendment”

Standards of eighteenth-century British criminal justice came up in the Supreme Court this week. The issue was whether a life sentence for a seventeen-year-old convicted of two armed robberies, or for any juvenile offender who hadn’t committed murder, was “cruel and unusual punishment” under the Eighth Amendment to the U.S. Constitution. From Adam Liptak’s coverage in the New York Times:

As usual in cases involving the Eighth Amendment, the justices debated whether the Constitution should consider, in one common formulation, “the evolving standards of decency that mark the progress of a maturing society.”

Justice [Clarence] Thomas said the court should look to the practices at the time the Bill of Rights was adopted. Given that capital punishment could be imposed on people as young as 7 in the 18th century, he said, Mr. [Terrance] Graham’s punishment would almost certainly have been deemed acceptable back then.

Justice John Paul Stevens, in a concurrence joined by Justices [Ruth Bader] Ginsburg and [Sonia] Sotomayor, said Justice Thomas’s “static approach to the law” did not allow for societal progress and would entail unacceptable human consequences.

“Justice Thomas would apparently not rule out a death sentence for a $50 theft by a 7-year-old,” Justice Stevens wrote. “Knowledge accumulates,” he wrote. “We learn, sometimes, from our mistakes.”
I’m not surprised to see Stevens and his colleagues reminding us that British law once allowed seven-year-olds to be hanged; that fact shows just how cruel past societies could be.

What seems remarkable is that Thomas actually brought up hanging seven-year-olds first. He appears to accept execution of children of that age as just if allowed by a legislature. Thomas’s allusion comes in footnote three of his dissent, which Justices Antonin Scalia and Samuel Alito joined. That note points to an opinion that Scalia wrote for the court in 1989—a decision overturned in 2005.

Thomas’s dissent also misquotes the Scalia opinion, which no one seems to have noticed. Thomas wrote that British common law allowed “capital punishment to be imposed on a person as young as age 7.” Scalia actually, and correctly, had written that the punishment was allowed for “anyone over the age of 7”—i.e., eight or above.

The “evolving standards of decency” formulation is over half a century old now, coming from Trop v. Dulles in 1958, which in turn cited Weems v. United States in 1910. In sum, the belief that courts shouldn’t define “cruel and unusual punishment” by eighteenth-century standards has been U.S. law for a century.

Stevens’s two-paragraph response to Thomas’s dissent concludes:
Punishments that did not seem cruel and unusual at one time may, in the light of reason and experience, be found cruel and unusual at a later time; unless we are to abandon the moral commitment embodied in the Eighth Amendment, proportionality review must never become effectively obsolete. . . . Standards of decency have evolved since 1980. They will never stop doing so.
TOMORROW: Did the British justice system of the 1700s actually execute young children?