Before there was a law under the American Constitution, there was an argument about the law. It was an argument, that is, about the ends of the law, and the framework of a lawful government. This was, of course, the argument over the Constitution, and it seems remarkably to have escaped recognition these days that an argument of this kind is itself a dramatic illustration of “natural law.” After all, the very appeal to first principles as the ground of a constitution is itself a move into natural law. If a constitution is to make sense, it must presuppose that there are certain principles of lawfulness that existed, as truths commanding our respect, even before a constitution was framed and enacted.

As John Locke pointed out, the legislature would be the source of the “positive law,” the law that was enacted or posited. But what, he asked, would be the source of the legislature? From what would that spring? The origin was to be found, as Locke said, in understandings that were “antecedent to all positive laws.” The ultimate authority to establish a constitution and a legislature depends “wholly on the people.” Before there is a legislature or a constitution defining a legislature, there is the right of a people to govern itself by forming a constitution and bringing forth a government restrained by law.

James Wilson made this same point in the first case that elicited a set of opinions from the Supreme Court (Chisholm v. Georgia, 1793). Wilson took the occasion to point out that the law in America would be planted on an entirely different ground from that of the law in England. The law in England, made familiar by William Blackstone in his Commentaries, began with the notion of a sovereign issuing commands. But the law in America, said Wilson, would begin “with another principle, very different in its nature and operations”: “Laws derived from the pure source of equality and justice must be founded on the consent of those, whose obedience they require. The sovereign, when traced to his source, must be found in the man.”

The appeal then had to be to those first principles “antecedent to all positive law,” an appeal to “nature”—or, as Aristotle would say, to an understanding of that creature who was by nature suited to political life. The task of “founding” a new constitutional order draws one back to the root, or to the questions that stand at the beginning of the law. But the founders also stood in that rare position rather hard for the rest of us to imagine: their experience encompassed an America, and a world of law, without the Constitution. And it requires but a moment’s reading, in any of the legal texts of the founding, to become aware instantly of the vast differences that separate the furnishings of mind of that first generation of jurists from the sensibilities typical of judges in our own day. Two snapshots, drawn from the two periods, tell the story.

In that first case, Chisholm v. Georgia, James Wilson and his colleagues understood that this was a moment of teaching, for they were at the beginning of the law under the Constitution with no cases to draw upon as precedents. Before Wilson spoke about the text of the Constitution, he found it necessary to speak about “the principles of general jurisprudence” and to acknowledge the laws of reason and “the philosophy of mind.” And so, before Wilson invoked the authority of any case at law or any commentator on matters jural, he invoked the authority of “Dr. [Thomas] Reid, in his excellent inquiry into the human mind, on the principles of common sense, speaking of the skeptical and illiberal philosophy, which under bold, but false pretensions to liberality, prevailed in many parts of Europe before he wrote.” Wilson began, then, by rejecting skepticism as the fount of all forms of relativism in morality and law.

As a contrast, it would be hard to find anything more redolent of our age than the famous “mystery passage” in Planned Parenthood v. Casey in 1992. It had been anticipated that the Supreme Court might overrule Roe v. Wade, the decision in 1973 that established a constitutional “right to abortion.” But instead, the Court offered an opinion that seemed to entrench even further the holding in Roe v. Wade. In fact, Justices Sandra Day O’Connor, Anthony Kennedy, and David Souter wrote a plurality opinion in which they enjoined the country to cease its agitation over this issue. In restating the claim for a right to abortion, the three judges sought to soar to a level poetic, and delivered themselves of this profundity: “at the heart of liberty is the right to define one’s own concept of existence, of meaning, of the universe, and of the mystery of human life.” The founders began by rejecting skepticism and relativism in philosophy and morality; the modern judges—products of the best law schools in the land—affirm the right of a person to make up his own version of the universe.

The words of the judges were philosophically untethered, but they were not inadvertent. This bantering, this rhetorical play with relativism, had been at work for many years. It ran back to the end of the nineteenth century, when law schools became the vehicles for a new science of law that would enshrine legal “positivism” as the reigning orthodoxy in the profession. In the classic debates between Abraham Lincoln and Stephen Douglas, Lincoln had represented the tradition of natural law, while Douglas expressed the purest form of legal positivism, with all of its shadings of moral relativism. There would be no truths grounded in the nature of human beings, truths that would hold their truth in all places, wherever human nature remained the same. Nor would there be any distinctly human rights that arose from that nature. Rather, the notion of right and wrong would always be “relative” to the setting and to the opinions of right and wrong that were dominant in any place. The measure of morality would be found then in the understandings so dominant politically that they could be posited or enacted into law by the people with the power to rule. By the end of the century, the law schools would teach in the spirit of Douglas’s positivism. And by our own day, the lawyers and judges who emerged from those schools would hardly be aware that there had ever been a serious debate on these questions.

Justice Oliver Wendell Holmes (1841–1935) had been a representative figure, both reflecting and shaping this change in the legal culture with his lectures at Harvard and his writings on the bench. With his characteristic terseness, or his lunge toward aphorism, Holmes marked the new sensibility. He thought it would be a notable gain if “every word of moral significance could be banished from the law altogether, and other words adopted which should convey legal ideas uncolored by anything outside the law.”

This perspective, so startling in its expression, has worked itself into the reflexes of legislators and politicians. For example, the authorities in New York City are pressed to do something about the culture of prostitution, which blights several neighborhoods. But, in a liberal city, they do not wish to say that consenting adults may be condemned in the law for their sexual relations. Instead, the legislators try to discourage brothels by insisting that any establishment calling itself a “massage parlor” must contain a swimming pool or a squash court. And more than that, the squash court must be at least twenty-five feet wide, forty-five feet long, and twenty feet high. As I have put it elsewhere, these are the kinds of rituals of empty exactitude that legislators produce when they cannot name the real object of their concern, or explain why the law is justified in addressing a distinctly “moral” issue.

In this way, the Holmesian perspective turns itself into a routine burlesque in the law. But no sooner has the law offered a parody of itself than we find a jurist who proclaims these doctrines quite earnestly. In our own time, that jurist turns out to be Justice Souter, who wrote his undergraduate thesis at Harvard on Holmes. When Justice Souter turns to the question of prostitution or nude dancing in public, he insists that the law can bear upon such matters only when they produce harmful “secondary effects.” The presence of prostitution or tawdry entertainments may become baneful for a community because they draw muggers and pickpockets, and foster a climate of violence. But then again, pickpockets and muggers are drawn by games at Yankee Stadium or by the prospect of crowds gathered at Grand Central Station on Fridays, yet the predictable rise in crime associated with these scenes or entertainments does not supply us with any ground for banning the ball games or the commuting. At a certain point the law makes sense only if it can say (as Justice Antonin Scalia put it) that the legislators meant to ban the public display of genitals not because of any speculation about “secondary effects” and “not because they harm others but because they are considered, in the traditional phrase, ‘contra bonos mores,’” i.e., immoral.

Scalia asked just when this Holmesian view of morality and law had ever been incorporated in the Constitution. “Our society,” he wrote, had never shared that “‘you-may-do-what-you-like-so-long-as-it-does-not-injure-someone-else’ beau ideal,” and much less had it ever thought that position to be “written into the Constitution.” He went on to remark that “[i]n American society, such prohibitions have included, for example, sadomasochism, cockfighting, bestiality, suicide, drug use, prostitution, and sodomy. While there may be great diversity of view on whether various of these prohibitions should exist (though I have found few ready to abandon, in principle, all of them), there is no doubt that, absent specific constitutional protection for the conduct involved, the Constitution does not prohibit them simply because they regulate ‘morality.’”

At the turn of the century, Learned Hand (1872–1961) as an undergraduate would absorb the currents at work in the teaching of philosophy at Harvard. From William James, Josiah Royce, and others, he incorporated a kind of genteel skepticism, dressed up with pragmatism. The result was a new kind of affectation among the intellectual classes. The judges marked their sensibility by professing, at every turn, their doubts, their uncertainties, their suspicions about truths, and especially the truths that marked first principles. In the end, they were left with little more than the intuitions that sprang from their own exquisite sensibilities. Learned Hand became the most accomplished of judges, widely respected and rightly admired as a common law jurist. When he concentrated his genius on issues of copyrights and patents, he wrote with a remarkable spareness and with a literary craft. But when this highly tutored man turned to questions of constitutional law or became more self-consciously philosophic, his writing was marked by a mannered aversion to “absolutes.” And that reflex produced in him a shallowness that reflected the public philosophy of his age.

The habit of recoiling from absolutes was picked up by Professor Gerald Gunther, Hand’s most recent biographer. As a result, Professor Gunther, who had been a clerk to Learned Hand, began to see the cases, and the world, through the same clichés that acted as a screen to Hand. That screening is illustrated in a telling way in the account of George Sutherland’s opinion in Adkins v. Children’s Hospital. In that case, Sutherland, a leader in the cause of votes for women, saw himself as acting on the same principles when he struck down a law mandating minimum wages for women in the District of Columbia. But when Hand (and Professor Gunther) characterized Sutherland’s argument, it was through the lens of positivism. Sutherland’s complex jural argument is entirely flattened into the conclusion that he was merely being willful, that he disliked the legislation at hand, and that the legislation collided with his predilections. The reasoning offered by Sutherland is never reported or addressed, and along with everything else lost from view in this case are the circumstances of the injured party. Nowhere in Hand’s or Professor Gunther’s accounts is there any mention of Willie Lyons, forced out of her job at the Congress Hotel as a result of the law on minimum wages for women in the District of Columbia. In other words, the teaching had taken hold. Hand’s patterns of obtuseness were now imparted to his admirers in the next generation. In the story of Hand and the judges around him, the villains for Professor Gunther were the judges who were “never tortured by doubt.” Hand might differ strongly at times with Holmes, and yet, as Professor Gunther remarked, “they shared a common philosophic outlook,” quite representative of the circles from which they had sprung. “Neither,” said Professor Gunther, “believed in absolutes or eternal truths.”

Learned Hand never made it to the Supreme Court. But he later recommended to President Eisenhower the appointment of his colleague on the Federal Court of Appeals for the Second Circuit (in New York), the redoubtable John Marshall Harlan. Harlan was the grandson of the famous justice of the Supreme Court bearing the same name, the man who had offered the ringing dissent against racial segregation in Plessy v. Ferguson (“the Constitution is color-blind”). The original John Marshall Harlan had come from Kentucky; the grandson came from the schools and the corporate world of the East Coast, and he was as reflective of those circles as Hand. Toward the end of his career, this supposedly conservative judge helped advance the cause of sexual liberation by helping to strike down the laws on contraception. But he astounded observers, in 1971, with an opinion on “political speech” that brought jurisprudence on the First Amendment into a new register. And yet, in that famous case of Cohen v. California, Harlan had simply advanced the strands of moral skepticism that he had inherited from Hand and Holmes. It is one of the ironies of our time that Harlan could gain a reputation for novelty and inventiveness in the law for “discovering” for the courts the doctrines of logical positivism about thirty years after they had been refuted in departments of philosophy.

Still, those ideas had been much in fashion when Harlan was in school, and he drew upon them with all of the freshness of his own youth. As the key to the case, Harlan offered the cliché that would become, for many judges, their signature tune on matters of the First Amendment: “One man’s vulgarity is another’s lyric.” Speech on matters of moral and political significance was all “subjective” in nature. As logical positivism had instructed Harlan’s generation, there were no truths that could anchor our judgments on matters of morality and justice. Statements about the things that were right or wrong, just or unjust, were essentially “emotive” in character. There was nothing “cognitive” about them, no propositions that could be weighed for their truth or falsity. And so restrictions on speech simply reflected the passions and the emotions of the people who made the laws. In Cohen, a young man, in the turbulent days of 1968, had walked into a crowded courthouse in Los Angeles, wearing a jacket that bore the inscription “F*** the Draft.” The intention was clearly to provoke by using a shocking, vulgar expression, one not yet part of civil discourse. But Harlan, with an affectation of philosophy, professed his want of surety in unlocking the meaning of the words. “How is one to distinguish this [word],” he asked, “from any other offensive word?” He insisted that there was “no readily ascertainable general principle” by which one could draw distinctions. That is to say, there were no principled grounds on which the authorities, or anyone else, could distinguish between speech that was assaulting or innocent, threatening or inoffensive. And, for that sovereign reason, Harlan now declared with his colleagues that the decision as to what language is fit for a public place must be left “largely [in] the hands of each of us.”

It was understood in the past that when people ventured into public, they had an obligation to restrain themselves out of a respect for the sensibilities of others. Harlan and his colleagues now switched the presumptions and the burdens. People who used uncivil gestures and assaulting words would have a presumptive “right” to speak. The burden would fall now on the victims, or the passersby, to avert their eyes or develop tougher skin. For years, urbanists had been urging planners to arrange cities in such a way that they would facilitate the encountering of strangers: They preferred public transport to the privacy of automobiles; they contrived parks and benches where strangers could meet while at lunch. But Justice Harlan and his colleagues, with this access of novelty, undermined the moral framework for these policies of urbanism. The teaching in Cohen made it hazardous for people to venture out, especially at night, into public places. Throughout the country it suddenly became harder for the police to enforce the laws on loitering, to remove the aggressive hawkers and beggars who take over prominent corners in cities. In Washington, restaurants went out of business on Connecticut Avenue near Dupont Circle as their customers finally became reluctant to move through the gauntlet of exotic characters importuning and insulting them as they made their way along the public paths. Indeed, with this small move, Harlan triggered nothing less than a minor revolution in our civic life.

There was the most telling discord between the argument he was forced to make for the subjective nature of offensive or assaulting speech and the argument he had to put in place to establish the claim of this speech to constitutional protection. Harlan assumed that the speech emblazoned on Cohen’s jacket had a claim to constitutional protection because it was “political”; it conveyed a sentiment dealing with a matter of public controversy. According to Harlan, what Cohen was doing with his jacket was “asserting [a] position on the inutility or immorality of the draft.” There was a point to be made by taking Harlan at his word and asking just which one, exactly, he thought Cohen meant. Was “F*** the Draft” merely a shorthand expression for: the draft is “inutile”? Or that the draft was “immoral”? It is worth pointing out that the message meant neither of these things. The profanity on the jacket was meant to mock with its grossness; it conspicuously lacked the precision of analytic prose, particularly when applied to matters of public policy.

Nevertheless, Harlan’s reading was in one respect correct, but on grounds that contradicted his argument at the root. What we knew of Cohen’s message was that it condemned or denounced the draft, and we knew that mainly because he had drawn upon a word that was established in ordinary language as a term of condemnation, derision, insult. We knew it, that is, because the meaning of words was not subjective and arbitrary. And in the same way we knew that he was referring to the military “draft.” Someone who had taken Harlan’s argument literally might have turned around and insisted that all the words were entirely “subjective,” and asked how we could know that Cohen was not referring to a “draft” in the sense of wind. How did we know then that Cohen was not enjoining us, perhaps in a spirit of paganism, to “make love to the wind”?

But we knew that Cohen was making a political speech precisely because the meaning of words is not wholly subjective. And we knew these things for the same reasons that were brought forth, years earlier, to refute logical positivism. The functions of condemning or commending, of deriding or applauding, are moral functions, and they are rooted in our language. The words that carry these functions may change over time, but the functions persist. And if they do, it must be possible for most people to understand at any moment the words that are established in our language as terms of rebuke or of praise. In this exercise of gauging ordinary usage—of recognizing insults, say, when we hear them—the judgments of truck drivers can be quite as reliable as the reactions of doctors and lawyers. But that moral function will always be contained in our language, because it is part of the constitution of our own natures, as moral beings.

Even people of ordinary wit, then, labor under no philosophical handicap when they undertake to distinguish between words that are meant as praise and words that are meant as attacks. But thanks to Harlan and his colleagues, the law would be founded from that time forward on entirely different premises. The conviction holds, among conservative as well as liberal judges, that it is not legitimate or even possible for authorities to make discriminations based on the content of speech. For example, the Supreme Court struck down any attempt to ban gestures of disrespect for the American flag. As Justice Brennan explained in 1989, the Constitution made it untenable now for anyone in official authority to establish what was and was not permitted when it came to behavior toward the American flag. Some of us thought that the judges could not be fully serious, for what would they do when they encountered the evident misuse of a cross? Mark Russell once told the story of a family of Unitarians who moved into a Southern town, and in the middle of the night a group of bigots burned, on their lawn, a large question mark. Surely one could tell the difference between a burning question mark and a burning cross. The former may be puzzling, but the latter has been invested with a rather definite meaning in our language and experience. And a man of ordinary wit would surely be able to tell the difference between a cross used for devotional purposes in Christian services and a cross used for the sake of terrorizing black people.

Nevertheless, when a case of that kind made its way to the Supreme Court, the judges came together unanimously in striking down the attempt of the law to recognize those differences and forbid those forms of “expression” that constitute assaults. In R.A.V. v. St. Paul (1992), the city of St. Paul had banned the burning of crosses, along with other forms of gestures and speech molded into assaults on groups defined by race, ethnicity, gender, and religion. It was a telling sign that the main opinion in the case was written by Justice Antonin Scalia, who had become the leading voice of conservative jurisprudence on the Court. Scalia registered an apt concern about speech that would be focused on political adversaries. If there were restrictions on assaulting speech, it was critical that the measures remain “neutral” in regard to their political tendency. There might be laws barring political signs near polling places, but those laws could not be applied only to Republicans rather than Democrats. There could be a ban on “hateful” speech, but such a ban could not be used in a manipulative way to forbid speech that was critical of gays, and yet not forbid the kind of speech that smeared other people as “homophobes.” In all of this there was a point, and yet there was also a point gravely missed. There was in fact a species of defamation or assault, quite distinct and knowable, which acquired its viciousness as it diffused its attacks on whole groups defined by race or ethnicity. The notion of group libel was taken quite seriously after the Second World War, as governments in Europe sought to foreclose the campaigns of vilification that had been directed at Jews. In America, a comparable concern for the treatment of blacks, and the fomenting of racial riots, had found expression in the framing of comparable laws.

But Scalia thought that there was a kind of symmetry in attacking, say, Catholics as a group, and the bigots who were “anti-Papist.” He thought it revealed, quite sharply, the political tilt hidden in this legislation that “one could hold up a sign saying, for example, that all ‘anti-Catholic bigots’ are misbegotten; but not that all ‘papists’ are, for that would insult and provoke violence ‘on the basis of religion.’” Yet if there was in fact a deep wrong in attacking people on account of their race or religion, then there was no parity here. The person who objected to this kind of speech could not be seen simply as a different species of bigot, attacking another class of people (namely, those “public-spirited” people who burn crosses or attack others on the basis of their race or ethnicity). But in the meantime, Scalia and his colleagues seemed to confirm, ever more deeply, the moral skepticism that was engrafted onto the law by Justice Harlan in the Cohen case. They helped to enforce the notion that people in authority could not be trusted to make judgments about the content of speech, because there was no conviction, in the end, that there were grounds for judging the rightness or wrongness of the political ends that animated that speech.

And so, twenty-five years after Cohen v. California, we find the melancholy fading of any differences on this matter between conservative and liberal judges. It is no surprise then that conservative jurists, for the most part, are as dubious about natural law, or “moral truths,” as the liberal judges, and their allies in the academy, who teach the gospel of “postmodernism” and “multiculturalism.” Clarence Thomas stands as a notable exception, as a judge trying at least to recall the origins of the American law in “natural rights.” But most of his colleagues in the courts react with a bemused tolerance, or with scornful dismissiveness, if anyone raises the notion of natural rights, or earnestly claims to know of certain anchoring truths that were more than merely conventional. All of that is taken as so much twaddle, or as a reflection of the innocence of an earlier age. Lincoln said that when the founders proclaimed that all men were created equal, they had pronounced “an abstract truth, applicable to all men and all times.” But that conviction is no longer taken seriously by conservatives or liberals among the judges.

Yet, without the moral premises of the founders, it is not at all clear what can form the ground of jurisprudence in this new age, delivered from those superstitions of the past. Is the substance of justice simply established, as much as practicable, when a proposition is enacted into law? Or are there standards, principles of judgment, that can test the substance of what was done, even if it were done in a formally legal way? In his notable lectures on jurisprudence in 1790, James Wilson made the point that since the American law began with the understanding of natural rights, it began by incorporating a principle of revolution; it began with the recognition that there could be a wrongful law. There could be a measure passed in a legal way, but thoroughly wrongful, or evil, in its substance.

For the jurists of our own time, this sentiment seems quite uplifting, laudable, even if they do not quite believe it. Liberal professors of law are especially willing to speak words of this kind as they encourage judges to flex their powers of office, in defending certain “rights” against the opinions of the public, reflected in legislatures. But that flexing of power can be justified only when the judges appeal to a standard of right and wrong apart from the votes of a majority. Liberal jurists and professors have been quite willing for judges to exercise that power when it comes to articulating certain rights to sexual “privacy,” such as a right to abortion or a right to homosexual behavior. And yet their ethic of liberation has proclaimed itself by declaring an emancipation from the constraints of moral truths. Professors such as Ronald Dworkin and Laurence Tribe have been willing to countenance judges imposing whole new ensembles of law, even when it means overriding moral sentiments held deeply among the public. At the same time, however, they have carefully avoided any claim that they are appealing to moral truths or natural law. Professor Dworkin makes the most sweeping claim for an “empire” of law based on “principle,” even though he finds the foundation for his judgment in “a nation’s political traditions and culture”—a formula that, in the nineteenth century, would have encompassed slavery in America. As for Professor Tribe, he has been quite emphatic in his judgments, but at the same time he has warned that, “even if we could settle on firm constitutional postulates, we would remain inescapably subjective” in the application of those postulates. In the end, as he says, he falls back simply on convictions “powerfully held.” But, of course, if it were a matter simply of opposing the beliefs held firmly by judges against the convictions held tenaciously by the public, it is not clear why the beliefs of judges, merely as beliefs, claim a higher authority.

That, at any rate, was one of the anchoring convictions of Hugo Black, Franklin Roosevelt’s first appointee to the Supreme Court. Black, a populist from Alabama, was always distrustful of lawyers spinning out judgments from that magical phrase, “due process of law.” In truth, the exercise was implicit in the very logic of law: a “law” passed with all of the formal trappings of legality may nevertheless lack the substance of lawfulness or justice. But when judges sought to appeal to the principles of justice lying beyond the text, Black sniffed out an appeal to natural law, and natural law he regarded as a sham. In the most curious way, Black absorbed the premises of the logical positivists: When judges appealed to principles of right and wrong in order to strike down a law, he suspected that it was merely a pretentious way for judges to say, “I don’t like the policy.” But in this manner, Black reflected the divided soul of the New Deal: In politics, not a hint of relativism—not the slightest suggestion that the attack on “economic royalists” was anything less than just—but in jurisprudence, relativist. The judges would recede from imposing their judgments, and the legislators would be free to act out their own, emphatic moral judgments, responsible only to the people who elected them. They would be restrained only by the most explicit provisions in the Constitution.

But then suddenly, in the 1960s, liberal jurisprudence began to turn itself completely about, and Black was uniquely placed to mark the change. He stood among the dissenters in the famous Griswold case in 1965, when the Court discovered in the Constitution a right to “privacy” that denied to legislatures the authority to regulate the use or sale of contraceptives. With just a few short steps, the Court made its way from that point to encompass, within the scheme of “privacy,” the right of a woman to “terminate” her pregnancy. By that time, Black was gone, but his principal follower on the Court, Byron White, was one of the dissenters in Roe v. Wade. With that dissent he offered a final reflection of the old liberal jurisprudence: For Justice Black, there could not have been the slightest doubt that legislatures were able to protect children in the womb, as they could protect endangered animals. Nothing in the text of the Constitution barred the people and their representatives from registering that moral judgment.

When Robert Bork appeared for his confirmation hearings in 1987, he was a Republican appointee, reflecting the jurisprudence of the old liberalism. He confronted, in the Senate Committee on the Judiciary, a wall of liberal Democrats who had formed themselves, in effect, into the “party of the courts.” Bork looked to be—and no doubt was—the potential fifth vote to overturn Roe v. Wade. To the liberalism of the New Deal, it would not have mattered. If the right to an abortion were really as important to the American people as the Democrats now insisted, then the people could hardly be divested of that right if the subject were returned to the hands of legislators elected by those same people. But clearly, the right to abortion had now become central to the ethic and the jurisprudence of the modern Democratic party. It was part of a larger wave of sexual freedom and an ethic of “autonomy” that was steadily diffusing itself through the decisions of the courts, not only on matters of sexuality, but also on cases of euthanasia and a new so-called “right to die.” In fact, it soon became clear that the right to abortion was not merely an item in the mix of liberal concerns; it was the central peg on which liberal jurisprudence would arrange itself.

Liberalism had converted itself from a doctrine proclaimed and defended in public into a covert doctrine, whose ends were articulated and imposed only through the courts. At the polls, the right to abortion had lost persistently, in most places, before Roe v. Wade. Bill Clinton did not campaign openly with a program of gay rights, and when he made that program the first initiative of his new administration, he sank deeply in the polls. His retreat confirmed the new posture of liberalism: the liberal party would simply depend on the courts to impose the more “advanced” parts of its agenda. The party defended the insulation of the courts, and the judges in turn enacted those parts of the liberal agenda that the party would never acknowledge or defend in the course of a public campaign. For liberal jurisprudence, the right to abortion and sexual freedom had become, in Matthew Arnold’s phrase, “the one thing needful,” and political liberalism had now been reduced, in effect, to its jurisprudence.

What seems missing from both liberals and conservatives today is any sense that when we speak of natural law or natural rights we are dealing with the problem at the root of our understanding of the law. Alexander Hamilton conveyed the sense of these things in the most elementary and compelling way in The Federalist Papers. For example, in Federalist 78, Hamilton noted the rule that guided the courts in dealing with statutes in conflict: that the statute passed later is presumed to have superseded the law enacted earlier. The same rule does not come into play, of course, with the Constitution, for a Constitution framed earlier would have to be given a logical precedence over the statute that came later. Were that not the case, the Constitution would lose its function, or its logic, as a restraint on the legislative power. But these rules for the interpretation of statutes are nowhere mentioned in the Constitution. As Hamilton remarked, they were “not derived from any positive law, but from the nature and reason of the thing.”

For the leading members of that founding generation, there was not the least strain in moving to those understandings that stood “antecedent to all positive laws.” In a comment made in passing in one of his opinions, Chief Justice Marshall apologized to his readers for spending so much time explaining something that should stand, after all, in the class of an “axiom.” Marshall apparently took it for granted that every literate reader knew that axioms cannot be demonstrated: they had to be grasped, as Aquinas said, per se nota, as things known in themselves. That the founders understood the matter precisely in this way was nowhere expressed with more elegance and clarity than by Hamilton in the opening paragraph of his essay in Federalist 31. The paper was about taxation, and in the course of the essay he did not reach any conclusion that has not been reached in our own day, say, by Bob Dole. But any disinterested reader will notice at once some striking difference in the furnishings of mind. Hamilton put it in this way:

In disquisitions of every kind there are certain primary truths, or first principles, upon which all subsequent reasonings must depend. These contain an internal evidence which, antecedent to all reflection or combination, commands the assent of the mind…. Of this nature are the maxims in geometry that the whole is greater than its parts; that things equal to the same are equal to one another; that two straight lines cannot enclose a space; and that all right angles are equal to each other. Of the same nature are these other maxims in ethics and politics, that there cannot be an effect without a cause; that the means ought to be proportioned to the end; that every power ought to be commensurate with its object; that there ought to be no limitation of a power destined to effect a purpose which is itself incapable of limitation.

Just as Bob Dole might have put it. The contrast may become even deeper—and even more telling—when we realize that this was the kind of prose that Hamilton struck off at a moment’s notice, writing for a deadline as a political essayist. That generation of American jurists contained minds of the first order, and one benign, lasting effect was that even rather pedestrian lawyers in the middle of the nineteenth century took as their standard Marshall and his best colleagues. And so even ordinary judges came closer in their craft to Marshall than to the Blackmuns and Souters of our own age. The main difference in our day is that jurists have fallen out of practice. They may be quite as bright, but they have been gradually tutored out of the conviction that there are principles to be known, or laws of reason that can provide anchors for judgment. They have even been schooled out of the elementary sense of that being who forms the ground and the object of jurisprudence. James Wilson insisted that the law in America would trace back simply to the understanding of “man,” weighing his consent. And by “man,” the founders understood, with Aristotle, that primate who was distinctly suited to the polis, and the world of law, because he could do more than emit sounds to indicate pleasure and pain: he could “declare what is just and what is unjust.”

Even that elementary sense of things, and the implications tucked away in it, may be quite removed from the understandings held by our recent judges. The late Thurgood Marshall offers a preeminent example. In the case of Rhode Island v. Innis (1980), a man was arrested for committing armed robbery with a sawed-off shotgun. On the way to the police station, the officers in the car fell into a conversation about the missing shotgun. One officer remarked that there was a school for handicapped children in the area and “God forbid one of them might find a weapon with shells and they might hurt themselves.” After several more minutes of this conversation, staged for the benefit of the man in custody, the suspect finally responded to the veiled appeal. He told the police he would lead them to the weapon. The police issued his “Miranda warning”; he understood that he had a right to remain silent—and still he led the police to the gun. With the gun as evidence, the man was eventually convicted for kidnaping, robbery, and murder. But then, his lawyers appealed the conviction on the ground that he had been coerced or manipulated, even though he had never been beaten or intimidated into confessing. The Supreme Court upheld his conviction, but Justice Thurgood Marshall protested:

One can scarcely imagine a stronger appeal to the conscience of a suspect—any suspect—than the assertion that if the weapon is not found an innocent person will be hurt or killed. And not just any innocent person, but an innocent child—a little girl—a helpless, handicapped little girl on her way to school.

In the world of law as it was envisaged by Thurgood Marshall, the human being who formed the object of jurisprudence was apparently not a being constituted in any significant way by a “moral” sense. He was not the kind of being who might feel guilt or a need for confession or repentance. Nor could he have any plausible interest in avoiding harm to the innocent as a means of avoiding a deepening of his crime. The object of law or jurisprudence for Marshall was evidently Hobbesian man, whose overriding interest and chief animating motive was self-preservation and the avoidance of pain. For a being constituted in that way it could never be “rational” to confess to wrongdoing and open himself to punishment. And so any appeal to his so-called conscience was an appeal for him to collaborate in his own punishment. Therefore it could be seen only as a form of manipulation; manipulation was a form of extracting evidence unfairly; and so the eliciting of evidence in this way had to be “unconstitutional.”

In this manner, with the most unremarkable chain of steps, a generation of liberal jurists managed to incorporate premises that are at war with the moral grounds of anything that could call itself jurisprudence. And in a strange confirmation of modernity, even conservative jurists, moving along a slightly different path, have absorbed the same premises.

But while conservative judges manage to conceal from themselves the deep logic of natural right, they conceal also this critical premise they have come to share with their liberal counterparts: If there is never a consensus on matters of morality, then we must be driven back to the interests or the passions that we can count on all men possessing, quite detached from any moral sense. And that interest will be, of course, the interest in self-preservation and the avoidance of pain. When the conservative jurists reject, out of hand, the prospect of knowing any first principles of a moral character, they too back into the notion of Hobbesian man. That calculating animal, detached from moral reflexes of any kind, becomes for the conservatives, no less than for the liberals, the ground and the measure of our jurisprudence.

If more evidence is wanted, the measure of the recent jural mind can be found most tellingly in decisions handed down in state and federal courts in the spring and summer of 1998. In seventeen states, judges struck down, or blocked, legislation newly passed to bar “partial-birth abortion.” In the reckoning of the judges, nothing less than the Constitution interfered with the authority of the political community to ban this grisly procedure. In striking down these laws, the judges were compelled to say things that judges in America had never before said, at least in public. Richard Bilby, a federal judge in Arizona, acknowledged that the framers of the bill had meant to “erect a firm barrier against infanticide.” But now, the judge went on to explain, in language suitably muffled, they could no longer do that: It was no longer permissible to erect a barrier against infanticide if such a barrier worked to inhibit abortions. And in other states, judges dealing with abortions at the point of birth were brought to the same threshold: They could continue to defend a constitutional right to abortion mainly by arguing that infanticide could no longer be so firmly prohibited.

The judges worked their way to this position only by overthrowing virtually every premise that anchored the understanding of jurists in the founding generation. James Wilson did not think that the purpose of the government was to invent new rights. The purpose, rather, was “to acquire a new security for the possession or the recovery of those rights” we already possessed by nature. Central among those rights was the right not to have one’s life taken without justification. He had also thought that the law in America would not begin with the sovereign issuing commands; it would simply trace back to the sense of a “man,” a moral agent, weighing the justifications of the law, tendering or withholding his consent. If the judges today tell us that infanticide no longer matters, it can only mean that homicide, too, has ceased to be a central concern of the law. But that can be the case only if there is not the same clarity any longer about the notion of a “man,” or a moral agent. Or perhaps it means that we are willing to treat the idea of a man as an open question, as something to be left to the people around us to settle for themselves, on terms that suit their own interests. We leave it, that is, to be settled by those in power, without the need to measure their judgments against any standard apart from power itself. In other words, “man” himself, as the ground and object of our jurisprudence, gradually disappears from view.

When that understanding is traced then to its root, when it is measured against the understanding of the founding generation, our jurisprudence is revealed as diminished, shallow, and even corrupt, recognizing no sense of lawfulness apart from power itself. With everything we have heard in recent years, with colleges appealing to their alumni, or hucksters hawking a college education, there is one appeal for the reading of the classics that has rarely been heard. And that is the case offered, in passing, by Chesterton for the Church. It is the case simply for recovering the understanding of a generation far more tutored, far wiser about the world, than our own: that it “saves a man from the degrading slavery of being a child of his age.”

  1.  This sobering point was brought home to me several years ago when Professor Robert George invited me to participate in some seminars he was running for federal judges at Princeton University. The “students” in the seminar were given, among other things, an exchange I had with Robert Bork about the problem of natural rights and positivism. Two of the judges, evidently quite taken with the exchange, were candid enough to remark that this was the first time they had ever encountered the argument for natural rights.

This article originally appeared in The New Criterion, Volume 17 Number 5, on page 4
Copyright © 2017 The New Criterion
