
The Betrayal of the American Dream by Donald L. Barlett and James B. Steele

THE CLOSING of the COMMONS

The New Enclosure Movement

Essentially, “the American Dream” has always been a middle-class dream. Thanks to carefully targeted government policy, the middle class has been systematically privileged and advantaged, while the lower classes lived under surveillance and were kept under control. Even in the Gilded Age, those glorious years before the hated personal income tax was ratified as a Constitutional amendment in 1913, aspirational Americans dreamed of owning their own farms, of starting their own businesses, or of finding a good job. Like “Liberty” and “Justice,” those dreams were The American Way.

But hiding behind those aspirations and fine words were government measures that worked in favor of the rich, making a mockery of sacred American words such as “equality” and “fairness.” It is the thesis of the latest book by Donald Barlett and James Steele, The Betrayal of the American Dream, that not just the American Dream but all Americans who are not rich have been “betrayed.” And even worse, these Americans have been betrayed by their fellow citizens, the very rich and the very powerful, who have essentially thrown them and their dreams under the bus…or the stretch limousine.

Indeed, the first chapter of The Betrayal of the American Dream is entitled “Assault on the Middle Class,” and the account of the “assault” begins with a real person, Barbara Joy Whitehouse, one of the many people left behind as the wealthy, in their urgency to help themselves, stampeded over the rights and dignity of the ordinary person. One could say, “What else is new?” Or one could say, “This sounds familiar.” Or one could repeat the old adage, “The rich get richer and the poor get poorer.” But this attitude of selfishness, which rips the fabric of society apart, is new, and the disregard of the rich for the communal and historic social compact is relatively recent. Before entering into the weeds of this angry and informative book and its account of how the American dream was betrayed, a couple of charts might be in order.

The Contemporary Middle Class

First is the now famous chart of the “flatlining” incomes of the middle class since 1970, juxtaposed with the rising incomes of the upper class. The blue line is the income of the Middle Class and the red line represents the wealth of the Upper Class. The blue line stretches evenly across four decades, staying consistent and flat, even while the prices of everything rose. The red line rises like a star, soaring to the skies of unbelievable wealth, charting an upward-bound path towards more money than any one human being could ever spend. The source is CNN Money.

The next chart is the equally famous “Parfait Chart,” which shows different colored layers, demonstrating the “thickness” of the layer entitled “Tax Cuts for the Rich.” The much derided Recovery Plan, also known as “The Stimulus,” from the Obama administration is a tiny pale layer squashed under the Bush Gift to his “Base,” as he called his wealthy supporters. This chart is courtesy of the Center on Budget and Policy Priorities.

These charts are classic illustrations of ideology, or how the government favors the interests of the dominant class. The rich prosper and everyone else pays for their gains, or, to be more precise, the 99% hand over their hard-earned money to the 1% in order to encourage these individuals, at some unspecified point in time, to trickle something down upon the poor. If the Bush tax cuts for the wealthy, now twelve years old, are either increased or allowed to continue as the Republicans wish, the four-year recovery from the Wall Street Crash will be crushed and the debt will continue to rise—along with the incomes of the rich. This parfait chart is instructive because it shows how marginal the Bush Wars (on the credit card) were compared to the Bush Giveaway to the most wealthy and the least needy in America. I present these charts for a reason: these bright colors bring to mind another kind of economic map, literally a map that shows what happens when the rich use government to take away from the lower classes. In the eighteenth century, this seizure of resources was called the Closing of the Commons or the Enclosure Movement.

The First Closing of the Commons

The chart above is actually a map of the Commons of an English village called Kibworth-Beauchamp, featured in the recent series The Story of England, hosted by the incomparable Michael Wood. The Commons is land held in common by the people. The actual owner of the terrain is the squire or lord of the manor who, in an act of noblesse oblige, allows the people or the tenants who work the estate—the small farmers and the peasants—to have their own plots of farmland. The farmers planted and harvested as they wished and were allowed to keep the bounty for themselves. In the old days, this obligation to one’s tenants, inherited from the Feudal era, was a responsibility that came with wealth and privilege. The Lord and Lady took care of their own. As virtuous as it sounded, noblesse oblige was also smart public policy: it is easier to control contented workers than it is to quell discontented peasants. If both sides understand that the social and economic bargain is a two-way street, then the network of obligations and responsibilities becomes the warp and woof of social relations.

In a time of unlimited power for monarchs and aristocracy, this historical equalizing of the economic scales acted as a way to repay the peasants for their service, while at the same time tying these people to the land upon which they labored. According to Wood, these strips had been worked by the same families for generations. Each strip had an individual name; each strip had its own level of fertility. Some strips were less fertile or harder to work than others, while some were fertile and easy to farm. These strips were parceled out equally, so that no one family could benefit at the expense of others. Thus was created a rough equality of responsibility (if not income) that somewhat offset the imbalance of power. This age-old balance of power enabled the rich to placate the poor, gave hope to the nascent middle class, and, in England, staved off discontent and revolution. But this social agreement, this belief that everyone had obligations under the social compact between the two classes, came to a close during the eighteenth century.

In his article “The Second Enclosure Movement and the Construction of the Public Domain,” James Boyle presented an old poem that raged against the Closing of the Commons.

The law locks up the man or woman
Who steals the goose from off the common
But leaves the greater villain loose
Who steals the common from off the goose.

The law demands that we atone
When we take things we do not own
But leaves the lords and ladies fine
Who take things that are yours and mine.

The poor and wretched don’t escape
If they conspire the law to break;
This must be so but they endure
Those who conspire to make the law.

The law locks up the man or woman
Who steals the goose from off the common
And geese will still a common lack
Till they go and steal it back.

Anonymous

The Closing of the Commons or the Enclosure Movement ended, rather abruptly, a centuries-old set of legal and social customs pertaining to the balance between privilege and powerlessness. Nowhere is this “shock of the new” better illustrated than in Thomas Gainsborough’s portrait of Mr. and Mrs. Robert Andrews (1748). In her iconic description of this painting, the art historian Ann Bermingham alludes to “agrarian change.” On one hand, we see the accouterments of privilege: the pretty blue silk dress and dainty pink shoes of Frances Andrews and the flintlock rifle and dead game displayed by Robert Andrews. She does not have to labor and he has the inalienable right to hunt on his own property. But the background of the painting, the landscape view that made all of these attributes possible, tells a new story: the Closing of the Commons.

We see the Enclosure Movement stretched out behind the newly married couple. The absence of labor, of the workers who serve the estate, is palpable. The wide-open Commons are fenced in, walled in, making Enclosures for sheep. The reasons for Enclosure during this hundred-year period are complex and varied over time and place. In her article “Jane Austen and the Enclosure Movement: the Sense and Sensibility of Land Reform,” Celia Easton pointed out that

Owners of large estates began enclosing their land when the market and transportation infrastructure made an acre of land devoted to raising sheep more valuable than an acre of land devoted to raising barley. Sheep herding had immediate advantages over farming: lower labor costs, less dependency on weather, and easier land management. Extreme climatic events and disease did threaten the main capital investment—the sheep themselves—but large landowners were less affected by these threats than small landowners, since their sheep had access to larger pasturage and shelter from inclement conditions. None of the decisions to enclose land to raise sheep would have been made, however, without a market for wool and the roads on which to transport it.

What we are seeing in Mr. and Mrs. Robert Andrews is that the wool trade had become more economically profitable and that centuries of farming the same strips of the Commons had exhausted the land. As Easton stated, for centuries the English government had restricted Enclosure, the desire of the upper classes to make a greater profit, in order to protect the lower classes; but by the eighteenth century, profit motives overtook moral obligations or social concerns, and the Commons were Closed, either by parliamentary means or by unilateral actions on the part of the landowner. In 1748, Mr. and Mrs. Robert Andrews were on the cutting edge of Enclosure, slicing and dicing their lands and pushing the villagers off their ancestral lands. In other words, the land was outsourced to the sheep.

The Contemporary Closing of the Commons

The Closing of the Commons and the ultimate “betrayal” of the common people in the eighteenth century is similar to the “betrayal” Barlett and Steele describe in their book. In our time, the post-World War II period, a series of government policies designed to raise the middle class, from the G. I. Bill to government projects such as the infrastructure that made interstate commerce more efficient, elevated lower-class white males (and their families) to the middle class. As seen in the photograph of Levittown above, it is true that these post-war laws were explicitly directed towards the white male population as a reward for their services in the War. Women and people of color were consciously left out of the post-war benefits boom, and their war-time service was expressly not recognized. Both groups, the majority of the American population, were thus placed under the curse of “redlining,” denied loans for homes, entry into certain neighborhoods, and access to certain jobs and schools.

But post-war government policy had a large and positive impact, creating an extended middle class with rising consumer power and rising incomes that allowed men and women to purchase the post-war avalanche of new commodities. But by 1970, a mere twenty years later, the party was over as the outsourcing of good manufacturing jobs began, slowly at first, a trickle here and there, gradually widening into a stream, predicting the flood of jobs gushing towards Asia. Low- and high-skill manufacturing jobs (the usual domain of the white male) were shipped overseas, where desperate workers did the same jobs at a fraction of the wages. The American worker and the middle-class professional were left high and dry while the wealthy took advantage of laws and tax policies they had helped fashion to enrich themselves through outsourcing.

Ever since the Enclosure Movement, sociologists and economists have argued over whether the Closing of the Commons was theft from the people or whether, in the long run, the result was positive. Of course, as John Maynard Keynes pointed out, “in the long run, we are all dead,” and the long-term benefits have proven to benefit one group, the rich, over the other group, those who work to make the rich richer. As in the eighteenth century, those in power have sloughed off the sense of responsibility while retaining the idea of privilege. Just as there was a refusal to accept age-old obligations two hundred years ago, today there are no thoughts of citizenship and no concern with giving back or paying forward for the greater good or the future of the nation. As the authors of The Betrayal of the American Dream point out,

In our 1992 book America: What Went Wrong? we told the stories of people who were victims of an epidemic of corporate takeovers and buyouts in the 1980s. We warned that by squeezing the middle class, the nation was heading toward a two-class society dramatically imbalanced in favor of the wealthy. At the time, the plight of middle-class Americans victimized by corporate excess was dismissed by economists as nothing more than the result of a dynamic market economy in which some people lose jobs while others move into new jobs—“creative destruction…”

The issue now, as it was in the two centuries of the Enclosure Movement, is not the “creation” of new ways of making wealth but the “destruction” of the old ways and the impact of the “betrayal.” Most importantly, when it is asked who benefits from these economic changes, it becomes clear that the so-called “creativity” which benefits certain individuals also results in the destruction of the lives of the masses, who cannot live long enough to benefit from future largesse. The result of the Enclosure Movement was a disconnect between the people and the land—Bermingham calls the effect “alienation.” The landowners severed the ancient obligations of the squire, and the peasants were separated from the land that they had long regarded as “theirs,” to the extent that they had named their plots.

Globalism and the Abandonment of the Land

Today, Globalization has become the new Enclosure Movement. In the process of moving towards a new international economy—and this is a point that Barlett and Steele did not emphasize—rich Americans, like American corporations, have less and less connection to their own nation: their wealth is global and consequently their interests and their fealties are international. The result is a waning of patriotism or of connection to the land (America) and the people who live in the land (America). It has been said by many political commentators, such as Matt Taibbi (Griftopia and The Great Derangement) and Chrystia Freeland (Plutocrats), that the new wealthy class is not American; they are citizens of the globe who merely happen to live in America. As global citizens, these mega-rich people have no obligation to America and therefore have no compunction about “betraying the American dream.”

Today, money (whether virtual or real) has replaced land as the major source of wealth. During the nineteenth and twentieth centuries, wealth came from ownership of businesses or corporations that were local and that depended upon a symbiotic relationship between communities and laborers. Henry Ford understood that his workers needed to earn enough money in his factories to buy the cars they made. In the twenty-first century, this common-sense understanding that labor and management had needs in common and that their relationship was reciprocal has dissolved. In fact, an aerial photograph of homes in the Hamptons looks remarkably like the Enclosure Movement in action. The coast and the sea are all privately owned and controlled and enclosed.

Breaking the Social Bonds

But the sources of money in our century are global and not local. The global workers are speechless and powerless citizens of totalitarian nations in league with American corporations. Management does not manage workers; managers manage the income or the wealth of the company. American workers have been fired, outsourced, and disenfranchised, losing their jobs, their futures, and their governmental representation. As Barlett and Steele write,

At a time when the federal government should be supporting its citizens by providing them with the tools to survive in a global economy, the government has abandoned them. It is exactly what members of the ruling class want. The last thing they want is an activist government—a government that behaves, let’s say, the way China’s does. Their attitude is “let the market sort it out.” The market has been sorting, and it has tossed millions out of good-paying jobs. Now that same ruling class and its cheerleaders in Congress are pushing mightily for a balanced budget at any cost. If it happens, it will be secured mostly by taking more out of the pockets of working people, driving yet another nail into the middle-class coffin. The economic elite have accomplished this by relentlessly pressing their advantage, an advantage that exists for the simplest of reasons: the rich buy influence.

The goals of a corporation are short term: make money now and don’t worry about the future. Or to put it another way, the corporations are no longer linked to a nation, so they don’t have any stake in the people of any country. In other words, the relative ability of the American middle class to buy corporate products or commodities is irrelevant to international business. The only relevancy is profit. To corporations, it is a moral imperative that there is no higher good than higher profits. Hiring American workers is expensive: American wages are higher than in most Asian countries and, unlike European countries, American businesses are expected to provide health care benefits and manage retirement accounts. No sane profit-minded corporation would hire American workers when Asian workers can be hired at a fraction of the cost. The free market is free of responsibility and of allegiance to one’s flag. As the authors point out,

Corporate executives contend that they are forced to relocate their operations to low-wage havens to remain competitive. In other words, their domestic workers earn too much. Never mind that manufacturing wages are lower in the United States than in a dozen other developed countries.

But Barlett and Steele are also interested in telling the story of not only how the wealthy have been able to remove the sources of their income from American shores but also how the wealthy protect their wealth. It is not just that the very rich and powerful have moved the jobs out of reach of the worker; it is that they have also moved their money out of the reach of the government. And the government, or the politicians, have allowed the rich to strip America of the money the nation has earned for them. As Barlett and Steele charge, the wealthy “lack a moral or civic compass” and are “without a purpose beyond its own perpetuation with no mission except to wall in the money within its ranks.” A case in point would be a Birkin bag that was auctioned off in 2011 for over $200,000: the cost of a modest middle-class home in a modest Midwestern state, or the amount of four middle-class incomes.

That the purse costs as much as a home, and that home is probably in the hands of a bank that has foreclosed and refuses to refinance, raises the question of how much money is “enough.” Is the opportunity to own such an object so important that the possession overrides morality or common sense or American values? The authors assert that America has ceased to be a democracy and has, over time, devolved into a “plutocracy” in which the common people are not so much ruled by the rich as they are exploited by the rich. The rich can’t be bothered to be part of the government; it is easier to buy politicians to enact laws and rules that benefit their one driving desire—to accumulate money, more money, and then even more money.

Ironically, it was Wall Street that disclosed the emergence of the American plutocracy. As early as 2005, a global strategist at Citigroup, Ajay Kapur, and his colleagues coined the word “plutonomy.” They used it in an internal report to describe any country with massive income and wealth inequality. Among those countries qualifying for the title: the United States. At the time, the top 1 percent of U.S. households controlled more than $16 trillion in wealth—more than all the wealth controlled by the bottom 90 percent of the households. In their view, there really was no “average consumer,” just “the rich” and everyone else. Their thesis: “capitalists benefit disproportionately from globalization and the productivity boom, at the relative expense of labor,” a conviction later confirmed by America’s biggest crash since the Great Depression. The very rich recovered quite nicely. Most everyone else is still in the hole.

Indeed, we of the middle class are more than likely to stay in “the hole.” Barlett and Steele make the case that,

Only once before in American history, the nineteenth-century era of the robber barons, has the financial aristocracy so dominated policy and finance. Only once before has there been such an astonishing concentration of wealth and power in an American oligarchy. This time it will be much harder to pull the country back from the brink. What is happening to America’s middle class is not inevitable. It’s the direct result of government policy, and it can be changed by government action.

It is important to realize to what extent the moneyed class has become the equivalent of the absentee landlords of the eighteenth century. The middle class is simply unimportant to them, their plans, their goals.

Despite obligatory comments about the importance of the middle class and why it should be helped, America’s ruling class doesn’t really care. They’ve moved on, having successfully created through globalization a world where the middle classes in China and India offer them far more opportunities to get rich.

In addition, Barlett and Steele map out the thinking of corporate America. The “job creators” understand that there is a trade-off between providing jobs for Americans and providing them for Indians, and they piously decide that it is good and righteous to elevate the inhabitants of Madras instead. The name of the game is “creative destruction,” as jobs are created in China and destroyed in America.

The result is a huge transfer of wealth from the middle class to the wealthy in this country, as well as to workers in China, India, and other developing nations. No one wants to deny people in those countries the right to improve their lot, but the price of uplifting them has been borne almost entirely by American workers, while in this country the benefits have flowed almost exclusively to a wealthy super-elite. Globalization was peddled on the basis that it would benefit everyone in this country. It hasn’t, and it won’t as long as current policies prevail.

The phrase “has been borne almost entirely by” used by Barlett and Steele is one that can also be applied to the tax code: it is the middle class that pays the price of globalization and it is the middle class that pays the taxes that pay for America. And it is not just rich individuals who refuse to pay their fair share; it is also the corporations that similarly refuse to pay their taxes.

One explanation for the tax burden on middle America is that for years U.S. multinational corporations have refused to bring home billions of dollars they’ve earned on overseas sales because they don’t want to pay taxes on those profits. Sitting in banks in the Cayman Islands, the Bahamas, Switzerland, Luxembourg, Singapore, and other tax-friendly jurisdictions is a staggering amount of money—an estimated $2 trillion, a sum equal to all the money spent by all the states combined every year, or more than half the size of the annual federal budget.

The Un-Freedom of the “Free Market”

We are told by the ruling class, or their mouthpieces the politicians, that the “free market” is at work, that no laws have been broken, and that any regulations on the free market would be a disaster. However, what is not said is that the market is not a level playing field. The market is not free; it is fixed, a rigged game. The market is Vegas, where the house always wins and the weekend punters always lose.

Ultimately, the rule-makers in Washington determine who, among the principal players in the U.S. economy, is most favored, who is simply ignored, and who is penalized. In the last few decades, the rules have been nearly universally weighted against working Americans. That a huge wealth gap exists in this country is now so widely recognized and accepted as fact that most people have lost track of how it happened. One of the purposes of this book is to show how the gap became so huge and to explain why it was no accident. Over the last four decades, the elite have systematically rewritten the rules to take care of themselves at everyone else’s expense.

The myth of the Free Market is just that—a Myth. As the authors point out, Germany and Japan and European countries such as France protect their citizens against the ravages of the market. In America we decry “protectionism” in the name of American corporations who want to sell American products abroad. The middle class wants, we are told, the ability to purchase “cheap” televisions from South Korea, but as Barlett and Steele point out, the trade between America and its trading partners is not free: their workers are protected; ours are not. The result is that American cars are a luxury in China and cost around $100,000. Europe and Asia are simply not big markets for American cars, which, at home, must compete with Toyotas, et al.

Unfair competition that benefits the rich and forces the workers and the poor to take the hit has been going on ever since travel and technology made globalization possible.

What is different today is that a company can go under or “fail” regardless of competition or profitability. All that is needed is for a company to be swooped down upon by a corporate raider intent on a “hostile takeover.” Indeed, in their description of what a private equity company like Bain does to a business, the authors state that the vulture-like investors argue that the elimination of companies and jobs forces a greater efficiency and thus benefits the “economy.” Bad CEOs are removed, unproductive workers are sent away, they argue, and everyone benefits and the nation as a whole is served. But Sensata, a company with record profits, was suddenly swallowed up and closed down by Bain Capital, and the jobs and equipment are being shipped to China—all in the name of a greater profit. So we ask: Who benefits? Which economy? Theirs or ours? While using the word “economy,” the corporate executives seem to imply the American economy, but what they really mean is that their personal economic positions are improved on the global stage.

The managers of the largest equity and hedge funds have become immensely wealthy—many are billionaires—even though some of the companies they bought and sold later foundered. In addition to the rich fees they harvest, private equity fund managers rake in millions more courtesy of U.S. taxpayers. Thanks to Congress, a portion of their annual income is taxed at 15 percent (rather than 35 percent) under an obscure provision called “carried interest.” This puts that income in the same tax bracket occupied by the janitors who clean their buildings. Using the proceeds from their deals and the money they save on taxes, private equity and hedge fund managers have lavish lifestyles featuring multiple residences, private planes, and ostentatious parties.

Meanwhile, as David Stockman described in The Great Deformer, the companies seized by Bain-like firms, loaded down with debt, gutted, and left for dead, cannot become more “efficient” because the investors/looters have pocketed all the money. Stockman, once Ronald Reagan’s budget guru, pointed out that not only does wealth not trickle down, but the kind of wealth won by investment capital is not a win-win proposition—the investor wins by destroying a healthy company, displacing thousands of American workers, and gutting hundreds of American towns. The wealthy, the authors write, are able to buy not just Congress and other key members of the government but also so-called “experts,” academics in supposedly intellectual “think tanks,” who are well paid for their so-called “reports” on the economy. Writing of the fabulously rich Koch Brothers, who fund any number of right-wing causes, Barlett and Steele said,

The Kochs have contributed $12.7 million to candidates (91 percent Republican) since 1990 and spent more than $60 million on lobbying Washington in the last decade. But their greatest impact is the millions they have poured into foundations, think tanks, and front groups to mold public opinion in their favor by promoting positions that in almost every case benefit the few. The rise of these conservative think tanks and foundations directly coincides with the economic decline of the middle class. Among the more prominent of these organizations are the Cato Institute, which Charles cofounded in 1974, and Americans for Prosperity, which David launched in 2004 as a successor to a similar group that he had helped found earlier called Citizens for a Sound Economy. Dozens of other groups receive Koch money at the national or regional level. In early 2012, a rift developed between the Kochs and Cato, sparking litigation by the Kochs and charges by Cato president Ed Crane that Charles Koch was trying to gain full control of the think tank to advance his “partisan agenda.” The environmental group Greenpeace, which in 2010 examined just one issue on the Kochs’ agenda—their efforts to discredit scientific data about global warming—identified forty organizations to which the Koch foundations had contributed $24.9 million from 2005 to 2008 to fund what Greenpeace called a “climate denial machine.”

In fact, after the release of the documentary Inside Job, the outcry against economists clearly caught in conflict-of-interest situations was so loud that the profession briefly flirted with setting ethics standards for itself. Embarrassed, the American Economic Association scheduled a session on ethics at its 2011 meetings in Denver. As The Economist pointed out,

You might assume that economists already disclose their links to organisations. But when economists write articles for the opinion pages of newspapers and magazines, appear on television to discuss matters of economic policy or testify before parliamentary committees, the audience is often unaware of their non-academic affiliations. A study by Gerald Epstein and Jessica Carrick-Hagenbarth of the University of Massachusetts, Amherst, looked at how 19 prominent academic financial economists who were part of advocacy groups promoting particular financial-reform packages in America described themselves when they wrote articles in the press. Most had served as consultants to private financial firms, sat on their boards, or been trustees or advisers to them. But in articles written between 2005 and 2009 many never mentioned these affiliations, and most of the rest did so only sporadically and selectively. Readers may have assumed they had more distance from the industry than was in fact the case.

Can This Country Be Saved?

The authors of The Betrayal of the American Dream, who have watched the American economy for years, end their book with a plan to remedy the current situation.

Over the last four decades, public policies driven by the economic elite have moved the nation even further away from the broad programs that helped create the world’s largest middle class, to the point that much of that middle class is now imperiled. The economic system that once attempted to help the majority of its citizens has become one that favors the few. Not everyone in the middle class who pursued the American dream expected to get rich. But there was a bedrock sense of optimism. Most people felt that life was good and might get better, that their years of dedication to a job would be followed by a livable, if not comfortable, retirement, and that the prospects for their children and the generations to follow would be better than their own.

The writers lay out a series of reforms that they think are necessary to save the middle class. From reforming the tax code, which has been written to favor the wealthy, to policing the financial markets to providing Keynesian stimulus to rebuild the infrastructure—all of these suggestions are common sense and all are doomed to failure, unless the voters demand otherwise. Barlett and Steele suggest that

Middle-class Americans, still the largest group of voters, must put their own economic survival above partisan loyalties and ask four simple questions of any candidate who wishes to represent them:

1. Will you support tax reform that restores fairness to personal and corporate tax rates?
2. Will you support U.S. manufacturing and other sectors of the economy by working for a more balanced trade policy?
3. Will you support government investment in essential infrastructure that helps business and creates jobs?
4. Will you help keep the benefits of U.S. innovation within the United States and work to prevent those benefits from being outsourced?

The choices we make in the candidates we elect and the programs and policies we support will set the direction of the country.

It will be difficult for Americans to put country before party and look past ideology to find facts, for as Thomas Frank pointed out in his 2004 book, What’s the Matter with Kansas? How Conservatives Won the Heart of America, Americans can be counted on to vote against their own best interests. His argument, hotly contested by some writers, is that class interests, i.e., money, have been replaced by ethnic interests, i.e., race. Lower- and middle-class white people have been persuaded that their interests are aligned with those of the upper classes, who will—in their own good time—“trickle” their gains “down” to the deserving few. Someday, they are assured, “the job creators” will return the jobs they have shipped overseas. Sadly, those jobs are not coming back, and the Middle Class must start standing up for itself. As The Betrayal of the American Dream concludes,

What’s at stake is not only the middle class, but the country itself. As the late U.S. Supreme Court justice Louis Brandeis once put it: “We can have concentrated wealth in the hands of a few or we can have democracy. But we cannot have both.”

One thing is sure: only the middle class can help itself; no one else will.

Dr. Jeanne S. M. Willette

The Arts Blogger


“The Persistence of the Color Line” by Randall Kennedy

THE COLOR OF THE PRESIDENCY

Randall Kennedy’s new book, with its full title The Persistence of the Color Line: Racial Politics and the Obama Presidency, was published (2011) a bit too soon and needs a sequel. The incompleteness of this book is not the fault of Kennedy, a professor at the Harvard Law School, but of the continuing evidence of ongoing and unrelenting racism displayed in disguise by a variety of political groups. From the Birthers to the Congress to the Tea Party, the election of a black man as President has brought out the worst in America. Kennedy’s book barely gets past the first year of a term in office that was complicated by the simple fact that Barack Obama is only half white. And half is not enough. Kennedy’s main point is that Obama is trapped in his (half) blackness and cannot act with the privileged latitude that comes automatically to any and all white Presidents. This trap of skin color has shaped and will shape this unique Presidency.

Kennedy is certainly correct that it is institutional racism that restricts Obama in what he can do, what he can say, whom he can champion, what he can support, which laws he can put forward, which policies he can enact. Despite his high office, in his own country (more than in any other nation) Obama is defined by his race. Kennedy opens his book with the assertion:

The terms under which Barack Obama won the presidency, the conditions under which he governs, and the circumstances under which he seeks reelection all display the haunting persistence of the color line. Many prophesied or prayed that his election heralded a postracial America. But everything about Obama is widely, insistently, almost unavoidably interpreted through the prism of race…

Sadly, despite hopes that America was now “postracial,” it is now clear that America is still a racist society. If we define racism in its largest sense, as a “consciousness” of race, then Americans are intensely conscious of Obama as a man of color. For some, this “color”—black—is the color of redemption; for others, the color is a threat and a retribution. Whether positively or negatively, the entire nation is in thrall to the notion that our President is a black man.

One could wonder whether, if the election of a woman or a man of color as President had happened a few decades later, say in the 2030s, more Americans would have been accepting and fewer people would have cared about race; but instead Barack Obama was elected in 2008. Early twenty-first-century people had parents and grandparents who had (fond) memories of segregation, and for many Americans, particularly those in Middle America, the sight of people of color is still rare. The reaction of these white Americans was defensive on one hand (a regression into segregationist attitudes) and offensive on the other (an instinctive rejection of someone so unfamiliar, so dark, so cool).

One could also wonder how much the fate of Obama would have been changed if his own white family had survived: if his white grandparents had lived to see his election, if his white mother could have lived in the White House along with Michelle Obama’s black mother. The whiteness of Obama could have been on full display on the campaign trail, at the Inauguration, and during policy debates. But with neither that white half nor the black half present, a “blackness” born of racism was projected onto Obama. The result was, to borrow Randall Kennedy’s term, to “blacken” Obama and to make him seem alien. However, far from being an “alien,” Obama is the mixed-race future of a more tolerant America to which we might aspire.

It is interesting to note that the President grew up in a white and multicultural society. Obama is the product of the “Melting Pot” so hated and so dreaded by the Nativists and the Know Nothings of the previous century. Obama is the future they fought to avoid. In a very typical fashion, he was raised by a single mother and her parents, all of whom were white and all of whom loved him. He grew up in multicultural Hawaii and went to white-identified schools and colleges, Occidental, Columbia and Harvard, and dated white women and had white friends. Obama chose to be “black” in the sense that he had to seek and learn about “blackness.”

But these subtleties of choice are lost on those who object to Obama solely because he is black—they don’t care about his decisions, or about the distinctions between black skin and black culture; they care only about the skin and refuse to accept him in the office of the Presidency. As Kennedy reports, there is a

substantial number of Americans who simply refuse to acknowledge Obama’s political legitimacy (for example, the allegation believed by tens of millions that he was born abroad), the open contempt displayed by antagonists not only on the airwaves of right-wing talk radio but also in the inner sanctum of Congress (for example, Joe Wilson’s infamous shout of “You lie!”), and the stark polarization that characterizes the racial demographics of support for and opposition to Obama. That the opposition is overwhelmingly white is a fact that no one can reasonably dispute.

Then Kennedy asserts, “What is disputed, however, is that racial sentiment is an important ingredient in the opposition.” This statement is interesting, and what the author is working through is the fact that Obama won an overwhelming victory and that, while he did not win the majority of the white vote, he won enough to carry the day. And as Kennedy points out, there are “plenty of reasons” to dismiss Obama without even mentioning race—he is too liberal, he is too conservative, and so on. No president is going to please everyone all the time; but, that said, Obama will always be judged according to different standards, this judgment will always be tempered by race, and those attitudes are, in and of themselves, a form of racism. The very fact that Americans were (momentarily) proud of themselves is tinged by a history of slavery and segregation. As Kennedy says,

An inflated sense of accomplishment is part of the racial predicament in which Americans find themselves. Electing a black American as president is treated as remarkable. In a sense it is—but only against the backdrop of a long-standing betrayal of democratic principles…

…That Obama has had to work so hard to make himself and his family acceptable to white America and that he has had to continue to work so persistently to overcome the perceived burden of his blackness is a sobering lesson.

I suppose we Americans hoped that we would rise to our own optimistic standards, and, as Kennedy lays out, the campaign was remarkably free of racism; but there was a sizable segment of the nation that would never accept Barack Obama as President. One could argue over which incident by which public official first marked Obama as “black” and unacceptable, but barely into his first term it became clear that this was a marked man. A conservative discourse was woven, full of symbolic racist “dog whistles” aimed at a certain group and therefore skirting overt racism. Kennedy writes that “…the prejudice has been sublimated and expressed via a code that provides a cover of plausible deniability: ‘He’s not one of ours’; ‘He’s not like us’; ‘He’s alien’; ‘He’s a Muslim’; ‘He’s a socialist.’”

Ironically, because he is black, Kennedy argues, Obama cannot appear to favor people of color and therefore can do less for his “own people,” who truly need the special help, than a white President could freely provide. On the other hand, also because he is black, Obama was in the cultural position to assist other Others, the LGBT community and the Latino community. Although Obama has, as Kennedy points out, elevated many black people to high places in his administration, he has arguably done more, in a more specific way, for the gay and lesbian community and Mexican Americans than for blacks. Thanks to Obama, gays can now serve openly in the military, and Latino young people who were brought to America as children can now move freely in society without fear. The next steps, thanks to Obama, may be that gay people will be able to marry legally and that young immigrants can become citizens. This willingness to act in a moral fashion towards those who inhabit this country is real progress towards civil rights for all Americans.

Then there is the dark side of this Presidency. Because of the color of his skin, because of his race, and mostly because of the consciousness of his race, oppositional criticism of Obama falls into the zone of racism, but these racist (de)evaluations are delivered in code. Once, racist sentiments were uttered openly, without restraint, and were part of the broader culture, but as Kennedy writes, during the 1960s the language of racism in politics changed:

The Civil Rights Revolution stigmatized the open appeal to racial animus. By the late 1960s, politicians were no longer able to blatantly incite racial prejudice to their advantage at little or no political cost. To tap into racial resentments openly meant falling afoul of newly ascendant norms of racial etiquette and thus attracting punishing censure. So open appeals to racist animus gave way to implicit appeals. To avoid being branded as racist while nonetheless trafficking in racial prejudice, some politicians began to use code words to say covertly what they could no longer safely say overtly.

Today, three years into Obama’s presidency, we see these codes fully developed, unfurled, and proudly flying out of the mouths of political opponents. Add up these wordy criticisms and they all say the same thing: Obama is incapable of being President because he is black: “he is in over his head,” “he is incompetent,” and so on. All blame for all ills can be laid at the door of a black man, a sin eater of white transgressions. Therefore, the white men who created huge budget deficits are not at fault, the white men who started but did not finish two wars are not at fault, the white men who let Osama bin Laden slip through their fingers are not at fault, and Obama’s bold deeds cannot be celebrated, because, as Mitt Romney claimed, killing Osama when the opportunity presented itself was a “no brainer.”

All of Obama’s accomplishments are discounted—he was an affirmative action admission to exclusive Ivy League schools, the stimulus did not work, he is wrong to attempt to bring peace to the Middle East, and on and on. Nothing he does is right and everything he does is wrong, not because any of these Codes are true but because the endless assertions of failure are necessary to allow whites to feel superior to this intelligent and intellectual and gifted and exceptional black man.

The idea that a black President might do a better job than a white one—even George Bush—is insupportable to racist white Americans. Kennedy goes through a number of racially tinged incidents that happened before or early in the Presidency of Obama: the very real embarrassment of the Reverend Wright, the clash between the Harvard scholar Henry Louis Gates, Jr. and a Cambridge police officer, the embarrassing incident involving Shirley Sherrod, the confirmation of Sotomayor, and so on. Kennedy does an excellent job of explaining the culture of Reverend Jeremiah Wright and gives an informative account of black patriotism, or why black people love America. But the incident that opened the dam of racism, in my opinion, was the famous “You lie!” outburst of Joe Wilson, Congressman from South Carolina.

The occasion was a solemn one, the address on a major health care policy proposal by Obama, marred by a loud Southern voice screaming “You lie!”, clearly something that would never happen to a white president. As Maureen Dowd wrote in the fall of 2009,

I’ve been loath to admit that the shrieking lunacy of the summer — the frantic efforts to paint our first black president as the Other, a foreigner, socialist, fascist, Marxist, racist, Commie, Nazi; a cad who would snuff old people; a snake who would indoctrinate kids — had much to do with race…But Wilson’s shocking disrespect for the office of the president — no Democrat ever shouted “liar” at W. when he was hawking a fake case for war in Iraq — convinced me: Some people just can’t believe a black man is president and will never accept it…Barry Obama of the post-’60s Hawaiian ’hood did not live through the major racial struggles in American history. Maybe he had a problem relating to his white basketball coach or catching a cab in New York, but he never got beaten up for being black. Now he’s at the center of a period of racial turbulence sparked by his ascension. Even if he and the coterie of white male advisers around him don’t choose to openly acknowledge it, this president is the ultimate civil rights figure — a black man whose legitimacy is constantly challenged by a loco fringe. For two centuries, the South has feared a takeover by blacks or the feds. In Obama, they have both.

Dowd concluded by quoting Congressman Jim Clyburn, a senior member of the South Carolina delegation, who “had a warning for Obama advisers who want to forgive Wilson, ignore the ignorant outbursts and move on: ‘They’re going to have to develop ways in this White House to deal with things and not let them fester out there. Otherwise, they’ll see numbers moving in the wrong direction.’” I believe that Dowd and Clyburn were correct. The Wilson event, during a speech by Obama on health care, was a turning point. The Congressman both apologized and then raised campaign money on the strength of his racist outburst:

“This evening I let my emotions get the best of me when listening to the president’s remarks regarding the coverage of illegal immigrants in the health care bill. While I disagree with the president’s statement, my comments were inappropriate and regrettable. I extend sincere apologies to the president for this lack of civility.”

Wilson was censured by the House, along party lines (the Republicans taking no responsibility), but the damage was done. This outburst, which Wilson claimed to be “spontaneous,” received only a mild rebuke from his colleagues, and Obama accepted the “apology.” Wilson took advantage of the natural paralysis that happens when civilized people are confronted by outrageous barbarism. There is simply no acceptable reply to an act of such contempt. It is asking a great deal of any human being, startled by an unwarranted and untrue accusation, to react in an effective fashion. One either ignores the outburst (Obama’s approach) or stops the proceedings (a major policy address) and politely asks the offender to leave. The Congressman should have been expelled from the room and expelled from Congress.

But confrontation is not in the makeup of Obama. He is a child of consensus and negotiation, an offspring of the postracial society. It is possible that Obama thought that Wilson was having a nervous breakdown, a fit, or a meltdown of some sort. Obama wants peace and, at that time, during that summer of 2009, he probably genuinely thought that the Republicans could be brought into the fold. He did not want to offend the other side; but, by accepting what was a facile and meaningless apology from Wilson, Obama suggested to those who were watching, the people that he did not yet see as his enemies, that he was weak.

After Wilson was let off the hook, it was as if the dam had burst, and the Birthers came out of the woodwork with their absurd claims that the Presidency of Obama was the result of an impossibly complex conspiracy to place a Manchurian candidate in the White House—for what purpose it is never clear. Also out in the open were charges of “Socialist,” “Food Stamp President,” “European,” “Muslim,” and on and on, all of which were codes for un-American and also for not white, because “real” Americans are white and Obama is black. Obama, in trying to govern from a “bipartisan” philosophy of “compromise,” looked foolish and naïve.

Obama, quite rightly, has taken seriously his charge as President to govern all Americans fairly, regardless of age, race, gender, or party affiliation. This position of equity is far more Presidential and fair than that of most Presidents. As Kennedy writes, Obama refuses to govern from a position of race and insists upon taking his positions on the basis of morality. For Obama, Kennedy states, it…

…isn’t a matter of black and white. It’s a matter of right and wrong.” Sticking to his strategy of deracialization, Obama sought as much as he could to avoid dirtying himself with the racial messiness of the dispute without alienating his African-American base. He saw deep engagement in the controversy as a losing proposition, a racial quagmire that, for many white voters, would only blacken him…”

If Obama is “blackened,” then all people of color are “colored” in even more intense hues. If it is acceptable to emit racist codes when referring to Obama, then attacks on Others, those who are not white and male, suddenly become acceptable. Since “You lie!” we have heard supposedly reputable or apparently sane politicians call for an electrified fence on the border with Mexico, and we have seen literally hundreds of laws passed to restrict the rights of female citizens.

We know now that on the night of the Inauguration, certain Republicans met in private (in secret) and made a pact: to obstruct every single proposal Obama made, regardless of its merit, regardless of whether or not the policies were originally Republican, regardless of the impact upon the nation. This pact or agreement was nothing short of un-American, unpatriotic, and unprecedented. The Republicans have held firm and have voted en masse against every proposal, every policy, every law put forward by Obama. These actions are tantamount to a conspiracy and devalue the office of the Presidency. Already there is ample evidence that the Republicans will have no respect for any President, even their own.

Randall Kennedy shows us the early straws in the wind, one racist event after another, incidents that would have passed unnoticed under a white president or events that would not have happened under a white president. Kennedy points out that in each and every case he presents, Obama is damned if he speaks out or wades in, and he is damned if he stays silent and stays away. Kennedy is right to stress the fact that Obama is trapped in his blackness and in his innate civility and his heartfelt belief in the good will of all people. I believe that Obama had no idea of how deep and how wide and how old racism is in America. I don’t think he was prepared for the wall of refusal that he faced, and, for years, Obama has had no effective response to the visceral rejection of his presidency.

But Obama is a learner and he is a proud man. The question is what this nation is facing—a return to the blithe and blunt racism of the 1950s, or the last spewings of an ugly racist bile out of the body politic? Is this Presidency a Sacrifice Presidency, a period that forces a stained country to redeem its shameful past, or is this Presidency a Reversion Presidency, the occasion upon which we revert to the old ways: the rule of the white male? We have a presidential campaign for 2012 that is entirely based upon the charge that Obama must be removed from office because he is “incompetent,” or, in other words, “black.”

News commentators continue to tiptoe around this bigoted rhetoric, gingerly calling this dark prejudice “tribal” even as they are forced to call attention to the “codes” used. And as the discourse continues to grow and become more extensive, the media are forced, more and more, to confront the constant racism that has been inspired by this Presidency. But the media, whether left or right, are merely reactive. This Presidency is not just any Presidency: it is an occasion, and it is up to Obama to take advantage of his historic election. The speech on Reverend Wright and modern racism was a start, but now, three years later, this brave address is revealed as sadly insufficient for today’s dark world. Obama must take the high moral ground, be another Martin Luther King, and demand an end to racism at long last. Kennedy ends his interesting book on a hopeful note,

Among colored folk, his ascendancy has raised expectations of what is possible for them to achieve in a “white” Western modern democracy. It has also affected the expectations of white folk, habituating them, like nothing before, to the prospect of people of color exercising power at the highest levels. There are many who still chafe at this turnabout—witness the racial component of the denial, resentment, and anger that has fueled reaction against the Obama administration. The racial backlash, however, is eclipsed by the lesson being daily and pervasively absorbed—the message that a person of color can responsibly govern.

On the eve of an election campaign that is mired in open and belligerent racism, Randall Kennedy’s book, though now out of date, is an instructive account of how a black man teaches white men (and women) that race should be irrelevant. Only when we all learn what Obama is trying to show us will we achieve the transcendence of a postracial society.

Dr. Jeanne S. M. Willette

The Arts Blogger 


“Drift” by Rachel Maddow

Drift: The Unmooring of American Military Power (2012)

Introduction

At the heart of what Rachel Maddow calls Drift is the question of how we wage war in the twenty-first century. What is the purpose of war in the contemporary era? And who fights these wars? Or, to twist the title of a famous film from World War II, why do we fight?

The answer is: because the President wants us to fight.

American history has been based on the premise that Americans fight for our rights to be free and to live in a democratic society. We imagine ourselves to be valiant warriors—citizen soldiers, as Stephen Ambrose named those who fought in the last “good war.” Maddow quotes future President Thomas Jefferson as saying in 1792, “One of my favorite ideas is, never to keep an unnecessary soldier,” noting that once the “necessary” war is fought, the “necessary” soldiers fade back into civilian life. But that image of Jefferson’s Yeoman Farmer, who could be counted on to spring to the country’s defense when needed, is a highly idealized one.

By the middle of the nineteenth century, Jefferson’s concerns about the dangers of keeping a standing army had melted in the heat and fire of expansionism and the Mexican-American War (1846-1848). Maddow quickly skips over a sizable chunk of American history, spanning the decades of Manifest Destiny and the extended campaign of genocide against Native Americans. This slide into Empire was capped with the Spanish-American War, when America finally controlled the maximum territory possible. But making and maintaining an American Empire required having a standing army—how else do you wage a war of conquest from one end of the continent to the other?

I mention this seventy-year slice of history, not to criticize Maddow for not covering it, but to make the point that the desire to keep an army, a strong military force, under close command and control of the executive branch has always been present in American culture, no matter how much the national mythology denies this history. Certainly the Great War was a rude interruption of a self-satisfied isolationism, and Americans were dragged with great reluctance into this and the Second World War. Maddow emphasizes how quickly the military was demobilized after these two great wars. However, to paraphrase Karl Marx, the insistence that America was, at heart, a peace-loving nation was a discourse pregnant with its opposite. The ability of a strong President to wage war on command was always present and had been practiced for the bulk of the nineteenth century—war disguised as Manifest Destiny.

That said, the importance of the model or the paradigm of the Second World War cannot be overstated. It was not just the “Last Good War,” as has often been noted; it was also the last conventional war, because it was the last war America fought with Europeans. A shared culture of combat enabled the armies of World War II to fight on the basis of shared assumptions. Japan, having become “modern” by first copying the West and then by beating the West, for the most part complied. Here and there, like Germany, Japan broke the laws of “civilized warfare,” surely a contradiction in terms, but for the most part the basic “rules” were followed. Armies faced and fought one another; Navies faced and fought one another. The goal was for one side to defeat the enemy, invade the territory, seize the capital, and force a formal surrender.

The new enemies did not share these cultural expectations and proceeded to ignore the European forms of fighting. The fact that this rather stilted and formal mode of thrust and parry had a long history, stretching back to Medieval times, did not impress the Vietnamese or the tribes of the Middle East. After a brief foray into South Korea, America fought a European-style war only once again—an even briefer visit to Iraq. What followed would be a continuation of the Viet Nam-style quagmire, a series of non-wars that could not be won, only endured until exhaustion intervened. Despite these unpalatable facts, or because of them, the “Dream War” was a re-run of the Second World War, the Good War, the Winnable War, where words like “victory” and “win” had some meaning.

The Proxy War

After the Second World War, America somehow entered into a continuous state of total war, fought mostly through undeclared “wars,” called interventions or some other nomenclature. It seems that after four years of national militarization, it was hard to break the habit of defensive belligerence. The new enemy was the Soviet Union and the Cold War began. There is, apparently, something comforting, in an ordering, logical sort of way, about having a known enemy. The “enemy” sorts the world neatly into two halves: good and evil, simple dualities. We know how hard it has been to let go of a good Foe. Once the Berlin Wall fell and the Soviet Union imploded, America continued to seek another Opponent. As Maddow comments in her section on Ronald Reagan,

We’d got in the habit of being at war, and not against some economic crisis, but real war—big, small, hot, cold, air, sea, or ground—and against real enemies. Sometimes they’d attacked us, and sometimes we’d gone out of our way to find them.

But the post-war and the post-Cold War world is not so neat and tidy, and the new enemies were not schooled in eighteenth-century military tactics of opposing lines protecting important strategic sites. And herein lies the trouble with contemporary war; this is the point where Maddow begins to make her argument about “drifting” away from the traditional, formal way of waging war through declaration and mobilization. Maddow writes of the standing army after 1945,

We had 150,000 troops in the Far East, 125,000 in Western Europe, and a smattering in such diverse and far-flung locations as Panama, Cuba, Guatemala, Morocco, Eritrea, Libya, Saudi Arabia, Samoa, and Indochina. Wary as never before of the Communist threat—now a constant “speck of war visible in our horizon”—America had come to see Jefferson’s preoccupation with standing armies and threats from inside our own power structure as a bit moldy. We were, after all, the only country still capable of keeping the planet safe for democracy.

The Cold War set a precedent for war with a goal but no foreseeable ending. Most people thought that the Cold War would never end, precisely because it was cold. Until the twenty-first century, Americans had not considered the possibility that a Hot War would have not only no foreseeable ending but also no articulated purpose. Maddow takes the reader on a Long March from the Viet Nam War into Iraq and Afghanistan, but her purpose is not to refight these endless wars; it is to discuss why we are fighting them in the first place. The answer seems to be a peculiarly male need—on the part of the President and the military—to feel manly, and a rather frightening willingness on the part of a temporary leader, i.e., the President, to be solely responsible for the spending of blood and treasure.

In laying out how Maddow makes her case, I want, first, to move directly past the Viet Nam War into the peculiar non-wars of Ronald Reagan and, second, to use the “Reagan Wars” as examples of the lingering Viet Nam Syndrome. The reason for skipping over the conduct of the war in Viet Nam is that this was an inherited war, with long, long roots back to the French Empire. After the Second World War, the tiny Asian nation wanted to be independent of the French who, after surrendering to Germany, were driven to retrieve their dignity by reclaiming parts of their “empire,” such as Viet Nam. The French dragged America into this dubious enterprise through blackmail: if we gave them military and monetary assistance, they would join NATO. And then the French were defeated at Dien Bien Phu in the spring of 1954. They withdrew and left America holding the bag, so to speak.

Viet Nam became an “American” war by circumstance, doubly damning the conflict as having nothing to do with “our” vital interests. Even though all Viet Nam wanted was national self-determination, as promised by American President Woodrow Wilson, the American government decided that this was the ground where it would fight a proxy war against Communism. From 1959 to 1975, America fought a war that was never declared. Maddow recounts that, in an ill-considered desire to carry out the supposed wishes of the deceased President John F. Kennedy, President Lyndon Johnson slip-slided into war sideways through a draft of marginal young men. Privileged young men, such as future President George W. Bush, future Vice-President Dick Cheney, and future Presidential candidate Mitt Romney, could receive draft “deferments.”

The point that Maddow makes, in laying out her argument concerning wars ordered at the whim of the Executive Branch, is that by the 1960s, in the midst of the post-war boom, it was unwise both to wage war and to mobilize the population for war. People did not want another war, not the kind of war that involved the entire population. In order to fight this new war, President Johnson sought recruits from the sons of citizens who had no political clout.

So from the first 3,500 combat Marines Johnson sent ashore near Da Nang on March 8, 1965, to support the first sustained bombing of North Vietnam to the 535,000 American troops who were in Vietnam at the end of his presidency, something like 1 percent would be Guard and Reserves. The active-duty armed forces shouldered the burdens of Johnson’s land war in Asia—fleshed out by draftees, chosen at random from among the ranks of young American men who were unable or unwilling to get themselves out of it.

A dangerous step had been taken—fighting the wrong war with the wrong, or unwilling, people—all in the name of an abstraction: the Cold War. Unfortunately for President Johnson, television had been invented, and Americans indicated strongly that they did not want to send their children off to foreign wars, nor did they wish to see nightly battles on television. So for future presidents, the problem would be compounded: how to go to war with the minimum number of soldiers—no need to call attention to the fact that wars are fought by real people—and with as few witnesses as possible, all the while achieving maximum glory. And here is where Ronald Reagan rode to the rescue with the solution to the problems Lyndon Johnson had left behind.

The Viet Nam War ended in a humiliating defeat for America. The greatest nation in the world had to withdraw ignominiously from an inglorious conflict that had been fought to make a political point against an opponent who was never present. The “manhood” of America had been emasculated, damaged by a guerrilla force impervious to traditional warfare and offended by the occupation and division of its nation by colonial masters. Instead of studying the experience of the war and coming to the realization that the myth of American isolationism could make an excellent reality, President Ronald Reagan wanted to help America to “man up.”

The Reagan Solution

However, Reagan was thwarted by a belated law passed by a chastened Congress, a law designed to curtail adventurous presidents and to limit their War Powers. As Maddow describes it,

The War Powers Resolution of 1973 was an imperfect law. But by passing it, the legislative branch was putting the executive on notice—it no longer would settle for being a backbencher on vital questions of war and peace. If the president wanted to execute a military operation (any military operation), he had to petition Congress for the authority to do so within thirty days; if Congress didn’t grant explicit authorization, that operation would have to end after sixty days by law. The Oval Office would no longer have open-ended war-making powers.

Rather than putting an end to the unfortunate “foreign entanglements” that George Washington warned of, the War Powers Resolution became an obstacle for annoyed Presidents to overcome. At this point, Maddow begins to describe how one president after another strove to wage war by other means. Reagan’s answer to the Resolution was to order strange little “interventions,” tiny wars waged on defenseless territories. Reagan had been considerably boosted in his Presidential aspirations by his contention that America should reclaim the Panama Canal. The fact that his jingoism, as Maddow puts it, struck a nerve with many Americans suggests that the post-Viet Nam War syndrome—the shame of defeat—had become a national mood.

Once he became President, Reagan immediately began building up the military. To the end of his Presidency, he dreamed of a fantastical mirage of the conquest of space with a weapon called “Star Wars.” Indeed, there was always a strange and surreal aspect to Reagan’s military adventures: he ran when attacked and attacked when there could be no reply. As Maddow explains, Reagan seemed to lack the ability to separate rhetoric from reality, and it appears that he actually believed that America had “lost” the Panama Canal, that it was necessary to invade Grenada, and then to attempt to overthrow the government of Nicaragua with secret stashes of arms for the contras. War under Reagan became a curious mixture of secrecy and public relations.

Maddow lays out how the Reagan administration worked very hard to write a metanarrative that was both teflon and atomic: it was an untouchable story and it would have a long half-life. The untouchable narrative was that America had to be Number One and that it had enemies everywhere. Therefore, regardless of facts to the contrary, or of the lack of facts altogether, America was ringed with enemies and in constant danger. From today’s vantage point, the paranoia of the Reagan years seems predictive: a Republican administration frightens the American people with a threat that does not exist, calls those who dare to bring facts to the table “Communist stooges” and what have you, and ignores the impact on those outside of America who are observing these antics. As Maddow writes,

The Soviets put their own intelligence services on high alert, watching for any and every sign of American military movement. And their ambassador to the United States, Anatoly Dobrynin, who spent much of his adult life in Washington, was gently passing the word to his bosses in the Kremlin that Reagan really did believe what he was saying. Dobrynin later wrote in his memoir that “considering the continuous political and military rivalry and tension between the two superpowers, and an adventurous president such as Reagan, there was no lack of concern in Moscow that American bellicosity and simple human miscalculation could combine with fatal results.” In 1983, when fear at the Kremlin was at an all-time high, the Reagan administration was more or less oblivious to it.

The dangers of this story with a long half-life and this myopic inward vision are apparent. Clearly, Reagan believed everything he was told (he apparently neither read daily briefings nor spent much time in the Oval Office), and clearly he was playing to a local audience for political purposes. Otherwise, why, out of all the nations in the world, invade Grenada? Maddow writes in an ironic, sprightly style that, in certain contexts, can be somewhat disconcerting, but here, in her description of the Battle of Grenada, excuse me, Operation Urgent Fury, the amused, detached tone of near-parody is perfect. The trick the Reagan Administration needed to pull off was to keep this Operation a secret while convincing the nation that a small group of American medical students was being threatened by an evil Latino dictator.

The story of Operation Urgent Fury reads like a script from the Keystone Cops. It would be a funny story, except for an earlier event that would prove to be prophetic:

On the morning of October 23, 1983, a suicide bomber drove a truck containing six tons of explosives and a variety of highly flammable gases into the US Marine barracks at the airport in Beirut, Lebanon, killing 241 soldiers there on a don’t-shoot peacekeeping mission. Fourteen months into the deployment, and after an earlier suicide bombing at the US embassy in Beirut, Reagan was still unable to make clear to the American people exactly why US Marines were there.

The answer to an unanswerable attack in Lebanon was to invade Grenada and to save medical students from Fidel Castro. Except that, according to Maddow, “Fidel Castro knew about the invasion well before the Speaker of the United States House of Representatives.” Not only had the rescue teams not bothered to locate the students, who were scattered in various locations, but also, in Maddow’s words, “The chancellor of the medical school had already been telling reporters that their students hadn’t needed rescuing.” Indeed, some students were left behind, never to be “rescued.” But never mind, America was getting its macho back, and the public’s attention was diverted from the 241 Marine deaths in Lebanon. The Administration had taken its eye off a very significant ball, the Middle East, to gaze southward at Latin nations, where Communism was supposedly brewing at America’s very doorstep.

Although Congress was not pleased with Reagan and slapped his (now popular) hand, these unilateral actions continued under Reagan’s not always certain management. Maddow quotes the Speaker of the House, Tip O’Neill:

“He only works three and a half hours a day. He doesn’t do his homework. He doesn’t read his briefing papers. It’s sinful that this man is President of the United States. He lacks the knowledge that he should have, on every sphere, whether it’s the domestic or whether it’s the international sphere.”

The Iran-Contra experience is now a matter of history, and it is still unclear who was in charge or whether Reagan was in the grip of Alzheimer’s. What is certain is that the “victory” in Grenada gave the President a sense of entitlement, and he was determined to have another war in Nicaragua. As Maddow states,

Reagan was convinced that a president needed unconstrained authority on national security. He was also convinced that he knew best (after all, he was the only person getting that daily secret intelligence briefing). These twin certainties led him into two unpopular and illegal foreign policy adventures that became a single hyphenated mega-scandal that nearly scuttled his second term and his legacy, and created a crisis from which we still have not recovered. In his scramble to save himself from that scandal, Reagan’s after-the-fact justification for his illegal and secret operations left a nasty residue of official radicalism on the subjects of executive power and how America cooks up its wars.

In order to have his war and eat it too, Reagan and his sidekick, Oliver North, privatized this little war, which was funded through wealthy (Republican) donors and the Saudis. This unlikely enterprise—too strange to unwind here—came undone, and the clear illegalities were exposed to withering investigations. As Maddow summed up these misadventures of Ronald Reagan,

Even before all the indictments and the convictions of senior administration officials, Reagan’s new way—the president can do anything so long as the president thinks it’s okay—looked like toast. In fact, Reagan looked like toast. Whatever his presidency had meant up until that point, Iran-Contra was such an embarrassment, such a toxic combination of illegality and sheer stupidity, that even the conservatives of his own party were disgusted. “He will never again be the Reagan that he was before he blew it,” said a little-known Republican congressman from Georgia by the name of Newt Gingrich. “He is not going to regain our trust and our faith easily.” The president had been caught red-handed.

However, due to the wondrous alchemy of Republican spin, “Reagan could be reimagined and reinvented by conservatives as an executive who had done no wrong: the gold standard of Republican presidents.” Maddow goes on to describe and recount the further adventures of the Presidents who came after Reagan. Reagan not only threw down a gauntlet to a meddling Congress but also laid down a path to Executive Power over the military. The key was not to wage war but to send out the troops. The problem was that the Draft had been eliminated, and the President had to use a professional or volunteer army plus the National Guard or the Reserves. It is interesting to note that the liability of not having a large standing army was now an asset. A small but flexible force, especially when combined with an international force, as in the Balkans and in the Gulf War, enabled the President to send out a focused force without “waging war” and without declaring war.

Once Reagan had established the (specious) “legal” precedent that the military was the President’s tool, there was no check to balance this power. As Maddow states,

Congress has never since effectively asserted itself to stop a president with a bead on war. It was true of George Herbert Walker Bush. It was true of Bill Clinton. And by September 11, 2001, even if there had been real resistance to Vice President Cheney and President George W. Bush starting the next war (or two), there were no institutional barriers strong enough to have realistically stopped them. By 9/11, the war-making authority in the United States had become, for all intents and purposes, uncontested and unilateral: one man’s decision to make. It wasn’t supposed to be like this.

I have been moving through Maddow’s book, or drifting through her arguments, by trying to set up, step by step, the trajectory toward waging small but satisfying wars somewhere else, with a tiny number of military personnel, at low psychological cost to the public, and with high pay-offs in bragging rights. I think that Maddow is correct to locate the starting point of the rise of executive power over war in the Cold War and its ambiguities. That said, during the nineteenth century, there was also a long history of expansion and empire via military campaigns that were informal “wars.” The lack of large and formally declared wars led to the misleading myth of America rousing itself only when necessary, while overwriting a longer and more complete story that was actually laced with combat.

The Two Wars of the Bushes

In order to overcome the pesky “Viet Nam Syndrome,” or the reluctance on the part of Congress to venture into pointless and costly wars, Reagan had solved one problem by seizing the power to put troops in the field and solved the problem of cost by financing the action with a deficit: fight now, pay later. But Reagan’s wars, in and of themselves, were dubious and unsatisfying. What America needed was a “real” war, something that would wipe out the stain of defeat in Viet Nam, and when Saddam Hussein invaded the very small and very rich nation of Kuwait, the opportunity to re-masculinize presented itself. After a long and winding wrangle with a recalcitrant Congress, President George Bush put together an international coalition to drive Saddam out of Kuwait.

Thanks to Reagan, Bush felt that he could call up an army without consulting Congress. While Congress complained, Bush and the Chairman of the Joint Chiefs of Staff, Colin Powell, planned. Powell, a veteran of the Viet Nam fiasco, had his own theory of the case on how to fight a war—with deep preparation and with overwhelming force. As Maddow explains,

Powell wanted an overwhelming, decisive use of force to meet American military objectives clearly and quickly. The whole Powell Doctrine of disproportionate force, clear goals, a clear exit strategy, and public support was designed to create a kind of quagmire-free war zone. He was unequivocal—he and his commander on the ground, Norman Schwarzkopf, had agreed: two hundred thousand more troops was what it would take. And they’d already made sure the president understood the numbers would go up if he decided he wanted not only to eject Saddam from Kuwait but to destroy his army, or to depose him. The mission objectives would have to be clearly defined before H-Hour. In any case, Powell and Schwarzkopf wanted five, maybe six, aircraft carrier task forces deployed to the Persian Gulf, which would leave naval power dangerously thin in the rest of the world. By the time the offensive capability was in place, about two months down the road, there would be something in the neighborhood of 500,000 American troops in the Middle East—nearly as many as at the high-water mark in Vietnam. Two-thirds of the combat units in the Marine Corps would be deployed in the Gulf. There would be no more talk of rotating troops home after six months. Soldiers had to understand they were in the Gulf until the job was done, however long that took.

This was the famous “Powell Doctrine,” which was designed to guarantee success. And it worked magnificently in the Gulf War, resulting in a great victory over an inept foe in a truly stupid war that ended in a graceless slaughter along the Highway of Death. Only after a long and protracted fight did Congress agree to go to war. According to Maddow, Congress objected to fighting a war in which American interests were not directly involved, but Congress was also disinclined to accept the consequences of not saving Kuwait. The Bush Administration fought a successful war, and Kuwait, a nation that circumcised its women, was restored to its (male) owners, but there were hidden costs for the future. The jumping-off point into Kuwait was Saudi Arabia, and that meant that, to one very indignant man, infidels were on sacred soil. Osama bin Laden would wait a decade to take his revenge.

Since the “good” Gulf War was fought with Reserves, it was fortunate that the engagement was, thanks to Colin Powell, a short one. But in this short amount of time, certain rules of engagement were laid down—not for the enemy but for fellow Americans. The Viet Nam War had run into trouble as much at home as in the field because it was the first war since the Civil War that was uncensored. The military would not make that mistake again. The Gulf War was stage-managed, information was controlled and doled out, and press and public were placated with video games of the “smart bombs” over Baghdad. As Maddow said,

Our military dazzled. The First Gulf War was all Powell could have hoped for: a clear mission, explicit public support, and an overwhelming show of force. It was fast—the ground assault lasted just a hundred hours, the troops were home less than five months later. It was relatively bloodless for the away team—fewer than two hundred American soldiers were killed in action. It was cost-effective—happy allies reimbursed the United States for all but $8 billion spent. And it was, withal, a riveting display of our military capability, almost like it was designed for TV. Americans, and much of the world, watched a Technicolor air-strike extravaganza every night. The skeptics were forced to stand down; our military had proved beyond doubt or discussion that we were the Last Superpower Still Standing.

But for longer missions, the Reserves and the video games would not be enough to placate the public, even though, thanks to Reagan, there was no serious thought given to balancing a budget and the military was given whatever it needed or wanted or desired. Aside from boys and their toys, supporting an adequately sized volunteer army was proving to be a very expensive proposition. The military had always supported itself: a young man could enlist or be drafted and find himself, not fighting, but doing laundry or providing food or doing mechanical work. For every combat fighter, there were a dozen or so working in the support systems, as engineers or office workers.

Once the military Draft was ended in 1973 under Richard Nixon, the armed forces became all “volunteer.” At the time, those who were opposed to the Draft complained of “opportunity costs,” or the economic losses incurred by middle class white males, who were now likely to have the prospect of high salaries during the post-war boom. Once the white males moved out of the way, males of color could raise themselves socially and economically by volunteering for the military, where new “opportunities” could be found. Those who were opposed to the end of the Draft felt that the ethnic and social mixing that occurred in the military knitted America into a whole nation, instead of a divided country. There was some discussion of patriotism and service to the Flag, but the urgent voices of disgruntled white males had to be heard.

Twenty years later, the all-volunteer army was an excellent career choice, but only certain demographic groups took advantage of what the government was offering: young men and women of color and young men and women from the South. The rest of the youth were not interested. The result of these very different life paths would have consequences that would take another twenty years to play out. In the short run, there was the sheer unexpected cost of maintaining a large and long-term military full of careerists and their families. As opposed to the draftees, these “volunteers” did not cycle out after a couple of years; they stayed and got married and raised families. Each soldier could easily have three or more dependents living on the base and needing care and feeding.

Maddow brings up a very interesting point about the sheer financial scale of the obligations the government takes on when it commits to a Volunteer Army. The cost of maintaining soldiers and their spouses and children and all the attendant services was huge. As Maddow explained,

In the ten years after 1985, the procurement budget had dropped from $126 billion to $39 billion and represented a paltry 18 percent of total defense expenditures. Sure, the active-duty force had been pared by nearly 30 percent and a few bases had been closed, but that didn’t come close to solving the problem. How were we supposed to ensure our Last-Superpower-on-Earth superiority when just the overhead cost of keeping our standing army milling around was swallowing between 40 and 50 percent of the Pentagon’s annual cash allotment?

The problem was solved by a now familiar term: “outsourcing.” On the one hand, it is more expensive to privatize, and the corruption when private companies take the place of military personnel is vast, unchecked, and continues today unabated. On the other hand, outsourcing can be a very good thing, as Martha Stewart would say, because one can outsource actual soldiers. If one outsources soldiers, not just food services, then the President who is in charge of deploying the mercenaries is undeterred by such nuisances as Congressional approval. A corporation, such as Xe, assumes the risks and the expenses of the mercenaries, who are not eligible for Veterans’ benefits—hospitalization, education, legal protection—but who are paid accordingly, with very high salaries that, unlike benefits, have end points. The government is off the hook, and the mercenaries can be charged with all kinds of illegal and dishonorable tasks, off the books.

Outsourcing began in earnest in the 1990s. President Bill Clinton was wise enough not to fight wars but to participate in peace-keeping missions, such as the one in the Balkans, where some kind of military presence needed to be in place for years. By the 1990s, the problem of going to war was solved, and now it was easy to avoid the skepticism of Congress, the suspicions of the American people, and the high cost of casualties. As Maddow explains,

President Clinton never really expended much effort on the politically costly task of convincing the American public of the need to arm the Bosnians or Croatians, or the need to unleash American air power on Miloševic and the Serbs, or the need to put US boots on the ground. Instead, he found a way to do something without the necessity of making any vigorous public argument for it, and without much involving his own balky Pentagon…So it was soon after the peace accords were signed that those twenty thousand American peacekeepers—who would be joined by twenty thousand private citizens under contract to provide support services—arrived in Bosnia and Croatia as part of an international force to keep Miloševic and his Serbian military under heel. And did Clinton have a hard time selling that manpower commitment to the American people? He did not. He was helped greatly by—what else? Outsourcing.

The civil war and the genocide in the former Yugoslavia needed to be quelled, and then order had to be restored, a process that took years. Private Contractors, as these mercenaries were then called, made their first appearances in the Balkans. The consequences of the decision to privatize were disastrous, as Maddow says,

…the acute and lasting problem was that they cut that mooring line tying our wars to our politics, the line that tied the decision to go to war to public debate about that decision. The idea of the Abrams Doctrine—and Jefferson’s citizen-soldiers—was to make it so we can’t make war without causing a big civilian hullabaloo. Privatization made it all easy, and quiet.

By the time President Barack Obama inherited two wars, one in Afghanistan and one in Iraq, the private contractor was a fixture in the American military. During the second Iraq War, under the second President Bush, the ratio of Reserves on active duty to Private Contractors/Mercenaries was one to one. When the American public is told how many men and women are on active duty in these two war zones, that number should be doubled: in terms of the troops in the field, the actual force is twice as large as we are told. Unfortunately, the troops and the mercenaries are unsuited to the task of “nation building,” or modernizing and westernizing a Medieval culture that has no history of democracy or equality.

Into the Cauldron

By the twenty-first century, reasonably good excuses had to be given for rounding up the Reserves, and one had to attend to public relations; “nation building” or “bringing democracy” to benighted places seemed to be worthy causes. The invasion of Afghanistan, a barren land, suitable only for the breeding of war and poppies, should have been short-lived once the objective had been obtained—to drive Al Qaeda out of Afghanistan and to kill or capture the architects of the “attack on America” on September 11, 2001. The conceptual problem was that this “objective” or goal was not a “victory,” and the second Bush administration cast about for an alternative war on better terrain where a good old-fashioned war could be fought.

Perhaps in the distant future, psycho-historians will explain the psychology of launching a “preemptive war,” also known as the “Bush Doctrine.” The invasion and occupation of Iraq was a strange and surreal event, too familiar to be retold here, but there is one element that remains intriguing—the willingness not just to lie but to create an alternative reality. In contrast to the Cold War, which has been deemed a Simulacrum of a war, the Iraq War was a real war fought for fictitious reasons in the fevered mindset of a neo-con fantasy. As with the Reagan administration, it is unclear if the major players actually believed their own rhetoric, if they actually inhabited the alternative universe they created out of whole cloth, or whether, for unknown reasons, they simply wanted to send men and women off to kill other men and women on a whim.

Experience suggests that it is futile to argue with alternative universes and that no manner of proof to the contrary will convince the perpetrators otherwise. But what the Iraq War does demonstrate is another step towards executive capriciousness. The second Bush Administration proved to be incapable of governing, and the energy of the government was wholly swallowed up in dreams of glory. Maddow suggests that we have now reached the point where the Executive Branch is nearly unchecked and the Pentagon, thanks to generous Republican (deficit-fueled) spending on defense, has taken on a life of its own, regardless of need or of real conditions on the ground.

A fact that’s underappreciated in the civilian world but very well appreciated in our military is that the US Armed Forces right now are absolutely stunning in their lethality. Deploy, deploy, deploy … practice, practice, practice. The US military was the best and best-equipped fighting force on earth even before 9/11. Now, after a solid decade of war, they’re almost unrecognizably better. Early worries such as how much gear we were burning through in Iraq were solved the way we always solve problems like that now: we doubled the military’s procurement budget between 2000 and 2010.

Obama Country

The new President, Barack Obama, won the office partly on “hope and change” and partly because he was against “dumb wars.” He inherited two dumb wars and virtually unchecked Executive Power to go to war. Obama is no cowboy. A thoughtful man, he is an intellectual with an analytic mind, and it seems that somewhere along the line, he has gently and silently slipped the nation into the new century. As the Obama administration is demonstrating daily, the way in which President George H. W. Bush waged war was old-fashioned and outmoded, a nineteenth-century idea of fighting with twentieth-century weapons.

To return to a point I made earlier, if the starting point is the “good war,” the Second World War, then the post-war dream is already an outmoded one, one of “victory” and “glory” and “win.” These terms, in the twenty-first century, are without definitions. Even the Powell Doctrine, invading with maximum force, only gets you so far—into the territory—but does nothing in terms of a long occupation and is a hindrance when it is time to get out. And the Powell Doctrine was totally disregarded when the Bush Administration decided to invade Afghanistan and Iraq.

The Iraq War was a horribly expensive war, fought on the cheap in terms of the number of troops deployed. While bending to public disapproval of the unnecessary war in search of Weapons of Mass Destruction, the Pentagon kept the number of Reserves low but augmented them with Contractors. Iraq is a huge territory that did not want to be invaded or occupied, and the shoestring forces could not control the reluctant population. The major objective when waging an unpopular war, justified in a variety of confusing and conflicting ways, is to win that war. But to do so, the Powell Doctrine must be put into play, an impossibility if the war is a “War of Choice.”

Maddow does not spend much time on the fiasco of the Iraq War, already ably covered by other incredulous historians, but she notes that

By 2001, the ability of a president to start and wage military operations without (or even in spite of) Congress was established precedent. By 2001, even the peacetime US military budget was well over half the size of all other military budgets in the world combined. By 2001, the spirit of the Abrams Doctrine—that the disruption of civilian life is the price of admission for war—was pretty much kaput. By 2001, we’d freed ourselves of all those hassles, all those restraints tying us down.

Iraq and Afghanistan, of course, did not go well. The British, who had tried to contain Iraq in the 1920s, and the Soviets, who had tried to control Afghanistan in the 1980s, could have warned the deaf Americans off their ridiculous quest. No amount of time or effort could bring about a “victory” or a “success” in these ancient lands. As if to test the Neo-Conservative assertions that these wars could be won with more troops (remember that the actual number of soldiers is double what we are told), Obama conducted a “surge.” In male military language, a surge is an increase of personnel for a limited period of time. The hope is to stabilize the situation long enough to get out of Dodge. Obama’s surge allowed America to save face and taught the President that surges are futile. To ask for a surge is like asking for the price in a fancy boutique: if you have to ask, you can’t afford it; if you have to surge, you’ve lost the war.

Quietly, Obama took the advice of his Vice-President, Joe Biden, to use commandos instead. And this is where the book ends. Maddow makes the point that every step along the way disconnects “war” from national responsibility and democratic participation. As Obama pulls out of the Twin Wars of Bush’s devising, he is escalating the ultimate dislocated war, a War of Drones waged by the CIA, augmented by occasional strikes by elite Special Forces. The Administration has a supposed “secret kill list” of those who are to be removed through long-distance strikes, and the rules of engagement are unknown. Congress is kept in the dark about the details, but the benefits are clear.

First, the President and the CIA and a small portion of the military can operate at will. They are not engaged in a war but in a program of planned assassinations, designed to take out the leaders and discourage the followers. Compared to a large number of “boots on the ground,” the Drone Program saves lives and money, blood and treasure. The result is the Ultimate Video Game. As Maddow explains it,

When one of those Blackwater-armed drones takes off with a specific target location programmed into its hard drive, it is operated remotely by a CIA-paid “pilot” on-site, in a setup that looks like a rich teenager’s video-game lair: a big computer tower (a Dell, according to some reporting), a couple of keyboards, a bunch of monitors, a roller-ball mouse (gotta guard against carpal tunnel syndrome), a board of switches on a virtual flight console, and, of course, a joystick. Once the drone is airborne and on its way to the target, the local pilot turns control over to a fellow pilot at a much niftier video-game room at the CIA headquarters in Langley, Virginia. The “pilot,” sitting in air-conditioned comfort in suburban Virginia, homes the drone in on its quarry somewhere in, say, North Waziristan. Watching the live video feed from the drone’s infrared heat–sensitive cameras on big to-die-for-on-Super-Bowl-Sunday flat-screen monitors, the pilot and a team of CIA analysts start to make what then CIA chief Leon Panetta liked to call “life-and-death decisions.” Maybe not sporting, but certainly effective.

According to an article by NPR, the local pilots are required to wear uniforms, and there are programs to help these people cope with the aftereffects of frequent killing, even at a distance. Maddow’s concern is that there is such a dislocation between the decision-making process and the public, and such a distance from the moral responsibility of waging war, that it is easy to be in a state of constant conflict without any accountability. She is concerned that the breakdown is between Congress and the President, but I think that there is another trajectory that also needs to be looked at—the increase in distance between the target and the triggerman.

The real question might be another kind of separation, one that dates back to the bombing of civilians in the 1920s. When these bombings first occurred, there was little concern, because the victims were in Iraq and Ethiopia. Only when Europeans were assaulted at Guernica did any outcry occur, but these moral qualms vanished, and ten years later, the Allies had firebombed Dresden, Hamburg, and Tokyo and had dropped two atomic bombs on non-military targets in Japan—all on civilians.

The ethical aspects of killing helpless human beings were wiped out by the blanket assumption that the populations of Germany and Japan were complicit in the Second World War. The rationale for these civilian bombings was that the morale of the people had to be broken. Studies after the war have suggested that such bombings, like that of London, were not effective either in lowering morale or in slowing wartime production, but it was hard to break the spell of cost-free and effective aerial warfare.

In fact, Powell had dissuaded Clinton from attempting to settle the Serbian conflict through bombing. Maddow quotes Clinton assistant Nancy Soderberg, who reported that Powell had advised, “ ‘Don’t fall in love with air power because it hasn’t worked,’ [he said]. To Powell, air power would not change Serb behavior, ‘only troops on the ground could do that.’ ” Indeed, the Second World War was won on the ground in a long, slow, and deliberate drive to capture and hold territory. In the end, the most effective bombs were the two dropped on Hiroshima and Nagasaki. However, the second Bush Administration was still enraptured by air power and treated the helpless and blameless Iraqis to “shock and awe” in 2003…again to no avail.

Wars in the Mideast were quite different from wars in Europe. These new wars were asymmetrical: tribesmen with a cache of modern weapons against a large contingent of well-armed twenty-first-century warriors who became mired in what was part of an ongoing tribal conflict. Even though America was convinced that it was fighting a “War on Terror,” the nation was confronting an old culture that was fighting against modernism or modernity. In addition to fighting unwelcome change and colonialism from the outside, these tribes were fighting each other for religious reasons that were unclear to Westerners. But however sectarian these local issues, America is committed to fighting a condition that has been named a “War” to give the American public a framework through which to “read” the traumatic “event” of September 11th.

Obama has definitively changed the way in which this non-war is waged. The troops are coming home, while the Drones carry on the killing. If we follow this line of thinking—kill at a distance—from the bombing of Dresden to the Drone attacks on terrorists in Pakistan, the two points are certainly connected. What remains unclear, even in Maddow’s book, is why a President would want to take sole responsibility for body bags, ours or theirs. Drift seems to imply that one President after another “drifted” into taking more and more power because they could, because there was no power capable of stopping them. As the wars became more and more arbitrary, from Viet Nam to Iraq, the personal responsibility became greater, and, as Johnson and Bush found out, the consequences and the judgment of history can be harsh for those who wage war unsuccessfully and for no good reason.

But if the costs of blood and treasure are relatively low, as with the secretive Drone Wars, then the power shifts decisively toward the Executive Branch. If “war” is redefined as tracking down designated targets on a “kill list,” then the ostensible cost of war goes down, as does the size of the military. If Drone attacks can do the job of people, then the need to attack or invade or occupy should diminish. The public will be happy to allow this kind of invisible war to continue, no questions asked. No more flag-draped coffins. Maddow ends her book with a list of problems that need to be solved—what she calls a “to do list.” Most of the points on her list, concerning going to war, the role of citizen soldiers, privatization, and the disposal of nuclear weapons, will resolve themselves within a few years.

Two of her objections—the “secret” Drone Wars and Executive Power—are here to stay and are the future of war: a President in the Situation Room waiting for the outcome of a covert operation by a team of SEALs or for a report on a strike on a target thousands of miles away. If we accept the “necessity” of dropping an atomic bomb on Nagasaki, how can we complain about a single Drone strike on one person? If we want to balance the budget, then how can we not accept this cheap and reliable manner of taking the war to the terrorists? If we could go back in time and assassinate Osama bin Laden, would we do it? If so, then targeting other individuals before they do their worst is a moral act.

Although such strikes now come under the auspices of the CIA and are “secret” and based on “intelligence” that the public and Congress do not know, Rachel Maddow ends hopefully,

We just need to revive that old idea of America as a deliberately peaceable nation. That’s not simply our inheritance, it’s our responsibility.

I wish I could agree with her hopeful assessment. America has not been a “deliberately peaceable nation,” but we decidedly do not want to take responsibility for these new wars. I was shocked to learn that one of my former art students has become a Drone Pilot. Happy and satisfied in a military career, he is in charge of sorting out the designated target from innocent civilians, and he is convinced that these assassinations save money and lives. What is the more moral position: to send thousands of men and women off to die, or to quietly kill the “terrorists” identified by “intelligence”?

This could well be a question that we will never be asked in any formal way. While there are those who are questioning the Drone War, the real Drift is away from taking collective responsibility. So war becomes the province of the President, who wages it in secret, and we may be told from time to time of its casualties. This is the future.

Dr. Jeanne S. M. Willette

The Arts Blogger

“The History of White People” by Nell Irvin Painter

THE DARK HISTORY OF “WHITE”

Introduction

We were told that the election of Barack Obama meant that we—America—had transcended into a beatific state called “post-racial.” We were proud of having overcome three centuries of stubborn slavery and an even more intransigent segregation, both of which were based on bogus “racial” “theories.” We proudly and overwhelmingly elected an African American as the President of the United States. Once the tears of joy and pride had been wiped away and clear vision was restored, it was shamefully clear that, far from being a phenomenon of the past, racism was alive and well and virulent in the “land of the free” and the “home of the brave.” The years-long assault on the legitimacy of a Presidency, the determined and unrelenting efforts to force “failure” upon not only one man—because he was black—but upon the general population, have spanned a gamut of accusations: “Muslim,” “Kenyan,” “Socialist,” “In over his Head,” “Ineffectual,” even “Monster,” and so on. Make no mistake, racism is hiding behind each and every one of these words.

Whatever words are used, they all add up to one word, “black,” which is the opposite of “white,” and, in the minds of these retrograde racists, there are two words that should never come together: “Black” and “President.” It is important to understand that the (right-wing, Conservative, Tea Party, whatever) attitude that Barack Obama can never be a “legitimate” President is fundamentally different from that of the Democrats, who felt strongly that George W. Bush had not been elected President twice and had been put in the office through a unilateral action on the part of the Supreme Court, but who bowed to the rule of law and lived peacefully (if unhappily) under his Presidency. The complaint of the Democrats was a legal one, while the complaint against Obama is a racist one. Over time, Democrats learned to live with what seemed to them to be a coup d’état and let the subsequent career of Bush determine his fitness to serve, but, in contrast, the refusal to accept the very basic fact that Obama is an American citizen (born in Hawaii) continues.

The question is why?

Of course, it is impossible to get inside the psychology of a sizable group of people, but it is possible to get into the history of the culture that created the concept of “whiteness” and the racial dialectic that similarly constructed its polar opposite, “blackness.” Until recently, the very thought of “white” was an absent presence: there but invisible, unspoken but acted upon, reiterated but not acknowledged. “White” as a “race” existed and exerted an unquestioned power, but “white” was not seen. This social “white noise” was embedded in the cultural common consciousness, coming from everywhere and nowhere. The power of “white” rested upon the fact that its source and origin remained both operative and obscured.

Some twenty years ago, “white” came out of the dark and into the light of history, and “whiteness studies” was born. This 2010 book by Nell Irvin Painter is part of these academic attempts to examine “whiteness” or “white” as a concept, but, in this book, she examines how the description of a skin color, “white,” became a loaded term, implying the innate superiority of one skin color over another and, by extension, of one “race” over another. To those who watch The Colbert Report, Painter was the game author who attempted to get it across to Stephen Colbert that “white” was an intellectual construct. Colbert asked a very interesting question, trying to determine if her book was a straight historical account of the comings and goings of white people. The History of White People is not what its title implies, and the title is probably both ironic and provocative.

With a Ph.D. from Harvard and an academic position at Princeton, Painter, a gifted artist and celebrated historian, took up the task of tracing the history of what I could call the “need to define” “white people.” This self-imposed task separates the work of Painter from the theoretical field of “whiteness studies,” for she has produced a fairly straightforward account in which she traces the formation of a discourse on “white people.” It is only recently that an African American has been in the position to write about white people. Or to put it another way, white people have written a great deal about black people, but society and culture prevented the objects of this whitened scrutiny from writing back. The sheer fact that Painter is black gives the title an extra punch, mitigated by her easy and congenial manner: she comes in peace, not in condemnation. As Painter explains on the first page,

I might have entitled this book Constructions of White Americans from Antiquity to the Present, because it explores a concept that lies within a history of events. I have chosen this strategy because race is an idea, not a fact, and its questions demand answers from the conceptual rather than the factual realm.

One of the oddities of “white people” is that, unlike “German people” or “British people,” there is a paucity of literature devoted to defining “white people.” This scarcity is particularly notable when compared to the enormous amount of time, energy, and ink spent on defining “black people.” There is, in fact, an excess, a surplus, an overflow of writing on “black,” giving the relative silence on “white” the kind of power only wielded by withholding. The withholding of “white” gave “white” not just a powerful potency but also created an assumption of what white meant, a blankness that allowed “white” to be over/written by whatever qualities the culture desired. In other words, “Blackness” was defined from the position of “Whiteness,” the vantage point of power and privilege, which claimed an inalienable right to Represent. This is power indeed.

Painter’s book is interesting because of the way in which she lays out her argument that the “history” of “white people” is a discourse devised for socio-economic purposes and dedicated to the maintenance of domination. First, she tracks down the basis of the word “Caucasian,” then links the term to “white,” which is linked to “beauty,” which was then connected to “intelligence,” leading to the logic of superiority. Second, she establishes how the historical connection between color and slavery was made: “black” with bondage and “white” with free. The importance of taking these two steps, or of establishing these two separate discourses, is that the discourse of racial superiority and the discourse of slavery are separable. Painter has to separate the concepts because, once slavery was abolished, the discourse of racial superiority could live on unchanged. Slavery is easy to outlaw; the concept of one race being “superior” to another is an idea and cannot be abolished.

Slavery can die but racism can live on.

How Racism Began, without “Race”

Nell Painter begins her journey into understanding how two neutral words, “white” and “people,” became conjoined with ancient Greece, supposedly the “cradle of Western civilization.” The ancient Greeks had no concept of “race” and differentiated among the peoples they came into contact with in terms of place or locale. Historians divided various tribal groups in accordance with the physical and social distinctions due to climate or terrain. But there was one group that was beyond their empirical reach, the mysterious and legendary inhabitants of the region the Greeks called the “Caucasus.” Here was the land of myth. As Painter laconically describes it, this modern territory,

…is a geographically and ethnically complex area lying between the Black and Caspian Seas and flanked north and south by two ranges of the Caucasus Mountains. The northern Caucasus range forms a natural border with Russia; the southern, lesser Caucasus physically separates the area from Turkey and Iran. The Republic of Georgia lies between the disputed region of the Caucasus, Turkey, Armenia, Iran, and Azerbaijan.

Today this region is still remote and isolated, only occasionally breached by modernity, but through a rather arbitrary historical accident, “white people” have been named “Caucasians.” Like the Greeks, the Romans had no concept of “race,” but the contribution of the Romans to racial thinking was both considerable and accidental. It was the Romans who, in search of Empire, classified most of the inhabitants of Europe. The Romans were interested in the “civilization” or cultural traits of the non-Romans compared to the Empire builders. “For Roman purposes,” Painter writes, “politics and warfare defined ethnic identities.” Painter points out that it was Julius Caesar who gave many of the names we know and use today, from “Gaul” to “Germania,” to the peoples he encountered. In discussing the differences among these scattered and disparate tribes, Caesar was assessing their relative battle worthiness and determining how he would subdue them.

The Romans, as Empire builders, were imperially promiscuous, the better to blend the subjugated peoples with the conquerors. The result was centuries of intermixing and intermarriage, producing a hybrid culture that some say diluted the social foundation of the Romans and gradually eroded the Empire. In contrast, the Germans, or the Germanic tribes, were resistant to the benefits of Empire and hostile to outsiders. In the early years, during the time of Caesar, when the Romans were striving to understand their northern neighbors, important differences were imagined. As Painter says,

How could eminent citizens of this great empire squeeze out admiration for the dirty, bellicose, and funny-looking barbarians to the north? The answer lies in notions of masculinity circulating among a nobility based on military conquest. According to this ideology, peace brings weakness; peace saps virility. The wildness of the Germani recalls a young manhood lost to the Roman empire. Caesar headed a train of civilized male observers—with Tacitus among the most famous—contrasting the hard with the soft, the strong and the weak, the peaceful and the warlike, all to the detriment of the civilized, dismissed as effeminate. As we see, the seeds of this stereotype—a contrast between civilized French and barbarian Germans—lie in the work of ancient writers, themselves uneasy about the manhood costs of peacetime.

The Greeks imagined the Caucasians and the Romans imagined the Germans, and these ancient mythologies would link “whiteness” to “masculinity” or, to put it another way, link purity and resistance as against hybridity and femininity. The Gauls submitted to the Romans and permitted the interpenetration of tribal cultures, while the Germans remained “uncivilized” and aloof, withdrawing behind the Rhine, where they remained unmolested. The idea of “Teutonic purity” would be revived later, and after the fall of the Roman Empire, Painter relates, “white people” were linked to the barbaric tribes of the British Isles, another resistant group divided by Hadrian’s Wall. The Anglo-Saxons, like the many tribes of the Roman Empire, were an amalgamation of the conquered and the conquerors, a hybrid mixture of Viking/Scandinavian tribes that invaded the island and settled.

It is interesting that the ethnic groups that gave the Romans the most resistance, the tribes in Great Britain and Germany, were the ones that became linked to “white people.” However, another element had to be added before the concept of “white” could come into existence. As stated, the concept of race is a very modern one and was linked to the final ingredient: “black” and slave. Until the sixteenth century, slaves were of all colors. In fact, as Painter points out, the word “slave” comes from “Slav,” the Slavs of eastern Europe who, as the result of the labor shortage after the Black Death, were caught up in a lively slave trade. “Slave” and the “black race” were not paired until the need for workers on the sugar plantations of the Caribbean encouraged the European colonizers to depend upon Africans.

The Confluence of “Black” and “Slave”

The Spanish eradicated the indigenous population of the Caribbean in a couple of generations, and the English settlers of the North American continent also found it difficult to enslave a people—the Native Americans—in their own territory. Africans, seized and stolen from their homes, arrived in America dazed and disenfranchised, far removed from their own cultures with no hope of returning; they made good slaves: strong and healthy, confused and divided by dialects and languages. Unlike the Native Americans, the Africans had nowhere to run and no place to hide. However, the idea of “slavery” as lifelong servitude took decades to affix itself to Africans only. Other books have outlined the process by which white indentured laborers and black indentured laborers were socially and legally separated from each other, leaving the white person “free” and the black person “enslaved,” but Painter’s foundational focus is the eighteenth and nineteenth centuries, because it is in this period of “enlightenment” that the slavery of black people had to be justified.

Although she later outlines how the “peculiar institution” of slavery in America developed, Painter unexpectedly begins by linking “white” and “beauty,” noting that as

the eighteenth-century science of race developed in Europe, influential scholars referred to two kinds of slavery in their anthropological works. Nearly always those associated with brute labor—Africans and Tartars primarily—emerged as ugly, while the luxury slaves, those valued for sex and gendered as female—the Circassians, Georgians, and Caucasians of the Black Sea region—came to figure as epitomes of human beauty.

The profitability of slavery, regardless of color, throughout the eighteenth century would stifle any moral qualms about holding humans in bondage for two centuries. However, Painter emphasizes a clear and present subtext in the racialized discourse: the practice of dividing people in terms of physical appearance, beautiful and ugly, based on the Greek ideal (as filtered through Roman art), laced with sexual fantasies, and stimulated by both heterosexual and homosexual desire. Beauty, for men and women, was Greek and was attributed to certain kinds of features deemed unique to Europeans (white people) as opposed to Africans, Asians or Slavs. Tall, slim bodies, pale skin, straight hair and straight noses were the favored elements—not just Greek features based on marble statues, but also diametrically and conveniently opposed to the dark-skinned, flat-nosed, coarse-haired Africans and Asians.

Painter does a nice job of presenting a number of intellectual, philosophical and scientific ideas put forward in the eighteenth century (and in the two subsequent centuries) concerning the measurement of skulls and the angle of facial profiles. For the reader conversant with these endeavors, the author presents a brisk summation across a series of chapters. The underlying reason for this growing discourse on “difference” is, of course, linked to the rise of Empires. The imperial adventures of European nations, coupled with the enormously profitable enterprise of slavery, coincided inconveniently with the Enlightenment and its rational doctrines of equality. The earnestness with which Europeans blinded themselves with (pseudo) science, rather than allow the logic of Enlightenment thought to play itself out, can be read as a defensive measure.

In America the need to distinguish “white” from “black” was acute. Europeans were intent on explaining their supposed superiority in terms of beauty, equated with innate intelligence, as the reason for colonizing and exploiting the rest of the known world. Unlike the Americans, the Europeans did not keep slaves on home soil, nor did they depend upon a slave economy. The American South maintained an agricultural, feudal economy while the Europeans built an international mercantile economy. But in a small and new nation, the South was not only anachronistic but also powerful. Its leaders were slaveholders reluctant to give up their incomes in order to square their words of freedom with their deeds of slavery. When the Americans gained their independence, they did so by denying the majority of the nation’s inhabitants, women and slaves, basic rights. As the English writer Samuel Johnson caustically asked, “How is it that we hear the loudest yelps for liberty among the drivers of negroes?”

Painter places Thomas Jefferson, slave owner, lover of a slave, father of slaves, at the center of American thinking on the significance of “Anglo-Saxon” heritage. She writes,

To Jefferson, whatever genius for liberty Dark Age Saxons had bequeathed the English somehow thrived on English soil but died in Germany…In 1798 he wrote Essay on the Anglo-Saxon Language, which equates language with biological descent, a confusion then common among philologists. In this essay Jefferson runs together Old English and Middle English, creating a long era of Anglo-Saxon greatness stretching from the sixth century to the thirteenth. With its emphasis on blood purity, this smacks of race talk. Not only had Jefferson’s Saxons remained racially pure during the Roman occupation (there was “little familiar mixture with the native Britons”), but, amazingly, their language had stayed pristine two centuries after the Norman conquest: Anglo Saxon “was the language of all England, properly so called, from the Saxon possession of that country in the sixth century to the time of Henry III in the thirteenth, and was spoken pure and unmixed with any other.” Therefore Anglo-Saxon/Old English deserved study as the basis of American thought. One of Jefferson’s last great achievements, his founding of the University of Virginia in 1818, institutionalized his interest in Anglo-Saxon as the language of American culture, law, and politics. On opening in 1825, it was the only college in the United States to offer instruction in Anglo-Saxon, and Anglo-Saxon was the only course it offered on the English language. Beowulf, naturally, became a staple of instruction.

Jefferson’s obsession with the Anglo-Saxons and their mythical racial “purity” was shared by other Americans who were intent on establishing a cultural distinctiveness for those descended from English ancestors. The subtext was more than an attempt to elevate the “pure” white race above the African slaves; it was also a device used to manufacture social difference, elevating one class of white people above another. The sheer quantity of argument and writing about racial superiority on the part of white males from all corners of the intelligentsia implies a deep unease with their convoluted reasoning. Every now and then a counterargument was put forward, and a rare black voice was heard. Painter introduces the reader to David Walker, a free man in Boston and a well-known activist who, in 1829, wrote David Walker’s Appeal: in four articles, together with a preamble, to the coloured citizens of the world, but in particular, and very expressly, to those of the United States of America. According to Painter,

Walker’s Appeal spread a wide net, excoriating “whites” and, indeed, “Christian America” for its inhumanity and hypocrisy. Over the long sweep of immutable racial history, Walker traces two essences. On one side lies black history, beginning with ancient Egyptians (“Africans or coloured people, such as we are”) and encompassing “our brethren the Haytians.” On the other lie white people, cradled in bloody, deceitful ancient Greece. Racial traits within these opposites never change.

Another subtext that Painter locates in the growing American discourse on race is the dilemma of the slaveholders: the moral and psychic damage done to them by owning human beings and being unwilling to let the humans in bondage go free. To the modern reader, the guilt of the Founding Fathers is pure hypocrisy, for these high-minded men did not have the courage to let go of their slaves, the foundation of their wealth and class position. When the Constitution was written, the argument for doing nothing about slavery was that owning slaves seemed to be becoming less and less profitable. However, the invention of the cotton gin in 1794 by Eli Whitney ended the wistful hope that slavery would collapse of its own weight. Once slavery was profitable, the discourse of justification intensified.

Slavery as the American Stain

The need to explain why slavery should continue to be a feature of American life became more pressing as the nineteenth century progressed. European nations gradually outlawed the slave trade, but sharp-eyed observers such as Alexis de Tocqueville realized that the slave culture in the South constituted a moral cancer, a disease in the democratic republic. It is not just slavery, however, that is the embedded flaw; it is racism. Racism made it possible to enslave Africans and to push the Native Americans off their lands. In his perceptive Democracy in America (1835), Tocqueville wrote unflatteringly of the Southerners:

“From birth, the southern American is invested with a kind of domestic dictatorship…and the first habit he learns is that of effortless domination…[which turns] the southern American into a haughty, hasty, irascible, violent man, passionate in his desires and irritated by obstacles. But he is easily discouraged if he fails to succeed at his first attempt…The southerner loves grandeur, luxury, reputation, excitement, pleasure, and, above all, idleness; nothing constrains him to work hard for his livelihood and, as he has no work which he has to do, he sleeps his time away, not even attempting anything useful.”

Europeans made fortunes in the vile trade of capturing and selling slaves, but they distanced themselves, not from the profits, but from the consequences, by not actually owning Africans. According to Painter, Tocqueville seemed to find it hard to write of the South and its customs, but his friend Gustave de Beaumont examined slavery in America in Marie, or Slavery in the United States, a Picture of American Manners, written in the same year as Tocqueville’s first volume on America. However, Beaumont’s book was not translated into English until 1958 and, tragically, when it was finally published in America, its theme, how “one drop” of “black blood” designated an individual as “black,” was still current. Indeed, Painter does not point this out, but during World War II the American blood supply for the soldiers was segregated into “black” and “white” blood.

For Painter, the Civil War and the extended bloodletting over the question of slavery versus the rights of a state to own human beings is but one part of the question of “race” that, by the nineteenth century, had begun to define American thinking. As she writes,

In a society largely based on African slavery and founded in the era that invented the very idea of race, race as color has always played a prominent role. It has shaped the determination not only of race but also of citizenship, beauty, virtue, and the like. The idea of blackness, if not the actual color of skin, continues to play a leading role in American race thinking. Today’s Americans, bred in the ideology of skin color as racial difference, find it difficult to recognize the historical coexistence of potent American hatreds against people accepted as white, Irish Catholics. But anti-Catholicism has a long and often bloody national history, one that expressed itself in racial language and a violence that we nowadays attach most readily to race-as-color bigotry, when, in fact, religious hatred arrived in Western culture much earlier, lasted much longer, and killed more people. If we fail to connect the dots between class and religion, we lose whole layers of historical meaning. Hatred of black people did not preclude hatred of other white people—those considered different and inferior—and flare-ups of deadly violence against stigmatized whites.

What makes this book remarkable is its demonstration that, long before the Republicans’ so-called “Southern Strategy” of the 1970s, indeed even before the Civil War, America had absorbed racial and racist thinking. The other value of the book is the sad evidence of how deeply supposedly intelligent and fair-minded people, American and European intellectuals and scientists alike, were implicated in fashioning a discourse of dehumanization and prejudice. Painter devotes a segment of the book to how various immigrants who were not English struggled to be accepted as “Americans.” Broadly put, in Hegelian terms, the Master/Slave, the One/the Other dialectic became deeply embedded in the American psyche. Although America was a nation of immigrants, from the very start only certain kinds of immigrants were welcome: no Irish, no Italians, no Jews, no Eastern Europeans, no Asians, and so on. In fact, the nineteenth century, punctuated by the Civil War, was one long struggle against the Other, whether the Native Americans in the West or the Catholics in the East.

Enlarging “White” through Diminishing Others

Many of the literary architects of the discourse of racism created a construct of “whiteness” designed to maintain the privilege of a favored few. Thomas Carlyle, generally well remembered for his efforts to improve the conditions of the working class, stained his record of humanism with vicious writing about the Irish. When Carlyle was writing, the Irish were being deliberately starved out of Ireland, but he was a man without pity. As Painter wrote,

Thomas Carlyle (1795–1881), the most influential essayist in Victorian England, held the racial-deficiency view, having fled Ireland’s scenes of destitution in disgust after brief visits in 1846 and 1849. In one cranky article he called Ireland “a human dog kennel.” From his perch in London, Carlyle saw the Irish as a people bred to be dominated and lacking historical agency. He took it for granted that Saxons and Teutons had always monopolized the energy necessary for creative action. Celts and Negroes, in contrast, lacked the vision as well as the spunk needed to add value to the world.

Thomas Carlyle teamed up with the American poet Ralph Waldo Emerson in a rather awkward partnership. Carlyle thought that slavery was a perfectly permissible state for an inferior race, while Emerson was an abolitionist. But both were involved in a mystical enterprise of elevating an imaginary Anglo-Saxon “race” above other “races,” such as the benighted Irish. As other writers have pointed out, the English had developed a language and terminology to defame the Irish and justify British rule over Ireland. This language of inferiority and bestiality was formed centuries before the African slave trade and stood ready to be deployed against any group considered unworthy. Although, as Painter points out, African Americans such as Frederick Douglass understood the parallels between prejudice against the Irish and prejudice against the blacks, the Irish rejected the comparison and fought to be called “white.”

Other respected men of letters, from France’s Ernest Renan to England’s Matthew Arnold, wrote extensively of the wonders of the Celts and the Anglo-Saxons. These writings can be read benignly as an attempt to delineate a national identity for a modern world now obsessed with “difference.” But this strain of thinking was also at heart divisive and, for America, racist. By the middle of the nineteenth century, America was experiencing a tidal wave of immigration, starting with the Irish, followed by the Italians, all of whom were Catholic and all of whom were, therefore, alien to the supposedly Anglo-Saxon Protestant Americans. The poetics of a Matthew Arnold and the violent bigotry of the Know Nothing Party are but two sides of the same coin. By mid-century, as Painter writes, “The Anglo-Saxon myth of racial superiority now permeated concepts of race in the United States and virtually throughout the English-speaking world. To be American was to be Saxon.”

The reasons for this painstaking and fictional construct of “Teutonic” and “Anglo-Saxon” superiority seem clear today. Carlyle feared the consequences of democracy, and other writers feared the invasion of the Others. However, the extent to which these supposedly “great” men were aware of the contradiction between their views of the superiority of “whiteness” and the mercy and love of Christianity and the promise of equality and democracy is unclear. But for the next one hundred years (and beyond), there would be a mountain of writing piling up pseudo-scientific and pseudo-philosophical explanations for why certain peoples should be excluded from the basic rights of human beings and citizens of a free nation. This discourse constructed a fantasy vision of “white people” that was the base for a superstructure of exclusionary laws directed against people who were “not white.”

Clearly the political unconscious of both America and England is an ugly one, but Painter includes an interesting section that links racism not just to beauty but also to sexual desire. Most of the constructors of “whiteness” were privileged middle-class males who may or may not have been latent homosexuals. Painter reads their texts much the way we read Johann Joachim Winckelmann’s writings on Greek art (known through Roman copies) and finds an underlying current of, shall we say, intense admiration for the (male) beauty of the Teutonic ideal. These descriptions of the beautiful white male linger on today—fair skin, blue eyes, blond hair, a tall thin frame—and can be seen in Abercrombie and Fitch and Ralph Lauren advertising. In this discourse, “white people” are gendered male and “beauty” is linked to the idea of “white.”

With the dubious intellectual weight behind the notion of the inherent and innate superiority of “white people” came the construction of the “Aryan” idea, an idea so powerful that art history still includes ancient Egyptian culture as “Western,” regardless of the fact that the Egyptians were Africans and black. The romantic equation of Aryan and white continued to be supported well into the twentieth century; after the Civil War in America, “whiteness” was linked to enfranchisement and the power to vote. Even though black men were given the “right” to vote in 1870, full voting rights for non-whites took a hundred years to come about, and the hard-fought right to cast a ballot remains under threat today.

Aryan Supremacy Through Eugenics

One of the great services of Painter’s book is its parade of scholars and scientists who wrote of the wonders of being Aryan and Anglo-Saxon and who did studies of the human skull in order to “prove” racial superiority. Today, these men are obscure, known only to specialists such as Painter, but in their time, as she stresses, they were respected and celebrated. What is remarkable is not only how forgotten these architects of racism are today but also, paradoxically, how completely their discourses penetrated the American collective consciousness. Reading of one after another of these supposed intellectuals is simply depressing. Decades after slavery was abolished, the writings kept coming, their perpetrators festooned with honors and crowned with laurels, halted only in the face of the Nazis.

Not that the proponents of Aryan superiority would be entirely silenced by the horrors of the Holocaust, but the doctrine of racial superiority would, at long last, lose its luster. The nation therefore owes a great deal to the occasional brave white dissenter, like the anthropologist Franz Boas, who joined with the Black intellectual W. E. B. Du Bois to fight racism and anti-Semitism in the decades before the Second World War. As Painter points out,

During the late nineteenth century, poor, dark-skinned people often fell victim to bloodthirsty attack, with lynching only the worst of it. Against a backdrop of rampant white supremacy, shrill Anglo-Saxonism, and flagrant abuse of non-Anglo-Saxon workers, Boas appears amazingly brave. It mattered little in those times that lynching remained outside the law. More than twelve hundred men and women of all races were lynched in the 1890s while authorities looked the other way. Within the law, state and local statutes mandating racial segregation actually expelled people of color from the public realm.

Voices such as that of Boas, speaking out against the bogus assertions of “race,” were shouting into a headwind of rhetoric. American histories rarely stress the articulate racism of Presidents Theodore Roosevelt and Woodrow Wilson. But early in the twentieth century, instead of leading the population into a new century with new thinking, they vigorously extended the creed of “Anglo-Saxonism.” The trend towards the “Teutonic” abated somewhat during the Great War, in which the Germans, or Teutons, were the enemy. One of the outgrowths of the elevation of “white people” was the attendant fear of “race suicide” arising from the threat of intermarriage between the Anglo-Saxons and “inferior” whites. In the Northern states, these rants were aimed at the continued influx of immigrants who were said to be diluting the essence of the “white” race; in the South, the fear of interracial mixing drove many localities to forcibly sterilize those deemed unfit to breed.

The “science” of eugenics, which would become the driving force behind the Nazi extermination of the Jews, the Gypsies, and other “undesirables,” was, like many of the racist theories in America, largely the brainchild of New Englanders. Cradled and supported by the most eminent universities in the nation, these writers drove a discourse of exclusion and of the elimination of the wrong kind of blood or heredity. It was taken as an article of faith that “inferiority” was hereditary; there was no concept of the environmental conditions that might have caused generational poverty. On one hand, scholarship was turned into public policy that denied equal opportunity; on the other, those whom generations of bigots had kept from achieving, the Irish, the Italians, the Chinese, the African Americans, and so on, were then declared inferior.

The solution to the socially engineered underachievement of the poor and disadvantaged was forced sterilization, which the Supreme Court found constitutional by an 8-1 majority in Buck v. Bell (1927). Virginia led the way in 1924, and other states followed for a decade (with California coming in as the second-largest sterilizing state), until they were shamed when the Nazis took up identical policies of sterilizing the poor and those who might inherit a tendency towards criminality. Even so, forced sterilization continued until the Civil Rights Movement of the 1960s. Eugenics was directly linked to the argument of the inherited superiority of “white people”: it took the assumption of birthrights and privileges and turned it against those who had inherited poverty.

If one inherited one’s low economic status, then, by this logic, one also inherited low intelligence. As with sterilization, technology and pseudo-science were put in the service of white supremacists. Blithely unaware of the impact of environment upon intelligence and of the inherent biases in the so-called “intelligence” tests designed by Alfred Binet and Théodore Simon, the anti-immigrant “nativists” used yet another measure to disenfranchise and marginalize the less white whites. The bundle of frantic efforts to maintain domination, with tactics including Jim Crow laws, forced sterilization, anti-immigrant intelligence testing and restricted immigration, was all based on the supposed superiority of the very white European stocks. And then these superior beings, in all their shining whiteness, descended into the mad savagery of the Great War.

White Public Policy

The awkwardness of seeing “white people” acting badly did not deter decades of twentieth-century effort to delegitimize people of color, Catholics, and Jews. These bigoted beliefs were mainstreamed and popularized through mass media and became widely held. Fortunately, by mid-century, they were forced underground and muted. As Painter explains,

After its heyday among race theorists in the 1910s and 1920s, Anglo-Saxonism declined during the Great Depression and the Second World War. A new generation of social scientists had outgrown such blather on race. Now scholars were questioning the very meanings of any and all concepts of race and studying the troubling fact of racial prejudice. Ruth Benedict, along with Franz Boas and their like, were beginning to carry the day…The change from 1920s hysteria to 1940s cultural pluralism occurred simultaneously in politics and in culture.

After the Second World War, racism based on the ideal of “white” beauty continued under other guises, despite the fact that the idea of a scientific entity called “race” was being debunked. During the final decades of the twentieth century, the idea of “white people” was less intellectualized and more politicized. The effort to assert the superiority of whites was no longer respectable in academia, but the effort to deny African Americans the benefits of the New Deal, the G.I. Bill, and even basic civil rights continued in public policy through a maze of laws and customs. In addition to being pushed to the margins, people of color were trained through mass media to “look white.” As Painter writes,

Much nose bobbing, hair straightening, and bleaching ensued. Anglo-Saxon ideals fell particularly hard on women and girls, for the strength and assertion of working-class women of the immigrant generations were out of place in middle-class femininity. Not only was the tall, slim Anglo-Saxon body preeminent, the body must look middle- rather than working-class.

People of color or different “ethnic” types were forced by real estate laws and municipal zoning to live in ghettos and barrios, where they were invisible. The race presented by mass media as “American” was pure white; people of color were rare and on the fringes in the movies, and many mainstream magazines, from news magazines to fashion magazines, refused to print photographs of people of color. But the Civil Rights Movement countered the myth of white cultural and physical superiority by challenging white people on moral grounds. Painter quotes Malcolm X,

“When I say the white man is a devil, I speak with the authority of history…. The record of history shows that the white man, as a people, have never done good…. He stole our fathers and mothers from their culture of silk and satins and brought them to this land in the belly of a ship…. He has kept us in chains ever since we have been here…. Now this blue-eyed devil’s time has about run out.”

While “white people” were frightened to hear this frank assessment from a certain portion of the African American public, that same public would be alarmed by the writings of an anti-melting-pot white supremacist who, as Painter notes, had

taught at Stanford University and the experimental college of the State University of New York at Old Westbury. In The Rise of the Unmeltable Ethnics, perfectly suited to the times, Novak concentrates on those unmeltable “PIGS,” Poles, Italians, Greeks, and Slavs, in their view so long reviled: “The liberals always have despised us. We’ve got these mostly little jobs, and we drink beer and, my God, we bowl and watch television and we don’t read. It’s goddamn vicious snobbery. We’re sick of all these phoney integrated TV commercials with these upper-class Negroes. We know they’re phoney.”

These words, complete with misspelling, from Michael Novak’s Rise of the Unmeltable Ethnics (1972) would today be termed “hate speech.” At the time, they were the leading edge of the Nixon “Southern Strategy,” a fancy term for thinly disguised racism. Here and there a few lone voices among white Southerners were raised, revealing the inherent lack of “ethics” in the system of racial segregation, based upon the fiction of “white supremacy.” Painter presents

Lillian Smith (1897–1966), a white southern essayist, novelist, and (with her lifetime partner Paula Schnelling) operator of a fancy summer camp for girls, powerfully described her South in Killers of the Dream (1949 and 1961). The book pilloried southern culture as pathological and white supremacist southerners as caught in a spiral of sex, sin, and segregation. Here was a book of wide influence that portrayed whiteness as morally diseased.

It would take science, real science this time, to put to rest the notion that there was such a thing as “race.” There are only human beings whose skin colors and facial features have evolved in response to the environment. Painter quotes

the words of J. Craig Venter, then head of Celera Genomics, “Race is a social concept, not a scientific one. We all evolved in the last 100,000 years from the same small number of tribes that migrated out of Africa and colonized the world.” Each person shares 99.99 percent of the genetic material of every other human being. In terms of variation, people from the same race can be more different than people from different races. And in the genetic sense, all people—and all Americans—are African descended.

Painter’s book is divided into a series of what she terms “Enlargements of Whiteness.” The First Enlargement is, in fact, the formation of the post-Enlightenment discourse on “white people” by writers of the eighteenth and early nineteenth centuries. The Second Enlargement builds on these beginnings and uses the ideas of racial superiority to expand political rights for one group of white people, the males, while excluding other groups, people of color and women. The Third Enlargement expanded these privileges after the Second World War by showering benefits upon white males and by excluding equally deserving people of color and women from the great government “thank you” stimulus that created the male middle class. The Fourth Enlargement is the struggle of women and people of color to enter fully into the American Dream.

Conclusion

People of color made inroads during the postwar period because the discourse that defined “white people” was doubly discredited. First, the Nazis adopted, lock, stock and barrel, the entire panoply of racist ideologies and used this discourse on “Aryans” and “Anglo-Saxons” and what have you to slaughter millions of human beings. Second, the final stand of the white supremacists during the Civil Rights period was so public and so ugly, and the resulting photographs and television coverage so shaming, that it was impossible to defend the determination to disenfranchise millions of American citizens. But the coup de grâce was the genetic studies proving that all humans share the same genetic makeup and that there is no scientific entity that could be separated out as “white people.”

Painter ends with the hope that perhaps intermarriage and “race mixing” will end the black/white dichotomy, but that is far in the future. In the meantime, as she points out, “Nonetheless, poverty in a dark skin endures as the opposite of whiteness, driven by an age-old social yearning to characterize the poor as permanently other and inherently inferior.” The discourse on “white people” is alive and well and is an article of faith for millions of Americans who may or may not be aware of the immoral, unethical and un-American roots of their ideologies. It is sad to learn from this book that the term “American exceptionalism” is a code for “white people,” for Anglo-Saxon “whiteness.”

When politicians say that “Barack Obama does not believe in American exceptionalism,” they are saying “Barack Obama is black.” Like all the invading immigrants, from the Irish to the Hispanics, he doesn’t “belong”; he is not an “American.” The discourse on “white people” is why there is such a strong belief that the President was not born in America. Obama cannot be an American because he is an African American; he is black. Painter needs to write a sequel to this book, one that focuses on the twenty-first-century salvage operation of this discourse, which continues on the fringes, on hate websites and in political speeches. The Discourse of “White People” continues to mar the American Dream.

Dr. Jeanne S. M. Willette

The Arts Blogger

The Paradox of Tar Heel Politics by Rob Christensen

THE PARADOX OF TAR HEEL POLITICS. 

THE PERSONALITIES, ELECTIONS, AND EVENTS

THAT SHAPED MODERN NORTH CAROLINA

BY ROB CHRISTENSEN

North Carolina is a small state of little consequence, so why should a political history of “modern North Carolina” interest people outside the state? The immediate answers might be “John Edwards” and his horrifying fall from grace, or the upcoming Democratic Convention this summer. But the long-term answer can be stated in two words: “Jesse Helms.” For the art world that name sends shivers down the spine, because during the last twenty years of his career the notorious Senator waged an ongoing war against artistic freedom. But the name of Helms should resonate for other reasons: he was, in his time, the spiritual and practical Godfather of the extreme Right Wing and of the Tea Party. The author, Rob Christensen, makes an interesting case that many bad things to come began in North Carolina.

In the South, race and class are everything, determinative, and have the half-life of uranium. Given that questions of race and class have shaped the past, present and future of the region, the most interesting aspect of this book is its undeveloped subtext, the legacy of slavery. Christensen, a veteran newspaper reporter and columnist for Raleigh’s News & Observer, writes in a Dragnet manner and, to be fair, is not a historian and thus does not put the story of North Carolina politics into a fully developed temporal context. A longtime observer of the state’s political scene, he sets out to inform the reader of its modern, twentieth-century, political history.

Christensen makes a compelling case that North Carolina is a state caught between progressive ideas and traditional values. At this time, North Carolina seems to count as a border state, lodged between the Old South (to the south) and the New South (to the north). Although it is not on the Mason-Dixon line, over the past two decades it seems to have joined the category occupied by Virginia and Maryland. As with these in-between states, “outsiders,” business interests, and other “new” industries, such as technology, have invaded North Carolina. Maryland and Virginia are outliers of the federal government, home to people who come from all over the nation to live and work in the northernmost suburbs. The result has been a slow sea change in their political and cultural make-up that has left these states, like North Carolina, divided between the old and the new.

While Virginia and Maryland accepted the newcomers passively, North Carolina actively courted them. In North Carolina, the changes have been forced by the modernization of the state through the famed Research Triangle of Raleigh, Durham and Chapel Hill and the arrival of the big banks in the city of Charlotte. The state is now split into factions: the more liberal university communities, the more conservative business interests, and the outlying rural regions, reactionary in their outlook. The state that unexpectedly went “blue” for a Black man, Barack Obama, in November 2008 also voted for a ban on gay marriage in May 2012. Christensen’s book explains why the state acts in such contradictory ways.

Christensen sums up the contradictory nature of the state on the opening page of the book:

Politics was largely controlled by big business. The state lit the cigars for corporate executives but was hostile to organized labor; it generously spent money on roads and universities but was stingy when it came to the poor. State leaders sought a measure of fairness toward its black citizens, as long as it didn’t threaten the system of segregation. It was a business progressivism that was in tune with North Carolina’s growing urban middle class of lawyers, power-company executives, bankers, textile-plant owners, newspaper publishers and editors, and others.

On the surface, this description could fit many a state with a substantial minority population, but most states do not have the cultural legacy that North Carolina does. For all its contrary aspects, for all its position as a border state, North Carolina was, is and will always be a Southern state burdened by the legacy of slavery. That the culture of slavery and its unsettling consequences should still be so powerful is rather curious. If one compares crimes against humanity, it is customary for succeeding generations to move increasingly beyond the sins of their grandparents and great-grandparents. The young people in Germany are more than ready to vow “Never Again” and to move forward; they, after all, are not the ones who have to atone.

The famous book The Inability to Mourn: Principles of Collective Behavior, by Alexander and Margarete Mitscherlich, is instructive in that the authors point out that Germany, as a collective society, had difficulty mourning its loss (the Jews) and was living with melancholia. Other studies, such as Laurence Rees’s The Nazis: A Warning From History, show that the generation that had to come to terms with its crimes refused to do so. Rees’s shocking book (also a television series) contains the testimony of aging Nazis who were too old to prosecute and who were, therefore, willing to “confess” but not to repent. Without getting into the weeds of how to compare cultural crimes, it might be said that the South still has not come to terms with slavery.

But what is the cause of the prolonged melancholia, a psychological condition that has persisted like a cancerous disease for over one hundred and fifty years? What is the “loss” for which the region is unable to mourn? Is the “loss” the defeat of the Confederacy? Or is it the loss of the feeling of Mastery that slaveholding (owning human beings) brought to the owners (even those who had no slaves)? Or is the “loss” the loss of honor and moral standing for clinging to slavery long after the rest of the civilized world had outlawed it? Or does the melancholia come from the humiliating combination of being marked as wrong and sinful and of being brought low to a state of abjection? We can only conjecture at the reason for the South’s insistence that local customs, however odious, must be maintained at any cost, but it is clear that what sets the region apart is its adherence to slavery and its inability to repudiate its past.

Part of the problem for the South is the consequences of slavery. And those consequences, a century of segregation, happened not so long ago. Many people living in the South benefitted, and benefit still, from segregation. Many people living in the South suffered, and suffer still, from segregation. Segregation has served the region quite well—if one is white. The result of the continuing benefits is a cultural defensiveness on the part of the whites, who evince a resentment of “outsiders” who do not accept the unspoken rules of the game. It is difficult to be condemned by history, to be looked upon with suspicion by the present, and to find one’s culture out of step with the tide of history. Many Southern states stubbornly defend an indefensible past and stubbornly fight to maintain the traditions of separate and unequal. But North Carolina splits the difference between an unpalatable past and an unknowable future: it will go along with inevitable change, but not too fast.

Christensen outlines the tactics of North Carolina politicians who attempted to navigate between accepting the state’s inherent racism and acknowledging its strata of class. The state began the modern era as a Democratic state. Republicans, the party of Abraham Lincoln, were few and far between. In addition to being the party that led the federal government to victory in the Civil War and the party that abolished slavery, Republicans were associated with the decade of Reconstruction. Reconstruction, not to put too fine a point on it, was the Occupation of the South by the North. During this period, backed up by the occupation forces, freed African Americans were given economic and political opportunities and rights.

With hindsight, it seems astonishing that a recently subservient group should move so quickly to political activism, but this remarkable accomplishment got little credit from the outraged whites. By the end of the century, this brief period of relative equality came to an end and white supremacy pushed African Americans out of legislatures and out of the mainstream of public life. Christensen outlines in horrifying detail the long and bloody campaign of the Democrats to regain power through terror and intimidation. He provides a chilling poem of White Supremacy:

THE WHITES SHALL RULE.

The whites shall rule the land or die

The purpose grows in hearts of steel

With burning cheek and flashing eye

We wait what waiting may reveal.

But, come what may the whites must hold

What white men’s patriot valor bought;

Our grandsire’s ashes not yet cold,

Hallow the soil for which they fought.

By the beginning of the twentieth century, whites were back in control and African Americans began to experience life under the regressive laws of Jim Crow. Christensen gives an account of the violent ouster of African Americans from power in the city of Wilmington—an event of which I was unaware. Like Tulsa and Rosewood, Wilmington was a city where African Americans prospered, and, as in Tulsa and Rosewood, it all ended, in the fall of 1898, in the destruction of a Black community at the hands of white terrorists. As Christensen wrote, “Democrats engineered what must surely be one of the few coups d’etat in American history.”

He continued,

“The forced exile of the Republican leaders was followed by a voluntary exodus of 2,100 black residents from Wilmington, including many members of the black middle class. Within two years, Wilmington was transformed from a city with a small black majority to a city with a slight white majority. Wilmington would never recover its position as North Carolina’s leading city.”

The author noted that the entire campaign of terrorism was bankrolled by the business elite, who wanted to end, forever, the unlikely alliance of Republicans, African Americans and the lower classes, the farmers and textile workers. By invoking race, Populism could be defeated. The lower-class whites were bought off with privileges that African Americans did not have, everything from voting to being allowed to ride in the front of the bus. Or to put it another way, in order to privilege Whites, constitutional rights had to be taken away from Blacks. And after the campaign of domestic terror, the pre-war status quo was reasserted: African Americans were deemed inherently unequal, and, if slavery was outlawed, then the Jim Crow laws would reify this presumed inequality.

Blacks were put back “in their place” all over the South. North Carolina was but one of many states that rejected federal rule, Reconstruction, and any thought of racial equality. In addition, the pattern of separating the lower classes from their natural economic allies, the African Americans, was replicated in all Southern states. North Carolina was a fiercely anti-union state, and its hostility to unions was fueled by a natural antagonism to outside “agitators” who would try to change the culture. Lower-class mill workers would rather cleave to the upper classes who exploited them and be complicit in their own oppression because of their shared allegiance to white supremacy. In return for the workers’ willingness to be exploited, the businesses and industries did not hire their greatest competitors, the Blacks.

Having passed through North Carolina some years ago, I noted the presence of the linen industry, the furniture industry, the liquor business, and the tobacco industry, long-time business powers in the state. These industries are huge, providing the nation’s living room sofas, chairs and tables, the nation’s sheets and towels, and the nation’s oral addictions, and yet the state was impoverished. On one level, the workers are paid such low wages that the state does not have much of a tax base; on another level, these industries are enormously profitable, and small North Carolina should be a very wealthy state.

Christensen discusses these dominating industries and their political power in the state, but I wish he had solved the mystery that puzzled me: where do the tax dollars go? This is a state that lacked basic safety features on its streets and highways: no reflective paint on the median strips, no reflective caps, and streetlights few and far between, making night driving an exercise in Russian roulette. One can only assume that for over a hundred years generations of politicians have been paid off by the local businesses and that taxes must be abnormally low, rewarding the few at the expense of the many.

And here is why this book has resonance beyond North Carolina: it sets out the conditions for today’s politics and for patterns that seem inexplicable, patterns that, the author suggests, have spread throughout the nation. Much has been written about the “Southernization” of certain states and regions in America, most notably in Rick Perlstein’s Nixonland: The Rise of a President and the Fracturing of America. Sadly, starting with Nixon’s Southern Strategy, aka playing the race card, what has been exported has been largely negative: racism, classism, creationism, fundamentalism, none of which is in keeping with the modern world; all are defiant survivals of a dead social system.

Without repeating the entire history of the state’s governors, ably laid out by the author, it is clear that, from the beginning of the century, white supremacy ruled. It ruled without challenge until the Civil Rights era of the sixties, ended by the two Supreme Court rulings of 1954 and 1955 (Brown v. Board of Education and its enforcement decree) and the two civil rights laws of 1964 and 1965 (the Civil Rights Act and the Voting Rights Act). Unlike states such as Alabama and Virginia, which resisted the orders to integrate, North Carolina quietly complied. But the intervention of the federal government, once again “imposing” outside values and ideas upon a region that cherished its “ways,” did not change the minds and hearts of the state.

The problem is that the South is a region often at odds with the Constitution and its ideals. Off and on, the federal government has had to assert itself in attempts to bring these states back into the Union, whether through warfare or legal action. There was a Southern cultural refusal to accept the authority of the federal government. Ever since the defeat of the South and Reconstruction, the South has, understandably, never been favorably disposed toward Washington, D.C. However, the South conveniently ignores the fact that the federal government provides funds and jobs for most of the states through military bases, such as the naval base in Virginia, and other federal projects, such as the space programs in Florida and Texas. North Carolina, Christensen stated twice, is “bristling” with military bases.

The problem is not how to escape the contradiction between being dependent upon federal largesse and maintaining cultural customs but how to export the attitude of defiance and distrust of the “government” and how to maintain traditional “values” of racism, classism and homophobia. Enter Jesse Helms. Jesse Helms has long since gone to his maker and, upon his demise, cartoonists (who are artists) imagined him going (to his surprise) to Hell for his race-baiting attacks on the fine arts. But to the surprise of those of us in the art world, Helms had far more up his sleeve than the fight against art revealed. According to Christensen, Jesse Helms changed North Carolina from a Democratic state into a Republican state. As he wrote,

“Helms became North Carolina’s most famous national political figure of the twentieth century. He helped transform the state into a Republican stronghold instrumental in the elevation of Ronald Reagan to the presidency, shifted the GOP to the political right, and contributed to the polarization of the nation’s politics.”

The triumph of Jesse Helms and the Southern Strategy of White Supremacy rests upon the fact that in the South race trumps everything. Race trumps class. Race trumps gender. Race trumps economic self-interest. Race trumps morality and ethics and honor. Good and decent people are apparently willing to do anything to maintain the system of White Supremacy. Christensen does not go into that much detail, but he makes it clear, from time to time, that enormous amounts of time and energy are spent maintaining a system that oppresses African Americans. But now that so many citizens of color have migrated out of the state, joining the thousands from other Southern states, this time and energy are expended in maintaining a cultural supremacy of Traditional Values.

That said, it seems that Helms was, compared to the more courtly Sam Ervin, blatantly open about keeping Blacks out of power, assuring that the rights of women were suppressed, and maintaining business as the state’s overlord. The passage of the Civil Rights Act gave Helms an opening. Christensen remarks, “Few people understood the power of the white political backlash better than Helms.” The author explains that the Southerners were conditional Democrats; that is, they would support the national party if, and only if, white supremacy could be continued without interference. When, after decades of lynchings and oppression, the Jim Crow system could no longer be allowed to continue, the agreement was broken and Dixie sprinted into the welcoming arms of the Republicans.

One could wonder why the Republican Party—the party of Lincoln—would accept an entire region of White Supremacists, but the heritage of history is trumped by the desire for power. Helms was an early and loud voice of modern-day “conservatism.” He was a one-man Fox News before Fox News, starting out as a newspaper and radio reporter who became increasingly unwilling to accept the political progress that followed the Second World War. By 1960, Jesse Helms was on television in Raleigh, appearing on a station owned by the conservative son of a Baptist minister. As Christensen wrote,

“Although Helms did not host a talk show, in some ways he was a forerunner to Rush Limbaugh, Bill O’Reilly, and other national conservative commentators who would emerge in the 1990s, giving voice to conservative anger.”

Indeed, the author quotes a few of the “commentaries” of Jesse Helms that sound familiar to anyone who follows the news today. “Helms preached an unvarnished libertarian conservatism,” Christensen writes. Helms called Social Security “nothing more than doles and handouts,” rural electrification cooperatives “socialistic electric power,” and Medicare a “step over into the swampy field of socialized medicine.”

As quoted by Christensen, Helms opined, “They didn’t call it socialism, of course. It was given deceptive names and adorned with fancy slogans. We heard about New Deals, and Fair Deals and New Frontiers and the Great Society.” Speaking of anti-war protesters, Helms stated, “Look carefully into the faces of the people participating. What you will see, for the most part, are dirty, unshaven, often-crude young men, and stringy-haired awkward young women who cannot attract attention any other way. They are strictly second-rate, all the way.”

These comments could be transplanted, without much alteration, onto the current debates over “Obama Care” and “Occupy Wall Street.” Helms (who had a well-earned reputation as a nasty political campaigner) became a United States Senator in 1972, riding to glory on the coattails of Richard Nixon. Helms would stay in place for the next thirty years, fighting the good fight, voting reflexively against everything federal. Christensen states, “…he was an ardent foe of nearly every social program, from food stamps to child nutrition programs; opposed nearly every consumer program, including the creation of the Consumer Protection Agency; and voted against nearly every environmental bill.” He was an early supporter of school prayer and was vehemently anti-abortion, bringing together like-minded members of Congress to fight for and against their “causes.”

But for Helms, the real Messiah was not Nixon or Ford, and not fellow Southerner Jimmy Carter, but Ronald Reagan. Helms allied himself early on with someone he considered to be a true conservative. As Christensen writes, “By the beginning of the 1980s, Helms was the leader of a powerful political movement that would soon be dubbed the New Right. Helms had helped install Ronald Reagan in the White House.” Christensen quotes a Reagan biographer, Lou Cannon, who emphasized the importance of Helms to Reagan: the 1976 North Carolina primary was the “turning point” of Reagan’s political career. “Without his performance in North Carolina, both in person and on television, Reagan would have faded from contention before Kansas City, and it is unlikely that he would have won the presidential nomination four years later.”

In addition to using newspapers, radio, and television to spread the doctrine of opposition to the “government” and of White Supremacy, and in addition to working hard to bring a fellow conservative, Ronald Reagan, into power, Helms was also a pioneer in forming a powerful financial and political machine to get himself elected. The Congressional Club was a forerunner of today’s “grass roots” organizations that serve as laundering operations for billionaires who want to control the government. The achievements and reach of this “Club” were astonishing. Christensen listed its accomplishments:

The Congressional Club not only engineered Helms’s reelection in 1978, 1984, and 1990, but it also elected John East, a political science professor at East Carolina University, to the U.S. Senate in 1980 and Clinton businessman Lauch Faircloth to the U.S. Senate in 1992. In the process, it defeated Democrat after Democrat. The Congressional Club handed four-term governor Jim Hunt his only defeat in 1984. It unseated Senator Robert Morgan, a moderate Democrat, and Senator Terry Sanford, a liberal. It scotched the hopes of John Ingram, a white populist, and Harvey Gantt, a black candidate. The Congressional Club also had a national reach. It helped elect Reagan, but it failed in its attempt to elect Steve Forbes as president in 1996. The Congressional Club tried in 1985 to buy the giant television network CBS because it wanted more conservative national news broadcasted. The club became a training ground for a generation of young conservatives—people such as Charles Black, Alex Castellanos, Carter Wrenn, Arthur Finkelstein, Richard Viguerie, and Ralph Reed—who later ran the campaigns of U.S. presidents as well as those of prime ministers of other countries.

In a prediction of the “permanent campaign,” the Congressional Club operated continuously for twenty years. The anti-gay, anti-Black, anti-government, take-no-prisoners confrontational approach to politics of Helms was part of what Christensen called a “civil war” within the Republican Party. As early as the 1980s, a war began for the soul of the Republican Party, a battle between the “moderates” and the “conservatives.” Today, we know that the “moderates” lost and the Jesse Helms types of politicians are now in control. In another foretelling, the 1984 Senate race between Helms and the former governor, the progressive Jim Hunt, was expensive. According to Christensen, “…the race cost $26 million, a national record for a Senate race at the time and the equivalent of $51 million in 2007. The advertising lasted nineteen straight months, breaking only for a week-long 1983 Christmas truce.”

Suddenly in trouble in this campaign, Helms resorted to the race card. Christensen recounts,

“When conservatives are in trouble in North Carolina, they frequently turn to racially charged issues. The momentum in the race began to shift in October 1983, when Helms launched a heavily publicized filibuster against legislation making slain civil rights leader Martin Luther King Jr.’s birthday a national holiday. For several days, Helms attracted headlines as he hammered away at King’s alleged communist connections. “King’s view of American society was thus not fundamentally different from that of the CPUSA [the American Communist Party] or of other Marxists, and political agitation, his hostility to and hatred for America should be made clear.”

Helms also tied Hunt to “gay activists” and to “right wing death squads” and continued his opposition to abortion rights. Christensen noted that while Helms’s fight against the Martin Luther King holiday played well at home, his stand against abortion was not as popular. Nevertheless, in another foreshadowing of the avalanche of “war on women” bills that have been put forward and passed in the past two years, “During his career, he sponsored twenty-seven antiabortion amendments or bills. Helms called the legalization of abortions a ‘human holocaust with no parallel in history.’ And he said abortion should not be permitted under any circumstance. ‘Rape does not justify murder of an unborn child,’ Helms said in 1988.”

The last stand of Jesse Helms was, in another prophecy, against a Black man, the poised and polished Harvey Gantt. Helms had a long career opposing racial equality.  Christensen writes,

“During his Senate career, Helms managed never to find a civil rights bill that met with his approval. In 1982 he staged a filibuster against an extension of the Voting Rights Act, even though it was supported by seventy-five senators and endorsed by President Reagan. Helms sponsored bills that would have banned court-ordered busing for racial integration. He was a major backer of the apartheid regimes in South Africa and Rhodesia (now called Zimbabwe). For years, he blocked efforts to put a black judge on the conservative, all-white Fourth U.S. Circuit Court of Appeals in Richmond, Virginia, prompting President Clinton to call his actions “outrageous.”

And yet, Christensen continues, “Helms’s segregationist views in the 1960s reflected those of a majority of white North Carolinians, according to public opinion polls.”

Although it was the 1990s, thirty years after the Civil Rights movement, certain segments of North Carolina voters could not bring themselves to vote for a Black man, and Jesse Helms defeated Harvey Gantt twice during this decade. During the first campaign, the Helms operation produced one of the most devastating ads against racial equality in modern history. Powerful and slickly produced by Republican operative Alex Castellanos, the ad is described by Christensen:

One TV ad dealing with racial quotas became perhaps the best-known political commercial in North Carolina history. The ad featured a pair of white hands crumpling a job application as the announcer says: “You needed that job, and you were the best qualified. But they had to give it to a minority because of a racial quota. Is that really fair? Harvey Gantt says it is. Gantt supports Ted Kennedy’s racial quota law that makes the color of your skin more important than your qualifications. You’ll vote on this issue next Tuesday. For racial quotas: Harvey Gantt. Against racial quotas: Jesse Helms.”

Although he vehemently denied playing the race card with this ad, Helms, evoking white fears of Affirmative Action, stated, “…you want quotas to dominate and dictate whether you get a job or whether you get a promotion, you vote for Mr. Gantt.”

Christensen also reports on the efforts of the Helms campaign to suppress the Black vote—tactics that predict those being deployed against people of color, students, and the elderly today. Although he won his election in his usual underhanded and unseemly fashion, Helms, according to Christensen, seemed angry, striking his customary victim pose: “The confederation of liberals has struck out again: the homosexuals, the defenders of pornographic artistry—if you want to call it that—the National Organization for Women, the pro-abortion crowd, the labor union bosses, and the left-wing news media,” he said. The only reason Helms seems to have decided to retire in 2001 was his declining health. Too bad; he would have liked what he could see in today’s politics. Jesse Helms died in 2008, the year a Black man was elected President. A new era of backlash had just begun.

The career of Jesse Helms was a curious one. On one hand, he worked against the tide of history and justice, opposed to the rights of women, people of color, gays, and the working class, and to social equity and equality; one wonders how he survived for so long. On the other hand, he spoke powerfully to all those who were fearful of change. The fears of losing Supremacy, whether of race, class, or gender, are long-lasting, and tremendous effort has been put towards maintaining the status quo. Jesse Helms was inartful and intemperate in his phraseology, even in his own time, but his crude coarseness is now commonplace in political “discourse.” Christensen pointed out that

“As late as 1965, Jesse Helms was still defending the use of literacy tests. The real question, Helms said, “is whether illiterates ought to be allowed to vote. And that raises the question of what kind of politician is likely to benefit from a system in which people who cannot possibly understand their responsibility are allowed and encouraged to register and vote without question.”

Today, it is acceptable to propose an electrified fence on the Mexican border to kill “illegal aliens,” it is acceptable to suppress voting rights in a redux of Jim Crow laws, and it is acceptable to call for taxing people too poor to owe income tax while, at the same time, cutting the taxes of billionaires. All of these “proposals” are part of a larger effort to restore the balance of power to the way it was one hundred years ago. Undoubtedly it was that nostalgic longing for social control over a long list of people who should be suppressed that led to the passage of an amendment in May 2012 to deny the right to marry to gay couples. It is unlikely that the people who voted to (unconstitutionally) deny an inalienable right to a certain segment of the population actually know any gays or lesbians; they are voting against the future. But they were also voting in the face of increasing legal opposition to such oppression. On May 31, less than a month later, the federal court of appeals in Boston declared the Defense of Marriage Act to be unconstitutional. The constitutionality of homophobia, a favorite bugaboo of Jesse Helms, will be decided by the Supreme Court sometime this year.

Christensen ends his book with a description of the rise and fall of John Edwards, just acquitted of campaign finance misdeeds.

“There is a temptation to see Edwards as a tragic Greek figure like Icarus, who flew too close to the sun. Unquestionably, he was a man of immense political talents, but his vaunted self-discipline wilted under the pressure cooker of big-time politics and he lost his grounding.”

But on a more upbeat note, the author recounts an attempt to redress the 1898 race riot in Wilmington:

“In 2000 the state legislature created a commission to investigate the insurrection—patterned after Florida’s inquiry into the 1923 Rosewood Massacre and Oklahoma’s investigation into the Tulsa Race Riot of 1921. The commission’s final report, issued in 2006, recommended greater efforts to educate the public about the violence, compensation to the heirs of victims who can prove a loss, creation of incentives to help Wilmington areas damaged by the violence, and efforts by newspapers to distribute the report and acknowledge their own role. In 2007 the Democratic Party apologized for its role in the white supremacy campaign.”

It is always assumed that California, particularly Los Angeles, is the predictor of things to come. But The Paradox of Tar Heel Politics makes a disturbing case that North Carolina, a small state caught between the past and present, is also a role model for how America deals with social change—one step forward, two steps back. Despite the one step forward, the legacy of Jesse Helms lives on in a North Carolina that is still run by a well-funded political machine. Christensen’s book perhaps could not continue to the present day, but North Carolina is now owned and operated by the “Knight of the Right,” James Arthur Pope, profiled in 2011 by the formidable Jane Mayer in the New Yorker article “State for Sale.” I quote her at some length to make clear the parallels between the foundation that Helms laid and today’s political tactics:

Yet Pope’s triumph in 2010 was sweeping. According to an analysis by the Institute for Southern Studies, of the twenty-two legislative races targeted by him, his family, and their organizations, the Republicans won eighteen, placing both chambers of the General Assembly firmly under Republican majorities for the first time since 1870. North Carolina’s Democrats in Congress hung on to power, but those in the state legislature, where Pope had focused his spending, were routed.

The institute also found that three-quarters of the spending by independent groups in North Carolina’s 2010 state races came from accounts linked to Pope. The total amount that Pope, his family, and groups backed by him spent on the twenty-two races was $2.2 million—not that much, by national standards, but enough to exert crucial influence within the confines of one state. For example, as Gillespie had hoped, the REDMAP strategy worked: the Republicans in North Carolina’s General Assembly have redrafted congressional-district boundaries with an eye toward partisan advantage.

Experts predict that, next fall, the Republicans will likely take over at least four seats currently held by Democrats in the House of Representatives, helping the Party expand its majority in Congress. Meanwhile, the Republican leadership in the North Carolina General Assembly is raising issues that are sure to galvanize the conservative vote in the 2012 Presidential race, such as a constitutional ban on gay marriage.

Republican state legislators have also been devising new rules that, according to critics, are intended to suppress Democratic turnout in the state, such as limiting early voting and requiring voters to display government-issued photo I.D.s. College students, minorities, and the poor, all of whom tend to vote Democratic, will likely be most disadvantaged. Obama carried North Carolina by only fourteen thousand votes and, many analysts say, must carry it again to win in 2012, so turnout could be a decisive factor. Paul Shumaker, a Republican political consultant, says, “Art’s done a good job of changing the balance in the state.”


And this state—North Carolina—is the site of the 2012 Democratic Party Convention.  Stay tuned.

Dr. Jeanne S. M. Willette

The Arts Blogger

Disintegration: The Splintering of Black America, by Eugene Robinson

DISINTEGRATION

Affirmative Action has been an unqualified success. A legacy of the Civil Rights Movement, Affirmative Action forced employers to give “preferential” treatment to those who had been discriminated against in the job market. For hundreds of years—or ever since the dawn of society—certain elements of society have been singled out and given privileges in the job market. For the most part, hiring has always benefited the male and excluded the female from all desirable occupations and from most paying jobs. In America, people of color joined women in the ranks of the historically discriminated against. But then came a series of Supreme Court decisions and laws passed over more than a decade, starting with the 1954 Brown v. Board of Education and ending in 1967, when President Lyndon Baines Johnson issued an Executive Order extending President John F. Kennedy’s 1961 order to include women. And with these decisions, laws, and orders, Affirmative Action began to transform American society.

There is no way to go back in time and measure the loss that gender and racial prejudice caused to American society, but one gets a sense of the magnitude when one compares this country as it existed in 1960 to the way it is today in 2012. What was lost, thrown away, and denied for generations is incomprehensible. One can only grieve for the lives lost and the contributions never realized. Thanks to Affirmative Action, women and people of color have risen from being excluded and oppressed to being leaders in business and politics and have become powerful voices and presences in society. Eugene Robinson, author of Disintegration: The Splintering of Black America, notes that

“The biggest beneficiaries of affirmative action over the past four decades have been women—mostly white women—who occupy a place in the workforce and the academy that previous generations could not have imagined. (When the feminist revolution came, black women already worked for a living.) Second, in terms of gains, have been middle-class African Americans.”

The achievements of these people, who just needed to be “affirmed” in the same way that the white male had always been affirmed, have been remarkable. Even more striking, the advances were made within the space of one generation. In the 1950s it was “common knowledge” that Blacks were incapable of…fill in the blanks…and women were unable to do…fill in the blanks. Over half the population of America was systematically stigmatized on the basis of no evidence whatsoever. Given that prejudice is often internalized, the success of women and people of color is all the more remarkable in that each and every individual has had to fight discrimination both internally and externally. There is no doubt that few of these individuals could have acquired an equal education or a well-paying job or a decent home without affirmative action. The fact that women are still routinely paid a fraction of what men earn, and the continued complaints about Affirmative Action, indicate that, if the federal government had not intervened, the white male would still dominate and discrimination would go unchallenged.

In a culture where the normal political processes no longer function and governments at all levels seem clogged and dysfunctional, it is important to take the time to measure the impact of social policies intended to bring about economic, social, and political equality. Pulitzer Prize-winning journalist Eugene Robinson, who writes for The Washington Post, has set out to assess the progress of the African American community since the Civil Rights movement. Written in the wake of Katrina and the shocking sight of the dead floating in the flood waters, Disintegration describes what the author has designated as four categories among African Americans. The first and most familiar, thanks to Bill Cosby, is the Mainstream, the middle-class, upwardly mobile group. The second, equally well known thanks to popular culture and politics, is the Transcendent: the Oprahs, the Obamas, the Tigers. The third, less visible, is what the author calls the Emergent: the recent African and Caribbean immigrants. And the last category is what America saw on television in the summer of 2005: the Abandoned.

The African American citizens of New Orleans who had been left behind were caught up in one of the most horrific hurricanes of the century. These helpless people had been abandoned in more ways than one—it wasn’t just that the buses to take them to high ground never came; it was also that somehow the Civil Rights Movement had not been able to lift them out of poverty. Robinson dissects the reasons why some African Americans succeeded and some failed, continue to fail, and, even worse, will probably continue to fail. He stresses that the “disintegration” of the African American community refers to the splintering of the once solid group into factions in terms of income, class, and historical memory. As these elements move further and further away from one another, the result is an increased diversity in which the term “African American” means less and less or, to be more precise, needs to be rethought.

Robinson undertakes a task that is extremely difficult. On one hand, there is a sizable portion of America that automatically responds to any African American as the Other and reflexively joins together in an atavistic racial solidarity, whether to establish voter suppression laws or to defend the killer of a teenage boy carrying nothing more than a bag of Skittles, an iced tea, and a cell phone. On the other hand, the African American community is losing the solidarity that enabled its very survival during the dark centuries of slavery and segregation. In addition, Robinson points out, this community has become assimilated into the mainstream. As he stated, “…black American experience is nothing more or less than an integral and necessary component of the American experience.” Indeed, much of what we define as “American” comes from Black culture—jazz, rock ‘n’ roll, fried chicken. Robinson writes:

“MacArthur Foundation “genius” grant winner Charles Johnson published an article in The American Scholar titled The End of the Black American Narrative. He posited that a ‘unique black American narrative, which emphasizes the experience of victimization, is quietly in the background of every conversation we have about black people, even when it is not fully articulated or expressed. It is our starting point, our agreed-upon premise, our most important presupposition for dialogues about black America.’ This narrative is based on ‘group victimization,’ Johnson writes, and it is obsolete; it blinds us to ‘the inevitability of change’—and the fact of change.”

While Katrina proved that there are numerous African Americans who are victimized as a group, it would seem that they are also the remnants of a tragic legacy of generational disadvantage, compared to the other groups that managed to escape the “victim narrative.” Robinson begins this narrative not with slavery but with the end of Reconstruction. As I have pointed out elsewhere, the gains made by the former slaves after the Civil War were astonishing, which makes it all the more bitter that all the hard work and all the accomplishments were taken away by

“…the virtual re-enslavement of African Americans and a return to what racists like Grady considered the “natural” order of things. Nowhere was this bitter pill more difficult for black people to swallow than in Atlanta, where the former slaves and their descendants had come so far. There, a critical mass of black ambition had ignited what seemed an unstoppable reaction. Black educational institutions such as Atlanta University and Morehouse College were producing an educated elite. Black businesses, while still small in relative terms, were expanding and producing real economic benefits for the whole African American community. The grand project of black uplift looked so promising; now it was being snuffed out. In Atlanta, which was the intellectual center of black America, prominent thinkers waged a vital debate: What could black people do about this brutal campaign to kill the black American dream?”

To take away not just the dream but also hope meant that the bitterly disappointed African Americans had to be crushed through a reign of terror carried out by the dominant white population. The memory of the remarkable achievements of the post-slavery decades had to be exterminated, wiped from memories and hearts, and the hard lessons of supposedly inborn and innate inferiority had to be internalized. The fact that African Americans daily evidenced abilities equal to those of whites was apparently particularly galling, and what Robinson calls “re-enslavement” was enforced by public lynchings and brutal Jim Crow laws. Any rumor of any infraction of the elaborate system of second-class (non)citizenship would draw instant retaliation. Robinson gives a frightening account of a “race riot” in Atlanta—one of many during the first half of the twentieth century—and notes that the term referred to whites rioting against Blacks and their property. He writes of the aftermath,

“The full psychological impact of the Atlanta riot may be incalculable, but one specific result is clear. Many whites—even those who disapproved of mob violence, lynching, and the terrorism of the Ku Klux Klan—were deeply shaken by the many instances during the melee in which blacks displayed the will and the means to fight back. Segregationists pointed to the resistance as proof that they were right—that blacks had to be kept down, had to be kept in their place. Measures to deny black citizens the vote throughout the South were perfected. Public accommodations were labeled whites only and blacks only; merchants began requiring black patrons to enter through the back door. This whole blueprint for the New South was codified into law as a way of delineating two ostensibly ‘separate but equal’ societies. Black Atlanta was effectively walled off from the rest of the city, left to make its own way in the world. The long, dark night of Jim Crow segregation had fallen.”

By the beginning of the twentieth century, the Great Migration sent waves and waves of African Americans to northern cities, and the South lost the best and the brightest, those most able to survive the wrenching sacrifices of abandoning friends, families, and homeland to start anew in alien territories. But the diaspora of African American culture enabled the next generations to enter the Mainstream and allowed some to become Transcendent. Not that the Northern territories were hospitable and welcoming. As Robinson stated, “It’s true that racial segregation in the South, enforced by law and terror, wasn’t the same as racial segregation in the North and West, which was often enforced by housing covenants but also had to do with custom and clan.” Recalling the solidarity within the Black community when he was growing up, he also noted, “We were all black, and to be black was to live under assault.”

Robinson compared the mood within the African American community before and after Jim Crow—optimism became pessimism and resignation.  He writes of the

“…enormous deficits that newly freed blacks faced. Without assets or education they had to start from scratch, but during Reconstruction they made rapid gains. The problem was that those gains were promptly and often brutally taken away by Southern officials when Reconstruction was abruptly halted. This betrayal was committed with the acquiescence of the federal government—which was more interested in reaching an accommodation with the South…”

In the South, African Americans lived under a regime of terror; in the North, African Americans had hope and possibilities, but the optimism was replaced with the need to survive and make the best of the new opportunities. He discusses how the deep despair and rage lying just beneath the surface broke out after the assassination of Martin Luther King. It is no wonder that, sixty years after migrating from the South to the Promised Land, the community would react violently. It is at this point, in the spring of 1968, that “race riot” became linked to Blacks. Robinson writes,

“The King assassination was too much to bear. It was not just a murder but a taking—the theft of our leader, our future, our reason for continuing to hope that America was finally ready to accept us as true Americans. The paroxysm of violence that followed was deliberately destructive: They take from us, we take from them. In the end, of course, we took from ourselves. The self-destructive nature of the 1968 riots was evident to all, even as the mayhem was unfolding.”

Although Robinson does not note the link, this self-destructive act of internalized self-loathing explains the intensity of the hopes projected onto Barack Obama. Only when one understands the history of slavery, segregation, discrimination, prejudice, and terror endured day after day, century after century, does it become clear why the election of a Black President felt like the Second Coming. But Robinson points out that the 1968 “race riots” were the final act, punctuating these centuries of injustice like an exclamation point. Thanks to the Fair Housing Act, the African Americans who could escape the confines of the ghettos became part of a second Migration. As he reports, some managed to get out and take refuge in the suburbs while others were left behind in the slums.

Robinson makes an important and little noted point, that the White Flight was also a Black Flight that, as he said, “split” the African American community once again. First, those who could not or would not leave the South were left behind, and then, second, another group, once again, “did not make it.” With these migrations came increasing Black-White contact that would, over time, produce another category—the bi-racial individual. Most African Americans are distinguished from “Africans” by the presence of white blood, white ancestors, usually due to the slave masters raping the female slaves. But for centuries these somewhat whitened people were forced to remain behind the color line, due to the “one drop” rule.  Robinson points out that, unlike other nations, such as Brazil, America was racially rigid and enforced its codes, imagining that somehow “racial purity” could be maintained.

However, in 1967 interracial marriage became legal nationwide, and by the early twenty-first century the young generation thinks nothing of racial mixing. Intermarriage encourages, even necessitates, assimilation into a larger community that becomes a third alternative characterized by tolerance and acceptance. But by and large the progeny of these unions are, like Barack Obama, considered “Black,” because, as Robinson points out, the culture will not allow them to be anything else. Presumably, due to ties to the white community, this group is considered Mainstream and, thanks to federal laws, can live anywhere they want, go to school anywhere they want, and are guaranteed equal opportunity in any job to which they aspire. These gains are the result of sixty years of waiting for the door to open again.

If Mainstream means “assimilated” out of the Black community and into the White community, then the African American Mainstream differs in significant ways. First, this affluent and successful group has a large number of single women, living alone or raising their children alone. Uninterested in dating outside their race, they are also disinclined to spend their time hunting for suitable African American husbands. Robinson muses over whether or not this situation of female independence is the result of a narrative of the Matriarchy, but it should be said that, regardless of the historical roots, the aloneness of these women is but part of a larger trend: the majority of adults in America now live “solo.” The other interesting aspect of this Mainstream group is the loss of deference to adult authority, from parents to community elders. More and more, African American teenagers are acting like their white counterparts—typical rebellious teenagers. Unlike their parents and grandparents, they have no memory of the hard times when family was all there was.

The “disintegration” of the Mainstream community began in Robinson’s own lifetime. He provides the reader with interesting sections on the solidarity of the professional, educated Black community, held together by links of acquaintance and old school ties and through a network of fraternities and sororities. Robinson states,

“They were established, beginning about a hundred years ago, to provide mutual support and encouragement among blacks who knew that when they graduated from college they would be taking their hard-won learning into a cruel, openly racist world. Obviously the world today is a different place. But the black fraternities and sororities have endured—and they have remained black.” He added, “[Sigma] Pi Phi, known colloquially as the Boule, from an archaic Greek word meaning ‘representative assembly.’ The Boule (pronounced boo-lay) is for high-achieving black professionals, and its reach is nationwide.”

One wonders if the new generation of the Mainstream will continue to join these societies, for, as Robinson observes, “My generation, like those that came before, was forged in an all-black context amid a hostile society.”

Since 1990, Robinson notes, African immigration to America, still thought of as the Promised Land, has exploded. The result was a new community, composed of people of color who had no history of victimhood and slavery: the Emergent group.

“Immigrants from the Caribbean began to arrive in larger numbers after passage of the Hart-Celler Act of 1965. The law loosened restrictions on immigration based on geography—a system that favored Europeans over nonwhites—and shifted the emphasis to professional qualifications and family reunification. Subsequent measures in 1976 and 1980 made it easier for immigrants to come to the United States as students or refugees; an attempt at comprehensive immigration reform in 1986 allowed many undocumented immigrants to apply for legal status, including 135,000 from the Caribbean and Africa. For Africans, the key impetus was passage of the Immigration Act of 1990, which increased the number of immigrants admitted on the basis of their skills.”

Allowed to enter, as Robinson writes, on the basis of their skills and education, these immigrants had many advantages compared to the “local” African Americans. Although he does not mention the relative lack of prejudice against them, in fact the African African has had a somewhat easier path: for some reason, white Americans consider such individuals to have a higher status than those who are descended from slaves. Robinson notes that “Today, Africans coming here voluntarily on wide-body jets are the best-educated immigrants in the United States—better-educated than Asians, Europeans, Latin Americans, or any other regional group.” Indeed, he added, “…wherever African immigrants had settled in substantial numbers: Their children were performing so well in school that they were overrepresented, relative to their overall numbers, in the lists of overachievers.”

The author attributes this outstanding success to the mindset of optimism.  I would also add that the psychology of the African immigrants is somewhat akin to that of the African Americans who migrated northward. Robinson writes,

“Most immigrants who surmount all the obstacles and make it to the United States are accustomed to success. Whatever degree of political and economic dysfunction their home countries might be suffering, the immigrants managed to master or escape the local context. By virtue of their presence, they are among the winners in their societies. Optimism comes easily, and with it a certain sense of entitlement. All or some of this gets passed down to the next generation.”

One could say the same of the Great Migration, if the word “hope” is substituted for “optimism.” That said, Robinson makes an interesting point: while the first generation of African immigrants was immune to the “stereotype effect,” or the internalization of inferiority, the second generation was more susceptible to the narrative of certain failure. He also makes another important distinction between African Americans on the one hand and African Africans and Caribbean Africans on the other—the latter know their ancestry and have retained their heritages. In contrast, as part of the process of conquest and enslavement in the American South, entire cultures from many parts of Africa were erased. He recounts,

“When our ancestors were brought here, slave owners waged a deliberate, thorough, and successful campaign to erase all traces of our prior cultures. There were, for example, many slaves who left Africa as Muslims; Islam had been established on the continent for centuries by the time the Americas were discovered and the Atlantic slave trade began. Once in the Americas, Muslims were given no leeway to practice their faith. Christianity was the only religious option, and it was all but mandatory.”

In contrast, the Africans who were taken to destinations with a Catholic culture were allowed, or were able, to retain elements of their heritage. He writes,

“In Cuba and Brazil, they managed to fuse their religious tradition with Roman Catholicism in a way that was Catholic enough to satisfy the slave owners, but Yoruba enough to allow the slaves a sense of connection with their ancestors. These syncretic faiths came to be known as Santeria, candomblé, macumba—there are many names and many distinctions—and they basically associate specific Yoruba demigods, called orishas in Cuba and the other Spanish-speaking slave-owning islands, with specific Catholic saints.”

In conclusion, Robinson notes that no amount of DNA research can give an African American more than the vaguest idea of his or her ancestry. “…our ancestors’ history was obliterated,” he states. “In that sense, we really have no idea who we are.” One of the central theses of this book is how the lack of ancestral knowledge was, for a long time, overcome through a shared history of slavery and deprivation and through group solidarity. But this common identity is “disintegrating” as the community moves away from its roots, which were domination and oppression, towards a new upward mobility. Here is where the African American group identity splits apart into two extreme segments. If Eugene Robinson places himself within the Mainstream, which socially and economically is linked to the Emergent group, then on either side are the Transcendent and the Abandoned.

Like the fraternities and the sororities, the Transcendent are bound together through ties of friendship and circumstance. Robinson uses the President as a prime example of a Transcendent, that is, an African American who is beyond the reach of the narrative of race. He illustrates the network of the Transcendent by writing that

“…the first African American president, confronting the direst financial crisis since the Great Depression, was able to summon an experienced African American CEO (Richard Parsons) out of retirement to oversee troubled Citigroup. It meant that when the president went to work on his campaign promise to bring the treatment of terrorism suspects back into line with civilized norms, he could task an African American attorney general (Eric Holder) with the job. It meant that as President Obama decided on diplomatic steps he could take to rid the United States of its Crazy Cowboy image in the world and chart a new course, he could pick up the phone and call two African American former secretaries of state (Colin Powell and Condoleezza Rice).”

However, no African American is forever free of race. The Transcendent Obamas are a case in point. Robinson makes the good point that the couple has de-raced its public advocacy by taking up causes that are universal—obesity and health care. But he also writes of the suspicion of the other Transcendents towards the President—with his white mother and his privileged position in the white community, is he “Black” enough? For a significant segment of the white community, Obama is too Black. Although Robinson does not go into the ways in which the President has been treated, the disrespect shown to him by his Republican and conservative opponents can be explained in no fashion other than open racism. I have something of an issue with Robinson when he writes, “I dwell on Obama’s candidacy because it was such a Rorschach test for the Transcendent class.” Unlike some Transcendents, such as Oprah, Will Smith, and Sean Combs, Obama is under constant scrutiny and attack. He has lived the first years of his Presidency like Jackie Robinson in Ebbets Field.

Those who find the preternaturally cool aloofness of Barack Obama irritating may not be aware of what the “first Black” must endure. When Branch Rickey hired Jackie Robinson to play ball for the Brooklyn Dodgers, he knew that the athlete would be under constant siege. The exchange and bargain between the two men is famous:

Rickey: “I know you’re a good ballplayer. What I don’t know is whether you have the guts.”

Robinson: “Mr. Rickey, are you looking for a Negro who is afraid to fight back?”

Rickey, exploding: “Robinson, I’m looking for a ballplayer with guts enough not to fight back.”

Far from Transcending, Obama cannot fight back, and he cannot speak up with the same freedom that a hip-hop star has. That said, Robinson makes a good point: Obama is part of a generation that has little or no memory or experience of segregation. Unlike Jackie Robinson, who was the grandson of a slave and the son of a sharecropper who migrated to California, Obama was not raised with a narrative of being a second-class citizen. Indeed, Robinson states,

“These young Transcendents, generally in their forties, are indeed too young to have lived through Jim Crow. They are not too young to know what it was, and certainly not too young to believe as passionately as their elders in the need to keep fighting to advance the unfinished project of black uplift. But there is a difference between knowing what it is like to face racism and discrimination, which this next-generation black elite does, and knowing what it is like to be consigned by law and police authority to second-class citizenship, which it does not. In that sense, the post-segregation Transcendents carry less baggage through life.”

It remains to be seen whether Obama’s restraint is due to incomprehension at how he is being treated, after being schooled in non-racist environments by white people—Occidental College, Columbia, and Harvard—or to a deep knowledge, like Jackie Robinson’s, of how carefully he must tread. Robinson pictures Obama as an Insider, and he is, in the parts of America that have become “post-racial.” But in the Red States, he is not only an Outsider but an Interloper. In an interview with the author, the President talks of his awareness of the increased opportunities, but he is also aware of the dark history behind the achievements: “I do think it is important for the African American community, in its diversity, to stay true to one core aspect of the African American experience, which is we know what it’s like to be on the outside, we know what it’s like to be discriminated against, or at least to have family members who have been discriminated against. And if we ever lose that, then I think we’re in trouble. Then I think we’ve lost our way.”

On the other end of the spectrum are the Abandoned. In contrast to the high achievers, they are invisible, tucked away in slums and fringe neighborhoods or incarcerated in jails. Pushed out of gentrified neighborhoods, these individuals are caught in a spiral from which there is no recovery. Despite his horror at the ugly spectacle of human suffering in the aftermath of Katrina, Robinson regards the Abandoned with a despairing realism and a surprisingly conservative set of prescriptions. The Abandoned are those who have been left behind, weighted down by the preceding generations’ inability to escape poverty. Whether they were Abandoned in the South during the Great Migration—almost certainly the cause of the poverty of the Katrina victims—or in the inner cities of the North, there is little hope for these people.

Commentators who had no understanding of the culture of New Orleans asked why the Black community had not evacuated.  But these are people who had no means of transportation and who were unwilling to leave their homes.  As Robinson discovered,

“An unusually high percentage of poor African Americans in New Orleans own their homes rather than rent, and some were determined to protect their property against looting. The parts of the Lower Ninth Ward that are closest to the Mississippi sit on relatively high ground, and those streets had never flooded before Katrina…”

The lack of transportation meant that the African Americans of the Ninth Ward and other poor neighborhoods could not follow the jobs and industries to the suburbs. The only jobs available were in the tourist industry, which favored the kind of talent that could serve and entertain the visitors. Robinson explains the tragic and unintended consequences of the Civil Rights Movement on those who would be left behind:

“At the same time that jobs were moving out of the cities, African Americans were winning unprecedented rights and freedoms. Those who were best prepared to take advantage of the new opportunities moved away from places like the Lower Ninth, leaving the least-prepared behind. The 1960s riots hastened an exodus that had already begun. As the black Mainstream made for the exit, what had been economically diverse African American neighborhoods became uniformly poor.”

The gaze of the television cameras on New Orleans allowed America to see what had become of those who had been Abandoned.  But the same story—without hurricanes—could be told in many other cities, such as Detroit and Baltimore. Lack of education, lack of transportation, lack of self-esteem, communities divided between rootless males and female-headed families, hopeless anger and self-defeating behavior are generational pathologies and survival strategies.  Robinson looks with empathy upon these communities where lives come and go, lived out in a flat line of neglect. “The web of restraints that keeps Abandoned black Americans from escaping into the middle class has been examined from every angle, described in great detail, and lamented ad infinitum. But the web continues to tighten.” He concludes, “It begins in the womb.”

The African American child born to an Abandoned mother has almost no chance in life. His or her plight is all the more stark, given the astonishing progress of the Mainstream. As Robinson states,

“As the Mainstream have risen, the Abandoned have fallen. To be black, poor, and uneducated in America is, arguably, a more desperate and intractable predicament today than it was forty or fifty years ago…for all intents and purposes, Mainstream African Americans have arrived. The Abandoned, however, have not. And the question is whether they ever will…Increasingly, between the Abandoned and the rest of black America, there is a failure to communicate, much less comprehend…Abandoned black America—increasingly isolated from the Mainstream—develop a cultural ecosystem that makes sense internally but nowhere else.” 

Robinson explains that young black females are well aware of the facts of life and of condoms but deliberately get pregnant so that they can establish their own households and lives and have someone to love them. While Robinson approves of the independent single Mainstream mother, he understands the consequences of this pattern of single motherhood for a young girl without financial resources. He recommends a conservative approach—a two-parent family—without explaining how the young males will be educated to take on such a responsibility. The young man is caught up in his own needs. If the girl needs to be loved, the boy needs to be respected. As Robinson explains,

“For young people especially, material possessions, such as the most fashionable brand-name clothing and jewelry, are important because they command respect. The same is true in Mainstream society, of course, but the stakes are higher in communities where people struggle to afford necessities, let alone luxuries. Any teenager who obtains and flaunts high-status items—the right North Face jacket, for example, or the right Timberland boots—has to be willing and able to defend them. Taking such accoutrements by intimidation or force from the owner is the kind of bold action that can enhance another young man’s status among his peers, and in turn provide inoculation against those who might be tempted to try something like that with him.”

One can understand how, to those who are Abandoned and who have no place in Mainstream society, territory and personal possessions would be important, worth fighting and dying for. The Abandoned have nothing else.  According to Robinson,

“…the unwritten code of insult, umbrage, and retribution that holds sway in Abandoned communities—enforced by a few, but followed by many—plays an enormously destructive role by choking off ambition and creating an atmosphere of randomness and uncertainty. Those capable of code-switching have a chance of leaping the chasm—those who understand, for example, that while “acting white” in school is seen as a sign of softness and weakness, it is possible to avoid showing vulnerability in public and at the same time earn the kind of grades that make it possible to go to college. Those who cannot live in both worlds, who do not understand both sets of values, are all but lost. The essential, and tragic, problem is that “keeping it real”—adhering to the code—requires either engaging in all manner of self-defeating behavior or finding elaborate subterfuges to avoid shooting oneself in the foot. The warping of values in Abandoned black America means that being successful requires being duplicitous—being literally two-faced. And that is never an easy way to live.”

Robinson ends his book by presenting solutions to the seemingly intractable problems faced by the Abandoned. Regardless of the good intentions of the Mainstream or the Transcendent to help the Abandoned, these are individual efforts, well-meaning but hardly adequate to the enormity of the task. He insists that Affirmative Action should continue but be targeted at the Abandoned, those in real need. He also suggests that the richest nation in the world can well afford a Marshall Plan for the inner cities. In a surprising move, Robinson suggests something akin to the badly received remark of Barbara Bush about the Katrina victims:

“What I’m hearing, which is sort of scary, is they all want to stay in Texas. Everyone is so overwhelmed by the hospitality. And so many of the people in the arena here, you know, were underprivileged anyway, so this is working very well for them.”

Robinson suggests that the gentrification or the taking of inner city territory from the Abandoned should be continued. As he states,

“…gentrification breaks up tough knots of Abandoned poverty and scatters people to the winds, including to other areas that might be just as poor but are more racially integrated, the process actually can be [beneficial] to the displaced—with one big caveat. The caveat is that the displaced cannot simply be forced into another all-black ghetto—one that is more remote, with even fewer amenities and services. This is largely what has happened in Washington and some other cities, and the result is that the problem just gets moved, not solved. By far the best solution—and, yes, it costs money—is to preserve or create low-income housing that allows the Abandoned to stay in place while the neighborhood gentrifies around them.”

I am not sure the author has thought through the consequences of such a contrast between the Abandoned and the Mainstream, nor is it clear how the “tough knots” are broken up if they are only transferred to a high-rise. In addition, as the implosion of the Pruitt-Igoe projects in St. Louis suggested, poor people do not take kindly to being herded into containment communities. As Alexander von Hoffman of Harvard University wrote of the fate of the 1956 high-rise development,

“Only a few years later, disrepair, vandalism, and crime plagued Pruitt-Igoe. The project’s recreational galleries and skip-stop elevators, once heralded as architectural innovations, had become nuisances and danger zones. Large numbers of vacancies indicated that even poor people preferred to live anywhere but Pruitt-Igoe. In 1972, after spending more than $5 million in vain to cure the problems at Pruitt-Igoe, the St. Louis Housing Authority, in a highly publicized event, demolished three of the high-rise buildings. A year later, in concert with the U.S. Department of Housing and Urban Development, it declared Pruitt-Igoe unsalvageable and razed the remaining buildings.”

Whatever the problems with Robinson’s solutions, he asks the African American community to take responsibility for the Abandoned. Without adding that the rest of America is loath to spend money on a cause that seems intractable, he suggests that the interests of the

“Mainstream and those of the Abandoned coincide in the long run; ultimately, the goal is for the Abandoned to become Mainstream. But those interests diverge along the way. Two obvious goals for African Americans are consolidating decades of impressive gains into solid, multigenerational wealth; and doing whatever it takes to uplift the millions still trapped in desperate, multigenerational poverty…Transcendent CEOs can’t rescue the Abandoned, but they can serve as localized engines of economic development for the Mainstream by making certain that their companies actually practice diversity rather than just preach it. If they ensure that qualified and capable African Americans are represented among their executive teams, suppliers, and outside bankers, lawyers, and accountants, they will leave behind a far greater legacy than whatever the final numbers say on the balance sheet.”

Although Robinson was writing in 2010, he mentioned only in passing the unfathomable sums of taxpayer dollars shoveled into the troughs of Wall Street to “rescue” perfectly able-bodied white males, and noted, also in passing, that Americans were willing to send money to Iraq and Afghanistan but not to the Abandoned in their own country. I wish he had made more of this comparison, because surely part of the persistent poverty among the Abandoned is the fact that, as he points out in his analysis of the film Precious, Americans believe that if you are poor, you brought this condition upon yourself and “you deserve it.” An updated version of this book might add the numerous comments made in the last two years by members of Congress who excoriate the poor and extol the rich, for the purpose of taking money from those in need in order to give it to those in un-need. This lack of compassion, this refusal of responsibility, and this deliberate unraveling of the social and moral fabric of America is the real definition of Disintegration.

Dr. Jeanne S. M. Willette

The Arts Blogger

The Shock Doctrine (2008) by Naomi Klein

THE SHOCK DOCTRINE: THE RISE OF DISASTER CAPITALISM

2008

BY NAOMI KLEIN

Naomi Klein is my hero. She is beautiful and brilliant and can look at the sick world in which we are trying to exist, diagnose it, and give a prognosis for the future. If you want to understand how we got from there—middle-class security and prosperity—to here—the death of the middle class—then read Naomi Klein. Start with No Logo and then continue to The Shock Doctrine, and you will come away feeling disgusted, discouraged, and sadly enlightened. As Naomi Klein said this morning on the MSNBC program Up w/ Chris Hayes, “The system is broken.” How true.

The Shock Doctrine is a harrowing account of how a particular economic theory, popularized by economist Milton Friedman and spread by his Ayn Rand-dazed acolytes to many helpless nations, has created vast wealth for corporations and vast misery for the people who live in these countries. Briefly and perhaps crudely, one can explain this economic doctrine as “free market capitalism,” or the myth of the free market, which translates in reality into corporate monopolies over the lives of people—not just their economic lives, as in what kind of products they are forced to buy at non-competitive prices—but their social and political lives as well.

The Shock Doctrine is a phrase coined by Klein referring to the Milton Friedman doctrine of crisis: private business interests should take advantage of a public or social crisis in a nation and force radical change quickly, set these changes in place before the population can recover, and then sit back and reap the economic rewards. This cultural monopoly imposed by corporate interests must be all-encompassing because the political system needs to be co-opted in order to create a machine that delivers money to the business interests. Government money, otherwise known as taxes paid by the citizens that should be returned to the people as part of a social contract, is used instead to subsidize the moneyed class and to assist them in making profits without the interference of inconveniences such as financial or environmental regulations.

The Shock Doctrine begins with the reaction of the Bush Administration to the flooding of New Orleans by the epic hurricane Katrina. Although Klein is not making a new observation—many other commentators remarked on how quickly the African-American refugees were driven out of state, dumped, and abandoned, leaving Louisiana a much whiter state—she analyzes the post-Katrina situation in terms of “disaster capitalism.” This doctrine, which originated with Milton Friedman, urges a conservative government to rush in when a population is in shock, to upend existing structures, and to replace them with private interests in the service of free market capitalism.

Klein remarks upon how quickly the Administration swooped down upon New Orleans and swept away the public school system as efficiently as Katrina had swept through the Ninth Ward. The goal was to whiten the city by not rebuilding the African-American neighborhoods where people who traditionally voted for the Democrats once lived. Lest any of these displaced persons think of returning, their neighborhoods were left unrestored and education was placed economically beyond their means. The replacement for free public education? Charter schools, a privatized mode of education, accountable to no one, not even its customer base, parents and children.

A public system was replaced by a private one: this is what happened to the school system in New Orleans. Instead of having a public system of education that we all pay into because we all benefit from an educated society, the city now has charter schools. For those who are well-to-do, a private school, excuse me, a charter school, can be as expensive and as exclusive (and as segregated) as it wishes, out of reach of government supervision. Such schools can teach what they wish, again with very limited government oversight. Through the back door, separate but equal comes on little cat feet and steals the American dream.

For Milton Friedman, public schools are nothing short of socialism.  As the late guru once stated, “The preservation of freedom is the protective reason for limiting and decentralizing governmental power. But there is also a constructive reason. The great advances of civilization, whether in architecture or painting, in science or in literature, in industry or agriculture, have never come from centralized government.”

That statement is astoundingly ignorant, especially for a university professor. As an art historian, I would like to waft a few names heavenward to Dr. Friedman (if he is in heaven): the Egyptian pyramids, Jacques-Louis David, Richard Arkwright, Wernher von Braun. All these accomplishments, from architecture to art to invention to the “advance” of rocketry, came from centralized governments. I can only suppose that his students were too intimidated to try to inform him of the facts.

However, Friedman, when speaking against public education, asserted,

“…It isn’t the public purpose to build brick schools and have students taught there. The public purpose is to provide education. Think of it this way: If you want to subsidize the production of a product, there are two ways you can do it. You can subsidize the producer or you can subsidize the consumer. In education, we subsidize the producer—the school. If you subsidize the student instead—the consumer—you will have competition. The student could choose the school he attends and that would force schools to improve and to meet the demands of their students.”

Sounds good, but the flaw in the argument is that charter schools actually lower competition and prevent intervention by the “consumers” by limiting the alternatives. With public schools, all the officials, from the governor, the mayor, and the superintendent to the teachers, are accountable to the public, who can elect those who represent them. Neighborhood schools can respond to the needs of the community, while a charter school responds to the desire for profit.

Certainly the profit motive and selfishness, virtues praised by Friedman, are great motivators, but certain public services are public goods paid for by the public and provided by the government, which does not have and should not have a profit motive. Friedman and his followers, called Neoconservatives (those lovely people who drove the American public into the Iraq War, to the lasting profit of contractors), think the government should be run like a business. This neoconservative philosophy is at odds with the founding ideals of the American government, expressed in the Declaration of Independence, the Constitution, and the Bill of Rights.

America is a nation built on the philosophy of the Enlightenment and is, therefore, based upon the Social Contract, an idea drawn from Jean-Jacques Rousseau’s The Social Contract, written in 1762. Rousseau was contemplating the end point of the logic of Enlightenment philosophy, which proposed individual freedom and individual responsibility as opposed to the divine right of kings and queens. If human beings are not governed by a central authority ordained by God, then how are we to govern ourselves? His answer was that people came together freely and gave their consent to govern and to be governed, guided by the foundational idea of mutual respect, mutual rights, and mutual aid.

The American government was not founded on the ideal of the profit motive.

The American government was founded on the ideal of mutual consent.

The problem with the privatization of government services is that privatization removes mutual consent and accountability as it “gets government out of the way.”

Once the government is out of the way, the corporations have free rein over the citizens, who are their captive customers.

The Chicago School, or the economic philosophy of Milton Friedman, thinks about the role of government in terms of not-government or not-governing. In other words, less government means more corporate control and more profits for the wealthy at the top. When the government is shrunk, its withdrawal creates a power vacuum, and the corporations rush in and fill up the open territory. The citizens become consumers without a vote. Neoconservatism is a form of public policy that is set on disenfranchising the public and reshaping society for the benefit of private profit.

Klein begins with an early experiment with the Shock Doctrine in Chile, where Augusto Pinochet overthrew the legitimate government in a coup d’état and was advised in the conduct of his economic policy by Friedman himself. As Klein described, the experiment in Chile would be repeated elsewhere. The formula was simple: find a country in which an event has put the population in a traumatized state, “shock” the people, and seize the system and reshape it to your own ends. According to Klein, Friedman advised Pinochet to implement

“…rapid-fire transformation of the economy—tax cuts, free trade, privatized services, cuts to social spending and deregulation. Eventually, Chileans even saw their public schools replaced with voucher-funded private ones. It was the most extreme capitalist makeover ever attempted anywhere, and it became known as a “Chicago School” revolution, since so many of Pinochet’s economists had studied under Friedman at the University of Chicago. Friedman predicted that the speed, suddenness and scope of the economic shifts would provoke psychological reactions in the public that “facilitate the adjustment.” He coined a phrase for this painful tactic: economic “shock treatment.” In the decades since, whenever governments have imposed sweeping free-market programs, the all-at-once shock treatment, or “shock therapy,” has been the method of choice.”

The United States (the CIA) supported the 1973 coup, but Pinochet quickly revealed himself to be a particularly ugly bedfellow. Nevertheless, the dictator, who wrecked Chile and killed and tortured its people, was preferable to any socialist politician, such as Salvador Allende, who had nationalized industry. As Klein pointed out, the citizens are always opposed to the economic theories of the Chicago School, because these theories do not benefit them, only the corporations. Indeed, when Pinochet died in 2006, the Chilean government probed the financial corruption of almost thirty years of misrule. According to The Washington Post, Pinochet, though dead, had amassed ten tons of gold, or $160 million.

Imagine what $160 million could have done for the people of Chile.

Although Klein goes through a number of case studies of the Chicago School intervening in foreign nations with dictators eager to emulate Pinochet, she concentrates on the “event” that unleashed our own Shock Doctrine within our nation: September 11th. It is perhaps a coincidence that Pinochet seized power on September 11th, 1973, but his coup was a dress rehearsal for the immediate reaction of the Chicago School neo-conservatives embedded in the Bush Administration. After 9/11, the astonishing leap from Afghanistan to Iraq may have surprised those of logical mind, but it was in fact a long-planned campaign into Iraq, site of massive oil fields. Klein states,

The Bush team seized the moment of collective vertigo with chilling speed—not, as some have claimed, because the administration deviously plotted the crisis but because the key figures of the administration, veterans of earlier disaster capitalism experiments in Latin America and Eastern Europe, were part of a movement that prays for crisis the way drought-struck farmers pray for rain, and the way Christian-Zionist end-timers pray for the Rapture. When the long-awaited disaster strikes, they know instantly that their moment has come at last.

Klein correctly points out that the doctrines of the Chicago School had never been popular with or desired by the American people. That said, many of the ideas and principles were implemented by the Reagan Administration’s program of what George H. W. Bush called “voodoo economics,” also known as the “trickle down theory.” The concept that, if taxes were cut for the wealthy, then the benefits would trickle down to the lower classes was disproved by two facts: first, the incomes of the middle class stopped rising (and have stayed static to this day), and second, Reagan had to raise taxes eleven times to offset a growing deficit. However, the great success of Ronald Reagan was that he introduced the idea that “government is the problem.”

If government was “the problem” during the Reagan Administration, during the Bush Administration government became “the solution” for enriching corporations. For the first time in the history of America, the nation went to war on a credit card. The nation was urged to shop, not sacrifice, as the government conducted an endless “war on terror.” Except that it was not the government that was waging this war. The “military” or the “troops” in the field that the American people heard about were something of a screen for what was really going on in Iraq. As Klein explained it,

“…the Bush administration outsourced, with no public debate, many of the most sensitive and core functions of government—from providing health care to soldiers, to interrogating prisoners, to gathering and “data mining” information on all of us. The role of the government in this unending war is not that of an administrator managing a network of contractors but of a deep-pocketed venture capitalist, both providing its seed money for the complex’s creation and becoming the biggest customer for its new services. To cite just three statistics that show the scope of the transformation, in 2003, the U.S. government handed out 3,512 contracts to companies to perform security functions; in the twenty-two-month period ending in August 2006, the Department of Homeland Security had issued more than 115,000 such contracts.”

Furthermore, in the best tradition of the Chicago School, the huge cost increases incurred by privatizing the military and outsourcing fighting to contractors were hidden “off the books” and not put into the deficit until the Obama Administration. As Klein pointed out, while the American people were impoverished by this for-profit war of choice, Halliburton earned a $20 million profit. The Iraq War was an experiment in the large-scale privatization of war, waged by corporate interests and their stockholders. Secretary of Defense Donald Rumsfeld put forward the idea of a small army, which hid the subtext of a large force of private contractors who would fight in “our” name with taxpayer dollars but without accountability. This hidden army was never counted in the number of people who were fighting in Iraq, but it doubled the number of military personnel fighting for American interests there. The result was a ten-year, trillion-dollar war that started with a lie and will end in resignation.

Klein points out that the followers of the Chicago School’s Shock Doctrine go by a number of names: neoconservatives in America, living in so-called “Think Tanks” such as the American Enterprise Institute and the Hoover Institution, and “neoliberals” in Europe, indicating an interest in macroeconomics or in corporate globalization. The author decides upon a more descriptive term,

A more accurate term for a system that erases the boundaries between Big Government and Big Business is not liberal, conservative or capitalist but corporatist. Its main characteristics are huge transfers of public wealth to private hands, often accompanied by exploding debt, an ever-widening chasm between the dazzling rich and the disposable poor and an aggressive nationalism that justifies bottomless spending on security. For those inside the bubble of extreme wealth created by such an arrangement, there can be no more profitable way to organize a society. But because of the obvious drawbacks for the vast majority of the population left outside the bubble, other features of the corporatist state tend to include aggressive surveillance (once again, with government and large corporations trading favors and contracts), mass incarceration, shrinking civil liberties and often, though not always, torture.

By making an analogy to “torture,” Klein explains that the victim/nation is “softened up” through terrible events, which make human beings temporarily defenseless and susceptible to doing whatever it takes to remedy the crisis.  As she says,

That is how the shock doctrine works: the original disaster—the coup, the terrorist attack, the market meltdown, the war, the tsunami, the hurricane—puts the entire population into a state of collective shock. The falling bombs, the bursts of terror, the pounding winds serve to soften up whole societies much as the blaring music and blows in the torture cells soften up prisoners. Like the terrorized prisoner who gives up the names of comrades and renounces his faith, shocked societies often give up things they would otherwise fiercely protect.

The Chicago School, according to Klein, long thought of itself as a school of thought or a philosophy rather than an economic theory. Just as the American military sought a city that had not been bombed upon which to drop the atom bomb, the better to ascertain the results, the Chicago School economists sought a “clean slate” upon which to write their doctrines. These economists imagined that the capitalist system was faultless, endlessly flexible and endlessly self-correcting, and, hence, infallible. This is typical Enlightenment thinking, based upon an idealized model, generated by math, and resting upon a hypothesis.

The problem begins when the elegant model meets the real world. The economic system works only for corporations; the populations hate how they are disenfranchised and become restive. In order to control the experiment, the government must increase surveillance on its own citizens, who are constantly signaling their discontent. The disconnect is caused by a conceptual misfit: the government now works for the benefit of the corporations but masquerades, as in America, as a democracy and allows a charade of elections financed and manipulated by corporations in a vicious circle. Caught in the middle, “We the People” become more and more angry and, eventually, a rebellion ensues to put things right again, as in Chile.

The fact that, while the Shock Doctrine may work, the Chicago School economic ideas do not has not given the Neoconservatives pause. Instead, they simply double down and repeat their assertions, for years, in the face of facts and documentation, all of which point to the contrary. As Klein points out, the Neoconservatives are “purist” thinkers, meaning that they think in theory and feel the need to wipe away any pollutants that sully or interfere with what they think of as the “free market.” One can understand the insistence of the Republican Party that the Environmental Protection Agency prevents jobs from being created by realizing that regulation per se is “impure.” The problem, as many have pointed out, is the logical outcome of such Enlightenment thinking: a stance of “purity” would end regulations totally, we would not be able to drink the water, and the Cuyahoga River would be ablaze once again.

Klein mentions that the Neoconservatives of the Chicago School were in the intellectual wilderness for decades, and, indeed, even today, orthodox economics and mainstream economists point out that the government has to take a role in regulating and directing the economy. Today, as we are mired in a neo-Depression, these economists are calling for Keynesian economic policies to prime the job market and to stimulate the economy. And the neoconservative politicians stand firm for a policy of purity and refuse to help any element of society except the wealthy. Their philosophy is in line with that of Milton Friedman, who decided that the nation went off the rails with the New Deal and created a “welfare state.” For nearly a century, it has been the goal of these anti-Keynesians to dismantle the role of government in society, from social safety nets to regulations that promote public health and safety.

Because of the popularity of the New Deal and its programs and the success of post-war government intervention in building a prosperous middle class through public policies, the “Chicago Boys” had to practice overseas, mostly in South American nations. Despite the fact that some of the students at the University of Chicago protested the corrupt and brutal killing regimes brought into being by Chicago-style politics, Milton Friedman won the Nobel Prize for Economics in 1976, and apparently he never apologized for or agonized over all the horrible injustices done under his policies. As Klein explained it,

This intellectual firewall went up not only because Chicago School economists refused to acknowledge any connection between their policies and the use of terror. Contributing to the problem was the particular way that these acts of terror were framed as narrow “human rights abuses” rather than as tools that served clear political and economic ends. That is partly because the Southern Cone in the seventies was not just a laboratory for a new economic model. It was also a laboratory for a relatively new activist model: the grassroots international human rights movement. That movement unquestionably played a decisive role in forcing an end to the junta’s worst abuses. But by focusing purely on the crimes and not on the reasons behind them, the human rights movement…

Somehow the Chicago School escaped being discredited on moral and ethical grounds, and politicians realized that those economic policies were bad for the people who still cast votes in free nations. Therefore, Milton Friedman was disappointed in the performance of Richard Nixon, who understood that a contented population would reelect him. As James Carville said, “It’s the economy, stupid.” But later politicians would be bolder. Despite the undeniable truth of the terror and torture implemented by the Pinochet regime, free market politicians looked upon his work in Chile with favor. Klein states,

When Friedrich Hayek, patron saint of the Chicago School, returned from a visit to Chile in 1981, he was so impressed by Augusto Pinochet and the Chicago Boys that he sat down and wrote a letter to his friend Margaret Thatcher, prime minister of Britain. He urged her to use the South American country as a model for transforming Britain’s Keynesian economy. Thatcher and Pinochet would later become firm friends, with Thatcher famously visiting the aging general under house arrest in England as he faced charges of genocide, torture and terrorism. The British prime minister was well acquainted with what she called “the remarkable success of the Chilean economy,” describing it as “a striking example of economic reform from which we can learn many lessons.”

Klein studied Margaret Thatcher’s implementation of Milton Friedman’s doctrines, which worked so badly that her position as Prime Minister of Great Britain was saved only by a strange and unnecessary war, the Falklands War of 1982, fought on behalf of fewer than three thousand people and an almost equal number of sheep. Friedman would have preferred an economic crisis, a depression, a currency meltdown, or something like we have today, a global collapse of the economic system. But Margaret Thatcher, the Iron Lady, went to war to cloak her failures. England was a difficult site for Chicago School politics to flourish, and, as Klein continues, the former Soviet Union and China were more successful in following the “purity” of the free market philosophy of Milton Friedman, who unapologetically advised China at the moment of Tiananmen Square. But then Friedman always maintained that the ends justify the means. He said,

A common objection to totalitarian societies is that they regard the end as justifying the means. Taken literally, this objection is clearly illogical. If the end does not justify the means, what does? But this easy answer does not dispose of the objection; it simply shows that the objection is not well put. To deny that the end justifies the means is indirectly to assert that the end in question is not the ultimate end, that the ultimate end is itself the use of the proper means. Desirable or not, any end that can be attained only by the use of bad means must give way to the more basic end of the use of acceptable means.

As Klein points out, the former Soviet Union, now known as Russia, was an ideal proving ground for a doctrine that had continually failed. She chronicles the psychological impact of the Chicago School’s theories come home to roost in practice: alcoholism and AIDS and prostitution and drug addiction and wealth concentrated in the hands of the few. Such is the lament of the hopeless under a doctrine of “planned misery.” She states,

Russia’s population is indeed in dramatic decline—the country is losing roughly 700,000 people a year. Between 1992, the first full year of shock therapy, and 2006, Russia’s population shrank by 6.6 million. Three decades ago, André Gunder Frank, the dissident Chicago economist, wrote a letter to Milton Friedman accusing him of “economic genocide.” Many Russians describe the slow disappearance of their fellow citizens in similar terms today. This planned misery is made all the more grotesque because the wealth accumulated by the elite is flaunted in Moscow as nowhere else outside of a handful of oil emirates. In Russia today, wealth is so stratified that the rich and the poor seem to be living not only in different countries but in different centuries. One time zone is downtown Moscow, transformed in fast-forward into a futuristic twenty-first-century sin city, where oligarchs race around in black Mercedes convoys, guarded by top-of-the-line mercenary soldiers, and where Western money managers are seduced by the open investment rules by day and by on-the-house prostitutes by night.

One might wonder why, with the manifold and manifest failures of the Shock Doctrine and the Chicago School philosophy, the Neoconservatives continued to be fruitful and multiply. The only answer that I could come up with is that the corporations like the policies because, once these are implemented, they become vastly enriched, even when the Chicago Boys can get only part of their agenda through, as in America. To return to the Rumsfeld idea of “transforming” the military into a corporation by outsourcing fighting to contractors, Klein recounts how unpopular this idea was with the generals, who would watch the military double in size with half of the personnel beyond their control. As she points out, the role of the government becomes to subcontract services to private businesses (which inevitably charge two or three times more), causing the cost of any “government” service to spiral.

The philosophy of Milton Friedman made corporations and businesses profitable beyond their wildest dreams. Thanks to Presidents Bill Clinton and George W. Bush, more and more areas traditionally reserved for government professionals, who were often unionized, were turned over to corporations. The result was a gutting of unionized labor (which started with Ronald Reagan) and the disenfranchising of the voter, who could not confront a corporation in a town hall. Klein points out that Bush had energetically privatized the prisons in Texas and then went on to privatize the War on Terror. What Bush wanted to do, she asserts, was to “hollow out the government.” With what seems like a preternatural patience, the neoconservatives who had been waiting and practicing for years came into their own, thanks to the calamity and trauma of September 11th. She states,

“September 11 has changed everything,” said Ed Feulner, Milton Friedman’s old friend and president of the Heritage Foundation, ten days after the attack, making him one of the first to utter the fateful phrase. Many naturally assumed that part of that change would be a reevaluation of the radical antistate agenda that Feulner and his ideological allies had been pushing for three decades, at home and around the world.

9/11 allowed the collapse of Enron to happen with less notice than it would otherwise have attracted. But Enron and its mode of doing business were a harbinger of things to come: total economic collapse through one of the maladies that has plagued the Chicago School since the experiments began in the 1970s, corruption. The problem of outside contractors happily ripping off the government had been going on for years, but under the Friedman-style government of George Bush the process accelerated to the extent that we still do not have a complete accounting of the taxpayer money that was misspent or simply lost. Vast sums of money went, not to stimulate the American economy, which remained stagnant, but to corporations. As Klein recounts,

…New Deal would be exclusively with corporate America, a straight-up transfer of hundreds of billions of public dollars a year into private hands. It would take the form of contracts, many offered secretively, with no competition and scarcely any oversight, to a sprawling network of industries: technology, media, communications, incarceration, engineering, education, health care. What happened in the period of mass disorientation after the attacks was, in retrospect, a domestic form of economic shock therapy. The Bush team, Friedmanite to the core, quickly moved to exploit the shock that gripped the nation to push through its radical vision of a hollow government in which everything from war fighting to disaster response was a for-profit venture.

The economic doctrine of the Bush Administration, expressed by Bush’s Budget Director, Mitch Daniels, and others, was that the government did not provide services but purchased them from an outside contractor and resold them to the American public, which was then forced to pay for these services at two or three times the market value. The result was a guaranteed deficit, draining the government surplus created under Bill Clinton and the future of the nation, which was now floating off on a sea of endless and unmentionable debt. The War on Terror made contractors and corporations rich, and the nation poor.

For decades, America has been fighting one war after another and has existed in a low-level state of Total War, flying low under the public radar. In the same way, the War on Terror was fought by corporations and by a small group of beleaguered American soldiers who were used as window-dressing. These soldiers were isolated from the mainstream, which allowed the War to be fought globally without much scrutiny and without inconveniencing the American people, who were busy “shopping” for homes and commodities. The best part of the War was that it could conceptually go on as long as America could borrow money from China. As Klein says,

From a military perspective, these sprawling and amorphous traits make the War on Terror an unwinnable proposition. But from an economic perspective, they make it an unbeatable one: not a flash-in-the-pan war that could potentially be won but a new and permanent fixture in the global economic architecture.

What Naomi Klein calls the “disaster industry” was based on high-tech venture capital businesses ideally suited to hunting “terrorists” with sophisticated technology. Such technology is supremely expensive and is ideally suited to endless improvement, or, to put it another way, an endless revenue stream. An entire corporate structure sprang up, designed to fight a war that, by definition, could not be won and, therefore, could never end, like the profits. Klein points out the vast fortunes some fortunate individuals amassed following 9/11, predicting and causing the current inequities between the very rich and the stalled and suffering middle class.

The problem is that once government services are auctioned off to no-bid contractors, the nation has been given over to corporations whose motive is profit, not democracy and not public service and not the public good. Corporations answer to stockholders, not to voters. For example, insurance companies are motivated to make money, not to make people healthy. A corporation could be providing any sort of good; a health care company or a military contractor is simply filling in a blank corporate space, providing a good or a service, not because it is dedicated to public service but because the business wants to make a profit. For those who have wondered why America invaded Iraq, or for those who charged that the war was waged to enrich Vice President Dick Cheney’s company, Halliburton, Klein offers this succinct explanation:

Saddam did not pose a threat to U.S. security, but he did pose a threat to U.S. energy companies, since he had recently signed contracts with a Russian oil giant and was in negotiations with France’s Total, leaving U.S. and British oil firms with nothing; the third-largest proven oil reserves in the world were slipping out of the Anglo-American grasp.  Saddam’s removal from power has opened vistas of opportunities for the oil giants, including ExxonMobil, Chevron, Shell and BP, all of whom have been laying the groundwork for new deals in Iraq, as well as for Halliburton, which, with its move to Dubai, is perfectly positioned to sell its energy services to all these companies.   Already the war itself has been the single most profitable event in Halliburton’s history.

When Klein went to Iraq to investigate this economic story, she, of course, could find few people to talk with her about the underlying cause and effect of the war for profit in Iraq. There was enough public scrutiny of the war, the amount of money that was wasted, the toll of American lives in the service of Halliburton, and the cost of the war to American honor that the Bush Administration was forced to scale back its dream of a permanent occupation and agreed to begin withdrawing the military, though not the contractors. It is still unclear what kind or extent of an American presence will remain in Iraq. Klein discusses her trip to Iraq,

The fact that it was hard to find people in Baghdad who were interested in talking about economics was not surprising. The architects of this invasion were firm believers in the shock doctrine—they knew that while Iraqis were consumed with daily emergencies, the country could be auctioned off discreetly and the results announced as a done deal. As for journalists and activists, we seemed to be exhausting our attention on the spectacular physical attacks, forgetting that the parties with the most to gain never show up on the battlefield. And in Iraq there was plenty to gain: not just the world’s third-largest proven oil reserves but territory that was one of the last remaining holdouts from the drive to build a global market based on Friedman’s vision of unfettered capitalism. After the crusade had conquered Latin America, Africa, Eastern Europe and Asia, the Arab world called out as its final frontier.

It was clear from the start that Iraq was considered to be, not a nation, but a site of corporate exploitation on a scale that made nineteenth-century imperialism look tame and lame. Iraq was to be a staging ground for extraction and profit while the compliant and grateful population looked on in “shock and awe.” As often happens with the best-laid plans of the Chicago Boys (who seem perennially divorced from reality), those very pesky people caused problems from the start: looting, complaining, and forming insurrectionary groups. As Klein recounts, because the “planners” did not plan for the Iraqi people, the occupation was a disaster from the start:

The Bush cabinet had in fact launched an anti-Marshall Plan, its mirror opposite in nearly every conceivable way. It was a plan guaranteed from the start to further undermine Iraq’s badly weakened industrial sector and to send Iraqi unemployment soaring. Where the post-Second World War plan had barred foreign firms from investing, to avoid the perception that they were taking advantage of countries in a weakened state, this scheme did everything possible to entice corporate America (with a few bones tossed to corporations based in countries that joined the “Coalition of the Willing”). It was this theft of Iraq’s reconstruction funds from Iraqis, justified by unquestioned, racist assumptions about U.S. superiority and Iraqi inferiority—and not merely the generic demons of “corruption” and “inefficiency”—that doomed the project from the start. None of the money went to Iraqi factories so they could reopen and form the foundation of a sustainable economy, create local jobs and fund a social safety net. Iraqis had virtually no role in this plan at all.

Predictably, the Iraqis were angry with the Bush Administration and reacted accordingly. Instead of working with the people they had invaded and conquered, the government treated the innocent Iraqis ruthlessly, disenfranchising them from their own country and offering them no choice but insurrection. The worst elements in Iraqi society floated to the top, while the very people who could rebuild the country simply left. Unable to work with the occupation government, which was intent on sucking the natural resources dry, the best and the brightest, the educated and the trained sectors of the society, fled the conditions created by the ineptness and greed of the Bush Administration. But Klein insists that the real cause of the disaster was deeper than mere inexperience:

Iraq’s current state of disaster cannot be reduced either to the incompetence and cronyism of the Bush White House or to the sectarianism or tribalism of Iraqis. It is a very capitalist disaster, a nightmare of unfettered greed unleashed in the wake of war. The “fiasco” of Iraq is one created by a careful and faithful application of unrestrained Chicago School ideology.

The occupation forces viewed local Iraqi businesses as elements to be purchased by international corporations that would then proceed to “downsize” the employees and globalize the assets. While the Iraqis rebelled against their livelihoods being wrested from them by global corporate interests, Klein points to another aspect of the Occupation: the reluctance of the Neoconservatives to allow a government to be built for the people. The Neoconservatives did not believe in government, and it would be hard to imagine a contingent of the American population more ill-suited to putting a shocked and defeated people on the road to democracy. The followers of Milton Friedman believe, not in democracy, not in the Social Contract, but in an every-man-for-himself philosophy.

Every person has to compete within an economic zone where everything is for sale. If you fail to compete on this narrow and specialized field, it is your fault. The government’s only role is to stage and facilitate economic warfare, the Darwinian survival-of-the-fittest scenario. It has been remarked upon over and over, especially in Rajiv Chandrasekaran’s excellent 2006 book, Imperial Life in the Emerald City: Inside Iraq’s Green Zone, that the people hired to undertake the delicate and difficult task of reconstructing Iraq were young and inexperienced and were given their jobs based, not on their understanding of nation building, but on having the “correct” positions on conservative “values,” such as abortion. Klein makes it clear that the litmus tests that so puzzled me when I read Chandrasekaran’s book were probably just proofs of philosophical position. As she explains of the young people,

…they were frontline warriors from America’s counterrevolution against all relics of Keynesianism, many of them linked to the Heritage Foundation, ground zero of Friedmanism since it was launched in 1973. So whether they were twenty-two-year-old Dick Cheney interns or sixty-something university presidents, they shared a cultural antipathy to government and governing that, while invaluable for the dismantling of social security and the public education system back home, had little use when the job was actually to build up public institutions that had been destroyed.

Thanks to this army of neoconservatives, there was a vacuum where a government should have been. Klein points out that the Iraqis who remained in their country had no government to coalesce around. There was no government, only an army of corporate occupiers, determined to loot and leave. With few Iraqis allowed to be public presences or to have roles or jobs in the new corporate state, the people turned to the one element of society that had not been abolished, looted, or corrupted: fundamentalist Islam. The Muslim religion, in what had been a secular state under Saddam, became the only unifying force for the Iraqis. A nation that had not allowed terrorists to disturb the dictator was now in the hands of terrorists, and small fires of resistance broke out everywhere. Soon the Green Zone was under siege and under fire, interrupting the contractors in their systematic looting of the nation’s resources.

The corporations were interested in taking money for not rebuilding Iraq, bombed into submission by its “liberators.” In activities still incomprehensible, corporations such as Halliburton and Kellogg Brown & Root spent billions of borrowed money to “construct” facilities and buildings so bad and so dangerous that one has to wonder how such atrocities were actually carried out. If anyone should be so bold as to sue, the corporations were beyond accountability: we paid them, but we could not control them, the perfect situation for global looters. As Klein says,

In March 2006, a federal jury in Virginia ruled against the company, finding it guilty of fraud, and forced it to pay $10 million in damages. The company then asked the judge to overturn the verdict, with a revealing defense. It claimed that the CPA was not part of the U.S. government, and therefore not subject to its laws, including the False Claims Act. The implications of this defense were enormous: the Bush administration had indemnified U.S. corporations working in Iraq from any liability under Iraqi laws; if the CPA wasn’t subject to U.S. law either, it meant that the contractors weren’t subject to any law at all—U.S. or Iraqi.

At the end of the book, Klein circles around from her long analysis of the looting of Iraq and returns to New Orleans after Katrina. It seems that the Iraq model could be used in New Orleans to the benefit of tourist industries and developers. This time, the disaster allowed the government to ship any citizens who might protest out of state so that the dismantling of entire neighborhoods and school districts could proceed unopposed. As in Sri Lanka after the tsunami, the “abandoned” territory was privatized and gentrified. The model of privatization has been so stealthily and systematically insinuated into the fabric of the American way of life that the private contractors have become stronger and less accountable. As Klein expresses it,

The emergence of this parallel privatized infrastructure reaches far beyond policing. When the contractor infrastructure built up during the Bush years is looked at as a whole, what is seen is a fully articulated state-within-a-state that is as muscular and capable as the actual state is frail and feeble. This corporate shadow state has been built almost exclusively with public resources (90 percent of Blackwater’s revenues come from state contracts), including the training of its staff (overwhelmingly former civil servants, politicians and soldiers).  Yet the vast infrastructure is all privately owned and controlled. The citizens who have funded it have absolutely no claim to this parallel economy or its resources.

That these private corporations have the fate of the nation in their unaccountable hands is made clear when one looks at the banking industry. Nowhere is the idea of public money and private gain truer than in the world of finance. It is the public that risks and loses and the private that is saved and rewarded. Klein’s thesis of “disaster capitalism” is playing out across America, where we are seeing what she calls “disaster apartheid.” The rich become richer and isolate themselves from the increasingly alienated lower classes, the middle and working and unemployed and underemployed classes.

It is not just the gated communities that withdraw from the Social Contract, such as the one in Georgia that Klein describes; it is the gated minds that withdraw from the American promise: that we are one people and one nation. Today, we are a nation divided between the rich and protected, who reap the rewards of a tax code rigged to make them rich and everyone else poor, and the rest of us. They are protected by powerful interests who fear less for a single wealthy person on a Long Island estate in the Hamptons than for the “slippery slope” danger that the attention of the citizens might turn to the corporations, which also do not pay taxes. Klein points out that Israel, like America, has become a divided society, profiting from the “threats” of “terrorism” coming from tribespeople who are living in a seventeenth-century society.

The absurdity that twenty-first-century nations should establish an economic system dedicated to arming themselves against people who would leave us alone if we just left them alone has created a huge gulf between privatized wealth and public poverty. Klein states that the American governments under the spell of Milton Friedman fear democratic socialism more than they fear any outside threat. Any hints of “income redistribution” or “economic fairness” bring about instant assaults from the conservative media, which howls with charges of “Marxism” and “Nazism.” Either these people are uneducated and don’t know the difference between the theories of Marx and the practices of Hitler, or they simply hurl word grenades indiscriminately.

“Socialism,” or a government that actually governs, is a dire threat to the followers of Milton Friedman. The people who run as conservatives run for office, not to govern, but to un-govern. Their role is that of moles: to “hollow out” the government and leave it an empty tunnel under the crumbling sod of a nation that was once called “America.” Running this brave new world will be a handful of corporations, those “people” who cannot vote but can buy elections.

As I write, there are protestors on Wall Street, “occupying” Zuccotti Park. The protests against the implementation of the Shock Doctrine upon Americans have been going on for years, ever since the Wall Street bailouts. In another post, on Inside Job (2010), I wrote of the complicity of now-discredited economists and economic doctrines in causing a global economic crisis from which it will take us years to recover. I say “us,” but most of “us” will never regain our strides or places in a once-thriving society that was looted by the rich and powerful, who are affronted when “we” demand “economic justice.” “We” are “Marxists” and “unpatriotic.” The Shock Doctrine ends on a hopeful note, as Naomi Klein sees signs that people are trying to take their country back. Our future hangs in the balance, and some of us wonder if this is our last chance before we all become “America, Inc.”

Dr. Jeanne S. M. Willette

The Arts Blogger

Rebels in Paradise: The Los Angeles Art Scene in the 1960s

REBELS IN PARADISE.

THE LOS ANGELES ART SCENE IN THE 1960S

BY HUNTER DROHOJOWSKA-PHILIP

HENRY HOLT AND COMPANY, 2011

Rebels in Paradise: The Los Angeles Art Scene in the 1960s is a lovely and delicious book. Delightfully and briskly written, it is far and away the best book Drohojowska-Philip has produced to date. She rightly calls the book “a love letter to Los Angeles,” for it is narrow in scope and presents the sixties from a personal point of view. Did this book need to be written? Probably not, because the sixties scene in L.A. has been thoroughly discussed. The historical bricks and mortar are already in place, but what the author provides are interesting bits and pieces, anecdotes about “making it” in the art world, California style. Most of the new material comes from oral histories of the definitive decade, so that the book is based on the artists’ voices. The reader can get through this book in a couple of hours, skipping lightly along the familiar and pausing for the occasional new gossipy nugget about the marital musical chairs and who took LSD, who rode surfboards or motorcycles or hot rods, and other mildly amusing stories of harmless fun in the sun.

It is very difficult to write this kind of book, which narrates history like a novel, but Drohojowska-Philip has the literary skills to pull it off.  Take this nice opening passage:

A feeling of excitement charged the balmy evening air outside, and North La Cienega Boulevard traffic slowed as drivers gawked at the scene. Inside, stylishly coifed women in sleeveless dresses mingled with Los Angeles artists, awkward young men outfitted in thrift-store splendor. Warhol entered the filled-to-capacity gallery wearing a carnation in the lapel of his Brooks Brothers blazer.

This is a book you want to read. Compare the nicely elegant prose to this turgid mess from the opening of The Accidental Billionaires by Ben Mezrich:

It was probably the third cocktail that did the trick. It was hard for Eduardo to tell for sure, because the three drinks had come in such rapid succession—the empty plastic cups were now stacked accordion style on the windowsill behind him—that he hadn’t been able to gauge for certain when the change had occurred. But there was no denying it now, the evidence was all over him. The pleasantly warm flush to his normally sallow cheeks; the relaxed, almost rubbery way he leaned against the window—a stark contrast to his usual calcified, if slightly hunched posture; and most important of all, the easy smile on his face, something he’d practiced unsuccessfully in the mirror for two hours before he’d left his dorm room that evening.

The opening passage warns you that you, the poor helpless reader, will be trapped in a mire of terrible over-writing. This is a book you will never finish.  Here’s another would-be masterpiece that defies even the most tolerant reader’s patience:

Standing in the kitchen of his Park Avenue apartment, Jamie Dimon poured himself a cup of coffee, hoping it might ease his headache. He was recovering from a slight hangover, but his head really hurt for a different reason: He knew too much. It was just past 7:00 a.m. on the morning of Saturday, September 13, 2008. Dimon, the chief executive of JP Morgan Chase, the nation’s third-largest bank, had spent part of the prior evening at an emergency, all-hands-on-deck meeting at the Federal Reserve Bank of New York with a dozen of his rival Wall Street CEOs. Their assignment was to come up with a plan to save Lehman Brothers, the nation’s fourth-largest investment bank—or risk the collateral damage that might ensue in the markets.

Deliver me from such wordy writing. Undoubtedly Andrew Ross Sorkin had good intentions in Too Big to Fail: The Inside Story of How Wall Street and Washington Fought to Save the Financial System–and Themselves, but the book is an impossible slog. It is the bad prose, such as these examples, all too common in non-fiction books, that makes Rebels in Paradise such a nice change. Finally, someone who can write nonfiction. Drohojowska-Philip wisely sticks to simple description and lets her interesting cast of characters and their adventures drive the story. That said, the definitive version of the emergence of Los Angeles as a major art scene in the face of East Coast indifference has still to be written. Anyone interested in the history of this period must go through multiple volumes, starting with Carey McWilliams and Mike Davis and Peter Plagens and working up to recent updates by Peter Selz (Art of Engagement) and Elizabeth Armstrong (Birth of the Cool). The MOCA catalogue, Under the Big Black Sun, coming out in October of this year, looks promising, with an interesting roster of writers, but will probably have the same kind of narrow focus found in most of these books. The student of the history of Los Angeles art must put together a complete picture by cobbling together information from various genres, and Rebels in Paradise is yet another addition to a larger pool of information.

A compendium of personal experiences and memories, Rebels in Paradise captures from the aging group of pioneer artists and dealers their tales of building an art world based upon freedom and experimentation when no one was looking and no one cared. The audience for these Rebels was almost exclusively an audience of producers. Some of the seminal figures have died since the founding of the Cool School: Walter Hopps, John Altoon, and Wallace Berman, significant voices stilled. The book deals with the decade-long life of the Ferus Gallery, the key exhibition site of the sixties. A few other galleries and non-Ferus artists are included, such as Nick Wilder’s artist, David Hockney, and the rare woman on the scene, Vija Celmins. African-American artists and activist artists get a quick walk-on. Beginning with a prelude in the mid-fifties, which Drohojowska-Philip refers to as the “Beat” period, the focus is on the stable handled by founder Walter Hopps and his successor, Irving Blum. The Ferus Gallery was the equivalent of a tree-house for very immature boys only and a frat house for partying artists who enjoyed the cultural attitude of “boys will be boys.”

These boys included Billy Al Bengston and Ed Ruscha and Ed Moses and Larry Bell and Ken Price and John Altoon and Ed Kienholz and Robert Irwin, with some given more coverage than others. The author includes Joe Goode, although he and his paintings with milk bottles were not part of the Gallery. She also includes a close friend and collaborator of the Ferus artists, the up-and-coming architect Frank Gehry. The Ferus gang advertised themselves as “The Studs,” and many writers think that this label, seen in a gallery exhibition poster, referred to the constant diet of willing nubile young women who hung out with the young and handsome men. In fact, the “studs” referenced the actual studs of the gallery walls where the nails were driven to hang the paintings…or so one of the artists now claims. But some of these frisky gentlemen, like Ed Kienholz and Craig Kauffman, seem to have collected wives. The art scene was so glamorous and so appealing that it attracted other brash young men from the movie industry, Dennis Hopper, Dean Stockwell, and Russ Tamblyn, and pretty starlets, such as Teri Garr. These minor B-list stars gave the scene added luster but also spoke of the unmade-bed quality of the Los Angeles art world, where anyone could climb in, unless of course you were black, gay, or a woman.

The focus of Rebels in Paradise is kept tight on the artists themselves, their lives and biographies. The art itself is only glancingly discussed, and a reader not familiar with the paintings of Bengston or the ceramics of Price or the architecture of Gehry would be lost. Written for insiders, the account is uneven at times. The author relies upon a prior understanding of Every Building on the Sunset Strip, for example, but provides a more informative discussion of Hockney’s portrait of the collectors Marcia and Frederick Weisman and a good account of how Robert Irwin constructed his convex dot paintings. The book has no social context, and “history” is usually a few sentences, which work as establishing shots. There is some attempt to discuss the frustration of the African-Americans in Watts after the Civil Rights Movement, but the reasons for the famous Peace Tower (the Vietnam War) are glossed over. The relative lack of historical backdrop is a loss because the artists in Los Angeles were willing to tackle the social and political issues of the day, something the New York artists refused to do. The silence of so many East Coast artists makes the social critique provided by Vija Celmins and Noah Purifoy all the more brave and remarkable.

Aside from the comparative lack of social and economic and political context, the book is an excellent attempt to create a literary biography of an active and varied art scene in Los Angeles. Drohojowska-Philip takes the time to include a thorough discussion of the Light and Space movement, which took place outside of the small stable of the Ferus Gallery. She also brings in the ebullient Rudi Gernreich and the very important print studio Gemini G.E.L., where Robert Rauschenberg began an important new phase of his career. In other words, Drohojowska-Philip emphasizes the New and keeps out the older traditions, which perhaps explains the comparative neglect of John McLaughlin, whose paintings are connected to the California tradition of light but also stem from the hard-edge tradition of the East Coast. The author concentrates on the semiotic approach to painting, the conceptual paintings developed by Ed Ruscha, which relegates the pioneers in abstraction, Lorser Feitelson and Helen Lundeberg, to the past. The Los Angeles of the sixties was an entirely new post-war plastic world, largely populated by Midwestern and European immigrants attracted to the new possibilities of the Last Frontier.

I do not approve of critiquing a book for being something it was not intended to be, but I do feel that it is important to state what Rebels in Paradise is (a series of anecdotal biographies of a small group of significant artists) and what it is not (a history of Los Angeles and its art at a particular moment in time). Because the author stays so completely in her chosen tranche, there can be no perspective on the people or the events as viewed from the position of the present. Today, the idea that an art scene could unfold without women and people of color seems strange and unforgivable, particularly in light of the backdrop of the Civil Rights Movement. Drohojowska-Philip presents a racist, sexist, and homophobic art world that is typical of its time, a sort of Mad Men at the Ferus Gallery, but resists commentary or judgment. These years were the last of the total domination of the white male, and it probably did not occur to any of these white and male artists that they were the last of their kind. And these lives and careers are now, fifty years later, coming to a close. It would have been interesting to have more of an epilogue: the illness of Teri Garr, the death of Dennis Hopper, the waning of Billy Al Bengston, the rise of the “Starchitect” Frank Gehry, and the contributions of Betye Saar and her remarkable daughters.

Overall the book is a very special sixties nostalgia trip, retelling the story of the making of an art world without controlling art critics and without ruling dogmas, before the takeover of an international art market and before the control of the art schools. The author takes the reader a bit into the future by bringing in John Baldessari and Judy Chicago, leaders of the seventies scene, which ended painting as it was known and began the challenge of feminist art, all done at CalArts. Again and again, Drohojowska-Philip presents statements by her artists stressing the importance of not being in New York and of working in a brave new world where they could be completely open to new ways of making art. She provides a particularly amusing story of painter Robert Irwin taking a New York critic on a tour of the local Kustom Kar Kulture of Los Angeles, but the critic could not imagine that painting a car could possibly be “art.” Irritated and impatient with such closed-mindedness, Irwin put the critic out of the car and onto the highway and left the man standing beside the road. Such is the fate of those who do not heed the future, and the future is in Los Angeles.

Read this Valentine of a book; you will thoroughly enjoy it.

Dr. Jeanne S. M. Willette

The Arts Blogger


The Assault on Reason, 2007, by Al Gore

WHAT IF AL GORE HAD BEEN THE PRESIDENT?

A Review of

THE ASSAULT ON REASON, 2007, by Al Gore

One of the great “what ifs” in American history is “what if Al Gore had become president in 2000?” Notice I did not say, “What if Al Gore had won the 2000 election?” For some, George W. Bush did not defeat Al Gore; instead, the Supreme Court, in what many left-wing thinkers consider a coup d’état, handed him the presidency. Who knows who really won? The counting of the votes, hanging chads, butterfly ballots, and all that, was never completed but was halted by the Court. The Republican response to Democratic dismay was to tell the losers to “suck it up” and accept the loss. While this transfer of the presidency to George W. Bush has never left the consciousness of the Democrats, and while we will never know who actually won the most votes in Florida, some things we do know for certain, and that is what would not have occurred if Gore had become president.

Imagine what we would not have had:

  • No war in Iraq
  • No “discretionary” wars
  • No Patriot Act
  • No torture, no torture memos
  • No wholesale spying on the American people
  • No Guantanamo Bay
  • No Abu Ghraib
  • No flouting of the Geneva Convention
  • No privatization of the military
  • No Halliburton, no KBR
  • No wars fought on credit cards
  • No unfunded prescription drug programs
  • No government lying
  • No outing of CIA agents
  • No inaction on Katrina
  • Job outsourcing offset by jobs at home
  • No Great Recession
  • No Bush Tax Cuts to the Wealthy
  • No massive debts
  • No union-busting governors
  • No Defense of Marriage Act
  • No polarization between political parties
  • No John Roberts
  • No Samuel Alito
  • No Citizens United Decision
  • No Tea Party
  • No Sarah Palin
  • No Michele Bachmann
  • No Barack Obama

What we would have had:

  • A Short War in Afghanistan
  • A Green Economy
  • Green Jobs in America
  • Smaller Wall Street Crash
  • Illegal Immigrants made legal tax-paying citizens
  • The Protection of Reproductive Rights
  • The Protection of Voting Rights
  • Well-funded Social Security and Medicare and Medicaid
  • Compromise and Negotiation
  • A Respect for Truth and for Reality

Each president teaches the nation a series of lessons, some of them with lasting repercussions, some good and some bad. Lyndon Johnson taught us that presidents lie. Richard Nixon taught us that government is not to be trusted. Ronald Reagan taught us that greed was good. George H. W. Bush taught us to use racist lies as a campaign strategy. Bill Clinton taught us that presidents have sex while in office. George W. Bush taught us that it was just fine to spend money we did not have and had no way of paying back. Barack Obama taught us that resistance is futile. Al Gore taught us how to lose gracefully. Al Gore also taught retired public servants how to make the most of their retirement and how to maximize their experience for the public good. Of all the ex-politicians, Al Gore has contributed to the world perhaps the most admirably, warning of the coming catastrophe of Global Warming or Climate Change or whatever you want to call it. Only Jimmy Carter and Bill Clinton have equaled Gore in public service after serving in elected office. We are still waiting to see what the Bushes, Senior and Junior, will do to show that they deserved the faith their voters put in them to serve the people.

We know what happened under George W. Bush.  But what if Gore had been president?  What are the arguments that things would have been better as the result of a Gore presidency?  First, Gore would have retained the surplus accrued under Clinton.  There would have been no tax cuts for the rich.  So how would all that extra money have been spent?  Undoubtedly, the deficit would have been paid down over time.  But there are always rainy days and the unexpected.  During the first decade of the twenty-first century, there were two events that could not have been planned for.  In making the second point, we could arguably ask: would there have been a September 11th?

While it is doubtful that the terrible, insane plan to turn planes into weapons could have been detected, there would have been much more awareness of the dangers of Islamic terrorism in the Gore administration than in the Bush administration. The Bush State Department was fully briefed by the outgoing administration on the threats from Al Qaeda and chose, famously, to ignore the information. Third, while we can assume that, regardless of the increased vigilance, 9/11 would have happened anyway, we also know that there would have been no war in Iraq. Certainly after September 11th, America would have fought what probably would have been a short and sharp war in Afghanistan. How short, we cannot know, but certainly not the ten-plus years we are witnessing now.

Another cost of the Bush wars was the very expensive privatization of the military. Once, the military took care of itself, from cooking to cleaning to fighting. Under the Bush administration, the basic cost of running a war was enormously increased by outsourcing what had been standard military tasks to private companies, which proceeded to overcharge the government. It has long been known that the Defense Department has always been the target of enrichment scams on the part of civilian businesses, and there were attempts, however feeble, to keep the outrageous overcharges under control. Under the Bush administration, the ceding of the military to private enterprise exploded the cost of the war beyond what it would normally have been.

And none of the increased costs were paid for. During the Second World War, the military was self-sufficient and the citizens paid the costs, one day at a time, through the sale of war bonds. This time, instead, no-bid contracts were handed out to everyone from electricians to caterers to commandos, effectively doubling the personnel and causing costs to spiral out of control. It is doubtful that, under a Democratic president, the wars would have been either plural or privatized. Without the wars, there would have been no Patriot Act, no wholesale spying on the American people, no Guantanamo Bay, no Abu Ghraib, no torture memos, no flouting of the Geneva Convention, no decline in American credibility, and no loss of American honor.

Fourth, this war would have been paid for. The two Bush wars were the first in American history to be waged without a tax increase and fought totally on borrowed money. Fifth, it is unlikely that going into two wars on credit cards would have been coupled with another charge on the card, the unfunded and unpaid-for prescription drug plan. Although it is safe to assume that none of the budget-busting events that happened under Bush—two wars, a tax cut, and a prescription drug deal, none of which were ever paid for—would have happened under a Gore administration, it is not safe to assume that there would have been no financial meltdown. The crisis of 2008 could well have come about regardless of who was in charge. The only real question is how bad it would have been.

The Gore administration would have, in all probability, continued the deregulation of the lending and financial industries undertaken by the Clinton economic team. What is unclear is the extent of the financial excesses. During the Bush years, Wall Street came to resemble Las Vegas even more than usual. The stock market and its minions take their cues from political leadership, and the market clearly followed the lead of the Bush administration, adopting the philosophy of short-term goals and short-term gains: borrow and spend with no thought to the consequences. The market will always take advantage of the slightest permissive loophole and even invent a few more, but, under Bush, there was clear permission to binge.

Recall that after 9/11, the president urged the nation to shop. Credit cards were flashed, and homes were used as the proverbial piggy bank; thanks to “liar’s loans,” value was extracted from what was the homeowner’s major financial asset. The market may always be counted on to behave badly and selfishly but, under Bush, the basic fabric of responsibility and morality and ethical behavior openly unraveled. The bills finally came due, and the entire structure, built on fantasy, came crashing down. Would the Gore administration have bailed out Wall Street?

It is possible that, given the precedents, such as the Savings and Loan debacle, the answer would have been “yes.” But it is probably safe to assume the crash would have been much less severe and that the money would have been there to pump into the economy. Not only that, but the economy would have been in much better shape and could have better absorbed such a blow. Under the Bush administration, there was no job creation and no rise in middle-class income. Jobs were going out the door and traveling to other nations with cheap labor. Tax incentives were created to encourage outsourcing, and corporations were allowed to not pay taxes. Of course, with the high cost of labor and the stringency of regulations in America, all the businesses that could do so shipped their jobs overseas.

This practice was nothing new and had been going on since the 1970s. Outsourcing is not a bad thing in and of itself. American consumers have certainly enjoyed affordable commodities, from television sets to automobiles, and it makes sense to allow certain societies to specialize in manufacturing if the advantage exists. The problem is that, under Bush, these lost jobs were never replaced. Real wages went down and, when taxes were cut, especially on the people who continued to experience a rise in income, revenues fell sharply. With not enough coming in and with huge, unprecedented amounts of money going out, a deficit rapidly replaced the surplus, and America went into a deep financial hole.

With the Afghanistan War over, with the rich paying their fair share, with no unpaid-for prescription drug plan, with no war in Iraq, and with a healthy economy, the Gore administration would have been ready for the Wall Street Crash of ’08. The Bush administration encouraged jobs to leave America and did nothing to encourage job creation at home. Eighth, here is where there would have been an enormous difference between Bush and Gore. Environmentally conscious, Gore would have started green industries in America, creating green jobs. Green jobs are the kind of jobs that cannot be outsourced, and the range of these kinds of jobs is enormous, offering opportunities to men and women with a wide range of skills and education. In addition, green jobs would have been located everywhere, eliminating the pockets of joblessness and limiting the dependence on federal spending seen in the southern part of the United States, for example. People could have actually afforded their homes, paid their bills, and, who knows, maybe there would have been no total meltdown that impacted homeowners. Maybe Wall Street would have had to suffer for its own excesses. Who knows?

Given the aging Baby Boomers, would there have been an upswing in socialized medicine and health care under Gore? Or, to put it another way, would Social Security and Medicare and Medicaid have been in financial trouble? The crisis in these government guarantees of public health is due to the lack of taxes to support them. With normal tax revenues, there is no problem for the future of any of these programs. It is even possible that, under Gore, Americans would have been allowed to buy drugs on a competitive market, even allowed to buy drugs in Canada, bringing down the cost of health care. But there is something more to consider. Under Gore, would there have been a Democratic push to legalize illegal immigrants? Given the rewards, why not?

Legal citizens pay taxes, instead of sending the surplus to Mexico, because they now have a stake in their new nation. The influx of income would be felt immediately in local and state and federal governments. People of Latino descent are a fast-growing and young demographic, more than filling the spaces left by the Baby Boomers, who will very shortly stop paying taxes and start drawing out the contributions they made toward their retirement. The current budget “crisis” could be solved simply by ending the Bush tax cuts and by making illegal aliens legal. Legal citizens can vote, and, in gratitude, they would vote for the party that had given them citizenship.

Republicans know this fact of life and will continue to obstruct Democratic efforts to solve the “immigration problem” (which, like many of the so-called “problems” we are told we have, is of Bush’s making), because they know that the Republican base is a small one. The idea of a permanent Democratic majority is simply unthinkable to the Republicans; even Bush knew that, but his own party blew the opportunity he gave them. The Republicans can offset their smaller numbers with larger campaign spending, which is now anonymous and unlimited, thanks to the Supreme Court’s infamous Citizens United decision. And that decision brings up another major difference between the administrations of Bush and Gore. Under Gore, there would have been no John Roberts and no Samuel Alito and no rightward turn of the Supreme Court. Instead, Gore would have nominated two more liberal or neutral justices to the Court, and there would have been no rollback of civil liberties and no decisions favoring corporations over citizens such as we have seen over the past decade.

Finally, the last thing that we do know is that without Bush and the rightward drift of his administration, there would have been no Barack Obama. Obama, a conservative Reagan Democrat, was able to position himself to the left of Bush only because of the extreme right-leaning positions taken by that administration. Obama’s mild, Republican-style health care policies, which seek to shield American citizens from predatory health care companies, were a shock only because of the strong contrast with Bush’s laissez-faire attitude towards the poor and the middle class. Without a right-wing Bush administration, there would also have been no Sarah Palin. The Bush administration prepared the ground for an extreme Republican agenda and for extreme Republican candidates who do not read newspapers and who want to pray the gay away.

At the end of a Gore administration, the next president could have been a moderate Republican, like Romney, or another environmentally conscious Democrat. It is doubtful that, whoever the president had been in 2008, there would have been the latest upsurge of the John Birch Society, the Tea Party. The Tea Party emerged, as did Sarah Palin, on the fertile soil of the Wall Street Bailout. With a good economy, there would have been no need for a faux “tax revolt.” Today, when nothing substantial gets done in Washington, it is hard to imagine what might have been. As unimaginable as it seems, the Democrats and the Republicans would be talking to each other today.

Just as Ronald Reagan allowed greed to emerge unchecked in America, George Bush allowed and encouraged a take-no-prisoners approach to politics. Taking a page from the book of his father’s late, unlamented advisor, Lee Atwater, the campaigns of the younger Bush held that no trick was too dirty and no lie too extreme, as long as it worked politically. The result was the birth of scorn for “reality-based” narratives, and the door was opened to stories that had no basis in fact. It was fine to lie about the weapons of mass destruction, it was OK to reveal the identity of an officer of the CIA, just as it was perfectly acceptable to torture and to hold people indefinitely without charge or trial. If one side believes in an untenable scenario and castigates anyone who wants to tell the truth, then compromise is impossible. Once facts become meaningless, the party that believes in non-facts can neither see nor agree to other points of view. When the Bush administration showed its willingness to buy into improbable versions of actual reality, the way was cleared for political gridlock. Without an agreement on basic facts and basic truths, no actions could ever be taken.

What the Bush administration taught us is that there was no accountability. Would the Wall Street Robber Barons have been allowed to go free in a post-Gore administration? Probably not; but Obama, following a regime without penalties, threatened the bankers with only Elizabeth Warren. But there is such a thing as accountability, and we, the middle-class American citizens, are still paying for old sins that we did not commit. In his best-selling 2007 book, The Assault on Reason, Al Gore does not mention any of the might-have-beens listed above. He simply outlines in clear, precise language the failings of the Bush administration. Writing before the Wall Street Crash, his concerns have to do with civil liberties lost and with the campaign of misinformation that passed for “news” during the first decade of the twenty-first century. Gore is especially concerned about the spread of false information by a mass media that is controlled by corporations and political interests. Gore quotes Edmund Muskie, a former presidential contender brought low by media manipulation in 1972,

“There are only two kinds of politics.  They’re not radical and reactionary or conservative or liberal or even Democrat and Republican.  There are only the politics of fear and the politics of trust.”

We all know that the next famous quote was “I am not a crook,” uttered by Richard Nixon. The president engineered his own demise by turning a small political misdemeanor into a massive, cancerous cover-up, bringing the term “Watergate” and all things “gate” into being to designate scandals that could not be overcome. Watergate, like the McCarthy Hearings, was played out on television to a fascinated audience, dazzled at the cast of luminaries brought low. Watergate was a rare case of the truth coming out, of that truth having consequences, and of those responsible being held accountable. It would be the last time such a public political punishment would occur. Watergate was a story broken by a great newspaper, The Washington Post, and what lodged in the public psyche was that newspapers, print media, were the last resort of truth. Since Watergate, the public has spent more and more time passively consuming television in a one-way, no-exchange experience. As Gore points out, today television is the public’s main source of political news on government business, and newspapers are folding one by one.

Not only are newspapers dying while television ratings soar, but television viewing has become more and more of a niche experience. Unlike newspapers, where a range of news and opinions co-exist, television programs appeal to the fears and prejudices of the audience. Television exists to entertain and to make money for the owners, not to seek and find the truth. Furthermore, competition has greatly lessened among media outlets since the 1970s, and a few vast conglomerates control everything. Monopoly capitalism has captured the news, turning it into a source of revenue. In such an atmosphere, reason has no place.

Gore’s main thesis is that reason has been replaced by “dogma and blind faith.” The result is “a new kind of power” that is arbitrary because the public is not informed and cannot consent from an informed position. Gore also states that this power comes from “deep poisoned wells of racism, ultranationalism, religious strife, tribalism, anti-Semitism, sexism, and homophobia…” In such an atmosphere, the ugliness that always underlies any body politic is allowed and even encouraged to emerge. Real problems can be ignored while non-problems and fake crises distract the American people. The result is the replacement of our system of checks and balances with unchecked power and influence, thanks to a “coalition” that serves its own interests, not those of the public.

Gore used the Iraq War and the systematic lies that led to it as a prime example of the techniques of distraction.  It is now known that the Bush administration came into office with the goal of deposing Saddam Hussein and the administration’s spin machine diverted attention away from Osama bin Laden to phantom weapons of mass destruction.  Anyone who disagreed with President Bush or brought facts to bear was dismissed as “unpatriotic.”  Ideology replaced facts, faith replaced information, fantasy replaced history, and dogmatism replaced reason so that Bush could “benefit friends and supporters.”

The coalition, or the “friends with benefits,” that Gore describes is made up of a number of groups, or what Bush called “my base.” He lists “the economic royalists” who want only to eliminate taxation and regulation, an “ideology” which has an “almost religious fervor.” The public interest does not exist for these people. Indeed, in their view, any government programs that aid the people are disincentives that keep them from working hard for low wages. The interests of the “wealthy and large corporations” have the highest priority in Republican ideology. The infallibility of this ideological position is buttressed by what Gore lists as “well funded foundations, think tanks, action committees, media companies, and front groups capable of simulating grassroots activism and mounting a sustained assault on any reasoning process that threatens their economic goals.”

True, Republicans have been trying to dismantle the New Deal and the prosperity of the middle class for eighty years, but Gore asserts “this is different: the absolute dominance of the politics of wealth is something new.” He traces the long struggle in America to create an equal society, which is also a struggle against monopoly power and corporate interference with the workings of government; once, regulations made sure that there were many choices of media outlets in order to ensure competition. Under Reagan, Gore points out, media competition was ended when regulations were lifted, allowing vast corporations to gather together many television and radio stations and newspapers into one bundle that spoke with a single mind, devoted to preserving the wealth of the wealthy. Any information that gets in the way of ideology is promptly distorted for the cause or spun in a favorable direction. Gore says of the Bush administration, “I cannot remember any administration adopting this kind of persistent, systematic abuse of the truth and the institutionalization of dishonesty as a routine part of the policy process.” Gore states that the result of administration tactics was “to introduce a new level of viciousness in partisan politics.”

The Supreme Court, always compliant with right-wing agendas, helped President Bush gather unprecedented and unchecked power into the executive branch. The Bush doctrine became that whatever the president did was legal, a stance taken unsuccessfully by President Nixon. Bush was allowed to flout the American legal system and to disdain international laws. All Supreme Court decisions were made in favor of corporations and their powers and against the people, leaving the individual with no recourse, not even the right to a trial by jury. Bush was less interested in social issues than the later Republicans would be. He was far more interested in amassing the power to do what he wanted, whether it was warrantless wiretapping, searches without search warrants, or the “right” to put an unprecedented number of innocent citizens under surveillance for no particular reason. The public was not allowed to assemble freely, and any protestors were removed far away from the President and corralled in special sections so that Bush’s day would not be ruined by any sign of dissent.

Gore ends his description of the illegal and unconstitutional abuses of the Bush administration by stating what it would take to create the “well-informed citizenry” that democracy requires. He does not have much faith in television and puts his faith instead in the Internet. Gore warns that there are powers, corporate powers, which want to control the Internet by giving the content the rich and famous approve the green light of high speed and forcing the dissenters into the slow lane of endless downloads. This compartmentalization of the Internet into fast and slow, ideologically structured lanes is a real and present danger. One can only hope that the True Believers and the Bloggers will keep protecting the last bastion of true participatory democracy. This book was published before the Bush presidency ended and does not recount the last days of the Bush Bonfire, when Wall Street burned. Reading The Assault on Reason three years into the Obama presidency is to recognize how totally the Bush administration ruined the very promising situation it inherited from the Clinton-Gore administration. One realizes that this is a group of politicians who were discredited to a man and woman, but they were never held accountable. They just got out of town and left the government in a shambles.

What was gained? What was the Bush Administration all about? Reading Gore’s book helps us understand that what was gained by the monied interests was a significant weakening of regulations of all kinds, a shrinking of taxes on the rich, an enlargement of subsidies even for the wealthiest corporations, and a lack of meaningful consequences when oil spills or chemicals leak or coal mines cave in and people die. Wall Street banks can demand money from taxpayers and then refuse to help the very same citizens refinance their mortgages while giving themselves record bonuses. Global Warming is now a hoax, and every time it snows, the right wing throws verbal snowballs at Al Gore. Every time there is a tornado or a flood or a drought, the same people call the federal government. Labor unions, especially teachers, are now the villains, and these groups are under assault so that more tax breaks can be given to the wealthy. States’ rights have made a comeback, and even Obama, a black man who should know better, says that states should decide whether or not to “allow” gay marriage. “Compromise” and “negotiation” are bad words for a person whose election promise is to destroy government as we know it. Washington is in gridlock. The media have rewritten lived history: the deficit was caused by Obama, who was not born in the United States and who wants us all to become “European,” whatever that means.

Gore has been largely silent about these events that have unfolded since his book was published, but he cannot be surprised by the trend of today’s events.  He has not been outspoken like Bill Clinton, nor has he overtly supported Obama.  He has put forward the facts of Global Warming, won his Nobel Prize, and he will watch to see all his prophecies come true in forest fires, tornadoes, floods, droughts, melting ice caps, the extinction of polar bears, the widening of the hole in the ozone layer, endless winters, rising sea levels—Gore watches it all.  Some Americans look away from the dust storms and cry “Hoax.”  Other Americans have lost hope, and no wonder.  The game, we learned, was rigged for the rich and not for the public interest.  The land will be raped for the profits of the few and we the many will pay for the destruction.  Meanwhile, we watch television and see good-hearted well-meaning Americans demonstrating in Revolutionary War costumes to preserve tax cuts for the Wall Street bankers.  The media they watch has convinced them to dismantle all the social programs they enjoy and use.  These good people have been gathered together by powerful corporate interests who can bend them to their will.  Reason has no place in politics.  Nor do facts. Nor does reality.  Spin rules.  Slogans speak.  If Al Gore is right, the last refuge of the honest broker is the Internet…while it lasts.

Dr. Jeanne S. M. Willette

The Arts Blogger


Grown Up Digital: The Net Gen as Learners and Teachers

GROWN UP DIGITAL.

HOW THE NET GENERATION IS CHANGING YOUR WORLD

By Don Tapscott

2009

Is the Internet changing our brains? We know what our brains look like on drugs—but do we know what our brains look like on the web? Don Tapscott, one of the experts in the realm of Internet communication, says that our minds have been improved by unlikely mechanisms, such as video games and the much-scorned Wikipedia. Even though it is hard to imagine World of Warcraft as an implementer of intellectual prowess and a facilitator of social skills, today’s children and teenagers, the sons and daughters of Dungeons and Dragons players, are smarter than their parents. For some educators, the news that their students have sharper, better-developed minds than they do will come as a bit of a surprise. However, Tapscott insists,

…what we are seeing is the first case of a generation that is growing up with brains that are wired differently from those of the previous generation. Evidence is mounting that Net Geners process information and behave differently because they have indeed developed brains that are functionally different from those of their parents. They’re quicker, for example, to process fast-moving images…

What does it all mean? What are the implications for the future? Tapscott’s book is an informative and insightful journey into the way the twenty-somethings—the Net Generation—think. Despite the scientific data suggesting that the brain of a person who has been web-trained his or her entire life is different from that of the book generation, Tapscott’s main thesis is not so much brain change as power change. He posits the Net Gen as the “Lap Generation,” the first generation to lap, or pass, their parents by possessing an authority their elders do not understand: how to use electronic technology. The result of the younger generation’s apparent natural mastery of all things tech, Tapscott thinks, is the end of hierarchies and the abolition of centralized authority. The author focuses on four areas: family, education, business, and politics. All of these entities are being faced with the Lap Generation and its egalitarian mindset.

Family

The youth of today are better informed, more adept at technology, and savvier about the ways and means of the twenty-first century than the adults who are still in charge of education, businesses, and governments. What Tapscott’s book points to is a huge generational gap, a chasm as wide as the famous “generation gap” of Margaret Mead. For the Baby Boomers, their parents’ pre-war knowledge and experiences were irrelevant and useless, making what the author refers to as the authoritarian family structure of the era extremely frustrating for the Boomers. The fathers, who acted like CEOs, as Tapscott calls them, pontificated, but they had little of use to share and were unwilling to learn from their children. After years of having to endure lectures on topics that were alien to teenagers in the Sixties, the Boomers escaped the home front, never to return to the clutches of authority.

In contrast, today’s parents, who are the Boomers grown up, are more open to listening and to allowing their children to show them how to log onto the Internet. The relationship between parent and child is more open and more nurturing. Parents and children are close, so close that an entirely new kind of parent has emerged: the “Helicopter parent.” As an educator, I am familiar with that kind of ever-hovering parent but did not know that these same parents continue to hover: they will go along on job interviews after college and will even confront the boss if their child is not well treated. How are the parents so well informed about the office politics of their child? The Lap Generation, the “boomerang” generation, making a strictly economic decision, likes to live at home. There are no hierarchies, only equality, in this new family.

After reading Tapscott’s observations about the new family, it occurred to me that this new arrangement bodes well for the distant future, when the Boomer parents are elderly. For the first time in generations, it may be possible that the children will care for the parents. The Boomers ran away from home and abandoned their parents. Many Boomers today are facing the conundrum of what to do about an elderly parent or two. It is not uncommon for the Boomers’ elderly parents to be abandoned—again—in a facility where they will live out the last of their golden years, unvisited, and will die, unmourned. But the Boomers who have been respectful and kind to their children should expect better care from their children. What else could this new kind of anti-authoritarian family offer to the future?

Education

Educators should take note. The current model of pedagogy is teacher focused, one-way, one size fits all. It isolates the student in the learning process…. (Net Geners) will respond to the new model of education that is beginning to surface—student-focused and multiway, which is customized and collaborative…, says the author.

Tapscott states that the Net Gen carries with it two sets of expectations when these students enter schools and colleges. First, they are shaped by their experience with the Internet, which demands that they interact with technology, search for content, and socialize with their peers, long distance. Second, they expect to shape and participate in their own education. Rather than passively accepting intoned truths delivered from behind the lectern on high, this generation wants to participate and collaborate in what they expect to be a joint enterprise. The author characterizes current education as a one-way model: one person talks and another listens. It occurs to me that, in fact, the educational system reflects the technology. The Gutenberg technology, based upon the printing press, is a one-way form of communication. The author writes and the reader reads. The radio repeated this form of speaking and listening that reflected the print technology. Then television came along and replicated the Gutenberg method once again. Education is based upon the premise that an educated person, i.e., the teacher, is also a reader who has read and who is, therefore, qualified to redeliver the written messages in an oral form, again repeating the model of one-way communication.

Following my line of thinking, the real challenge to today’s educational model is the Internet, which is a two-way mode of communication. In contrast to the traditional Sermon on the Mount, the Web is participatory, non-authoritarian communication, a call-and-response format that is ignored and discredited by the authorities until they feel threatened by the sound of Other voices. The call-and-response nature of the Internet—this new technology—means that education must become more participatory for the Net Gen students. Tapscott writes that the Net Gen students expect interactive teaching and learning. If they cannot actively collaborate, they will tune out and get bored with traditional methods of lecturing. Although Tapscott does not get into the weeds of pedagogy, I suspect that, unlike their current teachers, this is a generation that would accept and welcome distance learning. Today’s students are used to learning from the computer, an instrument that many of today’s educators view with suspicion. On one hand, the computer is a convenient tool; on the other hand, it challenges the authority of the teacher who wants to be the sole source of knowledge.

Tapscott describes the elders of the Net Gen, the Gen Xers, as being “aggressive communicators who are extremely media centered.” But unlike the Gen Xers, the Net Gen grew up using the “programmable web.” “And every time you use it, you change it.” The author continues later, “On the Net, the children have had to search for, rather than simply look at, information. This forces them to develop thinking and investigative skills—they must become critics. Which Web sites are good?” Tapscott rightly calls the model of education we currently use—teacher lecturing and student listening—Industrial, but I think he may be off by a few centuries. The model is more that of a pre-Gutenberg culture, before the printing press made it possible for people to read what they wanted. I would agree with Jeffrey Bannister, quoted in Tapscott’s book, who uses the term “pre-Gutenberg.”

We’ve got a bunch of professors reading from handwritten notes, writing on blackboards and the students are writing down what they say. This is a pre-Gutenberg model.

I might point out in passing to Bannister that, in attempting to accommodate multiple learners, it is considered good practice to write on the board for the students who learn by reading, not hearing. Indeed, Tapscott also states that,

Students are individuals who have individual ways of learning and absorbing information.  Some are visual learners; others learn by listening.  Still others learn by physically manipulating something.

As early as 1967, Marshall McLuhan, also quoted by Tapscott, said,

Today’s child is bewildered when he enters the nineteenth-century environment that still characterizes the educational establishment, where information is scarce but ordered and structured by fragmented, classified patterns, subjects, and schedules.

The New Learning must be customized for each student’s needs. Tapscott also quotes Howard Gardner, who called today’s educational model mass production, a reflection of the industrial economy, which created assembly lines and the Taylorism that forced human beings to work in tandem with machines. According to Gardner, school is also mass production. “You teach the same thing to students in the same way and assess them all in the same way,” he says. True, but this is how No Child Left Behind teaches, as it must, for the standardized test. Even the best secondary schools teach to the entrance exams so that the students can get the highest scores, not necessarily the best critical thinking skills. The test becomes the teacher. How are the Net Geners going to respond to a mechanism as crude and arbitrary as an SAT test? Note that these standardized tests do not take into account the way that the test-takers, the Net Gen, actually think. Change takes place at a glacial pace, especially when the entire educational system rests on a foundation of magical thinking: if the speaker says it, it is so. Education equals authority—unquestioned authority. How did this strange combination of information without questions come about? And how did such a procedure become labeled “education”?

When Gutenberg invented the printing press, the Church was against this new instrument because the sacred words, once intoned only from the pulpit by the voice of authority, would now be distributed to the great unwashed. The Church feared, rightly, that the power of the printed word and of reading would allow the people to challenge the priesthood. The authority of the Church was unquestioned and was based upon a far older form of disseminating information, an oral culture of storytelling. A culture of storytelling is a logocentric culture, backed by the presence of the speaker, who is the source of the story, the information, and the truth. God spoke to Noah, to the Prophets, and so on, and the word of God was transcribed. It was the task of religion to tell those congregated the words of the Lord. The Church inherited a largely illiterate society—even kings and queens often could neither read nor write—that had to be preached to. Through years of standing for six to eight hours in cathedrals, hearing mysterious Latin, listening to sermons, and “reading” the sculptural programs and the frescoes, the uneducated people under the care of the clergy were socially conditioned to listen to one voice (God’s) and one source of authority (the Church). The Protestant movement was proof that once the common person could read the words of the Bible, those people would take unto themselves the power to interpret God himself.

There are historically close ties between the Church and the University. The first universities, the Sorbonne and Oxford, were affiliated with religion, and, with the clergy the only educated group, the priests became the first faculties. The traces of this history are clearly visible on any graduation day in the procession of professors marching down the center aisle of the school auditorium, as the clergy file down the nave, in full “regalia,” wearing long black robes, very monk-like. Further traces of the Church lie in the very practice of lecturing: the teacher stands at the head of the class and speaks alone. The students speak only to ask questions and are expected to subside into obedient silence. Just as the priests re-spoke the Word of God, academics re-speak the words of their precursors. The very form of academic and scholarly phraseology mimes the sacred scriptures: “As —- tells us,” “As —- famously said,” and so on. Logos is handed down from authority figure to authority figure. Academics depend upon the logocentric tradition and upon the mystical belief that the speaker is backed by the fullness of authority. It is as if Moses descended from the mountain, bearing tablets written in stone—not to be altered—after communing with the Almighty.

The assumption of a plenitude of knowledge, like that of the completeness of presence, is a false one, but authority must be protected at all costs. Another prevailing characteristic of education, inherited from the Church, is, paradoxically, secrecy. Knowledge is guarded by the initiated, those who are learned in the ways of scholarship; knowledge is not to be given out freely, especially insider secrets. Like the Greek temples, where only the priests were allowed inside the inner sanctum, only those inside the circle of the select are allowed to “speak” or be “present,” that is, to publish, that is, to “re-speak” the already spoken. The Internet has changed all that. The Net Geners are not readers; they are not listeners; they are iconographers. As Tapscott notes,

Net Geners who have grown up digital have learned how to read images…. they may be more visual than their parents are…. (They) tend to ignore lengthy instructions for their homework assignments…

Tapscott points out that students of today learn better through images. Indeed, this generation has invented a series of new hieroglyphs that function as signs, such as happy = :) and sad = :(

Today’s students, Tapscott points out, will want to customize their education. He mentions that “tinkering” has made a comeback. Indeed it has. The time of the mash-up has come. In higher intellectual circles, we call the mash-up, or sampling, bricolage, that is, taking the existing culture and making something else with it. This is postmodern thinking: reclaim, reuse, remake, recycle. The very same teachers who teach postmodern theories are those who insist upon “original” work from students who are what I call the Mash-Up Generation. The professors who eagerly and enthusiastically teach Postmodernism, or the questioning of the “metanarrative” of Modernism, will reject cutting and pasting and demand that the student cite “sources,” or the validating voices of authority. The same professors find it hard to accept that a student has ideas of his or her own, attitudes that stem naturally from their own generation, for, although the Boomers may have resisted authority, they knew it existed.

If my generation got into trouble for questioning authority, this generation gets into trouble for leveling sources. Every voice, every bit of cultural material has equal value and can be freely borrowed and re-used. The Net Gen seeks convenience and speed over venerated voices, who are often unwilling to make themselves available on the web. Even more threatening to the traditional authority of educators is the declining value of scholarly knowledge, which is being bypassed and ignored by the mainstream undergraduate. Every teacher knows that students think that Google is a database. Students routinely ignore the expensive databases, paid for by student tuition and made available through library websites. Getting into the databases is a clumsy, cumbersome, and often unrewarding enterprise, because the technology of these databases is antediluvian. Naturally, the student goes to Google’s fast and functional search engine to find information. Like the Net Gener who gets a job and finds, to his horror, that the technology is twenty years behind the times, the student will not tolerate the ritual of multiple clicks and passwords and all the other paraphernalia that work to make knowledge inaccessible. Even when forced to read a credible source, the students, accustomed to the all-purpose Net-speak, rebel at the insider jargon, written by scholars for scholars.

Net Geners want to be informed, not talked at. They like to take materials they find helpful or interesting and remake them. As opposed to always referring back to the authorities, the Net Gen likes to write its own material and to create its own content. Tapscott indicates that the Web actually encourages creativity and productivity because the Web gives easy access to inventors. From their habits of playing video games or participating in the virtual reality of Second Life, the Net Geners learn how to play their own game. Speaking of video games, Tapscott says,

“This kind of play is deeply creative. It involves trial and error, learning by experiment, role playing, failure, and many other aspects of creative thinking.”

None of this kind of creativity is allowed in education. Play is forbidden and failure is mocked. In contrast, the author discusses a thirteen-year-old writer who contributes stories to a website, where they are read by thousands of readers. “Isn’t that better than writing on paper and hoping that some day it might get published?” Tapscott asks. For today’s teachers and professors, the Web 2.0 is something Roland Barthes would have loved: this new Web is called the “read-write” web—we read it and we write it.

Although there are many teachers who are eager and willing to try more experimental, student-centered ways of making learning a collaborative enterprise between mentor and apprentice, they are constrained by a system that demands command and control. Distance Learning still attempts to replicate a now-obsolete classroom format by demanding assignments at set due dates, by demanding chat room appearances at a set time, and so on. This is hardly learning the way the student needs it: customized, available when the student can devote the time to it, at a pace that facilitates learning. Even distance learning classes end after a set number of weeks. Traditional classroom education is ruled by the physics of time and space: one teacher to a classroom, a certain number of students in a space, taught a common-denominator course that must fit into a larger curriculum at a specific time. Student-centered education is reduced to allowing students to speak more or to participate in class discussion, for there is no time for the teacher to waste: s/he has a set amount of material that must be covered.

Students are increasingly unwilling to learn in the traditional manner because they assume all knowledge is available on the Internet. Why learn math when one has a calculator? Why not teach how to use the calculator to find the answer? Why plow through many books when Wikipedia tells you anything you want to know and, even better, lets you write the content? Tapscott tells an amusing story about interviewing a young man named Joe O’Shea, who stated that he never read a book—why should he? All the information he needed was on the Internet.

“I don’t read books per se,” he told the erudite and now somewhat stunned crowd. “I go to Google and I can absorb relevant information quickly. Some of this comes from books. But sitting down and going through a book from cover to cover doesn’t make sense. It’s not a good use of my time as I can get all of the information I need faster through the web. You need to know how to do it—to be a skilled hunter.”

Before you educators out there jump to your feet to explain the difference between “information” and “knowledge,” know that the punch line was that the young man had just been awarded a Rhodes scholarship.

Business

Tapscott describes a new world in which the consumers remake the product, as they are remaking education. Education, he suggests, should think like a business and respond to the consumers, but Tapscott also points out that businesses which do not respond with agility to the demands of the Net Gen can get into trouble. The Net Gen, rightly in my view, regards businesses and corporations with suspicion. Tapscott points to the empowerment of the Net Geners, who like to be “prosumers,” that is, proactive consumers who customize their products. Young people have been prosumers for generations, but no one named their practices until recently. Little girls have always treated their Barbies to new hair-dos, and teen-age boys have always modified their cars with after-market products and custom decoration. This desire to contribute to mass-produced and mass-marketed products has only recently been harnessed by companies such as Apple, where “there’s an app for that.”

The users of Apple have often been referred to as a “cult” because of their devotion to the product. The term “cult” is derogatory and comes from those who simply don’t understand how the Net Gen thinks. Apple is thought of by the techies as an honorable company, one which strives to produce a product that is beautifully designed and user-friendly. In addition, the company works closely with its user base, from the Bleeding Edgers to the novice customer, asking the tech-savvy to participate in the improvement of the function and design of the product and watching for the difficulties of the blunderer so that Apple can make functions more straightforward. The reason the flap over the iPhone 4 and its faulty antenna was so minor to Apple users is that those customers know the company will fix and improve the problem with the next iteration of the phone. The Apple user is invariably an Early Adopter who expects such glitches and enjoys participating in the fix. This kind of audience participation is the Apple business model, and it has won the company a devoted following.

But not all companies are so accommodating to the customer base. Witness the hostile relationship between music lovers and the music industry, between publishers and those who write and read books, between the car companies (Toyota) and those who drive. The new generation of consumers wants to customize their experience with the product, Tapscott declares, but the corporate mind thinks in terms of profit, not the prosumer. To the Net Gen, music and art and literature and knowledge, like information, should belong to no one and to everyone. Downloading “illegal” music is common practice, done without shame or remorse. How can anyone own music? Doesn’t art belong to everyone? The Net Gen is forcing companies that want to survive to be transparent and participatory, Tapscott writes. Older corporations do not want to interact with their customers. Like the traditional media, the corporate mind insists upon one-way communication: top-down. As Tapscott says,

…the industry has built a business model around suing its customers.  And the industry that brought us the Beatles is now hated by its customers and is collapsing.  Sadly, obsession with control, privacy, and proprietary standards on the part of large industry players has only served to further alienate and anger music listeners…

Tapscott states that the Net Gen prefers flexible hours and “want to choose when and where they want to work.” Not only that, these young people want their work to be “meaningful.” “They’re not loyal to an employer; they’re loyal to their career path,” he remarks. Imagine the surprise of business types when the Net Gen shows up to “work.” The Net Gen wants to play. The Net Gen employee comes to a company for one reason—no, not a job—to learn. Once the Net Gen worker learns what s/he needs, s/he will move on to the next learning experience. It is pointless to expect the Net Gener to be “loyal” to the company. The concept of loyalty that his grandfather may have enjoyed was broken when companies began sending jobs overseas in the Seventies. Companies still expect the employee to commit to being a permanent fixture, while refusing to guarantee lifetime employment, much less health care. For the average corporation, human beings are a financial liability, but the Net Gener comes to play with the idea of contributing creatively.

Companies tend to create what Tapscott calls a “generational firewall,” which separates the newbies from the oldtimers. This strange way of not utilizing recruited talent is not unfamiliar to me. I have often asked: why hire someone who is then suppressed and underutilized? Business runs on a hierarchical basis; those at the top give orders, and the orders roll downhill, where the underlings carry out the dictates. Net Gen employees, according to Tapscott, do not accept hierarchy and assume that they were hired for their talents. If they are not allowed to participate as equals, the most talented will simply move on. Their attitude, quite properly, is: if you won’t listen to me, why should I stay? The Net Gen wants to contribute and needs to contribute to something meaningful. As the parents of the Net Geners changed the model of parenting, education needs to change its traditional assignments and business needs to change its traditional models. Show the Net Geners what’s in it for them.

Politics

That same attitude—what’s in it for me?—appears in politics. Today there are two common questions in popular culture: “What would Jesus do?” and “What’s in it for me?” We assume that Jesus would not say, “What’s in it for me?” We like to think he would say, “What can I do for you?” “What’s in it for me?” is a business question, and the answer has to be “profits.” “Profits” is a business answer. So when a politician promises to run the government like a business, that implies that the government will not be in the service of the people but in the service of profit-making entities, like corporations. Imagine if government were run like a business, like, say, an oil company or a music company. Tapscott is convinced that the Net Geners have a better way. The Net Gen voter is an active participant who, unlike her grandparents, is a volunteer or a community activist, Tapscott says. Some of the Boomers joined the Peace Corps, some marched for Civil Rights, and some protested against the Viet Nam War. Others marched for women’s rights and demanded gay rights. The Boomers’ children are the Net Roots, who became activated by the prospect of being allowed to participate in the election of Barack Obama.

Tapscott discusses the Internet-based campaign at length, and reading these passages now, two years into the Obama administration, is enlightening. I think that much of what Tapscott writes is insightful and informative, and I learned a lot from reading his book; however, I do think he is too sunny, too hopeful, and too optimistic. Politics is a case in point, as the enthusiasm for Obama waned quickly. The Net Gen expected results. When Obama promised “transparency,” they thought that the President was thinking in terms of the open, artless, and fearless sharing that takes place on Facebook. The web is totally open and uncontrolled as a source of energy and information. The web is a place where things happen. That is why so many people (like me) devote their time to contributing to it. But the Net Gen quickly learned its lesson. As Tapscott writes,

Most Net Geners believe that the mechanics of power and policy making are controlled by self-interested politicians and organized lobby groups…The Net Generation does not put much trust in politicians and political institutions—-not because they are uninterested, but rather because political systems have failed to engage them in a manner that fits their digital and ethical upbringing.

The Net Gen’s experience as Internet users has taught them that if they coalesce around a cause, they can make changes. The Net Gen volunteers for Obama were so excited because they were “natural” Democrats; that is, they shared a cultural attitude that the government should work for the people and that they—the (young) people—could shape the outcome through their participation. According to Tapscott, the Net Geners are not conservative but more open to change and new ways of thinking than any other generation. But a Democratic victory did not bring the change they expected. And now the Net Gen has turned its back on the administration. Why? The problem is that the government is controlled by a group of middle-aged people who will not let go of power. Just look at Congress on C-Span. All old White Men. No one under forty. No poor people. Few People of Color. Some women here and there. No collaboration, no participation from half of the members of Congress, who appear to have abdicated their governing responsibility in the pursuit of political power. That strategy of not participating is utterly alien to the Net Gen way of thinking.

Things only get worse when one turns on the news programs. The age gap is shocking. Although there are some networks or news programs I do not watch, I record at least four hours of news a day on TV (to which I listen while I am writing) and read three newspapers a day. There are no young faces, no young writers (and therefore no young readers), no young voices, no young way of thinking. Only the Hill reporter Luke Russert, the bright son of the late Tim Russert, stands out as someone under thirty. An entire generation is being left out of the conversation. The elders reflect back on their days with President Carter or President Clinton, prehistoric eras for the Net Gen, and discuss and debate raging political quarrels that are non-issues for the younger generation.

People—usually men—well beyond their childbearing years decide abortion policy. People—increasingly women as well—who are too old to fight send the younger generation off to war for their own political ends or their lobbyists’ needs. People with lifetime jobs in Congress decide how much money the unemployed will or will not get. People with guaranteed government health care decide that others cannot have those same privileges and see no hypocrisy in their positions. Those who are heterosexual (they say) decide the personal lives of homosexuals. And so on.

Would the results be different if the younger generation made itself heard? As Tapscott points out, this generation is far more tolerant than their parents or grandparents. It is their grandparents who fret over racial and gender equality, interracial marriage, “illegal” immigration, gay marriage, and other hot-button issues. For their grandparents, global warming is debatable; for this generation, raised on green values, a devastated planet is their inheritance. If you asked a Net Gener which problem worried him more, the budget deficit or global warming, he would say, “global warming.” Always the optimist, Tapscott writes,

I’m convinced we’re in the early days of something unprecedented.  Young people, and with them the entire world, are beginning to collaborate—for the first time ever—around a single idea: changing the weather.

For the Net Gener, it is discouraging to see who is in power and to watch how they behave. Partisan bickering and political game playing instead of collaborative play, negation instead of affirmation, blocking change instead of accepting it: all of this is alien to the younger generation. Those in the government and those elected to office are one-way communicators, out of touch and out of date. They allow the public to “speak” every two years at the ballot box. And these are the people to whom the question of Net Neutrality will be turned over. The corporations want to segment the Internet so that they can maximize profits from what has been a free good, available to everyone. The question of whether or not the Net will remain the great equalizer will probably be decided by the Supreme Court, presided over by a Chief Justice who does not understand e-mail.

I am not a wonk, but I am probably better informed than some, and I value facts over ideology. So does the Net Gen. For us it is not Democrat or Republican, liberal or conservative; it is integrity, honor, and the desire to tell the truth. For Washington, D.C., it is sound bites and talking points. By selling the “War on Terror,” the “War for Weapons of Mass Destruction,” and the need to bail out the Big Banks to a credulous public, the government has created what a Bush appointee called a “post-truth” society. How true. For the Net Gen, truth matters. The public’s trust in its leaders has been shattered, leaving a vacuum for the bloggers and talkers to fill. Another authority has to be appointed and anointed. For the older generation, still willing to accept one-way communication, sound bites stand for wisdom, tweets become knowledge, and talking points are the truth. The Net Gen finds it astounding when politicians change their stories and refuse accountability, even when they are caught changing their positions, lying, or fabricating stories. The Net Gen is used to trawling the Internet for the facts and cannot understand how their elders can lie, get caught, pay no consequences, and then lie again. No wonder they are disillusioned by politics.

The Future

Tapscott does not entirely ignore the real problems brought by the Internet revolution. He points to the gap between the have-nots of technology and the active users. His main examples are the poor of the third world, but there are other have-nots closer to home, such as the poor, the elderly, the closed-minded, and the technophobes, who are falling further and further behind. Then there are the bad effects of the Web. One of the odd and underreported facts of technology is that the Bleeding Edge is usually made up of illegal or questionable practices that become outlets for pathologies: online gaming, Wall Street derivatives, pornography, pedophilia, and online bullying. It is these Early Adopters who benefit the Web by using it and creating new pathways, which means that these nebulous people are always one step ahead of the forces of law and order. Parents protest perfectly legal video games, such as the horrible Grand Theft Auto (which has awesome artwork), but forget that they themselves watch and enjoy violent adult films such as Pulp Fiction. That said, the dangers of the Internet are real, but, in the name of freedom, the Net Gen will defend the right of anyone and anything to prowl there. One can only hope that the same Supreme Court that granted freedom of speech to corporations will see fit to keep the Net open to all comers.

Tapscott believes that “Net Geners are quick to recognize that the best way to achieve power and control is through people, not over people.” A good lesson. The Net Gen is intelligent enough to know that Obama cannot change Washington, D.C. There are too many entrenched interests. The question has become not “What can I do for you?” but “What’s in it for me?” All that hard work, all that dedication, all that Hope, and no payoff, no results. People go into politics to get things done, to make things happen, and when nothing changes, they turn away. It’s like your last job: you learned something new and then moved on. How sad. The problem for the Net Gen is that the fifty- and sixty-something generation of Baby Boomers has no intention of changing or of letting go of power. They are impervious to the Net Gen. “They,” the Big Banks and the Big Corporations like Big Oil, are so powerful and have such a stranglehold on America that “They” answer to no one. Big Business cares nothing about the Net Gen, neither as employees nor as consumers. By the time the Net Geners have their turn to come into power, they too will be in their fifties, fully thirty years from now. The Baby Boomers joined the Tea Party in their maturity. What will the Net Gen do with their golden years? Tapscott concludes his book,

The big remaining question for older generations is whether that power will be shared with gratitude—or whether we will stall until a new generation grabs it from us.  Will we have the wisdom and courage to accept them, their culture, and their media?  Will we be effective in offering our experience to help them manage the dark side?   Will we grant them the opportunity to fulfill their destiny?  I think this will be a better world if we do.

Dr. Jeanne S. M. Willette

The Arts Blogger

Suggested readings from Don Tapscott’s Bibliography:

Beck, John C., and Mitchell Wade, Got Game: How the Gamer Generation Is Changing the Workplace, 2004

Benkler, Yochai, The Wealth of Networks: How Social Production Transforms Markets and Freedom, 2006

Carlson, Scott, “The Net Generation Goes to College,” Chronicle of Higher Education, October 7, 2005, chronicle.com

Gee, James Paul, What Video Games Have to Teach Us about Learning and Literacy, 2003

Howe, Neil, and William Strauss, Millennials Go to College: Strategies for a New Generation on Campus, 2003

——, Millennials Rising: The Next Great Generation, 2000

Keen, Andrew, The Cult of the Amateur: How Today’s Internet Is Killing Our Culture, 2007

Moglen, Eben, “Anarchism Triumphant: Free Software and the Death of Copyright,” First Monday, August 1999, emoglen.law.columbia.edu

Prensky, Marc, Digital Game-Based Learning, 2000

Roos, Dave, “How Net Generation Students Learn and Work,” Howstuffworks.com, May 5, 2008

Tapscott, Don, and Anthony D. Williams, Wikinomics: Harnessing the Power of Mass Collaboration, 2006

Weinberger, David, Everything is Miscellaneous: The Power of the New Digital Disorder, 2007

Mentioned in his book but not included in his bibliography:

Carr, Nicholas, “Is Google Making Us Stupid?” Atlantic Monthly, July/August 2008