
The Lost World of British Communism (Review)

The book collects a series of articles published during the 1980s, in the midst of the Miners' Strike and a crippling division in the Communist Party of Great Britain (CPGB). The essays have been gathered to mark the tenth anniversary of Samuel's death on 9 December 1996, at the age of 62 and at 'the height of his powers'. The text seeks to view the party of the 1940s through the prism of the same party in the 1980s.

When Samuel was writing these articles a fierce battle was being waged within a number of European countries, most notably Italy, between the traditional 'old guard', often aligned to the party line of the Communist Party of the Soviet Union, and the 'innovating' Euro-Communists, who rejected Marxist economic theory and sought to appeal to a new constituency, creating a new form of popular front policy around the claims of the 'specially oppressed'.

Within the UK the Euro-Communists established themselves in positions of leadership within the CPGB, eventually driving hard-line factions to break away into separate organisations. The Euro-Communist-dominated leadership of the CPGB disbanded the organisation in 1991, establishing in its place a political think tank called Democratic Left. Its detritus then found expression in Charter 88, an organisation dedicated to 'constitutional and electoral reform'.

Raphael Samuel writes as a long-standing member, deeply involved in the tumultuous history of the CPGB. Born into a 'Communist family' at a time when, he suggests, membership of the party provided 'a way of life', he describes a 'complete social identity' at odds with the rump of an organisation the CPGB would become in the years before his death in 1996. He markedly contrasts the party's declining influence with that which he knew as a boy, born in 1934 to a Jewish family in London, selling the Daily Worker at the school gates. He provides a glimpse of a closed world where, in the words of his partner's preface to the text, "his reading, friendship and social life were all dominated by politics".

Relevance Today

He identifies many traits that would continue to characterise the left long after the party was over, when the struggle between the 'Euro-Communists' and the hard-line 'Straight Left' was but a distant memory to all of the participants.

The organisation portrayed is one in the death throes of its final struggle, as the Executive "expels whole branches, expel honoured veterans and 'screen' new recruits with hardly a murmur" (26). Samuel suggests this merely reflects the 'democratic centralism' of the organisation, with little acknowledgement of the abuse the term had suffered through such bureaucratic manoeuvres since its adoption by the Social Democratic parties of Russia and Germany in the early 20th century.

Samuel gives a glimpse of the interpretation of 'democratic centralism' provided by the party in the post-Second World War period, citing 'The Role of the Communist Party' (1957) on 'factions'. The 'training manual' Samuel cites notes that "members … have not the right to combine with other members in other Party organisations who think like them to conduct an organised struggle for their point of view" (83).

That Trotskyist groupings would also mimic this distortion of the organisational method suggests that the term serves, for Samuel as for the far left, as a catch-all cover for expedient political manoeuvring in any given socialist-tinted organisation. Such a simplification is understandable in the context of a journalistic reading of the movement's history, or of the jockeying for power within a political sect, but not perhaps in someone so deeply ingrained in that history and seeking to provide a credible appraisal of it.

Popular Front

Samuel touches upon another defining feature of his involvement in the CPGB. In 1935 the Soviet Union adopted a new position on its relationship with Social Democrats, previously described as 'social fascists' posing an immediate danger to the working class. It now argued for 'Popular Fronts' with them.

Noting that "previous formulations [had] suffered from a tendency to overrate the degree of maturity of the revolutionary crisis" (History of the CPGB 1927–1941, 125), the Comintern leadership now argued for 'popular fronts' that were in fact "only a new name for that old policy, the gist of which lies in class collaboration in a coalition between the proletariat and the bourgeoisie" (Trotsky on Britain, Vol. 3, 192). Communists involved in these fronts subordinated their political independence to the bourgeois forces with which they were in coalition.

Samuel's family had experienced first hand a Communist Party that had by then entered its 'Popular Front' period. In the mid-1930s the communist parties of the Popular Front won overwhelming electoral victories in France, while in Britain the popular front initially found a muted reception in the election of Willie Gallacher as an MP in 1935. It was not until the mid-1940s that the strength of Popular Frontism in the United Kingdom would truly be felt, with the CPGB garnering 103,000 votes in the 1945 general election and electing two Communist MPs, alongside a membership peak of 60,000 in 1943. Samuel notes that the organisation's all-time membership peak came not in 1943–44, when the Red Army was "sweeping back the Nazi invaders", or in 1944–45, when the "Communist led resistance was on the threshold of power in France and Italy", but in the "black months" of 1942, when the "Russians at Stalingrad - like the British at Dunkirk were fighting with their backs to the wall" (57).

The political legacy of the popular front would linger long after its initial adoption. Within the party of the mid-1980s the 'popular front' remained the guiding light for both wings, at least in theory. The Euros used it to justify a 'Broad Democratic Alliance' (BDA), a debate about which raged in the pages of Marxism Today in the late 1970s. The popular front was cited as the party's most 'effective strategy' by venerable MT theoreticians such as Eric Hobsbawm, alluding presumably to the height of the party's popularity in the 1940s (38).

Samuel does not diverge greatly from this conception of the Popular Front as the CPGB at its best. His reproach to the Euro-Communists concerns the extent to which they have strayed from its initial intentions. He notes that the adoption of the popular front was justified, at least formally, by an analysis of the underlying division of social classes in the 'theory' offered to legitimise coalitions with bourgeois forces, something that cannot be said of its modern-day Euro adherents. Samuel also chastises the Euros because their emphasis on an 'alliance' is counter-posed to that on 'unity'. The comparisons do not withstand the strength of historical analogy, Samuel suggests, contrasting the valiant fight of the International Brigades with the Euros' 'mobilizing the support of the house of lords for the labour government'. Today, a somewhat muted expression of this understanding of the popular front period as "British Communism's Finest Hour" is readily apparent in the political discourse of the Communist Party of Britain [1].

That the 1930s Comintern's triumphant rallying cry of the 'Popular Front' had disastrous implications for Spain is clear. The deliberate sabotage of those who raised the issue of a revolutionary workers' government belies the supposedly heroic role played by advocates of the popular front during these halcyon days in the party's history. The popular front in fact served to contain social struggle: when the prospect of a serious revolutionary explosion presented itself, the reformist leaders sought to limit the actions undertaken, lest they alienate bourgeois allies and break up the established coalition.


Samuel characterises both wings of the late-period CPGB as broadly adhering to an age-old belief in 'correctness', articulated in an appropriately formulated 'political line' that would inevitably provide a 'clear lead' if followed and adhered to (23). This role was attributed to the snooze-inducing British Road to Socialism, despite the evident lack of interest Samuel notes from anyone outside the limited remit of the party cadre.

As to the membership's self-belief in this role, he does note a sea change: from the sense of urgency among the broad membership of the Communist Party of the 1940s, in which each new event highlighted the 'terrible dangers that lie ahead' (35), to a situation in which much of that dynamism had faded away. As illustration of this relative decline Samuel contrasts the role played by the party during the 1984–85 miners' strike, during which he suggests the party conspired to lose leading members, with its involvement in the 1926 General Strike, following which party membership "doubled as a result of its activity" (34).

Samuel does not touch upon the many failings of the party's leadership during this period of growth. That a potential revolutionary explosion was smothered by the political direction of the Soviet-led Anglo-Soviet Council, with its over-reliance upon the existing trade union bureaucracy, is largely overlooked. While the party undoubtedly had great influence during this time, its use of that comparative strength often served to limit the "revolutionary possibilities" glimpsed within these periods of heightened class struggle.

Trotsky would argue in 1928, in opposition to Stalin and Bukharin's tactical prescriptions, that the policy of the Anglo-Soviet Council had crippled British communism with devastatingly fatal results. The extent to which the failings of British Communism can be attributed directly to the role played by the party during this period remains a subject of debate. Trotsky's biographer Isaac Deutscher was critical of the view that the policy of the Anglo-Soviet Council had been the "basic cause of the prolonged impotence of British communism", which still "vegetated" on the fringes of British politics thirty years later (The Prophet Unarmed, 186). That 1926 marked the closest Britain had moved to the brink of revolution since the party's inception is clear; that, had a 'correct' line been followed, the party would have been in a stronger position to give a lead to the most advanced sections of the working class is also not in doubt.


Samuel's work remains important to Marxists today precisely because the lessons to be drawn from the experiences of the CPGB at its height have still not been learnt. If there is a central point to draw from Samuel's text, it is that the political independence of the working class must still be stressed in opposition to sections of the bourgeoisie and the labour bureaucracy.

Modern Trotskyism, in its myriad expressions, is itself an expression of the failure to learn these lessons. The experiences Samuel documents have instead been replicated on a ridiculous micro-scale by the Trotskyist left of the 21st century. We find this in the politically bankrupt Respect project of the Socialist Workers Party, and in the ease with which basic demands of republican democracy were downplayed as "abstractions" by the existing Trotskyist left within both the Socialist Alliance and, more recently, the NO2EU campaign.

Again, this point is not made to draw crude historical analogies with the CPGB at its height. It would be wrong to suggest that the failures of the CPGB may be directly compared with those of the existing left. However, the clear need for the political independence of the class from the bourgeoisie has been repeatedly demonstrated by the defeats that have resulted from such class-collaborationist outfits.

Meanwhile the abstentionism of another section of the 'orthodox' wing of British Trotskyism flies in the face of Trotsky's advice to his supporters in the 1930s. During the so-called 'French Turn', Trotsky urged them to join the socialist parties participating in the people's front in order to work with the leftists within them. In doing so he used support for candidates standing against the Radical Party to oppose the coalition policy and to reassert the importance of the political independence of the class.

While the tactics adopted must remain flexible and able to adapt to concrete situations, it is this guiding principle underlying our tactics that must continue to direct our politics as Marxists. These are lessons we must not only draw from the history of the CPGB but, now more than ever, seek to apply: Marxists should engage not in short-cut attempts to build left-tinted populist organisations and fronts, but in building a unified Communist party able to effectively assert these principles. What is called for now is the unity of Marxists as Marxists.

Posted by J.B at 10:42



New Left Review 61, January-February 2010 

Teri Reynolds


I spent my early childhood in a trailer park in Texas so, until I became an emergency physician in Oakland, I thought I knew something about barriers to healthcare access, and maybe even something about poverty. The Emergency Department at the Oakland county hospital has around 75,000 visits a year—say, 200 a day. It has 43 beds; because of overcrowding, there are ‘extra’ patient beds in the hallways, which have ended up being designated as official patient-care areas: first came Hallway 1, then, a year later, Hallway 2, and now Hallway 3 as well. At night the ED usually has one supervising physician with a couple of housestaff—trainee doctors—a student or two, and around ten nurses; there is double supervising coverage from the late morning through to about 2 am, the hours of heaviest traffic.

County hospitals are where those with no insurance go. The elderly and disabled who qualify for Federal Medicare and Medicaid insurance may also go there, but they often take the insurance elsewhere. Those who have no insurance, no money and nowhere else to go, come to the county hospital. Our speciality is the initial management of everything. There are patients who bless me for my time, after they have waited 18 hours to see me for a five-minute prescription refill, and another who regularly greets me with, ‘Yo bitch, get me a sandwich.’ I did have one patient, born at the county hospital, who lied about his private insurance in order to return to what he called ‘my hospital’, but many more who feel they have hit bottom when they cannot afford to get care elsewhere.

Around 47 per cent of the patients are African-American, and 32 per cent Hispanic. We call the Mongolian and Eritrean telephone translator-lines on a regular basis. We also see the patients who are not entirely disenfranchised, but fall out of the system when they lose their jobs; most Americans have insurance linked to employment, either their own or a family member’s. It is not infrequent to see the primary reason for a visit to the hospital listed as ‘Lost Insurance’, ‘Lost Kaiser’ (the main private health maintenance organization in California), ‘Lost to Follow Up’ and once, just ‘Lost’, but we all knew what it meant. We see patients every week with decompensated chronic disease who say, ‘I was doing fine until I lost my job and couldn’t get my meds.’

Some of the visits are for true emergencies—there are 2,500 major trauma cases a year. These are usually shootings, stabbings, falls, assaults and automobile accidents; many, if not most, involve alcohol and drugs. In 2008 there were 124 homicides in Oakland alone, most of them due to gun violence; many victims have been involved in violence before. The Emergency Department gets a stream of teenage gunshot victims, cursing and yelling as they come in, swinging at medics and police with arms scored with gang tattoos; by the next day we see them emerge as the children they are, cowed by the presence of their mothers beside the recovery beds. We also see the bystanders, the teenagers who get shot while walking home from school, the elderly Chinese man hit by a stray bullet as he stepped outside to get the newspaper, the mother shot stepping in front of her son—who claimed not to know the shooters when interviewed by the police, but was overheard by the nurse the next day rallying his ‘boys’ for a revenge run. This kind of trauma has a way of turning victims into perpetrators. The first ‘death notification’ I did as an intern was to the mother of three boys. The older two had spent three months on the East Coast with relatives to let a ‘neighbourhood situation’ cool off. Less than 24 hours after their return to Oakland, they were shot while walking down the street together. The two older boys died. The 18-year-old had a collapsed lung, but survived. At his last trauma clinic follow-up, he was referred to social work for ‘clinical evidence of depression’, though at the time there was no outpatient social-work clinic available.

Drugs and alcohol increase all kinds of risk, and traverse all social classes, but cocaine is its own special force in this community. Smoking crack cocaine is such a common trigger for asthma exacerbation that we have come to call it ‘crasthma’ at signout. At first, Emergency Department doctors were startled when small, wiry elderly women coming in for chest pain tested positive for cocaine on the urine screen. It turned out they were social opium smokers from the hills of Southeast Asia, who turned to smoking crack cocaine when their immigrant families moved them to Oakland. It must have seemed somehow similar, though it turned out to be much worse for their hearts. I recently saw a 55-year-old woman who had been found on the floor by her family in the middle of the night. Her CT scan showed a large bleed in her brain. After years of planning she had managed to set things up to move her family back to Mississippi where she thought her teenage grandsons, who had begun flirting with gang activity, would be safer. She had been up all night cleaning the house and packing to leave the next day, and had used the cocaine that had likely caused the brain bleed to help her stay awake.

There are the everyday medical emergencies: septic shock, heart attacks, strokes, deadly lung and skin infections, respiratory and cardiac arrests. These, along with the major traumatic injuries, are the cases the ED was designed for. But most of our patients do not have emergent conditions; they are just ill, and have nowhere else to go. The county system has a wide complement of outpatient clinics, staffed by some of the best doctors I know. But the last time I checked, their next available primary-care appointment was six months away. Sometimes there are no appointments at all, just a clipboard where we scribble a name and medical record number, to put a patient in line for the six-month wait.

Then there are the patients who did have an outpatient clinic appointment, but no telephone, and so were not informed when their clinic visit was rescheduled. There are those who have to take three buses to get to the clinic and miss the last one; those who would like to see their doctors, but forget to come in when they drink too much; and others, especially the elderly, who won’t come to late afternoon appointments because they are afraid to travel home after dark. Some patients just need prescriptions—those whose medications are stolen, those who finish a prescription before a refill is available because they feel bad and double their own dose, or those who just want the cough syrup with codeine that has become a popular drug of abuse. There are those who have lives so complicated—by three jobs, or six children—that a 3 am emergency visit is all they can manage. They come to the county ED because we are always open, and refuse care to no one.

Coming onto a shift, we hit the ground running. There is signout, a 20- or 30-minute verbal handover of all the patients in the Department, with an update on their status and discussion of what still needs to be done. Most of the shift is spent running around seeing patients and discussing their management plans. But we also negotiate with consultants and admitting doctors, intervene to control ambulance traffic, and troubleshoot staffing issues. There is no official break—we grab food when we can. I carry a portable phone that rings off the hook with referrals and questions. Emergency physicians are interrupted—by nurses, students, technicians, pharmacists and other physicians—every 3–4 minutes on average (this has actually been studied). There are shifts when I cannot find time to make it to the bathroom.

Nurse staffing is often the rate-limiting step in the process. Nurses—they range from fresh-faced graduates in tight pink scrubs to ex-military medics covered with tattoos—are the front line of care at the county hospital. They see patients first and are responsible for screening the dozens that present to triage at any one time, and deciding which ones need to be seen immediately and which can wait. They bear the brunt of patients’ frustration; they are the ones who undress them and find hidden wounds and weapons, medications and money, needles and crack pipes. Nurses have a maximum patient ratio of 1:4, mandated by California law and rigorously protected by the union. They also have mandatory protected break time and meals. Because physicians’ orders—on medications, for example—cannot be executed without a nurse, patients often wait for hours to be roomed or get pain relief. The union is generally a force for good, though some feel that it has compromised physician–nursing relations—and even I, who was our union delegate for some years, feel that it has fostered some abuse of the county system.

A few doctors rail at the patients who come to the Emergency Department for routine care, but most who have chosen to work in the county system pride themselves on being jacks-of-all-trades, holding steady in the middle of the maelstrom, being a part of the safety net. So when patients cannot get primary care, we tell them to follow up in the ED on our next scheduled shift. I have started patients on medication for newly diagnosed diabetes and transitioned them to insulin before they could manage to see a primary-care doctor. I have prescribed first, second and third-line medications for blood pressure. I have seen three generations of women, plus an uncle, in one family. There are a cadre of regulars we know by name; we discuss their recent visits and send around emails when they die. So we do deliver primary care; some of us enjoy it, and the patients certainly need it. But in the end, we are simply not very good at it. An Emergency Department is a lousy place to manage chronic disease.

The failure of preventive, primary care creates emergencies that should never have happened. The County Hospital is where diseases become the worst version of themselves: what should have been a case of simple diabetes, requiring oral medication and diet change, presents as diabetic ketoacidosis, a life-threatening condition of acid in the blood. We see severe infection that can only be treated with amputation, but was once simple cellulitis requiring antibiotics; numerous strokes, which could have been prevented through blood-pressure control. While the Emergency Department tries to give patients what they need, it cannot offer them a phone number they can call for refills, a clinic to return to or the chance to see the same doctor year after year.

Frequently, the ED fails to take the whole patient into account. Given the volume and acuity of the patients we see, some stable patients just have too many problems to address in the course of a visit. We talk about the ‘chief complaint’ in medicine—the main reason for the visit. It might be abdominal pain, a sprained ankle, lost insurance or chest pain. When patients start on a list of several complaints, we sometimes ask them to identify the main thing that brought them in that day. A colleague recently signed out a patient to me as ‘a 65-year-old man with vision loss in one eye for two weeks, seen here four days ago for indigestion, now waiting for a CT scan to rule out stroke’. I asked why we had not evaluated his vision loss when we had seen him four days ago, and was told that the patient had not mentioned it then. When we asked him why, the patient said he had been told he could only have one problem. He chose the indigestion because it hurt, while the vision loss was painless.

All Emergency Departments are legally required to examine patients and provide initial treatment, regardless of insurance status; but the definition of ‘initial treatment’ is broad. Frequently, we see patients with acute fractures diagnosed at a private hospital. They arrive with temporary splints in place and x-rays in hand, saying, ‘I didn’t have insurance, so they told me to follow-up here.’ When we want to transfer patients to a nearby hospital for cardiac catheterization to treat a severe heart attack, we are asked to fax over the ‘face sheet’, a summary printout of the patient’s basic demographic information: name, date of birth, address, phone number and insurance status. While it is technically illegal for hospitals who have room to refuse to accept a patient who needs a ‘higher level of care’, such as the cardiac catheterization that our hospital does not offer, we are frequently told there are no available beds. We are told this much more often for our uninsured patients than for those with Medicare, or those who have secured disability payments from the government.

Care delivery in America lags far behind our pharmaceutical and diagnostic science. Most applications for new drug approvals are in categories where good drugs are already available; more than new medicine for diabetes, we need good research on how to get the medicines we have to diabetic people. Our health system has generated an enormous cohort of patients who are diagnosed but untreated, or under-treated. These are not medical mysteries, but social ones. The barriers to appropriate healthcare are myriad, and not all are a function of the system. I have seen a homeless woman, probably schizophrenic, seeking her first care for a breast mass that must have been there for years before it took over half her chest. And a man brought in by the ambulance he had finally called when his legs became too swollen from heart failure and blood clots to get through his bathroom door. He hadn’t been outside in a decade. Or the young man who had been diagnosed with mild renal failure two years earlier and re-presented with a complication so severe that the kidney specialist I called told me he had only seen it once before, thirty years ago in rural India. The young man seemed reasonable—he was responsible enough to hold two jobs and support one family in the US and two in Mexico. He spoke no English and had not really understood that he was supposed to come back. Until he had become too weak to work, he had just carried on. These are patients disenfranchised by much more than the healthcare system in our country—by a collision of poverty, poor social services and lousy public transportation, substance abuse, language barriers and more.


I have recently shifted my practice to the ED of the University of California, San Francisco Medical Center, 12 miles away, for a one-year speciality fellowship. This is a tertiary referral hospital, famous for treating patients with obscure diagnoses, syndromes that only affect five patients in the world; some are named for scientists who work upstairs in the same medical centre. The Hospital is a transplant centre and many of the patients are on drugs that suppress their immune systems; the very medications that keep them from rejecting their transplanted organs leave them vulnerable to severe, rapidly progressing infections. Many of the patients have heart or lung abnormalities. I recently saw a child with so little circulating oxygen that his lips were blue-black. Before I could put a breathing tube down his throat, his father told me that he always looks like that due to his unrepaired heart defect. They had come for his abdominal pain. While we sometimes complained about the simple cases in Oakland, here we complain that there are no simple patients. Chief complaints such as ‘finger laceration’ are inevitably followed by ‘heart transplant 2 days ago’, ‘rash’ by ‘history of Gorlin’s Syndrome’, ‘cough’ by ‘awaiting lung transplant next week’.

I have never been cursed at by a patient in the Emergency Department here, rarely asked for a sandwich, and only occasionally see a urine test that is positive for cocaine. Patients can almost always get their medicines, and frequently have follow-up appointments already scheduled. They can usually list their medications and often describe their entire medical history by memory. I have more than once been told that the chair of a subspecialty department would be coming down himself because the patient is a University Faculty member or some other VIP—on one surreal shift, two of my first three patients were doctors themselves. I almost never refill prescriptions for more than a two-day supply, because that is the purview of primary care. On an average shift I see at least three patients who are 90 or older, most of whom drive themselves to the hospital. Almost no one seems to live to 90 in the county system.

The healthcare proposals generated under the Obama administration take as given the profound inequalities in the distribution of medical care in the United States. Both House and Senate plans fall within a range of middle-ground options that legislate for even more money to be paid into the private system in return for only minimal concessions. They neither create the benefits of risk-sharing for the public system (which currently covers the oldest and sickest), nor make the insurance industry take on the total risk-pool of young and old, sick and well, which alone would make universal coverage feasible. With insurance mandatory and non-coverage penalized, millions more would be required to pay into the private system, while tens of millions out of the 46 million currently uninsured would remain without coverage in both the House and the Senate plan. The Congressional debate has avoided medical and social realities to focus on rhetorical dilemmas. Reproductive medicine, which should be a matter of scientific standards of care, has been thrown into the package as a negotiating quid pro quo.

Healthcare in America is the civil-rights issue of our time. Extended insurance coverage will not tackle the huge social barriers that stand between patients and optimal medical treatment. Adequate primary care would mitigate the devastating effects of these social factors. In the current County system, a patient who misses a bus and therefore an appointment may wait months to get another, and may not even be able to reschedule by phone. In a functional primary-care system, patients who miss appointments—or a patient newly diagnosed with renal failure—would be called back, not lost to follow-up.

It is hard to talk about a middle ground for something that is a fundamental right. Some believe there is no harm in taking what we can get and going from there; but this is probably not true. The insurance industry makes great gains in the current plan that will be hard to reverse. More, the proposals validate much of the profoundly unjust current system, which has grown up ad hoc but which, up till now, has never been explicitly sanctioned as a workable plan by the Federal government. To tolerate a disastrous bricolage is one thing; to extol its virtues quite another.

I have been well aware of the fallout our imbalanced system has for county patients; but until recently I don’t think I recognized the damage it was doing to the small minority it serves well. On one of my early shifts at the University of California hospital the triage nurse passed me a handwritten note from a patient in the waiting room. It read:

Please help me. My jaw has been broken and I am in a lot of pain. I’ve been here over an hour and am still bleeding. My hands and feet are numb and I’m starting to shake. I need some care. I have insurance.

The young electrical engineer who wrote the note was in his mid-thirties, used neither drugs nor alcohol, and had never been in a fight in his life. He had been prescribed cough medicine with codeine for a viral illness and had passed out in his bathroom, breaking his jaw and several teeth on the sink as he fell. His injuries were no more and no less devastating than those resulting from violence in Oakland. What was striking was that a highly educated young man could feel that his pain, bleeding and shaking might not get him care in one of the best hospitals in the country, but that his insurance would; could assume that the brief delay before he was seen was due not to the acute stroke and heart-attack patients who had come in just before him, but to the suspicion that he did not have insurance. If even the privileged feel their access to care is so vulnerable, it becomes hard to argue that the system is working for anyone.


Graffiti, Vancouver BC arts center

Something you Never even Thought of [NDP]

The philosopher-citizen

Charles Taylor

Jürgen Habermas is one of the most prominent philosophers on the global scene of the last half century. His work is of an impressive range and depth. It would be impossible to sum it up in a short essay, but I shall try to single out three facets of his extraordinary achievement which help throw light on his deserved fame and influence.

Jürgen Habermas is known in the world of analytic philosophy primarily as a moral and political philosopher. He has striven against a slide which has often seemed plausible and tempting for modern thinkers: that towards a certain relativism or subjectivism in morals. The difficulty of establishing firm ethical conclusions in the midst of vigorous debate among rival doctrines, particularly when these disputes are contrasted with those among natural scientists, can all too easily push us to the conclusion that there is no fact of the matter here, that ethical doctrines are not a matter of knowledge, but only of emotional reaction or subjective projection, that the issues here are not cognitive.

Habermas from the very beginning set his face against these non-cognitivist views. There can be ethical knowledge.  But he wished also to break with a long-hallowed notion of what this knowledge must consist in, that which we find in the traditions which go back to Plato and Aristotle. According to these, ethical knowledge has for its object human nature, or the nature of things. In other words, it is grounded in some normative picture of what humans are like, or else of their place in the universe. According to Habermas, it was the discredit of these “metaphysical” views which gave colour to non-cognitivism in the first place. In order to refute subjectivism, morality needs another kind of rational basis.

The alternative route which he explored was that which makes the rationality of ethical conclusions a function of the rationality of the deliberation which produces them. A deliberation is rational if it meets certain formal requirements. This is, of course, the route which was pioneered by Kant. But Habermas made a revolutionary change in this tradition. Whereas for Kant the principal criterion of a rational and therefore defensible deliberation was that it sought universalizable maxims, for Habermas the very notion of deliberation is transformed. For Kant, a lone reasoner can work out what maxims can be the objects of a universal will. But Habermas introduces the dialogical dimension. The ultimately acceptable norms are those which can pass the test of acceptance by all those who would be affected by them.

In other words, for Habermas, ethical deliberation is primarily social, dialogical; it is worked out between agents. Of course, in a secondary way, we can and often do deliberate on our own, but the shape of our ethical world is dialogically elaborated, and this conditions all our moral thinking, even when we want to rebel against the morality of our community.

In proposing this transformed model of ethical reasoning, Habermas was articulating two profound changes in the consciousness of the later 20th Century, one philosophical, the other in our political culture. The philosophical change was the dialogical turn itself, which we see in a host of places: in the critique of monological Cartesian foundationalism by figures like Wittgenstein and the phenomenological writers, in the sociological literature which began to stress the dialogical nature of identity-formation, as we see with George Herbert Mead. One could prolong the list almost indefinitely.

The second big change, in the political culture, also gave a new importance to dialogue. The political identities of democratic societies were no longer seen as defined once and for all by some founding principles or acts. The combined impact of feminism, of multiculturalism, of the battles over identity and recognition, of the gay movement, and so on, brought to the fore how much traditional modes of understanding were based on silent exclusion of minorities. Redefining, renegotiating the political contract came to be seen as an important, often urgent task; and this could only be carried out dialogically.

Habermas’s philosophical articulation of the dialogical turn was not the only one. There are rival attempts which many (including this writer) would find more convincing, but his was an extremely important and influential one, and this is one of the sources of his deserved prominence.

Moreover, Habermas’s dialogical moral theory was much broader and deeper than that of most analytical philosophers, because it has been worked out against a background of social thought, and above all of a theory of the development of modernity. In this respect, that is, in the scope of his interests, he is more of a “continental” philosopher, for all of his affinities with certain analytical thinkers. This is the second important aspect which I would like to take up.

Drawing on the work of Weber, Habermas sees modernity as having brought about a transformation in our understanding of reason. There have been a number of formulations of this idea in his work, but I’ll deal here with the one he offered in his immensely influential Theorie des kommunikativen Handelns of 1981. For Plato and much of the Western tradition, reason is a single faculty or power which can strive to define not only the True, but also the Good and the Beautiful. That is, the same reason can establish the shape of all the important dimensions of human life: establishing what really is, deciding what we ought to do, and determining what is truly beautiful. We might speak of the scientific, the moral and the aesthetic dimensions of human life.

What Habermas proposes in the place of this is not, as we have seen, a restriction of reason to the scientific domain, and a relegation of morals and aesthetics to the arbitration of emotion or subjective taste. Rather it is a diversification of the very procedures of reason. Scientific reason tries to map the real; but moral reason, as we saw above, doesn’t try to map some other domain, say, of human nature. The whole notion of rationality here doesn’t rely on the idea that valid ethical norms correspond to some domain of fact; rather, the justified conclusion is designated as such by its emerging from a certain form of dialogical deliberation. Being right here has a quite different shape than it does in the factual or scientific domain. And an analogous point is made for the aesthetic sphere.

For Habermas a key feature of modernity is this differentiation of spheres, whereby rational validity ceases to mean a single thing, and to have a similar shape in the different domains. This is the shift which underlies the use of the term “post-metaphysical” applied to ethics and political theory, and which occurs frequently in Habermas’s writings. The morally justified acts or norms are no longer such because they correspond to or correctly map some metaphysical reality.

It is clear that we have here not simply a moral theory but a theory of history, and indeed, a philosophical anthropology of uncommon scope and depth, and this has been another contributing factor to Habermas’s global reputation.

But these two factors would not have had the impact that they have were it not for a third feature: that Jürgen Habermas is an exemplary public intellectual. He has never been content simply with writing, teaching, and discussing philosophy. Unremittingly and with great courage he has intervened in the important debates of our time, for instance in the Historikerstreit within Germany, and more recently in issues to do with the “war on terror,” as well as the future of Europe. One might almost say that theory and practice are organically linked in the thought of Habermas: as a theorist of democracy and of open, undistorted communication, he cannot but intervene when these crucial values are suppressed or denied, without being untrue to himself.

Or in any case, that is the way he lives his philosophy, with a kind of passionate integrity. And it is this courageous and consistent stance which has made a deep impression on thinkers and citizens, not only in Germany, not only in Europe, but world-wide.

In our time, we can almost fear that the public intellectual is an endangered species. On the one hand, the role can be trivialized by the proliferation of collective petitions for fashionable causes which it is very easy to sign. On the other, in the making of policy the intellectual is often replaced by the expert, master of some narrow field, who is rarely asked to decide on the use to be made of his expertise. In this world, Jürgen Habermas stands out as a shining example of the philosopher-citizen, two roles indissolubly linked in a figure of great depth and integrity. We, in democratic countries and beyond, are all in his debt, and that more than anything else accounts for his unparalleled prominence. He is an inspiration to us all.

A version of this text originally appeared in German earlier this year, in honor of Jürgen Habermas’s eightieth birthday. Later this week, Jürgen Habermas and Charles Taylor will join Judith Butler and Cornel West for a dialogue on the “power of religion in the public sphere,” in an event cosponsored by the SSRC, New York University’s Institute for Public Knowledge, and Stony Brook University.—ed.

In re the governmental area

Philosophy and Real Politics
Raymond Geuss


A strong “Kantian” strand is visible in much contemporary political theory, and even perhaps in some real political practice. This strand expresses itself in the highly moralised tone in which some public diplomacy is conducted, at any rate in the English-speaking world, and also in the popularity among political philosophers of the slogan “Politics is applied ethics.” Slogans like this can be dangerous precisely because they are slickly ambiguous, and this one admits of at least two drastically divergent interpretations. There is what I will call “the anodyne” reading of the slogan, which formulates a view I fully accept, and then there is what I will call the “ethics-first” reading.

The anodyne reading asserts that “politics”—meaning both forms of political action and ways of studying forms of political action—is not and cannot be a strictly value-free enterprise, and so is in the very general sense an “ethical” activity. Politics is a matter of human, and not merely mechanical, interaction between individuals, institutions, or groups. It can happen that a group of passengers in an airplane are thrown together mechanically when it crashes, or that a man slipping off a bridge accidentally lands on a tramp sleeping under the bridge. The second of these two examples is a salutary reminder of the role of contingency and of the unexpected in history, but neither of the two cases is a paradigm for politics. Political actors are generally pursuing certain conceptions of the “good,” and acting in the light of what they take to be permissible. This is true despite the undeniable fact that most human agents most of the time are weak, easily distracted, deeply conflicted, and confused, and that they therefore do not always do only things they take to be permissible. One will never understand what they are doing unless and until one takes seriously the ethical dimension of their action in the broadest sense of that term: their various value-judgments about the good, the permissible, the attractive, the preferable, that which is to be avoided at all costs. Acting in this way can perfectly reasonably be described as “applying ethics,” provided one understands that “applying” has very few similarities with giving a proof in Euclidean geometry or calculating the load-bearing capacities of a bridge, and is often more like the process of trying to survive in a free-for-all. Provided also one keeps in mind a number of other important facts, such as the unavoidable indeterminacy of much of human life. Every point in a Cartesian coordinate system is construed as having a determinate distance from the x-axis and from the y-axis. 
This way of thinking is of extremely limited usefulness when one is dealing with any phenomenon connected with human desires, beliefs, attitudes, or values. People often have no determinate beliefs at all about a variety of subjects; they often don’t know what they want or why they did something; even when they know or claim to know what they want, they can often give no coherent account of why exactly they want what they claim to want; they often have no idea which portions of their systems of beliefs and desires—to the extent to which they have determinate beliefs and desires—are “ethical principles” and which are (mere empirical) “interests.” This is not simply an epistemic failing, and also not something that one could in principle remedy, but a pervasive “inherent” feature in human life. Although this fundamental indeterminacy is a phenomenon almost everyone confronts and recognises in his or her own case all the time, for a variety of reasons we are remarkably resistant to accepting it as a general feature of the way in which we should best think about our social life, but we are wrong to try to evade it. A further reason to be suspicious of quasi-Cartesian attitudes to human life is that people are rarely more than locally consistent in action, thought, and desire, and in many domains of human life this does not matter at all, or might even be taken to have positive value. I may pursue a policy that is beneficial to me in the short term, but that “in the long run” will undermine itself. This may not even be subjectively “irrational,” given that in the long run, as Keynes pointed out, I will be dead (along with all the rest of us), and I may very reasonably, or even correctly, believe that I will be lucky enough to die before the policy unravels. When Catullus expresses his love and hate for Lesbia, he is not obviously voicing a wish to rid himself of one or the other of these two sentiments. 
Not all contradictions resolve into temporal change of belief or desire. Any attempt to think seriously about the relation between politics and ethics must remain cognitively sensitive to the fact that people’s beliefs, values, desires, moral conceptions, etc., are usually half-baked (in every sense), are almost certain to be both indeterminate and, to the extent to which they are determinate, grossly inconsistent in any but the most local, highly formalised contexts, and are constantly changing. None of this implies that it might not be of the utmost importance to aspire to ensure relative stability and consistency in certain limited domains.

Humans’ beliefs and desires are in constant flux, and changes in them can take place for any number of reasons. Transformations of specific sectors of human knowledge are often accompanied by very widespread further changes in worldview and values. People have often claimed that Darwinism had this effect in Europe at the end of the nineteenth century. In addition, new technologies give people new possible objects of desire and, arguably, new ways of desiring things. It is by no means obvious that the hunger which was satisfied when Neolithic humans tore apart raw meat with their fingers is the same kind of thing as the hunger that is satisfied by dining in a five-star restaurant in 2008. Technological change can also make it possible for people to act in new ways toward each other, and sometimes these need to be regulated in ways for which there are no precedents: once it begins to become possible to transfer human organs from one person to another, and manipulate the genetic makeup of the members of the next generation of humans, people come to feel the need of some kind of guidance about which forms of transfer or manipulation should be permitted and which discouraged or forbidden. Changes in political or economic power relations often make it more or less likely that certain groups will move culturally closer to or further away from their neighbours, thus changing people’s ethical concepts, sentiments, and views (again, in the broadest sense of the term “ethical”). Politics is in part informed by and in part an attempt to manage some of these changes. In addition, as people act on their values, moral views, and conceptions of the good life, these values and conceptions often change precisely as the result of being “put into practice.” Sometimes one could describe this as a kind of “learning” experience.
The total failure of a project that has absorbed a significant amount of social energy and attention, and for which serious sacrifices have been made, in particular often seems to focus the mind and make it open to assimilating new ways of thinking and valuing. Thus after the events of 1914 to 1945 a very significant part of the population in Germany became highly sceptical of nationalism and the military virtues, and the experiences of Suez and Algeria tended in Britain and France to throw any further attempts at acting out the old forms of colonial imperialism into disrepute. Sometimes, to be sure, the appropriate learning process does not take place, or the “wrong” lesson is drawn, and this often exacts a high price in the form of a repetition of the failure. Thus the larger significance of the Reagan era in the United States was that the political class in power to a large extent prevented any significant, long-term lessons from being drawn from the defeat in Vietnam. Learning, failure to learn, and drawing the wrong lesson are all possible outcomes, and whichever one in fact results needs to be explained, understood, and evaluated. There is no guarantee that “learning” is irreversible, nor can any distinct sense be attributed to the claim that learning in the longer term is natural, that is, will take place unless prevented. Furthermore, even in the best of cases learning in politics seems to be limited either to very crude transformations over long periods—“we learn” over two thousand years that it is better to have a legal code that is accessible to everyone than merely to allow the priests to consult their esoteric lore—or to what are, in historical terms, very short periods, with little in between. The effects of the short-term learning can often wear off remarkably quickly. Colonial intervention was in bad odour in Britain between the 1960s and the year 2000, but we now (2007) have troops fighting in Iraq and Afghanistan again.

One can speak of politics as “applied ethics” if this form of words takes one’s fancy, but it is not obvious that all the above-described phenomena form anything like a natural kind or a single coherent domain for study by some determinate intellectual speciality: “applied ethics” is just a term applied to people trying to manage forms of action and modes of evaluation that distinguish the better from the less good as they interact with political programmes, individual and group interests, changes in the economic structure, the requirements of action, institutional needs, and contingently arising historical problems of various kinds.

When I object to the claim that politics is applied ethics, I do not have the above anodyne reading in mind. Rather, I intend a much more specific view about the nature and structure of ethical judgment and its relation to politics, and in particular a theory about where one should start in studying politics, what the final framework for studying politics is, what it is reasonable to focus on, and what it is possible to abstract from. “Politics is applied ethics” in the sense I find objectionable means that we start thinking about the human social world by trying to get what is sometimes called an “ideal theory” of ethics. This approach assumes that there is, or could be, such a thing as a separate discipline called Ethics which has its own distinctive subject-matter and forms of argument, and which prescribes how humans should act toward one another. It further assumes that one can study this subject-matter without constantly locating it within the rest of human life, and without unceasingly reflecting on the relations one’s claims have with history, sociology, ethnology, psychology, and economics. Finally, this approach proposes that the way to proceed in “ethics” is to focus on a very few general principles such as that humans are rational, or that they generally seek pleasure and try to avoid pain, or that they always pursue their own “interests”; these principles are taken to be historically invariant, and studying ethics consists essentially in formulating them clearly, investigating the relations that exist between them, perhaps trying to give some kind of “justification” of at least some of them, and drawing conclusions from them about how people ought to act or live. Usually, some kind of individualism is also presupposed, in that the precepts of ethics are thought to apply directly and in the first instance to human individuals.
Often, although not invariably, views of this type also give special weight to “ethical intuitions” that people in our society purportedly share, and they hold that an important part of ethics is the attempt to render these intuitions consistent.

Empirical abstemiousness and systematicity are two of the major virtues to which “ideal” theories of this kind aspire. The best-known instance of this approach is Kantianism, which claims in its more extreme versions that ethics can be completely nonempirical, derived simply (but fully) from the mere notion of rational agency, and the absolute consistency of willing that is purportedly the defining characteristic of any rational agent. Kantian ethics is supposed to be completely universal in its application to all agents in all historical situations. Although Kant does not himself use the vocabulary of “intuitions” (or rather, he does use a term usually translated “intuition” (Anschauung), but uses it with no specific moral meaning), he does think that individuals have in common sense (“der gemeine Menschenverstand”)—presumably post-Christian, Western European common sense—a reliable “compass” that tells them what they ought to do in individual cases. Philosophical ethics does nothing more than formulate the principle that such common sense in fact uses. Kantianism is at the moment the most influential kind of “ideal” theory, but one can find similar structural features in many other views (e.g., in some forms of utilitarianism), and they are the more pronounced, the keener their proponents are to proclaim the strictly “philosophical” nature of the kind of study of ethics that they advocate.
A theory of this kind might consist of constraints on action, such as the “Thou shalt not kill; thou shalt not steal” of various archaic moral codes or Kant’s “Never lie even to save a human life”; or it might also contain the presentation of some ideal goals to be pursued, such as “Strive to construct (an ideal) democracy” (or “Strive to construct an ideal speech community,” or “Strive to build socialism”) or “Love thy neighbour as thyself.” The view I am rejecting assumes that one can complete the work of ethics first, attaining an ideal theory of how we should act, and then in a second step, one can apply that ideal theory to the action of political agents. As an observer of politics one can morally judge the actors by reference to what this theory dictates they ought to have done. Proponents of the view I am rejecting then often go on to make a final claim that a “good” political actor should guide his or her behaviour by applying the ideal theory. The empirical details of the given historical situation enter into consideration only at this point. “Pure” ethics as an ideal theory comes first, then applied ethics, and politics is a kind of applied ethics.

In this essay I would like to expound and advocate a kind of political philosophy based on assumptions that are the opposite of the “ethics-first” view, and so it might be useful to the reader to make the acquaintance, in a preliminary and sketchy way, of the four interrelated theses that, I will claim, ought to structure a more fruitful approach to politics than “ethics-first.”

First, political philosophy must be realist. That means, roughly speaking, that it must start from and be concerned in the first instance not with how people ought ideally (or ought “rationally”) to act, what they ought to desire, or value, the kind of people they ought to be, etc., but, rather, with the way the social, economic, political, etc., institutions actually operate in some society at some given time, and what really does move human beings to act in given circumstances. The emphasis on real motivation does not require that one deny that humans have an imaginative life that is important to them, aspirations, ideals they wish to pursue, or even moral views that influence their behaviour. It also does not imply that humans are not sometimes “rational,” or that it would not often be of great benefit to them to be “rational.” What it does mean, to put it tautologically, is that these ideals and aspirations influence their behaviour and hence are politically relevant, only to the extent to which they do actually influence behaviour in some way. Just because certain ideal or moral principles “look good” or “seem plausible” to us, to those who propose them or to those to whom they are proposed—to the prophet or to the people whom the prophet addresses—it does not follow that these norms, canons, or principles will have any particular effect at all on how people will really act. Even if one were to assume something I am loath to admit, namely, that certain moral principles that have determinate content are “absolutely true” or “eternally valid” or could be “ultimately justified by reference to the nature of reason itself,” this would not automatically ensure that these principles were in fact universally recognised—what truths except utterly trivial and banal ones are “universally” recognised? It would also not ensure that, even if they were recognised, they would be universally obeyed.
Finally, a political philosopher cannot take ideals, models for behaviour, or utopian conceptions at their own face value. That the prophet claims and genuinely believes that his table of values will bring peace and prosperity to his followers, and even that the followers genuinely believe this and act according to the table of values to the best of their ability, does not ensure that peace and prosperity will in fact follow. Even if the population did prosper, that would not, in itself, show that the prophet had been right. This could just have been luck, or the result of completely different factors. A realist can fully admit that products of the human imagination are very important in human life, provided he or she keeps a keen and unwavering eye upon the basic motto Respice finem, meaning in this case not “The best way to live is to keep your mind on your end: death,” but “Don’t look just at what they say, think, believe, but at what they actually do, and what actually happens as a result.” An imagined threat might be an extremely powerful motivation to action, and an aspiration, even if built on fantasy, is not nothing, provided it really moves people to action. This does not mean that it is any less important to distinguish between a correct perception of the world and illusion. The opposite of reality or the correct perception of reality is in any case not the imagination but illusion; however, even illusions can have effects. The realist must take powerful illusions seriously as factors in the world that have whatever motivational power they in fact have for the population in question, that is, as something to be understood. This is compatible with seeing through them, and refusing steadfastly to make them part of the cognitive apparatus one employs oneself to try to make sense of the world.
It is no sign of gimlet-eyed realism to deny the enormous real significance of religious practices, beliefs, and institutions in the world, past and present, but, rather, a sign of simple blindness. This, however, does not imply that the cognitive or normative claims made by religious believers have any plausibility whatever.

Second, and following on from this, political philosophy must recognise that politics is in the first instance about action and the contexts of action, not about mere beliefs or propositions. In many situations agents’ beliefs can be very important—for instance, knowing what another agent believes is often a relevant bit of information if one wants to anticipate how that agent can be expected to act—but sometimes agents do not immediately act on beliefs they hold. In either case the study of politics is primarily the study of actions and only secondarily of beliefs that might be in one way or another connected to action. To reiterate, propounding a theory, introducing a concept, passing on a piece of information, even, sometimes, entertaining a possibility, are all actions, and as such they have preconditions and consequences that must be taken into account. When at the Potsdam Conference in 1945 Truman told Stalin about the successful explosion of the first atomic bomb, this was not merely an exchange of a bit of information about the results of a physical experiment that had succeeded; rather, in doing this Truman was also performing a certain action, one of trying to intimidate Stalin, to discourage him from acting in certain ways, etc. In fact that was the point of Truman’s action, and, whether one is Stalin or a student of twentieth-century history, one fails to understand the action at all if one fails to take that point. Even general doctrines or complex theories can have distinct effects not merely on particular courses of action, but on the general structure of action in a given society. If utilitarian philosophy, Roman law, Darwinism, Chicago-style neoliberal economics, or “rational decision theory” is taught in all the schools, this will probably, to some extent, influence the way agents in the society come to act. This does not mean that we, or anyone, know what the nature of that influence will be.
It certainly does not mean that if all schoolchildren are taught “rational decision theory” they will all become fully “rational agents” (in the sense specified by the theory) even if they try hard to do so, because the actual consequence might be, for instance, that some become more like the purely rational choosers described in the theory than they would otherwise have been, but others find themselves rebelling. Dostoyevski’s Underground Man decides he would rather be anything than a piano key or an organ stop. There is nothing unreasonable about not wanting to be fully “rational” if “rationality” is understood in a sufficiently narrow way. Paul of Tarsus at the beginning of Christianity notably describes the Christian faith as “folly” (μωρíα), but this did not prevent it from informing European sensibilities for a rather long period of time. Six years of constant religious instruction does not ensure religious belief, and six years of public repetition of the demands of elementary hygiene won’t make quite every person in the country brush his (or her) teeth after every meal. Still, when the Medical Council issues a warning about the dangers of smoking, this is not merely the enunciation of a scientific result, which can be evaluated according to the usual canons of empirical support, but also an intervention that will have effects, one way or the other, on social and political life. The only way to tell what effects there will be is to study them. There is, of course, nothing inherently absurd in holding that when Truman told Stalin that an atomic bomb had been successfully tested, one could make this event an object of two complementary, but distinct enquiries.
First, one could study this as an action that will have, and was intended to have, various consequences, and which can be evaluated in various ways, e.g., as appropriate or not, prudent or not, etc.; or, second, one could investigate the content of the claim— that the test had been successful—as something that was warranted (or not) by available evidence.

The third thesis I want to defend is that politics is historically located: it has to do with humans interacting in institutional contexts that change over time, and the study of politics must reflect this fact. This is not an objection to generalising; we don’t even know what it would be like to think without generalising. Nevertheless, it simply turns out as a matter of fact that excessive generalising ends up not being informative. There are no interesting “eternal questions” of political philosophy. It is perfectly true that if one wishes, one can construct some universal empirical truths about human beings and the societies they form, e.g., it is correct that people in general try to keep themselves alive and that all humans have had to eat to survive, and that this has imposed various constraints on the kind of human societies that have been possible, but such statements, taken on their own, are not interestingly informative for the purposes of politics.9 Such detached general statements do not wear their meaning on their sleeves; in fact, understanding politics means seeing that such statements have clear meaning at all only relative to their specific context, and this context is one of historically structured forms of action. For an isolated general statement like the one about the human need to eat to be enlightening, one must relate it to issues such as: what form of food production takes place in the society in question, who has control over it, what form that control takes, and what food taboos are observed.10 If one takes such generalisations to be more than what they really are— mere schemata that need to be filled with concrete historical content—and uses them in isolation as part of an attempt to understand real politics, they will be seriously misleading. People do not eat “food in general” but rice, or wheaten bread, or shellfish, or pork, or they do not eat beef or pork or larvae, and people have sometimes willingly starved themselves to death. 
Suicide through self-starvation is perhaps an extreme case that needs special explanation (of a psychopathological kind, as in anorexia, or of an ideological kind, as with the Irish hunger strikers of the 1980s), but how is one to know beforehand that a given situation with which one is confronted is not extreme? If one wants understanding or any kind of guidance for action, one will have to take the specific cultural and historical circumstances into consideration. What level of historical specificity is required for what purpose is itself a question that has no general answer. Looking for a set of formulae that are as historically invariant as possible and assuming that those formulae will allow us to grasp what is most important will point one in the wrong direction. If one thinks that understanding one’s world is a minimal precondition to having sensible human desires and projects, history is not going to be dispensable. The more important one thinks it is to act, the more this will be the case. For as long, at least, as human societies continue to change, we won’t escape history.

Finally, the fourth assumption that lies behind this essay is that politics is more like the exercise of a craft or art than like traditional conceptions of what happens when a theory is applied. It requires the deployment of skills and forms of judgment that cannot easily be imparted by simple speech, that cannot be reliably codified or routinised, and that do not come automatically with the mastery of certain theories. A skill is an ability to act in a flexible way that is responsive to features of the given environment, with the result that action or interaction is enhanced or facilitated, or the environment is transformed in ways that are positively valued. Sometimes the result will be a distinct object or product: a shoe, a painting, a building, a boat; sometimes there will be no distinct object produced, as when a skilful marriage counsellor changes the interaction between spouses in a positive way or a vocal coach helps a singer bring out some rather subtle aspects of an overplayed aria. One of the signs that I have acquired a skill, rather than that I have been simply mechanically repeating things I have seen others do, have been applying a handbook, or have just been lucky, is that I can attain interesting and positively valued results in a variety of different and unexpected circumstances. A skilful painter can produce an appropriate image even using newly created materials that have never before been used for this purpose. To the extent to which the circumstances are genuinely different and unexpected, it is unlikely that there will be any already existing body of theoretical work that gives direct advice about how to deal with them, or models of the successful exercise of skill in those circumstances that could be emulated.

The attentive reader will notice that I use the terms “political theory” and “political philosophy” (the latter sometimes assumed to be more general than the former) almost interchangeably, and that I do not distinguish sharply between a descriptive theory and a “pure normative theory” (the former purportedly giving just the facts; the latter moral principles, imperatives, or ideal norms). This is fully intentional, and indeed part of the point I am trying to make. I want precisely to try to cast as much doubt as I can on the universal usefulness of making these distinctions. Kantians, of course, will think I have lost the plot from the start, and that only confusion can result from failure to make these essential, utterly fundamental divisions between Is and Ought, Fact and Value, or the Descriptive and the Normative in as rigorous and systematic a way as possible, just as I think they have fallen prey to a kind of fetishism, attributing to a set of human conceptual inventions a significance that they do not have. By doing this, in my view, they condemn themselves to certain forms of ignorance and illusion, and introduce into their cognitive and political practice a rigidity and deformation it need not have. Politics allows itself to be cut up for study in any one of a number of different ways, and which cuts will be most illuminating will depend very much on the context, on what one is interested in finding out. There is no single canonical style of theorising about politics. One can ask any number of perfectly legitimate questions about different political phenomena, and depending on the question, different kinds of enquiry will be appropriate. Asking what the question is, and why the question is asked, is always asking a pertinent question. 
In some contexts a relative distinction between “the facts” and human valuations of those facts (or “norms”) might be perfectly useful, but the division makes sense only relative to the context, and can’t be extracted from that context, promoted, and declared to have absolute standing. However, I also think that the most convincing way to make this point is not by a frontal attack on the Is/Ought distinction, which would be very tedious, given that I grant that one can make the distinction in virtually any particular context, as a relative distinction. The Is/Ought distinction looks overwhelmingly plausible because of the way philosophers have traditionally framed the question and assumed one would have to go about answering it. It is the misleading focus on artificially simple, invented examples that seems to give the distinction its hold over us. So rather than talking at great length and to no clear purpose about the Is/Ought distinction in general, I would like to proceed indirectly by inviting the reader to see how much more interesting the political world seems to be, and how much more one can come to learn and understand about it, if one relaxes the straitjacket and simply ignores this purported distinction.

A book of this kind, and especially of this size, cannot possibly treat all, or even any, of the issues it raises in anything like a full and satisfactory way. It also cannot aspire to change the minds of people who already have firmly settled opinions on how political philosophy “must” be done. Rather, the most it can hope to do is address people who have perhaps occasionally had similar thoughts already themselves, or those whose views are for one reason or another unformed or unsettled. To them it wishes to suggest the possibility that there might be a viable way of thinking about politics that is orthogonal to the mainstream of contemporary analytic political philosophy.



"Fat Cats, Bigga Fish" The Coup 1994

I got game like I read the directions

See it Now

Carl Douglas, “Kung Fu Fighting” [*Nouveaute*], 1974