From: Rachel's Democracy & Health News #920, Aug. 16, 2007
TWO FRIENDS DEBATE RISK ASSESSMENT AND PRECAUTION
By Peter Montague
Recently my friend Adam Finkel -- a skilled and principled risk assessor -- won two important victories over the Occupational Safety and Health Administration (OSHA).
In 2000, Adam was appointed Regional Administrator for OSHA in charge of the Denver office. When he began to suspect that OSHA inspectors were being exposed to dangerous levels of the highly toxic metal beryllium, he took precautionary action -- urging OSHA to test beryllium levels in OSHA inspectors. It cost him his job.
The agency did not even want to tell its inspectors they were being exposed to beryllium, much less test them. So Adam felt he had no choice -- in 2002, he blew the whistle and took his concerns public. OSHA immediately relieved him of his duties as Regional Administrator, and moved him to Washington where they changed his title to "senior advisor" and assigned him to the National Safety Council -- a place where "they send people they don't like," he would later tell a reporter.
Adam sued OSHA under the federal whistleblower protection statute and eventually won two years' back pay, plus a substantial lump sum settlement, but he didn't stop there. In 2005, he lodged a Freedom of Information Act lawsuit against the agency, asking for all monitoring data on toxic exposures of all OSHA inspectors. Meanwhile, OSHA began testing its inspectors for beryllium, finding exactly what Adam had suspected they would find -- dangerously high levels of the toxic metal in some inspectors.
Adam is now a professor in the Department of Environmental and Occupational Health, School of Public Health, University of Medicine and Dentistry of New Jersey (UMDNJ); and a visiting scholar at the Woodrow Wilson School, Princeton University. At UMDNJ, he teaches "Environmental Risk Assessment."
Earlier this summer, Adam won his FOIA lawsuit. A judge ruled that OSHA has to hand over 2 million lab tests on 75,000 employees going back to 1979. It was a stunning victory over an entrenched bureaucracy.
Meanwhile, in 2006, the American Public Health Association selected Adam to receive the prestigious David Rall Award for advocacy in public health. You can read his acceptance speech here.
When Adam's FOIA victory was announced early in July, I sent him a note of congratulations. He sent back a note, attaching a couple of articles, one of which was a book review he had published recently of Cass Sunstein's book, Risk and Reason. Sunstein doesn't "get" the precautionary principle -- evidently, he simply sees no need for it. Of course, the reason we need precaution is that the cumulative impacts of the human economy are now threatening to wreck our only home -- as Joe Guth explained last week in Rachel's News.
In any event, I responded to Adam's book review by writing "A letter to my friend, who is a risk assessor," and I invited Adam to respond, which he was kind enough to do.
So there you have it.
Do any readers want to respond to either of us? Please send responses to email@example.com.
From: Journal of Industrial Ecology (pg. 243)
ADAM FINKEL REVIEWS CASS SUNSTEIN'S BOOK, RISK AND REASON
By Adam M. Finkel
As someone dedicated to the notion that society needs quantitative risk assessment (QRA) now more than ever to help make decisions about health, safety, and the environment, I confess that I dread the arrival of each new book that touts QRA or cost-benefit analysis as a "simple tool to promote sensible regulation." Although risk analysis has enemies aplenty, from both ends of the ideological spectrum, with "friends" such as John Graham (Harnessing Science for Environmental Regulation, 1991), Justice Stephen Breyer (Breaking the Vicious Circle, 1994), and now Cass Sunstein, practitioners have their hands full.
I believe that at its best, QRA can serve us better than a "precautionary principle" that eschews analysis in favor of crusades against particular hazards that we somehow know are needlessly harmful and can be eliminated at little or no economic or human cost. After all, this orientation has brought us increased asbestos exposure for schoolchildren and remediation workers in the name of prevention, and also justified an ongoing war with as pure a statement of the precautionary principle as we are likely to find ("we have every reason to assume the worst, and we have an urgent duty to prevent the worst from occurring," said President Bush in October 2002 about weapons of mass destruction in Iraq). More attention to benefits and costs might occasionally dampen the consistent enthusiasm of the environmental movement for prevention, and might even moderate the on-again, off-again role precaution plays in current U.S. economic and foreign policies. But "at its best" is often a distant, and even a receding, target -- for in Risk and Reason, Sunstein has managed to sketch out a brand of QRA that may actually be less scientific, and more divisive, than no analysis at all.
To set environmental standards, to set priorities among competing claims for environmental protection, or to evaluate the results of private or public actions to protect the environment, we need reliable estimates of the magnitude of the harms we hope to avert as well as of the costs of control. The very notion of eco-efficiency presupposes the ability to quantify risk and cost, lest companies either waste resources chasing "phantom risks" or declare victory while needless harms continue unabated. In a cogent chapter (Ch. 8) on the role of the U.S. judiciary in promoting analysis, Sunstein argues persuasively that regulatory agencies should at least try to make the case that the benefits of their efforts outweigh the costs, but he appears to recognize that courts are often ill-equipped to substitute their judgments for the agencies' about precisely how to quantify and to balance. He also offers a useful chapter (Ch. 10) on some creative ways agencies can transcend a traditional regulatory enforcer role, help polluters solve specific problems, and even enlist them in the cause of improving eco-efficiency up and down the supply chain. (I tried hard to innovate in these ways as director of rulemaking and as a regional administrator at the U.S. Occupational Safety and Health Administration, with, at best, benign neglect from political appointees of both parties, so Sunstein may be too sanguine about the practical appeal of these innovations.)
Would that Sunstein had started (or stopped) with this paean to analysis as a means to an end -- perhaps to be an open door inviting citizens, experts, those who would benefit from regulation, and those reined in by it to "reason together." Instead, he joins a chorus of voices promoting analysis as a way to justify conclusions already ordained, adding his own discordant note. Sunstein clearly sees QRA as a sort of national antipsychotic drug, which we need piped into our homes and offices to dispel "mass delusions" about risk. He refers to this as "educating" the public, and therein lies the most disconcerting aspect of Risk and Reason: he posits a great divide between ordinary citizens and "experts," and one that can only be reconciled by the utter submission of the former to the latter. "When ordinary people disagree with experts, it is often because ordinary people are confused," he asserts (p. 56) -- not only confused about the facts, in his view, but not even smart enough to exhibit a rational form of herd behavior! For according to Sunstein, "millions of people come to accept a certain belief [about risk] simply because of what they think other people believe" (p. 37, emphasis added).
If I thought Sunstein was trying by this to aggrandize my fellow travelers -- scientists trained in the biology of dose-response relationships and the chemistry and physics of substances in the environment, the ones who actually produce risk assessments -- I suppose I would feel inwardly flattered, if outwardly sheepish, about this unsolicited elevation above the unwashed masses. But the reader will have to look long and hard to find citations to the work of practicing risk assessors or scientists who helped pioneer these methods. Instead, when Sunstein observes that "precisely because they are experts, they are more likely to be right than ordinary people. . . brain surgeons make mistakes, but they know more than the rest of us about brain surgery" (p. 77), he has in my view a quaint idea of where to find the "brain surgeons" of environmental risk analysis.
He introduces the book with three epigrams, which I would oversimplify thus: (1) the general public neglects certain large risks worthy of fear, instead exhibiting "paranoia" about trivial risks; (2) we maintain these skewed priorities in order to avoid taking responsibility for the (larger) risks we run voluntarily; and (3) defenders of these skewed priorities are narcissists who do not care if their policies would do more harm than good. The authors of these epigrams have something in common beyond their worldviews: they are all economists. Does expertise in how markets work (and that concession would ignore the growing literature on the poor track record of economists in estimating compliance costs in the regulatory arena) make one a "brain surgeon" qualified to bash those with different views about, say, epidemiology or chemical carcinogenesis?
To illustrate the effects of Sunstein's continued reliance on one or two particular subspecies of "expert" throughout the rest of his book, I offer a brief analysis of Sunstein's five short paragraphs (pp. 82-83) pronouncing that the 1989 public outcry over Alar involved masses of "people [who] were much more frightened than they should have been."1 Sunstein's readers learn the following "facts" in this example:
** Alar was a "pesticide" (actually, it regulated the growth of apples so that they would ripen at a desired time).
** "About 1 percent of Alar is composed of UDMH [unsymmetrical dimethylhydrazine], a carcinogen" (actually, this is roughly the proportion found in raw apples -- but when they are processed into apple juice, about five times this amount of UDMH is produced).
** The Natural Resources Defense Council (NRDC) performed a risk assessment claiming that "about one out of every 4,200 [preschool children] exposed to Alar will develop cancer by age six" (actually, NRDC estimated that exposures prior to age six could cause cancer with this probability sometime during the 70-year lifetimes of these children -- a huge distinction, with Sunstein's revision making NRDC appear unfamiliar with basic assumptions about cancer latency periods).
** The EPA's current risk assessment is "lower than that of the NRDC by a factor of 600" (actually, the 1/250,000 figure Sunstein cites as EPA's differs from NRDC's 1/4,200 figure by only a factor of 60 (250,000 ÷ 4,200). Besides, EPA never calculated the risk at one in 250,000. After Alar's manufacturer (Uniroyal) finished a state-of-the-art study of the carcinogenicity of UDMH in laboratory animals, EPA (Federal Register, Vol. 57, October 8, 1992, pp. 46,436-46,445) recalculated the lifetime excess risk to humans at 2.6 × 10^-5, or 1 in 38,000. And, acting on recommendations from the U.S. National Academy of Sciences, EPA has subsequently stated that it will consider an additional tenfold safety factor to account for the increased susceptibility of children under age 2, and a threefold factor for children aged 2 to 16 -- which, had they been applied to UDMH, would have made the EPA estimate almost equal to the NRDC estimate that made people "more frightened than they should have been").
** "A United Nations panel... found that Alar does not cause cancer in mice, and it is not dangerous to people" (true enough, except that Sunstein does not mention that this panel invoked a threshold model of carcinogenesis that no U.S. agency would have relied on without more and different scientific evidence: the U.N. panel simply ignored the large number of tumors at the two highest doses in Uniroyal's UDMH study and declared the third-highest dose to be "safe" because that dose produced tumors, but at a rate not significantly higher than the background rate).
** A 60 Minutes broadcast "instigated a public outcry... without the most simple checks on its reliability or documentation" (readers might be interested, however, that both a federal district court and a federal appeals court summarily dismissed the lawsuit over this broadcast, finding that the plaintiffs "failed to raise a genuine issue of material fact regarding the falsity of statements made during the broadcast").
** The demand for apples "plummeted" during 1989 (true enough, but Sunstein neglects to mention that within five years after the withdrawal of Alar the apple industry's revenues doubled versus the level before the controversy started).
Sunstein's entire source material for these scientific and other conclusions? Four footnotes from a book by political scientist Aaron Wildavsky and one quotation from an editorial in Science magazine (although the incorrect division of 250,000 by 4,200 and the mangling of the NRDC risk assessment appear to be Sunstein's own contributions). One reason the general public annoys Sunstein by disagreeing with the "experts," therefore, is that he has a very narrow view of where one might look for a gold standard against which to judge the merits of competing conclusions. Perhaps Sunstein himself has come to certain beliefs about Alar and other risks "simply because of what [he thinks] other people believe," and comforts himself that the people he agrees with must be "experts."
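For readers who want to check the Alar arithmetic discussed above, here is a short Python sketch. All figures come from the review itself; nothing new is assumed.

```python
# Checking the Alar risk figures quoted in the review.

nrdc_risk = 1 / 4_200             # NRDC's lifetime excess risk estimate
sunstein_epa_risk = 1 / 250_000   # the figure Sunstein attributes to EPA

# The two figures differ by a factor of about 60, not the "600" claimed:
ratio = nrdc_risk / sunstein_epa_risk   # = 250,000 / 4,200, roughly 59.5

# EPA's 1992 recalculation, 2.6e-5, works out to about 1 in 38,000:
epa_1992_risk = 2.6e-5
one_in = 1 / epa_1992_risk

# Applying the later tenfold children's safety factor brings EPA's
# figure (about 1 in 3,800) close to NRDC's 1 in 4,200:
adjusted = epa_1992_risk * 10
```

The last step is the review's point in miniature: once EPA's own later safety factor for young children is applied, the "expert" and "alarmist" numbers nearly converge.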
Similarly, Sunstein makes some insightful points about the public's tendency to assume that the risks are higher for items whose benefits they perceive as small, but he fails to notice the mountains of evidence that his preferred brand of experts tend to impute high economic costs to regulations that they perceive as having low risk-reduction benefits. He accepts as "unquestionably correct" the conclusion of Tengs and colleagues (1995) that government badly misallocates risk-reduction resources, for example, without acknowledging Heinzerling's (2002) finding that in 79 of the 90 environmental interventions Tengs and colleagues accused of most severely wasting the public's money, the agency involved never required that a dime be spent to control those hazards, probably because analysis showed such expenditures to be of questionable cost-effectiveness.
Finally, Sunstein fails to acknowledge the degree to which experts can agree with the public on broad issues, and can also disagree among themselves on the details. He cites approvingly studies by Slovic and colleagues suggesting that laypeople perform "intuitive toxicology" to shore up their beliefs, but fails to mention that in the most recent of their studies (1995), toxicologists and the general public both placed 9 of the same 10 risks at the top of 38 risks surveyed, and agreed on 6 of the 10 risks among the lowest 10 ranked. Yet when toxicologists alone were given information on the carcinogenic effects of "Chemical B" (data on bromoethane, with its identity concealed) in male and female mice and rats, only 6% of them matched the conclusions of the experts at the U.S. National Toxicology Program that there was "clear evidence" of carcinogenicity in one test group (female mice), "equivocal evidence" in two others, and "some evidence" in the fourth. "What are ordinary people thinking?" (p. 36) when they disagree with the plurality of toxicologists, Sunstein asks, without wondering what these toxicologists must have been thinking to disagree so much with each other. One simple answer is that perhaps both toxicologists and the general public, more so than others whose training leads them elsewhere, appreciate the uncertainties in the raw numbers and the room for honest divergence of opinion even when the uncertainties are small. These uncertainties become even more influential when multiple risks must be combined and compared -- as in most life-cycle assessments and most efforts to identify and promote more eco-efficient pathways -- so Sunstein's reliance on a style of expertise that regards uncertainty as an annoyance we can downplay or "average away" is particularly ill-suited to broader policy applications.
I actually do understand Sunstein's frustration with the center of gravity of public opinion in some of these areas. Having worked on health hazards in the general environment and in the nation's workplaces, I devoutly wish that more laypeople (and more experts) could muster more concern about parts per thousand in the latter arena than parts per billion of the same substances in the former. But I worry that condescension is at best a poor strategy to begin a dialogue about risk management, and hope that expertise would aspire to more than proclaiming the "right" perspective and badgering people into accepting it. Instead, emphasizing the variations in expertise and orientation among experts could actually advance Sunstein's stated goal of promoting a "cost-benefit state," as it would force those who denounce all risk and cost-benefit analysis to focus their sweeping indictments where they belong. But until those of us who believe in a humble, humane brand of risk assessment can convince the public that substandard analyses indict the assessor(s) involved, not the entire field, I suppose we deserve to have our methods hijacked by experts outside our field or supplanted by an intuitive brand of "precaution."
Adam M. Finkel, School of Public Health, University of Medicine and Dentistry of New Jersey, Piscataway, New Jersey, USA
1. This is admittedly not a disinterested choice, as I was an expert witness for CBS News in its successful attempts to have the courts summarily dismiss the product disparagement suit brought against it for its 1989 broadcast about Alar. But Sunstein's summaries of other hazards (e.g., toxic waste dumps, arsenic, airborne particulate matter) could illustrate the same general point.
Heinzerling, L. 2002. Five hundred life-saving interventions and their misuse in the debate over regulatory reform. Risk: Health, Safety and Environment 13(Spring): 151-175.
Slovic, P., T. Malmfors, D. Krewski, C. K. Mertz, N. Neil, and S. Bartlett. 1995. Intuitive toxicology, Part II: Expert and lay judgments of chemical risks in Canada. Risk Analysis 15(6): 661-675.
Tengs, T. O., M. E. Adams, J. S. Pliskin, D. G. Safran, J. E. Siegel, M. C. Weinstein, and J. D. Graham. 1995. Five hundred life saving interventions and their cost-effectiveness. Risk Analysis 15(3): 369-390.
From: Rachel's Democracy & Health News #920, Aug. 16, 2007
A LETTER TO MY FRIEND WHO IS A RISK ASSESSOR
By Peter Montague
It was really good to hear from you. I'm delighted that you're now working in New Jersey, with joint appointments at Princeton University and at the School of Public Health at the University of Medicine and Dentistry of New Jersey (UMDNJ). Everyone in New Jersey is taking a bath in low levels of toxic chemicals, so we need all the help we can get, and you're exactly the kind of person we most need help from -- honest, knowledgeable, committed, and principled. Your knowledge of toxicology and risk assessment -- and the courage you demonstrated as a government whistle-blower within the Occupational Safety and Health Administration -- are sorely needed in the Garden State, as they are everywhere in America.
I read with real interest your review of Cass Sunstein's book, Risk and Reason. I thought you did an excellent job of showing that Sunstein "has managed to sketch out a brand of QRA [quantitative risk assessment] that may actually be less scientific, and more divisive, than no analysis at all." To me, it seems that Sunstein is more interested in picking a fight with his fellow liberals than in helping people make better decisions.
What I want to discuss in this note to you is your attack on the precautionary principle. In the second paragraph of your review, you wrote, "I believe that at its best, QRA [quantitative risk assessment] can serve us better than a 'precautionary principle' that eschews analysis in favor of crusades against particular hazards that we somehow know are needlessly harmful and can be eliminated at little or no economic or human cost. After all, this orientation has brought us increased asbestos exposure for schoolchildren and remediation workers in the name of prevention, and also justified an ongoing war with as pure a statement of the precautionary principle as we are likely to find ("we have every reason to assume the worst, and we have an urgent duty to prevent the worst from occurring," said President Bush in October 2002 about weapons of mass destruction in Iraq)."
The two examples you give -- asbestos removal and Mr. Bush's pre-emptive war in Iraq -- really aren't good examples of the precautionary principle. Let's look at the details.
All versions of the precautionary principle have three basic parts:
1) If you have reasonable suspicion of harm
2) and you have scientific uncertainty
3) then you have a duty to take action to avert harm (though the kind of action to take is not spelled out in the precautionary principle).
Those are the three basic elements of the precautionary principle, found in every definition of the principle.
And here is a slightly more verbose version of the same thing:
1. Pay attention and heed early warnings. In a highly technological society, we need to keep paying attention because so many things can go wrong, with serious consequences.
2. When we develop reasonable suspicion that harm is occurring or is about to occur, we have a duty to take action to avert harm. The action to take is not spelled out by the precautionary principle but proponents of the principle have come up with some suggestions for action:
3. We can set goals -- and in doing this, we can make serious efforts to engage the people who will be affected by the decisions.
4. We can examine all reasonable alternatives for achieving the goal(s), again REALLY engaging the people who will be affected.
5. We can give the benefit of the doubt to nature and to public health.
6. After the decision, we can monitor, to see what happens, and again heed early warnings.
7. And we can favor decisions that are reversible, in case our monitoring reveals that things are going badly.
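The steps above can be sketched in code. This is a toy illustration only -- the function and variable names below are my own invention, and the precautionary principle does not prescribe any formalization:

```python
# Toy sketch of the three-part precautionary test and its follow-on steps.
# Names are illustrative inventions, not part of any formal definition.

def duty_to_act(reasonable_suspicion_of_harm: bool,
                scientific_uncertainty: bool) -> bool:
    """The principle triggers a duty to act when both conditions hold;
    it deliberately does not specify which action to take."""
    return reasonable_suspicion_of_harm and scientific_uncertainty

# The suggested follow-on steps, in order:
PRECAUTIONARY_STEPS = [
    "heed early warnings",
    "set goals, engaging the people who will be affected",
    "examine all reasonable alternatives for achieving the goals",
    "give the benefit of the doubt to nature and public health",
    "monitor after the decision, and keep heeding early warnings",
    "favor decisions that are reversible",
]

if duty_to_act(reasonable_suspicion_of_harm=True, scientific_uncertainty=True):
    for step in PRECAUTIONARY_STEPS:
        print("-", step)
```

The point of separating the trigger from the steps is the one made in the text: the principle tells you *when* you have a duty to act, while the choice of action is left to a deliberate, participatory process.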
Now, let's look at the examples you gave to see if they really represent failures of the precautionary principle.
The asbestos removal craze (1985-2000) was essentially started by one man, James J. Florio, former Congressman from New Jersey (and later Governor of New Jersey).
As we learn from this 1985 article from the Philadelphia Inquirer, gubernatorial candidate Florio was fond of bashing his opponent for ignoring the "asbestos crisis" and the "garbage crisis." In Mr. Florio's campaign for governor of New Jersey, the "asbestos crisis" was a political ploy -- and it worked. He got elected by promising to fix the asbestos crisis and the garbage crisis.
As governor, Mr. Florio's approach to the "garbage crisis" was to site a new garbage incinerator in each of New Jersey's 21 counties. At $300 to $600 million per machine, this would have set loose an unprecedented quantity of public monies sloshing around in the political system. Later it turned out that Mr. Florio's chief of staff had close ties to the incinerator industry.
I don't know whether Mr. Florio or his political cronies profited directly from the asbestos removal industry that he created almost single-handedly, but the decision-making for asbestos was similar to his approach to the "garbage crisis." He did not engage the affected public in setting goals and then examine all reasonable alternatives for achieving those goals. He did not take a precautionary approach.
In the case of garbage, Mr. Florio and his cronies decided behind closed doors that New Jersey needed to build 21 incinerators. He and his cronies then justified those incinerators using quantitative risk assessments.
Mr. Florio's approach to asbestos was the same: without asking the affected public, he decided that removal of asbestos from 100,000 of the nation's schools was the correct goal, and thus creating a new "asbestos removal" industry was the only reasonable alternative. You can read about Mr. Florio's "Asbestos Hazard Emergency Response Act of 1986" here.
So, the goal ("asbestos removal") was not set through consultation with affected parties. Perhaps if the goal had been to "Eliminate exposures of students and staff to asbestos in schools," a different set of alternatives would have seemed reasonable. Asbestos removal might not have been judged the least-harmful approach.
Even after the questionable decision was made to remove all asbestos from 100,000 school buildings nationwide, systematic monitoring was not done, or not done properly. Furthermore, the decision to remove asbestos was not easily reversible once removal had been undertaken and new hazards had been created.
So Mr. Florio's law created an asbestos removal industry overnight: companies without asbestos removal experience rushed to make a killing on public contracts, worker training was in some cases poor, removals were carried out sloppily, monitoring was casual or non-existent, and so new hazards were created.
This was not an example of a precautionary approach. It was missing almost all the elements of a precautionary approach -- from goal setting to alternatives assessment, to monitoring, heeding early warnings, and making decisions that are reversible.
Now let's examine President Bush's pre-emptive strike against Iraq.
True enough, Mr. Bush claimed that he had strong evidence of an imminent attack on the U.S. -- perhaps even a nuclear attack. He claimed to know that Saddam Hussein was within a year of having a nuclear weapon. But we now know that this was all bogus. This was not an example of heeding an early warning, because no threat existed -- it was an example of manufacturing an excuse to carry out plans that had been made even before 9/11.
Here is the lead paragraph from a front-page story in the Washington Post April 28, 2007:
"White House and Pentagon officials, and particularly Vice President Cheney, were determined to attack Iraq from the first days of the Bush administration, long before the Sept. 11, 2001, attacks, and repeatedly stretched available intelligence to build support for the war, according to a new book by former CIA director George J. Tenet."
Fully a year before the war began, Time magazine reported March 24, 2003, President Bush stuck his head in the door of a White House meeting between National Security Advisor Condoleezza Rice and three U.S. Senators discussing how to deal with Saddam Hussein through the United Nations or perhaps in a coalition with the U.S.'s Middle East partners. "Bush wasn't interested," reported Time. "He waved his hand dismissively, recalls a participant, and neatly summed up his Iraq policy: 'F--- Saddam. We're taking him out.'" This President was not weighing alternatives, seeking the least harmful one.
The CIA, the State Department, members of Congress, and the National Security staff wanted to examine alternatives to war, but Mr. Bush was not interested. With Mr. Cheney, he had set the goal of war, without wide consultation. He then manufactured evidence to support his decision to "take out" Saddam without much thought for the consequences. Later he revealed that God had told him to strike Saddam. Mr. Bush believed he was a man on a mission from God. Assessing alternatives was not part of God's plan, as Mr. Bush saw it.
This was not an example of the precautionary approach. Goals were not set through a democratic process. Early warnings were not heeded -- instead, fraudulent scenarios were manufactured to justify a policy previously set (if you can call "God told me to f--- Saddam" a policy). As George Tenet's book makes clear again and again, alternatives were not thoroughly examined with the aim of selecting the least harmful. For a long time, no monitoring was done (for example, no one has been systematically counting Iraqi civilian casualties), and the decision was not reversible, as is now so painfully clear.
This was definitely not an example of the precautionary approach.
So I believe your gratuitous attack on the precautionary principle is misplaced because you bring forward examples that don't have anything to do with precautionary decision-making.
Now I want to acknowledge that the precautionary principle can lead to mistakes being made -- it is not a silver bullet that can minimize all harms. However, if we take to heart its advice to monitor and heed early warnings, combined with favoring decisions that are reversible, it gains a self-correcting aspect that can definitely reduce harms as time passes.
I am even more worried by the next paragraph of your review of Sunstein's book. Here you seem to disparage the central goal of public health, which is primary prevention. You write,
"More attention to benefits and costs might occasionally dampen the consistent enthusiasm of the environmental movement for prevention, and might even moderate the on-again, off-again role precaution plays in current U.S. economic and foreign policies." I'm not sure what you mean by the "on-again, off-again role precaution plays in current U.S. economic and foreign policies." I presume you're referring here to asbestos removal and pre-emptive war in Iraq, but I believe I've shown that neither of these was an example of precautionary decision-making.
Surely you don't REALLY want environmentalists or public health practitioners to "dampen their consistent enthusiasm for prevention." Primary prevention should always be our goal, shouldn't it? But we must ask: what are we trying to prevent? And how best to achieve the goal(s)? These are always the central questions, and here I would agree with you: examining the pros and cons of every reasonable approach is the best way to go. (I don't want to use your phrase "costs and benefits" because in common parlance this phrase implies quantitative assessment of costs and benefits, usually in dollar terms. I am interested in a richer and fuller discussion of pros and cons, which can include elements and considerations that are not strictly quantifiable but are nevertheless important human considerations, like local culture, history, fairness, justice, community resilience and beauty.)
So this brings me to my fundamental criticisms of quantitative risk assessment, which I prefer to call "numerical risk assessment."
1. We are all exposed to multiple stressors all of the time and numerical risk assessment has no consistent way to evaluate this complexity. In actual practice, risk assessors assign a value of zero to most of the real-world stresses, and thus create a mathematical model of an artificial world that does not exist. They then use that make-believe model to drive decisions about the real world that does exist.
2. The timing of exposures can be critical. Indeed, a group of 200 physicians and scientists recently said they believe the main adage of toxicology -- "the dose makes the poison" -- needs to be changed to, "The timing makes the poison." Numerical risk assessment is, today, poorly prepared to deal with the timing of exposures.
3. By definition, numerical risk assessment only takes into consideration things that can be assigned a number. This means that many perspectives that people care about must be omitted from decisions based on numerical risk assessments -- things like historical knowledge, local preferences, ethical perspectives of right and wrong, and justice or injustice.
4. Numerical risk assessment is difficult for many (probably most) people to understand. Such obscure decision-making techniques run counter to the principles of an open society.
5. Politics can and does enter into numerical risk assessments. William Ruckelshaus, the first administrator of the U.S. Environmental Protection Agency, said in 1984, "We should remember that risk assessment data can be like the captured spy: If you torture it long enough, it will tell you anything you want to know."
6. The results of numerical risk assessment are not reproducible from laboratory to laboratory, so this decision-technique does not meet the basic criterion for being considered "science" or "scientific."
As the National Academy of Sciences said in 1991, "Risk assessment techniques are highly speculative, and almost all rely on multiple assumptions of fact -- some of which are entirely untestable." (Quoted in Anthony B. Miller and others, Environmental Epidemiology, Volume 1: Public Health and Hazardous Wastes [Washington, DC: National Academy of Sciences, 1991], pg. 45.)
7. By focusing attention on the "most exposed individual," numerical risk assessments have given a green light to hundreds of thousands or millions of "safe" or "acceptable" or "insignificant" discharges that have had the cumulative effect of contaminating the entire planet with industrial poisons. See Travis and Hester, 1991 and Rachel's News #831.
All this is not to say that numerical risk assessment has no place in decision-making. Using a precautionary approach, as we set goals and, later, as we examine the full range of alternatives, numerical risk assessments might be one technique for generating information to be used by interested parties in their deliberations. Other techniques for generating useful information might be citizen juries, a Delphi approach, or consensus conferences. (I've discussed these techniques briefly elsewhere.)
It isn't so much numerical risk assessment itself that creates problems -- it's reliance almost solely on numerical risk assessment as the basis for decisions that has gotten us into the mess we're in.
Used within a precautionary framework for decision-making, numerical risk assessment of the available alternatives in many cases may be able to give us useful new information that can contribute to better decisions. And that of course is the goal: better decisions producing fewer harms.
From: Rachel's Democracy & Health News #920, Aug. 16, 2007
RISK ASSESSMENT AND PRECAUTION: COMMON STRENGTHS AND FLAWS
By Adam Finkel
Whether we agree more than we disagree comes down to whether means or ends are more important. To the extent you share my view (and given your influence on me early in my career, I should probably say, "I share your view...") that we have a long way to go to provide a safe, healthy, and sustainable environment for the general and (especially) the occupational populations, our remaining differences are only those of strategy. Outcomes "producing fewer harms" for nature and public health are, I agree, the goal, and I assume you agree that we could also use fewer decisions for which more harm is the likely -- perhaps even the desired -- outcome of those in power.
But being on the same side with respect to our goals makes our differences about methods all the more important, because knowing where to go but not how to get there may ultimately be little better than consciously choosing to go in the wrong direction.
Your long-standing concern about quantitative risk assessment haunts me, if there's such a thing as being "haunted in a constructive way." I tell my students at the University of Medicine and Dentistry of New Jersey and at Princeton during the first class of every semester that I literally haven't gone a week in the past 10 years without wondering, thanks to you, if I am in fact "helping the state answer immoral questions" about acceptable risk and in so doing, am "essentially keeping the death camp trains running on time" (quote from Rachel's #519, November 7, 1996). I don't consider this analogy to be name-calling, because I have such respect for its source, so I hope you won't take offense if I point out that everyone who professes to care about maximizing life expectancy, human health, and the natural functioning of the planet's ecosystems ought to ask the same question of themselves. I do worry about quantitative risk assessment and its mediocre practitioners, as I will try to explain below, but I also wish that advocates of the precautionary principle would occasionally ask themselves whether more or fewer people will climb unwittingly aboard those death-camp trains if they run on a schedule dictated by "precaution."
And if the outcomes we value flourish more after an action based on quantitative risk assessment than they do after an action motivated by precaution, then a preference for the latter implies that noble means matter more than tangible ends -- which I appreciate in theory, but wonder what would be so noble about a strategy that does less good or causes more harm.
I distrust some versions of the precautionary principle for one basic reason. If I re-express your first three-part definition in one sentence (on the grounds that in my experience, the fact of scientific uncertainty goes without saying), I get "if you reasonably suspect harm, you have a duty to act to avert harm, though the kind of action is up to you." Because I believe that either inaction or action can be unacceptably harmful, depending on circumstances, I worry that a principle that says "act upon suspicion of harm" can be used to justify anything. This was my point about the Iraq war, which I agree is despicable, but not only because the suspicion of harm was concocted (at least, inflated) but because the consequences of the remedy were so obviously glossed over.
Whatever principle guides decision makers, we need to ask how harmful the threat really is, and also what will or may happen if we act against it in a particular way. Otherwise, the principle degenerates into "eliminate what we oppose, and damn the consequences." I'm not suggesting that in practice, the precautionary principle does no better than this, just as I trust you wouldn't suggest that quantitative risk assessment is doomed to be no better than "human sacrifice, Version 2.0." Because I agree strongly with you (your "verbose version, point 5") that when health and dollars clash, we should err on the side of protecting the former rather than the latter, I reject some risk-versus-risk arguments, especially the ones from OMB [Office of Management and Budget] and elsewhere that regulation can kill more people than it saves by impoverishing them (see, for example, my 1995 article "A Second Opinion on an Environmental Misdiagnosis: The Risky Prescriptions of Breaking the Vicious Circle [by Judge Stephen Breyer]," NYU Environmental Law Journal, vol. 3, pp. 295-381, especially pp. 322-327). But the asbestos and Iraq examples show the direct trade-offs that can ruin decisions made in a rush to prevent. Newton's laws don't quite apply to social decision-making: for every action, there may be an unequal and not-quite-opposite reaction. "Benign" options along one dimension may not be so benign when viewed holistically.
When I was in charge of health regulation at OSHA, I tried to regulate perchloroethylene (the most common solvent used in dry-cleaning laundry). I had to be concerned about driving dry-cleaners into more toxic substitutes (as almost happened in another setting when we regulated methylene chloride, only to learn of an attempt -- which we ultimately helped avert -- by some chemical manufacturers to encourage customers to switch to an untested, but probably much more dangerous, brominated solvent).
But encouraging or mandating "good old-fashioned wet cleaning" was not the answer either (even if it turns out to be as efficient as dry cleaning), once you consider that wet clothes are non-toxic but quite heavy -- and the ergonomic hazards of thousands of workers moving industrial-size loads from washers to dryers are the kind of "risk of action" that only very sophisticated analyses of precaution would even identify.
This is why I advocated "dampening the enthusiasm for prevention" -- meaning prevention of exposures, not prevention of disease, which I agree is the central goal of public health. That was a poor choice of words on my part, as I agree that when the link between disease and exposure is clear, preventing exposure is far preferable to treating the disease; the problem comes when exposures are eliminated but their causal connection to disease is unfounded.
To the extent that the precautionary principle -- or quantitative risk assessment, for that matter -- goes after threats that are not in fact as dire as worst-case fears suggest, or does so in a way that increases other risks disproportionately, or is blind to larger threats that can and should be addressed first, it is disappointing and dangerous. You can say that asbestos removal was not "good precaution" because private interests profited from it, and because the remediation was often done poorly, not because it was a bad idea in the first place. Similarly, you can say that ousting Saddam Hussein was not "good precaution" because the threat was overblown and it (he) could have been "reduced" (by the military equivalent of a pump-and-treat system?) rather than "banned" (with extreme prejudice). Despite the fact that in this case the invasion was justified by an explicit reference to the precautionary principle ("we have every reason to assume the worst and we have an urgent duty to prevent the worst from occurring"), I suppose you can argue further that not all actions that invoke the precautionary principle are in fact precautionary -- just as not all actions that claim to be risk-based are in fact so. But who can say whether President Bush believed, however misguidedly, that there were some signals of early warning emerging from Iraq? Your version of the precautionary principle doesn't say that "reasonable suspicion" goes away if you also happen to have a grudge against the source of the harm.
Again, in both asbestos removal and Iraq I agree that thoughtful advocates of precaution could have done much better. But how are these examples really any different from the reasonable suspicion that electromagnetic fields or irradiated food can cause cancer? Those hazards, as well as the ones Hussein may have posed, are/were largely gauged by anecdotal rather than empirical information, and as such are/were all subject to false positive bias. We could, as you suggest, seek controls that contain the hazard (reversibly) rather than eliminating it (irrevocably), while monitoring and re-evaluating, but that sounds like "minimizing" harm rather than "averting" it, and isn't that exactly the impulse you scorn as on the slippery slope to genocide when it comes from a risk assessor? And how, by the way, are we supposed to fine-tune a decision by figuring out whether our actions are making "things go badly," other than by asking "immoral questions" about whose exposures have decreased or increased, and by how much?
We could also, as you suggest, "really engage the people who will be affected," and reject alternatives that the democratic process ranks low. I agree that more participation is desirable as an end in itself, but believe we shouldn't be too sanguine about the results. I've been told, for example, that there exist people in the U.S. -- perhaps a majority, perhaps a vocal affected minority -- who believe that giving homosexual couples the civil rights conferred by marriage poses an "unacceptable risk" to the fabric of society. They apparently believe we should "avert" that harm. If I disagree, and seek to stymie their agenda, does that make me "anti-precautionary" (or immoral, if I use risk assessment to try to convince them that they have mis-estimated the likelihood or amount of harm)?
So I'm not sure that asbestos and Iraq are atypical examples of what happens when you follow the precautionary impulse to a logical degree, and I wonder if those debacles might even have been worse had those responsible followed your procedural advice for making them more true to the principle. But let's agree that they are examples of "bad precaution." The biggest challenge I have for you is a simple one: explain to me why "bad precaution" doesn't invalidate the precautionary principle, but why for 25 years you've been trashing risk assessment based on bad risk assessments! Of course there is a crevasse separating what either quantitative risk assessment or precaution could be from what they are, and it's unfair to reject either one based on their respective poor track records. You've sketched out a very attractive vision of what the precautionary principle could be; now let me answer some of your seven concerns about what quantitative risk assessment is.
(1) (quantitative risk assessment doesn't work for unproven hazards) I hate to be cryptic, but "please stay tuned." A group of risk assessors is about to make detailed recommendations to address the problem of treating incomplete data on risk as tantamount to zero risk. In the meantime, any "precautionary" action that exacerbates any of these "real-world stresses" will also be presumed incorrectly to do no harm...
(2) (quantitative risk assessment is ill-equipped to deal with vulnerable periods in the human life cycle) It's clearly the dose, the timing, and the susceptibility of the individual that act and interact to create risk. Quantitative risk assessment depends on simplifying assumptions that overestimate risk when the timing and susceptibility are favorable, and underestimate it in the converse circumstances. The track record of risk assessment has been one of slow but consistent improvement toward acknowledging the particularly vulnerable life stages and individuals (of whatever age) who are most susceptible, so that to the extent the new assumptions are wrong, they tend to over-predict. This is exactly what a system that interprets the science in a precautionary way ought to do -- and the alternative would be to say "we don't know enough about the timing of exposures, so all exposures we suspect could be a problem ought to be eliminated." This ends up either being feel-good rhetoric or leading to sweeping actions that may, by chance, do more good than harm.
(3) (quantitative risk assessment leaves out hard-to-quantify benefits) Here, as in the earlier paragraph about "pros and cons," you have confused what must be omitted with "what we let them omit sometimes." I acknowledge that most practitioners of cost-benefit analysis choose not to quantify cultural values, or aggregate individual costs and benefits so that equitable distributions of either are given special weight. But when some of us risk assessors say "the benefits outweigh the costs" we consciously and prominently include justice, individual preferences, and "non-use values" such as the existence of natural systems on the benefits side of the ledger, and we consider salutary economic effects of controls as offsetting their net costs. Again, "good precaution" may beat "bad cost-benefit analysis" every time, but we'd see a lot more "good cost-benefit analysis" if its opponents would help it along rather than pretending it can't incorporate things that matter.
(4) (quantitative risk assessment is hard to understand) The same could be said about almost any important societal activity where the precise facts matter. I don't fully understand how the Fed sets interest rates, but I expect them to do so based on quantitative evaluation of their effect on consumption and savings, and to be able to answer intelligent questions about uncertainties in their analyses. "Examining the pros and cons of every reasonable approach," which we both endorse, also requires complicated interpretation of data on exposures, health effects, control efficiencies, costs, etc., even if the ruling principle is to "avert harm." So if precaution beats quantitative risk assessment along this dimension, I worry that it does so by replacing unambiguous descriptions ("100 deaths are fewer than 1000") with subjective ones ("Option A is 'softer' than Option B").
(5) (Decision-makers can orchestrate answers they most want to hear) "Politics" also enters into defining "early warnings," setting goals, and evaluating alternatives -- this is otherwise known as the democratic process. Removing the numbers from an analysis of a problem or of alternative solutions simply shifts the "torturing" of the number into a place where it can't be recognized as such.
(6) (quantitative risk assessment is based on unscientific assumptions) It sounds here as if you're channeling the knee-jerk deregulators at the American Council on Science and Health, who regularly bash risk assessment to try to exonerate threats they deem "unproven." Quantitative risk assessment does rely on assumptions, most of which are grounded in substantial theory and evidence; the alternative would be to contradict your point #1 and wait for proof which will never come. The 1991 European Commission study you reference involved estimating the probability of an industrial accident, which is indeed a relatively uncertain area within risk assessment, but one that precautionary decision-making has to confront as well. The 1991 NAS study was a research agenda for environmental epidemiology, and as such favored analyses based on human data, which suffer from a different set of precarious assumptions and are notoriously prone to not finding effects that are in fact present.
(7) (quantitative risk assessment over-emphasizes those most exposed to each source of pollution) This is a fascinating indictment of quantitative risk assessment that I think is based on a non sequitur. Yes, multiple and overlapping sources of pollution can lead to unacceptably high risks (and to global burdens of contamination), which is precisely why EPA has begun to adopt recommendations from academia to conduct "cumulative risk analyses" rather than regulating source by source. The impulse to protect the "maximally exposed individual" (MEI) is not to blame for this problem, however; if anything, the more stringently we protect the MEI, the less likely it is that anyone's cumulative risk will be unacceptably high, and the more equitable the distribution of risk will be. Once more, this is a problem that precautionary risk assessment can allow us to recognize and solve; precaution alone can at its most ambitious illuminate one hazard at a time, but it has no special talent for making risks (as opposed to particular exposures) go away.
I note that in many ways, your list may actually be too kind given what mainstream risk assessment has achieved to date. These seven possible deficiencies pale by comparison to the systemic problems with many quantitative risk assessments, which I have written about at length (see, for example, the 1997 article "Disconnect Brain and Repeat After Me: 'Risk Assessment is Too Conservative.'" In Preventive Strategies for Living in a Chemical World, E. Bingham and D. P. Rall, eds., Annals of the New York Academy of Sciences, 837, 397-417). Risk assessment has brought us fallacious comparisons, meaningless "best estimates" that average real risks away, and arrogant pronouncements about what "rational" people should and should not fear. But these abuses indict the practitioners -- a suspicious proportion of whom profess to be trained in risk assessment but never were -- not the method itself, just as the half-baked actions taken in precaution's name should not be generalized to indict that method.
So in the end, you seem to make room for a version of the precautionary principle in which risk assessment provides crucial raw material for quantifying the pros and cons of different alternative actions. Meanwhile, I have always advocated for a version of quantitative risk assessment that emphasizes precautionary responses to uncertainty (and to human interindividual variability), so that we can take actions where the health and environmental benefits may not even exceed the expected costs of control (in other words, "give the benefit of the doubt to nature and to public health"). The reason the precautionary principle and quantitative risk assessment seem to be at odds is that despite the death-camp remark, you are more tolerant of risk assessment than the center of gravity of precaution, while I am more tolerant of precaution than the center of gravity of my field. If the majorities don't move any closer together than they are now, and continue to be hijacked by the bad apples in each camp, I guess you and I will have to agree to disagree about whether mediocre precaution or mediocre quantitative risk assessment is preferable. But I'll continue to try to convince my colleagues that since risk assessment under uncertainty must either choose to be precautionary with health, or else choose to pretend that errors that waste lives and errors that waste dollars are morally equivalent, we should embrace the first bias rather than the second. I hope you will try to convince your colleagues (and readers) that precaution without analysis is like the "revelation" to invade Iraq -- it offers no justification but sorely needs one.
As you admit, there are countless variations on the basic theme of precaution. I was careful to say in my review of Sunstein's book that I prefer quantitative risk assessment to "a precautionary principle that eschews analysis," and did not mean to suggest that most or all current versions of it fit the description.
From: Oakland Tribune (Oakland, Calif.)
CHILDHOOD EXPOSURE TO DDT INCREASES CHANCE OF BREAST CANCER LATER
By Douglas Fischer, Staff writer
Susan Lydon, a Bay Area author and journalist, never forgot the DDT fog trucks that rumbled through the Long Island, New York, neighborhood where she grew up.
She was her block's fastest kid. The mist was cool. The trucks slow. Her speed allowed her to stay longer than any of her pals in that comforting, pesticide-laced mist the sprayers left in their wake.
Lydon died of breast cancer at age 61 in 2005, going to her deathbed certain those carefree runs decades ago sealed her fate.
Her concern, it appears now, was justified.
A breakthrough study of Oakland women, published last week in the online edition of the journal Environmental Health Perspectives, suggests that exposure to DDT early in life significantly increases a woman's chances of developing breast cancer decades later. [See also Dr. Pete Myers' analysis of the new study.]
The findings bolster the controversial notion that exposure to low doses of hormonally active compounds at critical developmental stages -- in this case, as the breast is developing -- loads the gun, so to speak, priming the body to develop cancer years later.
The findings also make clear that the final chapter of DDT's legacy is not yet written. The young girls most heavily exposed to the pesticide -- women born in the late 1950s and early 1960s, when use of the pesticide peaked in the United States -- have not yet reached age 50, let alone the age of greatest breast cancer risk, typically sometime after menopause and around age 60.
The findings further suggest society is destined to relearn the lesson of DDT many times over. Myriad synthetic chemicals in our environment today interact with our bodies, with unknown consequences. Government regulators have little power to take precautionary action against compounds that appear problematic.
Reports like this, said Barbara Brenner, executive director of San Francisco-based Breast Cancer Action, show the fallacy of that approach.
"We have to start paying very close attention to what we put in our environment," she said. "This is an example of doing something to our environment where we did not understand the long-term consequences. I don't know how many times this story has to be told."
The study probed a unique database of some 15,000 Kaiser Permanente Health Plan members who participated in a longitudinal study tracking their health over decades.
Researchers with the Berkeley-based Public Health Institute selected 129 women within that study who developed breast cancer before age 50, then analyzed their archived blood samples taken between 1959 and 1967, while they were much younger.
Each sample from a woman with cancer was matched with a control sample from a woman of the same age without cancer.
Researchers found women who developed cancer later in life had far higher concentrations of DDT in their blood as youths.
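The matched-pair design described above is typically analyzed with a conditional (McNemar-style) odds ratio, computed only from pairs that are discordant for high exposure. The sketch below uses hypothetical counts purely for illustration -- these are not the study's actual numbers:

```python
# Matched case-control sketch with HYPOTHETICAL pair counts
# (illustrative only -- not the Public Health Institute study's data).

# Discordant pairs: only these carry information in a matched design.
case_high_control_low = 40  # case had high DDT, her matched control did not
control_high_case_low = 13  # control had high DDT, the case did not

# McNemar-style matched odds ratio: the ratio of discordant-pair counts.
matched_odds_ratio = case_high_control_low / control_high_case_low
print(f"matched odds ratio ~ {matched_odds_ratio:.1f}")
```

An odds ratio above 1 would indicate that high early-life DDT exposure was more common among the women who later developed cancer than among their matched controls.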
More significantly, women who were 14 years old or older in 1945, when DDT first hit the market, saw no increased breast cancer rates, suggesting exposure while the breast is developing is critical.
The study has its limits. Researchers don't know about other known risk factors that may have predisposed the women toward cancer. They don't know when the women were exposed to DDT. And the study size is small.
For those who, like Lydon, have memories of chasing DDT sprayers as a child, researchers involved in the study preached caution against drawing any firm conclusions.
"I don't think it's just early life exposures," said Mary Wolff, a professor of oncology at Mount Sinai School of Medicine and a report co-author. "Most cancers are an accumulation of a lot of factors."
Even among women most at risk -- those with the so-called "breast cancer gene" -- 30 percent live to age 70 and beyond without cancer, for reasons unknown, Wolff said. "It's a complex disease even when we know one of the biggest risk factors."
DDT, or dichlorodiphenyltrichloroethane, was banned in the United States in 1972 amid concerns it was concentrating in the food chain and killing off bald eagles and other raptors.
But the report goes far beyond the pesticide. It indicts widely held ideas and common practices concerning minute amounts of chemicals ubiquitous in our environment.
"The work that needs to be done to identify whether there are environmental risk factors (with any particular compound) is very complicated," said Barbara Cohn, a senior researcher at the Health Institute and the report's lead author. "But it's very important. We need to look deeply at that."
The report suggests, for instance, that society is heading down the same path with atrazine, one of the world's most widely applied pesticides, said Breast Cancer Action's Brenner.
The most cutting-edge drugs in the fight against breast cancer are known as aromatase inhibitors. After menopause, a woman's estrogen comes mainly from androgen hormones made in the adrenal glands, which the enzyme aromatase converts to estrogen. Because estrogen stimulates some breast cancers, doctors attempt to curb cancer growth by blocking the action of aromatase.
Atrazine is an aromatase stimulator.
Despite this and other concerns about the pesticide's impact on wildlife, federal regulators say the science is too inconclusive to curb its use.
"We start using chemicals as if the only thing they're going to affect is the plant," Brenner said. "We have to start doing business a different way."
Equally worrisome, the authors say, is that many of the women most heavily exposed to DDT have not yet reached age 50. DDT production peaked in the United States in 1965, and while most studies to date have concluded such exposure wasn't meaningful, this new evidence suggests those assurances may be premature.
The most strongly affected women -- those exposed when young -- are just now reaching age 50.
Said Cohn: "It's a caution that maybe there might be other types of evidence that need to be considered before that conclusion can be reached."
Contact Douglas Fischer at firstname.lastname@example.org or at (510) 208-6425.
(c) 2007 The Oakland Tribune
From: Environmental Science & Technology
IF FLAME RETARDANTS ARE MAKING CATS SICK, WHAT ABOUT CHILDREN?
By Kellyn Betts
New ES&T research documents that house cats can have extraordinarily high concentrations of polybrominated diphenyl ether (PBDE) flame retardants in their blood. Janice Dye, a veterinary internist at the U.S. EPA's National Health and Environmental Effects Research Laboratory (NHEERL), and her colleagues say their findings suggest that "chronic [cumulative] low-dose PBDE exposure may be more endocrine-disrupting than would be predicted by most short-term or even chronic PBDE exposure studies in laboratory rodents." They contend that cats can serve as sentinels for chronic human exposure -- of both children and adults -- to the persistent, bioaccumulative, and toxic compounds.
PBDEs are known to impair thyroid functioning. They have been used since the late 1970s as flame retardants in household products, including upholstered furniture, carpet padding, and electronics. During that same time period, the incidence of a cat thyroid ailment, known as feline hyperthyroidism, has risen dramatically. "Feline hyperthyroidism... was never reported" 35 years ago, but "now it is very common," explains coauthor Linda Birnbaum, director of NHEERL's experimental toxicology division. The disease's cause has been a mystery, Dye says.
PBDE concentrations in blood serum of the 23 house cats participating in the study were 20-100 times higher than the median levels of PBDEs in people living in North America, who have been shown to have the world's highest human PBDE levels. Eleven of the cats in the study suffered from feline hyperthyroidism, and the study "points the finger at the association" between the endocrine-disrupting compounds and the disease, Dye says.
Dye and her colleagues observed that the median PBDE concentrations in their study group's young cats were on a par with the levels reported in a sampling of North American children. The paper shows that both cat food and house dust are likely sources of the cats' PBDEs. Although scant research has examined PBDE uptake in small children, studies from Australia, Norway, and the U.S. document that children younger than 4 years can have far higher levels of the compounds than adults.
Scientists hypothesize that this is because PBDEs can be found in house dust and young children are exposed to far more dust than older people. Cats' meticulous and continuous grooming habits could conceivably result in PBDE uptake similar to what toddlers are exposed to through their increased contact with floors and "mouthing" behaviors, Birnbaum says. Laboratory animals exposed to PBDEs before and after birth can have problems with brain development, including learning, memory, and behavior.
The PBDE uptake pattern of the cats in the study mirrors that of North American people, Dye points out. Both have unusually large "outlier populations" of individuals with PBDE levels that are four to seven times greater than the median concentrations.
The paper makes a convincing case that cats can be "a useful sentinel species for both [human] exposure to PBDEs and examination of endocrine disruption," notes Tom Webster, an associate professor in the Boston University School of Public Health's department of environmental health. Åke Bergman, chair of Stockholm University's environmental chemistry department, agrees, adding that the paper is noteworthy for showing that many cats harbor high levels of the only PBDE compounds still being used in North America and Europe -- those associated with the Deca formulation used to flame-retard electronic products. As of 2004, the lighter-weight PBDEs associated with the Penta and Octa formulations used in polyurethane foam and other plastics were banned in Europe and discontinued in the U.S. However, these compounds are still found in older furniture and household furnishings, including upholstered furniture and carpeting.
For all of these reasons, the new research "supports the need for more studies on [PBDE] exposure to children from house dust," says Heather Stapleton, an assistant professor at Duke University's Nicholas School of the Environment.
Copyright 2007 American Chemical Society
From: The New York Times (pg. A21)
THE BIG MELT
By Nicholas D. Kristof
[Nicholas Kristof is a regular columnist for the New York Times.]
If we learned that Al Qaeda was secretly developing a new terrorist technique that could disrupt water supplies around the globe, force tens of millions from their homes and potentially endanger our entire planet, we would be aroused into a frenzy and deploy every possible asset to neutralize the threat.
Yet that is precisely the threat that we're creating ourselves, with our greenhouse gases. While there is still much uncertainty about the severity of the consequences, a series of new studies indicate that we're cooking our favorite planet more quickly than experts had expected.
The newly published studies haven't received much attention, because they're not in English but in Scientese and hence drier than the Sahara Desert. But they suggest that ice is melting and our seas are rising more quickly than most experts had anticipated.
The latest source of alarm is the news, as reported by my Times colleague Andrew Revkin, that sea ice in the northern polar region just set a new low -- and it still has another month of melting ahead of it. At this rate, the "permanent" north polar ice cap may disappear entirely in our lifetimes.
In case you missed the May edition of "Geophysical Research Letters," an article by five scientists provides the backdrop. They analyze the extent of Arctic sea ice each summer since 1953. The computer models anticipated a loss of ice of 2.5 percent per decade, but the actual loss was 7.8 percent per decade -- three times greater.
The article notes that the extent of summer ice melting is 30 years ahead of where the models predict.
Three other recent reports underscore that climate change seems to be occurring more quickly than computer models had anticipated:
Science magazine reported in March that Antarctica and Greenland are both losing ice overall, about 125 billion metric tons a year between the two of them -- and the amount has accelerated over the last decade. To put that in context, the West Antarctic Ice Sheet (the most unstable part of the frosty cloak over the southernmost continent) and Greenland together hold enough ice to raise global sea levels by 40 feet or so, although they would take hundreds of years to melt. We hope.
In January, Science reported that actual rises in sea level in recent years followed the uppermost limit of the range predicted by computer models of climate change -- meaning that past studies had understated the rise. As a result, the study found that the sea is likely to rise higher than most previous forecasts -- to between 50 centimeters and 1.4 meters by the year 2100 (and then continuing from there).
Science Express, the online edition of Science, reported last month that the world's several hundred thousand glaciers and small ice caps are thinning more quickly than people realized. "At the very least, our projections indicate that future sea-level rise may be larger than anticipated," the article declared.
What does all this mean?
"Over and over again, we're finding that models correctly predict the patterns of change but understate their magnitude," notes Jay Gulledge, a senior scientist at the Pew Center on Global Climate Change.
This may all sound abstract, but climate change apparently is already causing crop failures in Africa. In countries like Burundi, you can hold children who are starving and dying because of weather changes that many experts believe are driven by our carbon emissions.
There are practical steps we can take to curb carbon emissions, and I'll talk about them in a forthcoming column. But the tragedy is that the U.S. has become a big part of the problem.
"Not only is the U.S. not leading on climate change, we're holding others back," said Jessica Bailey, who works on climate issues for the Rockefeller Brothers Fund. "We're inhibiting progress on climate change globally."
I ran into Al Gore at a climate/energy conference this month, and he vibrates with passion about this issue -- recognizing that we should confront mortal threats even when they don't emanate from Al Qaeda.
"We are now treating the Earth's atmosphere as an open sewer," he said, and (perhaps because my teenage son was beside me) he encouraged young people to engage in peaceful protests to block major new carbon sources.
"I can't understand why there aren't rings of young people blocking bulldozers," Mr. Gore said, "and preventing them from constructing coal-fired power plants."
Critics scoff that the scientific debate is continuing, that the consequences are uncertain -- and they're right. There is natural variability and lots of uncertainty, especially about the magnitude and timing of climate change.
In the same way, terror experts aren't sure about the magnitude and timing of Al Qaeda's next strike. But it would be myopic to shrug and say that because there's uncertainty about the risks, we shouldn't act vigorously to confront them -- yet that's our national policy toward climate change, and it's a disgrace.
Rachel's Democracy & Health News (formerly Rachel's Environment & Health News) highlights the connections between issues that are often considered separately or not at all. The natural world is deteriorating and human health is declining because those who make the important decisions aren't the ones who bear the brunt. Our purpose is to connect the dots between human health, the destruction of nature, the decline of community, the rise of economic insecurity and inequalities, growing stress among workers and families, and the crippling legacies of patriarchy, intolerance, and racial injustice that allow us to be divided and therefore ruled by the few. In a democracy, there are no more fundamental questions than, "Who gets to decide?" And, "How do the few control the many, and what might be done about it?" As you come across stories that might help people connect the dots, please Email them to us at email@example.com. Rachel's Democracy & Health News is published as often as necessary to provide readers with up-to-date coverage of the subject. Editors: Peter Montague - firstname.lastname@example.org Tim Montague - email@example.com
To start your own free Email subscription to Rachel's Democracy & Health News send any Email to: firstname.lastname@example.org. In response, you will receive an Email asking you to confirm that you want to subscribe. To unsubscribe, send any Email to: email@example.com.