Another year is drawing to a close and everyone’s thinking about the future a little more.  At OTO we spend a lot of time thinking about the future because so much of what we do boils down to risk management and contingency planning.  Whether it’s human health risk assessment for a Brownfield site, evaluating potential seismic hazards for a hospital building, or preparing a spill response plan for an oil terminal, the focus of our work is planning for a safer future.

People, particularly engineers, like to think that what we create will last and be sustainable. How long should something be expected to last, though? This is an interesting question in the United States, where there are very few structures more than a century and a half old, and almost none more than two centuries old—my own house was built in 1900 and is considered “old,” but in many parts of the United Kingdom it’s possible to attend church on Sunday in a chapel built eight centuries ago.

Most consumer electronic products made these days can be expected to last a few years at most (although our cars definitely last longer than they used to). Apple, for example, despite all the attention given to its trendsetting designs like the smartphone, has been buffeted by a long series of class-action lawsuits over such problems as the batteries in third-generation iPods failing en masse after less than two years, raising questions of sustainability, planned obsolescence, and even unfair trade practices such as "designed to fail." Although the thought of replacing your cell phone every two to three years used to rankle, by now pretty much everyone seems to be used to the idea. I'd be happy to get four years' use out of a cell phone, but I'm pretty sure that in ten years' time it won't be able to communicate with the networks and software in use in 2026 any more than it could connect to the vacuum-tube Colossus computers of Bletchley Park's Second World War codebreaking project. This isn't necessarily progress, mind you, just a recognition that making things less backwards-compatible can be part of making them profitable. And yet, vinyl records are enjoying a surge in popularity.

The design life for most civil engineering projects, such as roads, buildings, and water supply systems, is in the range of 25 to 50 years, based on judgments about the expected durability of the construction materials and the capacity of the design versus demand. Take, for example, a town's water and sewer system designed in 1950 based on assumptions about projected population growth. If a major new employer relocates to town and the local population spikes, some of those assumptions may no longer hold, and the mains will have to be enlarged and another water source found. Where a lot of infrastructure is created at once, however, a lot of it reaches the end of its life at once; most of the United States' modern highway and major bridge infrastructure was built within a roughly 20-year period after the Second World War and is now at or well past the end of its original design life and badly in need of repair or replacement, largely because reinforced concrete is not nearly as inert and eternal as was once thought.

With environmental issues such as contaminated sites and solid waste landfills, we generally consider a timescale of about a century, which makes sense because most of the contaminants we worry about—gasoline, fuel oils, even many chlorinated compounds—will have geochemically weathered into nothing within that time. Yes, someday we will be free of PCE and TCE, though lead and arsenic will always be with us, and PCBs with five or more chlorines seem to be built for the ages. Still, this timespan is reflected in some of the material choices we make. For example, a cap for a landfill or CERCLA site might be constructed of several layers of engineered but ultimately natural materials (a clay layer to prevent water infiltration, venting and drainage layers of sand and gravel, a barrier layer of cobbles to stop burrowing animals, and an outer layer of grassy turf, all graded and contoured to shed water without erosion into grassy swales) because these are durable and even somewhat self-repairing. By contrast, a simple concrete or asphalt slab, however reinforced, will eventually crack, spall, and buckle, while its stormwater drains into pipes that will silt up, clog, and fail.

Some man-made structures have, of course, endured for much longer. Thomas Telford’s 1,368-foot wrought-iron Menai Bridge, completed in 1826, remains in daily use.

The Menai Suspension Bridge

The Pont du Gard aqueduct in southern France, built sometime between 40 and 60 AD (the reigns of the infamous Roman emperors Caligula and Nero), remains largely intact, thanks in part to maintenance over the years; it survived the fall of Rome and the Middle Ages largely because local noblemen could rent it out as a toll bridge.

The Pont du Gard

The Great Pyramid of Giza is somewhere around 4,500 years old; when Julius Caesar met Cleopatra around 48 BC, the pyramid was as ancient to Rome’s most famous dictator as Caesar is to me.

The Great Pyramid of Giza

For some projects, however, the design period starts to sound like deep time, where the project needs to remain viable not for years or decades, but for centuries or even millennia.

One of the singular engineering projects of our day is the Onkalo (Finnish for "hiding place") nuclear waste repository under construction in a sparsely settled area on the western coast of Finland. Site work began in the 1990s; the facility is planned to be complete in 2020 and to reach capacity around 2100. For a country with a small population and none of the conspicuous natural resource wealth enjoyed by the oil states, Finland is no stranger to major engineering projects, though these are generally of a decidedly pragmatic bent, in contrast to the half-mile-tall Burj Khalifa superskyscraper in Dubai. The country is, after all, proudly home to one of the world's largest commercial shipbuilding industries, producing everything from warships to cruise liners (if you have ever sailed Royal Caribbean, the liner was probably built in Finland) to nuclear-powered icebreakers. The Finns are also used to making things that last; consider the old Nokia 3310 cell phone, best remembered for being almost indestructible, in stark contrast to the third-generation iPod.

The Burj Khalifa, 2,722 feet tall.

Finland gets a quarter of its electricity from nuclear power plants, and a national law requires Finland to take responsibility for the country's own nuclear waste rather than trying to fob it off on someone else. Onkalo is accordingly Finland's third such facility, and is intended to store a century's worth of spent nuclear fuel from power plants in massive vaults carved into migmatitic gneiss bedrock nearly 1,400 feet underground, with the goal of isolating the material for as long as high-level radioactive waste remains dangerous… or "only" about a hundred thousand years.

A profile of the Onkalo facility


A conceptual view of Onkalo at final build-out

The US, by contrast, simply buried the reactors used in the initial Manhattan Project research in a forty-foot-deep hole in the ground on land in rural Illinois that is now a nature preserve, marked by little more than a stone tablet inscribed "Do Not Dig," and has been dithering over a long-term storage facility at Yucca Mountain, Nevada since 1978.

Site A/Plot M Disposal Site, Red Gate Woods, Illinois

A hundred thousand years is about ten times as long as the period since H. sapiens shook off his Paleolithic frostbite at the end of the last Ice Age, got a dog, and started planting wheat, and it's more than twenty times as long as all of our species' recorded history. Nothing built by man has lasted even a tenth as long (Stonehenge and the Watson Brake mound complex in Louisiana are each a comparatively trifling 5,000 years old), and probably very little that exists now will endure other than scars on the land created by mines, canals, and other geoengineering projects. To paraphrase the Scottish philosopher and mathematician John Playfair (who publicized the work of James Hutton, "discoverer" of geologic time), our minds grow giddy by looking so far into that abyss of time.

At that point, the matter of a design period is no longer just an engineering question but a philosophical one too, as explored in Into Eternity, a documentary about the Onkalo facility. It's no longer enough to find a geologically stable location and pick materials that could be expected to last that long. A repository like this would have to survive not just earthquakes and groundwater leaching, but also a nuclear World War Three and another ice age. Can you wager on there even being a government to maintain such a facility, when most of the world's countries are less than 100 years old, and even the oldest continuously operating human organizations, such as the Roman Catholic Church, are "only" about 1,500 years old? Or, since financial assurance mechanisms may not survive a war, a financial collapse, or a post-apocalyptic new dark age, should the repository be able to endure without any human intervention at all?

How do you keep someone ten thousand years from now from unwittingly opening it? No deed restriction (or any other document, for that matter) will outlast the paper or hard drive it's recorded on unless it's regularly recopied onto durable media, and who's going to do that? How do you design a warning sign when the language you speak now may be as long lost as the Sumerian tongue is today, when the radiation trefoil's meaning could be as lost to posterity as the story behind Paleolithic cave paintings, and when even stone-carved hieroglyphics weather into illegibility after five or six millennia? Do you even put up warning signs at all, or just bury it as deeply as possible and hide it as well as you can, hoping the whole thing will never be rediscovered?

The radiation trefoil symbol

A portion of the Lascaux Caves paintings


The Long Now Foundation was founded in 01996 to explore these issues. The 0 isn't a typo; it's like the sixth digit on your car's odometer, and the philosophical goal of the foundation is to explore methods by which mankind and its artifacts last long enough for that 0 to tick over into 1. Its signature project is the 10,000 Year Clock (which is pretty much what it sounds like), which started receiving more attention after some of the foundation's ideas were included in Neal Stephenson's 2008 science fiction novel Anathem. If that sounds too quixotic, a similar but more pessimistic-sounding project is the Svalbard Global Seed Vault, a repository of plant seeds built deep into a mountainside near an old coal-mining town on the Arctic archipelago of Svalbard, where seeds would hopefully survive natural or man-made disasters for hundreds or thousands of years, giving mankind a shot at restarting global agriculture if need be.

How do you design something that may well need to outlast modern civilization (or put even less optimistically, to survive modern civilization, or at least its darker impulses)?  Now THAT is engineering for the long term!

…At least the chlorinated hydrocarbons will be gone by then.

I met a traveler from an antique land,

Who said—“Two vast and trunkless legs of stone

Stand in the desert. . . . Near them, on the sand,

Half sunk a shattered visage lies, whose frown,

And wrinkled lip, and sneer of cold command,

Tell that its sculptor well those passions read

Which yet survive, stamped on these lifeless things,

The hand that mocked them, and the heart that fed;

And on the pedestal, these words appear:

My name is Ozymandias, King of Kings;

Look on my Works, ye Mighty, and despair!

Nothing beside remains. Round the decay

Of that colossal Wreck, boundless and bare

The lone and level sands stretch far away.”

– Percy Bysshe Shelley, "Ozymandias," 1818



Coal Tar – Yesterday’s Nuisance, Today’s Problem
OTO's work includes a lot of remediation projects in Massachusetts and Connecticut. One of the things we run into on occasion is coal tar, a viscous, black, smelly product of our industrial heritage. Coal tar is not one of the more common challenges encountered at MCP sites in Massachusetts, with gasoline, fuel oil, chlorinated VOCs, and metals all coming up more often. It is, however, a complex and challenging mixture of contaminants. Thanks in part to the unfortunate experience with the Worcester Regional Transit Authority's redevelopment project on Quinsigamond Avenue in Worcester and the recent proposal to cap tar-contaminated sediments in the Connecticut River in Springfield, coal tar is back in the news after a period of relative obscurity.


Don't worry: this isn't in Massachusetts.

Coal tar is a challenge for three main reasons.

First, it can be a widespread problem. Most coal tar encountered in the environment is a legacy of the coal gasification plants that supplied Massachusetts's cities and towns with heating and lighting gas before natural gas became available via pipeline in the 1950s. Virtually every city and substantial town had a gasworks by 1900, and some towns had several. Gasworks were often built on the "wrong side of the tracks" owing to their noisome character; their smell and constant racket beggared description. Where such "city gas" plants were not available, mill or factory complexes like the Otis Mills Company in Ware often had their own private gas plants, some of which also sold surplus gas to local residents for gas lights and stoves, and in some cases essentially became the town's gas company.

As byproducts of making gas out of coal, these plants produced coal tar, cyanide-laden spent purification media, and much else of a dangerous nature. Some coal tar could be refined into waterproofing pitch, paving materials, industrial solvents, and even the red, foul-tasting carbolic soap that nobody who has ever seen "A Christmas Story" will ever forget. Massachusetts was also home to several plants that reprocessed tar into these products, some of which later became all too well known, like the Baird & McGuire site in Holbrook, MA, or the Barrett plant in Everett and Chelsea.

In addition to releasing wastes to the environment at the gasworks themselves, gas companies also historically created off-plant dumps for their wastes, creating hazardous waste sites that might be located miles from the gasworks, or even in a different town.
EPA historically kept a sharp eye out for coal gasification plants, and during the 1980s listed over a dozen coal tar sites in Massachusetts on CERCLIS. Many of the sites that most alarmed MassDEP in the '80s were also related to manufactured gas plants (MGPs), for example Costa's Dump in Lowell or the former Commercial Point gasworks in Boston. In recent years, however, regulatory attention has narrowed toward other concerns, most notably vapor intrusion from chlorinated VOCs.

The second important consideration is that coal tar is pretty dangerous stuff, posing both cancer and noncancer risks. Coal tar is typically a heterogeneous mixture of something like 10,000 distinct identifiable compounds, ranging from low-molecular-weight, highly volatile compounds like benzene and styrene to massive "2-methyl chickenwire" asphaltene compounds. From an environmental and toxicological perspective, coal tar is most conspicuous for its high concentrations of polycyclic aromatic hydrocarbons (PAHs), as much as 10% by weight, which make it significantly more toxic than petroleum. Two of coal tar's signature PAHs are benzo[a]pyrene and naphthalene; some coal tar can be up to 3% naphthalene alone, which accounts for the distinct, penetrating "mothball" odor at MGP remediation sites.

Coal tar was associated with occupational diseases ranging from skin lesions to scrotal cancer as early as the mid-19th century, and was the first substance conclusively shown to be a carcinogen (by the Japanese scientist Katsusaburo Yamagiwa in 1915). The British scientist E.L. Kennaway subsequently proved in 1933 that benzo[a]pyrene was itself a carcinogen, the first individual compound to be so categorized. Coal tar also contains lesser-known PAHs, some of which may have significantly greater carcinogenic potential than benzo[a]pyrene. Coal tar is also a powerful irritant; remediation workers and others exposed to it face painful skin irritation as well as respiratory and vapor intrusion hazards from high levels of benzene and coal tar pitch volatiles.

The third consideration is that coal tar is very persistent in the environment; tars and other gasworks wastes are highly resistant to geochemical weathering (and also to many remediation technologies, such as in-situ chemical oxidation), and do not break down in the environment like gasoline and most fuel oils do, so that tar contamination can still create problems over a century after the material was released.

So, coal tar is still with us, and will be for a long time. On the bright side though, with effort and careful planning, these challenges can be overcome. Many of the “wrong side of the tracks” locations of former gasworks are now prime downtown real estate, and a number of Massachusetts gasworks have been redeveloped as shopping plazas, transportation hubs, and biotech research facilities. As land prices, urban real estate availability, and government incentives continue to drive brownfield redevelopment, hopefully most of the Commonwealth’s former gasworks will see a new life.


Soxhlet Extraction Schematic

Thanks to a friend's sharp eye, I recently learned something new about the analysis of PCB caulk samples. Because of its potential significance, I thought it deserved a special blog note.

First, a little background on how caulk samples get tested for PCBs. It's basically a three-step process:

  1. First, a carefully measured amount of the caulk sample is extracted with an organic solvent. As a chemist would say, PCBs would rather be dissolved in a non-polar organic solvent than in the caulk, so they move from the caulk to the solvent. If you are in USEPA Region 1, this extraction must be conducted using "the Soxhlet method," also known as EPA Method 3540C. The Soxhlet method is the gold standard of extraction methods, but it uses a lot of energy, water, solvent, and glassware, so ecologically it is not a very "green" method. It also takes a long time: the method calls for the extraction to proceed for 16 to 24 hours. In other EPA Regions, other extraction methods (such as sonication) may still be acceptable.
  2. Once the PCBs have been extracted from the caulk into the solvent phase, the solvent needs to be cleared of the other potentially interfering chemical schmutz that got extracted out of the caulk along with the PCBs. These cleanup steps are critical before you run any of the extract through the gas chromatograph (GC), the instrument that will tell the analyst how much PCB is in the extract.
  3. Following the cleanup steps, you inject a very small portion of the solvent extract into the GC. At the end of the GC is a very, very sensitive detector that can measure the truly minuscule amounts of PCBs that may have been in the sample. The detector generates a signal that allows the analyst to back out the concentration of PCBs that were originally in the caulk, if any.
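As a rough sketch of step 3, the back-calculation from the GC reading to the original caulk concentration is just bookkeeping with extract volume, dilution, and sample mass. All of the numbers below are hypothetical, chosen only to illustrate the arithmetic, not taken from any specific method:

```python
# Hypothetical values for illustration only; not from any specific EPA method.
extract_conc = 5.0      # ug/mL of PCB measured by the GC in the final extract
extract_volume = 10.0   # mL of final extract
dilution_factor = 1.0   # any dilution applied before injection
sample_mass = 1.0       # g of caulk that was extracted

# Total PCB mass recovered from the sample, then normalized to sample mass
total_pcb_ug = extract_conc * extract_volume * dilution_factor
conc_ppm = total_pcb_ug / sample_mass  # ug/g is the same as mg/kg, i.e. ppm
print(conc_ppm)  # → 50.0
```

In practice the lab's software does this automatically, but the chain of multiplications is why every volume and dilution along the way has to be recorded carefully.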

This probably seems simple enough, but think for a minute about what might happen when your GC is set to measure PCB concentrations of between 1 and 10 ppm and all of a sudden a caulk sample comes through with 200,000 ppm of Aroclor 1260! Yikes! This is the equivalent of trying to weigh a full-grown African elephant on an office postage scale: you are not going to get an accurate weight, and your postage scale will never be the same.

And of course it’s not a happy day for the analyst who will now need to spend many hours or days getting the residual PCBs out of that very sensitive GC detector, not to mention all the grossly contaminated glassware and other lab equipment.  Obviously labs need to take steps to protect themselves from this possibility or they would very quickly be out of business.

How Labs Try to Reduce This Risk

One thing labs can do to reduce the risk of blowing out their GCs is to ask the people submitting samples if they know the approximate concentration of PCBs in the caulk.  But usually they don’t know, and if it were your lab would you necessarily take the word of the person submitting the sample?  I’m not sure I would.

Another option is to pre-screen samples using a “quick and dirty” method to get a rough idea of the PCB concentration.  Such a method might involve a very simple extraction, followed by a big dilution of the extract to reduce the PCB concentration (if any are actually there) followed by injection into the GC.  Something very close to this procedure is known to EPA as Method 3580A, but is also known colloquially as the “swish and shoot” method.
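To put rough numbers on why the screening dilution has to be so large, suppose (as in the example above) the GC is calibrated for 1 to 10 ppm and the sample turns out to contain 200,000 ppm of Aroclor 1260. The arithmetic is simple:

```python
calib_low, calib_high = 1.0, 10.0  # ppm, the GC's calibration window
hot_sample = 200_000.0             # ppm of Aroclor 1260 in the surprise sample

# Smallest and largest dilution factors that land the extract in the window
min_dilution = hot_sample / calib_high
max_dilution = hot_sample / calib_low
print(f"dilute {min_dilution:,.0f}x to {max_dilution:,.0f}x")
# → dilute 20,000x to 200,000x
```

A 20,000-fold dilution is routine glassware work, which is exactly why the quick screening pass is cheap insurance for the lab.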

Now this method is completely fine for getting a quick read on the relative PCB concentration in a sample. In fact, if the results from the swish and shoot screening show the analyst that the sample is hot (i.e., lots of PCBs well in excess of the regulatory limits), then there really is no need to conduct any further analysis, because the person submitting the sample in most cases just wants to know whether the concentration is greater or less than the regulatory thresholds for PCBs. So some labs stop the analysis at this point and report the results from the sample prepared with the Method 3580A extraction.

Situations Where Swish and Shoot Results Might Steer You Wrong

If a sample is analyzed following a relatively inefficient extraction and the resulting sample concentration still exceeds regulatory standards, then a more efficient extraction can only result in a concentration that exceeds standards by an even greater amount.  As long as the sole analytical objective is to identify whether or not samples exceed regulatory standards, then this objective can be satisfied by a less efficient extraction provided the result is greater than the regulatory standard.

However, if your analytical objective is also to map PCB concentrations over a site area to achieve a clearer picture of how concentrations change spatially in the field, then you need an extraction and analysis protocol that is consistent, efficient, and reproducible. Without these qualities you won't be able to reliably tease out the forensic trends you want from the data.

The lesson to be learned from choosing the right extraction method for PCB analysis is the timeless quality assurance principle of identifying how you want to use data before you collect and analyze samples. Some of the biggest problems with scientific studies arise when data are collected for one purpose but then used in a way that was not anticipated by the scientists who collected and analyzed the samples. Data that satisfied the original study's objectives may not be suitable for a subsequent study with different objectives.

So-called “meta studies” and a number of retrospective studies where batches of pre-existing data are aggregated to increase the statistical power of a study’s conclusions can be guilty of not thinking about whether the data quality objectives of the original studies meet the needs of the new study.  What motivates the meta study authors is creating as large a data set as possible to give their results statistical significance.  But this quest for large data sets can cause the consideration of data quality objectives to fall by the wayside.

These "big data" studies can sometimes make for splashy headlines because the large number of samples makes results look statistically significant. But too often these results need to be walked back because the authors did not adequately consider the data quality objectives of the original studies in assembling their meta-data sets.

Last Word

So to reiterate: think about how your PCB data will be used before you submit the samples to a lab, then make sure the extraction and analysis methods to be used will give you the data you need.


Three years ago I wrote a draft post about the cost of PCB removal in schools, but never finished it. What reminded me of it was a recent article[1] by Robert Herrick et al., which develops an estimate of the number of schools in the US that may contain PCBs in caulk. The estimate is presented as a range: 12,960 to 25,920 schools.

Herrick speculates that this range is likely to be low, and I agree. My own estimate from three years ago was closer to 43,000. Given the statistical limitations of our methods (both trying to extrapolate from small, possibly non-representative sample sets to the entire population of US schools), our numbers are actually pretty close.

However, what particularly interests me is the next step in the analysis, estimating the potential costs of remediating all those schools.  This is a step that Herrick, perhaps quite wisely, did not take.  With that introduction, what comes next is a lightly edited version of my 2013 unpublished post in which I do try to estimate possible costs of removing PCBs from schools nationwide.

Did EPA consider compliance costs for municipalities in its development of the PCB regulations?

When the USEPA proposes new regulations, two of the questions Congress and the public usually ask are: “How much will it cost to implement these new requirements? And is it worth it?”  To answer these questions, EPA will typically conduct a “cost-benefit analysis.”  This analysis is supposed to demonstrate the advantages of EPA’s proposed actions and explain how the benefits are worth the cost.

These analyses aren't always 100% accurate, because it can be hard to know all the exact costs associated with changes, just as it can be difficult to anticipate all the benefits. Nonetheless, the cost-benefit analysis is a good-faith effort to consider the positives and the negatives associated with a proposed regulation. Developing these analyses is one reason EPA employs economists.

Surprisingly, though, EPA's 1998 PCB Mega Rule contained no cost-benefit analysis; there was not a single sentence on the costs of these regulations, even though they have imposed huge financial burdens on the public and private sectors. To give EPA some benefit of the doubt, much of that burden is only now becoming evident as the full extent of PCBs in schools and other buildings is being discovered.

Isn’t it worth any cost to protect schools and children from any risk?

There is no answer to this question that will satisfy everyone, but as a society we can take steps to limit the negative impacts of real, demonstrable risks in our schools. By real risks I mean threats that have been shown to actually harm schools and children under real-world conditions. Examples from the top of my list of demonstrated risks would include cars and guns, but PCBs in building materials wouldn't be on my list at all. Why not? The answer is simple: there are no credible scientific studies showing harm to the health of schools, students, or staff despite the presence of PCBs in buildings for over 60 years.

But in this post I want to focus on the financial burden the PCB regulations are putting on schools and public education.  As a former board of education member in a small New England town, I can tell you first-hand about the battles to secure funding for public school systems.  Every year costs go up and every year vocal groups want to pay less tax and accuse administrators of mismanaging funds.

Anyone who thinks that a typical municipality can come up with extra millions of dollars to pay for PCB removal in a school ought to spend some time on their local board of education.  There isn’t extra money to do PCB remediation in a town’s budget; that money is going to come right out of the education budget.  The harm done to a typical school system by redirecting funds from educational programs to PCB removal is much greater than any harm done by the PCBs.

So, did anyone at EPA think about PCB remediation costs? No? Let me help.

So back to the threshold question, did EPA think about the financial burden it was placing on municipalities when it retroactively banned PCBs in building materials including those already in schools?  If they did, I can’t find any evidence of it.  To be helpful, I am providing below a very rough estimate of the possible national cost of removing PCBs from US K-12 schools.

The approach I use is a method I picked up in college called “the back of the envelope” approach.  I’ll leave developing a more rigorously researched approach to the economists at EPA; it’s been my experience that the back of the envelope approach often gets you remarkably close to the right answer.

The Back of the Envelope Accounting Office

From a quick Google search I discovered that there are approximately 132,000 private and public K-12 schools in the US.  As a somewhat educated guess, let’s assume that 33% (one in three) of these schools have PCBs in at least one building.  Further, let’s assume that the average cost of testing and removing PCBs from an average school is $2 million.  Some schools will cost less to remediate, but many will cost much more.  Some quick multiplication takes us to a cost of $87 billion to remove PCBs from all public and private K-12 schools.  I recently heard Speaker of the House Paul Ryan say that $80 billion is a lot of money even in Washington.
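The multiplication behind that figure is easy to check. The school count comes from a quick search, and the one-in-three share and $2 million per school are, as stated above, assumptions rather than measured values:

```python
schools = 132_000            # approximate US K-12 schools, public and private
share_with_pcbs = 0.33       # assumed fraction with PCBs in at least one building
cost_per_school = 2_000_000  # assumed average testing + removal cost, dollars

total = schools * share_with_pcbs * cost_per_school
print(f"${total / 1e9:.0f} billion")  # → $87 billion
```

That is the whole back of the envelope; the uncertainty is in the two assumed inputs, not the arithmetic.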

There is obviously a lot of uncertainty in this estimate. My guess is that I underestimated both the actual number of affected school buildings and the average cost per building to remove PCBs. Nonetheless, it is a starting point, and I am going to use it below for a few simple comparisons.

What do we spend per year on Public K-12 Education?

How much is an $87 billion PCB removal cost in terms of the nation's K-12 education budget? According to the National Center for Education Statistics, public school districts had a total budget of $610 billion for the 2008-2009 school year. This amount historically increases by only 1-2% per year, so I am just going to use the 2008-2009 budget numbers, because the uncertainty in the other values in this analysis likely swamps out any small adjustment to the school budget number. Of the total K-12 budget, $519 billion went to current education, $65.9 billion went to capital construction projects, $16.7 billion covered interest payments, and $8.5 billion went to other costs.

Cost of Getting PCBs out of Public Schools

The $87 billion for PCB removal covers public and private schools, and about 75% of all K-12 schools are public. Assuming the costs for PCB removal are the same for public and private schools, the cost for removing PCBs from just the public schools will be about $65 billion. This means the PCB removal cost would be about 11% of one year's total national education budget, or about 13% of the annual operating budget. It would, however, be about 100% of a year's capital construction spending.
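The same envelope-level arithmetic reproduces those percentages; all figures below are the rounded estimates used in this post, not audited numbers:

```python
pcb_total = 87e9       # estimated PCB removal cost, public + private schools
public_share = 0.75    # roughly 75% of K-12 schools are public
pcb_public = public_share * pcb_total  # about $65 billion

total_budget = 610e9   # FY2008-09 total public K-12 budget
operating = 519e9      # current education spending
capital = 65.9e9       # capital construction spending

print(f"{pcb_public / total_budget:.0%} of the total budget")   # → 11%
print(f"{pcb_public / operating:.0%} of operating spending")    # → 13%
print(f"{pcb_public / capital:.0%} of capital spending")        # → 99%
```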

This analysis is obviously too simplistic, because some school systems will not have any PCBs, and some will likely have a lot.  There is no apparent way for school systems across the country to even out these costs nationally among themselves, although there may be some ability for states to even out the costs within a state.

Still, it highlights what a large issue PCBs in schools can be for a municipality, and it clearly explains why most school administrators want to stay as far away from testing their schools for PCBs as they possibly can.

Final Thoughts

Two final thoughts for this post:

First and foremost – If there were credible scientific evidence that PCBs in schools were actually harming the health of students or staff, I would fully support decisions to get them out regardless of the cost.  But this evidence does not exist, and not for lack of trying on the part of research scientists.  The fact is that most students and staff receive significantly more PCBs daily in their diet than they do from being in school buildings.  The 60+ year history of PCBs in building materials has simply not turned up evidence of harm to the health of building users.

Second – The estimated $2 million per school PCB removal cost may fall well short of the actual average cost, because in many cases schools simply cannot be made PCB free.  Instead, school buildings have been closed and the children and staff reassigned to other schools.

In affluent communities the solution to this problem might be demolishing the old building and constructing a new one, but in more typical American communities it means a long-term loss of educational resources and a significant lessening of educational capacity, as the old school building is shuttered and becomes a lasting reminder of what has been lost.

 

[1] “Review of PCBs in US schools: a brief history, an estimate of the number of impacted schools, and an approach for evaluating indoor air samples”; Herrick, R.F., Stewart, J.H., and Allen, J.G.; Environ Sci Pollut Res (2016) 23: 1975-1985.


Bad Science

A lot of BS passes itself off as good science these days.  While for the most part we applaud the good science that leads to improved environmental protection and better public health, we don’t hesitate to call out the BS when we see it. In that vein, I recently had the good fortune to receive this link to a John Oliver segment on that very topic. I strongly encourage you to invest the time to watch it; it is very funny and very true!  Enjoy.


In Part 1 of this post, I wrote about the misguided push in my home state of Connecticut to test more schools for PCBs. There’s a misconception that PCBs, even with the low potential doses likely to occur in the indoor environment, pose a health risk. This misconception persists despite a 50+ year history of PCBs in many school buildings without a documented instance of a student, teacher or other staff member experiencing adverse health effects indicative of PCB toxicity. And yes, scientists have looked.

While the presence of PCBs in buildings does not seem to have caused bodily harm, the act of removing them from school buildings can be devastating to school and municipal budgets. Experience shows that removing PCBs from schools is a very expensive process, one whose cost can grow dramatically as more information and test data become available. Relatively few communities have annual school budgets that can withstand the impact of a school PCB removal project.

I ended Part 1 with this paragraph:

“TSCA – the Law of Unintended Consequences
You can read the Toxic Substances Control Act (TSCA) from cover to cover and you’ll find nothing about removing PCBs from schools or other buildings. Take a look at the 800+ page legislative history of TSCA and you will still find nothing about PCBs in schools. How about EPA’s PCB regulations (40 CFR 761)? No, still nothing about removing PCBs from schools or other buildings. So if there is nothing in the statute or the regulations about removing PCBs from schools or other buildings, and if there is no evidence that PCBs in building materials pose a health risk, then what explains the need to assess and remove PCBs from schools?”

The goal of Part 2 is to answer that question.


In the beginning . . .
From their first publication in 1978/79 until the 1998 Mega-Rule changes, the PCB regulations contained what I call the “in-service rule”, which reads in part:

“NOTE: This subpart does not require removal of PCBs and PCB Items from service and disposal earlier than would normally be the case. However, when PCBs and PCB Items are removed from service and disposed of, disposal must be undertaken in accordance with these regulations. PCBs (including soils and debris) and PCB Items which have been placed in a disposal site are considered to be ‘‘in service’’ for purposes of the applicability of this subpart”.

My naive interpretation of the in-service rule is that PCBs that were already incorporated into some product – and thus in-service – could remain in service until that product was taken out of service.  Thus PCBs in building materials could remain in those materials (and those materials could remain where they were) until they were removed from service and prepared for disposal.

(First Disclosure: In a conversation with EPA headquarters, I was told the in-service rule was only intended to apply to PCBs that had already been disposed of in a manner that did not comply with the PCB regulations. However, I think it’s obvious from the use of the words “in-service” that this current EPA HQ interpretation is inconsistent with a plain reading of the text. In my view it takes a somewhat “strained” interpretation to equate the terms “in-service” with “illegally disposed of”).

Looking beyond the in-service rule, even a casual examination of the current PCB regulations makes it apparent that EPA’s main regulatory focus has been on liquid PCBs, like the ones found in transformers and capacitors. This makes sense when your objective is to limit the further spread of PCBs to the environment – liquids are prone to being spilled and obviously spread much more easily than solids. Objectively, the regulation of PCBs formulated into solid products, like building materials, seems to have been an afterthought for EPA. While its researchers knew about PCBs in building materials, even in the 1970s, EPA’s regulation writers either did not know about them or just decided they weren’t important.

The 1994 proposed use authorization
EPA’s regulation writers finally started paying attention to PCBs in solids in the mid-1990s.  In a prelude to the 1998 PCB Mega-Rule, EPA published an Advance Notice of Proposed Rule Making (an ANPRM) in 1991 requesting comments on a number of issues concerning PCB regulation. In the December 6, 1994 Federal Register, EPA published a summary of the comments received and explained how the agency planned to respond to them.

Many commenters described experiences where PCBs had been unexpectedly discovered in building materials (such as caulk, paint and adhesives) during demolition or renovation projects. These commenters told EPA that removing these PCBs posed a huge engineering, construction and financial burden. EPA responded that it had previously been unaware of this problem, but was now proposing a solution to this unintended consequence of the PCB regulations. A few pages later, in the very same 1994 Federal Register volume, EPA proposed a new use authorization, 40 CFR 761.30(q), to legally authorize the continuing use of PCBs incorporated into solid building materials.

In the preamble to the proposed change EPA explained its rationale for the new use authorization this way:

“While the continued use of unauthorized pre-TSCA materials is a violation of the existing PCB regulations, in most cases the premature removal of the media containing PCBs could only be achieved with great difficulty and at enormous expense given the extraordinary efforts that would be required to remove the PCBs.” (Emphasis added).

So as of December 1994, the stage was set for the adoption of a new use authorization for PCBs in solid building materials. But, as one of my old bosses liked to say, “There’s been many a slip between the cup and the lip”. When EPA finally promulgated the 1998 PCB Mega-Rule four years later, the proposed use authorization for PCBs in building materials was missing. What happened? The only explanation on offer appears at the end of the 1998 Mega-Rule preamble:

“Finally, EPA is deferring regulatory action on proposed 761.30(q) for future rule-making”. . . . “Although EPA received many comments supporting the proposed authorizations, many commenters wanted EPA to drop many, if not all, of the proposed authorizations. EPA needed additional time to review the recently submitted risk assessment studies and also to obtain additional data for certain uses in order to reduce the uncertainties associated with the available studies.”

Since it is almost 20 years later, do you think it would be impolite to ask whether these uncertainties still exist? In a conversation with EPA headquarters a few months ago, I was told not to expect a use authorization for PCBs in building materials any time soon.

So what exactly are the uncertainties EPA is concerned about? And how do they relate to PCBs in schools?

(Second disclosure: This is the end of the historical account. The rest of this post is based on my research and opinions.)

We know a lot about PCBs. In fact, they are among the best studied of all man-made environmental contaminants. There are 209 individual PCB chemicals, known as congeners, that make up the PCB group; we know all their molecular weights, volatilities, and many of their other physical properties. We divide them into dioxin-like and non-dioxin-like categories based on the way they interact with biological receptors, which has also been studied in depth. There are elaborate risk assessment models that claim to assess the level of risk based on the particular combination of the 209 congeners present. Every week a new research paper about PCBs is published with even more information.

What is probably more important is that we know the average concentration of PCBs in the environment and in people has been dropping significantly since the 1970s. We know that the average daily and annual doses of PCBs people receive have also declined significantly. And of course we know that, despite significant effort, scientists have not been able to tease out any consistent evidence of adverse health effects in people exposed to PCBs in building materials.  Remember, consistent, reproducible results are the most important factor separating good science from bad science.

The question I set out to answer with this post was: If there is nothing in the statute or the regulations about removing PCBs from schools or other buildings, and if there is no evidence that PCBs in building materials pose a health risk, then what explains the need to assess and remove PCBs from schools?

Because, after all, if Congress were inclined to pass legislation, or if EPA were going to promulgate regulations that would cost communities and public school systems billions of dollars, don’t you think there would be a cost-benefit analysis somewhere? Before new federal regulations come into being, there is supposed to be a rigorous assessment of potential negative and positive impacts, for the very purpose of avoiding costly unintended consequences. So, um, what happened here?  There never was a cost-benefit analysis; there never was an honest discussion with the public about the risks, costs, and potential benefits of regulations that could collectively cost communities hundreds of billions of dollars.

Reluctantly, the conclusion I’ve come to is that there are no good answers to my questions.  My best guess is that most EPA researchers and independent scientists would rather not be the ones to point out that the emperor has no clothes; but the fact is that the fear of PCBs in buildings is without scientific foundation. With school budgets unnecessarily absorbing costs of millions of dollars per building, isn’t it time to pay attention to the real science?

Final thoughts
Last July EPA issued new guidance for schools and other buildings that may contain PCBs. While the preface contains disclaimers that the new guidance is not intended to replace the requirements of the PCB regulations or TSCA, after reading it one could be forgiven for thinking that this is pretty much what it was meant to do. The guidance recommends a sensible Best Management Practices (BMP) approach to managing known or suspected PCBs in buildings and downplays the need for, or desirability of, testing building materials for PCBs.

It’s unlikely that this new guidance will be codified into regulations any time soon, but it is helpful that EPA has softened its position, and it hopefully signals a more rational approach to the issue of PCBs in buildings going forward.

 

Postscript: OTO just changed its web host, which led to some confusion in the posting of this article.  We apologize for any inconvenience.