The OTO geotechnical group will feature a series of blog posts discussing soil settlement concerns and their mitigation. Topics will include forensic studies and remediation alternatives for settlement and damage at existing buildings, as well as geotechnical engineering solutions that mitigate settlement risk in new construction.

 

Part I:  Soil Detectives! Assessment of Settlement of Existing Foundations – Ashley Sullivan, PE

The geotechnical engineers at OTO spend a good portion of their time providing geotechnical engineering solutions to mitigate potential settlement for new structures.  In addition, we are often called in to assess situations where structural damage has already occurred due to the settlement of an existing building.  Our role in these situations is to determine whether foundation settlement is a cause of the structural damage and, more importantly, what caused the foundation settlement.  We will then provide alternatives to mitigate ongoing settlement and allow the structures to be productively used.  We work closely with owners, structural engineers, architects and sometimes real estate agents.  These are always interesting projects, since they allow us to put on our detective caps and practice forensic geotechnical engineering.

 

Oftentimes, the OTO geotechnical engineer is not the first phone call; the client has usually already reached out to a structural engineer or architect. The structural engineer will often assess the aboveground building components, such as columns and beams, to determine whether these load-bearing components are sized correctly and functioning properly.  If the structural components appear to be adequate, the team may start to look at the foundations and ground conditions. This is where OTO can be a valuable asset to the project.

 

Once our services are engaged, we first gather as much information as is readily available regarding the history of the building and likely subsurface conditions. We look for information regarding construction (year built, materials), the type of damage observed (cracks, doors and windows that won’t close, leaning walls, etc.), and timelines (immediate/sudden settlement, ongoing settlement over a long time span, etc.).  We also discuss any changes in site conditions, such as increased building or fill loads, or recent nearby construction work.  Before we leave OTO’s office, our geotechnical engineer will put some thought into anticipated soil conditions.  We access OTO’s database of soil boring and test pit information to review conditions at nearby sites, and we consult online resources and OTO’s library of published soil and bedrock geology maps, along with historical Sanborn fire insurance and USGS topographic maps.  With our experience and the help of published and public information, we can often make an educated guess as to the soil conditions anticipated at a particular site.

 

Shortly after receiving the initial call, we normally perform a site visit to obtain a firsthand look at the problem area.  We typically review topography and look for indications of fills, changes in drainage (sinkholes, soft ground), or slope instability/erosion (bent tree trunks, surficial slips).  At that time, we determine the best approach for investigations, such as the type and approximate locations of invasive testing and/or a settlement monitoring program.  Investigations may include test pits, soil borings and a review of existing subsurface utilities and drainage.  A monitoring program may include the installation of points on a building and the nearby ground surface, which are then surveyed periodically over time to determine trends in the amount and rate of settlement.

[Photo: An uneven door or cracked, uneven concrete can be a good field indicator of settlement.]
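
A settlement monitoring program like the one described above ultimately reduces to simple trend arithmetic. Here is a minimal sketch of how periodic survey readings on a single monitoring point might be reduced to a total settlement and an average rate; the dates, elevations and monitoring point are invented for illustration, and a real program would track many points and interpret the trends with engineering judgment.

```python
# Hypothetical sketch: estimating a settlement rate from periodic survey
# readings on a single monitoring point. All values are invented.
from datetime import date

# (survey date, surveyed elevation of the monitoring point, feet)
readings = [
    (date(2017, 1, 10), 100.00),
    (date(2017, 4, 12), 99.96),
    (date(2017, 7, 15), 99.93),
    (date(2017, 10, 16), 99.91),
]

t0 = readings[0][0]
days = [(d - t0).days for d, _ in readings]
elevs = [e for _, e in readings]

# Ordinary least-squares slope: elevation change in feet per day
n = len(days)
mean_t = sum(days) / n
mean_e = sum(elevs) / n
slope = sum((t - mean_t) * (e - mean_e) for t, e in zip(days, elevs)) / \
        sum((t - mean_t) ** 2 for t in days)

print(f"Total settlement to date: {elevs[0] - elevs[-1]:.2f} ft")
print(f"Average rate: {-slope * 365:.3f} ft/yr")  # negative slope = settling
```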

Many times, the test pits or soil borings with accompanying laboratory tests quickly reveal the cause of the problem.  Some examples of potential causes include:

  • A pocket of peat, soft clay or loose, non-engineered fill that has compressed under the building load.
  • A buried layer of decomposed organics, trash or other deleterious material that has compressed, and will continue to degrade, over time.
  • A soft, compressible layer of fine-grained soil that has consolidated under the weight of new structure or fill loads.
  • Wet, loose, granular soils indicating a possible “wash out” condition due to a drainage pipe break and the introduction of water into the soil matrix.

 

We then continue the investigations to determine the nature and extent of the unsuitable conditions.  After the assessment is complete, the geotechnical engineers can start the next phase of the evaluation, “The Fix”, which will be discussed in a future blog post.

 

Do you have a building that is settling?  Contact Ashley Sullivan at 413-276-4253 or sullivan@oto-env.com to see how OTO can help!

 


A lot of what OTO does involves helping clients manage risks. Sometimes we do this in a reactive mode: digging up leaking gasoline tanks, capping abandoned landfills, and otherwise resolving problems that already exist. The proactive side is less obvious and dramatic (ok, and maybe a little less fun), and consists mostly of identifying potential hazards, planning how to deal with them, and helping train staff in how to respond.

Multiple Planning Requirements

There is a surprisingly large amount of this work, because many federal environmental laws and regulations include emergency planning requirements.

For example:

  • RCRA Contingency Plans for hazardous waste generators and Treatment, Storage and Disposal Facilities (40 CFR 262.34, 264.52, 265.52, and 279.52);
  • Spill Prevention, Control, and Countermeasure (SPCC) plans under the Oil Pollution Act (40 CFR 112);
  • Facility Response Plans under the Oil Pollution Act (40 CFR 112.20 and 112.21), with review and approval from EPA, Coast Guard, DOT and Department of the Interior regulators as appropriate;
  • Clean Air Act Risk Management Plans (40 CFR Part 68);
  • DOT Facility Response Plans (49 CFR Part 194);
  • OSHA Emergency Action Plans (29 CFR 1910.38(a));
  • OSHA HAZWOPER (29 CFR 1910.120); and
  • OSHA Process Safety Management (29 CFR 1910.119).


Combining Plans

Wouldn’t it be nice if you could somehow combine all of these into one plan?

Well, as it happens, you can.  This isn’t really a new thing—EPA’s guidance for “integrated contingency plans,” sometimes referred to as the “One Plan” concept, was published in the Federal Register in June 1996 (61 FR 28641, June 5, 1996 and 40 CFR 265.54), and EPA Region 1 and the Massachusetts Office of Technical Assistance produced a demonstration model plan several years ago. The “one plan” option provides a means to combine numerous contingency plans into one living document that can address multiple overlapping (or we could say ‘redundant’) requirements. It can’t cover all of them, but it can usually cover the ones listed above.

When you consider how much of the required content of each kind of plan listed above overlaps, combining them makes a great deal of sense. At bottom, a good plan consists of four components:

  1. A description of the location/situation and risks, including information such as standard precautions, potential hazards, potential receptors, an analysis of what could go wrong, and what would happen as a result (e.g. an oil slick on a river upstream of a drinking water intake, or an anhydrous ammonia cloud over an urban area). The degree of analysis is the major variable among the various plan types; for example, an SPCC or FRP requires evaluation of potential releases to water bodies, whereas the RMP is concerned principally with releases of airborne vapors or gases.
  2. Emergency contact information for facility and corporate staff, emergency response personnel, regulators, and emergency management agencies such as the Coast Guard or the Local Emergency Planning Committee;
  3. Written procedures for what facility staff should do in the event of an emergency; and
  4. Documentation of relevant things such as changes to the facility, inventories of available equipment, updates to the plan, and staff training. After all, the best plan in the world doesn’t mean much if it’s not documented.

For example….

Consider, for example, a commercial dairy processing facility located along a river—ok, milk sounds innocuous, but it’s probably more complicated than that. Animal fats can be as destructive to aquatic life as heavy fuel oils: one of the major effects of any sort of oil or fat is a huge increase in chemical and biochemical oxygen demand (COD and BOD), which depletes the oxygen in the water to a level at which fish and other critters can’t survive. Animal fats are therefore covered under the Oil Pollution Act, so an SPCC plan is required. It could also be the case (as it often is) that the facility has a massive refrigeration plant using anhydrous ammonia, which triggers the Clean Air Act’s Risk Management Plan and OSHA Process Safety Management requirements, RCRA generator status for various hazardous wastes, and so on. Then let’s assume EPA thinks the facility could cause ‘substantial harm’ to the river in the event something goes wrong, and requires a Facility Response Plan (an “FRP”) on top of the SPCC.

That’s five planning requirements right there, but at bottom most of them are going to deal with the same regulated materials, the same staff, and the same emergency response procedures, so having one well-maintained and drilled plan instead of five makes complete sense. It’s also far easier to keep one plan up to date than five, particularly when that means documenting inspections, staff training and, for some plans such as Facility Response Plans, actual drills.

Train Like It’s For Real


Of course, having a plan on paper is only the first step; even the best and most comprehensive plan won’t do any good without training and practice. At our recent in-house OSHA refresher training, OTO had a guest speaker who had been an OSHA inspector for 37 years.  He presented us with a number of case studies of industrial accidents, and one recurring theme was emergency action plans that essentially existed only on paper and provided no value at all when an actual emergency occurred, since staff couldn’t implement a plan they hadn’t been trained on.

Staff need to be trained, equipment bought and maintained, and procedures practiced both in the field and as tabletop exercises. Effective plans represent ongoing commitments, and require inspections, training and documentation. This of course means money, in terms of staff time, hiring an engineering consultant to assist in developing plans, and sometimes hard construction costs, such as upgrading secondary containment for tanks, modifying stormwater systems to reduce potential spill exposures, or modernizing HVAC systems to control vapors or fumes. Many plans, such as the SPCC, require the party responsible for the facility to certify that it is committing the necessary resources to make the plan workable. Good plans are also ‘living’ documents: if you change your operation, say by adding another 20,000-gallon aboveground storage tank, you need to change your plan, and if one part of your plan turns out not to work, you update your plan.

While it’s always good to be the “man with a plan”, sometimes all you need is one plan.

The Underground Tank Problem

If you own an old underground storage tank (UST) in Massachusetts, particularly a single-walled steel tank, chances are you have heard about the push to remove these older tanks.  The problem with them is that over time they are prone to leaking, and when they leak, they contaminate the environment and can harm human health.  New USTs need to meet strict guidelines for environmental safety, including double-walled construction (a tank within a tank) and a leak detection system.  Older tanks usually do not have these features.

If you are the owner of an old UST, taking it out of service can be a scary prospect, since it can be expensive and, in some cases, disrupt business on a property for anywhere from a few days up to several weeks.

Old single-walled steel (SWS) tanks were in common use from the early 20th century up through the late 1980s.  These tanks are more prone to leaking their contents because they lack a second ‘wall’ in case the interior or exterior wall fails, and they can also lack other leak prevention equipment such as corrosion protection or upgraded product piping.  At OTO we’ve even come across a number of pre-1930 tanks that were riveted together rather than welded; these tanks did not even have tight seams!

By the late 1970s, there were hundreds of thousands of SWS USTs across the country.  Some are still in use, and many others were abandoned in place, often with no documentation that they had ever been installed. Removing a leaking UST is a potentially expensive clean-up project.  Leaking USTs have historically been the most widespread source of oil and gasoline contamination to groundwater and drinking water aquifers.  In addition, occupants of buildings near leaking USTs can be exposed to vapors that migrate underground and into buildings.

The Government Acts

In 1988, the USEPA set a deadline of 1998 for: 1) the removal of out-of-use USTs; 2) the incorporation of leak detection, corrosion protection, and spill and overfill containment equipment on most new or retrofitted USTs; and 3) the registration of certain in-use USTs (such as for retail gasoline or diesel) with state agencies. This requirement led to the removal and remediation of thousands of leaking USTs in the Commonwealth. In addition, the Massachusetts Department of Fire Services prohibited the installation of new SWS tanks after 1998.

SWS tanks installed prior to this date are now nearing or past their recommended service lives. The 2005 Energy Policy Act included the requirement that SWS tanks be removed by August 7, 2017. In 2009, responsibility for the UST program in Massachusetts was transferred to MassDEP, which in January 2015 promulgated new UST regulations (310 CMR 80.00); the SWS tank prohibition and removal requirement appears at 310 CMR 80.15.

These regulations are intended to protect public health, safety and the environment by removing these SWS USTs from service because they have a higher likelihood of leaking and releasing petroleum products into the environment.

 

The Current Status

MassDEP has established a number of regulatory deadlines for the assessment, repair and/or removal of older UST systems.  In certain situations, MassDEP is exercising enforcement discretion and granting extensions of regulatory deadlines.

In addition, the following must be completed:

  • All spill buckets tested and, if necessary, repaired or replaced in accordance with 310 CMR 80.21(1)(a) and 28(2)(g);
  • All turbine, intermediate and dispenser sumps tested and, if necessary, repaired in accordance with 310 CMR 80.27(7) and (8);
  • All Stage II vapor recovery systems decommissioned in accordance with 310 CMR 7.24(6)(l), if applicable; and
  • New Stage I vapor recovery requirements met in accordance with 310 CMR 7.26(3)(b), if applicable.

At the time of UST system removal, environmental conditions must be assessed per state and federal regulations. In Massachusetts, tank closures must meet MassDEP’s UST regulations, 310 CMR 80.00. These regulations allow tanks to be permanently closed in place only if they cannot be removed from the ground without removing a building, or if removal would endanger the structural integrity of another UST, structure, underground piping or underground utilities.

If you have questions or need assistance related to a UST system, please contact Sean Reilly with O’Reilly, Talbot & Okun at (413) 788-6222.


With all the hubbub in Washington DC lately, it’s been largely overlooked that some of the regulatory changes that started under the previous administration are only now coming to fruition.

[Photos: hazardous materials storage, bad practice and best practice]

For example, the Hazardous Waste Generator Improvement Rule went into effect on the federal level on May 30, 2017 by amending parts of the regulations promulgated pursuant to the Resource Conservation and Recovery Act (RCRA).  RCRA was passed in 1976 and provides the national regulatory framework for solid and hazardous waste management.  These changes will become effective in Massachusetts, Connecticut, and other states with authorized hazardous waste programs as the states update their regulations.

RCRA’s ‘generator requirements’ haven’t changed much in the last thirty years—the last major change happened in 1984. The new requirements address the process by which a person or company who generates a waste: 1) evaluates whether it is a hazardous waste or a solid waste; 2) stores the waste and prepares it for transport; and 3) maintains records of the waste’s generation and treatment, recycling or other management.

Changes in Hazardous Waste Generation

Industry has changed a great deal since RCRA went into effect. Between 2001 and 2015, the amount of hazardous waste generated in Massachusetts dropped from 1,121,752 tons to 39,108 tons, even as the number of registered waste generators nearly doubled (EPA Biennial Hazardous Waste Report, 2001 and 2015). Interestingly, EPA’s national biennial reports indicate the quantity of RCRA waste generated in Massachusetts didn’t change very much between 1985 (114,381 tons) and 2001, although there was some fluctuation as EPA added new categories of generators and wastes to RCRA. The general trend has been toward fewer Large Quantity Generators and many more Very Small Quantity Generators, so that a representative slice of the modern population of generators consists mostly of auto repair businesses, retail stores, pharmacies, and small manufacturing operations rather than the large factories and sprawling chemical plants of the 1970s and early 1980s.

This changing waste generation demographic (for lack of a better word) matters a lot, since compliance with these generator requirements generally happens at the ‘factory floor’ level, and while Kodak or Monsanto plants had chemical engineering departments to help with waste characterization and management, small shops generally don’t.

While the new rule makes over 60 changes to the RCRA regulations, its main goal is to clarify the ‘front end’ generator requirements. Some of these changes are major; others involve only routine regulatory housekeeping; and some are potential compliance pitfalls for generators.  Several of these changes dovetail with EPA’s 2015 changes to the Definition of Solid Waste, which opened up expanded opportunities for recycling certain materials rather than requiring that they be handled as solid or hazardous wastes.

Other changes in the new rule include:

  • Under some circumstances, Very Small Quantity Generators (VSQGs) will be allowed to send hazardous waste to a large quantity generator (LQG) that is under the control of the same “person” for consolidation before the waste is shipped to a RCRA-designated treatment, storage or disposal facility (TSDF). This is most likely to benefit large “chain” operations, such as retail stores, pharmacies, health care organizations with many affiliated medical practices, universities, and automotive service franchise operations.
  • One of the common problems for VSQGs or SQGs is that since generator status is determined by the quantity of wastes generated, sometimes exceptional events (such as a spill or process line change) occur which bump them up into the Large Quantity Generator category, triggering many other regulatory requirements even if the status change is only for the space of a single month. The Generator Improvement Rule allows a VSQG or SQG to maintain its existing generator category following such events, as long as certain criteria are met.
  • The addition of an explanation of how to quantify wastes and thus determine generator status (a simplified sketch of this logic follows this list).
  • Changes to the requirements for Satellite Accumulation Areas, and for the first time, a formal definition of a Central Accumulation Area.
  • An expanded explanation of when, why and how a hazardous waste determination should be made, and what records must be kept. The final rule does not include requirements proposed in the initial rule that generators keep records of these determinations until a facility closes. The rule also recognizes that most generators base their waste determinations on knowledge of the ingredients and processes that produce a waste, rather than laboratory testing.
  • Clearer requirements for facilities that recycle hazardous waste without storing it.
  • Small Quantity Generators will have to re-notify their generator status every four years.
  • A clarification of which generator category applies if a facility generates both acute and non-acute hazardous waste (for example, a pharmacy that generates waste pharmaceuticals that are P-listed acute hazardous wastes).
  • Revising the regulations for labeling and marking of containers and tanks.
  • “Conditionally Exempt Small Quantity Generators” will be renamed Very Small Quantity Generators, a term already used in many states including Massachusetts.
  • Large and Small Quantity Generators will need to provide additional information to Local Emergency Planning Committees as part of their contingency plans.
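
As promised above, here is a minimal sketch of the monthly-quantity logic that drives generator status, using the federal thresholds of 100 kg and 1,000 kg of non-acute hazardous waste per calendar month. The function name is ours, and acute-waste limits and the episodic-event relief described above are deliberately left out; treat it as an illustration, not a compliance tool.

```python
# Minimal sketch of federal RCRA generator categories by monthly quantity.
# Thresholds: up to 100 kg/month = VSQG; 100-1,000 kg/month = SQG;
# over 1,000 kg/month = LQG. Acute-waste limits and the Generator
# Improvements Rule's episodic-event relief are omitted for simplicity.
def generator_category(kg_this_month: float) -> str:
    if kg_this_month > 1000:
        return "LQG"   # Large Quantity Generator
    if kg_this_month > 100:
        return "SQG"   # Small Quantity Generator
    return "VSQG"      # Very Small Quantity Generator

# A one-time event, like a spill cleanup, can bump a shop's category:
for month, kg in [("May", 40), ("June", 1250), ("July", 35)]:
    print(month, generator_category(kg))
# Without the episodic-event provisions, June alone would trigger LQG
# obligations for a shop that is normally a VSQG.
```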

The new rule also contains several expanded sections on exemptions applicable to wastes, together with a distinction between “conditional requirements,” such as those which would qualify a waste for an exemption, and “independent requirements,” such as container labeling and spill prevention, which are mandatory across the board.

In addition, the rule makes many relatively minor changes, such as updated references to other regulations and rearranging portions of the Code of Federal Regulations text into a more intuitive order.

As with any new or revised regulation, we can expect a learning curve, particularly as implementation filters down to the state agencies. In the meantime, EPA has the Final Rule on its website, along with several fact sheets and FAQs:

https://www.federalregister.gov/documents/2016/11/28/2016-27429/hazardous-waste-generator-improvements-rule

https://www.epa.gov/hwgenerators/frequent-questions-about-hazardous-waste-generator-improvements-final-rule

https://www.epa.gov/hwgenerators/fact-sheet-about-hazardous-waste-generator-improvements-final-rule


[Photo: caulk and brick]

PCBs, polychlorinated biphenyls, are a group of related chemicals that were used for a variety of applications up until the 1970s.  In the 1960s, the development of improved gas chromatography methods allowed scientists to recognize the persistence and global distribution of PCBs in the environment.  Since that time, hundreds of studies have been conducted to better understand the environmental transport and fate of PCBs.

However, it has been only over the past 20 years or so that studies have focused on learning more about PCBs that were incorporated into building products and their fate in the indoor environment.  Much of what has been learned is surprising and counter-intuitive.

For example, while it is generally true that PCBs have low volatility and low water solubility, it turns out that even at room temperature they are volatile enough to migrate in and around buildings at concentrations high enough to have regulatory implications.  This migration may take place slowly, over the course of several decades, but in some instances it has happened in as little as a year.  With today’s sensitive instrumentation, chemists are able to track the movements of even tiny concentrations of PCBs as they migrate.

This post is a primer on the three main categories of PCB-containing building materials and how their PCBs can move inside buildings.

Primary Sources

As the name suggests, primary sources are building materials that were either deliberately or accidentally manufactured with PCBs as an ingredient prior to their installation in a building.  The most common primary sources are:

  • Caulking;
  • Paint;
  • Mastics;
  • Various surface coatings; and
  • Fluorescent light ballasts (FLBs).

FLBs are different from the other materials on this list because they use PCBs in an “enclosed” manner, defined as use in a manner that will ensure no exposure of human beings or the environment to PCBs.  However, with continuous use FLBs are known to deteriorate, sometimes resulting in the release of PCBs.  Only FLBs manufactured before the PCB ban (1979) should contain PCBs, and by now (2017) any of these older PCB-containing FLBs should have been replaced with non-PCB ballasts, since even the youngest PCB FLBs are almost 40 years old and FLBs are considered to have had a functional life span of only 10-15 years.  The PCBs used in US-made FLBs were almost exclusively Aroclors 1242 and 1016.

The other primary PCB sources on the above list are considered “open” PCB uses because, unlike in FLBs, the PCBs were not contained in an enclosure.  In most of these cases PCBs were added to improve the performance of the products, contributing fire resistance, plasticity, adhesiveness, extended useful life and other desirable properties.  For PCBs to impart these properties, they were generally included at concentrations ranging from 2% to about 25%; this is equivalent to 20,000 parts per million (ppm) to 250,000 ppm.  The most common PCB found in US-made building materials is Aroclor 1254, followed by Aroclors 1248, 1260 and 1262.

PCBs can sometimes be present in primary sources by accident rather than by design.  The presence of Aroclor PCBs in primary sources at concentrations less than 1,000 ppm (equal to 0.1%), or non-Aroclor PCBs at any concentration, may indicate an accidental PCB use.

Under the federal PCB regulations, primary sources are referred to as PCB Bulk Products, and they are regulated when their PCB concentration is 50 ppm or greater.

Secondary Sinks and Secondary Sources

When a PCB primary source is in direct contact with a porous building material, the PCBs can often migrate from the primary source into the porous material.  Porous building materials known to adsorb PCBs in this way include concrete, brick and wood.  When this migration occurs, the now-contaminated porous materials are referred to as secondary PCB sinks.  Secondary sinks often have PCB concentrations in the range of 10-1,000 ppm.

While the federal regulations apply to primary sources when their concentration is 50 ppm or greater, requirements for secondary sinks are stricter.  They are categorized as PCB Remediation Wastes and are regulated when their PCB concentration is 1 ppm or greater.
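
Those two thresholds are easy to mix up, so here is a minimal sketch of the classification logic just described. The function and its output wording are ours, not regulatory text, and the actual rules (40 CFR Part 761) contain many more nuances than this illustration.

```python
# Sketch of the two federal PCB thresholds discussed above: primary sources
# (PCB Bulk Products) are regulated at >= 50 ppm, while secondary sinks
# (PCB Remediation Waste) are regulated at >= 1 ppm. Illustrative only.
def pcb_status(material: str, ppm: float, primary_source: bool) -> str:
    if primary_source:
        threshold, label = 50.0, "PCB Bulk Product"
    else:
        threshold, label = 1.0, "PCB Remediation Waste"
    if ppm >= threshold:
        return f"{material}: regulated as {label} at {ppm:g} ppm"
    return f"{material}: below the {threshold:g} ppm {label} threshold"

# A caulk with 2.5% PCBs is 25,000 ppm (percent x 10,000 = ppm):
print(pcb_status("caulk", 2.5 * 10_000, primary_source=True))
print(pcb_status("concrete under the caulk", 12, primary_source=False))
```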

In some situations, the PCBs in secondary sinks can be remobilized and either migrate directly into other porous materials or they can volatilize into the air.  When this occurs, these secondary sinks may be referred to as secondary sources.  In practice one hears the terms secondary sinks and secondary sources being used interchangeably.

Tertiary Sinks and Sources

Tertiary sinks arise when PCBs from primary or secondary sources volatilize into the air and then condense onto other materials in a building.  The significance of volatilization as a PCB migration pathway was long underappreciated because the relatively low volatility of PCBs suggested that the volatilization rate was too low to be meaningful.  However, laboratory testing and numerous real-world examples have demonstrated that volatilization of PCBs from primary and secondary sources, with redeposition on other materials, can be significant in some settings.  Tertiary sinks often have PCB concentrations in the range of 1-100 ppm.

Some authors prefer to use the term secondary sinks to describe both secondary and tertiary sinks.  Personally, I prefer to use ‘tertiary sinks’ to identify materials affected by indirect contact (through the air) and ‘secondary sinks’ to identify materials affected by direct contact to primary sources.  However, I acknowledge that it is not always evident whether a material is a secondary or tertiary sink.

Why Understanding PCB Sources and Sinks Matters

Understanding the ways that PCBs move around in buildings is important if your goal is to reduce potential exposures inside those buildings.  It is a frequent occurrence in PCB building remediation for primary sources to be removed, only to find that indoor air concentrations have not been reduced to the extent expected, or for air concentrations to fall immediately after remediation, only to return to previous levels with the passage of time.  This is often due to an insufficient appreciation of the influence and action of secondary and tertiary sinks.

If you have a particular PCB-in-building condition that needs a fresh set of eyes, consider reaching out to us for another opinion.


[Photo: a West Virginia coal mine, from a 1946 publication]

This picture is of a coal mine in West Virginia; the publication it appeared in was dated 1946, and presented it as an example of ‘the bad old days.’ I found it in an old copy of the quarterly employee magazine of Eastern Gas and Fuel Associates, a holding company which used to have a very large, vertically-integrated slice of the American coal industry: they owned coal mines, railroads, a fleet of colliers (coal transport ships), coking plants, blast furnace plants for making pig iron, and even a chain of general stores in mining towns.  They even owned Boston Consolidated Gas Company—this was back when gas was still mostly made out of coal, so for Eastern to own a major gas company made a lot of sense. When natural gas came along in the ’50s, Eastern Gas and Fuel promptly bought ownership stakes in the gas pipeline companies.

This was decades before the phrase “Safety First” was coined, but even so, the mining and transportation industries carry a lot of known hazards with them, and Eastern Gas and Fuel evidently made a point of contrasting the ‘bad old days’ above with the image of a modern industrial company; see, for example, the following page on drum handling, from another issue:

[Image: a magazine page on safe drum handling]

That’s still pretty good advice, even sixty years later.  So is “Don’t get hurt”, for that matter, but we’re a lot more sophisticated about it these days.


Another year is drawing to a close and everyone’s thinking about the future a little more.  At OTO we spend a lot of time thinking about the future because so much of what we do boils down to risk management and contingency planning.  Whether it’s human health risk assessment for a Brownfield site, evaluating potential seismic hazards for a hospital building, or preparing a spill response plan for an oil terminal, the focus of our work is planning for a safer future.

People, particularly engineers, like to think that what we create will last and be sustainable. How long should something be expected to last, though? This is an interesting question in the United States, where there are very few structures more than a century and a half old, and almost none more than two centuries old—my own house was built in 1900 and is considered “old,” but in many parts of the United Kingdom it’s possible to attend church on Sunday in a chapel built eight centuries ago.

Most consumer electronic products made these days can generally be expected to last a few years at most (although our cars definitely last longer than they used to). For example, Apple, despite all the attention given to its trendsetting designs like the smartphone, has been buffeted by a long series of class-action lawsuits over such problems as the batteries in third-generation iPods failing en masse after less than two years, raising questions of sustainability, planned obsolescence, and even unfair trade practices such as ‘designed to fail.’ Although the thought of replacing your cell phone every two to three years used to rankle, by now pretty much everyone seems to be used to the idea. I’d be happy to get four years’ use out of a cell phone, but I’m pretty sure that in ten years’ time it won’t be able to connect to the networks and software in use in 2026 any more than it could connect to one of Alan Turing’s vacuum-tube-powered Bletchley Park computer prototypes from the Second World War codebreaking project. This isn’t necessarily progress, mind you—just a recognition that making things less backwards-compatible can be part of making them profitable… but yet, vinyl records are enjoying a surge in popularity.

The design life for most civil engineering projects, such as roads, buildings, and water supply systems, is in the range of 25-50 years, based on judgments about the expected durability of the materials used in construction and the capacity of the design versus demand.  Take for example a town’s water and sewer system designed in 1950 based on assumptions about projected population growth: if a major new employer relocates to town and the local population spikes, some of the assumptions may no longer hold, and the mains will have to be enlarged and another water source found. Where a lot of infrastructure is created at once, this can create major problems further down the road; most of the United States’ modern highway and major bridge infrastructure was built within a roughly 20-year period after the Second World War and is now at or well past the end of its original design life, and badly in need of repair or replacement, largely because reinforced concrete is not nearly as inert and eternal as was once thought.

With environmental issues such as contaminated sites and solid waste landfills, we generally consider a timescale of about a century, which makes sense because most of the contaminants we worry about—gasoline, fuel oils, even many chlorinated compounds—will have geochemically weathered into nothing within that time… yes, someday we will be free of PCE and TCE, though lead and arsenic will always be with us, and PCBs with five or more chlorines seem to be built for the ages. Still, this timespan is reflected in some of the material choices we make. For example, a cap for a landfill or CERCLA site might be constructed of several layers of engineered but ultimately natural materials (a clay layer to prevent water infiltration, venting and drainage layers of sand and gravel, a barrier layer of cobbles to stop burrowing animals, and an outer layer of grassy turf, all graded and contoured to shed water without erosion into grassy swales) because these are durable, and even somewhat self-repairing. By contrast, a simple concrete or asphalt slab, however reinforced, will eventually crack, spall, and buckle, while its stormwater drains into pipes that will silt up, clog, and fail.

Some man-made structures have, of course, endured for much longer. Thomas Telford’s 1,368-foot wrought-iron Menai Bridge, completed in 1826, remains in daily use.

[Photo: the Menai Suspension Bridge]

The Pont du Gard aqueduct in southern France, built sometime between 40 and 60 AD (the reigns of the infamous Roman emperors Caligula and Nero), remains pretty much intact; it was maintained over the years, surviving the fall of Rome and the Middle Ages largely because local noblemen could rent it out as a toll bridge.

[Photo: the Pont du Gard]

The Great Pyramid of Giza is somewhere around 4,500 years old; when Julius Caesar met Cleopatra around 48 BC, the pyramid was as ancient to Rome’s most famous dictator as Caesar is to me.

[Photo: The Great Pyramid of Giza]

For some projects, however, the design period starts to sound like deep time, where the project needs to remain viable not for years or decades, but for centuries or even millennia.

One of the singular engineering projects of our day is the Onkalo (Finnish for “hiding place”) nuclear waste repository under construction in a sparsely-settled area on the western coast of Finland. Construction began in the 1990s; the facility is planned to be complete in 2020 and to eventually reach capacity in 2100. For a country with a small population and no conspicuous natural resource wealth like that enjoyed by oil countries, Finland is no stranger to major engineering projects, though these are generally of a decidedly pragmatic bent, in contrast to the half-mile-tall Burj Khalifa superskyscraper in Dubai. The country is, after all, proudly home to one of the world’s largest commercial shipbuilding industries, producing everything from warships to cruise liners (if you have ever sailed Royal Caribbean, the liner was probably built in Finland) to nuclear-powered icebreakers.  They’re also used to making things that last: consider the old Nokia 3310 cell phone, best remembered for being almost indestructible… in stark contrast to the third-generation iPod.

[Photo: The Burj Khalifa, 2,722 feet tall]

Finland gets a quarter of its electricity from nuclear power plants, and a national law requires Finland to take responsibility for the country’s own nuclear waste, rather than trying to fob it off on someone else.  Onkalo is accordingly Finland’s third such facility, and is intended to store a century’s worth of spent nuclear fuel from power plants in massive vaults carved into migmatitic gneiss bedrock nearly 1,400 feet underground, with the goal of isolating the material for as long as high-level radioactive waste remains dangerous… or “only” about a hundred thousand years.

[Image: a profile of the Onkalo facility]

 

[Image: a conceptual view of Onkalo at final build-out]

The US, by contrast, simply buried the reactors used in the initial Manhattan Project research in a forty-foot-deep hole in the ground in rural Illinois, on land that is now a nature preserve, marked by little more than a stone tablet inscribed “Do Not Dig,” and has been dithering over a long-term storage facility at Yucca Mountain, Nevada since 1978.

[Photo: Site A/Plot M Disposal Site, Red Gate Woods, Illinois]

A hundred thousand years is about ten times as long as the period since H. sapiens shook off his Paleolithic frostbite at the end of the last Ice Age, got a dog and started planting wheat, and it’s more than twenty times as long as all of our species’ recorded history. Nothing built by man has lasted even a tenth as long (Stonehenge and the Watson Brake mound complex in Louisiana are each a comparatively trifling 5,000 years old), and probably very little that exists now will endure other than scars on the land created by mines, canals, and other geoengineering projects. If I can paraphrase the Scottish philosopher and mathematician John Playfair (who publicized the work of James Hutton, “discoverer” of geologic time), our minds grow giddy by looking so far into that abyss of time.

At that point, the matter of a design period is no longer just an engineering question, but a philosophical one too, as examined in the documentary Into Eternity, which profiled the Onkalo facility. It’s no longer enough to find a geologically stable location and pick materials that could be expected to last so long. A repository like this would have to survive not just earthquakes and groundwater leaching, but also a nuclear World War Three and another ice age. Can you wager on there even being a government to maintain such a facility, when most of the world’s countries are less than 100 years old, and even the oldest continuously operating human organizations, such as the Roman Catholic Church, are “only” about 1,500 years old? Or, since financial assurance mechanisms may not survive a war, a financial collapse, or a post-apocalyptic new dark age, should the repository be able to endure without any human intervention at all?

How do you keep someone ten thousand years from now from unwittingly opening it? No deed restriction (or any other document, for that matter) will outlast the paper or hard drive it’s recorded on unless it’s regularly recopied onto durable media, and who’s going to do that? How do you design a warning sign when the language you speak now may be as long lost as the Sumerian tongue is today, the radiation trefoil’s meaning could be as lost to posterity as the story behind Paleolithic cave paintings, and even stone-carved hieroglyphics weather into illegibility after five or six millennia?  Do you even put up warning signs at all, or just bury it as deeply as possible and hide it as well as you can, hoping the whole thing will never be rediscovered?

[Image: the radiation trefoil]

[Image: a portion of the Lascaux cave paintings]

 

The Long Now Foundation was founded in 01996 to explore these issues. The 0 isn’t a typo; it’s like the sixth digit on your car’s odometer, and the philosophical goal of the foundation is to explore methods by which mankind and its artifacts last long enough for that 0 to tick over into 1. Its signature project is the 10,000-year clock (which is pretty much what it sounds like), which started receiving more attention after some of the foundation’s ideas were included in Neal Stephenson’s 2008 science fiction novel Anathem. If that sounds too quixotic, a similar but more pessimistic-sounding project is the Svalbard Global Seed Vault, a repository of plant seeds built deep underground in an abandoned coal mine on the sub-Arctic island of Svalbard, where the seeds would hopefully survive for hundreds or thousands of years through natural or man-made disasters, giving mankind a shot at restarting global agriculture if need be.

How do you design something that may well need to outlast modern civilization (or put even less optimistically, to survive modern civilization, or at least its darker impulses)?  Now THAT is engineering for the long term!

 ….At least the chlorinated hydrocarbons will be gone by then…..

I met a traveler from an antique land,

Who said—“Two vast and trunkless legs of stone

Stand in the desert. . . . Near them, on the sand,

Half sunk, a shattered visage lies, whose frown,

And wrinkled lip, and sneer of cold command,

Tell that its sculptor well those passions read

Which yet survive, stamped on these lifeless things,

The hand that mocked them, and the heart that fed;

And on the pedestal, these words appear:

My name is Ozymandias, King of Kings;

Look on my Works, ye Mighty, and despair!

Nothing beside remains. Round the decay

Of that colossal Wreck, boundless and bare

The lone and level sands stretch far away.”

–Percy Bysshe Shelley, Ozymandias, 1818

Coal Tar – Yesterday’s Nuisance, Today’s Problem
OTO’s work includes a lot of remediation projects in Massachusetts and Connecticut. One of the things we run into on occasion is coal tar, a viscous, black, smelly product of our industrial heritage. Coal tar is not one of the more common challenges encountered at MCP sites in Massachusetts, with gasoline, fuel oil, chlorinated VOCs and metals all coming up more often. It is, however, a complex and challenging mixture of contaminants. Thanks in part to the unfortunate experience with the Worcester Regional Transit Authority’s redevelopment project on Quinsigamond Avenue in Worcester and the recent proposal to cap tar-contaminated sediments in the Connecticut River in Springfield, coal tar is back in the news after a period of relative obscurity.

 

[Photo: a tar well. Don’t worry, this isn’t in Massachusetts.]

Coal tar is a challenge for three main reasons.

First, it can be a widespread problem. Most coal tar encountered in the environment is a legacy of the coal gasification plants that supplied Massachusetts’s cities and towns with heating and lighting gas before natural gas became available via pipeline in the 1950s. Virtually any city or substantial town had a gasworks before 1900, and some towns had several. Gasworks were often built on the ‘wrong side of the tracks’ due to their noisome character—their smell and constant racket beggared description. Where such “city gas” plants were not available, mill or factory complexes like the Otis Mills Company in Ware often had their own private gas plants, some of which also sold surplus gas to local residents for gas lights and stoves, and in some cases essentially became the town’s gas company.

As byproducts of making gas out of coal, these plants produced coal tar, cyanide-laden spent purification media, and much else of a dangerous nature. Some coal tar could be refined into waterproofing pitch, paving materials, industrial solvents, and even the red, foul-tasting carbolic soap that nobody who has ever seen “A Christmas Story” will ever forget. Massachusetts was also home to several plants that reprocessed tar into these products, some of which later became all too well known, like the Baird & McGuire site in Holbrook, MA or the Barrett plant in Everett and Chelsea.

In addition to releasing wastes to the environment at the gasworks, gas companies also historically created off-plant dumps for their wastes, creating hazardous waste sites that might be located miles from the gasworks, or even in a different town.

EPA historically kept a sharp eye out for coal gasification plants, and during the 1980s listed over a dozen coal tar sites in Massachusetts on CERCLIS. Many of the sites that most alarmed MassDEP in the ‘80s were also related to manufactured gas plants (MGPs)—for example, Costa’s Dump in Lowell or the former Commercial Point gasworks in Boston. In recent years, however, regulatory attention has taken on an increasingly narrow focus towards other concerns, most notably vapor intrusion from chlorinated VOCs.

The second important consideration about coal tar is that it is pretty dangerous stuff, posing both cancer and non-cancer risks. Coal tar is typically a heterogeneous mixture of something like 10,000 distinct identifiable compounds, ranging from low-molecular-weight, highly volatile compounds like benzene and styrene to massive “2-methyl chickenwire” asphaltene compounds. From an environmental and toxicological perspective, coal tar is most conspicuous for its high concentrations of polycyclic aromatic hydrocarbons (PAHs), as much as 10% PAHs by weight, which make it significantly more toxic than petroleum. Two of coal tar’s signature PAHs are benzo[a]pyrene and naphthalene; some coal tar can be up to 3% naphthalene alone, which accounts for the distinct, penetrating ‘mothball’ odor at MGP remediation sites.

Coal tar was associated with occupational diseases ranging from skin lesions to scrotal cancer as early as the mid-19th century, and was the first substance to be conclusively shown to be a carcinogen (by the Japanese scientist Katsusaburo Yamagiwa in 1915). The British scientist E.L. Kennaway subsequently proved in 1933 that benzo[a]pyrene was itself a carcinogen, the first individual compound to be so categorized. Coal tar also contains concentrations of lesser-known PAHs, some of which may have significantly greater carcinogenic potential than benzo[a]pyrene. Coal tar is also a powerful irritant; remediation workers and others exposed to it can expect painful skin irritation, along with respiratory and vapor intrusion hazards from high levels of benzene and coal tar pitch volatiles.

The third consideration is that coal tar is very persistent in the environment; tars and other gasworks wastes are highly resistant to geochemical weathering (and also to many remediation technologies, such as in-situ chemical oxidation), and do not break down in the environment like gasoline and most fuel oils do, so that tar contamination can still create problems over a century after the material was released.

So, coal tar is still with us, and will be for a long time. On the bright side though, with effort and careful planning, these challenges can be overcome. Many of the “wrong side of the tracks” locations of former gasworks are now prime downtown real estate, and a number of Massachusetts gasworks have been redeveloped as shopping plazas, transportation hubs, and biotech research facilities. As land prices, urban real estate availability, and government incentives continue to drive brownfield redevelopment, hopefully most of the Commonwealth’s former gasworks will see a new life.


[Image: Soxhlet extraction schematic]

Thanks to a friend’s sharp eye, I recently learned something new about the analysis of PCB caulk samples.  Because of its potential significance, I thought it deserved a special blog note.

First, a little background on how caulk samples get tested for PCBs.  It’s basically a three-step process:

  1. First, a carefully measured amount of the caulk sample is extracted with an organic solvent. As a chemist would say, PCBs would rather be dissolved in a non-polar organic solvent than remain in the caulk, so they move from the caulk to the solvent.  If you are in USEPA Region 1, this extraction must be conducted using “the Soxhlet method,” also known as EPA Method 3540C.  The Soxhlet method is the gold standard of extraction methods, but it uses a lot of energy, water, solvent and glassware, so ecologically it is not a very “green” method.  It also takes a long time: the method calls for the extraction to proceed for 16 to 24 hours. In other EPA Regions, other extraction methods (such as sonication) may still be acceptable.
  2. Once the PCBs have been extracted from the caulk into the solvent phase, the solvent needs to be cleared of the other potentially interfering chemical schmutz that got extracted out of the caulk along with the PCBs. These cleanup steps are fairly critical before you run any of the extract through the gas chromatograph (GC).  The GC is the instrument that will tell the analyst how much PCB is in the extract.
  3. Following the cleanup steps, you inject a very small portion of the solvent extract into the GC. At the end of the GC is a very, very sensitive detector that can measure the truly minuscule amounts of PCBs that may have been in the sample.  The detector generates a signal that allows the analyst to back-calculate the concentration of PCBs that were originally in the caulk, if any.

Well, this probably seems simple enough, but think for a minute about what might happen when your GC is set to measure PCB concentrations of between 1 and 10 ppm and all of a sudden a caulk sample comes through with 200,000 ppm of Aroclor 1260! Yikes!  This is the equivalent of trying to weigh a full-grown African elephant on an office postage scale: you are not going to get an accurate weight, and your postage scale will never be the same.

And of course it’s not a happy day for the analyst who will now need to spend many hours or days getting the residual PCBs out of that very sensitive GC detector, not to mention all the grossly contaminated glassware and other lab equipment.  Obviously labs need to take steps to protect themselves from this possibility or they would very quickly be out of business.

How Labs Try to Reduce This Risk

One thing labs can do to reduce the risk of blowing out their GCs is to ask the people submitting samples if they know the approximate concentration of PCBs in the caulk.  But usually they don’t know, and if it were your lab would you necessarily take the word of the person submitting the sample?  I’m not sure I would.

Another option is to pre-screen samples using a “quick and dirty” method to get a rough idea of the PCB concentration.  Such a method might involve a very simple extraction, followed by a big dilution of the extract to reduce the PCB concentration (if any are actually there) followed by injection into the GC.  Something very close to this procedure is known to EPA as Method 3580A, but is also known colloquially as the “swish and shoot” method.
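
To get a feel for why the screening step involves such a large dilution, here is a back-of-the-envelope sketch using the numbers from this post (a GC calibrated for roughly 1-10 ppm in the extract, and a hot sample at 200,000 ppm). The figures are purely illustrative and gloss over the actual extraction and injection volumes a lab would use.

```python
# Back-of-the-envelope dilution arithmetic using the numbers in this post:
# a GC calibrated for roughly 1-10 ppm in the extract, and a "hot" caulk
# sample at 200,000 ppm of Aroclor 1260. Illustrative only.
sample_ppm = 200_000          # PCB concentration in the caulk sample
cal_low, cal_high = 1, 10     # GC calibration range, ppm

# Dilution factor needed to land mid-calibration (say 5 ppm):
target_ppm = (cal_low + cal_high) / 2
dilution_factor = sample_ppm / target_ppm
print(f"Dilution needed: roughly {dilution_factor:,.0f}x")  # ~40,000x

# Inject the undiluted extract instead and you overshoot the top of the
# calibration range by a factor of 20,000 -- the postage scale meets
# the elephant.
print(f"Overshoot if undiluted: {sample_ppm / cal_high:,.0f}x")
```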

Now this method is completely fine for getting a quick read on the relative PCB concentration in a sample.  In fact, if the results from the swish and shoot screening show the analyst that the sample is hot (i.e., lots of PCBs, well in excess of the regulatory limits), then there really is no need to conduct any further analysis, because the person submitting the sample in most cases just wants to know whether the concentration is greater or less than the regulatory thresholds for PCBs.  So some labs stop the analysis at this point and report the results from the sample prepared with the Method 3580A extraction.

Situations Where Swish and Shoot Results Might Steer You Wrong

If a sample is analyzed following a relatively inefficient extraction and the resulting sample concentration still exceeds regulatory standards, then a more efficient extraction can only result in a concentration that exceeds standards by an even greater amount.  As long as the sole analytical objective is to identify whether or not samples exceed regulatory standards, then this objective can be satisfied by a less efficient extraction provided the result is greater than the regulatory standard.

However, if your analytical objective is also to map PCB concentrations over a site area to achieve a clearer picture of how concentrations change spatially in the field, then you need an extraction and analysis protocol that is consistent, efficient and reproducible.  Without these qualities you won’t be able to reliably tease out the forensic trends you want from the data.

The lesson to be learned from choosing the right extraction method for PCB analysis is the timeless quality assurance principle of identifying how you want to use data before you collect samples and analyze them.  Some of the biggest problems with scientific studies arise when data is collected for one purpose, but then used in a way that was not anticipated by the scientists who collected and analyzed the samples.  Data that satisfied the original study’s objectives may not be suitable for a subsequent study with different objectives.

So-called “meta studies” and a number of retrospective studies where batches of pre-existing data are aggregated to increase the statistical power of a study’s conclusions can be guilty of not thinking about whether the data quality objectives of the original studies meet the needs of the new study.  What motivates the meta study authors is creating as large a data set as possible to give their results statistical significance.  But this quest for large data sets can cause the consideration of data quality objectives to fall by the wayside.

These “big data” studies can sometimes make for splashy headlines because the large number of samples makes the results look statistically significant.  But too often these results need to be walked back because the authors did not adequately consider the data quality objectives of the original studies in assembling their meta-data sets.

Last Word

So, to reiterate: think about how your PCB data will be used before you submit the samples to a lab, then make sure the extraction and analysis methods to be used will give you the data you need.


Three years ago I wrote a draft post about the cost of PCB removal in schools, but never finished it.  What reminded me of it was a recent article[1] by Robert Herrick et al., in which he developed an estimate of the number of schools in the US that may contain PCBs in caulk.  His estimate is presented as a range: 12,960 to 25,920 schools.

Herrick speculates that this range is likely to be low, and I agree.  My own estimate from three years ago was closer to 43,000.  Given the statistical limitations of our methods (trying to extrapolate from small, possibly non-representative sample sets to the entire population of US schools), our numbers are actually pretty close.

However, what particularly interests me is the next step in the analysis, estimating the potential costs of remediating all those schools.  This is a step that Herrick, perhaps quite wisely, did not take.  With that introduction, what comes next is a lightly edited version of my 2013 unpublished post in which I do try to estimate possible costs of removing PCBs from schools nationwide.

Did EPA consider compliance costs for municipalities in its development of the PCB regulations?

When the USEPA proposes new regulations, two of the questions Congress and the public usually ask are: “How much will it cost to implement these new requirements? And is it worth it?”  To answer these questions, EPA will typically conduct a “cost-benefit analysis.”  This analysis is supposed to demonstrate the advantages of EPA’s proposed actions and explain how the benefits are worth the cost.

These analyses aren’t always 100% accurate, because it can be hard to know all the exact costs associated with changes, just as it can be difficult to anticipate all the benefits.  Nonetheless, the cost-benefit analysis is a good-faith effort to weigh the positives and negatives of a proposed regulation.  Developing these analyses is one reason EPA employs economists.

Surprisingly, though, EPA’s 1998 PCB Mega Rule contained no cost-benefit analysis; not a single sentence addressed the cost of these regulations, even though they have imposed huge financial burdens on the public and private sectors.  To give EPA some benefit of the doubt, much of that burden is only now becoming evident as the full extent of PCBs in schools and other buildings is being discovered.

Isn’t it worth any cost to protect schools and children from any risk?

There is no answer to this question that will satisfy everyone, but as a society we can take steps to limit the negative impacts of real, demonstrable risks in our schools.  By real risks I mean threats that have been shown to actually harm schools and children under real-world conditions.  Examples from the top of my list of demonstrated risks would include cars and guns, but PCBs in building materials wouldn’t be on my list at all.  Why aren’t PCBs on my list of threats?  The answer is simple: there are no credible scientific studies showing harm to the health of students or staff despite the presence of PCBs in buildings for over 60 years.

But in this post I want to focus on the financial burden the PCB regulations are putting on schools and public education.  As a former board of education member in a small New England town, I can tell you first-hand about the battles to secure funding for public school systems.  Every year costs go up, and every year vocal groups demand lower taxes and accuse administrators of mismanaging funds.

Anyone who thinks that a typical municipality can come up with extra millions of dollars to pay for PCB removal in a school ought to spend some time on their local board of education.  There isn’t extra money to do PCB remediation in a town’s budget; that money is going to come right out of the education budget.  The harm done to a typical school system by redirecting funds from educational programs to PCB removal is much greater than any harm done by the PCBs.

So, did anyone at EPA think about PCB remediation costs? No? Let me help.

So, back to the threshold question: did EPA think about the financial burden it was placing on municipalities when it retroactively banned PCBs in building materials, including those already in schools?  If it did, I can’t find any evidence of it.  To be helpful, I am providing below a very rough estimate of the possible national cost of removing PCBs from US K-12 schools.

The approach I use is one I picked up in college: the “back of the envelope” estimate.  I’ll leave a more rigorously researched analysis to the economists at EPA; it’s been my experience that the back-of-the-envelope approach often gets you remarkably close to the right answer.

The Back of the Envelope Accounting Office

From a quick Google search I discovered that there are approximately 132,000 private and public K-12 schools in the US.  As a somewhat educated guess, let’s assume that 33% (one in three) of these schools have PCBs in at least one building.  Further, let’s assume that the average cost of testing and removing PCBs from a school is $2 million.  Some schools will cost less to remediate, but many will cost much more.  Some quick multiplication, sketched below, takes us to a cost of about $87 billion to remove PCBs from all public and private K-12 schools.  I recently heard Speaker of the House Paul Ryan say that $80 billion is a lot of money even in Washington.
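
The arithmetic behind that figure fits in a few lines of Python (a minimal sketch; the school count, the one-in-three assumption, and the $2 million average are the rough inputs stated above, not measured values):

# Back-of-the-envelope estimate of nationwide PCB removal costs in K-12 schools
total_schools = 132_000        # approximate number of US private and public K-12 schools
fraction_with_pcbs = 0.33      # assumed: one in three schools affected
cost_per_school = 2_000_000    # assumed average testing and removal cost, in dollars

affected_schools = total_schools * fraction_with_pcbs    # about 43,560 schools
total_cost = affected_schools * cost_per_school          # about $87 billion

print(f"Affected schools: {affected_schools:,.0f}")
print(f"Estimated total cost: ${total_cost / 1e9:.1f} billion")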

There is obviously a lot of uncertainty in this estimate.  My guess is that I underestimated both the actual number of affected school buildings and the average cost per building to remove PCBs.  Nonetheless, it is a starting point, and I am going to use it below for a few simple comparisons.

What do we spend per year on Public K-12 Education?

How much is an $87 billion PCB removal cost in terms of the nation’s K-12 education budget?  According to the National Center for Education Statistics, public school districts had a total budget of $610 billion for the 2008-2009 school year.  That figure historically increases by only 1-2% per year, so I am just going to use the 2008-2009 numbers; the uncertainty in the other values in this analysis likely swamps any small adjustment I would make to the budget figure.  Of the total K-12 budget, $519 billion went to current education spending, $65.9 billion to capital construction projects, $16.7 billion to interest payments, and $8.5 billion to other costs.
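
As a quick sanity check, those budget components do sum to the reported total (a minimal sketch using the NCES figures quoted above, in billions of dollars):

# NCES 2008-2009 public K-12 budget components (billions USD)
operating = 519.0    # current education spending
capital = 65.9       # capital construction projects
interest = 16.7      # interest payments
other = 8.5          # other costs

print(f"Components sum to ${operating + capital + interest + other:.1f} billion vs. $610 billion reported")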

Cost of Getting PCBs out of Public Schools

The $87 billion for PCB removal covers both public and private schools, and about 75% of all K-12 schools are public.  So, assuming PCB removal costs are the same for public and private schools, the cost of removing PCBs from just the public schools would be about $65 billion.  That is roughly 11% of one year’s total national education budget, about 13% of the annual operating budget, and essentially 100% of a year’s capital construction budget.
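
Those percentages follow directly from the numbers above (a minimal sketch; the straight 75% multiplication lands near $65 billion, and the budget figures are the NCES numbers quoted earlier):

# Public-school share of the estimated PCB removal cost (billions USD)
total_removal_cost = 87.0    # back-of-envelope estimate for all K-12 schools
public_share = 0.75          # roughly 75% of K-12 schools are public

total_budget = 610.0         # total public K-12 budget, 2008-2009
operating_budget = 519.0     # current education spending
capital_budget = 65.9        # capital construction projects

public_cost = total_removal_cost * public_share    # about $65 billion

print(f"Public-school removal cost: ${public_cost:.0f} billion")
print(f"Share of total budget: {public_cost / total_budget:.0%}")
print(f"Share of operating budget: {public_cost / operating_budget:.0%}")
print(f"Share of capital budget: {public_cost / capital_budget:.0%}")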

This analysis is obviously too simplistic, because some school systems will not have any PCBs and some will likely have a lot.  There is no apparent way for school systems across the country to even out these costs among themselves nationally, although states may have some ability to even out the costs within their borders.

Still, it highlights what a large issue PCBs in schools can be for a municipality, and it explains why most school administrators want to stay as far away from testing their schools for PCBs as they possibly can.

Final Thoughts

Two final thoughts for this post:

First and foremost – If there were credible scientific evidence that PCBs in schools were actually causing harm to the health of students or staff, I would fully support decisions to get them out regardless of the cost.  But this evidence does not exist, and not for lack of trying on the part of research scientists.  The fact is that most students and staff receive significantly more PCBs daily from their diet than from being in school buildings.  More than 60 years of PCBs in building materials has simply not turned up evidence of harm to the health of building users.

Second – The estimated $2 million per school PCB removal cost may fall well short of the actual average cost per school, because in many cases schools simply cannot be made PCB-free.  Instead, school buildings have been closed, with the children and staff reassigned to other schools.

In affluent communities the solution to this problem might be demolishing the old building and constructing a new one, but in more typical American communities it means a long-term loss of educational resources and a significant lessening of educational capacity, as the old school building is shuttered and becomes a standing reminder of what has been lost.

 

[1] “Review of PCBs in US schools: a brief history, an estimate of the number of impacted schools, and an approach for evaluating indoor air samples”; Herrick, R.F., Stewart, J.H., and Allen, J.G.; Environ Sci Pollut Res (2016) 23: 1975-1985.