My name is Jhonatan Escobar, and I joined O’Reilly, Talbot & Okun Associates, Inc. (OTO) after obtaining my BS in Civil Engineering in 2017.  Working as a full-time field engineer represents a lifetime milestone for me.  This achievement was greatly facilitated by Western New England University (WNEU) and the extracurricular activities that were available to me while working toward my BS in Civil Engineering.  The most rewarding activity was the 2015 Solar Decathlon Latin America and Caribbean.

The Solar Decathlon (https://www.solardecathlon.gov/) is sponsored by the U.S. Department of Energy and has expanded to include worldwide competitions. The events involve college teams designing solar-powered houses. The goal of the competition is to explore sustainable engineering and new technologies while emphasizing the importance of a well-designed and attractive house.  Each house is judged on affordability, attractiveness, comfort, and functionality.

In November 2015, I traveled with a small group of WNEU students and faculty to Cali, Colombia, where we teamed with students from the Universidad Tecnológica de Panamá for the first Solar Decathlon Latin America and Caribbean.  The concept behind our design was to construct an energy-efficient house from four recycled cargo shipping containers. The house was equipped with solar thermal collectors, a water reuse system, and phytoremediation for control of humidity, temperature, and CO2.

Solar house

Construction of our solar-powered house was delayed by a week due to complications with the border patrol in Colombia. The Solar Decathlon committee would not extend the construction deadline, so we had to work very quickly as soon as the containers arrived on site.  The team worked 18- to 20-hour shifts for a week straight to meet the completion deadline.  The house was completed on the last available date and was opened for visitor and judge showings.  Our solar-powered house was awarded first place in energy efficiency and third place in electrical energy balance.

 


WNE team photo

This experience was very rewarding, and I suggest civil engineering students look for an opportunity to compete in a Solar Decathlon or another field-related competition.  Having to work long shifts due to a situation that was out of the team’s control taught me the importance of being able to adjust to situations quickly.  I also gained experience working as part of a team and learned a lot about sustainable design.  I look forward to applying these skills as I work with the geotechnical and environmental teams here at OTO.

Students from Western New England University are now competing in the Solar Decathlon China, and the next Solar Decathlon Latin America will be in 2019.


New England Trail: Hike 50 Challenge

As an avid hiker and lover of the outdoors, I often head to the breathtaking White Mountains of New Hampshire, Vermont’s Green Mountains, and the Adirondacks of New York for weekend trips.  My goal for 2018 was to find and explore local trails that I could visit on weeknights after work. I knew of popular hiking areas around the Holyoke Range, but those tend to be crowded, and I was craving something new. Then I learned about the New England Trail, or NET.

The NET is one of eleven National Scenic Trails in America.  It extends 215 miles from Long Island Sound in Connecticut north through Massachusetts to the New Hampshire border. Before the NET was granted federal designation as a National Scenic Trail in 2009, a 114-mile portion was known as the historic Metacomet-Monadnock Trail (M&M Trail), and another 50-mile section was known as the Mattabesett Trail.  At that time, these trails were over a half-century old and needed maintenance and care. With continued expansion of residential subdivisions and other development pressures, the trails were constantly being relocated, and options for these relocations were dwindling.

New England Trail
Map of the New England Trail from the NET website. https://newenglandtrail.org/get-on-the-trail/map/itineraries

The National Trails System Act was developed following a speech given by President Lyndon B. Johnson in 1965 on the “Conservation and Preservation of Natural Beauty.” This act allowed for the creation and protection of American trails that celebrate outdoor adventure. The federal establishment of the NET in 2009 accomplished the National Trails System Act’s primary goal of protection for long-term trail viability.

In the past few years at OTO, I’ve participated in multiple conservation land acquisitions in western Massachusetts. For these projects, I review natural resource and endangered species files, assess environmental contaminants along proposed hiking and biking trails, and engage in discussions with MassDEP about planned recreational and conservation land use. I love what I do, and these projects hold a special place in my heart because I always enjoy my time on trails whether it be skiing, snowshoeing, backpacking, or just walking with my dog.

This year is the 50th anniversary of the National Trails System Act. In celebration of this anniversary, I decided to participate in Appalachian Mountain Club’s NET Hike 50 Challenge, in which participants hike 50 miles of the NET throughout the next year. I’m already 24 miles into this challenge, and it has taken me to beautiful forests, riverside trails, waterfalls, caves, and quiet mountain tops. To my surprise, some of the prettiest trails I have discovered so far are located just out of earshot of main roads that I frequently travel. This motivates me to keep going.  I can’t help but wonder what other hidden gems I will find along my way.


If you are interested in the Hike 50 Challenge but you aren’t sure if hiking all 50 miles is for you, that’s okay. There are many options that count towards your 50.  Point-earning activities are listed at the NET website. These include joining guided hikes or scheduled events, volunteering, monetary donations, staying overnight in a shelter or cabin, bringing a friend to the trail, and so many more!   (Although I do plan to hike all 50, I am gaining extra credit by sharing this blog on social media).

Adventure awaits!


Well, it’s done.

 

I’m proud and distinctly relieved to announce the publication of my book, Manufactured Gas Plant Remediation: A Case Study (2018, CRC Press).  Like any proud parent, I can’t fight the urge to talk about it.

 

So, here’s a quick introduction to what it is about. The ‘case study’ in the title refers to the entire state of Massachusetts, since this is the first state-level overview of the gas industry.

Northampton gasworks
A gasholder in Northampton, Massachusetts, one of four surviving in the state out of what were once hundreds.

 

‘Manufactured gas’ refers to several types of gas made from coal or oil during the 19th and early 20th centuries and used much as we use natural gas today. The term ‘natural gas’ was actually coined to distinguish gas naturally present in coal beds or oil reservoirs from gas made out of coal. Manufactured gas lit the foggy streets of Victorian England (some parts of Boston and London still have gas street lights). It also lit houses, heated countless kitchen stoves, and fueled innumerable industries. By the early 1900s, most cities and large towns had at least one gasworks; Massachusetts alone had roughly 100 manufactured gas plants (“MGPs”) and the second-largest manufactured gas industry in the country, after New York.

 

On a larger scale, the gas industry also:

 

  • Played a crucial role in the development of urban areas and industries during the 19th and early 20th centuries, since many industries sought to locate in communities where gas service was available. Where this wasn’t possible, many industrial plants started their own private gas plants, some of which fell into disuse and were forgotten, while others expanded to serve neighboring mill towns and in the fullness of time grew into utility plants themselves.

 

  • Became the first major example of the modern concept of a public utility, together with all the government regulations that went with it.

 

  • Launched the modern organic chemical industry, with coal tar derivatives becoming feedstocks for manufacturing aniline dyes, ammonium sulfate fertilizers, creosote, laboratory reagents, explosives, plastics and disinfectants, most notably carbolic soap (familiar to anyone who’s seen A Christmas Story as the foul-tasting red soap). Modern organic chemistry exists largely because of the numerous byproducts the manufactured gas industry provided.

 

The first half of the book reconstructs the history of the gas industry from its origins in the early 19th century through the general changeover to natural gas in the middle of the 20th century, including discussions of gas-making processes, equipment, business practices, and important persons. Some of this information is specific to Massachusetts, but the discussion of gasmaking technology is universal to the gas industry.

A panoramic photograph of a gasworks in Fall River, Massachusetts, from 1915

 

Everett
A waterfront view of the New England Gas & Coke coking plant in Everett, Massachusetts, from 1899. At the time, this was the largest modern coking plant in the world, and it would supply much of metropolitan Boston’s gas until the 1950s.

The second half of the book deals with the ‘dark side’ of this industry, namely its troublesome environmental legacy. Due to the toxicity of many gasmaking byproducts such as coal tar, sites contaminated due to gasworks operations can pose a risk to public health. The assessment, remediation, and redevelopment of coal tar sites pose a significant technical and financial challenge. This part of the book includes information on the chemical composition, origins, and hazards posed by gasworks wastes including coal tar and cyanide wastes, as well as on regulatory issues, assessment and remediation strategies, and other useful topics.

An example of buried materials encountered at a former gasworks

My coauthor, Allen W. Hatheway (one of the preeminent experts on MGPs and coal tar sites, and author of several other publications), and I started the research and writing process in March 2012. At the beginning, our goal was simple: to compile an inventory of all of the former manufactured gas plants in Massachusetts. As we continued with our research, however, (to paraphrase J.R.R. Tolkien) “the tale grew in the telling,” and the project eventually grew into a rather large book. This was partly because there were so many former gasworks and partly because a discussion of these sites required a vast amount of historical, technical, and modern regulatory context.

 

I’ll be giving presentations on this topic at several conferences in 2018 and 2019, including the Society for Industrial Archaeology annual conference in Richmond, VA this June.

 

The book is available from Amazon or direct from the publisher.


PCB or Non-PCB

The other day I got an email asking a good, basic question about the federal PCB regulations: “Where did the 50 ppm regulatory cut-off for PCBs come from?”  Is it a science-based number? Or was the 50 ppm figure just pulled out of the air?

The more I thought about it, the more consequential the question seemed, so a new PCB blog post seemed to be in order.  As you’ll read, the 50 ppm level wasn’t exactly science-based, but then it wasn’t totally pulled out of the air either.

What does TSCA say?

Congress passed, and the president signed, the Toxic Substances Control Act (TSCA) in 1976.  This statute directed EPA to develop two sets of PCB regulations, which became known as (1) the 1978 PCB Disposal and Marking Rule and (2) the 1979 PCB Ban Rule.  TSCA does not specify a PCB concentration cut-off, so it gives EPA (and the rest of us) no indication of exactly how concentrated Congress thought PCBs had to be before they fell into the regulatory net.

Wise legislators must have agreed that setting a regulatory cut-off limit is a decision best left to the professional staff at the regulatory agency. (I will leave the question of whether the phrase “wise legislators” is an oxymoron for a more politically oriented blog).  So TSCA is silent on the issue of PCB cut-off concentrations.  This was something EPA needed to figure out.

What does EPA say about the 50 ppm cutoff?

In the draft public comment version of what became the Disposal and Marking Rule, EPA proposed a 500 ppm cut-off limit for the regulation of PCBs.  But by the time the final rule was published in the Federal Register (February 1978), EPA was already getting cold feet about such a high limit.  The agency warned in the preamble to the rule that it would likely soon reduce the cut-off level to something in the neighborhood of 50 ppm, but that it needed to go through a longer regulation development process before doing so.  Quoting from the preamble:

“The Agency is aware that adverse health and environmental effects can result from exposure to PCB’s (sic) at levels lower than 500 ppm; however, at this time the Agency is not establishing a level based on health effects or environmental contamination but rather a level at which regulated disposal of most PCB’s can be implemented as soon as possible”.

EPA goes on to explain that they had only recently acquired the additional scientific information needed to support a lower cut-off level, and that this information was not available in time to include in the administrative record or hearings for the Disposal and Marking Rule.  More from the Rule’s preamble:

“As a consequence, the 500 ppm definition for a PCB mixture, as proposed, is included in this final rule making.  However, the Agency plans to propose a lower concentration of PCB’s, possibly in the range of 50 ppm or below, to define PCB mixture in the forthcoming . . . regulations”.

In accordance with EPA’s warning, the preamble to the May 1979 PCB Ban Rule explains that EPA had in fact decided to adopt the 50 ppm cut-off level.  This was after the Agency considered cut-off levels of 1 ppm, 10 ppm, 50 ppm, and 500 ppm.  EPA concluded that reducing the cut-off level to 10 ppm was impractical because it would bring far too much physical material and too many unrelated chemical processes into the PCB regulatory net, and it pointed out that a 1 ppm cut-off would be even more impractical than 10 ppm.

So the 50 ppm level was chosen as the happy medium.  It was a concentration that could be “administered” by EPA (presumably unlike the lower 10 ppm and 1 ppm levels) and yet would capture hundreds of thousands of pounds of PCBs that would have gone unregulated with a 500 ppm cut-off level.

So that is the story of where the 50 ppm PCB cut-off concentration came from.  It wasn’t rocket science; one could argue it was barely science at all.  In retrospect, it was a compromise between those interested in controlling as much PCB as possible and those whose focus was on what could realistically be accomplished.

Now some might wonder why it is that under the 1998 PCB Mega Rule there is a 1 ppm cut-off concentration for PCB remediation waste, but that’s a question for another blog post.


 

The OTO geotechnical group will feature a series of blog posts discussing soil settlement concerns and mitigation.  Topics will include forensic studies and remediation alternatives for existing building settlement and damage, as well as geotechnical engineering solutions that mitigate settlement concerns for new construction.

 

Part I:  Soil Detectives! Assessment of Settlement of Existing Foundations – Ashley Sullivan, PE

The geotechnical engineers at OTO spend a good portion of their time providing geotechnical engineering solutions to mitigate potential settlement for new structures.  In addition, we are often called in to assess situations where structural damage has already occurred due to the settlement of an existing building.  Our role in these situations is to determine whether foundation settlement is a cause of the structural damage and, more importantly, what caused the foundation settlement.  We will then provide alternatives to mitigate ongoing settlement and allow the structures to be used productively.  We work closely with owners, structural engineers, architects, and sometimes real estate agents.  These are always interesting projects, since they allow us to put on our detective caps and practice forensic geotechnical engineering.

 

Oftentimes, the OTO geotechnical engineer is not the first phone call; the client has usually already reached out to a structural engineer or architect. The structural engineer will often assess the aboveground building components, such as columns and beams, to determine whether these load-bearing components are sized correctly and functioning properly.  If the structural components appear to be adequate, the team may start to look at the foundations and ground conditions. This is where OTO can be a valuable asset to the project.

 

Once our services are engaged, we first gather as much information as is readily available regarding the history of the building and the likely subsurface conditions. We look for information regarding construction (year built, materials), the type of damage observed (cracks, doors and windows that won’t close, leaning walls, etc.), and timelines (immediate or sudden settlement, ongoing settlement over a long time span, etc.).  We also discuss any changes in site conditions, such as increased building or fill loads, or recent nearby construction work.  Before we leave OTO’s office, our geotechnical engineer will put some thought into anticipated soil conditions.  We will access OTO’s database of soil boring and test pit information to review conditions at nearby sites, and we will review published soil and bedrock geology maps (both online and in OTO’s library) along with historical Sanborn Fire Insurance and USGS topographic maps.  With our experience and the help of published and public information, we can often make an educated guess as to what soil conditions to anticipate at a particular site.

 

Shortly after receiving the initial call, we normally perform a site visit to get a firsthand look at the problem area.  We typically review the topography and look for indications of fills, changes in drainage (sinkholes, soft ground), or slope instability and erosion (bent tree trunks, surficial slips).  At that time, we determine the best approach for investigations, such as the type and approximate locations of invasive testing and/or a settlement monitoring program.  Investigations may include test pits, soil borings, and a review of existing subsurface utilities and drainage.  A monitoring program may include the installation of survey points on the building and nearby ground surface, which are surveyed periodically over time to determine trends in the amount and rate of settlement.
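As a rough illustration of what that monitoring data yields, here is a minimal Python sketch that reduces periodic elevation readings on a single hypothetical monitoring point to total settlement and an average settlement rate. The dates and elevations are invented for illustration; real programs rely on surveyed benchmarks and project-specific accuracy requirements.

```python
from datetime import date

# Hypothetical elevation readings (in feet) for one settlement monitoring point;
# values and dates are illustrative only.
readings = [
    (date(2018, 1, 10), 100.000),   # baseline survey
    (date(2018, 2, 12), 99.982),
    (date(2018, 4, 16), 99.971),
    (date(2018, 7, 20), 99.968),
]

baseline_date, baseline_elev = readings[0]

for survey_date, elev in readings[1:]:
    settlement_in = (baseline_elev - elev) * 12.0                # feet to inches
    elapsed_days = (survey_date - baseline_date).days
    avg_rate_in_per_yr = settlement_in / elapsed_days * 365.25   # average rate since baseline
    print(f"{survey_date}: {settlement_in:.2f} in total settlement, "
          f"{avg_rate_in_per_yr:.2f} in/yr average rate")
```

A rate that tapers off over successive surveys suggests the movement is slowing, while a steady or increasing rate usually prompts further investigation; in practice, we interpret these trends together with the subsurface data.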

Door and slab example
An uneven door, or cracked or uneven concrete, can be a good field indicator of settlement.

Many times, the test pits or soil borings with accompanying laboratory tests quickly reveal the cause of the problem.  Some examples of potential causes include:

  • A pocket of peat, soft clay or loose, non-engineered fill that has compressed under the building load.
  • A buried layer of decomposed organics, trash, or other deleterious material that has compressed and will continue to degrade over time.
  • A soft compressible layer of fine grained soil that has consolidated under the weight of the new structure or fill loads.
  • Wet, loose, granular soils indicating a possible “wash-out” condition due to a drainage pipe break and the introduction of water into the soil matrix.

 

We then continue the investigations to determine the nature and extent of the unsuitable conditions.  After the assessment is complete, the geotechnical engineers can start the next phase of the evaluation, “The Fix”, which will be discussed in a future blog post.

 

Do you have a building that is settling?  Contact Ashley Sullivan at 413-276-4253 or sullivan@oto-env.com to see how OTO can help!

 


A lot of what OTO does involves helping clients manage risks. Sometimes we do this in a reactive mode: digging up leaking gasoline tanks, capping abandoned landfills, and otherwise resolving problems that already exist. The proactive side is less obvious and dramatic (OK, and maybe a little less fun), and consists mostly of identifying potential hazards, planning how to deal with them, and helping train staff in how to respond.

Multiple Planning Requirements

There is a surprisingly large amount of this work, because many federal environmental laws and regulations include emergency planning requirements.

For example:

  • RCRA contingency plans for hazardous waste generators and treatment, storage and disposal facilities (40 CFR 262.34, 264.52, 265.52, and 279.52);
  • Spill Prevention, Control, and Countermeasure (SPCC) plans under the Oil Pollution Act (40 CFR 112);
  • Facility Response Plans under the Oil Pollution Act (40 CFR 112.20 and 112.21), with review and approval from EPA, Coast Guard, DOT, and Department of the Interior regulators as appropriate;
  • Clean Air Act Risk Management Plans (40 CFR Part 68);
  • DOT Facility Response Plans (49 CFR Part 194);
  • OSHA Emergency Action Plans (29 CFR 1910.38(a));
  • OSHA Hazwoper plans (29 CFR 1910.120); and
  • OSHA Process Safety Management (29 CFR 1910.119).


Combining Plans

Wouldn’t it be nice if you could somehow combine all of these into one plan?

Well, as it happens, you can.  This isn’t really a new thing: EPA’s guidance for “integrated contingency plans,” sometimes referred to as the “One Plan” concept, was published in the Federal Register in June 1996 (61 FR 28641, June 5, 1996 and 40 CFR 265.54), and EPA Region 1 and the Massachusetts Office of Technical Assistance produced a demonstration model plan several years ago. The “one plan” option provides a means to combine numerous contingency plans into one living document that can address multiple overlapping (or we could say “redundant”) requirements. It can’t cover all of them, but it can usually cover the ones listed above.

When you consider how much of the required content of each kind of plan listed above overlaps, combining them makes a great deal of sense. At bottom, a good plan consists of four components:

  1. A description of the location/situation and risks, including information such as standard precautions, potential hazards, potential receptors, an analysis of what could go wrong, and what would happen as a result (e.g., an oil slick on a river upstream of a drinking water intake, or an anhydrous ammonia cloud over an urban area). The degree of analysis is the major variable among the various plan types; for example, an SPCC plan or FRP requires evaluation of potential releases to water bodies, whereas the RMP is concerned principally with releases of airborne vapors or gases.
  2. Emergency contact information for facility and corporate staff, emergency response personnel, regulators, and emergency management agencies such as the Coast Guard or the Local Emergency Planning Committee;
  3. Written procedures for what facility staff should do in the event of an emergency; and
  4. Documentation of relevant things such as changes to the facility, inventories of available equipment, updates to the plan, and staff training. After all, the best plan in the world doesn’t mean much if it’s not documented.

For example….

Consider, for example, a commercial dairy processing facility located along a river. OK, milk sounds innocuous, but it’s probably more complicated than that. Animal fats can be as destructive to aquatic life as heavy fuel oils: one of the major effects of any sort of oil or fat is a huge increase in chemical and biochemical oxygen demand (COD and BOD), which depletes the oxygen in the water to a level at which fish and other critters can’t survive. Animal fats are therefore covered under the Oil Pollution Act, so an SPCC plan is required. It could also be the case (as it often is) that the facility has a massive refrigeration plant using anhydrous ammonia, which triggers the Clean Air Act’s Risk Management Plan and OSHA Process Safety Management, on top of RCRA generator status for various hazardous wastes, and so on. Then let’s assume EPA thinks the facility could cause “substantial harm” to the river in the event something goes wrong, and requires a Facility Response Plan (an “FRP”) on top of the SPCC plan.

That’s five planning requirements right there, but at bottom most of them are going to deal with the same regulated materials, the same staff, and the same emergency response procedures, so having one well-maintained and well-drilled plan instead of five makes complete sense. It’s also far easier to keep one plan up to date than five, particularly when that means documenting inspections, staff training, and (for some plans, such as Facility Response Plans) actual drills.

Train Like It’s For Real


Of course, having a plan on paper is only the first part: even the best and most comprehensive plan won’t do any good without training and practice. At our recent in-house OSHA refresher training, OTO had a guest speaker who had been an OSHA inspector for 37 years.  He presented us with a number of case studies of industrial accidents, and one recurring theme was emergency action plans that essentially existed only on paper and provided no value at all when an actual emergency occurred, since staff couldn’t implement a plan they hadn’t been trained on.

Staff need to be trained, equipment bought and maintained, and procedures practiced both in the field and as tabletop exercises. Effective plans represent ongoing commitments and require inspections, training, and documentation. This of course means money, in terms of staff time, hiring an engineering consultant to assist in developing plans, and sometimes hard construction costs, such as upgrading secondary containment for tanks, modifying stormwater systems to reduce potential spill exposures, or modernizing HVAC systems to control vapors or fumes. Many plans, such as the SPCC, require the party responsible for the facility to certify that it is committing the necessary resources to make the plan workable. Good plans are also “living” documents: if you change your operation, say by adding another 20,000-gallon aboveground storage tank, you need to change your plan, and if one part of your plan turns out not to work, you update your plan.

While it’s always good to be the “man with a plan”, sometimes all you need is one plan.


The Underground Tank Problem

If you own an old underground storage tank (UST) in Massachusetts, particularly a single-walled steel tank, chances are you have heard about the push to remove these older tanks.  The problem with them is that over time they are prone to leaking, and when they leak, they contaminate the environment and can harm human health.  New USTs need to meet strict environmental safety requirements, including being “double-walled” (a tank within a tank) and having a leak detection system.  Older tanks usually do not have these features.

If you are the owner of an old UST, taking it out of service can be a scary prospect, since it can be expensive and, in some cases, can disrupt business on a property for anywhere from a few days to several weeks.

Old single-walled steel (SWS) tanks were in common use from the early 20th century through the late 1980s.  These tanks are more prone to leaking their contents because they lack a second “wall” in case the interior or exterior wall fails, and they can also lack other leak prevention equipment such as corrosion protection or upgraded product piping.  At OTO we’ve even come across a number of pre-1930 tanks that were riveted together rather than welded; these tanks did not even have tight seams.

By the late 1970s, there were hundreds of thousands of SWS USTs across the country.  Some are still in use, and many others were abandoned in place, often with no documentation that they had ever been installed. Removing a leaking UST is a potentially expensive clean-up project.  Leaking USTs have historically been the most widespread source of oil and gasoline contamination to groundwater and drinking water aquifers.  In addition, occupants of buildings near leaking USTs can be exposed to vapors that migrate underground and into buildings.

The Government Acts

In 1988, the USEPA set a deadline of 1998 for: 1) the removal of out-of-use USTs; 2) the incorporation of leak detection, corrosion protection, and spill and overfill containment equipment on most new or retrofitted USTs; and 3) the registration of certain in-use USTs (such as for retail gasoline or diesel) with state agencies. This requirement led to the removal and remediation of thousands of leaking USTs in the Commonwealth. In addition, the Massachusetts Department of Fire Services prohibited the installation of new SWS tanks after 1998.

SWS tanks installed prior to this date are now nearing or past their recommended service lives. The 2005 Energy Policy Act included the requirement that SWS tanks be removed by August 7, 2017. In 2009, government responsibility for the UST program in Massachusetts was transferred to MassDEP, which promulgated new UST regulations (310 CMR 80.00) in January 2015 and which maintains the SWS tank prohibition and removal requirement at 310 CMR 80.15.

These regulations are intended to protect public health, safety and the environment by removing these SWS USTs from service because they have a higher likelihood of leaking and releasing petroleum products into the environment.

 

The Current Status

MassDEP has established a number of regulatory deadlines for the assessment, repair, and/or removal of old UST systems.  In certain situations, MassDEP is exercising enforcement discretion and granting extensions of regulatory deadlines.

In addition, the regulations require the following:

  • All spill buckets tested and, if necessary, repaired or replaced in accordance with 310 CMR 80.21(1)(a) and 28(2)(g);
  • All turbine, intermediate and dispenser sumps tested and, if necessary, repaired in accordance with 310 CMR 80.27(7) and (8);
  • All Stage II vapor recovery systems decommissioned in accordance with 310 CMR 7.24(6)(l), if applicable; and
  • New Stage I vapor recovery requirements met in accordance with 310 CMR 7.26(3)(b), if applicable.

At the time of UST system removal, environmental conditions must be assessed per state and federal regulations. In Massachusetts, tank closures must meet DEP’s Tank Regulations, 310 CMR 80.00. These regulations allow tanks to be permanently closed-in-place only if they cannot be removed from the ground without removing a building, or the removal would endanger the structural integrity of another UST, structure, underground piping or underground utilities.

If you have questions or need assistance related to a UST system, please contact Sean Reilly with O’Reilly, Talbot & Okun at (413) 788-6222.


With all the hubbub in Washington DC lately, it’s been largely overlooked that some of the regulatory changes that started under the previous administration are only now coming to fruition.

Hazmat storage: a bad example and a best-practice example

For example, the Hazardous Waste Generator Improvements Rule went into effect at the federal level on May 30, 2017, amending parts of the regulations promulgated pursuant to the Resource Conservation and Recovery Act (RCRA).  RCRA was passed in 1976 and provides the national regulatory framework for solid and hazardous waste management.  These changes will become effective in Massachusetts, Connecticut, and other states with authorized hazardous waste programs as the states update their regulations.

RCRA’s generator requirements haven’t changed much in the last thirty years; the last major change happened in 1984. The new requirements address the process by which a person or company that generates a waste: 1) evaluates whether or not it is a hazardous waste or a solid waste; 2) stores the waste and prepares it for transport; and 3) maintains records of the waste’s generation and treatment, recycling, or other management.

Changes in Hazardous Waste Generation

Industry has changed a great deal since RCRA went into effect. Between 2001 and 2015, the amount of hazardous waste generated in Massachusetts dropped from 1,121,752 tons to 39,108 tons, even as the number of registered waste generators nearly doubled (EPA Biennial Hazardous Waste Report, 2001 and 2015). Interestingly, EPA national biennial reports indicate the quantity of RCRA waste generated in Massachusetts didn’t change very much between 1985 (114,381 tons) and 2001, although there was some fluctuation as EPA added new categories of generators and wastes to RCRA. The general trend over time has led to there being fewer Large Quantity Generators and many more Very Small Quantity Generators, so that a representative slice of the modern population of generators consists mostly of auto repair businesses, retail stores, pharmacies, and small manufacturing operations rather than the large factories and sprawling chemical plants of the 1970s and early 1980s.

This changing waste generation demographic (for lack of a better word) matters a lot, since compliance with these generator requirements generally happens at the ‘factory floor’ level, and while Kodak or Monsanto plants had chemical engineering departments to help with waste characterization and management, small shops generally don’t.

While the new rule makes over 60 changes to the RCRA regulations, its main goal is to clarify the ‘front end’ generator requirements. Some of these changes are major; others involve only routine regulatory housekeeping; and some are potential compliance pitfalls for generators.  Several of these changes dovetail with EPA’s 2015 changes to the Definition of Solid Waste, which opened up expanded opportunities for recycling certain materials rather than requiring that they be handled as solid or hazardous wastes.

Other changes in the new rule include:

  • Under some circumstances, Very Small Quantity Generators (VSQGs) will be allowed to send hazardous waste to a large quantity generator (LQG) that is under the control of the same “person” for consolidation before the waste is shipped to a RCRA-designated treatment, storage or disposal facility (TSDF). This is most likely to benefit large “chain” operations, such as retail stores, pharmacies, health care organizations with many affiliated medical practices, universities, and automotive service franchise operations.
  • One of the common problems for VSQGs or SQGs is that, since generator status is determined by the quantity of waste generated, exceptional events (such as a spill or a process line change) sometimes bump them up into the Large Quantity Generator category, triggering many other regulatory requirements even if the status change lasts only a single month. The Generator Improvements Rule allows a VSQG or SQG to maintain its existing generator category following such episodic events, as long as certain criteria are met.
  • The addition of an explanation of how to quantify wastes and thus determine generator status (a simplified illustration follows this list).
  • Changes to the requirements for Satellite Accumulation Areas, and for the first time, a formal definition of a Central Accumulation Area.
  • An expanded explanation of when, why, and how a hazardous waste determination should be made, and what records must be kept. The final rule does not include the proposed requirement that generators keep records of these determinations until a facility closes. The rule also recognizes that most generators base their waste determinations on knowledge of the ingredients and processes that produce a waste, rather than on laboratory testing.
  • Clearer requirements for facilities that recycle hazardous waste without storing it.
  • Small Quantity Generators will have to re-notify their generator status every four years.
  • A clarification of which generator category applies if a facility generates both acute and non-acute hazardous waste (for example, a pharmacy that generates waste pharmaceuticals that are P-listed acute hazardous wastes).
  • Revising the regulations for labeling and marking of containers and tanks.
  • “Conditionally Exempt Small Quantity Generators” will be renamed Very Small Quantity Generators, a term already used in many states, including Massachusetts.
  • Large and Small Quantity Generators will need to provide additional information to Local Emergency Planning Committees as part of their contingency plans.
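To give a rough sense of how waste quantification drives generator status, here is a minimal Python sketch using the commonly cited federal monthly-quantity thresholds (up to 100 kg of non-acute hazardous waste per month for a VSQG; more than 100 but less than 1,000 kg for an SQG; and 1,000 kg or more, or more than 1 kg of acute hazardous waste, for an LQG). This is an illustration only; the actual counting rules, and Massachusetts’ own thresholds, involve far more detail than a few lines of code can capture.

```python
def generator_category(nonacute_kg: float, acute_kg: float = 0.0) -> str:
    """Rough federal RCRA generator category from one month's waste quantities.

    Thresholds are the commonly cited federal ones; state programs (including
    Massachusetts') have their own counting rules, so this is illustrative
    only, not a compliance determination.
    """
    if acute_kg > 1.0 or nonacute_kg >= 1000.0:
        return "Large Quantity Generator (LQG)"
    if nonacute_kg > 100.0:
        return "Small Quantity Generator (SQG)"
    return "Very Small Quantity Generator (VSQG)"


# A shop that normally generates 80 kg/month is a VSQG, but a one-time
# cleanout producing 1,200 kg would bump it to LQG for that month -- the
# episodic-generation provision noted above addresses exactly this case.
print(generator_category(80))     # Very Small Quantity Generator (VSQG)
print(generator_category(1200))   # Large Quantity Generator (LQG)
```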

The new rule also contains several expanded sections on exemptions applicable to wastes, together with a distinction between “conditional requirements,” such as those that would qualify a waste for an exemption, and “independent requirements,” such as container labeling and spill prevention, which are mandatory across the board.

In addition, the rule makes many relatively minor changes, such as updated references to other regulations and rearranging portions of the Code of Federal Regulations text into a more intuitive order.

As with any new or revised regulation, we can expect a learning curve, particularly as implementation filters down to the state agencies. In the meantime, EPA has the final rule on its website, along with several fact sheets and FAQs:

https://www.federalregister.gov/documents/2016/11/28/2016-27429/hazardous-waste-generator-improvements-rule

https://www.epa.gov/hwgenerators/frequent-questions-about-hazardous-waste-generator-improvements-final-rule

https://www.epa.gov/hwgenerators/fact-sheet-about-hazardous-waste-generator-improvements-final-rule


Caulk and brick

PCBs, or polychlorinated biphenyls, are a group of related chemicals that were used for a variety of applications up until the 1970s.  In the 1960s, the development of improved gas chromatography methods made environmental scientists aware of the persistence and global distribution of PCBs in the environment.  Since that time, hundreds of studies have been conducted to better understand the environmental transport and fate of PCBs.

However, it has been only over the past 20 years or so that studies have focused on learning more about PCBs that were incorporated into building products and their fate in the indoor environment.  Much of what has been learned is surprising and counter-intuitive.

For example, while it is generally true that PCBs have low volatility and low water solubility, it turns out that even at room temperature they are volatile enough to migrate in and around buildings at concentrations high enough to have regulatory implications.  This migration may take place slowly, over the course of several decades, but in some instances it has happened in as little as a year.  With today’s sensitive instrumentation, chemists are able to track the movement of even tiny concentrations of PCBs as they migrate.

This post is a primer on the three main categories of PCB-containing building materials and on how their PCBs can move inside buildings.

Primary Sources

As the name suggests, primary sources are building materials that were either deliberately or accidentally manufactured with PCBs as an ingredient prior to their installation in a building.  The most common primary sources are:

  • Caulking;
  • Paint;
  • Mastics;
  • Various surface coatings; and
  • Fluorescent light ballasts (FLBs).

FLBs are different from the other materials on this list because they use PCBs in an “enclosed” manner, defined as use in a manner that ensures no exposure of human beings or the environment to PCBs.  However, with continuous use, FLBs are known to deteriorate, sometimes resulting in the release of PCBs.  Only FLBs manufactured before the PCB ban (1979) should contain PCBs, and by now (2017) any of these older PCB-containing FLBs should have been replaced with non-PCB ballasts, since even the youngest PCB FLBs are almost 40 years old and FLBs are considered to have a functional life span of only 10 to 15 years.  The PCBs used in US-made FLBs were almost exclusively Aroclors 1242 and 1016.

The other primary PCB sources on the above list are considered “open” PCB uses because, unlike FLBs, the PCBs were not contained in an enclosure.  In most of these cases, PCBs were added to the materials to improve the performance of the products by contributing fire resistance, plasticity, adhesiveness, extended useful life, and other desirable properties.  For PCBs to impart these properties, they were generally included at concentrations ranging from 2% to about 25%, which is equivalent to 20,000 parts per million (ppm) to 250,000 ppm.  The most common PCB found in US-made building materials is Aroclor 1254, followed by Aroclors 1248, 1260, and 1262.

PCBs can sometimes be present in primary sources by accident rather than by design.  The presence of Aroclor PCBs in primary sources at concentrations less than 1,000 ppm (equal to 0.1%), or non-Aroclor PCBs at any concentration, may indicate an accidental PCB use.

Under the federal PCB regulations, primary sources are referred to as PCB Bulk Products, and they are regulated when their PCB concentration is 50 ppm or greater.

Secondary Sinks and Secondary Sources

When a PCB primary source is in direct contact with a porous building material, the PCBs can often migrate from the primary source into the porous material.  Porous building materials known to adsorb PCBs in this way include concrete, brick, and wood.  When this migration occurs, the now PCB-containing porous materials are referred to as secondary PCB sinks.  Secondary sinks often have PCB concentrations in the range of 10 to 1,000 ppm.

While the federal regulations apply to primary sources when their concentration is 50 ppm or greater, requirements for secondary sinks are stricter.  They are categorized as PCB Remediation Wastes and are regulated when their PCB concentration is 1 ppm or greater.
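To put the numbers in this post side by side, here is a minimal Python sketch that converts the weight-percent formulations mentioned earlier into ppm and compares a measured concentration against the thresholds described above (50 ppm for primary sources/PCB Bulk Products, 1 ppm for secondary and tertiary sinks/PCB Remediation Waste). The function names and example values are my own shorthand, and actual regulatory applicability under the federal PCB rules depends on much more than a single concentration.

```python
def percent_to_ppm(weight_percent: float) -> float:
    """Convert a weight-percent concentration to parts per million (1% = 10,000 ppm)."""
    return weight_percent * 10_000


def regulatory_threshold_ppm(category: str) -> float:
    """Thresholds as described in this post: PCB Bulk Products (primary sources)
    are regulated at 50 ppm or greater; PCB Remediation Waste (secondary and
    tertiary sinks) at 1 ppm or greater."""
    return 50.0 if category == "bulk_product" else 1.0


def is_regulated(category: str, concentration_ppm: float) -> bool:
    return concentration_ppm >= regulatory_threshold_ppm(category)


# A caulk formulated with 5% Aroclor 1254 works out to 50,000 ppm:
print(percent_to_ppm(5))                        # 50000.0
print(is_regulated("bulk_product", 50_000))     # True

# Concrete in contact with that caulk measuring 12 ppm is a secondary sink:
print(is_regulated("remediation_waste", 12))    # True
```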

In some situations, the PCBs in secondary sinks can be remobilized and either migrate directly into other porous materials or they can volatilize into the air.  When this occurs, these secondary sinks may be referred to as secondary sources.  In practice one hears the terms secondary sinks and secondary sources being used interchangeably.

Tertiary Sinks and Sources

Tertiary sinks arise when PCBs from primary or secondary sources volatilize into the air and then condense onto other materials in a building.  The significance of volatilization as a PCB migration pathway was underappreciated until recently because the relatively low volatility of PCBs suggested that the volatilization rate was too low to be meaningful.  However, laboratory testing and numerous real-world examples have demonstrated that volatilization of PCBs from primary and secondary sources, with redeposition on other materials, can be significant in some settings.  Tertiary sinks often have PCB concentrations between 1 and 100 ppm.

Some authors prefer to use the term secondary sinks to describe both secondary and tertiary sinks.  Personally, I prefer to use ‘tertiary sinks’ to identify materials affected by indirect contact (through the air) and ‘secondary sinks’ to identify materials affected by direct contact to primary sources.  However, I acknowledge that it is not always evident whether a material is a secondary or tertiary sink.

Why Understanding PCB Sources and Sinks Matters

Understanding the ways that PCBs move around in buildings is important if your goal is to reduce potential exposures inside of buildings.  It is a frequent occurrence in PCB building remediation for primary sources to be removed only to find that indoor air concentrations have not been reduced to the extent expected.  Or for air concentrations to fall immediately after remediation, only to return to previous levels with the passage of time.  This is often due to an insufficient appreciation for the influence and action of secondary and tertiary sinks.

If you have a particular PCB-in-building condition that needs a fresh set of eyes, consider reaching out to us for another opinion.


Coal mine photo from a 1946 publication

This picture is of a coal mine in West Virginia; the publication it appeared in was dated 1946, and it was presented there as an example of ‘the bad old days.’ I found it in an old copy of the quarterly employee magazine of Eastern Gas and Fuel Associates, a holding company that used to own a very large, vertically integrated slice of the American coal industry: coal mines, railroads, a fleet of colliers (coal transport ships), coking plants, blast furnace plants for making pig iron, and even a chain of general stores in mining towns.  They even owned Boston Consolidated Gas Company; this was back when gas was still mostly made out of coal, so for Eastern to own a major gas company made a lot of sense. When natural gas came along in the ’50s, Eastern Gas and Fuel promptly bought ownership stakes in the gas pipeline companies.

This was decades before the phrase “Safety First” was coined, but even so, the mining and transportation industries carry a lot of known hazards, and Eastern Gas and Fuel evidently made a point of contrasting the ‘bad old days’ above with the image of a modern industrial company, as in the following page on drum handling from another issue:

Safety page on drum handling

That’s still pretty good advice, even sixty years later.  So is “Don’t get hurt,” for that matter, but we’re a lot more sophisticated about it these days.