O’Reilly Talbot & Okun Associates, Inc. participated as a sponsor in the Net Positive Symposium for Higher Education, held at one of our recently completed projects, the R.W. Kern Center on the beautiful Hampshire College campus in Amherst, Massachusetts.  The R.W. Kern Center is Living Certified by the International Living Future Institute, meaning that:

  • The building includes regenerative spaces that connect occupants to light, air, food, nature, and community;
  • The building is self-sufficient and remains within the resource limits of the site. A “Living Building” produces more energy than it uses, and collects and treats water on site; and
  • The building is healthy and beautiful.

You can read more about the R.W. Kern Center in the certified case study here.  The building contains a number of features to meet the “imperatives” of each of the performance areas.  The building includes composting toilets and treats all of its grey water on site, via filtering through indoor planters in the building’s common space and through an onsite wetland.  Thermal efficiency measures and a rooftop solar array combine to achieve net-zero energy demand for the building.  Biophilic design elements mimic the beauty of the college campus, and exposed structure and systems allow visitors to see components of the building typically hidden behind ceilings and walls (who knew piping systems could be so elegant?).  Materials used in the building are locally sourced, and materials that have adverse effects on human health and the environment are avoided.

The symposium was held over two days and included tours of the Kern Center and the Hitchcock Center (another Living Building in Amherst, Massachusetts).  The symposium highlighted projects at Hampshire College, Smith College, and Williams College, and their approach to sustainable, resilient, healthy, innovative, and equitable design.  On the second day, a variety of small-group lectures were held covering many aspects of sustainable design and education, as well as the design, development, implementation, and construction of the Kern Center and other Living Building Certified projects.  Attendees included sustainability directors, faculty and educators, students, operations staff, design and construction professionals, and many others.

OTO was fortunate to be a part of the design, construction, and commissioning teams for both the R.W. Kern Center and the Hitchcock Center.  OTO provided environmental and geotechnical engineering services, as well as indoor air quality testing services during commissioning and certification.  We would like to thank our clients (Hampshire College and the Hitchcock Center for the Environment) and the other members of the team, most notably Bruner/Cott & Associates, Inc. (architect for the Kern Center), designLAB architects, inc. (architect for the Hitchcock Center), and Wright Builders (general contractor).

OTO is a proud sponsor of the International Living Future Institute, and we look forward to more Living Building projects in the Northeast.

Felt at Kern
Photograph of artwork by artist Janice Arnold (JA Felt) which includes 100 feet of dyed felt cloth hung above the staircase at the R. W. Kern Center.

My name is Jhonatan Escobar and I joined O’Reilly, Talbot & Okun Associates, Inc. (OTO) after obtaining my BS in Civil Engineering in 2017.  Working as a full-time field engineer represents a lifetime milestone for me.  This achievement was greatly facilitated by Western New England University (WNEU) and the extracurricular activities that were available to me while working towards my BS in Civil Engineering.  The most rewarding activity was the 2015 Solar Decathlon Latin America and Caribbean.

The Solar Decathlon (https://www.solardecathlon.gov/) is sponsored by the U.S. Department of Energy and has expanded to include worldwide competitions. The events involve college teams designing solar powered houses. The goal of the competition is to explore sustainable engineering and new technologies while keeping the importance of a well-designed and attractive house.  Each house is judged based on affordability, attractiveness, comfortability, and functionality.

In November 2015, I traveled with a small group of WNEU students and faculty to Cali, Colombia, where we teamed with students from the Universidad Tecnológica de Panamá for the first Solar Decathlon Latin America and Caribbean.  The concept behind our solar decathlon design was constructing the energy-efficient house from four recycled cargo shipping containers. The house was equipped with solar thermal collectors, a water reuse system, and phytoremediation for control of humidity, temperature, and CO2.

Solar house

Construction of our solar-powered house was delayed by a week due to complications with the border patrol in Colombia. The Solar Decathlon committee would not extend the construction deadline, so we had to work very quickly as soon as the containers arrived on site.  The team worked 18- to 20-hour shifts for one week straight to meet the completion deadline.  The house was completed on the last available date and was opened for visitor and judge showings.  Our solar-powered house was awarded first place in energy efficiency and third place in electrical energy balance.

 

Jhon 1

WNE team photo

This experience was very rewarding, and I suggest civil engineering students look into finding an opportunity to compete in a Solar Decathlon or another field-related competition.  Having to work the long shifts due to a situation that was out of the team’s control taught me the importance of being able to adjust to situations quickly.  I also gained experience working as part of a team, and learned a lot about sustainable design.  I look forward to applying these skills as I work with the geotechnical and environmental teams here at OTO.

Students from Western New England University are now competing in the Solar Decathlon China, and the next Solar Decathlon Latin America will be in 2019.

New England Trail: Hike 50 Challenge

As an avid hiker and lover of the outdoors, I often head to the breathtaking White Mountains of New Hampshire, Vermont’s Green Mountains, and the Adirondacks of New York for weekend trips.  My goal for 2018 was to find and explore local trails that I could visit on weeknights after work. I knew of popular hiking areas around the Holyoke Range, but those tend to be crowded, and I was craving something new. Then I learned about the New England Trail, or NET.

The NET is one of eleven National Scenic Trails in America.  It extends 215 miles from Long Island Sound in Connecticut north through Massachusetts to the New Hampshire border. Prior to the NET being granted federal designation as a National Scenic Trail in 2009, a 114-mile portion was known as the historic Metacomet-Monadnock Trail (M&M Trail), and another 50-mile section was known as the Mattabesett Trail.  At that time, these trails were over a half-century old and needed maintenance and care. With continued expansion of residential subdivisions and other development pressures, the trails were constantly being relocated, and options for these relocations were decreasing.

New England Trail
Map of the New England Trail from the NET website. https://newenglandtrail.org/get-on-the-trail/map/itineraries

The National Trails System Act was developed following a speech given by President Lyndon B. Johnson in 1965 on the “Conservation and Preservation of Natural Beauty.” This act allowed for the creation and protection of American trails that celebrate outdoor adventure. The federal establishment of the NET in 2009 accomplished the National Trails System Act’s primary goal of protection for long-term trail viability.

In the past few years at OTO, I’ve participated in multiple conservation land acquisitions in western Massachusetts. For these projects, I review natural resource and endangered species files, assess environmental contaminants along proposed hiking and biking trails, and engage in discussions with MassDEP about planned recreational and conservation land use. I love what I do, and these projects hold a special place in my heart because I always enjoy my time on trails whether it be skiing, snowshoeing, backpacking, or just walking with my dog.

This year is the 50th anniversary of the National Trails System Act. In celebration of this anniversary, I decided to participate in Appalachian Mountain Club’s NET Hike 50 Challenge, in which participants hike 50 miles of the NET throughout the next year. I’m already 24 miles into this challenge, and it has taken me to beautiful forests, riverside trails, waterfalls, caves, and quiet mountain tops. To my surprise, some of the prettiest trails I have discovered so far are located just out of earshot of main roads that I frequently travel. This motivates me to keep going.  I can’t help but wonder what other hidden gems I will find along my way.

hiking

If you are interested in the Hike 50 Challenge but you aren’t sure if hiking all 50 miles is for you, that’s okay. There are many options that count towards your 50.  Point-earning activities are listed at the NET website. These include joining guided hikes or scheduled events, volunteering, monetary donations, staying overnight in a shelter or cabin, bringing a friend to the trail, and so many more!   (Although I do plan to hike all 50, I am gaining extra credit by sharing this blog on social media).

Adventure awaits!


Tom Speight, CHMM, and Paul Tanner, PG, LEP

Hazmat storage - BEST

For a one-page document, EPA’s humble Form 8700-22, commonly known as the Uniform Hazardous Waste Manifest, carries a lot of very important information, is used for a number of different purposes, and is generally one of the most important routine pieces of paper in the environmental industry. Launched in the grim days of Love Canal and the Valley of the Drums, in 2018 the manifest is going electronic in a big way.

EPA created the manifest program in 1980, as part of the modern Resource Conservation and Recovery Act (RCRA) system of registered hazardous generators, transporters, and treatment, storage and disposal facilities (TSDFs). The process has had several major upsides: it has improved the environment by improving the accountability for waste, cutting down on inappropriate disposal of waste, and has spurred development of waste minimization and “greener” manufacturing processes.

The intent of the manifest is to have a single document that provides a diary of what a waste material is, where it came from, who transported it, where it went, and what was done with it—RCRA’s proverbial “cradle to grave” tracking. Once the material has reached its ultimate end or has been processed so as to lose its identity (such as being mixed with other wastes and batched into hazardous waste fuel for use at permitted cement kilns), copies of the completed manifest are sent back to the generator and the generator’s state environmental regulators to close the loop. The manifest has gone through several versions and the current form, a six-part preprinted paper form, has been in use since 2005—here’s one example (click image for larger view).

manifest example

While the waste is in transit, the manifest also serves as shipping papers under Department of Transportation regulations. Because of the hazardous nature of the waste, the manifest also includes references to emergency procedures in US DOT’s Emergency Response Guide, so that first responders can easily know what hazards may exist, what precautions to take in the event of fire, explosion, or spill, and what first aid may be necessary for affected persons.


The manifest has additional uses once the waste has gone to its ‘grave,’ (or in the case of incinerators and cement kilns, a Viking funeral).

viking-funeral-pyre-boat

  • Generators keep archives of manifests as documentation not only of appropriate management of the waste (in the event of a regulatory or ISO audit), but also that the generator was acting within the limits of its generator category (large quantity, small quantity, or very small quantity).
  • Large quantity generators and TSDFs also rely on manifests for tracking their waste throughput for RCRA Biennial Hazardous Waste Reporting.
  • Companies that maintain ISO certifications use manifests to track waste minimization efforts, for example as part of the “Environmental Aspects” under ISO-14001:2015.
  • Facilities that have to report chemical usage under the federal Toxics Release Inventory program or the Massachusetts Toxics Use Reduction Act typically look to manifests to track how much of a chemical was managed as a hazardous waste (and what then happened to it), as opposed to being incorporated into a finished product, recovered or destroyed by an air or water pollution control system, etc.
  • In the least-optimal scenario, manifest records can be used to assess how much waste a generator shipped to a TSDF if the receiving facility falls into RCRA Corrective Action or Superfund status and generators start getting dunned for contributions to remediation costs.

Some states have also created separate regulatory programs that rely on manifests (such as the Connecticut Transfer Act), under which archived manifests are used as a primary means of evaluating whether a facility generated more than 100 kilograms of hazardous waste in a month. The “manifest trigger” can add significant cost and complexity to a real estate transaction; this is where the descriptions, waste codes, and management methods under Sections 9, 13, and 19 of the manifest can really become important in determining whether a waste was really hazardous (since it is not unusual to ship materials that aren’t, strictly speaking, “hazardous waste” on a manifest), and whether it was shipped for recycling or for disposal.
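The monthly bookkeeping behind a "manifest trigger" review is simple enough to automate. The sketch below tallies hypothetical manifest line items by month and applies the federal RCRA generator-category thresholds (up to 100 kg/month for very small quantity, between 100 and 1,000 kg/month for small quantity, and 1,000 kg/month or more for large quantity); the records and helper names are made up for illustration, and state-specific triggers may be evaluated differently.

```python
from collections import defaultdict

# Hypothetical manifest records: (ship date "YYYY-MM-DD", waste quantity in kg)
MANIFESTS = [
    ("2018-01-05", 40.0),
    ("2018-01-22", 75.0),    # January total: 115 kg
    ("2018-02-14", 30.0),    # February total: 30 kg
    ("2018-03-02", 1200.0),  # March total: 1,200 kg
]

def generator_category(kg_per_month):
    """Federal RCRA generator category for one calendar month's total."""
    if kg_per_month >= 1000:
        return "LQG"   # large quantity generator
    if kg_per_month > 100:
        return "SQG"   # small quantity generator
    return "VSQG"      # very small quantity generator

def monthly_categories(manifests):
    """Sum manifested quantities by month and classify each month."""
    totals = defaultdict(float)
    for ship_date, kg in manifests:
        totals[ship_date[:7]] += kg  # key by "YYYY-MM"
    return {month: (kg, generator_category(kg))
            for month, kg in sorted(totals.items())}

for month, (kg, cat) in monthly_categories(MANIFESTS).items():
    print(f"{month}: {kg:.0f} kg -> {cat}")
```

Note that January's 115 kg would exceed a 100 kg/month state trigger even though it is well short of the federal large-quantity threshold, which is exactly why the waste descriptions and codes on the archived manifests end up mattering so much.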

Unfortunately, manifests have also always meant paperwork, in some cases rooms full of boxes of archived manifests dating back to the early 1980s, and this has been a burden shared to some degree by industry and regulators alike.

In order to keep pace with technology and reduce the paperwork burden, and prompted by Obama-era legislation, EPA is rolling out a new eManifest system on June 30, 2018, which will convert most of the existing paper system into an electronic one.

The rule requires that eManifest be implemented on June 30, 2018. Some of the significant aspects of the roll-out include:

  • Everyone who will be signing or using manifests, including generator staff, truck drivers, transporter compliance managers, and TSDF staff, will need to create an individual user account.
  • Manifests will be prepared, signed, and transmitted digitally, although for the foreseeable future paper copies will be retained for use as shipping papers—the driver still needs a copy in his truck cab.
  • The RCRA Biennial Reporting process will be integrated with eManifest, although the logistics of this are still being worked out.
  • The system will be funded by fees charged on receiving facilities (mostly TSDFs), ranging from $4 for fully electronic documents to $20 for paper copies, with the ultimate goal of paper elimination in 5 years.
  • Manifests may become more accessible to enforcement personnel.

With June 30 fast approaching, EPA has been hitting the road, providing talks to state agencies and industry trade groups.  At one such meeting, hosted by the Connecticut Environmental Forum on April 4th, Beth Deabay and Lynn Hanifan of EPA provided a peek into the front-end of the system (generator and vendor registrations and protocols to start an eManifest) but admitted that the back end of the system (summary reports) is still under development in Washington.

As with pretty much any regulatory change or new digital technology, there will be a learning curve and some bumpy starts. Smaller waste vendors may be playing catch-up and could find the changeover difficult, but the larger national-level generators, transporters, and waste facilities are already using the system on a small scale and working out some of the kinks, so hopefully the transition to a digital eManifest will be fairly smooth.

Looking back on the transition from paper to digital here at OTO, the shift was awkward and took some time, but we can’t imagine bookshelves of reports anymore.  The high point of the process was recycling over two and a half tons of paper in one day alone, and turning our old document storage into part of a nice new conference room. In the coming years, we will look back on the rollout of digital manifests and will likely appreciate simpler data processing, saved shelf space, and saved trees!

This is part of our ongoing series about building settlement and geotechnical engineering strategies to mitigate settlement. Mike Talbot is OTO’s geotechnical engineering practice lead.

I recently had the pleasure of presenting a talk on this topic at the Connecticut Society of Civil Engineers Spring 2017 Geotechnical Conference. This post provides some highlights of my talk. The entire presentation is available on OTO’s web site.

Connecticut River Valley Varved Clay (or CVVC) is a type of fine-grained soil deposit that was laid down within ancestral Lake Hitchcock, which filled the Connecticut River Valley after the melting of the continental glaciers approximately 14,000 years ago.  CVVC is characterized by alternating layers of silt and clay, and is similar in composition to other varved deposits that formed in meltwater lakes across the northern part of North America following the retreat of the continental glaciers.

CVVC deposits are soft and compressible, and significant building or embankment settlement may result from building dead or live loads or from the placement of new fill loads. Having practiced extensively in the Connecticut River Valley for approximately 25 years, OTO has wide-ranging experience in investigating CVVC sites and providing geotechnical engineering solutions to address settlement concerns.

In my talk, I presented two case studies where a soil preload was constructed to mitigate post-construction settlement.  One case study involved the new Easthampton (Massachusetts) High School, and the second involved a new book depository in Hatfield, Massachusetts. Both project sites were similar in that they were underlain by greater than 50 feet of soft CVVC, and in that the new construction (a combination of fill placed to form the building pad, and building dead and live loads) resulted in an increase of greater than 1,000 psf (pounds per square foot) in the vertical effective stress within the soil profile (books can be heavy!). Without ground improvement, this load increase could have caused the maximum past pressure to be exceeded within portions of the soil profile, causing virgin consolidation to occur. Consolidation is the process by which the water content of the soil decreases, without being replaced by air, so that the overall volume of the soil layer decreases. During design, we estimated that up to 6 inches of total settlement and as much as 3 inches of differential settlement could occur at both sites, enough to cause damage and loss of functionality to the planned buildings.
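For readers who want to see how such settlement estimates are built up, the sketch below applies the standard one-dimensional consolidation equation, with recompression up to the maximum past pressure and virgin compression beyond it. The soil parameters used are assumed round numbers chosen for illustration, not values from either project.

```python
import math

def consolidation_settlement(H, e0, Cc, Cr, sigma0, sigma_p, delta_sigma):
    """One-dimensional primary consolidation settlement of a clay layer (ft).

    H            layer thickness (ft)
    e0           initial void ratio
    Cc, Cr       compression and recompression indices
    sigma0       initial vertical effective stress (psf)
    sigma_p      maximum past pressure / preconsolidation stress (psf)
    delta_sigma  stress increase from new fill and building loads (psf)
    """
    sigma_f = sigma0 + delta_sigma
    if sigma_f <= sigma_p:
        # Final stress stays below the maximum past pressure:
        # recompression only, relatively small settlement.
        return H / (1 + e0) * Cr * math.log10(sigma_f / sigma0)
    # Recompression up to sigma_p, then virgin consolidation beyond it.
    return H / (1 + e0) * (Cr * math.log10(sigma_p / sigma0)
                           + Cc * math.log10(sigma_f / sigma_p))

# Illustrative (assumed) values for one soft varved clay sublayer:
s = consolidation_settlement(H=20.0, e0=1.2, Cc=0.5, Cr=0.05,
                             sigma0=2000.0, sigma_p=2400.0, delta_sigma=1000.0)
print(f"Estimated settlement: {s * 12:.1f} inches")
```

In practice the profile would be divided into sublayers, each with its own stresses and indices, and the contributions summed; the point here is simply that a stress increase pushing past the maximum past pressure activates the much larger Cc term.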

From Langer, William H. Map Showing Distribution and Thickness of the Principal Fine-Grained Deposits, Connecticut Valley Urban Area, Central New England. Department of the Interior, United States Geological Survey, 1979
Easthampton Site

 

CVVC Deposits Map from Langer, William H. Map Showing Distribution and Thickness of the Principal Fine-Grained Deposits, Connecticut Valley Urban Area, Central New England. Department of the Interior, United States Geological Survey, 1979
Hatfield Site

Both structures were sensitive to post-construction settlement, so it was determined that soil improvement was required to mitigate the amount of settlement. The application of a soil preload was selected as the appropriate soil improvement technique. In essence, preloading is intended to simulate the design loads of a building, in this case by stockpiling large quantities of soil on the site, so that the consolidation occurs before the building is constructed. In addition, the construction schedule was tight for both projects, so the design solution included features (wick drains, which provided additional pathways for water to be eliminated) to expedite the consolidation process. Our design solution was similar for both projects and involved the following:

  • The installation of wick drains to help remove water from the soil matrix, and speed the rate of settlement (in this case, time was reduced to about three months),
  • The placement of a preload fill, which varied from two to 11 feet high, and
  • The monitoring of settlement during construction.
Easthampton Soil Profile
A typical soil profile at the Easthampton site.
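The reason wick drains shorten the schedule so dramatically is that consolidation time scales with the square of the drainage path length: cutting the path from tens of feet (vertical drainage to the top and bottom of the layer) to a few feet (horizontal drainage to the nearest wick) cuts the waiting time by roughly two orders of magnitude. The sketch below uses this simple scaling with assumed values; a full wick drain design would use a radial (Barron-type) consolidation analysis rather than this shortcut.

```python
# Terzaghi's one-dimensional consolidation: t = T * H_dr**2 / c_v,
# where H_dr is the drainage path length. All inputs below are assumed
# round numbers for illustration only.
T90 = 0.848   # dimensionless time factor for 90% consolidation
c_v = 0.05    # coefficient of consolidation (ft^2/day), assumed

def t90_days(drainage_path_ft):
    """Days to reach ~90% primary consolidation for a given drainage path."""
    return T90 * drainage_path_ft ** 2 / c_v

# Without wick drains: a 50-ft clay layer draining to sand above and
# below has a drainage path of half the layer thickness.
t_no_drains = t90_days(25.0)

# With wick drains at roughly 5-ft spacing, water only has to travel
# a couple of feet horizontally to reach a drain.
t_with_drains = t90_days(2.5)

print(f"Without drains: {t_no_drains / 365:.0f} years")
print(f"With drains:    {t_with_drains / 30:.1f} months")
```

With these assumed numbers the waiting time drops from decades to a few months, which is consistent with the roughly three-month preload period achieved on these projects.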

Preload settlement monitoring and the post-construction performance of both structures indicate that the preload application was successful in reducing the amount of post-construction settlement. The use of wick drains allowed the preload settlement to occur without significantly impacting the construction schedule. However, despite the similarities of the site geology and the selection of a similar design solution, the amount of preload settlement varied significantly between the two sites. At the Easthampton site, the preload settlement was approximately half of what occurred at the Hatfield site. The variation is likely attributable to a high silt and sand content in the CVVC at the Easthampton site.  We have found that simple moisture content data from bulk samples (a combination of both silt and clay varves) provide a good indication of this variation.

More detail is provided in the complete presentation, downloadable from the link above. If you have any questions please feel free to contact us.


Vapor Intrusion Emerges as a Major Cleanup Driver

When I started work as an environmental professional in 1986, I do not recall thinking about or even being aware of the risks posed by vapor intrusion. Vapor intrusion (sometimes referred to as “VI”) occurs when a volatile solvent or the volatile portion of a petroleum hydrocarbon from a release under or adjacent to a building migrates into the air inside a building.  Once in the indoor air, these volatile constituents can be readily inhaled by the building occupants, which is important because the lungs are the most efficient mechanisms by which chemicals enter the body.

How the VI Pathway Works

Volatile compounds are generally considered to include those that have boiling points less than that of water or vapor pressures greater than that of water, such as benzene from gasoline, industrial degreasers such as trichloroethylene (TCE), or perchloroethylene (PCE) dry-cleaning solvent.  These volatile compounds are the ones most likely to cause VI problems.   Following a release, these compounds are present in the ground partially in a liquid or solid phase, but they are also present partially in the vapor or gaseous phase.  As a gas or vapor, they can move readily through the soil to nearby buildings.

This mechanism is often intensified during winter heating conditions, when buildings are closed up, and heating systems vent to the outside, creating negative relative pressures inside buildings, sometimes called the ‘chimney effect.’  Even a relatively small relative negative pressure in a building significantly enhances the rate at which vapor intrusion occurs.

Comparing Exposures from VI to those from Soil and Groundwater

In the early days of waste site cleanup, we were focused on exposures and risks arising from soil and groundwater contamination.  While we are still concerned with the potential exposures to these media, limiting them is usually relatively simple (i.e. don’t eat or play in the soil and don’t drink the groundwater). In contrast, limiting exposures to indoor air containing volatile compounds is more problematic because everybody has to breathe continuously.  With experience has come the awareness that for volatile chemicals, VI is probably the most important exposure pathway to control for reducing risk when it is present.

Developing the Right Measurement Tools

While the Massachusetts Department of Environmental Protection became aware of VI issues in the late 1980s, it wasn’t until the agency issued a guidance document in 2002 that we had some state-level guidance on how to proceed.  The guidance called for an initial field screening to evaluate whether soil gas beneath a building might result in vapor intrusion. With hindsight has come the realization that the field screening methods of the day were not nearly sensitive enough to rule out vapor intrusion.

Vapor intrusion studies now usually rely on collection of soil gas samples in the field that are subsequently analyzed in a laboratory setting with highly sensitive instrumentation.  Alternately, portable gas chromatograph/mass spectrometers (GC/MS) can also be used.  In 2016, the Massachusetts Department of Environmental Protection issued final guidance, which describes the  state of the practice for “investigating, assessing, understanding, and mitigating vapor intrusion” in Massachusetts.
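Once laboratory soil gas results are in hand, a common first-pass screen is to multiply the sub-slab concentration by an empirical attenuation factor to estimate what might reach indoor air (EPA's generic screening value for sub-slab soil gas is 0.03, i.e., indoor air at roughly 3% of the sub-slab concentration). The sketch below applies that idea; the sub-slab result and the screening level shown are illustrative placeholders, not regulatory values for any particular compound.

```python
# EPA's generic sub-slab attenuation factor for screening: indoor air is
# assumed to be ~3% of the sub-slab soil gas concentration.
ATTENUATION_FACTOR = 0.03

def estimated_indoor_air(subslab_ug_m3, af=ATTENUATION_FACTOR):
    """Screening-level indoor air estimate (ug/m3) from sub-slab soil gas."""
    return subslab_ug_m3 * af

# Illustrative numbers only: a hypothetical sub-slab result and a
# placeholder indoor air screening level.
subslab = 200.0          # ug/m3, hypothetical sub-slab soil gas sample
screening_level = 2.0    # ug/m3, placeholder indoor air criterion

indoor = estimated_indoor_air(subslab)
print(f"Estimated indoor air: {indoor:.1f} ug/m3")
print("Further evaluation warranted" if indoor > screening_level
      else "Below screening level")
```

A screening exceedance like this one would not prove vapor intrusion is occurring; it would simply indicate that a site-specific evaluation (indoor air sampling, building pressure assessment, or mitigation) is the prudent next step.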

Remediating VI Conditions

The preferred remedial method for VI sites has become the installation of a sub-slab depressurization system, or SSDS.  Borrowing from the radon remediation industry, regulators and environmental consultants realized that the mechanism by which radon gas enters buildings is almost identical to VI, and thus has a nearly identical solution. By capturing the vapor within the soil beneath a building, and venting it to the outside air before it can migrate into the building, we can almost eliminate the inhalation exposure that would otherwise occur in the building. This diagram presents a generalized depiction of an SSDS.

 

BHN blog figure

In designing an SSDS, environmental practitioners need to consider a number of site-specific factors such as how large an area the SSDS needs to cover, the permeability of the soil below a building, and the characteristics of the vacuum fan required.  A building underlain by a coarse sand or gravel might need only one vapor extraction point to cover the entire footprint, while a building underlain by finer grained soils might need more extraction points or horizontally laid slotted pipe to get the desired coverage.

At OTO, we typically perform an initial evaluation to help design an SSDS with an appropriately sized fan and geometry (i.e., a single extraction point or horizontal pipe). A small system for a residential fuel oil release under a portion of the basement might need a single extraction point with a small fan.  A system like this uses about the same amount of electricity as a standard light bulb.  In contrast, a large industrial property may need a larger system with multiple zones and stronger fans, and will result in a significantly higher electrical bill.
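The "light bulb" comparison can be made concrete with a quick operating cost estimate. The fan wattages and electricity rate below are assumed illustrative values (a small radon-style fan typically draws a few tens of watts), not measurements from any particular system.

```python
def annual_fan_cost(watts, rate_per_kwh=0.20, hours_per_year=8760):
    """Annual electricity cost ($) of a fan running continuously."""
    return watts / 1000 * hours_per_year * rate_per_kwh

# Small residential SSDS: one ~60 W fan, about a light bulb's draw.
residential = annual_fan_cost(60)

# Larger industrial SSDS: assume, say, four zones with 300 W fans each.
industrial = 4 * annual_fan_cost(300)

print(f"Residential: ${residential:,.0f}/year")
print(f"Industrial:  ${industrial:,.0f}/year")
```

Under these assumptions the residential system costs on the order of a hundred dollars a year to run, while the multi-zone industrial system runs into the low thousands, consistent with the "significantly higher electrical bill" noted above.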

My first experience installing and operating an SSDS came in 2005, as a response to a relatively small residential fuel oil release. Since then, I have had a number of projects including SSDS’s ranging from relatively small and simple to quite complex.  To me, no other innovation in our industry has had a greater impact on our ability to reach acceptable endpoints for vapor intrusion than sub-slab depressurization systems.

As an environmental professional, many of the problems I have worked on have been difficult to solve. However, because of their relatively low cost, ease of installation and ability to improve conditions rapidly, solving vapor intrusion through installation of an SSDS has been a genuine bright spot in my career.


In Part I of this topic, we discussed the assessment and identification of the causes of settlement of existing structures.  Once the causes have been identified, we can then provide alternatives to prevent ongoing settlement, if needed.  This post will discuss a few of the engineering solutions that are available to mitigate a settlement problem at an existing structure.

It should be mentioned that we do not always propose mitigation or remediation.  For instance, if the settlement of the foundation appears to be due to the placement, during construction in the 1960s, of compacted engineered fill over soft compressible clay, OTO may recommend that the client delay large and expensive repairs and mitigation, and instead monitor the rate of settlement over the next couple of years. In these instances, the rate of settlement has often decreased to negligible amounts, and further significant settlement may be unlikely.  At that point, we often recommend that the owner proceed with larger structural and cosmetic repairs.

If settlement concerns appear to be due to improper drainage and the introduction of large amounts of water into the soil mass, OTO will provide recommendations for correcting the drainage problems.  We can often provide local contractor names, upon request, to help repair or install new drainage systems.  Oftentimes, these repair or maintenance tasks can be performed by the owner or facilities manager.

If the cause of the building settlement is the presence of an unsuitable bearing layer, such as loose, non-engineered fill that may continue to compress, or a thick organic peat layer that may continue to degrade, we will recommend a mitigation alternative such as a deep foundation or a soil improvement technique.

A deep foundation system transfers loads through the unsuitable layer to a firm bearing layer, such as driving pilings through a clay layer to bear on a layer of dense sand or bedrock.  Deep foundation alternatives to mitigate the settlement of existing buildings may include helical piles or mini piles.  Helical piles consist of a central steel shaft with horizontal bearing plates (8 to 14 inches in diameter) welded to the shaft at spacings on the order of 12 inches, which are augered into the soil. Mini piles are drilled, cast in place, cement grouted shafts. The piles are constructed by drilling and advancing casing (three to ten inches in diameter) to a selected depth or bearing stratum, installing a steel reinforcing bar down the center of the casing, and injecting cement grout into the casing.  The grout is pumped into the borehole at high pressure, starting at the bottom of the casing and moving upward in order to displace drilling mud or any remaining soil cuttings from the borehole. As the grout is pumped into the borehole, the casing is pulled up to a selected depth at the top of the “bond zone,” allowing contact between the grout and the surrounding soil. Helical or mini-piles are typically connected to the existing footings using an underpinning bracket.

Soil improvement techniques, which improve the existing loose soil so that it can function as a suitable bearing layer, may include pressure or compaction grouting.  In compaction grouting, the soils within the improvement zone are densified and strengthened by a systematic, pressurized injection of controlled low mobility cement grout. The goal of the process is to achieve increased strength of the soil mass.

Compaction Grouting Ashley Blog II
Compaction grouting in progress at an industrial facility

Many factors must be considered in order to recommend the most appropriate engineered solution for settlement issues.  OTO will often discuss existing building and soil conditions and proposed mitigation techniques with specialty geotechnical contractors to evaluate the costs of possible alternatives.  OTO maintains relationships with most of the foundation specialty contractors in New England and often can provide two or three independent contractor contacts to the client so that competitive cost information can be obtained.  Once the mitigation alternative and contractor are chosen, OTO can assist during construction by documenting the installation and addressing any concerns that arise.

If you have other questions about building settlement, contact Ashley Sullivan at 413-276-4253 or sullivan@oto-env.com to see how OTO can help!

 

 

 


Large areas of the Connecticut River Valley are underlain by a type of soft soil deposit known as Connecticut Valley Varved Clay (CVVC).  This deposit formed at the bottom of a large glacial lake, commonly known as Lake Hitchcock, which once filled almost the entire valley from Vermont to near Long Island Sound. The lake formed following the retreat of the last glacier and drained approximately 14,000 years ago.  During the warm “summer” runoff season, as the ice melted, soil particles were carried into the lake.  As meltwater flowed rapidly into the lake, the sand and silt, having larger grain sizes, settled out first.  Fine-grained clay particles tend to remain suspended much longer, particularly in turbulent water.  As the cold “winter” months moved in, the waters calmed and allowed the clay particles to settle.  This cycle of differential sedimentation created alternating layers of sand, silt, and clay, also known as varves.

The word varve comes from the Swedish word varv, which can be translated to revolution, round, or layer, and is used to describe a layer of semi-annual sediment deposited by a glacial lake.  In CVVC, these varves are typically composed of either dark gray or red clay, or light gray or red sand and silt.  The finer grained clay particles tend to hold moisture much longer resulting in a darker coloration, whereas the sand and silt particles tend to allow moisture to escape more easily, resulting in a lighter coloration.  The transition from one layer to the next is often abrupt, which is characteristic of the rapid deposition of sand and silt particles that occurred during the runoff season.  The individual varves are typically between 1/16 and 1/2 inches in thickness.

 

An example of varved silt and clay layers.

Lake Hitchcock existed for many thousands of years, so there can be several tens of thousands of individual varves within a soil profile.  The depth of the varved clay varies drastically throughout the valley, with the thickest sections exceeding 250 feet.  The distribution and thickness of the clay can be seen in more detail on the provided map.

Langer, William H. Map Showing Distribution and Thickness of the Principal Fine-Grained Deposits, Connecticut Valley Urban Area, Central New England. U.S. Department of the Interior, United States Geological Survey, 1979.

The biggest issue presented by varved clay is consolidation.  Fine-grained, cohesive clay restricts the movement of water and therefore takes a significant amount of time to expel water and consolidate under load.  The sand lenses give the water a path to escape and significantly reduce the initial consolidation time, but full consolidation can still take up to 10 years.  Depending on the size of the proposed structure and the clay conditions at the site, the amount of settlement can be significant and may lead to structural damage over time.
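The reason drainage paths matter so much is that, in classical one-dimensional consolidation theory, consolidation time scales with the square of the drainage path length (t = T_v·H²/c_v). A minimal sketch of that relationship, using an assumed, illustrative coefficient of consolidation rather than site-specific CVVC data:

```python
# Terzaghi 1-D consolidation time, t = T_v * H_dr^2 / c_v.
# The c_v value and drainage path lengths below are illustrative
# assumptions, not site-specific data for CVVC.

def consolidation_time_years(c_v_ft2_per_yr: float,
                             drainage_path_ft: float,
                             time_factor: float = 0.848) -> float:
    """Time to reach ~90% consolidation (T_v = 0.848 at U = 90%)."""
    return time_factor * drainage_path_ft ** 2 / c_v_ft2_per_yr

# A 20-ft clay layer drained at top and bottom has a 10-ft drainage path.
slow = consolidation_time_years(c_v_ft2_per_yr=10.0, drainage_path_ft=10.0)

# Sand lenses (or wick drains) shorten the drainage path; halving it
# cuts the consolidation time by a factor of four, since t scales with H^2.
fast = consolidation_time_years(c_v_ft2_per_yr=10.0, drainage_path_ft=5.0)

print(round(slow, 2), round(fast, 2))  # 8.48 2.12
```

This square-law behavior is why closely spaced sand lenses, or artificially installed drains, can turn a decades-long consolidation process into one measured in months or a few years.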

A typical characteristic of the varved silt and clay stratum is that the upper few feet tend to be desiccated, resulting in a medium to very stiff consistency.  This can be advantageous, as this stiffer layer is less prone to settlement, reducing the time to consolidate under the applied load.

Several solutions exist to reduce the effects of post-construction settlement, including wick drains, preloading, and post-construction settlement monitoring.  Preloading involves the temporary placement of fill to compress the varved clay prior to the construction of a building.  As discussed above, the sand lenses significantly reduce the time rate of consolidation within the varved silt and clay, since they allow the excess pore water pressures generated by applied loads to dissipate relatively quickly (allowing the soil to consolidate quickly under the applied load).  Wick drains may be installed to provide additional paths for the water to escape, further expediting the consolidation process.  Post-construction monitoring helps assess the effectiveness of the preload (if applied).  Additionally, if ongoing settlement occurs after construction, preventative measures may be taken before significant damage occurs.

For large structures such as tall buildings or bridges, it may not be possible to support the structure directly on the varved clay, and deep foundations may be required.  In addition, slope instability may be of concern where embankments are built over the clay (such as highway or railroad embankments) or if deep excavations are made into the clay.  For example, significant geotechnical studies and improvements were required for the construction of the I-91 roadway embankments.

Building on varved clay will always be a challenge.  However, with modern technology and practices it is often not a question of whether it can be done, but rather how much it will cost and how much time it will take.  We would love to help you evaluate projects involving CVVC soils, so feel free to contact us at 413-788-6222 or www.oto-env.com.

 


Well, it’s done.

 

I’m proud and distinctly relieved to announce the publication of my book, Manufactured Gas Plant Remediation: A Case Study (2018, CRC Press).  Like any proud parent, I can’t fight the urge to talk about it.

 

So, here’s a quick introduction to what it is about. The ‘case study’ in the title refers to the entire state of Massachusetts, since this is the first state-level overview of the gas industry.

A gasholder in Northampton, Massachusetts, one of four surviving in the state out of what were once hundreds.

 

‘Manufactured gas’ refers to several types of gas made from coal or oil during the 19th and early 20th centuries and used much as we use natural gas in the present day. The term ‘natural gas’ was actually coined to distinguish gas naturally present in coal beds or oil reservoirs from gas made out of coal. Manufactured gas lit the foggy streets of Victorian England (some parts of Boston and London still have gas street lights). It also lit houses, heated countless kitchen stoves, and fueled innumerable industries. By the early 1900s, most cities and large towns had at least one gasworks; Massachusetts alone had roughly 100 manufactured gas plants (“MGPs”) and the second largest manufactured gas industry in the country, after New York.

 

On a larger scale, the gas industry also:

 

  • Played a crucial role in the development of urban areas and industries during the 19th and early 20th centuries, since many industries sought to locate in communities where gas service was available. Where this wasn’t possible, many industrial plants started their own private gas plants, some of which fell into disuse and were forgotten, while others expanded to serve the neighboring mill towns and in the fullness of time grew into utility plants themselves.

 

  • Became the first major example of the modern concept of a public utility, together with all the government regulations that went with it.

 

  • Launched the modern organic chemical industry, with coal tar derivatives becoming feedstocks for manufacturing aniline dyes, ammonium sulfate fertilizers, creosote, laboratory reagents, explosives, plastics and disinfectants, most notably carbolic soap (familiar to anyone who’s seen A Christmas Story as the foul-tasting red soap). Modern organic chemistry exists largely because of the numerous byproducts the manufactured gas industry provided.

 

The first half of the book reconstructs the history of the gas industry from its origins in the early 19th century through the general changeover to natural gas in the middle of the 20th century, including discussions of gas-making processes, equipment, business practices, and important persons. Some of this information is specific to Massachusetts, but the discussion of gasmaking technology is universal to the gas industry.

 

 

A panoramic photograph of a gasworks in Fall River, Massachusetts, from 1915

 

A waterfront view of the New England Gas & Coke coking plant in Everett, Massachusetts, from 1899. At the time, this was the largest modern coking plant in the world, and it would supply much of metropolitan Boston’s gas until the 1950s.

The second half of the book deals with the ‘dark side’ of this industry, namely its troublesome environmental legacy. Due to the toxicity of many gasmaking byproducts such as coal tar, sites contaminated by gasworks operations can pose a risk to public health, and the assessment, remediation, and redevelopment of coal tar sites present significant technical and financial challenges. This part of the book includes information on the chemical composition, origins, and hazards of gasworks wastes, including coal tar and cyanide wastes, as well as on regulatory issues, assessment and remediation strategies, and other useful topics.

An example of buried materials encountered at a former gasworks

My coauthor, Allen W. Hatheway (one of the preeminent experts on MGPs and coal tar sites, and author of several other publications), and I started the research and writing process in March 2012. At the beginning our goal was simple—to compile an inventory of all of the former manufactured gas plants in Massachusetts. As we continued with our research, however, (to paraphrase J.R.R. Tolkien) “the tale grew in the telling,” and the project eventually grew into a rather large book. This was partly because there were so many former gasworks and partly because a discussion of these sites required a vast amount of historical, technical and modern regulatory context.

 

I’ll be giving presentations on this topic at several conferences in 2018 and 2019, including the Society for Industrial Archaeology annual conference in Richmond, VA this June.

 

The book is available from Amazon or direct from the publisher.


PCB or Non-PCB

The other day I got an email asking a good, basic question about the federal PCB regulations: “Where did the 50 ppm regulatory cut-off for PCBs come from?”  Is it a science-based number? Or did the 50 ppm number just get pulled out of the air?

The more I thought about it, the more consequential the question seemed.  Thus a new PCB blog post seemed to be in order.  As you’ll read, the 50 ppm level wasn’t exactly science-based, but then it wasn’t totally pulled out of the air either.

What does TSCA say?

Congress passed and the president signed the Toxic Substances Control Act (TSCA) in 1976.  This statute directed EPA to develop two sets of PCB regulations, which became known as (1) the 1978 PCB Disposal and Marking Rule and (2) the 1979 PCB Ban Rule.  TSCA does not specify a PCB concentration cut-off limit to let EPA (or the rest of us) know exactly what Congress had in mind as to how concentrated PCBs had to be for them to fall into the regulatory net.

Wise legislators must have agreed that setting a regulatory cut-off limit is a decision best left to the professional staff at the regulatory agency. (I will leave the question of whether the phrase “wise legislators” is an oxymoron for a more politically oriented blog).  So TSCA is silent on the issue of PCB cut-off concentrations.  This was something EPA needed to figure out.

What does EPA say about the 50 ppm cutoff?

In the draft public comment version of what became the Disposal and Marking Rule, EPA proposed a 500 ppm cut-off limit for the regulation of PCBs.  But by the time the final rule was published in the Federal Register (February 1978), EPA was already getting cold feet about this high a limit.  The agency warned in the preamble to the Rule that they would likely soon reduce the cut-off level to something in the neighborhood of 50 ppm, but needed to go through a more prolonged regulation development process before they did so.  Quoting from the preamble:

“The Agency is aware that adverse health and environmental effects can result from exposure to PCB’s (sic) at levels lower than 500 ppm; however, at this time the Agency is not establishing a level based on health effects or environmental contamination but rather a level at which regulated disposal of most PCB’s can be implemented as soon as possible”.

EPA goes on to explain that they had only recently acquired the additional scientific information needed to support a lower cut-off level, and that this information was not available in time to include in the administrative record or hearings for the Disposal and Marking Rule.  More from the Rule’s preamble:

“As a consequence, the 500 ppm definition for a PCB mixture, as proposed, is included in this final rule making.  However, the Agency plans to propose a lower concentration of PCB’s, possibly in the range of 50 ppm or below, to define PCB mixture in the forthcoming . . . regulations”.

In accordance with EPA’s warning, the preamble to the May 1979 PCB Ban Rule explains that EPA had in fact decided to adopt the 50 ppm cut-off level.  This was after the Agency considered cut-off levels of 1 ppm, 10 ppm, 50 ppm, and 500 ppm.  EPA concluded that reducing the cut-off level to 10 ppm was impractical because it would bring far too much physical material and too many unrelated chemical processes into the PCB regulatory net.  EPA pointed out that a 1 ppm cut-off level would obviously be even more impractical than the 10 ppm level.

So the 50 ppm level was chosen as the happy medium.  It was a concentration that could be “administered” by EPA (presumably unlike the lower 10 ppm and 1 ppm levels) and yet would capture hundreds of thousands of pounds of PCBs that would have gone unregulated with a 500 ppm cut-off level.

So, that is the story of where the 50 ppm PCB cut-off concentration came from.  It wasn’t rocket science; one could argue it was barely science at all.  In retrospect, it was a compromise between those interested in controlling as much PCB as possible and those whose focus was on what could realistically be accomplished.

Now some might wonder why it is that under the 1998 PCB Mega Rule there is a 1 ppm cut-off concentration for PCB remediation waste, but that’s a question for another blog post.