Most of our posts here on the OTO company blog discuss what we do at OTO and what we can do for our clients. Every once in a while, though, we like to share something a little more unusual.

As we come to the end of the year, many people find themselves reflecting on its major events, often including notable deaths. We lost one gray lady with a very mysterious past this year. Engineers, ship aficionados, and Cold War buffs may shed a tear: the former Glomar Explorer has been sold for scrap. This is a shame, not only because of the ship’s nearly unbelievable history, but also because in 2006 the American Society of Mechanical Engineers (ASME) designated this technologically remarkable ship a historic mechanical engineering landmark.


A product of the depths of the Cold War, the Glomar Explorer ranks alongside the Titanic, the battleship Bismarck, and the USS Nautilus (the first nuclear-powered submarine) as one of the most singular and storied ships of the 20th century. In a story that reminds us that truth is often stranger than fiction, the Glomar Explorer was designed and built at great expense for a single purpose: to recover a sunken Soviet Navy submarine, the K-129, from the bottom of the Pacific Ocean.

The gestation of the Glomar reads like something out of a spy novel, and indeed, she was ultimately the product of some wishful thinking by the CIA’s Special Activities Division. The K-129, one of the Soviet navy’s most modern ballistic missile submarines, had sunk in 1968 about 1,500 miles northwest of Oahu after an onboard explosion. Although she contained potentially valuable intelligence sources such as cryptographic equipment and nuclear technology, there was no way to recover this possible treasure from the seabed more than three miles below the surface; it was far too deep for divers, and adequate remotely operated vehicles simply didn’t exist yet.

An aerial starboard bow view of a Soviet Golf II class ballistic missile submarine similar to the K-129 (courtesy Wikipedia).

Then someone had the clever idea of simply hoisting the submarine to the surface. The idea was simple; as it eventually turned out, the execution was nearly as complicated as a manned space flight.

From there, as the saying goes, the weird went pro. The rest of the Explorer’s gestation involved a strange crew of intelligence analysts, eccentric industrialists, oilfield roughnecks, and a dream team of engineers from companies as diverse as Global Marine, then the undisputed leader in deep-ocean exploration, and Lockheed’s “Skunk Works” aerospace bureau. The whole undertaking was given the codename Azorian, possibly an intentionally misleading reference to the US Navy submarine USS Scorpion, which sank with all hands near the Azores in May 1968. Project Azorian ultimately cost over $800 million at the time ($3.8 billion in 2015 dollars).

Global Marine (now part of the offshore drilling corporation Transocean) had pioneered most of the methods and technologies used in the then-new field of deep-water oil drilling during the 1960s, including the drilling ship Glomar (from “Global Marine”) Challenger, used for the Deep Sea Drilling Project that provided evidence for continental drift. They were the logical choice to design and help operate the specially built salvage ship.

For a cover story, the CIA turned to the eccentric multimillionaire defense contractor (and real-life inspiration for Tony Stark) Howard Hughes. With Hughes’ cooperation, the whole endeavor would be passed off as a testbed for mining manganese nodules from the ocean floor. Since Hughes was known to be eccentric and obsessively secretive, and to embrace odd projects (indeed, by this point the aging Hughes was a paranoid recluse living in a palatial Las Vegas hotel suite), the CIA’s hope was that the American media would take it as just one more strange undertaking from Howard Hughes. The manganese nodule mining story ultimately proved so convincing that several companies took the idea seriously and invested in nodule mining.

With the design, cover story, and CIA funding in place, the Sun Shipbuilding and Dry Dock Company in Chester, Pennsylvania started construction. Completed on June 1, 1973, the Explorer cost more than $350 million (about $1.67 billion in 2015 dollars). At 619 feet long and with a displacement of over 60,000 tons, her massively reinforced hull was larger than most Second World War battleships and aircraft carriers. Although never formally commissioned into the US Navy (hence on paper she sailed as USNS Hughes Glomar Explorer rather than under the USS designation), she remained government property, the Hughes fiction notwithstanding.

Glomar starboard quarter

As originally built, the Explorer broadly resembled an oil-drilling ship, and indeed much of her technology came from the offshore oil industry: the K-129 would be ‘grabbed’ by equipment lowered on the end of what was essentially a three-mile-long string of 30-foot sections of pipe similar to that used in oil well drilling. On its own, this tool string massed 4,000 tons. The 2,000-ton forward half of the submarine, half buried in sediment at the ocean bottom, would then be dragged free of the muck and hauled up to the surface, one length of drill pipe at a time.

To enable this, much of the Explorer’s midsection was taken up with equipment custom-designed for the submarine recovery, including two towering gantries and a massive pyramidal derrick system that served both as “drill rig” and as a lifting apparatus with a capacity of 7,000 tons. All of this was stabilized in three dimensions on massive gimbals and a hydraulically operated heave compensator, designed to keep the rig vertical and at a constant level despite the motion of the sea. The real secret was a 200-foot-long “moon pool,” a dry-dock-like space where the ship’s bottom would retract, allowing the K-129 to be hoisted up into the Explorer’s hull for leisurely examination.

Portside aerial oblique view of the Explorer, showing her derrick, docking legs, helipad, and extremely crowded deck.

The Explorer’s starboard profile.

A plan view of the Explorer.

Interior of the ‘moon pool.’

Along with the Explorer herself came a massive (over 2,000-ton) hydraulically operated grapple, nicknamed Clementine, specially designed to grasp the K-129’s hull. To give some idea of the scale, each of Clementine’s eight claws was essentially an assembly of I-beams three feet deep and two feet wide, fabricated from two-inch maraging steel plate, and the whole device massed as much as a Second World War destroyer. There was even a special submersible barge, the HMB-1 (for “Hughes Mining Barge”), built just to make sure Clementine (which was built in California) could be brought aboard the Explorer without being seen, with the pickup happening in shallow water off Catalina Island.

The HMB-1 being towed into position

It is worth noting that by the early 1970s very little deep-water offshore drilling had yet been conducted; most offshore drilling was done in relatively shallow waters (less than 1,000 feet deep) such as the continental shelf off Santa Barbara, California. The only offshore project carried out under similar circumstances was the National Science Foundation’s abortive but still epic “Project Mohole” of 1961, which drilled into the seabed in about 12,000 feet of water in an attempt to reach the Mohorovičić discontinuity, the boundary between the Earth’s crust and mantle. The offshore drilling industry likely benefited from considerable technological spin-off from the project, since the Explorer included a number of cutting-edge technologies: a mechanical pipe-handling system, an automated stationkeeping system designed to keep the “drill rig” within a forty-foot radius despite the action of wind and sea on the ship’s hull, horizontal thrusters to hold her in position, and a long-baseline acoustic positioning system of a type that remained standard equipment for deep-sea operations until the advent of GPS decades later.

The actual recovery of the K-129 began on July 4, 1974, and right from the start the operation was as technically demanding and fraught with tension as an Apollo moon shot. Not only did the ship itself have to be kept within a 40-foot radius, but the final positioning of the Clementine grapple had to be exact to within two feet, which may sound like plenty of wiggle room until you remember that it is at the bottom of the ocean on the other end of three miles of wobbly drill pipe. Very little information had been available to the project engineers, for example about the sediments on the seabed, so there was considerable uncertainty as to whether the recovery would succeed. Several baking-hot days went by in the central Pacific as one 60-foot length of custom-made drill pipe after another was fed down into the ocean, all under the observation of Soviet spy ships. Tense days later, Clementine was brought into position...

Conceptual image of Clementine being lowered from the Explorer’s moon pool

Well, it would hardly be fair for me to just tell the whole story, would it? In any case, much of the documentation behind Project Azorian, including exactly what the CIA recovered, remains classified.

I highly recommend the captivating 2010 documentary Azorian: The Raising of the K-129, which includes extensive interviews with several of the engineers who designed, built, and manned the Explorer during her CIA career. The Historic Naval Ships Association also hosts an in-depth technical description of the ship (The Glomar Explorer, Deep Ocean Working Vessel, Technical Description and Specification) on their website, together with some amazing photographs (some of which were used with gratitude in this blog). That document was prepared in 1975, when Global Marine was trying to drum up other business for the Explorer.

The Azorian story was revealed to the public after a series of leaks to the New York Times in 1975, during a period of intense scrutiny of the CIA following revelations about other secret undertakings. The affair also produced the notorious “Glomar response”: the CIA answered Freedom of Information Act requests from the press with the statement that the government would “neither confirm nor deny” details of Project Azorian, citing potential harm to national security.

With her cover blown, and too specialized and expensive to repurpose, the Glomar Explorer herself spent the next twenty years in mothballs at the US Navy’s reserve storage facility at Suisun Bay in California, having sailed on exactly one operational voyage and completed exactly one mission. The HMB-1 was likewise kept around ‘just in case’ for a time before being used as an enclosed space for building prototype stealth ships such as the Sea Shadow. The HMB-1 was subsequently sold to a shipyard for use as a floating dry dock for ship repairs.

The Explorer during her “mothball” years at Suisun Bay.

After twenty years in mothballs, Global Marine Drilling (later part of Transocean) leased the Explorer and gave her a $180 million makeover to convert her into an oil-drilling ship, replacing her Project Azorian apparatus with conventional modern drilling equipment. From 1998 through about 2013 she enjoyed a second and much longer career as a deep-sea drilling ship before being taken out of service, a victim of declining petroleum prices and competition from onshore production. Sadly, Transocean announced in April 2015 that the old lady with the mysterious past would be scrapped.

The rebuilt Explorer underway to a drilling project.


There has been a push in my home state of Connecticut to test more schools for PCBs. While no doubt well intentioned, this initiative is driven by two misguided assumptions:

1. That the presence of PCBs in schools poses a hazard to the building’s users; and
2. That if PCBs are discovered in a school it is a relatively simple matter to remove them.

The good news is that the first assumption is wrong: there is in fact no scientific evidence that PCBs in schools pose a hazard to a building’s users. The bad news is that the second assumption is also wrong: removing PCBs from a building is anything but simple; it can be an extremely difficult and expensive endeavor. Let me get into a little more depth on each of these points.

Do PCBs in schools pose a health risk?
When you ask a toxicologist whether a chemical poses a health risk, the response you’re likely to hear is “that depends on how much of the chemical someone is actually exposed to,” or, more succinctly, “that depends on the dose.” You see, every chemical, even water, can be toxic (even deadly) under some conditions and in some doses. So answering the question of whether something is toxic requires an understanding of the exposure conditions and the dose. Let’s look at the conditions and PCB doses a child is exposed to at school.

Between 1950 and the early 1970s, PCBs were used in a variety of building materials that might be found in schools, including caulk, paint, floor wax, adhesives (mastic), and surface coatings. PCBs were used in these products because they added durability and extended the products’ useful life. The USEPA has determined that the most likely way for people to be exposed to PCBs in buildings is by inhalation. Although PCBs are not particularly volatile, small amounts do volatilize out of building materials over the course of years and decades.

These low concentrations of PCBs in air can be measured and the dose that a building occupant might be exposed to from inhaling them can be estimated. Based on this exposure mechanism the USEPA has developed PCB Public Health Levels that it considers to be safe for school children. The average daily PCB dose corresponding to EPA’s Public Health Level for an elementary school child is about 0.94 micrograms, a little less than one microgram per day.

To assess whether one microgram per day is a risky dose, it’s helpful to compare it to the amount of PCBs an elementary school child receives in their diet. You see, PCBs are widely distributed in the environment and are present in our food supply, particularly in animal products like meat and fish. Based on data developed as part of a recent market basket survey by the University of Texas, the average American elementary school child is exposed to about 4.6 micrograms of PCBs a day in their diet (much of it from hamburger and fish). That’s about five times more than they would receive by inhalation while attending a school with PCBs at USEPA’s Public Health Level.

So even if the concentration of PCBs in school air is four times USEPA’s public health level, it would still not equal the amount of PCBs an average elementary school child receives daily in their diet. For someone concerned about PCB exposures in schools the most effective place to make changes might be in the cafeteria menu and not in the building materials.
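The comparison is simple enough to check as a back-of-envelope calculation, using only the two figures cited above (EPA’s roughly 0.94 microgram/day Public Health Level dose and the 4.6 microgram/day dietary estimate):

```python
# Rough comparison of PCB doses for an elementary school child,
# using the figures cited in the text (all values in micrograms per day).
inhalation_dose = 0.94   # dose at USEPA's Public Health Level for school air
dietary_dose = 4.6       # market-basket survey estimate for diet

ratio = dietary_dose / inhalation_dose
print(f"Diet delivers about {ratio:.1f}x the inhaled dose")  # ~4.9x

# Even air at four times the Public Health Level delivers less than diet:
print(4 * inhalation_dose < dietary_dose)  # prints True (3.76 < 4.6)
```

Nothing fancy, but it makes the point concrete: 4 × 0.94 = 3.76 micrograms, still below the 4.6 micrograms from the average cafeteria tray.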

How hard and expensive can it be to remove PCBs from schools?
The simple answer is that removing PCBs from a building where they have been in place for over 30 years is really hard and very expensive. Here’s the problem: unless PCBs are in a sealed laboratory container, they tend not to stay where they were put; over the years and decades they keep moving around. So a PCB-containing caulk bead placed in the gap between a window frame and a brick or concrete wall will see its PCBs migrate out of the caulk and into the abutting brick or concrete, moving a little deeper with each passing year. But that’s not all. The PCBs from the caulk bead also volatilize slowly into the air and then condense onto other building surfaces remote from the caulk bead, where the PCB load slowly increases over time.

So removing PCBs from a school building involves more than just taking out the PCB source material, it also means identifying other materials in the building that may have become contaminated either directly or indirectly as a result of the PCB source. It often means deciding how to manage building structural components that have become contaminated with PCBs.

How do School Administrators Learn about PCBs in their Buildings?
How do school administrators learn about PCBs in their schools? Usually by surprise. Here’s how it often happens: The school district receives a grant for a window or sprinkler replacement project for an old school. The facilities department develops project plans and specifications and then lets out a contract for the work to the lowest qualified bidder. The work begins and continues until one day the facilities department hears from the contractor that PCBs have been discovered. The project grinds to a halt and the contractor recommends more testing.

The contractor’s first estimate for additional costs to manage the PCBs is high, but within the project’s budget reserve. However, as more test results become available and PCBs are found to be more widespread than originally anticipated, cost estimates zoom past the reserve amount. This is when senior administrators or boards of education start asking questions, but by then the options for alternative action are limited.

TSCA – the Law of Unintended Consequences
You can read the Toxic Substances Control Act (TSCA) from cover to cover and you’ll find nothing about removing PCBs from schools or other buildings. Take a look at the 800+ page legislative history of TSCA and you will still find nothing about PCBs in schools. How about EPA’s PCB regulations (40 CFR 761)? No, still nothing about removing PCBs from schools or other buildings. So if there is nothing in the statute or the regulations about removing PCBs from schools or other buildings, and if there is no evidence that PCBs in building materials pose a health risk, then what explains the need to assess and remove PCBs from schools? Stay tuned for part 2.



I keep a medal in my desk at home. I didn’t earn it; it is only an eBay purchase, but it has a lot of philosophical value for me. It is constructed of brass with enameled areas and a cloth ribbon on the hanger. The central detail shows symbols for alpha, beta, and gamma radiation over a drop of blood, and the Cyrillic script around the central device reads “uchastnik likvidatsyi posledstviy avarii,” or roughly “participant in the liquidation of accident consequences.” (Apologies to those readers whose Russian is certainly better than mine!) As any watcher of Cold War spy movies will know, in Soviet parlance, to ‘liquidate’ something meant to eliminate it or to clean up its consequences, whether that something was a spy or an enormous environmental disaster.

The story behind this medal is not commonly known in the United States, but it should be, because it is enough to send a chill down your spine.

The specific term ‘liquidators’ (ликвида́торы, ‘likvidátory’ in Russian) was coined in 1986 to refer to the Soviet soldiers, scientists, and others who responded to the Chernobyl disaster: the April 26, 1986 explosion at Reactor 4 of the V.I. Lenin Nuclear Power Plant near Chernobyl, Ukraine, which blew the 1,200-ton reactor cover into the air and spread radioactive fallout, from dust to basketball-sized chunks of the reactor core, across the surrounding countryside. For my own part, I remember being very upset when the news of the Chernobyl explosion broke, because the broadcast interrupted the Transformers cartoon I was watching; I was seven years old at the time, and my priorities were in line with my age.

The Soviet Union being what it was, most large civil projects, from construction programs to disaster response efforts, were run more or less along the lines of military campaigns. As the Chernobyl disaster progressed, the military element became more pronounced, with the Soviet leadership speaking in wartime terms of “mobilizing” and “sending troops to the front”; Mikhail Gorbachev himself usually referred to the Chernobyl cleanup as a “frontline action.” At one point, workers hoisted a red flag on top of the reactor building as a symbol of ‘victory’ after finishing a particularly difficult phase of the work. In light of the militarized character and massive resources devoted to the operation, one BBC documentary on the topic subsequently dubbed the Chernobyl cleanup “the Soviet Union’s last battle.”

A view into the exterior of the exploded reactor.

Like any battle, the Chernobyl cleanup had its heroes and its casualties. Many of the first responders, from the plant staff and the local fire department, managed to prevent further disasters such as a giant steam explosion that could have blasted the reactor core completely out of the reactor building and scattered it like radioactive shrapnel for tens of miles. Men worked in areas that still have radiation levels in the thousands of rems (roentgen equivalent man, a unit of measure for radiation effects on the human body). Most of these men died within weeks of radiation sickness; some had to be buried in lead-lined coffins.  A moving essay on the experience of the Pripyat fire crews can be read here.

Memorial to the Pripyat firefighters.

As the scope of the disaster cleanup expanded, the Soviet government called in tens of thousands of men: recent military draftees, army reservists, and thousands of specialists from many fields, including firefighters, oilfield drilling crews, heavy construction workers, hundreds of engineers and scientists, medical personnel, helicopter crews from the Afghanistan war, coal miners, police, and even janitors. Most of these people had no experience or training in radiation matters or even in disaster response work, and the vast majority did not even know what they had been brought in to do. Working conditions were harsh, and much of the safety equipment was improvised on the spot, with lead aprons and trucks hastily plated over with hand-beaten lead covers.

The scale of the crisis was unbelievable (by one estimate it cost 18 billion rubles, at a time when the ruble was nominally equivalent to the dollar), and the atmosphere was one of desperate improvisation. The immense steel and concrete sarcophagus that encloses the reactor was designed and built in less than six months, but that was only the tip of the iceberg. The entire population of an area of nearly a thousand square miles, some 120,000 people, had to be evacuated in a matter of days. The vast Red Forest, which earned its nickname from the color the trees had turned after being struck by fallout, was clear-cut so the trees could be buried in massive concrete-lined pits and dust suppressants applied to the soil. Relays of army helicopters airdropped bags of lead, sand, and boric acid into the shattered reactor building to bury the burning core. Massive geoengineering projects were launched, including slurry walls around the plant to limit the migration of contaminated groundwater, and a crew of coal miners tunneled out space for a massive cooling system (sadly never actually needed) beneath the exploded reactor itself, to prevent the molten reactor mass from melting its way down to the water table and triggering a steam explosion (the “China Syndrome” in US slang). One civilian helicopter pilot, Mykola Melnyk, received the two highest awards of the USSR, the Order of Lenin and Hero of the Soviet Union, for daring precision flying to install radiation sensors on the reactor, flying for hours at a time through the radioactive cloud leaking from the ruptured reactor. Mr. Melnyk passed away in 2013.


The most dangerous part of the work, shoveling radioactive debris from the roof of the power plant back into the reactor crater so the sarcophagus could be built, was done by army reservists in improvised protective clothing, working in relays on shifts less than a minute long, in what was still reckoned a virtual suicide mission. A previous attempt to use bomb-disposal robots to clear the debris had failed when the radiation destroyed the robots’ electronics, and the gallows humor of the Soviet military gave these men the morbid nickname of “bio-robots.”

“Bio-Robots” at work on the roof of the reactor building.

In all, an estimated 600,000 men and women served as liquidators at one point or another, mostly in the summer and fall of 1986, and about a quarter million of them were exposed to their theoretical lifetime safe limit of radiation, or far more. Tens of thousands have already died, and tens of thousands more are disabled by health problems. In recognition of their services, liquidators were awarded the status of military veterans and granted government benefits such as medical care, though these vary according to how badly and for how long the individual was exposed, and the allotments may be more or less forthcoming at times, especially since most of the disaster area and many of the former liquidators are now Ukrainian, and part of the exclusion zone lies in Belarus.


Next year, 2016, will be the 30th anniversary of the Chernobyl disaster; I don’t know what kind of memorial services are planned, but it surely deserves something. In retrospect, the United States has never suffered a manmade disaster on the scale of Chernobyl, and we should count ourselves very fortunate.

And yes, I already checked– the medal isn’t radioactive.

Antelope Island (foreground), the causeway, and the Great Salt Lake, with the Wasatch Range on the mainland in the background.

This is my last post from my stay at the American Industrial Hygiene Association (AIHA) conference in Salt Lake City. But it’s really not about the conference at all; it’s about Utah’s geology, or at least the small pieces of it I was able to see. After arriving in the city on Saturday, I had a great visit to the Natural History Museum. By Sunday morning I was ready to get up close to some of Utah’s fascinating environmental settings. A quick internet search brought me to the Antelope Island State Park website, which sounded like a very good destination for the day.

Antelope Island is the largest island in the Great Salt Lake; it’s home to a large free-ranging bison herd, pronghorn antelope, bighorn sheep, mule deer, and other wild animals and birds. From Salt Lake City it’s a little less than an hour’s drive to the park entrance; the island is connected to the mainland by a 7-mile causeway that runs through the lake.

The Great Salt Lake, reputed to be the largest natural lake west of the Mississippi River, is actually a small remnant of ancient Lake Bonneville. At its largest, about 15,000 years ago, Lake Bonneville covered 20,000 square miles and extended into Idaho to the north and Nevada to the west; it was almost as large as modern-day Lake Michigan and much deeper. Like the Great Salt Lake, Lake Bonneville was a terminal lake, meaning no rivers flowed out of it; it captured all the runoff from the surrounding mountains.

The limits of Great Salt Lake and the larger Lake Bonneville.

Geologists think that about 15,000 years ago the level of Lake Bonneville rose to that of Red Rock Pass in Idaho to the north. Once the lake reached that level, its water began flowing down through the pass. The erosion caused by the rapidly moving water led to a catastrophic flood in which most of Lake Bonneville drained into Idaho’s Snake River drainage basin. The entire event is believed to have taken less than a year, and almost 5,000 cubic kilometers of water are estimated to have inundated southern Idaho as a result.
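To get a feel for those numbers, here is a back-of-envelope lower bound on the flood’s average discharge, assuming only the figures cited above (roughly 5,000 cubic kilometers released over at most one year):

```python
# Lower bound on the Bonneville flood's average discharge, assuming the
# cited figures: ~5,000 km^3 of water released over (at most) one year.
volume_m3 = 5_000 * 1e9                  # 1 km^3 = 1e9 m^3
seconds_per_year = 365.25 * 24 * 3600    # about 3.16e7 seconds

avg_discharge_m3_s = volume_m3 / seconds_per_year
print(f"Average discharge of at least {avg_discharge_m3_s:,.0f} m^3/s")  # ~158,000 m^3/s
```

That lower bound is roughly ten times the average flow of the Mississippi River, and since the draining was not spread evenly over a full year, geologists’ peak-discharge estimates for the flood are far higher still.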

At over 6,000 feet of elevation, the mountains on Antelope Island would still have towered above the surface of ancient Lake Bonneville even at its height. While on the island I hiked to one of the recommended peaks and got fabulous views of the surrounding lake and mountains, seeing numerous bison, deer, and antelope along the way. The visitor’s center has excellent displays and helpful staff; it’s a good stop to make at the start of your visit. The only downside of my island adventure was the abundant no-see-ums; they were out in force and left my legs bitten and red.

Like most of the western US, northern Utah has experienced drought conditions for the past several years. As a result, the Great Salt Lake has shrunk to a fraction of its size of only a few years ago, and walking out to the water now involves a long trek over the salt-crusted beach. As I was leaving Antelope Island I stopped to ask a couple of park rangers where I should spend my last free morning on the day I would be leaving Salt Lake City. Almost in one voice they answered Snowbird, a town and ski resort at the top of a canyon just south of the city.


Jutting sharply up from the eastern side of the Great Salt Lake basin is the Wasatch Range, the westernmost mountains of the greater Rockies. Looking up from the basin, one sees deep V-shaped canyons with peaks reaching 12,000 feet; not the highest in the Rockies, but among the largest unbroken elevation rises. Of the canyons, Big and Little Cottonwood are some of the most studied geologic features in Utah. Snowbird sits near the top of Little Cottonwood Canyon, with the Alta ski area at virtually the very top. The Cottonwood Canyons contain some of the most dramatic glacial scenery in the Wasatch Range.

These canyons and many of their tributaries and high-elevation basins were filled with hundreds of feet of glacial ice between 30,000 and 10,000 years ago. Geologists believe that the Little Cottonwood Canyon glacier reached beyond the mouth of the canyon and extended well into Lake Bonneville, calving icebergs into the Ice Age lake. In contrast, the Big Cottonwood Canyon glacier is believed to have advanced only about 5 miles down its canyon, presumably due to less snow accumulation in its catchment area.

After my morning brew at Alchemy Coffee I set off for Little Cottonwood Canyon. I knew my visit would be brief because my flight left in mid-afternoon. The drive from the city to the mouth of the canyon was surprisingly quick. As I turned east up the canyon road, I got that sense of being in a very special place: the canyon walls rise sharply on both sides, and the road kept climbing as it ran east, deeper into the canyon.

My lunchtime view of Little Cottonwood Creek.

The weather was perfect, another gorgeous Utah day, so the views were spectacular in all directions. When I got to Snowbird there was still snow on the mountains, but I was told that this was residual snow from a recent storm rather than the remains of the winter snowpack, of which there had been little. I meandered up a steep trail for a bit, looked at my watch, sadly turned around, and walked back to my car. On the way down the canyon I stopped beside Little Cottonwood Creek to have my lunch. An hour later I approached the SLC airport, ready for the trip home.

Overall, I was really taken by Salt Lake City, especially by how accessible its spectacular natural settings are. One is hard pressed to make it through the day without panoramic views of the mountains and the Salt Lake basin; you really can feel the magic in the air. It is a great destination, and I hope to go back real soon.

In my last post I discussed my invitation to speak about PCB exposures at the 2015 American Industrial Hygiene Association (AIHA) conference in Salt Lake City. In this post I want to review a fascinating remedial technology presented during the conference’s PCB session by Professor Cherie Yestrebsky from the University of Central Florida.

Zero Valent Metal PCB Treatment

Working with her graduate students, Dr. Yestrebsky developed a technology that effectively destroys PCBs on non-porous surfaces, such as sheet metal and pipes. The specific chemical reaction at the heart of her approach, sometimes referred to as “zero valent metal” chemistry, reduces PCB molecules by stripping the chlorine atoms from them. This technology has also been referred to as the “NASA method.”

In practice the technique involves applying a specially prepared paste (containing a suspension of zero valent metal) to a PCB-contaminated non-porous surface; the paste consists entirely of non-hazardous materials. Typically, PCBs on non-porous surfaces got there through their inclusion in the paint or other coating used on the surface. The paste softens the PCB-containing coating and brings the PCBs into proximity with the zero valent metal. The chemical reaction between the metal and the PCBs is then allowed to proceed for a period of time. The reaction takes place at room temperature, and no toxic gases or fumes are released during or after the reaction. The spent waste is neither a listed nor a characteristically hazardous waste.

In addition to its effectiveness on non-porous surfaces, zero valent metal technology shows some promise for significantly reducing concentrations of PCBs absorbed into porous building materials like coated and non-coated concrete. When applied to non-coated concrete the mode of action is different because no surface coating is present. However, it appears that the paste is able to penetrate some distance into the concrete and destroy PCBs close to the surface. PCBs that have penetrated more deeply into the concrete matrix are not affected.

PCBs sorbed into concrete are an especially troublesome problem, particularly in occupied buildings like schools and residences. The technical solutions currently available to reduce or eliminate these PCBs are very limited and typically require the removal and replacement of the contaminated concrete. Removal of PCB-contaminated concrete is usually expensive and often impossible without jeopardizing the structural integrity of the building.

Clearly the zero valent metal approach would be a welcome addition to the PCB remediation tool kit for non-porous and porous surfaces. However, one obstacle stands in the way of getting it there: the PCB regulations themselves.

Do the PCB Regulations Stifle Innovative Remedial Solutions?

Subpart D of the PCB regulations describes in exhausting detail the permitted methods for PCB disposal. The language is so dense and the cross-references so convoluted that some have suggested EPA subcontracted the writing of Subpart D to the IRS (just joking). Still, Subpart D does contain provisions that seem to create a regulatory path for the development of innovative PCB disposal methods.

However, the requirements for traversing this regulatory path are significant (read “abandon all hope ye who go this way”). I have heard presentations by USEPA scientists whose research into alternative PCB destruction methods came to an abrupt halt when they ran into the Subpart D regulations. If EPA’s own scientists can’t meet the Subpart D requirements, what chance does a non-federal government entity have?

Back to the zero valent metal technology. From the data I have seen the approach is 80% to 100% effective on non-porous surfaces and 40% to 80% effective on porous surfaces. On porous surfaces the effectiveness depends a lot on how deeply into the matrix the PCBs have migrated.

What standards do the PCB regulations require of alternative disposal technologies? Either equivalence to the PCB incinerator standard (99.9999% effective – the “six-nine” standard) or possibly equivalence to a “high efficiency boiler” (99.9% effective – the Herman Cain standard?). Of course, an applicant wouldn’t know what would be acceptable until submitting an application to EPA, and you can’t submit an application until you have the data demonstrating that the technology meets the standard. Catch-22.
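To put those standards in perspective, here is a minimal sketch in Python comparing a treatment’s destruction efficiency against the two regulatory bars. The concentrations are hypothetical illustrations, not measured data from any study mentioned above.

```python
# Sketch: how a treatment's destruction efficiency stacks up against the
# regulatory equivalence standards discussed above. All concentration
# values below are hypothetical examples, not measured data.

def destruction_efficiency(initial_ppm: float, final_ppm: float) -> float:
    """Percent of PCBs destroyed, from before/after concentrations."""
    return 100.0 * (initial_ppm - final_ppm) / initial_ppm

SIX_NINE_STANDARD = 99.9999  # PCB incinerator equivalence ("six-nine")
BOILER_STANDARD = 99.9       # high-efficiency boiler equivalence

# A treatment that takes a surface from 100 ppm down to 10 ppm is 90%
# effective -- a large practical reduction, yet far short of either bar.
eff = destruction_efficiency(100.0, 10.0)
print(f"Efficiency: {eff:.4f}%")                     # 90.0000%
print(f"Meets six-nine standard? {eff >= SIX_NINE_STANDARD}")  # False
print(f"Meets boiler standard?   {eff >= BOILER_STANDARD}")    # False
```

Even the best reported non-porous-surface results (around 100% in some tests) would need to be demonstrated to four more decimal places of certainty to claim incinerator equivalence.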

The Real Alternative Technology: Don’t Ask – Don’t Test

What it comes down to is this: EPA has set the bar so high for the introduction of an alternative remedial technology for PCB disposal/destruction that investing in the development of a potentially useful new technology is just too risky. Particularly in the case of PCBs in building materials, a technology that provided significant concentration reduction (say, greater than 50%) would seem very attractive even if it is not 99.9% or 99.9999% effective. But the regulations do not permit that option.

And so the real alternative is often “don’t ask – don’t test”. Since there is no published data linking PCBs in building materials to adverse health effects, and since there are few practical approaches to removing PCBs from buildings, many have concluded it is better not to know if PCBs are present. They may just be right.


Last July I received an email from someone at the American Industrial Hygiene Association (AIHA) asking if I would be willing to speak about PCBs at their June 2015 conference in Salt Lake City. Over the past few years the AIHA has been developing reference materials about PCBs for their members, and one of the focal points of the conference was to be PCBs in the built environment, particularly PCBs in building materials and air. After reviewing the information AIHA had already assembled and developing a topic for my presentation, I agreed to submit an abstract.

The conference took place the first week of June this year at the Salt Lake City Conference Center where I was one of seven presenters speaking about PCBs. Over the next couple of posts I’ll highlight parts of the presentations that I found particularly interesting; I’ll also highlight some of my side trips in the Salt Lake City area.

PCBs in Building Air

Since the late 1990s researchers have known that PCBs from building materials can often be detected in indoor air. The concentrations detected are low (low nanograms to low micrograms of PCBs per cubic meter of air), which reflects the relatively low volatility of PCBs. A lingering question is whether these airborne PCBs are present in gaseous or particulate form. In other words, are these PCBs stuck to fine particles floating in the air, like dust? Or are they present as free PCB molecules in the air, the same way oxygen and nitrogen are? At the conference several academic researchers presented data on this subject that was contrary to the information I have seen from others.

In short, these researchers found that the dominant PCB detected in air samples was a congener (an individual type of PCB molecule) referred to as PCB-28 (also known as 2,4,4′-trichlorobiphenyl), a PCB congener with three chlorine atoms. PCB-28 is a major constituent of the Aroclor mixtures 1016, 1242 and 1248. It is a much smaller part of Aroclor 1254 and is essentially absent from Aroclors 1260 and 1268. In the indoor air testing I am familiar with, PCB-28 is detected, but not as a major component.

Also, in my experience the four- and five-chlorine congeners tend to dominate air samples. I have attributed this to the parallel observation that Aroclor 1254 is the PCB mixture most frequently detected in building materials. Aroclor 1254 is a predominantly five-chlorine mixture; it contains approximately 53% penta-chloro congeners. It is reasonable to expect that even when Aroclor 1254 is the mixture present in a building material, it would be the lower chlorinated congeners within the mixture that would be more likely to volatilize. However, as described at the conference, the shift towards lower chlorinated congeners in air coming from higher chlorinated PCB mixtures in building materials is much greater than I have experienced in my own work.

While this may seem too academic to worry about, from a risk assessment perspective it can make a difference. Here’s why:

1. It could be that many of the higher chlorinated congeners being detected are actually associated with particulate matter and therefore may be less likely to be retained in the lungs;
2. If the PCBs are not retained in the lungs, then they do not contribute to a person’s PCB dose from breathing the air; and
3. Generally the lower chlorinated PCBs are considered to pose less risk than the higher chlorinated PCBs.

There are no answers for this yet, but I’ll continue to follow the issue.

Utah Natural History Museum

I had no plans when I arrived in Salt Lake City early on the Saturday afternoon before the conference started. It was a beautiful day, too early to go to the hotel, and why waste a beautiful day in a hotel room anyway? On the side of the street coming into the city was a billboard advertising the dinosaur exhibit at the Utah Natural History Museum. Since I still have more than a little of my boyhood fascination with dinosaurs, I decided going to the museum was the perfect way to spend the afternoon; good decision!

The museum is set in the foothills on the eastern side of the city in an attractive modern building with fabulous views of the surrounding mountains and the Great Salt Lake basin. The museum exhibits are laid out intelligently, the displays flowing easily into each other in a logical sequence. In addition to the incredible display of dinosaur fossils and reproductions, the museum is actively engaged in research and fossil restoration, a process that can be seen by visitors through laboratory windows.

The diversity of fossil types was beyond anything I have seen before or could even imagine. As much as I have enjoyed exhibits of ancient animals in east coast museums, they really can’t compete with what I saw at the UNHM.

If you have science-nerd children who love dinosaurs and there is any way you can swing it, my advice is to take them to Utah to see this museum. It was even better than I had hoped; the exhibits were painstakingly constructed, with great variety and well-written descriptions. Personally, I was glad to be on my own so I could dawdle at my own slow pace and revisit favorite sections. A must-see.


There was a recent news story about three local swimming pools being closed for the season because PCBs were discovered in the paint that lines the pools. Everyone involved acknowledges that neither the paint nor the PCBs pose a health risk to users of the pools. However, once tested and found to contain PCBs, the paint must be taken out of service under EPA’s PCB regulations. There was no requirement that the pool paint ever be tested, but conducting the test commits the owner to a significant potential liability.

So it is the act of testing the paint and discovering the PCBs that triggered the need to remove the paint, which is likely to be a very expensive project, particularly if the PCBs are found to have migrated from the paint into the concrete of the pools. Again, no one is saying there is any health risk from the PCBs in the pool paint. So where did the idea of testing the paint come from? According to MassLive it was the recommendation of the City’s consultant, a large engineering firm engaged by the City to assist in developing repair options for the pools.

Since the use of PCBs in paint stopped 35 to 40 years ago, the PCBs being discovered now have been in the pools for at least 35 years. By testing the paint for PCBs, the consultant has put the municipality in a position of spending an as yet unknown amount of money to remove the paint from the pools. Then there is the issue of what to do if the PCBs have migrated into the underlying concrete, something they usually do. The ultimate cost of those seemingly simple PCB tests could well escalate to surprisingly high levels. Hopefully senior municipal representatives were advised in advance by their consultant of the potential risks and costs of undertaking these PCB tests.

The Winter of 2015

For those of you from outside New England, you just can’t imagine how sick and tired we locals are of winter this year! My almost-89-year-old dad told me this was without a doubt the snowiest, coldest winter of his life. And for once the worst of the weather has occurred along the coast, particularly in Boston. More typically Boston enjoys a temperate climate, with western and northern New England getting the brunt of winter’s snow and cold, but not this year.

In Boston and surrounding communities, snow piles seven-plus feet high on street corners are commonplace. Driving on smaller streets can be like driving through a snow tunnel. Here are a few typical scenes.

a lot of shoveling
This guy has a lot of shoveling to do – those mounds on the side of the road are buried cars!

where is my car
He’s just hoping to find his car. Good luck!

fenway under snow
Even famed Fenway Park is covered in snow. Pitching practice canceled.

For more details on this record breaking winter weather check this link:

Believe it or not, comparing recorded annual snowfalls, this year is still in second place behind 1995-1996 (July 1, 1995 to June 30, 1996). However, only a 5.5-inch difference remains, and we still have most of another month of winter to go!

I took a vacation to Savannah, Georgia about four years ago; after a couple months of a New England winter, I can’t help but start thinking back on warmer places. As with any good vacation, it’s the odd and unexpected things that stick in your head for years afterwards. One of my most salient memories of that vacation was an idiosyncratic concrete-like building material called tabby, which is as much a part of the historic landscape of the southeastern US coast as Spanish moss.

Tabby Blocks
Tabby blocks on Sapelo Island, Georgia

Tabby is a mixture of quicklime (unslaked lime, or calcium oxide, produced by burning locally abundant oyster shells at about 2,000 degrees Fahrenheit), sand, water, and whole unburnt oyster shells, which serve as a coarse aggregate. The recipe is elementary: the ingredients are mixed in roughly equal proportions by measure (not weight), then poured into structural forms or cast into large blocks and allowed to dry in the sun for several days before use. The result was a durable concrete-like material, which could be handled like concrete blocks or ashlar stones wherever something more durable than wood was desired along the damp and hurricane-prone southeastern coast. The tabby was then finished with coats of stucco (also a locally available mixture of lime, sand, and water) to make a smooth surface and to keep water from draining through the porous tabby and eroding the material.
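The “roughly equal parts by measure” recipe lends itself to a back-of-the-envelope batch calculation. The sketch below is purely illustrative; the strict four-way equal split and the example form size are my own assumptions, not a documented historical formula:

```python
# Back-of-the-envelope tabby batch split, assuming the "roughly equal
# proportions by measure" recipe described above. The exact equal four-way
# split is an illustrative assumption; historical mixes surely varied.

TABBY_INGREDIENTS = ("quicklime", "sand", "water", "whole oyster shells")

def batch_by_volume(total_volume_cuft: float) -> dict:
    """Divide a desired volume of mix into equal parts per ingredient."""
    part = total_volume_cuft / len(TABBY_INGREDIENTS)
    return {name: part for name in TABBY_INGREDIENTS}

# Example: an 8-cubic-foot wall form calls for about 2 cubic feet of each.
print(batch_by_volume(8.0))
```

In practice the shells and lime would have been measured out in whatever baskets or barrels were at hand, which is exactly why a by-volume recipe suited unskilled labor.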

Tabby is also sufficiently durable that the US Army Corps of Engineers was content to use it instead of concrete for an 1880s-era underground bombproof bunker at Fort Pulaski, located between the coast and the riverside port city of Savannah, Georgia (see photo).

Interior of a late 19th Century bunker at Fort Pulaski, Georgia

Coquina, a similar but naturally occurring cementitious material made of geologically consolidated seashell fragments, was likewise used to construct the walls of Castillo de San Marcos in St. Augustine, Florida.

Tabby Economics

Tabby was known in Europe in the early Middle Ages; the now-ruined Wareham Castle in Dorset, England was built of tabby in the early 1100s. It was introduced to North America in the colonial era by English and Spanish colonists in South Carolina, Florida and Georgia and was widely used from the seventeenth century through the post-Civil War era.

Tabby was an ideal building material for the time and place for a number of reasons. Durable building materials such as brick and stone were not locally available on the coast of the southeastern states, which is for the most part a vast sandy plain. Brick and stone had to be imported at a premium cost. The technology for making and using concrete had been lost for a thousand years after the fall of Rome and was only gradually rediscovered in the early 19th Century. As an interesting aside, this scarcity of durable materials in the coastal South is why 19th-century seacoast forts along the coastline of the southern states were typically built of brick (some 25 million bricks for Fort Pulaski) while those north of Virginia were typically built of granite blocks: brick was easier to transport, and brickworks could be put up wherever there was a suitable clay pit somewhere inland.

By contrast, the ingredients for tabby were readily available—vast buried oyster beds can be found along the shores and islands, with live beds offshore—and although the process was labor-intensive, it was simple enough that the mix could be prepared and poured by unskilled labor. Thanks to its simplicity and available ingredients, tabby remained in common use until the 1920s, well after Portland cement and concrete became available, although later uses of tabby were apparently more an aesthetic or decorative choice than a structural one.

Sapelo Island Examples

The most conspicuous use of tabby I saw during my brief stay was on a visit to Sapelo Island off the Georgia seacoast, where I saw several examples of tabby used for former slave quarters and a mansion, as well as roads where oyster shells were used as gravel. Old tabby blocks marking the ruins of buildings are scattered among the live oak trees along many of the island’s roadsides. The plantation house is now known as the Reynolds Mansion and (along with most of the rest of the island) is owned by the Georgia Department of Natural Resources as part of the Sapelo Island National Estuarine Research Reserve. The mansion was originally constructed of tabby circa 1802 by Thomas Spalding, an architect and tabby enthusiast, anti-abolitionist politician, and plantation owner who died in 1851. The building was subsequently rebuilt by Detroit-based Howard Coffin, owner of Hudson Motors, in 1912. Richard “Dick” Reynolds, heir to the R.J. Reynolds tobacco fortune, noted philanthropist, and one of the eventual founders of Delta Airlines, acquired the mansion in the 1930s and renovated the plantation into a sprawling private retreat, leaving it in the form it retains today.

Reynolds Mansion
Reynolds Mansion, Sapelo Island, Georgia

Most of the surviving tabby buildings are now considered historic structures or to have cultural or architectural significance. The upkeep and repair of these buildings is something of an art, as some modern materials may prove incompatible with the tabby and cause the historic material to deteriorate.