Environmental Geosciences (DEG)
American Association of Petroleum Geologists Annual Meeting, April 11–14, 1999, San Antonio, Texas: Abstracts
1999 Division of Environmental Geosciences Forum: Temperature Changes Through Time
The Division of Environmental Geosciences of the American Association of Petroleum Geologists organized and sponsored the Temperature Changes Through Time Forum at the 1999 national meeting. The Forum had two objectives. The first was to invite investigations (i.e., data and interpretations) from a wide variety of geologic specialties that may offer evidence of past climate changes. The second was to identify methods or techniques that can be applied to the geologic record and have sufficient resolution to discriminate between normal temperature fluctuations and significant departures from them.
Abstracts for the nine presentations made at the Forum are presented here and demonstrate that the first objective was accomplished. The morning session included five papers covering a broad range of topics: a concept that land masses influence (or control?) oceanic circulation patterns (Gerhard), potential impacts of dramatic alterations in oceanic circulation and rainfall patterns (Broecker), paleontologic responses to a 6°C temperature increase that occurred during the late Paleocene (Bralower), the relationship between calcium carbonate levels in soils and atmospheric carbon dioxide levels over 400 million years (Ekart), and the potential of using marine sponges as temperature proxies over 1000–2000 year time spans (Thayer).
The afternoon session had four papers: a study of Antarctic beetle response to a proposed warmer period during the Pliocene (Ashworth), an investigation involving fossil leaves that suggests Eocene climates were significantly warmer than today (Kuerschner), an ice core project that offered evidence for the rapid warming of low-latitude settings (Thompson), and an overview paper summarizing various types of evidence that current temperature variations are consistent with those of the past (Bluemle).
The Forum was a means to hear scientific evidence from a wide range of workers and disciplines and was successful in bringing multiple lines of evidence to bear on a subject of great current interest. It is hoped that the Division will be able to continue such activities and thus "educate the membership and general public about important environmental and conservation issues," as set forth in its charter and mission.
Response of Beetles to Global Change: The Past is a Clue to the Future
Ashworth, Allan C., Department of Geosciences, North Dakota State University, Fargo, ND
Fossils of beetles preserved as heads, pronota, and elytra in Pliocene, Pleistocene, and Holocene sediments are an important resource from which the historical effects of climate change and human activities on insect faunas can be examined. During the Plio-Pleistocene transition ~2 million years ago, a diverse beetle fauna inhabited Kap København, northernmost Greenland. The beetle fauna inhabited a forested landscape, in contrast to the polar desert of today. A significantly warmer climate also may have existed in the interior of Antarctica until Pliocene time, based on a fossil assemblage that includes beetles. During the ~2 million years of the Quaternary Period, both the northern and southern hemispheres have undergone repeated glaciations in both polar and temperate latitudes. Beetle species have responded by tracking the changing climates. Even though populations were isolated and conditions were theoretically conducive to speciation and extinction, only a few new species have been described and only a few species became extinct. Regional extinctions have been detected, but these did not lead to species extinctions. Human activities during the Holocene and within historical times have produced effects in the fossil record as striking as those of climate changes. In the British Isles and Europe, clearance of old-growth forests, starting in Neolithic times ~5000 years ago, led to a reduction in habitat and the extinction or restriction of several species of beetles. Much later, in the mid-nineteenth century, the arrival of Europeans and their cultivation practices modified the insect fauna of the American Midwest so profoundly that the event is as detectable in the fossil record as any ice age climate change. The lesson from the fossil record of insects is that with increasing human disturbance there will be more extinction of insect species.
Evidence Delimiting Past Global Climate Changes
Bluemle, John P., North Dakota Geological Survey, Bismarck, ND; Joseph M. Sabel, U.S. Coast Guard, Oakland, CA; and Wibjörn Karlén, Stockholm University, Stockholm, Sweden
Politicians and the media assume Earth’s climate is warming as the result of human activity. Various types of evidence of previous climate changes were investigated as a means of testing the validity of assigning anthropogenic causes to this change.
Data with broad geographic coverage indicative of temperature and climate were evaluated. These included records of glacial advance and retreat, sedimentologic evidence of sea level change and glacial activity, palynologic indications of species succession, dendrochronologic evidence of tree-growth response to environment, and continental ice-core parameters indicating accumulation rates as well as other climate surrogates. Also reviewed were historical sources such as explorers' journals, which document significant climate effects over time. Each type of evidence has particular strengths and weaknesses, among them preservation of regional versus local conditions, transport into or out of the system, age-date reliability, correlation between data types, and data (as well as human) bias. All the data indicate that the Holocene has been characterized by ten or more irregularly spaced global "little ice ages," each lasting a few centuries and separated by sometimes sudden and dramatic global warming events.
It is difficult to develop precise paleothermometry, but qualitative evaluations indicate frequent, sudden, and dramatic climate changes. Changes can be rapid, swinging from warmer than today to full glacial conditions within 100 years; the converse can also be true. All available data indicate that current climate change is no greater in rate or magnitude, and is probably less in both, than many changes that have occurred in the past.
The Late Paleocene Thermal Maximum: Ancient Global Warming at Modern Rates?
Bralower, Timothy J., Geology Department, University of North Carolina, Chapel Hill, NC; Lisa Sloan and James Zachos, Earth Science Department, University of California, Santa Cruz, CA
One of the most abrupt and dramatic ancient global warming events took place ~55 Ma, in the late Paleocene epoch. This event, known as the Late Paleocene Thermal Maximum (LPTM), involved warming of high-latitude and subtropical oceanic surface waters by up to 6°C and of deep waters by up to 8°C. Deep-water warming and consequent oxygen deficiency led to the most severe mass extinction of deep-sea faunas in the last 90 million years. By contrast, the LPTM is also associated with major speciation of planktic foraminifers and terrestrial mammals. The event corresponds to a large (3 per mil) negative carbon isotope excursion (CIE) that suggests major changes in the nature of carbon cycling. The CIE has been used to correlate the LPTM between terrestrial and marine sediments.
Current estimates of the duration of the LPTM range between 50 and 200 thousand years. The onset of the event, including the full magnitude of the CIE, is thought to span 2-10 thousand years. This initial rate of CO2 input is comparable with the anthropogenic input from fossil fuel burning. Possible sources of CO2 and warming mechanisms include dissociation of methane hydrates along continental margins and a major episode of effusive volcanism in the North Atlantic. For causal mechanisms to be fully tested, however, more precise estimates of the duration of the onset are required. In this talk, a state-of-the-art chronology of the LPTM was presented along with a comparison of LPTM and modern CO2 input and warming rates.
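One way to see why the CIE implies a large input of isotopically light carbon is a simple two-reservoir mixing balance (an illustrative sketch; neither the equation nor the numbers below are taken from the abstract):

$$ M_{added} = M_{reservoir} \times \frac{\delta_{initial} - \delta_{final}}{\delta_{final} - \delta_{source}} $$

Assuming an ocean-atmosphere carbon reservoir of roughly 40,000 Gt, a 3 per mil negative shift, and methane-derived carbon at about -60 per mil, the balance gives roughly $40{,}000 \times 3/57 \approx 2{,}000$ Gt of added carbon, indicating the scale of input a methane hydrate source would have to supply.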
Surprises in the Greenhouse
Broecker, Wallace S., Lamont-Doherty Earth Observatory of Columbia University, Palisades, NY
During the last glacial period, Earth’s climate underwent frequent large and abrupt global changes. This behavior appears to reflect the ability of the ocean’s thermohaline circulation to assume more than one mode of operation. The record in ancient sedimentary rocks suggests that similar abrupt changes plagued the Earth at other times. The trigger mechanism for these reorganizations may have been the antiphasing of polar insolation associated with orbital cycles. Were the ongoing increase in atmospheric CO2 levels to trigger another such reorganization, it would be bad news for a world striving to feed 11-16 billion people.
Fossil Leaves as Biosensors of Eocene Paleoatmospheric CO2
Dilcher, David L., Florida Museum of Natural History, University of Florida, Gainesville, FL; Wolfram M. Kuerschner, Henk Visscher, and Friederike Wagner, Laboratory of Paleobotany and Palynology, Utrecht University, The Netherlands
During the Cretaceous, as broad-leaved flowering plants evolved leaf forms similar to those seen in many flowering plants today, they had to develop physiological responses to changes in the supply of basic resources such as CO2. As atmospheric [CO2] varied, the anatomy of the leaves accommodated these fluctuations, displaying physiologically determined signals that can be used to discriminate atmospheric levels of [CO2]. A decreasing number of stomata (air exchange holes in a leaf) on leaves of broad-leaved trees, as measured by the stomatal index, indicates an increase in [CO2]. The stomatal index is the ratio of the number of stomata per unit leaf area to the number of total epidermal cells per unit leaf area; it thus expresses stomatal frequency independently of variation in epidermal cell size and serves as a sensitive parameter for detecting changes in stomatal frequency. In comparing the stomatal indexes of modern leaves of the loblolly bay (Gordonia lasianthus) with those of 100-year-old leaves and of related Eocene leaves, significant differences were observed that appear to be directly related to the atmospheric [CO2] present when the leaves grew. Our preliminary results indicate that Eocene p[CO2] was on the order of 450-500 ppmv at a time when the earth was significantly warmer than today. Analysis of stomatal indexes over time provides a useful method for quantifying past changes in atmospheric [CO2].
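Written out as a formula (the percentage form common in the paleobotanical literature; the formula itself is not given in the abstract):

$$ SI = \frac{n_s}{n_s + n_e} \times 100 $$

where $n_s$ is the number of stomata and $n_e$ the number of other epidermal cells counted over the same leaf area. Because numerator and denominator are counted over the same area, the index is insensitive to epidermal cell size.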
A 400-Million-Year Record of Atmospheric Carbon Dioxide Deduced from Pedogenic Carbonates
Ekart, Douglas D., Energy and Geoscience Institute, University of Utah, Salt Lake City, UT and Thure E. Cerling, Department of Geology and Geophysics, University of Utah, Salt Lake City, UT
Calcium carbonate commonly accumulates in soils where mean annual precipitation is < 100 cm, a condition met by a large fraction of the Earth’s terrestrial surface. Pedogenic carbonates are a common component of fossil soils (paleosols). The carbon isotope composition of pedogenic carbonates meeting specific criteria can be related to ecological conditions and the concentration of carbon dioxide in the atmosphere.
We have collected and analyzed pedogenic carbonates from hundreds of paleosols. These data have been combined with a large number of analyses from the literature. The combined data include paleosols from five continents that formed throughout the last 400 million years. Application of Cerling's CO2 paleobarometer to these data has significantly constrained the history of atmospheric CO2 through this time period.
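For readers unfamiliar with the technique, the paleobarometer is a two-component mixing model for soil CO2. In one published form (reproduced here from the literature as a sketch, not quoted from this abstract), it reads:

$$ C_a = S(z) \times \frac{\delta^{13}C_s - 1.0044\,\delta^{13}C_r - 4.4}{\delta^{13}C_a - \delta^{13}C_s} $$

where $C_a$ is atmospheric CO2 (ppmv), $S(z)$ is the contribution of soil-respired CO2 at depth $z$, and $\delta^{13}C_s$, $\delta^{13}C_r$, and $\delta^{13}C_a$ are the carbon isotope compositions of soil CO2 (inferred from the pedogenic carbonate), soil-respired CO2, and the atmosphere, respectively.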
Results indicate that atmospheric pCO2 has fluctuated significantly throughout the Phanerozoic. High atmospheric CO2 concentrations in the Late Silurian and Devonian dropped to low concentrations in the Late Paleozoic. CO2 levels increased dramatically in the Triassic and subsequently dropped throughout the rest of the Mesozoic, reaching levels comparable with the modern condition prior to the Cretaceous-Tertiary boundary. Levels remained relatively low throughout the Cenozoic.
Geological Constraints on Global Climate Variability
Gerhard, Lee C., Kansas Geological Survey, Lawrence, KS
Anthropogenic forcing of global climate is a concept with great political appeal, but it generally ignores basic concepts of science, particularly geological knowledge. Much of the current popular debate focuses on decadal variation in global temperature and ignores the natural variability inherent in geological records ranging in scale from centuries to eras. Discussions about geological constraints require understanding that geological science views earth processes as being in disequilibrium, that geology is a temporal science, and that the energy budget of the earth is controlled by radiogenic and solar inputs, creating a single dynamic Earth system.
One illustration of the geological constraints on global climate is the congruence of widespread glacial episodes (icehouses) and warm periods (greenhouses) with continental plate configurations. During icehouse events (Late Precambrian, Carboniferous, Neogene), continents are arranged so as to disrupt equatorial ocean currents, distributing heat unevenly and providing polar moisture to sustain large-scale glaciers. During greenhouse events, earth-circling equatorial currents are presumed from the lack of physical barriers. The conclusion is that the tectonic distribution of topography and placement of continents control the geometry of ocean currents, which in turn determines Earth's climate.
Two Millennia of El Niño Events Potentially Archived in Sclerosponges
Thayer, Charles W. and Gary Hughes, Earth and Environmental Science, University of Pennsylvania, Philadelphia, PA; Kyger C. Lohmann, Department of Geological Science, University of Michigan, Ann Arbor, MI
Sclerosponges have great potential as temperature (T) recorders; they precipitate carbon and oxygen isotopes in apparent equilibrium with seawater. These animals lack photosynthetic symbionts, which simplifies interpretation of their isotopic record. It also allows them to live at subphotic depths, recording T below the thermocline and as deep as the carbonate compensation depth.
The coralline sponge Acanthocaetetes wellsi is widespread in caves in the western tropical Pacific. Warm water of the West Pacific warm pool accumulates here before moving eastward to the Americas during El Niño events. A. wellsi has distinct growth bands averaging ~1 mm/yr. Skeletal δ13C and δ18O show millimeter-scale cyclicity, apparently due to annual T variation, and large-amplitude swings every four to seven cycles, likely indicating El Niño events. Because individual sponges live for several centuries, they can provide high-resolution records of pre- and post-industrial El Niños. Additionally, the caves contain both live and dead A. wellsi. Cross-correlation of successively older specimens should yield a 1000-2000-year record.
The area, depth, and T distribution of prior warm pools will be defined. The velocity of their eastward movement may be determinable from East Pacific A. wellsi and American Mytilus. From these data, estimates of heat transfer rates can be derived, allowing determination of past El Niño intensities. By correlating past intensities with known impacts, refined prediction of El Niño effects will be possible.
Stable Isotopes and Their Relationship to Temperature as Recorded in Low-Latitude Ice Cores
Thompson, Lonnie G., Department of Geological Sciences and Byrd Polar Research Center, The Ohio State University, Columbus, OH
The potential of stable isotope ratios (18O/16O and 2H/1H) of water from mid- to low-latitude glaciers as a modern tool for paleoclimate reconstruction is reviewed. To interpret ice core isotopic records quantitatively, the response of the isotopic composition of precipitation to long-term fluctuations of key climatic parameters (temperature, precipitation amount, relative humidity) over the given area should be known. Furthermore, it is important to establish the transfer functions relating the climate-induced changes in the isotopic composition of precipitation to the isotope record preserved in the glacier. This paper presented long-term perspectives on variations in isotopic composition over the last 30,000 years in mid- to low-latitude ice cores. Also presented were ongoing calibration studies in Tibet, China, and Sajama, Bolivia, where the oxygen isotopic ratios (δ18O) of precipitation samples collected over several years at meteorological stations have been analyzed to investigate the relationship between δ18O and contemporaneous air temperature. The isotopic composition of precipitation should be viewed not only as a powerful proxy climatic indicator but also as an additional parameter for understanding climate-induced changes in the water cycle on both regional and global scales.
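For reference, the delta notation used in this and the other isotope abstracts expresses a sample's isotope ratio relative to a standard, in per mil:

$$ \delta^{18}O = \left( \frac{({}^{18}O/{}^{16}O)_{sample}}{({}^{18}O/{}^{16}O)_{standard}} - 1 \right) \times 1000 $$

with analogous definitions for $\delta^{13}C$ and $\delta^{2}H$.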
Division of Environmental Geosciences: Environmental Considerations in Exploration, Production, and the Aftermath
Presiding: L. Bruce and S. Halasz
Quantitative Risk Analysis of Natural and Environmental Processes in the Urban Setting
Browne, Carolyn, Geologic-Environmental Management Systems, Tulsa, OK
As the millennium draws closer, media attention is fastened on natural disasters such as flooding, droughts, and storms, both tornadoes and hurricanes. Not everything can be blamed on El Niño or La Niña.
Geology plays a major role in urban environmental impact; for example, the underlying stratigraphy is ignored by most developers as urban sprawl continues and the population escalates. When large three- and four-story homes are built atop friable layers of soil and ancient stream beds or shorelines, on hillside slopes nearing the angle of repose, the inevitable happens when torrential rains hit, rains that appear heavier than normal after a drought, with some droughts nearing 100-year records. Such homes tend to slide down hillsides, or so much water cascades down driveways, which act as funnels, that the city's paved roadways erode within a year or two of placement.
As urban development continues to place stresses and strains on natural and human-made resources, quantitative risk analysis of natural and environmental processes may be a geological tool of value to public administrators and other urban planners in allocating limited public financial resources to provide citizens with quality public services such as water, sewers, and paved streets.
Environmental Protection During Exploration and Exploitation of Oil and Gas Fields
Gildeeva, Irina, All Russia Petroleum Research Exploration Institute (VNIGRI), St. Petersburg, Russia
Estimates of oil pollution show that every year the surface of the globe is polluted by 30 million tons of oil, the equivalent of losing one large oil field. Average annual oil losses in Russia alone are estimated at 12 million tons over the last 2-3 decades. In recent years, up to 40,000 failures have occurred at field pipelines in Russia, at least 20 of them significantly large. In the Komi Republic, the area of pastures damaged as a result of oil production totaled 17,200 hectares; in Western Siberia, up to 12.5% of all pastures have likewise been damaged as a consequence of oil and gas field development. The author proposes to subdivide all known oil and gas field types into five groups according to the degree of their potential danger during field exploitation. This classification is the basis for constructing new environmental maps, which suggest a potential for environmental damage from hydrocarbon field exploitation in the Timan-Pechora and Western Siberia provinces. A defense system against oil pollution must therefore be created at all stages of oil and gas field development; it must include environmental audit monitoring, prevention of environmental pollution, and rehabilitation of soil and surface water. The report characterizes pollution prevention measures, including engineering-technical, judicial, and precautionary measures; the last is illustrated by the technology for producing and refining high-viscosity, sulphurous, metal-bearing oils. An effective biological cleanup method developed at VNIGRI, based on the application of the NAPHTOX biopreparation, is described. Recommendations on predicting and combating accidental oil spills are also discussed.
Realistic Exposure Scenarios: The Key to Saving Time and Money on Risk-Based Corrective Actions
Hippensteel, David L., U.S. Dept. of Energy-Nevada Operations Office, North Las Vegas, NV
The primary objective of risk-based corrective action is to establish a direct relationship between the extent of a corrective action and the level of risk (potential harm from contaminant exposure) associated with no corrective action. In reaching this objective, one is attempting to ensure that remediation resources are expended efficiently to reduce contamination that poses the most risk. To determine if corrective action is necessary, a risk assessment is performed. The goal of risk assessment is to calculate (as close to reality as possible) the potential for harm to a given receptor from exposure to contamination. The results of risk assessment are often accompanied by uncertainty in almost every parameter used to produce a risk estimate. This uncertainty varies with exposure scenarios that risk assessors must consider during their work.
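For context, the intake side of such a risk estimate commonly takes the form of a generic chronic daily intake equation (a standard textbook form, not quoted from this abstract); every parameter is fixed by the exposure scenario, which is where the uncertainty described above enters:

$$ CDI = \frac{C \times IR \times EF \times ED}{BW \times AT} $$

where $C$ is the contaminant concentration in the exposure medium, $IR$ the intake rate, $EF$ the exposure frequency, $ED$ the exposure duration, $BW$ the receptor's body weight, and $AT$ the averaging time.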
Controlling uncertainty is critical to achieving the primary objective of risk-based corrective action, that is, not wasting remediation resources on minor risks. This can be accomplished only by developing realistic exposure scenarios based on established facts or defensible trends supported by accepted evidence. More often than not, exposure scenarios used in risk assessments are "presumed" or prescribed by conservative regulations. The use of presumed or prescribed exposure scenarios defeats the primary objective of risk-based corrective action by forcing remediation to protect improbable receptors.
Environmental Cost Associated with Abandonment of Oil Production Facilities or “What Happens When the Wells Go Dry”
Kent, Bob, Geomatrix Consultants, Inc., Newport Beach, CA and Mark Hemingway, Geomatrix Consultants, Inc., Austin, TX
Environmental damage associated with oil and gas production operations is typically caused by releases of crude oil, other hydrocarbon liquids, produced water, or naturally occurring radioactive materials. These releases may be associated with wells, tank batteries, pits, or the piping systems that transport oil, gas, and water within and from the lease. The damage caused by these releases may be visible, such as large salt scars from water tank overtopping, or less apparent, such as soil or groundwater contamination from a tank or pipeline leak.
Oil production leases are often sold many times between initial discovery and the time production becomes uneconomical. Environmental restoration costs may not be significant if averaged over the life of the lease. When production stops, however, the last owner of record may face large capital costs related to decommissioning production facilities, possibly including landowner litigation. As oil and gas fields advance in their life cycles, purchasers should be aware of the past operating history and potential environmental liabilities and use this knowledge in structuring the lease acquisition. An environmental assessment provides the information needed to protect purchasers from excessive liability.
Using Trees as a Barrier to Metals-Contaminated, Saline Groundwater
Olson, Christopher, Amoco Corporation, Warrenville, IL; Frank Thomas, KMA Environmental, Texas City, TX; David Tsao, Amoco Corporation, Warrenville, IL; and Ari Ferro, Phytokinetics, North Logan, UT
Groundwater in the shallow transmissive zone at an AMOCO site is highly saline and contaminated with inorganics and radionuclides that exceed the U.S. Environmental Protection Agency's maximum concentration limits. Under a voluntary cleanup program agreement, additional response actions (e.g., additional containment, pump and treat) would be required if the plume migrates past the compliance monitoring boundary.
A dilution study and greenhouse feasibility study are under way to determine the suitability of installing a tree barrier strip at the site. The installation of dense rows of deep-rooted water-loving trees perpendicular to groundwater flow and along the leading edge of the plume may serve as added insurance against further off-site migration; the trees essentially acting as a flow confinement system.
A Practical Overview of Regulations Governing Oil Spills from Oil and Gas Producing Facilities in Texas
Railsback, Rick, Cura, Inc., Dallas, TX
Relevant legislation and regulations contained in and resulting from the Clean Water Act; the Oil Pollution Act of 1990; the Texas Oil Spill Prevention and Response Act; national, regional, area, and state spill contingency plans; and Texas Railroad Commission regulations are briefly reviewed. These laws and regulations are summarized in a checklist of seven essential requirements for operators to follow to comply with all applicable federal regulations and state regulations specific to Texas: (1) Do not spill oil into or on navigable waters or on land. (2) Immediately report to the proper governmental agencies all spills that result in a sheen of oil on navigable waters and all spills of more than five barrels of oil on land. (3) In cleaning up and mitigating spills, follow a response plan and cooperate fully with federal and state agencies. (4) Obtain insurance or other proof of financial responsibility in compliance with the provisions of the Oil Pollution Act of 1990. (5) Develop and implement a Spill Prevention, Control, and Countermeasures plan and, if necessary, have the plan approved by the federal government. (6) Develop and implement a response plan(s) that will satisfy requirements of the Clean Water Act, the Oil Pollution Act of 1990, and the Texas Oil Spill Prevention and Response Act, and have the plan(s) approved by the federal and/or state government. (7) Obtain an Oil Spill Prevention and Response Certificate from the Texas General Land Office.
Regulation of Hazardous Waste in the Oil Field: The Railroad Commission of Texas’ Approach
Sims, Bart C., Railroad Commission of Texas, Austin, TX
Many wastes generated in association with crude oil and natural gas exploration and production activities are exempt from regulation as hazardous waste. However, nonexempt oil and gas waste is subject to a hazardous waste determination, and if determined to be hazardous waste, is subject to standards for management of hazardous waste. Hazardous waste management standards are established by the federal Resource Conservation and Recovery Act, Subtitle C (RCRA Subtitle C). A state may enforce these standards through a hazardous waste program authorized by the U.S. Environmental Protection Agency (EPA), or EPA may retain RCRA Subtitle C Authority in a state. The Railroad Commission of Texas enforces standards equivalent to RCRA Subtitle C through Statewide Rule 98, Standards for Management of Hazardous Oil and Gas Waste. The Railroad Commission of Texas’ hazardous oil and gas waste program has not yet been authorized by EPA; therefore, the Railroad Commission of Texas and EPA share parallel authority over hazardous oil and gas waste in Texas.
Statewide Rule 98 is structured to address the application of federal hazardous waste regulation to the unique circumstances of oil and gas operations. This paper provides an overview of the regulatory process for hazardous oil and gas waste in Texas, including the application of important exemptions and exclusions and the most common applicable management standards.
Petroleum Hydrocarbon Fingerprinting Quantitative Interpretation: Development and Case Study for Use in Environmental Forensic Investigations
Wigger, John W., Environmental Liability Management, Inc., Tulsa, OK; Dennis D. Beckmann, Amoco Corporation, Tulsa, OK; Bruce E. Torkelson, Torkelson Geochemistry, Inc., Tulsa, OK; and Atul X. Narang, Amoco Corporation, Naperville, IL
Hydrocarbon characterization (fingerprinting) is a technique that uses gas chromatograms to identify petroleum hydrocarbons as to type of product based on boiling range and other definitive characteristics. Identifying and comparing samples are not straightforward: the composition of a single product type can vary, the composition of samples can change after release into the environment (weathering), and multiple releases can form complex mixtures. Hydrocarbon characterization is typically done by visual examination and comparison of chromatograms, and the outcome is dependent on the expertise and experience of the interpreter(s).
This paper reports on work to establish a more quantitative and less subjective process. First, a database was created of >60 known hydrocarbon samples representing streams such as gasoline, kerosene, naphtha, reformate, jet fuel, diesel, fuel oil, hydraulic oil, lubricating oil, crude oil, and other refinery intermediates. Second, a statistical correlation algorithm was developed to evaluate and compare chromatographic characteristics numerically. The techniques were used effectively in a case study involving an investigation of released hydrocarbon products at a refinery process unit. The techniques were instrumental in helping differentiate multiple sources and characterize the subsurface extent of the hydrocarbons.
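The abstract does not specify the correlation algorithm; as a minimal sketch of the general idea, two chromatograms binned onto a common retention-time axis can be compared with a simple correlation coefficient (all names and data below are hypothetical):

```python
import numpy as np

def chromatogram_similarity(signal_a, signal_b):
    """Pearson correlation between two chromatograms sampled on a common
    retention-time grid; values near 1.0 indicate matching peak patterns.
    Correlation is insensitive to overall signal strength, so dilution
    alone does not change the score."""
    a = np.asarray(signal_a, dtype=float)
    b = np.asarray(signal_b, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])

# Hypothetical seven-bin chromatograms: an unknown vs. two references.
unknown      = [0.1, 0.4, 2.0, 5.0, 3.5, 1.2, 0.3]
diesel_ref   = [0.1, 0.5, 2.1, 4.8, 3.6, 1.0, 0.2]
gasoline_ref = [4.0, 5.2, 2.5, 0.8, 0.2, 0.1, 0.0]
print(chromatogram_similarity(unknown, diesel_ref))    # close to 1.0
print(chromatogram_similarity(unknown, gasoline_ref))  # much lower
```

A real implementation would also have to handle baseline correction, retention-time alignment, and weathering effects.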
Applications of Forensic Chemistry for Petroleum Cases
Zemo, Dawn A., Geomatrix Consultants, San Francisco, CA
Forensic chemistry is useful for petroleum hydrocarbon investigations or litigation for three primary reasons: (1) petroleum products are chemically complex and can be highly variable in composition within certain performance-based ranges; (2) routine U.S. Environmental Protection Agency analytical methods only generalize the nature of petroleum products and reflect little of the chemical detail needed for forensic purposes; and (3) crude oils and products weather in the environment and change in chemical composition over time. Forensic chemistry is frequently used to answer questions about the identification or age of petroleum in the subsurface.
This presentation provided examples of multiple applications of forensic chemistry, including gas chromatography pattern-matching for product identification, discriminating between weathered fuel oils based on families of aromatic hydrocarbons, determining whether polynuclear aromatic hydrocarbons are of petroleum or combustion origin, using key discrete constituent analysis (e.g., PIANO) to distinguish between products of similar type or boiling range, and age dating products using key additives.
The best forensic interpretations rely on multiple lines of evidence and must incorporate the effects of weathering and changing refinery and transportation practices and avoid the pitfall of confusing weathering and age.
Division of Environmental Geosciences/Energy Minerals Division: CO2 Sequestration, Environmental Management Systems, and Other Environmental Topics
Presiding: M. M. Lee and W. P. Wilbert
Geologic Disposal of Carbon Dioxide Emitted by the Upstream Energy Industry: The Potential for the Alberta Basin
Bachu, Stefan, Alberta Energy and Utilities Board, Edmonton, AB, Canada
Carbon dioxide is a greenhouse gas that is believed to cause global warming and climate change. To mitigate these effects, reduction of CO2 emissions in the short to long term can be achieved by a combination of actions such as improving energy efficiency, CO2 utilization, and CO2 sequestration in biomass, oceans, and geological media. Sedimentary basins are naturally associated with fossil energy resources, whose exploitation leads to CO2 production and emissions to the atmosphere. For landlocked regions such as Alberta, sequestration of CO2 in geological media is probably the only viable solution for reducing CO2 emissions. There are basically five ways to sequester CO2 in sedimentary basins: use in enhanced oil recovery, storage in depleted oil and gas reservoirs, storage in salt caverns, replacement of methane in coal beds by CO2 injection, and hydrodynamic entrapment and mineral immobilization in deep saline aquifers. Successful CO2 sequestration depends on basin tectonics, hydrocarbon potential and maturity, and the hydrodynamic regime of formation waters. The Alberta basin is one of the few basins in the world that meet all the criteria and have all the options for CO2 sequestration in geological media. It has extensive, thick salt beds; abundant oil, gas, and coal and huge tar sand resources; it is located on a tectonically stable Precambrian platform; the hydrodynamic regime of formation waters is extremely favorable; and the necessary technology and infrastructure for deep CO2 injection are already in place. CO2 is already used in a few enhanced oil recovery operations and is injected as acid gas (CO2-H2S) in several depleted reservoirs and deep saline aquifers.
Carbon Dioxide Sequestration Potential in Coal Deposits
Byrer, Charles W. and Hugh D. Guthrie, U.S. Department of Energy-FETC, Morgantown, WV
The concept of using gassy unmineable coalbeds for carbon dioxide (CO2) storage while concurrently initiating and enhancing coalbed methane production may be a viable near-term system for industry consideration. Coal is our most abundant and cheapest fossil fuel resource, and it has played a vital role in the stability and growth of the U.S. economy. It is also one of the fuels causing large CO2 emissions, through the burning of coal in power plants. In the near future, coal may also have a role in addressing greenhouse gas concerns over increasing CO2 emissions throughout the world. Coal resources may be an acceptable "geological sink" for storing CO2 emissions in amenable unmineable coalbeds while significantly increasing the production of natural gas (CH4) from gassy coalbeds. Industry proprietary research has shown that the recovery of coalbed methane can be enhanced by the injection of CO2, which could allow unmineable coals near fossil-fueled power plants to be targeted for storing stack-gas CO2. Preliminary technical and economic assessments suggest that this concept merits further research leading to pilot demonstrations in selected regions of the United States. The benefits of considering and using unmineable coalbeds in a CO2-CH4 cycle system concept include the following: (1) CO2 is captured from power plant flue gas, pressurized, and transported to injection wells completed in deep unmineable coals; (2) coals near existing power plants have enormous capacity to store CO2 while enhancing CH4 production; (3) coal reserves underlie many U.S. power plants, with as much as 90% of those reserves unmineable; and (4) injection of CO2 into unmineable gassy coals displaces one molecule of sorbed CH4 while two or more molecules of CO2 are sequestered on the coal surface.
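Using only the exchange ratio stated in benefit (4), at least two molecules of CO2 sorbed per molecule of CH4 displaced, the CO2 stored per unit of methane recovered can be estimated as follows (a sketch; the example quantity is hypothetical):

```python
# Molar masses (g/mol)
M_CO2, M_CH4 = 44.01, 16.04
EXCHANGE_RATIO = 2.0  # >= 2 mol CO2 sorbed per mol CH4 displaced (per abstract)

def co2_stored_tonnes(ch4_tonnes):
    """Tonnes of CO2 sequestered per tonnes of coalbed methane produced,
    assuming the minimum 2:1 molar exchange."""
    mol_ch4 = ch4_tonnes * 1e6 / M_CH4   # tonnes -> grams -> moles
    mol_co2 = EXCHANGE_RATIO * mol_ch4
    return mol_co2 * M_CO2 / 1e6         # moles -> grams -> tonnes

# Hypothetical example: 1,000 t of recovered CH4 stores ~5,500 t of CO2.
print(round(co2_stored_tonnes(1000)))
```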
Exploring for Optimal Geological Environments for Carbon Dioxide Disposal in Saline Aquifers in the United States
Hovorka, Susan D. and Alan R. Dutton, Bureau of Economic Geology, The University of Texas at Austin, Austin, TX
Saline aquifers have been widely recognized as having high potential for very long term (geologic time scale) sequestration of greenhouse gases, particularly CO2. However, the same properties that make saline aquifers desirable for sequestration, isolation from the surface and minimal use as a resource, typically make for poor characterization. The significant variables affecting the usefulness of an aquifer for CO2 sequestration include porosity, permeability, compartmentalization, aquifer depth, pressure, temperature, thickness, water chemistry, rock mineralogy, and aquifer flow rate. Reservoir characterization and geologic play approaches are used to extend knowledge from well-known areas (saline aquifers closely associated with hydrocarbon production) to poorly known areas (potentially large-volume, unproductive saline aquifer targets for sequestration) by applying conceptual geologic and hydrologic models. Although reservoir characterization and play approaches are standard techniques for hydrocarbon exploration and development, they require adaptation for use in exploring for optimal hydrogeologic settings for CO2 injection in various geologic environments.
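The variables listed above feed, at the simplest screening level, into a volumetric capacity estimate. The sketch below illustrates the arithmetic; all parameter values, including the CO2 density and contacted-pore-volume fraction, are illustrative assumptions rather than figures from the abstract:

```python
def co2_capacity_megatonnes(area_km2, thickness_m, porosity,
                            co2_density_kg_m3=700.0, contacted_fraction=0.02):
    """Simple volumetric screening estimate of CO2 storage capacity.

    co2_density_kg_m3: assumed density of CO2 at aquifer depth and pressure.
    contacted_fraction: assumed fraction of pore volume the CO2 actually fills.
    """
    pore_volume_m3 = area_km2 * 1e6 * thickness_m * porosity
    mass_kg = pore_volume_m3 * contacted_fraction * co2_density_kg_m3
    return mass_kg / 1e9  # kg -> megatonnes

# Hypothetical aquifer: 1,000 km2, 100 m thick, 20% porosity -> ~280 Mt CO2.
print(round(co2_capacity_megatonnes(1000, 100, 0.20)))
```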
Land Subsidence Along the Texas Gulf Coast Due to Oil and Gas Withdrawal
Khorzad, Kaveh, Department of Geological Sciences, The University of Texas at Austin, Austin, TX
Land subsidence caused by groundwater withdrawal in the Houston-Galveston region is a well-documented phenomenon. Subsidence of up to 3 m has been calculated in the region since 1905. Hydrocarbon withdrawal is also a plausible cause of subsidence in areas where groundwater withdrawal has diminished and significant petroleum production has occurred for >95 years.
Sixteen fields were investigated by acquiring reservoir depressurization data near borehole extensometers installed by the Houston-Galveston Coastal Subsidence District. All reservoirs were found to be well below hydrostatic pressure; a few of them were underpressured even before production began. Four oil and gas fields (Mykawa, Satsuma, Dyersdale, and South Gillock) and three production zones (the Miocene, Frio, and Yegua) were used in a reservoir model and a boundary clay reservoir model to calculate subsidence. Subsidence above these fields is predicted to be as high as 0.44 m in a 19-year period at the Satsuma field and as low as 0.02 m in a 22-year period at the Dyersdale field. The implications of this study are that (1) hydrocarbon production, although not the major contributor to most land surface subsidence in this area, does play a role; and (2) depressurization, and subsequently subsidence, from oil and gas fields may be regional and connected with other fields, as inferred from the fact that some fields were already underpressured before production began.
Guidance for a Fully Integrated Health, Safety and Environment Management System
Knode, Thomas L. and Steve Abernathy, Halliburton Energy Services, Houston, TX
One of the keys to implementing structural controls that guarantee continual improvement is a comprehensive management system. Health, safety, and environmental (HSE) management systems have historically been separate from the mainstay processes of a company. This separation may hinder full implementation of the system because operations personnel do not consider HSE to be integral to their function. The Halliburton Management System (HMS) is an integrated management system that provides a structure covering HSE and quality within the framework of each activity. Processes are mapped in HMS, and feedback is captured with the Correction Prevention Improvement system.
HMS represents five key activities in practice. It includes the purpose and vision of the company, a formal system for the feedback of performance measures, customer and employee satisfaction, planning activities, and a system for making improvements to the system. The HMS is designed to focus on performance rather than compliance. By focusing on the process as a whole, the purpose, as defined in the mission statement, remains within sight.
Based on the strategy of the company, plans are developed and implemented to ensure the proper resources are in place. This includes development of personnel, purchase of capital equipment and inventories, and HSE elements. Because the HMS documents this process, it provides a guide that helps eliminate inefficiencies. This planning also helps integrate HSE management up front through documented risk assessment and control. The system then meets ISO requirements.
Grouting Monitor Wells—It’s All in the Mix
Mathewson, Christopher C. and Lloyd E. Morris, Department of Geology and Geophysics, Texas A&M University, College Station, TX
Monitor wells are frequently sealed using a cement-bentonite grout mixture because it is believed that the addition of bentonite (1) reduces shrinkage of the cement, (2) increases cement plasticity, (3) reduces curing temperatures, and (4) reduces grout weight. The design of a compound grout appears simple: add a fixed percentage of premium bentonite, by dry weight, per sack of Portland cement, and increase the volume of mix water from 5.2 gal/sack by 1.3 gal for each 2% bentonite added. This formula, however, is only valid if the bentonite is dry-blended with the cement before water is added. In most environmental applications, cement-bentonite mixing is performed at the job site, where the driller has two options: (1) mix the cement with water and then add the dry bentonite or (2) mix the bentonite with water and then add the cement. Both of these lead to problems; in the first case, the bentonite does not fully blend with the grout, and in the second case, the bentonite consumes all of the available water. The solution is to add more water. An 8% bentonite-cement compound grout can easily be mixed and pumped if the bentonite and cement are first dry-blended, but dry blending is not easily accomplished on an environmental drill site. If the cement is hydrated first, an additional 10 gal of water must be added; if the bentonite is hydrated first, at least 20 gal of additional water must be added to the blend. When expanded high-yield bentonite is used, >30 gal of additional water must be added. These high-water-content environmental grouts have very low density, very low strength, and high permeability. A strong quality assurance/quality control program for cement-bentonite grout specifications must be established.
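The water requirements quoted above reduce to a small calculation; the sketch below simply encodes the abstract's own figures (the function name and structure are ours):

```python
def grout_water_gal_per_sack(bentonite_pct, mixing_order="dry_blend"):
    """Mix water (gal per sack of Portland cement) for a cement-bentonite
    grout, per the figures quoted in the abstract.

    mixing_order: 'dry_blend'       - bentonite and cement dry-blended first
                  'cement_first'    - cement hydrated, then dry bentonite added
                  'bentonite_first' - bentonite hydrated, then cement added
    """
    base = 5.2 + 1.3 * (bentonite_pct / 2.0)  # 5.2 gal + 1.3 gal per 2% bentonite
    extra = {"dry_blend": 0.0, "cement_first": 10.0, "bentonite_first": 20.0}
    return base + extra[mixing_order]

# An 8% bentonite grout: ~10.4 gal if dry-blended, ~30.4 gal if the bentonite
# is hydrated first -- hence the low-density, low-strength, permeable grouts
# the authors warn about.
print(grout_water_gal_per_sack(8, "dry_blend"))
print(grout_water_gal_per_sack(8, "bentonite_first"))
```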
Beyond Compliance: Environmental Management Strategies for the Next Millennium
Sabel, Joseph M., Oakland, CA
With the growth of stringent environmental laws in the 1970s, corporate strategies were characterized by ignorance, denial, and, finally, forced acceptance. Regulations expanded faster than corporate culture could adjust. Spurred by public activism, politicians found it essential to "do something" about the environment. And they did, with a vengeance.
By the late 1980s, nearly every company had an environmental program. The stand-alone environmental department was ubiquitous. Expensive, rigid, command and control management was established to reduce liability exposure.
As we enter the twenty-first century, this is simply not good enough. These management structures are significant cost centers. They are unable to effect continuous improvement. They place their organizations at a competitive disadvantage in the global marketplace. To reduce costs, gain efficiency, and continue to reduce exposure, corporations must adjust their cultures. Environmental compliance must become integrated vertically through the entire structure, from CEO to janitor.
These changes will not occur so the marketing department can promote "green" products; they will not happen out of altruism; nor will they occur because they are required, although all of these motivations are real. Businesses will make these changes simply because of the positive impact on the bottom line. Each and every change will have to pass a strict cost-benefit analysis. The result will be faster, better, and cheaper operation. The best companies are already there.
Mn Oxide Concentration as Evidence of a Pathway for Infiltration of Crude Oil Into a Shallow Aquifer, West Texas
Smyth, Rebecca C., The University of Texas at Austin, Bureau of Economic Geology, Austin, TX
In November 1991, landowners near Abilene, Texas, found crude oil in their water well. Subsequent drilling (four cores and 30 borings) defined a plume of crude oil (~300 bbl) floating on shallow, perched groundwater. Data suggest that the oil came from a near-surface leak associated with oil-production activities. Crude oil is present in a thin (0.5 ft), silty sand layer 17.7-19 ft below the surface. Because of water level fluctuation, traces of oil also occur along fractures as deep as 35 ft in two cores collected within the crude-oil plume.
The presence of manganese (Mn) oxide coatings along fracture surfaces might prove to be a record of the path of oil as it infiltrated the subsurface. Mn oxide minerals are concentrated along fracture surfaces to depths of 20 ft in two cores located nearest the suspected crude-oil source. Changes in redox conditions and increased microbial activity associated with the crude oil probably caused dissolution, followed by reprecipitation and concentration of Mn oxides.
Other effects of crude-oil degradation include high unsaturated-zone methane concentrations in a halo around the oil plume. Methane was measured in boreholes at concentrations mainly between 5 and 50%, but locally as high as 98%, at depths of 8-10 ft. The methane is most likely a result of both volatilization and biodegradation of the crude oil. Coincident with the methane plume are zones of high carbon dioxide (as much as 10%) and low oxygen (as little as 1.9%) content.
Division of Professional Affairs/Energy Minerals Division/Division of Environmental Geosciences: Distributed Power in the Oil and Gas Patch
Presiding: J. B. Platt and J. M. Fay
Distributed Generation Using Stranded Gas in a Commodity Electrical Market: Economic Opportunities, Technical and Operational Constraints
Cousino, Dennis, Benham Holway Power Group, Tulsa, OK
The new structure of the electrical power industry has dramatically changed the opportunities available to independent power producers. Gone are the days when the developer and the utility argued, sometimes for years, at a utility commission trying to establish the rates for energy and capacity from a project.
Open access legislation and advances in metering technology and information systems have revolutionized the production, sale, transportation, and purchase of electricity. A commodity market has developed, in which the prices are set by supply and demand, and a producer’s profit is determined by the true cost of production.
Because fuel is the single largest component of delivered power cost, innovative suppliers will be exploring ways to use energy that is priced below the traditional, commercially available sources. Gas that is stranded, unable to be sold through conventional means, has the potential to become a valuable component in the energy supply mix if a few basic principles are followed.
Critical relationships between gas price, generation cost, and power market price are described, as well as relationships between key stakeholders: gas producer, independent power producer, interconnected power utility, and power marketer. Drawing on experience gained in field operation of a stranded gas program, the talk addresses financing mechanisms, contractual mechanisms, and design and operation choices for generation equipment to facilitate transactions.
On-Site Electric Generation Opportunities for Oil and Gas Producers
Mantey, Vern, Mercury Electric Corporation, Calgary, AB, Canada
Deregulation of the electric industry is creating opportunities for oil and gas producers to take more control of their energy costs. On-site electric generation using small (<100 kW) turbine generators provides an alternative to utility connection. The new generation of recuperated mini-turbines has high electric efficiency, medium power density, and inherently high reliability owing to multiple-unit configurations. SCADA compatibility and minimal on-site service requirements minimize operation and maintenance costs. Existing field staff are sufficient for normal day-to-day operation. Small turbines can use flare gas or other low-pressure, slightly "off-spec" gas sources as fuel. Other benefits include less downtime due to electric interruptions and reduction of greenhouse gas emissions when flare gas is used as fuel. Waste heat produced is not usually economic to recover in existing facilities but should be considered in new installations.
There may be cases in which selling energy in the form of electricity rather than gas is a viable alternative. These situations are very case specific and will depend on the jurisdiction, off-site electric prices, and proximity to gas transmission infrastructure. Mini-turbine technology offers opportunities that did not exist previously, but the opportunities will require creativity and effort to realize.
Overview of Small-Scale Electric Generation Technologies, Fuel Requirements, and Costs
O’Sullivan, John, EPRI, Palo Alto, CA
Although field power generation is not new, most applications have greatly exceeded 1 MW. Recent advances in fuel cell and microturbine technology will provide economic power plants in the <300 kW size range. These developments are derivative of research and development efforts focused on vehicular propulsion. Small engines for hybrid electric vehicles and fuel cells for electric vehicles are yielding technology that can be applied to stationary power. The fuels of choice for market entry products are natural gas and propane with some minor emphasis on methanol. At least six fuel cell developers are emphasizing power plants in the 2-5 kW range. The early market products will begin to appear in 1999. The four major microturbine developers are introducing products in the 30-75 kW range. These began to enter the market in late 1998.
These small power plants should see wide application in the oil and gas industry if the claims for efficiency, reliability, and cost are substantiated by field operation. Most systems are designed for operation with pipeline-quality fuels. Available fuels in the field may have quality issues that will negatively impact both fuel cell and turbine operation. The fuel processing subsystems for fuel cells are especially sensitive to sulfur species and to heavy hydrocarbons unless designed for their presence. The microturbines operate most effectively on fuels at 3-4 atm; otherwise, a cost and efficiency penalty is taken for a gas compressor. Developers are projecting costs in the $500-1000/kW range. It remains to be seen whether product orders will provide a manufacturing volume that will meet these cost targets.
On-Site Electric Generation in Energy and Agriculture
Priddy, Ritchie, KN Energy, Lubbock, TX
In a time of uncertainty and little construction of new power plants, distributed generation (DG) has taken on an importance few people foresaw just a few years ago. In reaction to these uncertainties and to transmission constraints, opportunities to tap trapped gas in production fields as fuel for rural DG have risen dramatically. This presentation identifies these opportunities and describes how to overcome obstacles and work with key players (including electric companies).
Another opportunity for DG exists in using financial tools to arbitrage natural gas for electricity on state-owned lands. This power could be used for self-generation in the production field or could be transported to a larger grid for resale.
What were once fierce competitors are becoming close allies. Electric cooperatives in the Panhandle of Texas are summer-peaking because of heavy irrigation loads; natural gas companies are winter-peaking. Under some deregulation scenarios, on-peak power costs for these low-load-factor customers will rise 20-30% over present levels. The co-ops, being nongenerators, will likely see increased ratchet costs (from 60% to 80%) that force them to pay literally millions of dollars per year for power they never take. Strategically located DG assets can help reduce purchased power costs during the summer by shaving peaks and can perhaps be baseloaded. The goal is to optimize efficiencies between the two fuels by leveling load profiles.
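A demand ratchet of the kind mentioned above works roughly as follows; this is a generic sketch, and the tariff structure and numbers are illustrative assumptions, not the co-ops' actual rates:

```python
def billed_demand_kw(actual_peak_kw, history_peaks_kw, ratchet=0.80):
    """Billing demand under a simple ratchet clause: the customer pays for
    at least `ratchet` x the highest peak of the preceding months, even if
    this month's actual peak is far lower."""
    floor = ratchet * max(history_peaks_kw)
    return max(actual_peak_kw, floor)

# Hypothetical summer-peaking irrigation co-op: a 10,000 kW July peak forces
# a winter bill based on 8,000 kW even when only 2,000 kW is actually used.
print(billed_demand_kw(2000, [10000, 9500, 4000], ratchet=0.80))  # 8000.0
```

Shaving the summer peak with DG lowers not only that month's bill but also the ratchet floor carried through the rest of the year, which is the economics behind the strategy described above.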
Division of Professional Affairs/Energy Minerals Division/Division of Environmental Geosciences: Electric Deregulation and Power Fundamentals
Presiding: J. B. Platt and J. M. Fay
Power Pricing—Variations and Volatility
Miller, David A. and Fred James, Pace Resources, Inc., Fairfax, VA
Under conditions of traditional regulation, the wholesale cost of electricity is generally identified with utility lambda—the marginal cost of generation. It is the most expensive unit dispatched to meet demand in any hour that sets the system price for that hour. Therefore, the highest power cost regions are usually those where older oil- and gas-fired plants spend a lot of time on the margin.
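The mechanism is easy to make concrete. In the sketch below (hypothetical units and costs), units are dispatched cheapest-first until demand is met, and the last unit dispatched sets the hourly system price:

```python
def system_lambda(units, demand_mw):
    """Merit-order dispatch: stack units cheapest-first until demand is met;
    the marginal (last-dispatched) unit's cost sets the hourly price.

    units: list of (marginal_cost_per_mwh, capacity_mw) tuples.
    """
    served = 0.0
    for cost, capacity in sorted(units):
        served += capacity
        if served >= demand_mw:
            return cost  # this unit is "on the margin"
    raise ValueError("insufficient capacity for demand")

# Hypothetical stack: nuclear, coal, gas, and an old oil/gas-fired peaker.
stack = [(5.0, 1000), (15.0, 2000), (30.0, 1500), (90.0, 500)]
print(system_lambda(stack, 2500))  # 15.0 -- coal on the margin
print(system_lambda(stack, 4600))  # 90.0 -- the old peaker sets the price
```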
One can develop expectations for future prices by comparing the portfolio of generating plants to forecasts of demand. But in the still-developing competitive wholesale market, electricity prices can reach levels not predictable from these fundamental factors. In the summer of 1998, hot weather, transmission congestion, and intense speculation led to prices in the midwestern United States of over $3,000/MWh, more than 100 times typical prices. As retail electricity markets around the country restructure, this trend toward increased volatility will continue, driven by the need of generators to recover their capital and fixed costs from the competitive market instead of from ratepayers. In comparison, the natural gas market, at its most volatile, may reach peak prices of only two to four times average.
The relationship between gas and electricity prices—also known as the spark spread—is therefore important to resource managers and power plant operators both on an average and seasonal basis. Clearly, there is opportunity but also risk in playing both markets simultaneously.
How the Power Grid Behaves
Overbye, Thomas J., University of Illinois at Urbana-Champaign, Urbana, IL
The nation's electric power grid is an interconnected network of generation groups called control areas. Each control area has the responsibility to serve power to its residential, commercial, and industrial customers, even in the event of unforeseen disturbances. A control area must fulfill this responsibility in both a reliable and a cost-effective manner: reliably, so that its customers don't wind up in the dark, and inexpensively, so that it can remain competitive in an increasingly deregulated marketplace. The high-voltage transmission system ties each control area to its neighbors, enabling the control area to buy and sell power with them. Transacting power with its neighbors helps a control area fulfill its responsibilities.
This presentation explains many of the fundamental issues surrounding the operation of the interconnected power system. It identifies the various components of the system, including the generators, loads, and transmission lines that comprise each control area, and demonstrates how automatic generation control is employed to keep pace with changing power demand. Furthermore, the presentation emphasizes the variety of issues associated with transacting power between areas, identifying the reliability and economic issues and indicators that may shape a control area’s decision to engage in a power transaction. The highly graphical and interactive PowerWorld Simulator software package is used to communicate these lessons clearly and effectively.
Possible Impacts of Electric Restructuring on Gas Use for Power Generation
Platt, Jeremy B., EPRI, Palo Alto, CA; James M. Fay, GRI, Chicago, IL; Stephen L. Thumb and A. Michael Schaal, Energy Ventures Analysis, Inc., Arlington, VA; and Frank C. Graves and Lynda S. Borucki, The Brattle Group, Cambridge, MA
Electric industry restructuring is proceeding rapidly in many states, yet impacts on the generation mix, and thus on fuels used for power generation, are likely to be modest. The principal reason is that the power industry is highly heterogeneous. Generation units have typically been built close to load centers, taking advantage of local fuel economies, whereas transmission transfer capabilities between regions are limited. These conclusions are based on systematic study by EPRI and GRI of regional generation costs and transmission links. The objective has been to ascertain "big picture" effects, such as whether coal may displace more costly natural gas generation. Still, substantial new gas-fired capacity is being added, and in some regions (e.g., New England and Texas) the number of proposed projects is astronomical. Actual growth will be limited by power price feedbacks as well as gas supply and delivery limitations.
While substantial shifts in fuel use from electric industry restructuring alone appear unlikely, numerous “wild cards” affect this conclusion. One is whether restructuring will indeed lead to lower electricity prices. Most important over the long term is the added effect of heightened environmental pressures on coal generation. Impacts on coal and nuclear generation are mixed, with some units becoming more competitive and others retiring and replacement capacity coming from a variety of sources. Quantitative findings on these interrelationships are summarized and compared with results from other studies.
Fundamentals of Electric Deregulation
Yokell, Michael D., Hagler Bailly Consulting, Inc., Boulder, CO
The talk provides an overview of recent regulatory initiatives at the federal and state levels concerning the deregulation of the electric power industry. The effects of these initiatives on the IPP (independent power production) sector are emphasized. The evolution of contracting for the sale of power, from the passage of the Public Utility Regulatory Policies Act in 1978 through the present, is discussed. Also covered is the relationship between new merchant IPPs and the power exchange (a power spot market trading center) in places like California where deregulation is already in place.
© AAPG Division of Environmental Geosciences, 2013