    Climate sensitivity, sea level and atmospheric carbon dioxide


    Abstract

    Cenozoic temperature, sea level and CO2 covariations provide insights into climate sensitivity to external forcings and sea-level sensitivity to climate change. Climate sensitivity depends on the initial climate state, but potentially can be accurately inferred from precise palaeoclimate data. Pleistocene climate oscillations yield a fast-feedback climate sensitivity of 3±1°C for a 4 W m−2 CO2 forcing if Holocene warming relative to the Last Glacial Maximum (LGM) is used as calibration, but the error (uncertainty) is substantial and partly subjective because of poorly defined LGM global temperature and possible human influences in the Holocene. Glacial-to-interglacial climate change leading to the prior (Eemian) interglacial is less ambiguous and implies a sensitivity in the upper part of the above range, i.e. 3–4°C for a 4 W m−2 CO2 forcing. Slow feedbacks, especially change of ice sheet size and atmospheric CO2, amplify the total Earth system sensitivity by an amount that depends on the time scale considered. Ice sheet response time is poorly defined, but we show that the slow response and hysteresis in prevailing ice sheet models are exaggerated. We use a global model, simplified to essential processes, to investigate state dependence of climate sensitivity, finding an increased sensitivity towards warmer climates, as low cloud cover is diminished and increased water vapour elevates the tropopause. Burning all fossil fuels, we conclude, would make most of the planet uninhabitable by humans, thus calling into question strategies that emphasize adaptation to climate change.

    1. Introduction

    Humanity is now the dominant force driving changes in the Earth’s atmospheric composition and climate [1]. The largest climate forcing today, i.e. the greatest imposed perturbation of the planet’s energy balance [1,2], is the human-made increase in atmospheric greenhouse gases (GHGs), especially CO2 from the burning of fossil fuels.

    Earth’s response to climate forcings is slowed by the inertia of the global ocean and the great ice sheets on Greenland and Antarctica, which require centuries, millennia or longer to approach their full response to a climate forcing. This long response time makes the task of avoiding dangerous human alteration of climate particularly difficult, because the human-made climate forcing is being imposed rapidly, with most of the current forcing having been added in just the past several decades. Thus, observed climate changes are only a partial response to the current climate forcing, with further response still ‘in the pipeline’ [3].

    Climate models, numerical climate simulations, provide one way to estimate the climate response to forcings, but it is difficult to include realistically all real-world processes. Earth’s palaeoclimate history allows empirical assessment of climate sensitivity, but the data have large uncertainties. These approaches are usually not fully independent, and the most realistic eventual assessments will be ones combining their greatest strengths.

    We use the rich climate history of the Cenozoic era in the oxygen isotope record of ocean sediments to explore the relation of climate change with sea level and atmospheric CO2, inferring climate sensitivity empirically. We use isotope data from Zachos et al. [4], which are improved over data used in our earlier study [5], and we improve our prescription for separating the effects of deep ocean temperature and ice volume in the oxygen isotope record as well as our prescription for relating deep ocean temperature to surface air temperature. Finally, we use an efficient climate model to expand our estimated climate sensitivities beyond the Cenozoic climate range to snowball Earth and runaway greenhouse conditions.

    2. Overview of Cenozoic climate and our analysis approach

    The Cenozoic era, the past 65.5 million years (Myr), provides a valuable perspective on climate [5,6] and sea-level change [7], and Cenozoic data help clarify our analysis approach. The principal dataset we use is the temporal variation of the oxygen isotope ratio (δ18O, the 18O/16O ratio expressed relative to a standard; figure 1a, right-hand scale) in the shells of deep-ocean-dwelling microscopic shelled animals (foraminifera) in a near-global compilation of ocean sediment cores [4]. δ18O yields an estimate of the deep ocean temperature (figure 1b), as discussed in §3. Note that coarse temporal resolution of δ18O data in the intervals 7–17, 35–42 and 44–65 Myr reduces the apparent amplitude of glacial–interglacial climate fluctuations (see electronic supplementary material, figure S1). We use additional proxy measures of climate change to supplement the δ18O data in our quantitative analyses.

    Figure 1. (a) Global deep ocean δ18O from Zachos et al. [4] and (b) estimated deep ocean temperature based on the prescription in our present paper. Black data points are five-point running means of the original temporal resolution; red and blue curves have a 500 kyr resolution. Coarse temporal sampling reduces the amplitude of glacial–interglacial oscillations in the intervals 7–17, 35–42 and 44–65 Myr BP.
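    The five-point running mean used for the high-resolution curves in figure 1 is straightforward to reproduce. The sketch below is ours, not the authors' code; the endpoint handling (shrinking the window at the edges of the record) is an assumption, and the δ18O values are illustrative only.

```python
import numpy as np

def running_mean(x, window=5):
    """Centred running mean, as used for the black curves in figure 1.

    Near the ends of the record the window shrinks, so the output has
    the same length as the input (an assumption about edge handling).
    """
    x = np.asarray(x, dtype=float)
    half = window // 2
    return np.array([x[max(0, i - half):i + half + 1].mean()
                     for i in range(len(x))])

# Hypothetical benthic d18O samples (per mil), for illustration only.
d18o = [3.1, 3.4, 3.0, 3.6, 3.2, 3.5, 3.3]
smooth = running_mean(d18o, window=5)
```

    The 500 kyr curves in figure 1 would be produced the same way, with the window chosen to span 500 kyr of samples rather than five points.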

    Carbon dioxide is involved in climate change throughout the Cenozoic era, both as a climate forcing and as a climate feedback. Long-term Cenozoic temperature trends, the warming up to about 50 Myr before present (BP) and subsequent long-term cooling, are likely to be, at least in large part, a result of the changing natural source of atmospheric CO2, which is volcanic emissions that occur mainly at continental margins due to plate tectonics (popularly ‘continental drift’); tectonic activity also affects the weathering sink for CO2 by exposing fresh rock. The CO2 tectonic source grew from 60 to 50 Myr BP as India subducted carbonate-rich ocean crust while moving through the present Indian Ocean prior to its collision with Asia about 50 Myr BP [8], causing atmospheric CO2 to reach levels of the order of 1000 ppm at 50 Myr BP [9]. Since then, atmospheric CO2 declined as the Indian and Atlantic Oceans have been major depocentres for carbonate and organic sediments while subduction of carbonate-rich crust has been limited mainly to small regions near Indonesia and Central America [10], thus allowing CO2 to decline to levels as low as 170 ppm during recent glacial periods [11]. A climate forcing due to a CO2 change from 1000 to 170 ppm is more than 10 W m−2, which compares with forcings of the order of 1 W m−2 for competing climate forcings during the Cenozoic era [5], specifically long-term change of solar irradiance and change of planetary albedo (reflectance) owing to the overall minor displacement of continents in that era.
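    The quoted forcing for the CO2 decline from about 1000 ppm to 170 ppm can be checked with a simple logarithmic fit. The sketch below uses the fit f(c) = 5.04 ln(c + 0.0005c²) from earlier work by the same group (Hansen et al. 2000); whether that exact expression underlies the figure quoted in the text is our assumption.

```python
import math

def co2_forcing(c, c0):
    """Approximate CO2 climate forcing (W m-2) for a change from
    concentration c0 to c (both in ppm), using the fit
    f(c) = 5.04 ln(c + 0.0005 c^2) -- an assumed choice of formula."""
    f = lambda x: 5.04 * math.log(x + 0.0005 * x * x)
    return f(c) - f(c0)

# Cenozoic decline from ~1000 ppm (50 Myr BP) to ~170 ppm (glacial minimum)
delta_f = co2_forcing(170.0, c0=1000.0)   # magnitude slightly above 10 W m-2
```

    This yields a forcing magnitude slightly greater than 10 W m−2, consistent with the "more than 10 W m−2" quoted above and an order of magnitude larger than the competing Cenozoic forcings of the order of 1 W m−2.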

    Superimposed on the long-term trends are occasional global warming spikes, ‘hyperthermals’, most prominently the Palaeocene–Eocene Thermal Maximum (PETM) at approximately 56 Myr BP [12] and the Mid-Eocene Climatic Optimum at approximately 42 Myr BP [13], coincident with large temporary increases of atmospheric CO2. The most studied hyperthermal, the PETM, caused global warming of at least 5°C coincident with injection of a likely 4000–7000 Gt of isotopically light carbon into the atmosphere and ocean [14]. The size of the carbon injection is estimated from changes in the stable carbon isotope ratio 13C/12C in sediments and from ocean acidification implied by changes in the ocean depth below which carbonate dissolution occurred.

    The potential carbon source for hyperthermal warming that received most initial attention was methane hydrates on continental shelves, which could be destabilized by sea floor warming [15]. Alternative sources include release of carbon from Antarctic permafrost and peat [16]. Regardless of the carbon source(s), it has been shown that the hyperthermals were astronomically paced, spurred by coincident maxima in the Earth’s orbit eccentricity and spin axis tilt [17], which increased high-latitude insolation and warming. The PETM was followed by successively weaker astronomically paced hyperthermals, suggesting that the carbon source(s) partially recharged in the interim [18]. A high temporal resolution sediment core from the New Jersey continental shelf [19] reveals that PETM warming in at least that region began about 3000 years prior to a massive release of isotopically light carbon. This lag and climate simulations [20] that produce large warming at intermediate ocean depths in response to initial surface warming are consistent with the concept of a methane hydrate role in hyperthermal events.

    The hyperthermals confirm understanding about the long recovery time of the Earth’s carbon cycle [21] and reveal the potential for threshold or ‘tipping point’ behaviour with large amplifying climate feedback in response to warming [22]. One implication is that if humans burn most of the fossil fuels, thus injecting into the atmosphere an amount of CO2 at least comparable to that injected during the PETM, the CO2 would stay in the surface carbon reservoirs (atmosphere, ocean, soil, biosphere) for tens of thousands of years, long enough for the atmosphere, ocean and ice sheets to fully respond to the changed atmospheric composition. In addition, there is the potential that global warming from fossil fuel CO2 could spur release of CH4 and CO2 from methane hydrates or permafrost. Carbon release during the hyperthermals required several thousand years, but that long injection time may have been a function of the pace of the astronomical forcing, which is much slower than the pace of fossil fuel burning.

    The Cenozoic record also reveals the amplification of climate change that occurs with growth or decay of ice sheets, as is apparent at about 34 Myr BP when the Earth became cool enough for large-scale glaciation of Antarctica and in the most recent 3–5 Myr with the growth of Northern Hemisphere ice sheets. Global climate fluctuated in the 20 Myr following Antarctic glaciation with warmth during the Mid-Miocene Climatic Optimum (MMCO, 15 Myr BP) possibly comparable to that at 34 Myr BP, as, for example, Germany became warm enough to harbour snakes and crocodiles that require an annual temperature of about 20°C or higher and a winter temperature of more than 10°C [23]. Antarctic vegetation in the MMCO implies a summer temperature approximately 11°C warmer than today [24] and annual sea surface temperatures ranging from 0°C to 11.5°C [25].

    Superimposed on the long-term trends, in addition to occasional hyperthermals, are continual high-frequency temperature oscillations, which are apparent in figure 1 after 34 Myr BP, when the Earth became cold enough for a large ice sheet to form on Antarctica, and are still more prominent during ice sheet growth in the Northern Hemisphere. These climate oscillations have dominant periodicities, ranging from about 20 to 400 kyr, that coincide with variations in the Earth’s orbital elements [26], specifically the tilt of the Earth’s spin axis, the eccentricity of the orbit and the time of year when the Earth is closest to the Sun. The slowly changing orbit and tilt of the spin axis affect the seasonal distribution of insolation [27], and thus the growth and decay of ice sheets, as proposed by Milankovitch [28]. Atmospheric CO2, CH4 and N2O have varied almost synchronously with global temperature during the past 800 000 years for which precise data are available from ice cores, the GHGs providing an amplifying feedback that magnifies the climate change instigated by orbit perturbations [29–31].

    Ocean and atmosphere dynamical effects have been suggested as possible causes of some climate change within the Cenozoic era; for example, topographical effects of mountain building [32], closing of the Panama Seaway [33] or opening of the Drake Passage [34]. Climate modelling studies with orographic changes confirm significant effects on monsoons and on Eurasian temperature [35]. Modelling studies indicate that closing of the Panama Seaway results in a more intense Atlantic thermohaline circulation, but only small effects on Northern Hemisphere ice sheets [36]. Opening of the Drake Passage surely affected ocean circulation around Antarctica, but efforts to find a significant effect on global temperature have relied on speculation about possible effects on atmospheric CO2 [37]. Overall, there is no strong evidence that dynamical effects are a major direct contributor to Cenozoic global temperature change.

    We hypothesize that the global climate variations of the Cenozoic (figure 1) can be understood and analysed via slow temporal changes in Earth’s energy balance, which is a function of solar irradiance, atmospheric composition (specifically long-lived GHGs) and planetary surface albedo. Using measured amounts of GHGs during the past 800 000 years of glacial–interglacial climate oscillations and surface albedo inferred from sea-level data, we show that a single empirical ‘fast-feedback’ climate sensitivity can account well for the global temperature change over that range of climate states. Over a sufficiently large range of climates, however, sensitivity must become a strong function of the climate state itself, and thus we use a simplified climate model to investigate that state dependence. Finally, we use our estimated state-dependent climate sensitivity to infer Cenozoic CO2 change and compare this with proxy CO2 data, focusing on the Eocene climatic optimum, the Oligocene glaciation, the Miocene optimum and the Pliocene.

    3. Deep ocean temperature and sea level in the Cenozoic era

    The δ18O stable isotope ratio was the first palaeothermometer, proposed by Urey [38] and developed especially by Emiliani [39]. There are now several alternative proxy measures of ancient climate change, but the δ18O data (figure 1a) of Zachos et al. [4], a composite of global ocean sediment cores, are well suited for our purpose because they cover the Cenozoic era with good temporal resolution. There are large, even dominant, non-climatic causes of δ18O changes over hundreds of millions of years [40], but non-climatic change may be small in the past few hundred million years [41] and is generally neglected in Cenozoic climate studies. The principal difficulty in using the δ18O record to estimate global deep ocean temperature, in the absence of non-climatic change, is that δ18O is affected by the global ice mass as well as the deep ocean temperature.

    We make a simple estimate of global sea-level change for the Cenozoic era using the near-global δ18O compilation of Zachos et al. [4]. More elaborate and accurate approaches, including use of models, will surely be devised, but comparison of our result with other approaches is instructive regarding basic issues such as the vulnerability of today’s ice sheets to near-term global warming and the magnitude of hysteresis effects in ice sheet growth and decay.

    During the Early Cenozoic, between 65.5 and 35 Myr BP, the Earth was so warm that there was little ice on the planet and the deep ocean temperature is approximated by [6]

    Tdo (°C) = −4 δ18O + 12.    (3.1)

    Hansen et al. [5] made the approximation that, as the Earth became colder and continental ice sheets grew, further increase in δ18O was due, in equal parts, to deep ocean temperature change and ice mass change,

    Tdo (°C) = 5 − 2 (δ18O − 1.75).    (3.2)

    Equal division of the δ18O change into temperature change and ice volume change was suggested by comparing δ18O at the endpoints of the climate change from the nearly ice-free planet at 35 Myr BP (when δ18O ≈ 1.75) with the Last Glacial Maximum (LGM), which peaked approximately 20 kyr BP. The change of δ18O between these two extreme climate states (approx. 3) is twice the change of δ18O due to temperature change alone (approx. 1.5), with the temperature change based on the linear relation (3.1) and estimates of Tdo ∼ 5°C at 35 Myr BP (figure 1) and approximately −1°C at the LGM [42].

    This approximation can easily be made more realistic. Although ice volume and deep ocean temperature changes contributed comparable amounts to δ18O change on average over the full range from 35 Myr to 20 kyr BP, the temperature change portion of the δ18O change must decrease as the deep ocean temperature approaches the freezing point [43]. The rapid increase in δ18O in the past few million years was associated with the appearance of Northern Hemisphere ice sheets, symbolized by the dark blue bar in figure 1a.

    The sea-level change between the LGM and Holocene was approximately 120 m [44,45]. Thus, two-thirds of the 180 m sea-level change between the ice-free planet and the LGM occurred with formation of Northern Hemisphere ice (and probably some increased volume of Antarctic ice). Thus, rather than taking the 180 m sea-level change between the nearly ice-free planet of 34 Myr BP and the LGM as being linear over the entire range (with 90 m for δ18O<3.25 and 90 m for δ18O>3.25), it is more realistic to assign 60 m of sea-level change to δ18O 1.75–3.25 and 120 m to δ18O>3.25. The total deep ocean temperature change of 6°C for the change of δ18O from 1.75 to 4.75 is then divided two-thirds (4°C) for the δ18O range 1.75–3.25 and 2°C for the δ18O range 3.25–4.75. Algebraically,

    SL (m) = 60 − 40 (δ18O − 1.75)    for 1.75 ≤ δ18O ≤ 3.25,    (3.3)

    SL (m) = −120 (δ18O − 3.25)/1.65    for δ18O > 3.25,    (3.4)

    Tdo (°C) = 5 − 4 (δ18O − 1.75)/1.5    for 1.75 ≤ δ18O ≤ 3.25    (3.5)

    and

    Tdo (°C) = 1 − 2 (δ18O − 3.25)/1.65    for δ18O > 3.25,    (3.6)

    where SL is the sea level and its zero point is the Late Holocene level. The coefficients in equations (3.4) and (3.6) account for the fact that the mean LGM value of δ18O is approximately 4.9. The resulting deep ocean temperature is shown in figure 1b for the full Cenozoic era.
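    The two-legged prescription of equations (3.3)–(3.6) is simple enough to state as code. The sketch below is ours (function names are hypothetical); it implements the piecewise-linear relations with the endpoint values given in the text: +60 m and 5°C at δ18O = 1.75, 0 m and 1°C at δ18O = 3.25, and −120 m and −1°C at the mean LGM value δ18O ≈ 4.9.

```python
def sea_level(d18o):
    """Sea level (m, zero at Late Holocene) from benthic d18O (per mil),
    following equations (3.3)-(3.4): +60 m for the nearly ice-free planet
    (d18O = 1.75), 0 m at d18O = 3.25, -120 m at the LGM (d18O ~ 4.9)."""
    if d18o <= 3.25:
        return 60.0 - 40.0 * (d18o - 1.75)       # eq. (3.3): 60 m over 1.5 units
    return -120.0 * (d18o - 3.25) / 1.65         # eq. (3.4): 120 m over 1.65 units

def deep_ocean_temperature(d18o):
    """Deep ocean temperature Tdo (deg C) from equations (3.5)-(3.6):
    5 C at d18O = 1.75, 1 C at 3.25, -1 C at the LGM value ~4.9."""
    if d18o <= 3.25:
        return 5.0 - 4.0 * (d18o - 1.75) / 1.5   # eq. (3.5): 4 C over 1.5 units
    return 1.0 - 2.0 * (d18o - 3.25) / 1.65      # eq. (3.6): 2 C over 1.65 units
```

    Applied point by point to the Zachos et al. [4] compilation, these two functions reproduce the blue sea-level curves of figure 2 and the deep ocean temperatures of figures 1b and 3.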

    Sea level from equations (3.3) and (3.4) is shown by the blue curves in figure 2, including comparison (figure 2c) with the Late Pleistocene sea-level record of Rohling et al. [47], which is based on analysis of Red Sea sediments, and comparison (figure 2b) with the sea-level chronology of de Boer et al. [46], which is based on ice sheet modelling with the δ18O data of Zachos et al. [4] as a principal input driving the ice sheet model. Comparison of our result with that of de Boer et al. [46] for the other periods of figure 2 is included in the electronic supplementary material, where we also make available our numerical data. Deep ocean temperature from equations (3.5) and (3.6) is shown for the Pliocene and Pleistocene in figure 3 and for the entire Cenozoic era in figure 1.

    Figure 2. (a–c) Sea level from equations (3.3) and (3.4) using δ18O data of Zachos et al. [4], compared in (b) with ice sheet model results of de Boer et al. [46] and in (c) with the sea-level analysis of Rohling et al. [47].

    Figure 3. Deep ocean temperature in (a) the Pliocene and Pleistocene and (b) the last 800 000 years. High-frequency variations (black) are five-point running means of the original data [4], whereas the blue curve has a 500 kyr resolution. The deep ocean temperature for the entire Cenozoic era is in figure 1b.

    Differences between our inferred sea-level chronology and that from the ice sheet model [46] are relevant to the assessment of the potential danger to humanity from future sea-level rise. Our estimated sea levels have reached +5 to 10 m above the present sea level during recent interglacial periods that were barely warmer than the Holocene, whereas the ice sheet model yields maxima at most approximately 1 m above the current sea level. We find the Pliocene sea level varying between about +20 m and −50 m, with the Early Pliocene averaging about +15 m; the ice sheet model has a less variable sea level with the Early Pliocene averaging about +8 m. A 15 m sea-level rise implies that the East Antarctic ice sheet as well as West Antarctica and Greenland ice were unstable at a global temperature no higher than those projected to occur this century [1,48].

    How can we interpret these differences, and what is the merit of our simple δ18O scaling? Ice sheet models constrained by multiple observations may eventually provide our best estimate of sea-level change, but as yet models are primitive. Hansen [49,50] argues that real ice sheets are more responsive to climate change than is found in most ice sheet models. Our simple scaling approximation implicitly assumes that ice sheets are sufficiently responsive to climate change that hysteresis is not a dominant effect; in other words, ice volume on millennial time scales is a function of temperature and does not depend much on whether the Earth is in a warming or cooling phase. Thus, our simple transparent calculation may provide a useful comparison with geological data for sea-level change and with results of ice sheet models.

    We cannot a priori define accurately the error in our sea-level estimates, but we can compare with geological data in specific cases as a check on reasonableness. Our results (figure 2) yield two instances in the past million years when sea levels have reached heights well above the current sea level: +9.8 m in the Eemian (approx. 120 kyr BP, also known as Marine Isotope Stage 5e or MIS-5e) and +7.1 m in the Holsteinian (approx. 400 kyr BP, also known as MIS-11). Indeed, these are the two interglacial periods in the Late Pleistocene that traditional geological methods identify as probably having a sea level exceeding that in the Holocene. Geological evidence, mainly coral reefs on tectonically stable coasts, was described in the review of Overpeck et al. [51] as favouring an Eemian maximum of +4 to more than 6 m. Rohling et al. [52] cite many studies concluding that the mean sea level was 4–6 m above the current sea level during the warmest portion of the Eemian, 123–119 kyr BP; several of these studies suggest Eemian sea-level fluctuations of up to +10 m, and Rohling et al. themselves provide the first continuous sea-level data supporting rapid Eemian sea-level fluctuations. Kopp et al. [53] made a statistical analysis of data from a large number of sites, concluding that there was a 95% probability that the Eemian sea level reached at least +6.6 m with a 67% probability that it exceeded 8 m.

    The Holsteinian sea level is more difficult to reconstruct from geological data because of its age, and there has been a long-standing controversy concerning a substantial body of geological shoreline evidence for a +20 m Late Holsteinian sea level that Hearty and co-workers have found on numerous sites [54,55] (numerous pros and cons are contained in the references provided in our present paragraph). Rohling et al. [56] note that their temporally continuous Red Sea record ‘strongly supports the MIS-11 sea level review of Bowen [57], which also places MIS-11 sea level within uncertainties at the present-day level’. This issue is important because both ice core data [29] and ocean sediment core data (see below) indicate that the Holsteinian period was only moderately warmer than the Holocene with similar Earth orbital parameters. We suggest that the resolution of this issue is consistent with our estimate of the approximately +7 m Holsteinian global sea level, and is provided by Raymo & Mitrovica [58], who pointed out the need to make a glacial isostatic adjustment (GIA) correction for post-glacial crustal subsidence at the places where Hearty and others deduced local sea-level change. The uncertainties in GIA modelling led Raymo & Mitrovica [58] to conclude that the peak Holsteinian global sea level was in the range of +6 to 13 m relative to the present. Thus, it seems to us, there is a reasonable resolution of the long-standing Holsteinian controversy, with substantial implications for humanity, as discussed in later sections.

    We now address differences between our sea-level estimates and those from ice sheet models. We refer to both the one-dimensional ice sheet modelling of de Boer et al. [46], which was used to calculate sea level for the entire Cenozoic era, and the three-dimensional ice sheet model of Bintanja et al. [59], which was used for simulations of the past million years. The differences most relevant to humanity occur in the interglacial periods slightly warmer than the Holocene, including the Eemian and Holsteinian, as well as the Pliocene, which may have been as warm as projected for later this century. Both the three-dimensional model of Bintanja et al. [59] and the one-dimensional model of de Boer et al. [46] yield maximum Eemian and Holsteinian sea levels of approximately 1 m relative to the Holocene. de Boer et al. [46] obtain approximately +8 m for the Early Pliocene, which compares with our approximately +15 m.

    These differences reveal that the modelled ice sheets are less responsive to global temperature variation than our δ18O analysis implies. Yet the ice sheet models do a good job of reproducing the sea-level change for climates colder than the Holocene, as shown in figure 2 and electronic supplementary material, figure S2. One possibility is that the ice sheet models are too lethargic for climates warmer than the Holocene. Hansen & Sato [60] point out the sudden change in the responsiveness of the ice sheet model of Bintanja et al. [59] when the sea level reaches today’s level (figs 3 and 4 of Hansen & Sato [60]) and they note that the empirical sea-level data provide no evidence of such a sudden change. The explanation conceivably lies in the fact that the models have many parameters and their operation includes the use of ‘targets’ [46] that affect the model results, because these choices might yield different results for warmer climates than for colder climates. Because of the potential that model development choices might be influenced by expectations of a ‘correct’ result, it is useful to have estimates independent of the models based on alternative assumptions.

    Note that our approach also involves ‘targets’ based on expected behaviour, albeit simple transparent ones. Our two-legged linear approximation of the sea level (equations (3.3) and (3.4)) assumes that the sea level in the LGM was 120 m lower than today and that the sea level was 60 m higher than today 35 Myr BP. This latter assumption may need to be adjusted if glaciers and ice caps in the Eocene had a volume of tens of metres of sea level. However, Miller et al. [61] conclude that there was a sea level fall of approximately 55 m at the Eocene–Oligocene transition, consistent with our assumption that Eocene ice probably did not contain more than approximately 10 m of sea level.

    Real-world data for the Earth’s sea-level history ultimately must provide assessment of sea-level sensitivity to climate change. A recent comprehensive review [7] reveals that there are still wide uncertainties about the Earth’s sea-level history that are especially large for time scales of tens of millions of years or longer, which is long enough for substantial changes in the shape and volume of ocean basins. Gasson et al. [7] plot regional (New Jersey) sea level (their fig. 14) against the deep ocean temperature inferred from the magnesium/calcium ratio (Mg/Ca) of deep ocean foraminifera [62], finding evidence for a nonlinear sea-level response to temperature roughly consistent with the modelling of de Boer et al. [46]. Sea-level change is limited for Mg/Ca temperatures up to about 5°C above current values, whereupon a rather abrupt sea-level rise of several tens of metres occurs, presumably representing the loss of Antarctic ice. However, the uncertainty in the reconstructed sea level is tens of metres and the uncertainty in the Mg/Ca temperature is sufficient to encompass the result from our δ18O prescription, which has comparable contributions of ice volume change and deep ocean temperature change at the Late Eocene glaciation of Antarctica.

    Furthermore, the potential sea-level rise of most practical importance is the first 15 m above the Holocene level. It is such ‘moderate’ sea-level change for which we particularly question the projections implied by current ice sheet models. Empirical assessment depends upon real-world sea-level data in periods warmer than the Holocene. There is strong evidence, discussed above, that the sea level was several metres higher in recent warm interglacial periods, consistent with our data interpretation. The Pliocene provides data extension to still warmer climates. Our interpretation of δ18O data suggests that Early Pliocene sea-level change (due to ice volume change) reached about +15 m, and it also indicates sea-level fluctuations as large as 20–40 m. Sea-level data for Mid-Pliocene warm periods, of comparable warmth to average Early Pliocene conditions (figure 3), suggest sea heights as great as +15–25 m [63,64]. Miller et al. [61] find a Pliocene sea-level maximum of 22±10 m (95% confidence). GIA creates uncertainty in sea-level reconstructions based on shoreline geological data [65], which could be reduced via appropriately distributed field studies. Dwyer & Chandler [64] separate Pliocene ice volume and temperature in deep ocean δ18O via ostracode Mg/Ca temperatures, finding sea-level maxima and oscillations comparable to our results. Altogether, the empirical data provide strong evidence against the lethargy and strong hysteresis effects of at least some ice sheet models.

    4. Surface air temperature change

    The temperature of most interest to humanity is the surface air temperature. A record of past global surface temperature is required for empirical inference of global climate sensitivity. Given that climate sensitivity can depend on the initial climate state and on the magnitude and sign of the climate forcing, a continuous record of global temperature over a wide range of climate states would be especially useful. Because of the singularly rich climate story in Cenozoic deep ocean δ18O (figure 1), unrivalled in detail and self-consistency by alternative climate proxies, we use deep ocean δ18O to provide the fine structure of Cenozoic temperature change. We use surface temperature proxies from the LGM, the Pliocene and the Eocene to calibrate and check the relation between deep ocean and surface temperature change.

    The temperature signal in deep ocean δ18O refers to the sea surface where cold dense water formed and sank to the ocean bottom, the principal location of deep water formation being the Southern Ocean. Empirical data and climate models concur that surface temperature change is generally amplified at high latitudes, which tends to make temperature change at the site of deep water formation an overestimate of global temperature change. Empirical data and climate models also concur that surface temperature change is amplified over land areas, which tends to make temperature change at the site of deep water formation an underestimate of global temperature change. Hansen et al. [5] and Hansen & Sato [60] noted that these two factors were substantially offsetting, and thus they made the assumption that benthic foraminifera provide a good approximation of global mean temperature change for most of the Cenozoic era.

    However, this approximation breaks down in the Late Cenozoic for two reasons. First, the deep ocean and high-latitude surface ocean where deep water forms are approaching the freezing point in the Late Cenozoic. As the Earth’s surface cools further, cold conditions spread to lower latitudes but polar surface water and the deep ocean cannot become much colder, and thus the benthic foraminifera record a temperature change smaller than the global average surface temperature change [43]. Second, the last 5.33 Myr of the Cenozoic, the Pliocene and Pleistocene, was the time that global cooling reached a degree such that large ice sheets could form in the Northern Hemisphere. When a climate forcing, or a slow climate feedback such as ice sheet formation, occurs in one hemisphere, the temperature change is much larger in the hemisphere with the forcing (cf. examples in Hansen et al. [66]). Thus, cooling during the last 5.33 Myr in the Southern Ocean site of deep water formation was smaller than the global average cooling.

    We especially want our global surface temperature reconstruction to be accurate for the Pliocene and Pleistocene because the global temperature changes that are expected by the end of this century, if humanity continues to rapidly change atmospheric composition, are of a magnitude comparable to climate change in those epochs [1,48]. Fortunately, sufficient information is available on surface temperature change in the Pliocene and Pleistocene to allow us to scale the deep ocean temperature change by appropriate factors, thus retaining the temporal variations in the δ18O while also having a realistic magnitude for the total temperature change over these epochs.

    Pliocene temperature is known quite well because of a long-term effort to reconstruct the climate conditions during the Mid-Pliocene warm period (3.29–2.97 Myr BP) and a coordinated effort to numerically simulate the climate by many modelling groups ([67] and papers referenced therein). The reconstructed Pliocene climate used data for the warmest conditions found in the Mid-Pliocene period, which would be similar to average conditions in the Early Pliocene (figure 3). These boundary conditions were used by eight modelling groups to simulate Pliocene climate with atmospheric general circulation models. Although atmosphere–ocean models have difficulty replicating Pliocene climate, atmospheric models forced by specified surface boundary conditions are expected to be capable of calculating global surface temperature with reasonable accuracy. The eight global models yield Pliocene global warming of 3±1°C relative to the Holocene [68]. This Pliocene warming is an amplification by a factor of 2.5 of the deep ocean temperature change.

    Similarly, for the reasons given above, the deep ocean temperature change of 2.25°C between the Holocene and the LGM is surely an underestimate of the surface air temperature change. Unfortunately, there is a wide range of estimates for LGM cooling, approximately 3–6°C, as discussed in §6. We take the mid-range value of 4.5°C as our best estimate for LGM cooling, implying an amplification of surface temperature change by a factor of two relative to deep ocean temperature change for this climate interval.

    We obtain an absolute temperature scale using the Jones et al. [69] estimate of 14°C as the global mean surface temperature for 1961–1990, which corresponds to approximately 13.9°C for the 1951–1980 base period that we normally use [70] and approximately 14.4°C for the first decade of the twenty-first century. We attach the instrumental temperature record to the palaeo data by assuming that the first decade of the twenty-first century exceeds the Holocene mean by 0.25±0.25°C. Global temperature probably declined over the past several millennia [71], but we suggest that warming of the past century has brought global temperature to a level that now slightly exceeds the Holocene mean, judging from sea-level trends and ice sheet mass loss. Sea level is now rising 3.1 mm per year or 3.1 m per millennium [72], an order of magnitude faster than the rate during the past several thousand years, and Greenland and Antarctica are losing mass at accelerating rates [73,74]. Our assumption that global temperature passed the Holocene mean a few decades ago is consistent with the rapid change of ice sheet mass balance in the past few decades [75]. The above concatenation of instrumental and palaeo records yields a Holocene mean of 14.15°C and Holocene maximum (from five-point smoothed δ18O) of 14.3°C at 8.6 kyr BP.

    Given a Holocene temperature of 14.15°C and LGM cooling of 4.5°C, the Early Pliocene mean temperature 3°C warmer than the Holocene leads to the following prescription:

    Ts = 14.15°C + 2×ΔTdo for ΔTdo < 0,   (4.1)

    Ts = 14.15°C + 2.5×ΔTdo for ΔTdo > 0,   (4.2)

    where ΔTdo is the deep ocean temperature change relative to the Holocene mean. This prescription yields a maximum Eemian temperature of 15.56°C, which is approximately 1.4°C warmer than the Holocene mean and approximately 1.8°C warmer than the 1880–1920 mean. Clark & Huybers [76] fit a polynomial to proxy temperatures for the Eemian, finding warming as much as +5°C at high northern latitudes but global warming of +1.7°C ‘relative to the present interglacial before industrialization’. Other analyses of Eemian data find global sea surface temperature warmer than the Late Holocene by 0.7±0.6°C [77] and all-surface warming of 2°C [78], all in reasonable accord with our prescription.
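
    The piecewise prescription above can be sketched numerically; this is an illustration only, using the 14.15°C Holocene mean and the amplification factors of 2 (colder than the Holocene) and 2.5 (warmer) given in the text.

```python
def surface_temperature(dT_do):
    """Global surface temperature (deg C) from deep-ocean temperature
    change dT_do (deg C, relative to the Holocene mean), using the
    piecewise amplification factors quoted in the text."""
    T_HOLOCENE = 14.15                    # Holocene mean surface temperature
    factor = 2.0 if dT_do < 0 else 2.5    # LGM-side vs Pliocene-side scaling
    return T_HOLOCENE + factor * dT_do

# LGM: deep-ocean cooling of 2.25 deg C gives 4.5 deg C of surface cooling.
print(surface_temperature(-2.25))
# Early Pliocene: deep-ocean warming of 1.2 deg C gives +3 deg C, i.e. 17.15.
print(surface_temperature(1.2))
```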

    Our first estimate of global temperature for the remainder of the Cenozoic assumes that ΔTs=ΔTdo prior to 5.33 Myr BP, i.e. prior to the Plio-Pleistocene, which yields a peak Ts of approximately 28°C at 50 Myr BP (figure 4). This is at the low end of the range of current multi-proxy measures of sea surface temperature for the Early Eocene Climatic Optimum (EECO) [79–81]. Climate models are marginally able to reproduce this level of Eocene warmth, but the models require extraordinarily high CO2 levels, for example 2240–4480 ppm [82] and 2500–6500 ppm [83], and the quasi-agreement between data and models requires an assumption that some of the proxy temperatures are biased towards summer values. Moreover, taking the proxy sea surface temperature data for the peak Eocene period (55–48 Myr BP) at face value yields a global temperature of 33–34°C (fig. 3 of Bijl et al. [84]), which would require an even larger CO2 amount with the same climate models. Thus, below we also consider the implications for climate sensitivity of an assumption that ΔTs=1.5×ΔTdo prior to 5.33 Myr BP, which yields Ts approximately 33°C at 50 Myr BP (see electronic supplementary material, figure S3).

    Figure 4. (a–c) Surface temperature estimate for the past 65.5 Myr, including an expanded time scale for (b) the Pliocene and Pleistocene and (c) the past 800 000 years. The red curve has a 500 kyr resolution. Data for this and other figures are available in the electronic supplementary material.

    5. Climate sensitivity

    Climate sensitivity (S) is the equilibrium global surface temperature change (ΔTeq) in response to a specified unit forcing (F) after the planet has come back to energy balance:

    S = ΔTeq/F,   (5.1)

    i.e. climate sensitivity is the eventual (equilibrium) global temperature change per unit forcing. Climate sensitivity depends upon climate feedbacks, the many physical processes that come into play as climate changes in response to a forcing. Positive (amplifying) feedbacks increase the climate response, whereas negative (diminishing) feedbacks reduce the response.
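
    As a numerical illustration of equation (5.1) rearranged, using the fast-feedback sensitivity of 3/4°C per W m−2 adopted later in this paper:

```python
def equilibrium_warming(sensitivity, forcing):
    """Equilibrium global temperature change dT_eq = S * F, i.e.
    equation (5.1) rearranged; sensitivity in deg C per W/m^2,
    forcing in W/m^2."""
    return sensitivity * forcing

# 3/4 deg C per W/m^2 applied to the standard 4 W/m^2 CO2 forcing
# gives the canonical ~3 deg C fast-feedback response.
print(equilibrium_warming(0.75, 4.0))
```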

    We usually discuss climate sensitivity in terms of a global mean temperature response to a 4 W m−2 CO2 forcing. One merit of this standard forcing is that its magnitude is similar to an anticipated near-term human-made climate forcing, thus avoiding the need to continually scale the unit sensitivity to achieve an applicable magnitude. A second merit is that the efficacy of forcings varies from one forcing mechanism to another [66]; so it is useful to use the forcing mechanism of greatest interest. Finally, the 4 W m−2 CO2 forcing avoids the uncertainty in the exact magnitude of a doubled CO2 forcing: IPCC [1,48] estimates 3.7 W m−2 for doubled CO2, whereas Hansen et al. [66] obtain 4.1 W m−2. It also avoids problems associated with the fact that a doubled CO2 forcing varies as the CO2 amount changes (the assumption that each CO2 doubling has the same forcing is meant to approximate the effect of CO2 absorption line saturation, but actually the forcing per doubling increases as CO2 increases [66,85]).

    Climate feedbacks are the core of the climate problem. Climate feedbacks can be confusing, because in climate analyses what is sometimes a climate forcing is at other times a climate feedback. A CO2 decrease from, say, approximately 1000 ppm in the Early Cenozoic to 170–300 ppm in the Pleistocene, caused by shifting plate tectonics, is a climate forcing, a perturbation of the Earth’s energy balance that alters the temperature. Glacial–interglacial oscillations of the CO2 amount and ice sheet size are both slow climate feedbacks, because glacial–interglacial climate oscillations largely are instigated by insolation changes as the Earth’s orbit and tilt of its spin axis change, with the climate change then amplified by a nearly coincident change of the CO2 amount and the surface albedo. However, for the sake of analysis, we can also choose and compare periods that are in quasi-equilibrium, periods during which there was little change of the ice sheet size or the GHG amount. For example, we can compare conditions averaged over several millennia in the LGM with mean Holocene conditions. The Earth’s average energy imbalance within each of these periods had to be a small fraction of 1 W m−2. Such a planetary energy imbalance is very small compared with the boundary condition ‘forcings’, such as changed GHG amount and changed surface albedo that maintain the glacial-to-interglacial climate change.

    (a) Fast-feedback sensitivity: Last Glacial Maximum–Holocene

    The average fast-feedback climate sensitivity over the LGM–Holocene range of climate states can be assessed by comparing estimated global temperature change and climate forcing change between those two climate states [3,86]. The appropriate climate forcings are the changes in long-lived GHGs and surface properties on the planet. Fast feedbacks include water vapour, clouds, aerosols and sea ice changes.

    This fast-feedback sensitivity is relevant to estimating the climate impact of human-made climate forcings, because the size of ice sheets is not expected to change significantly in decades or even in a century and GHGs can be specified as a forcing. GHGs change in response to climate change, but it is common to include these feedbacks as part of the climate forcing by using observed GHG changes for the past and calculated GHGs for the future, with calculated amounts based on carbon cycle and atmospheric chemistry models.

    Climate forcings due to past changes in GHGs and surface albedo can be computed for the past 800 000 years using data from polar ice cores and ocean sediment cores. We use CO2 [87] and CH4 [88] data from Antarctic ice cores (figure 5a) to calculate an effective GHG forcing as follows:

    Fe = 1.12×[Fa(CO2) + 1.4×Fa(CH4)],   (5.2)

    where Fa is the adjusted forcing, i.e. the planetary energy imbalance due to the GHG change after the stratospheric temperature has had time to adjust to the gas change. Fe, the effective forcing, accounts for the variable efficacies of different climate forcings [66]. Formulae for Fa of each gas are given by Hansen et al. [89]. The factor 1.4 converts the adjusted forcing of CH4 to its effective forcing, Fe, which is greater than Fa mainly because of the effect of CH4 on tropospheric ozone and stratospheric water vapour [66]. The factor 1.12 approximates the forcing by N2O changes, which are not as well preserved in the ice cores but have a strong positive correlation with CO2 and CH4 changes [90]. The factor 1.12 is smaller than the 1.15 used by Hansen et al. [91], and is consistent with estimates of the N2O forcing in the current Goddard Institute for Space Studies (GISS) radiation code and that of the Intergovernmental Panel on Climate Change (IPCC) [1,48]. Our LGM–Holocene GHG forcing (figure 5c) is approximately 3 W m−2, moderately larger than the 2.8 W m−2 estimated by IPCC [1,48] because of our larger effective CH4 forcing.
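
    The effective-forcing relation just described can be sketched in code. The widely used logarithmic approximation Fa(CO2) ≈ 5.35 ln(C/C0) stands in here for the full Hansen et al. [89] formulae, and the CH4 adjusted forcing is treated as a supplied input, so the numbers are illustrative only.

```python
import math

def effective_ghg_forcing(co2_ppm, fa_ch4, co2_ref=278.0):
    """Effective GHG forcing (W/m^2):
    Fe = 1.12 * (Fa(CO2) + 1.4 * Fa(CH4)).
    Fa(CO2) uses the common 5.35*ln(C/C0) approximation as a stand-in
    for the Hansen et al. [89] formulae; fa_ch4 is the adjusted CH4
    forcing (W/m^2) supplied by the caller."""
    fa_co2 = 5.35 * math.log(co2_ppm / co2_ref)
    return 1.12 * (fa_co2 + 1.4 * fa_ch4)

# Illustrative LGM (~185 ppm) to pre-industrial (~278 ppm) CO2 change,
# with a nominal 0.3 W/m^2 change in CH4 adjusted forcing:
delta_f = effective_ghg_forcing(278.0, 0.3) - effective_ghg_forcing(185.0, 0.0)
print(round(delta_f, 2))   # close to the ~3 W/m^2 quoted in the text
```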

    Figure 5.(a) CO2 and CH4 from ice cores; (b) sea level from equation (3.4) and (c) resulting climate forcings (see text).

    Climate forcing due to surface albedo change is a function mainly of the sea level, which implicitly defines ice sheet size. Albedo change due to LGM–Holocene vegetation change, much of which is inherent with ice sheet area change, and albedo change due to coastline movement are lumped together with ice sheet area change in calculating the surface albedo climate forcing. An ice sheet forcing does not depend sensitively on the ice sheet shape or on how many ice sheets the ice volume is divided among and is nearly linear in sea-level change (see electronic supplementary material, figure S4, and [5]). For the sake of simplicity, we use the linear relation in Hansen et al. [5] and electronic supplementary material, figure S4; thus, 5 W m−2 between the LGM and ice-free conditions and 3.4 W m−2 between the LGM and Holocene. This scale factor was based on simulations with an early climate model [3,92]; comparable forcings are found in other models (e.g. see discussion in [93]), but results depend on cloud representations, assumed ice albedo and other factors; so the uncertainty is difficult to quantify. We subjectively estimate an uncertainty of approximately 20%.
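
    The linear sea-level-to-albedo-forcing relation just described can be sketched as follows; sea level is measured relative to today, and the −120 m LGM lowstand used to anchor the line is an illustrative assumption (the −3.4 W m−2 LGM–Holocene value is from the text).

```python
def albedo_forcing(sea_level_m, lgm_sea_level=-120.0, lgm_forcing=-3.4):
    """Surface-albedo climate forcing (W/m^2) relative to the Holocene,
    assumed linear in sea level (metres relative to today), as in the
    text and electronic supplementary material, figure S4.  The -120 m
    LGM lowstand is an illustrative assumption."""
    slope = lgm_forcing / lgm_sea_level    # ~0.028 W/m^2 per metre
    return slope * sea_level_m

print(albedo_forcing(-120.0))   # recovers the -3.4 W/m^2 LGM value
print(albedo_forcing(0.0))      # zero for today's sea level
```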

    Global temperature change obtained by multiplying the sum of the two climate forcings in figure 5c by a sensitivity of 3/4°C per W m−2 yields a remarkably good fit to ‘observations’ (figure 6), where the observed temperature is 2×ΔTdo, with 2 being the scale factor required to yield the estimated 4.5°C LGM–Holocene surface temperature change. The close match is partly a result of the fact that sea-level and temperature data are derived from the same deep ocean record, but use of other sea-level reconstructions still yields a good fit between the calculated and observed temperature [5]. However, exactly the same match as in figure 6 is achieved with a fast-feedback sensitivity of 1°C per W m−2 if the LGM cooling is 6°C or with a sensitivity of 0.5°C per W m−2 if the LGM cooling is 3°C.
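
    The consistency check described above reduces to simple arithmetic; here is a sketch using the LGM–Holocene forcing values quoted earlier (approximately −3 W m−2 for GHGs and −3.4 W m−2 for surface albedo).

```python
def calculated_temperature_change(f_ghg, f_albedo, sensitivity=0.75):
    """Temperature change (deg C): the sum of the GHG and albedo
    forcings (W/m^2) multiplied by the fast-feedback sensitivity
    (deg C per W/m^2)."""
    return sensitivity * (f_ghg + f_albedo)

# LGM relative to the Holocene: the product is about -4.8 deg C,
# in reasonable accord with the 4.5 deg C best-estimate cooling.
print(calculated_temperature_change(-3.0, -3.4))
```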

    Figure 6.Calculated surface temperature for forcings of figure 5c with a climate sensitivity of 0.75°C per W m−2, compared with 2×ΔTdo. Zero point is the Holocene (10 kyr) mean.

    Accurate data defining LGM–Holocene warming would aid empirical evaluation of fast-feedback climate sensitivity. Remarkably, the range of recent estimates of LGM–Holocene warming, from approximately 3°C [94] to approximately 6°C [95], is about the same as at the time of the CLIMAP [96] project. Given today’s much improved analytic capabilities, a new project to define LGM climate conditions, analogous to the Pliocene Research, Interpretation and Synoptic Mapping (PRISM) Pliocene data reconstruction [97,98] and Pliocene Model Intercomparison Project (PlioMIP) model intercomparisons [67,68], could be beneficial. In §7b, we suggest that a study of Eemian glacial–interglacial climate change could be even more definitive. Combined LGM, Eemian and Pliocene studies would address an issue raised at a recent workshop [99]: the need to evaluate how climate sensitivity varies as a function of the initial climate state. The calculations below were initiated after the workshop as another way to address that question.

    (b) Fast-feedback sensitivity: state dependence

    Climate sensitivity must be a strong function of the climate state. Simple climate models show that, when the Earth becomes cold enough for the ice cover to approach the tropics, the amplifying albedo feedback causes rapid ice growth to the Equator: ‘snowball Earth’ conditions [100]. Real-world complexity, including ocean dynamics, can mute this sharp bifurcation to a temporarily stable state [101], but snowball events have occurred several times in the Earth’s history when the younger Sun was dimmer than today [102]. The Earth escaped snowball conditions owing to limited weathering in that state, which allowed volcanic CO2 to accumulate in the atmosphere until there was enough CO2 for the high sensitivity to cause rapid deglaciation [103].

    Climate sensitivity at the other extreme, as the Earth becomes hotter, is also driven mainly by an H2O feedback. As climate forcing and temperature increase, the amount of water vapour in the air increases and clouds may change. Increased water vapour makes the atmosphere more opaque in the infrared region that radiates the Earth’s heat to space, causing the radiation to emerge from higher colder layers, thus reducing the energy emitted to space. This amplifying feedback has long been known and was described remarkably well by Tyndall [104]. Ingersoll [105] discussed the role of water vapour in the ‘runaway greenhouse effect’ that caused the surface of Venus to eventually become so hot that carbon was ‘baked’ from the planet’s crust, creating a hothouse climate with almost 100 bars of CO2 in the air and a surface temperature of about 450°C, a stable state from which there is no escape. Arrival at this terminal state required passing through a ‘moist greenhouse’ state in which surface water evaporates, water vapour becomes a major constituent of the atmosphere and H2O is dissociated in the upper atmosphere with the hydrogen slowly escaping to space [106]. That Venus had a primordial ocean, with most of the water subsequently lost to space, is confirmed by the present enrichment of deuterium over ordinary hydrogen by a factor of 100 [107], the heavier deuterium being less efficient in escaping gravity to space.

    The physics that must be included to investigate the moist greenhouse is principally: (i) accurate radiation incorporating the spectral variation of gaseous absorption in both the solar radiation and thermal emission spectral regions, (ii) atmospheric dynamics and convection with no specifications favouring artificial atmospheric boundaries, such as between a troposphere and stratosphere, (iii) realistic water vapour physics, including its effect on atmospheric mass and surface pressure, and (iv) cloud properties that respond realistically to climate change. Conventional global climate models are inappropriate, as they contain too much other detail in the form of parametrizations or approximations that break down as climate conditions become extreme.

    We use the simplified atmosphere–ocean model of Russell et al. [108], which solves the same fundamental equations (conservation of energy, momentum, mass and water substance, and the ideal gas law) as in more elaborate global models. Principal changes in the physics in the current version of the model are use of a step-mountain C-grid atmospheric vertical coordinate [109], addition of a drag in the grid-scale momentum equation in both atmosphere and ocean based on subgrid topography variations, and inclusion of realistic ocean tides based on exact positioning of the Moon and Sun. Radiation is the k-distribution method of Lacis & Oinas [110] with 25 k-values; the sensitivity of this specific radiation code is documented in detail by Hansen et al. [111]. Atmosphere and ocean dynamics are calculated on 3°×4° Arakawa C-grids. There are 24 atmospheric layers. In our present simulations, the ocean’s depth is reduced to 100 m with five layers so as to achieve a rapid equilibrium response to forcings; this depth limitation reduces poleward ocean transport by more than half. Moist convection is based on a test of moist static stability as in Hansen et al. [92]. Two cloud types occur: moist convective clouds, when the atmosphere is moist statically unstable, and large-scale super-saturation, with cloud optical properties based on the amount of moisture removed to eliminate super-saturation, with scaling coefficients chosen to optimize the control run’s fit with global observations [108,112]. To avoid long response times in extreme climates, today’s ice sheets are assigned surface properties of the tundra, thus allowing them to have a high albedo snow cover in cold climates but darker vegetation in warm climates. The model, the present experiments and more extensive experiments will be described in a forthcoming paper [112].

    The equilibrium response of the control run (1950 atmospheric composition, CO2 approx. 310 ppm) and runs with successive CO2 doublings and halvings reveals that snowball Earth instability occurs just beyond three CO2 halvings. Given that a CO2 doubling or halving is equivalent to a 2% change in solar irradiance [66], and the estimate that solar irradiance was approximately 6% lower 600 Myr ago at the most recent snowball Earth occurrence [113], figure 7 implies that a CO2 amount of about 300 ppm or less would have been sufficient to initiate glaciation at that time.
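
    This inference can be checked with back-of-envelope arithmetic: the model’s snowball threshold sits roughly three halvings below 310 ppm with today’s Sun, and a Sun 6% dimmer is worth about three further halvings (at 2% of irradiance per halving), shifting the threshold back up by a factor of eight.

```python
def snowball_threshold_co2(co2_control=310.0, halvings_to_instability=3,
                           solar_dimming_pct=6.0, pct_per_halving=2.0):
    """CO2 amount (ppm) at which snowball glaciation could initiate
    under a dimmer Sun, combining the equivalences quoted in the text."""
    threshold_today = co2_control / 2 ** halvings_to_instability  # ~39 ppm
    extra_halvings = solar_dimming_pct / pct_per_halving          # dimmer Sun
    return threshold_today * 2 ** extra_halvings

# With the quoted numbers this recovers "about 300 ppm CO2 or less".
print(snowball_threshold_co2())
```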

    Figure 7.(a) The calculated global mean temperature for successive doublings of CO2 (legend identifies every other case) and (b) the resulting climate sensitivity (1×CO2=310 ppm).

    Climate sensitivity reaches large values at 8–32×CO2 (approx. 2500–10 000 ppm;

  • Is climate change already dangerous (2): The Arctic

    Posted: 18 Sep 2013 09:47 PM PDT
    by David Spratt

    Second in a series

    Arctic sea ice

    Download full report

    On 16 September 2012, Arctic sea-ice reached its minimum extent for the 2012 northern summer of 3.41 million square kilometres, the lowest seasonal minimum extent in the satellite record since 1979, and just half of the average area for the 1979–2000 period.  There was a loss of 11.83 million square kilometres of ice from the maximum extent on 20 March 2012.  This was the largest summer ice extent loss in the satellite record, more than one million square kilometres greater than in any previous year.

    Two-thirds of the loss of sea-ice extent has happened in the 12 years since 2000, and the process appears to be accelerating.  From 1979 to 1983 in the Arctic, the sea ice summer minimum covered an average of just over 51 per cent of the ocean.  It fell to just 24 per cent of the Arctic ocean surface in 2012.

    Not only does the sea ice cover a smaller area of ocean in summer, it is also thinning rapidly. The sea-ice volume is now down to just one-fifth of what it was in 1979. The PIOMAS project, which captures the process of sea-ice retreat far better than general climate models do, finds a September 2012 minimum of 3,263 cubic kms of ice. Contrasted with the figure of 16,855 cubic kms in 1979, more than 80 per cent of ice volume has been lost.

    Arctic sea-ice volume loss (based on PIOMAS)

    It is now clear that the Arctic is heading quickly for summer periods free of sea ice. A linear extrapolation of sea-ice mass loss suggests it may occur within a decade or so. An exponential fit, which better matches the current data, suggests it might occur within a few years. At the time of publication, the minimum volume figure for 2013 was not available, but it may be a little higher than the record low of 2012, and similar to 2011.
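
    The linear extrapolation mentioned above can be illustrated with the two volume figures quoted in this post (16,855 km³ in 1979 and 3,263 km³ in 2012); an exponential fit needs the full annual series, so only the linear case is sketched here, and it is a crude two-point illustration rather than a proper trend fit.

```python
# September-minimum sea-ice volumes quoted in the text (PIOMAS, km^3).
y0, v0 = 1979, 16855.0
y1, v1 = 2012, 3263.0

# Fractional loss since 1979: "more than 80 per cent", as stated above.
print(round(100 * (1 - v1 / v0)))

# Constant-rate (linear) loss between the two quoted minima, and the
# year at which that straight line reaches zero volume.
loss_per_year = (v0 - v1) / (y1 - y0)
ice_free_year = y1 + v1 / loss_per_year
print(round(loss_per_year))    # km^3 of volume lost per year
print(round(ice_free_year))    # lands "within a decade or so" of 2012
```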

    Because climate models generally have been poor at dealing with Arctic sea-ice retreat, expert elicitations play a key role in considering whether the Arctic has passed a very significant and “dangerous” tipping point.  Here’s what leading figures in the research field say:

    PIOMAS Arctic sea ice annual minimum volume (black) plus “best fit” trend (red)
    • Dr Tim Lenton of the University of Exeter told the March 2012 Planet Under Pressure conference that sea ice since 2007 had departed from model predictions, and that the disappearance of Arctic sea ice has crossed a “tipping point” that could soon make ice-free summers a regular feature across most of the Arctic Ocean.  This conclusion was drawn from a subsequently published paper which finds “an abrupt and persistent increase in the amplitude of the seasonal Arctic sea-ice cover in 2007 which we describe as a (non-bifurcation) ‘tipping point’”.  If 2007 is the crucial point on the Arctic sea-ice decline timeline, it is also important to note that global warming above pre-industrial was 0.76ºC at that time. At equilibrium, a 0.76ºC rise is equivalent to CO2 levels of 335 ppm, so the “safe boundary” of 350 ppm already looks too optimistic from this perspective.
    • The Australian Climate Commissioner, Professor Will Steffen, told The Age in September last year: “I’m pretty certain that we have now passed the tipping point for Arctic sea ice”.
    • Dr Seymour Laxon, of the Centre for Polar Observation and Modelling at University College London, says: “Preliminary analysis of our data indicates that the rate of loss of sea-ice volume in summer in the Arctic may be far larger than we had previously suspected…  Very soon we may experience the iconic moment when, one day in the summer, we look at satellite images and see no sea-ice coverage in the Arctic, just open water”.
    • Professor Carlos Duarte, Director of University of WA’s Oceans Institute, says an Arctic “snowballing” situation would prove as hard to slow down as a runaway train.  He says melting of the ice is accelerating faster than any of the models could predict and the prospect of an Arctic Ocean free of ice had been brought forward to 2015, compared with a prediction in 2007 that at least one-third of the normal extent of sea ice would remain in summer in 2100.  Duarte says that the Arctic region is fast approaching a series of imminent “tipping points” which could trigger a domino effect of large-scale climate change across the entire planet with “major consequences for the future of humankind as climate change progresses”.
    • US National Snow and Ice Data Centre Director Dr Mark Serreze told Climate Progress in 2010: “I stand by my previous statements that the Arctic summer sea-ice cover is in a death spiral.  It’s not going to recover.”   Without human intervention to drive recovery, the evidence is very clear that Serreze is right.
    • Professor Peter Wadhams, of Cambridge University and the Catlin Arctic Survey, and a leading authority on the polar regions, concludes in a research paper: “Has Arctic sea ice reached a tipping point? I believe that it has…”.

    Wadhams explains:

    I have been predicting [the collapse of sea ice in summer months] for many years.  The main cause is simply global warming: as the climate has warmed there has been less ice growth during the winter and more ice melt during the summer… in the end the summer melt overtook the winter growth such that the entire ice sheet melts or breaks up during the summer months.  This collapse, I predicted would occur in 2015–16 at which time the summer Arctic (August to September) would become ice-free.  The final collapse towards that state is now happening and will probably be completed by those dates.  As the sea ice retreats in summer the ocean warms up (to +7ºC in 2011) and this warms the seabed too.  The continental shelves of the Arctic are composed of offshore permafrost, frozen sediment left over from the last ice age.  As the water warms, the permafrost melts and releases huge quantities of trapped methane, a very powerful greenhouse gas so this will give a big boost to global warming.

    Wadhams’ analysis relies in part on a new, more specialised regional climate model, NAME, developed by Dr Wieslaw Maslowski and colleagues, which has so far been head and shoulders above other models in projecting and replicating sea-ice losses. Their paper, “The future of Arctic sea ice”, found that: “Given the estimated trend and the volume estimate for October–November of 2007 at less than 9,000 cubic kms, one can project that at this rate it would take only 9 more years or until 2016 +/-3 years to reach a nearly ice-free Arctic Ocean in summer”.

    The impacts of lengthening periods of sea-ice-free Arctic summers are significant and will, together with warming already “in the system”, push more climate elements past their tipping points. Our knowledge is limited because “a system-level understanding of critical Arctic processes and feedbacks is still lacking” (Maslowski, Kinney et al.) and “no serious efforts have been made so far to identify and qualify the interactions between various tipping points” (Schellnhuber).

    However, we do know that the Arctic is warming quicker than the global average.  Duarte, Lenton et al. find that: “Warming of the Arctic region is proceeding at three times the global average, and a new ‘Arctic rapid change’ climate pattern has been observed in the past decade.” Reductions in the sea-ice cover are believed to be the largest contributor toward Arctic amplification. Maslowski, Kinney et al. note that: “a warming Arctic climate appears to affect the rate of melt of the Greenland ice sheet, Northern Hemisphere permafrost, sea-level rise, and global climate change”.

    The sea-ice cover in June is about two per cent of the Earth’s surface.  Replacing that during summer in the Arctic with darker, more heat-absorbing ocean waters is equivalent to about 20 years of human greenhouse emissions, or about +0.5ºC of warming, according to Peter Wadhams.  This is consistent with a study by Stephen Hudson, which found that, if the Arctic were ice-free for one month a year plus associated ice-extent decreases in other months, then, without taking cloud changes into account, the global impact would be about +0.2ºC of warming.  If there were no ice at all during the main three months of sunlight, the increase would be +0.5ºC.

    The consequences of the Arctic big melt and the subsequent regional amplification and global temperature increase will include:

    • Accelerated melting of the Greenland ice sheet, very likely pushing it past its tipping point;
    • Pushing Arctic temperatures into a range that will trigger large-scale Arctic carbon store releases of methane and CO2, a positive feedback which will drive further warming;
    • Further destabilisation of the Jet Stream and hence more northern hemisphere extreme weather; and
    • The destruction of the Arctic ecosystem, which is already well under way. This has been chronicled by many researchers and organisations, including the Center for Biological Diversity and Care for the Wild International.  In the Arctic, the rate of climate change is now faster than ecosystems can adapt to naturally, and the fate of many Arctic marine ecosystems is clearly connected to that of the sea ice (Duarte, Lenton et al.). I remember well attending an Academy of Science conference in Canberra in May 2008 where the international guest speaker was Dr Neil Hamilton, then head of the WWF Arctic Programme. He told a somewhat stunned audience that the WWF was not trying to preserve the Arctic ecosystem because “it was no longer possible to do so”.  Whilst the campaign to stop the development of an oil and gas industry in the Arctic is necessary (if only to prevent more global warming emissions), the claim that in so doing we can thereby “save the Arctic” seems wide of the mark.

    Greenland Ice Sheet

    Complex, non-linear systems typically shift between alternative states in an abrupt, rather than a smooth manner, so it is often difficult to identify tipping points in advance. Only a few Arctic specialists, including Ted Scambos, Mark Serreze and Ron Lindsay, said prior to 2007 that the sea ice was close to a phase change.

    If it is sometimes hard to see tipping points coming, it is at least possible to be wise after the fact. And that is precisely the case with the Greenland Ice Sheet (GIS).

    Current-generation climate models are not yet all that helpful on GIS. The processes involved are poorly understood, and acceleration, retreat and thinning of outlet glaciers are not represented in the models.

Recent research (next post) puts a lower bound of 0.8ºC on the GIS’s tipping point, a warming level we have already reached.  In July 2013, a new study found that stretches of ice on the coasts of Antarctica and Greenland are at risk of rapidly cracking apart and falling into the ocean: “rapid iceberg discharge is possible in regions where highly crevassed glaciers are grounded deep beneath sea level, indicating portions of Greenland and Antarctica that may be vulnerable to rapid ice loss through catastrophic disintegration”.

In 2012, GIS melting shattered the seasonal record; the duration of GIS melting was the longest yet observed; a rare, nearly ice sheet-wide melt event (covering as much as 97% of the ice sheet’s surface on a single day) occurred in July; and the reflectivity of GIS, particularly at the high elevations involved in the mid-July melt event, declined to record lows. Unfortunately, the GRACE satellite record for GIS is not yet long enough to describe the melt trend robustly, but observations show that the rate of melting is increasing and that many glaciers are picking up speed. Since 2001, the Jakobshavn Glacier, the world’s fastest flowing glacier, has more than doubled its flow rate, and total GIS mass loss in 2011 was 70% larger than the 2003–2009 average annual loss rate.

    Previously, studies have estimated that it would take centuries to millennia for new climates to increase the temperature deep within ice sheets such as GIS. But a new study finds that when the influence of meltwater (which drains through cracks in an ice sheet and can warm the sheet from the inside, softening the ice and letting it flow faster) is considered, warming can occur within decades and produce rapid accelerations. Lead author Thomas Phillips says this research “could imply that ice sheets can discharge ice into the ocean far more rapidly than currently estimated,” thus requiring a re-assessment of the rate of both future sea-level rises and the rate of mass loss of GIS.

    Has Greenland passed its tipping point?  What would be the impact of a sea-ice-free Arctic summer and the consequent amplified regional warming on the stability of the Greenland ice sheet? Research does not yet provide a robust framework for considering such questions, yet most scientists if asked for their expert elicitation would probably say that it is hard to imagine the GIS doing anything other than actively de-glaciating at an accelerating rate and passing a critical tipping point in such circumstances.

    NASA climate research chief Dr James Hansen answered this question in the affirmative, in a peer-reviewed paper in 2007:

“Could the Greenland ice sheet survive if the Arctic were ice-free in summer and fall? It has been argued that not only is ice sheet survival unlikely, but its disintegration would be a wet process that can proceed rapidly. Thus an ice-free Arctic Ocean, because it may hasten melting of Greenland, may have implications for global sea level, as well as the regional environment, making Arctic climate change centrally relevant to definition of dangerous human interference.”

    In the same year, Hansen said that today’s level of CO2 was enough to cause Arctic sea-ice cover and massive ice sheets such as in Greenland to eventually melt away: “I think in most of these cases, we have already reached the tipping point”.

    And last year, Hansen told Bloomberg that: “Our greatest concern is that loss of Arctic sea ice creates a grave threat of passing two other tipping points – the potential instability of the Greenland ice sheet and methane hydrates… These latter two tipping points would have consequences that are practically irreversible on time scales of relevance to humanity”.

    Glaciologist Jason Box told reporters at the annual conference of the American Geophysical Union last December: “In 2012 Greenland crossed a threshold where for the first time we saw complete surface melting at the highest elevations in what we used to call the dry snow zone… As Greenland crosses the threshold and starts really melting in the upper elevations it really won’t recover from that unless the climate cools significantly for an extended period of time which doesn’t seem very likely”.

    Next post: Dangerous impacts from the current implied temperature rise

  • Pricing the Priceless

    Pricing the Priceless

    Posted: 18 Sep 2013 08:23 AM PDT

    The market has not solved the problem of power: it has simply given it another name.

     

    By George Monbiot, published in Corporate Knights, 15th July 2013

    On this we can agree: the relationship between people and the natural world is broken. We fail to value the systems that keep us alive. We treat both natural resources and the biosphere’s capacity to absorb our waste as if they were worth nothing.

    The obvious answer is to place a financial value on what used to be called nature, but has now been rebranded natural capital. There are some magnificent examples of how this could, in principle, spare us from perverse decisions. As The Economics of Ecosystems and Biodiversity points out, if you turn a hectare of mangrove forest into shrimp farms you’ll make $1,220 per year. Leave it standing, and the benefits are worth ten times that amount(1).

    But the obvious answer isn’t necessarily the right answer. The issue which determines whether or not the living planet is protected is not a number with a dollar sign attached. It’s political will. That’s another way of saying that it’s about power.

    Look at the European carbon market. Through the Emissions Trading System, it was supposed to have harnessed the magic of the markets to do what politics had failed to do: drastically reduce the consumption of fossil fuels. At the time of writing, the price of carbon is 4.70 euros per tonne(2). For all the good that does, it might as well be zero.

    Why is it so low? Because carbon-intensive industries lobbied politicians to raise the supply of permits until the mechanism became useless(3). The market has not solved the problem of power: it has simply given it another name. Whether governments attempt to address climate change the old way (through regulation) or through pricing makes not a jot of difference if they won’t stand up to industrial lobbyists.

    In some respects the Emissions Trading System has made the problem worse, for it allows politicians and businesses to wash their hands of responsibility for climate change, arguing that the market will sort it all out. There is not a new airport or coal mine or power station being built in the European Union which has not cited the trading scheme as justification. This useless system has empowered polluting projects which might not otherwise have been approved.

    Even if we didn’t have a number to slap on them, we’ve known for centuries that mangrove swamps are of great value for coastal protection and as breeding grounds for fish. But this has not stopped people from bullying and bribing politicians to let them turn these forests into shrimp farms. If a hectare of shrimp farms makes $1,200 for a rich and well-connected man, that can count for far more than the $12,000 it’s worth to downtrodden coastal people. Knowing the price does not change this relationship: again, it’s about power.

    Natural capital accounting can exacerbate the underlying problem. By pricing and commodifying the natural world and then taking the obvious next step – establishing a market in “ecosystem services” – accounting has the unintended consequence of turning the biosphere into a subsidiary of the economy. Forests, fish stocks, biodiversity, hydrological cycles become owned, in effect, by the very interests – corporations, landlords, banks – whose excessive power is most threatening to them. In some cases the costing of nature looks like a prelude to privatisation.

    Already the traders and speculators are moving in. In the UK, our Ecosystem Markets Task Force talks of “harnessing City financial expertise to assess the ways that these blended revenue streams and securitisations enhance the return on investment of an environmental bond”(4). Nature is becoming the plaything of the financial markets. We know how well that tends to work out.

    While natural capital accounting empowers the moneymen, it disempowers the rest of us. That’s one of the reasons why governments like it. Who needs all that messy democratic decision-making, those endless debates about intrinsic value and beauty and wonder if you’ve already determined that the meaning of life is 42? And who can gainsay the decision to pulp a forest or blast a coral reef, if the value of the destruction turns out to be worth several times 42? Once we have ceded nature to cost-benefit analysis, we can’t complain if we don’t like the results.

    After more than a quarter of a century of environmental campaigning I’ve come to see that the only thing that really works is public mobilisation: the electorate putting so much pressure on governments that they are obliged to take a stand against powerful interests. It doesn’t matter what weapons governments use to confront these interests: what counts is their willingness to use them. A system which undermines public involvement, boosts the power of the financial markets and reduces love and passion and delight to a column of figures is unlikely to enhance the protection of the natural world.

    www.monbiot.com

    References:

    1. http://www.unep.org/documents.multilingual/default.asp?DocumentID=602&ArticleID=6371&l=en&t=long

    2. http://www.iol.co.za/business/international/eu-carbon-price-increases-1.1468977#.URqk2_IiVHl

    3. http://www.guardian.co.uk/environment/2013/jan/24/eu-carbon-price-crash-record-low

    4. http://www.defra.gov.uk/ecosystem-markets/files/EMTF-

  • Return of the steam engine: cheap storage for solar

    Return of the steam engine: cheap storage for solar

19 September 2013

A group of Australian engineers has “re-invented” the steam engine and combined it with solar thermal energy to deliver a cheap solar storage solution. What’s more, it works at the distributed level, can operate behind the meter, and is far cheaper than PV combined with batteries.

    Not long after Robert Mierisch finished up as director of thermal systems research at the solar thermal energy pioneer Ausra, he took himself to the Smithsonian in Washington, the world’s largest museum complex, to find out everything he could about the Skinner steam engine.

    Mierisch was playing on a hunch. Or should that be a conviction. He had worked with solar thermal energy for years, and had been looking for a viable storage solution. He thought he could find the enabler in comparatively ancient technology.

Steam turbines, he reasoned, couldn’t do the job at the scale he had in mind, nor did they have the flexibility required to replace diesel power. But steam engines just might. This is what led him to the Smithsonian and the Skinner Unaflow steam engine, which was used in ships through the 1940s, the last commercial version of a technology that had begun at the start of the industrial revolution nearly 200 years earlier.

    Now, a unique distributed generation technology with storage, the product of a 4-year collaboration between Mierisch and Steve Bisset, another Australian expat and Silicon Valley entrepreneur, is soon to see the light of day.

    Terrajoule, the Redwood City, California-based company they co-founded, will soon bring a demonstration system up to full-power operation, generating 100 kW for 24 hours per day.   This is a key milestone toward bringing the solar/steam/storage technology to market within the next 18 months.

Mierisch and Bisset say their technology is potentially revolutionary but deceptively simple. It combines inexpensive solar power with inexpensive storage and behaves like an electric motor plugged into the grid, or even like a diesel genset. In other words, it can operate 24 hours per day, but without the utility bill or the fuel cost. And they say it will be cheaper and far more efficient than alternatives such as solar PV combined with batteries.

RenewEconomy was invited in July to visit Terrajoule’s demonstration site near Oakdale, on an irrigated almond farm in the heart of California’s Central Valley, about 150km east of San Francisco.

    There, Terrajoule has an array of parabolic trough receivers that collect solar energy to create steam to drive the engine. The breakthrough comes from the realization that the storage can be created by exploiting the difference between the high pressure and the low pressure cycles of the engine.

    While the sun is shining, high pressure steam is created and used to power the high pressure stage of the steam engine.  This high pressure stage produces power only while the sun shines, but its exhaust steam still contains over half its original energy, now reduced to an intermediate pressure.  This remaining energy is captured by condensing the exhaust steam into an insulated tank of water, which heats and pressurizes the water.

    When energy is needed above what the sunshine-driven high pressure engine stage is producing, to handle peaks in daytime demand, or nighttime demand, the pressurized water in the tank is flashed back to steam which then drives the low pressure stages of the steam engine.  The combined output of the high and low pressure engine stages provides 24-hour power on demand, like a diesel generator, but with no fuel cost.

    The energy lost in this steam-water-steam storage and retrieval process is negligible, and the net cost to store the energy is a small fraction of the equivalent batteries.
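The article gives no operating figures, but the flash-steam principle it describes can be sketched with a back-of-envelope calculation. All constants and temperatures below are rough textbook values, not Terrajoule’s actual numbers: when hot pressurised water is dropped to a lower pressure, the sensible heat above the new boiling point flashes a fraction of the mass into steam.

```python
# Back-of-envelope flash-steam storage estimate. Constants are rough
# textbook values for water, NOT Terrajoule's operating figures.
CP_WATER = 4.186   # kJ/(kg*K), specific heat of liquid water
H_FG_LOW = 2257.0  # kJ/kg, latent heat of vaporisation near 100 C

def flash_fraction(t_store_c, t_low_c):
    """Mass fraction of stored hot water that flashes to steam
    when its pressure drops to the low-pressure stage."""
    return CP_WATER * (t_store_c - t_low_c) / H_FG_LOW

def recoverable_steam_energy(mass_kg, t_store_c, t_low_c):
    """Latent energy (kJ) carried by the flashed steam."""
    return mass_kg * flash_fraction(t_store_c, t_low_c) * H_FG_LOW

# Hypothetical store at ~180 C, flashed down to ~100 C:
x = flash_fraction(180.0, 100.0)
print(f"flash fraction: {x:.1%}")  # roughly 15% of the water flashes
e_kwh = recoverable_steam_energy(10_000, 180.0, 100.0) / 3600
print(f"energy from a 10 t tank: {e_kwh:.0f} kWh")
```

At these assumed temperatures a 10-tonne insulated tank stores on the order of 1 MWh of flashable energy, which is at least consistent in scale with the 600 kWh to 5 MWh storage capacity quoted later in the article.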

Bisset says they have made only minor adjustments to the 1930s steam engine in concept, although the format is changed for manufacture alongside modern internal combustion engines. “They were highly evolved machines,” he says. “Over 300 years they figured out how to make them that good. What we’ve done is match new technology (solar thermal), and old technology (steam engine) and a thoroughly modern idea (storage) and come up with a solution. The core patent is in the architecture, the novel combination of these technologies.”

    Bisset says it solves the storage problem because it is much more capital efficient than other ideas. “If we started from scratch, it would have taken us 100 years to refine it. We don’t have to build more factories to do it.  There is lots of capacity to build piston engines in the world, in the US, Europe, China and India.” The company’s first steam engines are being developed in collaboration with Roush Industries of Detroit, and the first market will be developed with partner JKB Energy, the leader in solar power for agriculture in California.

The initial product units will deliver 300kW to 1.5MW peak, with constant output of 125kW to 625kW over 24 hours. Each unit will be delivered with 600kWh to 5MWh of storage capacity. The engine and storage units are shipped, installed and operated in two or more 40’ shipping containers.

    Terrajoule’s initial target market is the more than 300,000 electric and diesel-powered irrigation pumps in the solar-rich western US. Bisset says the technology will work in other applications, for example manufacturing plants, off-grid locations including mines, and even entire towns.

    Mierisch estimates less than a five year pay-back for customers using diesel engines, a three-year pay-back when that diesel is transported long distances, and a six year payback for grid-connected customers in Australia.

    “If you consider just the cost per kilowatt-hour of electricity produced”, he says, “the Terrajoule systems will be similar in cost to wind turbines and solar PV panels, and those technologies now produce kilowatt-hours as cheaply as fossil fuels.  However, a Terrajoule kilowatt-hour is much more valuable than kilowatt-hours from wind or PV, because you can generate them when you need them.  To make wind or PV viable in the long run, you have to combine them with batteries or some other expensive storage technology, and Terrajoule will be a fraction of the cost of those power-with-storage systems”.
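Mierisch’s payback estimates can be illustrated with a simple-payback sketch. The article gives no capital or fuel costs, so every number below is a hypothetical round figure, not Terrajoule’s:

```python
# Simple payback: years to recoup capital from avoided energy purchases.
# Ignores maintenance, financing and degradation; all inputs are
# illustrative round numbers, not figures from the article.
def simple_payback_years(capex, annual_kwh, offset_cost_per_kwh):
    return capex / (annual_kwh * offset_cost_per_kwh)

annual_kwh = 125 * 24 * 365  # 125 kW constant output ~ 1.1 GWh/yr

# Offsetting expensive remote diesel pays back much faster than
# offsetting cheaper grid power, matching the ordering Mierisch gives.
print(simple_payback_years(1_500_000, annual_kwh, 0.30))  # vs remote diesel
print(simple_payback_years(1_500_000, annual_kwh, 0.12))  # vs grid power
```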

The irony is that a unaflow steam engine had been used to convert steam into electricity at Australia’s first ever solar farm, in the remote town of White Cliffs in NSW, in 1984.

    This actually provided the seed of an idea for Mierisch’s later investigation, although it took a lot of work from Mierisch and Bisset to identify exactly where the solution was to be found.

    “I don’t know why I didn’t think of it 40 years ago,” Mierisch says. Had they started several years earlier, pre-GFC and pre-Solyndra, Mierisch says it might have been a lot easier to get funding. As it was, California, as it had been for Ausra, was a more fruitful place to get backing for “partially developed thinking” than Australia. “I went to see a VC firm in Australia – and they said, ‘when you have some sales traction I will see you again’, and I thought, ‘when I have sales traction, I won’t be talking to you’.”

     

     
  • Arctic sea ice shrinks to sixth-lowest extent on record

    Arctic sea ice shrinks to sixth-lowest extent on record

    Sea ice recovers from record low of 2012 but long-term trend continues towards an ice-free Arctic during the summer months

    Eco audit live blog: how fast is Arctic sea ice melting?

Melting sea ice near Ellesmere Island, Canada. Photograph: Gordon Wiltsie/National Geographic

    Arctic sea ice extent, September 2013. Photograph: guardian.co.uk

    Sea ice cover in the Arctic has shrunk to one of its smallest extents on record, bringing the days of an entirely ice-free Arctic during the summer a step closer.

The annual sea ice minimum of 5.099m sq km reached last Friday was not as extreme as last year, when the collapse of ice cover broke all previous records.

    But it was still the sixth lowest Arctic sea ice minimum on record, and well below the average set over the past 30 years of satellite records.

    This suggests the Arctic will be entirely ice-free in the summer months within decades, scientists said.

    The annual sea ice minimum, based on a five-day average, is expected to be officially declared by the US National Snow and Ice Data Centre in Boulder, Colorado, within the next few days.

    “It certainly is continuing the long-term decline,” said Julienne Stroeve, a scientist at the centre. “We are looking at long-term changes and there are going to be bumps and wiggles along the long-term declining trend, but all the climate models are showing that we are eventually going to lose all of that summer sea ice.”

    Overall, the Arctic has lost about 40% of its sea ice cover since 1980. Most scientists believe the ocean at the north pole could be entirely ice-free in the summer by the middle of the century – if not sooner.

Arctic sea ice extent graph. Photograph: guardian.co.uk

    The most dramatic changes have occurred in the past decade. The seven summers with the lowest sea ice minimums were all in the past seven years.

    The loss of sea ice cover is a leading indicator of climate change, and will be a key part of the findings released next week by the United Nations’ climate science panel, the IPCC. It has also emerged as a driver of extreme weather events in Europe.

    The extent of Arctic sea ice has generally decreased in all regions since satellite records began in the late 1970s. The Arctic continues to warm at about twice the rate of lower latitudes.

This year’s minimum was reached despite cooler temperatures in some areas that slowed melting, Stroeve said. Air temperatures in the central Arctic were 1–4°C colder than in the past six years.

    “We had a pretty cold summer in general for the time period we’re looking at and yet the sea ice cover didn’t recover to the extent that we had in the 1970s and 1980s,” she said.

    Rapid warming last year reduced the area of frozen ocean water in the Arctic to less than 3.5m sq km.

    This year’s low was more in line with the summer of 2009, Stroeve said. After shrinking to a minimum of 5.099m sq km on 13 September, the summer sea ice extent increased to 5.104m sq km on 14 September and 5.105m sq km on 15 September before falling back to 5.103m sq km on 16 September.
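As noted above, the official minimum is declared on a five-day average rather than a single day’s reading. A minimal sketch of that smoothing, using the four daily extents quoted in the paragraph above padded with two assumed neighbouring values:

```python
# Trailing five-day mean, the kind of smoothing behind the official
# "annual sea ice minimum". The middle four daily values are the ones
# quoted in the article; the first and last are assumed padding.
def rolling_mean(values, window=5):
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

daily_extent = [5.110, 5.099, 5.104, 5.105, 5.103, 5.108]  # million sq km
smoothed = rolling_mean(daily_extent)
print(f"five-day minimum: {min(smoothed):.3f}m sq km")
```

The point of the averaging is that a single day’s uptick (like the 14–15 September values) does not end the melt season; the five-day mean has to turn upward.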

    But the decline of the surface area of frozen water tells only part of the story, scientists said.

    Ice in the Arctic has also been thinning over the years – which makes it more vulnerable to melting in the summer.

    Scientists now believe it is the combination of reduction in thickness and surface area that is hastening the advent of an ice-free Arctic in the summer months.

    Observations from the European Space Agency’s CryoSat mission released last week showed the volume of sea ice in the Arctic falling to a new low last winter.

    Last March and April – typically the time of year when the ice floes are at their thickest – there was just 15,000 cubic km of ice.

    There would have been 30,000 cubic km, or twice that volume, at the height of winter 30 years ago, scientists said.

    “There is very little thick multi-year ice left covering these great areas. It is really thin so if you get a little weather the next year, it’s all gone,” said Andreas Münchow, a scientist at the University of Delaware who studies the Arctic.

    The loss of the thicker, multi-year ice was also one reason for the larger year-to-year changes in Arctic ice cover, Münchow said.

    But the overall direction of sea ice cover in the Arctic was clear, he added. “We really are heading towards an ice-free Arctic in the summer.

    “It just takes a freak event eventually, in the next five or 10 or even 20 years, and the next year there will be a huge Arctic cover. But it is all going to be thin on top, and the long-term trend is that the ice is disappearing in the summer in the Arctic.”


  • Unprecedented Rate and Scale of Ocean Acidification Found in the Arctic

    Unprecedented Rate and Scale of Ocean Acidification Found in the Arctic
    Released: 9/11/2013 5:30:00 PM

    Contact Information:
    U.S. Department of the Interior, U.S. Geological Survey
    Office of Communications and Publishing
    12201 Sunrise Valley Dr, MS 119
    Reston, VA 20192
Lisa Robbins
    Phone: 727-803-8747 x3005

    Jonathan Wynn
    Phone: 813-974-9369

    In partnership with: University of South Florida
ST. PETERSBURG, Fla. — Acidification of the Arctic Ocean is occurring faster than projected, according to new findings published in the journal PLOS ONE.  The increased rate is being blamed on rapidly melting sea ice, a process that may have important consequences for the health of the Arctic ecosystem.

Ocean acidification is the process by which the pH of seawater decreases as the oceans absorb greater amounts of carbon dioxide from the atmosphere.  Currently the oceans absorb about one-fourth of the greenhouse gas.  Lower pH levels make water more acidic, and lab studies have shown that more acidic water decreases calcification rates in many calcifying organisms, reducing their ability to build shells or skeletons.  These changes, in species ranging from corals to shrimp, have the potential to impact species up and down the food web.
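The mechanism described above can be illustrated with an idealised calculation. This sketch uses the pure-water CO2 equilibrium with rough 25°C constants; real seawater is buffered by the full carbonate system (surface ocean pH is around 8.1), so the absolute values below are not oceanic pH, but the direction of the effect (more dissolved CO2, lower pH) is the same.

```python
import math

# Idealised pure-water CO2 equilibrium: [H+] ~ sqrt(K1 * KH * pCO2).
# Constants are rough 25 C freshwater values; seawater buffering makes
# the real absolute pH different, but the trend is the same.
KH = 3.3e-2   # mol/(L*atm), Henry's law constant for CO2 in water
K1 = 4.45e-7  # mol/L, first dissociation constant of carbonic acid

def ph_from_pco2(pco2_atm):
    h_plus = math.sqrt(K1 * KH * pco2_atm)
    return -math.log10(h_plus)

print(ph_from_pco2(280e-6))  # pre-industrial atmospheric pCO2
print(ph_from_pco2(400e-6))  # ~2013 pCO2; lower value = more acidic
```

Because pH is logarithmic, even a drop of a few hundredths of a unit represents a substantial increase in hydrogen ion concentration.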

    The team of federal and university researchers found that the decline of sea ice in the Arctic summer has important consequences for the surface layer of the Arctic Ocean.  As sea ice cover recedes to record lows, as it did late in the summer of 2012, the seawater beneath is exposed to carbon dioxide, which is the main driver of ocean acidification.

    In addition, the freshwater melted from sea ice dilutes the seawater, lowering pH levels and reducing the concentrations of calcium and carbonate, which are the constituents, or building blocks, of the mineral aragonite. Aragonite and other carbonate minerals make up the hard part of many marine micro-organisms’ skeletons and shells. The lowering of calcium and carbonate concentrations may impact the growth of organisms that many species rely on for food.

    The new research shows that acidification in surface waters of the Arctic Ocean is rapidly expanding into areas that were previously isolated from contact with the atmosphere due to the former widespread ice cover.

    “A remarkable 20 percent of the Canadian Basin has become more corrosive to carbonate minerals in an unprecedented short period of time.  Nowhere on Earth have we documented such large scale, rapid ocean acidification” according to lead researcher and ocean acidification project chief, U.S. Geological Survey oceanographer Lisa Robbins.

    Globally, Earth’s ocean surface is becoming acidified due to absorption of man-made carbon dioxide. Ocean acidification models show that with increasing atmospheric carbon dioxide, the Arctic Ocean will have crucially low concentrations of dissolved carbonate minerals, such as aragonite, in the next decade.

“In the Arctic, where multi-year sea ice has been receding, we see that the dilution of seawater with melted sea ice adds fuel to the fire of ocean acidification” according to co-author, and co-project chief, Jonathan Wynn, a geologist from the University of South Florida. “Not only is the ice cover removed leaving the surface water exposed to man-made carbon dioxide, the surface layer of frigid waters is now fresher, and this means less calcium and carbonate ions are available for organisms.”

    Researchers were able to investigate seawater chemistry at high spatial resolution during three years of research cruises in the Arctic, alongside joint U.S.-Canada research efforts aimed at mapping the seafloor as part of the U.S. Extended Continental Shelf program.  In addition to the NOAA supported ECS ship time, the ocean acidification researchers were funded by the USGS, National Science Foundation, and National Oceanic and Atmospheric Administration.

Compared to other oceans, the Arctic Ocean has been rather lightly sampled. “It’s a beautiful but challenging place to work,” said Robert Byrne, a USF marine chemist. Using new automated instruments, the scientists were able to make 34,000 water-chemistry measurements from the U.S. Coast Guard icebreaker. “This unusually large data set, in combination with earlier studies, not only documents remarkable changes in Arctic seawater chemistry but also provides a much-needed baseline against which future measurements can be compared.” Byrne credits scientists and engineers at the USF College of Marine Science with developing much of the new technology.

Information on the most recent Arctic research cruise is available online, and you can follow the research on Twitter @USGS Arctic.