
  • Maintain the rage – on blocking supply

    “Maintain the rage!”

    So said Gough Whitlam to his supporters after Malcolm Fraser, back then no friend of the left, blocked supply and effectively ended Whitlam’s remarkable reforming (if chaotic) Prime Ministership.

    Whitlam didn’t just change laws and funding arrangements for health and education. He changed the way Australians thought about ourselves and our place in the world. He changed our very culture, through his actions and leadership.

    The egalitarian, caring, ambitiously forward-thinking spirit he brought to government has slowly been eroded over the intervening four decades, by Liberal and Labor governments. So slowly that most of us didn’t notice most of the time. Until now, when Tony Abbott and Joe Hockey took a sledgehammer to it.

    But is the answer to this really to use against Abbott the tool that brought down Whitlam? Should we really, as quite a few people are arguing, be pushing the Senate to block supply so as to bring down the government and trigger a new election?

    I suggest that doing so risks squandering the greatest opportunity we’ve had in decades to really shift the debate back to the left. We need to maintain the rage, and harness it to drive change, not use it all up in one big burst that could well blow up in our faces. Now is our best chance to reprioritise values, shift social norms, put caring for each other and our environment at the heart of our culture.

    So why would blocking supply not do that?

    Firstly, it’s critically important to understand the difference between blocking budget measures and blocking supply. Budget measures are contained in suites of legislation – vast omnibus bills to amend the tax laws generally called TLABs (Tax Laws Amendment Bills), as well as specific bills to set up new structures (eg the so-called Emissions Reduction Fund) or abolish old ones (eg the Clean Energy Finance Corporation). Blocking or amending these bills is annoying for the government, would stand in the way of their harsh and nasty agenda, and could trigger a new election if Mr Abbott chose to swallow the double dissolution pill. But it doesn’t cause a constitutional crisis of any kind.

    On the other hand are the Appropriations Bills. These are the bills by which the government effectively withdraws cash from its bank account – Consolidated Revenue – and spends it on everything from schools to fighter jets, unemployment benefits to middle class welfare, hospitals to hand-outs for billionaire mining magnates.

    Blocking the Appropriations Bills, commonly known as blocking supply, means the executive arm of government is effectively paralysed by the parliamentary arm. That is what causes a constitutional crisis.

    Now, let me clarify that I’m not worried about causing a constitutional crisis, per se. On some levels, that kind of shake up is exactly what our torpid democracy needs. But it’s only useful if we can be reasonably confident of the outcome. And I am not.

    Here’s the first key risk. In 1975, Governor General Sir John Kerr dismissed Whitlam, appointed Fraser caretaker Prime Minister, and dissolved both Houses of Parliament, triggering an election. There have been books written about the appropriateness and legality of that decision to use his ‘reserve powers’. He didn’t have to. And there is absolutely no guarantee that our current Governor General would follow the precedent. Indeed, given what it did to Kerr, who drank himself to an early grave, it might be surprising if General Cosgrove did so.

    “Then what, hmmm?” as the grandfather in Peter and the Wolf said. What would happen if we’d set this wolf free and couldn’t capture it? What if it dragged on, as it almost inevitably would, perhaps for weeks or even months?

    That’s when we’re staring down the barrel of a US-style shut down and all that goes with it. Except triggered not by a Tea Party that wears its abhorrence of government on its sleeve, but by a broad left that wants government to play a vital role but through its actions is preventing it from doing so.

    Bear in mind that the primary impacts of blocking supply would be felt by some of the most vulnerable in our society. We’re not just looking at the irony of seeing nurses, teachers, firies and other public servants – the very workers we want to protect from Abbott’s agenda – going unpaid. We’re talking about those on disability benefits, struggling single parents and the long-term unemployed, living hand to mouth – the very people we are trying to defend from Abbott’s knife – going without for the duration.

    And what would the political impact be? If this scenario plays out, frankly, it could backfire very badly indeed. Even if the Governor General did eventually dissolve parliament, support may well have melted away by then, partly due to the simple annoyance factor, partly due to the patent clash between stated values and actual actions. We could win the battle but lose the war, ending up with a renewed, even strengthened, Abbott government.

    There’s a chance this wouldn’t happen. Of course there is. The Governor General could call a snap election, Labor could win and Bill Shorten would… um… oh. Do you really reckon he’d change things if we hadn’t changed the discourse first? Wouldn’t he continue the gradual erosion, the slow shift away from fairness and caring and sustainability toward the corporate state? Wouldn’t he still cut single parent support, send desperate refugees out of sight to lose their minds, mouth platitudes about climate change while funding coal ports?

    Now is our moment to actually build change! Tony Abbott has made it easier for us by making this so explicitly about values and culture, about the kind of country we want Australia to be. Now is our chance to have that conversation, to shift the discourse, to demand the space to talk about making education more important than war planes, research and innovation more important than coal exports, people more important than the “economy” we ostensibly constructed to serve us but have now allowed to overshadow and overpower all other goals.

    We’ve had years of a creeping shift to the right, aided by Labor often, but really driven by the Liberals, years when we’ve been able to pretend to ourselves that we were still the egalitarian society we believed we were, long after it had been eroded beyond recognition. The bubble has now been burst.

    That gives us the opportunity to really fight back. Not just to use right wing tactics to kick out a government we oppose, but to actually do the hard yards of rebuilding a caring society, a daring society, a sharing society.

    Let’s do it properly this time. Because, with climate change bearing down on us, we haven’t got another chance to stuff it up.


    — Tim Hollo

  • Carbon budgets, climate sensitivity and the myth of “burnable carbon” Posted: 07 Jun 2014 11:08 PM PDT by David Spratt

    Breakthrough National Climate Restoration Forum, 21–22 June, Melbourne

    In my previous post explaining why there is no carbon budget left for burning fossil fuels for a 2-degree Celsius (°C) target, I explained that these carbon budget calculations are expressed as probabilities of not exceeding the target. This reflects a number of uncertainties in understanding, including climate sensitivity, ocean heat uptake inertia and the influence of non-carbon-dioxide forcing agents, and the fact that results vary somewhat among model ensembles. Of these, climate sensitivity is the biggest issue, because climate change might proceed more rapidly than currently estimated because of reinforcing feedbacks, thresholds or tipping points in the climate system, or less rapidly because of dampening feedbacks. Another significant issue is whether the modelling used for the most recent IPCC report is too conservative in projecting the loss of Arctic sea ice, and the consequences for Arctic-driven warming. This post looks at these two issues.

    Arctic modelling underestimates sea-ice loss, albedo change and warming

    The IPCC’s 2013 carbon budget work uses Coupled Model Intercomparison Project Phase 5 (CMIP5) computer modelling results. Results are given for the RCP2.6 (~2°C warming) scenario, which show a 43% reduction in September Arctic sea-ice extent by 2100 (compared to a 1985–2005 reference period), so that roughly 4 million square kilometres of Arctic sea ice still remains in summer by 2100. This is so at odds with the reality on the ground as to be not credible. With less than 1°C of global warming, Arctic sea-ice extent has already dropped by half, and sea-ice volume by three-quarters. At the 2012 summer minimum, sea-ice extent was 3.4 million square kilometres – less than CMIP5’s projection for 2100!
    Many Arctic experts think that the Arctic is likely to reach a sea-ice-free state (defined as less than 1 million square kilometres) in the northern summer within the next decade or so, and perhaps sooner, with the number of ice-free days growing from then on. Prof. Will Steffen told “The Age” in September 2012: “I’m pretty certain that we have now passed the tipping point for Arctic sea ice”. This reflects work by researchers including Livina, Lenton, Wadhams and Maslowski (1). A reasonable scenario would be a sea-ice-free Arctic in five to ten years, with the number of ice-free days expanding from then on to several weeks, perhaps even months, before +2°C is reached.

    This is important because a more rapid loss of sea ice changes the Arctic’s albedo (reflectivity): dark seas absorb more heat than white ice, increasing warming. This feedback effectively squeezes down the carbon budget, and is underestimated in the IPCC’s 2013 report. A 2011 study, for example, found that if the Arctic were ice-free for one month a year, plus associated ice-extent decreases in other months, then, without taking cloud changes into account, the global impact would be about 0.2°C of warming. If there were no ice at all during the months of sunlight, the impact would be close to 0.5°C of global warming (2).

    It is a very credible scenario that the Arctic could indeed be sea-ice-free for a month in summer before warming reaches 2°C, but as far as I can ascertain this has not been considered in any carbon budget calculations. Warming of 0.2°C from a month of sea-ice-free conditions is roughly equivalent to ten years of current human emissions, which would have to be subtracted from the IPCC’s 2013 carbon budget, reducing it by around 40%.
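    The arithmetic behind that ~40% figure can be sketched in a few lines. The emission rate and budget size below are assumptions using round AR5-era numbers (roughly 40 GtCO2 per year of emissions and a ~1000 GtCO2 budget for a ~66% chance of staying under 2°C), not figures from the study itself:

```python
# Back-of-envelope check of the ~40% budget reduction. Assumed round numbers:
annual_emissions_gtco2 = 40      # approx. current human CO2 emissions per year
equivalent_years = 10            # text: 0.2 C of albedo warming ~ ten years of emissions
remaining_budget_gtco2 = 1000    # AR5-era budget for a ~66% chance of <2 C

# The albedo feedback acts like this much extra emitted CO2:
extra_gtco2 = annual_emissions_gtco2 * equivalent_years   # 400 GtCO2
reduction = extra_gtco2 / remaining_budget_gtco2

print(f"Effective budget cut: {reduction:.0%}")  # -> Effective budget cut: 40%
```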
    Climate sensitivity

    Short-term, or Equilibrium Climate Sensitivity (ECS), is the temperature increase resulting from a doubling of atmospheric carbon dioxide (CO2) levels, including such factors as rapid changes in snow and (sea) ice melt, and the behaviour of “fast” feedbacks including clouds and water vapour. Thus, with an ECS of 3°C, a doubling of atmospheric CO2 from the pre-industrial level of 280 parts per million (ppm) to 560 ppm would result in a 3°C global temperature increase. A related function is the Transient Climate Response to Cumulative CO2 Emissions (TCRE), which is used in the IPCC’s carbon budget. Arctic sea-ice loss, and the associated albedo change, is a fast feedback that is included in the CMIP5 models for the IPCC AR5 carbon budgets, but as discussed above, that process appears to have been significantly underestimated.

    The mid-range ECS estimate is generally around 3°C (range 2–4.5°C), and it plays out in the first hundred years or so after an injection of carbon dioxide into the atmosphere. The 2013 IPCC report finds that “Equilibrium climate sensitivity is likely in the range 1.5°C to 4.5°C” but “No best estimate for equilibrium climate sensitivity can now be given because of a lack of agreement on values across assessed lines of evidence and studies.” However, a recent paper by Sherwood, Bony et al. looking at clouds and atmospheric convective mixing finds that on “the basis of the available data… the new understanding presented here pushes the likely long-term global warming towards the upper end of model ranges.” Taking “the available observations at face value,” they write, “implies a most likely climate sensitivity of about 4°C, with a lower limit of about 3°C” (3).

    Writing in the Guardian, Skeptical Science’s Dana Nuccitelli explains that these “results are consistent with Fasullo & Trenberth (2012), who found that only the higher sensitivity climate models correctly simulated drying in key cloud-forming regions of the atmosphere.
    Likewise, preliminary results by scientists at the California Institute of Technology Jet Propulsion Laboratory presented at the 2013 AGU meeting showed that higher sensitivity models do the best job simulating observed cloud changes. These results are also consistent with Lauer et al. (2010) and Clement et al. (2009), which looked at cloud changes in the Pacific, finding the observations consistent with a positive cloud feedback” (4). If ECS is indeed more likely to be at the higher end of the range, this would diminish the remaining carbon budget. Quantifying a carbon budget for a ~4°C mid-point ECS has not been done as far as I can ascertain.

    Long-term earth system sensitivity

    Paleoclimatology (the study of past climates) suggests that if longer-term, “slow” feedbacks are taken into account – such as the decay of large ice sheets, changes in the carbon cycle (the changed efficiency of carbon sinks such as permafrost and methane clathrate stores, as well as biosphere stores such as peatlands and forests), and changes in vegetation coverage and reflectivity (albedo) – then the Earth’s sensitivity to a doubling of CO2 could be double the “fast” climate sensitivity predicted by most climate models, or around 6°C (5). These “slow” feedbacks amplify the initial warming burst. A measure of these effects for a doubling of CO2 is known as Earth System Sensitivity (ESS).

    Longer-term ESS is generally considered to come into play over periods from centuries to several millennia, depending on how fast greenhouse gas levels and temperature change. The problem is that the rate of climate change now being driven by human actions may be as fast as in any extended warming period over the past 65 million years, and it is projected to accelerate in the coming decades.
    This means that longer-term “slow” events associated with ESS – such as the loss of large ice sheets, and changes in Arctic and biosphere carbon stores – are starting to occur now, are happening much more quickly than expected, and will likely proceed at a significant scale within the coming hundred years. We face an event unprecedented in the last 65 million years: “fast” short-term and “slow” long-term climate sensitivity events occurring alongside one another in parallel, rather than one after the other in series as is usually the case. Thus, even as some of the “fast” warming is still to be realised due to thermal inertia, some of the “slow” feedbacks are already coming into play:

    “Evidence from Earth’s history suggests that slower surface albedo feedbacks due to vegetation change and melting of Greenland and Antarctica can come into play on the timescales of interest to humans, which could increase the sensitivity to significantly higher values, as much as 6°C … the slow feedback climate sensitivity has relevance in the Anthropocene era, since ice sheet/vegetation feedback may become significant on decadal-to-centennial timescales of interest to humans” (6).

    and

    “Unfortunately, slow feedbacks are amplifying on time scales that humans care about: decades, centuries, even millennia. As the planet warms, for example, ice sheets melt, exposing a darker surface that increases warming. Also, warming causes a net release of long-lived greenhouse gases from the ocean and soil. Vegetation changes that occur as climate warms from today’s situation will also have a significant amplifying effect, as forests move into tundra regions in North America and Eurasia” (7).
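    The relation between CO2 concentration and equilibrium warming for a given sensitivity is commonly taken to be logarithmic, so a sensitivity “per doubling” can be applied to any concentration. A minimal sketch, assuming that standard log2 form and the 280 ppm pre-industrial baseline quoted earlier (the function name is mine, for illustration only):

```python
import math

def equilibrium_warming(c_ppm, sensitivity_per_doubling_c, c0_ppm=280):
    """Warming for CO2 level c_ppm, given a sensitivity per CO2 doubling.

    Assumes the standard logarithmic forcing relation; c0_ppm is the
    pre-industrial baseline of 280 ppm used in the text.
    """
    return sensitivity_per_doubling_c * math.log2(c_ppm / c0_ppm)

# A doubling to 560 ppm yields the sensitivity itself:
print(equilibrium_warming(560, 3.0))  # "fast" ECS of 3 C  -> 3.0
print(equilibrium_warming(560, 6.0))  # "slow" ESS of ~6 C -> 6.0
```

    The same function shows why the ECS-versus-ESS distinction matters for a budget: the same 560 ppm endpoint implies twice the eventual warming once the slow feedbacks are counted.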
    The problem is that the IPCC carbon budget analysis assumes that none of these longer-term feedbacks will be materially relevant before 2°C of warming, and so excludes the possibility of large-scale permafrost, methane clathrate or less efficient biological stores (Amazon, tundra etc.) making contributions to atmospheric greenhouse gas levels and impacting on the carbon budget. Thus the IPCC 2013 report notes that “Accounting for … the release of greenhouse gases from permafrost will also lower…” the target, and that the CMIP5 modelling used for the IPCC’s carbon budgets does not include “explicit representation of permafrost soil carbon decomposition in response to future warming”. It also notes that “the climate sensitivity of a model may … not reflect the sensitivity of the full Earth system because those feedback processes [“slow feedbacks associated with vegetation changes and ice sheets”] are not considered”.

    Several lines of evidence suggest these assumptions are not robust. Recent research shows that the Amazon may often be releasing huge quantities of CO2 to the atmosphere, acting not as a carbon sink but as a source (8), and that the seafloor off the coast of Northern Siberia is releasing more than twice the amount of methane previously estimated, now on a par with the methane being released from the Arctic tundra (9). In February 2013, scientists using radiometric dating techniques on Russian cave formations to measure historic melting rates warned that a +1.5°C global rise in temperature compared to pre-industrial levels was enough to start a general permafrost melt. They found that “global climates only slightly warmer than today are sufficient to thaw extensive regions of permafrost.” Lead researcher Anton Vaks says that “1.5°C appears to be something of a tipping point” (10). In 2011, Schaefer, Zhang et al.
    warned: “The thaw and release of carbon currently frozen in permafrost will increase atmospheric CO2 concentrations and amplify surface warming to initiate a positive permafrost carbon feedback (PCF) on climate…. [Our] estimate may be low because it does not account for amplified surface warming due to the PCF itself…. We predict that the PCF will change the Arctic from a carbon sink to a source after the mid-2020s and is strong enough to cancel 42–88% of the total global land sink. The thaw and decay of permafrost carbon is irreversible and accounting for the PCF will require larger reductions in fossil fuel emissions to reach a target atmospheric CO2 concentration” (11). This very strong and disturbing finding – that permafrost decay is “irreversible” and requires a lower carbon budget – is not reflected in the IPCC’s figuring.

    Conclusion

    Climate change, with its non-linear events, tipping points and irreversible events – such as mass extinctions, destruction of ecosystems, the loss of large ice sheets and the triggering of large-scale releases of greenhouse gases from carbon stores such as permafrost and methane clathrates – contains many possibilities for catastrophic failure. If climate sensitivity is, in reality, at the high end of the range used for the IPCC’s carbon budgets, then we must adopt a very low risk of exceeding the target. As the previous post explained, if a risk-averse (pro-safety) approach is applied to carbon budgeting – say, less than a 10% probability of exceeding the 2°C target – there is simply no budget available, because it has already been used up. The notion that there is still “burnable carbon” is a myth.

    Notes

    (1) Livina, V.N. and T.M.
    Lenton (2013) “A recent tipping point in the Arctic sea-ice cover: abrupt and persistent increase in the seasonal cycle since 2007”, The Cryosphere 7: 275-286; UWA (2012) “Arctic scientist warns of dangerous climate change”, http://www.news.uwa.edu.au/201201304303/climate-science/arctic-scientists-warn-dangerous-climate-change, accessed 30 July 2013; Wadhams, P. (2012) “Arctic ice cover, ice thickness and tipping points”, AMBIO 41: 23-33; Maslowski, W., C.J. Kinney et al. (2012) “The Future of Arctic Sea Ice”, The Annual Review of Earth and Planetary Sciences 40: 625-654

    (2) Hudson, S. (2011) “Estimating the global radiative impact of the sea ice–albedo feedback in the Arctic”, JGRA, 16 August 2011; for a more detailed discussion, see http://www.climatecodered.org/2012/10/after-arctic-big-melt-1-hotter-planet.html

    (3) Sherwood, S.C., S. Bony et al. (2014) “Spread in model climate sensitivity traced to atmospheric convective mixing”, Nature 505: 37-42

    (4) Nuccitelli, D. (2014) “Global warming is being caused by humans, not the sun, and is highly sensitive to carbon, new research shows”, The Guardian, 10 January 2014

    (5) The Geological Society (2013) “An addendum to the Statement on Climate Change: Evidence from the Geological Record”, London, December 2013, www.geolsoc.org.uk/climatechange; Hansen, J. (2013) “Climate Sensitivity, Sea Level and Atmospheric Carbon Dioxide”, Philosophical Transactions of the Royal Society A, 371, 20120294, doi:10.1098/rsta.2012.0294

    (6) Previdi, M., B.G. Liepert et al. (2011) “Climate sensitivity in the Anthropocene”, Earth Syst. Dynam. Discuss. 2: 531-550

    (7) Hansen, J. (2013) “An Old Story, but Useful Lessons”, 26 September 2013, http://www.columbia.edu/~jeh1/mailings/2013/20130926_PTRSpaperDiscussion.pdf

    (8) Kirby, A. (2014) “Drought ‘makes Amazonia emit carbon’”, Climate News Network, 5 March 2014, http://www.climatenewsnetwork.net/2014/03/drought-makes-amazon-emit-carbon, accessed 7 April 2014; Brando, P.M., J.K. Balch et al.
    (2014) “Abrupt increases in Amazonian tree mortality due to drought–fire interactions”, PNAS, doi:10.1073/pnas.1305499111

    (9) Science Daily (2013) “Arctic seafloor methane releases double previous estimates”, 25 November 2013, http://www.sciencedaily.com/releases/2013/11/131125172113.htm, accessed 7 April 2014

    (10) Vaks, A., O.S. Gutareva et al. (2013) “Speleothems Reveal 500,000-Year History of Siberian Permafrost”, Science 340: 183-186

    (11) Khvorostyanov, D.V., P. Ciais et al. (2008) “Vulnerability of east Siberia’s frozen carbon stores to future warming”, Geophysical Research Letters 35: L10703; Schaefer, K., T. Zhang et al. (2011) “Amount and timing of permafrost carbon release in response to climate warming”, Tellus 63: 165-180
  • The Most Powerful Volcanic Eruption of the 20th Century

     


    Novarupta


    June 6th, 1912


    The morning of June 6th arrived on the Alaska peninsula to find the area which is now Katmai National Monument being shaken by numerous strong, shallow earthquakes. The most powerful volcanic eruption of the 20th Century was about to begin – but very few people knew about it. The Alaska peninsula has a low population density today, but in 1912 it was even lower. Beyond the land shaken by the earthquake activity, the beginnings of this event went almost unnoticed.

    Volcanic Monitoring – 1912 vs. Today

    Today the stirring of an important volcano draws enormous global attention. Weeks or even months before most large eruptions, a buzz circulates through an electronically-connected community of volcano scientists as clusters of small earthquakes are detected by a global array of seismographs. Many scientists working at diverse global locations interpret this data and begin to collaborate about an awakening volcano and the eruption that might follow. Reports are posted on the internet and news stories communicate the volcano’s activity to millions of people. Often it is a false alarm – the volcano is simply stirring.

    If the earthquakes strengthen and begin moving upwards, many of these scientists will travel to the area of potential eruption to make observations and set up a local network of data-gathering instruments.

    However, in 1912, Alaska was not a US state, very few scientists were supported to do volcanic studies and a worldwide network of seismic monitoring was not in place. Scientists were just starting to understand the mechanics of volcanic eruptions.

    Novarupta Volcano Erupts!

    On June 6th, 1912 a tremendous blast sent a large cloud of ash skyward and the eruption of the century was underway. People in Juneau, Alaska, about 750 miles from the volcano, heard the sound of the blast – over one hour after it occurred.

    For the next 60 hours the eruption sent tall, dark columns of tephra and gas high into the atmosphere. By the time the eruption ended, the surrounding land was devastated and about 30 cubic kilometers of ejecta blanketed the entire region. This is more ejecta than all of the other historic Alaska eruptions combined. It was also thirty times more than the 1980 eruption of Mount St. Helens and three times more than the 1991 eruption of Mount Pinatubo, the second largest of the 20th Century.
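    The ratios quoted above can be checked with round numbers. The St. Helens and Pinatubo figures below are commonly cited approximations (roughly 1 and 10 cubic kilometers of ejecta respectively), used here only to illustrate the comparison, not taken from this article:

```python
# Approximate ejecta volumes in cubic kilometres (round, commonly cited values).
novarupta_km3 = 30        # from the text: ~30 km3 of ejecta in 1912
st_helens_1980_km3 = 1    # assumed: ~1 km3 for Mount St. Helens, 1980
pinatubo_1991_km3 = 10    # assumed: ~10 km3 for Mount Pinatubo, 1991

print(novarupta_km3 / st_helens_1980_km3)  # ~30x St. Helens
print(novarupta_km3 / pinatubo_1991_km3)   # ~3x Pinatubo
```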

    Impact of the Eruption


    The inhabitants of Kodiak, Alaska, on Kodiak Island, about 100 miles away, were among the first people to realize the severity of this eruption. The noise from the blast would have commanded their attention, and the sight of an ash cloud rising quickly to an elevation of 20 miles and then drifting towards them would have been terrifying.

    Within just a few hours after the eruption a thick blanket of ash began falling upon the town – and ash continued falling for the next three days, covering the town up to one foot deep. The residents of Kodiak were forced to take shelter indoors. Many buildings collapsed from the weight of heavy ash on their roofs.
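    The roof collapses are easy to rationalise with a rough load estimate. The ash density here is an assumption (freshly fallen ash is commonly quoted at roughly 700–1200 kg per cubic metre, more when damp); only the one-foot depth comes from the text:

```python
# Rough roof-load estimate for a one-foot ash blanket.
ash_depth_m = 0.3           # ~1 foot of ash, from the text
ash_density_kg_m3 = 1000    # assumed: damp fallen ash, order-of-magnitude value

load_kg_m2 = ash_depth_m * ash_density_kg_m3
print(load_kg_m2)  # -> 300.0 kg per square metre of roof

# For comparison, many light roofs are designed for well under 150 kg/m2 of
# live load, so a load of this order plausibly explains the collapses.
```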

    Outside, the ash made breathing difficult, stuck to moist eyes and completely blocked the light of the sun at midday. Any animal or person who was caught outside probably died from suffocation, blindness or an inability to find food and water.

    Pyroclastic Flow

    Back on the peninsula, heavy pyroclastic flows swept over 20 kilometers down the valley of Knife Creek and the upper Ukak River. (A pyroclastic flow is a mixture of superheated gas, dust, and ash that is heavier than the surrounding air and flows down the flank of the volcano with great speed and force.)

    These flows completely filled the valley of Knife Creek with ash, converting it from a V-shaped valley into a broad flat plain. By the time the eruption was over, the world’s most extensive historic ignimbrite (solidified pyroclastic flow deposit) had formed. It covered a surface area of over 120 square kilometers, to depths of over 200 meters near its source. (The satellite image at right shows the original geographic extent of pyroclastic flow deposits as a yellow line.)

    Volcanic Ash

    Immediately after the June 6th blast, an ash cloud rose to an elevation of about 20 miles. It was then carried downwind, dropping ash as it moved. The ash deposits were thickest near the source of the eruption and decreased in thickness downwind. (The satellite image above/right has red contour lines showing the thickness of the ash deposits in the area of the eruption. Measurable thicknesses of ash fell hundreds of miles beyond the one-meter contour line.)
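    The downwind thinning described by those contour lines is often modelled in tephra-fall studies as roughly exponential decay of deposit thickness with distance. A sketch of that shape, with an entirely hypothetical near-source thickness and decay length (neither is a measurement of the 1912 deposit):

```python
import math

def ash_thickness_m(distance_km, t0_m=3.0, decay_length_km=50.0):
    """Exponential-thinning model: thickness = t0 * exp(-distance / L).

    t0_m and decay_length_km are hypothetical illustration values,
    not measurements of the Novarupta deposit.
    """
    return t0_m * math.exp(-distance_km / decay_length_km)

# Thickness halves roughly every 35 km with this decay length:
for d_km in (0, 50, 100, 200):
    print(d_km, round(ash_thickness_m(d_km), 3))
```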

    When the eruption stopped on June 9th, the ash cloud had spread across southern Alaska, most of western Canada and several U.S. states. Winds then carried it across North America. It reached Africa on June 17th.

    Although the eruption had these far-reaching effects, most people outside of Alaska did not know that a volcano had erupted. More surprising is that no one knew for sure which of the many volcanoes on the Alaska peninsula was responsible. Most assumed that Mount Katmai had erupted but they were wrong.

    Valley of Ten Thousand Smokes

    After the eruption, the National Geographic Society began sending expeditions to Alaska to survey the results of the eruption and to inventory the volcanoes of the Alaskan peninsula. Robert Griggs led four of these expeditions. During his 1916 expedition, Griggs and three others traveled inland to the eruption area. What they found exceeded their imagination.

    First, the valley of Knife Creek was now barren, level and filled with a loose, sandy ash which was still hot at depth. Thousands of jets of steam were roaring from the ground. Griggs was so impressed that he called it the “Valley of 10,000 Smokes”.

    James Hine, a zoologist on the expedition described the location:

    “Having reached the summit of Katmai Pass, the Valley of Ten Thousand Smokes spreads out before one with no part of the view obstructed. My first thought was: We have reached the modern inferno. I was horrified, and yet, curiosity to see all at close range captivated me. Although sure that at almost every step I would sink beneath the earth’s crust into a chasm intensely hot, I pushed on as soon as I found myself safely over a particularly dangerous-appearing area. I didn’t like it, and yet I did.”

     

    Katmai Caldera & Novarupta Dome


    During the eruption a large amount of magma was drained from magma chambers below. The result was a removal of support from beneath Mount Katmai, which is six miles from Novarupta. The top several hundred feet of Katmai – about one cubic mile of material – collapsed into a magma chamber below. This collapse produced a crater about two miles in diameter and over 800 feet deep.

    Early investigators assumed that Katmai was responsible for the eruption. This assumption rested on three observations: Katmai was near the center of the impact area, it was visibly reduced in height, and early witnesses believed the eruption cloud had ascended from the Katmai area. Closer observation was not possible, and expeditions into the impact zone would have been very difficult to accomplish.

    The first scientific investigation to get an up-close look at the eruption area did not occur until 1916 when Robert Griggs found a 2-mile-wide caldera where Mount Katmai once stood. He also found a lava dome at the Novarupta vent. These observations convinced Griggs that Katmai was the source of the eruption.

    It was not until the 1950s – over forty years after the eruption – that investigators finally realized that ash and pyroclastic flow thicknesses were greatest in the Novarupta area. This discovery revealed that Novarupta – and not Katmai – was the volcano responsible for the eruption (see satellite image medium resolution, 164 KB or higher resolution, 1330 KB). It is possibly the most famous misattribution in the history of volcanic study.

    Could Novarupta Erupt Again?


    Other large eruptions on the Alaska peninsula are certain to happen in the future. Within the last 4000 years there have been at least seven Novarupta-scale eruptions within 500 miles of where Anchorage is located today. Future activity is expected because the Alaska peninsula is on an active convergent boundary.

    These large eruptions will have enormous local and global impacts. Local impacts will include the lahars, pyroclastic flows, lava flows and ash falls that are expected from a volcanic eruption. These can result in significant loss of life and financial damage. The activity of these volcanoes is monitored by the United States Geological Survey and others so that eruptions can be predicted and their effects mitigated.

    Large eruptions of Novarupta’s scale at high latitudes can have a significant impact upon global climate. Recent studies have linked high latitude volcanic eruptions with altered surface temperature patterns and low rainfall levels in many parts of the world. The 1912 eruption of Novarupta and other Alaskan volcano eruptions have been linked with drought and temperature changes in northern Africa.

    Another significant impact is the distribution of volcanic ash. The image at right shows the ash fall impact areas for five important volcanic eruptions of the 20th century. Augustine (1976), St. Helens (1980), Redoubt (1990) and Spurr (1992) all produced ash falls of significant regional impact. However, Novarupta’s ash fall was far greater than that of any other Alaska eruption in recorded history, and its volume exceeded that of all other recorded Alaska eruptions combined.

    One of the most important reasons to monitor volcanic eruptions is the danger they present to commercial air traffic. Jet engines process enormous amounts of air, and flying through finely dispersed ash can cause engine failure. Striking the tiny ash particles at high speed is very similar to sandblasting: it can “frost” the jet’s windshield and damage external parts of the plane. Before this danger was appreciated, several commercial jets sustained serious damage in the air and were forced to land. Eruptions the size of Spurr, Augustine, Redoubt and St. Helens can damage jets flying over 1000 miles away. An eruption the size of Novarupta would ground commercial jet traffic across the North American continent.

    What Can We Do About It?

    People cannot prevent this type of eruption. They can, however, assess the potential impact, develop land with the possibility of loss in mind, plan a response, educate the public and key decision makers, and monitor the region where an eruption might occur.

    The more you know about a natural hazard, the greater your chances of avoiding injury or loss. We are lucky to have this record of the past.


  • How did Australian drylands cause record land carbon sink in 2011?


    News

    May 27, 2014

    How did Australian drylands cause record land carbon sink in 2011?

    Each year, scientists assess how much carbon the ocean, land and atmosphere have absorbed. In 2011, land took up the largest amount of carbon since measurements began in 1958 – 4.1 Petagrammes (Pg), compared with the decadal average of 2.6 Pg. Now an international team has discovered that the bulk of this uptake was due to plant growth in dry regions of the southern hemisphere, particularly Australia.

    “This result was surprising considering the low productivity of semi-arid biomes,” Ben Poulter of Montana State University, US, told environmentalresearchweb. “But we discovered that our findings were explained by a prolonged La Niña event that led to high rainfall, and that greening trends in dryland systems contributed to greater carbon uptake.”

    Drylands cover roughly 45% of the Earth’s land surface. Around 60% of the additional land carbon sink in 2011 was due to plant growth in semi-arid regions of Australia, the scientists discovered. La Niña conditions have brought six consecutive seasons of extra rainfall to such areas. Temperate South America and southern Africa were also significant contributors.
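    The article’s figures allow a quick back-of-the-envelope check; a sketch (the 4.1 Pg, 2.6 Pg and 60% values come from the text above, and the breakdown itself is only illustrative):

    ```python
    # Record 2011 land carbon sink vs. the decadal average (Pg C, from the article)
    sink_2011 = 4.1
    decadal_avg = 2.6
    anomaly = sink_2011 - decadal_avg       # extra uptake in 2011
    australia_share = 0.60 * anomaly        # ~60% attributed to Australian semi-arid regions
    print(round(anomaly, 1), round(australia_share, 2))  # 1.5 0.9
    ```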

    “Because dryland systems have low productivity and store low amounts of carbon in vegetation and soils compared to forests in tropical or boreal systems, they have been overlooked in terms of their role in the global carbon cycle,” said Poulter. “Our study is the first to point out that the atmospheric carbon dioxide growth rate is becoming more and more influenced by vegetation activity in dryland systems.”

    Previously, tropical rainforests were thought to be the main cause of the variability in the land carbon sink from year to year. Now it seems that semi-arid ecosystems will play an increasing role. Since 1981, vegetation cover in Australia has expanded by 6%, perhaps due to altered rainfall patterns, increased atmospheric carbon dioxide affecting leaf pores and water use efficiency, or woody encroachment following land-use changes. At the same time, the sensitivity of the continent’s net carbon uptake to rainfall has increased by a factor of four.

    “There has been an increase of scientific publications highlighting greening trends of vegetation in dryland ecosystems, where various metrics of vegetation activity have increased since the early 1980s,” said Poulter. “We build on these findings to demonstrate that the greening trends are altering the biogeochemistry of dryland systems during extreme climate years.”

    Poulter and colleagues estimated the land carbon sink over the last 30 years using three different techniques – a terrestrial biogeochemical model, atmospheric carbon dioxide inversion and global carbon budget accounting. All three methods showed a record land carbon sink in 2011. The researchers also used satellite data to look at vegetation cover.

    “We first observed an anomalously large 2011 land carbon sink during the annual global carbon budget assessment coordinated by the Global Carbon Project,” said Poulter. “This annual activity estimates the land carbon sink as the ‘residual’ of a carbon balance equation that includes the better known source terms, fossil fuels and net land use change, and sink terms, the atmosphere and ocean.”
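    The residual bookkeeping Poulter describes can be sketched in one line: the better-known source terms minus the measured sink terms. The numbers below are round placeholders chosen only to illustrate the arithmetic, not the study’s values:

    ```python
    # Land sink = (fossil fuel + net land-use emissions) - (atmospheric growth + ocean sink),
    # with all terms in Pg C per year.
    def land_sink_residual(fossil, land_use, atm_growth, ocean_sink):
        return (fossil + land_use) - (atm_growth + ocean_sink)

    # Placeholder values, for illustration only:
    print(round(land_sink_residual(9.5, 1.0, 4.0, 2.4), 1))  # 4.1
    ```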

    Now the team would like to understand what’s causing the dryland greening. Current candidates for the explanation include land use and grazing, fire management, climate change, and increasing atmospheric carbon dioxide. “In addition, the carbon stored in dryland systems is vulnerable to wildfires and tends to be returned to the atmosphere rather quickly,” said Poulter. “A better understanding of the residence time of dryland carbon stocks is important to interpret global interannual carbon dioxide variability.”

    About the author

    Liz Kalaugher is editor of environmentalresearchweb.

  • [New post] NSW redistribution – what could happen?


    New post on The Tally Room

    NSW redistribution – what could happen?

    by Ben Raue

    Every three years, approximately one year after the federal election, Australia’s population is assessed, and each state and territory is given a set number of seats to be filled in the next Parliament, based on population. When the number of seats allocated to a state changes, a redistribution is immediately triggered to draw up new electoral boundaries.

    This time around, population shifts have guaranteed that New South Wales will lose its 48th seat and Western Australia will gain a 16th. It now appears that the ACT’s population will not be sufficient to give it a third seat, after that first appeared possible in late 2013.

    These redistributions will of necessity cause significant changes to borders, in order to create a whole new seat in WA and squeeze NSW’s population into 47 seats.

    Electorates will need to be drawn within tolerance of two quotas. One quota is the average population per electorate at the time of the redistribution; the other is the average projected population per electorate as of 3.5 years after the conclusion of the redistribution. These quotas will be 1/47th of the NSW population and 1/16th of the WA population.
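    The quota arithmetic can be sketched as follows. The populations below are hypothetical placeholders, not actual enrolment figures, and the tolerance value is illustrative rather than the AEC’s official figure:

    ```python
    # Quota = average (or projected) population per electorate for the state.
    def quota(state_population, seats):
        return state_population / seats

    nsw_quota = quota(5_170_000, 47)   # placeholder NSW population / 47 seats
    wa_quota = quota(1_600_000, 16)    # placeholder WA population / 16 seats

    # Each proposed electorate must stay close to the quota;
    # the tolerance here is illustrative only:
    def within_tolerance(electorate_pop, q, tol=0.10):
        return abs(electorate_pop - q) / q <= tol

    print(round(nsw_quota), round(wa_quota))     # 110000 100000
    print(within_tolerance(105_000, nsw_quota))  # True
    ```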

    Below the fold, I’ve posted my analysis of the likely trends in the NSW redistribution, and have produced an interactive map showing the population quotas in each electorate.

    In short, I think the seat most likely to be abolished is Hunter, which will have significant knock-on effects in the Hunter region and in western NSW. Seats in inner Sydney will shift east, while seats throughout Western Sydney will expand to the south-west, shifting Werriwa and Macarthur further into Sydney’s fringe.

    I’ll be back tomorrow with a similar analysis of the prospects in Western Australia.

    Read more of this post

    Ben Raue | June 8, 2014 at 11:50 am | Tags: Australia 2016, New South Wales, Redistribution | Categories: Uncategorized | URL: http://wp.me/ppI95-5d9

     


  • Major greenhouse threat from new CFCs found in air

    Major greenhouse threat from new CFCs found in air

    By on 6 June 2014

    The Daily Climate

    Photo of scientists launching a balloon to take ozone hole measurements courtesy NOAA.

    Twenty-five years after the world first moved to protect the ozone layer, British scientists have found three new potentially damaging gases in the atmosphere. While they do not expect the gases to do much damage to the ozone layer, they think the gases may add to global warming.

    The scientists, at the University of East Anglia (UEA), in the UK, have found two new chlorofluorocarbons (CFCs) and one new hydrochlorofluorocarbon (HCFC). Their research, published in the journal Atmosphere, appears shortly after the same team found four other man-made gases in March this year.

    They made the discovery by comparing samples of today’s air with samples collected between 1978 and 2012 in the unpolluted air of Tasmania, and samples taken during aircraft flights. Their measurements show that two of the new gases have reached the atmosphere in recent years.

    Harmful effects

    Ozone protects living things against the harmful effects of ultraviolet radiation from the sun, which can cause cancer and blindness in humans, as well as harming crops and wildlife on land and at sea.

    Scientists discovered in 1985 that CFCs − the manmade gases used mainly in refrigerants and aerosols − were damaging the Earth’s protective layer of ozone over the Antarctic, thinning it and causing what has become known as “the ozone hole.”  A similar, but less marked, weakening occurs over the Arctic as well.

    That led to the world’s governments agreeing to the Montreal Protocol in 1987. The production of all CFCs was banned in 2010, and the prediction was that, if countries observed the Protocol strictly, the damage to the ozone layer would be repaired by mid-century.

    Johannes Laube, from UEA’s school of environmental sciences, said: “Two of the gases that we found earlier in the year were particularly worrying because they were still accumulating significantly up until 2012.

    “Emission increases of this scale have not been seen for any other CFCs since controls were introduced during the 1990s, but they are nowhere near peak CFC emissions of the 1980s.”

    He said the three gases that had now been identified were in much lower concentrations than the ones found in March, so they were unlikely to threaten the ozone layer. But the findings strengthened the team’s argument that there are many more gases in the atmosphere that still await identification, and which together might well have an impact.

    Corinna Kloss, who undertook the research while at UEA and is now at the Jülich Research Centre in Germany, said: “All seven gases were only around in the atmosphere in very small amounts before the 1980s, with four not present at all before the 1960s, which suggests they are man-made.

    Possible sources

    “Where these new gases are coming from should be investigated,” she added. Possible sources include industrial solvents, feedstock chemicals, and refrigerants.

    But there is a further concern over the chemicals that destroy the ozone layer, and that is their ability also to intensify global warming.

    Laube told the Climate News Network that the concentrations of all three newly-discovered gases were about one-tenth of those found in March. None seemed to have drastically increased in concentration in recent years, so he thought they were unlikely to be a problem to the ozone layer in the foreseeable future.

    On global warming, however, one gas (HCFC-225ca) had been estimated to be 127 times stronger than CO2 on a per-kilogram basis. This, Laube estimated, meant an impact equivalent to about 50,000 tons of CO2 for 2012.

    “For the two CFCs, global warming potentials are currently unknown,” he said. “If we look at other similar CFCs, they are likely to be 5,000 to 10,000 times more effective than CO2.”

    This meant a best estimate for 2012 of emissions of one gas equivalent to 50,000 to 100,000 tons of carbon dioxide, and of the other gas equivalent to 40,000 to 80,000 tons.
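    The per-kilogram figures above translate into CO2-equivalents by simple multiplication; a sketch using the article’s numbers (the implied tonnages of the gases themselves are derived here, not stated in the article):

    ```python
    # HCFC-225ca: ~127x CO2 per kilogram; 2012 impact estimated at ~50,000 t CO2e.
    gwp_hcfc225ca = 127
    co2e_tonnes = 50_000
    implied_gas_tonnes = co2e_tonnes / gwp_hcfc225ca
    print(round(implied_gas_tonnes))  # 394 -> roughly 400 tonnes of the gas itself

    # The two CFCs: with an assumed GWP of ~5,000, CO2e ranges of 50,000-100,000 t
    # and 40,000-80,000 t imply emissions of roughly 10 t and 8 t of gas:
    print(50_000 / 5_000, 40_000 / 5_000)  # 10.0 8.0
    ```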

    For comparison, global CO2 emissions in 2011 were estimated by the Netherlands Environmental Assessment Agency at 34 billion tons.

    Source: The Daily Climate. Reproduced with permission.