Plan to use Aboriginal land as a nuclear waste dump is flawed and misguided
Radioactive waste management is difficult, but secretive deals made without Aboriginal Traditional Owners’ full consent are even more worrying. A transparent debate is needed
Protesters opposing the national radioactive waste dump proposed for Muckaty in the Northern Territory in 2011. Photograph: Alan Porritt/AAP
This week, federal resource minister Gary Gray is talking radioactive waste with Aboriginal people in remote central Australia.
Six years ago, an Aboriginal clan group, the Northern Land Council (NLC), and the then Howard government signed a secret deal to develop Australia’s first purpose-built national radioactive waste dump at Muckaty, north of Tennant Creek in the Northern Territory.
The commercial-in-confidence plan saw the clan group “volunteer” an area of the shared Muckaty Land Trust for the burial and above ground storage of radioactive waste in return for federal payments, promises and a “package of benefits” worth around $12m.
The deal was not known about or supported by the rest of the Muckaty Traditional Owners and remains the source of bitter contest, deep opposition and a Federal Court challenge. Now Gray is back to talk with the NLC about a second site nomination on Muckaty. Unfortunately the new plan appears to follow the old pattern of secrecy, exclusion and contest.
The original Howard plan was energetically embraced and promoted by former resource minister Martin Ferguson, despite conflicting with Labor policy and senior ALP figures describing the legislation to make the dump possible as “sordid” and “arrogant”.
Ferguson’s approach to radioactive waste management was characterised by a closed mind and a locked door. Aboriginal Traditional Owners opposed to the dump plan had their meeting requests rejected and correspondence effectively ignored. Unsurprisingly, community confidence in the process eroded.
Radioactive waste is a serious and growing international challenge. Despite decades of industry assurances and high-cost government projects, not one nation has a final disposal facility for high-level radioactive waste. Division and debate run deep over how best to manage this material.
But while the disagreements are many, there is a growing consensus about some fundamental approaches to radioactive waste management, especially when it comes to community consent.
In a 2006 report, an expert UK committee on radioactive waste management stated “it is generally considered that a voluntary process is essential to ensure equity, efficiency and the likelihood of successfully completing the process. There is a growing recognition that it is not ethically acceptable for a society to impose a radioactive waste facility on an unwilling community.”
The current Muckaty plan and process are sharply at odds with this common sense and common decency approach.
They are also in conflict with Australia’s international obligations under the UN Declaration on the Rights of Indigenous Peoples, which explicitly requires that “states shall take effective measures to ensure that no storage or disposal of hazardous materials shall take place in the lands or territories of Indigenous peoples without their free, prior and informed consent.”
The Muckaty plan lacks consent at home and credibility abroad. It is flawed and failing, and it is time for a new approach – one that reflects and is informed by best practice, sound science and respect.
Radioactive waste management is difficult. Only time can take the heat out of the waste – but transparent and robust processes and policy development can take the heat out of the waste debate.
Australia has never had an independent assessment of the best (or least worst) way to manage our radioactive waste. Decades ago, unelected bureaucrats decided a centralised remote dump was the best model, and ever since a chain reaction of politicians has tried – and failed – to find a compliant postcode.
Australia is better placed than some countries to do things differently – and better. We have much less radioactive waste than nations with domestic nuclear power and ours is mostly stored at two secure federal sites – the Woomera prohibited area in South Australia and the Australian Nuclear Science and Technology Organisation’s Lucas Heights nuclear facility in southern Sydney.
ANSTO, which produces and houses most of Australia’s radioactive waste, is upgrading its storage facilities. Both the organisation and the federal nuclear regulator agree that ANSTO has the ability and capacity to securely manage all radioactive waste on site, including material due for return from overseas.
This reality provides Australia with some much-needed breathing space. For more than two decades, politicians have talked big and listened little – and they have spectacularly failed to come up with an agreed, credible and mature approach to radioactive waste management.
It is time to move away from the obsession with finding a place to dump and instead build a space to discuss. We need to get the policy architecture right so that we do not leave future generations of Australians a radioactive legacy that is badly wrong.
A public and independent national commission would advance the discourse on what constitutes responsible radioactive waste management and help move all stakeholders out of the trenches and to the table.
In a long overdue and most welcome change of style, if not yet of substance, the latest federal minister with responsibility for this issue has acknowledged that there are deep Aboriginal and wider concerns over the Muckaty plan.
A minister named Gray should be well-placed to show leadership on an issue of inter-generational national importance that is not – and should never be – just black and white.
Bob Ackley may be the only person who has driven up and down every single street — 1,500 miles total — in Washington, D.C.
While Ackley, a plain-speaking New Englander, enjoyed exploring the nation’s capital, which he described as “beautiful,” this was serious business. He was measuring leaks of methane, a potent greenhouse gas that is also the main component of natural gas. Measured in terms of warming the atmosphere over a century, methane is about 25 times more potent than carbon dioxide.
In January, the former gas inspector drove around with researchers Robert Jackson, a scientist at Duke University’s Nicholas School of the Environment, and Boston University’s Nathan Phillips. They were trying to create a map of all the gas leaks in the district, some of which can pose a safety hazard as well as contribute to climate change.
“It took me 21 days, working about 14 hours a day,” said Ackley.
The team was replicating an experiment they did last year in Boston. There, as in D.C., they drove a car equipped with special sensors, made by the Santa Clara, Calif., company Picarro Inc., that detected and mapped leaks from the aging pipeline systems underlying the city.
Although the results for the District of Columbia are not final, preliminary numbers indicate that the nation’s capital has thousands of leaks from its natural gas distribution system. It has a number of leaks per road mile similar to that of Boston, but has about twice as many miles of road, said Jackson.
The district also appears to have bigger leaks than Boston’s, he said. In Boston, the researchers counted 3,356 leaks. They also determined whether leaks came from natural gas pipes or from natural sources such as landfills, where methane is created by decaying garbage.
Because methane from different sources contains different ratios of carbon isotopes, instruments can determine whether a sample is coming from a fossil fuel source or from a landfill or a wetland.
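As a rough illustration of how that attribution works (the cutoff below is an assumed, illustrative value, not the researchers’ actual calibration): fossil methane is typically enriched in carbon-13 relative to microbial methane from landfills and wetlands, so a simple classifier can compare a sample’s δ13C value against a boundary.

```python
# Illustrative sketch only: classify a methane sample by its carbon
# isotope signature. The -50 per mil cutoff is an assumed round number;
# real source attribution uses calibrated ranges and additional tracers.

def classify_methane_source(delta_13c_permil: float) -> str:
    """Guess a methane sample's origin from its delta-13C value."""
    if delta_13c_permil > -50.0:
        return "likely fossil (pipeline gas)"
    return "likely biogenic (landfill or wetland)"

print(classify_methane_source(-38.0))  # heavier isotope ratio -> fossil
print(classify_methane_source(-62.0))  # lighter isotope ratio -> biogenic
```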
A problem in older cities
Most of the leaks the researchers found came from the pipeline system, said Phillips. This is not surprising, given the age of many of the pipeline systems in older cities.
“The problem that we’ve seen in Boston is not unique to Boston. It’s something that characterizes the Eastern Seaboard,” said Phillips.
The technology the team used was pioneered by Picarro. It allowed the researchers to detect leaks, measure the methane concentrations in them, and map them. That’s the first phase of the work.
Next, they try to learn more about the rate at which gas from various leaks is being released to the atmosphere. To do that, the researchers have to make a second trip, returning to the source of a representative set of leaks.
They’re currently doing this in Boston. Many of the leaks, said Phillips, are at manholes, or on curbs where the sidewalk meets the asphalt.
To measure the leak rate, the researchers cover the leaking area with an empty container of a known volume and measure the rate at which the container fills up with gas.
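In essence the chamber method is simple arithmetic: the leak’s flow rate equals the chamber’s volume multiplied by the rate at which the methane concentration inside it rises. A minimal sketch, with hypothetical numbers:

```python
# Minimal sketch of the known-volume chamber method described above.
# The leak rate is the chamber volume times the concentration rise rate.
# All numbers here are hypothetical.

def leak_rate_l_per_min(chamber_volume_l: float, conc_start_ppm: float,
                        conc_end_ppm: float, minutes: float) -> float:
    """Methane flowing into the chamber, in liters of CH4 per minute."""
    rise_ppm_per_min = (conc_end_ppm - conc_start_ppm) / minutes
    return chamber_volume_l * rise_ppm_per_min * 1e-6  # ppm = 1e-6 by volume

# Example: a 20 L chamber climbing from 5 ppm to 5,005 ppm in 2 minutes.
print(leak_rate_l_per_min(20.0, 5.0, 5005.0, 2.0))  # -> 0.05 L/min
```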
Once they have measured leak rates from various locations, they plan to create a statistical distribution to represent an overall estimate of just how much methane is leaking, said Phillips. The researchers plan to compare this to the results of another experiment in which sensors are monitoring the city’s air from the tops of buildings, seeing how much methane can be measured rising out of the city into the atmosphere.
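One plausible form of that extrapolation (a sketch under invented numbers; the team’s exact statistical approach isn’t detailed here) is to treat the measured leaks as a sample and scale the sample mean up by the total leak count:

```python
# Sketch: scale a sample of measured leak rates up to a citywide total.
# The rates below are invented; leak-rate distributions are typically
# right-skewed, with a few large leaks dominating the total.
import statistics

measured_rates = [0.02, 0.05, 0.05, 0.08, 0.1, 0.3, 0.4, 1.5, 4.0]  # L/min
total_leaks = 3356  # leaks counted in Boston

citywide = statistics.mean(measured_rates) * total_leaks
print(f"~{citywide:,.0f} L/min citywide, if the sample is representative")
```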
Seeping from long-buried pipes
Just how much methane is leaking from the aging pipes of Boston or the District of Columbia is a measurement that could change whether natural gas can be regarded as a more climate-friendly source of energy.
The effort to figure out where those leaks occur, and how big they are, is in its infancy.
Distribution, the part of the gas supply chain that Jackson and Phillips focus on, involves a complicated patchwork of possibly leaky pipelines, particularly in older cities.
While gas utilities certainly prioritize plugging leaks that could cause explosions, they have fewer incentives to plug those that don’t pose an explosion risk.
In a city like Washington, the pipeline distribution system consists of pipe materials that include cast iron, unprotected or protected steel, copper and plastic. Cast iron is often the oldest and leakiest, especially at the joints, although other pipeline materials can also develop leaks.
According to 2012 data reported to the Pipeline and Hazardous Materials Safety Administration by Washington Gas Light Co., the company has about 1,200 total miles of pipeline in the District of Columbia.
Of that length, 419 miles is cast iron. There is less than 100 miles of unprotected steel, and the majority of what’s left is divided between coated steel and plastic.
Too small to explode
Gas companies prioritize finding and fixing any leaks known as “Grade 1” leaks. These are leaks likely to be explosion hazards, where gas is collecting and concentrating. If gas is leaking to the atmosphere and does not pose a safety hazard, it may not be made a priority for repair.
“Depending on the seriousness of the leak, the [Department of Transportation] guidelines state whether our crews repair the leak immediately or follow up with the repair at a later time,” wrote Ruben Rodriguez, director of corporate communications for the company, in an email.
While it might seem as if gas companies would prioritize fixing any large leaks, since they would result in lost profits, Jackson said those financial incentives are not always as strong as one might think.
Gas companies calculate the difference between the amount of gas they send out and the amount that is metered at customers’ homes or businesses. The difference, which is primarily caused by leaks and pressure and temperature errors in gas measurement between where the gas is sent out and where it is metered, is referred to as “lost and unaccounted-for” gas.
Some public utility commissions allow the utilities to charge their customers for this difference. In the District of Columbia, the charge makes up 3 percent of a customer’s monthly cost, The Washington Post reported in March.
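The underlying accounting is straightforward (figures below are invented for illustration):

```python
# "Lost and unaccounted-for" gas, as described above: the gap between
# what the utility sends out and what customer meters register.
# Figures are hypothetical.
gas_sent_out = 100_000  # thousand cubic feet per day, say
gas_metered = 97_000

lauf = gas_sent_out - gas_metered
print(f"LAUF: {lauf} ({lauf / gas_sent_out:.1%} of sendout)")  # -> 3.0%
```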
“I think usually the gas companies know where the big leaks are,” said Jackson. “It’s very common for us to step out of a car and start sampling on the street and have people walk out of a house or restaurant and say, ‘What are you doing?’ When we tell them we are looking at natural gas leaks, you have someone say: ‘I have called this leak in multiple times.’”
Jackson is quick to point out that the object of his work in D.C. and Boston is not simply to tell gas companies they have thousands of leaks. Rather, it’s to understand which leaks are important to plug, and how they are distributed throughout the system.
If the researchers are able to determine that 10 percent of the leaks are leaking 90 percent of the gas, which is possible, then fixing just a few leaks will result in a big reduction in greenhouse gas emissions, said Jackson.
“It’s not very useful for us to say ‘There are 3,300 leaks in Boston; go fix them all.’ It’s much more useful to say, ‘If we fix these 100 leaks, we’ll keep 50 percent of the gas from leaking out of the system,’” he said.
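A sketch of the prioritization Jackson describes, with invented leak rates: rank leaks from biggest to smallest and count how few repairs would capture half the total flow.

```python
# Sketch: how many of the biggest leaks must be fixed to capture a given
# fraction of total leakage? Leak rates here are invented.
def leaks_to_fix(rates, target_fraction=0.5):
    """Smallest number of largest leaks whose combined flow hits the target."""
    ranked = sorted(rates, reverse=True)
    goal = target_fraction * sum(ranked)
    captured = 0.0
    for count, rate in enumerate(ranked, start=1):
        captured += rate
        if captured >= goal:
            return count
    return len(ranked)

rates = [60.0, 40.0, 20.0] + [0.05] * 997  # 3 big leaks, 997 tiny ones
print(leaks_to_fix(rates))  # -> 2: fixing two leaks captures half the gas
```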
For the past decade I’ve been a participant in a high-stakes energy policy debate — writing books, giving lectures, and appearing on radio and television to point out how downright dumb it is for America to continue relying on fossil fuels. Oil, coal, and natural gas are finite and depleting, and burning them changes Earth’s climate and compromises our future.
In the past two or three years this debate has reached a significant turning point. Evidence that climate change is real and caused by human activity has become irrefutable, and serious climate impacts (such as the melting of the Arctic ice cap) have begun appearing sooner, and with greater severity, than had been forecast. Yet at the same time, the notion that fossil fuels are supply-constrained has gone from being generally dismissed, to being partially accepted, to being vociferously dismissed. The increasingly dire climate story has achieved widespread (though still insufficient) coverage, but the puzzling reversals of public perception regarding fossil fuel scarcity or abundance have received little analysis outside the specialist literature. Yet claims of abundance are being used by the fossil fuel industry to change the public conversation about energy and climate, especially in the United States, from one of “How shall we reduce our carbon emissions?” to “How shall we spend our new-found energy wealth?”
This is an insidious and misleading tactic. The abundance argument is based not so much on solid data (though oil and gas production figures have indeed surged in the United States) as on exaggerations about future production potential, and on a pattern of denial regarding steep costs to the environment and human health.
Permit me to use a metaphor to frame this discussion about fossil fuel abundance or scarcity. Since all debates are contests, at least superficially, it’s possible to summarize this one as if it were a game — like a soccer match or a bowling tournament. Of course, it is far more than just a game; the stakes, after all, may amount to the survival or failure of industrial civilization. But games are fun, and it’s easy to keep track of the score. So … let the metaphor begin!
First, who are the teams? On one side we have the oil and gas industry, its public relations minions and its bankers, as well as a few official agencies — including the U.S. Energy Information Administration and the International Energy Agency — that tend to parrot industry statistics and forecasts. This team is respected and well funded. We’ll call this team “the Cornucopians,” after the mythical horn of plenty, an endless source of good things.
The other team consists of an informal association of retired and independent petroleum geologists and energy analysts. This team has little funding, is poorly organized, and hardly even existed as a recognizable entity a decade ago. This is my team; let’s call us “the Peakists,” in reference to the observation that rates of extraction of nonrenewable resources tend to peak and then decline.
These two teams have very different views of the energy world. Back in 2003, the Cornucopians were saying that global oil production would continue to increase in the years and decades ahead to meet rising demand, which would in turn grow at historic rates of about 3 percent per year (about the same rate at which the economy was expanding). Meanwhile oil prices would stay at approximately their then-current level of $20 to $25 per barrel. The Cornucopians’ message could be summarized as: “There’s nothing to worry about, folks. Just keep driving.”
This view was in stark contrast to that of us Peakists, who, based on geological evidence from around the world (depleting older super-giant oilfields, declining rates of discovery of new fields, and increasing costs to develop them), were saying that rates of global oil production would soon reach a maximum and start to diminish, while petroleum prices would soar. The Peakists’ argument wasn’t that the world would suddenly run out of oil anytime soon, but that the end of cheap oil and expanding rates of production was approaching. Since oil price spikes have had severe economic impacts in recent decades, the implication was clearly that societies would be better off weaning themselves from oil as quickly as possible.
Well, what has actually happened? How has the game progressed so far?
In 2005, world crude oil extraction rates effectively stopped growing. In that year the average global production rate was 73.8 million barrels per day (mb/d); by 2012, that rate had only increased to 75.0 mb/d — an insignificant bump of just 1.2 mb/d in seven years (roughly a 0.2 percent average annual rate of growth). This was completely counter to the forecasts of the Cornucopians, but it fit the views of the supply pessimists well. Point for the Peakists.
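For the record, the growth-rate arithmetic implied by those two figures:

```latex
% Average annual growth implied by 73.8 mb/d (2005) and 75.0 mb/d (2012):
\left(\frac{75.0}{73.8}\right)^{1/7} - 1 \;\approx\; 0.0023 \;\approx\; 0.2\%\ \text{per year}
```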
With oil supply rates stagnant, prices went up — soaring from a yearly (inflation-adjusted) average of $35 per barrel in 2003 to a yearly average of $110 in 2012. Again, this development was completely unforeseen by Cornucopians, but had been clearly and repeatedly forecast by Peakists. Point for my side.
When the world oil price briefly shot up to nearly $150 per barrel in the summer of 2008, the global economy shuddered and swooned. Thus began the worst recession since the 1930s. Of course, other factors contributed to the crash — most notably, a bursting housing bubble in the United States and an unsustainable buildup of debt in nearly all the world’s industrial economies. But it’s clear that high oil prices added to financial fragility, and the oil price spike of 2008 provided a sudden gust that helped bring down the house of cards. Peakists had been warning of the economy’s vulnerability to high oil prices for years; here was dramatic confirmation. Another point for my team.
Now we’ve arrived at the period 2008–2009; at that stage of the game, the score was Peakists 3, Cornucopians zip. Despite the fact that we Peakists had virtually no funding and limited media access, we were seriously in danger of winning the debate. The term peak oil went from being unknown, to being associated with conspiracy theorists, to being broadly familiar to those who followed energy issues.
The Cornucopians, however, were not about to throw in the towel. In fact, they were just shaking off the complacency that accompanied their status as reigning champs. And they were about to deploy a significant new game strategy.
The “peak” issue was not limited to oil. U.S. conventional natural gas production had been declining for years, and prices were soaring. Peakists said this was evidence of an approaching natural gas supply crisis. Instead, high prices provided an incentive for drillers to refine and deploy costly hydraulic fracturing technology (commonly referred to as “fracking”) to extract gas trapped in otherwise forbidding shale reservoirs.
Small- to medium-sized companies crowded into shale gas plays in Texas, Louisiana, Arkansas, and Pennsylvania, borrowed money, bought leases, and drilled tens of thousands of wells in short order. The result was an enormous plume of new natural gas production. As U.S. gas supplies ballooned, TV talking heads (reading scripts provided by the industry) and politicians all began crowing over America’s “game-changing” new prospect of “a hundred years of natural gas.” We Peakists hadn’t foreseen any of this. Point to the Cornucopians.
Not only did supplies of natural gas grow, but prices plummeted. In the pre-fracking years of 2001 to 2006, gas prices had shot up from their 1990s level of $2 per million Btu to over $12. But after 2007, as the hydrofracturing boom saturated gas markets, prices plummeted back to a low of $1.82 in April 2012. Gas was suddenly so cheap that utilities found it economic to use in place of coal for generating base-load electricity. The natural gas industry began to promote the ideas of exporting gas (even though the United States remained a net natural gas importer), and of using natural gas to power cars and trucks. Again, Peakists had completely failed to forecast these developments. Point Cornucopians.
Then, using the same hydrofracturing technology, the industry began to go after deposits of oil in tight (low-porosity) rocks. In Texas and North Dakota, U.S. oil production began growing. It was an astonishing achievement, especially since the nation’s oil production had generally been declining since 1970. Suddenly there was serious discussion in energy policy circles of America soon producing more oil than Saudi Arabia. None of us Peakists had predicted this. Point Cornucopians.
That brings us to the present. As of 2013, the game is tied and headed into overtime. Cornucopians have the momentum and the historic advantage, so they’ve been quick to proclaim victory. Meanwhile, at least one prominent Peakist has publicly conceded defeat: in a widely circulated essay, British environmental writer George Monbiot recently proclaimed that “We were wrong on peak oil.”
It doesn’t look good for my team. It appears to most people that the “Shale Revolution” (the tapping of shale gas and tight oil, thanks to advanced drilling techniques) has changed the game for good. Is it time for us to exit the playing field, heads bowed, shoulders slumped?
No. The game is about to turn again.
Almost no one who seriously thinks about the issue doubts that the Peakists will win in the end, no matter how pathetic my team’s prospects may look for the moment. After all, fossil fuels are finite, so depletion and declining production are inevitable. The debate has always been about timing: Is depletion something we should worry about now?
Readers who’ve seen articles and TV ads proclaiming America’s newfound oil and gas abundance may be surprised to learn that the official forecast from the U.S. Energy Information Administration is for America’s historic oil production decline to resume within this decade.
But the EIA may actually be overly optimistic. Once the peak is passed, the agency foresees a long, slow slide in production from tight oil deposits (likewise from shale gas wells). However, analysis that takes into account the remaining number of possible drilling sites, as well as the high production decline rates in typical tight oil and shale gas wells, yields a different forecast: Production will indeed peak before 2020, but then it will likely fall much more rapidly than either the industry or the official agencies forecast.
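A toy version of that kind of forecast (all parameters invented: per-well decline rate, drilling pace, and a finite inventory of sites) shows why production from rapidly declining wells can peak and then fall steeply once the drilling runs out of room:

```python
# Toy decline-curve model of a tight oil or shale gas play. New wells
# start at a fixed rate and decline steeply each year; drilling stops
# when the inventory of sites is exhausted. All parameters are invented.
initial_rate = 100.0    # per-well output in its first year (arbitrary units)
annual_decline = 0.40   # each well loses 40% of its output per year
wells_per_year = 500
total_sites = 5_000     # finite inventory of drillable locations

wells = []  # current output of every producing well
for year in range(1, 21):
    if year * wells_per_year <= total_sites:
        wells.extend([initial_rate] * wells_per_year)  # drill new wells
    print(f"year {year:2d}: {sum(wells):,.0f}")
    wells = [w * (1 - annual_decline) for w in wells]  # deplete every well
# Output climbs while drilling continues, peaks in year 10, then drops
# about 40% per year -- far faster than a conventional field's decline.
```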
And there’s much more to the story: shale gas wells that cost more to drill than their gas is worth at current prices; Wall Street investment banks that drive independent oil and gas companies to produce uneconomic resources just so brokers can collect fees; official agencies that have overestimated oil production and under-estimated prices consistently for the past decade.
The data I’ve surveyed suggest that, through the technology of hydrofracturing, the oil and gas industry will generate 10 or fewer years of growing fuel supplies. (In the case of shale gas, the clock started ticking roughly five years ago; for tight oil, about three years ago).
Let me be clear: I am not saying that the United States will run out of shale gas or tight oil sometime in the next five to seven years, but that the current spate of oil and gas supply growth will probably be over, finished, done and dusted before the end of this decade. Production will start to decline, perhaps sharply.
The temporary surge of production may yield a very few years of lower natural gas prices and may temporarily improve the U.S. balance of trade by reducing oil imports. What will we do with those years of reprieve? In the best instance, the fracking that has already been accomplished could provide us a bonus inning in which to prepare for life without cheap fossil energy. But to make use of this borrowed time we must build an energy infrastructure of wind turbines and solar panels rather than drilling rigs and pipelines. This will constitute the biggest investment, and the most ambitious project, of our lifetimes. Currently, instead, many renewable energy efforts are being hampered by the false perception of vast, long-term supplies of cheap natural gas.
We are starting the energy transition project of the 21st century far too late to altogether avert either devastating climate impacts or serious energy supply problems, but the alternative — continued reliance on fossil fuels — will ensure a future far worse, one in which even the bare survival of civilization may be in question. As we build our needed renewable energy system, we will also need to build a new kind of economy, and we must make our communities far more resilient, so as to withstand environmental and economic shocks that are inevitably on their way.
Meanwhile the fossil fuel industry is doing everything it can to convince us we don’t have to do anything at all — other than simply keep on driving. The purveyors of oil and natural gas are selling products that we all currently use and that we still depend upon for our modern way of life. But they’re also selling a vision of the future — a vision as phony as the snake oil hawked by carnival hucksters a century ago.
Ah, the beautiful wilds of western Canada. Rivers, mountains, forests… and out-of-control oil leaks that have already spurted thousands of barrels of toxic bitumen into the environment.
The leaks were caused by an underground blowout at a tar sands project in north-east Alberta run by Canadian Natural Resources that had been certified safe by government regulators. One of the firm’s scientists has reportedly said the company is mystified as to what went wrong and how to stop the leaks. The company hasn’t disclosed how fast the leaks are progressing.
Chris Severson-Baker of the Pembina Institute in Edmonton, Alberta, estimates that the method, known as cyclic steam stimulation, accounts for about 30 per cent of tar sands extraction. The technique is not considered inherently risky, he says, which makes these unexplained leaks all the more worrisome. “If there are cases like this, it shows things are not as predictable as we might like,” says Severson-Baker.
How Did Earth’s Primitive Chemistry Get Kick Started?
July 30, 2013 — How did life on Earth get started? Three new papers co-authored by Mike Russell, a research scientist at NASA’s Jet Propulsion Laboratory, Pasadena, Calif., strengthen the case that Earth’s first life began at alkaline hydrothermal vents at the bottom of oceans. Scientists are interested in understanding early life on Earth because if we ever hope to find life on other worlds — especially icy worlds with subsurface oceans such as Jupiter’s moon Europa and Saturn’s Enceladus — we need to know what chemical signatures to look for.
Two papers published recently in the journal Philosophical Transactions of the Royal Society B provide more detail on the chemical and precursor metabolic reactions that have to take place to pave the pathway for life. Russell and his co-authors describe how the interactions between the earliest oceans and alkaline hydrothermal fluids likely produced acetate (comparable to vinegar). The acetate is a product of methane and hydrogen from the alkaline hydrothermal vents and carbon dioxide dissolved in the surrounding ocean. Once this early chemical pathway was forged, acetate could become the basis of other biological molecules. They also describe how two kinds of “nano-engines” that create organic carbon and polymers — energy currency of the first cells — could have been assembled from inorganic minerals.
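For readers who want the chemistry spelled out, a textbook net reaction for acetate formation from dissolved carbon dioxide and vent-derived hydrogen is shown below (a standard acetogenesis equation, not the papers’ full reaction mechanism):

```latex
% Net formation of acetic acid (acetate) from CO2 and H2:
2\,\mathrm{CO_2} + 4\,\mathrm{H_2} \;\longrightarrow\; \mathrm{CH_3COOH} + 2\,\mathrm{H_2O}
```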
A paper published in the journal Biochimica et Biophysica Acta analyzes the structural similarity between the most ancient enzymes of life and minerals precipitated at these alkaline vents, an indication that the first life didn’t have to invent its first catalysts and engines.
“Our work on alkaline hot springs on the ocean floor makes what we believe is the most plausible case for the origin of life’s building blocks and its energy supply,” Russell said. “Our hypothesis is testable, has the right assortment of ingredients and obeys the laws of thermodynamics.”
Russell’s work was funded by the NASA Astrobiology Institute through the Icy Worlds team based at JPL, a division of the California Institute of Technology, Pasadena. The NASA Astrobiology Institute, based at NASA’s Ames Research Center, Moffett Field, Calif., is a partnership among NASA, 15 U.S. teams and 13 international consortia. The Institute is part of NASA’s astrobiology program, which supports research into the origin, evolution, distribution and future of life on Earth and the potential for life elsewhere.
Planetary ‘Runaway Greenhouse’ More Easily Triggered, Research Shows
July 30, 2013 — It might be easier than previously thought for a planet to overheat into the scorchingly uninhabitable “runaway greenhouse” stage, according to new research by astronomers at the University of Washington and the University of Victoria published July 28 in the journal Nature Geoscience.
In the runaway greenhouse stage, a planet absorbs more solar energy than it can radiate back to space, so it can never return to equilibrium. As a result, the world overheats, boiling its oceans and filling its atmosphere with steam, which leaves the planet glowing-hot and forever uninhabitable, as Venus is now.
One estimate of the inner edge of a star’s “habitable zone” is where the runaway greenhouse process begins. The habitable zone is that ring of space around a star that’s just right for water to remain in liquid form on an orbiting rocky planet’s surface, thus giving life a chance.
Revisiting this classic planetary science scenario with new computer modeling, the astronomers found a lower thermal radiation threshold for the runaway greenhouse process, meaning that stage may be easier to initiate than had been previously thought.
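In energy-balance terms, the threshold in question is the maximum thermal flux a steam-laden atmosphere can radiate to space; schematically (symbols are standard, not taken from the paper), the runaway begins when absorbed starlight exceeds it:

```latex
% Runaway greenhouse condition: absorbed stellar flux exceeds the
% maximum outgoing longwave radiation a moist atmosphere can emit.
\frac{S\,(1 - A)}{4} \;>\; F_{\mathrm{OLR}}^{\max}
% S: stellar flux at the planet's orbit;  A: the planet's Bond albedo
```

A lower value for that maximum radiation limit, as the new modeling finds, means the condition is met at greater distances from the star — which is why the habitable zone’s inner edge moves outward.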
“The habitable zone becomes much narrower, in the sense that you can no longer get as close to the star as we thought before going into a runaway greenhouse,” said Tyler Robinson, a UW astronomy postdoctoral researcher and second author on the paper. The lead author is Colin Goldblatt of the University of Victoria.
Though further research is called for, the findings could lead to a recalibration of where the habitable zone begins and ends, with some planets having their candidacy as possible habitable worlds revoked.
“These worlds on the very edge got ‘pushed in,’ from our perspective — they are now beyond the runaway greenhouse threshold,” Robinson said.
Subsequent research, the astronomers say, is needed in part because their computer modeling was done in a “single-column, clear-sky model,” or a one-dimensional measure averaged around a planetary sphere that does not account for the atmospheric effect of clouds.
The findings apply to planet Earth as well. As the sun increases in brightness over time, Earth, too, will move into the runaway greenhouse stage — but not for a billion and a half years or so. Still, the prospect inspired the astronomers to write, “As the solar constant increases with time, Earth’s future is analogous to Venus’s past.”
Other co-authors are Kevin J. Zahnle of the NASA Ames Research Center in Moffett Field, Calif.; and David Crisp of the Jet Propulsion Laboratory in Pasadena, Calif.