Bioplastics: an important component of global sustainability

(This work was commissioned by Biome Bioplastics, a leading European bioplastics company. A formatted version of the paper is available on the company website, www.biomebioplastics.com)

Plastics are a vital asset for humanity, often providing functionality that cannot be easily or economically replaced by other materials. Most plastics are robust and last for hundreds of years. They have replaced metals in the components of most manufactured goods, including such products as computers, car parts and refrigerators, and in so doing have often made the products cheaper, lighter, safer, stronger and easier to recycle.[1] Plastics have taken over from paper, glass and cardboard in packaging, usually reducing cost and carbon emissions while also providing better care of the items that they protect.[2][3]

But we all know about the counterbalancing disadvantages.

  • Plastic litter disfigures the oceans and the coastlines. Ingestion of plastic kills marine creatures and fish. Perhaps 5% of the world’s cumulative output of plastic since 1945 has ended up in the oceans. Shopping bags and other packaging are strewn across the streets and fields of every country in the world.
  • Plastics use valuable resources of oil.
  • The plastics industry uses large amounts of energy, usually from fossil fuel sources, which adds to the world’s production of greenhouse gases.
  • The durability of plastics means that without effective and ubiquitous recycling we will see continuing pressure on landfill. Although plastics do not represent the largest category of materials entering landfill – a position held by construction waste – they are a highly visible contributor to the problems of waste disposal.
  • The manufacturing of conventional plastics uses substantial amounts of toxic chemicals.
  • Some plastics leach small amounts of pollutants, including endocrine disruptors, into the environment. These chemicals can have severe effects on animals and humans. (The solution to this problem is to avoid using original raw materials, either monomers or plasticisers, that might produce such compounds when the plastic is in use or has been discarded.)

The world needs to find a solution that gives us continued access to plastics but avoids these serious problems. Bioplastics, partly or wholly made from biological materials rather than crude oil, represent an effective way of keeping the huge advantages of conventional plastics while mitigating their disadvantages.

What is a bioplastic?                                                                                                                                              

A bioplastic is a plastic that is made partly or wholly from polymers derived from biological sources such as sugar cane, potato starch or the cellulose from trees, straw and cotton. Some bioplastics degrade in the open air, others are made so that they compost in an industrial composting plant, aided by fungi, bacteria and enzymes. Others mimic the robustness and durability of conventional plastics such as polyethylene or PET. Bioplastics can generally be directly substituted for their oil-based equivalent. Indeed, they can generally be made to be chemically identical to the standard industrial plastics.

In thinking about the potential role of bioplastics, we need to distinguish between two different types of use.

  • Items that might eventually become litter – such as shopping bags or food packaging – can be manufactured as bioplastics to degrade either in industrial composting units or in the open air or in water. Strenuous efforts need to be made to continue to reduce the amount of plastic employed for single use applications. But if the world wishes to continue using light plastic films for storage, packaging or for carrying goods, then the only way we can avoid serious litter problems is to employ fully biodegradable compounds.[4]
  • Permanent bioplastics, such as polythene manufactured from sugar cane, can provide a near-perfect substitute for oil-based equivalents in products where durability and robustness are vital. Plastics made from biological materials generally need far smaller amounts of energy to manufacture but are equally recyclable. Their manufacture also involves fewer pollutants. Per tonne of finished product, the global warming impact of manufacturing bioplastics is less, and often very substantially less, than that of conventional plastics.

Plastics are regarded with deep ambivalence in much of the world. Their association with indestructible and unsightly litter sometimes blinds us to their enormous value. Bioplastics – with a low carbon footprint and the capability of being made to degrade completely back to carbon dioxide and water – are a vital and growing complement to conventional oil-based plastics. They can be made to avoid entirely the use of the monomers and additives that may have effects on human or animal health. As oil becomes scarcer, the value of bioplastics will increase yet further.

Plastics

About 4% of the world’s oil production is converted into plastics for use in products as varied as shopping bags and the external panels of cars. A further few per cent is consumed as process energy, because oil-based plastics require substantial amounts of energy to manufacture. Each kilogramme of plastic typically requires 20 kilowatt hours of energy in the manufacturing process, more than the amount needed to make steel of the same weight. Almost all of this comes from fossil sources. One survey suggested that the plastics industry was responsible for about 1.5% of all US energy consumption.

As oil runs out, and the use of fossil fuels becomes increasingly expensive, the need for replacement sources of raw material for the manufacture of vital plastics becomes increasingly urgent.  In addition, the use of carbon-based sources of energy for use in plastics manufacturing adds greenhouse gases to the atmosphere, impeding the world’s attempts to cut CO2 emissions.

These problems can be overcome. All the major oil-based plastics have substitutes made from biological materials. The polyethylene in a shopping bag can be made from sugar cane and the polypropylene of food packaging can be derived from potato starch. Plastics are irreplaceable and will all eventually be made from agricultural materials.

The world plastics industry and the role of bioplastics.                                                                             

The world’s plastics industry produces about 225 million tonnes a year.[5] This output has grown by a few per cent per year over the last decade. The bioplastics industry is much smaller, with 2011 probably seeing a total output of about 1m tonnes, or less than half of one per cent of total world plastics output. But the growth rate of bioplastics is much higher: most sources suggest that this part of the plastics industry is growing by at least 20% a year. The reasons for this buoyancy are discussed later in this note.

Not even the most fervent advocates of bioplastics suggest that they will quickly replace all oil-derived compounds, though most people expect rapid growth to continue.

  • They are generally two or three times more expensive than the major conventional plastics such as polyethylene or PET. This disadvantage will tend to diminish as bioplastics manufacturing plants become larger and benefit from economies of scale. When the local biological feedstock is particularly cheap, as it is in Brazil, large bio-polyethylene plants may already be close to being cost-competitive with oil-based alternatives.[6] More generally, though, the crude oil needed for a kilo of plastic costs around €0.20, while the corn that is currently a key source of bioplastics feedstock costs (August 2011) about twice this amount.
  • Their physical characteristics are not always a perfect substitute for the equivalent polymer. Sometimes the differences are trivial, such as the biological version having a slightly different texture, but in some cases the bioplastic cannot substitute for the conventional plastic. But for the most important plastic – polythene – the product based on biological sources is identical to the plastic made from oil.
  • Bioplastics can compete in a huge number of different market segments. In some cases, bioplastics are likely to make substantial inroads into the share held by traditional plastics, while in others they will struggle. Novamont, the leading Italian bioplastics company, has estimated that biodegradable plastics can replace about 45% of the total sales of oil-based plastics in horticulture and 25% of those used in catering. Others regard these estimates as too low.
  • The Committee of Agricultural Organisations in the European Union has put the accessible market for bioplastics in the EU alone at around 2m tonnes, several times the current production level. It sees the most important single segment as catering products, such as single use cutlery, followed by vegetable packaging.

Bioplastics versus food

In many types of applications, bioplastics offer substantial advantages over conventional products. Nevertheless, despite their relatively minor current role, one serious issue does need to be addressed, both now and in the future.

At the moment many bioplastics are made from sugars and starches harvested from crops that otherwise might be grown for food. As with liquid biofuels, the bioplastics industry has to deal with the vitally important question of whether the growth of bioplastics will tend to decrease the land available for food production, or increase the incentive to cut down forested areas to create more arable land. Cutting down forests is bad for global warming - because it returns carbon to the atmosphere - and bad for the wider environment because it tends to decrease biodiversity and increase erosion and flooding.

At present, the world bioplastics industry produces about 1 million tonnes of material. Perhaps 300,000 hectares are used to grow the crops which the industry processes into plastics.[7] For comparison, this is about 0.02% of the world’s total naturally irrigated area available for cultivation.[8]  Even if half the world’s plastics were made from crops grown on food land, the industry would only require 3% of the world’s cultivated acreage. By contrast, the bioethanol industry in the US uses over one tenth of the country’s arable acres to grow corn, but this fuel provides less than 10% of total liquid transport fuel. Biofuels are already an order of magnitude more important than bioplastics will ever be in using the world’s productive land.
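
The arithmetic behind these proportions can be laid out explicitly. The short Python sketch below is purely illustrative and uses only figures quoted in this note (the 225 million tonne figure for world plastics output appears in the next section); the implied yield of roughly 3.3 tonnes of plastic per hectare is simply what the 1 million tonne and 300,000 hectare figures imply, not agronomic data.

```python
# Back-of-the-envelope check of the land-use figures quoted in the text.
bioplastics_output_t = 1_000_000   # ~1m tonnes of bioplastics produced in 2011
land_used_ha = 300_000             # hectares of crops quoted for that output
implied_yield_t_per_ha = bioplastics_output_t / land_used_ha   # ~3.3 t plastic/ha

# 300,000 ha is said to be ~0.02% of the cultivable area, implying ~1.5bn ha.
world_cultivated_ha = land_used_ha / 0.0002

world_plastics_t = 225_000_000     # total world plastics output (from this note)
half_bio_ha = (world_plastics_t / 2) / implied_yield_t_per_ha
share = half_bio_ha / world_cultivated_ha

print(f"Implied yield: {implied_yield_t_per_ha:.1f} tonnes of plastic per hectare")
print(f"Land for half of world plastics: {half_bio_ha / 1e6:.0f}m ha ({share:.1%} of cultivated area)")
# Gives roughly 34m ha, a little over 2% - the same order as the ~3% quoted.
```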

How important is this issue?                            

The impact of the growth of bioplastics on the land available for growing food and on maintaining forest cover is an issue that needs to be openly discussed, as it is with the use of foodstuffs as feedstocks for biofuel refineries. But the world’s plastic industry is only about one tenth the size of the transport fuels sector in terms of its use of oil. If today’s entire plastics production was made from biological sources it would consume between 0.1% and 0.2% of the globe’s total annual production of organic matter (‘net primary production’). This is not a trivial amount but concerns about the competition for land need to be balanced by consideration of the enormous potential value of making bioplastics compared to the equivalent oil-based plastics.

In fact, the position is even less threatening. First, bioplastics are often made from products that would otherwise be wasted because they are unusable for human consumption. Potato starch is a by-product of some food production processes. As well as going into bioplastics, this product – a waste that would otherwise have to be disposed of – is used in applications as diverse as drilling mud for oil and gas exploration and wallpaper paste. Plastics applications are only ever likely to be a small portion of total demand for this source of biological starch.

Sugar cane for bioplastics is usually grown on land in Brazil that has few alternative uses and certainly could not be used to grow grains. Furthermore the energy used to power the manufacturing process that creates the bioplastic from sugar is provided by the combustion of the stalks and leaves (‘bagasse’) of the cane, and no fossil fuel is used. Sugar cane is also the primary source of Brazil’s bioethanol which provides much of the country’s transport fuel. The crop is grown on dry lands, often used previously for cattle pasture but now so degraded that it cannot be used for any other form of intensive agriculture. There is no risk of sugar plantations encroaching on the precious Amazonian rain forest, which is over two thousand kilometres away from the land used for growing sugar cane. Braskem, the Brazilian company that is the largest plastics producer in the Americas, has just established a 200,000 tonne biopolyethylene plant (equivalent to about 20% of the world’s current bioplastics production) and states that growing the feedstock for this factory will use less than 0.1% of all Brazilian arable land.

Furthermore, as technology improves, industry participants will have a much wider variety of raw material sources from which to make bioplastics. It will eventually be unnecessary to use land that might otherwise have been used for food. The list of potential alternative feedstocks includes algae, which grows in water rather than on land, and cellulose. The cellulose molecule, which is the most abundant carbon-containing molecule in the natural world, forms the bulk of the weight of trees and of agricultural wastes such as straw. It was also the basis of the first commercial plastic, Parkesine, which was patented in 1856, and of other historically important plastics such as celluloid. As oil becomes scarcer and more expensive, the move back to cellulose and other biological feedstocks will represent a return to the days before the abundant availability of cheap petrochemicals.

Biome Bioplastics has traditionally focused on using potato starch as the main feedstock for its products. But as an illustration of the trend towards new feedstocks, nine out of the twelve new products launched this year have used non-starch polymers. The research and development needs to continue in the laboratories of bioplastics companies around the world.

As Dr Anne Roulin, the global head of packaging and design at Nestle, says when referring to the need to develop cellulose and other bioplastic polymers, ‘I think it is going to be an evolution where we will continuously reduce environmental impact and find more energy efficient processes. But I really see the trend going in the direction of conventional plastics made from renewable resources’.[9]

Finally, we need to consider the impact of improved recycling. Until a few years ago, the amount of plastic recycled was tiny: the costs of separating and cleaning different types of plastic were too high. Advances in recognition technology – usually using infra-red or ultra-violet sensors to identify each of the key types of plastic – are enabling recyclers like Lincolnshire-based Eco Plastics to sort, clean and then resell almost all types of plastic. Their Hemswell plant has a total capacity equivalent to almost 5% of the UK’s total plastic consumption, enabling Coca Cola, for example, to source an increasing fraction of its total need for plastic from recycled PET, whether initially made from oil or from starch.[10]

About 25% of the UK’s plastic is now recycled and this can continue to rise strongly in the next few years, with the only obstacle being a shortage of state-of-the-art facilities like Hemswell. Why are we stressing the importance of the recycling of non-biodegradable plastics, whether from oil or from plant matter? Because the world needs to be more economical in its use of its scarce resources. Whether this is the oil used for most plastics or the starches, sugars and cellulose for biological plastics, we cannot afford to continue to throw away three quarters of the plastic we use. A swing towards biologically-sourced plastics should not mean any let-up in the move towards near-100% recycling of all types of plastics, whether made from oil or from agricultural wastes.

The benefits from using bioplastics

a)      Major consumer goods brands and bioplastics

Over the last five years many of the world’s largest consumer goods companies have begun to employ bioplastics in the packaging of their products. Examples include Coca Cola’s use of a mixture of conventional plastic and bioplastic in its soft drink bottles, Procter and Gamble’s bioplastic shampoo packaging and Nestle’s adoption of a bioplastic top for its Brazilian milk products.

Coca Cola’s PlantBottle uses petroleum-based PET blended with up to 30% of the plant-based equivalent. The bottle can be reprocessed through existing recycling facilities in exactly the same way as other PET bottles. Coca Cola aims to use bottles that are ‘made with 100 per cent plant-waste material while remaining completely recyclable’, according to Scott Vitters, director of sustainable packaging at the company.

Coca Cola recognises the danger of raw material production for bioplastics diverting farmland away from the production of food or resulting in the loss of woodland. But the newsletter Business Green reported comments from Dr Jason Clay, senior vice president of market transformation for the WWF, saying that Coca-Cola had taken precautionary measures to ensure its bio-plastic does not inadvertently lead to deforestation and increased emissions.

‘Coca-Cola is currently sourcing raw materials for its PlantBottle from suppliers in Brazil, where third parties have verified that best-in-class agricultural practices are the norm,’ he said. ‘Preserving natural resources through sustainable agriculture is essential for businesses like Coca-Cola as they search for ways to alleviate environmental challenges.’[11]

Jason Clay of WWF also has warm words for Procter and Gamble’s new polyethylene biopackaging, also made from sugar cane sourced from Brazil. ‘P&G's commitment to use renewable bio-derived plastic in its global beauty and grooming product packaging is an important step forward in its efforts to improve the environmental profile of its products,’ he said.[12]

Nestle is also moving rapidly towards the increased use of bioplastics, saying publicly in July 2011 that it ‘is involved in over 30 projects to introduce bioplastics in its product packaging portfolio worldwide.’[13] In early 2011, the company launched packaging made from renewable resources for its pet food products in the US.

The introduction of renewable and recyclable packaging hasn’t been problem free everywhere. SunChips, a brand within PepsiCo’s Frito-Lay snacks unit, recently stopped using an early version of a compostable packaging film for most of its products. The plastic film, made from PLA – a renewable plastic made from corn starch – was regarded as ‘too noisy’ by customers. But SunChips didn’t lose its commitment to compostable plastic packaging. Instead, its website says that ‘we’ve created a new, quieter fully compostable chip bag that’s easy on the ears. Our new quieter compostable plastic bag will be rolling out over the next month’.[14] (We believe that the new packaging is still made from PLA.) On the parent company’s website, the statements continue to stress the importance of renewable plastic films. ‘There’s enormous opportunity to reduce our use of non-renewable resources by using plant-based materials,’ says Tony Knoerzer, Frito-Lay’s Director of Sustainable Packaging.[15]

These four companies are among the biggest consumer goods companies in the world, with operations in almost every country. All of them appear to be committed to an increase in the use of bioplastic packaging for their products. Their reasons are simple: these businesses are watching the actions and attitudes of their customers who are increasingly concerned about the use of fossil fuel resources and, particularly, about indestructible litter. Bioplastics are important in helping consumer goods companies present their brands in a favourable light. Recyclable or compostable packaging made from biological materials can be used to make their products more environmentally friendly in the eyes of consumers. Although bioplastics may be more expensive per kilo of packaging, the extra cost is more than outweighed by the benefits seen by purchasers. The client lists of the major bioplastic suppliers include most of the largest and best-known consumer goods companies, ranging from the Shiseido cosmetics brand to Ecover, the Belgian cleaning products company.

In addition, large companies like these are becoming more aware of the risk of disruption to the supply of oil-based plastics. In order to ensure that at least part of their operations could continue after a loss of availability of conventional plastics – perhaps because of an oil embargo – many large and responsible companies are investing now in developing bioplastic packaging.

b)      The value of reducing landfill and avoiding expensive preparation for recycling

Some bioplastics are as robust and durable as their oil-based equivalents. Others will rapidly break down in commercial composting plants. These rapidly biodegradable plastics have high value in circumstances where plastics inevitably become mixed with other streams of compostable waste and would otherwise need to be separated by hand. For example, substantial quantities of plastic are used in greenhouse applications. One productive application for bioplastics is the ties that hold tomato vines to the support wires in commercial greenhouses. After the crop is finished, the waste organic material, including the ties and other plant-based plastics such as the small pots in which plants are grown as seedlings, can be quickly and efficiently cleared and taken to be composted. Conventional plastics would have to be separated by hand at great expense and usually then sent to an incinerator or landfill.

A more substantial application also arises in the horticultural sector. Many field-grown vegetables are covered in a thin semi-transparent polypropylene mulch to help maintain even temperatures, reduce water loss and protect the crop from insects. The mulch generally lasts for only one season and then has to be collected up and returned for recycling, a complex and expensive process. A bioplastic mulch that will dissolve in the soil over the winter is much better: it saves time and money and also adds to the carbon content of the soil, helping to maintain fertility. In other important agricultural uses, such as strimmer cord (‘weedwacker’ cord in the US), full biodegradability means that small pieces of plastic filament do not persist in the environment.

Another example, likely to become one of the largest single applications for bioplastics, is single use catering utensils. Restaurants and coffee shops generate three streams of waste: unused food, packaging (for example of sandwiches) and utensils such as cutlery. It is highly beneficial – as well as being advantageous to the brand image of the restaurant – to use fully compostable packaging and utensils. All the waste can be put into one bin and shipped to the composting facility without further intervention or labour cost. The thick pieces of plastic cutlery will need to be shredded at the composting site to encourage rapid biodegradation, but this can happen automatically. Although fully degradable cutlery costs about four times as much as conventional plastic utensils, the reduction in time spent separating plastics from food waste and the lower landfill costs more than justify the expense. As well as compostable utensils, it makes sense to use bioplastic film to provide the windows in cardboard sandwich packets so that the packaging can also be added to the stream of compostable items.

Some American towns and cities are beginning to move to mandatory use of biodegradable plastics for single use catering utensils, including plates, cups and cutlery. Seattle, for example, has introduced an ordinance that obliges restaurants to use only bioplastics that will degrade in the city’s composting plant. The final imposition of this rule has been delayed by problems obtaining cutlery that is sufficiently compostable, but the rules are becoming stricter here and in other towns and cities wanting to reduce their use of landfill. Seattle uses a landfill site 320 miles from the city – about the distance from Newcastle to London – creating a huge incentive to avoid high transport fees.[16] As disposal sites fill up around the world, the need either to recycle plastics or to compost them can only increase, adding further buoyancy to bioplastic sales.

In a similar move, municipalities around the world collecting food waste from homes are now often providing compostable plastic bags into which the food goes prior to collection. Householders benefit from easier and more hygienic storage of the waste. The municipality can collect the bag and does not have to separate it from the waste food before the composting process begins. While these bags are not as strong as the equivalent standard polyethylene bag, they perform their functions well.

c)       Litter

The best understood advantage of biodegradable bioplastics lies in the reduction of permanent litter. Plastic single use shopping bags are the most obvious example of how plastics can pollute the environment with huge and unsightly persistence. A large fraction of the litter in our oceans consists of disposable plastic bags. Cities and countries around the world are taking action against this litter, sometimes by banning non-degradable plastic bags entirely. Italy has decided to block the use of non-biodegradable single use shopping bags from the beginning of 2012. The city of Portland, Oregon has just (July 2011) joined several dozen US municipalities in banning most plastic bags. These legislative changes represent a clear trend as politicians respond to the irritation over the persistence of plastic bag litter in the world’s seas, rivers and rural and urban environments.

Some places will continue to allow plastic bags that are genuinely biodegradable and meet the published standards for compostability. (Bags that are oxy-degradable, and only break down into very small pieces rather than truly biodegrading, will generally be banned.) Biodegradable bioplastic bags will be allowed in Italy, providing a huge boost to the European market for these products, not least because the country has until now been the largest European market for single use shopping bags.

Bioplastics demand will continue to grow.                                                                                     

Continued research and development in bioplastics is creating high quality products for a wide variety of industries. Now that the benefits of biologically sourced plastics are well-understood, their market share is likely to rise sharply. The three drivers of growth – the importance of brand image to consumer goods companies, the value of joint composting and the reduction of litter – will provide the spur for continued growth in bioplastics across the world.

The carbon footprint of plastics

Calculating the greenhouse gas reductions arising from the use of bioplastics is a complex and controversial area. But it is nevertheless important to try to quantify the benefits from making plastics from biological materials in order to encourage further debate and research.

The first point to make is that the carbon footprint of a bioplastic is crucially dependent on whether the plastic permanently stores the carbon extracted from the air by the growing plant. A plastic made from a biological source sequesters the CO2 captured by the plant in the photosynthesis process. If the resulting bioplastic degrades back into CO2 and water, this sequestration is reversed. But a permanent bioplastic, made to be similar to polyethylene or other conventional plastics, stores the CO2 for ever. Even if the plastic is recycled many times, the CO2 initially taken from the atmosphere remains sequestered.

The chart below offers illustrative figures for the greenhouse gas impact of making a kilo of bioplastic from a material such as wheat starch. The first figure – a negative number – estimates the CO2 captured from the atmosphere by photosynthesis during the growth of the plant. The second records an estimate of the greenhouse gases emitted in the process of producing the wheat. This includes the emissions from fossil fuels used to power the tractor and other energy use in the field and in the drying of the wheat. It also measures the impact of fertiliser manufacture and the emissions of nitrous oxide, a very powerful global warming gas, as a result of the chemical breakdown of nitrogenous fertiliser in fields.

The third figure estimates the CO2 impact of the energy used in converting the starches to a plastic. This figure will generally be much lower than for oil-based plastics because biological materials need much lower temperatures and pressures in the manufacturing process. Bioplastics can generally be processed at about 140-180 degrees Celsius, compared to temperatures of up to around 300 degrees for the conversion of petrochemicals to plastics.

 

Chart A

The greenhouse gas implications of making a simple polymer plastic from wheat

(These numbers are illustrative – kilogrammes of CO2 equivalent per kilogramme of plastic produced)

-1.4 CO2 sequestration by growing plant

+0.6 GHGs emitted by farming

+2.0 GHGs produced by conversion to plastic

+1.2 Net carbon footprint

Sources: sequestration in wheat: http://ec.europa.eu/environment/ipp/pdf/ext_effects_appendix1.pdf; GHGs from wheat cultivation: ‘How Bad Are Bananas?’, Mike Berners-Lee, Profile Books, 2010; GHGs from conversion processes: estimate from Biome Bioplastics. CO2e is a measure of emissions by which all different greenhouse gases are standardised to the global warming impact of CO2.

Most calculations of the energy used and greenhouse gases created in the production of conventional plastics produce much higher numbers. One estimate of the CO2 produced per kilogramme of oil-based polypropylene is 3.14 kilogrammes per kilogramme of plastic.[17]  This compares with the 1.2 kg illustrative figure for wheat polymers in the chart above.  To be clear, the implication is that those bioplastics that do not degrade might therefore have a carbon footprint of well under half the conventional equivalent.

Braskem, the large Brazilian producer manufacturing both bioplastics and oil-based equivalents, has calculated much higher figures for the capture of CO2 by a growing sugar cane plant. It estimates a net sequestration (that is, a negative footprint) of about 2.3 kilogrammes of CO2 for every kilogramme of biopolypropylene manufactured.[18] It compares this to a carbon footprint of over 3 kilogrammes of CO2 per kilogramme of polypropylene made from oil, meaning a net gain of over 5kg of CO2 for each kilogramme of plastic. This is an important potential saving; if all plastics were switched to biological feedstocks and the carbon footprint benefit were as high as this, the reduction in global greenhouse gas emissions would be about 5% of the current total.
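
The carbon arithmetic above can be made explicit. The Python sketch below simply adds up the illustrative figures from Chart A and the Braskem comparison; the 30 billion tonne figure for annual global CO2 emissions used in the final sanity check is an external assumption, not a number taken from this note.

```python
# Illustrative carbon accounting, in kg of CO2e per kg of plastic (not a formal LCA).

# Chart A: a simple wheat-based polymer
sequestration = -1.4   # CO2 captured by the growing plant
farming = 0.6          # GHGs from cultivation, fertiliser and drying
conversion = 2.0       # GHGs from converting the starch to plastic
wheat_net = sequestration + farming + conversion
print(f"Wheat polymer net footprint: {wheat_net:+.1f} kg CO2e/kg")       # +1.2

# Conventional oil-based polypropylene (the Narayan estimate quoted below)
oil_pp = 3.14
print(f"Saving versus oil-based PP: {oil_pp - wheat_net:.1f} kg CO2e/kg")

# Braskem's sugar cane figures: ~2.3 kg net sequestration per kg of bio-based
# polymer versus ~3 kg emitted per kg of the oil-based equivalent.
braskem_saving = 2.3 + 3.0
print(f"Braskem-style saving: ~{braskem_saving:.1f} kg CO2e/kg")

# Scaled to ~225m tonnes of plastic a year and set against an assumed ~30bn
# tonnes of global CO2 emissions, the saving is of the order of the ~5% cited.
total_saving_bn_t = braskem_saving * 225e6 / 1e9
print(f"If all plastics switched: ~{total_saving_bn_t:.1f}bn tonnes CO2e saved a year")
```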

If, on the other hand, the bioplastic is of a degradable type, the advantages over conventional plastics are less pronounced. The plastic will compost back into carbon dioxide and water, returning all the sequestered carbon to the atmosphere. In the illustration given above, the savings from making the bioplastic compared to the oil-based comparator would be relatively small, but nevertheless still positive. The crucial point – not well understood by commentators or by the public – is that compostable plastics will typically have a much larger carbon footprint than ones that are manufactured to be permanent. The return of the CO2 to the air undoes the sequestration achieved by the growing plant.

This situation would be made worse if the bioplastic did not compost in air, but rotted in an oxygen-poor landfill. In these circumstances, the plastic would degrade into methane (CH4) and other byproducts. Methane is a global warming gas of greater impact than CO2, so the full carbon footprint needs to include any uncaptured CH4 produced in landfill.[19] Most – but not all – research shows that the conditions in well maintained landfill sites are too dry for degradable plastics actually to rot; in these circumstances, the bioplastics will permanently sequester carbon. More work needs to be done on this issue, but in the meantime the precautionary approach is to try to ensure that all biodegradable bioplastics are kept out of landfill.

The other advantages of bioplastics

We have identified five major advantages of bioplastics in this note:

  • Potentially a much lower carbon footprint
  • Lower energy costs in manufacture
  • Do not use scarce crude oil
  • Reduction in litter and improved compostability from using biodegradable bioplastics
  • Improved acceptability to many households

There are also some significant technical advantages to bioplastics; these depend on the precise plastic used and how it is made. Product characteristics of value can include:

  • Improved ‘printability’, the ability to print a highly legible text or image on the plastic
  • A less ‘oily’ feel. Bioplastics can be engineered to offer a much more acceptable surface feel than conventional plastics
  • Less likelihood of imparting a different taste to the product contained in a plastic container. Milk, for example, will acquire a new taste in a polystyrene cup, but the bioplastic alternative has no such effect.
  • A bioplastic may have much greater water vapour permeability than a standard plastic. In some circumstances, such as sandwich packaging, this can be a disadvantage, but in the case of newly baked bread a bioplastic container will offer a significant advantage in letting out excess vapour or steam.
  • A bioplastic can feel softer and more tactile. For applications such as cosmetics packaging, this can be a major perceived consumer benefit.
  • Bioplastics can be made clearer and more transparent (although they are usually more opaque)
  • Plastics made from biological sources still need to contain additives such as plasticisers that give the product its required characteristics. But bioplastics do not contain bisphenol A, an additive that can leach from plastics and acts as an endocrine disruptor, mimicking sex hormones. Bisphenol A is not yet banned in most countries because the chemical is rapidly excreted by most creatures, including humans. But the high levels of continuing exposure to this worrying chemical from conventional plastics may mean that consumers will want to avoid it and shift to safer bioplastic alternatives.

Bioplastics are an important part of the move to a more sustainable world.                            

Bioplastics, currently accounting for less than half of one per cent of all plastics manufacture, are growing rapidly because of the clear advantages they have in many applications. As oil supply tightens, these advantages will grow. Their carbon footprint can be much lower than that of oil-based equivalents. Bioplastics can provide excellent biodegradability, helping the world deal with the increasing problems of litter, particularly in the world’s rivers and seas. Durable plant-based bioplastics can also be recycled just as their conventional equivalents can, assisting the growth of a more sustainable world economy.

 

 

Chris Goodall

chris@carboncommentary.com

+44 07767 386696

 

(This research was commissioned by Biome Technologies, a major European bioplastics company. A full version is available at www.biomebioplastics.com)

[1] Comparing the greenhouse gas emissions from making a water bottle, the New York Times reported that steel is fourteen times worse than plastic. (http://www.nytimes.com/interactive/2009/04/19/opinion/20090419bottle.html). G Magazine in Australia suggested that recyclable plastic water bottles had up to 80% less global warming impact than steel or aluminium alternatives: http://www.gmagazine.com.au/node/2436/full

[2] Incpen, an industry trade body, provides analysis of comparative carbon footprints for various types of packaging. One example showing that plastics are better than glass is provided here http://www.incpen.org/resource/data/ipen1/docs/PRAG3LCAMeasTools.pdf

[3] The UK Environment Agency produced a report in early 2011 that suggested that supermarket paper bags had three times the environmental impact of single use polyethylene bags. http://publications.environment-agency.gov.uk/PDF/SCHO0711BUAN-E-E.pdf

[4] Some conventional plastics can be made to break down. However readers should be aware that many such plastics, often termed ‘oxy-degradable’, do not actually degrade into smaller molecules. Rather, they simply break into very small pieces. This reduces the visual problem caused by litter, but the wider concerns remain. For example, birds and fish eat the small pieces of plastic, thinking them to be food.

[6] Dow Chemical expects its proposed Brazilian bio-polyethylene plant to produce plastic at about the same cost as the oil-based equivalent. http://www.technologyreview.com/energy/38114/

[7] These figures are based on calculations offered by European Bioplastics at www.en.europe-bioplastics.org

[8] Michael Carus and Stephan Piotrowski, Land Use for Bioplastics, Bioplastics Magazine, 04/09

[11] Business Green, 17th November 2009, ‘Coca Cola launches bioplastic PlantBottle.’

[12] Press release from Procter and Gamble, August 12th 2010, ‘P&G Announces Plan to Use Innovative Sustainable Packaging’.

[13] Press release from Nestle, July 29th 2011, ‘Nestlé launches bioplastic caps for milk brands in Brazil’

[14] Text taken from the SunChips website, www.sunchips.com, on August 2nd 2011

[15] Text taken from Frito-Lay website, www.fritolay.com, on August 2nd 2011

[17] This figure was calculated by Professor Ramani Narayan of Michigan State University in a study for the bioplastics company Cereplast.

[18] This number is contained in this presentation. (Page 15) http://www.braskem-ir.com.br/braskem/web/arquivos/Conference_Mar2011_Citi_1x1_v2.pdf

[19] Many landfill sites in the UK collect the methane from rotting organic materials and burn it for electricity production. Nevertheless some methane escapes and adds to global warming.

The moral case for nuclear power – an article by George Monbiot

George Monbiot and Jonathon Porritt have been engaged in a debate about the merits, or otherwise, of nuclear power. I did some of the research for George’s article on the Guardian website today (August 8th 2011). Like George, I have reluctantly come to believe that the world needs nuclear – and lots of it – if it is to produce the energy it needs without carbon emissions. Energy efficiency is important and the development of renewables should continue with enthusiasm and financial commitment. But the task of getting to 100% replacement of fossil fuels is so enormous, so intimidating and so expensive that I think countries need to encourage nuclear power as well as renewables. George didn’t have the space to use one calculation I made, so I have written about it here.

Jonathon Porritt praised the German decision to phase out nuclear rapidly and to increase the emphasis on solar PV. Porritt gave the impression that PV in Germany costs about the same as conventional electricity. The reality is very different. As in the UK, the subsidy to renewables is spread across all electricity users, and the solar feed-in tariffs in Germany are adding rapidly to the costs faced by power users, rich and poor.

The 2011 levy on German customers’ bills to meet the subsidies to renewable energy is about 3.5 cents a kilowatt hour. The figure increased by almost 1.5 cents a kilowatt hour over 2010 and most of this increase was due to what one source describes as the ‘skyrocketing’ costs of PV subsidies. (1) The net effect on typical German household bills of all the subsidies of renewable energy sources is now about £150 a year, of which about half is the payment for solar energy. Translated to the UK, the German renewables subsidy would be adding about 25% to customers’ bills, pushing millions more into fuel poverty.

All low carbon sources are going to be more expensive than fossil fuel and we shouldn’t pretend otherwise. But the problem with solar is that in cloudy countries like the UK and Germany it requires a huge amount of capital and produces small amounts of electricity. Per unit of electricity generated, PV requires about five times more subsidy than wind.

The German PV subsidy will cost about €8bn this year, payable by all electricity users. And this will continue each year for decades, increasing with every new installation of PV panels. Current PV installations only produce about 2% of the country’s electricity, about the same as would be produced by one new nuclear power station. Just one year’s PV subsidy would pay for the construction costs and the lifetime operating expenses of a nuclear power station, with no further cost to consumers. But the same amount of PV generating capacity needs €8bn a year into the indefinite future. Do German consumers realise that PV electricity is costing them literally an order of magnitude more than nuclear energy?

(1) http://www.germanenergyblog.de/?p=4249

Government statisticians not certain greenhouse effect actually exists

UK government statisticians put out a report today that includes a section on the effects of climate change. The Office for National Statistics document contains three bizarre comments that suggest they simply don’t understand the science. Is scepticism about the reliability of the laws of physics beginning to infect even central government? Will we get a note from ONS next week suggesting that the existence of gravity is still subject to scientific dispute?

1, 'Some studies of long-term climate change have shown a connection between the concentrations of key greenhouse gases – carbon dioxide, methane and nitrous oxide – in the atmosphere and mean global temperature'.

No, not ‘some’ studies.  All research ever conducted into long-term climate change has shown not just ‘a’ connection between greenhouse gases and temperature but a very strong link. The level of CO2 in the atmosphere is highly correlated with global temperature across the last hundreds of millions of years.

2, 'The accumulation of these gases in the atmosphere may cause heat from the sun to be trapped near the Earth’s surface – known as the ‘greenhouse effect’ '.

Greenhouse gases ‘may' cause heat to be trapped? No, we know with complete certainty that greenhouse gases cause heat to be retained in the atmosphere. And we have known this for a hundred years. Without the greenhouse effect the average Earth temperature would be about 33 degrees lower than it is today. No-one, literally no-one, denies this.

3, ‘Opinion on climate change is divided’

Actually, the research being discussed by ONS at this point shows that opinion on the effects of climate change is divided.

There is real and important debate on the impact of increased greenhouse gas concentrations on the world’s climate. But no uncertainty whatsoever exists as to the existence of the greenhouse effect. The ONS needs to get out more and talk to a few scientists.

Alstom gives hope that CCS will be financially viable

While policy-makers debate how to ensure the UK gets more low carbon electricity, the big generators are actually piling their capital into large numbers of new gas power stations. Future achievement of carbon reduction targets will therefore wholly depend on finding economical ways of capturing the CO2 coming out of gas turbines. Without carbon capture and storage (CCS) the current rush for gas will lock the UK into high carbon electricity output for another generation. We urgently need to include CCS in the current support schemes for low-carbon generation.

Alstom, the world leader in CCS, has just released estimates suggesting that new plants with carbon capture should produce electricity at lower cost than any other low-carbon source. (1) Based on the results from 13 pilots and demonstration projects, the company is firmly optimistic about the main CCS technologies, saying that ‘technology and costs are not in themselves obstacles to CCS deployment’. It talks of costs of around €70 a megawatt hour, a far lower figure than nuclear energy is likely to cost. Its confidence contrasts with the wariness of the Committee on Climate Change, which recently described the economics of CCS as ‘highly uncertain’. The CCC is probably being appropriately cautious, but no-one is going to find out unless major countries commit to real support for CCS demonstration projects. The signs are not auspicious: the world’s most important pilot, at AEP’s Mountaineer coal power station, was abandoned a few weeks ago because the US government’s lack of any form of carbon policy made investment impossible. Even if CCS adds only a small amount to the cost of generating electricity – and it will always do so – no generator will spend the money without a clear set of financial incentives that reward it for capturing and storing the CO2.

Similarly, to say that the UK administration has dithered on CCS would be unfairly sympathetic. In May 2007, BP’s advanced plans to capture the CO2 from a gas power station on the north east coast of Scotland were scrapped because of the UK government’s refusal to let gas power stations participate in the CCS competition it planned to launch that year. Now, four years later, the CCS competition appears to be stalled. Those watching the disarray ruefully comment that if BP had been given the go-ahead, the UK would now be close to having the first fully functioning low carbon fossil plant sending CO2 into a depleted oil field. Instead we have got little but windy rhetoric.

Alstom’s confidence should force us to take note. If the company is right – and it has more experience than anybody else in the world – CCS will be by far the best way of decarbonising electricity generation. Without any equivocation, the company says that a gas power station capturing and storing its CO2 will be competitive with a conventional power station at a carbon price of no more than €40 a tonne. Nuclear power will need financial support equivalent to at least twice this figure.

Like nuclear, a gas power station equipped with CCS will be able to operate round the clock, with no worries about unpredictability or intermittency. Alstom suggests that the greatest uncertainty lies not in the engineering of carbon capture, but in the lack of firm knowledge of how much it will cost to run CO2 pipelines and inject the gas into depleted oil reservoirs or into the deep saline aquifers underneath our feet. (Much of northern Europe sits on top of an aquifer that looks suitable to accept CO2.) But, however uncertain, these costs are less critical to the financial viability of CCS than the capital and operating cost consequences of initially capturing the carbon.

To my mind, the other possible advantage of CCS is that it requires the continued consumption of fossil fuels, helping to keep the price of coal and gas high. CCS plants actually need to use more fossil energy to generate electricity than a conventional plant, increasing the rate of depletion of cheap sources of coal and gas and increasing the incentive to switch to low carbon alternatives.

But, in any event, the UK urgently needs to include CCS in its renewable energy subsidy scheme (ROCs) to provide an immediate and transparent incentive. If the incremental cost of CCS is as low as Alstom claims, the generators now quietly building tens of gigawatts of new natural gas plants around the UK will need less than half the subsidy of offshore wind to incentivise them to add CCS. Why not try it and see what happens? It can’t be any worse than the mess that CCS policy is in at the moment.

 

(1)    Cost assessment of fossil fuel plants equipped with CCS under typical scenarios, Jean-Francois Leandri et al.

The case that biodiversity has substantial economic value is not yet made

Mark Lynas’s wonderful new book ‘The God Species’ attempts to put environmentalism back on track. Humankind, he says, will only be able to keep within natural boundaries by using science and technology to help minimise our growing impact on the planet. He looks at nine specific environmental indicators – the atmospheric concentration of CO2 is the best known – and offers a view of how close we are to the safe limit. One of these indicators is the loss of biodiversity. Humankind is presiding over an astonishingly rapid extinction of species and Lynas says that this loss ‘arguably forms humanity’s most urgent and critical environmental challenge’. He suggests that the rate of extinction is possibly two orders of magnitude greater than the world’s eco-systems can sustain.

Put at its simplest, the justification for the concern over biodiversity loss (of which extinction is merely one facet, of course) is that species variety helps maintain stable natural environments. Extinguish all the predators and the prey can become dangerously dominant. Cultivate just one crop and nutrient loss into watercourses is far worse than if many types of plant are grown. But just how strong is the evidence that biodiversity loss is economically damaging? If we cannot show a financial calculation, our chance of getting policy-makers to take the issue seriously is close to zero.

Many reasonable people bemoan the current mass extinction but don’t understand why Lynas and the ‘planetary boundaries’ group of scientists think it is so disastrous. What really suffered, they ask, after the last wolves were hunted to extinction in England in the early nineteenth century? Sheep could be more safely grazed and food production increased. Is there really a strong case that biodiversity is worth more than the economic benefits of reducing pests and predators? I think we are all very willing to be convinced that biodiversity is crucial but, to be frank, the evidence may not yet be powerful enough.

A new paper puts some interesting numbers into the debate.(1) It looks at whether the degree of diversity in land use in the agricultural heartlands of the United States affects the risk of severe crop damage from insects. The theory is this: in agricultural monocultures, insects can breed without predators whereas a mixed landscape, with woodland and multiple crops, provides the living space for birds and bats that can help control any infestations.  So Timothy Meehan and his colleagues asked the obvious question: do diverse landscapes result in farmers having to use less insecticide? Assuming that farmers respond rationally to the beginnings of insect damage and spray the crops that are affected, the number of hectares receiving insecticide is a reasonable proxy for the threat from insects.

As we might expect, Meehan shows that diverse landscapes result in less insecticide use. In other words, biodiversity has direct economic value because spraying a crop costs money and time. What the research team calls ‘landscape simplification’ increases the likelihood that any particular hectare has to be sprayed. In what seems to me to be a heroic calculation, the scientists suggest that 1.4m more hectares need to receive insecticide each year as a result of the extensive use of monocultures of wheat, soya and maize across the Midwestern states. But the direct cost per hectare is assessed at only about $48. Compare this figure with, for example, the average yield of 20 tonnes of corn a hectare in good fields in Wisconsin, valued today (July 2011) at over $6,000. Put crudely, the value of the crop is more than two orders of magnitude more than the increased cost of insecticide on an affected hectare. And, equally powerfully, the research shows that only about 4% of total cropland needs insecticide application as a result of locally low levels of plant biodiversity. (Some crops will need pesticide protection even in the most diverse landscapes).
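
The scale of these numbers is easier to see when the arithmetic is laid out. A minimal Python sketch, using only the figures quoted in the paragraph above:

```python
# The Meehan et al. comparison, figures as quoted in the text (illustrative).
extra_sprayed_ha = 1.4e6     # additional hectares sprayed because of landscape simplification
cost_per_ha = 48             # direct insecticide cost, $ per hectare
crop_value_per_ha = 6000     # approximate value of corn per hectare (July 2011)

total_extra_cost = extra_sprayed_ha * cost_per_ha
ratio = crop_value_per_ha / cost_per_ha

print(f"Extra insecticide bill: ${total_extra_cost / 1e6:.0f}m a year")  # ~$67m
print(f"Crop value vs spray cost per hectare: ~{ratio:.0f}x")            # ~125x, i.e. more than two orders of magnitude
```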

The lesson from the paper is therefore a simple one. In the specific case of the Midwest, landscape simplification is tending to push up insecticide use but the direct economic cost of this is trivial compared to the value of the crops. If the whole of this vast area were given over to a single crop, and every hectare had to be sprayed every year, farmers would still not be losing financially from the loss of biodiversity.

The response is to say that the costs to the farmer are only a small fraction of the total impact on society, now and in the future. High levels of pesticide use mean poorer water quality and air pollution, possibly affecting the health of people hundreds of miles away. Heavy insecticide use will eventually cause pest mutations that will require a new generation of chemicals. Applications of insecticide may cause the deaths of beneficial soil organisms. Nevertheless, Meehan’s paper does not immediately provide support for Mark Lynas’s conclusion that biodiversity loss is potentially the worst environmental problem the world faces.

Timothy Meehan et al., Agricultural landscape simplification and insecticide use in the Midwestern United States, PNAS (OPEN ACCESS) July 2011.

 

Scottish wave power marches on

The last weeks have seen some crucial developments in the commercialisation of wave power. Inverness-based AWS received major investment from Alstom, the French power generation company. Aquamarine Power of Edinburgh, backed by Scottish and Southern, started drilling the foundations for the second major trial of its Oyster wave energy collector in the Orkneys. The machine itself is being finished at Burntisland Fabrications and will be installed over the summer. The granddaddy of them all, Pelamis, took on a round of new money from investors and continued its plans for installing its huge red sea-snake-like devices for E.ON and Scottish Power. As well as having tidal currents that match anywhere in the world, the UK has excellent potential for using waves to generate electricity. Despite this, the National Grid’s seven year forecast sees no wave farms before at least 2018. Other commentators, such as the Committee on Climate Change, are politely unenthusiastic.

Who is right, the hard-headed financial analysts or the committed companies pushing ahead to install wave collectors in the waters off western Scotland and the northern isles? My money would be on the bloody-minded enthusiasts pushing ahead with their huge steel structures in the face of mild scepticism from banks and governments. I spoke to Martin McAdam, the CEO of Aquamarine Power, to discuss the opportunities for wave power in the UK and understand what needs to happen to get rapid growth in wave power utilisation.

There is, of course, nothing new in observers being sceptical about a new technology while the inventors and engineers running the businesses developing the machines are mustard keen on the opportunities. Wave power is no different from many compelling opportunities in the past. It is currently four or five times too expensive to compete with gas for electricity generation, even on the west coast of the British Isles. The engineers still have major technical challenges to overcome.

The power of the waves

Waves a few hundred metres from the shore can contain huge densities of energy, often as much as tens of kilowatts per linear metre. The ordinary British house, using about half a kilowatt on average across the 24 hour day, could be powered by a device collecting the energy in a few centimetres of waves. There is a downside to the density of energy in a wave – most collectors that have been tried off the shores of the UK have failed within a few days, unable to deal with the enormous forces placed upon them. Since the Edinburgh-based engineer Stephen Salter developed his eponymous ‘Duck’ in the 1970s, hundreds of companies have tried and failed to convert wave energy into commercially-priced electricity. It is only in the last few years that credible designs have been developed that are both efficient at capturing wave motion and able to survive storm conditions.
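
The ‘few centimetres’ claim follows from simple division. In the sketch below, the 20 kilowatts per metre figure is an assumption chosen from within the ‘tens of kilowatts’ range mentioned above, and conversion losses are ignored:

```python
# Rough arithmetic behind the 'few centimetres of wave' illustration.
house_demand_kw = 0.5       # average British household demand quoted in the text
wave_power_kw_per_m = 20    # assumed power density, within the 'tens of kW/m' range

wave_front_m = house_demand_kw / wave_power_kw_per_m
print(f"Wave front needed per house: {wave_front_m * 100:.1f} cm")  # ~2.5 cm
# A real collector captures only a fraction of the incident energy, so the
# practical figure would be larger, but still measured in centimetres.
```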

Aquamarine Power's Oyster is one such device. A large scale prototype worked successfully at the Orkney Wave Centre for the best part of a year. A scaled-up device generating a maximum of 800 kilowatts will be installed at Billia Croo in the Orkneys in late July, with the first commercial machines put in place in 2014.

How does it work? ‘It has a design like a laptop’, says Martin McAdam, ‘with the lid, mostly submerged, moving back and forth with the waves’. This motion powers pumps which generate high pressure water. A pipe takes this water to the shore, where it drives conventional hydro-electric turbines, housed in containers. The crucial part of the design is that most of the critical equipment is on-shore, easily and conveniently maintained without having to get into a boat in rough seas. A relatively small number of moving parts are offshore. The design used by the Oyster is very different from those of the other contenders, with Pelamis capturing the energy from the flexing of the joints in its thin body and AWS getting power from the bobbing of the waves changing water pressure inside its twelve-sided floating structure.

McAdam says that the ideal location is in water about 15 metres deep. Around much of the UK, this depth can be found quite close to the shoreline, with the new site in the Orkneys about 500 metres from land. He says that the Oyster technology installed in large farms in appropriate locations around the British Isles has the potential to generate a maximum of about 8 gigawatts, with the other obvious European markets, such as Portugal, Ireland and France, offering another 8 gigawatts of potential. (For comparison, current total UK generating capacity is about 75 gigawatts, with average demand running at about 35 gigawatts). The Oyster’s developer, Aquamarine Power, has its eyes on sites for a 200 megawatt farm in Orkney and a 40 megawatt installation off Lewis in the Western Isles. What about expected rates of actual production, as opposed to peak power? McAdam mentions an expected annual output of about 35-40% of the maximum capacity, comparable to well-sited offshore wind turbines. In quiet years, such as 2010, this number would be lower.

The estimates from Aquamarine’s CEO are not inconsistent with the Committee on Climate Change’s figure of 40 terawatt hours for the potential for wave power, about 12% of current UK electricity usage. However McAdam stresses that other companies’ machines will work in locations not suitable for Oysters, implying that the total UK potential may be substantially greater than the CCC thinks.

Wave power is intermittent but marine energy has two advantages over wind. First, it is rarely, if ever, completely still on the western coastline of the UK. Unlike wind turbines, which require a reasonable breeze to start turning, wave collectors will almost always generate some power. Second, wave energy tends to be out of phase with wind power. If it is blowing a gale today, the waves, generated a long way away and only gradually reaching the shore, will arrive after the wind has blown itself out.

Martin McAdam gives some figures for the cost of his collectors. The first Oyster installed in the water cost about £35m per megawatt of peak capacity. The machine being attached to the seafloor over this summer costs about £10m per megawatt. McAdam sees the figure declining to about £3m once costs have been driven out by further R+D, ‘learning by doing’ in the fabrication process and from the benefits of installing many devices along the same piece of shoreline. At £3m, wave is competitive with today’s offshore wind costs, which are running at about £4m per megawatt in shallow locations. Since ‘capacity factors’ are similar at about 35-40%, the output per megawatt installed will be about the same as for a wind turbine and the cost per megawatt hour very similar.
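The comparison can be sketched out in a few lines of Python. This is only an illustration: the 37% load factor and 20-year life are my assumptions, and financing and operating costs, which matter greatly in practice, are ignored.

HOURS_PER_YEAR = 8760
LIFETIME_YEARS = 20   # assumed asset life, not a figure from the interview

def capex_per_mwh(capex_per_mw_gbp_m, load_factor):
    # Capital cost spread over lifetime output, ignoring financing and running costs
    lifetime_output_mwh = load_factor * HOURS_PER_YEAR * LIFETIME_YEARS
    return capex_per_mw_gbp_m * 1_000_000 / lifetime_output_mwh

wave = capex_per_mwh(3.0, 0.37)   # wave at the targeted £3m per megawatt
wind = capex_per_mwh(4.0, 0.37)   # offshore wind at about £4m per megawatt
print(f"Wave:          ~£{wave:.0f} of capital per MWh generated")
print(f"Offshore wind: ~£{wind:.0f} of capital per MWh generated")

At the same load factor, the £3m wave target comes out slightly cheaper per megawatt hour of capital than £4m offshore wind, which is the point McAdam is making.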

Wave farms face many of the same technical challenges as offshore wind. The brutal environment means wave collectors need to be made from huge quantities of corrosion resistant steel. Fabrication is not a simple matter – the leading UK constructor, Burntisland Fabrications or BiFab, is going to be very busy indeed. Wave farms will tend to be far away from easy connection to robust parts of the electricity distribution network. Maintenance work on wave collectors will be difficult, and the absence of electrical gear on the Oyster itself is a huge potential advantage compared to some other offshore technologies. Similarly, the relatively shallow depth in which the Oyster machines operate – 15 metres – implies that, according to the laws of wave physics, all waves greater than 15 metres will have broken by the time they pass over the device, improving survivability.

The crucial question facing the wave power industry is how to get from about £10m per megawatt to £3m as fast as possible. McAdam says that while private money and grants from government bodies such as the Carbon Trust have been very useful, these funds are not going to be enough to push wave to the point at which it is viable without subsidy. For rapid rollout of wave power, the industry needs a substantial injection of subsidy for further R+D and cheap equity to enable the construction of substantial farms of collectors. This will enable the industry to move down the learning curve far faster than would otherwise happen.

Sceptical commentators will note that unproven renewable technologies often demand large subsidies in order to reach commercial viability at some uncertain and always receding future date. And, indeed, much money spent on energy R+D will be wasted. Marine Current Turbines in Bristol is close to proving that tidal current power can overcome technical challenges, but no company in the wave business can yet claim similar certainty. Real issues remain and waves may never produce power that is cost-competitive with other low-carbon technologies. Nevertheless, consider the following comment in a letter from distinguished scientists published in the Guardian on 13th October 2010, commenting on the £2bn a year spent on military research.

'As an example of the current imbalance in resources, we note that the current MoD R&D budget is more than 20 times larger than public funding for R&D on renewable energy.'

McAdam says his company might only need to make a total of as few as 50 or 60 Oysters to get the costs down to the £3m/MW figure. If the subsidy on all these first production machines was £7m a megawatt, the total public cost would be about £400m, a substantial sum but negligible in comparison with the amount spent on military research.
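The arithmetic behind that figure runs roughly as follows. The assumption that each early machine is rated at about 1 megawatt is mine, broadly in line with the 800 kilowatt device now going into the water.

machines = 55                 # midpoint of McAdam's 50 to 60 Oysters
rating_mw = 1.0               # assumed rating per machine (the Billia Croo device is 0.8 MW)
subsidy_per_mw_gbp_m = 7.0    # £7m of subsidy per megawatt on the first production machines

total_subsidy_gbp_m = machines * rating_mw * subsidy_per_mw_gbp_m
print(f"Total public cost: about £{total_subsidy_gbp_m:.0f}m")   # roughly £400m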

Wave power collectors could provide one of the UK’s most important manufacturing exports in twenty years’ time. The natural energy resources around Britain’s coast may eventually provide a substantial fraction of our energy needs at almost zero running cost. Does it not make sense to divert a larger fraction of the government’s R+D budget towards this increasingly plausible form of low-carbon, environmentally relatively benign, electric power?

Your chance to tell Mark Lynas where the green movement needs to go from here

Here are a few comments from Mark Lynas, quoted in a Guardian article yesterday (14th June). ‘...the green movement is stuck in a rut, but I think the problem is deeper than mere professionalisation and endless strategy meetings in corporate NGO head offices.

‘Many “green” campaigns, like those against nuclear power and GM crops, are not actually scientifically defensible, whilst real issues like nitrogen pollution and land use go ignored. The movement is also stuck in a left-wing box of narrow partisan politics, and needs to appeal to a broader mass of the public who are simply not interested in organic farming and hippy lifestyle choices. It needs to re-engage with science, as well as with the general public, if it is to remain relevant to the 21st century.’

Mark's new book The God Species, available in shops in the next few weeks, looks at what the world's environmental movement needs to focus on. How can we use science productively to solve ecological problems? Along the way, he takes multiple swipes at what he sees as the irrational and anti-scientific tendencies in many green organisations, obsessed with fighting the wrong battles.

Come to listen to Mark Lynas and Professor Johan Rockstrom, the leading figure in the 'planetary boundaries' movement that seeks to quantify the ecological limits that mankind has to stay within. Central London, afternoon Wednesday 6th July, free admission but reservation vital. All the details are here.

(22.June.2011 - last few tickets available, book now)

Planetary Boundaries PDF

(When booking, please say you saw these details on Carbon Commentary).

Much more UK wind power by 2018 than previously forecast: National Grid

Every year the National Grid produces a statement that identifies how much UK electricity generating capacity is expected to open and to close over the next seven years. The Grid is careful to say that its ‘Seven Year Statement’ is not intended to be used as a forecast, but this vital document gives clear indications of how it thinks the demand for electricity will be met over the next few years. The most recent statement, published at the end of May 2011, suggests that the Grid is becoming far more bullish about wind power – particularly offshore – and about new gas power stations.

Last year, it suggested that about 8 gigawatts of wind capacity might be constructed between this year and 2016. The number has risen to 15.5 gigawatts in this year’s review. Projected new gas plants are up from 12 GW to 13 GW. Unsurprisingly, the Grid has pushed back the date of the first new nuclear stations to 2018.

The recent review of renewable energy potential by the Committee on Climate Change estimated that the UK would have installed 28 GW of wind power by 2020. It sees slightly more onshore wind by this date (15 GW) than offshore (13 GW). The Grid’s view is very different, with a projected 18 GW offshore by 2018 and only 8 GW onshore, with very high rates of installation in offshore waters possible beyond 2015.

This is how the National Grid sees UK generating capacity in 2018.

Generating technology    Capacity in 2018 (GW)    Capacity today (GW)

Gas (CCGT)               45                       29
Coal                     21                       29
Offshore wind            18                       1
Onshore wind             8                        2
Nuclear                  13                       11
Other                    9                        10

TOTAL                    114                      82

 

The increase in total generating capacity is, of course, partly a mirage since wind power only generates about 30-35% of its maximum power, with offshore at the higher end of the range. But the National Grid is nevertheless seeing a remarkable switch towards offshore wind, a point that commentators seem not to have picked up. By 2018, if these numbers are accurate, the UK will be getting something over 20% of its electricity from wind, or an average of 8-9 GW, compared to no more than 1 GW at the moment.
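A rough sketch of the arithmetic behind that 20% figure. The load factors are assumptions within the 30-35% range, and the average demand figure is mine rather than the Grid's.

offshore_gw, offshore_load_factor = 18, 0.35
onshore_gw, onshore_load_factor = 8, 0.30
uk_average_demand_gw = 38   # assumed average demand, between the ~35 GW average and ~60 GW winter peak quoted elsewhere

average_wind_gw = offshore_gw * offshore_load_factor + onshore_gw * onshore_load_factor
share = average_wind_gw / uk_average_demand_gw
print(f"Average wind output in 2018: {average_wind_gw:.1f} GW, about {share:.0%} of demand")   # ~8.7 GW, ~23%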

On the negative side, National Grid is seeing very limited investment in grid-connected tidal power, with only about 0.1 GW connected by 2018. It postulates over 2 GW of biomass power stations, meaning that even including today’s hydro-electric plants, renewables other than wind will be little more than 3 GW. The Climate Change Committee’s scenario for 2020 has a much larger figure of about 10 GW, implying far more optimism about the prospects for biomass, tidal and wave.

We might see measurable amounts of solar PV capacity by 2018, but the typical installation will perform at only about 10% of its rated power, meaning that even if 2m homes sign up to feed-in tariffs the UK will struggle to get an average of 0.5 GW of power from the sun. The 2018 scenario sees two nuclear power stations completed during 2017/8, at Wylfa and Hinkley Point.

As an aside, National Grid forecasts show large amounts of spare electricity generating capacity as some of the coal-fired stations close in the 2015 and 2016 period. The generators are falling over themselves to install gas plants, with the winter peak maximum demand of about 60 GW almost covered by nuclear stations and by gas plants alone, with no need for any contribution from coal power at all, even from the power stations remaining open. The lights will not go out.

'Ten Technologies' is one of the five best books on climate change

TheBrowser.com asked the writer, editor and campaigner Duncan Clark to recommend the five best books on climate change. He nominated 'Ten Technologies to Save the Planet', the US edition of my 2009 book. http://thebrowser.com/interviews/duncan-clark-on-climate-change

Many thanks to Duncan. He may or may not want it to be known that he had a hand in commissioning and editing three of these five books, including mine. All of the authors will share my deep gratitude for Duncan's help and support.

Tidal energy - the UK's best kept secret

The latest report on Renewables from the Committee on Climate Change (CCC) offers lukewarm support for electricity generation from tidal streams. The UK has some of the fiercest tidal currents in the world, but the CCC says that tidal turbines will deliver energy at a higher cost than PV in 2040. The assumptions behind this pessimism are questioned in this note.

The tides around Britain’s coasts sweep huge volumes of water back and forth at substantial speeds. The energy contained in the tidal races off the west of the UK is as great as anywhere in the world. Because water is a thousand or so times heavier than air, the maximum speeds of perhaps 6 metres a second are capable of generating far more electricity per square metre of turbine area than a windmill. The Pentland Firth, the narrow run of water between the north-east tip of Scotland and the Orkney islands, is possibly the best place in the world to turn racing tides into electricity. The challenges are immense: massive steel structures need to be made that survive huge stresses, day after day.

The rewards for tidal stream developers are commensurate. Unlike other renewable technologies, tidal power is utterly predictable for the entire life of a turbine. We know to the minute when the tides on a particular day will be at their peak. Once installed, the running costs of tidal stream technology will be low. The environmental impact of tidal turbines appears to be very small. And the UK could probably provide a quarter of its electricity from tides. (And much more if an environmentally acceptable means was found of damming the Severn tides).

The CCC might then have been expected to push for a significant programme of support for tidal. Its reservations appear to be as follows.

a)      Tidal generation does not help with the ‘intermittency’ problem of renewables generation.

b)      The levels of yield are relatively low. (Yield is the percentage of rated power that can be delivered in a typical day.)

c)       The cost of capital is high for a developer using tidal turbines because of the risk of the technology not working

d)      The relatively small scope for learning curve improvements.

Intermittency

An individual tidal turbine will generate most electricity when the tide is running fastest. This will be at approximately the mid point between high and low tides. The CCC therefore says that tidal power will not help deal with periods of low production.

The problem is expressed here in Chart 1.4 of the CCC’s report.

The cycle of marine power (tidal plus wave) suggests that total output will fall to zero four times a day. This would only be the case if all the turbines were sited at the same place. Turbines placed, as they will be, all around the coasts of Great Britain will generate maximum power at different times of the day. On the day I looked at the tide tables, the tides in the Channel Islands (where there are some extremely powerful races) were completely unsynchronised with the tides in northern Scotland. Two turbines, one off Alderney, one off John O’Groats, would together produce substantial amounts of (entirely predictable) power every second of the day. Tidal power is as dispatchable as nuclear.
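A toy model makes the point. Each site's output is taken to vary with the cube of a sinusoidal current speed over a 12.4-hour tidal cycle; the quarter-cycle phase offset between the two sites is an illustrative assumption rather than tide-table data.

import math

TIDAL_PERIOD_H = 12.4
PHASE_OFFSET_H = TIDAL_PERIOD_H / 4   # assume the two sites are about a quarter of a cycle apart

def site_output(t_hours, phase_h=0.0):
    # Power scales with the cube of current speed; speed varies sinusoidally over the cycle
    speed = math.sin(2 * math.pi * (t_hours + phase_h) / TIDAL_PERIOD_H)
    return abs(speed) ** 3

times = [i * 0.25 for i in range(96)]   # one day in 15-minute steps
combined = [site_output(t) + site_output(t, PHASE_OFFSET_H) for t in times]
print(f"Combined output range: {min(combined):.2f} to {max(combined):.2f} (single-site peak = 1)")

A single site falls to zero at every slack water, four times a day; the pair together never drops below about 70% of one site's peak output.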

Yields are low

The CCC offers a view as to the output of a tidal turbine, suggesting that in a ‘high’ case the figure will be 40%. That is, the average electricity output of a 1MW turbine over the course of a year will be 400kW.

Actually, the one piece of reliable data on this number suggests a much higher figure. The UK’s hugely impressive tidal turbine developer, Marine Current Turbines (MCT), has had a device in the waters of Strangford Lough for several years. This early turbine has produced 50% of its rated power. The difference is important: output is 25% higher than the CCC’s ‘high’ case, so electricity generation costs per unit are about 20% lower than the CCC would otherwise have predicted.

The cost of capital is high

I think the CCC – normally so forensically rigorous – makes an error here, guided by its capital markets advisors Oxera. The CCC suggests that capital projects have to earn a return determined by the ‘riskiness’ of the investment. The debate over what types of ‘risk’ need to be paid for is complex and almost theological in its intensity. But I will not argue about this and will accept that early tidal power projects are ‘risky’ and that investors will therefore expect high returns to compensate for their exposure.

But let’s dissect what the ‘risk’ of a tidal project actually is. At its simplest, it is that the technology will fail. And, indeed, most tidal turbines have simply broken into pieces in the early months of their life in the seas. But this is the only risk. Once working successfully, the tidal currents will flow for as long as the moon circles the earth. There are no commodities markets to disrupt the returns, no risk of increased operating costs once the technology is proven. To say that tidal has a high cost of capital is wrong: the early developers take big risks, but once the technology matures the operating risk disappears. The right assumption to make about tidal is that it has a huge cost of capital today, but will face very low rates in the future once the technology is proven. Instead, the CCC’s advisers weight tidal down with high returns on capital for ever. This unfairly penalises tidal stream power, and all other sources of energy in their early stages of development.

Small scope for learning curve improvements

Other renewable technologies have generally reduced in underlying cost by 10 or 15% for every doubling of cumulative output. (This is an utterly standard ‘experience curve’ effect – we’ll assume tidal turbine costs fall by only 10% for each doubling.)

To date, the world tidal industry has probably installed less than 20 full-scale production devices on the seabed. In fact, you could plausibly say that the MCT Northern Irish turbine is the only such turbine. Assume nevertheless that today’s accumulated production experience is 20 units.

But the CCC, advised in this case by Mott MacDonald, says that costs today are about 20.5 pence per kilowatt hour of electricity generated and will only fall to 15.25 pence in 2040, a reduction of slightly more than 25%. (1) The learning curve model assumes that a 10% reduction will typically come after a doubling of total production to 40 units. A further 10% reduction comes when accumulated volume rises to 80.

The arithmetic is not complex. If Mott MacDonald thinks that costs will only fall to 15.25 pence, it must believe that the worldwide tidal industry will install fewer than 160 turbines before 2040.
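The learning-curve sum, for anyone who wants to check it (the 10% figure is the deliberately conservative one used above):

import math

start_units = 20          # rough cumulative installations worldwide today
start_cost_p = 20.5       # pence per kWh today
target_cost_p = 15.25     # pence per kWh assumed for 2040
learning_rate = 0.10      # 10% cost reduction per doubling of cumulative installations

doublings = math.log(target_cost_p / start_cost_p) / math.log(1 - learning_rate)
units = start_units * 2 ** doublings
print(f"{doublings:.1f} doublings, i.e. roughly {units:.0f} turbines installed worldwide")

Reaching 15.25 pence implies about 2.8 doublings from 20 units, or roughly 140 turbines worldwide by 2040 under these assumptions.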

The CCC’s analysis locks tidal stream technology into relative failure. Costs are high, and the technology risk is great. So no developers use the tidal turbines and costs remain stubbornly high. The cycle continues. Of course this could indeed be the future. But with sustained effort and support, tidal energy may become one of the UK’s most important industries. In MCT – a business few people have ever heard of – the country has the most technically advanced marine energy company in the world. I think it deserves all the backing it can get.

(1) The midpoint of the cost ranges in Figure 1.10 of the CCC’s Renewables report. All numbers in real money.

Hydrogen: the numbers still don’t stack up (and probably never will).

The dream of the hydrogen economy persists. Proponents say that hydrogen, potentially one of the densest energy sources available to us, can provide our transport fuels and the energy needed in the home. But progress has been far slower than expected over the last few decades so I visited two leading UK innovators to try to understand why. ITM Power in Sheffield makes a mobile electrolyser that can make hydrogen by splitting water into its two constituent elements. The business sees hydrogen replacing petrol in cars, either by using fuel cells to create electricity to drive a motor or by burning the hydrogen in an engine. AFC Energy, a company based in Surrey, produces low cost fuel cells that use hydrogen and the oxygen in the air to generate electricity. Its partners include the supermarket chain Waitrose, which is interested in using fuel cells to make the electricity for shops.

Both companies have excellent technology and first-rate manufacturing skills. But if I have understood the economics of their products correctly, neither can hope to compete with other sources of energy, except in a few very unusual circumstances. The problem is that making hydrogen will always use energy, and that energy could almost always be used more productively to generate electricity directly in a higher-efficiency process.

Let’s consider ITM Power’s proposition first. ITM uses electricity to split water into hydrogen and oxygen. Its electrolysis (separation of water into its constituent elements) is about 70% efficient. That is to say, it uses 10 units of energy to break the chemical bonds between the atoms of hydrogen and oxygen compared to a maximum of 7 units of energy, either in the form of heat or electricity, gained by then recombining the two gases to make water. This maximum of 7 units can never be actually achieved. Hydrogen burnt in a car engine (yes, petrol/gasoline engines can be modified to use H2 as a fuel) might generate about 2 units of useful energy. The rest is lost as heat. Hydrogen pumped into a fuel cell, which then generates electricity, can push this figure up to about 4 units. The implications of these numbers are simple. Use 10 units of electricity to split water, store the hydrogen and then use it later to convert back into electricity and you get only 4 units back.
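Expressed as a chain of efficiencies, the arithmetic looks like this. The fuel cell and engine figures are simply backed out of the ‘about 4 units’ and ‘about 2 units’ quoted above, so they are approximations rather than measured values.

ELECTROLYSIS = 0.70    # electricity -> hydrogen, as quoted for ITM's electrolyser
FUEL_CELL = 0.55       # hydrogen -> electricity, implied by 'about 4 units' from 7
H2_ENGINE = 0.30       # hydrogen burnt in a modified petrol engine, implied by 'about 2 units'

def chain(units_in, *efficiencies):
    # Multiply an energy input through a sequence of conversion efficiencies
    out = units_in
    for eff in efficiencies:
        out *= eff
    return out

print(f"Electrolysis then fuel cell: {chain(10, ELECTROLYSIS, FUEL_CELL):.1f} units out of 10")
print(f"Electrolysis then H2 engine: {chain(10, ELECTROLYSIS, H2_ENGINE):.1f} units out of 10")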

Compare this to other means of storing electricity: using cheap power to pump water uphill and then releasing it through turbines is about 70% efficient (electricity out compared with electricity in). Modern batteries work at about 80%, and newer technologies like compressed air storage (use the power to compress air, then release it through turbines) can eventually hope to achieve similar levels of efficiency.

This wouldn’t necessarily be the end of the story if the equipment needed to make hydrogen was inexpensive. At times when electricity is very cheap, such as on windy summer nights when demand is low, hydrogen could be used to store electricity for sale at peak prices the following day. However, ITM Power quotes a cost of about £700,000 for its extremely impressive H2 manufacturing and storage system, housed in two standard freight containers. This unit will generate about 2 kilogrammes of H2 an hour, with an approximate energy value of about 80 kilowatt hours. At best, the system might hope to store energy for twelve hours a day, spending the rest of the time creating and selling electricity. Each day the kit might therefore store about a megawatt hour of electricity, buying power for part of the day, and selling the megawatt hour at peak times.

Compare this to a large rack of batteries with a similar capacity. A 2010 Deutsche Bank study (1) suggested that current prices for lithium-ion batteries are about $450 per kilowatt hour, or $450,000 (less than £300,000) per megawatt hour. This figure was for automotive use – larger scale industrial power storage units should be a bit cheaper. So battery storage – which operates at 80% efficiency – is less than half the price of hydrogen storage, which has a typical output-to-input ratio of 40% today.
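Putting the two sets of numbers side by side, as capital cost per megawatt hour of electricity returned each day. Cycle life, maintenance and the cost of the electricity bought are ignored, and the one-megawatt-hour daily throughput figures are the loose ones above, so this is only an order-of-magnitude comparison.

h2_capex_gbp = 700_000            # ITM Power's quoted price for the containerised system
h2_daily_throughput_mwh = 1.0     # ~80 kWh of hydrogen an hour, stored for ~12 hours a day
h2_round_trip = 0.40              # electricity out for electricity in

batt_capex_gbp = 300_000          # ~£300,000 per MWh of lithium-ion capacity (Deutsche Bank)
batt_daily_throughput_mwh = 1.0   # one full cycle a day assumed
batt_round_trip = 0.80

h2_cost = h2_capex_gbp / (h2_daily_throughput_mwh * h2_round_trip)
batt_cost = batt_capex_gbp / (batt_daily_throughput_mwh * batt_round_trip)
print(f"Hydrogen: ~£{h2_cost:,.0f} of capital per MWh delivered back each day")
print(f"Battery:  ~£{batt_cost:,.0f} of capital per MWh delivered back each day")

On these rough figures the hydrogen route ties up four or five times as much capital per useful megawatt hour as the batteries.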

The unfortunate truth is that hydrogen is a poor way of storing electricity, and the differences between H2 and batteries will probably widen because of the huge amounts of cost-reducing R+D going into lithium-ion batteries. I’m sure ITM Power knows this. It has focused instead on serving the vehicle fuel market. The rise in the price of oil means that mobile power is much more expensive than stationary energy. A UK household pays 12 or 13 pence per kilowatt hour for electricity and slightly more than this for petrol. (For US readers, the price of petrol (gasoline) in the UK is about $2.20 per litre or over $8 a US gallon). But a standard car gets only about 3 units of useful work for each 10 units of energy put in the tank, with the rest lost as heat. However, if we power a car with an electric motor, this ratio is about 8 to 10, better than twice as good. It really makes sense to drive an electric car.

However, the sad fact is that this doesn’t mean we should use hydrogen to make that electricity. (In a fuel cell car the hydrogen pumped into the tank generates electricity which then turns an electric motor). Remember the crucial calculation above – if we make hydrogen using electricity and then use it to regenerate electricity in a fuel cell, we get 4 kilowatt hours out for every 10 we put in. But if we just take electricity and store it in a battery in the car we get 8 units out for every 10 we put in. And, crucially, the batteries are far cheaper than the fuel cell. A fuel cell car using hydrogen will cost more and deliver half the number of miles of travel for each unit of energy employed when compared to a battery car. The implication is that hydrogen isn’t very useful as a mobile power source.

Thus far we’ve looked at the value of using hydrogen as a way of storing electricity and as a way of moving a car. What about the third opportunity, as a means of generating stationary power? This is where AFC Energy’s technology comes in. AFC’s low cost fuel cells, using clever catalysts and cheap materials, can take hydrogen and generate electricity for shops, factories or office buildings. The problem is making the hydrogen in the first place. Certainly it is true that some industrial processes have hydrogen as a by-product. This hydrogen will generally then be used for the manufacture of ammonia for fertiliser, but it could instead be fed into a fuel cell to make electricity. AFC Energy has a prototype fuel cell going into a plant making chlorine in Germany with abundant supplies of waste hydrogen. I haven’t done the sums necessary to work out whether the hydrogen would be worth more as an input to fertiliser manufacture than for making electricity, but I suspect the latter is more valuable. However, the worldwide chlorine manufacturing sector will only generate enough hydrogen, AFC Energy says, to provide about 3 gigawatts of continuous power. This is the equivalent of two very large power stations.

But for most of the potential applications of AFC’s beautifully designed technology, there won’t be cheap hydrogen available. It will have to be made on-site. Making H2 using electrolysis, and thus using electricity, in order to run a fuel cell to then generate electricity is clearly wasteful. We’d get only about 4 units of electricity out for every 10 we put in. AFC believes the alternative should be steam reforming of natural gas, which is largely methane (CH4). High temperature steam and a catalyst will split methane into hydrogen and carbon monoxide/dioxide. This is a well understood technology.

Think for a second about what is really going on here. Natural gas is being split at the fuel cell site, with a high energy cost. The hydrogen produced is then fed into a fuel cell, which will get about 6 units of electricity for every 10 units of hydrogen energy used. The overall efficiency will – once again – be about 4 units of electricity out for every 10 units of energy in. Compare this to a conventional modern gas-fired power station, which burns methane and uses some of the waste heat for a second turbine cycle. This gives up to 60% efficiency. Some electricity, perhaps 6%, is lost in the transmission across the grid, meaning a total efficiency of somewhere above 50%. Unless I am missing something, it is therefore better to burn natural gas in a power station than to crack it using steam and then use the resulting hydrogen in a fuel cell. Importantly, the carbon emissions will also be lower from a modern gas plant because less methane is burnt to create the equivalent quantity of electricity.
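In efficiency terms the comparison looks like this. The reformer figure is my assumption, chosen so that the chain reproduces the overall ‘4 units out of 10’ above; the other numbers are those quoted in the text.

REFORMER = 0.67      # methane -> hydrogen by steam reforming (assumed)
FUEL_CELL = 0.60     # hydrogen -> electricity, the maximum quoted for a fuel cell
CCGT = 0.60          # modern combined-cycle gas plant, methane -> electricity
GRID_LOSS = 0.06     # share of electricity lost in transmission

onsite = REFORMER * FUEL_CELL
central = CCGT * (1 - GRID_LOSS)
print(f"Reform gas on site, then fuel cell: {onsite:.0%} of the gas energy becomes electricity")
print(f"CCGT plus grid transmission:        {central:.0%}")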

AFC responds to this point by saying that a fuel cell gives the owner security of electricity supply. In some senses, this is true. If the power fails at a site, the fuel cell will continue to produce electricity, provided there is gas available. But other than temporary outages, the primary reason that power cuts might happen is a shortage of fuels for electricity generation. If there is gas available in the UK, then the large power stations will use it for electricity generation. If there isn’t natural gas, then neither the power stations nor the AFC fuel cell site will have it. The power from an AFC fuel cell will only be as secure as its natural gas supply.

The second argument AFC makes is that when the technology is mature, local electricity generation by a steam reformer and an adjacent fuel cell will produce electricity at a price of 12p per kilowatt hour, only perhaps a third more than is currently being charged by electricity suppliers to large users. The numbers to support this assertion are not available. But it would be extremely surprising if the electricity were ever cheaper than standard generation. AFC quotes a cost of about £1.2m for a 180 kilowatt system. This means over £6,000 per kilowatt, compared to about £1,000 for an equivalent share of a new CCGT gas plant. Operating costs are also likely to be much higher than at a conventional power station. Lastly, a conventional gas plant will find it far easier to fit carbon capture and storage.

These are the arguments against seeing hydrogen as an important source of cost savings or carbon reduction. Fuel cells may work in specific applications, such as where the waste heat can be used in a domestic home or where the high levels of nitrogen leaving the system are useful for reducing fire risk, such as in a datacentre. But as far as I can see the fundamental energy economics of creating H2, with the associated heat losses, and then using the gas either in a combustion engine or in a fuel cell (with a maximum of about 60% efficiency) must mean that hydrogen has a very limited role in the future low carbon world.

(With many thanks to the people at ITM Power and AFC Energy for their help. Errors are mine.)

(1) The DB report is available here: http://bioage.typepad.com/files/1223fm-05.pdf

More materialism please

Fashion and sustainability do not easily mix. As societies become richer, they tend to buy more clothing. Old clothes languishing at the back of the wardrobe are thrown out, usually ending up in landfill or dumped into used clothes markets in less prosperous countries. One study showed that the weight of textiles sold in the UK nearly doubled in the prosperous decade from 1998 to 2007. (1) In How to Live a Low-carbon Life, I estimate that the carbon footprint of clothing in the UK may be as much as a tonne per person per year, or not far short of 10% of the total, and this figure will rise alongside any increase in the future sales of clothing. Fashion exemplifies the difficulties of reconciling economic growth, which gives us all more money to buy clothes, with the need to reduce emissions. The essays in a fascinating new book, Shaping Sustainable Fashion, look in detail at how our need to keep ourselves warm and looking good can be reconciled with reducing CO2, and also with reducing the environmental impact of growing cotton, possibly the most ecologically damaging crop in the world. (2) As the concluding chapter notes, we will not solve the problem until we become truly materialistic. Fast fashion and cheap disposable clothing show that ‘we fail to invest deep or sacred meanings in material goods’. Instead we simply have ‘an unbounded desire to acquire, followed by a throwaway mentality’, which is the opposite of real materialism.

The photograph at the top of this post is of an article of clothing worn by Edward for several decades and discussed in the profoundly thought-provoking essay by Kate Fletcher that concludes Shaping Sustainable Fashion. Here is what Edward says about this valued and stylish piece of clothing.

‘I call this my three stage jacket. It began about forty years ago as a very slim waistcoat that was given to me. I knitted a panel and put it in the back just to be able to fasten it together at the front, you see. And then about fifteen years ago I added sleeves and a collar and some trimmings. And then, only about five years ago, I became a bit too big to button it up so I added latchets across the front so that I can fasten it.’

This is true materialism. It values and celebrates the physical objects of our lives, maintaining and refreshing them as we pass through the world from birth to death. I suspect that this jacket is indelibly associated with Edward in the minds of his friends. Because it is largely made by him, it exhibits his manual skills to all he meets. It is part of him.

Of course we need to recycle more; only about 10,000 tonnes of the 2m tonnes of textiles bought a year in the UK is properly recycled by taking the clothing apart and reusing the fibres, representing about one half of one percent of total consumption. But even a massive growth in full recycling is not enough. The world has to find ways of divorcing  our understanding of ‘prosperity’ from the continued growth in the number of things we buy, consume, and then throw away. As Kate Fletcher suggests, people need to recast themselves, moving away from being just passive consumers, enslaved to Primark and Zara’s endless rotation of new items in shop windows,  to becoming ‘suppliers of ideas and skills to fashion’ so that they value and cherish the clothes that they own, caring for them for decades.

We will not solve the world’s multiple ecological problems by telling ourselves and other people to buy less. The importance of consumption is far too firmly ingrained into our modes of thought. No-one, for example, imagines that Catherine Middleton should wear an old dress for her wedding this week. However a truly materialistic society would be looking forward to seeing her wearing the outfit her mother and her grandmother wore as their wedding dress, updated and adapted so that it became hers, perhaps even by using a sewing machine she learnt to use at school. Her husband, mutatis mutandis, would do the same.

(1)    Maximising Reuse and Recycling of UK Clothing and Textiles, Oakdene Hollins for Defra, October 2009. Chart on page 11 shows a rise from an index of about 90 in 1998 to over 160 in 2007.

(2)    Alison Gwilt and Timo Rissanen, Shaping Sustainable Fashion: changing the way we make and use clothes, Earthscan 2011

How much of Japan’s land area would be needed to generate all its electricity from wind?

The opponents of nuclear energy claim that Japan could produce much of its electricity from wind. (See the debate at Climate Progress). Others, such as the Breakthrough Institute, offer estimates of how much of the country’s land area would have to be covered with turbines to generate enough electricity. Below is my estimate of how much of the country would be required – about 10%. Since Japan gets about one quarter of its electricity from nuclear power, about 2.5% of the land area would be needed to provide the equivalent amount of power. This would come from about 50,000 turbines.

My figures almost certainly understate the land area required. I use a high estimate of the average number of watts of electricity produced per square metre of land, and I assume constant hourly production. But of course wind is highly erratic, so far more turbines would have to be installed to meet the total need. The actual production of onshore turbines – which are generally less productive than offshore wind farms – would probably be much less than 3 watts per square metre. Japan would also need extensive grid links with other Asian countries to protect against long periods of low wind speed by allowing the country to import large quantities of electricity. Unlike Britain, Japan does not even have a single unified electricity network, which would further compound the problems of dealing with regional shortages and surpluses of wind power.

I’d be very grateful for any corrections to these numbers. One point of clarification: the figures I give are for typical production of working turbines, properly spaced over large areas so that one turbine does not steal the wind of another. Pack the turbines more closely and you might get higher total production, but at much higher cost. Wind farms really do need lots and lots of space. The world’s biggest offshore installation, the planned London Array, will need over 230 square kilometres to provide 1,000 megawatts of maximum generating capacity. Some turbines will be spaced over a kilometre apart.

Wind power density                            a   3 watts (1) per sq metre
                                              b   3 megawatts per sq kilometre
Japan land area                               c   377,835 sq kilometres
Production if all of onshore Japan were
given over to turbines (2)                    d   1,133,505 megawatts (b x c)
Hours per year                                e   8,760
Annual output                                 f   9,929,503,800 megawatt hours (d x e)
Megawatt hours per terawatt hour              g   1,000,000
Annual output                                 h   9,930 terawatt hours (f / g)
Japanese electricity consumption (3)          i   1,075 terawatt hours per year
Share of Japanese land area needed            j   10.8% (i / h)

Notes

(1) This figure is achieved by the best UK offshore wind farms.

(2) The land area multiplied by the electricity production per square metre.

(3) Estimates of this figure vary slightly.
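The same sum in a few lines of Python, for anyone who wants to vary the assumptions:

power_density_w_per_m2 = 3.0        # note (1): 3 W per sq metre = 3 MW per sq kilometre
japan_area_km2 = 377_835
japan_demand_twh_per_year = 1_075   # note (3)
HOURS_PER_YEAR = 8_760

capacity_mw = power_density_w_per_m2 * japan_area_km2    # MW if every sq km carried turbines
output_twh = capacity_mw * HOURS_PER_YEAR / 1_000_000    # constant output assumed, MWh -> TWh
share_of_land = japan_demand_twh_per_year / output_twh

print(f"Output if the whole country were covered: {output_twh:,.0f} TWh a year")
print(f"Share of land needed for all electricity: {share_of_land:.1%}")
print(f"Share needed to replace nuclear (about a quarter of demand): {share_of_land / 4:.1%}")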

The full cost of nitrogen pollution exceeds the financial value of higher yields from fertilisation

A lack of nitrogen in the soil often restricts the productivity of a crop. Nitrogen fertilisers, made by the energy-intensive Haber Bosch process, have hugely improved farm yields around the world. But a new study on the wider impacts of the use of nitrogen compounds on soils suggests that these benefits are much less than the environmental costs. What is called ‘reactive’ nitrogen pollutes water supplies, produces greenhouse gases, cuts air quality and reduces biodiversity. Mark Sutton and colleagues completed a cost-benefit analysis of reactive nitrogen.(1) They found the value of extra yields from the use of nitrogen was €25-€130bn. (An unusually wide estimate, it must be said). But the cost of nitrogen-related pollution was put at €70-€320bn, meaning that, roughly speaking, the pollution costs of nitrogen are three times the value of the enhanced crop yields. So why do we use so much fertiliser? It is much like the banks: the gains are private (accruing to bankers and to farmers) while the losses are socialised (accruing to taxpayers and citizens). Put another way, the application of fertiliser onto farmland is far too cheap because its price does not recognise the full costs of using it. The result is that farmers are not incentivised to be efficient in its use, and the study authors estimate that about half of the nitrogen added to Europe’s soils ends up as pollution or back in the air as nitrogen gas.
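Taking the midpoints of those (admittedly very wide) ranges gives the rough ratio behind that claim:

yield_benefit_bn_eur = (25 + 130) / 2      # midpoint of the €25-130bn benefit range
pollution_cost_bn_eur = (70 + 320) / 2     # midpoint of the €70-320bn cost range

ratio = pollution_cost_bn_eur / yield_benefit_bn_eur
print(f"Pollution cost is about {ratio:.1f} times the yield benefit at the midpoints")

The midpoints give a factor of about two and a half, and the extremes of the ranges span anything from well under one to more than ten, so ‘roughly three times’ should be read as a broad-brush figure.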

Perhaps more controversially, the report recommends a reduction in meat consumption. About 85% of the nitrogen that isn’t wasted and which ends up in the complex molecules in food crops is eaten by animals, not humans. If we were all vegans, we would need to grow more plant matter but would still reduce the need for artificial nitrogen by 70% from today’s levels, even if we wasted as much nitrogen as we do today.

There’s another point which isn’t mentioned in Mark Sutton’s summary of the full report. One of the causes of nitrogen waste is monoculture. If our croplands are all devoted to huge acreages of a single plant, whether wheat or sugar beet or oilseed rape, then nitrogen uptake is likely to be lower than if we grow a diverse mix of crops in the same area. The precise mechanisms by which nitrogen is better absorbed by a range of different plants in the same area than by just one plant are not well understood. Nevertheless several studies now show that biodiversity reduces the run-off of nitrogen and thus cuts the environmental impact of using fertilisers, whether animal manure or artificial.

By coincidence, one such study was published in Nature the week before the Sutton summary of the European work.(2) Bradley J Cardinale showed that maintaining the diversity of different species of algae in streams resulted in greater uptake of the available nitrogen that would otherwise have polluted downstream rivers. He concludes that ‘biodiversity may help to buffer natural ecosystems against the ecological impacts of nutrient pollution’.

An earlier piece of research demonstrated a similar result. Whitney Broussard and Eugene Turner found that places in the US with diverse crops had lower levels of dissolved nitrogen in the rivers leaving the area.(3) The authors recommend rotating crops, decreasing field size, increasing the width of field edges and incorporating more native grasses between fields. All worthy objectives, but as Broussard says, ‘The American farmer is caught in a mode of production that has tremendous momentum and cannot be changed on the farm’. In other words, farmers are trapped into monocultures and have neither the knowledge nor the ability to take the financial risks of moving away from reliance on one crop. We therefore need to find a way of incorporating biodiversity’s value in reducing the costs of nitrogen pollution into the calculus of the farmer.

The curious fact is that moving away from single crop agriculture also seems to increase yields. Mixing different plants in a single field, or having animals living alongside the plants, such as ducks living in rice paddies,  can systematically improve crop performance. One study in China showed that mixing plants such as maize, sugar cane, wheat, potato and broad bean in a single field might add 30 to 80% to overall production.(4) I have seen similar results from mixing grains and other crops in Australia. Perhaps even if we cannot find a way to reward farmers for increasing biodiversity, and thus reducing nitrogen run-off, we can persuade them to investigate intercropping of different plants in the same field simply because it improves their overall yields.

(1) Sutton M.A. et al., The European Nitrogen Assessment (Cambridge University Press, 2011), available at http://go.nature.com/5n9lsq. A summary can be found at www.nature.com/nature/journal/vaop/ncurrent/full/472159a.html

(2) Cardinale, Bradley J, Biodiversity improves water quality by niche partitioning, Nature, www.nature.com/nature/journal/v472/n7341/full/nature09904.html

(3) This research is described at http://www.azocleantech.com/Details.asp?newsID=4617

(4) Chengyun Li et al., Crop Diversity for Yield Increase, PLoS ONE, November 2009

The dangers from nuclear power in light of Fukushima

This is a joint post, by Chris Goodall of carboncommentary.com and Mark Lynas (www.marklynas.org). We make no apologies for length, as these issues can really only be properly addressed in detail. An abridged version of this article was published in the TODAY newspaper of Singapore on April 6 2011. How risky is nuclear power? As the Fukushima nuclear crisis continues in Japan, many people and governments are turning away from nuclear power in the belief that it is uniquely dangerous to human health and the environment. The German government has reversed its policy of allowing the oldest nuclear plants to stay open and Italy has reportedly abandoned its efforts to develop new power stations. Beijing has stopped approving applications for nuclear reactors until the consequences of Fukushima become clear, potentially affecting up to 100 planned new stations. The mood towards the nuclear industry is antagonistic and suspicious around the world. We think this reaction is short-sighted and largely irrational.

For all its problems, nuclear power is the most reliable form of low carbon electricity. It remains the only viable source of low-carbon baseload power available to industrialised economies, and is therefore responsible for avoiding more than a billion tonnes of CO2 emissions per year. In addition to these unarguable climate benefits, we believe that nuclear power is much safer than its opponents claim. Despite the hyperbolic nature of some of the media coverage, even substantial radiation leaks such as at Fukushima are likely to cause very little or no illness or death. No power source is completely safe, but compared to coal, still the major fuel for electricity generation around the world, nuclear is relatively benign. About 3,000 people lost their lives mining coal in China alone last year. Many times that number died as a result of the atmospheric pollution arising from the burning of coal in power stations.

Although much journalism of the last few weeks has provided careful assessment of the true dangers of nuclear accidents, we thought it would be helpful to pull together the results of scientific studies on the damage caused by nuclear radiation to human health. Our aim is to allow readers to put the radiation risks of nuclear power, particularly after accidents, into perspective, and to appreciate the context of the oft-quoted units of ‘millisieverts’, ‘becquerels’ and other measurements. This is a complicated story, because not all radiation is the same – a crucial factor is the timescale of exposure. There is a big difference between the expected impacts of exposure to huge amounts in a very short period, large doses over several weeks, and long-running or chronic exposure.

We examine these three scenarios in turn. The results seem to be quite clear to us: accidents and leaks from nuclear power stations are not likely to cause substantial numbers of illness or deaths, even under exceptional circumstances such as are currently being experienced after the combined earthquake and tsunami disaster at Fukushima. This is an important conclusion given the potential for nuclear power to continue to mitigate global warming, which presents vastly greater risks on a global scale. We are not advocating slackness or complacency, just suggesting that a rational and balanced assessment of the risks of radiation is a good idea. To hastily abandon or delay nuclear power because of radiation risks from accidents such as that at Fukushima is poor policy-making.

Some background

All of us are exposed to radiation every day of our lives. Very little of this comes from nuclear power or nuclear weapons. Other sources are far more important. One example: potassium is a vital chemical for carrying electrical signals around our bodies, but a rare, naturally occurring isotope, potassium 40, is radioactive. The tiny amount inside us produces 4,000 decays of individual nuclei every second. This internal radioactivity, from potassium atoms and from a naturally occurring radioactive isotope of carbon, is responsible for about 10% of the annual dose received by someone in the UK.

More important sources are the radon gas produced in granite rocks, cosmic radiation and doses from medical equipment. By contrast, and despite the attention we pay to them, nuclear power stations and nuclear weapons are responsible for much less than half of one percent of the radiation typically absorbed by people in the UK. The same rough percentages apply to other countries operating nuclear reactor fleets.

The average background radiation across the UK is about 2.7 millisieverts (mSv) a year. (A ‘millisievert’ is a measure of radiation exposure to the body and is therefore a useful unit for directly comparing the radiation received from different sources). People in Cornwall, where there is far more radioactive radon around because of local geology, experience more radiation than in other areas. Their dose may be as high as 10 mSv, almost four times as much as the UK average. In fact nuclear power plants could not be built in the granite areas of the county because the natural background radiation at the boundary of the power station would be higher than is allowed under the strict rules governing the operation of nuclear plants. Cornish radiation isn’t that unusual: parts of Iran, India and Australia have even higher natural background radiation than Cornwall.

So our first point is that nuclear power is an almost trivial source of radiation, dwarfed by natural variations in other sources of radiation. The second is that exposure to radiation in the UK is tending to rise, but certainly not because of nuclear power or leaks from other nuclear operations. Instead it comes from the increased use of radiation in diagnostic equipment used by health care professionals. One scan in a CT machine will add about 10mSv to a person’s annual exposure – 3 million Britons went through this process last year. Per head of population, the number is even higher in the US.

These are two important basic numbers to help us assess just how dangerous nuclear power is: 2.7 mSv a year for the average natural background radiation received by the typical person in the UK and 10 mSv for a single CT scan. We will use these numbers to compare the radiation effect of nuclear power and to assess the importance of the very rare but severe accidents at nuclear power plants.

The impact of exposure to very high levels of radiation over a few hours

a) Chernobyl workers (1)

The fire and explosion at Chernobyl in 1986 was the world’s most severe accident at a civil nuclear power plant. It is the only such event which is known to have killed workers through the effects of radiation. About six hundred people were involved in work on the site of the power plant during the first day after the accident, of whom 237 were thought to be at risk of acute radiation syndrome (ARS) because of their degree of exposure. 134 individuals developed symptoms of ARS and 28 died as a result. The deaths were generally due to skin and lung damage, compounded by bone marrow failure. All but one of the people killed received a dose of radiation above 4,000 mSv, with the remaining death occurring after a dose of about 3,000 mSv.

The implication of this is that ARS will usually only kill someone who has experienced the impact of over 4,000 mSv. Indeed, many workers at Chernobyl actually received doses above 5,000 mSv and survived. By comparison, the workers engaged in the repair at Fukushima are being carefully monitored to ensure their total exposure does not go above 250 mSv, less than a tenth of the minimum level at which an ARS victim died at Chernobyl. As at 23rd March, 17 workers had received more than 100 mSv of radiation, forty times the yearly radiation received by the typical UK resident and equivalent to ten CT scans. It has been reported that two workers received radiation burns to the legs after exposure in contaminated water to 170 millisieverts per hour doses in Unit 3 on 24 March (2). To date this remains the only known health impact suffered by Fukushima workers.

But what of the longer term dangers to Chernobyl workers who suffered massive radiation exposures? Of those who survived acute radiation syndrome, 19 out of the 106 died between 1987 and 2006. These deaths included 5 cancers. 87 people were still alive in 2006; 9 of them had been diagnosed with various cancers including cases of leukaemia. The problem with using these statistics to draw definitive conclusions is that the numbers of workers affected by extremely high levels of radiation in the Chernobyl emergency are not large enough to give robust data on the long-term impact across wider groups. But the 20 year survival rate of the workers exposed to the greatest radiation – 82% – and the unremarkable percentage either dead of cancer or living with it – 14% in total, within ‘normal’ bounds – suggests that the human body is usually able to recover from even extremely high doses delivered in a short period of time. (This comment is not intended to diminish the severity of the effects of ARS: many of the survivors have suffered from cataracts, sexual dysfunction, skin problems and other chronic illnesses.)

Fourteen healthy children were born to ARS survivors in the first five years after the accident. There is no evidence of genetic damage passed to future generations.

b) Chernobyl’s wider early impacts

Several hundred thousand workers were involved in the aftermath of the accident (the so-called ‘recovery operation workers’ or ‘liquidators’). These people’s average total dose was about 117 mSv in the period 1986-2005, of which we can assume the large part was experienced in the first months after the accident or at the time the sarcophagus was being placed over the reactor core a couple of years later. The exposures in this group ranged from 10 to 1,000 mSv. The UN Committee on Chernobyl comments that ‘apart from indications of an increase in leukaemia and cataracts among those who received higher doses, there is no evidence of health effects that can be attributed to radiation exposure’. The suggestion here is that the overall impacts on cancer rates among the people with lower doses – but which are still very much higher than would normally be experienced in the UK – is limited.

This conclusion has been attacked by some groups. In particular, Greenpeace published a report entitled The Chernobyl Catastrophe: Consequences on Human Health in 2006 that estimated a figure for total deaths resulting from the disaster that was many times greater than official estimates. Nevertheless, most scientific reports, including all the many official reports into the accident, have concluded that the long-term effects of radiation on the recovery workers, as opposed to the much smaller numbers working inside the plant immediately after the explosion, have been very limited.

After the 1986 Chernobyl disaster large numbers of people in surrounding populations were exposed to the radioactive isotope iodine 131, largely through consuming milk and other farm products. The human body takes up iodine and stores it in the thyroid. Radioactive iodine accumulates in this small area of the body and gives the thyroid gland disproportionate exposure. The effective dose of radiation to the thyroid among some people in the areas affected by Chernobyl fallout ranged up to 3,000-4,000 mSv.(3) The concentration of radioactive iodine in the thyroid has produced large numbers of cases – probably about 7,000 by 2005 – of thyroid cancer among the millions of people in the affected areas.(4) These cases are highly concentrated among people aged less than 18 at the time of the disaster and the impact on adults appears to be very much less or even negligible.(5) The risk of getting thyroid cancer among the most affected group is continuing to rise even now. The implication is clear: severe doses of radiation twenty five years ago produced damage that is still causing cancer today.

Thyroid cancer is treatable and death rates are low. The number of people who had died of thyroid cancer in the affected areas by 2005 was 15. (6) We have been unable to find a scientific assessment of how many people are likely to die in the future from thyroid cancer in the Chernobyl region but the effective treatment for this disease may mean that relatively few of those affected will die. The incidence of thyroid cancer after Chernobyl could have been very substantially reduced if the authorities had acted to provide the local populations with iodine tablets. The effect of taking these tablets is to flood the thyroid gland with normal iodine, reducing the uptake of iodine 131 and thus cutting the dose of radioactivity. Of those countries closest to the nuclear power plant, only Poland seems to have widely distributed iodine, although this is a well understood and simple way of reducing thyroid cancer risk from radioactivity. Second, the authorities could have banned the sale of milk, which is the medium through which most iodine 131 enters the human body and which is why young children appear to have been most severely affected.

It is notable that the authorities around Fukushima are taking an extremely precautionary approach to iodine 131 exposures in the surrounding populations, both in rejecting milk and in distributing iodine tablets. Given the experience of Chernobyl, this seems sensible, even though the real risks of exposure, and of developing cancer as a result, are very much lower.

c) Fukuryu Maru fishing boat

In the 1950s and 1960s the nuclear weapons powers – the US, Britain, the Soviet Union and, later, China – carried out above-ground explosions of atomic bombs in remote areas. (In 1963 these tests provided about 5% of the radiation dose experienced by people in the UK, over five times the impact of Chernobyl, which added less than 1% to the total dose for the average person in 1986, the year of the explosion.) One of these tests took place in 1954 at Bikini Atoll, one of the Marshall Islands in the mid Pacific. The device turned out to be much more powerful than the US scientists running the experiment had expected, with an explosive power of about one thousand times that of the 1945 bombs dropped on Japan. As a result the fallout extended well beyond the exclusion zone established by the US, and a Japanese fishing boat was caught in the aftermath of the explosion.

The 23 individuals on this boat received huge doses of radiation – probably averaging between 4,000 and 6,000 mSv. The fishermen suffered severe radiation burns within hours and decided to return to their home port in Japan. Upon their arrival two weeks later their symptoms were recognised as being caused by radiation and they were treated for acute radiation syndrome (ARS). Unfortunately, one of the treatments the fishermen received was blood transfusion using blood infected with the hepatitis C virus. One of the crew members died a few months after the explosion from liver disease, which may have resulted from the hepatitis as much as from the radiation. The other fishermen also suffered disease from the hepatitis C in the transfusions and many of them died of liver problems. This experience complicates any medical conclusions that might be drawn about the immediate or long-term impacts of severe radiation exposure.

As of February 2011, it is reported that of the 22 crew members who survived ARS, nine are still alive 57 years later.(7) The average age of these survivors is over 80. These individuals all seem to have had major health problems during their lives, but the cause may well be the transfusions rather than the radiation. Once again, the main implication of the Fukuryu Maru event is that even huge doses of intense radioactivity can cause surprisingly few fatalities.

d) Hiroshima and Nagasaki

The survivors of the atomic bomb blasts were exposed to high but varying levels of radiation. The death rates of nearly 90,000 survivors have been painstakingly studied and compared with people from other cities, so they are a valuable source of information from a horrific real-world experiment. Most survivors endured an exposure of less than 100 mSv, and for these people there is no statistically significant increase in cancer risk. One study shows, for example, that the number of deaths from solid cancers among those who received less than 100 mSv was 7,647, compared to the 7,595 that might have been expected based on the experience of populations in other Japanese cities.(8) The increment of 52 deaths is less than 1% above the expected level, and because the comparison involves a relatively large group even this small difference can be estimated with reasonable precision.
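
For readers who want to check the arithmetic, here is a minimal sketch of the excess-death calculation quoted above; the two death counts come from the study cited, and everything else is simple division.

```python
# The excess-death arithmetic for the under-100 mSv group quoted above (Python); the two
# death counts are taken directly from the study cited in the text.
observed_solid_cancer_deaths = 7647
expected_solid_cancer_deaths = 7595

excess_deaths = observed_solid_cancer_deaths - expected_solid_cancer_deaths
relative_increase = excess_deaths / expected_solid_cancer_deaths

print(f"Excess deaths: {excess_deaths}")                 # 52
print(f"Relative increase: {relative_increase:.2%}")     # about 0.7%, i.e. under 1%
```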

Above 200 mSv of total exposure, the effect of the radiation becomes a little more obvious, but it is not until the dose exceeds 1,000 mSv that a major increase in cancers occurs. Over 2,000 mSv, the risk of a survivor of the bombs dying from a solid cancer is approximately twice the level of risk in non-affected cities. But, even at this very high dose, the proportion of survivors in this group dying from solid cancers was 18%, to which should be added the 3% dying from leukaemia. Compare this, for example, to the UK, where about a quarter of all today’s deaths are from cancer, presumably the result of other factors.(9) So it is fair to say that even severely irradiated Japanese atomic bomb survivors appear to be at lower risk of dying from cancer than normal British people.

e) The effects on soldiers exposed to radiation at tests of nuclear bombs

US and UK research has shown that soldiers exposed to radiation in the aftermath of nuclear bomb tests, such as the ‘Smokey’ test in Nevada in 1957, have not had a higher than expected incidence of cancer. Although this group seems to have experienced more leukaemia than would have been predicted, the number of other cancers has been lower. The overall death rate from cancer is not higher than in a control group.(10)

Severe exposure over longer time periods

In the previous section we looked at single catastrophic events that caused high doses of radiation, showing that only very high doses, perhaps ten or a hundred times the yearly amount received from background sources, substantially affect the risk of future cancers. The same is true of less intense individual exposures that are repeated many times over a period, even though these may add up to very high levels of total exposure.

a) Radiotherapies for cancer

Highly targeted bursts of radiation are used to kill cancer cells in radiotherapy. As a result the patient receives very large total doses of radiation over the period – perhaps a month – of the treatment. The amount of radiation received may be as much as 30,000 mSv, many times a fatal dose. This does not cause acute radiation sickness because the patient is given time to recover between the doses, allowing damaged non-cancerous cells to repair themselves, and because much of the power is directed at specific internal sites in the body, where the radiation does indeed cause cell death. (That, after all, is the point of radiotherapy: to kill the cancerous cells in the patient’s tumour.) Some of the radiation reaches other healthy parts of the body and does seem to cause small increases in the likelihood of developing another cancer. But, as the American Cancer Society says, ‘overall, radiation therapy alone does not appear to be a very strong cause of second cancers’.(11) For this reason, radiation overall cures many more cancers than it causes in today’s populations.

b) Workers manufacturing luminous dials for watches

A classic study by Rowland et al in 1978 investigated the incidence of bone cancer among workers who, before the second world war, painted luminous watch dials with radioactive paint.(12) Workers ingesting more than 10 gray, a measure equivalent to more than 100,000 mSv, had a very high incidence of bone cancer. Those taking in less than 10 gray had no cases of bone cancer at all. In his book Radiation and Reason, Oxford University Professor Wade Allison comments that this is ‘a most significant result’ because it shows a clear demarcation between the level of longer-term exposure that produces an obviously enhanced cancer risk and the level that does not.(13) The threshold – 10 gray – is a level that nobody is now ever likely to experience as a result of nuclear power. It is far greater, for example, than the exposure of any of the workers fighting the fire at Chernobyl.

The impact of chronic enhanced background radiation

Thus far we have tried to show that only very high levels of radiation, of a kind very rarely encountered, tend to produce statistically significant increases in cancer and other diseases. The last category of exposure is long-lived elevated levels of background radiation. With the exception of thyroid cancer, and of high levels of radon gas where the person exposed also smokes (see below), raised levels of radiation appear to have at most a small effect on the likelihood of cancer or other diseases. Indeed, some people argue that small increases in the total amount of radiation received per year have no impact whatsoever on illness rates, or even that some elevated doses can be beneficial.

The standard way of viewing the impact of radiation on human health is called the ‘linear, no threshold’ model, or LNT. LNT assumes that the increased rates of cancer seen in populations such as the atomic bomb survivors can be used to predict the amount of cancer arising at much lower levels of radiation. The theory says that there is a straight-line relationship: simply put, if a 1,000 mSv dose gives 10% of people cancer, then a 100 mSv total exposure will induce the disease in 1% of the population. With this model of the relationship between radiation and cancer, all incremental doses are bad, from whatever base level, because they add to risk. But the evidence from many studies is that it is difficult to show any unfavourable effect from elevated levels of exposure. For example, people living at higher altitudes generally get more background radiation than those at sea level because of greater cosmic ray intensity, yet we could find no study showing that these people experience more cancers or other radiation-related diseases. In Ramsar, Iran, naturally high background radiation delivers a hefty dose of 260 millisieverts per year to local residents, around a hundred times the 2.7 mSv a year experienced by the average UK citizen, and ten times higher than the doses normally permitted to workers in nuclear power stations. Yet there is no observed increase in cancer in this or any other area where levels of background radiation are up to two orders of magnitude higher than normally observed.(14)
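
To make the extrapolation concrete, here is a minimal sketch of how an LNT calculation works, using only the article’s illustrative figure of a 10% risk at 1,000 mSv; the numbers are for demonstration and are not real risk coefficients.

```python
# A minimal sketch of the LNT extrapolation described above (Python). The 10%-at-1,000-mSv
# figure is the article's illustrative example, not a real risk coefficient.
def lnt_excess_risk(dose_msv, reference_risk=0.10, reference_dose_msv=1000.0):
    """Under LNT, predicted excess cancer risk scales linearly with dose, with no threshold."""
    return reference_risk * dose_msv / reference_dose_msv

for dose in (1000, 100, 10, 2.7):   # 2.7 mSv is roughly the average annual UK background dose
    print(f"{dose:>6} mSv -> predicted excess risk {lnt_excess_risk(dose):.3%}")
```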

The LNT model is controversial because it is based on statistical assumptions (which reflect a very precautionary approach) rather than on the observed biological effects of radiation: it would predict higher rates of radiation-induced cancer in Ramsar, where background radiation is exceptionally high, despite no evidence of these occurring in reality. It has also been criticised because the body can repair most DNA damage caused by radiation, and cells have mechanisms that perform this healing role constantly. An analogy is blood loss: losing half a litre of blood (as a blood donor does) causes no health impact whatsoever, whereas losing 5 litres would be fatal. Here there is clearly a threshold for harm, so a ‘linear, no threshold’ assumption would be biologically incorrect.

There is one important exception, however, to the rule that increased background radiation presents no additional health problems. In many parts of the world, particularly those with granite rocks close to the surface, radon gas represents the most important source of natural exposure to radiation. Radon is a short-lived radioactive gas produced in the decay chain of naturally occurring uranium. As we said above, for the UK population as a whole the average total absorption of radiation is about 2.7 mSv per year, but many people in Cornwall receive much more, largely because the gas pools in their homes and workplaces.

Studies have suggested that this increase has a very small effect on the incidence of most cancers and other illnesses, although the research is not yet definitive about the precise relationship between radon exposure and rates of cancer. However, radon does have an observed effect on the occurrence of lung cancer, particularly among smokers, and this effect increases with the concentration of radon typically found in the home. In homes with the highest radon levels, the chance of a smoker getting lung cancer rises from about 10% to about 16%, according to one study.(15)

The US National Cancer Institute concludes: “Although the association between radon exposure and smoking is not well understood, exposure to the combination of radon gas and cigarette smoke creates a greater risk for lung cancer than either factor alone. The majority of radon-related cancer deaths occur among smokers.” (16)

a) The impact of living near a nuclear power plant

Several studies have shown ‘clusters’ of solid cancers and of leukaemia around nuclear installations in the UK and other countries, although the vast majority of studies show no relationship between proximity to a nuclear site and cancer.(17) In particular, the incidence of childhood leukaemia appears to be marginally higher than the national average in some areas close to nuclear sites, and at some locations the rate of such cancers appears to rise with closeness to the site. (This suggests a risk that is related to the dose experienced by the child, and is thus in line with LNT theory.)

This is a worrying finding and much research has tried to find out why the chance of cancer appears to be slightly higher in these places. But the issue is this: why should there be an increased risk of cancer around nuclear sites when the aggregate level of radiation exposure is so low compared, for example, to parts of Cornwall? Similarly, why do we not see higher incidences of childhood cancers around large coal-fired power stations, which emit far higher levels of radiation than nuclear sites as a result of the radioactive material contained in the coal being dispersed from the chimneys? And, as a separate point, why have some of the rates of higher-than-expected cancer fallen at some sites when radiation levels have remained approximately constant?

Scientists working on this issue have no convincing explanation for the higher rate of childhood cancers in these clusters. But many experts now believe that what is known as ‘population mixing’ may be responsible for the observed increase. Mixing occurs when a new population, such as those recruited to construct or operate a nuclear power station, arrives in the area. This may, one theory goes, cause unusual infections in the area and the end result of some of these infections may be childhood leukaemia.

To repeat: the clusters of cancer around some nuclear sites for some periods of time appear to suggest a worrying relationship between nuclear power stations and cancer. But the relatively low levels of radiation at these places, compared to those around coal-fired power stations or in areas with high natural background radiation, make it extremely difficult to see how radioactivity could cause the higher levels of cancer.

b) Workers in defence industries exposed to radiation

Oxford’s Professor Wade Allison reports on a survey of a very large number of workers (174,541) employed by the Ministry of Defence and other research establishments.(17) This study found that the workers received an average of 24.9 mSv above background radiation, spread over a number of years. Even though this amount is small when expressed as a figure per year, the large number of people in the study should enable us to see whether low levels of incremental exposure have any effect on cancer incidence. (Any increase will be much more statistically significant than additional cancers in a smaller group.) In fact the survey found that the workers suffered from substantially less cancer than would be expected, even after correcting for factors such as age and social class. (The mortality rate for all cancers was between 81% and 84% of the level expected.) This suggests that the increased radiation they experienced delivered no additional cancer risk at all.

More on Fukushima

How dangerous are the levels immediately next to the Fukushima boundary fence? The power plant operator TEPCO issues data every day from measurements taken at one of the gates to the plant. (18) On March 27th, about two weeks after the accident, the level had fallen to about 0.13 mSv an hour – and was continuing to decline at a consistent rate. (In the course of writing this article, the number rose to about 0.17 mSv an hour but then started to decline again.) If someone stood at that point for a year, he or she would receive about 1.1 Sv. This is a very high level – about 400 times background level in the UK – but would not necessarily have fatal effects. Professor Allison argues in a post on the BBC News web site that a figure of 100 mSv a month, or 1.2 Sv a year, would be a good level to set as the maximum exposure for human beings before real risk was incurred. (19)

Radiation intensities from a point source obey the inverse square rule: as we move away from the source, the level of radiation falls in proportion to the square of the distance. (This excludes the impact of fallout from an explosion or of radiation carried in plumes of steam.) Thus a reading obtained 2 km away from the plant will be one hundredth of the level at 200 m. In other words, if the monitoring point at the Fukushima gate is 200 metres from the source of the radiation, then at a distance of about 4 kilometres from the plant the incremental radiation would today be no greater than the UK’s average background dose of around 2.7 mSv per year. Since much of the radiation emanating from Fukushima is iodine 131, which has a half life of 8 days, the level of contamination of the surrounding area will continue to fall rapidly.
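
The inverse-square arithmetic can be sketched as follows, using the gate reading quoted above and assuming, as the text does, a single point source 200 metres from the monitoring point with no contribution from fallout or plumes.

```python
# A minimal sketch of the inverse-square arithmetic above (Python). It assumes a single point
# source 200 metres from the monitoring gate and ignores fallout and radioactive plumes.
import math

dose_rate_at_gate_msv_per_hour = 0.13    # TEPCO gate reading quoted above
gate_distance_m = 200.0                  # assumed distance from the gate to the source
uk_background_msv_per_year = 2.7

annual_dose_at_gate_msv = dose_rate_at_gate_msv_per_hour * 24 * 365   # ~1,140 mSv, i.e. ~1.1 Sv

def annual_dose_msv(distance_m):
    """Annual dose at a given distance, scaled by the inverse square of the distance."""
    return annual_dose_at_gate_msv * (gate_distance_m / distance_m) ** 2

print(f"Annual dose at the gate: {annual_dose_at_gate_msv:.0f} mSv")
print(f"Annual dose 2 km away:   {annual_dose_msv(2000):.0f} mSv")    # one hundredth of the gate level

# Distance at which the incremental dose falls to the UK background level
d_m = gate_distance_m * math.sqrt(annual_dose_at_gate_msv / uk_background_msv_per_year)
print(f"Incremental dose equals UK background at about {d_m / 1000:.1f} km")
```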

The effect on water supplies in Tokyo and elsewhere

The authorities in Tokyo recommended in mid-March that infants should not be given tap water after levels of radioactivity rose above usual levels. The peak level reached at a Tokyo water supply plant was 210 becquerels per litre and this prompted the decision – anxious parents were provided with bottled water instead. (A becquerel is a measure of the number of radioactive decays per second, not of the dose of radiation absorbed.) Children will be the most susceptible to the effects, but an infant drinking this water for a year would absorb the equivalent of about 0.8 mSv of radiation, or less than a third of the normal absorption by an adult in the UK.(20)

There are significant divergences between different countries’ approaches to radioactivity in water. The European limit for radioactivity in public water supplies is set at 1,000 becquerels per litre, nearly five times the level declared ‘unsafe’ for infants by the Tokyo authorities.(21) In one study carried out by the British Geological Survey in Tavistock, Devon, private water supplies were found to contain as much as 6,500 becquerels per litre, and no ill effects have been reported.(22)

Although this is not directly stated, we can assume that the large majority of this radioactivity in British water is derived from the decay of radon. This means that in the UK, the level is likely to remain at a roughly consistent level year after year. But in Japan the radiation is more likely to be from the decay of iodine 131, which has a very short half life. So the radiation in Japanese tap water will quickly fall, and already appears to be doing so. Thus the risk of any radiation damage, even for very young children, from drinking tap water in Tokyo is not just small but infinitesimal.

Summary

Overall the average UK person gets approximately 0.2% of his or her radiation exposure from the fallout from nuclear plants (and from nuclear accidents) and less than 0.1% from nuclear waste disposal. This compares to about 15% from medical imaging and other medical exposures and about 10% from the natural decay of potassium 40 and carbon 14 in the body. Naturally occurring radon is many hundreds of times more important as a source of radiation than nuclear power stations and nuclear fallout. Even for those who believe in a direct linear relationship between radiation levels and the number of cancer deaths, the effect on mortality of the normal operation of nuclear power stations would be impossible to discern statistically and in our opinion is likely to be non-existent.

It can only be in the event of a serious accident that we have any reason to be really concerned about nuclear power. We have tried to show in this article that even when such accidents occur the effects may be much less extensive than many people imagine, particularly given the constant media coverage devoted to Fukushima. Chernobyl killed 28 people in the immediate aftermath of the disaster. All these people had experienced huge doses of radiation in a short period. Mortality since the accident among the most heavily dosed workers has not been exceptionally high. And many studies after Chernobyl have suggested that – with the exception of the thyroid variant – cancer rates have only increased very marginally even among those exposed to high doses of radiation after the accident.

While reported rates of other, non-cancer, illnesses may have risen, researchers seem to think that much of this rise is due to the impact of other factors, such as the need to evacuate from the area, increased smoking, drinking and other risky behaviours, or even the wider effects of the break-up of the Soviet Union soon after the accident. There is substantial evidence, as the UN reports on Chernobyl attest, that the psychological impacts of fear of radiation far outweigh the actual biological impacts of the radiation itself. Thus misinformation that exaggerates the dangers of radiation is actually likely to be harmful to large numbers of people – a point which should be borne in mind by anti-nuclear campaigners. This certainly appears to have been the case after Chernobyl and Three Mile Island (in the latter case the radiation released was negligible, but the political fallout immense).

We hope that a more rational sense of risk and an appreciation of what we have learned from past experience will prevent the repeat of this experience after Fukushima. It is important to appreciate that whilst radiation levels at the boundary fence are still high, they are dropping sharply. Even today, March 28th, the radiation exposure of a person a few kilometres from the plant (in the precautionary exclusion zone) is likely to be lower than experienced by many people living in Cornwall or other places with high radon density. Similarly, the peak levels of radiation in the water supply have constantly been well below levels regarded as safe in other parts of the world.

No technology is completely safe, and we don’t wish to argue that nuclear power is any different. But its dangers must be weighed against the costs of continuing to operate fossil fuel plants. Just down the road from us is Didcot A power station, a large coal-burning plant with poor pollution control and therefore substantial effects on local air quality, as well as larger emissions of radiation than from any UK nuclear power station and a CO2 output of about 8 million tonnes a year. It is our view that Didcot has caused far more deaths from respiratory diseases than all the deaths ever associated with nuclear energy in the UK, and that coal power is a far more legitimate target of environmental protest than nuclear.

Chris Goodall and Mark Lynas, 29th March 2011.

(With many thanks to Professor Wade Allison for his help on the research for this article. All errors are ours.)

1 Much of the data in this section is taken from Sources and Effects of Ionizing Radiation, United Nations Scientific Committee on the Effects of Atomic Radiation, 2008 report to the General Assembly, published February 2011.

2 http://www.world-nuclear-news.org/RS_Contaminated_pools_to_the_drained_2703111.html

3 Prof. Wade Allison, Radiation and Reason, page 100

4 Taken from Sources and Effects of Ionizing Radiation, United Nations Scientific Committee on the Effects of Atomic Radiation, 2008 report to the General Assembly, published 2011

5 Sources and Effects of Ionizing Radiation, United Nations Scientific Committee on the Effects of Atomic Radiation, p 19

6 Sources and Effects of Ionizing Radiation, United Nations Scientific Committee on the Effects of Atomic Radiation, 2008 report to the General Assembly, published 2011, page 15

7 Interview report, http://www.japantoday.com/category/national/view/crewman-of-irradiated-trawler-hopes-bikini-atoll-blast-never-forgotten

8 Preston, D.L. et al. (2004) Effect of Recent Changes in Atomic Bomb Survivor Dosimetry on Cancer Mortality Risk Estimates, Radiation Research.

9 National Statistics UK http://www.statistics.gov.uk/cci/nugget.asp?id=915

10 American Cancer Society, http://www.cancer.org/Cancer/CancerCauses/OtherCarcinogens/IntheWorkplace/cancer-among-military-personnel-exposed-to-nuclear-weapons

11 American Cancer Society http://www.cancer.org/Cancer/CancerCauses/OtherCarcinogens/MedicalTreatments/radiation-exposure-and-cancer

12 Rowland et al., 1978: Dose-response relationships for female radium dial workers, Radiation Research, 76, 2, 368-383

13 Wade Allison, Radiation and Reason, 2009

14 M. Ghiassi-nejad et al., 2002: Very high background radiation areas of Ramsar, Iran: preliminary biological studies, Health Physics, 82, 1, 87–93

15 British Medical Journal http://www.bmj.com/content/330/7485/223.abstract

16 US National Cancer Institute, http://www.cancer.gov/cancertopics/factsheet/Risk/radon

17 Details of some of these studies are discussed in a 2005 report by the Committee on Medical Aspects of Radiation in the Environment (COMARE), available at http://www.comare.org.uk/documents/COMARE10thReport.pdf

18 Wade Allison, Radiation and Reason, page 127

19 http://www.tepco.co.jp/en/nu/monitoring/11032711a.pdf

20 Professor Richard Wakeford of the Dalton Nuclear Institute, quoted on the BBC News web site at http://www.bbc.co.uk/blogs/thereporters/ferguswalsh/2011/03/japan_nuclear_leak_and_tap_water.html

21 This limit is what is called an ‘action level’. That is, the authorities expect something to be done when higher levels are observed

22 Neil M. MacPhail, A radon survey of Ministry of Defence occupied premises in Her Majesty’s Dockyard, Devonport, unpublished MSc dissertation, University of Surrey 2010.

The cost of delaying nuclear power by a year: 2030 emissions will be 3% higher

(This post was first carried on Mark Lynas's blog at www.marklynas.org.) The Fukushima disaster will probably delay the arrival and growth of nuclear power in the UK. Unless the gap is filled by alternative low carbon sources, CO2 emissions will inevitably be higher than they otherwise would be. This note estimates the likely effect.

Last December, the Committee on Climate Change (CCC) produced a carbon budget for the period up to the end of the next decade. It suggested that the UK needs to emit no more than 310 million tonnes of greenhouse gases a year by 2030. Getting there requires emissions to fall at over 4% a year during the 2020s, a hugely difficult target.

Electricity generation is the easiest major source of carbon emissions to decarbonise and the Committee looks for new nuclear power stations to replace fossil fuel plants from 2018 onwards. By the start of the 2020s, the CCC believes it necessary to install an average of 2 – 2.5 gigawatts a year of new nuclear generating capacity. (Broadly speaking, this means construction of one and a half nuclear power plants a year). The Committee sees renewables and carbon capture (CCS) providing approximately the same amounts of new low-carbon capacity each year. Nuclear is needed because it provides reliable electricity 24 hours a day, unlike wind. In addition, the technology is far more mature than carbon capture, meaning that although the first stations are very unlikely to be completed before 2018, they will still be producing electricity before the first CCS stations.

Put simply, achieving the CCC’s target for emissions from electricity generation of 50 grammes of CO2 per kWh, down from about 500 grammes today, seems to require the fastest possible expansion of nuclear. The implication of the CCC’s very robust work is that if Fukushima delays nuclear construction, emissions in 2030 will be higher than they otherwise would be. By how much?

Here are my assumptions.

a) Concerns over the implications of Fukushima delay the UK’s nuclear programme by just over a year. This means that the nuclear programme ramps up later and so has constructed two fewer Areva EPR reactors by 2030.

b) Instead of these two reactors, the UK is obliged to keep equivalent gas-fired capacity on stream. These gas-fired plants generate 350 grammes of CO2 per kilowatt hour of electricity more than the very low carbon electricity from nuclear.

c) An Areva EPR power station generates 1.6 gigawatts for 8,000 hours a year (just over 90% uptime, its design capacity).

Replacing the electricity that would have been generated in 2030 by the two EPRs not built because of the Fukushima-induced delay will result in about 9 million extra tonnes of CO2 per year, just under 3% of the UK’s carbon budget for 2030. At the government’s target carbon price for 2030 of £70 per tonne, the ‘cost’ in 2030 is over £600m a year, and the equivalent of several billion pounds over the decade of the 2020s.
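
For anyone who wants to reproduce the arithmetic, here is a quick sketch using assumptions (a) to (c) above; the 310 million tonne budget and the £70 per tonne carbon price are the figures quoted earlier.

```python
# A quick check of the arithmetic, using assumptions (a) to (c) above (Python). The 310 million
# tonne budget and the GBP 70 per tonne carbon price are the figures quoted in the text.
reactors_delayed = 2
capacity_gw_per_reactor = 1.6
hours_per_year = 8000                     # just over 90% uptime
extra_gco2_per_kwh = 350                  # gas-fired generation minus nuclear

extra_kwh_per_year = reactors_delayed * capacity_gw_per_reactor * 1e6 * hours_per_year
extra_tonnes_co2 = extra_kwh_per_year * extra_gco2_per_kwh / 1e6        # grammes to tonnes

carbon_budget_2030_mt = 310
carbon_price_2030_gbp = 70

print(f"Extra CO2: {extra_tonnes_co2 / 1e6:.1f} million tonnes a year")                   # ~9.0
print(f"Share of the 2030 budget: {extra_tonnes_co2 / 1e6 / carbon_budget_2030_mt:.1%}")  # ~2.9%
print(f"'Cost' at GBP 70/tonne: GBP {extra_tonnes_co2 * carbon_price_2030_gbp / 1e6:.0f}m a year")
```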

Nuclear power may turn out to be extremely expensive. Certainly no-one watching what is going on at the two construction sites of Flamanville in Normandy and Olkiluoto in Finland can be anything but sceptical of Areva’s bland assurances. Nevertheless, the reality is that a further year’s delay while politicians, regulators and industry calm a worried UK public will make the carbon targets even more difficult to achieve.

2020 renewable heat targets mean converting a third of the UK to woodland

The production of heat is responsible for about half the UK’s total CO2 emissions and the announcement of the details of the Renewable Heat Incentive (RHI) is a welcome step forward. Many significant issues remain unaddressed – most importantly whether the active encouragement of the use of biomass (primarily wood) is likely to increase pressures on land use. Put simply, are the targets for renewable heat announced today compatible with commitments not to increase deforestation around the world? And will the RHI mean land is converted from agricultural use to wood production, in the UK or elsewhere? The calculations in this note suggest that to achieve the 2020 targets from domestically grown wood, about a third of the UK’s total land area would have to be given over to new forest.

The preamble to the RHI says that the government wants to see about 12% of the UK’s heating provided by renewable sources by 2020. Since about half of all energy use in the UK is employed to provide heat, this implies that just under 6% of total national energy consumption will be provided from renewable sources. Not all of this will be wood: the government’s plans mention biomass from the municipal waste stream and biomethane from the digestion of agricultural wastes. But it is almost inevitable that wood will provide the large majority of this renewable heat – there just isn’t much energy in domestic waste and agricultural residues.

I have done a series of quick calculations that demonstrate how much wood is needed to provide about 6% of UK energy demand. (For fans of incomprehensible energy numbers, this is about 100 terawatt hours.) This could be provided by about 24 million tonnes of dry wood, burnt in very efficient boilers. Fresh-cut wood is about 40% moisture, meaning that about 40 million tonnes needs to be cut down and then dried.

For comparison purposes, it may be helpful to note that the UK currently produces about 9 million tonnes of forest products a year – somewhat less than 25% of what we will need for wood for energy.

I think that well-managed UK woodlands, and land given over to energy crops such as elephant grass (Miscanthus), can produce about 3 dry tonnes a hectare a year, averaged over soil and climate types. So to produce enough wood domestically we would need to use about 8 million hectares. The UK’s total land area is about 25 million hectares. In other words, to get 12% of our heating needs from wood we would need to set aside about one third of the surface of the country for forestry and energy crop production, in addition to existing uses. The proportion of the UK under forestry today is about 12%.
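
The calculation can be checked with a short sketch; the useful heat per dry tonne of wood is an assumption of mine, chosen to be consistent with the 24 million tonne figure above, and the other inputs are as stated in the text.

```python
# A back-of-envelope check of the wood-for-heat numbers (Python). The net heat per dry tonne
# is an assumed figure, chosen to be consistent with the 24 million tonne estimate in the text.
heat_target_twh = 100                    # ~6% of UK energy demand, delivered as renewable heat
dry_wood_kwh_per_tonne = 4200            # assumed useful heat from a dry tonne in an efficient boiler
fresh_wood_moisture_fraction = 0.40      # fresh-cut wood is about 40% water
yield_dry_tonnes_per_ha = 3              # assumed average yield for UK woodland and energy crops
uk_land_area_mha = 25                    # million hectares

dry_tonnes = heat_target_twh * 1e9 / dry_wood_kwh_per_tonne              # ~24 million tonnes
fresh_tonnes = dry_tonnes / (1 - fresh_wood_moisture_fraction)           # ~40 million tonnes
land_needed_mha = dry_tonnes / yield_dry_tonnes_per_ha / 1e6             # ~8 million hectares

print(f"Dry wood needed:   {dry_tonnes / 1e6:.0f} million tonnes a year")
print(f"Fresh wood needed: {fresh_tonnes / 1e6:.0f} million tonnes a year")
print(f"Land needed:       {land_needed_mha:.0f} million hectares "
      f"({land_needed_mha / uk_land_area_mha:.0%} of the UK)")
```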

Of course this won’t happen. We will import the vast bulk of the wood we need. The Forestry Commission publishes estimates of the UK’s likely output of wood well beyond 2020 – after all, the trees that will be harvested by then are already planted – and the most we can hope for is an extra 3 million or so tonnes a year, less than a tenth of what we will need.

Other countries have far more woodland than the UK does. We could meet the UK’s 2020 renewable heat commitment by giving over less than one quarter of one per cent of the world’s forest land. The problem is that the world needs to decrease the pressure to log slow-growing hardwood forests, not to add, however marginally, to the demand for wood as fuel. And the UK’s policies towards renewable heat will probably be copied by other countries, adding to the pressure on world timber stocks. The uncomfortable fact is that we need the world’s land to produce more food, more ethanol and biodiesel for vehicle and aircraft fuels, and more biomass for heat. Although we can use our land more productively, for example by re-establishing forests on the UK’s upland grasslands, the RHI will inevitably add more pressure to food prices – and to the price of wood itself.

‘Zero-carbon’: the sad story of the rapid retreat from a difficult commitment to improving Britain’s new homes

In late 2006 the British government announced that new homes built in 2016 would have to be ‘zero-carbon’. The net emissions from heating the house, providing hot water, running the lights and powering the home appliances had to be no more than zero. Over the last four years this commitment has been progressively weakened. In late February 2011, the committee charged with advising the government set a new target about one third as demanding as the 2007 promise. Battered by builders wanting to continue constructing standard houses and by worries about the expense of achieving the best standards for insulation and airtightness, the government has given up. Four-storey blocks of flats built in 2016 will have emissions standards barely different from well-insulated apartments built today.

The Code for Sustainable Homes came out in December 2006, at the height of enthusiasm for radical changes in the way we consume energy. British houses are built poorly, with inadequate standards of insulation, high levels of air loss and low-quality windows and doors. The Code was meant to jerk the industry into radical improvements in construction performance. While the use of renewable energy was seen as important, better construction standards and, for example, the use of off-site fabrication of major components were at least as important.

The drive persisted for some time. A consultation document came out in July 2007 affirming the commitment to ‘zero-carbon’. And zero carbon meant what it said: all emissions, including those from running home appliances, were meant to be covered. The definition was unambiguous.

By December 2008 worries had set in. Would it be possible to install enough renewable energy in the form of wind turbines, solar panels, biomass boilers and community heating systems? Were the proposed insulation standards too tight? A consultation document was put out, entitled ‘Definition of zero-carbon homes and non-domestic buildings’. It quietly dropped the idea that the zero-carbon standard should include the electricity for powering appliances and set a new target of reducing the emissions from heating the house and running the lights by 70% relative to planned 2010 standards. ‘Zero-carbon’ would be achieved by offset schemes that allowed the housing developer to pay money for renewable energy schemes elsewhere. (Details of how this might work are still to be published.)

This line held for a time. The government stood by the revised definition of zero carbon and reconfirmed it in July 2009.

But the Coalition government of 2010 took only two months to retreat again. Housing minister Grant Shapps said that the 70% target ‘needs to be re-examined’. (Nevertheless, at the same time, Mr Shapps repeated the Coalition’s promise that it would be the ‘greenest government ever’.) Six months later the government’s advisory body, the Zero Carbon Hub, reported back to him, offering a revised view that a 44% reduction was right for apartment blocks, 56% for semi-detached houses and terraces and 60% for detached homes.

In tabular form, here is what happened. The numbers refer to the kilogrammes of carbon dioxide (or other greenhouse gases) per square metre of new building and are taken from the latest report of the Zero Carbon Hub. But the units are not important; what matters is the loss of commitment to genuinely low carbon building.

Reduction (kg CO2 per m2, from the 45 kg/m2 of a typical 2006 house) | Date | Government document | Policy
45 | December 2006 | ‘Code for Sustainable Homes’ | ‘Zero carbon emissions from all energy use over the year’
45 | July 2007 | ‘Building a Green Future’ consultation document | ‘Zero carbon emissions from all energy use over the year’
19 | December 2008 | ‘Definition of zero-carbon homes and non-domestic buildings’ | 70% reduction in building emissions from heating and lighting, but the ambition to cancel out other electricity use dropped
19 | July 2009 | ‘Defining an energy efficiency standard’ | Continued with the 70% commitment
– | July 2010 | Ministerial statement | 70% commitment ‘needs to be re-examined’
14 | February 2011 | ‘Carbon Compliance: setting an appropriate standard for zero carbon new homes’ | Commitment reduced to a 44-60% reduction, depending on house type

In summary, the commitment to reduce household emissions for new home developments from 45 kg/m2 to zero has been reduced to a commitment to cut emissions from 45 kg/m2 to about 31 kg/m2. This is well above what is being achieved today by careful builders constructing genuinely low carbon homes.

What has driven this change? Four things seem to be important.

  • Although we could push insulation and airtightness standards far beyond the Zero Carbon Hub recommendations, no-one seems very keen to do so. The proposed energy use standards for 2016 are over twice the maximum limits used for Passivhaus buildings, usually taken as the norm for really high quality construction. Increased cost is a factor, though Passivhaus might only add £6,000-£10,000 to the cost of a semi-detached house.
  • Builders are frightened of moving from standard UK construction techniques. On a UK building site, gangs of tradesmen construct a house in the open air using techniques that are essentially a century old. The resulting property, they say, is what people want to buy. Better techniques, involving pre-fabricated panels largely assembled to much higher standards of airtightness in a dry factory, are shunned. Yet if we want to keep traditional British housing styles, better insulation is almost impossible.
  • Assessments comparing the cost of improving insulation with the cost of getting the same carbon benefit from putting up wind turbines show that renewable energy is cheaper than efficiency. Speaking personally, I am not convinced by these calculations – partly because they rely on assumptions about the future price of energy – but the result of the Zero Carbon Hub’s spreadsheet work is to suggest that housebuilders would get carbon reductions more cheaply by giving money to offset schemes than by spending it on energy efficiency.
  • The regulatory load on builders is increasing. They are obliged to devote a proportion of their developments to social housing and to pay for local infrastructure improvements. On brownfield sites needing remediation (that is, exactly where the government wants housebuilding to happen) the costs of construction in some parts of the country are not far from the value of the houses that are built. Add further to costs and homes will not be constructed. Of course this isn’t true in high price areas such as Surrey but it might well reduce the number of new homes across much of England and Wales.

Whatever the reason for the watering down of what we mean by ‘zero-carbon’, you might expect that the dilution has now stopped. Don’t be too optimistic: having agreed the latest specification, the House Builders Association, a trade body, withdrew its support for the proposal for houses. I suspect that we will eventually see regulations for 2016 and beyond that are no better than the agreed 2013 standards. This will leave new homes a very long way from having zero emissions.

Time for a renewed debate on biofuels

The latest world price index from the FAO shows that commodity foods are now more expensive than in the spike of 2008. After a generation of falling prices for basic foodstuffs, the world is now seeing substantial inflation in major crops, taking prices in real terms back up to the level of the 1970s. We are simultaneously watching an unsteady but sharp rise in the price of crude oil as industrial demand, particularly in China, continues to rise. During the inflation of 2008, observers frequently noted the connections between food and oil, focusing on the impact of the biofuels initiatives of the US and EU. These connections are now ever more obvious. Why aren’t we demanding that countries rein back their food-into-fuel programmes? In particular, why are European electorates not attempting to reverse the EU’s target of getting 10% of all transport fuel from ‘renewable’ sources by 2020, up from less than 4% today?

We used to think that the food and energy markets were separate: changes in supply and demand in one would not dramatically affect the other. The growth of state-mandated biofuels in the last decade has now intertwined the two markets. The most obvious example is the use of maize to make ethanol (a petrol substitute) in the US. About one quarter of all global production of coarse grains, including maize, is now used to make ethanol (263m tonnes out of 1,102m tonnes, source FAO). The global harvest of these grains, including in the Southern Hemisphere, is forecast to have fallen 2% in the last year while non-food uses rose 2%. These small changes reduced stock levels, helping to tighten world prices. The other main trend, almost unnoticed in the West, is the increasing use of the cassava root – better known as tapioca to middle-aged Britons like me – as a feedstock in Asian ethanol refineries, reducing the already limited supply of this important source of carbohydrate.

I hope that one simple comparison may help illustrate the close interconnection of the liquid fuel and food markets. We eat foods principally for their energy content, and need about 1,000 calories a day to survive and about 2,000 to be healthy and active. A calorie is a unit of energy. (Specialists will know that when nutritionists refer to a calorie, they are actually talking of a kilocalorie, or one thousand calories.) A simple piece of arithmetic can convert calories into kilowatt hours, the most widely used measure of energy. Our daily need of 2,000 calories is very approximately equal to 2 kilowatt hours. The energy value of any foodstuff can be expressed as a number of kilowatt hours per tonne. Wheat, for example, contains about 4,500 kilowatt hours per tonne of grain. At today’s wheat prices (about £200 a tonne) the energy in the food is costing us about 4.5 pence per kilowatt hour. A similar calculation for oil shows a figure of almost 4 pence per kilowatt hour. In other words, the energy in oil costs very slightly less than the energy in wheat. But if oil rises to $120 a barrel, and wheat remains at the same price, both cost about the same for each kWh.
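
Here is a rough sketch of that arithmetic; the energy content of a barrel of oil, the oil price and the exchange rate are illustrative assumptions of mine rather than figures from the article.

```python
# A rough sketch of the food-versus-oil energy arithmetic (Python). The oil energy content,
# oil price and exchange rate below are illustrative assumptions, not figures from the text.
KCAL_PER_KWH = 860                       # 1 kWh is roughly 860 kilocalories ('calories' in diet terms)

daily_need_kcal = 2000
daily_need_kwh = daily_need_kcal / KCAL_PER_KWH          # a little over 2 kWh per person per day

wheat_energy_kwh_per_tonne = 4500        # from the text
wheat_price_gbp_per_tonne = 200          # assumed early-2011 wheat price
wheat_pence_per_kwh = wheat_price_gbp_per_tonne * 100 / wheat_energy_kwh_per_tonne   # ~4.4p

oil_kwh_per_barrel = 1700                # assumed energy content of a barrel of crude oil
oil_price_usd_per_barrel = 100           # assumed
usd_to_gbp = 0.62                        # assumed exchange rate
oil_pence_per_kwh = oil_price_usd_per_barrel * usd_to_gbp * 100 / oil_kwh_per_barrel  # ~3.6p

print(f"Daily food energy need: {daily_need_kwh:.1f} kWh")
print(f"Wheat energy cost: {wheat_pence_per_kwh:.1f} p/kWh")
print(f"Oil energy cost:   {oil_pence_per_kwh:.1f} p/kWh")
print(f"Wheat cost of feeding one person for a day: {daily_need_kwh * wheat_pence_per_kwh:.0f} p")
```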

The important implication is this: when oil prices rise, food tends to get sucked from human consumption to use in ethanol and biodiesel refineries. If oil falls in price, grain ethanol becomes uncompetitive, and less of it is used. The long run downward trend in food prices has been interrupted, perhaps for ever, by the close linkage between foodstuffs and liquid hydrocarbons which seem likely to be in increasingly short supply. Our chance to reduce hunger has been catastrophically disrupted by our urgent need for rising quantities of oil and oil substitutes.

We saw a steady fall in world hunger until about 1995. Since then the number of undernourished people has increased sharply but erratically. The current serious bout of food inflation may push the number of undernourished people above 1 billion for the first time this year. There is genuine debate about the impact of biofuels on the degree of food price inflation. Nevertheless, I think that the evidence that diversion of crops into refineries reduces global food stocks and increases the likelihood of sharp spikes in the world prices of foodstuffs is strong. Global food production is relatively predictable from year to year (last year was disrupted by climate-related events but was still probably the third best ever) but the linkage with oil markets has introduced a new source of volatility into demand and thus prices. To be clear, it may be that the world needs higher food prices to encourage increases in supply, particularly in Africa, but unpredictable and reversible rises in foodstuff costs may not provide a consistent price signal for poor farmers and may just induce hunger, as is the case today.

More generally, the rich world’s biofuel policies seem to be a clear case of selfishness. In order to improve their security of supply of liquid fuels, the prosperous nations are increasing the price of food. As this policy proceeds, the impact on food supply is going to get worse. The world’s food production today is equivalent to roughly 4 kilowatt hours per person per day. Some of this is eaten – very inefficiently – by cattle and pigs, reducing the supply available to humans to about 2.5-3.0 kilowatt hours per head per day. The daily world oil supply of about 85 million barrels is equivalent to about 18 kilowatt hours per person per day, perhaps six or eight times as much. In other words, even if we used the world’s entire food production for conversion into fuels, and at 100% efficiency, we could never hope to make more than a small fraction of the total liquid fuel needed from our foods.
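
A rough per-person sketch of this comparison is below; the world population and the energy content of a barrel of oil are assumed inputs, and with these assumptions the oil figure comes out at around 20 kWh per person per day, the same order as the 18 kWh quoted above.

```python
# A rough per-person comparison of world food output and oil supply (Python). The population
# figure and the energy content of a barrel of oil are assumed inputs, not figures from the text.
world_population = 7.0e9                  # assumed, roughly the 2011 figure
gross_food_kwh_per_person_day = 4.0       # from the text, before losses to livestock
net_food_kwh_per_person_day = 2.75        # mid-point of the 2.5-3.0 kWh range above
oil_barrels_per_day = 85e6                # from the text
oil_kwh_per_barrel = 1700                 # assumed energy content of a barrel of crude oil

oil_kwh_per_person_day = oil_barrels_per_day * oil_kwh_per_barrel / world_population
print(f"Food production per person per day (gross): {gross_food_kwh_per_person_day:.1f} kWh")
print(f"Oil supply per person per day: {oil_kwh_per_person_day:.0f} kWh")
print(f"Oil energy / net food energy:  {oil_kwh_per_person_day / net_food_kwh_per_person_day:.1f}x")
```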

Such a policy is perhaps the most regressive public policy ever initiated. It makes the petrol of the rich consumer marginally cheaper while pushing hundreds of millions of the world’s poorest into hunger.  Think back, if you will, to the cost of food expressed in pence per kilowatt hour. Today’s wholesale wheat prices tell us that wheat costs about 10p a day to feed an active and healthy person, before considering local costs. We don’t know the precise number of people living on under $1 (60p) a day but it is probably over a billion. For these people, the food price inflation of the last year has reduced their standard of living by perhaps 20%. The car driver in the rich world has probably benefited by less than 1% from the slight reduction in oil prices caused by competition from food-generated ethanol. This is not just madness, it is wicked.

Cuts at the Carbon Trust and EST - perhaps not a bad thing.

(Published on the Guardian web site on 14th February 2011.) The Carbon Trust is the latest body to announce a substantial cut in its funding from government. The 40% reduction in its grant income is marginally less severe than the 50% cut imposed on the Energy Saving Trust (EST) a few weeks ago. The job of both these bodies is to reduce energy use and carbon emissions, with the Carbon Trust focusing on large companies and the EST on households. Both have claimed major successes in recent years.

So should those of us worried about climate change be upset with the government’s cost savings? I suggest our reaction should be very muted indeed. Both bodies had become bloated and inefficient. I have dealt with many entrepreneurs and small businesses who have found them to be actively unhelpful. Their contribution to the climate change effort may not be worth the money we spend on them.

First of all, we should put the funding cuts in perspective. The EST had a budget of about £70m in 2009/10. But two years earlier its funding was only £36m, just over half the current level. In other words, the cuts imposed by DECC last month simply take the EST’s income back to where it was in 2008. And, incidentally, the carbon savings the EST claims for work carried out in 2009/10 are actually lower than the figure for two years earlier, despite the much higher level of income. Similarly, the number of people employed at the agency has risen sharply in recent years with no apparent impact on the energy savings it achieves.

The position at the Carbon Trust is very similar. Its income rose from just under £100m in 2007/8 to £166m last year. As with the EST, the reduction announced today simply takes its funding from central government back to where it was two years ago. Chief Executive Tom Delay says that the 40% cut will mean 35 redundancies, but this will still leave its employee numbers substantially higher than they were only a couple of years ago.

Both bodies have large and expensive offices in central London. The Carbon Trust, for example, occupies space in an office block in one of the most desirable areas of the City. I have recently watched an entrepreneur struggling to establish his business gasp at the disparity between the conditions in which he works and the standards he saw at the Carbon Trust offices.

Other small business people have commented to me on the ponderousness of both organisations and their lack of industrial expertise. I have also heard one successful entrepreneur refuse to deal again with these bodies because of his strongly expressed fear that details of his technology had been leaked to a competitor. Perhaps these criticisms are unfair but views like these are very widely held amongst companies and individuals working in the low carbon sectors.

Both bodies can rightly claim that their jobs have become much more complicated in recent years, demanding higher allocations of funds from the taxpayer’s purse and putting strain on their ability to respond quickly and efficiently. As organisations dependent on pleasing their bosses in DECC and other government departments, they are forced to focus on projects given to them at very short notice, such as the domestic boiler scrappage scheme handled by the EST with three weeks’ notice.

Put together, these two agencies spent around £250m in the last financial year. No doubt there was some benefit from this expenditure: for example, the EST communicated in some way with over 3 million people to give energy-saving advice and the Carbon Trust invested in some very interesting new technologies. Nevertheless the scale of taxpayers’ money being spent at these organisations looks wildly disproportionate. Much of the funding seems to go on bureaucratic activities rather than real research. Contrast the £250m bill with, for example, the total spending on research into the use of deep geothermal energy in Cornwall, at around £1m a year – a technology which might produce a measurable fraction of the UK’s total electricity needs. If slimming down the Carbon Trust and the EST means more money can go directly to pathbreaking research, I don’t think we should complain.