Few things matter as much for renewables as the cost of capital

A solar farm with a contract to sell electricity is almost the lowest risk investment a pension fund can make. Only index-linked government bonds beat its reliability. As long as electricity prices are contractually secure, PV resembles nothing so much as an annuity, a guaranteed flow of money that arrives every month for 25 years. Gradually, the financial markets are realising what a superb asset PV can be, particularly for pension funds needing to match long streams of liabilities. As a result, the cost of capital for solar farms is falling. It needs to fall much further. It cannot be stressed enough how important this is. This week’s IEA report, which finally ended the Agency’s long-standing contempt for PV and large scale solar thermal technologies, put it bluntly. At today’s prices and construction costs, PV produces electricity in good global locations at around 6 US cents per kilowatt hour if interest on the capital is assumed to be 0%. Assume the cost of capital is 9% per year and the cost more than doubles to over 12 cents. At this level, interest payments are more than half the cost of solar PV.

In a sunny Cornish field you could add 15% to these costs. There, PV will generate at about 7 cents at 0% cost of capital and 14 cents at 9%. The implications of this are striking. Push the cost of financing solar down close to zero and PV is already competitive with grid electricity in the UK. (7 cents is not much over 4 pence per kilowatt hour). At 9% you still need heavy subsidy to match fossil fuels.
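
To see roughly how those figures arise, here is a minimal levelised-cost sketch in Python. The installed cost, running cost and capacity factor below are illustrative assumptions of mine rather than the IEA’s own inputs, chosen simply to show how strongly the discount rate drives the final cost per kilowatt hour.

```python
# A minimal levelised-cost (LCOE) sketch. The installed cost, running cost
# and capacity factor are illustrative assumptions, not the IEA's inputs.

def lcoe_per_kwh(capex_per_kw, om_per_kw_year, capacity_factor,
                 lifetime_years, discount_rate):
    """Levelised cost per kWh, in the same currency as the cost inputs."""
    annual_kwh = capacity_factor * 8760          # kWh generated per kW each year
    if discount_rate == 0:
        crf = 1.0 / lifetime_years               # straight-line recovery of capital
    else:
        r, n = discount_rate, lifetime_years
        crf = r * (1 + r) ** n / ((1 + r) ** n - 1)   # capital recovery factor
    return (capex_per_kw * crf + om_per_kw_year) / annual_kwh

# Assumed sunny site: $1,800 per kW installed, $25 per kW a year to run,
# 18% capacity factor, 25 year life.
for rate in (0.00, 0.09):
    cents = 100 * lcoe_per_kwh(1800, 25, 0.18, 25, rate)
    print(f"{rate:.0%} cost of capital: about {cents:.0f} US cents per kWh")
```

With these assumed inputs the cost roughly doubles between 0% and 9%, and at 9% the financing charge is more than half the total, which is exactly the point the IEA is making.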

What is the return on capital that investors demand today for PV? DECC’s latest estimate is 5.3% in real terms, or 7.3% nominal if inflation of 2% is added. This is the figure also used by other government bodies such as the Committee on Climate Change. It’s worth pointing out that this is higher than regulators assume for electricity generation using fossil fuels. Ofgem says the cost of capital for the Big Six generators is around 6% real. The difference is inexplicable. Ask yourself which investment you would rather own: a £100 share in a fossil fuel operator facing unknown carbon taxes, fluctuating fuel costs, sharply varying electricity prices and divestment campaigns growing in strength by the day, or £100 in a PV farm guaranteed a stable price for its output?

At the cost of capital currently assumed by DECC, the IEA’s figures suggest a Cornish generation cost for PV electricity of around 9 pence per kilowatt hour for the raw power fed into the local grid. Transmission charges will add to this figure, meaning the full cost is probably around 10.5 pence, about double the wholesale price of electricity. [1]

Travel from Cornwall to central Germany, which gets about the same amount of sun, and things are very different. There, says the Fraunhofer Institute, the cost of capital (expressed in real terms) is 2.8%, or 4.8% with inflation added in. Nothing else has changed but solar electricity immediately costs about 20% less in Germany than it does in the UK.

Why should this be? Very long dated government bonds (gilts) do yield about 1.2% more in the UK than in Germany, so investors deciding whether to put pension fund money into gilts or PV will demand a higher return from PV than their German counterparts would. Nevertheless there’s still a striking gap between the negative (below 0%) yields on inflation-protected gilts in the UK and the 5.3% real cost of capital assumed by DECC for solar PV.

I guess that the 5.3% estimate is actually too high. The small number of publicised sales of stakes in solar farms do suggest a lower figure. Lancashire County Council pension fund put £12m into the bonds of Westmill Solar Farm at a real interest rate of 3.5% in 2013. Just launched today, Oxford’s latest £1.5m fund-raising effort to put PV on all its local schools opened for subscription offering 5% nominal returns, over 2% less than the government estimate.

The cost of capital needs to fall further, and not just for the sake of emissions reduction. The UK and other countries are facing increasing problems from the impact of low interest rates on pension liabilities. This week a strike ballot was announced over plans to cut entitlements at the UK universities pension fund, the largest in the UK. The principal cause of its deficit is low or negative real interest rates. It has many billions of pounds now looking for a safe home that yields slightly more than the negative returns available on inflation protected gilts.

Solar PV, done at the large scale pioneered in Germany, could provide exactly the right form of asset for this fund. A sensible industrial strategy would be putting 5 gigawatts of PV on to brownfield land in the south west and funding it with pension fund money at 2 or 3% real return. I know that this won’t happen but it really is a much better idea than fracking most of Sussex and Lancashire.

 

(Boffins wanting to experiment with numbers on the importance of the cost of capital to renewable technologies may like to play with the US government online calculator at http://www.nrel.gov/analysis/tech_lcoe.html. Remember that you can use your own currency and don’t need to convert into dollars since the calculator is using percentages rather than absolute amounts).


[1] DECC says UK PV costs around 12 pence per kilowatt hour at a real discount rate of 5.3%. I believe it uses figures for construction cost of large scale solar farms that are too high, explaining the difference between my numbers and the Department’s.

DECC cuts 2019 fossil fuel price projections by over 20%

Much of the UK government's case for backing renewables comes from the view that it will save money in the longer term as fossil fuels become more expensive. The arguments for increasing prices for gas, oil and coal have become frayed in recent months. Coal demand has stabilised as China begins its long awaited move to use smaller volumes of imported fuel and to switch to less polluting forms of power generation. The need for oil has been compressed by falling world economic growth and gas is being undermined by increased supplies. Governments tend to be reluctant to adjust price forecasts: doing so might undermine the incentive to invest, and reacting too quickly to market trends can suggest a lack of confidence in the quality of the forecasts.

DECC brought out new numbers today. Understandably, given the scale of the change, there was no accompanying text, just a small Excel worksheet giving the forecasts for the three main fuels. A quick look at the table doesn't suggest much has changed. The numbers for 2035 are similar to the figures produced a year ago.

It's five years out that the real differences appear. Coal prices in 2019 are expected to be 23% lower than they were forecast this time last year. Gas and oil are both 21% down. Given that DECC issued 17 press releases today, the lack of media attention isn't surprising. Nevertheless, these are really substantial changes in the medium term outlook. And they add yet another dimension of uncertainty for investors in renewable technologies.

Quarry Battery, a new pumped storage plant for North Wales

Electricity is expensive to store in large quantities. The largest battery pack in North America has just opened this week at a cost of about $50m for 32 MWh of lithium-ion cells. That’s over $1,500 a kilowatt hour, several times the cost of batteries in electric cars. (I presume the reason for the high cost must be the sophisticated electronics necessary to tie the DC battery system to the local grid). The new plant is sited at one of the substations serving the huge Tehachapi wind farms in Southern California. 600,000 individual batteries wired together in a 500 square metre warehouse are helping to stabilise the output of the five thousand turbines in this important wind province.

Tehachapi Battery Storage

The UK’s largest storage battery is being built in Leighton Buzzard, north of London, and is due for completion by the end of 2014. This 10 MWh plant is costing about £20m, partly paid by Ofgem and partly by the local operator UK Power Networks. The cost is over twice the price per kilowatt hour of the Californian battery.

Adding the gigawatts/gigawatt hours of short term storage that we need is going to cost huge sums. Batteries will get cheaper, of course, particularly if Tesla continues to invest in enormous factories in the US. But even at $250 per kilowatt hour of storage capacity – one estimate of the likely cost of Tesla batteries within a few years – a gigawatt hour will require expenditure of $250m. That buys the capacity to store about a minute’s worth of UK peak electricity need.
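
The arithmetic behind that last claim is worth spelling out; the only inputs are the $250 per kilowatt hour price and the roughly 53 gigawatt peak quoted elsewhere on this site.

```python
# How long would a $250m gigawatt hour of batteries last at UK peak demand?
cost_per_kwh = 250           # US$, the assumed future battery price
storage_kwh = 1_000_000      # one gigawatt hour expressed in kWh
uk_peak_kw = 53_000_000      # roughly 53 GW of peak demand

capital_cost_m = cost_per_kwh * storage_kwh / 1e6
minutes_at_peak = storage_kwh / uk_peak_kw * 60

print(f"Capital cost: about ${capital_cost_m:.0f}m")             # ≈ $250m
print(f"Duration at peak: about {minutes_at_peak:.1f} minutes")  # ≈ 1.1 minutes
```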

One alternative to lithium-ion batteries is an expansion of pumped hydro. Two water reservoirs at different heights are linked and reversible turbines are installed. When electricity is cheap, water is pumped uphill to the top reservoir. At times of high power demand the water flows back downhill, turning the turbines and producing electricity. The UK has had a large pumped hydro plant at Dinorwig in Snowdonia for thirty years.

A new company, Quarry Battery, has just raised another round of seed money to push its own Snowdonia project forward. £3m will enable the company to carry out engineering costings and other preparatory tasks for its scheme to turn two disused deep slate quarries into the upper and lower reservoirs of a pumped hydro plant.

One of the two quarries

Quarry Battery has planning permission for its two sites at Glyn Rhonwy near Llanberis. It will eventually need to raise about £135m to construct the system, which will provide a capacity of about 600 MWh of electricity storage. The maximum rate of generation is intended to be about 50 MW, or the equivalent of a 25 turbine wind farm working at full speed. This means that when full the top reservoir can discharge for 12 hours.

The figures for the projected cost show the relatively attractive position of the best pumped hydro sites compared to lithium ion batteries. At less than £250 per kilowatt hour of storage capacity, Quarry Battery will deliver power at about a quarter the capital cost of Tehachapi. Quarry Battery is keen to emphasise that the costs for the Glyn Rhonwy scheme are low because the two quarries are already fully excavated but no longer used. Most other potential sites will be far more expensive to develop.
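
The comparison can be reproduced in a couple of lines; the dollar-sterling exchange rate is my own rough assumption for 2014.

```python
# Capital cost per kWh of storage for the two projects mentioned above.
glyn_rhonwy_gbp_per_kwh = 135e6 / 600_000   # £135m for 600 MWh ≈ £225/kWh
tehachapi_usd_per_kwh = 50e6 / 32_000       # $50m for 32 MWh ≈ $1,560/kWh
usd_per_gbp = 1.6                           # assumed 2014 exchange rate

ratio = tehachapi_usd_per_kwh / (glyn_rhonwy_gbp_per_kwh * usd_per_gbp)
print(f"Glyn Rhonwy: £{glyn_rhonwy_gbp_per_kwh:.0f} per kWh")
print(f"Tehachapi:   ${tehachapi_usd_per_kwh:.0f} per kWh")
print(f"Tehachapi costs roughly {ratio:.1f} times as much per kWh stored")
```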

Does Glyn Rhonwy make good financial sense? Modelling the economics is difficult and I’m not sure what the answer is. We can get one idea by looking at the daily price differences in the UK electricity market. One simple trading tactic is to pump water uphill when power is cheap and let it flow back down when prices are better.

On Sunday and Monday 28/29th September, the UK’s pumped storage stations, including Dinorwig, were using power from about 10.30pm at night to about 7am on Monday morning to pump water uphill. The rest of the time the water stored in the upper reservoir was being used to generate electricity by letting it flow downhill. If Glyn Rhonwy copied this, it would probably be buying electricity at around £30 per MWh and selling at about £60. The company claims the overall efficiency of the round-trip is around 80%.

So if Glyn Rhonwy did nothing else but fill and empty 600 MWh worth of water every day it would earn

600 MWh * (£60 - £30/0.8) * 365 days a year = c. £4.9m a year.
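
The division by 0.8 reflects the round trip: to deliver 600 MWh the plant has to buy about 750 MWh of cheap power. A small sketch of the same calculation, with the prices treated as adjustable assumptions:

```python
# Simple daily-arbitrage model using the illustrative prices above.
def annual_arbitrage(storage_mwh, buy_price, sell_price,
                     round_trip_efficiency, cycles_per_year=365):
    """Revenue from one full pump-up / generate cycle per day, in £."""
    mwh_bought = storage_mwh / round_trip_efficiency   # energy needed to refill
    daily_margin = storage_mwh * sell_price - mwh_bought * buy_price
    return daily_margin * cycles_per_year

revenue = annual_arbitrage(600, buy_price=30, sell_price=60,
                           round_trip_efficiency=0.8)
print(f"About £{revenue / 1e6:.1f}m a year")   # ≈ £4.9m
```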

But other services are potentially much  more valuable. Holding Quarry Battery ready so that it can respond to major price variations may be a better strategy. German power prices now often move below zero at times of high wind or solar output and this pattern is likely to be repeated here.  Waiting opportunistically to be paid to fill up the upper reservoir may be a good tactic.

Or it may make sense to keep the top reservoir full to meet emergency power needs. One observer recently told me that as conventional coal and gas plants are turned on and off more frequently to complement varying wind (and increasingly solar) power, they are becoming less reliable and sometimes fail to start up properly. Quarry Battery could earn good money by standing ready at 8am, as power demand rises, to capture the very high prices available when plants that were expected to provide power fail to do so.

The company is understandably coy about revealing its own detailed estimates of income but did say that it expects its annual income to come from the sources in the following table. About half its revenue, it said, will come from the Balancing Mechanism and half from the other sources.

Type of income: Explanation
‘Balancing mechanism’: Income from responding to urgent requests to take power or to provide it in order to balance the Grid
Arbitrage income: Buying when power is cheap and selling it when it is expensive
‘Triad’ payments: Payments from the local network operator (Scottish Power) for reducing the peak power needs for North Wales from the National Grid
‘Capacity mechanism’: Payments for being ready to provide power at short notice (not to be confused with actually providing power)

 

As the UK grid becomes more stressed in the decades to come, Quarry Battery’s services will become increasingly valuable. The company projects that it will earn a financial payback in about 15 to 25 years. Glyn Rhonwy will last many decades, so the relatively slow returns will not necessarily impede its financing.

Dave Holmes, the Managing Director of the company, stresses that Quarry Battery looked across the entire country for the best locations for a pumped storage plant. Glyn Rhonwy was chosen because of the favourable conditions in the old slate-producing area.  ‘We are lucky on this site', he said, 'as the civil (engineering costs) are vastly reduced by the suitable geology, topography and existing cavernous disused quarries’.

The UK needs fifty or a hundred times as much storage capacity as Glyn Rhonwy can provide. The worrying thing is that if this excellent site needs at least 15 years to pay its investors back, very few other places will meet the conditions for commercial funding. And if Leighton Buzzard is any guide, lithium-ion batteries don’t offer much help either.

Time to start phasing out halogen bulbs

At 5.30 in the late afternoon the average UK house is using about 130 watts of electricity to power lights. In the winter months this number rises sharply, probably to around 200 watts. 27 million households are consuming over 5 gigawatts of electricity just for lighting in the early evening of the darkest month. The maximum need for electricity last year occurred just after 5pm on December 4th, when the major generators delivered almost 53 gigawatts. At the moment of highest electricity need, domestic lighting was therefore using about 10% of the country’s power production. The easiest way of cutting this is by banning halogen bulb sales and obliging consumers to replace them with equivalent LEDs.

An LED bulb that could replace a standard halogen ceiling light

As conventional power stations close, the gap between the total generating capacity in the UK and peak winter demand is narrowing sharply. A ban on halogen lamps will dramatically improve the UK chances of ‘keeping the lights on’ in winter by shaving the top of the daily winter peak of power demand.

Halogens are not quite as inefficient as the old fashioned incandescent bulb but they use far more electricity than LED equivalents. A 35 watt bulb can be replaced by a 5 watt LED of almost identical light quality. Many kitchens and living areas contain several hundred watts of halogen bulbs and all this lighting could be replaced by equally effective LEDs.

Cutting domestic lighting demand is the simplest way of reducing the maximum need for electricity. And it would reduce consumer bills and make a substantial dent in the need for electricity users to pay (indirectly via the so-called ‘capacity mechanism’) for fossil fuel power plants to stand waiting just in case the other generators couldn’t supply enough power. In addition, the reduction of peak demand would cut the need for extremely expensive grid upgrades. There is real social value in moving the country off halogen bulbs as fast as possible.

Domestic electricity demand

DECC has been quietly investigating ‘Time of Use Tariffs’ for some time. The idea is that by making electricity more expensive between 4pm and 8pm it can cut the peak demand for electricity from households. Research work published over the summer showed how power needs varied for a sample of households over the course of the day. During the working day electricity consumption is about 500 watts. This rises sharply from about 4pm as people return home and turn on TVs, washing machines, heaters and other appliances. By 5pm, average household need is almost 750 watts. The chart below estimates average use across the year. In the winter, the early evenings would see a much larger increase in domestic electricity use.

Lighting, cooking and TV use are largely responsible for the rise in demand after 4pm, as the chart below shows. I don’t think we can force people to buy smaller or more efficient TVs or cook with gas, nor do I think that increasing prices sharply on winter evenings would work. But I do think we can rapidly accelerate the move to LEDs instead of halogen bulbs. It’s mildly illiberal but then so was banning lead in petrol or smoking in enclosed spaces.

Daily household electricity consumption by  time of day, average over year

 

National electricity use

The rise in domestic use of electricity is occurring while offices and factories are still using large amounts of power. So the overall peak in winter demand occurs in the late afternoon. On last winter’s peak day, national electricity consumption was almost flat at around 46 gigawatts through the middle of the day; the additional household demand after 4pm then provided most of the 7 gigawatt rise to almost 53 gigawatts at about 5pm. The chart below shows the pattern of national demand on 4th December 2013 when last winter’s peak usage occurred.

National electricity demand over 4th December 2013

We really do want to reduce this peak. If we ever actually run out of power, it will be because of the sharp bump in demand for a few hours in the period between November and February. A large number of very clever people are spending a lot of hours working out how the UK is going to cope with unexpectedly high demand on particularly cold or still days when the wind turbines aren’t turning. All sorts of expensive technologies are being investigated and National Grid is offering increasingly large sums to persuade businesses to turn off their machines at times of highest demand.

Last year, the National Grid said it restrained peak winter evening demand by about 2 gigawatts on the coldest day. Let’s compare this with the possible impact of banning halogen bulbs. There are about 27 million homes in the UK. If the average home reduced its need for lighting by 100 watts on winter evenings, peak demand would be cut by 5%, well over the 2 gigawatts that has been very expensively achieved by other means by the Grid. This 5% cut would be achieved by simply replacing an average of fewer than four 35 watt halogens with 5 watt LEDs. The quality of LEDs is now almost the same as halogens, with ‘warm white’ bulbs delivering light of identical colour. 5 watt LED replacements for halogens are on sale for around £10-12, meaning that the total cost of replacement will be less than £50 for the average household.

The savings in domestic bills alone will be £20 or more a year. Payback will be in two years or so, and the ten year lives of LEDs will mean that far fewer replacement bulbs will be needed after that point. The National Grid will need to spend far less on keeping generating capacity mothballed in case it is needed. Carbon emissions will fall disproportionately because the standby plants involved are among the most polluting power stations in the country. Grid stability will be enhanced because some of the sharp ramp up in power production from 3.30pm onwards will be avoided. Air pollution on cold, still December days from burning coal will be avoided.
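
The household and national arithmetic behind those claims can be set out in a few lines. The electricity price and the hours of evening use below are my own rough assumptions; the bulb numbers come from the figures above.

```python
# Rough economics of swapping halogens for LEDs, per household and nationally.
homes = 27e6
bulbs_replaced = 4                  # roughly the "fewer than four" in the text
watts_saved_per_bulb = 35 - 5       # 35 W halogen swapped for a 5 W LED
cost_per_led = 11                   # £, mid-point of the £10-12 quoted
evening_hours_per_day = 3           # assumed average daily use
pence_per_kwh = 15                  # assumed 2014 domestic electricity price

kw_saved_per_home = bulbs_replaced * watts_saved_per_bulb / 1000
national_gw_saved = kw_saved_per_home * homes / 1e6
annual_saving_gbp = kw_saved_per_home * evening_hours_per_day * 365 * pence_per_kwh / 100
payback_years = bulbs_replaced * cost_per_led / annual_saving_gbp

print(f"Cut in peak demand: about {national_gw_saved:.1f} GW")       # ≈ 3 GW
print(f"Household saving:   about £{annual_saving_gbp:.0f} a year")  # ≈ £20
print(f"Payback:            about {payback_years:.1f} years")        # ≈ 2
```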

Over the last few years the EU has seemed to back away from a ban on sales of halogen bulbs. As LED quality improves and costs fall, this policy needs to be re-examined both in Brussels and London.

 

(This post was re-published in The Ecologist on October 7th 2014)

 

 

Synthetic biology makes sustainable biofuels possible

Schematic of the Joule plant in Hobbs, New Mexico

Scarred by the failure of first generation biofuels and by the increasingly bitter controversy over the burning of imported biomass at Drax and elsewhere, the UK has backed away from research into using biological materials for energy conversion or storage. This behaviour is mirrored across Europe. Outside the US, research into using natural materials has almost ceased as concerns over the diversion of land from food production and low carbon savings have overwhelmed the case for increased renewable energy.

This is a mistake, and possibly a tragic one. In sunny parts of the globe, solar PV may provide the cheapest source of electricity. But PV doesn’t provide either reliable 24 hour power or a source of liquid fuel for transport. Since electricity typically provides less than 40% of total energy demand, the world needs to find inexpensive low carbon sources to meet other needs. Biological sources of energy are vital, not least because they can both store power (thus complementing intermittent sources such as PV) and can be converted to high density liquid fuels suitable for transport. A piece of wood is a semi-permanent store of solar energy and can be converted - albeit expensively at present - to a liquid hydrocarbon. Algae are similar. But work on even relatively simple technical problems, such as speeding up the slow breakdown of cellulose molecules in anaerobic digesters, simply isn’t taking place in the UK.

By contrast, this note looks at Joule Unlimited, a seven year old US company that is making ethanol and other fuels from CO2, sunshine and water. Like many other US bioenergy companies, Joule has raised what to European eyes look like prodigious amounts of capital. But the $160m of investors’ money has bought what seems like exciting intellectual property. If Joule can do as it promises and produce transport fuels for less than $50 a barrel of oil equivalent, it can undermine the conventional supply of oil, a market currently worth about $8bn a day. 

The fuel storage at the Joule plant in New Mexico

Why has commercialising bioenergy proved so difficult? The problems are these. First, biology isn’t very good at turning photons into usable energy. Only in unusual circumstances can much more than 2% of the energy hitting a square metre be converted into chemical energy through photosynthesis. A modern solar farm manages about 5%. (The collection efficiency of an individual panel is of course higher than this but panels don’t occupy all of the land in a solar farm).

The second problem is that biological materials generally only contain a small percentage of usable molecules, reducing energy production per unit area still further. Ethanol made from US corn is the typical example, producing only about 2.8 kWh of liquid fuel a year per square metre of cropland, equivalent to about a quarter of a litre of petrol. A solar farm generates about 20 times as much energy. Third, most processes that convert the latent energy (I’m using this term in a non-technical sense) into usable fuels are extremely expensive and often inefficient.

These unattractive features stop European finance flowing into biological energy. Not so in the US. There, hundreds of venture capital financed businesses are searching for ways of generating useful carbon-based molecules from metabolic processes at prices that will make fuels competitive with petrol. Many of these companies, such as Cool Planet, are looking for cheap ways of breaking down the lignin in trees or the cellulose in plants into useful liquids or gases. By contrast, Joule is using synthetic biology to engineer bacteria to express carbon-based molecules such as ethanol that can be used as transport fuels. Joule Unlimited’s ‘refinery’ produces ethanol, or other liquid stores of energy including a diesel substitute, directly from the bacteria, requiring no further processing.

If what Joule says is right, its bacteria can turn about 14% of the energy in sunlight into fuel, perhaps six or seven times as much as plants. Let’s be clear about this: if true, this is a staggering, almost magical, achievement. It means that the amount of energy captured by the bacteria and converted to liquid fuel could be greater than that generated by PV panels with a similar areal footprint.

Ambitious claims from biofuels companies aren’t in short supply and I’m not qualified to assess the plausibility of Joule’s synthetic biology. Headline forecasts include a cost of production for ethanol of about $1.20 a US gallon, or less than 6 cents/about 3.5 pence/kWh.[1] This is less than the price of refined petrol (prior to taxes) of around 5 pence per kWh.
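
Using the conversion factors in footnote [1], plus an assumed 2014 exchange rate, the headline figure converts as follows.

```python
# Converting Joule's claimed ethanol cost into a cost per kWh of fuel.
price_per_us_gallon = 1.20        # US$, Joule's headline forecast
litres_per_gallon = 3.8           # from footnote [1]
kwh_per_litre_ethanol = 5.9       # from footnote [1]
usd_per_gbp = 1.6                 # assumed 2014 exchange rate

usd_per_kwh = price_per_us_gallon / (litres_per_gallon * kwh_per_litre_ethanol)
print(f"About {usd_per_kwh * 100:.1f} US cents per kWh")              # ≈ 5.4 cents
print(f"About {usd_per_kwh / usd_per_gbp * 100:.1f} pence per kWh")   # ≈ 3.3 pence
```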

How does the Joule process work? A genetically engineered strain of cyanobacteria mixed with (non-potable) water is introduced into clear tubes running horizontally. A source of carbon dioxide is added, perhaps the waste gas from an industrial process. The CO2 is forced through the tubes, shaking the bacteria and ensuring even exposure to sunlight. Conventional bacteria use the energy from light to grow and reproduce but Joule’s bacteria employ it to produce a useful fuel molecule instead. Joule’s cyanobacteria have been genetically engineered to produce specific fuels, such as simple ethanol. The ethanol ‘excreted’ by the bacteria is distilled from the circulating water, ready to be used in a car’s engine.

The greenish cyanobacteria flowing through the tubes

Joule talks of building cyanobacteria plants across 400 hectares or more to get maximum economies of scale. It claims that it can produce almost 100,000 litres of ethanol a year from each hectare of land, compared to less than 5,000 litres of ethanol refined from corn and grown on prime food production land, a twenty fold improvement.

As importantly, its refineries can be sited on otherwise unusable land and can use water that is too salty or otherwise impure for drinking or for irrigation. These advantages hugely add to the appeal of its technology. Joule can claim that its production plants do not reduce food production or add to water scarcity. In a recent announcement it also improved its claim to carbon neutrality by suggesting it would provide all the electricity it needs to run its plants by installing adjacent PV farms.

The Joule process needs a source of CO2. Many industrial processes, such as cement manufacture, may produce carbon dioxide in sufficient volumes to act as a feedstock. The problem may be that relatively few sources of large amounts of CO2 exist close to the non-agricultural land that it intends to use for fuel production. Nevertheless, the company claims to have identified at least 1,000 suitable sites worldwide. To produce all the world’s oil from the Joule process, we’d need approximately 25 million hectares, the area of the whole United Kingdom but less than 2% of the world’s farmed land.

This is a large acreage. But contrast this with what we would need for ethanol made from corn. At current productivity, biofuels would need over two thirds of the world’s total arable land area to replace conventional oil. This would reduce food production catastrophically, but if Joule achieves what it plans, all its refineries would be on land that is unusable for food production.

Europe’s aversion to using technology to make energy carriers from biological processes is entirely understandable. Devoting increasing acreages to growing corn or wheat for ethanol makes little financial or environmental sense. But this shouldn’t imply that we reject all forms of bioenergy as we appear to be doing at the moment. Joule’s pilot plant in New Mexico is one of the scores of US bioenergy experiments that will eventually create the world’s supplies of non-fossil liquid fuels. If Europe and other parts of the world continue with blanket opposition to all forms of biological power, we run the risk of being unable to decarbonise anywhere near as fast or as comprehensively as the world needs.


[1] One US gallon is about 3.8 litres and a litre of ethanol contains about 5.9 kilowatt hours.

Greater wind speed variability adds to the problems of accommodating renewables on the grid

A wind entrepreneur wrote to me last week pointing to the increased variability of wind speeds over the UK. Until recently, he wrote, average monthly wind speeds only very infrequently departed more than 30% from the norm for the month. In the last year, however, he said that we’ve had two months of very high speeds (more than 30% greater than the monthly average) and one very low speed period (30% less than the average for the month). This matters; greater variability of output from wind turbines means more need for backup resources. Does the data match the entrepreneur’s instinct that variability is increasing? A quick look at average wind speeds since the beginning of 2001 suggests it does. The average month now varies about 13% from the norm, up from 9.5% in 2001. This isn’t a large amount, and the data doesn’t suggest a very clear trend, but if variability is increasing it will add to future problems balancing UK electricity supply. And higher winter wind speeds will cause more destruction, as they did over many parts of the UK in February of this year.

Wind speeds

The yearly average wind speed across the UK is about 9 knots, or nautical miles per hour. It doesn’t vary greatly from year to year, and there’s certainly no sign of increasing speeds even though more extreme weather is often said to be a consequence of a hotter atmosphere. If anything, the last few years have been below the long-run average.

Average UK wind speeds

Over the course of the year, speeds vary fairly predictably from month to month. Winter is windier than summer. The differences don’t look huge, with winter speeds typically about 10 knots and summer averaging 8. But because the energy in the wind varies with the cube of the speed, a typical winter’s day will generate twice as much electricity as a day in the summer.
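
Because output rises with the cube of wind speed (below a turbine’s rated speed), the apparently modest winter-summer difference translates into roughly a doubling of energy:

```python
# Wind energy scales with the cube of speed (below a turbine's rated output).
winter_knots, summer_knots = 10, 8
ratio = (winter_knots / summer_knots) ** 3
print(f"A winter day yields about {ratio:.1f} times the energy of a summer day")  # ≈ 2.0
```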

Average UK monthly wind speed

From the point of view of the people running the electricity grid, really high speeds aren’t necessarily much use. Above a certain speed, turbines don’t actually generate any more power, and will switch themselves off in the strongest gales. Very high wind power output also tends to produce marked instability in the wholesale price of power. The high winds of early August produced negative power prices for several hours. We can see the enormous impact of varying wind speeds most easily in Germany where wholesale prices are now frequently pushed down close to zero.

Variability

DECC produces a monthly summary of average wind speeds.

I plotted the variability of the monthly wind speed from the beginning of 2001. I did this by expressing each month’s deviation from the monthly average (2002-2011) as a percentage. If a month had an average speed of 12 knots, and the month’s typical figure is 10 knots, then I wrote this down as a 20% variation. A speed of 8 knots would also be a 20% variation from the average. (Note: all variations are therefore expressed as positive numbers).
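
A minimal sketch of that calculation, with invented speeds standing in for DECC’s series, might look like this.

```python
# Sketch of the variability measure described above. The speeds are invented
# for illustration; the real series is DECC's monthly wind speed summary.

def monthly_norms(records):
    """Average speed for each calendar month over the 2002-2011 reference years."""
    totals, counts = {}, {}
    for year, month, speed in records:
        if 2002 <= year <= 2011:
            totals[month] = totals.get(month, 0.0) + speed
            counts[month] = counts.get(month, 0) + 1
    return {m: totals[m] / counts[m] for m in totals}

def percentage_deviations(records, norms):
    """Absolute deviation of each month from its calendar-month norm, in %."""
    return [(year, month, abs(speed - norms[month]) / norms[month] * 100)
            for year, month, speed in records]

records = [(2002, 1, 10.0), (2003, 1, 12.0), (2002, 7, 8.0), (2003, 7, 7.0)]
norms = monthly_norms(records)
for year, month, deviation in percentage_deviations(records, norms):
    print(f"{year}-{month:02d}: {deviation:.0f}% from the norm for that month")
```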

Variation from average monthly wind speed

A simple trend line plotted across the 163 months of data suggests that the average variability at the beginning of the period was 9.5%. That is, the average month departed - positively or negatively - about 9.5% from the month’s norm. This figure rose to about 13% in 2014. The correlation isn’t strong. In fact it is lamentably weak and this result has no statistical validity whatsoever. Nevertheless, anyone looking at the numbers will notice that only a third of months varied by more than 10% from the monthly average in the first six years of the series but this number rose to over a half in the second six years. The strong winds of August 2014, which aren’t plotted yet, will have added to the increasing apparent variability.

A very quick analysis of rainfall data shows the same pattern. Monthly rainfall figures typically diverged about 27% from the ten year average for that month in 2001 but this rose to around 32% by 2013. Once again, the data really isn’t robust but the trend is nevertheless quite sharp.

Average wind speeds and rainfall volumes are typical climate data. The ‘noise’ in the figures is an order of magnitude greater than the ‘signal’. As a result, it will be many years before we can be statistically sure monthly average wind speeds are becoming less predictable. This is one of the problems with detecting the other impacts of climate change. By the time statisticians have sufficient proof, the impacts will be blindingly obvious.

 

Pressure on incomes is not the reason for declining UK travel

The amount of travel carried out by people in the UK continues to fall. Whether measured by the number of trips or the distance travelled, people are moving around less. The latest National Travel Survey (NTS) says UK adults made fewer trips in 2013 than they did in 1973. After rising until the early years of the last decade, the average distance travelled has also fallen. The possible explanations are fairly obvious. The internet has reduced the need for High Street shopping. Working from home is now more common than a generation ago. We tend to meet friends in local restaurants or pubs rather than visiting far-flung relations.

One other potential reason is that people’s real incomes have been dropping in the last few years. And as driving tends to get more expensive, we might expect individuals to drive less if they can. These two arguments sound like plausible explanations. But examination of some of the detailed numbers in the NTS shows that they are probably wrong. Surprisingly, the richest 20% have cut their travel miles more than the least well-off 20%. And this reduction is driven mostly by decreased car travel. It’s those who can most afford to drive who have reduced their mileage the most. They still drive far more than poorer people but the difference has narrowed sharply. This is additional support for the view that energy use will not rise sharply if incomes rise.

One other striking finding: more young women aged 17-20 now have driving licences than young men in the same age range. This is the first time any age cohort of women has ever had a higher percentage of drivers than men.

The data

The NTS chart below shows the recent fall in travel clearly. The number of trips taken in 2013 was 12% lower than in 2002 (and is now 3% below the 1973 figure). Time taken travelling has also been quite stable. Distance travelled rose in the nineties - principally as more people acquired cars -  but has decreased 8% since 2002.

NTS infographic

We see the phenomenon of ‘peak travel’ in most developed countries around the world. Many commentators find it counter-intuitive but I think it is easy to explain. Most travel is tedious and time-wasting. We might actively choose to travel to go on safari in a glamorous country but our day-to-day lives are dominated by commuting or driving to do the shopping. Given a free choice, we’d rather not take most of the trips that we are currently obliged to make.

If this idea is right, we’d expect the most well-off to reduce their travel the fastest. They are more likely to have the economic freedom to cut the number of unattractive car and public transport journeys they take. And the evidence is that they are indeed reducing their trips and the distance they travel. The chart below shows that the top 20% (quintile) of the household income distribution have reduced the number of trips by 15% since 2002. The bottom quintile has only cut the number by 5%. As a result the least well off now take 80% as many trips as the richest quintile, compared to 71% in 2002.

 

NTS trips

Most of the reduction in trips across all five income groups comes from reduced car use. (Trips in cars are about 65% of all travel miles). The richest 20% take only 80% of the trips by car that they did in 2002. Less well-off groups saw a much smaller cut.

The same pattern can be seen when looking at distances travelled in cars. The top quintile cut the miles they drove as driver or passenger by 16% between 2002 and 2013, compared to 6% for the bottom 20% of the income distribution. The most well-off group travelled a typical 7,800 miles by car in 2013, down from 9,300 in 2002.  Richer people still drive a lot more miles than poorer households, largely because they are much more likely to have access to a car, but the differences are declining sharply.

Distance travelled

The decline in real incomes in the UK over the last decade does not appear to be a good explanation for the fall in UK travel. The reason lies in technology, psychology or sociology, not simple economics.

Walking can be more carbon-intensive than driving

Copyright Karen Pendragon

Another group of scientists has estimated the environmental burden of beef. The researchers suggest that meat from cows contributes 10 kilos of greenhouse gases (expressed as CO2 equivalents) for every 1000 calories of food. Put in a less scientific way, a Big Mac® a day will represent more than a tonne of global warming emissions a year, using up your entire carbon budget by the middle years of the century.

Seven years ago I wrote an article (covered in the New York Times blog here) that suggested that walking to the shops and then eating beef to replace the calories used would generate more greenhouse gases than driving a car to make the purchases. This little piece of ad hoc research was cruelly dismissed as utter nonsense by all right-thinking people. Well, if you believe the figures published today, I’ve finally got my revenge. Beef turns out to be twice as carbon intensive as driving.

The policy implications are, of course, completely non-existent. Most of us don’t exercise enough, and if we do walk to the shops we usually don’t need to replace the calories. In fact, we are probably walking because we want to lose weight. Nevertheless, it still seems interesting to me that a (quite inefficient) fossil fuel engine moving the best part of a tonne of metal is less greenhouse gas intensive than the docile bovines grazing in the field next to my office.

This is the form of the calculation. (A longer and more complicated version can be found in my book How to Live a Low Carbon Life).

Walking

1)      Assume that the individual is in calorie balance. That is, she doesn’t want to lose weight and therefore any calories used in exercise will be replaced by new food calories.

2)      She walks to the shops. The distance is 1.5 miles and she walks at 3 miles per hour. Therefore the round trip takes an hour. If she’d been sitting watching TV, she’d use about 60 calories an hour (approximately the ‘basal metabolic rate’ for a 60 kg woman).

3)      A person of this weight walking at 3 miles per hour uses about 220 calories in an hour’s walk.

4)      So the incremental effect of walking to the shops and back is about 160 calories.

5)      The global warming footprint of beef is 10 kilos per thousand calories, says the new paper. So 160 calories of beef represents 1.6 kilos of CO2 equivalent emissions.

To walk to the shops 1.5 miles away, come back, and replace the calories lost with beef would add 1.6 kilos to global warming emissions.

Driving

1)      A reasonably new mid-sized car generates about 130 grams of emissions per kilometre or roughly 200 grams per mile.

2)      To drive to the shops and back is 3 miles. So the CO2 emissions would be about 600 grams.

3)      Add a little to reflect the lower engine efficiency of driving a short distance and increase the figure by 33%.

To drive to the shops would add about 800 grams to global warming emissions, half the figure from walking and then replacing calories with beef.
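
Putting the two halves of the calculation side by side, using only the figures from the numbered steps above:

```python
# Walking (with the calories replaced by beef) versus driving, 3 mile round trip.
calories_walking_per_hour = 220
calories_resting_per_hour = 60
extra_calories = calories_walking_per_hour - calories_resting_per_hour   # one hour's walk
beef_kg_co2e_per_1000_calories = 10
walking_kg = extra_calories / 1000 * beef_kg_co2e_per_1000_calories

grams_per_mile = 200
round_trip_miles = 3
cold_start_uplift = 1.33          # add a third for short-trip inefficiency
driving_kg = round_trip_miles * grams_per_mile * cold_start_uplift / 1000

print(f"Walk and replace the calories with beef: {walking_kg:.1f} kg CO2e")  # 1.6 kg
print(f"Drive to the shops and back:             {driving_kg:.1f} kg CO2e")  # ≈ 0.8 kg
```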

Beef is about 5 times as bad as pork or dairy products. So my assertion only works for meat from cows. Nevertheless after all the scorn of seven years ago, I’m really pleased to have some academic justification for my piece of research.

And I should really stop crowing about  being a vegetarian - dairy products may be much better than beef but they're actually worse than poultry per calorie.

 

The CMA energy inquiry: how to make it better than all the other endless government investigations

Last month the headlines excitedly stated that Ofgem had asked the Competition and Markets Authority (CMA) to look at the energy market. Actually, this was a huge exaggeration. Ofgem’s request was for the CMA to examine about 5% of the business: retailing gas and electricity to domestic consumers and very small companies. Sales to large organisations are excluded, accounting for over half the market, as are the upstream activities of energy generation (50% of consumer bills), the transport of energy over wires and pipes (about 20% of the domestic bill) and taxation and social and environmental levies (15%). Market participants nevertheless genuinely seem to hope that the CMA investigation will change the way the whole energy market works, freeing up investment in generation and improvements in networks as well as stabilising prices. This note looks at how participants, particularly the new generation of smaller retailers, might choose to respond to the investigation if they want to influence its outcome. (Full disclosure: I was a member of the Competition Commission, a predecessor of the CMA, for seven years and a tribunal member on the specialist panel at the Commission dealing with the – very rare – appeals against Ofgem decisions).

The central point I want to make is that smaller energy companies and consumer bodies should understand that a market investigation by the CMA is a mammoth, many-headed process. The CMA is hugely thorough and data-driven and the demands it places on companies are often almost overwhelming. Inquiries can last for up to 24 months, not the 18 months specified in recent press releases.

To be effective, and to get arguments taken seriously by the CMA, participants need to devote resources to the process, almost certainly in a joint undertaking with groups of similar views. Occasional letters to the CMA will not work when the Big Six will be spending tens of millions of pounds on lawyers.

The scope of the inquiry

Government, regulators and the big energy companies have cooperated to launch this inquiry in order to ‘clear the air’ on the issue of domestic energy prices. This is a telling phrase, used time and again by Ofgem in recent months. ‘Clearing the air’ doesn’t mean instituting radical reform or making major changes. It implies a close examination of an industry but one that is expected to conclude that nothing much is wrong. In this respect, a CMA inquiry is all too similar to the increasing number of quasi-judicial investigations of problems that are actively embarrassing to government.

But sometimes the CMA does surprise us. It is full of genuinely independent people, but generally not with a radical turn of mind. It does occasionally propose major changes in market structures, such as the breakup of the London airport monopoly, but mostly its recommendations are marginal and not particularly effective. This means that those energy industry participants and consumer bodies who want real change will need to make their case forcefully, insistently and in a quantified and rigorous form.

What the CMA does

The CMA is an amalgamation of the Office of Fair Trading and the Competition Commission. Until a few months ago Ofgem would have gone directly to the Competition Commission. Now the CMA has passed Ofgem’s request directly to the part of the Authority that conducts investigations of this sort. The people on the energy market inquiry are all old Competition Commission hands and it’s a fair guess that they’ll work as they would have done at the predecessor body.

The process is as follows.

a)      Get the request from a regulator, such as Ofgem, to carry out a market investigation

b)      Appoint a team of senior people to act as the panel of judges on the investigation and allocate the staff members to actually do the investigative work and produce drafts of the report. (This particular case has been loaded with very senior and experienced panel members).

c)       Request the main participants in the industry, and consumer bodies that might represent the public interest, to say what they think are the main problems that the CMA should look at.

d)      Take a few weeks to produce what is known as an ‘issues statement’. This public document lays out the major issues which the CMA thinks it is investigating and possible hypotheses about these questions. It will specify some aspects of the work that it will carry out to assess whether perceived problems are real and what actions it might take to remedy defects in the operation of the market. The issues statement develops one or more ‘theories of harm’ that suggest how uncompetitive features of an industry may cause detriment to customers.

e)      Market participants respond to the issues statement and attend hearings at the CMA at which the panel quizzes them on their opinions and the data that backs them up.

f)       The Authority will publish a series of working papers that collate the data it has generated on the main issues it believes need to be addressed.

g)      Participants can, and should, respond to the working papers.

h)      By this point we’re ten months or so into the inquiry. Then, after a period of intense work, the CMA will produce its ‘provisional’ findings about this time next year. Companies and consumer bodies will respond. If the experience from other investigations at the Competition Commission is any guide, any conclusions that the major firms in the energy industry do not like will be given fierce and determined rebuttals. A lot will be at stake.

i)        The initial deadline is to produce the final report by Christmas 2015, but many market studies overrun and have to ask for a six month extension. The number and complexity of the interchanges between the biggest firms in the marketplace and the Authority tends to make a delay inevitable. I would be amazed if the same thing didn’t happen with the energy market investigation.

The important thing to note is that market investigations like this one have substantial inertia. Once set on a course, it is difficult for smaller participants to deflect the work into areas that seem to be important but ignored. There are strong reasons to get arguments and data in early, backed up with as much supporting evidence as can be gathered over the next few months. This means making a lot of noise before the first ‘issues statement’ comes out.

The other things that companies in this market need to remember

a)      The process that the CMA will go through will be almost unbelievably demanding on the companies involved in the study. Believe me, this is no exaggeration. The requests from the CMA for data, analysis and opinion will be numerous, wide-ranging and overwhelming. Even the very biggest companies fade under the pressure of a market inquiry such as this. I remember people from Tesco complaining of utter exhaustion during the Competition Commission inquiry into food retailing. Small companies simply tend to back away and try to avoid getting dragged in.

What this means for smaller participants: Either decide not to participate or combine with others to actively drive a shared view of what the CMA should do.

b)      Despite what it might publicly say, the CMA tends to prefer to deal with intermediaries, rather than directly with participants. Intermediaries, such as law firms and economic advisory boutiques, know the way the Authority operates. They understand the formats that the CMA uses and the underlying meaning of its requests. The CMA trusts intermediaries to present data and such things as market research results in a consistent and rigorous form. The CMA’s preference for working through experienced third parties enables law firms, in particular, to play central roles in the whole market investigation process.

What this means for smaller participants: If you do want to actively engage with the CMA inquiry – something which needs to be very, very carefully considered before a decision is made – it makes good sense to use an intermediary with at least some experience of CMA processes. Central London law firms will expect fees of millions. It may make sense to look elsewhere to find people to develop, organise and present your case and, as importantly, to act as the point of contact for the CMA and its voracious, unending requests for data.

c)       The CMA is entirely concerned with examining the features of markets that may restrict or distort competition. This is a much tighter focus than most people assume.

What this means for smaller participants: There are probably many features of Big Six behaviour you find frustrating and/or impossible to deal with. But in this inquiry, focus entirely on the features of the marketplace, such as the lack of wholesale price transparency and an illiquid market, that restrict genuine competition.

The CMA’s character

All institutions have an ideology, or at least a shared set of reasonably coherent views. Regulators are no exception.

In the case of the CMA the core beliefs are

a)      Businesses, particularly big businesses, are good for society

b)      Competition for consumers’ expenditure is almost always the best way of getting lower prices and more innovation from companies. Regulation, or any other form of intervention, is very much a second best.

c)       Enforcing a structural change to the marketplace, such as obliging full legal separation for the generating, network operation and retailing arms of the Big Six in this particular inquiry, is a radical move that will usually be a disproportionate response to competition problems.

d)      Smaller participants may also benefit from understanding that competition authorities tend to see major advantages to consumers from the vertical integration of suppliers. Any attempt to argue to the CMA that the generating and retailing arms of energy companies should be split faces a strong ideological headwind.

e)      But, on the other hand, the CMA will think that where possible, prices and volumes traded in markets should be transparent. So in the case of the energy market, I think it is much more likely that the CMA will require a rule that all electricity that is generated will have to be traded through a public exchange, rather than instituting a requirement that the Big Six separate their generation and retailing arms. (In their submissions to the Ofgem consultations, the Big Six stressed that they engage in large amounts of trading already. The smaller players strongly complained that prices and volumes were largely invisible to them and that the energy market is still illiquid, particularly for trades a long time in the future).

f)       Very importantly, the CMA will have no working presumption that the energy market cannot work effectively with just six big suppliers. Many important marketplaces, such as mobile phones, work reasonably well with four or even fewer participants. It will be a waste of time for smaller participants to argue that the largest companies need to be broken up to achieve greater competition.

g)      Some of the submissions to the Ofgem consultations prior to the reference to the CMA made the point that many previous interventions by the regulator in the energy market had tended to result in lower levels of competition. Implicitly, these submissions were of the view that if there is a competition problem it is a result of well-meaning but counter-productive rule-making by Ofgem.

The most frequently quoted example was the regulator’s ban on doorstep selling. According to the Big Six, the effect of this had been to substantially reduce the total amount of customer switching and therefore cut the pressure on retailers to remain strongly competitive. Another example was Ofgem’s resistance to the big retailers offering lower rates outside their old incumbency regions. Once again, this had muted the competitive intensity of the whole market, claimed the major retailers. A third case was the long-standing Ofgem drive towards standardisation of the form of electricity tariffs into fixed charge and variable elements.

From my experience, these arguments will get a very sympathetic hearing from the CMA because of its deeply held view that many, if not most, market interventions by regulators will have the effect of flattening tactical approaches by different participants. Expect the Authority to suggest that Ofgem lightens its touch on regulation of many aspects of energy retailing.

h)      There will be no assumption at the CMA that energy markets need to have large numbers of small competitors to be truly competitive. The CMA will note that all the large retailers today are old incumbent gas or electricity suppliers and that no new company has arisen to really challenge this dominance. But it won’t try to sponsor smaller competitors in any way or give them an advantage.

To get a low-carbon economy, we need properly functioning and innovative energy marketplaces. It seems to me that there are problems across the whole spectrum from investment in new generation to the hugely important installation of smart meters in homes. The CMA has been given a small fraction of these problems to look at. It may, or may not, be worth consumer bodies and small suppliers actively participating in the inquiry. But if they do engage, they must focus on two or three well-defined areas rather than trying to keep up with the entire process.

 

 

Tresoc - a new type of community renewables company

Sawton Mill near Totnes. Tresoc will buy a share in this if it is fully financed

Totnes Renewable Energy Society (Tresoc) in Devon is trying to raise up to £1.5m to fund a portfolio of six PV and hydro projects near the town. What makes Tresoc unusual – and perhaps unique in the UK – is that it is both financing current projects and developing a wide variety of new ventures, including an innovative waste-to-energy plant and biomass scheme, for future investment.

This is an ambitious scheme to create a genuinely local energy company that might eventually hope to directly supply its electricity and heat to investors, rather than selling to a big power company. One day, this may make it an exciting form of new energy enterprise. But therein lies the problem. Tresoc is asking investors to back what is, in effect, a renewables development company.

This isn’t a standard community financing in which a Community Benefit Society offers a 5% annual return on the basis of a virtually risk free photovoltaic installation in local fields. It is a more complex venture that can only offer lower returns on the hydro and solar assets it has permits for but which hopes to be able to generate better income on the larger projects it has in the early stages of development.

If Tresoc only raises £0.5m of its £1.5m target it only intends to pay a return of 1.25%, rising to 4% if it gets the full amount. These aren't high figures for community energy - Abundance, for example, offers more on its already constructed assets - and Tresoc investors may partly be putting their money in with a hope that the company will be able to successfully develop new ideas. (I should stress that investors should only invest on the basis of the schemes specified in the prospectus and the directors of the company are making no commitment to developing any of the projects they have on the drawing board.)

In other words, this looks much more like a standard risky new business than a typical community energy fund-raising. As might therefore be expected, investor money is coming in relatively slowly. The company’s effort to raise the cash it needs is further impeded by the overhang of antagonism between local residents over Tresoc’s failed attempt to get planning permission for two commercial wind turbines a couple of years ago.

Tresoc is a business that should be funded. Some of its PV and hydro projects are already operational, with records of output and costs. Totnes is not far from the edge of Dartmoor, with good access to the steep streams and large flows off the moor, so there will be no shortage of new hydro schemes with good year-round supplies of water. The woodland in the local area is largely unexploited for any purpose. Biomass heating or electricity production from waste wood is likely to be as economic here as it is anywhere else in England. Solar radiation is excellent for the UK. Able people are giving huge amounts of voluntary time to make Tresoc work and to build a diverse portfolio of low-carbon energy producers.

In the longer run, Tresoc may enable Totnes to become one of the first towns to operate its own independent generation and energy retailing company supplying local homes and businesses with ‘local’ electricity under the ‘Licence Lite’ scheme. [1] ‘Licence Lite’ rules were established by Ofgem to allow small generators to sell directly to a defined group of customers, such as the investors in a renewables company. Progress has been slow and only the Greater London Authority has used the provisions, to generate power in one location and sell it directly to another GLA building in a different part of the capital without going through an intermediary. But, at least in theory, Licence Lite allows a community renewables company to be the retailer of electricity to local consumers of power.

Tresoc has the enormous advantage, were it ever to exploit the Licence Lite rules, of having the potential diversity in its portfolio to allow it to match the profile of consumer demand (high in the morning, high in the evening, low at other times) with the output from its generating plant. Solar PV is, of course, likely to produce most power around midday but Tresoc’s other assets, such as future waste-to-energy plants, can be cranked up and down to match customer demand patterns. This freedom to follow the electricity demand of customers is completely critical for a successful Licence Lite venture. Otherwise the generating company will have to buy and sell at unpredictable times, and perhaps at very short notice, in a possibly highly illiquid electricity market.

Tresoc will probably need to add other flexible low-carbon sources, such as anaerobic digestion assets, to the range of generating plant available to it. However, Totnes’ wide range of agricultural enterprises makes this perfectly possible. And it has the powerful benefit of able directors operating within a local community that is both knowledgeable about renewable energy and – as the first Transition Town – very committed to the move away from fossil fuels.



[1] This is my statement of one of the opportunities open to Tresoc in the future. No details of such a scheme are in the fund-raising prospectus and I haven’t even talked about it with the directors of the company. So in no sense is it part of the current Tresoc share offering.

30% improvement in resource use efficiency since 2000

In the last post I looked at the evidence of the decreasing use of resources in the UK. The Environmental Accounts have just provided a new measure of material use, called Raw Material Consumption, which gives us a better estimate than previous series. The new index includes a figure for the resources used elsewhere in the world to make things that are then imported into the UK. If we divide Raw Material Consumption, expressed in millions of tonnes, by GDP we get a figure for the weight of physical resources the UK uses to generate a £ sterling of income. The figure has fallen from about 513 grams in 2000 to around 358 in 2012. The average reduction is just under 13 grams a year for each £ sterling of GDP. This is equivalent to a 30% reduction since 2000. (All these figures exclude fossil fuel consumption, which isn’t included in the statistics. However we do know that energy consumption is also falling fairly consistently each year).
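As a quick check of that arithmetic, here is a minimal sketch. The 513 gram and 358 gram endpoints are the Environmental Accounts figures quoted above; nothing else is assumed.

```python
# Rough check of the resource-intensity arithmetic quoted above.
# ~513 grams of (non-fossil) raw material per GBP of GDP in 2000,
# ~358 grams in 2012 - both figures come straight from the text.

start, end = 513.0, 358.0          # grams per GBP of GDP
years = 2012 - 2000                # 12 years

total_fall = start - end           # 155 grams per GBP
annual_fall = total_fall / years   # ~12.9 grams per GBP per year
pct_fall = 100 * total_fall / start

print(f"Fall per year: {annual_fall:.1f} g/GBP")   # ~12.9, i.e. 'just under 13'
print(f"Total fall since 2000: {pct_fall:.0f}%")   # ~30%
```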

Grams per £

Grams per £ sterling of GDP is an important measure and should be targeted. As we move haltingly towards an economy that productively recycles everything for ever, we will reduce the volumes of materials harvested or mined. And moving to low carbon sources of energy, whether PV or nuclear, also reduces the weight of resources we need to extract, as well as reducing CO2 emissions.

 

'Peak Stuff' again

In late 2011 I wrote a paper which suggested that the UK’s consumption of material goods had peaked. I pointed to the evidence from a variety of different statistical sources that the weight of the things we use to sustain a modern economy was tending to fall. This included products such as fertiliser, water, steel, concrete and food. I saw this as very good news; increasing prosperity would not necessarily imply increasing use of natural resources. Recent data support the 'Peak Stuff' hypothesis and suggest that economic growth in advanced countries doesn’t increase the use of material extracted from the soil or earth’s crust. I think the ‘dematerialisation’ idea has real strength to it. At the time many people questioned the conclusions of the paper. They said that I hadn’t properly accounted for the UK’s imports of processed goods from overseas. This would depress the apparent UK use of materials. And the critics commented that I had chosen an unrepresentative sample of fifteen or so indices to make my point.

Others worried that the implication that economic growth might possibly be good for the environment was dangerous. George Monbiot wrote in the Guardian

 ‘I won’t deny it: my first reaction on seeing the results of Chris Goodall’s research into our use of resources was “I don’t want this to be true.” Obviously, I’d like to see our environmental impacts reduced, as swiftly and painlessly as possible. But if his hypothesis is right – that economic growth has been accompanied by a reduction in our consumption of stuff and might even have driven it – this would put me in the wrong. I’m among those who have argued that a decline in our use of resources requires less economic activity, or at least a transition to a steady-state economy.’

Three years on, how is the conclusion that advanced economies are on the brink of dematerialisation faring? Is Monbiot right or wrong? Broadly speaking, I think it is fair to say that new data strongly supports the hypothesis that material use is tending to fall as energy use stabilises or falls and material goods get lighter and (usually) more durable. In this note, I’ll focus on today’s Environmental Accounts for 2012 which show a new measure called Raw Material Consumption that tries to include the full resource use of goods imported into the UK.

The background

Advanced human societies require just three things: biomass, minerals from the earth’s crust, and energy, usually from fossil fuels. If we can measure the weight of these things and compare it over time, we have a measure of the sustainability of the global economy. A population that is cutting its use of materials is better equipped to endure.

History strongly suggests that the early stages of economic growth see very rapid increases in the use of natural materials. Iron ore and stone are extracted to make steel and concrete. Fossil fuel is mined to provide energy for manufacturing, transport and for comfort in the form of heat, light and cooling. But the ‘Peak Stuff’ hypothesis suggests that there is a limit. We now, for example, generally consume much less food per person than we did. Machines do almost all our labour and we need far less energy from our meals. (The UK eats far fewer calories per person than in the 1950s). And once we have built our roads and buildings, our need for steel tends to fall. Stronger materials and rapid digitalisation of our societies are cutting the weight of things we buy. Downloads replace DVDs. Plastic replaces metal.

The Environmental Accounts for 2012

The UK tries to measure its resource use. It can compute with reasonable accuracy how much is extracted or harvested across the country. And it can estimate the weight of imports and exports. The Office for National Statistics (ONS) has provided a figure for what is called Domestic Material Consumption for over a decade. This number adds the weight of imports and deducts the weight of exports from the national extraction and harvesting of fuels, biomass and minerals. (The UK doesn’t really mine metal ores).

This is the pattern it has found.

[Chart: Domestic Material Consumption (DMC), UK]

 

The decline in material use has continued, even as the UK emerged from recession after 2010. The UK uses about 600 million tonnes of materials per year, or between 9 and 10 tonnes per person, a very low figure by European standards. This includes fossil fuels.

What is more difficult is working out the weight of resources that went into imports. Take a tonne of steel for example. To make the metal, a much larger volume of ore needs to be mined and a large amount of coal is needed to melt the iron out of that ore. Fossil fuels are also needed to transport the finished goods to the port and then on the ship to the UK.

This year ONS has published what it calls an experimental statistical series, estimating how much material weight goes into the finished imports (such as meat, iPhones and steel girders). Raw Material Consumption excludes fossil fuels so the numbers are actually lower than Domestic Material Consumption. But the trend is the same.

Raw Material Consumption peaked in the early part of the last decade at over 600 million tonnes and then fell, including a very sharp fall of 100 million tonnes between 2007 and 2009. The beginnings of economic recovery saw a small increase to 2011 but 2012 saw a slight fall, even as growth began to pick up.

[Chart: Raw Material Consumption, UK]

 

Energy use went up slightly in 2012, largely as a result of a cold winter. However, the long run trend in consumption of fossil fuels is also strongly downward.

 

Low carbon heat programme ropes in just 79 households in the first two months

Heating buildings is the single most important use of fossil fuels in high latitude countries such as the UK. In the average British home, gas consumption is almost five times electricity consumption. The Green Deal is a part of the approach to cutting heating demands but ‘the Renewable Heat Incentive (RHI) is the main scheme of the heat strategy’, according to DECC. The RHI for domestic homes was finally launched at the beginning of April after a gestation period of about five years. According to the most recent data, just 79 homes signed up for the RHI in the first two months. Although the RHI subsidy scheme offers some tempting payments, the signs so far are that this scheme will fail in the same way as the Green Deal has done.

The domestic RHI makes a guaranteed payment per kilowatt hour of ‘renewable’ heat produced by air and ground source heat pumps, rooftop solar hot water and wood burning boilers. These subsidies persist for the first seven years after installation. Regular readers of the comments on this blog will know that many air source heat pump installations have turned out to be horrible disasters for homeowners. Except in unusually well insulated houses, the RHI provides nowhere near enough subsidy to cover the increased cost of the electricity needed to operate the pump.

Solar hot water systems are paid over 19 pence per kilowatt hour for each unit of heat that is produced. This might produce a subsidy payment of £400 a year for homeowners spending perhaps £4,000 to put solar collectors on their roofs. Unfortunately, those of us foolish enough to have installed solar collectors ten years ago know that the cost of the annual maintenance for the system tends to outweigh the likely savings.

More pertinently, it is almost impossible for anybody reading the DECC manuals on the RHI to work out exactly how much their solar hot water payments will be. The subsidy is geared to the size of the house and the number of occupants but I have to confess that I am completely unclear how to calculate the amount of heat that is deemed to be produced, and therefore the subsidy payments that are due each year. I cannot even find the document that specifies the formula to be used. (Can anybody help??)

One solar hot water installer told me recently that he had decided to give up installing systems. It isn’t worthwhile to pay the cost of maintaining his authorisation.

This leaves biomass boilers. In certain circumstances, such as in a new house, the finances of biomass look really attractive. The payments for a medium sized home with a heating need of 15,000 kilowatt hours (about the UK average) will be almost £2,000 a year for seven years. This is certainly enough to cover the cost of the system in most circumstances. In addition, if the home is off the gas grid it will be replacing oil or LPG (perhaps 6p per kWh) with wood pellets costing about 5p per kWh. So there is actually a saving on fuel bills as well as the subsidy.
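A rough sketch of that arithmetic. The 15,000 kWh heat demand, the roughly £2,000-a-year payment and the 6p/5p fuel prices are the figures from the paragraph above; the tariff of about 13p per kWh is simply what those figures imply, not a number quoted from the DECC tariff tables.

```python
# Sketch of the domestic RHI arithmetic for a biomass boiler, using the
# figures in the paragraph above. The ~13p/kWh tariff is inferred from
# those figures, not taken from official documents.

heat_demand_kwh = 15_000                 # about the UK average annual heat demand
implied_tariff_gbp = 0.13                # inferred: ~GBP 2,000 / 15,000 kWh

annual_rhi = heat_demand_kwh * implied_tariff_gbp      # ~GBP 1,950 a year
total_rhi = 7 * annual_rhi                              # ~GBP 13,650 over seven years

# Off the gas grid: replacing oil/LPG at ~6p/kWh with pellets at ~5p/kWh
fuel_saving = heat_demand_kwh * (0.06 - 0.05)           # ~GBP 150 a year

print(round(annual_rhi), round(total_rhi), round(fuel_saving))
```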

But whether the owner of an existing home heated by LPG or oil would think it worthwhile to put a pellet or wood chip boiler in the house to replace the existing apparatus is much less certain. The dislocation is likely to be very painful.

So perhaps we shouldn’t be surprised that even after the extraordinary amount of work that DECC put into designing the domestic RHI scheme it looks like failing to capture the enthusiasm of installers or homeowners. The number of new systems is deeply depressing.

Tariff band                 Applications            Accreditations
                            Number   % of total     Number   % of total
Air source heat pump          59        29%           19        24%
Ground source heat pump       17         8%            7         9%
Biomass                       87        43%           38        48%
Solar thermal                 40        20%           15        19%
Total                        203                      79

 

In the first two months of the scheme – which had been heavily pushed for almost a year in advance – the total number of RHI installations was 79 spread across the four different technologies. This equates to about 500 homes a year.

Of course things may improve as the scheme gets better known. But I‘ve seen very little sign of any increase in interest. If the RHI is indeed the ‘main scheme of the heat strategy’, we’re not going to see any observable impact on carbon emissions from any government policy. The Green Deal continues to underperform, with a total of just 1,372 plans signed over the first 16 months. In addition, the retreat from the Energy Company Obligation means that free insulation rates have fallen sharply in the last few months, with the April installation rate (about 33,000 individual measures) lower than any other month since June 2013. It’s difficult not to conclude that policymakers have completely lost interest in decarbonising domestic heat.

 

 

E.ON ignores the DECC agreement to allow local individuals to invest in wind farms

The UK government is keen to encourage more involvement by communities and individuals in commercial renewable energy projects. In particular, it believed it had made a voluntary agreement with the main developers to offer local people the chance to invest in new schemes. It had hoped that it would not have to legislate to oblige commercial companies to let communities buy shares. Unfortunately the agreement doesn’t appear to be working. Even big companies are ignoring it. One recent example is the Rhyd-y-Groes wind farm on Anglesey. E.ON, the huge German-owned utility, is starting the local consultation process prior to applying for planning permission to take down the existing turbines and put a much larger wind farm in its place, probably in late 2015. It is not offering a stake to local people. When I asked why the company was ignoring the agreement to facilitate community ownership I was told in an email

‘Every project is assessed on it’s own merits and it also depends on the size of the project’.

In addition, E.ON is not meeting the agreed industry standard for the payment of money into community funds. The benchmark is £5,000 a year per megawatt of capacity. E.ON is offering less than £4,000 at this site.

It looks like Ed Davey has failed and he’ll have to bring forward legislation to oblige wind farm developers to meet the very limited voluntary commitments he thought he had agreed with them.

In Germany, about 50% of all wind and solar is owned by individuals and cooperatives. In belatedly encouraging community ownership, the UK government is seeking to copy what is now utterly commonplace in Germany and other parts of northern Europe. Part of the logic, by the way, is that when a town owns a respectable stake in a wind farm it ceases to object to the appearance of the turbines.

This is what the DECC Community Energy Strategy said in February of this year

The renewables industry has committed to facilitate a substantial increase in the shared ownership of new, commercial onshore renewables developments and is already developing ever more ambitious and innovative approaches to community engagement and benefits, including some good examples of shared community ownership.

But rather than trying to work with communities and government, E.ON has decided to go its own way. For a company that stresses its wish to restore its reputation in the UK in the wake of scandals such as the mis-selling of electricity and gas, for which it paid a penalty of £12m a few weeks ago, this is very strange behaviour indeed.

 

 

The biggest proposed tidal energy project yet. In China, of course.

Capturing the energy in the tides is an expensive business. The 340 MW Swansea tidal lagoon project is going to cost the best part of a billion pounds, although future UK tidal lagoons will probably be much cheaper. (This project is raising first stage development funding from individual investors. See tidallagoonswanseabay.com for further details.)

[Diagram: a Dynamic Tidal dam, showing different sea levels on either side of the wall]

Earlier in the week an even more ambitious scheme took another step forward. A Dutch/Chinese joint venture announced that its $40bn plan to capture the energy from tides off the coast of China had entered formal economic evaluation by the national government after several years of feasibility studies. If you thought you knew what tidal power plants looked like, think again. It isn’t a barrage across an estuary, nor a lagoon and even less like the Marine Current Turbines underwater windmill. It’s a giant embankment heading 40 km out into the sea completed at the far end by a sea wall running perpendicular to the main structure.

The intended location for this huge T is the Chinese coast between Xiamen and Shantou, pointing out towards Taiwan. The central idea is that this embankment will block the daily tides that run parallel to the coast, causing the water on one side of the wall to rise and the other side to fall. The physics are intuitive; if you put your foot into a stream, the water rises a little on the upstream side and you can see a slight depression in the area just downstream of your leg.

The difference in height, and the tidal flow itself, create the potential to generate electricity. 4,000 bidirectional turbines will capture the energy of the tide, generating a maximum of 15GW. (That’s about 40% of electricity demand on UK summer afternoons).

[Diagram: cross section of the tidal embankment]

Such an enormous project seems completely fanciful. The demands for concrete and stone alone seem to dwarf even China’s capacity. However the idea has been successfully pushed by a well-regarded consortium of Dutch marine engineering companies who claim the challenges are surmountable.

Could this $40bn idea make financial sense? Let’s compare it with the proposed Hinkley nuclear power station. Hinkley is expected to cost £16bn for two 1.6 GW reactors. That’s about £5bn, or around $8bn, per GW. We need to inflate this slightly to reflect that nuclear reactors don’t work all the hours in the year – 90% uptime is a reasonable figure. So the cost of Hinkley comes out at around $8.8bn for an average gigawatt of electricity.

The Chinese tidal plant wouldn’t generate the full 15 GW all the time; power output would rise and fall during each tidal cycle. One estimate is that the plant would average 6 GW over a year, including some allowance for maintenance. If the cost is $40bn, then the capital required per gigawatt of average output is about $6.7bn. This makes the tidal project about three quarters the price of new UK nuclear. And that’s before considering the fuel, operating and waste disposal costs of Hinkley. The nuclear plant may also take longer to build, with the proponents saying that the embankment could be ready by 2020, several years before Hinkley is scheduled to open. All in all, the price of electricity from Hinkley – about £92.50 per megawatt hour – could perhaps be undercut by 30 or 40 per cent by a tidal embankment.
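A back-of-envelope version of that comparison. The exchange rate of roughly $1.6 to the pound is the one implied by the figures above (£5bn described as around $8bn); the 90% uptime and the 6 GW average tidal output are the assumptions stated in the text.

```python
# Back-of-envelope comparison from the paragraphs above. The ~1.6 USD/GBP
# exchange rate is implied by the text; the 90% nuclear uptime and the
# 6 GW average tidal output are the assumptions stated there.

hinkley_cost_gbp = 16e9
hinkley_capacity_gw = 2 * 1.6          # two 1.6 GW reactors
usd_per_gbp = 1.6
nuclear_uptime = 0.90

hinkley_usd_per_avg_gw = (hinkley_cost_gbp * usd_per_gbp
                          / (hinkley_capacity_gw * nuclear_uptime))   # ~$8.9bn

tidal_cost_usd = 40e9
tidal_avg_output_gw = 6.0              # estimated year-round average output
tidal_usd_per_avg_gw = tidal_cost_usd / tidal_avg_output_gw           # ~$6.7bn

print(f"Hinkley: ${hinkley_usd_per_avg_gw/1e9:.1f}bn per average GW")
print(f"Tidal:   ${tidal_usd_per_avg_gw/1e9:.1f}bn per average GW")
print(f"Ratio:   {tidal_usd_per_avg_gw/hinkley_usd_per_avg_gw:.0%}")  # ~75%
```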

Of course there are ecological issues with a barrier running 40 km out to sea. This is going to affect the marine environment, possibly severely. The obstacles to the project are still forbidding. However the Dutch consortium claims to be already looking elsewhere around the globe for its second site. As you’d expect, the focus is on the places where the strongest tides run along the coastline and water depths aren’t too great. The North Sea coast of the UK is on the list of the most favourable opportunities.

 

 

First commercial scale electricity to methane plant goes ahead

[Photo: Avedøre wastewater treatment plant]

In December 2013 wind supplied 55% of Danish electricity. On several days, turbines provided over 130% of the total need for power. The variability and overwhelming scale of wind-generated electricity in Denmark poses problems for the grid operator, Energinet. Other countries hoping to emulate Denmark, such as the UK, will face similar concerns.

The last post on this web site moaned about the lack of fundamental research into energy production and storage. Working out how best to run an electricity system that is dominated by a single and rapidly fluctuating source of power is one obvious area where R+D is urgently needed. In Denmark, the national grid operator has just funded over half the development capital for an advanced Power2Gas project at a wastewater treatment plant in Copenhagen. The crucial advantage of Power2Gas is that it can use surplus power, available when the wind is blowing strongly, to turn electricity into natural gas. By contrast, the UK failed to find the capital for a similar proposal here.

The Copenhagen 1MW project may fail. The technology is new and although it has worked very effectively at a smaller scale, there is no guarantee that it will operate successfully in the larger configuration planned for the wastewater plant. But there is no alternative to Power2Gas as a long-run solution to the energy storage problem. The world needs to invest now in risky projects that will eventually show us how to store surplus electricity in the gas grid.

 

Power2Gas

 

When the grid has too much power pressing to enter the transmission network, the operator has no choice but to disconnect (or ‘curtail’) some sources of electricity. The power that could have been used is wasted.

 

One alternative is to take the otherwise worthless surplus and use it for the electrolysis of water. This splits water molecules into their constituent hydrogen and oxygen atoms. It is a simple process, carried out in the chemistry labs of secondary schools around the world. The hydrogen has energy value. When burnt, or combined with carbon, it can be used as a fuel.

 

Some people therefore believe that we should run advanced economies on hydrogen. For example, hydrogen can be used in fuel cells for vehicles or home heating. It is perfectly feasible that a large fraction of our total energy demand could be met with H2.

 

The problem is that the world would have to build huge amounts of hydrogen storage and convert all engines that currently use fossil fuels into machines that use H2 instead. This is almost certainly too expensive and too disruptive to be a realistic option.

 

An alternative is to take the hydrogen and combine it with CO2 to make methane (CH4) and water. Making methane in this way is also a simple chemical process. Methane is by far the most important ingredient in natural gas.
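For reference, the two underlying reactions are standard chemistry. Electrolysis splits water into hydrogen and oxygen; methanation (the Sabatier reaction, or the microbial equivalent described later in this piece) combines that hydrogen with CO2 to give methane and water:

```latex
% Electrolysis, powered by surplus electricity:
2\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{H_2} + \mathrm{O_2}

% Methanation (Sabatier reaction, or its microbial equivalent):
\mathrm{CO_2} + 4\,\mathrm{H_2} \longrightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O}
```

Note that the useful oxygen comes from the electrolysis stage, not from methanation, which is why it is available for other purposes at a wastewater plant.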

 

Since methane can be added in almost unlimited amounts to the natural gas network, it may be possible to convert long surpluses of wind or solar power into an alternative source of power. Germany, for example, has gas storage capacity equivalent to over 200 days of use. It could conceivably store all surpluses of wind or PV electricity in the form of methane, providing a zero carbon source of gas for burning in power stations when renewable energy isn’t sufficiently available.

 

Among other advantages this might help stabilise the wholesale price of electricity in Germany which has frequently dipped below the production cost of coal-fired power stations in the last few months. On several days power prices have gone severely negative. However much the opponents of fossil fuel may cheer this development, it has had profoundly serious effects on the capacity of electricity generators to fund new electricity generation schemes. The bankruptcy of RWE or E.ON will not solve the climate problem.

 

Opponents of Power2Gas usually point to the waste of useful energy that is inherent in the two processes of electrolysis and methanation (making methane). Only about 55-60% of the energy of the surplus electricity is likely to end up in the form of methane. The correct response to this is a) to say ‘so what, it would have been 100% wasted otherwise’ and b) to point out that the waste heat from the two processes and the oxygen derived from electrolysis both have potential value that will reduce the loss from conversion.

 

Why is the first commercial scale electricity-to-methane project sited at a wastewater treatment plant?

 

Wastewater treatment plants (sewage farms in ordinary English) take human waste and other organic material and decompose it. One output is a biogas that is part methane and part CO2. The CO2 means it cannot be added to the national gas grid. So the biogas is burnt in a gas engine to generate electricity.

However the CO2 is useful for the methanation stage of Power2Gas. The new technology to be used at Copenhagen puts the entire stream of biogas through a reactor that converts the carbon dioxide, along with the hydrogen from electrolysis, into methane. The output from the process is pure enough to put directly into the gas grid.

 

The company delivering the technology to the project is Electrochaea, an early stage business developed from research at the University of Chicago, which has selectively bred a type of microorganism (methanogenic archaea) to feed off hydrogen and CO2 to make methane. Electrochaea has completed a pilot plant (1kW) in the US and successfully operated a larger pre-commercial system for much of 2013 at Foulum in Denmark, backed by utilities such as E.ON. The Foulum trial took place using biogas from an anaerobic digester, rather than gas from a sewage farm.

 

A wastewater treatment plant makes more sense. The surplus oxygen from the electrolysis process can be injected into the waste water to increase the rate of decomposition of the organic materials. The surplus heat from the methanation process can be similarly used to speed up the creation of biogas from the sewage.

 

Biogas can be stored temporarily at a waste water plant, meaning, for example, that the electrolysis may well only take place when electricity prices are low, or perhaps even negative. The plant will also benefit from payments for being available to act as ‘frequency reserve’ to the operator of the national electricity grid. This means it will shut down the electrolysis process when the grid AC frequency drops below a safe level and will increase the electricity it is taking when the frequency is too high.

 

Every wastewater plant in the world will eventually have some form of Power2Gas equipment to upgrade the biogas into methane, using electricity when it is in surplus.

 

The Copenhagen project

 

At the wastewater treatment works at Avedøre in Copenhagen, the seven commercial partners will install a 1 MW Power2Gas plant, using the proprietary Electrochaea technology for methanation and electrolysis equipment from the Belgian company Hydrogenics.[1] The plant will be built in early 2015 and will run as a trial for the remainder of 2015. A fully commercial Power2Gas system should be available in 2016.

 

About half of the €7m cost will be borne by the state-owned Energinet, which operates the gas and electricity grids in Denmark. The rest comes from the other partners, including the car company Audi. Audi’s interest in this venture, which complements its existing Power2Hydrogen research, arises from its wish to find non-fossil fuel sources for its cars. Liquid methane is a potential fuel for vehicles. Other participants include an energy trader and an operator of biogas plants, both of which would benefit from the success of the Avedøre trial.

 

The importance of this commercial experiment

 

Without energy storage, the renewables revolution will fail. Denmark and Germany both know this, not least because of the increasingly obvious impact of wind and solar on the functioning of the electricity market in both countries. But it should also be apparent to other countries that the world will need huge amounts of capacity to store electricity. The companies that create the means to convert surplus power into energy that can then be used when supply is tight will become enormously valuable. They will have solved perhaps the most intractable problem of the conversion to a low carbon world.

 

The UK has yet to understand this. Electrochaea has made repeated attempts to create a network of partners in the UK. Despite sustained interest from Severn Trent, the water and sewage company, and National Grid, the company told me that ‘nobody was able to provide the matching equity’ for its proposal for a trial site in central England. Its applications into competitions for grant funding run by DECC and other bodies have been rejected.

 

As I said in a blog post of last week, spending multiple billions every year on support for existing technologies through schemes such as feed-in tariffs must be matched by financial backing for raw, risky and unconventional technologies that might radically reduce the cost of a full move away from fossil fuels.

 

I’m not qualified to judge whether Electrochaea’s technology will work but I do know that backing a trial plant in the UK with a few million pounds is an overwhelmingly sensible idea. Isn’t it about time that someone had the courage to invest in companies that could change the energy world for ever?



[1] I believe the 1 MW refers to the energy value of the methane output, which will be substantially less than the electricity used to carry out the electrolysis.

Time to focus on energy research

In the last few days I’ve had the privilege of giving short talks at a conference in Barcelona and to a morning seminar for an offshoot of the Technology Strategy Board in the UK. (Thank you very much to the Spanish ceramic products company ROCA for sponsoring the extremely illuminating Barcelona conversation. And to SNCF for making train travel from London to Barcelona so comfortable and efficient). I wanted to make two central points in these presentations. First, driving down the costs of renewable technologies for electricity generation is going to get increasingly expensive. I suggest it might be better to invest more in R+D and less in subsidy payments for production. By 2020, the UK will be spending at least £7bn a year on direct payments to generators that own wind, solar and other low-carbon sources of power. R+D spending will be less than 5% of this. The balance isn't right.

Second, I wanted to pose a question that I think is being too often ignored: new Chinese nuclear power stations being built with Westinghouse AP1000 technology seem to be costing about $2,000 per kilowatt of power capability. And the US nuclear plants in the middle stages of construction, such as Vogtle 3, also seem very cheap compared to the astronomical sums now quoted for Finnish, French and British plants using Areva’s competing EPR technology. The proposed power station at Hinkley in Somerset is going to cost of the order of four times the price for the Vogtle unit. Why is the UK apparently willing to finance – through loan guarantees and high and fixed prices for electricity - a particular technology that increasingly looks outdated, over-complicated and very difficult to construct?

Let’s look at nuclear power first. The parts of the world that invested heavily in first and second generation nuclear power stations have seen extravagant increases in the construction cost of nuclear power. In France the price doubled between the first nuclear power station and the last. The cost of the new plant now under construction in Normandy is still escalating. But it’s a reasonable prediction that the cost will finally come in at around 6 times the cost (inflation-adjusted, of course) of the pioneer stations of the late 1970s.

 

[Chart: French nuclear construction costs]

If we had the figures for the still uncompleted Olkiluoto EPR reactor in Finland, the numbers would be similar to the Flamanville example. I thought it was amusing that in a recent announcement Areva blamed ‘Finnish lawyers’ for the delay and cost overruns. Finland has about the lowest numbers of lawyers per capita in the EU, Britain has the highest. If Areva cannot cope with Finnish lawyers, we know they are going to be rapidly overwhelmed by their far more numerous and bloodthirsty UK equivalents.

The systematic inflation in nuclear power extends to the US. There most estimates see average nuclear power construction costs more than tripling since the 1960s to around 80% of the projected Flamanville cost. (Please note that the US numbers are in dollars, the French figures in euros).

 

[Chart: US nuclear construction costs since the 1960s]

The consistent and rapid inflation in nuclear costs has dulled observers into an unthinking pessimism. Every time I mention Olkiluoto, eyes roll and experienced commentators simply say that the EPR has added unnecessary complexity in an effort to demonstrate 100% safety. All the savings from the technological advances of the last sixty years in nuclear power stations have been taken back in the form of enhanced safety features.

(Actually, though this is not strictly relevant, it’s worth mentioning that the pioneers of the nuclear power industry in the UK thought, writing in 1956, that it would be possible to make electricity for about 2.5 pence per kWh.[1] This is about 55p in modern money, or six times what EDF is promised for power produced at Hinkley. So things have actually got better in some ways since the first nuclear power plant was opened at Calder Hall in the mid-fifties).

Does modern nuclear energy have to cost so much? The Chinese example should give us pause. The first new generation plants will be completed this year or early in 2015. Of course we don’t know whether the numbers are strictly comparable, but the cost appears to be about $2,000 per kilowatt of generating capacity. At today’s exchange rate, this means that the new Chinese power stations will cost a quarter of Flamanville’s price. And the Chinese are expecting future plants to come in at 80% of the first reactors. Why aren’t people flying off to Beijing to find out exactly why this is happening? Let’s put this in a UK context. If we could copy China and build nuclear at $1,600 a kilowatt, the cost of switching to low carbon electricity would fall from perhaps £6,000 a household to little more than £1,000. Shouldn’t we be trying to copy Chinese engineers?

As usual, one gets quite a lot of half-baked semi-racism when one asks this question. You hear that the Chinese aren’t sufficiently concerned about safety, either of employees or local people. Or that the Chinese are using forced labour. And so on.

But some people also notice that the reactor design in China is different and, second, that the country’s extraordinary amount of high quality civil engineering in the last decade has also given it unparalleled knowledge of how to pour concrete and forge steel cheaply and well. If I had a role in UK energy policy I would be asking Chinese companies to come here and build nuclear power stations for us. And trying very hard to work out very quickly whether the AP 1000 design is safe and why it appears to be so much cheaper than the EPR model.

The construction of AP 1000 reactors at Vogtle in the US state of Georgia started later than the Chinese reactors. So we may not really know what the real cost is. However the plant’s owners are saying that the cost is likely to be $3,100 a kilowatt. That’s nearly twice the Chinese cost but it’s still a huge potential saving over the costs in Finland, France and, in prospect, Britain. Please can we hear from a well-qualified engineer as to why this might be so?

 

[Chart: new nuclear construction costs]

Just to make the obvious point, let me stress that the reason that construction costs of nuclear are so important is that the running expenditures are so low. If we can cut the capital costs of a new power station, this improvement feeds into dramatic price reductions. Copying the suggested cost of Vogtle in the UK would hugely reduce the subsidy needed for nuclear electricity. My guess is that instead of £92.50 per megawatt hour at Hinkley we would need to offer a price of less than £45 for a reactor built like Vogtle. Chinese costs would reduce the required price even further, to well below the costs of fossil fuel power. And, as should be apparent to everybody, if we don’t get low-carbon costs below fossil fuels, we are never going to maintain a global push for decarbonisation. Electorates simply won’t stand for it.
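A minimal sketch of why capital cost dominates the required price. Everything here except the £92.50 Hinkley strike price is my own illustrative assumption – the capex figures per kilowatt, a 9% cost of capital, a 35-year recovery period, a 90% load factor and roughly £25 per MWh of running and fuel costs – so the outputs should be read as the shape of the arithmetic rather than forecasts. The point is that, with running costs this low, the price a reactor needs tracks its construction cost almost directly.

```python
# Illustrative only: every number here except the GBP 92.50/MWh Hinkley
# strike price is my own assumption (capex per kW, 9% cost of capital,
# 35-year recovery, 90% load factor, ~GBP 25/MWh running and fuel costs).

def required_price(capex_gbp_per_kw, rate=0.09, years=35,
                   load_factor=0.90, opex_gbp_per_mwh=25.0):
    crf = rate / (1 - (1 + rate) ** -years)        # capital recovery factor
    annual_capital = capex_gbp_per_kw * crf        # GBP per kW per year
    mwh_per_kw_year = 8.76 * load_factor           # 8,760 hours = 8.76 MWh per kW
    return annual_capital / mwh_per_kw_year + opex_gbp_per_mwh

print(required_price(5000))    # Hinkley-like capex (~GBP 5,000/kW): ~GBP 85/MWh
print(required_price(1940))    # Vogtle-like capex ($3,100/kW ~ GBP 1,940/kW): ~GBP 48/MWh
print(required_price(1000))    # Chinese-style capex ($1,600/kW ~ GBP 1,000/kW): ~GBP 37/MWh
```

On these assumptions a Hinkley-cost plant needs something in the region of £85 per megawatt hour and a Vogtle-cost plant under £50, broadly the relationship described above.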

This leads on to the second question I tried to look at in my talks. Is it right to drive cost reductions in renewable technologies by the use of direct production subsidies that are adding increasing amounts to domestic bills? Or should we be spending more, much more, on fundamental research and development? The argument is this. Broadly speaking, we can achieve cost improvements in any technology either by accumulating production experience (usually called ‘the learning curve’) or by targeting improvements in technology. It is often difficult to disentangle the two phenomena but I still think the distinction is useful. Put another way, should we be trying to cut prices by ‘learning by doing’ or by ‘learning by research’?

Governments around the world have backed away from energy research. In the 1970s, administrations that had been frightened by the OPEC oil embargo put big sums into R&D, particularly into nuclear but also into wind. Outside France, that investment largely failed, and failed catastrophically. Energy R&D then plummeted around the world. A decade ago, UK energy research was costing just a few tens of millions a year. (It has gone up somewhat since).

[Photo: NASA experimental wind turbine]

Instead of research, governments decided to back ‘learning by doing’. They offered production subsidies (now often called Feed In Tariffs) to get investors to put capital into wind, solar and a few other technologies. This, they correctly foresaw, would allow manufacturers and installers to cut costs. The learning curve (which I pedantically call the ‘experience curve’) swung into action as it almost always does (except in nuclear). As the accumulated volume of wind turbines that had been built doubled, costs fell by about 14%. The rate of learning for PV looks greater, at about 20%.

[Chart: the experience curve for wind]

When I talk about the experience curve, I don’t just mean the cost improvements arising from larger turbines, or bigger factories. For some almost magical reason, costs fall in a reasonably consistent and predictable way just because companies get better at making the turbine. It’s obvious why governments like Feed In Tariffs. Prices do go down without any obvious reason.

Compare this with ‘learning by research’. Put $100m into some crazy new idea for making solar panels and you are 95% likely to fail. Faced with media always eager to locate apparent stupidity, or even corruption, no government minister or senior official will want to back the latest idea coming out of the Oxford Science Park or an automotive supplier in Swindon, knowing that she is fairly certain to look really foolish within a year. As a result of bureaucratic risk aversion, direct subsidies are going to cost the UK consumer £3bn this year while government energy research languishes at perhaps 6% of this figure. (To be honest, this estimate is almost a guess – nowhere can I find an authoritative estimate of the total budget for government energy R+D for 2014/15. The DECC documents I have seen are extraordinarily confusing and obfuscatory.)

There’s an even more important problem as well. At present the UK government doles out its R&D budget in tiny spoonfuls. It gives £1m to this nascent technology, a few hundred thousand to another, and a generous £3m to a particular favourite. In my view this not just pointless, it is actively counter-productive. Little dollops of cash have a truly awful effect.

I’ll try to explain why this is. Engineers leaving universities or companies with a brilliant idea need money. And government will often provide this, even when venture capital does not. Bodies like the grandiosely named Technology Strategy Board will drip small amounts of cash into many ideas-based companies. It won’t actually be enough to pay for real innovation or commercialisation but it will be just about enough to keep the business alive.

Why is this bad? It means that the talented engineer will stay beavering in his lab night after night hoping to make marginal improvements that can justify the next request for government rations. He works for the government, not for the marketplace. Actually, it would be far better if he failed, went broke and returned to the labour market where he could exercise his (undoubtedly real) skills on another project.

Spreading a hundred million pounds or so a year over perhaps two hundred potential innovators in the UK energy market is a mistake. It would be far better to gamble (and, to be absolutely clear, this is gambling) tens of millions on the technologies that might really make a difference. This is the way it would happen in the States but even there the disastrous experience of backing PV venture Solyndra has chilled the willingness to try to back winners.

But back winners we must, however unfashionable this task seems to be. Without large punts, progress on cost reduction in renewable technologies will slow.

Let’s look at one example of this. What will the iron law of the experience curve do to the cost of wind turbines over the next few years? More precisely, if we do decide to continue to back wind globally, but only by means of production subsidy, we’re reliant on the expected magical cost reduction of 14% for every doubling of accumulated (not yearly) production.

Let’s say we want to cut the cost of wind in half to make it competitive with fossil fuels across the world. If the 14% experience curve continues to operate, we’ll need to expand total accumulated production about eighteen fold. I guess that the world now has about 250 gigawatts of production experience of turbine production. That would mean we’d need 4,500 gigawatts of wind turbines to get down to 50% of the current cost. And this amount is equivalent to almost the entire world generating capacity today.

[Chart: following the experience curve for wind]

It’s also going to be very expensive indeed. To cut costs by 14% when the world had made 1 gigawatt of wind turbines required another 1 gigawatt to be manufactured. To do so now requires 250 gigawatts to come off the production line. And although the subsidy needed has fallen considerably since the days of 1 gigawatt of accumulated production, it has probably only declined by five or six fold per unit of capacity. So overall subsidy costs might be fifty times as much.
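A rough check of that arithmetic, as a sketch. The 14% learning rate and the 250 GW of accumulated production are taken from the text; the required expansion is very sensitive to the exact learning rate assumed, so the round figures above (eighteen-fold, 4,500 GW) and the output below are the same order of magnitude rather than the same number.

```python
import math

# Experience-curve arithmetic, as a sketch. Assumes costs fall 14% for every
# doubling of accumulated production and ~250 GW built to date (both figures
# from the text). Small changes in the learning rate move the answer a lot.

learning_rate = 0.14
base_gw = 250.0

# Doublings needed to halve cost: (1 - lr)^n = 0.5
doublings = math.log(0.5) / math.log(1 - learning_rate)   # ~4.6
multiple = 2 ** doublings                                  # ~24x at exactly 14%
print(multiple, base_gw * multiple)                        # several thousand GW

# The text's round figure: an 18-fold expansion cuts costs by roughly...
print(1 - (1 - learning_rate) ** math.log2(18))            # ~0.47, close to half

# Subsidy for the next 14% step: 250 GW of new build now versus 1 GW when the
# industry had 1 GW of experience; if subsidy per unit has fallen five- or
# six-fold, the total bill is still roughly 40-50 times higher.
print(250 / 5, 250 / 6)
```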

Of course this is a very generalised argument. In some windy places, including much of the central USA, wind is probably already almost competitive with new gas-fired plants. In other countries, wind will never be a real choice. I just want to make the point that for governments to rely on the experience or learning curve to drive down costs inevitably becomes more and more expensive. By contrast, sponsoring R&D doesn’t cost more as technology advances. It probably costs less.

So my case is simply that whether it be wind, PV, anaerobic digestion, heat pumps, geothermal energy, tidal lagoons or micro-hydro, genuine background R&D must make more and more sense. Intelligently directed in large amounts per idea, it may create large improvements in costs.

I know of three technologies (in wind, PV and AD) that may have the potential to reduce the underlying price of energy by at least 50%. None will come to market through the aid of Feed In Tariffs. All of them need tens of millions of pounds, which may well not be available from commercial sources. A small fraction of the billions now used to subsidise existing technologies needs to be diverted into directly backing companies like these.

But, of course, this is unlikely to happen. The renewable industries, who are so ready to criticise fossil fuel subsidies, are now addicted to their own guaranteed cash streams from government and have growing lobbying power. The genuine innovation that we need is in danger of never happening.



[1] This figure is taken from Calder Hall by Kenneth Jay, Methuen, 1956.

New subsidy scheme likely to lock out large scale PV

Three days ago the government announced the abrupt end of the current subsidy scheme for large scale solar PV farms. From early 2015, no PV installation above 5 MW will be entitled to payments under the Renewable Obligation (RO). The industry was understandably upset but assumed that the next scheme (called Contracts for Difference) would replace the RO. This seems to be very much the wrong impression. Another DECC document, put out on the same day as the subsidy withdrawal, makes clear that under the new scheme, starting in late 2014, solar will have to fight onshore wind and other cheaper technologies for budget. A limited pot will be made available in October for all mature technologies such as wind, energy from waste, PV and sewage gas. These are all grouped together in Group 1 and a solar development will only win funds if it bids for a lower subsidy than these alternatives.

I contacted DECC this morning and it confirmed this. 'Technologies in group 1 will have to compete with each other. This will require projects in those technologies (to) submit bids, which will be assessed on the basis of price'.

In effect, this probably kills stand-alone solar PV in the UK. Although solar has rapidly come down in price, well-located wind farms are likely to be able to substantially underbid PV for the subsidy funds. Furthermore, the indications are that the pot available in late 2014 and beyond for these more mature technologies will be very small. (Partly because offshore wind, a less mature technology in Group 2, will need very much higher levels of subsidy.) Solar PV will be entering a very crowded Dutch auction against better positioned competitors. Perhaps some new PV farms on the south coast will be able to match other technologies but the odds of success are not great.

DECC acknowledges the precarious position of stand-alone solar. ‘Solar costs and support are currently higher than other (mature) technologies’ competing for money, it says. It seems happy to see farm-scale PV in the UK die. Its rationale is that solar is being rapidly rolled out elsewhere in the world. The UK can piggy-back on the cost reductions achieved in other countries which will ‘occur largely independently of what the UK does’.

Although DECC says that solar PV will be the ‘first large-scale renewable technology to be able to deploy without financial support at some point in the mid-to-late 2020s’, it thinks that the UK should play no part in this global effort. Instead, DECC suggests, the country should play a more aggressive role in developing PV built into roofs and other surfaces, even though these are always likely to be more costly than field solar. (That is, until Henry Snaith and his colleagues at Oxford PV start painting buildings.)

It has to be said that there is some logic in the new DECC stance, however destructive the policy is going to be. But most of us would prefer these policy decisions to be made more explicitly, and more consideration given to the companies struggling to compete in what has been an extraordinary success story. The UK was going to be the largest PV market in Europe this year.

The end of big solar will save households one and a half pence per week

Someone installing a large solar farm today is going to earn a subsidy of about £60 per megawatt hour, about 70% of the payment to offshore wind. As costs continue to drop steeply, developers are rushing to put panels on any field they can find. Government has finally woken up to the growth of solar farms and has decided to remove the subsidy for all installations over 5 megawatts, which covers about 20 acres (8 hectares). In other words, solar has become too successful for its own good. The obvious response to the unexpected (to government, if not to the industry) growth of large scale PV would be to reduce the subsidy to perhaps £50 or even £40 a megawatt hour until the arrival of the new payment mechanism – Contracts for Difference – in the next few years. But, panicked by the explosive growth in farm PV, the government has instituted a blanket ban, citing fears of running out of budget. And, probably, it simply couldn't face yet another drawn-out – and probably unsuccessful – process to try to find precisely the right level of subsidy.

As usual, all the interesting reasoning for the decision is to be found in subsidiary documents and not in the verbose tergiversation in the main paper. And here’s the key figure from background papers: the saving from introducing the ban is calculated at £0.75 per household a year in the later part of the decade. (About one and a half pence per week). The total cost across all electricity users is about £70m a year, or about 1% of the total renewables subsidy. For this saving the government is disrupting the growth of this impressively successful industry and, probably more importantly, increasing the likely level of subsidy needed under the Contracts for Difference (CfD) scheme.

Why is this? The rapid growth of farm PV is helping drive down the costs of solar. When the CfD scheme comes into operation, solar installers will not get a guaranteed price but will bid into an auction. The key value of today’s burgeoning large scale PV industry is that it is forcing costs down rapidly. Surprisingly, the government still doesn’t appear to realise that this is only partly because of falling equipment costs. As important, it is coming from unprecedented reductions in the cost of raising money from investors as they become more aware of the reliability of returns from solar. I have heard of offers at around 3% above inflation.

If the government had allowed the continued growth, experienced and well-resourced operators would have been able to bid even lower prices for big farms when CfDs come into force in a few years. But today’s decision means that the industry will disappear, at least temporarily, in early 2015 or before, and the cost reductions will cease. Capital providers will go on to other projects.

Was this worth it for one and a half pence per household per week? (Just for emphasis, this is their figure, not mine).

 

Age predicts opposition to onshore wind

Who disapproves of onshore wind? The DECC survey of UK public attitudes that I referred to in the previous post allows us to drill down into the personal characteristics of all those who oppose wind turbines on land. (Thank you to the statisticians for making this possible).  Analysis shows that perhaps the most important predictor of someone’s attitude to wind power is their age: opposition to wind barely registers among the under 45s but then rises sharply. By contrast, whether someone lives in a rural area or a city has little impact. The regular survey of attitudes to energy issues interviewed over 2,000 randomly-selected people in March. Of these, 248 either ‘strongly opposed’ or ‘opposed’ wind turbines on land, about 12% of those involved in the survey. A couple of commenters on this web site couldn’t believe these numbers and suggested that those surveyed were unrepresentative of the UK population. This prompted me to look a little more deeply into the other responses of the interviewees opposed to wind.

First, are those unhappy with wind more likely to be anthropogenic climate change sceptics? Yes, 16% of all those surveyed thought that climate change didn’t exist, or was mainly naturally caused. Among those opposed to wind, the number was over twice as high at about 34%. But this can be put another way; only 24% of those who think that climate change isn’t manmade oppose onshore wind.
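As a quick check, the two ways of slicing the same numbers are consistent. This is a sketch that takes the survey base as roughly 2,000, as stated above; rounding in the published percentages accounts for the small gap.

```python
# Quick consistency check on the survey figures quoted above. The ~2,000
# interviewees, the 248 wind opponents, the 16% sceptic share overall and
# the 34% sceptic share among opponents all come from the text.

surveyed = 2000
wind_opponents = 248

sceptics_overall = 0.16 * surveyed                  # ~320 people
sceptics_among_opponents = 0.34 * wind_opponents    # ~84 people

# Share of climate sceptics who oppose onshore wind:
print(sceptics_among_opponents / sceptics_overall)  # ~0.26, close to the ~24% quoted
```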

Social class has little impact. 11% of ABC1s are opposed to wind turbines on land against 13% of C2DEs. The highest percentages of opponents are among As (23%, but the numbers are too small to be relied on) and Es (15%).

Whether someone lives in a large town or city (about three quarters of those in the survey) or in a more rural area (one quarter) is not a particularly good predictor of opposition. 11% of urbanites are anti-wind compared to 15% in the country. The simple view that rural dwellers are against wind turns out not to be really true.

Age does matter. Less than 5% of those under 44 are against onshore wind turbines compared to 25% of those aged over 65. Five-fold differences in a social survey like this are very unusual.

[Chart: opposition to onshore wind by age group, 8 May 2014]

Does anybody know why this is??

Addendum

While doing this little bit of work, I noticed a surprising anomaly in the data. The percentage of people confident in the existence of man-made climate change has tended to fall. Only 35% of respondents say that climate change is mainly or entirely caused by human activity, down from 38% two years ago. (But almost half believe that climate change is caused ‘partly by natural processes and partly by human activity’ and this percentage has risen.)

But despite the growing uncertainty about the anthropogenic source of global warming, far more people now rate climate change as one of the top three problems facing Britain. 22% in the latest wave of research compared to only 10% just two years ago. This is a very striking increase - floods and gales have had an effect.

Those opposed to onshore wind were almost as likely to see climate change as a top 3 challenge as the average respondent. 17% of the anti-wind group were in this camp compared to 22% of all those interviewed. Another surprise?