
Global warming

Idly browsing, I came across a Horizon programme (still available on iPlayer) in which Paul Nurse bemoans the lack of public trust in science, basing his discussion on the case of global warming. I seem to have written about global warming only peripherally (Climategate and wind energy), so perhaps it is worth summarising my views.

My major conclusion is that the debate about whether human output of CO2 is significantly raising global temperatures is a waste of time.

For the first 20 years or so that people focussed on global warming, there were two major problems. Theoretically, it was obvious that CO2 emissions were likely to raise global temperatures. The snag was that for a 30 year period (1940-1970) we had rapidly rising emissions accompanied by slight global cooling. The second problem was that all the data prior to the satellite data (which only became available in the second half of the twentieth century) was riddled with problems. Going back in history over the last couple of thousand years it was almost impossible to get data which was remotely accurate enough. The claimed effect is measured in small fractions of a degree per decade averaged over the globe, so to test it one needs highly accurate data for a large fraction of the globe. Such data was simply not available prior to the satellites.

These difficulties were compounded by three factors. The first was that, going back further into history, it was clear that the climate had been subject to large variations before there was any question of a human effect. In particular, we had had the ice ages, followed by massive melts. So it was clear that the climate was far from stable. It was capable of large changes without any human intervention. Then each particular source of data seemed to have its own complexities and reasons why it might be an unreliable guide. Finally, it was disconcerting that so much of the climate work seemed to take the form of elaborate computer projections that were clearly based on an inadequate understanding of how the climate actually worked.

The effect of all that was to make the whole thesis highly speculative. Unfortunately, many campaigners and many scientists made light of these difficulties, insisted that human actions were causing global warming, and abused those who disagreed or reserved judgment as “deniers” or worse.

That changed when the period of rapidly rising emissions accompanied by slight global cooling was explained. It turned out to be due to human sulfate emissions which had a substantial cooling effect. Those emissions were controlled (because of other adverse effects, notably acid rain) and we got several decades of significantly rising temperatures. So the basic theoretical prediction (CO2 emissions should cause rising temperatures) was now confirmed.

Meanwhile the public debate has been almost entirely about how to reduce human output of CO2. That seems to me completely daft.

There are three problems. One is that it is completely unrealistic to expect people to reduce their energy usage significantly. Even if those in the developed world could be persuaded not to increase, or even to reduce their per capita usage, those in the Third World are determined to increase their usage in order to gain the same standard of living that they see in the developed world. Plus world population is remorselessly increasing.

The second problem is that there is only one technology available for roll-out on a large scale with low CO2 emissions – nuclear. Unfortunately, that has historically been seriously mishandled by politicians, scientists and engineers. Despite nuclear power being a reasonably safe and reliable method of generating electricity, politicians failed to grasp that the inevitable association in the public’s mind with nuclear weapons was going to result in quite different standards being applied to it. The current situation is that we now have largely unnecessary, but hugely expensive, gold-plating in the form of regulations on disposal of nuclear waste, coupled with poor reactor design which makes the plants less safe than they could be whilst they are operating. This lack of safety is not bad enough to kill people in significant numbers, but it is bad enough to cause periodic scares which force up operating costs.

The third problem is that even if we immediately stopped all human CO2 production, the CO2 remains in the atmosphere for so long that temperatures would continue to rise for another hundred years. In other words, even action far more drastic than that envisaged by the most optimistic campaigners would not do much to ameliorate the effects of too much CO2.

What we need is to shift our focus to coping with the effects of global warming, rather than trying to stop it. There is clearly substantial scope for such action – the Netherlands’ dike-building programme was effective and not prohibitively expensive. Of course, it is also worth continuing to develop alternative energy sources and to research climate mechanisms. But it will take decades to develop new energy sources and more decades to roll them out on a large scale, whilst the effects of more research on climate are unpredictable.

In passing, I find Nurse’s programme irritating. He is just bleating that the rest of the world is not treating scientists with more respect. But in this case scientists have hardly deserved it. In the early decades they hopelessly over-egged their case. Moreover, much of the work was shoddy and made life easy for those who disagreed with it. More recently, they have too readily lent their support to such manifestly daft projects as switching from gas power stations to wind power.

I also find it amusing to hear him complaining about amateurs forming opinions irrationally, paying attention to fashion or emotion, instead of carefully reviewed data and carefully analysed arguments. Has he not noticed that scientists frequently do the same? Take a field with which I am reasonably familiar – cosmology. It is rife with highly speculative theories, which are widely supported simply because they are fashionable.

He concludes by demanding that in this case we focus on the science. By that he seems to mean his view of the debate with those who believe that human CO2 emission is not causing global warming. But that debate is irrelevant. What we need are practical solutions that will help to deal with the consequences of global warming.


Supernova 1987A

[Photo: James Franson]

This supernova was in the Large Magellanic Cloud, a dwarf galaxy orbiting slowly around our own galaxy, the Milky Way. Such nearby supernovae are rare. It is estimated (by counting supernova remnants) that there is a supernova in the Large or Small Magellanic Cloud about every 300 years.

Supernovae are exploding stars and it is not unusual for the supernova to outshine its galaxy for a brief period. Nonetheless, most of the energy released in the explosion comes in the form of neutrinos. Neutrinos, however, are extremely difficult to detect. It is estimated that about 10^58 were released in the 1987A explosion, giving about 10^15 per square metre at the earth. The largest neutrino detector, Kamiokande II, observed just 12 of these. Another large detector managed 8.
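
As a rough back-of-the-envelope check of those figures (my own sketch; the only inputs are the quoted total of 10^58 and the roughly 50 kpc distance to the Large Magellanic Cloud), spreading the neutrinos over a sphere of that radius does give a fluence of the quoted order:

    import math

    # Back-of-the-envelope check of the quoted fluence (a sketch; the ~50 kpc
    # distance to the LMC and the 1e58 total are the only inputs assumed).
    N_NEUTRINOS = 1e58                 # total neutrinos released by SN 1987A (quoted above)
    KPC_IN_METRES = 3.086e19           # one kiloparsec in metres
    distance = 50 * KPC_IN_METRES      # distance to the Large Magellanic Cloud

    sphere_area = 4 * math.pi * distance ** 2   # area over which the neutrinos spread
    fluence = N_NEUTRINOS / sphere_area         # neutrinos per square metre at the earth
    print(f"fluence ~ {fluence:.1e} per square metre")   # roughly 3e14, i.e. of order 10^15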

Curiously, a third detector (the LSD detector under Mont Blanc) claimed to see 5. But (1) calculations based on the detections by Kamiokande and its own characteristics suggest it should only have seen 1, and (2) it detected them about 5 hours too early.

Now an online paper has been published by James Franson, Univ Maryland, in the New Journal of Physics (Apparent correction to the speed of light in a gravitational potential), claiming that this curiosity means that the speed of light is actually slower than we thought. In other words, he thinks that (A) the Mont Blanc detection marked the supernova, (B) the far more sensitive Kamiokande detector failed to detect it, (C) the later detection by Kamiokande was either some kind of glitch, or it was caused by another event which Mont Blanc failed to detect because of some glitch.

This seems wildly unlikely. Unfortunately, it is difficult to disprove his idea that the speed of light varies according to the gravitational potential. If true that would mean that all astronomical light reaching us from all directions would have travelled slightly slower than normal light-speed for most of its journey, but detecting that slow-down would be difficult, because events like 1987A where we have something else travelling from the distant object to compare with the light are extremely rare.

But unlikely or not, the 1987A event provides negligible evidence for its truth. It is overwhelmingly more likely that Mont Blanc messed up and recorded a non-existent influx of neutrinos.

I mention all this because I find this kind of science coverage (by the Mail) extremely irritating. An implausible speculation is presented as if it had a serious chance of overthrowing a major physical theory.


Brian Greene

The idea that fundamental physics is explained by the multiverse is one of those ideas so obviously bad that I can only watch in absolute amazement as so many smart people support it.

By comparison string theory (Greene’s area of specialism when he is not popularising) seems relatively innocuous. Its problem is that it is prediction-free. That is not necessarily fatal. Good theories sometimes take decades to devise and elaborate to the point where they can be effectively tested. Einstein’s general relativity is a good example. It took decades before effective computational techniques were found to work out exactly what it predicted in most practical situations, and in parallel it took the experimenters a similar time to devise apparatus which could measure the mainly tiny differences from Newtonian predictions.

Having said all that, I was amazed by a TED talk by Brian Greene, which he apparently gave two years ago in April 2012.

[Transcript]

Here is his explanation of why we must believe in multiverses. The 2011 Nobel prize for physics went to three physicists who used improved techniques to detect supernovae in distant galaxies. The idea is that one particular type of supernova (type 1A) is a “standard candle” so by finding a large number and plotting their apparent brightness against redshift one can detect whether the rate of expansion of the universe is changing with time. The claim is that the data shows the rate of expansion is increasing.

As it happens, the Nobel Committee notwithstanding, the data for this claim is exceedingly flimsy:

[Figure: supernova brightness against redshift, with model curves]

The various dotted lines represent some widely different assumptions on the variation in the expansion rate. The red dots denote supposed type 1A supernovae. Note that the error bars on each one are substantial. Moreover a good many dots have been eliminated as being unreliable for various reasons. It is in fact a classic “small effect”.

If you want to believe the conclusions, then you have to believe that: (1) the explosion mechanism for type 1A supernovae is well-understood; (2) we can accurately distinguish type 1A explosions from other types; (3) we have correctly removed aberrant data and only aberrant data.

Personally, I don’t have much faith in any of those points. I am particularly sceptical of our ability to distinguish 1A supernovae from others. Wrongly including 10% of the red dots and wrongly excluding another 10% would easily be enough to convert a decreasing rate of expansion into an increasing rate.
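
To illustrate the general point, here is a toy sketch of my own (made-up numbers, nothing to do with the real supernova analysis): mislabelling a modest fraction of a “standard candle” sample, with a systematic brightness offset, is enough to flip the sign of a fitted trend.

    import numpy as np

    # Toy model only: arbitrary illustrative numbers, not the real cosmological fit.
    rng = np.random.default_rng(0)

    n = 5000
    z = rng.uniform(0.0, 1.0, n)                        # redshifts
    true_slope = -0.05                                  # a mild downward trend in the residuals
    resid = true_slope * z + rng.normal(0.0, 0.2, n)    # intrinsic scatter of ~0.2

    # Contaminate: ~10% of the high-redshift objects are really a different
    # population, systematically offset by +0.8 in the residual.
    wrongly_included = (z > 0.5) & (rng.uniform(size=n) < 0.1)
    resid_bad = resid + 0.8 * wrongly_included

    clean_slope = np.polyfit(z, resid, 1)[0]
    dirty_slope = np.polyfit(z, resid_bad, 1)[0]
    print(f"fitted slope, clean sample:        {clean_slope:+.3f}")
    print(f"fitted slope, contaminated sample: {dirty_slope:+.3f}")
    # The clean fit recovers a negative slope; the contaminated one typically
    # comes out positive, i.e. the sign of the inferred trend has flipped.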

But suppose for the moment that the Nobel Committee did not make one of its rare errors. Suppose the data does show an increasing rate of expansion. So what?

Well Greene finds it hard to explain. That is unsurprising. Most basic physics is replete with consequences. If we take Newton’s laws of motion or Maxwell’s laws of electrodynamics we can use them to explain countless natural phenomena or engineering marvels. But the kind of explanation that theorists dream up for an increasing rate of expansion typically explains nothing except the increasing rate of expansion.

Unsurprisingly, the usual explanation has a free parameter. By adjusting that free parameter you can get essentially any increase you want for the rate of expansion. So how do we explain the value of the free parameter that is needed to fit the data?

That would seem to be hopeless. Take the closely analogous case of ordinary gravitation. That has a free parameter – the gravitational constant G. How do we explain its apparent value of 6.67 × 10^-11 in SI units? Well, we don’t. Knowledge has just not advanced far enough. But that constant was discovered hundreds of years ago. The case of the expansion parameter is far worse. Even if the explanation is broadly along the right lines, it clearly needs a good deal of further development to give us any insight into where its value has come from.

But no! Greene can tell us! There are actually untold zillions of universes (together comprising the multiverse), each with a different value of that parameter. We happen to be in the one we are in, so that explains why we find the value we do!

I find it deeply mysterious why anyone should think this kind of nonsense explains anything. But if you want a more measured response, the problem is Ockham’s razor: explanations are supposed to be economical. It is hard to conceive of a less economical explanation than the multiverse. We are proposing the existence of zillions of universes about which we can know absolutely nothing in order to “explain” our own. It is also a disconcertingly “all-purpose” explanation. One can use it to explain absolutely anything, no matter how bizarre.

Why the applause? [20m21s on youtube]


Mochi (1)

[Photo: Shinichi Mochizuki]

In 1985 David Masser proposed the abc conjecture. You probably don’t want to know exactly what it is, and you certainly don’t need to for this article. But for those who are curious I give a few details at the end (under Technical Details).

The important point is that after twenty years it was coming to be considered as the most important unproved result in Number Theory (one of the major branches of maths). Unfortunately no one seemed to have the foggiest idea how to go about proving it.

That turned out to be a misleading impression, because Shinichi Mochizuki did have a clear idea and beavered away on it for years. In August 2012 he released four papers, totalling about 533 pages, which appeared to prove the result. Wonderful! The math world must have been abuzz with excitement and delight. Well, no.

The first complaint was that he didn’t actually publish them, he put them up on his university website. And, shock horror, they had not been peer reviewed. Then he declined to give seminars about them.

An article a year ago by Caroline Chen, a talented young graduate student at Columbia School of Journalism, explained:

[Image: Caroline Chen’s article]

The problem, as many mathematicians were discovering when they flocked to Mochizuki’s website, was that the proof was impossible to read. The first paper, entitled “Inter-universal Teichmuller Theory I: Construction of Hodge Theaters,” starts out by stating that the goal is “to establish an arithmetic version of Teichmuller theory for number fields equipped with an elliptic curve … by applying the theory of semi-graphs of anabelioids, Frobenioids, the etale theta function, and log-shells.”

This is not just gibberish to the average layman. It was gibberish to the math community as well.

Mochi just wasn’t playing the game as it is supposed to be played.

After working in isolation for more than a decade, Mochizuki had built up a structure of mathematical language that only he could understand. To even begin to parse the four papers posted in August 2012, one would have to read through hundreds, maybe even thousands, of pages of previous work, none of which had been vetted or peer-reviewed. It would take at least a year to read and understand everything.

The subtext: most academics at today’s universities are not of quite the quality one might naively expect, and do not like being shown up.

Not so long ago, there were not that many universities. Now they are everywhere. The growth has far exceeded the population growth. The strange idea took hold that everyone had a right to go to a university (after a gap year, of course) to get at least a first degree.

A university is, by definition, a place where research is carried out. The original idea was that they should also teach the next generation of researchers plus, at the undergraduate level, a modest number of the most able and thoughtful in society who would benefit from contact with the researchers. So the implication of a huge expansion in undergraduate numbers was a huge expansion in researchers.

What happened? Well, inevitably, quality suffered, some might even say plummeted. A little care is needed here. There are many types of research. The pejorative jargon is “stamp collecting”. In many areas, research goes through phases. In the stamp collecting phase people diligently collect data, then when the patterns are sufficiently clear, some genius comes along and makes sense of it all. The classic example is botany. After endless counting of petals, there was enough data for Darwin and a handful of others to put it together and come up with the Theory of Natural Selection.

A more recent example is DNA sequencing. A few geniuses figured out how DNA works – RNA transcription and all that – then the drones and their sequencing machines started publishing complete sequences for hundreds of organisms. Back to stamp collecting.

Needless to say, I oversimplify somewhat. But the underlying point is surely true. Walk into a university library, pick up a recent journal at random and start to read it. After 10 minutes it will become clear that the typical article is appallingly badly written. You will have not the foggiest idea what it is trying to say.

Of course, you will blame yourself. You are stupid. Or you lack the background to understand this arcane stuff. Wrong! That is not the main problem. If you had nothing else to do for a few hours, you could persist. It would become clear, typically after 2-5 hours, that the content of the article was, well, not a lot. In general, the longer a journal article takes to read, the less useful content it has. The uncharitable might wonder whether this might not be a chance correlation: if you don’t have much worth saying, there is something to be said for not making that too obvious.

Another reason this attracts relatively little comment is that it takes more arrogance and hard-won expertise, not to say a larger streak of the enfant terrible, than most people possess to condemn as useless any article that is not making much sense after 10 minutes.

Look at things from another angle. Why are journal articles peer reviewed? If you think about it, the idea is bizarre. Why on earth does anyone competent need some third party to tell them whether an article is good or bad? Surely they can judge that for themselves? Indeed, there is some interesting evidence on that from the early days of arxiv. Fed up with delays sometimes running into years before articles submitted to journals for publication actually appeared, a researcher at Los Alamos wrote some neat software which allowed academics to upload articles in maths and physics to his server. The software displayed and indexed them almost instantly on his website.

It took off in a big way. Academics got into the habit of simultaneously uploading to arxiv and submitting to a journal; they also uploaded a good deal of excellent material that never got into the journals. After a year or two, the journals realized what was happening and tried to stop it as a breach of their copyright. Of course, they were too late.

The journals provide a service rather like Bernie Ecclestone to Formula One. They do not actually do much, they just charge universities hundreds of millions of pounds and persuade everyone that they are playing an essential role. The academics write the articles, prepare the publication-ready typescripts, select the articles (as members of “editorial boards”), edit the articles, peer-review the articles, all without charge. The journals get away with it because (1) the academics do not personally pay them, their institutions do, and (2) they are hooked on the “kudos” of getting published in particular “prestigious” journals.

The interesting point is that when the articles are uploaded they have not (in the vast majority of cases) been peer reviewed. Occasionally, a completely wrong-headed article is uploaded. What typically happened was that someone else would read it, see it was wrong, and upload a short article explaining why, all within a few days of its appearance.

There was one wonderful occasion a few years back when this happened to an article that had actually been peer reviewed prior to uploading, but the reviewers had been asleep (or incompetent). The authors were furious that their peer-reviewed article had been savaged by a non-peer-reviewed article and made the mistake of uploading another article defending the indefensible. After a few weeks a team at a particularly prestigious university amused themselves uploading a lengthy and clearly definitive article explaining in painful detail exactly why the original article was complete bunk. I keep lapsing into the past tense because unfortunately arxiv has now been taken over by Cornell which seems to be gradually introducing a form of peer review. In any case, it should be clear by now that the only useful function of peer review is to protect the timid or incompetent author rather than the competent reader.

But how do the second-rate majority survive in today’s universities? Dealing with their day to day colleagues is not an issue. Sometimes entire university departments are hopeless; indeed some would say that was the new norm. But how do they deal with people in the surviving good universities?

The basic idea is extreme specialisation. Back in Leibniz’s day (1646-1716) a scientist reckoned to keep up to date with all the major developments, across the board. Even a hundred years ago, a mathematician would have had a reasonable familiarity with a large chunk of maths, the best with most of it – witness David Hilbert’s famous list of outstanding problems.

There is, incidentally, a particular difficulty about maths: it does not offer much scope for stamp collecting. Instead, there is the business of proving new theorems which are just trivial consequences of existing theorems and techniques. For this to work convincingly you need to wrap things up a bit.

Today we are in the era of the sub-sub. People are “expert” in a tiny sub-sub-field. If you are second-rate, then the safest strategy is to pick a sub-sub in which only a dozen or two people are working. But it is essential that they be spread across at least two universities on different continents. That way the European group (say) can have its work endorsed by the American group (say) and vice versa. The denizens will make sure that the sub-sub rapidly develops (needlessly) its own terminology. That way outsiders will be put off. The astute reader will spot that turgid and incomprehensible articles might also have a role to play.

The really smart guys will probably notice that your sub-sub is vacuous, but they are unlikely to care. Their entire energies will be focussed on discovering and understanding genuinely important maths. Indeed, a front runner for the definition of “genius” is someone who cannot stop themselves thinking all the time about the problems that have captured them.

What happens with the major outstanding problems in maths? Well, they typically require a genius to solve them. Geniuses are tolerated provided that they leave plenty of crumbs. What is completely unacceptable is for a genius to work away on his own and solve a problem, without publishing anything until he has done so. How can anyone else get any credit? He is supposed to publish partial results every quarter, or (worst case) every year or so. Then others can jump in and make use of his work to prove their own “theorems”.

[Photo: Andrew Wiles]

Andrew Wiles, who proved Fermat’s last theorem (itself an easy consequence of the abc conjecture), might seem like an exception, but he was forgiven because his proof turned out to be wrong and had to be rescued with the help of Richard Taylor. More important, he failed to prove the full Modularity Theorem, the key underlying result (conjectured by Yutaka Taniyama and Goro Shimura nearly forty years earlier, in 1956-7), so that was left for others.

Needless to say, there are a few wrinkles. For example, the “your first error is on page x” syndrome. Some notoriously difficult problems seem to attract people, often amateurs, who do not take the trouble to do their homework (understanding what the problem is really about and what existing tools and techniques have to offer). Such a person works away on their own and then, full of excitement, finds a “solution” and sends it to some famous mathematician. The usual approach used to be to give it to a graduate student as an exercise: he would find the first error, fill in the “x”, and send the reply off.

More seriously, squeezing errors out of lengthy proofs is a slow and painful business. Some mathematicians prefer not to bother and send out a 100 page proof as a preprint. If others find errors, then they will try to fix them. If not, then they submit it for publication. The notorious case is Louis de Branges. It took a while before anyone agreed to look seriously at his purported proof of the Bieberbach conjecture (which turned out to be correct) and almost everyone still refuses to look at his purported proof of the Riemann hypothesis (the most famous outstanding problem in maths).

[Photo: Gerd Faltings]

The snag about Mochi is that his research supervisor was Gerd Faltings, probably the greatest number theorist in history. Faltings would not have tolerated an idiot, so Mochi had to be good. Indeed, he had published a steady flow of well-thought-of papers. It was hard to dismiss him as a crank.

It emerged that the complaints were not just that the papers totalled over 500 pages, but that they referred to earlier papers he had written which totalled another 500 pages or so. But these days 1,000 pages is not outlandish. The papers on the classification of finite simple groups totalled more than double that. So why the fuss?

Now I am being a little unkind. The fault is not wholly on the side of the typical academic. Tenure has become much harder to get. When I was at Cambridge in the late 1960s, the faculty formed a view whilst you were an undergraduate as to whether they wanted you to stay on to do research. Provided your first few years after graduation confirmed this view, you would get tenure. You were then left alone, if necessary for decades, to come up with something good.

Just to put things in perspective, a good result for a lifetime in research is one, maybe two, really good papers. The occasional genius might manage half-a-dozen. It is unfortunate if you fail to publish any, but that is possible. Not all good people do, despite working hard for decades.

But the situation changed in the 1970s. Now you have to publish something every quarter in the early years and at least annually for decades. Otherwise you get kicked out. I can think of only one physicist in history who managed three good papers in a year, and no mathematicians (well I have not really thought carefully about the quality of some of the early mathematicians with prolific output, like Euler). The problem is not just that the requirement results in poor papers, but that it leaves no time for more thoughtful work.

Another key issue is what is needed to advance maths. Important new discoveries are clearly part of the answer, but it is notorious that the first proof of a major new result is typically a dog’s breakfast. A famous mathematician who could still be seen crossing Great Court to the baths when I arrived at Trinity in the late 60s expressed this backhandedly as “you judge a mathematician by the number of bad proofs he has produced”.

Just as important as the need for new discoveries is that to make progress you have to stand on the shoulders of giants. The snag is that you cannot do that until their gems have been reduced to a single undergraduate lecture, or, better, a ten minute chat. That requires distillation.

Unfortunately, universities do not currently give credit for distillation. To acquire tenure you have to get “theorem credits” (in the words of William Thurston in a famous article about discovering maths today). So a young mathematician usually puts all his energies into trying to get his name attached to a new “theorem”, however insignificant. That is not straightforward, because so much is published these days, that it is common for a new result to be overlooked, then rediscovered years later by someone else, who gets the theorem-credit.

Alas, I have almost reached 3000 words, well over my 1000 word target. It eventually emerged that Caroline Chen had somewhat oversimplified the situation, or maybe her article itself triggered further developments. But all that will have to wait for a later instalment!

Technical Details

Define the radical r(n) of a positive integer n to be the product of its distinct prime factors. Recall that a prime has no factors except itself and 1; the first few are 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37. So to get r(n) we drop the repeated (prime) factors. For example, r(225) = r(45) = r(15) = 15, because they all have no prime factors except 3 and 5.
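
For the curious, the radical is easy to compute by trial division; a minimal sketch (my own illustrative code):

    def radical(n: int) -> int:
        """Product of the distinct prime factors of n, by trial division."""
        result, p = 1, 2
        while p * p <= n:
            if n % p == 0:
                result *= p
                while n % p == 0:   # strip the repeated factors of p
                    n //= p
            p += 1
        return result * n if n > 1 else result   # anything left over is a single prime

    assert radical(225) == radical(45) == radical(15) == 15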

The abc conjecture is:

Given k > 0, there are only finitely many triples (a, b, c) of coprime positive integers such that a + b = c and c is greater than d^(1+k), where d = r(abc).

Put more simply, if sloppily, in almost all cases a + b is less than the radical of ab(a+b). For example:

12 is less than 66 = 2 × 3 × 11 = r(1 × 11 × (1 + 11)), but 6,436,343 is bigger than 15,042 = 2 × 3 × 23 × 109 = r(2 × (3^10 × 109) × 23^5) = r(2 × 6,436,341 × 6,436,343)
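
Both halves of that example are easy to check directly from the numbers quoted (a quick sanity check of my own):

    # First triple: a = 1, b = 11, c = 12, and r(1 * 11 * 12) = 2 * 3 * 11 = 66 > 12.
    assert 2 * 3 * 11 == 66 and 66 > 12

    # Second triple: a = 2, b = 3**10 * 109, c = 23**5 = 6,436,343,
    # while r(abc) = 2 * 3 * 109 * 23 = 15,042, far smaller than c.
    assert 2 + 3**10 * 109 == 23**5 == 6_436_343
    assert 2 * 3 * 109 * 23 == 15_042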

The extraordinary thing about the conjecture was the way that it showed a quite unexpected connection between the additive and multiplicative properties of the integers. We can define any integer as a sum of 1s, e.g. 7 = 1 + 1 + 1 + 1 + 1 + 1 + 1. The basic link with multiplication comes through the “distributive law”: a(b+c) = ab + ac for any integers a, b, c. But it turns out to be extremely hard to prove anything significant about this link. For example, although a good deal is known about the primes, their distribution appears almost “random” (of course, from the proper Bayesian viewpoint, it currently is, but that is another story). No-one has yet found an easy way of factorising large numbers into primes.

There were some sketchy theoretical reasons for thinking that the conjecture might be true, and the corresponding result for polynomials had been proved four years earlier.

Eventually some serious computer searching was started. The quality of a triple a, b, c with a + b = c was defined as log c / log r(abc). So the conjecture is that for any given k > 0 only finitely many triples have quality greater than 1 + k. A search up to c < 10^18 found 14.4 million triples with quality greater than 1, but only 160 with quality greater than 1.4. The highest-quality triples found to date (or at least until relatively recently) are:

2 + 3^10 × 109 = 23^5, with quality just under 1.63
11^2 + 3^2 × 5^6 × 7^3 = 2^21 × 23, with quality 1.626
19 × 1307 + 7 × 29^2 × 31^8 = 2^8 × 3^22 × 5^4, with quality 1.624
283 + 5^11 × 13^2 = 2^8 × 3^8 × 17^3, with quality 1.581
1 + 2 × 3^7 = 5^4 × 7, with quality 1.568
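
Those qualities can be reproduced, to within rounding, straight from the definition log c / log r(abc); a short sketch of my own (the radical helper is repeated from the earlier sketch so this runs on its own):

    from math import log

    def radical(n: int) -> int:
        """Product of the distinct prime factors of n (as in the earlier sketch)."""
        result, p = 1, 2
        while p * p <= n:
            if n % p == 0:
                result *= p
                while n % p == 0:
                    n //= p
            p += 1
        return result * n if n > 1 else result

    def quality(a: int, b: int) -> float:
        """log c / log r(abc) for the triple (a, b, a + b)."""
        c = a + b
        return log(c) / log(radical(a * b * c))

    # The five triples listed above, as (a, b) pairs.
    for a, b in [(2, 3**10 * 109),
                 (11**2, 3**2 * 5**6 * 7**3),
                 (19 * 1307, 7 * 29**2 * 31**8),
                 (283, 5**11 * 13**2),
                 (1, 2 * 3**7)]:
        print(f"quality {quality(a, b):.4f} for {a} + {b} = {a + b}")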

So there was some modest numerical evidence for the conjecture. Note that “almost all” is a typical mathsism – it has a rather technical meaning. 14.4 million exceptions might seem to rule out “almost all”, but it is only a tiny proportion of 10^18.


Trussell Trust

Paddy and Carol Henderson were activists employed by the UN and working with street children in Sofia. Carol’s mother, Betty Trussell, had left them some money and in 1997 they established the Trussell Trust, a registered charity (no. 1110522). In 2000 they decided to switch operations to the UK on hearing about a mother having difficulty getting enough money to feed her children. In 2005 they set up the first food bank in Salisbury.


After a couple of years they handed over the running to Chris Mould, the former CEO of Salisbury District Hospital. He proved successful at fundraising and organising:

[Chart: Trussell Trust growth figures]

They now have more than a dozen employees and around 30,000 volunteers (according to the Independent, 12 Dec 13). In a recent report (apparently a kind of extended web press release, organised as short headlines and charts rather than a more conventional text-based report; at least I cannot find a text report after 10 minutes of looking) Trussell claims to run over 400 food banks and to have given three days’ emergency food to nearly a million people in the UK during the year to 31 Mar 14, roughly half because of “benefit delays” or “benefit changes” and another 20% because of “low income”.

The obvious question is how real the need is. Are the clientele people who have no (legal) way of feeding themselves and their families without these foodbanks, or are they people who are incompetent at managing their (limited) resources, or are they people who simply prefer to conserve their scarce cash if someone is offering free food?

Unsurprisingly, there is plenty of anecdotal evidence for all three explanations and more besides, so the resulting debate tends to generate more heat than light. It is the kind of issue that somehow brings out the worst in the media, because it is so easy to get emotionally-charged interviews that support your own pre-conceptions – or those of your target audience.

Of course, this issue has occurred to Trussell, so we have on the “How a foodbank works” page of the website:

[Image: Trussell Trust “How a foodbank works” page]

The Department for Work and Pensions is not convinced, so we get the chairman, Chris Mould

… responding to accusations that his charity was “aggressively marketing”, Mould said: “You can’t get free food from the Trussell Trust by walking through the door and asking for it; you must have a voucher. More than 24,000 professionals – half of whom work in the public sector and health service, the police, and in social services – ask us to give this food to clients because they’ve made the decision that this individual or family is in dire straits and needs help. We’re not drumming up demand.”

The DWP last week claimed that food poverty has gone down under this government, pointing to a recent report by the Organisation for Economic Co-operation and Development. It found that the proportion of people in the UK who said they were finding it difficult to afford food had fallen from 9.8% in 2007 to 8.1% in 2012.

Clearly many GPs and other professionals do not relish a role as gatekeeper to benefits. There is ample anecdotal evidence that the standards they apply, for example, in providing sick-notes vary widely. Equally, is a volunteer food bank manager with Trussell, who sees someone without a voucher but in clear distress, really going to send them away without food? Does anyone really believe that Trussell’s clearly PR-savvy management would not back them up in that approach? The potential damage to the organisation from reports that they were turning away the genuinely hungry because some stupid box had not been ticked would be huge.

The careful reader will also have noticed that I gave three possible types of client, not just the genuinely needy and the scrounger, but also those who find it difficult to prioritise their limited funds. That has been a contentious issue all my lifetime. The old, paternalistic approach was to accept that this “incompetent with money” category is substantial and that it is better to give benefits in kind – the state pays the rent direct, gives food vouchers etc. The other approach was a mixture: partly that such an attitude is dehumanising and insulting and people must be trusted to make their own decisions about their own lives, sometimes coupled with a flat denial that the category exists; partly that it is important to help people to get better at such decision-making. Of course, you cannot sensibly both deny that the category exists and advocate helping people make better decisions. But there is no easy answer.

Unfortunately, even debating the issue now seems to be politically incorrect. Food banks came up at the last “Any Questions” (over the Easter weekend). None of the panel dared to address it. They all preferred to accept that all food bank clients are genuinely needy.

To be clear, I am not criticising the Trussell Trust’s work. Indeed I applaud it. There is obviously a genuine problem and the organisation is doing a good job in helping to alleviate it.
