The science of nutrition is coming of age, at least in some
quarters; and this in turn, says Colin Tudge, is part of a more general progression from an Age of Chemistry and industrialization to a true Age of Biology
Nutrition science is in the throes of what Thomas Kuhn in the 1960s called a “paradigm shift”; a metamorphosis; a cross-the-board change of perspective. In truth the shift embraces all of biology – and indeed all of science, for we are moving away (or should be!) from an age when scientists took it for granted that we will one day understand the material world completely, to a new realization that that is impossible both in practice and in theory. Science can probe the mysteries of life and the universe, and does so more and more effectively, but the more it probes the more it will always show that there is more to know.
The paradigm shift, as shifts must, has a negative and a positive aspect. On the negative front it pushes aside or at best subsumes ideas that are now taken for granted. On the positive side it opens up whole new vistas.
So it is that for almost four centuries – ever since René Descartes – orthodox biologists have tended to regard living bodies, or indeed entire creatures, as mechanisms, conceptually similar to the clockwork mannequins that were so popular in Descartes’s own day. Wind them up (feed them) and press the right levers (apply the right stimulus) and they will do whatever they are designed (or evolved) to do. Clockwork toys can be understood precisely and so, it has been assumed, can living systems – if only we do enough research; and when living systems are understood they can be controlled to the nth degree. All of life, it seemed, could and should be reduced to chemistry, and chemistry in turn to physics: the ultimate “reductionist” agenda. Reductionist biologists have been said to suffer from “physics envy”.
The science of genetics these past 100 years has tended to reinforce this mechanistic view of life: “DNA makes RNA makes protein” — and then the proteins do whatever needs doing. The genes, ensconced in the nucleus, have been seen as the high command, decidedly aloof, better at giving orders than at listening. From this it seemed to follow that whoever could control the genes would control the whole works (which is the prime conceit behind the vogue for genetic engineering). Overall it has been taken to be self-evident through most of that time that particular causes led invariably to particular effects – in line with Isaac Newton’s idea, 50 years or so after Descartes, that the universe as a whole can be likened to a clockwork model. (Both Descartes and Newton, incidentally, were deeply devout. They simply assumed that the “laws” of nature they sought to unravel were the thoughts of God – who, very clearly and properly, had a tidy mind (give or take a few loose ends). Later, science veered towards atheism or at least to agnosticism but the general idea continues: that if we do enough research we will soon understand the material world completely, whether or not it has been planned by God).
So what’s new? What form is the shift taking?
How we got to where we are: a lightning history of the past 60 years
Traditional nutrition science, as was seen to be right and proper, was rooted in chemistry – essentially benchtop chemistry – and in thermodynamics. At least when I first engaged with these matters, at school in the 1950s, it was generally understood that carbohydrates, fats, and proteins, the macronutrients, were broken down completely in the gut into their component parts – sugars, lipids, and amino acids – so the detailed nature of the original didn’t matter very much. We were also introduced to a shortlist of vitamins – complex molecules that the body could not make for itself, or not at least in sufficient amounts, and had to be supplied ready-made from a somewhat unlikely array of foodstuffs that included things we didn’t see much of, like spinach and liver, plus sunshine for vitamin D. We also needed to imbibe a fair slice of the periodic table – “minerals”. Food also contained what in those days was known as “roughage”, consisting almost entirely of plant cell walls. But roughage, more or less by definition, was seen to be “inert”, indigestible, serving primarily to scour the bowel, an alimentary chimney-sweep. Roughage was the stock in trade of school matrons but avant-garde doctors generally had very little time for it.
The bowel was also home to various bacteria who mostly did nothing very interesting – they were merely “commensals”, lodgers – though they sometimes included some definite rogues, like salmonella, which caused diseases, sometimes fatal. Body weight was a matter of thermodynamics. Energy can neither be created nor destroyed so if you ate more than you burned off the surplus had to be stored as fat; and if you ate less than you burned off you would grow slimmer. Obvious.
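That schoolroom thermodynamics is easy to put into numbers. The sketch below (in Python, purely illustrative) uses the common textbook approximation of roughly 7,700 kcal stored per kilogram of body fat – an assumption, not a law – and applies the naive surplus-equals-storage logic exactly as it was then taught:

```python
# Naive 1950s-style energy-balance model of body weight.
# Assumes the rough textbook figure of ~7700 kcal per kg of body fat,
# and ignores everything this article goes on to show matters:
# gut microbes, the kind of food eaten, the body's own metabolic shifts.

KCAL_PER_KG_FAT = 7700  # approximate, widely quoted value

def weight_change_kg(intake_kcal_per_day: float,
                     expenditure_kcal_per_day: float,
                     days: int) -> float:
    """Predicted weight change (kg) from a fixed daily surplus or deficit."""
    surplus_kcal = (intake_kcal_per_day - expenditure_kcal_per_day) * days
    return surplus_kcal / KCAL_PER_KG_FAT

# A 500 kcal/day surplus sustained for 30 days:
print(round(weight_change_kg(2500, 2000, 30), 2))  # prints 1.95
```

The rest of this article shows why real bodies do not behave so tidily: the same surplus, in different people or from different foods, does not produce the same result.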
That, very roughly, was it.
Things changed dramatically over the next few decades, though not in a straight line. Theories came and theories went. Opinions swung to and fro, with wave on wave of dietary advice. No corner of traditional nutrition science and the advice it gave rise to has gone untouched, and the swings to and fro are as vigorous as ever.
Protein. So it was that in the 1950s and 1960s when I was at school and university we were told to eat loads of protein – up to 15% of the diet, measured in calories. This advice apparently derived from observations before World War II on kwashiorkor in Africa: the nutritional disease of children that leaves them swollen-bellied (with oedema), and light-skinned, with reddish, frizzy hair. It seemed that lack of protein was the cause – and this was assumed to be a prime cause of malnutrition worldwide. The war interrupted research and this initial hypothesis became the accepted theory by default. Human beings, it seemed, need lots of protein and it had to be of the “first class” with a near-perfect balance of amino acids – so it had to be animal protein. Plant proteins were “second class”. We were told – not least on children’s television – to eat as much cheese and eggs as possible. Even bread was written off as “empty calories”. Fat per se was not mentioned.
The pre-war observations that led to the emphasis on protein were made very conscientiously by excellent physicians working in difficult conditions and with the best of intentions. Yet it seems they were wrong. Kwashiorkor, it seems, like marasmus – wasting – is caused primarily by lack of calories: a body starved of calories “burns” the protein in the muscles and so appears to be protein-deficient. But as is the way of the modern world, science and good intentions were overtaken by commerce. Agrochemistry (making use not least of the factories that had recently been producing explosives and agents of biological warfare) provided huge arable surpluses on which to raise poultry, pigs, and cattle on the grandest scale. Yet this, it seemed, wasn’t just commerce in pursuit of money. It was seen to be necessary. A service to humankind. Truly good and solid medical research had apparently shown that animal protein was vital, in large amounts, and only industrial farming, rooted in agrochemistry, could supply what was needed. Commerce and piety together sweep all before them (and to a large extent still do).
In the 1970s, though, it began to seem that the zeal for protein was misguided. Indian physicians in particular pointed out that many millions of Asians functioned very well indeed and could raise large families although most of them had very little meat or fish and many had none at all. Various albeit limited trials suggested that healthy non-pregnant adults could make do on remarkably little protein – nearer two per cent than 15. Clearly, too, some of the world’s cleverest people were vegetarian or vegan, including a great many Asians and westerners such as Leonardo and Tolstoy. The proteins from cereals or pulses were perfectly adequate for most purposes, especially when the two were eaten in combination (when the surplus lysine in beans makes up for the relative lack of it in cereals). Professor N W (“Bill”) Pirie from Rothamsted pointed out too that all the great cuisines use meat only as garnish or for occasional feasts – and they also use meat and bones to make stock. Stock is the key to risotto and noodle soup, and who could ask for more?
If diet followed scientific theory in any simple way then we might have expected that the vogue for meat would have faded with the vogue for protein. But there is much more to meat eating than that. It is a sign of wealth and hence of status. It is the original fast food (“slam in the lamb” said a recent TV ad, fronted by Geoffrey Palmer – and that indeed is all you have to do). Besides, meat tastes nice and it’s filling. In the 1970s and 1980s steak and salad was the standard executive lunch, the up-market way to lose weight. Yum yum.
Sugar and starch. In the 1960s, too, the first serious doubts were raised around sugar and carbohydrates in general, primarily by the London-based nutritionist John Yudkin, later set out in his book Pure, White and Deadly. The doubts continue, not least around fructose, which combined with glucose makes sucrose, which is what most people call “sugar”. Among other things, fructose is said to upset the production of insulin, whose job it is to regulate blood sugar levels, and this in turn may predispose to diabetes. The jury is still debating the effects of fructose on health, but at present fructose, in the form of high-fructose corn syrup, is widely used as a sweetener (it is very sweet). Few, though, have many good words to say for sugar in general – although sweet foods are high in energy and surely should be good in the short term for anyone who is truly short of calories, which a great many people are, worldwide.
Others extended Professor Yudkin’s antipathy to sugar to carbohydrates in general – including starch, the principal energy store of plants: macromolecules compounded from many molecules of glucose. People on low starch diets can still eat plenty of plants in the form of leaves and fruits but they miss out on what for most of humanity are the staples – cereals (the seeds of grasses, Poaceae) and pulses (the seeds of legumes, Fabaceae); and tubers such as potatoes and cassava. So a low starch diet by the standards of most of humanity is on the one hand very luxurious – generally high in meat, fish, and fruit – but also austere: no bread, no rice, no tortillas, no potatoes – foods that for most people through most of history have formed most of the diet. Some nutritionists and anthropologists have gone on to argue that the birth of arable farming not long after the last Ice Age was a huge nutritional mistake, even though it is widely seen to have given rise to modern civilization. Human beings simply are not equipped, say the doubters, to cope with all that cereal. The “paleo” (short for “Palaeolithic”) diet has enjoyed some vogue in recent years, based on the idea that our pre-agricultural, hunter-gathering ancestors must have got their calories from nuts, fruits, tubers, and the lean meat of wild animals. This idea appeals too to those, including many farmers, who are now reacting against the low-fat diets that were recommended in the 1970s and beyond and are now telling us that we should eat more meat and eggs and dairy (provided the relevant livestock have not been fed on cereal). More of this later.
Whatever the benefits, if any, of the paleo diet, the theory on which it is based seems highly speculative, not to say flimsy. I have written a lot about human evolution in my time and the widely accepted theory, backed by a great deal of fossil and geological evidence, is that our australopithecine ancestors broke away from the ancient apish line as the world grew cooler and drier in the late Miocene and early Pliocene, and they left the dwindling forests of southern and eastern Africa to make their living on the savannah. There presumably they ate whatever plants were growing and whatever animals they could catch – though animals are elusive, and fight back. Savannah, though, is basically grassland, and grasses were abundant, not to say dominant, so our savannah-dwelling ancestors must surely have eaten their seeds. Cereals too are the seeds of grasses, so why are we supposed to be so ill-adapted to them? Particular indispositions such as gluten intolerance are surely caused by modern high-gluten cereals and modern processing rather than by cereals per se. Besides, in his excellent book The Diet Myth, Tim Spector points out that even if we were ill-adapted to eating cereals 10,000 years ago (a very big “if”), we have had plenty of time to adapt. Favourable mutations are not common but they do happen, and westerners on the whole have, for example, adapted to lactose, the sugar in milk, even though our ancestors did not retain the enzymes to cope with it after they had passed infancy.
In short, common sense and evolutionary biology suggest that a high cereal diet should be perfectly good (although the special case of gluten intolerance seems to need far more research than it is getting). The idea that bread is the staff of life (and rice and oats) seems to have served most of the human species pretty well over the past 10,000 years. There is a lot wrong with modern diets but surplus of cereals per se (at least in their more pristine forms) does not seem high on the list.
Overall, though, the position on carbohydrates remains unresolved, not to say confused. Some advisers in evangelical vein condemn sugars and starches out of hand – yet for health food shops, variations on a theme of cereals (muesli, wholemeal bread, and mixed loaves of wheat, barley, and rye) are the stock-in-trade. There’s something wrong somewhere. (Quinoa and buckwheat are non-cereal grains. Quinoa is a relative of Good King Henry and Fat Hen, and buckwheat is related to Dock).
Fibre. “Roughage” too underwent a serious overhaul from the 1970s onwards, and re-emerged as “dietary fibre”. There were significant advances on three fronts. First, physicians such as Denis Burkitt and Hugh Trowell (both of whom I had the privilege of meeting) perceived that the people of Africa in particular on a traditional, plant-oriented, high-fibre diet suffered none of the “diseases of affluence” that were besetting the affluent world – obesity, heart disease, various cancers, diabetes, and indeed gallstones; and they proposed that fibre could be protective. Secondly, botanists and nutritionists alike pointed out that plant cell walls do not consist solely of cellulose, which does tend to be “inert”, but also include a host of “hemicelluloses” which incorporate some recondite sugars, and pectin (described as a “heteropolysaccharide”: its molecules are compounded of various sugars and other things as well). All of these could be broken down in the colon and then become active players. Clearly, too, not all plant fibres are chemically or physically the same. Thirdly, by then it was abundantly obvious that the colon is not merely a conduit, designed to return the remains of previous meals to the wider world, but is a vital player in the whole digestive and absorptive process. Among other things, bile salts, squirted into the gut from the liver to assist the digestion of fat, are resorbed by the colon, and re-used. But they are also altered chemically in the colon – and the way they are altered depends in part on the amount and on the particular nature of any fibre that’s present. Bile salts are stored in the gall bladder and some kinds of bile salt are more likely to cause stones than others, so here was a possible physiological link between dietary fibre and gallstones. So long as fibre was seen as “roughage”, a kind of Brillo pad, it was hard to see how it could influence the various diseases with which it was being associated. 
But once it was called “fibre” and recognized as an active player in the metabolism of the gut, the whole fibre story became, not proven, but eminently plausible; and plausibility is the start of serious interest.
Fats. Arch rival to John Yudkin, intellectually and, I understand, personally, was the Minnesota epidemiologist Ancel Keys. Keys, from the 1950s onwards, argued that the main cause of the world’s dietary ills was not carbohydrates, but fats.
The simple message from Ancel Keys was that too much fat is bad for people – and this is still widely accepted. In the meat-rich diets of the Western (northern) world, fat provides around 40% of total calories whereas in the low-meat diets of what Gandhi called the “Third World” it’s nearer 20%. In the traditional diets of Japan fat may account for only a tenth of total calories. In the mid-20th century it was Westerners who suffered the “diseases of affluence”. The focus in those days was on coronary heart disease, sometimes shortened to CHD: blockage of the coronary arteries which feed the heart muscle itself.
But the story became more and more complicated. For a start, fats serve two main functions. Some, the “essential” fats, are key components of body structure, not least cell membranes. Other fats serve as long-term stores of energy (and in animals such as seals and penguins they are vital insulators).
Fats of the kind that concern us here are of two main kinds. The ones we normally think of as “fat” are triglycerides: three long chains of fatty acids yoked to a backbone of glycerol. Cholesterol is also a lipid, but of a different kind: it is a sterol – a form of steroid – and in food and in the blood it often travels with a single fatty acid chain attached, as a cholesteryl ester.
Cholesterol often features in popular medical literature as the arch-villain – but again the picture is not simple. It is, after all, an essential component of cell membranes and a precursor in the synthesis of steroid hormones. But if there is too much cholesterol circulating in the blood then it tends to get deposited on the walls of arteries to form “plaques”; and plaques blocking the coronary arteries lead to CHD and so perhaps to heart attack. In the light of this doctors have often advised their patients to steer clear of high-cholesterol foods such as eggs, including those of hens and the roes of fishes, though in truth, we normally consume very little cholesterol. But, it seems, diets high in fat of the triglyceride kind, as in lamb chops and burgers, also raise blood cholesterol. So it is that from about the 1960s onwards, many doctors and scientists have advised us to eat less carbohydrate (and particularly less sugar) while others have told us to eat less fat. Since fat and carbohydrate between them provide us with at least 80% of our calories, and sometimes well over 90%, it’s clear that if we followed both streams of advice slavishly we would starve to death.
But as always, the fat–cholesterol story soon proved to be more complicated. To begin with, cholesterol molecules are carried in the blood attached to protein molecules; and the two together form “lipoproteins” (where “lipo” means fat, which in this instance is in the form of cholesterol). In some lipoprotein molecules the protein content is high, and since protein is denser than fat these are called high-density lipoproteins, or HDLs; in others the protein content is low, and these accordingly are low-density lipoproteins, or LDLs. It’s the LDLs that can build plaques. They are the bad guys. Cholesterol in the form of HDLs is en route to the liver – which means it is being removed from the blood. So both LDLs and HDLs contribute to “high cholesterol”; but while high LDLs can mean trouble, high HDLs are a sign that all is well. The cholesterol is being safely chaperoned out of harm’s way.
Triglycerides, too, are immensely various – they range from oils like those of fish, sunflower seeds, and olives to beef dripping and lard. The fats that are liquid at normal temperatures are generally of the kind known as “unsaturated”. Their long carbon chains contain far less hydrogen than they theoretically could. The hard fats tend to be highly saturated (with hydrogen). The two kinds in general seem to have different effects. In the last decades of the 20th century it became clear that while saturated fats tend to raise blood cholesterol and in particular to raise LDLs, unsaturated fats tend to lower LDLs, reducing the total level of blood cholesterol and increasing the ratio of HDLs to LDLs.
There are further complications. Thus, in general, the polyunsaturates that feature in food belong to two main “series”: the omega-3s (which include alpha-linolenic acid) and the omega-6s (which include linoleic acid). It is often argued that omega-3s and omega-6s should ideally be consumed in roughly equal amounts. Both are found in plant oils, while the long-chain omega-3s are largely associated with fish oil.
Then there are trans-fats. They are uncommon in nature but have been produced on the grand scale in industry by adding hydrogen to unsaturated fats to “harden” them; that is, to turn them from an oily to a solid form so they can be used in margarine and for baking. Adding hydrogen makes the formerly polyunsaturated fat more saturated – and it flips some of the remaining double bonds from their natural “cis” geometry into the “trans” configuration, which is what gives these fats their name. It now seems that such artificially hardened fats are more likely than most to lead to CHD. The hardening of plant oils to make a substitute for butter could be seen as yet another example of scientific hubris.
There are suggestions too that the ill effects of eating meat may not be due primarily to the fats. The culprit may be carnitine – not a protein but an amino-acid derivative, abundant in red meat – which is associated with fat metabolism and, it seems, with the onset of atherosclerosis (fatty plaques on the artery walls of the kind that lead to CHD). There isn’t enough evidence to allow serious comment. Suffice to say only that the carnitine story is coming up on the rails – and who can tell what else might turn up in the years and decades to come?
Complicateder and complicateder, as Alice might have said. But then, why should life be simple? If it was, it wouldn’t work. To borrow a thought from Immanuel Kant and to misquote Samuel Johnson, the wonder is not that life is hard to understand but that we should understand it at all.
Anyway, as Tim Spector records in The Diet Myth, Ancel Keys’s fat story won out over John Yudkin’s sugar story. Thus in 1976 Britain’s Royal College of Physicians (RCP) advised westerners to reduce their total fat intake from around 40% of total calories to nearer 30%, although they said that 20% was probably better still. They also advised us all to switch from saturated fats to unsaturated fats. Saturated – hard – fats are mainly produced by animals, so we were advised in general to eat less meat, and especially red meat, with special reference to beef. Instead we should eat more unsaturated fats – the polyunsaturated kind, which come mainly from fish and from seeds such as sunflower, and the monounsaturated fats, which mainly means olive oil.
I read the RCP report in great detail when it first came out and can attest that it is excellent. Its advice is clear but also cautious. The evidence is less than perfect, said the physicians (more on this later!) but on what they called the “balance of probabilities” it seemed that if we ate less fat in general, and switched as far as possible from saturated to unsaturated, this could and should significantly reduce the risk of CHD – which, in the post-war decades, from the 1950s to well into the 1980s, was of epidemic proportions.
Although the evidence for all this was clearly less than perfect, what there was seemed very convincing. For example – just to take one anecdote – the people of Naples in those days apparently suffered far fewer heart attacks than the people of Bologna. But then, the Neapolitans are southerners, who traditionally ate a “Mediterranean diet”: low in fat (but not ultra-low), with a high proportion of fish and olive oil. The Bolognese were rich northerners with a far higher intake of meat. We should all, it seemed, eat more like Mediterraneans.
That is how the fat story stood for about half a century and a great many people including me feel there is still a great deal in it. Tim Spector in The Diet Myth says he favours the Mediterranean diet (and for what it’s worth, so do I). The logic behind it and the many threads of data add up to a powerful story. But the game is not over (and never can be). There are several further twists.
One of these further twists seems to be letting beef off the hook big time – provided the beef is pasture-fed. Thus it seems that the fat of cattle fed on grass contains far more unsaturated fat than that of cattle fed on concentrates. Ancel Keys based his recommendations largely on epidemiological studies of seven nations, some of which (like the Americans) ate large amounts of red meat and some of which, like the Italians, ate far less. But now it seems that some people who eat a lot of red meat, like some Cretans, do not have particularly high LDL levels and suffer very little CHD, and evidence from elsewhere suggests that it may be the diet of the cattle that makes the difference. In his latest book Grass Fed Nation Graham Harvey points out too that some of the British in times past ate a lot of beef and revelled in butter, yet did not suffer heart attacks. Again the evidence is less than perfect but there surely is enough for governments who are concerned about more than GDP at least to take a serious interest – to encourage pasture-feeding and to assess the consequences as far as this is possible.
Cryptonutrients. Food isn’t just nutrients. It also includes agents that behave more like pharmaceuticals or tonics, helping the works to run more smoothly. These include the vitamins, known to be essential; deficiency leads to disease and death. But in the 1990s it became clear that there could be and probably is a whole host of such materials that, if not necessarily vital, are at least beneficial. One such class are the plant sterols, which were found to lower blood cholesterol. The pharmaceutical industry seized on this and called such agents “nutraceuticals”. The food industry called them “functional foods”.
My own contribution to this discussion, in an essay published by the Caroline Walker Trust, was to class them all as “cryptonutrients” and to suggest an evolutionary reason why there should be such things. To begin at the beginning: human evolution did not begin when our apish ancestors took to the plains. It began long before our ancestors were human at all – with the origin of life on Earth, 3.8 billion or so years ago. After all, we still share some biochemical pathways and the corresponding genes with our fellow mammals and indeed with vertebrates in general. Evolutionary biologists commonly point out that we have inherited many of the features that enabled our australopithecine and early hominin ancestors to survive the Pliocene and Pleistocene but we have also inherited at least some of the adaptations that enabled our fishy ancestors to survive in Palaeozoic times. Indeed, we still retain a fair slice of the genes and pathways that are common to all animals and even to microbes.
Through all those millions and billions of years our ancestors must have been assailed with many thousands of different chemical entities, some of them produced by plants and other organisms for the express purpose of keeping predators at bay – in other words, toxins. Other such agents arose simply as by-products. Notably, oxygen gas appeared in the atmosphere as a by-product of photosynthesis. Our ancestors had to adapt to this relentless chemical assault. Sometimes, presumably, they just avoided trouble: didn’t eat things that they “knew” to be harmful. Often they evolved mechanisms of detoxification. Often, too, they went one step further and evolved the means to make positive use of the extraneous chemical agents that hitherto had harmed them. Oxygen is a case in point. For organisms that cannot cope with it, it is the mother of all toxins precisely because it is so chemically active. For creatures like us, which can cope with it, it has become vital – its chemical potency turned to advantage as the supplier of instant energy. Even so, our bodies are packed with “anti-oxidants”, including some of the vitamins, to deal with rogue oxygen that escapes the normal coping mechanisms.
The cryptonutrients, I suggest, arose as agents in the environment, generally produced by other species, that our ancestors were obliged to deal with and which, as their evolution progressed, they turned to advantage. The vitamins can then be seen as cryptonutrients on which we have come to rely more or less absolutely, so they are indeed vital. The rest, possibly many thousands of them, may well be good for us, but the effects are generally too small to notice or to quantify. A regular dose of cryptonutrients – a plant sterol, for example – may well add two years to a life that is already long. But it would be very difficult in practice to demonstrate such an effect. Such an effect would not show up for example in a six-week trial on laboratory rats.
The study of cryptonutrients also illustrates a more general principle that has often been overlooked: that the body is very discerning. It is alert to fine chemical distinctions, and we cannot assume that it will treat two molecules in the same way just because we, with our present knowledge, perceive that they are similar. It also begins to seem likely that any nutrient might have pharmacological effects in addition to its obvious nutritional value. Thus we cannot assume that the body will treat all similar-looking sugars or starches or fibres or fats or anything else in the same way. The details could well matter. To assume, as is often assumed, that some ersatz can be substituted for the real thing because it is perceived to be “chemically equivalent” is too cavalier by half. The idea that we should in general eat natural foods (as little processed as possible) and whole foods (nothing taken out) begins to seem highly plausible, not to say obvious.
Cryptonutrients also lead us very nicely into one of the greatest insights of all in the history of nutritional science: the absolute importance of microbes.
Microbes. I was brought up, as was usual in those days, to be afraid of “germs”. Polio, a virus, was the great fear. Measles, another virus, was seen largely as a routine childhood ailment, although it was a prime cause of death in parts of the tropics and the main cause of non-congenital blindness and deafness in the west. There were plenty of nasty bacteria too, including the TB bacillus, which mainly affected the poor and malnourished, and our mothers still lived in fear of whooping cough and diphtheria. Hygiene was stressed. Food poisoning could kill too. Penicillin was produced on the mass scale only in World War II, and in the years immediately afterwards it was still very expensive. All bacteria and indeed all microbes tended to be classed as germs and were commonly seen to be seriously bad news. We had to put up with the ones that were known to thrive in the gut, but mainly, it seemed, because there wasn’t much we could do about them.
This kind of mentality lives on. Young mums on daytime television are still being urged to spray their kitchens and their children’s high-chairs with disinfectant. The message still goes out that our and our children’s surroundings should be as sterile as possible; one big intensive care unit. There was even a vogue in the 1960s for germ-free piggeries.
Even in my schooldays, though, the story wasn’t entirely negative. Industrial microbiology has flourished since the 19th century (Louis Pasteur was a seminal figure) and its triumphs include the control of fermentations of all kinds, from bread to cheese to wine and beer, and the large-scale production of antibiotics. Before World War II Albert Howard introduced the world to the modern age of composting, which of course is primarily an exercise in fungal and microbial decay. The essential role of mycorrhizas in extending the range of roots had been known since the 19th century, as had the role of the Rhizobium bacteria in legumes in fixing nitrogen. Bernard Dixon wrote an excellent summary of all these positives in Invisible Allies in 1976, and in the same year John Postgate did the same for nitrogen fixation (both books were later updated). All who wrote in this vein stressed that most microbes, including most bacteria, are our friends and indeed, as Bernard Dixon emphasized, that the world would rapidly grind to a halt without them.
All such observations helped to lay the ground for what is perhaps the most significant nutritional insight of the past few decades: that the microbes in our guts are not spivs, out for a free lunch and ever ready to put the boot in. They are our partners – mediators between us and the world at large – and, on balance at least, our benefactors. To an extent that is still largely unquantified but certainly very large, the gut microbes determine how we respond to any particular kind of food: whether it makes us fat or doesn’t; whether it makes us ill or does us good. The differences between individuals in their responses to particular foods, or indeed to entire diets, can at least in part (probably in large part) be ascribed to differences in their gut microbiota; and those differences in turn probably reflect differences in previous diet – or indeed may reflect childhood experiences, not least around the time of birth, and even the microbial status of our mothers. Antibiotics and food additives, or novel foods of the kind cooked up in the laboratory, can alter the gut flora dramatically. Again, the general idea from health-food buffs that natural food is generally good and artificial food is likely to be bad begins to seem highly plausible, although it is still mocked in some circles of commercial science.
Two anecdotes from Tim Spector’s The Diet Myth illustrate the extraordinary influence of microbes. The first involves identical twin sisters, one prone to put on weight and forever dieting, while the other was forever slim. They were genetically identical, so why the difference? The answer, apparently, was that their gut floras were very different – and the reason, it seems, is that they were born by Caesarean section and thus had no opportunity to pick up bacteria from the birth canal and general nether regions of their mother; instead they were handled by different nurses from the word go, and acquired their particular microbiota from them. The microbiota of the two sisters remained very different into adulthood. Spector also tells of an American doctor who, to get around this problem, wiped a swab around the relevant areas of his wife when she gave birth by Caesarean section, and then applied the swab to the face and mouth of the newly emerged baby. So far all is well, and some Scandinavian hospitals have unofficially adopted the swab technique as standard practice in Caesarean birth. It’s becoming fairly common practice too to enrich impoverished gut microbiota with faecal implants from other people. We have come a long way since my schooldays, when bacteria were just “germs”.
The parallels with organic husbandry are obvious. Both now recognise the supreme importance of microbes. Above all, it seems, it is essential to maintain their diversity. At least as a rough rule of thumb it seems that in the gut, as in a tropical forest or on a farm run on agroecological lines, the greater the diversity the better. Diversity can be encouraged with “prebiotics”, which typically means mixtures (as mixed as possible) of biochemically various plants – herbs, nuts, recondite roots. Impoverished gut floras are enriched by adding new species, for example by eating active yoghurt or real cheese, or fermented plants including miso and sauerkraut (and of course with faecal implants). The whole procedure is hit and miss but it surely must be taken seriously. In fact, says Professor Spector, “You won’t go wrong if you just treat your own microbes like you would treat your own garden. Give them plenty of fertilizer – prebiotics, fibre, and nutrients. Plant new seeds regularly in the shape of probiotics [foods that already contain useful bacteria]. Give the soil an occasional rest by fasting. Experiment, but avoid poisoning your microbiotic garden with preservatives, antiseptic mouthwashes, antibiotics, junk food and sugar.”
Finally, the growing appreciation of gut microbes and microbes in general adds weight to the cryptonutrient idea. It’s a fair bet that most of the peculiar chemical agents to which our human and pre-human ancestors were obliged to adapt were and are microbial in origin. Oxygen certainly was. Cyanobacteria (known in earlier times as “blue-green algae”) invented photosynthesis. Plants acquired the skill by turning cyanobacteria into chloroplasts (or so all the evidence suggests).
Clearly, there has been much confusion in nutritional science this past half century, and there still is. As Spector says, while some nutritionists advise us to nibble more or less continuously (“grazing”), others recommend a few big meals. Over the past 30 years, too, “almost every component of our diet has been picked on as the villain by some expert or other” and yet “our diets continue to deteriorate”. The focus is largely on losing weight, yet the average western waistline continues to expand by about an inch per decade, and there’s evidence that repeated dieting may make people fatter. Two generalizations that do seem to make sense are, first, that most diets are monotonous – and a surfeit of grapefruit or cabbage soup (one of Spector’s examples) suppresses the appetite wonderfully. Secondly, the many and various dietary adventures affect the microbiota in ways that are not understood, with consequences that are equally mysterious – but which, in the light of what is known, seem eminently plausible.
But why is the picture so confused? What’s the problem?
Why has there been such confusion?
There are various problems of various kinds. First, as the RCP stressed in their report in 1976, it is simply very difficult to work out what effect different diets have on health. On the face of things it should have been fairly easy to pin down the relationship between consumption of fat and heart disease, since fat intake can be measured or at least estimated, and death by heart attack seems easy enough to diagnose. Yet the jury is still out after half a century of endeavour. One problem is that fat is only one component of diet, and diet is only one component of lifestyle. Other known or apparent risk factors – stress, exercise, smoking – also need to be assessed before the effect of any one of them can be singled out. The evidence that led Ancel Keys and many other scientists and physicians to link fat – and especially saturated fat – to heart disease was abundant and various – and yet, as the RCP made very clear, all of it was flawed.
It is actually far more difficult than it seems to find out what particular people really eat, as opposed to what they say they eat; and the damage to the coronary arteries that leads to CHD may take years or decades to build up, which makes long-term studies harder still. So Keys relied mainly on epidemiology – comparison of whole populations – but such studies can only be broad-brush and they are not controlled. There were data too from vegetarian groups and from patients on various kinds of diets – but such groups are small and in various ways not typical of the whole population. Animal studies can be more tightly controlled, but animals, of course, are the wrong species. Truly to pin down the effects of fat, even to the rigorous yet still imperfect standards that medicine normally demands, would require comparative trials of large, matched, but various populations on tightly controlled diets and with circumscribed lifestyles, over decades. For obvious practical reasons such an experiment cannot be done. So we are left with clues: some epidemiology, some patient studies, some laboratory research, but nothing resembling a definitive trial (if, indeed, any trial can really be “definitive”). All the researchers can legitimately say is what, in effect, the RCP said: “We are doing our best, and this, as we see things, is the ‘balance of probabilities’.” They could have added: “But please don’t shoot the messenger!”
There are other sources of confusion, too. For one thing, it is impossible even in theory to solve problems unless we know the nature of the problem. What should we be looking at, and for? Some of the background data that now seems essential is only just coming to light. Thus the RCP report of the 1970s acknowledged the different effects of saturated and unsaturated fats but, although there were hints from studies of wild animals, it was not properly recognized at that time that the saturated/unsaturated status of cattle, say, depends on what the animals have been fed on. Fat from grass-fed beasts may be very different from the fat of cereal-fed beasts, and may well be far less damaging to the coronary arteries. Some, like Graham Harvey in Grass Fed Nation (2016), now claim that the fat of grass-fed beef is positively beneficial. Very little was known in the 1970s of the influence of microbes. Yet investigators do not and cannot take such variables into account unless they have good reason to do so. The trouble, as with all empirical science, is that we need to know a great deal before we can know what it is we should be trying to find out. This is one very good reason why science cannot proceed in a straight line; why every now and again a whistle blows and we seem to have to start all over again, albeit (with luck) from a higher baseline.
To borrow a thought from Donald Rumsfeld, too, though he is not generally known as a philosopher, it’s impossible for us to know what we don’t know. But physicians and medical scientists are always under pressure to solve the health problems of humankind and to recommend courses of action. In practice they just have to make the best of the evidence available and do the research that seems most pertinent, though at any one time it may be impossible to find out the things they really need to know, and in the absence of perfect knowledge they may be looking in the wrong places. All this is just the way things are – the nature of research. In addition, of course, there is pressure from commerce, growing ever more pressing in this neoliberal age, as governments strive on ideological grounds to “save” public money by leaving research in the hands of corporates, who must forever compete to come up with new solutions, however half-baked the underlying theory. Margarines made from sunflower oil became very big business in the wake of Ancel Keys’s fat story. Tim Spector reports that there are 30,000 different books on diet out there, all with their own pet theories and algorithms (including my own Future Cook, published in 1990, though that just says what I am saying here: eat what grows naturally and re-learn how to cook).
It’s not at all surprising, then, taken all in all, that over the past 50-odd years there has been much chasing of hares. Yet knowledge progresses nonetheless. We may at least hope that the present health claims for grass-fed beef will turn out to be justified, for if the story is true it should make us healthier and should also promote interest in grass itself, and particularly in natural, herb-rich pasture, which surely would be good for the whole biosphere. But as the past half century has shown, it’s dangerous to turn new bodies of data, however alluring, into fads.
The greater hope is that nutritional science, and indeed all biology, is at last coming of age. The paradigm shift is happening, and it could lead to a far more sensible approach to health care and farming, and change our attitude towards the whole biosphere, for the better and forever.
The paradigm shift
With one exception, the changes of mind outlined above do not of themselves amount to a paradigm shift. All but one could have taken place, and for the most part did take place, within the traditional, 17th-century Cartesian–Newtonian–reductionist idea which says that living bodies are mechanisms and nutrition can and should be reduced to chemistry and thermodynamics, and that particular causes lead inexorably to particular effects. All these ideas are now up for grabs, not to say passé, not just in nutritional science but in all biology and, I suggest, in all of science – and that really is a new world view.
To begin with, it has become obvious in recent years, as it was not in my schooldays, that food is not just fuel and the gut is not just the boiler-room, requiring no great intellectual effort to understand. Clearly, for a start, the body is very discriminating. It does not, for example, necessarily treat two sugars alike just because they have the same general formula and provide the same amount of energy when oxidized. The individual quirks of each kind of molecule clearly matter, or certainly may do. It’s clear, too, just to complicate things, that the gut is not as efficient as we were told, and as professionals in olden days tended to assume. Proteins, for instance, are not broken down thoroughly to their constituent amino acids; the gut is leakier than was generally supposed, and short lengths of protein known as peptides often get through the gut wall – sometimes entire proteins do. Among other things, this makes the whole food-allergy story eminently plausible, at least in principle. More generally, as illustrated by the vogue for “nutraceuticals” (cryptonutrients), and as of course is implied by the science of vitamins, the boundaries between nutrition and pharmacology are far more blurred than was once supposed. When does food become tonic, and tonic become medicine?
With all this going on it comes as no surprise that nutritional science is bound to involve all the body’s systems, working as physiology always does, holistically, with everything interacting with everything else. Truly to get to grips with what’s going on and what really matters, nutritional science must also take account of the immune system, the endocrine system (at all levels, from the cell to the whole organism) and the brain. For example, how we react to food depends very much, among other things, on our memories of past experiences – and our attitude to particular foods affects how those foods affect us.
The simple hierarchical system of control that my generation learnt at university, with the genes as unresponsive despots telling the rest what to do, has also had its day. Clearly the genes are very responsive to the cell around them, which in turn is tuned in to the whole organism, which in turn must respond to the world at large. The past decade or so in particular has seen the rise of epigenetics – ad hoc mechanisms that turn particular genes on and off and thus change the behaviour of the genome, sometimes in the short term, sometimes through a whole lifetime, and sometimes over several generations. Thus it’s clear now that the way we respond to food, and whether, for example, we become fat or stay slim, depends in part on our own diet in infancy and in part on our mother’s diet when we were in utero; and the way a mother doles out nutrients to her fetus depends in part on her own experiences in infancy, and in utero.
As Nessa Carey describes in The Epigenetics Revolution (Icon Books 2011), all this is illustrated wonderfully by the Dutch Hunger Winter from November 1944 to late spring 1945 when the war prevented food imports into Holland. Many starved and many of the survivors were permanently affected – like Audrey Hepburn, sixteen at the time, who remained thin (or enviably slim) though rarely in good health through all of her somewhat foreshortened life. Others were in utero at the time, and for some weeks or months were deprived of nutrient as their mothers went hungry. If women were already pregnant by the time the hunger began, so that their fetuses were well fed in the first months and then deprived, their babies were likely to be born small – and they tended to stay small throughout their lives, and were not so likely as the rest of the population to grow fat. The fetuses of women who conceived towards the end of the famine were deprived in the first months of their gestation but then were well fed – and then, generally, they caught up, and at birth were of normal weight. They, though, were more likely than usual to be obese in later life and also suffered other problems of health. Truly intriguing, however, is that some of these effects lasted into the following generation. That is, the grandchildren of the women who were starved in pregnancy also tended to be small, or else to be more than usually prone to obesity and ill-health. The inference (for which there is much independent evidence) is that epigenetics was and is the cause. Malnourishment in the womb did not cause the genes of the fetus to mutate, but it did impose epigenetic controllers onto some of the genes which altered their behaviour – and the effects continued after birth and into the next generation.
That is just one example of the power of epigenetics. Nessa Carey is right to speak of the “epigenetic revolution”. Presumably, the gut microbiota is involved too. The gut flora of women starved in late pregnancy must surely have been affected. Perhaps the microbes themselves cause at least some of the epigenetic effects. We might assume, too, that the interaction between the genes and the microbes is two-way. Microbes and their hosts are in constant dialogue, each responsive to the other. All in all, the narrative of nutrition grows richer and richer.
Put all this together and the Cartesian–Newtonian model of the body-as-machine, a souped-up mannequin, begins to seem most inappropriate. Wind up the clock, stoke up the boilers, press the right levers, and a machine will behave exactly as it is designed to behave. But the bodies of living creatures are not machines. They are intelligent beings. Even those that don’t have brains behave as if they were intelligent. Even trees have their own agenda, and how they respond to any one stimulus is conditional: it depends on what else is going on – whether it’s spring or autumn, for example; time to grow or time to shut up shop. For animals, each individual molecule imbibed is a potential stimulus and each, at least in principle, is treated on its merits, depending on the circumstances and on the individual’s own history. Of course there are general patterns and there is some consistency, for if there were not then the science of biology as a whole would be impossible. But we can never hope to predict precisely the effects of any one dietary component on any one individual or on the population as a whole. Most of all, we can never hope to control the bodies of human beings or other animals to the nth degree, as sometimes seems to be the ambition and the promise. Of course, physicians and farmers who deal with real creatures have always known this. Science can take us so far, but for the rest we need experience and intuition.
All this is part of the paradigm shift but the greatest change of all, in nutrition as in farming (did the powers-that-be but realize it), is the rise and rise of microbiology. It’s clear now as it has been to some biologists for a very long time that microbes are not just spivs and rivals. They, more than any living creatures, make the world the way it is – the soil, the air, the state of the sea, our own nutritional status. They are the great intermediaries, between us and the universe at large. Instead of zapping them willy-nilly, spraying kitchens with disinfectant and trying in general to live in an intensive care unit, we need to open an intelligent dialogue with them. Eve Balfour said when she founded the Soil Association in 1946, “Look after the soil and the plants will look after themselves”; and it is clear now that care for the soil in very large part means care for its microbiota. Now we can see very clearly that the same principle applies to the gut. If we want to be as healthy as we can be, we must nurture and cultivate or at least not unduly molest our on-board microbes.
More broadly – and this is what really justifies the claim of “paradigm shift” – the 17th-century model which saw and still sees bodies as machines that can be exhaustively understood and ultimately controlled should be seen to be well and truly dead. Nutrition is being re-conceived as an exercise in ecology – specifically, the ecology of microbes and their dialogue with us. Ecology has been treated as an inferior pursuit that should not be ranked, say, with biochemistry or molecular biology. In truth it is the most subtle and intricate of sciences, obliged to juggle endless variables at any one time. Cause and effect cease to be simple, as in the Newtonian clockwork universe, and become “non-linear”. (All that, and ecology is rooted in natural history: the endless delights of observing wild creatures. What could be better?)
Even more broadly, we might invoke an idea from the French playwright and philosopher Gabriel Marcel. He differentiated between puzzles, problems, and mysteries. Puzzles, like jigsaws and Rubik’s Cubes, contain all the information needed to solve them. You just have to re-arrange the bits. Problems do not contain all the information needed: you have to bring new information to bear from the world at large, and supply your own logic – and this is the nature of research. Problems, though, like puzzles, are soluble.
Mysteries, on the other hand, can never be solved. We can probe and go on probing, as scientists do, but the more we find out, the more we see that there is more to know; and every now and again we realize that what we thought we knew before is wrong, or is at least inadequate, and needs to be subsumed within some wider scenario. Right now, I suggest, at least the more avant-garde scientists are realizing that life and the universe as a whole are not problems that can be solved once and for all. They are a mystery. That is a true paradigm shift. Specifically, we are moving from the Age of Chemistry to an Age of Biology and particularly of ecology, which is, or should be, altogether more intricate and humble. If only the people in charge, the powers-that-be, realized that.
So what are we going to do with all these insights?
The absolute importance of Folk Knowledge
Truly modern nutritional advice contrasts wonderfully with that of 50 years ago. Its content has changed, of course, radically – sometimes in several diametrically opposite directions at once, as outlined above. More to the point – at least among the truly modern – the tone has changed. Doctors and scientists who are really on the ball are not dogmatic. They know that they are dealing in uncertainties, and always will be. At best they talk of probabilities. The Royal College of Physicians, in their report on fat and heart disease in the 1970s, anticipated the trend. But then, physicians have always had to deal with uncertainty.
Most striking is that truly modern nutritional advice, as opposed to the magic-bullet, snake-oil kind that has become the norm, veers closer and closer to folk wisdom. Thus we were told in the old days –
Eat what grows naturally
– which accords perfectly with the modern insights into gut microbes and cryptonutrients: the evolutionary idea that our gut flora and our general physiology have largely become adapted to the myriad products of nature over the millions and billions of years through which our human and pre-human ancestors have been exposed to them, and are not adapted to laboratory novelties. The fact that it has sometimes been shown that those novelties do not cause cancer or general collywobbles in laboratory rats tells us little about their total effects on human beings throughout their lives.
In wondrous contrast to the general germ warning, our grannies were also wont to tell us to –
Eat a peck of dirt before you die.
I confess I have met only one other person – an octogenarian – who remembers this adage, but it was certainly current in my day. It can of course be taken too far (hygiene still matters; E. coli and Listeria are always ready to pounce). But it accords very well with the advice to nourish and cultivate the gut flora (and indeed to keep the immune system on its toes).
We were also assured that –
A little of what you fancy does you good.
Again this seems perfectly in line with most modern theory, including the advice to eat a diet that’s as varied as possible. These days too zoologists and vets are very impressed by the way that animals may go to great lengths to seek out particular herbs and minerals that they know, by whatever means, will make them feel better. Thus enlightened zoo-keepers and farmers provide their charges with patches of herbs which, demonstrably, the animals seek out when they are feeling poorly (as revealed, for example, by loose stools and general mopiness). Land animals of all kinds know when they are short of salt and may walk many miles to the nearest lick, as elephants and Arabian oryx do; and macaws stoke up on kaolin to sequestrate the toxins that perfuse their natural diets; and so on.
The stress, though, seems to be on “a little” because, as Ralph Waldo Emerson advised, and as I remember being advised, we should adopt –
Moderation in all things
– which seems pretty sound in most contexts.
Overall, too, as I have now outlined in several books, modern nutritional advice can be boiled down to nine words:
Plenty of plants, not much meat, and maximum variety
– and here we encounter a couple of huge serendipities, suggesting that God really is on our side.
First, agroecological farming focuses primarily on arable and horticulture, both as various as possible, with livestock fitted in as and when. So indeed it provides plenty of plants, not much meat, and maximum variety. Secondly, as N W Pirie pointed out, all the great cuisines use meat sparingly but to maximum effect, and we may also observe that all the great cuisines make maximum use of all the herbs, nuts, wild fruits, and spices that grow locally (and they sometimes import spices from far and wide, which it is perfectly reasonable to do). In other words, plenty of plants, not much meat, and maximum variety also describes the basic structure of all the world’s greatest cuisines.
In other words:
There is perfect correspondence between agroecology, sound nutrition, and the world’s greatest cuisines.
In other words, if we want to be healthy, and really want to keep the world as a whole in good heart, we just have to take food seriously; value whatever grows, and has been grown with tender loving care. In other words –
The future belongs to the gourmet.
This again is in the sharpest contrast to what we have been told from on high: that if we want to survive in large numbers then we have to tighten our belts and/or live on ersatz foods, like the various kinds of “textured vegetable protein” – textured, that is, to resemble the fibres of meat. Such life-savers can, of course, be produced only by courtesy of high-tech food companies; and so we are invited once more to give thanks to the corporates for our salvation, and our governments obligingly support them with our money.
In fact, all we really have to do, if we don’t ourselves want to become farmers, is –
Re-learn the arts and crafts of cooking.
It would of course be good to emulate the world’s great cooks but that doesn’t necessarily mean the most famous. In the history of the world many millions of people were and still are great cooks, albeit working in tiny kitchens (or indeed with a pile of brass pots, as I have seen on the streets of Mumbai, though that is clearly far from ideal). Raymond Blanc is among the world-renowned chefs who emphasize the absolute importance of traditional cooking, which largely means peasant cooking. He learnt to cook at home, he says. So, surely, did many more.
More and more, in all aspects of food and farming (and other areas too, but they can be discussed elsewhere) it’s clear that the people who have most influence in the world, the oligarchy of governments, big business, and their chosen expert and intellectual advisers – including selected scientists – have sold us horribly short, and are leading us more and more deeply into the mire. Truly we need enlightened agriculture rooted in agroecology, which means skills-intensive, primarily organic agriculture in small units – and governments like Britain’s and big-time commerce promote the precise opposite: high-input, zero-labour monocultures, while land, the sine qua non, is on sale to the highest bidder to be used, in effect, for whatever purpose. We need to promote home cooking and to place cooking at the heart of all school curricula, together with gardening. Again, though, this isn’t what’s been happening. In British schools, “domestic science” became “home economics” and both were and are commonly reduced to the arts of defrosting and the splitting of polythene.
Even more broadly, science needs to come off its high horse. Scientists need to acknowledge the paradigm shift within their own métier, which clearly for the most part they have not. For example, the Royal Society no less has now taken it upon itself to promote GM. Intellectuals in all fields need to ask themselves whether they seriously improve on folk wisdom – and to ask why they apparently assume they should be able to do so. After all, folk wisdom at its best encapsulates the experience and insights of all humankind over thousands of years. Why should anyone suppose that their own particular wheeze should improve on that? How can western agriculturalists presume to sweep aside the world’s traditional farming, developed by men, women, and sometimes children over centuries, in favour of some untried algorithm (untried in all but the shortest term), and call it progress? Mercifully, some people including some intellectuals in high places are asking these questions, and are seeking to develop whatever knowledge and skills they have to offer in true collaboration with people at large. The role of science is not – surely? – simply to sweep aside all that has gone before. Even less should it seek to transfer wealth and the power to act from people at large to a ruling minority, which is what it is tending to do at the moment as it helps to replace traditional farming, and cooking, with industrialized systems controlled by corporates. What the world really needs – another slogan – is:
That, I suggest, would really be progress.
Colin Tudge September 24 2016