Academic reading: help! Here’s some guidance that might help

So the new term is upon us and fresh-faced undergraduates fill the campus.  There are some perennial questions that come to the surface around now, and many of them are about reading.  So, for the benefit of my new Geography students, here are a few observations which you may or may not find useful.

Listen to the introductions on University Challenge and the students will say ‘reading Biology’ or ‘reading Geology’, or at least they used to.  The point here is the word ‘reading’: you read for your degree.  The lectures define the syllabus and provide core knowledge, but above all else they scope the subject and its frontiers, frontiers and terrain that you are expected to explore through your own reading.

For a generation brought up with the internet and with everything online at your fingertips, this is not as straightforward as it was 30 years ago, when reading meant spending hours sitting in the basement of the library.  There is now a diversity of output (Fig. 1) where once there were simply printed journals and textbooks, and since most academic material is now delivered online there is a blurring of boundaries.


Figure 1: Information sources.

A common question is: ‘material from a website is OK, right?  It’s no different from an e-journal or e-book, is it?’  The answer of course is no, they are worlds apart!  An e-journal has been peer reviewed; a website has not.  Anyone can post what they like on a website, accurate or not, as witnessed by this site.  Figure 2 is my take on the history of journals, which can help resolve some of these questions; it is also useful to understand the process of academic publishing and the role of peer review (Fig. 3).  It is peer review that provides the safeguard against ‘fake research’, although it is not without its dark side.


Figure 2: My take on the history of the academic journal.


Figure 3: Summary of the academic peer review publishing system.  It is a bit like having your work marked by your peers; it is often a painful process!

We can look at the reading process in three steps: find material >> check its quality and manage your references >> read it actively.

Finding stuff to read

When I was a student, back when the dinosaurs roamed free, we were always, by the better lecturers at least, given a printed sheet of references to take to the library and read.  These days finding material is much easier: you can just go online.  A lecturer may direct you to specific books and papers, but probably less so than in the past because it is so easy these days to find material.  There are specialist search engines and databases such as the Web of Knowledge, and there are details and instructions on the Library web pages.  However, in truth the best option is a simple Google Scholar search.  To be clear, this is not your normal Google search engine; you will need to find it in the Google apps or search for it.  Once you have found it, save it to your favourites; it is, in truth, all you need, in my opinion at least.  It surfaces academic papers and books and does so like a dream!  I use nothing else in my research unless I am on the quest for something very specialised.  Figure 4 shows a typical search.  For finding the key papers fast and easily there is nothing like it, and for Physical Geography it covers all the key subjects well.


Figure 4: Typical Google Scholar search.  Key words go in at the top, you can set the date range on the left, and the availability of the work is listed on the right.  The double quotation marks bring up a popup with the citation.

Quality and Reference Management

Having found your material the next thing is to think about quality.  Some good questions to ask of a source before you spend time reading are:

  • Where is the item published? Is it a textbook or research article?  Now textbooks are good in that you get lots of information in one place, and for developing core knowledge there is nothing better, but they are always out of date!  For some subjects this does not matter, the information is timeless, but science is never static and if you want the latest information you need to seek out the original research (Fig. 5; Table 1).


Figure 5: Different lead times from research to publication.


Type of publication / Comment / Quality

Journal paper/article [Paper and e-journals]: Provided it is a reputable journal, peer review is normally rigorous (check this; a good editor and/or editorial board helps).  The best source every time.  Some disciplines (e.g., archaeology) don’t always place their best work in journal papers owing to restrictions of length.  Quality: *****

Conference volume or edited issue: Peer review can be very variable and is often more flexible, especially for the editor’s mates; it is editor dependent.  Work is often underdeveloped, with the best saved for a paper.  More common in archaeology/anthropology.  Quality: ***

Research monograph: Common in archaeology, where there is a lot of data to convey.  Quality depends on the author and editor; the peer-review protocol is more shadowy.  Quality: ***

Readers and popular science: The book, once completed, is usually reviewed by the publisher (light touch).  It is also reviewed by the community in published book reviews.  Quality depends on the author, but bear in mind they are advancing a ‘thesis’ (idea) and may not be as objective as you might wish.  [Be very careful of ‘self-published’ works.]

Textbooks: The book, once completed, is usually reviewed by the publisher (light touch).  It is also reviewed by the community in published book reviews.  Quality depends on the author, but bear in mind they are selecting information, not always representing the whole field.  Textbooks are always a few years out of date.  Quality: **

Table 1: My personal assessment of the quality of different types of source.  Not everyone will agree with this and it varies with discipline.

  • When was the item published?  Now this is a tricky question.  In theory the more recent a paper, the more up to date it should be; so should you only look at material published in the last few years?  The answer is no: there are many classic papers which can be anything from ten to a hundred years old.  Yes, it is often harder to get those papers digitally, but that is not a reason for ignoring them!  The classics are often the best.  You just need to be aware of how ideas may have changed.  Look at a couple of recent articles on a subject, and if they all link back to one older piece then it is often worth giving it a read.
  • Is the journal peer reviewed?  On the journal’s home page there should be information about this; check that it has been, and if it hasn’t, treat it with caution.  Books can be a bit of a grey area: most textbooks undergo some review before publication, but in truth the rigour varies.  Be particularly careful of anything that is clearly self-published; by contrast, large publishing houses have a reputation to maintain and are careful.  The impact factor of a journal is a crude measure of how much research in the journal is cited.  The higher the value (most journals have impact factors below 5), the more the research in the journal is being read and cited by others.  A journal without an impact factor is more suspect, unless it is just starting out and backed by a big publishing house.
  • Who is the editor, or who is on the editorial board?  Are the editors and members of the editorial board established figures in the discipline?  Can you trace them back to solid academic institutions and profiles?
  • If the work has been published for a while you might like to check the article’s metrics.  There are various metrics you may look at, but the simplest is the number of citations: how many times has an academic who is not an author of the piece referenced the work in another paper?  The more citations, the more impact the article has had; remember that if it was published just a few months ago the citations will always be low, since it takes time for people to read and cite a work.  The Altmetric score is another measure.  This is a measure of the media and public interest in an article when published, based on things like the number of downloads, press coverage, tweets and the like.  The only caution I would add is that bad or outrageous science can sometimes have high Altmetric scores for the wrong reasons.
  • Where was it published?  There are a lot of new journals popping up at the moment with the move to Open Access publishing, and in truth a lot of them are very poor.  A quick check on the age of a journal can really help here; has it been going for decades or not?
  • Who funded the work?  As you read a piece it is always a good idea to turn to the acknowledgements or declarations at the end of a paper to see if the authors declare any conflicts of interest and/or funding details.  A paper funded by the nuclear industry, for example, may not be that independent when critiquing that industry!  This is very true of papers about drugs and medical devices.  You may also want to look carefully at the sample size and experimental design.  Just because a paper got published doesn’t mean that it is free from flaws!
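As an aside, the standard two-year impact factor is simple arithmetic: citations received this year to items the journal published in the previous two years, divided by the number of citable items in those years.  Here is a toy sketch; the journal and all of its numbers are entirely made up for illustration:

```python
# Simplified two-year impact factor: citations in year Y to items
# published in years Y-1 and Y-2, divided by the citable items in
# those two years.  All figures below are invented.

def impact_factor(citations_to_prev_two_years, items_prev_two_years):
    """Crude two-year impact factor."""
    return citations_to_prev_two_years / items_prev_two_years

# Hypothetical journal: 120 papers over 2016-17 drew 360 citations in 2018.
print(impact_factor(360, 120))  # -> 3.0
```

Most journals, as noted above, sit below 5 on this measure, so even a ‘3.0’ indicates a well-cited outlet in many disciplines.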

So you have found something to read, and you have saved the PDF or printed the paper; what comes next?  Well, managing your papers (Fig. 6) is a good housekeeping step, and there are various bibliographic programmes, some free, some not, that can help you manage this stack of papers whether they are printed or saved to your hard drive.  There are various tables which compare different reference managers; here is an example.  If you are a Bournemouth student then EndNote is provided free, but what happens when you leave?  My advice is to go for something free, at least to start with.  I use and personally recommend Mendeley, but you might find something better.  It creates reference lists, but for me the key is that it stores PDFs and you can access your library via the web from anywhere.  If I am honest I am terrible at keeping it up to date, and Figure 6 is a shot from my office!


Figure 6: Having a hard time finding the right paper?  Maybe time to go digital and use a reference manager?

Critical Reading

We are now at the final and most important task.  So you have found stuff, downloaded it, stored it nicely . . . so you feel better, yes?  This is sometimes referred to as the psychological value of unused information.  People buy self-help books, never read them, but feel better anyway!  It doesn’t quite work like that in this case: you need to read.

In truth most academic papers will put you to sleep if you try to read them end to end, even if they are well written, so don’t try!  Academic reading is about the assimilation of information and its translation (i.e., engaging with it) into something useful to you.


Figure 8: As the Borg would say, assimilation is everything!

You need to be a critical reader and, like all academic skills, it has to be learnt.  Here are a few observations that might help:

  • Reading critically does not, necessarily, mean being critical of what you read. It is not about identifying faults and flaws.
  • Critical reading means engaging in what you read by asking ‘what is the author trying to say?’ or ‘what is the main argument being presented?’
  • Critical reading involves presenting a reasoned argument that evaluates and analyses what you have read thereby advancing your understanding, not dismissing and therefore closing off learning.
  • Having a reading agenda helps (Table 2); what do you need?
    • General knowledge on a broad subject area
    • Improved understanding of a specific concept
    • Examples and illustrations of key points [e.g., Case Studies]
    • Information on a debate or controversy [e.g., Pros and cons]
    • Data on best practice


Requirement / Best sources?

1. General knowledge on a broad subject area: Textbooks are good for this, in combination with a reader.  Select the relevant chapter and skim read, focusing on key sections/paragraphs.
2. Improved understanding of a specific concept: Textbooks are best for this.  Select the relevant chapter, or use the index, and focus on the key section(s) or paragraphs.
3. Examples and illustrations of key points [e.g., case studies]: Journal articles are best for this.  Examples in textbooks are often ‘tired’; look for new ones.
4. Information on a debate or controversy [e.g., pros and cons]: Journal articles from different sides of a debate; focus on the introduction and discussion sections, which paraphrase a debate.  A good review article may really help.
5. Data on best practice: Journal articles are best for this since they are most current.

Table 2: Matching different reading needs to the best sources.

Reading should be a process of discovery, with one question leading to another.  Above all else reading should be an active process.  Producing a précis or summary of a paper, and trying to fit it on no more than one piece of A4, is a good habit to form.  I suggest you read the post on writing a précis here.  I have one other tip which you might find useful.  It can be useful to keep your notes on journal papers separate from your lecture notes, although cross-referenced. Why?  Well, it allows you to see linkages beyond the structure of your lectures, can aid discovery, and allows you to use one bit of reading in multiple places.  For example, one case study using multiple techniques might be useful as an example in four or five places in your lecture notes.  Figure 9 gives one possible mode of working and is ideal for use with an electronic notebook like Evernote.


Figure 9: Using paper summaries in a flexible way like a stack of cards, ideal for use with something like Evernote or OneNote.

Finally remember it is always a great idea to reflect and think about what you read!


Self-regulating against ‘fake’ research; but at what cost to academic innovation?

Fake news, and fake research, is a story of the moment.  Many professions self-regulate and academics are no exception.  The system we use is peer review, which governs the publication of research and, in many countries, the availability of competitive research funding.  It can make or break research careers and is there in part to safeguard against ‘fake’ research, but to what extent does it hold back creativity and innovation?

While there is a growing proliferation of journals across a range of disciplines, careers and reputations are made by publication in the most elite outlets, such as Nature or Science, for which competition is fierce.  But whatever the journal, getting controversial or truly innovative ideas published can be a challenge and is, at least in my experience, limited by the very self-regulatory system we as academics uphold.  Peer review aims to uphold academic standards of scholarship and should protect decision-makers and the public from bogus claims (fake news), but it can, and sadly does, become a form of censorship in some cases.  Now in my experience reviewers often stray from being a constructive and critical friend, focused on issues of quality, presentation and logic, to expressing opinion and reaction to a new idea, and the boundaries here are unclear.

As a former journal editor I know just how hard it is to secure reviews from busy researchers.  As a reviewer I know that such tasks can become squeezed into bad-tempered and stolen moments at the end of a busy day.  As an author, and a dyslexic one at that, I have experienced many painful reviews over the last 25 years, some deserved, others not, and many have veered from the professional to the personal.  Challenging convention is what researchers should be trained to do, but many don’t, choosing instead to replicate research, and innovation slows.  To do otherwise incurs pain and disappointment, as the peer-review system can allow vested interests to stifle true innovation.  I am reminded of a piece I wrote as a young researcher about R.G. Carruthers, a British geologist so infuriated by the interference of reviewers that he took the unusual step of publishing a private pamphlet in 1953.  Carruthers was realistic about its success: ‘One has to recognize that the independent issue of scientific pamphlets is rarely a success.  The life of such things, like that of the medieval peasant, is apt to be “nasty, brutish, and short”. Still there are times – and this is one of them – when there is no other way, if one’s work is to be presented as written, and free from the interference of others.  Whether it be accepted now, or later – perhaps much later – is no great matter. But it will be . . .’ (Carruthers, 1953: p. iii).

A year or so ago I got a paper to review for an elite journal, a paper with some challenging findings and a long history of rejections, ill-tempered reviews and downright unprofessional behaviour on the part of some reviewers, including breaches of confidentiality.  I reviewed it fairly, the first fair review the authors had received in several attempts to get the material published, but the paper was still rejected by the editor.  One lone voice is not enough.  The authors were so shocked to receive a kind review, given their past experience, that I was invited as an expert in the field to help them re-shape their work.  I did so, and the new manuscript was tamer, better formed, with more data and more considered, but it still ruffled too many feathers and was rejected yet again.  Some of the authors felt, from the tone of the reviews, as if they were being accused of participating in a scientific forgery (fake research!).  It has finally been published, six years on.

This story, and the one about Carruthers, illustrates how difficult it is to get innovative ideas published and discussed openly, particularly when they challenge established paradigms and figures.  It is, after all, for the research community as a whole to adjudicate their value, not just a couple of reviewers acting as representatives of that community.  I almost feel angry at the thought of how much innovative and provocative research might have been rejected in the name of the academic community, in my name and in yours (!), without us even being aware.  Surely open discussion of all ideas, however unusual, is essential for innovation and progression?

Peer review revolves around well-established academics and experts in the field; the very people who often have most to lose from the publication of new models and ideas.  Prejudice of all sorts is rife; institutional and national prejudices are often to the fore and are subconsciously applied without thought by many academics.  Yet unconscious bias training as part of recruitment processes is now common in most of the UK’s universities.  One of the people I consulted about this piece said bluntly ‘it is about academic morals and they are not as white as one would hope’.  Treat others as one would wish to be treated is a good adage, but it is easy to forget in a world dominated by competition.  Academic competition, through such things as the Research Excellence Framework in the UK and the scramble for scarce funding, coupled with the competition between journals for the most cited research, breeds competitive behaviours.  These are easy to measure and monitor with metrics, but there are no easy metrics for compassion, or for the mentoring and coaching of talent and innovation beyond your immediate research team.  One academic I spoke to said ‘what do you expect?  You go to see a lawyer and they charge a fortune per hour, or go to a private clinic and you pay handsomely to see a specialist, but you ask an academic expert with over 30 years of experience for their considered view and expect it for free!’  Most academic journal editors are not paid for their time.  In the digital age it can take a matter of days to get a paper formatted, proofed and online once accepted; as production times fall, editors are under pressure to cut editorial processing times, and consequently many editors (as I was) are encouraged simply to reject rather than nurture papers that need a lot of re-working.

Now to be clear, I am not suggesting that we abandon peer review; it plays an important role in ensuring that what is published is at least intelligible and meets some basic standards of ethical research.  The case here when dealing with medical or drug trials, for example, is clear, but the need to nurture, debate and support the publication of innovative ideas needs greater thought.  It is something that is under even greater threat with the current focus on fake news.  There have been a number of experiments and new approaches over the years to try and make the process more transparent and less open to personality and abuse.  Some journals now offer double-blind reviewing, others publish the reviewers’ comments and the authors’ responses, and there are a number of journals that now allow reviewers to debate the decision letter between them.  Double-blind reviewing has its advocates, but any form of anonymity allows abuse in my experience.  It is the reviewer’s anonymity, not the author’s, that is the problem, along with the lack of redress permitted to authors when treated unfairly.

For innovation in self-regulation we have perhaps to go back to the early origins of the peer-review system itself.  A learned researcher would present their work via a formal lecture, and the audience would discuss and question the author in person, effectively providing peer review through active rather than passive debate. Those comments would be minuted and published along with the original lecture.  Now we can’t restrict publications to oral presentations, and conference invitations are far from unbiased now, as they were in the past; you don’t invite the opposition to your own jamboree that often!  My point here, however, is that a more open and transparent debate is needed, not one simply limited to a select, and often self-nominating, few acting on behalf of a wider, and usually oblivious, community.  The arXiv project is one example.  Here articles can be uploaded and receive online comments, and it also acts as a digital archive for more conventionally published works.  Forums and publications that allow more open peer debate, active rather than passive and hidden, are perhaps closer to the true spirit of peer review?  Further experiments are much needed to protect society against fake news and research, yet create the innovative, free-thinking research talent that our society so desperately needs.

This post is based on a presentation given by Professor Bennett at Bournemouth University in 2018 entitled ‘The Dark-side of Peer Review’.



Climate zones, palaeo-climate and your 2018/19 coursework!

Regional physical geography is a bit dated these days, with a greater emphasis now placed on processes and principles.  In the past, however, regional geography was a big deal, and to be able to describe the climate, vegetation and soils of the world region-by-region was an end in itself.  To assist with this task a range of climate classification systems were developed.

The Köppen classification is probably the most widely recognised.  Wladimir Köppen (1846-1940) developed the scheme in 1884, making several modifications, the last of which was in 1936.  It is sometimes known as the Köppen-Geiger climate classification due to later modifications made by Rudolf Geiger in 1954 and 1961.  It is a zonal scheme: warm and wet at the equator, moving outwards through the dry and arid subtropics to the temperate mid-latitudes and the polar deserts beyond.  It reflects global atmospheric circulation and the heat engine, modified by land-ocean distribution.
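For the curious, the main Köppen letters can be sketched in a few lines of code.  This is a deliberately simplified toy, not the full Köppen-Geiger rule set: the 18 °C tropical limit, 10 °C polar limit and -3 °C temperate/continental divide are the commonly cited thresholds, and the aridity threshold here assumes evenly distributed rainfall:

```python
# Toy sketch of the first (main) letter of the Köppen scheme.
# Real Köppen-Geiger rules are more involved; the dryness threshold
# below (20*T + 140, in mm) assumes evenly distributed rainfall.

def koppen_first_letter(t_coldest, t_warmest, t_annual, precip_annual_mm):
    """Return the main Köppen class from monthly temperature extremes (degC)."""
    if precip_annual_mm < 20 * t_annual + 140:   # too dry: arid climates
        return "B"  # arid
    if t_warmest < 10:
        return "E"  # polar
    if t_coldest >= 18:
        return "A"  # tropical
    if t_coldest > -3:
        return "C"  # temperate
    return "D"      # continental

# Roughly Singapore-like, London-like and Moscow-like numbers:
print(koppen_first_letter(26, 28, 27, 2300))  # A
print(koppen_first_letter(5, 18, 11, 600))    # C
print(koppen_first_letter(-8, 19, 6, 700))    # D
```

Even this crude version reproduces the zonal pattern described above: tropical near the equator, arid where rainfall falls below the temperature-dependent threshold, and temperate to continental with increasing winter severity.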


Figure 1: Köppen climate classification. Source details and key to classes can be found here.

Looking beyond the classification to the underlying principles is, at least in my view, more useful, especially if we are to understand the climate of past or future continents and/or worlds.  A hypothetical continent is often used to convey these principles, like the one in Figure 2.  You can find a similar, more sophisticated version on the web here.


Figure 2: Climate Zones for a hypothetical continent straddling the equator. 

So, crudely speaking, this has warm wet climates close to the equator, dry ones in the subtropics and temperate ones in the mid-latitudes.  It follows the zonal pattern we would expect given that there is an excess of heat at the equator and a deficit at the poles.  The warm-wet equatorial region reflects the location of the Inter-tropical Convergence Zone, which will move north and south as the overhead sun varies between the solstices. The humid equatorial region extends further north and south on the eastern side of the continent due to the influence of the trade winds, which bring moist air onto land.  There may also be strong monsoonal influences in these locations.

The subtropical zone is associated with warm descending air linked to the returning surface winds which complete the Hadley Cell.  The arid and semi-arid regions on the western side of the hypothetical continent are the manifestation of this.  Note that they extend further north into the centre of the continent in the Northern Hemisphere.  This is due to a ‘continental effect’ given their distance from the ocean: land heats up and cools rapidly, giving temperature extremes, and there is little moisture due to the distance inland.  In the Southern Hemisphere this arid zone is modified by the presence of a chain of mountains; note that it extends further south on their lee side due to a rain-shadow effect.  Equally, this arid zone also extends further north towards the equator along the coast due to the presence of offshore winds.  In South America today the movement of the trade winds and associated ocean currents is offshore, which causes coastal upwelling of cold ocean waters and also ensures that any moisture moves offshore rather than onshore.  The Atacama Desert is the result.  It is worth noting that this pattern is influenced by El Niño, and there is no reason to suggest something similar would not be active on our hypothetical continent.

The mid-latitudes show a contrast from west to east due to the influence of the westerlies.  The western seaboard of our continent is wet and mild, with a strong maritime influence.  In contrast, the eastern seaboard is drier and subject to greater continental extremes.  By finding current cities/regions that match those on the hypothetical continent you could augment the picture further with climate data.  Figure 3 is a crude sketch of what the vegetation zones might look like.


Figure 3: Hand drawn sketch of what the vegetation zones associated with Figure 2 might look like. Forgive my handwriting!

Ocean currents could be added to this hypothetical world, and you could speculate on what, if any, thermohaline circulation might exist.  Think about the salinity balance: cold, saline water is dense and will sink; warm, fresh water will not.  Where are the regions of greatest evaporation, and where is the freshwater runoff likely to be most marked?
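The density contrast that drives this sinking can be sketched with a simple linear equation of state.  Real oceanography uses the full TEOS-10 equations; the coefficients below are ballpark illustrative values only:

```python
# Illustrative linear equation of state for seawater.  Colder and
# saltier water is denser; the coefficients are rough, not TEOS-10.
RHO0, T0, S0 = 1027.0, 10.0, 35.0  # reference density (kg/m^3), temp (degC), salinity (psu)
ALPHA, BETA = 2.0e-4, 7.6e-4       # thermal expansion and haline contraction coefficients

def density(temp_c, salinity_psu):
    """Approximate seawater density in kg/m^3."""
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (salinity_psu - S0))

# Cold, salty high-latitude water vs warm, fresher tropical water:
print(density(2.0, 35.5))   # denser -> sinks, feeding deep circulation
print(density(25.0, 34.0))  # lighter -> stays at the surface
```

Mapping where evaporation concentrates salt and where runoff freshens the surface on your hypothetical continent tells you where such dense water, and hence any thermohaline circulation, might form.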

You can use these principles, as we have above for the hypothetical continent, and apply them to Pangea, or in the case of your coursework (Physical Geography 2018/19) to Pangea Ultima (Fig. 4).


Figure 4: An image of Pangea Ultima based on the work of Scotese, and the one you have been provided with for your Physical Geography coursework in 2018/19.

Chilling stars and the origin of complex life?

Do you owe your existence to a supernova?  It is perhaps rather far-fetched, but for some scientists not too big a leap.  To follow this story we need to step back and question our understanding of climate change.  The consensus model is based on the concentration of global carbon dioxide: it was drawn down in the Cenozoic, priming Earth systems such that orbital variations in radiation could modulate our climate through mechanisms such as the thermohaline circulation.  The link between orbital radiation and the climate pulse-beat during the Quaternary has been well established by deep-sea sediment and ice cores.  But not everyone agrees with this; after all, science is about disagreement and challenge, so let’s take a look.

Wilson Chamber and Cosmic Rays

A Wilson chamber is a sealed vessel containing air supersaturated with water or alcohol vapour.  The passage of atomic particles and cosmic rays is recorded by a trail of droplets.  It represents one of the earliest ways of seeing atomic, and perhaps subatomic, particles (Fig. 1).  It also shows how these particles can cause condensation, which is essential for cloud formation.


Figure 1: Wilson Cloud chamber.

This piece of old-fashioned scientific kit has led some people to suggest that there is a link between cloud formation and cosmic rays (Fig. 2).


Figure 2: Cosmic rays and cloud formation. (Source: Svensmark and Calder, 2007).

Cosmic rays consist of a range of atomic and subatomic particles with varying energy fluxes.  Gamma-ray bursts are common products of starbursts, when giant stars die (supernovae) and new stars are born.  The link to cloud formation is not entirely clear, but there may well be a link of sorts.  Changing the percentage of cloud cover changes the atmospheric albedo (reflectivity) and atmospheric absorption, both of which may impact on the surface energy balance.
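To see why albedo matters, here is a minimal sketch using the standard effective-temperature energy balance.  It treats the planet as a simple radiator and ignores the greenhouse effect, so the numbers are illustrative only:

```python
# Effective radiating temperature of a planet: Te = (S0*(1-a)/(4*sigma))**0.25.
# A zero-dimensional energy balance; no greenhouse effect is included.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0       # solar constant, W m^-2

def effective_temp(albedo):
    """Effective radiating temperature (K) for a given planetary albedo."""
    return (S0 * (1 - albedo) / (4 * SIGMA)) ** 0.25

t_now = effective_temp(0.30)       # ~255 K for Earth's present albedo
t_cloudier = effective_temp(0.31)  # a slightly brighter, cloudier planet
print(round(t_now - t_cloudier, 2), "K of cooling from a small albedo rise")
```

Even a one-percentage-point increase in albedo cools the effective temperature by nearly a degree, which is why a cosmic-ray influence on cloud cover, if real, could matter climatically.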

Cosmic rays enter our atmosphere all the time; if they collide with an atom they can produce a secondary ray in the form of a neutron.  If this collides with a nitrogen-14 atom, for example, the neutron displaces a proton to give the carbon isotope carbon-14.  Carbon-14 is the stuff we use for carbon dating, and the proportion in the atmosphere, and therefore in living matter, is known to vary through time.  In fact it varies with the frequency of cosmic rays: more cosmic rays, more C-14.
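The dating step follows directly from radioactive decay.  A hedged sketch, using the conventional 5730-year half-life and ignoring the calibration needed precisely because atmospheric C-14 varies, as described above:

```python
import math

# Radiocarbon age from the fraction of original C-14 remaining,
# via N(t) = N0 * (1/2)**(t / half_life).  Calibration against the
# varying atmospheric C-14 record is ignored in this sketch.
HALF_LIFE = 5730.0  # years

def radiocarbon_age(fraction_remaining):
    """Years elapsed given the fraction of original C-14 still present."""
    return -(HALF_LIFE / math.log(2)) * math.log(fraction_remaining)

print(round(radiocarbon_age(0.5)))   # one half-life -> 5730 years
print(round(radiocarbon_age(0.25)))  # two half-lives -> 11460 years
```

It is exactly because the atmospheric starting ratio varies with cosmic-ray flux that raw radiocarbon ages must be calibrated, and, usefully for this story, why tree-ring C-14 records double as a proxy for past cosmic-ray intensity.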

Now cosmic rays are quite damaging, but we are shielded by the Sun’s magnetic field; the weaker the field, the greater the C-14 production.  This also implies that the weaker the field, the greater the potential for cloud formation via cosmic rays, assuming this link exists.  Sunspots come and go on the surface of the Sun with an 11-year cycle (Fig. 3), but the magnitude of this activity varies over longer time scales.  It just so happens that one weak period of sunspot activity corresponds to the Little Ice Age (Fig. 4).  This coincidence has been known for a while, but the mechanism was unclear until Svensmark (2007) put forward the link between clouds and cosmic rays.




Figure 3: Solar sunspot cycle.  (Source: David Hathaway, NASA Marshall Space Flight Center.)


Figure 4: Sunspot activity through time.  Note the Maunder Minimum, which corresponds to the Little Ice Age.

Little Ice Age

The River Thames doesn’t freeze these days, but it did in the past.  Bishops were sent for in Alpine villages to exorcise demons and evil spirits from advancing glaciers.  The European wine harvest failed.  Between about 1650 and 1750 AD things were tough.  This period, with its wealth of documentary evidence, is known as the Little Ice Age (Fig. 5).  Whether it was a European-focused event or more global is still a matter of some debate, and may in part reflect the availability of the European historical record.  But cold it got.

The cause of this cold event is unknown; some have linked it to a slowdown in the thermohaline conveyor, but the evidence is unclear.  Could this event be the result of the sunspot minima?  More clouds would increase atmospheric albedo, meaning that less solar radiation would make it through the atmosphere to warm the surface.


Figure 5: Little Ice Age in context. 

In theory this should be a testable hypothesis.  We have records of cosmic rays going back several decades and also satellite records of cloud cover.  While this would not prove the cause of the Little Ice Age, it would give veracity to the mechanism.  Figure 7 shows the correlation between cloud cover and a single solar cycle.  The correlation with high-altitude cloud cover is poor, which is surprising since it is high-altitude clouds that are most important in controlling the albedo of the atmosphere, but with low-altitude clouds it is strong.
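Testing such a link boils down to correlating two time series.  Here is a minimal sketch of Pearson’s correlation coefficient; the cosmic-ray and cloud numbers are made up purely to show the mechanics, not real observations:

```python
import math

# Pearson correlation between two series: covariance divided by the
# product of the standard deviations.  The data below are invented.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

cosmic_rays = [98, 102, 110, 115, 108, 101, 96]            # hypothetical counts
low_cloud = [27.1, 27.6, 28.4, 28.9, 28.2, 27.5, 27.0]     # hypothetical % cover
print(round(pearson(cosmic_rays, low_cloud), 3))           # close to +1 here
```

Of course, as the replication attempts discussed below show, a high correlation over one solar cycle is not the same as an established causal mechanism.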

The thing about science is that it has to be repeatable.  The onus is on the authors of a paper to make sure their data sources and methods are clear so that, in theory, they can be repeated.  Many people have tried to repeat the analysis of Svensmark and colleagues using the same and different data, and the relationships have not always been replicated (e.g., Sun and Bradley, 2002).  For example, Kulmala et al. (2010) could find no evidence of a relationship between the creation of new particles in the atmosphere and cosmic ray activity.

In parallel, theoretical work, much of it done at CERN (the European Organization for Nuclear Research), has led to a better understanding of the physics (Kirkby, 2007; Kirkby et al., 2011).  These researchers have greater belief in the potential link, but the test of a hypothesis like this must be against known historical records.

Interestingly, Tsonis et al. (2015) found no link between global temperature during the 20th century and cosmic rays, but did find a significant, although modest, link between cosmic rays and short-term, year-to-year variability in global temperature.

The verdict on the link between cosmic rays and clouds remains a matter of debate and is largely unproven.  There is a divergence between the physics that can be shown in the laboratory and the actual record.  While the consensus is perhaps moving away from cosmic rays, the idea should not be dismissed until proven false.  The association between the Maunder Minimum and the Little Ice Age is compelling, but is it a case of coincidence or causality?


Figure 7: Correlation between cosmic rays and low cloud cover during a solar cycle. (Source: Carslaw et al., 2002)

Origin of Life and Snowball Earth

Despite the fact that the link between cosmic rays and climate has yet to be proven, some researchers continue to build on the hypothesis.  In particular they point to a potential link between cosmic rays and the origin of complex life on Earth.  This is perhaps even more tenuous, but it is important that scientists keep an open mind about ideas – you never know when the next paradigm shift is going to happen!

The Cambrian Explosion was a massive evolutionary radiation.  For much of Earth history there was little in the way of life apart from bacteria and algae; ‘slime ball Earth’ would have been quite an apt description.  This very long period in Earth history – 3 billion years or so – was vital in transforming the Earth’s atmosphere, reducing its carbon dioxide content and replacing it with oxygen.  This terraforming period created the right circumstances for the evolution of complex life, which really kicked off around 520 million years ago with the Cambrian Explosion.

The causes of this radical evolutionary event have been debated: the presence of extensive, flooded continental shelves following the break-up of Rodinia may have been important, as may the availability of oxygen.  It may also partly be a function of preservation, in that the key innovation was the development of ‘hard parts’ made of calcium carbonate that could be preserved.  Complex multicellular life first appeared in the Neoproterozoic with animals known as the Ediacaran fauna.  The association with Snowball Earth has attracted many geologists’ attention despite the fact that the timings don’t quite work.  The Neoproterozoic snowball episodes are constrained to 770 to 635 Ma, approximately 100 Ma before the Cambrian Explosion (520-488 Ma) when metazoan fauna hugely diversified.  By working back via the molecular clock embedded within DNA we can get an approximate date for the origin of ancestral metazoans; that is, when the genome was last re-organised into its current form.  The date for this event is around 900 Ma.  Reconciling these events is difficult because the record is so poor and different elements are often in conflict; in this case the molecular clock and the fossil record.
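The molecular-clock calculation itself is simple arithmetic: if two lineages accumulate substitutions at a roughly constant rate, the time since their genomes diverged is the observed genetic distance divided by twice the rate, because both lineages mutate independently after the split.  A minimal sketch, with purely invented numbers for the distance and the rate:

```python
# Molecular-clock back-calculation (illustrative values, not real data).

def divergence_time_ma(d, rate_per_ma):
    """Divergence time in Ma from genetic distance d (substitutions per
    site) and a per-lineage substitution rate (substitutions/site/Ma).
    Divide by 2*rate because both lineages diverge from the ancestor."""
    return d / (2.0 * rate_per_ma)

d = 0.9        # hypothetical observed substitutions per site between two taxa
rate = 0.0005  # hypothetical rate per lineage per million years
print(divergence_time_ma(d, rate))  # 900.0 Ma
```

The real difficulty, of course, is in calibrating that rate against a patchy fossil record, which is why molecular-clock dates carry wide error margins.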

Figure 8 shows a rather speculative model that links the Cambrian Explosion to cosmic rays (Maruyama and Santosh, 2008).  The scenario goes something like this.  Starbursts between 900 and 600 Ma led to extensive cosmic radiation within our solar system.  This, so the argument goes, may have led to cloud formation, triggering Snowball Earth.  As Rodinia rifted it created seaways with nutrient-enriched waters and wide, shallow continental shelves ideal for the diversification of life.  Saturation by cosmic radiation caused the mutation of DNA and the creation of more complex life, including sexual reproduction.  By around 500 Ma oxygen levels had risen sufficiently for life to explode in diversity.

It is important to recognise that this is a speculative story – perhaps geological poetry would be a better term – but stories can help scientists to frame questions for more rigorous analysis.  It does not mean that these stories are correct or will be proven correct in time.  It is, however, an interesting hypothesis, even if one excludes the climate link.  We know that the founding DNA of modern life dates from around 900 Ma; what caused that mutation and re-organisation of the genome?  Extra-terrestrial radiation is as good a working hypothesis as any other at this point.  So yes, you may owe your existence to the explosion of a distant star.


Figure 8: Speculative model linking cosmic rays to the origin of complex life and the Cambrian Explosion.  (Source: Maruyama and Santosh, 2008.)

Science is about ideas: advancing them, putting forward tests with which to examine them, and allowing peer debate.  Many ideas are advanced that are ultimately discarded.  The link between climate and cosmic rays is increasingly being marginalised; that is not to say, however, that it should not be debated, taught and discussed.



Space-time substitution, but not as we know it, Captain!

One of the problems in physical geography is that things are a bit slow.  If you sat watching them you would probably get bored; perhaps not if you were watching a pyroclastic flow, or were being shaken by an earthquake, or swamped by a tsunami, but let’s face it, most processes are slow.  So if we want to watch a landscape evolve and change before us we need better solutions.  We can reconstruct past landscapes by looking at the sediments, rocks, residues and artefacts left behind; with the addition of a few dating tools we might make a good stab at reconstructing a palaeo-landscape.  Alternatively we might use historical material – archives of aerial photographs, old satellite images, historical documents, old newspapers and even pictures.  Failing that, there are other tools that we can use, particularly space-time substitution, which is loosely referred to as ergodic reasoning.  It might sound like something out of Star Trek, but it works!

Using historical evidence

The photograph in Figure 1 is of the River Thames in 1895, partially frozen and looking like a scene from an Arctic port.  Figure 2 shows a similar image from Chatham Dockyard.  Both illustrate the potential of historical records to bring to life past climate events.

Figure 1: Photograph of the River Thames at Gravesend in 1895.


Figure 2: Etching from Chatham Dockyard from 1895.

As an undergraduate I read a piece of work by Pearson (1976) which used historical data to reconstruct climate, and it made a big impression on me.  Pearson (1976) scanned literally thousands of copies of Edinburgh’s evening papers for stories about snow – things like the mail coach from London being delayed – and assembled the data into a historical time series.  The series clearly shows a peak in the late 1700s in the incidence of snow stories.  Of course one has to be careful in interpreting such data: was it really a climate event, or simply that the frequency of reporting increased?  For example, many of the stories concern delay to the mail coach at a time of increasing economic prosperity and linkage to London; that is, disruption of the mail became news.  The fact that the peak goes down again gives some corroborative support to the idea that this was a real climate event, perhaps linked to the Little Ice Age.

The use of historical data is not restricted to climate events but is relevant to geomorphological processes as well.  For example, Figure 2 shows the pattern of meanders on the Mississippi based on historical maps providing important information on the rate of channel migration.


Figure 2: Use of historical maps to reconstruct the channel patterns of the Mississippi River.

Space-Time Substitution

Think of a typical crowd of people at a shopping centre on a wet Saturday afternoon.  There will be kids whining for toys, old men needing a cup of tea, hassled mothers and fathers running errands, teenage couples hanging out and gaggles of students shopping; a mixed population.  By sampling in space (e.g., picking people in different locations) we are in effect sampling in time; sampling a cross-section of the age population.  If we do that statistically we are close to employing the true concept of ergodic reasoning as developed to study the kinetic theory of gases (Paine, 1985).

In geomorphology we tend to use a looser form, perhaps better described as location-time substitution.  If we can demonstrate, or reasonably assume, that process rates have remained constant, and that the landform being examined developed progressively (i.e. spans multiple ages), then we can use space-time substitution.

One of the more famous examples is the work of Brunsden and Kesel (1973) on the evolution of the Mississippi River bluffs.  They surveyed a series of river-bank cross-sections that had been abandoned (i.e. lateral river erosion had stopped) at various times (Fig. 3).  They then assembled these cross-sections in relative time order to create a model for the ‘relaxation’ of the river bank once the river had migrated laterally away from it (Fig. 4).  It is a classic example of space-time substitution.
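The logic of this kind of study can be sketched in a few lines of code.  Everything here is invented for illustration – the exponential relaxation law, its rate, and the ages of the abandoned sections – but it shows how sampling sections of different age in space, then sorting them by relative age, recovers a temporal model:

```python
import math
import random

random.seed(1)

# Hypothetical relaxation law: slope angle decays exponentially with time
# since the river stopped undercutting.  Initial angle and rate are invented.
def slope_angle(age_years, initial=40.0, rate=0.01):
    return initial * math.exp(-rate * age_years)

# 'Survey' bank sections abandoned at different (known) times: a sample in space.
ages = sorted(random.sample(range(0, 300), 8))

# Ordering the cross-sections by relative age recovers the temporal trajectory.
profile = [round(slope_angle(a), 1) for a in ages]
print(profile)  # slope angles decline with increasing age since abandonment
```

The crucial (and testable) assumption baked into the sketch is the same one the method itself requires: that each section has followed the same relaxation law at the same rate.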


Figure 3: Sample locations along a section of the Mississippi River between Mt Pleasant and Port Hudson.  At each site a slope cross-section was surveyed.  (Source: Brunsden and Kesel, 1973).

Figure 4: Mississippi bank cross-sections organised to give a temporal model for how they evolve through time.  Note the slope relaxation once the lateral undercutting from the river is removed. (Source: Brunsden and Kesel, 1973.)

Another very nice example of this approach has been applied to the evolution of coastal chines like those around Bournemouth.  These are effectively gullies or incised river channels perpendicular to the coast; in the case of the Bournemouth examples most are seasonally dry.  Their formation depends on a balance between coastal erosion, causing the overall cliff to recede, and the rate of headward cutting of the gully or channel.  Leyland and Darby (2008) studied the formation of chines on the south coast of the Isle of Wight.


Brunsden, D. and Kesel, R.H., 1973. Slope development on a Mississippi River bluff in historic time. The Journal of Geology, pp.576-598.

Leyland, J. and Darby, S.E., 2008. An empirical–conceptual gully evolution model for channelled sea cliffs. Geomorphology, 102(3), pp.419-434.

Pearson, M.G., 1976. Snowstorms in Scotland, 1729 to 1830. Weather, 31(11), pp.390-393.

Nuclear power, tsunamis and the jökulhlaup?

The Fukushima Daiichi nuclear disaster of 12 March 2011 was a direct result of the Tōhoku earthquake and tsunami the day before.  The reactors automatically shut down when the earthquake hit, but the tsunami flooded the emergency cooling generators, causing three nuclear meltdowns and the release of radioactive material.  The Fukushima disaster is the largest nuclear disaster since the 1986 Chernobyl incident and the second to be given a Level 7 classification on the International Nuclear Event Scale.  It was a sobering event for the international radiological community, but what, if any, are the implications for a country such as the UK?  Most of the UK’s nuclear plants are coastal, the availability of cheap land and abundant water for cooling being key.  Unlike Japan the UK is not located on an active tectonic boundary but on what is called a ‘passive margin’; even so, a UK tsunami is not beyond the bounds of possibility.

Tsunami records in the UK

In the late 1980s and early 1990s tsunami deposits were recognised, first on the Norwegian coast and then in Scotland.  They consist of an anomalous layer of sand containing shell fragments and rip-up clasts, often set within terrestrial peat.  These layers evidence wave run-up well beyond the norm, often between 1 and 100 m (Smith et al., 2004).  They have been widely studied and variously dated to around 7.1 K radiocarbon years BP (7.9 K calibrated years BP).  Particularly impressive is the dating of chlorophyll from mosses washed out to sea and preserved in marine sediment by the tsunami; these give calibrated ages of 8.12 K and 8.175 K years BP (Bondevik et al., 2012).

Tsunamis can be caused by earthquakes, meteorite impacts, volcanic cone collapse and by submarine landslides.  The smoking gun for the UK tsunami was the Storegga Slide on the Norwegian continental slope.  This complex multi-phase slide appears to have been last active around 7.9-8.1 K years BP, when a failure of some 4000 cubic km occurred.

The anatomy of this slide is of particular interest.  Ocean basins have a distinct marginal geometry: the continental shelf, the continental slope and the abyssal plain.  As one moves away from the shoreline the rate of sedimentation declines; sedimentary flows are also not always straightforward, shallow to deep.  As dense turbid water moves basinward it may in time encounter denser (colder) bottom water, at which point it may cease to move downhill and will also be deflected by the Coriolis force.  These flows may then contour the edge of the basin, eroding and depositing sediment as contourites.  If we now enter a glacial phase, ice sheets will build up over the continent and slowly advance to the edge of the continental shelf.  Where ice flow is concentrated in ice streams, big fans of sediment will extend the edge of the continental shelf; these are called trough-mouth fans.  The steepening of the continental slope is potentially a source of instability and therefore of failure.  The Storegga Slide is no exception, sitting at the mouth of the Norwegian Ice Stream which, at the height of the last glacial cycle, drained out of the Baltic.

To this sediment pile we need to add methane clathrate, in which methane is trapped in a crystal structure of water, forming a solid similar to ice – but one that can burn!  It occurs in various places around the Earth, one of which is at depth in ocean sediments.  Its instability, and therefore release, can be triggered by changes in ocean temperature and pressure (i.e. water depth via sea-level changes).  It has been implicated as a potential cause of failure of the Storegga Slide.

There are two issues worth exploring further here: the first is the implications of this tsunami record for future events in the North Atlantic, and the second is some potential climatic implications, given the global 8.2 K BP climate event.

Tsunamis in the future?

The Storegga Slide tsunami demonstrates the potential for such events to occur on passive continental margins.  It is not the only tsunami to impact on Britain.  There are other, more restricted sand layers that have been documented and dated, some of which are just 1.5 K years BP.  These are not as extensive as those of 7.9 K BP and their cause is uncertain.  There is an ever-present need to separate actual tsunami deposits from those of other coastal flood waves, such as tidal surges and storm waves.  In historical times the Lisbon earthquake of 1755 was associated with a modest tsunami which impacted on the UK’s southern coast.

Tsunamis can be modelled relatively easily, and a number of studies have attempted to marry up models with observed historical details.  The following YouTube link provides results from modelling the Storegga Slide.

So what are the chances of a tsunami similar to that of Storegga happening again?  The slide itself is now quite stable, but in truth it is just one location around the North Atlantic where large trough-mouth fans accumulated during the last glacial cycle.

The Grand Banks earthquake and tsunami of 1929, which has been modelled and reconstructed by Fine et al. (2005), involved the failure of the Laurentian Fan off Newfoundland and killed 28 people.  Failure of other trough-mouth fans is a real possibility.

Of greater impact might be a volcanic collapse in the Canary Islands (Ward and Day, 2001).  In particular, attention has focused on the potential for flank collapse of the Cumbre Vieja volcano.  Such a failure would drop between 150 and 500 cubic km of rock into the sea, setting off a tsunami that would have the potential to inundate the eastern seaboard of the USA.  The role of submarine slides around volcanic islands is reviewed by Whelan and Kelletat (2003).

While most authorities rank the tsunami risk in countries like the UK as small, it is not insignificant, and the impact on an unprotected coast without any warning systems or disaster plans in place could be quite catastrophic, especially for wind farms and other offshore infrastructure.

Storegga Slide and Climate?

When is a correlation between two events causative, and when is it just coincidence?  This is a particular problem in dealing with what a colleague of mine calls ‘wiggly line science’.  Take a couple of climate proxies with slightly different fidelity and one or more events; if you want them to line up then they tend to, and if you don’t they don’t.  More to the point, if they do show a degree of correlation, how do you know they are linked?

The Storegga Slide provides an example.  Beget and Addison (2007) tentatively proposed that the age of the Storegga Slide’s most recent episode corresponded to a small spike in methane within the Greenland GRIP ice core.  They suggested that the release of methane clathrate during the slide may have been the cause of this spike; in fact they suggest it might have helped the Earth recover from the global cooling event of 8.2 Ka.  It might provide an example of something known as the ‘clathrate gun hypothesis’: a rise in sea temperature and/or a fall in sea level causes the release of methane which, as a greenhouse gas, causes further warming and further methane releases.  Such a cycle has been implicated in the Paleocene–Eocene Thermal Maximum 56 million years ago, and perhaps the Permian–Triassic extinction event, when up to 96% of marine species became extinct.

The 8.2 K year cooling event was severe and appears to have been global.  It has been attributed to the partial shutdown of the thermohaline conveyor caused by the drainage of Lake Agassiz via a mega-flood (jökulhlaup).

Figure 1: Reconstruction of Doggerland.

More recent work has challenged this idea (Dawson et al., 2011).  First, analysis of Storegga Slide sediments suggests that little methane may actually have been released during the slide.  Secondly, revised dating suggests that the slide occurred during the cold spell and was not associated with the subsequent warming event.

Spare a thought for your ancestors living on Doggerland (Fig. 1).  You may have heard of Dogger Bank, if nothing else from the Shipping Forecast; it is a low-lying body of submerged land in the North Sea.  It was once home to a flourishing community.  The Storegga tsunami would have had a catastrophic impact on both the human and animal communities living there, sweeping across the low-lying terrain.


Bondevik, S., Løvholt, F., Harbitz, C., Mangerud, J., Dawson, A. and Svendsen, J.I., 2005. The Storegga Slide tsunami—comparing field observations with numerical simulations. Marine and Petroleum Geology, 22(1), pp.195-208.

Bondevik, S., Stormo, S.K. and Skjerdal, G., 2012. Green mosses date the Storegga tsunami to the chilliest decades of the 8.2 ka cold event. Quaternary Science Reviews, 45, pp.1-6.

Dawson, A.G., Long, D. and Smith, D.E., 1988. The Storegga slides: evidence from eastern Scotland for a possible tsunami. Marine geology, 82(3-4), pp.271-276.

Dawson, A., Bondevik, S. and Teller, J.T., 2011. Relative timing of the Storegga submarine slide, methane release, and climate change during the 8.2 ka cold event. The Holocene, 21(7), pp.1167-1171.

Haflidason, H., Lien, R., Sejrup, H.P., Forsberg, C.F. and Bryn, P., 2005. The dating and morphometry of the Storegga Slide. Marine and Petroleum Geology, 22(1), pp.123-136.

Haflidason, H., Sejrup, H.P., Nygård, A., Mienert, J., Bryn, P., Lien, R., Forsberg, C.F., Berg, K. and Masson, D., 2004. The Storegga Slide: architecture, geometry and slide development. Marine geology, 213(1), pp.201-234.

Smith, D.E., Shi, S., Cullingford, R.A., Dawson, A.G., Dawson, S., Firth, C.R., Foster, I.D., Fretwell, P.T., Haggart, B.A., Holloway, L.K. and Long, D., 2004. The Holocene Storegga Slide tsunami in the United Kingdom. Quaternary Science Reviews, 23(23), pp.2291-2321.

Whelan, F. and Kelletat, D., 2003. Submarine slides on volcanic islands – a source for mega-tsunamis in the Quaternary. Progress in Physical Geography, 27(2), pp.198-216.

Ward, S.N. and Day, S., 2001. Cumbre Vieja Volcano – potential collapse and tsunami at La Palma, Canary Islands. Geophysical Research Letters, 28(17), pp.3397-3400.



Snowball or slush ball?

A global glaciation sounds like something out of a disaster movie in which the intrepid heroine has to save the world.  We associate glaciers and ice with the poles, perhaps with high tropical mountains like Kilimanjaro, but not with the tropics in general – they are too warm, verdant with vegetation, and the coasts have golden sand and coral reefs, at least according to the travel brochures.  As early as the turn of the twentieth century the Antarctic explorer Douglas Mawson (1882-1958; Fig. 1) was finding evidence in Australia for Precambrian glaciations at low latitudes.  Throughout the twentieth century tillites, fossilised glacial tills, were increasingly found at low palaeo-latitudes.  Only with the advent of the plate tectonic paradigm did geologists have the framework to reconstruct past continental palaeo-geographies and with it assess the true implications of these low-latitude tillites.  The radical idea of a global glaciation – poles and tropics – was proposed in the 1990s and Snowball Earth was born.  As a concept it has been linked to the origins of multicellular life, and for some it represents the closest the Earth has come to extinguishing all life, but for others the paradigm rings hollow.  Fierce debate has ensued, illustrating the often combative nature of science.


Figure 1: Douglas Mawson (1882-1958), professional geologist and leader of the Australasian Antarctic Expedition in 1912.  The story of the expedition is recorded in Mawson’s The Home of the Blizzard, with photographs by Frank Hurley.

Preservation of tills and diamictons

Imagine a kitchen table piled high with cake mix – preferably an almond cherry cake.  Using your arm you sweep the mix into a cake tin; some falls onto the floor and some is left on the table.  The only cake mix to survive, assuming you clean up, is the cake in the tin (or basin).  It is the same with sediments: land erodes and sediment is preserved in depositional basins, usually marine basins, where it is lithified and folded by plate tectonics.  As a consequence, much of our understanding of Earth history comes from the marine record.  The preservation of evidence for icehouse periods (ice ages) prior to the Cenozoic (i.e., the last 65 million years) is no exception.

The record of these pre-Cenozoic glaciations comes in the form of diamicton.  What is diamicton, you might well ask?  It is a non-genetic term for poorly sorted sediment with a wide range of grain sizes present (Fig. 2).

Figure 2: Right, diamictite – fossilised diamicton. Left, dropstones.

The difference from a till (tillite when lithified) is one of semantics; a till is deposited directly by glacier ice, while a diamicton could be deposited by a landslide or a glacier (Fig. 3).  Given that much of the pre-Cenozoic record is marine, the glacier is rarely directly involved in deposition, hence the use of the word diamicton.  In most cases the glacier or ice sheet simply supplies debris to the edge of the continental shelf, where it is re-sedimented by a range of underwater mass movements (subaqueous sediment gravity flows).  Semantics, you might say?  To a certain extent yes, but diamictons can form from a range of different processes, glacial and non-glacial (Fig. 3).  The presence of clasts with striations may help show a glacial influence but is not always diagnostic.  Dropstones may help (Fig. 2): if the sediments are fine grained and suddenly you find a huge pebble or cobble, you have what is known as a hydrodynamic paradox.  The flow rate needed to move the pebble would not allow the fine sediment to be deposited.  The paradox is resolved if the pebble was dropped in from above via a melting iceberg.


Figure 3: Various processes that can form diamictons in the geological record.  Note that glacial processes are just one way.  (Source: Eyles and Januszczak 2004).

The Cambridge geologist Brian Harland (1917-2004) gathered evidence from throughout the world, including Scotland, and proposed a great ‘infra-Cambrian’ glaciation which extended to low latitudes (Harland, 1964; Fig. 4).  The evidence for low-latitude glaciation comes from the palaeomagnetic record, with important evidence eventually emerging from Australia.  Ferrous minerals become aligned with the Earth’s magnetic field during deposition, and their inclination and declination can be used to reconstruct palaeo-latitudes.  Figure 5 shows the emerging evidence.  There were a number of key issues:

  1. Did the tropics really freeze? Was the Earth completely frozen, oceans and land?  If so, how did life survive?  This is about the reliability of the palaeo-latitude studies and the dating of those deposits; there need to be lots of samples showing low-latitude locations, all of a similar age.  Precise dating, so far back in Earth history, is not easy and often involves error margins of plus or minus tens of millions of years.
  2. The second issue is all about mechanism: how and why did the Earth become frozen? And more to the point, how did we escape the deep freeze?

Figure 4: History of ice on Earth. Note that ‘infra-Cambrian’ was an old-fashioned term used for the Neoproterozoic, which is the most recent part of the Precambrian.  (Source: Craig et al., 2009)

Snowball Earth

Work in the 1960s by Mikhail Budyko, largely ignored at the time, was reconsidered in the 1990s by those advocating global glaciation.  He modelled something that is now referred to as the ‘runaway albedo scenario’.  Albedo is the term we use for the reflectivity of the Earth’s surface: snow and ice (high albedo) reflect more solar radiation back to space than dark forest or ocean (low albedo).  We can imagine a feedback loop in which colder temperatures lead to more ice and snow and in turn to higher albedo.  Higher albedo means more solar radiation is reflected back to space, causing temperatures to fall further and leading to more snow and ice.  The key contribution of Budyko (1969) was to realise that this feedback loop was potentially unstoppable once glaciation proceeded equatorward of 30 degrees of latitude.
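The feedback loop can be caricatured in a few lines of code.  This is emphatically not Budyko’s actual model – every parameter below is invented, and the threshold is built in by construction – but it shows how an ice-albedo loop produces a point of no return:

```python
# Toy ice-albedo feedback (illustrative parameters only, not Budyko's
# equations).  Planetary albedo rises as the ice line moves equatorward;
# a higher albedo lowers temperature, which pushes the ice line further
# equatorward still.  Below a critical ice-line latitude the loop runs away.

def simulate(ice_lat, steps=100):
    """Iterate the feedback from a starting ice-line latitude (degrees)."""
    for _ in range(steps):
        albedo = 0.3 + 0.3 * (90 - ice_lat) / 90  # ice-free 0.3 -> ice-covered 0.6
        temp = 100 * (1 - albedo) - 50            # crude global temperature index
        ice_lat = min(90.0, max(0.0, ice_lat + 0.5 * temp))  # warm: retreat; cold: advance
    return ice_lat

print(simulate(40))  # ice line above the threshold: retreats to the pole (90.0)
print(simulate(29))  # below ~30 degrees: runaway to a fully frozen planet (0.0)
```

In this toy the fixed point sits at 30 degrees and is unstable, so any ice line that creeps below it slides all the way to the equator, which is the essence of the runaway albedo argument.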

In a short paper in 1990 Joseph Kirschvink first coined the term ‘Snowball Earth’.  The concept was later developed and expanded by Paul Hoffman and his co-workers examining glacial deposits, primarily in Namibia (e.g., Hoffman et al., 1998).  Gabrielle Walker popularised the idea in a wonderful piece of science writing (Snowball Earth; Walker, 2003).


Figure 5: Palaeomagnetic (latitude) evidence for low-latitude glaciation in the late Precambrian (Neoproterozoic).

The basic idea runs something like this (Fig. 6):

Step One: Enhanced continental weathering causes a draw-down of carbon dioxide, a greenhouse gas, which results in global cooling.  This enhanced weathering may have been associated with the break-up of an early supercontinent (called by some Rodinia).  The rifting uplifts the continental margins, revealing fresh silicates for weathering.

Step Two: Glaciation initiates and is accelerated by albedo feedback.  Once it proceeds below 30 degrees of latitude it becomes unstoppable and the planet and its oceans freeze.  Life may survive in the deepest oceans, where the lack of oxygen causes the deposition of ironstones.

Step Three: With no exposed rock to weather there is nothing to counter the build-up of carbon dioxide out-gassed by volcanoes.  Greenhouse gases build up progressively over millions of years.

Step Four: Greenhouse gases force the planet into a super-hot state which breaks it free of ice.  Deglaciation occurs, and intense weathering of glacial sediments by acid rain leads to the deposition of carbonate layers in the oceans.

The hypothesis has generated a huge amount of debate and research (Fairchild and Kennedy, 2007).  One of the challenges is the absence of a suitable modern analogue: ‘the present is the key to the past’ does not apply.  Glacial processes today are very different from a global planetary freeze.  This is the crux of the issue in many ways, since Snowball Earth invokes conditions ‘unknown’, at least on this planet.  Glacial geologists hit back, interpreting the sedimentary record in terms of processes operating today and finding evidence for fluctuating ice margins.  Others attacked the dating, and specifically the evidence for synchronicity in time of the palaeo-latitude evidence.  This debate has raged for almost 20 years.  So where does it stand now?

Figure 6: Snowball Earth.

Snowballs, slush balls and zippers

Fairchild and Kennedy (2007) elegantly reviewed the state of the debate (Fig. 7).  They made reference to work suggesting that an alternative explanation for the low-latitude glaciation was a shift in the axial tilt of the Earth (Fig. 8).  The evidence for such instability in the axial tilt is not, however, convincing, although it remains a possible alternative explanation.  Others have argued about the degree to which the glaciation was ‘complete’ and have coined the term ‘Slushball Earth’ to accommodate such an idea.

However, by far the most convincing alternative explanation has been proposed by Eyles and Januszczak (2004).  Their ‘zipper-rift’ model suggests that the record for direct glaciation is more limited than is supposed; that much of the record consists of large subaqueous mass flows into rifted basins; and that glaciation did occur at low latitude but was diachronous, reflecting local uplift-related cooling rather than global glaciation.  The key to this was the rifting of Rodinia, the supercontinent that formed and broke up in the Neoproterozoic (Fig. 9).  Today the East African Rift illustrates this process.  Here two plates are pulling apart: it starts with a terrestrial rift valley, which is ultimately flooded to form a shallow sea that will in time become an ocean if sea-floor spreading occurs.  The rift flanks are uplifted by the buoyant warm volcanic rocks which rise to the surface to initiate the rift.  The same process occurred along the western seaboard of Europe as the North Atlantic opened.  Large-scale rifting associated with the break-up of a supercontinent causes widespread uplift and weathering.  This draws down atmospheric carbon dioxide, causing cooling.  Local ice caps form on the rift flanks, associated with this cooling and the uplift.  They supply glacial debris to the growing rift basins (Fig. 9), which are also tectonically unstable and therefore associated with olistostromes (i.e., large mass movements; Fig. 3).

The key to this model is that the rifting and the deposition of the sediments occur by known geological processes (i.e. ones active today) and are diachronous.  That is, the same types of deposits and sedimentary sequences occur in different locations but at slightly different times.  There is also no need to invoke extreme climate events or global glaciation.


Figure 7: Alternative hypotheses for low latitude glaciation according to Fairchild and Kennedy (2007).


Figure 8: Axial tilt as a mechanism for low latitude glaciation.  The right panel shows the distribution of solar radiation with different degrees of axial tilt.  (Source: Fairchild and Kennedy, 2007).


Figure 9: Breakup of Rodinia about 750 Ma and the associated sedimentary model (Source: Eyles and Januszczak, 2004).

As the debate has continued there has been a gradual move to recognise more conventional (i.e. currently operating) geological processes as the key to interpreting these records.  In some ways one could call it a triumph for uniformitarianism over catastrophism.  While the heat in the debate is beginning to cool, it would be unfair to suggest that it is over or that ‘zippers’ have vanquished the snowballs.


Budyko, M.I., 1969. The effect of solar radiation variations on the climate of the earth. Tellus, 21(5), pp.611-619.

Craig, J., Thurow, J., Thusu, B., Whitham, A. and Abutarruma, Y., 2009. Global Neoproterozoic petroleum systems: the emerging potential in North Africa. Geological Society, London, Special Publications, 326(1), pp.1-25.

Eyles, N. and Januszczak, N., 2004. ‘Zipper-rift’: a tectonic model for Neoproterozoic glaciations during the breakup of Rodinia after 750 Ma. Earth-Science Reviews, 65(1), pp.1-73.

Fairchild, I.J. and Kennedy, M.J., 2007. Neoproterozoic glaciation in the Earth System. Journal of the Geological Society, 164(5), pp.895-921.

Harland, W.B., 1964. Critical evidence for a great infra-Cambrian glaciation. International Journal of Earth Sciences, 54(1), pp.45-61.

Hoffman, P.F., Kaufman, A.J., Halverson, G.P. and Schrag, D.P., 1998. A Neoproterozoic Snowball Earth. Science, 281(5381), pp.1342-1346.

Kirschvink, J.L., 1992. Late Proterozoic low-latitude global glaciation: The snowball Earth. In: Schopf, J.W. and Klein, C. (eds), The Proterozoic Biosphere: A Multidisciplinary Study. Cambridge University Press, pp.51-52.

Kirschvink, J.L., 2002. When all of the oceans were frozen. La Recherche, 355, pp.26-30.

Hoffman, P.F. and Schrag, D.P., 2002. The snowball Earth hypothesis: testing the limits of global change. Terra Nova, 14(3), pp.129-155.

Smith, A.G., 2009. Neoproterozoic timescales and stratigraphy. Geological Society, London, Special Publications, 326, pp.27-54.

Walker, G., 2003. Snowball Earth. Bloomsbury Publishing. ISBN 0-7475-6433-7.


Ploughs, plagues and the Anthropocene

What has the Black Death got to do with global warming?  You might well ask, but there is a theory that links the early onset of global warming to fluctuations in human population caused by plagues.  This radical idea was put forward by William Ruddiman in his book Plows, Plagues and Petroleum and has added another dimension to our understanding of the Anthropocene.  Geological time is divided into a hierarchy of units; the current period, the Quaternary, started some 2.6 million years ago with the onset of the last ice age.  Some have argued that we should formally recognise a new geological epoch called the Anthropocene – the period of human impact.  Now geological boundaries are defined by type sections where the international geological community agrees to place an imaginary Golden Spike at the precise point between the two units.  But where should one place the Golden Spike for the start of the Anthropocene?  Should it be a sequence in which you can detect the residue of the first atomic tests?  Should it be the first appearance of heavy metals and coal dust associated with the Industrial Revolution?  Or should it be much earlier, perhaps with the first forest clearances?

Global Warming

Hot things give out shorter wavelength radiation than cold things.  So the radiation from the Sun is short-wave; it heats the Earth, which, being much cooler, re-radiates at a much longer wavelength.  Different gases absorb radiation at different wavelengths.  So-called greenhouse gases (e.g., methane, carbon dioxide and water vapour) allow short-wave radiation to pass through the atmosphere but absorb the longer wavelengths.  As they absorb this radiation the atmosphere heats up and in turn radiates heat back to Earth, causing it to warm.  While the media is fixated on carbon dioxide it is important to recognise that, molecule for molecule, other greenhouse gases have a greater impact.  For example, methane is roughly 25-30 times more effective than carbon dioxide over a century; however, it has a shorter residence time in the atmosphere before being broken down.
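
The wavelength contrast in that first sentence is easy to put into numbers with Wien's displacement law (peak emission wavelength = b/T).  A minimal sketch, using standard approximate temperatures for the Sun and the Earth's surface (the function name is mine, purely illustrative):

```python
# Peak emission wavelength via Wien's displacement law: lambda = b / T.
# Temperatures are standard approximations (Sun ~5778 K, Earth ~288 K).
WIEN_B = 2.898e-3  # Wien's displacement constant, metre-kelvin

def peak_wavelength_um(temperature_k: float) -> float:
    """Return the peak emission wavelength in micrometres."""
    return WIEN_B / temperature_k * 1e6

print(f"Sun:   {peak_wavelength_um(5778):.2f} um (visible light)")
print(f"Earth: {peak_wavelength_um(288):.1f} um (thermal infrared)")
```

The Sun peaks at about 0.5 µm, the Earth at about 10 µm – and it is that ~10 µm band that greenhouse gases absorb so effectively.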

The commonly held view is that global warming commenced with the industrial revolution and the burning of fossil fuels.  But is this correct?

Early onset of global warming and delayed glaciation?

Ruddiman (2005) put forward the controversial idea that human impact on the atmosphere may have started much earlier.  The argument starts with methane.

So-called Milankovitch orbital variations have modulated the beat of climate change during the Quaternary.  In particular the 22,000-year orbital precession cycle influences the amount of solar radiation received in the tropics and therefore the intensity of the summer monsoon in both Asia and Africa.  The summer monsoon is driven by the rapid summer heating of the land relative to the ocean.  The enhanced summer rainfall saturates tropical wetlands and enhances methane production.  The link between the precession of the equinoxes and the monsoon has been demonstrated via General Circulation Models (GCMs).  We can monitor the global methane content in the past by sampling the methane trapped in ice cores.  As shown in Figure 1, total methane tracks the orbital variations which determine the solar radiation received by the Earth.

Figure 1: The relationship between methane and solar radiation.  From: Ruddiman (2005)

The Quaternary Ice Age consists of a series of glacials interspersed with interglacials.  For approximately the last million years the interglacials have been roughly 10,000 years long and the glacials 100,000.  The last glacial (end-Pleistocene) ended about 10,000 years ago.  One could therefore argue that the Holocene, the current interglacial, is due to end soon!  In fact the next glacial should be imminent.

If we use a past interglacial to predict what the methane record for the current interglacial should look like, something interesting arises (Fig. 2).  There is a clear departure between the two records, and the question is why?

Ruddiman (2005) reviewed the various sources of methane and, having done so, tentatively suggested that the departure was down to humans – specifically the irrigation of paddy fields and the domestication of cattle, both of which are significant methane producers.  Is this evidence of the early impact of humans on our climate?


Figure 2: Solar radiation and observed methane. From: Ruddiman (2005)

Looking at the same question for carbon dioxide is much harder because the controls on atmospheric carbon dioxide are much more complex (Fig. 3).  Ruddiman (2005) also sees evidence for an early rise in carbon dioxide, beginning as early as 8,000 years ago.  He ascribes this to forest clearance and the development of widespread agriculture.  The real question, and the one that has been hotly debated, is whether so few people could have had such an impact on carbon dioxide.  Ruddiman (2005) believes that they could, and uses such sources as the 1086 Domesday Book to estimate the amount of forest clearance and therefore the amount of carbon dioxide that could have been released.

Figure 3: Observed and predicted carbon dioxide levels.  From: Ruddiman (2005)

The idea remains contentious, and not all authorities believe that so few people (the global population was small) could have had such a large and profound effect.  Notwithstanding this, Ruddiman (2005) went on to explore the implications of what he had found, specifically the role these early rises in methane and carbon dioxide might have played in delaying the onset of the next glacial cycle.  Using a GCM he calculated the onset point for the development of a small ice sheet in northern Canada.  The implication is that the early human impact on the global atmosphere may have delayed this event and led to a more stable climate during the last 2,000 years.  Heretical as it may sound, the argument runs that global warming may have been a good thing until recently, providing the climatic stability that has helped global civilisation develop.


Figure 4: Schematic showing actual and predicted temperatures with the glaciation threshold clearly shown. From: Ruddiman (2005)

Ruddiman (2005) went even further; he argued that if forest clearance was the key, it must have varied with fluctuations in the human population.  As the population rose, more land was cleared and more carbon dioxide was released.  As it fell, for example due to pandemics, land would have been recolonised by trees, consuming carbon dioxide.  As shown in Figure 4, a severe pandemic could have brought global temperatures close to the glaciation threshold.

The bullet points below give some idea of the frequency of pandemics prior to 1500.

  • Plague of Athens, Greece, 429–426 BC: 75,000–100,000 deaths; cause unknown, possibly typhus.
  • Antonine Plague, Europe, Western Asia and Northern Africa, AD 165–180: 5 million deaths (30% of the population in some areas); cause unknown, symptoms similar to smallpox.
  • Plague of Cyprian, Europe, AD 250–266: cause unknown, possibly smallpox.
  • Plague of Justinian, Europe, AD 541–542: 25–50 million deaths (40% of the population); plague.  Further outbreaks of plague struck the British Isles in 664–668 (the Plague of 664) and 680–686.
  • Black Death, Europe, 1346–1350: 75–100 million deaths (30–60% of the population); plague.

Perhaps the best evidence comes from the colonisation of the Americas, which was associated with massive pandemics among native populations exposed for the first time to European diseases such as smallpox.  Here there is some evidence for a correlation between declining populations and both carbon dioxide and methane levels (Nevle et al., 2011).  The problem here, however, is always one of distinguishing coincidence from causality.

Notwithstanding the radical nature of his hypothesis, Ruddiman (2005) coherently argues that humans were having a profound impact on the Earth’s climate long before the industrial revolution.  While the impact of global warming on the future of our planet may be severe, the irony is that it may actually have helped stabilise climate in the past, facilitating growth in the human population.

Golden Spike for the Anthropocene?

There is no doubt that human activity is modifying our environment, and as such there is a compelling case for the recognition of the Anthropocene.  Steffen et al. (2011) provide an excellent review of some of the issues around defining the Anthropocene and what it means.  Voosen (2016) also provides an interesting blog post on the subject.  You might also like to check out the new scientific journal Anthropocene, published by Elsevier.  Should the Anthropocene be formally recognised by the international body that governs geological stratigraphy, the key question becomes: when did it start?  Or, to put it in geological speak, where do we place the Golden Spike (Fig. 5)?

Brown et al. (2016) provide a review of the range of geomorphological impacts associated with the Anthropocene.  The numerous illustrations clearly demonstrate the impact; the problem is that they are all diachronous.  Diachronous (Greek dia, through + chronos, time) refers to a geological unit which looks the same but varies in age from place to place.  For example, the impact of humans on a river system may vary from catchment to catchment depending on when humans started to modify the river at each location.


Figure 5: Most formally identified geological boundaries are demarcated by an imaginary spike, although some, like this one near Pueblo, Colorado, are physically marked.  (GSSP is the acronym for Global Stratotype Section and Point.  Source: Brad Sageman, Northwestern University).

Some of the possible markers include (Smith and Zeder, 2013):

  • 1950s: artificial radionuclides widely present in sediments, caused by atomic detonations (Zalasiewicz et al., 2015).
  • 1750–1800: increase in methane and carbon dioxide due to the Industrial Revolution (Crutzen, 2002).
  • 2000 BC: evidence of human ecosystem engineering (Certini and Scalenghe, 2011).
  • 5,000–4,000 BC: methane spike from increased wet rice agriculture and cattle raising (Fuller et al., 2011).
  • 8,000–5,000 BC: methane and carbon dioxide rise due to forest clearance and wet rice cultivation (Ruddiman, 2005).
  • 11,000–9,000 BC: emergence of human constructions and animal domestication.
  • The current Pleistocene–Holocene boundary is set at 11,700 years before present.
  • 13,800 BC: mass extinction of Quaternary megafauna, potentially caused by humans.

The debate rages on, and a flavour of it can be gained from articles such as Lewis and Maslin (2015) and Monastersky (2015).  One might, however, usefully ask: does it matter?

In a geological context the answer would normally be yes; defining the start of the Jurassic is quite a big deal for stratigraphers.  It allows people to organise their local outcrops within a global framework agreed by all geologists working on similar deposits.  However, the Anthropocene is the current interval, and therefore defining its boundary is less important.  In a million years’ time it might matter to a future geologist wanting to correlate events, but for now it is much more of an academic debate (Fig. 6).  Papers are there to be written and reputations made – the rather base driver for a lot of scientific endeavour – so the debate is very current and generating a lot of papers.  Its importance, however, lies not in a defined period but as an umbrella for scientific endeavour.  For example, there are a lot of Quaternary scientists out there and they have a plethora of journals and academic courses dedicated to them.  The Anthropocene offers the same potential framework for interdisciplinary scientists – ecologists, geologists, and environmental scientists – all working on the impact of humans on our planet.  The recent launch of a number of Anthropocene journals is perhaps the start of this movement.

Figure 6: Different options for the Anthropocene.  The left-hand panel shows the current situation and on the right are the two potential options.  (Source: Lewis and Maslin, 2015)


Brown, A.G., Tooth, S., Bullard, J.E., Thomas, D.S., Chiverrell, R.C., Plater, A.J., Murton, J., Thorndycraft, V.R., Tarolli, P., Rose, J. and Wainwright, J., 2016. The geomorphology of the Anthropocene: emergence, status and implications. Earth Surface Processes and Landforms.

Smith, B.D. and Zeder, M.A., 2013. The onset of the Anthropocene. Anthropocene, 4, pp.8-13.

Certini, G. and Scalenghe, R., 2011. Anthropogenic soils are the golden spikes for the Anthropocene. The Holocene, 21.

Fuller, D.Q., Van Etten, J., Manning, K., Castillo, C., Kingwell-Banham, E., Weisskopf, A., Qin, L., Sato, Y.I. and Hijmans, R.J., 2011. The contribution of rice agriculture and livestock pastoralism to prehistoric methane levels: An archaeological assessment. The Holocene, 21.

Lewis, S.L., Maslin, M.A., 2015. Defining the anthropocene. Nature 519, 171-180.

Monastersky, R., 2015. Anthropocene: the human age. Nature 519, 144-147.

Nevle, R.J., Bird, D.K., Ruddiman, W.F. and Dull, R.A., 2011. Neotropical human-landscape interactions, fire, and atmospheric CO2 during European conquest. The Holocene, 21.

Ruddiman, W.F., 2005. Plows, Plagues, and Petroleum: How Humans Took Control of Climate. Princeton University Press.

Steffen, W., Persson, Å., Deutsch, L., Zalasiewicz, J., Williams, M., Richardson, K., Crumley, C., Crutzen, P., Folke, C., Gordon, L. and Molina, M., 2011. The Anthropocene: From global change to planetary stewardship. Ambio, 40(7), pp.739-761.

Voosen, P., 2016. Geologists drive golden spike toward Anthropocene’s base.

Zalasiewicz, J., Waters, C.N., Williams, M., Barnosky, A.D., Cearreta, A., Crutzen, P., Ellis, E., Ellis, M.A., Fairchild, I.J., Grinevald, J. and Haff, P.K., 2015. When did the Anthropocene begin? A mid-twentieth century boundary level is stratigraphically optimal. Quaternary International, 383, pp.196-203.

What’s in a bell curve?

Geographers like statistics; they allow us to test for significant differences between samples and to look for empirical relationships between different variables.  Most statistics rely on probability and a definition of what ‘normal’ looks like.  We exclude the outliers – data that appear to be anomalous or outside the bounds of probability (Fig. 1).  As such the focus is always on comparing normal with normal.  Was the Boxing Day Tsunami of 2004 normal?  Was it predictable?  If so, why did so many people die?  The answer lies largely in the time-scale with which one views Earth processes.  In his influential book The Black Swan, Nassim Taleb describes such events as outliers and goes on to argue that they are more significant than conventional statistics and science give them credit for.  In geographical terms this speaks to an early debate between those who favoured uniformitarianism and those who believed in catastrophism.


Figure 1: Normal probability curve showing confidence limits and potential outliers.
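
To put rough numbers on ‘outside the bounds of probability’, here is a minimal sketch (a standard normal model, purely illustrative) of how rarely observations fall beyond the usual confidence limits:

```python
from statistics import NormalDist

# Two-tailed probability of an observation falling more than k standard
# deviations from the mean of a normal distribution.
nd = NormalDist()  # standard normal: mean 0, standard deviation 1
for k in (1, 2, 3):
    p = 2 * nd.cdf(-k)  # both tails
    print(f"beyond ±{k} sd: {p:.2%}")
```

Under this model an observation beyond ±3 standard deviations turns up less than three times in a thousand – which is exactly why treating a tsunami as an outlier and discarding it is so tempting, and so dangerous.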

Early Geographical-Geological Ideas

Early geological thought was preoccupied with trying to reconcile religious beliefs (i.e. the account in Genesis) with emerging ideas on the antiquity of the Earth.  Many early scientists were profoundly religious, yet found their faith challenged by the observations and inferences they made.

This was particularly true of James Hutton (1726-1797), who many would describe as the founder of geology.  He gave us the concept of uniformitarianism – the present is the key to the past.  Basically, the processes that we observe today, however slow, must have always operated.  Since the Earth processes he could observe on his estate in the Southern Uplands of Scotland were slow, the Earth must be much older than suggested in the Bible.  He challenged head-on the idea that forces in the past must have been different from the present.  Catastrophism invoked exactly that – processes must have been more active, powerful or different in the past than in the present.  The latter requires much less time than the former and is easier to reconcile with religious estimates of the age of the Earth and such events as the biblical deluge.

Charles Lyell (1797-1875), a friend and contemporary of Charles Darwin, codified uniformitarianism in the nineteenth century.  He envisaged not only the uniformity of process but also the uniformity of rate.  The fundamental physics of our universe has not changed through Earth history; gravity, light, heat transfer and motion are governed by fundamental physical laws.  The processes by which land erodes and sediment is transported and deposited should therefore not have changed.  The uniformity of process is thus built on a sound premise, but what about the uniformity of rate?

Let us imagine rain falling on a slope devoid of vegetation because vegetation had yet to evolve.  Vegetation is good at binding soil and sediment together and resisting erosion.  So erosion rates may have been faster before terrestrial vegetation evolved.  Is this not an example of potentially varying rates?  The past may not have been quite the same as the present.  And what about catastrophic events like the occasional tsunami?

Humans tend to see things on human rather than geological timescales and set the temporal window accordingly.  Tsunamis occur quite frequently on geological timescales (hundreds of thousands to millions of years), but not on human scales of one or two generations (living memory).  So an event is only a Black Swan if we restrict the timescales over which we view things.  We capture this idea with the return period of events.  For example, an observed flood can be described as the one-in-20-year event.  It means that in the normal course of events a flood of this magnitude occurs once in 20 years – once in a generation, or with a probability of one in twenty each year.  A larger event may have a return period of one in a hundred years; this does not mean that it will happen next year if it has not happened in the last 99 years.  It is just a probability statement.  The point here is that the longer the time period sampled, the larger the events that may occur within it.  So while an event like the Boxing Day Tsunami has a relatively high frequency when sampled over a thousand years – maybe four or five such events – your average person works on ‘living memory’.  Since most humans can’t conceive of a thousand years we tend to work on generational time: the largest event in living memory.  This blinds us to the hazard that has not happened in living memory but is waiting; that is the Black Swan event.  The importance of temporal sampling is shown in Figure 2.  Only time sample B or E would capture the anomalously large event.  It is one of the reasons why good historical and geological records of event frequency are so important in assessing true risk.

Figure 2: The effect of temporal sampling on magnitude assessments.
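
The return-period arithmetic is easy to check for yourself.  A short sketch (the window lengths are hypothetical, and it assumes each year is independent):

```python
# Probability of witnessing at least one event of a given return period
# within an observation window, assuming independent years.
def p_at_least_one(return_period_years: float, window_years: int) -> float:
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** window_years

# A 1-in-100-year event viewed over a generation vs. a millennium:
print(f"30 years:   {p_at_least_one(100, 30):.0%}")    # about 26%
print(f"1000 years: {p_at_least_one(100, 1000):.3%}")  # near certainty
```

A single generation has only about a one-in-four chance of ever witnessing the ‘hundred-year’ event, which is precisely why living memory is such a poor guide to hazard.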

Magnitude and Frequency

This leads us to the question of the relative impact of catastrophic versus non-catastrophic geomorphological events.  Does a large infrequent event have more geomorphological impact than a small but frequent event?

Imagine a large pile of sand and two students.  One has a teaspoon but no smartphone and diligently moves a spoonful once every minute.  The other student has a spade and a smartphone.  While the spade moves more per shovelful than the spoon, the student is too busy texting and looking for Pokémon Go creatures to move more than one shovelful an hour.  Who will shift the most sand in 24 hours?  The chances are the student with the spoon will win out; each spoonful is a low-magnitude event, but it occurs with a high frequency.


Figure 3: The concept of magnitude and frequency in geomorphology.

This is the concept of magnitude and frequency in geomorphology discussed by Wolman and Miller (1960).  Low-magnitude but high-frequency processes have the potential to do more geomorphological work than high-magnitude but low-frequency events.  Essentially there is a trade-off between frequency and magnitude, as illustrated in Figure 3.  The classic example is soil creep – the downslope movement of individual soil particles as the ground expands and contracts with temperature and moisture changes.  Because soil creep occurs continuously, some researchers have argued that its impact on slope morphology is greater than that of large mass movements.  There are a number of good illustrations of this type of balance to be had.
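
The trade-off reduces to a simple product of magnitude and frequency.  The numbers below are invented purely to illustrate how a small, frequent process can out-work a large, rare one:

```python
# Total geomorphological work = magnitude per event x number of events.
# All figures are hypothetical, chosen only to illustrate the trade-off.
def total_work(magnitude_per_event: float, events_per_year: float,
               years: float) -> float:
    return magnitude_per_event * events_per_year * years

# Soil creep: tiny movements (0.5 units) happening 1,000 times a year.
creep = total_work(0.5, 1000, 100)         # 50,000 units per century
# Large landslide: 10,000 units, but only once a century.
landslide = total_work(10_000, 0.01, 100)  # 10,000 units per century
print(creep > landslide)
```

With these (entirely made-up) numbers the low-magnitude, high-frequency process does five times the work of the catastrophic one.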

The concept is also nicely illustrated by coastal erosion rates.  The classic example comes from a coastal town called Walton on the Naze in Essex, just north of London.  Figure 4 shows the erosion rates along a stretch of coast below the Naze Tower.  The rates of recession are more variable in the south and accelerate toward the north as the cliff height falls.  This feels a little counter-intuitive; surely bigger cliffs should erode faster?  The answer lies with the magnitude and frequency of the events doing the erosion.  In the south the cliffs are taller and fail periodically by large rotational mass movements founded in the London Clay at the base of the cliff.  The debris acts as a natural sea defence and must be eroded before the cliff can be undercut and steepened again by the sea, ready to fail again.  So the events are large – 20 to 30 metres of cliff being lost each time – but they only occur once every 30 to 40 years, so on average the recession rate is modest (Fig. 5).  Contrast this with the cliffs in the north (Fig. 4).  Here the cliffs are only a metre or so high.  The sea is able to erode the base of these cliffs almost continually, and when they fail the mass movements are small with little debris involved.  These low-magnitude events occur frequently (in fact almost continuously during high seas) and consequently erosion is fast.  Figure 6 shows this conceptually and is a model that holds for lots of coastal areas around the UK and elsewhere.  There are some good papers based on this case study; the original, by Murray Gray, was published in the Proceedings of the Geologists’ Association.


Figure 4: Coastal recession at Walton on the Naze.  Based on Bennett and Doyle (1997).

Figure 5: Periodic coastal erosion via mass movements.


Figure 6: The concept of magnitude and frequency applied to coastal erosion rates in Britain.  Based on Bennett and Doyle (1997) and Cosgrove et al. (1998).
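
The contrast between the two stretches of cliff can be sketched as a long-term average recession rate.  The southern figures are taken loosely from the description above (roughly 25 m lost every 35 years); the northern small-event numbers are invented for illustration, so treat the whole thing as a sketch rather than measured data:

```python
# Long-term average recession = metres lost per event x events per year.
# Numbers are illustrative, not survey data.
def mean_recession_m_per_yr(metres_per_event: float,
                            events_per_year: float) -> float:
    return metres_per_event * events_per_year

# South: ~25 m rotational slips roughly every 35 years.
south = mean_recession_m_per_yr(25, 1 / 35)  # about 0.7 m/yr
# North: small ~0.2 m falls, say five times a year (hypothetical).
north = mean_recession_m_per_yr(0.2, 5)      # 1.0 m/yr
print(f"south: {south:.2f} m/yr, north: {north:.2f} m/yr")
```

Even though each southern failure is a hundred times bigger than a northern one, the frequent small falls can still produce the faster average retreat.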

To this we must add the concept of landscape relaxation.  That is not kicking back in a nice landscape, but the time it takes to erase the evidence of a geomorphological event.

The last glacial cycle in Britain was a high-impact event lasting about 100,000 years.  Its impact on the British landscape was profound and we can still see the evidence of it in the landscape ten thousand or more years later.  (As an aside, the impact of the Anglian Glaciation – the antepenultimate glaciation – was so profound that it is still visible today in the form of the north–south UK river pattern and the existence of the Wash.)  The relaxation time from this high-magnitude event is considerable.  Now consider the beach cusps that are often present on Bournemouth beach.  They form over a single tidal cycle, and can be destroyed over another.  As such they are a lower-magnitude event, with a correspondingly shorter relaxation time.

So while low magnitude, high frequency events, might do more geomorphological work the legacy of such events may be much more transitory.  Catastrophic events may occur infrequently, but their impact on the landscape may be correspondingly greater.

Rhetoric as a guide in shaping/reviewing a piece of communication

Rhetoric hails from Ancient Greece and, along with grammar and logic, is one of the three arts of discourse.  At its simplest it is the art of effective (or persuasive) speaking and/or writing and a key to effective communication.  The word is now often used, however, in a derogatory way: ‘language designed to have a persuasive or impressive effect, but which is often regarded as lacking in sincerity or meaningful content’.  While pretentious speaking and/or writing is not for the modern scientist, at least in my view, the basic elements of Aristotle’s rhetoric – logos, pathos, and ethos – have some value.  A speaker or writer does well to consider these three elements, which form the rhetorical triangle (Fig. 1).

Figure 1: The rhetorical triangle.

We can start at any point, so let us consider first the writer/speaker’s perspective (ethos).  Fundamentally, your audience/readers want to know what your motives are in communicating with them.  Are you providing information?  Trying to educate or call them to action?  Is your aim simply to entertain, or to change hearts and minds?  The identity of the speaker/writer is also important to the audience and impacts on the argument.  Who are you and what are your credentials for speaking out on this subject; where is your mandate to speak and where does your authority come from?  At its simplest this may be about setting out your experience/qualifications, your role in any debate and/or demonstrating that you have knowledge of a subject.  The literature review in a dissertation or academic paper serves this function: to show your mastery of a subject.

The context or logos of your communication is also important and overlaps with the example above.  The literature review is a way of demonstrating context, but it is more than this.  A good introduction will establish a rationale for the communication: why are you speaking on this subject now, for example?  What events have preceded and led to the communication?  Why is it important now and why is it being delivered in this way?  The logic of an argument and the evidence gathered to support or debate that argument all have a context.  There is an emphasis on rationale, logic and reason.  Your audience/reader needs to be able to follow what you are saying for it to be believable, and to understand its context and implications.  The discussion and conclusions of a piece of prose or a speech are critical here.  That is, the extension from the specific case in question to the general case, with wider implications and/or recommendations for action.

The final element is the audience itself.  Knowing who you’re speaking to or writing for helps you pitch it well.  For example, should you use lay terms, or will you be accused of dumbing down if the content is intended for an expert professional?  What are the audience’s expectations of your communication?  Has it been invited or is it unsolicited?  Are they likely to be hostile?  How will they use the information you provide and what are they hoping (and you would like them) to take away?  Ultimately, why do they care about the question or argument in hand, and how do you use the emotions of the audience (the pathos) to get them to engage and perhaps act on your message?  What emotion do you want to evoke: fear, trust, loyalty…?  Do you have shared values or beliefs you want to draw on?  How do you connect with the audience/readers to gain their support, interest and/or action?

These are all questions to ask and consider in framing any form of communication, be it written, oral or graphic.  So how do you use the triangle?  Well, it is simply something to bear in mind when writing an email (give me a job!), reporting a piece of research or giving a talk.  It is best applied in the planning stages; you could, for example, sketch out a triangle and note some points or observations around each corner.  When you have finished and are reviewing your communication, consider each corner in turn: have you set out the context, have you provided enough information about you as the communicator, and have you met the audience’s expectations?  When all is said and done it is a tool and nothing more, but despite the negativity associated these days with the word rhetoric it is a useful and valuable tool.