CFCs, the ozone layer and global change

I guess most people reading this blog already know about the role of CFCs in the thinning of the ozone layer and its extreme manifestation, the “ozone hole”. (If not, you will find explanations here and here, ozone depletion maps here, and information on the Montreal protocol here and here.)

An article by Prof. Nigel Paul published in The Conversation highlights the success of the protocol.

However, what fewer people know is that CFCs are potent “greenhouse gases”, and a recent article discusses why, of all measures taken to date, the Montreal protocol is the one that has contributed most significantly to slowing down global warming. In my view, to a large extent this just shows how little progress has been achieved in reducing emissions of other “greenhouse gases” like carbon dioxide. A recent article in The Economist highlights this.


Talking plants

This layman’s introduction to plant-plant communication in Quanta on-line magazine is interesting both in relation to the phenomena studied and to how science works. That plants communicate with each other is nowadays widely accepted, but several types of communication remain controversial, and not all the available evidence is as strong as one would wish. Consequently, this is a very exciting field, and a very exciting time, in which to do research on plant-plant interactions, including communication!

“Reproducible research” is a hot question

I have long been interested in the question of reproducible research, and as a manuscript author, reviewer and, more recently, editor, I have attempted to make sure that no key information was missing and that methods were described in full detail and were, of course, valid.

Although the problem has always existed, I think that in recent years papers and reports with badly described methods have become more frequent. I think that there are several reasons for this:

  1. The pressure to publish quickly and frequently as a condition for career advancement.
  2. The overload on reviewers’ work and the pressure from journals to get manuscript reviews submitted within a few days’ time.
  3. The increasingly strict rules of journals about the maximum number of “free” pages.
  4. The practice by some journals of publishing methods at the end of papers or in a smaller typeface, implying that methods are unimportant to most readers and irrelevant for understanding the results described (which is a false premise).


Some frequent ways of unintentionally misrepresenting experimental results

Many students and some researchers are unaware that each of the following practices is statistically invalid and could be considered ‘research-results manipulation’ (=cheating):

  1. Repeating an experiment until the p-value becomes significant (see the simulation sketch after this list).
  2. Reporting only a ‘typical’ (=nice-looking) replication of the experiment, and presenting statistics (tests of significance and/or parameter estimates such as means and standard errors) based only on this subset of the data.
  3. Presenting a subset of the data chosen using a subjective criterion.
  4. Not reporting that outliers have been removed from the data presented or used in analyses.
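
How severely practice 1 distorts p-values is easy to show with a short simulation. The sketch below is my own illustration, not taken from any of the articles mentioned; the function name and its parameters are hypothetical. Both samples are drawn from the same distribution, so the null hypothesis is true, yet repeating the “experiment” until a t-test comes out significant reports an effect far more often than the nominal 5% of the time:

```python
# Hypothetical simulation of practice 1: re-running a null experiment
# until p < 0.05. All names and parameter values are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def run_until_significant(max_attempts=5, n=10, alpha=0.05):
    """One 'study': repeat a two-sample t-test with NO real effect
    until p < alpha or attempts run out; return True if 'significant'."""
    for _ in range(max_attempts):
        a = rng.normal(0.0, 1.0, n)  # both groups sampled from the
        b = rng.normal(0.0, 1.0, n)  # same distribution: H0 is true
        if stats.ttest_ind(a, b).pvalue < alpha:
            return True
    return False

studies = 10_000
false_positives = sum(run_until_significant() for _ in range(studies))
# With up to 5 attempts, about 1 - 0.95**5 ≈ 23% of null 'studies' end
# up 'significant', instead of the 5% the reported p-values promise.
print(f"Apparent 'significant' rate under H0: {false_positives / studies:.1%}")
```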

RG: Do we actually need (or understand) more than basic statistics?

Link to the original Q&A thread at ResearchGate

This is another topic worth looking at, and especially thinking about. I copy here my answer, which is to some extent off-topic (you will need to follow the link above to read the original post and the other answers):

Frequently, students I have supervised seem to think that statistical tests come first, rather than being a source of guidance on how far we can stretch the inferences we make by “looking at the data” and derived summaries. They just describe effects as statistically significant or not, which results in very boring “results” sections lacking the information that the reader wants to know. When I read a paper I want to know the direction and size of an effect and what patterns are present in the data (see the sketch at the end of this post); if there is a test, it should help us decide how much caution we need to exercise until additional evidence becomes available. Many students and experienced researchers who “worship” p-values and the use of strict risk levels ignore how powerful and important the careful design of experiments is, and how the frequently seen use of “approximate” randomization procedures, or the approach of repeating an experiment until the results become significant, invalidates the p-values they report.

[edited 5 min later] As I read again what I wrote, it feels off-topic, but what I am trying to say is that not only the proliferation of p-values, and especially the use of fixed risk levels, but also frequently the way results are presented, reflects a much bigger problem: statistics being taught as a mechanical and exact science based on clear and fixed rules. Oversimplifying the subtleties and the degree of subjectivity involved in any data analysis, especially in relation to which assumptions are reasonable and how any experimental protocol determines which assumptions are tenable, fails to provide the most useful training for anybody doing experimental research. So, in my opinion, yes, we need to understand much more than basic statistics in terms of principles, but this does not mean that we need to know advanced statistical procedures unless we use them or assess work that uses them.
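
To make concrete what I mean above by reporting the direction and size of an effect, here is a minimal sketch using made-up data (my own illustration, not part of the ResearchGate answer). It reports the estimated difference between two groups together with a 95% confidence interval, rather than only whether p falls below 0.05:

```python
# Hypothetical example of reporting effect size and uncertainty,
# not just a significant / not-significant verdict.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(10.0, 2.0, 20)  # simulated measurements
treated = rng.normal(11.5, 2.0, 20)  # simulated; true effect is +1.5 units

n1, n2 = len(treated), len(control)
v1, v2 = treated.var(ddof=1), control.var(ddof=1)
diff = treated.mean() - control.mean()
se = np.sqrt(v1 / n1 + v2 / n2)
# Welch-Satterthwaite degrees of freedom (unequal variances allowed)
df = se**4 / ((v1 / n1)**2 / (n1 - 1) + (v2 / n2)**2 / (n2 - 1))
ci = diff + np.array([-1.0, 1.0]) * stats.t.ppf(0.975, df) * se
p = stats.ttest_ind(treated, control, equal_var=False).pvalue

print(f"Effect: {diff:+.2f} units, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}], p = {p:.3f}")
```

A statement like “treatment increased the response by about 1.5 units (95% CI 0.2 to 2.8)” tells the reader both the direction and the magnitude of the effect, and how precisely it was estimated; a bare “p < 0.05” tells the reader neither.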