Journalists and researchers: Can one criticize the work of a good guy without being labeled as someone with a grudge?

In a recent paper Dumas‑Mallet et al. (2020) have raised “concerns about the influence of the media on the research communication and dissemination.” In this text, I describe what might happen when a small group of academics decides to reward or “shame” journalists for their reporting on the same issues these academics cover in their research.

The recently established Active Travel Academy’s Media Awards at the University of Westminster “recognises excellent work by journalists and reporters covering issues around active transport and road safety” (Active Travel Academy’s website, 2020). This seems like a noble initiative, because news coverage of active travel in the UK is often negative: “Sustrans’ Xavier Brice presented his thoughts and mentioned some recent research by the charity that found 60% of news coverage of active travel was negative” (Aldred, 2020).

An expert panel of judges was supposed to select winners in seven categories, such as news (written word), news (broadcast), student journalist, etc. Anyone, including journalists, could nominate their own or others’ work. In addition to these seven categories, there was a separate “People’s choice award” category for the best and worst reporting on TV/radio, in print or online; however, it was unclear how the winners would be selected (Active Travel Academy’s website, 2019). Typically, people’s choice awards of any kind involve massive (online) voting on a preselected list of choices. Open nominations are also possible if the number of potential voters is expected to considerably outnumber the potential winners. It is unclear to me what the organizers expected to happen with their people’s choice awards, and my repeated questions on Twitter went unanswered.

Despite the unclear selection procedure, I nominated a journalist for his, in my view, awful article (Reid, 2018) about bicycle helmets. The journalist interviewed only one researcher involved in a long-term scientific debate and allowed him to question the other party’s motives: “It’s notable that the university research group (which wrote the 2013 paper and others) seem very interested in rebutting [my underline] any suggestions that cycle helmets are not a panacea for safety.” This issue is central to research integrity, and it is unusual for researchers to question each other openly and publicly. When that happens, an unbiased journalist should give the criticized party a chance to respond. This journalist did not do that. In my nomination letter, I also listed several other problematic issues in this journalist’s article (Radun, 2019).

To my surprise, the “People’s choice awards” were not awarded. After my repeated questions on Twitter, one of the organizers finally responded. She wrote: “Thanks for your interest. We decided not to run it, due to a lack of nominations. Two of us organised the whole thing over a short time, but perhaps we’ll be able to do more next year. Should we run it in future any nominations will need good cause, not just someone with a grudge” (Laker, 2019).

 

After this answer, I started to wonder how many nominations would have been needed to declare it a “People’s choice”. More than two? Ten? Two hundred? If the organizers overestimated people’s interest in their awards, they should, in my view, have transparently reported this, not deleted the “People’s choice awards” from their website and pretended they never existed. They should, I think, also have apologized to those like me who spent hours writing their nomination letters and were left wondering what had happened. Despite my repeated questions, they have done none of these things. What I received instead was a somewhat unsubtle impugning of the nominator’s (or my?) motives: “Should we run it in future any nominations will need good cause, not just someone with a grudge.”

I don’t know how many nominations were received for the “People’s choice awards”. Perhaps they were unhappy that I nominated one of the most prominent pro-cycling journalists for the worst reporting award. It did, however, come as something of a surprise that this should happen with the eminent academics and journalists involved in the Active Travel Academy’s inaugural Media Awards. It also led me to think about certain questions concerning who the supposed “good guys” and the “bad guys” are. Is one a good guy if one votes for the predetermined favourite, and is one a bad guy with a grudge if one dares to criticize these favourites?

The media have always been important for the dissemination of scientific knowledge. Their influence on the formation of public opinion on any scientific issue is undisputed; however, the media can also play a significant role in the promotion of particular research fields, researchers and their institutions, as the Dumas‑Mallet et al. (2020) article suggests. When these compete for research funding, increased media visibility, at least of the positive kind, without any doubt improves their public image and perhaps places them in a better position in this fierce competition for funding. Therefore, researchers and institutions aware of the media’s power are interested in maintaining a good relationship with them.

In conclusion, I cannot stop wondering whether establishing the Active Travel Academy’s Media Awards was indeed a good idea, and for whose benefit it was established. Perhaps time will tell. Nevertheless, it is well known that a good cause does not always justify the means.

Studded tyres, climate change, environmental issues and skiing holiday

Today ends the skiing week (hiihtoloma) in the south of Finland. Thousands of Finns are returning home from northern ski resorts or their own winter/summer cottages. Many travel by car. The most populated southern regions had no snow this year; in fact, we had no winter at all.

[Figure]
Source: Finnish Meteorological Institute

In Finland, studded tyres are allowed. The new traffic law brings some changes to winter tyre usage and gives drivers more responsibility for deciding whether weather conditions require winter tyres.

Studded tyres contribute significantly to springtime dust problems. The flexibility the new law brings will make many drivers think about when to change tyres. The changing climate will also raise questions about whether studded tyres should be forbidden in cities or city centers. I am sure many researchers in Finland will follow what happens with winter tyre usage and crash statistics.

This brings me to a methodological issue I raised following the famous Elvik et al. (2013) paper “Effects on accidents of changes in the use of studded tyres in major cities in Norway: a long-term investigation.” The main issue is that many Finns, as well as Norwegians, drive their vehicles in all kinds of winter conditions, and that any cost-benefit analysis makes no sense from the overall traffic safety perspective if it is restricted to within the cities’ boundaries.
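To make the point concrete, here is a toy back-of-the-envelope sketch in Python. All numbers are hypothetical and invented purely for illustration; they are not taken from Elvik et al. (2013) or from any real cost-benefit analysis. The sketch only shows how a balance that looks positive within the city boundary can flip sign once crashes outside the boundary, caused by the same (now studless) drivers, are counted.

```python
# Toy illustration only: all monetised effects below are invented numbers,
# not estimates from Elvik et al. (2013) or any real cost-benefit analysis.

# Hypothetical annual effects of discouraging studded tyres in one city (million EUR).
health_benefit_in_city = 12.0    # less road dust -> fewer health harms inside the city
extra_crash_cost_in_city = 4.0   # somewhat more crashes on well-maintained city streets
extra_crash_cost_outside = 11.0  # more crashes on less-maintained roads outside the city,
                                 # driven by the same residents/commuters without studs

city_only_net = health_benefit_in_city - extra_crash_cost_in_city
network_wide_net = city_only_net - extra_crash_cost_outside

print(f"City-only net benefit:    {city_only_net:+.1f} M EUR")    # +8.0 -> looks worthwhile
print(f"Network-wide net benefit: {network_wide_net:+.1f} M EUR") # -3.0 -> conclusion reverses
```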

I submitted a letter to the editor of Accident Analysis and Prevention immediately after the paper was published; however, it was rejected because the journal “doesn’t publish letters to the editor.” In any case, Rune Elvik wrote me a nice response, which I will not publish here; I will publish only my own unpublished letter to the editor.

 

My unpublished letter to the editor.

Dear Dr. Elvik,

I read with great interest and admiration the recent article by Elvik et al. (2013). The article summarizes the results of two reports (one in Norwegian and one in English) dealing with the effects on accidents of changes in the use of studded tires in major cities in Norway. As studded tires are responsible for a significant proportion of micro-particles torn off road surfaces, which can cause health problems in humans, including premature death, some cities and municipalities in the Nordic countries discourage the usage of such tires. Elvik et al. (2013), consistent with previous studies, reported a negative relationship between the prevalence of studded tires and the number of accidents in several Norwegian cities.

As an addition to this well-done study, I would like to point out the following issue. In the article it is not completely clear why the authors focused on the relationship between the number of accidents and the usage of studded tires only within the cities’ boundaries. Restricting the analysis to the city area might be understandable from the standpoint of data collection costs, the reliability of the data, and the particular interest of city governments in the health of their fellow citizens; however, from the overall traffic safety perspective, such a decision does not provide a complete enough picture for future cost-benefit analyses regarding decisions on whether to restrict or completely forbid the usage of studded tires. I do not know what proportion of drivers commuting to these Norwegian cities come from outside the cities’ boundaries, or how many of them drive on secondary and even lower classes of roads, which are maintained less well during the winter. However, I assume that the number of these drivers is not insignificant and has probably changed over the long period (1991-2009) covered in the study. Therefore, we lack information about how many of these drivers had an accident outside the city borders with non-studded tires, as recommended by the cities. Furthermore, how many of the cities’ inhabitants in vehicles with no studded tires had an accident while driving, for example, to ski resorts outside the cities’ boundaries on weekends and holidays?

A mitigating circumstance for Elvik et al. (2013) is that the effects they found in their study would probably have been even larger had they considered the number of accidents occurring outside the cities’ borders as a result of the decision discouraging the usage of studded tires within the cities. I hope that policy-makers, such as the Swedish Transport Administration, which commissioned one of the reports summarized in Elvik et al. (2013), will consider (or have already considered) the broader implications of the decision whether to allow or forbid the usage of studded tires in Swedish cities.

References

Elvik, R., Fridstrøm, L., Kaminska, J., Meyer, S.F. (2013). Effects on accidents of changes in the use of studded tyres in major cities in Norway: a long-term investigation. Accident Analysis and Prevention, 54, 15-25.

Igor Radun, PhD
Human Factors and Safety Behavior Group, Institute of Behavioural Sciences, University of Helsinki, Finland
and
Vehicle Engineering and Autonomous Systems, Department of Applied Mechanics, Chalmers University of Technology, Göteborg, Sweden

 

On close passes, journalism and awards

The media are often criticized for the way they report on crashes involving cyclists. So-called victim blaming is the central accusation. In this blog post, I discuss the text ‘Motorists Punish Helmet-Wearing Cyclists With Close Passes, Confirms Data Recrunch’ written by Carlton Reid, ‘the Press Gazette Transport Journalist of the Year 2018,’ and the famous Ian Walker study on which Reid’s text is based.

Walker’s 2007 study

One of the most famous studies in cycling research is Ian Walker’s “Drivers overtaking bicyclists: Objective data on the effects of riding position, helmet use, vehicle type and apparent gender” from 2007. In this study, Walker, the main investigator, rode a bicycle either with or without a helmet. The study reported that “wearing a bicycle helmet led to traffic getting significantly closer when overtaking.” The difference was “around 8.5cm closer on average.”

This finding received wide attention from the research community as well as from the media and the general public. It is often cited in support of risk compensation (motorists unknowingly (?) rate cyclists without a helmet as more inexperienced and unpredictable and thus keep a greater distance from them). It is also used as an argument against bicycle helmet laws and/or the promotion of bicycle helmets, as the ‘obvious’ conclusion from this study is that helmets put cyclists at greater risk from motor vehicle drivers.

Olivier and Walter (2013) reanalysis of Walker’s data

In 2013, Olivier and Walter reanalyzed Walker’s data, which he had generously posted online. [This is something for which Walker deserves huge credit.] Olivier and Walter dichotomized the passing distance by the one meter rule and carried out a logistic regression, whereas in the original analysis Walker (2007) had applied an analysis of variance (ANOVA) to the raw data (with a square-root transformation). Olivier and Walter wrote: “The previously observed significant association between passing distance and helmet wearing was not found when dichotomised by the one metre rule.” Their conclusion was that “helmet wearing is not associated with close motor vehicle passing.”
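To make the methodological contrast concrete, below is a minimal sketch in Python using simulated data. The means, spreads and sample sizes are invented for illustration only and are not Walker’s actual measurements; the point is simply to show, side by side, a Walker-style ANOVA on (square-root transformed) passing distances and an Olivier and Walter-style logistic regression on a ‘closer than 1 m’ indicator.

```python
# Minimal sketch with simulated data (hypothetical numbers, not Walker's dataset).
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical passing distances in metres; the ~8.5 cm mean gap mirrors the reported effect size.
no_helmet = np.clip(rng.normal(loc=1.32, scale=0.38, size=1200), 0.05, None)
helmet = np.clip(rng.normal(loc=1.235, scale=0.38, size=1200), 0.05, None)

# Walker-style analysis: compare the square-root transformed distances directly.
f_stat, p_anova = stats.f_oneway(np.sqrt(no_helmet), np.sqrt(helmet))

# Olivier & Walter-style analysis: dichotomise at 1 m and fit a logistic regression.
distance = np.concatenate([no_helmet, helmet])
helmet_flag = np.concatenate([np.zeros(len(no_helmet)), np.ones(len(helmet))])
close = (distance < 1.0).astype(int)
logit = sm.Logit(close, sm.add_constant(helmet_flag)).fit(disp=0)

print(f"ANOVA on sqrt(distance): F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"Logistic regression ('close' = < 1 m): OR = {np.exp(logit.params[1]):.2f}, "
      f"p = {logit.pvalues[1]:.4f}")
```

Which of the two analyses detects a difference depends not only on the average gap but also on the sample size and on where the cut-off sits relative to the bulk of the distances, which is exactly what the exchange described below is about.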

Walker and Robinson (2019) response to Olivier and Walter (2013)

Walker and Robinson published their rebuttal in Accident Analysis and Prevention in 2019, the same journal where the original Walker study had been published (the Olivier and Walter (2013) paper was published in PLoS ONE). They criticized Olivier and Walter on several counts: “Their conclusion was based on omitting information about variability in driver behaviour and instead dividing overtakes into two binary categories of ‘close’ and ‘not close’; we demonstrate that they did not justify or address the implications of this choice, did not have sufficient statistical power for their approach, and moreover show that slightly adjusting their definition of ‘close’ would reverse their conclusions.”

My view of the Walker (2007) study

1. The experimenter effect. Ian Walker was the single author and experimenter in this study. He had a clear hypothesis and expectations. Furthermore, we can assume he was observing how closely motorists overtook him in relation to his hypothesis, as another single experimenter reported doing in a similar Walker study (see below). This raises the question of whether the behavior of the riders, arising from their hypotheses and subjective experiences while the experiment was going on, had any effect on the behavior of the naïve participants (i.e., motorists).

We (Radun and Lajunen, 2018) discussed the study design in the context of the experimenter effect. We wrote: “Although drivers were effectively blind to the study, the experimenter was not. Consequently, his hypothesis could have caused him to behave in ways that influenced overtaking distances, for example, by making head movements suggesting an intended turn, which might have prompted drivers to give him a wider berth.”

2. The observer bias. It seems some of the overtaking events were excluded based on the video analysis. However, it is unclear who performed this selection and how many events were excluded. Typically, two observers should independently analyze all events, compare their results and discuss possible discrepancies. Walker did not thank anyone for this work in the paper’s Acknowledgements, which makes me wonder whether the fully informed experimenter/author was the only observer deciding which events should be excluded and which should remain in the data set. Please note that observer bias, like the experimenter effect, does not imply any deliberate action; neither do I imply any in this text.
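For readers unfamiliar with the standard practice, here is a hedged sketch of what dual, independent coding might look like: two observers each mark every overtaking event as keep or exclude, their agreement is quantified (for example with Cohen’s kappa), and only then are discrepancies discussed. All decisions in the example are invented.

```python
# Hypothetical example: two independent observers each decide whether an overtaking
# event is kept (1) or excluded (0); agreement is quantified with Cohen's kappa
# before any discrepancies are discussed. All decisions below are invented.
import numpy as np

observer_a = np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1])
observer_b = np.array([1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1])

p_observed = np.mean(observer_a == observer_b)
# Chance agreement: probability both say 1 plus probability both say 0.
p_chance = (observer_a.mean() * observer_b.mean()
            + (1 - observer_a.mean()) * (1 - observer_b.mean()))
kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"Observed agreement: {p_observed:.2f}, Cohen's kappa: {kappa:.2f}")
# The two disagreements (events 5 and 10) would then be resolved by discussion.
```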

3. Never replicated. To my knowledge, the main finding (i.e., drivers overtake closer when a cyclist wears a helmet) has never been replicated in another setting or country.

4. One meter vs. 1.5m rule. Walker and Robinson criticized Olivier and Walter for their choice to dichotomize the passing distance by the one meter rule. They write: “if we want to use existing legislation as a guide to separating close from not-close events, we should at least use the 1.5m rule mandated in Spain and Germany (road.cc, 2009; Spanish News Today, 2014) and place the burden on proof on those who would suggest a closer distance to define safety.”

However, they somehow forgot to cite an old TRRL study that used “the numbers of vehicles coming within 1m” and that inspired Walker to reanalyze his data using the same 1m rule for a US TV interview in 2007 (see below a snapshot from Walker’s old webpage).

It seems somewhat unfair to criticize Olivier and Walter for not justifying their choice of the 1m rule while ignoring the fact that Walker in 2007 thought that “this is perhaps the clearest way to illustrate the effect of helmet wearing seen in the data.” Furthermore, one would think that a previous study from the same setting (i.e., the same traffic culture, road widths, etc.) would be at least as relevant as the German and Spanish 1.5m rule that Walker and Robinson cite. I don’t want to speculate about why Walker and Robinson failed to cite this older UK study.
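As a purely hypothetical illustration of why the cut-off choice matters, one can take simulated distances of the kind sketched above and sweep the ‘close pass’ threshold; the apparent strength of the helmet effect on ‘close’ passes changes with the threshold, which is why the 1m vs. 1.5m argument is not a mere technicality. Again, the numbers are invented, not the real Walker data.

```python
# Hypothetical illustration: sweep the 'close pass' cut-off on simulated distances
# (invented numbers, not the real Walker data) and watch the comparison change.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
no_helmet = np.clip(rng.normal(1.32, 0.38, 1200), 0.05, None)
helmet = np.clip(rng.normal(1.235, 0.38, 1200), 0.05, None)

for cutoff in (1.0, 1.25, 1.5):
    # 2x2 table: helmet wearing vs. whether the pass was closer than the cut-off.
    table = np.array([
        [(no_helmet < cutoff).sum(), (no_helmet >= cutoff).sum()],
        [(helmet < cutoff).sum(), (helmet >= cutoff).sum()],
    ])
    chi2, p, _, _ = stats.chi2_contingency(table)
    print(f"cut-off {cutoff:.2f} m: {table[0, 0] / len(no_helmet):.1%} vs "
          f"{table[1, 0] / len(helmet):.1%} 'close', chi2 = {chi2:.2f}, p = {p:.4f}")
```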

5. Statistical vs. clinical significance (i.e., closer vs. close passes). The main issue of dispute between these researchers is what should be considered a close pass (and the methodological and statistical consequences of a particular choice) and whether the observed difference of 8.5 cm is a reason for concern. Walker and Robinson write: “The W7 dataset at least hints that the probability of a trip transitioning from the large pool of un-eventful journeys to the smaller pool of journeys with a collision might increase owing to changes in driver behaviour in response to seeing a helmet. This merits caution until further data can be obtained.”

Below is a figure with the empirical distributions of passing distance (helmet vs. no helmet). The two distributions are nearly identical towards 0 (i.e., a likely crash) and do not separate until after 1m.

[Figure: empirical distributions of passing distance, helmet vs. no helmet]

6. In conclusion. Although I don’t dispute the reported 8.5cm difference, Walker’s experimental design was far from perfect, because he was a fully informed experimenter interacting with his participants (i.e., motorists) and, it seems, the only observer pre-selecting the data for further analysis. Furthermore, to my knowledge this finding has never been replicated (the study was published in 2007). I wonder whether a single, never-replicated UK study with a single fully informed experimenter/observer should be used to scare people around the globe out of wearing a bicycle helmet. I wonder.

 

Carlton Reid’s ‘Motorists Punish Helmet-Wearing Cyclists With Close Passes, Confirms Data Recrunch’.

Now I describe one of the most scandalous examples of scaring people by an award-winning transport journalist, in the context of the above-mentioned studies.

1. The misleading and malicious title (“Motorists Punish Helmet-Wearing Cyclists With Close Passes, Confirms Data Recrunch”). In my view, this represents a scandalous attempt to scare people away from using a bicycle helmet. The verb ‘punish’ used in this context implies a deliberate action. The Walker (2007) study provides no evidence that motor vehicle drivers deliberately give less space to helmeted cyclists.

Furthermore, it is incorrect that “Motorists Punish Helmet-Wearing Cyclists With Close Passes” because Walker (2007) and Walker and Robinson (2019) do not describe the reported overtaking difference as close passes. As discussed above, they report that motor vehicle drivers overtake cyclists with a helmet on average 8.5 cm closer (!) than those without a helmet. This difference of 8.5cm does not necessarily imply that any of the overtaking events should be considered a close pass. As Reid himself reports of his interview with Walker: “He claims that Olivier and Walter were only able to disprove his study by redefining what was meant by the words ‘close’ and ‘closer.’” Carlton Reid obviously ignores this and insists that closer (i.e., 8.5cm) means “a close pass.”

2. The failure to interview the other party. Carlton Reid interviewed Ian Walker for this article, just as he had interviewed Walker when the original study appeared eleven years earlier. To my knowledge, Reid has never attempted to interview Jake Olivier or Scott Walter since their study was published in 2013.

In this new interview, Walker said: “It’s notable that the university research group [which wrote the 2013 paper and others] seem very interested in rebutting any suggestions that cycle helmets are not a panacea for safety.”

It is obvious that Ian Walker not only questioned the Olivier and Walter (2013) paper, he also questioned their motives (“seem very interested in rebutting any…”). This is a clear attack on their research integrity. According to good journalistic practice, Carlton Reid should have asked Olivier or Walter for their comments. It is unusual for researchers to question each other’s motives in an interview. When that happens, it is an absolute must to interview the other party and give them an opportunity to respond to such questioning. To my knowledge, Carlton Reid has never done that. In fact, he seems to have been so eager to publish his text that it appeared online only a few hours after the Walker and Robinson paper had been posted online as a preprint.

The preprint was posted online on November 14, 2018.
Carlton Reid’s text appeared online on November 14, 2018, at 05:01 pm.

This clearly shows that Carlton Reid had no intention of interviewing the ‘other party’.

3. “Other academics agree.” Implying that other academics agree with something Ian Walker said, while mentioning only one academic, is so wrong that I believe no further comment is necessary.

 

The Active Travel Academy’s inaugural Media Awards

For the above reasons, I nominated Carlton Reid for the Active Travel Academy’s “People’s choice awards: A. Worst reporting on TV/Radio, print or online”.

This award “celebrates the work of journalists and reporters covering issues across media outlets around active transport and road safety in the UK.” In addition to my nomination, Carlton Reid was shortlisted in another category (“1. News (written word)”) for another of his texts.

I am not sure whether the People’s choice awards were awarded, as this category is no longer visible on the Active Travel Academy’s webpage. I have repeatedly asked the Active Travel Academy’s “expert panel of judges” about this on Twitter, but they have never responded.

It is also interesting that although the awards cover not only active transport issues but also road safety in general, the “expert panel of judges” includes, in addition to several academics, representatives from cycling and pedestrian organizations, but no one from any motorist organization. To my knowledge, traffic safety issues and how they are represented in the media are also of interest to motorist organizations.

Conclusion

There is no doubt that the media have a lot of power when it comes to disseminating scientific knowledge. Given the above discussion surrounding Walker’s 2007 study and the subsequent re-analyses, Carlton Reid’s article, in my view, represents an awful piece of journalism. I am not fully familiar with his work, but one would expect a more balanced text given that he is, after all, ‘the Press Gazette Transport Journalist of the Year 2018’ and is obviously respected by the Active Travel Academy’s “expert panel of judges”, who shortlisted another of his texts for one of their awards.

I recently proposed an EU project that would gather all interested stakeholders (researchers, journalists, police, advocacy groups, etc.) in several workshops in order to produce a guideline for journalists on how to report on crashes involving cyclists, as this issue generates a lot of discussion. A similar guideline for the media, although for different reasons, exists for suicides (“Suicide Prevention Toolkit for Media Professionals”). Unfortunately, I did not receive funding for it. I hope more qualified and suitable researchers will get funding for a similar project and that in the near future they will produce a good guideline for the media. Until that happens, we will see more Reid-like articles. Unfortunately.

A response to the “Executive Director at Finnish Cyclists’ Federation”

Last week I wrote a text, “Can advocacy groups replace independent university researchers?”, capturing my views about advocacy groups (i.e., Pyöräliitto) entering the research domain in my field. Pyöräliitto’s Executive Director, Matti Koistinen, wrote his response yesterday. Apparently, he wanted to correct mistakes in my blog. As I will show, he has not corrected any mistakes, because there were none to correct. Matti actually confirmed everything I wrote. His explanations and justifications of Pyöräliitto’s research attempts, however, deserve my answer. I thank Matti for his response, as it is important to discuss the role of advocacy groups in conducting research and disseminating scientific knowledge.

OK. Let’s see what ‘mistakes’ should be corrected.

I am glad we both agree that more traffic psychology research is needed.

I wrote in my text “They often engage in debates, especially in social media, with traffic safety workers employed by state-sponsored organizations such as Liikenneturva, whose main role is not to conduct research but rather to operate within strict rules set by various laws.”

It is difficult for me to understand why Matti wrote a separate section about Liikenneturva. I mentioned Liikenneturva in the above sentence as an example of a state-sponsored organization, which Liikenneturva is, as Matti acknowledges. Everything else Matti wrote in those two paragraphs is unrelated to my text. Btw, “…Liikenneturvan uhreja…” [“…victims of Liikenneturva…”] looks like a Freudian slip.

As I understand it, Matti says it was difficult to make the selection because they received more than 40 submissions. As I wrote on Twitter, perhaps it is now a good time for him to publicly answer the question I have asked several times: did you or did you not invite new presenters after the deadline for submissions had expired?

This is a very important question because there is a difference between “we had too many (good) submissions so we had to leave some out (including Igor’s)” and “we were not satisfied with the (quality of) received submissions (including Igor’s) so we had to invite new presenters despite the fact that the submission process had ended.”

Btw, we researchers are used to rejections. All researchers, even the best ones (whatever that means), have papers rejected. The rejection of my submission to VeloFinland represents only one of several examples in my text describing the way Pyöräliitto has entered the research domain.

My intention was not to mock other researchers. My writings (on Twitter) are specific (we all know to whom I referred), and I was very clear in my blog:

“This doesn’t mean that some excellent researchers will not present their high-quality research; it means that submissions will not be judged on their quality by qualified researchers, but on whether a few people sitting on Pyöräliitto’s board like them or not. Pyöräliitto will filter what they like and this will be presented to their audience and the Finnish media.”

I understand the difference between a professional and a scientific meeting. However, I would like to know how many organizers of professional meetings advertise their seminar as “the place to be if you want to learn about new research” and invite researchers “to present their best research” (!) while not having an independent scientific committee. Submit your best research, but please be aware that we will not select presentations based on their quality; we will select them based on… what? This is disrespectful to researchers and their work.

Matti says that although the state decides on the funding model for Liikenneturva, the money is actually collected from motor vehicle owners. So, because the state has decided that Liikenneturva’s funding comes from motor vehicle insurance, it is possible, according to Matti, that Liikenneturva is not neutral in its work (”Raduin väittää, että Liikenneturva on valtion rahoittama ja antaa ymmärtää sen olevan neutraali toimija.” [“Radun claims that Liikenneturva is state-funded and implies that it is a neutral actor.”])? Similarly, Pyöräliitto’s chair repeatedly and publicly insinuates that the car industry funds my Australian colleagues, who then produce bicycle helmet research in support of denialism.

On the other hand, Matti doesn’t see any problem if an advocacy group conducts research whose results will directly (!) support its goals. It must be that advocacy groups are perfectly neutral when it comes to the research they carry out, while others are not neutral in their work and/or are funded by the car industry to produce research in support of denialism. It smells of double standards to me, to say the least.

Furthermore, it is irrelevant that you have no ambition to publish your results in scientific journals. What matters, as you say yourself, is that your results will be used for the development of cycling tourism (and cycling infrastructure) in Finland. Pyöräliitto uses its results to influence policy makers. Policy decisions must be based on evidence, not on this kind of ‘research.’

Hillman’s publications are from 1992-1994. That is plenty of time for you (and others) to have read them carefully and critically. Unfortunately, you are not an exception. It is unbelievable how often this ratio has been used and abused; here is my list, which is constantly updated. On the other hand, it does seem important to hear from researchers even before they publish their results in scientific journals. For example, one option could be to allow university researchers to present their preliminary findings at professional meetings organized by advocacy groups.

I have no doubt that your employees and members are highly educated. However, as I clearly explained in my blog, you have not put your hypothesis (i.e., a claim) to the test. Researchers test their hypotheses. That’s what researchers do. That’s how research is done.

And no, it is not absurd to have independent evaluators if you call on researchers to submit their best research. And it doesn’t cost anything, because researchers are not routinely paid for serving on conference committees, though it seems you are unfortunately unaware of that. To repeat: it is misleading and disrespectful to invite researchers to submit their best research if you don’t have independent and qualified evaluators who care about research quality rather than the advocacy group’s goals.

You have every right to say whatever you think might be in the interest of your members and organization, cycling in general, and traffic safety. I really don’t care whether Pyöräliitto issues a statement about nuclear plants, euthanasia or anything else. I do care, however, whether policy makers accept your statements as a source of a balanced, critical and comprehensive overall review of scientific literature. I was very clear in my blog:

“We should understand and accept that advocacy groups are advocacy groups and not a source of a balanced, critical and comprehensive overall review of scientific literature. The lack of independent traffic psychology researchers at Finnish universities makes it easier for advocacy groups and ‘experts’ to enter the research domain and establish themselves as a credible research source in public policy discussions.”

I also care whether the state and state-funded organizations give you funding for your ‘research’ and support the way you disseminate scientific knowledge.

In conclusion, I have no idea what incorrect things in my original text you supposedly corrected. I might not be the most politically correct or polite person, but I understand the difference between advocacy and research. I hope politicians and public policy makers understand it too.

 

Note: this time I had no time or interest to edit the text, but I hope my English is understandable enough.

Can advocacy groups replace independent university researchers?

The rise of fake news is a global problem. The Internet, and social media in particular, are prone to the spreading of such false information. Developing software that would automatically and accurately detect fake news is extremely difficult. Automatically detecting malicious texts that are ‘professionally’ written and based on cherry-picked and slightly modified information is even more difficult. Exposing them is a real challenge and requires persistent effort from experts. In many cases, communicating scientific facts, such as research on global warming and vaccination, to the general public is as important as obtaining the facts themselves. Unfortunately, medicine and climate science are not the only fields that have to fight ignorance and the spread of cherry-picked or false information by various advocacy and interest groups. I am a traffic psychology researcher, and here are my views about advocacy groups entering the research domain in my field.

Traffic psychology research in Finland
Traffic safety research, including traffic psychology, is experiencing hard times in Finland. It seems traffic safety is no longer a hot (research) topic. In my view, there are several reasons for this. The first relates to the general expectation that the coming autonomous/automated vehicles will solve all traffic safety problems and that, therefore, all our efforts should be focused on developing and accommodating such vehicles. The second relates to the poor standing of traffic psychology research applications with scientific foundations, which often treat such applications as too applied and, therefore, unsuitable for their funds. The third reason relates to the even worse status of traffic psychology at Finnish universities. How many times have I heard “Igor, why don’t you find a job in Liikenneturva or Traficom” from my colleagues and superiors at my university? Unfortunately, the traffic psychology professorships were not continued after their former and internationally well-recognized holders (i.e., Heikki Summala and Esko Keskinen) retired (‘it belongs to us’, other fields say), which contributed to the collapse of their research groups. Traffic psychology courses have been removed from curricula (on account of profiling), some researchers have been fired (the Big Wheel rolled), and all this has led to a serious lack of expertise in this field at Finnish universities.

Traffic safety and advocacy groups
This lack of expertise has created a certain vacuum, which various advocacy and interest groups have entered in order to spread their own truths. They often engage in debates, especially in social media, with traffic safety workers employed by state-sponsored organizations such as Liikenneturva, whose main role is not to conduct research but rather to operate within strict rules set by various laws. And when I say “engaging in debate” I mean rather obsessive and borderline offensive commenting.

Here I describe a very personal experience, as a traffic psychology researcher (still at a university), with one such advocacy group, the Finnish Cyclists’ Federation – Pyöräliitto.

Failing to declare one’s own affiliation with an advocacy group.
My first experience with these advocates goes back to September 2016, when I received an email from Marjut Ollitervo, a Pyöräliitto board member and, at that time, also a vice chair of Helsinki Cyclists – HePo. Her email surprised me: “Who has funded Jake Olivier’s visit to University of Helsinki?” Professor Olivier, a friend and colleague of mine, has conducted several important studies, including systematic reviews and meta-analyses, regarding bicycle helmets. He is hated by those who oppose bicycle helmet legislation and promotion. When I asked the emailer who she was and why she would like to know that, she replied: “I am a citizen. I ask this because I am interested in it. So just to be clear, are you refusing to tell me who is financing Jake Olivier’s visit to the University of Helsinki? Is it a secret?” She never mentioned that she was sitting on the boards of two advocacy groups. I am not sure how many researchers get this kind of inquiry from ‘random’ citizens; however, my wife, who is currently doing research at Turun ammattikorkeakoulu on the effects of wind turbines on people’s well-being, received an email from someone who criticized her work and who also failed to mention her connection with an organization that opposes wind turbines.

Organizing a kind of ‘scientific’ seminar
Another experience is from last year, when I submitted an abstract to VeloFinland, organized by Pyöräliitto. They advertised their seminar as “the place to be if you want to learn about new research.” Given that Pyöräliitto opposes the current Finnish bicycle helmet law, I thought they would find our systematic review about risk compensation and bicycle helmets interesting. So I submitted a proposal, which they rejected. It seems the first-ever systematic review on a topic they are so passionate about, written by the only active traffic psychology docent at a Finnish university, was below their quality threshold. This was my first-ever rejection from a conference or seminar in a research career stretching over seventeen years. That should tell us something.

Instead, a Pyöräliitto board member (yes, the same one who sent me that email and who, to my knowledge, has no traffic safety research education or experience) gave a presentation with the very ambitious title “Rethinking Traffic Safety.” I wrote about this several times on Twitter and in an email to Pyöräliitto board members last year, and I thought they would learn something. At some point Pyöräliitto even blocked me on Twitter (because I was ‘spamming them’). They later unblocked me, but at least one of their board members still blocks me.

Unfortunately, they haven’t learned anything. They have organized the same event again this year, and there is even a “call for papers,” which, among other things, invites researchers to share “their best research…” even though they do not have an independent scientific committee. As I wrote on Twitter, “Let’s say the Flat Earth Society organizes a ‘professional meeting’ but ‘calls for papers’ & invites researchers to submit their ‘best research.’ I wonder whether a university researcher showing that the Earth is not flat would pass the selection process by FES people.”

In my view, Pyöräliitto are misleading their audience and the Finnish media. This doesn’t mean that some excellent researchers will not present their high-quality research; it means that submissions will not be judged on their quality by qualified researchers, but on whether a few people sitting on Pyöräliitto’s board like them or not. Pyöräliitto will filter what they like and this will be presented to their audience and the Finnish media.

Conducting (own) research
Two years ago, Pyöräliitto conducted a survey asking Finns what would make them cycle more. Although they often claim that the helmet law is a barrier to cycling, they did not include ‘repealing the current helmet law’ as one of the answer options for that question. Any decent researcher would put their hypothesis to the test; they did not. Furthermore, I find it surprising that the Ministry of Transport and Communications, in one of its publications (p. 38 in Kävelyn ja pyöräilyn edistämisohjelma [the walking and cycling promotion programme]), mentioned the possibility that Pyöräliitto (together with Pyöräilykuntien verkosto) would organize yearly surveys in order to follow the development of cycling in Finland. Does anyone seriously think that an advocacy group should be in charge of conducting research surveys? Would anyone suggest that Autoliitto should organize similar surveys? Unfortunately, it seems Pyöräliitto has established itself as a research organization, as the Finnish Transport and Communications Agency Traficom recently commissioned a cycling travel survey from them.

More on research expertise in Pyöräliitto
In their submission to the Ministry of Transport and Communications, when opposing the helmet law, Pyöräliitto wrote: “Helmet use regulation provides an image of cycling as a particularly risky activity, which reduces the attractiveness of cycling. However, cycling is not particularly dangerous compared to, for example, pedestrians, and because of the health benefits, as it has been shown that the benefits of cycling exceed the risks 20 times [Hillman M. “Cycling and the Promotion of Health.” Policy Studies Vol. 14, 49-58, 1993]” [my translation].

However, as we showed in our recent preprint, Hillman did not provide “supporting data or even a description of the methods used to derive this ratio.” We also show several other instances where the alleged 20:1 ratio has been used uncritically. It is worrying that poorly evidenced statistics can be so widely accepted and used, even in policy-making discussions. Cost-benefit analysis is a complex method, and the scientific community and policy makers should not accept the final result of an oversimplified and never-presented calculation. It is unclear whether Pyöräliitto board members had actually read the article they cited, because any qualified and impartial reader would notice that Hillman provided no evidence for the alleged 20:1 ratio. An impartial reader would not cite it in a policy-making discussion.

Conclusion
Given the above, I ask: is Pyöräliitto a research institute (i) with the necessary expertise to understand research and to assess its quality critically and impartially, regardless of whether a particular study supports or conflicts with Pyöräliitto’s goals; (ii) which conducts research following research ethics and good practices; and (iii) which organizes seminars with research presentations selected on the basis of their quality?

By sharing this rather personal experience, my aim is not to take revenge on Pyöräliitto because they did not accept my proposal for their VeloFinland seminar (although I am sure many would interpret it that way). My aim is to point out that the filtering of scientific evidence does not happen only somewhere else; it also happens here in Finland and is not always recognized by the media and the general public.

It is unfortunate that Pyöräliitto, an advocacy group, has in a more or less subtle way entered the research domain in Finland. Although the promotion of cycling is worthwhile because cycling is associated with environmental and health benefits, it should be stressed that a good cause does not always justify the means. We should understand and accept that advocacy groups are advocacy groups and not a source of a balanced, critical and comprehensive overall review of scientific literature. The lack of independent traffic psychology researchers at Finnish universities makes it easier for advocacy groups and ‘experts’ to enter the research domain and establish themselves as a credible research source in public policy discussions.