Aus Forschung wird Gesundheit.
BIH Podcast 32: How did the coronavirus influence research?
Interviewee: Professor John Ioannidis, professor of medicine at the Stanford Prevention Research Center, professor of epidemiology and population health, and of statistics and biomedical data science. He is also the director of METRICS, the Meta-Research Innovation Center at Stanford University. In 2019, John Ioannidis founded METRIC-B, the Meta-Research Innovation Center Berlin, at the BIH.
Seltmann: Welcome to the BIH Podcast “Turning Research into Health” from the Berlin Institute of Health at Charité, the BIH. In this podcast, we answer questions about health and health research. My name is Stefanie Seltmann. Today we will talk about meta-research, which is research about research: How well is research performed, how robust are the results, how sound are the statistics behind the experimental setup? One of the most famous representatives of meta-research is John Ioannidis from Stanford University. He is a professor of medicine at the Stanford Prevention Research Center, a professor of epidemiology and population health, and of statistics and biomedical data science. He is also the director of METRICS, the Meta-Research Innovation Center at Stanford University. In 2019, John Ioannidis founded METRIC-B, the Meta-Research Innovation Center Berlin, at the BIH. Professor Ioannidis, what have you been researching over the last two years at your Berlin location?
John Ioannidis: It's a great pleasure to be in Berlin, Stefanie, and Berlin has been a fascinating place to do meta-research. There are wonderful colleagues at QUEST and the BIH and also at other Berlin institutions. Along with my fellows and students, we have already completed a number of studies looking at the use of reproducible research practices across different scientific fields. We looked at biomedicine, of course, but we also looked at the social sciences and at psychology. Obviously, each field has its own peculiarities, and some are doing better than others. That kind of work requires a very in-depth assessment of research papers, but we have also developed algorithms that can look at millions of papers, as we did in biomedical research. We looked at 2.75 million papers in terms of their sharing of data, the availability of their code and algorithms, their registration practices, whether they report conflicts of interest, whether they are or contain replications, and whether they declare their funding sources. So it creates a map of the research done in the past and being done currently, and we hope to use that information to make it better.
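To illustrate the kind of automated screening described here, the sketch below shows how transparency indicators might be flagged in article full text. It is a deliberately minimal illustration, not the actual METRICS pipeline; the regular expressions and the `screen_paper` helper are hypothetical simplifications.

```python
import re

# Hypothetical, highly simplified patterns: real text-mining pipelines for
# transparency indicators use far richer, validated rule sets.
INDICATORS = {
    "data_sharing": re.compile(r"data (are|is) available|data availability statement", re.I),
    "code_sharing": re.compile(r"code (is|are) available|github\.com|source code", re.I),
    "registration": re.compile(r"pre-?registered|clinicaltrials\.gov|registration number", re.I),
    "conflict_of_interest": re.compile(r"conflicts? of interest|competing interests?", re.I),
    "funding": re.compile(r"funded by|funding statement|grant number", re.I),
}

def screen_paper(full_text: str) -> dict:
    """Return one yes/no flag per transparency indicator for a single paper."""
    return {name: bool(rx.search(full_text)) for name, rx in INDICATORS.items()}

# Example: a two-sentence "paper" that shares data and declares conflicts.
example = ("All data are available from the corresponding author. "
           "The authors declare no conflicts of interest.")
print(screen_paper(example))
# {'data_sharing': True, 'code_sharing': False, 'registration': False,
#  'conflict_of_interest': True, 'funding': False}
```

Run over millions of full texts, flags like these yield exactly the kind of field-by-field map of research practices described in the interview.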
Seltmann: So, lots of very interesting questions and research projects. What were your most interesting findings?
John Ioannidis: I think that the ability to map what is being done, how this is changing over time, and how different interventions probably change the way we do research is the most exciting part. We don't want to just document our failures; there are lots of them, including my own, very prominently. Research is a very difficult enterprise: we start by making mistakes and errors and using suboptimal approaches, and we try to make them better. So I think the most exciting part is that in many of these assessments we see that there is improvement over time, clear improvement; there are more scientists who are sensitized to the need for transparency, for openness, for reproducible research practices, for more sharing with other scientists and with the wider community. I think that the pandemic has made that request even more prominent. People want to know; they want transparency, they want openness, they want sincerity. And I think there are prospects that it will become even better in the future.
Seltmann: Looking at biomedical research, you analysed millions of papers. Were there findings common to all biomedical researchers, problems that everybody struggles with?
John Ioannidis: I think that there is wide variability across different subdisciplines in terms of how they do research, how they communicate research, how they present their research findings, whether they share or not, and how transparent they are. But almost all fields, for example, have a very poor record of sharing code or algorithms. And this is becoming a key problem, especially in fields that are growing increasingly complex and complicated. You have a lot of very complex data science, you have artificial intelligence methods, and these really depend on whether you're able to get the code and the algorithm and get them up and running. If you cannot do that, then it is a complete black box. So even though this deficit is prevalent across all fields, some fields are in more urgent need of overcoming that block to make their research more useful and more dependable.
Seltmann: Would you let us know which fields these are?
John Ioannidis: Well, I think anything related to complex modelling and to the complex use of very rich data sets, ranging from electronic health records all the way to molecular data, genomic data, precision medicine, and the many modelling studies that became very prominent during the pandemic. All of these fields largely depend on transparency. If you're not transparent, you can probably get any result you want; you can fool yourself all the time, fitting the narrative you believed before you did the research. And if others are equally fooled, they will just believe you're correct, but then nobody would really know.
Seltmann: So, transparency would surely improve research results?
John Ioannidis: I think so. I think that there is some diligence from many researchers, and probably also some reservations about whether this requires additional resources, maybe more effort for documentation, maybe more bureaucracy, which would be the worst part. But I believe that if we have efficient chains for how we run our work, then probably we can save time, resources, and effort and still do the same work, do it better, and do it in a way that is also more usable.
Seltmann: Have you told researchers about this problem? What do they say?
John Ioannidis: I think that this is not an outsider's observation; it is an observation that researchers make within their own fields. What my team and other teams are doing is just documenting it comparatively across different fields. No research practice is going to be successful unless researchers themselves realize that this is at the core of their work. So I believe that the best researchers in each field are becoming increasingly familiar with these issues, because they are stumbling blocks in their own ability to make progress and to do research that matters, that translates, that is useful and efficient and, if it's medical research, saves lives. So it's not a quest imposed from outside; it is a grassroots movement within scientific fields, some of which probably have better leading scientists than others that are still using more antiquated methods.
Seltmann: You mentioned that the coronavirus pandemic influenced research. Before the pandemic, over ten years ago, you wrote a famous article arguing that almost half of all research results are wrong. What would you estimate: Is this proportion of wrong results now even higher, or has it improved over the pandemic?
John Ioannidis: I think that we can separate Covid-related research from the rest of research during the pandemic. Covid research was an amazing feat. We performed an analysis of the entire Covid-19-related literature, and we found that by the end of February 2021 about half a million scientists had already published peer-reviewed papers indexed in Scopus, which is an amazing number. If you think of any other disease, this is a number that would be very hard to match over the entire history of research on that disease, and obviously not within a single year. And we saw people from every scientific field working on Covid-19. We had a first analysis done by the end of 2020, and at that time I could say that among the 174 fields that science is divided into, no one with expertise in automobile engineering had published on Covid-19; but after that, even automobile engineering experts published on Covid-19. So everybody was interested in this. We had some fascinating results that were very trustworthy and also very high impact. We had vaccines developed in record time that were very effective, so some parts of the scientific machinery worked extremely well, I think surprisingly well, in terms of speed and efficiency. Many others were probably very suboptimal, and obviously, if you look at these 300,000 papers that are floating around on Covid-19, many of them have cut corners, many were very hastily done, many were very unreliable, with hugely exaggerated and sometimes hugely wrong results. I am not sure I can easily put a percentage (laughs) on how much of that literature will survive; probably it's going to be less than 50%, but obviously it's very early, and as you realize, this is something that takes time to mature, to see which of all these efforts that we put together so urgently, with all hands on deck, will really survive scrutiny. If you go to non-Covid-19 research, over the years most fields have improved to different degrees. And I think that Covid-19 also offered an impetus, because lots of people were now looking at science and how science is done. Every single citizen was interested to hear about the science, because it's about my life and my children and my family and my parents; so there was a lot of visibility, and people started to pay even more attention to these chronic issues. Has it improved? I would say that if you take single fields, each of them may have improved to a lesser or greater extent; but if you take a random paper from the published literature, the chances that it is correct, has no major flaws, and is not false might actually have decreased. And the reason is that fields that are more difficult and more complex, which have a lower yield of correct results even with the best intentions, have become more prominent. Scientists are increasingly working on more difficult questions, more subtle associations, more subtle effects, softer signals; and therefore, even though we are improving science as an enterprise, our success rate is actually likely to go down. Even if (laughs) we improve our efforts, it is unlikely that we will be able to keep up a high success rate, just because the odds are very difficult.
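The reasoning here, that fields chasing subtler effects have lower prior odds and therefore a lower yield of true findings, can be made precise with the positive predictive value formula from Ioannidis's 2005 paper "Why Most Published Research Findings Are False". As a sketch: if R is the pre-study odds that a probed relationship is true, α the type I error rate, and β the type II error rate, the probability that a claimed finding is actually true is

```latex
\mathrm{PPV} = \frac{(1-\beta)\,R}{(1-\beta)\,R + \alpha}
```

With the conventional α = 0.05 and β = 0.2, a well-motivated hypothesis with odds R = 1:10 gives PPV ≈ 0.08/0.13 ≈ 62%, while an exploratory screen with R = 1:1000 gives PPV ≈ 0.0008/0.0508 ≈ 2%. The same methods, honestly applied, yield mostly false findings simply because the prior odds are lower.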
Seltmann: Can you give us an example of this?
John Ioannidis: So, for example, take fields that are looking for associations in big data. It means that we are searching a space of potentially millions of associations. And it's not like smoking and lung cancer; associations of the smoking-and-lung-cancer type have mostly been described. I'm not sure, maybe some have escaped our attention, but if you think of 20 million scientists publishing, it's not very likely that an association that strong is out there with no one having noticed it until now. So there are mostly small signals in these huge data sets. And with lots of people publishing on these soft signals with very suboptimal replication practices, most of the time with very lenient statistical rules of engagement for claiming 'I have found something', most of these signals are not likely to be reproducible. I'm not saying that we should not do this research; we have these data, we should look at them, they are some sort of evidence. But we have to be very careful about saying that we know something for sure when we have generated signals out of data dredging, you know, professional data dredging (laughs), but still data dredging.
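A minimal simulation, with assumed numbers, shows why lenient statistical rules flood such searches with irreproducible signals: even if every probed association is pure noise, a p < 0.05 threshold will "discover" about 5% of them.

```python
import random

random.seed(42)

N_TESTS = 100_000  # assumed number of candidate associations screened
ALPHA = 0.05       # lenient significance threshold

# Under the null hypothesis, p-values are uniform on [0, 1], so each truly
# null association still crosses the threshold with probability ALPHA.
false_positives = sum(random.random() < ALPHA for _ in range(N_TESTS))

print(f"{false_positives} 'discoveries' among {N_TESTS} null associations")
# Expect roughly N_TESTS * ALPHA = 5,000 spurious signals, before any
# replication or multiplicity correction is applied.
```

Replication and stricter multiplicity control are what separate the few real signals from this baseline of noise.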
Seltmann: During this pandemic, people were more willing to work together – virologists with data scientists, with epidemiologists, with clinicians. Did this influence the robustness of the results positively or negatively?
John Ioannidis: In principle, when people from different walks of life and with different expertise join forces, you get a richer research environment, you get more rounded research questions, you get more opportunities to attack research questions from different angles, you get complementarity. All of that is likely to lead to better science, other things being equal. At the same time, it doesn't mean that just having any coalition of scientists will necessarily get you a good result (laughs). And I think that Covid-19 showed all the power, but also the difficulties, of collaboration across scientists who belong to very different disciplines: it's an issue of communication, of trying to understand the language that each scientist is using, of translating some of that language to a different field and working out what it means in that field. So it was a great learning experience. We had some fantastic results and some very high-efficiency efforts, like developing new vaccines. That was an effort that required scientists working from very different angles and very different domains of expertise. We also had some very weird applications from people who were probably trying to do their best, but who were not really knowledgeable in the field in which they wanted to do research and just ignored some very basic principles that someone familiar with the field would know. So it was a very interesting experience. We will continue learning from it for many years. And we had better learn, because a pandemic is not something that happens once (laughs); it's practically certain that we will have more pandemics coming down the road. So dissecting the science done on Covid-19 is very interesting in its own right. And it will take a lot of time, a lot of effort, and probably a bit of distance to understand with clarity, because currently we're still in the pandemic, so we all have a subjective conflict in a sense, because we are inside the event. And it is very difficult to observe an event when you're within it.
Seltmann: You mentioned that during the pandemic many people who had never published anything about the coronavirus, or about any virus, or about a pandemic suddenly started to do research in this field. What would you recommend: should one forbid this, and say that only people with at least some experience with viruses or pandemics may contribute? Or is it good that everybody brings their own background into a totally new field and all efforts are combined, as a kind of crowd research?
John Ioannidis: (laughs) Of course, I would never discourage people from doing science. I think that science is the best thing that has happened to humans; I have said that again and again, and if someone wants to do science, especially on a critical topic that might have an impact on saving lives, I'm not going to turn that person away. At the same time, it is true that science needs expertise; it doesn't need expert opinions, it needs technical expertise. So I'm probably one of the fiercest opponents of opinionated experts, including myself (laughs) at times, but at the same time I believe that science does require technical expertise, and it's becoming even more complex and more demanding over time, as we learn more, as we have better tools, and as we have tools that take more time to master before one can get up to speed and use them meaningfully. So it's true that we did see lots of Covid research that was very suboptimal, that had glaring errors, that just happened very quickly, from people who probably had very good intentions but were doing things completely wrong. And I think that some of that research might even have had negative repercussions, because it was about saving lives and trying to deal with a very major crisis. It's very difficult to know the best recipe for dealing with that: you cannot ban or veto research; you need better sorting and a better approach to the review and clearance of research. Covid-19 led to the advent of new tools for the dissemination of information. We had a vast increase in the number of preprints in biomedical research, where preprints were not so common before the pandemic. We had a huge amount of comments, criticism, and feedback on social media, on Twitter and in blogs; much of it at a very low level, belonging in the trash, with tons of politics and biases and conflicts, and all sorts of anxiety and fear really messing up science in very weird ways, but we also had lots of constructive comments and criticism. No tool is perfect and no tool is horrible; it's just a matter of how we use them, of how we are able to navigate the chaos of information that emerges, especially in a very serious and acute situation like a pandemic. And at the end of the day, I want to be optimistic that we are now also immunized against some of that junk science (laughs). I think more and more people will be better trained and more experienced at recognizing what is complete junk and what has some validity.
Seltmann: A scientist can distinguish between a peer-reviewed publication and a preprint. A journalist, or an ordinary person, cannot really distinguish between a researcher's Twitter account, their preprint, and their peer-reviewed publication. So, should there be a stricter boundary between the preliminary results of scientists and the public?
John Ioannidis: Indeed. It's a big problem, and I think that the media responded to the crisis with alacrity, but also with vehemence. And I think this sometimes added an extra layer of difficulty. All my life I have wished for more people to be interested in science; I always lamented that people are not so interested in science. In the last one and a half years, I fear that just too many people (laughs) have been interested in science, in ways that are distorting the environment and creating a very explosive mix of fear, anger, and instability. People are just watching for the latest detail to be communicated and shared by media and social media, and obviously, lots of these last-minute details are going to be false. They're more likely to be false compared with something that is more mature, that is of course peer-reviewed, but not just peer-reviewed: something that has a little bit of distance, so that we can really vet it and say, yes, this has been replicated, we have seen it again, it has been validated, we have a model that we have tried out and it gives reproducible results. We didn't have time for that, and I think that the media just jumped into the fray and lots of very unreliable information was disseminated. To give an equivalent from opera, which I like: a pandemic is not a short-term thing, it's something that lasts for years. So you have to be prepared for a four-year ... for a four-hour opera. And if you start the first bar two octaves higher than it's supposed to be and expect to keep it that way for four hours (laughs), it's not going to work. I think this is what happened with the media: they just pitched everything higher, an extra tone that was a bit off-key, and that probably created a lot of difficulty. Because we need to keep calm, we need of course to depend on science, we need to watch for the best new science that comes out, and as I said, there were major successes during the pandemic. But we cannot feed a paradigm where every day we need to say: we have earth-shaking discoveries (laughs), and this is the last-second discovery that just emerged and everybody should go crazy about it. We will all go crazy if we do it like that. And unfortunately, surveys suggest that a big portion of the population has had serious mental health problems, and I wonder whether the media have contributed to this (laughs).
Seltmann: What do you think about preprints now, after this pandemic experience?
John Ioannidis: I have always been very supportive of the concept of preprints, and I was one of the people pushing to make them more prominent, more visible, and more widely used in biomedicine, as they have been in other scientific fields for many years. And I was initially very happy to see an increase in the use of this mode of disseminating findings, because it gives you an opportunity to get feedback from colleagues ahead of time and to improve your work, find mistakes or errors, or optimize your methods. I do have some caveats, though, that have evolved over time. I have realized that research communicated in preprints can sometimes be misinterpreted, especially when it has public health implications, and even more so in a toxic environment like the one surrounding the Covid-19 pandemic. You will get lots of nasty feedback that has very little scientific value but may create a storm in a teacup, in a sense, on social media. So currently I'm still promoting the concept of preprints, but I would be a bit reluctant to release preprints on things that may be controversial or highly debated, and debated in the wrong sense. Now, is it possible to predict what will be debated in the wrong sense? That is not easy. So one has to use this approach, but also be cautious and try to get the best science out of it.
Seltmann: Science got political during the pandemic. Was this a problem?
John Ioannidis: It was a huge problem, because I think that science should remain apolitical; it should remain detached from what politicians say or wish to say or wish to do, and from what their followers might believe. And it is very unfortunate, because once you get political polarization, the nuance that is always necessary for science gets lost; people try to interpret results as if they were political statements. And obviously, I will not change my results to make them more congruent with one narrative or another, but many scientists would feel highly pressed, or oppressed, under these circumstances. I know of many people who simply decided that they would not do Covid-19 research because of the sick political environment that we were all trying to survive in. And I know of other people who were silenced after they did some Covid-19 research, because they felt that they couldn't continue under these circumstances. They had a life to defend; they could not tolerate receiving threats against their lives and their families, and that is really not the way to do science, under such pressure.
Seltmann: So, what lessons have you personally learned from the Covid-19 pandemic?
John Ioannidis: I'm still learning. It was a very challenging time. I could see how easy it is to make mistakes and errors. I knew all along that I make lots of mistakes, because that's what scientists do: we make mistakes and then we try to correct them. But working on Covid-19, the pace was accelerated; we had to move very fast, we had to get evidence very quickly, and that creates extra pressure. I think that the hallmarks of good science remain the same: principled, good methods, transparency, openness; try to replicate, try to compare notes with what other studies do and what other studies find. And the quest for improving transparency and openness became more urgent, and I think more people realize this. I don't want the pandemic to continue (laughs) just so we can learn more; I think we can still learn without it, so I do hope that it will come to a close sooner rather than later. But it was a challenge: in getting science done, in communicating results, in navigating a space of media and politics and lots of other forces that most scientists were not really familiar with. We know that they are out there, but (laughs) we don't usually see them invading our technical space. It was a very challenging period, a very interesting period, and a very sad period, because of all the tragic loss of life. But we learned a lot, and I hope that we will keep learning without losing more lives.
Seltmann: We all hope this. Thank you so much, Professor Ioannidis, for this interview.
John Ioannidis: Thank you, Stefanie.
Seltmann: And that was the BIH podcast “Turning Research into Health” from the Berlin Institute of Health at Charité, the BIH. Professor John Ioannidis, director of the Meta-Research Innovation Centers at Stanford and in Berlin, explained how the coronavirus influenced the robustness of research. If you have a question about health or health research, please send it to podcast@bih-charite.de. Goodbye, and see you next time, says Stefanie Seltmann.