Short Summary: An insider’s look at the messy realities of scientific research with Stanford’s Dr. John Ioannidis. The good, the bad, and the ugly about how scientific research actually works.
About the guest: John Ioannidis, MD, PhD is a professor at Stanford University in medicine, epidemiology, population health, and biomedical data science, with an MD from the University of Athens and a PhD from Harvard in biostatistics. He directs the Meta-Research Innovation Center at Stanford (METRICS), focusing on improving research methods and practices. Renowned for his paper “Why Most Published Research Findings Are False,” he’s among the most cited scientists globally, tackling biases and reproducibility in science.
Note: Podcast episodes are fully available to paid subscribers on the M&M Substack and everyone on YouTube. Partial versions are available elsewhere. Full transcript and other information on Substack.
Episode Summary: Nick Jikomes dives deep with John Ioannidis into the nuts and bolts of scientific research, exploring the replication crisis, the flaws of peer review, and the $30 billion publishing industry’s profit-driven quirks. They unpack Ioannidis’s controversial COVID-19 infection fatality rate estimates, the politicization of science, and the gaming of metrics like publication counts. The chat also covers NIH funding woes, administrative bloat, and Ioannidis’s current work on bettering research through transparency and new metrics.
Key Takeaways:
Science’s “replication crisis” isn’t new—it’s baked into how tough and bias-prone research is, hitting all fields, not just “soft” ones like psychology.
Ioannidis’s famous claim, “most published findings are false,” holds up: stats show many “significant” results are flukes due to weak studies or bias.
Peer review’s a mixed bag—only a third of papers improve, and unpaid, tired reviewers miss a lot, letting shaky stuff slip through.
Publishing’s a $30 billion game with 50,000+ journals; big players like Elsevier rake in huge profits from subscriptions and open-access fees that can exceed $10,000 per paper at top journals.
Researchers game the system through fake co-authorships and citation cartels, boosting metrics like the H-index (the number of papers cited at least that many times each).
Ioannidis’s early COVID-19 fatality rate (0.2-0.3%) was spot-on but sparked a firestorm as politics warped science into “clan warfare.”
NIH funding’s clogged by red tape and favors older researchers, starving young innovators and risky ideas that could shake things up.
He’s building tools like a public database of scientist stats (4 million downloads!) to spotlight gaming and push for transparent, fair research.
Related episode:
M&M #100: Infectious Disease, Epidemiology, Pandemics, Health Policy, COVID, Politicization of Science | Jay Bhattacharya
*Not medical advice.
Support M&M if you find value in this content.
Episode transcript below.
Episode Chapters:
00:00:00 Intro
00:06:21 The Replication Crisis Explained
00:13:12 Replication in Science: How Much and Why?
00:18:14 Why Most Published Research Findings Are False
00:25:13 Peer Review: Strengths and Weaknesses
00:33:07 The Explosion of Journals and Predatory Publishing
00:41:40 The Business of Scientific Publishing
00:48:45 Open Access Costs and the Funding Dilemma
00:57:00 Preprints & Potential Solutions
01:04:34 Gaming the System: Metrics and Misconduct
01:11:08 COVID-19 & Politicization of Science
01:18:31 Revisiting the Infection Fatality Rate
01:25:48 NIH Funding & Leadership Changes
01:32:13 Directs vs. Indirects in Research Grants
01:40:56 Hopes for NIH Reform with Jay Bhattacharya
01:46:37 Current Projects & Closing Thoughts
Full AI-generated transcript below. Beware of typos & mistranslations!
John Ioannidis 1:53
I'm a professor at Stanford in the Department of Medicine, of epidemiology and population health, and of biomedical data science. I'm running the Meta-Research Innovation Center at Stanford, or METRICS, which is a center focused on studying research and its processes and practices, and how we can make research methods and practices better. And I've worked in different fields, in evidence-based medicine and other areas where it's very common to see problems with methods, with biases, with making errors, including prominently my own, I guess, and trying to be sensitized by them and try to see how we can improve efficiency and
eventually get the most out of this fascinating enterprise that we call science.
Nick Jikomes 2:43
Yeah, and it really is an enterprise. There's lots of parts to the scientific research process. So you've done a lot of work studying the scientific research process itself. You research research.
John Ioannidis 2:57
It is research on research, and sometimes research on research on research. So there's no end to the meta-transformation.
Nick Jikomes 3:05
And so how did you, how did you even get into this? Why did this come to be a focus of your work?
John Ioannidis 3:11
I think from my very early steps in trying to do research, I was very interested in methods. Methods tend to be the fine-print section that most people are not so interested in. People probably focus more on results, and results are great, but I was fascinated by the machinery, and I tried my hand at different types of research. I did some research that was wet-lab basic science, some research that was clinical, some research that was population-based epidemiology, some work that was more mathematical and statistical. The common denominator of what I found most attractive was the difficulty of doing good research: how much effort and dedication and commitment it takes, and being aware that error was just waiting to creep in. And biases are so prevalent, starting from our own biases, and I also saw that play out in pretty much the majority of papers that I was reading to try to inform my evidence base and try to see what I would do next. So increasingly, I was interested in trying to understand the problems in the methods and the machinery, rather than test the specific result, which might be interesting and fascinating to pursue. But I thought that unless we can improve our methods and the way that we run our investigative efforts, our chances of getting reliable results would be pretty dismal.
Nick Jikomes 4:53
Yeah. And you know, I want to dig into a lot of different things with you, including how we identify whether results are likely to be reliable, and what the markers of unreliable work are, to give people a sense for some of the problems and the biases and the replication issues that are out there. But also, I think one thing that we'll come to is that I've also gotten interested in the research process itself. Especially as we're entering this new age of AI tools, there are a lot of AI-based large language models and other AI-based search tools that are helping people search through and synthesize the literature. One of the things I've become fascinated by is that there are problems in the literature that aren't just individual studies that are poorly done or undersampled. There can be systematic biases in entire fields, sometimes for decades at a time, and the technologies that we build on top of them are going to inherit those biases based on how they work. And I want to dig into a lot of this. Let's just give people some sense for one of the big issues here. A lot of people talk about the replication crisis. They will often talk about this in the context of specific fields like psychology, but of course it's not specific to any one field. There are a lot of big replication issues that have become more known to people, I think, in recent years. What would you say the replication crisis is? Is it something that spans most scientific disciplines, and what is the extent of this crisis?
John Ioannidis 6:21
I think the replication crisis is not necessarily something new. It is something that is inherent to the way that we have been practicing science, and replication is a fundamental component of how science should be run. It's a way to verify that what we're doing is reproducible, so we can put some more trust in it, and we can build on it with more confidence. I think that the term crisis has been coined in the last 10-15 years because more attention has focused on that aspect of research. And I think that our calibration and our expectations of how likely our results are to be correct had been a bit overblown, and when people started looking systematically into that question, we had to recalibrate our expectations. I think that for a while we had forgotten that science is so difficult and so bias-prone; we were probably overconfident, and then we had these large reproducibility efforts suggesting that a large share of our published results cannot be reproduced, and that leads to that terminology, crisis. But it's not that this is new. You know, it was there. I think that people, at least the majority, probably did not have that as a top priority in their thinking, and it has probably moved up the ladder of priorities and prime considerations in doing science and thinking about science and interpreting science and translating science. It's a problem, and it is also inherent in the scientific method. So it's both a challenge and an opportunity. I don't see it as something negative. Realizing that there is that challenge could be a good way to try to think about how we can improve some of our performance and some of the track record of how our research could replicate or not.
Nick Jikomes 8:23
How are replication issues distributed across fields? People often think in terms of a spectrum from soft sciences to hard sciences, usually based on how rigorously quantitative they are. Are the replication issues confined mostly to so-called softer sciences, or is this something we see in harder sciences and even physical sciences as well?
John Ioannidis 8:50
I think that they can arise in any scientific field, provided that it is a scientific field. A field that is always 100% correct is probably not a scientific field; it's probably some dogmatic religious or political effort, not a scientific one, that would always be correct. I think it also depends on how difficult the questions are that we ask and how steep the odds of success are. So some fields that seemingly have lower replication rates may actually be dealing with more interesting questions, with more high-risk questions, with more difficult issues, where success is likely to be less common compared to others. So I don't believe in a hierarchical view of sciences, as in these are the best sciences, and this is the second-class citizen, and these are the worst scientific fields, because each scientific field has its own performance characteristics, has its own goals and targets. It has very different priors compared to others in terms of the likelihood of success. And it would be a pity if we tried to kind of put one scientific field against the others and create some sort of competition of shaming each other, that you're not reproducible and you're not replicable. We have seen problems with replication across practically any field that someone wanted to take a serious look at. And it is not something that should be so surprising. Even with rigorous measurements, even in fields that are very structured, things can go wrong, or the types of questions are such that the success rate is bounded by some percentage. And you know, maybe that's the best that you can get in some cases. But that's okay if you have a way to filter eventually the credible information and move forward.
Nick Jikomes 10:58
Yeah. So a low replication rate, whatever exactly we choose that to mean, could indicate something negative or bad, like people doing sloppy work. But it can also be an indicator of being on the cutting edge, or actually asking high-risk, interesting questions. So it's not necessarily a negative thing, inherently.
John Ioannidis 11:19
Exactly. I mean, if you study whether the sun is going to rise the next day, probably the replication rate will be 100%, but it's not an interesting question. Conversely, if you work in a very high-risk field, many of the leads that are discovered are likely to be false and be refuted. But this doesn't mean that that field needs to be abandoned. So it's a question of efficiency that needs to be calibrated against what the likely value of the information is. What are the consequences of true discoveries? What are the consequences of false discoveries, especially false discoveries that take a while to be refuted, and they stay with us and we build on them and go down the wrong path, or even get translated into interventions and policies that may be detrimental? So it's a complex system. I don't think that we should be oversimplifying into single percentages of success and failure; we need to look at the big picture and try to see whether we can get something that makes more sense, is more useful, and is more credible.
Nick Jikomes 12:35
And whether or not there's something like a replication crisis in any given field, if we have any sense for what the replication rate is, that implies that people have actually tried to replicate experiments. And of course, it's very natural, if you're doing research, that it's more fun and more exciting and more advantageous from a career-advancement standpoint to make a novel discovery, not just to check whether someone else's discovery is accurate. So how much emphasis is there out there on actually replicating results that are already in the literature?
John Ioannidis 13:12
So there's quite some debate about that, both regarding how much replication is out there and also how much replication is desirable. If you take scientific papers at face value, the vast majority of them are trying to say that what I have done here is something novel or has something new to say. The reality, nevertheless, is that many papers don't really do something novel; they're just replicating, perhaps with some minor twists, some experiments or studies or knowledge that already exists. One way to document this is to look at meta-analyses, systematic reviews and meta-analyses that try to revisit what we know on a given scientific question, and in medicine, for example, the average meta-analysis finds seven to nine studies. There are some topics where we have 100-plus studies, and of course others where we have zero, but on average we do have a number of studies that systematic reviewers believe are similar enough to put in the same forest plot, to consider them as attacking more or less the same or a similar question, so that they can be summarized together. Now, are they replications of each other? I think the people who do them don't think of themselves as replicators. They see themselves as investigators contributing knowledge. But in fact, they're pretty similar, so they do belong to the same bin of information, to the same bin of knowledge. So there's more replication than we acknowledge. It probably varies a lot across scientific fields. I use the example of meta-analyses, but even whether we perform systematic reviews and meta-analyses at all has wide heterogeneity across science. In most biomedical fields, there's a lot of meta-analyses; in some fields, there's more meta-analyses than primary studies. And in other fields, there's very few, and some fields still have done hardly any such efforts to synthesize the evidence and see: where do we stand? What do we know? Where have we done a lot of studies, and where have we done no studies at all? So there is wide divergence in both the extent of replication and even the willingness to look at how much replication has been done. That goes even a step further when you think about what the desirable level of replication is. Because an argument that is raised very often is that you cannot just try to replicate everything; that would mean that you will have a lot of waste, and you're then trying to replicate waste, and that would be a loss of time and resources and effort, and we need to move forward and give priority to discovery. This is partly true. Probably there are some experiments or some studies that are so horrible and so useless that nobody should care about them and we should just put them to rest. But at the same time, I believe that replication is integral to discovery. It's not necessarily separate from discovery. I think that the replication and reproducibility check is integral to solidifying discovery and making sure that we're dealing with something that we can put some trust in and move forward. And without it, we may be wasting even more effort and even more resources going down the wrong path because we trusted something that was not to be trusted. So this is an open debate, and obviously the best answer has to be operationalized differently in different circumstances, depending on feasibility.
Sometimes, you know, you have a study that took 20 years to do, and what is it to replicate that, wait another 20 years? Versus here's a study where replicating it means just clicking a few buttons on your laptop to run some analysis on some existing data sets that can tell you right away whether you get the same signal or not; one would cost nothing, the other would cost a billion. So it's not one-size-fits-all, but there's unevenness in both how much replication we have and also how much replication would be desirable.
Nick Jikomes 17:48
One of your most highly cited papers, and you're a very, very highly cited researcher yourself, is titled "Why Most Published Research Findings Are False," and it says right on the first page that it can be proven that most claimed research findings are false. Can you unpack that for us? Is that meant to be taken literally or not?
John Ioannidis 18:14
It is literal. I mean, it's a bold statement, perhaps, but it is basically modeling the chances that, if you come up with a discovery based on, let's say, some statistical threshold of significance, and you say, I have found some signal, some treatment effect, some association, how likely is that to be truly so and not a false positive finding? So it tries to see what the impact of different factors would be, including the power of the study to discover effects that might be out there to be discovered; the number of investigators who try to attack that question, or similar questions, and do not join forces to just do a single analysis of all their data sets, but each one of them is trying to outpace the others; and also the extent of bias that may creep in and may turn some of the non-significant results into significant results. Why these things happen can, of course, be due to multiple reasons and multiple influences on the research agenda, the research design, the sponsors, conflicts of interest, knowledge of research methods or lack of knowledge of research methods, sloppiness, sometimes fraud, hopefully not that common, but even that exists. So if you build their composite impact, under most circumstances where you can think of how research is done, in most fields and most types of designs and most types of questions that we ask, the probability of a statistically significant discovery being a true positive is less than 50%, and in some cases it's much less than that, especially when we talk about a very lenient statistical significance threshold of a p-value of less than 0.05, which is what most fields have traditionally used. So I don't think that it would be surprising, and it is actually congruent with what we see with empirical results when we try to reproduce, to replicate, empirical investigations.
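A minimal sketch of the positive-predictive-value calculation behind this argument, following the general setup of Ioannidis's 2005 paper (prior odds R of a true relationship, power 1 - beta, significance threshold alpha, plus a bias term); the specific numbers below are illustrative, not taken from the episode:

```python
def ppv(prior_odds, power, alpha, bias=0.0):
    """Positive predictive value: the probability that a 'significant' finding is true.

    prior_odds -- R, the ratio of true to null relationships among those tested
    power      -- 1 - beta, the chance of detecting a true effect
    alpha      -- significance threshold (false-positive rate under the null)
    bias       -- fraction of analyses in which a non-significant result ends up
                  reported as significant (flexible analysis, selective reporting)
    """
    R = prior_odds
    beta = 1.0 - power
    # True positives: real effects that reach significance, plus those rescued by bias.
    true_pos = (power + bias * beta) * R
    # False positives: nulls crossing alpha, plus nulls pushed over the line by bias.
    false_pos = alpha + bias * (1.0 - alpha)
    return true_pos / (true_pos + false_pos)

# An underpowered exploratory setting: 1 in 10 tested relationships is real,
# 30% power, p < 0.05.
print(round(ppv(prior_odds=0.1, power=0.3, alpha=0.05), 2))            # 0.38
print(round(ppv(prior_odds=0.1, power=0.3, alpha=0.05, bias=0.2), 2))  # 0.15
```

Under these assumed conditions, fewer than half of the "significant" discoveries are true, which is the sense in which the claim is literal.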
Nick Jikomes 20:56
I want to talk a little bit about statistical significance and what that means and how we define it and how it's used. Of course, if everything worked the way we hope it works: scientists do scientific research, they submit their research to the peer review process, so other people who have similar expertise actually check their work, and if they agree that the work is good enough, it gets published. And then, of course, after it's published, it's hopefully going to be open to anyone to look at, and then other people can look at it and discover the results. So, for example, journalists will often report on the scientific results that are published in the peer-reviewed literature for a lay audience. Obviously, that work is published in journals by technical experts. If everything's working as it should, then a journalist is going to look at it, and they should be able to take it at face value, right? They should be able to say: this was published in Nature, therefore it's a very reputable journal, it's been peer reviewed, two or three experts have looked at it and checked it, and it says right in the paper, statistically significant. Of course, as you know, and anyone who's in the research world knows, that's not exactly how things work. Just because something's published doesn't mean it's rigorous or replicable. Just because something's statistically significant doesn't mean it's actually true. For people who don't know what a p-value is, can you explain that concept and what we mean by statistical significance in normal speak?
John Ioannidis 22:28
Okay, I think that any effort to simplify these concepts probably will lead to a wrong definition, unavoidably, to some extent. But one simple way to put it is that it is one way to provide some sense of what would be the probability of finding some result that is as extreme as we have found, or even more extreme, so deviating from the null finding, you know, like there's no difference, or no treatment effect, or no treatment benefit, or no harm, or no signal, no association. So we find some signal; what is the probability of finding that signal, or even a stronger signal, if actually there is no signal and, let's say, there is no bias? That is taken out of the usual way of thinking about it, based on what we have done. So if we do see a small p-value, this doesn't mean that we have a very small chance that this is not true, because it much depends on what the starting chances are that there is some signal. And if we're looking in a field that has no signals to be discovered, and let's say we were very unlucky and we selected to go into a scientific field where there's nothing to be discovered, then no matter what the p-value is, we're stuck. You know, we will get some very nice-looking p-values, very small p-values, but they will mean nothing. And it's always a challenge to understand and calibrate what the field that I'm working on is and how much there is to be discovered there. Now, a paper being published in a peer-reviewed journal is better than not being published in a peer-reviewed journal, and we know that peer review does improve scientific papers: roughly about a third of scientific papers become better through peer review, and maybe 5% become worse, where the editors and the reviewers manage to make the paper worse.
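To illustrate the point about fields where there is nothing to discover, here is a toy simulation (not from the episode; it assumes numpy and scipy are available): when both groups are drawn from the same distribution, a p < 0.05 threshold still flags roughly 5% of studies as "significant."

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 1,000 "studies" in a field where there is truly nothing to discover:
# both groups are always drawn from the same distribution.
p_values = []
for _ in range(1000):
    group_a = rng.normal(loc=0.0, scale=1.0, size=30)
    group_b = rng.normal(loc=0.0, scale=1.0, size=30)
    p_values.append(stats.ttest_ind(group_a, group_b).pvalue)

significant = np.array(p_values) < 0.05
print(significant.mean())  # ~0.05: a steady trickle of 'discoveries' from pure noise
```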
Nick Jikomes 25:09
And how do we specifically know that? And what exactly do you mean by better or worse?
John Ioannidis 25:13
So there are some studies, for example, like some that Sally Hopewell and others have done, where we had access to the original version of a paper and the final published version, and also the reviewer comments, and, let's say, independent scientists tried to arbitrate very carefully to see whether the interventions of the peer reviewers and editors improved the paper in some tangible way, not improving commas or full stops or just a little bit of the aesthetics of the language, but, you know, some equation was wrong and it was corrected, or some data had been miscalculated and that was fixed, or some information was missing and that was really added. Conversely, some important information was there, but it was removed, which makes the paper worse. So roughly, based on this type of empirical evaluation, we know that, as I said, about a third of papers get better, about 5% get worse, and about two thirds are not really touched materially by peer review. Now, those that are not touched materially, that does not mean that they were perfect. It's very unlikely that they were perfect, based on what we have seen in terms of biases and errors and misrepresentations and flaws; very likely most of them have problems that simply were not detected by peer review. Reviewers are over-fatigued. They have very limited time. They don't get paid for what they do. Sometimes you ask 20 people to get a couple to agree to write a review that is usually 100 words or 200 words, and half of that is not really saying anything, and maybe there are a few points that might be making a difference, at best. So without saying that peer review is a bad idea, it's not a panacea, and a lot of flawed papers will go through the system. There's also lots of journals that have little or no peer review. There's predatory journals. There's other journals that have very little peer review; they will publish practically anything or almost anything. So having a paper published does not necessarily mean much. Having it published in a prestigious, highly competitive journal like Nature or Science in basic science, or the New England Journal of Medicine or the Lancet in medicine, does not necessarily mean that it is more credible. I think that this is a misconception. There are opposing forces here. I think that some papers that end up in these top-tier journals, which have the highest competition for their pages and acceptance rates of 5% or less, yes, they may attract some of the best work, some of the most anticipated and expected work, work that a lot of resources and a lot of thinking and a lot of brain power have been invested in, and everybody's waiting for their results, and it is a well-done study, very well designed and very well supervised, and lots of people, in a way, have reviewed it beyond the couple of peer reviewers who see it at the end. So yes, that type of research probably is going to be more credible on average, although sometimes even that may not be the case, if there's a conflicted agenda behind it, for example, so it has to be seen on a case-by-case basis. But then the majority of papers that end up in these journals are not necessarily these widely expected, central studies that hundreds of people are reviewing somehow and putting effort into. They're mostly studies that tend to have a very strong surprise factor.
They are studies that find some result that is extreme, really novel or seeming to be novel, and effect sizes that are larger than average, and in that case there are two possibilities. One is that, yes, this is a discovery of something that is really so extreme and so nice and so beautiful, a large effect. And the second possibility is that this is winner's curse: simply by chance, someone has found this very strong signal, but actually the real signal is much smaller, if not nonexistent at all. And if you think of it this way, if you have a small study, and there's a million such studies being done, and it gets an average result, it's not going to make it into these journals. The only chance that it will make it into one of these journals is if it has found something that's really extreme.
Nick Jikomes 30:18
So if, by chance, something that's not that extreme is studied a bunch of times, the person who finds the extreme result is actually going to be biased to be the one who publishes on it first.
John Ioannidis 30:28
Exactly so. So you have a winner's curse that is likely to affect these journals far more than the average, let's say good, respectable journal that is willing to publish more average types of results. And we have documented that: we have run empirical analyses looking at topics that have been assessed by studies that were published in top-tier journals like the New England Journal of Medicine, JAMA, and the Lancet, and also in specialty journals, on the very same intervention, the same type of question. What we saw is that when we're talking about large studies, the effect sizes are about the same in the top-tier journals and in the specialty journals that published on that topic. But when you have small studies published in these top journals, their results are hugely inflated compared to similar studies asking the same question and published in specialty journals, and these inflated results typically just get washed away when someone looks at them again and realizes that it's not such a huge benefit, such a huge signal, as what had been published. So, you know, nothing is perfect. No journal is a perfect guarantee that what is published there is going to be impeccable.
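A rough sketch of the winner's-curse dynamic described here, under simple made-up assumptions (many small, noisy studies of the same modest effect, with only the most extreme estimate reaching a highly selective venue); the numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

true_effect = 0.2      # a modest real effect, in standardized units
n_per_group = 20       # small studies, so each estimate is noisy
n_studies = 200        # many teams attacking the same question

# Standard error of a two-group mean difference with unit variance in each arm.
se = np.sqrt(2.0 / n_per_group)

# Each study's estimated effect is the true effect plus sampling noise.
estimates = rng.normal(loc=true_effect, scale=se, size=n_studies)

print(round(estimates.mean(), 2))  # ~0.2: the average study is about right
print(round(estimates.max(), 2))   # ~1.0: the most extreme 'winner' is wildly inflated
```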
Nick Jikomes 32:00
There's a lot of places we can go with this. I guess the essence of what you were saying just now, for people to understand, is: statistical significance is an important concept. It has to be computed the right way. There are assumptions that go into it, which may or may not be justified. The sample sizes and the effect sizes matter. I'm not sure we need to go into too much technical detail on those things, but just for everyone listening: the size of your samples and how good your data is when it's collected matter; being cognizant of the assumptions that are made to do the appropriate statistical tests matters. And sometimes people do it correctly and rigorously, and sometimes they don't. It's a mixed bag. All these things matter. I want to talk next about something that you mentioned, which is that there are a lot of journals out there. There has been a massive proliferation in the number of journals over the years. And you also mentioned that some of these journals are predatory journals. Let's take those one at a time. How many journals are out there? Can you give us a sense for that, and why are there so many?
John Ioannidis 33:07
There are different estimates, but let's say that there are more than 50,000 journals out there. 50,000, yes, five-zero thousand. So yes, indeed, that's a huge number. Of course, that covers all of science, and there's a very large number of scientific fields. So it could be that, if you break it down per scientific field, for some fields there's only one or two that really publish the majority of papers in that particular domain, and in some other fields there's just a very large number that could accommodate the literature that is coming out of scientific efforts. So it's not that every paper can go to any of these 50,000 journals. Some papers have a more limited target journal space compared to others. But clearly there's a lot, and we don't even know the full number, because, as I said, there are some that are less visible. There are many that are predatory, which means that they are just small, or more than small, businesses where you pay some money and the paper will get published practically with no peer review. There's a growing array of mega journals that publish thousands of papers every year. The typical journals in the past used to publish 100-200 papers every year, some of them a bit more, but not much more. Mega journals, by definition, publish more than 2,000 papers every year; some of them publish more than 20,000 papers every year. So they have peer review, and their acceptance rates are not 100% unless they're predatory, but they may be accepting 30, 40, 50, 60% of what they receive. And then you have a very wide array of all sorts of specialty journals with different business models, with a very large publishing industry that is making lots of money out of that process. So the publishing industry has roughly a $30 billion annual turnover.
Nick Jikomes 35:32
That's $30 billion in revenue per year?
Speaker 1 35:35
In revenue, yes. And the profit margin for the big publishers is in the range of 30 to 40%, which is, if not the highest, among the highest compared to any other really legitimate enterprise.
Nick Jikomes 35:48
That's as good as or better than the profit margins of, like, Apple.
Speaker 1 35:54
It is. Yes, it is. So Elsevier, for example, and Wiley have better profit margins than Apple.
Nick Jikomes 36:01
Yeah, and those are two of the larger publishers that own journals that people have probably heard about, brand-name journals like Cell.
Speaker 1 36:09
Exactly. There are five publishers that publish the lion's share of the scientific literature, and now we have these mega-journal publishers that are also pushing the frontier; one of them is actually called Frontiers,
Unknown Speaker 36:26
yeah,
Speaker 1 36:29
that are, you know, pushing the numbers of papers.
Nick Jikomes 36:33
So, 50,000 journals, give or take, doing 30 billion, with a B, dollars in revenue per year, with profit margins that are among the highest of any private company in any industry. So this is a big business in every sense.
Speaker 1 36:51
It is, it is, and one wonders whether we're getting what we pay for.
Nick Jikomes 36:58
Yes. And why is this such a big business? How does this work? So obviously there are issues here. On the supply side, there are a lot of academics doing a lot of research, and so there are a lot of potential papers to publish. There's a demand side to this, and we can talk about that. And then there's also the issue of costs, and the costs for these publishers would be an interesting area to talk about, because you already mentioned previously that, for papers that go through peer review, peer review is a key component of the manufacturing process, so to speak, but the peer review is done for free in most cases. Is that true?
Speaker 1 37:40
That's true. There are very, very few exceptions where peer review is paid. And even in those, the amount being paid is so little that it doesn't make a difference. Like, you know, $5, $10, maybe $50. I'm involved in one journal that was launched recently where the model is to pay reviewers $500, and it's an experiment. I don't know if it will work to improve the quality and rigor of the peer review process.
Nick Jikomes 38:11
Yeah, just Martin cold, is that the journal that you're referring to?
Speaker 1 38:14
Yeah. So I see it as an experimental effort. We don't know what the best model for supporting peer review is. To be honest, we have run some randomized trials, not as many as I would wish, but there are some randomized trials trying to randomize different modes of review, like open peer review, blinded peer review. There are some trials of training for peer review. There are some that look at having a statistician peer reviewer look at the papers. The effects tend to be modest or even null. I think the clearest signal would be for having a statistician look at the papers; that clearly seems to improve the subsequent versions that get published. But then the challenge is, how? Where do we find statisticians to look at 7 million papers that are published every year? There's not that big a workforce of statisticians and methodologists. They have their own things to do and work on. We cannot hope to engage them to review 7 million papers. So there are lots of incentives to publish. I have nothing against publication and productivity. In fact, for many decades, I have been struggling, as many other scientists have, against publication bias and non-publication of negative results. So I would be the last to say that we should not publish. We need to publish. We need transparency. We need openness. We need to communicate, and we need to communicate with even more transparency and with more detail about what we do. But it is a huge business, as you say, and it is exploited by those who are centrally placed to make a profit out of it.
Nick Jikomes 40:27
Let's talk about that a little bit, because people in this world often know, at least to some extent, how some of this stuff works. Almost no one in the research world who I've ever met, back when I was in academia, almost no one is satisfied with the way peer review works. For those listening: when you're in the academic world and you're doing research, and all of your friends and colleagues are doing research too, think about every single time you hear someone say, oh, I've got my paper, it's under review right now. Oh, how's it going? Almost 100% of the time the answer is some version of: it sucks, it's terrible, the reviewers aren't doing it right, blah, blah. I've never once in my life heard anyone say, you know, the reviewers had some critical things to say, and they're absolutely right, and I changed my mind, and the paper is not getting in, but I learned a lot. And then, of course, there's the business side of this and the exploitative nature that many people would say the big journals operate with. Can you start to talk about how this became such a big business? Where are they generating all this revenue from? Is it from subscriptions? How is this such a big, profitable business?
Speaker 1 41:40
So just to close our discussion on peer review, I think that there are peer reviews that are constructive. I don't want to be so dismissive. Personally, I have had peer reviews on many of my papers where I felt that the paper did improve. I feel that I really benefited from that input, sometimes even very negative input. But goodness, you know, thank you very much: you picked up some error that I had made and hadn't noticed, and now I can fix this. So some papers do get substantially improved. So yes, I'm uneasy about just discrediting peer review, and I notice nevertheless that it is a very suboptimal system. It leaves lots of possibilities for things to go wrong. Now, the publishing system has evolved over the years, and it has become more massive, both in terms of number of journals and in terms of number of papers, and it is also highly hierarchical. Journals have very strong prestige factors attached to them, and this has largely been a journal impact factor business, which basically looks at the average number of citations in the first two years after the publication of a paper. It is a flawed metric by many different ways of looking at it. There have been many efforts to try to dismantle it, to say, no, we're not going to look at it. But goodness, everybody's looking at it, and that reinforces both the prestige ladder and also the gaming of the system. So people, journals, editors, publishers are just struggling to prevail in a gaming system that is not necessarily aligned with better science, with better research; it is trying to optimize some numbers that are surrogates, and most of the time they're very poor surrogates, or capture very little of the essence of what we should be interested in. How do you get rid of that? I think that we need to experiment. We need to try many different ways. Let many flowers bloom and see whether some of them may be more successful. It is not a uniform system. We have new ideas. We have new concepts. We have preprints, for example, where people can post their work practically for free, a model that has been very successful in fields like the physical sciences, and I think that now it starts becoming more successful, or at least more popular, in the biomedical sciences as well. We have models like eLife, where journals are seen mostly as a platform to perform review, hopefully good review, and then the authors may decide to publish their papers regardless of what reviews they get, but they will have the reviews along with what they publish. And I think that we need to study peer review rigorously to understand what it does and what it does not do. Also study who is doing what, you know, some very fundamental questions like: what kind of editors do we need? For example, most of the high-profile journals don't have editors who are highly credentialed scientists at the top of their fields. They have professional editors who only have experience as editors. You know, they may have done some background training, maybe got a master's degree, some of them maybe even got a PhD, but they never really did a lot of scientific work themselves. They're editors, and that's what they do. Is that better? Is it worse? I mean, they will defend the model as being more objective, that they don't care that much about who is submitting and what they say, that they're kind of remote. But, I mean, how remote can you be and still be relevant to a field, right?
I think that's a challenge. So even very fundamental questions and practices should be open to scrutiny and assessment in terms of: do they improve things or do they make things worse?
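For reference, the journal impact factor mentioned above is conventionally computed as citations received in a given year to items the journal published in the previous two years, divided by the number of citable items published in those two years; a trivial sketch with made-up numbers:

```python
def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Impact factor for year Y: citations received in Y to papers the journal
    published in Y-1 and Y-2, divided by the citable items published in Y-1 and Y-2."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 150 citable items over the two prior years,
# 900 citations to them this year.
print(impact_factor(900, 150))  # 6.0 -- an average over a highly skewed citation distribution
```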
Nick Jikomes 46:17
But, you know, just getting down to the nitty-gritty here, to make this super concrete for people. So we've got a $30 billion industry, a lot of revenue being generated with high profit margins by these big-name journals. Where is the money coming from? What is the specific revenue that's being generated? Is it a high cost to publish that the researchers are paying? Is it subscriptions to the journal that many people are paying? Or is it...
Speaker 1 46:44
So some of the revenue comes from subscriptions. A large part of the revenue comes from subscriptions, which means that universities and other institutions need to pay these publishers a lot of money in order to be able to have access to the papers that their researchers are producing.
Nick Jikomes 47:05
And how are those prices determined? Is there sort of a free market where the prices are set by supply and demand?
Speaker 1 47:13
Well, in theory, it is a free market. But as I said, there are really very few big publishers, and that creates a situation that is probably closer to a cartel. So if you really had 50,000 players that are truly independent, probably the prices would go down. But we don't have 50,000 players. We have five, you know, maybe six or seven if you also add the new mega-journal publishers that are rising quickly, and therefore, somehow, the prices have been staying at pretty high levels and allow for these large profit margins for these big players. So it's mostly: how do you change that cartel situation? And to a large extent, it is about subscriptions. There's an extra layer, which is the publication cost, the article processing fees. Typically, to publish your work as open access, and some journals are entirely open access, others are moving to become entirely open access, some are mixed, but people may still wish to have their work published as open access, they have to pay several thousand dollars to do that, and in some of the top journals they need to pay more than $10,000, more than $12,000, you know, Nature, for example, to have a single paper published.
Nick Jikomes 48:45
So just to be clear for those listening: in some cases, if you see a paper out there published in a journal like Nature or another big-name journal, if it's open access, which it may or may not be, that often means that the scientists who did the study, who led the study, paid thousands of dollars just to have it published so it's openly available.
Speaker 1 49:09
Exactly, yes. Which means that that money is removed from that scientist's budget to do research, to pay researcher salaries or consumables or other types of resources and effort. Often it would come from the funders; if it's public funding, it would come from public funding, meaning taxpayers. So there's a lot of double dipping here by those who profit from the system. They're gathering money from different sides and different steps in that process.
Nick Jikomes 49:51
Yeah, you're actually asking scientists to do peer review for free. It's like literally having an employee that you don't pay, and then they have to pay you to have the work published, and pay more if it's open access. And then it actually becomes even more absurd, you might say, when you start to think about it over time, right? We used to have physical journals. You used to physically buy a journal, and they would have to physically print it. But that's really not the case so much anymore. So the cost to the journal should be going down in that sense, and yet I imagine these fees have not adjusted.
Speaker 1 50:22
Indeed, and if you have a cartel, it will not get adjusted. Everything will remain at a very high profit margin moving forward, no matter what improvements we have in the technological transformation of the publishing enterprise, which indeed can now cost less because we can just do everything online; we can have things done much more efficiently. So it is a big problem. It is a big black hole, in a sense, in the vicinity of science. It is absorbing a lot of scientific resources. And scientists, by default, are just offering more to that black hole. We're offering free peer review, as you mentioned. We are offering funds to make a paper open access. We're offering funds from our universities to pay for subscriptions, and so forth.
Nick Jikomes 51:19
So there really is probably a collective action problem here, where if every scientist, every researcher, simultaneously demanded, hey, we need to be compensated for our time, even if it's modest, because that's just what we're demanding from you, something would happen. But of course, you need a critical mass of people to do that, and for various reasons, that's very unlikely.
Speaker 1 51:43
Indeed. Yeah, and perhaps we should have someone from the publishers' side to give a balanced view. I mean, I don't want to demonize publishers as being, goodness, these horrible demons who are just making money off good scientists and good institutions and good funders. I'm sure that they also offer some value and offer a commodity that is important.
Nick Jikomes 52:08
Some value, sure, yeah.
Speaker 1 52:11
I mean, it's an issue of what we prioritize as a society, and what we feel are our legitimate enterprises and legitimate margins of profit. I mean, we have so many other enterprises, like, you know, the tobacco industry is making a lot of profit, and it's just killing people. So I think that I'm biased to say that we have a problem, and I think that publishers should be making much less out of what they do. But it would have been nice to have someone give a counterargument, that maybe we should have more of them and perhaps less of the tobacco industry or other companies that are really just causing death and wreaking havoc on society with no value.
Nick Jikomes 52:59
Yeah. So, well, it's hard to even imagine workable solutions. Again, hypothetically, you could imagine every scientist gets on an email thread and we say, hey guys, tomorrow we're just demanding, you know, $2,000 per peer review, no questions asked. Of course, that's not actually going to happen. What are potential workable solutions here? Is there anything on sort of the regulatory or funding agency side? I know that there's been some talk recently about things like NIH funding requiring open access. What are some potential solutions that could change how publication operates?
Speaker 1 53:43
I think that there are many solutions, but I worry that most of them are not evidence-based. You know, they have not been tested in some study, in some pilot study even, to try to see how they translate into action. So people have speculated, and they come up with all sorts of proposals. Some of these proposals seem extreme to me, and probably are going to be very damaging if implemented. For example, one proposal is: don't allow a scientist to publish more than X number of words every year. Which means that probably many of the biases that we have, with publication bias and extreme results having preference to be disseminated, will become even worse, because everyone will be counting their words and, goodness, I need to take my best shot and really impress everyone, my funders, my stakeholders, the community, consumers, whatever it is. So probably we'll have more bias if we say, let's restrict publication, let's have less, let's try to shrink the system somehow. There are other solutions that we know would work, but then we don't know how to implement them. I mentioned the example of using statisticians as peer reviewers, but then where do we find these statisticians? Where do we find these people? There are others that seem to cost nothing or very little, like preprints. I like those because even if we get it wrong, we're not going to pay for that, and they also have a track record of being very successful in the physical sciences already. So I don't really see a problem. And some people are concerned that, well, you let everything out there, and it hasn't been peer reviewed, and people will use it for misinformation and disinformation, and it will all be a mess. Well, that is happening already, and having peer review is not really that much of a safeguard. If you have something wrong, it can still be very widely disseminated.
Nick Jikomes 55:56
So, for those unfamiliar, a preprint is basically this: a group will do research, they will write a paper, and historically you would send that off to peer review, and if it goes through that process and gets approved, then it gets published in the journal. A preprint is the idea that you send the draft to a preprint server in parallel to submitting it to a journal, and then it's public. You can see it on the Internet, anyone can read it, and it says, you know, this has not been peer reviewed yet. And I guess the idea is this allows everything to get out there, for everyone to see what's out there. On the one hand, arguments against doing that might be: well, if it's not peer reviewed, then you don't have quality control; you might have bad research out there that then informs, or misinforms, the public. But on the other hand, if you let everything go out there in an open way, it enables everyone to scrutinize it, and it helps get around issues where, you know, just because two reviewers reject a paper doesn't mean it's a bad paper; maybe a lot of other people would have approved it. So there are those two sides to this, but the preprint is sort of putting it out there before it actually goes through a journal.
Speaker 1 57:00
Exactly. And I like this. Personally, for most of my empirical papers, I would try to have them preprinted early on, when I feel that they're at a fairly mature stage and I plan to submit them for peer review. But, I mean, nothing is perfect. Even preprint servers will not accept everything. I think that medRxiv, for example, accepts about 70-75% of the preprints that are submitted and declines the others for various reasons, some of which may be legitimate. For example, they may not be original data; they may be more like opinions. I would argue it doesn't hurt to have opinions as well, provided that it is clear that this is just an opinion piece. And some others may just be rejected because they're felt to be potentially dangerous, let's say, and that's very tricky to decide, even more so when supposedly a preprint server does not do any peer review. I mean, that's what it's supposed to be: it allows something to be posted before any peer review. So rejecting something because someone, who typically is not qualified to even judge the work, just looks at the title and gets a gut feeling, oh, goodness, I think that this may cause trouble, I'm not going to post it, I think that's problematic. So there's no perfect solution at the moment, and I think we should just keep trying different options, different possibilities, see how they work, try to map their strengths and their weaknesses, and do some testing. You know, do some real studies, empirical studies, comparing different types of peer review, different types of publishing, different ways of reimbursement or of intervening in the status quo, and see what kind of perturbation we get, rather than just saying, oh goodness, it's all wrong, and it's so inefficient, let's dismantle it. Well, if we dismantle it, maybe it will become even worse. So I think we have to be very, very careful in what we do.
Nick Jikomes 59:33
But you know, one thing that seems clear here, from listening to you but also from my own experience, is that the ecosystem as it exists right now is hackable in different ways. Because so much is citation-based and publication-volume-based and prestige-based, and because you even have, as you mentioned earlier, predatory journals that will essentially publish anything without being true to the spirit of the peer review process, they're just trying to collect publication fees, you could have unscrupulous researchers out there, and to some extent you probably do. They can just churn out mediocre paper after mediocre paper. They can cite themselves as many times as they want. They know that they can get it published at, you know, one of the 50,000 journals that's out there. And you can crank up your H index or your citations through artificial means. Is the ecosystem facilitating that kind of behavior to any significant degree?
Speaker 1 1:00:34
This is happening to a significant degree indeed. And I think that different metrics are more or less easy to game, and a metric like the journal impact factor, for example, is highly gameable. And this is one more reason why I think it is not a good metric, besides other arguments: it is at the journal level, it is just an average of a highly skewed distribution. I mean, I can probably add 50 reasons why it's a bad metric, but it is also gameable. And the same applies to number of publications. Number of publications currently is highly gameable. It's very easy to publish papers, very large numbers of papers, in different journals. So I think that metrics that are highly gameable probably should be abandoned. We should not be paying attention to them, and we should prefer metrics that are more difficult to game. Now, once a metric acquires more prestige and people respect it more, it is expected to be gamed more. People will try to game that particular metric because this is how I'm going to be judged, this is how I'm going to be promoted, this is how I'm going to get funded. So people will try to look good on that metric as well. But some metrics are far more difficult to game compared to others. For example, number of publications is extremely easy to game, right? The H index is more difficult than number of publications, but it can still be gamed.
Nick Jikomes 1:02:14
Just to make it explicit for people, why is publication number easy to game?
Speaker 1 1:02:20
Because there are 50,000 journals that are just waiting for you to submit anything and publish it. So if someone wants to submit 1,000 papers tonight, if they have the money to pay for that, they can get 1,000 publications by the end of the day.
Nick Jikomes 1:02:32
The other thing that I learned from my academic days was, um, very often, when you see a paper, there might be 5, 10, 20 or more names on it. Very often, some of those names are on there simply because they shared a reagent with someone else. Exactly, yeah. There are lots of different ways to get your name on papers that might not involve you really doing much of anything in terms of the research, right?
Speaker 1 1:02:56
So again, if we pay a lot of attention to number of publications, people will flock to the author masthead. They will try to become authors on papers that actually they have not contributed to. And that creates a further boosting of unhealthy ecosystems where the director or the professor will be an author, no matter what, on anything that comes out of the department, even though they have no clue about the work that was done. And then also the associate professors, and then, you know, at some point maybe everyone will be an author on everything, even though their contribution has been minimal or completely none. So authorship is very easily gamed, and this is why I believe that number of papers should not be taken into account in judging individuals or teams. I don't think that it should count whether they have published one paper or 10,000 papers; both may be legitimate, so I'm not saying that we should penalize those who publish 10,000 papers. You know, maybe, for communicating their work properly and in sufficient detail, in sufficient depth, they need 10,000 papers. That's perfectly fine. Or maybe they can do it with one. I don't think that should be an issue. They should not get a better grade or a better salary or a better chance of promotion or more funding depending on the number of papers. The H index is less gameable, but it can still be gamed.
Nick Jikomes 1:04:30
So how is the H index computed? What is that?
Speaker 1 1:04:34
The H index is the number of papers that have received at least as many citations as the H index itself. So an H index of 30 means that that person has published 30 papers that have at least 30 citations each.
Nick Jikomes 1:04:52
I see. So maybe you published 100, but some of them have fewer than 30 citations. The number is telling you how many you have that are cited that much.
Speaker 1 1:04:58
Right. So at least 30 of them have at least 30 citations each. The most cited may have 30,000; the H index doesn't say anything about how far above 30 they go, only that at least 30 papers have at least 30 citations. Now, we know that the H index is also gameable. It is gameable by becoming a co-author, an undeserved co-author, on papers, and it is also gameable by people who create cartels that cite each other and boost the number of citations. Sometimes it's so funny: they boost citations to specific papers just to increase the H index, making those papers pass exactly the necessary thresholds. And this is discoverable. You can see that the distribution of citations across papers fits that gaming. So it is gameable, not as easily as just number of papers, but it is gameable. Total number of citations is even less gameable, because, goodness, someone needs to find big cartels that are willing to massively cite many papers. And then again, that gaming can be revealed, it can be detected. You will see papers that cite some author 150 times.
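Note: a minimal sketch, in illustrative Python, of the H index computation just described. The citation counts are hypothetical, and the second call mimics the threshold-nudging kind of gaming mentioned above.

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

papers = [50, 30, 30, 4, 3, 1]   # hypothetical citation counts for six papers
print(h_index(papers))           # 4

# The gaming described above: a cartel adds a handful of citations so the
# fourth and fifth papers just clear 5 citations each, and the H index jumps.
papers_gamed = [50, 30, 30, 5, 5, 1]
print(h_index(papers_gamed))     # 5
```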
Nick Jikomes 1:06:23
Right, right. There's a classic example in the research world, and many researchers listening will know what I'm talking about, where you submit your paper for peer review and your anonymous reviewer will then request that you cite a particular author 12 times. And one wonders whether that was the person. Yeah.
Speaker 1 1:06:41
And then there are other metrics. There are co-authorship-adjusted metrics, like the Hm index, which corrects for co-authorship. And then you can look at contributions as first author, as single author. You can look at self-citations. You can look at citation-cartel metrics that suggest someone has been orchestrating or gaming citations. So I think we can have metrics that probe into the gaming of metrics, and look at the full signature: here's a scientist whose work is not quantified by a single number, like number of papers, but by these 50 numbers, and some of them are good, some of them are bad, and some clearly suggest that there's fraud going on here, or some overt gaming. I think that would be more appropriate to do. It's a little bit like virus and antivirus software: you come up with a new metric, it can be gamed, and then you have to come up with something that is even harder to game.
Nick Jikomes 1:07:51
Yes, as soon as something becomes a metric for success, it stops being a good metric for success, right?
Speaker 1 1:07:57
Right. So besides metrics, I think we need to look more broadly at other issues like scientific citizenship, use of rigorous methods, transparency, openness, sharing: data sharing, code sharing, protocol registration. Registration matters, and it's important to register your study ahead of doing it. For clinical trials this is the default, and for some other types of studies I think it should also become the default. Then try to see how a scientist performs on these dimensions, which tell you whether they're rigorous about their research. Again, there's not a single number or a single thing that would say here's a good scientist or here's a bad scientist. The same applies to institutions, the same applies to journals and to other types of groupings. But having a more complete picture across multiple dimensions is probably more informative, and it's very difficult to game all of those. Yes, people may game a few, but then some of that gaming will be detectable, and they will fail in other aspects if they are just out there to game the process.
Nick Jikomes 1:09:19
You know, just on the theme of how research is conducted: we can imagine the ideal version of scientific research. It's purely about truth seeking; it's an open, free market based on productivity and rigor and so on. But of course we don't live in the platonic ideal of the world, we live in the real world. We've talked about some of the constraints on the business side, to do with the journals and how they operate, and some of the status seeking that leads to the gaming of these metrics among researchers. Another intersection point is that the science itself can become skewed and biased in different ways when it gets entangled in politicized issues. A big one that we're still in the midst of is the whole COVID-19 issue, everything from where the virus came from to whether or not certain treatments work, or the extent to which they work. All of these things have been controversial every step of the way for the last several years. I remember back in the early days of the pandemic, back when there was much more uncertainty than there is now. We didn't know if there was going to be a zombie apocalypse or this was a relatively benign virus. There was lots of legitimate debate to be had on that, but very early on, I think you got embroiled in a bit of controversy. Maybe that was justified, maybe it wasn't, but you calculated something in a paper called the IFR, the infection fatality rate for COVID, and this was early in the pandemic, I forget what year. Can you walk us through what happened there? What was the IFR that you calculated, and why was there such a big stir about this?
Speaker 1 1:11:08
Oh, goodness, yeah. These were horrible times, toxic times, and some of the toxicity, unfortunately, has persisted. Yes, indeed, I did a lot of research on COVID-19. My background is infectious diseases and epidemiology and population health and preventive medicine.
Nick Jikomes 1:11:32
For those listening, this is squarely your background; you were well placed to do this work.
Speaker 1 1:11:38
This is, let's say, my field. Based on these highly gamed metrics, you know, counting citations and so forth, all these databases have me as the most cited epidemiologist in the world, the most cited public health researcher in the world. So for whatever that is worth, I thought I should work on that topic. It was a major crisis, a major pandemic, a major threat, and it was important to do work, much like other scientists did.
Nick Jikomes 1:12:09
We obviously want to know what this number is. We want to know how deadly the virus is.
Speaker 1 1:12:13
Exactly. So one of the questions was, what is the infection fatality rate? What is the ratio, basically, of the people who die divided by the number of people who are infected? And there was a lot of debate about this. I was involved in the two earliest seroprevalence studies that were done in the US, and in some other efforts; I published that work with my colleagues in JAMA and in the International Journal of Epidemiology. I did a meta-analysis of the seroprevalence studies that had been done during 2020, and that paper was published in the Bulletin of the WHO. Then I continued to look at that question with additional studies done in different samples in different countries, and continued to publish systematic reviews of that evidence. It became highly toxic and highly debated because, much like many of the questions surrounding the pandemic, unfortunately there were very strong political and partisan positions taken. People at different ends of the political spectrum somehow endorsed specific aspects or specific positions. So even though I consider myself the least likely ever to be political in science, and I have argued repeatedly that science should be detached from politics, should not be subjected to political pressure, should be independent and respected by all sides, with its limitations and biases and flaws also understood, and even though I really tried to protect science from that intrusion, it did not happen. It was a very toxic environment.
Nick Jikomes 1:14:10
And so, at the time, what was the infection fatality rate that you calculated, and how did that compare to other estimates that were floating around?
Speaker 1 1:14:18
So, for example, the infection fatality rate that we calculated in our first two seroprevalence studies, in Santa Clara County and in LA County, was between 0.2 and 0.3%.
Nick Jikomes 1:14:32
0.2 to 0.3%, meaning essentially the chances of dying for the average person if they got infected were 0.2 to 0.3%. Right?
Speaker 1 1:14:44
Right. The meta-analysis that I published in the Bulletin of the WHO had a median infection fatality rate of 0.23% with some corrections, 0.27% without those corrections. So pretty much in the same ballpark. These numbers were far lower compared to the original estimate of a case fatality rate of 3.4%, which was what we had seen in the experience in China. Case fatality rate is the number of people who die, with the denominator being the number of people who have been diagnosed, so the denominator is not necessarily everyone
Nick Jikomes 1:15:25
infected. Yes, okay. So maybe the issue accounting for the discrepancy in the numbers is whether or not people knew the denominator, the total number of people actually infected.
Speaker 1 1:15:35
But in the early days, the WHO gave that figure of 3.4%, and their envoy to China came back saying, we don't think there are many asymptomatic cases, basically saying that if we haven't missed infections, then the infection fatality rate would be very close, if not identical, to the case fatality rate. So maybe it's not exactly 3.4%, but it would be pretty high. Yeah.
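Note: a minimal sketch of the distinction being drawn here between case fatality rate and infection fatality rate. The counts are hypothetical round numbers, chosen only to show how missed (undiagnosed) infections drive the gap between the two.

```python
deaths = 300
diagnosed_cases = 10_000     # infections that were actually detected and counted
total_infections = 100_000   # includes undetected / asymptomatic infections

cfr = deaths / diagnosed_cases    # case fatality rate: deaths / diagnosed
ifr = deaths / total_infections   # infection fatality rate: deaths / all infected
print(f"CFR = {cfr:.1%}, IFR = {ifr:.1%}")   # CFR = 3.0%, IFR = 0.3%
# With the same number of deaths, the two rates differ tenfold here simply
# because 90% of infections were never diagnosed, which is the quantity the
# seroprevalence studies discussed above tried to measure.
```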
Nick Jikomes 1:16:03
So, in other words, there were sort of two camps at the time. One was saying that the fatality rate is what we would consider quite high, on the order of 3%. Your estimate came in much lower, on the order of 0.3%, and there was a question of who's getting it right and who's getting it wrong. Are we accounting for all the infected? Because if you're not accounting for all the infected, that would imply that there are many asymptomatic people, and that's sort of what your result implied. But others were saying, we don't think that's the case.
Speaker 1 1:16:35
Exactly, exactly. And the debate very quickly became very toxic, because it was endorsed by people who really merged the message with other messages. And therefore, if you believed that the infection fatality rate was 0.2, 0.3, 0.4%, you had to belong to one clan, and if you believed that it was higher, you had to belong to another clan. Yeah.
Nick Jikomes 1:17:02
It's similar to the lab-leak thing, where, for a while, if you believed it could have possibly been a lab leak from China, you got put in one camp. The science was totally enmeshed with the politics.
Speaker 1 1:17:14
Exactly, which is very, very sad. Because, personally, regardless of what my ideological biases might be, I would never think of distorting my numbers, or of paying attention to what this or that politician is saying today and thinking, I need to avoid saying this, or I need to say this, so as to avoid trouble, or to create trouble. So it's very difficult to do science under these circumstances. You try to say, these are my results, but then you get attacked and you get praised, and both are bad, both being praised and being attacked. Sometimes it's worse to be praised,
Nick Jikomes 1:18:04
Actually, if it's for the wrong reasons.
Speaker 1 1:18:07
That's not what science is about. You know, science is not about becoming, let's say, a hero or a demon. It's a process.
Nick Jikomes 1:18:16
Yeah. Well, this was a few years ago, when you came up with your IFR number and other people had different numbers. We've now had the benefit of a few years going by; we have more data, we have more knowledge. What do we know about what the IFR is today?
Speaker 1 1:18:31
Oh, today it's 0.0-something percent for SARS-CoV-2 infections. But by now we have people who have been vaccinated and people who have been infected, on average, two or three or more times. Yeah.
Nick Jikomes 1:18:45
So people have both natural immunity and vaccine-induced immunity. That's right. So their net mortality risk
Speaker 1 1:18:51
It is, you know, far, far less than even the low-range figures that we had come up with in early 2020.
Nick Jikomes 1:19:01
I guess what I was getting at is: the people who were saying it was around 3%, versus you saying it's more like 0.3%. Was there an indication afterward that you were closer to the truth, or that they were closer to the truth?
Speaker 1 1:19:16
I think you're talking to someone who was part of the debate, so maybe I'm biased, but I think that our estimates were surprisingly accurate.
Nick Jikomes 1:19:24
Yeah, I mean, that's my read of the situation: you seem to have come up with an accurate number.
Speaker 1 1:19:29
I'm surprised, actually, that they were accurate, because there would have been every reason for us to be off twofold, threefold, fivefold easily. With single studies that can easily happen, and even with a meta-analysis of 10, 20, 30 studies, sometimes you may get inaccurate results. So no, I think that, based on having a more complete picture and multiple studies that were done back then and later, that was probably an accurate estimate overall. And of course, even then, we stated that there's a very strong gradient of risk, so that 0.3% is not really representing everyone. Little children will have a 0.000-something percent risk, and elderly people with comorbidities in nursing homes may have a 10% risk. It's an amalgam of very, very different risks.
Nick Jikomes 1:20:30
The average IFR for everyone is roughly 0.3%, that's what you calculated. But I think the other number you added was that for people under 70 years old it was actually 0.05%, so there's a very strong age gradient.
Speaker 1 1:20:45
Right, right. So I think these were pretty accurate estimates, but unfortunately they were immersed in very toxic deliberations. And as I said, both the attacks and the praise were not helpful. During the pandemic I found myself in toxic situations because lots of the issues that were scientific became entangled with the political debates and public policy. Lockdowns, for example: my results suggested that lockdowns did not have a substantial benefit compared to other restrictive measures that were not as aggressive. Again, that was both praised and demonized.
Nick Jikomes 1:21:41
Yeah. And psychologically, my interpretation of what was going on there is that the reason this got so heated and so controversial is that, instead of doing the research and following the numbers and the data, people had policy positions first, and they wanted to justify those no matter what. So people who were pro-lockdown or anti-lockdown wanted that IFR number to be higher or lower, to support the policy they were already advocating for, right?
Speaker 1 1:22:10
Right. So it was like a clan mentality: your results need to fit what we want to see, otherwise you're an enemy, you need to be destroyed, this is warfare, and there's no room for dissent. In other cases, I found myself opposed by the opposite camps. For example, I led an international meta-analysis on hydroxychloroquine, which we published, finding that the hydroxychloroquine randomized trials suggest possibly an increase in mortality in people getting hydroxychloroquine versus the control. And that also created toxic debate.
Nick Jikomes 1:22:53
So there the attacks were coming from the opposite direction compared to
Speaker 1 1:22:57
the IFR debates, yes.
Nick Jikomes 1:23:00
So for doing good work, in the sense that you were trying to get to the answer and the truth without being political, you actually got demonized by both sides.
Speaker 1 1:23:09
Right. Praised by both sides, demonized by both sides. Horrible, really, completely horrible. Yeah.
Nick Jikomes 1:23:16
What was that like for you personally? Was that tough? Did it affect any of your personal or professional relationships?
Speaker 1 1:23:25
It did. I mean, it's not a joke receiving death threats and feeling uncertain about your life, and not only for me, but also for all my family members. So yes, it was a very, very nasty situation, and I was not prepared for that. I don't think you ever get any scientific training that can prepare you for such a toxic situation, where you feel that your life is threatened, that the lives of the people you love are threatened, that you have uncertainty about some very fundamental issues of your existence. So yeah, and I feel pity, I feel sorry for people who were on the opposite side of these debates, because I'm sure that they were also attacked in similar ways. If I had some way, I would wish that they would at least not receive all that pressure and all that toxicity, that they would be spared, because it's completely beyond human understanding that this can happen. You try to do science, and this is what you get.
Nick Jikomes 1:24:49
What are your thoughts about the current moment? Everything that we've talked about is controversial to some extent. There are many different opinions about how scientific research should be done, in terms of how peer review works, how the journals are structured, how the universities work, et cetera. That all also rolls into questions around funding and the NIH. And of course, there's a controversy happening right now because NIH funding is starting to change. We're going to have a new incoming NIH Director, Jay Bhattacharya, whose confirmation is next week, I believe, who will presumably make some substantial changes compared to his predecessor. What are your thoughts on the changes to NIH leadership and NIH funding? Should people in the research world be worried about this? Is there room for optimism here? Are you pessimistic? What do you think is coming?
Speaker 1 1:25:48
I think the current situation is a bit chaotic and very uncertain, and I know many people who are very anxious and feeling threatened, especially with cuts in the budget and with uncertainty about what the next day will bring. At the same time, we know that NIH funding has been instrumental in keeping the research enterprise alive and thriving, and clearly the US has a very successful research enterprise in many ways compared to what other countries can do. This does not mean that our research enterprise is optimized or that it doesn't have problems. We have spent so much time discussing some of these problems, and there are many more. The argument that, well, so many people get Nobel Prizes and discoveries are made does not mean that we could not get more discoveries, that we could not get even better work, and that we could not be more efficient in how we invest scientific resources. So I think it's important to think about science reform, and to think about how we can make things better. We know that currently our system is not very efficient. We have many biases, we have reproducibility problems, we have difficulties in peer review. We only discussed journal peer review; we haven't discussed grant peer review, which has its own problems. Study sections, for example, have served us for many decades, and I think they do a decent job in most circumstances, but we have identified biases with them. We know that they're not really very receptive to high-risk, disruptive ideas. The way things work, if one or two members of the study section feel that this is a high-risk idea that is not going to pay off, it will be shut down.
Nick Jikomes 1:28:05
Correct me if I'm wrong, but in the academic world, in academic scientific research, there seems to be a similar phenomenon to what you see with politicians in the political world, where there's a kind of bias towards older, more well-established researchers. Does that have something to do with what you're talking about?
Speaker 1 1:28:27
This is a superimposed bias. One issue is the difficulty of getting high-risk innovation and disruption funded. Of course, high-risk innovation has very high failure rates; this is unavoidable, but unless you try and fail, and fail multiple times, you will not get that one chance that will be successful and will really change the landscape. So we're not really good at that. I think that NIH, for example, performs much worse compared to the Howard Hughes Medical Institute, which has a different approach: let's give money to really good investigators, and let's give them freedom for a number of years to get their best ideas tested and implemented. Then there is a superimposed bias of seniority. The average age of getting an NIH grant is in the mid-40s, which I think is too late. You know, Mozart died 10 years earlier, and I think the same applies to scientific investigators: many of them have their best ideas in their 20s or early 30s. And if you have to wait until you're 45 or 47 or even older to get your first independent R01 grant, that's too late. So we need to find ways to shift more funding to early-career investigators, to really bright innovators who want to try bold things, and give them a little bit more opportunity to do that. At the moment we give them very little opportunity, and I think this is something that can be fixed, should be fixed. It has been discussed for a long time, but it really has not happened, at least not in a bold way. NIH can also do a lot on reproducibility. It has reinforced some of its practices and policies, for example on data sharing and openness and making data available, but it is still very ambiguous about replication agendas, so I think we can do much better on that. And there are some entrenched ideas that are very difficult to dismantle. Somehow, once a lot of resources go into one direction and a lot of scientists make a career out of it, even though you may reach a plateau and there's not much more to be gleaned from continued investment, more resources are invested just because there are so many people who defend their domain. They're members of the study sections, they're the advisors, they're the ones who decide that research will go in this domain, in this direction, with this type of tools. Well, maybe those are not as high-yield as they used to be; perhaps they had substantial yield in the past, but not any longer. So the system currently is pretty slow in moving to new priorities and new opportunities; somehow it gets entrenched in bandwagon-type occupations. Obviously, there are some other huge challenges that have come up in the last couple of weeks, like the cut to indirects, which has been going into a legal battle.
Nick Jikomes 1:32:04
Can you explain that for people? When a researcher gets an NIH grant for a million dollars, say, what are the directs and the indirects? What does that mean?
Speaker 1 1:32:13
In simple words, the direct costs are what goes to the researcher to do the research: paying for the salary of the researcher himself and the postdocs and other junior researchers, and also for consumables for the experiments. So it's the cost of the research itself.
Nick Jikomes 1:32:38
So if a researcher gets a $1 million grant, naively someone might think, oh, he or she gets a million dollars to do their research. But actually, they only get a percentage of that to do the research, and then another percentage, a large one, goes to the university to cover other things.
Speaker 1 1:32:54
Exactly. So that is the indirects, which go to the university, and in the past the proportion of indirects has varied by institution; it has been negotiated between NIH and each institution. Basically, the indirects cover everything else that needs to be available in order to be able to do the research. That includes the space, the electricity, the water, everything that needs to be paid to keep a space functional. It includes the administration, the staff needed to support the research, to prepare all the forms and the audits and the regulatory paperwork that NIH itself may be asking for.
Nick Jikomes 1:33:35
Those indirects, are they going to things that directly support that research? There are a lot of people out there saying that this essentially goes into a slush fund, and it's not clear where all of those dollars go.
Speaker 1 1:33:50
Yes and no. There's a very explicit negotiation, at least until now, between each university and the NIH, and the university needs to explain: this is the space that I'm dedicating to research, and this is the cost of that space and of maintaining it in a functional capacity.
Nick Jikomes 1:34:13
Literally: this is the research building where we do our neuroscience research, this is the electricity bill every month, et cetera.
Speaker 1 1:34:21
Exactly. This is the staff that is needed to support the research, these are the facilities, like an animal facility; if you don't have one, you cannot even start on some types of research, and this is its cost. So it's not completely arbitrary. I think that is a misunderstanding, and this is also the reason why some institutions ask for a higher rate than others. Some institutions are happy with 25%, and some others ask for 70% or even more, because they say we have more things to put in place in order to start doing the research. Now, is that something inviolate and set in stone, that if you remove this little piece you cannot do the research? No, that's not the case. Take each one of these components. Space, for example: during the pandemic, universities were ghost towns. I was coming to my office and there was no one else in the building. And even now, five years later, occupancy in many university facilities on a regular day at a regular hour may be very, very low, maybe 10%, sometimes even less than that. Do I believe that this is nice, or the way it should be? No, I think that people should be back, and I think it would be better for everyone, because this is how science really makes progress: by brainstorming, by interacting, by talking with colleagues. So I hate that we had so much leeway to just work from home, especially for people like me who do things that can mostly be done on a laptop. In theory, I could be at my home; I come to my office every day because I believe in it, but there's no incentive, really, other than that I just want to come. Then if you take the administration: NIH itself has placed a lot of administrative barriers to doing research. If I write a grant, the grant may be six pages long, but the administrative forms may be 600 pages long, and these need to be filled out by administrators, by dedicated people. They're excellent people, and I never want to say, oh no, we should let them go. They're highly trained, they're skilled, but perhaps we could use that expertise in a more productive and positive way, instead of lumping more and more administration on top of everything.
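Note: a simplified sketch of the directs-versus-indirects split described above. It assumes a single negotiated rate applied to all direct costs of a fixed total award; real NIH awards apply the rate to a modified direct-cost base and are usually budgeted the other way around (directs first, indirects added on top), so the numbers are only illustrative.

```python
def award_breakdown(total_award, indirect_rate):
    """Split a fixed total award into direct and indirect costs, where the
    rate is expressed as indirect dollars per dollar of directs (0.60 = 60%)."""
    directs = total_award / (1 + indirect_rate)
    indirects = total_award - directs
    return directs, indirects

directs, indirects = award_breakdown(1_000_000, 0.60)
print(f"directs:   ${directs:,.0f}")    # $625,000 for salaries, postdocs, consumables
print(f"indirects: ${indirects:,.0f}")  # $375,000 to the university for space, admin, utilities
```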
Nick Jikomes 1:37:03
I guess that's kind of like what people in the private sector call red tape. It's just creating a lot more barriers, things that have to get done before you can do the real work, so to speak.
Speaker 1 1:37:13
Exactly. And if you look at the numbers, there was a report published a couple of years ago, actually by a left-wing progressive institute, with a very declarative title, something like the universities' administrative bloat, and they present data from the top 50 universities in the US. There is, on average, three times more administrative staff than faculty, and some universities among these top 50 have more administrative staff than students. So clearly there's some imbalance here. I would be the last to say that we need to cut back on research; I would fight against even one dollar being cut, because I think that is a bad idea. I think that research and science is a top societal priority. But how the funds are going to be used, I think there's plenty of room to discuss: how do we dedicate more funds to research, how do we become more efficient, how do we support more reproducible science, rather than just red tape and administration for the sake of administration?
Nick Jikomes 1:38:35
Right. And all of those administrators are doing actual work; they have to do the paperwork and all of the stuff that's required by the NIH, for example. But those are all full-time salaried employees; that's a huge cost. And I guess one way to start thinking about this is: if we've seen, say, a 10x increase in administrative staff over the last X number of years, have we seen a 10x increase in scientific output or scientific quality? It's not clear that that's true.
Speaker 1 1:39:02
Not really. And of course, this is an uncontrolled experiment, because we don't know what would have been the case if we didn't have that. But what I'm trying to say is that there is room for negotiating what the optimal model for doing science is, and it's not an austerity model. I would start from the position that science is underfunded, and in fact the underfunding is leading to some of the problems that we discussed. For example, if you have a highly competitive environment, and you know that you will be out of the game, you will lose your job, you will not be promoted, you will not get tenure unless you come up with some extreme result, some extravagant discovery or claim, then you will come up with an extravagant claim. So the underfunding is also a strong contributing factor to some of the quality issues we're discussing. So I don't think that cutting back on funding and austerity is going to make things better; it's probably just going to make things much worse. And there's also a breaking point. Some institutions may just say, forget it, I cannot do research, I don't have the resources, I don't have the means to do it, so I will just close my labs, I will close my animal facilities, I will close these resources. And if you're a researcher who wants to do work, forget it; you can stay as a teacher and teach, or if you don't like it, go elsewhere, maybe even to another country. So we have to be very careful about how we try to change the research enterprise. We clearly need to change it, there are plenty of things that can change, but it has to be done with care and with evidence when making bold moves.
Nick Jikomes 1:40:56
What are your thoughts on someone like Jay Bhattacharya becoming the NIH Director? Are you optimistic about that? Whether or not it was him going in, what are your hopes, and what words might you offer to the new director to help usher us into a new era of scientific research?
Speaker 1 1:41:18
I have known Jay Bhattacharya for many years. I think I met him when I first arrived at Stanford in 2010; he had some research interests that were parallel to mine at that time, on exposure-wide association studies, trying to study the exposures that surround us at large scale. And then we collaborated, including during the pandemic in some of the early studies, like the early seroprevalence studies. I think he's extremely strongly credentialed, he's really brilliant, I have absolutely no concern about his moral integrity, he's an amazing person, and I believe that he's well intentioned. I did see a transformation of Jay during the pandemic. He was someone who had no social media accounts, who was mostly an introverted scientist, a methodologist, not someone to go out and speak in public. And during the pandemic, based on the toxic environment that was generated, which affected him as well, I think he made the decision that, goodness, I need to fight back, and that meant going into social media, going into the media, starting to talk, creating coalitions and declarations. That was his response. So he went from zero social media to more than half a million followers on Twitter/X. And now that he has the power, hopefully, to lead NIH, I hope that he leads it with wisdom and with care, because lots of things are happening that suggest to me that the toxic environment has not passed. I think it has persisted, and perhaps is even escalating to some extent. There's again a sort of circling of the wagons, people taking positions for the war that is erupting, retaliation and retribution, and science is politicized again. I cannot think of a worse recipe for fixing science than getting it entangled in toxicity, partisanship, and politicization. So I do hope that he can really go beyond that and actually suppress this kind of degeneration of the scientific debate into political or partisan debate, and think about the common good, about good science, research integrity, and doing better science. There are so many things that we can do better, and he can leave an amazing legacy if he really focuses on these aspects.
Nick Jikomes 1:44:36
What sorts of research projects are you working on today? What's really exciting to you? What is capturing your attention and your excitement most right now?
Speaker 1 1:44:48
There's a lot happening in my center, the Meta-Research Innovation Center at Stanford, and I'm privileged to work with lots of very bright people who know more than I do; I learn from them every day. What I enjoy is the multidisciplinarity: I work with people who come from very different disciplines. We have people coming from biomedicine and the life sciences, and others who come from psychology, social sciences, economics, sometimes even more remote fields, but with a common denominator of how to improve research and research practices. There's a large theme on peer review, which we have already discussed in terms of its importance. I'm one of the directors of the International Congress on Peer Review and Scientific Publication, which we co-organize between METRICS, BMJ, and JAMA, with editors from many other journals and many other people attending in Chicago in September. There are lots of very interesting ideas and lots of empirical studies on how to study and how to improve peer review. Another large area of my work is studying metrics of science, all these metrics that we discussed, some of which can be gamed more than others, and how to find metrics of gaming and counter-metrics, in a sense. So we're generating databases that cover all of science and all scientists, trying to get more granular views of performance and efficiency.
Nick Jikomes 1:46:29
So could you guys actually build a UI? Could you build a page where I could click on a scientist and see their stats?
Speaker 1 1:46:37
Oh yeah, we have that publicly available, and we update it every now and then. It is online, open to public view, and it has been downloaded about 4 million times.
Nick Jikomes 1:46:52
What's it called? How do people find it?
Speaker 1 1:46:55
Well, it's the updated citation indicators database, and in the latest edition we have also included retraction metrics. So for each scientist you can see whether they have had papers retracted, which is one sign that something may be going wrong, and also the citations to their retracted papers, citations from retracted papers to their work, and so forth. I believe in more transparency and more harmonization of information. The other frontier that I find very interesting to work on is funding and funding mechanisms, which becomes more pertinent especially with the major anxiety surrounding NIH: trying to understand what we get out of different funding approaches, who gets funded, what the best way to fund research is, and whether we can do better in that regard. And many, many other things. I feel like a child in a candy shop; there are always lots of things floating around, so I really enjoy that there's so much happening in that space.
Nick Jikomes 1:48:16
I think that's a great place to end it. John, thank you very much for your time, I really appreciate it. For those listening, I should just say I've been emailing John for probably two or three years, pestering him for a while, and I finally got him on, so I'm very excited. I've wanted to talk to you since I started the podcast, and I really appreciate it.
Unknown Speaker 1:48:36
Thank you so much, Nick, it was great talking with you.