08 - Standards of Evidence
Before we start, there is an impression I get from the conspiracy community that fact checkers just crush dissenting opinion and reinforce a narrative by choosing what is fact and what is not. This reflects a systemic issue: people don't understand how to evaluate the quality of evidence, don't know what evidence even is, and don't understand objectivity. When people make fun of fact-checking, they're unfortunately broadcasting their complete lack of understanding of the concept of facts and evidence.
Remember from the page on bias that I said we can never truly know anything - we can only have varying likelihoods of being correct. In that gradient of likelihood there are cut-off points where something is unsubstantiated enough to be dismissed as probably false, and points where something is supported enough to be considered probably correct. A claim builds with supporting evidence to reinforce its likelihood, or accumulates other low-quality non-evidence and becomes even less likely.
A global conspiracy that predicts the future, involves millions of people all staying quiet, and makes claims contrary to all good scientific evidence has a sub-microscopic likelihood of being true. The theory of evolution by natural selection has such an insanely strong body of evidence behind it that it's almost inconceivable that it's not correct.
Generally we equate these to true and false for shorthand. New evidence can spring up at any time, so in that spirit, we can't ever be 100% or 0% on anything. Whether it's something you hear from a friend of a friend, or a scientific fact that is as close to 100% as you can get without being 100%, the range is vast. The less supported a claim, the further it moves into the red, and the more reliably you can say it's false. The more supported it is, the further it moves into the green, and the more correct it can be considered.
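That sliding scale can be sketched as a toy function. The 0.05 and 0.95 thresholds below are arbitrary placeholders of my own, purely for illustration, not established cut-offs:

```python
# Toy sketch: treat confidence in a claim as a score from 0.0 to 1.0 and
# apply cut-off points for the "probably false" / "probably true" shorthand.
# The 0.05 and 0.95 thresholds are arbitrary illustrative values.

def verdict(likelihood: float) -> str:
    """Map a likelihood of being correct to a shorthand verdict."""
    if not 0.0 <= likelihood <= 1.0:
        raise ValueError("likelihood must be between 0 and 1")
    if likelihood < 0.05:   # deep in the red: unsubstantiated
        return "probably false"
    if likelihood > 0.95:   # deep in the green: well supported
        return "probably true"
    return "uncertain - needs more evidence"
```

Note that the function never returns a flat "true" or "false" - in this framing, no amount of evidence gets you to exactly 100% or 0%.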
Without the ability to acknowledge strength of evidence, you get to pick and choose what is reliable and what is not. Evidence isn't subjective, so let's treat it objectively. Ultimately, the strength of evidence translates to reliability and accuracy. Here is my quick list of the various tiers of evidence. Keep in mind that you will always find outliers - those who break the mould. These should be met with extreme caution. Picking the outliers is unreliable and leads to fringe views which have a high likelihood of stemming from error or bias. You need to corroborate between evidence sources, not just pick one thing and run with it.
As the name suggests, the things on this list are so unreliable and prone to error and bias that it’s not even worth considering them in any way whatsoever.
Hearsay, anecdotes, and opinion
Someone saying something on Facebook or Twitter. People can be wrong, are prone to biases and ignorance, and exist in tribal groups. This is not a good standard of evidence to base your views on whatsoever. Such statements are only helpful insofar as they add to already good evidence (and even then they don't prove anything). If someone makes a claim, in a debate or in general, and doesn't or can't provide supporting evidence or references to back it up, then it can be dismissed: it shows the foundation for what they're saying isn't based on anything reliable, and is probably based on belief or pure opinion.
Hearsay can, however, be an indication that something might be happening. Testimony or someone's word is a starting point for investigation, but it is not proof of that thing happening by any means. Remember that this relates to objective subjects such as science. Hearsay can have a more substantial role in settings such as court (eyewitness testimony) or subjective matters such as how good a recipe is. Bringing it back to the objective and tested - it isn't a form of evidence whatsoever.
Documentaries are also not evidence. The filmmaker is controlling the narrative. They add in music to sway your emotions, they choose what information does or doesn’t get into the documentary. On top of that, they tell you what you’re looking at. They can easily tell a false story by picking the outlier “experts”. Most notably, there is no sourcing. Claims are made for an extended period of time with no supporting documents and no sources to check. It is extremely easy to crop videos, show clips out of context, misrepresent what’s being shown or outright lie and make things up.
An online video which is just someone talking is also not evidence. If the claims made in the video are not supported by any references in the description to check, then this is a totally unsourced video and can easily be dismissed as unreliable. It is no different to someone telling you it’s true to your face and leaving it at that. Don’t operate out of belief - it’s not objective.
Unsourced websites and articles
Websites which make claims and provide no links to references or supporting evidence. If the site makes claims to what is true and what is not, then those claims should be backed up with links. If not, then it’s just someone stating their opinion and isn’t helpful. It’s as reliable as unsourced YouTube videos or hearsay.
Studies with no controls
A study with no controls has no way to determine whether the observed effects are the product of what's being studied. If you test a drug against another drug with no controls, then your study is fundamentally flawed. Controls are a staple of good science, and I'm sure there are situations where controls aren't possible or aren't needed, but generally speaking, if you're studying something, you usually need controls.
Cropped and out of context images and video
Low resolution screenshots and images are common among conspiracy theory sharers. Again, they aren’t sourced. They’re presented as true and go viral. Videos that are cut up are manipulated by the video maker with the intent of showing a narrative. Out of context clips and images are the exact same thing - they lack context. Cropped images are cropped for a reason. Short clips are short for a reason. Edited videos are edited for a reason. People misrepresent their targets all the time, either through ignorance or intentionally. Screenshots of websites and text are also common and always lack the information on where they came from.
This is hearsay in image form. There are no supporting references and it's presented as true. I can make a meme right now claiming that ants have secret discos, but it isn't true (that we know of). Memes are simply not evidence.
These things are potentially evidence. This means there exist scenarios where they can contribute to a body of evidence, but the potential for inaccuracy remains large enough that they are not proof of anything in and of themselves. They are good for adding on to actual evidence, but on their own they prove nothing.
Videos, whether on YouTube or another website, which disclose references and aren't just making claims are potential evidence. The reason is that you're able to check their sources and see how accurately they've represented the information. It also lets you see where their information is coming from, which lets you vet it for reliability and accuracy. Even so, the potential for misleading videos using dodgy, unreliable, or misleading sources means these videos are only potential evidence.
Something being sourced is the starting point of potential evidence - it means you can verify the sources and see if they support what they're being used for. Sometimes videos will look well-sourced but will in fact be a deceptive web of misinformation, or will themselves cite a list of sources with no sources behind them. They might misrepresent or skew their sources, which otherwise might be good. You have to check them first.
A virologist's opinion on a virus isn't going to be exactly the same as another virologist's. Where clear data ends and speculation begins, opinions will diverge. That being said, for the available evidence on a subject there will be a general acceptance within that specialism. The professional opinion of leading specialists and their organisations is important, but it remains potential evidence for two reasons: it doesn't account for outliers (experts with a completely different opinion), and these experts are still prone to biases like the rest of us.
This is in the same category as sourced videos. Whether it is written down on a page or spoken in a video - if it is referenced, then you can check it. Articles get their information from somewhere. If they don't disclose sources, or use dodgy sources, then they're not something to consider evidence. You have to take articles on a case-by-case basis, and the origin of the information and the article publisher matter. The less biased and more factually accurate, the better.
I've also seen my fair share of sites which link to legitimate studies but quote-mine small sections to display out of context and deliberately misrepresent their contents. The anti-mask movement during the COVID-19 pandemic is a prime example. Plenty of "science-based" anti-mask sites popped up which did this very thing and fell apart once you actually read their sources.
Preprint means a scientific paper which is complete but has not yet gone through the peer-review process. Because the paper hasn't faced that scrutiny, it's to be taken with a grain of salt until it can be considered legitimate evidence. Papers can fail peer review, so taking preprint studies at face value as undeniable proof isn't very reliable. Papers can also go through corrections after review.
There is some contention over how highly to regard preprint studies, so you'd be better off getting opinions from more than just me, including from people who actually publish research. Perhaps this needs its own category in a future revision of this page. While I hold preprints in higher regard than everything else in this category, I don't hold them in the same regard as peer-reviewed research. Perhaps I'd classify them as "supplementary evidence".
Preprints can be very good papers that just haven't been published yet. As we will get into in the peer-review section next, that barrier isn't the line that makes research legitimate and good, which makes things even more confusing. Preprints are generally taken with a grain of salt in the scientific community I have been watching and reading for the last few years. This isn't cause to dismiss them; they can be read and analysed like any other paper, but it does require experienced scrutiny if you aren't adept at picking apart papers (which most people aren't).
As the name suggests, this is the starting point of what constitutes legitimate evidence. Anything below this is more or less an additional piece on top of everything here and above.
This is a loaded one. Peer-reviewed research means that a research paper has been scrutinised and checked, allowing the paper to be published in a peer-reviewed journal. While this might seem the end of it, it’s actually the beginning. There are three main things to consider:
The type of study
Peer-reviewed isn’t the marker of top-tier science. In the world of science it is essentially the starting point. Does the study utilise controls? Have subjects been randomised (if possible)? Is it blinded or double-blinded? Is it a retrospective cohort or a prospective cohort study? Is the sample size small, moderate, large? How well-referenced is it? In other words - how robust is the study?
This isn't a framework for dismissing papers, however. For example: small sample sizes lack diversity of subjects and have a large potential for bias, but that doesn't invalidate the results. Blinded and double-blinded studies are important, but sometimes you simply can't blind subjects if, for example, the study is testing mask wearing (they will notice whether they're wearing a mask or not). Like peer review itself, these paper checks (type of study, potential for bias, and the journal) are starting points to work from.
The potential for bias
Researchers can be biased. What language is used in the paper? Do they assert things to be true based on a single reference? Do they use emotional wording? Does the introduction look like it’s walking towards the conclusion? Does the paper reflect on study limitations? Do the authors hold special interests? Who funded the research? In other words - how reliable does it seem?
While markers of bias such as funding and special interests are important to be aware of, they aren't grounds to dismiss a paper. It doesn't mean that the study is going to be biased. Historically, tobacco companies funding research was a big problem, and I've written before about how journals are refusing to accept papers funded by tobacco companies. On the other hand, if the study is robust and the data checks out, then that's what is important. If the methodology is solid, a good study is a good study, especially when it utilises quality control measures such as bias prevention.
The journal it’s published in
It's no secret that some journals have shoddy peer review. Some journals don't even have peer review, or only pretend to do it. This predatory behaviour consists of pretending to perform peer review in order to charge researchers for publication. A recent example which is pretty funny is a paper published in the Asian Journal of Medicine and Health titled SARS-CoV-2 was Unexpectedly Deadlier than Push-scooters: Could Hydroxychloroquine be the Unique Solution?.
Retraction Watch reported on this back when it happened:
“Their goal: to highlight a concerning paper in the Asian Journal of Medicine and Health, which they and others suspect of being a predatory publication — one that is happy to take money to publish anything, while pretending to perform peer review.”
I've been shown a page called Beall's List which is a database of predatory journals. It's pretty good.
Is it a reputable journal?
Here’s a ranking of scientific journals.
It's not a definitive guide, but it's a start. Generally, you'll find studies in the same places, with the odd study in a journal you haven't heard of. It also depends on the type of research you're looking for. You won't find many vaccine studies in the Astrophysical Journal, for example.
Larger journals such as the Lancet will have many more papers submitted to them than small journals. You want to publish your research in the top journals, and the top journals want to find and publish the newest and most innovative research. The flip-side of this is that this will result in interesting duds, more retractions, and appealing yet flawed research making it through the door.
A prime example is the Wakefield autism vaccine study, one of the worst cases of scientific fraud I know of. It was published in the Lancet and wasn't retracted until Wakefield was found guilty of scientific fraud 12 years later. As I've said many times: science is the process of refining knowledge over time. Papers get submitted and rejected or approved. The scientific community reads and picks them apart. Studies get retracted; results get replicated or fail to replicate. Science doesn't end once research is published.
Where to look?
My starting point for finding studies is always PubMed, the National Institutes of Health's study database search engine.
Scientific peer-reviewed papers have to be read fully, understood, cross-referenced, and put into context. It's not as easy as finding studies that seem to say what you want to be true; they have to be analysed for their methodology and content. Another way to put this is: don't beeline for the conclusion and pretend you read the study. You didn't, and you did the research and yourself a disservice.
This is high-tier evidence. It is a robust and comprehensive type of evidence. Like all evidence, it is to be evaluated and not taken at face value. The likelihood that it's incorrect is a lot lower than for the other tiers on this page.
Meta-Analysis and Systematic reviews
These research papers collect and evaluate large numbers of studies. They typically search multiple scientific databases for papers and narrow them down by excluding papers which don't meet the quality they're looking for, such as those not using confidence intervals or those presenting a large bias or potential for bias. They then analyse them all in context and use their collective data to see what the science is saying.
The benefit of reviews like this is that they don't theoretically require funding to do. You can find systematic reviews and meta-analyses online which aren't even funded (allegedly). It's rare that I've seen a meta-analysis or systematic review done badly. Good reviews should:
- Set a standard of quality and exclude papers that don't meet it.
- Examine papers for bias and exclude them on that basis.
- Show their inclusion and exclusion criteria.
- Examine the papers' findings as well as their limitations.
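As a rough sketch of that filtering step (the `Study` fields and the two criteria here are hypothetical examples of my own, not taken from any real review):

```python
from dataclasses import dataclass

@dataclass
class Study:
    title: str
    reports_confidence_intervals: bool  # hypothetical quality criterion
    high_risk_of_bias: bool             # hypothetical bias assessment

def apply_exclusion_criteria(studies: list[Study]) -> list[Study]:
    """Keep only studies meeting the review's stated quality bar."""
    return [
        s for s in studies
        if s.reports_confidence_intervals and not s.high_risk_of_bias
    ]

candidates = [
    Study("A", reports_confidence_intervals=True, high_risk_of_bias=False),
    Study("B", reports_confidence_intervals=False, high_risk_of_bias=False),
    Study("C", reports_confidence_intervals=True, high_risk_of_bias=True),
]
included = apply_exclusion_criteria(candidates)  # only study "A" survives
```

The important point the sketch captures is that the criteria are declared up front and applied uniformly, which is why good reviews publish them.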
These reviews are more reliable but you still have to scrutinise them like any other peer-reviewed research. Read the whole thing and check how robust the analysis is.
This isn't consistent across all meta-analyses, though. I've seen at least one meta-analysis whose only exclusion criterion was not using confidence intervals (which is a pretty low bar). Meta-analyses differ, as studies in general differ; you have to take them on a case-by-case basis. They're only as good or robust as they are designed to be, which is dependent on the researchers. While this is tippy-top research in weighting, it's not to be accepted without the usual scrutiny. Science is complicated. And exhausting. Don't forget exhausting. (It's worth it, though.)
Collective opinion of scientific institutions
Institutions like the WHO, the NHS, the CDC, etc. are made up of collective groups of scientists and specialists who review the data from all studies in their field. They hold insight into how robust and detailed the studies are and keep up to date on the scientific knowledge, while being reserved enough to be sure of their recommendations. There are tons of scientific institutions, and they come from multiple countries.
Is there a possibility of corruption? Of course. Are they able to be incorrect? Of course. Take the Building Biology institute, for example. They're an outlier institution who say RF radiation is harmful and exposure limits need to be way lower than they are. That's why the title is collective opinion. If institutions are all virtually saying the same thing, citing science from all over the scientific journal landscape, then it's very, very likely correct. Importantly, they cite their sources so you can view them yourself.
Those same meta-analyses, systematic reviews, and peer-reviewed research papers are what drive the collective opinion of scientific institutions. It's not the sum of all studies; it's the sum of all reliable and robust studies.
This is all well and good, but what do you do when you debate someone who doesn't subscribe to any standard of evidence - where anything that says what they want to hear is evidence, and anything that doesn't is not? How do you convince someone that evidence can be assessed for its accuracy and reliability when they don't already know this, and are only focussed on winning the debate? How do you convince someone who doesn't value evidence to value evidence? I don't know. It's a real sticking point.
Many times I find myself arguing for an objectively verifiable point by sharing meta-analyses and systematic reviews, only to be met with opinions, unsourced videos, personal attacks, and memes. I always point out that those things aren't evidence, but it doesn't matter. A few replies later, they keep going with the memes and videos and opinions. It's a really difficult situation. I try to keep the conversation topic as specific and narrow as possible to get a targeted exchange, but it always descends into generalities, vague statements, and subjective viewpoints.
My answer to all of this is: I don't know. I really don't know. Maybe continued repetition will allow it to be adopted slowly - maybe continuing to say to the same people "this isn't evidence" and explaining why might eventually work. I don't know.
The next page is the final page and will show you how to fact-check claims using every resource available on this website so far.
Next page: How to Fact-Check