Dr. Reddy's Pediatric Office on the Web™


Scientific Reasoning, Evidence-Based Medicine, Logical Fallacies, Vaccines and Autism


Much of the mail I get from visitors to the Office is questions about medical problems (only some of which I answer because I, like all doctors, cannot answer medical questions accurately if I haven't seen the patient). I also receive messages from people who like the material on my site -- and from people who don't like what I have to say.

Some of the criticism I receive is well-founded. I try to give visitors the most accurate information I can, but I do make mistakes occasionally; I appreciate being told about my errors, and I correct them when people point them out to me. However, some people criticize my facts and opinions with little or no evidence to back up their claims. An example is this message, which I received a while ago in response to my posting on thimerosal and autism:

dr reddy, 

Before making such claims in public, you might want to be aware that most flu
vaccines used today contain thimerosol and the jury is still out on its affect.
there are studies that clearly contradict your position and you do people a
disservice not giving the other side and not being particularly well read.
you apparently dont mind giving a child mercury or anyone else for that matter.
im very diappointed in your online publication.

< Name withheld.  Otherwise, this message is exactly as I received it. >

Evidence-Based Medicine, and Evidence

Physicians -- and scientists and engineers -- are taught an awful lot in school and on the job.

Much of what we are taught consists of "facts". Some of these really are facts: most people have the same set of bones, and we have names for all of them, which lets us identify a particular bone to any other doctor in only a few words. However, some of the "facts" we learn are found later -- sometimes much later -- not to be true. Two examples are calomel (mercurous chloride), once used by doctors to relieve headache, and hydrogen cyanide, which was used as a sedative in the late 19th century. Both are now known to be nothing more than poisons. Less drastic examples include the changes in just the last 10-20 years in the way we treat asthma, and especially in the medicines we use for asthma -- most of which weren't even available in 1988.

A lot of medical students don't enjoy memorising "facts". I certainly didn't. Memorisation is not as important in many other scientific professions -- in engineering, many tests, including licensing exams, are open-book -- and I personally think that our students don't have to remember some of what we make them memorise. Obviously we have to memorise things like the procedures for CPR that we don't have time to look up when we need them, but some things, like drug doses, should be checked every time.

The other thing we learn -- and the most important -- is how to make decisions. Our decisions have to be based on what we think is wrong with a patient, but we must also use the best information we can find on the patient's problem, its causes, and its treatment. We have to decide how valuable that information is as we work. The value of that information depends on how it was gathered and on how similar our patient is to the patients who were studied to get the information. The way we are taught to evaluate that information is similar to the way courts evaluate testimony: we look for evidence that a particular treatment helps the patient, or that a particular treatment (example: a vaccine) does not harm the patient. Evidence-based medicine is at its core merely a systematic way of searching published medical literature for well-designed studies on a problem our patient has whose results are likely to apply to our patient.

We evaluate medical evidence partly with some of the same rules courts use. For example, hearsay ("I heard Dr. Joe Blow say...") isn't "evidence" to us any more than it is to a court. Neither a judge nor a doctor wants to hear what Dr. Blow said from anyone other than Dr. Blow himself -- we want to be able to ask Dr. Blow questions about his information directly.

However, we must also look closely at a medical study to see whether the results of that study can apply to our patient. Many things go into this, including:

What were the patients in the study like?
We look at the study patients' ages, ethnic backgrounds, and other characteristics. (Some ethnic groups have genetic differences from other groups that may affect the way their bodies react to a treatment. An example is primaquine, a drug used to treat certain kinds of malaria, which can also destroy red blood cells in people with a particular genetic abnormality. Carriers of that abnormality tend to be of African, South Asian, or Southeast Asian descent, partly because carrying it gives relative protection from malaria.)

We look at these to see if our patient is similar to the patients in the study, and also to see if there are differences between the experimental patients (those who received the treatment being studied) and the control patients (those who did not receive the treatment) that might explain any difference in the experimental and control patients' outcome. In an ideal trial, patients are assigned to the experimental and control groups randomly, and neither the patients nor the study doctors know which group they are in until after all the data is collected and analysed. This helps to avoid all sorts of biases in the results.
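The random-assignment step described above can be sketched in a few lines of Python. This is only an illustrative sketch -- the patient IDs and fixed seed are invented for the example, and real trials use concealed allocation systems rather than a visible seed:

```python
import random

def randomize(patient_ids, seed=None):
    """Randomly split a list of patients into equal-sized
    experimental and control groups."""
    rng = random.Random(seed)
    shuffled = patient_ids[:]       # copy so the caller's list is untouched
    rng.shuffle(shuffled)           # random order removes selection bias
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical trial of 100 patients, numbered 1-100:
experimental, control = randomize(list(range(1, 101)), seed=42)
print(len(experimental), len(control))  # prints: 50 50
```

Neither group is chosen by the doctors or the patients, which is the point: any difference between the groups at the start of the study should be due to chance alone.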

Was the study evaluated beforehand for patient safety, and were the study patients informed of the possible risks and benefits before they agreed to participate in the study?
Because of many problems in the last few decades with people being experimented on without their consent, there are very strict rules on studies of human subjects. These always require that the patient agree voluntarily to participate (even children must agree to be in a study, although their parents must also give their approval). Studies on people are always approved before they start by review boards which have non-scientists and non-physicians as members. Without that advance review, no journal will publish the results and no funding agency will pay to help run the study -- quite apart from what a court may do to someone who experiments on people without advance review or without informed consent from the study patients. One example of the seamy side of medical research is the study by Wakefield and others on a possible association between the measles-mumps-rubella (MMR) vaccine and autism and bowel disease. The Lancet, the journal that originally published the study in 1998, retracted it in February 2010 -- after, among other discoveries, Wakefield was found to have obtained blood samples from children at his son's birthday party without telling them or their parents why he was drawing their blood. Unfortunately, many parents have refused to let their children receive MMR because of this study, and many children have unnecessarily developed measles because of unfounded fears triggered by a badly done study. (For more information on the Wakefield paper and its flaws, see this article at the Office.)

Did the study investigators have any reason -- known or unknown to the public and/or the study's readers -- to be biased?
Even if there are no obvious safety issues, if the study designers have conflicts of interest that they do not tell readers about, the whole study is suspect. Yes, many investigators have a direct or indirect interest in the results of their studies, including financial interest... but if they are honest about their conflicts, readers can decide for themselves how much the study results are worth. It wasn't until years after the study by Wakefield and colleagues on MMR vaccine and autism was published that the public found out that Wakefield stood to make money on a competing vaccine if MMR went off the market.

Are the results statistically significant?
I won't even try to get into the details of statistics here -- especially since the detailed statistical analysis of a study depends on many factors, including how the study was designed. The really simple version: we use statistical analysis to decide whether a difference we see between the experimental and control groups reflects a real effect of the treatment, or could have happened by chance alone. There are two ways to get that decision wrong: we can conclude there is a difference when there really isn't one, or we can conclude there is no difference when there really is one. These are sometimes referred to by the chance of getting the error we're trying to avoid: the first kind of error is a "false alarm", and the second is a "miss".

The terms "false alarm" and "miss" were supposedly coined by the British Royal Air Force to describe the accuracy of radar sets and their operators during the Battle of Britain in World War II. A "false alarm" resulted in a British fighter being sent up to attack a German bomber that wasn't really there, which the RAF wanted to avoid because it wasted precious aviation gasoline -- but they also wanted to avoid German bombers being "missed" long enough to drop bombs on London. The RAF tried to minimise both the miss rate and the false-alarm rate, but they couldn't make both rates zero at the same time: they could keep their gas supply intact by not sending any fighters up and letting the Germans bomb London, or they could send fighters up for every blip on the radar screen until they ran out of gas. Since neither running out of gas nor letting London burn was an option, the RAF had to compromise, and much of their tactical and radar work was aimed (pardon the pun) at reducing both the false-alarm and miss rates.

Medical studies work the same way. We can't reduce both the false-alarm and miss rates to zero at the same time. The best a study designer can do is make the error rates as low as possible. We can reduce the miss rate by including more patients in the study, but that requires finding all those study patients and then providing (some of) them with the experimental treatment (usually we provide the control patients with placebo treatments that look and feel like the experimental treatment so that they don't know if they did or did not receive the real treatment until everyone finds out at the end of the study). We can reduce the false-alarm rate by making the difference in our outcome measure (which is often as simple as a blood test result that we expect to show how well the experimental treatment works) larger, but that can increase the miss rate by making it harder to find a difference.
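The trade-off described above can be seen in a small simulation. This is a toy sketch, not a real power calculation -- the effect size, threshold rule, and sample sizes are invented for illustration. It repeatedly "runs" a two-group comparison and counts how often a simple threshold rule raises a false alarm (when the treatment truly does nothing) and how often it misses (when the treatment truly works):

```python
import random

def run_trials(n_patients, true_effect, threshold, n_trials=2000, seed=1):
    """Estimate false-alarm and miss rates for a toy two-group comparison.

    Each simulated trial compares mean outcomes in an experimental and a
    control group; we 'declare a difference' when the observed gap between
    the group means exceeds `threshold`.
    """
    rng = random.Random(seed)
    false_alarms = misses = 0
    for _ in range(n_trials):
        control = [rng.gauss(0.0, 1.0) for _ in range(n_patients)]

        # Null world: the treatment does nothing, so any declared
        # difference is a false alarm.
        null_exp = [rng.gauss(0.0, 1.0) for _ in range(n_patients)]
        if abs(sum(null_exp) / n_patients - sum(control) / n_patients) > threshold:
            false_alarms += 1

        # Real-effect world: the treatment shifts the mean outcome, so
        # failing to declare a difference is a miss.
        real_exp = [rng.gauss(true_effect, 1.0) for _ in range(n_patients)]
        if abs(sum(real_exp) / n_patients - sum(control) / n_patients) <= threshold:
            misses += 1

    return false_alarms / n_trials, misses / n_trials

# With more patients, both error rates fall at the same threshold --
# which is exactly why study designers fight to enrol more patients.
print(run_trials(n_patients=25, true_effect=0.5, threshold=0.4))
print(run_trials(n_patients=100, true_effect=0.5, threshold=0.4))
```

Raising the threshold alone lowers the false-alarm rate but raises the miss rate, which is the radar operators' dilemma in statistical form.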

Reasoning, Logic, and Logical Fallacies

I am indebted to two excellent Web sites on logic and logical fallacies, Logical Fallacies (whose Webmaster I haven't been able to identify), and the posting of Michael C. Labossiere's tutorial on fallacies by the Nizkor Project, as source material for this section. Several of the examples I use here are adapted from examples presented at these two sites.

We need to decide whether evidence is valid and applicable. This involves reasoning from the information we have -- taking premises (which correspond roughly to pieces of evidence) and drawing conclusions from the premises. Formal reasoning involves logic, and a logical fallacy can be thought of as an error in the process of reasoning. There are two kinds of reasoning, deductive and inductive, and two matching kinds of fallacies.

Deductive Reasoning and Deductive Fallacies
A good deductive argument is one whose conclusion cannot possibly be false if all of its premises are true. A classic example, quoted at Logical Fallacies:

  1. All men are mortal.
  2. Socrates was a man.
  3. Therefore, Socrates was mortal.

If statements 1 and 2 are true, then statement 3 cannot possibly be false.
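The "cannot possibly be false" property can be modelled with sets: if the set of men is contained in the set of mortal things, then membership in the first set forces membership in the second. A toy Python sketch (the names, including "Xanthippe", are just illustrative):

```python
# Premise 1: all men are mortal, i.e. the set of men is a
# subset of the set of mortals.
men = {"Socrates", "Plato", "Aristotle"}
mortals = men | {"Xanthippe"}   # contains every man, and possibly others

# Premise 2: Socrates is a man.
assert "Socrates" in men

# Conclusion: given the premises, Socrates MUST be mortal --
# there is no way to satisfy both premises and escape this.
assert men <= mortals           # premise 1 as a subset relation
assert "Socrates" in mortals    # follows necessarily
print("valid deduction")
```

No matter what names you put in the sets, as long as both premises hold, the conclusion holds; that is what makes the deduction valid.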

Any deductive argument whose conclusion can be false even when all of its premises are true commits a "formal fallacy". This is an awfully high standard, and many arguments we make, especially in science and medicine where we often must deal with statements that are only probably true, do not qualify as good deductions.

Inductive Reasoning
Since we have to deal with "probably true" and "probably false" statements so often, there is always a chance that our conclusion is false even if all the premises are true. Therefore, instead of "good" and "bad" deductive arguments, we talk about "strong" and "weak" inductive arguments. An example of a strong inductive argument:

  1. Every day the sun rises in the east.
  2. Therefore, tomorrow the sun will rise in the east.

Inductive arguments can contain fallacies which make them weak even if the premises are true. To think logically, one must be able to see and avoid these "informal fallacies".

Examples of Common Fallacies
(And yes -- some of these come from comments I have received over the years, both here on the Web and in my practice)

Post hoc, ergo propter hoc
(Latin: "after this, therefore because of this")

Just because B happened after A doesn't mean that B was caused by A. Example:

  1. Most drivers involved in car accidents had breakfast the morning of their accidents.
  2. Therefore, having breakfast causes car accidents.

This sounds ridiculous, doesn't it? However, try this one...

  1. Most autistic children start showing signs of autism when they are 18 months old.
  2. MMR (measles/mumps/rubella) vaccine is given at age 15 months -- 3 months before most autistic children start showing signs.
  3. Therefore, MMR vaccine causes autism.

This isn't very different from the "proof" that breakfast causes car accidents, is it?
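One way to see why "B happened after A" carries no causal weight on its own is a toy simulation: assign autism completely independently of a shot given at 15 months, and the shot still precedes the onset of signs in nearly every affected child, simply because nearly every child gets the shot. The numbers below (95% coverage, 1% incidence) are invented for illustration only:

```python
import random

def simulate(n_children=100_000, autism_rate=0.01, seed=7):
    """Toy model: autism is assigned independently of vaccination
    (by construction there is NO causal link), yet the shot precedes
    onset of signs for almost every affected child."""
    rng = random.Random(seed)
    vaccinated_before_onset = affected = 0
    for _ in range(n_children):
        vaccinated = rng.random() < 0.95       # most children get the shot at 15 mo
        autistic = rng.random() < autism_rate  # drawn independently of `vaccinated`
        if autistic:
            affected += 1
            if vaccinated:
                vaccinated_before_onset += 1
    return vaccinated_before_onset / affected

print(simulate())  # close to 0.95, the vaccination rate itself
```

The "shot came first" pattern appears in about 95% of affected children here even though the model contains no causal link at all -- the pattern just mirrors how common the shot is.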

I do not say this to denigrate people with autism or their families. As I have said elsewhere on this site, I have a family member with Asperger's syndrome, which is a form of autism. If anything, I believe that wasting time on specious connections between autism and vaccine administration is more likely to obscure the real causes of autism than it is to help find the real causes or to improve treatment.

Circumstantial Ad Hominem

A says that a claim is true. B attacks A by saying that the claim is to A's advantage. Example:

  1. Drs. A, B, and C conduct a study that finds no significant link between vaccine administration and autism. The costs of this study were paid for by the government.
  2. Other studies performed by Drs. A and B were funded by a company that makes vaccines.
  3. Therefore, the new study's results were falsified.

It is a good idea to be a little suspicious of studies funded by a drug or vaccine manufacturer -- as has been made quite clear in the press in recent years. (Disclaimer: I own (a small amount of) stock in a drug company. I also teach residents and students not to take claims the company makes about at least one of their products (not a vaccine) at face value, based on my reading of the literature and on my experience with that product in certain clinical situations.) However, premise 2 does not automatically make conclusion 3 true. Also -- and more to the point -- a study performed by investigators who have been funded by (possibly) biased sources in the past will not necessarily be biased in favour of the previous sources. In fact, that kind of bias is very unlikely, especially if the investigators want to continue working in their field.

Appeal to Belief

Just because lots of people think statement A is true does not prove that A is true. Example:

  1. Many people think that the earth is flat.
  2. Therefore, the earth is flat.

Statement 1 was true several centuries ago. Just because "many" or "most" people believe something doesn't make it true. The same can be said for the following, from the writer who I quote at the beginning of this page:

  1. Everyone knows that most flu vaccines used today contain thimerosal.
  2. Therefore, most flu vaccines contain thimerosal.

It's not even clear who "everyone" is here. Is it everyone in the world? Everyone in the United States? Every one of the writer's neighbours? And even if "everyone knows that most flu vaccines used today contain thimerosal", that doesn't prove that most flu vaccines contain thimerosal. (Nor does this statement say anything about how much thimerosal, if any, is in flu vaccine, or why the thimerosal is there in the first place.)

Straw Man

In this fallacy, the arguer misstates someone's actual position to make it seem weaker than it actually is. Then he attacks this misrepresentation and claims that the actual position is wrong. In truth, he hasn't attacked the actual position. An example, from the writer who I quote at the beginning of this page:

  • You apparently don't mind giving a child mercury or anyone else for that matter.

Compare this to my actual position, as I have stated it on the Infections and Immunizations index page.

I am aware that there are many people who believe that thimerosal should not be used to preserve vaccines. I believe that thimerosal should not be used to preserve vaccines if at all possible. I also know several people with autism and Asperger's syndrome, including one in my own family. However, I also believe that vaccine-preventable diseases are sufficiently dangerous to patients, especially the very young (many of whom I admit to the hospital every year with complications of flu, whooping cough, and other vaccine-preventable diseases), that any risk from the preservative is MUCH less than that from the diseases themselves, and I would not hesitate to vaccinate my family against those diseases even if I could not obtain preservative-free vaccine. (That is my opinion. Other people's opinions differ. As always, you need to talk to your own or your child's doctor to help decide if you or your child should receive vaccines.)

There are many more fallacies that I see all over the Internet and in the press. At some point I will add a few more. (And I will never identify publicly anyone who sends me E-mail, even critical E-mail, and including the sender of the message that triggered this page, without their explicit permission.)

As for vaccines, I say again that I recommend vaccines when and if I believe, based on available scientific evidence, that the risk of harm caused by the vaccine is much less than the risk of harm from the disease itself. Many of the diseases we vaccinate against, including diphtheria, HiB, measles, meningococcus, pneumococcus, polio, tetanus, varicella (chickenpox), and whooping cough -- and the flu -- can kill or cripple children and adults, and killed and crippled people regularly before we had the vaccines. I'll stop recommending a vaccine if and only if I am shown strong, valid scientific evidence that the risk of injury from the vaccine is higher than the risk of injury from the disease. And not before then.



We welcome your comments and questions.

PLEASE NOTE: As with all of this Web site, I try to give general answers to common questions my patients and their parents ask me in my (real) office. If you have specific questions about your child you must ask your child's regular doctor. No doctor can give completely accurate advice about a particular child without knowing and examining that child. I will be happy to try and answer general questions about children's health, but unless your child is a regular patient of mine I cannot give you specific advice.



We subscribe to the Health on the Net Foundation HONcode standard for trustworthy health information.

Copyright © 2009, 2010, 2011 Vinay N. Reddy, M.D. All rights reserved.
Written 09/12/09; last revised 02/05/11