Comments on the "Basic vs. Advanced Life Support Outcomes After Out-of-Hospital Cardiac Arrest" Article

I originally began this blog post with an explanation of my education and training with regard to research.  It began to sound like a rooster crowing, so I moved it to the end in favor of beginning with the most important part of my message.

And here it is:

If good science isn't used in conducting research, the results are meaningless.

There is a corollary to this rule:

Conclusions aren't more scientific or less scientific.  They either ARE science, or they aren't. 

The most recent popular news article making the rounds of EMS websites covers a new "study" concluding that, when it comes to cardiac arrest, ALS is a waste of time and BLS transport offers the best chance of long-term meaningful recovery.

The study itself has numerous design flaws, the most egregious of which is the use of secondary data--without verification or quality control--in an analysis for which it was not designed.  The further lack of controls for very important factors, such as the proportion of cardiac arrest cases pronounced in the field or the ACLS algorithm followed, leads me to suspect that the conclusions of the study--conclusions already being cited by MDs condemning ALS ambulances across the board--are almost certainly meaningless.
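
To make the field-pronouncement problem concrete, here is a minimal sketch in Python with purely hypothetical numbers; nothing below comes from the study itself.  It shows how two services with identical true survival can produce different "survival rates" depending only on how cases pronounced in the field are counted:

```python
# Purely hypothetical numbers (not from the study) to show how field
# pronouncements can skew a survival comparison.
# Assume both services have identical true survival: 10 of 100 arrests.
als_arrests, als_survivors = 100, 10
bls_arrests, bls_survivors = 100, 10

# Suppose ALS crews pronounce 40 futile cases on scene, and the analysis
# counts only transported patients, while the BLS service transports all.
als_transported = als_arrests - 40

print(f"ALS 'survival': {als_survivors / als_transported:.1%}")  # 16.7%
print(f"BLS 'survival': {bls_survivors / bls_arrests:.1%}")      # 10.0%
# Identical services, different numbers -- an artifact of how cases are
# counted, not of care quality.  Flip the counting rule and the bias flips.
```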

If a sample is used and it isn't randomized and generalizable to the entire population, the results are meaningless.  If the data are skewed, faulty, or the product of bad measures, the results are meaningless.  If the only mathematical analysis done is correlative and not causal, the results are meaningless.  If the research is data mining, not a well-designed study, the results are meaningless.

I'm sorry to say, that survey you sent to all your friends and co-workers for your college research project wasn't science, and your conclusions weren't valid.  All you could reasonably conclude from it was that on a certain day, a certain number of people wrote certain things down on a piece of paper, and what they wrote has no relevance to the world at large.  If you did your survey by e-mail, well, I won't even go into it.
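
To illustrate the correlation-versus-causation point, here is a minimal, hypothetical sketch of my own (using numpy and scipy, neither of which the original post mentions): a hidden confounder produces a strong, "statistically significant" correlation between two variables that have no causal link at all.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# A hidden confounder drives both variables; neither causes the other.
# The variable names are illustrative only, not real study measures.
confounder = rng.normal(size=1000)            # e.g., call volume in a district
als_dispatches = confounder + rng.normal(scale=0.5, size=1000)
poor_outcomes = confounder + rng.normal(scale=0.5, size=1000)

r, p = stats.pearsonr(als_dispatches, poor_outcomes)
print(f"r = {r:.2f}, p = {p:.1e}")  # strong, 'significant' correlation

# The correlation is real, but concluding that one variable causes the
# other from it would be exactly the error described above.
```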

You MUST use good science if you want your results to be scientific.

I love research methods and statistical analysis.  My relationship with math was never a good one, however.  Growing up, I was always below the curve in math.  English, history, science, social studies--I was in the top classes for those, but then for math class, I was dropped down to a lower level.  Remedial math.  I hated it.  Grrr.

I graduated college with a degree in history.  No math there.  Then I went to graduate school to get my Master of Public Administration (MPA) degree.  Not much math there either.  Then--and I'll pause for laughter--I got a scholarship to go get my PhD.  You've heard of "publish or perish" in university life?  It's true.  You have to be constantly doing research, writing, and publishing if you want a real career in academe.

Turns out, I'm an idiot savant when it comes to research methods and statistics.  I see the relationships in my head, I understand the variance, I know the patterns the computer is looking for, and I...just...get it.  So when I joined the faculty of various universities, I became the guy other professors came to for help with their research.  That got my name on a lot of journal articles, and got a lot of their journal articles published.  It also clued me in on a dirty little secret of academic life (shhh, don't tell anyone):  most professors conducting research are very insecure, shaky, and weak when it comes to research design and statistical analysis.  They learn one or two methods, get comfortable with them, and stick to them even when they're not the best fit for the problem at hand.

Don't believe me?  Try reading the body of work of the average professor.  You'll see ANOVA after ANOVA, linear regression after linear regression, chi-square after chi-square, again and again in their work.  Their sampling framework is always the same.  They use the same survey questions or other measures.  Their work isn't eclectic; it's repetitive.  They're staying in their comfort zone.
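
As a hypothetical illustration of how a comfort-zone method can mislead (again my own sketch with numpy and scipy, not anything from the post): an ordinary linear regression applied to a clearly curved relationship reports "no significant relationship," even though the dependence is nearly perfect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# A clearly U-shaped relationship (made-up data for illustration).
x = rng.uniform(-3, 3, size=300)
y = x**2 + rng.normal(scale=0.5, size=300)

# The 'comfort zone' analysis: an ordinary linear regression.
result = stats.linregress(x, y)
print(f"slope = {result.slope:.2f}, p = {result.pvalue:.2f}")

# Because the data are symmetric, the fitted slope is near zero and
# 'not significant' -- a linear-regression-only analysis concludes
# there is no relationship, even though y depends on x almost perfectly.
```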

I understand when people complain that research design is complicated and hard to do correctly, and that statistical analysis is a mystery.  I understand their misery, but I don't experience it.  For me, it all makes logical sense.  That's why it's easy for me to be critical of the research I see out there, especially the research I see in EMS.  Medical school requires that a heck of a lot of information be crammed into an already overstuffed brain, and research design and statistical analysis aren't among the topics covered in any great detail.  So those who conduct medical studies are often winging it.  They're trying, and they're doing a few things right, but they're missing the mark.  It isn't really all that surprising that one year we think one thing works and the next we don't.  We're not really testing our observations in an epistemologically rigorous way.

All this jibber-jabber really only serves to further the impression that all research is crap.  And that's really too bad, because science is an awesome and beautifully majestic lens through which we can reveal wonderful ways of understanding the world.  But in order to reveal anything of value, the research has to actually be scientifically valid.
