Apomediation, Online Health Info and Baloney

A recent article in the Journal of Rheumatology:

“Trying to Measure the Quality of Health Information on the Internet: Is It Time to Move On?”

Short answer:

Hell, no.

Longer answer:

Says the article:

“The natural assumption is to believe that there exists a link between the quality of information on the Internet and harm. However, a systematic review attempting to evaluate the number and characteristics of reported cases of harm in the peer-review literature determined that for a variety of reasons, there was little evidence to support this notion.”

It is impossible to quantify why people make bad decisions. For instance, say someone makes foolish financial decisions and loses everything they own: can it be determined whether those bad decisions were based on information they found online?

On the other hand, ask anyone you know who works in an Emergency Room if they’ve seen people harm themselves with a self-diagnosis or self-treatment based on something they read online. Every one of them will confirm they have. That isn’t necessarily the fault of the information found online, but we know that people increasingly make health decisions informed by what they find online, so we don’t need evidence to assume, a priori and with confidence, that bad information can lead to harm. It is common sense for everyone in the health community to promote and produce good online information resources and to discourage the existence and use of bad ones. Most importantly, we must help both health professionals and patients gain the information and health literacy skills to tell the good from the bad.

Authors Deshpande and Jadad argue that the evaluation tools for Web sites suffer…

“…from several limitations, which, in addition to those mentioned by the authors, include uncertain levels of usability, reliability, and validity.”

I won’t argue with that. I don’t think any single evaluation tool can let someone without information literacy skills determine the quality of information found online. I don’t think that even a vast arsenal of such evaluation tools will do the trick.

The authors have questions I’ll answer (I don’t care that the questions were intended to be rhetorical- they need to be answered):

“Will we ever develop an ideal tool that allows individuals to assess the quality of health information?”

Nope. We’ll also never invent a diagnostic tool that can replace a good physician’s experience and judgement- but that doesn’t mean we shouldn’t create tools that help new doctors build these skills or help more experienced doctors flesh out their differential.

“What are the determinants of this quality?”

That’s a pretty broad question, but the Medical Library Association (MLA) has a good basic guide on where to start.

“Is it possible to assess or measure quality?”

Medical librarians assess the quality of information every single day. So…yeah, it is.

“Even if possible, is the formal assessment of quality even necessary?”

Necessity depends on who the user is and what his/her particular information needs are.

“Does it even matter?”

Kind, encouraging teachers throughout the world often say that there’s no such thing as a stupid question. These extraordinarily compassionate educators are lying. Yes, it matters.

I’m a Web enthusiast who sees a lot of value in many online collaborative efforts. There are absolutely places and uses for the wisdom of crowds– but to hear some people talk about apomediation, you’d think the wisdom of crowds could replace the judgement of experts.

“For example, Internet users could provide ratings or recommendations based on their own experiences to judge the quality and relevance of health information.”

Huh. So if we’re just gonna go with whatever a crowd of self-selecting amateurs agrees on, I guess we don’t need double-blind trials any more, either? We’ll just ask a crowd their opinion. Who needs empiricism? Science, schmience. Why should we take this article more seriously because the authors have two MDs and a doctorate between them? I’m not Andrew Keen (a total jackass), but neither am I an irrational technotopian.

“Analogous to the peer-review process, aggregation of ratings from many individuals (a form of crowdsourcing) allows “good” information to be highlighted prominently, while “not so good” information gets pushed to the bottom.”

That’s a terrible analogy. In the peer-review process, both the author(s) and the reviewer(s) are credentialed experts in their field. I would not ask a room full of neurologists for advice on my house’s plumbing, and I would not ask a room full of plumbers about the treatment of peripheral neuropathy. For the “Digg” model to work at all for health information, the crowd would need to be very large and very knowledgeable. And even then, widely agreed-upon practices have later been proven wrong by empirical testing and evidence… so shouldn’t we rely on evidence instead?
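To make the “Digg” model concrete, here is a minimal sketch (in Python) of the rating-aggregation mechanism the quoted passage describes: each item’s crowd ratings are averaged and the items are sorted so that “good” information floats to the top. The item names and rating values below are hypothetical, invented purely for illustration; this is not the authors’ implementation of anything.

```python
# Minimal sketch of Digg-style rating aggregation: average the crowd's
# ratings per item and rank items best-first. Item names and scores are
# hypothetical, for illustration only.
from collections import defaultdict
from statistics import mean

# Each tuple: (item, rating on a 1-5 scale) submitted by an anonymous user.
crowd_ratings = [
    ("vaccine-safety-overview", 5),
    ("vaccine-safety-overview", 4),
    ("miracle-cure-blog-post", 5),   # enthusiastic but uninformed raters
    ("miracle-cure-blog-post", 5),
    ("peer-reviewed-review-article", 3),
]

def rank_by_crowd(ratings):
    """Aggregate ratings per item and sort by average rating, best first."""
    by_item = defaultdict(list)
    for item, score in ratings:
        by_item[item].append(score)
    return sorted(
        ((item, mean(scores), len(scores)) for item, scores in by_item.items()),
        key=lambda entry: entry[1],
        reverse=True,
    )

for item, avg, n in rank_by_crowd(crowd_ratings):
    print(f"{item}: average {avg:.1f} from {n} rating(s)")
```

Note that nothing in that ranking knows anything about accuracy: the hypothetical “miracle cure” post outranks the review article simply because its raters were enthusiastic. That is the gap between aggregated opinion and peer review.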

The problem with relying only on the wisdom of crowds is that, sometimes, the crowd shows an alarming lack of wisdom regarding health information. Huckster snake-oil salesman Kevin Trudeau’s Natural Cures ‘They’ Don’t Want You to Know About was a New York Times self-help Best Seller! (For more on why this is a great example of an unwise crowd, see this video.)

Let me illustrate with a real-life example: When our infant son developed what appeared to be a tremor, we didn’t waste time asking a crowd. We first asked a local pediatric neurologist for his opinion. When he was unable to make a diagnosis, we asked Dr. Marc Patterson, a pediatric neurologist at the Mayo Clinic. Dr. Patterson’s unusual experience and education enabled him to make a diagnosis very quickly. It turns out that Simon had a benign shiver that has already almost gone away. We were, of course, deeply relieved and grateful to Dr. Patterson. We were also really impressed with the pediatrics center at Mayo. Wow.

I wonder: If Drs. Deshpande and Jadad have family members with worrying illnesses, do they consult the best available clinical expert, or go to the wisdom of the crowd?

But I’m getting off-track. Back to the article.

As seems to be typical for ill-informed physician technotopians, these authors point at Wikipedia to support their perspective.

“The interplay of users to collaborate and deal with information overload has already been proven successful in other areas outside the health space. For example, Wikipedia not only allows users to submit content on various topics, but also provides the capability for users to edit the content of others. Although there is the potential for misuse, Wikipedia, which relies on anonymous, unpaid volunteers, seems to be as accurate in covering scientific information as the Encyclopedia Britannica.”

Well, yeah. And I’d trust Wikipedia as a source for health information about as much as I’d trust Britannica… which is to say, not very much. Please see a number of previous posts on health information wikis.

“Since its inception in 1990 until the present day, the health system has grappled with how to manage potential harm associated with information available on the Internet. Research in this area, for the most part, continues to assume that techniques used to evaluate paper-based information can automatically be applied to online resources, ignoring the added complexity created by the multiple media formats, players, and channels that are brought together by the Internet.”

Someone please explain to me why peer review is less effective for texts distributed online than for texts distributed on artifacts of dead trees. Text is text. Someone please explain to me why peer review would be less effective if that text is read aloud and recorded or played online as video or audio (downloadable or streamed, in any format). The “added complexity” the authors mention does not affect peer review at all. Perhaps that is why they make the assertion without providing any support for it.

I like Web technology and have made a nice niche for myself by writing and talking about how it can be useful to health information professionals. I am sick to death, however, of people attempting to make names for themselves with inane prognostication and unsupported technotopianism. When these authors write that “…as the Web continues to evolve, we will likely gain new insights as to how this happens along with a better understanding of how to handle health information from any source…”, I want to beat my head against a wall. To me, this is no different from saying we should go ahead and build lots more nuclear reactors because we have faith that technology will work out a way to dispose of nuclear waste safely before we are harmed by our inability to dispose of it properly. Such things are too important to take on faith.

“The time has likely come to end our Byzantine discussions about whether and how to measure the quality of online health information. The public has moved on. It is time to join them in what promises to be an exciting voyage of human fellowship, with new discoveries and exciting ways to achieve optimal levels of health.”

Such a perspective would have us ally ourselves with the Jenny McCarthys of the world. Jenny McCarthy believes and popularizes the idea that immunizations cause autism spectrum disorders, despite the fact that there is no scientific evidence correlating immunizations and ASDs.

Laypeople, even brilliant laypeople, do not generally have the information or health literacy skills to know where to find quality information and to know what to trust. My brother is an experienced Web programmer. He has topped out every IQ test he has ever taken. He is a brilliant man and as talented an autodidact as anyone I know. Still, when he needed a medical procedure, he was able to find more information and trust its authority by conferring with me- because I spend my days finding and evaluating health information.

Reliance on science and expertise is not Byzantine. Rationalism is not Byzantine. Empiricism is not Byzantine. Politicians should not make public policy decisions based on polls, and clinicians should not make decisions based on the misinformed preferences of their patients. Clinicians have a duty to educate patients and help point them towards good information because the volume of shoddy information is growing at an alarming rate.

9 thoughts on “Apomediation, Online Health Info and Baloney”

  1. Pingback: Why bother apprasing medical info on the web? « (the) health informaticist

  2. You wonder:

    If Drs. Deshpande and Jadad have family members with worrying illnesses, do they consult the best available clinical expert, or go to the wisdom of the crowd?

    The irony here is: If you want to learn about ehealth or the role of social networking in health, are you asking an ehealth expert (e.g. Mr Chief Innovator and Founder, Centre for Global eHealth Innovation, Canada Research Chair in eHealth Innovation, Dr Jadad), or are you consulting the wisdom of the crowd? Surely – by publishing this blog post – you decided to put the apomediation model to a test. And apomediation works – by people posting comments on your blog you may learn from a comment at http://healthinformaticist.wordpress.com/2009/01/20/why-bother-apprasing-medical-info-on-the-web/ that these authors Deshpande and Jadad have in fact a huge undisclosed conflict of interest – they omit to mention the minor fact that they own or are involved in a commercial medical social networking site. They cite their fancy academic titles in the paper but forget a simple conflict of interest statement – at many universities such behavior would be grounds for dismissal or an academic misconduct investigation.

    So experts have their pitfalls, conflicts of interest, etc. as well. So what is better, apomediation or intermediation? As Eysenbach stated in his original apomediation theory, there is a place for both, and people switch back and forth depending on their situation, prior knowledge, etc. It is never EITHER experts OR peers/wisdom of crowds etc.

  3. Thanks, David. I’m giving a health literacy presentation for Seniors in a few weeks, and I’ve been wondering what to say when someone asks about “the wisdom of crowds” and whether or not our traditional evaluation methods need to be revamped. I’ll stay the course …

  4. Hi Joe-

    If I understand your point correctly, I think we agree. I thought Eysenbach’s article rightly asserted that there is value in both. My greatest complaint about Deshpande and Jadad is that they imply apomediation can (and should) replace expertise.

    They’re saying some hugely stupid things. If they’re motivated by what you call an undisclosed conflict of interest, they’re schmucks as well.

    All that said, I’m not sure I agree a blog post with comments demonstrates or tests apomediation. However, Rachel submitting it to StumbleUpon might. 🙂

    Hi Michele!

    I think our traditional evaluation methods could probably stand a lot of improvement- but I don’t think that the wisdom of the crowd comes close to approaching the value of peer review when it comes to evaluating the quality of health information.

    Thanks for the comments!

  5. Pingback: Headlines for Jan 19-25 | Health Content Advisors