Rating Medical Blogs

I’ve been thinking a lot lately about the possible ways of rating, ranking or scoring blogs.

As I mentioned when speaking to Mayo Clinic librarians a few weeks ago, I grow increasingly annoyed with the way Technorati uses the word “Authority.”

It annoys me because although the Technorati rankings are interesting and useful, they do not actually measure authority (as libraryfolk understand the term). As the Technorati FAQ explains:

Technorati Authority is the number of blogs linking to a website in the last six months. The higher the number, the more Technorati Authority the blog has.

So this score doesn’t indicate authority, quality or trustworthiness. At best, it could be said to measure popularity. (It could also be argued that the Technorati rank is to blogs what Impact Factor is to journals…but I’m not going to make that argument here.)

Other ranking sites do something similar, basing their rankings on metrics that measure popularity.

For instance, I’ve previously mentioned the Healthcare 100 ranking of medical blogs at eDrugSearch. Its scores for medical blogs are calculated using these elements:

Google PageRank (0 to 10) – Google PageRank is a link analysis algorithm that interprets Web links and assigns a numerical weighting (0 to 10) to each site. High-quality sites receive a higher PageRank.

Bloglines Subscribers (1 to 20) – Bloglines displays the number of subscribers for each blog’s feed. Each blog is assigned a Bloglines value from 1 to 20 based on subscriber ranges – e.g., more than 20, more than 30, etc. The more subscribers, the higher the Bloglines value.

Technorati Authority Ranking (1 to 30) – Technorati’s authority ranking shows the number of unique blogs that have linked to a particular blog over the past six months. The more link sources referencing your blog, the higher the Technorati ranking. Similar to a blog’s Bloglines value, a blog’s Technorati value is determined based on ranges (e.g., top 10,000, top 20,000, etc.), and each range is assigned a number (1 to 30) that is part of the algorithm.

eDrugSearch.com Points (1 to 10) – To add a little spice to the Healthcare 100 rankings, we add our own subjective ranking of each blog based on the quality of content and frequency of updates.

So with the exception of the eDrugSearch.com Points, these measures are generally objective and, like Technorati, measure only popularity.
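For the curious, here is roughly what a composite like this looks like in code. It’s a minimal sketch, not eDrugSearch’s actual formula: only the score ranges are published, so the bucket thresholds and the decision to simply sum the four components are my own guesses.

```python
# A sketch of a Healthcare 100-style composite score. The component
# ranges (0-10, 1-20, 1-30, 1-10) come from eDrugSearch's description;
# the exact thresholds and the summation are assumptions on my part.

def bloglines_value(subscribers: int) -> int:
    """Map a Bloglines subscriber count to a 1-20 value via ranges."""
    # Hypothetical thresholds ("more than 20, more than 30, etc."):
    score = 1
    for threshold in range(20, 210, 10):  # 20, 30, ..., 200
        if subscribers > threshold:
            score += 1
    return min(score, 20)

def technorati_value(rank: int) -> int:
    """Map a Technorati rank to a 1-30 value via range buckets."""
    # Hypothetical: top 10,000 scores 30, top 20,000 scores 29, etc.
    for i in range(30):
        if rank <= (i + 1) * 10_000:
            return 30 - i
    return 1

def healthcare100_score(pagerank: int, bloglines_subs: int,
                        technorati_rank: int, edrugsearch_points: int) -> int:
    """Combine the four components; summing them is an assumption."""
    return (pagerank                              # 0-10, from Google
            + bloglines_value(bloglines_subs)     # 1-20
            + technorati_value(technorati_rank)   # 1-30
            + edrugsearch_points)                 # 1-10, subjective

print(healthcare100_score(pagerank=5, bloglines_subs=120,
                          technorati_rank=15_000, edrugsearch_points=7))
# -> 52
```

Swap in different thresholds and the rankings shuffle, which is a good reminder of how arbitrary these composites are.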

So what’s wrong with attempting to measure blog popularity? Nothing at all. I just want us to remember that popularity does not equal quality. As an illustration of this, I’d like to point out that the New Kids on the Block had TWO #1 records in both the U.S. and the U.K.

Of course, the newest attempt to rank medical blogs is at MedGadget.nl, and this has gotten a good bit of attention from medical bloggers.

Berci Meskó summarizes the elements that are used to calculate rank in MedGadget.nl’s system:

  • the number of published articles
  • the number of comments
  • the mean Google pagerank of the homepage
  • the mean Technorati rank and the mean inblog and inlink counts
  • the mean number of FeedBurner subscriptions (the circulation) and the mean number of FeedBurner hits.

Berci continues:

In my opinion, this list is more accurate and comprehensive than that of eDrugSearch. And the reason I think so is not Scienceroll’s 22nd-place rank, but the objective parameters. Let us know if you know of more lists of medical blogs.

I can’t agree with Berci. First, because the word “accuracy” has absolutely no place here: neither list can be “accurate,” any more than it can be “salty” or “orange”; the word simply doesn’t apply. Second, because I see these other problems with the metrics that MedGadget.nl uses:

  • Counting the number of published articles says nothing about quality OR popularity. It might say something about how long the blog has been around, or about how prolific the blog’s author is. It also might indicate a whole ton of lousy content. Any way you look at it, this isn’t a reliably useful metric.
  • Counting the number of comments says nothing about quality or popularity. If paired with the number of page views as a ratio, it might say something about how engaged readers are and how moved they feel to share their thoughts (see the sketch after this list). Without that ratio, it might (again) say something about how long the blog has been around or how prolific the blog’s author is. It also might indicate a whole ton of comment spam. Any way you look at it, this isn’t a reliably useful metric.
  • To use the mean number of subscribers via FeedBurner as a metric is clearly a very bad idea for the simple reason that many blogs don’t use FeedBurner.
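If you did want to salvage the comment count, the ratio mentioned in the second bullet might look like the sketch below. It’s hypothetical: the function is mine, and you’d have to supply page views from your own stats package, since no ranking site I know of computes this today.

```python
def engagement_ratio(comments: int, page_views: int) -> float:
    """Comments per page view: a rough proxy for reader engagement.

    A raw comment count conflates blog age, prolificacy, and comment
    spam; normalizing by page views at least measures how often
    readers are moved to respond.
    """
    return comments / page_views if page_views else 0.0

# e.g., 150 comments across 50,000 page views:
print(engagement_ratio(150, 50_000))  # -> 0.003
```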

Technorati and the Healthcare 100 are imperfect, but the rankings at MedGadget.nl are significantly less meaningful.

So how do you find out which blogs are best? Heck, check out the most popular ones written on topics that interest you; they’re probably popular because they have some sort of broad appeal that you may share. More importantly, read, and read critically. Take seriously the reading recommendations of the blog authors you most enjoy/trust. Read the “about” page of each blog carefully to check the author’s credentials and experience. (Hint: A blog without a detailed “About” page is like a “fact” without a citation.)

What do you think?

Do you have a favorite rating system for blogs? How do you go about evaluating a medical blog (or a biblioblog)?

17 thoughts on “Rating Medical Blogs”

  1. Pingback: MedblogNL lijkt vooralsnog een succes [“MedblogNL appears to be a success so far”] » Medgadget.nl

  2. Hey David! :)

    Ok, you’re right about the number of comments/articles. It has nothing to do with the quality of a blog. But counting Bloglines subscribers is even worse than doing the same with FeedBurner.

    The e-Drug list is half-objective; the Medgadget list is objective, just inaccurate… :)

    Accuracy: I mean that all the medical blogs should be ranked by the same objective parameters. Being No. 1 while somebody else is 100th doesn’t mean you’re a hundred times better than he is.

    My choice would still be the second list (Medgadget). I think we do need a good list just to help our readers orient themselves among the many medical blogs. Even if my rank were 200, I could still be the favourite blog of many readers, so the rank alone doesn’t really mean anything. Just like a compass…

    First of all, thanks for your critical review of the algorithm I use to determine the ranking.

    I agree with you that the number of comments or the number of articles is no indication of the quality of the postings. As I wrote in a comment, it is unfair to give the comments too much influence. At the end of this month I will be able to determine the influence of the comments on the ranking of a medblog; if it is too big, I will decrease it by adjusting the coefficients in front of the parameters.

    The same point holds for the articles. If a medblog decides to publish a lot of advertorials, it will gain points with “low”-quality content, and it will be hard to get rid of those advertorials automatically. The current algorithm also puts a lot of emphasis on the Technorati ranking; again, I will revisit this at the end of the month, when I can determine the influence of the individual parameters on the overall ranking. A better indicator might be the ratio between the number of articles and the number of comments. Sandy also disagrees with the use of the number of comments, because not all medblogs have them enabled.

    The use of FeedBurner statistics is indeed a problem, since a lot of medblogs do not use this service. I could switch to Bloglines, but that causes problems of its own; perhaps I will use the Bloglines statistics whenever FeedBurner statistics are not available.

    I might add your “about” page suggestion to the ranking algorithm: if a medblog has a relevant about page, it gets extra points.

    Still, it is my hope that the parameters defined above will give an overall idea of the “quality” of a site. Even so, it is very hard to define the quality of a medblog.

  4. Ha! I will write that someday, Berci!

    …but not today.

    FeedBurner is an awful measure because only some blogs use it. However, since roughly “X%” of RSS users use Bloglines, a Bloglines subscriber count is likely to be proportional to the total number of subscribers.

    I think using FeedBurner is wholly wrong and using Bloglines is iffy.

    Also: the word you’re looking for isn’t “accurate,” it is “consistent.”

    Best always, Berci!

    -David

  5. Since you ask about blogs in general…and since I’ve struggled with this for some time…

    I won’t even touch “best,” since quality is not determined by popularity (unless you’re of the mind that truth is determined by consensus, and I’m guessing you’re not). I used to use “reach”–but that now strikes me as extreme, given that any of the numbers you can get are wild approximations. (It’s gotten worse: Until recently, you could use Yahoo! Link: searches, checking for actual rather than estimated results, as a reasonably good number–but Yahoo!’s changed its handling and it’s now as worthless as Google’s Link: result.)

    Right now–for a forthcoming book where it’s totally unimportant, and for one after that where it’s slightly more significant–I’m using “Visibility.” It’s actually the log10 of the sum of Bloglines subscriptions and Technorati links, both of which I garner from popuri.us. (That site offers a bunch of other numbers, most of which strike me as even more specious.)

    Right now, for example, this blog comes up with a Visibility that rounds off to 3.0, which puts it into the “Quite visible” category (3.0-3.9). Using the log10 helps reduce the snowball effect of popularity. (I’m using “Slightly visible” for 1.0-1.9, “Visible” for 2.0-2.9, where most of us live, “Quite visible” for 3.0-3.9, “Highly visible” for 4.0-4.9, and “Extremely visible” for anything over 5, which is pretty rarefied territory.)
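    In code, Visibility looks something like this (a minimal sketch; the subscription and link counts are supplied by hand here rather than garnered from popuri.us):

    ```python
    import math

    # Visibility = log10(Bloglines subscriptions + Technorati links).
    # Band labels follow the ranges above; "Below scale" for values
    # under 1.0 is my own placeholder.
    BANDS = [(5.0, "Extremely visible"),
             (4.0, "Highly visible"),
             (3.0, "Quite visible"),
             (2.0, "Visible"),
             (1.0, "Slightly visible")]

    def visibility(bloglines_subs: int, technorati_links: int) -> float:
        total = bloglines_subs + technorati_links
        return math.log10(total) if total > 0 else 0.0

    def band(score: float) -> str:
        for floor_value, label in BANDS:
            if score >= floor_value:
                return label
        return "Below scale"

    v = visibility(bloglines_subs=400, technorati_links=600)  # sum = 1,000
    print(round(v, 1), band(v))  # -> 3.0 Quite visible
    ```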

    The #1 blog on the Healthcare 100 also comes out as Quite visible (3.6).

    And, of course, all of this is in the realm of SWAG–statistical wild-ass guessing. None of it has much to do with quality.

  6. Walt, I’m making a note of “SWAG” …I’m almost certain I can find a way to work it into my vocabulary so that it sounds natural if I use it in the next meeting I’m in that involves analysis of our hospital’s quality data…

  7. I did not read all the comments yet, but I have a suggestion or two:

    1. Technorati or others could use polls or surveys to gauge popularity.
    2. A committee formed from the different interested medical associations, such as the AMA and BMA, could check the authority, real rank, and quality of the medical content of blogs. As medical blogging is becoming more and more important, for many reasons I’m not able to list here, I think the medical associations might be interested in this.

  8. Hi Iskandar-

    1. Polls and surveys are problematic because participants are self-selecting. Besides, I think we can estimate popularity without them.

    2. Rather than endorsing external medical blogs, I think professional institutions like the AMA or BMA will probably create their own blogs.

  9. What, David, New Kids on the Block weren’t quality? Mwahahaha. Aside from which, good post. I don’t think there’s a great solution – as a parallel, local news stations have measures of viewership, and newspapers of subscribers, but that doesn’t mean that “if it bleeds it leads” is quality reporting, or what viewers/readers would prefer if given an option. It’s an old problem, in a new medium.

  10. Pingback: Aanpassingen aan MedblogNL top 25 [“Adjustments to the MedblogNL top 25”] » Medgadget.nl

  11. Pingback: Sneak preview: ranking English-written medblogs » Medgadget.nl

  12. I know exactly what you mean when you say that the ranking doesn’t reflect “Authority/Accuracy”.

    A quick example from the MedGadget.nl list:

    Well Woman Blog is ranked #62 while The Blog That Ate Manhattan is #77. The problem? Here’s an actual line from a WWB post: “A variety of [birth control] pills are available as well as injections and patches. None have proven safe for a woman to use.”

    That last statement alone should disqualify WWB from inclusion in any medblogger ranking. Of course, the key question is how do you design an algorithm capable of distinguishing authoritative content from nonsense?

  13. Ema, I agree with you that WWB clearly sucks.

    But the key question isn’t how to design an algorithm capable of discerning authority because such a thing isn’t possible.

    The key questions (imho) are (1) whether ANY algorithmic analysis can produce a useful result and (2) whether users can generally grasp a nuanced understanding of how such algorithms work and stop confusing terms like “top,” “best,” “most respected,” “most reliable” and “most popular.”

    But as Rachel pointed out (see her comment from August 13th), this isn’t a new problem, just a new-media version of an old one.

  14. But the key question isn’t how to design an algorithm capable of discerning authority because such a thing isn’t possible.

    Hmm, I must say that’s something I hadn’t considered.

    Still, I’m not ready to give up on the idea just yet. Wouldn’t adding a human element make such an algorithm possible? [It took me all of 5 minutes to go to the WWB (my first visit there) and figure out its “authority” level.]

  15. Hi Ema-

    Because authority is at least partially determined by subjective judgement. A secular humanist with a PhD in Women’s Studies and Public Health will likely evaluate WWB’s authority differently than, for instance, Pat Robertson would.

    If you were to add a number representing human judgement to the algorithm, you’d need to have a LOT of faith in the person(s) assigning that number.