Impact factor for blogs

For people unaware of impact factor: it is the AP Top 25 of scientific journals. It provides a score for every journal based on how often scientific articles cite articles published by that journal. The number of citations is divided by the number of articles published, so just pushing a lot of crap through in hopes that some of it gets cited does not inflate your score. The top three medical journals are NEJM, Lancet, and JAMA, in that order. Journal editors spend a lot of time worrying, strategizing, and optimizing in order to improve their impact factor.
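To make the arithmetic concrete, here is a toy impact factor calculation. The counts are invented for illustration; they are not real journal numbers.

```python
# Toy impact factor calculation; the counts below are invented for illustration.
citations = 2400          # citations this year to articles from the prior two years
articles_published = 800  # citable articles published in those two years

impact_factor = citations / articles_published
print(impact_factor)      # 3.0
```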

I had the privilege of going to the AJKD editors meeting at Kidney Week and saw firsthand how IF colors a lot of what they do. Editor-in-chief Dr. Levey talked about the change in IF over the last decade and what happened over the last year. Then he talked about the different sections of the journal. One recurring topic was narrative reviews. He said that these are rarely cited, so they increase the denominator of the IF without moving the numerator. Despite the fact that these hurt the IF, they publish them anyway because they see it as part of their mission. Not every journal editor’s morals are so straight.

A different scoring system, called the H-index, is used to rank authors and scientists.
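For comparison, the H-index is the largest number h such that an author has h papers with at least h citations each. A quick sketch, with invented citation counts:

```python
# H-index: the largest h such that h papers have at least h citations each.
# The citation counts below are invented for illustration.
citations = [25, 18, 12, 9, 7, 4, 2, 1]  # one entry per paper

h_index = sum(1 for rank, c in enumerate(sorted(citations, reverse=True), start=1) if c >= rank)
print(h_index)  # 5
```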

Brent Thoma at Academic Life in Emergency Medicine has published a ranking system for blogs and online education. This has sparked an active and colorful debate, here and here. In terms of why he did it, he said (paraphrasing), “A scoring system will come to open access medical education content. I would like it to evolve out of the community rather than have it imposed on the community. So let’s build our own score.”
Brent is an ER doc and all of the blogs he ranked were of interest to ER doctors. However, he published his scoring algorithm and I used it to score PBFluids, Renal Fellow Network, Nephron Power, and eAJKD.
  1. PBFluids edged RFN with 3.02, good for 19th place among the 60 blogs he ranked.
  2. Renal Fellow Network came in right below PBF with a 2.94, good enough to tie for 20th on the list.
  3. eAJKD scored 2.86, good for 21st place.
  4. Nephron Power had 2.18, good for 36th place.
None of the top three had any Google+ presence, and Nephron Power was hurt by the lack of a Facebook page and a minimal footprint on Twitter.
You can view my spreadsheet here. Please add your website if you would like. 
  • Alexa is the website rank according to Alexa.com. Mine was 4,589,000. Enter the rank divided by 1,000. This is a measure of traffic.
  • PageRank is Google’s measure of quality based on its analysis of incoming links (and likely a bunch more secret ingredients). You can find out a site’s PageRank here.
  • Twitter is the number of followers of the principal author.
  • Facebook is the number of likes for the blog on its Facebook page.
  • Google+ is some metric around this failed social media site.
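I won’t reproduce Brent’s exact weighting here, but to show how these five ingredients might roll up into a single number, here is a rough sketch of a composite score. The log transforms and weights are my own placeholders, not his published SM-i formula; see his post and the spreadsheet for the real thing.

```python
import math

# A hypothetical composite score built from the five inputs above. The transforms
# and weights are placeholders for illustration, NOT Brent Thoma's published SM-i.
def toy_score(alexa_rank, page_rank, twitter_followers, facebook_likes, google_plus):
    traffic = 1 / math.log10(alexa_rank + 10)   # lower Alexa rank means more traffic
    reach = (math.log10(twitter_followers + 1)
             + math.log10(facebook_likes + 1)
             + math.log10(google_plus + 1))
    return page_rank + traffic + reach

# PBFluids' Alexa rank comes from the post; the social media counts are made up.
print(round(toy_score(4589000, 4, 2000, 150, 0), 2))
```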
Here is how the sites compared:
Everyone had the same PageRank of 4 except Nephron Power. Alexa traffic was close, but eAJKD had the highest traffic, with RFN in second. PBFluids took first based on the strength of my Twitter following.
This is very interesting to me. I have been thinking about ways to measure blog quality as part of a follow-up study from my poster presentation at Kidney Week. I tracked posts/month to measure productivity but I had no way to assess if the posts were any good. The SM-i seems like a reasonable stab at answering this question, at least at the blog/author level.
Some thoughts: purchasing Twitter followers and Facebook likes is a thing, and a real index should be resistant to that type of gaming. Additionally, YouTube is a very important educational channel. Nephrology on Demand is just killing it by adding educational content to YouTube and has over 1,500 subscribers. This should be captured in the SM-i.