Web Geekery in Recent Literature – 2/6/2008 (1337speak in Pu8M3d!)

Geekiest Article Title in PubMed to Date!

J Exp Psychol Hum Percept Perform. 2008 Feb;34(1):237-41.
R34d1ng w0rd5 w1th numb3r5.
Perea M, Duñabeitia JA, Carreiras M.
Departamento de Metodologia, Facultad de Psicologia.
Letter identities and number identities are usually thought to imply different cortical mechanisms. Specifically, the left fusiform gyrus responds more to letters than to digits (T. A. Polk et al., 2002). However, a widely circulated statement on the internet illustrates that it is possible to use numbers (leet digits) as parts of words, 4ND TH3 R35ULT1NG S3NT3NC3 C4N B3 R34D W1TH0UT GR34T 3FF0RT. Two masked priming lexical decision experiments were conducted to determine whether leet digits produce (automatic) lexical activation. Results showed that words are identified substantially faster when they are preceded by a masked leet word (M4T3R14L-MATERIAL) than when they are preceded by a control condition with other letters or digits. In addition, there was only a negligible advantage of the identity condition over the related leet condition. This leet-priming effect is not specific to numbers: A prime in which leet digits are replaced by letter-like symbols (MΔT€R!ΔL-MATERIAL) facilitates word processing to the same degree as an identity prime. Therefore, the cognitive system regularizes the shape of the leet digits and letter-like symbols embedded in words with very little cost. (PsycINFO Database Record (c) 2008 APA, all rights reserved).
PMID: 18248151

Actually interested in this subculture? Check out Wikipedia’s entry on Leetspeak.
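The primes in the study swap certain letters for visually similar digits (MATERIAL becomes M4T3R14L, PubMed becomes Pu8M3d). Here's a minimal sketch of that substitution in Python; the particular letter-to-digit mapping is my own illustrative assumption, not the exact set used in the experiments, and it uppercases its input for simplicity:

```python
# Illustrative leet-digit mapping (an assumption, not the paper's exact set).
LEET_MAP = {"A": "4", "B": "8", "E": "3", "I": "1", "O": "0", "S": "5"}

def to_leet(word: str) -> str:
    """Replace each letter with its leet digit where one exists."""
    return "".join(LEET_MAP.get(ch, ch) for ch in word.upper())

print(to_leet("material"))  # M4T3R14L
print(to_leet("PubMed"))    # PU8M3D
```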


Suggested alternate title: “Comparison tools are okay if they’re what’s called for. Meh.”

J Med Internet Res. 2008 Jan 22;10(1):e3.
What do evaluation instruments tell us about the quality of complementary medicine information on the internet?
Breckons M, Jones R, Morris J, Richardson J.
School of Nursing and Community Studies, University of Plymouth, Faculty of Health and Social Work, Plymouth, United Kingdom.
BACKGROUND: Developers of health information websites aimed at consumers need methods to assess whether their website is of “high quality.” Due to the nature of complementary medicine, website information is diverse and may be of poor quality. Various methods have been used to assess the quality of websites, the two main approaches being (1) to compare the content against some gold standard, and (2) to rate various aspects of the site using an assessment tool. OBJECTIVE: We aimed to review available evaluation instruments to assess their performance when used by a researcher to evaluate websites containing information on complementary medicine and breast cancer. In particular, we wanted to see if instruments used the same criteria, agreed on the ranking of websites, were easy to use by a researcher, and if use of a single tool was sufficient to assess website quality. METHODS: Bibliographic databases, search engines, and citation searches were used to identify evaluation instruments. Instruments were included that enabled users with no subject knowledge to make an objective assessment of a website containing health information. The elements of each instrument were compared to nine main criteria defined by a previous study. Google was used to search for complementary medicine and breast cancer sites. The first six results and a purposive six from different origins (charities, sponsored, commercial) were chosen. Each website was assessed using each tool, and the percentage of criteria successfully met was recorded. The ranking of the websites by each tool was compared. The use of the instruments by others was estimated by citation analysis and Google searching. RESULTS: A total of 39 instruments were identified, 12 of which met the inclusion criteria; the instruments contained between 4 and 43 questions. When applied to 12 websites, there was agreement of the rank order of the sites with 10 of the instruments. Instruments varied in the range of criteria they assessed and in their ease of use. CONCLUSIONS: Comparing the content of websites against a gold standard is time consuming and only feasible for very specific advice. Evaluation instruments offer gateway providers a method to assess websites. The checklist approach has face validity when results are compared to the actual content of “good” and “bad” websites. Although instruments differed in the range of items assessed, there was fair agreement between most available instruments. Some were easier to use than others, but these were not necessarily the instruments most widely used to date. Combining some of the better features of instruments to provide fewer, easy-to-use methods would be beneficial to gateway providers.
Publication Types:
Research Support, Non-U.S. Gov’t
PMID: 18244894
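The scoring method the authors describe — assess each site against a checklist, record the percentage of criteria met, then compare rank orders — can be sketched in a few lines. The site names and checklist outcomes below are made up purely for illustration:

```python
# Sketch of the checklist-scoring approach: each website gets a list of
# pass/fail outcomes against the instrument's criteria, and sites are
# ranked by the percentage of criteria met. All data here is hypothetical.
def percent_met(results: list) -> float:
    """Percentage of checklist criteria a site satisfied."""
    return 100.0 * sum(results) / len(results)

sites = {
    "site_a": [True, True, False, True],    # hypothetical outcomes
    "site_b": [True, False, False, False],
    "site_c": [True, True, True, True],
}

ranked = sorted(sites, key=lambda s: percent_met(sites[s]), reverse=True)
print(ranked)  # best-scoring site first
```

Comparing `ranked` lists produced by different instruments is essentially the rank-order agreement check reported in the RESULTS section.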


Article about Isabel (video demo available here).

J Gen Intern Med. 2008 Jan;23 Suppl 1:37-40.
Comment in:
J Gen Intern Med. 2008 Jan;23 Suppl 1:85-7.
Performance of a web-based clinical diagnosis support system for internists.
Graber ML, Mathew A.
Medical Service-111, VA Medical Center, Northport, NY 11768, USA. mark.graber@va.gov
BACKGROUND: Clinical decision support systems can improve medical diagnosis and reduce diagnostic errors. Older systems, however, were cumbersome to use and had limited success in identifying the correct diagnosis in complicated cases. OBJECTIVE: To measure the sensitivity and speed of “Isabel” (Isabel Healthcare Inc., USA), a new web-based clinical decision support system designed to suggest the correct diagnosis in complex medical cases involving adults. METHODS: We tested 50 consecutive Internal Medicine case records published in the New England Journal of Medicine. We first either manually entered 3 to 6 key clinical findings from the case (recommended approach) or pasted in the entire case history. The investigator entering key words was aware of the correct diagnosis. We then determined how often the correct diagnosis was suggested in the list of 30 differential diagnoses generated by the clinical decision support system. We also evaluated the speed of data entry and results recovery. RESULTS: The clinical decision support system suggested the correct diagnosis in 48 of 50 cases (96%) with key findings entry, and in 37 of the 50 cases (74%) if the entire case history was pasted in. Pasting took seconds, manual entry less than a minute, and results were provided within 2-3 seconds with either approach. CONCLUSIONS: The Isabel clinical decision support system quickly suggested the correct diagnosis in almost all of these complex cases, particularly with key finding entry. The system performed well in this experimental setting and merits evaluation in more natural settings and clinical practice.
Publication Types:
Research Support, Non-U.S. Gov’t
PMID: 18095042

Free full text: [PDF] [HTML]
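The headline percentages are simple hit rates over the 50 test cases; a quick back-of-envelope check, using only the numbers reported in the abstract:

```python
# Reproducing the reported hit rates: the correct diagnosis appeared in
# Isabel's 30-item differential list for 48/50 cases with key-findings
# entry and 37/50 cases with whole-history pasting.
def hit_rate(hits: int, cases: int) -> float:
    return 100.0 * hits / cases

key_findings = hit_rate(48, 50)   # 96.0
full_history = hit_rate(37, 50)   # 74.0
print(key_findings, full_history)
```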
