Mar 18

Patient Handouts at the Point of Care

My primary care physician is a good guy. His practice implemented an EMR a few years ago; each time I see him, I ask him how that’s going, and he lets me see how it looks on the tablet PC he carries into the exam room.

My last visit was for an annual checkup a few weeks ago, and we were talking about point-of-care tools and integration with his EMR. It turns out that their EMR has no useful functionality to help him find or produce patient education handouts he can quickly send to a printer.

I told him it would not be difficult to make a tool that would enable him to find authoritative handouts quickly and easily from the paid resources his practice has available, and he expressed interest in that idea.

He hasn’t followed up, but I found the idea interesting, so I started thinking about what sort of tool could be built for this purpose that could be integrated into any EMR using only patient handouts that are available at no cost on the Web.

With that in mind, I came up with a Google Custom Search Engine for use by providers at our hospital, but I see no reason why it couldn’t be used by any institution or practice.

The idea behind this is that every search result is not only authoritative, but also within a click of a “print” button.

There are built-in refinements for large print, pediatrics, Spanish language, seniors, and low literacy.
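For the technically inclined: if you wanted to wire a search engine like this into an EMR, the programmatic side is simple. Here’s a minimal Python sketch, assuming Google’s Custom Search JSON API; the API key, engine ID, and refinement label below are placeholders, not values from my actual engine:

    import requests  # third-party: pip install requests

    API_KEY = "YOUR_API_KEY"   # placeholder
    ENGINE_ID = "YOUR_CSE_ID"  # placeholder (the engine's "cx" value)

    def find_handouts(condition, refinement=None):
        """Query the custom search engine for handouts on a condition."""
        query = condition
        if refinement:
            # Refinement labels defined in a CSE can generally be applied
            # with the "more:" query operator.
            query += " more:" + refinement
        resp = requests.get(
            "https://www.googleapis.com/customsearch/v1",
            params={"key": API_KEY, "cx": ENGINE_ID, "q": query},
            timeout=10,
        )
        resp.raise_for_status()
        return [(i["title"], i["link"]) for i in resp.json().get("items", [])]

    for title, link in find_handouts("type 2 diabetes", refinement="spanish"):
        print(title, "->", link)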

Please give it a try here.

Internists and medical libraryfolk: I’d be grateful for your feedback!

Sep 10

‘Qualities’ not ‘Quality’ – Text Analysis Methods to Classify Consumer Health Websites

Guocai Chen, Jim Warren, Joanne Evans. ‘Qualities’ not ‘Quality’ – Text Analysis Methods to Classify Consumer Health Websites. electronic Journal of Health Informatics, 2009; 4(1): e5.

Abstract

There is an increasing need to help health consumers to achieve timely, differentiated access to quality online healthcare resources. This paper describes and evaluates methods for automated classification of consumer health Web content with respect to qualitative attributes relevant to the preferences of individual health consumers. This is illustrated in the context of identifying breast cancer consumer web pages that are ‘supportive’ versus ‘medical’ perspective, as compared to an existing manual classification employed by a breast cancer portal with personalised search preference options. Classification is performed based on analysis of word co-occurrences and an enhanced decision tree classifier (a decision forest). Current classification test results for ‘medical’ versus ‘supportive’ type resources are 90% accurate (95% confidence interval, 86-94%) using this decision forest classifier. These early results are indicating that language use patterns can be used to automate such classification with acceptable accuracy; however, a wider range of websites and metadata attributes needs to be assessed and compared to end-user feedback. Future application may be either in a tool to facilitate metadata coders in populating the databases of domain-specific portals such as BCKOnline, or in providing tagging or sorting on content type on live search results from health consumers.

Full Text: PDF (Free, registration required)
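The abstract doesn’t spell out the mechanics, but if you’re wondering what a “decision forest over word occurrences” looks like in practice, here’s a toy Python sketch using scikit-learn. This is emphatically not the authors’ code, and the miniature training set is invented purely for illustration:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.ensemble import RandomForestClassifier

    # Invented snippets standing in for crawled breast cancer pages.
    pages = [
        "adjuvant chemotherapy regimens and tumor receptor status",
        "survival rates, staging, and clinical trial eligibility",
        "sharing your feelings with family and finding a support group",
        "coping with hair loss and talking with other survivors",
    ]
    labels = ["medical", "medical", "supportive", "supportive"]

    vectorizer = CountVectorizer()  # word-occurrence features
    X = vectorizer.fit_transform(pages)

    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(X, labels)

    new_page = ["questions to ask your oncologist about treatment options"]
    print(forest.predict(vectorizer.transform(new_page)))

The real work, of course, is in assembling a large labeled corpus like BCKOnline’s and in choosing features; the classifier itself is the easy part.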

Sep 03

Physician Rating Sites: Pew-pew-pew!

Bleah. Yet another article about Web sites for rating doctors.

Is anyone else really tired of seeing these articles and pretending these sites matter? They might one day, but they don’t now.

Anyway, the Pew Internet and American Life Project (please tell me I’m not the only one who quietly thinks “pew-pew-pew!” to himself every time Pew is mentioned) says:

“Nearly half (47%) of internet users, or 35% of adults, have turned to the internet for information about doctors or other health professionals.”

Nothing surprising there.

“These health information seekers, however, are not likely to post their own reviews of doctors: just 7% of those who looked for information about doctors online (and 4% of all internet users) report posting a review of a doctor online.”

Well, nothing surprising there, either. The vast majority of Wikipedia’s users (or Digg’s) are there to read, not to contribute. Isn’t this the overwhelming trend in most “social media”? (And wouldn’t noting this context be important? What does this item from Pew mean without such context?)

I’ll state again that I think every physician rating site I’ve seen is useless. When patrons (or friends) ask me how to find a good specialist, I recommend avoiding these sites. The advice I gave one family member was to get in touch with local, regional, and national patient support groups for the diagnosed (or suspected) condition necessitating a visit to a specialist. If you want the opinion of informed patients, that’s where you’ll find it.

Just for good measure:

I like lolcats. Sue me.

Jul 30

Family Practice POC Web Geekery

University of Wisconsin Department of Family Medicine physician Derek Hubbard, MD, instructs family doctors on how to find clinical information [on the Web] at the point of care.

There are definitely some good tips for clinicians here, but a couple of them make me a little uneasy (like using info from About.com as a patient handout).

Dr. Hubbard might also be interested in using the Consumer Health and Patient Education Search Engine.

[Hat tip: Ratcatcher]

Apr 27

Watching Swine Flu on the Web

Holy cow! Holy pig!

Watching misinformation spread is sort of entertaining. Check out all the people who talk about not eating pork on Twitter. (The flu is not spread by eating pork.)

Hah! As I was writing this post, the latest xkcd appeared!

The CDC’s Emergency Preparedness and Response Twitter feed seems to be a frequently-updated source of sanity:

http://twitter.com/cdcemergency

RSS Feed for CDC’s Swine Flu site
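If you’d rather watch that feed from a script than from a browser, polling it programmatically is easy. A quick Python sketch using the third-party feedparser library (the feed URL is a placeholder; substitute whichever CDC feed you actually follow):

    import feedparser  # third-party: pip install feedparser

    FEED_URL = "http://www.cdc.gov/swineflu/rss.xml"  # placeholder URL

    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries[:10]:
        # Entries carry at least a title and a link; "published" when provided.
        print(entry.get("published", "n.d."), "-", entry.title)
        print("   ", entry.link)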

Maps
Google Map 1 (H1N1 Swine Flu)
Google Map 2 (“Swine Flu 2009”)
Google Map 3 (“HPAI H5N1 30-Day Outbreak Map”)

HealthMap (previously mentioned here) might be the most complete map visualization. HealthMap’s Twitter feed is also interesting, but gives a more panicked impression than that of the CDC (see above).

Apr 25

*Really* Stupid Social Health Site

The idea behind rateadrug.com is for users to rate drugs.


Our goal is to provide unique user-generated data on side effects and subtle side effects of medications. We want to know how these prescription drugs make you feel.

I’ve seen stupid applications of social media in healthcare, but this may take the cake as the dumbest I’ve seen in a good while.

Feb 08

HAVIDOL (avafynetyme HCl)

Dated 2007 but new to me:

Havidol is clearly an amazing new drug. Thank goodness there’s such a wonderfully detailed site to tell us all about Havidol and how it can treat Dysphoric Social Attention Consumption Deficit Anxiety Disorder (DSACDAD).

Click to visit the site

Great parody of direct-to-consumer advertising.

Feb 05

HHS/FDA/CDC Social Media Tools for Consumers and Partners

New to me- and a good idea to put all of this on one page.

http://www.cdc.gov/socialmedia/

I didn’t know the CDC was on MySpace or that the FDA had a recall Twitter feed.

I decided I should definitely follow the CDC’s Twitter feed for Health Professionals, which is for “…Health Professionals interested in staying up-to-date with CDC’s interactive media activities…”

They’ve also got a widget to help consumers search for products impacted by the Peanut-Containing Product Recall (embedded below).

FDA Salmonella Typhimurium Outbreak 2009. Flash Player 9 is required.

Includes:

  • Blogs
  • eMail Subscriptions
  • Health-e-Cards
  • Mobile Information
  • Online Video
  • Phone/Email
  • Podcasts
  • RSS Feeds
  • Social Networks
  • Badges for Social Networks
  • Twitter
  • Virtual Worlds
  • Web Sites
  • Widgets

Go check it out.

Hat tip: Maura Sostack

Feb 03

More on Evaluating Health Journalism

Francesca Frati (who rules) pointed out last week a site produced by the Royal College of Physicians of Edinburgh and the Royal College of Physicians and Surgeons of Glasgow: http://behindthemedicalheadlines.com/.

Craig Stoltz (previously mentioned) dropped me an email to point out a post I’d missed from The Health Care Blog by Alicia White of Bazian (the company which evaluates stories for the NHS’s Behind the Headlines service).

Says Ms. White:

…we’ve developed the following questions to help you figure out which articles you’re going to believe, and which you’re not.

Questions include:

  • Does the article support its claims with scientific research?
  • Is the article based on a conference abstract?
  • Was the research in humans?
  • How many people did the research study include?
  • Did the study have a control group?
  • Who paid for and conducted the study?
  • Did the study actually assess what’s in the headline?
  • How can I find out more?

Good stuff. Go read it.

Thanks again for the pointers, Francesca and Craig!

Jan 26

Sites that Critique Health Journalism

(Example of how backed-up I am: WordPress says I started drafting this post on 9/18/08)


I was skeptical when I first heard about Health News Review…but learning that Craig Stoltz was involved with the project made me give it a close look. (I met Craig, a pleasantly skeptical guy, at an AMA conference last year and liked him immediately. He spent six years as the editor of the Washington Post health section and was the editorial director for Revolution Health. Craig also writes a great blog called Web 2.Oh…really? in which he “cast[s] a weary eye on the alarming, annoying and occasionally amazing uses of Web 2.0.”)

HealthNewsReview.org is published by Gary Schwitzer of the University of Minnesota’s health journalism program and funded by the Foundation for Informed Medical Decision Making.

Read a few reviews and you’ll likely find them reliable and wonderfully critical. Be sure to check out how they rate stories.

Health News Review focuses on U.S. news, so anglophones in other nations will want to note these:

For Canadian news, there’s Media Doctor Canada.

Australians have Media Doctor Australia.

The NHS Choices site has a section called Behind the Headlines which seems to serve a similar purpose for U.K. health news.

Your turn: Are there other sites like these that I missed?

Jan 22

Annals of Pharmacotherapy on Wikipedia

I know I’m way behind on such things, but this article from the Annals of Pharmacotherapy deserves a mention, even one this belated:

Scope, completeness, and accuracy of drug information in Wikipedia.
Ann Pharmacother. 2008 Dec;42(12):1814-21. Epub 2008 Nov 18.
[PubMed] | [html] | [PDF]

The article compares drug information in Wikipedia to drug information in the Medscape drug reference.

“This study suggests that Wikipedia may be a useful point of engagement for consumers looking for drug information, but that it should be supplementary to, rather than the sole source of, drug information. This is due, in part, to our findings that Wikipedia has a more narrow scope, is less complete, and has more errors of omission versus the comparator database.”

And I loved this:

“…health professionals should not use user-edited sites as authoritative sources in their clinical practice, nor should they recommend them to patients without knowing the limitations and providing sufficient additional information and counsel. If these sites are recommended, it should be in the form of a permanent link pointing to the specific recommended version of an entry. Finally, the issues raised in Web 2.0 are not novel, nor are the approaches; consumer education, watchful editors, alert health professionals, and ethical online behavior remain, as ever, the foundation for the safety of Internet health information.”

Jan 20

Apomediation, Online Health Info and Baloney

A recent article in the Journal of Rheumatology:

“Trying to Measure the Quality of Health Information on the Internet: Is It Time to Move On?” [html] | [PDF]

Short answer:

Hell, no.

Longer answer:

Says the article:

“The natural assumption is to believe that there exists a link between the quality of information on the Internet and harm. However, a systematic review attempting to evaluate the number and characteristics of reported cases of harm in the peer-review literature determined that for a variety of reasons, there was little evidence to support this notion.”

It is nearly impossible to quantify why people make bad decisions. For instance, say someone makes foolish financial decisions and loses everything they own: can we determine whether those bad decisions were based on information they found online?

On the other hand, ask anyone you know who works in an Emergency Room if they’ve seen people who have done harm to themselves with a self-diagnosis or self-treatment based on something they read online. Every one of them will confirm they have. This isn’t necessarily the fault of the information found online, but knowing as we do that people increasingly make decisions about their health that are informed by information they find online, we don’t need evidence to assume, a priori and with confidence, that bad information can lead to harm. It is common sense for everyone in the health community to promote/produce good online information resources and discourage the existence/use of bad online information resources. Most importantly, we must help both health professionals and patients gain the information and health literacy skills to tell the good from the bad.

Authors Deshpande and Jadad argue that the evaluation tools for Web sites suffer…

“…from several limitations, which, in addition to those mentioned by the authors, include uncertain levels of usability, reliability, and validity.”

I won’t argue with that. I don’t think any single evaluation tool can let someone without information literacy skills determine the quality of information found online. I don’t think that even a vast arsenal of such evaluation tools will do the trick.

The authors have questions I’ll answer (I don’t care that the questions were intended to be rhetorical- they need to be answered):

“Will we ever develop an ideal tool that allows individuals to assess the quality of health information?”

Nope. We’ll also never invent a diagnostic tool that can replace a good physician’s experience and judgement- but that doesn’t mean we shouldn’t create tools that help new doctors build these skills or help more experienced doctors flesh out their differential.

“What are the determinants of this quality?”

That’s a pretty broad question, but the MLA has a good basic guide on where to start.

“Is it possible to assess or measure quality?”

Medical librarians assess the quality of information every single day. So…yeah, it is.

“Even if possible, is the formal assessment of quality even necessary?”

Necessity depends on who the user is and what his/her particular information needs are.

“Does it even matter?”

Kind, encouraging teachers throughout the world often say that there’s no such thing as a stupid question. These extraordinarily compassionate educators are lying. Yes, it matters.

I’m a Web enthusiast who sees a lot of value in lots of online collaborative efforts. There are absolutely places and uses for the wisdom of the crowd- but to hear some people talk about apomediation, you’d think the wisdom of the crowd could replace the judgement of experts.

“For example, Internet users could provide ratings or recommendations based on their own experiences to judge the quality and relevance of health information.”

Huh. So if we’re just gonna go with what a crowd of self-selecting amateurs agrees upon, I guess we don’t need double-blind trials any more, either? We’ll just ask a crowd their opinion. Who needs empiricism? Science, schmience. Why should we take this article more seriously because the authors have two MDs and a doctorate between them? I’m not Andrew Keen (Andrew Keen is a total jackass), but neither am I an irrational technotopian.

“Analogous to the peer-review process, aggregation of ratings from many individuals (a form of crowdsourcing) allows “good” information to be highlighted prominently, while “not so good” information gets pushed to the bottom.”

That’s a terrible analogy. In the peer-review process, both the author(s) and the reviewer(s) are credentialed experts in their field. I would not ask a room full of neurologists for advice on my house’s plumbing and I would not ask a room full of plumbers about the treatment of peripheral neuropathy. For the “Digg” model to work at all for health information, the crowd should be very large and very knowledgeable. On the other hand, even widely agreed-upon practices have later been proven wrong by empirical testing and evidence…so shouldn’t we rely on evidence?

The problem with relying only on the wisdom of crowds is that, sometimes, the crowd shows an alarming lack of wisdom regarding health information. Huckster snake-oil salesman Kevin Trudeau’s Natural Cures ‘They’ Don’t Want You to Know About was a New York Times self-help Best Seller! (For more on why this is a great example of an unwise crowd, see this video.)

Let me illustrate with a real-life example: When our infant son developed what appeared to be a tremor, we didn’t waste time asking a crowd. We first asked a local pediatric neurologist for his opinion. When he was unable to make a diagnosis, we asked Dr. Marc Patterson, a pediatric neurologist at the Mayo Clinic. Dr. Patterson’s unusual experience and education enabled him to make a diagnosis very quickly. (It turns out that Simon had a benign shiver that has already almost gone away.) We were, of course, deeply relieved and grateful to Dr. Patterson. We were also really impressed with the pediatrics center at Mayo. Wow.

I wonder: If Drs. Deshpande and Jadad have family members with worrying illnesses, do they consult the best available clinical expert, or go to the wisdom of the crowd?

But I’m getting off-track. Back to the article.

As seems to be typical for malinformed physician technotopians, these authors point at Wikipedia to support their perspective.

“The interplay of users to collaborate and deal with information overload has already been proven successful in other areas outside the health space. For example, Wikipedia not only allows users to submit content on various topics, but also provides the capability for users to edit the content of others. Although there is the potential for misuse, Wikipedia, which relies on anonymous, unpaid volunteers, seems to be as accurate in covering scientific information as the Encyclopedia Britannica.”

Well, yeah. And I’d trust Wikipedia as a source for health information about as much as I’d trust Britannica…which is to say not very much. (Please see a number of previous posts on health information wikis.)

“Since its inception in 1990 until the present day, the health system has grappled with how to manage potential harm associated with information available on the Internet. Research in this area, for the most part, continues to assume that techniques used to evaluate paper-based information can automatically be applied to online resources, ignoring the added complexity created by the multiple media formats, players, and channels that are brought together by the Internet.”

Someone please explain to me why peer review is less effective for texts distributed online than for texts distributed on artifacts of dead trees. Text is text. Someone please explain to me why peer review would be less effective if this text were read aloud and recorded or played online as video or audio (downloadable or streamed in any format). The “added complexity” the authors mention in no way affects peer review. Perhaps that’s why they make the assertion while providing no support for it whatsoever.

I like Web technology and have made a nice niche for myself by writing and talking about how it can be useful to health information professionals. I am sick to death, however, of people attempting to make names for themselves with inane prognostication and unsupported technotopianism. When these authors write that “…as the Web continues to evolve, we will likely gain new insights as to how this happens along with a better understanding of how to handle health information from any source…”, I want to beat my head against a wall. To me, this is no different from saying we should go ahead and build lots more nuclear reactors because we have faith that technology will work out a way to dispose of nuclear waste in a safe manner before we are harmed by our inability to dispose of it properly. Such things are too important to take on faith in the future.

“The time has likely come to end our Byzantine discussions about whether and how to measure the quality of online health information. The public has moved on. It is time to join them in what promises to be an exciting voyage of human fellowship, with new discoveries and exciting ways to achieve optimal levels of health.”

Such a perspective would have us ally ourselves with the Jenny McCarthys of the world. Jenny McCarthy believes and popularizes the idea that immunizations cause autism spectrum disorders, despite the fact that there is no scientific evidence correlating immunizations and ASDs.

Laypeople, even brilliant laypeople, do not generally have the information or health literacy skills to know where to find quality information and to know what to trust. My brother is an experienced Web programmer. He has topped out every IQ test he has ever taken. He is a brilliant man and as talented an autodidact as anyone I know. Still, when he needed a medical procedure, he was able to find more information and trust its authority by conferring with me- because I spend my days finding and evaluating health information.

Reliance on science and expertise is not Byzantine. Rationalism is not Byzantine. Empiricism is not Byzantine. Politicians should not make public policy decisions based on polls and clinicians should not make decisions based on the misinformed preferences of their patients. Clinicians have a duty to educate patients and help point them towards good information because the volume of shoddy information is growing at an alarming rate.

Aug 27

Eagle Dawg on Pew’s “The Engaged E-Patient Population”

Nikki (at Eagle Dawg Blog) nails down exactly why I didn’t bother posting about the latest from Pew.

(Typical of my cynicism- the report wasn’t worth blogging about, but criticism of the report *is*.)

If you’re not reading Nikki’s blog yet, subscribe now:
[Eagle Dawg Blog - Feed].

Aug 15

Pam Dolan on Medpedia in American Medical News

Amidst all the coverage of Medpedia that seems to have been derived from a press release, here is a more informative article from Pam Dolan at American Medical News.

I’m quoted in the article:

Medical librarian and blogger David Rothman, who regularly writes at DavidRothman.net about medical wikis, expressed concerns about the regular monitoring of Medpedia’s content. “If the academic institutions … wish to avoid embarrassment, I’d recommend that they dedicate some time of their health care experts to regular review of articles,” Rothman wrote.

He estimates about 65 medical wikis exist. He’s not sure what the involvement of prominent medical institutions will mean to the project, noting that comparisons won’t be possible until the site is up and running.

As I usually do when I’m interviewed or quoted, I thought I’d post the entirety of my comments here. Pam got my views partially from this post I wrote about Medpedia and partially from an email. Pam’s questions are bolded:

My question for you is whether or not medpedia will be the largest collaboration of its kind for a medical wiki…

That depends on what you mean by “largest” and what we learn about Medpedia when it comes out of Beta. We haven’t yet seen how many contributors/editors it has or how many articles/words it contains. We won’t know for months after it begins how active a community it has. What other metrics could be used to measure “largeness”? The names of affiliated institutions? Medpedia doesn’t really say exactly what contributions those institutions are making (aside from, apparently, allowing the use of their names and logos).

…and if it will raise the bar for those wanting to develop medical wikis in the future.

I think that remains to be seen. So far, Medpedia looks to the public like a press release and a mock-up. When it is up and running, we can begin to compare it to other efforts. Until then, such comparisons aren’t possible.

Aug 04

“Dr. Web Makes Many Americans Question Trusted Health Providers”

Interesting item.

Excerpts:

Thirty-eight percent of U.S. adults (or 85.6 million people) say they have doubted a medical professional’s opinion or diagnosis because it conflicted with information they found online. However, despite the growing power of the Internet, the majority of Americans still view health providers as their most trusted source of medical information.

Previous research indicates that trust in Internet resources is not widespread. However, this study suggests credibility may be influenced by who is authoring the content. Thirteen percent of Americans say they would consult medical professional-developed information posted on blogs, online forums or other Websites first if they believed they had a health condition or disease.

This study reveals that most adult Americans instinctually trust health providers. However, increasingly, they are using online information to critically evaluate medical advice. It also suggests that trust in government and non-profits has significantly eroded. Finally, health communicators and marketers should resist overestimating the impact of patient-generated online content on medical decision-making.

Aug 01

Notes on Medpedia’s changes

I first wrote about Medpedia in January.

I noted in my post that Medpedia did not seem to specify what would qualify an applicant to become a contributor. Medpedia’s Angela Simmons addressed this in the comments:

Anyone with medical and health knowledge is encouraged to apply to become a Contributor. It is not a requirement that you have medical credentials; however, it is important that you are passionate and knowledgeable about at least one topic related to medicine, health and the body.

My concern here is that clinicians should not use an information resource built by people who are not qualified health professionals. Passion is not, in my view, a sufficient qualification.

I also asked Angela if Medpedia was intended to be a resource for professionals (like UpToDate) or a resource for healthcare consumers (like MedlinePlus). Angela replied:

Initially, Medpedia will be a resource for the general public. Over time, with 1000’s of clinicians and researchers on the site, discussing what should be on the main pages, Medpedia will also become a resource for medical professionals, health educators, and medical schools.

This did not seem promising to me. I don’t believe that a single article on a topic can appropriately serve both healthcare professionals and healthcare consumers- their needs are usually quite different.

Medpedia seems to have addressed some of these concerns since that time. Their index page now only invites “Medical Professionals” to “Apply to be a Member,” and the FAQ says:

There are multiple ways of contributing. If you are an MD or PhD in the biomedical field, you can apply to become an Editor and make changes directly to Medpedia articles (See more below). If you are anyone else, you can use the “Make a suggestion” link at the top of any page to make a suggestion for that page. An approved Editor will review and potentially add your suggestion.

It’s also interesting to note that Medpedia will be advertising-supported (neither Ganfyd nor AskDrWiki is ad-supported; AskDrWiki is a non-profit). Again from the FAQ:

To support the costs of operation in the future, non-invasive, text-based advertising will be shown on the Medpedia website through third-party ad networks such as Google’s AdSense or Healthline’s third party ad service. Next to these ads on the page will be a link “Flag inappropriate ads” so that the community can keep the ads on the site clean and useful.

Then there’s the question of how reliable the content will be. The FAQ says:

The seed content available on Medpedia at launch is up to date, accurate, and provided by reputable sources. After launch at the end of 2008, once Editors start making edits and adding new pages to the seed content, it is possible, and even likely that there will be mistakes and language that is unclear. This is the nature of a collaborative wiki.

If the site is meant to be used by healthcare professionals, I’d strongly recommend a routine (Monthly? Quarterly?) review of each article by an admin to make sure the content is accurate and up-to-date. To say “it can’t be kept reliable because it is a wiki” is, in my thinking, a cop-out. After all, Medpedia’s own FAQ says the site is meant to be “…a platform to share the most up-to-date medical knowledge.” If the academic institutions listed on the front page of Medpedia wish to avoid embarrassment, I’d recommend that they dedicate some of their healthcare experts’ time to regular review of articles.

Prediction:
(Just a guess, but) I think Medpedia’s content will end up focusing mostly on the information needs of healthcare consumers. In that sense, I think it’ll resemble MayoClinic.com.

Criticism aside, here are some things about Medpedia that I DO like:

  • Editors/contributors must be qualified health professionals
  • Editors/contributors cannot be anonymous
  • Content is freely usable under a GNU Free Documentation License

What do you think? Do you anticipate other problems I may have missed? Maybe you think I’m being too critical? Share your thoughts in the comments.

Aug 01

A Thought on the Online Rating of Doctors

When discussing sites which allow patients to rate doctors, I have frequently heard the argument that the ratings wouldn’t really be useful or meaningful.

My response to this is that it doesn’t matter at all how accurate or meaningful the ratings on these sites are- users will like them (and use them) regardless.

My wife, for instance, spent a good bit of time examining books about pregnancy, parturition, and the care of infants. Rather than making use of the abundance of experts at her disposal (including midwives, OBs, medical librarians, and pediatricians), she took very seriously how well reviewed and rated each book was on Amazon.com.

Never mind if a particular review of a particular book showed the reviewer to be ignorant and semi-literate. What mattered was that the ratings were overwhelmingly positive.

So here’s the advice I gave physicians at the 28th Annual AMA Medical Communications Conference:

Rather than fretting about potentially negative reviews on sites that allow patients to rate or review physicians (about which, after all, little can be done), the physician should place her/his efforts into building a very strong Web presence. Hire a white hat SEO consultant if you have to, but make sure that anyone Googling your name (or your practice’s name) sees YOUR site first in the search results.