What patients say about their doctors

Survey of Patients about Their Experience of Care with Their Doctors

Background, Methods, and Improvement Resources

This website provides information on a new survey conducted by Consumers' CHECKBOOK/Center for the Study of Services (CHECKBOOK/CSS), a nonprofit consumer research organization, asking patients about their experience of care with their physicians. The website gives information on the survey, how the survey data were analyzed, how to interpret survey results, and how physicians and patients working together can do better on the aspects of care the survey measures.

Under each section of this website, there is a place for readers to ask questions and give comments. We hope there will be a dialogue, with suggestions, criticisms, and new ideas that will lead to continuing improvement of the survey and eventually improvement in physicians' and patients' performance of their roles in the aspects of care measured by the survey. Survey results are made available on this website for each listed physician to review confidentially for 60 days before the results are made public.

Each doctor can now see his or her detailed Survey Results Report for Physicians by clicking here. Consumers and physicians can see What Patients Say About Their Doctors website by clicking here [not available until 60 days after the survey results are posted on this website for confidential doctor review].

Background on the Survey

How the Survey was Done

Survey Results and Reporting

How to Use the Survey Results Report for Physicians (Accessible Only to Physicians)

How to Use the Public Survey Results Report

Resources to Help Physicians Improve

Resources to Help Patients Do Their Part

How Physicians Can Use Their Survey Results to Help Meet Maintenance of Certification and Continuing Medical Education Requirements


Background on the survey

What is the survey about?

The survey asks patients about their experience of care with their doctors. By clicking here, you can see the questionnaire used in the survey. The survey asks patients about their experience in the past 12 months with regard to–

  • How well the doctor communicated, including listening, explaining things, giving easy-to-understand instructions, being aware of the patient's medical history, showing respect, and spending enough time with the patient.
  • Being able to get appointments and care when needed, including getting appointments, getting medical questions answered by phone, and office wait time.
  • Helpfulness and courtesy of office staff.



What evidence is there that the information collected on the survey is important?

It makes sense that the aspects of care the survey measures would relate to better medical results for patients, but you might want more scientific evidence. There is an increasing body of such evidence.

For example, one bibliography of articles on the importance of physician-patient communication is at the website of the Institute for Healthcare Communication (www.healthcarecomm.org). It includes many articles on the impacts of effective physician-patient communication on health, functional, and emotional status; improvement in diagnostic accuracy; adherence to treatment plans; increased trust in the physician; improved patient and clinician satisfaction; informed consent; and reduction in malpractice risk.

A 2007 article in the Journal of the American Medical Association, "The Value of Assessing and Addressing Communication Skills," summarizes the evidence:

"In a diverse set of studies since [the late 1960s], effective communication has been linked with increases in patient and physician satisfaction, better adherence to treatment plans, more appropriate medical decisions, better health outcomes, and fewer malpractice claims...."
"Education and accreditation initiatives have evolved along with the research trajectory. Indeed, the focus on communication now extends throughout the continuum of medical education and practice...."

Evidence of the recognition medical standard-setters have given to the importance of physician-patient communication includes the following:
  • Communication is one of six required competencies identified by the Accreditation Council for Graduate Medical Education (ACGME), and elements of competent communication are featured in four of the six ACGME competencies.
  • The Association of American Medical Colleges (AAMC) published recommendations for communication in the Medical School Objective Project, Paper III.
  • The National Board of Medical Examiners (NBME) is requiring objective standardized clinical examinations to assess interviewing and communication skills.
  • The Institute of Medicine, in its 2004 report, "Improving Medical Education: Enhancing the Behavioral and Social Science Content of Medical School Curricula," names communication skills as one of six curricular domains.
  • The new standards to elevate physician life-long learning assessment adopted by the American Board of Medical Specialties for Maintenance of Certification programs include assessment of communication skills as a standard for all physician diplomates with direct patient care–using a CAHPS patient survey (similar to the communication questions used in the CHECKBOOK/CSS pilot surveys) or other approved survey.

While key questions in the CHECKBOOK/CSS survey focus on physician-patient communication, the survey also includes questions about access to care–appointment scheduling and responsiveness to medical questions by phone–and about office staff helpfulness and courtesy. These dimensions of office operations are also important to patient satisfaction and health. On access issues, for example, a 2007 article in The Joint Commission Journal on Quality and Patient Safety summarized the challenge: "delays for appointments are prevalent, resulting in patient dissatisfaction, higher costs, and possible adverse clinical consequences."



How is this survey different from other patient surveys about doctors?

First, compared to other surveys and patient reports on physicians that are appearing more and more broadly on the Internet, this survey is much more rigorous, valid, and reliable. The survey was done using the Clinician/Group CAHPS (CG-CAHPS) questionnaire, developed by research teams funded by the U.S. Agency for Healthcare Research and Quality (AHRQ). The CG-CAHPS instrument was first endorsed by the National Quality Forum (NQF) in 2007 and also has been adopted by the AQA Alliance.

The experts who developed this survey focused on dimensions that are known to be important to outcomes and did thorough testing of the survey to be sure, for example, that patients understood the questions, that sufficient numbers of patients have had the experiences the questions ask about, that the questions measure different dimensions of care, and that the answers patients give are internally consistent.

Equally important, in stark contrast to what is commonly seen on the Internet, the procedures for administering this survey are designed to ensure reliability and freedom from manipulation and bias. The way the sample of patients to survey is selected ensures that the patients are real patients who have actually had visits with the doctors they are reporting on, avoids biases or manipulation in the selection of patients to survey, and eliminates any risk that patients will rate the same doctor multiple times. The number of patients surveyed per doctor is large enough to produce reliable results, so that it is possible to identify doctor-to-doctor differences that are very unlikely to result just from "the luck of the draw."

Compared to many other surveys that have been developed or used by health plans, medical groups, or other providers, this survey has the advantage that it is a nationally endorsed standard survey. That means that it can produce results that are comparable across different uses, by different sponsors. In fact, the survey results, at the individual respondent level but with physician identifiers removed, will be provided to the federally sponsored National CAHPS Benchmarking Database so that independent researchers will be able to analyze the data and develop national benchmarks and alternative analysis procedures. Other surveys used by medical groups and other providers are of varying quality. Some are excellent and some focus on particular questions that are of interest in specific practice situations. But there are great advantages in having a survey like this one that adheres to a national standard.



How can the survey help patients and doctors?

The survey results reported on the What Patients Say About Their Doctors website can help patients choose a doctor. The website can also help patients see whether their experience with a doctor they have seen is similar to the experience other patients have had with that doctor. And it can help patients talk with their doctor about ways in which they might like their doctor to interact with them differently.

Probably more important than the guidance the website can give to patients is how the information on the website–and more detailed information CHECKBOOK/CSS provides in the Survey Results Report for Physicians–can motivate and guide individual doctors and medical organizations to improve on the aspects of care the survey measures.

Given limited supply of doctors in family practice, internal medicine, pediatrics, and some other fields, and given limits on consumer choice, quality improvement rather than shifting of patients to higher rated doctors is the most promising path toward better health care results.

Elsewhere in this Guide you will find information on various resources that can help both physicians and patients do better in the aspects of care measured in the survey.



How can doctors and patients work together to improve?

To improve results on the aspects of care the survey measures requires effort by both doctors and patients. Good communication, efficient use of office time, and other aspects of the doctor-patient relationship are a two-way street.

Later in this Guide, under the headings "Resources to Help Physicians Improve" and "Resources to Help Patients Do Their Part," doctors will find organizations and materials that can help them improve, and patients will find advice, checklists, and other materials that will help them do their part in communication and other aspects of relating to a doctor and the doctor's staff. Patients might even want to look at some of the materials intended for doctors, to better understand the challenges physicians face in serving, within limited available time, many different types of patients who need many different types of interaction with their doctors.



How the Survey was Done

Who conducted the survey?

The survey was sponsored and conducted by Consumers' CHECKBOOK/Center for the Study of Services (CHECKBOOK/CSS), a nationally recognized, nonprofit consumer research organization. CHECKBOOK/CSS has extensive experience surveying consumers and patients. For example, the organization manages the federally sponsored surveys of members of all the health plans and prescription drug plans that operate under the Medicare program. The organization also publishes magazines, books, and websites (at www.checkbook.org) directly for consumers, including Consumers' CHECKBOOK magazine, which evaluates and reports on the quality and prices of a wide range of service providers (from auto repair shops to plumbers to hospitals) in seven major metropolitan areas around the U.S. (Boston, Philadelphia, Washington, DC, Chicago, Twin Cities, Seattle-Tacoma, and San Francisco-Oakland-San Jose). CHECKBOOK magazine first published patient survey ratings of doctors in 1980, based on surveys of CHECKBOOK and Consumer Reports magazine subscribers. The organization has long sought to improve on the rigor and reliability of such surveys, and the survey described here was designed to achieve that.



Where has the survey been carried out?

The survey so far has been carried out in four pilot metropolitan areas–

  • Denver-Aurora-Boulder, Colorado–Adams, Arapahoe, Boulder, Broomfield, Clear Creek, Denver, Douglas, Elbert, Gilpin, Jefferson, and Park Counties.
  • Kansas City, Kansas and Missouri–primarily in Franklin, Johnson, Leavenworth, Linn, Miami, and Wyandotte Counties in Kansas; Andrew, Bates, Buchanan, Caldwell, Cass, Clay, Clinton, Jackson, Johnson, Lafayette, Platte, and Ray Counties in Missouri; with a few physicians outside of these areas included.
  • Memphis, Tennessee–Fayette, Shelby, and Tipton Counties in Tennessee; DeSoto, Marshall, Tate, and Tunica Counties in Mississippi; and Crittenden County in Arkansas.
  • New York County (Manhattan).

CHECKBOOK/CSS and other organizations that have helped make this survey possible have the goal of having patient survey results available in the future on most physicians in the U.S.



What community groups collaborated in the survey?

The survey has been done in collaboration with a leading community-based health improvement organization in each of the four pilot metropolitan areas.

  • In the Denver area, it is the Colorado Business Group on Health (CBGH), the local organization that publishes the Colorado Health Matters Quality Report and other information as part of a mission to improve health care quality and value in Colorado. The survey was done in cooperation with the Colorado Medical Society (CMS) and the Colorado Academy of Family Physicians (CAFP). CMS and CAFP are working with CHECKBOOK/CSS to ensure that the survey is as helpful as possible to physicians and to patients.
  • In the Kansas City area, the collaborating community coalition is the Kansas City Quality Improvement Consortium (KCQIC), a local coalition of physicians, health plans, and other stakeholders with a mission of improving health care quality in the Kansas City area. KCQIC's role in the survey is in keeping with its responsibilities as a grantee of the Robert Wood Johnson Foundation's Aligning Forces for Quality Program, including working to improve the quality of care, measuring and publicly reporting on the quality of care, and engaging consumers to make informed choices about their own health and health care.
  • In the Memphis area, the collaborating community coalition is Healthy Memphis Common Table (HMCT), a local coalition of physicians, consumers, health plans, and other stakeholders with a mission of improving health care quality in the Memphis area. HMCT sponsors Health Care Quality Matters (HCQM), providing public reports on the quality of health care in Tennessee. The survey was done in cooperation with the Memphis Medical Society, working to ensure that the survey is as helpful as possible to physicians and to patients. HMCT is also a grantee of the Robert Wood Johnson Foundation's Aligning Forces for Quality Program.
  • In the New York area, the collaborating community coalition is the New York Business Group on Health, a nonprofit coalition of employers, unions, health plans, providers, and other healthcare organizations operating in New York, New Jersey, and Connecticut, which has since 1982 been leveraging the power of healthcare purchasers to drive healthcare reform and aid employers in their quest for value in the healthcare system.



How were the patient responses collected?

The survey was done by mail. A cover letter and questionnaire were mailed to each patient included in the survey sample for each physician, and a second-wave letter and questionnaire were mailed about six weeks later to anyone who did not respond to the initial mailing. Surveyed patients could return the survey by mail in a pre-paid envelope, or they could use a personal ID code included in the cover letter to go to a CHECKBOOK/CSS website and complete the survey online.

The first wave of surveys was mailed on November 10, 2008, in Denver and Kansas City; on January 7, 2009, in Memphis; and on May 4, 2010, in New York.



What types of doctors were included?

The survey, in this pilot phase, focused on adult primary care physicians (PCPs) (family practitioners, internists, geriatricians, and general practitioners) in the Denver and Kansas City areas; on PCPs, cardiologists, and obstetricians/gynecologists in the Memphis area; and on PCPs, cardiologists, gastroenterologists, obstetricians/gynecologists, and orthopedists in the New York area.



How were the patients to survey selected?

The doctors to be included and patients to be surveyed were selected from lists provided by health plans, which included, depending on the metro area, Aetna, UnitedHealthcare, CIGNA HealthCare, BlueCross and BlueShield of Kansas City, and BlueCross BlueShield of Tennessee. The metro areas where each plan participated are as follows–

  • Denver: Aetna, UnitedHealthcare
  • Kansas City: Aetna, UnitedHealthcare, and BlueCross and BlueShield of Kansas City
  • Memphis: Aetna, UnitedHealthcare, CIGNA HealthCare, and BlueCross BlueShield of Tennessee
  • New York: Aetna, UnitedHealthcare (including Oxford), and CIGNA HealthCare.

The plans supplied lists of all of their physicians contracting with the plan with at least one practice location in the counties included in the survey.

For a one-year period, the plans listed all patient visits to these doctors involving certain evaluation and management CPT codes. (CPT codes describe the service/procedure rendered; to see a list of CPT codes, click here.) For each visit, the visit record included the date, the CPT code, the plan's identification code for the physician, and the plan's identification code for the plan member/patient.

The plans also provided CHECKBOOK/CSS lists of all members who had had a visit included on the visit list.

The patients on the lists the plans provided were predominantly members of commercial health plans (although a small number of Medicaid or Medicare members were included in some areas).

CHECKBOOK/CSS matched the physician lists across plans, using various physician identifiers supplied by the plans, including, where available, NPI number, UPIN number, date of birth, name, and various other identifiers.

Then, using the list of visits from each plan, CHECKBOOK/CSS identified patients whom plan records showed had had visits to these physicians in a targeted time period (the period from November 1, 2007, through October 15, 2008, for the Denver, Kansas City, and Memphis areas, and the period from April 15, 2009, through April 14, 2010, for the New York area). From each doctor's list of patients with visits, CHECKBOOK/CSS selected a random sample of patients to survey. These patients were selected so that not more than one person in a household would be surveyed about the same doctor and not more than two persons in the same household would be surveyed at all.
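The household constraints just described can be sketched in code. This is an illustrative sketch, not CHECKBOOK/CSS's actual selection program; the function and data-structure names are assumptions.

```python
import random

def select_sample(visits, target_n, seed=0):
    """Randomly sample up to target_n patients per doctor, enforcing:
    (1) at most one person per household surveyed about the same doctor,
    (2) at most two persons per household surveyed at all.

    visits: dict mapping doctor_id -> list of (patient_id, household_id).
    """
    rng = random.Random(seed)
    household_members = {}                    # household -> persons surveyed so far
    samples = {}
    for doctor, patients in visits.items():
        pool = list(dict.fromkeys(patients))  # drop repeat visits by the same patient
        rng.shuffle(pool)                     # randomize candidate order
        chosen, households_used = [], set()
        for patient_id, household in pool:
            members = household_members.setdefault(household, set())
            if household in households_used:                 # rule (1)
                continue
            if patient_id not in members and len(members) >= 2:
                continue                                     # rule (2)
            chosen.append(patient_id)
            households_used.add(household)
            members.add(patient_id)
            if len(chosen) >= target_n:
                break
        samples[doctor] = chosen
    return samples
```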



How many patients were surveyed for each physician and how many physicians were included in the survey?

The AHRQ specifications recommend, but don't require, that 113 patients be surveyed for each physician. In fact, CHECKBOOK/CSS selected at least 113 patients per physician, but selected and surveyed a larger number where possible, up to a maximum of 150 patients per physician in the Denver, Kansas City, and Memphis areas, and up to 170 patients per physician in the New York area. The average number of patients surveyed per physician ranged from 142 to 157, depending on the metro area.

CHECKBOOK/CSS included in the survey all physicians for whom the combined health plan lists of patients who had had visits and who could be surveyed reached 113 or more. That number was reached for 479 physicians in the Denver area, 713 physicians in the Kansas City area, 437 physicians in the Memphis area, and 936 physicians in the New York area. A total of 68,067 surveys were mailed out in the Denver area, 103,537 in the Kansas City area, 63,717 in the Memphis area, and 147,033 in the New York area.

The sample of patients surveyed included primarily commercial health plan members. The patient-provided demographic characteristics of the patients who responded to the survey about a doctor are shown in each doctor's Survey Results Report, along with the characteristics of the entire pool of survey respondents for the metro area where the doctor is located.

There are many physicians in each metro area, even in the specialties the survey focused on, for whom CHECKBOOK/CSS, working with the health plans' lists of patients, could not identify a sample of at least 113 patients to survey. Those physicians were not included in the survey results reported on this website. The fact that a physician is not listed says nothing, either good or bad, about that physician.

So that there will eventually be comparable patient-feedback survey results and scores for all or virtually all actively practicing physicians, CHECKBOOK/CSS plans to offer a system for physicians who have not been included in the survey to participate in a follow-on survey that will produce comparable results and scores. Physicians interested in being included in such a survey are invited to send an e-mail to [email protected].



Survey Results and Reporting

How many responses were received?

The total number of survey responses received was 24,643 in the Denver area (an average of 51 returned per doctor); 43,863 in the Kansas City area (an average of 62); 23,612 in the Memphis area (an average of 54); and 48,014 in the New York area (an average of 51). This represents a gross response rate of about 36.2 percent in the Denver area, 42.5 percent in the Kansas City area, 37.1 percent in the Memphis area, and 32.7 percent in the New York area. CHECKBOOK/CSS did not consider some of these responses complete and usable, because the respondent failed to confirm on Question 1 of the survey that he or she had had a visit with the doctor named on the survey in the past 12 months. The number of surveys CHECKBOOK/CSS considered complete, and that were used in the analysis that is the basis for each doctor's Survey Results Report, averaged 48.6 in the Denver area, 58.5 in the Kansas City area, 51.6 in the Memphis area, and 49.5 in the New York area.



What survey results are being reported?

The "What Patients Say About Their Doctors" public website reports results on each quality-related question from the survey.

For each question on the survey, patients had various response options. For most questions, the options were never, almost never, sometimes, usually, almost always, and always. For the overall rating question, the response options were any number from 0 to 10, and CHECKBOOK/CSS re-coded these options to the following six possibilities: 0 to 4, 5, 6 to 7, 8, 9, and 10. For the recommend-to-family-and-friends question, there were four response options: definitely yes, somewhat yes, somewhat no, definitely no.

For all of the questions except the recommend-to-family-and-friends question, there are six points on the scale (after the re-coding of the overall rating question), and these points were given a value of 0, 20, 40, 60, 80, and 100. For the recommend-to-family-and-friends question, the scale has four points, given the values 0, 33.3, 66.7, and 100. So, for each answered question, there is a value for each respondent. The response values for each question were averaged across all the patients who reported on each doctor, to get a mean value for each question for each doctor. That value is the doctor's "unadjusted score." The unadjusted score is not shown on the "What Patients Say About Their Doctors" website (although it is shown, along with the percent of respondents selecting each response option, on the Survey Results Report for Physicians at www.checkbook.org/DoctorReview, which CHECKBOOK/CSS put together for doctors confidentially to review details of their survey results).
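The valuation and re-coding rules just described can be sketched as follows. This is an illustrative sketch; the names are assumptions and the actual analysis code may differ.

```python
# Six-point response scale valued 0, 20, 40, 60, 80, 100.
SIX_POINT = dict(zip(
    ["never", "almost never", "sometimes", "usually", "almost always", "always"],
    [0, 20, 40, 60, 80, 100]))

# Four-point recommend question valued 0, 33.3, 66.7, 100.
RECOMMEND = {"definitely no": 0.0, "somewhat no": 33.3,
             "somewhat yes": 66.7, "definitely yes": 100.0}

def recode_overall(rating):
    """Re-code a 0-10 overall rating into the six bins described above
    (0-4, 5, 6-7, 8, 9, 10), then assign the six-point values."""
    for bin_range, value in [(range(0, 5), 0), (range(5, 6), 20),
                             (range(6, 8), 40), (range(8, 9), 60),
                             (range(9, 10), 80), (range(10, 11), 100)]:
        if rating in bin_range:
            return value
    raise ValueError("rating must be an integer from 0 to 10")

def unadjusted_score(values):
    """A doctor's unadjusted score on one question: the mean response
    value across all of that doctor's respondents."""
    return sum(values) / len(values)
```

For example, respondents answering "always", "usually", and "sometimes" on a question would give a doctor an unadjusted score of (100 + 60 + 40) / 3.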

The "Community Average" score reported for each question is the average score for all the members who were included in the survey in the same metropolitan area, based on surveying the types of members included on the lists provided by the health plans.



How were scores adjusted for respondent characteristics?


We know that respondents with certain characteristics tend to give more favorable reports than respondents with other characteristics. For example, older respondents tend to give more favorable reports than younger respondents. So that individual physicians will not have their scores inflated, or deflated, simply because of the characteristics of the patients they serve, we calculated an "adjusted score" for each physician for each question, doing a case-mix adjustment that takes into account self-reported health status, age, and education level. These are the three characteristics recommended for case-mix adjustment by the CAHPS Consortium. (Respondent gender was also considered in the case-mix adjustment in New York because the U.S. Agency for Healthcare Research and Quality has approved this characteristic for use in adjustment.)

The case-mix adjustment is done as follows for each question for each doctor. A regression analysis is used to predict, for each respondent, the score someone with that respondent's characteristics would be expected, on average, to give a doctor. The mean of these predicted scores for each of a doctor's respondents is calculated to get a predicted score for that doctor for that question. Then the mean of the actual scores given by each respondent for that doctor for that question is calculated. Then the difference–the actual minus the predicted score–for that doctor for that question is calculated. Finally, that difference is added to the mean score for all survey respondents for that question to get the adjusted score for that doctor for that question.
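The steps above reduce to a single formula for each doctor on each question: adjusted score = community mean + (doctor's mean actual score − doctor's mean predicted score). A minimal sketch follows; the names are illustrative, and the regression that produces the predicted scores is omitted.

```python
def adjusted_score(actual, predicted, community_mean):
    """Case-mix-adjusted score for one doctor on one question.

    actual: respondent-level scores this doctor actually received
    predicted: regression-predicted scores for the same respondents
    community_mean: mean score across all survey respondents
    """
    mean_actual = sum(actual) / len(actual)
    mean_predicted = sum(predicted) / len(predicted)
    return community_mean + (mean_actual - mean_predicted)
```

A doctor whose patients rate exactly as the regression predicts receives the community mean as the adjusted score; a doctor whose patients rate above prediction scores above it by the same margin.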

The scores reported on the "What Patients Say About Their Doctors" public website are adjusted scores.




How does the reporting indicate the confidence one can have that differences among doctors are not explained by chance alone?

CHECKBOOK/CSS did tests of statistical significance for each score reported. For these tests, we evaluated for each question the likelihood that the difference between a doctor's mean actual score and mean predicted score might be the result of good or bad luck in the particular patients who were selected and responded to the survey about that doctor–in other words, whether one could be confident that the doctor's actual score would be higher (or lower) than the doctor's predicted score if all possible patients for that doctor could have been surveyed. We used the statistical technique called a t-test to make this assessment. For each doctor the What Patients Say About Their Doctors website includes the word "Better" for each question for which there is less than a five percent chance that the doctor's actual score was better than the doctor's predicted score simply as a result of the "luck of the draw" in which patients were surveyed. The word "Lower" is given if there is less than a five percent chance that the doctor's actual score was lower than the doctor's predicted score simply as a result of the "luck of the draw" in which patients were surveyed. In all other cases, the website includes the word "Average."

The t-test takes into account for each question how many respondents rated a doctor and how consistent these respondents' reports on the doctor were. For this reason, it is possible that one doctor might get a "Better" designation on a question while a second doctor with a higher score does not. This might be true if the second doctor was rated by fewer survey respondents than the first doctor or if there was relatively little agreement among the respondents who rated the second doctor–with some of that doctor's respondents giving relatively high ratings and some giving relatively low ratings.
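The labeling logic can be sketched as a paired comparison of each respondent's actual and predicted scores. This sketch substitutes a normal approximation for the exact t distribution, a simplification that is close at the roughly 50 responses per doctor seen in this survey; the function name is an assumption.

```python
import math
from statistics import mean, stdev

def significance_label(actual, predicted, alpha=0.05):
    """Return "Better", "Lower", or "Average" for one doctor on one
    question, based on a paired test of actual vs. predicted scores."""
    diffs = [a - p for a, p in zip(actual, predicted)]
    standard_error = stdev(diffs) / math.sqrt(len(diffs))
    t = mean(diffs) / standard_error
    # One-sided tail probability under a normal approximation.
    p = 0.5 * (1.0 - math.erf(abs(t) / math.sqrt(2.0)))
    if p < alpha:
        return "Better" if t > 0 else "Lower"
    return "Average"
```

Because the test statistic depends on both the number of respondents and the spread of their answers, a doctor with a high mean score but few or inconsistent respondents can still come out "Average".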



Is there some reliability threshold that a doctor's scores must meet in order to be reported to the public at all?

CHECKBOOK/CSS has decided not to report to the public a doctor's score on a question unless either (1) the doctor's score on that question is significantly "Better" or "Lower" than average or (2) the doctor's sample size on that question meets a certain threshold. We have used something called the "reliability" statistic to decide on that sample size threshold. Reliability can range from 0.0 to 1.0. We have decided not to report to the public any score on a question for a doctor where the number of survey responses received for that doctor is below a number that, if all doctors in the survey had the same number of responses, would yield a reliability of 0.7.

Unfortunately, even with the relatively high numbers of responses collected in this survey, there are still many cases where specific questions cannot be reported to the public consistent with our reliability criteria.

For those who are not familiar with the use of the reliability statistic, we will explain the concept briefly here. The reason to use the reliability statistic (although other measures could also be used) is that it is an indicator of how confidently one will be able to distinguish among doctors on a list based on their survey scores. The higher the reliability statistic, the less the chance that someone comparing doctors' survey scores will conclude that two doctors are different (or not different) when the difference (or lack of difference) is just a result of the "luck of the draw."

The reliability statistic takes into account three characteristics of a set of survey responses for doctors. First, how much variation is there in the ratings each doctor's patients give their doctors? The more doctors' patients tend to agree about their doctors (small within-doctor variance), the higher the reliability statistic. Second, how much variation is there from doctor to doctor in the mean rating each doctor gets from his or her patients? The larger the differences from doctor to doctor (large between-doctor variance), the higher the reliability statistic. Third, how large is the number of respondents for each doctor? Larger numbers of responses produce a higher reliability statistic. Where within-doctor variance is low, between-doctor variance is high, and number of responses is high, the reliability statistic tends to be high, and your confidence in distinguishing (or not distinguishing) among doctors can be relatively high.
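A standard formulation of the reliability statistic combines exactly the three quantities just listed. The survey's precise computation may differ in detail, so treat this as a sketch with illustrative names.

```python
import math

def reliability(between_var, within_var, n_respondents):
    """Reliability of doctor-level mean scores: between-doctor variance
    divided by (between-doctor variance + within-doctor variance / n)."""
    return between_var / (between_var + within_var / n_respondents)

def n_for_reliability(target, between_var, within_var):
    """Smallest number of responses per doctor yielding at least the
    target reliability (e.g. the 0.7 threshold described above).
    Derived by solving target = b / (b + w/n) for n."""
    n = target * within_var / (between_var * (1.0 - target))
    return math.ceil(round(n, 9))  # round guards against float jitter
```

For example, with between-doctor variance 1.0 and within-doctor variance 3.0, seven responses per doctor give reliability 0.7.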



What are the composite measures?

In addition to reporting on each survey question, we have also reported on composite measures, which combine results for more than one question into one score. There are three composite measures: overall got appointments and care when needed; overall doctor communicated well; and overall office staff was helpful and courteous.

The results for composites are reported along with the results for individual questions in each doctor's Survey Results Report. These composite scores will also be reported to the public if they meet the reliability or t-test standards described above for individual questions. In fact, composites tend to have higher reliability statistics and t-test statistics than the individual questions that make up the composites.

We will explain briefly how each composite score was calculated. Respondent-level actual and predicted composite scores were calculated for each survey respondent for each composite measure by averaging the actual and predicted values from each of the questions that make up the composite. After the respondent-level composite values were calculated, the composites were treated in the same manner as the individual questions with regard to calculation of adjusted scores, testing for statistical significance, and calculating reliability. The table below shows an example of a respondent-level composite calculation for a physician who had just a small number of respondents for the office staff composite.


                 Q22                   Q23               Office Staff
Respondent   Actual  Predicted   Actual  Predicted   Actual  Predicted
1              100     80.2        100     78.5        100     79.4
2              100     83.1         80     81.4         90     82.3
3               80     81.8         --       --         80     81.8
4              100     85.1         60     83.7         80     84.4
5               60     82.2        100     80.5         80     81.4
6               --       --         80     79.9         80     79.9
7               40     79.9         60     77.1         50     78.5
8               80     85.1         80     83.4         80     84.3
9              100     80.9        100     79.2        100     80.1
10             100     82.2         80     79.7         90     81.0
Mean          84.4     82.3       82.2     80.4       83.0     81.3

(A "--" entry means the respondent did not answer that question; the composite for that respondent is based only on the questions answered.)
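The per-respondent averaging described above, including the handling of unanswered questions visible in the table, can be sketched as follows; the function name is illustrative, not CHECKBOOK/CSS's actual code:

```python
def composite_score(question_values):
    """Average the available question values for one respondent,
    skipping questions the respondent did not answer (None)."""
    answered = [v for v in question_values if v is not None]
    return sum(answered) / len(answered)

# Respondent 2's actual values from the table: Q22 = 100, Q23 = 80
print(composite_score([100, 80]))    # 90.0
# Respondent 3 answered only Q22, so the composite is just that value
print(composite_score([80, None]))   # 80.0
```

The per-doctor composite mean is then simply the average of these respondent-level composites, which is how the 83.0 actual and 81.3 predicted means in the bottom row of the table are obtained.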



What is the source of the information listed about each doctor’s addresses, hospitals, etc.?

This information, which appears on the public website, is intended merely as a convenience for website users. CHECKBOOK/CSS is responsible for this website's facts on the survey and on doctors' scores; the information on doctors' addresses, hospitals, etc., in contrast, comes from various outside sources, and users should check it independently before relying on it.

CHECKBOOK/CSS collected address, phone number, specialty, hospital, gender, and medical school information for each doctor from public or proprietary data sources and/or health plans.

With regard to addresses, the CHECKBOOK/CSS staff called each doctor's office(s) shortly before the first survey mailing to the doctor's patients to confirm at least one address. Since that time, CHECKBOOK/CSS has made some address changes if letters mailed to the doctor were returned as undeliverable, or if other address-change information came to CHECKBOOK/CSS's attention.

With regard to board certification in a specialty, the starting point for the information displayed on this website is the American Board of Medical Specialties (ABMS); the data have been extracted and manipulated from the Directory Database compiled by Elsevier and ABMS to publish The Official American Board of Medical Specialties (ABMS®) Directory. (This website has not been designated as an Official ABMS Display Agent, so the ABMS data provided is not deemed valid for Primary Source Verification (PSV) purposes as required for accrediting organizations.) Note that CHECKBOOK/CSS was unable to acquire information on board certification of osteopathic physicians through the 18 osteopathic specialty boards, so some osteopathic physicians who are board certified might not be listed as certified. Also, Elsevier matched the ABMS information against the information CHECKBOOK/CSS has on each physician to identify physicians certified by ABMS specialty boards. Because Elsevier can use only a limited set of identifying information for this matching (first, middle, and last name; date of birth; specialty; gender; and National Provider Identifier, but not some other information CHECKBOOK/CSS has on physicians), the matching process is not always correct, and some physicians might not be properly identified as ABMS-certified. Finally, the board certification information was last updated with data provided by Elsevier/ABMS in July 2010; some physicians may have become certified since then, and the certification of some may have expired.

At least 60 days before survey results for a doctor were first made public, the doctor was sent a personal Access ID code by First Class mail and invited to edit or add address, hospital, and other information on a CHECKBOOK/CSS website. Physicians can continue to use this Access ID to make corrections and updates. For the most part, CHECKBOOK/CSS makes changes indicated by the doctors, including information the doctors claim about board certification. But if the doctor deletes a specialty that CHECKBOOK/CSS used as a basis for including the doctor in the survey, CHECKBOOK/CSS still lists that specialty for that doctor, among other specialties that the doctor may ask to have listed.



How to Use the Survey Results Report for Physicians (Accessible Only to Physicians)

What is the Survey Results Report for Physicians?

CHECKBOOK/CSS has created a website at www.checkbook.org/doctorReview where each physician included in the survey can confidentially review his or her own Survey Results Report for Physicians. This physician-only website includes more detailed information than will appear on the "What Patients Say About Their Doctors" public website. Also, the physician-only website gives physicians an opportunity to correct or add information on address(es), specialty(ies), and other background. To use the physician-only website, a physician needs the Access ID CHECKBOOK/CSS sent each included physician by First Class mail.



What does the information on the table of survey results mean?

The following is a description of the report's contents:

  • In the far-left column of the report, you will find the full text of the question patients were asked to answer, along with the question number as shown on the questionnaire (which you can see by clicking here). Please note that for some of the reported questions, there was a screening question on the questionnaire. For example, Question 7–"When you phoned this doctor's office to get an appointment for care you needed right away, how often did you get an appointment as soon as you thought you needed?"–was preceded on the questionnaire by a screening question–"In the last 12 months, did you phone this doctor's office to get an appointment for an illness, injury, or condition that needed care right away?"
  • The second column of the table tells you how many patient respondents answered each question.
  • The next columns tell you what percent of the patients who answered the question chose each response option–for example, what percent said "always." In the case of the overall ratings question, the respondents chose a number from 0 to 10 and the table has grouped some of the response options into ranges, for example, "0 to 4."
  • Following these percentage breakdown columns are three columns showing the mean score on each question, where each patient's response on that question was given a value and the values were averaged across all patients who answered the question. To calculate these means, different possible response options (or ranges of options as shown in the preceding columns) were given values from 0 to 100–for example, "always" was given a value of 100, "almost always" was given a value of 80, "usually" was given a value of 60, and so on down to a value of 0 for "never." (For the overall ratings question, "10" was given a value of 100, "9" was given a value of 80, and so on down to 0 for any response in the "0 to 4" range. And for the recommend-to-family-and-friends question, the possible values were 100, 66.7, 33.3, and 0.)
  • The three columns showing mean scores are–
    • "Your unadjusted score," which is just the actual mean of the coded values given by all the patients who responded for the doctor,
    • "Your adjusted score," which is a score that takes into account the self-reported characteristics of the mix of patients who reported on the doctor, as explained elsewhere on this website under the question "How were scores adjusted for respondent characteristics?"
    • "Community average," which is a simple average of the coded values given by all patients who reported on doctors in the metro area. Depending on where the doctor is located, the community average shown is for the Denver area, the Kansas City area, the Memphis area, or the New York area.
  • The far-right column on the table indicates whether the physician's score is statistically significantly different from the community average. The words "Better" and "Lower" mean that the doctor's score is better, or lower, than the community average and that the doctor would very likely (at the 95 percent confidence level) get a better- or lower-than-average score even if all possible patients were surveyed. This statistical testing is described elsewhere in this website under the question "How does the reporting indicate the confidence one can have that differences among doctors are not explained by chance alone?" Users of this report should understand that such statistical testing takes into account the number of survey respondents for each physician and the amount of agreement among those respondents in the scores they give to that physician. As a result, it is possible, for example, for one physician to get a "Better" designation on a question when other physicians with equal or higher scores do not get a "Better" designation.
  • As explained elsewhere on this website under the question "Is there some reliability threshold that a doctor's scores must meet in order to be reported to the public at all?" CHECKBOOK/CSS has decided not to report to the public the score for a physician on a question unless either (1) the physician gets a "Better" or "Lower" score on that question or (2) the sample size for that question for that physician is sufficiently large to achieve what CHECKBOOK/CSS has set as a threshold level of reliability. But in the Survey Results Report for Physicians, survey results on all questions are reported. Physicians will find much of this information interesting and useful, and even the information that CHECKBOOK/CSS has decided not to report to the public is much more reliable than survey results that are commonly reported. On each doctor's Survey Results Report for Physicians, the questions for which that physician's results will not be reported to the public are presented in grey type and marked with a footnote.
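To make the scoring concrete, here is a sketch of the 0-to-100 coding and unadjusted-mean calculation described in the bullets above. The report names the values for "always," "almost always," "usually," and "never"; the two middle labels below are assumptions added for illustration, as is the function name:

```python
# Coding described above: "always" = 100 down to "never" = 0 in steps of 20.
# "sometimes" and "almost never" are assumed labels for the middle options.
RESPONSE_CODES = {
    "always": 100, "almost always": 80, "usually": 60,
    "sometimes": 40, "almost never": 20, "never": 0,
}

def unadjusted_score(responses):
    """Mean of the coded values across all patients who answered the question."""
    coded = [RESPONSE_CODES[r] for r in responses]
    return sum(coded) / len(coded)

print(unadjusted_score(["always", "usually", "always", "never"]))  # 65.0
```

The adjusted score starts from the same coded values but corrects for the mix of respondent characteristics, as described elsewhere on this website.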


What is the information about patient-reported characteristics?

In this section of the Survey Results Report for Physicians, we show the total number of patients who responded to the survey asking about the physician (and confirming that they had used the named physician). We also show the percentage of respondents in various demographic categories (by age, education level, etc.). These are demographic characteristics as reported by the patients in their survey responses (although CHECKBOOK/CSS used data from the member lists provided by the health plans to fill in the patient's age in the very small number of cases where the patient did not answer the age question on the survey).



What is the purpose of the section about physician location, specialty, and other information?

CHECKBOOK/CSS collected physician location, specialty, and other information from various sources, including health plan records, public databases, and credentialing organizations. And CHECKBOOK/CSS called each physician's office to confirm at least one address and phone number. To make the Public Survey Results Report as useful as possible to consumers, the information collected in this way will be included along with physician survey results in the Public Survey Results Report. CHECKBOOK/CSS wishes to have this information as accurate and current as possible; the Survey Results Report for Physicians includes links that take each physician to a form where he or she can correct or update all information the Report is showing. We ask that physicians take the time to make any corrections or updates.



How to Use the "What Patients Say About Their Doctors" Public Website

How do you use the "What Patients Say About Their Doctors" public website?

You can start using the website by going to the first web page, entitled "Overall Rating of Doctor." The diagram below shows key features of the "Overall Rating of Doctor" web page.

Table Explained

The diagram below shows key features of the web page you will go to if you click on a doctor's name, or if you check boxes for several doctors and click the "Compare" button.

Details Explained

A key part of the "Overall Rating of Doctor" web page is the bar graphs and numbers showing how each doctor scored based on answers given by his or her surveyed patients when asked to "rate this doctor." This web page also shows you whether each doctor's score was statistically significantly Better or Lower than the average of all surveyed doctors in the community. The words Better and Lower mean that the doctor's score is better, or lower, than average and that the doctor would very likely (95% likelihood) get a better, or lower, than average score even if all possible patients were surveyed.

On the "Overall Rating of Doctor" web page, you can sort by clicking on a column heading (for example, sort alphabetically by clicking on the words "Doctor's name" in the heading of the column that shows doctors' names and addresses). You can focus your search on doctors you are most likely to be interested in by making selections in the "SEARCH" box on the left of the web page (for example, you can enter a zip code, select how many miles from that zip you want to search, and then press the "Search" button to narrow the list of doctors geographically). You can also click on an individual doctor's name to see how patients rated the doctor on specific issues, such as how well the doctor communicates and how easy it is to arrange appointments and care. And you can check the boxes to the left of several doctors' names and then press the "Compare" button in the SEARCH box to see a detailed comparison of up to four doctors at a time.



Resources to Help Physicians Improve

What can physicians do to improve on the dimensions of care measured by the survey?

The most important benefits of the survey are expected to come from motivating and guiding physicians, in partnership with patients, to improve. It is hoped that the potential for professional satisfaction, and for recognition and appreciation by peers, the public, patients, and organizations with which physicians are affiliated, will contribute to physicians' motivation for continuing high-quality performance and improvement in the elements of practice measured by the survey. Various tools and resources can help physicians focus on the specific elements of everyday practice that affect the broad dimensions the survey measures, and can help physicians learn and practice improvements. Improvements can come in each of the broad areas measured in the survey–physician-patient communication, access to care, and office staff helpfulness and courtesy.

The Agency for Healthcare Research and Quality (AHRQ), which developed the questions used in this survey, has put together an online CAHPS Improvement Guide to help healthcare providers and organizations improve on the various aspects of care measured by a family of surveys, referred to as the CAHPS surveys. This includes surveys about physicians, hospitals, health plans, and other organizations. The Guide has a wide variety of advice and resources on such subjects as:

  • Are You Ready to Improve?—advice on the organizational behaviors that are critical to success in improving patients' experiences with care.
  • Analyzing Your Survey Results—advice on how to look beyond the scores to identify the best opportunities for improvement.
  • Quality Improvement Steps—advice on an effective process for implementing interventions to achieve specific performance goals.
  • Possible Interventions—advice on strategies for improving specific aspects of patients' experience with care. Some of these strategies will be relevant mainly to hospitals, health plans, and other organizations, but many will be useful to medical groups and individual physicians' offices.


What physician-patient communication elements will physicians want to focus on?

The "Kalamazoo Consensus Statement: Essential Elements of Physician-Patient Communication," developed by experts from medical schools and residencies, and representatives from medical education organizations in North America, and used as guidance by the Accreditation Council for Graduate Medical Education (ACGME) and other educational leadership organizations, says that physicians should focus on the following elements and tasks.

Physician focus
  • Establishes rapport
    • Encourages a partnership between physician and patient
    • Respects patient's active participation in decision making
  • Opens discussion
    • Allows patient to complete his/her opening statement
    • Elicits patient's full set of concerns
    • Establishes/maintains a personal connection
  • Gathers information
    • Uses open-ended and closed-ended questions appropriately
    • Structures, clarifies, and summarizes information
    • Actively listens using nonverbal (e.g., eye contact, body position) and verbal (words of encouragement) techniques
  • Understands patient's perspective on illness
    • Explores contextual factors (e.g., family, culture, gender, age, socioeconomic status, spirituality)
    • Explores beliefs, concerns, and expectations about health and illness
    • Acknowledges and responds to patient's ideas, feelings, and values
  • Shares information
    • Uses language patient can understand
    • Checks for understanding
    • Encourages questions
  • Reaches agreement on problems and plans
    • Encourages patient to participate in decisions to the extent he/she desires
    • Checks patient's willingness and ability to follow the plan
    • Identifies and enlists resources and supports
  • Provides closure
    • Asks whether patient has other issues or concerns
    • Summarizes and affirms agreement with the plan of action
    • Discusses follow-up (e.g., next visit, plan for unexpected outcomes)

The various questions on the survey relate to different elements and tasks from this consensus list.



What types of tools and resources are available to assist physicians to improve communication skills?

Medical schools, professional societies, continuing medical education providers, office practice consultants, and others in local communities may offer skill-development resources. Physicians may want to ask their local organizations what they have available; if a number of physicians come together with an interest in a training program, this might prompt organizations to set up skill-development programs. Various books, articles, and websites are also available. What follows is a short list of organizations and resources that might serve as a starting point.

The Institute for Healthcare Communication (www.healthcarecomm.org) is a nonprofit organization, founded in 1987, with a mission to advance the quality of healthcare by optimizing the experience and process of healthcare communication. The IHC creates and disseminates educational programs and services.

Some of the IHC workshops are as short as half a day; others require several days, and may be spaced over a period of weeks or months. The following is excerpted from the write-up of one of IHC's popular workshops:

"Clinician-Patient Communication to Enhance Health Outcomes is offered as either a full day or half day workshop for groups with six to thirty participants. The workshop is a fast paced interactive program designed to provide participants with opportunities to practice skills and techniques, not simply hear about them.
"Participants work individually and in teams to analyze videotaped re-enactments of actual cases, reach agreement on what was and was not effective in the cases, and then create responses that would be more effective.
"In the last activity, participants work together to develop approaches to patients they are currently working with. Finally, participants are asked to choose one or two techniques that they can immediately use in their practice."

Another, more extensive IHC program, which helps participants develop skills to coach others, is described as follows:

"Coaching For Improved Performance consists of 24 instructional hours and is usually conducted in three consecutive days. The program uses a small group format with six to twenty-four members in a group. A faculty ratio of one faculty member to every four participants allows for considerable individual attention and for a high level of involvement for all group members.
"During the program, participants practice coaching strategies with one another and with clinicians from outside the training group. Standardized patients provide realistic scenarios and 'real time' coaching is practiced. Videotaped feedback provides each participant with an opportunity to see him or herself in the coaching role.
"Because of the emphasis on practice, participants have an opportunity to develop their own clinician-patient communication skills as well as develop coaching techniques."

IHC does not actually schedule workshops and courses itself, but it has regional coordinators and many trainers around the U.S. And the regional coordinators can help health plans, local medical societies, medical groups, medical schools, and even informal groups of individual physicians to set up workshops. A one-day workshop with adequate enrollment might be expected to cost each participating physician $100 to $200.

The American Academy on Communication in Healthcare (AACH)(www.aachonline.org) is a nonprofit organization with a mission of fostering best patient care by advocating a relationship-centered approach to healthcare communication, education, and research.

The organization provides institutional courses on topics that include enhancing effective communication, delivering bad news, and managing medical malpractice risk with improved communication skills. Health plans, local medical societies, medical groups, medical schools, and informal groups of individual physicians can contact AACH about setting up workshops.

AACH also offers doc.com, an interactive learning resource for healthcare communication. Doc.com has a set of 41 online modules and more than 400 videos describing and demonstrating various aspects of physician-patient communication, showing interactions between real MDs and standardized patients. These videos are jointly produced by AACH and the Drexel University College of Medicine. Basic topics (with links to a few modules) include–

  • Integrated patient-centered and doctor-centered interviewing–structure and content of the interview
  • Building a relationship
  • Opening the discussion
  • Gathering information
  • Understanding the patient's perspective
  • Sharing information
  • Reaching agreement
  • Providing closure

More advanced modules include–

  • Responding to strong emotions
  • Understanding difference and diversity in the medical encounter: communication across cultures
  • Giving bad news
  • Domestic violence
  • Alcoholism
  • Drug abuse
  • Discussing medical error
  • Promoting adherence and health behavior change
  • Exploring sexual issues
  • Exploring spirituality and religious beliefs

The "Integrated patient-centered and doctor-centered interviewing" module, for example, shows the same doctor conducting an interview with two different approaches and obtaining a very different level of information and relationship with the patient depending on the approach. As the user views the interview online, a sidebar moves down the screen pointing out key elements of the interview and their effects. The objective of that module and accompanying materials is to bring the user to the point where he or she can–

  • Describe the content, and process, of a "complete" medical history,
  • Describe the difference between the tasks or functions of an interview and its structure,
  • Describe patient-centered and doctor-centered interview goals and skills,
  • Describe the different contributions of patient-centered and doctor-centered skills to understanding the patient's full (biopsychosocial) history,
  • Describe the content and structure of the written medical history.

At www.aachonline.org, anyone can try out the full set of Doc.com videos for 15 days for free.

The website also has an extensive bibliography on physician-patient communication.

And the website includes lists of articles documenting current deficiencies in practice, which AACH groups under such headings as–

  • We interrupt patients in the earliest phases of the encounter.
  • We fail to identify and prioritize patient concerns.
  • We miss opportunities to understand and acknowledge patients' ideas and feelings.
  • We fail to understand the importance of culture and ethnicity in health care.
  • We do not give bad news concisely and compassionately.
  • We minimize patients' roles in their care.
  • We underestimate patients' health literacy.
  • We don't negotiate differences well with patients.

There are many other websites that have resources to help physicians improve communication. These include–



What problems and resources will physicians want to focus on related to helping patients get appointments and care
when needed and ensuring office staff helpfulness and courtesy?

For many patients, problems getting appointments and care when needed and interactions with physician office staff are bigger concerns than physician-patient communication. In fact, on the Clinician/Group CAHPS surveys, physicians and their staffs score lower on these matters than on communication issues. As noted elsewhere on this website, a 2007 article in The Joint Commission Journal on Quality and Patient Safety stated that "delays for appointments are prevalent, resulting in patient dissatisfaction, higher costs, and possible adverse clinical consequences."

Physicians can affect these aspects of care with decisions on staff hiring and on design of office systems and procedures. Decisions have to be made about how to handle phone calls, about the use of e-mail for communicating with patients, about appointment scheduling systems, and other matters. Medical office management consultants can be helpful in designing and implementing office practices. Other organizations can also help. For example, the American Academy of Family Physicians (www.aafp.org) has various resources that bear on these questions. AAFP's Family Practice Management magazine includes articles on such topics as "Reducing Delays and Waiting Times with Open-Office Scheduling," "Making the Case for Online Physician-Patient Communications," "Same-Day Appointments: Exploding the Access Paradigm," and "Reducing Waits and Delays in the Referral Process." Similarly, the American College of Physicians (www.acponline.org) offers various resources, such as its "Patient Satisfaction Tip Book--Improving Patient Perceptions."



Resources to Help Patients Do Their Part

What can patients do to improve on the dimensions of care measured by the survey?

Especially in the arena of physician-patient communications, patients have a key role in achieving improvement. There are many resources to help.



What resources can help patients interact more effectively with their physicians?

Some of the resources described above for doctors would also be of interest to patients–and are a good way to understand the communication challenges from the physician's side. But there are many resources specifically for patients. The following are just a few of the available resources–

Essential advice on how to make the most of physician visits is available from many sources. For example, the following is adapted from advice given by Park Nicollet Health Services in Minnesota:

  • Write down your most important concerns.
    Before your visit, review all your symptoms, including when they started; the history of the problem, including whether you've had the problem before; and any treatments you have tried. List these things in order of importance, so you will be sure to get your most pressing concerns answered.
  • Bring related records.
    If you have information about drugs you use, allergies or other health problems, bring these records along if you are seeing a doctor for the first time. If your appointment is with a doctor you've been with for a while, be sure to let him or her know what over-the-counter remedies you are using and whether you are taking medicine prescribed by another doctor.
  • Be brief and clear.
    As you describe your symptoms to your doctor, avoid vague statements, such as "I've been feeling sick lately." Be specific: "I've had a headache and nausea for the past week, and I don't know what's causing it."
  • Take notes.
    Even if you can't write down everything you hear, an outline of the discussion will dramatically increase your memory of the information. Take some time immediately after the visit to fill in other details you remember about the discussion. It also may help to talk your visit over with a friend or family member soon afterward.
  • Ask for information that is organized.
    Studies on communication show that understanding improves when information is well organized. Ask your doctor to put information into categories, such as what is wrong, what tests you may need, what treatments are available, and what you must do.
  • Ask for explanations.
    When in doubt about a term your doctor uses, ask. A good way to ensure that you understand is to restate what you believe the doctor has told you. Then if you have misunderstood something, your doctor can explain it again.

The following is adapted from advice given by the National Institutes of Health–

  • Be honest: It is tempting to say what you think the doctor wants to hear; for example, that you smoke less or eat a more balanced diet than you really do. Or that you take your medication when you really don't. While this is natural, it's not in your best interest. Your doctor can give you the best treatment only if you say what is really going on.
  • Stick to the point: Although your doctor might like to talk to you at length, each patient is given a limited amount of time. To make the best use of yours, stick to the point. Give the doctor a brief description of the symptom, when it started, how often it happens and if it is getting worse or better.
  • Ask questions: Asking questions is key to getting what you want from the visit. If you don't ask questions, your doctor may think that you understand why he or she is sending you for a test or that you don't want more information. Ask questions when you don't know the meaning of a word or when instructions aren't clear. You might want to say, "I want to make sure I understand, could you explain that a little further?" It may help to repeat what you think the doctor means in your own words and ask, "Is this correct?" Also, if you are worried about cost, say so.
  • Share your point of view: Your doctor needs to know what's working and what's not. He or she can't read your mind, so it is important to speak up.
  • Plan to update your doctor: Think of any important information you need to share with your doctor about things that have happened since your last visit. You can write them down on a list as you notice them. Let your doctor know about any recent changes in the way your medication affects you.
  • Your doctor may ask how your life is going. This isn't just polite talk or an attempt to be nosy. Information about what's happening in your life can be useful medically. Let the doctor know about any major changes or stresses in your life like moving, changing jobs, a loved one's death, change in relationship status, etc.
  • Remember, doctors don't know everything, and even the best doctor may not be able to answer some questions. There still is much that isn't known about the human body and disease. Most doctors will say when they don't have the answers. If a doctor regularly brushes off your questions or symptoms, think about looking for another doctor.

A wide variety of resources to help patients communicate can be found at www.mercksource.com. Patients can begin with a few brief videos by Dr. Marie Savard. Topics of these videos include–

  • Getting started
  • Recording your medical history
  • Preparing for your doctor's visit
  • Building your medical records
  • Keeping track of medicines and allergies

Each of these videos is accompanied by forms that patients use to prepare and keep information to improve communication. Below is a list of the forms, which you can see and print out now by clicking on the form name–

Patients will find much valuable advice on the website of the U.S. Agency for Healthcare Research and Quality (AHRQ). The following is a list of articles by Dr. Carolyn Clancy, the agency's Director. You can see and print out any article by clicking on the article's title:

Carolyn Clancy, M.D., AHRQ Director

You can also click the topics below to listen to interviews with Dr. Clancy offering much useful advice on what patients can do to help themselves get the best possible care:

Many sources stress the desirability of having a friend or family member be with the patient to help the patient ask questions and take notes on what is said. More broadly, there is increasing awareness of the importance of having family and friends–and a whole community–involved in each patient's care.



What are some resources to help patients be informed about their diseases, injuries, or conditions?

The better informed patients are about their diseases, injuries, and conditions, the better able they are to ask questions of physicians, understand treatment options, understand treatment plans, and be motivated to carry out treatment plans. There are many sources of this kind of information, including the following online resources–

Diseases and Treatments from A to Z
www.checkbook.org/health/disease-treat.cfm
A website developed by CHECKBOOK/CSS that collects links to other websites (including some listed below) on common diseases and treatments: comprehensive guides; fact sheets, tutorials, and interactive tools; videos and podcasts; patient forums and support communities; clinical practice guidelines; and important articles from medical journals.

Healthfinder
www.healthfinder.gov
A free gateway to reliable consumer health and human services information developed by the U.S. Department of Health and Human Services.

Mayo Clinic
www.mayoclinic.com
General-information website with Mayo's advice and information, including such features as "Diseases and Conditions A-Z," "Condition Centers," "Healthy Living," and "Health Tools."

MedlinePlus
medlineplus.gov
A consumer-oriented website that brings together authoritative information from the U.S. National Library of Medicine, the National Institutes of Health, and other government agencies and health-related organizations. Includes extensive information about drugs, an illustrated medical encyclopedia, interactive patient tutorials, and recent health news.

PubMed
www.pubmed.gov
A service of the U.S. National Library of Medicine that includes more than 17 million citations for biomedical articles from academic journals, dating back to the 1950s. Includes links to many abstracts, full-text articles, and other related resources.

National Guideline Clearinghouse
www.guideline.gov
A resource sponsored by the Agency for Healthcare Research and Quality that gives information on current guidelines for the diagnosis and treatment of diseases.

Merck Manuals Online Medical Library
www.merckmanuals.com and www.mercksource.com
Includes the "Merck Manuals Home Edition," which explains disorders, who is likely to get them, their symptoms, how they're diagnosed, how they might be prevented, how they can be treated, and prognoses. Also includes the "Merck Manual of Health and Aging" and other resources.

University of Pittsburgh Medical Center-Managing Your Health
http://www.upmc.com/health-library/
Consumer-oriented website with information on conditions and diseases, procedures, and drugs. Includes an "anatomy navigator," health tools and calculators, a medical dictionary, and other resources.

As an alternative to these online resources, patients can use available libraries. At any major public library, you can ask for general consumer-oriented medical literature or for medical texts. For more in-depth information, you can use a medical school library. These libraries may also be able to help you find support groups and organizations that regularly provide information on your type of medical problem.



How Physicians Can Use Their Survey Results to Help Meet Maintenance of Certification and Continuing Medical Education Requirements

Specialties Included
Survey results can be used by physicians who are board certified by the American Board of Internal Medicine (ABIM) (and possibly in the future by other specialty boards). For more information about ABIM's Maintenance of Certification requirements, visit http://www.abim.org/moc/requirements-for-MOC.aspx.

Specialty Board Requirements
ABIM requires 20 points of Self-Evaluation of Practice Performance in its MOC program. You can use your survey results shown on this website to complete ABIM's Self-Directed Practice Improvement Module® (Self-Directed PIM). Click here for a fact sheet describing the Self-Directed PIM. You can learn more about it on ABIM's website at: http://www.abim.org/moc/earning-points/productinfo-demo-ordering.aspx.

Completing the Self-Directed PIM
You must be enrolled in MOC and may order the Self-Directed PIM through the Physician Login portal of ABIM's website. Then you will complete the following steps as set out in sequential sections on the ABIM website—

  1. Submit your performance results for five different survey questions in the "Enter Performance Data" section on the ABIM website. A mockup of this form can be found by clicking here. NOTE: Survey results must be based on at least 25 completed surveys (as shown in the column headed "Number of patients who answered").
  2. Complete and submit the "Examine Systems" section on the ABIM website.
  3. Identify goals for improvement and redesign processes to achieve these goals on at least one of the measures (survey questions) submitted to ABIM in step 1.
  4. Perform a focused re-measurement. This re-measurement will require surveying patients on at least the question or questions targeted by the goal(s) and redesigned process(es) identified in step 3. Click here for a selection of survey procedures that can make this re-survey as easy as possible. After the re-survey is done, report its results and the lessons learned to ABIM as the final step in completing the PIM.

Performance Data for Self-Directed PIM
The survey results shown on this website must have been collected not more than 24 months before you use the data in the Self-Directed PIM. Be sure footnote 1 in the table reporting your survey results shows that data collection "ended" not more than 24 months before the date when you will report the data to ABIM.

Getting CME Credit
Physicians who are enrolled in MOC and complete the Self-Directed PIM are eligible for Category 1 Continuing Medical Education credit. Instructions for obtaining CME credit are provided once you've completed the PIM.





Back to top