Health Information Quality Assessment Tools

Determining the quality of online health information remains a significant challenge: no single standard exists for objectively measuring the quality of medical information available online. Over the past twenty-five years, however, a number of frameworks and measurement tools have been proposed that give the general public and healthcare professionals several options for assessing that quality.

This guide provides an overview of a number of quality assessment frameworks. Each varies in comprehensiveness, ease of use, effectiveness, and intended audience.1,2

General Assessment Frameworks

These general assessment methods give readers a simple framework for quickly judging the quality of online information by asking a few questions whenever they arrive on a website. In part to encourage broad use, the frameworks aim to be succinct and memorable while still covering the salient points.

JAMA Benchmarks (Silberg Standards) 1997

Initially published in a paper titled “Assessing, Controlling, and Assuring the Quality of Medical Information on the Internet,” this standard proposes four benchmarks by which medical information can be considered authentic and reliable. Subsequent reviews suggest that these benchmarks alone may not be sufficient to accurately assess content reliability.3

JAMA Benchmarks (Silberg Standards)

  • Authorship: Authors and contributors, their affiliations, and relevant credentials should be provided.
  • Attribution: References and sources for all content should be listed clearly, and all relevant copyright information noted.
  • Disclosure: Web site "ownership" should be prominently and fully disclosed, as should any sponsorship, advertising, underwriting, commercial funding arrangements or support, or potential conflicts of interest. This includes arrangements in which links to other sites are posted as a result of financial considerations. Similar standards should hold in discussion forums.
  • Currency: Dates that content was posted and updated should be indicated.

Currency, Relevance, Authority, Accuracy, Purpose (CRAAP) 2004

Sarah Blakeslee, a librarian at California State University Chico, developed the CRAAP Test as a memorable way to evaluate content in five main categories: currency, relevance, authority, accuracy, and purpose. In an article published in LOEX Quarterly describing the CRAAP Test, Blakeslee explained, “Sometimes a person needs an acronym that sticks.” The full prompts are listed below, followed by a brief sketch of how answers might be recorded.

  • Measures: Currency, relevance, authority, accuracy, purpose
  • Created by: Sarah Blakeslee in 2004
  • Format: 5 questions, unscored
  • Original paper: Blakeslee, Sarah. The CRAAP Test. LOEX Quarterly, Vol. 31, Iss. 3, Article 4. Fall 2004.

Currency, Relevance, Authority, Accuracy, and Purpose (CRAAP)

  • Currency: The timeliness of the information. When was the information published or posted? Has the information been revised or updated? Does your topic require current information, or will older sources work as well? Are the links functional?
  • Relevance: The importance of the information for your needs. Does the information relate to your topic or answer your question? Who is the intended audience? Is the information at an appropriate level (i.e. not too elementary or advanced for your needs)? Have you looked at a variety of sources before determining this is the one you will use? Would you be comfortable citing this source in your research paper?
  • Authority: The source of the information. Who is the author / publisher / source / sponsor? What are the author's credentials or organizational affiliations? Is the author qualified to write on the topic? Is there contact information, such as a publisher or email address? Does the URL reveal anything about the author or source?
  • Accuracy: The reliability, truthfulness and correctness of the content. Where does the information come from? Is the information supported by evidence? Has the information been reviewed or refereed? Can you verify any of the information in another source or from personal knowledge? Does the language or tone seem unbiased and free of emotion? Are there spelling, grammar or typographical errors?
  • Purpose: The reason the information exists. What is the purpose of the information? Is it to inform, teach, sell, entertain or persuade? Do the authors / sponsors make their intentions or purpose clear? Is the information fact, opinion or propaganda? Does the point of view appear objective and impartial? Are there political, ideological, cultural, religious, institutional or personal biases?
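
The CRAAP Test itself is unscored, but it can help to record answers to the prompts systematically. The short Python sketch below is illustrative only: the idea of checking off prompts the reviewer is satisfied with, and the helper that reports what remains open, are assumptions made for this example rather than part of Blakeslee's tool. The same pattern works for the other unscored checklists in this guide.

```python
# Illustrative only: one way to track answers to CRAAP Test prompts.
# The CRAAP Test is an unscored checklist; the "check off what you are
# satisfied with" scheme below is an assumption made for this sketch.

CRAAP_PROMPTS = {
    "Currency": ["When was the information published or posted?",
                 "Has the information been revised or updated?"],
    "Relevance": ["Does the information relate to your topic or answer your question?",
                  "Is the information at an appropriate level?"],
    "Authority": ["What are the author's credentials or organizational affiliations?",
                  "Is the author qualified to write on the topic?"],
    "Accuracy": ["Is the information supported by evidence?",
                 "Can you verify any of the information in another source?"],
    "Purpose": ["Do the authors / sponsors make their intentions or purpose clear?",
                "Does the point of view appear objective and impartial?"],
}

def open_prompts(satisfied: set) -> list:
    """Return every prompt the reviewer has not yet checked off."""
    return [prompt
            for prompts in CRAAP_PROMPTS.values()
            for prompt in prompts
            if prompt not in satisfied]

# Example: a reviewer satisfied only with the authority-related prompts.
satisfied = {"What are the author's credentials or organizational affiliations?",
             "Is the author qualified to write on the topic?"}
print(open_prompts(satisfied))  # prints the eight remaining prompts
```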

Currency, Reliability, Authority, and Purpose (CRAP) 2007

A modified and slightly simplified version of CRAAP was proposed by Molly Beestrum, an instructor and librarian at Dominican University. Beestrum’s CRAP Test removes the Relevance criterion and renames Accuracy to Reliability, resulting in an acronym that stands for currency, reliability, authority, and purpose.

  • Measures: Currency, reliability, authority, purpose
  • Created by: Molly Beestrum, MLIS at Dominican University in 2007
  • Format: 4 questions, unscored
  • Original paper: Molly Beestrum, Kenneth Orenic. Wiki-ing Your Way into Collaborative Learning. 2010. (Includes materials presented at LOEX National Conference in 2008).

Currency, Reliability, Authority, and Purpose (CRAP)

  • Currency: When was it originally published? Has it been updated or revised since? Does the time frame fit your needs?
  • Reliability: Can you depend on the information and trust it to be accurate? Did the author use any evidence and show their sources with citations, references, or a list of sources? Are the spelling and grammar correct? Can you verify the information through other sources?
  • Authority: Can you trust the source the information comes from? Who is the author, and what are their credentials? Who is the publisher or sponsor? What does the URL end with? (.gov, .org, .edu, .com?)
  • Purpose/Point of View: What is the author's motivation for publishing the resource? Is the author trying to inform, persuade, sell to, or entertain you? Are there advertisements or links to buy things? If so, are they marked clearly or sponsored by the resource? Does the author seem objective or biased? Do they name any affiliations or conflicts of interest?

Date, Author, References, Type, and Sponsor (DARTS) 2007

DARTS proposes five questions that readers can ask to assess content for currency, authorship, credibility, purpose, and potential conflicts of interest. The tool was developed in 2007 by the Working Group on Information to Patients under the Pharmaceutical Forum established by the European Commission.

  • Measures: Currency, authorship, credibility, purpose, and potential conflicts of interest
  • Created by: Working Group on Information to Patients under the Pharmaceutical Forum established by the European Commission in 2007
  • Format: 5 questions, unscored
  • Original paper: Ulla Närhi et al. The DARTS tool for Assessing Online Medicines Information. Pharmacy World & Science. September 2008.

Date, Author, References, Type, and Sponsor (DARTS)

  • Date: When was the information updated?
  • Author: Who is the writer? Is he/she qualified?
  • References: Are the references and sources of content valid?
  • Type: What is the purpose of the site?
  • Sponsor: Is the site sponsored and, if so, by whom?

FA4CT Algorithm 2007

An academic review of instruments available to consumers for the evaluation of information resources found that “few are likely to be practically usable by the intended audience.”4 The creators of FA4CT propose a simplified, checklist-free approach to evaluating the credibility of a specific health claim (rather than an entire article or web site); the steps, and a sketch of the resulting decision flow, follow.

FA4CT Algorithm

  • Find Answers and Compare: information from different sources
  • Check Credibility: of sources, if conflicting information is provided
  • Check Trustworthiness (Reputation): of sources, if conflicting information is provided
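
Because FA4CT is framed as a step-by-step procedure for a single health claim rather than a site-wide checklist, its flow can be sketched as a small decision function. The Python sketch below follows the three steps listed above; the Finding record and its credibility/reputation fields are illustrative assumptions, not part of the published algorithm.

```python
# Illustrative sketch of the FA4CT decision flow described above.
# Each Finding pairs one source's answer to the health claim with an
# assessment of that source; the fields are assumed for this example.

from dataclasses import dataclass

@dataclass
class Finding:
    answer: str             # e.g. "claim supported" or "claim not supported"
    source: str
    credible: bool = True   # assessed only if answers conflict
    reputable: bool = True  # assessed only if answers conflict

def fa4ct(findings: list) -> str:
    # Step 1: find answers and compare information from different sources.
    answers = {f.answer for f in findings}
    if len(answers) == 1:
        return f"Sources agree: {answers.pop()}"
    # Steps 2-3: on conflict, check the credibility and trustworthiness
    # (reputation) of each source and keep only those that pass.
    kept = {f.answer for f in findings if f.credible and f.reputable}
    if len(kept) == 1:
        return f"Credible, reputable sources agree: {kept.pop()}"
    return "No consensus among credible sources; keep looking or ask a professional"

# Example with three hypothetical sources.
print(fa4ct([Finding("claim not supported", "health portal A"),
             Finding("claim not supported", "medical society B"),
             Finding("claim supported", "forum post C", credible=False)]))
```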

Trust It or Trash It 2009

Trust It or Trash It is a very simple interactive tool that encourages users to think critically about the quality of health information they find online.

Trust It or Trash It

  • Who said it?: Who wrote it? Who provided the facts? Who paid for it?
  • When did they say it?: When was it written or updated?
  • How did they know?: How do you know if this information pertains to you? Does the information seem reasonable based on what you’ve read or know?

Scored Assessment Frameworks

Scored tools provide a more rigorous and standardized method of measuring health information quality. These tools may be used by health information providers, authors of consumer health information, or health professionals. Given the time commitment necessary to complete an assessment, they are less useful for the casual researcher.

Sandvik’s General Quality Criteria 1998

Sandvik’s criteria provide a general quality measure for online health information, based in part on the criteria suggested by Silberg et al. and in part on the HONcode principles. Each of the seven criteria is scored from 0 to 2; a sketch of tallying a total follows the list.

Sandvik’s General Quality Criteria

  • Ownership: 2 (name and type of provider clearly stated); 1 (all other indications of ownership); 0 (no indication of ownership)
  • Authorship: 2 (author’s name and qualification clearly stated); 1 (all other indications of authorship); 0 (no indication of authorship)
  • Source: 2 (references given to scientific literature); 1 (all other indications of source); 0 (no indication of source)
  • Currency: 2 (date of publication or update clearly stated on all pages); 1 (all other indications of currency); 0 (no indication of currency)
  • Interactivity: 2 (clear invitation to comment or ask questions by an email address or link to a form); 1 (any other email address on the site); 0 (no possibility for interactivity)
  • Navigability: 2 (information easily found by following links from home page); 1 (information found only with difficulty by following links, search engine provided if information widely scattered on site); 0 (information scattered around, no search engine)
  • Balance: 2 (balanced information); 1 (biased in favor of own products or services); 0 (only promoting own products or services)
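
The list above gives only the per-criterion scale, so the Python sketch below simply assumes the seven 0-2 scores are summed into a total out of 14; the summation and the example scores are assumptions for illustration, while the criterion names come from the list.

```python
# Minimal sketch of tallying Sandvik's criteria, assuming the seven
# 0-2 scores are summed into a single total out of 14 (the summation
# itself is an assumption; only the criteria and scales come from the guide).

SANDVIK_CRITERIA = ("ownership", "authorship", "source", "currency",
                    "interactivity", "navigability", "balance")

def sandvik_total(scores: dict) -> int:
    """Sum the seven criterion scores (each 0, 1, or 2)."""
    for name in SANDVIK_CRITERIA:
        if scores.get(name) not in (0, 1, 2):
            raise ValueError(f"{name} must be scored 0, 1, or 2")
    return sum(scores[name] for name in SANDVIK_CRITERIA)

# Example: clear ownership and authorship, balanced content, but no
# references, dates, or feedback channel, and middling navigability.
example = {"ownership": 2, "authorship": 2, "source": 0, "currency": 0,
           "interactivity": 0, "navigability": 1, "balance": 2}
print(sandvik_total(example))  # 7 out of a possible 14
```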

DISCERN 1999

DISCERN is a brief questionnaire that provides users with a reliable way of assessing the quality of written information on treatment choices for a health problem. The DISCERN Handbook details how each question is scored on a 5-point scale from 1 (definitely no) to 5 (definitely yes); a sketch of collecting those ratings follows the question list.

DISCERN

  1. Are the aims clear?
  2. Does it achieve its aims?
  3. Is it relevant?
  4. Is it clear what sources of information were used to compile the publication (other than the author or producer)?
  5. Is it clear when the information used or reported in the publication was produced?
  6. Is it balanced and unbiased?
  7. Does it provide details of additional sources of support and information?
  8. Does it refer to areas of uncertainty?
  9. Does it describe how each treatment works?
  10. Does it describe the benefits of each treatment?
  11. Does it describe the risks of each treatment?
  12. Does it describe what would happen if no treatment is used?
  13. Does it describe how the treatment choices affect overall quality of life?
  14. Is it clear that there may be more than one possible treatment choice?
  15. Does it provide support for shared decision-making?
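
The DISCERN questions are rated individually, but studies that apply the tool often also sum the ratings into a single figure. The Python sketch below assumes that convention: fifteen ratings on the 1-5 scale, summed into a total between 15 and 75. The summation and the example ratings are assumptions for illustration, not instructions from the DISCERN Handbook.

```python
# Illustrative sketch of collecting DISCERN ratings: each of the fifteen
# questions above is rated 1 (definitely no) to 5 (definitely yes), and
# the ratings are summed (a common research convention, assumed here).

DISCERN_QUESTIONS = 15
RATING_RANGE = range(1, 6)  # 1 = definitely no ... 5 = definitely yes

def discern_total(ratings: list) -> int:
    if len(ratings) != DISCERN_QUESTIONS:
        raise ValueError(f"Expected {DISCERN_QUESTIONS} ratings")
    if any(r not in RATING_RANGE for r in ratings):
        raise ValueError("Each rating must be between 1 and 5")
    return sum(ratings)  # possible totals range from 15 to 75

# Example: hypothetical ratings for a leaflet that is clear about its aims
# and sources but says little about risks, uncertainty, or alternatives.
print(discern_total([5, 4, 5, 4, 3, 4, 3, 2, 4, 4, 2, 1, 2, 3, 3]))  # 49
```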

QUality Evaluation Scoring Tool (QUEST) 2018

The developers of QUEST set out to create a simple tool to assess the quality of online health information. They note that “there is currently no singular quantitative tool that has undergone a validation process, can be used for a broad range of health information, and strikes a balance between ease of use, concision and comprehensiveness.”5 The rubric, and a sketch of the weighted tally it produces, appear below.

  • Measures: Authorship, attribution, conflict of interest, complementarity, currency, tone
  • Created by: Julie M. Robillard, Jessica H. Jun, Jen-Ai Lai, Tanya L. Feng in 2018
  • Format: 6 weighted criteria (rated on 0-1 to 0-3 scales, with a study-type sub-item scored when studies are cited), yielding an overall quality score between 0 and 28
  • Original paper: Robillard et al. The QUEST for quality online health information: validation of a short quantitative tool. BMC Medical Informatics and Decision Making. Oct 2018.

QUality Evaluation Scoring Tool (QUEST)

  • Authorship: 0 (No indication of authorship); 1 (All other indications of authorship); 2 (Author’s name and qualifications clearly stated) [Score x 1]
  • Attribution: 0 (No sources); 1 (Mention of expert source or research findings, though with insufficient information to identify the specific studies; links to various sites, an advocacy body, or another reference list); 2 (Reference to at least one identifiable scientific study); 3 (References to mainly identifiable scientific studies in >50% of claims) [Score x 3]

    For all articles scoring 2 or 3 on Attribution:

    • Type of study: 0 (In vitro, animal models, or editorials); 1 (All observational work); 2 (Meta-analyses, randomized controlled trials, clinical studies) [Score x 1]
  • Conflict of interest: 0 (Endorsement or promotion of intervention designed to prevent or treat condition within the article); 1 (Endorsement or promotion of educational products & services); 2 (Unbiased information) [Score x 3]
  • Currency: 0 (No date present); 1 (Article is dated but 5 years or older); 2 (Article is dated within the last 5 years) [Score x 1]
  • Complementarity: 0 (No support of the patient-physician relationship); 1 (Support of the patient-physician relationship) [Score x 1]
  • Tone (includes title): 0 (Fully supported. Authors fully and unequivocally support the claims, strong vocabulary such as "cure", "guarantee", and "easy", mostly use of non-conditional verb tenses ("can", "will"), no discussion of limitations); 1 (Mainly supported. Authors mainly support their claims but with more cautious vocabulary such as "can reduce your risk" or "may help prevent", no discussion of limitations); 2 (Balanced/cautious support: Authors' claims are balanced by caution, includes statements of limitations and/or contrasting findings) [Score x 3]
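
Because the weights are stated explicitly, the QUEST total is straightforward to compute. The Python sketch below tallies raw scores using the weights listed in the rubric above; the function name and input format are illustrative, and the study-type sub-item is counted only when Attribution scores 2 or 3, per the note above.

```python
# Sketch of the QUEST tally using the weights listed in the rubric above.
# Raw score ranges and weights follow this guide; the input format is
# illustrative only.

QUEST_WEIGHTS = {
    "authorship": 1,            # raw 0-2
    "attribution": 3,           # raw 0-3
    "type_of_study": 1,         # raw 0-2, scored only if attribution >= 2
    "conflict_of_interest": 3,  # raw 0-2
    "currency": 1,              # raw 0-2
    "complementarity": 1,       # raw 0-1
    "tone": 3,                  # raw 0-2
}

def quest_score(raw: dict) -> int:
    """Weighted QUEST total; the maximum possible score is 28."""
    total = 0
    for item, weight in QUEST_WEIGHTS.items():
        if item == "type_of_study" and raw.get("attribution", 0) < 2:
            continue  # study type is rated only when identifiable studies are cited
        total += weight * raw.get(item, 0)
    return total

# Example: a well-referenced, balanced, recent article with no stated author.
example = {"authorship": 0, "attribution": 3, "type_of_study": 2,
           "conflict_of_interest": 2, "currency": 2,
           "complementarity": 1, "tone": 2}
print(quest_score(example))  # 26 out of a possible 28
```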

Conclusion

When health information is well-researched and accessible, it can be an invaluable resource that advances the public good. The tools described in this guide were developed to help medical professionals and the general public assess the quality of health information.

All healthcare professionals can benefit from understanding these critical evaluation frameworks. They may further benefit from educating patients on how to use the internet to find information about their conditions and treatments.


References

  1. Fahy E, Hardikar R, Fox A, Mackay S. Quality of patient health information on the Internet: reviewing a complex and evolving landscape. Australas Med J. 2014;7(1):24-28. doi:10.4066/AMJ.2014.1900
  2. Eysenbach G, Powell J, Kuss O, Sa ER. Empirical studies assessing the quality of health information for consumers on the World Wide Web: a systematic review. JAMA. 2002;287(20):2691-2700. doi:10.1001/jama.287.20.2691
  3. Barker S, et al. Accuracy of Internet Recommendations for Prehospital Care of Venomous Snake Bites. Wilderness & Environmental Medicine. 2009;21(4):298-302.
  4. Bernstam EV, Shelton DM, Walji M, Meric-Bernstam F. Instruments to assess the quality of health information on the World Wide Web: what can our patients actually use? Int J Med Inform. 2005;74:13-19.
  5. Robillard JM, Jun JH, Lai JA, Feng TL. The QUEST for quality online health information: validation of a short quantitative tool. BMC Med Inform Decis Mak. 2018;18(1):87. doi:10.1186/s12911-018-0668-9
  6. Breckons M, et al. What do evaluation instruments tell us about the quality of complementary medicine information on the internet? J Med Internet Res. 2008;10(1):e3. doi:10.2196/jmir.961

Published: November 15, 2019