Academic Medicine : Journal of the... Sep 2014
PURPOSE
Standards for reporting exist for many types of quantitative research, but currently none exist for the broad spectrum of qualitative research. The purpose of the present study was to formulate and define standards for reporting qualitative research while preserving the requisite flexibility to accommodate various paradigms, approaches, and methods.
METHOD
The authors identified guidelines, reporting standards, and critical appraisal criteria for qualitative research by searching PubMed, Web of Science, and Google through July 2013; reviewing the reference lists of retrieved sources; and contacting experts. Specifically, two authors reviewed a sample of sources to generate an initial set of items that were potentially important in reporting qualitative research. Through an iterative process of reviewing sources, modifying the set of items, and coding all sources for items, the authors prepared a near-final list of items and descriptions and sent this list to five external reviewers for feedback. The final items and descriptions included in the reporting standards reflect this feedback.
RESULTS
The Standards for Reporting Qualitative Research (SRQR) consists of 21 items. The authors define and explain key elements of each item and provide examples from recently published articles to illustrate ways in which the standards can be met.
CONCLUSIONS
The SRQR aims to improve the transparency of all aspects of qualitative research by providing clear standards for reporting qualitative research. These standards will assist authors during manuscript preparation, editors and reviewers in evaluating a manuscript for potential publication, and readers when critically appraising, applying, and synthesizing study findings.
Topics: Publishing; Qualitative Research; Research Design; Research Report
PubMed: 24979285
DOI: 10.1097/ACM.0000000000000388
Journal of Educational Evaluation For... 2021
Review
Appropriate sample size calculation and power analysis have become major issues in research and publication. However, calculating sample size and power is complex and requires broad statistical knowledge, personnel with programming skills are in short supply, and commercial programs are often too expensive for practical use. This review article aims to explain the basic concepts of sample size calculation and power analysis; the process of sample estimation; and how to calculate sample size using the G*Power software (latest ver. 3.1.9.7; Heinrich-Heine-Universität Düsseldorf, Düsseldorf, Germany), with 5 statistical examples. The null and alternative hypotheses, effect size, power, alpha (type I error), and type II error should be described when calculating the sample size or power. G*Power is recommended for sample size and power calculations because it is free, easy to use, and supports a variety of statistical methods (F, t, χ2, z, and exact tests). The process of sample estimation consists of establishing research goals and hypotheses, choosing appropriate statistical tests, choosing one of the 5 possible power analysis methods, inputting the required variables, and clicking the "Calculate" button. This software is helpful for researchers estimating sample size and conducting power analysis.
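The large-sample arithmetic behind such a calculation can be sketched in a few lines. This is a normal-approximation sketch for a two-sided, two-sample t-test, not G*Power's exact noncentral-t computation, so its answer is typically one or two participants smaller than G*Power's:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group n for a two-sided, two-sample t-test.

    Uses the normal approximation n = 2 * (z_{1-a/2} + z_{1-b})^2 / d^2,
    where d is Cohen's d. The exact noncentral-t answer (as computed by
    G*Power) is usually slightly larger.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# Medium effect (Cohen's d = 0.5), alpha = 0.05, power = 0.80
print(n_per_group(0.5))  # -> 63 per group (normal approximation)
```

For real study planning, the exact calculation in G*Power or a dedicated statistical package should be preferred over this approximation.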
Topics: Humans; Research Design; Sample Size; Software
PubMed: 34325496
DOI: 10.3352/jeehp.2021.18.17
Perspectives on Medical Education Apr 2019
Review
INTRODUCTION
As a research methodology, phenomenology is uniquely positioned to help health professions education (HPE) scholars learn from the experiences of others. Phenomenology is a form of qualitative research that focuses on the study of an individual's lived experiences within the world. Although it is a powerful approach for inquiry, the nature of this methodology is often intimidating to HPE researchers. This article aims to explain phenomenology by reviewing the key philosophical and methodological differences between two of the major approaches to phenomenology: transcendental and hermeneutic. Understanding the ontological and epistemological assumptions underpinning these approaches is essential for successfully conducting phenomenological research.
PURPOSE
This review provides an introduction to phenomenology and demonstrates how it can be applied to HPE research. We illustrate the two main sub-types of phenomenology and detail their ontological, epistemological, and methodological differences.
CONCLUSIONS
Phenomenology is a powerful research strategy that is well suited for exploring challenging problems in HPE. By building a better understanding of the nature of phenomenology and working to ensure proper alignment between the specific research question and the researcher's underlying philosophy, we hope to encourage HPE scholars to consider its utility when addressing their research questions.
Topics: Awareness; Bullying; Decision Making; Health Occupations; Hermeneutics; Humans; Knowledge; Philosophy; Qualitative Research; Research Design; Research Personnel
PubMed: 30953335
DOI: 10.1007/s40037-019-0509-2
BMJ (Clinical Research Ed.) Jan 2015
Protocols of systematic reviews and meta-analyses allow for planning and documentation of review methods, act as a guard against arbitrary decision making during review conduct, enable readers to assess for the presence of selective reporting against completed reviews, and, when made publicly available, reduce duplication of efforts and potentially prompt collaboration. Evidence documenting the existence of selective reporting and excessive duplication of reviews on the same or similar topics is accumulating, and many calls have been made in support of the documentation and public availability of review protocols. Several efforts have emerged in recent years to rectify these problems, including development of an international register for prospective reviews (PROSPERO) and launch of the first open access journal dedicated to the exclusive publication of systematic review products, including protocols (BioMed Central's Systematic Reviews). Furthering these efforts and building on the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines, an international group of experts has created a guideline to improve the transparency, accuracy, completeness, and frequency of documented systematic review and meta-analysis protocols: PRISMA-P (for protocols) 2015. The PRISMA-P checklist contains 17 items considered to be essential and minimum components of a systematic review or meta-analysis protocol. This PRISMA-P 2015 Explanation and Elaboration paper provides readers with a full understanding of and evidence about the necessity of each item, as well as a model example from an existing published protocol. This paper should be read together with the PRISMA-P 2015 statement. Systematic review authors and assessors are strongly encouraged to make use of PRISMA-P when drafting and appraising review protocols.
Topics: Access to Information; Guideline Adherence; Meta-Analysis as Topic; Publishing; Quality Control; Research Design; Research Report; Systematic Reviews as Topic
PubMed: 25555855
DOI: 10.1136/bmj.g7647
Military Medical Research Feb 2020
Review
Methodological quality (risk of bias) assessment is an important step before a study's findings are used. Accurately judging the study type is therefore the first priority, and choosing the proper tool is also important. In this review, we introduce methodological quality assessment tools for randomized controlled trials (individual and cluster), animal studies, non-randomized interventional studies (follow-up studies, controlled before-and-after studies, before-after/pre-post studies, uncontrolled longitudinal studies, interrupted time series studies), cohort studies, case-control studies, cross-sectional studies (analytical and descriptive), observational case series and case reports, comparative effectiveness research, diagnostic studies, health economic evaluations, prediction studies (predictor finding studies, prediction model impact studies, prognostic prediction model studies), qualitative studies, outcome measurement instruments (patient-reported outcome measure development, content validity, structural validity, internal consistency, cross-cultural validity/measurement invariance, reliability, measurement error, criterion validity, hypothesis testing for construct validity, and responsiveness), systematic reviews and meta-analyses, and clinical practice guidelines. Readers of this review can distinguish the types of medical studies and choose appropriate tools. In short, comprehensively mastering the relevant knowledge and gaining practical experience are basic requirements for correctly assessing methodological quality.
Topics: Animals; Bias; Humans; Psychometrics; Reproducibility of Results; Research; Research Design
PubMed: 32111253
DOI: 10.1186/s40779-020-00238-8
Korean Journal of Anesthesiology Apr 2020
Review
A properly determined sample size is one of the important factors for scientific and persuasive research. A sample size that can guarantee both clinically significant differences and adequate power for the phenomena of interest, without imposing excessive financial or medical burden, is always the investigator's concern. In this paper, we review the essential factors for sample size calculation: the primary endpoints that are the main concern of the study and the basis for the calculation, the statistics used to analyze those endpoints, type I error and power, and the effect size and its rationale. We also describe a method for calculating an adjusted sample size that accounts for the dropout that inevitably occurs during research. Finally, we present examples of sample size calculations, both appropriately and incorrectly described in published papers, with explanations.
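The dropout adjustment described above is simple arithmetic. A minimal sketch follows; the inflation formula n / (1 - dropout rate) is a common convention, not necessarily the exact method used in the paper:

```python
from math import ceil

def adjust_for_dropout(n_required: int, dropout_rate: float) -> int:
    """Inflate a calculated sample size so that, after the expected
    fraction of participants drops out, roughly n_required remain.
    """
    if not 0 <= dropout_rate < 1:
        raise ValueError("dropout_rate must be in [0, 1)")
    return ceil(n_required / (1 - dropout_rate))

# 63 participants needed per group, 10% expected dropout
print(adjust_for_dropout(63, 0.10))  # -> 70 to recruit per group
```

Note that dividing by (1 - rate), rather than multiplying n_required by (1 + rate), is the standard choice: it guarantees the target count survives the expected attrition.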
Topics: Biometry; Humans; Patient Dropouts; Research Design; Sample Size
PubMed: 32229812
DOI: 10.4097/kja.19497
PLoS Biology Mar 2020
Credibility of scientific claims is established with evidence for their replicability using new data. According to common understanding, replication is repeating a study's procedure and observing whether the prior finding recurs. This definition is intuitive, easy to apply, and incorrect. We propose that replication is a study for which any outcome would be considered diagnostic evidence about a claim from prior research. This definition reduces emphasis on operational characteristics of the study and increases emphasis on the interpretation of possible outcomes. The purpose of replication is to advance theory by confronting existing understanding with new evidence. Ironically, the value of replication may be strongest when existing understanding is weakest. Successful replication provides evidence of generalizability across the conditions that inevitably differ from the original study; unsuccessful replication indicates that the reliability of the finding may be more constrained than previously recognized. Defining replication as a confrontation of current theoretical expectations clarifies its important, exciting, and generative role in scientific progress.
Topics: Data Interpretation, Statistical; Humans; Reproducibility of Results; Research Design; Statistics as Topic
PubMed: 32218571
DOI: 10.1371/journal.pbio.3000691
Journal of Thrombosis and Haemostasis :... Aug 2017
Topics: Biomedical Research; Comprehension; Humans; Periodicals as Topic; Prospective Studies; Research Design; Retrospective Studies; Terminology as Topic
PubMed: 28762625
DOI: 10.1111/jth.13776
BMJ (Clinical Research Ed.) May 2015
Topics: Decision Support Techniques; Humans; Patient Selection; Randomized Controlled Trials as Topic; Research Design
PubMed: 25956159
DOI: 10.1136/bmj.h2147
Journal of Physiotherapy Oct 2017
Topics: Case-Control Studies; Humans; Research Design
PubMed: 28966003
DOI: 10.1016/j.jphys.2017.08.007