Research in Nursing & Health Feb 2017
Review
Qualitative description (QD) is a term that is widely used to describe qualitative studies of health care and nursing-related phenomena. However, limited discussions regarding QD are found in the existing literature. In this systematic review, we identified characteristics of methods and findings reported in research articles published in 2014 whose authors identified the work as QD. After searching and screening, data were extracted from the sample of 55 QD articles and examined to characterize research objectives, design justification, theoretical/philosophical frameworks, sampling and sample size, data collection and sources, data analysis, and presentation of findings. In this review, three primary findings were identified. First, although there were some inconsistencies, most articles included characteristics consistent with the limited available QD definitions and descriptions. Next, flexibility or variability of methods was common and effective for obtaining rich data and achieving understanding of a phenomenon. Finally, justification for how a QD approach was chosen and why it would be an appropriate fit for a particular study was limited in the sample and, therefore, in need of increased attention. Based on these findings, recommendations include encouragement to researchers to provide as many details as possible regarding the methods of their QD studies so that readers can determine whether the methods used were reasonable and effective in producing useful findings. © 2016 Wiley Periodicals, Inc.
Topics: Humans; Qualitative Research; Research Design
PubMed: 27686751
DOI: 10.1002/nur.21768
ILAR Journal 2002
Scientists who use animals in research must justify the number of animals to be used, and committees that review proposals to use animals in research must review this justification to ensure the appropriateness of the number of animals to be used. This article discusses when the number of animals to be used can best be estimated from previous experience and when a simple power and sample size calculation should be performed. Even complicated experimental designs requiring sophisticated statistical models for analysis can usually be simplified to a single key or critical question so that simple formulae can be used to estimate the required sample size. Approaches to sample size estimation for various types of hypotheses are described, and equations are provided in the Appendix. Several websites are cited for more information and for performing actual calculations.
Topics: Animals; Data Interpretation, Statistical; Models, Statistical; Pilot Projects; Research Design; Sample Size
PubMed: 12391396
DOI: 10.1093/ilar.43.4.207
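The "simple formulae" this article refers to can be illustrated with the standard normal-approximation sample-size calculation for comparing two group means. This is a generic sketch of that textbook formula, not the specific equations from the article's Appendix:

```python
import math
from statistics import NormalDist

def two_sample_n(delta, sigma, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided comparison of two means,
    using the normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta) ** 2
    delta: minimal detectable difference; sigma: common standard deviation.
    """
    z = NormalDist().inv_cdf  # standard normal quantile function
    n = 2 * ((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2
    return math.ceil(n)
```

For a difference of one standard deviation (delta equal to sigma) at a two-sided 5% alpha and 80% power, this gives 16 animals per group; an exact t-based calculation would add one or two more.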
Asian Nursing Research Dec 2019
Review
Scoping reviews are a useful approach to synthesizing research evidence, although their objectives and methods differ from those of systematic reviews; yet some confusion persists around how to plan and prepare so that a completed scoping review complies with best practice in methods and meets international standards for reporting criteria. This paper describes how to use available guidance to ensure a scoping review project meets global standards, has transparency of methods, and promotes readability through the use of innovative approaches to data analysis and presentation. We address some of the common issues such as which projects are more suited to systematic reviews, how to avoid an inadequate search and/or poorly reported search strategy, poorly described methods and lack of transparency, and the issue of how to plan and present results that are clear, visually compelling, and accessible to readers. Effective pre-planning, adhering to protocol, and detailed consideration of how the results data will be communicated to the readership are critical. The aim of this article is to provide clarity about what is meant by conceptual clarity and how pre-planning enables review authors to produce scoping reviews which are of high quality, reliability and readily publishable.
Topics: Evidence-Based Practice; Humans; Patient Selection; Publishing; Research Design; Review Literature as Topic
PubMed: 31756513
DOI: 10.1016/j.anr.2019.11.002
Internal and Emergency Medicine Feb 2017
Network meta-analysis is a technique for comparing multiple treatments simultaneously in a single analysis by combining direct and indirect evidence within a network of randomized controlled trials. Network meta-analysis may assist in assessing the comparative effectiveness of different treatments regularly used in clinical practice and, therefore, has become attractive among clinicians. However, if proper caution is not taken in conducting and interpreting network meta-analysis, inferences might be biased. The aim of this paper is to illustrate the process of network meta-analysis with the aid of a working example on first-line medical treatment for primary open-angle glaucoma. We discuss the key assumption of network meta-analysis, as well as the unique considerations for developing appropriate research questions, conducting the literature search, abstracting data, performing qualitative and quantitative synthesis, presenting results, drawing conclusions, and reporting the findings in a network meta-analysis.
Topics: Humans; Models, Statistical; Network Meta-Analysis; Research; Research Design; Review Literature as Topic
PubMed: 27913917
DOI: 10.1007/s11739-016-1583-7
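The combination of direct and indirect evidence that network meta-analysis builds on can be illustrated with its simplest building block: an adjusted indirect comparison through a common comparator (the Bucher method). The numbers below are hypothetical, not drawn from the glaucoma example:

```python
from math import sqrt

def bucher_indirect(d_ab, se_ab, d_cb, se_cb):
    """Adjusted indirect comparison of treatment A vs. C through a common
    comparator B (Bucher method), assuming transitivity:
        d_AC = d_AB - d_CB,  se_AC = sqrt(se_AB^2 + se_CB^2)
    Effects must be on an additive scale (e.g. log odds ratios).
    """
    d_ac = d_ab - d_cb
    se_ac = sqrt(se_ab ** 2 + se_cb ** 2)
    return d_ac, se_ac
```

Note how the indirect estimate's standard error is larger than either direct one; a full network meta-analysis pools such indirect paths with any available direct A-vs-C trials, under the same transitivity assumption.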
Medical Care Jan 2022
BACKGROUND
Pilot studies test the feasibility of methods and procedures to be used in larger-scale studies. Although numerous articles describe guidelines for the conduct of pilot studies, few have included specific feasibility indicators or strategies for evaluating multiple aspects of feasibility. In addition, using pilot studies to estimate effect sizes to plan sample sizes for subsequent randomized controlled trials has been challenged; however, there has been little consensus on alternative strategies.
METHODS
In Section 1, specific indicators (recruitment, retention, intervention fidelity, acceptability, adherence, and engagement) are presented for feasibility assessment of data collection methods and intervention implementation. Section 1 also highlights the importance of examining feasibility when adapting an intervention tested in mainstream populations to a new, more diverse group. In Section 2, statistical and design issues are presented, including sample sizes for pilot studies, estimates of minimally important differences, design effects, confidence intervals (CIs), and nonparametric statistics. An in-depth treatment of the limits of effect size estimation as well as process variables is presented. Tables showing CIs around parameters are provided. With small samples, estimates of effect size, completion rate, and adherence rate will have wide CIs.
CONCLUSION
This commentary offers examples of indicators for evaluating feasibility, and of the limits of effect size estimation in pilot studies. As demonstrated, most pilot studies should not be used to estimate effect sizes, provide power calculations for statistical tests or perform exploratory analyses of efficacy. It is hoped that these guidelines will be useful to those planning pilot/feasibility studies before a larger-scale study.
Topics: Feasibility Studies; Guidelines as Topic; Humans; Pilot Projects; Research Design
PubMed: 34812790
DOI: 10.1097/MLR.0000000000001664
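The point about wide confidence intervals in small pilot samples is easy to demonstrate with a Wilson score interval around a feasibility proportion such as retention. The retention figures below are hypothetical, chosen only to show the interval width:

```python
from math import sqrt
from statistics import NormalDist

def wilson_ci(successes, n, conf=0.95):
    """Wilson score confidence interval for a proportion,
    e.g. the retention or adherence rate observed in a pilot study."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half
```

With 16 of 20 participants retained (80%), the 95% interval runs from roughly 0.58 to 0.92, an interval so wide that it illustrates the commentary's warning against using pilot estimates to power a subsequent trial.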
Evidence-based Nursing Apr 2015
Topics: Humans; Qualitative Research; Reproducibility of Results; Research Design
PubMed: 25653237
DOI: 10.1136/eb-2015-102054
Military Medical Research Feb 2020
Review
Methodological quality (risk of bias) assessment is an important step before a study's findings are put to use. Accurately judging the study type is therefore the first priority, and choosing the proper tool is also important. In this review, we introduce methodological quality assessment tools for randomized controlled trials (individual and cluster), animal studies, non-randomized interventional studies (follow-up studies, controlled before-and-after studies, before-after/pre-post studies, uncontrolled longitudinal studies, interrupted time series studies), cohort studies, case-control studies, cross-sectional studies (analytical and descriptive), observational case series and case reports, comparative effectiveness research, diagnostic studies, health economic evaluations, prediction studies (predictor finding studies, prediction model impact studies, prognostic prediction model studies), qualitative studies, outcome measurement instruments (patient-reported outcome measure development, content validity, structural validity, internal consistency, cross-cultural validity/measurement invariance, reliability, measurement error, criterion validity, hypotheses testing for construct validity, and responsiveness), systematic reviews and meta-analyses, and clinical practice guidelines. Readers of this review will be able to distinguish the types of medical studies and choose appropriate tools. In short, comprehensively mastering the relevant knowledge and implementing more practice are the basic requirements for correctly assessing methodological quality.
Topics: Animals; Bias; Humans; Psychometrics; Reproducibility of Results; Research; Research Design
PubMed: 32111253
DOI: 10.1186/s40779-020-00238-8
Journal of Vascular Surgery Aug 2020
Topics: Aortic Aneurysm, Abdominal; Humans; Patient Selection; Research Design
PubMed: 32711903
DOI: 10.1016/j.jvs.2020.01.036
PLoS Genetics Jul 2020
Topics: Humans; Research Design; Science
PubMed: 32667915
DOI: 10.1371/journal.pgen.1008950
Qualitative Health Research Mar 2017
Saturation is a core guiding principle to determine sample sizes in qualitative research, yet little methodological research exists on parameters that influence saturation. Our study compared two approaches to assessing saturation: code saturation and meaning saturation. We examined sample sizes needed to reach saturation in each approach, what saturation meant, and how to assess saturation. Examining 25 in-depth interviews, we found that code saturation was reached at nine interviews, whereby the range of thematic issues was identified. However, 16 to 24 interviews were needed to reach meaning saturation where we developed a richly textured understanding of issues. Thus, code saturation may indicate when researchers have "heard it all," but meaning saturation is needed to "understand it all." We used our results to develop parameters that influence saturation, which may be used to estimate sample sizes for qualitative research proposals or to document in publications the grounds on which saturation was achieved.
Topics: Humans; Interviews as Topic; Qualitative Research; Research Design; Sample Size
PubMed: 27670770
DOI: 10.1177/1049732316665344
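One simple way to operationalize the "code saturation" idea above is to track when consecutive interviews stop contributing new codes to the codebook. This sketch is an illustrative assumption, not the authors' actual procedure, and says nothing about the deeper "meaning saturation" the study distinguishes:

```python
def code_saturation_point(codes_per_interview, window=3):
    """Return the 1-based index of the last interview that yielded a new
    code, once `window` consecutive interviews add nothing new.
    Returns None if saturation is never reached in the data.
    codes_per_interview: iterable of sets of code labels, in interview order.
    """
    seen, run, last_new = set(), 0, 0
    for i, codes in enumerate(codes_per_interview, start=1):
        if set(codes) - seen:          # this interview added new codes
            run, last_new = 0, i
            seen |= set(codes)
        else:                          # nothing new; extend the quiet run
            run += 1
            if run >= window:
                return last_new
    return None
```

A stopping rule like this captures only the breadth of issues ("heard it all"); the study's finding is that richly understanding those issues may take two to three times as many interviews.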