Search Strategy Formulation (5 parts)

Read this article first – Literature Searching for Practice Research.pdf

1. Complete a Boolean search guided by keywords related to "GLP-1 agonist and anesthesia risk." Use a minimum of 3 bibliographic databases. Select articles published between 2017 and 2023 with Level 1-2 evidence. Use this library link: https://web.p.ebscohost.com/ehost/search/advanced?vid=0&sid=7bdcabf7-f7e4-4b88-bee0-4167f9bc1de3%40redis (an example search string appears after this list).

2. Describe the process for selecting studies: how studies were screened, and what criteria were used to include and exclude studies in the systematic review.

3. List the inclusion and exclusion criteria.

4. Provide three examples of excluded studies and the rationale for each.

5. Provide the number of documents identified, screened, and assessed for eligibility, and the reasons for exclusion, using a flow diagram (see the PRISMA template and checklist attachments). The PRISMA Diagram Generator may be helpful: use the "Create Flow Diagram" tab and follow the video guide below. A worked example of how the counts reconcile appears after the flow diagram template later in this document.

6. Attach the PRISMA Diagram to show how many relevant studies were found.

7. Refer to Chapter 5, pp. 204-236, in Polit and Beck. The textbook should appear as one of the references for this assignment.
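For item 1, the sketch below shows one hypothetical way such a Boolean search string could be formulated. The drug names, synonyms, and limiters are illustrative assumptions, not required terms; adapt the operators and field tags to each database's interface:

    ("GLP-1 receptor agonist*" OR "glucagon-like peptide-1" OR semaglutide OR liraglutide OR dulaglutide OR exenatide)
    AND (anesthesia OR anesthetic OR perioperative OR "procedural sedation")
    AND (risk* OR aspiration OR complication*)
    Limiters: published 2017-2023; peer reviewed; systematic reviews, meta-analyses, or randomized controlled trials (Level 1-2 evidence)

Databases such as CINAHL, MEDLINE, and the Cochrane Database of Systematic Reviews (availability depends on the library's EBSCOhost subscription) would satisfy the three-database minimum.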

      

Criteria:

1. Boolean search
2. Which databases and key terms were used (remember to use only the library link provided)
3. Inclusion/exclusion criteria
4. Examples of excluded materials and rationale
5. Research studies within a 5-year span, Level 1-2 evidence
6. PRISMA diagram
7. AMA format (an example reference appears below)
8. References
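For the AMA format and references criteria, the required textbook entry would look roughly like the following in AMA style (a sketch based on the edition details given later in this document; verify against the copy you use):

    Polit DF, Beck CT. Nursing Research: Generating and Assessing Evidence for Nursing Practice. 11th ed. Wolters Kluwer; 2021.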

   

PRISMA 2020 Checklist

(Columns in the original table: Section and Topic; Item #; Checklist item; Location where item is reported. The location column is blank in this template.)

TITLE
1. Title: Identify the report as a systematic review.

ABSTRACT
2. Abstract: See the PRISMA 2020 for Abstracts checklist.

INTRODUCTION
3. Rationale: Describe the rationale for the review in the context of existing knowledge.
4. Objectives: Provide an explicit statement of the objective(s) or question(s) the review addresses.

METHODS
5. Eligibility criteria: Specify the inclusion and exclusion criteria for the review and how studies were grouped for the syntheses.
6. Information sources: Specify all databases, registers, websites, organisations, reference lists and other sources searched or consulted to identify studies. Specify the date when each source was last searched or consulted.
7. Search strategy: Present the full search strategies for all databases, registers and websites, including any filters and limits used.
8. Selection process: Specify the methods used to decide whether a study met the inclusion criteria of the review, including how many reviewers screened each record and each report retrieved, whether they worked independently, and if applicable, details of automation tools used in the process.
9. Data collection process: Specify the methods used to collect data from reports, including how many reviewers collected data from each report, whether they worked independently, any processes for obtaining or confirming data from study investigators, and if applicable, details of automation tools used in the process.
10a. Data items: List and define all outcomes for which data were sought. Specify whether all results that were compatible with each outcome domain in each study were sought (e.g. for all measures, time points, analyses), and if not, the methods used to decide which results to collect.
10b. Data items: List and define all other variables for which data were sought (e.g. participant and intervention characteristics, funding sources). Describe any assumptions made about any missing or unclear information.
11. Study risk of bias assessment: Specify the methods used to assess risk of bias in the included studies, including details of the tool(s) used, how many reviewers assessed each study and whether they worked independently, and if applicable, details of automation tools used in the process.
12. Effect measures: Specify for each outcome the effect measure(s) (e.g. risk ratio, mean difference) used in the synthesis or presentation of results.
13a. Synthesis methods: Describe the processes used to decide which studies were eligible for each synthesis (e.g. tabulating the study intervention characteristics and comparing against the planned groups for each synthesis (item #5)).
13b. Synthesis methods: Describe any methods required to prepare the data for presentation or synthesis, such as handling of missing summary statistics, or data conversions.
13c. Synthesis methods: Describe any methods used to tabulate or visually display results of individual studies and syntheses.
13d. Synthesis methods: Describe any methods used to synthesize results and provide a rationale for the choice(s). If meta-analysis was performed, describe the model(s), method(s) to identify the presence and extent of statistical heterogeneity, and software package(s) used.
13e. Synthesis methods: Describe any methods used to explore possible causes of heterogeneity among study results (e.g. subgroup analysis, meta-regression).
13f. Synthesis methods: Describe any sensitivity analyses conducted to assess robustness of the synthesized results.
14. Reporting bias assessment: Describe any methods used to assess risk of bias due to missing results in a synthesis (arising from reporting biases).
15. Certainty assessment: Describe any methods used to assess certainty (or confidence) in the body of evidence for an outcome.

RESULTS
16a. Study selection: Describe the results of the search and selection process, from the number of records identified in the search to the number of studies included in the review, ideally using a flow diagram.
16b. Study selection: Cite studies that might appear to meet the inclusion criteria, but which were excluded, and explain why they were excluded.
17. Study characteristics: Cite each included study and present its characteristics.
18. Risk of bias in studies: Present assessments of risk of bias for each included study.
19. Results of individual studies: For all outcomes, present, for each study: (a) summary statistics for each group (where appropriate) and (b) an effect estimate and its precision (e.g. confidence/credible interval), ideally using structured tables or plots.
20a. Results of syntheses: For each synthesis, briefly summarise the characteristics and risk of bias among contributing studies.
20b. Results of syntheses: Present results of all statistical syntheses conducted. If meta-analysis was done, present for each the summary estimate and its precision (e.g. confidence/credible interval) and measures of statistical heterogeneity. If comparing groups, describe the direction of the effect.
20c. Results of syntheses: Present results of all investigations of possible causes of heterogeneity among study results.
20d. Results of syntheses: Present results of all sensitivity analyses conducted to assess the robustness of the synthesized results.
21. Reporting biases: Present assessments of risk of bias due to missing results (arising from reporting biases) for each synthesis assessed.
22. Certainty of evidence: Present assessments of certainty (or confidence) in the body of evidence for each outcome assessed.

DISCUSSION
23a. Discussion: Provide a general interpretation of the results in the context of other evidence.
23b. Discussion: Discuss any limitations of the evidence included in the review.
23c. Discussion: Discuss any limitations of the review processes used.
23d. Discussion: Discuss implications of the results for practice, policy, and future research.

OTHER INFORMATION
24a. Registration and protocol: Provide registration information for the review, including register name and registration number, or state that the review was not registered.
24b. Registration and protocol: Indicate where the review protocol can be accessed, or state that a protocol was not prepared.
24c. Registration and protocol: Describe and explain any amendments to information provided at registration or in the protocol.
25. Support: Describe sources of financial or non-financial support for the review, and the role of the funders or sponsors in the review.
26. Competing interests: Declare any competing interests of review authors.
27. Availability of data, code and other materials: Report which of the following are publicly available and where they can be found: template data collection forms; data extracted from included studies; data used for all analyses; analytic code; any other materials used in the review.

From: Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 2021;372:n71. doi: 10.1136/bmj.n71

For more information, visit:
http://www.prisma-statement.org/

PRISMA 2020 flow diagram for new systematic reviews which included searches of databases and registers only

[Flow diagram template: Identification of studies via databases and registers]

Identification:
- Records identified from*: Databases (n = ); Registers (n = )
- Records removed before screening: Duplicate records removed (n = ); Records marked as ineligible by automation tools (n = ); Records removed for other reasons (n = )

Screening:
- Records screened (n = ) → Records excluded** (n = )
- Reports sought for retrieval (n = ) → Reports not retrieved (n = )
- Reports assessed for eligibility (n = ) → Reports excluded: Reason 1 (n = ); Reason 2 (n = ); Reason 3 (n = ); etc.

Included:
- Studies included in review (n = )
- Reports of included studies (n = )

*Consider, if feasible to do so, reporting the number of records identified from each database or register searched (rather than the total number across all databases/registers).

**If automation tools were used, indicate how many records were excluded by a human and how many were excluded by automation tools.
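As a hypothetical worked example of how the template's counts must reconcile (all numbers invented for illustration): if 250 records are identified (databases n = 230, registers n = 20) and 50 duplicates are removed before screening, 200 records are screened; if 160 of those are excluded, 40 reports are sought for retrieval; if 4 cannot be retrieved, 36 are assessed for eligibility; and if 26 are excluded for stated reasons (e.g., wrong population, outcome not reported, Level 3 or lower evidence), 10 studies are included in the review. Each box's n equals the previous box's n minus the exclusions at that step.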



[Table not reproduced: Quick Guide to Bivariate Statistical Tests]

Nursing Research

Generating and Assessing Evidence for Nursing Practice

ELEVENTH EDITION

Denise F. Polit, PhD, FAAN
President
Humanalysis, Inc.
Saratoga Springs, New York, and
Adjunct Professor
Griffith University School of Nursing
Brisbane, Australia
(www.denisepolit.com)

Cheryl Tatano Beck, DNSc, CNM, FAAN
Distinguished Professor
School of Nursing
University of Connecticut
Storrs, Connecticut

Vice President and Publisher: Julie K. Stegman
Director of Nursing Content Publishing: Renee Gagliardi
Acquisitions Editor: Mark Foss
Director of Product Development: Jennifer K. Forestieri
Senior Development Editor: Meredith L. Brittain
Editorial Coordinator: David Murphy
Marketing Manager: Brittany Clements
Editorial Assistant: Molly Kennedy
Design Coordinator: Stephen Druding
Art Director, Illustration: Jennifer Clements
Production Project Manager: Barton Dudlick
Manufacturing Coordinator: Karin Duffield
Prepress Vendor: TNQ Technologies

Eleventh Edition

Copyright © 2021 Wolters Kluwer.

Copyright © 2017 Wolters Kluwer. Copyright © 2012 Wolters Kluwer Health |
Lippincott Williams & Wilkins. Copyright © 2008, 2004, 1999 by Lippincott
Williams & Wilkins. Copyright © 1995, 1991, 1987, 1983, 1978 by J. B.
Lippincott Company.

All rights reserved. This book is protected by copyright. No part of this book
may be reproduced or transmitted in any form or by any means, including as
photocopies or scanned-in or other electronic copies, or utilized by any
information storage and retrieval system without written permission from the
copyright owner, except for brief quotations embodied in critical articles and
reviews. Materials appearing in this book prepared by individuals as part of
their official duties as US government employees are not covered by the
abovementioned copyright. To request permission, please contact Wolters
Kluwer at Two Commerce Square, 2001 Market Street, Philadelphia, PA 19103,
via email at [email protected], or via our website at shop.lww.com
(products and services).

9 8 7 6 5 4 3 2 1

Printed in China

Library of Congress Cataloging- in- Publication Data

ISBN-13: 978-1-975110-64-2

Cataloging in Publication data available on request from publisher.

This work is provided “as is,” and the publisher disclaims any and all
warranties, express or implied, including any warranties as to accuracy,
comprehensiveness, or currency of the content of this work.

This work is no substitute for individual patient assessment based upon
healthcare professionals’ examination of each patient and consideration of,
among other things, age, weight, gender, current or prior medical conditions,
medication history, laboratory data, and other factors unique to the patient.
The publisher does not provide medical advice or guidance, and this work is
merely a reference tool. Healthcare professionals, and not the publisher, are
solely responsible for the use of this work including all medical judgments and
for any resulting diagnosis and treatments.

Given continuous, rapid advances in medical science and health information,
independent professional verification of medical diagnoses, indications,
appropriate pharmaceutical selections and dosages, and treatment options
should be made, and healthcare professionals should consult a variety of
sources. When prescribing medication, healthcare professionals are advised to
consult the product information sheet (the manufacturer’s package insert)
accompanying each drug to verify, among other things, conditions of use,
warnings, and side effects and identify any changes in dosage schedule or
contraindications, particularly if the medication to be administered is new,
infrequently used, or has a narrow therapeutic range. To the maximum extent
permitted under applicable law, no responsibility is assumed by the publisher
for any injury and/or damage to persons or property, as a matter of products
liability, negligence law or otherwise, or from any reference to or use by any
person of this work.

shop.lww.com

TO

The memory of Denise’s husband:

Alan A. Janosy, 1943-2019

Acknowledgments

This 11th edition, like the previous 10 editions, depended on the
contributions of dozens of people. Many faculty and students who
used the text have made invaluable suggestions for its improvement,
and to all of you we are very grateful. In addition to all those who
assisted us during the past 40 years with the earlier editions, the
following individuals deserve special mention.
We would like to acknowledge the comments of reviewers of the
previous edition of this book, anonymous to us initially, whose
feedback influenced our revisions. Faculty at Griffith University in
Australia made useful suggestions and inspired the inclusion of
some new content. Valori Banfi, reference librarian at the University
of Connecticut, provided ongoing assistance. Dr. Carrie Morgan
Eaton at the University of Connecticut provided regular feedback.
Dr. Deborah Dillon McDonald and Dr. Xiaomei Cong were
extraordinarily generous in giving us access to NINR grant
application material for the Resource Manual.
We also extend our thanks to those who helped to turn the
manuscript into a finished product. The staff at Wolters Kluwer has
been of great assistance to us over the years. We are indebted to
Mark Foss, Meredith Brittain, David Murphy, Brittany Clements,
Barton Dudlick, and all the others behind the scenes for their fine
contributions.
Finally, we thank our family and friends. Our husbands Alan and
Chuck have become accustomed to our demanding schedules, but
we recognize that their support involves a lot of patience and many
sacrifices.

Reviewers

Roy K. Aaron, MD
Professor, Orthopedic Surgery; Professor, Molecular Pharmacology, Physiology, and Biotechnology; Warren Alpert Medical School of Brown University, Providence, Rhode Island

Kelley M. Anderson, PhD, FNP, CHFN-K
Associate Professor, Department of Professional Nursing Practice, Georgetown University, Washington, District of Columbia

Debra Bacharz, PhD, MSN, RN
Professor of Nursing, Leach College of Nursing, University of St. Francis, Joliet, Illinois

Kimberly Balko, PhD, RN
Assistant Professor, Department of Nursing, SUNY Empire State College, Saratoga Springs, New York

Susan A. Bonis, PhD, RN
Assistant Clinical Professor, College of Nursing, University of Wisconsin—Milwaukee, Milwaukee, Wisconsin

Barbara Brewer, PhD, RN, MALS, MBA
Associate Professor, College of Nursing, The University of Arizona, Tucson, Arizona

Kathleen A. Fagan, PhD, RN, APN
Associate Professor, Graduate Nursing, School of Nursing, Felician University, Lodi, New Jersey

Tracia Forman, PhD, RN-BC, CNE
Assistant Professor, Department of Nursing, University of Texas Rio Grande Valley, Brownsville, Texas

LaDawna R. Goering, DNP, APN, ANP-BC
Assistant Professor, Department of Nursing, Northern Illinois University, DeKalb, Illinois

Rebecca W. Grizzle, PhD, RN, MSN, NP-C
Clinical Assistant Professor, College of Nursing, Sacred Heart University, Fairfield, Connecticut

Ashlyn Johnson, DNP, FNP-BC
Assistant Professor of Nursing, MSN Program (FNP & PMHNP Tracks), Mount Marty College, Yankton, South Dakota

Kara Misto, PhD, RN
Assistant Professor, School of Nursing, Rhode Island College, Providence, Rhode Island

Stephen J. Stapleton, PhD, MS, RN, CEN, FAEN
Associate Professor, Mennonite College of Nursing, Illinois State University, Normal, Illinois

Debbie Stayer, PhD, RN-BC, CCRN-K
Assistant Professor, Department of Nursing, Bloomsburg University, Bloomsburg, Pennsylvania

Kathleen Thompson, PhD, RN, CNE
Clinical Professor, Department of Nursing, University of Tennessee, Knoxville, Knoxville, Tennessee

Ann Tritak, EdD, RN
Associate Dean, Department of Graduate Nursing, Felician University, Lodi, New Jersey

Shelly Wells, PhD, MBA, MS, APRN-CNS
Division Chair and Professor, Division of Nursing, Northwestern Oklahoma State University, Alva, Oklahoma

Kelli D. Whittington, PhD, RN, CNE
Chair, Division of Nursing, McKendree University, Lebanon, Illinois

Preface
Research methodology is a dynamic enterprise. Even after writing 10
editions of this book, we have continued to draw new material and
inspiration from ground- breaking advances in research methods and
in nurse researchers’ use of those methods. It is thrilling to share
many of those developments in this new edition. We expect that
many of the new methodologic and technological enhancements will
be translated into powerful evidence for nursing practice. Four years
ago, we considered the 10th edition as a watershed edition of a
classic textbook, having added two new chapters. We are persuaded,
however, that this 11th edition is even better than the previous one.
We have retained many features that made this book a classic
textbook and resource, including its focus on research as a support
for evidence- based nursing, but have introduced important
innovations that we hope will help to shape the future of nursing
research.

New to This Edition

New Chapters
We are excited to have added two new chapters to this edition. The
first new chapter (Chapter 12) focuses on quality improvement and
improvement science. Quality improvement (QI) has not historically
been considered “research” because knowledge from QI has been
deemed too localized to be of broad interest. Yet, QI initiatives
undertaken by interprofessional teams often yield important lessons
for healthcare professionals in diverse settings. In the new chapter,
we discuss methods and frameworks that can be used to develop
and assess improvement projects.
We are particularly enthusiastic about our second new chapter,
which concerns the applicability, generalizability, and relevance of
research evidence (Chapter 31). There is growing awareness that
approaches being used to support evidence- based practice (EBP)
have limitations in terms of their applicability to individual patients
or subgroups of patients. EBP efforts prioritize rigorous evidence
from tightly controlled studies with select populations that often
exclude many patients typically seen in real-world settings.
Moreover, evidence for EBP usually represents average effects for
these atypical populations. Our new chapter describes a range of
cutting-edge strategies for generating practice-based evidence that is
patient-centered. We discuss such approaches as comparative
effectiveness research, pragmatic clinical trials, adaptive
interventions, SMART designs, subgroup (moderator) analyses, and
multivariable risk-stratified analyses for better understanding the
diversity of treatment effects. This chapter is consistent with the
emerging interest in precision healthcare.

Extensively Revised Chapters
We have made major revisions to two chapters in this book. We have
revamped Chapter 2, the chapter on evidence- based practice, to

better guide efficient evidence searches (e.g., via the 6S hierarchy of
preappraised evidence) and for ranking evidence on traditional
level-of-evidence (LOE) scales. We have also extensively revised
another chapter that has relevance for EBP: the chapter on systematic
reviews (Chapter 30). The types of reviews being undertaken, and
the methods used to conduct them, have expanded considerably in
recent years. We describe in some detail the GRADE system for
assessing the degree of confidence a review team has in the estimated
effects of an intervention on key outcomes. We also describe
differences in two broad approaches to qualitative syntheses,
distinguishing interpretive approaches (metasyntheses) from
aggregative approaches using meta- aggregation.

New and Added Content
Throughout the book, we have included material on methodologic
innovations that have arisen in nursing, medicine, and the social
sciences during the past 4 to 5 years. The many additions and
changes are too numerous to describe here. One deserves special
mention, however: we have revised the chapter on qualitative data
analysis (Chapter 25) to provide greater support for the actual tasks
of coding and categorizing data.
The inclusion of two new chapters and the expansion of others made
it challenging to keep the textbook to a manageable length. Our
solution was to include some content in supplements that are
available online. Every chapter has an online supplement (and some
chapters in this edition have two supplements), which gave us the
opportunity to add a considerable amount of new material. For
example, one new supplement is devoted to the conduct of
plausibility analyses as a tool for strengthening internal validity in
nonrandomized intervention studies. Other supplements include a
description of various randomization methods such as urn
randomization, an overview of item response theory, and a
description of statistical process control.
Here is a complete list of the supplements for the 33 chapters of the textbook:

1. The History of Nursing Research
2. A. Evaluating Clinical Practice Guidelines—AGREE II; B. Evidence-Based Practice in an Organizational Context
3. Deductive and Inductive Reasoning
4. Complex Relationships and Hypotheses
5. A. Finding Evidence for a Clinical Query; B. Literature Review Summary Tables
6. Prominent Conceptual Models of Nursing Used by Nurse Researchers, and a Guide to Middle-Range Theories
7. Historical Background on Unethical Research Conduct
8. Research Control
9. Randomization Strategies
10. A. Selected Experimental and Quasi-Experimental Designs: Diagrams, Uses, and Drawbacks/Validity Threats; B. Plausibility Assessments and Other Strategies When Randomization is Not Possible
11. Other Specific Types of Research
12. Statistical Process Control
13. Sample Recruitment and Retention
14. Other Types of Structured Self-Reports
15. Cross-Cultural Validity and the Adaptation/Translation of Measures
16. Overview of Item Response Theory
17. SPSS Analysis of Descriptive Statistics
18. SPSS Analysis of Inferential Statistics
19. SPSS Analysis and Multivariate Statistics
20. Some Preliminary Steps in Quantitative Analysis Using SPSS
21. Clinical Significance Assessment with the Jacobson–Truax Approach
22. Historical Nursing Research and Other Types of Qualitative Inquiry
23. Models of Generalizability in Qualitative Research
24. Additional Types of Unstructured Self-Reports
25. Transcribing Qualitative Data
26. Whittemore and Colleagues' Framework of Quality Criteria in Qualitative Research
27. Transforming Quantitative and Qualitative Data
28. Complex Intervention Development: Additional Resources
29. Examples of Various Pilot and Feasibility Objectives
30. A. Publication Bias in Systematic Reviews; B. Supplementary Resources for Qualitative Evidence Synthesis
31. The RE-AIM Framework
32. A. Tips for Publishing Reports on Pilot Intervention Studies; B. Impact Factor and Publication Information for Selected Nursing Journals
33. Proposals for Pilot Intervention Studies
Another feature of this edition concerns readers’ access to references
we cited. To the extent possible, the studies we have chosen as
examples of research methods are published as open- access articles.
These studies are identified in the reference list at the end of each
chapter, and a link to the articles is included in the accompanying
Resource Manual for Nursing Research, 11th Edition (available for
separate purchase) in the online Toolkit (for more information, see
the section “A Comprehensive Package for Teaching and Learning”
later in this preface.) In addition, one Wolters Kluwer article per
chapter that is available on the book’s companion website is also
identified in each chapter’s reference list.
We hope that our many revisions will help users of this book to
maximize their learning experience.

Organization of the Text
The content of this edition is organized into six main parts.

Part 1—Foundations of Nursing Research and Evidence- Based
Practice introduces fundamental concepts in nursing research.
Chapter 1 briefly summarizes the history and future of nursing
research, discusses the philosophical underpinnings of
qualitative research versus quantitative research, and describes
major purposes of nursing research. Chapter 2, extensively
revised, offers guidance on using research to support evidence-
based practice. Chapter 3 introduces readers to key research
terms and presents an overview of steps in the research process
for both qualitative and quantitative studies.
Part 2—Conceptualizing and Planning a Study to Generate
Evidence for Nursing further sets the stage for learning about
the research process by discussing issues relating to a study’s
conceptualization: the formulation of research questions and
hypotheses (Chapter 4), the review of relevant research (Chapter
5), the development of theoretical and conceptual contexts
(Chapter 6), and the fostering of ethically acceptable approaches
in doing research (Chapter 7). Chapter 8 provides an overview
of important issues that researchers must attend to during the
planning of any study.
Part 3—Designing and Conducting Quanti tative Studies to
Generate Evidence for Nursing presents material on
undertaking quantitative nursing studies. Chapter 9 describes
fundamental principles of quantitative research design, and
Chapter 10 focuses on methods to enhance the rigor of a
quantitative study, including mechanisms of research control.
Chapter 11 examines research with different and distinct
purposes, such as noninferiority trials, realist evaluations,
surveys, and outcomes research. Chapter 12, a new chapter in
this edition, is devoted to methods used in quality improvement

and improvement science. Chapter 13 presents strategies for
sampling study participants in quantitative research. Chapter 14
describes structured data collection methods that yield
quantitative information. Chapter 15 discusses the concept of
measurement and then focuses on methods of assessing the
quality of formal measuring instruments. In this edition, we
describe methods to assess the properties of point- in- time
measurements (reliability and validity) and longitudinal
measurements—i.e., change scores (reliability of change scores
and responsiveness). Chapter 16 presents material on how to
develop high- quality self- report instruments. Chapters 17, 18,
and 19 present an overview of univariate, bivariate, and
multivariate statistical analyses, respectively. Chapter 20
describes the development of an overall analytic strategy for
quantitative studies, including material on handling missing
data. Chapter 21, a chapter that was added in the 10th edition,
discusses the issue of interpreting results and making inferences
about clinical significance.
Part 4—Designing and Conducting Quali tative Studies to
Generate Evidence for Nursing presents material on
undertaking qualitative nursing studies. Chapter 22 is devoted
to research designs and approaches for qualitative studies,
including information on critical theory, feminist, and
participatory action research. Chapter 23 discusses strategies for
sampling study participants in qualitative inquiries. Chapter 24
describes methods of gathering unstructured self- report and
observational data for qualitative studies. Chapter 25 discusses
methods of analyzing qualitative data, with specific information
on grounded theory, phenomenologic, and ethnographic
analyses. Greater guidance on coding qualitative data has been
added to this edition. Chapter 26 elaborates on methods
qualitative researchers can use to enhance (and assess) integrity
and trustworthiness throughout their inquiries.
Part 5—Designing and Conducting Mixed Methods Studies to
Generate Evidence for Nursing presents material on mixed

methods nursing studies. Chapter 27 discusses a broad range of
issues, including asking mixed methods questions, designing a
study to address the questions, sampling participants in mixed
methods research, and analyzing and integrating qualitative
and quantitative data. Chapter 28 presents information about
using mixed methods approaches in the development of
complex nursing interventions. In Chapter 29, a chapter that
was new in the 10th edition, we provide suggestions for
designing and conducting pilot studies and using data from the
pilots to make decisions about “next steps.”
Part 6—Building an Evidence Base for Nursing Practice
provides additional information on linking research and clinical
practice. Chapter 30 offers an overview of methods of
conducting systematic reviews that support EBP. In this greatly
expanded chapter in this edition, we provide guidance on
conducting meta- analyses (and an evaluation of confidence in
the evidence using the GRADE system), metasyntheses,
qualitative evidence syntheses using meta- aggregation, and
mixed studies reviews. Chapter 31, a new chapter in this
edition, offers cutting-edge advice on strategies to enhance the
applicability of practice- based evidence to clinical decisions for
individuals and subgroups. Chapter 32 discusses the
dissemination of evidence—how to prepare a research report
(including theses and dissertations) and how to publish research
findings. The concluding chapter (Chapter 33) offers
suggestions and guidelines on developing research proposals
and obtaining financial support; it includes information about
applying for NIH grants and interpreting scores from NIH’s
scoring system.

Key Features
This textbook was designed to be helpful to those who are learning
how to do research, as well as to those who are learning to appraise
research reports critically and to use research findings in practice.
Many of the features successfully used in previous editions have
been retained in this 11th edition. Among the basic principles that
helped to shape this and earlier editions of this book are (1) an
unswerving conviction that the development of research skills is
critical to the nursing profession, (2) a fundamental belief that
research is intellectually and professionally rewarding, and (3) a
steadfast opinion that learning about research methods need be
neither intimidating nor dull. Consistent with these principles,
we have tried to present the fundamentals of research methods in a
way that both facilitates understanding and arouses curiosity and
interest. Key features of our approach include the following:

Research examples. Each chapter concludes with one or two
actual research examples designed to highlight methodologic
features described in the chapter and to sharpen the reader’s
critical thinking skills. In addition, many research examples are
used throughout the book to illustrate key points and to
stimulate ideas for a study. Many examples used in this edition
are published as open- access articles that can be used for further
learning and classroom discussion.
Specific practical tips on doing research. The textbook is filled
with practical suggestions on how to translate the abstract
notions of research methods into realistic strategies for
conducting research. Every chapter includes several tips for
applying the chapter’s lessons to real- life situations. These tips
are an acknowledgment that there is often a gap between what
gets taught in research methods textbooks and what a
researcher needs to know to conduct a study.

Critical appraisal guidelines. Almost all chapters include
guidelines for conducting a critical appraisal of various aspects
of a research report.
A comprehensive index. We have crafted an exceptionally
thorough index. We know that our book is used as a reference
book as well as a textbook, and we recognize how crucial it is to
access needed information efficiently.
Aids to student learning. This book includes several additional
features designed to enhance and reinforce learning, including
the following: succinct, bulleted summaries at the end of each
chapter; tables and figures that provide examples and graphic
materials in support of the text discussion; and a detailed
glossary.
Clear, user-friendly style. Our writing style is designed to be
easily digestible and nonintimidating. Concepts are introduced
carefully and systematically, difficult ideas are presented
clearly, and readers are assumed to have no prior exposure to
technical terms.

A Comprehensive Package for Teaching and
Learning
To further facilitate teaching and learning, a carefully designed
ancillary package has been developed to assist faculty and students.

Resources for Instructors
Tools to assist you with teaching your course are available upon
adoption of this text at http://thepoint.lww.com/Polit11e.

An e- Book gives you access to the book’s full text and images
online.
The Test Generator lets you put together exclusive new tests
from a bank containing more than 790 questions to help you in
assessing your students’ understanding of the material.
PowerPoint Presentations summarizing key points in each
chapter provide an easy way for you to integrate the textbook
with your students’ classroom experience, either via slide shows
or handouts. Multiple-choice and true/false questions are
integrated into the presentations to promote class participation
and allow you to use i-clicker technology.
An Image Bank of all the images in the book allows you to use
these illustrations in your PowerPoint slides or as you see fit in
your course.
Other helpful resources include Answers to Application
Exercises (the exercises are found in the student resources) and
Strategies for Effective Teaching.

Contact your sales representative for more details and ordering
information.

Resources for Students

An exciting set of free resources is available to help students review
material and become even more familiar with vital concepts.
Students can access all these resources at
http://thepoint.lww.com/Polit11e using the codes printed in the front
of their textbooks.

Chapter supplements include material to enhance the content
of each chapter (the full list of these supplements is included
earlier in this preface).
Application Exercises test methodologic skills with short-
answer and essay questions related to research studies.
Journal Articles offer access to current research available in
Wolters Kluwer journals.
A Spanish–English Audio Glossary provides helpful terms and
phrases for communicating with patients who speak Spanish.
A description of Nursing Professional Roles and
Responsibilities provides information about these functions.

Resource Manual for Nursing Research, 11th Edition
Available for separate purchase, Resource Manual for Nursing
Research, 11th Edition augments the textbook in important ways.
The manual itself provides students with exercises that correspond
to each text chapter, with opportunities to carefully glean
information from and critically appraise actual studies. The
appendices include 13 research journal articles in their entirety, plus
portions of two successful grant applications for studies funded by
the National Institute of Nursing Research. The 13 reports cover a
range of nursing inquiries, including qualitative, quantitative, and
mixed methods studies, an instrument development study, an
evidence- based practice project, a quality improvement project, and
two systematic reviews. Full critiques of two of the reports are also
included and can serve as models for a comprehensive critical
appraisal.

The online Toolkit to the Resource Manual is a “must have”
innovation that will save considerable time for both students
and seasoned researchers. Included on the manual’s companion
webpage, the Toolkit offers dozens of research resources in
Word documents that can be downloaded and used or adapted
in research projects. The resources reflect best- practice research
material, most of which has been pretested and refined in our
own research. The Toolkit originated with our realization that in
our technologically advanced environment, it is possible to not
only illustrate methodologic tools as graphics in the textbook but
also to make them directly available for use and adaptation.
Thus, we have included dozens of documents in Word format
that can readily be used in research projects, without requiring
researchers to “reinvent the wheel” or tediously retype material
from the textbook. Examples include informed consent forms, a
demographic questionnaire, content validity forms, templates
for statistical tables, and a coding sheet for a meta- analysis—to
name only a few. The Toolkit also lists relevant and useful
websites for each chapter, which can be “clicked” on directly
without having to retype the URL and risk a typographical
error. Links to open- access articles cited in the textbook, as well
as other open- access articles relevant to each chapter, are
included in the Toolkit.

A Comprehensive, Digital, Integrated Course
Solution: Lippincott® CoursePoint
The same trusted solution, innovation, and unmatched support that
you have come to expect from Lippincott CoursePoint is now
enhanced with more engaging learning tools and deeper analytics to
help prepare students for practice. This powerfully integrated,
digital learning solution combines learning tools, case studies, real-time
data, and the most trusted nursing education content on the
market to make curriculum-wide learning more efficient and to meet
students where they are in their learning. And now, it is easier
than ever for instructors and students to use, giving them everything
they need for course and curriculum success!
Lippincott CoursePoint includes:

Engaging course content provides a variety of learning tools to
engage students of all learning styles.
A more personalized learning approach gives students the
content and tools they need at the moment they need it, giving
them data for more focused remediation and helping to boost
their confidence and competence.
Powerful tools, including varying levels of case studies,
interactive learning activities, and adaptive learning powered
by PrepU, help students learn the critical thinking and clinical
judgment skills to help them become practice- ready nurses.
Unparalleled reporting provides in- depth dashboards with
several data points to track student progress and help identify
strengths and weaknesses.
Unmatched support includes training coaches, product trainers,
and nursing education consultants to help educators and
students implement CoursePoint with ease.

It is our hope that the content, style, and organization of Nursing
Research, 11th Edition continue to meet the needs of a broad

spectrum of nursing students and nurse researchers. We also hope
that the book will help to foster enthusiasm for the kinds of
discoveries that research can produce and for the knowledge that
will help support an evidence- based nursing practice.
DENISE F. POLIT, PhD, FAAN

CHERYL TATANO BECK, DNSc, CNM, FAAN

Table of Contents
Part 1 Foundations of Nursing Research and Evidence-Based Practice

Chapter 1 Introduction to Nursing Research in an Evidence-Based Practice Environment

Chapter 2 Evidence-Based Nursing: Translating Research Evidence into Practice

Chapter 3 Key Concepts and Steps in Qualitative and Quantitative Research

Part 2 Conceptualizing and Planning a Study to Generate Evidence for Nursing

Chapter 4 Research Problems, Research Questions, and Hypotheses

Chapter 5 Literature Reviews: Finding and Critically Appraising Evidence

Chapter 6 Theoretical Frameworks

Chapter 7 Ethics in Nursing Research
Chapter 8 Planning a Nursing Study

Part 3 Designing and Conducting Quantitative Studies to Generate Evidence for Nursing

Chapter 9 Quantitative Research Design

Chapter 10 Rigor and Validity in Quantitative Research

Chapter 11 Specific Types of Quantitative Research

Chapter 12 Quality Improvement and Improvement Science
Chapter 13 Sampling in Quantitative Research

Chapter 14 Data Collection in Quantitative Research

Chapter 15 Measurement and Data Quality

Chapter 16 Developing and Testing Self-Report Scales

Chapter 17 Descriptive Statistics

Chapter 18 Inferential Statistics

Chapter 19 Multivariate Statistics
Chapter 20 Processes of Quantitative Data Analysis

Chapter 21 Clinical Significance and Interpretation of Quantitative Results

Part 4 Designing and Conducting Qualitative Studies to Generate Evidence for Nursing

Chapter 22 Qualitative Research Design and Approaches

Chapter 23 Sampling in Qualitative Research

Chapter 24 Data Collection in Qualitative Research
Chapter 25 Qualitative Data Analysis

Chapter 26 Trustworthiness and Rigor in Qualitative Research

Part 5 Designing and Conducting Mixed Methods Studies to Generate Evidence for Nursing

Chapter 27 Basics of Mixed Methods Research

Chapter 28 Developing Complex Nursing Interventions Using Mixed Methods
Research

Chapter 29 Feasibility and Pilot Studies of Interventions Using Mixed Methods

Part 6 Building an Evidence Base for Nursing Practice

Chapter 30 Systematic Reviews of Research Evidence
Chapter 31 Applicability, Generalizability, and Relevance: Toward Practice-Based Evidence

Chapter 32 Disseminating Evidence: Reporting Research Findings

Chapter 33 Writing Proposals to Generate Evidence

Appendix: Statistical Tables of Theoretical Probability Distributions
Glossary

Index

PART 1
Foundations of Nursing Research and Evidence-Based Practice

Chapter 1 Introduction to Nursing Research in an Evidence-Based Practice Environment
Chapter 2 Evidence-Based Nursing: Translating Research Evidence into Practice
Chapter 3 Key Concepts and Steps in Qualitative and Quantitative Research

CHAPTER 1

Introduction to Nursing Research in an Evidence-Based Practice Environment

Nursing Research in Perspective
In all parts of the world, nursing has experienced a profound culture
change. Nurses are increasingly expected to understand and conduct
research, and to base their professional practice, in part, on research
evidence—that is, to adopt an evidence- based practice (EBP). EBP
involves using the best evidence (as well as clinical judgment and
patient preferences and circumstances) in making patient care
decisions, and “best evidence” typically comes from research
conducted by nurses and other healthcare professionals.

What is Nursing Research?
Research is systematic inquiry that relies on disciplined methods to
answer questions or solve problems. Nurses are increasingly
engaged in disciplined studies that benefit nursing and its clients.
Nursing research is systematic inquiry designed to generate
evidence about issues of importance to the nursing profession,
including nursing practice, education, administration, and
informatics. In this book, we emphasize clinical nursing research
aimed at guiding nursing practice and improving the health and
quality of life of nurses’ clients.
Nursing research has experienced remarkable growth in the past few
decades, providing nurses with a growing evidence base from which
to practice. Yet many questions persist, and mechanisms for
incorporating research innovations into nursing practice still are in
development.

Examples of Nursing Research Questions:

How effective is a web- based intervention in improving parent–
adolescent communication about sexuality and sexual health?
(Varas- Díaz et al., 2019)
What are the experiences of college students who are newly
diagnosed with type 1 diabetes mellitus? (Saylor et al., 2019)

The Importance of Research in Nursing
Findings from rigorous research provide evidence for informing
nurses’ decisions. Nurses have come to accept the desirability of
incorporating research evidence into their actions, if the evidence
shows that the actions are clinically appropriate and result in
positive patient outcomes.
In some countries, research plays an important role in nursing
credentialing and status. For example, the American Nurses
Credentialing Center—an arm of the American Nurses Association
and a prestigious credentialing organization in the United States—
developed a Magnet Recognition Program to acknowledge
healthcare organizations that provide high- quality nursing care. The
2019 Magnet application manual incorporates revisions that
strengthen evidence- based requirements (Graystone, 2017). Indeed,
applicants must now submit at least three nursing studies, indicating
that Magnet hospitals must not only be involved in EBP but also in
the creation of new practice knowledge. The good news is that there
is growing evidence that the focus on research and EBP may have
important payoffs. For example, Barnes and coresearchers (2016)
found that Magnet hospitals had lower rates of central line–
associated bloodstream infection than non- Magnet hospitals, even
when differences in other hospital characteristics were taken into
account. And McCaughey et al. (2019) found that patients treated at
a Magnet hospital were more satisfied with their care than patients
in non- Magnet hospitals.
Changes to nursing practice now occur regularly because of EBP
efforts. Practice changes often are local initiatives that are not
publicized, but broader clinical changes are also occurring based on

accumulating research evidence about beneficial practice
innovations.

Example of Evidence- Based Practice:
“Kangaroo care” (the holding of diaper-clad infants skin-to-skin
by parents) is now routinely practiced in neonatal
intensive care units (NICUs), but before 2000, only a minority of
NICUs offered kangaroo care options. Expanded adoption of
this practice reflects mounting evidence that early skin-to-skin
contact has benefits without negative side effects (e.g., Johnston
et al., 2017; Moore et al., 2016). Some of that evidence came
from rigorous studies conducted by nurse researchers (e.g.,
Bastani et al., 2017; Billner- Garcia et al., 2018; Cho et al., 2016).

The Consumer–Producer Continuum in Nursing
Research
Most nurses are likely to engage in research activities along a
continuum of participation. At one end are consumers of nursing
research, who read research reports or research summaries to keep
up- to- date on findings that might affect their practice. EBP depends
on well- informed research consumers.
At the other end of the continuum are producers of nursing research:
nurses who conduct research. At one time, most nurse researchers
were academics who taught in nursing schools, but research is
increasingly being conducted by clinical nurses who seek solutions
to recurring problems in patient care.
Between these end points on the continuum lie a variety of research
activities that are undertaken by nurses. Even if you never
personally carry out a study, you may (1) contribute to an idea for a
clinical study; (2) gather information for a study; (3) advise clients
about participating in research; (4) seek answers to a clinical problem
by searching for and appraising research evidence; or (5) discuss the
implications of a study in a journal club in your practice setting,
which involves meetings (in groups or online) to discuss research

articles. Understanding research can improve the depth and breadth
of every nurse’s professional practice.

TIP The Cochrane Collaboration, an important organization
for EBP, offers an online journal club resource with podcasts,
slides, and discussion questions
(http://www.cochranejournalclub.com). Journal clubs can help
to create an environment of lifelong learning and can foster a
commitment to EBP (Gardner et al., 2016). Links to some
articles about journal clubs are provided in the Toolkit in the
accompanying Resource Manual.

Nursing Research in Historical Perspective
Table 1.1 summarizes some of the key events in the historical
evolution of nursing research. An expanded summary of the history
of nursing research appears in the Supplement to this chapter on
the book's companion website.

TABLE 1.1
Historical Landmarks in Nursing Research

1859: Nightingale's Notes on Nursing is published.
1900: American Journal of Nursing begins publication.
1923: Columbia University establishes first doctoral program for nurses. Goldmark Report with recommendations for nursing education is published.
1936: Sigma Theta Tau awards first nursing research grant in the United States.
1948: Brown publishes report on inadequacies of nursing education.
1952: The journal Nursing Research begins publication.
1955: Inception of the American Nurses' Foundation to sponsor nursing research.
1957: Establishment of nursing research center at Walter Reed Army Institute of Research.
1963: International Journal of Nursing Studies begins publication.
1965: American Nurses' Association (ANA) sponsors nursing research conferences.
1969: Canadian Journal of Nursing Research begins publication.
1972: ANA establishes a Commission on Research and Council of Nurse Researchers.
1976: Stetler and Marram publish guidelines on assessing research for use in practice. Journal of Advanced Nursing begins publication.
1982: Conduct and Utilization of Research in Nursing (CURN) project publishes report.
1983: Annual Review of Nursing Research begins publication.
1985: ANA Cabinet on Nursing Research establishes research priorities.
1986: National Center for Nursing Research (NCNR) is established within U.S. National Institutes of Health.
1988: Conference on Research Priorities is convened by NCNR.
1989: The U.S. Agency for Health Care Policy and Research (AHCPR) is established.
1993: NCNR becomes a full institute, the National Institute of Nursing Research (NINR). The Cochrane Collaboration is established. Magnet Recognition Program makes first awards.
1995: Joanna Briggs Institute, an EBP collaborative, is established in Australia.
1997: Canadian Health Services Research Foundation is established with federal funding.
1998: The European Academy of Nursing Science (EANS) is launched.
1999: AHCPR is renamed Agency for Healthcare Research and Quality (AHRQ).
2000: NINR's annual funding exceeds $100 million. The Canadian Institute of Health Research is launched. Council for the Advancement of Nursing Science (CANS) is established.
2005: The Quality & Safety Education for Nurses (QSEN) initiative is inaugurated.
2006: NINR issues strategic plan for 2006-2010.
2010: The Institute of Medicine publishes a report, The Future of Nursing, that includes research priorities and recommendations for lifelong learning.
2011: NINR celebrates 25th anniversary and issues a new strategic plan.
2016: NINR issues The NINR Strategic Plan: Advancing Science, Improving Lives.
2019: NINR budget exceeds $145 million.

Most people would agree that research in nursing began with
Florence Nightingale in the 1850s. Her most well- known research
contribution involved an analysis of factors affecting soldier
mortality and morbidity during the Crimean War. Based on skillful
analyses, she was successful in effecting changes in nursing care—
and, more generally, in public health. After Nightingale’s work,
research was absent from the nursing literature until the early 1900s,
but most early studies concerned nurses’ education rather than
patient care.
In the 1950s, research by nurses began to accelerate. For example, the
American Nurses’ Foundation, which is devoted to the promotion of
nursing research, was founded. The surge in the number of studies
conducted in the 1950s created the need for a new journal; Nursing
Research came into being in 1952. As shown in Table 1.1,
dissemination opportunities in professional journals grew steadily
thereafter.

In the 1960s, nursing leaders expressed concern about the shortage
of research on practice issues. Professional nursing organizations,
such as the Western Interstate Council for Higher Education in
Nursing, established research priorities, and practice- oriented
research on various clinical topics began to emerge in the literature.
During the 1970s, improvements in client care became a more visible
research priority, and guidance on assessing research for application
in practice settings emerged. Also, nursing research expanded
internationally. For example, the Workgroup of European Nurse
Researchers was established in 1978 to develop greater
communication and opportunities for partnerships among 25
European National Nurses Associations.
In the United States, the National Center for Nursing Research
(NCNR) at the National Institutes of Health (NIH) was established in
1986. Several forces outside of nursing also helped to shape the
nursing research landscape in the 1980s. A group from the McMaster
Medical School in Canada designed a clinical learning strategy that
was called evidence- based medicine (EBM). EBM, which
promulgated the view that research findings were superior to the
opinions of authorities as a basis for clinical decisions, constituted a
profound shift for medical education and practice, and has had a
major effect on all healthcare professions.
Nursing research was strengthened and given more visibility when
NCNR was promoted to full institute status within the NIH. In 1993,
the National Institute of Nursing Research (NINR) was established,
helping to put nursing research more into the mainstream of health
research. Funding opportunities for nursing research expanded in
other countries as well.

Current and Future Directions for Nursing Research
Nursing research continues to develop at a rapid pace and will
undoubtedly flourish throughout the 21st century. Broadly
speaking, the priority for future nursing research will be the
promotion of excellence in nursing science. Toward this end, nurse
researchers and practicing nurses will be sharpening their research

skills and using those skills to address emerging issues of
importance to the profession and its clientele. Among the trends we
foresee for the early 21st century are the following:

Continued focus on EBP. Encouragement for nurses to engage in evidence-
based patient care and lifelong learning is sure to continue. In turn,
improvements will be needed both in the quality of studies and in nurses’
skills in locating, understanding, critically appraising, and using relevant
study results. Relatedly, there is an emerging interest in translational
research, which involves research on how findings from studies can best
be translated into practice.
Accelerating emphasis on research synthesis. Research syntheses that
integrate research evidence across studies are the cornerstone of EBP. Of
particular importance is a type of synthesis called systematic reviews,
which rigorously integrate research information on a research question.
Clinical practice guidelines typically rely on such systematic reviews. We
offer some guidance on how to create, as well as how to appraise,
research syntheses in this book.
Expanded local research and quality improvement efforts in healthcare se�ings.
Projects designed to solve local problems are increasing. This trend will
be reinforced as more hospitals apply for (and are recertified for) Magnet
status in the United States and in other countries. Mechanisms need to be
developed to ensure that evidence from these projects becomes available
to others facing similar problems.
Strengthening of interprofessional collaboration. Collaboration of nurses
with researchers in related fields has expanded in the 21st century as
researchers address fundamental healthcare problems. In turn, such
collaborative efforts could lead to nurse researchers playing a more
prominent role in national and international healthcare policies. One
major recommendation in the Institute of Medicine’s influential 2010
report The Future of Nursing was that nurses should be full partners with
physicians and other healthcare professionals in redesigning health care.
Increased emphasis on patient- centeredness. Patient centeredness has
become a central concern in health care, as well as in research. In the
United States, the Patient- Centered Outcomes Research Institute (PCORI)
funds research focused on assisting patients and their caregivers to make
well- informed healthcare decisions. Efforts are increasing to ensure that
research is relevant to patients and that patients play a role in setting
research priorities. Comparative effectiveness research, which involves

direct comparisons of alternative treatments, has emerged as an
important tool for patient- centered research.
Relatedly, greater interest in the applicability of research. More attention is
being paid to figuring out how study results can be applied to individual
patients or groups of patients. A limitation of the current EBP model is
that standard strategies offer evidence on average effects of healthcare
interventions under ideal circumstances. Ideas are emerging about how
best to enhance the applicability of research in real-world settings.
Growing interest in defining and ascertaining clinical significance. Research findings increasingly must meet the test of being clinically significant, and patients have taken center stage in efforts to define clinical
significance.
Growing interest in precision health care and symptom science. NINR has
embraced research in these areas (Cashion & Grady, 2015). Symptom
science involves research to study the underlying behavioral and
molecular mechanisms of symptoms, irrespective of the health disorder.
The Precision Healthcare Initiative is helping to advance nursing omic
research (e.g., genomic, microbiomic).

What are nurse researchers likely to be studying in the future?
Although there is rich diversity in research interests—as we
illustrate throughout this book in our research examples—research
priorities have been articulated by several nursing organizations,
including NINR, Sigma Theta Tau International, and other nursing
organizations throughout the world. For example, the primary areas
of interest articulated in the 2016 NINR strategic plan were
Symptom Science: Promoting Personalized Health Strategies;
Wellness: Promoting Health and Preventing Disease; Self-Management: Improving Quality of Life for Individuals with Chronic Illness; and End-of-Life and Palliative Care: The Science of Compassion. Two cross-cutting areas of emphasis were promoting innovation and developing innovative strategies for research careers (NINR, 2016). And in 2017, the Science Committee of the Council for
the Advancement of Nursing Science (CANS) in the United States
identified four priorities: precision science, big data and data
analytics, determinants of health, and global health (Eckardt, 2017).

Sources of Evidence for Nursing Practice
Nurses make clinical decisions based on knowledge from many
sources, including coursework, textbooks, and their own clinical
experience. Because evidence is constantly evolving, learning about
best practice nursing will persist throughout your career.
Some of what you have learned is based on systematic research, but
some is not. What are the sources of evidence for nursing practice?
Until recently, knowledge primarily was handed down from one
generation to the next based on experience, trial and error, tradition,
and expert opinion. A brief discussion of some alternative sources of
evidence shows how research- based information is different.

Tradition and Authority
Decisions are sometimes based on custom or tradition. Certain
“truths” are accepted as given, and such “knowledge” is so much a
part of a common heritage that few seek validation. Some nursing
interventions are based on custom and “unit culture” rather than on
sound evidence. Indeed, one analysis suggested that some “sacred
cows” (ineffective traditional habits) persisted even in a healthcare
center recognized as a leader in EBP (Hanrahan et al., 2015).
Another common source of information is an authority, a person
with specialized expertise. Reliance on authorities (such as faculty or
textbook authors) is unavoidable but imperfect: authorities are not
infallible, particularly if their expertise is based primarily on
personal experience or out- of- date materials.

Clinical Experience and Trial and Error
Clinical experience is a functional source of knowledge and plays an
important role in EBP. Yet personal clinical experience has some
limitations as a knowledge source because each nurse’s experience is
too narrow to be generally useful. Moreover, the same objective
event is often perceived differently by different nurses.

Trial and error involves trying alternatives successively until a
solution to a problem is found. Trial and error may offer a practical
means of securing knowledge, but the method tends to be
haphazard and solutions may be idiosyncratic.

Logical Reasoning
Solutions to some problems are developed by logical reasoning,
which combines experience, the intellect, and formal systems of
thought. Inductive reasoning involves developing generalizations
from specific observations. For example, a nurse may observe the
anxious behavior of (specific) hospitalized children and conclude
that (in general) children’s separation from their parents is stressful.
Deductive reasoning involves developing specific predictions from
general principles. For example, if we assume that separation
anxiety occurs in hospitalized children (in general), then we might
predict that (specific) children in a hospital whose parents do not
room- in will manifest symptoms of stress. Both types of reasoning
are useful for understanding phenomena, and both play a role in
research. Logical reasoning by itself, however, is limited because the
validity of reasoning depends on the accuracy of the initial premises.
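
As a toy illustration (hypothetical, not from the text), the deductive example can even be expressed as a small program: the general premise is encoded as a rule, and specific predictions follow mechanically from it.

# General premise (assumed true): separation from parents is stressful for
# hospitalized children; a child whose parents do not room-in is separated.
def predicted_to_show_stress(parents_room_in):
    return not parents_room_in

# Specific predictions follow deductively for individual (hypothetical) children
for name, rooms_in in [("Child A", True), ("Child B", False)]:
    print(name, "predicted to show stress symptoms:", predicted_to_show_stress(rooms_in))

If the premise is inaccurate, every prediction the program produces inherits that inaccuracy, which mirrors the limitation just noted: the validity of deduction depends on the accuracy of the initial premises.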

Assembled Information
In making clinical decisions, healthcare professionals rely on
information that has been assembled for various purposes. For
example, local, national, and international benchmarking data provide
information on such issues as infection rates or the rates of various
procedures (e.g., cesarean births) and can facilitate evaluations of
clinical practices. Cost data—information on the costs associated with
certain procedures, policies, or practices—are sometimes used as a
factor in clinical decision- making. Quality improvement and risk data,
such as medication error reports, can be used to assess the need for
practice changes. Such sources are useful, but they do not provide a
mechanism for making clinical decisions or guiding improvements.

Disciplined Research

Research conducted in a disciplined framework is the best method of
acquiring knowledge. Nursing research combines logical reasoning
with other features to create evidence that, although fallible, tends to
be especially reliable. Carefully synthesized findings from rigorous
research are especially valuable. The current emphasis on EBP
requires nurses to base their clinical practice to the greatest extent
possible on research- based findings rather than on tradition,
authority, intuition, or personal experience—although nursing will
always remain a rich blend of art and science.

Paradigms and Methods for Nursing Research
A paradigm is a world view, a general perspective on the
complexities of the world. Paradigms for human inquiry are often
characterized in terms of the ways in which they respond to basic
philosophical questions, such as, “What is the nature of reality?” and
“What is the relationship between the inquirer and those being
studied?”
Disciplined inquiry in nursing has been conducted mainly within
two broad paradigms, positivism and constructivism. This section
describes these two paradigms and outlines the research methods
associated with them. In later chapters, we describe the
transformative paradigm that underpins critical theory research (Chapter
22) and a pragmatism paradigm that underlies mixed methods research
(Chapter 27).

The Positivist Paradigm
The paradigm that dominated healthcare research for decades is
called positivism (or logical positivism). Positivism is rooted in 19th
century thought, guided by such philosophers as Newton and
Locke. Positivism reflects a broader cultural phenomenon
(modernism) that emphasizes the rational and the scientific.
A fundamental assumption of positivists is that there is a reality out
there that can be studied and known. (An assumption is a basic
principle that is believed to be true without proof.) Adherents of
positivism assume that nature is basically ordered and regular and
that reality exists independent of human observation (Table 1.2). The
related assumption of determinism refers to the positivists’ belief
that phenomena are not haphazard but rather have antecedent
causes. If a person has a cerebrovascular accident, a positivist
assumes that there must be a reason that can be potentially
identified. Within this paradigm, much research activity is aimed at
understanding the underlying causes of phenomena.

TABLE 1.2
Major Assumptions of the Positivist and Constructivist Paradigms

What is the nature of reality?
Positivist: Reality exists; there is a real world driven by real natural causes.
Constructivist: Reality is multiple and subjective, mentally constructed by individuals.

In what way is the researcher related to those being researched?
Positivist: The researcher is independent from those being researched; findings are not influenced by the researcher.
Constructivist: The researcher interacts with those being researched; findings are the creation of the interactive process.

What is the role of values in the inquiry?
Positivist: Values and biases are to be held in check; objectivity is sought.
Constructivist: Subjectivity and values are inevitable and desirable.

What are the best methods for obtaining evidence?
Positivist: Deductive processes → hypothesis testing; emphasis on discrete, specific concepts; focus on the objective and quantifiable; outsider knowledge—researcher is external, separate; fixed, prespecified design; controls over context; large, representative samples; measured (quantitative) information; statistical analysis; seeks generalizations.
Constructivist: Inductive processes → hypothesis generation; emphasis on the entirety of a phenomenon, holistic; focus on the subjective and nonquantifiable; insider knowledge—researcher is part of the process; flexible, emergent design; context-bound; small, information-rich samples; narrative (unstructured) information; qualitative analysis; seeks in-depth understanding.

Positivists value objectivity and attempt to hold personal beliefs and
biases in check. The positivists’ scientific approach involves using
orderly procedures with tight controls of the research situation to
test hunches about the phenomena being studied.
Strict positivist thinking has been challenged, and few researchers
adhere to the tenets of pure positivism. In the postpositivist
paradigm, there is a belief in reality and a desire to understand it,
but postpositivists recognize the impossibility of total objectivity.
They do, however, see objectivity as a goal and strive to be as neutral
as possible. Postpositivists also recognize the obstacles to knowing
reality with certainty and therefore seek probabilistic evidence—i.e.,
learning what the true state of a phenomenon probably is. This
modified positivist position remains a dominant force in healthcare
research. For the sake of simplicity, we refer to it as positivism.

The Constructivist Paradigm

The constructivist paradigm (also called the naturalistic paradigm)
began as a countermovement to positivism with writers such as
Weber and Kant. Just as positivism reflects the cultural phenomenon
of modernism that burgeoned after the industrial revolution,
naturalism is an outgrowth of the cultural transformation called
postmodernism. Postmodern thinking emphasizes the value of
deconstruction, taking apart old ideas and structures, and
reconstruction, putting ideas and structures together in new ways.
The constructivist paradigm represents a major alternative system
for conducting disciplined research in nursing. Table 1.2 compares
the major assumptions of the positivist and constructivist
paradigms.
For the naturalistic inquirer, reality is not a fixed entity but rather is
a construction of the people participating in the research; reality
exists within a context, and many constructions are possible.
Naturalists thus take the position of relativism: if there are multiple
interpretations of reality that exist in people’s minds, then there is no
process by which the ultimate truth or falsity of the constructions
can be determined.
The constructivist paradigm assumes that knowledge is maximized
when the distance between the researcher and those under study is
minimized. The voices and interpretations of study participants are
crucial to understanding the phenomenon of interest. Findings in a
constructivist inquiry are the product of the interaction between the
inquirer and the participants.

Paradigms and Methods: Quantitative and Qualitative
Research
Research methods are the techniques researchers use to structure a
study and to gather and analyze information relevant to the research
question. The two alternative paradigms correspond to different
approaches to developing evidence. A key methodologic distinction
is between quantitative research, which is most closely allied with
positivism, and qualitative research, which is associated with
constructivist inquiry—although positivists sometimes undertake

qualitative studies and constructivist researchers sometimes collect
quantitative information. This section provides an overview of the
methods associated with the two paradigms.

The Scientific Method and Quantitative Research
The traditional scientific method refers to a set of orderly,
disciplined procedures used to acquire information. Quantitative
researchers use deductive reasoning to generate predictions that are
tested in the real world. They typically move in a systematic fashion
from the definition of a problem and the selection of concepts on
which to focus, to the solution of the problem. By systematic, we
mean that the investigator progresses logically through a series of
steps, according to a prespecified plan of action.
Quantitative researchers use various control strategies. Control
involves imposing conditions on the research situation so that biases
are minimized and validity is maximized. Control mechanisms are
discussed at length later in this book.
Quantitative researchers gather empirical evidence—evidence that
is rooted in objective reality and gathered through the senses (e.g.,
through sight or hearing). Observations of the presence or absence of
skin inflammation, patients’ agitation, or infant birth weight are all
examples of empirical observations. Reliance on empirical evidence
means that findings are grounded in reality rather than in
researchers’ personal beliefs.
Evidence for a study in the positivist paradigm is gathered
according to an established plan, using structured methods to collect
needed information. Usually the information gathered is
quantitative—that is, numeric information that is obtained through
a formal measurement and is analyzed statistically.
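
To make this concrete, here is a minimal, hypothetical sketch in Python (the group names and scores are invented for illustration, a two-sample t-test is only one of many possible analyses, and the example assumes the SciPy library is available):

from statistics import mean, stdev
from scipy import stats  # assumption: SciPy is installed

# Hypothetical stress scores from two groups, obtained with a structured
# instrument; lower scores mean less stress.
intervention = [18, 22, 19, 17, 21, 16, 20]
control = [25, 27, 23, 26, 24, 28, 22]

print(f"Intervention: mean={mean(intervention):.1f}, sd={stdev(intervention):.1f}")
print(f"Control:      mean={mean(control):.1f}, sd={stdev(control):.1f}")

# Test whether the difference in group means is larger than chance would explain
result = stats.ttest_ind(intervention, control)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

Nothing here is specific to nursing; the point is simply that structured, numeric information lends itself to this kind of statistical summary and testing.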
A traditional scientific study strives to go beyond the specifics of a
research situation. For example, quantitative researchers are typically less interested in understanding why a particular person has a stroke than in understanding what factors influence its occurrence in people generally. The degree to which research

findings can be generalized to individuals other than those who
participated in a study is called generalizability.
The scientific method has enjoyed considerable stature as a method
of inquiry and has been used productively by nurse researchers
studying a wide range of nursing problems. This approach cannot,
however, solve all nursing problems. One important limitation—
common to both quantitative and qualitative research—is that
research cannot be used to answer moral or ethical questions. Many
intriguing questions about humans fall into this area—questions
such as whether euthanasia should be practiced or abortion should
be legal.
The traditional research approach also must address measurement
challenges. To study a phenomenon, quantitative researchers try to
measure it using numeric values that express quantity. For example,
if the phenomenon of interest is patient stress, researchers would
want to assess if patients’ stress is high or low. Physiologic
phenomena like blood pressure can be measured with great
accuracy and precision, but measuring psychological phenomena
(e.g., stress, resilience, depression) is challenging.
Another issue is that nursing research focuses on humans, who are
inherently complex and diverse. Quantitative studies typically
concentrate on relatively few concepts (e.g., weight gain, fatigue,
pain). Complexities tend to be controlled and, if possible, eliminated,
rather than studied directly, and this narrowness of focus can
sometimes obscure insights. Quantitative research within the
positivist paradigm has been accused of an inflexibility of vision that
fails to capture the full breadth of human experience.

Constructivist Methods and Qualitative Research
Researchers in constructivist traditions emphasize the inherent
complexity of humans, their ability to shape and create their own
experiences, and the idea that truth is a composite of realities.
Constructivist studies are thus focused on understanding the human
experience as it is lived, usually through the collection and analysis
of qualitative materials that are narrative and subjective.

Researchers who criticize the scientific method believe that it is
overly reductionist—that is, it reduces human experience to the few
concepts under investigation, and those concepts are defined in
advance by the researcher rather than emerging from the perspective
of those under study. Constructivist researchers tend to emphasize
the dynamic and holistic aspects of human life and attempt to
capture those aspects in their entirety.
Flexible, evolving procedures are used to capitalize on findings that
emerge during the study. Constructivist inquiry often takes place in
the field (i.e., in naturalistic settings), sometimes over an extended
time period. In constructivist research, the collection of information
and its analysis typically progress concurrently; as researchers sift
through information, insights are gained, new questions emerge,
and further evidence is sought to amplify or confirm the insights.
Through an inductive process, researchers integrate information to
develop a theory or description that helps illuminate the
phenomenon of interest.
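
The interpretive heart of this process cannot be automated, but some of its bookkeeping can be. As a minimal, hypothetical sketch (the participants, excerpts, and codes are all invented), tallying researcher-assigned codes across interview excerpts is one way analysts track emerging patterns while collection and analysis proceed concurrently:

from collections import Counter

# Hypothetical interview excerpts, each already tagged with codes by the researcher
coded_excerpts = [
    {"participant": "P01", "codes": ["uncertainty", "seeking support"]},
    {"participant": "P02", "codes": ["uncertainty", "loss of control"]},
    {"participant": "P03", "codes": ["seeking support"]},
]

# Count how often each code appears across excerpts
code_counts = Counter(code for excerpt in coded_excerpts for code in excerpt["codes"])
for code, n in code_counts.most_common():
    print(f"{code}: appears in {n} excerpt(s)")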
Constructivist studies yield rich, in- depth information that can
elucidate varied dimensions of a complicated phenomenon. Findings
from qualitative research are typically grounded in the real- life
experiences of people with first-hand knowledge of a phenomenon.
Nevertheless, the approach has several limitations. Human beings
are used directly as the instrument through which information is
gathered, and humans are extremely intelligent and sensitive—but
fallible—tools. The subjectivity that enriches the analytic insights of
skillful researchers can yield trivial and obvious “findings” among
less competent ones.
Another potential limitation involves the subjectivity of
constructivist inquiry, which sometimes raises concerns about the
idiosyncratic nature of the conclusions. Would two constructivist
researchers studying the same phenomenon in similar settings arrive
at similar conclusions? The situation is further complicated by the
fact that most constructivist studies involve a small group of
participants. Thus, the generalizability of findings from
constructivist inquiries is sometimes a potential concern.

Multiple Paradigms and Nursing Research
Paradigms should be viewed as lenses that help to sharpen our focus
on phenomena, not as blinders that limit intellectual curiosity.
Nursing knowledge would be thin if there were not a rich array of
methods available within the two paradigms—methods that are
often complementary in their strengths and limitations. We believe
that intellectual pluralism is advantageous.
We have emphasized differences between the two paradigms and
associated methods so that distinctions would be easy to
understand. Subsequent chapters of this book elaborate further on
differences in terminology, methods, and research products. It is
equally important to note, however, that the two main paradigms
have many features in common, only some of which are mentioned
here:

Ultimate goals. The aim of disciplined research, regardless of paradigm, is
to answer questions and solve problems. Both quantitative and
qualitative researchers seek to capture the truth about an aspect of the
world in which they are interested, and both groups can make
meaningful contributions to evidence for nursing practice.
External evidence. Although the word empiricism has come to be associated
with the classic scientific method, researchers in both traditions gather
and analyze evidence empirically, that is, through their senses.
Reliance on human cooperation. Human cooperation is essential in both
qualitative and quantitative research. To understand people’s
circumstances and experiences, researchers must persuade them to
participate in the investigation and to speak and act candidly.
Ethical constraints. Research with human beings is guided by ethical
principles that sometimes are at odds with research goals. Ethical
dilemmas sometimes confront researchers, regardless of paradigm or
method.
Fallibility of disciplined research. Virtually all studies have limitations.
Every research question can be addressed in many ways, and inevitably
there are tradeoffs. The fallibility of any single study makes it important
to understand and critically appraise researchers’ methodologic decisions
when evaluating evidence quality.

Thus, despite philosophic and methodologic differences, researchers
using traditional scientific or constructivist methods face many
similar challenges. The selection of an appropriate method depends
on researchers’ personal philosophy and on the research question. If
a researcher asks, “What are the effects of cryotherapy on nausea
and oral mucositis in patients undergoing chemotherapy?” the
researcher needs to study effects by carefully measuring patient
outcomes. On the other hand, if a researcher asks, “What is the
process by which parents learn to cope with the death of a child?”
the researcher would be hard pressed to quantify such a process.
Personal world views of researchers help to shape their questions.
In reading about the alternative paradigms for nursing research, you
likely were more attracted to one of the two paradigms. It is
important, however, to learn about both approaches to disciplined
inquiry and to recognize their respective strengths and limitations.
In this textbook, we describe methods associated with both
qualitative and quantitative research to assist you in becoming
methodologically bilingual. This is especially important because large
numbers of nurse researchers are now undertaking mixed methods
research that involves the collection and analysis of both qualitative
and quantitative data (Chapters 27-29).

The Purposes of Nursing Research
The general purpose of nursing research is to answer questions and
solve problems of relevance to nursing. Specific purposes can be
classified in various ways. For example, a distinction sometimes is
made between basic and applied research. Basic research is
undertaken to discover general principles of human behavior and
biophysiologic processes. Some basic research (bench research) is
performed in laboratory settings and focuses on the molecular and
cellular mechanisms that underlie disease. Applied research is
aimed at examining how basic principles can be used to solve
practice problems. Nurse researchers undertake both types of
research.
Another way to classify research purposes concerns the extent to
which studies provide explanatory information. Specific study goals
can range along a descriptive/explanatory continuum, but a
fundamental distinction is between studies whose primary intent is
to describe phenomena and those that are cause-probing—that is,
designed to illuminate the underlying causes of phenomena. The
descriptive/explanatory continuum includes studies whose purposes
are identification, description, exploration, prediction/control, and
explanation of health- related phenomena. For each purpose, various
types of question are addressed—some more amenable to qualitative
than to quantitative inquiry, and vice versa. Table 1.3 gives examples
of questions asked for these purposes.

TABLE 1.3
Research Purposes and Questions on the Description/Explanation Continuum

Identification
Qualitative: What is this phenomenon? What is its name?

Description
Quantitative: How prevalent is the phenomenon? How often does the phenomenon occur? How intense is the phenomenon?
Qualitative: What are the dimensions or characteristics of the phenomenon? What is important about the phenomenon?

Exploration
Quantitative: What factors are related to the phenomenon? What are the antecedents of the phenomenon?
Qualitative: What is the full nature of the phenomenon? What is really going on here? How is the phenomenon experienced? What is the process by which the phenomenon evolves?

Explanation
Quantitative: What is the underlying cause of the phenomenon? Does the theory explain the phenomenon?
Qualitative: How does the phenomenon work? What does the phenomenon mean? How did the phenomenon occur?

Prediction
Quantitative: If phenomenon X occurs, will phenomenon Y follow? What will happen if we modify a phenomenon or introduce an intervention?

Control
Quantitative: Can the occurrence of the phenomenon be prevented or controlled?

In both nursing and medicine, several books have been written to facilitate evidence-based practice, and these books categorize studies in terms of the types of information needed by clinicians (Guyatt et al., 2015; Melnyk & Fineout-Overholt, 2015). These writers focus on
several types of clinical purposes: Therapy/intervention;
Diagnosis/assessment; Prognosis; Etiology (causation)/prevention of
harm; Description; and Meaning/process.

Therapy/Intervention
Therapy/intervention questions are addressed by healthcare
researchers who want to learn about the effects of specific actions,
products, or processes. Typically, researchers addressing this type of
question are evaluating whether a new treatment or a practice
change has beneficial effects.
The name “Therapy” for this category originates from promoters of
EBP in medicine who focused on studies of the effects of
“therapeutic” medical interventions, such as new drugs or surgical
procedures. However, this category should be thought of more
broadly to include research on the effects of alternative ways of

doing things, usually with the intent of testing strategies for making
improvements. Therapy questions are foundational for evidence- based
decision- making. Evidence for changes to nursing practice, nursing
education, and nursing administration comes from studies that have
specifically tested the effects of intervening in a particular way. Table
1.4 provides some examples of studies in which nurse researchers
addressed diverse Therapy/intervention questions. If such questions
are answered in a rigorous fashion, the evidence might suggest a
practice change or the implementation of an institutional innovation.

TABLE 1.4
Examples of Therapy/Intervention Questions

Nursing practice: Does an education intervention improve teenagers’ knowledge and behaviors relating to contraception? (Pivatti et al., 2019)
Nursing practice: Do muscle relaxation or nature sounds reduce fatigue in patients with heart failure? (Seifi et al., 2018)
Nursing practice: Does a nurse-led phone follow-up education program reduce cardiovascular risk among patients with cardiovascular disease? (Zhou et al., 2018)
Interprofessional education: Does a simulation-based palliative care communication skill workshop improve self-perception of skills in expressing empathy and discussing spiritual issues among healthcare workers and students? (Brown et al., 2018)
Nursing education: Does simulation improve the ability of first year nursing students to learn vital signs? (Eyikara & Baykara, 2018)
Nursing administration: Does a bundle of interventions to support nurses’ engagement in evidence-based practice (EBP) increase their knowledge, attitudes, and use of library resources? (Carter et al., 2018)

Studies in this category range from evaluations of highly specific
treatments (e.g., comparing two types of cooling blankets for febrile
patients) to assessments of complex multisession interventions
designed to change behaviors (e.g., nurse- led health promotion
programs). Intervention research is essential for evidence- based
practice, and nurses are increasingly engaging in this type of
research. Research addressing Therapy questions is inherently
cause- probing: the researcher wants to know if a certain intervention
will cause improved outcomes.

Diagnosis/Assessment

A burgeoning number of nursing studies concern the rigorous
development and evaluation of formal instruments to screen,
diagnose, and assess patients and to measure important clinical
outcomes—that is, they address Diagnosis/assessment questions.
High- quality instruments with documented accuracy are essential
for both clinical practice and research. Typically, the question being
addressed is: Does this new instrument yield reliable and valid
information about an outcome, situation, or condition of importance
to nursing? Studies addressing Diagnosis questions are not cause-probing.
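
Although instrument evaluation involves many properties (reliability, validity, responsiveness), a small worked example may help. The following hypothetical sketch (with invented counts) shows two common accuracy summaries for a screening instrument, sensitivity and specificity, computed from a 2 x 2 table:

# Hypothetical screening results compared against a gold-standard diagnosis
true_positives, false_negatives = 40, 10   # people who have the condition
true_negatives, false_positives = 85, 15   # people who do not

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.2f}")  # 0.80 = proportion of true cases detected
print(f"Specificity: {specificity:.2f}")  # 0.85 = proportion of noncases correctly cleared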

Example of a Study Aimed at Diagnosis/Assessment
Kang and colleagues (2018) developed and evaluated the Automated Medication Error Risk Assessment System (Auto-MERAS), which was incorporated into an electronic health record system.

Prognosis
Researchers who ask Prognosis questions strive to understand the
outcomes that are associated with a disease or a health problem (i.e.,
its consequences), to estimate the probability they will occur, and to
predict the types of people for whom the outcomes are most likely.
Such studies facilitate the development of long- term care plans for
patients and can suggest the need for appropriate interventions. For
example, Prognosis studies provide valuable information for
guiding patients to make lifestyle choices or to be vigilant for key
symptoms. Prognosis questions are typically cause- probing; the
researcher wants to know if, for example, a certain disease or
behavior causes subsequent adverse outcomes.

Example of a Study Aimed at Prognosis
Galazzi and colleagues (2018) studied the long- term quality of
life outcomes of patients with severe respiratory failure who
had undergone extracorporeal membrane oxygenation.

Etiology (Causation)/Prevention of Harm
Nurses encounter patients who face potentially harmful exposures
as a result of environmental agents or because of personal behaviors
or characteristics. Providing information to patients about such
harms and how best to avoid them depends on the availability of
accurate evidence about factors that contribute to health risks. For
example, there would be no smoking cessation programs if research
had not provided strong evidence that smoking cigarettes causes or
contributes to a wide range of health problems. Thus, identifying
factors that affect or cause illness, mortality, or morbidity is an
important purpose of many nursing studies. Etiology questions are
inherently cause- probing—the purpose is to understand factors that
cause health problems.
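
Etiology studies often quantify such risks. As a minimal, hypothetical worked example (the counts are invented), a relative risk compares the rate of an outcome among exposed versus unexposed people:

# Hypothetical cohort counts: disease cases among exposed vs. unexposed people
exposed_cases, exposed_total = 30, 100
unexposed_cases, unexposed_total = 10, 100

risk_exposed = exposed_cases / exposed_total        # 0.30
risk_unexposed = unexposed_cases / unexposed_total  # 0.10
relative_risk = risk_exposed / risk_unexposed

print(f"Relative risk: {relative_risk:.1f}")  # 3.0: exposure triples the risk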

Example of a Study Aimed at Identifying and Preventing
Harm
Philpott and Corcoran (2018) did a study to identify factors that
put men at risk of paternal postnatal depression in Ireland. The
risk factors examined included a prior history of depression,
economic circumstances, marital status, and availability of
paternity leave.

Description
Description questions are not a category typically identified in EBP-related classification schemes, but so many nursing studies
have a descriptive purpose that we include it here. Examples of
phenomena that nurse researchers have described include patients’
pain, physical function, confusion, and levels of depression.
Quantitative description focuses on the prevalence, size, intensity,
and measurable a�ributes of phenomena. Qualitative researchers, by
contrast, describe the dimensions or the evolution of phenomena.

Example of a Quantitative Study Aimed at Description
Schoenfisch and colleagues (2019) did a study to describe
hospital nursing staff’s use of lift or transfer devices. They
found that only 40% of the nurses used equipment for at least
half of lifts/transfers.

Example of a Qualitative Study Aimed at Description
Dose and Rhudy (2018) undertook a study to describe what
was important to patients newly diagnosed with advanced
cancer and receiving dignity therapy during cancer treatment.

Meaning/Process
Designing effective interventions, motivating people to comply with
treatments and health promotion activities, and providing sensitive
advice to patients are among the many healthcare activities that can
benefit from understanding clients’ perspectives. Research that
provides evidence about what health and illness mean to clients,
what barriers to positive health practices they face, and what
processes they experience in a transition through a healthcare crisis
is important to evidence-based nursing practice. Studies that
address Meaning/process questions are seldom focused on
identifying the underlying causes of phenomena but might offer
important clues.

Example of a Study Aimed at Understanding
Meaning/Process
Qin and coresearchers (2019) studied the process by which
women experienced a cognitive–behavioral transition after
undergoing pregnancy termination for fetal anomaly.

Study Purposes and Evidence- Based Practice

Studies that address Therapy/intervention questions provide the
most direct evidence for EBP. If we want to know, for example,
whether wedge- shaped foam cushions are more effective in
preventing heel pressure ulcers than standard foam pillows, we
would need to look for rigorous studies that have addressed this
Therapy question. However, other questions also play a role in
improving the quality of nursing care, albeit in different ways.
Table 1.5 presents examples of different types of questions relating to
cigare�e smoking, using the study purpose categories we just
described. The findings from studies relating to only one of these
questions are directly actionable—the Therapy question. If there is
strong evidence that nurse- led smoking cessation programs are
effective in reducing smoking among young adults, we might
consider initiating such a program in our own community.

TABLE 1.5
Different Categories of Questions Related to Cigarette Smoking

Therapy/intervention: Does a nurse-led smoking cessation program for young adults reduce smoking?
Diagnosis/assessment: Is our Smoking Susceptibility Index a valid and reliable measure of propensity to initiate smoking in teenagers?
Prognosis: Is a diagnosis of smoking-related lung cancer associated with increased risk of suicidal ideation?
Etiology (causation)/prevention of harm: Does being poor increase the risk that a person will smoke cigarettes?
Description: What percentage of high school students smoke 1+ packs of cigarettes/week, and what percentage of smokers have tried to quit?
Meaning/process: What is it like for long-term smokers to attempt and fail at quitting?

If the other questions in Table 1.5 were answered in rigorous studies,
the evidence could also play a role in guiding efforts to improve
nursing practice—but not as directly. Answers to some of these
questions might help to target those most in need of an intervention.
For example, based on studies addressing the Diagnosis question,
we could launch a prevention effort aimed at teenagers with high
scores on the evidence- based Smoking Susceptibility Index, or

results from an Etiology study might lead us to offer a smoking-cessation initiative in low-income neighborhoods. Evidence from the
Prognosis question might prompt us to develop a strong program of
emotional support for patients with lung cancer. We might be
motivated to implement an intervention for high school students if
we knew that rates of smoking were high (the Description question).
And, if we knew that a high percentage of smokers in our
community had been unsuccessful in efforts to quit, we might design
an intervention with that information in mind. The stories from
long- term smokers who failed to quit despite efforts to do so (the
Meaning question) could lead us to involve them in the design of an
intervention for hardened smokers.
Nurse researchers are making strides in addressing all types of
questions about important health problems—but evidence regarding
what “works” to address problems comes from studies focused on
Therapy questions. Evidence about the scope of a problem, factors
affecting the problem, the consequences of the problem, and the
meaning of the problem can, however, play a crucial role in efforts to
design better interventions, to aim our resources at those in greatest
need, and to provide appropriate guidance to clients in everyday
practice.

Assistance for Users of Nursing Research
This book is designed primarily to help you develop skills for
conducting research, but in an environment that stresses EBP, it is
extremely important to hone your skills in reading, evaluating, and
using nursing studies. We provide specific guidance to consumers in
most chapters by including guidelines for critically appraising
aspects of a study covered in the chapter. The questions in Box 1.1
are designed to assist you in using the information in this chapter in
an overall preliminary assessment of a research report.

TIP The Resource Manual (RM) for this book offers rich
opportunities to practice your critical appraisal skills. The RM’s
Toolkit includes Box 1.1 as a Word document,
which will allow you to adapt these questions, if desired, and to
input answers to them directly in a Word document without
having to retype the questions.

Research Examples
Each chapter of this book presents brief descriptions of studies
conducted by nurse researchers, focusing on aspects emphasized in
the chapter. Read the full journal articles to learn more about the
methods and results of these studies.

Research Example of a Quantitative Study

Study: Promoting heart health among rural African Americans
(Abbott et al., 2018)
Study purpose: The purpose of the study, which addressed a
Therapy question, was to evaluate a culturally relevant health
promotion intervention designed to reduce cardiovascular disease
risk in rural African American adults—the “With Every Heartbeat is
Life” program.
Study methods: Twelve rural churches in two counties of northern
Florida were assigned, at random, to either receive the intervention
(six churches) or not receive it (the other six churches). Pastors and
community members from the churches then recruited people to
participate in the study. A total of 115 adults were in the
intervention group, and 114 were in the group not receiving the
intervention (the control group). Those in the intervention group
received the weekly, 90- minute cardiovascular health promotion
intervention for 6 weeks, whereas those in the control group did not
receive any health promotion education. Everyone who participated
in the study completed questionnaires before the start of the study
and 6 weeks later at the end of the study. The questionnaires were
used to gather information about participants’ attitudes, intentions,
and self- efficacy to increase the consumption of produce, reduce
dietary saturated fat intake, and increase exercise.
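
For readers curious about what assignment "at random" can look like operationally, here is a minimal sketch (with hypothetical site identifiers, not the investigators' actual procedure) of splitting 12 sites into two arms of six:

import random

churches = [f"Church {i}" for i in range(1, 13)]  # placeholder site names
random.seed(1)  # fixed seed so the illustration is reproducible

intervention_arm = random.sample(churches, k=6)
control_arm = [c for c in churches if c not in intervention_arm]

print("Intervention:", intervention_arm)
print("Control:     ", control_arm)
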
Key findings: Those in the intervention group had significantly
greater improvements than those in the control group on most of the
outcomes. For example, participants who received the program had
significantly greater intentions to increase produce consumption and

reduce dietary fat intake. Self- efficacy for healthy choices also
increased significantly more among participants in the intervention
group.
Conclusions: Abbott and colleagues concluded that nurse-led interventions in community settings can potentially reduce cardiovascular disease risk.

Research Example of a Qualitative Study

Study: “I can never be too comfortable”—Race, gender, and emotion
at the hospital bedside (Cottingham et al., 2018)
Study purpose: The purpose of this descriptive study was to explore
how gender and race intersect to shape the emotion practice of
nurses as they experience, manage, and reflect on their emotions in
the workplace.
Study methods: As part of a larger study of nurses and emotional
labor, audio diaries were elicited from a sample of 48 nurses who
were diverse with respect to gender (both women and men) and race
(white, black, and Asian). Study participants were given a digital
voice recorder and were instructed to make a recording after six
consecutive shifts. They were asked to reflect on how they felt
during and after their last shift, to describe things that influenced
their emotions, and to explain how they responded to their own
emotions. Participants were not asked to specifically reflect on
experiences related to race. Each recording was transcribed for
analysis.
Key findings: Analysis of the audio diary data revealed “a
disproportionate emotional labor that emerges among women
nurses of color in the white institutional space of American health
care” (p. 145). Women of color were found to experience an
emotional “double shift” in negotiating interactions between
patients, coworkers, and supervisors. These women were found to
have experiences that added to job- related stress and that resulted in
depleted emotional resources that negatively influenced patient care.
Conclusions: The researchers expressed the hope that their study
would help to make more visible the toll of the intersection of race

and gender on emotional labor in nursing.

Summary Points

Nursing research is systematic inquiry undertaken to develop evidence
on problems of importance to nurses. Nurses are adopting an evidence-
based practice (EBP) that incorporates research findings into their clinical
decisions.
Nurses can participate in a range of research- related activities that span a
continuum from being consumers of research (those who read and evaluate
studies) to being producers of research (those who design and undertake
studies). Engagement with research often occurs in practice settings
through participation in a journal club.
Nursing research began with Florence Nightingale but developed slowly
until its rapid acceleration in the 1950s. Since the 1980s, the focus has
been on clinical nursing research—that is, on problems relating to
clinical practice.
The National Institute of Nursing Research (NINR), established at the
U.S. National Institutes of Health in 1993, affirms the stature of nursing
research in the United States.
Contemporary issues in nursing research include the growth of EBP,
expansion of local research and quality improvement efforts, research
synthesis through systematic reviews, interprofessional studies,
patient- centeredness in both clinical care and in research, interest in the
applicability of research to individual patients or groups, interest in
precision health care and symptom science, and efforts to measure the
clinical significance of research results.
Disciplined research stands in contrast to other knowledge sources for
nursing practice, such as tradition, authority, personal experience, trial
and error, and logical reasoning.
Nursing research is conducted mainly within one of two broad
paradigms—world views with underlying assumptions about reality: the
positivist and the constructivist paradigms.
In the positivist paradigm, it is assumed that there is an objective reality
and that natural phenomena are orderly. The assumption of determinism
is the belief that phenomena result from prior causes and are not
haphazard.
In the constructivist (naturalistic) paradigm, it is assumed that reality is
not fixed, but it is a construction of human minds; “truth” is a composite

of multiple constructions of reality.
The positivist paradigm is associated with quantitative research—the
collection and analysis of numeric information. Quantitative research is
typically conducted within the traditional scientific method, which is a
systematic, controlled process. Quantitative researchers gather and
analyze empirical evidence (evidence collected through the human
senses) and strive for generalizability of their findings.
Researchers within the constructivist paradigm emphasize
understanding the human experience as it is lived through the collection
and analysis of subjective, narrative materials using flexible procedures
that evolve in the field; this paradigm is associated with qualitative
research.
Basic research is designed to extend the knowledge base for the sake of
knowledge itself. Applied research focuses on discovering solutions to
immediate problems.
A fundamental distinction, especially relevant in quantitative research, is
between studies whose primary intent is to describe phenomena and those
that are cause-probing—i.e., designed to illuminate underlying causes of
phenomena. Specific research purposes on the description/explanation
continuum include identification, description, exploration,
prediction/control, and explanation.
Nursing studies can be classified in terms of several EBP- related aims:
Therapy/intervention; Diagnosis/assessment; Prognosis; Etiology
(causation)/prevention of harm; Description; and Meaning/process.
Rigorous answers to Therapy questions are foundational for EBP.

Study Activities
Study activities are available to instructors.

Box 1.1 Questions for a Preliminary Overview of a Research
Report

1. How relevant is the research question in this study to the actual practice of nursing? Does the study focus on a topic that is a priority area for nursing research?

2. Was the research quantitative or qualitative?

3. What was the underlying purpose (or purposes) of the study—identification, description, exploration, explanation, or prediction and control? Does the purpose correspond to an EBP focus such as Therapy/intervention, Diagnosis/assessment, Prognosis, Etiology (causation)/prevention of harm, Description, or Meaning/process?

4. Is this study fundamentally cause-probing?

5. What might be some clinical implications of this research? To what type of people and settings is the research most relevant? If the findings are valid, how might I use the results of this study in my clinical work?

References Cited in Chapter 1
** Abbott L., Williams C., Slate E., & Gropper S. (2018). Promoting heart health among rural African Americans. Journal of Cardiovascular Nursing, 33, E8–E14.

* Barnes H., Reardon J., & McHugh M. (2016). Magnet® hospital recognition
linked to lower central line- associated bloodstream infection rates. Research
in Nursing & Health, 39, 96–104.

Bastani F., Rajai N., Farsi Z., & Als H. (2017). The effects of kangaroo care on
the sleep- wake states of preterm infants. Journal of Nursing Research, 25, 231–
239.

Billner- Garcia R., Spilkerm A., & Goyak D. (2018). Skin to skin contact:
newborn temperature stability in the operating room. MCN: American
Journal of Maternal- Child Nursing, 43, 158–163.

Brown C., Back A., Ford D., Kross E., Downey L., Shannon S., … Engelberg R.
(2018). Self- assessment scores improve after simulation- based palliative care
communication skill workshop. American Journal of Hospice & Palliative Care,
35, 45–51.

Carter E., Rivera R., Gallagher K., & Cato K. (2018). Targeted interventions to
advance a culture of inquiry at a large, multicampus hospital among nurses.
Journal of Nursing Administration, 48, 18–24.

* Cashion A. K., & Grady P. (2015). The National Institutes of Health/National Institute of Nursing Research intramural research program and the development of the NIH Symptom Science Model. Nursing Outlook, 63, 484–487.

Cho E., Kim S., Kwon M., Cho H., Kim E., Jun E., & Lee S. (2016). The effects of
kangaroo care in the neonatal intensive care unit on the physiological
functions of preterm infants, maternal-infant attachment, and maternal
stress. Journal of Pediatric Nursing, 31, 430–438.

Cottingham M., Johnson A., & Erickson R. (2018). “I can never be too
comfortable”: race, gender, and emotion at the hospital bedside. Qualitative
Health Research, 28, 145–158.

Dose A., & Rhudy L. (2018). Perspectives of newly diagnosed advanced cancer
patients receiving dignity therapy during cancer treatment. Supportive Care
in Cancer, 26, 187–195.

Eckardt P., Culley J., Corwin E., Richmond T., Dougherty C., Pickler R., …
DeVon H. (2017). National nursing science priorities: creating a shared vision. Nursing Outlook, 65, 726–736.
Eyikara E., & Baykara Z. (2018). Effect of simulation on the ability of first year nursing students to learn vital signs. Nurse Education Today, 60, 101–106.
Galazzi A., Brambilla A., Grasselli G., Pesenti A., Fumagalli R., & Lucchini A. (2018). Quality of life of adult survivors after extracorporeal membrane oxygenation (ECMO). Dimensions of Critical Care Nursing, 37, 12–17.

Gardner K., Kanaskie M., Knehans A., Salisbury S., Doheny K., & Schirm V.
(2016). Implementing and sustaining evidence based practice through a
nursing journal club. Applied Nursing Research, 31, 139–145.

Graystone R. (2017). The 2014 Magnet® Application Manual: nursing
excellence standards evolving with practice. Journal of Nursing
Administration, 47, 527–528.

Guyatt G., Rennie D., Meade M., & Cook D. (2015). Users’ guides to the medical literature: essentials of evidence-based clinical practice (3rd ed.). New York: McGraw Hill.

Hanrahan K., Wagner M., Matthews G., Stewart S., Dawson C., Greiner J., …
Williamson A. (2015). Sacred cows gone to pasture: a systematic evaluation
and integration of evidence- based practice. Worldviews on Evidence- Based
Nursing, 12, 3–11.

* Institute of Medicine. (2010). The future of nursing: leading change, advancing
health. Washington, DC: The National Academies Press.

Johnston C., Campbell- Yeo M., Disher T., Benoit B., Fernandes A., Streiner D.,
… Zee R. (2017). Skin- to- skin care for procedural pain in neonates. Cochrane
Database of Systematic Reviews, CD008435.

Kang M., Jin Y., Jin T., & Lee S. (2018). Automated medication error risk
assessment system (Auto-MERAS). Journal of Nursing Care Quality, 33, 86–93.

McCaughey D., McGhan G., Rathert C., Williams J., & Hearld K. (2019).
Magnetic work environments: patient experience outcomes in Magnet
versus non- Magnet hospitals. Health Care Management Review (in press).

Melnyk B. M., & Fineout- Overholt E. (2015). Evidence- based practice in nursing
and healthcare: a guide to best practice (3rd ed.). Philadelphia: Lippincott
Williams & Wilkins.

Moore E. R., Bergman N., Anderson G., & Medley N. (2016). Early skin-to-skin contact for mothers and their healthy newborn infants. Cochrane Database of Systematic Reviews, CD003519.

* National Institute of Nursing Research. (2016). The NINR strategic plan:
advancing science, improving lives. Bethesda, MD: NINR.

Philpott L., & Corcoran P. (2018). Paternal postnatal depression in Ireland:
prevalence and associated factors. Midwifery, 56, 121–127.

Pivatti A., Osis M., & deMorales Lopes M. (2019). The use of educational strategies for promotion of knowledge, attitudes and contraceptive practice
among teenagers: a randomized clinical trial. Nurse Education Today, 72, 18–
26.

Qin C., Chen W., Deng Y., Li Y., Mi C., Sun L., & Tang S. (2019). Cognition,
emotion, and behaviour in women undergoing pregnancy termination for
foetal anomaly: a grounded theory analysis. Midwifery, 68, 84–90.

Saylor J., Hanna K., & Calamaro C. (2019). Experiences of students who are
newly diagnosed with type 1 diabetes mellitus. Journal of Pediatric Nursing,
44, 74–80.

Schoenfisch A., Kucera K., Lipscomb H., McIlvaine J., Becherer L., James T., &
Avent S. (2019). Use of assistive devices to lift/transfer and reposition
hospital patients. Nursing Research, 68, 3–12.

Seifi L., Najafi Ghezeljeh T., & Haghani H. (2018). Comparison of the effects of
Benson muscle relaxation and nature sounds on the fatigue of patients with
heart failure. Holistic Nursing Practice, 32, 27–34.

Varas- Díaz N., Betancourt- Díaz E., Lozano A., Huang L., DiNapoli L., Hanlon
A., & Villaruel A. (2019). Testing the efficacy of a web-based parent–adolescent sexual communication intervention among Puerto Ricans. Family
& Community Health, 42, 30–43.

Zhou Y., Liao J., Feng F., Ji M., Zhao C., & Wang X. (2018). Effects of a nurse-led phone follow-up education program based on the self-efficacy among
patients with cardiovascular disease. Journal of Cardiovascular Nursing, 33,
E15–E23.

*A link to this open- access article is provided in the Toolkit for Chapter 1 in

the Resource Manual.

**This journal article is available for this chapter.

C H A P T E R 2

Evidence- Based Nursing: Translating Research
Evidence into Practice

Evidence- based practice (EBP) has been a major force in the health
professions for the past few decades. In nursing, many organizations and
initiatives have promoted EBP. For example, EBP has been named as one
of the six core competencies in the Quality and Safety Education for
Nurses (QSEN) initiative (Cronenwett, 2012).
This book will help you to develop skills to generate, and to evaluate,
research evidence for nursing practice. Before we delve into the “how- tos”
of research, we discuss key aspects of EBP to clarify the key role that
research plays in EBP.

Background of Evidence- Based Nursing Practice
This section provides a context for understanding evidence- based nursing
practice and closely related concepts.

Definition of Evidence- Based Practice
Dozens of definitions of evidence- based practice have been proposed.
Here is the one offered by Melnyk and Fineout- Overholt (2019) in their
textbook on EBP: “A paradigm and lifelong problem- solving approach to
clinical decision making that involves the conscientious use of the best
available evidence (including a systematic search for and critical appraisal
of the most relevant evidence to answer a clinical question) with one’s own
clinical expertise and patient values and preferences to improve outcomes
for individuals, communities, and systems” (p. 753). This definition, like
many others, declares that EBP is a decision- making (or problem- solving)
process. Most definitions also include the idea that EBP is built on a “three-legged stool,” each “leg” of which is essential to the process: best evidence,
clinical expertise, and patient preferences and values. Figure 2.1 depicts these
concepts.

FIGURE 2.1 Evidence- based practice components.

TIP Scott and McSherry (2009), in their review of evidence-based
nursing concepts, identified 13 overlapping but distinct definitions of
evidence- based nursing and EBP—and many more definitions have
emerged. A few alternative definitions of EBP are presented in a table
in the Toolkit of the accompanying Resource Manual.

Best Evidence
A basic feature of EBP as a clinical problem-solving strategy is that it de-emphasizes decisions based on tradition or expert opinions. The emphasis

is on identifying and evaluating the best available research evidence as a
tool for solving problems.

TIP The consequences of not using research evidence can be
devastating. For example, from 1956 through the 1980s, Dr. Benjamin
Spock—who was considered an expert on the care of infants—
published a top-selling book, Baby and Child Care. Spock advised putting babies on their stomachs to sleep. In their systematic review,
Gilbert and colleagues (2005) wrote, “Advice to put infants to sleep
on the front for nearly half a century was contrary to evidence from
1970 that this was likely to be harmful” (p. 874). They estimated that
if medical advice had been guided by research evidence, over 60,000
infant deaths might have been prevented.

There continues to be debate about what qualifies as “best” evidence.
Numerous organizations and authors have created evidence hierarchies that
rank evidence sources according to the degree to which they provide
unbiased evidence to guide clinical decisions. We discuss evidence
hierarchies in more detail later in this chapter. Evidence, however,
whether “best” or not, is never by itself a sufficient basis for clinical
decision- making.

Patient Values and Preferences
Patient- centered care has been defined by the Institute of Medicine (2001)
as “providing care that is respectful of and responsive to individual
patient preferences, needs and values, and ensuring that patient values
guide all clinical decisions.” Patient- centered care is an important feature of
EBP.
“Patient preferences” encompass several concepts, including patient
preferences for type of treatment; preferences for being involved in
decision- making; patients’ social or cultural values; preferences about
involving family members in healthcare decisions; patients’ priorities
regarding quality of life issues; and their spiritual or religious values.
Decisions also require understanding patients’ circumstances, such as the
resources at their disposal. Nurses thus need the skills to elicit and
understand patient preferences—and to communicate information about
“best evidence” to patients.

Clinical Expertise and Experiential Evidence
Decision- making in clinical practice ultimately relies on clinicians’
expertise, which is an amalgam of academic knowledge gained during
training and continuing education, experiences with patient care, and
interdisciplinary sharing of new knowledge. David Sackett, the pioneer of
evidence- based medicine, strongly advocated for the importance of clinical
expertise in making decisions because even very strong research evidence
may not be appropriate or applicable for individual patients.
Newhouse (2007) also stressed the importance of experiential evidence,
which is internal evidence from local monitoring or evidence- gathering
efforts, such as quality improvement projects. Clinical expertise and
experiential evidence, combined with patient preferences, guide how “best
evidence” can be used to make healthcare decisions.

Evidence-Based Practice and Related Concepts
During the 1980s, concern about research utilization began to emerge.
Research utilization (RU) is the use of findings from a study in a practical
application. In RU, the emphasis is on translating new knowledge into
real- world applications. EBP is a broader concept than RU because it
integrates research findings with other factors, as just noted. Also, whereas
RU begins with the research itself (How can I put this new knowledge to
use in my clinical setting?), the start-point in EBP typically is a clinical
question (What does the evidence suggest is the best approach to solving
this clinical problem?).
During the 1980s and 1990s, RU projects were undertaken by numerous
hospitals and nursing organizations. These projects were institutional attempts to implement changes in nursing practice based on research
findings. During the 1990s, however, the call for research utilization was
superseded by the push for EBP.
The EBP movement originated in the fields of medicine and epidemiology
during the 1990s. British epidemiologist Archie Cochrane criticized
healthcare practitioners for failing to incorporate research evidence into
their decision- making. His work led to the establishment of the Cochrane
Collaboration, an international partnership with centers established in 43
countries. The Collaboration prepares and disseminates reviews of
research evidence and has a goal of making Cochrane “the home of
evidence” relating to healthcare decision- making.

TIP The Cochrane Collaboration publishes a series called Making a
Difference, which presents stories of how evidence from Cochrane
reviews has made impacts on real- world decision- making and patient
outcomes. For example, one article in this series focused on the
benefits of continuity of midwife care
(http://www.cochrane.org/news/cochrane-making-difference-midwifery).

Also during the 1990s, a group from McMaster Medical School in Canada
(led by Dr. David Sackett) developed a clinical learning strategy, which
they called evidence- based medicine. The evidence- based medicine
movement has shifted to a broader conception of using best evidence by
all healthcare practitioners (not just physicians) in a multidisciplinary
team. EBP is considered a major shift for healthcare education and
practice. In the EBP environment, a skillful clinician can no longer rely on
a repository of memorized information but rather must be a lifelong
learner who is adept in accessing, evaluating, and using new evidence.

TIP A debate has emerged concerning whether the term “evidence-based practice” should be replaced with “evidence-informed practice” (EIP). Those who advocate for EIP have argued that the word
“based” suggests a stance in which patient preferences are not
sufficiently considered in clinical decisions (e.g., Glasziou, 2005). Yet,
as noted by Melnyk and Newhouse (2014), all current models of EBP
incorporate clinicians’ expertise and patients’ preferences. They
argued that “Changing terms now…will only create confusion at a
critical time where progress is being made in accelerating EBP” (p.
348). We concur and use the term EBP throughout this book.

Knowledge translation (KT) is a related term that is often associated with
efforts to enhance systematic change in clinical practice. The term was
coined by the Canadian Institutes of Health Research (CIHR), which
defined KT as “the exchange, synthesis, and ethically- sound application of
knowledge—within a complex system of interactions among researchers
and users—to accelerate the capture of the benefits of research for
Canadians through improved health, more effective services and products,
and a strengthened health care system” (CIHR, 2004). The World Health
Organization (WHO) (2005) adapted the CIHR’s definition and defined KT

as “the synthesis, exchange, and application of knowledge by relevant
stakeholders to accelerate the benefits of global and local innovation in
strengthening health systems and improving people’s health.”
Institutional projects aimed at KT often use methods and models that are
similar to organizational EBP projects.
Translational research has emerged as a discipline devoted to developing
methods to promote knowledge translation and the use of evidence.
Translational science involves the study of interventions, implementation
processes, and contextual factors that affect the uptake of new evidence in
healthcare practice (Titler, 2014). In nursing, the need for translational
research was an important impetus for the development of the Doctor of
Nursing Practice degree. We discuss translational research in Chapter 11.
EBP can be undertaken by individual nurses working with patients or as a
project taken on by a team within a healthcare organization.
Organizational EBP projects share certain features with quality
improvement (QI) efforts. We describe methodologic strategies for quality
improvement in Chapter 12.

TIP EBP is widely endorsed in nursing, but its adoption often faces
many challenges. Some of the obstacles include nurses’ lack of
research appraisal skills; their misperceptions about EBP; heavy
patient loads and lack of time; nurses’ and administrators’ resistance
to change; and lack of autonomy about practice decisions. Factors
that facilitate EBP include strong organizational support; the
availability of EBP mentors and resources; collaboration among
healthcare professionals; and participation in journal clubs (Gardner
et al., 2016; Newhouse & Spring, 2010).

Resources for Evidence-Based Practice in Nursing
Although EBP can present challenges to nurses, resources to support EBP
are increasingly available. We offer some guidance and urge you to
explore other ideas with your colleagues, mentors, and health information
experts.

Preprocessed and Preappraised Evidence
Searching for best evidence requires skill, especially because of the
accelerating pace of evidence production. Thousands of studies of
relevance to nurses are published each month in professional journals.
These primary studies are not preappraised for quality or clinical utility.
Fortunately, finding evidence useful for practice is often facilitated by the
availability of evidence sources that are preprocessed (synthesized) and
sometimes preappraised. DiCenso and colleagues (2009) have created a
“6S” hierarchy of evidence sources, which is intended as a guide to
evidence retrieval. The 6S hierarchy, typically shown as a pyramid, places
five types of preprocessed evidence at the top, and individual studies at
the bottom. The hierarchy is intended to help you see how to proceed with
an evidence search. A clinician seeking evidence would start at the top of
the hierarchy and work downward if appropriate evidence was lacking at
a given level. Table 2.1 shows the 6S hierarchy and provides examples at
each level. In this section, we describe each evidence source, starting at the
bottom of the hierarchy because higher levels build on the ones that
precede them.

TABLE 2.1
The “6S” Hierarchy of Evidence Sources^a

Evidence Source | Description/Examples | Examples of Resources
1. Systems ↓ | Computerized decision support systems | In some electronic health records systems
2. Summaries ↓ | Evidence-based clinical practice guidelines; online EBP summary resources | U.S. National Guidelines Clearinghouse; Registered Nurses Association of Ontario Best Practices; EBSCO Nursing Reference Center; JBI COnNECT+; UpToDate
3. Synopses of syntheses ↓ | Synopses published in evidence-based abstraction journals or compiled by organizations | Evidence-Based Nursing; DARE (Database of Abstracts of Reviews of Effects); The Centre for Reviews and Dissemination (CRD)
4. Syntheses ↓ | Systematic reviews; rapid reviews | Joanna Briggs Institute Database; Cochrane Database; AHRQ Evidence Reports; BMC Systematic Reviews
5. Synopses of studies ↓ | Brief summaries of single studies, often with commentary on clinical applicability | Evidence-Based Nursing; ACP Journal Club
6. Single original studies | Not preprocessed; primary studies published in journals | PubMed (MEDLINE); CINAHL

^a The “6S” hierarchy depicting the efficiency of evidence retrieval for different sources was proposed by DiCenso et al., 2009.
AHRQ, Agency for Healthcare Research and Quality; EBP, evidence-based practice.

TIP The 6S hierarchy does not imply a gradient of evidence in terms
of quality, but rather in terms of ease in retrieving relevant evidence
to address a clinical question. At all levels, the evidence should be
assessed for quality and relevance.
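The top-down retrieval logic just described can be made concrete with a short sketch. This is purely conceptual: the SIX_S_LEVELS list mirrors Table 2.1, and search_level is a hypothetical stand-in for querying whatever resource serves each level, not a real database API.

```python
# Conceptual sketch of 6S retrieval: begin with the most preprocessed
# evidence source and move down a level only when nothing suitable is found.
SIX_S_LEVELS = [
    "systems",                 # computerized decision support
    "summaries",               # guidelines, point-of-care resources
    "synopses of syntheses",   # abstracted systematic reviews
    "syntheses",               # systematic reviews
    "synopses of studies",     # abstracted single studies
    "single studies",          # primary studies (PubMed, CINAHL)
]

def find_best_evidence(question, search_level):
    """search_level(level, question) is a hypothetical query function."""
    for level in SIX_S_LEVELS:
        hits = search_level(level, question)
        if hits:  # any hit must still be appraised for quality and relevance
            return level, hits
    return None, []  # no evidence located at any level
```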

Level 6 in the 6S Hierarchy: Single Studies
Reports describing a single original study are at the base of the 6S
hierarchy because single studies are not ready for immediate use in
making EBP decisions. At a minimum, individual primary studies need to
be critically appraised for their rigor and their relevance to clinical
problems. Clinicians searching for best evidence for a clinical query would start with a single study only if evidence from higher levels was
unavailable or was judged to be flawed. We describe the major source of
research reports (journal articles) in Chapter 3 and provide guidance in
searching for studies in Chapter 5.

Level 5 in the 6S Hierarchy: Synopses of Single Studies
A synopsis of a study provides a brief overview of the research, often with
sufficient detail to understand the evidence. As noted by DiCenso et al.
(2009), a synopsis offers three advantages over the original report: (1) the
brevity of the synopsis makes it more readily accessible to practitioners; (2)
the study was likely chosen for abstraction because an expert believed the
study was important; and (3) the synopsis is sometimes accompanied by
commentary about the clinical utility of the evidence (i.e., preappraised).
Several evidence- based journals include synopses of original studies,
including Evidence- Based Nursing, Evidence- Based Midwifery, ACP Journal
Club, and The Online Journal of Knowledge Synthesis for Nursing.

Level 4 in the 6S Hierarchy: Syntheses
Evidence- based practice relies on meticulous integration and synthesis of
research evidence on a topic. The importance of such syntheses has given
rise to many different types of research review (Grant & Booth, 2009), but
the best known and most widely respected type of synthesis is the
systematic review. A systematic review is not just a literature review, such
as ones we describe in Chapter 5. A systematic review is in itself a
methodical, scholarly inquiry that follows many of the same steps as those
for primary studies and that yields a summary of current best evidence at
the time the review was written. Chapter 30 offers guidance on conducting
and critically appraising systematic reviews and describes a few other
types of synthesis, such as scoping reviews, realist reviews, and umbrella
reviews.
Systematic reviewers sometimes integrate findings from quantitative
studies using statistical methods, in what is called a meta-analysis. Meta-analysts treat the findings from a study as one piece of information. The
findings from multiple studies on the same topic are combined and
analyzed statistically. Instead of individual people being the unit of
analysis (the basic entity of a statistical analysis) as in most primary
studies, meta- analysts use findings from individual studies as the unit of
analysis. Meta- analysis is an objective method of integrating a body of

findings and of observing patterns that might otherwise have gone
undetected.
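To make this concrete, here is a minimal sketch of a fixed-effect, inverse-variance meta-analysis—one common way study-level findings are pooled. All effect estimates and standard errors below are invented for illustration and do not come from any study discussed in this chapter.

```python
import math

# Hypothetical (effect estimate, standard error) pairs from five studies.
studies = [
    (0.42, 0.15),
    (0.31, 0.20),
    (0.55, 0.25),
    (0.18, 0.12),
    (0.47, 0.18),
]

# Each study's finding (not each participant) is the unit of analysis,
# weighted by the precision of its estimate (1 / variance).
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval around the pooled effect
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.3f} (95% CI {low:.3f} to {high:.3f})")
```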

Example of a Meta- Analysis
Zhang and colleagues (2018) conducted a meta- analysis of the
effectiveness of psychological interventions for patients with
osteoarthritis. Their analysis included findings from 12 randomized
controlled trials. They found that psychological interventions could
reduce pain and fatigue and improve self-efficacy, but the researchers concluded that better confirmatory evidence is needed.

Systematic reviews of qualitative studies often take the form of
metasyntheses, which are rich resources for EBP (Beck, 2009). A
metasynthesis, which involves integrating qualitative research findings on
a topic, is less about reducing information and more about amplifying and
interpreting it. For certain qualitative questions, an approach to systematic
synthesis called meta- aggregation may be appropriate, as we describe in
Chapter 30. Strategies have also been developed for systematic mixed
studies review (also called mixed research syntheses), which are efforts to
integrate and synthesize both quantitative and qualitative evidence on a
topic (Heyvaert et al., 2017; Sandelowski et al., 2013).

Example of a Mixed Studies Review
Beck and Woynar (2017) conducted a mixed studies review on posttraumatic stress in mothers while their preterm infants are in the neonatal intensive care unit. They synthesized a total of 37 studies: 25
were quantitative and 12 were qualitative.

Many systematic reviews are published in professional research journals
that can be accessed using standard literature search procedures; others
are available in dedicated databases. A major example is the Cochrane
Database of Systematic Reviews, which contains thousands of systematic
reviews. Most Cochrane reviews involve meta- analyses, and most of them
relate to healthcare interventions—but the Cochrane Collaboration now
also includes qualitative evidence syntheses. Cochrane reviews are done
with great rigor and have the advantage of being checked and updated
regularly.

In recent years, a type of synthesis called a rapid review (or rapid evidence
assessment) has emerged (Khangura et al., 2012). These streamlined
reviews are less rigorous than systematic reviews but are typically
completed in a period of weeks, rather than months or years. Rapid
reviews are described in Chapter 30.

TIP Many resources are available for finding systematic reviews. For
example, the Joanna Briggs Institute in Australia (http://joannabriggs.org/) and the Centre for Reviews and Dissemination at the University of York in England (http://www.york.ac.uk/inst/crd/index.htm) produce useful
systematic reviews. We provide links to many of these resources (as
well as to other EBP- related websites) in the Toolkit of the
accompanying Resource Manual.

Level 3 in the 6S Hierarchy: Synopses of Syntheses
Synopses of systematic reviews make rigorously integrated evidence even
more handy for practitioners seeking answers to clinical queries. Many
abstract journals mentioned in connection with Level 5 synopses of studies
(e.g., Evidence- Based Nursing, Evidence- Based Midwifery) also include
synopses of selected systematic reviews. The Cochrane Collaboration is working toward making their reviews more accessible by creating plain-language summaries of systematic review findings. A link to such a summary is included in the Toolkit of the accompanying Resource Manual.

Level 2 in the 6S Hierarchy: Summaries
For some clinical questions, best evidence may be conveniently available
in “Summaries,” which include online EBP summary resources and
clinical practice guidelines.
Dozens of evidence- based point- of- care resources for healthcare
professionals have become available. These web- based resources are
designed to provide rapidly accessible evidence-based information (and, sometimes, guidance) that is periodically updated. Campbell and
colleagues (2015) undertook a quantitative evaluation of the content,
breadth, quality, and rigor of 20 online point- of- care summary resources.
Their assessment led them to conclude that the top five were UpToDate,
Nursing Reference Center, Mosby’s Nursing Consult, BMJ Best Practice,
and the Joanna Briggs Institute’s COnNECT+. Kwag and colleagues (2016),
who focused on evidence summaries for physicians, also came to the
conclusion that UpToDate and BMJ Best Practice were two of the best and
most reliable resources out of the 23 they evaluated.
Evidence- based clinical practice guidelines, like systematic reviews,
represent efforts to distill a large body of evidence into a manageable
form, but guidelines differ from reviews in a number of respects. First,
clinical practice guidelines, which are usually based on systematic
reviews, give specific recommendations for evidence-based decision-making. Second, guidelines attempt to address all issues relevant to a
clinical decision, including balancing benefits and risks. Third, systematic
reviews are evidence- driven—that is, they are undertaken when a body of
evidence has been produced and needs to be synthesized. Guidelines, by
contrast, are “necessity- driven” (Straus et al., 2011, p. 125), meaning that
guidelines are developed to guide clinical practice—even when available
evidence is limited or of unexceptional quality. Fourth, systematic reviews
are done by researchers, but guideline development typically involves the
consensus of a group of researchers, experts, and clinicians. For this
reason, guidelines based on the same evidence may result in different
recommendations. Differences across guidelines sometimes reflect genuine
contextual factors—for example, guidelines appropriate in the United
States may be unsuitable in India.
It can be challenging to find clinical practice guidelines because there is no
single guideline repository. One approach is to search for guidelines in
comprehensive guideline databases. For example, in the United States,
nursing and other healthcare guidelines are maintained by the National
Guideline Clearinghouse (www.guideline.gov), and similar databases are
available in other countries. An important nursing guideline resource
comes from the Registered Nurses Association of Ontario (RNAO)
(www.rnao.org/bestpractices).
In addition to looking for guidelines in national clearinghouses and in the
websites of professional organizations, you can search bibliographic
databases such as MEDLINE or EMBASE. Search terms such as the
following can be used: practice guideline, clinical practice guideline, best practice guideline, evidence-based guideline, and consensus statement. Be aware,
though, that a standard search for guidelines in bibliographic databases
will yield many references—but often a frustrating mixture of citations to
not only the actual guidelines, but also to commentaries, anecdotes,
implementation studies, and so on.
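As a rough illustration of how such terms might be combined, the sketch below joins them into a single OR-connected Boolean query. The [ti] title-field tag follows PubMed syntax; it is shown only as an example and would need adapting for other bibliographic databases.

```python
# Illustrative only: combining the guideline-related search terms listed
# above into one Boolean query string.
guideline_terms = [
    "practice guideline",
    "clinical practice guideline",
    "best practice guideline",
    "evidence-based guideline",
    "consensus statement",
]

# "[ti]" restricts each phrase to the title field in PubMed syntax.
query = " OR ".join(f'"{term}"[ti]' for term in guideline_terms)
print(query)
```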

Example of a Nursing Clinical Practice Guideline
In 2017, the Registered Nurses Association of Ontario (RNAO)
published the second edition of a best practice guideline called “Adult
asthma care: Promoting control of asthma.” The guideline is intended for
use “by nurses and other members of the interprofessional healthcare
team to enhance the quality of their practice pertaining to the
assessment and management of adult asthma.”

There are many topics for which practice guidelines have not yet been
developed, but the opposite problem is also true: the dramatic increase in
the number of guidelines means that there are sometimes multiple
guidelines on the same topic. Worse yet, because of variation in the rigor
of guideline development and in interpretations of the evidence, different
guidelines sometimes offer different and even conflicting
recommendations. Thus, those who wish to adopt clinical practice
guidelines to address a clinical problem are urged to critically appraise
them to identify ones that are based on the strongest and most up- to- date
evidence, have been meticulously developed, are user- friendly, and are
appropriate for local use.
Several guideline appraisal instruments are available, but the one that has
gained the broadest support is the Appraisal of Guidelines Research and
Evaluation (AGREE) Instrument, now in its second version (Brouwers et
al., 2010). This tool has been translated into many languages and has been
endorsed by the World Health Organization. Further information about
the AGREE II instrument is provided in the online Supplement A to Chapter 2. A shorter and simpler tool for evaluating guideline quality is
called the iCAHE Guideline Quality Checklist (Grimmer et al., 2014). A
“mini- checklist” (MIChe) for assessing guideline quality for daily practice
use has also been proposed (Siebenhofer et al., 2016).

TIP The U.S. Agency for Healthcare Research and Quality (AHRQ)
offers “guideline syntheses” that provide systematic comparisons of
agreement and disagreement among selected guidelines on the same
topic (https://www.guidelines.gov/syntheses/index).

One final issue is that guidelines change more slowly than original
research or syntheses. If a high- quality guideline is not recent, it is
advisable to determine whether more up- to- date evidence would alter (or
strengthen) the guideline’s recommendations. It has been recommended
that, to avoid obsolescence, guidelines should be reassessed every 3 years.

TIP In addition to clinical guidelines, evidence- based care bundles
are being developed. The concept of care bundles, developed by the
Institute for Healthcare Improvement (www.ihi.org), refers to a set of
interventions to treat or prevent a specific cluster of symptoms. There
is evidence that a bundle of strategies produces better outcomes than
a single intervention.

Level 1 in the 6S Hierarchy: Systems
In a perfect world, evidence- based clinical information systems would link
rigorous, up- to- date evidence (e.g., from summaries or syntheses) about a
problem with information about a particular patient from the patient’s
electronic health record. Clinicians would then, with best evidence in
hand, incorporate their own expertise and patient preferences in arriving
at a course of action. Although few current systems match this ideal, some
computerized decision support systems have been developed for
particular problems, including decisional support tools available on
laptops and smartphones. We can expect progress on such systems in the
years ahead.

Example of a Clinical Decision Support System
Gengo e Silva and colleagues (2018) described an electronic decision
support system in a Brazilian hospital that links nursing diagnoses,
outcomes, and interventions performed by nurses caring for medical
and surgical patients.

Evidence Hierarchies and Level of Evidence Scales
The EBP movement has led to a proliferation of different evidence
hierarchies, which are intended to show a ranking of evidence sources in
terms of their risk of bias. (These are distinct from the 6S hierarchy
discussed in the previous section, which rank evidence sources in terms of
the ease and efficiency of finding answers to clinical questions.) Evidence
hierarchies are often presented as pyramids, with the highest ranking
sources—those presumed to have the least bias for making inferences
about the effects of an intervention—at the top.
The hierarchies form level of evidence (LOE) scales that rank order types
of evidence. Level I evidence usually is considered the best (least biased)
type of evidence, and almost all leveling schemes put systematic reviews
at the top level. Some LOE scales have only three levels, while others have
10 or more levels.
Figure 2.2 shows our eight- level evidence hierarchy for
Therapy/intervention questions. This hierarchy ranks sources of evidence
with respect to the readiness of an intervention to be put to use in practice.
In our scheme, the Level I evidence source is a systematic review of a type
of study called a randomized controlled trial (RCT), which is the “gold
standard” type of study for Therapy questions. An individual RCT is a
Level II evidence source in our hierarchy. Going down the “rungs” of the
evidence hierarchy for Therapy questions results in evidence with a higher
risk of bias in answering questions about “what works.” For example,
Level III evidence comes from a type of study called quasi- experiments
(The terms in Figure 2.2 are explained later in the book). Of course, there
continue to be clinical practice questions for which there is relatively little
research evidence. In such situations, nursing practice must rely on other
sources, including internal evidence from pathophysiologic data, local
projects, and expert opinion (Level VIII). As Straus and colleagues (2011)
have noted, one benefit of the EBP movement is that a new research
agenda can emerge when clinical questions arise for which there is no
satisfactory evidence.

FIGURE 2.2 Polit–Beck evidence hierarchy/levels of evidence scale for therapy
questions.

TIP Several alternative LOE scales that you may want to consider
using are presented in the Toolkit in the accompanying Resource
Manual.

Hierarchies and Level of Evidence Scales: Some Caveats
Although evidence hierarchies are intended as an EBP resource,
considerable confusion exists regarding LOE scales. The fact that there are
dozens from which to choose exacerbates this confusion.
One important issue that is seldom acknowledged is that different types of
questions require different hierarchies. An evidence hierarchy for
Prognosis questions, for example, is different from the hierarchy for
Therapy questions. The concept of evidence hierarchies arose in medicine,
with the goal of informing decisions about medical interventions—thus
early evidence hierarchies explicitly ranked evidence for
Therapy/intervention questions. Few of the currently published
hierarchies make this point clear, the major exceptions being the LOE
hierarchies created by the Oxford Centre for Evidence- Based Medicine

(http://www.cebm.net/ocebm-levels-of-evidence/) and the Joanna Briggs Institute (http://joannabriggs.org/jbi-approach.html). We also provide LOE
scales in this book for different types of questions (see Chapter 9). As we
noted in Chapter 1, evidence for non- Therapy questions can play a role in
EBP, but such evidence does not directly support practice changes.

TIP As an example, if we wanted to know whether drinking alcohol during pregnancy puts women at higher risk of miscarriage (an
Etiology question), we would not find “best evidence” from a
systematic review of RCTs. Pregnant women would never be
assigned at random to a “drinking” versus nondrinking condition to
assess whether miscarriage rates are higher in the drinking group.

A second issue is that LOE scales have been used for different purposes.
Some writers suggest that LOE scales are similar to the 6S hierarchy—the
highest level offers the best starting place in a search for evidence. Others,
however, use evidence hierarchies to “level” or grade evidence sources,
implying that higher levels provide better quality evidence. As pointed out
by Levin (2014), an evidence hierarchy “is not meant to provide a quality
rating for evidence retrieved in the search for an answer” (p. 6). The
Oxford Centre for Evidence- Based Medicine concurs: the levels in their
scheme are “NOT intended to provide you with a definitive judgment
about the quality of evidence. There will inevitably be cases where ‘lower
level’ evidence…will provide stronger evidence than a ‘higher level’
study” (Howick et al., 2011, p. 2). A critical appraisal of each study or
evidence source, regardless of level, is needed to make a final
determination of the quality of evidence.
Related to this second issue is the fact that some LOE scales conflate risk of
bias levels with terms implying quality. For example, in Melnyk and
Fineout- Overholt’s (2019) evidence hierarchy (Box 1.3), Level II is defined
as well- designed RCTs.
Another word of caution: evidence hierarchies are seldom sufficiently
detailed to include the full range of possible evidence sources. Users of
LOE scales often must “read between the lines” and use some judgment.
For example, in our hierarchy, if a systematic review included both RCTs
and nonrandomized trials, we would still consider this Level I evidence.
However, if a systematic review included several nonrandomized trials
but no RCTs, we might consider this to be evidence somewhere between Levels I and II. As another example, in the Melnyk and Fineout-Overholt
(2019) hierarchy, there is no level specified for RCTs that are not especially
“well- designed.”
As noted by Levin (2014), those who wish to use an LOE scale must choose
one that matches their needs from the many that exist, keeping in mind
that “leveling” a study based on the chosen scale is not a substitute for a
critical appraisal of the evidence.

TIP Evidence hierarchies and LOE scales are rather firmly
entrenched in the EBP literature, but they are not without
controversy. Concern was expressed initially by critics who felt that
qualitative evidence was being undervalued. For example, for
Therapy questions, qualitative studies are typically near the bottom
of the hierarchy. Another criticism of these ranking systems is that
they focus exclusively on the risk of certain types of bias, but not on
biases that might undermine the applicability of evidence in real-world settings (e.g., Goodman, 2014). We discuss this important
concern about EBP in Chapter 31.

Systems for a Body of Evidence
It is important to note that LOE scales are typically used to “level” an
individual piece of evidence, such as a single study. Other systems exist,
however, for grading an entire body of evidence with regard to the
strength of evidence. By far the most widely used system is the Grading of
Recommendations Assessment, Development, and Evaluation (GRADE)
system (Guyatt et al., 2008). The GRADE system involves two components
—grading the quality of an overall body of evidence and ranking the
strength of recommendations based on that evidence. GRADE is used with
increasing frequency in systematic reviews and in the development of
clinical practice guidelines. We discuss GRADE at some length in Chapter
30.

Models for Evidence-Based Practice
Models of EBP are important resources for designing and implementing
EBP projects in practice settings. Some models focus on the use of research
from the perspective of individual clinicians (e.g., the Stetler Model), but
most focus on institutional EBP efforts (e.g., the Iowa Model). Another way

to categorize existing models is to distinguish process- oriented models
(e.g., the Iowa Model) and models that are explicitly mentor models, such
as the ARCC-E (Advancing Research and Clinical Practice Through Close
Collaboration in Education) model.
The many worthy EBP models are too numerous to list comprehensively,
but a few are shown in Box 2.1. Melnyk and Fineout- Overholt (2019)
provide a good synthesis of several EBP models, and Schaffer and
colleagues (2013) identify features to consider in selecting a model to plan
an EBP project. Although each model offers different perspectives on how
to translate research findings into practice, several of the steps and
procedures are similar across the models. Figure 2.3 shows a diagram of
one prominent EBP model, the revised Iowa Model of EBP (Buckwalter et
al., 2017).

FIGURE 2.3 Revised Iowa Model of Evidence-Based Practice to Promote Quality Care

Iowa Model Collaborative. (2017). Iowa model of evidence-based practice: revisions and validation. Worldviews on Evidence-Based Nursing, 14(3), 175-182.

doi:10.1111/wvn.12223. Used/reprinted with permission from the University of Iowa
Hospitals and Clinics, copyright 2015. For permission to use or reproduce, please

contact the University of Iowa Hospitals and Clinics at 319-384-9098.

Box 2.1 Selected Models for Evidence-Based Practice

ACE Star Model of Knowledge Transformation (Stevens, 2012)
Advancing Research and Clinical Practice Through Close Collaboration in Education (ARCC-E) Model (Melnyk & Fineout-Overholt, 2019)
Diffusion of Innovations Model (Rogers, 1995)
Iowa Model of Evidence-Based Practice to Promote Quality Care (Buckwalter et al., 2017; Titler et al., 2001)
Johns Hopkins Nursing EBP Model (Dearholt & Dang, 2012)
Promoting Action on Research Implementation in Health Services (PARiHS) Model (Harvey & Kitson, 2016; Rycroft-Malone et al., 2013)
Stetler Model of Research Utilization (Stetler, 2010)

Example of Using an Evidence-Based Practice Model
Saqe- Rockoff and colleagues (2018) used the Iowa Model in their EBP
project designed to improve thermoregulation for trauma patients in
the emergency department.

Individual and Organizational Evidence-Based Practice
Individual nurses make many decisions and convey important healthcare
information and advice to patients, and so they have ample opportunity to
put research into practice. Here are three clinical scenarios that provide
examples of such opportunities:

Clinical Scenario 1. You work in an allergy clinic and notice how difficult it is for
many children to undergo allergy scratch tests. You wonder if an interactive
distraction intervention would help reduce children’s anxiety when they are
being tested.
Clinical Scenario 2. You work in a rehabilitation hospital, and one of your
elderly patients, who had total hip replacement, tells you she is planning a long
airplane trip to visit her daughter after rehabilitation treatments are completed.
You know that a long plane ride will increase her risk of deep vein thrombosis
and wonder if compression stockings are an effective in- flight treatment for her.
You decide to look for the best evidence to answer this question.
Clinical Scenario 3. You are caring for a hospitalized cardiac patient who tells
you that he has sleep apnea. He confides in you that he is reluctant to undergo
continuous positive airway pressure (CPAP) treatment because he worries it
will hinder intimacy with his wife. You wonder if there is any evidence about
what it is like to experience CPAP treatment so that you can better address your
patient’s concerns.

In these and thousands of other clinical situations, research evidence can
be put to good use to improve the quality of nursing care. Thus, individual
nurses need to have the skills to personally search for, appraise, and apply
evidence in their practice.
For some clinical scenarios that trigger an EBP effort, individual nurses
have sufficient autonomy to implement research- informed actions on their
own (e.g., answering patients’ questions about experiences with CPAP). In
other situations, however, decisions are best made among a team of nurses
(or with an interprofessional team) working together to solve a common
clinical problem. Institutional EBP efforts typically result in a formal policy
or protocol affecting the practice of many nurses and other staff.
Many of the steps in institutional EBP projects are the same as those we
describe in the next section, but additional issues are of relevance at the
organizational level. For example, as shown in the Iowa Model (Figure
2.3), some of the activities include assessing whether the question is an
organizational priority, forming a team, and conducting a formal

evaluation. We offer further information about organizational EBP efforts in the online Supplement B for Chapter 2.

Major Steps in Evidence-Based Practice
In this section, we provide an overview of how research evidence can be put to use in clinical settings. In describing the basic steps in the EBP process, we use a mnemonic device (the 5As) that we have adapted from several sources (e.g., Guyatt et al., 2015; EBP blogs by nurse educator Cathy Thompson [https://nursingeducationexpert.com]).

Step 1: Ask—Ask a well- worded clinical question that can be answered with
research evidence;
Step 2: Acquire—Search for and retrieve the best evidence to answer the clinical
question;
Step 3: Appraise—Critically appraise the evidence for validity and applicability
to the problem and situation;
Step 4: Apply—After integrating the evidence with clinical expertise, patient
preferences, and local context, apply it to clinical practice; and
Step 5: Assess—Evaluate the outcome of the practice change (a schematic sketch of the full cycle follows this list).
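For readers who think schematically, the 5As cycle can be outlined as below. Every function here is a trivial placeholder stub so the sketch runs; in practice each step is the human activity described in this section, not a function call.

```python
# Placeholder stubs standing in for the human work at each step.
ask = lambda trigger: f"PICO question for: {trigger}"
acquire = lambda question: ["best evidence for " + question]
appraise = lambda evidence: {"evidence": evidence, "promising": True}
apply_evidence = lambda appraisal: ("practice change"
                                    if appraisal["promising"] else "usual care")
assess = lambda decision: f"outcomes of '{decision}' evaluated"

def ebp_cycle(trigger):
    question = ask(trigger)               # Step 1: Ask
    evidence = acquire(question)          # Step 2: Acquire
    appraisal = appraise(evidence)        # Step 3: Appraise
    decision = apply_evidence(appraisal)  # Step 4: Apply
    return assess(decision)               # Step 5: Assess

print(ebp_cycle("inconsistent fall-risk assessment in the ED"))
```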

The EBP process cannot be undertaken in a vacuum, however. A
precondition for the entire undertaking is to have an openness to change
and a desire to provide the best possible care, based on evidence showing
benefits to patient outcomes. Melnyk and Fineout- Overholt (2019) call this
Step 0: Cultivating a spirit of inquiry. Johnson and Fineout-Overholt (2005) noted that “getting from zero to one” involves having nurses be reflective
about their clinical practice. An additional step after Step 5 might be to
disseminate information about the EBP project.

Step 1: Ask a Well-worded Clinical Question
A crucial first step in EBP involves converting information needs into
well- worded clinical questions that can be answered with research
evidence. You might wonder, though, where do the questions come from?
Some EBP models distinguish two types of “triggers” for an EBP
undertaking: (1) problem- focused triggers—a clinical practice problem in
need of solution, or (2) knowledge- focused triggers—readings in the research
literature. Problem- focused triggers may arise in the normal course of
clinical practice and include both patient- identified and clinician- identified
issues. The Iowa Model (Figure 2.3) includes examples of both types of
trigger in the top box.

EBP experts distinguish between background and foreground questions.
Background questions are foundational questions about a clinical issue, for
example: What is cancer cachexia (progressive body wasting), and what is
its pathophysiology? Answers to such background questions are typically
found in textbooks. Foreground questions, by contrast, are those that can be
answered based on current research evidence on diagnosing, assessing, or
treating patients, or on understanding the meaning or prognosis of their
health problems. For example, we may wonder, is a fish oil–enhanced
nutritional supplement effective in stabilizing weight in patients with
advanced cancer? The answer to such a Therapy question may provide
direction on how to address the needs of patients with cachexia. In other
words, foreground questions seek the specific information needed to make
clinical decisions.
Most guidance for EBP uses the acronyms PIO and PICO to help
practitioners develop well- worded questions. In the PICO form, the
clinical question is worded to identify four components:

1. P: the Population or patients (What are key characteristics of the patients or
people?)

2. I: the Intervention, influence, or exposure (What is the intervention or therapy of
interest? Or what is a potentially harmful or beneficial influence?)

3. C: an explicit Comparison to the “I” component (With what is the intervention or
influence being compared?)

4. O: the Outcome (What is the outcome or consequence in which we are
interested?)

Applying this scheme to our question about cachexia, our population (P) is
cancer patients with cachexia; the intervention (I) is fish oil–enhanced
nutritional supplements; and the outcome (O) is weight stabilization. In this
question, the comparison is not formally stated, but the implied “C” is the
absence of fish oil–enhanced supplements—the question is in a PIO format.
However, when there is an explicit comparison of interest, the full PICO
question is required. For example, we might be interested in learning
whether fish oil–enhanced supplements (I) are be�er than melatonin (C) in
stabilizing weight (O) in patients with cancer (P).
For questions that can best be answered with qualitative information (e.g.,
about the meaning of an experience or health problem), two components
are most relevant:

1. the population (What are the characteristics of the patients or clients?) and

2. the situation (What conditions, experiences, or circumstances are we interested
in understanding?)

For example, suppose our question was “What is it like to suffer from
cachexia?” In this case, the question calls for rich qualitative information;
the population is patients with advanced cancer, and the situation is the
experience of cachexia.
In addition to the basic PICO components, other components may be used
in an evidence search. For example, some EBP experts suggest adding a
“T” component (PICOT) to designate a time frame. For example, take the
following question: Among caregivers of people with dementia (P), what
is the effect of participation in a caregiver intervention (I), compared with
not participating in the intervention (C) on quality of life (O) 6 months
after enrollment (T)? Other experts, however, consider the time frame as
part of the outcome: e.g., quality of life 6 months after enrollment (O). Still
others prefer to search for the PICO elements without filtering out
evidence from studies that used a different period of follow- up, such as
4 months after enrollment.

TIP The Cochrane Collaboration has launched a PICO project—a
Strategy to 2020 initiative—to annotate its systematic reviews with
PICO component identification to facilitate retrieval efforts.

Table 2.2 offers question templates for asking well- framed clinical
foreground questions for specific types of questions. The right- hand
column includes questions with an explicit comparison (PICO questions),
while the middle column does not (PIO). The questions are categorized in
a manner similar to that discussed in Chapter 1 (EBP purposes), as
featured in Table 1.3. Note that although there are some differences in
components across question types, there is always a P component.

TABLE 2.2
Question Templates for Selected Clinical Foreground Questions: PIO and PICO

Type of Question | PIO Question Template (Questions Without an Explicit Comparison) | PICO Question Template (Questions With an Explicit Comparison)

Therapy/treatment/intervention | In ________ (Population), what is the effect of ________ (Intervention) on ________ (Outcome)? | In ________ (Population), what is the effect of ________ (Intervention), in comparison to ________ (Comparative/alternative intervention), on ________ (Outcome)?

Diagnosis/assessment | For ________ (Population), does ________ (Identifying tool/procedure) yield accurate and appropriate diagnostic/assessment information about ________ (Outcome)? | For ________ (Population), does ________ (Identifying tool/procedure) yield more accurate or more appropriate diagnostic/assessment information than ________ (Comparative tool/procedure) about ________ (Outcome)?

Prognosis | In ________ (Population), does ________ (Influence/exposure to disease or condition) increase the risk of ________ (Outcome)? | In ________ (Population), does ________ (Influence/exposure to disease or condition), relative to ________ (Comparative disease/condition OR absence of the disease/condition), increase the risk of ________ (Outcome)?

Etiology/harm | In ________ (Population), does ________ (Influence/exposure/characteristic) increase the risk of ________ (Outcome)? | In ________ (Population), does ________ (Influence/exposure/characteristic), compared to ________ (Comparative influence/exposure OR lack of influence or exposure), increase the risk of ________ (Outcome)?

Description (prevalence/incidence) | In ________ (Population), how prevalent is ________ (Outcome)? | Explicit comparisons are not typical, except to compare different populations.

Meaning or process | What is it like for ________ (Population) to experience ________ (condition, illness, circumstance)? OR What is the process by which ________ (Population) cope with, adapt to, or live with ________ (condition, illness, circumstance)? | Explicit comparisons are not typical in these types of questions.

TIP The Toolkit for Chapter 2 in the accompanying Resource Manual includes Table 2.2 in a Word file that can be adapted for your use, so that the template questions can be readily “filled in.”

Step 2: Acquire Research Evidence

By asking clinical questions in a well- worded form, you should be able to
more effectively search the research literature for the information you
need. Using the templates in Table 2.2, the information inserted into the
blanks constitutes keywords for undertaking an electronic search.
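A minimal sketch of this keyword-building step is shown below, using the chapter’s cachexia question. The synonym lists and the resulting query string are illustrative only—not a vetted search strategy for any particular database.

```python
# Hypothetical PICO components for the fish oil/cachexia question.
# OR joins synonyms within a component; AND joins the components.
pico = {
    "P": ["cancer", "cachexia"],
    "I": ["fish oil", "nutritional supplement"],
    "O": ["weight stabilization", "body weight"],
}

blocks = ["(" + " OR ".join(f'"{term}"' for term in terms) + ")"
          for terms in pico.values()]
query = " AND ".join(blocks)
print(query)
# ("cancer" OR "cachexia") AND ("fish oil" OR "nutritional supplement")
#   AND ("weight stabilization" OR "body weight")
```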
Earlier in this chapter, we described resources to facilitate an efficient
search for evidence. As shown in the 6S hierarchy (Table 2.1), there is a
range of preappraised evidence sources that can help you acquire evidence
regarding your question. Starting with preappraised evidence might lead
you to a quick answer—and potentially to a better answer than would be possible if you had to start at the bottom rung with individual studies.
Researchers who prepare systematic reviews and synopses usually have
excellent research skills and use established standards to evaluate the
evidence. Thus, when preprocessed evidence is available to answer a
clinical question, you may not need to look any farther, unless the review
is not recent or is of poor quality. When high- quality preprocessed
evidence cannot be located or is old, you will need to look for best
evidence in primary studies, using strategies we describe in Chapter 5.

TIP In Chapter 5, we describe the free internet resource, PubMed,
which offers a special tool for those seeking evidence for clinical
decisions. Guidance on conducting a clinical query search is provided
in the online Supplement A to Chapter 5. Another important
database, CINAHL, allows users to restrict a search with an “EBP”
limiter.

Step 3: Appraise the Evidence
The evidence acquired in Step 2 of the EBP process should be appraised
before taking clinical action. Critical appraisal for EBP may involve several
types of assessments. Various criteria have been proposed for EBP
appraisals, including the following:

1. Quality: To what extent is the evidence valid—that is, how serious is the risk of
bias?

2. Magnitude: How large is the effect of the intervention or influence (I) on the
outcome (O) in the population of interest (P)? Are the effects clinically
significant?

3. Quantity: How much evidence is there? How many studies have been
conducted, and did those studies involve a large number of study participants?

4. Consistency: How consistent are the findings across various studies?
5. Applicability: To what extent is the evidence relevant to my clinical situation and patients?

Evidence Quality
The first appraisal issue is the extent to which the findings in a research
report are valid. That is, were the study methods sufficiently rigorous that
the evidence has a low risk of bias? Melnyk and Fineout- Overholt (2019)
propose the following formula: Level of evidence (e.g., Figure
2.2) + quality of evidence = strength of evidence. Thus, in coming to a
conclusion about the quality of the evidence, it is insufficient to simply
“level” the evidence using an LOE scale—it must also be appraised. We
offer guidance on appraising the quality of evidence from primary studies
throughout this book, and Chapter 5 includes an appraisal worksheet.
If there are several primary studies and no existing systematic review, you
would need to draw conclusions about the body of evidence taken as a
whole. The previously mentioned GRADE system (Guyatt et al., 2008) is
being used increasingly to summarize evidence quality for a body of
evidence in systematic reviews (Chapter 30).

Magnitude of Effects
The appraisal criterion relating to magnitude considers how powerful the
effects of an intervention or influence are. Estimating the magnitude of the
effect for quantitative findings is especially important when an
intervention is costly or when there are potentially negative side effects. If,
for example, there is good evidence that an intervention is only marginally
effective in improving a health problem, it is important to consider other
factors (e.g., evidence regarding its effects on quality of life). There are
various ways to quantify the magnitude of effects, such as an effect size
index that we describe later in this book.
The magnitude of effects also has a bearing on clinical significance. We
discuss how to assess the clinical significance of study findings in Chapter
21.
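As a worked illustration of one such index, the sketch below computes Cohen’s d—a widely used standardized mean difference—from invented group summaries. The numbers are hypothetical and not drawn from any study cited here.

```python
import math

# Hypothetical summaries: mean, standard deviation, and n per group.
m1, sd1, n1 = 12.4, 3.1, 50   # intervention group
m2, sd2, n2 = 10.9, 3.4, 50   # comparison group

# Pooled standard deviation across the two groups
sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

# Cohen's d: difference in means expressed in standard deviation units
d = (m1 - m2) / sp
print(f"Cohen's d = {d:.2f}")  # ~0.46, a small-to-moderate effect
```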

Quantity and Consistency of Evidence
A rigorously conducted primary study of a randomized controlled trial
offers especially strong evidence about the effect of an intervention on an
outcome of interest. But multiple RCTs are better than a single study.


Moreover, large- scale studies (such as multisite studies) with a large
number of study participants are especially desirable.
If there are multiple studies that address your clinical query, however, the
strength of the evidence is likely to be diminished if there are inconsistent
results across studies. In the GRADE system, inconsistency of results leads
to a lower quality- of- evidence grade. When the results of different studies
do not corroborate each other, it is likely that further research will have an
impact on confidence about an intervention’s effect.

Applicability
It is also important to appraise the evidence in terms of its relevance for
the clinical situation at hand—that is, for your patient in a specific clinical setting. Best practice evidence can most readily be applied to an individual
patient in your care if he or she is similar to people in the study or studies
under review. Would your patient have qualified for participation in the
study—or is there some factor such as age, illness severity, or comorbidity
that would have excluded him or her? Practitioners must reach
conclusions about the applicability of research evidence, but researchers
also bear some responsibility for enhancing the applicability of their work.
As we discuss in Chapter 31, concern that “best evidence” is usually about “average” patients from restricted populations has made the issue of applicability increasingly salient.

TIP An appraisal of evidence for use in your practice may involve
additional factors. In particular, costs are likely to be an important
consideration. Some interventions are expensive, and so the amount
of resources needed to put best evidence into practice would need to
be factored into any decision. Of course, the cost of not taking action
is also important.

Actions Based on Evidence Appraisals
Appraisals of the evidence may lead you to different courses of action. You
may reach this point and conclude that the evidence is not sufficiently
sound, or that the likely effect is too small, or that the cost of applying the
evidence is too high. The evidence may suggest that “usual care” is the
best strategy—or it may lead you to pose an alternative clinical query. You
may also consider the possibility of undertaking your own study to add to
the body of evidence relating to your original clinical question. If,


however, the initial appraisal of evidence suggests a promising clinical
action, then you can proceed to the next step.

Step 4: Apply the Evidence
As the definition for EBP implies, research evidence needs to be integrated with your own clinical expertise and knowledge of your clinical setting. You may be aware of factors that would make implementation of the evidence, no matter how sound or promising, inadvisable. Patient preferences and values are also important. A discussion with the patient may reveal negative attitudes toward a potentially beneficial course of action, contraindications (e.g., comorbidities), or possible impediments (e.g., lack of health insurance).
Armed with rigorous evidence, your own clinical know-how, and information about your patient’s circumstances, you can use the resulting information to make an evidence-based decision or provide research-informed advice. Although the steps in the process, as just described, may seem complicated, in reality the process can be efficient—if there is an adequate evidence base and especially if it has been skillfully preprocessed. EBP is most challenging when findings from research are contradictory, inconclusive, or “thin”—that is to say, when better quality evidence is needed.
One final issue is the importance of integrating evidence from qualitative
research, which can provide rich insights about how patients experience a
problem, or about barriers to complying with a treatment. A new
intervention with strong potential benefits may fail to achieve desired
outcomes if it is not implemented with sensitivity and understanding of
the patients’ perspectives. As Morse (2005) so aptly noted, evidence from
an RCT may tell you whether a pill is effective, but qualitative research can
help you understand why patients may not swallow the pill.

Step 5: Assess the Outcomes of the Practice Change
One last step in many EBP efforts concerns evaluating the outcomes of the
practice change. Did you achieve the desired outcomes? Were patients
satisfied with the results?
Straus and colleagues (2011) remind us that part of the ongoing evaluation
involves how well you are performing EBP. They offer self- evaluation
questions that relate to the EBP steps, such as asking answerable questions
(Am I asking any clinical questions at all? Am I asking well-formulated questions?) and acquiring external evidence (Do I know the best sources of
current evidence? Am I becoming more efficient in my searching?).

TIP Every nurse can play a role in using research evidence. Here are
some strategies:

Read widely and critically. Professionally accountable nurses keep abreast of
important research developments relating to their specialty by reading
professional journals.
Attend professional conferences. Conference attendees have opportunities to meet researchers and to explore practice implications of new research.
Insist on evidence that a procedure is effective. Every time nurses or nursing
students are told about a standard nursing procedure, they have a right to
ask: Why? Nurses need to develop expectations that the clinical decisions
they make are based on sound, evidence- based rationales.
Become involved in a journal club. Many organizations that employ nurses
sponsor journal clubs that review studies with potential relevance to
practice.
Pursue and participate in EBP projects. Several studies have found that
nurses who are involved in research activities (e.g., an EBP project or data
collection activities) develop more positive attitudes toward research and
better research skills.

Research Example
Thousands of EBP projects are underway in practice settings. Many that
have been described in the nursing literature offer information about
planning and implementing such an endeavor. One is described here, and
another full article is included in the Resource Manual.
Study: Implementation of the MEDFRAT to promote quality care and
decrease falls in community hospital emergency rooms (McCarty et al.,
2018).
Purpose: An interprofessional team undertook an evidence-based practice
implementation project at a large healthcare delivery system with 12
emergency departments (EDs). The focus of the project was to decrease
falls in community hospital EDs.
Framework: The project used the Iowa Model as its guiding framework.
The EBP team identified a problem-focused trigger—the inconsistent use
of fall-risk assessments and variation in falls in the EDs.
Approach: The project team assembled relevant literature to identify an
appropriate assessment tool for use in emergency departments. The team
selected the Memorial Emergency Department Fall-Risk Assessment Tool
(MEDFRAT) because it was simple to use (only six questions) and had
been validated for use in EDs (i.e., it had evidence-based utility). The tool
creates two risk-stratification levels, and each has suggested fall-risk
prevention interventions. For example, possible interventions included
hourly rounding, bed in low position, bedside alarms, and locating
patients into view of the nurses’ station. Information systems staff built the
MEDFRAT into the electronic medical record. The team then created and
implemented a 1-hour education session about falls for nurses in the EDs.
The EDs in the project were visited over a 4-month period, with 60 nurses
attending the sessions. The participating nurses offered feedback and
further suggestions. Several nurses mentioned the lack of bedside alarms,
and so portable alarms were ordered. Another suggestion concerned the
use of different colored grip socks to identify patients at high risk of a fall.
Overall, the nurses’ reactions to MEDFRAT were unanimously positive.
Evaluation: The MEDFRAT has been implemented in all 12 EDs in the
system. Baseline levels of falls in the ED over a 4-year period ranged from
0 (in EDs with under 10 beds) to 76 in the ED with the most beds. Data
regarding the effectiveness of the intervention were not available when the
report was written, but short-term outcomes and longer-term outcomes
(decrease in ED falls) are being monitored.
Conclusions: The authors of the report concluded that the Iowa Model
was a useful framework. They were optimistic about the outcomes and
about using the Iowa Model to implement other evidence-based nursing
interventions in their setting.

Summary Points

Evidence-based practice (EBP) is the conscientious integration of current best
evidence and other factors in making clinical decisions. The three main
components of EBP are (1) best research evidence; (2) your own clinical
experience and knowledge; and (3) patient preferences, values, and
circumstances.
Two underpinnings of the EBP movement are the Cochrane Collaboration
(which is based on the work of British epidemiologist Archie Cochrane) and the
clinical learning strategy called evidence-based medicine developed at the
McMaster Medical School.
Research utilization (RU) and EBP are overlapping concepts that concern
efforts to use research as a basis for clinical decisions, but RU starts with a
research-based innovation that gets evaluated for possible use in practice.
Knowledge translation (KT) is a term used primarily about system-wide efforts
to enhance systematic change in clinical practice or policies. Translational
research is a discipline devoted to developing methods to promote knowledge
translation and the use of evidence.
Resources to support EBP are growing at a phenomenal pace. Preprocessed
(synthesized) and preappraised evidence is especially useful and efficient in
addressing clinical queries. The 6S hierarchy of preappraised evidence offers a
guide for efficient evidence searches. This hierarchy includes (6) systems at the
pinnacle; (5) summaries; (4) synopses of syntheses; (3) syntheses; (2) synopses of
single studies; and (1) individual primary studies, which are not preappraised,
at the base.
Systematic reviews (Syntheses) have been considered the cornerstone of EBP.
Systematic reviews are rigorous integrations of research evidence from
multiple studies on a topic. Systematic reviews can involve either narrative
approaches to integration (including metasynthesis and meta-aggregation of
qualitative studies) or quantitative methods (meta-analysis) that integrate
findings statistically by using individual studies as the unit of analysis. The
emergence of rapid reviews reflects the need for less rigorous, but more timely,
syntheses of evidence.
Evidence-based clinical practice guidelines are a major example of
preappraised evidence in the “Summaries” category of the 6S hierarchy. These
guidelines combine a synthesis and appraisal of research evidence from a
systematic review with specific recommendations for clinical decision-making.
Clinical practice guidelines should be carefully and systematically appraised,
for example, using the Appraisal of Guidelines Research and Evaluation
(AGREE II) instrument.

The EBP movement has given rise to a proliferation of evidence hierarchies that
provide a preliminary guidepost for finding “best” evidence—evidence with the
lowest risk of bias. Evidence hierarchies reflect level of evidence (LOE) scales
that rank-order types of evidence sources. Most published LOE scales are
appropriate only for Therapy/intervention questions. In LOEs for Therapy
questions, systematic reviews of randomized controlled trials (RCTs) are
considered Level I sources. However, at every level, the quality of the evidence
must be appraised: Strength of evidence = level + quality.
Many models of EBP have been developed, including models that provide a
framework for individual clinicians (e.g., the Stetler model) and others for
organizations or teams of clinicians (e.g., the Iowa Model of Evidence-Based
Practice to Promote Quality Care).
Although organizational projects include additional steps, the most basic steps
in EBP for both individuals and teams are as follows (the 5As): Ask a
well-worded clinical question; Acquire the best evidence to answer the question;
Appraise and synthesize the evidence; Apply the evidence, after integrating it
with patient preferences and clinical expertise; and Assess the effects of the
practice change.
A widely used scheme for asking well-worded clinical questions involves four
primary components, an acronym for which is PICO: Population or patients (P),
Intervention or influence (I), Comparison (C), and Outcome (O).

An appraisal of the evidence involves such considerations as the quality of the
evidence, in terms of the risk of bias; the magnitude of the effects and their clinical
importance; the quantity of evidence; the consistency of evidence across studies; and
the applicability of the evidence to particular settings and patients.

Study Activities
Study activities are available to instructors on .

References Cited in Chapter 2
Beck C. (2009). Metasynthesis: a goldmine for evidence-based practice. AORN Journal, 90, 701–702.

Beck C. T., & Woynar J. (2017). Posttraumatic stress in mothers while their preterm infants are in the newborn intensive care unit: a mixed research synthesis. Advances in Nursing Science, 40, 337–355.

* Brouwers M., Kho M., Browman G., Burgers J., Cluzeau F., Feder G., … Zittelsberger L. for the AGREE Next Steps Consortium. (2010). AGREE II: advancing guideline development, reporting and evaluation in health care. Canadian Medical Association Journal, 182, E839–E842.

Buckwalter K., Cullen L., Hanrahan K., Kleiber C., McCarthy A., Rakel B., … Tucker S. (2017). Iowa model of evidence-based practice: revisions and validation. Worldviews on Evidence-Based Nursing, 14, 175–182.

Campbell J. M., Umapathysivam K., Xue Y., & Lockwood C. (2015). Evidence-based practice point-of-care resources: a quantitative evaluation of quality, rigor, and content. Worldviews on Evidence-Based Nursing, 12, 313–327.

CIHR (2004). Knowledge translation strategy 2004–2009: innovation in action. Ottawa, ON: Canadian Institutes of Health Research.

Cronenwett L. R. (2012). A national initiative: quality and safety education for nurses (QSEN). In Sherwood G., & Barnsteiner J. (Eds.), Quality and safety in nursing: a competency approach to improving outcomes. Ames, IA: John Wiley & Sons.

Dearholt D., & Dang D. (Eds.). (2012). Johns Hopkins nursing evidence-based practice: model and guidelines. Indianapolis, IN: Sigma Theta Tau International.

DiCenso A., Bayley L., & Haynes B. (2009). Accessing pre-appraised evidence: fine-tuning the 5S model into a 6S model. Evidence-Based Nursing, 12, 99–101.

Gardner K., Kanaskie M., Knehans A., Salisbury S., Doheny K., & Schirm V. (2016). Implementing and sustaining evidence based practice through a nursing journal club. Applied Nursing Research, 31, 139–145.

Gengo e Silva R., Dos Santos Diogo R., da Cruz D., Ortiz D., Ortiz D., Peres H., & Moorhead S. (2018). Linkages of nursing diagnoses, outcomes, and interventions performed by nurses caring for medical and surgical patients using a decision support system. International Journal of Nursing Knowledge, 29, 269–275.

* Gilbert R., Salanti G., Harden M., & See S. (2005). Infant sleeping position and the sudden infant death syndrome: systematic review of observational studies and historical review of recommendations from 1940 to 2002. International Journal of Epidemiology, 34, 874–887.

Glasziou P. (2005). Evidence-based medicine: does it make a difference? Make it evidence informed with a little wisdom. British Medical Journal, 330(7482), 92.

* Goodman C. S. (2014). HTA 101: introduction to health technology assessment. Washington, DC: National Information Center on Health Services Research and Health Care Technology.

* Grant M., & Booth A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information and Libraries Journal, 26, 91–108.

* Grimmer K., Dizon J., Milanese S., King E., Beaton K., Thorpe O., … Kumar S. (2014). Efficient clinical evaluation of guideline quality: development and testing of a new tool. BMC Medical Research Methodology, 14, 63.

* Guyatt G., Oxman A., Vist G., Kunz R., Falck-Ytter Y., Alonso-Coello P., … GRADE Working Group (2008). GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ, 336, 924–926.

Guyatt G., Rennie D., Meade M., & Cook D. (2015). Users’ guide to the medical literature: essentials of evidence-based clinical practice (3rd ed.). New York: McGraw Hill.

* Harvey G., & Kitson A. (2016). PARIHS revisited: from heuristic to integrated framework for the successful implementation of knowledge into practice. Implementation Science, 11, 33.

Heyvaert M., Hannes K., & Onghena P. (2017). Using mixed methods research synthesis for literature reviews. Los Angeles: Sage Publications.

* Howick J., Chalmers I., Glasziou P., Greenhalgh T., Heneghan C., Liberati A., … Thornton H. (2011). The 2011 Oxford CEBM levels of evidence: introductory document. Oxford: Centre for Evidence-Based Medicine.

* Institute of Medicine. (2001). Crossing the quality chasm: a new health care system for the 21st century. Washington, DC: National Academic Press.

Johnston L., & Fineout-Overholt E. (2005). Teaching EBP: “Getting from zero to one.” Moving from recognizing and admitting uncertainties to asking searchable, answerable questions. Worldviews on Evidence-Based Nursing, 2, 98–102.

* Khangura S., Konnyu K., Cushman R., Grimshaw J., & Moher D. (2012). Evidence summaries: the evolution of a rapid review approach. Systematic Reviews, 1, 10.

* Kwag K. H., Gonzalez-Lorenzo M., Banzi R., Bonovos S., & Moja L. (2016). Providing doctors with high-quality information: an updated evaluation of web-based point-of-care information summaries. Journal of Medical Internet Research, 18, e15.

Levin R. F. (2014). Levels, grades, and strength of evidence: “What’s it all about, Alfie?”. Research and Theory for Nursing Practice, 28, 5–8.

* McCarty C., Woehrle T., Waring S., Taran A., & Kitch L. (2018). Implementation of the MEDFRAT to promote quality care and decrease falls in community hospital emergency rooms. Journal of Emergency Nursing, 44, 280–284.

Melnyk B. M., & Fineout-Overholt E. (2019). Evidence-based practice in nursing and health care (4th ed.). Philadelphia: Lippincott Williams & Wilkins.

Melnyk B. M., & Newhouse R. (2014). Evidence-based practice versus evidence-informed practice: a debate that could stall forward momentum in improving health care quality, safety, patient outcomes, and costs. Worldviews on Evidence-Based Nursing, 11, 347–349.

Morse J. M. (2005). Beyond the clinical trial: expanding criteria for evidence. Qualitative Health Research, 15, 3–4.

Newhouse R. P. (2007). Diffusing confusion among evidence-based practice, quality improvement, and research. Journal of Nursing Administration, 37, 432–435.

* Newhouse R. P., & Spring B. (2010). Interdisciplinary evidence-based practice: moving from silos to synergy. Nursing Outlook, 58, 309–317.

Registered Nurses’ Association of Ontario (2017). Adult asthma care: promoting control of asthma (2nd ed.). Retrieved from http://rnao.ca/bpg/guidelines/adult-asthma-care.

Rogers E. M. (1995). Diffusion of innovations (4th ed.). New York: Free Press.

* Rycroft-Malone J., Seers K., Chandler J., Hawkes C., Crichton N., Allen C., … Strunin L. (2013). The role of evidence, context, and facilitation in an implementation trial: implications for the development of the PARIHS framework. Implementation Science, 8, 28.

** Saqe-Rockoff A., Schubert F., Ciardiello A., & Douglas E. (2018). Improving thermoregulation for trauma patients in the emergency department: an evidence-based project. Journal of Trauma Nursing, 25, 14–20.

Sandelowski M., Voils C. I., Crandell J. L., & Leeman J. (2013). Synthesizing qualitative and quantitative research findings. In Beck C. T. (Ed.), Routledge international handbook of qualitative nursing research (pp. 347–356). New York: Routledge.

Schaffer M. A., Sandau K., & Diedrick L. (2013). Evidence-based practice models for organizational change: overview and practical applications. Journal of Advanced Nursing, 69, 1197–1209.

Scott K., & McSherry R. (2009). Evidence-based nursing: clarifying the concepts for nurses in practice. Journal of Clinical Nursing, 18, 1085–1095.

* Siebenhofer A., Semlitsch T., Herbom T., Siering U., Kopp I., & Hartig J. (2016). Validation and reliability of a guideline appraisal mini-checklist for daily practice use. BMC Medical Research Methodology, 16, 39.

Stetler C. B. (2010). Stetler model. In Rycroft-Malone J. & Bucknall T. (Eds.), Models and frameworks for implementing evidence-based practice: linking evidence to action (pp. 51–77). Malden, MA: Wiley-Blackwell.

Stevens K. R. (2012). Star model of EBP: knowledge transformation. Academic center for evidence-based practice. San Antonio, TX: The University of Texas Health Science Center at San Antonio.

Straus S. E., Glasziou P., Richardson W., & Haynes R. (2011). Evidence-based medicine: how to practice and teach it (4th ed.). Toronto: Churchill Livingstone.

Titler M. (2014). Overview of evidence-based practice and translation science. Nursing Clinics of North America, 49, 269–274.

Titler M. G., Kleiber C., Steelman V., Rakel B., Budreau G., Everett L., … Goode C. (2001). The Iowa model of evidence-based practice to promote quality care. Critical Care Nursing Clinics of North America, 13, 497–509.

* World Health Organization (2005). Bridging the “Know-Do” gap: meeting on knowledge translation in global health. Retrieved June 20, 2019, from https://www.measureevaluation.org/resources/training/capacity-building-resources/high-impact-research-training-curricula/bridging-the-know-do-gap.pdf.

Zhang L., Fu T., Zhang Q., Yin R., Zhu L., He Y., … Shen B. (2018). Effects of psychological interventions for patients with osteoarthritis: a systematic review and meta-analysis. Psychology, Health, and Medicine, 23, 1–17.

*A link to this open-access article is provided in the Toolkit for Chapter 2 in the Resource Manual.

**This journal article is available on for this chapter.

C H A P T E R 3

Key Concepts and Steps in Qualitative and
Quantitative Research

This chapter covers a lot of ground—but, for many of you, it is familiar
ground. If you have taken an earlier research course, this chapter will be a
review of key terms and steps in the research process. If you have no
previous exposure to research methods, this chapter offers basic
grounding in research terminology.
Research, like any discipline, has its own jargon. Some terms are used by
both qualitative and quantitative researchers, but others are used mainly
by one or the other group. Also, some nursing research jargon has its roots
in the social sciences, but sometimes different terms for the same concepts
are used in medical research; we cover both.

Fundamental Research Terms and Concepts
When researchers address a problem—regardless of the underlying
paradigm—they undertake a study (or an investigation). Studies involve
people cooperating with each other in different roles.

The Faces and Places of Research
Studies with humans involve two groups: those doing research and those
providing the information. In a quantitative study, the people being
studied are called subjects or study participants (Table 3.1). In a
qualitative study, those under study are called study participants or
informants. Collectively, study participants comprise the sample.

TABLE 3.1
Key Terms in Quantitative and Qualitative Research

Concept | Quantitative Term | Qualitative Term
Person contributing information | Subject, study participant | Study participant; informant, key informant
Person undertaking the study | Researcher, investigator | Researcher, investigator
That which is being investigated | Concepts, constructs, variables | Phenomena, concepts
System of organizing concepts | Theory, theoretical framework; conceptual framework, conceptual model | Theory; conceptual framework, sensitizing framework
Information gathered | Data (numerical values) | Data (narrative descriptions)
Connections between concepts | Relationships (cause-and-effect, associative) | Patterns of association
Logical reasoning processes | Deductive reasoning | Inductive reasoning

Box 3.1 Example of Quantitative Data

Question: Thinking about the past week, how depressed would you say you have been on a scale from 0 to
10, where 0 means “not at all” and 10 means “the most possible”?

Data: 9 (Subject 1)
0 (Subject 2)
4 (Subject 3)

Box 3.2 Example of Qualitative Data

Question: Tell me about how you’ve been feeling lately—have you felt sad or depressed at all, or have you
generally been in good spirits?

Data: “Well, actually, I’ve been pretty depressed lately, to tell you the truth. I wake up each morning
and I can’t seem to think of anything to look forward to. I mope around the house all day, kind
of in despair. I just can’t seem to shake the blues, and I’ve begun to think I need to go see a
shrink.” (Participant 1)
“I can’t remember ever feeling better in my life. I just got promoted to a new job that makes me
feel like I can really get ahead in my company. And I’ve just gotten engaged to a really great guy
who is very special.” (Participant 2)
“I’ve had a few ups and downs the past week, but basically things are on a pretty even keel. I
don’t have too many complaints.” (Participant 3)

Box 3.3 Additional Questions for a Preliminary Review of a
Research Report

1. What is the study all about? What are the main phenomena, concepts, or
constructs under investigation?

2. If the study is quantitative, what are the independent and dependent variables?
What are the PICO elements—and for what type of question (Therapy,
Prognosis, etc.)?

3. Do the researchers examine relationships or patterns of association among
variables or concepts? Does the report imply the possibility of a causal
relationship?

4. Are key concepts clearly defined, both conceptually and operationally?
5. What type of study does it appear to be, in terms of types described in this
chapter: Quantitative—experimental? nonexperimental? Qualitative—
descriptive? grounded theory? phenomenologic? ethnographic?

6. Does the report provide any information to suggest how long the study took to
complete?

7. Does the format of the report conform to the traditional IMRAD format? If not,
in what ways does it differ?

The person who conducts a study is the researcher or investigator. Studies
are often done by a team; the person directing the study is the principal
investigator (PI). Increasingly, nurse researchers are working as a part of
interdisciplinary research teams. In large-scale projects, dozens of
individuals may be involved in planning and conducting the study.
Research can be undertaken in a variety of settings—the specific places
where information is gathered. Some studies take place in naturalistic
settings in the field, such as in people’s homes, but some studies are done
in laboratory or clinical settings. Qualitative researchers are especially
likely to engage in fieldwork in natural settings because they are
interested in the contexts of people’s experiences. The site is the overall
location for the research—it could be an entire community (e.g., a Haitian
neighborhood in Miami) or an institution (e.g., a hospital in Toronto).
Researchers sometimes undertake multisite studies because the use of
multiple sites offers a larger or more diverse sample of participants.

The Building Blocks of Research

Phenomena, Concepts, and Constructs
Research involves abstractions. For example, pain, fatigue, and obesity are
abstractions of human characteristics. These abstractions are called
concepts or, in qualitative studies, phenomena.
Researchers also use the term construct, which refers to an abstraction
inferred from situations or behaviors—but often one that is deliberately
invented or constructed. For example, self- care in Orem’s model of health
maintenance is a construct. The terms construct and concept are sometimes
used interchangeably, but by convention, a construct typically refers to a
more complex abstraction than a concept.

Theories and Conceptual Models
A theory is a systematic explanation of some aspect of reality. Theories,
which knit concepts together into a coherent system, play a role in both
qualitative and quantitative research.
Quantitative researchers may start with a theory or conceptual model
(distinctions are discussed in Chapter 6). Based on theory, researchers
predict how phenomena will behave in the real world if the theory is true.
Researchers use deductive reasoning to go from a theory to specific
predictions, which are tested through research; study results are used to
support, reject, or modify the theory.
In qualitative research, theories may be used in various ways. Sometimes
conceptual or sensitizing frameworks, derived from qualitative research

traditions we describe later in this chapter, offer an orienting world view.
In such studies, the framework helps to guide the inquiry and to interpret
the findings. In other qualitative studies, theory is the product of the
research: the investigators use information from participants inductively to
develop a theory rooted in the participants’ experiences.

Deductive and inductive logical reasoning processes are described more

fully in the Supplement to this chapter on the book’s website, .

Variables
In quantitative studies, concepts often are called variables. A variable, as
the name implies, is something that varies. Weight, fatigue, and stress are
variables—each varies from one person to another. In fact, most aspects of
humans are variables. If everyone weighed 150 pounds, weight would not
be a variable but rather would be a constant. It is precisely because people
and conditions do vary that most research is conducted. Quantitative
researchers seek to understand how or why things vary and to learn if
differences in one variable are related to differences in another. For
example, lung cancer research focuses on the variable of lung cancer,
which is a variable because not everyone has this disease. Researchers
have studied factors that might be linked to lung cancer, such as cigarette
smoking. Smoking is also a variable because not everyone smokes. A
variable, then, is any quality of a person, group, or situation that takes on
different values.
When an attribute is highly varied in the group under study, the group is
heterogeneous with respect to that variable. If the amount of variability is
limited, the group is homogeneous. For example, for the variable height, a
sample of 2-year-old children would be more homogeneous than a sample
of 21-year-olds.

Characteristics of Variables
Variables may be inherent characteristics of people, such as their age or
blood type. Sometimes, however, researchers create a variable. For
example, if a researcher tests the effectiveness of patient-controlled
analgesia as opposed to intramuscular analgesia in relieving pain after
surgery, some patients would be given patient-controlled analgesia and
others would receive intramuscular analgesia. In the context of the study,
method of pain management is a variable because different patients get
different analgesic methods.
Some variables take on a wide range of values that can be represented on a
continuum. For example, a person’s age is a continuous variable that can, in
theory, assume an infinite number of values between two points; between 1
and 2 pounds for the variable weight, the number of values is limitless
(e.g., 1.05, 1.3333, and so on). Other variables take on only a few values.
Discrete variables convey quantitative information (e.g., number of
children), but categorical variables involve placing people into categories
(e.g., gender, blood type). Categorical variables with only two categories
(e.g., alive/dead) are dichotomous variables.

Dependent and Independent Variables
Many studies seek to unravel and understand causes of phenomena. Does
a nursing intervention cause improvements in patient outcomes? Does
smoking cause lung cancer? The presumed cause is the independent
variable, and the presumed effect is the dependent variable (or the
outcome variable). The dependent variable corresponds to the “O”
(outcome) of the PICO scheme discussed in Chapter 2. The independent
variable corresponds to the “I” (the intervention, influence, or exposure),
plus the “C” (the comparison). In doing an evidence search, you might
want to learn about the effects of an intervention or influence (I),
compared with any alternative, on an outcome (O). In a study, however,
researchers must always specify the comparator (the “C”) that they will
investigate.
Variation in the dependent variable is presumed to depend on variation in
the independent variable. For example, researchers study the extent to
which lung cancer (the dependent variable) depends on smoking (the
independent variable). Or, investigators might study the extent to which
patients’ pain (the dependent variable) depends on certain nursing actions
(the independent variable). The dependent variable is the outcome that
researchers want to understand, explain, or predict.
The terms independent variable and dependent variable are also used to
indicate direction of influence rather than a causal link. For example,
suppose a researcher studied the role of gender in the mental health (O) of
spousal caregivers of patients with dementia (P) and found lower
depression for wives than for husbands (I and C). We could not conclude
that depression was caused by gender. Yet the direction of influence clearly

runs from gender to depression: patients’ level of depression does not
influence their gender. Even without a cause-and-effect connection, it is
appropriate to consider depression as the outcome variable and gender as
an independent variable.
Most outcomes have multiple causes or influences. If we were studying
factors that influence obesity, as measured by people’s body mass index
(the dependent variable), we might consider height, physical activity, and
diet as independent variables in this Etiology question. Two or more
dependent variables also may be of interest. For example, a researcher may
compare the effects of alternative nursing interventions for children with
cystic fibrosis (a Therapy question). Several dependent variables could be
used to assess treatment effectiveness, such as length of hospital stay,
number of recurrent respiratory infections, and so on. It is common to
design studies with multiple independent and dependent variables.
Variables are not inherently dependent or independent. A dependent
variable in one study could be an independent variable in another. For
example, a study might examine the effect of an exercise intervention
versus no intervention (the independent variable) on osteoporosis (the
dependent variable) to answer a Therapy question. Another study might
investigate the effect of osteoporosis versus no osteoporosis (the
independent variable) on bone fracture incidence (the dependent variable)
to address a Prognosis question. In short, whether a variable is
independent or dependent is a function of the role that it plays in a
particular study.

Example of Independent and Dependent Variables
Research question (Etiology/Harm question): Is dietary vitamin C
deficiency associated with cardiac event–free survival in adults with
heart failure? (Wu et al., 2019)
Independent variable: Dietary vitamin C deficiency (vs. no
deficiency).
Dependent variable: Cardiac event–free survival versus a cardiac event.

Conceptual and Operational Definitions
Concepts are abstractions of observable phenomena, and researchers’
world views shape how those concepts are defined. A conceptual
definition presents the abstract or theoretical meaning of concepts under

study. Even seemingly straightforward terms need to be conceptually
defined. The classic example is the concept of caring. Morse et al. (1990)
examined how researchers and theorists defined caring and identified five
classes of conceptual definition: as a human trait; a moral imperative; an
affect; an interpersonal relationship; and a therapeutic intervention. More
recently, Andersson et al. (2015) found that nurses offered multiple
interpretations of caring. Researchers undertaking studies of caring need
to clarify which conceptual definition they have adopted.
In qualitative studies, conceptual definitions of key phenomena may be a
major end product, reflecting an intent to have the meaning of concepts
defined by those being studied. In quantitative studies, however,
researchers must define concepts at the outset because they must decide
how the variables will be observed and measured. An operational
definition specifies what the researchers must do to measure the concept
and collect needed information.
Variables differ in the ease with which they can be operationalized. The
variable weight, for example, is easy to define and measure. We might
operationally define weight as the amount that an object weighs, to the
nearest half pound. This definition designates that weight will be
measured using one system (pounds) rather than another (grams). We
could also specify that weight will be measured using a digital scale with
participants fully undressed after 10 hours of fasting. This operational
definition clarifies what we mean by the variable weight.
Few variables are operationalized as easily as weight. Most variables can
be measured in different ways, and researchers must choose the one that
best captures the variables as they conceptualize them. Take, for example,
anxiety, which can be defined in terms of both physiologic and
psychological functioning. For researchers choosing to emphasize
physiologic aspects, the operational definition might involve a
measurement of salivary cortisol. If researchers conceptualize anxiety as a
psychological state, the operational definition might be people’s scores on
a patient- reported test such as the State Anxiety Scale. Readers of research
articles may not agree with how variables were conceptualized and
measured, but definitional precision is important for communicating
exactly what concepts mean within the study.

TIP Operationalizing a concept is often a two-part process that
involves deciding (1) how to accurately measure the variable and (2)
how to represent it in an analysis. For example, a person’s age might
be obtained by asking them to report their birthdate but
operationalized in an analysis in relation to a threshold (e.g., younger
than 65 years vs. 65 years or older).

Example of Conceptual and Operational Definitions
Rafferty et al. (2017) developed a measure called the Culture of Care
Barometer (CoCB) to measure the culture of care in healthcare
organizations. They defined “culture of care” conceptually as the
shared beliefs, norms, and routines through which the environment
of a healthcare organization can be interpreted and understood. This
construct was operationalized in the CoCB through a series of 30
questions to staff. Two examples are, “I have the resources I need to
do a good job” and “I feel supported to develop my potential.”

Data
Research data (singular, datum) are the pieces of information obtained in a
study. In quantitative studies, researchers define their variables and then
collect relevant data from study participants. Quantitative researchers
collect primarily quantitative data—data in numeric form. For example,
suppose depression was a key variable in a quantitative study. We might
ask participants, “Thinking about the past week, how depressed would
you say you have been on a scale from 0 to 10, where 0 means ‘not at all’
and 10 means ‘the most possible’?” Box 3.1 presents quantitative data for
three fictitious people. Subjects provided a number along a 0 to 10
continuum representing their degree of depression—9 for subject 1 (a high
level of depression), 0 for subject 2 (no depression), and 4 for subject 3
(mild depression). The numeric values for all participants, collectively,
would comprise the data on depression in this study.
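
Because quantitative data are numeric, they can be summarized directly with
software. As a minimal sketch (not part of any study described here; the
variable names are illustrative), the 0 to 10 ratings from Box 3.1 could be
stored and averaged in Python like this:

    # Hypothetical 0-10 depression ratings for three subjects, as in Box 3.1
    depression_scores = [9, 0, 4]

    # One simple summary: the mean level of depression in this small sample
    mean_score = sum(depression_scores) / len(depression_scores)
    print(f"Mean depression score: {mean_score:.2f}")  # prints 4.33
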
In qualitative studies, researchers collect qualitative data, that is, narrative
descriptions. Narrative information can be obtained by having
conversations with participants, by making detailed notes about how
people behave, or by obtaining narrative records, such as diaries. Suppose
we were studying depression qualitatively. Box 3.2 presents qualitative
data for three people responding conversationally to the question, “Tell me
about how you’ve been feeling lately—have you felt sad or depressed at

all, or have you generally been in good spirits?” The data consist of rich
narrative descriptions of the participants’ emotional states.

Relationships
Researchers are rarely interested in isolated concepts, except in descriptive
studies. For example, a researcher might describe the percentage of
patients receiving intravenous (IV) therapy who experience IV infiltration.
In this example, the variable is IV infiltration versus no infiltration.
Usually, however, researchers study phenomena in relation to other
phenomena—that is, they focus on relationships. A relationship is a bond
or a connection between phenomena. For example, researchers repeatedly
have found a relationship between cigarette smoking and lung cancer. Both
qualitative and quantitative studies examine relationships, but in different
ways.
In quantitative studies, researchers examine the relationship between the
independent and dependent variables. Researchers ask whether variation
in the dependent variable (the outcome) is systematically related to
variation in the independent variable. Relationships are usually expressed
in quantitative terms, such as more than, less than, and so on. For example,
let us consider a person’s weight as our dependent variable. What
variables are related to (associated with) body weight? Some possibilities
are height, caloric intake, and exercise. For each independent variable, we
can make a prediction about its relationship to the outcome variable:

Height: Taller people will weigh more than shorter people.
Caloric intake: People with higher caloric intake will be heavier than those with
lower caloric intake.
Exercise: The lower the amount of exercise, the greater the person’s weight.

Each statement expresses a predicted relationship between weight (the
dependent variable) and a measurable independent variable. Terms like
more than and heavier than imply that as we observe a change in one
variable, we are likely to observe a change in weight. If Alex is taller than
Tom, we would predict (in the absence of other information) that Alex is
heavier than Tom.
Quantitative studies can address one or more of the following questions
about relationships:

Does a relationship between variables exist? (e.g., Is cigarette smoking related to
lung cancer?)

What is the direction of the relationship between variables? (e.g., Are people who
smoke more likely or less likely to develop lung cancer than those who do not?)
How strong is the relationship between the variables? (e.g., How great is the risk
that smokers will develop lung cancer?)
What is the nature of the relationship between variables? (e.g., Does smoking
cause lung cancer? Does some other factor cause both smoking and lung cancer?)
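
As a hedged illustration of the first three questions (existence, direction,
and strength), one common statistic is the Pearson correlation coefficient.
The sketch below uses invented data and Python’s standard statistics module
(the correlation function assumes Python 3.10 or later):

    import statistics

    # Hypothetical heights (inches) and weights (pounds) for five people
    heights = [60, 64, 66, 70, 74]
    weights = [115, 130, 150, 160, 180]

    # Pearson r: the sign indicates the direction of the relationship,
    # and the magnitude (0 to 1) indicates its strength
    r = statistics.correlation(heights, weights)
    print(f"r = {r:.2f}")  # near +1: taller people tend to weigh more
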

Variables can be related in different ways. One type of relationship is a
cause-and-effect (or causal) relationship. Within the positivist paradigm,
natural phenomena have antecedent causes that are presumably
discoverable. In our example about a person’s weight, we might speculate
that there is a causal relationship between caloric intake and weight: we
might predict that consuming more calories causes weight gain. Many
quantitative studies are cause-probing—they seek to illuminate the causes
of phenomena.

Example of a Study of Causal Relationships
Lee et al. (2019a) evaluated the effect of California’s safe patient
handling legislation on musculoskeletal injury prevention among
nurses.

Not all relationships between variables can be interpreted as causal ones.
There is a relationship, for example, between a person’s pulmonary artery
and tympanic temperatures: people with high readings on one tend to
have high readings on the other. We cannot say, however, that pulmonary
artery temperature caused tympanic temperature, nor vice versa. This type
of relationship is a functional (or associative) relationship rather than a
causal one.

Example of a Study of Associative Relationships
Fox et al. (2018) studied the relationship between various risk factors
(including age and sex) and severe respiratory depression (SRD)
among adults with acute prescription opioid overdose. Age was
associated with higher risk of SRD.

Qualitative researchers are not concerned with quantifying relationships
or with testing causal relationships. Qualitative researchers seek patterns
of association as a way to illuminate the underlying meaning and
dimensionality of phenomena. Patterns of interconnected themes and
processes are identified as a means of understanding the whole.

Example of a Qualitative Study of Patterns
MacArtney et al. (2017) explored what steps patients with cancer in
three countries (Denmark, England, and Sweden) took to arrive at
their original cancer diagnosis. In-depth interviews with 155 men and
women revealed two distinct patterns: (1) those who left their
primary care consultation with a plan about what should happen
next and (2) those who were unclear about next steps. The second
pattern extended over many weeks of uncertainty. Patients from
Sweden were more likely to follow the first pattern.

Major Classes of Quantitative and Qualitative Research
Researchers usually work within a paradigm that is consistent with their
world view and that gives rise to questions that excite their curiosity. The
maturity of the focal concept also may lead to one or the other paradigm:
when little is known about a phenomenon, a qualitative approach may be
more fruitful than a quantitative one. In this section, we briefly describe
broad categories of quantitative and qualitative research.

Quantitative Research: Experimental and Nonexperimental
Studies
A basic distinction in quantitative studies is between experimental and
nonexperimental research. In experimental research, researchers actively
introduce an intervention or treatment—most often, to address Therapy
questions. In nonexperimental research, researchers are bystanders—they
collect data without intervening (most often, to address Etiology,
Prognosis, or Description questions). For example, if a researcher gave
bran flakes to one group of people and prune juice to another to evaluate
which method facilitated elimination more effectively, the study would be
experimental because the researcher intervened in the normal course of
things. If, on the other hand, a researcher compared elimination patterns of
two groups whose regular eating patterns differed, the study would be
nonexperimental because there is no intervention. In medical research, an
experimental study usually is called a clinical trial and a nonexperimental
inquiry is called an observational study. A randomized controlled trial or
RCT is a particular type of clinical trial.

TIP On the evidence hierarchy shown in Figure 2.1, the two levels
directly below systematic reviews (RCTs and quasi-experiments)
involve interventions.

Experimental studies are explicitly cause-probing—they test whether an
intervention causes changes in the dependent variable. Sometimes
nonexperimental studies also explore causal relationships, but the
resulting evidence is usually less conclusive. Experimental studies offer
the possibility of greater control over confounding influences than
nonexperimental studies, and so causal inferences are more plausible.

Example of Experimental Research
Mitchell et al. (2018) are testing the effectiveness of an online therapy
program (ReaDySpeech) for people with dysarthria following a
stroke.

In this example of a study addressing a Therapy question, the researchers
intervened by giving some stroke patients the special intervention but not
giving it to others. In other words, the researcher controlled the
independent variable, which in this case was receipt versus nonreceipt of
the ReaDySpeech intervention.

Example of Nonexperimental Research
Chung and Sohn (2018) studied the relationship between nurse
staffing levels and in-hospital mortality (after taking into account such
factors as patient comorbidities) among stroke inpatients from 615
hospitals in Korea. Better staffing was associated with lower rates of
mortality.

In this nonexperimental study to address an Etiology/Harm question, the
researchers did not intervene in any way—they did not have control over
nurse staffing. They were interested in a similar population as in the
previous example (stroke patients), but their intent was to examine
existing relationships rather than to test a potential solution to a problem.

Qualitative Research: Disciplinary Traditions
The majority of qualitative nursing studies can best be described as
qualitative descriptive research. Many qualitative studies, however, are
rooted in research traditions that originated in anthropology, sociology,
and psychology. Three such traditions that are prominent in qualitative
nursing research are briefly described here. Chapter 22 provides a fuller
discussion of these traditions and the methods associated with them.
Grounded theory research, with roots in sociology, seeks to describe and
understand the key social psychological processes that occur in social
settings. Most grounded theory studies focus on a developing social
experience—the social and psychological processes that characterize an
event or episode. A major component of grounded theory is the discovery
of not only the basic social psychological problem but also a core variable
that is central in explaining what is going on in that social scene.
Grounded theory researchers strive to generate explanations of
phenomena that are grounded in reality. Grounded theory was developed
in the 1960s by two sociologists, Glaser and Strauss (1967).

Example of a Grounded Theory Study
Hsieh et al. (2018) conducted a grounded theory study in Taiwan to
explore ischemic stroke patients’ decision- making process regarding
the use of Western medicine and complementary and alternative
medicine (CAM).

Phenomenology is concerned with the lived experiences of humans.
Phenomenology is an approach to thinking about what life experiences of
people are like and what they mean. The phenomenologic researcher asks
the questions: What is the essence of this phenomenon as experienced by
these people? Or, what is the meaning of the phenomenon to those who
experience it?

Example of a Phenomenologic Study
Lee et al. (2019b) conducted in-depth interviews to explore the social
adjustment experiences of adolescents with Tourette syndrome.

Ethnography, the primary research tradition in anthropology, provides a
framework for studying the patterns, lifeways, and experiences of a
defined cultural group in a holistic manner. Ethnographers typically
engage in extensive fieldwork, often participating in the life of the culture
under study. Ethnographic research can be concerned with broadly
defined cultures (e.g., Syrian refugee communities), but sometimes focuses
on more narrowly defined cultures (e.g., the culture of an intensive care
unit). Ethnographers strive to learn from members of a cultural group, to
understand their world view, and to describe their customs and norms.

Example of an Ethnographic Study
Ahlstedt et al. (2019) conducted an ethnographic study of Swedish
nurses to explore nurses’ workday events to better understand what
influences nurses’ decision to keep working.

Major Steps in a Quantitative Study
In quantitative studies, researchers move from the beginning of a study
(posing a question) to the end point (obtaining an answer) in a reasonably
linear sequence of steps that is broadly similar across studies. In some
studies, the steps overlap; in others, some steps are unnecessary. Still, a
general flow of activities is typical in a quantitative study (see Figure 3.1).
This section describes that flow, and the next section explains how
qualitative studies differ.

FIGURE 3.1 Flow of steps in a quantitative study.

Phase 1: The Conceptual Phase

Early steps in a quantitative study typically have a strong conceptual
element. Activities include reading, conceptualizing, theorizing, and
reviewing ideas with colleagues or advisers. During this phase,
researchers call on such skills as creativity, deductive reasoning, and a
firm grounding in previous research on a topic of interest.

Step 1: Formulating and Delimiting the Problem
Quantitative researchers begin by identifying an interesting, significant
research problem and formulating research questions. Good research
requires starting with good questions. In developing research questions,
nurse researchers must attend to substantive issues (What kind of new
evidence is needed?); theoretical issues (Is there a conceptual context for
understanding this problem?); clinical issues (How could evidence from
this study be used in clinical practice?); methodologic issues (How can this
question best be studied to yield high-quality evidence?); and ethical
issues (Can this question be rigorously addressed in an ethical manner?).

TIP A critical ingredient in developing good research questions is
personal interest. Begin with topics that fascinate you or about which
you have a passionate interest.

Step 2: Reviewing the Related Literature
Quantitative research is conducted in a context of previous knowledge.
Quantitative researchers typically strive to understand what is already
known about a topic by undertaking a literature review. A thorough
literature review provides a foundation on which to base new evidence
and usually is conducted before data are collected. For clinical problems, it
may also be necessary to learn the “status quo” of current procedures and
to review existing practice guidelines.

Step 3: Undertaking Clinical Fieldwork
Unless the research problem originated in a clinical setting, researchers
embarking on a clinical nursing study benefit from spending time in
relevant clinical settings, discussing the problem with clinicians and
administrators, and observing current practices. Clinical fieldwork can
provide perspectives on recent clinical trends, diagnostic procedures, and
relevant healthcare delivery models; it can also help researchers better
understand clients and the settings in which care is provided. Such
fieldwork can also be valuable in gaining access to an appropriate site or in
developing research strategies. For example, in the course of clinical
fieldwork, researchers might discover the need for research staff who are
bilingual.

Step 4: Defining the Framework and Developing Conceptual
Definitions
Theory transcends the specifics of a particular time, place, and group and
characterizes regularities in the relationships among variables. When
quantitative research is performed within the context of a theoretical
framework, the findings often have broader significance and utility. Even
when the research question is not embedded in a theory, researchers
should have a conceptual rationale and a clear vision of the concepts
under study.

Step 5: Formulating Hypotheses
Hypotheses state researchers’ predictions about relationships between
study variables. The research question identifies the study concepts and
asks how the concepts might be related; a hypothesis is the predicted
answer. For example, the research question might be: Is preeclamptic
toxemia related to stress during pregnancy? This might be translated into
the following hypothesis: Women with high levels of stress during
pregnancy will be more likely than women with lower stress to experience
preeclamptic toxemia. Most quantitative studies involve testing
hypotheses through statistical analysis.

Phase 2: The Design and Planning Phase
In the second major phase of a quantitative study, researchers decide on
the methods they will use to address the research question. Researchers
make many methodologic decisions, which have important implications
for the integrity and generalizability of the resulting evidence.

Step 6: Selecting a Research Design
The research design is the overall plan for obtaining answers to the
research questions. In designing the study, researchers select a specific
design from the many experimental and nonexperimental research designs
that are available. Research designs specify how often data will be
collected, what types of comparisons will be made, and where the study

will take place. Researchers also identify strategies to minimize biases and
to maximize the applicability of their research to real-life settings. The
research design is the architectural backbone of the study.

Step 7: Developing Protocols for the Intervention
In experimental research, researchers create an intervention (the
independent variable) and need to articulate its features. For example, if
we were interested in testing the effect of biofeedback on hypertension, the
independent variable would be exposure to biofeedback compared with
either an alternative treatment (e.g., relaxation) or no treatment. An
intervention protocol for the study must be developed, specifying exactly
what the biofeedback treatment would entail (e.g., what type of feedback,
who would administer it, how frequently and over how long a period the
treatment would last, and so on) and what the alternative condition would
be. The goal of such protocols is to ensure that all people in each group are
treated in the same way. (In nonexperimental research, this step is not
necessary.)

Step 8: Identifying the Population
Quantitative researchers need to clarify the group to whom study results
can be generalized—that is, they must identify the population to be
studied. A population is all the individuals or objects with common,
defining characteristics (the “P” component in PICO questions). For
example, the population of interest might be all patients undergoing
chemotherapy in Atlanta.

Step 9: Designing the Sampling Plan
Researchers collect data from a sample, which is a subset of the
population. Using samples is more feasible than collecting data from an
entire population, but the risk is that the sample might not reflect the
population’s traits. In a quantitative study, a sample’s adequacy is
assessed by its size and representativeness. The quality of the sample
depends on how typical, or representative, the sample is of the population.
The sampling plan specifies how the sample will be selected and recruited
and how many subjects there will be.

Step 10: Specifying Methods to Measure Research Variables

Quantitative researchers must identify methods to measure their research
variables. The primary methods of data collection are self-reports (e.g.,
interviews), observations (e.g., observing the sleep-wake state of infants),
and biophysiologic measurements (biomarkers). Self-reports from patients are
the largest class of data collection methods in nursing research. The task of
selecting measures of research variables and developing a data collection
plan is complex and challenging.

Step 11: Developing Methods to Safeguard Human/Animal Rights
Most nursing research involves humans, and so procedures need to be
developed to ensure that the study adheres to ethical principles. A formal
review by an ethics committee is usually required.

Step 12: Reviewing and Finalizing the Research Plan
Before collecting their data, researchers often take steps to ensure that
plans will work smoothly. For example, they may evaluate the readability
of written materials to assess if participants with low reading skills can
comprehend them, or they may pretest their measuring instruments to see
if they work well. Normally, researchers also have their research plan
critiqued by peers, consultants, or other reviewers before implementing it.
Researchers seeking financial support submit a proposal to a funding
source, and reviewers usually suggest improvements.

TIP For major studies, researchers often undertake a small-scale pilot
study to test their research plans. Strategies for designing effective
pilot studies are described in Chapter 29.

Phase 3: The Empirical Phase
The empirical phase of quantitative studies involves collecting data and
preparing the data for analysis. Often, the empirical phase is the most
time-consuming part of the investigation. Data collection typically requires
months of work.

Step 13: Collecting the Data
The actual collection of data in quantitative studies often proceeds
according to a preestablished plan. A data collection protocol typically spells
out procedures for training data collection staff; for actually collecting data

(e.g., the location and timing of gathering the data); and for recording
information. Technological advances have expanded possibilities for
automating data collection.

Step 14: Preparing the Data for Analysis
Data collected in a quantitative study must be prepared for analysis. One
preliminary step is coding, which involves translating verbal data into
numeric form (e.g., coding gender as “1” for females, “2” for males, and
“3” for other). Another step may involve transferring the data from
written documents onto computer files for analysis.
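
As a minimal sketch of this coding step (the numeric codes follow the
example just given; the data themselves are invented):

    # Hypothetical coding scheme translating verbal data into numeric form
    GENDER_CODES = {"female": 1, "male": 2, "other": 3}

    raw_responses = ["female", "other", "male", "female"]
    coded_responses = [GENDER_CODES[response] for response in raw_responses]
    print(coded_responses)  # prints [1, 3, 2, 1]
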

Phase 4: The Analytic Phase
Quantitative data must be subjected to analysis and interpretation, which
occur in the fourth major phase of a project.

Step 15: Analyzing the Data
Quantitative researchers analyze their data through statistical analyses,
which include simple procedures (e.g., computing an average) as well as
ones that are complex. Some analytic methods are computationally
formidable, but the underlying logic of statistical tests is fairly easy to
grasp. Computers have eliminated the need to get bogged down with
mathematical operations.
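
As an informal illustration of such a “simple procedure,” the following sketch uses Python’s standard statistics module to compute an average and a standard deviation; the data values are invented for the example:

    # Invented data: cigarettes smoked per week by five participants.
    import statistics

    cigarettes_per_week = [70, 105, 140, 84, 98]
    print(statistics.mean(cigarettes_per_week))   # average: 99.4
    print(statistics.stdev(cigarettes_per_week))  # sample SD: about 26.4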

Step 16: Interpreting the Results
Interpretation involves making sense of study results and examining their
implications. Researchers attempt to explain the findings in light of prior
evidence, theory, and their own clinical experience—and in light of the
adequacy of the methods they used in the study. Interpretation also involves
drawing conclusions about the clinical significance of the results,
envisioning how the new evidence can be used in nursing practice, and
suggesting what further research is needed.

Phase 5: The Dissemination Phase
In the analytic phase, the researcher comes full circle: questions posed at
the outset are answered. Researchers’ responsibilities are not completed,
however, until study results are disseminated.

Step 17: Communicating the Findings

A study cannot contribute evidence to nursing practice if the results are
not shared. Another—and often final—task of a study is the preparation of
a research report that summarizes the study. Research reports can take
various forms: dissertations, journal articles, conference presentations, and
so on. Journal articles—reports appearing in professional journals such as
Nursing Research—usually are the most useful because they are available to
a broad, international audience. We discuss journal articles later in this
chapter.

Step 18: Utilizing the Findings in Practice
Ideally, the concluding step of a high-quality study is to plan for the use of
the evidence in practice settings. Although nurse researchers may not
themselves be able to implement a plan for using research findings, they
can contribute to the process by making recommendations for utilizing the
evidence, by ensuring that adequate information has been provided for a
systematic review, and by pursuing opportunities to disseminate the
findings to clinicians.

Activities in a Qualitative Study
Quantitative research involves a fairly linear progression of tasks—
researchers plan the steps to be taken to maximize study integrity and
then follow those steps as faithfully as possible. In qualitative studies, by
contrast, the progression is closer to a circle than to a straight line—
qualitative researchers continually examine and interpret data and make
decisions about how to proceed based on what has already been
discovered (Figure 3.2).

FIGURE 3.2 Flow of activities in a qualitative study.

Because qualitative researchers have a flexible approach, we cannot show
the flow of activities precisely—the flow varies from one study to another,
and researchers themselves do not know exactly how the study will
unfold. We provide a sense of how qualitative studies are conducted,
however, by describing major activities and indicating when they might be
performed.

Conceptualizing and Planning a Qualitative Study

Identifying the Research Problem
Qualitative researchers usually begin with a broad topic area, focusing on
an aspect of a topic that is poorly understood and about which little is
known. Qualitative researchers often proceed with a fairly broad initial
question, which may be narrowed and clarified on the basis of self-reflection
and discussion with others. The specific focus and questions are
usually delineated more clearly once the study is underway.

Doing a Literature Review
Qualitative researchers do not all agree about the value of doing an
upfront literature review. Some believe that researchers should not consult
the literature before collecting data because prior studies could influence
conceptualization of the focal phenomenon. In this view, the phenomena
should be explicated based on participants’ viewpoints rather than on
prior knowledge. Those sharing this opinion often do a literature review at
the end of the study. Other researchers conduct a brief preliminary review
to get a general grounding. Still others believe that a full early literature
review is appropriate. In any case, qualitative researchers typically find a
small body of relevant previous work because of the types of questions they
ask.

Selecting and Gaining Entrée Into Research Sites
Before going into the field, qualitative researchers must identify an
appropriate site. For example, if the topic is the health beliefs of the urban
poor, an inner-city neighborhood with low-income residents must be
identified. Researchers may need to engage in anticipatory fieldwork to
identify a suitable and information- rich environment for the study. In
some cases, researchers have ready access to the study site, but in others,
they need to gain entrée. A site may be well suited to the needs of the
research, but if researchers cannot “get in,” the study cannot proceed.
Gaining entrée typically involves negotiations with gatekeepers who have
the authority to permit entry into their world.

TIP The process of gaining entrée is usually associated with doing
fieldwork in qualitative studies, but quantitative researchers often
need to gain entrée into sites for collecting data as well.

Developing an Overall Approach in Qualitative Studies

Quantitative researchers do not collect data until they have finalized their
research design. Qualitative researchers, by contrast, use an emergent
design that materializes during the course of data collection. Certain
design features may be guided by the qualitative research tradition within
which the researcher is working, but few qualitative studies follow rigidly
structured designs that prohibit changes while in the field. Although
qualitative researchers do not always know in advance exactly how the
study will progress, they nevertheless must have some sense of how much
time is available for fieldwork and must also arrange for and test needed
equipment, such as laptop computers or cameras.

Addressing Ethical Issues
Qualitative researchers, like quantitative researchers, must also develop
plans for addressing ethical issues—and, indeed, there are special
concerns in qualitative studies because of the more intimate nature of the
relationship that typically develops between researchers and study
participants. Chapter 7 describes these concerns.

Conducting a Qualitative Study
In qualitative studies, the tasks of sampling, data collection, data analysis,
and interpretation typically take place iteratively. Qualitative researchers
begin by talking with or observing a few people who have first-hand
experience with the phenomenon under study. The discussions and
observations are loosely structured, allowing for the expression of a full
range of beliefs, feelings, and behaviors. Analysis and interpretation are
ongoing, concurrent activities that guide choices about the kinds of people
to sample next and the types of questions to ask or observations to make.
The process of data analysis involves clustering together related types of
narrative information into a coherent scheme. As analysis and
interpretation progress, researchers begin to identify themes and
categories (or stages in a process), which are used to build a rich
description or theory of the phenomenon. The kinds of data obtained and
the people selected as participants tend to become increasingly purposeful
as the conceptualization is developed and refined. Concept development
shapes the sampling process—as a conceptualization or theory emerges,
the researcher seeks participants who can confirm and enrich the
theoretical understandings, as well as participants who can potentially
challenge them and lead to further theoretical development.

Quantitative researchers decide upfront how many people to include in a
study, but qualitative researchers’ sampling decisions are guided by the
data. Qualitative researchers use the principle of data saturation, which
occurs when themes and categories in the data become repetitive and
redundant, such that no new information can be gleaned by further data
collection.
Quantitative researchers seek to collect high- quality data by measuring
their variables with methods that have been found to be reliable and valid.
Qualitative researchers, by contrast, are the main data collection
instrument and must take steps to demonstrate the trustworthiness of the
data. The central feature of these efforts is to confirm that the findings
accurately reflect the experiences and viewpoints of participants rather
than the researcher’s perceptions. One confirmatory activity, for example,
involves going back to participants and sharing interpretations with them
so that they can evaluate whether the researcher’s thematic analysis is
consistent with their experiences.

Disseminating Qualitative Findings
Qualitative nurse researchers also share their findings with others at
conferences and in journal articles. Regardless of researchers’ positions
about when a literature review should be conducted, a summary of prior
research is usually offered in qualitative reports as a means of providing
context for the study.
Quantitative reports almost never contain raw data—that is, data in the
form they were collected, which are numeric values. Qualitative reports,
by contrast, are usually filled with rich verbatim passages directly from
participants. The excerpts are used in an evidentiary fashion to support or
illustrate researchers’ interpretations and thematic construction.

Example of Raw Data in a Qualitative Report
Nijboer and Van der Cingel (2019) did an in-depth study of the
perceptions of novice nurses in the Netherlands on compassion. The
nurses identified four themes, one of which was compassion as part
of the nurses’ professional identity. Here is an illustrative quote: “I
am convinced that I have to be a nurse. Being compassionate and
being a nurse is a part of who I am” (p. 87).

Like quantitative researchers, qualitative nurse researchers want their
findings used by others. Qualitative findings sometimes are the basis for
formulating hypotheses that are tested by quantitative researchers, for
developing measuring instruments for both research and clinical
purposes, and for designing effective nursing interventions. Qualitative
studies help to shape nurses’ perceptions of a problem or situation, their
conceptualizations of potential solutions, and their understanding of
patients’ concerns and experiences.

Research Journal Articles
Research journal articles, which summarize the background, design, and
results of a study, are the primary method of disseminating research
evidence. This section reviews the content and style of research journal
articles to ensure that you will be equipped to delve into the research
literature. A more detailed discussion of the structure of journal articles is
presented in Chapter 32, which provides guidance on writing research
reports.

Content of Journal Articles
Many quantitative and qualitative journal articles follow a conventional
organization called the IMRAD format. This format involves organizing
material into four main sections—Introduction, Methods, Results, and
Discussion. The text of the report is usually preceded by an abstract and
followed by cited references.

The Abstract
The abstract is a brief description of the study placed at the beginning of
the article. The abstract answers, in about 250 words, the following: What
were the research questions? What methods did the researcher use to
address the questions? What did the researcher find? And, what are the
implications? Readers review abstracts to assess whether the entire report
is of interest. Some journals have moved from traditional abstracts—single
paragraphs summarizing the study’s main features—to longer, structured
abstracts with specific headings. For example, in the journal Nursing
Research, abstracts are organized under the following headings:
Background, Objectives, Method, Results, and Conclusions.

The Introduction
The introduction communicates the research problem and its context. The
introduction, which often is not specifically labeled “Introduction,”
follows immediately after the abstract. This section typically describes the
following: (1) the central phenomena, concepts, or variables under study;
(2) the population of interest; (3) the current state of evidence, based on a
brief literature review; (4) the theoretical framework; (5) the study
purpose, research questions, or hypotheses to be tested; and (6) the study’s
significance. Thus, the introduction sets the stage for a description of what
the researcher did and what was learned. The introduction corresponds
roughly to the conceptual phase of a study.

The Method Section
The method section describes the methods used to answer the research
questions. This section lays out methodologic decisions made in the design
and planning phase and may offer rationales for those decisions. In a
quantitative study, the method section usually describes the following: (1)
the research design; (2) the sampling plan for selecting participants from
the population of interest; (3) methods of data collection and specific
instruments used; (4) study procedures (including ethical safeguards); and
(5) analytic procedures and methods.
Qualitative researchers discuss many of the same issues, but with different
emphases. For example, a qualitative study often provides more
information about the research setting and the study context, and less
information on sampling. Also, because formal instruments are not used to
collect qualitative data, there is less discussion about data collection
methods. Reports of qualitative studies may also include descriptions of
the researchers’ efforts to enhance the trustworthiness of the study.

The Results Section
The results section presents the findings from the data analyses. The text
summarizes key findings, and (in quantitative reports) tables provide
greater detail. Virtually all results sections contain a description of the
participants (e.g., their average age, percentage male/female).
In quantitative studies, the results section provides information about the
statistical tests used to test hypotheses and to evaluate the believability of
the findings. For example, if the percentage of smokers who smoke two
packs or more daily is computed to be 40%, how probable is it that the
percentage is accurate? If the researcher finds that the average number of
cigarettes smoked weekly is lower for those in an intervention group than
for those not getting the intervention, how probable is it that the
intervention effect is real? Statistical tests help to answer such questions.
Researchers typically report the following:

The names of statistical tests used. Different tests are appropriate for different
situations but are based on common principles. You do not have to know the
names of all statistical tests—there are dozens of them—to comprehend the
findings.
The value of the calculated statistic. Computers are used to calculate a numeric
value for the statistical test used. The value allows researchers to draw
conclusions about the results. The actual numeric value of the statistic, however,
is not inherently meaningful and need not concern you.
The statistical significance. A critical piece of information is whether the value of
the statistic was significant (not to be confused with important or clinically
relevant). When researchers say that results are statistically significant, it means
the findings are probably reliable and replicable with a new sample. Research
reports indicate the level of significance, which is an index of how probable it is
that the findings are reliable. For example, if a report says that a finding was
significant at the .05 level, this means that only 5 times out of 100 (5 ÷ 100 = .05)
would the result be spurious. In other words, 95 times out of 100, similar results
would be obtained with a new sample. Readers can have a high degree of
confidence—but not total assurance—that the result is reliable (see the
illustrative sketch following this list).
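
As an informal illustration (not an excerpt from any study), the sketch below shows how a common statistical test might be run in Python; it assumes the scipy library is installed, and both groups of scores are invented:

    # Hypothetical two-group comparison using an independent-samples t-test.
    from scipy import stats

    intervention = [99, 102, 95, 110, 97, 101]  # invented outcome scores
    control = [120, 118, 125, 119, 122, 121]    # invented outcome scores

    t_stat, p_value = stats.ttest_ind(intervention, control)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    # If p is below .05, the difference is "statistically significant":
    # a difference this large would arise by chance fewer than 5 times in 100.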

Example From the Results Section of a Quantitative Study
Caldwell et al. (2018) tested a tailored smoking cessation intervention
for parents of young children, to reduce children’s exposure to
tobacco smoke. A total of 453 parents, recruited from 14 elementary
schools, were assigned, at random, to either receive the intervention
or to serve as a nonintervention control group. Saliva cotinine was
measured at the start of the study and then at the end of the
intervention 2 years later. In the intervention group, average cotinine
levels dropped from 239.9 to 99.3, whereas in the control group
average cotinine levels increased from 221.1 to 239.0, F = 5.72, p = .004.

In this study addressing a Therapy question, Caldwell et al. found
improvement over time (from an initial measurement to a second
measurement after the intervention was completed), in the intervention
parents’ levels of salivary cotinine—but not among parents in the control
group. This finding is highly reliable: only about 4 times in 1,000 (p = .004)
would a group difference as great as that observed have occurred as a
fluke. To understand this finding, you do not have to understand what an
F statistic is, nor do you need to worry about the actual value of the
statistic, 5.72.
Results sections of qualitative reports often have several subsections, the
headings of which correspond to the themes, processes, or categories
identified in the data. Excerpts from the raw data are presented to support
and provide a rich description of the thematic analysis. The results section
of qualitative studies may also present the researcher’s emerging theory
about the phenomenon under study.

The Discussion Section
In the discussion section, researchers draw conclusions about what the
results mean, and how the evidence can be used in practice. The
discussion in both qualitative and quantitative reports may include the
following elements: (1) the degree to which results are consistent with
previous research; (2) an interpretation of the results and their clinical
significance; (3) implications for clinical practice and for future research;
and (4) study limitations and ramifications for the integrity of the
results. Researchers are in the best position to point out sample
deficiencies, design problems, weaknesses in data collection, and so forth.
A discussion section that presents these limitations demonstrates to
readers that the author was aware of these limitations and probably took
them into account in interpreting the findings.

The Style of Research Journal Articles
Research reports tell a story. However, the style in which many research
journal articles are written—especially reports of quantitative studies—
makes it difficult for many readers to figure out the story or become
intrigued by it. To unaccustomed audiences, research reports may seem
stuffy, pedantic, and overwhelming. Four factors contribute to this
impression:

Compactness. Journal space is limited, so authors compress a lot of
information into a short space. Interesting, personalized aspects of the
study are not reported. Even in qualitative studies, only a handful of
supporting quotes can be included.
Jargon. The authors of research reports use terms that may seem
esoteric.
Objectivity. Quantitative researchers tell their stories objectively, in a
way that may make them sound impersonal. For example, most
quantitative reports are written in the passive voice (i.e., personal
pronouns are avoided), which tends to make a report less lively than
use of the active voice. Qualitative reports, by contrast, are more
personal and written in a more conversational style.
Statistical information. Quantitative reports summarize the results of
statistical analyses. Numbers and statistical symbols can intimidate
readers who do not have statistical training.

In this textbook we try to assist you in dealing with these issues and strive
to encourage you to tell your research stories in a manner that makes them
accessible to practicing nurses.

Tips on Reading Research Reports
As you progress through this book, you will acquire skills for evaluating
research reports critically. Some preliminary hints on digesting research
reports follow.

Grow accustomed to the style of research articles by reading them frequently,
even though you may not yet understand all the technical points.
Read from an article that has been downloaded and printed so that you can
highlight portions and write marginal notes (or use software that allows you to
do this in pdf files).
Read articles slowly. Skim the article first to get major points and then read it
more carefully a second time.
On the second reading of a journal article, train yourself to be an active reader.
Reading actively means that you constantly monitor yourself to assess your
understanding of what you are reading. If you have problems, go back and
reread difficult passages or make notes so that you can ask someone for
clarification. In most cases, that “someone” will be your research instructor, but
also consider contacting researchers themselves via email.
Some people find it helpful to use a structured reading method when reading
research reports. One such method is called the SQ3R Reading Technique,
which involves five steps: Survey, Question, Read, Recite, and Review. We provide
basic guidance about this method in Chapter 3 of the Toolkit in the
accompanying Resource Manual.

Keep this textbook with you as a reference while you are reading articles so that
you can look up unfamiliar terms in the glossary or index.
Try not to get “turned off” by statistical information. Try to grasp the gist of the
story without letting numbers frustrate you.
Until you become accustomed to research journal articles, you may want to
“translate” them by expanding compact paragraphs into looser constructions,
by translating jargon into familiar terms, by recasting the report into an active
voice, and by summarizing findings with words rather than numbers (Chapter 3
in the accompanying Resource Manual has an example of such a translation).

General Questions in Reviewing a Research Study
Most chapters of this book contain guidelines to help you evaluate
different aspects of a research report critically, focusing primarily on the
researchers’ methodologic decisions. Box 3.3 presents some further
suggestions for performing a preliminary overview of a research report,
drawing on concepts explained in this chapter. These guidelines
supplement those presented in Box 1.1, Chapter 1.

Research Examples
In this section, we illustrate the progression of activities and discuss the
time schedule of two studies (one quantitative and the other qualitative)
conducted by the second author of this book.

Project Schedule for a Quantitative Study

Study: Postpartum depressive symptomatology: Results from a two- stage
U.S. national survey (Beck et al., 2011).
Study purpose: Beck and colleagues undertook a study to estimate the
prevalence of mothers experiencing elevated postpartum depressive
symptom levels in the United States and to explore factors that contributed
to variability in symptom levels.
Study methods: This study required a little less than 3 years to complete.
Key activities and methodologic decisions included the following:
Phase 1. Conceptual Phase: 1 Month. Beck had been a member of the
Listening to Mothers II National Advisory Council. Data for their national
survey (the Childbirth Connection: Listening to Mothers II U.S. National
Survey) had already been collected when Beck was approached to analyze
the variables in the survey relating to postpartum depressive (PPD)
symptoms. The first phase took only 1 month because data collection was
already completed, and Beck, a world expert on PPD, just needed to
update a review of the literature.
Phase 2. Design and Planning Phase: 3 Months. The design phase
entailed identifying which of the hundreds of variables on the national
survey the researchers would focus on in their analysis. Also, their
research questions were formalized and approval from a human subjects
committee was obtained during this phase.
Phase 3. Empirical Phase: 0 Months. In this study, the data from nearly
1,000 postpartum women had already been collected.
Phase 4. Analytic Phase: 12 Months. Statistical analyses were performed
to (1) estimate the percentage of new mothers experiencing elevated
postpartum depressive symptom levels and (2) identify which
demographic, antepartum, intrapartum, and postpartum variables were
significantly related to these elevated symptom levels.
Phase 5. Dissemination Phase: 18 Months. The researchers prepared and
submitted their report to the Journal of Midwifery & Women’s Health for
possible publication. It was accepted within 5 months and was “in press”
(awaiting publication) another 4 months before being published. The
article received the Journal of Midwifery & Women’s Health 2012 Best
Research Article Award.

Project Schedule for a Qualitative Study

Study: Posttraumatic growth after birth trauma: “I was broken, now I am
unbreakable” (Beck & Watson, 2016).
Study Purpose: The purpose of this study was to describe the meaning of
mothers’ experiences of posttraumatic growth after experiencing a
traumatic childbirth.
Study Methods: This study required a little less than 4 years to complete.
Key activities and methodologic decisions included the following:
Phase 1. Conceptual Phase: 4 Months. Beck and Watson had conducted a
number of qualitative studies on traumatic childbirth and the negative
consequences for mothers (e.g., the impact of the traumatic birth on their
breastfeeding experiences and subsequent childbirth). This was their first
study on posttraumatic growth, so they needed time to review relevant
studies and to read about the theory of posttraumatic growth.
Phase 2. Design and Planning Phase: 3 Months. Beck and Watson chose a
phenomenologic design for this study. They had conducted several
phenomenologic studies, so designing this new study did not require a
lengthy time period. Once their proposal was finalized, it was submitted to
the university’s committee on ethics for approval.
Phase 3. Empirical/Analytic Phases: 2 Years. A recruitment notice was
placed on the website of Trauma and Birth Stress, a charitable trust located
in New Zealand. Fifteen mothers sent narratives about their posttraumatic
growth after a previous traumatic birth to Beck via the Internet. It took
18 months to recruit the sample. Analysis of the mothers’ stories took an
additional 6 months. Four themes emerged from the data analysis: (1)
opening oneself up to a new present, (2) achieving a new level of
relationship nakedness, (3) fortifying spiritual- mindedness, and (4) forging
new paths.
Phase 4. Dissemination Phase: 1 Year, 1 Month. It took approximately
4 months to prepare the manuscript reporting this study. It was submitted
to MCN: The American Journal of Maternal Child Nursing on December 1,
2015. This journal had an unusually rapid response, and 1 month later on
January 4, 2016, Beck and Watson received a “revise-and-resubmit”
decision from the journal. Only minor revisions were needed, and so on
January 11, 2016, the authors submitted their revised manuscript. One
week later, on January 19, 2016, Beck and Watson received notification that
their manuscript had been accepted for publication, and the article was
published in the September/October issue of 2016.

Summary Points

The people who provide information to the researchers (investigators) in a study
are called subjects or study participants (in quantitative research) or study
participants or informants (in qualitative research); collectively, the participants
comprise the sample.
The site is the overall location for the research; researchers sometimes engage in
multisite studies. Settings are the types of places where data collection occurs.
Settings can range from totally naturalistic environments to formal research
locations.
Researchers investigate concepts (or constructs) and phenomena, which are
abstractions or mental representations inferred from behavior or characteristics.
Concepts are the building blocks of theories, which are systematic explanations
of some aspect of the real world.
In quantitative studies, concepts are called variables. A variable is an attribute
that takes on different values (i.e., that varies from one person to another).
Groups that vary with respect to an attribute are heterogeneous; groups with
limited variability are homogeneous.
The dependent (or outcome) variable is the behavior or characteristic the
researcher is interested in explaining, predicting, or affecting (the “O” in the
PICO scheme). The independent variable is the presumed cause of, antecedent
to, or influence on the dependent variable. The independent variable
corresponds to the “I” and the “C” components in the PICO scheme.
A conceptual definition describes the abstract or theoretical meaning of a
concept being studied. An operational definition specifies how the variable will
be measured.
Data—information collected during a study—may take the form of narrative
information (qualitative data) or numeric values (quantitative data).
A relationship is a bond or connection between two variables. Quantitative
researchers examine the relationship between the independent variable and
dependent variable.
When the independent variable is a cause of the dependent variable, the
relationship is a cause-and-effect (or causal) relationship. In an associative
(functional) relationship, variables are related, but in a noncausal way.
A key distinction in quantitative studies is between experimental research, in
which researchers introduce an intervention, and nonexperimental (or
observational) research, in which researchers observe existing phenomena
without intervening.
Qualitative research sometimes is rooted in research traditions that originate in
other disciplines. Three such traditions are grounded theory, phenomenology,
and ethnography.
Grounded theory seeks to describe and understand key social psychological
processes that occur in social settings.
Phenomenology focuses on the lived experiences of humans and is an approach
to learning what the life experiences of people are like and what they mean.
Ethnography provides a framework for studying the meanings, patterns, and
lifeways of a culture in a holistic fashion.
Quantitative researchers usually progress in a fairly linear fashion from asking
research questions to answering them. The main phases in a quantitative study
are the conceptual, planning, empirical, analytic, and dissemination phases.
The conceptual phase involves (1) defining the problem to be studied; (2) doing a
literature review; (3) engaging in clinical fieldwork for clinical studies; (4)
developing a framework and conceptual definitions; and (5) formulating
hypotheses to be tested.
The planning phase entails (6) selecting a research design; (7) developing
intervention protocols if the study is experimental; (8) specifying the
population; (9) developing a sampling plan; (10) specifying methods to
measure research variables; (11) developing strategies to safeguard the rights of
participants; and (12) finalizing the research plan (e.g., pretesting instruments).
The empirical phase involves (13) collecting data and (14) preparing data for
analysis.
The analytic phase involves (15) analyzing data through statistical analysis and
(16) interpreting the results.
The dissemination phase entails (17) communicating the findings in a research
report and (18) promoting the use of the study evidence in nursing practice.
The flow of activities in a qualitative study is more flexible and less linear.
Qualitative studies typically involve an emergent design that evolves during
data collection.
Qualitative researchers begin with a broad question regarding a phenomenon,
often focusing on a little-studied aspect. In the early phase of a qualitative study,
researchers select a site and seek to gain entrée into it, which typically involves
enlisting the cooperation of gatekeepers.
Once in the field, qualitative researchers select informants, collect data, and then
analyze and interpret them in an iterative fashion. Knowledge gained during
data collection helps to shape the design of the study and the selection of
participants.
Early analysis in qualitative research leads to refinements in sampling and data
collection, until data saturation (redundancy of information) is achieved.
Both qualitative and quantitative researchers disseminate their findings, often in
journal articles that concisely communicate what the researchers did and what
they found.

Journal articles typically consist of an abstract (a brief synopsis) and four major
sections in an IMRAD format: an Introduction (explanation of the study
problem and its context); Method section (the strategies used to address the
problem); Results section (study findings); and Discussion (interpretation of the
findings).
Research reports can be difficult to read because they are dense and contain a lot
of jargon. Quantitative research reports may be intimidating at first because,
compared with qualitative reports, they are more impersonal and include
statistical information.
Statistical tests are procedures for testing research hypotheses and evaluating
the believability of the findings. Findings that are statistically significant are
ones that have a high probability of being “real.”

Study Activities
Study activities are available to instructors on .

References Cited in Chapter 3
Ahlstedt C., Eriksson-Lindvall C., Holmström I., & Muntlin-Athlin A. (2019). What
makes registered nurses remain in work? An ethnographic study. International
Journal of Nursing Studies, 89, 32–38.

* Andersson E., Willman A., Sjöström-Strand A., & Borglin G. (2015). Registered
nurses’ descriptions of caring: A phenomenographic interview study. BMC
Nursing, 14, 16.

Beck C. T., Gable R. K., Sakala C., & Declercq E. R. (2011). Postpartum depressive
symptomatology: Results from a two-stage U.S. national survey. Journal of
Midwifery & Women’s Health, 56, 427–435.

Beck C. T., & Watson S. (2016). Posttraumatic growth after birth trauma: “I was
broken, now I am unbreakable”. MCN: The American Journal of Maternal Child
Nursing, 41, 264–271.

Caldwell A., Tingen M., Nguyen J., Andrews J., Heath J., Waller J., & Treiber F.
(2018). Parental smoking cessation: Impacting children’s tobacco exposure in the
home. Pediatrics, 141, S96–S106.

Chung W., & Sohn M. (2018). The impact of nurse staffing on in-hospital mortality of
stroke patients in Korea. Journal of Cardiovascular Disease, 22, 47–54.

Fox L., Hoffman R., Vlahov D., & Manini A. (2018). Risk factors for severe respiratory
depression from prescription opioid overdose. Addiction, 113, 59–66.

Glaser B. G., & Strauss A. L. (1967). The discovery of grounded theory: Strategies for
qualitative research. Chicago: Aldine.

Hsieh C., Wang S., Chuang Y., & Chen H. (2018). Ischemic stroke patients’ decision-
making process in their use of Western medicine and alternative and
complementary medicine. Holistic Nursing Practice, 32, 17–26.

Lee S., Lee J., & Harrison R. (2019a). Impact of California’s safe patient handling
legislation on musculoskeletal injury prevention among nurses. American Journal of
Industrial Medicine, 62, 50–58.

Lee M., Wang H., Chen C., & Lee M. (2019b). Social adjustment experiences of
adolescents with Tourette syndrome. Journal of Clinical Nursing, 28, 279–288.

* MacArtney J., Malmstrom M., Overgaard Nielsen T., Evans J., Bernhardson B.,
Hajdarevic S., … Ziebland S. (2017). Patients’ initial steps to cancer diagnosis in
Denmark, England and Sweden: What can a qualitative, cross-country comparison
of narrative interviews tell us about potentially modifiable factors? BMJ Open,
7(11), e018210.

* Mitchell C., Bowen A., Tyson S., & Conroy P. (2018). A feasibility randomized
controlled trial of ReaDySpeech for people with dysarthria after stroke. Clinical
Rehabilitation, 32, 1037–1046.

Morse J. M., Solberg S. M., Neander W. L., Bottorff J. L., & Johnson J. L. (1990).
Concepts of caring and caring as a concept. Advances in Nursing Science, 13, 1–14.

Nijboer A., & Van der Cingel M. (2019). Compassion: Use it or lose it?: A study into
the perceptions of novice nurses on compassion: A qualitative approach. Nurse
Education Today, 72, 84–89.

* Rafferty A. M., Philippou J., Fitzpatrick J., Pike G., & Ball J. (2017). Development and
testing of the “culture of care barometer” (CoCB) in healthcare organisations. BMJ
Open, 7, e016677.

** Wu J., Song E., Moser D., & Lennie T. (2019). Dietary vitamin C deficiency is
associated with health-related quality of life and cardiac event-free survival in
adults with heart failure. Journal of Cardiovascular Nursing, 34, 29–35.

*A link to this open-access article is provided in the Toolkit for Chapter 3 in the
Resource Manual.

**This journal article is available on for this chapter.

PA R T 2
Conceptualizing and Planning a Study to Generate Evidence for Nursing

Chapter 4 Research Problems, Research Questions, and
Hypotheses
Chapter 5 Literature Reviews: Finding and Critically
Appraising Evidence
Chapter 6 Theoretical Frameworks
Chapter 7 Ethics in Nursing Research
Chapter 8 Planning a Nursing Study

C H A P T E R 4

Research Problems, Research Questions,
and Hypotheses

Overview of Research Problems
Studies begin, much like evidence-based practice (EBP) efforts, with
a problem that needs to be solved or a question that needs to be
answered. This chapter discusses the development of research
problems. We begin by clarifying some relevant terms.

Basic Terminology
At a general level, a researcher selects a topic or a phenomenon on
which to focus. Examples of research topics are claustrophobia
during MRI tests, pain management for sickle cell disease, and
nutrition during pregnancy. Within broad topic areas are many
potential research problems. In this section, we illustrate various
terms using the topic side effects of chemotherapy.
A research problem is an enigmatic or troubling condition.
Researchers identify a research problem within a broad topic of
interest. The purpose of research is to “solve” the problem—or to
contribute to its solution—by generating relevant, high- quality
evidence. Researchers articulate the problem in a problem statement
that also presents a rationale for the study.
Many reports include a statement of purpose (or purpose
statement), which summarizes the goal of the study. Research
questions are the specific queries researchers want to answer in
addressing the problem. Research questions guide the types of data
to collect in a study. Researchers who make predictions about
answers to research questions pose hypotheses that are tested in the
study.
These terms are not always consistently defined in research methods
textbooks, and differences among them are often subtle. Table 4.1
illustrates the terms as we define them.

TABLE 4.1
Example of Terms Relating to Research Problems

Topic/focus: Side effects of chemotherapy

Research problem (simple problem statement): Nausea and vomiting are common
side effects among patients on chemotherapy, and interventions to date have
been only moderately successful in reducing these effects. One issue concerns
the efficacy of alternative means of administering antiemetic therapies.

Statement of purpose: The purpose of the study is to test an intervention to
reduce chemotherapy-induced side effects—specifically, to compare the
effectiveness of patient-controlled and nurse-administered antiemetic therapy
for controlling nausea and vomiting in patients on chemotherapy.

Research questions: What is the relative effectiveness of patient-controlled
antiemetic therapy versus nurse-controlled antiemetic therapy with regard to
(1) medication consumption and (2) control of nausea and vomiting in patients
on chemotherapy?

Hypotheses: Patients receiving antiemetic therapy by a patient-controlled pump
will (1) be less nauseous, (2) vomit less, and (3) consume less medication than
patients receiving the therapy by nurse administration.

Box 4.1 Draft Problem Statement on Humor and Stress

A diagnosis of cancer is associated with high levels of stress. Sizable
numbers of patients who receive a cancer diagnosis describe feelings
of uncertainty, fear, anger, and loss of control. Interpersonal
relationships, psychological functioning, and role performance have
all been found to suffer following cancer diagnosis and treatment.
A variety of alternative/complementary therapies have been
developed in an effort to decrease the harmful effects of stress on
psychological and physiological functioning, and resources devoted
to these therapies (money and staff) have increased in recent years.
However, many of these therapies have not been carefully evaluated
to determine their efficacy, safety, or cost- effectiveness. For example,
the use of humor has been recommended as a therapeutic device to
improve quality of life, decrease stress, and perhaps improve
immune functioning, but the evidence to support this claim is
limited.

Box 4.2 Some Possible Improvements to Problem Statement
on Humor and Stress

Each year, more than 1 million people are diagnosed with cancer,
which remains one of the top causes of death among both men and
women (reference citations). Numerous studies have documented
that a diagnosis of cancer is associated with high levels of stress.
Sizable numbers of patients who receive a cancer diagnosis describe
feelings of uncertainty, fear, anger, and loss of control (citations).
Interpersonal relationships, psychological functioning, and role
performance have all been found to suffer following cancer
diagnosis and treatment (citations). These stressful outcomes can,
in turn, adversely affect health, long- term prognosis, and medical
costs among cancer survivors (citations).
A variety of alternative/complementary therapies have been
developed in an effort to decrease the harmful effects of stress on
psychological and physiological functioning, and resources devoted
to these therapies (money and staff) have increased in recent years
(citations). However, many of these therapies have not been
carefully evaluated to determine their efficacy, safety, or
cost-effectiveness. For example, the use of humor has been recommended
as a therapeutic device to improve quality of life, decrease stress,
and perhaps improve immune functioning (citations), but the
evidence to support this claim is limited. Preliminary findings from
a recent small-scale endocrinology study with a healthy sample
exposed to a humorous intervention (citation) hold promise for
further inquiry with immunocompromised populations.

Box 4.3 Guidelines for Critically Appraising Research
Problems, Research Questions, and Hypotheses

1. What is the research problem? Is the problem statement easy to locate
and is it clearly stated? Does the problem statement build a cogent and
persuasive argument for the new study?

2. Does the problem have significance for nursing? How might the research
contribute to nursing practice, administration, education, or policy?

3. Is there a good fit between the research problem and the paradigm in
which the research was conducted? Is there a good fit between the
problem and the qualitative research tradition (if applicable)?

4. Does the report formally present a statement of purpose, research
question, and/or hypotheses? Is this information communicated clearly
and concisely, and is it placed in a logical and useful location?

5. Are purpose statements or questions worded appropriately? For
example, are key concepts/variables identified and is the population of
interest specified? Are verbs used appropriately to suggest the nature of
the inquiry and/or the research tradition?

6. If there are no formal hypotheses, is their absence justified? Are statistical
tests used in analyzing the data despite the absence of stated hypotheses?

7. Do hypotheses (if any) flow from a theory or previous research? Is there a
justifiable basis for the predictions?

8. Are hypotheses (if any) properly worded—do they state a predicted
relationship between two or more variables? Are they directional or
nondirectional, and is there a rationale for how they were stated? Are
they presented as research or as null hypotheses?

Research Problems and Paradigms
Some research problems are better suited to qualitative versus
quantitative methods. Quantitative studies usually focus on concepts
that are fairly well developed, about which there is existing
evidence, and for which reliable methods of measurement have been
(or can be) developed. For example, a quantitative study might be
undertaken to explore whether older people with chronic illness
who continue working are less (or more) depressed than those who
retire. There are relatively good measures of depression that would
yield quantitative information about the level of depression in a
sample of employed and retired seniors who are chronically ill.
Qualitative studies are often undertaken because a researcher wants
to develop a rich and context-bound understanding of a poorly
understood phenomenon. Researchers often initiate a qualitative
study to heighten awareness and create a dialogue about a

phenomenon. Qualitative methods would not be well suited to
comparing levels of depression among employed and retired
seniors, but they would be ideal for exploring, for example, the
meaning or experience of depression among chronically ill retirees.
Thus, the nature of the research question is linked to paradigms and
to research traditions within paradigms.

Sources of Research Problems
Where do ideas for research problems come from? At a basic level,
research topics originate with researchers’ interests. Because
research is a time-consuming enterprise, curiosity about and interest
in a topic are essential. Research reports rarely indicate the source of
researchers’ inspiration, but a variety of explicit sources can fuel
their interest, including the following:

Clinical experience. Nurses’ everyday clinical experience is a rich source of
ideas for research inquiries. Immediate problems that need a solution—
analogous to problem-focused triggers discussed in Chapter 2—may
generate enthusiasm and have high potential for clinical relevance.
Patients’ involvement. Increasingly, researchers are turning to patients and
other key stakeholders for input in identifying important issues for
research. Patient-centered outcomes research (PCOR) has become
increasingly prominent.
Quality improvement efforts. Important clinical questions sometimes
emerge in the context of findings from quality improvement studies.
Personal involvement on a quality improvement team can sometimes
lead to ideas for a study. In Chapter 12, we discuss a process called root
cause analysis that can suggest a research focus.
Nursing literature. Ideas for studies sometimes come from reading the
nursing literature. Research articles may suggest problems indirectly by
stimulating the reader’s curiosity and directly by pointing out needed
research.
Social issues. Topics are sometimes suggested by global social or political
issues of relevance to the healthcare community. For example, the
feminist movement raised questions about such topics as gender equity
in health care. Public awareness about health disparities has led to
research on healthcare access and culturally sensitive interventions.

Ideas from external sources. External sources and direct suggestions can
sometimes provide the impetus for a research idea. For example, ideas
for studies may emerge from brainstorming with other nurses.

Additionally, researchers who have developed a program of
research on a topic area may get inspiration for “next steps” from
their own findings or from a discussion of those findings with
others.

Example of a Problem Source in a Program of Research
Beck, one of this book’s authors, conducted a study with two
collaborators (Beck et al., 2015) on secondary traumatic stress
among certified nurse midwives (CNMs). Beck has developed a
strong research program on postpartum depression and
traumatic births. She and Gable had previously conducted a
study with labor and delivery nurses and their experiences of
secondary traumatic stress caring for women during traumatic
births. When Beck presented the findings of this study at
conferences, CNMs in the audience often said, “You
should research us too. We also have secondary traumatic
stress.”

TIP Personal experiences in clinical settings are a provocative
source of research ideas and questions. Here are some hints:

Watch for a recurring problem and see if you can discern a pattern
in situations that lead to the problem.
Example: Why do so many patients complain of being tired after
being transferred from a coronary care unit to a progressive care
unit?

Think about aspects of your work that are frustrating or do not
result in the intended outcome—then try to identify factors
contributing to the problem that could be changed.
Example: Why is suppertime so frustrating in a nursing home?

Critically examine your own clinical decisions. Are they based on
tradition, or are they based on systematic evidence that supports
their efficacy?
Example: What would happen if you used the return of flatus to
assess the return of GI motility after abdominal surgery, rather than
listening to bowel sounds?

Developing and Refining Research Problems
Procedures for developing a research problem are difficult to
describe. The process is rarely a smooth and orderly one; there are
likely to be false starts, inspirations, and setbacks. The few
suggestions offered here are not intended to imply that there are
techniques for making this first step easy but rather to encourage
you to persevere in the absence of instant success.

Selecting a Topic
Developing a research problem is a creative process—and it is a
process that is sometimes best done in teams. The teams can include
other nurses, mentors, interdisciplinary partners, patients, or other
community members.
In the early stages of initiating research ideas, try not to be too
self-critical. It is better to relax and jot down topics of interest as they
come to mind. It does not matter if the ideas are abstract or concrete,
broad or specific, technical or colloquial—the important point is to
put ideas on paper.
After this first step, ideas can be sorted in terms of interest,
knowledge about the topics, and the perceived feasibility of turning
the ideas into a study. When the most fruitful topic area has been
selected, the list should not be discarded; it may be necessary to
return to it.

TIP The process of selecting and refining a research problem
usually takes longer than you might think. The process involves
starting with some preliminary ideas; having discussions with
colleagues, advisers, or stakeholders; perusing the research
literature; looking at what is happening in clinical settings; and
a lot of reflection.

Narrowing the Topic

Once you have identified a topic of interest, you can begin to ask
some broad questions that can lead you to a researchable problem.
Examples of question stems that might help to focus an inquiry
include the following:

What is going on with …?
What is the process by which …?
What is the meaning of …?
What would happen if …?
What influences or causes …?
What are the consequences of …?
What factors contribute to …?

Early criticism of ideas can be counterproductive. Try not to jump to
the conclusion that an idea sounds trivial or uninspired without
giving it more careful consideration or exploring it with others.
Another potential danger is that new researchers sometimes develop
problems that are too broad in scope. The transformation of a
general topic into a workable problem often is accomplished in
uneven steps. Each step should result in progress toward the goals of
narrowing the scope of the problem and sharpening the concepts.
As researchers move from general topics to more specific ideas,
several possible research problems may emerge. Consider the
following example. Suppose you were working on a medical unit
and were puzzled by the fact that some patients always complained
about having to wait for pain medication when certain nurses were
assigned to them. The general problem is discrepancy in patient
complaints regarding pain medications. You might ask: What
accounts for the discrepancy? How can I improve the situation?
These are not research questions, but they may lead you to ask such
questions as the following: How do the two groups of nurses differ?
or What characteristics do the complaining patients share? At this
point, you may observe that the cultural and ethnic background of
the patients and nurses could be relevant. This may lead you to
search the literature for studies about culture and ethnicity in
relation to nursing care, or it may prompt you to discuss your

observations with others. These efforts may result in several research
questions, such as the following:

What is the nature of patient complaints among patients of different
cultural backgrounds?
Is the cultural background of nurses related to the frequency with which
they dispense pain medication?
Does the number of patient complaints increase when patients are of
dissimilar cultural backgrounds as opposed to when they are of the same
cultural background as nurses?
Do nurses’ dispensing behaviors change as a function of the similarity
between their own cultural background and that of patients?

These questions stem from the same problem, yet each would be
studied differently. Some suggest a qualitative approach and others
suggest a quantitative one. A quantitative researcher might be
curious about cultural or ethnic differences in nurses’ dispensing
behaviors. Both ethnicity and nurses’ dispensing behaviors are
variables that can be operationalized. A qualitative researcher would
likely be more interested in understanding the essence of patients’
complaints, their experience of frustration, or the process by which the
problem got resolved.
Researchers choose a problem to study based on several factors,
including its inherent interest and its compatibility with a paradigm
of preference. In addition, tentative problems vary in their feasibility
and worth. A critical evaluation of ideas is appropriate at this point.

Evaluating Research Problems
Although there are no rules for selecting a research problem, four
important considerations to keep in mind are the problem’s
significance, researchability, feasibility, and interest to you.

Significance of the Problem
A crucial factor in selecting a problem is its significance to nursing.
Evidence from the study should have potential to contribute
meaningfully to nursing; the new study should be the right “next
step” in building an evidence base. The right next step could be an
original study, but it could also be a replication to answer previously
asked questions with greater rigor or with a different population.

TIP In evaluating the significance of an idea, ask the following
kinds of questions: Is the problem important to nursing and its
clients? Will patient care benefit from the evidence? Will the
findings challenge (or lend support to) existing practices? If the
answer to all these questions is “no,” then the problem
probably should be abandoned.

Researchability of the Problem
Not all problems are amenable to research inquiry. Questions of a
moral or ethical nature, although provocative, cannot be researched.
For example, should assisted suicide be legalized? There are no right
or wrong answers to this question, only points of view. Of course,
related questions could be researched, such as: Do patients living
with high levels of pain hold more favorable attitudes toward
assisted suicide than those with less pain? What moral dilemmas are
perceived by nurses who might be involved in assisted suicide? The
findings from studies addressing such questions would have no
bearing on whether assisted suicide should be legalized, but they
could be useful in developing a be�er understanding of key issues.

Feasibility of the Problem
A third consideration concerns feasibility, which encompasses
several issues. Not all of the following factors are universally
relevant, but they should be kept in mind in making a decision.
Time. Most studies have deadlines or completion goals, so the
problem must be one that can be studied in the allotted time. It is
prudent to be conservative in estimating time for the various tasks
because research activities typically require more time than
anticipated.
Researcher experience. Ideally, the problem should relate to a topic
about which you have some prior knowledge or experience. Also,
beginning researchers should avoid problems that might require the
development of a new measuring instrument or that demand
complex analyses.
Availability of study participants. In any study involving humans,
researchers need to consider whether people with the desired
characteristics will be available and willing to cooperate. Researchers
may need to put considerable effort into recruiting participants or
may need to offer a monetary incentive.
Cooperation of others. It may be necessary to gain entrée into an
appropriate community or setting and to develop the trust of
gatekeepers. In institutional settings (e.g., hospitals), access to
clients, personnel, or records requires authorization.
Ethical considerations. A research problem may be unfeasible if the
study would pose unfair or unethical demands on participants. The
ethical issues discussed in Chapter 7 should be reviewed when
considering a study’s feasibility.
Facilities and equipment. All studies have resource requirements,
although needs are sometimes modest. It is prudent to consider
what facilities and equipment will be needed and whether they will
be available.
Money. Monetary needs for studies vary widely, ranging from $100
or less for small student projects to hundreds of thousands of dollars
for large-scale research. If you are on a limited budget, you should
think carefully about projected expenses before selecting a problem.
Major categories of research-related expenditures include:

Personnel costs—payments to research assistants (e.g., for interviewing,
coding, data entry, transcribing, statistical consulting)
Participant costs—payments to participants as an incentive for their
cooperation or to offset their expenses (e.g., parking, babysitting costs)
Supplies—paper, memory sticks, postage, and so forth
Printing and duplication—costs for reproducing forms, questionnaires,
and so on
Equipment—computers and software, audio- or video-recorders,
calculators, and the like
Laboratory fees for the analysis of biophysiologic data

Transportation costs (e.g., travel to participants’ homes)

TIP If your study involves testing a new procedure or
intervention, you should also consider the feasibility of
ultimately implementing it in real-world settings, should it
prove effective. If the innovation requires a lot of resources,
there may be little interest in adopting it, even if it results in
improvements.

Researcher Interest
Even if a tentative problem is researchable, significant, and feasible,
there is one more criterion: your own interest in the problem.
Genuine curiosity about a research problem is an important
prerequisite to a successful study. A lot of time and energy are
expended in a study; there is little sense devoting these resources to
a project about which you are not enthusiastic.

TIP New researchers often seek suggestions about a topic area,
and such assistance may be helpful in getting started.
Nevertheless, it is unwise to be talked into a topic in which you
have limited interest. If you do not find a problem appealing at
the beginning of a study, you are likely to regret your choice
later.

Communicating Research Problems and
Questions
Every study needs a problem statement—an articulation of what is
problematic and is the impetus for the research. Most research
reports also present a statement of purpose, research questions, or
hypotheses.
Many people do not understand problem statements and may have
trouble identifying them in a research article—not to mention
developing one. A problem statement often begins with the very first
sentence after the abstract. Specific research questions, purposes, or
hypotheses appear later in the introduction. Typically, however,
researchers begin their inquiry by identifying their research question
and then develop an argument in their problem statement to present
the rationale for the new research. This section follows that sequence
by describing statements of purpose and research questions,
followed by a discussion of problem statements.

Statements of Purpose
Many researchers articulate their research goals in a statement of
purpose, worded declaratively. It is usually easy to identify a
purpose statement because the word purpose is explicitly stated: “The
purpose of this study was…”—although sometimes the words aim,
goal, or objective are used instead, as in “The goal of this study
was….”
In a quantitative study, a statement of purpose identifies the key
study variables and their possible interrelationships, as well as the
population of interest (i.e., the PICO elements).

Example of a Statement of Purpose From a Quantitative
Study
“Aim: This study examined the effects of a music intervention
on anxiety, depression, and psychosomatic symptoms of oncology nurses” (Ploukou & Panagopoulou, 2018, p. 77).

In this purpose statement for a Therapy question, the population (P)
is oncology nurses. The aim is to assess whether a music
intervention (I) compared with no music intervention (C)—which
together comprise the independent variable—has an effect on the
nurses’ anxiety, depression, and psychosomatic symptoms, which
are the dependent variables (the Os).
In qualitative studies, the statement of purpose indicates the key
concept or phenomenon, and the people under study.

Example of a Statement of Purpose From a Qualitative
Study
The aims of this study were “to explore the experiences of
adherence to endocrine therapy in women with breast cancer
and their perceptions of the challenges they face in adhering to
their medication” (Iacorossi et al., 2018, p. E57).

This statement indicates that the central phenomenon in this study
was the experiences of medication adherence and related challenges
among women with breast cancer (P).
The statement of purpose communicates more than just the nature of
the problem. Researchers’ selection of verbs in a purpose statement
suggests how they sought to solve the problem, or the state of
knowledge on the topic. A study whose purpose is to explore or
describe a phenomenon is likely an investigation of a little-researched
topic, sometimes involving a qualitative approach. A purpose
statement for a qualitative study may also use verbs such as
understand, discover, or develop. Statements of purpose in qualitative
studies may “encode” the tradition of inquiry, not only through the
researcher’s choice of verbs but also through the use of “buzz
words” associated with those traditions, as follows:

Grounded theory: Processes; social structures; social interactions
Phenomenologic studies: Experience; lived experience; meaning; essence

Ethnographic studies: Culture; roles; lifeways; cultural behavior

Quantitative researchers also suggest the nature of the inquiry
through their selection of verbs. A statement indicating that the
study’s purpose is to test or evaluate something (e.g., an intervention)
suggests an experimental design. A study whose purpose is to
examine or explore the relationship between two variables likely
involves a nonexperimental design. Sometimes the verb is
ambiguous: a purpose statement indicating that an intent to compare
could be referring to a comparison of alternative treatments (using
an experimental approach) or a comparison of preexisting groups
(using a nonexperimental approach). In any event, verbs such as test,
evaluate, and compare suggest an existing knowledge base and
quantifiable variables.
The verbs in a purpose statement should connote objectivity. A
statement of purpose indicating that the study goal was to prove,
demonstrate, or show something suggests a bias. The word determine
should usually be avoided as well because research methods almost
never provide definitive answers to research questions.

TIP Unfortunately, some reports fail to state the study purpose
clearly, leaving readers to infer the purpose from such sources
as the title of the report. In other reports, the purpose may be
difficult to find. Researchers often state their purpose toward
the end of the report’s introduction.

Research Questions
Research questions are sometimes direct rewordings of purpose
statements, phrased interrogatively rather than declaratively, as in
the following example:

Purpose: The purpose of this study was to assess the relationship between
the functional dependence level of renal transplant recipients and their
rate of recovery.

Question: What is the relationship between the functional dependence
level (I and C: higher versus lower levels) of renal transplant recipients
(P) and their rate of recovery (O)?

Questions have the advantage of simplicity and directness—they
invite an answer and help to focus attention on the kinds of data
needed to provide that answer. Some research reports thus omit a
statement of purpose and state only research questions. Other
researchers use a set of research questions to clarify or lend greater
specificity to a global purpose statement.

Research Questions in Quantitative Studies
In Chapter 2, we discussed the framing of clinical foreground
questions to guide an EBP inquiry. Many of the EBP question
templates in Table 2.1 could yield questions to guide a study as well,
but researchers tend to conceptualize their questions in terms of their
variables. Take, for example, the Therapy question in Table 2.1, which
states, “In (Population), what is the effect of (Intervention) on
(Outcome)?” A researcher would likely think of the question in these
terms: “In (population), what is the effect of (independent variable)
on (dependent variable)?” Thus, in quantitative studies research
questions identify the population (P) under study, the key study
variables (I, C, and O components), and possible relationships
among the variables. The variables are all quantifiable concepts.
Most research questions concern relationships, so many quantitative
research questions could be articulated using a general template: “In
(population), what is the relationship between (independent variable
or IV) and (dependent variable or DV)?” Variations include the
following:

Therapy/intervention: In (population), what is the effect of (IV: intervention
versus an alternative) on (DV)?
Prognosis: In (population), does (IV: presence of disease or illness versus
its absence) affect or increase the risk of (DV: adverse consequences)?
Etiology/harm: In (population), does (IV: exposure versus nonexposure)
cause or increase the risk of (DV: disease, health problem)?
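Because these templates are purely mechanical substitutions of PICO elements, they can even be filled in programmatically. The short Python sketch below is a hypothetical illustration; the function and the example values are invented here and are not part of any toolkit cited in this chapter:

# Hypothetical sketch: filling PICO-style question templates.
# The template wording follows the variations listed above.
TEMPLATES = {
    "therapy": "In {P}, what is the effect of {I} versus {C} on {O}?",
    "prognosis": "In {P}, does {I} affect or increase the risk of {O}?",
    "etiology": "In {P}, does {I} versus {C} cause or increase the risk of {O}?",
}

def build_question(kind, **pico):
    """Return a research question with the PICO elements filled in."""
    return TEMPLATES[kind].format(**pico)  # unused elements are ignored

print(build_question(
    "therapy",
    P="preterm infants",
    I="cue-based feeding",
    C="traditional scheduled feeding",
    O="time to full oral feedings",
))
# In preterm infants, what is the effect of cue-based feeding versus
# traditional scheduled feeding on time to full oral feedings?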

Clinical foreground questions for an EBP-focused search and a
question for a study sometimes differ. As shown in Table 2.1,
sometimes clinicians ask PICO questions about explicit comparisons
(e.g., they want to compare intervention A with intervention B) and
sometimes they do not (e.g., they want to learn the effects of
intervention A, compared with those of any other intervention or to
the absence of an intervention, PIO questions). In a research
question, there must always be a designated comparison because the
independent variable must be operationally defined; this definition
would articulate the specific “I” and “C” being studied.

TIP Research questions are sometimes more complex than
clinical foreground questions for EBP. They may include, in
addition to the independent and dependent variable, elements
called moderator variables or mediating variables. A moderator
variable is a variable that influences the strength or direction of
a relationship between two variables (e.g., a person’s age
might moderate the effect of exercise on physical function). A
mediating variable is one that acts like a “go-between” in a link
between two variables (e.g., a smoking cessation intervention
may affect smoking behavior through the intervention’s effect
on motivation to quit). The Supplement for this chapter on this
book’s website describes the role of moderating and mediating
variables in complex research questions.
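To make the moderator idea concrete: in quantitative analyses, moderation is commonly operationalized as an interaction term in a regression model. The sketch below is illustrative only; the variable names and data are simulated, not drawn from any study cited here, and Python with numpy, pandas, and statsmodels is assumed:

# Illustrative sketch: a moderator expressed as a regression
# interaction term. All names and data are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "exercise": rng.normal(size=n),   # independent variable
    "age": rng.normal(size=n),        # hypothesized moderator
})
# Simulate an outcome in which the benefit of exercise for physical
# function weakens as age increases (the moderation effect).
df["function"] = (0.5 * df["exercise"]
                  - 0.3 * df["exercise"] * df["age"]
                  + rng.normal(scale=0.5, size=n))

# "exercise * age" expands to exercise + age + exercise:age; a nonzero
# exercise:age coefficient is the statistical signature of moderation.
model = smf.ols("function ~ exercise * age", data=df).fit()
print(model.params)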

Some research questions are primarily descriptive. As examples,
here are some descriptive questions that could be addressed in a
study on nurses’ use of humor:

What is the frequency with which nurses use humor as a complementary
therapy with hospitalized patients with cancer?
What are the reactions of hospitalized cancer patients to nurses’ use of
humor?
What are the characteristics of nurses who use humor as a
complementary therapy with hospitalized patients with cancer?

Is my Use of Humor Scale a reliable and valid measure of nurses’ use of
humor with patients in clinical settings?

Answers to such questions might, if addressed in a methodologically
sound study, be useful in developing interventions for reducing
stress in patients with cancer.

Example of a Research Question From a Quantitative
Study
Lechner and colleagues (2018) studied skin condition and skin
care in German care facilities. Here is one research question:
What is the prevalence of dry skin in nursing home residents and
hospital patients and is the prevalence higher in nursing homes or
hospitals?

TIP The Toolkit section of Chapter 4 of the accompanying
Resource Manual includes question templates in a Word
document that can be “filled in” to generate many types of
research questions for both qualitative and quantitative studies.

Research Questions in Qualitative Studies
Research questions for qualitative studies state the phenomenon of
interest and the group or population of interest. Researchers in the
various qualitative traditions vary in their conceptualization of what
types of questions are important. Grounded theory researchers are
likely to ask process questions, phenomenologists tend to ask meaning
questions, and ethnographers generally ask descriptive questions
about cultures. Special terms associated with the various traditions,
noted previously, are likely to be incorporated into the research
questions.

Example of a Research Question From a
Phenomenologic Study
What is the lived experience of children with spina bifida in the
West Bank, Palestine (Nahal et al., 2019)?

Not all qualitative studies are rooted in a specific research tradition.
Many researchers use qualitative methods to describe or explore
phenomena without focusing on cultures, meaning, or social
processes.

Example of a Research Question From a Descriptive
Qualitative Study
In their descriptive qualitative study, Dial and Holmes (2018)
asked, “What are the successful self-care hygienic strategies
that patients of size use to care for themselves at home?”

In qualitative studies, research questions may evolve over the course
of the study. Researchers begin with a focus that defines the broad
boundaries of the study, but the boundaries are not cast in stone.
The boundaries “can be altered and, in the typical naturalistic
inquiry, will be” (Lincoln & Guba, 1985, p. 228). The naturalist
begins with a research question that provides a general starting
point but does not prohibit discovery. The emergent nature of
qualitative inquiry means that research questions can be modified as
new data make it relevant to do so.

Problem Statements
Problem statements express the dilemma or troubling situation that
needs investigation and that provide a rationale for a new inquiry. A
good problem statement is a well-structured formulation of what is
problematic, what “needs fixing,” or what is poorly understood.
Problem statements, especially for quantitative studies, often have
most of the following six components:

1. Problem identification: What is wrong with the current situation?
2. Background: What is the context of the problem that readers need to understand?
3. Scope of the problem: How big a problem is it? How many people are affected?
4. Consequences of the problem: What are the costs of not fixing the problem?
5. Knowledge gaps: What information about the problem is lacking?
6. Proposed solution: How would the proposed study contribute to the solution of the problem?

These components, taken together, form the argument for the study
—researchers try to persuade readers that the rationale for
undertaking the study is sound.

TIP The Toolkit section of Chapter 4 of the accompanying
Resource Manual includes these six questions in a Word
document that can be “filled in” and reorganized as needed, as
an aid to developing a problem statement.

Suppose our topic was humor as a complementary therapy for
reducing stress in hospitalized patients with cancer. Our research
question is, “What is the effect of nurses’ use of humor on stress and
natural killer cell activity in hospitalized patients with cancer?” Box
4.1 presents a rough draft of a problem statement for such a study.
This problem statement is a reasonable first draft. The draft has
several, but not all, of the six components.
Box 4.2 illustrates how the problem statement could be strengthened
by adding information about scope (component 3), long-term
consequences (component 4), and possible solutions (component 6).
This second draft builds a more compelling argument for new
research: millions of people are affected by cancer, and the disease
has adverse consequences not only for those diagnosed and their families but also for society. The revised problem statement also
suggests a basis for the new study by describing a solution on which
the new study might build.
As this example suggests, the problem statement is usually
interwoven with supportive evidence from the research literature. In
many research articles, it is difficult to disentangle the problem
statement from the literature review, unless there is a subsection
specifically labeled “Literature Review.”
Problem statements for a qualitative study similarly express the
nature of the problem, its context, its scope, and information needed
to address it, as in the following abridged example:

Example of a Problem Statement From a Qualitative
Study
“Rheumatoid arthritis (RA) and psoriatic arthritis (PsA) are
inflammatory diseases characterised by chronic arthritis that
can result in considerable disease burden. Disease activity and
symptoms of RA and PsA can contribute to reduced physical,
emotional or psychosocial health and well-being…A physically
active lifestyle is associated with reduced risk of several
diseases…However, only a minority of people with RA
participate in health-promoting physical activities…In addition,
people with RA report high levels of pain-catastrophising
exhibited as high levels of self-rated pain associated with
increased fear-avoidance behaviour towards physical activity…
This study was conducted to gain better insight into
fear-avoidance beliefs in relation to physical activity among people
experiencing moderate-to-severe rheumatic pain.” (Lööf &
Johansson, 2019, p. 322).

Qualitative studies embedded in a particular research tradition
usually incorporate terms in their problem statements that
foreshadow the tradition. For example, the problem statement in a
grounded theory study might refer to the need to generate a theory
relating to social processes. A problem statement for a
phenomenologic study might note the need to gain insight into
people’s experiences or the meanings they attribute to those
experiences. And an ethnographer might indicate the need to
understand how cultural forces affect people’s health behaviors.

Research Hypotheses
A hypothesis is a prediction, almost always a prediction about the
relationship between variables.1 In qualitative studies, researchers
do not have an a priori hypothesis, in part because there is too little
known to justify a prediction and in part because qualitative
researchers want the inquiry to be guided by participants’
viewpoints rather than by their own hunches. Thus, our discussion
here focuses on hypotheses in quantitative research.

Function of Hypotheses in Quantitative Research
Research questions, as we have seen, are usually queries about
relationships between variables. Hypotheses are predicted answers
to these queries. For instance, the research question might ask: Does
sexual abuse in childhood affect the development of irritable bowel
syndrome in women? The researcher might predict the following:
Women (P) who were sexually abused in childhood (I) have a higher
incidence of irritable bowel syndrome (O) than women who were
not (C).
Hypotheses sometimes follow from a theory. Scientists reason from
theories to hypotheses and test those hypotheses in the real world.
Take, as an example, the theory of reinforcement, which maintains
that behavior that is positively reinforced (rewarded) tends to be
learned or repeated. Predictions based on this theory could be tested.
For example, we could test the following hypothesis: Pediatric
patients (P) who are given a reward (e.g., a toy) (I) when they
undergo nursing procedures tend to be more cooperative during
those procedures (O) than nonrewarded peers (C). This hypothesis
can be put to a test, and the theory gains credibility if it is supported
with real data.
Even in the absence of a theory, well-conceived hypotheses offer
direction and suggest explanations. For example, suppose we
hypothesized that cue-based feedings compared with traditional
methods of feeding for preterm infants will shorten the time to full
oral feedings and discharge from the NICU. We could justify our
speculation based on earlier studies or clinical observations, or both.
The development of predictions forces researchers to think logically and to
tie together earlier research findings.
Now let us suppose the preceding hypothesis is not confirmed: we
find that time to full oral feedings and discharge is similar for
preterm infants on cue-based feedings and traditional methods of
feeding. The failure of data to support a prediction forces researchers to
analyze theory or previous research critically, to consider study limitations,
and to explore alternative explanations for the findings. The use of
hypotheses tends to induce critical thinking and encourages careful
interpretation of the evidence.
To illustrate further the utility of hypotheses, suppose we conducted
the study guided only by the research question, Is there a
relationship between feeding method in preterm infants and the
length of time to full oral feedings and NICU discharge? The
investigator without a hypothesis is apparently prepared to accept
any results. The problem is that it is almost always possible to
explain something superficially after the fact, no matter what the
findings are. Hypotheses reduce the risk that spurious results will be
misconstrued.

TIP Consider whether it might be appropriate to develop
hypotheses that predict different effects of the independent
variable on the outcome for different subgroups of people—
that is, to consider the effects of moderator variables. For
example, would you predict the effects of an intervention to be
different for males and females? Testing such hypotheses might
facilitate greater applicability of the evidence to specific types
of patients (Chapter 31).

Characteristics of Testable Hypotheses
Testable hypotheses state the expected relationship between the
independent variable (the presumed cause or antecedent) and the
dependent variable (the presumed effect or outcome) within a
population.

Example of a Research Hypothesis
Palesh and colleagues (2018) hypothesized that, among women
with advanced breast cancer, a greater degree of physical
activity is associated with longer survival.

In this example, the population is women with advanced breast
cancer, the independent variable is amount of physical activity, and
the dependent variable is length of time before death. The
hypothesis predicts that these two variables are related within the
population—greater physical activity is predicted to be associated
with longer survival.
Hypotheses that do not make a relational statement are difficult to
test. Take the following example: Pregnant women who receive prenatal
instruction about postpartum experiences are not likely to experience
postpartum depression. This statement expresses no anticipated
relationship—there is only one variable (postpartum depression),
and a relationship requires at least two variables.
The problem is that without a prediction about an anticipated
relationship, the hypothesis is difficult to test using standard
statistical procedures. In our example, how would we know whether
the hypothesis was supported—what standard could be used to
decide whether to accept or reject it? To illustrate this concretely,
suppose we asked a group of mothers who had been given
instruction on postpartum experiences the following question
1 month after delivery: On the whole, how depressed have you been
since you gave birth? Would you say (1) extremely depressed, (2)
moderately depressed, (3) a little depressed, or (4) not at all
depressed?
Based on responses to this question, how could we compare the
actual outcome with the predicted outcome? Would all the women
have to say they were “not at all depressed?” Would the prediction
be supported if 51% of the women said they were “not at all depressed” or “a little depressed?” It is difficult to test the accuracy
of the prediction.
A test is simple, however, if we modify the prediction as follows:
Pregnant women who receive prenatal instruction are less likely to
experience postpartum depression than those with no prenatal
instruction. Here, the outcome variable (O) is the women’s
depression, and the independent variable is receipt (I) versus
nonreceipt (C) of prenatal instruction. The relational aspect of the
prediction is embodied in the phrase less than. If a hypothesis lacks a
phrase such as more than, less than, greater than, different from, related
to, associated with, or something similar, it is probably not amenable
to statistical testing. To test this revised hypothesis, we could ask
two groups of women with different prenatal instruction experiences
to respond to the question on depression and then compare the
average responses of the two groups. The absolute degree of
depression of either group would not be at issue.
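To show what such a test might look like in practice, here is a minimal, purely hypothetical sketch in Python; the ratings are invented, and scipy’s one-sided alternative argument (available in scipy 1.6 or later) is assumed:

# Hypothetical sketch: testing the revised, relational hypothesis by
# comparing the mean depression ratings of two groups. Data are
# invented; 1 = extremely depressed, 4 = not at all depressed.
from scipy import stats

instructed = [4, 3, 4, 2, 4, 3, 3, 4, 4, 3]  # prenatal instruction (I)
control = [2, 3, 1, 3, 2, 4, 2, 3, 1, 2]     # no instruction (C)

# The directional prediction is that instructed women are less
# depressed (i.e., score higher on this scale), so a one-sided test
# is appropriate.
result = stats.ttest_ind(instructed, control, alternative="greater")
print(f"t = {result.statistic:.2f}, one-sided p = {result.pvalue:.3f}")
# Only the predicted group difference is tested; the absolute level
# of depression in either group is not at issue.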
Hypotheses should be based on justifiable rationales. Hypotheses
often follow from previous research findings or are deduced from a
theory. When a new area is being investigated, the researcher may
have to turn to logical reasoning or clinical experience to justify
predictions.

The Derivation of Hypotheses
Many students ask, How do I go about developing hypotheses? Two
basic processes—induction and deduction—are the intellectual
machinery involved in deriving hypotheses (the Supplement to Chapter 3 on the book’s website described induction and deduction).
An inductive hypothesis is inferred from observations. Researchers
observe certain patterns among phenomena and then make
predictions based on the observations. An important source for
inductive hypotheses is clinical experiences. For example, a nurse
might notice that presurgical patients who ask a lot of questions
about pain have a more difficult time than other patients in learning
postoperative procedures. The nurse could formulate a hypothesis,

such as: Patients who are stressed by fear of pain have more
difficulty in deep breathing and coughing after surgery than patients
who are not stressed. Qualitative studies are an important source of
inspiration for inductive hypotheses.

Example of Deriving an Inductive Hypothesis
LoGiudice and Beck (2016) conducted a phenomenological
study of the experience of childbearing from eight survivors of
sexual abuse. One of the themes from this study was
“Overprotection: Keeping my child safe.” A hypothesis that can
be derived from this qualitative finding might be as follows:
Women who are survivors of sexual abuse will be more
overprotective of their children than mothers who have not
experienced sexual abuse.

Inductive hypotheses begin with specific observations and move
toward generalizations. Deductive hypotheses have theories or
prior knowledge as a starting point, as in our earlier example about
reinforcement theory. Researchers deduce that if the theory is true,
then certain outcomes can be expected. If hypotheses are supported,
then the theory is strengthened. The advancement of nursing
knowledge depends on both inductive and deductive hypotheses.
Researchers need to be organizers of concepts (think inductively),
logicians (think deductively), and critics and skeptics of resulting
formulations, constantly demanding evidence.

Wording of Hypotheses
A good hypothesis is worded clearly and concisely and in the
present tense. Researchers make predictions about relationships that
exist in the population and not just about a relationship that will be
revealed in a particular sample. There are various types of
hypotheses.

Directional Versus Nondirectional Hypotheses

Hypotheses can be stated in a number of ways, as in the following
example:

1. Older patients are more likely to fall than younger patients.
2. There is a relationship between the age of a patient and the risk of falling.
3. The older the patient, the greater the risk that he or she will fall.
4. Older patients differ from younger ones with respect to their risk of falling.
5. Younger patients tend to be less at risk of a fall than older patients.

In each example, the hypothesis indicates the population (patients),
the independent variable (patients’ age), the dependent variable (a
fall), and the anticipated relationship between them.
Hypotheses can be either directional or nondirectional. A directional
hypothesis is one that specifies not only the existence but the
expected direction of the relationship between variables. In our
example, hypotheses 1, 3, and 5 are directional because there is an
explicit prediction that older patients are more likely to fall than
younger ones. A nondirectional hypothesis does not state the
direction of the relationship, as illustrated by versions 2 and 4. These
hypotheses predict that a patient’s age and risk of falling are related,
but they do not stipulate whether the researcher thinks that older
patients or younger ones are at greater risk.
Hypotheses derived from theory are almost always directional
because theories provide a rationale for expecting variables to be
related in a certain way. Existing studies also offer a basis for
directional hypotheses. When there is no theory or related research,
when findings of prior studies are contradictory, or when
researchers’ own experience leads to ambivalence, nondirectional
hypotheses may be appropriate. Some people argue, in fact, that
nondirectional hypotheses are preferable because they connote
impartiality. Directional hypotheses, it is said, imply that researchers
are intellectually committed to certain outcomes, and such a
commitment might lead to bias. Yet, researchers typically do have
hunches about outcomes, whether they state them explicitly or not.
We prefer directional hypotheses when there is a reasonable basis for them because they clarify the study’s framework and demonstrate that researchers have thought critically about the study variables.

TIP Hypotheses can be either simple hypotheses (ones with one
independent variable and one dependent variable) or complex
hypotheses (ones with three or more variables—for example,
with multiple independent or dependent variables).
Information about complex hypotheses is available in the
Supplement for this chapter on the book’s website.

Research Versus Null Hypotheses
Hypotheses can be described as either research hypotheses or null
hypotheses. Research hypotheses (also called scientific hypotheses)
are statements of expected relationships between variables. All
hypotheses presented thus far are research hypotheses that state
actual predictions.
Statistical inference uses a logic that may be confusing. This logic
requires that hypotheses be expressed as an expected absence of a
relationship. Null hypotheses (or statistical hypotheses) state that
there is no relationship between the independent and dependent
variables. The null form of the hypothesis used in our example
might be: “Patients’ age is unrelated to their risk of falling” or
“Older patients are just as likely as younger patients to fall.” The
null hypothesis might be compared with the assumption of
innocence of an accused criminal in many justice systems: the
variables are assumed to be “innocent” of any relationship until they
can be shown “guilty” through appropriate statistical procedures.
The null hypothesis represents the formal statement of this
assumption of “innocence.”
Researchers typically state research rather than null hypotheses.
Indeed, you should avoid stating hypotheses in null form in a
proposal or a report because this gives an amateurish impression. In
statistical testing, underlying null hypotheses are assumed without
being stated. If the researcher’s actual research hypothesis is that no
relationship among variables exists, complex procedures are needed
to test it.

Hypothesis Testing and Proof
Hypotheses are formally tested through statistical analysis.
Researchers use statistics to test whether their hypotheses have a
high probability of being correct (i.e., have a p < .05). Statistical
analysis does not offer proof; it only supports inferences that a
hypothesis is probably correct (or not). Hypotheses are never proved
or disproved; rather, they are supported or rejected. Findings are always
tentative. Hypotheses come to be increasingly supported with
evidence from multiple studies.
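For readers who want the formal definition behind the p < .05 convention: the p-value is computed under the assumption that the null hypothesis is true, which is one reason a small p supports, rather than proves, the research hypothesis. In standard notation,

p = P(\text{test statistic at least as extreme as the one observed} \mid H_0 \text{ is true}),

and the conventional decision rule is to reject H0 when p < α, with α commonly set at .05.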
Let us look at why this is so. Suppose we hypothesized that height
and weight are related. We predict that, on average, tall people
weigh more than short people. We then obtain height and weight
measurements from a sample and analyze the data. Now suppose
we happened by chance to get a sample that consisted of short,
heavy people, and tall, thin people. Our results might indicate that
there is no relationship between height and weight. But we would
not be justified in concluding that this study proved or demonstrated
that height and weight are unrelated.
This example illustrates the difficulty of using observations from a
sample to draw definitive conclusions about a population. Issues
such as the accuracy of the measures, the effects of uncontrolled
variables, and idiosyncrasies of the study sample prevent
researchers from concluding that hypotheses are proved.
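A quick simulation makes the point concrete. The following sketch is illustrative only; the population correlation, sample size, and significance threshold are arbitrary choices, and Python with numpy and scipy is assumed:

# Hypothetical sketch: even when a relationship truly exists in the
# population, individual samples can fail to show it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def sample_pvalue(n=15, true_r=0.5):
    """Draw one small sample from a population in which height and
    weight are genuinely correlated; return the test's p-value."""
    height = rng.normal(size=n)
    noise = rng.normal(size=n)
    weight = true_r * height + np.sqrt(1 - true_r**2) * noise
    r, p = stats.pearsonr(height, weight)
    return p

pvals = [sample_pvalue() for _ in range(1000)]
misses = sum(p >= 0.05 for p in pvals)
print(f"{misses} of 1,000 samples showed no significant relationship")
# A nontrivial share of samples is nonsignificant despite the real
# population relationship, which is why findings are always tentative
# and hypotheses are supported rather than proved.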

TIP If a researcher uses any statistical tests (as is true in most
quantitative studies), it means that there were underlying
hypotheses—regardless of whether the researcher explicitly
stated them—because statistical tests are designed to test
hypotheses. In planning a quantitative study of your own, do
not hesitate to state hypotheses.

Critical Appraisal of Research Problems, Research
Questions, and Hypotheses
In appraising research articles, you need to evaluate whether
researchers have adequately communicated their problem. The
problem statement, purpose, research questions, and hypotheses set
the stage for a description of what the researchers did and what they
learned. You should not have to dig deeply to decipher the research
problem or the questions.
A critical appraisal of the research problem is multidimensional.
Substantively, you need to consider whether the problem has
significance for nursing. Studies that build in a meaningful way on
existing knowledge are well-poised to contribute to evidence-based
nursing practice. Researchers who develop a systematic program of
research, designing new studies based on their own earlier findings,
are especially likely to make important contributions (Conn, 2004).
For example, Cheryl Beck’s series of studies relating to postpartum
depression and traumatic births have influenced women’s health
care worldwide. Also, research problems stemming from established
research priorities (Chapter 1) have a high likelihood of yielding
important new evidence for nurses because they reflect expert
opinion about areas of needed research.
Another dimension in appraising the research problem is
methodologic—in particular, whether the research problem is
compatible with the chosen research paradigm and its associated
methods. You should also evaluate whether the statement of purpose
or research questions have been properly worded and lend
themselves to empirical inquiry.
In a quantitative study, if the research article does not contain
explicit hypotheses, you need to consider whether their absence is
justified. If there are hypotheses, you should evaluate whether they
are logically connected to the problem and are consistent with
existing evidence or relevant theory. The wording of hypotheses
should also be assessed. To be testable, the hypothesis should
contain a prediction about the relationship between two or more measurable variables. Specific guidelines for critically appraising
research problems, research questions, and hypotheses are presented
in Box 4.3.

Research Examples
This section describes how the research problem and research
questions were communicated in two nursing studies, one
quantitative and one qualitative.

Research Example of a Quantitative Study

Study: Effectiveness of a patient-centred, empowerment-based
intervention programme among patients with poorly controlled type
2 diabetes (Cheng et al., 2018).
Problem statement (Excerpt; citations omitted to streamline
presentation): “Despite extensive advances and collective
prioritization of evidence-based diabetes management, poor
glycaemic control still remains common in many countries…
Adherence to diabetes self-management regimen continues to be the
most significant determinant to attain glycaemic target. Patients with
poorly controlled type 2 diabetes find enormous difficulty
synthesizing self-management recommendations in the dynamic and
complex daily context. There is a great call to support and empower
them to take a proactive self-management role in the disease
trajectory. A flourishing body of studies have illustrated that
patient-centred, empowerment-based approach could boost patients’
engagement in and commitment to diabetes self-management.” (p. 44).
Statement of purpose: The aim of this study was “to evaluate the
effectiveness of a patient-centred, empowerment-based programme
on glycaemic control and self-management behaviours among
patients with poorly controlled type 2 diabetes.” (p. 43).
Research question: Although not formally stated by the researchers,
we can state their Therapy question as follows: Among patients with
poorly controlled type 2 diabetes (P), does participation in a
patient-centered self-management program (I), compared with
nonparticipation (C), lead to improvements in HbA1c levels and
self-management behaviors (O)?

Hypotheses: The researchers hypothesized that compared with
study participants who do not receive the intervention, patients who
receive the intervention program will have (1) significantly
optimized glycaemic control and (2) better self-management
behaviors.
Study methods: The study was conducted in two tertiary hospitals
in China. A total of 242 eligible patients were recruited and were
allocated, at random, to either receive or not receive the intervention.
Those in the intervention group received a 6-week self-management
program; the control group received general health education and
postdischarge follow-up. The key outcomes were HbA1c levels and
scores on a measure of self-management behaviors.
Key findings: HbA1c values declined in both groups, and group
differences at follow-up were not statistically significant. However,
patients in the intervention group exhibited significant
improvements in diet management and blood glucose
self-monitoring both in the short term (8-week follow-up) and longer
term (20-week follow-up).

Research Example of a Qualitative Study

Study: Patients’ perceptions and experiences of living with a surgical
wound healing by secondary intention (McCaughan et al., 2018).
Problem statement (Excerpt; citations omitted to streamline
presentation): Most surgeries in the United Kingdom “result in a
wound that heals by primary intention; that is to say, the incision is
closed by fixing the edges together with sutures (stitches), staples,
adhesive glue, or clips. However, some wounds may be left open to
heal…Healing occurs through the growth of new tissue from the
base of the wound upwards, a process described as ‘healing by
secondary intention.’ …Management of open surgical wounds
requires intensive treatment that may involve prolonged periods of
hospitalisation for patients and/or further surgical intervention…
While there is an expansive literature relating to patients’
experiences of chronic wounds, such as leg ulcers, evidence concerning the impact on patients of experiencing an open surgical
wound is lacking.” (p. 30).
Statement of purpose: The objective of this study was “to explore
patients’ views and experiences of living with a surgical wound
healing by secondary intention” (p. 29).
Research questions: The patients’ experiences were explored by
asking such questions as “How has this wound impacted on your
daily life?” and “What effect has the wound had on your
relationship with your immediate family or friends?”
Method: Twenty patients from two locations in the north of England who
had a surgical wound healing by secondary intention participated in
the study. The researchers made efforts to recruit patients of
different gender, age, wound duration, and type of surgery. The
study was designed in collaboration with three patient advisers.
Study participants were interviewed in-depth, with interviews
continuing until data saturation occurred.
Key findings: The patients reported that alarm, shock, and disbelief
were their initial reactions to their surgical wound healing.
Wound-associated factors had a profound negative impact on their daily life,
physical and psychosocial functioning, and well-being. Feelings of
powerlessness and frustration were common, and many expressed
dissatisfaction with the perceived lack of continuity of care in
relation to wound management.

Summary Points

A research problem is a perplexing or enigmatic situation that a
researcher wants to address through disciplined inquiry. Researchers
usually identify a broad topic, narrow the problem scope, and identify
questions consistent with a paradigm of choice.
Common sources of ideas for nursing research problems are clinical
experience, patient queries, relevant literature, quality improvement
initiatives, social issues, and external suggestions.
Key criteria in selecting a research problem are that the problem should
be clinically important; researchable; feasible; and of personal interest.
Feasibility involves the issues of time, researcher skills, cooperation of
participants and other people, availability of facilities and equipment,
adequacy of resources, and ethical considerations.
Researchers communicate their goals as problem statements, statements
of purpose, research questions, or hypotheses.
Problem statements, which articulate the nature, context, and
significance of a problem, include several components organized to form
an argument for a new study: problem identification; the background,
scope, and consequences of the problem; knowledge gaps; and possible
solutions to the problem.
A statement of purpose, which summarizes the overall study goal,
identifies key concepts or variables and the population. Purpose
statements often communicate, through the use of verbs and other key
terms, the underlying research tradition of qualitative studies, or whether
the study is experimental or nonexperimental in quantitative ones.
A research question is the specific query researchers want to answer in
addressing the research problem. In quantitative studies, research
questions usually focus on relationships between variables.
In quantitative studies, a hypothesis is a statement of predicted
relationships between two or more variables. Complex hypotheses may
involve a moderator variable (a variable that alters the strength or
direction of a relationship between two variables) or a mediating
variable that acts as a “go-between” in the link between two variables.
Directional hypotheses predict the direction of a relationship;
nondirectional hypotheses predict the existence of relationships, not
their direction.

Research hypotheses predict the existence of relationships; null
hypotheses, which express the absence of a relationship, are the
hypotheses subjected to statistical testing.
Hypotheses are never proved or disproved in an ultimate sense—they are
accepted or rejected, supported or not supported by the research data.

Study Activities
Study activities are available to instructors on the book’s website.

References Cited in Chapter 4
Beck C. T., LoGiudice J., & Gable R. K. (2015). A mixed methods study of

secondary traumatic stress in certified nurse-midwives: shaken belief in the
birth process. Journal of Midwifery & Women’s Health, 60, 16–23.

Cheng L., Sit J., Choi K., Chair S., Li X., Wu Y., … Tao M. (2018). Effectiveness
of a patient-centred, empowerment-based intervention programme among
patients with poorly controlled type 2 diabetes. International Journal of
Nursing Studies, 79, 43–51.

Conn V. (2004). Building a research trajectory. Western Journal of Nursing
Research, 26, 592–594.

Dial M., & Holmes J. (2018). “I do the best I can”: Personal care preferences of
patients of size. Applied Nursing Research, 39, 259–264.

Iacorossi L., Gambalunga F., Fabi A., Giannarelli D., Marchetti A., Piredda M.,
& DeMarinis M. (2018). Adherence to oral administration of endocrine
treatment in patients with breast cancer. Cancer Nursing, 41, E57–E63.

* Lechner A., Lahmann N., Lichterfeld-Kottner A., Müller-Werdan U.,
Blume-Peytavi U., & Kottner J. (2018). Dry skin and the use of leave-on
products in nursing care: a prevalence study in nursing homes and hospitals.
Nursing Open, 6, 189–196.

Lincoln Y. S., & Guba E. G. (1985). Naturalistic inquiry. Newbury Park, CA:
Sage.

LoGiudice J. A., & Beck C. T. (2016). The lived experience of childbearing from
survivors of sexual abuse: “it was the best of times, it was the worst of
times”. Journal of Midwifery & Women’s Health, 61, 474–481.

Lööf H., & Johansson U. (2019). “A body in transformation”—an empirical
phenomenological study about fear-avoidance beliefs toward physical
activity among persons experiencing moderate to severe rheumatic pain.
Journal of Clinical Nursing, 28, 321–329.

McCaughan D., Sheard L., Cullum N., Dumville J., & Chetter I. (2018). Patients’
perceptions and experiences of living with a surgical wound healing by
secondary intention. International Journal of Nursing Studies, 77, 29–38.

Nahal M., Axelsson A., Iman A., & Wigert H. (2019). Palestinian children’s
narratives about living with spina bifida: stigma, vulnerability, and social
exclusion. Child: Care, Health, and Development, 45, 54–62.

** Palesh O., Kamen C., Sharp S., Golden A., Neri E., Spiegel D., & Koopman
C. (2018). Physical activity and survival in women with advanced breast

cancer. Cancer Nursing, 41, E31–E38.
Ploukou S., & Panagopoulou E. (2018). Playing music improves well-being of

oncology nurses. Applied Nursing Research, 39, 77–80.
*A link to this open-access article is provided in the Toolkit for Chapter 4 in the Resource Manual.

**This journal article is available on the book’s website for this chapter.

1Although this does not occur with great frequency, it is possible to make a hypothesis
about a specific value. For example, we might hypothesize that the rate of medication
compliance in a specific population is 60%. Chapter 18 has an example.

C H A P T E R 5

Literature Reviews: Finding and Critically Appraising Evidence

A research literature review is a written synthesis and appraisal of evidence on a research problem.
Researchers typically undertake a literature review as an early step in conducting a study. This chapter
describes activities associated with literature reviews, including locating and critically appraising
studies.

Some Literature Review Basics
Before discussing the steps involved in doing a research-based literature review, we briefly discuss
some general issues.

Purposes of Research Literature Reviews
Healthcare professionals are undertaking many different types of research synthesis, several of which
are specifically intended to support evidence- based practice. Grant and Booth (2009) identified 14
different types of synthesis—and even more review types are now appearing in the literature. We
described one type of synthesis (systematic reviews) in Chapter 2, and several others will be discussed
in Chapter 30. In this chapter, we focus primarily on narrative literature reviews that researchers
prepare during the conduct of a new study.

TIP A narrative literature review is one in which the findings from the studies under review are
integrated using the judgments of the reviewers, rather than through statistical integration—as in a
meta-analysis. Until meta-analytic techniques were developed, all reviews were narrative reviews.

Once a research problem and research questions have been identified, a thorough literature review is
essential. Literature reviews provide researchers with information to guide a high-quality study, such as
information about the following:

The scope and complexity of the identified research problem (for the argument);
What other researchers have found in relation to the research question;
The quality and quantity of existing evidence;
The contexts and locales in which research has been conducted;
The characteristics of the people who have served as study participants;
Theoretical underpinnings of completed studies;
Methodologic strategies that have been used to address the question; and
Gaps in the existing evidence base—the type of new evidence that is needed.

This list suggests that a good literature review requires thorough familiarity with available evidence. As
Garrard (2017) has advised, you must strive to own the literature on a topic to be confident of preparing
a high-quality review.
The term “reviewing the literature” is often used to refer to the process of identifying, locating, and
reading relevant sources of research evidence—that is, conducting a literature review. However,
researchers will ultimately need to summarize what they have learned in written form. The length of the
product depends on its purpose. Written narrative literature reviews may take the following forms:

A review embedded in a research report. Literature reviews in the introduction to a research report provide readers
with an overview of existing evidence and contribute to the argument for new research. These reviews are usually
only two to three double-spaced pages, and so only key studies can be cited. The emphasis is on summarizing
and critiquing an overall body of evidence and demonstrating the need for a new study.
A review in a research proposal. A literature review in a proposal (often, to request financial support) provides
context and illuminates the rationale for new research. The length of such reviews is specified in proposal
guidelines; sometimes it is just a few pages. When this is the case, the review must reflect expertise on the topic in
a succinct fashion.
A review in a thesis or dissertation. Dissertations in the traditional format (see Chapter 32) often include a thorough,
critical literature review. An entire chapter may be devoted to the review, and such chapters are often 20 to 30
pages long. These reviews typically include an evaluation of the overall body of literature as well as critiques of
key individual studies. They may also describe relevant theoretical foundations for the study.

In all three cases, the review is not simply a knowledge synthesis: the review provides a context for
readers of the report or proposal and offers a justification for a new inquiry. Such reviews also can
demonstrate the researcher’s competence and thoroughness.

Additionally, nurses sometimes prepare free-standing narrative reviews that are not necessarily done in
connection with a planned new study. A written review may be undertaken as a course requirement in
graduate school or for publication in a journal. As an example, Gleason et al. (2018) published a
literature review on the prevalence of atrial fibrillation symptoms and the relationship between such
symptoms and patients’ sex, race, and psychological distress. Such free-standing reviews are usually 15
to 25 pages long.

Literature Reviews in Qualitative Research
Quantitative researchers almost always do an upfront literature review, but qualitative researchers have
varying opinions about reviewing the literature before doing a new study. Some of the differences
reflect viewpoints associated with qualitative research traditions.
Grounded theory researchers often collect and analyze their data before reviewing the literature.
Researchers turn to the literature once the grounded theory is sufficiently developed, seeking to relate
the theory to prior findings. Glaser (1978) warned that, “It’s hard enough to generate one’s own ideas
without the ‘rich’ detailment provided by literature in the same field” (p. 31). Thus, grounded theory
researchers may defer doing a literature review, but then later consider how previous research fits with
or extends the emerging theory.
Phenomenologists often undertake a search for relevant materials at the outset of a study, looking in
particular for experiential descriptions of the phenomenon being studied (Munhall, 2012). The purpose
is to expand the researcher’s understanding of the phenomenon from multiple perspectives, and this
may include an examination of artistic sources in which the phenomenon is described (e.g., in novels or
poetry).
Even though “ethnography starts with a conscious attitude of almost complete ignorance” (Spradley,
1979, p. 4), literature relating to the chosen cultural problem is often reviewed before data collection. A
second, more thorough literature review is often done during data analysis and interpretation so that
findings can be compared with previous findings.
Regardless of tradition, if funding is sought for a qualitative project, an upfront literature review is
usually necessary. Proposal reviewers need to understand the context for a proposed study when
deciding whether it should be funded.

Sources for a Research Review
Written source materials vary in their quality and content. In performing a literature review, you will
have to decide what to read and what to include in a written review. You may begin your search with
broad reference sources on a topic (e.g., textbooks), but ultimately you will mostly be retrieving
information from articles published in professional journals.
Findings from prior completed studies are the most important type of information for a research review.
You should rely mostly on primary source research reports, which are descriptions of studies written by
the researchers who conducted them.

TIP Study protocols are an additional type of primary source—they are descriptions of the design
and methods for studies that are underway but have not yet been completed. These protocols,
which are available in registries and sometimes in journals, allow researchers to understand what
new evidence will become available and hence can help you avoid unwanted duplication.

Secondary sources are descriptions of studies prepared by someone other than the original researcher.
Literature reviews, for example, are secondary sources. If reviews are recent, they are very useful
because they provide an overview of the topic and a valuable bibliography. Secondary sources are not
substitutes for primary sources because they typically fail to provide much detail about studies and may
not be completely objective.
In addition to research reports, your search may yield nonresearch references, such as case reports,
anecdotes, editorials, or clinical descriptions. Nonresearch materials may broaden understanding of a
problem, demonstrate a need for research, or describe aspects of clinical practice. These writings may
help in formulating research ideas, but they usually have limited utility in written research reviews
because they do not address the central question: What is the current state of evidence on this research
problem?

Primary and Secondary Questions for a Review
For free-standing literature reviews, reviewers may summarize evidence about a single focused
question, such as: Do virtual reality goggles (I) reduce pain (O) in patients undergoing wound care procedures
(P)? For those who are undertaking a literature review as part of a new study, the primary question for
the literature review is the same as the research question for the new study. The researcher wants to
know: What is the current state of knowledge on the question that I will be addressing in my study?
If you are doing a review for a new study, you inevitably will need to search for current evidence on
several secondary questions because you need to develop an argument for the new study. An example
will clarify this point.
Suppose that we were conducting a study to address the following question: Among nurses working in
hospitals (P), what characteristics of the nurses or their practice settings (I) are associated with their management
of children’s pain (O)? Such a question might arise in the context of a perceived problem, such as a
concern that nurses’ treatment of children’s pain is not always optimal. A simplified statement of the
problem might be as follows:
Many children are hospitalized annually and many hospitalized children experience high levels of pain.
Although effective analgesic and nonpharmacologic methods of controlling children’s pain exist, and
although there are reliable methods of assessing children’s pain, previous studies have found that
nurses do not always manage children’s pain effectively.
This rudimentary problem statement suggests a number of secondary questions for which up- to- date
evidence needs to be found. Examples of such secondary questions include the following:

How many children are hospitalized each year?
What levels of pain do hospitalized children typically experience?
How can pain in hospitalized children be reliably assessed?
How knowledgeable are nurses about pain assessment and pain management strategies for children?

Thus, a literature review tends to be a multipronged task when it is done in preparation for a new
study. It is important to identify all questions for which information from the research literature needs
to be retrieved.

Major Steps and Strategies in a Narrative Literature Review
Conducting a literature review is a little like doing a full study, in the sense that reviewers start with a
question, formulate and implement a plan for gathering information, and then analyze and interpret the
information. The “findings” are then summarized in a written product.
Figure 5.1 outlines key steps in the literature review process. As the figure shows, there are several
potential feedback loops, with opportunities to retrace earlier steps in search of more information. This
chapter discusses each step, but some steps are elaborated in Chapter 30 in our discussion of systematic
reviews.

FIGURE 5.1 Flow of tasks in a literature review.

Conducting a high- quality literature review is more than a mechanical exercise—it is an art and a
science. Several qualities characterize a high- quality review. First, the review must be comprehensive,
thorough, and up- to- date. To “own” the literature (Garrard, 2017), you must be determined to become
an expert on your topic, which means that you need to be diligent in hunting down leads for possible
sources of evidence.

TIP Locating all relevant information on a research question is like being a detective. The
literature retrieval tools we discuss in this chapter are essential aids, but there inevitably needs to
be some digging for the clues to evidence on a topic. Be prepared for sleuthing.

Second, a high- quality review is systematic. Decision rules should be clear, and criteria for including or
excluding a study need to be explicit. This is because a third characteristic of a good review is that it is
reproducible, which means that another diligent reviewer would be able to apply the same decision
rules and criteria and come to similar conclusions about the evidence.
Another desirable attribute of a literature review is the absence of bias. This is more easily achieved
when systematic rules for evaluating information are followed or when a team of researchers
participates in the review—as is almost always the case in systematic reviews. Finally, reviewers should
strive for a review that is insightful and that is more than “the sum of its parts.” Reviewers can
contribute to knowledge through an astute synthesis of the evidence.
Doing a literature review is somewhat similar to doing a qualitative study: you will need a flexible and
creative approach to “data collection.” Leads for relevant studies should be pursued until “saturation”
is achieved—i.e., until your search strategies yield redundant information about studies to include.
Finally, the analysis of your “data” will typically involve the identification of important themes in the
literature.

Organization in Literature Reviews
The importance of being well- organized in conducting a literature review cannot be overemphasized.
As discussed in “Documentation in Literature Retrieval” later in this chapter, we encourage you to
document all your decisions and products, and documentation needs to be maintained in an organized
framework.
You may prefer to use traditional methods of searching, retrieving, and storing information. For
example, you may retrieve a journal article, print or photocopy it, and write notes in the margin. If you
do this, you will still need to develop a cataloging system that enables you to find a particular article
(e.g., alphabetical filing by last name of the first author).
Increasingly, journal articles are retrieved as portable document files (pdf) and read online using Adobe
software, which permits you to highlight text passages and enter marginal comments. If this is your
approach, you should create a folder on your computer or in the cloud to store these articles, naming
each file in a manner that will allow you to easily locate it. For example, here is how we named the file
storing the previously mentioned Gleason et al. (2018) literature review:
Gleason2018JCNAtrialFibSymptoms.pdf. This file name indicates the last name of the first author, year
of publication, an abbreviation for the journal (JCN = Journal of Cardiovascular Nursing), and a brief
phrase about the topic. This system would result in a document folder with articles listed alphabetically
by the first authors’ last names.
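If you handle many files, a naming convention like this is easy to automate. The following is a minimal sketch in Python; the function and its arguments are our own invention for illustration, not part of any citation-management tool.

    # Build a file name of the form AuthorYearJournalTopic.pdf
    def article_file_name(first_author, year, journal_abbrev, topic):
        # Capitalize each topic word and remove spaces so the name is one token
        topic_part = "".join(word.capitalize() for word in topic.split())
        return f"{first_author}{year}{journal_abbrev}{topic_part}.pdf"

    # Reproduces the example from the text
    print(article_file_name("Gleason", 2018, "JCN", "atrial fib symptoms"))
    # -> Gleason2018JCNAtrialFibSymptoms.pdf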
You may opt to use reference management software that will help you to stay organized—as well as
help you retrieve articles, maintain a reference library and notes, insert citations into papers, and create
a bibliography when you write up your review. Popular reference management software that can be
used with either Windows for PCs or Macs includes EndNote (free for the Basic version), Mendeley
(also free), and RefWorks. Many other reference management software packages are available (for
example, see https://en.wikipedia.org/wiki/Comparison_of_reference_management_software).
It is wise to think ahead about the various components of your literature review effort and to have a
plan for how to organize them—most likely this will involve the creation of various file folders that will
be stored on your computer or in the cloud. For example, if you are not using reference management
software, you should create a master folder (e.g., labeled “Pain_Management_Children”), with multiple
subfolders. For example, one subfolder could store the source documents (e.g., the pdf journal article
files), another could store documentation of your search strategy and results, and another subfolder
could save drafts of your actual literature review.
Another organizational tool—one that is essential for a systematic review—is a flow chart that
documents your progress in identifying, retrieving, screening, and selecting source materials. Figure 5.2
presents an example of such a flow chart with fictitious numbers (n = ) shown in each box. This figure
shows that the reviewer started with 400 possible source documents, of which only 15 were used in the
final literature review.

FIGURE 5.2 Example of a flow chart documenting literature search progress.

Locating Relevant Literature for a Research Review
As shown in Figure 5.1, an early step in a literature review is devising a strategy to locate relevant
studies. The ability to locate research documents on a topic is an important skill that requires
adaptability. Sophisticated new search strategies and tools are being introduced regularly. We urge you
to consult with librarians, colleagues, or faculty for suggestions. Reference librarians in health libraries
are especially valuable and often serve on teams conducting systematic reviews.

Formulating a Search Strategy
There are many ways to search for research evidence. Searching is inevitably an iterative process that
evolves as new “leads” are discovered based on information you have already retrieved.

Search Strategy Options
Cooper (2017) has identified several search strategies, one of which we describe in some detail in this
chapter: searching for references in bibliographic databases. Database searches, which can be done
efficiently from computers and tablets, are likely to yield the largest number of research references—
indeed, sometimes the yield can be overwhelming. Databases are searched primarily for key variables
(e.g., pain management) but can also be searched for the names of researchers who have played a key
role in a field.
Another approach, called the ancestry approach (also called snowballing, footnote chasing, or pearl growing),
involves using references cited in recent relevant studies to track down earlier research on the same
topic (the “ancestors”). This is an ongoing process that can be used to not only identify earlier relevant
studies, but also to discover new search terms for subsequent electronic searches.
A third method, the descendancy approach, is to find a pivotal early study and to search forward in
citation indexes to find more recent studies (“descendants”) that cited the key study. Other strategies
exist for tracking down what is called the grey literature, which refers to studies with more limited
distribution, such as conference papers, unpublished reports, and so on. We describe these strategies in
Chapter 30 on systematic reviews. If your intent is to “own” the literature, then you will likely want to
adopt many of these strategies.

TIP You may be tempted to begin a literature search through an Internet search engine, such as
Google, Yahoo, or Bing. Such a search is likely to yield a lot of “hits” on your topic but is unlikely
to give you full bibliographic information on relevant research. However, such searches can provide
useful leads for search terms. Also, an Internet search may be the appropriate route for finding
answers to secondary questions, such as: How many children are hospitalized annually? This
information is more likely to be found in government reports, which are available online, than in
research articles.

Eligibility Criteria Specifications
Search plans also involve decisions about the criteria that would make a study eligible for your review.
These decisions need to be explicit to guide your search of bibliographic databases. Search limits are
most often managed in databases through the use of filters (or limiters in some bibliographic software).
If you are not multilingual, you may need to constrain your search to studies written in your own
language. You may want to limit your search to studies conducted within a certain time frame (e.g.,
within the past 15 years). You may also want to exclude studies with certain types of participants. For
instance, in our example of a literature search about nurses’ management of children’s pain, we might
want to exclude studies in which the children were neonates.
Constraining your search might help you to avoid irrelevant material but be cautious about putting too
many restrictions on your search, especially initially. You can always make decisions to exclude studies
at a later point.

TIP Be sure not to limit your search to articles exclusively in the nursing literature (e.g., in the
nursing subset of records in the database called PubMed). Researchers in many disciplines engage
in research relevant to nursing. Also, many nurse researchers publish in nonnursing journals,
increasingly as members of interprofessional teams. Moreover, in some databases (e.g., PubMed),
some journals with many articles contributed by nurse researchers are not coded as being in the
nursing subset (e.g., Qualitative Health Research, Birth), whereas some journals that are in the
nursing subset have articles mostly not written by nurse authors (e.g., Journal of Wound Care).

Identifying Keywords
Reviewers seeking articles for their reviews begin with a set of search terms, often called keywords.
Thus, an important early task is to identify and make a written list of the keywords that will be used to
search bibliographic databases. The keyword list will be augmented as your search proceeds.
Traditionally, the keywords are your main research variables. Many researchers use the PICO
formulation (population, intervention/influence, comparison, outcome) discussed in Chapter 2 as
keywords for a literature search, although this may not always be the best strategy for systematic
reviews (see Chapter 30).
In developing a list of keywords, it is important to include synonyms and to think broadly about related
terms. For example, if you were searching for articles on teenage smoking, you should consider other
terms for teenage (e.g., adolescent, children, youth) and for smoking (e.g., tobacco, cigarettes). The use of a
thesaurus (available in word processing software) for identifying synonyms is recommended—but take
note of keywords specified by researchers themselves in articles you locate.

Searching Bibliographic Databases
Reviewers typically begin by searching bibliographic databases that can be accessed by computer. The
databases contain entries for millions of journal articles, and the articles are coded by professional
indexers to facilitate retrieval. For example, articles may be coded for language used (e.g., English),
subject matter (e.g., pain), journal subset (e.g., nursing), and so on. Some databases can be accessed free
of charge (e.g., PubMed, Google Scholar), whereas others are sold commercially—but they are often
available through hospital or university libraries. Most database programs are user- friendly, offering
menu- driven systems with on- screen support so that retrieval can proceed with minimal instruction.

Getting Started With a Bibliographic Database
Before searching an electronic database, you should become familiar with the features of the software
used to access the database. The software gives you options for limiting your search, combining the
results of two searches, saving your search, and sending you notifications of new citations relevant to
your search. Most programs have tutorials that can improve the efficiency and effectiveness of your
search.
In most databases, there are two major strategies for searching. One method is to search for
standardized subject headings (subject codes) that are assigned by indexers (usually professionals with
Master’s degrees or higher in relevant disciplines). The subject headings differ from one database to
another. It is useful to learn about the relevant subject codes because they offer a path to retrieving
articles that use different words to describe the same concept. Another major advantage is that indexers
code the articles based on a reading of the entire article (not just the abstract), and they code for meaning
and not just words. Subject codes for databases can be located in the database’s thesaurus or reference
tools.
An alternative strategy is to enter your own keywords into a search field. Such a search is an important
supplement to searching using the database’s controlled vocabulary because indexers are not infallible.
However, such keyword searches are limited to searching for words in the article’s title or abstract (not
in the full text), and so if concepts are not mentioned in the title or abstract, the article will not be
retrieved.
Most bibliographic software has automatic term mapping capabilities. Mapping is a feature that
facilitates a search using your own keywords. The software translates (“maps”) the keywords you enter
into the most plausible subject codes. Nevertheless, it is important to undertake both a keyword search
and a subject code search because they yield overlapping but nonidentical results.

General Database Search Features
Some features of an electronic search are similar across databases. One feature is the use of Boolean
operators to expand or delimit a search. Three widely used Boolean operators are AND, OR, and NOT
(in all caps for some databases). The operator AND delimits a search. If we searched for pain AND child,
the software would retrieve only records that have both terms. The operator OR expands the search:
pain OR child could be used in a search to retrieve records with either term. Finally, NOT narrows a
search: pain NOT child would retrieve all records with pain that did not include the term child. Note that
when using multiple Boolean operators, they are processed from left to right. For example, the search
phrase teenage AND smoking OR cigarettes would retrieve (1) records that include both teenage and
smoking and (2) all records with cigarettes, whether or not the article is about teenage smokers.
Parentheses can be used to reorder the terms: teenage AND (smoking OR cigarettes). Boolean operators
also can be used to combine searches for keyword terms and the last names of prominent researchers in
a field, for example, teenage AND (smoking OR cigarettes) AND Kulbok (a researcher).
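Because these operators act on sets of records, their behavior, including left-to-right processing and the effect of parentheses, can be mimicked with ordinary set operations. The Python sketch below uses an invented miniature "database" purely for illustration.

    # Invented record IDs matching each search term
    teenage = {1, 2, 3, 4}
    smoking = {2, 3, 9}
    cigarettes = {5, 9}

    print(teenage & smoking)   # AND (intersection): {2, 3}
    print(teenage | smoking)   # OR (union): {1, 2, 3, 4, 9}
    print(teenage - smoking)   # NOT (difference): {1, 4}

    # Left to right: (teenage AND smoking) OR cigarettes
    print((teenage & smoking) | cigarettes)   # {2, 3, 5, 9}
    # Parentheses reorder: teenage AND (smoking OR cigarettes)
    print(teenage & (smoking | cigarettes))   # {2, 3}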

TIP Be extremely careful using the “NOT” operator because you run the risk of inadvertently
removing relevant articles. For example, if you were searching for studies of female teenage
smokers and used “NOT male” in the search field, the software would remove any article that
included both male and female participants.

Truncation symbols are another useful tool for searching databases. These symbols vary from one
database to another, but their function is to expand the search. A truncation symbol (often an asterisk,
*) expands a search term to include all forms of a root word. For example, a search for child* would
instruct the computer to search for any word that begins with “child” such as children, childhood, or
childrearing. For each database, it is important to learn what these special symbols are and how they
work. For example, many databases require at least three letters at the beginning of a search term before
a truncation symbol can be used (e.g., ca* would not be allowed).
Some databases (but not PubMed) allow for a wildcard symbol—often a question mark—that can be
inserted into the middle of a search term to allow for alternative spellings. For example, in databases
that allow wildcards, a search for behavio?r would retrieve records with either behavior or behaviour.
Although truncation and wildcard symbols can sometimes be useful, they have one major drawback: in
most databases, the use of special symbols turns off a software’s mapping feature. For example, a search
for child* would retrieve records in which any form of “child” appeared in text fields, but it would not
map any of these concepts onto the database’s subject heading codes. It may be preferable to use a
Boolean operator to list all terms of interest (e.g., child OR children), which would look for either term in
a text word search of the title and abstract and would map onto the appropriate subject code.
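As a rough analogy (databases implement these features internally, not via user-visible regular expressions), truncation and wildcard symbols can be translated into simple patterns, as in this illustrative Python sketch:

    import re

    # Truncation: child* matches any word beginning with "child"
    truncation = re.compile(r"\bchild\w*", re.IGNORECASE)
    print(truncation.findall("children in childhood; childrearing practices"))
    # -> ['children', 'childhood', 'childrearing']

    # Wildcard: behavio?r allows an optional character, catching both spellings
    wildcard = re.compile(r"\bbehaviou?r\b", re.IGNORECASE)
    print(wildcard.findall("behavior and behaviour"))
    # -> ['behavior', 'behaviour']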
Another issue concerns phrase searching in which you want words to be kept together (e.g., blood
pressure). Some bibliographic software would treat this as blood AND pressure and would search for
records with both terms somewhere in text fields, even if they are not contiguous. Quotation marks
sometimes can be used to ensure that the words are searched in combination, as in “blood pressure.”
PubMed recommends, however, that you do not use quotation marks until you have first tried a search
without them. PubMed automatically searches for phrases during its mapping process—i.e., in
searching for relevant subject heading codes.

Key Electronic Databases for Nurse Researchers
Two bibliographic databases that are especially useful for nurse researchers are the Cumulative Index to
Nursing and Allied Health Literature (CINAHL) and Medical Literature On- Line (MEDLINE, accessed
through PubMed), which we discuss in the next sections. We also briefly discuss Google Scholar. Other
potentially useful bibliographic databases/search engines for nurses include the following:

British Nursing Index (BNI)

Cochrane Central Register of Controlled Trials (CENTRAL)
Cochrane Database of Systematic Reviews
Database of Promoting Health Effectiveness Reviews (DoPHER)
Excerpta Medica database (EMBASE)
Health and Psychosocial Instruments database (HaPI)
Psychology Information (PsycINFO)

In addition, the ISI Web of Knowledge and Scopus are two citation indexes for retrieving articles that cite
a source article.
Note that a search strategy that works well in one database does not always produce good results in
another. Thus, it is important to explore strategies in each database and to understand how each
database is structured—for example, what subject codes are used, how they are organized in a
hierarchy, and what special features are available.

TIP In the following sections, we provide specific information about using CINAHL and
MEDLINE via PubMed. Note, however, that databases and the software through which they are
accessed change periodically, and so our instructions may not be up- to- date.

Cumulative Index to Nursing and Allied Health Literature
CINAHL is an important bibliographic database: it covers references to virtually all English- language
nursing and allied health journals, and includes books, dissertations, and selected conference
proceedings in nursing and allied health fields. There are several versions of the CINAHL database
(e.g., CINAHL Plus, CINAHL Complete), each with somewhat different features relating to full text
availability and journal coverage.
The CINAHL database indexes material from more than 5,000 journals dating from 1981 and contains
more than 6 million records. In addition to providing bibliographic information for references (i.e.,
author, title, journal, year of publication, volume, and page numbers), CINAHL provides abstracts of
most citations. Links to the actual article are sometimes provided. We illustrate features of CINAHL but
note that some features may be different at your institution.
At the outset, you might begin with a “basic search” by simply entering keywords or phrases relevant to
your primary question. As you begin to enter your term into the search box, autocomplete suggestions
will display, and you can click on the one that is the best match. In the basic search screen, you can limit
your search in a number of ways, for example, limiting the records retrieved to those with certain
features (e.g., only ones with abstracts; only research articles); to a specific range of publication dates
(e.g., only those from 2010 to the present); or to those in specific languages (e.g., English). The search
screen allows you to expand your search by clicking an option labeled “Apply related words.”
As an example, suppose we were interested in recent research on nurses’ pain management for children.
If we did a keyword search for pain management, we would get about 18,000 records. Searching for pain
management AND child AND nurse would bring the number down to about 400 (we did not truncate
child* because this would retrieve records for some irrelevant terms associated with pain, such as
childhood). We could pare the number down to about 160 by limiting the search to research articles with
abstracts published since 2000.
The full records for the 160 references would then be displayed on the monitor in a Search Results list.
There is a “sort” option at the top of the list that allows you to sort the references based on several
criteria, such as publication date, author’s last name, and relevance. From the Results list, we could
place promising references into a folder for later scrutiny by clicking on a file icon in the upper right
corner of each entry. We could then save the folder, print it, or export it to reference manager software
such as EndNote.
An example of an abridged CINAHL record entry for a study identified through the search on the
management of children’s pain is presented in Figure 5.3. The record begins with the article title, the
authors’ names and affiliation, and source. The source indicates the following:

Name of the journal (Pain Management Nursing)
Year and month of publication (Feb 2015)
Volume (16)
Issue (1)
Page numbers (40-50)

FIGURE 5.3 Example of a record from a CINAHL (Cumulative Index to Nursing and Allied Health Literature) search.

(Abstract reprinted with permission from He H.G., Klainin-Yobas P., Ang E., Sinnappan R., Pölkki T., & Wang W. (2015).
Nurses’ provision of parental guidance regarding school-aged children’s postoperative pain management: A descriptive
correlational study. Pain Management Nursing, 16, 40–50.)

The record also shows the major and minor CINAHL subject headings that were coded by the indexers.
Any of these headings could have been used to retrieve this reference. Note that the subject headings
include substantive codes, such as Postoperative Pain, and methodologic codes (e.g., Correlational Studies),
person characteristic codes (e.g., Child), and a location code (Singapore). Next, the abstract for the study
is shown. Based on the abstract, we might be able to decide whether this reference was pertinent. Each
entry shows an accession number that is the unique identifier for each record in the CINAHL database,
as well as other identifying numbers.
An important feature of CINAHL helps you to find other relevant references once a good one has been
found. In Figure 5.3 you can see that the record offers many embedded links on which you can click. For
example, you could click on any of the authors’ names to see if they published other related articles.
There is also a sidebar link in each record called Times Cited in this Database (if there has been a citation),
with which you could retrieve records for articles that had cited this paper (for a descendancy search).
Another link is labeled Find Similar Results that suggests other relevant references.
In CINAHL, you can also explore the structure of the database’s thesaurus to get additional leads for
searching. The tool bar at the top of the screen has a tab called CINAHL Headings. When you click on this
tab, you can enter a term of interest in the Browse field and select one of three options: Term Begins With,
Term Contains, or Relevancy Ranked (which is the default). For example, if we entered pain management
and then clicked on Browse, we would be shown the major subject headings relating to pain
management; we could then search the database for any of the listed subject codes.

TIP Note that the keywords we used to illustrate this simplified search (pain management, child,
nurse) would not be adequate for a comprehensive retrieval of studies relevant to our review
question. For example, we would want to search for several additional terms (e.g., pediatric).

The MEDLINE Database and PubMed
The MEDLINE database was developed by the U.S. National Library of Medicine and is widely
recognized as the premier source for bibliographic coverage of the biomedical literature. MEDLINE
covers about 5,600 medical, nursing, and health journals published in about 70 countries and contains
more than 28 million records dating back to the mid- 1940s. In 1999, abstracts of systematic reviews from
the Cochrane Collaboration became available through MEDLINE.
The MEDLINE database can be accessed through a commercial vendor, but it can be accessed for free
through the PubMed website (http://www.ncbi.nlm.nih.gov/PubMed). This means that anyone,
anywhere in the world with Internet access can search for journal articles, and thus PubMed is a lifelong
resource. PubMed has excellent tutorials, including a 30- minute tutorial specifically for nurses (PubMed
for Nurses). PubMed includes all references in the MEDLINE library plus additional references, such as
those that have not yet been indexed.
On the Home page of PubMed, you can launch a basic search that looks for your keywords in text fields
of the record. PubMed, like CINAHL, has an autocomplete feature that offers suggestions as you begin
to enter your terms.

TIP On the PubMed home page, you can also launch a Clinical Query search, which is a
particularly useful tool for searching for evidence in the context of an EBP inquiry. Supplement A
to this chapter provides guidance for undertaking such a clinical query.

MEDLINE uses a controlled vocabulary called MeSH (Medical Subject Headings) to index articles.
Indexers assign as many MeSH headings as appropriate to cover content and features of the article—
typically 5 to 15 codes. You can learn about relevant MeSH terms by clicking on the “MeSH database”
link on the Home page (under the heading More Resources). If, for example, we searched the MeSH
database for “pain,” we would find that Pain is a MeSH subject heading (a definition is provided) and
there are 60 related categories—for example, “Cancer pain,” “Back pain,” and “Headache.” Each
category has numerous subheadings, such as “Complications,” “Etiology,” and “Nursing.”
If you begin with a keyword search, you can see how your term mapped onto MeSH terms by looking
in the right- hand panel for a section labeled Search Details. For example, if we entered “children” as our
keyword in the search field of the initial screen, Search Details would show us that PubMed searched
for all references that have “child” or “children” in text fields of the database record, and it also searched
for all references that had been coded “child” as a subject heading because “child” is a MeSH subject
heading.
If we did a PubMed search of MEDLINE similar to the one we described earlier for CINAHL, we would
find that a simple search for pain management would yield about 102,000 records; a search for pain
management AND child AND nurse would yield nearly 700 records. We can place restrictions on the
search using filters that are shown in the left sidebar of the screen. If we limited our search to entries
with abstracts, wri�en in English, and published in 2000 or later, the search would yield about 450
records. Thus, the PubMed search yielded more references than the CINAHL search, in part because
MEDLINE indexes more journals; another factor, however, is that in PubMed we could not limit the
search to research articles because PubMed does not have a generic category that distinguishes research
articles from nonresearch articles.

TIP Here are the Search Details (the strategy and syntax) for the PubMed search just described:
(“pain management”[MeSH Terms] OR (“pain”[All Fields] AND “management”[All Fields]) OR
“pain management”[All Fields]) AND (“child”[MeSH Terms] OR “child”[All Fields]) AND
(“nurses”[MeSH Terms] OR “nurses”[All Fields] OR “nurse”[All Fields]) AND (hasabstract[text]
AND (“2000/01/01”[PDAT]: “3000/12/31”[PDAT]) AND English[lang])
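For readers comfortable with scripting, much the same query can be submitted to PubMed programmatically through NCBI's public E-utilities service. The sketch below assumes Python with the third-party requests package; the endpoint and parameters come from E-utilities, but the example itself is ours and the query string is abbreviated from the one above.

    import requests

    # NCBI E-utilities "esearch" endpoint, queried against the PubMed database
    url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    params = {
        "db": "pubmed",
        "term": '"pain management" AND child AND nurse AND English[lang]',
        "retmode": "json",  # ask for JSON rather than the default XML
        "retmax": 20,       # maximum number of record IDs to return
    }
    data = requests.get(url, params=params, timeout=30).json()

    # The response includes the total hit count and a list of PubMed IDs
    print(data["esearchresult"]["count"])
    print(data["esearchresult"]["idlist"])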

From the Search Results page, we would then click on links to the citations that suggest a relevant
article; this would bring up a new screen that provides the abstract for the article and further details.
Figure 5.4 shows the full citation and abstract for the same study we located earlier in CINAHL. Beneath
the abstract, the display presents the MeSH terms that were indexed for this study. (Those marked with
an asterisk, such as Pain Management/nursing, are MeSH subject headings that are a major focus of the
article). As you can see, the MeSH terms are different from the subject headings for the same article in
CINAHL. As with CINAHL, you can click on highlighted record entries (authors’ names and MeSH
terms) for possible new leads.

FIGURE 5.4 Example of a record from a PubMed search.

(Abstract reprinted with permission from He H.G., Klainin-Yobas P., Ang E., Sinnappan R., Pölkki T., & Wang W. (2015).
Nurses’ provision of parental guidance regarding school-aged children’s postoperative pain management: A descriptive
correlational study. Pain Management Nursing, 16, 40–50.)

In the right panel of the screen for specific PubMed records, there is a list of Similar Articles, which is a
useful feature once you have found a study that is a good exemplar of what you are seeking. Further
down in the right panel, PubMed provides a list of any articles in the PubMed Central database that had
cited this study. PubMed Central is a repository for full- text articles, so you could immediately
download any of the articles that appeared in this list. You can also save articles that look pertinent to
your review by clicking the button “Add to Favorites” at the top of the right panel.
A useful feature of PubMed is that it provides access to new research by including citations to
forthcoming articles in many journals. The records for these not yet published articles have the tag
“Epub ahead of print.” McKeever et al. (2015) offer further suggestions for using PubMed for doing an
exhaustive literature review.

TIP Searching for qualitative studies can pose special challenges. Wilczynski et al. (2007)
described optimal search strategies for qualitative studies in the CINAHL database. Flemming and
Briggs (2006) compared three alternative strategies for finding qualitative research.

Google Scholar
Launched in 2004, Google Scholar (GS) has become an increasingly popular bibliographic search engine.
GS includes articles in journals from scholarly publishers in all disciplines, as well as scholarly books,
technical reports, and other documents. GS is accessible free of charge over the Internet. Like other
bibliographic search engines, GS allows users to search by topic, by a title, and by author and uses
Boolean operators and other search conventions. Like PubMed and CINAHL, GS has a Cited By feature
for a descendancy search and a Related Articles feature to locate other sources with relevant content to an
identified article. Because of its expanded coverage of material, GS can provide access to many free
full-text publications.
Unlike other scholarly databases, GS does not order the retrieved references by publication date (i.e.,
most recent ones first). The ordering of records in GS is determined by an algorithm that puts most
weight on the number of times a reference has been cited; this in turn means that older references are
usually earlier on the list. Another disadvantage of GS is that the search filters are fairly limited.
In the field of medicine, GS has generated considerable controversy, with some arguing that it is of
similar utility and quality to popular medical databases (Gehanno et al., 2013), and others urging
caution in depending primarily on GS (e.g., Boeker et al., 2013; Bramer et al., 2013). Some have found
that for quick clinical searches, GS returns more citations than PubMed (Shariff et al., 2013). The
capabilities and features of GS may improve in the years ahead, but at the moment, it may be risky to
depend on GS exclusively. For a full literature review, we think it is best to combine searches using GS
with searches of other databases. We note, however, that GS has been of particular interest in efforts to
retrieve the so-called grey literature (Haddaway et al., 2015).

TIP For most reviews, other resources beyond bibliographic databases should be considered.
Other sources include government reports, clinical trial registries (e.g., ClinicalTrials.gov), and
records of studies that are in progress such as in NIH RePORTER, which is a searchable database
of biomedical projects funded by the U.S. government.

Screening and Gathering References
Screening references for relevance is a multiphase process. The first screen is the title of the article itself.
For example, suppose our study question was the one we presented earlier: Among nurses working in
hospitals, what characteristics of the nurses or their practice settings are associated with their management of
children’s pain? The PubMed search for pain management AND child AND nurse yielded about 450
references in PubMed. The title of one article identified in this search was “Nurses’ perceptions of caring
for childbearing women who misuse opioids.” Based on this title, we could conclude that this article
(which was retrieved because the name of the journal in which it was published included the word
“Child,” one of our keywords) would provide no evidence about factors influencing nurses’ pain
management with children.
Once this initial screening is completed and the various search lists are also purged of duplicates, we
would then examine the abstracts of the remaining references. When there is no abstract, or when the
abstract is ambiguous as to its relevance to your review, it is usually necessary to screen the full article.
During the screening, keep in mind that some articles judged to be not relevant for your primary
question may be useful for a secondary question.
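Purging duplicates across database exports is usually done by matching on a unique identifier such as the DOI, or a normalized title when no DOI is available. A minimal Python sketch, with invented records and DOIs, follows:

    # Records exported from two searches (invented for illustration)
    cinahl_hits = [
        {"doi": "10.1000/example.0", "title": "Nurses' guidance on postoperative pain"},
        {"doi": "10.1000/example.1", "title": "Pain assessment in pediatric wards"},
    ]
    pubmed_hits = [
        {"doi": "10.1000/example.0", "title": "Nurses' guidance on postoperative pain"},
        {"doi": "10.1000/example.2", "title": "Distraction for procedural pain"},
    ]

    seen = set()
    unique_records = []
    for record in cinahl_hits + pubmed_hits:
        key = record["doi"].lower()
        if key not in seen:  # keep only the first copy of each DOI
            seen.add(key)
            unique_records.append(record)

    print(len(unique_records))  # 3 unique records out of 4 retrieved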
The next step is to retrieve the full text of references you think may have value for your review. If you
are affiliated with an institution, you may have online access to most full- text articles, which you should
download and file. If you are not so fortunate, more effort will be required to obtain the full- text articles.
Consulting with a librarian is a good strategy.
The open-access journal movement is gaining momentum in healthcare publishing. Open-access
journals provide articles free of charge online, regardless of any institutional subscriptions. Some
journals have a hybrid format in which most articles are not open-access but some individual articles are
designated as open-access. Bibliographic databases indicate which articles are open-access, and for these
articles, the full text can be retrieved by clicking on a link. (In PubMed, the link to click on states “Free
Article” or “Free PMC article.”)

TIP We provide links to open- access articles with content relevant to each chapter of this book in
the Toolkit of the accompanying Resource Manual.

When an article is not available to you online, you may be able to access it by communicating with the
lead author. Bibliographic databases usually provide an email address for lead authors. Another
alternative is to go to scholarly collaboration network (SCN) websites such as ResearchGate or Academia.edu
and do a search for a particular author. Authors sometimes upload articles onto their profile for access
by others. If an article has not been uploaded, these network sites provide a mechanism for you to send
the author a message so that you can request an article to be sent to you directly.

Documentation in Literature Retrieval
If your goal is to “own” the literature, you will be using a variety of databases, keywords, subject
headings, authors’ names, and search strategies in an effort to pursue all leads. As you meander
through the complex world of research information, you will likely lose track of your efforts if you do
not document your actions from the outset.
It is advisable to use word processing, spreadsheet, or reference manager software to record your search
strategies and search results. You should make note of information such as names of the databases
searched; limits put on your search; specific keywords, subject headings, or authors used to direct the
search; studies used to inaugurate a “Related Articles” or “descendancy” search; websites visited; links
pursued; authors contacted to request further information or copies of articles not readily available; and
any other information that would help you keep track of what you have done—including information
about the dates your searches were undertaken. Part of your strategy usually can be documented by
saving your search history from bibliographic databases. Completing a flow chart such as the one
shown in Figure 5.2 is recommended if your goal is to publish a free-standing review.
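One convenient form of documentation is a running log with one row per search. The column choices in this Python sketch are our own suggestion, not a required format:

    import csv
    from datetime import date

    # One row per search: where, when, what, and with what result
    log_row = {
        "database": "CINAHL",
        "date": date.today().isoformat(),
        "search_string": "pain management AND child AND nurse",
        "limits": "research articles; abstracts; 2000-present",
        "records_retrieved": 160,
        "notes": "results exported for screening",
    }

    with open("search_log.csv", "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(log_row.keys()))
        if f.tell() == 0:  # write the header row only if the file is new
            writer.writeheader()
        writer.writerow(log_row)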
By documenting your actions, you will be able to conduct a more efficient search—that is, you will not
inadvertently duplicate a strategy you have already pursued. Documentation will also help you to
assess what else needs to be tried—where to go next in your search. Finally, documenting your efforts is
a step in ensuring that your literature review is reproducible.

Extracting and Recording Information
Once you have a set of useful source materials, you need a strategy for making sense of the information.
If a literature review is fairly simple, it may be sufficient to jot down notes about key features of the
studies under review and to use these notes as the basis for the synthesis.
Many literature reviews are sufficiently complex that a systematic process for extracting and recording
information must be developed. In the past, researchers used paper- based data extraction forms to
record information about each reference. The use of word processing or spreadsheet software is
advantageous, however, because then the forms can be easily searched and sorted. We call them data
extraction forms because, in a review, the “data” are the information from each study. The data extraction
forms are the critical bridge between the information in the original research reports and the synthesis
of evidence by reviewers.
An approach that is gaining in popularity is the creation of two-dimensional data collection forms
(matrices or evidence summary tables) in which rows are used for individual studies and columns are
used to insert relevant data about each study, such as sample characteristics, methodologic features, and
results. Two-dimensional tables can provide insights into important “themes” in the data across studies.
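As a concrete illustration, such a matrix can be maintained as rows of study data and written to a spreadsheet-readable file. The column names below are a small invented subset of possible extraction elements, and the single row echoes the He et al. (2015) example used throughout this chapter:

    import csv

    # Each row describes one study; columns are the extraction elements
    columns = ["author", "year", "country", "design", "sample_size", "sampling"]
    studies = [
        {"author": "He et al.", "year": 2015, "country": "Singapore",
         "design": "Descriptive correlational", "sample_size": 134,
         "sampling": "Convenience"},
        # ...one dictionary per additional study in the review
    ]

    with open("evidence_summary.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=columns)
        writer.writeheader()
        writer.writerows(studies)

    # The matrix can then be sorted electronically, e.g., by year
    by_year = sorted(studies, key=lambda s: s["year"])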

Information to Extract
It is wise to record key information for each study in a systematic way. Regardless of what approach is
used to record data, reviewers should decide in advance what information about each study is
important. The key elements will vary from one review to the next, but you should have, as a goal, the
creation of a file in which each study in the review is abstracted for a consistent set of features.
Box 5.1 presents a list of some elements that could be considered for your data extraction forms. Not all
of these elements are needed for each review, and for other reviews additional elements are likely to be
needed. Although many terms in this table may not be familiar to you yet, you will learn about them in
later chapters.
Once you have decided on the elements you wish to use in your data extraction form, you should pilot
test it with a sample of studies. If you discover later in the extraction process that other elements are
needed, you would have to go back to every completed article to retrieve the new information.

TIP We encourage the use of two- dimensional data extraction forms, but if you prefer using a
separate form to extract information for each study, an example is provided as a Word document
in the Toolkit for this chapter that you can adapt.

Coding the Studies for Key Variables
In systematic reviews, the review team almost always develops coding systems to support statistical
analyses of study findings. Coding may not be necessary in less formal reviews, but coding certain
elements can be helpful in organizing the review, and so we offer some suggestions and an example.
We find it useful to code study findings for key variables (quantitative) or themes (qualitative). In our
earlier example about factors affecting nurses’ management of children’s pain, nurses’ characteristics
are the independent variables and nurses’ pain management behaviors are the dependent variables. By
reading the retrieved articles, we find that several characteristics have been studied—nurses’ knowledge
about pain management, their nursing experience, demographic characteristics, and so on. We can
assign codes to each type of factor. With regard to the dependent variable—nurses’ pain management
behaviors—some studies have focused on nurses’ pain assessment, others have examined nurses’ use of
nonpharmacologic methods of pain relief, and so on. These outcome categories can also be coded. An

example of a coding scheme is presented in Box 5.2—there are eight independent variable categories
and five outcome categories.
The results of each study can then be coded. You can record these codes in data extraction forms, but we
think it is also useful to note the codes in the margins of the articles themselves, so you can easily find
the information. Figure 5.5, which presents an excerpt from the results section of the study by He et al.
(2015), shows marginal coding of key variables. In this excerpt, we see that the researchers reported that
nurses’ guidance to parents about pain management (Code E) varied by the nurses’ age (Code 4),
whether or not they had children of their own (Code 4), and their perceived knowledge about methods
of pain relief (Code 1).

FIGURE 5.5 Coded excerpt from the Results section of a research article: nurses’ management of children’s pain example.
The codes in the margin, which here were entered as a comment on the pdf file, correspond to the codes explained in Box
5.2. Supplement B discusses this excerpt and why additional codes would be required.

(Excerpt reprinted with permission from He H.G., Klainin-Yobas P., Ang E., Sinnappan R., Pölkki T., & Wang W. (2015).
Nurses’ provision of parental guidance regarding school-aged children’s postoperative pain management: A descriptive
correlational study. Pain Management Nursing, 16, 40–50.)

When reviews are more sharply focused than the one we have used as an example, coding may not be
necessary or codes that are more fine- tuned could be used. For example, if our research question
focused explicitly on nurses’ use of nonpharmacologic methods of pain relief (not about use of
analgesics or pain assessment), the outcome categories might be specific nonpharmacologic approaches,
such as distraction, guided imagery, music, massage, and so on. The point is to use codes to organize
information in a way that facilitates retrieval and analysis. Further guidance on coding study findings is
offered in Supplement B.
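When codes are used, keeping the scheme as a simple lookup table helps apply it consistently. The Python sketch below includes only the codes confirmed in the worked example (Box 5.2 itself is not reproduced here), with labels paraphrased, and the data structure is our own:

    # Partial coding scheme (labels paraphrased from the example in the text)
    iv_codes = {
        1: "Perceived knowledge of pain relief methods",
        4: "Demographic characteristics (e.g., age, having own children)",
        # ...remaining independent variable codes from Box 5.2
    }
    outcome_codes = {
        "E": "Guidance to parents about pain management",
        # ...remaining outcome codes from Box 5.2
    }

    # Findings from He et al. (2015) as (outcome, predictor) code pairs
    findings = [("E", 4), ("E", 1)]
    for outcome, predictor in findings:
        print(f"{outcome_codes[outcome]} varied by: {iv_codes[predictor]}")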

Literature Review Summary Tables
As noted earlier, we recommend using two- dimensional tables (matrices) to extract and record
information from the source documents because such tables directly support a thematic analysis of the
retrieved evidence. For some literature reviews—for example, in a dissertation—such tables are
sometimes included directly in the written product. In other words, these tables can serve not only as a
data extraction tool, but also as a display of critical information in complex reviews.

As Box 5.1 suggests, the list of potential elements to be extracted from each study can be long. With
two- dimensional tables for recording the extracted data, it may be advantageous to create multiple data
extraction forms, so that the information can be conveniently displayed on your computer screen
without having to scroll right and left. For example, separate forms can be used for source information,
methods used, results, and evaluation.
Table 5.1 presents an example of one such matrix for extracting methodologic features of studies in a
review. Such tables can be created in word processing or spreadsheet software. This table only shows
one illustrative entry: the study by He et al. (2015), whose CINAHL and PubMed records were shown in
Figures 5.3 and 5.4. Complete evidence summary tables would have a row for each study in the review.
These tables can be electronically searched and sorted and re- sorted (e.g., by authors’ names, year of
publication, level of evidence, etc.). Although we have only included one entry in this table as an
illustration, if this table listed 10 to 15 studies, we would be able to tell at a glance when and where the
studies had been done, what sampling methods had been used, and so on. The scrutiny of such tables
can tell us not only what has been done but can also point to gaps or problems—for example,
overreliance on nurses’ self- reported pain management strategies rather than direct observation of
nurses’ behaviors.
Supplement B to this chapter provides additional guidance about the use of evidence summary tables,
together with more complete examples.

Critical Appraisal of the Evidence
In drawing conclusions about a body of research, reviewers must record not only factual information
about studies—methodologic features and findings—but must also make judgments about the value of
the evidence. This section discusses issues relating to the appraisal of studies in the review.

TIP A distinction is sometimes made between a research critique and a critical appraisal. The latter
term is favored by those focusing on the evaluation of evidence for nursing practice. The term
critique is more often used when individual studies are being evaluated for their scientific merit—
for example, when a manuscript is reviewed by two or more peer reviewers who make
recommendations about publishing the paper, or when a person is preparing a literature review. In
both critiques and appraisals, however, the point is to apply knowledge about research methods,
theory, and substantive issues to draw conclusions about the validity and relevance of the findings.

Appraisals of Individual Studies
As traditionally defined, a research critique is an appraisal of the strengths and weaknesses of a study.
A good critique identifies areas of adequacy and inadequacy in an unbiased manner. Literature reviews
mainly concern the evaluation of a body of research evidence for a literature review, but we briefly offer
some advice about appraisals of individual studies.
We provide support for the critical appraisal of individual studies in several ways. First, suggestions for
appraising relevant aspects of a study are included at the end of each chapter. Second, it can be
illuminating to have a good model, and so Appendixes H and I of the accompanying Resource Manual
include comprehensive appraisals of a quantitative study and a mixed methods study.
Third, we offer a set of key critical appraisal questions in this chapter, in Box 5.3 (quantitative studies)
and Box 5.4 (qualitative studies). The second column in these two boxes lists appraisal questions, and
the third column cross- references the more detailed appraisal guidelines in other chapters. Many
questions may be too difficult for you to answer at this point, but your methodologic and appraisal
skills will improve as you progress through this book. The questions in these two boxes are relevant for
a rapid critical appraisal that would be conducted as part of an EBP effort, as well as for appraisals for a
literature review.
A few comments about these guidelines are worth noting. First, the questions in Boxes 5.3 and 5.4
mainly concern the rigor with which the researchers conducted their research. For example, there are no
questions regarding ethical issues because—while extremely important—the researchers’ handling of
ethical concerns is unlikely to affect evidence quality.
Second, the questions in these two boxes call for a yes or no answer (although for some, the answer may
be “Yes, but…”). In all cases, the desirable answer is “yes.” A “no” suggests a possible limitation, and a
“yes” suggests a likely strength. Therefore, the more “yeses” a study gets, the stronger its evidence is
likely to be. These questions can thus cumulatively suggest a global assessment: a report with 10 “yeses”
is likely to be superior to one with only 4.
Our simplified guidelines have shortcomings. In particular, they are generic despite the fact that
appraisals cannot use a one- size- fits- all list of questions. Some questions that are relevant to, say, clinical
trials do not make sense for descriptive studies. Thus, you need to use some judgment about whether
the guidelines are appropriate in your situation.
Finally, there are questions in these guidelines for which there are no objective answers. Even experts
sometimes disagree about what are the best methodologic strategies for a study.

TIP Students may be asked to critically appraise a study to document their mastery of research
concepts. Such appraisals may be expected to be comprehensive, covering substantive, theoretical,
ethical, methodologic, and interpretive aspects. The Toolkit for this chapter offers more detailed
lists of questions than are included in Boxes 5.3 and 5.4 for such comprehensive appraisals.

Evaluating a Body of Research
In reviewing the literature, you would not undertake a comprehensive critical appraisal of each study—
but you would need to evaluate the evidence quality in each study so that you could aggregate
appraisals across studies to draw conclusions about the overall body of evidence.
In preparing a literature review for a new study, the studies under review need to be assessed with an
eye to answering some broad questions. First, to what extent do the cumulative findings accurately
reflect the truth or, conversely, to what extent do methodologic flaws undermine the credibility of the
evidence? Another important question to consider is: For which types of people does the evidence apply
—that is, for whom is the evidence applicable?
The use of literature review matrices, as described in Supplement B, supports the analysis and
evaluation of multiple studies. For example, if there is a column for sample size in the matrix (as in
Table 5.1), one could readily see at a glance whether, for example, a lot of the evidence is from studies
with small, unrepresentative samples.

TABLE 5.1
Example of an Evidence Summary Table for Methodologic Features of Relevant Studies

Author: He et al.
Year: 2015
Country: Singapore
Dependent Variables (With Codes)a: E. Nurses’ provision of information regarding nonpharmacologic methods of pain management
Independent Variables (With Codes)a: 1. Perceived knowledge of nonpharmacologic pain relief methods; 2. Nursing experience; 3. Demographic (age, education, having own child); 4. Nurses’ role (staff nurse vs. more senior)
Study Design: Descriptive correlational, cross-sectional
Level of Evidenceb: V
Sample Size, Characteristics: 134 RNs in 7 pediatric wards of 2 hospitals
Child Age: School-aged
Sampling Method: Convenience
Data Collection Method: Questionnaire

aThe codes for the independent and dependent variables are shown in Box 5.2.
bFor this table, levels from the evidence hierarchy presented in Figure 2.2 in Chapter 2 were used—although this hierarchy is appropriate
primarily for Therapy questions. Alternative hierarchies for different types of questions are described in Chapter 9.

TIP Formal systems for grading a body of evidence have been developed and will be discussed in
the chapter on systematic reviews (Chapter 30).

Analyzing and Synthesizing Information
Once all the relevant studies have been retrieved, read, abstracted, and appraised, the information has
to be analyzed and integrated. A literature review is not simply a summary of each previous study—it
is a synthesis that features important patterns. As previously noted, doing a literature review is similar
to doing a qualitative study, particularly with respect to the analysis of the data, which in this case is the
information from the retrieved studies. In both, the focus is on identifying important themes.
A thematic analysis essentially involves detecting regularities, as well as inconsistencies and “holes.”
Several different types of themes can be identified, as described in Table 5.2. The reason we recommend
using literature review summary tables can be seen by reading the list of possible themes and questions:
it is easier to discern patterns by reading down the columns of the matrices than by flipping through a
file of review forms or skimming through articles.

TABLE 5.2
Thematic Possibilities for a Literature Review

Substantive: What does the pattern of evidence suggest? How much evidence is there? How consistent is the body of evidence across studies? How powerful are observed effects? How persuasive is the evidence? Has the clinical significance of the findings been assessed? What gaps are there in the body of evidence?

Methodologic: What types of research designs or approaches have predominated? What level of evidence is typical? What populations have been studied? Have certain groups been omitted from the research? What data collection methods have been used primarily? Are data typically of high quality? Overall, what are the methodologic strengths and deficiencies?

Theoretical: Which theoretical frameworks have been used—or has most research been atheoretical? How congruent are the frameworks?

Generalizability/transferability: To what types of people and settings do the findings apply? Do findings vary for different types of people or settings?

Historical: Have there been substantive, methodologic, or theoretical trends over time? Is evidence getting better? When was most research conducted?

Researcher: Who has been doing the research, in terms of discipline, specialty area, and nationality? Do any of the researchers have a systematic program of research devoted to this topic?

Clearly, it is not possible—even in lengthy free-standing reviews—to address all the questions in Table
5.2. Reviewers must decide which patterns to pursue. In preparing a review as part of a new study, you
would need to determine which pattern is of greatest relevance for developing an argument and
providing a context for the new research.

Preparing a Written Literature Review
Writing literature reviews can be challenging, especially when voluminous information must be
condensed into a few pages, as is typical for a journal article or proposal. We offer a few suggestions but
acknowledge that skills in writing literature reviews develop over time.

Organizing the Review
Organization is crucial in a written review. Having an outline helps to structure the narrative's flow. If
the review is complex, we recommend a written outline. The outline should list the main topics or
themes to be discussed and the order of presentation. The important point is to have a plan before
starting to write so that the review has a coherent progression of ideas. The goal is to structure the
review in such a way that the presentation is logical, demonstrates meaningful thematic integration, and
leads to a conclusion about the state of evidence on the topic.

Writing a Literature Review
It is beyond the scope of this book to offer detailed guidance on writing research reviews, but we offer a
few comments on their content and style. Additional assistance is provided in books such as the ones by
Fink (2020) and Galvan and Galvan (2017).

Content of the Written Literature Review
A written research review should provide readers with an objective, organized synthesis of evidence on
a topic. A review should be neither a series of quotes nor a series of abstracts. The central tasks are to
digest and critically evaluate the overall evidence so as to reveal the current state of knowledge—not
simply to describe what researchers have done.
Although key studies may be described in some detail, it is seldom necessary to provide particulars for
every reference. Studies with comparable findings often are summarized together.

Example of Grouped studies
Kayser et al. (2019) summarized findings from several studies in their introduction to a study of
predictors of hospital- acquired pressure injuries: “In a review of 54 studies examining risk factors
of pressure injuries…as many as 200 significant risk factors were identified (Coleman et al., 2015)…
Examples of indirect risk factors studied include: incontinence, age, nutrition, diabetes, and
vasopressor therapy.”

The review should demonstrate that you have considered the cumulative worth of the body of research.
The review should be as objective as possible. Studies that are at odds with your hypotheses should not
be omitted, and the review should not ignore a study because its findings contradict other studies.
Inconsistent results should be analyzed for insights into factors that might have led to discrepancies.
A literature review typically concludes with a concise summary of evidence on the topic and any gaps
in the evidence. If the review is undertaken for a new study, this critical summary should demonstrate
the need for the research and should clarify the basis for any hypotheses.

TIP As you progress through this book, you will acquire proficiency in critically evaluating
studies. We hope you will understand the mechanics of doing a review after reading this chapter,
but you probably will not be ready to write a state- of- the- art review until you have gained more
skills in research methods.

Style of a Research Review
Students preparing their first written research review often struggle with stylistic issues. Students
sometimes accept research findings uncritically, perhaps reflecting a common misunderstanding about
the conclusiveness of research. You should keep in mind that hypotheses cannot be proved or disproved
by empirical testing, and no research question can be answered definitively in a single study. The issue
is partly semantic: hypotheses are not proved; they are supported by research findings.

TIP When describing study findings, you should use phrases suggesting that results are tentative,
such as the following:

Several studies have found…
Findings thus far suggest…
The study results support the hypothesis that…
There appears to be good evidence that…

A related stylistic problem is the interjection of opinions into the review. The review should include
opinions sparingly and should be explicit about their source. Reviewers’ opinions do not belong in a
literature review, except for assessments of study quality.

TIP The Toolkit for this chapter in the accompanying Resource Manual includes a table with
examples of several stylistic flaws, and suggests possible rewordings.

Critical Appraisal of Research Literature Reviews
We conclude this chapter with some advice about appraising a literature review. It is often difficult to
critique a research review because the author is almost invariably more knowledgeable about the topic
than the readers. It is not usually possible to judge whether the author has included all relevant
literature—although you may have suspicions if none of the citations are to recent articles. Several
aspects of a review, however, are amenable to evaluation by readers who are not experts on the topic.
Some suggestions for appraising written research reviews are presented in Box 5.5. (These questions
could be used to review your own literature review as well.)
In assessing a literature review, the key question is whether it summarizes the current state of research
evidence adequately. If the review is written as part of an original research report, an equally important
question is whether the review lays a solid foundation for the new study.

Research Examples of Literature Reviews
The best way to learn about the style and organization of a research literature review is to read reviews
in nursing journals. We present excerpts from two reviews that were part of the introduction to journal
articles about original studies.a

Literature Review From a Quantitative Research Report

Study: Evaluation of a person- centered, theory- based intervention to promote health behaviors
(Worawong et al., 2018).
Statement of purpose: The purpose of this study was to test the effect of a behavioral, person- centered
intervention (I) on physical activity and fruit and vegetable intake (Os) in community- living adults (P).
Literature review (excerpt): “Although many researchers have tested intervention effects on health
behaviors, scholars continue to be challenged to develop stronger behavioral interventions to improve
individuals’ health (Desroches et al., 2013)… Scholars have tried to promote health behaviors, such as
diet and activity, by focusing individuals on the need to prevent or minimize chronic illnesses (e.g.,
diabetes, Estabrooks et al., 2005; Guo, Chen, Whittemore, & Whitaker, 2016; or cardiovascular disease
[CVD], Edelman et al., 2006; Parra-Medina et al. 2011; Sniehotta, Scholz, & Schwarzer, 2006). These
approaches rest on the assumptions that individuals (a) value prevention highly, (b) perceive
susceptibility to disease or its consequences, (c) perceive health behaviors as feasible, and (d) appreciate
the connection between behaviors and clinical outcomes. However, these assumptions are not often
valid as explained below.
People’s motives for health behaviors can differ from those of researchers and clinicians. People can
perceive the distant risk of disease as less salient than their other life goals and may not initiate or
sustain health behaviors (Carpenter, 2010; Segar, Eccles, & Richardson, 2008; Teixeira et al., 2012). Based
on a systematic review, people engage in health behaviors to meet various proximal, short- term goals
more so than to prevent a distal outcome such as disease (Rhodes, Quinlan, & Mistry, 2016). People may
engage in physical activity or healthy eating to alter their moods in the short term or to look better in the
long term (Bowen, Balbuena, Baetz, & Schwartz, 2013; Lauver, Worawong, & Olsen, 2008).
Thus, health behavior interventions could be strengthened by making them more patient- centered. This
would involve customizing interventions on people’s choices of health behaviors and on their motives,
preferences, values, goals, beliefs, characteristics, or needs (Morgan & Yoder, 2012; Rhodes et al., 2016).
Patient- centered interventions can be motivational and efficacious for improving diet, activity, and
clinical status in the longer term (Greaves et al., 2011; Teixeira et al., 2012).
To strengthen behavioral interventions, researchers have tried to identify key components of successful
dietary and activity interventions (Desroches et al., 2013; Pomerleau, Lock, Knai, & McKee, 2005). For
example, interventions delivered face- to- face have been more efficacious than those without face- to- face
contact on physical activity… and subsequent cardiovascular fitness… (Richards, Hillsdon, Thorogood,
& Foster, 2013), as well as on maintenance of diet and activity behaviors (Fjeldsoe, Neuhaus, Winkler, &
Eakin, 2011). Researchers need to identify what other components can contribute to interventions that
are efficacious, feasible, acceptable, and cost- effective (Dombrowski, O’Carroll, & Williams, 2016;
Teixeira et al., 2012).”
(Excerpt reprinted with permission from Worawong C., Borden M. J., Cooper K., Perez O., & Lauver D.
(2018). Evaluation of a person-centered, theory-based intervention to promote health behaviors. Nursing
Research, 67, 6–15.)

Literature Review From a Qualitative Research Report

Study: Understanding advanced prostate cancer decision- making utilizing an interactive decision tool
(Jones et al., 2018)
Statement of purpose: The purposes of this study were to describe and understand the lived
experiences of patients with advanced prostate cancer and their decision partners who used an
interactive decision aid (DecisionKEYS) in making informed, shared treatment decisions.

Literature review (excerpt): “Prostate cancer is the most commonly diagnosed cancer in men and the
second leading cause of cancer deaths in the United States. In 2016, an estimated 180,890 men will be
diagnosed with prostate cancer, and approximately 26,120 men will die of the disease (American Cancer
Society, 2016). In a lifetime, approximately 14% of all men will be diagnosed with prostate cancer
(National Cancer Institute, 2016)…
There are numerous difficult decisions that patients with advanced prostate cancer must make,
including treatment options, cost of care, and family involvement; however, over time, patients with
advanced cancer often regret some past decisions (Brom et al., 2015; Christie et al., 2015; Mahal et al.,
2015). Many factors may increase the likelihood that patients will not have complete information at the
time it is needed in order to optimize decision making, for example, time constraints, forgetting to ask
questions, and provider- patient miscommunication (Hillen et al., 2011; Lu et al., 2011; Shay & Lafata,
2015; Woods et al., 2013)…
Many patients with advanced prostate cancer struggle with treatment decisions… If patients and
healthcare providers fail to engage in a systematic, informed, shared decision- making process (a
collaborative process whereby patient and healthcare provider make a healthcare decision together,
taking into account scientific/clinical evidence and the patient’s/decision partner’s values and
preferences), there is a greater chance that the patient will be dissatisfied and regretful regarding the
decisions that were made (Mahal et al., 2015; Poon, 2012; Weeks et al., 2012). Moreover, decision
partners may become ‘proxies’ in interactions with healthcare providers, but they often misunderstand
the patient’s informational and decision needs (Longo & Slater, 2014).
Decision aids can help patients apply specific health information while actively participating in
health-related decision making (O'Connor et al., 2009; Stacey et al., 2014)… Decision aids are most effective
when they are tailored, interactive, collaborative, and focused on the priorities of the individual patient
(Fowler et al., 2011; Jimbo et al., 2013; Ozanne et al., 2014; Sepucha et al., 2013; Stacey et al., 2014) but
interactive decision aids are rarely implemented (Jimbo et al., 2013).”
(Excerpt reprinted with permission from Jones R., Hollen P., Wenzel J., Weiss G., Song D., Sims T., &
Petroni G. (2018). Understanding advanced prostate cancer decision making utilizing an interactive
decision aid. Cancer Nursing, 41, 2–10.)

Summary Points

A research literature review is a written synthesis of evidence on a research problem.
Major steps in preparing a written research review include formulating a question, devising a search strategy,
developing a plan to organize and document review activities, conducting a search, screening and retrieving
relevant sources, extracting key data from the sources, appraising studies, analyzing aggregated information for
important themes, and writing a synthesis.
Research articles are the major focus of research reviews. Information in nonresearch references—e.g., case
reports, editorials—may broaden understanding of a research problem but has limited utility in summarizing
research evidence.
A primary source is the description of a study prepared by the researcher who conducted it; a secondary source
is a description of the study written by someone else. Literature reviews should be based on primary source
material.
Strategies for finding studies on a topic include the use of bibliographic databases, the ancestry approach (tracking
down earlier studies cited in a reference list of a report), and the descendancy approach (using a pivotal study to
search forward to subsequent studies that cited it.)
Electronic searches of bibliographic databases are a key method of locating references. For nurses, the CINAHL
and MEDLINE (via PubMed) databases are especially useful. Google Scholar is also a popular and free resource.
In searching a database, users can perform a keyword search that looks for searcher-specified terms in text fields
of a database record (or that maps keywords onto the database's subject codes), or they can search directly on the
subject heading codes themselves.
Access to many journal articles is becoming easier through online resources, especially for articles available in an
open-access format.
References must be screened for relevance, and then pertinent information must be extracted for analysis.
Two-dimensional evidence summary tables (matrices) facilitate the extraction and organization of data from the studies,
as does a good coding scheme.
A research critique (or critical appraisal) is a careful evaluation of a study’s strengths and weaknesses. Critical
appraisals for a research review tend to focus on the methodologic aspects and findings of retrieved studies.
The analysis of information from a literature search involves the identification of important themes—regularities
(and inconsistencies) in the information. Themes can take many forms, including substantive, methodologic, and
theoretic themes.
In preparing a written review, it is important to organize materials logically. The reviewers' role is to describe
study findings, the dependability of the evidence, evidence gaps, and (in the context of a new study)
contributions that the new study would make.

Study Activities
Study activities are available to instructors on .

Box 5.1 Information to Consider for Data Extraction in a Literature Review

Source

Citation
Contact details of lead author

Methods

Study design
Level of evidence

Research tradition (qualitative)
Longitudinal or cross- sectional
Methods of bias control (e.g., blinding)
Methods of enhancing trustworthiness (qualitative)

Participants

Number of participants
Power analysis information

Key characteristics of the sample
Age
Sex
Ethnicity/race
Socioeconomic
Diagnosis/disease
Comorbidities

Country
Method of sample selection
Attrition (percent dropped out)

Intervention/Independent variable(s)

Independent variable
Intervention or influence
Comparison

Number of (intervention) groups
Specific intervention (e.g., components of a complex intervention)
Intervention fidelity

Outcomes/Dependent variables

Outcomes (or phenomena in qualitative studies)
Time points for outcome data collection

For each key outcome:

Outcome definition
Method of data collection (e.g., self- report, observation)
Specific instrument (if relevant)

Reliability, validity information

Results

Qualitative: Summary of major themes
Quantitative: for each outcome of interest
Summary of results

Effect size
p values
Confidence intervals

Subgroup analyses

Evaluation

Major strengths
Major weaknesses
Overall quality rating

Other

Theoretical framework
Funding source
Key conclusions of the study authors

Broadly adapted from Table 7.3.a of the Cochrane Handbook for Systematic Reviews (Higgins & Green, 2011).
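An extraction form like Box 5.1 maps naturally onto a structured record. Below is a minimal sketch, a Python dataclass covering only a handful of the items above, with field names of our own choosing; storing one such record per study keeps extraction consistent across reports.

from dataclasses import dataclass, field
from typing import Optional

# A pared-down extraction record echoing a few Box 5.1 items.
@dataclass
class ExtractionRecord:
    citation: str
    study_design: str
    level_of_evidence: str
    n_participants: int
    country: str
    attrition_pct: Optional[float] = None
    major_strengths: list[str] = field(default_factory=list)
    major_weaknesses: list[str] = field(default_factory=list)

# Filling in one record, using details from the He et al. row of Table 5.1:
record = ExtractionRecord(
    citation="He et al. (2015)",
    study_design="descriptive correlational, cross-sectional",
    level_of_evidence="V",
    n_participants=134,
    country="Singapore",
)
print(record.citation, record.n_participants)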

Box 5.2 Substantive Codes for a Literature Review on Factors Affecting Nurses’ Management
of Children’s Pain

Codes for Nurse Characteristics Associated With Their Pain Management
Behavior (Independent Variables)

1. Nurses’ pain management knowledge or specialized pain training
2. Nurses’ years of nursing experience
3. Nurses’ pain attitudes and beliefs
4. Demographic nurse factors (e.g., age, sex, education, has own children)
5. Nurses’ role/credential/status (e.g., RN, CNS, APN, NP)
6. Other nurse factors (e.g., self- efficacy, personal experience with pain)
7. Organizational factors (e.g., nurses’ workload, organizational culture)
8. Participation in interventions to improve nurses’ pain management skills

Codes for Nurses’ Pain Management Behaviors (Dependent Variables)

A. Nurses’ assessment of children’s pain
B. Nurses’ pain management—general strategies
C. Nurses’ use of analgesics for pain management
D. Nurses’ use of nonpharmacologic methods of pain management
E. Provision of guidance to parents about managing their child’s pain
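To illustrate how such codes are used, here is a minimal sketch that stores the Box 5.2 labels (abbreviated) in two lookup tables and applies them to a study record; the coded study shown is invented for illustration.

# Box 5.2 codes stored as lookup tables (labels abbreviated).
independent_codes = {
    1: "pain management knowledge or specialized training",
    2: "years of nursing experience",
    3: "pain attitudes and beliefs",
    4: "demographic nurse factors",
    5: "role/credential/status",
    6: "other nurse factors",
    7: "organizational factors",
    8: "participation in skill-building interventions",
}
dependent_codes = {
    "A": "assessment of children's pain",
    "B": "general pain management strategies",
    "C": "use of analgesics",
    "D": "use of nonpharmacologic methods",
    "E": "guidance to parents",
}

# A hypothetical coded study record:
study = {"citation": "Hypothetical et al. (2021)", "iv": [1, 3], "dv": ["D"]}
for code in study["iv"]:
    print(f"IV {code}: {independent_codes[code]}")
for code in study["dv"]:
    print(f"DV {code}: {dependent_codes[code]}")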

Box 5.3 Guide to a Focused Critical Appraisal of Evidence Quality in a Quantitative Research
Report

Method

Research design (detailed guidelines: Box 9.1, page 201; Box 10.1, page 223; Box 31.1, page 720)
Was the most rigorous design used, given the purpose of the study?
What was the level of evidence for the type of question asked—and is this level the highest possible?
Were suitable comparisons made to enhance interpretability?
Was the number of data collection points appropriate? Was the period of follow-up (if any) adequate?
Did the design minimize threats to the internal validity of the study (e.g., was randomization and blinding used, was attrition minimized)?
Did the design enhance the external validity and applicability of the study results?
If there was an intervention, did it have a strong theoretical basis?

Population and sample (detailed guidelines: Box 13.1, page 274)
Was the population identified? Was the sample adequately described?
Was a good sampling design used to enhance the sample's representativeness of the population? Were sampling biases minimized?
Was the sample size adequate? Was a power analysis used?

Data collection and measurement (detailed guidelines: Box 14.1, page 291; Box 15.1, page 336)
Were key variables operationalized using the best possible methods (e.g., interviews, observations)?
Were clinically important and patient-centered outcomes measured?
Did the data collection methods yield data that were reliable, valid, and responsive?

Procedures (detailed guidelines: Box 9.1, page 201; Box 10.1, page 223)
If there was an intervention, was it rigorously developed and implemented? Did most participants allocated to the intervention group actually receive it?
Were data collected in a manner that minimized bias?

Results

Data analysis (detailed guidelines: Box 17.1, page 381; Box 18.1, page 408; Box 31.1, page 720)
Were appropriate and powerful statistical methods used? Did the analysis help to control for confounding variables?
Were Type I and Type II errors avoided or minimized?
Were subgroup analyses undertaken to better understand the applicability of the results to different types of people?

Findings (detailed guidelines: Box 17.1, page 381)
Were the findings adequately summarized? Was information about effect size and precision of estimates (confidence intervals) presented?
Were findings reported in a manner that facilitates a meta-analysis, and with sufficient information needed for EBP?

Discussion

Interpretation of the findings (detailed guidelines: Box 21.1, page 465)
Were interpretations consistent with the study's limitations?
Were causal inferences, if any, justified?
Was the clinical significance of the findings discussed?
Did the report address the generalizability and applicability of the findings?

Summary Assessment
Despite any limitations, do the study findings appear to be valid—do you have confidence in the truth value of the results?
Does the report inspire confidence about the types of people and settings for whom the evidence is applicable?

Box 5.4 Guide to a Focused Critical Appraisal of Evidence Quality in a Qualitative Research
Report

Method

Research design/research tradition (detailed guidelines: Box 22.1, page 490)
Is the identified research tradition congruent with the methods used to collect and analyze data?
Was an adequate amount of time spent with study participants?
Was there evidence of reflexivity in the design?

Sample and setting (detailed guidelines: Box 23.1, page 506)
Was the group or population of interest adequately described?
Were the setting and sample described in sufficient detail?
Was a good method of sampling used to enhance information richness?
Was the sample size adequate? Was saturation achieved?

Data collection (detailed guidelines: Box 24.1, page 526)
Were appropriate methods used to gather data? Were data gathered through two or more methods to achieve triangulation?
Were the data of sufficient depth and richness?

Procedures (detailed guidelines: Box 24.1, page 526)
Do data collection and recording procedures appear appropriate?
Were data collected in a manner that minimized bias?

Enhancement of trustworthiness (detailed guidelines: Box 26.1, page 580)
Did the researchers use effective strategies to enhance the trustworthiness/integrity of the study?
Was there "thick description" of the context, participants, and findings, and was it at a sufficient level to support transferability?
Do the researchers' methodologic and clinical experience enhance confidence in the study findings and interpretations?

Results

Data analysis (detailed guidelines: Box 25.1, page 553)
Was the data analysis strategy compatible with the research tradition and with the nature and type of data gathered?

Findings (detailed guidelines: Box 25.1, page 553)
Were findings effectively summarized, with good use of excerpts and strong supporting arguments?
Did the analysis yield an insightful, provocative, authentic, and meaningful picture of the phenomenon under investigation?

Theoretical integration (detailed guidelines: Box 25.1, page 553)
Were the themes or patterns logically connected to each other to form a convincing and integrated whole?

Discussion

Interpretation of the findings (detailed guidelines: Box 25.1, page 553)
Were the findings interpreted within an appropriate social or cultural context, and within the context of prior studies?
Were interpretations consistent with the study's limitations?
Did the report address the transferability and applicability of the findings?

Summary Assessment
Do the study findings appear to be trustworthy—do you have confidence in the truth value of the results?
Does the report inspire confidence about the types of people and settings for whom the evidence is applicable?

Box 5.5 Guidelines for Critically Appraising Literature Reviews

1. Is the review thorough—does it include all major studies on the topic? Does it include recent research (studies
published within the previous 1-3 years)? Are studies from other related disciplines included, if appropriate?
2. Does the review rely mainly on primary source research articles?
3. Is the review merely a summary of existing work, or does it critically appraise and compare key studies? Does the
review identify important trends and gaps in the literature?
4. Is the review well organized? Is the development of ideas clear?
5. Does the review use appropriate language regarding the tentativeness of prior findings? Is the review objective?
Does the author paraphrase, or is there an overreliance on quotes from original sources?
6. If the review is part of a research report for a new study, does the review support the need for the study?
7. If it is a review designed to summarize evidence for clinical practice, does the review draw reasonable
conclusions about practice implications?

References Cited in Chapter 5
* Boeker, M., Vach, W., & Motschall, E. (2013). Google Scholar as replacement for systematic literature searches: Good
relative recall and precision are not enough. BMC Medical Research Methodology, 13, 131.
* Bramer, W. M., Giustini, D., Kramer, B., & Anderson, P. (2013). The comparative recall of Google Scholar versus
PubMed in identical searches for biomedical systematic reviews. Systematic Reviews, 2, 115.
Cooper, H. (2017). Research synthesis and meta-analysis: A step-by-step approach (5th ed.). Thousand Oaks, CA: Sage
Publications.
Fink, A. (2020). Conducting research literature reviews: From the Internet to paper (5th ed.). Thousand Oaks, CA: Sage.
Flemming, K., & Briggs, M. (2006). Electronic searching to locate qualitative research: Evaluation of three strategies.
Journal of Advanced Nursing, 57, 95–100.
Galvan, J. L., & Galvan, M. (2017). Writing literature reviews: A guide for students of the social and behavioral sciences
(7th ed.). New York: Routledge.
Garrard, J. (2017). Health sciences literature review made easy: The matrix method (5th ed.). Burlington, MA: Jones and
Bartlett Publishers.
* Gehanno, J. F., Rollin, L., & Darmon, S. (2013). Is the coverage of Google Scholar enough to be used alone for
systematic reviews? BMC Medical Informatics and Decision Making, 13, 7.
Glaser, B. (1978). Theoretical sensitivity. Mill Valley, CA: The Sociology Press.
Gleason, K., Nazarian, S., & Dennison-Himmelfarb, C. (2018). Atrial fibrillation symptoms and sex, race, and
psychological distress: A literature review. Journal of Cardiovascular Nursing, 33, 137–143.
* Grant, M., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies.
Health Information and Libraries Journal, 26, 91–108.
* Haddaway, N., Collins, A., Coughlin, D., & Kirk, S. (2015). The role of Google Scholar in evidence reviews and its
applicability to grey literature searching. PLoS One, 10, e0138237.
He, H. G., Klainin-Yobas, P., Ang, E., Sinnappan, R., Pölkki, T., & Wang, W. (2015). Nurses' provision of parental
guidance regarding school-aged children's postoperative pain management: A descriptive correlational study. Pain
Management Nursing, 16, 40–50.
* Higgins, J., & Green, S. (Eds.). (2011). Cochrane handbook for systematic reviews of interventions version 5.1. Oxford: The
Cochrane Collaboration.
** Jones, R., Hollen, P., Wenzel, J., Weiss, G., Song, D., Sims, T., & Petroni, G. (2018). Understanding advanced prostate
cancer decision making utilizing an interactive decision aid. Cancer Nursing, 41, 2–10.
Kayser, S., VanGilder, C., & Lachenbruch, C. (2019). Predictors of superficial and severe hospital-acquired pressure
injuries: A cross-sectional study using the International Pressure Ulcer Prevalence™ survey. International Journal of
Nursing Studies, 89, 46–52.
* McKeever, L., Nguyen, V., Peterson, S., Gomez-Perez, S., & Braunschweig, C. (2015). Demystifying the search button:
A comprehensive PubMed search strategy for performing an exhaustive literature review. Journal of Parenteral and
Enteral Nutrition, 39, 622–635.
Munhall, P. L. (2012). Nursing research: A qualitative perspective (5th ed.). Sudbury, MA: Jones & Bartlett.
* Shariff, S. Z., Bejaimal, S., Sontrop, J., Iansavichus, A., Haynes, R. B., Weir, M., & Garg, A. (2013). Retrieving clinical
evidence: A comparison of PubMed and Google Scholar for quick clinical searches. Journal of Medical Internet
Research, 15(8), e164.
Spradley, J. (1979). The ethnographic interview. New York: Holt Rinehart & Winston.
Wilczynski, N., Marks, S., & Haynes, R. (2007). Search strategies for identifying qualitative studies in CINAHL.
Qualitative Health Research, 17, 705–710.
Worawong, C., Borden, M. J., Cooper, K., Perez, O., & Lauver, D. (2018). Evaluation of a person-centered, theory-based
intervention to promote health behaviors. Nursing Research, 67, 6–15.

*A link to this open-access article is provided in the Toolkit for Chapter 5 in the Resource Manual.

**This journal article is available on for this chapter.

aConsult the full research reports for references cited within these excerpted literature reviews.

C H A P T E R 6

Theoretical Frameworks

High- quality studies achieve a high level of conceptual integration. This
means that the methods are appropriate for the research questions, the
questions are consistent with existing evidence, and there is a plausible
conceptual rationale for hypotheses to be tested or for the design of an
intervention.
For example, suppose we hypothesized that a nurse- led smoking cessation
intervention would result in reduced rates of smoking among patients
with cardiovascular disease. Why would we make this prediction—what is
our “theory” (our theoretical rationale) about how the intervention might
change people’s behavior? Do we predict that the intervention will change
patients’ knowledge? motivation? sense of control over their
decision-making? Our view of how the intervention would “work”—what mediates
the relationship between intervention receipt and the desired outcome—
should guide the design of the intervention and the study.
In designing studies, researchers need to have a conceptualization of
people’s behaviors or characteristics, and how these affect or are affected
by interpersonal, environmental, or biologic forces. In high quality
research, a strong, defensible conceptualization is made explicit. This
chapter discusses theoretical and conceptual contexts for nursing research
problems.

Theories, Models, and Frameworks
Many terms are used in connection with conceptual contexts for research,
such as theories, models, frameworks, schemes, and maps. We offer
guidance in distinguishing these terms but note that our definitions are
not universal—indeed one confusing aspect of theory- related writings is
that there is no consensus about terminology.

Theories
The term theory is used in many ways. For example, nursing instructors
and students use the term to refer to classroom content, as opposed to the
actual practice of performing nursing actions. In both lay and scientific
usage, the term theory connotes an abstraction.
In research, the term theory is used differently by different authors.
Classically, theory refers to an abstract generalization that explains how
phenomena are interrelated. In this definition, a theory embodies at least
two concepts that are related in a manner that the theory purports to
explain. The purpose of traditional theories is to explain or predict
phenomena.
Others, however, use the term theory less restrictively to refer to a broad
representation that can thoroughly describe a phenomenon. Some authors
refer to this type of theory as descriptive theory. Broadly speaking,
descriptive theories are ones that describe or categorize characteristics of
individuals, groups, or situations by abstracting common features
observed across multiple manifestations. Descriptive theory plays an
important role in qualitative studies. Qualitative researchers often strive to
develop conceptualizations of phenomena that are grounded in actual
observations. Descriptive theory is sometimes a precursor to predictive
and explanatory theories.

Components of a Traditional Theory
Concepts are the basic building blocks of a theory. Classical theories
comprise a set of propositions that indicate relationships among the
concepts. Relationships are denoted by such terms as “is associated with,”
“varies directly with,” or “is contingent on.” The propositions form an
interrelated deductive system. Theories provide a mechanism for logically
deriving new statements from the original propositions.

Let us illustrate with the Theory of Planned Behavior (TPB; Ajzen, 2005),
which is related to another theory called the Theory of Reasoned Action
(Fishbein & Ajzen, 2010). TPB provides a framework for understanding
people’s behavior and its psychological determinants. A greatly simplified
construction of the TPB consists of the following propositions:

1. Behavior that is volitional is determined by people’s intention to perform that
behavior.

2. Intention to perform or not perform a behavior is determined by three factors:
Attitudes toward the behavior (i.e., the overall evaluation of performing the behavior)
Subjective norms (i.e., perceived social pressure to perform or not perform the
behavior)
Perceived behavioral control (i.e., the anticipated ease or difficulty of engaging in the
behavior)

3. The relative importance of the three factors in influencing intention varies across
behaviors and situations.

The concepts that form the basis of the TPB include behaviors, intentions,
attitudes, subjective norms, and perceived behavioral control. The theory, which
specifies the nature of the relationship among these concepts, provides a
framework for generating hypotheses relating to health behaviors. For
example, we might hypothesize that compliance with a medical regimen
(the behavior) could be enhanced by changing people's attitudes toward
compliance or by increasing their sense of control. The TPB has been used
as the underlying theory for studying a wide range of health
decision-making behaviors and in developing health-promoting interventions.
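In quantitative applications, intention is often operationalized as a weighted combination of the three determinants. The sketch below is a simplified illustration of proposition 3 (that the relative importance of the factors varies across behaviors and situations), with made-up weights and scale scores; it is not the theory's formal specification.

# Simplified TPB illustration (hypothetical weights and 1-7 scale scores).
# Intention is modeled as a weighted sum of attitude, subjective norm,
# and perceived behavioral control; the weights differ by behavior,
# reflecting proposition 3.
def intention(attitude, subjective_norm, perceived_control, weights):
    w_a, w_sn, w_pbc = weights
    return w_a * attitude + w_sn * subjective_norm + w_pbc * perceived_control

# For one behavior, attitude may dominate; for another, perceived
# behavioral control may matter more.
print(intention(6, 4, 5, weights=(0.5, 0.2, 0.3)))  # attitude-dominated
print(intention(6, 4, 5, weights=(0.2, 0.2, 0.6)))  # control-dominated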

Example using the TPB
Shi et al. (2019) used the Theory of Planned Behavior to study factors
influencing patient delay in seeking treatment among people with
hemorrhoids in China.

TIP Links to websites devoted to theories and conceptual models
mentioned in this chapter are listed in the Toolkit of the
accompanying Resource Manual for you to click on directly.

Levels of Theories
Theories differ in their level of generality and abstraction. The most
common labels used in nursing for levels or scope of theory are grand,
middle- range, and micro or practice.
Grand theories or macrotheories purport to describe and explain large
segments of human experience. In nursing, several grand theories offer
explanations of the whole of nursing and address the nature, goals, and
mission of nursing practice, as distinct from the discipline of medicine. An
example of a nursing theory that has been described as a grand theory is
Parse’s Humanbecoming Paradigm (Parse, 2014).
Theories of relevance to researchers are often more focused than grand
theories. Middle-range theories attempt to explain such phenomena as
decision-making, stress, comfort, and unpleasant symptoms. Middle-range
theories are more specific and more amenable to empirical testing than
grand theories (Peterson & Bredow, 2017). Literally dozens of
middle-range theories have been developed by or used by nurses, a few of which
we briefly describe in this chapter.
The least abstract level of theory is practice theory (sometimes called
situation- specific theory or micro theory). Such theories are highly specific,
narrow in scope, and have an action orientation. They are not always
associated with research, although grounded theory studies can be a
source of situation- specific theory (Peterson & Bredow, 2017).

Models
Conceptual models, conceptual frameworks, or conceptual schemes (we use
the terms interchangeably) are a less formal means of organizing
phenomena than theories. Like theories, conceptual models deal with
abstractions (concepts) that are assembled by virtue of their relevance to a
common theme. Conceptual models, however, lack the deductive system
of propositions that purport to explain relationships among concepts.
Conceptual models provide a perspective regarding interrelated
phenomena but are more loosely structured than theories. Conceptual
models can serve as springboards for generating hypotheses, but
conceptual models in their entirety are not formally “tested.” (In actuality,
however, the terms model and theory are sometimes used interchangeably.)

The term model is often used in connection with a symbolic representation
of a conceptualization. Schematic models (or conceptual maps), which are
visual representations of some aspect of reality, use concepts as building
blocks but with a minimal use of words. A visual or symbolic
representation of a theory or conceptual framework often helps to express
abstract ideas in a concise and accessible format.
Schematic models are common in both qualitative and quantitative
research. Concepts and linkages among them are represented through the
use of boxes, arrows, or other symbols. As an example, Figure 6.1 shows
Pender’s Health Promotion Model, which is a model for explaining and
predicting the health- promotion component of lifestyle (Murdaugh et al.,
2019). Such schematic models can be useful in succinctly communicating
linkages among concepts.

FIGURE 6.1 Pender’s Health Promotion Model.
(Retrieved from h�ps://nolapender.weebly.com/critical- elements.html.)
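Because a schematic model is essentially concepts (boxes) plus directed linkages (arrows), it can be represented compactly in code. The sketch below is a minimal illustration using hypothetical TPB-style concepts, not a rendering of Pender's model; it stores each arrow as a (source, target) pair and then reads off the antecedents of one concept.

# A conceptual map as a directed graph (hypothetical concepts and links).
# Each tuple is one "arrow" in a schematic model: (source, target).
links = [
    ("attitude toward the behavior", "intention"),
    ("subjective norm", "intention"),
    ("perceived behavioral control", "intention"),
    ("intention", "behavior"),
]

# Reading the map: which concepts are proposed antecedents of "intention"?
antecedents = [source for source, target in links if target == "intention"]
print(antecedents)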

Frameworks
A framework is the overall conceptual underpinnings of a study. Not
every study is based on a formal theory or conceptual model, but every
study has a framework—that is, a conceptual rationale. In a study based
on a theory, the framework is a theoretical framework; in a study with
roots in a conceptual model, the framework is a conceptual framework.

In most nursing studies, the framework is not an explicit theory or model,
and sometimes the underlying conceptual rationale for the inquiry is not
explained. Frameworks are often implicit, without being formally
described. In studies without an articulated conceptual framework, it may
be difficult to figure out what the researchers thought was “going on.”
Sometimes researchers fail even to adequately describe key constructs at
the conceptual level. The concepts in which researchers are interested are
abstractions of observable phenomena, and our world view shapes how
those concepts are defined and operationalized. Researchers should make
clear the conceptual definition of their key variables, thereby providing
information about the study’s framework.
In most qualitative studies, the frameworks are part of the research
tradition in which the study is embedded. For example, ethnographers
usually begin their work within a theory of culture. The questions that
most qualitative researchers ask and the methods they use to address
those questions inherently reflect certain theoretical formulations.

TIP In recent years, concept analysis has become an important
enterprise among students and nurse scholars, and several methods
have been proposed for undertaking a concept analysis and clarifying
conceptual definitions (e.g., Rodgers & Knafl, 2000; Walker & Avant,
2019). However, Bergdahl and Berterö (2016) have argued that
concept analysis is not a suitable approach to theory development.

Example of Developing a Conceptual Definition
Mollohan (2018) used Walker and Avant’s eight- step concept analysis
methods to conceptually define dietary culture. Mollohan searched
and analyzed 67 relevant articles identified through multiple
databases and proposed the following: “Dietary culture can be defined
as patterned group eating behaviors that are unconsciously
influenced and socially organized” (p. E2).

The Nature of Theories and Conceptual Models
Theories and conceptual models have much in common, including their
origin, general nature, purposes, and role in research. In this section, we
examine some characteristics of theories and conceptual models. We use
the term theory in a broad sense, inclusive of conceptual models.

Origin of Theories and Models
Theories, conceptual frameworks, and models are not discovered; they are
invented. Theory building depends not only on observable evidence but
also on the originator’s ingenuity in pulling facts together and organizing
them. Theory construction is a creative enterprise that can be undertaken
by anyone who is insightful, has a firm grounding in existing evidence,
and is able to knit together evidence into an intelligible pattern.

Tentative Nature of Theories and Models
Theories and conceptual models cannot be proved—they represent a
theorist’s best effort to describe and explain phenomena. Today’s
flourishing theory may be discredited or revised tomorrow. This may
happen if new evidence or observations undermine a previously accepted
theory. Or, a new theory might integrate new observations into an existing
theory to yield a more parsimonious or accurate explanation of a
phenomenon.
Theories and models that are not congruent with a culture’s values also
may fall into disfavor over time. For example, certain psychoanalytic and
structural social theories, which had broad support for decades, have come
to be challenged as a result of changing views about women’s roles.
Theories are deliberately invented by humans, and so they are not free
from human values, which can change over time.

The Role of Theories and Models
Theories allow researchers to integrate observations and facts into an
orderly scheme. The linkage of findings into a coherent structure can make
a body of evidence more useful.
In addition to summarizing, theories and models can guide a researcher’s
understanding of not only the what of natural phenomena but also the why
of their occurrence. Theories often provide a basis for predicting
phenomena. Prediction, in turn, has implications for influencing
phenomena. A utilitarian theory has potential to bring about desirable
changes in people's behavior or health outcomes. Thus, theories are an
important resource for developing nursing interventions.
Theories and conceptual models help to stimulate research and the
extension of knowledge by providing both direction and impetus. Thus,
theories may serve as a springboard for advances in knowledge and the
accumulation of evidence for practice.

Relationship Between Theory and Research
Theory and research have a reciprocal relationship. Theories are built
inductively from observations, and research evidence is an excellent
source for those observations. Concepts and relationships that are
validated through research become the foundation for theory
development. The theory, in turn, must be tested by subjecting deductions
from it (hypotheses) to systematic inquiry. Thus, research plays a dual and
continuing role in theory building. Theory guides and generates ideas for
research; research assesses the worth of the theory and provides a
foundation for new theories.

Conceptual Models and Theories Used in Nursing
Research
Nurse researchers have used nursing and nonnursing frameworks to
provide a conceptual context for their studies. This section briefly
discusses several frameworks that have been found useful.

Conceptual Models and Theories of Nursing
Several nurses have formulated theories and models of nursing practice.
These models offer formal explanations of what nursing is and what the
nursing process entails. As Fawcett and DeSanto-Madeya (2013) have
noted, four concepts are central to models of nursing: human beings,
environment, health, and nursing. The various models, however, define these
concepts differently, link them in diverse ways, and emphasize different
relationships among them. Moreover, the models view different processes
as being central to nursing.
The conceptual models were not developed primarily as a base for nursing
research. Most models have had more impact on nursing education and
practice than on research. Nevertheless, nurse researchers have been
inspired by these conceptual models in formulating research questions
and hypotheses. Two nursing models that have generated particular
interest as a basis for research are briefly described.

Roy’s Adaptation Model
In Roy’s Adaptation Model, humans are viewed as biopsychosocial
adaptive systems who cope with environmental change through the
process of adaptation (Roy & Andrews, 2009). Within the human system,
there are four subsystems: physiologic/physical, self- concept/group
identity, role function, and interdependence. These subsystems constitute
adaptive modes that provide mechanisms for coping with environmental
stimuli and change. Health is viewed as both a state and a process of
becoming integrated and whole that reflects the mutuality of persons and
environment. The goal of nursing, according to this model, is to promote
client adaptation. Nursing also regulates stimuli affecting adaptation.
Nursing interventions usually take the form of increasing, decreasing,
modifying, removing, or maintaining internal and external stimuli that

affect adaptation. Roy’s Adaptation Model has been the basis for several
middle- range theories and dozens of studies.

Example Using Roy’s Adaptation Model
Frank et al. (2017) were guided by Roy’s Adaptation Model in their
study of the effect of implementing a posttraumatic stress disorder
screening tool for acute traumatically injured patients.

Orem’s Self- Care Deficit Nursing Theory
Some basic concepts in Orem’s Self- Care Deficit Theory include self- care,
self- care deficit, and self- care agency (Orem et al., 2003). Self- care activities
are what people do on their own behalf to maintain their life, health, and
well- being. The ability to perform self- care is called self- care agency.
Orem’s universal self- care requisites to maintain health include air, food,
water, elimination, activity and rest, solitude and social interaction, hazard
prevention, and promotion of normality. Self- care deficit occurs when
self- care agency is not adequate to meet a person’s self- care demands.
Orem’s theory explains that patients need nursing care when their
demands for self- care outweigh their abilities.

Example Using Orem’s Theory
Using Orem's self-care deficit theory as her framework, Treadwell
(2017) explored depression among patients on dialysis. The
researcher concluded that Orem’s theory was appropriate for
identifying depression and motivation for change, and for
encouraging self- care practices with hemodialysis patients.

Other Models and Middle- Range Theories Developed by
Nurses
In addition to conceptual models that are designed to describe and
characterize the nursing process, nurses have developed middle-range
theories and models that focus on more specific phenomena of interest to
nurses. Examples of middle-range theories that have been used in research
include:

Beck's (2012) Theory of Postpartum Depression;
Kolcaba's (2003) Comfort Theory;
the Symptom Management Model (Dodd et al., 2001);
the Theory of Transitions (Meleis et al., 2000);
Peplau's (1997) Theory of Interpersonal Relations;
Swanson's (1991) Theory of Caring;
Reed's (1991) Self-Transcendence Theory;
Pender's Health Promotion Model (Murdaugh, Parsons, & Pender, 2019); and
Mishel's (1990) Uncertainty in Illness Theory.

The latter two are briefly described here.

The Health Promotion Model
Nola Pender’s Health Promotion Model (HPM) focuses on explaining
health- promoting behaviors, using a wellness orientation (Murdaugh et
al., 2019). According to the model (see Figure 6.1), health promotion entails
activities directed toward developing resources that maintain or enhance a
person’s well- being. The model embodies several theoretical propositions
that can be used to develop interventions and to gain insight into health
behaviors. For example, one HPM proposition is that people commit to
behaviors from which they anticipate deriving valued benefits, and
another is that perceived competence or self- efficacy relating to a given
behavior increases the likelihood of performing it. Greater perceived
self-efficacy is viewed as resulting in fewer perceived barriers to a health
behavior. The model also incorporates interpersonal and situational
influences on a person’s commitment to health- promoting actions.

Example Using the HPM
Eren Fidanci et al. (2017) tested the effects of an intervention based on
Pender’s Health Promotion Model on the healthy life behaviors of
obese children in Turkey.

Uncertainty in Illness Theory
Mishel’s Uncertainty in Illness Theory (Mishel, 1990) focuses on the
concept of uncertainty—a person’s inability to determine the meaning of
illness- related events. According to this theory, people develop subjective
appraisals to assist them in interpreting the experience of illness and
treatment. Uncertainty occurs when people are unable to recognize and
categorize stimuli. Uncertainty results in the inability to obtain a clear
conception of the situation, but a situation appraised as uncertain will
mobilize individuals to use their resources to adapt to the situation.
Mishel’s theory as originally conceptualized was most relevant to patients
in an acute phase of illness or in a downward illness trajectory, but it has
been reconceptualized to include constant uncertainty in chronic or
recurrent illness. Mishel's conceptualization of uncertainty, and her
Uncertainty in Illness Scale, have been used in many nursing studies.

Example Using Uncertainty in Illness Theory
Shun et al. (2018) studied changes in patients’ degree of uncertainty
in relation to levels of symptom distress and unmet care needs
among patients with recurrent hepatocellular carcinoma.

Other Models and Theories Used by Nurse Researchers
Many concepts of interest to nurse researchers are not unique to nursing,
and so their studies are sometimes linked to frameworks that originated in
other disciplines. Several of these alternative models have gained special
prominence in the development of nursing interventions to promote
health- enhancing behaviors. In addition to the previously described TPB,
three nonnursing models or theories have often been used in nursing
studies: Bandura's Social Cognitive Theory, Prochaska's Transtheoretical
(stages of change) Model, and the Health Belief Model (HBM).

Bandura’s Social Cognitive Theory
Social Cognitive Theory (Bandura, 1997, 2001), which is sometimes called
self- efficacy theory, offers an explanation of human behavior using the
concepts of self- efficacy and outcome expectations. Self- efficacy concerns
people’s belief in their own capacity to carry out particular behaviors (e.g.,
smoking cessation). Self- efficacy expectations influence the behaviors a
person chooses to perform, their degree of perseverance, and the quality of
the performance. Bandura identified four factors that influence a person’s
cognitive appraisal of self- efficacy: (1) their own mastery experience; (2)
verbal persuasion; (3) vicarious experience; and (4) physiologic and
affective cues, such as pain and anxiety. The role of self- efficacy has been

studied in relation to numerous health behaviors (e.g., weight control,
smoking).

TIP Bandura’s self- efficacy construct is a key mediating variable in
several theories discussed in this chapter. Self- efficacy has repeatedly
been found to explain a significant amount of variation in people’s
behaviors and to be amenable to change. As a result, self- efficacy
enhancement is often a goal in interventions designed to change
people’s health- related behaviors (Conn et al., 2001).

Example Using Social Cognitive Theory
Staffileno et al. (2018) evaluated a Web- based, culturally relevant
lifestyle change intervention, with roots in Social Cognitive Theory,
that targeted young African American women at risk for developing
hypertension.

The Transtheoretical (Stages of Change) Model
The Transtheoretical Model (Prochaska et al., 2002; Prochaska & Velicer,
1997) has been the basis of numerous interventions designed to change
people’s problem behavior (e.g., alcohol abuse). The core construct around
which other dimensions are organized is stages of change, which
conceptualizes a continuum of motivational readiness to change
dysfunctional behavior. The five stages of change are precontemplation,
contemplation, preparation, action, and maintenance. Studies have shown
that successful self- changers use different processes at each stage,
suggesting the desirability of interventions that are individualized to the
person’s stage of readiness for change. The model incorporates a series of
mediating variables, one of which is self- efficacy.
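Because the five stages form an ordered continuum, the idea of stage-matched intervention is easy to express in code. The sketch below is a minimal illustration: the stage names come from the model, but the strategy strings are hypothetical placeholders, not interventions drawn from the TTM literature.

from enum import IntEnum

# The five TTM stages of change as an ordered type.
class Stage(IntEnum):
    PRECONTEMPLATION = 1
    CONTEMPLATION = 2
    PREPARATION = 3
    ACTION = 4
    MAINTENANCE = 5

# Hypothetical stage-matched strategies (illustrative only).
strategies = {
    Stage.PRECONTEMPLATION: "raise awareness of the problem behavior",
    Stage.CONTEMPLATION: "weigh the pros and cons of changing",
    Stage.PREPARATION: "set a start date and a concrete plan",
    Stage.ACTION: "reinforce the new behavior; manage triggers",
    Stage.MAINTENANCE: "prevent relapse; consolidate gains",
}

print(strategies[Stage.CONTEMPLATION])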

Example Using the Transtheoretical Model
Wen et al. (2019) tested the effectiveness of a Transtheoretical Model–
based intervention on the self- management of people with an
ostomy.

The Health Belief Model

The Health Belief Model (HBM; Becker, 1978) has become a popular
framework in nursing studies focused on patient compliance and
preventive healthcare practices. The model postulates that health- seeking
behavior is influenced by a person’s perception of a threat posed by a
health problem and the value associated with actions aimed at reducing
the threat. The major components of the HBM include perceived
susceptibility, perceived severity, perceived benefits and costs, motivation,
and enabling or modifying factors. Perceived susceptibility is a person’s
perception that a health problem is personally relevant or that a diagnosis
is accurate. Even when one recognizes personal susceptibility, action will
not occur unless the individual perceives the severity to be high enough to
have serious implications. Perceived benefits are patients’ beliefs that a
given treatment will cure the illness or help prevent it, and perceived
barriers include the complexity, duration, and accessibility of the
treatment. Motivation is the desire to comply with a treatment. Among the
modifying factors that have been identified are personality variables,
patient satisfaction, and sociodemographic factors.

Example Using the HBM
Rakhshkhorshid et al. (2018) used concepts from the Health Belief
Model in their study of the association of health literacy with breast
cancer knowledge, perception, and screening behavior.

TIP A theoretical framework called the Theoretical Domains
Framework (TDF) is being used increasingly in implementation
science as a way to understand factors influencing the behaviors of
healthcare professionals, as well as to facilitate the design of
interventions. The TDF, which was developed by expert consensus, is
a framework with 14 domains derived from 33 behavior- change
theories (Michie et al., 2005).

Selecting a Theory or Model for Nursing Research
As we discuss in the next section, theory can be used by qualitative and
quantitative researchers in various ways. A common challenge, however,
is identifying an appropriate model or theory—a task made especially
daunting because of the burgeoning number available. There are no rules
for how this can be done, but there are two places to start—with the theory
or model, or with the phenomenon being studied.
Readings in the theoretical literature often give rise to research ideas, so it
is useful to become familiar with a variety of grand and middle- range
theories. Several nursing theory textbooks provide good overviews of
major nurse theorists (e.g., Alligood, 2018; Butts & Rich, 2018; Morse,
2017). Resources for learning more about middle- range theories include
Smith and Liehr (2018) and Peterson and Bredow (2017).

The Supplement for this chapter includes a table that
describes 11 nursing models that have been used by researchers. The
Supplement also offers references for about 100 middle-range
theories and models that have been used in nursing research,
organized in broad domains (e.g., aging, mental health, pain).

If you begin with a research problem or topic and are looking for a theory,
a good strategy is to examine the conceptual contexts of existing studies on
a similar topic. You may find that several different theories have been
used, and so the next step is to learn as much as possible about the most
promising ones so that you can select a theory that is appropriate for your
own study.

TIP Although it may be tempting to read about the features of a
theory in a secondary source, it is best to consult a primary source
and to rely on the most up- to- date reference because models are often
revised as research accumulates. However, it is also a good idea to
review studies that have used the theory so that you can judge how
much empirical support the theory has received and how key
variables were measured.

Many writers have offered advice on how to evaluate a theory for use in
nursing practice and nursing research (e.g., Chinn & Kramer, 2018;
Fawcett & DeSanto-Madeya, 2013; Smith & Parker, 2015). Box 6.1 presents
some basic questions that can be asked in a preliminary assessment of a
theory or model.
In addition to evaluating the general integrity of the model or theory, it is
important to make sure that there is a proper “fit” between the theory and the research question to be studied. A critical issue is whether the theory has done a good job of explaining, predicting, or describing constructs that are key to your research problem. A few additional questions include the following:

Has the theory been applied to similar research questions, and do the findings
from prior research lend credibility to the theory’s utility for research?
Are the theoretical constructs in the model or theory readily operationalized?
Do instruments of adequate quality exist?
Is the theory compatible with your world view and with the world view implicit
in the research question?

Testing, Using, and Developing a Theory or Framework
In this section, we describe how theory is used by qualitative and
quantitative researchers. We use the term theory broadly to include
conceptual models and frameworks.

Theories and Qualitative Research
Theory is almost always present, either peripherally or centrally, in studies
that are embedded in a qualitative research tradition such as ethnography,
phenomenology, or grounded theory. These research traditions inherently
provide an overarching framework that gives qualitative studies a
theoretical grounding. However, different traditions involve theory in
different ways.
Sandelowski (1993) made a useful distinction between substantive theory
(conceptualizations of the target phenomenon under study) and theory
that reflects a conceptualization of human inquiry. Some qualitative
researchers insist on an atheoretical stance vis-à-vis the phenomenon of
interest, with the goal of suspending a priori conceptualizations
(substantive theories) that might bias their collection and analysis of data.
For example, phenomenologists are in general committed to theoretical
naiveté and explicitly try to hold preconceived views of the phenomenon
in check. Nevertheless, they are guided in their inquiries by a philosophy
of phenomenology that focuses their analysis on certain aspects of a
person’s lived experiences.
Ethnographers typically bring a strong cultural perspective to their
studies, and this perspective shapes their initial fieldwork. Ethnographers
often adopt one of two cultural theories: ideational theories, which suggest
that cultural conditions stem from mental activity and ideas, or
materialistic theories, which view material circumstances (e.g., resources,
money, production) as the source of cultural developments.
The most prominent sociologic theory in grounded theory is symbolic
interaction (or interactionism), which has three underlying premises
(Blumer, 1986). First, humans act toward things based on the meanings
that the things have for them. Second, the meaning of things arises out of
the interaction humans have with other humans. Last, meanings are
handled in, and modified through, an interpretive process in dealing with
the things humans encounter. Despite having a theoretical umbrella, grounded theory researchers, like phenomenologists, attempt to hold prior
substantive theory (existing knowledge and conceptualizations about the
phenomenon) in abeyance until their own substantive theory begins to
emerge.

Example of a Grounded Theory Study
Girardon-Perlini and Ângelo (2017) conducted a grounded theory
study based on a symbolic interactionist framework to explore the
experiences of rural families with relatives who had cancer. Their
main category was “Caregiving to support the family world,” which
represented the family’s strategies to reconcile care for the patient
and care for family life.

The use of theory in qualitative studies has been the topic of some debate.
Morse (2002) called for qualitative researchers to not be “theory ignorant
but theory smart” (p. 296) and to “get over” their theory phobia. Morse
(2004) elaborated by noting that qualitative research does not necessarily
begin with holding in check all prior knowledge of the phenomenon under
study. She suggested that if the boundaries of the concept of interest can
be identified, a qualitative researcher can use these boundaries as a
scaffold to inductively explore the attributes of the concept.
Some qualitative nurse researchers have adopted a perspective known as
critical theory as their framework. Critical theory is a paradigm that
involves a critique of society and societal processes and structures, as we
discuss in Chapter 22.
Qualitative researchers sometimes use conceptual models of nursing as an
interpretive framework, rather than as a guide for the conduct of a study.
For example, some qualitative nurse researchers acknowledge that the
philosophic roots of their studies lie in conceptual models of nursing
developed by Newman, Parse, or Rogers.
One final note is that a systematic review of qualitative studies on a
specific topic is another strategy leading to theory development. In
metasyntheses (Chapter 30), qualitative studies on a topic are scrutinized
to identify essential elements. The findings from different sources are then
used for theory building.

Theories and Models in Quantitative Research

Quantitative researchers, like qualitative researchers, link research to
theory or models in several ways. The classic approach is to test
hypotheses deduced from an existing theory.

Testing an Existing Theory
Theories sometimes stimulate new studies. For example, a nurse might
read about Pender’s HPM (Figure 6.1), and the following type of reasoning
might ensue: “If the HPM is valid, then I would expect that patients with
osteoporosis who perceived the benefit of a calcium-enriched diet would be more likely to alter their eating patterns than those who perceived no
benefits.” Such a conjecture can serve as a starting point for testing the
model.
In testing a theory or model, quantitative researchers deduce implications
(as in the preceding example) and develop hypotheses, which are
predictions about the way variables would be interrelated if the theory
were sound. The hypotheses are then subjected to testing through
systematic data collection and analysis.
The testing process involves a comparison of observed outcomes with those hypothesized. Through this process, a theory is continually
subjected to potential disconfirmation. If studies repeatedly fail to
disconfirm a theory, it gains support. Testing continues until pieces of
evidence cannot be interpreted within the context of the theory but can be
explained by a new theory that also accounts for previous findings.
Theory-testing studies are most useful when researchers devise logically
sound deductions from the theory, design a study that reduces the
plausibility of alternative explanations for observed relationships, and use
methods that assess the theory’s validity under maximally heterogeneous
situations so that potentially competing theories can be ruled out.
Researchers sometimes base a new study on a theory to explain earlier
descriptive findings. For example, suppose several researchers had found
that nursing home residents demonstrate greater levels of noncompliance
with nursing staff around bedtime than at other times. These findings shed
no light on underlying causes of the problem, and so suggest no way to
improve it. Explanations rooted in theories of stress might be relevant in
explaining the residents’ behavior. By directly testing the theory in a study
(i.e., deducing hypotheses derived from the theory), a researcher might be
able to explain why bedtime is a vulnerable period for nursing home
residents.

Researchers sometimes combine elements from two theories as a basis for
generating hypotheses. In doing this, researchers need to be thoroughly
knowledgeable about both theories to see if there is an adequate conceptual
rationale for conjoining them. If underlying assumptions or conceptual
definitions of key concepts are not compatible, the theories should not be
combined (although perhaps elements of the two can be used to create a
new conceptual framework with its own assumptions and definitions).
Tests of a theory increasingly are taking the form of testing theory-based
interventions. If a theory is correct, it has implications for strategies to
influence people’s health-related attitudes or behavior and hence their health outcomes. The role of theory in the development of interventions is discussed at greater length in Chapter 28.

Example of a Theory-Based Intervention
Worawong et al. (2018), whose literature review was excerpted in the
previous chapter, tested the effect of a person- centered intervention
on physical activity and healthy nutrition in community- living
adults. The intervention, which they called “Healthy You,” was
developed using integrated concepts from two theories—Self–
Regulation Theory and Self- Determination Theory.

Using a Model or Theory as an Organizing Structure
Many researchers who cite a theory or model as their framework are not
directly testing it, but rather using the theory as an organizational or
interpretive tool. In such studies, researchers begin with a
conceptualization of nursing (or stress, health beliefs, and so on) that is
consistent with that of a model developer. The researchers assume that the
model used as a framework is valid and proceed to conceptualize and
operationalize constructs with the model in mind. Using models in this
fashion can serve a valuable organizing purpose, but such studies do not
address the issue of whether the theory itself is sound.

TIP The Toolkit with the accompanying Resource Manual offers some
criteria for drawing conclusions about whether researchers were truly
testing a theory or using a theory as an organizational or interpretive
aid.

We should note that the framework for a quantitative study need not be a
formal theory such as those described in the previous section. Sometimes
quantitative studies are undertaken to further explicate constructs
identified in grounded theory or other qualitative research.

Fitting a Problem to a Theory
Researchers sometimes develop a set of research questions or hypotheses
and subsequently try to devise a theoretical context in which to frame
them. Such an approach may in some cases be worthwhile, but we caution that an after-the-fact linkage of theory to a problem does not always
enhance a study. An important exception is when the researcher is
struggling to make sense of findings and calls on an existing theory to help
explain or interpret them.
If it is necessary to find a relevant theory or model after a research
problem is selected, the search for such a theory must begin by first
conceptualizing the problem on an abstract level. For example, take the
following research question: “Do daily telephone conversations between a
psychiatric nurse and a patient for 2 weeks after hospital discharge reduce
rates of readmission by short- term psychiatric patients?” This is a
relatively concrete research problem, but it might profitably be viewed
within the context of reinforcement theory, a theory of social support, or a
theory of crisis resolution. Part of the difficulty in finding a theory is that a
single phenomenon of interest can be conceptualized in various ways. Fitting a problem to a theory after the fact should be done with
circumspection. Although having a theoretical context can enhance the
meaningfulness of a study, artificially linking a problem to a theory is not
the route to scientific utility. If a conceptual model is really linked to a
problem, then the design of the study, decisions about what to measure
and how to measure it, and the interpretation of the findings flow from that
conceptualization.

TIP If you begin with a research question and then subsequently identify a theory or model, be willing to adapt or augment your original research problem as you gain greater understanding of the theory.

Developing a Framework in a Quantitative Study
Novice researchers may think of themselves as unqualified to develop a
conceptual scheme of their own. But theory development depends less on
research experience than on powers of observation, grasp of a problem,
and knowledge of prior research. Nothing prevents a creative and astute
person from formulating an original conceptual framework for a study.
The framework may not be a full- fledged theory, but it should place the
issues of the study into some broader perspective.
The basic intellectual process underlying theory development is induction
—that is, reasoning from particular observations and facts to broader
generalizations. The inductive process involves integrating what one has
experienced or learned into an organized scheme. For quantitative
research, the observations used in the inductive process usually are
findings from other studies. When patterns of relationships among
variables are derived in this fashion, one has the makings of a theory that
can be put to a more rigorous test. The first step in the development of a
framework, then, is to formulate a generalized scheme of relevant concepts
that is firmly grounded in the research literature.
Let us use as an example a study question identified in Chapter 4, namely,
What is the effect of humor on stress in patients with cancer? (See the
problem statement in Box 4.2). In undertaking a literature review, we find
that researchers and reviewers have suggested a myriad of complex
relationships among such concepts as humor, social support, stress,
coping, appraisal, immune function, and neuroendocrine function on the
one hand and various health outcomes (pain tolerance, mood, depression,
health status, and eating and sleeping disturbances) on the other (e.g.,
Christie & Moore, 2005). While there is a fair amount of research
evidence for the existence of these relationships, it is not clear how they all
fit together. Without some kind of “map” of what might be going on, it
could be challenging to design a strong study—we might, for example, not
measure all the key variables or we might not undertake an appropriate
analysis. And, if our goal is to design a humor therapy, we might struggle
in developing a strong intervention in the absence of a framework.
The conceptual map in Figure 6.2 represents an attempt to put the pieces of the puzzle together for a study involving a test of a humor intervention to improve health outcomes for patients with cancer. According to this
map, stress is affected by a cancer diagnosis and treatment both directly
and indirectly, through the person’s appraisal of the situation. That
appraisal, in turn, is affected by the patient’s coping skills, personality
factors, and available social supports (factors that themselves are
interrelated). Stress and physiological function (neuroendocrine and
immunologic) have reciprocal relationships.

FIGURE 6.2 Conceptual Model of Stress and Health Outcomes in Patients with
Cancer.

Note that we have not yet put in a “box” for humor in Figure 6.2. How do
we think humor might operate? If we see humor as having primarily a
direct effect on physiologic response, we would place humor near the
bottom and draw an arrow from the box to immune and neuroendocrine
function. But perhaps humor reduces stress because it helps a person cope
(i.e., its effects are primarily psychological). Or maybe humor will affect
the person’s appraisal of the situation. Alternatively, a nurse-initiated
humor therapy might have its effect primarily because it is a form of social
support. Each conceptualization has a different implication for the design
of the intervention and the study. To give but one example, if the humor
therapy is viewed primarily as a form of social support, then we might
want to compare our intervention with an alternative intervention that involves the presence of a comforting nurse (another form of social
support), without any special effort at including humor.
This type of inductive conceptualization based on existing research is a
useful means of providing theoretical grounding for a study. Of course,
our research question in this example could have been addressed within
the context of an existing conceptualization, such as the
psychoneuroimmunology (PNI) framework of McCain et al. (2005), but
hopefully our example illustrates how developing an original framework
can inform researchers’ decisions and strengthen the study. Havenga et al.
(2014) offer additional tips on developing a model.

TIP We strongly encourage you to draw a conceptual map before
launching an investigation based on either an existing theory or your
own inductive conceptualization—even if you do not plan to
formally test the entire model or present the model in a report. Such
maps are valuable heuristic devices in planning a study.

Example of Developing a New Model
Hoffman et al. (2017) developed and tested a rehabilitation program
for lung cancer patients. The intervention was based on their own
model, which represented a synthesis of two theories, the Transitional Care Model and the Theory of Symptom Self-Management.

Critical Appraisal of Frameworks in Research Reports
It is often challenging to critically appraise the theoretical context of a
published research report—or its absence—but we offer a few suggestions.
In a qualitative study in which a grounded theory is developed and
presented, you probably will not be given enough information to refute
the proposed theory because only evidence supporting it is presented. You
can, however, assess whether the theory seems logical, whether the
conceptualization is insightful, and whether the evidence in support of it is
persuasive. In a phenomenologic study, you should look to see if the
researcher addressed the philosophical underpinnings of the study. The
researcher should briefly discuss the philosophy of phenomenology upon
which the study was based.
Critiquing a theoretical framework in a quantitative report is also difficult,
especially because you are not likely to be familiar with a range of relevant
theories and models. Some suggestions for evaluating the conceptual basis
of a quantitative study are offered in the following discussion and in Box
6.2.
The first task is to determine whether the study does, in fact, have a
theoretical or conceptual framework. If there is no mention of a theory,
model, or framework, you should consider whether the study’s
contribution is weakened by this absence. In some cases, the research may
be so pragmatic that it does not really need a theory to enhance its utility.
If, however, the study involves evaluating a complex intervention or
testing hypotheses, the absence of a formally stated theoretical framework
or rationale suggests conceptual fuzziness.
If the study does have an explicit framework, you must ask whether the
particular framework is appropriate. You may not be able to challenge the
researcher’s use of a particular theory, but you can assess whether the link
between the problem and the theory is genuine. Did the researcher present
a convincing rationale for the framework used? Do the hypotheses flow
from the theory? Will the findings contribute to the validation of the
theory? Did the researcher interpret the findings within the context of the
framework? If the answer to such questions is no, you may have grounds
for criticizing the study’s framework, even though you may not be able to
articulate how the conceptual basis of the study could be improved.

Research Examples
Throughout this chapter, we have mentioned studies that were based on
various conceptual and theoretical models. This section presents more
detailed examples of the linkages between theory and research from the
nursing research literature—one from a quantitative study and the other
from a qualitative study.

Research Example From a Quantitative Study: The Health Promotion
Model

Study: The relationship between religiosity and health-promoting
behaviors in pregnant women (Cyphers et al., 2017)
Statement of purpose: The purpose of the study was to examine the
relationship between religiosity and health-promoting behaviors of
women at Pregnancy Resource Centers (PRCs).
Theoretical framework: The Health Promotion Model (HPM, Figure 6.1)
was the guiding framework for the study: “The…HPM, a middle-range theory based on expectancy-value theory and Social Cognitive Theory, provides a holistic, multidimensional framework for exploring a person’s health-promoting behavior…Religiosity had not been previously studied with the HPM, but as religiosity can be considered a personal factor…, it was included in this research study” (p. 1430).
Method: The study was conducted in eastern Pennsylvania. The
researchers sampled 86 pregnant women who visited PRCs, which are
community centers that offer Christian, faith-based approaches to care.
Study participants completed an anonymous questionnaire in a private
area of the PRC. The questionnaire was used to gather data on pregnancy
intention, religiosity, health-promoting behaviors, services used at the
PRC, and demographics.
Key findings: The researchers found that women who attended more classes at the centers reported more frequent health-promoting behaviors. Religiosity, attendance at religious services, and a scale measuring “satisfaction with surrender to God” were also found to be associated with higher health-promoting behavior scores. These variables correspond to personal factors, behavior-specific cognitions, and interpersonal factors in Pender’s model.

Research Example From a Qualitative Study: A Grounded Theory

Study: Follow the yellow brick road: Self-management by adolescents and
young adults after a stem cell transplant (Morrison et al., 2018)
Statement of purpose: The purpose of the study was to understand the
process adolescents and young adults use to manage their care after a stem
cell transplant, and to explore self-management facilitators, barriers,
processes, and behaviors.
Theoretical framework: A grounded theory approach was chosen to
explore the psychosocial processes that adolescents and young adults use
in managing their care. The authors noted that “Grounded theory is an
ideal methodology for studying complex social and psychological actions
and processes. Data gathered are rich and detailed including participants’
views, actions, intentions, feelings, and life structures and the context in
which they are occurring” (p. 348).
Method: Data were collected through in-depth interviews with 17
adolescents and young adults (AYA) who underwent a stem cell transplant
between the ages of 13 and 25. In addition, caregivers of 13 of the AYA
participants were interviewed to gain a deeper understanding of how AYA
care is managed after the transplant. Interviews, which lasted about an
hour, were digitally recorded and transcribed for analysis. Data collection
and data analysis occurred concurrently, and data collection continued
until saturation occurred.
Key findings: AYA and caregiver interview data were integrated into
one framework that was developed inductively. The metaphor of
Dorothy’s journey in the Wizard of Oz was applied after theoretical
brainstorming by the research team was completed. Figure 6.3 provides a
graphical depiction of their framework. Key concepts include “at the
mercy of transplant” (the tornado), “education and instructions” (the
yellow brick road), and “inner strength” (the Great and Powerful Oz).

FIGURE 6.3 A grounded theory of the self-management process of adolescents and young adults after a stem cell transplant. The process starts with “At the mercy of transplant” and proceeds through the cycle. Adolescents/young adults may skip setbacks and proceed to new normal, or they may revert back to another stage and repeat the cycle. Yearn for normal, inner strength, and social support influence and are influenced by the context of SCT and self-management.
(Adapted with permission from Morrison C., Martsolf D., Borich A., Coleman K., Ramirez P., Wehrkamp N., Pai A. (2018). Follow the yellow brick road: Self-management by adolescents and young adults after a stem cell transplant. Cancer Nursing, 41, 347–358.)

Summary Points

High-quality research requires conceptual integration, one aspect of which is
having a defensible theoretical rationale for the study. Researchers demonstrate
conceptual clarity by delineating a theory, model, or framework on which the
study is based.
A theory is a broad characterization of phenomena. As classically defined, a
theory is an abstract generalization that systematically explains relationships
among phenomena. Descriptive theory thoroughly describes a phenomenon.
Concepts are the basic components of a theory. Classically defined theories
consist of a set of propositions about the interrelationships among concepts,
arranged in a logical system that permits new statements (hypotheses) to be
deduced from them.
Grand theories (macrotheories) attempt to describe large segments of the human experience. Middle-range theories (e.g., Pender’s HPM) are specific to certain
phenomena (e.g., stress, uncertainty in illness).
Concepts are also the basic elements of conceptual models, but concepts are not
linked in a logically ordered deductive system. Conceptual models, like
theories, provide context for nursing studies.
The goal of theories and models in research is to make findings meaningful, to
integrate knowledge into coherent systems, to stimulate new research, and to
explain phenomena and relationships among them.
Schematic models (or conceptual maps) are graphic, theory-driven
representations of phenomena and their interrelationships using symbols or
diagrams and a minimal use of words.
A framework is the conceptual underpinning of a study, including an overall
rationale and conceptual definitions of key concepts. In qualitative studies, the
framework often springs from distinct research traditions.
Several conceptual models and grand theories of nursing have been developed.
The concepts central to models of nursing are human beings, environment, health,
and nursing. Two major conceptual models of nursing used by researchers are
Roy’s Adaptation Model and Orem’s Self-Care Deficit Theory.
Nonnursing models used by nurse researchers include Bandura’s Social
Cognitive Theory, Prochaska’s Transtheoretical Model, and Becker’s Health
Belief Model.
In some qualitative research traditions (e.g., phenomenology), the researcher
avoids existing substantive theories of the phenomena under study, but there is
a rich theoretical underpinning associated with the tradition itself.
Some qualitative researchers specifically seek to develop grounded theories—data-driven explanations to account for phenomena under study through inductive processes.
In the classical use of theory, researchers test hypotheses deduced from an
existing theory. An emerging trend is the testing of theory-based interventions.
In both qualitative and quantitative studies, researchers sometimes use a theory
or model as an organizing framework or an interpretive tool.
Researchers sometimes develop a problem, design a study, and then look for a
conceptual framework; such an after-the-fact selection of a framework usually is
less compelling than a more systematic application of a particular theory.
Even in the absence of a formal theory, quantitative researchers can inductively
weave together the findings from prior studies into a conceptual scheme that
provides methodologic and conceptual direction to the inquiry.

Study Activities
Study activities are available to instructors on the book’s website.

Box 6.1 Some Questions for a Preliminary Assessment of a Model
or Theory

Theoretical clarity
Are key concepts defined, and are definitions clear?
Do all concepts “fit” within the theory? Are concepts used in the theory in a manner compatible with conceptual definitions?
Are schematic models helpful, and are they compatible with the text? Are schematic models needed but not presented?
Is the theory adequately explained? Are there ambiguities?

Theoretical complexity
Is the theory sufficiently rich and detailed?
Is the theory overly complex?
Can the theory be used to explain or predict phenomena, or only to describe them?

Theoretical grounding
Are the concepts identifiable in reality?
Is there a research basis for the theory? Is the basis a sound one?

Appropriateness of the theory
Are the tenets of the theory compatible with nursing’s philosophy?
Are key concepts within the domain of nursing?

Importance of the theory
Could research based on this theory answer critical questions for nursing?
Will testing the theory contribute to nursing’s evidence base?

General issues
Are there other theories or models that would do a better job of explaining phenomena of interest?
Is the theory compatible with your world view?

Box 6.2 Guidelines for Critically Appraising Theoretical and
Conceptual Frameworks in a Research Article

1. Did the report describe an explicit theoretical or conceptual framework for the
study? If not, does the absence of a framework detract from the usefulness or
significance of the research?

2. Did the report adequately describe the major features of the theory or model so
that readers could understand the study’s conceptual basis?

3. Does the theory or model fit the research problem? Would a different
framework have been more appropriate?

4. If there is an intervention, was there a cogent theoretical basis or rationale for
how the intervention was expected to “work” to produce desired outcomes?

5. Was the theory or model used as a basis for generating hypotheses, or was it
used as an organizational or interpretive framework? Was this appropriate?

6. Did the research problem and hypotheses (if any) naturally flow from the
framework, or did the purported link between the problem and the framework
seem contrived? Were deductions from the theory logical?

7. Were concepts adequately defined, and in a way that is consistent with the
theory? If there was an intervention, were intervention components consistent
with the theory?

8. Was the framework based on a conceptual model of nursing or on a model
developed by nurses? If it was borrowed from another discipline, is there
adequate justification for its use?

9. Did the framework guide the study methods? For example, was the appropriate
research tradition used if the study was qualitative? If quantitative, did the
operational definitions correspond to the conceptual definitions?

10. Did the researcher tie the study findings back to the framework in the
Discussion section? Did the findings support or challenge the framework? Were
the findings interpreted within the context of the framework?

References Cited in Chapter 6
Ajzen I. (2005). Attitudes, personality and behavior (2nd ed.). New York: McGraw Hill.
Alligood M. R. (2018). Nursing theorists and their work (9th ed.). St. Louis, MO:

Elsevier.
Bandura A. (1997). Self- efficacy: The exercise of control. New York: W. H. Freeman.
Bandura A. (2001). Social cognitive theory: An agentic perspective. Annual Review of

Psychology, 52, 1–26.
Beck C. T. (2012). Exemplar: Teetering on the edge: A second grounded theory

modification. In Munhall P. L. (Ed.), Nursing research: A qualitative perspective
(5th ed.) (pp. 257–284). Sudbury, MA: Jones & Bartlett Learning.

Becker M. (1978). The health belief model and sick role behavior. Nursing Digest, 6,
35–40.

Bergdahl E., & Berterö C. (2016). Concept analysis and the building blocks of theory:
Misconceptions regarding theory development. Journal of Advanced Nursing, 72,
2558–2566.

Blumer H. (1986). Symbolic interactionism: Perspective and method. Berkeley:
University of California Press.

Butts J., & Rich K. (2018). Philosophies and theories for advanced nursing practice (3rd ed.). Burlington, MA: Jones & Bartlett.

Chinn P., & Kramer M. (2018). Knowledge development in nursing: Theory and process
(10th ed.). St. Louis: Mosby.

Christie W., & Moore C. (2005). The impact of humor on patients with cancer. Clinical
Journal of Oncology Nursing, 9, 211–218.

Conn V. S., Rantz M. J., Wipke-Tevis D. D., & Maas M. L. (2001). Designing effective
nursing interventions. Research in Nursing & Health, 24, 433–442.

* Cyphers N., Clements A., & Lindseth G. (2017). The relationship between religiosity
and health-promoting behaviors in pregnant women. Western Journal of Nursing
Research, 39, 1429–1446.

Dodd M., Janson S., Facione N., Fawcett J., Froelicher E. S., Humphreys J., … Taylor
D. (2001). Advancing the science of symptom management. Journal of Advanced
Nursing, 33, 668–676.

Eren Fidanci B., Akbayrak N., & Arslan F. (2017). Assessment of a health promotion
model on obese Turkish children. Journal of Nursing Research, 25, 436–446.

Fawcett J., & DeSanto-Madeya S. (2013). Contemporary nursing knowledge: Analysis
and evaluation of nursing models and theories (3rd ed.). Philadelphia: F.A. Davis
Company.

Fishbein M., & Ajzen I. (2010). Predicting and changing behavior: The reasoned action
approach. New York, NY: Psychology Press.

Frank C., Schroeter K., & Shaw C. (2017). Addressing traumatic stress in the acute
traumatically injured patient. Journal of Trauma Nursing, 24, 78–84.

* Girardon-Perlini N., & Ângelo M. (2017). The experience of rural families in the face
of cancer. Revista Brasileira Enfermagem, 70, 550–557.

Havenga Y., Poggenpoel M., & Myburgh C. (2014). Developing a model: An
illustration. Nursing Science Quarterly, 27, 149–156.

Hoffman A., Brintnall R., Given B., von Eye A., Jones L., & Brown J. (2017). Using
perceived self- efficacy to improve fatigue and fatigability in postsurgical lung
cancer patients. Cancer Nursing, 40, 1–12.

Kolcaba K. (2003). Comfort theory and practice. New York: Springer Publishing Co.
* McCain N. L., Gray D. P., Walter J. M., & Robins J. (2005). Implementing a

comprehensive approach to the study of health dynamics using the
psychoimmunology paradigm. Advances in Nursing Science, 28, 320–332.

Meleis A. I., Sawyer L. M., Im E., Hilfinger Messias D., & Schumacher K. (2000).
Experiencing transitions: An emerging middle-range theory. Advances in Nursing
Science, 23, 12–28.

* Michie S., Johnston M., Abraham C., Lawton R., Parker S., & Walker A. (2005).
Making psychological theory useful for implementing evidence-based practice: A
consensus approach. Quality & Safety in Health Care, 14, 26–33.

Mishel M. H. (1990). Reconceptualization of the uncertainty in illness theory. Image:
Journal of Nursing Scholarship, 22, 256–262.

Mollohan E. A. (2018). Dietary culture: A concept analysis. Advances in Nursing
Science, 41, E1–E12.

** Morrison C., Martsolf D., Borich A., Coleman K., Ramirez P., Wehrkamp N., … Pai
A. (2018). Follow the yellow brick road: Self-management by adolescents and
young adults after a stem cell transplant. Cancer Nursing, 41, 347–358.

Morse J. M. (2002). Theory innocent or theory smart? Qualitative Health Research, 12,
295–296.

Morse J. M. (2004). Constructing qualitatively derived theory. Qualitative Health
Research, 14, 1387–1395.

Morse J. M. (2017). Analyzing and conceptualizing the theoretical foundations of nursing.
New York: Springer Publishing Company.

Murdaugh C., Parsons M. A., & Pender N. J. (2019). Health promotion in nursing
practice (8th ed.). Upper Saddle River, NJ: Pearson.

Orem D., Taylor S., Renpenning K., & Eisenhandler S. (2003). Self-care theory in
nursing: Selected papers of Dorothea Orem. New York: Springer.

Parse R. R. (2014). The humanbecoming paradigm: A transformational worldview.
Pittsburgh, PA: A Discovery International Publication.

Peplau H. E. (1997). Peplau’s theory of interpersonal relations. Nursing Science
Quarterly, 10, 162–167.

Peterson S. J., & Bredow T. S. (2017). Middle range theories: Applications to nursing
research (4th ed.). Philadelphia: Lippincott Williams & Wilkins.

Prochaska J. O., Redding C. A., & Evers K. E. (2002). The transtheoretical model and stages of change. In Lewis F. M. (Ed.), Health behavior and health education: Theory, research and practice (pp. 99–120). San Francisco: Jossey-Bass.

Prochaska J. O., & Velicer W. F. (1997). The transtheoretical model of health behavior
change. American Journal of Health Promotion, 12, 38–48.

Rakhshkhorshid M., Navaee M., Nouri N., & Safarzaii F. (2018). The association of
health literacy with breast cancer knowledge, perception and screening behavior.
European Journal of Breast Health, 14, 144–147.

Reed P. G. (1991). Toward a nursing theory of self- transcendence. Advances in Nursing
Science, 13, 64–77.

Rodgers B., & Knafl K. (Eds.). (2000). Concept development in nursing: Foundations,
techniques, and applications (2nd ed.). Philadelphia: Saunders.

Roy C., Sr., & Andrews H. (2009). The Roy adaptation model (3rd ed.). Upper Saddle
River, NJ: Pearson.

Sandelowski M. (1993). Theory unmasked: The uses and guises of theory in
qualitative research. Research in Nursing & Health, 16, 213–218.

Shi Y., Yang D., Chen S., Wang S., Li H., Ying J., … Sun J. (2019). Factors influencing
patient delay in individuals with haemorrhoids: A study based on theory of
planned behavior and common sense model. Journal of Advanced Nursing, 75(5),
1018–1028.

Shun S., Chou Y., Chen C., & Yang J. (2018). Change of uncertainty in illness and
unmet care needs in patients with recurrent hepatocellular carcinoma during
active treatment. Cancer Nursing, 41, 279–289.

Smith M. J., & Liehr P. (2018). Middle-range theory for nursing (4th ed.). New York, NY:
Springer Publishing Co.

Smith M. C., & Parker M. (2015). Nursing theories and nursing practice (4th ed.).
Philadelphia: F.A. Davis.

Staffileno B., Tangney C., & Fogg L. (2018). Favorable outcomes using an eHealth approach to promote physical activity and nutrition among young African American women. Journal of Cardiovascular Disease, 33, 62–71.

Swanson K. M. (1991). Empirical development of a middle-range theory of caring.
Nursing Research, 40, 161–166.

Treadwell A. A. (2017). Examining depression in patients on dialysis. Nephrology
Nursing Journal, 44, 295–307.

Walker L. O., & Avant K. C. (2019). Strategies for theory construction in nursing (6th ed.).
Upper Saddle River, NJ: Prentice Hall.

Wen S., Li J., Wang A., Lv M., Li H., Lu Y., & Zhang J. (2019). Effects of transtheoretical-model-based intervention on the self-management of patients with an ostomy: A randomised controlled trial. Journal of Clinical Nursing, 28(9–10), 1936–1951.

Worawong C., Borden M. J., Cooper K., Perez O., & Lauver D. (2018). Evaluation of a person-centered, theory-based intervention to promote health behaviors. Nursing
Research, 67, 6–15.

*A link to this open-access article is provided in the Toolkit for Chapter 6 in the
Resource Manual.

**This journal article is available on the book’s website for this chapter.

CHAPTER 7

Ethics in Nursing Research

Researchers who conduct studies with human beings or animals must
do so ethically. Ethical demands can be challenging because they
sometimes conflict with the goal of producing rigorous evidence.
This chapter discusses major ethical principles for conducting
research.

Ethics and Research
The obligation for ethical conduct with human study participants
may strike you as self-evident, but ethics have not always been given adequate attention. Historical examples of ethical transgressions are
abundant, as described in the chapter Supplement on the book’s
website.

Codes of Ethics
Human rights violations in the name of science have led to the
development of various codes of ethics. The Nuremberg Code,
developed after Nazi crimes were made public in the Nuremberg
trials, was an international effort to establish ethical standards. The
Declaration of Helsinki, another international set of ethical principles
regarding human experimentation, was adopted in 1964 by the
World Medical Association and was most recently revised in 2013.
Most disciplines (e.g., psychology, medicine) have established their
own ethical codes. In nursing, the American Nurses Association
(ANA) issued Ethical Guidelines in the Conduct, Dissemination, and
Implementation of Nursing Research (Silva, 1995). The ANA, which
declared 2015 the Year of Ethics, published a revised Code of Ethics for
Nurses with Interpretive Statements, a document that includes
principles that apply to nurse researchers. In Canada, the Canadian
Nurses Association published a revised version of Ethical Research
Guidelines for Registered Nurses in 2017. In Australia, three nursing
organizations collaborated to develop the Code of Ethics for Nurses in
Australia (2018). Additionally, the International Council of Nurses
(ICN) developed the ICN Code of Ethics for Nurses, which was most
recently revised in 2012.

Government Regulations for Protecting Study
Participants
Governments throughout the world fund research and establish rules
for adhering to ethical principles. For example, Health Canada created the Tri-Council Policy Statement: Ethical Conduct for Research
Involving Humans as the guidelines to protect study participants in all
types of research, most recently revised in 2014. In Australia, the
National Health and Medical Research Council issued the National
Statement on Ethical Conduct in Human Research, updated in 2018.
In the United States, the National Commission for the Protection of
Human Subjects of Biomedical and Behavioral Research adopted a
code of ethics in 1978. The commission issued the Belmont Report,
which provided a model for many disciplinary guidelines.
Regulations affecting research sponsored by the U.S. government,
including studies supported by the National Institute of Nursing
Research (NINR), are based on the Belmont Report. The U.S.
Department of Health and Human Services (DHHS) has issued
ethical regulations that have been codified as Title 45 Part 46 of the
Code of Federal Regulations (45 CFR 46). These regulations were
revised most recently in 2018.

TIP Many useful websites are devoted to research ethics.
Several websites are listed in the Toolkit of the accompanying
Resource Manual, for you to click on directly.

Ethical Dilemmas in Conducting Research
Research that violates ethical principles is rarely done to be cruel, but
usually reflects a conviction that knowledge is important and
beneficial in the long run. There are situations in which participants’
rights and study demands are in direct conflict, posing ethical
dilemmas for researchers. Here are examples of research problems in
which the desire for rigor conflicts with ethical considerations:

1. Research question: Does a new medication improve mobility in patients
with Parkinson disease?

Ethical dilemma: The best way to test the effectiveness of an
intervention is to administer the intervention to some participants
but withhold it from others to see if differences between the groups
emerge. However, if the intervention is untested (e.g., a new drug),
the group receiving the intervention may be exposed to potentially
hazardous side effects. On the other hand, the group not receiving the
drug may be denied a beneficial treatment.

2. Research question: Are nurses equally empathic in their care of male and
female patients in the intensive care unit (ICU)?

Ethical dilemma: Ethics require that participants be aware of their role
in a study. Yet if the researcher informs nurse participants that their
empathy in treating male and female ICU patients will be
scrutinized, will their behavior be “normal”? If the nurses’ usual
behavior is altered because of the known presence of research
observers, then the findings will be misleading.

3. Research question: How do parents cope if a child has a terminal illness?

Ethical dilemma: To answer this question, the researcher may need to
probe into parents’ psychological state at a vulnerable time; such
probing could be painful or traumatic. Yet knowledge of the parents’
coping mechanisms might help to design effective ways of
addressing parents’ stress and grief.

4. Research question: What is the process by which adult children adapt to the day-to-day stresses of caring for a parent with Alzheimer’s disease?

Ethical dilemma: Sometimes, especially in qualitative studies, a
researcher may get so close to participants that they become willing
to share “secrets” and privileged information. Interviews can become
confessions—sometimes of unseemly or illegal behavior. In this
example, suppose a woman admitted to physically abusing her
mother—how does the researcher respond to that information
without undermining a pledge of confidentiality? And, if the researcher divulges the information to authorities, how can a pledge
of confidentiality be given in good faith to other participants?
As these examples suggest, researchers are sometimes in a bind. They
want to develop good evidence, but they must also protect human
rights. Another dilemma can arise if nurse researchers are confronted
with conflict of interest situations, in which their expected behavior
as researchers conflicts with their expected behavior as nurses (e.g.,
deviating from a research protocol to give assistance to a patient). It
is precisely because of such dilemmas that codes of ethics have been
developed to guide researchers’ efforts.

Ethical Principles for Protecting Study Participants
The Belmont Report articulated three broad principles on which
standards of ethical conduct in research in the United States are
based: beneficence, respect for human dignity, and justice. We briefly
discuss these principles and then describe procedures researchers
adopt to comply with them.

Beneficence
Beneficence imposes a duty on researchers to maximize benefits and
minimize harm. Human research should be intended to produce
benefits for participants or—a more common situation—for others.
This principle covers multiple aspects.

TIP The increased involvement of patients and lay people in
the development of research questions and protocols has been
viewed as an especially ethical approach to research conduct. As
noted by Domecq et al. (2014), “there is an overarching ethical
mandate for patient participation in research as a manifestation
of the ‘democratization’ of the research process” (p. 1).

The Right to Freedom From Harm and Discomfort
Researchers have an obligation to avoid, prevent, or minimize harm
(nonmaleficence) in research with humans. Participants should not be
subjected to unnecessary risks of harm or discomfort, and their
participation must be essential to achieving societally important aims
that could not otherwise be realized. In research with humans, harm
and discomfort can be physical (e.g., injury, fatigue), emotional (e.g.,
stress, fear), social (e.g., diminished social support), or financial (e.g.,
loss of wages). Ethical researchers must use strategies to minimize all
types of harms and discomforts, even ones that are temporary.
Research should be conducted by qualified people, especially if
potentially dangerous procedures are used. Ethical researchers must
be prepared to terminate a study if they suspect that continuation would result in injury or undue distress to participants. When a new
medical procedure or drug is being tested, prior experimentation
with animals or tissue cultures is advisable.
Protecting human beings from physical harm may be
straightforward, but psychological consequences are often subtle. For
example, participants may be asked questions about their personal
weaknesses, fears, or concerns. Such queries might lead people to
reveal very personal information. The point is not that researchers
should refrain from asking questions but that they need to be aware
of the intrusion on people’s psyches.
The need for sensitivity may be greater in qualitative studies, which
often involve in-depth exploration of personal topics. Extensive probing may expose deep-seated anxieties that participants had
previously repressed. Qualitative researchers must be vigilant in
anticipating potential ethical challenges.

The Right to Protection From Exploitation
Study involvement should not place participants at a disadvantage or
expose them to damages. Participants need to be assured that their
participation, or information they provide, will not be used against
them. For example, people divulging illegal drug use should not fear
exposure to criminal authorities.
Study participants enter into a special relationship with researchers,
and this relationship should never be exploited. Exploitation may be
overt and malicious (e.g., sexual exploitation, commercial use of
donated blood) but might be more elusive. For example, suppose
people agreed to participate in a study requiring 30 minutes of their
time, but the time commitment was actually 2 hours. In such a
situation, the researcher might be accused of exploiting the
researcher–participant relationship.
Because nurse researchers may have a nurse–patient (in addition to a
researcher–participant) relationship, special care may be required to
avoid exploiting that bond. Patients’ consent to participate in a study
may result from their understanding of the researcher’s role as nurse,
not as researcher.

In qualitative research, psychological distance between researchers
and participants often declines as the study progresses. The
emergence of a pseudotherapeutic relationship is not uncommon,
which can heighten the risk that exploitation could occur
inadvertently (Eide & Kahn, 2008). On the other hand, qualitative
researchers often are in a better position than quantitative researchers
to do good, rather than just to avoid doing harm.

Example of Therapeutic Research Experiences
Some of the participants in Beck et al.’s (2015) study on
secondary traumatic stress among certified nurse-midwives told the researchers that writing about the traumatic births they had attended was therapeutic for them. One participant wrote, “I think it’s fascinating how little respect our patients and
coworkers give to the traumatic experiences we suffer. It is
healing to be able to write out my experiences in this study and
actually have researchers interested in studying this topic.”

Respect for Human Dignity
Respect for human dignity is the second ethical principle in the
Belmont Report. This principle includes the right to self-determination
and the right to full disclosure.

The Right to Self- Determination
Humans should be treated as autonomous agents. Self-determination means that prospective participants can voluntarily
decide whether to take part in a study, without risk of prejudicial
treatment. It also means that people have the right to ask questions,
to refuse to give information, and to withdraw from the study.
A person’s right to self- determination includes freedom from
coercion, which involves threats of penalty for failing to participate
in a study or excessive rewards for agreeing to participate. Protecting
people from coercion requires careful thought when the researcher is
in a position of authority or influence over potential participants, as is often the case in a nurse–patient relationship. The issue of coercion
may require scrutiny even when there is not a preestablished
relationship. For example, a generous monetary incentive (or
stipend) offered to encourage participation among an economically
disadvantaged group (e.g., the homeless) might be considered mildly
coercive because such incentives might pressure prospective
participants into cooperation.

The Right to Full Disclosure
People’s right to make informed, voluntary decisions about study
participation requires full disclosure. Full disclosure means that the
researcher has fully described the study, the right to refuse
participation, the researcher’s responsibilities, and likely risks and
benefits. The right to self- determination and the right to full
disclosure are the two major elements of informed consent, discussed
later in this chapter.
Full disclosure can, however, create biases and sample recruitment
problems. Suppose we were testing the hypothesis that high school
students with a high rate of absenteeism are more likely to be
substance abusers than students with good a�endance. If we
approached potential participants and fully explained the study
purpose, some students likely would refuse to participate, and
nonparticipation would be selective; those least likely to volunteer
might well be substance abusing students—the group of primary
interest. Moreover, by knowing the research question, those who do
participate might not give candid responses. In such a situation, full
disclosure could undermine the study.
A technique that is sometimes used in such situations is covert data
collection (concealment), which is the collection of data without
participants’ knowledge and consent. This might happen, for
example, if a researcher wanted to observe people’s behavior in real-world settings and worried that doing so openly would affect the
behavior of interest. Researchers might choose to obtain the
information through concealed methods, such as by videotaping
with hidden equipment or observing while pretending to be engaged
in other activities. Covert data collection may in some cases be acceptable if risks are negligible and participants’ right to privacy has
not been violated. Covert data collection is least likely to be ethically
tolerable if the study is focused on sensitive aspects of people’s
behavior, such as drug use or sexual conduct.
A more controversial technique is the use of deception, which
involves deliberately withholding information about the study or
providing participants with false information. For example, in
studying high school students’ use of drugs, we might describe the
research as a study of students’ health practices, which is a mild form
of misinformation.
Deception and concealment are problematic ethically because they
interfere with people’s right to make informed decisions about
personal costs and benefits of participation. Some people argue that
deception is never justified. Others, however, believe that if the study
involves minimal risk to participants and if there are anticipated
benefits to society, then deception may be justified as a means of
enhancing the validity of the findings.
Another issue that has emerged in this era of electronic
communication concerns data collection over the Internet. For
example, some researchers analyze the content of messages posted to
social media sites. The issue is whether such messages can be treated
as research data without permission and informed consent. Some
researchers believe that messages posted electronically are in the
public domain and can be used without consent for research
purposes. Others, however, feel that standard ethical rules should
apply in cyberspace research and that researchers must carefully
protect the rights of those who participate in “virtual” communities.
Guidance for the ethical conduct of health research on the Internet is
offered by Ellett et al. (2004) and Heilferty (2011).

Justice
The third broad principle articulated in the Belmont Report concerns
justice, which includes participants’ right to fair treatment and their
right to privacy.

The Right to Fair Treatment

One aspect of justice concerns the equitable distribution of benefits
and burdens of research. Participant selection should be based on
study requirements and not on a group’s vulnerability. Participant
selection has been a key ethical issue historically, with researchers
sometimes selecting groups with lower social standing (e.g.,
prisoners) as participants. The principle of justice imposes special
obligations toward individuals who are unable to protect their own
interests (e.g., dying patients) to ensure that they are not exploited.
Distributive justice also imposes duties to not discriminate against
individuals or groups who may benefit from research. During the
1980s and 1990s, it became evident that women and minorities were
being unfairly excluded from many clinical studies in the United
States. This led to the promulgation of regulations requiring that
researchers who seek funding from the National Institutes of Health
(NIH) include women and minorities as participants. The regulations
also require researchers to examine whether clinical interventions
have differential effects (e.g., whether benefits are different for men
than for women), although this provision has had limited adherence
(Polit & Beck, 2009, 2013).
The fair treatment principle covers issues other than participant
selection. The right to fair treatment means that researchers must
treat people who decline to participate (or who withdraw from the
study) in a nonprejudicial manner; that they must honor all
agreements with participants; that they demonstrate respect for the
beliefs and lifestyles of people from different backgrounds or
cultures; that they give participants access to research staff for
desired clarification; and that they treat participants courteously and
tactfully at all times.

The Right to Privacy
Research with humans involves intrusions into personal lives.
Researchers should ensure that their research is not more intrusive
than it needs to be and that participants’ privacy is maintained.
Participants have the right to expect that their data will be kept in
strict confidence.

Privacy issues have become especially salient in the U.S. healthcare
community since the passage of the Health Insurance Portability and
Accountability Act of 1996 (HIPAA), which articulates federal
standards to protect patients’ health information. In response to the
HIPAA legislation, the U.S. Department of Health and Human
Services issued the regulations Standards for Privacy of Individually
Identifiable Health Information.

TIP Some information relevant to HIPAA compliance is
presented in this chapter, but you should confer with
organizations that are involved in your research (if they are
covered entities) regarding their practices and policies relating
to HIPAA provisions. Here is a website that provides
information about the implications of HIPAA for health
research: https://privacyruleandresearch.nih.gov.

Procedures for Protecting Study Participants
Now that you are familiar with fundamental ethical principles in
research, you need to understand procedures that researchers use to
adhere to them.

Risk/Benefit Assessments
One strategy that researchers use to protect participants is to conduct
a risk/benefit assessment. Such an assessment is designed to
evaluate whether the benefits of participating in a study are in line
with the costs, be they financial, physical, emotional, or social—i.e.,
whether the risk/benefit ratio is acceptable. A summary of risks and
benefits should be communicated to recruited individuals so that
they can evaluate whether it is in their best interest to participate. Box
7.1 summarizes some potential costs and benefits of research
participation.

TIP The Toolkit in the accompanying Resource Manual includes
a Word document with the factors in Box 7.1 arranged in
worksheet form for you to complete in doing a risk/benefit
assessment. By completing the worksheet, it may be easier for
you to envision opportunities for “doing good” and to avoid
possibilities of doing harm.

The risk/benefit ratio should take into account whether risks to
participants are commensurate with benefits to society. A broad
guideline is that the degree of risk taken by participants should never exceed the potential humanitarian benefits of the evidence to be gained. Thus, the selection of a significant topic that has the potential to improve patient care is the first step in ensuring that research is ethical. Gennaro (2014) has written eloquently about this issue.
All research involves some risk, but the risk is sometimes small. Minimal risk is defined as risk no greater than that ordinarily encountered in daily life or during routine procedures. When the
risks are not minimal, researchers must proceed with caution, taking
every step possible to diminish risks and maximize benefits.
In quantitative studies, most details of the study usually are spelled
out in advance, and so a reasonably accurate risk/benefit assessment
can be developed. Qualitative studies, however, usually evolve as
data are gathered, and so it may be difficult to assess all risks at the
outset. Qualitative researchers must remain sensitive to potential
risks throughout the study.

Example of Ongoing Vigilance and Assessment
Stormorken et al. (2017) studied factors impacting the illness
trajectory of postinfectious fatigue syndrome (PIFS) following
an outbreak of Giardia lamblia in Norway. Recognizing that
interviewing people with PIFS could trigger painful emotional
reactions, the interviewers were vigilant for signs of emotional
distress (e.g., crying) and asked participants if they wanted to
terminate the interview. Invariably, participants renewed their
consent to continue, “as they wished to complete their story of
living with the condition” (p. 6).

One potential benefit to participants is monetary. Stipends offered to
prospective participants are rarely viewed as an opportunity for
financial gain, but there is ample evidence that stipends are useful
incentives to participant recruitment and retention (Edwards et al.,
2009). Financial incentives are especially effective when the group
under study is difficult to recruit, when the study is time-consuming or tedious, or when participants incur study-related costs (e.g., for
child care or transportation). Stipends can range from $1 to hundreds
of dollars, but many are in the $25 to $75 range.

TIP In evaluating the anticipated risk/benefit ratio of a study
design, you might want to consider how comfortable you would
feel about being a study participant.

Informed Consent and Participant Authorization
A particularly important procedure for safeguarding participants is
to obtain their informed consent. Informed consent means that
participants have adequate information about the research,
comprehend that information, and can consent to or decline
participation voluntarily. This section discusses procedures for
obtaining informed consent and for complying with HIPAA rules
regarding accessing patients’ health information.

The Content of Informed Consent
Fully informed consent typically involves communicating the
following pieces of information to participants:

1. Participant status. Prospective participants need to understand the
distinction between research and treatment. They should be told which
healthcare activities are routine and which are implemented specifically
for the study. They also should be informed that data they provide will be
used for research purposes.

2. Study goals. The overall goals of the research should be stated, in lay
rather than technical terms. The use to which the data will be put should
be described.

3. Type of data. Prospective participants should be told what type of data
(e.g., self-reports, laboratory tests) will be collected.

4. Procedures. Prospective participants should be given a description of the
data collection procedures and procedures to be used regarding any
innovative treatment.

5. Nature of the commitment. Participants should be told the expected time
commitment at each point of contact and the number of contacts within a
given time frame.

6. Sponsorship. Information on who is sponsoring or funding the study
should be noted; if the research is part of an academic requirement, this
information should be shared.

7. Participant selection. Prospective participants should be told how they were
selected for recruitment and how many people will be participating.

8. Potential risks. Foreseeable risks (physical, psychological, social, or
economic) or discomforts should be communicated, as well as efforts that
will be made to minimize risks. The possibility of unforeseeable risks
should be discussed, if appropriate. If injury is possible, treatments that
will be made available to participants should be described. When risks are
more than minimal, prospective participants should be encouraged to
seek advice before consenting.

9. Potential benefits. Specific benefits to participants, if any, should be
described, as well as possible benefits to others.

10. Alternatives. If appropriate, participants should be told about alternative
procedures or treatments that might be advantageous to them.

11. Compensation. If stipends or reimbursements are to be paid (or if
treatments are offered without any fee), these arrangements should be
discussed.

12. Confidentiality pledge. Prospective participants should be assured that their
privacy will be protected. If anonymity can be guaranteed, this should be
stated.

13. Voluntary consent. Researchers should indicate that participation is strictly
voluntary and that failure to volunteer will not result in any penalty or
loss of benefits.

14. Right to withdraw and withhold information. Prospective participants should
be told that, after consenting, they have the right to withdraw from the
study or to withhold any specific piece of information. Researchers may need to describe circumstances under which they would terminate the study.

15. Contact information. The researcher should tell participants whom they
could contact in the event of questions, comments, or complaints.

In qualitative studies, especially those requiring repeated contact
with participants, it may be difficult to obtain meaningful informed
consent at the outset. Qualitative researchers do not always know in
advance how the study will evolve. Because the research design
emerges during data collection, researchers may not know the exact
nature of the data to be collected, what the risks and benefits to
participants will be, or how much of a time commitment they will be
expected to make. Thus, in a qualitative study, consent is often viewed as an ongoing, transactional process, sometimes called
process consent. In process consent, the researcher continually
renegotiates the consent, allowing participants to play a collaborative
role in making decisions about ongoing participation.

Example of Process Consent
Coombs et al. (2017) studied the decision- making processes that
influence transitions in care when approaching the end of life.
Terminally ill patients and family members were interviewed
when they were recruited for the study and then again 3 to
4 months later. Written consent was obtained before the first
interview, and then a process consent model was adopted.

Comprehension of Informed Consent
Consent information is typically presented to prospective
participants while they are being recruited, either orally or in writing.
Written notices should not, however, take the place of spoken
explanations, which provide opportunities for elaboration and for
participants to question and “screen” the researchers.
Because informed consent is based on a person’s evaluation of the
potential risks and benefits of participation, the information must not
only be communicated but understood. Researchers may have to
play a “teacher” role in conveying consent information. They should
use simple language and avoid technical terms whenever possible.
Written statements should be consistent with the participants’
reading levels. For participants from a general population (e.g.,
patients in a hospital), statements should be at about the seventh or
eighth grade reading level.

TIP Innovations to improve understanding of consent are being
developed. Nishimura et al. (2013) did a systematic review of 54
of them.

For some studies, especially those involving more than minimal risk,
researchers need to ensure that prospective participants understand
what participation will entail. This might involve testing participants’
comprehension of informed consent material before deeming them
eligible. Such efforts are especially warranted with participants
whose native tongue is not the same as the researchers’ or who have
cognitive impairments (Fields & Calvert, 2015; Simpson, 2010).

Example of Ensuring Comprehension in Informed
Consent
Zhang et al. (2018) tested a nurse case-managed intervention to
reduce substance abuse among homeless gay/bisexual men and
transgender women. All participants signed written informed
consent forms. Participants were later asked to repeat critical
aspects of the design and study procedures, to confirm their
cognitive capacity and their understanding of key consent
provisions.

Documentation of Informed Consent
Researchers usually document informed consent by having
participants sign a consent form. In the United States, federal
regulations for studies funded by the government require written
consent of participants, except under certain circumstances. When
the study does not involve an intervention and data are collected
anonymously—or when existing data from records or specimens are
used without linking identifying information to the data—
regulations requiring wri�en informed consent usually do not apply.
HIPAA legislation is explicit about the type of information that must
be eliminated from patient records for the data to be considered de-identified, as we illustrate in the Toolkit.
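
To make this concrete, the following is a minimal sketch in Python of stripping direct identifiers from patient records before analysis. The record structure and field names are illustrative assumptions, not the Privacy Rule’s actual specification; the Safe Harbor method enumerates 18 categories of identifiers that must be removed.

```python
# Illustrative sketch only: drop direct identifiers from a record.
# The field names below are assumptions for demonstration; HIPAA's
# Safe Harbor method specifies 18 categories of identifiers.
DIRECT_IDENTIFIERS = {
    "name", "address", "phone", "email",
    "ssn", "medical_record_number", "birth_date",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

raw = {
    "name": "Jane Doe",
    "medical_record_number": "MRN-004821",
    "birth_date": "1958-03-14",
    "diagnosis": "type 2 diabetes",
    "hba1c": 7.9,
}
print(deidentify(raw))  # {'diagnosis': 'type 2 diabetes', 'hba1c': 7.9}
```
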
The consent form should contain all the information essential to
informed consent. Prospective participants (or their legally authorized
representatives) should have ample time to review the document before signing it. The consent form should also be signed by the
researcher, and a copy should be retained by both parties.

TIP In developing a consent form, the following suggestions
might prove helpful:

1. Organize the form coherently so that prospective participants can
follow the logic of what is being communicated. If the form is
complex, use headings as an organizational aid.

2. Use a large enough font so that the form can be easily read, and use
spacing that avoids making the document appear too dense. Make
the form attractive and inviting.

3. In general, simplify. Avoid technical terms if possible, and if they
are needed, include definitions.

4. Assess the form’s reading level by using a readability formula to
ensure an appropriate level for the group under study. There are
several such formulas, including the Flesch Reading Ease score and
the Flesch–Kincaid grade level score (Flesch, 1948); a worked sketch
follows this list. Microsoft Word provides Flesch readability statistics.

5. Test the form with people similar to those who will be recruited, and
ask for feedback.
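
As a worked example for suggestion 4, here is a minimal sketch in Python of the two Flesch formulas. The crude syllable counter is an assumption made for illustration; purpose-built readability tools (or Word’s built-in statistics) are more accurate.

```python
import re

def count_syllables(word: str) -> int:
    # Crude approximation: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_scores(text: str) -> tuple[float, float]:
    """Flesch Reading Ease and Flesch-Kincaid grade level."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text) or ["a"]  # avoid dividing by zero
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences  # average words per sentence
    spw = syllables / len(words)  # average syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level

ease, grade = flesch_scores(
    "You may stop at any time. Your answers will be kept private."
)
print(f"Reading Ease: {ease:.0f}  Grade level: {grade:.1f}")
```

As a rough guide, Reading Ease scores in the 60 to 80 range correspond to approximately seventh- to ninth-grade text, in line with the reading level recommended earlier for consent materials aimed at a general population.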

In certain circumstances (for example, with non-English-speaking
participants), researchers have the option of presenting the full
information orally and then summarizing essential information in a
short form. If a short form is used, however, the oral presentation
must be witnessed by a third party and the witness’s signature must
appear on the short consent form. The signature of a third-party
witness is also advisable in studies involving more than minimal risk,
even when a comprehensive consent form is used.
When the primary means of data collection is through self-administered questionnaires, some researchers do not obtain written
informed consent because they assume implied consent (i.e., that the
return of the completed questionnaire reflects voluntary consent to
participate). In such situations, researchers often provide an
information sheet that contains all the elements of an informed consent form but does not require a signature. An example of such an
information sheet used in a study by Cheryl Beck (an author of this
book) is presented in Figure 7.1. The numbers in the margins of this
figure correspond to the types of information for informed consent
outlined earlier.

FIGURE 7.1 Example of an information sheet for study participants (University
of Connecticut template).

TIP The Toolkit in the accompanying Resource Manual includes
several informed consent forms and information sheets as Word
documents that can be adapted for your use. Most universities
now offer templates for consent forms.

Authorization to Access Private Health Information
Under HIPAA regulations in the United States, a covered entity such
as a hospital can disclose individually identifiable health information
(IIHI) from its records if the patient signs an authorization. The
authorization can be incorporated into the consent form, or it can be
a separate document. Using a separate authorization form may
be advantageous to protect the patients’ confidentiality because the
form does not need to provide detailed information about the study
purpose. If the research purpose is not sensitive, or if the entity is
already cognizant of the study purpose, an integrated form may
suffice. The authorization, whether obtained separately or as part of
the consent form, must include the following: (1) who will receive the
information; (2) what type of information will be disclosed; and (3)
what further disclosures the researcher anticipates.

Confidentiality Procedures
Study participants have the right to expect that data they provide
will be kept in strict confidence. Participants’ right to privacy is
protected through various confidentiality procedures.

Anonymity

Anonymity, the most secure means of protecting confidentiality,
occurs when the researcher cannot link participants to their data. For
example, if questionnaires were distributed to a group of nursing
home residents and were returned without any identifying
information, responses would be anonymous. As another example, if
a researcher reviewed hospital records from which all identifying
information had been expunged, anonymity would protect
participants’ right to privacy. Whenever it is possible to achieve
anonymity, researchers should strive to do so.

Example of Anonymity
Wilson et al. (2019) conducted a study of nurses’ views on
legalizing assisted dying in New Zealand. A sample of 475
nurses responded to an anonymous online survey.

Confidentiality in the Absence of Anonymity
When anonymity is not possible, other confidentiality procedures are
needed. A promise of confidentiality is a pledge that any
information participants provide will not be reported in a manner
that identifies them and will not be accessible to others. This means
that research information should not be shared with strangers nor
with people known to participants (e.g., relatives, doctors, other
nurses), unless participants give explicit permission to do so.
Researchers can take a number of steps to ensure that a breach of
confidentiality does not occur, including the following:

Obtain identifying information (e.g., name, address) from participants
only when essential.
Assign an identification (ID) number to each participant and attach the
ID number rather than other identifiers to actual data forms.
Maintain identifying information in a locked file.
Restrict access to identifying information to only a few people on a need-to-know basis.
Enter identifying information onto computer files that are encrypted (see the sketch following this list).
Destroy identifying information as quickly as practical.

Make research personnel sign confidentiality pledges if they have access
to identifying information.
Report research information in the aggregate; if information for an
individual is reported, disguise the person’s identity, such as through the
use of a fictitious name.
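
As one possible implementation of the encryption step above, here is a minimal sketch using the third-party Python cryptography package (an assumed choice; any vetted encryption tool serves). The essential design point is that the key is stored separately from the data, so the encrypted file alone reveals nothing.

```python
# Minimal sketch: encrypt a small file of identifying information.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # store the key in a separate, restricted location
fernet = Fernet(key)

identifying_info = b"id=017,name=Jane Doe,phone=555-0142"
token = fernet.encrypt(identifying_info)

with open("identifiers.enc", "wb") as f:
    f.write(token)

# Only someone holding the key can recover the original data:
print(fernet.decrypt(token).decode())
```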

TIP Researchers who plan to collect data from participants
multiple times (or who use multiple forms that need to be
linked) do not have to forego anonymity. A technique that has
been successful is to have participants themselves generate an
ID number. They might be instructed, for example, to use the
first three letters of their mother’s middle name and their birth
year as their ID code (e.g., FRA1983). This code would be put on
every form so that forms could be linked, but researchers would
not know participants’ identities.
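
A minimal sketch of such a participant-generated code, in Python; the normalization rules here are illustrative assumptions. The key property is that participants can reproduce the code from memory, so repeated forms can be linked, while researchers cannot map the code back to an identity.

```python
def linking_code(mother_middle_name: str, birth_year: int) -> str:
    """Build a participant-generated linking code: the first three
    letters of the mother's middle name plus the birth year."""
    letters = "".join(ch for ch in mother_middle_name.upper() if ch.isalpha())
    return f"{letters[:3]}{birth_year}"

# The same inputs always yield the same code, so forms link up
# across data collection points without storing identities:
assert linking_code("Frances", 1983) == "FRA1983"
assert linking_code(" frances ", 1983) == "FRA1983"
```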

Qualitative researchers may need to take extra steps to safeguard
participants’ privacy. Anonymity is rarely possible in qualitative
studies because researchers typically become closely involved with
participants. Moreover, because of the in-depth nature of qualitative
studies, there may be a greater invasion of privacy than is true in
quantitative research. Researchers who spend time in the home of a
participant may, for example, have difficulty segregating the public
behaviors that the participant is willing to share from private
behaviors that unfold during data collection. A final issue concerns
disguising participants in reports. Because the number of participants
is small, qualitative researchers need to take special precautions to
safeguard identities. This may mean more than simply using a
fictitious name. Qualitative researchers may have to slightly distort
identifying information or provide broad descriptions. For example,
a 49-year-old antique dealer with ovarian cancer might be described as “a middle-aged cancer patient who was a shop owner” to avoid
identification that could occur with the more detailed description.

Example of Confidentiality Procedures in a Qualitative
Study
Strandås et al. (2019) conducted a focused ethnography to gain a
deeper understanding of nurse–patient relationships in
Norwegian public home care. Participants (who were observed
interacting with nurses and were also interviewed) received
information about the researchers and the study, including
rights to withdraw. Oral informed consent was obtained from
patients who were included in observations. Data were
anonymized by removing names and locations and by changing
some details. Interview transcripts and audiotapes were kept in
locked files.

Certificates of Confidentiality
In some situations, confidentiality can create tensions between
researchers and legal or other authorities, especially if participants
engage in criminal activity (e.g., substance abuse). To avoid the
possibility of forced, involuntary disclosure of sensitive research
information (e.g., through a court order or subpoena), researchers in
the United States can apply for a Certificate of Confidentiality from
the National Institutes of Health (Lutz et al., 2000; Wolf et al., 2015).
Any research that involves the collection of personally identifiable,
sensitive information is potentially eligible for a Certificate.
Information is considered sensitive if its release might damage
participants’ financial standing, employability, or reputation.
Information about a person’s mental health, as well as genetic
information, is also considered sensitive. A Certificate allows
researchers to refuse to disclose identifying information on study
participants in any civil, criminal, administrative, or legislative
proceeding at the federal, state, or local level.
A Certificate of Confidentiality helps researchers to achieve their
research objectives without threat of involuntary disclosure and can
be helpful in recruiting participants. Researchers who obtain a
Certificate should inform prospective participants about this valuable protection in the consent form and should state any planned
exceptions to those protections. For example, a researcher might
decide to voluntarily comply with state child abuse reporting laws
even though the Certificate would prevent authorities from
punishing researchers who chose not to comply.

Example of Obtaining a Certificate of Confidentiality
Mallory and Hesson- McInnis (2013) pilot tested an HIV (human
immunodeficiency virus) infection prevention intervention with
incarcerated and other high-risk women. The women were
asked about various sensitive topics, and so the researchers
obtained a Certificate of Confidentiality.

Debriefings, Communications, and Referrals
Researchers can show their respect—and proactively minimize
emotional risks—by carefully attending to the nature of their
interactions with participants. For example, researchers should
always be gracious and polite, should phrase questions tactfully, and
should be considerate with regard to cultural and linguistic diversity.
Researchers can also use formal strategies to communicate respect for
participants’ well- being. For example, it is sometimes useful to offer
debriefing sessions after data collection is completed to permit
participants to ask questions or air complaints. Debriefing is
especially important when the data collection has been stressful or
when ethical guidelines had to be “bent” (e.g., if any deception was
used in explaining the study).

Example of Debriefing
Payne (2013) evaluated the effectiveness of a diabetes support
group for indigenous women in Australia. Information was
obtained before and after implementing the support group. At
the end of the study “A final group debriefing was implemented
for ethical closure” (p. 41).

It is also thoughtful to communicate with participants after the study
is completed, to let them know that their participation was
appreciated. Researchers sometimes offer to share study findings
with participants once the data have been analyzed (e.g., by emailing
them a summary). The National Academies in the United States
(2018) has published guidance on returning individual results to
participants.
Finally, in some situations, researchers may need to assist study
participants by referring them to appropriate health, social, or
psychological services.

Example of Referrals
Mwalabu et al. (2017) studied factors influencing the sexual and
reproductive healthcare experiences of female adolescents in
Malawi with perinatally acquired HIV. In- depth interviews
were conducted with 42 young women. Provisions were made
to refer the young women to support services if they became
distressed.

Treatment of Vulnerable Groups
Adherence to ethical standards is often straightforward, but
additional procedures may be required to protect special vulnerable
groups. Vulnerable populations may be incapable of giving fully
informed consent (e.g., cognitively impaired people) or may be at
risk of unintended side effects because of their circumstances (e.g.,
pregnant women). Researchers interested in studying high-risk
groups should understand guidelines governing informed consent,
risk/benefit assessments, and acceptable research procedures for such
groups. Research with vulnerable groups should be undertaken only
when the risk/benefit ratio is low or when there is no alternative (e.g.,
studies of prisoners’ health behaviors require inmates as
participants).
Among the groups that nurse researchers should consider vulnerable
are the following:

Children. Legally and ethically, children do not have competence to give
informed consent, so the informed consent of their parents or legal
guardians must be obtained. It is appropriate, however—especially if the
child is at least 7 years old—to obtain the child’s assent as well.
Assent refers to the child’s agreement to participate. If the child is mature
enough to understand basic informed consent information, it is advisable
to obtain written consent from the child and the parent, as evidence of
respect for the child’s right to self- determination. Recent research suggests
that children at the age of 12 years are competent to give consent (Hein et
al., 2015). The U.S. government has issued special regulations (Code of
Federal Regulations, 2009, Subpart D) for additional protections of
children as study participants.

TIP Crane and Broome (2017) have prepared a systematic review on the
ethical aspects of research participation from the perspective of
participating children and adolescents.

Mentally or emotionally disabled people. Individuals whose disability makes
it impossible for them to weigh the risks and benefits of participation (e.g.,
people who are in a coma) also cannot legally or ethically provide
informed consent. In such cases, researchers should obtain the written
consent of a legal guardian. If possible, informed consent or assent from
participants themselves should be sought as a supplement to a guardian’s
consent. NIH guidelines stipulate that studies involving people whose
autonomy is compromised by disability should focus directly on their
condition.
Severely ill or physically disabled people. For patients who are very ill, it
might be prudent to assess their ability to make reasoned decisions about
study participation. For certain disabilities, special procedures for
obtaining consent may be required. For example, with deaf participants,
the entire consent process may need to be in writing. For people who have
a physical impairment preventing them from writing or for participants
who cannot read, alternative procedures for documenting informed
consent (e.g., video recording consent proceedings) should be used.
The terminally ill. Terminally ill people seldom expect to benefit personally
from participating in research, and so the risk/benefit ratio needs to be
carefully assessed. Researchers must take steps to ensure that the care and
comfort of terminally ill participants are not compromised.

Institutionalized people. Prudence is required in recruiting institutionalized
people because their dependence on healthcare personnel may make them
feel pressured into participating; they may believe that their treatment
would be jeopardized by failure to cooperate. Prison inmates, who have
lost autonomy in many spheres of activity, may also feel constrained in
their ability to withhold consent. The U.S. government has issued specific
regulations for the protection of prisoners as study participants (see Code
of Federal Regulations, 2009, Subpart C). Researchers studying
institutionalized groups need to emphasize the voluntary nature of
participation.
Pregnant women. The U.S. government has issued additional requirements
governing research with pregnant women and fetuses (Code of Federal
Regulations, 2009, Subpart B). These requirements reflect a desire to
safeguard both the pregnant woman, who may be at heightened physical
and psychological risk, and the fetus, who cannot give informed consent.
The regulations stipulate that a pregnant woman cannot be involved in a
study unless its purpose is to meet the health needs of the pregnant
woman and risks to her and the fetus are minimized or there is only a
minimal risk to the fetus.

Example of Research With a Vulnerable Group
Culbert and Williams (2018) developed a culturally adapted
medication adherence intervention for prisoners living with
HIV in Indonesia. The cultural adaptation was based on an
ethnographic appraisal of the target group. The intervention
was pilot tested in two prisons in Jakarta. Participation was
voluntary, and participants were selected equitably without
prison staff involvement.

Researchers need to proceed with extreme caution in conducting
research with people who fall into two or more vulnerable categories
(e.g., incarcerated youth).

External Reviews and the Protection of Human Rights
Researchers, who have a commitment to their research, may not be
objective in their risk/benefit assessments or in their plans to protect
participants’ rights. Because a biased self-evaluation is possible, the ethical dimensions of a study normally should be subjected to
external review.
Most institutions where research is conducted have formal
committees for reviewing proposed research plans. These committees are sometimes called human subjects committees, ethical advisory boards, or research ethics committees. In the United States, the committee
usually is called an Institutional Review Board (IRB); in Canada, it
is called a Research Ethics Board (REB).

TIP You should find out early what an institution’s
requirements are regarding ethics, in terms of its forms,
procedures, and review schedules. It is wise to allow a generous
amount of time for negotiating with IRBs, which may require
modifications and re-review.

Institutional Review Boards
In the United States, federally sponsored studies are subject to strict
guidelines for evaluating the treatment of human participants. Before
undertaking such a study, researchers must submit research plans to
the IRB and must also go through formal training on ethical conduct
and a certification process.
The duty of the IRB is to ensure that the proposed plans meet federal
requirements for ethical research. An IRB can approve, require
modifications to, or disapprove the proposed plans. The main
requirements governing IRB decisions may be summarized as
follows (Code of Federal Regulations, 2009, §46.111):

Risks to participants are minimized.
Risks to participants are reasonable in relation to anticipated benefits, if
any, and the importance of the knowledge that may reasonably be
expected to result.
Selection of participants is equitable.
Informed consent will be sought, as required, and appropriately
documented.
Adequate provision is made for monitoring the research to ensure
participants’ safety.

Appropriate provisions are made to protect participants’ privacy and
confidentiality of the data.
When a vulnerable group is involved, appropriate additional safeguards
are included to protect the rights and welfare of participants.

Example of Institutional Review Board Approval
Dzikowicz and Carey (2019) evaluated the possible relationship
between QRS-T angle (a measure of repolarization
heterogeneity and potentially a predictor of poor ventricular
health) and blood pressure during exercise among on- duty
firefighters. The study was approved by the IRB of the State
University of New York.

Many studies require a full IRB review at a meeting with a majority
of IRB members present. An IRB must have five or more members, at
least one of whom is not a researcher (e.g., a lawyer or someone from
the patient population). One IRB member must be a person who is
not affiliated with the institution and is not a family member of an
affiliated person. To protect against potential biases, an IRB cannot be composed entirely of men, entirely of women, or entirely of members of a single profession.
For certain research involving no more than minimal risk, the IRB
can use expedited review procedures, which do not require a
meeting. In an expedited review, a single IRB member (usually the
IRB chairperson) carries out the review. An example of research that
qualifies for an expedited IRB review is minimal- risk research “…
employing survey, interview, oral history, focus group, program
evaluation, human factors evaluation, or quality assurance
methodologies” (Code of Federal Regulations, 2009, §46.110).
Federal regulations also allow certain types of research in which there is no apparent risk to participants to be exempt from IRB
review. The website of the Office for Human Research Protections, in
its policy guidance section, includes decision charts designed to
clarify whether a study is exempt.

TIP Researchers seeking a Certificate of Confidentiality must
first obtain IRB approval, which is a prerequisite for the
Certificate. Applications for the Certificate should be submitted
at least 3 months before participants are expected to enroll in the
study.

Data and Safety Monitoring Boards
In addition to IRBs, researchers in the United States may have to
communicate information about ethical aspects of their studies to
other groups. For example, some institutions have established
separate Privacy Boards to review researchers’ compliance with
provisions in HIPAA, including review of authorization forms and
requests for waivers.
For researchers evaluating interventions in clinical trials, NIH also
requires review by a data and safety monitoring board (DSMB). The
purpose of a DSMB is to oversee the safety of participants, to
promote data integrity, and to review accumulated outcome data on
a regular basis to evaluate whether study protocols should be altered
or the study stopped altogether. Members of a DSMB are selected
based on their clinical, statistical, and methodologic expertise. The
degree of monitoring by the DSMB should be proportionate to the
degree of risk involved. Slimmer and Andersen (2004) offer
suggestions on developing a data and safety monitoring plan. Artinian et al. (2004) provided good descriptions of their data and safety monitoring plan for a study of a nurse-managed telemonitoring intervention and
discussed how IRBs and DSMBs differ.

Building Ethics Into the Design of the Study
Researchers need to give thought to ethical requirements while
planning a study and should ask themselves whether intended
safeguards are sufficient. They must continue their vigilance
throughout the course of the study as well, because unforeseen
ethical dilemmas may arise. Of course, first steps in doing ethical
research include asking clinically important questions and using rigorous methods—it can be construed as unethical to do poorly
designed research because it would be a poor use of participants’
time. Another issue concerns dissemination: it can be considered
unethical and wasteful of people’s time to not communicate research
findings to others.
The remaining chapters of the book offer advice on how to design
studies that yield high- quality evidence for practice. Methodologic
decisions about rigor, however, must be made within the context of
ethical requirements. Box 7.2 presents examples of questions that
might be posed in thinking about ethical aspects of study design.

TIP After study procedures have been developed, researchers
should evaluate those procedures to determine if they meet
ethical requirements. Box 7.3 later in this chapter provides
guidelines that can be used for such a self-evaluation.

Other Ethical Issues
In discussing ethical issues relating to the conduct of nursing
research, we have given primary consideration to the protection of
human participants. Two other ethical issues also deserve mention:
the treatment of animals in research and research misconduct.

Ethical Issues in Using Animals in Research
Some nurse researchers work with animal subjects. Despite
opposition to such research by animal rights activists, researchers in
health fields likely will continue to use animals to explore
physiologic mechanisms and interventions that could pose risks to
humans.
Ethical considerations are clearly different for animals and humans:
the concept of informed consent is not relevant for animal subjects.
Guidelines have been developed governing treatment of animals in
research. In the United States, the Public Health Service has issued a
policy statement on the humane care and use of laboratory animals.
The guidelines articulate nine principles for the proper treatment of
animals used in biomedical and behavioral research. These principles
cover such issues as alternatives to using animals, pain and distress
in animal subjects, researcher qualifications, use of appropriate
anesthesia, and conditions for euthanizing animals. In Canada,
researchers who use animals in their studies must adhere to the
policies and guidelines of the Canadian Council on Animal Care
(CCAC) as articulated in their Guide to the Care and Use of Experimental
Animals. Holtzclaw and Hanneman (2002) noted several important
considerations in the use of animals in nursing research, and Osier et
al. (2016) discussed the use of animal models in genomic nursing
research.

Example of Research With Animals
Kupferschmid and Therrien (2018) investigated the time
trajectory of age-dependent sickness responses over 5 days in adult and aged male Brown-Norway rats. Animals were housed
individually in a temperature- controlled room and allowed free
access to food and water. The University of Michigan Animal
Care and Use Committee approved all procedures.

Research Misconduct
Ethics in research involves not only the protection of human and
animal subjects but also protection of the public trust. The issue of
research misconduct has received greater attention in recent years as
incidents of researcher fraud and misrepresentation have come to
light. Currently, the agency in the United States responsible for
overseeing efforts to improve research integrity and for handling
allegations of research misconduct is the Office of Research Integrity
(ORI). Researchers seeking funding from NIH must demonstrate that
they have received training on research integrity and the responsible
conduct of research.
Research misconduct is defined by U.S. Public Health Service
regulation (42 CFR Part 93.103) as “fabrication, falsification, or
plagiarism in proposing, performing, or reviewing research, or in
reporting research results.” To be construed as misconduct, there
must be a significant departure from accepted practices in the
research community, and the misconduct must have been committed
intentionally and knowingly. Fabrication involves making up data or
study results. Falsification involves manipulating research materials,
equipment, or processes; it also involves changing or omitting data or
distorting results. Plagiarism involves the appropriation of someone’s
ideas, results, or words without giving due credit, including
information obtained as a reviewer of research proposals or
manuscripts.

Example of Research Misconduct
In 2015, the U.S. ORI ruled that a researcher engaged in
scientific misconduct in a study supported by the NINR. The
researcher falsified and fabricated data that were reported in five publications and three grant applications submitted to the
NINR.

Although the official definition focuses on only three types of
misconduct, there is widespread agreement that research misconduct
covers many other issues including improprieties of authorship, poor
data management, conflicts of interest, inappropriate financial
arrangements, failure to comply with governmental regulations, and
unauthorized use of confidential information.
Research integrity is an important concern in nursing. Habermann et
al. (2010) studied 1,645 research coordinators’ experiences with
research misconduct in their clinical environments. More than 250
coordinators, most of them nurses, said they had firsthand
knowledge of research misconduct that included protocol violations,
consent violations, fabrication, falsification, and financial conflicts of
interest. Fierz et al. (2014) concluded that research misconduct in
nursing science “not only compromises scientific integrity by
distorting empirical evidence, but it might endanger patients” (p.
271).

Critical Appraisal of Ethics in Research
Guidelines for critically appraising the ethical aspects of a study are
presented in Box 7.3. Members of an ethics committee should be
provided with sufficient information to answer all these questions.
Research journal articles, however, do not always include detailed
information about ethics because of space constraints. Thus, it is not
always possible to evaluate researchers’ adherence to ethical
guidelines, but we offer a few suggestions for considering a study’s
ethical aspects.
Many research reports acknowledge that study procedures were
reviewed by an IRB or ethics committee, and some journals require
such statements. When a report specifically mentions a formal
review, it is usually safe to assume that a group of concerned people
did a conscientious review of the study’s ethical issues.
You can also come to some conclusions based on a description of the
study methods. There may be sufficient information to judge, for
example, whether participants were subjected to harm or discomfort.
Reports do not always state whether informed consent was secured,
but you should be alert to situations in which the data could not have
been gathered as described if participation were purely voluntary
(e.g., if data were gathered unobtrusively).
In thinking about ethical issues, you should also consider who the
study participants were. For example, if a study involved vulnerable
groups, there should be more information about protective
procedures. You might also need to consider who the study
participants were not. For example, there has been considerable
concern about the omission of certain groups (e.g., minorities) from
clinical research.
It is often difficult to determine whether the participants’ privacy was
safeguarded unless the researcher mentions pledges of
confidentiality or anonymity. A situation requiring special scrutiny
arises when data are collected from two related people, such as a
husband/wife or parent/child, who are interviewed either jointly or
separately (Forbat & Henderson, 2003; Haahr et al., 2014). For example, researchers may struggle with asking one person probing
questions after having been given confidential information about the
issue by the other.

Research Examples
Two research examples that highlight ethical issues are presented in
the following sections.

Research Example From a Quantitative Study

Study: Using simulated family presence to decrease agitation in
older hospitalized delirious patients (Waszynski et al., 2018).
Study purpose: The purpose of this study was to examine the effect
of simulated family presence through prerecorded video messages
on the agitation level of delirious, acutely agitated hospitalized
patients.
Research methods: A total of 111 hospitalized patients in an inner-city trauma center experiencing delirium participated in the study.
Participants were assigned, at random, to one of three groups. One
group viewed a 1-minute family video message, the second group viewed a 1-minute nature video, and the third group had usual care
without a video. Patients’ level of agitation before, during, and after
the intervention was measured.
Ethics-related procedures: The study was approved and monitored
by the IRBs of Hartford Healthcare and the researchers’ university, in
accordance with the Code of Ethics of the World Medical
Association. Because all participants were delirious, informed
consent was obtained from a legally appointed representative or next
of kin. Informed consent was also obtained from members of the
family who participated in the creation of the family video message.
The principal investigator assessed each patient for delirium and
obtained verbal assent. Assent was obtained by asking participants if
the researcher could return later that day—if and when the patient
felt “out of sorts”—to show a video. An independent observer rated
each family video message as positive, neutral, or negative; the vast
majority were rated as positive with an encouraging message.
Key findings: A significantly greater proportion of participants in the
family video group (94%) experienced a reduction in agitation from pre-intervention to during the intervention than those viewing the
nature video (70%) or those in usual care (30%).

Research Example From a Qualitative Study

Study: The changing nature of relationships between parents and
healthcare providers when a child dies in the pediatric intensive care
unit (Butler et al., 2018).
Study purpose: The purpose of the study was to explore bereaved
parents’ interactions with healthcare providers when a child dies in
the pediatric intensive care unit.
Study methods: The researchers used a grounded theory approach.
Data for the study were gathered through in-depth interviews with
26 bereaved parents from four pediatric ICUs. The interviews, which
lasted between 1.5 and 2.5 hours, were mostly undertaken in the
participants’ homes and were audio- recorded.
Ethics-related procedures: The study was approved by human
research ethics committees in the relevant facilities. Participants
signed written informed consent forms, and verbal consent was
reaffirmed throughout the interview process. The interviews were
conducted with parents either individually or jointly, at their request.
The researchers, who were conscious of the highly sensitive nature of
the research, paid significant attention to the parents’ psychological
well- being. The consent form identified the strong likelihood of
emotional distress, to enhance the parents’ ability to make an
informed decision about their participation. The researchers also
encouraged the parents to take breaks during the interview and to
call upon personal coping strategies. The interviewer, in preparing
for this project, took a bereavement counseling course, to be able to
offer support following the interview during debriefing. The
researchers offered further support in a follow-up telephone call. Social workers were available for ongoing follow-up if required.
Key findings: The researchers identified a three-phase process that
they described as “Transitional togetherness.” In phase 1,
“Welcoming expertise,” the focus was on the child’s medical needs.
In phase 2, “Becoming a team” involved working collaboratively with providers. Finally, in the “Gradually disengaging” phase, the
parents expressed a desire for the relationship with providers to
continue after the child’s death.

Summary Points

Researchers sometimes face ethical dilemmas in designing studies that
are rigorous and ethical. Codes of ethics have been developed to guide
researchers.
Three major ethical principles from the Belmont Report are incorporated
into most guidelines: beneficence, respect for human dignity, and justice.
Beneficence involves the performance of some good and the protection of
participants from physical and psychological harm and exploitation.
Respect for human dignity involves participants’ right to self-determination, which means they are free to control their own actions,
including voluntary participation.
Full disclosure means that researchers have fully divulged participants’
rights and the risks and benefits of the study. When full disclosure could
bias the results, researchers sometimes use covert data collection (the
collection of information without the participants’ knowledge or consent)
or deception (providing false information).
Justice includes the right to fair treatment and the right to privacy. In the
United States, privacy has become a major issue because of the Privacy
Rule regulations that resulted from the Health Insurance Portability and
Accountability Act (HIPAA).
Various procedures have been developed to safeguard study participants’
rights. For example, researchers can conduct a risk/benefit assessment in
which the potential benefits of the study to participants and society are
weighed against the risks.
Informed consent procedures, which provide prospective participants
with information needed to make a reasoned decision about participation,
normally involve signing a consent form to document voluntary and
informed participation.
In qualitative studies, consent may need to be continually renegotiated
with participants as the study evolves, through process consent
procedures.
Privacy can be maintained through anonymity (wherein not even
researchers know participants’ identities) or through formal
confidentiality procedures that safeguard the information participants
provide. Researchers must guard against a breach of confidentiality.

U.S. researchers can seek a Certificate of Confidentiality that protects
them against the forced disclosure of confidential information (e.g., by a
court order).
Researchers sometimes offer debriefing sessions after data collection to
provide participants with more information or an opportunity to air
complaints.
Vulnerable groups require additional protection. These people may be
vulnerable because they are unable to make a truly informed decision
about study participation (e.g., children); because of diminished
autonomy (e.g., prisoners); or because circumstances heighten the risk of
physical or psychological harm (e.g., pregnant women).
External review of the ethical aspects of a study by an ethics committee or
Institutional Review Board (IRB) is often required by the agency funding
the research and the organization from which participants are recruited.
In studies in which risks to participants are minimal, an expedited review
by a single member of the IRB may be substituted for a full board review;
in cases in which there are no anticipated risks, the research may be
exempted from review.
Researchers need to give careful thought to ethical requirements
throughout the study’s planning and implementation and to ask
themselves continually whether safeguards for protecting humans are
sufficient.
Ethical conduct in research involves not only protection of the rights of
human and animal subjects, but also efforts to maintain high standards of
integrity and avoid such forms of research misconduct as plagiarism,
fabrication of results, or falsification of data.

Study Activities
Study activities are available to instructors on .

Box 7.1 Potential Benefits and Risks of Research to
Participants

Major potential benefits to participants

Access to a potentially beneficial intervention that might otherwise be unavailable

Comfort in being able to discuss their situation or problem with a friendly, objective person

Increased knowledge about themselves or their conditions, either through opportunity for
introspection and self-reflection or through direct interaction with researchers

Escape from normal routine

Satisfaction that information they provide may help others with similar conditions

Direct monetary or material gains through stipends or other incentives

Major potential risks to participants

Physical harm, including unanticipated side effects

Discomfort, fatigue, or boredom

Emotional distress as a result of self-disclosure, introspection, discomfort with strangers, fear
of repercussions, anger, or embarrassment at the questions being asked

Social risks, such as the risk of stigma, adverse effects on personal relationships, loss of status

Loss of privacy

Loss of time

Monetary costs (e.g., for transportation, child care, time lost from work)

Box 7.2 Examples of Questions for Building Ethics into the
Design of a Study

Research Design

Will participants be assigned fairly to different treatment groups?
Will the study setting minimize participants’ discomfort or anxiety?

Intervention

Is the intervention designed to maximize benefits and minimize harms?
Under what conditions could the intervention be withdrawn or altered?

Sample

Is the population under study defined so as to minimize the risk that
certain types of people (e.g., women, minorities) will be excluded or
underrepresented?
Will potential participants be recruited into the study equitably and
without the use of coercion?

Data Collection

Will respondent burden be minimized? Will participants’ time be used
efficiently?
Will procedures for ensuring confidentiality of data be adequate?
Will data collection staff be trained to be courteous, respectful, and
caring?

Reporting

Will participants’ identities be adequately protected?

Box 7.3 Guidelines for Critically Appraising the Ethical
Aspects of a Study

1. Was the study approved and monitored by an Institutional Review Board,
REB, or other similar ethics review committee?

2. Were participants subjected to any physical harm, discomfort, or
psychological distress? Did the researchers take appropriate steps to
remove, prevent, or minimize harm?

3. Did the benefits to participants outweigh any potential risks or actual
discomfort they experienced? Did the benefits to society outweigh the
costs to participants?

4. Was any type of coercion or undue influence used to recruit participants?
Did they have the right to refuse to participate or to withdraw without
penalty?

5. Were participants deceived in any way? Were they fully aware of
participating in a study and did they understand the purpose and nature
of the research?

6. Were appropriate informed consent procedures used? If not, were there
valid and justifiable reasons?

7. Were adequate steps taken to safeguard participants’ privacy? How was
confidentiality maintained? Were Privacy Rule procedures followed (if
applicable)? Was a Certificate of Confidentiality obtained? If not, should
one have been obtained?

8. Were vulnerable groups involved in the research? If yes, were special
precautions used because of their vulnerable status?

9. Were groups omitted from the inquiry without a justifiable rationale, such
as women (or men), minorities, or older people?

References Cited in Chapter 7
Artinian N., Froelicher E., & Wal J. (2004). Data and safety monitoring during randomized controlled trials of nursing interventions. Nursing Research, 53,
414–418.

Beck C. T., LoGiudice J., & Gable R. K. (2015). A mixed methods study of
secondary traumatic stress in certified nurse-midwives: Shaken belief in the
birth process. Journal of Midwifery and Women’s Health, 60, 16–23.

Butler A., Hall H., & Copnell B. (2018). The changing nature of relationships
between parents and healthcare providers when a child dies in the
paediatric intensive care unit. Journal of Advanced Nursing, 74, 89–99.

* Coombs M., Parker R., & de Vries K. (2017). Managing risk during care
transitions when approaching end of life: A qualitative study of patients’
and health care professionals’ decision making. Palliative Medicine, 31, 617–
624.

Crane S., & Broome M. (2017). Understanding ethical issues of research
participation from the perspective of participating children and adolescents:
A systematic review. Worldviews on Evidence- Based Nursing, 14, 200–209.

Culbert G., & Williams A. (2018). Cultural adaptation of a medication
adherence intervention with prisoners living with HIV in Indonesia. Journal
of the Association of Nurses in AIDS Care, 29, 454–465.

* Domecq J., Prutsky G., Elraiyah T., Wang Z., Nabhan M., Shippee N., …
Murad M. H. (2014). Patient engagement in research: A systematic review.
BMC Health Services Research, 14, 89.

** Dzikowicz D., & Carey M. (2019). Widened QRS- T angle may be a measure
of poor ventricular stretch during exercise among on- duty firefighters.
Journal of Cardiovascular Nursing, 34(3), 201- 207.

* Edwards P., Roberts I., Clarke M., Diguiseppi C., Wen� R., Kwan I., … Pratap
S. (2009). Methods to increase response to postal and electronic
questionnaires. Cochrane Database of Systematic Reviews, MR000008.

Eide P., & Khan D. (2008). Ethical issues in the qualitative researcher- –
participant relationship. Nursing Ethics, 15, 199–207.

Elle� M., Lane L., & Keffer J. (2004). Ethical and legal issues of conducting
nursing research via the Internet. Journal of Professional Nursing, 20, 68–74.

* Fields L., & Calvert J. (2015). Informed consent procedures with cognitively
impaired patients: A review of ethics and best practices. Psychiatry and
Clinical Neurosciences, 69, 462–471.

Fierz K., Gennaro S., Dierickx K., Van Achtenberg T., Morin K., & DeGeest S.
(2014). Scientific misconduct: Also an issue in nursing science? Journal of
Nursing Scholarship, 46, 271–280.

Flesch R. (1948). New readability yardstick. Journal of Applied Psychology, 32,
221–223.

Forbat L., & Henderson J. (2003). “Stuck in the middle with you”: The ethics
and process of qualitative research with two people in an intimate
relationship. Qualitative Health Research, 13, 1453–1462.

Gennaro S. (2014). Conducting important and ethical research. Journal of
Nursing Scholarship, 46, 2.

Haahr A., Norlyk A., & Hall E. (2014). Ethical challenges embedded in
qualitative research interviews with close relatives. Nursing Ethics, 21, 6–15.

* Habermann B., Broome M., Pryor E., Ziner K. W. (2010). Research
coordinators’ experiences with scientific misconduct and research integrity.
Nursing Research, 59, 51–57.

Heilferty C. M. (2011). Ethical considerations in the study of online illness
narratives. Journal of Advanced Nursing, 67, 945–953.

* Hein I., DeVries M., Troost P., Meynen G., Van Goudoever J., & Lindauer R.
(2015). Informed consent instead of assent is appropriate in children from
the age of twelve: Policy implications of new findings on children’s
competence to consent in clinical research. BMC Medical Ethics, 16, 76.

Hol�claw B. J., & Hanneman S. (2002). Use of non- human biobehavioral
models in critical care nursing research. Critical Care Nursing Quarterly, 24,
30–40.

Kupferschmid B., & Therrien B. (2018). Spatial learning responses to
lipopolysaccharide in adults and aged rats. Biological Research for Nursing, 20,
32–39.

Lu� K. F., Shelton K., Robrecht L., Ha�on D., & Becke� A. (2000). Use of
certificates of confidentiality in nursing research. Journal of Nursing
Scholarship, 32, 185–188.

Mallory C., & Hesson- McInnis M. (2013). Pilot test results of an HIV prevention
intervention for high- risk women. Western Journal of Nursing Research, 35,
313–329.

* Mwalabu G., Evans C., & Redsell S. (2017). Factors influencing the experience
of sexual and reproductive healthcare for female adolescents with
perinatally- acquired HIV. BMC Women’s Health, 17, 125.

* National Academies of Sciences, Engineering & Medicine (2018). Returning
individual research results to participants: Guidance for a new research

paradigm. Washington, DC: National Academies.
* Nishimura A., Carey J., Erwin P., Tilburt J., Murad M., & McCormick J. (2013).

Improving understanding in the research informed consent process: A
systematic review of 54 interventions tested in randomized control trials.
BMC Medical Ethics, 14, 28.

Osier N., Pham L., Savarese A., Sayles K., & Alexander S. (2016). Animal
models in genomic research: Techniques, applications, and roles for nurses.
Applied Nursing Research, 32, 247–256.

Payne C. (2013). A diabetes support group for Nywaigi women to enhance
their capacity for maintaining physical and mental wellbeing. Contemporary
Nurse, 46, 41–45.

Polit D. F., & Beck C. T. (2009). International gender bias in nursing research,
2005–2006: A quantitative content analysis. International Journal of Nursing
Studies, 46, 1102–1110.

Polit D. F., & Beck C. T. (2013). Is there still gender bias in nursing research? An
update. Research in Nursing & Health, 36, 75–83.

Silva M. C. (1995). Ethical guidelines in the conduct, dissemination, and
implementation of nursing research. Washington, DC: American Nurses
Association.

Simpson C. (2010). Decision- making capacity and informed consent to
participate in research by cognitively impaired individuals. Applied Nursing
Research, 23, 221–226.

Slimmer L., & Andersen B. (2004). Designing a data and safety monitoring
plan. Western Journal of Nursing Research, 26, 797–803.

* Stormorken E., Jason L., & Kirkevold M. (2017). Factors impacting the illness
trajectory of post- infectious fatigue syndrome. BMC Public Health, 17, 952.

Strandås M., Wackerhausen S., & Bondas T. (2019). The nurse- patient
relationship in the new public management era, in public home care: A
focused ethnography. Journal of Advanced Nursing, 75(2), 400–411.

Waszynski C., Milner K., Staff I., & Molony S. (2018). Using simulated family
presence to decrease agitation in older hospitalized delirious patients: A
randomized controlled trial. International Journal of Nursing Studies, 77, 154–
161.

Wilson M., Oliver P., & Malpas P. (2019). Nurses’ views on legalizing assisted
dying in New Zealand: A cross- sectional study. International Journal of
Nursing Studies, 89, 116–124.

* Wolf L. E., Patel M., Williams- Tarver B., Austin J., Dame L., & Beskow L.
(2015). Certificates of confidentiality: Protecting human subject research data

in law and practice. Journal of Law, Medicine, and Ethics, 43, 594–609.
Zhang S., Shoptaw S., Reback C., Yadav K., & Nyamathi A. (2018). Cost- –

effective way to reduce stimulant- abuse among gay/bisexual men and
transgender women. Public Health, 154, 151–160.

*A link to this open- access journal article is provided in the Toolkit for Chapter
7 in the Resource Manual.

**This journal article is available on for this chapter.

C H A P T E R 8

Planning a Nursing Study

Advance planning is required for all research. This chapter provides
advice for planning qualitative and quantitative studies.

Tools and Concepts for Planning Rigorous Research
This section discusses key methodologic concepts and tools in meeting the
challenges of doing rigorous research.

Inference
Inference is an integral part of doing and evaluating research. An
inference is a conclusion drawn from the study evidence, taking into
account the methods used to generate that evidence. Inference is the
attempt to come to conclusions based on limited information, using logical
reasoning processes.
Inference is necessary because researchers use proxies that “stand in” for
the things that are fundamentally of interest. A sample of participants is a
proxy for an entire population. A study site is a proxy for all relevant sites
in which the phenomena of interest could unfold. A control group that
does not receive an intervention is a proxy for what would happen to
those receiving the intervention if they did not receive it.
Researchers face the challenge of using methods that yield persuasive
evidence in support of inferences they wish to make.

Reliability, Validity, and Trustworthiness
Researchers want their inferences to correspond with the truth. Research
cannot contribute evidence to guide clinical practice if the findings are
biased or fail to represent the experiences of the target group. Consumers
of research need to assess the quality of a study’s evidence by evaluating
the conceptual and methodologic decisions the researchers made, and
those who do research must strive to make decisions that result in
high-quality evidence.
Quantitative researchers use several criteria to assess the rigor of a study,
sometimes referred to as its scientific merit. Two especially important
criteria are reliability and validity. Reliability refers to the accuracy and
consistency of information obtained in a study. The term is most often
associated with the methods used to measure variables. For example, if a
thermometer measured Alan’s temperature as 98.1°F one minute and as
102.5°F the next minute, the reliability of the thermometer would be
suspect.
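
To make the idea concrete, here is a minimal sketch in Python (hypothetical readings, echoing the thermometer example above) showing how the spread across repeated measurements signals a reliability problem:

```python
# Hypothetical back-to-back readings from the same person, per the example above.
readings_f = [98.1, 102.5]

spread = max(readings_f) - min(readings_f)
print(f"Spread across repeated readings: {spread:.1f} degrees F")
# A 4.4-degree swing within a minute far exceeds plausible physiologic change,
# so the consistency (reliability) of the instrument is suspect.
```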

Validity is a more complex concept that broadly concerns the soundness of
the study’s evidence—whether the findings are unbiased and well
grounded. Like reliability, validity is a key criterion for evaluating
methods to measure variables. In this context, the validity question is
whether the methods are really measuring the concepts that they purport
to measure. Is a self- reported measure of depression really measuring
depression? Or is it measuring something else, such as loneliness?
Researchers strive for solid conceptualizations of research variables and
valid methods to operationalize them.
Validity is also relevant with regard to inferences about the effect of the
independent variable on the dependent variable. Did a nursing
intervention really bring about improvements in patients’ outcomes—or
were other factors responsible for patients’ progress? Researchers make
numerous methodologic decisions that influence this type of study
validity. Yet another validity question concerns whether the evidence can
validly be extrapolated to people who did not participate in the study.
Qualitative researchers use different criteria (and different terminology) in
evaluating a study’s quality. Qualitative researchers pursue methods of
enhancing the trustworthiness of the study evidence (Lincoln & Guba,
1985). Trustworthiness encompasses several dimensions—credibility,
transferability, confirmability, dependability, and authenticity—which are
described in Chapter 26.
Credibility, an especially important aspect of trustworthiness, is achieved
to the extent that the research methods inspire confidence that the results
and interpretations are truthful. Credibility can be enhanced in various
ways, but one strategy merits early discussion because it has implications
for the design of all studies, including quantitative ones. Triangulation is
the use of multiple sources or referents to draw conclusions about what
constitutes the truth. In a quantitative study, this might mean using
multiple measures of an outcome variable to see if predicted effects are
consistent. In a qualitative study, triangulation might involve trying to
reveal the complexity of a phenomenon by using multiple means of data
collection to converge on the truth (e.g., having in- depth discussions with
participants, as well as watching their behavior in natural settings). Or, it
might involve triangulating the interpretations of multiple researchers
working together as a team. Nurse researchers are increasingly
triangulating across paradigms—that is, integrating both qualitative and
quantitative data in a mixed methods study to enhance the validity of the
conclusions (Chapter 27).


Example of Triangulation
Bower et al. (2018) conducted an exploratory study of nurses’
decision- making when they are interrupted during administration of
medication in the pediatric intensive care unit (PICU). During their
fieldwork, the researchers conducted in- depth interviews with PICU
nurses and made observations during medication administration.

Nurse researchers need to design their studies in such a way that the
reliability, validity, and trustworthiness of their studies are maximized.
This book offers advice on how to do this.

Bias
A bias is an influence that produces a distortion or error. Bias can threaten
a study’s validity and trustworthiness and is a major concern in designing
a study. Bias can result from factors that need to be considered in planning
a study. These include the following:

Participants’ lack of candor. Sometimes people distort their behavior or statements
—consciously or subconsciously—to present themselves in the best light.
Researcher subjectivity. Investigators may distort inferences in the direction of
their expectations or in line with their own experiences—or they may
unintentionally communicate their expectations to participants and thereby
induce biased responses.
Sample imbalances. The sample itself may be biased; for example, if a researcher
studying abortion attitudes included only members of right-to-life (or
pro-choice) groups in the sample, the results would be distorted.
Faulty methods of data collection. Inadequate methods of capturing concepts can
lead to biases; for example, a flawed measure of patient satisfaction with
nursing care may exaggerate or underestimate patients’ concerns.
Inadequate study design. A researcher may structure the study in such a way that
an unbiased answer to the research question cannot be achieved.
Flawed implementation. Even a well- designed study can sustain biases if the study
protocols are not carefully implemented.

A researcher’s job is to reduce or eliminate bias to the extent possible, to
establish mechanisms to detect or measure it when it exists, and to take
known biases into account in interpreting study findings. The job of
consumers is to scrutinize methodologic decisions to reach conclusions
about whether biases undermined the study evidence.

Unfortunately, bias can seldom be avoided totally because the potential for
its occurrence is pervasive. Some bias is haphazard. Random bias (or
random error) is essentially “noise” in the data. When error is random,
distortions are as likely to bias results in one direction as the other.
Systematic bias, on the other hand, is consistent and distorts results in a
single direction. For example, if a scale consistently measured people’s
weights as being 2 pounds heavier than their true weight, there would be
systematic bias in the data on weight.
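
The distinction can be illustrated with a short simulation. This is only a sketch with made-up numbers, but it shows why random error tends to wash out as observations accumulate, whereas systematic bias does not:

```python
import random
from statistics import mean

rng = random.Random(7)
true_weight = 150.0  # a person's true weight, in pounds

# Random error: "noise" that is as likely to push a reading up as down.
random_error = [true_weight + rng.gauss(0, 2) for _ in range(1000)]

# Systematic bias: a scale that consistently reads 2 pounds heavy.
systematic_bias = [true_weight + 2 + rng.gauss(0, 2) for _ in range(1000)]

print(round(mean(random_error), 1))    # close to 150.0: random error averages out
print(round(mean(systematic_bias), 1)) # close to 152.0: the distortion persists
```
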
Researchers adopt a variety of strategies to eliminate or minimize bias and
strengthen study rigor. Triangulation is one such approach, the idea being
that multiple sources of information or points of view can help
counterbalance biases and offer avenues to identify them. Methods that
quantitative researchers use to combat bias often involve research control.

Research Control
Quantitative researchers usually make efforts to control aspects of the
study. Research control typically involves holding constant other
influences on the dependent variable so that the true relationship between
the independent and dependent variables can be understood. In other
words, research control attempts to eliminate contaminating factors that
might obscure the relationship between the variables of central interest.
Contaminating factors—called confounding (or extraneous) variables—
can best be illustrated with an example. Suppose we were studying
whether urinary incontinence (UI) affects depression. Prior evidence
suggests a link, but the question is whether UI itself (the independent
variable) contributes to higher levels of depression, or whether other
factors account for the relationship between UI and depression. We need
to design a study to control other determinants of depression that are also
related to the independent variable, UI.
One confounding variable in this situation is age. Levels of depression
tend to be higher in older people; people with UI tend to be older than
those without this problem. In other words, perhaps age is the real cause of
higher depression in people with UI. If age is not controlled, then any
observed relationship between UI and depression could be caused by UI
or by age.
Three possible explanations might be portrayed schematically as follows:

1. UI → depression

2. Age → UI → depression

3. Age → UI; Age → depression; UI → depression

The arrows here symbolize a causal mechanism or an influence. In Model
1, UI directly affects depression, independent of any other factors. In
Model 2, UI is a mediating variable—the effect of age on depression is
mediated by UI. According to this representation, age affects depression
through the effect that age has on UI. In Model 3, both age and UI have
separate effects on depression and age also increases the risk of UI. Some
research is specifically designed to test paths of mediation and multiple
causation, but in the present example, age is extraneous to the research
question. We want to design a study so that the first explanation can be
tested. Age must be controlled if our goal is to explore the validity of
Model 1, which posits that, no matter what a person’s age, having UI
makes a person more vulnerable to depression.
How can we impose such control? There are several ways (Chapter 10),
but the general principle is that confounding variables must be held
constant. The confounding variable must somehow be handled so that, in
the context of the study, it is not related to the independent variable or the
outcome. As an example, let us say we wanted to compare the average
scores on a depression scale for those with and without UI. We would
want to design a study in such a way that the ages of those in the UI and
non- UI groups are comparable, even though, in general, the groups are not
comparable in terms of age.
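
As a sketch of what “holding age constant” can look like (hypothetical depression scores, and only one of the several control strategies elaborated in Chapter 10), a stratified comparison contrasts the UI and non-UI groups within each age stratum rather than overall:

```python
from statistics import mean

# Hypothetical records: (has_UI, age_group, depression_score)
records = [
    (True,  "65+", 14), (True,  "65+", 16), (True,  "<65", 11),
    (False, "65+", 13), (False, "<65", 8),  (False, "<65", 9),
]

def group_mean(rows):
    return mean(score for _, _, score in rows)

# Naive comparison: mixes the effect of UI with the effect of age,
# because the UI group happens to be older.
ui_rows = [r for r in records if r[0]]
no_ui_rows = [r for r in records if not r[0]]
print("Unadjusted difference:", group_mean(ui_rows) - group_mean(no_ui_rows))

# Stratified comparison: age is held constant by comparing
# UI versus no-UI within each age group separately.
for age in ("<65", "65+"):
    ui = [r for r in records if r[0] and r[1] == age]
    no_ui = [r for r in records if not r[0] and r[1] == age]
    if ui and no_ui:
        print(f"Difference among {age}:", group_mean(ui) - group_mean(no_ui))
```
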
By exercising control over age, we would have more confidence in
explaining the relationship between UI and depression. The world is
complex: many variables are interrelated in complicated ways. When
studying a problem in a quantitative study, it is difficult to examine this
complexity directly; researchers analyze only a few relationships at a time.

The value of the evidence in quantitative studies is often related to how
well researchers controlled confounding influences. In the present
example, we identified one variable (age) that could affect depression, but
dozens of others might be relevant (e.g., social support, self- efficacy).
Researchers need to isolate the independent and dependent variables in
which they are interested and then identify confounding variables that
need to be controlled.

Confounding variables need to be controlled only if they are
simultaneously related to both the dependent and independent variables,
as explained in the Supplement to this chapter.
Research control is a critical tool for managing bias and enhancing validity
in quantitative studies. Sometimes, however, too much control can
introduce bias. If researchers tightly control the ways in which key study
variables are manifested, the true nature of those variables may be
obscured. In studying phenomena that are poorly understood or whose
dimensions have not been clarified, a qualitative approach that allows
flexibility and exploration is more appropriate.

Randomness
For quantitative researchers, bias reduction often involves randomness—
having features of the study established by chance rather than by
researcher preference. When people are selected at random to participate
in the study, for example, each person in the initial pool has an equal
probability of being selected—which means that there are no systematic
biases in the sample’s makeup. Similarly, if participants are assigned
randomly to groups that will be compared (for example, intervention and
“usual care” groups), then there would be no systematic biases in the
groups’ composition. Randomness is a compelling method of controlling
confounding variables and reducing bias.
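
As a minimal sketch (hypothetical participant IDs and group labels), random assignment can be as simple as shuffling the sample and alternating group labels, so that chance rather than researcher preference determines each group’s composition:

```python
import random

def randomly_assign(participants, groups=("intervention", "usual care"), seed=2024):
    """Assign each participant to a group purely by chance."""
    rng = random.Random(seed)  # a fixed seed makes the allocation reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    # Alternating labels after a random shuffle yields equal-sized groups with
    # no systematic differences in composition (in expectation).
    return {p: groups[i % len(groups)] for i, p in enumerate(shuffled)}

allocation = randomly_assign(f"P{n:03d}" for n in range(1, 111))
print(sum(g == "intervention" for g in allocation.values()))  # 55 of 110
```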

Example of Randomness
Van der Meulen et al. (2018) tested a protocol that involved screening
with the Distress Thermometer and Problem List for patients with
head and neck cancer. A total of 110 patients were assigned, at
random, to either the Distress Thermometer intervention or to usual
care. The two groups were then compared in terms of cancer worry,
depressive symptoms, and quality of life.

Qualitative researchers almost never consider randomness a desirable tool.
Qualitative researchers tend to use information obtained early in the study
in a purposeful (nonrandom) fashion to guide their inquiry and to pursue
information- rich sources that can help them expand or refine their
conceptualizations. Researchers’ judgments are viewed as indispensable
vehicles for uncovering the complexities of phenomena of interest.

Reflexivity
Qualitative researchers do not use research control or randomness, but
they are as interested as quantitative researchers in discovering the truth
about human experience. Qualitative researchers often rely on reflexivity
to guard against personal bias in making judgments. Reflexivity is the
process of reflecting critically on the self and of analyzing and recording
personal values that could affect data collection and interpretation.
Schwandt (2007) has described reflexivity as having two aspects. The first
concerns an acknowledgment that the researcher is part of the setting or
context under study. The second involves self- reflection about one’s own
biases, preferences, and fears about the research. Qualitative researchers
are encouraged to explore these issues, to be reflexive about decisions
made during the inquiry, and to note their reflexive thoughts in personal
journals. As Patton (2015) noted, “To excel in qualitative inquiry requires
keen and astute self- awareness” (p. 71).
Reflexivity can be a useful tool in quantitative as well as qualitative
research. Self- awareness and introspection can enhance the quality of any
study.

Example of Reflexivity
Currie and Szabo (2019) explored parents’ perspectives on caring for
a child with a rare disease. Reflexivity played an important role in the
analysis and interpretation of their interview data with 15 parents:
“Data were analyzed considering reflexivity throughout the
process…The interpretation is a process of cocreation between the
researcher and the participant through reinterpretation and
reflection” (p. 97).

Generalizability and Transferability

Nurses increasingly rely on evidence from research in their clinical
practice. Evidence- based practice is based on the assumption that study
findings are not unique to the people, places, or circumstances of the
original research (Polit & Beck, 2010).
Generalizability is a criterion used in quantitative studies to assess the
extent to which findings can be applied to people and settings beyond
those used in a study. How do researchers enhance the generalizability of
a study? First and foremost, they must design studies strong in reliability
and validity. There is no point in wondering whether results are
generalizable if they are not accurate or valid. In selecting participants,
researchers must also give thought to the types of people to whom the
results might be generalized—and then select participants in such a way
that the sample reflects the population of interest. If a study is intended to
have implications for male and female patients, then men and women
should be included as participants. Several chapters in this book describe
strategies for enhancing generalizability.
Qualitative researchers do not specifically aim for generalizability, but
they do want to generate knowledge that could be useful in other
situations. Lincoln and Guba (1985), in their influential book on
naturalistic inquiry, discussed the concept of transferability, the extent to
which qualitative findings can be transferred to other se�ings, as an aspect
of a study’s trustworthiness. An important mechanism for promoting
transferability is the amount of rich descriptive information qualitative
researchers provide about study contexts. Transferability in qualitative
research is discussed in Chapter 26.

TIP Researchers are increasingly paying attention to the applicability
of their findings—that is, the extent to which findings can be applied
to individuals or small subgroups. We discuss this issue at length in
Chapter 31.

Stakeholder Engagement
There is growing agreement within the healthcare community that greater
stakeholder engagement is needed in all phases of research, beginning in
the planning phase—or even earlier, during the identification of a research
problem. Proponents of stakeholder involvement during the planning and
implementation of health research argue that it enhances the relevance and

transparency of the research and accelerates the adoption of research
evidence in practice.

TIP In Europe, advocates often use the term patient and public
involvement (PPI). In the United States, the Patient- Centered
Outcomes Research Institute (PCORI) was established in 2010 to fund
research that can help patients make better healthcare choices, and
patients play a role in guiding the research agenda.

Although patients have been identified as key stakeholders, researchers
can consider involving others in planning a study. Concannon et al. (2012)
developed a taxonomy to guide researchers in this new era of
stakeholder- engaged research and proposed this definition of
“engagement” of stakeholders: “A bi- directional relationship between the
stakeholder and the researcher that results in informed decision- making
about the selection, conduct, and use of research” (p. 986). They created a
framework called the 7Ps to aid in the identification of stakeholders:
Patients and the public; providers (e.g., nurses, physicians); purchasers;
payers; policy makers; product makers; and principal investigators.
Researchers need to identify key stakeholders and determine how best to
involve them in the planning process.

Overview of Research Design Features
A study’s research design spells out the basic strategies that researchers
adopt to develop evidence that is accurate and interpretable. The research
design incorporates some of the most important methodologic decisions
that researchers make, particularly in quantitative studies.
Table 8.1 describes seven design features that need to be considered in
planning a quantitative study; several are also pertinent in qualitative
studies. These features include:

whether or not there will be an intervention;
how confounding variables will be controlled;
whether blinding will be used to avoid biases;
what the relative timing for collecting data on dependent and independent
variables will be;
what types of comparisons will be made to enhance interpretability;
what the location of the study will be; and
what timeframes will be adopted.

TABLE 8.1
Key Research Design Features in Quantitative Studies

Feature: Intervention
Key questions: Will there be an intervention? What will the intervention entail? What specific design will be used?
Design options: Experimental (randomized controlled trial), quasi-experimental, nonexperimental (observational) design

Feature: Control over confounding variables
Key questions: How will confounding variables be controlled? Which confounding variables will be controlled?
Design options: Matching, homogeneity, blocking, crossover, randomization, statistical control

Feature: Blinding (masking)
Key questions: From whom will critical information be withheld to avoid bias?
Design options: Open versus closed study; single-blind, double-blind (with blinded groups specified)

Feature: Relative timing
Key questions: When will information on independent and dependent variables be collected—looking backward or forward?
Design options: Retrospective, prospective design

Feature: Comparisons
Key questions: What type of comparisons will be made to illuminate key processes or relationships? What is the nature of the comparison?
Design options: Within-subject design, between-subject design, mixed design, external comparisons

Feature: Location
Key questions: Where will the study take place?
Design options: Single site versus multisite; in the field versus controlled setting

Feature: Timeframes
Key questions: How often will data be collected? When, relative to other events, will data be collected?
Design options: Cross-sectional, longitudinal design; repeated measures design

Note: Many terms in this table are explained in subsequent chapters.

This section discusses the last three features because they are relevant in
planning both qualitative and quantitative studies. Chapters 9 and 10
elaborate on the first four.

TIP Design decisions affect the integrity of your findings. These
decisions may influence whether you receive funding (if you seek
financial support) or whether your findings will be published (if you
submit to a journal). Therefore, a great deal of care and thought
should go into these decisions during the planning phase.

Comparisons
In most quantitative (and some qualitative) studies, researchers
incorporate comparisons into their design to provide a context for
interpreting results. Most quantitative research questions involve either an
explicit or an implicit comparison. For example, if our research question
asks what is the effect of massage on anxiety in hospitalized patients, the
implied comparison is massage versus no massage, which is the
independent variable.
Researchers can structure their studies to make various types of
comparison, the most common of which are as follows:

1. Comparison between two or more groups. For example, if we were studying the
emotional consequences of having a mastectomy, we might compare the
emotional status of women who had a mastectomy with that of women with
breast cancer who did not have a mastectomy. Or, we might compare those
receiving a special intervention with those receiving “usual care.” In a
qualitative study, we might compare mothers and fathers with respect to their
experience of having a child diagnosed with leukemia.

2. Comparison of one group’s status at two or more points in time. For example, we
might want to compare patients’ levels of stress before and after introducing a
procedure to reduce preoperative stress. Or we might want to compare coping
processes among caregivers of patients with AIDS early and later in the
caregiving experience.

3. Comparison of one group’s status under different circumstances. For example, we
might compare people’s heart rates during two different types of exercise.

4. Comparison based on relative rankings. If, for example, we hypothesized a
relationship between the pain level and degree of hopefulness in patients with
cancer, we would be asking whether those with high levels of pain felt less
hopeful than those with low levels of pain. This research question involves a

comparison of those with different rankings—higher versus lower—on both
variables.

5. Comparison with external data. Researchers may compare their results with those
from other studies or with norms (standards from a large and representative
sample). This type of comparison often supplements rather than replaces other
comparisons. In quantitative studies, this approach is useful primarily when the
dependent variable is measured with a reliable, well- accepted method (e.g.,
blood pressure readings, scores on a respected measure of depression).

Example of Using Comparative Data From External Sources
Dias et al. (2018) studied the health status of bereaved parents during
the first 6 months after their child’s death. They used measures of
health and well- being for which national comparative data were
available, which enabled them to compare their participants’
outcomes with national norms for adults in the United States.

Research designs for quantitative studies can be categorized based on the
type of comparisons that are made. Studies that compare different people
(as in examples 1 and 4) are between-subjects designs. Sometimes,
however, it is preferable to make comparisons for the same participants at
different times or under different circumstances, as in examples 2 and 3.
Such designs are within-subjects designs. When two or more groups of
people are followed over time, the design is sometimes called a mixed
design because comparisons can be made both within groups over time
and between different groups at a given point in time.
Comparisons provide a context for interpreting the findings. In example 1
regarding the emotional status of women who had a mastectomy, it would
be difficult to know whether the women’s emotional state was worrisome
without comparing it with that of others—or comparing it to their state at
an earlier time (for example, prior to diagnosis). In designing a study,
quantitative researchers choose comparisons that will best illuminate
answers to the research question.
Qualitative researchers sometimes plan to make comparisons when they
undertake an in- depth study, but comparisons are rarely their primary
focus. Nevertheless, patterns emerging in the data often suggest that
certain comparisons have rich descriptive value.

TIP Try not to make design decisions single- handedly. Seek the
advice of faculty or colleagues; patient input may also be desirable.
Once you have made design decisions, consider writing out a
rationale for your choices and sharing it with others to see if they can
suggest improvements. A worksheet for documenting design
decisions is available in the Toolkit of the accompanying Resource
Manual.

Research Location
An important planning task is to identify sites for the study. In some
situations the study site is a “given,” as might be the case for a clinical
study conducted in a hospital or institution with which researchers are
affiliated, but in other studies the identification of an appropriate site
involves considerable effort. The closer the setting is to the “real world,”
the more relevant the evidence is likely to be to clinical practice (Chapter
31).
Planning the study location involves two types of activities—selecting the
site or sites and gaining access to them. Although some of the issues we
discuss here are of particular relevance to qualitative researchers working
in the field, many quantitative studies also need to attend to these matters
in planning a project, especially in intervention studies.

Site Selection
The primary consideration in site selection is whether the site has people
with the behaviors, experiences, or characteristics of interest. The site must
also have a sufficient number of these kinds of people and adequate
diversity or mix of people to achieve research goals. In addition, the site
must be one in which access to participants will be granted. Both
methodologic goals (e.g., ability to impose needed controls) and ethical
requirements (e.g., ability to ensure privacy and confidentiality) need to be
achieved in the chosen site.
Researchers sometimes must decide how many sites to include. Having
multiple sites is advantageous for enhancing the generalizability of the

study findings, but multisite studies are complex and challenging.
Multiple sites are a good strategy when several coinvestigators from
different institutions are working together on a project.
Site visits to potential sites and clinical fieldwork are useful to assess the
“fit” between what the researcher needs and what the site has to offer.
During site visits, the researcher makes observations and converses with
key gatekeepers or stakeholders in the site to better understand its
characteristics and constraints. Buckwalter et al. (2009) have noted
particular issues of concern when working in sites that are “unstable”
research environments, such as critical care units or long- term care
facilities.

Gaining Entrée
Researchers must gain entrée into the sites deemed suitable for the study.
If the site is an entire community, a multitiered effort of gaining
acceptance from gatekeepers may be needed. For example, it may be
necessary to enlist the cooperation first of community leaders and
subsequently of administrators and staff in specific institutions (e.g.,
domestic violence organizations) or leaders of specific groups (e.g.,
support groups).
Because establishing trust is a central issue, gaining entrée requires strong
interpersonal skills, as well as familiarity with the site’s customs and
language. Researchers’ ability to gain the gatekeepers’ trust can best occur
if researchers are candid about research requirements and express genuine
interest in and concern for people in the site. Gatekeepers are most likely
to be cooperative if they believe that there will be direct benefits to them or
their constituents.
Information to help gatekeepers make a decision about granting access
usually should be put in writing, even if the negotiation takes place in
person. An information sheet should cover the following points: (1) the
purpose and significance of the research; (2) why the site was chosen; (3)
what the research would entail (e.g., study timeframes, how much
disruption there might be, what resources are required); (4) how ethical
guidelines would be maintained, including how results would be
reported; and (5) what the gatekeeper or others at the site have to gain
from cooperating in the study. Figure 8.1 presents an example of a letter of
inquiry for gaining entrée into a facility.

FIGURE 8.1 Sample letter of inquiry for gaining entrée into a research site (fictitious).

Gaining entrée may be an ongoing process of establishing relationships
and rapport with people at the site, including prospective informants. The
process might involve progressive entry, in which certain privileges are
negotiated at first and then are subsequently expanded. Ongoing
communication with gatekeepers between the time that access is granted
and the start- up of the study is recommended—this may be a lengthy
period if funding requests are involved. It is not only courteous to keep
people informed, it may prove critical to the project’s success because
circumstances (and leadership) at the site can change.

Timeframes

Research designs designate when, and how often, data will be collected. In
many studies, data are collected at one point in time. For example, patients
might be asked on a single occasion to describe their health- promoting
behaviors. Some designs, however, call for multiple contacts with
participants, often to assess changes over time. Thus, in planning a study,
researchers must decide on the number of data collection points needed to
address the research question properly. The research design also
designates when, relative to other events, data will be collected. For
example, the design might call for weight measurements 4 and 8 weeks
after an exercise intervention.
Designs can be categorized in terms of study timeframes. The major
distinction, for both qualitative and quantitative researchers, is between
cross- sectional and longitudinal designs.

Cross- Sectional Designs
Cross- sectional designs involve the collection of data once: the
phenomena under study are captured at a single time point.
Cross-sectional studies are appropriate for describing the status of phenomena or
for describing relationships at a fixed point in time. For example, we might
be interested in examining whether psychological symptoms in women
going through menopause correlate contemporaneously with physiologic
symptoms.

Example of a Cross- Sectional Study
Van Hoek et al. (2019) studied the influence of demographic factors,
resilience, and stress- reducing activities on the academic outcomes of
undergraduate nursing students. Data were gathered at a single point
in time from 554 Belgian nursing students.

Inferences about causal relationships are tricky when cross- sectional
designs are used. For example, we might test the hypothesis, using
cross-sectional data, that a determinant of excessive alcohol consumption is low
impulse control, as measured by a psychological test. When both alcohol
consumption and impulse control are measured concurrently, however, it
is difficult to know which variable influenced the other, if either.
Cross- sectional data can best be used to infer time sequence under two
circumstances: (1) when a cogent theoretical rationale guides the analysis
or (2) when evidence or logic indicates that one variable preceded the

other. For example, in a study of the effects of low birth weight on
morbidity in school- aged children, it is clear that birth weight came first.
Cross- sectional studies can be designed to permit inferences about
processes evolving over time, but such designs are weaker than
longitudinal ones. Suppose, for example, we were studying changes in
children’s health promotion activities between the ages of 10 and 13 years.
One way to study this would be to interview children at the age of
10 years and then 3 years later at the age of 13 years—a longitudinal
design. On the other hand, we could use a cross- sectional design by
interviewing different children of ages 10 and 13 years and then comparing
their responses. If 13- year- olds engaged in more health- promoting
activities than 10- year- olds, we might infer that children improve in
making healthy choices as they age. To make this kind of inference, we
would have to assume that the older children would have responded like
the younger ones had they been questioned 3 years earlier, or, conversely,
that 10- year- olds would report more health- promoting activities if they
were questioned again 3 years later. Such a design, which involves a
comparison of multiple age cohorts, is sometimes called a cohort comparison
design.
Cross-sectional studies are economical, but inferring changes over time
with such designs is problematic. In our example, 10- and 13-year-old
children may have different attitudes toward health promotion,
independent of maturation. Rapid social and technologic changes make it
risky to assume that differences in the behaviors or traits of different age
groups are the result of time passing rather than of cohort differences. In
cross- sectional studies designed to explore change, there are often
alternative explanations for the findings—and that is precisely what good
research design tries to avoid.

Example of a Cross- Sectional Study With Inference of Change
Over Time
Hladek et al. (2018) studied the feasibility of using sweat to measure
cytokines in older adults (aged 65+) compared with those in younger
adults (aged 18-40 years). Higher concentrations of TNF-α and IL-10
were observed in older adults, consistent with the hypothesis that
cytokines increase with age.

Longitudinal Designs
A study in which researchers collect data at more than one point in time
over an extended period is a longitudinal design. There are four situations in
which a longitudinal design is appropriate:

1. Studying time- related processes. Some research questions specifically concern
phenomena that evolve over time (e.g., wound healing).

2. Determining time sequences. It is sometimes important to establish how
phenomena are sequenced. For example, if it is hypothesized that infertility
affects depression, then it would be important to ascertain that the depression
did not precede the fertility problem.

3. Assessing changes over time. Some studies examine whether changes have
occurred over time. For example, an intervention study might examine both
short- term and long- term changes in health outcomes. A qualitative study
might explore the evolution of grieving in the spouses of palliative care patients.

4. Enhancing research control. Quantitative researchers sometimes collect data at
multiple points to enhance the interpretability of the results. For example, when
two groups are being compared with regard to the effects of alternative
interventions, the collection of preintervention data allows the researcher to
assess group comparability initially.

There are several types of longitudinal designs. Most involve collecting
data from one group of participants multiple times, but others involve
different samples. Trend studies, for example, are investigations of a
specific phenomenon using different samples from the same population
over time (e.g., every 2 years). Trend studies permit researchers to examine
patterns and rates of change and to predict future developments. Many
trend studies document trends in public health issues, such as smoking,
obesity, and so on.

Example of a Trend Study
Neaigus et al. (2017) studied trends in HIV and hepatitis C virus risk
behaviors among people who inject drugs in New York City. The
team examined changes from 2005 to 2009 and to 2012. Significant
trends in risk behaviors included a decline in unsafe syringe source,
but an increase in vaginal or anal sex without condoms.

In a more typical longitudinal study, the same people provide data at two
or more points in time. Longitudinal studies of general (nonclinical)

populations are sometimes called panel studies. The term panel refers to
the sample of people providing data. Because the same people are studied
over time, researchers can examine diverse patterns of change (e.g., those
whose health improved or deteriorated). Panel studies are intuitively
appealing as an approach to studying change, but they are expensive.

Example of a Panel Study
Many national governments sponsor large- scale panel studies whose
data have been analyzed by nurse researchers. For example, Davis et
al. (2018) used data from the Australian Longitudinal Study on
Women’s Health to examine the relationship between parity and
long- term weight gain over a 16- year period.

Follow- up studies are undertaken to examine the subsequent
development of individuals who have a specified condition or who have
received a specific treatment. For example, patients who have received a
special nursing intervention may be followed to ascertain long- term
effects. Or, in a qualitative study, patients interviewed shortly after a
diagnosis of prostate cancer may be followed to assess their experiences
after treatment decisions have been made.

Example of a Qualitative Follow- Up Study
Hansen et al. (2017) followed up, over a 6-month period, the family
members caring for patients with terminal hepatocellular carcinoma
as patients approached the end of life. The caregivers were
interviewed monthly.

In some longitudinal studies, called cohort studies, a group of people (the
cohort) is tracked over time to see if subsets with exposure to different
factors diverge in terms of subsequent outcomes. For example, in a cohort
of women, those with or without a history of childbearing could be
tracked to examine differences in rates of ovarian cancer. This type of
study, sometimes called a prospective study, is discussed in Chapter 9.
Longitudinal studies are appropriate for studying the trajectory of a
phenomenon over time, but a major problem is attrition—the loss of
participants after initial data collection. Attrition is problematic because
those who drop out of the study often differ in systematic ways from those

who continue to participate, resulting in potential biases and difficulty in
generalizing to the original population. The longer the interval between
data collection points, the greater the risk of attrition and resulting biases.
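
A small sketch with hypothetical retention counts makes the arithmetic of cumulative attrition explicit:

```python
# Hypothetical counts of participants retained at each data collection wave.
enrolled = 500
retained = {"Wave 1": 500, "Wave 2": 430, "Wave 3": 360}

for wave, n in retained.items():
    attrition_pct = 100 * (enrolled - n) / enrolled
    print(f"{wave}: {n} retained ({attrition_pct:.0f}% cumulative attrition)")
```
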
In longitudinal studies, researchers make decisions about the number of
data collection points and the intervals between them. When change or
development is rapid, numerous time points at short intervals may be
needed to document it. Researchers interested in outcomes that may occur
years after the original data collection must use longer- term follow- up—or
use surrogate outcomes. For example, in evaluating the effectiveness of a
smoking cessation intervention, the main outcome of interest might be
lung cancer incidence or age at death, but the researcher would likely use
subsequent smoking (e.g., 3 months after the intervention) as the surrogate
outcome.

Repeated Measures Designs
Studies with multiple points of data collection are sometimes described as
having a repeated measures design, which usually signifies a study in
which data are collected three or more times. Longitudinal studies, such as
follow- up and cohort studies, sometimes use a repeated measures design.
Repeated measures designs, however, can also be used in studies that are
essentially cross- sectional. For example, a study involving the collection of
postoperative patient data on vital signs hourly over a 6- hour period
would not be described as longitudinal because the study does not involve
an extended time perspective. Yet, the design could be characterized as
repeated measures. Researchers are especially likely to use the term
repeated measures design when they use a repeated measures approach to
statistical analysis (see Chapter 18).
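
As a sketch of what repeated measures data can look like (hypothetical heart-rate values, loosely echoing the three measurement occasions in the example below), the “long” layout records one row per participant per occasion, which is the form most repeated measures analyses expect:

```python
from statistics import mean

# Hypothetical readings: (participant, occasion, heart_rate), in "long" format.
readings = [
    ("V01", "pre", 82), ("V01", "post", 74), ("V01", "post+30min", 76),
    ("V02", "pre", 90), ("V02", "post", 85), ("V02", "post+30min", 83),
    ("V03", "pre", 78), ("V03", "post", 75), ("V03", "post+30min", 74),
]

# Occasion means trace the within-subject trajectory across the time points.
for occasion in ("pre", "post", "post+30min"):
    values = [hr for _, occ, hr in readings if occ == occasion]
    print(occasion, round(mean(values), 1))
```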

Example of a Repeated Measures Design
Krause- Parello et al. (2018) studied the effects of an animal- assisted
intervention on hospitalized veterans receiving palliative care. Blood
pressure, heart rate, and salivary cortisol were measured before,
immediately after, and again 30 minutes after the intervention.

TIP In making design decisions, you will need to balance various
considerations, such as time, cost, ethics, and study rigor. Try to
understand your “upper limits” before finalizing your design. That

is, what is the most money that can be spent on the project? What is
the maximum amount of time available for conducting the study?
What is the limit of acceptability with regard to attrition? These limits
often eliminate some design options. With these constraints in mind,
the central focus should be on designing a study that maximizes its
rigor or trustworthiness.

Planning Data Collection
In planning a study, researchers must select methods to gather their
research data. This section provides an overview of various methods of
data collection for qualitative and quantitative studies.

Overview of Data Collection and Data Sources
A broad array of data collection methods can be used in research. In some
cases, researchers may be able to use data from existing sources, such as
records. Most often, however, researchers collect new data, and one key
planning decision concerns the types of data to gather. Three approaches
have been used most frequently by nurse researchers: self- reports,
observation, and biophysiologic measures.

Self- Reports (Patient- Reported Outcomes)
A good deal of information can be gathered by questioning people
directly, a method known as self-report. In the medical literature,
self-reports are often called patient-reported outcomes or PROs, but some
self- reports are not about patients (e.g., self- reports about nurses’ burnout)
and some are not outcomes (self- reports about prior hospitalizations). Most
nursing studies involve self- report data. The unique ability of humans to
communicate verbally makes direct questioning a particularly important
part of nurse researchers’ data collection repertoire.
Self- reports are versatile. If we want to know what people think, believe,
or plan to do, the most efficient approach is to ask them. Self- reports can
yield information that would be impossible to gather by other means.
Behaviors can be observed but only if participants engage in them
publicly. Furthermore, observers can observe only those behaviors
occurring at the time of the study. Through self- reports, researchers can
gather retrospective data about events occurring in the past or information
about behaviors in which people plan to engage in the future. Self- reports
can also capture psychological attributes such as motivation or resilience.
Nevertheless, verbal report methods have some weaknesses. The most
serious issue concerns their validity and accuracy: Can we be sure that
people feel or act the way they say they do? We all have a tendency to
present ourselves positively, and this may conflict with the truth.

Researchers who gather self- report data should recognize this limitation
and take it into consideration when interpreting the results.

Example of a Study Using Self- Reports
Beattie et al. (2019) explored the perceptions of healthcare providers
on workplace violence perpetrated by clients. The data came from
in- depth group and one- on- one interviews with nurses and other
healthcare staff in Australia.

Self- report methods depend on respondents’ willingness to share personal
information. Projective techniques are sometimes used to obtain data
about people’s psychological states indirectly. Projective techniques
present participants with a stimulus of low structure, permitting them to
“read in” and describe their interpretations. The Rorschach (inkblot) test is
an example of a projective technique. Other projective methods encourage
self- expression through the construction of a product (e.g., drawings). The
assumption is that people express their needs, motives, and emotions by
working with or manipulating materials. Projective methods are used by
nurse researchers mainly in studies exploring sensitive topics with
children.

Example of a Study Using Projective Methods
Anderson and Tulloch- Reid (2019) investigated the experiences of
adolescents with diabetes living in Jamaica. Participants took part in
group interviews and were also asked to draw pictures representing
their experiences.

Observation
An alternative to self- reports is observation of study participants.
Observation can be done directly through the human senses or with
technical apparatus, such as video equipment, X- rays, and so on.
Observational methods can be used to gather information about a wide
range of phenomena, such as: (1) people’s characteristics and conditions
(e.g., patients’ sleep–wake state); (2) verbal communication (e.g., nurse–
patient dialogue); (3) nonverbal communication (e.g., facial expressions);
(4) activities and behavior (e.g., geriatric patients’ self- grooming); (5) skill

attainment (e.g., diabetic patients’ skill in testing their urine); and (6)
environmental conditions (e.g., architectural barriers in nursing homes).
Observation in healthcare settings is an important data-gathering strategy.
Nurses are in an advantageous position to observe, relatively
unobtrusively, the behaviors of patients, their families, and hospital staff.
Moreover, nurses may, by training, be especially sensitive observers.
Observational methods are especially useful when people are unaware of
their own behavior (e.g., manifesting preoperative symptoms of anxiety),
when people are embarrassed to report activities (e.g., aggressive actions),
when behaviors are emotionally laden (e.g., grieving), or when people
cannot describe their actions (e.g., young children). A shortcoming of
observation is potential behavior distortions when participants are aware
of being observed—a problem called reactivity. Reactivity can be
eliminated if observations are made without people’s knowledge, through
concealment—but this may pose ethical concerns. Another problem is
observer biases. Several factors (e.g., prejudices, emotions, fatigue) can
undermine objectivity. Observational biases can be minimized through
careful training.

Example of a Study Using Observation
Vittner et al. (2018) studied whether skin-to-skin contact between
parents and stable preterm infants alleviates parental stress while
also supporting mother–father–infant relationships. Parent–infant
interactions were examined via video- recorded observations, in
which levels of synchrony and responsiveness were recorded.

Biophysiologic Measures/Biomarkers
Many clinical studies rely on the use of biophysiologic measures or
biomarkers. Biomarkers are objective, quantifiable characteristics of
biological processes (Strimbu & Tavel, 2010). Biophysiologic and physical
variables typically are measured using specialized technical instruments
and equipment. Because such equipment is available in healthcare settings,
the costs of these measures to nurse researchers may be small or
nonexistent.
A major strength of biophysiologic measures is their objectivity. Nurse A
and nurse B, reading from the same spirometer output, are likely to record
the same forced expiratory volume (FEV) measurements. Furthermore,

two different spirometers are likely to produce the same FEV readouts.
Another advantage of physiologic measurements is the relative precision
they normally offer. By relative, we are implicitly comparing physiologic
instruments with measures of psychological phenomena, such as
self-report measures of anxiety or pain. Biophysiologic measures usually yield
data of exceptionally high quality.

Example of a Study Using Biomarkers
Imes et al. (2019) studied factors associated with endothelial function
in older adults with obstructive sleep apnea and cardiovascular
disease. The variables examined included body mass index, blood
pressure, and several cholesterol values.

Records
Most researchers create original data for their studies, but sometimes they
take advantage of information available in records. Electronic health
records and other records constitute rich data sources to which nurse
researchers may have access. Research data obtained from records are
advantageous because they are economical: the collection of original data
can be time- consuming and costly. Also, records avoid problems
stemming from people’s reaction to study participation.
On the other hand, when researchers are not responsible for collecting
data, they may be unaware of the records’ limitations and biases, such as
the biases of selective deposit and selective survival. If the available records
are not the entire set of all possible such records, researchers must
question how representative existing records are. Many record keepers
intend to maintain an entire universe of records but may not succeed.
Careful researchers should attempt to learn what biases might exist.
Gregory and Radovinsky (2012) have suggested some strategies for
enhancing the reliability of data extracted from medical records, and
Dziadkowiec et al. (2016) have described a method of “cleaning” data
extracted from electronic health records.
Other difficulties also may be relevant. Sometimes records have to be
verified for their authenticity or accuracy, which may be difficult if the
records are old. In using records to study trends, researchers should be
alert to possible changes in record-keeping procedures. Another problem
is the increasing difficulty of gaining access to institutional records. Thus,
although records may be plentiful and inexpensive, they should not be
used without paying attention to potential problems.

TIP Nurse researchers are increasingly using information from “Big
Data” sources, such as large administrative databases or registries.
Registries are collections of large amounts of data about a particular
disease or patient population, such as trauma or cancer registries.
Talbert and Sole (2013) and Gephart et al. (2018) have written about
doing research with large databases.

Example of a Study Using Records
Pressler et al. (2018) studied the symptoms, nutrition, and pressure
ulcer status among older women with heart failure in relation to their
return to the community from a skilled nursing facility. The data
were collected from the electronic medical records.

Dimensions of Data Collection Approaches
Data collection methods vary along three key dimensions: structure,
researcher obtrusiveness, and objectivity. In planning a study, researchers
make decisions about where on these dimensions the data collection
methods should fall.

Structure
In structured data collection, information is gathered from participants in
a comparable, prespecified way. Most self-administered questionnaires are
structured: They include a fixed set of questions, usually with
predesignated response options (e.g., agree/disagree). Structured methods
give participants limited opportunities to qualify their answers or to
explain the meaning of their responses. By contrast, qualitative studies rely
mainly on unstructured methods of data collection.
Structured methods often take considerable effort to develop, but they
yield data that are relatively easy to analyze because the data can be
readily quantified. Structured methods are not appropriate for an in-depth
examination of a phenomenon, however. Consider the following two
methods of asking people about their levels of stress:

Structured
During the past week, would you say you felt stressed:

1. rarely or none of the time,
2. some or a little of the time,
3. occasionally or a moderate amount of the time, or
4. most or all of the time?

Unstructured
How stressed or anxious have you been this past week? Please tell me
about any tensions and stresses you experienced.
The structured question allows us to compute what percentage of
respondents felt stressed most of the time but provides no information
about the circumstances of the stress. The unstructured question allows for
deeper and more thoughtful responses but may not be useful for people
who are not good at expressing themselves; moreover, the resulting data
are more difficult to analyze.
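To make concrete why structured responses are so readily quantified, here is a minimal sketch in Python using hypothetical coded answers to the structured stress question above (the data are invented for illustration):

```python
from collections import Counter

# Hypothetical coded answers to the structured stress question
# (1 = rarely or none of the time ... 4 = most or all of the time)
responses = [1, 4, 2, 4, 3, 4, 2, 1, 4, 3]

counts = Counter(responses)
pct_most = 100 * counts[4] / len(responses)
print(f"Felt stressed most or all of the time: {pct_most:.0f}%")  # prints 40%
```

An unstructured answer to the same question would first have to be coded or thematically analyzed before any such tally could be computed.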

Researcher Obtrusiveness
Data collection methods differ in the degree to which people are aware of
the data-gathering process. If people know they are under scrutiny, their
behavior and responses may not be “normal,” and distortions can
undermine the value of the research. When data are collected
unobtrusively, however, ethical problems may emerge.
Study participants are most likely to distort their behavior and their
responses to questions under certain circumstances. Researcher
obtrusiveness is likely to be most problematic when (1) a program is being
evaluated and participants have a vested interest in the evaluation
outcome; (2) participants engage in socially unacceptable or unusual
behavior; (3) participants have not complied with medical and nursing
instructions; and (4) participants are the type of people who have a strong
need to “look good.” When researcher obtrusiveness is unavoidable under
these circumstances, researchers should make an effort to put participants
at ease, to emphasize the importance of candor, and to adopt a
nonjudgmental demeanor.

Objectivity

Objectivity refers to the degree to which two independent researchers can
arrive at similar “scores” or make similar observations regarding concepts
of interest. Objectivity is a mechanism for avoiding biases. Some data
collection approaches require more subjective judgment than others.
Researchers with a positivist orientation usually strive for a reasonable
amount of objectivity. In research based on the constructivist paradigm,
however, the subjective judgment of investigators is considered essential
for understanding human experiences.

Developing a Data Collection Plan
In planning a study, researchers make decisions about the type and
amount of data to collect. Several factors, including costs, must be
weighed, but a key goal is to identify the kinds of data that will yield
accurate, valid, and trustworthy information for addressing the research
question.
Most researchers face the issue of balancing information needs against the
risk of overburdening participants. In many studies, more data are
collected than are needed or analyzed. Although it is better to have
adequate data than to have unwanted omissions, minimizing participant
burden should be an important goal. Specific guidance on data collection
plans is offered in Chapter 14 for quantitative studies and Chapter 24 for
qualitative studies.

Organization of a Research Project
Studies typically take many months to complete, and longitudinal studies
require years of work. During the planning phase, it is a good idea to
make preliminary estimates of how long various tasks will require.
Having deadlines helps to restrict tasks that might otherwise continue
indefinitely, such as a literature review.
Chapter 3 presented a sequence of steps that quantitative researchers
follow in a study. The steps represented an idealized conception: the
research process rarely follows a neatly prescribed sequence of
procedures, even in quantitative studies. Decisions made in one step, for
example, may require alterations in a previous activity. For example,
sample size decisions may require rethinking how many sites are needed.
Nevertheless, preliminary time estimates are valuable. In particular, it is
important to have a sense of how much total time the study will require
and when it will begin.

TIP We cannot suggest even approximations for the percentage of
time that should be spent on each task. Some projects need many
months to recruit participants, whereas other studies can rely on an
existing group. Clearly, not all steps are equally time-consuming.

Researchers sometimes develop visual timelines to help them organize a
study. These devices are especially useful if funding is sought because the
schedule helps researchers to understand when and for how long staff
support is needed (e.g., for transcribing interviews). This can best be
illustrated with an example, in this case of a hypothetical quantitative
study.
Suppose a researcher was studying the following problem: Is a woman’s
decision to have an annual mammogram related to her perceived
susceptibility to breast cancer? Using the organization of steps outlined in
Chapter 3, here are some of the tasks that might be undertaken: a

1. The researcher is concerned that many older women do not get mammograms
regularly. Her specific research question is whether mammogram practices are
different for women with different perceptions about their susceptibility to
breast cancer.

2. The researcher reviews the research literature on breast cancer, mammography
use, and factors affecting mammography decisions.

3. The researcher does clinical fieldwork by discussing the problem with nurses and
other healthcare professionals in various clinical settings and by having
informal discussions with women in a support group for breast cancer patients.

4. The researcher seeks theories and models for her problem. She finds that the
Health Belief Model is relevant, which helps her to develop a conceptual
definition of susceptibility to breast cancer.

5. Based on the framework, the following hypothesis is developed: Women (P) who
perceive themselves as susceptible to breast cancer (I) are more likely than other
women (C) to get an annual mammogram (O).

6. The researcher adopts a nonexperimental, cross-sectional, between-subjects
research design. Her comparison strategy will be to compare women with
different rankings on a measure of susceptibility to breast cancer. She designs
the study to control the confounding variables of age, marital status, and health
insurance status. Her research site will be Pittsburgh.

7. There is no intervention in this study and so this step is unnecessary.
8. The researcher designates that the population of interest is women between the
ages of 50 and 65 years living in Pittsburgh who have not been previously
diagnosed as having any form of cancer.

9. The researcher will recruit 250 women living in Pittsburgh as her research sample;
they are identified at random using a procedure known as random-digit dialing,
and so she does not need to gain entrée into any institution.

10. Research variables will be measured by self-report; the independent variable
(perceived susceptibility), dependent variable (mammogram history), and
confounding variables will be measured by asking participants a series of
questions.

11. The Institutional Review Board (IRB) at the researcher’s institution is asked to
review the plans to ensure that the study adheres to ethical standards.

12. Plans for the study are finalized: the methods are reviewed by colleagues with
clinical and methodologic expertise and by the IRB; the data collection
instruments are pretested; and interviewers who will collect the data are
trained.

13. Data are collected by means of telephone interviews with women in the research
sample.

14. Data are prepared for analysis by coding them and entering them onto a computer
file.

15. Data are analyzed using statistical software.
16. The results indicate that the hypothesis is supported; however, the researcher’s
interpretation must take into consideration that many women who were asked to
participate declined to do so.

17. The researcher presents an early report on her findings and interpretations at a
conference of Sigma Theta Tau International. She subsequently publishes the
report in the International Journal of Nursing Studies.

18. The researcher seeks out clinicians to discuss how the study findings can be used
in practice.

The researcher plans to conduct this study over a 2-year period; Figure 8.2
presents a hypothetical schedule. Many steps overlap or are undertaken
concurrently; some steps are projected to involve little time, whereas
others require months of work. (The Toolkit in the accompanying Resource
Manual includes Figure 8.2 as a Word document for you to adapt.)

FIGURE 8.2 Project timeline (in months) for a hypothetical study of women’s
mammography decisions.
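Figure 8.2 itself cannot be reproduced here, but a timeline of this kind can be drawn programmatically. Below is a minimal sketch using matplotlib, with hypothetical task names and month ranges rather than the actual schedule in the figure:

```python
import matplotlib.pyplot as plt

# (task, start month, duration in months) -- illustrative values only
tasks = [
    ("Literature review",     0, 4),
    ("Design and IRB review", 2, 4),
    ("Pretest instruments",   5, 2),
    ("Data collection",       7, 8),
    ("Data analysis",        14, 5),
    ("Dissemination",        19, 5),
]

fig, ax = plt.subplots(figsize=(8, 3))
for i, (name, start, duration) in enumerate(tasks):
    ax.broken_barh([(start, duration)], (i - 0.4, 0.8))  # one horizontal bar per task
ax.set_yticks(range(len(tasks)))
ax.set_yticklabels([t[0] for t in tasks])
ax.invert_yaxis()               # first task at the top
ax.set_xlabel("Project month")
plt.tight_layout()
plt.show()
```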

In developing a schedule, several considerations should be kept in mind,
including methodologic expertise and the availability of funding. In the
present example, if the researcher needed financial support to pay for the
cost of interviewers, the timeline would need to be expanded to
accommodate the time required to prepare a proposal and await the
funding decision. It is also important to consider the practical aspects of
performing the study, which were not noted in the preceding section.
Securing permissions, hiring staff, and holding meetings are all
time-consuming, but necessary, activities.
In large-scale studies—especially studies in which there is an intervention—it
is wise to undertake a pilot study. A pilot study is a trial run designed
to test planned methods and procedures. Results and experiences from
pilot studies help to inform many of the decisions for larger projects. We
discuss the important role of pilot studies in Chapter 29.
Individuals differ in the kinds of tasks that appeal to them. Some people
enjoy the preliminary phase, which has an intellectual component; others
are more eager to collect the data, which is more interpersonal.
Researchers should, however, allocate a sensible amount of time to do
justice to each activity.

TIP Getting organized for a study has many dimensions beyond
having a timeline. Two important issues concern having the right
team and mix of skills for a research project, and developing plans for
hiring and monitoring research staff (Nelson & Morrison-Beedy,
2008).

Critical Appraisal of the Planning Aspects of a Study
Researchers typically do not describe the planning process or problems
that arose during the study in journal articles. Thus, there is typically little
that readers can do to critically appraise the researcher’s planning efforts.
What can be appraised, of course, are the outcomes of the planning—that
is, the methodologic decisions themselves. Guidelines for critically
appraising those decisions are provided throughout this book.
Readers can, however, be alert to a few things relating to research
planning. First, evidence of careful conceptualization provides a clue that
the project was well planned. If a conceptual map is presented (or implied)
in the report, it means that the researcher had a “road map” that facilitated
planning.
Second, readers can consider whether the researcher’s plans reflect
adequate attention to concerns about evidence-based practice. For
example, was the comparison group strategy designed to reflect a realistic
practice concern? Was the setting one that maximizes potential for the
generalizability of the findings? Did the timing of data collection
correspond to clinically important milestones? Was the intervention
sensitive to the constraints of a typical practice environment?
Finally, a report might provide clues about whether the researcher
devoted sufficient time and resources in preparing for the study. For
example, if the report indicates that the study grew out of earlier research
on a similar topic, or that the researcher had previously used the same
instruments, or had completed other studies in the same setting, this
suggests that the researcher was not plunging into unfamiliar waters.
Unrealistic planning can sometimes be inferred from a discussion of
sample recruitment. If the report indicates that the researcher was unable
to recruit the originally hoped-for number of participants, or if recruitment
took months longer than anticipated, this suggests that the researcher may
not have done adequate homework during the planning phase.

Research Example
In this section, we describe a pilot study and the “lessons learned” by the
researchers. This is a good example of the importance of strong advance
planning for a study.
Study: Recruitment of older African American males for depression
research: Lessons learned (Bryant et al., 2014)
Purpose: The purpose of the article was to describe the setbacks and
lessons learned in a pilot study aimed at exploring the signs and
symptoms of depression experienced by older African American men.
Methods: The researchers sought to recruit a sample of about 20 African
American men aged 60 years and older over a 3- to 4-month recruitment
period. The men were to have been interviewed to learn how they
recognize, express, and describe their depression. Initial recruitment was
through flyers distributed to community clinics and physicians’ offices
serving the target group. The colorful flyers included photos and a
description of the study and contact information.
Findings: Nine months into recruitment, only one person had inquired
about participation in the study, and that person was deemed ineligible.
This recruitment failure prompted members of the team to solicit feedback
from university community liaisons and a local community development
group. The advisers thought the study was important, but noted that the
researchers faced numerous recruitment barriers, such as the likelihood
that older black men would not easily trust outsiders and might believe
that they are too strong to be depressed. The advisers also provided
valuable feedback about the recruitment flier and other aspects of the
study design.
Conclusions: The researchers concluded that their “failure to recruit
participants can be ascribed to a number of missteps: non-culturally
relevant recruitment materials, a failure to build trust and engage
community coalitions beforehand, (and) the use of ineffective strategies to
address the stigma associated with mental illness” (p. 4). They noted that
the lessons learned would hopefully facilitate future recruitment efforts for
mental health research involving black men.

Summary Points

Researchers face numerous challenges in planning a study, including the
challenge of designing a study that is strong with respect to reliability and
validity (quantitative studies) or trustworthiness (qualitative studies).
Reliability refers to the accuracy and consistency of information obtained in a
study. Validity is a more complex concept that broadly concerns the soundness
of the study’s evidence—that is, whether the findings are cogent and well
grounded.
Trustworthiness in qualitative research encompasses several different
dimensions, including dependability, confirmability, authenticity,
transferability, and credibility.
Credibility is achieved to the extent that the research methods engender
confidence in the truth of the data and in the researchers’ interpretations.
Triangulation, the use of multiple sources or referents to draw conclusions
about what constitutes the truth, is one approach to enhancing credibility.
A bias is an influence that distorts study results. Systematic bias results when a
bias operates in a consistent direction.
In quantitative studies, research control is used to hold constant outside
influences on the outcome variable so that its relationship to the independent
variable can be better understood. Researchers use various strategies to control
confounding variables, which are extraneous to the study aims and can obscure
understanding.
In quantitative studies, a powerful tool to eliminate bias is randomness—having
certain features of the study established by chance rather than by researchers’
intentions.
Reflexivity, the process of reflecting critically on the self and of scrutinizing
personal values that could affect interpretation, is an important tool in
qualitative research.
Generalizability in a quantitative study concerns the extent to which findings
can be applied to people or settings other than the ones used in the research.
Transferability is the extent to which qualitative findings can be transferred to
other settings.
During the planning phase, researchers need to consider the extent to which key
stakeholders will be involved in the research and who the key stakeholders are.
In planning a study, researchers make many design decisions, including
whether to have an intervention, how to control confounding variables, what
type of comparisons will be made, where the study will take place, and what the
study timeframes will be.

Quantitative researchers often incorporate comparisons into their designs to
enhance interpretability. In between-subjects designs, different groups of
people are compared. Within-subjects designs involve comparisons of the same
people at different times or under different circumstances, and mixed designs
involve both types of comparison.
Site selection for a study often requires site visits to evaluate suitability and
feasibility. Gaining entrée into a site involves developing and maintaining trust
with gatekeepers.
Cross-sectional designs involve collecting data at one point in time, whereas
longitudinal designs involve data collection two or more times over an
extended period.
Trend studies have multiple points of data collection with different samples
from the same population. Panel studies gather data from the same people,
usually from a general population, more than once. In a follow-up study, data
are gathered two or more times from a well- defined group (e.g., those with a
particular health problem). In a cohort study, a cohort of people is tracked over
time to see if subsets with different exposures to risk factors differ in terms of
subsequent outcomes.
A repeated measures design typically involves collecting data three or more
times, either in a longitudinal fashion or in rapid succession over a shorter
timeframe.
Longitudinal studies are typically expensive and time-consuming, and have risk
of attrition (loss of participants over time) but are essential for illuminating
time- related phenomena.
Researchers also develop a data collection plan. In nursing, the most widely
used methods are self-report, observation, biophysiological measures, and
existing records.
Self-report data (sometimes called patient-reported outcomes or PROs) are
obtained by directly questioning people. Self-reports are versatile and powerful
but a drawback is the potential for respondents’ deliberate or inadvertent
misrepresentations.
A wide variety of human activity and traits are amenable to direct observation.
Observation is subject to observer biases and distorted participant behavior
(reactivity).
Biophysiologic measures (biomarkers) tend to yield high-quality data that are
objective and valid.
Existing records and documents are an economical source of research data, but
two potential biases in records are selective deposit and selective survival.
Data collection methods vary in terms of structure, researcher obtrusiveness,
and objectivity, and researchers must decide on these dimensions in their plan.
Planning efforts should include the development of a timeline that provides
estimates of when important tasks will be completed.

Study Activities
Study activities are available to instructors on .

References Cited in Chapter 8
Anderson M., & Tulloch- Reid M. (2019). “You cannot cure it, just control it”: Jamaican

adolescents living with diabetes. Comprehensive Child and Adolescent Nursing, 42(2),
109–123.

Beattie J., Griffiths D., Innes K., & Morphet J. (2019). Workplace violence perpetrated
by clients of health care: A need for safety and trauma-informed care. Journal of
Clinical Nursing, 28, 116–124.

Bower R., Coad J., Manning J., & Pengelly T. (2018). A qualitative, exploratory study
of nurses’ decision-making when interrupted during medication administration
within the paediatric intensive care unit. Intensive & Critical Care Nursing, 44, 11–
17.

* Bryant K., Wicks M., & Willis N. (2014). Recruitment of older African American
males for depression research: Lessons learned. Archives of Psychiatric Nursing, 28,
17–20.

* Buckwalter K., Grey M., Bowers B., McCarthy A., Gross D., Funk M., & Beck C.
(2009). Intervention research in highly unstable environments. Research in Nursing
& Health, 32, 110–121.

* Concannon T., Meissner P., Grunbaum J., McElwee N., Guise J. M., Santa J., …
Leslie L. (2012). A new taxonomy for stakeholder engagement in patient-centered
outcomes research. Journal of General Internal Medicine, 27, 985–991.

Currie G., & Szabo J. (2019). “It is like a jungle gym, and everything is under
consideration”: The parent’s perspective of caring for a child with a rare disease.
Child: Care, Health and Development, 45, 96–103.

Davis D., Brown W., Foureur M., Nohr E., & Xu F. (2018). Long-term weight gain and
risk of overweight in parous and nulliparous women. Obesity, 26, 1072–1077.

Dias N., Brandon D., Haase J., & Tanabe P. (2018). Bereaved parents’ health status
during the first 6 months after their child’s death. American Journal of Hospice &
Palliative Care, 35, 829–839.

* Dziadkowiec O., Callahan T., Ozkaynak M., Reeder B., & Welton J. (2016). Using a
data quality framework to clean data extracted from the electronic health record: A
case study. EGEMS, 4, 1201.

Gephart S., Davis M., & Shea K. (2018). Perspectives on policy and the value of
nursing science in a Big Data era. Nursing Science Quarterly, 31, 78–81.

* Gregory K. E., & Radovinsky L. (2012). Research strategies that result in optimal
data collection from the patient medical record. Applied Nursing Research, 25, 108–116.

Hansen L., Rosenkranz S., Wherity K., & Sasaki A. (2017). Living with hepatocellular
carcinoma near the end of life: Family caregivers’ perspectives. Oncology Nursing
Forum, 44, 562–570.

Hladek M., Szanton S., Cho Y., Lai C., Sacko C., Roberts L., & Gill J. (2018). Using
sweat to measure cytokines in older adults compared to younger adults. Journal of
Immunological Methods, 454, 1–5.

** Imes C., Baniak L., Choi J., Luyster F., Morris J., Ren D., & Chasens E. (2019).
Correlates of endothelial function in older adults with untreated obstructive sleep
apnea and cardiovascular disease. Journal of Cardiovascular Nursing, 34, E1–E7.

Krause-Parello C., Levy C., Holman E., & Kolassa J. (2018). Effects of VA facility dog
on hospitalized veterans seen by a palliative care psychologist. American Journal of
Hospice & Palliative Care, 35, 5–14.

Lincoln Y. S., & Guba E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.
Neaigus A., Reilly K., Jenness S., Hagan H., Wendel T., Gelpi-Acosta C., & Marshall
D. (2017). Trends in HIV and HCV risk behaviors and prevalent infection among
people who inject drugs in New York City, 2005–2012. Journal of Acquired Immune
Deficiency Syndromes, 75, S325–S332.

* Nelson L. E., & Morrison-Beedy D. (2008). Research team training: moving beyond
job descriptions. Applied Nursing Research, 21, 159–164.

Patton M. Q. (2015). Qualitative research & evaluation methods (4th ed.). Thousand Oaks,
CA: Sage.

Polit D. F., & Beck C. T. (2010). Generalization in qualitative and quantitative
research: Myths and strategies. International Journal of Nursing Studies, 47, 1451–
1458.

Pressler S., Jung M., Titler M., Harrison J., & Lee K. (2018). Symptoms, nutrition,
pressure ulcers, and return to community among older women with heart failure
at skilled nursing facilities. Journal of Cardiovascular Nursing, 33, 22–29.

Schwandt T. (2007). The Sage dictionary of qualitative inquiry (3rd ed.). Thousand Oaks,
CA: Sage.

* Strimbu K., & Tavel J. (2010). What are biomarkers? Current Opinion in HIV & AIDS,
5, 463–466.

Talbert S., & Sole M. L. (2013). Too much information: Research issues associated
with large databases. Clinical Nurse Specialist, 27, 73–80.

Van Hoek G., Portzky M., & Franck E. (2019). The influence of sociodemographic
factors, resilience and stress-reducing activities on academic outcomes of
undergraduate nursing students: A cross- sectional research study. Nurse Education
Today, 72, 90–96.

Van der Meulen I., May A., Koole R., & Ros W. (2018). A distress thermometer
intervention for patients with head and neck cancer. Oncology Nursing Forum, 45,
E14–E32.

Vittner D., McGrath J., Robinson J., Lawhon G., Cusson R., Eisenfeld L., … Cong X.
(2018). Increase in oxytocin from skin-to-skin contact enhances development of
parent-infant relationship. Biological Research for Nursing, 20, 54–62.

*A link to this open-access article is provided in the Toolkit for Chapter 8 in the
Resource Manual.

**This journal article is available on for this chapter.

a. This is only a partial list of tasks and is designed to illustrate the flow of activities; the flow in this
example is more orderly than would ordinarily be true.

PART 3
Designing and Conducting Quantitative Studies to Generate Evidence for Nursing

Chapter 9 Quantitative Research Design
Chapter 10 Rigor and Validity in Quantitative Research
Chapter 11 Specific Types of Quantitative Research
Chapter 12 Quality Improvement and Improvement
Science
Chapter 13 Sampling in Quantitative Research
Chapter 14 Data Collection in Quantitative Research
Chapter 15 Measurement and Data Quality
Chapter 16 Developing and Testing Self- Report Scales
Chapter 17 Descriptive Statistics
Chapter 18 Inferential Statistics
Chapter 19 Multivariate Statistics
Chapter 20 Processes of Quantitative Data Analysis
Chapter 21 Clinical Significance and Interpretation of
Quantitative Results

C H A P T E R 9

Quantitative Research Design

General Design Issues
This chapter describes options for designing quantitative studies. We
begin by discussing several broad issues.

Causality
Several types of research questions are relevant to evidence-based nursing
practice—questions about interventions (Therapy); Diagnosis and
assessment; Prognosis; Etiology (causation) and prevention of harm;
Description; and Meaning or process. Questions about meaning or process
call for qualitative approaches (Chapter 22). Questions about diagnosis or
assessment, as well as questions about the status quo of health- related
situations, are typically descriptive. Many research questions, however,
are about causes and effects:

Does a telephone therapy intervention (I) for patients diagnosed with prostate
cancer (P) cause improvements in their decision-making skills (O)? (Therapy
question)
Do birthweights less than 1,500 g (I) cause developmental delays (O) in children
(P)? (Prognosis question)
Does a high-carbohydrate diet (I) cause cognitive impairment (O) in the elderly
(P)? (Etiology [causation]/prevention of harm question)

Causality is a hotly debated issue, and yet we all understand the general
concept of a cause. For example, we understand that lack of sleep causes
fatigue and that high caloric intake causes weight gain.
Most phenomena have multiple causes. Weight gain, for example, can be
the effect of high caloric consumption, but many other factors can cause
weight gain. Causes of health-related phenomena usually are not
deterministic, but rather are probabilistic—that is, the causes increase the
probability that an effect will occur. For example, there is ample evidence
that smoking is a cause of lung cancer, but not everyone who smokes
develops lung cancer, and not 
everyone with lung cancer has a history of
smoking.
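The probabilistic view can be stated formally (our notation; the chapter keeps the point verbal): a probabilistic cause raises the probability of its effect without guaranteeing it,

$$P(\text{lung cancer} \mid \text{smoking}) > P(\text{lung cancer} \mid \text{no smoking}),$$

even though both conditional probabilities remain well below 1.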

The Counterfactual Model
While it might be easy to grasp what researchers mean when they talk
about a cause, what exactly is an effect? Shadish et al. (2002), who wrote a
seminal book on research design and causal inference, explained that a
good way to grasp the meaning of an effect is to conceptualize a
counterfactual. In a research context, a counterfactual is what would have
happened to the same people exposed to a causal factor if they simultaneously
were not exposed to the causal factor. An effect is the difference between
what actually did happen with the exposure and what would have
happened without it. A counterfactual clearly can never be realized, but it
is a good model to keep in mind in designing a study to answer
cause-probing questions. As Shadish and colleagues noted, “A central task for all
cause- probing research is to create reasonable approximations to this
physically impossible counterfactual” (p. 5).
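The counterfactual idea is often formalized in potential-outcomes notation (a formalization added here for clarity; Shadish et al. state it verbally). For person $i$, let $Y_i(1)$ be the outcome if exposed to the causal factor and $Y_i(0)$ the outcome if not exposed. The individual effect is

$$\text{effect}_i = Y_i(1) - Y_i(0),$$

but only one of the two terms can ever be observed for a given person. Designs therefore approximate the missing term, typically by estimating the average effect $E[Y(1)] - E[Y(0)]$ from randomized groups.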

Criteria for Causality
Several writers have proposed criteria for establishing a cause-and-effect
relationship. Three criteria are attributed to 19th-century philosopher John
Stuart Mill:

1. Temporal: A cause must precede an effect in time. If we test the hypothesis that
smoking causes lung cancer, we need to show that cancer occurred after
smoking commenced.

2. Relationship: An empirical relationship between the presumed cause and the
presumed effect must exist. In our example, an association between smoking
and cancer must be found—i.e., that a higher percentage of smokers than
nonsmokers get lung cancer.

3. No confounders: The relationship cannot be explained as being caused by a third
variable. Suppose that most smokers lived in urban environments. The
relationship between smoking and lung cancer might then reflect an underlying
causal link between the environment and lung cancer.

Additional criteria were proposed by Bradford-Hill (1965)—precisely as
part of the discussion about the causal link between smoking and lung
cancer. Two of Bradford-Hill’s criteria foreshadow the importance of
meta-analyses, techniques for which had not been developed when the
criteria were proposed. The criterion of coherence involves having similar
evidence from multiple sources, and the criterion of consistency involves
having similar levels of statistical relationship in several studies. Another
important criterion is biologic plausibility, that is, evidence from laboratory
or basic physiologic studies that a causal pathway is credible.

Causality and Research Design
Researchers testing hypotheses about causal relationships seek to provide
persuasive evidence that these various criteria have been met. Some
research designs are better at revealing cause-and-effect relationships than
others. True experimental designs are the best possible designs for
illuminating causal relationships, but it is not always possible to use such
designs.

Design Terminology
Research design terms can be confusing because there is inconsistency
among writers. Also, design terms used by medical researchers are often
different from those used by social scientists. Early nurse researchers got
research training in social science fields such as psychology before
doctoral training became available in nursing schools, and so social
scientific terms have prevailed in the nursing literature.
Nurses interested in establishing an evidence- based practice must
comprehend studies from many disciplines. We use both medical and
social science terms in this book. The first column of Table 9.1 shows
design terms used by social scientists and the second shows corresponding
terms used by medical researchers.

TABLE 9.1
Research Design Terminology in the Social Scientific and Medical Literature

Social Scientific Term | Medical Research Term
Experiment, true experiment, experimental study | Randomized controlled trial, randomized clinical trial, RCT
Quasi-experiment, quasi-experimental study | Controlled clinical trial; clinical trial without randomization
Nonexperimental study; correlational study | Observational study
Retrospective study | Case–control study
Prospective nonexperimental study | Cohort study
Group or condition (e.g., experimental or control group/condition) | Group or arm (e.g., intervention or control arm)
Experimental group | Treatment or intervention group
Dependent variable | Outcome or endpoint

Experimental Design
A basic distinction in quantitative research design is between experimental
and nonexperimental research. In an experiment (typically called a
randomized controlled trial, RCT), researchers are active agents, not
simply observers. Early physical scientists learned that although
observation is valuable, complexities in nature often made it difficult to
disentangle relationships. This problem was addressed by isolating
phenomena and controlling the conditions under which they occurred.
The 20th century witnessed the acceptance of experimental methods by
researchers interested in human physiology and behavior.
Controlled experiments are considered the gold standard for yielding
reliable evidence about causes and effects. Experimenters can be relatively
confident in the veracity of causal relationships because they are observed
under controlled conditions and meet the criteria for causality.
Hypotheses are never proved by scientific methods, but RCTs offer the most
convincing evidence about whether one variable has a causal effect on
another.
A true experimental or RCT design is characterized by the following
properties:

Manipulation: the researcher does something to at least some participants—there
is some type of intervention
Control: the researcher introduces controls, including devising a counterfactual
approximation—usually, a control group that does not receive the intervention
Randomization: the researcher assigns participants to a control or experimental
condition on a random basis

Design Features of True Experiments
Researchers have many options in designing an experiment. We begin by
discussing several features of experimental designs.

Manipulation: The Experimental Intervention
Manipulation involves doing something to study participants.
Experimenters manipulate the independent variable by administering a
treatment (or intervention [I]) to some people and withholding it from
others (C), or by administering alternative treatments to two or more
groups. Experimenters deliberately vary the independent variable (the
presumed cause) and observe the effect on the outcome (O)—which is
sometimes referred to as an endpoint in the medical literature.
For example, suppose we hypothesized that gentle massage is an effective
pain relief strategy for nursing home residents (P). The independent
variable, receipt of gentle massage, can be manipulated by giving some
patients the massage intervention (I) and withholding it from others (C).
We would then compare pain levels (O) in the two groups to see if receipt
of the intervention resulted in group differences in average pain levels.
In designing RCTs, researchers make many decisions about what the
experimental condition entails. To get a fair test, the intervention should
be appropriate to the problem, consistent with a theoretical rationale, and
of sufficient intensity and duration that effects might reasonably be
expected. The full nature of the intervention must be delineated in formal
intervention protocols that spell out exactly what the treatment is. Here
are some questions intervention researchers need to address:

What is the intervention, and how does it differ from usual methods of care?
What is the dosage or intensity of the intervention?
Over how long a period will the intervention be administered, how frequently
will it be administered, and when will the treatment begin (e.g., 2 hours after
surgery)?
Who will administer the intervention? What are their credentials? What type of
special training will they need?
Under what conditions will the intervention be withdrawn or altered?

The goal in most RCTs is to have an identical intervention for all people in
the treatment group. For example, in most drug studies, those in the
experimental group are given the exact same ingredient, in the same dose,
administered in exactly the same manner. There has, however, been a
growing interest in tailored interventions or patient-centered
interventions (PCIs), whose purpose is to enhance treatment efficacy by
taking people’s characteristics into account (Lauver et al., 2002). In tailored
interventions, each person receives an intervention customized to certain
characteristics, such as demographic traits (e.g., gender) or cognitive
factors (e.g., reading level). Behavioral interventions based on the
Transtheoretical (Stages of change) Model (Chapter 6) usually are PCIs
because the intervention is tailored to fit people’s readiness to change their
behavior. Some evidence suggests that tailored interventions can be
effective (e.g., Richards et al., 2007), but special challenges face those
conducting PCI research (Beck et al., 2010).

TIP Although PCIs are not universally standardized, they are
administered according to well-defined procedures; intervention
agents are trained in making systematic decisions about who should
get which type of treatment.

Manipulation: The Control Condition
Evidence about relationships requires a comparison. If we were to
supplement the diet of premature infants (P) with a special nutrient (I) for
2 weeks, their weight (O) at the end of 2 weeks would tell us nothing
about treatment effectiveness. At a bare minimum, we would need to
compare posttreatment weight with pretreatment weight to determine if,
at least, their weight had increased. But let us assume that we find an
average weight gain of 1 pound. Does this gain support the conclusion
that the nutrition supplement (the independent variable) caused weight
gain (the outcome)? No, it does not. Babies normally gain weight as they
mature. Without a control group—a group that does not receive the
supplement (C)—it is hard to separate the effects of maturation from those
of the treatment.
The term control group refers to a group of participants whose
performance on an outcome is used to evaluate the performance of the
treatment group on the same outcome. Researchers with training in the
social sciences use the term “group” or “condition” (e.g., the control group
or control condition), but medical researchers often use the term “arm,” as
in the “intervention arm” or the “control arm” of the study.
The control condition is a proxy for an ideal counterfactual. Researchers
have choices about what to use as the counterfactual. Possibilities for the
counterfactual include the following:

1. An alternative intervention; for example, participants could receive alternative
therapies for pain, such as music versus massage.

2. Standard methods of care—i.e., the usual procedures used to care for patients.
This is the most typical control condition in nursing studies.

3. A placebo or pseudointervention presumed to have no therapeutic value; for
example, in drug studies, some patients get the experimental drug and others
get an innocuous substance. Placebos are used to control for the
nonpharmaceutical effects of drugs, such as extra attention. There can, however,
be placebo effects—changes in the outcome attributable to the placebo
condition—because of participants’ expectations of benefits or harms.

Example of a Placebo Control Group
Saad and an interprofessional team (2018) tested the effect of vitamin D
supplementation in children with autism spectrum disorder (ASD). They
randomly assigned 109 children with ASD to receive vitamin D or a placebo for
4 months.

4. Sometimes researchers use an attention control group when they want to rule
out the possibility that intervention effects are caused by the special attention
given to those receiving the intervention, rather than by the actual treatment
itself. The idea is to separate the “active ingredients” of the treatment from the
“inactive ingredient” of special attention.

Example of an Attention Control Group
Doering and Dogan (2018) did a pilot test of an intervention for postpartum
sleep and fatigue. Participants were randomized to the theory-guided
intervention that focused on self-management or to an attention control group
that received general information about healthy eating and sleep.

5. Different doses or intensities of treatment wherein all participants get some type
of intervention, but the experimental group gets an intervention that is richer,
more intense, or longer. This approach is attractive when there is a desire to
analyze dose-response effects, i.e., to test whether larger doses are associated
with larger benefits, or whether a smaller (and less costly or burdensome) dose
would suffice.

Example of an Alternative Dose Design
Breneman and an interdisciplinary team (2019) studied the effect of two
moderate-intensity walking programs with low-dose versus high-dose energy
expenditure on night-to-night variability in sleep among older women.
Participants were randomized to one of the programs.

6. Wait-list control group, with delayed treatment; the control group eventually
receives the full intervention after all outcomes are assessed.

In terms of inferential conclusiveness, the best test is between two
conditions that are as different as possible, as when the experimental
group gets a strong treatment and the control group gets no treatment.
Ethically, the wait-list approach (number 6) is appealing, but may be hard
to do pragmatically. Testing two competing interventions (number 1) also
has ethical appeal but runs the risk of ambiguous results if both
interventions are moderately effective. This option is, however, the
preferred approach in comparative effectiveness research (CER), which
strives to produce evidence that is especially useful for clinical
decision-making. CER is described in Chapters 11 and 31.
Some researchers combine several comparison strategies. For example,
they might test two alternative treatments (option 1) against a placebo
(option 3). The use of three or more comparison groups is often attractive
but adds to the cost and complexity of the study.

Example of a Three-Group Randomized Design
Özkan and Zincir (2017) tested the effect of reflexology on the
spasticity and muscular function of children with cerebral palsy.
Children were randomized to a reflexology group, a placebo group
(sham reflexology), or a control group (no intervention).

The control group decision should be based on an underlying
conceptualization of how the intervention might “cause” the intended
effect and should also reflect what needs to be controlled. For example, if
attention control groups are being considered, there should be an
underlying conceptualization of the construct of “attention” (Gross, 2005).
Researchers need to carefully spell out their control group strategy. In
research reports, researchers sometimes say that the control group got
“usual care” without explaining what usual care entailed. In drawing on
evidence for practice, nurses need to understand exactly what happened
to study participants in different conditions. Barkauskas et al. (2005) and
Shadish et al. (2002) offer useful advice about developing a control group
strategy.

Randomization
Randomization (also called random assignment or random allocation)
involves assigning participants to treatment conditions at random. Random
means that participants have an equal chance of being assigned to any
group. If people are placed in groups randomly, there is no systematic bias
in the groups with respect to preintervention attributes that are potential
confounders that could affect outcomes.

Randomization Principles
The purpose of random assignment is to approximate the ideal—but
impossible—counterfactual of having the same people exposed to two or
more conditions simultaneously. For example, suppose we wanted to
study the effectiveness of a contraceptive counseling program for
multiparous women (P) who wish to avoid another pregnancy (O). Two
groups of women are included—one will be counseled (I) and the other
will not (C). Women in the sample are likely to be diverse in terms of age,
marital status, income, and so on. Any of these characteristics could affect
a woman’s contraceptive use, independent of whether she receives
counseling. We need to have the “counsel” and “no counsel” groups equal
with respect to confounding traits to assess the impact of counseling on
subsequent pregnancies. Random assignment of people to one group or
the other is designed to perform this equalization function.
Although randomization is the preferred method for equalizing groups,
there is no guarantee that the groups will be equal. The risk of unequal
groups is high when sample size is small. For example, with a sample of
only 10—5 men and 5 women—it is possible that all 5 men would be
assigned to one group and all 5 women to the other. The likelihood of
getting markedly unequal groups is reduced as the sample size increases.
You may wonder why we do not consciously control characteristics that
are likely to affect the outcome through matching. For example, if
matching were used in the contraceptive counseling study, we could
ensure that if there were a married, 38-year-old woman with three
children in the experimental group, there would be a married, 38-year-old
woman with three children in the control group. To match effectively,
however, we must know the characteristics that are likely to affect the
outcome, but this knowledge is often imperfect. Even if we knew the
relevant traits, the complications of matching on more than two or three
confounders simultaneously are prohibitive. With random assignment, all
personal characteristics—age, income, health status, and so on—are likely
to be equally distributed in all groups. Over the long run, randomized
groups tend to be counterbalanced with respect to an infinite number of
biologic, psychological, economic, and social traits.

Basic Randomization
The most straightforward randomization procedure for a two-group
design is to simply allocate each person as they enroll into a study on a
random basis—for example, by flipping a coin. If the coin comes up
“heads,” a participant would be assigned to one group; if it comes up
“tails,” he or she would be assigned to the other group. This type of
randomization, with no restrictions, is sometimes called complete
randomization. Each successive person has a 50-50 chance of being assigned
to the intervention group. The problem with this approach is that large
imbalances in group size can occur, especially when the sample size is
small. For example, with a sample of 10 subjects, there is only a 25%
probability that perfect balance (5 per group) would result. In other words,
three times out of four, the intervention and control groups would be of
unequal size, by chance alone. This method is not recommended with
sample sizes less than 200 (Lachin et al., 1988).
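The 25% figure can be verified with the binomial distribution (a worked check added here). With 10 participants each independently assigned to the intervention with probability 1/2,

$$P(\text{exactly 5 per group}) = \binom{10}{5}\left(\frac{1}{2}\right)^{10} = \frac{252}{1024} \approx 0.246.$$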
Researchers often want treatment groups of equal size or with
predesignated proportions. Simple randomization involves starting with a
known sample size, and then prespecifying the proportion of subjects who
will be randomly allocated to different treatment conditions. To illustrate
simple randomization, suppose we were testing two interventions to
reduce the anxiety of children who are about to undergo tonsillectomy.
One intervention involves giving structured information about the
surgical team’s activities (procedural information); the other involves
structured information about what the child will feel (sensation
information). A third control group receives no special intervention. We
have a sample of 15 children, and 5 will be randomly assigned to each
group.
Before widespread availability of computers, researchers used a table of
random numbers to randomize. A small portion of such a table is shown
in Table 9.2. In a table of random numbers, any digit from 0 to 9 is equally
likely to follow any other digit. Going in any direction from any point in
the table produces a random sequence.

TABLE 9.2
Small Table of Random Digits

46 85 05 23 26 34 67 75 83 00 74 91 06 43 45
69 24 89 34 60 45 30 50 75 21 61 31 83 18 55
14 01 33 17 92 59 74 76 72 77 76 50 33 45 13
56 30 38 73 15 16 52 06 96 76 11 65 49 98 93

81 30 44 85 85 68 65 22 73 76 92 85 25 58 66
70 28 42 43 26 79 37 59 52 20 01 15 96 32 67
90 41 59 36 14 33 52 12 66 65 55 82 34 76 41
39 90 40 21 15 59 58 94 90 67 66 82 14 15 75
88 15 20 00 80 20 55 49 14 09 96 27 74 82 57
45 13 46 35 45 59 40 47 20 59 43 94 75 16 80
70 01 41 50 21 41 29 06 73 12 71 85 71 59 57
37 23 93 32 95 05 87 00 11 19 92 78 42 63 40
18 63 73 75 09 82 44 49 90 05 04 92 17 37 01
05 32 78 21 62 20 24 78 17 59 45 19 72 53 32
95 09 66 79 46 48 46 08 55 58 15 19 02 87 82
43 25 38 41 45 60 83 32 59 83 01 29 14 13 49
80 85 40 92 79 43 52 90 63 18 38 38 47 47 61
81 08 87 70 74 88 72 25 67 36 66 16 44 94 31
84 89 07 80 02 94 81 03 19 00 54 10 58 34 36

In our example, we would number the 15 children from 1 to 15, as shown
in column 2 of Table 9.3, and then draw numbers between 01 and 15 from
the random number table. To find a random starting point, you can close
your eyes and let your finger fall at some point on the table. For this
example, assume that our starting point is at number 52 (the seventh entry
in the fourth row of Table 9.2). We can move in any direction from that point, selecting numbers that
fall between 01 and 15. Let us move to the right, looking at two- digit
combinations. The number to the right of 52 is 06. The person whose
number is 06, Alexander, is assigned to group I. Moving along, the next
number within our range is 11. (To find numbers in the desired range, we
bypass numbers between 16 and 99.) Violet, whose number is 11, is also
assigned to group I. The next three numbers are 01, 15, and 14. Thus,
Alaine, Christopher, and Paul are assigned to group I. The next five
numbers between 01 and 15 in the table are used to assign five children to
group II, and the remaining five are put into group III. Note that numbers
often reappear in the table before the task is completed. For example, the
number 15 appeared four times during this randomization. This is normal
because the numbers are random.

TABLE 9.3
Example for Random Assignment Procedure

Child’s Name | Number | Group Assignment
Alaine | 01 | I
Kristina | 02 | III
Julia | 03 | III
Lauren | 04 | II
Grace | 05 | II
Alexander | 06 | I
Norah | 07 | III
Cormac | 08 | III
Ronan | 09 | II
Cullen | 10 | III
Violet | 11 | I
Maren | 12 | II
Leo | 13 | II
Paul | 14 | I
Christopher | 15 | I

We can look at the three groups to see if they are similar for one
discernible trait, gender. We started out with eight girls and seven boys.
Randomization did a fairly good job of allocating boys and girls similarly
across the three groups: there are 2, 3, and 3 girls and 3, 2, and 2 boys in
groups I through III, respectively. We must hope that other characteristics
(e.g., age, initial anxiety) are also well distributed in the randomized
groups. The larger the sample, the stronger the likelihood that the groups
will be balanced on all factors that could affect the outcome.
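In practice, the random-number table is usually replaced by software. A minimal sketch of the same fixed-size allocation in Python (the seed is arbitrary, and any particular run is just one random draw, so the resulting groups will not match Table 9.3):

```python
import random

children = ["Alaine", "Kristina", "Julia", "Lauren", "Grace",
            "Alexander", "Norah", "Cormac", "Ronan", "Cullen",
            "Violet", "Maren", "Leo", "Paul", "Christopher"]

random.seed(9)                                     # fixed seed for reproducibility
shuffled = random.sample(children, len(children))  # a random permutation
groups = {"I": shuffled[:5], "II": shuffled[5:10], "III": shuffled[10:]}

for label, members in groups.items():
    print(f"Group {label}: {members}")
```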
Researchers usually assign participants proportionately to groups being
compared. For example, a sample of 300 participants in a two- group
design would generally be allocated 150 to the treatment group and 150 to
the control group. If there were three groups, there would be 100 per
group. It is also possible (and sometimes desirable ethically) to have a
different allocation. For example, if an especially promising treatment
were developed, we could assign 200 to the treatment group and 100 to the
control group. Such an allocation does, however, make it more difficult to
detect treatment effects at statistically significant levels—or, to put it
another way, the overall sample size must be larger to attain the same level
of statistical reliability.
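One way to see the statistical cost of unequal allocation (our gloss; the chapter states the conclusion without the algebra): the variance of an estimated difference between two group means is proportional to $1/n_1 + 1/n_2$, and for a fixed total $N = n_1 + n_2$,

$$\frac{1}{n_1} + \frac{1}{n_2} \ge \frac{4}{N},$$

with equality only when $n_1 = n_2$. For the 300-person example, a 200/100 split gives $1/200 + 1/100 = 0.015$, versus $2/150 \approx 0.0133$ for a 150/150 split: about 12% more variance, which must be offset by a larger total sample.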
Computerized resources are available for free on the Internet to help with
randomization (e.g., www.randomizer.org, which has a useful tutorial).
Standard statistical software packages (e.g., SPSS or SAS) can also be used.

TIP There is considerable confusion—even in research methods
textbooks—about random assignment versus random sampling.
Randomization is a signature of an experimental design. If
participants are not randomly allocated to conditions, then the design
is not a true experiment. Random sampling, by contrast, is a method
of selecting people for a study (see Chapter 13). Random sampling is
not a signature of an experiment. In fact, most RCTs do not involve
random sampling.

Randomization Procedures
The success of randomization depends on two factors. First, the allocation
process should be truly random. Second, there must be strict adherence to
the randomization schedule. The latter can be achieved if the allocation is
unpredictable (for both participants and those enrolling them) and
tamperproof. Random assignment should involve allocation concealment
that prevents those who enroll participants from knowing upcoming
assignments, to avoid potential biases. As an example, if the person doing
the enrollment knew that the next person would be assigned to a
promising intervention, he or she might defer enrollment until a needier
patient enrolled.
Several methods of allocation concealment have been devised, many of
which involve developing a randomization schedule before the study
begins. This is advantageous when people do not enter a study
simultaneously, but rather on a rolling enrollment basis. One method is to
have sequentially numbered, opaque sealed envelopes (SNOSE) containing
assignment information. Participants entering the study receive the next
envelope in the sequence (for procedural suggestions, see Doig &
Simpson, 2005). The gold standard approach is to have treatment
allocation performed by an agent unconnected with enrollment and
communicated to researchers by telephone or e-mail. Herbison et al. (2011)
found, however, that trials with a SNOSE system had a risk of bias
comparable to that of trials with centralized randomization.

TIP Downs et al. (2010) offer recommendations for avoiding
practical problems in implementing randomization.

Timing of randomization is important. Study eligibility—whether a person
meets the criteria for inclusion—should be ascertained before
randomization. If baseline data (preintervention data on outcomes) are
collected, this should occur before randomization to rule out any
possibility that knowledge of the group assignment might distort baseline
measurements. Randomization should occur as closely as possible to the
intervention start- up, to increase the likelihood that participants will
actually receive the condition to which they have been assigned. Figure 9.1
illustrates the sequence of steps that occurs in most RCTs, including the
timing for obtaining informed consent.

FIGURE 9.1 Sequence of steps in a standard two- arm randomized design.

TIP Some studies use quasi- randomization, which is a method of
allocating participants in a manner that is not strictly random. For
example, participants may be assigned to groups on an alternating

basis (every other person to a group) or based on whether their
birthdate is an odd or even number. These are not true methods of
randomization.

Randomization Variants
Simple or complete randomization is used in many nursing studies, but
variants of randomization offer advantages in terms of ensuring group
comparability or minimizing certain biases. These variants include the
following:

Stratified randomization, in which randomization occurs separately for distinct
subgroups (e.g., males and females);
Permuted block randomization, in which people are allocated to groups in
small, randomly sized blocks to ensure a balanced distribution in each block;
Urn randomization, in which group balance is continuously monitored and the
allocation probability is adjusted when an imbalance occurs (i.e., the probability
of assignment becomes higher for the condition with fewer participants);
Randomized consent, in which randomization occurs prior to obtaining
informed consent (also called a Zelen design);
Partial randomization, in which only people without a strong treatment
preference are randomized—sometimes called partially randomized patient
preference (PRPP); and
Cluster randomization, which involves randomly assigning clusters (e.g.,
hospitals) rather than people to different treatment groups.

These and other randomization variants are described in greater
detail in the Supplement to Chapter 9.
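
To make one of the variants listed above concrete, the following Python sketch implements permuted block randomization with randomly sized blocks. The block sizes, arm labels, and seed are illustrative assumptions, not prescriptions.

import random

def permuted_block_schedule(n, block_sizes=(4, 6), arms=("A", "B"), seed=7):
    """Build an assignment schedule from randomly sized permuted blocks.

    Each block contains equal numbers of each arm, so the groups stay
    closely balanced no matter when enrollment stops.
    """
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < n:
        size = rng.choice(block_sizes)             # randomly sized block
        block = list(arms) * (size // len(arms))   # equal arms per block
        rng.shuffle(block)                         # permute order within block
        schedule.extend(block)
    return schedule[:n]   # a truncated final block may be slightly unbalanced

print(permuted_block_schedule(10))

Because upcoming assignments within a block cannot be predicted from earlier ones, especially when block sizes vary, such schedules also support allocation concealment.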

Blinding or Masking
People usually want things to turn out well. Researchers want their ideas
to work, and they want their hypotheses supported. Participants want to
be helpful and want to present themselves in a positive light. These
tendencies can lead to biases because they can affect what participants do
and say (and what researchers ask and perceive) in ways that distort the
truth.
A procedure called blinding (or masking) is often used in RCTs to prevent
biases stemming from awareness. Blinding involves concealing information
from participants, data collectors, care providers, intervention agents, or
data analysts to enhance objectivity and minimize expectation bias. For

example, if participants are not aware of whether they are getting an
experimental drug or a placebo, then their outcomes cannot be influenced
by their expectations of its efficacy. Blinding typically involves disguising
or withholding information about participants’ status in the study (e.g.,
whether they are in the experimental or control group) but can also
involve withholding information about study hypotheses or baseline
performance on outcomes.
Lack of blinding can result in several types of bias. Performance bias
refers to systematic differences in the care provided to members of
different groups of participants, apart from any intervention. For example,
those delivering an intervention might treat participants in groups
differently (e.g., with greater attentiveness), apart from the intervention
itself. Efforts to avoid performance bias usually involve blinding
participants and the agents who deliver treatments. Detection (or
ascertainment) bias, which concerns systematic differences between
groups in how outcome variables are measured, verified, or recorded, is
addressed by blinding those who collect the outcome data or, in some
cases, those who analyze the data.
Unlike allocation concealment, blinding is not always possible. Drug
studies often lend themselves to blinding but many nursing interventions
do not. For example, if the intervention were a smoking cessation
program, participants would know whether they were receiving the
intervention, and the interventionist would be aware of who was in the
program. However, it is usually possible, and desirable, to mask
participants’ treatment status from people collecting outcome data and
from clinicians providing normal care.

TIP Blinding may not be necessary if subjectivity in measuring the
outcome is low. For example, participants’ ratings of pain are
susceptible to biases stemming from their own or data collectors’
awareness of treatment group status. Hospital readmission and
length of hospital stay, on the other hand, are less likely to be affected
by people’s awareness.

When blinding is not used, the study is an open study, in contrast to a closed
study. When blinding is used with only one group of people (e.g., study
participants), it is sometimes called a single- blind study. When it is
possible to mask with two groups (e.g., those delivering an intervention

and those receiving it), it is sometimes called double blind. However,
recent guidelines recommend that researchers not use these terms without
explicitly stating which groups were blinded because the term “double
blind” has been used to refer to many different combinations of blinded
groups (Moher et al., 2010).
The term blinding, though widely used, has been criticized because of
possible pejorative connotations. The American Psychological Association,
for example, has recommended using masking instead. Medical
researchers appear to prefer blinding unless the people in the study have
vision impairments (Schulz et al., 2002). Most nurse researchers use the
term blinding rather than masking (Polit et al., 2011).

Example of an Experiment With Blinding
George et al. (2018) conducted a multicenter RCT to evaluate an oral
health program initiated by midwives to improve oral health and
birth outcomes for pregnant women. Data collectors and study
investigators were blinded to whether participants were in the
intervention or control group.

Specific Experimental Designs
Some popular experimental designs are described in this section. We
illustrate some of them using design notation from a classic monograph
(Campbell & Stanley, 1963). In this system, R means random assignment;
O represents outcome measurements; and X stands for exposure to the
intervention. Each row designates a different group. (Supplement A to
Chapter 10 provides more detail about various designs using
this notation.)

Basic Experimental Designs
Earlier in this chapter, we described a study that tested the effect of gentle
massage on pain in nursing home residents. This example illustrates a
simple design that is sometimes called a posttest- only design (or after- only
design) because data on the outcome are collected only once—after
randomization and completion of the intervention. Here is the notation for
this design, which shows that both groups are randomized (R), but only
the first group gets the intervention (X):

R X O
R O

A second basic design involves collecting baseline data, like the design in
Figure 9.1. Suppose we hypothesized that convective airflow blankets are
more effective than conductive water- flow blankets in cooling critically ill
febrile patients. Our design involves assigning patients to the two blanket
types (the independent variable) and measuring the outcome (body
temperature) twice, before and after the intervention. Here is a diagram
for this design:

R O1 X O2
R O1 O2

This design allows us to examine whether one blanket type is more
effective than the other in reducing fever; with this design researchers can
examine change. This design is a pretest–posttest design (before–after
design); such designs are mixed designs: analyses can examine both differences
between groups and changes within groups over time. Some pretest–
posttest designs include data collection at multiple postintervention
points, i.e., repeated measures designs. These basic designs can be “tweaked”
in various ways—for example, the design could involve comparison of
three or more groups.

Example of a Pretest–Posttest Experimental Design
Ng and Wong (2018) studied the effects of a home- based palliative
program on the quality of life, symptom burden, functional status,
and satisfaction with care of patients with end- stage heart failure. The
outcomes for patients in the intervention and control groups were
measured at baseline, and at 4 and 6 weeks after discharge from the
hospital.

Factorial Design
Most experimental designs involve manipulating only one independent
variable, but it is possible to manipulate two or more variables
simultaneously. Suppose we wanted to compare two therapies for
premature infants: tactile versus auditory stimulation. We also want to
learn if the daily amount of stimulation (15, 30, or 45 minutes) affects
infants’ progress. The outcomes are measures of infant development (e.g.,
weight gain). Figure 9.2 illustrates the structure of this RCT.


FIGURE 9.2 Example of a 2 × 3 factorial design.

This factorial design allows us to address three research questions:

1. Does auditory stimulation have a more beneficial effect on premature infants’
weight gain than tactile stimulation, or vice versa?

2. Is amount of stimulation (independent of type) related to infants’ weight gain?
3. Is auditory stimulation most effective when linked to a certain dose and tactile

stimulation most effective when coupled with a different dose?

The third question shows the strength of factorial designs: they permit us
to test not only main effects (effects from the manipulated variables, as in
questions 1 and 2) but also interaction effects (effects from combining
treatments). Our results may indicate that 30 minutes of auditory
stimulation is the most beneficial treatment. We could not have learned
this by conducting two separate studies that manipulated one
independent variable and held the second one constant.
In factorial experiments, participants are randomly assigned to a specific
combination of conditions. In our example (Figure 9.2), infants would be
assigned randomly to one of six cells—i.e., six treatment conditions. The
two independent variables in a factorial design are the factors. Type of
stimulation is factor A, and amount of daily exposure is factor B. Level 1 of
factor A is auditory and level 2 is tactile. When describing the dimensions
of the design, researchers refer to the number of levels. The design in

Figure 9.2 is a 2 × 3 design: two levels in factor A times three levels in
factor B. Factorial experiments with more than two factors are rare.
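
The cells of a factorial design are simply the combinations of factor levels, which makes them easy to enumerate in code. The sketch below builds the six cells of the 2 × 3 design in Figure 9.2 and deals a hypothetical set of infants evenly across them; the IDs and seed are invented for illustration.

import itertools
import random

factor_a = ("auditory", "tactile")   # factor A: type of stimulation (2 levels)
factor_b = (15, 30, 45)              # factor B: minutes per day (3 levels)
cells = list(itertools.product(factor_a, factor_b))   # 2 x 3 = 6 conditions

rng = random.Random(11)
infants = [f"infant_{i:02d}" for i in range(1, 31)]   # hypothetical IDs
rng.shuffle(infants)
# deal shuffled infants across the six cells: 30 / 6 = 5 per cell
assignment = {infant: cells[i % len(cells)] for i, infant in enumerate(infants)}
print(assignment["infant_01"])   # e.g., ('tactile', 30)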

Example of a Factorial Design
Adams et al. (2017) used a factorial design in their study of strategies
to increase adults’ physical activity. In their 2 × 2 design, one factor
was type of goal setting strategy (adaptive versus static goals for
number of steps per day) and the other factor was timing of rewards
(immediate versus delayed). The outcome was number of steps
walked per day.

Crossover Design
Thus far, we have described RCTs in which different people are randomly
assigned to different conditions. For instance, in the previous example,
infants who received auditory stimulation were not the same infants as
those who received tactile stimulation. A crossover design involves
exposing the same people to more than one condition. This within- subjects
design has the advantage of ensuring the highest possible equivalence
among participants exposed to different conditions—the groups being
compared are equal with respect to age, weight, and so on because they
are composed of the same people.
Because randomization is a signature of an experiment, participants in a
crossover design must be randomly assigned to different orderings of
treatments. For example, if a crossover design were used to compare the
effects of auditory and tactile stimulation on infant development, some
infants would be randomly assigned to receive auditory stimulation first,
and others would be assigned to receive tactile stimulation first. When
there are three or more conditions to which participants will be exposed,
the procedure of counterbalancing can be used to rule out ordering
effects. For example, if there were three conditions (A, B, C), participants
would be randomly assigned to one of six counterbalanced orderings:

A, B, C A, C, B
B, C, A B, A, C
C, A, B C, B, A
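
The six orderings above are simply all permutations of the three conditions, so counterbalancing can be scripted directly. In this minimal Python sketch, the participant labels and seed are hypothetical:

import itertools
import random

conditions = ("A", "B", "C")
orderings = list(itertools.permutations(conditions))   # all 3! = 6 orderings

rng = random.Random(3)
participants = [f"p{i}" for i in range(1, 13)]
rng.shuffle(participants)
# cycle through the orderings so each is used equally often (12 / 6 = 2 each)
sequence = {p: orderings[i % len(orderings)] for i, p in enumerate(participants)}
print(sequence["p1"])   # e.g., ('B', 'C', 'A')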

Although crossover designs are powerful, they are inappropriate for
certain research questions because of possible carry- over effects. When
people are exposed to two different conditions, they may be influenced in

the second condition by their experience in the first one. Drug studies, for
example, rarely use a crossover design because drug B administered after
drug A is not necessarily the same treatment as drug B administered before
drug A. When carry- over effects are a potential concern, researchers often
have a washout period in between the treatments (i.e., a period of no
treatment exposure).

Example of a Crossover Design
Reddy and an interprofessional team (2018) used a randomized
crossover design with a sample of patients with type 1 diabetes to test
the effect of different exercise routines on sleep and nocturnal
hypoglycemia.

TIP New experimental designs are emerging in response to growing
interest in personalized health care. Several of these designs, such as
N- of- 1 trials and adaptive trials are discussed in Chapter 31, which
focuses on the applicability and relevance of research evidence.

Strengths and Limitations of Experiments
In this section, we explore why experimental designs are held in high
esteem and examine some limitations.

Experimental Strengths
Experimental designs are the gold standard for testing interventions
because they yield strong evidence about intervention effectiveness.
Experiments offer greater corroboration than other approaches that, if the
independent variable (e.g., diet, drug, teaching approach) is varied, then
certain consequences to the outcomes (e.g., weight loss, recovery, learning)
will ensue. The great strength of RCTs, then, lies in the confidence with
which causal relationships can be inferred. Through the controls imposed
by manipulation, comparison, and randomization, alternative
explanations can be discredited. It is because of this strength that meta-
analyses of RCTs, which integrate evidence from multiple experiments, are
at the pinnacle of evidence hierarchies for Therapy questions (Figure 2.2 of
Chapter 2).

Experimental Limitations
Despite the benefits of experiments, they also have limitations. First,
constraints—which we discuss later in this chapter—often make an
experimental approach impractical or impossible.

TIP Shadish et al. (2002) described 10 situations that are especially
conducive to randomized experiments; these are summarized in a
table in the Toolkit.

Experiments are sometimes criticized for their artificiality, which partly
stems from the requirements for comparable treatment within randomized
groups, with strict adherence to protocols. In ordinary life, by contrast, we
interact with people in nonformulaic ways. A related concern is that the
rigidity of the research process can undermine translation into real- world
settings, an issue we address in Chapter 31.
Problems also emerge when participants “opt out” of the intervention.
Suppose, for example, that we randomly assigned patients with HIV to a
support group intervention or to a control group. Intervention subjects
who elect not to participate in the support groups, or who participate
infrequently, are in a “condition” that looks more like the control
condition than the experimental one. The treatment is diluted through
nonparticipation, and it may be difficult to detect treatment effects, no
matter how effective the intervention might otherwise have been.
Another potential problem is the Hawthorne effect, which is caused by
people’s expectations. The term is derived from a series of experiments
conducted at the Hawthorne plant of the Western Electric Corporation in
which various environmental conditions, such as light and working hours,
were varied to test their effects on worker productivity. Regardless of what
change was introduced, that is, whether the light was made better or
worse, productivity increased. Knowledge of being in the study (not just
knowledge of being in a particular group) appears to have affected
people’s behavior, obscuring the effect of the intervention.
In sum, despite the superiority of RCTs for testing causal hypotheses, they
have several limitations, some of which may make them difficult to apply

to real clinical problems. Nevertheless, with the growing demand for
strong evidence for practice, experimental designs are increasingly being
used to test the effects of nursing interventions.

Quasi- Experiments
Quasi- experiments, sometimes called controlled trials without randomization
in the medical literature, involve an intervention, but they lack
randomization, the signature of a true experiment. Some quasi-
experiments even lack a control group. The signature of a quasi-
experimental design, then, is an intervention in the absence of
randomization.

Quasi- Experimental Designs
We describe a few widely used quasi- experimental designs in this section,
and for some we use the schematic notation introduced earlier.

Nonequivalent Control Group Designs
The nonequivalent control group pretest–posttest design (sometimes
called a controlled before–after design in the medical literature) involves two
groups of participants, for whom outcomes are measured before and after
the intervention. For example, suppose we wished to study the effect of a
new chair yoga intervention for older people. The intervention is being
offered to everyone at a community senior center, and randomization is
not workable. For comparative purposes, we collect outcome data at a
different senior center that is not instituting the intervention. Data on
health- related quality of life are collected from both groups at baseline and
again 10 weeks after implementing the intervention. Here is a schematic
representation of this design:

O1 X O2
O1 O2

The top line represents those receiving the intervention (X) at the
experimental site and the second row represents the group at the
comparison site. This diagram is identical to the experimental pretest–
posttest design depicted earlier except there is no “R”—participants have
not been randomized to groups. The quasi- experimental design is weaker
because it cannot be assumed that the experimental and comparison groups are
initially equivalent. Because there is no randomization, quasi- experimental
comparisons provide a weaker counterfactual than experimental
comparisons. The design is nevertheless strong because baseline data
allow us to assess whether patients in the two centers had similar quality

of life scores at the outset. If the two groups are similar, on average, at
baseline, we could be relatively confident inferring that posttest
differences in outcomes were the result of the yoga intervention. If quality
of life scores are different initially, however, it will be difficult to interpret
posttest differences. Note that in quasi- experiments, the term comparison
group is used in lieu of control group to refer to the group with whom the
treatment group is compared.
Now suppose we had been unable to collect baseline data:

X O
O

This design has a major flaw. We no longer have information about initial
equivalence of people in the two senior centers. If quality of life in the
experimental center is higher than that in the control site at posttest, can
we conclude that the intervention caused improved quality of life? An
alternative explanation for posttest differences is that the people in the two
centers differed at the outset. This nonequivalent control group posttest- only
design is a much weaker quasi- experimental design.

Example of a Nonequivalent Control Group Pretest–Posttest
Design
Takahashi and an interprofessional team (2018) used a quasi-
experimental design to test the effectiveness of community- based
interventions to reduce harmful alcohol consumption in rural Kenya.
Problem drinkers in one village got the brief intervention with
motivational talks, those in another village got the intervention
without the talks, and those in a third village received only general
health information. Alcohol consumption was measured at baseline
and follow-up.

In lieu of using a contemporaneous comparison group, researchers
sometimes use a historical comparison group. That is, comparison data
are gathered from other people before implementing the intervention. Even
when the people are from the same institutional se�ing, however, it is
risky to assume that the two groups are comparable, or that the
environments are comparable except for the new intervention. The
possibility remains that something other than the intervention could
account for observed differences in outcomes.

Example of a Historical Comparison Group
Barta et al. (2017) studied the reconviction rates of driving- under-
the- influence (DUI) offenders who participated in an intensive
supervision program that included prerelease psycho- education and
close postrelease supervision. Their rates of reconviction were
compared with those of an historical comparison group of 302 DUI
offenders.

Time Series Designs
In the designs just described, a control group was used but randomization
was not; some quasi- experiments have neither. Suppose that a hospital
implemented rapid response teams (RRTs) in its acute care units.
Administrators want to examine the effects on patient outcomes (e.g.,
unplanned ICU admissions, mortality rate). For the purposes of this
example, assume no other hospital could serve as a good comparison. One
comparison that can be made is a before–after contrast. If RRTs were to be
implemented in January, the mortality rate (for example) during the
3 months before RRTs could be compared with the mortality rate during
the subsequent 3- month period. The schematic representation of such a
study is:

O1 X O2

This one- group pretest–posttest design seems straightforward, but it has
several weaknesses. What if either of the 3- month periods is atypical, apart
from the innovation? What about the effects of other policy changes
inaugurated during the same period? What about the effects of external
factors that influence mortality, such as a flu outbreak? This design cannot
control these factors.
In our RRT example, the design could be modified so that some alternative
explanations for changes in mortality could be ruled out. One such design
is the time series design (or interrupted time series design). In a time series,
data are collected over an extended period during which an intervention is
introduced, as in this diagram:

O1 O2 O3 O4 X O5 O6 O7 O8

Here, O1 through O4 represent four separate instances of preintervention
outcome measurement, X is the introduction of the intervention, and O5

through O8 represent four posttreatment measurements. In our example,
O1 might be the number of deaths in January through March in the year
before the new RRT system, O2 the number of deaths in April through
June, and so forth. After RRTs are introduced, data on mortality are
collected for four consecutive 3- month periods, giving us observations O5
through O8.
Even though the time series design does not eliminate all interpretive
challenges, the extended time period strengthens our ability to attribute
change to the intervention. Figure 9.3 demonstrates why this is so. The line
graphs (A and B) in the figure show two possible outcome patterns for
eight mortality observations. The vertical dotted line in the center
represents the introduction of the RRT system. Patterns A and B both
reflect a feature common to time series studies—fluctuation from one data
point to another. These fluctuations are normal. One would not expect
that, if 480 patients died in a hospital in 1 year, the deaths would be spaced
evenly with 40 per month. It is precisely because of these fluctuations that
the one- group pretest–posttest design, with only one observation before
and after the intervention, is so weak.

FIGURE 9.3 Two possible time series outcome patterns for quarterly mortality data.

Let us compare the interpretations for the outcomes shown in Figure 9.3.
In both patterns A and B, mortality decreased between O4 and O5,
immediately after RRTs were implemented. In B, however, the number of
deaths rose at O6 and continued to rise at O7. The decrease at O5 looks
similar to other apparently haphazard fluctuations in mortality. In A, on
the other hand, the number of deaths decreased at O5 and remained
relatively low for subsequent observations. There may well be other
explanations for a change in the mortality rate, but the time series design
permits us to rule out the possibility that the data reflect unstable
measurements of deaths at only two points in time. If we had used a
simple pretest–posttest design, it would have been analogous to obtaining
the measurements at O4 and O5 of Figure 9.3 only. The outcomes in both A
and B are the same at these two time points. The broader time perspective
leads us to draw different conclusions about the effects of RRTs.
Nevertheless, the absence of a comparison group means that the design
does not yield an ideal counterfactual.

Example of a Time Series Design
Norman et al. (2017) tested the effect of a multimodal educational
intervention designed to reduce the unnecessary use of urinary
catheters in hospital patients at a large teaching hospital. Incidence of
urinary catheterizations was measured monthly for over 3 years; the
monthly incidence declined after the intervention.

One drawback of a time series design is that many data points—100 or
more—are recommended for a traditional analysis (Shadish et al., 2002),
and the analysis tends to be complex. Nurse researchers have, however,
begun to use a versatile approach called statistical process control (SPC)
to assess effects when they have collected data sequentially over a period
of time before and after implementing a practice change (Polit &
Chaboyer, 2012). Time series designs with SPC analyses are important in
quality improvement (QI) projects because randomization is rarely possible
in QI (see Chapter 12).
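
One common analytic strategy for interrupted time series data (distinct from the SPC approach just mentioned) is segmented regression, which estimates the change in level at the point of intervention after allowing for the underlying trend. The sketch below applies a deliberately simplified version, with a level shift only, to hypothetical quarterly mortality counts patterned on the RRT example:

import numpy as np

# Hypothetical quarterly death counts: O1-O4 before RRTs, O5-O8 after
deaths = np.array([118.0, 124, 121, 126, 104, 101, 106, 103])
quarter = np.arange(8)                  # time index 0..7
post = (quarter >= 4).astype(float)     # 1.0 once RRTs are in place

# Design matrix: intercept, underlying trend, and level shift at intervention
X = np.column_stack([np.ones(8), quarter, post])
coef, *_ = np.linalg.lstsq(X, deaths, rcond=None)
intercept, trend, level_shift = coef
print(f"Estimated level shift: {level_shift:.1f} deaths per quarter")

A fuller model would also allow the slope to change after the intervention, and a formal analysis would attend to autocorrelation among the repeated observations.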
A particularly powerful quasi- experimental design results when the time
series and nonequivalent control group designs are combined. In the
example just described, a time series nonequivalent control group design
would involve collecting data over an extended period from both the
hospital introducing the RRTs and another similar hospital not
implementing the system. Information from another comparable hospital
would make any inferences regarding the effects of RRTs more convincing
because other external factors influencing the trends (e.g., a flu outbreak)
would likely be similar in both cases.
Numerous variations on the simple time series design are possible. For
example, additional evidence regarding the effects of a treatment can be
achieved by instituting the treatment at several different points in time,
strengthening the treatment over time, or instituting the treatment at one
point in time and then withdrawing it at a later point, sometimes with
reinstitution.

Other Quasi- Experimental Designs
Earlier in this chapter, we mentioned the PRPP design. Those without a
strong treatment preference 
are randomized, but those with a preference
are given the condition they prefer and are followed up as part of the
study. The two randomized groups are part of the true experiment, but the
two groups who get their preference are part of a quasi- experiment. This

type of design can yield valuable information about the kind of people
who prefer one condition over another and may help persuade people to
participate in a study. However, evidence of treatment effectiveness is
weak in the quasi- experimental segment because the people who elected a
certain treatment likely differ from those who opted for the alternative—
and these preintervention differences, rather than the alternative
treatments, could account for observed differences in outcomes. Yet,
evidence from the quasi- experiment could usefully support or qualify
evidence from the experimental portion of the study.

Example of a PRPP Design
Chalmers and an interprofessional team (2018) are assessing the
feasibility of providing a psychosocial assessment via telehealth
(versus face- to- face) to adolescents and young adults receiving
treatment for cancer. The trial is using a PRPP design—participants
with strong preferences are being given the assessment in the manner
they chose and those without a preference are being randomized.

Another quasi- experimental approach—sometimes embedded within a
true experiment—is a dose–response design in which the outcomes of
those receiving different doses of an intervention (not as a result of
randomization) are compared. For example, in lengthy interventions,
some people attend more sessions or get more intensive treatment than
others. The rationale for a quasi- experimental dose–response analysis is
that if a larger dose corresponds to better outcomes, the results provide
supporting evidence that the treatment caused the outcome. The difficulty,
however, is that people tend to get different treatment doses because of
differences in motivation, physical function, or other characteristics that
could be the true cause of outcome differences. Nevertheless, dose–
response evidence may yield useful information.

Example of a Dose–Response Analysis
Smith et al. (2018) implemented an online resilience training
program. The analysis tested a dose–response effect—that is, whether
the amount of time spent in the training program affected the amount
of change in participants’ resilience.

Quasi- Experimental and Comparison Conditions
Researchers using a quasi- experimental approach should develop
intervention protocols that document what the interventions entail.
Researchers need to be especially careful in understanding and
documenting the counterfactual. In the case of nonequivalent control
group designs, this means understanding the conditions to which the
comparison group is exposed (e.g., activities at the senior center without
the yoga intervention in our example). In time series designs, the
counterfactual is the conditions existing before implementing the
intervention, and these should be understood. Blinding should be used, to
the extent possible—indeed, this may be more feasible in a quasi-
experiment than in an RCT.

Strengths and Limitations of Quasi- Experiments
A major strength of quasi- experiments is that they are practical. In clinical
settings, it may be impossible to conduct true experimental tests of nursing
interventions. Strong quasi- experimental designs introduce some research
control when full experimental rigor is not possible.
Another advantage of quasi- experiments stems from the fact that patients
are not always willing to relinquish control over their treatment condition. Indeed, people
are increasingly unwilling to volunteer to be randomized in clinical trials
(Vedelø & Lomborg, 2011). Quasi- experimental designs, because they do
not involve random assignment, are likely to be acceptable to a broader
group of people. This, in turn, has positive implications for the
generalizability of the results—but the problem is that the evidence may
be less conclusive.
Researchers using quasi- experimental designs should realize their
weaknesses and take them into account in interpreting results. When a
quasi- experimental design is used, there usually are rival hypotheses
competing with the intervention as explanations for the results. (This issue
relates to internal validity, discussed in Chapter 10.) Take as an example the
case in which we administer a special diet to frail nursing home residents
to assess its effects on weight gain. If we use no comparison group or a
nonequivalent control group and then observe a weight gain, we must ask:
Is it plausible that some other factor caused the gain? Is it plausible that
pretreatment differences between the intervention and comparison groups
resulted in differential gain? Is it plausible that the elders on average gained
weight because the frailest patients died? If the answer is “yes” to such

questions, then inferences about the causal effect of the intervention are
weakened. The plausibility of any particular rival explanation typically
cannot be known unequivocally, but nevertheless a careful plausibility
analysis should be undertaken. We describe how to do this in Supplement
B of Chapter 10.

TIP The Journal of Clinical Epidemiology has published an excellent
13- paper series on quasi- experimental designs. Examples include
papers by Geldsetzer and Fawzi (2017) and Bärnighausen et al.
(2017).

Nonexperimental/Observational Research
Many research questions—including ones seeking to establish causal
relationships—cannot be addressed with an experimental or quasi–
experimental design. For example, at the beginning of this chapter we
posed this Prognosis question: Do birthweights less than 1,500 g cause
developmental delays in children? Clearly, we cannot manipulate
birthweight, the independent variable. One way to answer this question is
to compare developmental outcomes for two groups of infants—babies
with birthweights above and below 1,500 g. When researchers do not
intervene by manipulating the independent variable, the study is
nonexperimental, or, in the medical literature, observational.
Most nursing studies are nonexperimental because most human
characteristics (e.g., weight, lactose intolerance) cannot be manipulated.
Also, many variables that could technically be manipulated cannot be
manipulated ethically. For example, if we were studying the effect of
prenatal care on infant mortality, it would be unethical to provide such
care to one group of pregnant women while deliberately depriving
women in a randomized control group. We would need to locate naturally
occurring groups of pregnant women who had or had not received
prenatal care, and then compare their birth outcomes. The problem,
however, is that the two groups of women are likely to differ in terms of
other characteristics, such as age, education, and income, any of which
individually or in combination could affect infant mortality, independent
of prenatal care. Nevertheless, many nonexperimental studies explore
cause- and- effect relationships when an experimental design is not
possible.

Correlational Cause- Probing Research
When researchers study the effect of a potential cause that they cannot
manipulate, they use correlational designs to examine relationships
between variables. A correlation is a relationship or association between
two variables, that is, a tendency for variation in one variable to be related
to variation in another. For example, in human adults, height and weight
are correlated because there is a tendency for taller people to weigh more
than shorter people.
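
A correlation of this kind is easy to compute. The sketch below uses Python's standard library (the statistics.correlation function, available in Python 3.10 and later) on invented height and weight values:

from statistics import correlation

heights = [160, 165, 170, 175, 180]   # cm (hypothetical)
weights = [55, 61, 66, 72, 79]        # kg (hypothetical)
r = correlation(heights, weights)     # Pearson's r
print(round(r, 3))                    # close to +1: a strong positive correlation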

As mentioned earlier, one criterion for causality is that an empirical
relationship (correlation) between variables must be demonstrated. It is
risky, however, to infer causal relationships in correlational research. A
famous research dictum is relevant: correlation does not prove causation. The
mere existence of a relationship between variables is not enough to
conclude that one variable caused the other, even if the relationship is
strong. In experiments, researchers directly control the independent
variable; the experimental treatment can be administered to some and
withheld from others, and the two groups can be equalized through
randomization with respect to everything except the independent variable.
In correlational research, investigators do not control the independent
variable, which often has already occurred. Groups being compared often
differ in ways that affect outcomes of interest—that is, there are usually
confounding variables. Although correlational studies are inherently
weaker than experimental studies in confirming causal relationships,
different designs offer differing degrees of supportive evidence.

Retrospective Designs
Studies with a retrospective design are ones in which a phenomenon
existing in the present is linked to phenomena that occurred in the past.
The signature of a retrospective study is that the researcher begins with
the dependent variable (the effect) and then examines whether it is
correlated with one or more previously occurring independent variables
(potential causes).
Most early studies of the smoking–lung cancer link used a retrospective
case–control design, in which researchers began with a group of people
who had lung cancer (cases) and another group who did not (controls). The
researchers then looked for differences between the two groups in
antecedent circumstances or behaviors, such as smoking.
In designing a case–control study, researchers try to identify controls
without the disease or condition who are as similar as possible to the cases
on key confounding variables (e.g., age, gender). Researchers sometimes
use matching or other techniques to control for confounding variables. To
the degree that researchers can demonstrate comparability between cases
and controls regarding confounding traits, inferences regarding the
presumed cause of the disease are enhanced. The difficulty, however, is
that the two groups are almost never totally comparable on factors

influencing the outcome. Grimes and Schulz (2005) offer guidance on
identifying controls for case–control studies.
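
A simple form of matching can be expressed in a few lines of code. The sketch below pairs each case with one unused control that matches exactly on age group and gender; all records are hypothetical, and real case–control studies typically match on more variables, often with more flexible criteria.

# Hypothetical records: (id, age_group, gender)
cases = [("case1", "60-69", "F"), ("case2", "70-79", "M")]
control_pool = [("ctrl1", "60-69", "M"), ("ctrl2", "70-79", "M"),
                ("ctrl3", "60-69", "F"), ("ctrl4", "50-59", "F")]

matched = {}
available = list(control_pool)
for case_id, age, gender in cases:
    for control in available:
        if control[1:] == (age, gender):   # exact match on both variables
            matched[case_id] = control[0]
            available.remove(control)      # each control can be used only once
            break

print(matched)   # {'case1': 'ctrl3', 'case2': 'ctrl2'}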

Example of a Case–Control Design
Yuan et al. (2018) studied risk factors for death among patients with
severe stroke. A total of 188 patients who died of stroke at a
university hospital in China were the cases; 188 stroke survivors from
the same neurological ICU were randomly selected as the controls.
Clinical characteristics of the two groups were compared.

Not all retrospective studies can be described as using a case–control
design. Sometimes researchers use a retrospective approach to identify
risk factors for different amounts of an outcome rather than “caseness.” For
example, a retrospective design might be used to identify factors
predictive of the length of time new mothers breastfed their infants. Such a
design often is intended to understand factors that cause women to make
different breastfeeding decisions (i.e., an Etiology question).
Many retrospective studies are cross- sectional, with data on both the
dependent and independent variables collected at a single point in time. In
such studies, data for the independent variables often are based on
recollection (retrospection)—or the researchers “assume” that the
independent variables occurred before the outcome. One problem,
however, is that recollection can be biased by subsequent events or
memory lapses.

Example of a Retrospective Design
Como (2018) used cross- sectional data in a retrospective study
designed to identify factors predictive of perceived physical and
mental health among people with chronic heart failure. The
independent variables included self- efficacy, health literacy, and
medication adherence.

Prospective Nonexperimental Designs
In correlational studies with a prospective design (called a cohort design
in medical circles), researchers start with a presumed cause and then go
forward in time to the presumed effect. For example, in prospective lung

cancer studies, researchers start with a cohort of adults (P) that includes
smokers (I) and nonsmokers (C), and then compare the two groups in
terms of subsequent lung cancer incidence (O). The best design for
Prognosis questions, and for Etiology questions when randomization is
impossible, is a cohort design. A particularly strong design for Prognosis
questions is an inception cohort design, which involves the study of a
group assembled at a common time early in a health disorder or exposure
to a putative “cause” of an outcome (e.g., immediately after a traumatic
brain injury), and then followed thereafter to assess the outcomes.
Prospective studies are more costly than retrospective studies, in part
because prospective studies require at least two rounds of data collection.
A lengthy follow- up period may be needed before the outcome of interest
occurs, as is the case in prospective studies of cigarette smoking and lung
cancer. Also, prospective designs require large samples if the outcome of
interest is rare. Another issue is that in a good prospective study,
researchers take steps to confirm that all participants are free from the
effect (e.g., the disease) at the time the independent variable is measured,
and this may be difficult or expensive to do. For example, in prospective
smoking/lung cancer studies, lung cancer may be present initially but not
yet diagnosed.
Despite these issues, prospective studies are considerably stronger than
retrospective studies. Any ambiguity about whether the presumed cause
occurred before the effect is resolved in prospective research if the
researcher has confirmed the initial absence of the effect. In addition,
samples are more likely to be representative, and investigators may be able
to impose controls to rule out competing explanations for the results.

TIP The term “prospective” is not synonymous with “longitudinal.”
Although most nonexperimental prospective studies are longitudinal,
prospective studies are not necessarily longitudinal. Prospective
means that information about a possible cause is obtained prior to
information about an effect. RCTs are inherently prospective because
researchers introduce the intervention and then determine its effect.
An RCT that collected outcome data 1 hour after an intervention
would be prospective, but not longitudinal.

Some prospective studies are exploratory. Researchers sometimes measure
a wide range of possible “causes” at one point in time (e.g., foods

consumed), and then examine an outcome of interest at a later point (e.g.,
a cancer diagnosis). Such studies are usually more convincing than
retrospective studies because time sequences are clear, provided it can be
determined that the outcome was not present initially. They are not, however,
as powerful as prospective studies that involve specific a priori hypotheses
and the comparison of cohorts known to differ on a presumed cause.
Researchers doing exploratory retrospective or prospective studies are
sometimes accused of going on “fishing expeditions” that can lead to
erroneous conclusions because of spurious or idiosyncratic relationships in
a particular sample of participants.

Example of a Prospective Nonexperimental Study
Ndosi and an interprofessional team (2018) studied the prognosis of
infected diabetic foot ulcers. Clinical information was obtained for the
patients 12 months after they required antibiotic therapy. The
researchers studied factors relating to ulcer healing, such as having
single versus multiple ulcers and perfusion grades >2.

Natural Experiments
Researchers are sometimes able to study the outcomes of a natural
experiment in which a group exposed to a phenomenon with potential
health consequences is compared with a nonexposed group. Natural
experiments are nonexperimental because the researcher does not
intervene, but they are called “natural experiments” if people are affected
essentially at random. For example, the psychological well- being of people
living in a community struck with a natural disaster (e.g., a volcanic
eruption) could be compared with the well- being of people living in a
similar but unaffected community to assess the toll exacted by the disaster
(the independent variable).

Example of a Natural Experiment
Dotson et al. (2016) studied whether the administration of calcium
was associated with adverse outcomes in critically ill patients
receiving parenteral nutrition. Outcomes such as in- hospital
mortality and acute respiratory failure were studied before and after

a calcium gluconate shortage, which created the opportunity for this
natural experiment.

Path Analytic Studies
Researchers interested in testing theories of causation using
nonexperimental data often use a technique called path analysis (or
similar causal modeling techniques). Using sophisticated statistical
procedures, researchers test a hypothesized causal chain among a set of
independent variables, mediating variables, and a dependent variable.
Path analytic procedures allow researchers to test whether
nonexperimental data conform sufficiently to the underlying model to
justify causal inferences. Path analytic studies can be done within the
context of both cross- sectional and longitudinal designs, the latter
providing a stronger basis for causal inferences because of the ability to
verify time sequences.
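
In simple recursive models, a path analysis can be estimated as a series of regressions, one per endogenous variable. The sketch below simulates hypothetical data for a three- variable chain (an exogenous cause, a mediator, and an outcome) and recovers the direct and indirect effects; the variable names and effect sizes are invented:

import numpy as np

rng = np.random.default_rng(0)
n = 500
cause = rng.normal(size=n)                         # exogenous variable
mediator = 0.5 * cause + rng.normal(size=n)        # mediating variable
outcome = 0.4 * mediator + 0.2 * cause + rng.normal(size=n)

def slopes(y, *predictors):
    """OLS slope estimates (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

(a,) = slopes(mediator, cause)            # path: cause -> mediator
b, c = slopes(outcome, mediator, cause)   # paths: mediator -> outcome, direct
print(f"indirect effect = {a * b:.2f}, direct effect = {c:.2f}")

Dedicated structural equation modeling software would add overall fit statistics, which is how researchers judge whether the data conform sufficiently to the hypothesized model.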

Example of a Path Analytic Study
Lau et al. (2018) tested a causal model to explain early breastfeeding
initiation. Their path analysis tested hypothesized causal pathways
between mode of birth, labor duration, NICU admission, and early
skin- to- skin contact and early breastfeeding initiation, the outcome.

Descriptive Research
Descriptive research is a second broad class of nonexperimental research.
The purpose of descriptive studies is to observe, describe, and document a
situation as it naturally occurs. Sometimes descriptive studies are a
starting point for hypothesis generation or theory development.

Descriptive Correlational Studies
Some research problems are cast in noncausal terms. We may ask, for
example, whether men are less likely than women to seek assistance for
depression, not whether configurations of sex chromosomes caused
differences in health behavior. Unlike other types of correlational research
—such as the cigarette smoking and lung cancer investigations—the aim
of descriptive correlational research is to describe relationships among
variables rather than to support inferences of causality.

Example of a Descriptive Correlational Study
Rosenzweig et al. (2019) conducted a descriptive correlational study
to examine the relationship between financial toxicity (out- of- pocket
treatment expenses) and quality of life and cancer- related distress in
women with metastatic breast cancer.

Studies designed to address Diagnosis/assessment questions—i.e.,
whether a tool or procedure yields accurate assessment or diagnostic
information about a condition or outcome—often involve descriptive
correlational designs, although sometimes two procedures or tools are
tested against each other for accuracy in RCTs.

Univariate Descriptive Studies
The aim of some descriptive studies is to describe the frequency of
occurrence of a behavior or condition, rather than to study relationships.
Univariate descriptive studies are not necessarily focused on a single
variable. For example, a researcher interested in women’s experiences
during menopause might gather data about the frequency of various
symptoms and the use of medications to alleviate symptoms. The study
involves multiple variables, but the primary purpose is to describe the
status of each, not to study correlations among them.
Two types of descriptive study come from the field of epidemiology.
Prevalence studies are done to estimate the prevalence rate of some
condition (e.g., a disease or a behavior, such as smoking) at a particular
point in time. Prevalence studies rely on cross- sectional designs in which
data are obtained from the population at risk of the condition. The
researcher takes a “snapshot” of the population at risk to determine the
extent to which the condition is present. The formula for a prevalence rate
(PR) is:

PR = (Number of cases with the condition at a given time point ÷ Number in the population at risk) × K

K is the number of people for whom we want to have the rate established
(e.g., per 100 or per 1,000 population). When data are obtained from a
sample, the denominator is the size of the sample, and the numerator is
the number of cases identified with the condition. If we sampled 500

adults living in a community, administered a measure of depression, and
found that 80 people met the criteria for clinical depression, then the
estimated prevalence rate of clinical 
depression would be 16 per 100 adults
in that community.
Incidence studies estimate the frequency of new cases. Longitudinal
designs are needed to estimate incidence because the researcher must first
establish who is at risk of becoming a new case—that is, who is free of the
condition at the outset. The formula for an incidence rate (IR) is:

IR = (Number of new cases over a given period ÷ Number at risk of becoming a new case) × K

Continuing with our previous example, suppose in July 2019 we found
that 80 in a sample of 500 people were clinically depressed (PR = 16 per
100). To determine the 1- year incidence rate, we would reassess the sample
in July 2020. Suppose that, of the 420 previously deemed not to be
clinically depressed in 2019, 21 were now found to meet the criteria for
depression. In this case, the estimated 1- year incidence rate would be 5 per
100 ([21 ÷ 420] × 100 = 5).
Prevalence and incidence rates can be calculated for subgroups of the
population (e.g., for men versus women). When this is done, it is possible
to calculate another important descriptive index. Relative risk is an
estimated risk of “caseness” in one group compared with another. Relative
risk is computed by dividing the rate for one group by the rate for another.
Suppose we found that the 1- year incidence rate for depression was 6 per
100 women and 4 per 100 men. Women’s relative risk for developing
depression over the 1- year period would be 1.5; that is, women would be
estimated to be 1.5 times more likely to develop depression than men.
Relative risk (discussed in Chapter 17) is an important index in assessing
the contribution of risk factors to a disease or condition.
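
The chapter's depression example can be verified with a few lines of arithmetic. This sketch simply re-expresses the formulas above in code:

def rate_per_k(cases, at_risk, k=100):
    """Rate per k people: (cases / at_risk) * k."""
    return cases / at_risk * k

prevalence = rate_per_k(80, 500)    # 80 of 500 sampled -> 16 per 100
incidence = rate_per_k(21, 420)     # 21 new cases among 420 at risk -> 5 per 100
relative_risk = rate_per_k(6, 100) / rate_per_k(4, 100)   # women vs. men
print(prevalence, incidence, relative_risk)               # 16.0 5.0 1.5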

Example of a Prevalence Study
Wong et al. (2018) used data from three large private hospitals in
Australia to estimate the prevalence of the use of peripheral
intravenous cannulae in hospital wards.

TIP The quality of studies that test hypothesized causal relationships
is heavily dependent on design decisions—that is, how researchers
design their studies to rule out competing explanations for the
outcomes. Methods of enhancing the rigor of such studies are
described in the next chapter. The quality of descriptive studies, by
contrast, depends more on having a good sample (Chapter 13) and
strong measures (Chapter 15).

Strengths and Limitations of Correlational Research
The quality of a study is not necessarily related to its approach; there are
many excellent nonexperimental studies as well as flawed RCTs.
Nevertheless, nonexperimental correlational studies have several
drawbacks if causal explanations are sought.

Limitations of Correlational Research
Relative to experimental and quasi- experimental research,
nonexperimental studies are weak in their ability to support causal
inferences. In correlational studies, researchers work with preexisting
groups that were not formed at random, but rather through self- selection.
A researcher doing a correlational study cannot assume that groups being
compared were similar before the occurrence of the hypothesized cause—
i.e., the independent variable. Preexisting differences may be a plausible
alternative explanation for any group differences on the outcome variable.
The difficulty of interpreting correlational findings stems from the fact
that, in the real world, behaviors and characteristics are interrelated
(correlated) in complex ways. An example may help to clarify the problem.
Suppose we conducted a cross- sectional study that examined the
relationship between level of depression in cancer patients and their level
of social support (i.e., assistance and emotional support from others). We
hypothesize that social support (the independent variable) affects levels of
depression (the outcome). Suppose we find that the patients with weak
social support are significantly more depressed than patients with strong
support. We could interpret this finding to mean that patients’ emotional
state is influenced by the adequacy of their social supports. This
relationship is diagrammed in Figure 9.4A. Yet, there are alternative
explanations. Perhaps a third variable influences both social support and
depression, such as the patients’ marital status. It may be that having a

spouse is a powerful influence on how depressed cancer patients feel and
on the quality of their social support. This set of relationships is
diagrammed in Figure 9.4B. In this scenario, social support and depression
are correlated simply because marital status affects both. A third
possibility is reversed causality (Figure 9.4C). Depressed patients with
cancer may find it more difficult to elicit needed support from others than
patients who are more cheerful or amiable. In this interpretation, the
person’s depression causes the amount of received social support, and not
the other way around. Thus, interpretations of most correlational results
should be considered 
tentative, particularly if the research has no
theoretical basis and if the design is cross- 
sectional.

FIGURE 9.4 Alternative explanations for relationship between depression and social
support in patients with cancer.

Strengths of Correlational Research
Earlier, we discussed constraints that limit the application of experimental
designs. Correlational research will continue to play a crucial role in
nursing research because many interesting problems cannot be addressed
any other way.
Correlational research is often efficient in that it may involve collecting a
large amount of data about a problem. For example, it would be possible
to collect extensive information about the health histories and eating habits
of a large number of individuals. Researchers could then examine which
health problems were associated with which diets and could discover a
large number of interrelationships in a relatively short amount of time. By

contrast, an experimenter looks at only a few variables at a time. One
experiment might manipulate foods high in cholesterol, whereas another
might manipulate salt, for example.
Finally, correlational research is often strong in realism. Unlike many
experimental studies, correlational research is seldom criticized for its
artificiality.

TIP It can be useful to design a study with several relevant
comparisons. In nonexperimental studies, multiple comparison
groups can be effective in dealing with self- selection, especially if
groups are chosen to address competing biases. For example, in case–
control studies of potential causes of lung cancer, cases would be
people with lung cancer, one comparison group could be people with
a different lung disease, and a second could be those with no
pulmonary disorder.

Designs and Research Evidence
Evidence for nursing practice depends on descriptive, correlational, and
experimental research. There is often a progression to evidence expansion
that begins with rich description, including description from qualitative
research. In- depth qualitative research may suggest causal links that could
be the focus of controlled quantitative research. For example, Colón-
Emeric et al. (2006) explored communication patterns among the medical
and nursing staff in relation to information flow in two sites. Their
findings suggested that a “chain of command” type communication style
may limit clinicians’ ability to provide high- quality care. The study
suggests possibilities for interventions—and indeed, Colón- Emeric et al.
(2013) tested an intervention designed to improve nursing home staff’s
communication and problem solving. Thus, although qualitative studies
are low on the standard evidence hierarchy for confirming causal
connections, they serve an important function in stimulating ideas.
Correlational studies also play a role in developing evidence for causal
inferences. Retrospective case–control studies may pave the way for more
rigorous (but more expensive) prospective studies. As the evidence base
builds, conceptual models may be developed and tested using path
analytic designs and other theory- testing strategies. These studies can
provide hints about how to structure an intervention, who can most profit
from it, and when it can best be instituted.
Different questions relating to causality (Therapy, Prognosis, Etiology)
have different evidence hierarchies for ranking designs according to the
risk of bias, as we show in Table 9.4, which augments the evidence
hierarchy presented in Figure 2.2 (Chapter 2). For Therapy questions (and
some Etiology questions), experimental designs are the gold standard
(Level II), superseded only by systematic reviews of RCTs on level- of-
evidence scales (Level I). On the next rung of the hierarchy for Therapy
questions are quasi- experimental designs (and even at this rung, some
designs have a lower risk of bias than others). Further down the hierarchy
are 
observational and qualitative studies, which tend not to be strong in
corroborating causal hypotheses.

TABLE 9.4
Level of Evidence Rankings for Different Cause-Probing Research Questions

Therapy/Intervention and Etiology (Causation)/Prevention of Harm a

I. Systematic review of RCTs b
II. Randomized controlled trial
III. Quasi-experimental study
IV. Systematic review of nonexperimental studies
V. Nonexperimental/observational study:
   a. Prospective cohort study
   b. Path analytic/theory-based study
   c. Retrospective case–control study
   d. Descriptive correlational study
VI. Metasynthesis of qualitative studies
VII. Qualitative study
VIII. Nonresearch source

Prognosis

I. Systematic review of nonexperimental studies
II. Prospective cohort study
III. Path analytic/theory-based study
IV. Retrospective case–control study
V. Descriptive correlational study
VI. Metasynthesis of qualitative studies
VII. Qualitative study
VIII. Nonresearch source

aRCTs and quasi-experimental designs can sometimes be used for Etiology (causation)/prevention
of harm questions (e.g., the effect of salt intake on blood pressure levels). If intervening is not
possible (e.g., testing smoking as a cause of lung cancer), the level-of-evidence rankings would be
the same as for Prognosis questions.
bSystematic reviews (Level I) sometimes include RCTs and quasi-experimental studies.

Box 9.1 Guidelines for Critically Appraising Quantitative Research
Designs

1. What type of question (Therapy, Prognosis, etc.) was being addressed in this
study? Is the research question cause-probing, i.e., does it concern a
hypothesized causal relationship between the independent and dependent
variables?

2. What would be the strongest design for the research question? How does this
compare to the design actually used?

3. Was there an intervention or treatment? Was the intervention adequately
described? Was the control or comparison condition adequately described? Was
an experimental or quasi-experimental design used?

4. If the study was an RCT (a true experiment), what specific design was used?
Was this design appropriate?

5. In RCTs, what type of randomization was used? Were randomization
procedures adequately explained and justified? Was allocation concealment
confirmed?

6. If the design was quasi-experimental, what specific quasi-experimental design
was used? Was there adequate justification for deciding not to randomize
participants to treatment conditions? Did the report provide evidence that any
groups being compared were equivalent prior to the intervention?

7. If the design was nonexperimental, was the study inherently nonexperimental?
If not, is there adequate justification for not manipulating the independent
variable? What specific nonexperimental design was used? If a retrospective
design was used, is there good justification for not using a prospective design?
What evidence did the report provide that any groups being compared were
similar with regard to important confounding characteristics?

8. What types of comparisons were specified in the design (e.g., before–after?
between groups?) Did these comparisons adequately illuminate the relationship
between the independent and dependent variables? If there were no
comparisons, or faulty comparisons, how did this affect the study’s integrity
and the interpretability of the results?

9. Was the study longitudinal? Was the timing of the collection of data
appropriate? Was the number of data collection points reasonable?

10. Was blinding/masking used? If yes, who was blinded—and was this adequate?
If not, was there a justifiable rationale for failure to mask? Was the intervention
a type that could raise participants’ expectations that, in and of themselves,
could affect the outcomes?

For Prognosis questions, by contrast, randomization to groups is not
possible (e.g., for the question of whether low birthweight causes
developmental delays). In the hierarchy for Prognosis questions, the best
design for an individual study is a prospective cohort design. Path analytic
studies with longitudinal data and a strong theoretical basis can also be
powerful. Retrospective case–control studies are relatively weak in
addressing questions about causality. Systematic reviews of multiple
prospective studies, together with support from theories or biophysiologic
research, represent the strongest evidence for these types of question.
In terms of Etiology questions, RCTs are sometimes feasible (e.g., Does low
salt intake cause reductions in blood pressure levels?). For such questions,
the hierarchy is the same as that for Therapy questions. Many important
Etiology questions will never be answered using evidence from RCTs,
however. A good example is the Etiology question of whether smoking
causes lung cancer. Despite the inability to randomize people to smoking
and nonsmoking groups, few people doubt that this causal connection
exists. Thinking about the criteria for causality discussed earlier in this
chapter, there is abundant evidence that smoking cigarettes is correlated
with lung cancer and, through prospective studies, that smoking precedes
lung cancer. Researchers have been able to control for, and thus rule out,
other possible “causes” of lung cancer. There has 
been a great deal of

consistency and coherence in the findings, and the criterion of biologic
plausibility has been met through basic physiologic research.

TIP Some early studies found that experimental and observational
studies often do not yield the same results. The
relationship between “causes” and “effects” was found to be stronger
in nonexperimental studies than in randomized studies. However,
other studies have found that well- designed observational studies do
not overestimate the magnitude of effects in comparison with RCTs,
especially when the criteria for participating in the study are similar
(e.g., Concato et al., 2000).

Critical Appraisal of Quantitative Research Designs
The research design used in a quantitative study strongly influences the
quality of its evidence, and so should be carefully scrutinized. Researchers’
design decisions have more of an impact on study quality than any other
methodologic decision when the research question is about causal
relationships.
Actual designs and some control techniques (randomization, blinding,
allocation concealment) were described in this chapter, and the next
chapter explains in greater detail specific strategies for enhancing research
control. The guidelines in Box 9.1 are the first of two sets of questions
to help you in critically appraising quantitative research designs.

Research Examples
In this section we present descriptions of an experimental, quasi-experimental,
and nonexperimental study.

Research Example of an RCT

Study: Nonnutritive sucking, oral breast milk, and facilitated tucking
relieve preterm infant pain during heel- stick procedures (Peng et al., 2018).
Statement of Purpose: The purpose of this study was to compare the
effects of alternative strategies to reduce the pain of preterm infants during
heel- stick procedures.
Treatment Groups: In this trial, there were three treatment groups.
Preterm infants received either (1) combined nonnutritive sucking + oral
expressed breast milk, (2) nonnutritive sucking + breast milk + facilitated
tucking, or (3) routine care. Those in the control group received position
support and gentle touch. For those receiving breast milk, infants were
orally fed expressed milk through a syringe 2 minutes before the heel
stick.
Method: A sample of 109 preterm infants (gestational age 29-37 weeks)
needing procedural heel sticks were randomly assigned to one of the three
conditions. Infants were excluded if they had a condition that might
influence their responses to pain (e.g., a congenital anomaly). Random
assignment was carried out by a blinded statistician, who used a block
randomization procedure. Heel sticks by a senior nurse were used to
collect infants’ blood. The time for heel- stick procedures was controlled at
2 minutes in all three conditions. The heel sticks occurred over eight
phases: phase 1 (baseline without stimuli), phases 2 and 3 (the second and
third minutes during the procedures), and phases 4 to 8 (recovery, a
10-minute period starting when the nurse finished collecting blood and left
the infant). During all 8 phases, the infants’ reactions were video recorded.
Infant pain was scored by a research assistant from the videos at 1- minute
intervals. The research assistant was blinded to the study purpose and the
infants’ clinical information.
Key Findings: The combined use of sucking + breast milk—with or
without tucking—was found to have reduced preterm infants’ pain during
heel- stick procedures. Adding facilitated tucking helped infants recover
from pain.

Research Example of a Quasi- Experimental Study

Study: Thyroid cancer patients receiving an interdisciplinary team-based
care approach (ITCA-ThyCa) appear to display better outcomes (Henry et
al., 2018).
Statement of Purpose: The purpose of the study was to evaluate the
effects of a special Interdisciplinary Team- based Care Approach (ITCA) for
thyroid cancer patients.
Treatment Groups: Adult patients with a biopsy indicating confirmed or
highly suspicious thyroid cancer at the Jewish General Hospital in
Montreal, Canada, received the special ITCA intervention. The approach
included a dedicated nurse who had a central, integrative role in an
interdisciplinary team that included surgery, endocrinology, pharmacy,
dietetics, social work, and community supports. The ITCA also included
regularly scheduled team meetings to promote service coordination and
continuity of care. The comparison group comprised patients who had
undergone a thyroidectomy at the McGill University Health Centre, a
facility with sociodemographic and clinical profiles and a medical
approach similar to those of the intervention hospital. The comparison
group received care as usual.
Method: The researchers initially sought to use a randomized design but
discovered that patients were too distressed to provide consent while they
were waiting for surgery, and so they opted for a nonequivalent control
group design. A total of 200 patients (122 in the intervention group and 78
in the comparison group) participated and completed various
patient-reported assessments that measured patient satisfaction and
general well-being at the end of the study.
Key Findings: The intervention and comparison group members were
similar demographically and clinically. Patients in the ITCA group had
higher levels of well- being and fewer physical and practical concerns than
patients in the comparison group at the posttest. Those in the intervention
group were also more satisfied with their care and were more likely to
recommend their hospital.

Research Example of a Correlational Study

Study: Dementia- related restlessness: Relationship to characteristics of
persons with dementia and family caregivers (Regier & Gitlin, 2018).

Statement of Purpose: The purpose of the study was to examine the
relationship of dementia- related restlessness to patient outcomes and to
caregiver well- being.
Method: The study participants in this cross- sectional study were 569
caregivers of persons with moderate- stage dementia who had one or more
behavioral disturbances. Caregivers were included if they lived with the
person with dementia, provided at least 4 hours of daily care, and
reported that the patient exhibited boredom, sadness, anxiety, agitation, or
restlessness. Caregivers completed questionnaires that included measures
of their perceptions of the patients’ neuropsychiatric symptoms (including
restlessness), pain, and functional capacity. Measures of the caregivers’
level of burden, depression, and caregiver mastery were also incorporated
into the questionnaires. The team also had each person with dementia's
score on a measure of cognition. The analysis involved
examining correlations among the various variables.
Key Findings: Nearly 65% of the dementia caregivers reported
restlessness as a symptom. Persons with restlessness had significantly
higher pain scores, were more likely to be on behavioral medications, and
had more neuropsychiatric symptoms than those without restlessness.
Caregivers of patients with restlessness reported greater burden and
depression.

Summary Points

Many quantitative nursing studies aim to facilitate inferences about
cause-and-effect relationships.
One criterion for causality is that the cause must precede the effect. Two other
criteria are that a relationship between a presumed cause (independent variable)
and an effect (dependent variable) exists and cannot be explained as being
caused by other (confounding) variables.
In an idealized model, a counterfactual is what would have happened to the
same people simultaneously exposed and not exposed to a causal factor. The
effect is the difference between the two. The goal of research design is to find a
good approximation to the idealized (but impossible) counterfactual.
Experiments (or randomized controlled trials, RCTs) involve manipulation
(the researcher manipulates the independent variable by introducing a
treatment or intervention); control (including use of a control group that does
not receive the intervention and represents the comparative counterfactual); and
randomization/random assignment (with people allocated to experimental and
control groups at random so that they are equivalent at the outset).
Participants in the experimental group usually all get the same intervention as
delineated in formal protocols, but some studies involve patient- centered
interventions (PCIs) that are tailored to meet individual needs or characteristics.
Researchers can expose the control group to various conditions, including no
treatment; an alternative treatment; standard treatment (“usual care”); a
placebo or pseudointervention; different doses of the treatment; or a delayed
treatment (for a wait- list group).
Random assignment is done by methods that give every participant an equal
chance of being in any group, such as by flipping a coin or using a table of
random numbers. Randomization is the most reliable method for equalizing
groups on all characteristics that could affect study outcomes. Randomization
should involve allocation concealment that prevents foreknowledge of
upcoming assignments.
Several variants to simple randomization exist, such as permuted block
randomization, in which randomization is done for blocks of people—for
example, 6 or 8 at a time, in randomly selected block sizes.
Blinding (or masking) is often used to avoid biases stemming from
participants’ or research agents’ awareness of group status or study hypotheses.
In double- blind studies, two groups (e.g., participants and investigators) are
blinded.
Many specific experimental designs exist. A posttest-only (after-only) design
involves collecting data after an intervention only. In a pretest–posttest
(before-after) design, data are collected both before and after the intervention,
permitting an analysis of change.
Factorial designs, in which two or more independent variables are manipulated
simultaneously, allow researchers to test both main effects (effects from
manipulated independent variables) and interaction effects (effects from
combining treatments).
In a crossover design, subjects are exposed to more than one condition,
administered in a randomized order, and thus they serve as their own controls.
Experimental designs are the gold standard because they come closer than any
other design in meeting criteria for inferring causal relationships.
Quasi- experimental designs (trials without randomization) involve an
intervention but lack randomization. Strong quasi- experimental designs
incorporate features to support causal inferences.
The nonequivalent control group pretest–posttest design involves using a
nonrandomized comparison group and the collection of pretreatment data so
that initial group equivalence can be assessed.
In a time series design, information on the dependent variable is collected
multiple times before and after the intervention in a single group. The extended
time period for data collection enhances the ability to attribute change to the
intervention.
Other quasi- experimental designs include nonrandomized dose–response
analyses and the nonrandomized arms of a partially randomized patient
preference (PRPP) design (i.e., groups with strong preferences).
In evaluating the results of quasi- experiments, it is important to ask whether it
is plausible that factors other than the intervention caused or affected the
outcomes (i.e., whether there are credible rival hypotheses for explaining the
results).
Nonexperimental (or observational) research includes descriptive research—
studies that summarize the status of phenomena—and correlational studies that
examine relationships among variables but involve no manipulation of
independent variables (often because they cannot be manipulated).
Designs for cause-probing correlational studies include retrospective
(case–control) designs (which look back in time for antecedent causes of "caseness"
by comparing cases that have a disease or condition with controls who do not);
prospective (cohort) designs (studies that begin with a presumed cause and
look forward in time for its effect); natural experiments (in which a group is
affected by a random event, such as a disaster); and path analytic studies
(which test causal models developed on the basis of theory).
Descriptive correlational studies describe how phenomena are interrelated
without invoking a causal explanation. Univariate descriptive studies examine
the frequency or average value of variables.

Descriptive studies include prevalence studies that document the prevalence
rate of a condition at one point in time and incidence studies that document the
frequency of new cases over a given time period. When incidence rates are
estimated for two subgroups, researchers can compute the relative risk of
"caseness" for the two (a worked example follows these summary points).
The primary weakness of correlational studies for cause-probing questions is
that they can harbor biases, such as self-selection into groups being compared.
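
As a worked illustration of relative risk (with hypothetical counts, not data from any study cited in this chapter): suppose 30 of 500 exposed people and 12 of 500 unexposed people become cases over the same period. Then:

$$\text{Relative risk} = \frac{\text{incidence rate (exposed)}}{\text{incidence rate (unexposed)}} = \frac{30/500}{12/500} = \frac{0.060}{0.024} = 2.5$$

That is, exposed people are 2.5 times as likely as unexposed people to become cases.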

Study Activities
Study activities are available to instructors on .

References Cited in Chapter 9
* Adams M., Hurley J., Todd M., Bhuiyan N., Jarrett C., Tucker W., … Angadi S.
(2017). Adaptive goal setting and financial incentives: a 2 × 2 factorial randomized
controlled trial to increase adults' physical activity. BMC Public Health, 17, 286.

Barkauskas V. H., Lusk S. L., Eakin B. L. (2005). Selecting control interventions for
clinical outcome studies. Western Journal of Nursing Research, 27, 346–363.

Bärnighausen T., Tugwell P., Røttingen J., Shemilt I., Rockers P., Geldsetzer P., …
Atun R. (2017). Quasi-experimental study design series—paper 4: uses and value.
Journal of Clinical Epidemiology, 89, 4–11.

Barta W., Fisher V., & Hynes P. (2017). Decreased re-conviction rates of DUI offenders
with intensive supervision and home confinement. American Journal of Drug &
Alcohol Abuse, 43, 742–746.

* Beck C., McSweeney J., Richards K., Robertson P., Tsai P., & Souder E. (2010).
Challenges in tailored intervention research. Nursing Outlook, 58, 104–110.

Bradford-Hill A. (1965). The environment and disease: association or causation.
Proceedings of the Royal Society of Medicine, 58, 295–300.

Breneman C., Kline C., West D., Sui X., Porter R., Bowyer K., … Wang X. (2019). The
effect of moderate-intensity exercise on nightly variability in objectively measured
sleep parameters among older women. Behavioral Sleep Medicine, 17, 459–469.

Campbell D. T., & Stanley J. C. (1963). Experimental and quasi-experimental designs for
research. Chicago: Rand McNally.

* Chalmers J., Sansom-Daly U., Patterson P., McCowage G., & Anazodo A. (2018).
Psychosocial assessment using telehealth in adolescents and young adults with
cancer: a partially randomized patient preference pilot study. JMIR Research
Protocols, 7, e168.

* Colón-Emeric C., Ammarell N., Bailey D., Corazzini K., Lekan-Rutledge D., Piven
M., … Anderson R. A. (2006). Patterns of medical and nursing staff communication
in nursing homes. Qualitative Health Research, 16, 173–188.

* Colón-Emeric C., McConnell E., Pinheiro S., Corazzini K., Porter K., Earp K., …
Anderson R. A. (2013). CONNECT for better fall prevention in nursing homes.
Journal of the American Geriatrics Society, 61, 2150–2159.

** Como J. M. (2018). Health literacy and health status in people with chronic heart
failure. Clinical Nurse Specialist, 32, 29–42.

* Concato J., Shah N., & Horwitz R. (2000). Randomized, controlled trials,
observational studies, and the hierarchy of research designs. New England Journal of
Medicine, 342, 1887–1892.

Doering J., & Dogan S. (2018). A postpartum sleep and fatigue intervention feasibility
pilot study. Behavioral Sleep Medicine, 16, 185–201.

Doig G., & Simpson F. (2005). Randomization and allocation concealment: a practical
guide for researchers. Journal of Critical Care, 20, 187–191.

Dotson B., Larabell P., Patel J., Wong K., Qasem L., Arthur W., … Tennenberg S.
(2016). Calcium administration is associated with adverse outcomes in critically ill
patients receiving parenteral nutrition: results from a natural experiment created
by a calcium gluconate shortage. Pharmacotherapy, 36, 1185–1190.

Downs M., Tucker K., Christ-Schmidt H., & Wittes J. (2010). Some practical problems
in implementing randomization. Clinical Trials, 7, 235–245.

Geldsetzer P., & Fawzi W. (2017). Quasi-experimental study design series—paper 2:
complementary approaches to advancing global health knowledge. Journal of
Clinical Epidemiology, 89, 12–16.

* George A., Dahlen H., Blinkhorn A., Ajwani S., Bhole S., Ellis S., … Johnson M.
(2018). Evaluation of a midwifery initiated oral health-dental service program to
improve oral health and birth outcomes for pregnant women: a multi-centre
randomised controlled trial. International Journal of Nursing Studies, 82, 49–57.

Grimes D., & Schulz K. (2005). Compared to what? Finding controls for case-control
studies. The Lancet, 365, 1429–1433.

Gross D. (2005). On the merits of attention control groups. Research in Nursing &
Health, 28, 93–94.

Henry M., Frenkiel S., Chartier G., Payne R., MacDonald C., Black M., … Hier M.
(2018). Thyroid cancer patients receiving an interdisciplinary team-based care
approach (ITCA-ThyCa) appear to display better outcomes: program evaluation
results indicating a need for further integrated care and support. Psycho-oncology,
27, 937–945.

Herbison P., Hay-Smith J., & Gillespie W. (2011). Different methods of allocation to
groups in randomized trials are associated with different levels of bias. A
meta-epidemiological study. Journal of Clinical Epidemiology, 64, 1070–1075.

Lachin J. M., Matts J., & Wei L. (1988). Randomization in clinical trials: conclusions
and recommendations. Controlled Clinical Trials, 9, 365–374.

Lau Y., Tha P., Ho-Lim S., Wong L., Lim P., Citra Nurfarah B., & Shorey S. (2018). An
analysis of the effects of intrapartum factors, neonatal characteristics, and
skin-to-skin contact on early breastfeeding initiation. Maternal & Child Nutrition, 14(1).

Lauver D. R., Ward S. E., Heidrich S. M., Keller M. L., Bowers B. J., Brennan P. F.,
Kirchhoff K. T., & Wells T. J. (2002). Patient-centered interventions. Research in
Nursing & Health, 25, 246–255.

* Moher D., Hopewell S., Schulz K. F., Montori V., Gøtzsche P., Devereaux P., …
Altman D. G. (2010). CONSORT 2010 explanation and elaboration: updated
guidelines for reporting parallel-group randomised trials. BMJ, 340, c869.

Ndosi M., Wright-Hughes A., Brown S., Backhouse M., Lipsky B., Bhogal M., …
Nelson E. (2018). Prognosis of the infected diabetic foot ulcer: a 12-month
prospective observational study. Diabetic Medicine, 35, 78–88.

Ng A., & Wong F. (2018). Effects of a home-based palliative heart failure program on
quality of life, symptom burden, satisfaction and caregiver burden: a randomized
controlled trial. Journal of Pain and Symptom Management, 55, 1–11.

Norman R., Ramsden R., Ginty L., & Sinha S. (2017). Effect of a multimodal
educational intervention on use of urinary catheters in hospitalized individuals.
Journal of the American Geriatrics Society, 65, 2679–2684.

Özkan F., & Zincir H. (2017). The effect of reflexology upon spasticity and function
among children with cerebral palsy who received physiotherapy: three group
randomised trial. Applied Nursing Research, 36, 128–134.

Peng H., Yin T., Yang L., Wang C., Chang Y., Jeng M., & Liaw J. (2018). Non-nutritive
sucking, oral breast milk, and facilitated tucking relieve preterm infants' pain
during heel-stick procedures: a prospective, randomized controlled trial.
International Journal of Nursing Studies, 77, 162–170.

Polit D. F., & Chaboyer W. (2012). Statistical process control in nursing research.
Research in Nursing & Health, 35, 82–93.

Polit D., Gillespie B., & Griffin R. (2011). Deliberate ignorance: a systematic review of
the use of blinding in nursing clinical trials. Nursing Research, 60, 9–16.

Reddy R., El Youssef J., Winters-Stone K., Branigan D., Leitschuh J., Castle J., &
Jacobs P. (2018). The effect of exercise on sleep in adults with type 1 diabetes.
Diabetes, Obesity & Metabolism, 20, 443–447.

Regier N., & Gitlin L. (2018). Dementia- related restlessness: relationship to
characteristics of persons with dementia and family caregivers. International Journal
of Geriatric Psychiatry, 33, 185–192.

Richards K., Enderlin C., Beck C., McSweeney J., Jones T., & Robertson P. (2007).
Tailored biobehavioral interventions: a literature review and synthesis. Research
and Theory for Nursing Practice, 21, 271–285.

* Rosenzweig M., West M., Matthews J., Stokan M., Kook Y., Gallups S., &
Diergaarde B. (2019). Financial toxicity among women with metastatic breast
cancer. Oncology Nursing Forum, 46, 83–91.

Saad K., Abdel-Rahman A., Elserogy Y., Al-Atram A., El-Houfey A., Othman H., …
Abdel-Salam A. (2018). Randomized controlled trial of vitamin D supplementation
in children with autism spectrum disorder. Journal of Child Psychology and
Psychiatry, 59, 20–29.

Schulz K. F., Chalmers I., & Altman D. G. (2002). The landscape and lexicon of
blinding in randomized trials. Annals of Internal Medicine, 136, 254–259.

Shadish W. R., Cook T. D., & Campbell D. T. (2002). Experimental and quasi-experimental
designs for generalized causal inference. Boston: Houghton Mifflin.

* Smith B., Shatté A., Perlman A., Siers M., & Lynch W. (2018). Improvements in
resilience, stress, and somatic symptoms following online resilience training: a
dose-response effect. Journal of Occupational and Environmental Medicine, 60, 1–5.

Takahashi R., Wilunde C., Magutah K., Mwaura-Tenambergen W., Atwoli L., &
Perngparn U. (2018). Evaluation of alcohol screening and community-based brief
interventions in rural western Kenya: a quasi-experimental study. Alcohol and
Alcoholism, 53, 121–128.

Vedelø T. W., & Lomborg K. (2011). Reported challenges in nurse-led randomised
controlled trials: an integrative review of the literature. Scandinavian Journal of
Caring Sciences, 25, 194–200.

Wong K., Cooper A., Brown J., Boyd L., & Levinson M. (2018). The prevalence of
peripheral intravenous cannulae and pattern of use: a point prevalence in a private
hospital setting. Journal of Clinical Nursing, 27, e363–e367.

Yuan M., Li F., Fang Q., Wang W., Peng J., Qin D., Wang X., & Liu G. (2018). Research
on the cause of death for severe stroke patients. Journal of Clinical Nursing, 27, 450–
460.

*A link to this open-access article is provided in the Toolkit for Chapter 9 in the
Resource Manual.

**This journal article is available on for this chapter.

C H A P T E R 1 0

Rigor and Validity in Quantitative Research

Validity and Inference
This chapter describes strategies for controlling sources of bias in
quantitative studies. Many of these strategies strengthen inferences that
can be made about cause-and-effect relationships.

Validity and Validity Threats
In designing a study, it is useful to anticipate factors that could undermine
the validity of inferences. Shadish, Cook, and Campbell (2002) define
validity in the context of research design as “the approximate truth of an
inference” (p. 34). For example, inferences that a cause results in a
hypothesized effect are valid to the extent that researchers can marshal
strong supporting evidence. Validity is always a matter of degree, not an
absolute.
Validity is a property of an inference, not of a research design, but design
elements profoundly affect the inferences that can be made. Threats to
validity are reasons that an inference could be wrong. When researchers
introduce design features to minimize potential threats, the validity of the
inference about relationships under study is strengthened.

Types of Validity
Shadish and colleagues (2002) proposed a taxonomy that identified four
types of validity and cataloged dozens of validity threats. This chapter
describes the taxonomy and summarizes major threats, but we urge
researchers to consult this seminal work for further guidance.
The first type of validity, statistical conclusion validity, concerns the
validity of inferences that there truly is an empirical relationship, or
correlation, between the presumed cause and the effect. The researcher’s
job is to provide strong evidence that an observed relationship is real.
Internal validity concerns the validity of inferences that, given that an
empirical relationship exists, it is the independent variable, rather than
something else, that caused the outcome. Researchers must develop
strategies to rule out the plausibility that some factor other than the
independent variable accounts for the observed relationship.

Construct validity involves the validity of inferences “from the observed
persons, settings, and cause-and-effect operations included in the study to
the constructs that these instances might represent” (p. 38). One aspect of
construct validity concerns the degree to which an intervention is a good
representation of the underlying construct that was theorized as having
the potential to cause beneficial outcomes. Another issue concerns whether
the measures of the outcomes are good operationalizations of the
constructs for which they are intended.
External validity concerns whether inferences about observed
relationships will hold over variations in persons, settings, or time. External
validity, then, relates to the generalizability of inferences—a critical
concern for evidence- based nursing practice.
These four types of validity and their associated threats are discussed in
this chapter. Many validity threats result from inadequate control over
confounding variables, and so we briefly review methods of controlling
confounders associated with participants’ characteristics.

Controlling Confounding Participant Characteristics
This section describes six methods of controlling participant characteristics
—characteristics that could compete with the independent variable as the
cause of an outcome.

Randomization
As noted in Chapter 9, randomization is the most effective method of
controlling individual characteristics. The function of randomization is to
secure comparable groups—i.e., to equalize groups with respect to
confounding variables. A distinct advantage of random assignment,
compared with other strategies, is that it can control all possible sources of
confounding variation, without any conscious decision about which variables
need to be controlled.
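
As an illustration of how randomization can be operationalized, the sketch below generates a permuted block allocation schedule of the kind described in Chapter 9. It is a minimal sketch, not a production randomization system; the function name, block sizes, and arm labels are invented for the example.

```python
import random

def permuted_block_schedule(n_participants, block_sizes=(4, 6),
                            arms=("intervention", "control")):
    """Generate an allocation schedule using permuted blocks.

    Each block contains an equal number of assignments to every arm, so
    group sizes stay balanced throughout enrollment; block sizes are chosen
    at random so upcoming assignments are harder to predict.
    """
    schedule = []
    while len(schedule) < n_participants:
        size = random.choice(block_sizes)         # randomly selected block size
        block = list(arms) * (size // len(arms))  # equal representation per arm
        random.shuffle(block)                     # random order within the block
        schedule.extend(block)
    return schedule[:n_participants]

print(permuted_block_schedule(10))
```

In practice, such a schedule would be prepared by someone not involved in enrollment and combined with allocation concealment (e.g., a central randomization service) so that upcoming assignments cannot be foreseen.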

Crossover
Randomization within a crossover design is an especially powerful
method of ensuring equivalence between groups being compared—
participants serve as their own controls. Moreover, fewer participants
usually are needed in such a design. Fifty people exposed to two
treatments in random order yield 100 data points (50 × 2); 50 people
randomly assigned to two different groups yield only 50 data points

(25 × 2). Crossover designs are not appropriate for all studies, however,
because of possible carry- over effects: people exposed to two different
conditions may be influenced in the second condition by their experience
in the first.

Homogeneity
When randomization and crossover are not feasible, alternative methods
of controlling confounding characteristics are needed. One method is to
use only people who are homogeneous with respect to confounding
variables. Suppose we were testing the effectiveness of a physical fitness
program on the cardiovascular functioning of elders. In our quasi-
experimental design, elders from two different nursing homes are
recruited, with elders in one of them receiving the intervention. If gender
were a key confounding variable—and if the two nursing homes had
different proportions of men and women—we could control gender by
using only men (or only women) as participants.
The price of homogeneity is that research findings cannot be generalized
to types of people who did not participate in the study. If the physical
fitness intervention were found to have beneficial effects on the
cardiovascular status of a sample of women 65 to 75 years of age, its
usefulness for improving the cardiovascular status of men in their 80s
would require a separate study. Indeed, one criticism of this approach is
that researchers sometimes exclude people who are extremely ill, which
means that the findings cannot be generalized to those who may be most
in need of interventions.

Example of Control Through Homogeneity
Bang and colleagues (2018) used a nonequivalent control group
design to test the effects of a health promotion program with nursing
student mentors on the psychological health of elementary school
children in Korea. Several variables were controlled through
homogeneity, including the children’s age (all in grades 4- 6) and
socioeconomic background (all children were considered
“vulnerable”), and all were getting social services in community
centers.

TIP The principle of homogeneity is often used to control (hold
constant) external factors known to influence outcomes. For example,
it may be important to collect outcome data at the same time of the
day for all participants if time could affect the outcome (e.g., fatigue).
As another example, it may be desirable to maintain constancy of
conditions in terms of data collection locale—e.g., interviewing all
respondents in their homes, rather than some in their places of work,
because context can influence responses to questions.

Stratification/Blocking
Another approach to controlling confounders is to include them in the
research design through stratification. To pursue our example of the
physical fitness intervention with gender as the confounding variable, we
could use a randomized block design in which men and women are assigned
separately to treatment groups. This approach can enhance the likelihood
of detecting differences between our experimental and control groups
because the effect of the blocking variable (gender) on the outcome is
eliminated. In addition, if the blocking variable is of interest substantively,
researchers have the opportunity to study differences in the subgroups
created by the stratifying variable (e.g., men versus women).

Matching
Matching (also called pair matching) involves using information about
people’s characteristics to create comparable groups. If matching were
used in our physical fitness example and age and gender were the
confounding variables, we would match a person in the intervention
group with one in the comparison group with respect to age and gender.
Matching is often problematic: to use matching, researchers must know
the relevant confounders in advance. Also, it is difficult to match on more
than two or three variables. This problem is sometimes addressed with a
sophisticated matching technique, called propensity matching. This
method, which requires some statistical sophistication, involves the
creation of a propensity score that captures the conditional probability of
exposure to a treatment given various preintervention characteristics.
Members of the groups being compared (either in an observational or
quasi-experimental study) can then be matched on the propensity score
(Qin et al., 2008). Both conventional and propensity matching are most

easily implemented when there is a large pool of potential comparison
group participants from which good matches to treatment group members
can be selected. Nevertheless, matching as the primary control technique
should be used only when other, more powerful procedures are not
feasible.
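
To make the logic of propensity matching concrete, here is a minimal sketch under hypothetical data; it is not the procedure of any study cited here, and the variable names are invented. A logistic regression estimates each person's propensity score, and each treated person is then paired with the untreated person whose score is closest.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Hypothetical preintervention characteristics (e.g., age, baseline score)
# and a nonrandomized treatment whose uptake depends on those characteristics.
X = rng.normal(size=(500, 2))
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))

# Step 1: propensity score = estimated P(treatment | characteristics).
propensity = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated person to the untreated person with the
# nearest propensity score (1-to-1 nearest-neighbor matching).
treated_idx = np.flatnonzero(treated == 1)
control_idx = np.flatnonzero(treated == 0)
nn = NearestNeighbors(n_neighbors=1).fit(propensity[control_idx].reshape(-1, 1))
_, pairs = nn.kneighbors(propensity[treated_idx].reshape(-1, 1))
matched_controls = control_idx[pairs.ravel()]
```

Outcomes for the treated group would then be compared with those of the matched controls; refinements such as caliper limits and matching without replacement are common in practice.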
Sometimes, as an alternative to matching, researchers use a balanced
design with regard to key confounders. In such situations, researchers
attempt only to ensure that the groups being compared have similar
proportional representation on confounding variables, rather than
matching on a one- to- one basis. For example, if gender and age were the
two variables of concern, we would strive to ensure that the same
percentage of men and women were in the two groups and that the
average age was comparable. Such an approach is less cumbersome than
matching but has similar limitations. Nevertheless, both matching and
balancing are preferable to failing to control participant characteristics at
all.

Example of Control Through Matching
Fehlberg and an interdisciplinary team (2017) used a case–control
design to study associations between hyponatraemia, sodium
depletion, and the risk of falls in hospitalized patients. Data were
collected in four hospitals from 699 adult patients who fell and 1,189
matched controls who did not. Up to two controls with similar length
of stay who were on the same nursing unit at the time the case fell
were selected. Low serum sodium levels were found to be strongly
associated with falls.

Statistical Control
Another method of controlling confounding variables is through statistical
analysis rather than research design. A detailed description of powerful
statistical control mechanisms will be postponed until Chapter 19, but we
will explain underlying principles with a simple illustration of a procedure
called analysis of covariance (ANCOVA).
In our physical fitness example, suppose we used a nonequivalent control
group design with elders from two nursing homes, and resting heart rate
was an outcome. Individual differences in heart rate in the sample would
be expected—that is, heart rate would vary from one person to the next.

The research question is, “Can some of the differences in heart rate be
a�ributed to program participation?” We know that differences in heart
rate are also related to other traits, such as age. In Figure 10.1, the large
circles represent the total amount of variation for resting heart rate. A
certain amount of variation can be explained by a person’s age, depicted as
the small circle on the left in Figure 10.1A. Other variation may be
explained by participation or nonparticipation in the program, represented
as the small circle on the right. The two small circles (age and program
participation) overlap, indicating a relationship between the two. In other
words, people in the physical fitness group are, on average, either older or
younger than those in the comparison group. Age should be controlled;
otherwise, we could not determine whether postintervention differences in
resting heart rate are due to differences in age or program participation.

FIGURE 10.1 Schematic diagram illustrating principles of analysis of covariance
conceptually.

Analysis of covariance statistically removes the effect of confounding
variables on the outcome. In the illustration, the portion of heart rate
variability attributable to age (the hatched area of the large circle in A) is
removed through ANCOVA. Figure 10.1B shows that the final analysis

tests the effect of program participation on heart rate after removing the
effect of age. By controlling heart rate variability resulting from age, we get
a more accurate estimate of the effect of the program on heart rate. Note
that even after removing variability due to age, there is still individual
variation not associated with the program treatment—the bottom half of
the large circle in B. This means that the study can probably be improved
by controlling additional confounders, such as gender, smoking history,
and so on. ANCOVA and other sophisticated procedures can control
multiple confounding variables.
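
A simulated sketch may help make the ANCOVA idea concrete; the data and coefficients below are invented and are not from any study discussed in this chapter. Entering age as a covariate in an ordinary least squares model removes the age-related portion of heart rate variability before the program effect is estimated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 120

# Hypothetical nonequivalent groups: older elders are more likely to be in
# the program, and age also influences resting heart rate (confounding).
age = rng.normal(75, 5, n)
program = (age + rng.normal(0, 5, n) > 75).astype(int)
heart_rate = 80 - 3 * program + 0.4 * (age - 75) + rng.normal(0, 4, n)
data = pd.DataFrame({"heart_rate": heart_rate, "program": program, "age": age})

# ANCOVA-style model: the program effect on heart rate is estimated after
# statistically removing the variability attributable to age.
model = smf.ols("heart_rate ~ program + age", data=data).fit()
print(model.params["program"], model.pvalues["program"])
```

Comparing this model with one that omits age would typically show a less biased and more precise estimate of the program effect when the covariate is included.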

Example of Statistical Control
Abbasi and colleagues (2018) tested the effectiveness of e- learning
versus an educational booklet (or usual care without any
intervention) on the childbirth self- efficacy of pregnant women. The
researchers compared the scores of women in the three groups on a
childbirth self- efficacy measure after the intervention, statistically
controlling for baseline values on the same measure.

TIP Confounding participant characteristics that need to be
controlled vary from one study to another, but we can offer some
guidance. The best variable to control is the outcome variable itself, measured
before the independent variable occurs. In our physical fitness
example, controlling preprogram measures of cardiovascular
functioning would be a good choice. Major demographic variables
(e.g., age, race/ethnicity, education) and health indicators are usually
good candidates for statistical control. Confounding variables that
correlate with the outcomes should be identified through a literature
review.

Evaluation of Control Methods
Table 10.1 summarizes benefits and drawbacks of the six control
mechanisms. Randomization is the most effective method of managing
confounding variables—that is, of approximating the ideal but
unattainable counterfactual discussed in Chapter 9—because it tends to
cancel out individual differences on all possible confounders. Crossover
designs are a useful supplement to randomization but are not always

appropriate. The remaining alternatives have common disadvantages:
researchers must know in advance the relevant confounding variables and
can rarely control all of them. To use homogeneity, stratification, matching,
or ANCOVA, researchers must know which variables need to be
measured and controlled. Yet, when randomization is impossible, the use
of any of these strategies is better than no control strategy.

TABLE 10.1
Methods of Control Over Participant Characteristics

Randomization
  Benefits: Controls all preintervention confounding variables; does not
  require advance knowledge of which variables need to be controlled.
  Limitations: Constraints (ethical, practical) on which variables can be
  manipulated; possible artificiality of conditions; resistance to being
  randomized by many people.

Crossover
  Benefits: If done with randomization, a very strong approach: subjects
  serve as their own controls and thus are perfectly "matched."
  Limitations: Cannot be used if there are possible carry-over effects from
  one condition to the next; a history threat may be relevant if external
  factors change over time.

Homogeneity
  Benefits: Easy to achieve in all types of research; could enhance
  interpretability of relationships.
  Limitations: Limits generalizability; requires knowledge of which
  variables to control; range restriction could lower statistical conclusion
  validity.

Stratification/blocking
  Benefits: Enhances the ability to detect and interpret relationships; offers
  opportunity to examine the stratifying variable as an independent variable.
  Limitations: Usually restricted to a few stratifying variables; requires
  knowledge of which variables to control.

Matching
  Benefits: Enhances ability to detect and interpret relationships; may be
  easy if there is a large "pool" of potential available comparison subjects.
  Limitations: Usually restricted to a few matching variables (except with
  propensity matching); requires knowledge of which variables to match;
  may be difficult to find comparison group matches, especially if there are
  more than two matching variables.

Statistical control
  Benefits: Enhances ability to detect and interpret relationships; relatively
  economical means of controlling several confounding variables.
  Limitations: Requires knowledge of which variables to control, as well as
  measurement of those variables; requires some statistical sophistication.

Statistical Conclusion Validity
One criterion for establishing causality is demonstrating that there is a
relationship between the independent and dependent variable. Statistical
methods are used to support inferences about whether relationships exist.
Researchers can make design decisions that protect against reaching false
statistical conclusions. Shadish and colleagues (2002) discussed nine
threats to statistical conclusion validity. We focus here on three especially
important threats.

Low Statistical Power
Detecting existing relationships among variables requires statistical
power. Adequate statistical power can be achieved in various ways, the
most straightforward of which is to use a sufficiently large sample. When
small samples are used, statistical power tends to be low, and the analyses
may fail to show that the independent and dependent variables are related
—even when they are. Power and sample size are discussed in Chapters 13
and 18.
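
As a rough sketch of the sample-size side of statistical power (the effect size, power, and alpha below are arbitrary illustrations, not values recommended in the text), a standard power calculation can be run as follows:

```python
from statsmodels.stats.power import TTestIndPower

# How many participants per group are needed to detect a medium-sized
# group difference (Cohen's d = 0.5) with 80% power at a two-sided
# significance level of .05?
n_per_group = TTestIndPower().solve_power(effect_size=0.5, power=0.80, alpha=0.05)
print(round(n_per_group))  # roughly 64 per group
```

Halving the expected effect size (d = 0.25) roughly quadruples the required sample, which is one reason weak interventions demand large samples.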
Another aspect of a powerful design concerns how the independent
variable is defined. Both statistically and substantively, results are clearer
when differences between groups being compared are large. Group
differences on the outcomes can be enhanced by maximizing differences
on the independent variable. Conn and colleagues (2001) offered good
suggestions for enhancing the power and effectiveness of nursing
interventions. Note that strengthening group differences is easier in RCTs
than in nonexperimental research. In experiments, investigators can devise
treatment conditions that are as distinct as money, ethics, and practicality
permit.
Another aspect of statistical power concerns maximizing precision, which
is achieved through accurate measuring tools, controls over confounding
variables, and powerful statistical methods. Precision can best be
explained with an example. Suppose we were studying the effect of
admission into a nursing home on depression by comparing people who
were or were not admi�ed. Depression varies from one elderly person to
another for various reasons. We want to isolate—as precisely as possible—
variation in depression a�ributable to a person’s entry into a nursing

home. The following ratio expresses what we wish to assess in this
example:

$$\frac{\text{Variability in depression due to nursing home admission}}{\text{Variability in depression due to other factors (e.g., age, pain)}}$$
This ratio, greatly simplified here, captures the essence of many statistical
tests. We want to make variability in the numerator (the upper half) as
large as possible relative to variability in the denominator (the lower half),
to evaluate precisely the relationship between nursing home admission
and depression. The smaller the variability in depression due to
confounding variables (e.g., age, pain), the easier it will be to detect
differences in depression between elders who were or were not admitted
to a nursing home. Thus, reducing variability caused by confounders can
increase statistical conclusion validity. As a purely hypothetical
illustration, we will attach some numeric values to the ratio as follows:

$$\frac{\text{Variability due to nursing home admission}}{\text{Variability due to other factors}} = \frac{10}{4}$$
If we can make the bottom number smaller, say by changing it from 4 to 2,
we will have a more precise estimate of the effect of nursing home
admission on depression, relative to other influences. Control mechanisms
such as those described earlier help to reduce variability caused by
extraneous variables. We illustrate this by continuing our example,
singling out age as a key confounding variable. Total variability in levels
of depression can be conceptualized as having the following components:

$$\text{Total variability in depression} = \text{Variability due to nursing home admission} + \text{Variability due to age} + \text{Variability due to other factors}$$
This equation can be taken to mean that part of the reason why elders
differ in depression is that some were admitted to a nursing home and
others were not; some were older and some were younger; and other
factors (e.g., pain) also affect depression.

One way to increase precision in this study would be to control age,
thereby removing the variability in depression that results from age
differences. We could do this, for example, by restricting age to elders
younger than 80 years, thereby reducing the variability in depression due
to age. As a result, the effect of nursing home admission on depression
becomes greater, relative to the remaining variability. Thus, this design
decision (homogeneity) enabled us to get a more precise estimate of the
effect of nursing home admission on level of depression (although, of
course, this limits generalizability). Research designs differ in the
sensitivity with which effects under study can be detected statistically.
Lipsey (1990) has prepared a good guide to enhancing the sensitivity of
research designs.
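
A small simulation can illustrate this precision argument; all of the numbers are invented for the example. Restricting the sample to a narrower age range (homogeneity) shrinks the confounder-related variability in the denominator of the ratio, which typically yields a smaller standard error for the admission effect even though fewer participants remain:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400

# Hypothetical data: depression varies with nursing home admission (the
# presumed cause), with age (a confounder), and with random noise.
age = rng.uniform(65, 95, n)
admitted = rng.binomial(1, 0.5, n)
depression = 5 * admitted + 1.0 * age + rng.normal(0, 3, n)

def group_difference_se(dep, adm):
    """Standard error of the admitted-vs.-not-admitted difference in means."""
    d1, d0 = dep[adm == 1], dep[adm == 0]
    return np.sqrt(d1.var(ddof=1) / len(d1) + d0.var(ddof=1) / len(d0))

print("SE, full sample:      ", group_difference_se(depression, admitted))

# Homogeneity: keep only elders younger than 80, reducing age-related
# variability in depression (the denominator of the precision ratio).
young = age < 80
print("SE, restricted sample:", group_difference_se(depression[young], admitted[young]))
```

The gain in precision comes at the cost of generalizability, as noted earlier, and the next section describes how over-restriction can backfire.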

Restriction of Range
The control of extraneous variation through homogeneity is easy to use
and can help to clarify the relationship between key research variables, but
it can be risky. Not only does this approach limit generalizability, it can
sometimes undermine statistical conclusion validity. When the use of
homogeneity restricts the range of values on the outcome variable,
relationships between the outcome and the independent variable will be
attenuated and may therefore lead to the erroneous conclusion that the
variables are unrelated. For example, if everyone in the sample had a
depression score of 50, scores would be unrelated to age, nursing home
admission, and so on.
In our example, we suggested limiting the sample of nursing home
residents to people younger than 80 years to reduce variability in the
denominator. Our aim was to enhance the variability in depression scores
attributable to nursing home admission, relative to depression variability
due to other factors. But what if few elders younger than 80 years were
depressed? With limited variability, relationships cannot be detected.
Therefore, in designing a study, you should consider whether there will be
sufficient variability to support the statistical analyses envisioned. The
issue of floor effects and ceiling effects, which involve range restrictions at
the lower and upper end of a measure, respectively, is discussed later in
this book.

Unreliable Implementation of a Treatment

The strength of an intervention—and statistical conclusion validity—can
be undermined if an intervention is not as powerful in reality as it is “on
paper.” Intervention fidelity (or treatment fidelity) concerns the extent to
which the implementation of an intervention is faithful to its plan. There is
growing interest in intervention fidelity and considerable advice on how
to achieve it (e.g., Bova et al., 2017; Rixon et al., 2016).
Interventions can be weakened by various factors, which researchers can
often influence. One issue concerns whether the intervention is similar
from one person to the next. Usually, researchers strive for constancy of
conditions in implementing a treatment because lack of standardization
adds extraneous variation. Even in tailored, patient- centered interventions,
there are protocols, though different protocols are used with different
people. Using the notions just described, when standard protocols are not
followed, variability due to the intervention (i.e., in the numerator) can be
suppressed, and variability due to other factors (i.e., in the denominator)
can be inflated, possibly leading to the erroneous conclusion that the
intervention was ineffective. This suggests the need for some
standardization, the use of procedures manuals, thorough training of
personnel, and vigilant monitoring (e.g., observing delivery of the
intervention) to ensure that the intervention is being implemented as
planned—and that control group members have not gained access to the
intervention.
Assessing whether the intervention was delivered as intended may need to
be supplemented with efforts to ensure that the intervention was received
as intended. This may involve a manipulation check to assess whether the
treatment was perceived in an expected manner. For example, if we were
testing the effect of soothing versus jarring music on anxiety, we might
want to learn whether participants themselves perceived the music as
soothing and jarring. Another aspect of treatment fidelity for behavior
change interventions concerns the concept of enactment (Bellg et al., 2004).
Enactment refers to participants’ performance of the treatment- related
skills, behaviors, and cognitive strategies in relevant real-life settings.

Example of Attention to Intervention Fidelity
Morrison and colleagues (2017) described strategies for enhancing
and evaluating intervention fidelity, using as an illustration the
efforts used in a multisite RCT for adults with multiple sclerosis.
Their approach included audiotaping intervention classes, auditing

computer exercises completed by participants, and monitoring class
attendance.

Treatment adherence can be another problem. It is not unusual for those
in the intervention group to elect not to participate fully in the treatment—
for example, they may stop going to treatment sessions. Researchers
should take steps to encourage participation among those in the treatment
group. This might mean making the intervention as enjoyable as possible,
offering incentives, and reducing burden in terms of data collection (Polit
& Gillespie, 2010). Nonparticipation in an intervention is rarely random.
Researchers should document which people got what amount of treatment
so that individual differences in “dose” can be examined in the analysis or
interpretation of results.

TIP Except for small- scale studies, every study should have a
procedures manual that delineates the protocols and procedures for
implementation. The Toolkit section of the accompanying Resource
Manual provides a model table of contents for such a procedures
manual. The Toolkit also includes a model checklist to monitor
delivery of an intervention through direct observation of intervention
sessions.

Internal Validity
Internal validity refers to the extent to which it is possible to make an
inference that the independent variable, rather than another factor, truly
had a causal effect on the outcome. We infer from an effect to a cause by
eliminating other potential causes. The control mechanisms reviewed
earlier are strategies for improving internal validity. If researchers do not
manage confounding variation, the conclusion that the outcome was
caused by the independent variable is open to challenge.

Threats to Internal Validity
Experiments possess a high degree of internal validity because
manipulation and random assignment allow researchers to rule out most
alternative explanations for the results. Researchers who use quasi-
experimental or correlational designs must contend with competing
explanations of what caused the outcomes. Major threats to internal
validity are examined in this section.

Temporal Ambiguity
One criterion for inferring a causal relationship is that the cause must
precede the effect. In RCTs, researchers create the independent variable
and then observe subsequent performance on an outcome, so establishing
temporal sequencing is never a problem. In correlational studies, however,
it may be unclear whether the independent variable preceded the
dependent variable, or vice versa—and this is especially true in
cross-sectional studies.

Selection
Selection (self- selection) encompasses biases resulting from preexisting
differences between groups. When individuals are not assigned to groups
randomly, the groups being compared are seldom completely equivalent.
Differences on the outcomes could then reflect initial group differences
rather than the effect of the independent variable. For example, if we
found that men who were overweight were more likely to be depressed
than men who were not overweight, it would be impossible to conclude
that the two groups differed in depression because of their weight. The
problem of selection is reduced if researchers can collect data on

participants’ characteristics before the occurrence of the independent
variable. In our example, if we could measure men’s level of depression
before they became overweight, then the study could be designed to control
earlier levels of depression. Selection bias is the most problematic and
frequent threat to internal validity in studies not using an experimental
design.

History
The history threat concerns the occurrence of external events that take
place concurrently with the independent variable and that can affect the
outcomes. For example, suppose we were studying the effectiveness of an
outreach program to encourage pregnant women in rural areas to improve
health practices (e.g., smoking cessation, prenatal care). The program
might be evaluated by comparing the average birth weight of infants born
in the 12 months before the outreach program with the average birth
weight of those born in the 12 months after the program was introduced,
using a time series design. However, suppose that 1 month after the new
program was launched, a well-publicized TV program about the
importance of healthy lifestyles during pregnancy was aired. Infants’ birth
weight might now be affected by both the intervention and the messages
in the TV program, and it would be difficult to disentangle the two effects.
In a true experiment, history is not as likely to be a threat to a study’s
internal validity because we can often assume that external events are as
likely to affect the intervention group as the control group. When this is
the case, group differences on the dependent variables represent effects
over and above those created by outside factors. There are, however,
exceptions. For example, when a crossover design is used, an event
external to the study may occur during the first half (or second half) of the
experiment, and so treatments would be contaminated by the effect of that
event. That is, some people would receive treatment A with the event and
others would receive treatment A without it, and the same would be true
for treatment B.
Selection biases sometimes interact with history to compound the threat to
internal validity. For example, if the comparison group is different from
the treatment group, then the characteristics of the members of the
comparison group could lead them to have different intervening
experiences, thereby introducing both history and selection biases into the
design.

Maturation
In a research context, maturation refers to processes occurring during the
study as a result of the passage of time rather than as a result of the
independent variable. Examples of such processes include physical
growth, emotional maturity, and fatigue. For instance, if we wanted to
evaluate the effects of a sensorimotor program for developmentally
delayed children, we would have to consider that progress occurs in these
children even without special assistance. A one-group pretest–posttest
design is highly susceptible to this threat.
Maturation is often a relevant consideration in health research. Maturation
does not refer just to aging but rather to any change that occurs as a
function of time. Thus, maturation in the form of wound healing,
postoperative recovery, and other bodily changes could be a rival
explanation for the independent variable’s effect on outcomes.

Mortality/Attrition
Mortality is the validity threat that arises from attrition in groups being
compared. If different kinds of people remain in the study in one group
versus another, then these differences, rather than the independent
variable, could account for observed differences on the outcomes. Severely
ill patients might drop out of an experimental condition because it is too
demanding, or they might drop out of the control group because they see
no advantage to participating. In a prospective cohort study, there may be
differential attrition between groups being compared because of death,
illness, or geographic relocation. Attrition bias can also occur in single-
group quasi-experiments if those dropping out of the study are a biased
subset, making it appear that a change in average values resulted from the
treatment.
The risk of attrition is especially great when the length of time between
points of data collection is long. A 12-month follow-up of participants, for
example, tends to produce higher rates of attrition than a 1-month follow-
up (Polit & Gillespie, 2009). In clinical studies, the problem of attrition may
be especially acute because of patient death or disability.
If attrition is random (i.e., those dropping out of a study are comparable to
those remaining in it), then there would not be bias. However, attrition is
rarely random. In general, the higher the rate of attrition, the greater the
likelihood of bias.

TIP In longitudinal studies, attrition may occur because researchers
cannot locate participants, not because they dropped out of the study.
An effective strategy for tracing people is to obtain contact
information from participants at each point of data collection.
Contact information should include the names, addresses, telephone
numbers, and email addresses of two to three people with whom the
participant is close (e.g., siblings)—people who could provide
information if participants moved. A sample contact information
form is provided in the Toolkit of the accompanying Resource Manual.

Testing and Instrumentation
Testing refers to the effects of taking a pretest on people’s performance on
a posttest. It has been found, particularly in studies of attitudes, that the
mere act of collecting data from people changes them. Suppose a sample
of nursing students completed a questionnaire about attitudes toward
assisted suicide. We then teach them about various arguments for and
against assisted suicide, outcomes of court cases, and the like. Then we
give them the same attitude measure and observe whether their attitudes
have changed. The problem is that the first questionnaire might sensitize
students, resulting in attitude changes regardless of whether instruction
follows. If a comparison group is not used, it may be impossible to
segregate the effects of the instruction from the pretest effects.
Sensitization, or testing, problems are more likely to occur when people
are exposed to controversial or novel material in the pretest.
A related threat is instrumentation. This bias reflects changes in
measuring instruments or methods of measurement between two points of
data collection. For example, if we used one measure of stress at baseline
and a revised measure at follow-up, any differences might reflect changes
in the measuring tool rather than the effect of an independent variable.
Instrumentation effects can occur even if the same measure is used. For
example, if the measuring tool yields more accurate measures on a second
administration (e.g., if data collectors are more experienced) or less
accurate measures the second time (e.g., if participants become bored and
answer haphazardly), then these differences could bias the results.

Internal Validity and Research Design
Quasi-experimental and correlational studies are especially susceptible to
threats to internal validity. Table 10.2 lists specific designs that are most
vulnerable to the threats just described—but it should not be assumed that
threats are irrelevant in designs not listed. Each threat represents an
alternative explanation that competes with the independent variable as a
cause of the outcome. The aim of a strong research design is to rule out
competing explanations.

TABLE 10.2
Research Designs and Threats to Internal Validity

Threat: Designs Most Susceptible
Temporal ambiguity: Case–control; other retrospective/cross-sectional
studies
Selection: Nonequivalent control group (especially posttest-only);
case–control; “natural” experiments with two groups; time series, if the
population undergoes a change
History: One-group pretest–posttest; time series; prospective cohort;
crossover
Maturation: One-group pretest–posttest
Mortality/attrition: Prospective cohort; longitudinal studies (experimental
and observational); one-group pretest–posttest
Testing: All pretest–posttest designs
Instrumentation: All pretest–posttest designs

An experimental design normally rules out most rival hypotheses, but
even in RCTs, researchers must exercise caution. For example, if there is
treatment infidelity or contamination between treatments, then history
might be a rival explanation for any group differences (or lack of
differences). Mortality can be a salient threat in true experiments. Because
the experimenter does things differently with the experimental and control
groups, people in the groups may drop out of the study differentially. This
is particularly apt to happen if the experimental treatment is painful or
inconvenient or if the control condition is boring or bothersome. When this
happens, participants remaining in the study may differ from those who
left, thereby nullifying the initial equivalence of the groups. In short,
researchers should consider how best to guard against and detect all
possible threats to internal validity, no matter what design is used.
Supplement A to this chapter provides more detailed information about
internal validity threats for specific experimental and quasi-experimental
designs.

TIP Traditional evidence hierarchies or level of evidence scales (e.g.,
Figure 2.2) rank evidence sources almost exclusively based on the
risk of internal validity threats.

Internal Validity and Data Analysis
The best strategy for enhancing internal validity is to use a strong research
design that includes control mechanisms and design features discussed in
this chapter. Even when this is possible (and, certainly, when this is not
possible), it is advisable to conduct analyses to assess the nature and
extent of biases. When biases are detected, the information can be used to
interpret the substantive results. Moreover, in some cases, biases can be
statistically controlled.
Researchers need to be self-critics. They need to consider fully and
objectively the types of biases that could have arisen—and then
systematically search for evidence of their existence (while hoping, of
course, that no evidence can be found). To the extent that biases can be
ruled out or controlled, the quality of causal evidence will be
strengthened.
Selection biases should always be examined. Typically, this involves
comparing groups on pretest measures, when pretest data have been
collected. For example, if we were studying depression in women who
gave birth to a baby by cesarean delivery versus those who gave birth
vaginally, selection bias could be assessed by comparing depression scores
in these two groups during or before the pregnancy. If there are significant
predelivery differences, then any postdelivery differences would have to
be interpreted with initial differences in mind (or with differences
controlled). In designs with no pretest measure of the outcome,
researchers should assess selection biases by comparing groups with
respect to key background variables, such as age, health status, and so on.
Whenever the research design involves multiple points of data collection,
researchers should analyze attrition biases. This is typically achieved by
comparing those who did and did not complete the study on baseline
measures of the outcome or on other baseline characteristics.
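To make these bias analyses concrete, here is a minimal sketch (ours, not
from any cited study), in Python with invented data and variable names. It
runs the two comparisons just described: the groups at baseline, and
completers versus dropouts.

# Hypothetical sketch of selection- and attrition-bias checks.
# Data, group labels, and column names are invented for illustration.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "group": ["cesarean"] * 4 + ["vaginal"] * 4,
    "baseline_depression": [14, 11, 16, 12, 9, 10, 8, 11],
    "completed": [True, True, False, True, True, True, True, False],
})

# Selection bias: do the groups differ at baseline?
ces = df.loc[df.group == "cesarean", "baseline_depression"]
vag = df.loc[df.group == "vaginal", "baseline_depression"]
t, p = stats.ttest_ind(ces, vag)
print(f"Baseline group difference: t={t:.2f}, p={p:.3f}")

# Attrition bias: do dropouts differ from completers at baseline?
comp = df.loc[df.completed, "baseline_depression"]
drop = df.loc[~df.completed, "baseline_depression"]
t, p = stats.ttest_ind(comp, drop)
print(f"Completer vs. dropout difference: t={t:.2f}, p={p:.3f}")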

Example of Assessing Internal Validity Threats
Uhm and Kim (2019) used a quasi-experimental design to study the
effectiveness of a mother–nurse partnership program on the
outcomes for mothers and infants in a pediatric cardiac intensive care
unit. They tested for selection bias and found significant differences
between the intervention and comparison groups with respect to several
baseline variables (e.g., preoperative NICU care), and these were
controlled statistically using ANCOVA in the main analyses. There
was no attrition in either group.

When people withdraw from an intervention study, researchers are in a
dilemma about whom to “count” as being “in” a condition. One approach
is a per-protocol analysis, which includes members in a treatment group
only if they actually received the treatment. Such an analysis is
problematic, however, because not receiving the treatment involves self-
selection that can undo initial group comparability. This type of analysis
will almost always be biased toward finding positive treatment effects. The
“gold standard” approach is to use an intention-to-treat analysis, which
involves keeping participants who were randomized in the groups to
which they were assigned even if they drop out (Polit & Gillespie, 2009,
2010). An intention-to-treat analysis may yield an underestimate of the
effects of a treatment if many participants did not actually get the assigned
treatment—but it may better reflect what would happen in the real world.
One difficulty with an intention-to-treat analysis is that it is often difficult
to obtain outcome data for people who have dropped out of a treatment,
but there are strategies for estimating outcomes for those with missing
data, as we discuss in Chapter 20.
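A minimal sketch, with invented data, can show how the two analytic
choices diverge: under intention-to-treat, a dropout stays in the arm to
which he or she was randomized; under per-protocol, the dropout is
excluded.

# Hypothetical contrast of intention-to-treat (ITT) and per-protocol
# analysis. All values and column names are invented.
import pandas as pd

df = pd.DataFrame({
    "assigned": ["treatment"] * 3 + ["control"] * 3,
    "received": [True, True, False, True, True, True],  # one dropout
    "outcome":  [72, 68, 55, 60, 58, 62],
})

# ITT: everyone is analyzed in the arm to which they were randomized.
itt = df.groupby("assigned")["outcome"].mean()

# Per-protocol: members count only if they received their treatment.
pp = df[df["received"]].groupby("assigned")["outcome"].mean()

print("ITT means:\n", itt)
print("Per-protocol means:\n", pp)
# Excluding the self-selected dropout raises the treatment-arm mean,
# illustrating why per-protocol estimates tend to inflate effects.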

Example of an Intention-to-treat Analysis
Zhang and colleagues (2018) explored the effect of suction pressure
generated by a breast pump on mothers’ onset of lactation and milk
supply after cesarean birth. Mothers were randomly assigned to a
high-pressure group, a low-pressure group, or a control group. The
researchers, who used an intention-to-treat analysis, found that high-
pressure pumping hastened the onset of lactation.

In a crossover design, history is a potential threat both because an external
event could differentially affect people in different treatment orderings
and because the different orderings are in themselves a kind of differential
history. Substantive analyses of the data involve comparing outcomes
under treatment A versus treatment B. The analysis of bias, by contrast,
involves comparing participants in the different orderings (e.g., A then B
versus B then A). Significant differences between the two orderings are
evidence of an ordering bias.
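A minimal sketch of such an ordering-bias check, with invented outcome
values, might look like this:

# Hypothetical check for ordering bias in a crossover design.
from scipy import stats

# Summary outcome per participant (e.g., mean across both periods),
# grouped by the order in which treatments were received; data invented.
a_then_b = [5.1, 4.8, 5.5, 5.0]
b_then_a = [4.9, 5.2, 5.0, 5.3]

t, p = stats.ttest_ind(a_then_b, b_then_a)
print(f"Ordering comparison: t={t:.2f}, p={p:.3f}")
# A significant difference between orderings suggests a carryover or
# history effect tied to the treatment sequence.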
In summary, efforts to enhance the internal validity of a study should not
end once the design strategy has been put in place. Researchers should
seek additional opportunities to understand (and possibly to correct) the
various threats to internal validity that can arise.

Supplement B to this chapter provides guidance on how to do a
plausibility analysis to assess your design for internal validity threats
when randomization is not possible, as well as other strategies for
strengthening internal validity in quasi-experimental and case–control
designs.

Construct Validity
Researchers conduct a study with specific exemplars of treatments,
outcomes, settings, and people, which are stand-ins for broad constructs.
Construct validity involves inferences from study particulars to the
higher- order constructs that they are intended to represent. Constructs are
the means for linking the operations used in a study to mechanisms for
translating the resulting evidence into practice. If studies contain construct
errors, the evidence may be misleading.

Enhancing Construct Validity
The first step in fostering construct validity is a careful explication of the
treatment, outcomes, setting, and population constructs of interest; the
next step is to select instances that match those constructs as closely as
possible. Construct validity is further cultivated when researchers assess
the match between the exemplars and the constructs and the degree to
which any “slippage” occurred.
Construct validity has most often been a concern to researchers in
connection with the measurement of outcomes, an issue we discuss in
Chapter 15. There is a growing interest, however, in the careful
conceptualization and development of theory- based interventions in
which the treatment itself has strong construct validity (see Chapter 28). It
is just as important for the independent variable (whether it be an
intervention or something not amenable to manipulation) to be a strong
instance of the construct of interest as it is for the measured outcome to
have strong correspondence to the outcome construct. In nonexperimental
research, researchers do not create and manipulate the hypothesized
cause, so ensuring construct validity of the independent variable is often
difficult.
Shadish and colleagues (2002) broadened the concept of construct validity
to cover persons and settings as well as outcomes and treatments. For
example, some nursing interventions specifically target groups that are
characterized as “disadvantaged,” but there is not always agreement on
how this term is defined and operationalized. Researchers select specific
people to represent the construct of a disadvantaged group about which
inferences will be made, and so it is important that the specific people are
good exemplars of the underlying construct. The construct
“disadvantaged” must be carefully delineated before a sample is selected.
Similarly, if a researcher is interested in such settings as “immigrant
neighborhoods” or “school-based clinics,” these are constructs that require
careful description and the selection of good exemplars that match those
constructs.

Threats to Construct Validity
Threats to construct validity are reasons that inferences from a particular
study exemplar to an abstract construct could be erroneous. Such a threat
could occur if the operationalization of the construct fails to incorporate all
the relevant characteristics of the underlying construct or if it includes
extraneous content—both of which are instances of a mismatch. Shadish
and colleagues (2002) identified 14 threats to construct validity (their Table
3.1) and several additional threats specific to case–control designs (their
Table 4.3). Among the most noteworthy threats are the following:

1. Reactivity to the study situation. Participants may behave in a particular manner
because they are aware of their role in a study (the Hawthorne effect). When
people’s responses reflect, in part, their perceptions of study participation, those
perceptions become an unwanted part of the treatment construct under study.
Strategies to reduce this problem include blinding, the use of outcome measures
not susceptible to reactivity (e.g., from hospital records), and the use of
preintervention strategies to satisfy participants’ desire to look competent or
please the researcher.

Example of a Possible Hawthorne Effect
Bhimani (2016) evaluated the effect of a series of strategies designed to reduce work-
related musculoskeletal nursing injuries. A 50% reduction in injuries was found, but
Bhimani noted that the Hawthorne effect likely contributed to the decline in injury rates.

2. Researcher expectancies. A similar threat stems from the researcher’s influence on
participant responses through subtle (or not- so- subtle) communication about
desired outcomes. When this happens, the researcher’s expectations become
part of the treatment construct that is being tested. Blinding can reduce this
threat, but another strategy is to make observations to detect verbal or
behavioral signals of research staff’s expectations and correct them.

3. Novelty effects. When a treatment is new, participants and research agents alike
might alter their behavior. People may be either enthusiastic or skeptical about
new methods of doing things. Results may reflect reactions to the novelty rather
than to the intrinsic nature of an intervention, and so the intervention construct
is clouded by novelty content.

4. Compensatory effects. In intervention studies, compensatory equalization can occur
if health care staff or family members try to compensate for the control group
members’ failure to receive a perceived beneficial treatment. The compensatory
goods or services are then part of the construct description of study conditions.
Compensatory rivalry is a related threat arising from the control group members’
desire to demonstrate that they can do as well as those receiving a special
treatment.

5. Treatment diffusion or contamination. Alternative treatment conditions can become
blurred, which can impede good construct descriptions of the independent
variable. This may occur when participants in a control group condition receive
services similar to those in the treatment condition. More often, blurring occurs
when those in a treatment condition essentially put themselves into the control
group by dropping out of the intervention. This threat can also occur in
nonexperimental studies. For example, in case–control comparisons of smokers
and nonsmokers, care must be taken during screening to ensure that
participants are appropriately categorized (e.g., some people may consider
themselves nonsmokers even though they smoke regularly, but only on
weekends).

Construct validity requires careful attention to what we call things (i.e.,
construct labels) so that appropriate construct inferences can be made.
Enhancing construct validity in a study requires careful thought before a
study is undertaken, in terms of a well- considered explication of
constructs, and requires poststudy scrutiny to assess the degree to which a
match between operations and constructs was achieved.

External Validity
External validity concerns the extent to which it can be inferred that
relationships observed in a study hold true over variations in people,
conditions, and settings. External validity has emerged as a major concern
in an EBP world in which there is an interest in generalizing evidence from
tightly controlled research settings to real-world clinical practice settings.
External validity questions may take several different forms. We may ask
whether relationships observed in a study sample can be generalized to a
larger population—for example, whether results from a smoking cessation
program found effective with pregnant teenagers in Boston can be
generalized to pregnant teenagers throughout the United States. Other
external validity questions are about generalizing to types of people,
settings, or treatments unlike those in the research (Polit & Beck, 2010). For
example, can findings about a pain reduction treatment in a study of
Australian women be generalized to men in Canada? Sometimes new
studies are needed to answer questions about external validity, but
external validity often can be enhanced by researchers’ design decisions.

Enhancements to External Validity
One aspect of external validity concerns the representativeness of the
participants used in the study. For example, if the sample is selected to be
representative of a population to which the researcher wishes to generalize
the results, then the findings can more readily be applied to that
population (Chapter 13). Similarly, if the settings in which the study
occurs are representative of the clinical settings in which the findings
might be applied, then inferences about relevance in those other settings
can be strengthened.
An important concept for external validity is replication. Multisite studies
are powerful because more confidence in the generalizability of the results
can be attained if findings are replicated in several sites—particularly if the
sites are different on important dimensions (e.g., size, nursing skill mix,
and so on). Studies with a diverse sample of participants can test whether
study results are replicated for subgroups of the sample—for example,
whether benefits from an intervention apply to men and women.
Systematic reviews are a crucial aid to external validity precisely because
they illuminate the consistency of results in studies replicated with
different groups and settings.

Threats to External Validity
In the previous chapter, we discussed interaction effects that can occur in a
factorial design when two treatments are simultaneously manipulated.
The interaction question is whether the effects of treatment A hold (are
comparable) for all levels of treatment B. Conceptually, questions
regarding external validity are similar to this interaction question. Threats
to external validity concern ways in which relationships between variables
might interact with or be moderated by variations in people, settings, time,
and conditions. Shadish and colleagues (2002) described several threats to
external validity, such as the following two:

1. Interaction between relationship and people. An effect observed with certain types of
people might not be observed with other types of people. A common complaint
about RCTs is that many people are excluded—not because they would not
benefit from the treatment, but because they cannot provide needed research
data (e.g., cognitively impaired patients, non-English speakers) or because they
would not allow the “best test” of the intervention (e.g., they have complex
comorbidities).

2. Interaction between causal effects and treatment variation. An innovative treatment
might be effective because it is paired with other elements, and sometimes those
elements are intangible—e.g., an enthusiastic project director. The same
“treatment” could never be fully replicated, and thus different results might be
obtained in subsequent tests.

Shadish and colleagues (2002) noted that moderators of relationships are
the norm, not the exception. With interventions, it is normal for a
treatment to “work better” for some people than for others. We address
this issue in Chapter 31.

Tradeoffs and Priorities in Study Validity
Quantitative researchers strive to design studies that are strong with
respect to all four types of study validity. Sometimes, efforts to increase
one type of validity also benefit another type. In many instances, however,
addressing one type of validity increases threats to others.
For example, suppose we were scrupulous in maximizing intervention
fidelity in an RCT. Our efforts might include strong training of staff,
careful monitoring of intervention delivery, and steps to maximize
participants’ adherence to treatment. Such efforts would have positive
effects on statistical conclusion validity because the treatment was made
powerful. Internal validity would be enhanced if attrition biases were
minimized. Intervention fidelity would also improve the construct validity
of the treatment because the content delivered and received would better
match the underlying construct. But what about external validity? All of
the actions undertaken to ensure that the intervention is strong, construct-
valid, and administered according to plan are not consistent with the
realities of clinical se�ings. People are not normally paid to adhere to
treatments; nurses are not monitored and corrected to ensure that they are
following a script; training in the use of new protocols is usually brief; and
so on.
This example illustrates that researchers need to give careful thought to
how design decisions may affect various types of study validity. Of
particular concern are tradeoffs between internal and external validity.

Internal Validity and External Validity
Tension between the goals of achieving internal validity and external
validity is pervasive. Many control mechanisms that are designed to rule
out competing explanations for hypothesized cause-and-effect
relationships make it difficult to infer that the relationships hold true in
uncontrolled real-life settings.
Internal validity was long considered the “sine qua non” of experimental
research (Campbell & Stanley, 1963). The rationale was this: If there is
insufficient evidence that an intervention really caused an effect, why
worry about generalizing the results? This high priority given to internal
validity, however, is somewhat at odds with the current emphasis on
evidence-based practice. A reasonable question might be: If study results
cannot be generalized to real-world clinical settings, who cares if an
intervention is effective? Clearly, both internal and external validity are
important to building an evidence base for nursing practice.
There are several “solutions” to the conflict between internal and external
validity. The first (and most prevalent) approach is to emphasize one and
sacrifice the other. Most often, it is external validity that is sacrificed. For
example, external validity is not even considered in ranking evidence in
level of evidence scales (Chapter 2).
A second approach is to use a phased series of studies. In the earlier phase,
there are tight controls, strict intervention protocols, and stringent criteria
for including people in the RCT. Such studies are efficacy studies. Once
the intervention has been deemed to be effective under tightly controlled
conditions in which internal validity was the priority, it is tested with
larger samples in multiple sites under less restrictive conditions, in
effectiveness studies that emphasize external validity.
A third approach is to compromise. There has been recent interest in
promoting designs that aim to achieve a balance between internal and
external validity in a single intervention study. We describe such pragmatic
clinical trials in Chapter 31, a new chapter that discusses the applicability of
research findings.

Prioritization and Design Decisions
It is impossible to avoid all possible threats to study validity. By
understanding the various threats, however, you can pinpoint the
tradeoffs you are willing to make to achieve study goals. Some threats are
more worrisome than others in terms of likelihood of occurrence and
dangers to inferences you would like to make. Moreover, some threats are
costlier to avoid than others. Resources available for a study must be
allocated to address the most important validity issues. For example, with
a fixed budget, you need to decide whether it is better to increase the size
of the sample and hence power (statistical conclusion validity) or to use
the money on efforts to reduce attrition (internal validity).
The point is that you should make conscious decisions about how to
structure a study to address validity concerns. Every design decision has
both a “payoff” and a cost in terms of study integrity.

TIP A useful strategy is to create a matrix that lists various design
decisions in the first column (e.g., randomization, crossover design),
and then use the next four columns to identify the potential impact of
those options on the four types of study validity. The Toolkit section
of the accompanying Resource Manual includes a model matrix as a
Word document for you to use and adapt.

Critical Appraisal of Study Validity
In critically appraising a research report to evaluate its potential
contribution to nursing practice, it is crucial to make judgments about the
extent to which threats to validity were minimized—or, at least, assessed
and taken into consideration in interpreting the results. The guidelines in
Box 10.1 focus on validity-related issues to further help you
appraise quantitative research designs. Together with the guidelines in the
previous chapter, they are likely to be the core of a critical evaluation of
the evidence that quantitative studies yield. From an EBP perspective, it is
important to remember that drawing inferences about causal relationships
relies not only on how high up on the evidence hierarchy a study is
(Figure 2.2), but also, for any given level of the hierarchy, how successful
the researcher was in managing study validity and balancing competing
validity demands.

Box 10.1 Guidelines for Critically Appraising Design Elements and
Study Validity in Quantitative Studies

1. Was there adequate statistical power? Did the manner in which the independent
variable was operationalized create strong contrasts that enhanced statistical
power? Was precision enhanced by controlling confounding variables? If
hypotheses were not supported (e.g., a hypothesized relationship was not
found), is it possible that statistical conclusion validity was compromised and
the results are wrong?

2. In intervention studies, did the researchers attend to intervention fidelity? For
example, were staff adequately trained? Was the implementation of the
intervention monitored? Was attention paid to both the delivery and receipt of
the intervention?

3. What evidence does the report provide that selection biases were eliminated or
minimized? What steps were taken to control confounding participant
characteristics that could affect the equivalence of groups being compared?
Were these steps adequate?

4. To what extent did the research design rule out the plausibility of other threats
to internal validity, such as history, attrition, maturation, and so on? What are
your overall conclusions about the internal validity of the study?

5. Were there any major threats to the construct validity of the study? In
intervention studies, was there a good match between the underlying
conceptualization of the intervention and its operationalization? Was the
intervention confounded with extraneous content, such as researcher
expectations? Was the setting or site a good exemplar of the type of setting
envisioned in the conceptualization?

6. Was the context of the study sufficiently described to enhance its capacity for
external validity? Were the settings or participants representative of the types to
which results were designed to be generalized?

7. Overall, did the researcher appropriately balance validity concerns? Was
attention paid to certain types of threats (e.g., internal validity) at the expense of
others (e.g., external validity)?

Research Example
We conclude this chapter with an example of a study in which careful
attention was paid to many aspects of study validity. The design being
used in this research is explained more fully in Chapter 31.

Study: Using SMART design to improve symptom management among cancer
patients (Sikorskii et al., 2017).
Statement of purpose: The purpose of the study, which was still in progress
when the article about the study protocol was written, was to evaluate the
efficacy of a Sequential Multiple Assignment Randomized Trial (SMART) of
interventions to improve symptom management among patients with cancer.
Treatment groups: The study is testing two evidence- based practices:
reflexology and meditative (mindfulness) practices. Dyads of solid tumor cancer
patients and their caregivers are initially assigned to one of these interventions,
which is offered in the patients’ homes, or to a control group of usual care. After
4 weeks, intervention group dyads that show little improvement in fatigue are
rerandomized to either continuing in the original intervention or adding the
alternative intervention.
Method: The researchers are using a design that addresses many validity
concerns. Randomization (both initially and at rerandomization) is being done
using a computer minimization algorithm that is designed to balance the arms
for the patient’s site of cancer (e.g., breast, lung, colon), stage of cancer, and type
of treatment. (See Supplement to Chapter 9 for information about this type of
randomization.) The researchers estimated how large a sample was needed to
achieve adequate power for statistical conclusion validity, using a procedure
called power analysis (Chapter 13).
Additional study validity efforts: For dyads in the reflexology group,
caregivers are trained by a study reflexologist. For dyads in the meditative
group, both the patient and the caregiver are trained by a study meditation
provider. All intervention agents are being carefully trained and monitored.
Patients and caregivers in all groups are interviewed twice by telephone, at
baseline and then at study week 12. The interviewers are blinded to the dyad’s
group assignments. The interviewers gather information about patients’ fatigue,
pain, depression, and anxiety using instruments known to be of high quality. A
study coordinator calls patients weekly to ask about their symptoms and also
asks caregivers in the intervention groups about the number of sessions
conducted with the patients. Although the analysis was not yet undertaken
when this paper was written, the researchers plan to control statistically for
demographic and baseline clinical characteristics. The researchers are
implementing extensive procedures to ensure intervention fidelity. For example,
both the intervention agents and the caregivers must achieve proficiency in their
therapies. The caregivers’ enactment of the therapies is being monitored. The
researchers plan to undertake an attrition analysis to compare the
characteristics of those who do or do not drop out of the study.
Conclusions: When the paper was written, the researchers had enrolled 150 of
the 430 dyads they planned to enroll. Forty dyads of the 150 have been
rerandomized. The researchers acknowledged that the recruitment of dyads is
challenging.

Summary Points

Study validity concerns the extent to which appropriate inferences can be made.
Threats to validity are reasons that an inference could be wrong. A key function
of quantitative research design is to rule out validity threats.
Control over confounding participant characteristics is key to managing many
validity threats. The best control method is randomization to treatment
conditions, which effectively controls all confounding variables—especially in
the context of a crossover design.
When randomization is not possible, other control methods include
homogeneity (the use of a homogeneous sample to eliminate variability on
confounding characteristics); blocking or stratifying, as in the case of a
randomized block design; pair matching participants on key variables to make
groups more comparable (or by using propensity matching, which involves
matching on a propensity score for each participant); balancing groups to
achieve comparability; and statistical control to remove the effect of a
confounding variable statistically (e.g., through analysis of covariance).
Homogeneity, stratifying, matching, and statistical control share two
disadvantages: Researchers must know in advance which confounding variables
to control, and they can rarely control all of them.
Four types of validity affect the rigor of a quantitative study: statistical
conclusion validity, internal validity, construct validity, and external validity.
Statistical conclusion validity concerns the validity of the inference that a
relationship between variables really exists.
Threats to statistical conclusion validity include low statistical power (the
ability to detect true relationships among variables); low precision (the
exactness of the relationships revealed after controlling confounding variables);
and factors that weaken the operationalization of the independent variable.
Intervention (or treatment) fidelity concerns the extent to which the
implementation of a treatment is faithful to its plan. Intervention fidelity is
enhanced through standardized treatment protocols, careful training of
intervention agents, monitoring of the delivery and receipt of the intervention,
manipulation checks, and steps to promote treatment adherence and avoid
contamination of treatments.
Internal validity concerns the inference that outcomes were caused by the
independent variable, rather than by confounding factors. Threats to internal
validity include temporal ambiguity (lack of clarity about whether the
presumed cause preceded the outcome); selection (preexisting group
differences); history (the occurrence of external events that could affect
outcomes); maturation (changes resulting from the passage of time); mortality
(effects attributable to attrition); testing (effects of a pretest); and
instrumentation (changes in the way data are gathered).
Internal validity can be enhanced through judicious design decisions but can
also be addressed analytically (e.g., through an analysis of selection or attrition
biases). When people withdraw from a study, an intention-to-treat analysis
(analyzing outcomes for all people in their original treatment conditions) is
preferred to a per-protocol analysis (analyzing outcomes only for those who
received the full treatment) for maintaining the integrity of randomization.
Construct validity concerns inferences from the particular exemplars of a study
(e.g., the specific treatments, outcomes, and settings) to the higher-order
constructs that they are intended to represent. The first step in fostering
construct validity is a careful explication of those constructs.
Threats to construct validity can occur if the operationalization of a construct
fails to incorporate all relevant characteristics of the construct, or if it includes
extraneous content. Examples of such threats include subject reactivity, researcher
expectancies, novelty effects, compensatory effects, and treatment diffusion.
External validity concerns inferences about the extent to which study results
can be generalized—i.e., whether relationships observed in a study hold true
over variations in people, settings, time, and treatments. External validity can be
enhanced by selecting representative people and settings and through replication.
Researchers need to prioritize and recognize tradeoffs among the various types
of validity, which sometimes compete with each other. Tensions between
internal and external validity are especially prominent. One solution has been to
begin with a study that emphasizes internal validity (efficacy studies) and then
if a causal relationship can be inferred, to undertake effectiveness studies that
emphasize external validity.

Study Activities

Study activities are available to instructors.

References Cited in Chapter 10
Abbasi P., Mohammed-Alizadeh S., & Mirghafourvand M. (2018). Comparing the
effect of e-learning and educational booklet on the childbirth self-efficacy: A
randomized controlled clinical trial. Journal of Maternal-Fetal & Neonatal Medicine,
31, 633–650.

* Bang K., Kim S., Song M., Kang K., & Jeong Y. (2018). The effects of a health
promotion program using urban forests and nursing student mentors on the
perceived and psychological health of elementary school children in vulnerable
populations. International Journal of Environmental Research and Public Health, 15,
1977.

* Bellg A., Borrelli B., Resnick B., Hecht J., Minicucci D., et al. (2004). Enhancing
treatment fidelity in health behavior change studies: Best practices and
recommendations from the NIH Behavior Change Consortium. Health Psychology,
23, 443–451.

Bhimani R. (2016). Prevention of work-related musculoskeletal injuries in
rehabilitation nursing. Rehabilitation Nursing, 41, 326–335.

Bova C., Jaffarian C., Crawford S., Quintos J., Lee M., & Sullivan-Bolyai S. (2017).
Intervention fidelity: Monitoring drift, providing feedback, and assessing the
control condition. Nursing Research, 66, 54–59.

Campbell D. T., & Stanley J. C. (1963). Experimental and quasi-experimental designs for
research. Chicago: Rand McNally.

Conn V. S., Rantz M. J., Wipke-Tevis D. D., & Maas M. L. (2001). Designing effective
nursing interventions. Research in Nursing & Health, 24, 433–442.

* Fehlberg E., Lucero R., Weaver M., McDaniel A., Chandler A., Richey P., … Shorr R.
(2017). Associations between hyponatraemia, volume depletion and the risk of
falls in US hospitalised patients. BMJ Open, 7, e017045.

Lipsey M. W. (1990). Design sensitivity: Statistical power for experimental research.
Newbury Park, CA: Sage.

**Morrison J., Becker H., & Stuifbergen A. (2017). Evaluation of intervention fidelity
in a multisite clinical trial in persons with multiple sclerosis. Journal of Neuroscience
Nursing, 49, 344–348.

Polit D. F., & Beck C. T. (2010). Generalization in qualitative and quantitative
research: Myths and strategies. International Journal of Nursing Studies, 47, 1451–
1458.

Polit D. F., & Gillespie B. (2009). The use of the intention-to-treat principle in nursing
clinical trials. Nursing Research, 58, 391–399.

Polit D. F., & Gillespie B. (2010). Intention-to-treat in randomized controlled trials:
Recommendations for a total trial strategy. Research in Nursing & Health, 33, 355–
368.

* Qin R., Titler M., Shever L., & Kim T. (2008). Estimating effects of nursing
intervention via propensity score analysis. Nursing Research, 57, 444–452.

* Rixon L., Baron J., McGale N., Lorencatto F., Francis J., & Davies A. (2016). Methods
used to address fidelity of receipt in health intervention research: A citation
analysis and systematic review. BMC Health Services Research, 16, 663.

Shadish W. R., Cook T. D., & Campbell D. T. (2002). Experimental and quasi-
experimental designs for generalized causal inference. Boston: Houghton Mifflin Co.

Sikorskii A., Wyatt G., Lehto R., Victorson D., Badger T., & Pace T. (2017). Using
SMART design to improve symptom management among cancer patients: A study
protocol. Research in Nursing & Health, 40, 501–511.

Uhm J., & Kim H. (2019). Impact of the mother-nurse partnership programme on
mother and infant outcomes in paediatric cardiac intensive care unit. Intensive &
Critical Care Nursing, 50, 79–87.

Zhang F., Yang Y., Bai T., Sun L., Sun M., Shi X., … Xia H. (2018). Effect of pumping
pressure on onset of lactation after caesarean section: A randomized controlled
study. Maternal & Child Nutrition, 14(1).

*A link to this open-access journal article is provided in the Toolkit for Chapter 10 in
the Resource Manual.

**This journal article is available for this chapter.


CHAPTER 11

Specific Types of Quantitative Research

All quantitative studies can be categorized as experimental, quasi-
experimental, or nonexperimental in design. This chapter describes
types of research that vary in study purpose rather than research
design. The first two types (clinical trials and evaluations) involve
interventions, but methods for each have evolved separately because
of their disciplinary roots. Clinical trials are associated with health
care and medicine, and evaluation research is associated with the
fields of education, social work, and public policy. There is overlap
in approaches, but to acquaint you with relevant terms, we discuss
each separately. Later sections of this chapter describe comparative
effectiveness research, outcomes research, survey research, and
several other types relevant to nursing.

Clinical Trials
Clinical trials are studies designed to assess clinical interventions.
Many nurse researchers are involved in clinical trials, often as
members of interprofessional teams.

Phases of a Clinical Trial
In medical and pharmaceutical research, clinical trials often adhere
to a planned sequence of studies—often a series of four phases, as
follows:

Phase I occurs after initial development of the drug or therapy and is
designed primarily to establish safety and tolerance and to determine
optimal dose. This phase typically involves small-scale studies using
simple designs, such as a one-group pretest–posttest design. The focus is
on developing the best possible (and safest) treatment.
Phase II involves gathering preliminary evidence about the intervention’s
practicability. During this phase, researchers assess the feasibility of
launching a rigorous test, seek evidence that the treatment holds promise,
and identify refinements to improve the intervention. This phase, a pilot
test of the treatment, may be designed either as a small-scale experiment
or as a quasi-experiment. Pilot tests of interventions are described in
Chapter 29.

Example of an Early Phase Clinical Trial
Heyland and colleagues (2018) described a protocol for a Phase II trial of
two alternative approaches to partnering with family members in the
care of critically ill long-stay ICU patients. A total of 150 families were
randomly assigned to a control group or to one of the two approaches of
supporting families in shared decision- making (50 per group).

Phase III is a full test of the intervention—a randomized controlled trial
(RCT) with randomization to treatment groups under controlled
conditions. The goal of this phase is to develop evidence about treatment
efficacy—i.e., whether the treatment is more efficacious than usual care (or
an alternative counterfactual). Adverse effects are also monitored. Phase
III RCTs often involve a fairly large sample of participants, sometimes
selected from multiple sites to ensure that findings are not unique to a
single setting. Phase III (and Phase IV) efforts may also examine the cost-
effectiveness of the intervention.

Example of a Multisite Phase III RCT
Watson and colleagues (2018) undertook a Phase III cluster RCT to assess
the postdischarge outcomes (functional status and quality of life) of
children hospitalized in 31 medical centers. A total of 1,360 ventilated
children were randomly assigned to either a nurse-implemented goal-
directed sedation protocol or to usual care.

Phase IV trials are studies of the effectiveness of an intervention in a general
population. The emphasis is on the external validity of an intervention
that has shown promise of efficacy under controlled (but often artificial)
conditions.

TIP Researchers should record their trials in a clinical trials
registry. These registries provide transparency about research
and offer information for accessing the trial. Most registries are
searchable online (e.g., by disease, location of the trial). The
largest registry is ClinicalTrials.gov; another important registry
is the International Clinical Trials Registry of the World Health
Organization. Some journals refuse to publish reports of trials
unless they have been registered. Protocols for clinical trials are
often registered before the study gets underway.

Superiority, Noninferiority, and Equivalence Trials
The vast majority of RCTs are superiority trials, in which researchers
hypothesize that the intervention is “superior” to (more effective
than) the control condition. Standard statistical analysis does not
permit a straightforward testing of the null hypothesis, i.e., the
hypothesis that the effects of two treatments are comparable. Yet,
there are circumstances in which it is desirable to test whether a new
(and perhaps less costly or less painful) intervention results in
similar outcomes to a standard intervention. In a noninferiority
trial, the goal is to assess whether a new intervention is no worse
than a reference treatment (typically, the standard of care). Other
trials are called equivalence trials, in which the goal is to test the
hypothesis that the outcomes from two interventions are equal. In a
noninferiority trial, it is necessary to specify in advance the smallest
margin of inferiority on a primary outcome (e.g., 1%) that would be
tolerated to accept the hypothesis of noninferiority. In equivalence
trials, a tolerance must be established for the nonsuperiority of one
treatment over the other, and the statistical test is two-sided—
meaning that equivalence is accepted if the two treatments do not differ (in
either direction) by more than the specified tolerance. Both
noninferiority and equivalence trials require statistical sophistication
and very large samples to ensure statistical conclusion validity.
Further information is provided by Christensen (2007) and Piaggio
et al. (2012).
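To illustrate how a prespecified margin enters the analysis, the following
sketch (invented numbers, normal-approximation confidence interval, not
drawn from any cited trial) checks noninferiority for a difference in
success proportions:

# Hypothetical noninferiority check using a 95% CI for the difference
# in success proportions (new minus standard); all numbers invented.
import math

successes_new, n_new = 870, 1000       # new (e.g., less costly) treatment
successes_std, n_std = 880, 1000       # standard of care
margin = -0.05                         # prespecified noninferiority margin

p_new, p_std = successes_new / n_new, successes_std / n_std
diff = p_new - p_std
se = math.sqrt(p_new * (1 - p_new) / n_new + p_std * (1 - p_std) / n_std)
lower = diff - 1.96 * se               # lower bound of two-sided 95% CI

# Noninferiority is concluded only if even the lower confidence bound
# for the difference stays above the prespecified margin.
print(f"diff={diff:.3f}, 95% lower bound={lower:.3f}, margin={margin}")
print("Noninferior" if lower > margin else "Noninferiority not demonstrated")

Note how the large sample (1,000 per arm) narrows the confidence
interval enough to clear the margin, consistent with the point above that
these trials require very large samples.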

Example of an Equivalence Trial
Makenzius and colleagues (2017) conducted an equivalence
trial to test whether women with incomplete abortion seeking
postabortion care in a low-resource area in Kenya had the same
outcomes when misoprostol was administered by midwives
compared with administration by physicians. A total of 810
women were randomized. The results indicated that treatment
by midwives was equally effective, safe, and accepted by the
patients.

TIP In a traditional Phase III trial, it may take months to recruit
and randomize a sufficiently large sample and years to draw
conclusions about efficacy (i.e., after all data have been
collected and analyzed). In a sequential clinical trial,
experimental data are continuously analyzed as they become
available, and the trial can be stopped when the evidence is
strong enough to support a conclusion about the intervention’s
efficacy. More information about sequential trials is provided
by Bartroff et al. (2013).

Pragmatic Clinical Trials
One problem with traditional Phase III RCTs is that, in efforts to
enhance internal validity in support of a causal inference, the
designs are so tightly controlled that their relevance to real- life
applications can be questioned. Concern about this situation has led
to a call for pragmatic (or practical) clinical trials, in which
researchers strive to maximize external validity with minimal
negative effect on internal validity (Glasgow et al., 2005). Pragmatic
clinical trials address practical questions about the benefits and risks
of an intervention as they would unfold in routine clinical practice.
We elaborate on pragmatic clinical trials in Chapter 31.

Evaluation Research
Evaluation research focuses on developing information needed by
decision-makers about whether to adopt, modify, or abandon a
program, practice, procedure, or policy. Patton (2015) distinguishes
research and evaluation, stating that “research has as its primary
purpose contributing to knowledge, and evaluation has as its
primary purpose informing action” (p. 86). However, evaluations
often generate knowledge that can be used in other settings.
Concepts from evaluation research are embedded in many efforts to
test health care interventions.
Evaluations often try to answer broader questions than whether a
program is effective—for example, they may involve efforts to
improve the program or to learn how the program actually “works”
in practice. Evaluations sometimes address black box questions—that
is, what specifically is it about a multifaceted program that is driving
observed effects? Good resources for learning more about evaluation
research include the books by Patton (2012) and Rossi and colleagues
(2019).

TIP Evaluations can be threatening. Even though the focus of
most evaluations is on a nontangible entity (e.g., a program), it
is people who implement it. People may think that they, or their
work, are being evaluated and may feel that their jobs or
reputation are at stake. Thus, evaluation researchers need to
have more than methodologic skills—they need to be adept in
interpersonal relations.

Evaluation Components
Evaluations may involve several components to answer a range of
questions, as we describe in this section.

Process/Implementation Analyses

A process or implementation analysis provides descriptive
information about the manner in which a program gets implemented
and how it actually functions. A process analysis typically addresses
questions such as the following: Does the program operate the way
its designers intended? How does the program differ from
traditional practices? What were the barriers to its implementation?
What do staff and clients like most/least about the program?
A process analysis may be undertaken with the aim of improving a
program (a formative evaluation). In other situations, the purpose of
the process analysis is primarily to describe a program carefully so
that it can be replicated—or so that people can understand why the
program was or was not effective in meeting its objectives. In either
case, a process analysis involves an in- depth examination of the
operation of a program, often requiring the collection of both
qualitative and quantitative data. Process evaluations sometimes
overlap with efforts to monitor intervention fidelity.

Example of a Process Analysis
Boersma and colleagues (2017) undertook a process analysis
during the implementation of an intervention called the Veder
contact method (which combines elements from psychosocial
interventions with theatrical and poetic communication) in
nursing home care. The process analysis involved group and
individual interviews with multiple stakeholders.

Outcome and Impact Analyses
Evaluations may focus on whether a program or policy is meeting its
objectives. The intent of such evaluations is to help people decide
whether the program should be continued or replicated. Some
evaluation researchers distinguish between an outcome analysis and
an impact analysis. An outcome analysis (or outcome evaluation)
simply documents the extent to which the goals of the program are
attained, that is, the extent to which positive outcomes occur. For
example, a program may be designed to encourage women in a poor
rural community to obtain prenatal care. In an outcome analysis, the
researchers might document the percentage of pregnant women who
had obtained prenatal care, the average month in which prenatal
care was begun, and so on, and perhaps compare this information to
preintervention community data.
An impact analysis assesses a program’s net impacts—impacts that
can be attributed to the program, over and above effects of a
counterfactual (e.g., standard care). Impact analyses use an
experimental or strong quasi-experimental design because their aim
is to facilitate causal inferences about program effects. In our
example, suppose that the program to encourage prenatal care
involved having nurses make home visits to women in rural areas to
explain the benefits of early care. If the visits could be made to
pregnant women randomly assigned to the program, the outcomes
of the group of women receiving the home visits could be compared
with those not receiving them to assess the intervention’s net
impacts—for example, the percentage increase in receipt of prenatal
care among the experimental group relative to the control group.
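To make the counterfactual logic concrete, here is a minimal sketch in
Python, using invented numbers rather than data from any actual
evaluation: the net impact is simply the program-group outcome minus
the control-group outcome.

# Hypothetical net impact calculation (all figures invented for illustration)
treated_rate = 0.68   # assumed proportion obtaining prenatal care, program group
control_rate = 0.50   # assumed proportion, control group (the counterfactual)
net_impact = treated_rate - control_rate
print(f"Estimated net impact: {net_impact * 100:.0f} percentage points")  # 18

Here the program's gross outcome (68%) overstates its effect; the net
impact is the 18-percentage-point difference over and above what the
counterfactual produced.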

Example of an Impact Analysis
Račić and colleagues (2017) tested the impact of
interprofessional diabetes education on the knowledge of
nursing, medical, and dental students. Students were
randomized to interprofessional versus uniprofessional
education. Those in the interprofessional education group had
significantly higher knowledge scores and self-assessments of
teamwork skills.

Cost/Economic Analyses
New programs are often expensive to implement, and existing
programs also may be costly. In our current situation of spiraling
health care costs, evaluations (and clinical trials) may include a cost
analysis (economic analysis) to examine whether program benefits
outweigh the monetary costs. Administrators make decisions about

resource allocations for health services based not only on whether
something “works,” but also on whether it is economically viable.
Cost analyses are typically done in connection with impact analyses
and Phase III clinical trials, that is, alongside rigorous tests of a
program’s or intervention’s efficacy.
Two types of economic analysis are cost–benefit and cost-effectiveness
analyses:

Cost–benefit analysis, in which monetary estimates are established for
both costs and benefits. One difficulty is that the benefits of health services
are often hard to quantify in monetary terms. There is also controversy
about methods of assigning dollar amounts to the value of human life.
Cost-effectiveness analysis, which is used to compare the health outcomes
and resource costs of alternative interventions. Costs are measured in
monetary terms, but outcome effectiveness is not. Such analyses estimate
what it costs to produce impacts on outcomes that cannot easily be
valued in dollars, such as quality of life (a minimal worked sketch of the
underlying arithmetic follows this list). Without information on
monetary benefits, however, such research may face challenges in
persuading decision-makers to make changes.
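As a minimal sketch of the arithmetic behind a cost-effectiveness
comparison, the Python fragment below computes an incremental
cost-effectiveness ratio (ICER), that is, the extra cost per extra unit of
outcome. All figures are invented for illustration, not drawn from any
study.

# Hypothetical ICER calculation (all figures invented for illustration)
cost_new, cost_std = 1200.0, 800.0     # assumed mean cost per patient, in dollars
effect_new, effect_std = 0.70, 0.55    # assumed mean outcome (proportion improved)
icer = (cost_new - cost_std) / (effect_new - effect_std)
print(f"ICER: ${icer:,.0f} per additional patient improved")  # $2,667

A decision-maker can then judge whether roughly $2,667 per additional
patient improved represents acceptable value, even though the outcome
itself is never expressed in dollars.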

Example of a Cost-Effectiveness Analysis
Mervin and colleagues (2018) undertook a cost-effectiveness
analysis of using PARO, a therapeutic robotic seal used to
reduce agitation and medication use among patients with
dementia. The analysis was based on data from a cluster
randomized trial of 28 long-term care facilities in Australia.

Cost–utility analyses are a third type of economic analysis. This
approach is preferred when morbidity and mortality are outcomes
of interest or when quality of life is a major concern. An index called
the quality-adjusted life year (QALY) is an important outcome
indicator in cost–utility analyses. As a measure of disease burden,
the QALY captures both the quality and the quantity of life lived: one
QALY equates to 1 year in perfect health, and a value of zero
corresponds to death. A small worked example of the calculation
appears below.
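The following sketch illustrates how QALYs are computed: each period
of life is weighted by a utility between 0 (death) and 1 (perfect health)
and the products are summed. The health states and durations are
invented for illustration, not taken from any study.

# Hypothetical QALY calculation (health states and durations are invented)
health_states = [
    (2.0, 1.00),  # 2 years in perfect health         -> 2.0 QALYs
    (3.0, 0.70),  # 3 years in moderately poor health -> 2.1 QALYs
    (1.0, 0.40),  # 1 year in poor health             -> 0.4 QALYs
]
qalys = sum(years * utility for years, utility in health_states)
print(f"{qalys:.1f} QALYs")  # 4.5

Thus 6 years of life in varying states of health yield 4.5 QALYs in this
illustration.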

Example of a Cost–Utility Analysis
Heslin and an interprofessional team (2017) conducted a cost–utility
analysis in the context of an RCT that tested an
intervention to improve health and reduce substance use in
people with severe mental illness, compared with standard
care. One outcome in their analysis was QALYs at 12 and
15 months after baseline.

In doing economic analyses, researchers must think about possible
short-term costs (e.g., clients' days of work missed within 6 months)
and long-term costs (e.g., lost years of productive work life). Often
the cost analyst examines economic gains and losses from different
accounting perspectives—for example, for the target group; the
hospitals implementing the program; taxpayers; and society as a
whole. Distinguishing these perspectives is crucial if a
program effect is a loss for one group (e.g., taxpayers) but a gain for
another (e.g., the target group).
Nurse researchers are increasingly becoming involved in cost
analyses—although Cook and colleagues (2017) found
many deficiencies in the quality of economic evaluations in nursing
research in the United States. A useful resource for further guidance
is the internationally acclaimed textbook by Drummond and
colleagues (2015).

TIP Among those planning an evidence-based practice
improvement, the costs of an innovation may be a concern. A
key question might be whether there is the potential for return
on investment (ROI), that is, whether the innovation might
save money (or at least be cost neutral) in the long run, relative
to the time and resources that will be expended to implement it
in routine practice.

Realist Evaluations

Some nurse researchers have begun to undertake realist evaluations,
which constitute a theory-driven approach to evaluating programs—
especially complex programs or interventions. The realist approach
acknowledges that interventions are not always effective for
everyone, because people are diverse and embedded in complicated
social and cultural contexts. In a realist evaluation, consideration is
given to the theoretical mechanisms underlying the effects of an
intervention. The focus is on understanding why certain groups
benefited from an intervention while others did not.
Pawson and Tilley (1997), who are key proponents of realist
evaluation, argued that to be useful to decision-makers, evaluators
need to ask, "What works for whom and under what
circumstances?" rather than simply, "Does this work?" Realist
evaluations are not undertaken with a prescribed set of methods;
decisions about design, data collection, and analysis are guided by
the types of data needed to answer the evaluation questions and test
the initial program theory. Most often, realist evaluations involve the
collection of both quantitative and qualitative data, and qualitative
approaches play an especially important role.

Example of a Realist Evaluation
Kerr and colleagues (2018) used a realist evaluation framework
in their mixed-methods study of a program to facilitate
transition from children's services to adult services for young
adults with life-limiting conditions.

TIP Health technology assessments (HTAs) are systematic
evaluations of the effects of health technologies and
interventions. HTA is a form of health policy research that
examines the health and social consequences of applying
technology. A central goal of such evaluations is to provide
policy-makers with evidence on policy alternatives. Ramacciati
(2013) has written a useful review of health technology
assessment in nursing.

Comparative Effectiveness Research
Comparative effectiveness research (CER) involves direct
comparisons of two or more health interventions. Like realist
approaches, CER seeks insights into which intervention works best,
for which patients. CER has emerged as a major force in health
research; disappointment with some of the methods favored for
evidence-based practice—especially the strong reliance on tightly
controlled RCTs with placebo comparators—has led to the
development of new ideas, new models, and new methods of
research that fall within the umbrella of comparative effectiveness
research.
In the United States, CER gained ground in the early 2000s, and the
impetus crystallized with the publication of a report by the Institute
of Medicine (IOM) in 2009. The IOM, which proposed initial
priorities for comparative effectiveness research, defined CER as
"the generation and synthesis of evidence that compares the benefits
and harms of alternative methods to prevent, diagnose, treat, and
monitor a clinical condition or to improve the delivery of care. The
purpose of CER is to assist consumers, clinicians, purchasers, and
policy makers to make informed decisions that will improve health
care at both the individual and population level" (Chapter 2, p. 41).
Another major stimulus for CER in the United States was the
creation of the independent nonprofit organization called the
Patient-Centered Outcomes Research Institute (PCORI), which was
authorized by the U.S. Congress in 2010. PCORI specifically
sponsors comparative effectiveness research—in fact, CER is
sometimes referred to as patient-centered outcomes research.
PCORI funds research that is designed to help patients select the
health care options that best meet their needs. CER studies often
incorporate outcomes that are especially important to patients and
their caregivers. The standard outcomes used in medical research
(e.g., blood pressure, mortality) are increasingly being supplemented
by outcomes in which patients have a strong interest, such as
functional limitations, quality of life, and experiences with care.
Barksdale and colleagues (2014) have described the relevance of
PCORI to nursing, including funding opportunities.
Designs for CER vary widely. Some studies are RCTs involving a
comparison of two or more active (nonplacebo) treatments. Some
CER projects, however, are observational studies using data from
large databases, such as patient registries. Comparative effectiveness
research is described at greater length in Chapter 31, which focuses
on methods to enhance the applicability of research to individual
patients in real-world clinical settings.

Example of Comparative Effectiveness Research
In 2017, PCORI awarded $14 million to an interprofessional
team led by a nurse researcher (Huong Nguyen) for a 15-site
project called "A non-inferiority comparative effectiveness trial
of home-based palliative care in older adults (HomePal)." The
project, which will compare home-based palliative care delivered
in person or by video consultation, is expected to be completed in
2024 (https://www.pcori.org/research-results/2017/comparing-home-based-palliative-care-person-or-video-consultation).

Health Services and Outcomes Research
Health services research is the broad interdisciplinary field that
studies how organizational structures and processes, social factors,
and personal behaviors affect access to health care, the cost and
quality of health care, and, ultimately, people's health and well-being.
Outcomes research, a subset of health services research, comprises
efforts to understand the end results of the structures and processes
of health care and to assess the effectiveness of health care services.
While evaluation research focuses on a specific program or policy,
outcomes research is a more global assessment of the value of health
care services. In nursing, outcomes research addresses the question,
"What effect does nursing have on patient outcomes?" Outcomes
research seeks evidence about the nursing profession's contribution
to care.
Outcomes research represents a response to the increasing demand
from policy-makers, insurers, and the public to justify care practices
and systems in terms of costs and improved patient outcomes.
Outcomes research reflects a shift toward emphasizing outcome-based
health care (what do health care staff accomplish?) rather than
task-based health care (what do health care staff do for patients?). The
focus of outcomes research in the 1980s and 1990s was
predominantly on patient health status and costs associated with
medical care, but there is growing interest in studying broader
patient outcomes and an awareness that nursing practice can play a
role in quality improvement and health care safety, despite the many
challenges.

TIP Interest in improving care quality and documenting key
health outcomes has led to several initiatives in nursing. For
example, the Quality and Safety Education for Nurses (QSEN)
project is part of the effort to transform the quality of nursing
care by strengthening the competencies of nurses (Sherwood &
Barnsteiner, 2012).

Although many nursing studies examine patient outcomes, specific
efforts to appraise and document the impact of nursing care—as
distinct from the care provided by the overall health care system—
are less common. A major obstacle is attribution—that is, linking
patient outcomes to specific nursing actions, distinct from the actions
of other members of the health care team. Outcomes research has
used a variety of traditional nonexperimental designs and
methodologic strategies (primarily quantitative ones) but is also
developing new methods.

Models of Health Care Quality
In appraising quality in nursing services, various factors need to be
considered. Donabedian (1987), whose pioneering efforts created a
framework for outcomes research, emphasized three factors:
structure, process, and outcomes. The underpinning of this
framework is that good structures support good processes,
which in turn result in desirable patient outcomes. The structure
of care refers to broad organizational features. For example,
structure can be appraised in terms of such attributes as size and
range of services. Processes involve aspects of clinical management,
decision-making, and clinical interventions (e.g., discharge
planning). Outcomes refer to the specific clinical end results of patient
care, such as quality of life and functional status. Mitchell and
co-authors (1998) noted that "the emphasis on evaluating quality of care
has shifted from structures (having the right things) to processes
(doing the right things) to outcomes (having the right things
happen)" (p. 43).
Several modifications to Donabedian's framework for appraising
health care quality have been proposed. One noteworthy framework
is the Quality Health Outcomes Model developed by the American
Academy of Nursing (Mitchell et al., 1998). This model is less linear
and more dynamic than Donabedian's original framework and takes
client and system characteristics into account. The model does not
link actions and processes directly to outcomes; rather, the effects of
actions are seen as mediated by client and system characteristics.
This model and others like it are increasingly used as the conceptual
framework for studies that evaluate quality of care (Baernholdt et al.,
2018; Mitchell & Lang, 2004). Another quality framework has been
developed with specific reference to nursing performance: the
Nursing Care Performance Framework, or NCPF (Dubois et al.,
2013). Outcomes research usually focuses on various linkages within
such models, rather than on testing the overall model.

Structure of Care
Several studies have examined the effect of nursing structures on
various patient outcomes. Numerous indicators of structure of
relevance to nursing care have been identified. For example, nurse
staffing levels, nursing skill mix, nursing staff experience, nursing
care hours per patient, and continuity of nurse staffing are structural
variables that have been found to correlate with patient outcomes.
These structural variables can be reliably measured, and data for
these variables are generally routinely available.
Efforts have been made to measure a more complex structural
variable: nurses' practice environments. The best-known
measure, which has been translated into several languages, is the
Nursing Work Index-Revised (NWI-R; Aiken & Patrician, 2000),
particularly its Practice Environment Scale (Lake, 2002).
Warshawsky and Havens (2011) have documented that use of the
NWI-R has grown across clinical settings and countries.

Example of Research on Structure of Care
Zhu and colleagues (2018) studied how a hospital's Magnet
status and nurse staffing levels related to temporal trends in
the hospital's performance on measures of patients'
experiences with hospital care.

Nursing Processes and Actions
To demonstrate nurses' effects on health outcomes, nurses' clinical
actions and behaviors must be described and documented. Examples
of nursing process variables include nurses' problem-solving;
clinical decision-making; clinical competence; and specific activities
or interventions (e.g., communication, touch, ambulation assistance).
The work that nurses do has been documented in classification
systems and taxonomies. Several research-based classification
systems of nursing interventions have been developed, refined, and
tested. Among the most prominent are the Nursing Diagnoses
Taxonomy of the North American Nursing Diagnosis Association, or
NANDA (NANDA International, 2018), and the Nursing
Intervention Classification, or NIC, developed at the University of
Iowa (Butcher et al., 2018). NIC consists of more than 400
interventions, each associated with a definition and a detailed
set of activities that a nurse undertakes to implement the
intervention.

Patient Risk Adjustment
Patient outcomes vary not only because of the care patients receive, but
also because of differences in their conditions and comorbidities.
Adverse outcomes can occur no matter what nursing intervention is
used. Thus, in evaluating the effects of nursing actions on outcomes,
there needs to be some way of taking into account patients' risks for
poor outcomes, or the mix of risks in a caseload.
Risk adjustments have been used in many nursing outcomes studies.
These studies typically adopt global measures of patient risk or
patient acuity, such as the Acute Physiology and Chronic Health
Evaluation (APACHE I, II, III, or IV) system for critical care
environments. Wheeler (2009) has discussed the pros and cons of the
different versions of the system. A sketch of how such an adjustment
might be set up statistically appears below.
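One common way to implement risk adjustment is to include a severity
score as a covariate in a regression model, as in the Python sketch
below. It is illustrative only: the data file and variable names
(icu_patients.csv, died, high_workload, apache_iii) are hypothetical,
not from any published study.

# Sketch of patient risk adjustment via logistic regression (hypothetical data)
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("icu_patients.csv")  # one row per patient (hypothetical file)

# Including the APACHE III score as a covariate means the estimated
# association between workload and mortality is adjusted for illness severity.
model = smf.logit("died ~ high_workload + apache_iii", data=df).fit()
print(model.summary())  # the high_workload coefficient is the adjusted effect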

Example of Outcomes Research With Risk Adjustment

Lee and colleagues (2017) studied the relationship between
nurse workload/staffing ratios (structural indicators) and
hospital survival (the outcome) in critically ill patients. The
analysis used APACHE III to adjust for patients' severity of
illness. The researchers found that patients exposed to high
workload-to-nurse ratios for 1 day or more had lower odds of
survival.

Nursing-Sensitive Outcomes
Understanding the link between patient outcomes and nursing
actions is critical in making improvements to nursing quality.
Outcomes of relevance to nursing can be defined in terms of physical
or physiologic function (e.g., heart rate, blood pressure),
psychological function (e.g., comfort, satisfaction with care), or
health behaviors (e.g., self-care, exercise). Outcomes may be either
temporary (e.g., postoperative body temperature) or longer term
(e.g., return to employment). Furthermore, outcomes may be the end
results to individual patients receiving care or to broader units such
as a family or a community.
Nursing-sensitive outcomes are patient outcomes that improve if
there is greater quantity or quality of nurses' care (Burston et al.,
2013; Doran, 2011). Examples include pressure ulcers, falls, and
intravenous infiltrations. Several nursing-sensitive outcome
classification systems have been developed. The American Nurses
Association has developed a database of such outcomes, the
National Database of Nursing Quality Indicators, or NDNQI
(Montalvo, 2007). Also, the Nursing-Sensitive Outcomes
Classification (NOC) has been developed by nurses at the University
of Iowa College of Nursing to complement the Nursing Intervention
Classification (Moorhead et al., 2018).

Example of Outcomes Research With Nursing-Sensitive Outcomes

Backhaus and colleagues (2017) studied the relationship
between nurse staffing (the presence of nurses with a
baccalaureate education) and outcomes such as patient falls
and pressure ulcer incidence among residents of Dutch
long-term care facilities.

Challenges in Outcomes Research
The nursing profession faces several challenges in efforts to
document the effects of nursing practice on patient outcomes. As
noted by Jones (2016), “empirical evidence to support the unique
contribution to quality outcomes is currently lacking” (p. 1). One
challenge is that nursing care is more difficult to conceptualize and
measure than medical actions. Nursing interventions are often more
diffuse than medical interventions—for example, nursing
surveillance does not involve a single discrete act, or even a single
nurse.
Perhaps for this reason, nursing-sensitive indicators have tended not
to be endorsed by bodies that legislate and make policy relating to
health care quality. For example, in the United States, consensus
standards for measures of quality need the endorsement of the
National Quality Forum (NQF). As of this writing, the NQF has
endorsed only 15 new nursing-sensitive indicators out of the 150
potential measures that were submitted to the NQF for review, and
it has not endorsed any such indicators since 2004. Examples of
NQF-approved nursing outcome indicators include falls prevalence,
pressure ulcer prevalence, and restraint prevalence (NQF, 2004).
Further research documenting the link between nursing actions and
patient outcomes may eventually lead to a greater appreciation of
nursing's important role in improving health outcomes.
Another challenge is developing and validating nursing-sensitive
process variables (Heslop & Lu, 2014; Jones, 2016). Efforts are needed
to identify and measure the active ingredients of nursing care. The
National Quality Forum has endorsed only three nursing-sensitive
process indicators—all of them relating to smoking cessation
counseling for three different disease populations. Clearly, the full
scope of nursing practice is not captured in these three NQF
indicators.
Dubois and colleagues (2013) have urged the nursing profession to
develop better conceptualizations of nursing care performance.
Dubois and others (2017) have identified a set of indicators
that "have sufficient breadth and depth to capture the whole
spectrum of nursing care" (p. 3154), and they envision their effort as
setting the stage for new initiatives in operationalizing nursing care
performance.
A final challenge is the difficulty of ensuring full documentation of
nursing actions. Reliable nurse process measures that can be assessed
for their impact on patient outcomes require comprehensive
documentation. The documentation burden for nurses is traditionally
high, and the introduction of electronic health records does not
necessarily decrease that burden or produce more comprehensive
documentation (Bilyeu & Eastes, 2013; Cutugno et al., 2015).

Survey Research
A survey is designed to obtain information about the prevalence,
distribution, and interrelations of phenomena within a population.
Political opinion polls are examples of surveys. When a survey
involves a sample, as is usually the case, it may be called a sample
survey (as opposed to a census, which covers an entire population).
Survey research relies on participants' self-reports—participants
respond to a series of questions posed by investigators. Surveys,
which yield quantitative data primarily, may be cross-sectional or
longitudinal (e.g., panel studies). Surveys are especially appropriate
for answering Description questions, but longitudinal surveys are
also used to address Etiology and Prognosis questions. The quality
of evidence from surveys for descriptive and correlational purposes
is highly dependent on the quality of the sample used (Chapter 13)
and the quality of the data collected (Chapter 15).
Survey research is flexible: it can be applied to many populations; it
can focus on a wide range of topics; and its information can be used
for many purposes. Information obtained in most surveys, however,
tends to be relatively superficial: surveys rarely probe deeply into
human complexities.
Any information that can reliably be obtained by direct questioning
can be gathered in a survey, although surveys include mostly
questions that require brief responses (e.g., yes/no,
always/sometimes/never). Surveys often focus on what people do:
what they eat, how they care for their health, and so forth. In some
instances, the emphasis is on what people plan to do—for example,
health screenings they plan to have done—or what they have done
in the past.
Survey data can be collected in various ways. The most respected
method is the personal interview (or face-to-face interview), in
which interviewers meet in person with respondents. Personal
interviews tend to be costly because they involve a lot of personnel
time. Nevertheless, personal interviews are regarded as the best
method of collecting survey data because of the quality of
information they yield and because refusal rates tend to be low.

Example of a Survey With Personal Interviews
Mutiso and colleagues (2018) conducted a community
household survey to investigate patterns of mental illness and
stigma in two settings (an urban slum and a rural community)
in Kenya. Household members from the selected communities
were sampled and completed an in-person interview that
included a measure of neuropsychiatric status.

Telephone interviews are less costly than in-person interviews, but
respondents may be uncooperative (or difficult to reach) on the
telephone. Telephoning can be an acceptable method of collecting
data if the interview is short, specific, and not too personal, or if
researchers have had prior personal contact with respondents. For
example, some researchers conduct in-person interviews in clinical
settings at baseline and then conduct follow-up interviews by
telephone. Telephone interviews may be difficult for certain groups
of respondents, including the elderly, who may have hearing
problems.
Questionnaires, unlike interviews, are self-administered.
Respondents read the questions and then give their answers in
writing. Respondents differ in their reading levels and in their ability
to communicate in writing, so care must be taken in a questionnaire
to word questions clearly and simply. Questionnaires are economical
but are not appropriate for surveying certain populations (e.g.,
children). In survey research, questionnaires can be distributed in
person in clinical settings or through the mail (sometimes called a
postal survey), but they are increasingly being distributed over the
Internet. Further guidance on mailed and web-based surveys is
provided in Chapter 14.

Example of a Mailed Survey

Miyashita and colleagues (2018) mailed questionnaires to
caregivers of family members who had died in palliative care
units and home hospices in Japan. The postbereavement
questionnaires included questions about the perceived benefits
and stresses of participating in the survey.

Survey researchers are using new technologies to assist in data
collection. Most major telephone surveys now use computer-assisted
telephone interviewing (CATI), and some in-person surveys use
computer-assisted personal interviewing (CAPI) with laptop
computers. Both procedures involve computer programs that present
interviewers with the questions to be asked on the monitor;
interviewers then enter coded responses directly onto a computer file.
CATI and CAPI surveys, although costly, greatly facilitate data
collection and improve data quality because there is less opportunity
for interviewer error.
Audio-CASI, or ACASI (audio computer-assisted self-interview),
technology is an approach for giving respondents more privacy than
is possible in an interview (e.g., when asking about drug abuse) and is
useful for populations with literacy problems (Brown et al., 2013;
Jones, 2003). With audio-CASI, respondents sit at a computer and
listen to questions over headphones. Respondents enter their
responses directly on the keyboard, without the interviewer seeing
the responses. This approach is also being extended to surveys with
tablets and smartphones.

Example of Audio-CASI
Lor and Bowers (2017) tested the feasibility of a culturally and
linguistically adapted audio-CASI with helper assistance for
collecting health data from Hmong older adults. Participants
found the interface user-friendly but confirmed that a
helper was necessary during the survey process.

There are many excellent resources for learning more about survey
research, including the classic books by Fowler (2014) and Dillman
and colleagues (2014).

Other Types of Research
The majority of quantitative studies that nurse researchers have
conducted are of the types described thus far in this chapter or in
Chapter 9, but nurse researchers have pursued other specific types of
research. In this section, we provide a brief description of some of
them. The Supplement for this chapter provides more details about
each type.

Translational research. Translational research (sometimes called
translation science) is an interdisciplinary field that involves systematic
efforts to convert basic research knowledge into practical applications to
enhance human well-being.
Implementation research. The goal of implementation research is to
solve problems in the implementation of health care improvements, such
as new programs, policies, or practices.
Secondary analysis. Secondary analyses involve the use of data from a
previous study (or from large databases) to test hypotheses or answer
questions that were not initially envisioned. Secondary analyses often are
based on quantitative data from a large data set (e.g., from national
surveys), but secondary analyses of data from qualitative studies have
also been undertaken (Beck, 2019). Several websites for locating publicly
available datasets are provided in the Toolkit of the accompanying
Resource Manual.
Needs assessments. Researchers conduct needs assessments to
understand the needs of a group, community, or organization. The aim of
such studies is to assess the need for special services or to see if standard
services are meeting the needs of intended beneficiaries.
Delphi surveys. Delphi surveys were developed as a tool for short-term
forecasting. The technique involves a panel of experts who are asked to
complete several rounds of questionnaires focusing on their judgments
about a topic of interest. Multiple iterations are used to achieve
consensus.
Replication studies. Researchers sometimes undertake a replication
study, which is an explicit attempt to see if findings obtained in one
study can be duplicated in another setting.
Methodologic studies. Nurse researchers have undertaken many
methodologic studies, which are aimed at gathering evidence about
strategies for conducting high-quality, rigorous research.

Critical Appraisal of Studies Described in This
Chapter
It is difficult to provide guidance on critically appraising the types of
studies described in this chapter because they are so varied and
because many of the fundamental methodologic issues requiring
appraisal concern the overall design. Guidelines for appraising
design-related issues were presented in the previous two chapters.
Box 11.1 offers a few specific questions for appraising the kinds of
research included in this chapter. Separate guidelines for appraising
economic evaluations, which are technically complex, are offered in
the Toolkit section of the accompanying Resource Manual.

Box 11.1 Some Guidelines for Critically Appraising Studies
Described in Chapter 11

1. Does the study purpose match the study design? Was the best possible
design used to address the study purpose?

2. If the study was a clinical trial, was adequate attention paid to developing
a strong, carefully conceived intervention? Was the intervention
adequately pilot tested?

3. If the study was a clinical trial or evaluation, was there an effort to
understand how the intervention was implemented (i.e., a process-type
analysis)? Were the financial costs and benefits assessed? If not, should
they have been?

4. If the study was an evaluation, to what extent do the study results serve
the practical information needs of key decision-makers or intended
users?

5. If the study was outcomes research, were nursing-sensitive indicators
used? Were the hypothesized linkages (e.g., between nursing structures
and outcomes or nursing processes and outcomes) cogent in terms of the
potential to illuminate nursing's unique contribution to care?

6. If the study was a survey, was the most appropriate method used to
collect the data (i.e., in-person interviews, telephone interviews, or mail
or Internet questionnaires)?

Research Example
This section describes a set of related studies that stemmed from a
clinical trial.

Background: Dr. Claire Rickard has undertaken a series of studies in
Australia relating to the replacement of peripheral intravenous catheters.
The main study, which built on results from smaller clinical trials
(Rickard et al., 2010; Van Donk et al., 2009), was a large, multisite RCT
that included a cost-effectiveness analysis. The study also required some
methodologic work, and data from the parent study have been used in
secondary analyses.
Phase III randomized equivalence trial: Rickard and colleagues (2012)
hypothesized that patients whose intravenous catheters were replaced when
clinically indicated would have equivalent rates of phlebitis and
complications (e.g., bloodstream infections), but fewer catheter
insertions, compared with patients whose catheters were removed
according to the standard guideline of every 3 days. Adults with expected
catheter use of more than 4 days were recruited into the trial. A sample of
3,283 adults from three hospitals was randomized to clinically indicated
catheter replacement or to routine replacement every third day. The
equivalence margin was set at 3%. Consistent with the hypothesis of
equivalence, phlebitis occurred in 7% of the patients in both groups. No
serious adverse events relating to the two insertion protocols were
observed. (The confidence-interval logic behind an equivalence margin is
sketched below.)
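As a minimal sketch of how an equivalence margin operates, the Python
fragment below builds a 95% confidence interval for the difference in
phlebitis rates and checks whether it lies entirely within ±3 percentage
points. The group sizes are assumed for illustration; the trial's actual
analysis may have differed.

# Confidence-interval logic behind a 3% equivalence margin (illustrative)
import math

p1, n1 = 0.07, 1640   # clinically indicated group: rate and assumed size
p2, n2 = 0.07, 1643   # routine replacement group: rate and assumed size
margin = 0.03         # prespecified equivalence margin (3 percentage points)

diff = p1 - p2
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
lo, hi = diff - 1.96 * se, diff + 1.96 * se

# Equivalence is concluded only if the entire interval falls inside the margin
print(f"95% CI: ({lo:+.3f}, {hi:+.3f}); equivalent: {-margin < lo and hi < margin}")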
Cost-effectiveness study: A cost-effectiveness study was also undertaken
in connection with the RCT (Tuffaha et al., 2014). The team collected data
on resource use and associated costs. Patients in the "clinically indicated"
group used significantly fewer catheters. The mean dwell time for
catheters in situ on Day 3 was 99 hours when catheters were replaced as
clinically indicated, compared with 70 hours when they were routinely
replaced. The analysis concluded that the incremental net monetary
benefit of clinically indicated replacement was approximately $8 per
patient; the sketch below shows how such a figure is computed.
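An incremental net monetary benefit converts an effect difference into
dollars by multiplying it by a willingness-to-pay threshold and
subtracting the incremental cost. The sketch below illustrates the
formula with invented numbers; it does not reproduce the Tuffaha et al.
(2014) analysis.

# Hypothetical incremental net monetary benefit (NMB) calculation
wtp = 100.0          # assumed willingness to pay per catheter insertion avoided
delta_effect = 0.20  # assumed insertions avoided per patient
delta_cost = 12.0    # assumed extra cost per patient of the new policy

nmb = wtp * delta_effect - delta_cost
print(f"Incremental net monetary benefit: ${nmb:.2f} per patient")  # $8.00

A positive NMB indicates that, at the assumed threshold, the benefits of
the policy outweigh its additional costs.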
Methodologic substudy: As described in a review paper (Ray-Barruel,
Polit, Murfield, & Rickard, 2014), Rickard and her team developed and
tested a new method for reliably measuring the incidence of phlebitis in
the RCT.
Secondary analyses: Wallis and a team of colleagues (2014) used data
from the trial in a secondary analysis. Data from all 3,283 patients were
used to explore risk factors for peripheral intravenous catheter (PIVC)
failure. The researchers found that some of the factors that predicted
phlebitis were modifiable (e.g., large-diameter PIVC, ward insertion
versus insertion by operating room staff), but others were not (e.g.,
women were at higher risk). In a separate secondary analysis of the trial
data, Webster and colleagues (2015) studied risk factors for postinfusion
phlebitis.

Summary Points

Clinical trials to assess the effectiveness of clinical interventions can
unfold in a series of phases. Features of the intervention are finalized in
Phase I. Phase II involves seeking opportunities for refinements and
preliminary evidence of feasibility and efficacy. Phase III is a full
experimental test of treatment efficacy. In Phase IV, researchers focus
primarily on generalized effectiveness.
Most trials are superiority trials, in which researchers hypothesize that
an intervention will result in better outcomes than the counterfactual. In a
noninferiority trial, the goal is to test whether a new intervention is no
worse than a reference treatment. In equivalence trials, the goal is to test
the hypothesis that the outcomes of two treatments are equal, within a
specified level of tolerance.
Evaluation research assesses the effectiveness of a program, policy, or
procedure and often involves several components. Process or
implementation analyses describe the process by which a program gets
implemented and how it functions in practice. Outcome analyses
describe the status of outcomes after the introduction of a program.
Impact analyses test whether a program caused net impacts on key
outcomes, relative to a counterfactual. Cost (economic) analyses assess
whether the monetary costs of a program are outweighed by its benefits
and include cost–benefit analyses, cost-effectiveness analyses, and
cost–utility analyses. Realist evaluations constitute a theory-driven
approach to evaluating programs; the theoretical mechanisms underlying
the effects of an intervention are a key concern.
Comparative effectiveness research (CER) involves direct comparisons
of clinical and public health interventions to gain insights into which
work best for which patients—as well as which have greater risks of
harm. The Patient-Centered Outcomes Research Institute (PCORI) is a
major funder of CER.
Outcomes research (a subset of health services research) examines the
quality and effectiveness of health care and nursing services. Models of
health care and nursing quality typically encompass several broad
concepts, including structure (factors such as nursing skill mix); process
(e.g., nursing actions); client risk factors (e.g., illness severity,
comorbidities); and outcomes. In nursing, researchers often focus on the
effects of nursing structures and processes on nursing-sensitive
outcomes—patient outcomes that benefit from a greater quantity or
quality of nursing care (e.g., patient falls, pressure ulcers).
Survey research involves gathering data about people's characteristics,
behaviors, and intentions by asking them questions. One survey method
is the personal interview, in which interviewers meet respondents
face-to-face and question them. Telephone interviews are less costly but
are inadvisable if the interview is long or if questions are sensitive.
Questionnaires are self-administered (i.e., questions are read by
respondents, who then give written responses) and are usually
distributed by mail or over the Internet.
Other specific types of research include the following: translational
research (which involves systematic efforts to convert basic research
knowledge into practical applications); implementation research (in
which researchers seek methods to improve the implementation of
innovative programs, policies, or interventions); secondary analysis (in
which researchers analyze previously collected data); needs assessments
(which are designed to understand and document the needs of a group or
community); Delphi surveys (which involve several rounds of
questioning with an expert panel to achieve consensus); replication
studies (which duplicate prior studies to test whether results can be
repeated); and methodologic studies (in which the focus is on developing
and testing methodologic tools or strategies).

Study Activities
Study activities are available to instructors.

References Cited in Chapter 11
Aiken L., & Patrician P. (2000). Measuring organizational traits of hospitals:

The revised nursing work index. Nursing Research, 49, 146–153.
* Backhaus R., van Rossum E., Verbeek H., Halfens R., Tan F., Capezuti E., &
Hamers J. (2017). Relationship between the presence of
baccalaureate-educated RNs and quality of care: A cross-sectional study in
Dutch long-term care facilities. BMC Health Services Research, 17, 53.

Baernholdt M., Dunton N., Hughes R., Stone P., & White K. (2018). Quality
measures: A stakeholder analysis. Journal of Nursing Care Quality, 33, 149–
156.

* Barksdale D., Newhouse R., & Miller J. (2014). The Patient-Centered
Outcomes Research Institute (PCORI): Information for academic nursing.
Nursing Outlook, 62, 192–200.

Bartroff J., Lai T. L., & Shih M. (2013). Sequential experimentation in clinical trials.
New York: Springer.

Beck C. T. (2019). Secondary qualitative data analysis in the health and social
sciences. New York: Routledge.

Bilyeu P., & Eastes L. (2013). Use of the electronic medical record for trauma
resuscitations: How does this impact documentation completeness? Journal
of Trauma Nursing, 20, 166–168.

Boersma P., van Weert J., van Meijel B., & Droes R. (2017). Implementation of
the Veder contact method in daily nursing home care for people with
dementia: A process analysis according to the RE-AIM framework. Journal of
Clinical Nursing, 26, 436–455.

* Brown J., Swartzendruber A., & DiClemente R. (2013). Application of audio
computer-assisted self-interviews to collect self-reported health data: An
overview. Caries Research, 47, S40–S45.

Burston S., Chaboyer W., & Gillespie B. (2013). Nurse-sensitive indicators
suitable to reflect nursing care quality: A review and discussion of the
issues. Journal of Clinical Nursing, 23, 1785–1793.

Butcher H., Bulechek G., Dochterman J. M., & Wagner C. (2018). Nursing
interventions classification (NIC) (7th ed.). St. Louis: Elsevier.

Christensen E. (2007). Methodology of superiority vs. equivalence trials and
non-inferiority trials. Journal of Hepatology, 46, 947–954.

Cook W., Morrison M., Eaton L., Theodore B., & Doorenbos A. (2017).
Quantity and quality of economic evaluations in U.S. nursing research,
1997–2015: A systematic review. Nursing Research, 66, 28–39.

Cutugno C., Hozak M., Fitzsimmons D., & Ertogan H. (2015). Documentation
of preventive nursing measures in the elderly trauma patient: Potential
financial impact and the health record. Nursing Economic$, 33, 219–226.

Dillman D. A., Smyth J., & Christian L. (2014). Internet, phone, mail, and
mixed-mode surveys: The tailored design method (4th ed.). New York: John Wiley.

Donabedian A. (1987). Some basic issues in evaluating the quality of health
care. In Rinke L. T. (Ed.), Outcome measures in home care (Vol. 1, pp. 3–28).
New York: National League for Nursing.

Doran D. (Ed.). (2011). Nursing outcomes: State of the science (2nd ed.).
Sudbury, MA: Jones & Bartlett.

Drummond M., Sculpher M., Claxton K., Stoddart G., & Torrance G. (2015).
Methods for the economic evaluation of health care programs (4th ed.). Oxford:
Oxford Medical Publications.

Dubois C., D'Amour D., Brault I., Dallaire C., Dery J., Duhoux A., … Zufferey
A. (2017). Which priority indicators to use to evaluate nursing care
performance? A discussion paper. Journal of Advanced Nursing, 73,
3154–3167.

* Dubois C., D'Amour D., Pomey M., Girard F., & Brault I. (2013).
Conceptualizing performance of nursing care as a prerequisite for better
measurement: A systematic and interpretive review. BMC Nursing, 12, 7.

Fowler F. J. (2014). Survey research methods (5th ed.). Thousand Oaks, CA: Sage.
Glasgow R. E., Magid D., Beck A., Ritzwoller D., & Estabrooks P. (2005).
Practical clinical trials for translating research to practice: Design and
measurement recommendations. Medical Care, 43, 551–557.

* Heslin M., Patel A., Stahl D., Gardner-Sood P., Mushore M., Smith S., …
Gaughran F. (2017). Randomised controlled trial to improve health and
reduce substance use in established psychosis (IMPaCT): Cost-effectiveness
of integrated psychosocial health promotion. BMC Psychiatry, 17, 407.

* Heslop L., & Lu S. (2014). Nursing-sensitive indicators: A concept analysis.
Journal of Advanced Nursing, 70, 2469–2482.

* Heyland D., Davidson J., Skrobik Y., des Ordons A., Van Scoy L., Day A., …
Marshall A. (2018). Improving partnerships with family members of ICU
patients: Study protocol for a randomized controlled trial. Trials, 19, 3.

* Institute of Medicine of the National Academies (2009). Initial priorities for
comparative effectiveness research. Washington, DC: IOM.

Jones R. (2003). Survey data collection using audio computer-assisted
self-interview. Western Journal of Nursing Research, 25, 349–358.

* Jones T. (2016). Outcome measurement in nursing: Imperatives, ideals,
history, and challenges. The Online Journal of Issues in Nursing, 21, 2.

Kerr H., Price J., Nicholl H., & O'Halloran P. (2018). Facilitating transition
from children's to adult services for young adults with life-limiting
conditions (TASYL): Programme theory developed from a mixed methods
realist evaluation. International Journal of Nursing Studies, 86, 125–138.

Lake E. T. (2002). Development of the practice environment scale of the
Nursing Work Index. Research in Nursing & Health, 25, 176–188.

* Lee A., Cheung Y., Joynt G., Leung C., Wong W., & Gomersall C. (2017). Are
high nurse workload/staffing ratios associated with decreased survival in
critically ill patients? Annals of Intensive Care, 7, 46.

Lor M., & Bowers B. (2017). Feasibility of audio computer-assisted
self-interviewing with color coding and helper assistance (ACASI-H) for
Hmong older adults. Research in Nursing & Health, 40, 360–371.

* Makenzius M., Oguttu M., Klingberg-Allvin M., Gemzell-Danielsson K.,
Odero T., & Faxelid E. (2017). Post-abortion care with misoprostol—equally
effective, safe and accepted when administered by midwives compared to
physicians: A randomised controlled equivalence trial in a low-resource
setting in Kenya. BMJ Open, 7, e016157.

Mervin M., Moyle W., Jones C., Murfield J., Draper B., Beattie E., … Thalib L.
(2018). The cost-effectiveness of using PARO, a therapeutic robotic seal, to
reduce agitation and medication use in dementia. Journal of the American
Medical Directors Association, 19, 619–622.

Mitchell P., Ferketich S., & Jennings B. (1998). Quality health outcomes model.
Image: The Journal of Nursing Scholarship, 30, 43–46.

Mitchell P., & Lang N. (2004). Framing the problem of measuring and
improving healthcare quality: Has the Quality Health Outcomes Model
been useful? Medical Care, 42, II4–11.

Miyashita M., Aoyama M., Yoshida S., Yamada Y., Abe M., Yahagihara K., …
Nakahata M. (2018). The distress and benefit to bereaved family members of
participating in a post-bereavement survey. Japanese Journal of Clinical
Oncology, 48, 135–143.

* Montalvo I. (2007). The National Database of Nursing Quality Indicators®
(NDNQI®). The Online Journal of Issues in Nursing, 12 (3).

Moorhead S., Johnson M., Maas M., & Swanson E. (2018). Nursing Outcomes
Classification (NOC): Measurement of health outcomes (6th ed.). St. Louis:
Elsevier.

Mutiso V., Musyimi C., Tomita A., Loeffen L., Burns J., & Ndetei D. (2018).
Epidemiological patterns of mental disorders and stigma in a community
household survey in urban slum and rural settings in Kenya. International
Journal of Social Psychiatry, 64, 120–129.

NANDA International (2018). NANDA International nursing diagnoses:
Definitions and classification, 2018–2020 (11th ed.). Oxford:
Wiley-Blackwell.

* National Quality Forum (2004). National voluntary consensus standards for
nursing-sensitive care: An initial performance set. A consensus report.
Washington, DC: National Quality Forum.

Patton M. Q. (2012). Essentials of utilization-focused evaluation. Thousand Oaks,
CA: Sage.

Patton M. Q. (2015). Qualitative research and evaluation methods (4th ed.).
Thousand Oaks, CA: Sage.

Pawson R., & Tilley N. (1997). Realistic evaluation. London: Sage.
Piaggio G., Elbourne D., Pocock S., Evans S., & Altman D. (2012). Reporting of
noninferiority and equivalence randomized trials: Extension of the
CONSORT 2010 statement. Journal of the American Medical Association, 308,
2594–2604.

Račić M., Joksimović B., Cicmil S., Kusmuk S., Ivković N., Hadzivuković N., …
Dubravac M. (2017). The effects of interprofessional diabetes education on
the knowledge of medical, dentistry, and nursing students. Acta Medica
Academica, 46, 145–154.

Ramacciati N. (2013). Health technology assessment in nursing: A literature
review. International Nursing Review, 60, 23–30.

Ray-Barruel G., Polit D., Murfield J., & Rickard C. M. (2014). Infusion phlebitis
assessment measures: A systematic review. Journal of Evaluation in Clinical
Practice, 20, 191–202.

* Rickard C. M., McCann D., Munnings J., & McGrail M. (2010). Routine resite
of peripheral intravenous devices every 3 days did not reduce
complications compared with clinically indicated resite. BMC Medicine, 8, 53.

Rickard C. M., Webster J., Wallis M., Marsh N., McGrail M., French V., …
Whitby M. (2012). Routine versus clinically indicated replacement of
peripheral intravenous catheters: A randomised controlled equivalence trial.
The Lancet, 380, 1066–1074.

Rossi P., Lipsey M., & Henry G. (2019). Evaluation: A systematic approach (8th
ed.). Thousand Oaks, CA: Sage.

Sherwood G., & Barnsteiner J. (2012). Quality and safety in nursing: A
competency approach to improving outcomes. Ames, IA: Wiley-Blackwell.

Tuffaha H. W., Rickard C. M., Webster J., Marsh N., Gordon L., Wallis M., &
Scuffham P. (2014). Cost-effectiveness of clinically indicated versus routine
replacement of peripheral intravenous catheters. Applied Health Economics
and Health Policy, 12, 51–58.

Van Donk P., Rickard C. M., McGrail M., & Doolan G. (2009). Routine
replacement versus clinical monitoring of peripheral intravenous catheters
in a regional hospital in the home program. Infection Control and Hospital
Epidemiology, 30, 915–917.

Wallis M., McGrail M., Webster J., Marsh N., Gowardman J., Playford E., &
Rickard C. M. (2014). Risk factors for peripheral intravenous catheter
failure: A multivariate analysis of data from a randomized controlled trial.
Infection Control and Hospital Epidemiology, 35, 63–68.

* Warshawsky N. E., & Havens D. (2011). Global use of the Practice
Environment Scale of the Nursing Work Index. Nursing Research, 60, 17–31.

Watson R., Asaro L., Hertzog J., Sorce L., Kachmar A., Dervan L., … Curley M.
(2018). Long-term outcomes after protocolized sedation vs usual care in
ventilated pediatric patients. American Journal of Respiratory & Critical Care
Medicine, 197, 1457–1467.

* Webster J., McGrail M., Marsh N., Wallis M., Ray-Barruel G., & Rickard C.
(2015). Postinfusion phlebitis: Incidence and risk factors. Nursing Research
and Practice, 2015, 691934.

Wheeler M. M. (2009). APACHE: An evaluation. Critical Care Nursing
Quarterly, 32, 46–48.

** Zhu J., Dy S., Wenzel J., & Wu A. (2018). Association of Magnet status and
nurse staffing with improvements in patient experience with hospital care,
2008–2015. Medical Care, 56, 111–120.

*A link to this open-access journal is provided in the Toolkit for Chapter 11 in
the Resource Manual.

**This journal article is available with this chapter.

C H A P T E R 1 2

Quality Improvement and Improvement Science

The improvement of healthcare services and patient outcomes is a goal
shared by all health disciplines. Several forces converged around the turn
of the century that led to the emergence of new endeavors and lines of
inquiry relating specifically to healthcare improvement. Quality
improvement (QI) and improvement science are rapidly evolving and are
still in their early stages of development, leaving abundant opportunity for
nurses to participate as leaders in this field. This chapter highlights a few
key features of quality improvement initiatives; we urge you to consult
other references (e.g., Finkelman, 2018; Hughes, 2008) for more
comprehensive presentations.

Quality Improvement Basics
In this section, we describe how quality improvement (QI) differs from
research, discuss the QI movement, and review basic features of QI.

Quality Improvement Versus Research
A decade ago, there was much discussion in nursing journals about the
differences and similarities among quality improvement, research, and
evidence-based practice (EBP) projects. All three have a lot in common,
notably the use of systematic methods of solving health problems with an
overall aim of fostering improvements in health care. Often, the methods
used overlap: patient data and statistical analysis—sometimes combined
with analysis of qualitative data—are used in all three.
Although the definitions proposed for QI, research, and EBP are distinct, it
is not always easy to distinguish them in real-world projects; as a result,
there is sometimes confusion. Quality