Open Conference Systems, ITC 2016 Conference

SYMPOSIUM: Measuring Clinical Judgment in the Nursing Field: Application of a Decision-Making Model and Investigation of Technology Enhanced Items
Joseph Betts, Phil Dickison, Ada Woo, Doyoung Kim, Xiao Lou, Nicole Williams, William Muntean, Marie Lindsay, Karen Sutherland

Building: Pinnacle
Room: Cordova-SalonC
Date: 2016-07-02 03:30 PM – 05:00 PM
Last modified: 2016-06-08

Abstract


Introduction

The presentations in this symposium will highlight research associated with expanding the focus of an extant large-scale licensure examination program's computer-based assessment to include a new, more complex construct found in professional practice: clinical judgment (CJ).

Contributions

The first presentation will situate the audience within the current exam blueprint and present results from the recent job task analysis, which indicated an increased need for evaluating CJ within the daily nursing milieu. The evolution of the R&D process that merged a cognitive psychological decision-making model with the daily praxis outlined by the job task analysis into a definable assessment model will be explored.

The second presentation will highlight the process by which new technology enhanced item (TEI) types were identified for their potential utility in measuring aspects of the CJ model. The new TEI types will be shown, and the process of mapping the types to the CJ model elements will be described with examples.

The third paper will outline a number of possible analytic methods, such as IRT, factor analysis, and cognitive diagnostic models, for evaluating construct validity, along with numerous scoring models for evaluating the stated assessment model. The final paper will then present initial research results for items developed to measure the CJ element of cue recognition, using both a signal detection theory and a polytomous item response theory (IRT) framework.

Conclusions

The audience will be exposed to the internal processes and related research results of a large-scale licensure examination program, from inception to current status, as it extends its exams to measure a complex aspect of professional practice: clinical judgment.

*****

Paper #1: Exploring the Road Ahead: Moving a Traditional Assessment into the Next Generation
Ada Woo, Phil Dickison, Doyoung Kim, Joe Betts, Will Muntean, & Xiao Lou

Introduction

As professions evolve over time, their scope of practice can change, and, therefore, licensure/certification examinations must change to meet the new professional demands. At present, the field of nursing is seeing this type of momentum as greater emphasis is placed on the acquisition and use of clinical judgment (CJ) skills. This emerging emphasis in the daily praxis of nursing has prompted research into the measurement and evaluation of CJ skills.

Objectives

The objective of this presentation is to provide the audience with a glimpse into the contemplative process that began the movement of a major licensure examination program toward expanding its measured construct to include a more comprehensive evaluation of CJ.

Design/Methodology

This presentation will start with an overview of the current exam blueprint to provide context. Next, the qualitative and quantitative design of the recent job task analysis (JTA), which combined standard methods with an innovative set of in-situ observations, will be discussed in depth. Background information from a literature review in cognitive psychology, focused on validated decision-making models in nursing, will also be presented.

Results

The results of the JTA indicated a growing need for CJ skills among entry-level nursing professionals. These results were incorporated into an assessment model that maps the daily nursing praxis onto the validated decision-making model. This model will be fully elaborated during the discussion.

Conclusions

From the analysis of the JTA and literature review on decision-making models, a final assessment model was developed to guide item writing. Examples will be provided to show how the model will be used.

 

Paper #2: Defining New Item Types for a Clinical Judgment Construct
Joe Betts, Ada Woo, Phil Dickison, Marie Lindsay, Nicole Williams, & Karen Sutherland

Introduction

One of the aspects of identifying the new clinical judgment (CJ) construct was the realization that new item types beyond multiple-choice questions would be needed. This prompted the development of a set of technology enhanced items (TEI) to measure the new CJ construct.

Objectives

The discussion will begin with the definition of the underlying task model used to build CJ items based on the JTA described in the prior talk. From this, the discussion will turn to the original lightning labs designed to develop TEI types that could potentially measure the CJ model. Finally, the mapping of the task model to the item types, accomplished through the development of CJ scenario items, will be described.

Design/Methodology

The design of the lightning labs and mapping meetings used a modified brainstorming approach. After the item types were identified, a number of item-writing panels were convened to draft item prototypes for rendering. All items then went through a number of item review panels to validate the item types and to provide options for designing variants of each type to uniquely measure CJ elements.

Results

This research yielded seven specific item types, each with a number of variations. Examples of the CJ scenario-based items that incorporated many of the new item types into the task model will be highlighted.

Conclusions

Audience members will be taken through the entire process: identifying task models aligned with the JTA, brainstorming new item types that could effectively measure the new CJ elements, convening item-writing panels that coupled the task models with the new item types to develop scenario-based items, and conducting the item type review panels.

 

Paper #3: Evaluating Cognitive Constructs using an Information Processing Framework
Doyoung Kim, Xiao Lou, Ada Woo, & Phil Dickison

Introduction

With the move to assess the new CJ construct outlined in the previous papers, a more comprehensive scoring model was needed. This paper will present evidence-based methodologies for both establishing the validity of the new construct and scoring the new complex items.

Objectives

This paper will provide an overview of the research approaches that can be used to evaluate the underlying scoring model of the new CJ construct. This model links the information presented in the previous papers of this symposium and organizes the overarching framework. The audience will gain insight into the numerous approaches for evaluating the overall CJ construct.

Design/Methodology

A number of different methods for evaluating candidates' raw field-test responses will be addressed. A polytomous IRT model, a multidimensional IRT model, a cognitive diagnostic model, and a factor analytic model will be explained and explored, with the goal of highlighting how each can be used to examine the internal validity, scoring, and stability of the construct being measured.
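
As an illustration of the first of these approaches, the sketch below computes category response probabilities under a graded response model, a common polytomous IRT model. It is a minimal sketch: the discrimination and threshold parameters are hypothetical values chosen for illustration, not operational parameters from the examination program.

import numpy as np

def grm_category_probs(theta, a, b):
    """Return P(X = k | theta) for each score category k of one item.

    theta : examinee ability
    a     : item discrimination
    b     : ordered category thresholds (length = n_categories - 1)
    """
    b = np.asarray(b, dtype=float)
    # Cumulative probabilities P(X >= k), bounded by 1 above and 0 below.
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    upper = np.concatenate(([1.0], p_star))
    lower = np.concatenate((p_star, [0.0]))
    return upper - lower  # category probabilities; they sum to 1

# Example: a hypothetical 4-category CJ scenario item scored 0..3.
probs = grm_category_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.5])
print(probs, probs.sum())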

Results

Results will be explained in terms of the information each approach provides with respect to making validity claims based on the data. Additionally, the different scoring models will be discussed.

Conclusions

The results of the different methods can provide key information about the internal validity and scoring of the new CJ items.

 

Paper #4: Developing and Pretesting Technology Enhanced Items: Issues and Outcomes
Will Muntean & Joe Betts

Introduction

Technology enhanced items (TEI) allow test developers to explore new domains of interest. Fundamentally, these items enrich the interaction between examinees and the exam, providing higher fidelity and facilitating the use of the theoretical frameworks underlying the tested constructs. The current paper explores clinical judgment in the health services domain through the framework of signal detection theory (SDT), which measures how information is utilized when making judgments. By applying SDT to multiple response item types, the current paper provides a feasible supplement to common item response theory (IRT) analyses.
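
As a brief illustration of how SDT indices could be computed for a multiple response item, consider the sketch below. Treating selections of relevant options as hits and selections of irrelevant options as false alarms is our illustrative assumption about the scoring, not the program's operational rule.

from scipy.stats import norm

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Return (d-prime, criterion c) from the counts for one examinee.

    A log-linear style correction (+0.5 / +1.0) keeps rates away from 0 and 1.
    """
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hr, z_far = norm.ppf(hr), norm.ppf(far)
    d_prime = z_hr - z_far        # sensitivity: discriminating relevant cues
    c = -0.5 * (z_hr + z_far)     # bias: overall propensity to select options
    return d_prime, c

# Example: an item with 4 relevant and 6 irrelevant cues; the examinee
# selected 3 relevant cues and 1 irrelevant cue.
print(sdt_indices(hits=3, misses=1, false_alarms=1, correct_rejections=5))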

Objectives

Although SDT is a popular measurement model in cognitive psychology, we introduce its application to multiple response item types. When applied to clinical judgment, the model measures one's propensity to utilize information. By contrasting SDT with several popular polytomous item response theory (IRT) models, we offer a new alternative for evaluating item utility.

Design/Methodology

We use several simulations to investigate the similarities and differences between SDT and IRT. Initial data are generated from an SDT model, a partial credit model, a graded response model, and a testlet model. Then, by fitting each model to the different generated data sets, we show that the SDT and IRT approaches provide different information about item functioning.
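
As a rough sketch of one such data-generation condition, the code below produces multiple response data from an equal-variance Gaussian SDT process and then scores it polytomously for a subsequent IRT calibration. All parameter values here are illustrative assumptions, not the values used in the reported simulations.

import numpy as np

rng = np.random.default_rng(7)

# Illustrative generating parameters (assumed, not from the study).
n_examinees, n_relevant, n_irrelevant = 1000, 4, 6
d_prime, criterion = 1.5, 0.5

# Evidence for each option: relevant cues ~ N(d', 1), irrelevant ~ N(0, 1).
relevant_evidence = rng.normal(d_prime, 1.0, (n_examinees, n_relevant))
irrelevant_evidence = rng.normal(0.0, 1.0, (n_examinees, n_irrelevant))

# An option is selected whenever its evidence exceeds the criterion.
hits = (relevant_evidence > criterion).sum(axis=1)
false_alarms = (irrelevant_evidence > criterion).sum(axis=1)

# The same responses can then be scored polytomously (here: hits minus
# false alarms, floored at zero) and passed to a polytomous IRT model,
# which is where the two frameworks are contrasted.
polytomous_score = np.clip(hits - false_alarms, 0, None)
print(np.bincount(polytomous_score, minlength=n_relevant + 1))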

Results

Collectively, the simulation results clarify the relationship between the two methods of construct assessment. Relative to IRT analyses, SDT provides unique information about responding behavior: examinees' propensity to utilize information is unrelated to ability estimates derived from a graded response model. The implications of this relationship are discussed.

Conclusions

Because SDT offers a novel approach to evaluating multiple response items, the analysis supplements common item response theory analyses and provides unique insight into responding behavior.

