ITC 2016 Conference

SYMPOSIUM: Issues in Adapting Clinical Assessments for the iPad
Lawrence Weiss, Dustin Wahlstrom, Selina Oliver, Susan Raiford

Building: Pinnacle
Room: Cordova-SalonB
Date: 2016-07-02 11:00 AM – 12:30 PM
Last modified: 2016-06-08

Abstract


Symposium Overview

Clinical assessments traditionally conducted using tangible materials are increasingly being made available in digitally administered formats. Researchers, clinicians, and practitioners alike have raised questions about the validity of digitally adapted tests. In this symposium, four presenters will address best practices in adapting clinical assessments for the iPad from technological, statistical, clinical, and practical perspectives. Participants will learn why equivalence across modes of administration is important, the relative advantages and disadvantages of various research designs for evaluating equivalence, how specific clinical groups perform on a set of digitally adapted cognitive tasks, and practical issues in implementing digital assessment practices in a large school setting.

Paper 1: Interface Design and Clinical Issues in Adapting Cognitive Assessments for the iPad
Dustin Wahlstrom & Lawrence G. Weiss, Pearson Clinical Assessment

Redesigning traditional paper-and-pencil tasks in a digital format brings both clinical improvements and unforeseen challenges. Digital delivery can simplify the assessment process, increase the accuracy of scores, and make tests more engaging for examinees. But software can be less flexible than paper, and care must be taken to maintain construct equivalence between paper and digital versions of tests.

This presentation describes the process by which paper-based tests such as the WISC-V were redesigned for digital delivery on the iPad within the Q-interactive system. The process can be characterized by five main steps:

  • Outline the guiding principles that drive test design, derived from clinical (e.g., preserve clinician-client rapport), psychometric (e.g., tests must measure the same response processes as their paper counterparts), and practical (e.g., be easy to use) needs.
  • Categorize tests based on examinee and examiner demands (e.g., are stimuli presented verbally or visually? Does the subtest require subjective judgment to score?); a hypothetical sketch of this tagging appears below.
  • Create common interface designs for each category of test in order to promote usability across subtests.
  • Conduct equivalence studies to confirm that the interface designs and digital stimulus presentation do not change the constructs being measured.
  • Conduct beta testing with clinicians to reveal any remaining usability issues that interfere with the practical use of the tests.

For each of these steps, the challenges faced by the development team will be discussed. For example, subtle changes to how stimuli were presented on the WAIS-IV were found to affect test performance, a finding that emerged only after the equivalence study and that led to interface changes and further equivalence testing. The implications of these findings for future test development will be discussed.
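
As a purely illustrative aside, the categorization step above can be thought of as tagging each subtest with the demands that determine which shared interface design it receives. The following Python sketch is a hypothetical illustration of that idea, not the Q-interactive implementation; the category names, template names, and routing rules are assumptions invented for this example.

    from dataclasses import dataclass
    from enum import Enum, auto

    class Stimulus(Enum):
        VERBAL = auto()   # examiner reads items aloud
        VISUAL = auto()   # examinee views stimuli on a screen

    class Scoring(Enum):
        OBJECTIVE = auto()  # correct/incorrect by rule
        JUDGMENT = auto()   # examiner rates response quality

    @dataclass
    class Subtest:
        name: str
        stimulus: Stimulus
        scoring: Scoring
        timed: bool

    def interface_template(s: Subtest) -> str:
        """Route subtests with the same demands to a shared interface design."""
        if s.stimulus is Stimulus.VERBAL and s.scoring is Scoring.JUDGMENT:
            return "examiner-capture"  # examiner records and rates spoken responses
        if s.stimulus is Stimulus.VISUAL and s.timed:
            return "timed-visual"      # on-screen stimuli with examiner-controlled timing
        return "basic-visual"

    battery = [
        Subtest("Vocabulary", Stimulus.VERBAL, Scoring.JUDGMENT, timed=False),
        Subtest("Block Design", Stimulus.VISUAL, Scoring.OBJECTIVE, timed=True),
    ]
    for s in battery:
        print(f"{s.name} -> {interface_template(s)}")

In practice the categories and templates would come from the design team's own taxonomy; the point is only that a small number of demand dimensions can drive a small number of shared interface designs.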

Paper 2: Issues in Evaluating Score Equivalence of iPad Versions of Cognitive Assessments
Lawrence G. Weiss, Mark Daniel, Pearson Clinical Assessment

Various research designs are available for evaluating score equivalence between traditional and digitally adapted tasks. These include randomly assigned groups, matched control groups, retest designs, and dual-capture designs. The strengths and weaknesses of each design are discussed, and suggestions are made about matching the most appropriate design to the specific nature of the task being evaluated.

This presentation summarizes the results of 12 equivalence studies (N = 1,470) conducted on 75 different cognitive subtests drawn from the WAIS-IV, WISC-IV, WISC-V, and other neuropsychological and individual achievement tests, covering ages 6 to 90. Small effect sizes of .20 or less were obtained on 73 of the 75 subtests evaluated, demonstrating equivalence for the large majority of tasks studied. We discuss special issues related to digital administration with preschool-age children using the WPPSI-IV. We also discuss how to determine when equating is appropriate, based on a WISC-V study with two completely redesigned processing speed tasks.
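
To make the equivalence criterion concrete, the following is a minimal Python sketch of the kind of computation involved: a standardized mean difference (Cohen's d) between digital and paper administrations, for an independent-groups design and for a retest-style paired design, checked against the |d| ≤ .20 smallness criterion cited above. The scores and the exact formula conventions chosen are illustrative assumptions, not the studies' actual analysis code.

    import statistics
    from math import sqrt

    def cohens_d_independent(digital, paper):
        """Standardized mean difference for a randomly assigned two-group design."""
        n1, n2 = len(digital), len(paper)
        s1, s2 = statistics.stdev(digital), statistics.stdev(paper)
        pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
        return (statistics.mean(digital) - statistics.mean(paper)) / pooled_sd

    def cohens_d_retest(digital, paper):
        """For a retest or dual-capture design where the same examinees contribute
        scores in both formats: mean within-person difference scaled by the SD of
        the paper scores (one common convention among several)."""
        diffs = [d - p for d, p in zip(digital, paper)]
        return statistics.mean(diffs) / statistics.stdev(paper)

    def is_small(d, threshold=0.20):
        """Descriptive smallness criterion like the |d| <= .20 rule cited above."""
        return abs(d) <= threshold

    # Hypothetical scaled scores (mean-10, SD-3 metric), for illustration only.
    digital = [10, 11, 9, 10, 12, 8, 11, 10, 9, 10]
    paper = [10, 10, 9, 11, 11, 9, 10, 10, 10, 9]
    d = cohens_d_independent(digital, paper)
    print(f"d = {d:.2f}; small enough: {is_small(d)}")

Note that paired designs admit several d conventions (raw-score SD versus difference-score SD), which is one reason the choice of design matters when judging equivalence.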

Paper 3: Issues in Comparing Clinical Validity of iPad and Paper Versions of Cognitive Tests
Susan E. Raiford, Lawrence G. Weiss, Pearson Clinical Assessment

Demonstrating score equivalence in samples of normal subjects is a necessary but not sufficient condition for validity. The development and maintenance of rapport between the examiner and patient is critical to eliciting the patient's optimal effort and obtaining a valid test score. Digital interfaces may impair rapport for some types of clinical subjects and not others. Further, the unique diagnostic characteristics of certain clinical patients may interact with specific design features of the digital interface to elicit unexpected patient behaviors. This could introduce construct-irrelevant variance into the test scores and threaten the validity of the results.

In this paper we describe the results of several clinical studies, including groups with Attention-Deficit/Hyperactivity Disorder (ADHD), Autism Spectrum Disorder with Language Impairment, Mild Intellectual Disability, Gifted and Talented students, and students with Specific Learning Disorders in Reading and Mathematics.

Subjects were randomly assigned to either the digital or the traditional administration condition and were administered the WISC-V. Results will be presented and discussed in view of how clinical interpretation of some digitally adapted subtests may differ for certain clinical groups. However, caution is warranted in over-interpreting such differences because the severity of the disorders is uncontrolled across the two administration conditions.

Paper 4: Practical Issues for School Districts to Consider When Upgrading to Cognitive Assessments for the iPad
Selina Oliver, Anne Arundel County Public Schools

Transitioning school district staff from traditional paper-and-pencil assessments to a digital format comes with both opportunities and challenges. On one hand, a digital platform offers opportunities for greater assessment efficiency and accuracy, streamlined inventory management, and more manageable costs. On the other hand, the transition process raises questions about resistance to change, hardware and licensing expenses, ethical considerations, data storage, training, and workflow.

This presentation will describe the process by which a large school district transitioned from paper-based tests to digital delivery on the Q-interactive system. The district onboarding team engaged in the following tasks:

  • Established a continuum of training opportunities to promote required assessment competencies and workflow.
  • Secured iPads and configured them to meet district IT standards, and advocated with district IT to close the gap between existing standards and those needed for iPad assessment.
  • Conducted a 10-year cost analysis to determine the fiscal feasibility of upgrading to iPad assessment (a simplified illustration appears below).
  • Created a best-practices protocol for four stages of data management: creation, storage, disclosure, and destruction.

For each of these steps, the challenges faced and the solutions found by the school district will be discussed.
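
As a purely illustrative aside, a cost analysis like the one described above can be sketched as a cumulative comparison of paper and digital spending over a 10-year horizon. Every figure in the Python sketch below is a hypothetical placeholder, not an Anne Arundel County number; only the break-even logic is the point.

    # Hypothetical 10-year cost comparison; all dollar amounts below are
    # placeholder assumptions for illustration only.
    YEARS = 10
    N_EXAMINERS = 40

    KIT_COST, KIT_LIFE = 1200, 5           # paper: kit replaced every KIT_LIFE years
    FORMS_PER_EXAMINER_YEAR = 600          # paper: consumable record forms per year

    IPADS_PER_EXAMINER, IPAD_COST, IPAD_LIFE = 2, 500, 4  # digital: dual-device setup
    LICENSE_PER_EXAMINER_YEAR = 800        # digital: annual licensing/usage fees

    paper_total = digital_total = 0.0
    for year in range(YEARS):
        if year % KIT_LIFE == 0:           # replace kits at the start of each cycle
            paper_total += N_EXAMINERS * KIT_COST
        paper_total += N_EXAMINERS * FORMS_PER_EXAMINER_YEAR
        if year % IPAD_LIFE == 0:          # refresh hardware at the start of each cycle
            digital_total += N_EXAMINERS * IPADS_PER_EXAMINER * IPAD_COST
        digital_total += N_EXAMINERS * LICENSE_PER_EXAMINER_YEAR
        print(f"year {year + 1:2d}: paper ${paper_total:>9,.0f}  digital ${digital_total:>9,.0f}")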

