Open Conference Systems, ITC 2016 Conference

WORKSHOP: Distinguishing Between Think-Aloud Interviews and Cognitive Labs for Test Development and Validation Efforts
Jacqueline Leighton

Building: Pinnacle
Room: Cordova-SalonC
Date: 2016-07-01 09:00 AM – 05:00 PM
Last modified: 2016-05-18

Abstract


Importance: The collection of response process data is strongly recommended for tests designed to measure and inform conclusions about test-takers’ cognitive processes (Standards, 2014). However, recent research developments in think-aloud and cognitive-laboratory methods (e.g., Fox, Ericsson & Best, 2011; Leighton, 2013; Willis, 2015) indicate a need to revisit key aspects of these methods, including objectives, sample considerations, interview techniques, data collection, coding, and approaches for analyzing and improving the quality of conclusions derived from these highly labour-intensive methodologies.

Relevance and Usefulness: This workshop will introduce participants to key considerations in distinguishing between think-aloud interviews and cognitive laboratories for collecting, analyzing, and interpreting response process data to better satisfy and defend test development and validation objectives.

Background of Facilitator: The facilitator has over 20 years of experience conducting empirical research using think-aloud and cognitive laboratory methods, in addition to writing critical analyses of best practices for using these methods. She has published her research in leading testing journals (e.g., Educational Measurement: Issues and Practice) and is currently writing a methodologically focused book on think-aloud interviews and cognitive laboratories in the series Evaluation Research: Bridging Qualitative and Quantitative Methods, to be published by Oxford University Press.

Agenda: Introduction to (a) key differences in the response processes measured by think-aloud interview and cognitive lab techniques, with examples, (b) sample size considerations depending on objectives and/or analytical considerations, (c) coding schemes and standardized rubrics for rating and segmenting verbal reports, (d) coding and application of inter-rater reliability indices for report coding and ratings, (e) analysis of codes using inferential statistics, and (f) drawing conclusions within a test development or validity argument framework.
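
To give a flavour of agenda items (d) and (e), the sketch below is a minimal, illustrative example and not actual workshop material: it assumes two hypothetical raters assigning categorical codes to the same ten verbal-report segments, computes Cohen's kappa as one common inter-rater reliability index, and runs a chi-square test on hypothetical code frequencies for two examinee groups. The code labels, counts, and group names are all invented for illustration.

```python
# Illustrative sketch only: inter-rater reliability and a simple inferential
# test on verbal-report codes. All data below are hypothetical.
from collections import Counter

from scipy.stats import chi2_contingency
from sklearn.metrics import cohen_kappa_score

# Hypothetical codes assigned by two independent raters to the same ten
# verbal-report segments (e.g., "R" = retrieval, "S" = strategy, "M" = monitoring).
rater_1 = ["R", "S", "S", "M", "R", "S", "M", "M", "R", "S"]
rater_2 = ["R", "S", "M", "M", "R", "S", "M", "S", "R", "S"]

# Cohen's kappa corrects raw percent agreement for agreement expected by chance.
kappa = cohen_kappa_score(rater_1, rater_2)
agreement = sum(a == b for a, b in zip(rater_1, rater_2)) / len(rater_1)
print(f"Percent agreement: {agreement:.2f}, Cohen's kappa: {kappa:.2f}")

# Inferential analysis of consensus codes: a chi-square test of whether code
# frequencies differ between two hypothetical examinee groups.
group_a = Counter({"R": 14, "S": 22, "M": 9})
group_b = Counter({"R": 21, "S": 12, "M": 11})
table = [[group_a[c] for c in "RSM"], [group_b[c] for c in "RSM"]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```

Kappa is only one of several indices the workshop's agenda could cover; the point of the sketch is that segment-level codes, once standardized, support both reliability checks and conventional inferential statistics.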

Logistics and/or Equipment: Participants are expected to bring laptops to work on mini-tasks: coding, segmenting, and analysis.

