Abstract
Technology-based assessments that involve collaboration among students offer many sources of process data, although it remains unclear which aspects of these data are most meaningful for making inferences about students’ collaborative skills. Recent research has focused mainly on theory-based rubrics for qualitative coding of process data (e.g., text from chat dialogues, click-stream data), but many reliability and validity issues arise in the application of such rubrics. In this research, we take a more data-driven approach to the problem. Data were collected from 122 dyads who interacted over online chat to complete a twelfth-grade mathematics assessment. We focus on features of the chat and click-stream data that can be extracted automatically, including the extent to which chat dialogue contained content from assessment materials; chat-based cues of affective tone and mirroring; and temporal synchronization in task-related activities. Using a block-wise linear regression, we show that process features of chat and click-stream data accounted for 30.5% of the variation in group performance, after controlling for group members’ math proficiency and the total number of words in the chat dialogue. The full model explained 61% of the variation in group performance. Implications for the design and scoring of collaborative assessments are discussed.
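The block-wise (hierarchical) regression described above enters predictors in blocks and attributes to each block the increase in R² it produces. A minimal sketch of that procedure is below; the data, block composition, and effect sizes are simulated for illustration and are not the study's actual variables or results.

```python
# Hypothetical sketch of block-wise (hierarchical) linear regression:
# covariates are entered first, process features second, and the change
# in R^2 (delta R^2) is attributed to the process-feature block.
# All data here are simulated; only the sample size mirrors the study.
import numpy as np

rng = np.random.default_rng(0)
n = 122  # number of dyads in the study

# Block 1: control variables (e.g., members' math proficiency, word count)
covariates = rng.normal(size=(n, 2))
# Block 2: process features (e.g., shared content, affective cues, synchrony)
process = rng.normal(size=(n, 3))
# Simulated group-performance outcome influenced by both blocks
y = (covariates @ np.array([0.6, 0.4])
     + process @ np.array([0.5, 0.3, 0.2])
     + rng.normal(scale=0.5, size=n))

def r_squared(X, y):
    """R^2 of an ordinary-least-squares fit with an intercept term."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_block1 = r_squared(covariates, y)
r2_full = r_squared(np.column_stack([covariates, process]), y)
delta_r2 = r2_full - r2_block1  # variance uniquely added by process features
print(f"Block 1 R^2 = {r2_block1:.3f}, full R^2 = {r2_full:.3f}, "
      f"delta R^2 = {delta_r2:.3f}")
```

Because the blocks are nested, the full-model R² can never fall below the block-1 R², so ΔR² is the incremental variance explained by the process features over the controls.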
| Field | Value |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 367-388 |
| Number of pages | 22 |
| Journal | Technology, Knowledge and Learning |
| Volume | 25 |
| Issue number | 2 |
| DOIs | |
| State | Published - Jun 1 2020 |
Keywords
- Collaborative assessments
- Group performance
- Item response theory
- Process data
ASJC Scopus subject areas
- Mathematics (miscellaneous)
- Education
- Human-Computer Interaction
- Computer Science Applications