This study investigates whether the TIMSS 2019 Computer Use Questionnaire functions equivalently across languages and cultures. Using responses from 8th-grade students in Turkey, England, and Qatar, we evaluated cross-group comparability with Multiple-Group Confirmatory Factor Analysis (MGCFA) and examined Differential Item Functioning (DIF) via Ordinal Logistic Regression (OLR) and Poly-SIBTEST. The instrument comprises 11 Likert-type items organized into two factors, Computer Usage Frequency and Computer Usage Self-Efficacy, a structure supported by exploratory and confirmatory factor analyses. For the same-culture/different-language comparison (Arabic vs. English administrations in Qatar), configural and metric invariance were supported, whereas scalar invariance was not. For the different-culture/different-language comparison (England vs. Turkey), only configural invariance was supported, indicating that factor loadings and intercepts were not fully comparable across these countries. DIF findings varied by method: OLR flagged mostly negligible DIF in the frequency items for the same-culture comparison, while Poly-SIBTEST identified several items with moderate to large DIF; in the cross-culture comparison, both methods indicated DIF for most items, particularly within the self-efficacy factor. This pattern suggests that linguistic adaptation, access to technology, and differences in technology-related experience all contribute to nonequivalence. We propose revising culture-sensitive terms, clarifying item contexts, and incorporating qualitative evidence to strengthen score comparability in future administrations.
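For readers unfamiliar with the invariance hierarchy named in the abstract, the nested MGCFA models can be written in standard notation; this is a sketch of the conventional constraints only, not parameter values from the study:

```latex
% Per-group measurement model and the nested invariance constraints (standard notation)
\[
  \mathbf{x}^{(g)} \;=\; \boldsymbol{\nu}^{(g)} + \boldsymbol{\Lambda}^{(g)}\boldsymbol{\xi}^{(g)} + \boldsymbol{\varepsilon}^{(g)}
\]
\begin{align*}
  \text{configural:} &\quad \text{same pattern of fixed zeros and free loadings in every group } g \\
  \text{metric:}     &\quad \boldsymbol{\Lambda}^{(1)} = \boldsymbol{\Lambda}^{(2)} \\
  \text{scalar:}     &\quad \boldsymbol{\Lambda}^{(1)} = \boldsymbol{\Lambda}^{(2)}
                       \;\text{and}\; \boldsymbol{\nu}^{(1)} = \boldsymbol{\nu}^{(2)}
\end{align*}
```

The OLR DIF procedure compares nested ordinal regressions in a similar spirit. The sketch below is illustrative only: it uses simulated data (the TIMSS item-level records are not reproduced here), statsmodels' `OrderedModel` as the ordered-logit fitter, a hypothetical `matching` variable standing in for the observed matching score, and a McFadden pseudo-R² gain as the effect size; published cutoffs such as .035/.070 were derived for other pseudo-R² variants, so they serve only as rough guides.

```python
"""Hedged sketch of an OLR DIF screen on a single Likert item.

Everything here is illustrative: the data are simulated, and `group`
and `matching` are hypothetical stand-ins for the focal/reference
indicator and the observed matching (total) score used in practice.
"""
import numpy as np
from scipy import stats
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)          # 0 = reference, 1 = focal group
matching = rng.normal(0.0, 1.0, n)     # stand-in for the matching score
# Build a 4-category item with a uniform-DIF shift for the focal group
latent = matching + 0.6 * group + rng.logistic(0.0, 1.0, n)
item = np.digitize(latent, [-1.0, 0.5, 2.0])   # ordinal categories 0..3

def loglik(y, X):
    """Log-likelihood of an ordered-logit fit (thresholds estimated internally)."""
    return OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False).llf

ll_base = loglik(item, matching[:, None])      # item ~ matching score
ll_full = loglik(item, np.column_stack([matching, group, matching * group]))

# Intercept-only (null) log-likelihood: thresholds alone reproduce the
# marginal category proportions, so it has a closed form.
counts = np.bincount(item)
ll_null = float(np.sum(counts * np.log(counts / n)))

lr = 2.0 * (ll_full - ll_base)            # LR chi-square for group + interaction, df = 2
p = stats.chi2.sf(lr, df=2)
delta_r2 = (ll_base - ll_full) / ll_null  # McFadden pseudo-R^2 gain

label = "negligible" if delta_r2 < 0.035 else "moderate" if delta_r2 < 0.070 else "large"
print(f"LR chi2(2) = {lr:.2f}, p = {p:.4g}, pseudo-R2 gain = {delta_r2:.4f} ({label})")
```

Poly-SIBTEST, by contrast, matches examinees on a purified subtest score and tests weighted expected-score differences between groups, which helps explain why the two methods can flag different sets of items, as the abstract reports.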
| Field | Value |
|---|---|
| Primary Language | English |
| Subjects | Measurement Equivalence |
| Journal Section | Research Article |
| Submission Date | August 26, 2025 |
| Acceptance Date | November 2, 2025 |
| Early Pub Date | December 2, 2025 |
| Publication Date | December 31, 2025 |
| Published in Issue | Year 2025 Volume: 16 Issue: 4 |