Assessment of Item and Test parameters: Cosine Similarity Approach
Year 2021, Volume 8, Issue 3, 28-38, 25.07.2021
Satyendra Nath Chakrabartty
Abstract
The paper proposes new measures of the difficulty and discriminating values of binary items and of a test consisting of such items, derives their relationships, and estimates test error variance and thereby test reliability, as per definition, using cosine similarities. The measures use the entire data. The difficulty value of the test and of an item is defined as a function of the cosine of the angle between the observed score vector and the maximum possible score vector. The discriminating values of the test and of an item are taken as the coefficient of variation (CV) of the test score and the item score, respectively; each ranges between 0 and 1, like the difficulty value of the test and of an item. As the number of correct answers to an item increases, the item difficulty curve increases and the item discriminating curve decreases; the point of intersection of the two curves can be used, along with other criteria, for item deletion. Cronbach's alpha was expressed and computed in terms of the discriminating values of the test and its items, and a relationship was derived between the test discriminating value and test reliability as per the theoretical definition. Empirical verifications of the proposed measures were undertaken, and future studies are suggested.
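The two quantities named in the abstract can be illustrated numerically. The following is a minimal sketch, not the paper's exact formulation: the data matrix is invented for illustration, the raw cosine is shown in place of whatever function of the cosine the paper defines as the difficulty value, and the CV is computed with the population standard deviation as an assumption.

```python
import numpy as np

# Hypothetical binary response matrix: rows = examinees, columns = items
# (1 = correct answer, 0 = incorrect). Illustrative data, not from the paper.
X = np.array([
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
])

def cosine(u, v):
    """Cosine of the angle between two vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Observed test scores (number of correct answers per examinee) and the
# maximum possible score vector (every examinee answering every item correctly).
scores = X.sum(axis=1)
max_scores = np.full_like(scores, X.shape[1])

# Difficulty is defined via the cosine between the observed score vector and
# the maximum possible score vector; the raw cosine is shown here as a proxy.
test_difficulty_cos = cosine(scores, max_scores)

# Discriminating value of the test: coefficient of variation of test scores.
test_discrimination = scores.std() / scores.mean()

# Item-level discriminating values: CV of each item's score column.
item_cv = X.std(axis=0) / X.mean(axis=0)

print(test_difficulty_cos, test_discrimination, item_cv)
```

Since all scores are non-negative, the cosine lies in [0, 1], consistent with the bounded range the abstract claims for the difficulty value.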