Examining the Validity of Northern College Standardized English Test (NTC-SET) according to the Common European Framework of Reference for Languages (CEFR) Standard
Abstract
The purposes of this study were to develop and validate the Northern College Standardized English Test (NTC-SET), a standardized English proficiency test aligned with CEFR benchmarks. The analysis focused on the NTC-SET's design process, content validity, item difficulty, item discrimination, and internal-consistency reliability. The study involved 351 participants. Content validity was confirmed by three expert raters, with Index of Item-Objective Congruence (IOC) ratings ranging from 0.76 (speaking) to 1.00 (listening). Item analysis revealed a moderate overall difficulty level (P = 0.50) and a very good discrimination index (0.43), showing that the test successfully distinguished between skill levels. Internal-consistency reliability, measured with the Kuder-Richardson Formula 20 (KR-20), was strong, with the reading section having the highest reliability coefficient at 0.94. Overall, the findings offer solid empirical evidence supporting the NTC-SET's validity and reliability as a tool for assessing English proficiency in accordance with CEFR standards.
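The three classical item statistics reported above (difficulty P, discrimination index, and KR-20 reliability) can be illustrated with a short sketch. This is not the authors' analysis code; it is a minimal, hypothetical implementation operating on an invented 0/1 (incorrect/correct) response matrix, since the NTC-SET response data are not public.

```python
def item_difficulty(responses):
    """Difficulty P for each item: proportion of examinees answering correctly."""
    n, k = len(responses), len(responses[0])
    return [sum(row[j] for row in responses) / n for j in range(k)]

def discrimination_index(responses, frac=0.27):
    """Upper-lower discrimination index per item, using the top and bottom
    `frac` of examinees ranked by total score (27% is the conventional cut)."""
    n, k = len(responses), len(responses[0])
    g = max(1, int(n * frac))
    ranked = sorted(responses, key=sum, reverse=True)
    upper, lower = ranked[:g], ranked[-g:]
    return [(sum(r[j] for r in upper) - sum(r[j] for r in lower)) / g
            for j in range(k)]

def kr20(responses):
    """Kuder-Richardson Formula 20: internal-consistency reliability for
    dichotomously scored items."""
    n, k = len(responses), len(responses[0])
    p = item_difficulty(responses)
    sum_pq = sum(pi * (1 - pi) for pi in p)          # sum of item variances
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n   # total-score variance
    return (k / (k - 1)) * (1 - sum_pq / var)

# Toy data: 4 examinees x 3 items (rows = examinees, 1 = correct).
R = [[1, 1, 1],
     [1, 1, 0],
     [1, 0, 0],
     [0, 0, 0]]
print(item_difficulty(R))       # [0.75, 0.5, 0.25]
print(discrimination_index(R))  # [1.0, 1.0, 1.0]
print(round(kr20(R), 2))        # 0.75
```

On a real test, these functions would be applied to the full 351-examinee response matrix; values near P = 0.50 indicate moderate difficulty, and discrimination indices above 0.40 are conventionally rated very good (Ebel, 1972).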
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
1. The articles published in the Journal of Faculty of Applied Arts are the copyright of the journal. Use of the contents, texts, opinions, pictures, tables, or any parts of the articles in any format for commercial purposes must be officially approved in writing by an authorized member of the editorial team.
2. The opinions expressed in each article are the sole responsibility of the author. The editorial team of the Journal of Faculty of Applied Arts (FAA lecturers, staff, and other personnel) accepts no responsibility for them; the author of each article takes full responsibility for his or her own work.
References
Allen, M. J., & Yen, W. M. (2002). Introduction to measurement theory. Waveland Press.
American Educational Research Association, American Psychological Association, & National Council on
Measurement in Education. (2014). Standards for educational and psychological testing. American
Educational Research Association.
Aryadoust, V., & Riazi, A. M. (2011). A study of the construct validity of a university English placement test.
International Journal of Language Testing, 1(1), 58–86.
Cambridge University Press. (n.d.). English Vocabulary Profile. Retrieved from http://www.englishprofile.org/wordlists
Chaiyasuparakul, S. (1999). การทดสอบและวัดผลการศึกษา [Testing and educational measurement].
Prakaipreuk Publishing House.
Council of Europe. (2001). Common European Framework of Reference for Languages: Learning, teaching,
assessment. Cambridge University Press.
Ebel, R. L. (1972). Essentials of educational measurement. Prentice-Hall.
EF Education First. (2022). EF English Proficiency Index: A ranking of 111 countries and regions by English
skills. https://www.ef.com/wwen/epi/
Kuder, G. F., & Richardson, M. W. (1937). The theory of the estimation of test reliability. Psychometrika, 2(3), 151–160.
Ratasawang, P. (2020). An item analysis of KU-EPT Reading Test. Journal of Education and Human
Development, 9(2), 65–72.
Rovinelli, R. J., & Hambleton, R. K. (1977). On the use of content specialists in the assessment of
criterion-referenced test item validity. Laboratory of Psychometric and Evaluative Research,
University of Massachusetts.
Yamane, T. (1967). Statistics: An introductory analysis (2nd ed.). Harper & Row.