Reading Comprehension Ability of Future Engineers in Thailand*
Kho Siaw Hui1, Karwan Mustafa Saeed2 & Thapanee Khemanuwong3
1Kuching District Education Department, Kuching, Sarawak, Malaysia; 2Koya University, Koya, Kurdistan, Iraq; 3King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand
Contact:  kylin11@hotmail.com, karwan.saeed@koyauniversity.org, thapaneekhe@gmail.com
* This is a refereed article.

Received: 13 March, 2020. Accepted: 14 June, 2020.

This is an open-access article distributed under the terms of a CC BY-NC-SA 4.0 license
Abstract: Profiling students' reading comprehension ability provides essential data for educators to identify students struggling with reading. However, tertiary-level test-takers and test-givers are not always aware of the makeup and design of profiles, which creates real challenges in using the profiling of test results to diagnose problematic areas in reading. This research sought to unveil the English reading comprehension ability (ERCA) of first-year undergraduate engineers by employing the Thai Reading Evaluation and Decoding System (T-READS). The sample consisted of 751 first-year undergraduate engineers at a Thai public university. The researchers analysed the data using descriptive statistics (i.e., T-READS results) and inferential statistics (i.e., one-way ANOVA). Data analysis led to two major findings. First, the data revealed that 74% of undergraduate engineers were at Band 3 and above, which was the minimum university requirement of ERCA. Second, statistically significant differences in the T-READS scores among four groups of performers (Above Standard, Meet Standard, Below Standard and Academic Warning) (F [3, 747] = 1476.66, p = 0.00) were found. The major findings hold pedagogical implications that provide English programme planners at the tertiary level with insights into the reading comprehension difficulties faced by undergraduate engineers and the use of T-READS as a formative assessment tool that can be localised for planning targeted interventions.

Keywords: English reading comprehension ability (ERCA), Thai Reading Evaluation and Decoding System (T-READS), reading performance, higher education, English as a Foreign Language (EFL)


Resumen: Perfilar la comprensión de lectura de los estudiantes proporciona datos esenciales a los educadores para identificar a quienes tienen dificultades en lectura. Sin embargo, los examinados de nivel terciario y los examinadores no siempre son conscientes de la composición y el diseño de los perfiles, lo que crea verdaderos desafíos en el uso de la elaboración de perfiles de los resultados de las pruebas para diagnosticar las áreas problemáticas de la lectura. Esta investigación esperaba revelar la capacidad de comprensión de lectura en inglés (ERCA) de los ingenieros de pregrado de primer año mediante el empleo del Sistema de decodificación y evaluación de lectura tailandés (T-READS). La muestra consistió en 751 ingenieros de primer año en una universidad pública tailandesa. Los investigadores analizaron los datos usando estadísticas descriptivas (resultados de T-READS) y estadísticas inferenciales (ANOVA de una vía). El análisis de datos arrojó dos hallazgos importantes. Primero, los datos revelaron que el 74% de los ingenieros de pregrado estaban en la Banda 3 y superior, que era el requisito universitario mínimo de ERCA. En segundo lugar, se encontraron diferencias estadísticamente significativas en las puntuaciones de T-READS entre cuatro grupos de examinados (por encima del estándar, cumple con el estándar, por debajo del estándar y advertencia académica) (F [3, 747] = 1476,66, p = 0,00). Los principales hallazgos tienen implicaciones pedagógicas que brindan a los planificadores de programas de inglés en el nivel terciario información sobre las dificultades de comprensión lectora que enfrentan los ingenieros de pregrado y el uso de T-READS como una herramienta de evaluación formativa que se puede localizar para la planificación de intervenciones específicas.

Palabras Clave: capacidad de comprensión de lectura en inglés (ERCA), sistema de decodificación y evaluación de lectura en tailandés (T-READS), rendimiento en lectura, educación superior, inglés como lengua extranjera


Introduction

Increasing productivity and enhancing Thailand’s overall competitiveness, especially in the Southeast Asia region, is a key factor in the country’s economic development. In view of this, Thailand 4.0 entails boosting the country’s economic productivity through human capital development (Koen et al., 2018). Within this premise, Thailand 4.0 focuses on building national capability through innovation-based concepts and international trade. In recent years, the development of English language skills has taken centre stage in Thailand’s effort to accommodate globalisation and internationalisation (Charubusp & Chinwonno, 2014). As industry becomes more integrated and globalised, employers increasingly expect competency in the English language among job applicants. In line with Thailand 4.0, Thai education aims to enhance Thai graduates’ English language competency, given the crucial role of English as the global language of business and trade.

Although research into English proficiency in Thailand is still in its infancy, the reality of Thai undergraduates’ subpar English proficiency could pose a threat to Thailand’s aspiration to be a contender in the globalised economy, according to Sureeyatanapas et al. (2016). More specifically, they refer to engineering graduates who reportedly fail to meet employers’ requirements in English proficiency, especially in reading. Kaewpet (2009) reported that whilst Thai undergraduate engineers’ English reading comprehension ability (ERCA) was named the most essential skill for both occupational and educational purposes, training in reading, especially in technical English courses related to the field of work, was still lacking. Read (2015) reasons that some undergraduates, especially first-year students, may find it challenging to cope with the language demands of their degree courses because of their diverse linguistic backgrounds. Thai English lecturers are therefore confronted with the challenging task of catering for this diversity. According to Cruz and Escudero (2012), many undergraduates have not sufficiently developed the discourse-oriented reading skills and strategies essential for comprehending difficult texts at the university level; they report that many undergraduates still make implausible inferences when reading narrative and non-narrative genres.

Whilst a limited number of studies have highlighted the scale of the problem, there has been little research assessing undergraduates’ ERCA, specifically in the context of Thailand. Hence, to expand our understanding of Thai undergraduates’ current ERCA, the present study provides a fresh window on measuring undergraduates’ ERCA using the Thai version of READS (T-READS). The findings of this study could inform how a descriptive test such as T-READS could be employed to assess students’ ERCA and to provide feedback on problematic reading skills, such as inference making, for future intervention planning.

Literature Review

Schema theory and cultural bias in ERCA

Reading comprehension is a vital skill for English learners. As highlighted by Calvete and Orue (2012), in the endeavour to identify undergraduates’ ERCA, the reading literature identifies schema as the basis for cognition and information processing. McNamara (2012) defines comprehension as the ability to go beyond the words and understand the relationships between ideas conveyed in a text. Priebe et al. (2012) explain that readers are often guided by their previous experience and knowledge of the content area of a text in their attempt to comprehend it, which is referred to as content schemata. Rumelhart (1980) further illuminates that schema gives readers direction in the retrieval and construction of meaning from their prior knowledge, or schemata. According to Anderson (1978), proficient readers activate their knowledge of the world as they comprehend texts.

Apart from the readers’ ability to activate content schemata, Floyd and Carrell (1987) contend that undergraduates’ ERCA largely relies on how well readers can relate the text to their cultural schemata, that is, their culturally familiar background and culturally-based clues. In test-taking, the literature has identified positive effects of addressing cultural schemata on test-takers’ ERCA. To investigate how cultural schemata could be activated to influence students’ comprehension and test-taking processes, Sasaki (2000) conducted a study of Japanese first-year university students answering a cloze test. The study suggested that students who answered the culturally familiar version of the test significantly outperformed those who received the culturally unfamiliar version. In another study, Yamashita (2003) found that among Japanese EFL students answering a gap-filling test, skilled readers used text-level information more frequently than less skilled readers did by activating their background knowledge. One explanation for this effect was given by Yousef, Karimi, and Janfeshan (2014), who found that EFL students’ text comprehension is influenced, on top of their linguistic capability, by their general knowledge and the extent to which that knowledge can be activated during reading. Hence, test developers are expected to choose texts that are familiar to students so as to tap into their schemata. Following this line of reasoning, the researchers based this study on the widely accepted argument that readers perform better with culturally familiar texts, arguing that when readers are given texts fitting their cultural schemata, contextual biases that could confound the assessment of ERCA would be reduced.

Measuring ERCA

In measuring ERCA with criterion-referenced tests, Mertler (2007) asserts that the interpretation of test scores depends largely on the cut scores being set. As highlighted by Eckes (2017), cut scores are measures that classify test-takers into two or more categories in accordance with their test performance. Powers, Schedl, and Papageorgiou (2016) argue that the categories can be differentiated based on knowledge, skills or abilities in a standard-setting approach. A close examination of the methods used to determine cut scores reveals that some researchers, such as Choi (2017), transformed subscale scores into z-scores and plotted them to label each group of test-takers based on the characteristics of their subscale score patterns. In addition, cut scores have important consequences for test-takers and test-providers, especially in the context of diagnostic tests such as T-READS. For example, when employing T-READS in university English language programmes, cut scores provide the information needed to assign test-takers to the courses that best fit their reading comprehension levels. In contrast, Kubiszyn and Borich (as cited in Saeed et al., 2019, p. 1064) stressed that a raw score was employed to determine whether a language learner performed higher or lower than the rest of the learners. However, raw scores fall short in determining a language learner’s performance band because they provide no information about the specific descriptors of the proficiency level. The researchers therefore argue that determining cut scores (i.e., identifying test-takers who scored below and above the cut score) is the utmost priority in adapting T-READS for the context of Thai undergraduates, to ensure that test-takers are not assigned to intervention courses that are too challenging or too easy for them.
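Choi's (2017) z-score approach can be illustrated with a minimal sketch. The subscale name and scores below are hypothetical, not drawn from that study; the point is only that standardising each subscale puts score patterns on a common scale for grouping:

```python
from statistics import mean, pstdev

def to_z_scores(subscale_scores):
    """Convert one subscale's raw scores to z-scores (population SD)."""
    m, sd = mean(subscale_scores), pstdev(subscale_scores)
    return [(x - m) / sd for x in subscale_scores]

# Hypothetical literal-comprehension subscale scores for five test-takers
literal = [12, 9, 14, 7, 10]
z = to_z_scores(literal)
# Test-takers with positive z-scores sit above the group mean on this subscale
above_mean = [i for i, v in enumerate(z) if v > 0]
```

Repeating this per subscale and plotting the resulting z-score patterns is what allows groups of test-takers with similar profiles to be labelled.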

Moreover, according to Jang (2009), growing attention to the impact of assessment and testing on student achievement has led scholars to call for more descriptive test information to provide feedback on student performance and learning. Yet past research has paid little attention to ‘profiles’ when reporting the results of tests and assessments (e.g., Spolsky, 1990). As Jang (2009) states, profiling of test results involves reporting performance on the multiple skills tested. In other words, profiling test results is more beneficial than reporting one overall result, because an overall result only tells test-takers and test-givers whether the outcome is a pass or a fail. However, the literature has shown that university-level test-takers and test-providers are not always sufficiently knowledgeable about score profiles, which creates difficulty in the actual use of profiled test results for diagnosis and intervention planning (Rea-Dickins et al., 2007).

Choi (2017) identified the different profiles of skill mastery by the test-takers and generated English proficiency profiles for 960 undergraduates at a university in the United States based on a screening test. The study identified seven profile groups from the most to the least proficient users of academic oral English proficiency, which were interpreted and labelled based on the subscale score patterns of the test-takers. Likewise, Jang (2009) employed the Fusion Model to create the skill profiles of 2,703 LanguEdge test-takers through skill mastery probability estimates. The findings showed that the model-estimated probabilities of mastery for the nine skills were significantly correlated with the test-takers’ self-reported assessment on their reading ability. 

Based on the review of the literature, profiling of test results can contribute positively to an understanding of the multifaceted construct of the varying English skills assessed by test instruments, as exemplified in the research by Choi (2017). To this end, our study on profiling T-READS performers provides an advantageous context for gathering rich information about the ERCA of engineering undergraduates. This discussion logically leads to the following null hypothesis: there is no statistically significant difference in ERCA scores among the groups of performers.

Context of the Study

Reading Ability in Thailand

For more than a century, English has maintained its importance as a foreign language in Thailand (Darasawang, 2007). As highlighted by Chomchaiya and Dunworth (2008), Thai students learn English as a foreign language (EFL) and often face considerable challenges in reading. Students’ infrequent reading of English materials, lack of opportunities to read and inadequate exposure to reading materials contribute to poor reading proficiency. In the Programme for International Student Assessment (PISA) 2015, Thailand’s reading performance fell below that of many other participating countries, including neighbouring counterparts such as Vietnam (Organisation for Economic Co-operation and Development [OECD], 2018). The reading achievement of the 15-year-old Thai students participating in PISA 2015 indicates that their mean performance (M = 409) was lower on average than the OECD mean (M = 493) (OECD, 2018). The low reading performance of Thai high school students suggests that, on advancing to the tertiary level, undergraduates still need ample guidance to improve their ERCA.

Reading in English is important for Thai undergraduates in the science and technology stream, especially engineering students, who are expected to help push Thailand towards scientific and technological independence. Like other EFL university students, undergraduate engineers must process and comprehend a large volume of English texts on their subject matter. Coping with engineering reading materials such as textbooks and journals places demanding requirements on lexical competence, given the specialist knowledge of the different content areas involved (Hsu, 2013). Piamsai (2017) reported that undergraduates lack important reading skills, particularly reading for the main idea and summarising. Kittidhaworn (2001) revealed similar findings, indicating that undergraduates expressed a need for reading for comprehension, especially reading for the main idea. This also corresponds with a study by Chawwang (2008), who proposed that students needed their reading comprehension skills enhanced.

However, studies have shown that many Thai undergraduates enter tertiary education ill-prepared for the reading demands placed upon them (e.g., Dunworth, 2008; Sani et al., 2011). In particular, undergraduate engineers are not adequately equipped by their secondary education to comprehend engineering reading materials in English at the tertiary level (Ward, 2009). Along this vein, PISA results on Thai students’ reading performance in 2015 revealed that only 8.3% of students were top performers in reading (OECD, 2016). Based on the OECD report for 2015, Thailand had the lowest percentage of top performers in reading (reading proficiency Levels 5 and 6) among the PISA-participating affiliates (OECD, 2016). As Khamkhong (2017) argues, the reading skills required to understand texts at the tertiary level are more demanding than those at the secondary level, yet some Thai undergraduates seem to have only secondary-level ERCA.

Our present work involved undergraduates from King Mongkut’s Institute of Technology Ladkrabang (KMITL), an engineering and technology university in Thailand. Results of KMITL’s placement test were gathered from the General Education Department (2018); the test was framed based on Classical Test Theory (CTT) and the CEFR. It consists of 80 multiple-choice questions and measured the English proficiency of 1,157 engineering undergraduates. The analysis revealed that more than 90% of undergraduates were at the basic-user levels of the CEFR (A1 [89.63%] and A2 [3.28%]), as shown in Figure 1. These levels are equal to the proficiencies of primary and lower-secondary school students in Thailand (Waluyo, 2019).

Note: The present study obtained permission from KMITL to cite the university in this work.
Figure 1: Results from KMITL’s placement test, based on the CEFR results held by the General Education Department (2018).

To assess and diagnose undergraduates’ problematic areas of reading, various reading assessment tools are used to gauge undergraduates’ ERCA accurately. Admission into Thai universities requires candidates to obtain the recommended scores on standardised tests such as the SAT, the American College Testing (ACT), the International Baccalaureate (IB) or the national higher education examinations (GAT, PAT1, PAT3). University candidates, especially those enrolling in international programmes, are expected to demonstrate a certain level of English proficiency by meeting minimum score requirements on tests such as the Test of English as a Foreign Language (TOEFL) and the International English Language Testing System (IELTS). At enrolment, these English proficiency tests are used diagnostically to determine whether candidates need additional EFL courses to improve their English proficiency, and each university sets its own minimum entrance requirement (e.g., for TOEFL and IELTS). However, within this pool of first-year students, needs analysis and profiling are not commonly conducted to identify students, especially those who only barely meet the minimum English proficiency requirement, who need additional support in specific English skills to flourish academically at the university level. Furthermore, some of these English proficiency tests can be expensive to administer and do not measure the specific English skills and subskills, such as those underlying ERCA, that need further intervention. Even when intervention classes are conducted at the university level to improve undergraduates’ English reading comprehension, the costliness of such interventions is apparent (Maxwell, 2019).

In the context of Thai engineering undergraduates and English learning, most technical English courses were designed to cater for undergraduates of intermediate proficiency who were expected to be able to read and write in technical English (Kaewpet, 2009). The researchers argued that, as most English teachers are aware, not all students can be expected to attain at least intermediate English ability or meet the basic standard of reading proficiency. In fact, undergraduates often enrol in technical English courses with widely differing levels of English language proficiency, many below the ERCA standard. According to Thailand’s PISA results, future undergraduates could only attain a moderate level (Level 2) of reading proficiency, at which students can identify the main idea in a text of moderate length, find information guided by explicit though sometimes complex criteria, and reflect on the purpose and form of texts when explicitly directed to do so (OECD, 2018). Therefore, the purpose of this study was to understand how the different reading proficiency subskills are defined at the micro level (i.e., what students at each level of reading performance are and are not able to do). The researchers followed Mohamed et al.’s (2010) definitions of the different performance standards (Academic Warning, Below Standard, Meet Standard and Above Standard) (see Appendix 1).

Chomchaiya and Dunworth (2008) found that while undergraduates appeared motivated to enhance their ERCA, they had yet to fully develop their reading skills. More importantly, Sureeyatanapas et al. (2016) investigated employers’ satisfaction with the English proficiency of the engineering graduates working in their organisations. The results revealed that Thai engineering graduates had not met employers’ requirements in English proficiency, and the survey further indicated that reading was the most important English skill that engineering graduates had yet to master. On a positive note, however, a study by Munsakorn (2012) revealed that undergraduates had a high level of awareness of reading strategies. The stark findings on the ERCA of engineering graduates, and the inconsistency in the findings on Thai undergraduates’ ERCA, warrant closer scrutiny through profiling undergraduates’ reading performance, to better align English reading proficiency interventions with English for specific purposes (ESP) courses.

In the literature on Thai undergraduates’ ERCA, a number of studies have concentrated on identifying the use of reading strategies (e.g., Chomchaiya & Dunworth, 2008; Munsakorn, 2012) and determining the attributions for successful or unsuccessful English learning (e.g., Khamkhong, 2017). The Commission on Higher Education (CHE) in Thailand has acknowledged the crucial need to enhance undergraduates’ English proficiency: undergraduates must take 12 credits, or four subjects, of English to complete their degree (Darasawang, 2007). However, mandating English as a compulsory subject at the tertiary level without profiling undergraduates according to their ERCA has resulted in one-size-fits-all interventions that largely miss undergraduates’ actual reading comprehension needs. The significance of profiling undergraduates’ ERCA with a statistically sound and reliable testing instrument, such as T-READS, is underscored by researchers in reading comprehension and testing (Aryadoust & Zhang, 2015; Jang, 2009). Jang (2009) refers to profiling as diagnostic feedback focusing on a test-taker’s abilities in the tested skills. Using instruments that measure reading comprehension ability could provide undergraduates with a “comprehensibility profile” (Isaacs et al., 2018, p. 199) and foster students’ self-awareness of their actual ERCA. This highlights the need for further research into profiling Thai undergraduate engineers’ ERCA as a first step towards analysing profiles of undergraduates of varying proficiency based on T-READS scores, instead of making generalised assumptions about the areas of undergraduates’ reading problems.

READS in Thailand

As an alternative tool for diagnosing university candidates’ English reading proficiency, READS was developed in 2009 and validated as a standardised test by a research team at Universiti Sains Malaysia (USM) (Mohamed et al., 2012). READS is a diagnostic tool that helps educators make sense of students’ actual ERCA by specifying what they can and cannot do. More specifically, READS provides a standard Reading Matrix table (see Appendix 1) that identifies whether students’ reading abilities are at “Above Standard”, “Meet Standard”, “Below Standard” or “Academic Warning” status. Reading scores are categorised into six bands using the READS 6-Band Scale with pre-established cut scores. According to Mohamed et al. (2013), READS has also been used at USM in Penang, Malaysia to gauge new undergraduates’ ERCA; in the 2016–2017 university intake, a total of 2,683 undergraduates from various programmes sat for the READS test. Based on the generated READS feedback, appropriate and immediate interventions can be developed to help enhance undergraduates’ ERCA during their academic years at the university.

Currently, in Thai universities, a Thai localised version of READS (T-READS) is administered to assess undergraduates’ ERCA. The reason for developing T-READS was to diminish the possibility of contextual bias in assessing ERCA (Khemanuwong et al., 2018). T-READS is a validated and calibrated instrument that can be used with undergraduates of any level. However, T-READS does not have a mechanism to automatically determine cut scores for the range of difficulty levels based on the actual assessment results of a specific group. Thus, the present study first sought to establish the cut scores for T-READS before profiling undergraduates’ ERCA.

Profiling readers’ ability refers to the illustration of reading behaviours using a range of indicators, labelled as bands, that indicate the students’ ERCA (Griffin, 1990). These profiles enable an analytical approach to assessing ERCA. The profiles provide a micro lens for describing strengths and weaknesses and supply information that can be used to identify appropriate targets and interventions for teaching and learning. Adopting the profiles of ERCA available in T-READS, our study attempted to profile the undergraduates’ ERCA in order to identify data-based interventions according to the undergraduates’ needs. This is similar to a needs analysis in ERCA, in that a needs analysis also identifies the target language teaching or learning needs in order to design an effective intervention (Sönmez, 2019).

To this end, this study primarily aimed to investigate the ERCA of first-year undergraduate engineers, with reading ability assessed using T-READS. More specifically, this study addresses the following four research objectives:

  1. To determine the range of the cut scores for T-READS.
  2. To analyse the test scores using descriptive statistics for benchmarking purposes.
  3. To profile undergraduates’ ERCA based on groups of performers.
  4. To examine statistically significant differences in the undergraduates’ scores on ERCA among four groups of performers.

Research Questions

In light of the above discussion, the present inquiry addresses the following research questions:

  1. What are the most suitable cut-scores for the different levels of EFL undergraduates’ ERCA? 
  2. What is the general ERCA of undergraduates of Faculty of Engineering based on the frequency in the six bands in T-READS?
  3. What are the profiles of undergraduates based on four groups of performers in T-READS:
       a. Above Standard Performers?
       b. Meet Standard Performers?
       c. Below Standard Performers?
       d. Academic Warning Performers?
  4. Is there a statistically significant difference in the undergraduates’ ERCA scores among the four groups of performers?

Method

Study Design

To collect and analyse the relevant data, the researchers conducted a quantitative study to address the research questions in line with the objectives of the study. The undergraduates’ ERCA was measured using T-READS, and the resultant data were analysed using descriptive and inferential analyses. A few reasons guided our decision to adopt T-READS as the assessment tool for gauging the undergraduates’ ERCA: (a) the participating Thai university in our study is at the beginning stage of adopting T-READS to measure freshmen’s ERCA; (b) the efficiency and cost-effectiveness of adapting the readily available T-READS; and (c) the fair cross-cultural character of T-READS, which had been adapted from READS by eliminating contextual bias (Khemanuwong et al., 2018).
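The inferential analysis reported later is a one-way ANOVA across the four performer groups. As a minimal sketch of what that computation involves, the F statistic can be derived from between-group and within-group sums of squares; the four score groups below are hypothetical and far smaller than the study's sample of 751:

```python
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F statistic for k independent groups of scores."""
    grand = mean(x for g in groups for x in g)
    k = len(groups)
    n = sum(len(g) for g in groups)
    # Between-group variability (df = k - 1)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group variability (df = n - k)
    ss_within = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical T-READS scores for four performer groups
above   = [52, 55, 58]
meet    = [40, 43, 45]
below   = [30, 31, 33]
warning = [15, 18, 20]
F = one_way_anova_f([above, meet, below, warning])
```

A large F, as in the study's result of F(3, 747) = 1476.66, indicates that variation between the group means dwarfs variation within groups.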

Participants

A total of 751 first-year undergraduate engineers of KMITL served as participants of this study. The participants came from an array of different engineering majors (Table 1). 

Characteristic   Category              Frequency   Percent (%)
--------------------------------------------------------------
Age              17-20                       739         98.4
                 21-24                        12          1.6
Gender           Male                        477         63.5
                 Female                      274         36.5
Major            Agriculture                   1          0.1
                 Automation                   28          3.7
                 Chemical                     37          4.9
                 Civil                        66          8.8
                 Computer                    107         14
                 Control                      28          3.7
                 Electrical                    2          6.3
                 Electronics                  88         11.7
                 Food                         30          4.0
                 Industrial                   41          5.5
                 Information                   1          0.1
                 Instrumentation              38          5.1
                 Mechanical                   50          6.7
                 Mechatronic                  29          3.9
                 Music                        25          3.3
                 Petrochemical                19          2.5
                 Production Design            29          3.9
                 Rail Transportation          31          4.1
                 Telecommunication           100         13.3

Note: Total number of participants, N = 751
                          Table 1: Demographics of participants

Instrument

Data for this study were collected via T-READS. T-READS consists of three components: the Encoder (i.e., the test instrument), a generic test used to measure the reading comprehension ability of any student at a particular educational level, as pointed out by Ismail et al. (2018); the Analyser (i.e., the Reading Matrix); and the Decoder (i.e., the descriptors of reading abilities), as illustrated in Figure 2. First, the Encoder, comprising 60 multiple-choice questions, measures the test-takers’ ERCA. Second, acting as a cross-reference for the analysis of test-takers’ ERCA, the Analyser grades test-takers into the appropriate category of “Above Standard”, “Meet Standard”, “Below Standard” or “Academic Warning”. Last, the Decoder determines the performance band of the test-takers based on their answers, identifying what undergraduates from Band 1 to Band 6 can and cannot do.

                         Figure 2. Components of the T-READS adapted based on Ismail et al. (2018)
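The Encoder-Analyser-Decoder flow can be sketched as a simple classification function. The cut scores and band-to-standard mapping below are purely hypothetical placeholders (the study establishes the actual cut scores empirically), shown only to make the three-component pipeline concrete:

```python
# Hypothetical cut scores (lower bounds of Bands 2-6 on the 60-item Encoder);
# the study derives the actual values empirically.
BAND_CUTS = [10, 20, 30, 40, 50]
# Hypothetical mapping from Decoder band to Analyser standard.
STANDARDS = {1: "Academic Warning", 2: "Below Standard", 3: "Meet Standard",
             4: "Meet Standard", 5: "Above Standard", 6: "Above Standard"}

def classify(raw_score):
    """Map an Encoder raw score to (Decoder band, Analyser standard)."""
    band = 1 + sum(raw_score >= cut for cut in BAND_CUTS)
    return band, STANDARDS[band]
```

Under these assumed cut scores, for example, a raw score of 45 falls in Band 5 and the "Above Standard" category, while a raw score of 5 falls in Band 1, "Academic Warning".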

T-READS is a system for benchmarking EFL reading comprehension proficiency. As mentioned earlier, T-READS was adapted by Khemanuwong et al. (2018) from READS to determine the standard of undergraduates’ ERCA in Thai universities, eliminating contextual biases in the reading assessment, or Encoder (test), by addressing differences in cultural, background and world knowledge. By eliminating contextual bias, the existing Malaysian-developed instrument could be culturally fitted to the Thai target audience, avoiding unfamiliar content. In a reading passage, for instance, the Malaysian Ringgit (RM) was replaced with the Thai Baht, and Tesco, a British multinational grocery chain operating in Malaysia, was replaced with Tesco Lotus, its Thai counterpart. Furthermore, the term ‘UPSR’ (Ujian Pencapaian Sekolah Rendah), a national examination taken by all Malaysian students at the end of their sixth year, was changed to ‘O-NET’ (Ordinary National Education Test) to reflect the Thai environment. By providing a test instrument designed to address contextual bias, test-takers’ confidence and mastery could be enhanced by drawing their attention away from the instrumentality of the test (Reeve et al., 2008).

Since the modification was made only to the Encoder (test), T-READS retained the Analyser (Reading Matrix) and Decoder (Descriptors) as pre-established in READS to inform test administrators about the details of the test-takers’ ERCA. The Decoder provides descriptions (i.e., performance indicators) of a student’s reading performance from Band 1 to Band 6, containing descriptors of reading comprehension abilities that indicate what readers can and cannot do (Ismail et al., 2018) (see Appendix 1). 

Based on Mok’s (1994) argument, test items are proportionately distributed across three difficulty levels: easy (25%), average (50%) and difficult (25%). Consistent with this, T-READS assesses undergraduates’ reading comprehension using items distributed across three reading skills: literal (25%), reorganisation (50%), and inferential (25%). T-READS showed a satisfactory average level of difficulty (p = 0.64), and its items presented acceptable to optimal discrimination, with an average discrimination index of 0.51. 
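The reported item statistics (difficulty p = 0.64; mean discrimination 0.51) are standard classical test theory indices. The following is a minimal sketch of how such indices are typically computed; the response data are hypothetical, and the upper-lower discrimination method shown is one common variant, not necessarily the exact procedure the authors used.

```python
# Classical item analysis on dichotomous (0/1) item responses.
# Data below are hypothetical, for illustration only.

def item_difficulty(responses):
    """Difficulty index p: proportion of test-takers answering correctly."""
    return sum(responses) / len(responses)

def item_discrimination(item_scores, total_scores, top_frac=0.27):
    """Upper-lower index: p(correct) in the top 27% minus the bottom 27%."""
    order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    k = max(1, int(round(top_frac * len(total_scores))))
    low, high = order[:k], order[-k:]
    p_high = sum(item_scores[i] for i in high) / k
    p_low = sum(item_scores[i] for i in low) / k
    return p_high - p_low

# Hypothetical: one item's scores for 10 test-takers, plus their totals
item = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
totals = [52, 48, 20, 45, 15, 40, 55, 18, 47, 50]
print(item_difficulty(item))              # 0.7
print(item_discrimination(item, totals))  # 1.0
```

An item answered correctly by everyone (or no one) has zero discrimination; values around 0.5, as reported for T-READS, indicate items that separate strong from weak readers well.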

In T-READS, undergraduates respond to different types of texts, such as dialogues, newspaper reports, and descriptive texts (see Mohamed et al., 2010). The test was uploaded to the Online Test of English Proficiency website for undergraduates to take online. The time allocated for the test was 70 minutes, based on the results of a pilot study (Khemanuwong et al., 2018). 

To establish the content validity of the instrument, the researchers computed the Content Validity Index (CVI) for each individual item and for the entire scale based on the ratings of three experts. According to Polit and Beck (2006), the CVI is an index of interrater agreement that expresses the proportion of agreement on item relevancy. As Lynn (1986) highlighted, the CVI is the most widely used approach to content validity for deciding whether items are rejected or retained. T-READS was validated by a panel of three English lecturers from Thailand to ensure its relevance in measuring the undergraduates’ ERCA. The experts rated the relevancy of the T-READS items using a validation form with a 4-point scale (1 = not relevant; 2 = somewhat relevant; 3 = quite relevant; 4 = highly relevant). Some items were improved for the Thai context based on the experts’ recommendations; the instrument was then revised, and the items were finalised. 

As shown in Table 2, the S-CVI for T-READS relevancy is .97 based on the ratings of the subject matter experts. A rating of .97 is acceptable according to the S-CVI guideline on item acceptability suggested by Davis (1992). This implies that, by the experts’ agreement on the test items, T-READS is relevant for measuring the undergraduates’ ERCA.

 

              S-CVI     Mean Expert Proportion
T-READS       .97       .97

Note: S-CVI, content validity index for the scale
                              Table 2: S-CVI for relevancy of T-READS
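The CVI computation described above can be sketched as follows: an item’s I-CVI is the proportion of experts rating it 3 or 4 on the 4-point relevance scale, and the scale-level S-CVI (averaging approach) is the mean of the I-CVIs. The ratings below are hypothetical, not the panel’s actual data.

```python
# Content Validity Index following Polit and Beck (2006).
# Hypothetical ratings: 3 experts x 4 items on the 4-point scale.

def i_cvi(ratings):
    """Item-level CVI: proportion of experts rating the item 3 or 4."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def s_cvi_ave(item_ratings):
    """Scale-level CVI: average of the item-level CVIs."""
    cvis = [i_cvi(r) for r in item_ratings]
    return sum(cvis) / len(cvis)

ratings = [
    [4, 4, 3],  # item 1: all three experts rate it relevant, I-CVI = 1.00
    [4, 3, 4],  # item 2
    [3, 4, 2],  # item 3: one "somewhat relevant" rating, I-CVI = 0.67
    [4, 4, 4],  # item 4
]
print(round(s_cvi_ave(ratings), 2))  # 0.92
```

With 60 items and three raters, an S-CVI of .97 means the experts judged nearly every item relevant, consistent with Davis’s (1992) acceptability guideline.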

To ensure that the instrument was reliable, a pilot study was conducted using T-READS, adapted from the original READS (Mohamed et al., 2010). A total of 624 undergraduates sat for the test, ranging from first-year to fourth-year undergraduates majoring in engineering, architecture, agriculture, liberal arts and sciences. Stringent measures were taken to confirm that the cut scores were accurate and represented the actual reading comprehension proficiency of undergraduates. The participants in the pilot study were not included in the main study. The number of undergraduates from each educational level is presented in Figure 3.

                          Figure 3: Number of undergraduates in each educational level

The pilot data were analysed with the Kuder-Richardson Formula 20 (KR-20) in IBM SPSS Statistics version 22 to establish the reliability of the test and to determine the cut scores for the reading comprehension proficiency levels. T-READS obtained a high KR-20 coefficient of 0.91, indicating the homogeneity of the test, as shown in Table 3.

Reliability Test    Thai READS Reliability Value    No. of Items (N)    No. of Participants
KR-20               0.91                            60                  624

                                                 Table 3: Analysis of test reliability
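The KR-20 coefficient reported in Table 3 measures the internal consistency of a test with dichotomously scored items. A minimal sketch of the formula, KR-20 = (k/(k-1)) x (1 - sum(p*q)/variance of totals), follows; the small response matrix is hypothetical, not the 624-student pilot data.

```python
# KR-20 internal-consistency coefficient for 0/1-scored items.
# Rows = test-takers, columns = items; data are hypothetical.

def kr20(matrix):
    n = len(matrix)       # number of test-takers
    k = len(matrix[0])    # number of items
    totals = [sum(row) for row in matrix]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n  # population variance
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in matrix) / n  # proportion correct on item j
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_t)

matrix = [
    [1, 1, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
print(round(kr20(matrix), 3))  # 0.75
```

Values approaching 1.0, such as the 0.91 obtained for T-READS, indicate that the 60 items behave consistently as a measure of a single underlying ability.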
Data Collection 

Prior to commencing data collection, the researchers obtained administrative approval from the university and the participants’ consent, ensuring the inquiry was ethical and respectful. In addition, an in-depth clarification of the study and its main purpose was provided to the participants before the study began. All participants remained confidential and anonymous in the test results, and no personally identifiable information was captured. The test was administered to the university freshmen in accordance with the procedures set for the application of READS as outlined by Mohamed et al. (2010).

Data Analysis

The test scores were compiled in Microsoft Office Excel to identify the undergraduates’ reading abilities by categorising undergraduates into the relevant bands (Band 1 to Band 6). The undergraduates’ ERCA was then matched against the Reading Matrix and, last, aligned with the corresponding reading abilities (i.e., Performance Standards and Descriptors). 

To answer research question 1, cut scores for the performance bands were developed based on undergraduates in a public university in Thailand. According to Tannenbaum and Wylie (2008), there is no single correct or exact cut score for determining acceptable performance levels. Following Mohamed et al.’s (2010) recommendation, the ERCA of high, average and low performers at each educational level (Year 1 to Year 4 undergraduates) was compared to determine the cut scores of the performance bands. Because the present study developed cut scores from the undergraduates’ READS performance at one institution, the developed cut scores apply only to that particular university; different cut scores might need to be considered for different contexts outside Thailand (Zieky et al., 2006). 

Research question 2 employed frequencies (descriptive statistics) to find out the general ERCA of the freshmen from the various fields in the Faculty of Engineering, as presented in Figure 4. 

Figure 4. First-year undergraduates in Faculty of Engineering, KMITL 

For research question 3, the Performance Standards were used to identify the undergraduates’ performance levels in reading achievement: (a) Above Standard performers; (b) Meet Standard performers; (c) Below Standard performers; and (d) Academic Warning performers. The undergraduates’ specific reading abilities were analysed using the descriptors. From the T-READS results, it was then determined whether the undergraduates’ scores met the expected standard of ERCA for undergraduates.

For research question 4, the researchers employed inferential statistics, specifically the one-way analysis of variance (one-way ANOVA), to determine whether there were statistically significant differences in ERCA among the four groups of performers: (a) Above Standard performers; (b) Meet Standard performers; (c) Below Standard performers; and (d) Academic Warning performers. According to Fraenkel, Wallen, and Hyun (2012), one-way ANOVA is an extension of the t-test used with three or more groups, comparing the variance both within and between the groups (the F value). 
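The one-way ANOVA just described partitions total variation into between-group and within-group sums of squares and forms F as the ratio of their mean squares. The sketch below illustrates this with small hypothetical score groups, not the study’s data.

```python
# One-way ANOVA computed by hand on hypothetical reading-score groups.

def one_way_anova(groups):
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: group sizes times squared mean deviations
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: deviations of scores from their group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = len(groups) - 1, n - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Hypothetical scores for the four performer groups
warning = [10, 12, 14]
below = [18, 20, 22]
meet = [30, 28, 32]
above = [50, 52, 54]
f, df1, df2 = one_way_anova([warning, below, meet, above])
print(df1, df2)  # 3 8
```

In the study itself the degrees of freedom are 3 and 747 (four groups, 751 students); a large F, as in Table 6, means the group means are far apart relative to the spread within groups.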

A follow-up test is necessary to ascertain which of the compared groups differ significantly from each other. A post-hoc analysis using Tukey’s Honest Significant Difference test (Tukey’s HSD) was therefore carried out to find out which of the group means differed significantly among the performers being compared.
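To illustrate how the entries of a Tukey-style comparison table are formed, the sketch below computes each pair’s mean difference and its standard error, sqrt(MSW x (1/ni + 1/nj)), where MSW is the within-group mean square from the ANOVA. The groups and MSW value are hypothetical; full HSD significance testing would additionally require the studentized range distribution.

```python
import math

# Pairwise mean differences and standard errors, as reported in a
# Tukey HSD table. All data here are hypothetical.

def pairwise(groups, ms_within):
    names = sorted(groups)
    rows = []
    for a in names:
        for b in names:
            if a == b:
                continue
            diff = sum(groups[a]) / len(groups[a]) - sum(groups[b]) / len(groups[b])
            se = math.sqrt(ms_within * (1 / len(groups[a]) + 1 / len(groups[b])))
            rows.append((a, b, round(diff, 2), round(se, 2)))
    return rows

groups = {"warning": [10, 12, 14], "meet": [30, 28, 32]}
ms_within = 4.0  # hypothetical within-group mean square from the ANOVA
for row in pairwise(groups, ms_within):
    print(row)
```

Each comparison appears twice with opposite signs, which is why Table 7 later lists, for example, both Meet Standard minus Academic Warning and its mirror image.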

Results

The researchers calculated the cut scores for the undergraduates’ ERCA based on the reading comprehension results from T-READS to answer research question 1. For T-READS, the cut scores from Band 1 to Band 6 were determined in accordance with the z-score, a standard score representing the number of standard deviations between a data point and the mean (Abdi, 2007). In other words, a z-score of 0 means that an undergraduate’s score equals the mean. A positive z-score indicates a score above the mean; conversely, a negative z-score indicates a score below the mean. A score exactly 1 SD above the mean corresponds to a z-score of +1, and a score exactly 1 SD below the mean corresponds to a z-score of -1, as illustrated in Figure 5.

Figure 5: The normal curve relationship between z-score and location in a standard distribution

Undergraduates were categorised into six performance bands (Band 1 to Band 6) based on their ERCA on T-READS. Band 1, Band 2 and Band 3 would be below average while Band 4, Band 5 and Band 6 would be above average. Table 4 illustrates how the cut scores based on the z-scores were developed.

Bands     Band 1    Band 2    Band 3    Band 4    Band 5    Band 6
Scores    0-15      16-22     23-35     36-47     48-53     54-60

                              Table 4: Cut Scores based on Z-scores

As shown above, the researchers placed Band 4 between 0 SD and +1 SD, indicating that these undergraduates scored between the mean and +1 SD; Band 5 therefore reflects a higher ERCA than Band 4. Following the same procedure, Band 5 was placed between +1 SD and +2 SD, and Band 6 above +2 SD. On the other side of the mean, Band 3 was placed between -1 SD and 0 SD, Band 2 between -2 SD and -1 SD, and, last, Band 1 below -2 SD.
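The band placement just described can be sketched as a simple z-score mapping with boundaries at -2, -1, 0, +1 and +2 SD. The mean and SD below are hypothetical illustrations, not the study’s pilot values.

```python
# Map a raw T-READS score to a band via its z-score, following the SD
# boundaries described above: Band 1 below -2 SD up to Band 6 at or
# above +2 SD. Mean and SD are hypothetical.

def band_from_score(score, mean, sd):
    z = (score - mean) / sd
    boundaries = [-2, -1, 0, 1, 2]  # SD cut points between adjacent bands
    band = 1 + sum(1 for b in boundaries if z >= b)
    return z, band

mean, sd = 29.0, 6.5  # hypothetical pilot mean and standard deviation
for score in (12, 20, 27, 33, 40, 48):
    z, band = band_from_score(score, mean, sd)
    print(score, round(z, 2), band)
```

Rounding the resulting z-score boundaries to whole raw scores yields integer cut-score ranges of the kind shown in Table 4.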

Researchers categorised undergraduates into each band according to their ERCA using pre-determined percentiles. According to Table 4, the range of scores for Band 1 sat at the 2.14th percentile, with T-READS scores below 15. The T-READS score range for Band 2 was 15 to 21.9; for Band 3 it was 22 to 34.9, for Band 4 it was 35 to 46.9, for Band 5 it was 47 to 52.9, and Band 6 was above 53.

Based on the results of the cut scores, the appropriate band to be achieved by first-year undergraduates was determined. The T-READS Matrix was then adapted as shown in Table 5 below.

 

                          Band 1      Band 2      Band 3    Band 4    Band 5    Band 6

Scores                    0-15        16-22       23-35     36-47     48-53     54-60

Year 1 Thai               Academic    Below       Meet Standard       Above Standard
Undergraduates            Warning     Standard

Aligning with Reading     12          13          14        15        16        17
Performance Year in
READS (Reading Age)

Table 5: Thai READS Matrix: Students’ performance on academic standards

According to the T-READS Matrix, Year 1 undergraduates should correspond to Band 3, which indicates that they have achieved the average mean score. The expected performance band for Year 1 undergraduates is therefore Band 3 and above. Aligned with the reading performance year (i.e., reading age) depicted in READS, Band 3 is equivalent to the ERCA of a 14-year-old reader. According to Ismail et al. (2018), ideally all Year 1 undergraduates entering universities should have achieved Band 6, a reading age of 17. However, the actual ERCA of Year 1 Thai engineering undergraduates may prove otherwise, which the present study sought to investigate in order to diagnose the undergraduates’ reading ability.

For research question 2, researchers categorised the undergraduates’ T-READS ERCA scores into the six bands. The data obtained from 751 undergraduates from the Faculty of Engineering, KMITL were analysed using descriptive statistics, and the results were mapped to the different bands (Band 1 to Band 6) to identify the undergraduates’ ERCA, as presented in Figure 6.

Figure 6: First-year engineering undergraduates’ scores in T-READS

As shown in Figure 6, most undergraduates (375 out of 751) achieved Band 3 or Band 4. A total of 176 undergraduates scored Band 5 and above, while 200 undergraduates were at Band 1 or Band 2. 

To answer research question 3, the researchers determined the profiles of the four groups of performers in T-READS and identified whether the undergraduates’ scores met the expected standard of ERCA for undergraduates. The ERCA results were cross-referenced against the T-READS matrix to map each result to the appropriate range, and the undergraduates’ ERCA was analysed based on their T-READS performance using descriptive statistics, as shown in Figure 7.

Note: Na, Nb% - Na represents the number of undergraduates based on the Performance Standard Indicators; Nb represents the percentage of undergraduates based on the Performance Standard Indicators.
                    Figure 7. Group of performers based on the performance standard indicators

Figure 7 presents the undergraduates’ ERCA based on the Performance Standard Indicators (Academic Warning, Below Standard, Meet Standard and Above Standard). Most undergraduates were “Meet Standard” performers (50%), followed by “Above Standard” performers (24%), while 8% of undergraduates fell below even the “Below Standard” grade, into “Academic Warning”. Overall, a greater number of undergraduates (26%) could not meet the average standard than performed above it (24%). The READS performance standards developed by Mohamed et al. (2010) were used to gauge the ERCA of each group of performers (e.g., Academic Warning and Below Standard performers). These performance standards were adapted from the Prairie State Achievement Examination (Illinois State Board of Education, 2004) to align with the performance bands and reading performance indicators.

For research question 4, the researchers sought to find out whether there was a statistically significant difference in the undergraduates’ ERCA scores among the four groups of performers. One-way ANOVA was conducted to confirm the discriminating power of the cut scores. The results showed statistically significant differences in the undergraduates’ ERCA scores among the four groups of performers, F (3, 747) = 1476.66, p = 0.00, as shown in Table 6.

 

                    Sum of Squares    df     Mean Square    F          Sig.
Between Groups      149187.23         3      49729.08       1476.66    0.00
Within Groups       25156.49          747    33.68
Total               174343.72         750

                                        Table 6: One-way ANOVA Results 

Post-hoc multiple comparisons using the Tukey HSD test were then conducted for each pair of groups: (a) Academic Warning and Below Standard; (b) Academic Warning and Meet Standard; (c) Academic Warning and Above Standard; (d) Below Standard and Meet Standard; (e) Below Standard and Above Standard; and (f) Meet Standard and Above Standard. The comparisons indicated a statistically significant difference in the undergraduates’ scores between every pair (p = 0.00), identifying which specific group means differed from each other, as presented in Table 7. 

Dependent Variable: Score
Tukey HSD

(I) Group           (J) Group           Mean Difference    Std. Error    Sig.    95% CI Lower    95% CI Upper
Academic Warning    Below Standard      -5.67*             0.88          0.00    -7.94           -3.39
                    Meet Standard       -21.51*            0.79          0.00    -23.55          -19.48
                    Above Standard      -43.59*            0.85          0.00    -45.78          -41.40
Below Standard      Academic Warning    5.67*              0.88          0.00    3.39            7.94
                    Meet Standard       -15.84*            0.57          0.00    -17.33          -14.35
                    Above Standard      -37.92*            0.66          0.00    -39.62          -36.22
Meet Standard       Academic Warning    21.51*             0.79          0.00    19.48           23.55
                    Below Standard      15.84*             0.57          0.00    14.35           17.33
                    Above Standard      -22.07*            0.53          0.00    -23.44          -20.71
Above Standard      Academic Warning    43.59*             0.85          0.00    41.40           45.78
                    Below Standard      37.92*             0.66          0.00    36.22           39.62
                    Meet Standard       22.07*             0.53          0.00    20.71           23.44

              Note: *. The mean difference is significant at the 0.05 level.
Table 7: Multiple comparisons of students’ scores among four groups of performers

Discussion

Establishing Cut Scores and Profiling of ERCA for T-READS

The present study established cut scores for T-READS, which allow the interpretation of ERCA test results using z-scores and standard deviations. Following the guidance of Hambleton et al. (2012) and Powers et al. (2016) on setting cut scores in criterion-referenced achievement tests, the researchers differentiated ERCA into six categories, from Band 1 to Band 6, and determined the reading comprehension abilities of each group. These results could serve as an empirical basis for categorising reading performers and for deciding which courses should be offered as reading interventions (Choi, 2017). 

By establishing the cut scores and profiling ERCA for T-READS, this study answers the call for measuring instruments that can effectively profile the knowledge and abilities of test-takers (Aryadoust & Zhang, 2015; Jang, 2009). Based on the test results, four profiles representing the varying reading comprehension abilities of the undergraduate engineers were determined: “Academic Warning”, “Below Standard”, “Meet Standard” and “Above Standard”. The profile group structures identified through profiling test results can serve as empirical evidence for evaluating whether test-takers’ skills meet a university’s expected standards, in the case of the current research, the first-year undergraduates’ ERCA (Choi, 2017).

Reading Comprehension Ability of Undergraduate Engineers according to T-READS Bands

Our findings established the expected standard of ERCA for undergraduates as at least Band 3. By identifying cut scores based on the participants’ T-READS performance, the researchers determined this expected standard and the corresponding performance standards for Year 1 undergraduates (see Appendix 2), classifying the participants’ ERCA into “Academic Warning”, “Below Standard”, “Meet Standard” and “Above Standard”. Thai undergraduate engineers performed at a fair level of reading proficiency on the T-READS assessment: most participants (74%) achieved Band 3 and above and can be regarded as “Meet Standard” or “Above Standard” performers. As stated in the Descriptors of Reading Abilities within the READS system, undergraduates who achieve “Meet Standard” or “Above Standard” on T-READS have a better command of reading comprehension strategies, such as literal, reorganisation and inferential skills in reading. These findings correspond to Munsakorn’s (2012) study, which found that first-year engineering undergraduates at Bangkok University were proficient readers who could apply reading strategies. In the present study, the undergraduate engineers demonstrated moderate to high proficiency. One reason could be the stringent selection and acceptance process for the engineering programme at the Thai university studied: prospective undergraduates must sit the university entrance exam (TCAS) or fulfil minimal criteria on an English language proficiency test (i.e., TOEFL, IELTS). Therefore, most undergraduates in the Faculty of Engineering would possess ERCA at least at a minimal level and above. 

In contrast, the study’s results contradicted those of Chomchaiya and Dunworth (2008) and Rajprasit et al. (2015), who revealed that Thai undergraduates’ English reading proficiency was unsatisfactory (i.e., underdeveloped reading skills such as the use of contextual clues or locating main ideas, and difficulty in decoding words). More importantly, the present findings disagree with the research by Chomchaiya and Dunworth (2008) because most of the first-year undergraduates here were prepared for the reading demands of pursuing their degree, being at least “Meet Standard” to “Above Standard” performers. The difference may be attributed to the population and sample sizes of the previous studies. Considering a larger sample size for precision and a more accurate representation of the population (Biau et al., 2008) would give useful insights into the ERCA of Thai undergraduates, especially in research into undergraduate engineers’ ERCA in Thailand. 

From the undergraduates’ performance bands and their achievement on T-READS, a holistic picture of what they can or cannot do may be identified (Mohamed et al., 2010). The ERCA of undergraduates can be described by their ability to answer literal, reorganisation and inferential comprehension questions. Our study further revealed that the percentage of undergraduates who could not meet the average standard in ERCA was substantially high: the 26% of the sample who underperformed in reading achievement would have low ability in all of the literal, reorganisation and inferential comprehension skills. By contrast, undergraduates in Khamkhong’s (2018) study performed well on literal questions but poorly on interpretative and critical questions; simply put, most Thai undergraduates could comprehend the reading texts but faced difficulties in interpreting or evaluating their underlying meaning. However, the present study was limited in identifying the specific reading strategies (i.e., cognitive and metacognitive strategies) that undergraduates used in answering the reading comprehension items. 

Even though, on average, the Thai first-year undergraduate engineers in this sample performed at the satisfactory and expected level of Band 3 on the T-READS test, considerable critical attention needs to be given to the underperformers. About one-fourth (26%) of the undergraduate engineers failed to meet the minimal standard of expected ERCA performance on T-READS. This confirms past studies that stressed the importance of Thai universities’ efforts to elevate undergraduates’ poor ERCA (Khamkhong, 2018). According to that study, the Office of the Higher Education Commission (OHEC) in Thailand emphasises the implementation of an English exit exam for undergraduates prior to graduation to ensure they are proficient English language users. Reading intervention programmes are suggested to remedy the issue, to prepare these undergraduates for the English exit exam, and to increase their employability after graduation.

T-READS as a Means of Capturing ERCA

Using inferential statistics (i.e., the one-way ANOVA and Tukey HSD tests), the findings indicated that “Above Standard” performers had significantly higher ERCA scores than “Academic Warning” performers. It can be inferred that T-READS effectively differentiated the test-takers into the four groups of performers across the three comprehension skills (i.e., literal, reorganisation and inferential comprehension) (Mohamed et al., 2010). For literal skills, for example, undergraduates profiled as “Academic Warning” could be identified as those who “Can hardly locate the supporting details. Can understand only a few words. Guess answers.” (see Appendix 1). This result diverges from Jang’s (2009) concern about the diagnostic capacity of some test instruments. Presently, most English proficiency tests in Thailand, such as the General Aptitude Test (GAT), serve as diagnostic tests only at the most general level, determining which additional EFL courses undergraduates take. Most universities in Thailand still lack a university-level screening test that measures specific English skills (Jianrattanapong, 2011), especially for assessing ERCA and diagnosing undergraduates’ problematic areas. In view of the unavailability of such a reading test, T-READS presents itself as a statistically sound and inexpensive means of capturing undergraduates’ ERCA.

When T-READS is used to diagnose ERCA, the varying reading comprehension ability groups provide informative data for deciding on the reading intervention to be provided to first-year undergraduate engineers. Based on T-READS, those performing at “Below Standard” were undergraduates who could only partially achieve the objectives of the syllabus set for Year 1 and Year 2 undergraduates’ English language proficiency. These undergraduates could manage the skills and sub-skills of reading comprehension only at a relatively low or alarmingly low level and should be given the appropriate and necessary reading interventions. According to Wasburn-Moses (2006), undergraduates graded as “Below Standard” or “Academic Warning” need reading support and intervention. Choi (2017) suggested within-group tutorial programmes that lead test-takers to at least a satisfactory achievement or a passing grade. Such programmes may improve undergraduates’ ERCA by equipping struggling readers with the necessary skills and strategies for comprehending texts, especially texts related to their technical fields.

Conclusion and Pedagogical Implications

Reading English efficiently is a central objective of Thailand’s education. This study has argued that this objective is underachieved, because the literature has found Thai graduates to underperform in English reading abilities. The primary strength of this study was the use of T-READS to discover the actual ERCA of Thai undergraduates and their reading standards; hence, the study investigated the ERCA of undergraduates using T-READS in pursuit of profiling their actual and specific abilities. Using the localised T-READS, English language lecturers can identify students’ current reading abilities and diagnose their weaknesses in reading, since the test scores provide information that can inform universities and lecturers about learners’ specific reading abilities and guide future intervention courses tailored to those abilities. 

Although this study was limited to Thai students, its findings hold several significant implications for both assessing and diagnosing undergraduate engineers’ ERCA, particularly for planning reading interventions to better prepare future engineers in general. The findings are relevant to test-takers, who can use the summarised feedback from the T-READS system, which can be localised, to understand their performance in reading comprehension. On the basis of their ERCA results in T-READS, test-takers are expected to improve their weaker areas.

Although the present research is limited to Thai first-year undergraduate engineers of a public university, the findings provide further insights into the planning of intervention programmes or English language courses for undergraduates. Based on the findings of this study, the researchers are in support of the need to have reading courses designed based on the English content for a specialised field of engineering (e.g., Hsu, 2013; Kaewpet, 2016). Likewise, the technical English courses should also provide interventions for undergraduate engineers as suggested by Read (2015). Read (2015) introduced a post-entry language assessment (i.e., Diagnostic English Language Needs Assessment, DELNA) as an intervention initiative at the university entry level to identify undergraduates who are struggling to meet the English language demands of their degree programme. DELNA successfully provided struggling undergraduates with the relevant academic language support given by language advisors after conducting English proficiency screening of all first-year undergraduates and paper-based diagnosis of their language skills.

Recommendations for Future Research

This study was descriptive in nature and, given its limitations, future studies are recommended to examine the use of different reading strategies that distinguish high-performing from low-performing readers in the context of Thai undergraduates. Given the sample of first-year undergraduate engineers from one university in Thailand, the data yielded by this study may not be wholly representative of all undergraduates in Thai universities. Future studies can therefore extend this form of analysis to other higher education institutions and education levels. 

In the present study, the researchers focused on identifying undergraduates’ ERCA. Nevertheless, there are other factors and intervention activities that help improve undergraduates’ reading proficiency which the researchers were unable to measure. Additional studies are therefore suggested to include factors that influence undergraduates’ reading proficiency, for instance, developing instructional interventions that aim to improve English classroom delivery at the tertiary level (Wen, 2018). 

Last, the researchers drew heavily on quantitative measures and did not incorporate qualitative data in profiling the reading comprehension of the undergraduates in this sample. Whilst the robustness of this study was ensured by incorporating both descriptive and inferential statistics, future projects could be strengthened by including discussions built around test-takers’ opinions on the T-READS exam, the reading components and their scores.

Acknowledgment

The authors would like to show their gratitude to the General Education Department, King Mongkut’s Institute of Technology Ladkrabang and Ekkapon Phairot, Songkhla Rajabhat University, who provided insight and expertise that greatly assisted the research.

References 

Anderson, R. C. (1978). Schema-directed processes in language comprehension. In A. M. Lesgold, J. W. Pellegrino, S. D. Fokkema, & R. Glaser (Eds.), Cognitive Psychology and Instruction. (pp. 67-82). Plenum.

Aryadoust, V., & Zhang, L. (2015). Fitting the mixed Rasch model to a reading comprehension test: Exploring individual difference profiles in L2 reading. Language Testing, 33(4), 529-553. https://doi.org/10.1177%2F0265532215594640 

Biau, D. J., Kernéis, S., & Porcher, R. (2008). Statistics in brief: The importance of sample size in the planning and interpretation of medical research. Clinical Orthopaedics and Related Research, 466(9), 2282-2288. https://doi.org/10.1007/s11999-008-0346-9 

Calvete, E., & Orue, I. (2012). Social information processing as a mediator between cognitive schemas and aggressive behaviour in adolescents. Journal of Abnormal Child Psychology, 40(1), 105-117. https://doi.org/10.1007/s10802-011-9546-y 

Charubusp, S., & Chinwonno, A. (2014). Developing academic and content area literacy: The Thai EFL context. Reading Matrix: An International Online Journal, 14(2). 119-134. http://www.readingmatrix.com/files/11-12362177.pdf 

Chawwang, N. (2008). An Investigation of English Reading Problems of Thai 12th-Grade Students in Nakhonratchasima Educational Regions 1, 2, 3, and 7 [Unpublished master’s thesis], Srinakharinwirot University. https://www.academia.edu/29554333/AN_INVESTIGATION_OF_ENGLISH_READING_PROBLEMS_OF_THAI_12_TH_GRADE_STUDENTS_IN_NAKHONRATCHASIMA_EDUCATIONAL_REGIONS

Choi, I. (2017). Empirical profiles of academic oral English proficiency from an international teaching. Language Testing, 34(1), 49-82. https://doi.org/10.1177%2F0265532215601881 

Chomchaiya, C., & Dunworth, K. (2008, November 19-21). Identification of learning barriers affecting English reading comprehension instruction, as perceived by ESL undergraduates in Thailand [Conference session]. EDU-COM 2008 International Conference. Sustainability in Higher Education: Directions for Change, Edith Cowan University, Perth, Western Australia. http://ro.ecu.edu.au/ceducom/10 

Darasawang, P. (2007). English language teaching and education in Thailand: A decade of change. In N. D. Prescott (Ed.), English in Southeast Asia: Varieties, literacies and literatures (pp. 185-202). Cambridge Scholars Publishing.

Davis, L. L. (1992). Instrument review: Getting the most from a panel of experts. Applied Nursing Research, 5(4), 194-197. https://doi.org/10.1016/S0897-1897(05)80008-4 

Dunworth, K. (2008). Ideas and realities: Investigating good practice in the management of transnational English language programmes for the higher education sector. Quality in Higher Education, 14(2), 95-107.

Eckes, T. (2017). Setting cut scores on an EFL placement test using the prototype group method: A receiver operating characteristic (ROC) analysis. Language Testing, 34(3), 383-411. https://doi.org/10.1177%2F0265532216672703 

Floyd, P., & Carrell, P. L. (1987). Effects on ESL reading of teaching cultural content schemata. Language Learning, 37(1), 89-108. https://doi.org/10.1111/j.1467-1770.1968.tb01313.x 

General Education Department (2018). KMITL CEFR results 2018. King Mongkut’s Institute of Technology Ladkrabang.

Griffin, P. E. (1990). Profiling literacy development: Monitoring the accumulation of reading skills. Australian Journal of Education, 34(3), 290-311. https://doi.org/10.1177%2F000494419003400306 

Hambleton, R. K., Pitoniak, M. J., & Copella, J. M. (2012). Essential steps in setting performance standards on educational tests and strategies for assessing the reliability of results. In G. J. Cizek (Ed.), Setting performance standards: Foundations, methods, and innovations (2nd ed.). Routledge.

Hsu, W. (2014). Measuring the vocabulary load of engineering textbooks for EFL undergraduates. English for Specific Purposes, 33, 54-65. https://doi.org/10.1016/j.esp.2013.07.001 

Isaacs, T., Trofimovich, P., & Foote, J. A. (2018). Developing a user-oriented second language comprehensibility scale for English-medium universities. Language Testing, 35(2), 193-216. https://doi.org/10.1177%2F0265532217703433 

Ismail, S. A. M. M., Karim, A., & Mohamed, A. R. (2018). The role of gender, socioeconomic status, and ethnicity in predicting ESL learners’ reading comprehension. Reading & Writing Quarterly, 34(6), 457-484. https://doi.org/10.1080/10573569.2018.1462745 

Jang, E. E. (2009). Cognitive diagnostic assessment of L2 reading comprehension ability: Validity arguments for Fusion Model application to LanguEdge assessment. Language Testing, 26(1), 31-73. https://doi.org/10.1177%2F0265532208097336

Jianrattanapong, A. (2011). Positive washback from Thai university entrance examinations. Language Testing in Asia, 1(1), 51-61. https://doi.org/10.1186/2229-0443-1-1-50 

Kaewpet, C. (2009). Communication needs of Thai civil engineering students. English for Specific Purposes, 28(4), 266-278. https://doi.org/10.1016/j.esp.2009.05.002 

Khamkhong, S. (2018). Developing English L2 critical reading and thinking skills through the PISA reading literacy assessment framework: A case study of Thai EFL learners. 3L: The Southeast Asian Journal of English Language Studies, 24(3), 83-94. http://ejournals.ukm.my/3l/article/view/23331/8369 

Khamkhong, Y. (2017, December 12). Developing English proficiency among Thai students: A case study of St Theresa International College. SSRN Electronic Journal. https://dx.doi.org/10.2139/ssrn.3086520 

Khemanuwong, T., Mohamed, A. R., & Ismail, S. A. M. M. (2018). Developing a Thai READS encoder to gauge EFL reading proficiency of Thai undergraduate students. Journal of Teaching and Learning English in Multicultural Contexts (TLEMC), 2(1), 23-34. http://jurnal.unsil.ac.id/index.php/tlemc/article/view/487/303 

Kittidhaworn, P. (2001). An assessment of the English-language needs of second-year Thai undergraduate engineering students in a Thai public university in Thailand in relation to the second-year EAP program in engineering [Unpublished doctoral dissertation]. West Virginia University, WV, USA. https://www.elibrary.ru/item.asp?id=5246699 

Koen, V., Asada, H., Rahuman, M. R. H., & Bogiatzis, A. (2018). Boosting productivity and living standards in Thailand. OECD Economics Department Working Papers. OECD. https://ideas.repec.org/p/oec/ecoaaa/1470-en.html 

Lynn, M. R. (1986). Determination and quantification of content validity. Nursing Research, 35(6), 382-385.

Maxwell, I. (2019). ‘What I thought university would be like …’ close reading as collaborative performance. Higher Education Research and Development, 38(1), 63-76. https://doi.org/10.1080/07294360.2018.1527824 

McNamara, D. S. (2012). Reading comprehension strategies: Theories, interventions and technologies. Lawrence Erlbaum.

Mertler, C. A. (2007). Interpreting standardised test scores: Strategies for data-driven instructional decision making. Sage.

Mohamed, A. R., Lin, S. E., & Ismail, S. A. M. M. (2010). Making sense of reading scores with reading evaluation and decoding system (READS). English Language Teaching, 3(3), 35-46. https://doi.org/10.5539/elt.v3n3p35 

Mohamed, A. R., Lin, S. E., & Ismail, S. A. M. M. (2012). The potency of ‘READS’ to inform students’ reading ability. RELC Journal, 43(2), 271-282. https://doi.org/10.1177%2F0033688212451803 

Mohamed, A. R., Lin, S. E., & Ismail, S. A. M. M. (2013). “READS” feedback on tri-component skills in resuscitating learners’ reading ability. Pertanika Journal of Social Sciences & Humanities, 21(3), 1179-1192. http://www.pertanika.upm.edu.my/Pertanika%20PAPERS/JSSH%20Vol.%2021%20(3)%20Sep.%202013/21%20Page%201179-1192.pdf 

Mok, S. S. (1994). Assessment, recovery and enrichment in education. Kumpulan Budiman Sdn. Bhd.

Munsakorn, N. (2012). Awareness of reading strategies among EFL learners at Bangkok University. International Journal of Social, Behavioural, Educational, Economic, Business and Industrial Engineering, 6(5), 821-824.

Organisation for Economic Cooperation and Development (2018). Education at a glance 2018: OECD indicators. OECD. https://www.oecd-ilibrary.org/education/education-at-a-glance-2018_eag-2018-en 

Piamsai, C. (2017). An investigation of Thai learners’ needs of English language use for intensive English course development. Pasaa Paritat, 32, 63-97. http://www.culi.chula.ac.th/Publicationsonline/files/article2/7PIAoWsuJrMon32830.pdf 

Polit, D. F., & Beck, C. T. (2006). Essentials of nursing research: Methods, appraisal, and utilization (6th ed.). Lippincott Williams & Wilkins.

Powers, D., Schedl, M., & Papageorgiou, S. (2017). Facilitating the interpretation of English language proficiency scores: Combining scale anchoring and test score mapping methodologies. Language Testing, 34(2), 175-195. https://doi.org/10.1177%2F0265532215623582 

Priebe, S. J., Keenan, J. M., & Miller, A. C. (2012). How prior knowledge affects word identification and comprehension. Reading and Writing, 25(1), 131-149. https://doi.org/10.1007/s11145-010-9260-0 

Rajprasit, K., Pratoomrat, P., & Wang, T. (2015). Perceptions and problems of English language and communication abilities: A final check on Thai engineering undergraduates. English Language Teaching, 8(3), 111-120. https://doi.org/10.5539/elt.v8n3p111 

Rea-Dickins, P. R., Kiely, R., & Yu, G. (2007). Student identity, learning and progression: The affective and academic impact of IELTS on ‘successful’ candidates. (IELTS research reports, Vol. 7). IELTS. https://www.ielts.org/-/media/research-reports/ielts_rr_volume07_report2.ashx 

Read, J. (2015). Issues in post-entry language assessment in English-medium universities. Language Teaching, 48(2), 217-234. https://doi.org/10.1017/S0261444813000190 

Reeve, C. L., Bonaccio, S., & Charles, J. E. (2008). A policy-capturing study of the contextual antecedents of test anxiety. Personality and Individual Differences, 45(3), 243-248. https://doi.org/10.1016/j.paid.2008.04.006 

Rumelhart, D. E. (1980). Schemata: The building blocks of cognition. In R. J. Spiro, B. Bruce, & W. F. Brewer (Eds.), Theoretical issues in reading comprehension (pp. 33-58). Lawrence Erlbaum.

Saeed, K. M., Ismail, S.A.M.M., & Eng, L. S. (2019). Malaysian speaking proficiency assessment effectiveness for undergraduates suffering from minimal descriptors. International Journal of Instruction, 12(1), 1059-1076. http://www.e-iji.net/dosyalar/iji_2019_1_68.pdf 

Sandoval Cruz, R. I. & Perales Escudero, M. D. (2012). Models of reading comprehension and their related pedagogical practices: A discussion of the evidence and a proposal. MEXTESOL Journal, 36(2), 1-18. http://www.mextesol.net/journal/index.php?page=journal&id_article=158 

Sani, B., Chik, M. N. W., Nik, Y. A., & Raslee, N. A. (2011). The reading motivation and reading strategies used by undergraduates in Universiti Teknologi MARA Dungun, Terengganu. Journal of Language Teaching and Research, 2(1), 32-39. http://www.academypublication.com/issues/past/jltr/vol02/01/04.pdf 

Sasaki, M. (2000). Effects of cultural schemata on students’ test-taking processes for cloze tests: a multiple data source approach. Language Testing, 17(1), 85–114. https://doi.org/10.1177%2F026553220001700104 

Sönmez, H. (2019). An examination of needs analysis research in the language education process. International Journal of Education & Literacy Studies, 7(1), 8-17. http://files.eric.ed.gov/fulltext/EJ1212387.pdf 

Spolsky, B. (1990). Social aspects of individual assessment. In J. H. A. L. de Jong, & D. K. Stevenson (Eds.), Individualizing the assessment of language abilities (pp. 3-15). Multilingual Matters Ltd.

Sureeyatanapas, P., Boonma, A., & Thalangkan, S. (2016). English proficiency requirements for engineering graduates at private organizations in Thailand. KKU Engineering Journal, 43(S1), 35-39. https://ph01.tci-thaijo.org/index.php/easr/article/view/69658 

Waluyo, B. (2019). Examining Thai first-year university students’ English proficiency on CEFR levels. The New English Teacher, 13(2), 51-62. http://www.assumptionjournal.au.edu/index.php/newEnglishTeacher/article/view/3651/2368 

Ward, J. (2009). A basic engineering English word list for less proficient foundation engineering undergraduates. English for Specific Purposes, 28(3), 170-182. https://doi.org/10.1016/j.esp.2009.04.001 

Wasburn-Moses, L. (2006). Obstacles to program effectiveness in secondary special education. Preventing School Failure: Alternative Education for Children and Youth, 50(3), 21-30. https://doi.org/10.3200/PSFL.50.3.21-30 

Wen, Q. (2018). The production-oriented approach to teaching university students English in China. Language Teaching, 51(4), 526-540. https://doi.org/10.1017/S026144481600001X 

Yamashita, J. (2003). Processes of taking a gap-filling test: comparison of skilled and less skilled EFL readers. Language Testing, 20(3), 267-293. https://doi.org/10.1191%2F0265532203lt257oa 

Yousef, H., Karimi, L., & Janfeshan, K. (2014). The relationship between cultural background and reading comprehension. Theory and Practice in Language Studies, 4(4), 707-714. http://doi.org/10.4304/tpls.4.4.707-714 

Zieky, M. J., & Perie, M. (2006). A primer on setting cut scores on tests of educational achievement. Educational Testing Service. https://www.ets.org/Media/Research/pdf/Cut_Scores_Primer.pdf 

 
MEXTESOL Journal, vol. 44, no. 4, 2020, is a publication issued every four months by the Asociación Mexicana de Maestros de Inglés, MEXTESOL, A.C., Versalles 15, Int. 301, Col. Juárez, Alcaldía Cuauhtémoc, C.P. 06600, Mexico City, Mexico, Tel. (55) 55 66 87 49, mextesoljournal@gmail.com. Editor-in-chief: Jo Ann Miller Jabbusch. Reservation of Rights to Exclusive Use No. 04-2015-092112295900-203, ISSN: 2395-9908, both granted by the Instituto Nacional de Derecho del Autor. Responsible for the most recent update of this issue: Jo Ann Miller, Asociación Mexicana de Maestros de Inglés, MEXTESOL, A.C., Versalles 15, Int. 301, Col. Juárez, Alcaldía Cuauhtémoc, C.P. 06600, Mexico City, Mexico. Date of last modification: 31/08/2015. The opinions expressed by the authors do not necessarily reflect the position of the publisher. Total or partial reproduction of the texts published here is authorized provided that the full source and the electronic address of the publication are cited.

License

MEXTESOL Journal applies the Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license to everything we publish.