Evaluating Iranian L2 Teachers’ Assessment Literacy for L2 Pragmatics by Applying the CEFR’s Pragmatic Competence Model: Possible Sociocultural-Informed Solutions*
Ayad Kamalvand 1  & Mohammad Javad Mohammadi 2 
Ilam University, Ilam, Ilam, Iran, Allameh Tabataba'i University, Tehran, Iran
Contact:  kamalvand.ayad@gmail.com, m.mohammadi1377@gmail.com
* This is a refereed article.
Received: 17 May, 2022.
Accepted: 29 December, 2022.
Published: 24 April, 2024.
Correspondent: Ayad Kamalvand
DOI: 10.61871/mj.v48n2-4. This is an open-access article distributed under the terms of a CC BY-NC-SA 4.0 license.
Abstract: Nearly all multidimensional models of communicative competence have pragmatic competence at their core. Proper assessment of second language (L2) pragmatics makes many demands on L2 teachers, both in understanding the construct and in developing language tests. Being assessment literate therefore helps teachers develop effective tests geared toward educational outcomes. Despite the importance of assessment literacy (AL) for pragmatics, the field is still under-researched. Mindful of this fact and of the significance of pragmatics in L2 learning and assessment, this study adopted the Common European Framework of Reference (CEFR) model of pragmatic competence to examine Iranian L2 teachers’ AL for L2 pragmatics and linked the results to Sociocultural Theory (SCT) to formulate theory-informed answers to the problems in teachers’ assessment of L2 pragmatics. Group interviews were conducted with 67 participants, and qualitative and quantitative content analyses were then performed. The paper discusses the problems identified in the assessment of L2 pragmatics and offers recommendations for raising L2 teachers’ AL for L2 pragmatics.

Keywords: L2 pragmatics, the CEFR, teachers’ AL for pragmatics, SCT, assessment


Resumen: Casi todos los modelos multidimensionales de competencia comunicativa tienen en su núcleo la competencia pragmática. La evaluación adecuada de la pragmática de una segunda lengua (L2) plantea muchas exigencias a los profesores de L2, tanto en términos de comprensión del constructo como en el desarrollo de pruebas lingüísticas. Por lo tanto, tener conocimientos de evaluación ayuda a los profesores a desarrollar pruebas eficaces orientadas a los resultados educativos. A pesar de la importancia de la alfabetización en evaluación (AL) para la pragmática, el campo aún está poco investigado. Consciente de este hecho y de la importancia de la pragmática en el aprendizaje y la evaluación de la L2, este estudio adoptó el modelo de competencia pragmática del Marco Común Europeo de Referencia (MCER) para examinar el AL de los profesores iraníes de L2 en cuanto a la pragmática de la L2 y vinculó los resultados con la Teoría Sociocultural (SCT) para formular respuestas basadas en la teoría a los problemas en la evaluación de la pragmática de L2 por parte de los profesores. Se realizaron entrevistas grupales con 67 participantes y luego se realizaron análisis de contenido cualitativos y cuantitativos. El artículo analiza los problemas identificados en la evaluación de la pragmática de L2 y ofrece recomendaciones para aumentar el AL de los profesores de L2 en pragmática de L2.

Palabras Clave: pragmática L2, CEFR, AL de los profesores para pragmática, SCT, evaluación


Introduction and Review of the Literature

Pragmatic competence in L2 research has been conceptualized as an ability. Plainly put, this ability involves using language to perform its broad range of functions in sociocultural contexts and interpreting the illocutionary force of those functions (Eslami & Eslami-Rasekh, 2008). Pragmatics has been an elemental part of the proposed models of communicative competence (Bachman & Palmer, 2010; Canale & Swain, 1980; Laughlin et al., 2015). Going beyond treating it as one element of a communicative competence model, Leech (1983) presupposes that understanding the nature of language relies on understanding pragmatics. He makes a distinction between sociopragmatics and pragmalinguistics: the former entails the cultural values that determine the relationships in interactions in which a particular illocutionary act is performed, while the latter includes the total of lexical and grammatical resources one uses to express a certain illocutionary force. The division between sociopragmatics and pragmalinguistics, even while providing a theoretical background for empirical studies, has led to an unbalanced focus on pragmalinguistics; it has therefore been suggested that the two components be joined in research practices (Alcón Soler, 2008).

The CEFR, SCT, and significance of assessment

The CEFR (Common European Framework of Reference for Languages) is a document published by the Council of Europe in 2001 with the aim of presenting an encyclopedic and all-encompassing description of language abilities (Davidson & Fulcher, 2007). It is arguably the most relevant and controversial document in language learning and teaching (Figueras, 2012). The CEFR organizes language proficiency on a six-level scale, A1 to C2, grouped into three broad levels: Basic User, Independent User, and Proficient User. Within the CEFR, pragmatic competence is a triad of discourse competence (learners’ ability to organize, structure, and arrange messages), functional competence (the ability to perform communicative functions), and design competence (the ability to sequence messages according to interactional and transactional schemata). The CEFR also offers four generic qualitative factors for evaluating the discourse aspects of competence, namely flexibility to circumstances, turn-taking, thematic development, and coherence and cohesion, and two qualitative factors for determining learners’ functional success, namely fluency and propositional precision.

It is noteworthy that the CEFR is nondogmatic: it does not impose an obligation on all educational contexts and language programs to adopt it, nor is it a harmonization project (North, 2014). Rather, it attempts to promote networking and reflection as well as innovation in practice. As the CEFR itself states:

We have not set out to tell practitioners what to do or how to do it. We are raising questions not answering them. It is not the function of the CEF to lay down the objectives that users should pursue or the methods. (Council of Europe, 2001, p. xi)

As the CEFR owes its usefulness to teachers' reflection on assessment practice, and as it is meant to raise questions rather than answer them, this study uses the CEFR’s model of pragmatic competence to diagnose possible problems in the assessment of L2 pragmatics in the Iranian context and then draws on the tenets of SCT to suggest theory-informed responses to those problems. Sociocultural Theory (SCT) intersects with the CEFR through its emphasis on teachers’ active engagement in critical inquiry, which ultimately contributes to their professional development (Gipps, 1999). Central to the epistemological underpinnings of SCT is the assumption that the development of human cognition is inherently social (Vygotsky, 1978). In relation to this study, the principles of SCT enable us to offer suggestions that support the professionalization of L2 teachers’ AL for pragmatics in the Iranian context, with its complex social, political, economic, and cultural settings. Within SCT, we argue that assessment acts as a tool that mediates learning and teaching. Teachers hold the responsibility for assessment, yet assessment draws both teachers and learners into partnership (Carless, 2017). Under this presupposition, assessment, teaching, and learning align to meet educational requirements.

Despite the crucial role that assessment plays in teaching and learning, it can be difficult to design tests that tap learners’ L2 communicative competence. Assessment should take into consideration context, level, students’ backgrounds, individual differences, and learning content (Brown & Race, 2013). Moreover, in order to achieve its objectives, assessment should be fair, authentic, reliable, and valid (Admiraal et al., 2011; Brown & Race, 2013; Murillo & Hidalgo, 2017). The current orientation in assessment, triggered by the assessment for learning (AFL) movement, has re-engineered assessment practices. Effective assessment within AFL needs to foster student autonomy, provide timely and effective feedback for students, recognize the full range of achievements of all learners, develop learners' capacity for self-assessment, promote commitment to learning goals, and involve sharing learning goals with learners (Black et al., 2004; Black et al., 2006; Wiliam et al., 2004).

AL and language assessment literacy (LAL)

The notion of Assessment Literacy (AL) has generally been conceptualized as a reservoir of competencies: knowledge of how to apply assessment methods and of using the proper tools at the appropriate time, which empowers teachers to understand, construct, and evaluate language tests and to analyze test data (Stiggins, 1991). AL is now recommended for inclusion in teachers’ professional development programs (Popham, 2009). Teachers require training in assessment since they are engaged in the interpretation or development of large-scale and classroom-based tests. The demand for the development of AL has been echoed in the literature (DeLuca & Klinger, 2010; Fulcher, 2012; Siegel & Wissehr, 2011), since the growing centrality of testing and assessment in educational programs worldwide and the concerns over the misuses or abuses of tests suggest that an appropriate level of assessment literacy needs to be nurtured among stakeholders (Taylor, 2009).

Rooted in AL, Language Assessment Literacy (LAL) comprises the knowledge, skills, and principles required of stakeholders engaged in assessment practices in language learning programs. Davies (2008) introduced the “skills + knowledge” approach to LAL. In this approach, skills refer to teachers’ practical know-how in conducting assessment and constructing tests, and knowledge refers to the relevant background in measurement. Moving from componential to developmental views of LAL, Fulcher (2012) argued that LAL should embody practical knowledge, theoretical and procedural knowledge, and socio-historical understanding. In the same vein, Inbar-Lourie (2012) stated that LAL demands familiarity with up-to-date approaches in language education and applied linguistics, and with research findings and their controversies in glocalized societies where global trends converge with local cultural needs; such knowledge links directly to assessment concerns and practices.

The complex nature of LAL has raised concerns over the level and quality of LAL required of different stakeholders in assessment. For Taylor (2013), stakeholders fall into a core group, including test developers and researchers; an intermediary group, including language teachers and course instructors; and a peripheral group, including the general audience and policymakers, for whom the required degree of LAL is smaller than for the other two groups. While admitting that each group needs a different degree of LAL based on its needs and level of engagement with language assessment, Taylor (2013) claimed that teachers are more in need of understanding language pedagogy than of familiarity with assessment theory. In contrast, Lee and Butler (2020) highlighted teachers’ familiarity with assessment theory and argued that teachers should be equipped with the required knowledge of language assessment theories and contexts, practical skills to develop and interpret assessments, and an understanding of the social consequences of assessment. Likewise, Brindley (2001) placed teachers’ understanding of the social consequences and contextual dimensions of language assessment at the core of the teacher education curriculum.

In short, it can be said that an assessment-literate teacher equipped with the fundamental assessment concepts and procedures can affect education. Such abilities enable a teacher to design theoretically driven tests and to effectively improve, monitor, evaluate, grade, and analyze language tests. Skills and knowledge, accompanied by principles and by consideration of the macro- and micro-level impact of assessment on society, guide L2 teachers in their assessment practices.

Assessment of L2 pragmatics

Testing of L2 pragmatics is a growing area of research and practice in L2 assessment (Roever, 2011). Even though the concept of AL has been gaining importance in the literature, AL for pragmatics has not been taken into consideration. It was therefore important to conduct an investigation zooming in on L2 teachers’ ability to design, apply, and evaluate appropriate tests of pragmatics based on theoretical knowledge, skills, and principles. Being aware of assessment concepts and procedures also affects educational decisions. The significance of this study lies in the fact that it, first, uses the CEFR as a benchmark to evaluate Iranian teachers’ AL for L2 pragmatics. Second, it provides the Iranian education system with theory-informed insights for taking steps to improve the status of pragmatics in English language programs. Third, the study contributes to English teachers’ understanding of the breadth and complexity of the construct so that they can develop tests of pragmatics that can serve as the basis for meaningful interpretation of learners’ performance. In addition, the study makes language teachers aware of their own AL for L2 pragmatics, through which they can initiate steps for their professional development in the field.

Relevant studies

One portion of studies on teachers’ AL and LAL has mainly focused on defining what AL and LAL are. Based on Google Scholar, the studies carried out by Fulcher (2012), Giraldo (2018), Popham (2009), Stiggins (1991), Taylor (2009), and Xu and Brown (2016) are among the most highly cited in the field. The literature has also investigated teachers’ beliefs and attitudes toward AL/LAL. For instance, Quilter and Gallini (2000) analyzed the relationship between teachers’ knowledge about educational assessment and their attitudes toward various forms of assessment. Results from this study indicate a strong correlation between teachers' past and current attitudes toward assessment: teachers who had had more positive experiences with testing and assessment when they were students were also more positive about them with their own students. Berry et al. (2019) explored teachers’ attitudes toward assessment and their assessment practices. This study revealed that teachers considered assessment an integral part of good teaching practice, but that a lack of training in assessment left them feeling unconfident about developing assessment tasks, so they relied more on ready-made assessment materials.

Despite the importance of AL/LAL, research has stressed the inadequacies of teachers’ AL/LAL (Vogt & Tsagari, 2014); therefore, one strand of studies has acknowledged the issue and addressed the development of teachers’ AL/LAL. Lukin et al. (2004) discussed the effectiveness of the Pre-service Assessment Literacy Study Group (PALS) project, the Assessment Literacy Learning Team (ALLT) Program, and the Nebraska Assessment Cohort (NAC) programs in providing the training necessary to support the development of appropriate levels of assessment literacy among pre-service teachers and public educators. The results showed that the programs upgraded teachers’ AL in terms of classroom assessment, confidence, and the communication of statistical analyses of learners’ progress to stakeholders. Koh (2011) engaged teachers who taught English, science, and mathematics in a series of professional development workshops on designing authentic assessment tasks and rubrics. The results revealed a significant improvement in the quality of teachers’ assessment tasks as well as in the quality of student work.

Few studies were found that dealt with teachers’ AL/LAL for speaking, listening, reading, writing, and related areas. Crusan et al. (2016) used a survey to elicit 41 international L2 writing teachers’ backgrounds and perspectives on assessment. The findings suggested, first, that teachers with greater linguistic knowledge had a better grasp of writing assessment and used a wider variety of writing assessment practices and, second, that less experienced teachers demonstrated greater assessment knowledge than their more experienced colleagues. In another study, Mellati and Khademi (2018) explored the impact of teachers’ AL on their assessment practices and learners’ writing outcomes. While assessment-literate instructors emphasized the significant impact of AL on classroom instruction and classroom management, teachers with a low degree of AL denied any lack of assessment-related skills and referred to the challenges they faced in conducting classroom-based assessment. The study also contrasted the classroom practices of assessment-literate instructors and those with little assessment literacy: teachers with a low degree of AL tended to use traditional classroom activities and were not confident or flexible enough to adopt new methods of teaching and assessment, so learners’ learning needs were overlooked. Finally, Tajeddin et al. (2018) explored Iranian novice and experienced L2 teachers’ AL for speaking. In contrast to Crusan et al. (2016), the results of this study indicated that more experienced teachers possessed a higher degree of AL.

Taken together, these studies suggest there is room for pragmatics in teachers’ AL. Research has investigated different aspects of AL and LAL; nevertheless, to the best of the researchers’ knowledge, teachers’ AL for L2 pragmatics remains a neglected niche. To bridge this gap, this study seeks to discover Iranian L2 teachers’ assessment knowledge and practice regarding pragmatics.

Considering the existing problems in the domain of assessing pragmatics and the significance of conducting a study on teachers’ AL for pragmatics, the study attempts to answer the following questions:

Q1: What is the status of Iranian L2 teachers’ AL for L2 pragmatics?

Q2: What are some possible solutions for the problems seen in the L2 teachers’ AL for L2 pragmatics?

Method

Participants

A total of 67 Iranian L2 teachers working in state-funded and private institutions in Ilam city participated in the study. They taught English to students with a wide range of language proficiency levels. The teachers held BA (n = 37), MA (n = 21), or PhD (n = 9) degrees in Teaching English as a Foreign Language and Applied Linguistics. Their work experience ranged from one to thirty-two years. Participants were asked to take part in an interview consisting of items derived from the CEFR’s model of pragmatic competence. Before the research began, the participants signed a consent form that explained the nature of the study, its purpose, and the data collection procedures. After the participants signed the consent form, the researchers began collecting data.

Instrument

To answer the research questions, we invited the participants to take part in group interviews. The interview items were organized within the CEFR’s pragmatic competence model. Based on their academic degrees, the teachers were assigned to BA, MA, and PhD group interviews. Of the 117 teachers who were sent invitations, 67 agreed to participate. Group size ranged from four to 12, and a total of nine interviews (BA = 5, MA = 3, PhD = 1) were conducted. The interviewers followed a structured interview guide (Cohen et al., 2018), which included five open-ended questions that touched on participants’ knowledge of L2 pragmatic assessment. The interviews took a funnel-shaped structure, beginning with two general questions followed by three narrower ones. The first two general questions were: How do you describe the purpose of pragmatic assessment in your class? and In terms of ‘Can Do’ descriptors, how do you assess what constitutes a pragmatically competent L2 learner? Since tests of pragmatics are largely spin-offs of theoretical insights (Roever, 2011), these two questions aimed to analyze teachers’ perceptions and knowledge of theory. The other three questions gave the participants the chance to demonstrate their technical skills in developing tests that elicit the proficiency an L2 learner needs in order to be considered pragmatically competent. These questions were: Which language activity do you use more in assessing learners’ pragmatic competence? Which area of pragmatics do you assess more? and What types of tasks do you use to assess your learners’ level of pragmatic competence?

Data analysis  

The gathered data were analyzed in two stages. First, the audio-recorded responses were transcribed and then analyzed through content analysis procedures. Transcription was the first step in analyzing the participants’ level of AL for L2 pragmatics and their opinions on what constitutes pragmatic competence among students and on how it should be assessed. Afterward, we coded the responses and categorized the main features of the teachers' opinions on AL for pragmatics, and of their activities in developing pragmatics tests, under primary themes.
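The quantitative strand of the content analysis amounts, in essence, to tallying how often each coded category occurs and expressing the tallies as percentages, as reported in the Results section. The short Python sketch below merely illustrates such a frequency count; the category labels and figures are hypothetical and do not reproduce the study’s data or any software the authors may have used.

from collections import Counter

# Hypothetical coded responses: one primary task-type code per participant.
# Labels and counts are illustrative only, not the study's actual data.
coded_responses = [
    "role play", "role play", "interview", "role play",
    "written DCT", "interview", "role play",
]

counts = Counter(coded_responses)
total = len(coded_responses)

# Frequency and percentage of each coded category, as in a simple
# quantitative content analysis.
for category, count in counts.most_common():
    print(f"{category}: {count} ({count / total:.0%})")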

Results and Discussion

Q1: What is the status of Iranian L2 teachers’ AL for L2 pragmatics?

Results show a link between teachers’ academic degrees and their knowledge of L2 pragmatics. In general, teachers with higher academic degrees were more detailed in their conceptualization of L2 pragmatics. When asked specifically what the purpose of assessing L2 pragmatics is, only a few teachers with BA degrees were able to elaborate on the underlying implications of assessment; their responses were overall too general, falling more under the realm of communicative competence than pragmatic competence. Teachers with MA and PhD degrees, by contrast, could delineate thoroughly the scope of assessment of pragmatic competence. A higher academic degree was also associated with the responses to the second interview question: nearly all teachers with PhD degrees and over two-thirds (n = 15) of those with MA degrees were able to name ten ‘Can Do’ descriptors. It should be noted that all the responses fell under the functional component of CEFR-based pragmatic competence; none of the participants referred to students’ knowledge of the discourse and design competencies. Concerning the third interview question, most of the teachers (96%) prioritized productive modes of communication over the receptive, interactive, and mediation modes. Productive activities in the CEFR include both written and spoken activities; however, assessment of production ability was confined to oral forms, with only one instance of the written mode of production activities. The analysis of the fourth interview question revealed that teachers generally focused solely on speech acts to the exclusion of other areas of pragmatics, a finding consistent with the argument that tests of pragmatics focus on speech acts (Roever, 2011). Finally, to tap learners’ pragmatic proficiency, the participants reported operationalizing tests based mainly on role plays (80%), interviews (17%), and written discourse completion tasks (3%).

Taken together, the data analysis may provide valuable insights into Iranian L2 teachers’ AL for pragmatics. First, while previous studies focused on teachers’ experience as a variable affecting AL (Chan & Luo, 2020; Edwards, 2017; Levi & Inbar-Lourie, 2020; Rogers et al., 2020), our findings reveal that a higher academic degree is associated with AL. Second, although the CEFR framework is not without critics (Fulcher, 2004), it can aid in diagnosing teachers’ AL with great precision. Third, although academic degrees differentiated teachers in their conceptualization of assessments of pragmatics, the over-reliance of teachers with MA and PhD degrees on the oral mode of productive activities, and on restricted areas of pragmatics, in operationalizing assessments of pragmatics re-emphasizes the need echoed in the literature to provide assessment training for all teachers (Popham, 2009). Finally, drawing on our analysis, Iranian teachers’ AL for L2 pragmatics needs immediate improvement.

Q2: What are some possible solutions for the problems seen in the L2 teachers’ AL for L2 pragmatics?

We can build upon this understanding of Iranian teachers' AL for L2 pragmatics to develop a variety of socioculturally relevant suggestions to address the identified problems. A call that has surfaced repeatedly in language assessment research is for understanding the construct under investigation (Alderson et al., 1995; Bachman & Palmer, 2010; Chapelle, 1999; Messick, 1989). This call has been answered by the suggestion to collect a priori evidence before a test event so that a match between theory and the test can be established (Weir, 2005). Once a priori evidence is verified, the construct validity of a given test can be evaluated through confirmatory a posteriori statistical analysis (Weir, 2005). Consequently, when granted such understanding, teachers will develop tests that reflect the true theoretical underpinnings of pragmatic competence and are not affected by construct under-representation or construct-irrelevant variance (Messick, 1989). Engagement with assessment requires teachers to employ their knowledge and understanding of the construct, that is, the process of epistemic cognition and consideration of the nature of what is to be assessed (Fives et al., 2017). From an SCT perspective, teachers’ praxes mainly involve everyday concepts (Johnson & Golombek, 2011). These everyday concepts have a restricted scope that narrows teachers’ understanding of assessment, because teachers acquire them through a shallow understanding of the construct. To develop teachers’ cognition, it is necessary to expose teachers to scientific concepts, which are promoted through theoretical learning (Johnson & Golombek, 2011). Scientific concepts give teachers a solid foundation of theoretical knowledge and the most recent research findings in the field. Grounded in the data analysis, a majority of Iranian L2 teachers with BA degrees have a limited understanding of the construct of pragmatics. To equip teachers with a sound understanding of the construct, one possibility is to plan programs in which teacher educators present the construct of pragmatics explicitly through practical and goal-directed activities, free of the decontextualized lecturing that propels teachers toward rote memorization of the construct’s conceptualization (Johnson & Golombek, 2011). The efficacy of such programs has been confirmed in some contexts (Allen, 2011; Nauman, 2011).

In addition, developing teachers’ understanding of the construct can occur through their active participation in communities of practice. This participation corresponds to the concept of mediation, one of the planks of Sociocultural Theory. Mediation in this process provides opportunities for teachers to interact with experts or more knowledgeable peers (Benson & Gao, 2008). This interaction aims to help teachers move from the social plane, interacting with more knowledgeable others, to the mental plane (Vygotsky, 1978), where each teacher internalizes scientific concepts. The mediation needs to be strategic, so that teachers receive explicit and graduated assistance (Johnson & Golombek, 2011) from more knowledgeable teachers.

Reliance on the functional component of the CEFR’s model of pragmatic competence, with no viable alternative for assessing L2 learners’ pragmatic competence, was one of the biggest issues encountered among participants of all academic qualifications. Due to the complexity of L2 pragmatics, reliance on one aspect of the construct leads to assessment that misrepresents, underrepresents, or overrepresents the construct (Haladyna & Rodriguez, 2013). Therefore, teachers need to broaden the scope of tests of pragmatics to increase the validity of assessments. Against this backdrop, SCT offers pathways that favor teachers’ reflection on their assessment practices. The output of this process echoes Vygotsky’s (1978) dictum that cognitive conflict spurs cognitive growth. By reflecting on the assessment of L2 pragmatics, teachers develop a critical stance and find the gaps in their AL for pragmatics. This reflexivity might result in the scholarship of teaching and learning (SoTL) (Boyer, 1990). SoTL opens a cycle whereby teachers’ reflective acts are enriched with clear goals, adequate preparation, appropriate methods, significant results, and effective presentation (Glassick et al., 1997), which benefits teaching and learning generally and assessment particularly. The output of such reflective practices should allow cross-fertilization of findings in both physical and virtual forms, such as workshops, periodic sessions, in-service programs, social media discussions, video conferences, and webinars.

The CEFR envisions learners as ‘social agents’ (Council of Europe, 2001) who develop a range of general and communicative language competencies to perform a constellation of real-life tasks. Real-life tasks in the CEFR mirror the real-world activities learners will be expected to perform, promoting proficiency as charted by ‘Can Do’ descriptors. However, the data analysis revealed that Iranian teachers draw on a limited number of tasks to elicit representative L2 pragmatic behaviors. One possible way to widen teachers’ familiarity with tasks for L2 pragmatics comes from computer-supported collaborative learning (CSCL) (Halavais, 2016), whose digital spaces let teachers join online collaborative environments. These spaces facilitate the formation of virtual social learning, where multimedia resources mediate online discussions (Resta & Laferrière, 2007). By providing teachers with access to an array of authentic online resources, including videos, podcasts, and instant messaging, CSCL contributes to SCT’s notion of meaningful learning (Kutuk, 2023), which in turn helps teachers develop richer repertoires of tasks for assessing L2 pragmatics. Authentic activities in CSCL, moreover, promote sociocultural perspectives on learning by producing situated cognition (Seel, 2011). Within the situated cognition framework, teachers’ learning of the pragmatic construct in CSCL most likely occurs because the contextualized activities feel authentic to them.

Critical to the effectiveness of the above-mentioned suggestions is the mentality that AL is an inseparable part of teachers’ professionalism (Xu & Brown, 2016), a prolonged rather than one-time process with no start or end point (Johnson & Golombek, 2011).

Conclusion

We attempted to obtain a better understanding of Iranian teachers’ AL for L2 pragmatics by applying the CEFR model of pragmatic competence. After identifying the problems in teachers’ knowledge and skills in practicing the assessment of pragmatics, we drew on the premises of SCT to formulate possible solutions to those problems. In general, there is evidence that teachers with BA degrees are not sufficiently prepared to effectively integrate the assessment of L2 pragmatics into their practice. The findings provide a foundation for taking initiatives to improve conditions in the assessment of pragmatics. Actualizing efforts to improve teachers’ AL requires a long-term policy that engages teachers in ongoing learning communities (Koh, 2011). Participating in learning communities contributes to teachers’ self-reflection, through which both teachers and students benefit from quality enhancement in assessment. Evidently, assessing students’ language performance and proficiency influences the quality of classroom teaching, since teaching quality is closely associated with the quality of the assessments (Lanteigne et al., 2021).

Owing to the small sample size of the study, the results are not intended to be generalized to other contexts. Future research could address this issue with a larger sample. The representativeness of the data and the generalizability of the findings could also be enhanced by conducting comparative studies in which the evaluation of AL is carried out with teachers from more than one context.

References

Admiraal, W., Hoeksma, M., van de Kamp, M.-T., & van Duin, G. (2011). Assessment of teacher competence using video portfolios: Reliability, construct validity, and consequential validity. Teaching and Teacher Education, 27(6), 1019-1028. https://doi.org/10.1016/j.tate.2011.04.002

Alcón Soler, E. (2008). Learning how to request in an instructed language learning context. Peter Lang.

Alderson, J. C., Clapham, C., & Wall, D. (1995). Language test construction and evaluation. Cambridge University Press.

Allen, H. W. (2011). Embracing literacy-based teaching: A longitudinal study of the conceptual development of novice foreign language teachers. In K. E. Johnson & P. R. Golombek (Eds.), Research on second language teacher education: A sociocultural perspective on professional development (pp. 86-100). Routledge.

Bachman, L., & Palmer, A. (2010). Language assessment in practice: Developing language assessments and justifying their use in the real world. Oxford University Press.

Benson, P., & Gao, X. (2008). Individual variation and language learning strategies. In S. Hurd & T. Lewis (Eds.), Language learning strategies in independent settings (pp. 25-40). Multilingual Matters.

Berry, V., Sheehan, S., & Munro, S. (2019). What does language assessment literacy mean to teachers? ELT Journal, 73(2), 113-123. https://doi.org/10.1093/elt/ccy055

Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2004). Working inside the Black Box: Assessment for learning in the classroom. Phi Delta Kappan, 86(1), 8-21. https://doi.org/10.1177/003172170408600105

Black, P., McCormick, R., James, M., & Pedder, D. (2006). Learning how to learn and assessment for learning: A theoretical inquiry. Research Papers in Education, 21(2), 119-132. https://doi.org/10.1080/02671520600615612

Boyer, E. L. (1990). Scholarship reconsidered: Priorities of the professoriate. The Carnegie Foundation for the Advancement of Teaching.

Brindley, G. (2001). Language assessment and professional development. In C. Elder, A. Brown, E. Grove, K. Hill, N. Iwashita, T. Lumely, T. McNamara, & K. O'Loughlin (Eds.), Experimenting with uncertainty: Essays in honour of Alan Davies (pp. 126-136). Cambridge University Press.

Brown, S., & Race, P. (2013). Using effective assessment to promote learning. In L. Hunt & D. Chalmers (Eds.), University teaching in focus: A learning-centered approach (pp. 74-91). Routledge.

Canale, M., & Swain, M. (1980). Theoretical bases of communicative approaches to second language teaching and testing. Applied Linguistics, 1(1), 1-47. https://doi.org/10.1093/applin/I.1.1

Carless, D. (2017). Scaling up assessment for learning: Progress and prospects. In D. Carless, S. M. Bridges, C. K. Y. Chan, & R. Glofcheski (Eds.), Scaling up assessment for learning in higher education (pp. 3-17). Springer.

Chan, C. K. Y., & Luo, J. (2020). An exploratory study on teacher assessment literacy: Do novice university teachers know how to assess students’ written reflection? Teachers and Teaching, 26(2), 214-228. https://doi.org/10.1080/13540602.2020.1787375

Chapelle, C. A. (1999). Construct definition and validity inquiry in SLA research. In L. F. Bachman & A. D. Cohen (Eds.), Interfaces between second language acquisition and language testing research (pp. 32-70). Cambridge University Press.

Cohen, L., Manion, L., & Morrison, K. (2018). Research methods in education (8th ed.). Routledge.

Council of Europe. (2001). Common European framework of reference for languages: Learning, teaching, assessment. Cambridge University Press.

Crusan, D., Plakans, L., & Gebril, A. (2016). Writing assessment literacy: Surveying second language teachers’ knowledge, beliefs, and practices. Assessing Writing, 28, 43-56. https://doi.org/10.1016/j.asw.2016.03.001

Davidson, F., & Fulcher, G. (2007). The Common European Framework of Reference (CEFR) and the design of language tests: A matter of effect. Language Teaching, 40(3), 231-241. https://doi.org/10.1017/S0261444807004351

Davies, A. (2008). Textbook trends in teaching language testing. Language Testing, 25(3), 327-347. https://doi.org/10.1177/0265532208090156

DeLuca, C., & Klinger, D. A. (2010). Assessment literacy development: Identifying gaps in teacher candidates’ learning. Assessment in Education: Principles, Policy & Practice, 17(4), 419-438. https://doi.org/10.1080/0969594X.2010.516643

Edwards, F. (2017). A rubric to track the development of secondary pre-service and novice teachers’ summative assessment literacy. Assessment in Education: Principles, Policy & Practice, 24(2), 205-227. https://doi.org/10.1080/0969594X.2016.1245651

Eslami, Z. R., & Eslami-Rasekh, A. (2008). Enhancing the pragmatic competence of non-native English-speaking teacher candidates (NNESTCs) in an EFL context. In E. Alcón Soler & A. Martinez-Flor (Eds.), Investigating pragmatics in foreign language learning, teaching and testing (pp. 178-197). Multilingual Matters.

Figueras, N. (2012). The impact of the CEFR. ELT Journal, 66(4), 477-485. https://doi.org/10.1093/elt/ccs037

Fives, H., Barnes, N., Buehl, M. M., Mascadri, J., & Ziegler, N. (2017). Teachers' epistemic cognition in classroom assessment. Educational Psychologist, 52(4), 270-283. https://doi.org/10.1080/00461520.2017.1323218

Fulcher, G. (2004). Deluded by artifices? The Common European Framework and harmonization. Language Assessment Quarterly, 1(4), 253-266. https://doi.org/10.1207/s15434311laq0104_4

Fulcher, G. (2012). Assessment literacy for the language classroom. Language Assessment Quarterly, 9(2), 113-132. https://doi.org/10.1080/15434303.2011.642041

Gipps, C. (1999). Socio-cultural aspects of assessment. Review of Research in Education, 24(1), 355-392. https://doi.org/10.3102/0091732x024001355

Giraldo, F. (2018). Language assessment literacy: Implications for language teachers. Profile: Issues in Teachers Professional Development, 20(1), 179-195. https://files.eric.ed.gov/fulltext/EJ1165944.pdf

Glassick, C. E., Huber, M. T., & Maeroff, G. I. (1997). Scholarship assessed: Evaluation of the professoriate. Jossey-Bass.

Haladyna, T. M., & Rodriguez, M. C. (2013). Developing and validating test items. Routledge.

Halavais, A. (2016). Computer-supported collaborative learning. In The international encyclopedia of communication theory and philosophy. Wiley. https://doi.org/10.1002/9781118766804.wbiect195

Inbar-Lourie, O. (2012). Language assessment literacy. In C. Chapelle (Ed.), The encyclopedia of applied linguistics (pp. 1-9). Wiley. https://doi.org/10.1002/9781405198431.wbeal0605

Johnson, K. E., & Golombek, P. R. (2011). A sociocultural theoretical perspective on teacher professional development. In K. E. Johnson & P. R. Golombek (Eds.), Research on second language teacher education: A sociocultural perspective on professional development (pp. 1-12). Routledge.

Koh, K. H. (2011). Improving teachers’ assessment literacy through professional development. Teaching Education, 22(3), 255-276. https://doi.org/10.1080/10476210.2011.593164

Kutuk, G. (2023). Understanding gender stereotypes in the context of foreign language learning through the lens of social cognitive theory. TESOL Quarterly. Advance online publication. https://doi.org/10.1002/tesq.3267

Lanteigne, B., Coombe, C., & Brown, J. D. (2021). Reflecting on challenges in language testing around the world. In B. Lanteigne, C. Coombe, & J. D. Brown (Eds.), Challenges in language testing around the world: Insights for language test users (pp. 549-553). Springer.

Laughlin, V. T., Wain, J., & Schmidgall, J. (2015). Defining and operationalizing the construct of pragmatic competence: Review and recommendations. ETS Research Report Series, 2015(1), 1-43. https://doi.org/10.1002/ets2.12053

Lee, J., & Butler, Y. G. (2020). Reconceptualizing language assessment literacy: Where are language learners? TESOL Quarterly, 54(4), 1098-1111. https://doi.org/10.1002/tesq.576

Leech, G. N. (1983). Principles of pragmatics. Routledge.

Levi, T., & Inbar-Lourie, O. (2020). Assessment literacy or language assessment literacy: Learning from the teachers. Language Assessment Quarterly, 17(2), 168-182. https://doi.org/10.1080/15434303.2019.1692347

Lukin, L. E., Bandalos, D. L., Eckhout, T. J., & Mickelson, K. (2004). Facilitating the development of assessment literacy. Educational Measurement: Issues and Practice, 23(2), 26-32. https://doi.org/10.1111/j.1745-3992.2004.tb00156.x

Mellati, M., & Khademi, M. (2018). Exploring teachers' assessment literacy: Impact on learners' writing achievements and implications for teacher development. Australian Journal of Teacher Education (Online), 43(6), 1-18. http://files.eric.ed.gov/fulltext/EJ1183660.pdf

Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13-103). Macmillan.

Murillo, F. J., & Hidalgo, N. (2017). Students’ conceptions about a fair assessment of their learning. Studies in Educational Evaluation, 53, 10-16. https://doi.org/10.1016/j.stueduc.2017.01.001

Nauman, G. (2011). Synthesizing the academic and the everyday: A Chinese teacher’s developing conceptualization of literacy. In K. E. Johnson & P. R. Golombek (Eds.), Research on second language teacher education: A sociocultural perspective on professional development (pp. 116-132). Routledge.

North, B. (2014). Putting the Common European Framework of Reference to good use. Language Teaching, 47(2), 228-249. https://doi.org/10.1017/S0261444811000206

Popham, W. J. (2009). Assessment literacy for teachers: Faddish or fundamental? Theory into Practice, 48(1), 4-11. https://doi.org/10.1080/00405840802577536

Quilter, S. M., & Gallini, J. K. (2000). Teachers’ assessment literacy and attitudes. The Teacher Educator, 36(2), 115-131. https://doi.org/10.1080/08878730009555257

Resta, P., & Laferrière, T. (2007). Technology in support of collaborative learning. Educational Psychology Review, 19(1), 65-83. https://doi.org/10.1007/s10648-007-9042-7

Roever, C. (2011). Testing of second language pragmatics: Past and future. Language Testing, 28(4), 463-481. https://doi.org/10.1177/0265532210394633

Rogers, A. P., Reagan, E. M., & Ward, C. (2020). Preservice teacher performance assessment and novice teacher assessment literacy. Teaching Education, 33(2), 175-193. https://doi.org/10.1080/10476210.2020.1840544

Seel, N. M. (2011). Encyclopedia of the sciences of learning. Springer.

Siegel, M. A., & Wissehr, C. (2011). Preparing for the plunge: Preservice teachers’ assessment literacy. Journal of Science Teacher Education, 22(4), 371-391. https://doi.org/10.1007/s10972-011-9231-6

Stiggins, R. J. (1991). Assessment literacy. Phi Delta Kappan, 72(7), 534-539. http://www.jstor.org/stable/20404455

Tajeddin, Z., Alemi, M., & Yasaei, H. (2018). Classroom assessment literacy for speaking: Exploring novice and experienced English language teachers’ knowledge and practice. Iranian Journal of Language Teaching Research, 6(3), 57-77. https://doi.org/10.30466/ijltr.2018.120601

Taylor, L. (2009). Developing assessment literacy. Annual Review of Applied Linguistics, 29, 21-36. https://doi.org/10.1017/S0267190509090035

Taylor, L. (2013). Communicating the theory, practice and principles of language testing to test stakeholders: Some reflections. Language Testing, 30(3), 403-412. https://doi.org/10.1177/0265532213480338

Vogt, K., & Tsagari, D. (2014). Assessment literacy of foreign language teachers: Findings of a European study. Language Assessment Quarterly, 11(4), 374-402. https://doi.org/10.1080/15434303.2014.960046

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.

Weir, C. J. (2005). Language testing and validation. Palgrave Macmillan.

Wiliam, D., Lee, C., Harrison, C., & Black, P. (2004). Teachers developing assessment for learning: Impact on student achievement. Assessment in Education: Principles, Policy & Practice, 11(1), 49-65. https://doi.org/10.1080/0969594042000208994

Xu, Y., & Brown, G. T. L. (2016). Teacher assessment literacy in practice: A reconceptualization. Teaching and Teacher Education, 58, 149-162. https://doi.org/10.1016/j.tate.2016.05.010

 

