We would like to share the following Q&A session with the speakers from New Directions, in which they offer their expertise, insights, and innovative ideas on the conference's main theme and strands.

Discover new perspectives and engage in thought-provoking discussions that will inspire and inform.

New Directions is a unique opportunity to delve into the minds of leading thinkers and visionaries, ask your burning questions, and gain valuable knowledge to drive your projects and initiatives forward. Don't miss out on this chance to connect, learn, and grow with the best in the field.

Q & A with César Bizetto

César Bizetto - Inclusivity & Equity in Language Assessment

What are the key challenges in designing inclusive language assessments for a diverse range of test-takers?

In the Brazilian context, teachers lack professional development and, most of the time, support from schools in designing classroom assessment that considers the diverse range of test-takers. In classes of 30-35 students, it is rather difficult for them to understand the needs of different students. Also, designing rubrics to accommodate a diverse range of test takers is not something schools and educators are fully familiar with. Moreover, the role of assessment as a learning tool still needs to be further discussed and implemented.

Based on your research or experience, what practical strategies or recommendations can educators and test developers implement to make assessments more inclusive?

Educators tend to think they need different types of assessment for each student. Sometimes, if an assessment is more inclusive to help a group of learners, all the other students might benefit from the adaptation. Also, they should consider how students can demonstrate they have achieved the learning objectives in different ways; sometimes giving students the choice on how they prefer to be assessed and involving learners in the assessment design can make the process more inclusive and motivating.

Test developers that already offer a range of accommodations to make their tests more inclusive must consider how to support schools and educators in becoming familiar with the accommodations offered and how they impact the teaching and learning process. What I have realized in my experience is that test takers are granted accommodations that come as a surprise to them, as throughout the teaching and learning process they have only been exposed to the regular test format.

What strategies do educators use to ensure their assessments are fair and meaningful for students with diverse needs?

Educators and schools focus on the design of instructional and assessment materials, adapting, for example, colors, font size, amount of input, time, etc. This is more common when we consider summative assessment at the end of a period. For classroom assessment, I have noticed educators are trying to include more self- and peer-assessment. Yet, they need more support from the educational system on how to do that systematically and in a way that gives students quality, insightful feedback.

What role does sociocultural context play in shaping how educators assess students’ language skills?

Sociocultural context has a significant impact on how educators assess students’ language skills. In Brazil, for example, we have students learning English in public and private schools in the different regions of the country (our immense territory by itself brings about sociocultural differences). Authenticity is crucial in assessing students’ language skills, as sociocultural factors shape how students interact with the tasks; if that is not considered, it affects learners’ motivation and willingness to learn an additional language.

Q & A with Gemma Bellhouse

Gemma Bellhouse  - Inclusivity & Equity in Language Assessment

What strategies do educators use to ensure their assessments are fair and meaningful for students with diverse needs? 

Educators should embed EDI principles throughout the assessment life cycle, beginning with 1) analysing the research that already exists and incorporating recommendations into initial design stages of assessments, 2) considering the target test-takers and their needs by speaking with them directly through focus groups and surveys and 3) by continuing to gather feedback from test-takers after test delivery so that continual improvements can be made for high quality and inclusive assessments.  

Based on your research or experience, what practical strategies or recommendations can educators and test developers implement to make assessments more inclusive?

Firstly, educators and test developers should aim to use technology and digital solutions for more inclusivity. Digital test platforms have built-in solutions such as zoom functionality, colour overlays and larger fonts, all of which can ease the test delivery process for people with a range of disabilities and/or conditions. Secondly, educators and test developers should align their values with external service providers, examiners and teachers by communicating and setting clear expectations throughout; this can be done through clear procurement exercises, regular training sessions, and marking standardisation exercises. 

Large-scale international assessments often influence local policies—how can we ensure they remain relevant and beneficial to diverse regional contexts? 

Large-scale international assessments of English are designed to assess the proficiency levels of language learners who may encounter English as a Lingua Franca (ELF) or English as a Foreign Language (EFL). While every learner may familiarise themselves with local or international accents and a range of vocabulary, qualifications may require certain standards and benchmarks for university entrance, business needs and immigration purposes. One important framework that international assessments adhere to is the Common European Framework of Reference (CEFR), which comprises lists of general ‘can-do’ descriptor statements that can be applied to any local or national curriculum.

In what ways can AI-driven assessments enhance accessibility and fairness, and where might they fall short? 

One way that AI-driven assessments can be more inclusive is by fast-tracking the provision of greater representation of test-takers within the test content. For example, greater representation may include images of test-takers from a range of ethnicities and ages, and audio recordings with a range of Global English accents. However, relying on AI and its inherent biases comes with a great risk of perpetuating traditional and perhaps outdated stereotypes of certain ethnicities, genders and age groups.

What role should language assessment play in fostering social inclusion and national competitiveness, and what are the risks of over-relying on assessment as a policy tool? 

National assessments can be quick and effective tools for achieving high-level, trickle-down educational impact. A word of caution, however: teachers must be aware of the intentions behind the assessment and be equipped with the correct resources to prepare their students. A lack of awareness and resources in the classroom may lead to memorisation strategies, test malpractice and higher test anxiety, rather than the intended impact of improved language learning.

Q & A with Fernanda Gonzaléz

Fernanda Gonzaléz - Current language policies, national and local tests in Latin America: Mapping the terrain

What strategies do educators use to ensure their assessments are fair and meaningful for students with diverse needs? 

I believe there is no one-size-fits-all formula in terms of strategies to implement, especially because every test taker is unique and has different needs. This is where the challenge lies: there are so many different contexts and needs to meet when a test is administered. In our university context, language teachers at the School of Education Sciences and Humanities work with students who have Asperger syndrome at different levels and/or students with hearing and visual impairments. In these cases, they have adapted their tests by assessing students in different formats, such as spoken individual sessions about topics that interest the student. In the case of students who are visually impaired, teachers have adapted their tests in formats where they read the test aloud to the student and students provide their answers orally. Although this may be feasible in small contexts with local tests, in large-scale tests this could become more complicated.

What role does technology play in expanding access to language testing across different settings, and how can we address infrastructure gaps in under-resourced areas? 

Although the use of technology has marked groundbreaking moments in language assessment, from the use of different technological scoring techniques to immediate AI feedback for test takers, it may carry disadvantages that not all Latin American countries may be prepared to face. In terms of school infrastructure or test centers, the technological infrastructure needs to be constantly updated, which requires a financial budget that not all schools and institutions may have. Under-resourced areas could benefit from administrative arrangements between their directors or state educational authorities, with the purpose of mediating to finance technological infrastructure and software updates. On the other hand, mobile devices could be of help in the case of home-made tests at schools or language institutes, where test administrators can deliver an online assessment that is easily accessible to students.

Large-scale international assessments often influence local policies—how can we ensure they remain relevant and beneficial to diverse regional contexts? 

I believe that more case study research needs to be conducted to explore and understand regional contexts in Latin America. These studies could provide insight into the unique characteristics that these communities and local contexts portray, and establish strategies to obtain full benefit from international tests. It is my belief that in some LATAM contexts there is an overreliance on large-scale international assessments to shape local policies. Although these tests have been developed to the highest standards of quality and undergo constant validity and reliability analysis, they are instruments that can shed some light on the language proficiency of a specific group of stakeholders rather than shape language assessment policies. Tests can also help us understand how test takers go about developing language over a specific period of time, which can be turned into specific language goals. However, overrelying on these international tests for the establishment of language assessment policies could jeopardize the success of the policy's primary goal, since it would be created on the basis of tests that were designed for different contexts.

Q & A with Mariano Felice

Mariano Felice - AI & Technological Innovations

What role does technology play in expanding access to language testing across different settings, and how can we address infrastructure gaps in under-resourced areas? 

Technology is crucial in making language testing accessible to wider audiences, as it allows learners in remote areas or with limited financial resources to access high-quality education. This is a significant step towards reducing inequalities, as it can help level the playing field for individuals in disadvantaged settings. However, for these initiatives to succeed, reliable internet access, basic infrastructure and digital literacy are essential. Partnerships between governments, educational institutions and private organisations can play a significant role in providing these resources. 

In what ways can AI-driven assessments enhance accessibility and fairness, and where might they fall short? 

AI provides valuable capabilities that can significantly enhance accessibility, such as speech recognition, text-to-speech and personalisation. In addition to assistive technology, automated scoring can also ensure fairness by providing instant and consistent results, reducing human bias and enabling simultaneous real-time testing for different populations. These advancements have the potential to make assessments more inclusive and equitable, ensuring that all learners have the opportunity to succeed regardless of their individual circumstances. However, technology is not perfect, so we need responsible humans to monitor and intervene when necessary. For example, human examiners can override automatic scores when the AI scorer is not confident in its assessment or assess learners in circumstances where technology may fail, such as cases involving inaccurate speech recognition. Needless to say, AI cannot replace the emotional support of human educators in difficult situations. 

How is AI changing the landscape of language assessment, and what implications does this have for test-takers in Latin America? 

AI is transforming the language assessment landscape in several ways. Automation has made assessments more efficient by providing instant results and feedback without human intervention. In addition, it enables remote testing, making access to education more equitable, and can provide more consistent and fairer assessments by removing unconscious human biases.  

These changes can have profound implications for the Latin American context. Automated assessments can provide a scalable solution to reach learners in remote and underserved areas. They can also be a more cost-efficient alternative to traditional in-person examinations, which is crucial for under-developed economies, and can help level up educational opportunities across the region, reducing inequalities between countries as well as between urban and rural areas.

How can AI-driven adaptive testing and other emerging technologies enhance accessibility in language assessment, and what ethical concerns should be considered? 

Adaptive testing can make assessments more efficient by strategically adjusting the test to match the learner's ability, thereby reducing testing time and producing quicker results. This also helps reduce anxiety for students, as the tests are personalised and shorter. Combining this technology with other AI-enabled capabilities such as speech recognition, text-to-speech and automated scoring can greatly enhance accessibility, leading to more inclusive and equitable assessments. When it comes to ethical considerations, we must ensure that the AI models are trained using representative datasets so as to minimise algorithmic bias, strive to use transparent and explainable models, and make sure we have “humans in the loop” that can monitor the use of the system and take any remedial action if needed. 

As AI-powered assessments become more widely accepted, how should we rethink the balance between technological innovation and the fundamental principles of language testing? 

Much as we might be tempted to incorporate the latest technology into our assessments, we must remember that technology is a means, not an end. Our learning and assessment goals should always come first. Therefore, we should only use AI if it helps us achieve those goals. Using tools without a clear purpose or attempting to retrofit our constructs into them is not the right approach. AI is here to stay, so we shouldn’t fight it but use it responsibly. 

Q & A with Allen Quesada

Allen Quesada - Embracing a Diverse Range of Approaches to Assessment

Based on your research or experience, what practical strategies or recommendations can educators and test developers implement to make assessments more inclusive? 

From our experience in Costa Rica, creating inclusive language assessments means making sure every student gets a fair chance to show what they know. One key strategy is providing different kinds of support during tests. For instance, offering exams in accessible formats like Braille for visually impaired students, giving extra time, or allowing students to test in quieter, separate rooms can greatly help. We've seen firsthand with PELEX how these adjustments can ensure everyone participates fully and fairly. 

Another effective approach involves embracing assistive technologies and reconsidering traditional ideas about language skills. Technology has advanced rapidly, and tools like text-to-speech and speech-to-text software can be game-changers for students with disabilities. It's also important to rethink what skills like "reading" or "listening" mean today, especially as digital communication evolves. At PELEX, we've adopted hybrid assessment methods—combining both offline and online formats—to cater to various student needs and ensure everyone has equal access. 

Finally, encouraging multilingualism and protecting assessment access during crises are also critical. Assessments should reflect the diverse ways people actually use language in real life. Additionally, the COVID-19 pandemic taught us the importance of ensuring students always have fair access to education, even in challenging situations. By following these practical strategies, educators and test designers can make assessments more supportive, fair, and genuinely inclusive, just as we've aimed to do with PELEX in Costa Rica. 

How is AI changing the landscape of language assessment, and what implications does this have for test-takers in Latin America?

AI is significantly changing the landscape of language assessment, and the Program in Foreign Language Assessment (PELEX) experience in Costa Rica is a great example of this shift. By integrating AI into both adaptive reading and listening tests, as well as automated oral exams, PELEX is creating more personalized and efficient assessment experiences. These AI-driven tests adjust in real-time to the test-taker’s ability, offering questions that are neither too easy nor too difficult, which leads to a more accurate evaluation of language proficiency. For learners in Latin America, this means greater accessibility—especially in remote or underserved areas—as well as faster results and targeted feedback that helps them focus on what they need to improve. The automated oral test, in particular, allows for scalable, consistent evaluation of speaking skills without requiring a human examiner for every session. Importantly, PELEX ensures that all AI-generated results are subject to expert human oversight, maintaining high standards of quality and fairness in the assessment process. 

Based on your work, what policy recommendations would you make to improve language testing practices in Latin America, particularly in multilingual communities? 

Based on insights from the PELEX experience, several clear policy recommendations emerge to enhance language testing practices across the region, especially within multilingual communities. Firstly, policymakers should prioritize flexible assessment models—including online, offline, and hybrid options—to ensure equitable access for all students, regardless of technological barriers. Investing in digital infrastructure at the national and local levels is essential to support widespread and fair participation. Secondly, educational policies must promote transparency and accountability. Clear communication about assessment methods, purposes, and limitations helps build trust among educators, students, and stakeholders, ensuring ongoing fairness and credibility. Thirdly, governments should establish mandatory professional development programs focusing on language assessment literacy. Equipping teachers and administrators with comprehensive training strengthens assessment validity and enhances instructional quality.

Q & A with Damon Young

Damon Young – Researcher for JEDI, ELR, British Council

What are the key challenges in designing inclusive language assessments for a diverse range of test-takers? 

From my experience working in applied linguistics, EDI, and education policy, one major challenge is designing assessments that appropriately accommodate neurodiverse learners and individuals with invisible disabilities, such as dyslexia and ADHD. Traditional assessments often overlook these learners’ needs, inadvertently creating unfair barriers; for instance, rigid time constraints disadvantage students with slower processing speeds or difficulties focusing. Another challenge lies in addressing multiple intersectional barriers: students frequently face compounded disadvantages related to cultural background, socioeconomic status, and invisible disabilities simultaneously. Finally, assessments frequently reflect implicit biases favouring dominant linguistic and cultural norms, thus disadvantaging diverse learners, including those whose cognitive processing styles differ from mainstream expectations.

Based on your research or experience, what practical strategies or recommendations can educators and test developers implement to make assessments more inclusive? 

Drawing from my doctoral research, EDI experience, and teaching practice, practical strategies should include explicitly incorporating Universal Design for Assessment to support neurodivergent learners. For instance, offering multiple response formats (oral, written, digital), flexible timing, and access to assistive technology such as text-to-speech software significantly benefits learners with dyslexia or ADHD. Educators should ensure assessments are culturally responsive and intersectionally sensitive, meaning tasks reflect diverse life experiences and identities. Additionally, clear and proactive communication of available accommodations for learners with invisible disabilities is crucial, creating a supportive environment that encourages disclosure without stigma. Finally, educators and assessment developers should receive ongoing training to recognise and reduce implicit bias, fostering a more equitable approach to assessing diverse language competencies. 

What role does sociocultural context play in shaping how educators assess students’ language skills? 

In my experience within EDI and Accessibility policy and as an educator working with linguistically and neurodiverse students, sociocultural contexts greatly influence language assessment practices. Educators' perceptions of language proficiency are shaped by implicit biases tied to cultural norms, dialects, and expected communication styles, often marginalising learners who express themselves differently due to cultural differences or neurodivergence. Recognising these biases helps educators understand, for example, that a dyslexic student's difficulty with conventional written expression does not equate to a lack of linguistic or cognitive ability. 


Moreover, integrating sociocultural contexts into assessment tasks, such as scenarios relevant to students' lived experiences and intersectional identities, allows learners to demonstrate their authentic language abilities more effectively. My research further highlights the importance of professional development for educators to sensitively assess language skills by considering each learner’s unique sociocultural background, invisible disabilities, and intersectional experiences, ensuring fair and meaningful evaluation of all students.