Marino Felice - AI & Technological Innovations
What role does technology play in expanding access to language testing across different settings, and how can we address infrastructure gaps in under-resourced areas?
Technology is crucial in making language testing accessible to wider audiences, as it allows learners in remote areas or with limited financial resources to access high-quality assessment and learning opportunities. This is a significant step towards reducing inequalities, as it helps level the playing field for individuals in disadvantaged settings. However, for these initiatives to succeed, reliable internet access, basic infrastructure and digital literacy are essential. Partnerships between governments, educational institutions and private organisations can play a significant role in providing these resources.
In what ways can AI-driven assessments enhance accessibility and fairness, and where might they fall short?
AI provides valuable capabilities that can significantly enhance accessibility, such as speech recognition, text-to-speech and personalisation. Beyond assistive technology, automated scoring can also support fairness by providing instant, consistent results, reducing human bias and enabling simultaneous real-time testing across different populations. These advancements have the potential to make assessments more inclusive and equitable, ensuring that all learners have the opportunity to succeed regardless of their individual circumstances. However, technology is not perfect, so we need responsible humans to monitor it and intervene when necessary. For example, human examiners can override automated scores when the AI scorer is not confident in its assessment, or step in where the technology may fail, such as cases of inaccurate speech recognition. Needless to say, AI cannot replace the emotional support of human educators in difficult situations.
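To make the override idea concrete, here is a minimal sketch of a confidence-based routing rule, assuming a hypothetical AutoScore record and an illustrative threshold value; it is not any particular testing provider's implementation.

```python
# Illustrative sketch only: route low-confidence automated scores to a human examiner.
# The AutoScore structure and the 0.85 threshold are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class AutoScore:
    candidate_id: str
    score: float        # automated score on the test's reporting scale
    confidence: float   # scorer's self-reported confidence, 0.0 to 1.0

CONFIDENCE_THRESHOLD = 0.85  # a real programme would calibrate this cut-off

def route(result: AutoScore) -> str:
    """Accept the automated score only when the scorer is confident;
    otherwise flag the response for human examiner review."""
    if result.confidence >= CONFIDENCE_THRESHOLD:
        return "auto-score accepted"
    return "flag for human examiner review"

print(route(AutoScore("cand-001", 6.5, 0.92)))  # auto-score accepted
print(route(AutoScore("cand-002", 5.0, 0.40)))  # flag for human examiner review
```

In practice the flagged responses would join a human marking queue, preserving the efficiency of automation while keeping examiners responsible for the uncertain cases.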
How is AI changing the landscape of language assessment, and what implications does this have for test-takers in Latin America?
AI is transforming the language assessment landscape in several ways. Automation has made assessments more efficient by providing instant results and feedback without human intervention. It also enables remote testing, making access to education more equitable, and can deliver more consistent and fairer assessments by reducing unconscious human bias.
These changes can have profound implications for the Latin American context. Automated assessments can provide a scalable solution to reach learners in remote and underserved areas. They can also be a more cost-efficient alternative to traditional in-person examinations, which is crucial for under-developed economies, and can help level up educational opportunities across the region, reducing inequalities between countries as well as between urban and rural areas.
How can AI-driven adaptive testing and other emerging technologies enhance accessibility in language assessment, and what ethical concerns should be considered?
Adaptive testing can make assessments more efficient by strategically adjusting the test to match the learner's ability, thereby reducing testing time and producing quicker results. This also helps reduce anxiety for students, as the tests are personalised and shorter. Combining this technology with other AI-enabled capabilities such as speech recognition, text-to-speech and automated scoring can greatly enhance accessibility, leading to more inclusive and equitable assessments. When it comes to ethical considerations, we must ensure that AI models are trained on representative datasets so as to minimise algorithmic bias, strive to use transparent and explainable models, and make sure we have “humans in the loop” who can monitor the use of the system and take remedial action if needed.
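The core idea of matching item difficulty to the learner can be illustrated with a very simple sketch. The item bank, difficulty scale and up/down rule below are assumptions for illustration only; operational adaptive tests typically rely on item response theory rather than this crude staircase.

```python
# A minimal sketch of adaptive item selection: raise the difficulty after a correct
# answer and lower it after an incorrect one, so the test homes in on the learner's
# level in fewer items. The 1-5 difficulty scale is an assumption for illustration.
import random

def run_adaptive_test(answer_correctly, num_items=10, start_difficulty=3):
    difficulty = start_difficulty          # 1 (easiest) to 5 (hardest)
    responses = []
    for _ in range(num_items):
        correct = answer_correctly(difficulty)
        responses.append((difficulty, correct))
        difficulty = min(5, difficulty + 1) if correct else max(1, difficulty - 1)
    # crude ability estimate: average difficulty of the items answered correctly
    correct_items = [d for d, ok in responses if ok]
    return sum(correct_items) / len(correct_items) if correct_items else 1.0

# Simulated learner who reliably handles items up to difficulty 4
learner = lambda difficulty: difficulty <= 4 or random.random() < 0.2
print(f"Estimated ability level: {run_adaptive_test(learner):.1f}")
```

Because the test spends most of its items near the learner's level, it gathers more information per question than a fixed-form test of the same length, which is what makes the shorter, personalised experience possible.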
As AI-powered assessments become more widely accepted, how should we rethink the balance between technological innovation and the fundamental principles of language testing?
Much as we might be tempted to incorporate the latest technology into our assessments, we must remember that technology is a means, not an end. Our learning and assessment goals should always come first. Therefore, we should only use AI if it helps us achieve those goals. Using tools without a clear purpose or attempting to retrofit our constructs into them is not the right approach. AI is here to stay, so we shouldn’t fight it but use it responsibly.