How can AI help with education?

Working Group Discussion • 12th December 2023

Summary 

The primary goal of the AI-for-Education.org initiative is to transform the provision of AI tools for education in LMICs by supporting organisations to develop tools that improve learning for all.

One of the first key steps towards this objective is to develop a strategy for where to focus – which means thinking about how AI can potentially help improve learning in these contexts. We call these possibilities use cases*, or simply ideas – and there are potential use cases at every level of the system.

To help structure this conversation, we outlined the general components of an education system in a framework. We then brainstormed a list of AI use cases for each, looking at where AI can add value, especially in the context of LMICs.

On December 12th, we brought together individuals and organisations who are familiar with the daily challenges of improving learning, many of whom are at the forefront of innovation, to discuss these ideas**. The objective was to collaboratively develop, refine and expand the ideas, gauge traction, and gain insights into community priorities.

We brainstormed in small groups and cast our votes – and summarise what emerged below. To simplify the logistics we grouped ideas into five categories: Assessment, School leadership and administration, Teacher recruitment and development, Student learning, and Teacher tools.

*We define an AI use case as a recognised problem in education for which there exists a potential AI solution.

**The event brought together 80 participants representing 53 businesses and NGOs active in 11 sub-Saharan African and South Asian countries, alongside participants from developed countries.

What emerged?

Overall, the tone of the event was one of “cautious optimism”: many ideas, but also many concerns over how they can be implemented. Below we summarise the headlines, before looking at the most popular ideas from the five discussion groups. By far the use cases viewed as having the most potential were those involving translation, which were seen as offering high impact for relatively minimal inputs. The least popular were either those that removed human oversight or interaction in inappropriate places – curriculum design or interactions with parents – or those regarded as economically or technically too difficult to accomplish in the LMIC context at present, such as augmented reality apps for classrooms.

Alongside the use cases, people were keen to share ideas on how tools should be applied and who they should work with, which we have summarised below.

Assessment

Summary: AI tools enable greater personalisation in assessment. This has the potential to reduce teacher workload and increase effectiveness, but it must not come at the expense of close engagement with students. Hopes are high among educators, and more widely in society, about the benefits that AI assessment might bring. However, these expectations must be tempered by the current limitations of AI models. There must also be proper oversight, matching the social and political pressures placed on exams, to maintain confidence in the system.

One key benefit of AI-driven assessment is greater personalisation. For classroom-level assessment (formative or low-stakes summative assessments), greater use of automated marking and individual testing can reduce teacher workload and ensure teaching at the right level.

Assessment must be designed to increase engagement between teachers and their students. There was a concern that relying too much on an AI tool to diagnose or assess learning needs means teachers would become more distant from their students. AI assessment should engage and support teachers as they design lessons to address identified needs.

The fast pace of change has led to very high expectations for AI assessment tools. Care must be taken to avoid setting AI tools tasks that exceed their current accuracy or sophistication. Introducing them in lower-stakes contexts first and cautiously integrating them into national assessments is a prudent approach.

Related to the social expectations for AI, there is a need for oversight. This may mean including human experts in the review of AI-generated content. It may also mean oversight at a broader societal level to ensure that AI assessments deliver the benefits promised without creating greater or different inequalities.

School Leadership and Administration

Summary: Discussions on the role of AI in school leadership and administration were positive. Groups expressed interest in using AI to support evaluations, improving retention or drop-out rates, and increasing the available resources teachers have at their disposal.

AI tools that can give feedback on teacher performance – either after a lesson or in real time – offer interesting possibilities. However, the need to implement such monitoring in a sensitive and constructive way was seen as overriding. Without safeguards, teachers may feel deeply uncomfortable with an excessive level of surveillance.

AI-driven predictive analysis could support school leaders in dealing with two key challenges – staff retention and student drop-out. AI could identify at-risk students, triggering additional support measures and thereby improving educational outcomes. Better teacher retention protects state investment in the training and development of experienced members of the teaching workforce.

AI tools can help school leaders and policy makers ensure that the resources used are relevant to the curriculum and identify gaps in resource provision. However, there are issues of cultural or historical relevance that mean human expert oversight may still be required to assure good quality.

Teacher Recruitment and Development

Summary: The applications for AI in teacher recruitment and development are varied. AI tools can identify the candidates with potential for success in a particular post. Yet there needs to be a sense of realism rather than hubris when designing such systems. Finally, there is a limited role for generative AI to provide mentoring for teachers, but only if there are no suitable experienced teachers available.

Using AI tools to predict which candidates would be most likely to be successful in a teaching post can have long-term benefits such as improved retention and learning outcomes. It could also reduce opportunities for corruption and nepotism. However, there is a need for realism when dealing with shortages of teachers. In some situations, any teacher may be better than no teacher if they are willing, even if they do not fit a particular desired profile.

Human feedback will be more nuanced and give the recipient the sense that they are participating in a community of learning. Generative AI should therefore be used as a mentoring tool only where there are no experienced colleagues or school leaders who can take on this role.

Student Learning

Summary: Student learning has been the most active area for early development of AI tools. Designers need to be realistic about the amount and kind of interaction they expect from students with AI tools. AI should not be seen as the easy option or an alternative to effortful work. Translation applications are early favourites for enabling quick positive results. Finally, AI developers would do well to remember that students learn in other contexts as well as their schools.

In general, students find interaction with a chatbot difficult and will need considerable training to do so. Chatbots might be a better tool for teachers, supporting and guiding their work, with students focusing on the AI-generated content itself.

AI tools can be engaging and novel. However, they should not become an easy option or an alternative to effortful work. The onus is on the developer to ensure that applications do not prioritise entertainment over effortful learning in their design.

There is considerable interest in AI tools for translation. Projects in this area are well bounded. They are also “low floor, high ceiling”, in that they have low barriers to entry and considerable potential for impact. Therefore, prioritising these in the short term is a good way forward.

It is a mistake to assume that students only learn at school. The development of tools that enable learning at home or in other contexts may give greater equity of access. Equally, the role of the teacher out of school might be fulfilled by an older sibling or adult family member. Where possible, apps should take account of this diversity of learning contexts.

 

Teacher Tools

Summary: There are contrasting purposes – developing new tools or resources (such as ways to facilitate cross-curricular learning) and refining existing tools and strategies (such as translating teacher resources into a more culturally and linguistically relevant form). Instruction in a student’s first language is generally regarded as desirable but not always possible. In some cases, however, it is not the best option.

Cross-curricular teaching and project-based pedagogy are new approaches which are difficult to achieve. They require creative use of resources and, often, good communication between teachers in different subject areas. AI tools can support teachers as they navigate pedagogical issues and identify opportunities to develop new approaches.

Translation of existing teacher materials and generation of new ones can deliver enhanced, culturally relevant resources. However, expert review is still required to ensure that information is unbiased and accurate.

Advances in translation mean that instruction in many students’ first languages may be possible. However, just because new technology enables us to do something does not mean that we should automatically do it. Instruction in local languages can isolate some learners from minority groups. In areas of conflict, it can also be more divisive, focusing students on regional tensions. In these situations, it may be better to develop high-quality resources in international languages to support teachers and students.

Cross-cutting Comments

Occam’s Razor for AI: Perhaps it is best to target development towards the use cases with the fewest component tools. The fewer the tools, the lower the complexity and therefore the lower the investment required. Focusing on these could maximise return on investment.

Digital literacy: The digital literacy of teachers is generally low, so expecting them to use sophisticated AI tools is not realistic.

Connectivity and AI tool design: Issues of poor connectivity and infrastructure will persist for the foreseeable future. Large language models requiring considerable energy and a constant internet connection may not be practical in many places. Therefore, resources should be devoted to testing small language models which can be used on standalone devices that only connect to the internet intermittently when it is available.
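To make this concrete, here is a minimal sketch of the kind of workflow this implies, assuming the Hugging Face transformers library and a small open model such as google/flan-t5-small (both are illustrative assumptions, not choices made by the working group). The model is downloaded once while a connection is available, cached on the device, and then used without further connectivity.

    from transformers import pipeline

    # Illustrative choice of a small open model; any similarly sized local model would do.
    MODEL_ID = "google/flan-t5-small"

    # First run (online): downloads the model weights and stores them in the local cache.
    # Later runs reuse the cache; setting the environment variable TRANSFORMERS_OFFLINE=1
    # tells the library to rely on the cached copy only, so no connection is needed.
    generator = pipeline("text2text-generation", model=MODEL_ID)

    prompt = "Explain photosynthesis to a ten-year-old in two sentences."
    print(generator(prompt, max_new_tokens=60)[0]["generated_text"])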

Language in model design: LLMs are designed by and for users working in their mother tongue (L1 in education speak). However, many users, if interacting with an LLM in English for example, will be using their second, third or fourth language. Therefore, their speech may not follow the patterns an LLM might predict for an L1 user. This has implications for the intelligibility of the input for the LLM and how easy it is for the user to understand the LLM’s output.

Transparency: AI tools can help minimise opportunities for human bias or corruption in the education system. However, their own potential for bias is well documented. To ensure confidence in systems using AI, especially for high-stakes assessment or other politically sensitive activities, developers need to build transparency into their models so users can follow predictive or decision-making processes.

Accessibility: The design of many AI tools means they are not usable by students with vision impairments. Accessibility should be considered as part of the design process to ensure more equitable access to these tools.

Why do we need to do this as a collective?

We do this because prioritising is not an easy task and requires analysis from multiple angles and perspectives. The sheer breadth and depth of expertise gathered in our working group far exceeded anything one team could muster. Together we have a better chance of identifying the right priorities for the right reasons.

The task isn’t over; in fact, the event was just the beginning. Now the use cases are live on our website, available for anyone to read and, crucially, to vote on. Instead of five categories, each use case is now tagged based on its function, capabilities, or underlying technology*.

This way, as a community focused on AI in education in LMICs, we can view and review our priorities together, adding use cases and shifting focus as needs and technology dictate.

*For these, we used available categories of machine learning (Scikit Learn) and generative AI (Huggingface).

What is next?

The ongoing prioritisation of use cases provides valuable information on where we, and others, should invest time in shaping tools. A vote here is a voice in how we allocate our resources – including which learning-by-doing ideas we seek to test – and what we recommend others invest in. The votes will also help us decide what benchmarks we need to develop to ensure that tools are high quality and evidence-based for the education tasks we will use them for.

These use cases (and others we hope community members will add) will be consulted further as we develop our strategy for 2024 and beyond. Our next event will explore some of the challenges developers face when integrating AI to improve learning outcomes – a topic repeatedly touched on in Working Group discussions.

