
A Review of the Literature on 'Artificial Intelligence in Education' by Nana Oguntola (2024).

Updated: Sep 19



Abstract

This literature review looks at six articles focused on artificial intelligence in education (AIED).


The articles reviewed span 30 years of research up to 2023 and highlight the areas of focus and paradigm shifts in AIED research over that period.


The review found that AIED research has mainly focused on technological applications, with little attention to context or to the pedagogical requirements of education.


Thus, when the Council of Europe's research was published, its focus on human rights and the protection of children in the use of AIED appeared to be a desperate attempt to catch up, as these concerns had not been the focus of AIED research until recently.


Additionally, though the Department for Education's 'call for evidence' provides a good indication of the impact of AI tools in use, the literature once again points to a lack of preparedness by government and policymakers in the use of AIED.


This review calls for immediate research into the impact of AIED, not just in terms of its applications or capabilities but in terms of its pedagogical application within social, environmental, economic and psychological structures.

 

Introduction


This literature review looked at six articles in the field of artificial intelligence in education.


The aim was to discover what has already been written about AIED and identify any gaps in the research in order to make recommendations regarding future areas of research in the sector.


To ensure consistency of thought, it was important to apply a single definition of AI throughout this literature review.


The definition used by UNICEF and the Council of Europe seemed to provide a good framing which could be applied across all the literature reviewed.


The articles discussed were:


'Educational technology: Digital innovation and AI in schools', House of Lords Library, by James Tobin (2023);


two articles published in the International Journal of Artificial Intelligence in Education (IJAIED): 'Enlarged Education – Exploring the Use of Generative AI to Support Lecturing in Higher Education' by Darius Hennekeuser et al. (2024), and 'Innovative Educational Approaches: Charting a Path Ahead', Artificial Intelligence in Education (pp. 57-61), by Anita Chaudhary (2023);


the Council of Europe's report 'Artificial Intelligence and Education: A critical view through the lens of human rights, democracy and the rule of law' (2022);


'Evolution and Revolution in Artificial Intelligence in Education' by Ido Roll & Ruth Wylie;


the Department for Education's report 'Generative AI in education – Call for Evidence: summary of responses';


and 'Artificial intelligence innovation in education: A twenty-year data-driven historical analysis' by Chong Guan, Jian Mou and Zhiying Jiang (2020).


The review explores the articles whilst critically analysing them to draw out the usefulness and weaknesses of each.


A critical discussion runs throughout the review of the arguments presented in the articles, with a view to identifying the key themes highlighted below.

The conclusion presents the main considerations of the review, together with recommendations.


Methodology


The selection criteria for the articles were fourfold:

1.      Articles which could provide a historical overview of AIED and thus an analysis of paradigm shifts in the sector over the years. Selecting articles which had themselves reviewed research spanning 30 years provided that broad-church, bird's-eye perspective.


2.      A review of how policy makers in Europe and the UK viewed AIED, in order to analyse their position and understand the safeguards or actions taken in this space for users, designers and society as a whole. The Council of Europe report gave an overview of how the European body viewed AIED; the article for the House of Lords showed how lawmakers approached AIED; and the Department for Education's 'call for evidence' provided a specific look at how the department responsible for education in the UK was approaching AIED.


3.      A review of literature which could provide a discussion of pedagogical approaches within AIED.


4.      Literature which could help identify any gaps in research on AIED.


Initially, Poe.com, an AI platform, was asked to suggest articles and books discussing AI in education and in filmmaking. Upon review of the list, several of the suggestions were found to be fabricated and did not exist, so a large number of them were dropped.
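As an aside, this kind of hallucination is straightforward to screen for programmatically. The sketch below is purely illustrative and was not part of this review's methodology: it checks each AI-suggested title against the public Crossref API and flags titles with no close match as possibly fabricated (the sample title and the matching threshold are assumptions).

```python
# Hypothetical screen for fabricated references: look each suggested title
# up in Crossref and flag titles that return no close match.
import requests

def reference_exists(title: str) -> bool:
    """Return True if Crossref holds a work whose title closely matches."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    query_words = set(title.lower().split())
    for item in resp.json()["message"]["items"]:
        candidate = " ".join(item.get("title", [])).lower()
        # Crude similarity: most query words must appear in the found title.
        if len(query_words & set(candidate.split())) >= 0.8 * len(query_words):
            return True
    return False

for title in ["Artificial intelligence innovation in education: A twenty-year data-driven historical analysis"]:
    print(title, "->", "found" if reference_exists(title) else "possibly fabricated")
```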


The search was then repeated on Google Scholar with the same keywords, which provided the rest of the articles reviewed.


Six articles were reviewed, and the reviews are posted on the blog: https://www.nanaoguntola.me/blog


Discussion

The first review is of the paper produced by James Tobin (2023) in advance of a discussion on technology in the House of Lords. Baroness Kidron was to ask the government 'what assessment they have made of the role of educational technology (ed tech) being used in United Kingdom schools in relation to (1) the educational outcomes, (2) the social development, and (3) the privacy of schoolchildren'.


The paper describes EdTech as the use of technology to support teaching or its day-to-day management, which is also how UNESCO defines it.


The paper identified that the use of technology had increased since COVID-19 and that schools now used technology for management and administration, teaching support and learning, and pastoral support. Schools also used EdTech to support teaching of the curriculum, but the paper noted variation in the extent of adoption, with some schools using EdTech widely and others 'cautiously'.


The paper stated that the UK's EdTech sector is the largest in Europe, worth over £900 million a year and likely to grow with the introduction of new AI tools like ChatGPT, which is in wide circulation just a year after the paper was written.

The paper frames AI in terms of assistants: administrative assistants, parental assistants, teaching assistants and instructional assistants. The government had allocated funding to enable teachers to use AI assistants in developing and planning lessons and quizzes, reducing the demands on their own time.


A new organisation called 'AI in Education', led by Sir Anthony Seldon, head of Epsom College, was formed to inform teachers of the benefits and dangers of AI in education.

They identified the following risks:

  • infantilisation of students (and staff)

  • moral risk, not least through deepfakes

  • perceptions about cheating and dishonesty

  • lack of responsibility—or answers to the question: who is in charge?

  • impact on jobs


The paper suggests these identified risks are similar to the ones set out by John Bailey in the education publication Education Next, which are:


·         Student cheating

·         Bias in AI algorithms

·         Privacy concerns

·         Decreased social connection

·         Overreliance on technology

·         Equity issues


Another risk identified is inequity of access to technology, together with a lack of consideration in the design of the tools for upholding the rights of children or furthering their development. The report refers to a UNESCO report, 'The Ed Tech Tragedy', which suggests AI tools were being developed without due consideration for the developmental needs of children and states these firms were 'cannibalising' the sector.


The UNESCO report discussed the unwanted consequences of the use of technology during COVID, including isolation, mental health breakdown, lower achievement and invasive surveillance. It felt the hopes that technology would replace human teaching environments had failed to manifest during COVID and questioned whether the use of more technology would be positive for learners.


The paper concluded with the issues around privacy and the need for guidelines, certification and accreditation around the use of AI in education.


It notes the risks of generative AI in schools and states that the government had published a paper, 'Generative Artificial Intelligence (AI) in Education'.


This paper was interesting in that it provides an idea of the way the government views AI, which is mainly with fear, distrust and trepidation.


Most of the threats stated here already exist with the internet, which is a global staple; and, as with the internet, people generally adapt tools to suit themselves and tend to master or mitigate the risks as they become more familiar with them.


The document and its influences come from technocrats who generally would not use AI and therefore remain sceptical of it. For UNESCO to label AI in education an 'Ed Tech Tragedy' is a big declaration which immediately puts fear into the minds of other policy makers and educators. Such a statement from a respected organisation does not encourage innovation or experimentation.


In terms of equity of access, this is an indication of an unequal society and not a result of technology; the cure therefore runs much deeper and needs to be sought outside of technology.


The paper focuses too much on the risks and dangers and too little on the benefits and innovation which AI provides. Additionally, it focuses more on the administrative uses of AI and less on its uses as a tool within the process of education itself.


In testament to the rapid development of AI, the technology can already do far more than the paper suggests, and tools continue to be developed.


In the end, the paper offered no solutions or recommendations, just a critical overview of AIED with no intimation of guidance for designers or users.


The Government needs to be far more open and positive towards AI in education: the tools are here to stay, and its focus must be on how to harness them for the good of learners and practitioners, with an eye on the risks, of course.


The second review looked at two articles published by 'The International Journal of Artificial Intelligence in Education', which is published in conjunction with the International Artificial Intelligence in Education Society (IAIED). The society refers to itself as 'A multidisciplinary group at the cutting edge of computer science, education, and psychology'.


The journal has a broad scope in looking at AI in education, much broader than that used so far by the UK government and UNESCO, which view AI within the scope of administration, assistance or support for curriculum development. Instead, its 'Coverage extends to agent-based learning environments, architectures for AIED systems, bayesian and statistical methods, cognitive tools for learning, computer-assisted language learning, distributed learning environments, educational robotics, human factors and interface design, intelligent agents on the internet, natural language interfaces for instructional systems, real-world applications of AIED systems, tools for administration and curriculum integration, and more'.


This is an entire spectrum which includes integration, robotics, collaboration, architecture and language, whilst also covering curriculum-development support and administration. It gives a much better overview and understanding of AI in education, with scope for far more utilisation, innovation and collaboration; a definition which should be adopted by policy makers to allow a better assessment and understanding of the full spectrum of AI in education.


The first article is 'Enlarged Education – Exploring the Use of Generative AI to Support Lecturing in Higher Education' by Darius Hennekeuser et al. (2024). They discuss how vital it is for tool creators to understand the needs of university lecturers and, by extension, educators. The researchers built tools specific to the requirements of educators, utilising an 'LLM-based assistant with retrieval augmented generation (RAG) capabilities and lecturing materials as its data foundation'.
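Their paper describes the system's architecture rather than publishing code; as a rough sketch of the RAG pattern they name, the pipeline has this shape (the embed() and generate() functions below are placeholders for whatever embedding model and LLM a system uses, not their actual components):

```python
# Minimal retrieval-augmented generation (RAG) sketch: lecture materials are
# embedded, the chunks closest to the question are retrieved, and the LLM
# answers grounded only in those chunks.
from typing import Callable, List
import math

def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rag_answer(question: str,
               chunks: List[str],              # pre-split lecture materials
               embed: Callable[[str], List[float]],
               generate: Callable[[str], str],
               k: int = 3) -> str:
    # 1. Embed the question and every chunk of the lecturing materials.
    q_vec = embed(question)
    scored = [(cosine(q_vec, embed(c)), c) for c in chunks]
    # 2. Retrieve the k most similar chunks.
    context = "\n\n".join(c for _, c in sorted(scored, reverse=True)[:k])
    # 3. Ask the LLM to answer using only the retrieved material.
    prompt = ("Answer using only the lecture materials below.\n\n"
              f"Materials:\n{context}\n\nQuestion: {question}")
    return generate(prompt)
```

Grounding answers in the lecturers' own materials, rather than in the model's open-ended knowledge, is plausibly what made the tool feel trustworthy and reliable to the lecturers in the study.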


They found lecturers were more positive towards using the product and felt they would utilise AI more if they knew it was trustworthy and reliable.


This is indeed a good place to start and connects with the paper written for the House of Lords by James Tobin (2023), which recommends accreditation for 'AI in Education' tools. Accreditation would improve utilisation because educators could trust the source; the principle that people accept communication when they trust its source clearly applies here.


The second article was by Anita Chaudhary, 'Innovative Educational Approaches: Charting a Path Ahead', Artificial Intelligence in Education (pp. 57-61) (2023).

The paper discusses the impact of AI in Education but admits much of its impact is still largely unknown.


This paper also provides a wider description of the use of AI in education than the government and UNESCO do, by including the use of platforms and tools within the process of learning and teaching as well as all the administrative and support tasks listed by the government and UNESCO.


It discusses the global growth of the industry by referring to a survey by Research and Markets, which stated that 'the global market for AI education reached $1.1 billion in 2019 and is expected to surpass $25.7 billion by 2030'. This rapid growth may be a factor in why UNESCO describes AI in education as 'The Ed Tech Tragedy'. The rapid growth of an industry does raise eyebrows, but it should not be feared or treated as a threat without a discussion of its capabilities and the possibilities inherent within it.


They discuss the advantages of AI in education, which are essentially the same as in the other articles reviewed so far: more immersive, creative and individualised learning, and less demand on teachers' time. The disadvantages are bias, dependence on technology and inequality of access.


Their conclusion is that AI is here to stay, and designers and creators must work to ensure they understand what educators require whilst educators must embrace AI in the knowledge that it cannot take their jobs but rather will help facilitate their teaching.


The third review looks at the report produced by the Council of Europe, which discusses the use of AI in education through the lens of the Council's objectives around democracy, the rule of law and human rights.


The Council of Europe's ministers noted in 2019 that AI's impact on education was increasing and that, though this brought opportunities, there were also threats. They therefore commissioned a report which looked at 'the application and the teaching of AI in education, which we refer to collectively as "AI and education" (AI&ED)'; looked at 'AI&ED through the lens of the Council of Europe's core values: human rights, democracy and the rule of law'; and, third, took a 'critical approach to AI&ED, considering both the opportunities and the challenges'.


Their definition of AIED was useful. They differentiated it in the following way:

The connections between AI and education: “learning with AI” (learner-supporting, teacher-supporting and system-supporting AI), using AI to “learn about learning” (sometimes known as learning analytics) and “learning about AI” (repositioned as the human and technological dimensions of AI literacy).


Though they acknowledge that AI offers opportunities, they feel there are also many threats with the potential to overpower educators, undermine education and reduce citizens' critical thinking and autonomy, and that AI thus should not be widely used in schools; a stance which leaves almost no space for the positive impact AIED may have in schools. This stance, taken right from the start of their review, immediately signals the biased nature of a report which should be critical and fair.


They take note of the plethora of tools produced by profit-making companies and balance this against the backdrop of the use of AI in education. They say their aim is to provide a critical analysis, rather than a glowing recommendation, in order to protect users.


To understand the viewpoint of the writers, context is important: they produce their report within the context of the 'Digital Citizenship Education Project (DCE), which aims to empower children through education and active participation in the increasingly digital society'. Understanding this context puts the report in perspective, especially its sometimes overly critical approach to AIED.


They state that technology is complex and non-linear, with 'dangerous unforeseen consequences', almost as though a bogeyman waits for all who try to enter this room of technology, which they claim their report seeks to unravel or decipher for their audience.


They provide a preferred definition of AI which is useful for the review of this document and provides an understanding of how European policy makers see AI.


They use a definition by UNICEF which is derived from that of the Organisation for Economic Co-operation and Development (OECD) member states:


AI refers to machine-based systems that can, given a set of human-defined objectives, make predictions, recommendations, or decisions that influence real or virtual environments. AI systems interact with us and act on our environment, either directly or indirectly. Often, they appear to operate autonomously, and can adapt their behaviour by learning about the context. (UNICEF 2021: 16)


The report prefers this definition because it does not depend solely on data: it includes 'rule-based or symbolic AI and any new paradigm of AI that might emerge in future years', and it depends on human beings to drive it. Their criticism of the UNICEF definition is that it implies AI may be able to learn on its own, which they claim will not happen (Rehak 2021).


In fact, they believe machine learning will soon reach its development ceiling and be unable to progress further; an interesting idea, as Professor Geoffrey Hinton, the man called the 'godfather of AI', declares AI will soon be super-intelligent.


They disagree with the concept of AI as a panacea and state that it must be seen as a tool which can have 'positive impacts' but also has various limitations, such as a lack of accuracy in some cases, a tendency to change as soon as it receives a different stimulus, and bias.


It is interesting to note they find the name ‘artificial’ problematic as it implies that the ‘creation of a non-human intelligence is possible’ and that it has the capacity to ‘learn’ on its own.


They seem to have missed the fact that AI is supposed to have the capacity to learn on its own: the more information it gathers from various sources, the more it learns and changes its responses. They say ChatGPT can dish out nonsense in certain situations; however, it is learning and constantly changing.


They make the important decision not to define or discuss AI solely within its technological capacity but to address its 'sociotechnical' characteristics, as it is created, developed and used within human processes and contexts; they feel any look at AI must include both sides.


The paper does not see AI simply as a threat to jobs; its view is more nuanced, suggesting AI may even create new ones. They cite work by Frey and Osborne on the impact of automation on occupations, and note the '"hidden ghost work" of AI: the data cleaning, image labelling and content moderation being undertaken by usually poorly-paid workers in developing economies (Gent 2019; Raval 2019)'. They admit the impact of AI on jobs calls for more research.


They critique the idea that AI is seen as an answer to many of education's problems, such as 'the lack of qualified teachers, student underachievement and the growing achievement gap between rich and poor learners'. Instead, they feel several issues need to be understood, including: 'the aims of using AI in education, where it is used, by whom (by individuals, institutions or industry), how it is operationalised, at what levels (from the single learner to whole classrooms, collaborative networks and national and transnational levels), how it works and so on'. Thus, they would like a more holistic understanding of the context of AI in education to be established prior to its deployment.


This is useful for ensuring AIED is deployed within a pedagogical framework and within the socio-economic, political and environmental contexts of applicability.


They provide a useful breakdown of AI into four categories for anyone involved in AIED:


1. "Learning with AI" involves the use of AI-driven tools in teaching and learning, for both students and tutors.


2. "Using AI to learn about learning" uses AI to analyse data about how learners learn, in order to influence system programming and design or to support matters such as admissions, retention and planning.


3. "Learning about AI" involves 'increasing the AI knowledge and skills of learners of all ages'.


4. "Preparing for AI" involves ensuring that people everywhere are prepared for the possible impacts of AI on their lives, including issues of bias, ethics and the impact on jobs. They suggest this should be integrated into learning with AI, so that as learners learn to use AI tools they are also taught about the implications for their lives. They refer to this as 'the human dimension'.


AI applications and pedagogy


The writers feel that, despite research on other effective forms of learning ('Entwistle 2000; guided discovery learning, Gagné and Brown 1963; productive failure, Kapur 2008; project-based learning, Kokotsaki et al. 2016; and active learning, Matsushita 2018'), producers of AI tools for education have been traditional in focusing on cognitive and behaviourist learning methods and have not been as creative or innovative, which the Council's report says undermines independence, critical thinking and self-development. They give the example of e-proctoring, which they say brings nothing new to the table.


Despite this criticism of AIED, it can be argued that the developers are simply reproducing what the education system wants and how it approaches pedagogy.


They argue the personalisation of AI in education has not worked. Some AI producers used to claim their tools were the 'Netflix of education', providing personalised education, but the report argues they only provided personalised pathways to the same end, not personalised learning leading to individual outcomes.


AI applications and identifying learners at risk


AI could be used to monitor attendance or to identify students at risk of dropping out, but this must be balanced against the need for privacy and data protection and the risks of intrusion and labelling.


AI applications and the developing brain


Data protection laws at the moment govern the use of data for identification, not for the processing which governs behaviour. Data is used to influence behaviour, and the impact is especially high with young people still developing their own values and identity.


The issue with this concern is that if such influence has not already occurred in the current climate, with all the access young people have to technology, the proliferation of AI will not make it happen: AI is not the cause. It should be noted that Instagram has now introduced a new filter limiting access for users under the age of 16. Its effectiveness will of course be mixed, as some young people already have full access and others can simply lie about their date of birth.


AI applications and learner agency


They are concerned about the freedom and agency of learners, but they posit this as though children and young people have no independent thinking capacity or filters through which they receive and process information or interact with AI.


This echoes Bandura's theory, and although true in some cases, it faces many barriers as a blanket definition of the way children and young people learn.


AI applications for children with disabilities


They admit that although AI tools in this area can have limitations in reproducing what is already available, they have many advantages in supporting learners with disabilities (Drigas and Ioannidou 2013); 'for example, to diagnose dyslexia (Kohli and Prasad 2010), attention deficit hyperactivity disorder (ADHD) (Anuradha et al. 2010) and autism spectrum disorder (Stevens et al. 2019), and to support the inclusion of children with neuro-diversity (Porayska-Pomsta et al. 2018)'.


AI applications and parents


The article says parents want to be involved in the learning experience of students but provides no reference for this statement. The authors feel students are highly influenced by AI technology in classrooms but do not know to what extent they are impacted. Their research should provide some answers here rather than present assumptions.


The design of AI tools in education aims to influence what children learn, as it dictates what comes up in a search. This is not a problem which can be identified as new or as an impact of AI: it has been happening on Google and other search engines for a long time.


The report presents a negative view of AIED which makes parents fearful rather than bringing them alongside to work with educational settings to ensure their wards are comfortable and safe. They imply AI will damage a child's behaviour. They contest the idea that algorithms can safely predict human behaviour, describe AI as something which can systematically harm the user, and say parents have little recourse to deal with this.


AI applications as “high-risk”


They identify the AI applications which children use as 'high risk'; no wonder people are so fearful of them. They suggest these applications should be subject to compliance in relation to data governance, transparency, human oversight, robustness and accuracy. These issues are not new either: most educational establishments already have measures in place to deal with them in relation to internet use, and these measures simply need to be updated to include AI. They also refer to AI systems as though the systems create themselves, with no human involvement in their creation.

They feel that using AIED tools to make predictions and determine learner grades is discriminatory, as basing decisions on characteristics such as gender and race can be; these are valid concerns which reiterate the need for human oversight.


Apart from the issues of data and privacy, they feel technology is 'shaping children in ways that schools and parents cannot see' and that neither parents nor children have any power over this. They give the example of the University of Buckingham, which started monitoring students' social media posts; a very intrusive exercise.


They mention how research indicates that emotions impact learning, but note that there is as yet no research data on how AIED affects emotions.


They discuss the potential use of AI to diagnose mental health issues, predict behaviour, monitor students or use facial recognition, and mention the European Data Protection Supervisor, who has called for a ban on these uses of AI, which would include educational settings. This is a bit drastic: it would be better to limit use to relevant settings rather than impose an outright ban.


AI and digital safeguarding


The irony is that the tools which they suggest are a threat to personal liberty are the very tools that keep users safe online. On the one hand, they say that by monitoring users, behaviours which could lead to radicalisation or sexual exploitation can be predicted; on the other, they say this monitoring makes learners feel less safe and alter their behaviour to change their digital footprint. Furthermore, they indicate schools claim this surveillance helps them predict how learners can transition into work.


The ethics of AI


They discuss the ethics of AI, say all citizens should pay attention to it, and admit that these ethics are complicated. They mention the topic has received a lot of attention, referencing the works of 'Boddington 2017; Whittaker et al. 2018; Winfield and Jirotka 2018) and more widely (e.g. the House of Lords, UNESCO, World Economic Forum)'.


Institutions which look at ethics in AI have also been set up, including 'the Ada Lovelace Institute, the AI Ethics Initiative, the AI Ethics Lab, AI Now, and DeepMind Ethics and Society, to name just a few'.


In 2019, Jobin and colleagues identified 84 published sets of ethical principles for AI, which they concluded converged on five areas: transparency, justice and fairness, non-maleficence, responsibility and privacy. However, these all remain open to interpretation in different contexts, both in the development and in the use of AI.


Despite this, they feel the impact of AI can be seen in education in the way it may choose candidates, where a single 'no' can negatively impact someone's life. But this is already done; what is new about it?


They say the 'social ills' of computing will not disappear just because there are codes of ethics; this is fair to say about almost everything in life.


The article further states that although universities tend to have robust ethics requirements, they do not have such robust requirements in place for AI. This can be explained by the newness of AI, an area where policy makers themselves are vague in their definitions and requirements. It would be interesting to look at the guidelines for the use of AI recently published by the EU to see what guidance they provide. It is also true that policy development regarding AIED has been slow to materialise across the board.


If, as they suggest, companies are using children for their own commercial benefit, then governments need to set the rules and companies need to abide by them. This issue goes beyond private companies failing to abide by ethics, to the lack of clarity from policy makers and governments about what those ethics are, leaving them for anyone to define and interpret for themselves.


They suggest ethics in AI must go beyond data collection and privacy, because it also involves the ethics of education, to include 'the ethics of teacher expectations, of resource allocations (including teacher expertise), of gender and ethnic biases, of behaviour and discipline, of the accuracy and validity of assessments, of what constitutes useful knowledge, of teacher roles, of power relations between teachers and their students, and of particular approaches to pedagogy (teaching and learning, such as instructionism and constructivism)' (ibid.: 521).


They mention that ethics in AI has been developed for barely 20 years, whilst ethics in healthcare has been developed over a much longer period. As AIED is recent, this may explain the slowness in developing ethics for AI in education. This should not be a cause for polarisation or for disparaging the private sector, but an exciting moment to create something new for generations to come.


In summary, the ethics of AI and education is complex but under-researched and without oversight or regulation – despite its potential impact on pedagogy, quality education, agency and children’s developing minds. Accordingly, “multi-stakeholder co-operation, with Council of Europe oversight, remains key to ensuring that ethical guidelines are applied to AI in education, especially as it affects the well-being of young people and other vulnerable groups’.


The authors ask an interesting question: 'For whom does an AI system work? The learners, the schools, the education system, the commercial players, or politicians and other decision makers?' They suggest that the ethics of AI is less about the technology and more about the people designing and using it. A valid point, as AI is just a tool, like a hammer or a car: what matters is who designed it, why, and for whom. Thus the question 'for whom does AI work?' is one of AI loyalty.


They suggest that all stakeholders, such as parents, policy makers, civil society, industry, children and teachers, should be involved in the design of the technology used in AIED. This is relevant, and a multi-stakeholder approach would be useful for the continued development of AIED. It should also be noted that designers already have testers and case studies to work from, but they are responding to market demand. Governments need to set the rules and let companies produce within them, the same as with every other technology or product.


Political and economic drivers


The article suggests that the ready availability of AIED will lead to a downgrading of the quality of education. This is far-fetched and hard to defend, especially as they fail to present adequate evidence to support it. AI systems and tools are built the way they are because that is how the education system is built; they are not a creation of their own.


The writers posit that AIED producers are profit-focused and not focused on the educational benefit of learners, calling the use of AI in schools 'privatisation by stealth'. They seem to forget these companies operate within a capitalist system and respond to market demand: they would not produce what the market does not require, and the market will not purchase what it does not need.


They say AIED will exacerbate inequalities between rich and poor, disabled and marginalised communities, but admit elsewhere in the document that these inequalities are created by global systems; which is true, because AI simply exists within a structural global order of inequality.


They suggest teachers, educators and policy makers also need training to decide which tools to use, which should indeed be the case.


Evaluating AI in education


The writers decry the absence of empirical research on the use of AIED. They feel there is not enough robust data on which policy can be formed, and a lack of information on the impact of AIED on learners.


They feel schools and governments have decided to use AIED without adequate information, and usually decide to police its use after harm has been done rather than before.


They decry the lack of accreditation of the tools and the access which private companies have to student and teacher data; they call this 'data rent'.


They suggest that many teachers do not have enough knowledge of AI tools to make the right decisions on use. There is some truth to this, especially given the fast pace at which new AIED tools hit the market. Earlier, though, the authors mentioned that AI tools are designed as cognitive and behavioural tools, much like the common didactic means of teaching used in schools. Teachers therefore use tools which support what they do, not tools they do not understand or cannot use.


They suggest that AIED designers may go out of business and that this poses a threat to AIED. This is a weak position, as it is not unique to AIED companies: businesses go bust across all sectors.


The writers suggest that as AIED tools impact learners' mental capacities and health, they should be assessed to ensure efficacy and safety. As with all educational systems, this should be the norm and not the exception.


AIED colonialism


‘In 2020, despite the coronavirus pandemic, venture capital (VC) investments in AI start-ups reached a total of US$75 billion for the year, of which around US$2 billion was invested in AI in education companies,’ mostly in the US. The authors believe these companies are selling their approaches globally, creating what has been called an ‘AIED colonialism’.


The tools produced in the West have been sold globally without consideration of cultural differences or nuances. This is unsurprising: these are commercial enterprises set up to make a profit, and though there must be ethical considerations in doing business, as expected across all sectors, it is difficult to see how a private, investor-backed, profit-seeking company will invest in being culturally appropriate if there is no profit in doing so.


It must be incumbent on governments and policy makers to make this happen and to ensure that what is supplied into their territories is culturally appropriate. Commercial organisations respond to market demands made on them, not necessarily to emotional appeals.


They suggest Google's dominance with Google Classroom is problematic. It is true that dominance by one company is usually not good for business, for variety or for innovation.


They suggest that the global use of English, for example, will lead to lower school attainment, giving sub-Saharan Africa as an example where, they claim, lower attainment is linked to language; yet some of the most educated people in the world are from sub-Saharan Africa. Once again, the appeal should be to the markets, not the companies, to produce in the various languages of the globe. Companies will do so if market and profit demand it; otherwise, there is no obligation to create in another language, especially when there are so many and it might mean losing money.


They conclude that AIED monitoring cannot be left to profit-making companies but must fall to policy makers and governments, which is correct. They should make the rules, as they do with books and the curriculum; AIED designers will deliver to the brief, since they want clients to purchase their products.


AI, education, human rights, democracy and the rule of law


The article encourages governments to take a cautious approach to the adoption of AIED and to minimise risks to human rights, especially children's rights, as education can enhance rights when 'enjoyed fully' or affect them 'negatively if not'.


This is true and is its own solid argument for the importance of education in general.

They add that rights must be looked at in the context within which they are applied, paying attention to groups who may have those rights curtailed. This is correct; however, the world acts as if inequalities did not exist before AI arrived. AI did not create inequality, bias and prejudice: it walked into a world that is already racist, biased and prejudiced. The tools do not need fixing; the structures and the people of the world need fixing, and since that is too large and almost impossible a task, it is unlikely AI can fix it.


They discuss the rights in the United Nations Convention on the Rights of the Child (UNCRC) as the most widely ratified in the world and the basis for the protection of children’s rights everywhere.


They do, however, state its problems, such as weak monitoring and enforcement mechanisms globally, and conclude that children must be educated about their rights; again, a situation not caused by AI and which cannot be fixed by AI.


Human rights, AI and education


The article admits 'There is little substantive literature that focuses specifically on, or even mentions in any meaningful way, AI, education and human rights'; an interesting admission, given the constant mention of the lack of literature even though AIED has been around since the 1930s.


The report posits that one of the drivers for the use of AIED is a dearth of teachers, or of well-qualified teachers, especially in rural areas, but adds that AI tools will not solve this problem, which has deeper roots in socio-political and economic contexts.


Although this is a correct analysis, it also describes a positive rather than a negative deployment of AI: AI could alleviate some of the issues which, in this specific example, are caused by deeper societal complexities. If AI did not cause them, then the fact that it can help alleviate some of them is useful, and it should be used, not vilified.


 Right to human dignity


‘In the context of AI and education, this human right implies that the teaching, assessment and accreditation of learning, and all related pedagogical and other educational decisions, should not be delegated to an AI system, unless it can be shown that doing so does not risk violating the dignity of the participating children. Instead, all such tasks should be carried out by human teachers’.


The onus is on policy makers and leaders of learning institutions to require EdTech designers to prove their tools will not negatively affect the dignity of the learner, which is not difficult once the right guidelines are in place in institutions. For example, most learning institutions have IT policies; designers will deliver to guidelines.


Right to autonomy


The writers insist on the right of students not to be subjected to automated decisions and the right to contest these decisions.


There is no problem with this, except that automated decision-making has already been in play for a long time across many industries, such as banking and insurance.


They state that a dependence on AI to profile children and determine their learning pathways could be problematic as AI could get it wrong, negatively impacting children’s psychological, mental and emotional wellbeing.


They say that since old data is used to train AI, this could be problematic, noting that grades awarded during COVID had to be amended afterwards.


They suggest children should have the right to refuse AI in the classroom but do not provide a reason why this should be the case. If children will use the textbook provided by the teacher, why refuse the AI tool? The gatekeeping should happen with the authorities long before a tool arrives in the classroom. The child will not even know what works best for them: they are in school, where the choices of what and how they learn are made for them before they arrive in the classroom.


Right not to suffer from discrimination (fairness and bias)


From design to use, AIED must be non-discriminatory and accessible to all. This is the ideal, but we do not live in that world, and AI cannot create that world for us.

They discuss how bias is inherent in data sets, which perpetuate historic biases and stereotyping; this is correct and almost answers their own concerns about bias in AI.


It can create positive discrimination for the disabled, as it can be inclusive, but it can also create negative discrimination by excluding others.


They suggest companies profiling learners could get it wrong, which is a valid concern for other educational tools as well and not unique to AIED.


 Right to privacy and right to data protection


Data collection can go either way. On the positive side, more people can be reached and more data is available for wider decision-making; on the negative side, personal information may be used in ways which affect learners in the future. Thus, the opportunities come with risks, and a decision must be made as to whether the risks are worth the benefits; one which all institutions must make.


Although AI can support mental health by helping learners move from a negative to a positive state of mind, the authors feel the exclusion of the teacher from the loop takes away the personal touch and the constant ability to assess the situation.

The issue with data is not just its collection but how it is stored, who has access, how it may be used in future, whether it is anonymised, and whether it is used by others who never had permission in the first place.


Right to transparency and explainability


Their issue is that data and AI tools are often anonymous and lack clear proprietorship, so there is an absence of ownership and accountability, making it difficult for teachers to challenge AI decisions. The counterpoint is that the tools in the classroom would have been selected by the learning institution or the teacher, and so do not really hold that power over the teacher's decisions.


Right to withhold or withdraw consent


The right to consent is a valid argument that is not new, but issues around being able to withdraw consent once it has been given and used remain undecided. Hence the value of learning about AI for all stakeholders prior to agreement: read the fine print. It is of course understood that such terms can be ambiguous, which then leads to issues around exploitation, especially if money is offered.


Right to be protected from economic exploitation


Their example: when a child makes a song, they own the rights, but when they create with AI, ownership is unclear. This is an issue of copyright and ownership of content used via AI. One suggestion is that developers should keep a record of where they mine content from and deliver a system of reimbursement to the originators.


They call for the rights of the child to be embedded in policies on ethics in AI. That would depend on what the rules are around ownership in AI, a discussion still to be had. Note also that data about a child involves the stakeholders around the child, such as the family and the school, as well as other personal data.


The issues discussed here are far simpler than the document implies. Once global rules are made around the use of AI content, they will be implemented by developers and users, as with most regulated things such as curricula, books, vehicles and computers. Once regulated, people tend to follow the rules.


They discuss how the accumulation of data in the hands of AI developers makes them powerful. They feel this is not yet prolific among education creators but is growing, and note that organisations tend to adopt a single solution, which creates the potential for monopoly. Given that the West is a capitalist society, this operates within that system and will need to be regulated like other industries.


Rights of parents


They suggest parents may allow their child to use tools which harvest large amounts of data, thereby waiving the child's right to privacy, knowingly or unknowingly.


Can parents refuse to allow their child to use AI tools in school? This has not yet been tested in law, and refusal may of course affect the child's learning if others are using the tools. But consent must always be sought, not assumed, and most schools do seek consent when they allocate IT equipment to their students.


Remedies and redress


They suggest that children must be able to enforce their rights when these come into conflict with AIED; however, most rights cannot be enforced by children themselves. They are exercised by the people responsible for them, such as parents or learning institutions.


AI, education and democracy


'Regarding the role of digital technologies in modern society and their potential negative impact on democracy, Diamond noted that "once hailed as a great force for human empowerment and liberation, social media – and the various related digital tools that enable people to search for, access, accumulate, and process information – have rapidly come to be regarded as a major threat to democratic stability and human freedom" (2019: 20).'


That is a serious claim to make, especially when no proof is provided to support it. If anything, social media has been a force for alternative voices and the practice of democracy, giving marginalised voices a platform to fight for freedom and democratic transparency. Examples are the recent riots in Kenya and the UK, and the diverse Black voices telling stories, on all issues important to them, that are not often found in mainstream Western media nor imposed by hegemonic powers.


There are some risks here in terms of cyber-attacks, negative propaganda and inauthentic behaviour, but to date these have not been caused solely by AI.


They agree AI can be a force for the good of democratic principles. AI is not only influenced by humans but implemented by humans; therefore humans can set the laws and policies in place which regulate its use, especially within education.


Democracy and AI in education


The authors state that ethics in AI started being discussed 20 years ago but was not pursued, owing to the need for diversity and cultural sensitivity.


They suggest that if democracy is to live up to its ideal of being for all, then AI should be for all, but there is a disparity, as this is not the case. Yes, there will be disparity, but it is not caused by AI. If democracy does not really exist in its ideal form in the world, it is unlikely to do so with AI.


They say the democratisation of education is embodied by public schools: equal access, making sure all children have the same opportunities. Of course, this differs even among public schools, depending on the socio-economic location of the school.


They argue that AIED follows cognitive and behaviourist systems of education when it should be more connectivist or social-constructivist in practice; but this does not depend on AI, it depends on the pedagogical approach of the learning institution. AI will replicate or work within that approach, but it cannot manufacture it.


They suggest AI replicates global inequalities, as it does not take cognisance of diversity around the world, and since it is trained on past data it replicates past prejudices as well.

This is true, but developing countries need to develop their own AI systems and tools; that is how equality can be created across AIED globally. Yes, ML systems learn from the past, as they must be trained on historical data, but that data is constantly being updated.


Critical reflections


The report posits that AI has issues, especially in relation to AIED, but the criticisms levied so far are clichéd and apply not uniquely to AI but to the internet, global systems, corporate bodies, capitalism and the other structures within which the world operates.


The authors suggest AIED has always made a claim to personalisation but question whether this is a good thing. Could it not cause division in society, both in terms of attainment and access? Is it really personalised when its data collection is singular, so that information is in fact homogenised rather than personalised? They say that since the classroom is a preparation for the world, it is currently unclear how AIED impacts learners' preparedness.


AI, education and the rule of law


They provide a list of international legal frameworks protecting human rights, disability rights and the rights of the child, but they do not provide any examples of how or where these are being breached through the use of AIED.


There are currently no legal frameworks that govern AIED in particular, but they feel such frameworks are important, given AI's issues around data, rights, profiling and cross-cultural impact on education. It is important to note, however, that there are several laws that govern designers and within which they must operate, including the international and human rights laws that govern nations. AIED designers are likely to follow laws as provided by institutions, policy makers and governments.


‘With respect to the values of the Council of Europe, Schwemer, Tomada and Pasini write that:  it is noteworthy that the proposal does not follow a rights-based approach, which would, for example, introduce new rights for individuals that are subject to decisions made by AI systems. Instead, it focuses on regulating providers and users of AI systems in a product regulation-akin manner. (2021: 6)’.


 This is the way forward: a provision of the structures and guidance within which designers can develop and produce AIED tools.


In a survey at a German university assessing students' responses to AI versus human assessment and decision-making, students stated they trusted AI more, expecting an absence of bias.


The 'European Digital Competence Framework for Citizens – DigComp 2.2' (Vuorikari et al. 2022) expects all people to be AI literate, understanding how their data is processed; but few people are, in this or any other area.


 AI and grade prediction


The article presents an interesting discussion on how AI negatively impacted learners' grades for the International Baccalaureate. Due to COVID, students could not take exams, so grades were based on coursework, teachers' predictions and past performance data. This performance data was based not on the learner but on the school, which meant schools that usually did well, mainly in higher socio-economic locations, received better grades; this, and the resulting grades, were considered unfair.


Biometric data use in schools


The Finnish data authority and the Swedish data authority both took schools to court for using students' biometrics to identify them. They felt that the schools' claim to have consent from the students and their parents did not account for the imbalance of authority between school and student, and that it was unnecessary for the schools to use that information to identify the students.


In Finland the data was collected for impact assessment, and in Sweden it was used for school lunches.



The critical reflections of the authors were as follows:


·         There is an absence of research information on AIED


·         Training is required for all stakeholders


·         They call for law schools to include AI in their courses; for example, AI and healthcare, AI and war, etc.


In their conclusion, the authors say that AIED should not be discussed just in terms of the technology but also in terms of human impact, design and use.


In relation to AIED they presented the following needs analysis:


·         Given the very critical nature of the document, the writers did admit that AIED is not problematic in itself, just its deployment and use in the hands of humans.


·         They also call for further research on the impact of AI on education.


·         They suggest that AI tools are mainly commercially driven and may work outside the system; however, if tools are used in schools, it is because they work for the schools, otherwise they would not be used.


·         They suggest parents always be given the right to choose, but this is not unique, as parents are given an option whenever schools introduce new ideas, subjects or tools.


·         They suggest that tool designers should embed ethics in the tools; however, remember that marketing and money are biased. Change that and you change everything: AI simply reflects the world it lives in.


·         They say that children should not be forced to be research subjects. 'Forced' is definitely the wrong word, as that is unlikely.


·         They suggest that data rights should remain with the learners and not with the designers who mine the information, and that, at the very least, if data is mined from public schools then the tools produced should be open source. That is a discussion worth holding with the many stakeholders involved: schools and governments could make this a requirement of using the tools, unless it is already too late.


They call for the education of stakeholders, including policy makers, educators and parents, on AI tools, in order to facilitate a better understanding, discussion and deployment of AIED tools; a practice common to most new interventions, and one which should be implemented.

From the start they say this is a critical document, but some of the criticisms levied against AIED are questionable: suggesting AIED should not be used in case manufacturers go out of business, expecting manufacturers to produce in many languages and to be responsible for equality in a world where it does not exist, and labelling AIED a 'high-risk' activity.


The usefulness of the document lies in its discussion of human rights, the rule of law and democracy, as it provides another perspective on the deployment of AIED, with the rights and needs of learners at the fore of any decisions about learning tools, including AI tools.


Additionally, they discuss concepts such as ‘data rent’, ‘AI loyalty’, ‘AI colonialism’, ideas not readily discussed in the mainstream, but which are all important considerations in AIED.


The writers have called for the education of policy makers, which is important, and for an understanding of the fact that AI exists within an unjust society.


It is important that the rules, laws and requirements that exist for human beings are upheld across the world. Once this is done and clarity is provided by policy makers in relation to AIED and these rights, commercial producers will produce tools that meet the criteria, knowing that failure to do so could lose them business.


As the writers have said, there is a need for further research on the impact of AIED on learners; this is important, otherwise all of this debate remains mere theory in the absence of an understanding of that impact.


The fourth article reviewed was written by Ido Roll & Ruth Wylie, who reviewed 47 articles on AI in education from 'three years in the history of the Journal of AIED (1994, 2004, and 2014)'.


Acknowledging the evolution of AIED over the past 25 years, they seek to discover what its strengths are and what the future holds.


Their aim for the review is twofold: ‘One is an evolutionary process, focusing on current classroom practices, collaborating with teachers, and diversifying technologies and domains. The other is a revolutionary process where we argue for embedding our technologies within students’ everyday lives, supporting their cultures, practices, goals, and communities’.


These objectives are interesting, as the Council of Europe’s 2019 report on ‘Artificial Intelligence and Education’ also calls for collaboration among all stakeholders to ensure safety. They differ from the Council of Europe, however, as the Council does not want AI embedded this deeply in the lives of learners.


What is interesting here is that the writers are alluding to some form of personalisation which would adapt to the diversity of its users, a capability the Council of Europe says designers of AIED have failed to accomplish.


The article suggests the AIED community has not been innovative over the past 25 years, choosing rather to replicate traditional methods of pedagogy, a claim also levelled against AIED designers in the Council of Europe’s report.


The writers feel interactive learning environments (ILEs) have shown some positive results, which has lulled practitioners into contentment rather than innovation.

Learning theories of the 21st century are geared more towards connectivism, social constructivism and transformation, and this is not reflected in AIED.


The writers, however, see this missing capability as an opportunity for AIED. They ask what actions need to be taken to make AIED more adaptive, looking at the focus of AIED research and the changes required to achieve this goal of more adaptive learning.


To do this they take a historical look at 20 years of AIED research published in the International Journal of Artificial Intelligence in Education (IJAIED), with a focus on papers written in 1994, 2004 and 2014.


They analyse the accomplishments in the field in the early, middle and recent years, looking at general publications and special interests.


They analysed the papers within the following parameters: ‘type and focus of paper, domain and breadth, interaction type and collaborative structure, technology used, learning setting, and learning goals’.


By identifying whether papers included empirical data, they found that research papers had become more intellectually rigorous: ‘Only 1 paper from 1994 (out of 20, 5 %) had some form of empirical data. In contrast, 8 papers from 2004 had empirical data (out of 13, 62 %), and 10 (out of 14, 71 %) from 2014 had such data’.


They categorised each paper according to its focus: a modelling approach (of the learner or the domain), ‘research methodology, literature review, system description, system evaluation, or learning theories’. They found that modelling approaches took centre stage across the research period, and critiqued the lack of more innovative methods of research in the area, though they were pleased to see papers that contributed to the ‘theoretical implications and contributions of their work’.


They found the target domain of most of the papers was STEM. They felt this was because a focus on STEM attracted more attention, funding and opportunities, and also because STEM research seemed to use more empirical methodology and was thus more easily measured.


‘We analysed activity type by two dimensions: interaction style and collaborative structure’, as experienced by students. The interaction styles ranged from activities involving a single problem requiring immediate feedback, to complex problems involving multiple skills and phases, which may require alternative methods of analysis and/or response, and finally to self-exploratory environments and games.


They found research was focused mainly on step-based systems, the didactic form of teaching based on cognitive and behavioural theories of learning already identified by the literature reviewed above.


They further looked at collaboration in four areas: ‘1 learner: 1 computer are systems in which individual learners each use their own computer, and there is no designed interaction between learners (however there may be collaboration with virtual agents); n learners: 1 computer refers to systems in which a group of learners, often dyads, work together with a single machine; n learners: n computers, synchronous, describes students who collaborate in real time using different machines, and engage with a joint problem; n learners: n computers, asynchronous, refers to systems in which learners interact asynchronously with the same environment. Discussion forums are a typical example’ (‘n’ being the number of learners or computers).
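
To make the four structures concrete, they can be modelled as a simple classification scheme. Below is a minimal sketch in Python; the class and field names are hypothetical, purely for illustration, and are not code from the paper:

```python
from dataclasses import dataclass
from enum import Enum, auto

class CollabStructure(Enum):
    ONE_LEARNER_ONE_COMPUTER = auto()       # no designed learner-learner interaction
    N_LEARNERS_ONE_COMPUTER = auto()        # e.g. dyads sharing one machine
    N_LEARNERS_N_COMPUTERS_SYNC = auto()    # real-time work on a joint problem
    N_LEARNERS_N_COMPUTERS_ASYNC = auto()   # e.g. discussion forums

@dataclass
class SystemDescription:
    learners: int       # number of learners in the designed interaction
    computers: int      # number of machines they use
    synchronous: bool   # do learners interact in real time?

def classify(s: SystemDescription) -> CollabStructure:
    """Map a described system onto the four collaboration structures."""
    if s.learners == 1:
        return CollabStructure.ONE_LEARNER_ONE_COMPUTER
    if s.computers == 1:
        return CollabStructure.N_LEARNERS_ONE_COMPUTER
    if s.synchronous:
        return CollabStructure.N_LEARNERS_N_COMPUTERS_SYNC
    return CollabStructure.N_LEARNERS_N_COMPUTERS_ASYNC

# A discussion forum: many learners, each on their own machine, asynchronous
print(classify(SystemDescription(learners=30, computers=30, synchronous=False)))
```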


They found that between 1994 and 2004 few papers discussed collaboration, whilst this had increased by 2014, which also translated into an increase in the classroom. They were excited about this development, as they believe the ILE needs to be more collaborative for students.


They also looked at the technology being used (‘computers, handhelds, robots, or wearables’) and its intended setting (‘school, workplace, or informal’). They found most systems used computers, and these were present in both work and school settings.


They suggest this limited coverage excluded other technologies such as ‘smartphones and tablets, wearables and robotics’, which were cheaper, becoming more ubiquitous and offering more opportunities for interaction.


They found the literature indicated a shift in focus from product to process, and beyond cognitive learning to more collaborative systems of learning. Most studies used surveys to measure motivation, and few looked at self-efficacy in any substantial way. They felt there was a need to look beyond ILEs, which are supported environments, to environments that favour self-regulated learning outside tutored settings.


They found an absence of literature on the participation of the tutor as an involved collaborator.


They also found little literature on what the learners did in addition to working with AIED.


The literature review across the three years also looked at the language used in the articles. They found ‘student’ and ‘system’ were the most used words, as these were the focus.


The analysis also supported the shift from knowledge as a product to learning as a process, with the word ‘knowledge’ being replaced by ‘learning’ in 2004 and 2014.

They also found the field shifting in 2004 and 2014 with the inclusion of more stakeholders in the literature. They specifically mention ‘teacher’, which was missing from the 1994 writings, and ‘web’, which was missing in 2004 although it resurfaces in some 2014 writing.


Words like ‘theory’ give way to ‘empirical’ analyses, and words like ‘model’ disappear by 2014.
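
The kind of vocabulary-shift analysis described here is, in principle, straightforward to reproduce: count word frequencies per publication year and compare. A minimal sketch follows; the mini-corpus below is invented placeholder text, not the journal’s actual articles:

```python
from collections import Counter

# Placeholder abstracts keyed by year (illustrative only)
corpus = {
    1994: "the system encodes student knowledge in a domain model",
    2004: "students learning with a web based system and teacher support",
    2014: "an empirical study of learning with teachers in real classrooms",
}

stopwords = {"the", "a", "an", "of", "in", "with", "and"}

# Print the three most frequent content words for each year
for year, text in corpus.items():
    counts = Counter(w for w in text.lower().split() if w not in stopwords)
    print(year, counts.most_common(3))
```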

Overall, they find the literature indicated that AIED focused mainly on STEM subjects, tended to centre on cognitive learning, and was computer and classroom based. The literature had also increased in rigour, becoming more empirical and data based. They feel, however, that the literature needed to be more diverse in the topics addressed beyond STEM, to work in settings other than the classroom, and to include other technologies.


They consider the evolution of AIED in the following areas: ‘goals, practices, and environment’.


The writers discuss how the focus of education is shifting from linear, cognitive learning and assessment towards adaptive, dynamic learning based on application, self-regulation and collaboration, especially as technologies become more ubiquitous and information more readily available.


The article suggests classroom practices are changing from individual, cognitively focused learning to more experiential, collaborative and problem-solving practices.


They critique the lack of personalised learning paths within the curriculum which take cognisance of the diversity of learners’ backgrounds and experiences. This is a problem also identified in the Council of Europe’s 2019 report, which found AIED design inadequate in its failure to provide individualised learning pathways for learners.


The article discusses how, whilst the schooling system remains in place, education has now shifted outside the classroom to Massive Open Online Courses (MOOCs), which accommodate lifelong and independent learning as well as a global student body and post-degree education with various accreditations.


They suggest that these changes require teachers to be more like guides ‘on the side’ who support learners’ independent thinking and searching. This classroom relationship does not translate effectively into MOOCs, which often rely on talking heads, and they ask how this can be improved in future MOOCs.


The writers argue for a revolution in AIED which would form a continuum with the development already under way in the sector.


The writers felt there was an absence of ILEs which included a consideration of the context within which they were being used; they were usually ‘plug and play’ technology. They point to the Cognitive Tutor ecosystem, which offers this overarching perspective by introducing the technology together with a curriculum (Koedinger & Corbett, 2006), and call for further research which looks at the environment within which the learning takes place prior to the design process. Interestingly, the Council of Europe also advocates the relevance of context and diversity in relation to AIED.


Their literature review found no instances where teachers were involved in the design of AIED tools, a concern also raised by the Council of Europe. They suggest this should not be the case: it is imperative to include teachers as collaborators and participants in the development and design of the tools, and to conduct research on the extent to which AIED is changing ‘pedagogy and teaching practices, impact professional development and teacher training, and what aspects of current practice are being shortened or eliminated to make room for technology’.


The writers found that AIED tools paid little attention to cultural differences, especially globally. Of the 47 papers reviewed, ‘43 come from North America, Europe, and Oceania. Only four papers have authors from other regions: three from East Asia and one from South America’, and none from Africa or South Asia. They call for more writing which is inclusive of diverse demographics, as ‘Education is a socio-cultural phenomena (Vygotsky, 2012)’.


The writers state that of the 47 articles only one discussed the broadening of context of AIED outside the classroom.


They feel AIED must become more applicable in workplace settings and other environments where people actually are, in the manner that suits them, such as parks and kitchens.


They felt the publications, with one exception, focused solely on computers. They encourage more novel and diverse technologies, especially mobile phones with diverse software, which encourage more engagement.


They suggest, interestingly, that the data indicates constructivist learning is not as effective, as students tend to require more support. They feel this tension offers an opportunity for AIED to provide personalised tools through ‘educational data mining and modelling of learners, pedagogies, and domains’. This is in direct contrast to the Council of Europe’s position, which calls for more controls on data mining and modelling, stating that it could infringe human rights, data laws and privacy.


The writers argue that AIED should do less reinventing of the wheel and instead use what already exists, building the ILE and populating it with existing content such as MOOCs, Khan Academy and Wikipedia. Could this be considered contradictory to their earlier criticism that designers are not innovative and tend to focus on cognitive and behavioural systems of learning?

They argue that there should be more collaboration with other communities, such as the ‘Learning Sciences community’, which would allow expansion into ‘related fields’.


They applaud other ways of combining AIED such as placing learning content on social sites like Facebook.


Their conclusion is better expressed in their own words: ‘AIED, as a community, should continue this work and play to our strengths and successes. While doing so, we would like to encourage researchers to be bolder, take greater risks, and tackle new contexts and domains. We specifically argue that ILEs should be better integrated – with formal and informal learning environments, with teachers and their practices, with cultural norms, with existing resources, and with our learners’ everyday lives and tasks’


They go on to add that they believe the human tutor may be coming to the end of its days, as AIED means learning transcends borders, time and the number of learners. They feel these capabilities should be exploited to further augment, grow and solidify AIED.

They say they are not calling for the total disappearance of the teacher, but rather for a new kind of teacher who comes alongside the student as a mentor, supporting lifelong skills development and not just ‘domain knowledge’.


Reading this document 10 years later gives the value of hindsight on the literature they reviewed and their own ideas at the time of writing.


In hindsight, it can be agreed that the hardware involved in learning is not just ubiquitous but varied. Online learning is now the norm rather than the exception up to postgraduate level, offering lifelong learning opportunities on various platforms, and is a global phenomenon, as they predicted.


Their call for the personalisation of learning is still being worked out as the Council of Europe in 2019 refers to the failure of AIED to provide this capability adequately.

It can be seen that, contrary to their predictions, AIED has neither replaced the teacher in the classroom nor moved education away from cognitive systems of learning with assessments.


It is also clear that in various instances AIED has reinvented the wheel when it comes to learning across all sectors, not just in what can be learned or assessed online but in how it can be accessed.


The issue relating to providing cultural context for AIED is still unresolved as identified by the Council of Europe’s report.


In general, the articles provide good coverage of the past 30 years, offering an understanding of how AIED was perceived, a glimpse of how those ideas were evolving and had evolved by the time the review was written, and thoughts about the future of AIED.


The fifth article reviewed was the report by the Department of Education on its ‘Call for Evidence’ on the use of GenAI in education. The call was open for 10 weeks, launching on 14 June and closing on 23 August 2023, and went out to practitioners across all sectors of education, the educational technology sector and AI experts.


‘GenAI uses foundation models, including large language models (LLMs), trained on large volumes of data. It can be used to produce artificially generated content such as text, audio, code, images and videos. Examples of GenAI tools include ChatGPT, Google Bard, Claude and Midjourney. This technology is also being integrated within other tools. From 14 June to 23 August 2023, the department held a Call for Evidence on GenAI in education. The purpose was to understand the uses of GenAI across education in England and the sector’s views towards the opportunities and risks it presents’.


The call for evidence received 567 responses from education, EdTech and other AI organisations, both within and outside the UK.


The Department of Education begins by acknowledging an increase in interest in, and use of, GenAI tools by the public and the education sector.


The responses indicated that teachers are seeing the benefits of using GenAI, and identified both the support required and the concerns around using GenAI tools.


The report added that the government is ‘committed to ensuring the department does all it can to maximise these opportunities whilst addressing the risks and challenges’.


Opportunities identified included giving teachers more time and providing support for students, especially those with SEND and those for whom English is an additional language, as well as subject-specific support.


This is interesting, as other papers reviewed here discuss similar possibilities and potential for AIED.


The department committed to investing up to £2 million in Oak National Academy to improve and expand its AI tools for teachers, and to providing ‘£137 million to the Education Endowment Foundation to encourage innovative and effective evidence-based teaching, including using technology such as computer adaptive learning and AI’.


The responses indicated a need to improve access to technology: the department is therefore investing a further £200 million to upgrade schools with low Wi-Fi connectivity, and working with providers to ensure all schools have a high-speed connection by 2025.


‘The department is also setting standards so that school, college and trust leaders know what they need to do to ensure their technology is up to date, maintain security and support online safety’. This is a good response to many of the concerns raised in the Council of Europe’s 2019 report, which decried a lack of regulation and oversight in the sector, leaving it vulnerable and a free-for-all for commercial operators.


The Risk Protection Arrangement (RPA) is an alternative to commercial insurance for academies and local authority maintained schools; the department has included cybercrime in the cover since April 2022. They state it ‘has over 10,000 members (47% of all eligible schools)’.


This is useful to know, as it would give schools and academies confidence in the use of AIED tools and help allay some of the fears highlighted by the Council of Europe.


The department is collaborating with Ofqual, Ofsted and the Office for Students, and points to a White Paper ‘which sets out the government’s first steps towards establishing a regulatory framework for AI’.


The call for evidence was conducted to ensure that, as the department began developing policy for the sector, it could respond to sector changes from a strong evidence base.


They state that they will continue to monitor and engage with the sector as the technology changes, and will update their policy paper to accommodate those changes.


This overview indicates an understanding by the Department of Education of the concerns and shortcomings around the use of GenAI in education, and a commitment to support education practitioners whilst enabling the opportunities and strengths of GenAI in education.


The call identified areas of risk and concern, and reiterated the importance of the teacher in the classroom despite the use of AIED, something they stated will not change. This is an interesting conclusion given the forecast by AIED developers and writers that this would change, an ambition they have held for the past thirty years according to Ido Roll and Ruth Wylie.


Some of their respondents called for changes to the curriculum in response to the challenges of GenAI in the classroom. The department feels the ‘teaching of a broad, knowledge-rich curriculum is fundamental’ and important for learners to be ready for the future of employment, and will reform the curriculum accordingly to ensure high standards in A level and GCSE qualifications. It is unclear whether this has yet been put into practice.


The department’s statutory safeguarding guidance provides schools with information on how to protect learners online, including where GenAI is used, limiting risks to students as far as possible. This is key, as the Council of Europe’s report decried the lack of safety and protection for learners; that responsibility has to lie with policy makers and learning institutions, and this paper indicates the department understands this and has taken the necessary steps to put the guidance and protection in place.


The report calls for schools to protect children from harmful practices online, but most schools and learning institutions already have IT policies in place which effectively cover all use of technology in educational settings; these should now be extended to include the use of AI. The statutory safeguarding guidance should likewise be made available to all institutions of learning if this has not already been done.


Institutions must understand the data privacy implications of using GenAI tools and must protect the personal and special category data of learners. They should also be transparent and ensure students understand the implications as well. Pupils and students own the intellectual property rights to original content they create. This is another detail raised by the Council of Europe in relation to ownership and copyright, and one which the education department appears to understand.


The department states that the work of students must not be used to train GenAI models unless permission is expressly provided, a point the Council of Europe is adamant about. In fact, the Council of Europe goes further, challenging the very act of granting permission and its legality in the context of the power relationship between the school and the child.


Exam boards have set out strict rules about students cheating with GenAI tools, and with some bodies this could lead to disqualification. They suggest teachers are best placed to identify a student’s own work, as they know the student and their capabilities.


‘The Joint Council for Qualifications published guidance earlier this year which reminds teachers and assessors of best practice in preventing and identifying potential malpractice, applying it in the context of AI use’.


Ofqual is in constant discussion with qualifying bodies to ensure their accreditation remains robust with regard to the use of GenAI tools, and to make adjustments whenever required.


They note that ‘GenAI tools can produce unreliable or biased information’, and thus output must be checked by users; the accuracy of the information is the responsibility of the individuals and institutions using the tools. This is right: there is a clear understanding here that GenAI tools are just tools, utilised and directed by the user. GenAI tools may produce inadequate content, but it remains the human’s responsibility to decide what is appropriate or correct to use; the tools cannot produce anything without human agency.


Additionally, the use of GenAI tools does not eliminate the need for knowledge of the field: human judgement is required to determine the accuracy of content the tools provide. This is right, and once again reiterates the value of human agency in the use of the tools.


It does, however, raise the question of whether this might change in the future as GenAI tools become better trained.


They found the responses to their call for evidence came largely from teachers who were early adopters, already using GenAI tools in their classrooms. These teachers used the tools to create educational resources, plan lessons and curricula, and streamline tasks. Some were experimenting with automatic marking and student feedback.


Teachers listed benefits such as freed-up time, enhanced teaching effectiveness, student engagement and ‘improved accessibility and inclusion’ for learners.


The responses also raised concerns about GenAI use, including learners’ dependence on the tools, misappropriation of the tools, and data and privacy risks.

Some expressed concern over the possibility of AI tools replacing the human tutor, a position the Council of Europe is not keen on, but which developers are aiming to accomplish.


They also expressed concern about the digital divide created by socio-economic factors, another area of concern for the Council of Europe, which sees inequality being amplified by ownership of and access to the tools.


Most of the respondents were optimistic about the use of GenAI tools for the future especially with the capacity of freeing up teacher time whilst a minority expressed concern over the risks, which for them ‘outweighed the benefits’.


Respondents called for increased support from policy makers and for assurance of the safety of AI tools. This is an important issue: the Council of Europe calls on designers to ensure safety, but in practice governments and policy makers must set the standards to which designers build.


The respondents would like to see training for the use of GenAI tools, improvements to AI infrastructure which the department has already committed to, regulation on issues of privacy and data protection and reforms to curricula and assessments in line with the use of GenAI.


The report concluded by recognising that the sample size was limited, but also that most respondents were positive about GenAI use in the classroom, though they felt they needed more guidance and an understanding of how to mitigate the risks. The department called for more research to understand the impact, but committed to engaging with and supporting the education sector in its interaction with and adoption of AI.


Respondents encouraged the department to play a prominent role in shaping GenAI use in education. There was a broad acknowledgement of a need to balance risk and reward. Most respondents wanted the UK to become a proactive, influential player in this emerging field. At the same time, respondents expressed a desire to proceed with caution, due to the concerns and risks identified.


This report was an excellent read because it provided a response from users of GenAI tools. It clearly shows the difference between when people actually use the tools and when people write about the tools without experience.


It indicates that the use of GenAI tools in education has positive benefits, with many opportunities for growth.


It further dispels the fear that teachers will become redundant, showing them instead as a key part of the classroom: monitoring, allocating and directing the tools whilst still providing empathetic teaching and learning for students. The tools also give teachers a more effective system for monitoring students’ work.


Additionally, the report demonstrates the need for the Department of Education to be responsible for oversight and for developing policy, in collaboration with stakeholders, to guide the use of GenAI tools in education and give teachers the confidence to use them.


Unlike the Council of Europe’s report, which urges developers to take responsibility for this, the department is the right place for that responsibility to sit. Designers and developers eager to make a profit will build tools that meet the guidelines, as failure to do so would lose them sales; this curtails the risk, identified by the Council of Europe, of commercial bodies operating without control or oversight.


The report also indicates that other guideline documents are available, such as ‘the department’s statutory guidance on Keeping children safe in education and the Filtering and Monitoring standards, reviewing the Data Protection and Digital Information Bill and their impacts on individuals’, which can be used in the interim as they are updated. This should provide confidence for users.


In conclusion, GenAI tools are useful, and the Department of Education needs to provide oversight and guidance for their use, creating confidence and enabling educational institutions to use the tools to prepare students for the workplace of the future.


The sixth article reviewed was ‘Artificial intelligence innovation in education: A twenty-year data-driven historical analysis’ by Chong Guan, Jian Mou and Zhiying Jiang (2020).


The article starts with a discussion of the proliferation of AI across most industries, EdTech being no exception, with investment in the sector ‘reaching $1047 billion from 2008 to 2017 (Mou, 2019)’.


Research on AIED started in the 1970s, and they discuss the evolution of educational systems from ELIZA (1964 to 1966) to SCHOLAR to MYCIN. The research has evolved from this intelligent tutoring systems (ITS) field to include other paradigms and, with the current advances in AI, continues to grow.


They feel other reviews have been limited, and that their own ‘temporal multiple-journal bibliometric analysis is needed to piece together the evolution of AIEd research in the past two decades’.


Their review was based on 425 articles published between 2000 and 2019, which were fed into ‘Leximancer for in-depth text analysis’. They supplemented this with manual analysis of the ‘representativeness of topics illustrated in each concept map’.
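
Leximancer derives its concept maps from word co-occurrence statistics in the text. The flavour of that analysis can be approximated with a minimal sketch; the toy abstracts and seed-concept list below are invented for illustration, whereas Leximancer itself learns its concepts from the corpus:

```python
from collections import Counter
from itertools import combinations

# Toy abstracts standing in for the 425 reviewed articles (illustrative only)
abstracts = [
    "adaptive learning analytics predict student performance",
    "virtual reality supports immersive learning environments",
    "learning analytics and big data enable student profiling",
]

concepts = {"learning", "analytics", "student", "virtual", "reality", "data"}

# Count how often pairs of seed concepts co-occur in the same abstract;
# frequently co-occurring pairs would sit close together on a concept map.
pair_counts = Counter()
for text in abstracts:
    present = sorted(concepts & set(text.split()))
    pair_counts.update(combinations(present, 2))

for pair, n in pair_counts.most_common(5):
    print(pair, n)
```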


Through this research they were able to map the paradigm shifts of the past 20 years: the growth of online courses, then the emergence of virtual reality, then big data, which led to student profiling and learning analytics.


The article’s table mapping the various definitions of AI in the literature over the 20 years was interesting to peruse, as it demonstrated the shift in focus over the years; the definitions are summarised below.

 

1. AI techniques can permit the intelligent tutoring system itself to solve the problems which it sets for the user, in a human-like and appropriate way, and then reason about the solution process and make comments on it.

2. AI in the education context is summarised as an intelligent tutoring system that helps to organise system knowledge and operational information to enhance operator performance, automatically determining exercise progression and remediation during a training session according to past student performance.

3. AI is summarised as artificially intelligent tutors that construct responses in real time using their own ability to understand the problem and assess student analyses.

4. AI is defined as computing systems that are able to engage in human-like processes such as learning, adapting, synthesizing, self-correction and use of data for complex processing tasks.

5. AI is defined as computing systems capable of engaging in human-like processes such as adapting, learning, synthesizing, correcting and using of various data required for processing complex tasks.

 

They discuss AI in relation to the student in terms of adaptive learning, in relation to the tutor in terms of support for administrative tasks, and in relation to the institution in terms of support for areas like data analysis. These themes are consistent across all the literature reviewed so far.


They situate deep learning within education in the areas of adaptive learning, performance prediction and student retention.


They review Roll and Wylie (2016), also discussed in this paper, whose review of 47 articles across 20 years of the journal recommended further research into both the evolution of AIED in collaboration with stakeholders and a revolution in AIED through embedded design.


They discuss writing by Hinojo-Lucena et al. (2019), who reviewed 132 papers and concluded that research on AIED, especially in higher education, was still sparse.

They reviewed an article by Bond et al. (2019) which explored 146 articles published in EdTech journals between 2007 and 2018, concluding there had been insufficient ethical consideration of AIED.


They mention Chan and Zary (2019), who looked at 37 articles and advocated the importance of addressing technical difficulties ‘in order to accelerate adoption’.

They felt that reviewing AIED research is challenging due to the multiplicity of research outlets, and that their method of ‘a multiple-journal analysis, covering the history of AIED would provide a better overview’.


They found that 11 themes have emerged over the past twenty years (2000–2019), including ‘AI computer-assisted instruction (AI CAI) system, VR, ITS, AR, educational games, predictive modelling, adaptive learning, assessment design, educational agents and teaching elevation’. These themes are consistent with the literature reviewed so far.


These themes were compared to concepts ‘and the top concepts related to AIEd in four ways: a) application context, such as pedagogical deployment of AIEd (e.g. intelligent tutoring systems, expert systems); b) targeted outcomes (e.g. predicting student performance; identification of learning styles); c) technologies being deployed (e.g. VR, mobile educational features); and d) learning environment (project-based learning environment)’.


The overall review indicates that the published research focuses on how technologies provide effective teaching and learning environments. This is interesting, as it reflects how EdTech companies aim to develop AIED tools and what the expectations of AIED are; however, the Council of Europe’s report does not believe the EdTech companies have effectively achieved this aim.


The AI research foci which have come to the fore are ‘the evolution of technology for instructional design, the integration of new technologies under a variety of teaching and learning contexts; and issues with implementing new systems and platforms, including appropriate technological and pedagogical adjustments’. This continues the theme of developing AIED which is adaptive and relevant to the contexts in which it is used and the people who use it.


They then discuss building on the technology-organization-environment (TOE) framework, originally developed by Tornatzky and Fleischer (1990) to explain innovation adoption within organisations through technological, organisational and environmental contexts, arguing that the interaction of these contexts would lead to enhanced teaching.


Their review indicated that whilst developing the AIED environment was the focus of writings between 2000 and 2009, the focus changed to learning outcomes from 2010 to 2019.


Their research traced the main paradigm shifts in the AIEd literature from distance learning in 2000-2009, including the use of VR for immersive experiences, to personalised and adaptive learning with the arrival of big data from 2010 to 2019.

They identify as a constraint of their research the possibility of missing emerging themes in the field due to the lag between research and publication.


They acknowledge that their research focuses mainly on the technological aspect of AIEd and ignores the ‘pedagogical, cultural, social, economic, ethical and psychological dimensions of education’, calling for future research to include these dimensions. These are a vital aspect of any research into AIED: AIED does not take place in isolation but within various contexts, and to effectively understand and respond to its impact and paradigm shifts, all of these must be considered moving forward.


This research has been invaluable in providing context and an overview of the focus of research in AIED over the past 20 years. The number of articles reviewed, over 400, ensures they have provided a broad-church context for their research and findings. 

Tracing the paradigm shifts over the 20-year period, they are able to identify current trends whilst calling attention to future research requirements, particularly around pedagogical relevance and context.


Conclusion/Recommendations


Following the review of the articles above, the following conclusions were reached:

The language used by authority figures and organisations regarding AIED must be more balanced, to make people less trepidatious about it.


Training must be provided to educators on the types of tools available, and this must be ongoing to ensure their skills remain up to date.


Policy makers and government must create the rules and regulations within which EdTech companies must operate. This will create confidence for users and assure authorities of the safety of the tools being used. These policies must be produced as a matter of urgency to streamline the design and use of AIED.


Stakeholders such as educators, designers, policy makers, parents and young people must all be in constant communication to ensure effective provision of AIED tools.


The pedagogical framework within which AIED is designed and used should move towards the inclusion of other frameworks, such as constructivist and transformative learning methods.


The design and use of AIED must take cognisance of the social, cultural, economic and political contexts within which the tools are used.


Learning institutions such as universities have been slow to develop a robust response to the abuse of AI, such as plagiarism; efforts must be stepped up to create and develop policy and practice for AIED.


Finally, there is an urgent requirement for research on the impact of AIED in terms of how users respond to it psychologically, mentally, intellectually and emotionally. None of the literature reviewed provided any information on the impact of AIED on the learner, and several of the articles call for this research to take place.


Reference List



Anita Chaudhary (2023) ‘Innovative Educational Approaches: Charting a Path Ahead’, Artificial Intelligence in Education, pp. 57-61, https://parabpublications.com/books/pdf/innovative-educational-approaches-charting-a-path-ahead.pdf#page=57


Chong Guan, Jian Mou and Zhiying Jiang (2020) ‘Artificial intelligence innovation in education: A twenty-year data-driven historical analysis’, https://doi.org/10.1016/j.ijis.2020.09.001


Darius Hennekeuser, Daryoush Daniel Vaziri, David Golchinfar, Dirk Schreiber and Gunnar Stevens (2024) ‘Enlarged Education – Exploring the Use of Generative AI to Support Lecturing in Higher Education’, https://link.springer.com/article/10.1007/s40593-024-00424-y


Department of Education (2023) ‘Generative AI in education Call for Evidence: summary of responses’, https://assets.publishing.service.gov.uk/media/65609be50c7ec8000d95bddd/Generative_AI_call_for_evidence_summary_of_responses.pdf


Ido Roll and Ruth Wylie (2016) ‘Evolution and Revolution in Artificial Intelligence in Education’, International Artificial Intelligence in Education Society


Wayne Holmes, Jen Persson, Irene-Angelica Chounta, Barbara Wasson and Vania Dimitrova (2022) ‘Artificial Intelligence and Education: A critical view through the lens of human rights, democracy and the rule of law’, Council of Europe
