Innoeduca. International Journal of Technology and Educational Innovation
Vol. 10. No. 2. December 2024 - pp. 201-216 - ISSN: 2444-2925
DOI: https://doi.org/10.24310/ijtei.102.2024.19806

Artificial intelligence technologies and ethics in educational processes: solution suggestions and results

Tecnologías de Inteligencia Artificial y ética en los procesos educativos: sugerencias de solución y resultados
RECEIVED 04/05/2024 ACCEPTED 26/08/2024 PUBLISHED 01/12/2024
Cansu Şahín Kölemen
Beykoz University, Türkiye
ABSTRACT
Artificial intelligence is a technology that enables computer systems to imitate human-like thinking and decision-making. It allows computers to perform complex tasks such as data analysis, learning, problem solving and decision making, and it is used in education as in virtually every other field. While the use of artificial intelligence in education offers advantages such as providing personalized learning experiences for students, giving teachers insight into student performance and supporting the development of educational materials, its ethical dimension should not be ignored. The aim of this study is therefore to produce solutions to ethical problems in the teaching and evaluation processes involving artificial intelligence technologies in education. A qualitative research method was used, and a phenomenological approach was adopted; the phenomenon under study is the ethics of artificial intelligence. The study group consists of teachers, educational technologists and academicians, selected on the condition that they use artificial intelligence applications in education or work in this field. Document analysis and focus group interviews were used as data collection tools, and the data obtained were subjected to content analysis. Based on the results, ethical problems encountered in the use of artificial intelligence in education were identified and solution suggestions were offered.

KEY WORDS Artificial intelligence; artificial intelligence ethics; artificial intelligence in education.

RESUMEN
La inteligencia artificial es una tecnología que se utiliza para imitar el pensamiento humano y las capacidades de toma de decisiones de los sistemas informáticos. Esta tecnología permite a las computadoras realizar tareas complejas como análisis de datos, aprendizaje, resolución de problemas y toma de decisiones. Se utiliza tanto en el campo de la educación como en todos los campos. Si bien el uso de la inteligencia artificial en el campo de la educación brinda ventajas como brindar experiencias de aprendizaje personalizadas a los estudiantes, brindar a los docentes intuición sobre el desempeño de los estudiantes y desarrollar materiales educativos, no se debe ignorar la dimensión ética. Por tanto, el objetivo de este estudio es producir soluciones a problemas éticos en los procesos de enseñanza y evaluación de tecnologías de inteligencia artificial en educación. En este estudio se utilizó el método de investigación cualitativa. Ha adoptado el enfoque de investigación fenomenológico entre los métodos de investigación cualitativos. El concepto de fenómeno es también la ética de la inteligencia artificial. El grupo de trabajo está formado por profesores, tecnólogos educativos y académicos. Al seleccionar el grupo de trabajo se tuvo en cuenta que existieran docentes que utilizan aplicaciones de inteligencia artificial en educación y académicos y tecnólogos que trabajan en este campo. Se utilizaron análisis de documentos y entrevistas de grupos focales como herramientas de recolección de datos. Se realizó un análisis de contenido de los datos obtenidos. Según los resultados del estudio, se identificaron los problemas éticos encontrados con el uso de la inteligencia artificial en la educación y se ofrecieron sugerencias de solución.

PALABRAS CLAVE Inteligencia artificial; ética de la inteligencia artificial; inteligencia artificial en la educación.

1. INTRODUCTION

Artificial intelligence is a term that we frequently encounter today with the rapid advancement of technology and digitalization. The concept was first introduced in 1950 with the question “Can machines think?” (Anyoha, 2017). Nabiyev and Erümit (2020) defined artificial intelligence as the ability of a computer-controlled device to perform tasks in a human-like manner. This capability is used not only by private companies and state institutions, but also in higher education institutions. The applications of artificial intelligence in higher education are increasing day by day and have attracted even more attention in the last few years. For example, the 2018 Horizon report identified artificial intelligence applications as an important development in educational technology (Educause, 2018), and according to the 2023 Horizon report, AI is widely used in education for mundane and repetitive tasks (Educause, 2023). The recent breakthroughs in AI technology, particularly in the realm of generative AI, have opened up new possibilities for enhancing educational processes and outcomes. Researchers have highlighted the enormous potential of AI-powered technologies in areas such as collaborative, immersive, affective, and exploratory learning (Olga et al., 2023). The incorporation of AI tools into the education sector has been explored in a variety of contexts ranging from curriculum development and instructional strategies to educational administration and student assessment (Alzahrani, 2022; Chen et al., 2020; Zhu, 2021).

Recently, there has been increasing research on how AI applications can be used across the learner lifecycle to explore pedagogical opportunities (Zawacki-Richter et al., 2019). The use of AI helps learners understand the important factors behind each engagement or change in their performance. This is thanks to AI’s ability not only to analyze large datasets but also to correlate data from different sources, helping to identify areas where real-time interventions or additional support are needed. In this way, AI creates a tailored, individualized learning experience for each student, enabling them to understand and develop their strengths, weaknesses, abilities and challenges (Duggan, 2020). Artificial intelligence is used to personalize learning methods by taking into account students’ strengths, weaknesses, abilities and academic problems, and it also helps educators to develop personalized learning strategies and analyze both qualitative and quantitative data. Furthermore, artificial intelligence is being used to improve the quality of distance education and deliver personalized educational programs to students to ensure effective teaching (Duggan, 2020).

Artificial intelligence aims to meet the needs of each student by adapting learning environments to individual requirements (Harry, 2023). Artificial intelligence systems that analyze students’ learning styles, speeds and needs offer a more effective learning process by personalizing the learning experience (Tiwari, 2023). In addition, AI in education offers new opportunities to enrich students’ online learning experiences with adaptive learning materials and metacognitive cues (Tapalova & Zhiyenbayeva, 2022). One of the main benefits of AI in education is its ability to deliver personalized and adaptive learning experiences tailored to the unique needs and learning styles of individual students. By analyzing student data and learning patterns, AI-powered systems can provide targeted guidance, feedback and content recommendations, allowing students to progress at their own pace and maximize their learning potential. In addition, AI can play an important role in facilitating administrative tasks such as planning, resource management and grading, freeing up educators’ time and resources to focus on more meaningful and effective aspects of the teaching and learning process (Mello et al., 2023; Islam, 2023). Overall, AI has the potential to improve teaching and learning outcomes in education. Especially considering the goals of ensuring equality of opportunity in education and ensuring that everyone benefits from it, the use of artificial intelligence technologies at different educational levels and in different dimensions may be beneficial. However, according to the 2020 report of the UNESCO Institute for Information Technologies in Education (Duggan, 2020), there are also challenges and concerns about the use of AI in education. Among these concerns, issues related to the privacy, protection and use of the data processed by AI are particularly important, and it is noted that ethical issues such as racism and gender discrimination may arise in relation to the use of AI technologies.

There has been a huge increase in scientific research on artificial intelligence technologies; between 2013 and 2016 alone, the number of scientific articles on AI technologies increased sixfold (Evans & Gawer, 2016). These studies generally focus on the application of existing AI technologies, on proposing and/or developing new AI models (e.g., intelligent tutoring systems), or on the potential benefits of AI such as profiling and prediction capabilities, assessment and evaluation, and adaptive and personalized learning environments (Baz & Denizer, 2018; Zawacki-Richter et al., 2019). This paradigm shift in educational strategies has been explored in depth by various researchers, who have investigated the implications and possibilities of integrating generative AI into educational settings. However, as with any technological advancement, the adoption of generative AI in education also raises ethical considerations and the need for a comprehensive research agenda to address the challenges and opportunities it presents (Olga et al., 2023). In addition to all these fields of study, the ethical dimension also needs to be examined, and solutions need to be produced for each problem identified in this field. One study designed an ethical framework for evaluating artificial intelligence capabilities and identified five main challenges: perception of equity, data privacy, moral agency, moral incompetence and bias in data. It is therefore crucial for universities, as one of society’s most influential institutions, to develop AI systems and deploy new applications within an ethical framework in order to alleviate legitimate concerns.

It is observed that ethical rules initially focused primarily on the behavior of the humans using technology. Today, however, it is also necessary to discuss ethical behavior from the perspective of machines and to examine the decisions and actions that machines take autonomously (Müller, 2020). AI ethics also functions as a tool for the protection of privacy, and law, as a positive science whose rules and principles regulate existing situations, should create the necessary infrastructure for issues that may cause security and ethical concerns in society. Binns (2018) aimed to minimise the negative effects of AI systems on users by addressing transparency and accountability in their decision-making processes. Similarly, Floridi and Cowls (2019) identified the basic principles of AI ethics, focusing on issues such as human rights, justice, responsibility and privacy, while Whittlestone et al. (2019) provided practical guidelines to promote the ethical use of AI applications. More recently, Crawford (2021) has discussed how AI can reproduce social inequalities and what policies should be developed to prevent this. In this context, studies on the ethics of AI make important contributions towards ensuring the responsible and fair use of technology. The contribution of this study to the literature is that it provides an in-depth understanding of educational technologies and AI ethics, offering an important reference point for research at the intersection of these two fields. A detailed consideration of the ethical dimensions of AI applications in education will enable a more responsible and effective use of these technologies, and such a study will also raise awareness among policy makers and educators to promote innovative and ethical practices in education. Therefore, the aim of this study is to generate solutions for ethical problems in the teaching and evaluation processes of artificial intelligence technologies in education. In line with this purpose, answers to the following sub-questions were sought:

  1. What are the ethical problems in educational processes related to artificial intelligence technologies?
  2. What are the solutions to the ethical problems that arise in the use of artificial intelligence technologies in educational programs?

2. MATERIAL AND METHOD

In this section, information on the research model, study group, data collection tools, data collection process and analysis are given.

2.1. Research Design

A qualitative research method was used in this study. Qualitative research relies on qualitative data collection to understand, explain or clarify a particular topic, and it is often used to understand complex issues that require in-depth analysis and explanation (Creswell, 2016). It is a method of questioning and interpreting a particular problem and understanding its form in its natural environment (Baltacı, 2017). Methods frequently used in qualitative research include observation, interviews, document analysis and discourse analysis. Qualitative research focuses on the in-depth examination of human perception and social reality in its natural environment and therefore adopts an inclusive approach that brings together different disciplines (Merriam & Grenier, 2019). This study adopted the phenomenological research approach within qualitative research methods. Phenomenological research aims to understand and interpret the experiences of individuals, focusing on people’s direct expression and understanding of their experiences (Güçlü, 2019). The phenomenon examined in this study is artificial intelligence ethics.

2.2. Study Group

Although various sampling methods are used in qualitative research, the sample that best serves the purpose of the research should generally be selected. In qualitative research, therefore, an appropriate sample is selected and studied in detail over an extended period in order to reach detailed results (Merriam & Grenier, 2019).

For the purpose of this study, criterion sampling, which is a purposive sampling method among various sampling methods, was preferred. Purposive sampling is a method in which the researcher selects a sample by focusing on a specific purpose or problem. An important feature of this sampling is that it includes groups or individuals with certain qualities to better explain the main purpose of the study. Criterion sampling is when the researcher selects a sample based on a specific criterion (Bryman, 2016). The criterion for this study group was determined as the use of artificial intelligence technology in education. The study group consisted of teachers, educational technologists and academicians. Demographic information about the study group is given in Table 1.

TABLE 1. Demographic information about the participants in the study group

CODE   GENDER   AGE   OCCUPATION                 DURATION OF USE (YEARS)
T1     Female   32    Teacher                    3
T2     Male     36    Teacher                    4
T3     Male     28    Teacher                    4
A1     Male     41    Academician                5
A2     Female   43    Academician                4
A3     Male     39    Academician                4
ET1    Female   31    Educational Technologist   5
ET2    Female   33    Educational Technologist   4

Table 1 shows the duration of artificial intelligence tool use by individuals of different genders, ages and occupational groups working in the education sector. There are 8 participants in total: three teachers, three academicians and two educational technologists. The female participants are a 32-year-old teacher (T1), a 43-year-old academician (A2) and two educational technologists aged 31 (ET1) and 33 (ET2), while the male participants are two teachers aged 36 (T2) and 28 (T3) and two academicians aged 41 (A1) and 39 (A3).

Looking at the duration of use of artificial intelligence tools, teachers have generally used these tools for 3-4 years: T1 for 3 years, and T2 and T3 for 4 years each. Among academicians, the duration of use ranges from 4 years (A2 and A3) to 5 years (A1). Educational technologists have used this technology for 4-5 years (ET1 for 5 years, ET2 for 4 years).

2.3. Data Collection Tools

In qualitative research, data collection tools such as focus group discussions, interviews, observation and document analysis are preferred for content and descriptive analysis (Merriam & Grenier, 2019). In this study, focus group interviews and document analyses were conducted to reveal solution proposals regarding the phenomenon. The focus group interview is a data collection technique frequently used in qualitative research to understand the thoughts, experiences and opinions of participants about a particular topic. Participants discuss a specific topic or issue identified by a researcher or moderator, and the main purpose of the method is to understand group dynamics by enabling participants to share their ideas and experiences. Focus groups usually consist of 6 to 10 people, who are encouraged to interact and exchange ideas (Sullivan & Forrester, 2018). The focus interview group of this study consisted of 8 people in total. For the interviews with the participants, a semi-structured focus interview form was prepared to clarify the phenomenon. The form consists of two dimensions: the first covers the demographic information of the participants, while the second contains the focus interview questions. The focus interview consisted of five basic questions, which were prepared with care to be clear and understandable.

Document analysis is a qualitative research method in which the researcher collects information by examining written or printed documents, typically including reports, letters, journals, books, policy documents, social media posts and web pages (Güçlü, 2019). The process of systematically scanning the written documents on the subject under study and creating a new structure from this information is referred to as document analysis (Creswell & Creswell, 2017). This analysis method, carried out together with the literature review, systematically organizes the researcher’s interview records and documents. While it saves the researcher time, it also facilitates the prioritization of the topics examined, the categorization of data and the creation of new data sets (Baxter & Jack, 2008). In this study, published artificial intelligence policies were analyzed by examining articles on the subject in indexed journals.

2.4. Data Collection Process and Analysis

Once the participants and data collection tools had been determined, the data collection process was initiated. In this process, if the researcher has not grounded the research problem in a sufficient theoretical framework and has not chosen an appropriate sample and data collection tools, problems arise in answering the research problem (Creswell & Creswell, 2017). The interviews with the participants, all of which were conducted online, lasted an average of 23 minutes. The interview questions are as follows:

The data obtained in the data collection step should be verified against different sources in the literature; therefore, document analysis was conducted alongside the interviews. The data collected during the research process were analyzed using content analysis. Content analysis focuses on the main points of the researched topic and uses coding techniques to reveal the basic concepts underlying the data and the relationships between these concepts, allowing the information gathered in the research process to be presented as a meaningful whole. In content analysis, data obtained from interviews and documents are generally analyzed in four stages: (1) coding the data, (2) identifying codes, categories and themes, (3) organizing the codes, categories and themes, and (4) describing and interpreting the results.

Accordingly, the data collected during the research process were first coded. The coding process ensures that the data are examined and analyzed systematically: each piece of data is coded under themes and categories determined in line with the research questions and the conceptual framework in the literature. After coding is completed, the codes are reviewed, similar codes are brought together and categories are formed. These categories represent the main themes and sub-themes related to the research topic; while determining them, the basic concepts underlying the data and the relationships between these concepts are taken into consideration. The codes, categories and themes are then organized by establishing connections between them, which allows the data to be presented in a more meaningful and consistent way and makes it possible to evaluate whether the data fit the analytical framework and answer the research questions. In the last stage, the findings obtained from the coded and organized data are described and interpreted as a meaningful whole, compared with the existing literature, and assessed in terms of whether the research questions have been answered.

These stages allow the research data to be analyzed systematically and comprehensively, thus increasing the reliability and validity of the research results.
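Although the coding in this study was carried out qualitatively by the researcher, the way codes are organized under themes and counted can be illustrated with a small script. The following Python sketch is purely illustrative: the excerpts, labels and data structure are hypothetical and are not the study's actual coded dataset; it only shows how theme/code groupings and the participant frequencies reported in Tables 2 and 3 could be derived from coded excerpts.

```python
from collections import defaultdict

# Hypothetical coded excerpts: (participant, theme, code) triples assigned
# during the coding stage of content analysis. Labels are illustrative only.
coded_excerpts = [
    ("T1", "Privacy", "Collection of student data"),
    ("ET1", "Privacy", "Collection of student data"),
    ("ET1", "Privacy", "Storage of data"),
    ("A1", "Transparency", "Accountability"),
    ("T2", "Emotional needs", "Empathy"),
]

# Stages 2-3: organize codes under themes and record which participants
# mentioned each code.
themes = defaultdict(lambda: defaultdict(set))
for participant, theme, code in coded_excerpts:
    themes[theme][code].add(participant)

# Stage 4: describe the organized data as theme / code / participants /
# frequency rows, similar to the layout of Tables 2 and 3.
for theme, codes in themes.items():
    print(theme)
    for code, participants in codes.items():
        print(f"  {code}: {', '.join(sorted(participants))} (f = {len(participants)})")
```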

2.5. Validity, Reliability and Ethical Considerations

Throughout the data collection and analysis process, attention was paid to validity and reliability, since the researcher needs to verify the results obtained against the event or situation being studied. In this study, the interview data were accordingly verified against different sources by triangulating them with document analysis.

3. RESULTS

The first sub-research question of the study is “What are the ethical problems in educational processes related to artificial intelligence technologies?”. The data obtained from the interviews were content analyzed and supported by document analysis. The results of this analysis are shown in Table 2.

TABLE 2. Themes and codes related to ethical problems in educational processes related to artificial intelligence technologies

THEME / CODE: PARTICIPANTS (FREQUENCY)

Privacy
  Collection of student data: T1, T2, T3, A1, A2, A3, ET1, ET2 (8)
  Storage of data: T1, T2, T3, A1, A2, A3, ET1, ET2 (8)
  Data processing: T1, T2, T3, A1, A2, A3, ET1, ET2 (8)
The problem of prejudice
  Demographic factors: T3, A1, A3, ET1, ET2 (5)
  Unjust outcomes: A1, ET1, ET2 (3)
Inequality of opportunity in education
  Economic problems: T1, T2, T3, A3, ET2 (4)
  Demographic differences: T1, A1, A2, A3, ET1, ET2 (6)
Emotional needs
  Lack of social interaction: T1, T2, T3, ET2 (4)
  Empathy: T2, A3, ET1 (3)
Transparency
  Data trading: A1, A2, A3, ET1 (4)
  Accountability: T1, T3, A1, A2, A3, ET1, ET2 (7)

According to Table 2, five themes were obtained regarding the ethical problems in educational processes related to artificial intelligence technologies: privacy, the problem of prejudice, inequality of opportunity in education, emotional needs and transparency. With respect to the privacy theme, all participants emphasized that the ethical dimension is important in collecting, storing and processing student data, since artificial intelligence applications used in education generally collect and process such data. The focus interviews and document analysis showed that these data include various information such as students’ identity information, learning habits and performance. The collection and storage of personal data should therefore comply with the relevant laws and ethical standards, and the data should be stored securely and protected against unauthorized access. Regarding the theme of the prejudice problem, artificial intelligence systems used in education can reflect biases and may reinforce existing inequalities due to various demographic differences; for example, student assessment systems have been found to produce unfair results based on factors such as gender, ethnicity or socioeconomic status. Regarding inequality of opportunity in education, the difficulties students experience in accessing technology because of economic differences were addressed as an ethical issue, and it was determined that artificial intelligence applications can make subjective evaluations due to demographic differences; students who have no or only limited access to technology cannot fully benefit from artificial intelligence-supported educational opportunities. According to the emotional needs theme, teacher-student interaction decreases with the use of artificial intelligence in education, so it is predicted that human values and empathy may weaken. In terms of transparency, the complexity and opacity of the internal working mechanisms of artificial intelligence systems may lead to transparency and accountability problems with regard to how decisions are made. Some of the views of the participants on the theme of privacy are given below:

“Collecting, storing and processing student data is an ethical issue of increasing importance, especially in educational institutions. Policies on this should include clear frameworks and specify what should be done.” (ET1)

Some of the participants’ views on the theme of prejudice are given below:

“Artificial intelligence of course has demographic information about users. Based on this information, it can present biased results, for example due to racial or economic inadequacies. This poses an ethical problem.” (ET2)
“Decisions made as a result of demographic diversity can lead to unfair outcomes.” (A1)

Some of the views of the participants on the theme of inequality of opportunity in education are given below:

“Inequality of opportunity in education is one of the problems that come to mind when we talk about technology. This inequality also triggers artificial intelligence ethics. Because socioeconomic problems can affect the decisions taken by artificial intelligence applications.” (A3)

Some of the participants’ views on the theme of emotional needs are given below:

“The use of artificial intelligence in education leads to a lack of social interaction because it reduces teacher-student interaction.” (T1)
“There is ultimately an algorithm behind the apps used and it thinks like a machine. Therefore, the ability to empathize is limited. This situation reveals emotional deficiencies.” (T2)

Some of the views of the participants on the theme of transparency are given below:

“Data trade is important. Because users should have knowledge and consent about who the collected data will be shared with and how it will be used.” (A2)

Based on these quotes, it can be said that participants show heightened awareness of important ethical and practical issues related to the use of AI and educational technologies. They point out that the collection, storage and processing of student data is an ethical issue of increasing importance, and that clear frameworks and policies specifying what should be done in these processes are needed. Bias, inequality and emotional needs are also among the themes that concern the participants: it is emphasized that artificial intelligence can produce biased results and reinforce inequalities by using demographic information. The issue of transparency, and the possibility that technology may leave students’ emotional needs unmet, were also addressed. These comments point to the ethical challenges that require attention in the use of artificial intelligence and educational technologies.

The second sub-research question of the study is “What are the solutions to the ethical problems that arise in the use of artificial intelligence technologies in educational programs?”. The data obtained from the interviews were content analyzed and supported by document analysis. The results of this analysis are shown in Table 3.

TABLE 3. Themes and codes related to the solutions to ethical problems in educational processes related to artificial intelligence technologies

THEME / CODE: PARTICIPANTS (FREQUENCY)

Solution Suggestions for Ethical Problems
  Informing students about data usage: T1, T2, T3, A1, A2, A3, ET1, ET2 (8)
  Establishing transparent policies: A1, A2, A3, ET1, ET2 (5)
  Use of anonymization methods: A1, A3, ET1, ET2 (4)
  Awareness raising and training: T2, T3, A2, A3, ET1, ET2 (5)
  Human-like interactions: A1, A2, A3, ET1, ET2 (5)
  Review of educational materials: ET1, ET2 (2)
  Developing algorithms that do not reflect biases: A2, ET1, ET2 (3)
  Establishing objective evaluation criteria: T1, T2, T3, A2 (4)
  State support due to economic differences: T1, T2, T3, A1, A3 (5)
  Adopting the principle of inclusion: A1, A2, A3 (3)
  Transparency about how algorithms make decisions: ET1, ET2, A1 (3)
  Establishment of ethics committees: A1, A3 (2)
  Human-centered design: ET1, ET2 (2)
  Encryption of sensitive data: A1, A2, A3, ET1, ET2 (5)
  Limiting access to data: A2, A3, ET1, ET2 (4)
  Security audits: A1, ET1, ET2 (3)

When Table 3 is examined, it can be seen that several solution suggestions were offered for the ethical problems in educational processes related to artificial intelligence technologies. Regarding the data collected, it should be clearly stated how the data gathered from students will be used and for what purposes it will be analyzed; in addition to informing students about how their data will be used, their consent should also be obtained. For the issue of data trading, the suggestion was that transparent policies should be developed concerning with whom the data will be shared, and that students should have the right to approve or reject data sharing. Anonymizing student data, in other words hiding individual identity information, is important for data privacy. Both teachers and students should be made aware of how artificial intelligence applications are used, because raising awareness about data privacy allows users to know their rights and make sound decisions.

Artificial intelligence applications should also simulate empathy, since it is important for applications to interact with students in a human-like way; while an AI-supported learning assistant supports students, it should also pay attention to their emotional needs. Educational materials should be reviewed for the accuracy of the information they contain, so that false or potentially unethical information can be corrected. Training data should be diversified to reduce the risk of bias in algorithms, and the algorithms produced should be continuously improved and updated. Identifying economically disadvantaged students and supporting them through the state is also among the suggestions. To reduce the subjective evaluations caused by demographic differences in artificial intelligence applications, the principle of inclusiveness should be taken into consideration.

There should also be transparency regarding the internal working mechanisms of artificial intelligence applications: it should be made clear to users how the algorithms make decisions. Participants emphasized that artificial intelligence ethics committees should be established in educational institutions. Human-centered design can be associated with empathy, since technology should be designed around people to prevent the use of artificial intelligence from weakening emotional and human relations. Finally, it is important to conduct data audits at regular intervals, to back up data and create data recovery plans, and to authorize access to data: the data that need protection should be identified and sensitive data should be encrypted. Some of the views of the participants on the theme of solution suggestions for ethical problems are given below:

“First of all, it is necessary to inform students that their data is being used.” (T2)
“How the data is used, how it is presented and with whom it is shared should be stated transparently and policies should be established in this regard.” (A1)
“Students’ data must be anonymized. This method is ethically important.” (A3)
“Awareness trainings about data privacy should be organized for teachers, students and administrators. Because if users know and are aware of what is being done with their data, they can make the right decisions.” (A2)
“Technology should be able to empathize with students and offer a human-like interaction.” (ET1)
“Objective evaluation criteria should be included in artificial intelligence applications in order to eliminate subjective evaluation processes. Thus, measures can be taken against biased results.” (A2)
“Artificial intelligence committees must be established in every educational institution. Because these applications are increasingly taking place in the education sector. Therefore, these committees should investigate and study every aspect of artificial intelligence and guide other stakeholders.” (A3)
“It should be determined which data is important and confidential. Accordingly, sensitive data should be encrypted.” (ET2)
“In addition to encrypting data, access to it should also be limited. Not everyone should have access to every data.” (A3)

These statements show that the participants have thought carefully about the use of artificial intelligence and data in the education sector and suggest various measures. First of all, the importance of informing students that their data are being used, and of stating transparently how these data are used and with whom they are shared, is emphasized. Data privacy and anonymization are also among the prominent ethical issues in this process. It is further stated that awareness trainings on data privacy and ethical use should be organized and that objective evaluation criteria should be used in artificial intelligence applications. In addition to these measures, structural and technical steps such as the establishment of AI ethics committees in each educational institution and the encryption of sensitive data are also suggested.
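To make the anonymization, encryption and access-limitation suggestions more concrete, the minimal Python sketch below shows one possible way a learning platform could pseudonymize a student identifier and encrypt a sensitive record before storage. It is an illustration under assumptions of this text, not a procedure proposed by the participants: the field names, the salted-hash pseudonym scheme and the use of the Fernet cipher from the third-party cryptography library are all hypothetical choices.

```python
import hashlib
import json
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# Hypothetical student record with identifying and sensitive fields.
record = {"student_id": "2023-0042", "name": "Jane Doe", "grade_history": [78, 85, 91]}

# Anonymization: replace the direct identifier with a salted hash (pseudonym),
# so analyses can still link records without revealing who the student is.
SALT = b"institution-secret-salt"  # assumption: kept separate from the analytics dataset
pseudonym = hashlib.sha256(SALT + record["student_id"].encode()).hexdigest()[:12]
anonymized = {"pseudonym": pseudonym, "grade_history": record["grade_history"]}

# Encryption of sensitive data: encrypt the full record at rest and restrict
# who holds the key, which also supports limiting access to the data.
key = Fernet.generate_key()          # in practice, managed and guarded by the institution
cipher = Fernet(key)
encrypted_record = cipher.encrypt(json.dumps(record).encode())

print(anonymized)                                  # safe to share with analysts
print(cipher.decrypt(encrypted_record).decode())   # only key holders can read the original
```

The design point illustrated here is the separation of concerns the participants describe: analysts work only with pseudonymized data, while the identifiable record is stored encrypted and accessible only to authorized key holders.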

4. DISCUSSION

The use of artificial intelligence in the field of education is rapidly becoming widespread, and the use of artificial intelligence technologies in educational processes raises various ethical problems. The research conducted by Regan and Jesse (2019) emphasizes that artificial intelligence systems in education bring ethical concerns such as privacy when compromises are made in the use of data through recommender systems. The OECD (2021) report indicates that these ethical concerns are not unwarranted: “For AI in education, as children are used by commercial developers to test their AI technologies, it is important to design and implement robust ethical guidelines and avoid any ‘ethical washing’” (OECD, 2021).

Among the ethical problems identified in this study, privacy, the problem of prejudice, inequality of opportunity in education, emotional needs and transparency stand out. The research and analysis emphasize that the ethical dimension is important in the processes of collecting, storing and processing student data, and they show that various suggestions can be developed to solve these problems.

Within the scope of the solution suggestions, data privacy and security problems should be addressed first. It is important to comply with the relevant laws and ethical standards in the collection and storage of student data and to ensure data security. In addition, the principles of transparency and accountability should be adopted in data use and analysis processes; implementing such strategies in the development and use of AI systems in education is one important way to address these ethical concerns. Another is comprehensive training and education for teachers and students on the ethical implications of AI in education. Artificial intelligence systems should also avoid relying on biased data. This result is in line with Kalayci Onac et al. (2021), who concluded that decisions about the use of AI in the educational process should respect students’ rights, privacy and safety and should not discriminate against them. Anonymizing student data, informing users about data sharing and obtaining their consent also support an ethical approach. Improving the empathic ability of artificial intelligence applications and emphasizing human-centered design will ensure that emotional needs are taken into consideration. In addition, steps such as the establishment of AI ethics committees in educational institutions, regular data audits and authorization of data access will contribute to solving ethical problems. All stakeholders should be made aware of these issues; Ma and Jiang (2023) emphasize that informative activities should be organized regularly to raise awareness and educate educators and students about the use of artificial intelligence.

To combat the problem of bias, artificial intelligence algorithms need to be improved and their training data diversified; it is important that algorithms are designed to reduce the effects of demographic differences and prejudices. According to Ma and Jiang (2023), the development and supervision of algorithms is important to ensure the ethical use of artificial intelligence. Reviewing the content of educational materials and correcting unethical information also supports this process. Regarding inequality of opportunity in education, problems in access to technology should be addressed: students in economically difficult situations should be supported so that they benefit equally from artificial intelligence technologies. It is also important that AI applications address emotional needs and improve human-machine interaction. Williamson and Eynon (2020) note that even if detecting, responding to and modifying student emotions with AI systems improves learning outcomes, there are critical concerns about how exactly such effects are detected, what their impact is on future learning and educational decisions, and whether applying such systems to mental health is ethical at all. Finally, there should be transparency about the internal working mechanisms of AI technologies: it should be made clear to users how algorithms make decisions and what impact they have on students. This both increases trust and lays the foundation for an ethical approach.

As a result, a multifaceted and comprehensive approach is required to solve ethical problems related to the use of AI technologies in education. Wang et al. (2020) argue that policies and regulations related to the use of artificial intelligence need to be determined and implemented. This approach should focus on protecting the privacy of student data, reducing bias, providing equal opportunities, considering emotional needs, and adopting the principles of transparency and accountability.

5. CONCLUSIONS

In line with the findings, firstly, the collection, storage and processing of student data should be in accordance with the law and ethical standards, and data security should be strictly ensured. Furthermore, the importance of transparency and accountability in data use and analysis processes is emphasised. In addition, users need to understand how algorithms make decisions. It is stated that regular information events should be organised to inform educators and students about the ethical use of AI technology and to raise their awareness. Algorithms should be diversified and improved, and designed in a way to reduce demographic differences and prejudices. In addition, support should be provided to students facing economic difficulties and their emotional needs should be taken into account. Finally, transparency about the inner working mechanisms of AI technologies should be ensured and policies and regulations regarding the use of AI should be determined and implemented. With the implementation of these recommendations, it is aimed to ensure the ethical use of artificial intelligence technology and to create a healthier usage environment in the field of education.

For all of this, it is important to take steps in the implementation of educational policies. The emphasis on data privacy and security requires educational institutions and decision-makers to comply fully with the relevant legal and ethical standards. In this context, the principles of transparency and accountability should be adopted in the collection, storage and utilization of student data. Furthermore, the solutions developed for the use of AI systems in education should offer strategies for strengthening and effectively implementing these principles; for example, AI ethics committees can be established in educational institutions, regular data audits can be conducted and data access can be subject to authorization. It is also important to organize comprehensive training and information programmes for teachers and students on the ethical implications of AI in education; these programmes can focus on the human-centered design of AI systems and applications and increase sensitivity to this issue. Similarly, problems in access to technology should be addressed to reduce inequalities of opportunity among students. It is particularly important to provide support to students facing financial difficulties and to ensure that they benefit equally from AI technologies. Such practices can help education policies create a framework that promotes the ethical use of AI.

In line with these results, the following suggestions can be made to policy makers and educators:

  1. Educational institutions can establish AI Ethics Committees to oversee and promote the ethical use of AI technologies. These committees can strengthen the principles of transparency and accountability in the processes of collecting, storing and processing student data.
  2. Regular data audits should be conducted and data access authorisations should be determined during the use of AI systems in education. These steps provide important safeguards to ensure data privacy and security.
  3. Regular ethics training programmes for educators and students raise awareness of the ethical implications of AI technologies in education. These programmes can also promote sensitivity to the human-centered design of AI systems.
  4. Support should be provided especially to students facing economic difficulties and they should be enabled to benefit from AI technologies equally. Thus, inequalities of opportunity in education can be reduced.
  5. Ensuring transparency of decision-making processes and internal mechanisms of AI systems supports users’ trust in these technologies and their ethical use. Policies should encourage this transparency and should be updated when necessary.

Implementing these recommendations can help education policies create a framework that promotes the ethical use of AI technologies. By implementing these steps, policymakers and educators can safeguard important issues such as student privacy, fairness, and equity, and make the most of the potential of AI technologies in education.

5.1. Limitations and future lines of research

In line with the results of this study, future research can examine how algorithms can be further developed, how prejudices can be reduced, and how artificial intelligence technologies in education can be made more human-oriented. Studies can also evaluate how successfully educational institutions act in accordance with ethical standards, and research can be conducted on the effectiveness of efforts to raise awareness among educators and students about the use of artificial intelligence.

6. FUNDING

No funding was received for this research.

7. REFERENCES

Alzahrani, A. (2022). A systematic review of artificial intelligence in education in the Arab world. Amazonia Investiga, 11(54), 293-305. https://doi.org/10.34069/AI/2022.54.06.28

Anyoha, R. (2017). The history of artificial intelligence. Science in the News, 28.

Baltacı, A. (2017). Miles-Huberman model in qualitative data analysis. Ahi Evran University Journal of Institute of Social Sciences, 3(1), 1-14.

Baxter, P., & Jack, S. (2008). Qualitative case study methodology: Study design and implementation for novice researchers. The Qualitative Report, 13(4), 544-559. https://doi.org/10.46743/2160-3715/2008.1573

Baz, F. Ç., & Denizer, E. (2018, November 23-25). An overview of research on the use of artificial intelligence in the field of education in Turkey: a content analysis study. Cemil Meriç - 10th Social Sciences and Sports Congress, Hatay.

Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency, 81, 149-159.

Bryman, A. (2016). Social research methods. Oxford University Press.

Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8, 75264-75278. https://doi.org/10.1109/ACCESS.2020.2988510

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Creswell, J. W., & Creswell, J. D. (2017). Research design: Qualitative, quantitative, and mixed methods approaches. Sage Publications.

Creswell, J. W. (2016). Qualitative research methods: Qualitative research and research design according to five approaches. Siyasal Yayın Dağıtım.

Duggan, S. (2020). AI in education: Change at the speed of learning. UNESCO Institute for Information Technologies in Education.

Educause. (2018). The NMC Horizon Report: 2018 higher education edition. Educause. https://library.educause.edu/~/media/files/library/2018/8/2018horizonreport.pdf

Educause. (2023). 2023 EDUCAUSE Horizon Report: Teaching and learning edition. Educause. https://library.educause.edu/-/media/files/library/2023/4/2023hrteachinglearning.pdf

Evans, P. C., & Gawer, A. (2016). The rise of the platform enterprise: A global survey. The Center for Global Enterprise.

Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.8cd550d1

Güçlü, İ. (2019). Qualitative research methods in social sciences. Nika Publishing House.

Harry, A. (2023). Role of AI in education. Interdiciplinary Journal and Humanity (INJURITY), 2(3), 260-268. https://doi.org/10.58631/injurity.v2i3.52

Islam, M. A. (2023). AI & Blockchain as sustainable teaching and learning tools to cope with the 4IR. Cornell University.

Kalayci Onac, A., Cetin, M., Sevik, H., Orman, P., Karci, A., & Gonullu Sutcuoglu, G. (2021). Rethinking the campus transportation network in the scope of ecological design principles: case study of Izmir Kâtip Çelebi University Çiğli Campus. Environmental Science and Pollution Research, 28(36), 50847-50866. https://doi.org/10.1007/s11356-021-14299-2

Ma, X., & Jiang, C. (2023). On the ethical risks of artificial intelligence applications in education and its avoidance strategies. Journal of Education, Humanities and Social Sciences, 14, 354-359. https://doi.org/10.54097/ehss.v14i.8868

Mello, R. F., Freitas, E. L. S. X., Pereira, F. D., Cabral, L. D. S., Tedesco, P., & Ramalho, G. (2023). Education in the age of generative AI: Context and recent developments. Cornell University.

Merriam, S. B., & Grenier, R. S. (Eds.). (2019). Qualitative research in practice: Examples for discussion and analysis. John Wiley & Sons.

Müller, V. C. (2020). Ethics of artificial intelligence and robotics. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. Stanford University.

Nabiyev, V., & Erümit, A. K. (2020). Prediction with machine learning. Pegem.

OECD. (2021). OECD Digital Education Outlook 2021: Pushing the frontiers with artificial intelligence, blockchain and robots. OECD. https://doi.org/10.1787/589b283f-en

Olga, A. Tzirides, Saini, A. S., Zapata, G. C., Searsmith, D., Cope, B., Kalantzis, M., Castro, V., Kourkoulou, T., Jones, J. H., Silva, R. A. D., Whiting, J., & Kastania, N. P. (2023). Generative AI: Implications and applications for education. Cornell University.

Regan, P. M., & Jesse, J. (2019). Ethical challenges of edtech, big data and personalized learning: Twenty-first century student sorting and tracking. Ethics and Information Technology, 21, 167-179. https://doi.org/10.1007/s10676-018-9492-2

Sullivan, C., & Forrester, M. A. (Eds.). (2018). Doing qualitative research in psychology: A practical guide. Sage.

Tapalova, O., & Zhiyenbayeva, N. (2022). Artificial intelligence in education: AIEd for personalized learning pathways. Electronic Journal of e-Learning, 20(5), 639-653. https://doi.org/10.34190/ejel.20.5.2597

Tiwari, R. (2023). The integration of AI and machine learning in education and its potential to personalize and improve student learning experiences. International Journal of Scientific Research in Engineering and Management, 7(2), 1. https://doi.org/10.55041/IJSREM17645

Wang, Y., Hu, M., Zhou, Y., Li, Q., Yao, N., Zhai, G., & Yang, X. (2020). Unobtrusive and automatic classification of multiple people's abnormal respiratory patterns in real time using deep neural network and depth camera. IEEE Internet of Things Journal, 7(9), 8559-8571. https://doi.org/10.1109/JIOT.2020.2991456

Whittlestone, J., Nyrup, R., Alexandrova, A., & Cave, S. (2019). The role and limits of principles in AI ethics: Towards a focus on tensions. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 195-200.

Williamson, B., & Eynon, R. (2020). Historical threads, missing links, and future directions in AI in education. Learning, Media and Technology, 45(3), 223-235. https://doi.org/10.1080/17439884.2020.1798995

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education: where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 1-27. https://doi.org/10.1186/s41239-019-0171-0

Zhu, Y. (2021, June 12). Research on English teaching of professional skilled talents training based on artificial intelligence. IOS Press, 1-12.