Abstract
Since the introduction of ChatGPT in November 2022, generative AI (GAI) services have been studied intensively in the field of AI education, as they provide more human-like interactions and more plausible responses than traditional AI services. However, GAI services can generate responses containing AI hallucinations, caused by factors such as training-data bias and model bias. Several studies have investigated the causes of AI hallucinations and explored strategies to mitigate their negative impact, but domestic studies remain fewer than international ones. In this study, we analyze international research trends related to AI hallucinations and review research cases to assess the current status of domestic AI hallucination research. Based on our findings, we propose directions for the educational use of GAI that account for the issue of AI hallucinations. Our analysis highlights that the educational use of GAI requires both a clear understanding of AI hallucinations and the practical application of methods that mitigate their negative impacts.
Keywords
- Title
- 인공지능 환각에 대한 교육적 대응 전략과 연구 방향 - 국내외 연구 분석을 바탕으로 한 시사점
- Title (other language)
- Educational Response Strategies and Research Directions for AI Hallucinations: Insights from Domestic and International Research Analyses
- Authors
- 박윤수; 박호현; 이유미
- Publication date
- 2026-01
- Type
- Y
- Journal
- 컴퓨터교육학회 논문지
- Volume
- 29
- Issue
- 1
- Pages
- 15–27