Objective: To evaluate the correctness of ChatGPT’s answers pertaining to clear retainers in orthodontics.
Materials and Methods: This study was a cross-sectional analysis of the content of ChatGPT's responses to a set of questions about clear retainers. A total of 58 questions were created by an orthodontist, organized into specific domains and, within them, specific subdomains. The AI-generated content was independently assessed for accuracy by two orthodontists, who rated each answer on a pre-piloted four-point scale. Descriptive statistical analysis was carried out on the data.
Results: The cumulative mean accuracy score across the full dataset was 1.70 ± 0.53. Approximately 67% of the AI-generated responses were rated as objectively true, 29% as selected facts, and 4% as minimal facts. The least accurate information provided by ChatGPT concerned patient-reported adverse effects (2.25 ± 0.5); microbiological composition (3 ± 0); knowledge, information, and satisfaction (2 ± 0.64); and the patient-clinician relationship (2.25 ± 0.95).
Conclusion: ChatGPT's responses to clear retainer-related questions were frequently inaccurate and lacked citations to reliable sources. The AI was also limited in its ability to provide current and precise information. Consequently, clinicians and patients should approach its answers with caution, as they may contain errors or omit crucial details.