A University of Arizona astronomer has developed a method to make artificial intelligence more reliable, tackling a critical flaw in AI: models that confidently give incorrect answers. Peter Behroozi, an associate professor at Steward Observatory, has created a technique that lets AI systems recognize when their predictions may be unreliable, even in large models with billions to trillions of parameters.

The method, supported by a National Science Foundation grant, adapts ray tracing, a computer graphics technique, to explore the complex mathematical spaces in which AI models operate. Behroozi's paper, available on the open-access arXiv site, offers a way to address 'hallucinations', in which neural networks produce made-up facts and fabricated research papers.

Inspired by a computational physics homework problem, the technique applies Bayesian sampling, a gold-standard statistical method, to effectively train thousands of models on the same data and compare the diversity of their responses. The approach is significantly faster than previous methods and could lead to safer, more resilient neural networks that hallucinate less often.

The implications are far-reaching, as AI is increasingly used in critical decision-making areas such as medicine, finance, and autonomous vehicles. By enabling AI to recognize its own uncertainty, Behroozi's method could improve trust in AI-assisted research and enhance the accuracy of critical applications.
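The core idea of training many models on the same data and using the spread of their answers as an uncertainty signal can be illustrated with a toy example. The sketch below is not Behroozi's ray-tracing method; it uses a simple bootstrap ensemble of linear fits (a common stand-in for Bayesian sampling) to show how predictions far from the training data come with visibly larger spread. All function names and the data here are hypothetical illustrations.

```python
import random
import statistics

def fit_line(xs, ys):
    # Ordinary least-squares fit of y = a*x + b.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def ensemble_predict(xs, ys, x_new, n_models=1000, seed=0):
    # Train many models on bootstrap resamples of the same data
    # (a crude stand-in for Bayesian posterior sampling) and report
    # the mean prediction plus its spread as an uncertainty estimate.
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]
        a, b = fit_line([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(a * x_new + b)
    return statistics.mean(preds), statistics.stdev(preds)

# Noisy data near y = 2x + 1, observed only for x in [0, 5].
rng = random.Random(42)
xs = [i * 0.5 for i in range(11)]
ys = [2 * x + 1 + rng.gauss(0, 0.3) for x in xs]

mean_in, std_in = ensemble_predict(xs, ys, x_new=2.5)    # inside training range
mean_out, std_out = ensemble_predict(xs, ys, x_new=20.0) # far extrapolation
print(f"x=2.5:  mean={mean_in:.2f}, spread={std_in:.3f}")
print(f"x=20.0: mean={mean_out:.2f}, spread={std_out:.3f}")
```

When the ensemble members disagree, as they do for the extrapolated point, the system has a quantitative signal that its answer should not be trusted; when they agree, the prediction is more reliable. This is the behavior, at toy scale, that the article describes for full-size neural networks.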