Years ago, when I was pursuing my Ph.D., I was excited by the prospect of applying new analytical methods in computer science — machine learning, neural networks and data science — to facilitate the interpretation of clinical images. I saw this as a way to improve what was then a subjective and laborious state of the art. As computing power grew, so did the possibility of automating much of what an expert clinician was traditionally required to do.
As a young graduate student, I had an abundance of data and a well-defined problem to tackle. That helped foster many friendships with newly found colleagues in computer science who possessed strong analytical and computational tools but needed real-world data and meaningful challenges. The lessons I learned from those early collaborations continue to guide my thinking. Even today, amid enthusiastic celebrations of artificial intelligence, those same experiences and cautionary tales frequently come to mind.
Automate the boring
Many tasks can be automated, but at what cost? Yes, I can write the code or design a machine to do that, but is it worth the investment?
In biomedical imaging, a common task is to identify features, delineate them, and quantify their size, density and change over time. The problem is that any code developed to automate a diagnostic task will make mistakes. What you gain through automation is the ability to process more data at faster speeds and to perform tasks in a repeatable and objective way. What you cannot avoid are errors.
Detection sensitivity will fail in some cases; not all lesions will be identified. Failures of specificity will also occur. Perfectly normal individuals will be misclassified as diseased. If you have a large volume of cases to process and are willing to accept the limitations of automation — imperfect sensitivity and specificity — then these powerful computational tools can be useful. The methods are far better today than they once were.
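To make those two failure modes concrete, here is a minimal Python sketch of how the rates are computed. The counts are invented for illustration; they do not come from any real diagnostic system.

```python
# Hypothetical counts from an automated lesion-detection system
# screening 1,100 cases (100 diseased, 1,000 healthy). All numbers
# are invented examples, not real results.

true_positives = 88    # diseased cases the system flagged
false_negatives = 12   # diseased cases the system missed (sensitivity failures)
true_negatives = 930   # healthy cases correctly passed
false_positives = 70   # healthy people wrongly flagged as diseased (specificity failures)

sensitivity = true_positives / (true_positives + false_negatives)  # 0.88
specificity = true_negatives / (true_negatives + false_positives)  # 0.93

print(f"Sensitivity: {sensitivity:.2f} (misses {false_negatives} real lesions)")
print(f"Specificity: {specificity:.2f} (misclassifies {false_positives} healthy people)")
```

Even at these respectable rates, a large screening volume guarantees a steady stream of missed lesions and false alarms, which is exactly why the output needs review.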
Nevertheless, it remains critical to have an informed human exercising judgment and reviewing the output. For me, that means automated clinical systems are just that: tools. They can provide decision support, enhance productivity, reduce the burden of repetitive tasks, and improve reliability and performance. I have yet to encounter an expert system that is infallible, human or otherwise. For that reason, I remain uninterested in fully automated diagnosis but very open to technological advances in decision support.

Creativity is hard to code
Over the past two years, I have spent considerable time experimenting with large language models to explore their strengths and weaknesses, and it has been a lot of fun. I have generated road-trip itineraries, impossible works of art, limericks and would-be novels; corrected grammar; refined gluten-free recipes; and much more.
What I have found is that these tools are often excellent entertainment. They can also be spectacularly wrong, even absurd. Aggregating data into organized reports and summaries is an easy lift that produces satisfying results, most of the time. Where things become more interesting, and not always in a good way, is when creativity is required.
Asking for insights from the perspectives of world-renowned philosophers, scientists, humanitarians and history’s great leaders can yield engaging and thought-provoking responses. Fortunately, there are extensive written records from which large language models can infer and shape their answers. However, when it comes to developing creative solutions to hard problems, the human mind — with its imperfect memories and tendency to find patterns and meaning in a chaotic reality — remains difficult to surpass.
Yes, the image was AI-generated to resemble a Giacometti-style line drawing of my face. It did a pretty good job, except for the chin.
Can robots teach humans compassionate care?
I am still working to identify the best applications of AI in teaching, research and patient care. It is undeniably a powerful tool that students in the health professions must understand if they are to use it responsibly. It is naive to think that students, and future colleagues, will not be influenced by the growing adoption of AI.
It is encouraging to see faculty and administrators not only embracing this technology but also intentionally exploring responsible ways to adopt AI tools. Last year, the College of Optometry began using an AI-simulated patient chat platform that allows students to practice clinical conversations with artificial patients in realistic scenarios. The goal was to expose students to a variety of case types that could help prepare them for national licensing examinations.
So far, the simulations have proven engaging and realistic, and they appear to be boosting exam performance. We identified a real-world problem, had sufficient data to train AI models capable of simulating human conversation, and were able to scale the solution to accommodate thousands of simulated patient encounters for hundreds of students each semester.
This technology will not replace real human experiences. But it can help prepare students for the encounters that matter most — those involving people in need of health care and seeking a human connection.