
Artificial Intelligence in Healthcare

Abstract

This article explores the fast-paced world of artificial intelligence in medicine: how it works, how it is being used, what it can do, and what it cannot do. We cover some examples of where AI is used, such as administrative tasks, diagnosis, and treatment, and describe the importance of having good evidence to support AI-based decisions. We discuss topics in AI ethics, such as fairness, being open about how medical decisions are made, and patient privacy. We offer a list of key questions that you can ask your healthcare provider to help you decide for yourself how AI might fit into your care. This article was developed based on input from our patients at SickKids Hospital and youth living in Toronto, Canada, but the information it contains applies to any reader in any healthcare setting.

Have you ever used ChatGPT to help with your homework? Or wondered how a computer ad or search engine seems to know what you are thinking? These days, artificial intelligence (AI) seems to be everywhere, including in medicine. But what are these tools, and how do they work? Most importantly, what do young people seeking healthcare need to know? In healthcare settings, AI might be used to perform administrative tasks like appointment scheduling, to assist with diagnosis and illness detection, and even to predict how patients will do after medical procedures. No computer system is perfect—but neither are humans! The trick is figuring out how to get the best out of both: using AI tools but holding onto professionals’ human judgment to make the best decisions for patients. We are at the edge of a new frontier in healthcare, and this article will tell you all you need to know about AI tools so that you feel prepared when you visit a doctor or use a healthcare app online. We will describe some AI tools, identify some current uses in healthcare, and explore the field of AI ethics.

What is Artificial Intelligence?

AI is the science of teaching computers to perform specific tasks without explicit instruction—instead, they “learn” patterns from the data they are trained on. For example, Netflix and other streaming services learn what you like to watch and show you similar things, without an actual person selecting the shows. This is usually done through machine learning, in which the computer learns from past examples to predict similar future events. In medicine, this can mean learning which patients got a particular disease and teaching the computer how they differ from patients who did not get that disease. The goal of machine learning is to develop one or more algorithms or models—basically, the math behind the predictions—that can then be applied to new patients. The important thing to know is that AI and machine learning use patterns of information to make predictions. These patterns contain important information, but they are not always perfect. As you may know, what you have done in the past does not always predict what you will do in the future.
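To make the idea of learning from past examples concrete, here is a minimal sketch of machine learning in Python, using the scikit-learn library. The patient values below are invented for illustration only; they are not real medical data, and a real medical model would be trained and tested far more carefully.

from sklearn.tree import DecisionTreeClassifier

# Each past patient is described by two numbers: [temperature in C, cough score 0-10].
# All values are invented for illustration - not real medical data.
past_patients = [
    [36.8, 1],
    [37.0, 2],
    [38.9, 8],
    [39.2, 7],
]
outcomes = [0, 0, 1, 1]  # 0 = did not get the disease, 1 = did get the disease

# "Learning" = finding the patterns that separate the two groups of past patients
model = DecisionTreeClassifier().fit(past_patients, outcomes)

# The trained model can now make a prediction for a patient it has never seen
new_patient = [[38.5, 6]]
print(model.predict(new_patient))  # likely [1] - the pattern points to "disease"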

Generally, AI can be broken down into four methods: supervised learning, unsupervised learning, reinforcement learning, and generative AI. Some systems use more than one method—for example, generative AI can involve both unsupervised and reinforcement learning (Figure 1).

Figure 1 - (A) In supervised learning, the computer is taught to classify labeled examples (“apple” vs. “banana”). In medicine, AI can be trained to detect whether a tumor is cancerous or not. (B) In unsupervised learning, a computer learns different features of fruit (round, red, etc.) on its own. In healthcare, AI could help diagnose a patient based on symptoms. (C) In reinforcement learning, a computer learns how to improve over time. In healthcare, AI can learn the optimal amount of medicine for each patient. (D) Generative AI produces content after seeing past examples. In medicine, AI could generate notes based on a visit with the doctor.
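As a hedged illustration of the difference between supervised learning (Figure 1A) and unsupervised learning (Figure 1B), here is a toy Python sketch in which a clustering algorithm groups fruit-like measurements without ever being given labels. All numbers are made up for illustration.

from sklearn.cluster import KMeans

# Each fruit is described by [weight in grams, roundness from 0 to 1].
# All numbers are invented for illustration.
fruits = [
    [120, 0.90], [130, 0.95],  # apple-like: mid-weight and round
    [118, 0.30], [125, 0.25],  # banana-like: similar weight but not round
]

# Ask the algorithm to find two groups; it is never told "apple" or "banana"
groups = KMeans(n_clusters=2, n_init=10).fit_predict(fruits)
print(groups)  # e.g. [0 0 1 1] - the two shapes land in different clusters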

Generative AI is newer than the other types, but it involves a lot of the same principles. For example, ChatGPT and other synthetic (i.e., not human) language generators were trained on data in the form of words, sentences, and texts, usually from the internet. Image generators are trained using lots of images. It is important to know that generative AI approaches make predictions based on similarity to the examples they have seen—they are not trained to be accurate (right or wrong), just to create similar content. When synthetic text says something that is true, it is because the math worked out that way—not because the model “knows” that the information is factually correct. This is why it is important to remember that tools like ChatGPT do not make good search engines [1].
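To show what “predicting based on similarity, not accuracy” means, here is a toy Python sketch of the counting idea behind next-word prediction. Real large language models are vastly more complex than this, so treat it only as an illustration of the principle.

from collections import Counter

# A tiny "training set" of text; real models train on billions of words
training_text = "the sky is blue . the sky is blue . the sky is green .".split()

# Count which word follows the phrase "sky is" in the training data
next_words = Counter(
    training_text[i + 2]
    for i in range(len(training_text) - 2)
    if training_text[i] == "sky" and training_text[i + 1] == "is"
)

# The most frequent follower "wins" - not because it is true, but because
# it appeared more often in the training examples
print(next_words.most_common(1))  # [('blue', 2)]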

How is AI Being Used in Medicine?

AI is useful in healthcare because it is really good at picking up patterns, such as situations or symptoms that often occur together. For example, if you have a sore throat and a cough and are sneezing, this pattern of symptoms suggests that you likely have a cold. Past patterns can also be used to predict future events. For example, a particular sequence of events can predict what might happen next to a patient (e.g., vomiting is followed by dehydration). Pattern detection is something doctors use all the time to make medical decisions—although it is important to know that doctors and AI do not make decisions in the same way [2]. Doctors look at what is best for the patient as a whole, while AI tools are trained to perform very specific tasks. In healthcare, AI can be used for administrative tasks, detection and diagnosis, and intervention.
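As a toy illustration of pattern detection, the Python sketch below counts how often a diagnosis co-occurred with a set of symptoms in made-up past records. Real clinical tools are far more sophisticated; the records here are invented purely for illustration.

# Invented past records: a set of symptoms and the diagnosis that followed
records = [
    ({"sore throat", "cough", "sneezing"}, "cold"),
    ({"sore throat", "cough", "sneezing"}, "cold"),
    ({"sore throat", "fever"}, "strep throat"),
    ({"cough"}, "cold"),
]

# For a new patient's symptoms, look at every past record that contains them
symptoms = {"sore throat", "cough", "sneezing"}
matches = [diagnosis for past, diagnosis in records if symptoms <= past]

# The fraction of matching records with each diagnosis is the "pattern"
print(matches.count("cold") / len(matches))  # 1.0 - every past match was a cold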

Administrative Tasks

Healthcare workers spend a lot of time on administrative work, such as writing notes, attending meetings, and scheduling appointments [3]. Some tasks can be automated—like patients getting text reminders for their appointments or booking appointments online. More recently, AI has been used for synthetic text generation using large language models. These AI tools can “listen in” on a patient’s visit, transcribe everything said during the visit, and create summaries such as a visit note or a note for the patient’s school [4]. These tools may save time for doctors, but they can sometimes make random mistakes, so their output needs to be checked by a human [5, 6]. Doctors are ultimately responsible for making sure anything they document about their patients is accurate.
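The human-checking step described above can be pictured as a simple workflow. In the hedged Python sketch below, transcribe_visit, draft_summary, and clinician_review are hypothetical placeholders passed in as functions, not any real product's API; the point is simply that only the clinician-approved note is kept.

# transcribe_visit, draft_summary, and clinician_review are hypothetical
# placeholders supplied by the caller - this is not any real product's API.
def create_visit_note(audio, transcribe_visit, draft_summary, clinician_review):
    transcript = transcribe_visit(audio)   # speech-to-text over the visit audio
    draft = draft_summary(transcript)      # LLM-style draft of the visit note
    final_note = clinician_review(draft)   # the doctor corrects any AI mistakes
    return final_note                      # only the human-checked note is kept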

Detection and Diagnosis

One promising area for AI is assisting with disease detection and diagnosis. AI tools can be trained to detect medical problems by picking up important signals in data (e.g., heart rate changes, blood pressure), which can help healthcare providers identify problems earlier—possibly making treatment more successful [7]. AI excels in radiology, for example. Medical imaging technologies like X-rays and MRIs produce “pictures” with consistent patterns that AI can analyze to determine whether the patient is healthy or has a problem like cancer or a broken bone (Figure 2). With human oversight, these tools can improve the speed of disease detection, even catching some cases that might otherwise have gone undetected [7, 9, 10].

Figure 2 - Example of an AI system used in radiology to read a chest X-ray: it indicates whether anything on the X-ray looks abnormal, where the abnormality is in the image, and how likely the finding is to be accurate. Here, the AI has identified a possible problem in the patient’s lung (labeled “air space opacity”), the box on the right suggests what it might be, and the scale on the bottom shows the AI is more than 50% sure this finding is correct [8].
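One common way such a system keeps a human in the loop is by turning the model's confidence score into a triage decision. The Python sketch below is an assumption-laden illustration: model_score is a hypothetical placeholder for a trained imaging model, and the 0.5 threshold is illustrative, not a clinically validated value.

URGENT_THRESHOLD = 0.5  # illustrative cut-off, not a clinically validated value

# model_score is a hypothetical placeholder for a trained imaging model that
# returns the probability that a scan shows an abnormality.
def triage_scan(scan, model_score):
    score = model_score(scan)
    if score >= URGENT_THRESHOLD:
        return "flag for urgent radiologist review", score
    return "routine radiologist review", score  # a human still reads every scan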

Intervention

AI tools can also support treatment decisions and guide how medical professionals intervene to help patients [9]. One study showed how using an AI tool to detect a serious blood infection improved patients’ odds of getting the right treatment quickly, increasing their chances of survival [11]. Other attempts to use AI tools to detect infections have not been as successful [12], so it matters a lot how the specific tool is built and tested.

Ethical Issues

Medicine itself involves many ethical issues, and AI introduces even more. The sections that follow focus on a few issues that you should know about.

Being Open

In healthcare, it is very important for patients to understand how their medical decisions are made; this is called being “open” or “transparent” [5]. AI tools cannot always give honest “reasons” for their predictions, which is where it is important for doctors to step in. Doctors can be open with patients about how they are using AI tools in their decision making and can give their patients clear reasons for their recommendations. The information doctors provide should include evidence about the tool, either from the medical literature or from their personal experience.

Fairness

As pattern matchers, AI tools can be “unfair” or biased, meaning they can work differently based on factors that are not medically relevant and are considered unfair in some way. For example, women are diagnosed with anxiety more often than men, so AI tools might be more likely to diagnose anxiety in women and less likely to diagnose it in men. This could lead doctors to over-treat women or under-treat men for anxiety. Similar unfair biases can arise based on race, ethnicity, accent, language spoken, socioeconomic status, and other characteristics. Doctors should know which biases might be present in any AI tool they use and how these could affect their patients’ care.
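A simple first check for this kind of bias is to compare how often a tool makes a given prediction for different groups. The Python sketch below uses invented counts purely for illustration; a real fairness audit would be much more careful and would look at many more factors.

from collections import defaultdict

# Invented predictions: (group, did the tool predict anxiety?)
predictions = [
    ("women", True), ("women", True), ("women", False),
    ("men", True), ("men", False), ("men", False),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, predicted in predictions:
    totals[group] += 1
    positives[group] += predicted  # True counts as 1, False as 0

for group in totals:
    print(f"{group}: anxiety predicted in {positives[group] / totals[group]:.0%} of cases")
# women: 67% vs. men: 33% - a gap like this is a reason to investigate,
# not proof of bias on its own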

Privacy

In our digital world, it is becoming harder to trace where information about us ultimately ends up. Particularly for generative AI tools, a patient may wish to know where their personal information goes, what it might be used for, and who sees it. For example, if a doctor uses AI to summarize a patient’s appointment notes, the patient should know if other doctors or clinic staff are going to be able to read their appointment summary and learn personal medical information about them. How information is protected is also important and can help patients assess how comfortable they are with sharing.

Shared Decision Making

Even the most accurate tools often do not single-handedly determine how a patient is treated. Treatment decisions should include the patient’s values and life factors, and AI tools do not take these factors into account. Sometimes people feel like AI tools are so accurate that they should have more authority—but there is still not enough convincing evidence that this is broadly true. Medicine is complex, and medical knowledge evolves constantly. Asking questions and discussing uncertainty with your doctor can help you make the best decisions about your own care (Box 1).

Box 1 - AI and Your Healthcare Rights

Some questions you can ask your medical professional:

• Is an AI tool being used for my care?

• What data or information was used to train the AI tool?

• Is the tool approved by a healthcare regulator?

• How long have you been using the AI tool?

• Am I similar to or different from other patients you have used the AI tool with before?

• Where does my information go? Does the AI company keep any of my personal information? What do they do with it?

• What would your decision have been without the AI tool?

• If the AI tool’s decision turns out to be wrong, how would you know?

Your healthcare rights include the freedom to ask questions, whether you are the one making the decision or not (e.g., your parents might be the decision maker).

Sometimes your doctor might not know the answers, like where your data goes. Most companies have privacy policies on their websites, so you might want to look them up to improve your awareness of how your data is used. You can also ask for a copy of any documents generated by AI and check them yourself or with an adult. Empowered patients who are engaged in their healthcare tend to have better health outcomes than those who are not, so being curious and asking questions is part of how you care for yourself.

Remember This!

It is clear that AI tools can be helpful in healthcare. However, it is also clear that these tools need to be used with support and oversight from trained healthcare professionals. While there is a lot of excitement about AI in medicine, doctors still need to make sure they are using good evidence to make responsible choices. Knowing the issues that can arise can help prepare you for navigating an AI-enabled healthcare environment and participating knowledgeably in your care.

Glossary

Artificial Intelligence: The idea that a computer, using algorithms, can perform a task without a human telling it what to do each step of the way.

AI Ethics: The principles or values that oversee AI, its creation, and its use.

Machine Learning: The math of learning patterns in data, where learning is organized around some specific target or goal, such as predicting something.

Algorithm: The math behind how a computer uses data to make predictions. An algorithm is essentially the mathematical description of how a task is performed.

Generative AI: An AI-based tool that, in response to a question or prompt, can produce images, text, or other content based on its training on past examples.

Large Language Model: An AI-based tool trained on lots of text examples, which can make predictions about what words and sentences will sound like a good answer to a given question.

Radiology: The area of medicine that looks at x-rays and other images of a patient’s body to help diagnose and treat illnesses.

Bias: Having a preference for, or choosing, one thing instead of something else, without any clear reason why.

Conflict of Interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors would like to thank all the participants of their ongoing and previous research studies. They would also like to thank the AI-Ethics teams at SickKids Hospital in Toronto, Canada, the Women’s and Children’s Health Network in Adelaide, Australia, and the Australian Institute for Machine Learning. The authors would especially like to thank the young people who took the time to review this article, including Young Reviewer Harry (16). Harry—who enjoys a wide range of sports and games and is deeply interested in science, especially biology and the biomedical sector—brought a unique perspective to the review process. His passion for understanding how applications of biochemistry and genetics can solve real-world problems meaningfully enriched this work. We thank him for taking the time to improve the article!

AI Tool Statement

The author(s) declared that generative AI was not used in the creation of this manuscript.



References

[1] Heikkila, M. 2023. Why you shouldn’t trust AI search engines. MIT Technology Review. Available online at: https://www.technologyreview.com/2023/02/14/1068498/why-you-shouldnt-trust-ai-search-engines/ (Accessed November 26, 2024).

[2] Tikhomirov, L., Semmler, C., McCradden, M., Searston, R., Ghassemi, M., Oakden-Rayner, L., et al. 2024. Medical artificial intelligence for clinicians: the lost cognitive perspective. Lancet Digit. Health. 6:e589–94. doi: 10.1016/S2589-7500(24)00095-5

[3] Davenport, T., and Kalakota, R. 2019. The potential for artificial intelligence in healthcare. Future Healthc. J. 6:94–8. doi: 10.7861/futurehosp.6-2-94

[4] Tai-Seale, M., Baxter, S. L., Vaida, F., Walker, A., Sitapati, A. M., Osborne, C., et al. 2024. AI-generated draft replies integrated into health records and physicians’ electronic communication. JAMA Netw. Open 7:e246565. doi: 10.1001/jamanetworkopen.2024.6565

[5] Dave, T., Athaluri, S. A., and Singh, S. 2023. ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front Artif. Intell. 6:1169595. doi: 10.3389/frai.2023.1169595

[6] Tai-Seale, M., Baxter, S. L., Vaida, F., Walker, A., Sitapati, A. M., Osborne, C., et al. 2024. AI-generated draft replies integrated into health records and physicians’ electronic communication. JAMA Netw. Open 7:e246565. doi: 10.1001/jamanetworkopen.2024.6565

[7] Eng, D. K., Khandwala, N. B., Long, J., Fefferman, N. R., Lala, S. V., Strubel, N. A., et al. 2021. Artificial intelligence algorithm improves radiologist performance in skeletal age assessment: a prospective multicenter randomized controlled trial. Radiology 301:692–9. doi: 10.1148/radiol.2021204021

[8] User interface of publicly available ‘Annalise Enterprise CXR’ web demo. 2022. Artificial Intelligence in Medical Imaging: Benefits That Extend Beyond the Reporting Room? Annalise AI. Available online at: https://annalise.ai/2022/08/artificial-intelligence-in-medical-imaging-benefits-that-extend-beyond-the-reporting-room/ (Accessed September 1, 2025).

[9] Ramgopal, S., Sanchez-Pinto, L. N., Horvat, C. M., Carroll, M. S., Luo, Y., Florin, T. A., et al. 2022. Artificial intelligence-based clinical decision support in pediatrics. Pediatr. Res. 93:334–41. doi: 10.1038/s41390-022-02226-1

[10] Seol, H. Y., Shrestha, P., Muth, J. F., Wi, C.-I., Sohn, S., Ryu, E., et al. 2021. Artificial Intelligence-assisted clinical decision support for childhood asthma management: a randomized clinical trial. PLoS ONE 16:e0255261. doi: 10.1371/journal.pone.0255261

[11] Adams, R., Henry, K. E., Sridharan, A., Soleimani, H., Zhan, A., Rawat, N., et al. 2022. Prospective, multi-site study of patient outcomes after implementation of the TREWS machine learning-based early warning system for sepsis. Nat. Med. 28:1455–60. doi: 10.1038/s41591-022-01894-0

[12] Habib, A. R., Lin, A. L., and Grant, R. W. 2021. The Epic sepsis model falls short—the importance of external validation. JAMA Intern. Med. 181:1040–1. doi: 10.1001/jamainternmed.2021.3333