
This post was crafted in partnership with the UF-VA Bioethics Unit.
On October 28, 2022, the UF-VA Bioethics Unit hosted a Grand Rounds talk with Besa Bauta, Chief Data and Analytics Officer at the Texas Department of Family and Protective Services, about the role of artificial intelligence (AI) in healthcare. A recording of the talk is available on the UF Department of Psychiatry website.
As technology advances, AI is playing a growing role in healthcare, where it can enhance clinical decision-making.
“AI is currently applied with virtual nursing assistants, AI health workers… and optimization of clinical trials,” said Bauta during the Grand Rounds.
However, this also raises ethical concerns about data monetization and usage, as well as inherent biases in AI. This conversation is particularly relevant given UF’s AI initiative and the increasing integration of AI into multiple sectors of society.
The use of AI in healthcare opens up a wide range of possibilities, many of them in the mental health field. In an exclusive interview, Bauta explained how, for example, AI can detect signs of depression in a patient’s speech using a technique called sentiment analysis.
This approach applies natural language processing to transcribed speech, and in some systems acoustic analysis of the voice itself, to identify patterns that predict the speaker’s emotional state. It could prove invaluable for detecting depression in patients who may be masking their symptoms from their physician.
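For readers curious what such an analysis looks like in practice, here is a minimal sketch that scores hypothetical transcript snippets with an off-the-shelf sentiment classifier. The library, default model, and simple negativity ratio are illustrative assumptions, not the clinical system Bauta described.

```python
# Minimal sketch: scoring transcribed speech with an off-the-shelf
# sentiment model. The model choice and the negativity-ratio heuristic
# are illustrative only, not a validated clinical tool.
from transformers import pipeline

# Off-the-shelf text classifier (assumes the `transformers` library is
# installed; any text-classification model could be swapped in).
classifier = pipeline("sentiment-analysis")

def negative_utterance_ratio(utterances):
    """Return the fraction of utterances the model labels NEGATIVE."""
    results = classifier(utterances)
    negative = sum(1 for r in results if r["label"] == "NEGATIVE")
    return negative / len(results)

# Hypothetical transcript snippets standing in for a clinical conversation.
sample = [
    "I haven't been sleeping much lately.",
    "Most days I just feel tired and flat.",
    "Work has been fine, I guess.",
]
print(f"Negative-utterance ratio: {negative_utterance_ratio(sample):.2f}")
```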
Another use of AI in healthcare that Bauta mentioned is as a learning tool. For example, software developed for Google Glass uses facial-expression recognition to detect emotional cues and inform the wearer. It is accurate enough to catch microexpressions and to distinguish fake smiles from genuine “Duchenne” smiles.
According to Bauta, this technology could be particularly helpful for individuals on the autism spectrum who struggle to read facial expressions, augmenting their ability to interpret social cues.
As exciting as AI’s potential in healthcare is, and as useful as it has already proven, we must be wary of the ethical challenges. In her talk, Bauta focused on privacy concerns around the volume of patient data being collected. To amass large datasets, some companies collect data from people without their consent, a practice at odds with the privacy laws meant to protect patients.
AI companies must also make sure that the data they collect is representative of the target population; biased data can lead to inaccurate results. For example, a sentiment-analysis model trained on data from speakers of one dialect may misdiagnose speakers of other dialects, misinterpreting dialect differences as signs of a condition. A basic safeguard, sketched below, is to audit a model’s performance separately for each group it will serve.
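The following sketch illustrates that kind of audit by comparing a hypothetical model’s accuracy across dialect groups. The column names and toy records are placeholders, not real evaluation data.

```python
# Minimal sketch of a per-group bias audit: compare a model's accuracy
# across subgroups before trusting it clinically. The "dialect", "label",
# and "prediction" columns are hypothetical placeholders.
import pandas as pd

def accuracy_by_group(df, group_col="dialect"):
    """Accuracy of the model's predictions within each subgroup."""
    return (
        df.assign(correct=df["label"] == df["prediction"])
          .groupby(group_col)["correct"]
          .mean()
    )

# Toy evaluation records standing in for real test results.
records = pd.DataFrame({
    "dialect":    ["A", "A", "A", "B", "B", "B"],
    "label":      [1, 0, 1, 0, 1, 0],
    "prediction": [1, 0, 1, 1, 0, 1],
})
print(accuracy_by_group(records))
# A large gap between groups signals the kind of dialect bias described above.
```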
“There are always biases,” Bauta said during her interview. “It’s baked into everything that we do, unfortunately they’re there, and that’s why I think it’s really important to be aware of them.”
However, Bauta also noted that many groups are raising awareness about how algorithms invade our personal liberties, rights, and decision-making abilities.
These organizations aim to promote transparency in the algorithms’ processes, contributing significantly to combatting algorithmic bias.
During her interview, Bauta also shed light on how the healthcare sector is adapting patient-consent practices to the use and analysis of patient data.
“We consent to a lot of things. There’s two options, either opt in or opt out,” said Bauta.
Typically, patients opt in when they visit the doctor’s office and are presented with a comprehensive form that covers HIPAA regulations. They have the opportunity to review the information and agree to let their data be used. However, this consent doesn’t necessarily extend to the development of algorithms using their data.
Often, patients are unaware that their data is being used for this purpose. That’s why opting out has become more prevalent recently, according to Bauta.
“As far as the consent, it’s important for the… patient [to know how] that information will be used for them to get treated… if I was informed, you know, I would be fine with that information being used. That doesn’t mean that everybody would be fine with that. There has to be a benefit either to the patient or a benefit to the society if this information gets used.”
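One practical way to honor that kind of consent is to record which purposes each patient has opted into and filter records before any secondary use, such as algorithm development. The sketch below uses hypothetical field and purpose names, not an actual consent-management system.

```python
# Minimal sketch of respecting consent flags before secondary use of data.
# The record fields and the "algorithm_development" purpose string are
# hypothetical; real systems track consent per HIPAA authorization.
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    patient_id: str
    data: dict
    consented_purposes: set = field(default_factory=set)

def records_for_purpose(records, purpose):
    """Keep only records whose consent explicitly covers this purpose (opt-in)."""
    return [r for r in records if purpose in r.consented_purposes]

# Toy cohort: only the first patient opted into algorithm development.
cohort = [
    PatientRecord("p1", {"phq9": 14}, {"treatment", "algorithm_development"}),
    PatientRecord("p2", {"phq9": 6},  {"treatment"}),
]
training_set = records_for_purpose(cohort, "algorithm_development")
print([r.patient_id for r in training_set])  # only patients who opted in
```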
Bauta ended our interview by emphasizing transparency and informed consent. By addressing these issues proactively, we can harness AI to improve healthcare outcomes while respecting individual rights and societal well-being, and while avoiding potential harms.
The graphic below, from Bauta’s presentation, highlights some of the key ethical considerations that must be taken into account when using AI in healthcare.

Aniqa Ahmed, Meryem Yuksel, Tina Chi, and Nicole Dan contributed to this post.