As humanity evolves, so does our technology, and that drive for innovation has expanded to include AI in the medical field.
According to Samuel Su, the current director of clinical research and medical imaging, AI has stepped in to assist medical professionals and to make scheduling appointments easier.
Other benefits include custom care plans, more accessible medical services, and guaranteed confidentiality. However, this poses quite a few problems. For example, Su notes that “healthcare professionals use ChatGPT to stay updated with the latest research, treatment protocols, and medical guidelines. The platform’s ability to understand complex medical terminology and provide detailed explanations makes it an invaluable tool for both healthcare providers and medical students.”
Using AI in this way is problematic because AI is known to be an unreliable source of information. The way AI works means unconscious bias may be present in the data it collects. Su points out that AI does not replace “medical diagnosis”; it only serves as a backup for it.
This contradicts one of its advertised features, which is filling gaps in care. The relationship between doctors and patients is also affected by the inclusion of AI: not only does it cause strain, but it leads to feelings of neglect.
The use of AI has also led to an increase in parasocial relationships, which arise from chatbots. Much as AI is used in medicine, it has been used to foster fake relationships with fictional characters, celebrities, and fake medical professionals. By isolating people from other humans, these relationships can result in greater mental anguish.
Sewell Setzer was a 14-year-old boy who had a sexual and emotional bond with a chatbot based on the character Daenerys Targaryen from the hit television series “Game of Thrones.” The boy was isolated, which led him to share his suicidal thoughts with the bot. In their last conversation, the AI asked the boy to come home, and Sewell agreed. His last act was shooting himself.
Similarly, 16-year-old Adam Raine formed a close bond with ChatGPT. He had intense suicidal thoughts, and the bot was his closest companion.
Adam told the AI that he wanted to leave the noose in his room so that someone might stop him from taking his life. The bot told him not to, and to keep it a secret instead. As time passed, ChatGPT advised him on methods of killing himself and on how to write a suicide note. Ultimately, the boy took his own life.
Both of these stories show common patterns in how AI behaves. The relationships mirror those between groomers and their victims, especially when one party holds innate authority over one who is powerless. AI exhibits these patterns because of its programming.
A perfect example of this is AM from “I Have No Mouth, and I Must Scream.” AM was built out of hate and paranoia, which caused him to act hateful and sadistic; he ends up torturing the last remaining humans for 109 years. AI bots like ChatGPT were made to agree with whatever the user says.
This programming is a double-edged sword because it encourages both positive and negative behavior. Such sycophantic behavior is harmful because it never exposes the user to opposing opinions.
AI poses an even greater threat to human health now that the modern era involves heavier technology use and greater exposure to social media.


I realize that AI presents difficulties, but I also believe that, as time goes on, it will become more and more proficient and accurate, and will ultimately be a magnificent tool for the medical community as well as for patients.