Use of AI in healthcare: How useful, how dangerous?








AI is not a replacement for doctors but a tool that expands their capabilities, and its responsible use in the healthcare sector is essential.

Some time ago, amid the confusion surrounding the Medical Education Commission's announcement of the PG results, I built a 'seat predictor' tool using publicly available data and AI.


Recently, when the actual government seat results were published, the tool proved safely conservative: it had slightly underestimated the qualifying rank, so that doctors would not form false expectations and could make safe decisions.
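The article does not describe how the tool's conservative bias was implemented, but the idea of deliberately underestimating a cutoff can be sketched in a few lines. The function name and the 10% margin below are purely illustrative assumptions, not details of the actual predictor:

```python
# Illustrative sketch only: one plausible way a "conservative" seat
# predictor could bias its estimate so users do not over-hope.
# The safety margin and names are hypothetical, not from the real tool.

def conservative_cutoff(predicted_rank: int, safety_margin: float = 0.10) -> int:
    """Shrink the predicted qualifying rank by a safety margin, so the
    tool reports fewer qualifying ranks than the model actually predicts."""
    return int(predicted_rank * (1 - safety_margin))

# Example: if the model predicts ranks up to 500 will get a seat,
# the tool reports 450, erring on the side of caution.
print(conservative_cutoff(500))  # prints 450
```

Deliberately under-promising in this way means any error the tool makes is in the safe direction: a user told they will not qualify who then does qualify is pleasantly surprised, while the reverse would be harmful.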


I have included a detailed description of the tool and how to use it in the description of the MD/MS video on my Bimarsha Acharya YouTube channel. This small experiment taught me one big lesson: AI is not an 'enemy' of the Nepali healthcare sector, but a powerful 'co-pilot' for those who know how to use it correctly.


In this context, I have been conducting clinical research training sessions in which I regularly cover the use of AI, its ethical aspects, and its responsible integration into daily medical practice.


Through these sessions, I have trained more than 700 doctors and medical students in Nepal. The experience has reinforced that AI must be used not merely as a tool, but safely and responsibly, with proper guidance.


AI has become a daily companion while I see patients in the hospital. I use it regularly to recall medication doses and precautions, compare treatment approaches, align my decisions with international guidelines, and understand the results of the latest research and trials. In complex cases, comparing one's initial clinical thinking with evidence-based information makes decisions clearer and more confident. Used this way, AI is a powerful tool that augments doctors' capabilities rather than replacing them.


AI for doctors: which tool is most useful?


The various AI tools in use today, such as Grok, Gemini, ChatGPT, Perplexity, and OpenEvidence, each have their own roles, and their usefulness varies by context. ChatGPT, Gemini, or Grok can be helpful for understanding general information, clarifying concepts, and getting quick answers. Perplexity presents information with sources, making it easier to search and compare. For clinical decisions, however, evidence-based, contextual, and up-to-date information is essential.


OpenEvidence is considered particularly useful in this regard. This platform focuses on providing evidence-based information based on international journals, clinical trials, and established guidelines. It shows doctors not just the answer, but also the scientific basis for it, which makes clinical decisions safe, reliable, and accountable.


Therefore, while various AI tools can be used for general understanding, OpenEvidence is considered one of the most suitable options in the current situation as an evidence-based platform for clinical practice and decision-making.


The danger of relying on AI's advice


Nowadays, many patients have started using AI as if it were a doctor. There is a growing trend of seeking medical advice from AI directly after noticing common symptoms, which can be seriously dangerous.


For example, if someone has a stomach ache, AI may suggest a common painkiller. But a serious condition such as appendicitis may be hidden behind that symptom. Even if the medicine brings temporary relief, the underlying disease may worsen.


This is where the difference between AI used by doctors and by patients becomes clear. Doctors use AI in combination with their knowledge, experience, and the patient's condition, while patients base their decisions on it directly, which increases the risk. Self-medication can sometimes even put lives at risk.


AI in Nepal's health sector


In a country with geographical challenges like Nepal, AI can bring about a major change in healthcare. In remote areas where there is a lack of specialist doctors, AI can help in decision-making at the primary level. Its use in X-rays, cardiac tests or emergency assessment can guide timely treatment.


Combining AI with telemedicine can reduce the distance between villages and cities. Patients can get specialist services nearby, while doctors can also provide better service with limited resources.


AI can also play a big role in the research sector. It can help increase participation in complex studies, data analysis and international publications. This has the potential to make Nepal’s health system knowledge-based and technology-friendly.


Our responsibility now


The future competition will not be between doctors and AI, but between doctors who know how to use AI and those who do not. A system that cannot adapt with time will fall behind.


Therefore, it is necessary for both the government and the private sector to work together to formulate a clear policy to integrate AI into the health system. It is imperative to provide training, resources and incentives to doctors.


If we fail to embrace this technology today, we will be unable to compete globally tomorrow. But if we move in the right direction, the Nepali healthcare sector can establish its identity at the international level.


The question now is clear: will we lead the change or lag behind it?
