Organisational readiness, prevailing regulations, and individual understanding are some factors that affect the success of AI deployment in health systems.
During the HIMSS24 APAC panel session, “AI Horizons: Exploring the Future of Innovations,” Professor In-Young Choi, vice CIO of Catholic Medical Centre (CMC), Dr Shankar Sridharan, chief clinical information officer at Great Ormond Street Hospital, and Dr Ngai Tseung Cheung, head of Information Technology and Health Informatics at the Hospital Authority, Hong Kong (HAHK), shared outcomes, lessons, and expectations of AI implementation in healthcare. Professor Kee Yuan Ngiam, head of the AI office at the National University Health System, moderated the panel.
The realities of AI implementation
AI in healthcare has been touted as an all-in-one solution for digital needs, but Dr Cheung thinks otherwise.
“AI is an amazing technology and sometimes appears magical, but it really isn’t. AI is a tool no different from any other technology we can put in play.”
He shared questions to bear in mind when applying AI: “How does AI get put into a workflow at scale so that it can deliver positive outcomes? What is the impact [of AI on the organisation]? How does it affect the users? Does the output give you something actionable?”
Dr Shankar of Great Ormond Street Hospital shared his initial views. “The medicinal asset [for implementing AI] is enthusiasm, which can be quite infectious. But we need to address its purpose… When we do a benefits assessment on safety, security and technology, is there a clinical, operational or patient experience benefit?”
Prof Choi, however, offered a different perspective, shaped by the strict regulatory environment she works within.
“We do not have government-based control for EMR contracts. Each hospital has its own [exclusive] data. It is not easy to share data with other hospitals, and it is not easy for AI companies to access hospital data. If the [AI] algorithm uses cloud computing, it will be outside Korea, and the law does not allow our medical data to enter foreign countries,” she elaborated.
Overcoming barriers to adoption
Given its multi-layered nature, the deployment of AI tools may require further reassurance. Dr Cheung shared HAHK’s proactive stance in conducting internal validations of AI tools.
In contrast, Dr Shankar offered data safety reassurance through vetted partnerships. “We have a trusted research department and commercial relationships with pharma and industry that airlock [data]. Then, we communicate to our CEO that our data is safe and that the [AI] tool is useful and valuable without losing data due to risks.”
Dr Cheung also noted that regular demonstrations can ease pessimism among health workers. “Good thing is we have 43 hospitals. If it (an AI test) does not work out, it is fine. Then we can show that we tried… If we can show them that it works and the hesitant [staff and doctors] switch [to optimism], that will be good… We have to show them that the AI is okay [for use].”
Realising future AI potential
Despite adoption barriers, CMC has application development plans that could help pathologists plan treatment for lung cancer.
“One patient can have many subtypes of cancer. For example, an individual can have a 20% capillary subtype. Humans can’t count that [figure], but AI can. It can help clinicians provide better treatment to the patients,” Prof Choi explained.
“We focus on one particular disease for now. But after generative AI comes, perhaps AI will consider [relevant health] details in a more comprehensive way.”
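To make the counting Prof Choi describes concrete, the sketch below shows how per-region subtype labels from a hypothetical patch-level classifier could be aggregated into percentage estimates for a slide. The classifier, label names, and figures are illustrative assumptions, not details of CMC’s planned application.

```python
from collections import Counter

def subtype_proportions(patch_labels):
    """Turn per-patch subtype labels from an assumed slide-level classifier
    into percentage estimates for each cancer subtype."""
    counts = Counter(patch_labels)
    total = sum(counts.values())
    return {subtype: round(100 * n / total, 1) for subtype, n in counts.items()}

# Hypothetical output for 1,000 tissue patches on one slide.
labels = ["subtype_A"] * 550 + ["subtype_B"] * 200 + ["subtype_C"] * 250
print(subtype_proportions(labels))
# {'subtype_A': 55.0, 'subtype_B': 20.0, 'subtype_C': 25.0}
```

The point is simply that a model can tally thousands of regions consistently, which is the scale of counting Prof Choi says humans cannot do by eye.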
Meanwhile, Dr Cheung considered the human impact of future AI.
“Today’s AI is not sentient. It is not at the level of a human being. However, there is a concept of artificial general intelligence that is touted to be superhuman. If that [technology] is better than humans in every way, there will be a massive change to medicine and humanity.”
Dr Shankar believes that there is a long way to go in realising AI’s full potential. “We are a bit like cavemen [with AI]. Our limitations are our own, as are our use cases.”