Generative Artificial Intelligence-based Diagnostic Algorithms in Disease Risk Detection, in Personalized and Targeted Healthcare Procedures, and in Patient Care Safety and Quality.

Author: Bugaj, Martin
  1. Introduction

    ChatGPT can be decisive in comprehensive medical assessment and treatment, in streamlined diagnosis and management planning, and in virtual consultation enablement. The purpose of our systematic review is to examine the recently published literature on generative artificial intelligence-based diagnostic algorithms and to integrate the insights it provides on disease risk detection, on personalized and targeted healthcare procedures, and on patient care safety and quality. By analyzing the most recent (2023) and significant (Web of Science, Scopus, and ProQuest) sources, our paper has attempted to prove that generative artificial intelligence tools automatically identify and segment various structures (Cegarra Navarro et al, 2023; Lazaroiu et al, 2022b; Popescu et al, 2017a) in medical images, enhancing diagnostic accuracy and surgical planning. The actuality and novelty of this study lie in addressing how generative artificial intelligence tools can support the evidence-based decision-making process (Kliestik et al, 2020; Morley, 2022a; Popescu, 2018; Watson, 2022) performed by healthcare professionals in clinical practice by resolving remote inquiries and decreasing human error risks, an emerging topic attracting considerable interest. Our research problem is whether generative artificial intelligence tools (Balcerzak et al, 2022; Lazaroiu et al, 2022a; Nica et al, 2023) evaluate disease severity and prognosis efficiently and accurately through personalized surgical plans and predict surgical outcomes.

    In this review, prior findings have been cumulated indicating that generative artificial intelligence tools can articulate accurate and reliable data (Dabija et al, 2022; Lewkowich, 2022; Popescu et al, 2017b; Vatamanescu et al, 2020), draft well-informed outlines and conclusions, and discuss and analyze results, freeing up important time and resources, enhancing medical education, and shaping swift skill development, critical thinking, and problem-solving. The identified gaps advance how generative artificial intelligence algorithms (Andronie et al, 2023; Lazaroiu et al, 2017; Nica, 2018; Peters et al, 2023) articulate specific treatment recommendations, clinical decision-making, correct diagnoses, patient outcomes, medical practices, and healthcare equity. Our main objective is to indicate that users are likely to leverage ChatGPT for self-diagnosis in suitable healthcare contexts and applications, articulating patient expectations and decision-making processes concerning trustworthy and substantiated health information sources in terms of awareness, accessibility, and literacy.

  2. Theoretical Overview of the Main Concepts

    ChatGPT can be decisive in clinical trial recruitment and decision-making, in disease risk detection, in automated medical coding, and in patient outcome improvement and clinical information. Generative artificial intelligence algorithms can be harnessed in formative and summative evaluations in medical education, leading to personalized and targeted healthcare procedures through computer-based customized medical simulation scenarios. ChatGPT can configure medical knowledge and healthcare practice perceptions, improving patient care and clinical decisions in terms of accuracy and completeness. The manuscript is organized as follows: theoretical overview (section 2), methodology (section 3), ChatGPT can configure medical knowledge and healthcare practice perceptions (section 4), ChatGPT can be decisive in comprehensive medical assessment and treatment (section 5), generative artificial intelligence tools evaluate disease severity and prognosis efficiently and accurately (section 6), discussion (section 7), synopsis of the main research outcomes (section 8), conclusions (section 9), and limitations, implications, and further directions of research (section 10).

  3. Methodology

    We carried out a quantitative literature review of ProQuest, Scopus, and the Web of Science throughout April 2023, with search terms including "generative artificial intelligence-based diagnostic algorithms" + "disease risk detection," "personalized and targeted healthcare procedures," and "patient care safety and quality." Because we limited our analysis to research published in 2023, only 186 papers met the eligibility criteria. By removing controversial or unclear findings (scanty or unimportant data), results unsupported by replication, undetailed content, and papers with quite similar titles, we settled on 32, chiefly empirical, sources (Tables 1 and 2). The data visualization tools were Dimensions (bibliometric mapping) and VOSviewer (layout algorithms). The reporting quality assessment tool was PRISMA. The methodological quality assessment tools were AXIS, DistillerSR, ROBIS, and SRDR (Figures 1-6).

    Table 2 General synopsis of evidence as regards focus topics and descriptive outcomes (research findings)

    Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were used to ensure that the literature review is comprehensive, transparent, and replicable. The flow diagram, produced with a Shiny app, presents the stream of evidence-based collected and processed data through the various steps of the systematic review, depicting the numbers of identified, included, and excluded records, together with the justifications for exclusions.

    To ensure compliance with PRISMA guidelines, citation management software was used, and the inclusion or exclusion of articles at each stage was tracked in a custom spreadsheet. Justifications for the removal of ineligible articles were specified during the full-text screening and final selection.
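    The stage-by-stage tracking described above can be sketched as a small tally over a screening log, from which the counts for a PRISMA flow diagram are derived. This is a minimal illustration, not the software actually used in the review; only the totals (186 identified records, 32 included sources) come from the methodology itself, and the stage-level exclusion reasons and their counts below are hypothetical.

```python
# Minimal sketch of a PRISMA-style screening log: each record carries an
# include/exclude decision and, where excluded, a justification.
from collections import Counter

def prisma_counts(log):
    """Tally included and excluded records and exclusion reasons."""
    included = sum(1 for r in log if r["decision"] == "include")
    reasons = Counter(r["reason"] for r in log if r["decision"] == "exclude")
    return {"identified": len(log), "included": included,
            "excluded": len(log) - included, "reasons": dict(reasons)}

# Illustrative log: 186 identified records screened down to 32 sources.
# The split across exclusion reasons is invented for the example.
screening_log = (
    [{"decision": "include", "reason": None}] * 32
    + [{"decision": "exclude", "reason": "unclear or unreplicated findings"}] * 120
    + [{"decision": "exclude", "reason": "undetailed content"}] * 20
    + [{"decision": "exclude", "reason": "near-duplicate title"}] * 14
)

summary = prisma_counts(screening_log)
print(summary["identified"], summary["included"])  # 186 32
```

Keeping one decision row per record, rather than aggregate counts, is what makes the exclusion justifications auditable at full-text screening, as PRISMA expects.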

  4. ChatGPT Can Configure Medical Knowledge and Healthcare Practice Perceptions

    ChatGPT can augment the work of physicians in terms of efficiency, rather than replacing them (Abd-alrazaq et al., 2023; Gabrielson et al., 2023; Sallam, 2023), reducing administrative drudgery and optimizing access to laboratory, imaging, and pathology outcomes in low-risk settings, while providing evidence-based recommendations in relation to clinical practice. Generative artificial intelligence tools can articulate accurate and reliable data, draft well-informed outlines and conclusions, and discuss and analyze results, freeing up important time and resources, enhancing medical education and shaping swift skill development, critical thinking, and problem-solving.

    Through predictive analytics, ChatGPT can assess real-time data streams, detecting patterns and abnormal changes and providing swift alerts and risk assessment (Jin et al., 2023; Liu et al., 2023; Manohar and Prasad, 2023; Srijan Chatterjee et al., 2023; Venerito et al., 2023), so that healthcare professionals can monitor patient data incessantly and intervene, preventing adverse events. ChatGPT can configure medical knowledge and healthcare practice perceptions, improving patient care and clinical decisions in terms of accuracy and completeness.
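    The alerting pattern described above can be illustrated with a simple rolling z-score detector over a monitored vital-sign stream. This is a hedged sketch of the general technique, not the model used by any of the cited systems; the window size, threshold, and heart-rate values are assumptions chosen for the example, not clinical standards.

```python
# Illustrative streaming-alert sketch: flag readings that deviate sharply
# from the recent baseline held in a sliding window (rolling z-score).
import statistics
from collections import deque

def stream_alerts(readings, window=5, z_threshold=2.5):
    """Return (index, value) pairs for readings whose z-score against the
    preceding window exceeds the threshold."""
    recent = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        if len(recent) == recent.maxlen:
            mean = statistics.fmean(recent)
            stdev = statistics.pstdev(recent) or 1e-9  # guard a flat baseline
            if abs(value - mean) / stdev > z_threshold:
                alerts.append((i, value))
        recent.append(value)
    return alerts

# A hypothetical heart-rate stream with one abrupt spike.
hr = [72, 74, 73, 75, 74, 73, 120, 74, 73]
print(stream_alerts(hr))  # [(6, 120)]
```

A real monitoring pipeline would layer clinically validated thresholds and escalation rules on top of such a detector; the point here is only that continuous baselining is what turns a raw data stream into actionable alerts.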

    Clinical letter generation by ChatGPT requires regulation and monitoring of system outputs, as inaccurate result reporting or misinterpretation of treatment guidelines can affect patient care negatively (Ali et al., 2023; Mondal et al., 2023; Putra et al., 2023); hence the need to integrate such artificial intelligence algorithms responsibly into the clinical workflow and to reduce potential healthcare risks with respect to patient care safety and quality. Generative artificial intelligence algorithms provide diagnostic suggestions swiftly by integrating patient medical history, symptoms, triage, and condition. (Table 3)

    Table 3 Synopsis of evidence as regards focus topics and descriptive outcomes (research findings)

  5. ChatGPT Can Be Decisive in Comprehensive Medical Assessment and Treatment

    Generative artificial intelligence tools can support the evidence-based decision-making process performed by healthcare professionals in relation to clinical practice by solving remote inquiries and decreasing human error risks (Cifarelli and Sheehan, 2023; Deiana et al., 2023; Giannos...
