Artificial Intelligence (AI) provides opportunities to make healthcare more efficient while improving its quality and safety, by automating routine, simple tasks and by providing decision support for complex problems. Recent market estimates identify healthcare as the largest market opportunity for AI. Yet AI in healthcare is still in its early stages, facing many adoption challenges, including bridging the various disciplines needed to prove that AI improves healthcare. Through this RPA, the University of Amsterdam (UvA) will develop a hub for interdisciplinary research on AI-driven health decision-making by joining expertise across UvA faculties and embedding this expertise long-term in their research profiles.
Within this RPA, we will set up a novel infrastructure and define procedures for data collection, develop AI technology to support real clinical decision-making, perform validation, and initiate valorization. This will be done through the development of AI-driven decision-making in different fields of medicine.
In developing these applications, we take the opportunity to integrate ethical and legal research at every stage of development and to expand the research use cases beyond the science and health faculties. When health decision-making is supported by machines and data, this affects the autonomy of health professionals and patients. The legal relationship between patient and medical professional builds on protecting this mutual autonomy, which is key to their relationship of trust. We will investigate how this legal-ethical relationship is affected by introducing AI-driven health decision-making, and vice versa.
The RPA will provide a basis for initiating further collaborative research within the scope of UvA's AI Technology for People across all faculties, for extending and initiating public-private partnerships, and for attracting talented researchers. Moreover, the RPA will help the UvA become a significant force in the broader plans of the Amsterdam metropolitan area to lead in healthcare and AI. The program will allow the UvA to take the lead in shaping the future of AI implementation and the development of next-generation AI technology.
VHALID - To improve blood product therapy at the bedside, a complex laboratory test, the viscoelastic hemostatic assay (VHA), has been introduced. This assay not only replaces separate, more laborious tests but also saves significant time between blood draw and result. One complication of VHA-derived parameters is their interpretation. We aim to create a model that can interpret the collected VHA parameters and, in combination with other clinical parameters, guide the anesthesiologist and intensivist in the choice and amount of coagulation agent. At the same time, we will conduct qualitative interviews with health professionals to understand the professional medical standard and the professional autonomy involved in health decision-making in this specific case, before AI-driven decision support is implemented.
Currently, no AI model exists for optimizing transfusion during major surgery. We will build on our previous work on transfusion, extending its application. Hence, in this study, we will create a machine-learning-derived predictive model for the amount and kind of coagulation agent. The results of the VHA will be parametrized to extract a multitude of parameters. Probabilistic approaches such as multinomial logistic regression will be followed by more complex AI-based methods to assess prediction accuracy. The most relevant parameters will be selected for training and fed to learning algorithms of increasing complexity. The AI challenges comprise robust decision-making with missing data and providing explainability of the automatic decisions. Besides developing the model, we will perform a validation and implementation study in which we hypothesize reductions in blood loss, blood product administration, mortality, and hospital costs for patients undergoing major surgery with this model compared to standard care. As part of this validation study, we will re-interview the health professionals involved to understand if and to what extent the use of AI in health decision-making impacts professional autonomy.
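The baseline approach described above can be illustrated with a minimal sketch: a multinomial logistic regression pipeline that also handles missing measurements, one of the challenges the proposal names. All feature names, class labels, and data below are hypothetical placeholders (synthetic data, not real VHA parameters), and median imputation is only one simple choice for the missing-data problem.

```python
# Minimal sketch of the proposed baseline: multinomial logistic regression
# with simple missing-data handling. Feature names, the three-class target,
# and the data are illustrative assumptions, not VHA data.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical VHA-style parameters, e.g. clotting time, clot firmness, lysis index.
X = rng.normal(size=(n, 3))
# Synthetic 3-class target standing in for "choice of coagulation agent".
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int) + (X[:, 2] > 1).astype(int)
# Simulate missing measurements (a stated challenge in the proposal).
mask = rng.random(X.shape) < 0.1
X[mask] = np.nan

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # simple missing-data handling
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),    # multinomial baseline
])
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
# Per-class coefficients give a first, simple form of explainability.
coefs = model.named_steps["clf"].coef_
print(f"held-out accuracy: {acc:.2f}, coefficient matrix shape: {coefs.shape}")
```

In this sketch, the interpretable coefficient matrix is the reason a probabilistic baseline comes first: it provides a transparent reference point against which the more complex, less interpretable learners can later be compared.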