Call for a Post-doctoral Research Fellow in “Neural-cognitive Explainable Artificial Intelligence to enhance diagnostic imaging”



    We are inviting applications for a 24-month Post-doctoral Research Fellow position as part of the NeuroInsight Marie Skłodowska-Curie COFUND Action for Postdoctoral Fellowships.

    NeuroInsight is a collaboration between the FutureNeuro SFI Research Centre for Chronic and Rare Neurological Diseases and the Insight SFI Research Centre for Data Analytics.

    My team is interested in validating the potential of neuro-symbolic artificial intelligence for developing more robust, transparent and explainable approaches to support clinical diagnostics from medical imaging.

    We believe a holistic, human-centred combination of deep learning with knowledge representation and reasoning would be a game changer for the widespread clinical adoption of deep learning in diagnostic imaging. Specifically, we seek to address two key challenges. The first is to improve the transparency and explainability of deep learning for medical image analysis, enabling debugging and debiasing. The second relates to the scarcity of image data available to effectively train a deep learning model to support diagnostics for rare and less-studied conditions; here, knowledge-driven approaches can compensate for the limited amount of training image data by enabling the combination of different types of information and knowledge from other sources such as, but not limited to, medical records, published research papers, knowledge bases and clinical studies.

    Our team has been working on mapping low-level features from trained deep neural models into concepts and relations. This has made it possible to develop new methods to analyse and interpret deep learning models through graph analysis techniques, to generate factual and counterfactual explanations, and to gain a deeper understanding of the origins of some types of classification errors. Several challenges remain in generalising this approach to architectures beyond CNNs and to tasks beyond classification. Applying it to diagnostic imaging is also an open challenge, as expert validation of concepts and relationships and the availability of high-quality domain-specific knowledge bases are paramount. A purely illustrative sketch of the general idea is given below.
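
    The sketch below is illustrative only and is not our actual pipeline: it shows one way low-level features from a trained CNN could be mapped to named concepts via linear probes and then used to phrase a simple counterfactual explanation. The concept names, probe vectors and threshold are hypothetical placeholders.

    # Illustrative only: hypothetical concept probes in a CNN's feature space,
    # used to detect concepts and phrase a simple counterfactual explanation.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical linear probes: one direction per clinical concept in the
    # penultimate feature space of a trained CNN (e.g. learned CAV-style).
    concept_vectors = {
        "lesion_present": rng.normal(size=512),
        "midline_shift": rng.normal(size=512),
        "imaging_artifact": rng.normal(size=512),
    }

    def detect_concepts(features, threshold=0.0):
        """Score each concept by projecting the feature vector onto its probe."""
        scores = {name: float(features @ v) for name, v in concept_vectors.items()}
        return {name: s for name, s in scores.items() if s > threshold}

    def counterfactual(active, prediction):
        """Name the weakest active concept as a candidate counterfactual edit."""
        if not active:
            return f"No concepts detected; prediction '{prediction}' is unexplained."
        weakest = min(active, key=active.get)
        return (f"Prediction '{prediction}' relies on {sorted(active)}; "
                f"removing '{weakest}' is the smallest single change that might flip it.")

    features = rng.normal(size=512)  # stand-in for a real CNN embedding
    active = detect_concepts(features)
    print(active)
    print(counterfactual(active, prediction="abnormal"))

    In practice the probes would be learned from expert-annotated examples rather than random vectors, and the detected concepts and their relations would be assembled into a graph for the kind of analysis described above.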

    More investigation is required in areas such as validation of the extracted structured knowledge against a ground truth, characterisation and mitigation of data bias vs. model bias vs. human bias, and the systematic combination of deductive/inductive reasoning with neural learning for a truly explainable and human-centred approach to AI for diagnostic imaging.

    We are seeking a postdoctoral researcher interested in working in this area to validate the applicability of knowledge-driven neural-cognitive approaches to diagnostic imaging. The specific topic can vary with the interests of the candidate: possibilities include multimodal (image and text) explanation generation, mapping of cognitive processes and symbolic knowledge into neural models, semantic concept detection and probabilistic rule extraction from deep representations in the clinical domain, and human-centred design of assessment and validation metrics.

    We have clinical collaborators and research partners who can help with understanding and contextualising clinical knowledge in the domain of interest, provide relevant medical image datasets, and co-design human-centred validation approaches.

    If you are interested in applying, please contact me for further information and for help in preparing your fellowship application.

