Publication
Few-Shot Cross-lingual Transfer for Coarse-grained De-identification of Code-Mixed Clinical Texts
Saadullah Amin; Noon Pokaratsiri Goldstein; Morgan Kelly Wixted; Alejandro García-Rudolph; Catalina Martínez-Costa; Günter Neumann
In: Proceedings of the 21st Workshop on Biomedical Language Processing, held at the Annual Meeting of the Association for Computational Linguistics (ACL 2022), May 22-27, 2022, Dublin, Ireland, pages 200-211, Association for Computational Linguistics.
Abstract
Despite the advances in digital healthcare systems offering curated structured knowledge, much of the critical information still lies in large volumes of unlabeled and unstructured clinical texts. These texts, which often contain protected health information (PHI), are exposed to information extraction tools for downstream applications, risking patient identification. Existing works in de-identification rely on large-scale annotated corpora in English, which are often not suitable in real-world multilingual settings. Pre-trained language models (LMs) have shown great potential for cross-lingual transfer in low-resource settings. In this work, we empirically show the few-shot cross-lingual transfer property of LMs for named entity recognition (NER) and apply it to solve a low-resource, real-world challenge: de-identification of code-mixed (Spanish-Catalan) clinical notes in the stroke domain. We annotate a gold evaluation dataset to assess performance in the few-shot setting, where only a few hundred labeled examples are used for training. Our model improves the zero-shot F1-score from 73.7% to 91.2% on the gold evaluation set when adapting Multilingual BERT (mBERT) (Devlin et al., 2019) from the MEDDOCAN (Marimon et al., 2019) corpus with our few-shot cross-lingual target corpus. When generalized to an out-of-sample test set, the best model achieves a human-evaluation F1-score of 97.2%.
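The recipe described in the abstract, fine-tuning a multilingual LM on a large annotated source corpus and then continuing training on a few hundred labeled target examples, can be illustrated with a minimal sketch. The snippet below assumes the HuggingFace Transformers and Datasets libraries; the label set, toy sentences, and hyperparameters are illustrative placeholders, not the authors' released code or data.

```python
# Minimal sketch (not the authors' code): few-shot cross-lingual adaptation of
# multilingual BERT for token-level de-identification (NER).
from datasets import Dataset
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          DataCollatorForTokenClassification,
                          Trainer, TrainingArguments)

labels = ["O", "B-NAME", "I-NAME", "B-DATE", "I-DATE"]  # coarse PHI tags (assumed)
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={l: i for i, l in enumerate(labels)},
)

def encode(example):
    # Tokenize pre-split words and align word-level tags to sub-word tokens,
    # masking special tokens and non-initial sub-tokens with -100.
    enc = tokenizer(example["tokens"], is_split_into_words=True, truncation=True)
    word_ids = enc.word_ids()
    enc["labels"] = [
        -100 if w is None or (i > 0 and w == word_ids[i - 1])
        else example["ner_tags"][w]
        for i, w in enumerate(word_ids)
    ]
    return enc

# Toy stand-ins for (1) a large annotated Spanish source corpus (MEDDOCAN-style)
# and (2) a few hundred annotated code-mixed Spanish-Catalan target notes.
source = Dataset.from_dict({
    "tokens": [["Paciente", "Juan", "García", ",", "ingresado", "el", "03/05/2020"]],
    "ner_tags": [[0, 1, 2, 0, 0, 0, 3]],
}).map(encode)
target_fewshot = Dataset.from_dict({
    "tokens": [["El", "pacient", "Pere", "Soler", "va", "ingressar", "ahir"]],
    "ner_tags": [[0, 0, 1, 2, 0, 0, 0]],
}).map(encode)

collator = DataCollatorForTokenClassification(tokenizer)
for stage, data in [("source", source), ("target-fewshot", target_fewshot)]:
    # Stage 1: fine-tune on the source corpus; Stage 2: continue training the
    # same model on the few labeled target examples (few-shot transfer).
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir=f"deid-{stage}", num_train_epochs=3,
                               per_device_train_batch_size=8, learning_rate=3e-5),
        train_dataset=data,
        data_collator=collator,
    )
    trainer.train()
```

Evaluating the model after stage 1 alone corresponds to the zero-shot setting reported in the abstract; evaluating after stage 2 corresponds to the few-shot cross-lingual transfer setting.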
Projects
PRECISE4Q - Personalised Medicine by Predictive Modeling in Stroke for better Quality of Life
CORA4NLP - Co(n)textual Reasoning and Adaptation for Natural Language Processing