Healthcare Analytics

Volume 2, November 2022, 100078

A review on Natural Language Processing Models for COVID-19 research

https://doi.org/10.1016/j.health.2022.100078
Open access under a Creative Commons license

Highlights

  • This paper primarily reviews transformer-based NLP models for COVID-19 research.

  • The performance of these models is compared using the BLURB benchmarking framework.

  • The use of these models for sentiment analysis relating to vaccine hesitancy is reviewed.

  • Open challenges relating to the optimisation of these ML models are discussed.

Abstract

This survey paper reviews natural language processing (NLP) models and their use in COVID-19 research in two main areas. Firstly, a range of transformer-based biomedical pretrained language models (T-BPLMs) are evaluated using the BLURB benchmark. Secondly, models used in sentiment analysis of COVID-19 vaccination are evaluated. We filtered literature curated from repositories such as PubMed and Scopus and reviewed 27 papers. When evaluated on the BLURB benchmark, the novel T-BPLM BioLinkBERT achieves state-of-the-art results by incorporating document-link (hyperlink) knowledge into its pretraining. Sentiment analysis of COVID-19 vaccination discourse, collected through various Twitter API tools, shows that public sentiment towards vaccination is mostly positive. Finally, we outline limitations of current models and potential solutions to help the research community improve the models used for NLP tasks.
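The sentiment-analysis workflow summarised above can be illustrated with a minimal sketch: collect vaccine-related posts and score them with an off-the-shelf pretrained transformer classifier via the Hugging Face Transformers library. The example posts are invented and the default checkpoint is an illustrative assumption, not the specific data or models used in the reviewed studies.

from transformers import pipeline

# Illustrative only: the default checkpoint for this pipeline is a general
# English sentiment model (DistilBERT fine-tuned on SST-2), not a
# COVID-19-specific model.
classifier = pipeline("sentiment-analysis")

# Invented example posts; the reviewed studies collect real posts through
# the Twitter API before classification.
posts = [
    "Just got my second dose, feeling grateful and relieved!",
    "Still unsure about the vaccine, too many mixed messages.",
]

for post, result in zip(posts, classifier(posts)):
    print(f"{result['label']:>8}  ({result['score']:.2f})  {post}")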

Keywords

Natural Language Processing
COVID-19
Machine learning
Transformer models
Sentiment analysis

Acknowledgement

Prof Chang’s work is partly supported by VC Research, UK (VCR 0000183).