Fighting COVID-19 and Fake Content with Researcher.AI
Scientists and researchers around the world have united in the fight against COVID-19 by publishing hundreds of research papers every day about their findings in both peer-reviewed journals and pre-print servers. It is truly an unprecedented moment in the history of academic research. However, these daily publications are scattered across several sources and, unfortunately, no single individual can go through tens of thousands of full-text documents efficiently and rapidly enough to deal with this global emergency.
More critically, out of these thousands of papers, only a few may hold the key to a cure, provide new molecular insights for a COVID-19 vaccine, or suggest a new way to treat patients and prevent the spread of the virus. Such papers could be missed unintentionally or discovered much later than necessary.
For this reason, our team at Nebuli is setting up a cognitive deep data mining project, called Researcher.AI, applying our robotic co-worker model to help researchers worldwide read through thousands of research papers within seconds, instead of weeks.
Let’s discuss the details of what we aim to achieve and how you, our community, could help and get involved.
Due to the information overload described above, there is serious confusion about what the virus actually does to people, not to mention the fake content spreading across social media networks.
But we must also be critical of the general COVID-19 coverage by most media outlets. It has become nearly impossible to see the real world through the whirlpool of poorly framed stories, incomplete analysis, self-serving content, fabricated data and poor accessibility to specialist information. This is a direct result of poor data validation models.*
The World Economic Forum highlighted this rapid rise of digital misinformation in 2013, describing it as "one of the main threats to our society", one that affects the quality and validity of data research.**
Sadly, we are seeing this very reality unfolding in front of us now with the COVID-19 outbreak. The BBC highlighted on March 19th, 2020 the enormity of misleading information circulating online about coronavirus, from dodgy health tips to speculation about government plans.
As former biomedical scientists, Nebuli's founders experienced this information overload problem directly: it slowed their work and kept them from staying informed quickly enough to meet their deadlines. They realised at the time that if they were facing this issue, millions of other professionals around the world would be in a similar situation.
Thus, back in 2012, the founders decided to build their own AI-powered solution to this problem, applying mathematical models that segment research papers and validate the efficacy of their content. This solution helped them and their customers discover knowledge in minutes instead of months, and led to the incorporation of Nebuli's augmented intelligence models.
Hence, at Nebuli, we knew we needed to find a way to plug this knowledge discovery gap and enable researchers, government agencies and the general public to make faster progress in understanding COVID-19 through scientifically validated research, offering detailed evaluation quickly and effectively.
With Researcher.AI, we are refactoring the original algorithms used by the founders, alongside similar projects they have worked on, to focus exclusively on information and research papers related to COVID-19 via a simple, integrable platform.
The current outbreak has revealed how governments and the scientific community were caught entirely off-guard and are still struggling to understand how the virus affects people. They also struggle to work through, quickly and adequately, the thousands of previous and newly published research papers containing historical and up-to-date data and findings on the virus.
This is despite warnings that were available well in advance. On top of this, the spread of misinformation across social media about the virus and how to "cure" it exacerbates the problem even further, putting people's lives at risk, pushing healthcare systems to the brink and challenging government policy-making.
We can play a key role with Researcher.AI in dealing directly with the following critical problems:
- Connect as many research paper ecosystems and research data sources as possible to Nebuli's smart indexing process, and generate critical trend analysis based on specific parameters supplied by researchers and government agencies.
- Fight Misinformation by providing practical, scientifically-backed validation of content shared or seen on social media using the same indexed research papers and data sources above.
Our Current Augmented Intelligence Model for Deep Data Mining and Segmentation of Research Papers
Nebuli's core augmented intelligence model for deep data mining and segmentation of research papers generates what we describe as a Data-Driven World (DDW) for each data collection related to specific COVID-19 trends, such as patient groups suffering from distinct symptoms. Each DDW forms what we call a Memory Block. The key objectives of this indexing and visualisation process are the following:
- Creation of several DDWs from thousands of data collections.
- Cognitive Search of specific data elements within the DDW.
- Data clustering and segmentation of specific data parameters defined by individual researchers within each DDW.
- Creation of data maps of each DDW (visualisation) using self-organising map (SOM) models and data vectors that can be displayed on Researcher.AI's UI and loaded into an organisation's internal data visualisation software via API libraries.
- Creation of an isolated system with its own database for each DDW that allows for more in-depth analysis of targeted traits of the virus, particularly when new traits are discovered and reported in various journals.
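To make the SOM-based clustering step above concrete, here is a minimal, illustrative sketch of how document vectors could be mapped onto a small self-organising grid, so that similar papers land in neighbouring cells. This is not Nebuli's production algorithm; the grid size, decay schedule and toy vectors are assumptions for illustration only.

```python
import math
import random

def train_som(vectors, grid_w=4, grid_h=4, epochs=200, lr0=0.5, seed=42):
    """Train a tiny self-organising map on document vectors.

    Returns a grid of weight vectors; each document can then be
    assigned to its best-matching unit (BMU), so similar documents
    cluster into neighbouring cells.
    """
    rng = random.Random(seed)
    dim = len(vectors[0])
    # Initialise each grid cell with a random weight vector.
    grid = {(x, y): [rng.random() for _ in range(dim)]
            for x in range(grid_w) for y in range(grid_h)}
    radius0 = max(grid_w, grid_h) / 2.0
    for epoch in range(epochs):
        frac = epoch / epochs
        lr = lr0 * (1.0 - frac)                  # decaying learning rate
        radius = radius0 * (1.0 - frac) + 1e-9   # shrinking neighbourhood
        v = rng.choice(vectors)
        # Find the best-matching unit (closest cell to the sample).
        bmu = min(grid, key=lambda c: sum((a - b) ** 2
                                          for a, b in zip(grid[c], v)))
        # Pull the BMU and its neighbours towards the sample.
        for cell, w in grid.items():
            d = math.dist(cell, bmu)
            if d <= radius:
                influence = math.exp(-(d * d) / (2 * radius * radius))
                grid[cell] = [wi + lr * influence * (vi - wi)
                              for wi, vi in zip(w, v)]
    return grid

def best_matching_unit(grid, v):
    """Return the grid cell whose weight vector is closest to v."""
    return min(grid, key=lambda c: sum((a - b) ** 2
                                       for a, b in zip(grid[c], v)))
```

In practice the input vectors would be high-dimensional embeddings of paper abstracts rather than the 2D toy vectors used here, and the dense regions of the trained grid correspond to the condensed clusters described below.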
Below are sample images of Nebuli’s Memory Blocks generated through our work with the University of Leicester’s (UoL) Library. The aim here is to visualise the hidden world of the UoL’s internal research papers, to help them facilitate new interdisciplinary and interdepartmental R&D collaborations based on specific parameters supplied by their team:
The above images show 2D and 3D SOM-based visualisations of datasets segmented according to specific parameters set by the UoL library team. The areas where the dots condense most densely are where the most relevant information is likely to be found. In the case of COVID-19, such segmentation highlights the areas with the most scientifically supported trends in the virus's behaviour and patient treatment, and pinpoints where the most useful and relevant information lies within the entire body of research. For example, it can highlight the rate of specific organ failures caused by the virus, or a specific group of drugs with the potential to beat the virus, as reported by the indexed research papers.
Typically, researchers would spend days or weeks collecting this information manually from research papers. With Researcher.AI, they could do it automatically within minutes.
From what we are witnessing today with the global response to the COVID-19 outbreak, there is an urgent need for intervention in both science-led research and public communities in order to achieve an effective solution. No other method tackles the top-down and bottom-up issues highlighted above.
Hence, our team is looking at this from a systematic point of view and is designing a solution that surfaces the right information and drives the behaviours required to effect change (i.e. by understanding the challenge from both a technical and a behavioural standpoint). The two key solutions are:
- Researcher.AI platform (website + API) for Researchers and Government Agencies:
Specialist algorithms designed specifically to mine research paper abstracts and full-text to focus on all historical and newly published research outcomes and critical data related to Coronavirus and COVID-19 pandemic, allowing researchers and government agencies to easily and quickly monitor trends based on their specific parameters. We believe this model can help with planning and monitoring current and future pandemics well in advance.
- Researcher.AI for the General Public (mobile/social media apps):
The same algorithms can be applied through a dedicated API gateway that enables developers to build mobile and social media apps (e.g. a Facebook app) that help users validate COVID-19 stories and claims against our indexed research papers. An important part of this public API is that when users read something online or share it on social media, our solution could show exactly how well various claims are backed up by real science (with citations). An embedded social media integration would allow users to verify feeds or texts from WhatsApp, enabling them to dismiss incorrect or misleading claims. The API library could also recommend links to more accurate sources on the fly, so an effective, community-led fact-checking process requires little extra effort. This should put fake news into isolation!
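As a rough illustration of the claim-validation idea behind this public API, the sketch below scores a claim against a set of indexed abstracts and returns supporting citations. The tokeniser, bag-of-words cosine similarity and threshold are placeholder assumptions for demonstration; the actual platform would rely on far richer indexing and citation metadata.

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase, split into alphabetic tokens, drop very short words."""
    return [t for t in re.findall(r"[a-z]+", text.lower()) if len(t) > 2]

def cosine(c1, c2):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(c1) & set(c2)
    dot = sum(c1[t] * c2[t] for t in common)
    n1 = sum(v * v for v in c1.values()) ** 0.5
    n2 = sum(v * v for v in c2.values()) ** 0.5
    return dot / (n1 * n2) if n1 and n2 else 0.0

def validate_claim(claim, indexed_abstracts, threshold=0.2):
    """Score a social-media claim against indexed paper abstracts.

    Returns (paper_id, score) pairs above the threshold, best first,
    or an empty list when no indexed research backs the claim.
    """
    claim_vec = Counter(tokenize(claim))
    scored = []
    for paper_id, abstract in indexed_abstracts.items():
        score = cosine(claim_vec, Counter(tokenize(abstract)))
        if score >= threshold:
            scored.append((paper_id, round(score, 3)))
    return sorted(scored, key=lambda p: p[1], reverse=True)
```

A claim with no support in the index simply comes back with an empty result, which is the signal an app would use to flag the content as unverified.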
While we are focusing on the imminent COVID-19 crisis, Researcher.AI is designed to support researchers in monitoring and dealing with future outbreaks and other unforeseen emergencies, such as political instabilities and environmental catastrophes. COVID-19 will not be the last outbreak. Hence, Researcher.AI can be used within government and academic communities to observe emerging epidemiological trends, supporting their efforts to prepare and plan well in advance, in contrast to what we have seen with COVID-19 to date. Moreover, the system's API integration with social media platforms could help those platforms quickly identify content that has not been scientifically verified, significantly reducing the spread of fake content.
IMPORTANT UPDATE – February 21st, 2023
We have merged the Researcher.AI project with AIQ's specialist, cited Large Language Models and the AIQ.org project. Please refer to this announcement for further details on registering your interest.
* Combating Fake News: An Agenda for Research and Action (2017): https://shorensteincenter.org/combating-fake-news-agenda-for-research/
** Digital Wildfires in a Hyperconnected World (2013): https://reports.weforum.org/global-risks-2013/risk-case-1/digital-wildfires-in-a-hyperconnected-world/