Actable AI Ltd
3 Projects
Project (2023–2025)
Partners: Actable AI Ltd, KCL, Amnesty International
Funder: UK Research and Innovation
Project Code: EP/X019063/1
Funder Contribution: 202,209 GBP

Deep learning (DL) based Natural Language Processing (NLP) technologies have attracted significant interest in recent years. Current state-of-the-art language models, i.e., transformer-based language models, typically assume that the representation of a given word can be captured by interpolation of its related context within a convex hull. However, it has recently been shown that in high-dimensional spaces, interpolation almost surely never occurs, regardless of the underlying intrinsic dimension of the data manifold. The representations generated by such transformer-based language models converge into a dense cone-like hyperspace that is often discontinuous, with many non-adjacent clusters. To overcome this limitation of current DL-based NLP models, this project aims to deploy the Lebesgue integral, which can be defined as an ensemble of integrals over partitions (i.e., discontinuous feature clusters), to approximate the posterior distributions of clusters given input word features on finite measurable sets, by automatically identifying the boundaries of such discontinuous sets; this in turn could help generate better interpretations and quantify uncertainty. Under the proposed Lebesgue-integral-based approximation, input text is characterised by two properties: an indicator vector encoding its membership in clusters (i.e., measurable sets), and a continuous feature representation that better captures its semantic meaning for downstream tasks. This not only allows for a more faithful approximation of the countable discontinuities commonly observed in distributions of input text in NLP, but also enables learning text representations that are better understood by humans.
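The two-part representation described above can be illustrated with a small sketch. The following Python snippet is hypothetical, not the project's method: KMeans stands in for the project's automatic identification of measurable-set boundaries (which this listing does not specify), and all names, dimensions, and parameters are illustrative.

# Hypothetical sketch: pair an indicator vector over discovered clusters
# (standing in for membership in measurable sets) with a continuous
# feature vector. KMeans is only an illustrative way to partition the
# embedding space; it is not the boundary-identification method the
# project proposes.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for transformer-generated word features (1000 words, 64 dims).
embeddings = rng.normal(size=(1000, 64))

k = 8  # assumed number of partitions (measurable sets)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)

def represent(x: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Return (cluster-membership indicator, continuous feature) for one embedding."""
    cluster = km.predict(x[None, :])[0]
    indicator = np.eye(k)[cluster]                  # membership in a measurable set
    continuous = x - km.cluster_centers_[cluster]   # within-cluster residual
    return indicator, continuous

indicator, continuous = represent(embeddings[0])

Here the continuous part is taken as the within-cluster residual, one plausible reading of pairing discrete set membership with a fine-grained semantic feature.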
Project (2022–2025)
Partners: University of Edinburgh, Google UK, KCL, ASTRAZENECA UK LIMITED, AstraZeneca plc, Actable AI Ltd
Funder: UK Research and Innovation
Project Code: EP/V020579/2
Funder Contribution: 887,437 GBP

Natural language understanding (NLU) aims to allow computers to understand text automatically. NLU may seem easy to humans, but it is extremely difficult for computers because of the variety, ambiguity, subtlety, and expressiveness of human languages. Recent efforts in NLU have largely been exemplified in tasks such as natural language inference, reading comprehension, and question answering. A common practice is to pre-train a language model such as BERT on large corpora to learn word representations and then fine-tune it on task-specific data. Although BERT and its successors have achieved state-of-the-art performance in many NLP tasks, it has been found that pre-trained language models mostly reason about the surface form of entity names and fail to capture rich factual knowledge. Moreover, NLU models built on such pre-trained language models are susceptible to adversarial attacks: even a small perturbation of an input (e.g., paraphrased questions and/or answers in QA tasks) can cause a dramatic decrease in model performance, showing that such models largely rely on shallow cues.

In human reading, successful comprehension depends on the construction of an event structure that represents what is happening in the text, often referred to as the situation model in cognitive psychology. The situation model also involves the integration of prior knowledge with information presented in the text for reasoning and inference. Fine-tuning pre-trained language models for reading comprehension does not help build such effective cognitive models of text, and comprehension suffers as a result.

In this fellowship, I aim to develop a knowledge-aware and event-centric framework for natural language understanding, in which event representations are learned from text with the incorporation of prior background and common-sense knowledge; event graphs are built on the fly as reading progresses; and the comprehension model self-evolves to understand new information. I will primarily focus on reading comprehension, and my goal is to enable computers to solve a variety of cognitive tasks that mimic human-like cognitive capabilities, bringing us a step closer to human-like intelligence.
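The "pre-train then fine-tune" practice this abstract refers to can be sketched concretely. The following is a minimal example using the Hugging Face transformers library; it is not part of the project itself, and the task, labels, and hyperparameters are illustrative.

# A minimal sketch of fine-tuning a pre-trained BERT model on a
# task-specific dataset (here, a toy natural language inference pair
# classification). Model choice, labels, and learning rate are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # e.g. entailment vs. non-entailment
)

# Toy task-specific data (premise/hypothesis pairs).
premises = ["A man is playing a guitar.", "Two dogs run in a park."]
hypotheses = ["Someone is making music.", "The animals are sleeping."]
labels = torch.tensor([0, 1])  # 0 = entailment, 1 = not entailment

batch = tokenizer(premises, hypotheses, padding=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One gradient step: cross-entropy loss on the classification head.
model.train()
out = model(**batch, labels=labels)
out.loss.backward()
optimizer.step()

As the abstract notes, this recipe alone tends to exploit surface cues; the fellowship's contribution is what it layers on top of such a model, not the fine-tuning step itself.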
Project (2021–2022)
Partners: University of Edinburgh, Google UK, AstraZeneca plc, ASTRAZENECA UK LIMITED, University of Warwick, Actable AI Ltd
Funder: UK Research and Innovation
Project Code: EP/V020579/1
Funder Contribution: 1,269,620 GBP

Natural language understanding (NLU) aims to allow computers to understand text automatically. NLU may seem easy to humans, but it is extremely difficult for computers because of the variety, ambiguity, subtlety, and expressiveness of human languages. Recent efforts in NLU have largely been exemplified in tasks such as natural language inference, reading comprehension, and question answering. A common practice is to pre-train a language model such as BERT on large corpora to learn word representations and then fine-tune it on task-specific data. Although BERT and its successors have achieved state-of-the-art performance in many NLP tasks, it has been found that pre-trained language models mostly reason about the surface form of entity names and fail to capture rich factual knowledge. Moreover, NLU models built on such pre-trained language models are susceptible to adversarial attacks: even a small perturbation of an input (e.g., paraphrased questions and/or answers in QA tasks) can cause a dramatic decrease in model performance, showing that such models largely rely on shallow cues.

In human reading, successful comprehension depends on the construction of an event structure that represents what is happening in the text, often referred to as the situation model in cognitive psychology. The situation model also involves the integration of prior knowledge with information presented in the text for reasoning and inference. Fine-tuning pre-trained language models for reading comprehension does not help build such effective cognitive models of text, and comprehension suffers as a result.

In this fellowship, I aim to develop a knowledge-aware and event-centric framework for natural language understanding, in which event representations are learned from text with the incorporation of prior background and common-sense knowledge; event graphs are built on the fly as reading progresses; and the comprehension model self-evolves to understand new information. I will primarily focus on reading comprehension, and my goal is to enable computers to solve a variety of cognitive tasks that mimic human-like cognitive capabilities, bringing us a step closer to human-like intelligence.