
NTNU - Norwegian University of Science and Technology

10 Projects, page 1 of 2
  • Funder: UK Research and Innovation Project Code: BB/H019294/1
    Funder Contribution: 320,372 GBP

    There are several well-known examples where the sequencing of polymers has provided immense benefits. Today entire DNA genomes are being sequenced, bringing with them the prospect of a revolution in medicine and biology. Similarly, a new protein can have its polypeptide sequence read, with the result that the molecular detail of its structure can be determined and the detailed mechanism of its biological function revealed. The situation is very different for the third major class of biopolymers, the glycans (sugar-containing molecules - polysaccharides and glycosylated polymers in general), despite increasing recognition of their critical role in the biology of all life and their industrial importance. On the one hand, specific short sequences may be defined with submolecular precision, but on the other, beyond a dozen or so monomers the relationship between these sequences is lost and we must rely on bulk methods to describe the material. This often leaves those who wish to understand the multiple, critical roles played by polysaccharides with little of the molecular scale detail now taken for granted by molecular biologists. Examples of these beneficiaries include the food and pharmaceutical industries looking to develop new food and drug delivery formulations, medical researchers hoping to appreciate the role of glycopolymers in human health, or botanists and microbiologists studying the function of plant and bacterial cell walls in growth or pathogenic activity.

    This project exploits the recent development of a force-measuring microscope capable of, for the first time, mapping the distribution of defined oligomer (short polymer) sequences in single glycan polymers. It will do this by exploiting the phenomenon of rotaxanes - molecular rings threaded over a polymer chain. In this case, an atomic force microscope (AFM) probe picks up the ring (a cyclodextrin molecule) from its 'base' on a suitable polymer and slides it along and on to the glycan chain of interest, which is coupled to the rotaxane. Molecules known to recognise and bind to well-defined sequences within the polymer are allowed to interact with the polymer chain and form complexes; the ring is then passed along the chain and when it encounters a complex will 'unzip' it, removing the bound molecule. The mapping information comes from the magnitude of the interaction between the ring and each bound complex it encounters, along with the position along the chain at which the interaction occurs. By collecting this information from a large sample of individual polymers, a map of the distribution patterns of the known sequences is revealed. We have shown that this appealingly simple mechanical concept actually works for simple model polymers; now this project is designed to apply this entirely new sequencing tool to a medically and commercially highly significant glycan, alginate.

    Alginate is produced by seaweeds and also by bacteria, including Pseudomonas aeruginosa when it colonises the lung in cases of cystic fibrosis. Alginate produced by the bacteria in the lung forms a gel to protect the bacteria from immune responses and attacks but also contributes to obstructions in the airways of the lung which may be fatal. Median life expectancy of cystic fibrosis sufferers is 35 years. Alginate gels form in the presence of calcium and other divalent cations due to the formation of so-called 'egg box' junction zones between aligned pairs of guluronic acid (oligoG) sequences. The minimum length of oligoG required to form a stable junction zone is not known, and thus this project aims to determine both this minimum length and its distribution within well-characterised samples of alginate polymers.
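
    A minimal sketch of the mapping step described above, assuming hypothetical per-molecule event data: each 'unzipping' event is recorded as a (position along the chain, rupture force) pair, and events pooled across many pulled molecules are binned into a positional map. The function name, units, bin width and force threshold are illustrative assumptions, not the project's actual analysis.

    # Hypothetical post-processing sketch: aggregating single-molecule AFM
    # "unzipping" events into a positional map of bound-sequence locations.
    # The event data (contour position in nm, rupture force in pN) and the
    # binning choices are illustrative assumptions, not project data.
    import numpy as np

    def sequence_map(events, chain_length_nm, bin_nm=2.0, force_threshold_pN=20.0):
        """Histogram event positions along the chain, keeping only events whose
        rupture force exceeds a threshold taken to indicate a specific complex."""
        events = np.asarray(events, dtype=float)          # rows: (position_nm, force_pN)
        specific = events[events[:, 1] >= force_threshold_pN]
        bins = np.arange(0.0, chain_length_nm + bin_nm, bin_nm)
        counts, edges = np.histogram(specific[:, 0], bins=bins)
        return edges[:-1], counts / max(len(specific), 1)  # relative frequency per bin

    # Example with made-up events from several pulled molecules pooled together.
    pooled = [(12.4, 35.0), (13.1, 41.0), (48.7, 18.0), (50.2, 33.5), (51.0, 37.2)]
    positions, freq = sequence_map(pooled, chain_length_nm=100.0)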

  • Funder: UK Research and Innovation Project Code: NE/R005133/1
    Funder Contribution: 32,110 GBP

    Ecological models are becoming larger and more complicated, and are being used for an increasingly wide range of applications, from describing trends and mapping distributions to understanding mechanistic relationships and predicting the impact of future scenarios. In response, there has been a huge growth in statistical methods for large-scale ecological models. However, most such methods do not account for the fact that ecological data are inherently heterogeneous, and large datasets typically contain many forms of bias. Recently, a set of hierarchical Bayesian models (HBMs) has emerged as a promising way of dealing with biased data, particularly for occurrence records and other unstructured data. Many millions of unstructured occurrence records exist, so the potential of these new methods is enormous.

    Not all data contain biases, though. A minority of biodiversity data is highly structured in terms of sample locations, fixed protocols and regular sampling. Ideally, we would like to retain this information in our models but combine it with the much larger sample sizes of unstructured datasets. Integrated models provide a way to do this. They are a subclass of HBM in which data heterogeneity is modelled explicitly, by treating datasets with different observation processes as independent realisations of the same underlying state. For example, casual observations on GBIF and the Breeding Bird Survey both contain information about whether the population of a particular species was extant at a particular point in space and time. At present, these integrated models are the preserve of highly competent statisticians. They are hard to specify and difficult to fit and diagnose. One goal of this partnership is to build an extensible framework for fitting integrated models that will make them accessible to a broad community of ecological modellers. This framework, in the form of open source tools, will make it easier for ecologists to handle biased data when addressing large-scale questions about biodiversity.

    Although attractive from a conceptual standpoint, it is unclear whether the sophistication of integrated models delivers real benefits over simpler ones. In particular, there is an urgent need for general principles about how to proceed when both structured and unstructured data sources are available. Critical questions include:
    Q1. When and how should we combine datasets with different properties?
    Q2. Under what circumstances is simple aggregation (i.e. ignoring the different observation processes) better than integration?
    Q3. If we suspect the data contain biases, can we detect them and handle them adequately?
    Q4. What are the most appropriate metrics for information content and model fit?

    These general questions lie at the intersection of the research interests of PI Isaac, Co-I Henrys and Project Partner O'Hara. Each has made some progress towards addressing specific aspects of these questions. Working in partnership would add significant value to each, by taking existing research beyond its specific context and toward general answers to these big questions. It would permit a co-ordinated effort and build a work programme of international significance. This pump-priming award would provide a platform for this partnership. The overall aim is to build a framework for inference in large-scale models of species' distribution, and to test it using computer simulations.
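
    To make "independent realisations of the same underlying state" concrete, here is a minimal sketch (not the partnership's framework) of an integrated model: a structured count survey with known effort and an unstructured, opportunistically reported dataset share one intensity surface, and a joint likelihood is maximised over the shared parameters plus an unknown reporting rate. All names, the covariate and the observation models are illustrative assumptions.

    # Minimal sketch of an integrated model: two datasets with different
    # observation processes share one underlying intensity surface.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    n_sites = 200
    x = rng.normal(size=n_sites)                 # an environmental covariate
    lam = np.exp(-1.0 + 0.8 * x)                 # true intensity (expected abundance)

    effort = 3.0                                 # known effort of the structured survey
    y_struct = rng.poisson(effort * lam)         # structured counts (fixed-protocol survey)
    report = 0.4                                 # unknown per-site reporting rate
    z_unstruct = rng.poisson(report * lam)       # unstructured counts (opportunistic records)

    def neg_joint_loglik(theta):
        b0, b1, log_delta = theta
        mu = np.exp(b0 + b1 * x)
        # Poisson log-likelihoods (constants dropped); the two data streams are
        # treated as independent realisations of the same state mu.
        ll_struct = np.sum(y_struct * np.log(effort * mu) - effort * mu)
        ll_unstr = np.sum(z_unstruct * (log_delta + np.log(mu)) - np.exp(log_delta) * mu)
        return -(ll_struct + ll_unstr)

    fit = minimize(neg_joint_loglik, x0=np.zeros(3), method="Nelder-Mead")
    b0_hat, b1_hat, log_delta_hat = fit.x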

  • Funder: UK Research and Innovation Project Code: EP/K041061/1
    Funder Contribution: 318,915 GBP

    A reduction of biodiversity loss is a key aim of the Convention on Biological Diversity (CBD) for 2020, and quantifying the loss is essential for managing it. This involves estimating the size and distribution of wild populations, which is statistically challenging - using only animals detected (often a very small fraction of the population), one must deduce the abundance and distribution of animals that were not detected. Natural systems invariably have spatial structure, and monitoring and understanding what drives habitat use, spatial distribution and changes in spatial distribution is central to understanding and predicting the effects of natural or human-induced perturbations of natural systems. This is difficult because the spatial structure of fauna and flora is often complex, involving spatial trend, spatial randomness and spatial correlation. Fitting spatial models that cannot accommodate all these aspects of spatial distribution can lead to very misleading conclusions about the drivers of spatial distribution and changes in distribution. In particular, inadequate modelling of randomness and correlation can lead to incorrect inferences and misleading predictions.

    While realistically complex spatial models have existed for some time, until very recently the methods for fitting such models were too slow to be useful. With the advent of the Integrated Nested Laplace Approximation (INLA) method this is no longer the case; as a result, use of this method has grown rapidly and the software implementing it is in great demand. However, there are currently no methods or software (INLA or other) for fitting realistically complex spatial models to data obtained from processes in which the probability of detecting population members is unknown. A distinguishing feature of wildlife survey data is that they involve exactly such unknown detection probabilities and, what is worse, detection probabilities that vary in space. The spatial distribution(s) of the population(s) of interest and the spatial distribution of detection probability have to be separated in order to draw reliable inferences about the population spatial distribution.

    Distance sampling (DS) and capture-recapture (CR) methods are far and away the most widely-used wildlife survey methods. Much of DS research effort has focused on developing methods for reliable estimation of spatial detection probability. Until very recently CR methods neglected the spatial component of detection probability entirely, but with the recent advent of Spatially Explicit Capture-Recapture (SECR) methods, CR methods are now also able to estimate spatial detection probability. But (with a few exceptions) both methods currently estimate detection probability assuming unrealistically simple population spatial distributions. While estimates of abundance are robust to this, estimates of distribution are not.

    This project combines the strengths of DS and CR methods and INLA. It will unite the spatial modelling methods of INLA with the spatial detection probability estimation methods of SECR and DS, to provide for the first time rigorous statistical methods and software for estimating realistically complex spatial distributions using data from the two most widely-used wildlife survey methods. It will provide more powerful methods and tools than are currently available for drawing inferences about what drives the distribution and change in distribution of fauna and flora. In so doing, it will provide substantially more powerful tools for monitoring and managing biodiversity loss than are currently available. And because DS and CR surveys usually record spatial data, the methods will be retrospectively applicable to many existing time series of survey data, so that they can be used immediately to "look into the past" and draw inferences about distribution and changes in distribution stretching as far back into the past as do reliable data sets.
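
    As a concrete illustration of estimating detection probability from survey data, the sketch below fits a half-normal detection function to line-transect distance sampling data by maximum likelihood and computes the average detection probability within a truncation distance. It deliberately assumes a spatially constant detection process; the project's contribution is precisely to let both detection and density vary realistically in space. The distances and settings are made up for illustration.

    # Minimal distance-sampling sketch: half-normal detection function fitted
    # to perpendicular distances by maximum likelihood.
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm

    distances = np.array([2.1, 5.4, 0.8, 11.0, 7.3, 3.2, 1.5, 9.8, 4.4, 6.1])  # metres
    w = 15.0  # truncation distance

    def neg_loglik(log_sigma):
        sigma = np.exp(log_sigma)
        g = np.exp(-distances**2 / (2.0 * sigma**2))          # half-normal detection function
        # Normalising constant: integral of g over (0, w)
        integral = sigma * np.sqrt(2.0 * np.pi) * (norm.cdf(w / sigma) - 0.5)
        return -np.sum(np.log(g / integral))

    fit = minimize_scalar(neg_loglik, bounds=(0.0, 4.0), method="bounded")
    sigma_hat = np.exp(fit.x)
    # Average probability of detecting an animal within distance w of the line:
    p_detect = np.sqrt(2.0 * np.pi) * sigma_hat * (norm.cdf(w / sigma_hat) - 0.5) / w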

  • Funder: UK Research and Innovation Project Code: EP/J022071/1
    Funder Contribution: 249,719 GBP

    This project will develop a new acoustic modelling method, ideally suited to the simulation of rooms and city squares, which will outperform existing methods in either accuracy or computational efficiency. Developing such an algorithm is a particular concern for practitioners who use auralisation as a consultation tool in the acoustic design of built spaces. In this process the data from the simulation model is rendered as sound by a loudspeaker system, allowing a client or stakeholder, who is unlikely to be an expert in acoustics, to form a judgement on whether the acoustic design fits their needs. This process is of course only valid if the acoustic model delivers accurate predictions of how sound behaves in the space, and current commercial software does not always succeed in this task because the high-frequency geometric propagation assumption on which it is based breaks down at low frequencies and in spaces where diffraction effects are significant. Although alternative numerical methods exist, they are typically limited to modelling only low frequencies, since their computational cost becomes impractical as frequency or time-resolution is increased.

    In response to these shortcomings, this project will develop a new hybrid method which combines the best features of geometric methods and fully numerical boundary element method (BEM) solvers, to provide a scheme that inherits desirable characteristics from both approaches: fully error-controllable, more accurate than geometric methods at low to mid-range frequencies, but with reduced computational cost at higher frequencies compared to standard BEM, all achieved within a single unified framework. Such a model would potentially include all wave terms, geometric and diffracted, but lower-energy reflections would only be included where necessary to achieve a given accuracy criterion (e.g. an SPL threshold or a function of the ear's perceptible difference limen), hence computational efficiency would be maximised.

    Introducing an element of interactivity to the auralisation process, where a user would be able to explore the space and/or make dynamic changes to the sources and building geometry or materials, would be desirable from a consultation-productivity perspective but would place extremely high demands on the acoustic model. Not only must the model dynamically update to reflect the modifications made by the user, but the requirement for accuracy is even more pressing, since any feature the client chooses to introduce must be accurately rendered, even if it has a strong acoustic effect (e.g. concave focussing surfaces, room resonances, unusual echo patterns), and there will be little or no opportunity for an expert to check that the sound is realistic. The new algorithm we propose will address these needs since, as well as having improved accuracy, it has the desirable characteristic that only a small, easily identified subset of the acoustic interaction data needs to be re-computed when a change in building geometry or source location occurs; incorporating support for modelling time-variant and interactive scenarios would hence be relatively straightforward.

    Towards this goal the project will also develop a new auralisation-oriented audio platform which will represent acoustic interactions by a network of digital filters and output sound directly to audio hardware, and the simulation algorithm will be geared towards outputting reduced acoustic models in this format. Pilot studies will investigate how interactivity might be supported, as dynamic modifications of scenario objects and corresponding filter network elements, and how standard lumped-parameter sound insulation and stochastic reverberation models may be incorporated. The project will conclude with a work package dedicated to modelling some real-world scenarios which would cause difficulties for current acoustic modelling software.
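
    For a sense of the geometric side of the proposed hybrid, the following sketch implements a basic image-source model for a rectangular room, turning specular reflection paths into a sampled impulse response of the kind a filter-network auralisation engine could render. The room dimensions, reflection coefficient, source/receiver positions and maximum image order are illustrative assumptions; the project's method would augment or replace such geometric terms with error-controlled BEM terms where diffraction matters.

    # Minimal image-source sketch for a shoebox room: compute arrival times and
    # amplitudes of specular reflections, then accumulate them into an impulse response.
    import numpy as np
    from itertools import product

    c = 343.0          # speed of sound, m/s
    fs = 16000         # sample rate, Hz
    room = np.array([5.0, 4.0, 3.0])
    src = np.array([1.0, 1.5, 1.2])
    rcv = np.array([3.5, 2.0, 1.6])
    beta = 0.85        # uniform wall reflection coefficient (assumption)
    order = 3          # maximum image index per axis

    h = np.zeros(int(0.1 * fs))   # 100 ms impulse response
    for n in product(range(-order, order + 1), repeat=3):
        for p in product((0, 1), repeat=3):
            img = (1 - 2 * np.array(p)) * src + 2 * np.array(n) * room
            n_refl = sum(abs(ni) + abs(ni - pi) for ni, pi in zip(n, p))
            d = np.linalg.norm(img - rcv)
            t = int(round(d / c * fs))
            if t < len(h):
                h[t] += beta ** n_refl / max(d, 1e-3)   # 1/r spreading, beta per bounce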

  • Funder: UK Research and Innovation Project Code: EP/R005052/1
    Funder Contribution: 322,863 GBP

    It is important for the Government to be able to predict the future energy needs of UK industries, homes and transport to ensure sufficient supply. At the same time, the UK needs to plan to reduce energy use in order to meet climate change reduction targets. At the moment the UK Government uses an Energy Demand Model which makes future energy predictions based on estimates of economic growth, the price of fuel and the number of households there will be in the future. This technique for predicting future energy needs is deficient, because it fails to take account of the fact that household demand for goods and services is the major driver of the economic performance of industry, and that the way households spend today is likely to be very different in the future.

    My fellowship takes a 'whole systems' approach to understanding the UK's demand for energy. The link between household spending and industrial energy use can be determined by quantifying the total energy required in the supply chain of producing a product. It is also possible to capture the energy that is embedded in goods exported abroad and goods imported to the UK from other countries with very different energy efficiency standards in their factories. I will develop a new indicator of energy demand, the 'UK Energy Footprint', which shows the full amount of energy associated with products bought by UK consumers between 2005 and 2015. I have met with the Department for Business, Energy and Industrial Strategy (BEIS) to ensure that this new indicator will be reported alongside the Carbon Footprint.

    Instead of simply looking at the changing goods and services bought by an average household, this fellowship will consider the differing expenditure profiles of up to 60 different household types between 2005 and 2015. For this, I will use geodemographic expenditure profiles developed by CallCredit, a credit reference company. The main users of geodemographic data are businesses seeking to understand their customers, so the data are constantly updated to stay current, and producers of this type of data do not keep previous years' profiles as a readily available product. This means that their data has never been used to understand the changing geodemographic profile of the UK or elsewhere. I have made an agreement with CallCredit to exclusively acquire a decade's worth of expenditure data from their archive. This means that it will be possible for the first time to determine whether the energy needs of the UK have altered because households are buying different types of products or because the mix of households in the UK is changing.

    I will use mathematical analyses to calculate the drivers of the change in UK energy demand. The research will be able to determine what effect the recession had on the energy demand of different households. I will then focus on using predictions of the changing household types, and of how lifestyles may change in the future, to estimate what the UK's demand for energy will be in 2030. There is uncertainty as to how the UK's infrastructure might have to change in order to cope with an ageing population or the trend for homeworking. This fellowship will address this by forming scenarios that calculate the UK's energy needs when greater proportions of these household types are present in the UK's demography.

    Outputs from this research will also be used to verify BEIS's future energy demand scenarios and provide new inputs to their Energy Demand Model. This work therefore has great importance in ensuring the UK can meet the energy needs of its businesses and people, and become more sustainable, now and in the future.
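
    The supply-chain energy calculation described above is, in essence, an environmentally extended input-output computation. A minimal sketch with made-up three-sector numbers (not UK data) is shown below: the Leontief inverse converts a household's final demand into total economy-wide output, which is then weighted by sectoral energy intensities to give an energy footprint.

    # Minimal environmentally extended input-output sketch:
    # total (direct + supply-chain) energy embodied in household final demand.
    import numpy as np

    A = np.array([[0.10, 0.20, 0.05],     # technical coefficients: inputs per unit output
                  [0.15, 0.05, 0.10],
                  [0.05, 0.10, 0.15]])
    energy_intensity = np.array([2.0, 0.5, 1.2])        # direct energy use per unit output
    household_demand = np.array([100.0, 250.0, 80.0])   # final demand of one household type

    L = np.linalg.inv(np.eye(3) - A)                    # Leontief inverse
    total_output = L @ household_demand                 # output required across the supply chain
    energy_footprint = energy_intensity @ total_output  # energy embodied in this spending pattern

    # Repeating this for each geodemographic household profile and each year
    # would give a time series of footprints of the kind the fellowship describes.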
