
International Centre for Theoretical Physics

7 Projects
  • Funder: UK Research and Innovation Project Code: EP/F010125/1
    Funder Contribution: 15,325 GBP

    This School will review the state of the art in theory and experiment on quantum nano-systems and nano-structured materials. The course will cover the following topics: (i) electronic properties of the recently discovered 2D material graphene, and a review of recent progress on the quantum Hall effect and the spin Hall effect; (ii) carbon nanotubes, the Luttinger liquid in quantum wires and the bosonisation technique, the Kondo effect, and functional renormalisation group methods; (iii) quantum information processing, phase coherence and decoherence in qubits, coherent exciton dynamics and optical properties of quantum dots and microcavities, and adiabatic and non-adiabatic dynamics of quantum condensates of finite dimensions. The lectures on theoretical methods will be complemented by reviews of advanced experiments and focused research seminars. The lectures and seminars will be delivered by leading specialists from the USA, the UK, Germany and elsewhere (a full list is in the case for support).

  • Funder: UK Research and Innovation Project Code: EP/R004730/1
    Funder Contribution: 101,150 GBP

    The subjects of study in differential geometry are smooth manifolds, which correspond to smooth curved objects of finite dimension. In modern differential geometry, it is increasingly common to consider sequences (or flows) of smooth manifolds. Typically, the limits of such sequences (or flows) are no longer smooth. It is then useful to isolate a natural class of non-smooth objects which generalises the classical notion of a smooth manifold and which is closed under the process of taking limits. If the sequence of manifolds satisfies a lower bound on the sectional curvatures, a natural class of non-smooth objects closed under (Gromov-Hausdorff) convergence is given by special metric spaces known as Alexandrov spaces; if instead the sequence satisfies a lower bound on the Ricci curvatures, a natural class of non-smooth objects, closed under (measured Gromov-Hausdorff) convergence, is given by special metric measure spaces (i.e. metric spaces endowed with a reference volume measure) known as RCD(K,N) spaces. These are a 'Riemannian' refinement of the so-called CD(K,N) spaces of Lott-Sturm-Villani, which are metric measure spaces with Ricci curvature bounded below by K and dimension bounded above by N in a synthetic sense via optimal transport. In the proposed project we aim to understand in more detail the structure and the analytic and geometric properties of RCD(K,N) spaces. The new results will also have an impact on the classical world of smooth manifolds satisfying curvature bounds.
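    As an illustrative aside (standard definitions, not taken from the abstract): the Lott-Sturm-Villani synthetic lower Ricci bound CD(K,∞) asks that the relative entropy of a metric measure space (X, d, m) be K-convex along geodesics of the Wasserstein space.

```latex
% Sketch of the Lott--Sturm--Villani CD(K,\infty) condition (illustrative):
% for every W_2-geodesic (\mu_t)_{t \in [0,1]} of probability measures on (X, d, m),
\mathrm{Ent}(\mu_t \mid m) \;\le\; (1-t)\,\mathrm{Ent}(\mu_0 \mid m)
    \;+\; t\,\mathrm{Ent}(\mu_1 \mid m)
    \;-\; \frac{K}{2}\, t(1-t)\, W_2(\mu_0, \mu_1)^2 .
```

    The finite-dimensional CD(K,N) condition replaces the quadratic correction term with N-dependent distortion coefficients, and the RCD(K,N) refinement mentioned above additionally requires the Cheeger energy to be quadratic — the 'Riemannian' condition ruling out Finsler-type geometries.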

  • Funder: UK Research and Innovation Project Code: EP/M016838/1
    Funder Contribution: 606,413 GBP

    Hyperelliptic curves are a fundamental class of polynomial equations that has featured in geometry and number theory for a very long time, but whose arithmetic has not yet been subject to a systematic study. This gap in our knowledge is rapidly becoming apparent, as demands for the theory are coming from within pure mathematics, from areas bordering on theoretical physics (via the new theory of hypergeometric motives), and from cryptography, where one of the main methods for modern data encryption is based on hyperelliptic curves. The purpose of the project is to modernise our approach to the arithmetic of hyperelliptic curves by bringing the number-theoretic machinery of L-functions and Selmer groups into the subject. These are tools that have been at the centre of attention of many number theorists over the past few decades, and lie at the heart of the works on Fermat's Last Theorem, the Langlands programme and the Birch-Swinnerton-Dyer conjecture. They promise to provide new theoretical and computational techniques for working with hyperelliptic curves, and our aim is to establish foundational results whose analogues have been central to the development of other parts of number theory. From the point of view of the established theory of L-functions, the step into hyperelliptic curves is partly a step into uncharted territory, for hyperelliptic curves cannot be treated by the comfortably familiar techniques based on modular forms. We thus plan to expand and test the L-function theory beyond its standard boundaries, and hope to shed light on the many unresolved conjectures in the subject. The interplay of hyperelliptic curves, L-functions and Selmer groups is the rationale for proposing a single unified project. Our aim is to produce mathematical results, algorithms and data that can be used in each of these three worlds.
    Apart from establishing results for number theorists, we aim to explore phenomena and develop concrete classifications and a database that would also be accessible to scientists from other fields working with hyperelliptic curves.
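    For readers unfamiliar with the objects, a brief sketch (standard definitions, not taken from the abstract): a hyperelliptic curve of genus g over the rationals can be given by an equation y^2 = f(x) with f squarefree, and its L-function is an Euler product built from point counts modulo primes.

```latex
% Sketch (standard definitions): a hyperelliptic curve and its L-function.
C:\; y^2 = f(x), \qquad f \in \mathbb{Z}[x] \text{ squarefree},
    \quad \deg f = 2g+1 \text{ or } 2g+2,
\qquad
L(C, s) = \prod_{p} L_p(p^{-s})^{-1},
```

    where, at primes p of good reduction, L_p(T) is a polynomial of degree 2g determined by the number of points of C over the finite fields with p^k elements.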

  • Funder: UK Research and Innovation Project Code: NE/N013840/1
    Funder Contribution: 507,572 GBP

    Physically, deep convection is a key process in the atmosphere, particularly in the tropics, where it is the dominant driver of the weather as well as playing a key role in forcing the global circulation. Despite this key role on the large scale, convection is inherently a relatively small-scale process, with convective clouds typically on the scale of hundreds of metres to a few kilometres, and therefore unresolved in global numerical weather prediction (NWP) and climate models. "Parametrisation" of convection is therefore critical to accurately represent the impact of convection on the larger-scale flow. This is not a simple problem, and deficiencies in current convective parametrisation schemes lead to significant model biases, the wrong diurnal cycle of convection in the tropics (with knock-on effects on rainfall and surface heating by radiation) and inadequate representation of important atmospheric circulations, such as the Madden-Julian Oscillation, which are driven by convection. Two particular, and linked, problems which contribute to these deficiencies are the triggering of convection (the timing, location and stochastic nature of triggering) and the subsequent organisation of convection into larger convective systems. The overarching aim of this project is to bring together our understanding of the various physical processes which control the triggering and organisation of deep convection, and to use this to develop a framework in which these processes can be integrated in a consistent way. Such a framework will allow these important processes to be represented in new convective parametrisation schemes in a more physically realistic and consistent manner. In particular, a physically based convective triggering scheme should be easier to integrate into the new generation of stochastic convective parametrisation schemes which are being developed. Such schemes will also be easier to make scale-aware, i.e.
    to adapt with the model resolution so as to parametrise only the necessary sub-grid processes, while allowing the model to resolve larger-scale features of the convection. This is particularly important for the latest NWP and regional climate model simulations, which are of sufficiently high resolution that they permit the explicit representation of convection, albeit rather crudely. A further limitation of current parametrisation schemes is that they tend to be instantaneous. Where convection is organised, the system has "memory", i.e. the occurrence and organisation of convection will affect the local development of further convection. We will use techniques from other branches of fluid dynamics to understand and quantify organisation in convective systems, and develop measures which can be used as the basis for new stochastic convection parametrisations. This project will consider both internal and external processes. Internal processes generated by the convection itself, such as gravity waves and cold pools, play a key role in the triggering and organisation of convection. Over land, external factors such as surface heterogeneity and topography also play an important role in triggering convection and controlling how it can organise. Integrating these various competing influences into a consistent framework will be a significant step forward for parametrisation schemes. Having developed the framework by studying individual processes through idealised numerical simulations with the Met Office Unified Model (MetUM) and the Met Office-NERC cloud-resolving model (MONC), we will test the ideas in more realistic large-domain simulations to help quantify the important scale interactions between small-scale convection and the larger-scale fields.
    The output of this project will be a series of generic, physically based model frameworks which can be used as components in the different convective parametrisation schemes being proposed or developed, both within this programme and internationally.

  • Funder: UK Research and Innovation Project Code: EP/K034383/1
    Funder Contribution: 2,246,110 GBP

    L-functions and modular forms are fundamental mathematical objects that encode much of our knowledge of contemporary number theory. They form part of a web of interconnected objects, the understanding of which in the most basic cases lies at the foundations of much of modern mathematics. A spectacular example is Wiles' proof of Fermat's Last Theorem, which was an application of a fundamental "modularity" link between L-functions, modular forms and elliptic curves. This project will greatly extend and generalise such connections, both theoretically and computationally. The research vision inspiring our programme can be summarised as: "Breaking the boundaries of classical L-functions and modular forms, and exploring their applications to 21st-century mathematics, physics, and computer science". Our guiding goal is to push forward both theoretical and algorithmic developments, in order to develop L-functions and modular forms far beyond current capabilities. This programme will systematically develop an extensive catalogue of number-theoretic objects, and will make this information available through an integrated online resource that will become an indispensable tool for the world's research community. L-functions are to pure mathematics what fundamental particles are to physics: their interactions reveal fundamental truths. To continue the analogy, computers are to number theorists what colliders are to particle physicists. Aside from their established role as empirical "testers" for conjectures and theories, experiments can often throw up quite unexpected phenomena which go on to reshape modern theory. Our programme will establish a major database and encyclopedia of knowledge about L-functions and related objects, which will play a role analogous to that of the LHC for the scientists at CERN. Both are at the threshold of tantalising glimpses into completely uncharted territories: higher-degree L-functions for us and the Higgs boson for them.
    Theoretical and computational work on higher-degree L-functions has only started to make substantial progress in the past few years. There do not currently exist efficient methods to work with these, and rigorous computations with them are not yet possible. Neither is there yet an explicit description of all the ways in which degree 3 L-functions can arise. We will address both facets in our research programme: algorithmic development and theoretical classification. As well as having theoretical applications to modularity relationships as in Wiles' proof, detailed knowledge of L-functions has more far-reaching implications. Collections of L-functions have statistical properties which first arose in theoretical physics. This surprising connection, which has witnessed substantial developments led by researchers in Bristol, has fundamental predictive power in number theory; the synergy will be vastly extended in this programme. In another strand, number theory plays an increasingly vital role in computing and communications, as evidenced by its striking applications to both cryptography and coding theory. The Riemann Hypothesis (one of the Clay Mathematics Million Dollar Millennium Problems) concerns the distribution of prime numbers, and the correctness of the best algorithms for testing large prime numbers depends on the truth of a generalised version of this 150-year-old unsolved problem. These algorithms are used by the public-key cryptosystems that everyone who uses the Internet relies on daily, and that underpin our digital economy. Our programme involves the creation of a huge amount of data about a wide range of modular forms and L-functions, which will far surpass in range and depth anything computed before in this area. This in turn will be used to analyse some of the most famous outstanding problems in mathematics, including the Riemann Hypothesis and another Clay problem, the Birch and Swinnerton-Dyer conjecture.
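    As a brief illustration of the objects discussed (standard material, not taken from the abstract), the prototype of all L-functions is the Riemann zeta function, whose Euler product encodes the primes:

```latex
% Sketch (standard): the Riemann zeta function, convergent for Re(s) > 1,
\zeta(s) \;=\; \sum_{n \ge 1} n^{-s} \;=\; \prod_{p \ \text{prime}} \left(1 - p^{-s}\right)^{-1},
```

    and the Riemann Hypothesis asserts that the non-trivial zeros of its analytic continuation all lie on the line Re(s) = 1/2. Higher-degree L-functions generalise this picture with Euler factors of higher degree at each prime.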

