UNIVERSITE DE CAEN - BASSE-NORMANDIE
49 Projects, page 1 of 10
Project, from 2007. Partners: UNICAEN, CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE - DELEGATION REGIONALE MIDI-PYRENEES, UNIVERSITE DE CAEN - BASSE-NORMANDIE, UNIVERSITE TOULOUSE III [PAUL SABATIER]. Funder: French National Research Agency (ANR). Project Code: ANR-07-CORP-0002. Funder Contribution: 180,000 EUR.

Project, from 2010. Partners: INRA - CENTRE DE RECHERCHE DE TOULOUSE, UNIVERSITE DE CAEN - BASSE-NORMANDIE, UNICAEN, Université Pierre et Marie Curie, Paris Dauphine University. Funder: French National Research Agency (ANR). Project Code: ANR-10-BLAN-0215. Funder Contribution: 303,499 EUR.

This project addresses the question of decision-making for autonomous agents equipped with knowledge. In most real-world applications, such agents face many challenges in taking optimal decisions: the environment is typically dynamic, uncertain, and only partially observable; it is described over an extremely large number of attributes; and decisions must be made very quickly. To design agents able to handle these problems, the Artificial Intelligence community has developed complementary approaches, in particular symbolic (logical) and numerical formalisms. Numerical formalisms (in particular Markov networks, Bayesian networks, Markov Decision Processes, and their derivatives) are typically suited for representing the stochastic effects of actions and the stochastic evolution of the environment, and can be learned and solved with various techniques.
On the other hand, logical formalisms (e.g., attribute-value, relational, epistemic) are well suited for expressing hard constraints, norms, epistemic knowledge, goals, etc. In particular, logic is more declarative in essence, making it easier for humans to manipulate. Again, these formalisms come with various techniques for learning and reasoning. Our proposal stems from the observation that an agent placed in a real-world environment typically has access to some information through numerical models, and to other information in logical form. A natural rationality requirement is then that its decisions take all this information into account. Typically, a soccer robot needs a numerical model of its effectors (obtained by simulation or training), but also a logical model of the rules of the game. Another typical situation is medicine, where both numerical estimates (of the efficiency of treatments or the accuracy of analyses) and logical expert knowledge are needed. So, the problem we will attack can be formulated as follows: design approaches for taking rational decisions when part of the information about the environment, actions, and rewards is given in numerical form, and part in logical form. A complete approach to this problem must address three subproblems: representation of the problem, optimal policy computation, and reinforcement learning. We will attack these problems from the point of view of complexity theory and algorithmics, identifying the complexity of problems, identifying tractable restrictions, and designing efficient algorithms (both in terms of complexity and in practice). This is justified by the fact that most problems are already known to be computationally hard even when numerical and logical information are not considered together (e.g., in Partially Observable Markov Decision Processes, POMDPs).
To that aim, we will build on existing factored representations of (PO)MDPs, especially Dynamic Bayesian Networks and Probabilistic STRIPS Operators, which are based on propositional attributes. The focus on propositional logic rather than more expressive relational formalisms will ensure decidability of most problems, reasonable complexity, and possible reuse of very efficient software, e.g., for satisfiability. The proposed representations and algorithms will be illustrated on two large-scale applications. The first consists in building occurrence maps of spatial processes, where the decisions to be taken are the locations to visit to gather information about the occurrence of the process. A real application is studied at INRA, where the process to be mapped is the growth of invasive species. The second application concerns non-player characters in video games.
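The numerical half of the picture sketched above — computing an optimal policy for a Markov Decision Process — can be illustrated with a minimal value-iteration loop on a toy two-state MDP. The states, actions, transitions, and rewards below are invented for illustration; the project itself targets factored, partially observable models far beyond this flat enumeration:

```python
def value_iteration(n_states, actions, P, R, gamma=0.9, eps=1e-6):
    """Optimal value function and greedy policy for a small flat MDP.

    P[a][s][s2]: probability of moving from state s to s2 under action a.
    R[a][s]: expected immediate reward for taking action a in state s.
    """
    V = [0.0] * n_states
    while True:
        # Bellman optimality backup over all states.
        V_new = [max(R[act][s] + gamma * sum(P[act][s][s2] * V[s2]
                                             for s2 in range(n_states))
                     for act in actions)
                 for s in range(n_states)]
        if max(abs(v1 - v0) for v1, v0 in zip(V_new, V)) < eps:
            V = V_new
            break
        V = V_new
    # Greedy policy with respect to the converged value function.
    policy = [max(actions,
                  key=lambda act: R[act][s] + gamma * sum(
                      P[act][s][s2] * V[s2] for s2 in range(n_states)))
              for s in range(n_states)]
    return V, policy
```

With discount 0.9, a state that yields reward 1 forever converges to value 1/(1-0.9) = 10, and a state one "go" step away converges to 0.9 × 10 = 9.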
Project, from 2010. Partners: UNIVERSITE DE CAEN - BASSE-NORMANDIE, CNRS - DELEGATION REGIONALE LANGUEDOC-ROUSSILLON, INRA - CENTRE DE RECHERCHE DE TOULOUSE, UNICAEN. Funder: French National Research Agency (ANR). Project Code: ANR-10-BLAN-0214. Funder Contribution: 348,330 EUR.

Many combinatorial problems can be naturally modelled as a network of local interactions between discrete variables. In the simplest cases, the local interactions are simply compatibility/incompatibility relations and the network is a constraint network (CN). A fundamental property of such a network is its consistency (or feasibility): is it possible to find a value for each variable in the network such that no incompatibility appears? Answering this question defines the Constraint Satisfaction Problem (CSP). This problem has been the object of intense research over the last 30 years, and the French community is very well represented at the international level. The dedicated techniques developed to solve the CSP form the foundations of constraint programming languages such as IBM ILOG Solver, Cosytec CHIP, and Cisco Eclipse. These tools have shown very good complementarity with mathematical programming techniques, for example in areas such as resource scheduling and configuration, and many industrial-size problems have been solved using this approach. The ubiquitous and fundamental technique used inside these constraint solvers is filtering by local consistency. This process consists in transforming a given constraint network into an equivalent network (having the same set of solutions) which is also more explicit and simple (characterized by specific properties). The most usual filtering techniques act at the level of single constraints and are known as filtering by arc consistency.
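The arc-consistency filtering described above can be sketched as a classic AC-3 loop: a value is pruned when it has no support on some constraint, and every arc pointing into a revised variable is re-examined. The `domains`/`constraints` encoding below is our own minimal illustration, not any particular solver's API:

```python
from collections import deque

def ac3(domains, constraints):
    """Enforce arc consistency on a binary constraint network, in place.

    domains: dict var -> set of remaining values.
    constraints: dict (x, y) -> predicate(vx, vy); both arc directions
    must be present as keys. Returns False on a domain wipe-out
    (the network is then inconsistent), True otherwise.
    """
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        # Values of x without any support on the constraint between x and y.
        removed = {vx for vx in domains[x]
                   if not any(constraints[(x, y)](vx, vy)
                              for vy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False
            # Re-examine every arc pointing into the revised variable x.
            queue.extend((z, w) for (z, w) in constraints
                         if w == x and z != y)
    return True
```

For instance, with x, y ranging over {1, 2, 3} and the single constraint x < y, AC-3 prunes x = 3 (no support in y) and y = 1 (no support in x), leaving x in {1, 2} and y in {2, 3} — an equivalent but more explicit network, exactly as described above.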
In 2000, these techniques were extended to cost function networks (CFNs, also called weighted or soft constraint networks). Cost function networks define an extension of pure constraint networks that directly captures complex optimization problems mixing arbitrary constraints and cost functions (possibly nonlinear). In recent years, this technical advance has been combined with branch and bound, where it provides the required incremental lower bound. This approach has been refined to the point where several hard combinatorial optimization problems, open for more than 15 years, have been solved to optimality. Cost function networks have also been used to solve very large problems in bioinformatics (genetics, molecular biology) and applied to large stochastic graphical models (Bayesian networks and Markov random fields). The aim of this project is to build on these recent successes by introducing stronger local consistency filtering algorithms, capable of providing tighter lower bounds. The accumulated results in the field of constraint networks, more specifically on so-called "domain consistencies" and on "global constraints" (constraints with a semantics that allows for the definition of very time-efficient algorithms), will be instrumental in this process. This will require extending the year-2000 result on arc consistency to higher levels of local consistency, and also taking into account the precise semantics of significant "global cost functions". To guide these developments, we will rely on the complete set of benchmark problems accumulated in the "Cost Function Library", completed with targeted applications: complex pedigree diagnosis, maximum likelihood haplotyping (genetics), and Nurse Rostering Problem instances, as well as processing stochastic discrete graphical models derived from the genetics problems and from invasive species mapping problems modelled as Markov random fields.
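To see how a cost function network feeds an incremental lower bound into branch and bound, the sketch below uses only the crudest possible bound — each unassigned variable pays at least its cheapest remaining unary cost. The soft local consistencies the project studies compute far tighter bounds by moving costs between cost functions, but the control flow (prune whenever the bound reaches the best cost found so far) is the same. The encoding of unary and binary cost functions is invented for this example:

```python
def solve_cfn(domains, unary, binary):
    """Depth-first branch and bound for a tiny cost function network.

    domains: dict var -> list of values.
    unary:   dict var -> {value: cost}.
    binary:  dict (x, y) -> {(vx, vy): cost}; absent entries cost 0.
    Returns (optimal cost, optimal assignment).
    """
    variables = list(domains)
    best = {"cost": float("inf"), "assign": None}

    def delta_cost(var, val, assign):
        # Cost added by assigning var := val given the current assignment.
        c = unary.get(var, {}).get(val, 0)
        for v2, val2 in assign.items():
            c += binary.get((var, v2), {}).get((val, val2), 0)
            c += binary.get((v2, var), {}).get((val2, val), 0)
        return c

    def lower_bound(assign, cost):
        # Optimistic completion: each future variable pays at least its
        # cheapest unary cost (a node-consistency-style bound).
        return cost + sum(min(unary.get(v, {}).get(d, 0) for d in domains[v])
                          for v in variables if v not in assign)

    def search(assign, cost):
        if lower_bound(assign, cost) >= best["cost"]:
            return  # prune: cannot improve on the incumbent
        if len(assign) == len(variables):
            best["cost"], best["assign"] = cost, dict(assign)
            return
        var = next(v for v in variables if v not in assign)
        for val in domains[var]:
            d = delta_cost(var, val, assign)
            assign[var] = val
            search(assign, cost + d)
            del assign[var]

    search({}, 0)
    return best["cost"], best["assign"]
```

On a two-variable example with unary costs and one binary penalty, the solver returns the minimum-cost complete assignment; replacing `lower_bound` with a stronger soft consistency is precisely where the tighter bounds discussed above would plug in.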
Beyond pure optimization problems, we will also use these techniques to compute, with guarantees, approximations of the normalizing constant (Z) in these models — a difficult (#P-complete) problem that is central in the processing of Markov random fields and, more generally, in reasoning under uncertainty.
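For intuition on why computing Z is hard: the exact value sums the product of all potentials over every joint assignment, which is exponential in the number of variables — hence the interest in approximations with guarantees. A brute-force sketch on a tiny discrete model (the factor encoding is invented for illustration):

```python
from itertools import product

def partition_function(domains, factors):
    """Exact normalizing constant Z of a discrete graphical model.

    domains: dict var -> list of values.
    factors: list of (scope, table) where scope is a tuple of variables
    and table maps value tuples to nonnegative potentials.
    Enumerates all joint assignments: exponential in len(domains).
    """
    variables = list(domains)
    Z = 0.0
    for values in product(*(domains[v] for v in variables)):
        assign = dict(zip(variables, values))
        weight = 1.0
        for scope, table in factors:
            weight *= table[tuple(assign[v] for v in scope)]
        Z += weight
    return Z
```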
Project, from 2008. Partners: CHU, CONTINENTAL AUTOMOTIVE FRANCE SAS, UNIVERSITE DE CAEN - BASSE-NORMANDIE, UNICAEN, IFSTTAR. Funder: French National Research Agency (ANR). Project Code: ANR-07-TSFA-0004. Funder Contribution: 481,795 EUR.

Project, from 2006. Partners: CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE - DELEGATION REGIONALE COTE D'AZUR, UNIVERSITE DE CAEN - BASSE-NORMANDIE, CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE - DELEGATION REGIONALE LANGUEDOC-ROUSSILLON, UNICAEN, CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE - DELEGATION REGIONALE PROVENCE CORSE, CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE - DELEGATION REGIONALE BRETAGNE ET PAYS-DE-LA-LOIRE. Funder: French National Research Agency (ANR). Project Code: ANR-05-BDIV-0004. Funder Contribution: 529,664 EUR.
