In weak convergence, Prohorov's theorem states that tight sequences of probability measures are weakly compact. Similarly, with regard to the large deviation principle, exponentially tight sequences are large deviation relatively compact. As a consequence, large deviation theory can be developed by using tools from weak convergence theory. This talk discusses implications of this observation for establishing large deviation asymptotics of stochastic dynamical systems. To motivate the developments, we start with some classical settings such as sums of independent random variables and diffusions with small noise. Next, the notion of the large deviation principle (LDP) is defined, its basic properties are considered, and various approaches to establishing it are discussed. The brief overview concludes with the analogue of Prohorov's theorem. An application to a non-standard proof of Gartner's theorem is considered.
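For concreteness, the large deviation principle referred to above can be stated in its standard form as follows (speed n, rate function I; this formulation is the usual one and is added here for reference):

```latex
% A sequence (P_n) of probability measures on a metric space S satisfies the
% LDP with rate function I : S -> [0, \infty] (lower semicontinuous) if
\liminf_{n\to\infty} \frac{1}{n}\log P_n(G) \ge -\inf_{x\in G} I(x)
  \quad \text{for every open } G \subseteq S,
\qquad
\limsup_{n\to\infty} \frac{1}{n}\log P_n(F) \le -\inf_{x\in F} I(x)
  \quad \text{for every closed } F \subseteq S.
% Exponential tightness, the analogue of tightness in Prohorov's theorem,
% requires that for every M > 0 there is a compact K_M \subseteq S with
\limsup_{n\to\infty} \frac{1}{n}\log P_n(S \setminus K_M) \le -M.
```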
The second part of the talk is concerned with large deviations of the trajectories of stochastic processes. We again begin with examples. They are used in order to motivate formulating an LDP for stochastic processes as a certain type of convergence (dubbed large deviation convergence) to idempotent processes. The notions of an idempotent probability measure and idempotent process are defined. Large deviation limits of solutions to stochastic equations are specified as solutions of idempotent equations. We also discuss the use of compactness considerations for proving large deviation convergence of invariant measures. Time permitting, we conclude by establishing an LDP for a join-the-shortest-queue model.
The coupling method (in other words, the method of a common probability space) is the most common approach to studying convergence rates in functional limit theorems of probability theory. The coupling method allows us to obtain the most tractable and usable estimates. However, in order to get better estimates, one has to invent ever more complex coupling constructions. In the first half of my talk, I'll give a short overview of known problem formulations, ideas and results on functional central limit theorems that have been obtained using coupling. Special attention will be given to various generalisations of well-known results by J. Komlos, P. Major and G. Tusnady on the accuracy of approximation in the functional CLT. In the second half of the talk, I'll speak about further possible generalisations and about methods of proof.
The talk will show how simple varying of parameters of Bernoulli and Poisson random processes leads to quite deep results employing multi-dimensional-"time" martingales, percolation, Gamma-type results and stopping sets.
The theory of rough paths as developed by the author (and several others, such as Hambly, Ledoux, Coutin, Qian and Friz) aims to study the differential equations used to model situations where a system responds to external control or forcing. The theory describes a robust approach to these equations that allows the forcing to be far from differentiable. The methodology accommodates the main probabilistic classes of stochastic forcing, as well as many new types that do not fit directly into the classical semi-martingale setting.
The key to this theory is to answer the question: when do two controls produce similar responses? This is also a core question in multi-scale analysis, where one needs to summarise small-scale behaviour in such a way that large-scale responses can be predicted from the summarised information. The question can be translated into one of characterising the continuity properties of the Ito map. This is indeed possible: the Universal Limit Theorem proves the (uniform) continuity of the map taking the forcing control to the response for a wide class of metrics on smooth paths, and the completions of the space of smooth paths under these metrics give the so-called rough paths, giving insight into the control problem.
The approach is structured, and allows one to give a top-down analysis of a control in terms of a sequence of algebraic coefficients we call the signature of the control (these bear some similarity to a précis of a complicated text by a simpler one, and are a non-commutative analogue of Fourier coefficients), with refinements giving progressively more accurate information about the control. Hambly and Lyons recently proved that this "signature" of a control completely characterises the control up to the appropriate null sets.
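In the usual formulation (added here for reference, for a smooth control; the notation is ours), the signature is the sequence of iterated integrals of the path:

```latex
% For a smooth control x : [0,T] -> R^d, the signature is
S(x) = \Bigl(1,\;
  \int_{0<u<T} dx_u,\;
  \int_{0<u_1<u_2<T} dx_{u_1}\otimes dx_{u_2},\;
  \dots \Bigr),
% with k-th term taking values in (R^d)^{\otimes k}:
S^{(k)}(x) = \int_{0<u_1<\cdots<u_k<T} dx_{u_1}\otimes\cdots\otimes dx_{u_k}.
```

Truncating the sequence at higher and higher tensor levels gives the progressively finer summaries of the control mentioned above.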
The new results mentioned above have generated new open problems.
Copulas are multivariate distribution functions with uniform marginal distributions, and the Archimedean family is a widely studied family of copulas sharing a common method of construction. We will motivate the study of Archimedean copulas by mentioning applications in multivariate survival analysis, insurance loss modelling and portfolio credit risk.
There are interesting mathematical links between Archimedean copulas on the one hand and completely monotonic functions and Laplace transforms on the other. These links are described in the work of Kimberling (1974), Schweizer and Sklar (1983), Genest and MacKay (1986), Marshall and Olkin (1988), Joe (1993) and others. We will review this theory and show how it leads to useful "one-factor representations" of Archimedean copulas that in particular facilitate their stochastic simulation.
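As an illustration of such a one-factor representation, here is a minimal sketch for the Clayton copula, whose generator psi(t) = (1 + t)^(-1/theta) is the Laplace transform of a Gamma(1/theta, 1) frailty; the sampling scheme follows the Marshall–Olkin construction, while the function name and parameter choices are our own:

```python
import random

def clayton_sample(n, theta, seed=None):
    """Sample n pairs from a bivariate Clayton copula (theta > 0) via the
    one-factor (frailty) representation: draw a common Gamma(1/theta, 1)
    frailty V, then set U_i = psi(E_i / V) for independent Exp(1) E_i,
    where psi(t) = (1 + t)**(-1/theta) is the Laplace transform of V."""
    rng = random.Random(seed)
    psi = lambda t: (1.0 + t) ** (-1.0 / theta)
    out = []
    for _ in range(n):
        v = rng.gammavariate(1.0 / theta, 1.0)   # common frailty factor
        u1 = psi(rng.expovariate(1.0) / v)
        u2 = psi(rng.expovariate(1.0) / v)
        out.append((u1, u2))
    return out
```

Because psi is the Laplace transform of V, each marginal U_i is exactly uniform on (0, 1), and the dependence between the coordinates comes entirely from the shared frailty V.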
The great drawback of standard Archimedean copulas is that they are distribution functions of exchangeable random vectors, which clearly limits their applicability to modelling multivariate risks of very heterogeneous character. We will go on to consider two generalisations of Archimedean copulas that allow for more flexibility: multifactor Archimedean copulas and nested Archimedean copulas. Time permitting, the talk will culminate in a new algorithm that solves the problem of sampling from nested Archimedean copulas. A real actuarial application of this seemingly obscure problem will be mentioned.
In this talk, I will concentrate on the easy parts of Ito excursion theory and show how even these relatively easy results have a wide range of powerful applications.
Lecture 1: Spatial Point Processes
Lecture 2: Poisson Processes
Lecture 3: Random Measures and Random Closed Sets
Lecture 4: The Boolean Model
Lecture 5: Mean Value Formulae for Stationary Tessellations
Lecture 6: Distributional Properties of Poisson Voronoi Tessellations

Stochastic geometry aims to develop and to analyse mathematical models for random spatial patterns. It is currently quite a lively part of modern probability theory, combining and advancing ideas from integral and convex geometry, the theory of random fields and random measures, and spatial statistics. Methods and results of stochastic geometry are applied in various other areas, such as materials science, mobile telecommunication, biology, astronomy and geology. In materials science, for instance, heterogeneous structures are composed of different phases. Quite often the microstructure of such materials can be characterised only statistically. Of interest are then the macroscopic or effective properties of the material.
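As a small illustration of the Boolean model mentioned above (a toy sketch of our own, not taken from the lectures): for a stationary Boolean model of discs with fixed radius r and Poisson germ intensity lam, the volume fraction is 1 - exp(-lam * pi * r**2), which a Monte Carlo estimate on the unit torus should reproduce approximately.

```python
import math
import random

def boolean_model_coverage(lam, r, n_test=2000, seed=0):
    """Estimate the volume fraction of a planar Boolean model: germs from a
    Poisson process with intensity lam on the unit torus, each carrying a
    disc of fixed radius r.  Theory gives p = 1 - exp(-lam * pi * r**2)."""
    rng = random.Random(seed)
    # Draw the Poisson number of germs (Knuth's multiplication method).
    k, p, thresh = 0, 1.0, math.exp(-lam)
    while True:
        p *= rng.random()
        if p < thresh:
            break
        k += 1
    germs = [(rng.random(), rng.random()) for _ in range(k)]

    def covered(x, y):
        # Toroidal (wrap-around) distance avoids edge effects on [0,1]^2.
        for gx, gy in germs:
            dx = min(abs(x - gx), 1.0 - abs(x - gx))
            dy = min(abs(y - gy), 1.0 - abs(y - gy))
            if dx * dx + dy * dy < r * r:
                return True
        return False

    hits = sum(covered(rng.random(), rng.random()) for _ in range(n_test))
    return hits / n_test
```

This is exactly the situation described above for heterogeneous materials: the covered region plays the role of one phase, and the volume fraction is the simplest macroscopic characteristic of the microstructure.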
Random operators emerge in a number of applications, particularly in theoretical physics. A popular example of such an operator is the Schroedinger operator with a random potential, which describes a quantum system in the presence of impurities. The emerging theory contains many surprising results which are often counterintuitive not only mathematically but also from a basic physical point of view. I intend to give an introduction to the theory of such operators and to discuss which probabilistic techniques are used here. No preliminary knowledge of quantum mechanics will be assumed.
Regular variation plays an important role in extreme value theory, summation theory, and time series analysis. It is the aim of the first talk to look at some functions of regularly varying vectors. These include linear combinations and products with regularly varying components. Functions of this type occur in a natural way in financial time series (e.g. GARCH, stochastic volatility models). We are also interested in converse problems: given that a function of a vector (such as a product or linear combination of its components) is regularly varying, is the vector itself regularly varying? We give positive answers and counterexamples. For example, an AR(1) or MA(2) process with regularly varying marginal distributions has regularly varying noise, but an MA(3) process with regularly varying marginal distribution does not necessarily have regularly varying noise. In the second talk we look at extensions of regular variation in a functional sense. This notion applies, e.g., to Levy processes (in which case the Levy measure is regularly varying) and to filtered regularly varying Levy processes such as Ornstein-Uhlenbeck processes. Moreover, we study extensions of functional regular variation in the context of large deviations. The latter results can be applied to the asymptotic behaviour of multivariate ruin probabilities.
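For reference, one standard formulation of multivariate regular variation (the usual convention; the notation here is ours):

```latex
% A random vector X in R^d is regularly varying with index \alpha > 0 if
% there exist a_n \to \infty and a non-zero limit measure \mu on
% R^d \setminus \{0\} such that
n\,\mathbb{P}\bigl(a_n^{-1} X \in \cdot\bigr) \xrightarrow{v} \mu(\cdot),
\qquad
\mu(tA) = t^{-\alpha}\,\mu(A) \quad \text{for all } t > 0,
```

where the convergence is vague convergence on sets bounded away from the origin; the scaling property of mu encodes the power-law behaviour of the tails.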
Stable laws are important in probability theory, since they appear as limit distributions for sums of random variables or random vectors normalised by scaling. For instance, the normal law might appear if the sum of n summands is normalised by the square root of n. Apart from the normal distribution, another well-known example of a stable law is the Cauchy distribution. The Cauchy distribution and other non-Gaussian stable laws appear if the normalisation is properly chosen and the summands have sufficiently heavy tails.
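The Cauchy case can be illustrated numerically: by stability, the average of n iid standard Cauchy variables is again standard Cauchy (the proper normalisation is by n, not by the square root of n), so sample means do not concentrate no matter how many terms are averaged. A minimal sketch (our own illustration, not from the talk):

```python
import math
import random

def cauchy_means(n_sums, n_terms, seed=0):
    """Return n_sums sample means, each over n_terms iid standard Cauchy
    variables generated by the inverse-CDF method tan(pi * (U - 1/2)).
    By stability, each mean is itself standard Cauchy."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_sums):
        s = sum(math.tan(math.pi * (rng.random() - 0.5)) for _ in range(n_terms))
        means.append(s / n_terms)
    return means
```

A simple check of this is that the median of |X| for a standard Cauchy X equals 1 (since P(|X| <= x) = (2/pi) arctan(x)), and the empirical median of the absolute sample means stays near 1 regardless of n_terms.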
The probabilistic properties of univariate stable distributions are by now well understood. It should be noted, however, that the expressions for their densities are very complicated. In the multivariate case the densities are not known, apart from the normal and Cauchy cases. Some moments are available only for isotropic stable laws.
The talk will begin with a gentle introduction to multivariate stable laws and the expression of their characteristic function. Then I plan to explain how to relate a symmetric stable law to a star-shaped (or, in some cases, convex) set, so that important probabilistic quantities become geometric functionals of this set. In particular, I will provide expressions for moments of the Euclidean norm of a stable vector, mixed moments and various integrals of the density function. It will be shown how probabilistic properties of multivariate stable laws are related to major problems in convex geometry, some of which were solved only recently after remaining open for a long time.
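In one standard form (for the symmetric case; the notation here is the usual one and not taken from the talk), the characteristic function mentioned above reads:

```latex
% Symmetric alpha-stable vector X in R^d with finite spectral measure
% \Gamma on the unit sphere S^{d-1}:
\mathbb{E}\,e^{i\langle u, X\rangle}
  = \exp\Bigl(-\int_{S^{d-1}} |\langle u, s\rangle|^{\alpha}\,\Gamma(ds)\Bigr),
\qquad 0 < \alpha < 2.
```

It is the integral appearing in the exponent that can be interpreted as a geometric functional of an associated star-shaped set.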
Furthermore, the geometric role of sub-Gaussian laws and their relationship to general stable laws will be explained. It will also be shown how to interpret regression, orthogonality and covariation concepts for symmetric stable distributions geometrically. These geometric interpretations are also useful in a financial context, for instance in the optimisation of portfolios with stable returns.