## Math/ICES Center of Numerical Analysis Seminars (Fall 2010)

Time and location are listed with each talk.

### Lexing Ying (Austin): Sweeping Preconditioners for the Helmholtz Equation

*09/07/2010 (Tuesday), 3:30-5:00 PM, ACE 6.304. Host: Leszek Demkowicz.*

Numerical solution of the variable-coefficient Helmholtz equation in the high-frequency regime is a challenging computational problem, due to the indefiniteness of the operator and the large size of the discrete system. In this talk, we introduce sweeping preconditioners for the rapid solution of the variable-coefficient Helmholtz equation. The novelties of this new class of preconditioners are a specific order of eliminating the unknowns and efficient representations of the Schur complement matrices. For a problem with N unknowns, these preconditioners take essentially O(N) steps to apply and give iteration numbers that are independent of the frequency; hence they provide a linear-complexity method for solving the variable-coefficient Helmholtz equation.

### Catherine Kublik (Austin): Topics in PDE-Based Image Processing

*09/24/2010 (Friday), 3:00-4:00 PM, ACE 6.304. Host: Kui Ren.*

In the first part of the talk, I will describe new, efficient and accurate algorithms for computing certain area-preserving geometric motions of curves in the plane. These algorithms alternate two very simple and fast operations, namely convolution with the Gaussian kernel and construction of the signed distance function, to generate the desired geometric flow in an unconditionally stable manner. I will present applications to large-scale simulations of coarsening, and to inverse problems from medical imaging. Joint work with Selim Esedoglu and Jeffrey Fessler. In the second part of the talk, I will present rigorous results on the coarsening rate of a class of high-order, ill-posed diffusion equations from image processing.
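The convolution-then-thresholding alternation described in the first part of this abstract is in the spirit of the Merriman-Bence-Osher (MBO) scheme. Below is a minimal sketch of the plain (mean-curvature, non-area-preserving) version, acting on a characteristic function rather than the signed distance function used in the talk; the grid size and kernel width are illustrative choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mbo_step(chi, sigma):
    """One MBO step: blur the characteristic function with a Gaussian,
    then threshold at 1/2. This approximates motion by mean curvature;
    the talk's variants adjust the scheme to preserve area and to use
    the signed distance function instead."""
    return (gaussian_filter(chi, sigma) >= 0.5).astype(float)

# A square evolves under this motion: its corners round off and it shrinks.
n = 64
chi = np.zeros((n, n))
chi[16:48, 16:48] = 1.0   # characteristic function of a square
for _ in range(5):
    chi = mbo_step(chi, sigma=2.0)
```

Each step is two fast, unconditionally stable operations, which is the efficiency point made in the abstract.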
The fourth-order version of these equations constitutes the main motivation, since it corresponds to a well-known model in the image-denoising literature proposed by You and Kaveh: it is used to denoise images while maintaining sharp object boundaries (edges), and was intended to be an improved version of the famous Perona-Malik equation. I will follow a technique of Kohn and Otto to establish rigorous upper bounds on the coarsening rate of these high-order equations in any space dimension and for a large class of diffusivities.

### Hongkai Zhao (Irvine): A New Approximation for Effective Hamiltonians for Homogenization of a Class of Hamilton-Jacobi Equations

*09/29/2010 (Wednesday), 3:00-4:00 PM, ACE 6.304. Host: Richard Tsai.*

We propose a new formulation to compute effective Hamiltonians for homogenization of a class of Hamilton-Jacobi equations. Our formulation utilizes a special property of viscosity supersolutions of convex Hamilton-Jacobi equations. The key idea is how to link the effective Hamiltonian to a suitable effective equation. The main advantage of our formulation is that only one auxiliary equation needs to be solved in order to compute the effective Hamiltonian $\bar{H}(p)$ for all $p$. Error estimates and stability will be proved, and numerical examples will be presented.

### Tim Sheng (Baylor): From the Computation of Matrix Exponentials to Exponential-Transformation-Based Splitting Schemes for Solving Highly Oscillatory Differential Equations

*10/01/2010 (Friday), 3:00-4:00 PM, ACE 6.304. Host: Kui Ren.*

Splitting, or decomposition, finite difference methods have played an important role in the numerical solution of nonsingular partial differential equations, owing to their remarkable efficiency, simplicity and flexibility in computations as compared with their peers.
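The basic splitting idea can be illustrated with a toy example (plain Strang splitting of a 2x2 matrix exponential; the matrices and step sizes here are assumptions for illustration, not the exponential-transformation schemes of the talk):

```python
import numpy as np
from scipy.linalg import expm

# Strang splitting: approximate exp((A1 + A2) * dt) by
# exp(A1*dt/2) @ exp(A2*dt) @ exp(A1*dt/2).
# For non-commuting A1, A2 the one-step error is O(dt^3),
# so halving dt cuts it by roughly a factor of 8.
A1 = np.array([[0.0, 1.0],
               [0.0, 0.0]])
A2 = np.array([[0.0, 0.0],
               [-4.0, 0.0]])   # A1 + A2 generates a harmonic oscillator
A = A1 + A2

def strang(dt):
    return expm(A1 * dt / 2) @ expm(A2 * dt) @ expm(A1 * dt / 2)

def err(dt):
    return np.linalg.norm(strang(dt) - expm(A * dt))

print(err(0.1), err(0.05))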
Although the split-adaptation numerical strategy is still in its infancy for solving singular differential equation problems arising from many applications, explorations of next-generation decomposition schemes associated with various kinds of adaptations can be found in numerous recent publications. In this talk, we will recall some important studies and focus on the latest developments in the area. Comments will be devoted to the direct solutions of degenerate singular reaction-diffusion equations, nonlinear sine-Gordon wave equations and highly oscillatory wave equations. Numerical experiments will be given.

### Jennifer Young (Rice): A Continuum-Microscopic Modeling Method for Materials with Dynamic, Heterogeneous Micro-Structures

*10/22/2010 (Friday), 3:00-4:00 PM, ACE 6.304. Host: Bjorn Engquist.*

Creating accurate, macroscopic-scale models of microscopically heterogeneous media is computationally challenging. The difficulty is increased for materials with time-varying micro-structures. This talk will present a new continuum-microscopic (CM) modeling approach aimed at modeling such materials. Fibrous media are chosen as a class of materials upon which to present and test the algorithm. What is novel about this algorithm, compared to other CM methods, is that information from the material's micro-structure is saved over time in the form of probability distribution functions (PDFs). These PDFs are then extrapolated forward in time to predict what the micro-structure will look like in the future. Keeping track of the micro-structure over time allows for a more accurate computation of the local mechanical parameters used in the continuum-level equations. Results show that the mechanical parameters computed with this algorithm are similar to those computed with a fully microscopic model. Errors for continuum-level variables in the 5-10% range are deemed an acceptable trade-off for the savings in computational expense offered by this algorithm.
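A toy sketch of the bookkeeping idea just described (not the algorithm of the talk): summarize the micro-structure at each macro step by the parameters of an assumed Gaussian PDF, extrapolate those parameters forward in time, and evaluate a continuum-level quantity from samples of the extrapolated PDF. The "fiber angle" data and the mock "stiffness" functional are purely illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Microscopic" snapshots: fiber orientation angles whose mean and spread
# drift in time. At each macro step we keep only the fitted PDF parameters.
means, stds = [], []
for t in range(4):
    angles = rng.normal(loc=0.1 * t, scale=0.2 + 0.05 * t, size=2000)
    means.append(angles.mean())
    stds.append(angles.std())

# Predict the next step's PDF parameters by linear extrapolation,
# instead of re-running the expensive microscopic model.
mean_next = means[-1] + (means[-1] - means[-2])
std_next = stds[-1] + (stds[-1] - stds[-2])

# Sample a synthetic micro-structure from the extrapolated PDF and use it
# to evaluate a mock continuum-level mechanical parameter.
sample = rng.normal(mean_next, std_next, size=2000)
stiffness = np.mean(np.cos(sample) ** 2)
```

The trade-off claimed in the abstract is exactly this: a small extrapolation error in the stored PDF in exchange for skipping microscopic solves.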
### Yin Zhang (Rice): Some Recent Advances in Alternating Direction Methods: Practice and Theory

*10/29/2010 (Friday), 3:00-4:00 PM, ACE 6.304. Host: Kui Ren.*

The classic Augmented Lagrangian Alternating Direction Method (ALADM, or simply ADM) has recently found great utility in solving convex separable optimization problems arising from signal/image processing and sparse optimization. In this talk, we briefly introduce the classic ADM approach and give some recent examples of its applications, including extensions to non-convex and non-separable problems. We then present new local convergence results that extend the classic ADM convergence theory in several respects.

### Houman Owhadi (Caltech): Optimal Uncertainty Quantification

*11/11/2010 (Thursday), 3:30-5:00 PM, ACE 6.304. Host: Lexing Ying.*

We propose a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront. This framework, which we call *Optimal Uncertainty Quantification* (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as extreme values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. Although OUQ optimization problems are extremely large, we show that under general conditions they have finite-dimensional reductions. As an application, we develop *Optimal Concentration Inequalities* of Hoeffding and McDiarmid type. Surprisingly, contrary to the classical sensitivity analysis paradigm, these results show that uncertainties in input parameters do not necessarily propagate to output uncertainties.
In addition, a general algorithmic framework is developed for OUQ and is tested on the Caltech surrogate model for hypervelocity impact, suggesting the feasibility of the framework for important complex systems. This is joint work with C. Scovel, T. Sullivan, M. McKerns and M. Ortiz. A preprint is available at http://arxiv.org/abs/1009.0679v1.

### Rachel Ward (NYU): Fast Dimensionality Reduction for High-Dimensional Data Sets: Generalizations, Improved Bounds, and Implications for Compressed Sensing

*11/12/2010 (Friday), 1:00-2:00 PM, RLM 11.176. Host: Lexing Ying.*

Embedding high-dimensional data sets into subspaces of much lower dimension is important for reducing storage cost and speeding up computation; this problem arises, for example, in numerical linear algebra, manifold learning, and computer science. The relatively new field of compressed sensing is based on the observation that if the high-dimensional signals are sparse in a known basis, they can be embedded into a lower-dimensional space in a manner that permits their efficient recovery through l1-minimization. In this talk, I'll give an overview of compressed sensing, highlighting two important results: the Restricted Isometry Property and the Johnson-Lindenstrauss Lemma. Then I'll discuss the "near-equivalence" of these two results, focusing on work I did with Felix Krahmer. The near-equivalence provides the best-known bounds for dimensionality reduction of arbitrary data sets using structured, or "fast", linear embeddings.

### Wenjia Jing (Columbia): Corrector Theory for MsFEM and HMM in Random Media

*11/19/2010 (Friday), 3:00-4:00 PM, ACE 6.304. Host: Kui Ren.*

Corrector theory, i.e., the theory of deviations from homogenization, for some linear equations in heterogeneous random media has been established in the stationary ergodic setting.
Here, we develop methods to assess whether certain numerical schemes, which successfully approximate the homogenized solution, are able to capture the right corrector indicated by the theory as the discretization size goes to zero. We analyze, in particular, the Multi-scale Finite Element Method (MsFEM) and the Heterogeneous Multi-scale Method (HMM) applied to a one-dimensional second-order ODE with an elliptic random coefficient. The corrector for this equation is characterized as a Gaussian process in [Bourgeat and Piatnitski 1999] for short-range media, and in [Bal, Garnier, Motsch, and Perrier 2008] for long-range media. Our analysis of the numerical schemes shows that the MsFEM corrector converges to the right one, but the HMM corrector converges to an amplified version, with the amplification factor depending on a parameter in the scheme and on the parameter describing how strongly the random medium is correlated. We then propose modifications of HMM that eliminate this amplification effect. Our proofs are based on a detailed analysis of the structure of the stiffness matrix and the statistics of its entries, combined with central limit theorems and tools for proving weak convergence in the space of continuous paths. This is joint work with Guillaume Bal.

### Paul Childs (Schlumberger Cambridge Research)

*12/06/2010 (Monday), 1:00-2:00 PM, RLM 11.176. Host: Lexing Ying.*

We will give a general overview of some of the challenges involved in modern seismic imaging for hydrocarbon exploration. The talk will be illustrated with a number of examples, and we will describe some of the computational challenges which arise. Aspects of nonlinear optimization and the numerical solution of the wave equation will be discussed. The talk will be suitable for a general audience.