When Crisp Theory is Disrupted by Messy Reality

In so many human endeavors, there is a large gulf between an idea and its execution, between a philosophical ideal of how something works (or should work) and the implementation of a solution built upon that ideal to solve some problem. Everyone experiences this gap in some aspect of their lives. It is this gap that gives rise to my field of research, numerical analysis, a field I entered during my Ph.D. studies and have continued to work in during my time in Linz.

Math as a Language to Describe an Ideal World

In mathematical modeling, we use the language of calculus (differential operators, integrals, limits, etc.) to describe a physical system in the ideal, where measurements and calculations are infinitely precise. Everything works rather nicely on paper. However, when we design software that uses these models to simulate real-world phenomena for scientific or industrial purposes, the very real limits on precision, time, and memory can cause the software to fail, partially or totally, to deliver meaningful results. Thus, further considerations must be made.
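To make this concrete, here is a small Python sketch of my own (the numbers are arbitrary) showing how finite-precision arithmetic breaks identities that hold exactly on paper:

```python
# Two classic floating-point effects, illustrated with arbitrary numbers.
a = 0.1 + 0.2
print(a == 0.3)       # False: 0.1 and 0.2 have no exact binary representation
print(a - 0.3)        # a tiny but nonzero rounding error, about 5.6e-17

# Absorption: adding a small number to a huge one can lose it entirely.
x = 1.0e8
print((x + 0.5) - x)  # 0.5 -- still representable at this magnitude
y = 1.0e17
print((y + 0.5) - y)  # 0.0 -- the 0.5 was rounded away completely
```

Effects like these are harmless in isolation, but accumulated over billions of operations they are exactly what can make a simulation fail.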

Numerics to the Rescue!

My background is in numerical analysis, with a focus on numerical linear algebra. In Austria, I have had the freedom to continue my previous work while also finding new opportunities to collaborate with colleagues here. The tools of linear algebra as we know them today were mostly developed between the mid-1800s and the early 1900s. They have been of fundamental importance in fields such as theoretical physics, but linear algebra itself was considered essentially complete by the early-to-mid 1900s and had long ceased to be an active field of research. It was the advent of computers that breathed new life into the subject, creating the field of numerical linear algebra.

Problems arising in a diverse set of scientific endeavors (e.g., physics simulations, statistics, image/signal reconstruction, ballistic trajectory calculations, information retrieval) can be represented or approximated by large linear systems, sometimes with millions or even billions of unknowns. Computers are required to manipulate and solve such systems. This leads to multiple difficulties. The theory of linear algebra works on paper in exact arithmetic, but computers must round after each calculation to store the result. The wider field of numerical analysis often deals with the design and analysis of algorithms which are stable in the presence of errors introduced by rounding.
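As a toy illustration (in Python with NumPy and SciPy; the matrix and its size are chosen purely for demonstration), even a small linear system can lose most of its digits to rounding when it is ill-conditioned:

```python
import numpy as np
from scipy.linalg import hilbert

n = 12
A = hilbert(n)              # the Hilbert matrix, a classic ill-conditioned example
x_true = np.ones(n)         # choose the exact solution in advance
b = A @ x_true              # build a consistent right-hand side

x = np.linalg.solve(A, b)   # solve in ordinary double precision
print("condition number: %.2e" % np.linalg.cond(A))
print("relative error:   %.2e" % (np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))
```

Although each arithmetic step is accurate to about 16 digits, the condition number of the matrix tells us how strongly those rounding errors can be amplified in the computed solution.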

A further complication is that the linear algebra problem may be so large that it does not fit in the computer’s memory, or it may otherwise not be explicitly available. One only has a computer program which, given a point in space, maps it to another point in space. This mapping can be used to implicitly represent the linear problem we want to solve. The field of numerical linear algebra therefore also concerns itself with the design and analysis of algorithms for solving linear algebraic problems for which one only has this implicit mapping representation (so-called “matrix-free” methods).
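Here is a minimal matrix-free sketch, again in Python with SciPy (the operator, a one-dimensional discrete Laplacian, is just a stand-in for whatever “black box” mapping an application provides):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 1_000  # size chosen arbitrarily for the demo

def apply_A(v):
    # Apply a 1-D discrete Laplacian (2 on the diagonal, -1 next to it)
    # to the vector v without ever forming or storing the matrix.
    w = 2.0 * v
    w[:-1] -= v[1:]
    w[1:] -= v[:-1]
    return w

A = LinearOperator((n, n), matvec=apply_A)  # the "matrix" is only this mapping
b = np.ones(n)

x, info = cg(A, b)  # conjugate gradients needs nothing but matrix-vector products
print("converged" if info == 0 else "not converged",
      "| residual norm:", np.linalg.norm(apply_A(x) - b))
```

The iterative solver never asks for a matrix entry; it only asks for the mapping to be applied to vectors, which is what makes such methods usable when the matrix is too large to store or is never formed at all.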

An Example from eXtreme Adaptive Optics

[Image: Artist’s impression of the European Extremely Large Telescope]

Daniela Saxenhuber previously described the work in her doctoral thesis on the development of fast algorithms for atmospheric tomography in Adaptive Optics systems for the European Extremely Large Telescope (E-ELT). Many of the sub-problems in this project can be approximated by linear algebra problems, which must be solved quickly and efficiently due to the strict computational time requirements of the project. One such example is the computation of the positions of the deformable mirrors used to correct the image degradation caused by atmospheric turbulence. Indeed, in this case one only has a mapping, as described above, which represents the linear problem. Many tools from numerical linear algebra have been used to propose fast methods for solving this sub-problem, with some methods going so far as to exploit the computer’s own architecture to gain additional speed.

Maxwell’s Equations-based Industry Project

Aside from my research, I also currently work on a project with an industry partner. In this project, we want to approximately calculate the magnetic field generated by large custom-built electrical devices in order to assist with their construction. The project begins with a well-understood mathematical model of the underlying physics (Maxwell’s equations), and the goal is to use various techniques of numerical analysis to approximate the problem of magnetic field calculation by a linear algebra problem, which can then be solved using standard methods from my field. Such codes already exist, both in the public domain and from our industry partner. Our present task has been to produce software that does the calculation faster than these existing codes, at the expense of some accuracy. To do this, we take advantage of the fact that although the problem (and the device in question) exists in three dimensions, which is computationally quite intensive, one can use certain tricks to reduce it to a problem in only one dimension, which can then be solved at a greatly reduced computational expense. We are currently in the final software implementation phase.
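The actual reduction used in the project is more involved than I can show here, but the following toy Python sketch conveys the general idea under an assumed symmetry: for an infinitely long, cylindrically symmetric conductor (radius and current are made-up values), Ampère’s law collapses the three-dimensional magnetostatic problem to a one-dimensional calculation in the radial coordinate:

```python
import numpy as np

mu0 = 4e-7 * np.pi     # vacuum permeability [H/m]
a = 0.05               # conductor radius [m]  (made-up value)
I_total = 100.0        # total current [A]     (made-up value)

r = np.linspace(1e-6, 0.2, 2000)                       # 1-D radial grid, avoiding r = 0
J = np.where(r <= a, I_total / (np.pi * a**2), 0.0)    # uniform current density inside the conductor

# Enclosed current as a cumulative 1-D trapezoidal integral of J * 2*pi*r.
integrand = J * 2.0 * np.pi * r
I_enc = np.concatenate(([0.0],
                        np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))))

# Ampere's law under cylindrical symmetry: B(r) = mu0 * I_enclosed(r) / (2*pi*r).
B = mu0 * I_enc / (2.0 * np.pi * r)
print("max |B| on the grid: %.3e T at r = %.3f m" % (B.max(), r[np.argmax(B)]))
# The analytic maximum sits at the conductor surface: mu0*I/(2*pi*a) = 4.0e-4 T.
```

A one-dimensional integral like this costs next to nothing compared with a full three-dimensional field computation, which is the kind of saving the dimension-reduction tricks in the project aim for.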
