The computational astrophysics group brings state-of-the-art numerical techniques to bear on otherwise intractable astronomical problems with complex geometries, multiple scales, and physical interactions. Work in our group has led to new insight into galactic interactions and the fundamental mechanisms governing their underlying dynamics. We continue to explore the interrelationships that drive galaxy evolution through the hybrid application of Hamiltonian perturbation theory and massively parallel particle simulation.

Recent advances in digital and detector technology have enabled data collection in unprecedented volumes. My most recent push is a strategy for scientific understanding through detailed theory-data comparisons, work that exploits the rich conceptual connections between physical science, computer science, and mathematics.

Galaxy dynamics

Dynamicists attempt to understand the long-term evolution of galaxies after their formation. Even casual examination shows that most disk galaxies are not truly symmetric but exhibit a variety of morphological peculiarities of which spiral arms and bars are the most pronounced. After decades of effort, we now know that these features may be driven by environmental disturbance acting directly on the disk, in addition to self-excitation of a local disturbance. However, all disks are embedded within halos and therefore are not dynamically independent. Are halos susceptible to such disturbances as well? If so, can they affect disks and on what time scales?

Until recently, conventional wisdom held that halos acted to stabilize disks but otherwise remained relatively inert. However, I have shown both analytically and through numerical simulation that halos do respond to tidal encounters with companions or cluster members and are susceptible to long-lived modes. The Magellanic Clouds, companion galaxies to our own Milky Way, excite such modes and thereby produce distortions in the Galactic disk sufficient to account for the radial location, position angle, and sign of the warp, as well as observed anomalies in stellar kinematics. Repeated distant encounters between disk galaxies in galaxy clusters cause them to evolve into dwarf spheroidal galaxies.

The same mechanisms that excite structure in the simulation cause the perturber itself to evolve. Evolution and disruption of galaxies orbiting in the gravitational field of a larger cluster galaxy are driven by three coupled mechanisms: 1) tidal heating due to the satellite's time-dependent motion in the primary; 2) mass loss due to the tidal field; and 3) orbital decay. In addition, the stellar density in globular clusters is sufficiently high that the "gas" of stars attempts to reach thermal equilibrium through two-body relaxation. The complex interplay between tidal heating and relaxation drives globular clusters and dwarf galaxies to disruption!
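The relaxation at work here proceeds on the standard two-body relaxation time scale, which for a system of $N$ comparable-mass stars may be sketched as
\begin{equation}
t_{\mathrm{relax}} \simeq \frac{0.1\,N}{\ln N}\, t_{\mathrm{cross}},
\end{equation}
where $t_{\mathrm{cross}}$ is the crossing time. For a globular cluster with $N \sim 10^5$ and $t_{\mathrm{cross}} \sim 1$ Myr this gives a relaxation time of order a Gyr, well within the cluster's lifetime, whereas a galaxy with $N \sim 10^{11}$ stars is effectively collisionless over its entire history.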

The outer parts of galaxies like our Milky Way are less than 10 dynamical times old, so primordial inhomogeneities will not have had time to phase mix, and continued disturbances from mergers will not have relaxed. These, as well as any intrinsic sources of noise, e.g. a population of $10^6$ solar-mass black holes or dark clusters, are amplified by the self-gravity of the halo and spheroid and create potentially observable asymmetries in the stellar and gaseous Galactic disk. We are actively exploring the observational consequences of the resulting fluctuations for long-term Galactic evolution and investigating possible limits on the nature and extent of the dark matter halo.

Technical details

These problems have been addressed by a hybrid use of n-body simulation and perturbation theory that I have helped develop and refine over the last ten years. The perturbation method, sometimes called matrix mechanics, solves the equations of motion as a non-linear eigenvalue problem. This general scheme allows the long-term evolution of slow patterns to be followed without the particle noise and diffusion that can obscure and change the underlying dynamics. The scientific interplay between the n-body method and matrix mechanics runs in both directions. First, mechanisms identified by perturbation theory are corroborated by simulation. Because matrix mechanics and our n-body simulations share the same gravitational potential representation, a disturbance can be compared and studied harmonic order by harmonic order. Conversely, structure produced by simulation is investigated simply by using the harmonic coefficients from the n-body code as input to the matrix mechanics. This scheme is well-suited to parallel computation and is implemented on our Beowulf cluster\footnote{a dedicated, intensively networked cluster of commodity computers that achieves supercomputer performance}.
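The idea of studying a disturbance harmonic order by harmonic order can be illustrated with a toy azimuthal Fourier analysis of disk particles (the production codes use full biorthogonal basis expansions; the function and parameter names below are hypothetical, chosen for illustration):

```python
import numpy as np

def azimuthal_harmonic(x, y, mass, m=2):
    """Amplitude and phase of the m-th azimuthal Fourier harmonic of a
    particle distribution: c_m = sum_j mass_j exp(-i m phi_j) / sum_j mass_j."""
    phi = np.arctan2(y, x)
    c = np.sum(mass * np.exp(-1j * m * phi)) / np.sum(mass)
    return np.abs(c), np.angle(c) / m

# Toy disk: axisymmetric background plus a weak m=2 (bar-like) perturbation,
# imprinted by rejection sampling of a 1 + 0.1*cos(2*phi) azimuthal density.
rng = np.random.default_rng(42)
n = 100_000
phi = rng.uniform(0.0, 2.0 * np.pi, n)
keep = rng.uniform(0.0, 1.1, n) < 1.0 + 0.1 * np.cos(2.0 * phi)
phi = phi[keep]
r = rng.exponential(3.0, phi.size)  # exponential radial profile
amp, ang = azimuthal_harmonic(r * np.cos(phi), r * np.sin(phi),
                              np.ones(phi.size))
print(amp, ang)  # amp ~ 0.05, i.e. half the imposed 10% density modulation
```

The measured coefficient, computed independently at each harmonic order, is exactly the kind of quantity that can be passed between an n-body code and the matrix-mechanics calculation for a term-by-term comparison.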

Computational Statistics

Although the global structure of distant galaxies is plain to see, our own Milky Way is the best place to understand the details. In part motivated by our work here at UMass, many astronomers now believe that the Galaxy has bar-like features as well as spiral arms.

Inferring the structure of our own Galaxy is notoriously difficult. In principle, one could count the stars in different directions and infer the shape of the Galaxy and our position within it from the variation in number (a so-called "star-count analysis"). However, our view of the Milky Way is hampered by several tens of magnitudes of extinction in the Galactic plane toward the Center. Fortunately, the astronomical community is currently engaged in, or has recently produced, very large optical, radio, and infrared surveys. Through the Department of Astronomy, UMass is the lead institution in the 2-Micron All-Sky Survey (2MASS). Extinction in the near-infrared is nearly an order of magnitude smaller than in the visual bands, presenting an unprecedented opportunity to ascertain the large-scale stellar distribution. The combined leverage of all these surveys will greatly improve our understanding of the Galaxy and a wide variety of astronomical problems.
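The near-infrared advantage is easy to quantify with the standard distance-modulus relation, m = M + 5 log10(d/10 pc) + A. The sketch below uses representative values (assumptions for illustration, not survey measurements) for a luminous giant at the Galactic center:

```python
import math

def apparent_mag(M, d_pc, A):
    """Apparent magnitude from absolute magnitude M, distance in pc,
    and extinction A in magnitudes: m = M + 5*log10(d_pc) - 5 + A."""
    return M + 5.0 * math.log10(d_pc) - 5.0 + A

# A luminous M giant at the Galactic center, d ~ 8 kpc.
# Assumed representative values: M_V ~ -1, M_K ~ -5.
# Extinction toward the center: A_V ~ 30 mag in the visual; in the
# near-infrared K band roughly a tenth of that, A_K ~ 3 mag.
m_V = apparent_mag(-1.0, 8000.0, 30.0)  # ~ 43.5: hopelessly undetectable
m_K = apparent_mag(-5.0, 8000.0, 3.0)   # ~ 12.5: readily observable
print(round(m_V, 1), round(m_K, 1))
```

The same star that is buried under 30 magnitudes of visual extinction is thus a routine detection in the near-infrared, which is why a survey like 2MASS can map the stellar distribution through the Galactic plane.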

Although the computational power to study terabyte-size scientific datasets exists, an optimized, integrated statistical analysis system does not. As a case in point, new astronomical datasets ranging in size from 0.1 to 20 TB will be available within the next several years but will lack appropriate tools for their analysis. In short, data collection outstrips analysis capabilities in nearly all fields. Our proposed work combines state-of-the-art approaches in numerical statistics, data representation, and efficient computation to provide analysis tools for terabyte-class scientific datasets. The disciplines covered specifically include astronomy and Earth-based and geographical resource management, such as forestry and ecology, but our approach applies to any inference problem with mapped data.

The need for a computationally optimized method for theory--data statistical analysis is easily motivated. The information content of large databases can in principle determine models with many parameters. This class of estimation problem is readily posed with maximum likelihood or more general Bayesian analyses, which determine model parameters while allowing for straightforward incorporation of heterogeneous selection biases. Although information theory tells us that we can continue to increase the statistical confidence to any desired level by adding more data, the computational complexity of such a parameter estimation grows quickly with the number of model parameters and becomes intractable before the volume of currently available large data sets is reached. Together with faculty from the Computer Science Department at UMass/Amherst, we are developing a system that removes the computational bottleneck by exploiting the inherent hierarchical nature of spatially mapped data and the low-level symmetries inherent in Bayesian computation to simultaneously optimize I/O, internode communication, and arithmetic operations.
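A minimal sketch of the estimation problem (with a hypothetical two-parameter model, chosen for illustration) shows why naive approaches fail to scale: brute-force maximum likelihood on a grid costs (points per axis) raised to the number of parameters, even though each evaluation is cheap.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mapped data: Poisson counts in 200 bins whose expected
# value varies linearly across the map, lambda_i = a + b * x_i.
x = np.linspace(0.0, 1.0, 200)
a_true, b_true = 20.0, 30.0
counts = rng.poisson(a_true + b_true * x)

def log_likelihood(a, b):
    """Poisson log-likelihood, dropping the data-only factorial term."""
    lam = a + b * x
    return np.sum(counts * np.log(lam) - lam)

# Brute-force grid search: cost scales as (points per axis) ** (number
# of parameters), which is what makes naive estimation intractable for
# many-parameter models on terabyte-scale maps.
A = np.linspace(10.0, 30.0, 101)
B = np.linspace(20.0, 40.0, 101)
ll = np.array([[log_likelihood(a, b) for b in B] for a in A])
ia, ib = np.unravel_index(np.argmax(ll), ll.shape)
print(A[ia], B[ib])  # close to the true values (20, 30)
```

Already at two parameters the grid requires 10^4 likelihood evaluations; at ten parameters the same resolution would require 10^20, which is why exploiting the hierarchical structure of the data and the symmetries of the likelihood computation is essential.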