From Black Holes to Cosmology: The Universe in the Computer
Laboratoire Univers et Théories (LUTH)
Observatoire de Paris, CNRS, Université Paris Diderot
5 Place Jules Janssen, 92190 Meudon, France
E-mail: jean-pierre.luminet@obspm.fr
Abstract. I discuss some aspects of the use of computers in Relativity, Astrophysics and Cosmology. For each section I provide two examples representative of the field, including gravitational collapse, black hole imagery, supernova explosions, star-black hole tidal interactions, N-body cosmological simulations and the detection of cosmic topology.
1. Introduction
In this plenary lecture I shall discuss various aspects of the use of computers in relativity, astrophysics and cosmology, by presenting a few representative examples of what is being done in the field. These examples are by no means exhaustive; they reflect a personal choice of subjects that I know better than others, such as black holes and cosmology, and I apologize for not discussing all the excellent work in numerical astrophysics that could not find a place in this short presentation.
Of course computers are present in all the activities of astronomers and astrophysicists, but it is important to distinguish numerical modelling (involving computer simulations) from data treatment and analysis – although both may consume a lot of computing time and memory.
As a theoretician, I shall concentrate mainly on numerical modelling.
Before discussing the physics, let us begin with a brief survey of the computing facilities available to the astronomical community, from the local scale of a laboratory to the global scale of international organizations.
• At the local scale, e.g. a typical laboratory comprising about fifty researchers in theoretical astrophysics, such as mine (Laboratory Universe and Theories), most calculations are performed on sequential machines – personal laptops and desktop computers – and on small clusters (10 to 100 nodes) owned by the laboratory. For heavier calculations, use is made of mesocentres (around 200 nodes) located at Paris Observatory, and of grids and supercomputers (more than 10,000 nodes) owned by national institutions.
Numerical tools are developed in-house, including codes for hydrodynamics, shock-capturing methods, N-body simulations, ray-tracing, etc., as well as numerical libraries, which provide sets of tools in the form of free public software for building various codes. For instance, the publicly available library Kadath [http://luth.obspm.fr/~luthier/grandclement/kadath.html] implements spectral methods for solving partial differential equations in general relativity, astrophysics and theoretical physics, written in C++ or Fortran 77. Fully parallel and running on several hundred processors, Kadath has successfully recovered known solutions in various areas of physics and is the main tool for the study of quasiperiodic solutions in gauge field theory.
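Kadath itself is a C++ library; nothing below uses its actual API. The core idea of spectral collocation, however, can be sketched in a few lines of Python: a Chebyshev differentiation matrix turns a boundary-value problem into a small dense linear system whose error decreases exponentially with the number of grid points.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and Gauss-Lobatto points on [-1, 1]
    (standard construction, cf. Trefethen's "Spectral Methods in MATLAB")."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))        # "negative sum trick" for the diagonal
    return D, x

# Toy boundary-value problem: u'' = exp(x) with u(-1) = u(1) = 0
N = 16
D, x = cheb(N)
D2 = D @ D
A = D2[1:-1, 1:-1]                     # impose Dirichlet BCs by dropping boundary rows/cols
u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(A, np.exp(x[1:-1]))

exact = np.exp(x) - x * np.sinh(1.0) - np.cosh(1.0)
print(np.max(np.abs(u - exact)))       # spectral accuracy: tiny error with only 17 points
```

This exponential ("spectral") convergence for smooth solutions is what makes such methods attractive for the stationary and quasi-stationary spacetimes Kadath targets, where finite differences would need far more points for the same accuracy.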
• At the scale of a country such as France, computing facilities are provided by governmental organizations. For instance, GENCI (Grand Equipement National de Calcul Intensif [http://www.genci.fr/en]) coordinates the principal French equipment in high-performance computing. This legal entity takes the form of a «société civile» under French law, owned by the French State represented by the Ministry for Higher Education and Research, by CEA (the French Alternative Energies and Atomic Energy Commission), CNRS (National Centre for Scientific Research) and the universities. Created in 2007, GENCI implements and ensures the coordination of the major equipment of the national High Performance Computing (HPC) centres by providing funding and by assuming ownership, promotes the organization of a European HPC area and participates in its achievements, and sets up R&D collaborations in order to optimize HPC.
• At a more global scale, that of the European Union, the PRACE infrastructure (Partnership for Advanced Computing in Europe [http://www.prace-ri.eu]) creates a pan-European supercomputing infrastructure for large-scale scientific and engineering applications. The computer systems and their operations accessible through PRACE are provided by four member countries (BSC representing Spain, CINECA representing Italy, GCS representing Germany and GENCI representing France), which committed a total funding of €400 million for the initial PRACE systems and operations. In pace with the needs of the scientific communities and with technical developments, the systems deployed by PRACE are continuously updated and upgraded to remain at the apex of HPC technology.
The Curie supercomputer, owned by GENCI and operated by CEA, is the first French Tier-0 system open to scientists through the French participation in the PRACE infrastructure. It offers a peak performance of 1.7 petaflops, but at the present date (September 2013) it ranks only 15th in the list of the most powerful computers.
For the sake of comparison, table 1 below lists some of the TOP 20 supercomputer sites for June 2013 (keeping in mind that the list evolves rapidly).
We conclude this introduction by recalling the common definition of what is called a “numerical grand challenge”: a fundamental problem in science or engineering, with broad applications, whose solution would be enabled by the application of HPC resources that could become available in the near future. Each grand challenge requires simulations on supercomputers with more than 10,000 nodes.
The specificities and requirements of such programs are massive parallelization, very large fluxes of I/O data, and full teams of researchers, engineers and technicians to operate them correctly. In the last section of this article we briefly describe the numerical grand challenge in cosmology.
2. Numerical Relativity
Numerical relativity aims to obtain solutions to Einstein's equations with computers. As it provides a powerful tool to explore fundamental problems in physics and astrophysics, it has been a constant field of research since the 1990s, but spectacular progress has been made in the last decade owing to
• larger computational facilities
• more advanced and accurate numerical techniques
• new formulations of Einstein's and the magnetohydrodynamics (MHD) equations well-suited for numerical evolution.
Indeed, a very important step is the choice of formulation and gauge for Einstein's equations: as these are covariant, one must specify a “good” gauge in order to write them as a well-posed system of partial differential equations, which can also be numerically integrated without the appearance of instabilities. In this section we discuss two representative examples, related to gravitational collapse and black hole imagery.
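The danger of numerical instability can be felt on a toy problem far simpler than Einstein's equations (this is an illustration of the general phenomenon, not a relativity code): a one-dimensional advection equation evolved with a first-order upwind scheme is stable only when the time step respects the Courant condition.

```python
import numpy as np

def advect(courant, nsteps=400, nx=200):
    """Evolve u_t + u_x = 0 on a periodic grid with the first-order upwind
    scheme; the discretization is stable only for a Courant number <= 1."""
    dx = 1.0 / nx
    x = np.arange(nx) * dx
    u = np.exp(-100.0 * (x - 0.5) ** 2)          # Gaussian pulse
    for _ in range(nsteps):
        u = u - courant * (u - np.roll(u, 1))    # u^{n+1}_j = u_j - C (u_j - u_{j-1})
    return np.max(np.abs(u))

print(advect(0.9))   # stable: stays bounded by the initial maximum
print(advect(1.1))   # unstable: grid-scale noise is amplified by many orders of magnitude
```

An evolution scheme for a well-posed formulation of the field equations must pass exactly this kind of stability requirement, which is why the choice of formulation and gauge matters so much in practice.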
2.1. From gravitational collapse to gamma-ray bursts
Gamma-ray bursts (GRBs) are flashes of gamma rays associated with extremely energetic explosions that have been observed in distant galaxies. They can last from ten milliseconds to several minutes, and their light curves vary considerably from one event to another. Two relatively distinct groups can nevertheless be identified: short bursts (SGRBs), representing about one-third of those observed, with a mean duration of 0.3 seconds, and long bursts, which last more than 2 seconds, with a mean of 30 seconds. It is tempting to explain this apparent dichotomy by different physical origins, especially as short bursts generally emit harder radiation (i.e. of higher frequency) than long bursts. Most investigators consider that GRBs are linked to the formation of stellar-mass black holes, whose strong gravitational fields can result in the release of more energy. Stellar black holes could be formed in at least two ways: either by the catastrophic collapse of a very massive star, or by the merger of two compact stars.
In the coalescence model, bursts would arise from a pair of neutron stars orbiting around each other, or from a pair consisting of a neutron star and a black hole. The theory of general relativity indicates that in such a situation the two compact stars rapidly lose orbital energy in the form of gravitational waves. Over time, the decrease in the energy of the pair inexorably shortens the distance between them. The ballet ends when the two bodies collide and merge, giving birth to a black hole and an accretion disc, accompanied by a spurt of ultrahot matter orthogonal to the disc, which is responsible for a short burst.
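The gravitational-wave-driven shrinking of the orbit can be quantified at quadrupole order with the classic Peters (1964) formula for the coalescence time of a circular binary; a back-of-the-envelope sketch (the masses and separations are illustrative choices, not taken from a specific simulation):

```python
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
Msun = 1.989e30      # kg

def coalescence_time(a0, m1, m2):
    """Peters (1964) quadrupole-order merger time [s] of a circular binary
    with initial separation a0 [m]: t = 5 c^5 a0^4 / (256 G^3 m1 m2 (m1+m2))."""
    return 5.0 * c**5 * a0**4 / (256.0 * G**3 * m1 * m2 * (m1 + m2))

m = 1.4 * Msun       # typical neutron-star mass
for a_km in (20.0, 100.0, 1000.0):
    t = coalescence_time(a_km * 1e3, m, m)
    print(f"a0 = {a_km:6.0f} km  ->  t_merge = {t:.3e} s")
    # e.g. from a 100 km separation the merger takes only a fraction of a second
```

Near the merger itself this point-mass estimate breaks down, which is precisely where the fully general-relativistic simulations described below take over.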
This qualitative scenario is supported to a good extent by fully general-relativistic simulations; I refer the reader to the excellent review by Rezzolla. The numerical investigation of the inspiral and merger of binary neutron stars in full general relativity has seen enormous progress in recent years. Crucial improvements in the formulation of the equations and in numerical methods, along with increased computational resources, have extended the scope of early simulations. These developments have made it possible to compute the full evolution, from large binary separations up to black-hole formation, without and with magnetic fields, and with idealised or realistic equations of state.
Numerical simulations show that the formation of a torus around a rapidly rotating black hole is inevitable (figure 1). They also provide the first evidence that the merger of a binary of modestly magnetised neutron stars naturally forms many of the conditions needed to produce a jet of ultrastrong magnetic field, with properties that are consistent with SGRB observations. This missing link between the astrophysical phenomenology of GRBs and the theoretical expectations is a genuine example of the potential of numerical relativity.
This remarkable advancement also provides information about the entire gravitational waveform, from the early inspiral up to the ringing of the black hole. Advanced interferometric detectors starting from 2014 (Advanced Virgo / Advanced LIGO) are expected to observe such sources at a rate of 40–400 events per year.
2.2. Black hole imagery
Since black holes cause extreme deformations of spacetime, they also create the strongest possible deflections of light rays passing in their vicinity and give rise to spectacular optical illusions. Today, it is possible to observe black holes only indirectly, through the effects that they have on their environment; for instance, their powerful gravitational field sucks the neighboring gas into accretion disks.
The first computer images of the appearance of a black hole surrounded by an accretion disk were obtained by myself in 1979. Calculations were done on an IBM 7040 machine of Paris Observatory; it was the time of punched cards, and no visualization device was available, so that I had to produce the final image by hand from the numerical data (figure 2)! The system is observed from a great distance at an angle of 10° above the plane of the disk. The light rays are received on a photographic plate (or rather a bolometer, in order to capture all wavelengths). A realistic image, e.g. taking account of the spacetime curvature, of the blue- and redshift effects, of the physical properties of the disk and so on, can be precisely calculated at any point of spacetime. Because of the curvature of spacetime in the neighborhood of the black hole, the image of the system is very different from the ellipses which would be observed if an ordinary celestial body (like the planet Saturn) replaced the black hole. The light emitted from the upper side of the disk forms a direct image and is considerably distorted, so that it is completely visible: there is no hidden part. The lower side of the disk is also visible as an indirect image, formed by highly curved light rays. In theory, there are multiple images of higher orders that give extremely distorted and ever more squashed views of the top and of the bottom, and so on to infinity.
With the improvement of computers, color and animated calculations were later performed by Marck on a DEC-VAX 8600 computer, see figure 3. At the time, black hole imagery was done mainly for pedagogical purposes, but the situation has changed very recently. Indeed, black hole physics will develop considerably in the coming years, driven by observations at very high angular resolution (micro-arcseconds), both in the infrared (GRAVITY instrument, installed at the VLT in 2014) and at submillimeter wavelengths (Event Horizon Telescope, circa 2020). For the first time, we shall see the immediate environment of the event horizon of a black hole, especially that of the central black hole of our Galaxy, Sgr A*. Thus it is now essential to prepare these observations with numerical simulations which compute images and spectra of black holes and their close environment. Figure 4 depicts a result recently obtained with a new ray-tracing code, GYOTO (General relativitY Orbit Tracer of Paris Observatory), publicly released at http://gyoto.obspm.fr. With respect to existing codes, GYOTO has the distinctive capability of computing geodesics in any metric, in particular in metrics that are known only numerically. A first application has been the modelling of the spectrum of Sgr A* by a magnetized ion torus, see figure 5.
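The principle underlying such ray-tracing codes – integrating null geodesics backwards through curved spacetime – can be illustrated with a deliberately minimal toy version for the Schwarzschild metric (a Python sketch in geometrized units G = c = 1, unrelated to GYOTO's actual C++ interface):

```python
import numpy as np

def photon_rhs(u, du, M):
    """RHS of the Schwarzschild photon orbit equation u'' = -u + 3 M u^2,
    where u = 1/r and primes denote d/dphi (G = c = 1)."""
    return du, -u + 3.0 * M * u * u

def deflection_angle(b, M=1.0, dphi=1e-3):
    """Total bending angle of a light ray with impact parameter b past a mass M,
    integrated in the azimuthal angle phi until the ray escapes to infinity."""
    u, du, phi = 0.0, 1.0 / b, 0.0            # incoming straight line: u = sin(phi)/b
    while u >= 0.0:                           # classical 4th-order Runge-Kutta step
        k1u, k1d = photon_rhs(u, du, M)
        k2u, k2d = photon_rhs(u + 0.5 * dphi * k1u, du + 0.5 * dphi * k1d, M)
        k3u, k3d = photon_rhs(u + 0.5 * dphi * k2u, du + 0.5 * dphi * k2d, M)
        k4u, k4d = photon_rhs(u + dphi * k3u, du + dphi * k3d, M)
        u  += dphi * (k1u + 2 * k2u + 2 * k3u + k4u) / 6.0
        du += dphi * (k1d + 2 * k2d + 2 * k3d + k4d) / 6.0
        phi += dphi
    return phi - np.pi                        # excess over the flat-spacetime half-turn

for b in (10.0, 50.0, 100.0):                 # impact parameters in units of M
    print(f"b = {b:5.1f} M : integrated = {deflection_angle(b):.4f} rad, "
          f"weak-field 4M/b = {4.0 / b:.4f} rad")
```

At large impact parameter the integrated angle approaches Einstein's weak-field value 4GM/(c²b), while close to the photon sphere (b near 3√3 M) it grows without bound, producing the higher-order disk images described above.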
Another spectacular example of black hole imagery is the simulation of gravitational lensing. The spacetime curvature created by a massive object along the line of sight of a starry sky causes optical illusions for all objects in the background, because the curvature increases the number of distinct trajectories taken by light rays. Powerful modern telescopes can detect gravitational mirages created by intervening ordinary bodies such as stars (micro-lensing), galaxies or galaxy clusters, which function as gravitational lenses, but not yet those created by black holes, due to lack of resolution.
Computer simulations recreate the effect of a large black hole acting as a gravitational lens, distorting a starry landscape in the background and multiplying the images of distant objects to create a mirage. Figures 6-7 show the gravitational lensing and optical illusions produced by a black hole in the line of sight of the Milky Way center in the Southern Hemisphere (left) and in front of the Magellanic Clouds (right).
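For a point-mass lens, the mirage geometry follows from the lens equation β = θ − θ_E²/θ, where θ_E is the angular Einstein radius. A short sketch computing θ_E and the two image positions (the black-hole mass and the distances are illustrative assumptions):

```python
import numpy as np

G, c = 6.674e-11, 2.998e8    # SI units
Msun = 1.989e30              # kg
kpc = 3.086e19               # m

def einstein_radius(M, D_L, D_S):
    """Angular Einstein radius [rad] of a point-mass lens;
    D_L, D_S: observer-lens and observer-source distances (D_LS = D_S - D_L)."""
    D_LS = D_S - D_L
    return np.sqrt(4.0 * G * M / c**2 * D_LS / (D_L * D_S))

def image_positions(beta, theta_E):
    """The two images of a point source at angle beta behind a point lens,
    solutions of the lens equation beta = theta - theta_E**2 / theta."""
    d = np.sqrt(beta**2 + 4.0 * theta_E**2)
    return 0.5 * (beta + d), 0.5 * (beta - d)

# Illustrative case: a 10-solar-mass black hole halfway to the Galactic centre
theta_E = einstein_radius(10 * Msun, 4.0 * kpc, 8.0 * kpc)
rad2mas = 180.0 * 3600.0 * 1e3 / np.pi
print(f"Einstein radius: {theta_E * rad2mas:.2f} mas")   # milliarcsecond scale
th_p, th_m = image_positions(0.5 * theta_E, theta_E)
print(f"images at {th_p / theta_E:+.2f} and {th_m / theta_E:+.2f} Einstein radii")
```

The milliarcsecond Einstein radius of such a stellar-mass lens makes the scale of the resolution problem mentioned above concrete: the two images are far too close together to be separated by current telescopes, so only the combined brightening (micro-lensing) is observable.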