Astronomy and Astrophysics, Vol. 613, A6 (2018 May 1)
Generalised model-independent characterisation of strong gravitational lenses. II. Transformation matrix between multiple images.
WAGNER J. and TESSORE N.
Abstract (from CDS):
We determine the transformation matrix that maps multiple images with identifiable resolved features onto one another and that is based on a Taylor-expanded lensing potential in the vicinity of a point on the critical curve within our model-independent lens characterisation approach. From the transformation matrix, the same information about the properties of the critical curve at fold and cusp points can be derived as we previously found when using the quadrupole moment of the individual images as observables. In addition, we read off the relative parities between the images, so that the parity of all images is determined when one is known. We compare all retrievable ratios of potential derivatives to the actual values and to those obtained by using the quadrupole moment as observable for two- and three-image configurations generated by a galaxy-cluster scale singular isothermal ellipse. We conclude that using the quadrupole moments as observables, the properties of the critical curve are retrieved to a higher accuracy at the cusp points and to a lower accuracy at the fold points; the ratios of second-order potential derivatives are retrieved to comparable accuracy. We also show that the approach using ratios of convergences and reduced shear components is equivalent to ours in the vicinity of the critical curve, but yields more accurate results and is more robust because it does not require a special coordinate system as the approach using potential derivatives does. The transformation matrix is determined by mapping manually assigned reference points in the multiple images onto one another. If the assignment of the reference points is subject to measurement uncertainties under the influence of noise, we find that the confidence intervals of the lens parameters can be as large as the values themselves when the uncertainties are larger than one pixel. 
In addition, observed multiple images with resolved features are more extended than unresolved ones, so that higher-order moments should be taken into account to improve the reconstruction precision and accuracy.
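As a purely illustrative sketch (not the authors' method, which rests on the Taylor-expanded lensing potential), the idea of determining a transformation matrix from manually assigned reference points, and reading off the relative parity from it, can be mimicked by a least-squares affine fit between matched point sets: the 2x2 linear part plays the role of the transformation matrix, and the sign of its determinant gives the relative image parity. All names and coordinates below are hypothetical.

```python
import numpy as np

def fit_affine(points_a, points_b):
    """Least-squares affine map x_b ~ T @ x_a + t from matched reference points.

    points_a, points_b: (N, 2) arrays of corresponding positions in two images.
    Returns the 2x2 linear part T and the translation t.
    """
    A = np.asarray(points_a, dtype=float)
    B = np.asarray(points_b, dtype=float)
    # Design matrix [x, y, 1]; solve X @ P ~ B for the 3x2 parameter matrix P.
    X = np.hstack([A, np.ones((len(A), 1))])
    P, *_ = np.linalg.lstsq(X, B, rcond=None)
    T = P[:2].T   # 2x2 linear part (the "transformation matrix")
    t = P[2]      # translation between the image cutouts
    return T, t

# Toy example: image 2 is image 1 stretched, sheared, and mirrored.
pts1 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
T_true = np.array([[-2.0, 0.3], [0.1, 1.5]])   # det < 0: opposite parity
pts2 = pts1 @ T_true.T + np.array([5.0, -3.0])

T, t = fit_affine(pts1, pts2)
# Relative parity of the two images: -1 means they are mirror images.
relative_parity = np.sign(np.linalg.det(T))
```

With noisy reference-point assignments, the fitted T (and any quantities derived from it) inherits the scatter, which is the effect quantified by the confidence intervals discussed in the abstract.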