The standard ΛCDM model has encountered serious challenges, and the H0 tension has become more significant with increasingly precise cosmological observations. Meanwhile, inconsistencies between different datasets in measurements of the curvature parameter ΩK have also emerged. In this work, we employ two global, cosmic-age-based parameterizations, PAge and MAPAge, to perform model-independent measurements of the Hubble constant H0 and ΩK by utilizing the inverse distance ladder (IDL). To construct the PAge-improved IDL, we utilize strong gravitational lensing (SGL), cosmic chronometer (CC), and gamma-ray burst (GRB) data to calibrate the latest DESI DR2 baryon acoustic oscillation data and DESY5 type Ia supernova data. Our analysis indicates that DESI+DESY5+SGL+CC+GRB gives H0 = 71.59 ± 0.94 km s⁻¹ Mpc⁻¹ in the MAPAge model, reducing the H0 tension to the 1.0σ level. Extending to the MAPAge+ΩK model, we obtain ΩK = 0.001 ± 0.038, which suggests that current late-time data are consistent with a flat universe. Finally, the Bayesian analysis indicates that the present late-universe data provide weak to moderate evidence in favor of PAge and MAPAge relative to ΛCDM.
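For context, a commonly quoted form of the PAge parameterization (Huang 2020), on which MAPAge builds by adding one further shape parameter (often denoted η₂), writes the expansion history in terms of the dimensionless age p_age = H0 t0 and a single parameter η; the exact conventions adopted in the paper may differ slightly:

```latex
\frac{H(t)}{H_0} \;=\; 1 \;+\; \frac{2}{3}\left(1 - \eta\,\frac{H_0 t}{p_{\rm age}}\right)
\left(\frac{1}{H_0 t} - \frac{1}{p_{\rm age}}\right),
\qquad p_{\rm age} \equiv H_0 t_0 ,
```

so that H = H0 at t = t0, while H → 2/(3t) at early times, matching matter domination.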
Strong gravitational lensing of active galactic nuclei (AGN) enables measurements of cosmological parameters through time-delay cosmography (TDC). With data from the upcoming LSST survey, we anticipate using a sample of O(1000) lensed AGN for TDC. To prepare for this dataset and enable this measurement, we construct and analyze a realistic mock sample of 1300 systems drawn from the OM10 (Oguri & Marshall 2010) catalog of simulated lenses with AGN sources at z < 3.1 in order to test a key aspect of the analysis pipeline: the lens modeling. We realize the lenses as power-law elliptical mass distributions and simulate 5-year LSST i-band coadd images. From every image, we infer the lens mass model parameters using neural posterior estimation (NPE). Focusing on the key model parameters, θE (the Einstein radius) and γlens (the projected mass density profile slope), with consistent mass-light ellipticity correlations in test and training data, we recover θE with less than 1% bias and 6.5% precision per lens, and γlens with less than 3% bias and 8% precision per lens. We find that lens light subtraction prior to modeling is only useful when applied to data sampled from the training prior. If emulated deconvolution is applied to the data prior to modeling, precision improves across all parameters by a factor of 2. Finally, we combine the inferred lens mass models using Bayesian hierarchical inference to recover the global properties of the lens sample with less than 1% bias.
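As an illustration of that final step, here is a minimal sketch of Bayesian hierarchical inference in the simplest setting: each lens contributes a Gaussian posterior on γlens, and the population is modeled as Gaussian with mean μ and intrinsic scatter τ. All names and numbers are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

# Toy hierarchical combination of per-lens posteriors (illustrative only).
rng = np.random.default_rng(0)

# Simulate a population of slopes gamma_lens with mean 2.0 and scatter 0.1,
# and per-lens Gaussian posteriors with ~8% precision on each lens.
n_lens = 1300
true_mu, true_tau = 2.0, 0.1
gamma_true = rng.normal(true_mu, true_tau, n_lens)
sigma_i = np.full(n_lens, 0.08 * 2.0)            # per-lens posterior width
m_i = rng.normal(gamma_true, sigma_i)            # per-lens posterior means

def log_like(mu, tau):
    """Population likelihood with each lens's true slope marginalised out
    (Gaussian-Gaussian convolution)."""
    var = sigma_i**2 + tau**2
    return -0.5 * np.sum((m_i - mu) ** 2 / var + np.log(2 * np.pi * var))

# Simple grid evaluation of the hyper-parameters (mu, tau).
mus = np.linspace(1.95, 2.05, 201)
taus = np.linspace(0.01, 0.3, 200)
logL = np.array([[log_like(m, t) for t in taus] for m in mus])
post = np.exp(logL - logL.max())
mu_best = mus[post.max(axis=1).argmax()]
tau_best = taus[post.max(axis=0).argmax()]
print(f"recovered mu = {mu_best:.3f}, tau = {tau_best:.3f}")
```

With 1300 lenses of this quality, the population mean is recovered at the sub-percent level, consistent with the scaling quoted above.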
Researchers from the Flatiron Institute, NYU, Princeton, and a large international collaboration developed AION-1, an omnimodal foundation model that integrates 39 distinct astronomical data modalities from five major surveys. This model achieves strong performance in low-data regimes and enables flexible data fusion and cross-modal conditional generation for diverse scientific tasks.
Gravitational wave (GW) standard sirens have the potential to measure the Hubble constant H0 in the local universe independently of the distance ladder, and thus offer unique new insights into the Hubble tension. A key challenge with standard sirens is detecting their electromagnetic counterparts, and therefore assigning redshifts to the measured distances. One promising way to proceed is to utilize GW 'dark sirens' -- events without an identified electromagnetic counterpart -- and cross-correlate their angular distribution with that of galaxies. We present a quantitative study of how precisely the Hubble constant can be measured using tomographic cross-correlation between galaxies and GW sources. Overall, we find that the constraints on H0 will be limited by the quality and quantity of GW data. We find that percent-level constraints on H0 will primarily depend on achieving small distance uncertainties (σ_dL = 0.1 d_L), obtaining a large number of GW dark sirens (≳5,000), and accurate sky localization in the tomographic analysis.
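As a toy illustration of the cross-correlation idea (not the analysis of this paper), the sketch below draws 'dark sirens' from a clustered redshift distribution, converts their noisy luminosity distances back to redshift for each trial H0 using the low-redshift relation d_L ≈ cz/H0, and picks the H0 that maximizes the correlation of the binned siren counts with the galaxy overdensity pattern. All quantities are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
C = 299792.458          # speed of light, km/s
H0_TRUE = 70.0          # km/s/Mpc

# A clustered "galaxy" redshift distribution: bumpy overdensity across thin z bins.
z_edges = np.linspace(0.02, 0.30, 29)
z_mid = 0.5 * (z_edges[:-1] + z_edges[1:])
delta_g = 0.5 * np.sin(40 * z_mid)               # toy large-scale structure pattern
p_bin = (1 + delta_g) / np.sum(1 + delta_g)

# Draw dark sirens from the same structure; "observe" only luminosity distances
# with 10% scatter (no redshifts).
n_gw = 5000
z_gw = rng.choice(z_mid, size=n_gw, p=p_bin)
d_obs = (C * z_gw / H0_TRUE) * (1 + 0.10 * rng.standard_normal(n_gw))

def correlation_score(h0):
    """Correlate GW source counts (distances mapped to z assuming h0) with delta_g."""
    z_inferred = h0 * d_obs / C
    counts, _ = np.histogram(z_inferred, bins=z_edges)
    dn = counts / counts.mean() - 1.0
    return np.corrcoef(dn, delta_g)[0, 1]

h0_grid = np.linspace(60, 80, 81)
scores = np.array([correlation_score(h) for h in h0_grid])
print(f"H0 estimate from cross-correlation peak: {h0_grid[scores.argmax()]:.1f} km/s/Mpc")
```

The toy makes the paper's point visible: the sharpness of the peak degrades quickly as the distance scatter grows or the number of sirens shrinks.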
We analyze a model of quintessence governed by an exponential potential and non-minimally coupled to gravity, in light of recent datasets, including cosmic microwave background, baryon acoustic oscillation, and supernova distance moduli observations. Mainly focusing on the Palatini formulation of gravity, a phase space analysis reveals the existence of a late-time stable de Sitter attractor as long as the non-minimal coupling constant is negative, regardless of the value of the slope of the exponential. Fitting to CMB+DESI+DESY5 data, we find strong evidence for our model over ΛCDM, with a Bayes factor logB = 5.52. Furthermore, the data seem to prefer dynamical dark energy at >3σ C.L. and a phantom crossing in the barotropic parameter of dark energy at the 2−3σ C.L. We find that the scalar field dynamics in the Palatini formalism provides marginally better agreement with the data than the metric formalism.
Ultralight dark matter may couple quadratically to Standard Model particles. Such quadratic interactions give rise to both coherent and stochastic signals in pulsar timing array (PTA) observations. In this work, we characterize these signals, including the effects of dark matter propagation in a finite-density medium, and assess the sensitivity of current and upcoming PTA observations to their detection. For coherent signals, we find that the sensitivity of current PTA observations competes with and sometimes exceeds that of other probes, such as equivalence principle tests and atomic clocks. For stochastic signals, we find that PTA sensitivities underperform equivalence principle constraints for both existing and upcoming PTA data sets.
Yuan-Sen Ting's comprehensive review critically evaluates the transformative influence of deep learning on astrophysics, highlighting how these methods leverage architectural inductive biases to process complex astronomical data while emphasizing the importance of physical consistency. The work systematically outlines how deep learning addresses scalability, generalizability, and data efficiency challenges in modern astronomical surveys, offering a nuanced perspective on its capabilities and limitations.
Dark matter fermions interacting via attractive fifth forces mediated by a light mediator can form dark matter halos in the very early universe. We show that bound systems composed of these halos are capable of generating gravitational wave (GW) signals detectable today, even when the individual halos are very light. The Yukawa force dominates the dynamics of these halo binaries, rather than gravity. As a result, large GW signals can be produced, initially at extremely high frequencies, which are then redshifted to frequency bands accessible to current or future GW observatories. In addition, the resulting GW signals carry distinctive features that enable future observations to distinguish them from conventional ones. Notably, even if only a tiny fraction of dark matter experiences strong fifth-force interactions, such effects provide a new avenue to discover self-interacting dark matter through GW observations.
Cosmological models where dark matter interacts with dark energy via a pure momentum transfer and with no energy exchange (i.e. elastic) provide compelling scenarios for addressing the apparent lack of structures at low redshift. In particular, it has been shown that measurements of S8 may show a statistically significant preference for the presence of elastic interactions. In this work we implement a specific realisation of these scenarios into an N-body code to explore the non-linear regime. We include two populations of particles to describe the interacting dark matter and the non-interacting baryons respectively. On linear scales we recover the suppression of structures obtained from Boltzmann codes, while non-linear scales exhibit an enhancement of the matter power spectrum. We find that fewer massive halos are formed at low redshift as a consequence of the elastic interaction and that dark matter halos are more compact than in the standard model. Furthermore, the ratio of the dark matter and baryon density profiles is not constant. Finally, we corroborate that baryons efficiently cluster around dark matter halos, so they provide good tracers of the dark matter velocity field despite the presence of the interaction. This shows that the interaction is not sufficiently strong to disrupt virialised structures.
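As a schematic of how a pure momentum exchange is often implemented in such codes, the toy kick-drift-kick step below applies an extra velocity-damping 'drag' only to the dark matter particles (representing momentum transfer to the dark-energy rest frame) while the baryon particles feel gravity alone. The drag coefficient, particle numbers, and all names are illustrative assumptions, not the implementation used in the paper.

```python
import numpy as np

def acc_gravity(pos, soft=0.05, g=1.0):
    """Direct-summation softened gravity between equal-mass particles (toy)."""
    diff = pos[None, :, :] - pos[:, None, :]          # diff[i, j] = pos[j] - pos[i]
    r2 = (diff ** 2).sum(-1) + soft ** 2
    return g * (diff / r2[..., None] ** 1.5).sum(axis=1)

def kdk_step(pos, vel, is_dm, dt, gamma_drag):
    """Kick-drift-kick step in which dark matter particles additionally feel a
    drag on their peculiar velocity (elastic DM-DE momentum exchange), while
    'baryon' particles only feel gravity. Schematic only."""
    vel = vel + 0.5 * dt * acc_gravity(pos)           # half kick (gravity, all particles)
    vel[is_dm] *= np.exp(-gamma_drag * dt)            # drag on DM peculiar velocities
    pos = pos + dt * vel                              # drift
    vel = vel + 0.5 * dt * acc_gravity(pos)           # second half kick
    return pos, vel

rng = np.random.default_rng(2)
pos = rng.standard_normal((64, 3))
vel = np.zeros((64, 3))
is_dm = rng.random(64) < 0.84                         # ~84% of particles as dark matter
for _ in range(200):
    pos, vel = kdk_step(pos, vel, is_dm, dt=0.01, gamma_drag=0.3)
print("mean DM speed:", np.linalg.norm(vel[is_dm], axis=1).mean())
print("mean baryon speed:", np.linalg.norm(vel[~is_dm], axis=1).mean())
```

Even in this toy, the drag leaves the DM kinematically colder than the baryons, the qualitative effect behind the suppressed linear growth described above.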
A key measure of gravity is the relation between the Weyl potential Ψ+Φ and the matter overdensity δm, encapsulated in an effective gravitational constant Glight for light motion. Its value, together with its possible spatial and temporal variation, is essential in probing physics beyond Einstein gravity. However, the lack of an unbiased proxy of δm prohibits a direct measurement of Glight. We point out that the equivalence principle ensures that the dispersion measure (DM) of localized fast radio bursts (FRBs) is a good proxy of δm. We further propose an FRB-based method, FG, to directly measure Glight, combining the galaxy-DM cross-correlation of localized FRBs with the galaxy-weak lensing cross-correlation. The measurement, with a conservative cut k ≤ 0.1 h/Mpc, can achieve a precision of ≲10% √(10⁵/N_FRB) over 10 equal-width redshift bins at z ≲ 1. The major systematic error, arising from the clustering bias of the electrons traced by the FRB DM, is subdominant (∼5%). It can be further mitigated to the ≲1% level, based on the gastrophysics-agnostic behavior that the bias of total baryonic matter (ionized diffuse gas, stars, neutral hydrogen, etc.) approaches unity at sufficiently large scales. Therefore, FRBs shed light on gravitational physics across spatial and temporal scales spanning over 20 orders of magnitude.
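For reference, Glight is conventionally defined through a Poisson-like equation for the Weyl potential (sign and normalization conventions vary between papers; one common Fourier-space form is shown below):

```latex
-k^{2}\,\frac{\Phi+\Psi}{2} \;=\; 4\pi G_{\rm light}\, a^{2}\,\bar{\rho}_{m}\,\delta_{m},
```

so that in general relativity with negligible anisotropic stress, Glight reduces to Newton's constant G.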
Is the usual treatment of axion dark matter as a classical field reliable? We show that the answer is subtle: the axion field could well be in a quantum state that has no complete classical description, but realistic detectors cannot tell the difference. To see this, we solve a fully quantum model of axion detection using quantum optics techniques. We show that intrinsically quantum effects are washed out by mode averaging or small amounts of noise, and significantly suppressed by the weakness of the axion coupling. Our work exemplifies that there should always be a classical analog for axion dark matter effects, extends to other wave (ultralight) dark-matter candidates, and gives a general method to compute the effects of exotic dark-matter states.
Despite stringent constraints from Big Bang Nucleosynthesis (BBN) and cosmic microwave background (CMB) observations, it is still possible for well-motivated particle physics models to substantially alter the cosmic expansion history between BBN and recombination. In this work we consider two different axion models that can realize a period of first matter domination, then kination, in this epoch. We perform fits to both primordial element abundances as well as CMB data and determine that up to a decade of late axion domination is allowed by these probes of the early universe. We establish the implications of late axion domination for the matter power spectrum on the scales 1/Mpc ≲ k ≲ 10³/Mpc. Our 'log' model predicts a relatively modest bump-like feature together with a small suppression relative to the standard ΛCDM predictions on either side of the enhancement. Our 'two-field' model predicts a larger, plateau-like feature that realizes enhancements to the matter power spectrum of up to two orders of magnitude. These features have interesting implications for structure formation at the forefront of current detection capabilities.
Fuzzy dark matter (FDM), composed of ultralight bosons, exhibits intricate wave phenomena on galactic scales. Compared to cold dark matter, FDM simulations are significantly more computationally demanding due to the need to resolve the de Broglie wavelength and its rapid oscillations. In this review, we first outline the governing equations and distinctive features of FDM. We then present a range of numerical algorithms for both wave- and fluid-based simulations, discuss their respective advantages and limitations, and highlight representative test problems. To facilitate code comparison, we also provide publicly available initial condition files for both isolated-halo and cosmological simulations.
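The governing equations referred to here are the Schrödinger-Poisson system for a boson of mass m (written below in physical coordinates; the cosmological simulations discussed in the review use the comoving generalization):

```latex
i\hbar\,\partial_t \psi = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi + m\,\Phi\,\psi,
\qquad
\nabla^{2}\Phi = 4\pi G\left(m|\psi|^{2} - \bar{\rho}\right),
```

with the de Broglie wavelength λ_dB = h/(mv) setting the spatial resolution requirement mentioned above.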
The cosmic dipole measured in surveys of cosmologically distant sources is generally found to be in disagreement with the kinematic expectation of the Cosmic Microwave Background (CMB). This discrepancy represents severe tension with the Cosmological Principle and challenges the standard model of cosmology. Here, we present a Bayesian analysis of the tension between datasets used to measure the cosmic dipole. We examine the NRAO VLA Sky Survey (NVSS), the Rapid ASKAP Continuum Survey (RACS) and the Wide-field Infrared Survey Explorer catalogue (CatWISE), and jointly analyse them with the Planck observations of the CMB. Under the kinematic interpretation, we find that Planck is in severe tension with CatWISE above 5σ, strong tension with RACS, and moderate tension with NVSS. Moreover, the strong concordance between CatWISE and NVSS suggests that their dipoles arise from a common astrophysical signal. Conversely, the high discordance between RACS and both CatWISE and NVSS indicates a possible systematic difference in the RACS catalogue itself. Whilst the tension between Planck and infrared-selected quasars is already significant, whether the dipoles of individual radio surveys add to the challenge against the standard model remains to be seen. We estimate that O(10⁶) radio sources are required to measure the tension to a significance of 5σ. Therefore, in light of the upcoming SKA radio surveys, we are on the cusp of disentangling the anomaly of the cosmic dipole.
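As a rough back-of-the-envelope check of the O(10⁶) estimate: the shot-noise error on one component of the dipole measured from N isotropic sources with the simple linear estimator D = (3/N) Σ n̂_i is √(3/N). The sketch below uses illustrative amplitudes of ~0.005 for the kinematic expectation and ~0.015 for the measured dipole (placeholder values of the typical order reported in the literature, not the paper's fitted numbers) and recovers the quoted order of magnitude.

```python
import numpy as np

# Back-of-the-envelope: how many sources does a 5-sigma discrimination between
# a measured dipole and the CMB kinematic expectation require?
d_kinematic = 0.005      # illustrative kinematic expectation
d_measured = 0.015       # illustrative measured dipole amplitude
n_sigma = 5.0

# Shot-noise error on one dipole component from N isotropic unit vectors: sqrt(3/N).
sigma_target = abs(d_measured - d_kinematic) / n_sigma
n_required = 3.0 / sigma_target**2
print(f"N ~ {n_required:.1e} sources")   # ~ 7.5e5, i.e. O(10^6)
```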
The past decade has transformed our ability to observe the Universe. Via gravitational waves, merging black holes and neutron stars can now be directly detected, offering unprecedented opportunities to test General Relativity and explore astrophysics in a new way. Driven by this breakthrough, the next generation of detectors is being developed to observe a wider range of sources with greater precision, ushering in a new era in gravitational-wave astronomy: leveraging black holes as probes of new physics.
This thesis investigates how astrophysical environments, such as plasma, dark-matter structures, and clouds of ultralight bosons, affect black holes and their gravitational-wave signatures. After a short overview of gravitational-wave astrophysics, I study three classes of scenarios. (i) Isolated black holes: I examine boson clouds around black holes, their electromagnetic couplings and the role of surrounding plasma. (ii) Ringdown: I show that plasma can strongly modify the ringdown of charged black holes, whereas realistic dark-matter halos produce no detectable deviations even for next-generation detectors. (iii) Inspiral: for extreme-mass-ratio inspirals with boson clouds, I find that orbital resonances typically destroy the cloud unless the orbit is nearly counter-rotating, yielding new and exciting observational signatures. Entering the relativistic regime, I develop a self-consistent perturbative framework to model generic environments in extreme-mass-ratio binaries and apply it to the boson-cloud case. Finally, I construct a model for binaries repeatedly crossing active galactic-nucleus disks and track their long-term orbital evolution. The results of this thesis show how black hole environments shape gravitational-wave signals and open avenues for testing new physics with future observatories such as LISA or the Einstein Telescope.
Our observations with the James Webb Space Telescope have made the remarkable discovery of strong gravitational lensing arcs from XLSSC 122 (z = 1.98), setting the record for the most distant galaxy cluster that exhibits strong lensing. The discovery of giant arcs enables a strong-lensing analysis and a measurement of the concentration of the dark matter halo. We perform a strong-lensing analysis of the cluster and measure the radial projected mass density profile. Our measurements reveal an exceptionally high concentration in the core of XLSSC 122. A Navarro-Frenk-White profile fit to the inner 100 kpc yields a concentration of 6.3 ± 0.5. The high concentration of XLSSC 122 contributes to the emerging picture that massive structure formation in the early universe may proceed more rapidly than standard models suggest. We estimate the mass within 100 kpc to be M(R < 100 kpc) = (6.5 ± 0.7) × 10¹³ M⊙. Our mosaic images are made public at this https URL.
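For reference, the Navarro-Frenk-White profile used in the fit, and the concentration parameter it defines, are

```latex
\rho(r) = \frac{\rho_s}{\left(r/r_s\right)\left(1 + r/r_s\right)^{2}},
\qquad
c \equiv \frac{R_{\Delta}}{r_s},
```

where r_s is the scale radius and R_Δ an overdensity radius such as R_200; the value c = 6.3 ± 0.5 quoted above refers to this parameter.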
We explore the properties of interferometric data from high-redshift 21 cm measurements using the Murchison Widefield Array. These data contain redshifted 21 cm signal, contamination from continuum foreground sources, and radiometric noise. The 21 cm signal from the Epoch of Reionization is expected to be highly Gaussian, which motivates the use of the power spectrum as an effective statistical tool for extracting astrophysical information. We find that foreground contamination introduces non-Gaussianity into the distribution of measurements, and then use this information to separate the Gaussian from the non-Gaussian signal. We present improved upper limits on the 21 cm EoR power spectrum from the MWA using the Gaussian component of the data, building on the existing analysis of Nunhokee et al. (2025). This component is extracted as the best-fitting Gaussian to the measured data. Our best 2σ (thermal + sample variance) limit from 268 hours of data improves from (30.2 mK)² to (23.0 mK)² at z = 6.5 for the EW polarisation, and from (39.2 mK)² to (21.7 mK)² = 470 mK² in the NS polarisation. The best limits at z = 6.8 (z = 7.0) improve to P < (25.9 mK)² (P < (32.0 mK)²), at k = 0.18 h/Mpc (k = 0.21 h/Mpc). Results are compared with realistic simulations, which indicate that leakage from foreground contamination is a source of the non-Gaussian behaviour.
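One generic way to realize "the best-fitting Gaussian to the measured data" is to histogram the measurements and fit a Gaussian to the central region, so that a positively skewed foreground tail contributes little. The sketch below illustrates the idea on synthetic numbers; it is not the estimator of Nunhokee et al. (2025).

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

# Toy data: Gaussian "noise-like" measurements plus a positive non-Gaussian tail
# standing in for residual foreground power (all numbers illustrative).
clean = rng.normal(0.0, 1.0, 20000)
tail = rng.exponential(3.0, 2000)
data = np.concatenate([clean, tail])

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Histogram and fit only the central region, where the Gaussian core dominates.
counts, edges = np.histogram(data, bins=200, range=(-6, 6))
centers = 0.5 * (edges[:-1] + edges[1:])
core = np.abs(centers) < 2.0
popt, _ = curve_fit(gauss, centers[core], counts[core], p0=[counts.max(), 0.0, 1.0])
print(f"fitted Gaussian core: mean = {popt[1]:.3f}, sigma = {popt[2]:.3f}")
```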
We present Synthesizer, a fast, flexible, modular and extensible platform for modelling synthetic astrophysical observables. Synthesizer can be used for a number of applications, but is predominantly designed for generating mock observables from analytical and numerical galaxy formation simulations. These use cases include (but are not limited to) analytical modelling of the star formation and metal enrichment histories of galaxies, the creation of mock images and integral field unit observations from particle based simulations, detailed photoionisation modelling of the central regions of active galactic nuclei, and spectro-photometric fitting. We provide a number of stellar population synthesis models, photoionisation code configurations, dust models, and imaging configurations that can be used 'out-of-the-box' interactively. The code can be used to quantitatively test the dependence of forward modelled observables on various model and parameter choices, and rapidly explore large parameter ranges for calibration and inference tasks. We invite and encourage the community to use, test and develop the code, and hope that the foundation developed will provide a flexible framework for a number of tasks in forward and inverse modelling of astrophysical observables. The code is publicly available at this https URL
Understanding how well future cosmological experiments can reconstruct the mechanism that generated primordial inhomogeneities is key to assessing the extent to which cosmology can inform fundamental physics. In this work, we apply a quantum metrology tool - the quantum Fisher information - to the squeezed quantum state describing cosmological perturbations at the end of inflation. This quantifies the ultimate precision achievable in parameter estimation, assuming ideal access to early-universe information. By comparing the quantum Fisher information to its classical counterpart - derived from measurements of the curvature perturbation power spectrum alone (homodyne measurement) - we evaluate how close current observations come to this quantum limit. Focusing on the tensor-to-scalar ratio as a case study, we find that the gap between classical and quantum Fisher information grows exponentially with the number of e-folds a mode spends outside the horizon. This suggests the existence of a highly efficient (but presently inaccessible) optimal measurement. Conversely, we show that accessing the decaying mode of inflationary perturbations is a necessary (but not sufficient) condition for exponentially improving the inference of the tensor-to-scalar ratio.
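For a pure state |ψ_θ⟩ depending on a single parameter θ (here the tensor-to-scalar ratio), the quantum Fisher information and the associated quantum Cramér-Rao bound take the standard forms

```latex
F_Q(\theta) = 4\left(\langle \partial_\theta \psi_\theta \,|\, \partial_\theta \psi_\theta \rangle
              - \left|\langle \psi_\theta \,|\, \partial_\theta \psi_\theta \rangle\right|^{2}\right),
\qquad
\mathrm{Var}\!\left(\hat{\theta}\right) \;\ge\; \frac{1}{N\,F_Q(\theta)},
```

while the classical Fisher information of any particular measurement, such as the homodyne measurement of the curvature power spectrum, is bounded above by F_Q; the gap discussed above compares these two quantities.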
The Chinese Space Station Survey Telescope (CSST) is an upcoming Stage-IV sky survey telescope, distinguished by its large field of view (FoV), high image quality, and multi-band observation capabilities. It can simultaneously conduct precise measurements of the Universe by performing multi-color photometric imaging and slitless spectroscopic surveys. The CSST is equipped with five scientific instruments: the Multi-band Imaging and Slitless Spectroscopy Survey Camera (SC), the Multi-Channel Imager (MCI), the Integral Field Spectrograph (IFS), the Cool Planet Imaging Coronagraph (CPI-C), and the THz Spectrometer (TS). Using these instruments, the CSST is expected to make significant contributions and discoveries across various astronomical fields, including cosmology, galaxies and active galactic nuclei (AGN), the Milky Way and nearby galaxies, stars, exoplanets, Solar System objects, astrometry, and transients and variable sources. This review aims to provide a comprehensive overview of the CSST instruments, observational capabilities, data products, and scientific potential.