Comment | Published March 27, 2023 | Original research published in Publications of the Astronomical Society of the Pacific

Eight Significant Shortcomings of the Standard Model of Cosmology

Ethan R. Siegel

Ethan R. Siegel is a theoretical astrophysicist and has worked as a visiting assistant professor of physics. He is currently a full-time science communicator and author. (Website)

Original research: Fulvio Melia, A Candid Assessment of Standard Cosmology, Publications of the Astronomical Society of the Pacific, Volume 134, Number 1042, https://doi.org/10.1088/1538-3873/aca51f

Subjects: Cosmology, Horizon problem, Inflation, Entropy, Equivalence principle

Accompanied by a response from: Fulvio Melia

[Editor’s note: This comment has been shared with Fulvio Melia before publication. You can find his response via the last entry in the table of contents.]

As long as the universe obeys the cosmological principle – that is, it’s homogeneous (the same everywhere) and isotropic (the same in all directions) on large cosmic scales – then under the theory of General Relativity, we can calculate how the universe evolves based on whatever it’s composed of. Drawing on a specific parametrization, the Lambda Cold Dark Matter or ΛCDM model does exactly this. ΛCDM is the current concordance model and describes the universe as being composed of ~70% dark energy, ~25% dark matter, and ~5% normal matter, together with a handful of neutrinos and photons. An early inflationary phase gave rise to the hot Big Bang, with the universe expanding and cooling ever since.

Today, 13.8 billion years later, we have a rich cosmic web of structure and an enormous suite of data to compare our predictions to. In a novel study that’s certain to ruffle many feathers, astrophysicist Fulvio Melia points out no fewer than eight “embarrassingly significant failings” of the concordance model in an attempt to expose its shortcomings. He contends that the current picture of the universe is not simply experiencing tensions, but that these failings demand a complete overhaul of ΛCDM cosmology.

A brief history of the universe as understood today

To explore this question, we first put forth a brief history of the universe as understood today, then elucidate the eight failings highlighted by Melia, followed by how the community is presently addressing these issues, and end with further recommendations.

In the standard picture of cosmology, we no longer typically talk about an ultimate origin, but begin our story from the last moments of inflation: a state of exponential expansion of space that preceded and set up the conditions for the hot Big Bang. Its origin and duration remain active areas of research, but there’s general agreement that only the final minuscule fraction-of-a-second of inflation, often taken to be in the ballpark of ~10⁻³² seconds or so, left an in-principle observable effect on our universe. During inflation, the expansion of the universe is dominated by some form of energy that behaves similarly to a cosmological constant, causing the universe to expand exponentially. Over each ~10⁻³² second interval that passes during inflation, a region of space the size of the Planck volume gets stretched to become larger than the presently observable universe. Minuscule quantum fluctuations continuously occur, getting stretched to all scales, with later fluctuations superimposed atop the earlier fluctuations that were stretched to larger scales.
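
To put rough numbers on that claim, here is a minimal back-of-the-envelope sketch in Python; the Planck length, the radius of today’s observable universe, and the inflationary expansion rate used below are standard textbook ballpark figures rather than values taken from Melia’s paper.

```python
# Rough illustration: how many e-foldings of exponential expansion does it take
# to stretch a Planck-length patch beyond today's observable universe?
import math

l_planck = 1.6e-35      # Planck length in metres (textbook value)
r_obs = 4.4e26          # radius of the observable universe in metres (~46.5 Gly)

n_efolds = math.log(r_obs / l_planck)   # N = ln(final size / initial size)
print(f"required e-foldings: {n_efolds:.0f}")          # ~142

# With an assumed inflationary expansion rate of H ~ 1e37 per second (a GUT-scale
# ballpark), those ~142 e-foldings take only N/H seconds:
H_inflation = 1e37
print(f"time needed: {n_efolds / H_inflation:.1e} s")  # ~1e-35 s, well inside ~1e-32 s
```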

When inflation ends, the energy that was previously inherent to space is converted into energy distributed among the Standard Model particles, initiating the hot Big Bang, with those earlier (scalar) fluctuations in energy becoming density fluctuations. These fluctuations are predicted to be 100% adiabatic in nature – with constant heat and entropy – with an almost perfectly scale-invariant spectrum, including on super-horizon scales, with a maximum temperature well below the Planck scale and a spacetime that’s flat to within 1 part in somewhere between ~10⁴ and ~10⁶. The hot, early universe, extremely flat and with the same temperature everywhere to 1 part in about 30,000, now cools and evolves.

Initially filled with particles and antiparticles, the heaviest, unstable ones rapidly decay away. The electroweak symmetry breaks, with the Higgs field giving rest mass to particles and antiparticles. As the universe cools further, matter-antimatter pairs annihilate away. Due to a slight excess of matter over antimatter, only ~1 particle in every 10⁹ remains after this annihilation, leading some matter to survive, whereas the antimatter does not. Hadronization then occurs, creating protons and neutrons, which interconvert under weak interactions as the universe cools.

When the weak interactions freeze out, it’s still too hot for heavy nuclei to stably form. But a fraction of the free neutrons undergo radioactive decay until Big Bang nucleosynthesis can occur, producing the lightest elements and their isotopes. After this, gravitational growth occurs in the remnant primordial plasma, converting this nearly scale-invariant spectrum of density fluctuations into a series of peaks and valleys that will appear later: imprinted in both the cosmic microwave background (CMB) and the late-time universe’s large-scale structure.

Very roughly, this is the “standard story” of our cosmic history. It is a story full of successes, since it explains, among other things, the existence of the CMB, the observed abundance of light elements, and the accelerating expansion of the universe. But ΛCDM also possesses tensions and shortcomings.

Problem 1: The observed spectral amplitude of the CMB is below the predicted value

Fulvio Melia begins his indictment of this story with a flourish, asserting that inflation could only have taken place for a brief and finite amount of time, pointing to the evidence that the CMB’s power spectrum – which describes the magnitude of its temperature fluctuations on all length scales across the sky – is severely suppressed on the absolute largest of cosmic scales. If inflation occurred the way the standard story describes it, then the lowest multipoles in the power spectrum – particularly the quadrupole and octupole – should have a specific spectral amplitude. Yet the observed spectral amplitude is significantly lower than the predicted value, not just for these two multipole numbers but, statistically, on all scales larger than about 5° – i.e., temperature anisotropies on the largest cosmic scales are not as predicted by inflation.
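
For orientation, multipole number and angular scale are related by the standard rule of thumb below (a general relation, not something specific to Melia’s analysis), so “scales larger than ~5°” corresponds to roughly the lowest few dozen multipoles.

```latex
\theta \approx \frac{180^{\circ}}{\ell}
\qquad\Longrightarrow\qquad
\theta \gtrsim 5^{\circ} \;\leftrightarrow\; \ell \lesssim 36 .
```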

Melia uses this well-known fact to justify imposing a cutoff on the duration of inflation (and specifically, on the exponential growth as quantified by the number of e-foldings that occur). This leads to a far superior fit between theoretical predictions and the actual CMB data, at a significance of ~8σ, compared to a model with no cutoff.

However, if inflation didn’t occur for an indeterminate time, but for only a brief time that didn’t reach all observable scales, then it would not solve the horizon problem: without exponential expansion lasting sufficiently long, inflation would not have caused the universe to be almost the same temperature everywhere and in all directions. And yet, this feature can be observed. This inconsistency is the first of eight failings that Melia spotlights on the stage.

Problem 2: No satisfactory explanation of the quantum-to-classical transition between inflation and the hot Big Bang

Next, Melia questions the standard picture from a completely different angle, by asking how the primordial quantum fluctuations that allegedly occurred during inflation could have actually become classical, macroscopic density fluctuations. The driver of inflation was the hypothetical inflaton field and regardless of what its specific (unknown) properties might have been, the field should have experienced fluctuations that were inherently quantum in nature. These fluctuations would appear on all scales, according to theory, which means that any point would experience a superposition of all the different modes that have been stretched to various scales. These modes should interfere with one another, as in any quantum system. But in the aftermath of the hot Big Bang, there is no quantum behavior appearing on cosmic scales, and in the ~40 years that this puzzle has been studied, there has been no satisfactory explanation of how this quantum-to-classical transition took place in the early universe.

As Melia correctly states, “decoherence by itself still does not solve the long-standing ‘measurement’ problem.” Without a measurement device or apparatus acting on the cosmos, how did this particular realization of “our universe” get singled out to be the one we observe today? Therefore, the specific part of the story in which quantum fluctuations in the inflaton field become classical density fluctuations in the expanding universe is simply unproven – but nonetheless needed to explain the origin of stars and galaxies.

Problem 3: Why is the Higgs vacuum expectation value the same everywhere?

Moving forward in cosmic time, we arrive at the next problem – a new horizon problem – that arises during the epoch of electroweak symmetry breaking. When the electroweak phase transition that breaks the electroweak symmetry occurs, at a critical temperature of ~160 GeV (corresponding to a cosmic time of ~10⁻¹¹ s, well after inflation stops), the Higgs field acquires a non-zero vacuum expectation value, which causes it to couple to many of the Standard Model particles, creating their rest masses.
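
As a quick consistency check on those two numbers, the standard radiation-era relation between temperature and cosmic time (a textbook formula, not something specific to Melia’s argument) can be evaluated directly:

```python
# Radiation-dominated era: t [s] ≈ 2.4 / sqrt(g*) × (T / 1 MeV)^-2
import math

g_star = 106.75     # effective relativistic degrees of freedom at T ~ 160 GeV
T_in_MeV = 160e3    # electroweak critical temperature, ~160 GeV expressed in MeV

t_seconds = 2.4 / math.sqrt(g_star) * T_in_MeV**-2
print(f"t ≈ {t_seconds:.1e} s")   # ≈ 9e-12 s, i.e. ~1e-11 s as quoted above
```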

Theory tells us, however, that there is no reason for this value to be universal; the Higgs field should acquire the same vacuum expectation value only in causally connected regions of spacetime. In other words, there should be a finite “horizon” within which things should be similar, but beyond that horizon, things should be different.

But they aren’t. Throughout the universe, we can consistently observe the same atomic transitions, which provides substantial evidence that the masses of electrons and protons (and therefore the Higgs vacuum expectation value) are identical everywhere. Additionally, no domain walls or other topological defects in spacetime have been observed that would indicate varying vacuum expectation values in different regions of space – and we have good reason to believe that cosmologists would already have found them with today’s technology.

The cosmic microwave background provides additional evidence. If the electron had different fundamental properties, such as rest mass, in various regions of space, it would have induced large temperature anisotropies in the CMB. As Melia puts it, the temperature of the modern CMB “should be exhibiting many large, random variations from one side of the sky to the other, on top of the ~10⁻⁵-scale anisotropies seen by the CMB satellite missions.” With no large variations in either the CMB or the observed atomic and nuclear properties of matter, this puzzle is completely unaddressed by the standard cosmological story.

Problem 4: Lithium fails to match the predictions of Big Bang nucleosynthesis

We next arrive at Big Bang nucleosynthesis (BBN), where our attention turns to the existence of protons, neutrons, and the light elements they form early on. Cosmologists have long had to reckon with the unsolved puzzle of baryogenesis: the question of why there’s more matter than antimatter in the first place, and why that excess of matter is the same everywhere. The process leading to this situation must have taken place prior to the epoch in which the thermal background of radiation could no longer spontaneously produce baryon-antibaryon pairs.

In addition, there’s now the problem of what BBN predicts versus what’s observed. Conventionally, cosmologists claim that the relative abundances of the lightest elements that will survive after BBN is completed, ¹H, ²H, ³He, ⁴He, and ⁷Li, are determined by only one free parameter: the baryon-to-photon ratio. Melia turns this narrative on its head, however, stating that the ⁴He-to-¹H ratio is what’s actually best observed, and that number is then inserted back into the equations to determine the baryon-to-photon ratio. That ratio is eventually used to predict the other, much less abundant isotopes.

But the validity of this prediction remains questionable as long as the actual abundances are not well known. After explaining that ³He is easily destroyed in stars and ²H is very fragile, rendering both unreliable as probes of the pristine universe, Melia resorts to ⁷Li as the only legitimate test for BBN. This one, however, fails to match the predictions by a substantial factor: the well-known “lithium problem” of BBN. With no successes left to boast about, BBN can no longer be regarded as a key “cornerstone” of modern cosmology, he contends.

Problem 5: The low-entropy configuration as required by ΛCDM is extremely unlikely

By the time the universe is ~380,000 years old, neutral atoms form and photons, now decoupled from the primordial plasma, can free-stream – leading to the CMB we detect today. But even at this early stage, the universe’s entropy is tremendous: CMB data suggest that the universe was close to thermal and chemical equilibrium, and we can estimate the entropy at that epoch as ~10⁸⁸ k_B, where k_B is Boltzmann’s constant. If entropy always increases – and indeed, the so-called “past hypothesis” conjecture (which is considered a fundamental law in cosmology) states that the overall entropy of the observable universe increases monotonically – then the entropy must be lower the closer one ventures back toward the Big Bang.

But the total blackbody entropy, which dominates the amount of entropy in the early universe (until the formation of black holes, at least, which drove the entropy to the even higher levels we see today), remains constant within any given comoving volume. In other words, if the early universe consisted only of a perfectly uniform bath of non-interacting radiation, its entropy would not increase, but would simply remain at a constant value during the first few hundred thousand years after the Big Bang.
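
A minimal sketch of why that is, using only textbook blackbody relations rather than anything specific to Melia’s paper: the entropy density of radiation scales as the cube of its temperature, and the temperature falls inversely with the scale factor, so the radiation entropy contained in a comoving volume (one that expands along with the universe) stays fixed.

```latex
s_{\gamma} \propto g_{*}\,T^{3}, \qquad T \propto \frac{1}{a}
\quad\Longrightarrow\quad
S_{\gamma} \propto s_{\gamma}\,a^{3} = \text{constant}.
```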

So how did the universe acquire an initially low entropy state, and how did we wind up with the large amount of entropy that we currently see? As Melia points out, “If the initial state of the universe was random, characterized by a uniform probability of microstates, it should have been born with maximum entropy, representing thermal equilibrium, not the extremely unlikely low-entropy configuration required by ΛCDM.” Despite the considerable effort that has gone into addressing the initial entropy problem, it remains completely unsolved.

Problem 6: Observations show fully-formed galaxies at epochs for which ΛCDM predicts only stars

Finally, the density imperfections in the matter species, now that they’re freed from being coupled to radiation, gravitationally collapse, leading to the formation of stars, galaxies, and the seeds of supermassive black holes. According to most simulations, the first stars – Population III stars, made solely of the pristine material left over from the Big Bang – require no less than 200 million years to form, and the subsequent generation of stars, which incorporates the recycled material from those first stars, requires a further 100 million years. Yet the Hubble Space Telescope and the ALMA observatory are seeing not only stars, but also fully-formed, massive, substantially evolved galaxies as early as 400 million years after the Big Bang, while the James Webb Space Telescope (JWST) has pushed that back to just ~320 million years. Similarly, the supermassive black holes of ~1 billion solar masses seen less than ~1 billion years after the Big Bang can only be explained by unrealistically large initial black hole seeds, a megamerger of Population III black hole remnants, or some sort of sustained hyper-Eddington accretion scenario, all of which are problematic. Put simply, the predictions of theory within ΛCDM for forming stars, galaxies, and black holes are inconsistent with the early, evolved structures we’re already observing, and with the advent of the JWST, this is only going to worsen.

Problem 7: Discrepancy between independent measurements of the Hubble constant

Next, we come to the most frequently discussed “tension” in modern cosmology: the discrepancy between late-time measurements of the Hubble constant H_0 using distance ladder methods and “early relic” measurements using baryon acoustic oscillation and/or CMB measurements. While the first class of measurements consistently gives values of H_0 ~ 73-74 km/s/Mpc for the expansion rate of the universe, the second class consistently yields H_0 ~ 67 km/s/Mpc. Measurement error is no longer a possible explanation; the uncertainties and errors on both figures are small enough that the significance of this discrepancy now exceeds the 5-σ significance threshold. While exotic solutions like early dark energy or decaying dark matter are often invoked, Melia suggests instead looking at this problem not in isolation, but in concert with the previous six (and the one to come), with a view to modifying the standard model of cosmology in a way that can account for all of these various “tensions” at once.
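
For illustration, the significance of the discrepancy can be estimated with representative published values (a SH0ES-type distance-ladder figure and a Planck-type CMB figure); the specific numbers below are assumptions chosen for the sketch, not values quoted in Melia’s paper.

```python
# Naive estimate of the Hubble tension, assuming Gaussian, independent errors
import math

H0_ladder, err_ladder = 73.0, 1.0   # km/s/Mpc, distance-ladder-style value
H0_cmb, err_cmb = 67.4, 0.5         # km/s/Mpc, CMB / early-relic-style value

tension = abs(H0_ladder - H0_cmb) / math.sqrt(err_ladder**2 + err_cmb**2)
print(f"tension ≈ {tension:.1f} σ")   # ≈ 5 σ
```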

Problem 8: ΛCDM is inconsistent with Einstein’s equivalence principle

As his grand finale, Melia presents a fundamental inconsistency in ΛCDM: its choice to use the Friedmann equations in the first place to describe the universe during periods of exponential expansion. In standard use, the Friedmann equations arise by assuming isotropy and homogeneity for the universe, and are then derivable from the Einstein field equations, where the time-time component of the metric (g_tt) is related to the overall stress-energy tensor. In particular, the second Friedmann equation, which describes how the expansion rate changes with time, is derived after already assuming the validity of the cosmological principle, and therefore after assuming isotropy and homogeneity. While this simplifies the metric coefficients greatly, it ignores the important fact that the metric coefficients depend on the chosen stress-energy tensor, which changes as the universe evolves through phases of deceleration (when matter and radiation dominate) and acceleration (when dominated by inflation and dark energy).
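
For reference, the two Friedmann equations in their usual textbook form, with the cosmological constant Λ written explicitly, are:

```latex
H^{2} \equiv \left(\frac{\dot a}{a}\right)^{2}
  = \frac{8\pi G}{3}\rho - \frac{k c^{2}}{a^{2}} + \frac{\Lambda c^{2}}{3},
\qquad
\frac{\ddot a}{a}
  = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) + \frac{\Lambda c^{2}}{3}.
```

The second of these is the equation Melia has in mind when he speaks of how the expansion rate changes with time.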

And yet, in ΛCDM the time-time component of the metric, g_tt, is set to 1 at all times and stages of cosmic evolution, even under conditions where it may not be mathematically robust to do so. Given that the Hubble flow – the motion of galaxies due to the universe’s expansion – is not inertial in ΛCDM, i.e., the galaxies don’t experience “freefall,” why should we be comfortable applying free-fall conditions during phases of accelerated expansion? Melia then goes on to prove that when the universe accelerates, an observer must see their time being dilated relative to local free-falling frames, so that g_tt can by no means be 1. And yet, because the condition g_tt = 1 is always imposed, we arrive at an inherent inconsistency: ΛCDM is inconsistent with Einstein’s equivalence principle.

Conclusions to be drawn from inconsistencies and paradoxes

In light of all eight of these issues, Melia asserts that the ΛCDM model of cosmology is officially past its expiration date, arguing that “this collection of seemingly insurmountable inconsistencies and paradoxes ought to convince even its most diehard supporters that a major overhaul of the standard model is called for.” He suggests that the central problem concerns the choice of the stress-energy tensor T_μν in the Einstein field equations. While it remains beyond the scope of his paper to point out what the correct choice of T_μν would actually be, Melia’s suggestions are sensible in light of the problems he’s exposed.

This approach to future work on the foundations of ΛCDM may have some merit – and who knows, may even yield an unexpected breakthrough. But diehard supporters of ΛCDM should first take a more skeptical look at each of these eight points: both those that are data-related and those that are consistency-related.

The low amount of power seen on large cosmic scales is definitely something that many have noted. In light of the Planck observatory data, power is unexpectedly low in the quadrupole and octupole (ℓ = 2 and 3) moments of the CMB, their planes are unexpectedly aligned with each other, and they are unexpectedly perpendicular to the ecliptic (and aligned with the CMB dipole). But even so, there are only a small number of ways to “break the sky” into these components, meaning that cosmic variance is very large. Our entire sky contains a little over 40,000 square degrees; when you examine a region that’s 60° on a side, you can only fit a few of them onto the full sky, as opposed to examining regions that are ~1° on a side. In other words, our universe offers more data about what’s happening on small cosmic scales than on large ones, and so our datasets on the largest cosmic scales actually contain the least amount of information. It’s like rolling a six-sided die: if you rolled it 3,000 times, it would be absurdly unlikely to find that 2,000 of your rolls yielded either a 1 or a 2. But if you only rolled it 12 times and found that 8 of your rolls yielded either a 1 or a 2, you wouldn’t assume something was inherently wrong.
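
The size of this cosmic-variance effect can be quantified with the standard per-multipole formula (a general statistical result, not specific to Melia’s analysis or to rebuttals of it):

```python
# Cosmic variance: with only (2ℓ+1) independent modes per multipole on a single sky,
# the fractional uncertainty on each measured C_ℓ is sqrt(2 / (2ℓ + 1)).
import math

for ell in (2, 3, 10, 30, 200):
    cosmic_variance = math.sqrt(2 / (2 * ell + 1))
    print(f"ℓ = {ell:3d}: fractional cosmic variance ≈ {cosmic_variance:.0%}")
# ℓ = 2 → ~63% and ℓ = 3 → ~53%, versus ~7% at ℓ = 200: the largest angular
# scales are intrinsically the most poorly sampled, exactly as the die analogy suggests.
```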

Statistically, the low power seen on large cosmic scales is only a ~3-σ level anomaly, which isn’t compelling enough to draw the very strong conclusions about inflation and its duration that Melia does. It could simply be that this is the universe we get, and the low power at low multipoles arises because we have so few data points to sample and average together.

If this frees us from the constraint that, according to Melia, inflation must have lasted only a short time, with ~60 or fewer e-foldings, then both the electroweak symmetry breaking problem, related to the Higgs vacuum expectation value, and the initial entropy problem can be resolved. If inflation had occurred for a sufficiently long time, then every region within our currently observable universe would have been in causal contact at a prior time during the inflationary period. Therefore, whatever dynamics arose that caused the electroweak symmetry to break in the particular fashion that it did were at play everywhere we can access. Just as restoring the electroweak symmetry in high-energy particle accelerators doesn’t lead to it breaking in a different way in laboratory experiments, it’s plausible that the electroweak symmetry would break in the same way throughout the observable universe.

Similarly, if we’d had a longer inflationary phase, the entropy in the early stages of that phase could have been arbitrarily large without posing a problem or a contradiction for thermodynamics. The reason is that, from the start of the hot Big Bang until today, we’re only capable of measuring the entropy present within our cosmic horizon, which was very small in the early stages of the hot Big Bang. Whatever entropy existed prior to inflation or in its early stages, no matter how great it was, gets stretched out over extraordinarily large volumes. This conserves the amount of entropy but drives the entropy density toward zero. And this is a perfect starting point for ΛCDM: when inflation ends and the hot Big Bang begins, the onset of a matter- and radiation-rich universe causes a tremendous increase in entropy within our cosmic horizon, and that entropy has been increasing monotonically ever since – exactly as required.

For the issue of BBN, the baryon-to-photon ratio is arguably not fixed by measurements of ⁴He; instead, it’s fixed by direct measurements of the CMB, which yield both the baryon density of the universe and the photon density of the primeval plasma. Measurements from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite, which was decommissioned in 2010 and superseded by the Planck mission, already gave a robust value for the baryon-to-photon ratio, a value that actually suggested a lower amount of ⁴He than other observations at the time indicated. Meanwhile, modern measurements of ⁴He, ³He, and ²H (deuterium, which is measured from quasar absorption lines that reveal particularly pristine gas clouds) are highly consistent with the observed baryon-to-photon ratio. It’s true that the ⁷Li problem remains unsolved, but the very real successes of BBN in predicting the other light elements and isotopes cast doubt on the problem’s robustness and severity.
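
The conversion from the CMB-inferred baryon density to the baryon-to-photon ratio is itself a one-line calculation; the value of Ω_b h² below is a representative Planck-era figure, quoted here for illustration rather than taken from Melia’s paper.

```python
# Standard conversion: η = n_b / n_γ ≈ 2.74e-8 × (Ω_b h²)
omega_b_h2 = 0.0224          # physical baryon density from CMB fits (assumed value)
eta = 2.74e-8 * omega_b_h2   # baryon-to-photon ratio
print(f"η ≈ {eta:.2e}")      # ≈ 6.1e-10, the input BBN actually uses
```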

Concerning the early appearance, growth, and evolution of stars, galaxies, and supermassive black holes, there are many studies suggesting that the nonlinear growth of structure can accelerate gravitational collapse and trigger the formation of the first stars in under 200 million years, and that the second generation of stars, rich in heavy elements that enable rapid cooling, requires virtually no time at all (in cosmic terms) to subsequently form. Many studies provide plausible pathways and mechanisms for galaxies to form and grow much more quickly than Melia contends. Additionally, recent simulations have shown that, independent of the formation of stars, gas streams can create seed black holes of a few tens of thousands of solar masses as early as ~150 million years after the Big Bang: sufficiently early and massive to potentially resolve the present tension completely.

The value of questioning and scrutinizing our current concordance model of cosmology

Asking questions like “how do inflation’s quantum fluctuations translate into classical density fluctuations during the hot Big Bang?” and “are we making a mistake by assuming that g_tt = 1 in a ΛCDM universe?” is still valid, but our inability to resolve fundamental issues doesn’t necessarily undercut our ability to construct a successful cosmological model that accurately describes reality. These difficult problems exist but cannot possibly detract from the extraordinary string of successes that have accompanied the standard ΛCDM model of cosmology. The tension between early-relic and distance ladder methods that yield different values for the Hubble constant still remains, and is just as unresolved as ever, but doesn’t carry the same weight when it’s only accompanied by philosophical problems, rather than by problems in which ΛCDM’s predictions fail to match up with our observations.

That being said, it’s always worthwhile to challenge the standard picture and to see how well it stands up to scrutiny, and also to recognize that different intelligent people will attach greater significance to some observations and studies than to others.

There are incontrovertibly very real problems within the concordance model of cosmology, whether you think it needs a complete overhaul or not: the ⁷Li problem remains unsolved, the Hubble tension is truly a conundrum, and there really is less power on large cosmic scales than we expected to find. Additionally, the fact that free-fall conditions are applied during phases of accelerated expansion, when they explicitly should not be applied, is a troubling realization that deserves more attention. Therefore, Melia’s latest work should lead anyone interested in questioning and scrutinizing our current concordance cosmology to rethink many issues that they probably haven’t considered in a long time, if ever. It’s important to recognize that the gaps in our understanding are substantial, and some of them could offer clues that lead us to a better understanding of our universe as a whole.

Response from Fulvio Melia

Thank you for your excellent summary of my paper. Please let me add some comments.

  1. Ethan's description of the low power at low multipoles in the CMB is quite accurate. The problem lies not with these, however, but with the angular correlation function, which is discrepant over angles from 60° to 180°. The three studies that have now examined the source of this problem have concluded that a cutoff in the primordial power spectrum is favored by the data at a significance of ~8σ. And very importantly, cosmic variance was fully taken into account with this analysis, so it cannot be the reason for the inconsistency between theory and observation.
  2. The electroweak symmetry breaking was a quantum process, so to suggest that inflation created a uniform set of conditions that ensured the same vacuum expectation value everywhere requires the existence of hidden variables. But as we now know, these do not exist. The most recent Nobel prize was awarded for experiments proving this point. What matters is therefore not the size of the causally connected region at the end of inflation, but how far relativistic causality would have permitted a uniform Higgs vacuum expectation value at the time it emerged (~10⁻¹¹ seconds). In evaluating how large this region is today, we need to carry out the same kind of horizon calculation at ~10⁻¹¹ seconds as we would have done for inflation at ~10⁻³² seconds. And as best as we can tell, a region of uniform Higgs vacuum expectation value would be much smaller than our Hubble radius today without the intervention of a possible second inflationary episode after ~10⁻¹¹ seconds. This has been proposed by several independent workers, by the way, but there doesn't yet seem to be an agreed-upon picture of how this could have happened self-consistently. But it may perhaps end up being a solution if all the details can be worked out properly.
  3. The issue with inflation solving the low entropy problem is that one is merely shifting the "astronomically low" probability of the Universe starting with such a low entropy to the "astronomically low" probability of setting up the correct initial conditions for inflation to solve this problem. As we now know, mainly from the extraordinary Planck data and the very low tensor-to-scalar fluctuation power they reveal (the so-called r < 0.055), the energy scale of inflation was much smaller than the Planck energy density. Yet the virtually scale-free scalar spectrum (n_s ~ 0.96) requires an essentially constant Hubble constant during inflation, which means that the energy density could not have changed substantially towards the Planck scale. Together, these newer data have heralded the revised inflationary paradigm, usually termed "new chaotic inflation", which restricts the initiation of the inflationary phase to a time t_init ≫ t_Planck. The problem with this is that we then have no physical explanation for how the Universe would have been uniform (and causally connected) prior to t_init. The horizon problem has thus been shifted to a time prior to inflation, when the entropy would presumably have already been established. So in the context of our current inflationary paradigm, the very low probability of having the Universe begin with essentially zero entropy does not appear to be easily solvable with inflation.
  4. The value of the photon-to-baryon ratio used in BBN is not in conflict with the observations. Ethan is quite correct with this. The problem is that thermal equilibrium between the radiation and the baryons at the time this ratio was established would have created a value 8 orders of magnitude smaller. Somehow, the Universe needed to have the protons and neutrons in thermal equilibrium (in order to produce the correct He abundance via the Boltzmann factor), while at the same time preventing the radiation from being in equilibrium with the protons and neutrons. As far as I know, there is no known physical mechanism that could have produced this odd situation. Physicists may eventually find a clever exclusionary method of simultaneously doing both, but we do not have an answer today.
  5. Ethan's comments about the growth of structure in the early Universe are consistent with many of the prevailing views, so they are necessary for the reader to see in order to understand why this issue is drawing much attention now, particularly following the breathtaking discoveries reported over the past 5 months by JWST. The simulations we have thus far actually cannot explain the appearance of galaxies at z > 14. I apologize for pointing to one of my own papers for this, but a more detailed explanation of the problem appeared in the MNRAS Letters in February 2023 and I would recommend that the readers take a look at it: https://doi.org/10.1093/mnrasl/slad025.
How to reuse

The CC BY 4.0 license requires re-users to give due credit to the creator. It allows re-users to distribute, remix, adapt, and build upon the material in any medium or format, even for commercial purposes.

You can reuse an article (e.g. by copying it to your news site) by adding the following line:
Eight Significant Shortcomings of the Standard Model of Cosmology. © 2023 by Ethan R. Siegel is licensed under Attribution 4.0 International

Or by simply adding:
Article © 2023 by Ethan R. Siegel / CC BY

To learn more about the available options, and for details, please consult New Ground’s How to reuse section.

This article – but not the graphics or images – is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/.
