Early dark energy will solve the problem of the Hubble constant

A modification of the ΛCDM model that adds early dark energy can reconcile the differing values of the Hubble constant obtained by different measurement methods, American astrophysicists argue.

Early dark energy is a substance that accelerates the expansion of the young Universe and then “dissolves” at later times. As a candidate for such a substance, the scientists considered a scalar field in a potential that either oscillates or vanishes at infinity. They then chose model parameters that do not contradict existing measurements but eliminate the tension between the values of the Hubble constant. According to the scientists, the effect of this “fit” can be measured in the future.

Hubble’s law describes how fast our Universe is expanding. According to it, the recession velocity of gravitationally unbound objects (for example, galaxies) is directly proportional to the distance between them. The coefficient of proportionality is the Hubble constant H₀, which is in fact not a constant at all and changes with time. Throughout most of cosmic history the expansion was decelerating, so the Hubble constant decreased; in the future, as dark energy dominates, it will settle toward a nearly constant value.
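In formula form, Hubble’s law reads v = H₀d. A minimal sketch of the relation, using an assumed round value of the constant for illustration:

```python
H0 = 70.0  # Hubble constant in km/s/Mpc (an assumed round value, for illustration only)

def recession_velocity(distance_mpc, hubble_constant=H0):
    """Hubble's law: recession velocity (km/s) of an unbound object at a given distance (Mpc)."""
    return hubble_constant * distance_mpc

for d in (10, 100, 1000):  # distances in megaparsecs
    print(f"{d:>5} Mpc -> {recession_velocity(d):>8.0f} km/s")
```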

The Hubble constant has changed especially strongly over the past hundred years. True, on the scale of the Universe this period is negligible, so the change happened entirely in the minds of scientists, who kept improving their instruments and techniques. When Edwin Hubble discovered his eponymous law in 1929 by comparing the velocities and distances of 29 of the brightest galaxies, he obtained the value H₀ ≈ 535 ± 40 kilometers per second per megaparsec. In 1952, astrophysicists revised the period-luminosity relation of Cepheids, which was used to determine the distances to remote galaxies, and by 1955 the Hubble constant had dropped to H₀ ≈ 180 ± 20 kilometers per second per megaparsec. In 1968, Allan Sandage developed a new, more accurate method of measuring the constant and pinned its value at H₀ ≈ 75 ± 8 kilometers per second per megaparsec.

Over the following twenty years, scientists kept revising the Hubble constant and obtaining new values, among them H₀ ≈ 59 ± 6, H₀ ≈ 73 ± 7 and H₀ ≈ 95 ± 10 kilometers per second per megaparsec. Curiously, each measurement quoted an uncertainty of the order of ten percent, even though the central values of the constant differed by almost a factor of one and a half.

By the middle of the last decade, all these measurements more or less agreed with one another, converging on H₀ ≈ 73 ± 2 kilometers per second per megaparsec. However, the WMAP and Planck satellites learned to measure the constant in a new way, by tracking the oscillations of the cosmic microwave background, and those measurements gave a different result, H₀ ≈ 68 ± 1 kilometers per second per megaparsec. Although the spread between the central values shrank almost fivefold, the shrinking error bars meant the discrepancy between the results again reached almost four sigma. In a sense, this discrepancy was even worse than the previous ones, since the two results were obtained in completely independent ways.
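Such a tension is usually quantified by dividing the difference between the two central values by the quadrature sum of their uncertainties. The sketch below does this for illustrative numbers close to the distance-ladder and CMB results reported around that time (treated here as assumptions, not values quoted in this article):

```python
from math import sqrt

def tension_sigma(h1, err1, h2, err2):
    """Naive tension between two independent measurements, in standard deviations."""
    return abs(h1 - h2) / sqrt(err1**2 + err2**2)

# Illustrative values in km/s/Mpc (assumed, roughly matching published results circa 2018).
local_H0, local_err = 73.5, 1.6   # Cepheid + supernova distance ladder
cmb_H0, cmb_err = 67.4, 0.5       # inferred from the CMB within LambdaCDM

print(f"tension ≈ {tension_sigma(local_H0, local_err, cmb_H0, cmb_err):.1f} sigma")
# prints roughly 3.6 sigma, i.e. "almost four sigma"
```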

However, scientists still had a loophole that could remove this disagreement. The point is that the oscillations of the relic radiation can be converted into a Hubble constant only within the framework of a specific cosmological model: roughly speaking, the conversion depends on how the oscillations “stretch” as the Universe expands. Cosmology is currently dominated by the ΛCDM model, according to which the Universe consists of about 70 percent dark energy, 25 percent dark matter and 5 percent ordinary matter; this model agrees well with the satellite data. Nevertheless, one can try to “tweak” the model so that it stays within the allowed errors while explaining the discrepancy between the values of the Hubble constant.
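For context, in ΛCDM the expansion rate at any epoch follows from the Friedmann equation, H(a) = H₀√(Ω_r a⁻⁴ + Ω_m a⁻³ + Ω_Λ). A minimal sketch with assumed round-number density parameters (not the fitted values from the paper):

```python
import numpy as np

def hubble_rate(a, H0=68.0, omega_r=9e-5, omega_m=0.3, omega_lambda=0.7):
    """Expansion rate H(a) in km/s/Mpc for a flat LambdaCDM universe.

    a      -- scale factor (a = 1 today)
    H0     -- present-day Hubble constant
    omega_* -- density parameters (assumed round numbers, for illustration)
    """
    return H0 * np.sqrt(omega_r / a**4 + omega_m / a**3 + omega_lambda)

# The same CMB data interpreted with a different expansion history
# (e.g. extra energy density at early times) yields a different inferred H0.
for a in (1e-3, 0.5, 1.0):
    print(f"a = {a:<6} H(a) ≈ {hubble_rate(a):,.0f} km/s/Mpc")
```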

A group of astrophysicists led by Marc Kamionkowski has proposed two such modifications of the theory that solve the Hubble constant problem. Both modifications rely on so-called early dark energy (EDE), a hypothetical substance that played the role of dark energy at the early stages of the evolution of the Universe and then “dissolved”.

If we recall that the densities of photons and of cold matter already decrease with time because of the expansion of the Universe, the existence of such a substance seems plausible. In other words, EDE accelerates the expansion of the young Universe and disappears at later stages. If this contribution is not taken into account, the value of the Hubble constant indirectly inferred from the satellite data will come out too low.

In the first modification, the scientists considered an oscillating scalar field described by the potential V(φ) ∝ (1 − cos(φ/f))ⁿ. At large redshifts this field “freezes” and plays the role of a cosmological constant, and starting from a certain moment, set by the parameters of the potential, it begins to oscillate and behaves like a fluid. Over time the density of this fluid falls off roughly as the sixth power of the scale factor. For comparison, in the Universe the density of cold matter drops as the third power of the scale factor, the density of hot radiation (photons) as the fourth power, and the density of dark energy stays constant. Consequently, the contribution of the oscillating field to the later expansion of the Universe can be neglected.
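A minimal sketch of these dilution laws (the exponents are taken from the description above; the sixth-power scaling for the oscillating field is treated as approximate):

```python
def density(a, power):
    """rho(a) / rho(a_initial) for a component whose density dilutes as a^(-power)."""
    return a ** (-power)

scalings = {
    "cold matter (a^-3)": 3,
    "radiation (a^-4)": 4,
    "oscillating EDE field (~a^-6, approximate)": 6,
    "dark energy (constant)": 0,
}

for a in (10.0, 100.0):  # expansion by a factor of 10 and 100
    print(f"scale factor grows {a:g}x:")
    for name, p in scalings.items():
        print(f"  {name}: density falls to {density(a, p):.2e} of its initial value")
```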

Within this model, the physicists numerically simulated the expansion of the Universe and selected parameter values that fit within the measurement errors of the Planck satellite. Simply put, they forced the density of the fluid to decrease so quickly that the satellite would not feel its contribution to the energy density of the Universe.

For the calculations, the researchers used the MontePython-v3 code, which relies on Markov chain Monte Carlo sampling. The scientists then chose the remaining model parameters so that the value of the Hubble constant inferred indirectly from the relic radiation coincided with the value obtained from the recession of galaxies. It turned out that the theory can indeed be “fitted” in this way. Moreover, the “fit” should itself be detectable in the cosmic microwave background: according to the scientists, it is enough to increase the angular resolution of the measuring instruments.
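The sketch below illustrates the Markov chain Monte Carlo idea behind such parameter fits with a toy one-parameter example; it is not the actual MontePython interface or the authors' likelihood, just a minimal Metropolis sampler over assumed Gaussian “data”.

```python
import math
import random

def log_likelihood(H0, data_H0=68.0, data_err=1.0):
    """Toy Gaussian likelihood for a single 'measured' Hubble constant (assumed values)."""
    return -0.5 * ((H0 - data_H0) / data_err) ** 2

def metropolis(n_steps=20000, start=70.0, step=0.5):
    """Minimal Metropolis-Hastings sampler over one parameter."""
    chain, current, current_ll = [], start, log_likelihood(start)
    for _ in range(n_steps):
        proposal = current + random.gauss(0.0, step)
        proposal_ll = log_likelihood(proposal)
        # Accept with probability min(1, L_new / L_old)
        if random.random() < math.exp(min(0.0, proposal_ll - current_ll)):
            current, current_ll = proposal, proposal_ll
        chain.append(current)
    return chain

samples = metropolis()
print(f"posterior mean H0 ≈ {sum(samples) / len(samples):.1f} km/s/Mpc")  # recovers ~68 here
```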

In the second modification, the physicists considered a potential that is linear in φ at early times in the evolution of the Universe and tends to zero at late times. According to the authors of the paper, the exact shape of the potential does not play a significant role, so the results of its analysis largely coincided with those for the oscillating potential.

Physicists have tried before to solve the Hubble constant problem by “correcting” the model with which astronomers determine it. For example, in November 2017, astrophysicists from Harvard and Johns Hopkins University tried to attribute the discrepancy to the sizes of galaxies, but only made it worse. And in April of this year, American cosmologists came to the same (negative) result when analyzing the large-scale inhomogeneity of the distribution of galaxies in the local Universe.

However, besides looking for shortcomings in the theories, scientists are also trying to develop new methods of measuring the Hubble constant. For example, last year several groups of astrophysicists showed that it can be measured using gravitational waves from merging neutron stars or black holes. Moreover, the uncertainty of such measurements should become comparable to that of existing results within the next ten years.
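The idea behind such “standard siren” measurements is that the gravitational-wave signal gives the luminosity distance to the merger directly, while an electromagnetic counterpart gives the recession velocity of the host galaxy; for nearby events H₀ is then roughly the ratio of the two. A minimal sketch with assumed, illustrative numbers:

```python
def hubble_from_siren(recession_velocity_km_s, luminosity_distance_mpc):
    """Leading-order 'standard siren' estimate: H0 ≈ v / d for a nearby source."""
    return recession_velocity_km_s / luminosity_distance_mpc

# Illustrative values (assumed, not taken from any particular event):
# a host galaxy receding at ~3000 km/s whose GW-inferred distance is ~43 Mpc.
v, d = 3000.0, 43.0
print(f"H0 ≈ {hubble_from_siren(v, d):.0f} km/s/Mpc")  # ≈ 70 km/s/Mpc
```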
