How come bright stars get smaller magnitudes?

It started with the ancient Greek scientist Hipparchus, who compiled a star catalog in the 2nd century BC and subdivided the stars into classes according to their brightness. Three centuries later, Ptolemy in his scientific treatise "Almagest" subdivided all visible stars into six groups, where the last, sixth group was occupied by the dimmest stars visible to the naked eye. These are called sixth-magnitude stars. With the development of astronomical measurements, it turned out that the faintest stars of the first group, such as Aldebaran, give us about 2.5 times more light than the faintest stars of the second group, and those, in turn, give 2.5 times more light than the stars of the third group, and so on. It turns out that ancient astronomers distributed stars over the brightness groups in accordance with the psycho-physiological Weber–Fechner law, which, however, was only discovered in the middle of the 19th century. This law states that the intensity of perception is proportional to the logarithm of the intensity of the stimulus. The British astronomer Norman Pogson then suggested extending the ancient classification: faint stars, visible only through a telescope, are assigned a magnitude one greater for every factor of about 2.5 (more precisely, the fifth root of 100, approximately 2.512) by which they give less light. By now, the best ground-based telescopes and the Hubble Space Telescope allow us to see stars down to the 27th and 30th magnitudes respectively.
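Pogson's rule can be sketched as a small calculation. A difference of 5 magnitudes corresponds to exactly a factor of 100 in brightness, so one magnitude corresponds to the fifth root of 100. The function names below are illustrative, not part of any standard library:

```python
import math

# Pogson's ratio: the brightness factor corresponding to 1 magnitude.
POGSON_RATIO = 100 ** (1 / 5)  # ≈ 2.512

def magnitude_difference(flux_ratio: float) -> float:
    """Magnitude difference between two stars whose brightness (flux) ratio is given."""
    return 2.5 * math.log10(flux_ratio)

def flux_ratio(delta_magnitude: float) -> float:
    """How many times brighter a star is than one fainter by delta_magnitude."""
    return POGSON_RATIO ** delta_magnitude
```

For example, a first-magnitude star is 100 times brighter than a sixth-magnitude star, since the magnitude difference is 5.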


Magnitude is an astronomical characteristic of a star's brightness

The lower the brightness of a star, the higher the magnitude astronomers assign to it.
