----- The following copyright 1991 by Dirk Terrell
----- This article may be reproduced or retransmitted
----- only if the entire document remains intact
----- including this header

Lecture #7  "The Stellar Rosetta Stone"

Last time we looked at stellar temperatures and spectral types, and found that the color of a star indicates its surface temperature. Another important property of a star is its brightness, but we have to be sure we know what we mean when we refer to a star's brightness. Here's an example: which is brighter, a penlight or a car headlight? Well, let's see, it must be obvious that a headlight is brighter than a flashlight, right? Now suppose the flashlight is an inch from your eye and the car headlight is 50 miles away. Which one is brighter? The flashlight, of course! Wait a minute, we have just concluded that each of them is brighter than the other!

Our dilemma lies in the fact that we are calling two different things "brightness". In the first example, we were referring to the total amount of light (energy) put out by the two lights, and we correctly concluded that the headlight puts out more light than the flashlight. In the second example we were referring to how bright the lights appeared to be, and since the headlight was very far away, it LOOKED dimmer than the flashlight. Let's define the "brightness" in the first case as the absolute brightness and the "brightness" in the second case as the apparent brightness.

For stars, we measure brightnesses in units called magnitudes, with a smaller magnitude meaning a brighter star (i.e. a star of first magnitude is brighter than a second magnitude star). Thus we refer to a star's absolute brightness as its absolute magnitude. We can also measure a star's absolute brightness by the amount of energy it gives off each second, and this is called its luminosity. We refer to a star's apparent brightness as its apparent magnitude. The difference between absolute and apparent magnitude, as the above examples illustrate, lies in the distance to the star. A star that appears dim could be a faint star that is close to us or a bright star that is very far away. Absolute magnitude is defined as the apparent magnitude a star would have if it were 10 parsecs away (a parsec is approximately 3.26 light years). Also, an important rule to remember is that a difference of five magnitudes corresponds to a brightness ratio of 100. A first magnitude star is 100 times brighter than a sixth magnitude star.

Now, intuitively we might expect a relationship between the surface temperature of a star and its absolute brightness or luminosity, namely that hotter stars will be brighter. The way to see if this is actually the case is to make a graph of luminosity versus temperature for stars of various temperatures. At the beginning of this century, Danish astronomer Ejnar Hertzsprung and American astronomer Henry Norris Russell recognized this relationship, and a plot of luminosity versus temperature is now called the Hertzsprung-Russell (HR) diagram. You will also hear HR diagrams called color-magnitude diagrams. Do you know why? When you look at an HR diagram, you see that the hot stars do indeed have higher luminosities. One word about HR diagrams: for some reason, unbeknownst to me, temperature (which is on the x-axis) is plotted increasing to the left, which is contrary to the way we usually do things, so stars to the left of other stars in the diagram are hotter than the ones to the right.
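Before we get to the diagram itself, here is a small Python sketch of the magnitude arithmetic described above. The function names are just illustrative, but the two relations they encode are the standard ones: a difference of five magnitudes is a factor of exactly 100 in brightness, and absolute magnitude is the apparent magnitude the star would have at a distance of 10 parsecs.

    import math

    def brightness_ratio(mag1, mag2):
        # How many times brighter a star of magnitude mag1 appears than one of
        # magnitude mag2.  Five magnitudes is a factor of exactly 100, so each
        # magnitude is a factor of 100**(1/5), about 2.512.
        return 100.0 ** ((mag2 - mag1) / 5.0)

    def absolute_magnitude(apparent_mag, distance_pc):
        # Absolute magnitude: the apparent magnitude the star would have if it
        # were moved to a distance of 10 parsecs.
        return apparent_mag - 5.0 * math.log10(distance_pc / 10.0)

    print(brightness_ratio(1.0, 6.0))      # 100.0 -- a first vs. a sixth magnitude star
    print(absolute_magnitude(7.0, 100.0))  # 2.0 -- a hypothetical 7th-magnitude star at 100 pc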
When you plot the positions of stars in the HR diagram, one thing immediately becomes clear: most of the stars lie along a band running from the lower right corner to the upper left. This band of stars is known as the main sequence. It looks like this:

    (bright) | *
        l    |  * *   m
        u    |   * *   a
        m    |    * *   i
        i    |     * *   n
        n    |      * *
        o    |       * *   s
        s    |        * *   e
        i    |         * *   q
        t    |          * *   u
        y    |           * *   e
             |            * *   n
             |             * *   c
      (dim)  |              * *   e
             -----------------------------
           (hot)    temperature     (cool)

As we will see, the HR diagram turns out to be a very powerful tool in understanding the structure and evolution of stars. It enables us to test our theories of stellar evolution: we can construct an HR diagram from our theoretical models and compare it directly to observed HR diagrams. One nice example of such a test is to compute structural models of a star assuming that its chemical composition is always uniform throughout the star but slowly changes from 90% hydrogen/10% helium to 10% hydrogen/90% helium. For each composition the model gives you the luminosity and effective temperature of the star. For the uniform case, the model predicts that the star's temperature will rise (i.e. it will move to the left in the HR diagram as it evolves). But observed HR diagrams show that stars move to the RIGHT as they evolve (they become cooler). So stars do not maintain a uniform chemical composition as they evolve.

We understand this as the result of the nuclear fusion that takes place in the core of the star. In the core, hydrogen is converted to helium, but in the envelope (the outer layers) of the star the temperature and density are too low for fusion to take place, and the chemical composition there remains constant. So over time the star's chemical composition becomes non-uniform. When we compute structural models with proper nuclear fusion calculations, we find that they predict the star will move to the right in the HR diagram, which is what we observe in real stars.
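To place a model on the HR diagram, you take the luminosity and effective temperature it predicts and plot them as a single point. The two quantities are tied to each other through the star's radius by the Stefan-Boltzmann law, L = 4*pi*R^2*sigma*T^4 (a standard relation, not derived in this lecture). Here is a minimal sketch; the constants are rounded solar values and the function name is just illustrative.

    import math

    SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
    L_SUN = 3.828e26    # luminosity of the Sun, watts
    R_SUN = 6.957e8     # radius of the Sun, meters

    def luminosity(radius_m, t_eff_k):
        # Total energy radiated per second by a star of the given radius and
        # effective (surface) temperature: L = 4*pi*R^2*sigma*T^4.
        return 4.0 * math.pi * radius_m**2 * SIGMA * t_eff_k**4

    # At a fixed radius, doubling the surface temperature raises the
    # luminosity by a factor of 2**4 = 16.
    print(luminosity(R_SUN, 5772) / L_SUN)      # roughly 1 (the Sun itself)
    print(luminosity(R_SUN, 2 * 5772) / L_SUN)  # roughly 16

Plotting log L against log T for a sequence of such points, with temperature increasing to the left as in the diagram above, is all it takes to compare a model track with an observed HR diagram.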