Posted on 12/11/2002 6:28:08 AM PST by A2J
By WILL SENTELL
wsentell@theadvocate.com
Capitol news bureau
High school biology textbooks would include a disclaimer that evolution is only a theory under a change approved Tuesday by a committee of the state's top school board.
If the disclaimer wins final approval, it would apparently make Louisiana just the second state in the nation with such a provision. The other is Alabama, which is the model for the disclaimer backers want in Louisiana.
Alabama approved its policy six or seven years ago after extensive controversy that included questions over the religious overtones of the issue.
The change approved Tuesday requires Louisiana education officials to check on details for getting publishers to add the disclaimer to biology textbooks.
It won approval in the board's Student and School Standards/ Instruction Committee after a sometimes contentious session.
"I don't believe I evolved from some primate," said Jim Stafford, a board member from Monroe. Stafford said evolution should be offered as a theory, not fact.
Whether the proposal will win approval by the full state Board of Elementary and Secondary Education on Thursday is unclear.
Paul Pastorek of New Orleans, president of the board, said he will oppose the addition.
"I am not prepared to go back to the Dark Ages," Pastorek said.
"I don't think state boards should dictate editorial content of school textbooks," he said. "We shouldn't be involved with that."
Donna Contois of Metairie, chairwoman of the committee that approved the change, said afterward she could not say whether it will win approval by the full board.
The disclaimer under consideration says the theory of evolution "still leaves many unanswered questions about the origin of life.
"Study hard and keep an open mind," it says. "Someday you may contribute to the theories of how living things appeared on earth."
Backers say the addition would be inserted in the front of biology textbooks used by students in grades 9-12, possibly next fall.
The issue surfaced when a committee of the board prepared to approve dozens of textbooks used by both public and nonpublic schools. The list was recommended by a separate panel that reviews textbooks every seven years.
A handful of citizens, one armed with a copy of Charles Darwin's "On the Origin of Species," complained that biology textbooks used now are one-sided in promoting evolution uncritically and are riddled with factual errors.
"If we give them all the facts to make up their mind, we have educated them," Darrell White of Baton Rouge said of students. "Otherwise we have indoctrinated them."
Darwin wrote that individuals with certain characteristics enjoy an edge over their peers and life forms developed gradually millions of years ago.
Backers bristled at suggestions that they favor the teaching of creationism, which says that life began about 6,000 years ago in a process described in the Bible's Book of Genesis.
White said he is the father of seven children, including a 10th-grader at a public high school in Baton Rouge.
He said he reviewed 21 science textbooks for use by middle and high school students. White called Darwin's book "racist and sexist" and said students are entitled to know more about controversy that swirls around the theory.
"If nothing else, put a disclaimer in the front of the textbooks," White said.
John Oller Jr., a professor at the University of Louisiana-Lafayette, also criticized the accuracy of science textbooks under review. Oller said he was appearing on behalf of the Louisiana Family Forum, a Christian lobbying group.
Oller said the state should force publishers to offer alternatives, correct mistakes in textbooks and fill in gaps in science teachings. "We are talking about major falsehoods that should be addressed," he said.
Linda Johnson of Plaquemine, a member of the board, said she supports the change. Johnson said the new message of evolution "will encourage students to go after the facts."
Bravo!
Well done my friend. :-)
Over the past number of weeks I have been contemplating the direction that this now defunct thread took. I thought I would share some of those thoughts this evening.
Shermy; get the way back machine! :-)
Back in the 1960s I watched a show called Star Trek. One of the themes that ran through many of the episodes was the impact of computers on both the starship and the worlds it visited. From that moment on, I became fascinated with computers and machine intelligence. Moore's law had yet to be defined and the microprocessor had not yet been invented. Heck, not too long before, in 1947, the very first transistor was invented, and on April 25, 1961, the first patent was granted for the integrated circuit. And as they like to say, the rest is history.
Anyhow, when Star Trek was being aired for the first time, RTL, DTL, and TTL were still king of the hill. Many of the computers of that time were large mainframes which used electromechanical interfaces such as teletypes. There were articles about learning machines in publications such as Scientific American and the like; however, they only gave the first glimmers of where computer technology would lead us. Gemini and Apollo ruled the roost at NASA, and the future looked promising (at least to some of us).
Let me digress a bit further into the culture of those times. Overpopulation was a great worry, the Vietnam War caused a huge backlash by the counterculture types, and racial clashes were commonplace. I still remember walking into bookstores and seeing the posters of the time showing a ruined society with waves of people everywhere. Bike gangs, drugs and hippies were all the rage. During this tumultuous time, progress continued on these tiny circuits which would later evolve into the microprocessor. The hi-fi, the television, and the telephone were about all that the average house had that reflected the great leaps that were happening. Banking was the closest that most folks got to computing, and the news often had stories about computer banking errors. This did not instill trust in this new technology among the general public. I often heard "What good is it?" or "newfangled" during that time frame. Oh, most folks knew NASA needed computers; however, computers were seen as more of a bother than a boon to the general masses.
This culminated toward the end of the '60s and into the first part of the '70s. Books were written about the information age, such as Future Shock by Alvin Toffler. The supposition was that technology was going to explode at such a pace that the average person would be lost in this sea of technology and end up rejecting and/or being buried by it. I personally did not adhere to that mindset. I so wanted my own computer. Unfortunately, computers were still the realm of either sci-fi or companies. Individuals just did not own their very own computer. I would talk to my dad and others and would receive this reply more often than not: what would you do with it? To this day I remember my neighbor taking me to the CDC core memory plant. What a treat! I stood in awe looking at all of the machines, terminals, teletypes, and computers in this huge building. I was in heaven. This was the place where the memory arrays were built for those huge mainframes. I was allowed into the room with the sea of workstations where women strung tiny ferrite beads on strands of wire under magnifying glasses. If you have never seen a core memory plane in real life, you are missing out on a real work of art. Each core plane was strung with gold, red and green wires not much bigger than sewing thread. The cores themselves were so tiny you could barely see the hole in the center of each. All these colored wires caused the core plane to glitter with tiny rainbows of light under the fluorescent lighting. I was in awe. I did make one BIG faux pas: I brushed a fingertip over the top plane of one of the core stacks to feel it, not realizing I was causing almost two days' worth of repair work. I still feel bad about that to this day.
Enter the microprocessor. Up until this point, the central processing unit of a computer had been made up of a number of different circuits, starting with tubes and relays in the very early machines, then migrating to transistors and finally to discrete integrated circuits. In 1971 this all changed. Intel Corporation created the world's very first CPU on a chip. It was only a four-bit processor that contained 2,300 transistors. This brain on a chip was quickly followed by eight-bit processors that became the mainstay throughout the rest of the '70s. As complexity progressed, Intel co-founder Gordon Moore noticed a particular trend: roughly every 18 months to two years, the number of transistors on the average processor doubled. This quickly became known as Moore's Law. It has led to single processors (such as the Itanium 2) with approximately 400 million transistors inside, a far cry from the humble beginnings of 2,300 transistors. It would take roughly 174,000 Intel 4004 processors to equal the transistor count of one Itanium 2. The clock speed of CPUs has increased as well. With the advent of massive parallel processing and clustering of CPUs, the humble microprocessor has evolved into the supercomputer realm, something that would have boggled the mind back in 1971.
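That arithmetic is easy to sanity-check. Here is a quick back-of-the-envelope sketch in Python (the 400 million transistor figure for the Itanium 2 is approximate):

```python
import math

# Transistor counts quoted above (the Itanium 2 figure is approximate).
i4004_transistors = 2_300       # Intel 4004, 1971
itanium2_transistors = 400e6    # Itanium 2, early 2000s

ratio = itanium2_transistors / i4004_transistors
doublings = math.log2(ratio)

print(f"{ratio:,.0f} 4004s per Itanium 2")       # about 174,000
print(f"{doublings:.1f} doublings since 1971")   # about 17.4
# At one doubling every ~2 years, that is roughly 35 years of Moore's Law.
```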
I used to play Traveller and D&D, plus a really obscure game called "SPACE QUEST". BTW, this is not the "Space Quest" everyone sees today.
It was similar to Traveller; however, the game was far more complex in its ship design and operation and the physics of spaceflight, and it even went so far as to include spectral and luminosity classes, orbital mechanics, civilization levels and types, planetary atmospheres, flora and fauna, etc. It took days to set up a ship, crew, and the nearby stars. Probably why it never took off. I wonder how many people have this RPG book on a shelf somewhere. Not many, I would bet. They only printed it once, and it was a limited run at the time.
Also, computer gaming was getting its legs at the same time. My first computer game was called Adventure, which we would play late at night on the IBM 360 mainframes. Then Zork came out for the Commodore and Atari, and all bets were off.
BOOM! Computer gaming became huge, eventually overtaking paper RPGs.
Now, with the ease of the Internet, better graphics, and the speed of personal computers, RPGs have come into their own on the PC (EverQuest is but one early example).
Also, the face of the chat room is changing. These are virtual worlds with physics, textures, walls, lawns, forests, bushes, libraries, rooms (whole towns), etc., that you can walk through using the avatar of your choice, seeing out of your avatar's eyes the other avatars walking through this same virtual world, and being able to congregate and chat. BTW, brick looks like brick; add marble, cement, flora and fauna, wood, lakes, waterfalls, pools, metal, etc. These look real. There are whole websites devoted to nothing but textures for building a world/community to add to the existing ones out there already. I know of one, which I have access to, that would take you months to explore. There are castles, gardens, forests, towns, homes, etc. One person even made a New York street complete with cabs, noise, and high-rises you could get into (including riding the elevators). You could take a cruise on a cruise ship, swim, ride wave runners, etc. Snow would fall, there was night and day, the moon phases would change. I even walked by a lake where I could see the stars reflected in the water. How cool is that?
I remember being in one of these worlds where we were just a bunch of avatars standing around in front of a bar and grill on a cobblestone street. It was like really being there. However, the folks I was casually chatting with were from all over the world. Mostly from the USA, Canada, Britain, France, Netherlands, Australia, New Zealand, and Spain. A few were from South America, Asia, Eastern Europe and the like, but that was not often. It was kind of strange walking down a realistic looking street with a group of folks chatting away, knowing in the back of your head, these were people sitting at computers from all over the world.
Add VR headsets, and you could almost forget you were in a virtual world as opposed to the physical one.
Some people took this to the extreme as well. I saw marriages, fights, cliques, families, occupations, virtual money, property bought and sold, all in this "cyberspace".
This is not just a fad either. It is growing FAST. Even the US Army has gotten involved. They are using the VR software from one of these online communities to set up virtual combat simulations for training.
The "Matrix" is not as far off as you may think. BTW, I am not talking a war with machines, but the virtual logging into a world that looks and acts like the "real" one.
Just my two cents.
-and another- (sheesh RADES! you gonna post all your old stuff?)
There is not enough information in this article to make any kind of assessment of the research being done. I do know they have found a gene that is partly responsible. See:
http://www.physicspost.com/articles.php?articleId=166
However, there is another revolution going on that many have not extrapolated to its logical conclusion. A scientist by the name of Dr. Gordon Moore postulated, in a paper in the April 19, 1965 edition of Electronics magazine (titled "Cramming more components onto integrated circuits"), that industry would be able to double the number of components on an electronic chip, while cutting the cost per component, every year. Quoting from that paper:
"The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years." (Moore 1965)
This was postulated by Moore as a simple log-linear relationship between complexity and time. In 1975 (ten years later), Dr. Moore delivered a paper to the IEEE International Electron Devices Meeting in which he showed a plot of semiconductor devices that followed his prediction remarkably well. There were some minor revisions to the curve; most notably, his predicted doubling period stretched to roughly every two years instead of every year (the popular 18-month figure came along later). This curve came to be known as Moore's Law and even got its own equation: N(t) = N0 x 2^((t - t0)/T), where N0 is the transistor count at time t0 and T is the doubling period.
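The equation is simple enough to code up directly; in this sketch the two-year doubling period is just an assumed default you can vary:

```python
def moore_projection(n0, t0, t, doubling_years=2.0):
    """Transistor count N(t) = N0 * 2^((t - t0) / T) under Moore's Law."""
    return n0 * 2 ** ((t - t0) / doubling_years)

# Starting from the Intel 4004 (2,300 transistors in 1971):
print(round(moore_projection(2300, 1971, 2003)))  # about 150 million
```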
In a physics class I took, I heard a professor remark that if airplanes had progressed with the same rapidity as the microprocessor, we would have landed on the Moon ten years after the Wright Brothers flew at Kitty Hawk. That may be a bit overdramatic; however, it drives home just how complex and capable these tiny brains called microprocessors are becoming.
Dr. Federico Faggin designed the first microprocessor, the 4004, in 1971. Interesting anecdote: Dr. Faggin was in his lab all alone late one evening in January of 1971. He had received his first 4004 CPU wafer that day and wanted to test it. He worked late into the night, and I can imagine the whoop as he realized it worked! His wife, Elvia, was the first person other than Dr. Faggin to share his triumph. If you ever peel the cover off a 4004 CPU and look at it under a microscope (I would not recommend it; they are becoming collector's pieces), you will notice, etched alongside the circuit, the initials FF for Federico Faggin.
http://www.intel4004.com/sign.htm
Note: a central processor unit (CPU) wafer is a round slice of silicon with a group of chips etched onto the surface, which are then cut up and packaged into individual chips we see in our computers. See:
http://www.sudhian.com/showdocs.cfm?aid=619
Let us leap 34 years into the future, to January 2005 (next month). Sticking to Intel (yes, I know there are AMD, IBM, and a host of others out there), the current production processor for the home computer is called the Prescott. The original 4004 contained 2,300 transistors, ran at a clock speed of 108 kHz, used a PMOS process with 10 um line widths, and had a 4-bit architecture. Just 34 years later, the Prescott uses a 90 nm (0.09 um) process on a strained silicon substrate, packs 125 million transistors, and runs at clock speeds up to 3.8 GHz.
http://www.pctechguide.com/02procs_Prescott.htm
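Using the figures quoted above, the scaling between the two parts works out as follows (a rough sketch; real-world performance depends on far more than clock speed):

```python
# 4004 (1971) vs. Prescott (2004), using the numbers quoted above.
clock_ratio = 3.8e9 / 108e3       # clock speed: about 35,000x
feature_ratio = 10e-6 / 90e-9     # line-width shrink: about 111x
transistor_ratio = 125e6 / 2300   # transistor count: about 54,000x

print(round(clock_ratio), round(feature_ratio), round(transistor_ratio))
```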
You may be wondering where I am going with this. :-)
IMHO, it will not be biological systems that ensure longevity, but silicon instead. I read about leaps and bounds in this technology every day: multiple-core CPUs, massive parallel processing, faster bandwidth, lower latency, etc. I am neither a chip designer nor an information theorist; however, I know a few who are. There is some thought in the industry that we may in fact be able to directly link a human brain to a silicon one, effectively expanding the biological into the machine.
Pure layman conjecture here: Would this then ultimately allow for our consciousness to leave this biological construct called the human brain and reside wholly in silicon?
Just an interesting thought.
From here:
http://en.wikipedia.org/wiki/CMOS
"CMOS ("see-moss"), which stands for complementary metal-oxide semiconductor, is a major class of integrated circuits. CMOS chips include microprocessor, microcontroller, static RAM, and other digital logic circuits. The central characteristic of the technology is that it only uses significant power when its transistors are switching between on and off states. Consequently, CMOS devices use little power and do not produce as much heat as other forms of logic. CMOS also allows a high density of logic functions on a chip."
"The phrase "metal-oxide-semiconductor" is a reference to the nature of the fabrication process originally used to build CMOS chips. That process created field effect transistors having a metal gate electrode placed on top of an oxide insulator, which in turn is on top of a semiconductor material. Instead of metal, today the gate electrodes are almost always made from a different material, polysilicon, but the name CMOS nevertheless continues to be used for the modern descendants of the original process."
Here is an excellent gate demonstration (Java applet) and description of why CMOS works so well.
http://tech-www.informatik.uni-hamburg.de/applets/cmos/cmosdemo.html
And here is a nice chip collection tracing the history of the IC:
http://smithsonianchips.si.edu/
And lastly here is a timeline of IC and process development:
Moore's law has affected the Drake equation in ways we don't even know yet. I personally think the Fermi Paradox is pure BS and not well thought out; however, the Drake equation seems to have stood up to scrutiny.
SETI (at least the current trend) is searching for extremely narrowband carrier signals that are Doppler-shifted due to planetary rotation. The Doppler shift is extremely important, since if it is not there, we know the signal is either terrestrial or an artifact of the equipment itself. The other thing that is very important is the two-antenna approach. If two antennas, separated by a thousand miles, were pointed at the same patch of sky, a satellite could not "spoof" the system. First, the likelihood of it being within the footprint of both antennas is exceedingly small, and if that did happen, the differing Doppler characteristics between the two antennas would rule it out.
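The size of that rotation-induced shift is easy to estimate with the non-relativistic formula delta-f = f * v/c. A sketch in Python, using Earth's equatorial rotation speed (about 465 m/s) and the 1420 MHz hydrogen line as illustrative numbers:

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift(f_hz, v_ms):
    """Non-relativistic Doppler shift for a line-of-sight velocity v."""
    return f_hz * v_ms / C

# A transmitter on a planet like Earth sweeps through roughly +/-2.2 kHz
# at 1420 MHz as the planet rotates; that slow drift is the signature
# a terrestrial source or an equipment artifact would lack.
print(round(doppler_shift(1.42e9, 465.0)))  # about 2200 Hz
```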
All that said, I agree that advances in communications technology can make a search futile for many types of broadcasts. Frequency-hopping spread spectrum and the like will make it far harder to detect a tool-building species that uses radio (EM).
To be fair to the other side, there is another factor in this conjecture. A race is progressing along and figures out that the electromagnetic spectrum is the only practical method of long-range communications, so high-powered transmitters are built while this technology is in its infancy. As the engineering and science of radio advance, they figure out that tight-beam, spread-spectrum, synthetic-aperture, frequency-hopping, and similar techniques save not only power but also bandwidth. So for the first 50 years they have been "bleeding" EM into space across a huge range of frequencies, into an ever-increasing sphere of radio noise. However, due to technological advances, the RF being bled into space then quiets down dramatically.
Now, let's jump a few years. This race has expanded off its initial planet and is exploring the solar system it resides in. (IMHO, star travel still remains firmly in the realm of sci-fi.) Somehow they have to communicate, so again high-power transmitters are employed. Light is not out of the question; however, microwave is easy and cheap, has looser pointing-accuracy requirements, and won't be drowned out by the star. So suddenly this race is again radiating RF into the universe. According to this scenario, a race can emit RF, grow silent for a time, and then restart emitting RF.
I also agree the movie "Contact" is pretty shallow in many ways. Headphones? Not a chance. Most SETI searches are done with computers looking at millions of frequencies simultaneously. Also, SETI is not looking for, nor is it expecting, any modulation. That would long ago have been lost in the interstellar medium (ISM). All that can reasonably be expected to be detected is a faint signal from the narrowband carrier itself. In fact, due to the signal-to-noise (S/N) characteristics, the narrower the band being searched, the better. Some searches are looking for signals no wider than 0.8 Hz.
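The reason narrow bins win falls straight out of the thermal noise floor P = kTB: noise power scales linearly with bandwidth, so shrinking the bin shrinks the noise. A sketch (the 25 K system temperature is an assumed, illustrative value):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def noise_power_dbm(t_sys_k, bandwidth_hz):
    """Thermal noise power kTB, expressed in dBm."""
    p_watts = K_B * t_sys_k * bandwidth_hz
    return 10 * math.log10(p_watts / 1e-3)

# Narrowing from a 1 MHz channel to a 0.8 Hz bin lowers the noise floor
# by 10*log10(1e6 / 0.8), about 61 dB.
gain_db = noise_power_dbm(25, 1e6) - noise_power_dbm(25, 0.8)
print(round(gain_db, 1))  # about 61.0
```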
Just my two cents.
From another Freeper to me:
Violence is never the answer
My reply:
I have to disagree.
Violence is sometimes the ONLY answer. I mean face-down-in-the-mud, bullet-through-the-brain type violence. Sometimes even violence without mercy or rational thought. Could we have won WWII without such? Look at what we defeated.
Any student of history knows that violence is what shaped where we are today. Without it, we would never have even evolved from the primitive primates of so long ago. Most of our real advances came out of conflict.
This veneer we call civilization is thin indeed (if not a fantasy). How long would that veneer last without police or military forces? Not long, IMHO.
Now I find myself in a quandary here. I am highly educated, a nice guy; I don't drink, have not been in a fight since third grade, never shout, and spend most of my time in either a lab or a control room. However, should the real need arise, I would hope I could be as violent as the next person.
Welcome to evolution 101.
This is an excellent thread for such posts. Good stuff.
Pure layman conjecture here, and an unpopular one: no. But then I am a strong materialist, believing that consciousness is what the brain does.
Even with Moore's Law continuing to hold, I do not see anything on the horizon matching the complexity of the human brain, even if we knew how to build it.
Solving the problem of consciousness would be as great a shock to civilization as the Copernican revolution, or evolution.
Cerebral placemarker
29 bottles placemark
I disagree. Moore's law will (in the not-so-distant future) dictate that silicon complexity must surpass the organic.
The rub is, will we even know when that happens?
Would vastly surpass those two IMHO. :-)
Think back a little more than 500 years. Many people still believed the world was flat, the world was only 6000 years old, the Earth was at the center of the universe, etc. However, the time was ripe for not only huge leaps in knowledge, but in exploration as well.
Europe was changing. Natural resources and newly exotic items (especially from the Far East, such as spices, drugs, silk and china) were all the rage. During this time land-based trade routes were established; however, they were long, costly, and difficult. Water routes were attempted, including one funded by Ferdinand and Isabella in 1492. As it happened, a trade route to the Orient was not forthcoming; however, an entire new continent was discovered (at least to the Europeans).
Here is where it gets interesting. Countries in Europe (mainly Spain, France, and England) looked to this new land not for colonization, but for its abundance of natural resources. Think of what came back from the New World: sugar cane, rubber, gold, silver, furs, timber, cocoa, etc. So these were not only voyages of discovery, but voyages that ultimately led to trade and wealth.
It took close to 100 years from the voyages of Columbus to the establishment of colonies. Were they able to produce all of the things needed for a society? Hardly. However, with natural resources being shipped back to the Old World and manufactured goods shipped to the New, it turned out to be quite profitable for the nations (and companies; the East India Company comes to mind) involved.
What I am driving at is that you don't need all of the 4,000 years of technological infrastructure to produce a successful colony. If we do establish a lunar colony, the raw material from the lunar regolith may generate enough wealth to make it worth the effort.
On what basis? Are you assuming a transistor per neuron? If so, I would suggest you are off by a few orders of magnitude.
I don't think we have a firm understanding of what neurons do, and we certainly don't have a clear understanding of glial cells and neurotransmitters. We haven't modeled the mind of the planaria, a mere few hundred cells.
Not really. The Spanish had functioning cities in several colonies much earlier than that. Only a single generation was required for them to get going. The lure of free land, slaves, and mineral wealth was pretty much overwhelming. It wasn't until 1607 that the Jamestown colony got started, and that may be the 100-year gap you're thinking of; but England was a laggard in that game.
Tortoise can answer this far better than I.
Fair enough. :-)
There will be a leap second for 2005.
"Why?" I hear you cry!
To explain the leap second issue, an understanding of several other interrelated topics is necessary, such as fixed celestial references, conservation of momentum, and certain multi-body phenomena affecting the Earth's motion.
Historically timekeeping and calendars have been tied to the motions found in the heavens. These have been primarily the stars, our Moon, and the Sun. To get a rudimentary understanding of how time is measured and where we got our units of time, we must first talk about the motions of these heavenly bodies referenced back to our Earth. This paper will explore many aspects of the universe, its evolution and the physical properties contained within.
Before we get too far along, I thought we might toss a few definitions up front: :-)
Occam's Razor: one should not make more assumptions than the minimum needed.
Principle of Parsimony: a criterion for deciding among scientific theories or explanations. One should always choose the simplest explanation of a phenomenon, the one that requires the fewest leaps of logic.
Objective: undistorted by emotion or personal bias; based on observable phenomena ("an objective appraisal"; "objective evidence").
Subjective: taking place within the mind and modified by individual bias ("a subjective judgment").
Sophistry: a deliberately invalid argument displaying ingenuity in reasoning in the hope of deceiving someone.
Solipsism (noun), solipsistic (adjective): the belief that only one's own experiences and existence can be known with certainty.
Cislunar: situated between the Earth and the Moon.
Lagrange Points: places where a light third body can sit "motionless" with respect to two heavier bodies that are orbiting each other, thanks to the force of gravity. (There are five.)
Regolith: the layer of loose rock resting on bedrock, constituting the surface of most land. Also called mantle rock.
Hohmann Transfer Orbit: the most efficient intermediate orbit for transferring from one circular orbit to another. The transfer orbit is an ellipse with periapsis at the smaller radius and apoapsis at the larger radius.
Delta-V: delta indicates change and V stands for velocity; a change in velocity refers to both the speed of the craft and its direction.
Isp (specific impulse): the amount of thrust produced from each pound of propellant per second.
Mass Fraction: a measurement of a rocket's efficiency; the mass of the propellants divided by the total mass of the rocket.
Planetesimals: rocky and/or icy bodies, a few kilometers to several tens of kilometers in size, produced in the solar nebula.
Roche Limit: the minimum distance to which a large satellite can approach its primary body without being torn apart by tidal forces.
Tidal Lock: tidal drag from one orbiting body on another causes the two bodies to "lock" to each other. This is why the Moon keeps only one side to the Earth.
Angular Momentum: a quantity obtained by multiplying the mass of an orbiting body by its velocity and the radius of its orbit. According to the conservation laws of physics, the angular momentum of any orbiting body must remain constant at all points in the orbit; thus planets in elliptical orbits travel faster when they are closest to the Sun and more slowly when farthest from it. A spinning body also possesses spin angular momentum.
Isotope: a form of an element with the same atomic number but a different atomic weight (i.e., more or fewer neutrons).
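To make the Hohmann transfer and delta-V definitions concrete, here is a small sketch computing the classic two-burn transfer between circular orbits; the LEO-to-GEO radii are illustrative example numbers:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def hohmann_delta_v(r1, r2, mu=MU_EARTH):
    """Total delta-v (m/s) for a Hohmann transfer between circular orbits r1 -> r2."""
    a = (r1 + r2) / 2                          # semi-major axis of transfer ellipse
    v1 = math.sqrt(mu / r1)                    # circular speed at r1
    v2 = math.sqrt(mu / r2)                    # circular speed at r2
    v_peri = math.sqrt(mu * (2 / r1 - 1 / a))  # vis-viva speed at periapsis
    v_apo = math.sqrt(mu * (2 / r2 - 1 / a))   # vis-viva speed at apoapsis
    return (v_peri - v1) + (v2 - v_apo)        # one burn at each end

# Low Earth orbit (~6,678 km radius) to geostationary (~42,164 km):
print(round(hohmann_delta_v(6.678e6, 4.2164e7)))  # about 3,900 m/s total
```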
Now that we have a few definitions under the belt, let us continue.
The Celestial sphere:
When we look up at the stars in the night sky they appear to be stationary relative to each other. As the Earth moves from one side of the Sun to the other, the displacement of those stars due to parallax is less than one second of arc even for the nearest star (Proxima Centauri). One way of looking at this is a fixed sphere of stars surrounding the Earth/Sun system. This is often referred to as the Celestial Sphere. This is why some of the ancient civilizations considered the stars to be holes in a tapestry.
Since we are talking distances and parallax, let's briefly take a moment to describe them. The term most familiar to the layman for stellar distances is the light year: the distance light travels in one calendar year. For example, the closest star to our Sun, Proxima Centauri, is approximately 4.22 light years from our solar system; the light we see from it today was actually generated by that star 4.22 years ago. Astronomers use another, perhaps less familiar, term for stellar distance: the parsec. A parsec (parallax-arcsecond) is the distance at which one astronomical unit (AU) subtends one second of arc. An AU is the average distance from the Earth to the Sun, approximately 93 million miles, and an arcsecond is 1/60 of an arcminute, which is 1/60 of a degree. It turns out a parsec is about 3.26 light years. Thus for an observer sitting 3.26 light years from the Sun, the distance from the Sun to the Earth's orbit subtends one arcsecond.
Conversely, an observer on the Earth will see an object positioned one parsec away appear to shift by up to two arcseconds over the course of a year. If one sighting is made when the line from the Sun to the Earth is 90 degrees from the line of observation, six months later the Earth will be on the opposite side of its orbit. Since the radius of the Earth's orbit is one AU, the diameter is 2 AU. This change in apparent position from different viewing locations is called parallax.
Proxima Centauri (at 4.22 light years, or roughly 1.3 parsecs) shows a parallax of about one and a half seconds of arc over the course of a year, too small to be discerned without special high-precision equipment. Most stars are much further away than Proxima Centauri, so for most practical purposes the stars are fixed, at least for periods less than a decade.
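The parallax-to-distance relationship described above reduces to a one-liner: distance in parsecs is the reciprocal of the annual parallax in arcseconds (the half-shift, not the full two-arcsecond yearly swing). A sketch:

```python
def parallax_to_parsecs(parallax_arcsec):
    """Distance in parsecs from the annual parallax (half the full yearly shift)."""
    return 1.0 / parallax_arcsec

LY_PER_PARSEC = 3.26  # rounded, as used above

# Proxima Centauri's parallax is about 0.77 arcseconds:
d_pc = parallax_to_parsecs(0.77)
print(round(d_pc, 2), round(d_pc * LY_PER_PARSEC, 2))  # about 1.3 pc, 4.23 ly
```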
Even though it appears the stars remain in "fixed" locations in the night sky, over a period of decades and centuries the stars do move relative to each other and relative to the Earth. A star catalogue based on the epoch B1950 and one based on the epoch J2000 would reveal minor differences due to these motions.
Another interesting item of note is that the constellations we see are made up of the brightest stars. Even in the same constellation these stars are at vastly different distances from the Earth. Some may be very bright stars that are very distant, and these may appear dimmer than closer stars that are not actually generating nearly as much light. The brightness of a star is called its magnitude. There are two ways astronomers measure magnitude: Apparent Magnitude and Absolute Magnitude.
The Apparent Magnitude is how bright a star appears to us here on the Earth. The Absolute Magnitude is how bright a star would appear if it were exactly ten parsecs away from the Earth. (Close to 33 light years).
Two notes:
1) Apparent magnitude is usually denoted with a small 'm' and absolute magnitude uses a capital 'M'.
2) The magnitude scale is backwards of what you might think: the larger the number, the fainter the object. The brightest star is Sirius with a magnitude of -1.5m, while the somewhat dimmer Vega is defined as 0m, and the planet Venus may become as bright as -4.4m. A typical human eye can just barely see a star with a magnitude of +6m, but Earth-based telescopes may see stars as dim as +18m, and the Hubble can see stars as faint as +30m.
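The magnitude scale follows a simple rule: a difference of five magnitudes is exactly a factor of 100 in brightness, and absolute magnitude relates to apparent magnitude through the distance in parsecs. A small sketch (the Sirius figure is the approximate value from the text):

```python
import math

def brightness_ratio(m_faint, m_bright):
    """How many times brighter the lower-magnitude object appears:
    a difference of five magnitudes is exactly a factor of 100."""
    return 100 ** ((m_faint - m_bright) / 5)

def absolute_magnitude(m_apparent, d_parsecs):
    """Distance modulus: M = m - 5*log10(d / 10 pc)."""
    return m_apparent - 5 * math.log10(d_parsecs / 10)

# Sirius (m ~ -1.5) versus the naked-eye limit (m ~ +6):
print(round(brightness_ratio(6, -1.5)))  # 1000, about a thousand times brighter
```

A star sitting exactly 10 parsecs away has M = m by definition, which is what `absolute_magnitude` returns in that case.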
The Ecliptic Plane
Since the Earth's axis is tilted (23.5 degrees) with respect to the path it sweeps out in its orbit about the Sun, this path projected onto the celestial sphere does not fall on the celestial equator. This imaginary plane is called the ecliptic. Note: the angle between the ecliptic and the equatorial plane is called the Obliquity of the Ecliptic.
This imaginary plane crosses the celestial equator in two places, called the equinoxes. The Vernal Equinox falls in the spring, as the Sun appears to cross the celestial equator going north, and the Autumnal Equinox falls in autumn, when the Sun again crosses it going south. Note: 'vernal' comes from the Latin vernalis, meaning spring. The term 'equinox' relates to the word 'equal', since day and night are each close to 12 hours at the equinoxes.
The points where this plane is the farthest above (north) and below (south) the celestial equator are called the solstices. In the northern hemisphere of the Earth, the most northern point of the ecliptic is called the Summer Solstice and the southernmost is called the Winter Solstice. In the southern hemisphere of the Earth the reverse is true.
The zodiac lies along the plane of the ecliptic. Since the Earth is orbiting the Sun, the Sun appears to follow the plane of the ecliptic, making one complete circle in one calendar year. The name 'zodiac' comes from the Greek meaning 'animal circle', and in fact most of the 12 constellations of the zodiac are named after animals. Note: the paths of the Moon and the other planets fall pretty much on this plane as well. Since it takes 365 days for the Earth to orbit the Sun and there are 360 degrees in a circle, the Sun moves pretty close to 1 degree per day.
Celestial Coordinates
If, on the first day of spring (the Vernal Equinox), a line is drawn from the Sun through the Earth and out to infinity, that line is said to extend to a point referred to as The First Point of Aries. (So named because at one time this line pointed to the first star in the constellation of Aries.)
The celestial sphere is tied to the Earth for its coordinate system. Project the Earth's equator out to infinity and you have the equator of the celestial sphere. Likewise the north and south poles of the Earth point to the north and south poles of the celestial sphere, respectively. This makes it very easy to map the sky referenced to the Earth. This coordinate system is called the Equatorial Coordinate System. It ties in closely with our own geographic coordinate system here on the surface of the Earth.
Note, however, that the geographic coordinate system is fixed upon the surface of the Earth (latitude-longitude), so it rotates with the rotation of the Earth. The celestial coordinate system is fixed to the celestial sphere and appears to rotate due to the Earth's rotation. The equivalent of 'latitude' on the celestial sphere (the angle of an object above or below the celestial equator) is called declination, with zero being on the equator. (This is pretty easy to relate to, since the celestial sphere's equator and poles appear to be fixed, like our own Earth's.) The celestial sphere's analog to 'longitude', called right ascension, is not a fixed reference to the Earth: it is fixed to the stars instead, so it appears to sweep around once a day. Instead of using degrees, right ascension is measured in hours, with the Vernal Equinox used as the zero reference. Since there are 360 degrees in a circle and the Earth rotates about 15 degrees every hour, every hour of right ascension is equivalent to 15 degrees.
A declination of zero is on the equator and a right ascension of zero is at the Vernal Equinox. So on the first day of spring, when the Earth's equator lines up with the line to the First Point of Aries, the Vernal Equinox will have the coordinates of 0 degrees and 0 hours. This has come to define the center point for an Equatorial Sky Chart.
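The hours-to-degrees conversion for right ascension is simple enough to sketch; the 5h 34m 32s coordinate below is just an arbitrary illustrative value, not one taken from the text:

```python
def ra_to_degrees(hours, minutes=0, seconds=0):
    """Right ascension in h:m:s to degrees: 24 h = 360 deg, so 1 h = 15 deg."""
    return (hours + minutes / 60 + seconds / 3600) * 15

print(ra_to_degrees(6))          # 90.0 degrees, a quarter of the way around
print(ra_to_degrees(5, 34, 32))  # ~83.63 degrees (arbitrary illustrative coordinate)
```

The same factor of 15 is why an object on the celestial equator appears to drift one hour of right ascension across the sky every hour of clock time.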
How was all this formed?
We will first start out with the evolution of a single, low-mass star from a molecular cloud to fusion and planetary accretion.
Although dust and gases are found throughout interstellar space, star formation is a relatively rare event, with perhaps only 10 percent of the interstellar medium actually being converted into stellar mass. Interstellar space contains roughly 10 hydrogen atoms per cubic meter at approximately 100 to 10^6 K. In pockets of non-homogeneous molecular gas and dust (figure 1), the densities of matter may be as high as 10^4 to 10^6 atoms per cubic meter (contrast this with atmospheric air at STP ≈ 5.3 x 10^25 atoms per cubic meter). Particulate matter within these regions is thought to include not only atomic and molecular hydrogen (H2), but also helium, carbon monoxide (CO), water ice (H2O), alcohols, ammonia (NH3), formaldehyde (HCHO), formic acid (HCOOH), methane (CH4), and other organics such as aliphatic hydrocarbons. Dust particles effectively block ultraviolet radiation from nearby stars, thus decreasing temperatures within these regions to only about 10 to 20 K.
Radio astronomers use CO emissions at 1.3 and 2.6 mm to identify molecular hydrogen (H2) in these cold molecular clouds. H I regions consist primarily of neutral atomic hydrogen (H) gas with densities of up to 10^7 atoms per cubic meter at temperatures around 100 K, and are detected from 21-cm emissions generated by the quantum spin flip of individual hydrogen electrons. H I regions may also be detected by their Lyman-alpha absorption bands. In contrast, very hot H II and He III regions (up to 10,000 K) within glowing emission nebulae close to O and B spectral type stars (such as the Lagoon Nebula) are detected via infrared radiation.
Note: Super geek alert #1:
The accepted view of star formation requires that an influx of non-thermal energy (shock wave or turbulence) initiate the collapse of molecular clouds. However, some researchers believe that these clouds can become stellar nurseries simply because cooler temperatures allow matter to move more slowly, allowing tiny gravitational and ionic forces between atoms to form complex molecules, leading to gravitational collapse.
Irrespective of the initial mechanism, areas of accumulated matter grow and coalesce, eventually forming a center of mass around which particulate matter and gases orbit, often colliding with other particles or the center of mass itself. As the mass contracts under continuing gravitational attraction, the core begins to heat and infrared radiation is released. Rotational velocity also increases, conserving angular momentum while allowing a continuous inward flow of material. The orbiting mass begins to take on a flattened disk-like shape about the core, which is now more appropriately referred to as a prestellar core or protostar. The protostar may have densities of up to 10^7 atoms per cubic meter at this stage in its evolution (newly formed stars have observed densities of about 10^22 atoms per cubic meter). Interior core temperatures may reach 150,000 K, with surface temperatures of about 3500 K as outward thermal pressure increases to compensate for the inward pull of gravity. At this point, the protostar will appear on a Hertzsprung-Russell diagram as a cool but bright star, as luminosity is still dependent upon gravitational collapse.
As contraction continues, particles that are outside the accretion disk, but still under the influence of gravitational attraction from the protostar, will be drawn into more extreme sinusoidal orbits in and out of the plane of the accretion disk. The chance that these extra accretion disk particles will collide with particles within the disk increases not only with increased density and thickness of the disk, but also with a decreased angle of incidence relative to the plane of the disk. Most particles will ultimately become part of the protostar, but some will enter into a variety of orbits within the accretion disk plane depending on their relative velocities, often forming additional regions or bands of increased density from which protoplanets may later accrete.
While the protostar stage of development may only take a few years, the pre-main sequence stage may take tens of millions of years because continued contraction, accretion and heating of the stellar core proceeds slowly.
Early pre-main sequence stars are often referred to as T Tauri stars. In these very young stars, an excess of ultraviolet radiation is released as dipolar magnetospheric accretion columns form, slowing the rotational velocity of the star in relation to the disk, and transferring mass directly from the disk to the poles of the young star. Accretion rates for these stars have been estimated at about 2 x 10^-8 to 10^-7 solar masses per year. However, mass is also simultaneously ejected from these stars perpendicular to the circumstellar disk along magnetic field lines in very narrow bipolar jets or pulses of material, possibly a mechanism for reducing excess angular momentum. T Tauri stars are hotter but not as bright as protostars, and will appear on a Hertzsprung-Russell diagram closer towards the main sequence as late F through early K spectral types.
Once the internal temperatures of the young star reach about 1 million Kelvin, the proton-proton chain reaction begins, first fusing two protons into one deuterium plus a positron and a neutrino [equation 1].
[1] ^1H + ^1H → ^2H + positron (e+) + neutrino
The positron almost immediately encounters an electron, and the particles annihilate each other, producing two gamma rays. These gamma rays will ultimately migrate to the stellar surface where they will each be emitted as about 200,000 photons of visible light [equation 2]. [2] e+ + e- → 2 gamma rays
Deuterium created via the reaction represented by equation 1 reacts with a proton to create one helium-3 plus another gamma ray [equation 3]. [3] ^2H + ^1H → ^3He + gamma ray
When stellar core temperatures reach 10 million Kelvin, two helium-3 atoms will be fused into one helium-4 atom plus two protons [equation 4], an event that marks the transition to the main-sequence phase of stellar evolution, when energy produced is no longer due to gravitational collapse, but to nuclear fusion. [4] ^3He + ^3He → ^4He + 2 ^1H
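As a rough sanity check on equations [1] through [4], we can tally the mass that disappears when four protons end up as one helium-4 nucleus. This sketch ignores the positron-annihilation and neutrino bookkeeping, and the nuclear masses are standard reference values, not figures from the text:

```python
# Mass bookkeeping for the proton-proton chain: four protons in, one
# helium-4 nucleus out.  Masses in atomic mass units (u) are standard
# reference values; 1 u of mass is 931.494 MeV of rest energy.
M_PROTON = 1.007276   # bare proton, u
M_HELIUM4 = 4.001506  # bare helium-4 nucleus, u
U_TO_MEV = 931.494

mass_in = 4 * M_PROTON
mass_defect = mass_in - M_HELIUM4   # the mass that is converted to energy
energy_mev = mass_defect * U_TO_MEV
percent = 100 * mass_defect / mass_in
print(f"{energy_mev:.1f} MeV released, {percent:.2f}% of the original mass")
```

Only about 0.7% of the hydrogen's mass is converted to energy, which is why a star can burn for billions of years on its hydrogen supply.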
Main sequence stars are typically very stable because of hydrostatic equilibrium, where the inward pull of continued gravitational collapse is balanced by internally generated thermal pressure. Typically, a low-mass star will continue in the main sequence for about 90% of its lifetime, slowly converting hydrogen into helium for several hundred million to several billion years until the supply of hydrogen is exhausted.
Planetary formation from stellar accretion
A model of early solar system formation (and there is evidence supporting it) describes how metal, such as nickel-iron, rock, and ice condensed out from the accretion disk created as our solar system formed. The metals condensed out first (this is why many of the asteroids are nickel-iron), followed by rocky material and ice. These tiny particles then collided, creating small boulders and asteroids.
Once these small asteroids and boulders have enough mass, gravity becomes the driving force, and thus the planets and moons are formed. However, since Jupiter is so large and the total mass of the asteroid belt is so tiny, the material forming the asteroid belt was never "allowed" to form a small planet or moon because of the gravitational perturbations from Jupiter. Remember, the entire asteroid belt has less than one-tenth the mass of our Moon.
Finally the solar wind from the newly formed star (our sun) would blow all of the remaining gas into interstellar space leaving us with the planets, moons, comets, asteroids, etc. circling our little star.
Note: This is a really simplified version. There is much (volumes of data) I did not include.
Since we are talking about the Solar System, I thought I would add a little data about our solar system: :-)
Remember, all planets move in ellipses. A perfectly circular orbit is simply an ellipse with eccentricity e = 0; a parabola has e = 1 and a hyperbola has e > 1. So the closer a planet's eccentricity is to zero, the more circular its orbit.
For the planets, the furthest point from the sun in its orbit is called aphelion and the closest is called perihelion.
All of the planetary distances from the Sun are measured in Astronomical Units (AUs). One AU is the average distance from the Earth to the Sun, which is approximately 93,000,000 miles.
Mercury: e = 0.2056, mean distance = 0.39 AU
Venus: e = 0.0068, mean distance = 0.72 AU
Earth: e = 0.0167, mean distance = 1.00 AU
Mars: e = 0.0934, mean distance = 1.52 AU
Jupiter: e = 0.0483, mean distance = 5.20 AU
Saturn: e = 0.0560, mean distance = 9.54 AU
Uranus: e = 0.0461, mean distance = 19.18 AU
Neptune: e = 0.0097, mean distance = 30.06 AU
Pluto: e = 0.2482, mean distance = 39.44 AU
If you notice, only two planets have a high eccentricity: Mercury and Pluto. Only one of them, Pluto, crosses the mean distance of another planet from the Sun (Neptune's). Briefly, Pluto is closer to the Sun than Neptune when its orbit is near perihelion.
The eccentricity of our planet's orbit is mild; aphelion and perihelion differ from the mean Sun-Earth distance by less than 2%. In fact, if you drew Earth's orbit on a sheet of paper it would be difficult to distinguish from a perfect circle, and that is with e = 0.0167. As for the perfect circle, no orbit will ever be perfectly circular, since the other planets are always "tugging" on each other. I brought up the perfect circle to show that a circle is a very special type of ellipse. When we picture ellipses in our mind, we tend to see really elongated shapes, and when you look at a "map" of the solar system, it is usually drawn from a somewhat side-on perspective, which exaggerates the appearance of the ellipse.
Most of the planets' orbits are so close to circular that drawn on a piece of paper they would look just that. Again, the only two that would be readily noticeable as ellipses are Mercury's and Pluto's.
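The perihelion and aphelion follow directly from the mean distance a and eccentricity e, as r = a(1 - e) and r = a(1 + e). A quick sketch using values from the table above shows why Pluto briefly dips inside Neptune's mean distance:

```python
def orbit_extremes(a_au, e):
    """Perihelion and aphelion from semi-major axis a (in AU) and
    eccentricity e: r_peri = a*(1 - e), r_aph = a*(1 + e)."""
    return a_au * (1 - e), a_au * (1 + e)

for name, a, e in [("Earth", 1.00, 0.0167),
                   ("Neptune", 30.06, 0.0097),
                   ("Pluto", 39.44, 0.2482)]:
    peri, aph = orbit_extremes(a, e)
    print(f"{name}: perihelion {peri:.2f} AU, aphelion {aph:.2f} AU")

# Pluto's perihelion (~29.65 AU) falls inside Neptune's mean distance (30.06 AU).
```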
For satellites orbiting the Earth, we have the added components of atmospheric drag and the solar wind. To complicate matters further, our Earth is not a perfect sphere: it has natural gravity wells due to the distribution of the landmasses, and it is an oblate spheroid rather than a perfect sphere (the difference is only about 15 miles between the equator and the poles). One more rub: long-term measurements taken using a satellite in orbit (the LAGEOS) show the Earth is very slowly re-rounding itself over time.
The other thing that is not readily apparent from most solar system maps is just how far apart the planets really are and also how tiny they are with reference to the solar system.
Before continuing let us digress and talk a little about matter and the Standard Model, which describes how 'stuff' (matter) interacts and exists:
THE STANDARD MODEL:
The best description of how matter and energy interact (sans gravity) is called 'The Standard Model'. It describes the organization of all of the particles and how they interact. The elementary particles are divided into two families called quarks and leptons. Each family consists of six particles, arranged in three generations of two.
Quarks: six, called up, down, charm, strange, top, and bottom. All six quarks are acted upon by gluons and photons. This is because all of them carry electromagnetic charge (u, c, t have a charge of +2/3 e, while d, s, b have a charge of -1/3 e), and all of them carry a color charge. There are three kinds of color charge, which are commonly written as red, green, and blue. Every quark in the universe has one of these charges, and each flavor of quark can have any color charge.
Note: Super geek alert #2:
Because there is one kind of EM charge, there is one photon, but since there are three kinds of color charge, there are eight gluons. Gluons themselves carry both a color charge and an anti-color charge, so you'd think that there would be nine gluons, but the combination red-antired + blue-antiblue + green-antigreen is colorless, so if you define a red-antired gluon and a blue-antiblue gluon, a green-antigreen gluon can be described as a superposition of the other two. Only eight gluons are needed to span the color space.
Leptons: six, called the electron, the muon, the tau, and their neutrinos (the electron neutrino, muon neutrino, and tau neutrino). All quarks and leptons couple to both W and Z bosons. A 'W', for example, transforms an electron into an electron neutrino, or a t-quark into a b-quark. The charged leptons (electron, muon, and tau) also couple to the photon; the neutrinos, carrying no electric charge, do not.
Gravity is not included in the Standard Model; however, it is believed that its exchange particle is the graviton.
THE FOUR FUNDAMENTAL FORCES OF NATURE:
Strong force
Weak force
Electromagnetism (EM)
Gravity
All of the fundamental forces are considered exchange forces; in other words, each force involves an exchange of one or more particles.
The exchange particles are as follows:
Strong: the pion (and others)
Note: Super geek alert #3:
The pion does mediate the inter-nucleon force. That force isn't fundamental, however. The fundamental force is the inter-quark force that binds the quarks into hadrons (such as protons, neutrons and pions), and that is what we usually mean by the strong force, nowadays. The force between hadrons is a residual color dipole interaction that is analogous to the Van der Waals force in electromagnetism.
Let's explore this a bit further:
First, let's take a look at Van der Waals forces:
Atoms and molecules are attracted to each other by two classes of bonds: intramolecular bonds and intermolecular bonds.
The intermolecular bonds are divided into these categories: Van der Waals forces, hydrogen bonds, and molecule-ion attractions.
The intramolecular bonds (which are much stronger than the intermolecular bonds) are divided into these categories: ionic bonding, covalent bonding, and metallic bonds.
We will only concentrate on the Van der Waals Forces.
Van der Waals forces arise from the interaction of the electrons and nuclei of electrically neutral atoms and molecules. How is this possible if these are electrically neutral, I hear you ask? What is going on is that the electrons and nuclei of atoms and molecules (for this description, from here on called 'particles') are not at rest, but in constant motion. Because of this, an electrical imbalance arises (called an instantaneous dipole, or a temporary polarity) in the electrically neutral particle. Two particles in this dipole state will attract. A dipole in one particle can also induce a dipole in an adjoining (nearby) particle. This dipole-dipole attraction is what is known as the Van der Waals force. If the particles' kinetic energies are low enough (and they are close enough together), the repeated action of the instantaneous dipoles will keep them attracted to each other.
One of the interesting things about this is that the more electrons in play, the greater the Van der Waals force. This is why the noble gas krypton liquefies at a higher temperature than the noble gas neon.
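A common textbook model of this dipole-dipole attraction is the Lennard-Jones potential, whose 1/r^6 term represents the Van der Waals attraction. The epsilon and sigma parameters here are placeholders in reduced units, not values for any particular gas:

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones potential: a steep short-range repulsion plus the
    1/r**6 van der Waals attraction.  epsilon (well depth) and sigma
    (particle 'size') are placeholder reduced units, not real gas data."""
    return 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

r_min = 2 ** (1 / 6)                   # the minimum sits at r = 2^(1/6) * sigma
print(round(lennard_jones(r_min), 6))  # -1.0 (the well depth, -epsilon)
print(lennard_jones(0.95) > 0)         # True: closer than sigma, strongly repulsive
```

Particles with kinetic energy below the well depth stay trapped near the minimum, which is the "kept attracted together" condition described above.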
Back to the Standard Model.
A brief background: how does a nucleus stay together when it is packed with positively charged protons? Since like charges repel, you would think that the nucleus would fly apart. The force that keeps this from happening is the strong force. One of the things that was discovered is that the mass of any nucleus is always less than the sum of the individual particles (called nucleons) that make it up. The difference (residual) is due to the 'binding energy' of the nucleus. This binding energy is directly related to the strength of the strong force. Binding energy is a negative energy: if the mass of a nucleus were always less than any sum of its potential components, then it would always take energy to split a nucleus.
This is true for any nucleus below iron. For nuclei above iron, the binding energy becomes less and less; the strong nuclear force creates stable minima in which very heavy nuclei can exist, but these are but local minima sitting high on the electromagnetic hill. A uranium nucleus is heavier than thorium plus helium.
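This binding-energy curve can be checked numerically from standard atomic masses (reference values, not figures from the text). Helium-4 and uranium-238 both sit below the iron peak in binding energy per nucleon:

```python
# Binding energy per nucleon from standard atomic masses (reference
# values, in atomic mass units).  Using the hydrogen-atom mass for the
# protons lets the electron masses cancel to good approximation.
U_TO_MEV = 931.494
M_HYDROGEN = 1.007825   # hydrogen atom, u
M_NEUTRON = 1.008665    # free neutron, u

def binding_per_nucleon(protons, neutrons, atomic_mass_u):
    defect = protons * M_HYDROGEN + neutrons * M_NEUTRON - atomic_mass_u
    return defect * U_TO_MEV / (protons + neutrons)

for name, z, n, m in [("He-4", 2, 2, 4.002602),
                      ("Fe-56", 26, 30, 55.934937),
                      ("U-238", 92, 146, 238.050788)]:
    print(f"{name}: {binding_per_nucleon(z, n, m):.2f} MeV per nucleon")
# Fe-56 comes out highest (~8.8 MeV), the "iron peak" in the curve.
```

Fusion below iron and fission above iron both move nuclei toward that peak, which is why each releases energy.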
So just what is this strong force anyway? The strong force acts on quarks, antiquarks, and gluons. Oh my, another term: QUARKS! After much research, it was discovered that the protons and neutrons in the nucleus were made up of smaller particles called quarks. It turned out that two types of quarks are needed to 'produce' a proton or a neutron, though there are six types of quarks in all. The strong force binds these quarks together to form a family of particles called hadrons, which includes both protons and neutrons.
To simplify this discussion, quarks have a 'color' charge (red, green, or blue). BTW, this is just a convenient way of describing the charge; it is not referring to color as we commonly use the word. Like colors repel and unlike colors attract. There are also antiquarks. A bound quark/antiquark pair (a color and its anti-color) is called a meson. A bound state of three quarks is called a baryon (protons and neutrons fall in this category). Here is the rub: baryonic particles can only exist if their total color is neutral (colorless), i.e., they have a red, green, and blue charge altogether. Both mesons and baryons are "colorless" with respect to the outside world. In baryons, red + blue + green = colorless. In mesons, for example, red + anti-red (or, if you like, red - red) = colorless.
Without getting into too much more detail, quarks can interact, changing color, etc. so long as the total charge is conserved.
The quark interactions are caused by exchanging particles called gluons. There are eight kinds of gluons, each having a specific 'color' charge. The symmetry group of Quantum Chromodynamics is SU(3). In the minimal representation of SU(3), there are three generators: the color charges. In the non-minimal representation, there are 3² - 1 generators: the eight gluons! This was spookily mirrored by Murray Gell-Mann's original (1964) quark theory, which also exploited the SU(3) symmetry. Only this time, the minimal representation was the three light quark flavors (up, down, strange), and the non-minimal representation was Gell-Mann's famous Eightfold Way, which correctly(!) predicted the properties of all the light hadrons, including some that had not yet been discovered.
So back to the original paragraph: neutral (all three colors) hadrons (which include protons and neutrons) can interact via the strong force similarly to the way atoms and molecules interact via the Van der Waals forces.
Electromagnetic (EM): the photon
Weak: the W and Z
Gravity: the graviton
So to sum this up:
The Strong Force:
It is a force that holds the nucleus together against the repulsion of the Protons. It is not an inverse square force like EM and has a very short range. It is the strongest of the fundamental forces.
The Weak Force:
The weak force is the force that induces beta decay via interaction with neutrinos. A star uses the weak force to 'burn' (nuclear fusion). Three processes we observe are proton-proton fusion, helium fusion, and the carbon cycle. Here is an example of proton-proton fusion, which is the process our own Sun uses: two protons fuse; via the weak interaction one of the protons transmutes into a neutron to form deuterium; the deuterium combines with another proton to form helium-3; and two helium-3 nuclei fuse into helium-4, releasing two protons. The weak force is also necessary for the formation of the elements above iron. Due to the curve of binding energy (iron has the most tightly bound nucleus), fusion within a star cannot release energy by forming any element above iron in the periodic table. So it is believed that the higher elements were formed in the vast energies of supernovae. In these explosions, large fluxes of energetic neutrons are produced, which build the heavier elements by bombarding nuclei. This process could not take place without neutrino involvement and the weak force.
Electromagnetism:
The electromagnetic force comprises the force between charges (Coulomb's law) and the magnetic force, both of which are described by the Lorentz force law. Electric and magnetic forces are manifestations of the exchange of photons. A photon is a quantum particle of light (electromagnetic radiation). This particle has zero rest mass, though it still carries energy and momentum. Gravity couples to energy density, which is typically dominated by mass; but even in Newtonian gravity, massless light particles will bend in a gravitational field (the trajectory of a test particle doesn't depend on its mass). The speed of light in a vacuum is a constant and is unattainable by baryonic matter due to the Lorentz transformation. Electromagnetism obeys the 'inverse square law'.
Gravity:
Gravity is the weakest of the forces and also obeys the inverse square law. The force is only attractive and is a force between any two masses. Gravity is what holds and forms the large scale structures of the universe such as galaxies.
Note: Super geek alert #4:
We can test the effect of gravitational waves on orbiting bodies under general relativity (GR). Not with the Earth and sun: any effect there is vanishingly small. Instead, we use binary pulsars, which are systems with two neutron stars revolving about a common center of gravity. We can measure the timing of such systems to a very high degree of accuracy, and the fact is, such systems are unstable! We can measure the distance between two gravitationally bound pulsars to within inches(!), and watch the orbits decay in real time. The decay is caused by gravitational waves, and the GR prediction is confirmed to many decimal places. If the speed of gravitational waves were grossly off, or if they didn't exist somehow, we'd see it.
The Resulting Interstellar Medium
Between the stars and galaxies is mostly empty space. However, this space is not entirely empty: it is filled with a diffuse medium of gas and dust called the Interstellar Medium (ISM). The ISM primarily consists of neutral hydrogen gas (HI), molecular gas (mostly H2), ionized gas (HII), and dust grains. Even though this is considered a very good vacuum, the ISM in our galaxy comprises about five percent of the mass of the visible part (stars, etc.) of our galaxy.
Neutral Hydrogen Gas:
Our own galaxy is filled with a diffuse distribution of neutral hydrogen gas. This gas has a density of approximately one atom per cubic centimeter. One of the features of neutral hydrogen is radio wave production at 21 centimeters due to the spin properties of the atom. This neutral hydrogen is distributed in a clumpy fashion, with cooler, denser regions called 'clouds'.
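The 21-centimeter line corresponds to a radio frequency near 1420 MHz, which follows directly from frequency = c / wavelength (the precise wavelength, about 21.11 cm, is a standard reference value, not from the text):

```python
C = 2.998e8            # speed of light, m/s
WAVELENGTH_M = 0.2111  # hydrogen spin-flip line, ~21.11 cm
freq_mhz = C / WAVELENGTH_M / 1e6
print(f"{freq_mhz:.0f} MHz")  # about 1420 MHz
```

This is why hydrogen surveys are done with radio telescopes tuned around 1420 MHz.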
Molecular Clouds:
Denser than the surrounding regions, clouds of molecular hydrogen and dust are the birthplace of stars. We are unable to detect molecular hydrogen directly, however we can infer its characteristics from other molecules present (usually CO). There have been over 50 different molecules detected in these clouds including NH3, CH, OH, CS, etc. Some molecular clouds can be as large as 150 light years in diameter. There are thousands of these clouds in our galaxy, usually situated in the spiral arms and concentrated towards the center of the galaxy.
Ionized Hydrogen Regions:
Ionized hydrogen (HII) regions are remnants left over from the formation of younger, hotter stars. These produce the more visible nebulae such as the Orion Nebula. O and B class stars recently formed in molecular clouds ionize the gas left over from their formation. This results in the gas being heated to a temperature of about 10,000 K, causing it to fluoresce and produce emission line spectra. Hydrogen atoms absorb photons and are ionized by the 'extra' energy. This, along with other processes such as collisions, produces the emission features of both the hydrogen and helium in the visible nebula.
Interstellar Dust:
Around one percent of the ISM is in the form of tiny grains of dust, approximately the size of a particle of cigarette smoke. This dust blocks the plane of our Milky Way galaxy from our view. We can determine the composition of these dust clouds by the way the dust affects different frequencies of photons. One of the effects of these dust clouds is that they dim the light from distant objects; this dimming is called interstellar extinction. It also reddens the color (interstellar reddening), due to the fact that red light is not scattered as efficiently as blue light. The characteristics of the dust particles vary throughout the galaxy; however, a typical grain of dust is composed of carbon mixed with silicates. Almost all of the elements such as carbon and silicon found in the ISM are found in the dust particles.
Note: Super geek alert #5:
Nearby clusters such as Virgo and Coma possess galaxy distributions that tend to be aligned with the principal axis of the cluster itself. This has also been confirmed by a recent statistical analysis of some 300 Abell clusters, where the effect has been linked to the dynamical state of the cluster. Moreover, the orbits of satellite galaxies in galactic systems like our own Milky Way also demonstrate a high degree of anisotropy, the so-called Holmberg effect, the origin of which has been the subject of debate for more than 30 years. This study presents the analysis of cosmological simulations focusing on the orbits of satellite galaxies within dark matter halos. The apocenters of the orbits of these satellites are preferentially found within a cone of opening angle ~40 degrees around the major axis of the host halo, in accordance with the observed anisotropy found in galaxy clusters. We do, however, note that a link to the dynamical age of the cluster is not well established, as both of our oldest dark matter halos do show a clear anisotropy signal. Further analysis connects this distribution to the infall pattern of satellites along the filaments: the orbits are determined by the environment of the host halo rather than some "dynamical selection" during their life within the host's virial radius.
Enter the Lyman Alpha Forest
There is one spectral line that stands out above all others: the transition between the ground state of hydrogen and its first excited state. This is called the Lyman Alpha line. This energy difference corresponds to a photon with a wavelength of 1216 angstroms.
Because the clouds lie at different distances, they are receding at different relative velocities due to the expansion of the universe. This means that their Lyman Alpha lines, as we see them, lie at different places in the spectrum because of the Doppler Effect. As a result, distant objects show many more Lyman Alpha absorption lines, and at greater redshifts, than nearby objects do.
This enables us to plot the position of the intervening neutral hydrogen between us and stellar objects.
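The mapping from a cloud's redshift to the observed position of its Lyman Alpha line can be sketched in a few lines of Python. This is an illustrative sketch only; the 1216-angstrom rest wavelength is from the text above, and the sample redshifts are made up:

```python
# Sketch: where a cloud's Lyman Alpha absorption line lands in an observed
# spectrum, given the cloud's redshift z. (Illustrative values only.)

LYMAN_ALPHA_REST = 1216.0  # angstroms, rest wavelength of the Lyman Alpha line

def observed_wavelength(z, rest=LYMAN_ALPHA_REST):
    """Cosmological redshift stretches wavelengths by a factor of (1 + z)."""
    return rest * (1.0 + z)

# A cloud at z = 0.1 absorbs near 1338 A; one at z = 2 absorbs near 3648 A.
for z in (0.1, 1.0, 2.0):
    print(f"z = {z}: Lyman Alpha seen at {observed_wavelength(z):.0f} angstroms")
```

Each intervening cloud stamps its own absorption line at its own (1 + z) position, which is what builds up the "forest."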
Note: Super geek alert #6:
Radio astronomers use temperature to describe the strength of detected radiation. Any body with a temperature above -273 deg C (approximately absolute zero) emits electromagnetic (EM) radiation. This thermal radiation isn't just in the infrared but is exhibited across the entire electromagnetic spectrum. (Note: it will have a greater intensity (peak) at a specific region of the EM spectrum depending on its temperature.) For example, for bodies at 2000 K (Kelvin), the radiation is primarily in the infrared region, and at 10,000 K, the radiation is primarily in the visible light region. There is also a direct correlation between temperature and the amount of energy emitted, which is described by Planck's law.
When the temperature of a body is lowered, two things happen. First, the peak shifts in the direction towards the longer wavelengths and second, it emits less radiation at all wavelengths.
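Both effects follow from standard blackbody relations: Wien's displacement law for the shifting peak and the Stefan-Boltzmann law for the total output. A minimal Python sketch (the 5800 K example temperature is my own addition, roughly the Sun's surface temperature):

```python
# Sketch of the two effects: Wien's displacement law gives the peak wavelength,
# and the Stefan-Boltzmann law gives the total emitted power per unit area.
WIEN_B = 2.898e-3  # m*K, Wien displacement constant
SIGMA = 5.670e-8   # W m^-2 K^-4, Stefan-Boltzmann constant

def peak_wavelength_nm(temperature_k):
    """Peak of the blackbody curve, in nanometers, from Wien's law."""
    return WIEN_B / temperature_k * 1e9

def radiated_power_per_area(temperature_k):
    """Total radiated power per square meter, from the Stefan-Boltzmann law."""
    return SIGMA * temperature_k ** 4

# Cooler body: the peak moves to longer wavelengths and total output drops steeply.
for temperature in (5800, 2000):
    print(f"{temperature} K: peak ~{peak_wavelength_nm(temperature):.0f} nm, "
          f"{radiated_power_per_area(temperature):.3g} W/m^2")
```

Note how the T^4 dependence makes the "emits less radiation" effect dramatic: halving the temperature cuts the output by a factor of sixteen.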
This turns out to be extremely useful. When a radio astronomer looks at a particular point of the sky and says that it has a noise temperature of 1500 K, he/she isn't declaring how hot the body (nebula, etc.) really is, but is providing a measurement of the strength of the radiation from the source at the observed frequency. For example, radiation from an extrasolar body may be heated by a nearby source such as a star. If this body is radiating at a temperature of 500 K, it exhibits the same emissions across all frequencies that a local 500 K test source does, so the calculated noise figure will be the same across all frequencies. (Note: this does not take into account other sources of radiation such as synchrotron radiation.)
So, here's the rub. Not only does the source of interest emit thermal radiation, but so do the local environment (ground, atmosphere, etc.) and the equipment (antenna, amplifiers, cables, receiver, etc.) being used to make the measurements. To accurately observe and measure distant sources, the radio astronomer must subtract all of these local environment and detection equipment noise contributions.
In 1963, Arno Penzias and Robert Wilson were working with a horn antenna trying to make it work with as high efficiency as possible for the Telstar project. This antenna was also going to be used for radio astronomy at a later date. They pointed it to a quiet part of the sky and took measurements. When they subtracted all of the known sources of noise, they found approximately 3 K left over. They worked very diligently to eliminate/describe this noise source and were unable to. This mysterious source of noise seemed to be there no matter where they pointed the antenna. What they had discovered was the microwave background produced from the Big Bang. This 3 (closer to 2.7) K microwave background originated approximately 300,000 years after the Big Bang itself had occurred. It has been determined that when these signals originated, the universe had already cooled down to around 3000 K.
Stars Visible from Earth
If you add up all of the stars visible from everywhere on the globe, roughly 6,000 stars are visible to the naked eye. From any given location on a single night, about 2,500 are visible to the discerning eye. Under bright city lights, the number of stars visible to the unaided eye can drop to mere dozens.
Our Sun has an intrinsic or absolute magnitude of about 5. This is the apparent magnitude our Sun would have if it were 32.6 light years away. A star 100 times brighter would have a magnitude of 0; a star 10000 times brighter would have a magnitude of -5; a star 1000000 (i.e. a million) times brighter would have a magnitude of -10.
With the Hubble telescope, using an exposure time of several hours, one can see stars to about 30th magnitude. This is about 10 billion times fainter than our Sun, if it were 32.6 light years away. The brightness of any object falls off as the square of the distance from the observer, so the Hubble telescope could just see our Sun if it were 3.26 million light years away. If you were to replace our Sun with a star a million times brighter, it could be seen about a thousand times further away, i.e., about 3 billion light years.
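The magnitude arithmetic in the last two paragraphs can be checked directly with Pogson's relation (a factor of 100 in brightness equals 5 magnitudes) and the inverse-square law:

```python
import math

def magnitude_difference(flux_ratio):
    """Pogson's relation: each factor of 100 in brightness is 5 magnitudes."""
    return 2.5 * math.log10(flux_ratio)

# 10 billion times fainter than an absolute-magnitude-5 Sun:
faint_limit = 5 + magnitude_difference(1e10)
print(faint_limit)  # 30.0 -- the ~30th-magnitude Hubble limit quoted above

# Inverse-square law: a 10^10 drop in brightness corresponds to
# sqrt(10^10) = 10^5 times the distance, so 32.6 ly -> ~3.26 million ly.
distance_factor = math.sqrt(1e10)
print(distance_factor * 32.6)  # ~3.26 million light years
```

The same two rules also reproduce the earlier figures: 100 times brighter than magnitude 5 gives magnitude 0, and a million times brighter gives magnitude -10.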
In answer to your last question, since this estimate is only for the very brightest stars, and since the distance I obtained is still less than the size of the visible Universe (about 15 billion light years), there are surely many faint stars at great distances which we cannot see.
On to the Earth-Sun system:
It takes one year for the Earth to revolve around the Sun one time and 24 hours to rotate on its axis. Think about this relationship. Not only is the Earth rotating on its axis, it is in motion about the Sun. (I know this is really basic grade-school stuff; however, it will help in visualizing the concepts I am about to explain.) Therefore the Earth moves through 1/365th of its orbit about the Sun every day.
Ok, here is where that visualization will come in handy. A 'day' is described by one complete rotation of the Earth on its axis, measured from noon to noon (when a point on the Earth is pointed directly at the Sun). The term for this is the Mean Solar Day. But here is the rub: the Earth has moved through 1/365th of its orbit during this period of time we call a day. Because the Earth has moved over a tiny bit from where it was the day before, it must rotate a tiny bit more to have the same spot facing the Sun at noon. This tiny bit is slightly less than one degree (the Earth's orbit completes 360 degrees in 365 days). Thus the Earth actually rotates almost 361 degrees, not just 360, to complete a mean solar day.
Now let us think of the celestial sphere we have been chatting about. Remember, the stars appear fixed in one location (at least on a daily basis). This means that one complete revolution of the Earth referenced to a star does not take that little bit of extra time. This 'day' is referred to as a Sidereal Day. It takes approximately four extra minutes for the Earth to bring the Sun, rather than a star, back over the same location on the Earth.
This is the difference between a Sidereal Day (23 hours, 56 minutes) and a Mean Solar Day (24 hours).
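That four-minute difference can be derived in a couple of lines: over one year, the Earth makes exactly one more rotation relative to the stars than it does relative to the Sun. A quick Python check:

```python
# Derive the sidereal day from the solar day. In one year the Earth spins
# 366.24 times relative to the stars but only 365.24 times relative to the
# Sun, so each sidereal rotation is slightly shorter than a solar day.
SOLAR_DAY_S = 86400.0
DAYS_PER_YEAR = 365.2422  # mean solar days in a year

sidereal_day_s = SOLAR_DAY_S * DAYS_PER_YEAR / (DAYS_PER_YEAR + 1)

hours = int(sidereal_day_s // 3600)
minutes = int(sidereal_day_s % 3600 // 60)
seconds = sidereal_day_s % 60
print(f"Sidereal day: {hours}h {minutes}m {seconds:.0f}s")  # ~23h 56m 4s
```

The roughly 236 seconds of shortfall is the "approximately four extra minutes" described above.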
Also, the Earth is tilted on its axis from the plane of the ecliptic by 23.5 degrees. That tilt causes the North Pole to be currently pointed towards Polaris. As the Earth moves around the Sun, its pole stays pointed at Polaris. This tilt is the cause of the seasons we experience. (Note: the tilt varies back and forth between 21.6 degrees and 24.5 degrees approximately every 41,000 years.)
There is also a precession of our pole and it sweeps a complete circle in the sky (think of the Earth as a top wobbling as it rotates) about every 26,000 years. (Hard to explain without a diagram). This gives us different pole stars as the north pole of the Earth sweeps out a circle on the celestial sphere.
There are also a number of other motions that must be taken into consideration over the years, such as the precession of the aphelion. Our Earth's orbit around the Sun is not a perfect circle. It is an ellipse, with the closest point of the orbit called perihelion and the furthest point called aphelion. Currently perihelion occurs in early January, and aphelion falls in early July. However, this is not always the case: the aphelion and perihelion change over the centuries and sweep through the calendar year with a periodicity of around 22,000 years. By the way, the amount a circle is 'squished' (not a scientific term :-)) to create an ellipse is called its eccentricity. If the eccentricity is equal to zero, the orbit is a perfect circle (a special case of an ellipse). An eccentricity between zero and one, not inclusive, describes the elliptical path of an orbit; a highly eccentric orbit has eccentricity close to one. In the case of eccentricity exactly equal to one, the path is a parabola, and eccentricity greater than one describes a hyperbola. Although natural forces tend to circularize most orbits over time, achieving an eccentricity of exactly zero is extremely unlikely in nature.
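The eccentricity rules above reduce to a simple classification, sketched here in Python (Earth's eccentricity of about 0.0167 is a standard value I have added, not a figure from the text):

```python
def conic_type(e):
    """Classify an orbit's shape from its eccentricity e, per the rules above."""
    if e < 0:
        raise ValueError("eccentricity cannot be negative")
    if e == 0:
        return "circle"      # a perfect circle, vanishingly rare in nature
    if e < 1:
        return "ellipse"     # all bound planetary orbits fall here
    if e == 1:
        return "parabola"    # the borderline escape trajectory
    return "hyperbola"       # an unbound flyby trajectory

print(conic_type(0.0167))  # Earth's nearly circular orbit -> "ellipse"
```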
The eccentricity of Earth's orbit is very small. However, even this changes over time: it varies periodically about every 100,000 years. There are also other motions affecting the orbit, caused by the Moon, Jupiter and the Sun; these are called nutations. One of the major nutations has a period of 18.6 years.
This cursory look at the Earth-Sun system leads us into a discussion of the Earth-Moon system, which is vital to understanding the leap-second issue.
Ok, let us take a look at the Moon. :-)
Currently, the evidence points to a catastrophic collision of another body with the Earth not long after the Earth's formation. This collision is what is believed to have formed the Moon. Originally the Moon was much closer to the Earth, and its rotation did not match its orbital period. Over the years the Moon has become tidally locked with the Earth, resulting in the Moon keeping one face to the Earth (its rotation on its axis matches the period of its orbit). This tidal locking will eventually cause the Earth and Moon to keep one face to each other, since the Earth is affected by tides as well. (However, under current stellar evolutionary theory, the Sun will become a red giant before this happens.) A more in-depth discussion of this follows in later paragraphs.
1) How was it formed, 2) what is it made of, and 3) how far away is it are some of the questions that we can begin to answer.
1) How was the Moon formed?
There were at least five major ideas that were proposed as to the formation of the Moon.
Fission - The Moon split off from the Earth.
Capture - The Moon was captured by the gravity of the Earth.
Condensation - The Moon coalesced out of the same 'stuff' the Earth did.
Colliding Planetesimals - The Moon formed from colliding planetesimals during the early formation of the solar system.
Collision - A body collided with the Earth, and the Moon formed from a ring of the Earth's crust material thrown up by that collision.
The evidence points to the collision theory. First, the Moon does not have an iron core. This pretty much rules out the idea that it coalesced from the same cloud of debris that the Earth did. Second, oxygen isotope ratios have been found to differ throughout the solar system. If the Moon had been captured, its oxygen isotope ratio would not be expected to match the Earth's (which it does). Third, looking at the angular momentum and energy required, the theory that the Moon spun off the Earth after the Earth formed does not hold up.
This leaves us with the collision theory as the best model we have for the formation of the Moon. The collision caused a ring of debris from the Earth's crust to form outside the Roche limit. If it had not, tidal forces would not have allowed the Moon we see today to form.
Now for that more in-depth discussion of tidal locking, since the Moon is tidally locked to the Earth. The total angular momentum of the Earth-Moon system, which is the spin angular momentum plus the orbital angular momentum, is constant. (The Sun plays a part also.) Friction of the oceans caused by the tides is slowing the Earth down a tiny bit each year, approximately two milliseconds per century, and causing the Moon to recede by about 3.7 centimeters per year. As the Earth slows down, the Moon must recede to keep the total angular momentum constant. In other words, as the spin angular momentum of the Earth decreases, the lunar orbital angular momentum must increase. Here is an interesting side note: the orbital velocity of the Moon slows down as its orbit grows.
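The two rates quoted above can be extrapolated linearly to get a rough feel for the timescales involved. Real tidal evolution is not linear, so treat this Python sketch as order-of-magnitude only:

```python
# Rough linear extrapolation of the two rates quoted above. Actual tidal
# evolution varies over geologic time; this is only an illustrative sketch.
DAY_LENGTHENING_S_PER_CENTURY = 0.002  # ~2 milliseconds per century
LUNAR_RECESSION_M_PER_YEAR = 0.037     # ~3.7 centimeters per year

def day_length_increase_s(years):
    """Seconds added to the length of the day after `years`, at a constant rate."""
    return DAY_LENGTHENING_S_PER_CENTURY * years / 100.0

def lunar_recession_km(years):
    """Kilometers the Moon recedes after `years`, at a constant rate."""
    return LUNAR_RECESSION_M_PER_YEAR * years / 1000.0

# Over a million years: the day gains ~20 s and the Moon recedes ~37 km.
print(day_length_increase_s(1e6), lunar_recession_km(1e6))
```

Small as the rates are, they compound over astronomical timescales, which is why the eventual mutual lock described above is plausible even though it lies billions of years in the future.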
Another example of tidal locking is the orbital period and rotation of the planet Mercury. What is interesting about this one is that instead of a 1:1 synchronization, where Mercury would keep one face to the Sun at all times, it is actually in a 3:2 spin-orbit resonance (three rotations for every two orbits). This is due to the high eccentricity of its orbit.
There also can be more than two bodies 'locked' to each other. Let's take a look at the moon Io. Io is very nearly the same size as the Earth's moon, approximately 1.04 times its size. There is a resonance between Io, Europa, and Ganymede: Io completes four revolutions for every one of Ganymede and two of Europa. This is due to a Laplace Resonance, a phenomenon in which more than two bodies are forced into a minimum-energy configuration.
There are also examples of tidal locking in the asteroid belt.
First, the asteroid belt has an estimated total combined mass of less than one tenth that of the Earth's Moon. Second, Jupiter has a profound effect on the asteroid belt.
Since Jupiter has a semimajor axis of 5.2 AU (1 AU is the distance from the Sun to the Earth), it ends up with an orbital period of 11.86 years. Since the asteroids are not all at the same distance from the Sun, their orbital periods differ in direct relationship to their distance from the Sun. This results in some of them having an orbital period of one half of Jupiter's, putting those particular asteroids in a 2:1 orbital resonance with Jupiter. The result of this resonance is gaps called the Kirkwood gaps.
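Kepler's third law locates that 2:1 gap. Since T^2 is proportional to a^3, an asteroid with half Jupiter's period orbits at a = 5.2 x (1/2)^(2/3) AU. A quick check in Python:

```python
# Where does the 2:1 Kirkwood gap sit? From Kepler's third law, T^2 is
# proportional to a^3, so a body with period_ratio times the parent's
# period orbits at a = a_parent * period_ratio**(2/3).
A_JUPITER_AU = 5.2  # Jupiter's semimajor axis, from the text

def resonance_distance_au(period_ratio, a_parent=A_JUPITER_AU):
    """Semimajor axis of an orbit whose period is period_ratio times the parent's."""
    return a_parent * period_ratio ** (2.0 / 3.0)

print(resonance_distance_au(0.5))  # ~3.28 AU, the location of the 2:1 gap
```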
The rub is: why did this asteroid belt not form a small planet? The reason is the gravitational force of Jupiter. It perturbs the asteroids, giving them random velocities relative to each other.
Another effect of both Jupiter and the Sun on the asteroid belt is a group of asteroids that both precede and follow Jupiter in its orbit by 60 degrees. These asteroids are known as the Trojans.
2) What is the Moon made of?
From here:
http://lunar.arc.nasa.gov/science/geochem.htm
"Primary elements: The lunar crust is composed of a variety of primary elements, including uranium, thorium, potassium, oxygen, silicon, magnesium, iron, titanium, calcium, aluminum and hydrogen. When bombarded by cosmic rays, each element bounces back into space its own radiation, in the form of gamma rays. Some elements, such as uranium, thorium and potassium, are radioactive and emit gamma rays on their own. However, regardless of what causes them, gamma rays for each element are all different from one another -- each produces a unique spectral 'signature,' detectable by an instrument called a spectrometer. A complete global mapping of the Moon for the abundance of these elements has never been performed.
Hydrogen and helium: Because its surface is not protected by an atmosphere, the Moon is constantly exposed to the solar wind, which carries both hydrogen and helium -- each potentially a very valuable resource. One natural variant of helium, helium-3, is the ideal material to fuel fusion reactions. When scientists develop a more thorough understanding of fusion, and can practically implement such reactions, the Moon will be a priceless resource, since it is by far the best source of helium-3 anywhere in the Solar System."
This pretty much answers the question; are there valuable materials up there?
3) What is the distance to the Moon?
The mean distance to the Moon is approximately 238,800 miles. From past experience, we can design spacecraft to get there in about three days. This is far shorter than the months the early voyages took to the new world.
Final thoughts on the Moon.
So here we have this tremendous resource at our fingertips. Unfortunately (not unlike the early explorers), the initial cost is staggering. However, in the long run it would end up being an invaluable resource for both material and scientific study. One of the big advantages is that the Moon keeps one side facing the Earth. This minimizes communication problems between the two bodies. Also since the backside of the Moon is shielded from the Earth, it would be an ideal spot to place a radio telescope array.
Since we are now talking about orbiting bodies, let us digress just a wee bit further and briefly talk about orbits:
There are different sizes and shapes of orbits. We use the term semi-major axis to measure the size of an orbit. It is the distance from the geometric center of the ellipse to either end of the longest axis, i.e., to the highest (apo) or lowest (peri) point. Apoapsis is a general term for the greatest radial distance of an ellipse as measured from a focus. Apoapsis for an orbit around the Earth is called apogee, and apoapsis for an orbit around the Sun is called aphelion.
Periapsis is a general term for the smallest radial distance of an ellipse as measured from a focus. Periapsis for an orbit around the Earth is called perigee, and periapsis for an orbit around the Sun is called perihelion.
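Given the semi-major axis a and eccentricity e, the two apsis distances follow directly. A small Python sketch (Earth's a and e values here are standard figures I have added, not from the text):

```python
def apsis_distances(a, e):
    """Periapsis and apoapsis radii of an ellipse with semi-major axis a and
    eccentricity e, measured from the occupied focus."""
    return a * (1 - e), a * (1 + e)

# Earth's orbit: a ~ 149.6 million km (1 AU), e ~ 0.0167
peri, apo = apsis_distances(149.6e6, 0.0167)
print(f"perihelion ~{peri/1e6:.1f} million km, aphelion ~{apo/1e6:.1f} million km")
```

This recovers the familiar ~147 million km perihelion (early January) and ~152 million km aphelion (early July) mentioned earlier.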
The terms 'gee' and 'helion' come from the Greek words 'ge' (Earth) and 'helios' (Sun) respectively.
First, let's talk a bit about 'where it is.' An orbit is nothing more than an object falling around another object. Both Kepler and Newton came up with sets of laws that describe this phenomenon.
Kepler's three laws of planetary motion:
1) The orbit of a planet is an ellipse with the Sun at one of the foci.
2) The line drawn between a planet and the Sun sweeps out equal areas in equal times.
3) The squares of the periods of the planets are proportional to the cubes of their mean distances from the Sun.
So what is that telling us? In a nutshell, all orbits are ellipses, and the closer you are to the body you are orbiting, the faster you go (e.g., in a highly elliptical orbit, the satellite or planet's velocity will increase as it approaches the object being orbited and decrease as it gets farther away).
These laws not only apply to planets and satellites, but to any orbiting body.
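A convenient check of the third law: with periods in years and distances in AU, it reduces to T^2 = a^3. This reproduces the 11.86-year Jupiter period quoted earlier:

```python
# Kepler's third law in Sun-centered units (period in years, distance in AU):
# T^2 = a^3, so T = a**1.5.
def orbital_period_years(a_au):
    """Orbital period around the Sun for a body with semimajor axis a_au."""
    return a_au ** 1.5

print(orbital_period_years(1.0))  # Earth: 1 year
print(orbital_period_years(5.2))  # Jupiter: ~11.9 years, matching the text
```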
Note: Super geek alert #7:
Strictly speaking, the statement that the orbited body sits at the focus is not entirely correct. It turns out that both bodies orbit the common center of mass of the two-body system. However, for satellites, the mass of the Earth is so much greater than the mass of the satellite that the effective center of mass is the center of the Earth.
Newton's three laws (and law of gravitation):
1) The first law states that every object will remain at rest or in uniform motion in a straight line unless compelled to change its state by the action of an external force. (Commonly known as inertia)
2) The second law states that force is equal to the change in momentum (mv) per change in time. (For a constant mass, force equals mass times acceleration, F = ma.)
3) The third law states that for every action there is an equal and opposite reaction. In other words, if an object exerts a force on another object, a resulting equal force is exerted back on the original object.
Newton's law of gravitation states that any two bodies attract one another with a force proportional to the product of their masses and inversely proportional to the square of the distance between them.
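In symbols, that is F = G * m1 * m2 / r^2. A small Python sketch that recovers the familiar ~9.8 N of force on one kilogram at the Earth's surface, using standard values of G and the Earth's mass and radius (my additions, not from the text):

```python
G = 6.674e-11       # m^3 kg^-1 s^-2, gravitational constant
M_EARTH = 5.972e24  # kg, mass of the Earth
R_EARTH = 6.371e6   # m, mean radius of the Earth

def gravitational_force(m1, m2, r):
    """Newton's law of gravitation: F = G * m1 * m2 / r**2."""
    return G * m1 * m2 / r ** 2

# Force on a 1 kg mass at the Earth's surface -- the familiar g ~ 9.8 N/kg.
print(gravitational_force(M_EARTH, 1.0, R_EARTH))
```

The inverse-square dependence is the same one that governed the brightness-versus-distance arithmetic earlier in this piece.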
Note: Super geek alert #8:
Actual observed positions did not quite match the predictions of classical Newtonian physics. Albert Einstein later resolved this discrepancy with his General Theory of Relativity. There are four classical 'tests' that cemented General Relativity:
1) In November of 1919, using a solar eclipse, experimental verification of his theory was performed by measuring the apparent change in a star's position due to the bending of light by the Sun's gravity.
2) The changing orientation of the major axis of Mercury's orbit (the precession of its perihelion) not exactly matching classical mechanics.
3) Gravitational Redshift
4) Gravitational Time Dilation
So what is all this trying to tell us? Planets, satellites, etc. orbit their parent bodies in predictable trajectories, allowing us to 'know' where they will be at any given time. A set of coordinates showing the location of one of these objects over a period of time is called its ephemeris.
FINALLY, let us get to time and leap seconds! Historically, time has been measured by the rotation of the Earth on its axis and the time it takes the Earth to revolve once about the Sun (a year). However, neither of these is uniform enough for precise calculations.
One of the units of time is called the second. It used to be defined as 1/86,400 of a Mean Solar Day. This was good enough for early calculations, but don't forget that the Earth is slowing down due to tidal forces, so that definition changes over time. After a number of intermediate steps, the second was finally redefined as:
The duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium-133 atom. This defines atomic time; Coordinated Universal Time (UTC) is an atomic time scale kept in step with the Earth's rotation.
Since the Earth is slowing down by approximately 1.4 milliseconds per day per century, this deceleration causes the Earth's rotational time to drift from atomic time. The time scale based on the true (instantaneous) rotation of the Earth is called UT1 (a non-uniform time scale). Over a period of a year, the difference between it and UTC can approach a full second. Since the Earth's rotation is non-uniform, it is monitored continuously. If the difference between UT1 and UTC approaches 0.9 seconds, a leap second is added to (or subtracted from) UTC to keep it in step with the Earth's rotation. So far all of the leap seconds have been positive, which correlates with the Earth slowing due to tidal braking.
Note: Since GPS time does not have leap seconds added or subtracted, it diverges from UTC with every leap second added to UTC. Currently the difference is 13 seconds. This can cause some consternation when flying a satellite or spacecraft that uses GPS. If your ephemeris is calculated in GPS time and you receive a 'vector' in UTC time, it will be off by 13 seconds. You cannot just add or subtract 13 seconds and press on. The rub is that not only has the satellite moved 13 seconds in-track, the Earth has rotated underneath by 13 seconds (cross-track) as well. This is especially noticeable for high-inclination orbits. Vectors have to be recalculated when translating between GPS and UTC.
An interesting note is that the last time a leap second was needed was clear back in 1999. Remember, the deceleration of the Earth is not uniform. There may be a number of factors that cause this non-linearity, such as snow and ice loads, earthquakes, and others we haven't even thought of. This could account for the long delay we have had between leap seconds. It certainly is not a permanent condition: the Earth will continue to slow down, and the deceleration will still vary. One final item: there is an ongoing debate about whether to do away with leap seconds altogether and just run on uniform atomic time. The problem with this is that, over an extended period of time, the hours would no longer be tied to the solar day, and noon might well end up in the evening. Another suggestion is to redefine the second to more closely match the current rotation of the Earth. This too has its problems, as the second would require 'redefining' periodically as the Earth continues to slow down.
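The leap-second bookkeeping follows from simple arithmetic: if each day is longer than the previous one by a tiny, constant increment, the accumulated clock offset grows quadratically. A sketch using the ~1.4 ms/day-per-century figure from above (the real rotation rate is far more irregular, so this is only illustrative):

```python
# Why a tiny deceleration matters: a linearly growing daily excess in the
# length of the day accumulates into a quadratically growing clock offset.
DAYS_PER_CENTURY = 36525
# Seconds of extra day length gained per day, from ~1.4 ms/day per century:
EXCESS_GROWTH = 1.4e-3 / DAYS_PER_CENTURY

def accumulated_offset_s(days):
    """Total clock offset: the sum 1+2+...+n of a linearly growing excess."""
    return EXCESS_GROWTH * days * (days + 1) / 2

# Over one century the offset reaches ~25 s, even though no single
# day in that span is more than 1.4 ms too long.
print(accumulated_offset_s(DAYS_PER_CENTURY))
```

This is the sense in which leap seconds keep piling up on average even though the rate of slowing itself is minuscule.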
Now that we are this far along, how about a little chat on satellites and spacecraft:
Satellites (and spacecraft) are incredibly precise machines with exquisite craftsmanship. The life of a satellite is often computed by the onboard fuel requirements. For geostationary satellites, periodic maneuvers (delta-Vs) must be accomplished to keep them on station. This is also required for many lower orbiting satellites as well. For an orbit plane change (move it into a different orbit), mass must be ejected to move the satellite.
Since the Earth is not a perfect sphere (it is an oblate spheroid), satellites drift from their predicted positions due to the Earth's non-spherical shape. At low Earth orbits, the atmosphere also creates drag on the satellite, causing a further drift (perturbation) in its orbit. At higher altitudes, such as a geosynchronous orbit, the solar wind and effects from the Moon are more pronounced. This requires us to update the ephemeris periodically.
Note: Super geek alert #9:
The Hohmann transfer orbit is the most energy efficient (minimum energy solution) way of getting from one circular orbit to a higher or lower circular orbit. This type of transfer orbit is used by interplanetary spacecraft to travel to the other planets in our solar system.
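The two burns of a Hohmann transfer can be computed from the vis-viva equation. A hedged Python sketch for a low-Earth-orbit-to-geostationary transfer (the orbit radii and Earth's gravitational parameter are standard values I have supplied, not figures from the text):

```python
import math

MU_EARTH = 3.986e14  # m^3/s^2, Earth's gravitational parameter

def hohmann_delta_vs(r1, r2, mu=MU_EARTH):
    """Delta-v for the two burns of a Hohmann transfer between circular orbits
    of radii r1 and r2, using the vis-viva equation for the transfer ellipse."""
    v1 = math.sqrt(mu / r1)                      # speed in the initial circular orbit
    v2 = math.sqrt(mu / r2)                      # speed in the final circular orbit
    a_t = (r1 + r2) / 2                          # transfer ellipse semi-major axis
    v_peri = math.sqrt(mu * (2 / r1 - 1 / a_t))  # transfer-orbit speed at r1
    v_apo = math.sqrt(mu * (2 / r2 - 1 / a_t))   # transfer-orbit speed at r2
    return v_peri - v1, v2 - v_apo

# Low Earth orbit (~6,678 km radius, i.e. ~300 km altitude) to the
# geostationary radius (~42,164 km):
dv1, dv2 = hohmann_delta_vs(6.678e6, 4.2164e7)
print(f"burn 1: {dv1:.0f} m/s, burn 2: {dv2:.0f} m/s")  # totals ~3.9 km/s
```

The first burn stretches the circular orbit into the transfer ellipse; the second, half an orbit later at apoapsis, circularizes it at the new radius.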
Now that we have a better understanding of its orbital position, we need to concentrate on its pointing (Attitude Control).
Why do we need to worry about pointing? If the satellite has solar panels (arrays), they need to point towards the Sun to provide power. Sensors need to point at their respective targets (a star sensor at stars, a sun sensor at the Sun, etc.). Thermal and possible contamination considerations must also be taken into account when pointing.
Remember for every action there is an equal and opposite reaction. So if I spew mass (jet of gas out of a thruster nozzle), the satellite will move in the opposite direction. Also if I spin a wheel onboard the satellite, the result will be the satellite spins in the opposite direction.
Since fuel is precious and usually cannot be replenished (such non-replaceable items are called consumables), other methods of pointing were devised that do not require mass ejected from the satellite. Spinning reaction wheels were one. If you have orthogonal reaction wheels, just by spinning them you can provide precise pointing. Unfortunately, external forces (perturbations) add unwanted momentum to the wheels. To compensate for this (unload momentum from the wheels), I have seen both low-level monopropellant jets and torque rods used.
Note: Super geek alert #10:
A monopropellant is a propellant that does not require an oxidizer to function. Usually monopropellants are composed of a liquid compound called hydrazine (N2H4). When this liquid comes in contact with a catalyst, it decomposes into gaseous ammonia (NH3), nitrogen and hydrogen. This gas is then ejected (fired through a jet/nozzle) to provide thrust for the satellite or spacecraft.
An ingenious method of unloading momentum without the use of fuel was devised using simple electromagnets. Remember, the Earth is surrounded by a magnetic field (which is why your compass works). If you attach orthogonal electromagnets to your satellite and turn them on, the resultant field interacts with the Earth's field, causing a torque on the satellite. These are what are known as torque rods.
Since the reaction wheels, gyros, and torque rods all work using electricity and the solar arrays provide that electricity, theoretically the life of the satellite is indefinite. Unfortunately, there are degradations of the thermal coatings, blankets, sensors, and failures of both the gyros and reaction wheels that ultimately limit the life of any satellite.
Over a period of time, these degrade to the point that the satellites can no longer function within design spec. At some point, you either have to replace the satellite, repair it, or say farewell.
Since there is often confusion about geosynchronous orbits, here is a brief discussion. A geosynchronous orbit is an orbit whose period (a single revolution) is equal to the time it takes the Earth to complete one revolution about its axis (one sidereal day). A sidereal day is measured with respect to the stars, as opposed to the Sun (one solar day), and is approximately 23 hours and 56 minutes. The semi-major axis for a circular orbit with this period is approximately 42,164 kilometers, giving a mean altitude of approximately 35,790 kilometers above mean sea level. One of the unique features of this orbit is that if the inclination is zero (the orbit stays over the equator) and the orbit is circular, the orbiting object stays over the same location on the Earth, because it is moving at the same speed as the Earth is turning under it. This special type of geosynchronous orbit is called a geostationary orbit (stationary with respect to the surface of the Earth). As the inclination of a geosynchronous orbit increases, the ground trace of the orbit plots a figure-eight (8) pattern on the Earth.
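The 42,164 km figure can be recovered from Kepler's third law and the length of the sidereal day. A quick Python check (Earth's gravitational parameter is a standard value I have supplied):

```python
import math

MU_EARTH = 3.986e14       # m^3/s^2, Earth's gravitational parameter
SIDEREAL_DAY_S = 86164.1  # seconds, ~23 h 56 min

# Kepler's third law rearranged for the semi-major axis:
# a = (mu * T^2 / (4 * pi^2)) ** (1/3)
a = (MU_EARTH * SIDEREAL_DAY_S ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
altitude_km = (a - 6.378e6) / 1000  # height above the equatorial radius

print(f"a = {a/1000:.0f} km, altitude = {altitude_km:.0f} km")
```

The result lands on the ~42,164 km radius and ~35,790 km altitude quoted above.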
A more in depth discussion of geostationary orbits
First, from the above paragraph you may have deduced that a geosynchronous orbit is not necessarily a geostationary orbit, although a geostationary orbit must be a geosynchronous orbit. These terms are often used interchangeably, since most geosynchronous satellites are also geostationary, but that is not always the case. It is the zero (0) degree inclination that makes a geosynchronous orbit that special orbit called the geostationary orbit.
I used the term sidereal day for describing geosynchronous orbits. How do we measure a day? Usually we measure it in reference to the sun being in the same position from one day to the next (i.e. noon to noon). However, that is not the same time it takes the Earth to rotate once on its axis. Remember the Earth is also in orbit around the sun requiring it to travel just a tiny bit further in its rotation for the same spot on the Earth to be pointing towards the sun each day. This is the difference between the Mean Solar Day (our normal 24 hour day) and the Sidereal Day. The difference is approximately 4 minutes per day.
A geosynchronous orbit must be synched to the actual rotation period of the Earth (the sidereal day). Even though a satellite is placed in a near-geostationary orbit upon launch, there are forces that act upon the satellite to increase its orbital inclination. Remember, it is an inclination of zero (0) that makes a geosynchronous orbit geostationary. The primary cause is that the equatorial plane is not coincident with the ecliptic, so both the Sun and the Moon slowly increase the satellite's orbital inclination over time. Also, since the Earth is not a true sphere, geosynchronous satellites drift (in-track) towards two stable equilibrium points over the Earth's equator. This is why 'station keeping' is required for geostationary satellites. Satellites are typically maintained within a band of approximately 0.10 degrees. When station keeping is no longer possible (all the fuel is used) or there is a satellite malfunction, most geostationary satellites are boosted into a higher orbit (an end-of-life orbit boost) so they will not drift into an area where another geostationary satellite is operating.
Here is another non-intuitive repositioning delta-V. For a geostationary satellite, you fire the thrusters in the same direction you want the vehicle to move (the exhaust goes out that way, so the thrust pushes the vehicle the opposite way). What is happening is you are changing the velocity of your vehicle in a way that directly correlates to Kepler's third law. So if you fire the thrusters away from (behind) the direction of flight, causing the satellite to increase its altitude just a tiny bit, its velocity with respect to the velocity of the surface of the Earth will actually be slower. This allows the Earth to turn underneath it, and the satellite's subpoint (the point directly below the satellite) will move westward (backwards, in the same direction you fired the thrusters).
If you fire the thrusters in the direction of flight (eastward), the satellite will drop to a lower orbit, causing it to speed up relative to the surface, and its subpoint will once again move in the direction you fired the thrusters.
With only two firings (this is a Hohmann transfer orbit BTW) you can reposition a geostationary satellite.
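The drift rate produced by such a small burn can be estimated to first order from the mean motion n = sqrt(mu/a^3): a fractional change in semi-major axis gives a proportional change in orbital rate relative to the rotating Earth. A rough sketch (the 35 km offset is just an illustrative figure, not a real station-keeping value):

```python
A_GEO = 42164.0e3            # geostationary semi-major axis, m

def drift_deg_per_day(delta_a_m):
    """Approximate longitudinal drift rate after raising/lowering a GEO orbit.

    A higher orbit (delta_a > 0) has a longer period than one Earth rotation,
    so the subpoint drifts westward (negative sign here means westward).
    To first order, dn/n = -(3/2) * da/a, and n is 360 deg per sidereal day.
    """
    n_deg_per_day = 360.0
    return -1.5 * (delta_a_m / A_GEO) * n_deg_per_day

print(f"{drift_deg_per_day(+35e3):+.3f} deg/day (raised 35 km -> drifts west)")
print(f"{drift_deg_per_day(-35e3):+.3f} deg/day (lowered 35 km -> drifts east)")
```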
Is Space travel really worth the effort?
Think back a little more than 500 years. Many people still believed the world was flat, the world was only 6000 years old, the Earth was at the center of the universe, etc. However, the time was ripe for not only huge leaps in knowledge, but in exploration as well.
Europe was changing. Natural resources and newly available exotic goods (especially from the Far East, such as spices, drugs, silk and china) were all the rage. During this time land-based trade routes were established; however, they were long, costly, and difficult. Water routes were attempted, including one funded by Ferdinand and Isabella in 1492. It so happened that a trade route to the orient was not forthcoming; however, an entire new continent was "discovered" (at least to the Europeans).
Here is where it gets interesting. Countries in Europe (mainly Spain, France, and England) looked to this new land not for colonization, but for its abundance of natural resources. Think of what came back from the new world: sugar cane, rubber, gold, silver, furs, timber, cocoa, etc. So not only were these voyages of discovery, but voyages that ultimately led to trade and wealth.
It took close to 100 years from the voyages of Columbus to the establishment of colonies. Were they able to produce all of the things needed for a society? Hardly. However, with natural resources being shipped back to the old world and manufactured goods shipped to the new, it turned out to be quite profitable for the nations (and companies; the East India Rubber company comes to mind) involved.
What I am driving at is that you don't "need" all of the 4000 years of technological infrastructure to produce a successful colony. If we do establish a lunar colony, the raw material from the lunar regolith may generate enough wealth to make a lunar colony worth the effort.
I am looking back over some questions brought to mind on this thread. One of those was:
Just how are you going to create, on the moon, the infrastructure to build a space ship to take advantage of its low gravity?
When the first sailing ships visited the new world, did they make those ships there? No, but there were enough raw materials to build them, and eventually they did, as those colonies flourished into the great metropolises we have today.
What about the rest of the solar system? Can we travel from one planet to another without requiring non-existent technology such as antigravity? The short answer is yes.
Basically, to get from one planet to the other, a Hohmann transfer orbit can be used. In simple terms (for planetary missions) this is an orbit around the Sun that touches the orbits of the two planets in question. First you launch your vehicle into a stable orbit around the planet you are leaving. Then you accomplish a delta-V to insert the spacecraft into the transfer orbit. At your destination you accomplish another delta-V to place the spacecraft into orbit around the final planet. This type of transfer minimizes the acceleration required at both ends of the orbit to match speeds with the planets involved.
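For a concrete feel, here is a back-of-the-envelope Hohmann calculation for an Earth-to-Mars transfer using the vis-viva equation. It treats both planets as being in circular, coplanar orbits and ignores their own gravity wells, so the numbers are only ballpark:

```python
import math

MU_SUN = 1.32712440018e20    # Sun's gravitational parameter, m^3/s^2
AU = 1.495978707e11          # one astronomical unit, m

def hohmann(r1, r2, mu=MU_SUN):
    """Delta-v for each burn of a Hohmann transfer between circular orbits."""
    a = (r1 + r2) / 2.0                            # transfer-ellipse semi-major axis
    v1 = math.sqrt(mu / r1)                        # circular speed at departure radius
    v2 = math.sqrt(mu / r2)                        # circular speed at arrival radius
    v_dep = math.sqrt(mu * (2.0 / r1 - 1.0 / a))   # vis-viva speed at perihelion
    v_arr = math.sqrt(mu * (2.0 / r2 - 1.0 / a))   # vis-viva speed at aphelion
    tof = math.pi * math.sqrt(a**3 / mu)           # half the transfer orbit's period
    return v_dep - v1, v2 - v_arr, tof

dv1, dv2, tof = hohmann(1.0 * AU, 1.524 * AU)      # Earth -> Mars, roughly

print(f"Departure burn: {dv1/1000:.2f} km/s")      # ~2.9 km/s
print(f"Arrival burn:   {dv2/1000:.2f} km/s")      # ~2.6 km/s
print(f"Time of flight: {tof/86400:.0f} days")     # ~259 days
```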
If for some reason there is not enough energy to produce a direct transfer orbit, another planet may be used (gravity assist) to add the energy required to accomplish the mission. Galileo was sent to Jupiter using this method. (After the Challenger disaster, an IUS was substituted for a Centaur, reducing the total energy, which subsequently forced a redesign of the mission profile to use Venus as a gravity assist.)
One of the disadvantages of a Hohmann transfer orbit is that it is quite slow, especially for the outer planets. So a series of gravity assists can be used. Not only does the vehicle get to the destination in a shorter time, but you can also "explore" the planets you are gravity-assisting from in the process. Voyager did this very thing as it left the solar system.
Since we are talking now about colonizing the Solar System, we should take a moment and talk about the Lagrange Points.
There are five Lagrange points in the Earth-Moon system. The one that is most talked about is the L5 point; in fact, there was a society, the L5 Society, named after that point. The L4 and L5 points are stable, meaning no periodic delta-Vs would be needed for repositioning. L5 was chosen since it trails the Moon, and it was thought that the Moon would "sweep up" any debris that might be a hazard to a station there.
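The geometry of the two stable points is simple: L4 and L5 sit at the lunar distance, 60 degrees ahead of and behind the Moon, each forming an equilateral triangle with the Earth and the Moon. A quick sketch of that geometry (Earth at the origin, Moon on the +x axis, distances in km):

```python
import math

D_EM = 384_400.0   # mean Earth-Moon distance, km

def triangular_point(lead_deg):
    """Position of a triangular Lagrange point relative to the Earth.

    L4 leads the Moon by 60 degrees, L5 trails by 60 degrees; both lie
    at the lunar distance, completing an equilateral triangle.
    """
    a = math.radians(lead_deg)
    return D_EM * math.cos(a), D_EM * math.sin(a)

l4 = triangular_point(+60.0)   # leads the Moon
l5 = triangular_point(-60.0)   # trails the Moon (the L5 Society's pick)

print(f"Moon: (384400, 0) km")
print(f"L4:   ({l4[0]:.0f}, {l4[1]:.0f}) km")
print(f"L5:   ({l5[0]:.0f}, {l5[1]:.0f}) km")
```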
SETI
Moore's law has affected the Drake equation in ways we don't even know yet. I personally think the Fermi paradox is pure BS and not well thought out; however, the Drake equation seems to have stood up to scrutiny. SETI (at least the current trend) is searching for extremely narrowband carrier signals that Doppler-shift due to planetary rotation. The Doppler shift is extremely important, since if it is not there, we know the signal is either terrestrial or an artifact of the equipment itself. The other thing that is very important is the two-antenna approach. If two antennas, separated by a thousand miles, were pointed at the same patch of sky, a satellite could not "spoof" the system. First, the likelihood of it being within the footprint of both antennas is exceedingly small, and the Doppler characteristics between the two antennas would rule it out if such a thing happened.
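The size of that rotation-induced Doppler sweep is easy to estimate: the shift is just f*v/c, where v is the planet's equatorial rotation speed. A sketch for a transmitter on an Earth-sized planet (the 1420 MHz figure is illustrative; it is the hydrogen-line frequency many searches favor):

```python
import math

C = 299_792_458.0            # speed of light, m/s

def rotational_doppler(freq_hz, radius_m, period_s):
    """Maximum line-of-sight Doppler shift from equatorial rotation alone."""
    v = 2.0 * math.pi * radius_m / period_s    # equatorial rotation speed, m/s
    return freq_hz * v / C

# Earth-sized planet: 6378 km radius, one rotation per sidereal day
shift = rotational_doppler(1.420e9, 6.378e6, 86164.0)
print(f"Doppler sweep: +/- {shift:.0f} Hz")    # a couple of kHz
```

A shift that sweeps by kilohertz as the planet rotates is exactly the signature a terrestrial signal or equipment artifact would lack.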
All that said, I agree that advances in communications technology can make a search futile for many types of broadcasts. Frequency-hopping spread spectrum and the like will make it far harder to detect a tool-building species that uses radio (EM). To be fair to the other side, there is another factor in this conjecture. A race progresses along and figures out that the electromagnetic spectrum is the only real practical method of long-range communications, so high-powered transmitters are built while this technology is in its infancy. As the engineering and science of radio advance, they figure out that tight beams, spread spectrum, synthetic apertures, frequency hopping, etc. are a way of not only saving power, but also bandwidth. So for the first 50 years they have been "bleeding" EM into space across a huge range of frequencies, into an ever-increasing sphere of radio noise. However, due to technological advances, the RF being bled into space quiets down dramatically.
Now, let's jump a few years. This race has expanded off its initial planet and is exploring the solar system it resides in. (In my humble opinion, star travel still remains firmly in the realm of sci-fi.) Somehow they have to communicate, so again high-power transmitters are employed. Light is not out of the question; however, microwave is easy and cheap, has looser pointing-accuracy requirements, and won't be drowned out by the star. So suddenly this race is again radiating RF into the universe. According to this scenario, a race can emit RF, then grow silent for a time, and then restart emitting RF.
Most SETI searches are done with computers looking at millions of frequencies simultaneously. Also, SETI is not looking for, nor is it expecting, any modulation. That would long ago have been lost in the interstellar medium (ISM). All that can reasonably be expected to be detected is a faint signal from the narrowband carrier itself. In fact, due to the signal-to-noise (S/N) characteristics, the narrower the band being searched, the better. Some searches are looking for signals no wider than 0.8 Hz.
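The narrower-is-better point follows from the thermal-noise formula P = kTB: the noise power competing with a carrier in a channel scales linearly with that channel's bandwidth. A rough sketch (the 25 K system temperature is an assumed, illustrative figure for a quiet receiver):

```python
import math

K_B = 1.380649e-23           # Boltzmann constant, J/K

def noise_power_dbm(t_sys_k, bandwidth_hz):
    """Thermal noise power kTB within a given bandwidth, expressed in dBm."""
    p_watts = K_B * t_sys_k * bandwidth_hz
    return 10.0 * math.log10(p_watts * 1000.0)   # convert W -> mW, then to dB

T_SYS = 25.0                 # assumed system noise temperature, K
for bw in (1e6, 1e3, 0.8):   # 1 MHz, 1 kHz, and a 0.8 Hz channel
    print(f"{bw:>10} Hz -> noise floor {noise_power_dbm(T_SYS, bw):7.1f} dBm")
```

Going from a 1 MHz channel down to 0.8 Hz drops the noise floor by about 61 dB, which is why a carrier far too faint to see in a wide band can still stand out in a sufficiently narrow one.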
Arrrggg! I hate HTML copy and paste from MS Word.