I have a question. Do you know how they did this? Seems to me they either:
A. Took the spectrum of the whole star's light with the planet in front of it & subtracted the spectrum they'd get without the planet in front of it, & the difference is (should be) the set of colors absorbed by the planet's atmosphere (there's a rough sketch of this idea after option C below), or
B. Were able to mask out the part of the star's image that the planet is not passing in front of, & analyzed that spectrum directly.
Are they able to get a star to show up as more than just a single pixel of light? I.e. how big an image can they produce of the star itself?
And if so, have they ever been able to get a picture of a starspot (a sunspot on another star)?
C. Looked at the whole spectrum and picked out absorption lines that exhibited a Doppler wobble with respect to the star's own lines as the planet moved around the star.
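For what it's worth, here's a minimal sketch of what option A amounts to in practice, assuming you already have calibrated spectra (flux vs. wavelength) taken in and out of transit. Everything here (the array names, the wavelength range, the noise level) is made up purely for illustration:

```python
import numpy as np

# Hypothetical inputs: a wavelength grid (nm) and calibrated stellar spectra,
# one row per exposure, taken while the planet is / is not in front of the star.
wavelength = np.linspace(580.0, 600.0, 2000)              # nm, around the Na D lines
in_transit = np.random.normal(1.0, 1e-3, (20, 2000))      # placeholder data
out_of_transit = np.random.normal(1.0, 1e-3, (40, 2000))  # placeholder data

# Average the exposures to beat down the noise.
f_in = in_transit.mean(axis=0)
f_out = out_of_transit.mean(axis=0)

# Option A in one line: the fractional dimming as a function of wavelength.
# Where the planet's atmosphere absorbs (e.g. the Na D doublet near 589 nm),
# the planet blocks slightly more starlight, so the dip is slightly deeper.
transit_depth = 1.0 - f_in / f_out

print("mean transit depth across the band:", transit_depth.mean())
```

The catch is that the extra absorption from the atmosphere is tiny, of order one part in ten thousand of the starlight, so the subtraction only works if the in-transit and out-of-transit spectra are calibrated against each other extremely well.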
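And for option C, a quick back-of-the-envelope calculation shows why it's even thinkable to separate the planet's lines from the star's in velocity. The numbers below are generic hot-Jupiter values assumed for illustration, not the measured ones for this system:

```python
import math

# Rough hot-Jupiter numbers (assumed for illustration, not measured values).
G = 6.674e-11            # m^3 kg^-1 s^-2
M_star = 2.2e30          # kg, roughly one solar mass
M_planet = 1.3e27        # kg, roughly 0.7 Jupiter masses
P = 3.5 * 86400.0        # s, an orbital period of a few days

# Circular-orbit semi-major axis from Kepler's third law.
a = (G * (M_star + M_planet) * P**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

# Orbital speed of the planet, and the star's reflex speed about the barycentre.
v_planet = 2 * math.pi * a / P
v_star = v_planet * M_planet / M_star

print(f"planet lines sweep through roughly +/- {v_planet / 1000:.0f} km/s over an orbit")
print(f"the star's own lines wobble by only about +/- {v_star:.0f} m/s")
```

A planetary line sliding back and forth by well over a hundred km/s against stellar lines that barely move is separable in principle; in practice the planet's lines are far below the noise in any single spectrum, which is why it's so much harder than it sounds.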
Astronomers make most of their discoveries with computers these days. It's similar in a way to looking for a signal from a galactic civilization, except in this case the signal they found was sodium, which is one of the easier elements to pick out. It will be ten years before NASA can launch its hyperlarge planet-finder telescope, assuming it somehow retains the budget to do so. At that point they will be able to image extra-solar planets, barely.
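On the sodium point: one common way a signal like that gets quantified is to compare the transit depth in a narrow band centred on the sodium D doublet with the depth in comparison bands on either side. A minimal sketch, reusing the same kind of made-up arrays as above (the band edges and placeholder numbers are assumptions, not the real analysis):

```python
import numpy as np

# Placeholder arrays of the same shape as in the earlier sketch.
wavelength = np.linspace(580.0, 600.0, 2000)           # nm
transit_depth = np.random.normal(0.015, 1e-4, 2000)    # placeholder depths

def band_depth(lo, hi):
    """Mean transit depth inside one wavelength band (nm)."""
    mask = (wavelength >= lo) & (wavelength <= hi)
    return transit_depth[mask].mean()

# Narrow band spanning the Na D doublet (588.995 and 589.592 nm),
# bracketed by comparison bands on either side.
na_d = band_depth(588.7, 589.9)
blue = band_depth(586.0, 588.0)
red = band_depth(590.5, 592.5)

# A detection shows up as an excess of order one part in ten thousand:
# the transit is slightly deeper at the sodium lines than in the
# neighbouring continuum bands.
excess = na_d - 0.5 * (blue + red)
print(f"excess depth at Na D: {excess:.2e}")
```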