Artificial intelligence and why astronomers don’t look through a telescope anymore


As a child I had the opportunity to use the two-meter Faulkes Telescope at Siding Spring, Australia. I remember the thrill of watching the telescope move via a live webcam: I would input sky coordinates, and over the course of a minute the telescope would slowly tilt toward the object. I couldn't wait to become an astronomer and spend countless nights at the actual telescope sites in Hawaii, Chile and Spain, all home to powerful telescopes I wished to use one day.

But little did I know, the era of manual observations was coming to an end.

My first professional research experience came when I was hired as an intern at Las Cumbres Observatory, headquartered in Santa Barbara, California, my hometown. Though it is called an observatory, no telescope is actually located there. Las Cumbres operates more than 30 instruments across the world, with the mission of always having a telescope in the dark, ready to make observations at a moment's notice.


Their telescope system is highly streamlined and is a prime example of how astronomical observations function today. The telescopes are all controlled robotically, requiring almost no human intervention beyond the select engineers responsible for instrument maintenance. Scheduling is automated as well: while astronomers can request observations on certain nights, the observations are ultimately booked and controlled by an automated system programmed to select the best targets according to instrument availability and weather conditions across all sites, an analysis performed in mere seconds.

It is becoming increasingly rare for astronomers to visit sites and perform observations themselves; when manual control of a telescope is necessary, it is often done remotely from the comfort of one's home, and any collaboration happens over Zoom. Gone are the days of Edwin Hubble, who would effectively live on the mountain for extended periods, observing night after night and collecting plates of data one at a time.

While I do think it's a little sad that such days are behind us, there is a silver lining. With the technological advances in both telescope instrumentation and software, it became necessary to revisit and streamline the observation process. We had little choice: humans are prone to error, certainly, but they also need to sleep, eat and socialize, whereas a machine can work nonstop for decades so long as electricity is provided. As an astronomer, I value the science above everything else and would happily choose an increase in quality data over the opportunity to live on a mountain and perform manual observations, as mystical and dreamy as that may have been.

Machine learning has been used to optimize manual procedures since the 20th century. In fact, my colleagues always joked that the USPS has been using machine learning since the 1960s to help sort the mail, whereas astronomers have only recently come to appreciate its application in our field, driven by the big-data transition we are in. The James Webb Space Telescope, for example, will produce more data than could ever be visually inspected. There are images that exist today that no one in this world will ever see, and with the commissioning of new telescopes in the coming years, more images will fall into the category of data inspected only by emotionless machines.

Astronomers using NASA’s James Webb Space Telescope combined the capabilities of the telescope’s two cameras to create a never-before-seen view of a star-forming region in the Carina Nebula. Captured in infrared light by the Near-Infrared Camera (NIRCam) and Mid-Infrared Instrument (MIRI), this combined image reveals previously invisible areas of star birth. 
What looks much like craggy mountains on a moonlit evening is actually the edge of a nearby, young, star-forming region known as NGC 3324. Called the Cosmic Cliffs, this rim of a gigantic, gaseous cavity is roughly 7,600 light-years away.

In short, machine learning is the use of differential calculus to identify optimal patterns in high-dimensional data. A 50-by-50 pixel image, for instance, can be represented as a point in 2,500-dimensional space. But what are these "optimal patterns" that the machine identifies? Unfortunately, there is no good answer, and machine learning is often viewed as a dark art. Even if we could visualize the connections a trained model has made, it would be futile, as we would make little sense of them. The correlations the machine finds over numerous iterations are simply too intricate for our minds to comprehend. It is very much a black box: data goes in, we don't know what connections the machine learned during training, but good results come out, and we are happy.
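To make the idea above concrete, here is a minimal toy sketch (not any real astronomical pipeline): a 50-by-50 pixel image is flattened into a single point in 2,500-dimensional space, and "learning" is just calculus, repeatedly nudging a set of weights down the gradient of the error.

```python
import numpy as np

rng = np.random.default_rng(0)

image = rng.random((50, 50))   # stand-in for a telescope image
x = image.flatten()            # shape (2500,): one point in 2,500-D space
y = 1.0                        # pretend label: 1 = "real object"

w = np.zeros(2500)             # one weight per dimension
for _ in range(100):
    pred = 1 / (1 + np.exp(-w @ x))   # logistic prediction between 0 and 1
    gradient = (pred - y) * x         # derivative of the error w.r.t. w
    w -= 0.01 * gradient              # step downhill

print(x.shape)  # (2500,)
```

Real models stack many such layers with millions of weights, which is exactly why the patterns they settle on are so hard for a human to interpret.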

As astronomers we use machine learning for object recognition, signal prediction and even as a tool to manage our instruments. It would take me my whole life to inspect two million astronomical objects, yet a machine learning algorithm did it for me in less than 30 minutes. These developments have led to the creation of broker systems that take in telescope data, apply machine learning to distinguish certain objects, and then forward the information to science teams interested in the particular phenomena.
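The broker idea can be sketched in a few lines. All names here are hypothetical, not any real broker's API: a stand-in classifier scores each incoming detection, and matches are routed to the teams subscribed to that class of object.

```python
def classify(detection):
    # Stand-in for a trained machine learning model:
    # a large brightness change is flagged as a possible transient.
    return "supernova" if detection["brightness_change"] > 0.5 else "noise"

# Hypothetical subscription list: which teams want which object class.
subscribers = {"supernova": ["transient_team@example.org"]}

def broker(detections):
    """Classify each detection and route it to interested teams."""
    routed = []
    for det in detections:
        label = classify(det)
        for team in subscribers.get(label, []):
            routed.append((team, det["id"]))
    return routed

stream = [
    {"id": "obj-001", "brightness_change": 0.9},
    {"id": "obj-002", "brightness_change": 0.1},
]
print(broker(stream))  # [('transient_team@example.org', 'obj-001')]
```

The real systems differ in scale, scoring millions of nightly alerts, but the shape is the same: classify, filter, forward.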

Much of the data in the coming century will go unnoticed, even with the help of our machine learning programs; but I suppose that is a beautiful thing: anyone can do astronomy simply by downloading public data to their computer. We need all the help we can get, because while machine learning has tremendous utility, in its current state it cannot compare to the eyes and brains we are gifted with, of which there are simply not enough.

Daniel Godines is a PhD student in astronomy at New Mexico State University. He can be reached at godines@nmsu.edu.

This article originally appeared on Las Cruces Sun-News: Star News: AI and why astronomers don’t look through a telescope anymore