#103: How to Take a Picture of Nothing
Last week, when I wrote about the image of the black hole taken by the Event Horizon Telescope (EHT) project, I conveniently glossed over a rather important detail: How was the picture actually taken? And why does it look the way it does?
(Source: Event Horizon Telescope/NASA Astronomy Picture of the Day)
To see how the image itself was taken, let’s look at how a normal digital camera works. Light goes through the camera lens, which provides optical magnification¹. That light then hits the camera sensor, which is made up of a grid of tiny light detectors. Each detector records how much light is hitting it, and what color that light is². The camera’s computer then takes all those readings and converts them into a picture we humans can see, where each pixel corresponds to one of the tiny grid detectors³.
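To make that last step concrete, here’s a minimal sketch of the idea in Python. The 4×4 sensor and its readings are made up, and the demosaicing is the crudest one imaginable; as footnote 2 notes, real cameras are much cleverer about this.

```python
import numpy as np

# Hypothetical raw readings from a tiny 4x4 sensor with an RGGB Bayer
# pattern: each detector sits behind a single color filter, so it only
# records one of red, green, or blue.
raw = np.array([
    [120, 200,  90, 180],
    [210,  60, 190,  40],
    [110, 190,  80, 170],
    [200,  50, 180,  30],
], dtype=float)

def demosaic_rggb(raw):
    """Turn a single-channel Bayer mosaic into an RGB image by
    collapsing each 2x2 RGGB block into one full-color pixel
    (the crudest possible demosaic; real cameras interpolate)."""
    h, w = raw.shape
    rgb = np.zeros((h // 2, w // 2, 3))
    rgb[..., 0] = raw[0::2, 0::2]                          # red detectors
    rgb[..., 1] = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2  # average both greens
    rgb[..., 2] = raw[1::2, 1::2]                          # blue detectors
    return rgb

print(demosaic_rggb(raw))  # a 2x2 grid of [R, G, B] pixels
```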
Now, doing the same with M87’s black hole is a bit trickier, since it’s 55 million light years away. First, you have to contend with the diffraction limit: The smallest angular size you can resolve with your telescope is roughly the wavelength you’re observing at divided by the telescope’s diameter. Anything below that size, you just won’t be able to make out. For example, if you want to observe an orange (let’s say it’s 7 cm across) on the surface of the moon in visible light (380 to 740 nanometers), the telescope you’d need would have to be several kilometers across⁴.
If you plug in the numbers for M87’s black hole, you’ll see that you’d need an earth-sized telescope just to be able to see it. But as I wrote last time, the EHT was able to get around that with a trick: interferometry. This allowed them to build a virtual earth-sized telescope by combining the observations of many telescopes (video).
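If you want to check those numbers yourself, here’s a quick back-of-the-envelope calculation in Python. The distances and sizes are the rough figures from above, the 42-microarcsecond ring size is the one the EHT reported, and the factor of 1.22 comes from the Rayleigh criterion for a circular telescope:

```python
import math

def aperture_needed(wavelength_m, angular_size_rad):
    """Minimum telescope diameter (Rayleigh criterion) needed to
    resolve a feature of the given angular size on the sky."""
    return 1.22 * wavelength_m / angular_size_rad

# An orange (7 cm across) on the moon (~384,400 km away), seen in
# green light (550 nm):
orange = 0.07 / 3.844e8                 # angular size in radians
print(aperture_needed(550e-9, orange))  # ~3,700 m: kilometers across

# M87's black-hole ring, ~42 microarcseconds across, observed by the
# EHT at a wavelength of 1.3 mm:
ring = math.radians(42e-6 / 3600)       # microarcseconds -> radians
print(aperture_needed(1.3e-3, ring))    # ~7,800 km: earth-sized
```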
But if you think back to our regular camera sensor, you quickly notice a problem: Our sensor has a detector for each pixel of the final image, but our virtual telescope doesn’t. And indeed, while the virtual telescope is able to resolve the black hole, it cannot simply take a picture of it like a camera would. All the EHT ever sees are tiny parts of the entire picture. By combining multiple observations of the black hole, the EHT can see even more pixels, but that still wouldn’t be nearly enough to call it a picture.
That’s where the EHT’s second trick came in: Once the telescopes had collected the data, the EHT split into four separate (and isolated) teams. Each team worked on techniques and algorithms to work out what the entire image might look like from the few pixels the EHT was able to observe (video), guided by how other images tend to look. Since each team developed its own approach to generating the final image, the idea was that if all four teams produced the same image of the black hole at the end, they had indeed created the “correct” image.
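To get a feel for why that’s even possible, here’s a loose illustration in Python (not the EHT’s actual pipeline). Each pair of telescopes in an interferometer measures one spatial Fourier component of the sky, so the EHT’s data amounts to a sparse sampling of the image’s Fourier plane; the toy ring, the 15% coverage, and the zero-fill reconstruction below are all my own stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 32x32 "sky": a bright ring, standing in for the photon ring
# around the black hole.
n = 32
y, x = np.mgrid[:n, :n] - n // 2
sky = ((x**2 + y**2 >= 64) & (x**2 + y**2 <= 121)).astype(float)

# Each telescope pair samples one spatial Fourier component of the sky.
# Pretend our array only ever covers 15% of the Fourier plane:
mask = rng.random((n, n)) < 0.15
visibilities = np.fft.fft2(sky) * mask

# The crudest possible reconstruction: treat every unmeasured Fourier
# component as zero and transform back (the "dirty image"). The real
# EHT teams used far smarter algorithms (CLEAN, regularized maximum
# likelihood) that fill the gaps using assumptions about how plausible
# images tend to look -- which is exactly why having four independent
# teams agree on the result mattered.
dirty = np.fft.ifft2(visibilities).real
print("peak of true ring:", sky.max(), "peak of dirty image:", round(dirty.max(), 2))
```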
And so it was, and that’s how this image was generated:
(Source: Event Horizon Telescope/NASA Astronomy Picture of the Day)
It’s a bit blurry because the EHT didn’t collect a lot of photons, but more observations in the future will mean more data for the algorithms, so we should get a sharper image over time.
But that doesn’t answer the second question: What are we actually seeing here?
Here’s the thing: The dark spot in the middle still isn’t the black hole, but its shadow (video). Because the black hole bends space-time so much, we cannot actually see its edge. We can only see the photons that have managed to escape from a slightly larger orbit. And even those photons don’t reach us on straight paths: on the way out, they were bent by the black hole’s massive gravity well. If you’re confused by now, I suggest you watch Veritasium’s excellent video on what you’re actually seeing in the picture.
In the end, it’s a true scientific accomplishment that we can see this picture, even though it doesn’t look all that exciting to the untrained eye. Just remember that the light that makes it up first escaped a black hole and then traveled for 55 million years, just so we could have this picture.
Hot People are Stressful
Look, everyone likes looking at attractive people, even if they don’t want to admit it. But actually meeting someone attractive can also be very stressful: How Attractive People Affect Your Brain.
Get It Out of This Kitchen
Professional chefs’ lives revolve around creating delicious dishes, and they are good at their job. Which also means they won’t suffer single-use tools in their kitchens, even though many of those tools are household staples: 9 Kitchen Tools Chefs Don’t Use.
Scientists: We kept pig brains alive 10 hours after death. Bioethicists: “Holy shit.”
After the brain has gone without oxygen for 15 minutes, its cells die. And once the brain is gone, the entire organism is essentially dead as well. In fact, that’s how death is defined for us humans: If your brain is dead, you are dead. But in a new experiment, scientists were able to revive supposedly dead pig brains 10 hours later. Vox explains the stunning new Nature study and its ethical implications.
📖 Weekly Longreads 📚
Why would a Russian assassin target a Ukrainian electrician? Russia Ordered a Killing That Made No Sense. Then the Assassin Started Talking.
🦄 Unicorn Chaser 🦄
An Actor Cut Together A Compilation Of His Work As A Background Extra, And It’s Freakin’ Hilarious
1. Or: It makes things look bigger.
2. This is simplified, of course. Most camera sensors don’t work that way, since a detector can only detect one kind of color (like red). By cleverly arranging different color detectors, camera sensors can take a color picture, with us none the wiser.
3. Again, this is simplified. Modern cameras do a lot of post-processing to enhance image quality, so a pixel in an image doesn’t really correspond to one of the detectors on the sensor.
4. In other words, to observe anything in detail in our solar system, it’s cheaper, easier, and quicker to build a probe and launch it into space.