Creative Curiosity: A NASA Concept Artist Explains the Process Behind the Latest Mars Images

Doug Ellison’s illustration of the Curiosity rover communicating with Earth during the landing.
(Courtesy NASA/JPL-Caltech)

Though most drawing teachers discourage students from working from photographs, some of us don’t have a choice.

As a producer in the Visualization Technologies group at NASA’s Jet Propulsion Laboratory (JPL), Doug Ellison is an illustrator of places and things that, at least for now, cannot be seen in person. His job is to bring celestial bodies down to Earth: taking immense swaths of information from satellites around the solar system and using them to model images for press releases and animated videos on NASA’s website.

Among his team’s most impressive accomplishments is a computer application called “Eyes on the Solar System,” which allows laypersons to see planets, moons, comets, and asteroids at remarkably close range. It also makes it possible for users to follow the paths of satellites and other spacecraft as they move in orbit.

Most recently, Ellison was involved in developing the illustrations to coincide with the spacecraft Curiosity’s voyage to Mars. The final product included stunningly realistic maps of the module’s projected landing site and detailed videos of the rover’s final descent. How does one go about drawing something from so far away? ARTINFO talked to Ellison about his work shortly after Curiosity touched down.

Science illustration is something of a niche profession. Were you interested in animation or astronomy first?

When my career started out, I wanted to go into engineering for space flight, but when I first started a degree in electronic engineering, I hated it. I went and did multimedia design instead for three years, and that was followed by a job doing multimedia production for a medical training company, but my love of space never waned. For eight-plus years I was taking my work skills and applying them to space projects, in particular using data from spacecraft that orbit Mars. Eventually that work got the attention of people here at JPL. So in the spring of 2010, I moved from the UK to Pasadena and started working here at JPL.

As we understand it, you’re the content lead for “Eyes on the Solar System,” a web application that allows people to look closely at spacecraft, comets, planets, and the Solar System at large. When and by whom was it first conceived?

“Eyes on the Solar System’s” history goes back probably five years or so. The Cassini Project funded a tour to show the Cassini spacecraft in its orbit around Saturn, so you could just watch it going around. Once people saw that, a different division here at JPL, involved in climate science, said, “That’s really good. Could you do a version that does the Earth, with Earth’s climate?” And so we did that, which became “Eyes on the Earth.”

Once we did “Eyes on the Earth,” they basically said, “That’s lovely. Now do the whole Solar System.” So that became “Eyes on the Solar System,” a third project. Actually, we’ve since recreated “Eyes on the Earth” as an “Eyes on the Earth II” that uses “Eyes on the Solar System” as its back end. “Eyes on the Solar System” first went live in November 2010, to help people follow a comet flyby mission called EPOXI, so it’s not far short of its second birthday.

What kind of data goes into constructing a piece of animation for “Eyes on the Solar System”? Is it tracking data from an object’s path through space, or is it a prediction of where you think an object is going to go?

The data in “Eyes on the Solar System” is always a mixture: predicted data running from a couple of weeks in the past out into the future, and then reconstructed data to back-populate what has actually happened. So when we do these “live” events, what you’re looking at is a prediction of what should occur. Now, with flybys or things like comets in deep space, all the driving is being done by Isaac Newton, so our predictions and what actually occurs match very, very well. “Eyes on the Solar System” is giving people pretty much a live view, even though it’s based on a prediction.

We got extraordinary help and amazing data from the Curiosity team to help show the landing, which involved a level of complexity and detail we hadn’t even attempted before. There was far more margin for reality to deviate from prediction, and it did, but not by much. We think touchdown was about six seconds out, which, given that you have to fly through a stack of atmosphere that has weather, and wind, and things like that, is not too bad at all.
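To make that distinction concrete, here is a minimal sketch of how a viewer like this might blend the two data sources. The function and the data structures are hypothetical stand-ins, not the actual “Eyes on the Solar System” back end, which works from trajectory files rather than Python dicts:

```python
# Hypothetical sketch: serve reconstructed (measured) positions where
# tracking data exists, and fall back to the prediction everywhere else.
# Both inputs map a timestamp to an (x, y, z) position.
def position_at(t, reconstructed, predicted):
    return reconstructed[t] if t in reconstructed else predicted[t]

# During a "live" event, t is near the present, so the view is served from
# `predicted`; afterwards, `reconstructed` back-populates the past.
```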

I’d like to ask you about some of the images that have come out in press releases directly from NASA. One of the more recent ones has a photograph taken by Curiosity superimposed over a modeled rendering of the Mars landing site.

The color image on the funny angle, yes?

Right. Could you tell us a little about the equipment that was involved in developing that image?

Currently there are three orbiters at Mars: a European one called Mars Express, and two American ones, Mars Odyssey and the Mars Reconnaissance Orbiter, or MRO for short, because otherwise it would take a month to say these.

MRO can be considered a spy satellite, a weather satellite, and a communications satellite. It does all those jobs, and HiRISE [the High Resolution Imaging Science Experiment camera on board the orbiter] is the spy-satellite piece of it. The resolution of that camera is pretty much equivalent to what you see on Google Maps when you zoom all the way in.

MAHLI [the Mars Hand Lens Imager] is a microscope on board Curiosity. It just took a picture from the surface, straight out of its lens, with its lens cover still on. The cover is protecting it from the dust that we kicked up when we landed, and that’s a good thing: the image is quite murky, and all of that murk is dust sitting on the lens cap rather than on the lens itself.

HiRISE is a camera in orbit around Mars, and it maps the surface by looking down as it flies overhead. We just get those observations down as large image files, which we then re-project to take out the dynamics of the spacecraft’s orbit. It’s like de-warping the image, in some respects, and that process takes about 24 hours. That’s how we got those images [of Curiosity] you may have seen, under its parachute during landing; those were taken by HiRISE, from orbit.
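As a rough illustration of that de-warping step, the sketch below re-projects an image strip with a single assumed homography. The file names and the matrix are hypothetical; the real correction is derived from the spacecraft’s orbit and pointing, not a constant:

```python
import cv2
import numpy as np

# Toy de-warp: remap a raw strip into map-aligned pixels with one assumed
# 3x3 homography (a stand-in for the real orbit-derived correction).
raw = cv2.imread("raw_strip.png")  # hypothetical raw image strip
H = np.array([[1.0, 0.15, 0.0],    # assumed shear from along-track motion
              [0.0, 1.00, 0.0],
              [0.0, 0.00, 1.0]])
height, width = raw.shape[:2]
mapped = cv2.warpPerspective(raw, H, (width, height))
cv2.imwrite("map_projected_strip.png", mapped)
```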

The amazing thing is that that HiRISE observation had to be planned about a week before landing, so all they could do was aim [the camera] where they hoped Curiosity would be. And Curiosity landed about two kilometers from the absolute center of its landing ellipse, which is basically the region of uncertainty about where we hoped to put the thing down on the ground. The ellipse was about 20 kilometers long, and we landed just two kilometers off center. There was maybe a 50 or 60 percent chance of actually catching it, and it had to be planned way in advance of the landing, so it was a pretty lucky shot to actually get it.

The images your team developed of Curiosity’s landing were available on NASA’s press releases pretty much instantaneously. Could you describe the process for preparing those? How did you decide how to render the planet once the rover touched down?

When it comes to choosing a landing site, there are two competing parties. One is the scientific community, and the scientists want to go somewhere really interesting, which normally means kind of rough and pointy, with hills and holes and whatever. The engineering team would rather land in a flat parking lot with nothing of interest at all, because that’s the safest place to land. With data from the Mars Reconnaissance Orbiter, we mapped many different landing sites, and in fact the final four candidate landing sites all lined up very well with the path of the HiRISE camera.

When we finally selected Gale Crater, they went ahead and mapped the whole landing ellipse, a 20-by-10-kilometer landing area, plus the route from there to the [rover’s] assigned target, about 10 kilometers to the south.

Essentially, it’s kind of like wallpapering your bathroom wall with kitchen rolls. These are thin noodles of data. One thing you can do with the Mars Reconnaissance Orbiter is point it slightly left and right as it’s flying overhead, so they’ll look at the same target, on separate orbits, from slightly different angles. And what that gives you is essentially a pair of images that form a stereo pair.

Could you explain what you mean by “stereo pair”?

Hold your finger out in front of you, close one eye, and note where your finger sits against the background; then close the other eye instead and compare again. Your finger will appear to jump as you flip between your two eyes. Now imagine that instead of two eyes, you have two images taken from different points above Mars: two eyeballs that are very far apart. You know where the spacecraft was, and you know the parameters of the camera, so you can back this stereo pair out into a three-dimensional model of the terrain. Multiple spacecraft do this, but the highest-resolution one is HiRISE on the Mars Reconnaissance Orbiter.
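The geometry behind that “backing out” is simple parallax. Here is a minimal sketch with assumed numbers; these are illustrative, not the actual MRO or HiRISE parameters:

```python
# Depth from parallax: a feature on high ground shifts slightly more
# between the two views than a feature on the plain below it.
altitude_m = 300_000.0   # assumed spacecraft altitude above the surface
baseline_m = 30_000.0    # assumed separation between the two viewing points
focal_px = 1.0e6         # assumed focal length, expressed in pixels

residual_disparity_px = 2.0  # extra parallax measured for a hilltop feature
height_m = altitude_m**2 * residual_disparity_px / (focal_px * baseline_m)
print(f"~{height_m:.0f} m of relief per {residual_disparity_px:.0f} px of extra parallax")
```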

As a result, we have this mosaic of all these HiRISE images put together, covering all the landing sites and the landing target. It’s 100,000 pixels across and 150,000 pixels tall. We also have the elevation data at 1 meter per pixel, which comes out to about 40,000 by 30,000 pixels.
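For a sense of scale, some back-of-envelope arithmetic on those dimensions (the bytes-per-pixel figures are assumptions for illustration, not the mission’s actual storage format):

```python
# Rough sizes for the mosaic and elevation model described above.
mosaic_px = 100_000 * 150_000   # imagery mosaic, from the interview
dem_px = 40_000 * 30_000        # elevation data at 1 m per pixel
print(f"mosaic: {mosaic_px * 3 / 1e9:.0f} GB uncompressed at 3 bytes/px (RGB)")
print(f"elevation: {dem_px * 4 / 1e9:.1f} GB at 4 bytes/px (32-bit heights)")
```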

It’s a hugely time-consuming task. Before I emigrated to the US, a friend of mine and I went through this process of taking two HiRISE images to create one of these three-dimensional models, and it’s a recipe you follow, from ingesting the data to calibrating it. But there’s one step where the computer basically goes away, creates a map of all the features on one image and a map of all the features on the other image, and then co-registers them to see where they match and how they differ. The recipe just says that this step may take 24 hours. You hit “Go,” walk away for a day, come back, and the computer has done all that math to generate the 3-D data. And probably a dozen or more of these 3-D swaths have been merged together to make one humongous, monolithic three-dimensional map of the landing site.
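Here is a hedged sketch of what that long-running feature-matching step looks like in miniature, using OpenCV’s ORB detector as a stand-in for whatever the actual stereo pipeline uses; the file names are hypothetical:

```python
import cv2

# Detect features in each image of the stereo pair, then co-register them
# by matching descriptors; each match's shift is a disparity measurement.
img_a = cv2.imread("stereo_pass_1.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("stereo_pass_2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=5000)
kp_a, desc_a = orb.detectAndCompute(img_a, None)
kp_b, desc_b = orb.detectAndCompute(img_b, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(desc_a, desc_b)

# Report the strongest matches; disparity plus camera geometry gives height.
for m in sorted(matches, key=lambda m: m.distance)[:5]:
    (xa, ya), (xb, yb) = kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt
    print(f"feature shifted ({xb - xa:+.1f}, {yb - ya:+.1f}) px between passes")
```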

It’s data like that that we can use before landing to produce animations of what Gale Crater is going to look like before we touch down. At a press conference yesterday, I did a fly-in to where we now know we are, did a little 360-degree pan around, and then cut from that to one of the early images from the rover, to show how the skylines match perfectly.

Could you describe how you modeled the animation of the rover’s first landing?

We spent nine months working with the science team and the engineering team, starting with CAD files and building a model of our own, to make sure that the rover was as accurate as it could possibly be. Then in the animation, we actually land it on a real piece of Mars; it just happens not to be the final landing site, because when we finished the animation there were still four candidates, and we couldn’t have it landing in one of those four. That would have been favoritism, and it might have biased the site-selection process.

We then landed it in a completely different piece of Mars, which happened to be kind of downtown Valles Marineris, the Grand Canyon of Mars. But it looked spectacular, and it was the same kind of data that we’re now using to see where the rover is.

The most important part of our animation, obviously, is the entry, descent, and landing, the “seven minutes of terror” that got so much press, and there was a lot of back and forth with the engineering team on it. Their enthusiasm to make this animation good and exciting was breathtaking. They were asking us to almost break the rules of cinematography to show just how dynamic and violent landing on Mars really is. We even went back and forth joking about what sound effects would sound good.

I was directing it from our end; an outside animation company was doing the work, and we got the thing finished about nine months before launch. Just before launch, the science team had to explain to the public why they were going to this big crater with a mountain in the middle, so I started producing graphics. I was working with the project scientist, a fellow called John Grotzinger, to generate artwork of Gale Crater explaining why it’s scientifically compelling, using the elevation data and the imagery we have from orbit. I was draping images of roughly 20,000 by 20,000 pixels over an elevation model of the crater.

There are three different generations of data in there, from the ’70s, the ’90s, and up to the present day, pulled together into one accurate and attractive view of the crater, so that the science team has something to speak to in order to explain why they’re going there.

Once we were on the cruise to Mars, things went quiet for a little while, while the spacecraft was just chugging away. Then, about a month ago, they narrowed down their landing ellipse ever so slightly, to where we actually ended up, and that involved revisiting this artwork and generating the graphics with the new landing ellipse. That was probably done in mid-July.

What we’re going to try to do now is take that massive, monolithic data set, about 100,000 pixels on a side, and animate it in the same way, so that we can offer some sort of preview of the trek from where we are now toward where the science targets actually are. Wherever the rover goes over the next six, maybe nine months, as we drive toward the science target, we have a map of the whole route that is as good as something on Google Earth.

Over the next few months, it really helps to have images from the rover, and they’re getting increasingly spectacular. We’ve got some good images coming up in a press conference in 35 minutes, and by the weekend we should have proper, full, high-resolution panoramas from the surface. Pair those with the ability to fly out through this massive data set to an almost third-person, bird’s-eye view: this is where we are, this is where we want to go, and this is why. It’s a very powerful messaging tool.

As of 10:31 on Sunday night, there is a priceless national asset on the surface of Mars, and you have to take care of it. Until yesterday evening, its main camera mast was actually stowed on the rover’s deck; we kind of folded it down to protect it from all this dust. It was only yesterday evening that the mast stood up for the first time and had a look around, and we’ve gotten back little thumbnails from those cameras that will, over the next day or two, turn into large, very high-resolution panoramas. It’s quite piecemeal, piece by piece, as we use more of the rover and understand how well it’s behaving. I’d expect that by this weekend we’ll have some spectacular images from the surface that are going to be properly mind-blowing.

I’d been an advocate for Gale Crater since 2008, so I’m delighted to see that we’ve landed here. It’s exceeded everybody’s imagination.

From a science point of view, or from an engineering point of view?

It’s interesting from both perspectives. This is a spectacular landscape that clearly has a lot of scientific interest and a lot of science that can be done within it, but it also looks like an engineering paradise, because it’s not so lumpy and bumpy and rocky and rough that you can’t make good driving progress. So it’s the best of both worlds. It’s navigable and yet interesting, and that means everybody wins. The science team are going to get what they actually want, and the rover-driving team are going to excel and really show what this rover can do once you get moving.
