When Mars Curiosity landing spiked online traffic, NASA was ready
- By Rutrell Yasin
- Aug 10, 2012
Officials at NASA’s Jet Propulsion Laboratory knew they were going to get a huge spike in online traffic when the Mars Curiosity rover touched down on the Red Planet Aug. 5. They decided the best way to keep the images and videos streaming from the event was to take up residence in the cloud.
NASA wanted to make sure that the experience of Curiosity’s landing could be shared with people across the globe by providing up-to-the-minute details of the mission, especially during the final seven minutes it took the rover to descend through Mars’ atmosphere and land on the planet.
As a result, the availability, scalability and performance of the mars.jpl.nasa.gov website was crucial during the landing event.
NASA launched Curiosity on Nov. 26, 2011, beginning an eight-month journey to Mars. Due to the size of the rover, landing was a huge challenge. JPL engineers had to design an innovative entry and landing technique involving a sky-crane maneuver to gently lower Curiosity to the surface.
As people all over the world visited NASA/JPL sites, JPL served its content from Amazon Web Services regions located around the world to meet the global demand, according to Khawaja Shams, manager of data services for tactical operations at NASA JPL.
The architecture behind the scenes that makes the video streaming possible is quite interesting, Shams said. “We have stacks that we have created that allow us to port a variable amount of bandwidth to our consumers,” he said. As traffic to the site increased, JPL just added more stacks behind its load balancers.
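The "add more stacks behind the load balancer" pattern Shams describes can be sketched in miniature. This is an illustrative simulation, not JPL's actual code; the stack names, class design and bandwidth figures are assumptions for demonstration.

```python
# Toy model of horizontally scaling serving capacity: each "stack" is a
# self-contained unit of bandwidth, and a round-robin balancer spreads
# requests across however many stacks are currently provisioned.

class Stack:
    """One self-contained serving stack with a fixed bandwidth capacity."""
    def __init__(self, name, capacity_gbps):
        self.name = name
        self.capacity_gbps = capacity_gbps

class LoadBalancer:
    """Round-robin balancer; total capacity grows as stacks are added."""
    def __init__(self):
        self.stacks = []
        self._next = 0

    def add_stack(self, stack):
        self.stacks.append(stack)

    def total_capacity(self):
        return sum(s.capacity_gbps for s in self.stacks)

    def route(self):
        # Pick the next stack in rotation for an incoming request.
        stack = self.stacks[self._next % len(self.stacks)]
        self._next += 1
        return stack.name

lb = LoadBalancer()
lb.add_stack(Stack("stack-1", 25))
print(lb.total_capacity())          # capacity with one stack: 25
lb.add_stack(Stack("stack-2", 25))  # traffic rises: just add another stack
print(lb.total_capacity())          # capacity doubles: 50
```

The point of the pattern is that capacity scales by adding identical units behind the balancer, with no change to the units already serving traffic.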
The public is interested in NASA stories during major events, but that interest fades as time goes on. Since NASA is not in the business of streaming such events every day, it makes more sense for the agency to procure the equipment and services for just a few days rather than own them outright, Shams said.
Similarly, the rovers operate in a very “bursty” fashion, he said. Images from the rover are downloaded once or twice a day. “So cloud computing allows us to provision a bunch of machines in the cloud, process the images, shut them down and stop paying for them,” Shams said.
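The economics of that bursty pattern come down to simple arithmetic: pay for a fleet only during the processing window instead of around the clock. The hourly rate and fleet size below are invented example numbers, not real AWS prices or JPL figures.

```python
# Illustrative cost comparison for bursty image processing:
# run a fleet only for short daily bursts vs. leaving it on 24/7.

HOURLY_RATE = 0.50  # assumed $/hour per instance (not a real AWS price)
FLEET_SIZE = 40     # assumed number of instances needed per downlink

def on_demand_cost(hours_per_day, days):
    """Cost when instances run only during processing bursts."""
    return FLEET_SIZE * HOURLY_RATE * hours_per_day * days

def always_on_cost(days):
    """Cost if the same fleet were left running around the clock."""
    return FLEET_SIZE * HOURLY_RATE * 24 * days

# Two hours of processing per day for a month vs. an always-on fleet:
burst = on_demand_cost(hours_per_day=2, days=30)  # 40 * 0.50 * 2 * 30
idle = always_on_cost(days=30)                    # 40 * 0.50 * 24 * 30
print(burst, idle)  # 1200.0 14400.0
```

Under these assumed numbers, shutting the fleet down between downlinks is a twelvefold saving, which is the core of Shams' argument.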
JPL only had a few weeks to design, build, test and deploy the infrastructure. The underlying architecture was co-developed and reviewed across NASA/JPL and Amazon Web Services, Amazon officials said.
A variety of AWS services were used, including Amazon Elastic Compute Cloud (Amazon EC2) instances running Adobe Flash Media Server and an nginx caching tier, Elastic Load Balancing, Amazon Route 53 for DNS management and Amazon CloudFront for content delivery.
JPL used Amazon Route 53 and Elastic Load Balancing to balance the load across AWS regions and to ensure availability of content under all circumstances. Amazon EC2 instances running the Amazon Linux AMI were configured using configuration scripts and Amazon EC2 instance metadata.
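Configuring instances from metadata means a bootstrap script fills in a config template with values that, on a real EC2 instance, would come from the link-local metadata service at http://169.254.169.254/latest/meta-data/. The template and keys below are illustrative, not JPL's actual scripts; the fetch step is factored out so the rendering logic can be shown on its own.

```python
# Sketch of the metadata-driven configuration step: a pure function that
# renders a server-config fragment from instance metadata values. On EC2
# the dictionary would be populated by querying the metadata service.

def render_config(metadata):
    """Fill a server-config template from instance metadata values."""
    return (
        f"server_name {metadata['public-hostname']};\n"
        f"# instance {metadata['instance-id']} "
        f"in {metadata['placement/availability-zone']}\n"
    )

# Example metadata values (hypothetical instance):
example = {
    "instance-id": "i-0abc1234",
    "public-hostname": "ec2-203-0-113-1.compute-1.amazonaws.com",
    "placement/availability-zone": "us-east-1a",
}
print(render_config(example))
```

Keeping the rendering pure (metadata in, config text out) lets the same script configure any instance in any region without per-machine edits.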
Shortly before the landing, NASA/JPL provisioned stacks of AWS infrastructure, each capable of handling 25 gigabits/sec of traffic. NASA/JPL used Amazon CloudWatch to monitor spikes in traffic volume and to add more capacity based on regional demand.
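The monitor-and-add-capacity loop described above amounts to a threshold rule: watch the traffic metric and provision another 25-gigabit/sec stack whenever demand nears current capacity. The headroom factor and sample traffic readings below are invented for illustration; only the 25 gigabits/sec per stack comes from the article.

```python
# Toy version of threshold-based capacity planning: given a traffic
# reading, compute how many 25 Gbit/sec stacks keep utilization safe.

STACK_CAPACITY_GBPS = 25  # per the article: each stack handled 25 Gbit/sec

def stacks_needed(traffic_gbps, headroom=0.8):
    """Provision enough stacks that traffic stays under 80% of capacity."""
    stacks = 1
    while traffic_gbps > stacks * STACK_CAPACITY_GBPS * headroom:
        stacks += 1
    return stacks

for traffic in (10, 30, 70):  # hypothetical sampled readings, Gbit/sec
    print(traffic, "->", stacks_needed(traffic), "stack(s)")
# 10 -> 1, 30 -> 2, 70 -> 4
```

In production the traffic readings would come from CloudWatch metrics rather than a hard-coded list, and the provisioning step would launch a real stack instead of incrementing a counter.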
As traffic volumes returned to normal in the hours after the landing, NASA/JPL used AWS CloudFormation to de-provision resources with a single command, according to Amazon.
Rutrell Yasin is a freelance technology writer for GCN.