by Aakanksha Chowdhery, Associate Research Scholar at Princeton University
We have entered an era where commercial drones have processing capability similar to smartphones. Drones come equipped with many types of sensors, and developers can now build and sell applications in new drone app stores from vendors such as Parrot and DJI.
Today’s drones are small, simple, and inexpensive to operate, and they can carry multiple types of surveying equipment. They capture massive amounts of data from a variety of sensors, such as visual cameras, infrared depth sensors, GPS, and inertial measurement units. Sensor data can be collected in a continuous stream throughout the flight and instantly uploaded to a server for immediate analysis.
Many important emerging drone applications will combine this sensor and video data with human input in real time. These include:
- Disaster Response. Drones can fly quickly to reach remote areas in disaster and emergency situations. Deployed post-disaster, they can surveil large areas to help manage safety and raise alarms about intruders. Operations centers can use live images transmitted from the drone to the ground station to prioritize and plan search-and-rescue missions.
- Industrial and Agricultural Monitoring. Drones can be deployed to survey remote industrial sites and scan large agricultural fields. They can alert the operations center to anomalies and perform aerial functions, such as spraying pesticides in pest-infested areas.
- Live Broadcast and Gaming. Media companies, such as ESPN and FS1, have begun to use drones to cover live broadcasts of sporting events, which require high-definition video streams to be transmitted reliably to the ground station. Emerging applications also include augmented-reality games, such as first-person-view (FPV) drone racing and drone Pokémon Go, which require low-latency live streaming to give players timely feedback.
Through advances in control and robotics research, we can reliably deploy drones on manual or pre-planned trajectories. In the near future, we can expect collision avoidance and formation flight in commercial drones. Fleets of drones will be able to pilot themselves, capturing video for several hours a day in smart cities and industrial sites and thereby collecting massive amounts of video data.
To manage this volume of video data in real time, drones need fog networks. The sheer volume of collected data requires local video processing: transmitting everything to a ground station or the cloud is too costly and incurs too much latency over traditional wireless networks. Many applications, especially surveillance and disaster response, require real-time input from a human operator. Further, sharing context across the drone fleet aids coordination.
Today, as little as five percent of the data collected by surveillance drones reaches the analysts who need to see it. A large part of the problem is attributable to extremely slow download times caused by bandwidth and connectivity limitations. In addition, local video processing is cumbersome and time-consuming on a drone processor, whose compute capability is limited compared to a server or the cloud.
Drone applications need intelligent solutions to decide what data to analyze locally, what data to transmit to the server for a timely response without exceeding network constraints, and how to use the responses for control and coordination with other drones. Applications must also adapt to meet user requirements while communicating over variable wireless links.
Fog computing is a growing paradigm that allows data to be analyzed, pre-processed, or filtered at the drone while the compute-intensive work is offloaded to the ground station and/or the cloud. Tomorrow’s drone systems will benefit significantly from fog computing in handling these extremely large datasets.
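To make the offloading idea concrete, here is a minimal sketch in Python of the kind of drone-side decision logic fog computing implies. Everything in it is hypothetical rather than a real drone SDK: a cheap on-board detector filters each frame, and a frame is offloaded only if it is interesting and the current uplink can deliver it within the application's latency budget.

```python
import random

FRAME_SIZE_MBITS = 4.0   # assumed size of one encoded HD frame (hypothetical)
LATENCY_BUDGET_S = 0.5   # assumed end-to-end deadline for the application


def detect_motion(frame):
    """Stand-in for a cheap on-board filter (e.g., frame differencing)."""
    return frame["activity"] > 0.3


def should_offload(frame, uplink_mbps):
    """Offload only interesting frames, and only when the current uplink
    can deliver a frame within the latency budget."""
    if not detect_motion(frame):      # filter at the drone: drop dull frames
        return False
    transfer_time_s = FRAME_SIZE_MBITS / max(uplink_mbps, 0.01)
    return transfer_time_s <= LATENCY_BUDGET_S


if __name__ == "__main__":
    for i in range(10):
        frame = {"id": i, "activity": random.random()}  # fake frame metadata
        uplink = random.uniform(1.0, 20.0)              # variable link, Mbit/s
        if should_offload(frame, uplink):
            print(f"frame {i}: offload to ground station ({uplink:.1f} Mbit/s)")
        else:
            print(f"frame {i}: analyze locally or discard")
```

In a real deployment, the detector, bandwidth estimate, and transport would come from the drone's own SDK and network stack; the point of the sketch is that pairing a local filter with a link-aware offload decision is what keeps a fleet within its bandwidth and latency constraints.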
About the author:
Dr. Aakanksha Chowdhery has been an Associate Research Scholar at Princeton University since 2015. Her research lies at the intersection of mobile systems and machine learning, focusing on fog computing architectures that optimize the tradeoff between bandwidth, energy, latency, and accuracy for video analytics. Her work has contributed to industry standards and consortia, such as DSL standards and the OpenFog Consortium. She completed her PhD in Electrical Engineering at Stanford University in 2013 and was a postdoctoral researcher in the Mobility and Networking Group at Microsoft Research until 2015. In 2012, she became the first woman to win the Paul Baran Marconi Young Scholar Award, given for scientific contributions in the field of communications and the Internet. She also received the Stanford School of Engineering Fellowship and Stanford's Diversifying Academia Recruiting Excellence (DARE) fellowship. Prior to joining Stanford, she completed her Bachelor's degree at IIT Delhi, where she received the President's Silver Medal Award.