Senior Design Team sdmay20-47 • CyDrone
Nowadays, nearly every aspect of human life is moving toward automation. This has been true for as long as humanity has existed, but with the development of computers and the growth of machine learning, more and more operations can be done efficiently by computers. Since we humans receive a large portion of our information through our eyes, it is logical to teach our machines in a similar way: to see, to process what they see, and to make decisions based on that.
This is why we decided to work on this problem. Our goal is to build a highly mobile platform (a drone), equip it with substantial computational power (an NVIDIA SoC from the Jetson family), and have it process camera input with machine-learning algorithms. As a starting point, we want to teach the drone to identify objects, follow or avoid them, and analyze them (including volumetric analysis). Further development will focus on more complex algorithms based on computer vision and machine learning, such as a master-slave system in which a fleet of drones is controlled by a single leader.
We hope to build a drone-based computational system capable of solving complex tasks via machine-learning and computer-vision algorithms. Object detection and tracking will serve as examples of how our system handles complex vision- and learning-based operations, while our architecture and implemented control functionality will let us use and extend the solution with ease.
Our expected deliverables:
At the end of this project, our team should deliver a fully assembled drone with unmanned aerial vehicle (UAV) capabilities. Attached to the drone will be the additional hardware components needed for it to perform the required project functions: a GPS module so the drone can be tracked and can relay its position in real time, a camera to capture data about the objects of interest, and a GPU to process some of that data directly on board.
The GPU should be able to run complex image-processing algorithms such as "find and follow" and volumetric analysis, letting these algorithms control the drone's movements in real time without human intervention.
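To illustrate the "find and follow" idea, here is a minimal sketch (not the team's actual code) of a proportional controller: given the bounding box of a detected object in a camera frame, it produces yaw and forward-speed commands that keep the object centered and at a target apparent size. The frame dimensions, gains, and command convention are all assumptions for illustration.

```python
def follow_command(bbox, frame_w=640, frame_h=480,
                   target_area_frac=0.05, k_yaw=0.002, k_fwd=2.0):
    """bbox = (x, y, w, h) in pixels; returns (yaw_rate, forward_speed)."""
    x, y, w, h = bbox
    # Horizontal error: how far the object's center is from the frame center.
    err_x = (x + w / 2) - frame_w / 2
    yaw_rate = k_yaw * err_x  # positive -> turn toward the object

    # Size error: a small bounding box means the object is far away,
    # so command a positive forward speed to approach it.
    area_frac = (w * h) / (frame_w * frame_h)
    forward_speed = k_fwd * (target_area_frac - area_frac)
    return yaw_rate, forward_speed
```

In a real pipeline, the bounding box would come from the on-board detector each frame, and the two commands would be sent to the flight controller's velocity interface.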
The drone should be controllable by software running on the on-board computer. We expect to create a safe simulation environment for training the reinforcement-learning algorithms quickly and with minimal risk. Several environments of varying complexity should be created so the RL specialist can raise the training difficulty when needed. The drone should also have safety capabilities during software-controlled flight: the operator must be able to "kill" it or make it hover in place.
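The safety behaviour above can be sketched as a tiny flight state machine with a software "kill" and a "hover in place" mode. The state names and command set here are assumptions for illustration, not the project's actual interface.

```python
class FlightSafety:
    """Minimal safety state machine: kill always wins; hover holds position."""

    def __init__(self):
        self.state = "FLYING"

    def command(self, cmd):
        if cmd == "kill":
            # "kill" cuts the motors regardless of the current state.
            self.state = "KILLED"
        elif self.state == "KILLED":
            pass  # a killed drone ignores all further commands
        elif cmd == "hover":
            self.state = "HOVERING"  # hold position, suspend autonomy
        elif cmd == "resume":
            self.state = "FLYING"
        return self.state
```

Making "kill" unconditional and irreversible in software is the design choice that lets an operator override any misbehaving autonomous routine.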