Wednesday, August 5, 2009

Landing Gadget Could Let Drones See Like Pilots

  • 12:41 pm |
  • Categories: Air Force, Drones, Gadgets and Gear


    One big difference between Army and Air Force drones is that many of the Air Force’s robo-planes can’t land themselves. That has contributed to a number of Air Force Predator crashes when human operators run into trouble bringing the drones up or down.

    But there may be a way of providing autolanding capability without the weight and expense of conventional systems. Computer vision company 2d3 is developing a new system to allow drones to see their way to a safe landing, using their own cameras rather than radio beacons or radar.

    Human pilots are trained to land using visual information, but for machines this is a first. Even though they may have cameras, that mass of whirling light and dark shapes is meaningless to the drone itself. It’s only the operator who realizes that the images show the drone is about to plunge into the ground.

    The Visually Assisted Landing System (VALS) will mean that drones no longer need to fly blind. It detects features using the aircraft’s camera and translates their motion into height and orientation data, which is fed into the navigation system.

    VALS mimics the way a human pilot works. It automatically picks out features on the ground, specifically the runway markings. VALS then tracks the way these features change from frame to frame and builds up a 3D model of what they represent, so that relative motion can be calculated. It can then feed height, attitude and other data to the landing system. Combined with GPS and basic runway data, VALS can bring a drone down anywhere in the world without advance preparation.
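    To make that concrete, here is a minimal sketch of the kind of pose-from-runway-markings computation the article describes, using OpenCV’s standard perspective-n-point solver. The runway geometry, pixel detections and camera calibration values are all invented for illustration; this is not 2d3’s actual algorithm.

    ```python
    # A minimal sketch (not 2d3's actual VALS code) of recovering aircraft pose
    # from runway markings with OpenCV's standard perspective-n-point solver.
    # Runway geometry, pixel detections and camera calibration are all invented.
    import numpy as np
    import cv2

    # Corners of the runway threshold bar in a runway-fixed frame (metres, Z up).
    # This stands in for the "basic runway data" the article mentions.
    runway_points_3d = np.array([
        [0.0,  -22.5, 0.0],
        [0.0,   22.5, 0.0],
        [30.0, -22.5, 0.0],
        [30.0,  22.5, 0.0],
    ], dtype=np.float64)

    # Pixel positions of those same corners as found in the current video frame
    # by some feature detector (the detection step itself is not shown here).
    image_points_2d = np.array([
        [412.0, 520.0],
        [868.0, 516.0],
        [455.0, 430.0],
        [822.0, 428.0],
    ], dtype=np.float64)

    # Camera intrinsics from a one-off calibration of the drone's camera.
    camera_matrix = np.array([
        [800.0,   0.0, 640.0],
        [  0.0, 800.0, 360.0],
        [  0.0,   0.0,   1.0],
    ])
    dist_coeffs = np.zeros(5)  # assume lens distortion already corrected

    ok, rvec, tvec = cv2.solvePnP(runway_points_3d, image_points_2d,
                                  camera_matrix, dist_coeffs)
    if ok:
        rotation, _ = cv2.Rodrigues(rvec)
        # Camera position expressed in the runway frame; with Z up, the third
        # component is the height the autopilot needs for the glide slope.
        camera_position = (-rotation.T @ tvec).ravel()
        print("Height above runway (m):", camera_position[2])
    ```

    Run once per frame, a computation along these lines would yield the stream of height and attitude data the article says VALS feeds to the landing system, with frame-to-frame tracking of the same features supplying the relative motion.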

    Of course, there have been plenty of machine vision projects in the past, many of them aimed at letting robots find their way around. Vision systems were a key element of Darpa’s Grand Challenge for unmanned ground vehicles, but making sense of scenery is difficult, and researchers found that other sensors (such as ladar, laser-based radar) were more useful. The big problem is speed: researchers are still hunting for work-arounds that can rapidly identify the key features of an image and translate them into three-dimensional shapes. The limits of processing power tend to mean that autonomous ground vehicles relying on visual data alone are stuck at painfully slow speeds.

    VALS has the advantage that it’s looking at a tidy, structured scene (runway markings) rather than chaotic natural scenery. Runway markings are designed to be as visible and unambiguous as possible, and VALS simply takes advantage of this. With the aid of some clever software, the system can run at thirty frames per second, fast enough to cope with the approach rate of a landing aircraft.
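    For a sense of what thirty frames per second buys, here is a quick back-of-the-envelope check. The roughly 65-knot approach speed is an assumed, typical figure for a Predator, not from the article.

    ```python
    # How far does a landing drone travel between VALS frames? The ~65-knot
    # approach speed is an assumed figure for a Predator, not from the article.
    approach_speed_ms = 65 * 0.5144   # knots to metres per second, about 33 m/s
    frame_rate_hz = 30.0              # VALS processing rate, per the article
    print(f"~{approach_speed_ms / frame_rate_hz:.1f} m of travel per frame")  # ~1.1 m
    ```

    At roughly a metre of travel per frame, each new image differs only slightly from the last, which is exactly what makes frame-to-frame feature tracking tractable.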

    In addition to the Predator, VALS can be fitted to smaller unmanned aircraft such as the Shadow and ScanEagle. Flight testing will start later this year on a manned aircraft acting as a surrogate for an unmanned craft.

    “Ultimately, the goal is to produce a small device which can simply be installed onto any aircraft,” Jon Damush, President of 2d3, told Danger Room. “Feed the camera into one end, runway data into the other, and the box will produce relative position and orientation information and feed it to the autopilot through a serial link.”
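    The article gives no details of that serial protocol, but a hypothetical sketch of the kind of message such a box might emit could look like the following. The sync bytes, field layout and units are all invented for illustration; the quote only says the output is relative position and orientation.

    ```python
    # A hypothetical pose message for the box-to-autopilot serial link described
    # in the quote above. Sync bytes, field layout and units are invented.
    import struct
    import time

    def pack_pose_message(x_m, y_m, z_m, roll_rad, pitch_rad, yaw_rad):
        """Pack a timestamp, relative position (metres) and orientation
        (radians) into a fixed-size binary frame with a one-byte checksum."""
        payload = struct.pack("<d6f", time.time(),
                              x_m, y_m, z_m, roll_rad, pitch_rad, yaw_rad)
        checksum = sum(payload) & 0xFF
        return b"\xAA\x55" + payload + bytes([checksum])

    # Example: 120 m short of the threshold, 15 m up, nose pitched slightly down.
    frame = pack_pose_message(-120.0, 0.0, 15.0, 0.0, -0.05, 0.0)
    ```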


    The makers, 2d3, are best known for other applications that involve extracting data from video, such as image stabilisation and adding special effects to film. (“Much of the 2d3 technology currently applied in other sectors began life as an entertainment market based solution,” says their web site.)

    VALS will be competing with more conventional autolanding systems for drones, like the Tactical Automated Landing System made by Sierra Nevada Corporation. This consists of a three-pound transponder carried by the aircraft, plus a mobile ground unit that can be carried in a Hummer and deployed by two men in fifteen minutes. For a Predator, three pounds is pretty negligible. But it’s a significant weight for some of the smaller craft.

    As drones get smaller and camera systems get better, we are likely to see a lot more efforts like VALS to leverage existing hardware and give drones their own vision. Lining up with a runway is relatively simple; later systems are likely to be able to carry out increasingly sophisticated tasks, such as identifying and tracking objects on the ground without human intervention. One day those objects on the ground may be specific vehicles — not to mention individual human beings.
