Tuesday, May 24, 2016

Biological Agriculture for Roboticists, Part 5

To be most useful, agricultural robots need not only to distinguish plants from a background of soil and decaying plant matter, but also to distinguish them from each other, and to quickly model their branching structures, at least approximately, if only so they can locate the main stem and the point at which it emerges from the soil. They also need to recognize, as something new, plants that don't belong to any of the types they've already learned to identify.

This is a tall order, and I'll get into some specifics on how it might be accomplished a bit further on. But first, why would robots need to be able to recognize plants as something they haven't seen before? Isn't it enough to be able to tell whether they've been planted intentionally, crop or not?

In Part 4 of this series, I provisionally claimed that, in a recently tilled field, which has not yet been planted to the next crop, any green growing thing can be presumed to be a weed. While that's usually the case, there are exceptions.

Even in a monoculture scenario with routine tillage, where you don't really expect to find anything in the field other than the crop the farmer has planted and weeds, seed may be brought in from elsewhere: blown on the wind, carried in bird droppings, or arriving in the stools of, or clinging to the fur of, some wide-ranging mammal. Generally these too might be considered weeds, but occasionally they will be rare and endangered species themselves, or vital to the survival of rare and endangered animals (milkweed for monarch butterflies), and should therefore be allowed to grow and mature, even at the expense of a small percentage of crop production and some inconvenience. (Farmers should be compensated for allowing this to happen, and robotic equipment can help document that they have done so.)

In a poly/permaculture scenario, native plants that aren't poisonous to livestock or wildlife, and which don't compete aggressively with crops, are usually welcome. They increase the diversity of the flora, supporting a more diverse fauna that is more likely to include beneficial species, all of which implies a more stable environment, less prone to overwhelming infestations of all sorts.

Plants look different under different lighting conditions — dawn, mid-morning, noon, mid-afternoon, dusk, and under clear sky versus clouds versus overcast conditions — and different in shade than when standing alone on otherwise clear ground. Beyond that, plants look very different as seedlings than they do after a few weeks of growth, different yet when they've gone to flower, and different once again when mature, and for deciduous perennials still more different in their winter or dry season dormancy. Without having seen them in all of these stages and conditions, even a human gardener might mistake one crop plant for another, or for a weed, and based upon that select an inappropriate action. Recognizing continuity between stages and across diverse conditions is even more challenging for a machine.

For all of these reasons, once the technology is up to making such differentiations quickly enough that it is no longer the limiting factor in machine performance, the default, when confronted with something unfamiliar, needs to be to do nothing other than keep track of it and send a notification up the escalation chain. Now back to the question of how, which is about sensory modes and sensory processing. What information about an environment composed of crops, a smattering of native plants, and weeds, on a background of soil and decaying plant matter, can a machine usefully collect and process?
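Before getting into the individual modalities, here's a rough sketch of the "do nothing, track, and escalate" default just described, assuming a classifier that returns a best-guess label and a confidence score. All of the names and the threshold are placeholders, not a real design:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the "unfamiliar plant" default policy: anything the
# classifier cannot match confidently is left alone, recorded, and reported
# up the escalation chain. Names and the threshold are illustrative only.

CONFIDENCE_THRESHOLD = 0.8  # below this, treat the plant as unfamiliar

@dataclass
class Observation:
    position: tuple    # (x, y) in field coordinates, metres
    label: str         # best-guess species label from the classifier
    confidence: float  # classifier confidence, 0.0 to 1.0

@dataclass
class UnknownPlantLog:
    entries: list = field(default_factory=list)

    def record(self, obs: Observation) -> None:
        self.entries.append(obs)

def handle_observation(obs: Observation, log: UnknownPlantLog, notify) -> str:
    """Decide what to do with a single plant observation."""
    if obs.confidence < CONFIDENCE_THRESHOLD:
        # Unfamiliar: do nothing to the plant, just track it and escalate.
        log.record(obs)
        notify(f"Unrecognized plant at {obs.position}, "
               f"best guess '{obs.label}' ({obs.confidence:.2f})")
        return "track_and_notify"
    # Familiar: hand off to the normal crop/weed decision logic (not shown).
    return "handle_as_known"
```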

Among the most obvious and most valuable is location. To a very close approximation, plants stay where they're planted, so if today you find a plant in the same location as you found one yesterday, there's a high probability that it's the same plant, just a day older. (It's true that some plants send up new shoots from their root systems, remote from the original stem, but that belongs to a discussion of modeling things that aren't directly sensible, or, in that example, requires something like ground-penetrating radar.) Generally speaking, for plants over a short interval, location is synonymous with identity. GPS by itself is inadequate to establish location with sufficient precision to be used in this manner, so it must be supplemented with other methods, such as fixed markers, odometry, and maps created on previous passes over the same ground. More precise local positioning systems could also prove very helpful.
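As a rough illustration of using location to carry identity from one pass to the next, something like the following nearest-neighbour matching against the previous day's map would do, assuming detections and the map are already expressed in the same field coordinates. The matching radius and all of the names are placeholders:

```python
import math

# Illustrative sketch: match today's plant detections to yesterday's map by
# location alone. Detections within MATCH_RADIUS of a previously mapped plant
# are assumed to be the same individual; the rest are treated as new.
# The 5 cm radius is an arbitrary placeholder, not a recommendation.

MATCH_RADIUS = 0.05  # metres

def match_to_map(detections, prior_map):
    """detections: list of (x, y); prior_map: dict plant_id -> (x, y).
    Returns (matches, new_plants), where matches maps detection index -> plant_id."""
    matches, new_plants = {}, []
    unclaimed = dict(prior_map)  # each prior plant can be matched at most once
    for i, (x, y) in enumerate(detections):
        best_id, best_dist = None, MATCH_RADIUS
        for plant_id, (px, py) in unclaimed.items():
            dist = math.hypot(x - px, y - py)
            if dist <= best_dist:
                best_id, best_dist = plant_id, dist
        if best_id is not None:
            matches[i] = best_id
            del unclaimed[best_id]
        else:
            new_plants.append(i)  # nothing nearby: likely a new emergence
    return matches, new_plants
```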

Another obvious collection of modalities centers around imagery based on sensing reflected electromagnetic energy, including everything from microwaves through infrared and visible light to ultraviolet, as snapshots and over time (video), and using ambient or active illumination, or a combination of the two. (Introduction to RADAR Remote Sensing for Vegetation Mapping and Monitoring) Color video camera modules have become so inexpensive that using an array of them is a reasonable proposition, and modules containing two or more lens/sensor systems are becoming widely available. Cameras sensitive to ultraviolet, near-infrared (wavelengths just longer than visible light), and far-infrared (thermal radiation) are also becoming common and dropping in price. Even phased array radar is being modularized and should be reasonable to include in mobile machines within a few years.
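As one concrete example of what combined visible and near-infrared imagery can do, the normalized difference vegetation index (NDVI) exploits the fact that living vegetation reflects strongly in the near-infrared while absorbing red light, which makes separating green plants from soil and dead residue fairly straightforward. A minimal sketch, assuming co-registered red and near-infrared bands as NumPy arrays; the 0.3 threshold is a common rule of thumb, not a universal constant:

```python
import numpy as np

def ndvi(red, nir, eps=1e-6):
    """Normalized difference vegetation index from co-registered red and
    near-infrared bands (arrays of identical shape). Living vegetation
    typically scores well above bare soil or dead plant matter."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero

def vegetation_mask(red, nir, threshold=0.3):
    """Boolean mask of pixels likely to be living plant material."""
    return ndvi(red, nir) > threshold
```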

Other sensory modes that are already in common use, or may soon be, include sound, both passive (hearing) and active (sonar); pressure/strain (touch-bars, whiskers, and controlling manipulator force); simple gas measurement (H2O, CO2, CH4); and volatile organic compound detection (smell, useful in distinguishing seedlings). I'll get back to the use of sound in a future installment, in the context of fauna management and pest control.

The stickier problem is how to transform all the data produced by all available sensors into something useful. This can be somewhat simplified by including preprocessing circuitry in sensor modules, so that, for example, a camera module serves processed imagery instead of a raw data stream, but that still leaves the challenge of sensor fusion: weaving the data from all of the various sensors together into an integrated model of the machine's environment and its position within it, one that both reflects physical reality and supports decisions about what to do next, quickly enough to be useful. Again, research is ongoing.
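To give a flavor of what fusion can mean in the very simplest case, here is a toy sketch that combines several sensors' per-cell estimates of "plant present" over a field grid in log-odds form, under a naive assumption that the sensors are independent. A real system would need calibration, time alignment, and a far richer model than plant/no-plant; the names are illustrative only:

```python
import numpy as np

# Toy sketch of evidence fusion over a field grid: each sensor reports, per
# cell, the probability that a plant is present, and the estimates are
# combined in log-odds form under a naive independence assumption.

def to_log_odds(p):
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return np.log(p / (1 - p))

def fuse(sensor_probs, prior=0.5):
    """sensor_probs: list of 2-D arrays, one per sensor, same shape, each
    giving P(plant present) per grid cell. Returns fused probabilities."""
    log_odds = to_log_odds(np.full_like(sensor_probs[0], prior, dtype=float))
    for probs in sensor_probs:
        # Each sensor's evidence adds to the running log-odds for every cell.
        log_odds += to_log_odds(probs) - to_log_odds(prior)
    return 1.0 / (1.0 + np.exp(-log_odds))
```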
