Sunday, July 17, 2016

Robotics for Gardening and Farming: A Guide to Two Recent Series

Over the past weeks, I've written two series of posts, the first titled "Biological Agriculture for Roboticists" and the second "Robotics for Gardeners and Farmers", with the intention of helping to bridge the gap between these two fields and the people engaged in them. What appears below is a table of contents for those series.

Biological Agriculture for Roboticists

Part 1

Part 2

Part 3

Part 4

Part 5

Part 6

Robotics for Gardeners and Farmers

Part 1

Part 2

Part 3

Part 4

Part 5

Part 6

Robotics for Gardeners and Farmers, Part 6

Imagine, for a moment, that you are a baby chipmunk, emerging from the burrow for the first time and having your first look around. The world is amazing, full of light and sound, most of which doesn't make much sense at first, although your highly evolved mammalian brain quickly learns to turn that barrage of data into a plausible model of what is happening around you. But, in that first instant, it's cacophony.

Now let's take this one step further. Imagine you're a newborn tree squirrel, nearly devoid of usable senses, that has somehow fallen from the nest but survived the fall. This is essentially the situation faced by any computing device without sensory hardware or some other source of information about its environment, except that the device doesn't experience distress; it just runs code.

Machines only have the senses provided through the inclusion of sensory hardware in their construction, and even that by itself is insufficient. Sensory hardware must be attached to computing hardware in a manner that allows it to pass along signals representing what it has sensed, and that computing hardware must be able to interpret those signals meaningfully, either automatically, as a result of its design, or under the control of software. Those meaningful interpretations must then be passed along to software which chooses among available actions and plans the execution of whatever action it has chosen, with the resulting action feeding back into the cycle as altered sensory input.
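The cycle just described can be sketched in a few lines of Python. Here read_sensors and drive are hypothetical stand-ins for real sensor and motor interfaces, and the decision rule is deliberately trivial:

```python
import random

# A minimal sense-think-act loop. read_sensors() and drive() are
# hypothetical placeholders for real hardware interfaces.

def read_sensors():
    """'Sense': return a dict of interpreted readings."""
    return {"edge_left": random.random() < 0.1,
            "edge_right": random.random() < 0.1}

def choose_action(readings):
    """'Think': map readings to one of the available actions."""
    if readings["edge_left"] and readings["edge_right"]:
        return "reverse"
    if readings["edge_left"]:
        return "turn_right"
    if readings["edge_right"]:
        return "turn_left"
    return "forward"

def drive(action):
    """'Act': command the motors (here, just report the choice)."""
    print(action)

for _ in range(5):   # each pass is one turn of the cycle
    drive(choose_action(read_sensors()))
```

On real hardware, the action taken alters what the sensors report on the next pass, which is what closes the loop.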

This all sounds very complicated, but it needn't always be so. Say you have a triangular platform supported by three steerable, powered wheels near the corners, all of which always point in the same direction, meaning that they steer in unison, perhaps under the control of a single motor and a chain drive. This platform sits on a table top, and its purpose is to roll randomly around the table without falling over the edge. All that's required to accomplish this is three edge detectors, basically simple feelers, extending beyond the wheels. Each produces a simple signal (no clock needed) that lets the processor know when an edge has been detected, telling it not to go any further in that direction and to pick another one that will move the device away from the detected edge. If the device detects edges at two corners at more or less the same time, it knows to move in the direction of the corner from which it is not receiving such a signal. If this is the only challenge it faces, the device will happily roll around, without falling off the table, until its batteries can no longer power the circuitry or turn the motors.
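The steering decision for this table rover can be sketched as follows; the corner headings are an assumed layout, not anything prescribed above:

```python
# Direction choice for the three-wheeled table rover. Corners are
# assumed to sit at headings of 90, 210, and 330 degrees (a hypothetical
# layout); all wheels steer in unison, so a heading fully determines
# the direction of travel.

CORNER_HEADINGS = {0: 90.0, 1: 210.0, 2: 330.0}

def pick_heading(edge_detected):
    """edge_detected: list of three booleans, one per corner feeler.
    Returns a safe heading in degrees, or None if no edge is sensed."""
    tripped = [i for i, hit in enumerate(edge_detected) if hit]
    if not tripped:
        return None                      # no edge: keep rolling as before
    if len(tripped) >= 2:
        # Two corners at the edge: head toward the corner that is
        # NOT reporting an edge.
        clear = [i for i in range(3) if i not in tripped]
        if clear:
            return CORNER_HEADINGS[clear[0]]
    # One corner tripped (or all three): move directly away from
    # the first tripped corner.
    return (CORNER_HEADINGS[tripped[0]] + 180.0) % 360.0
```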

While this example isn't particularly useful, except perhaps for keeping small children or pets entertained, there are even simpler devices that are, such as a hose-following lawn sprinkler. Even when you're setting out to design a full-blown robotic system, it's good practice to make a first pass using the simplest approach that will do a passable job of whatever you're out to accomplish.

That said, let's dive into the discussion of enabling machines to garner some information about their environments.

One of the most important categories of information for a machine that moves about is location, with respect to any boundaries it should not venture beyond and to any significant features within those boundaries.

For some purposes, knowing where it is to within a few yards might be enough. Say you want a lawn sprinkler that moves itself about more intelligently than one that just follows a hose, and you'll only use it in the back yard, so you don't need to worry about it sprinkling visitors or your mail carrier. It will need a time source, so you can tell it when to start sprinkling; a map of the back yard; and some means of determining where it is within that map. One obvious way to determine location is GPS. There are GPS receivers available for single-board computers and microcontrollers, typically as plug-in boards called shields, and, if your yard is fenced in and you don't mind some imprecision, GPS might be good enough.
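Assuming the yard map is stored as a polygon in a local coordinate frame, with GPS fixes projected into that frame, the "am I still inside the yard?" test can be sketched with the standard ray-casting algorithm:

```python
# Point-in-polygon test via ray casting. The yard polygon and its
# coordinate frame (units of yards) are hypothetical; a real system
# would first project each GPS fix into this local frame.

def inside_yard(x, y, polygon):
    """Return True if point (x, y) lies inside the polygon,
    given as a list of (x, y) vertices in order."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a ray cast to the right from (x, y) cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

back_yard = [(0, 0), (30, 0), (30, 20), (0, 20)]   # hypothetical map
print(inside_yard(15, 10, back_yard))              # center of the yard
```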

For other purposes, like edging the lawn along walks and around garden spaces, GPS alone doesn't come close to being precise enough, and you might wish to rely upon some other positioning technology, or upon a hybrid system, perhaps utilizing technology more usually applied indoors. One approach would be to use an array of ZigBee protocol nodes spread around the perimeter of your yard, estimating the distance to each node from its signal strength and trilaterating a position from those distances, although this too might not be precise enough for edging.
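Once signal strengths from three nodes have been converted to approximate distances (for instance with a log-distance path-loss model), a position estimate falls out of basic trilateration. A sketch, with hypothetical node positions:

```python
# Trilateration from three fixed nodes with known positions and
# estimated ranges. Node coordinates and ranges are hypothetical;
# converting signal strength to range is assumed to happen upstream.

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Solve for (x, y) given three node positions and range estimates."""
    x1, y1 = p1
    x2, y2 = p2
    x3, y3 = p3
    # Subtracting the circle equations pairwise yields two linear
    # equations in x and y.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y
```

With nodes at (0, 0), (10, 0), and (0, 10) and range estimates of 5, √65, and √45, this recovers the point (3, 4). In practice, signal-strength ranges are noisy, so a real system would use more than three nodes and a least-squares fit.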

For rectangular garden spaces and raised beds, the rail and gantry approach employed by FarmBot provides enough precision for most operations, and provides a good foundation for greater precision based on imagery and force control, topics beyond the scope of this installment.

Returning to our lawn sprinkler example, you might want to take soil moisture levels into account, but incorporating a soil moisture sensor into your sprinkler would make it considerably more complicated, and in any case healthy turf can be very difficult to penetrate. You might prefer, instead, to distribute several of these sensors around your yard and network them together using WiFi, Bluetooth, or ZigBee. These soil moisture sensing nodes could also double as the local positioning system described above.
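A sketch of how the sprinkler might act on those networked readings; the node names, threshold, and units are all hypothetical, and a real system would receive the readings over WiFi, Bluetooth, or ZigBee rather than from a literal dict:

```python
# Deciding which zones need water from networked soil moisture nodes.
# Node ids, threshold, and units are hypothetical.

DRY_THRESHOLD = 0.25   # assumed volumetric water content fraction

def zones_needing_water(readings):
    """readings: dict mapping node id -> latest moisture fraction.
    Returns the node ids whose soil is drier than the threshold."""
    return sorted(node for node, m in readings.items() if m < DRY_THRESHOLD)

latest = {"bed_east": 0.31, "bed_west": 0.18, "lawn_center": 0.22}
print(zones_needing_water(latest))
```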

But what if you have children who leave their toys strewn about? Those toys are going to get wet; there's no helping that at this level of sophistication, but we'd like to be able to detect their presence to avoid running into them, and, if possible, to avoid wrapping the water hose around them. Several fixed ultrasonic range finders, or a single one on a motorized mount that sweeps from side to side, can provide good information about such obstacles, if they can be made to operate while sealed to protect them from water. Whiskers connected to microswitches may be a more practical solution.
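The obstacle check from a sweeping range finder can be sketched similarly; the sweep angles, ranges, and safe distance below are hypothetical values, standing in for readings from something like an ultrasonic module on a servo mount:

```python
# Flagging bearings at which an obstacle sits inside the safe distance,
# from one sweep of a range finder. All values are hypothetical.

SAFE_DISTANCE_CM = 50.0

def blocked_bearings(sweep):
    """sweep: list of (angle_deg, range_cm) pairs from one sweep.
    Returns the angles at which something is inside the safe distance."""
    return [a for a, r in sweep if r < SAFE_DISTANCE_CM]

one_sweep = [(-45, 180.0), (-15, 42.5), (15, 38.0), (45, 200.0)]
print(blocked_bearings(one_sweep))
```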

There are many more types of sensors available, but all have one thing in common: they convert some bit of information about the physical world into an electrical signal that then becomes digital grist for the mill of some processor and the code running on it, providing a basis for choosing what, if anything, to do next.

Taken in order, the next installment would be about that processing, but I've already gone into some detail about processing hardware and software, and have mentioned ROS in passing, so it would make more sense for me to skip on to the subject of actuators. However, because the topic of actuators and end effectors to perform detailed manipulations of living plants and their environments is nearly as unexplored for roboticists as it is for gardeners and farmers, I think it is time to bring this series to a close and begin a new one which attempts to bring these two audiences together, probably including explanations for new terms in brief glossaries at the bottom of the installments in which they are introduced, linking to these and to supplementary material from the text.

Previous installments

Thursday, July 14, 2016

TED talk by Emma Marris

I first learned about Emma Marris from another video, posted in conjunction with the publication of her book Rambunctious Garden...

...which I have previously linked to here.

The reason I believe her vision and my own are complementary is that devices using cultivation techniques sufficiently meticulous and noninvasive to enable mechanization of intensive polycultures could also allow some selective wildness (something other than aggressive and/or noxious weeds) back onto land used for production, intermixed with crops grown for harvest.

Tuesday, July 12, 2016

FarmBot open-source CNC 'cultibot'

They call it a ‘farming machine’ and I see no reason it couldn't be scaled up to be that, but at its current scale it's more of a gardening machine, which is fine. The point is that they're using the open source paradigm, with the stated intention of pushing the technology forward. The basic design is, apparently, quite easy to use, but it's also easy to extend in various ways. This is a great project, and I do hope they get the support they need to carry it forward!

Saturday, July 09, 2016

Why Is There A Seed Vault In The Arctic Circle? | DNews Plus

Maintaining genetic diversity would be an easier matter if agricultural practice weren't (effectively) working so hard to diminish it. Robotics can bring back the attention to detail needed for diversity-supportive practices to flourish.

Sunday, July 03, 2016

Robotics for Gardeners and Farmers, Part 5

This is not meant to be a comprehensive list of resources, far from it, just enough to get you over the hump of having no idea where to start.

First, let me quickly mention three sources from which you can get parts and kits, in alphabetical order: Adafruit, RobotShop, and SparkFun. You should also know about Make: and DIY Drones.

In addition to their own websites, all of these except DIY Drones also have active YouTube channels: Adafruit, RobotShop TV, SparkFun, and Make:.

Next I'll briefly describe two computing platform families that are very popular and widely available, including from the vendors mentioned above, Arduino and Raspberry Pi.

Arduino
Arduino had its beginnings in the Master's thesis of a Colombian student at the Interaction Design Institute Ivrea. That project consisted of a development platform designed around Atmel's ATmega128, which is based on Atmel's AVR architecture. The thesis project went on to become the Wiring project, which, after being adapted to the less expensive ATmega8 processor, was forked as the Arduino project. Arduino is probably best classed as a single-board microcontroller. Arduino the Documentary is a short film that tells the story of how Arduino came to be.

Raspberry Pi
Similar in concept, the Raspberry Pi, developed by the Raspberry Pi Foundation, is designed around processors using the ARM architecture, also found in most smartphones. Because even the least powerful version of this platform can accommodate a keyboard and monitor, and because its processors are powerful enough to run application software on full-blown operating systems, the Raspberry Pi should be thought of as a single-board computer.

This really only scratches the surface of what's available, but these two platforms both have vibrant ecosystems, which means an abundance of related resources. For any particular project, there might be another platform that is a better fit for purpose, but the smaller the ecosystem surrounding such an alternative, the more expertise is likely to be required to use it.

This has been a very short installment, but we'll come back to the topic of the processing component of the sense-think-act cycle.

Next we enter the beginning of that cycle with a more detailed discussion of sensors, exploring the collection of information about environments composed of soil, plants, and critters.

Previous installments