Sunday, July 17, 2016

Robotics for Gardening and Farming: A Guide to Two Recent Series

Over the past weeks, I've written two series of posts, the first titled "Biological Agriculture for Roboticists" and the second "Robotics for Gardeners and Farmers", with the intention of helping to bridge the gap between these two fields and between those engaged in each. What appears below is a table of contents for those series.

Biological Agriculture for Roboticists

Part 1

Part 2

Part 3

Part 4

Part 5

Part 6

Robotics for Gardeners and Farmers

Part 1

Part 2

Part 3

Part 4

Part 5

Part 6

Robotics for Gardeners and Farmers, Part 6

Imagine, for a moment, that you are a baby chipmunk, emerging from the burrow for the first time and having your first look around. The world is amazing, full of light and sound, most of which doesn't make much sense at first, although your highly evolved mammalian brain quickly learns to turn that barrage of data into a plausible model of what is happening around you. But, in that first instant, it's cacophony.

Now let's take this one step further. Imagine you're a newborn tree squirrel, nearly devoid of usable senses, that somehow fell from the nest but survived the fall. Without sensory hardware or some other source of information about its environment, any computing device is in essentially the same situation, except that it doesn't experience distress; it just runs code.

Machines have only the senses provided by the sensory hardware included in their construction, and even that by itself is insufficient. Sensory hardware must be attached to computing hardware in a manner that allows it to pass along signals representing what it has sensed, and that computing hardware must be able to interpret those signals meaningfully, either automatically, as a result of its design, or under the control of software. Those meaningful interpretations must then be passed along to software which chooses among available actions and plans the execution of whatever action it has chosen, with the resulting action feeding back into the cycle as altered sensory input.
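The cycle just described (sense, interpret, decide, act, and back to sense) can be sketched in a few lines of Python. The names read_sensors, choose_action, and perform are hypothetical placeholders standing in for real hardware interfaces:

```python
# A minimal sketch of the sense-think-act cycle. The three callables
# are hypothetical stand-ins for real sensor and actuator interfaces.

def sense_think_act(read_sensors, choose_action, perform, steps):
    """Run the cycle a fixed number of steps; return the action log."""
    log = []
    for _ in range(steps):
        readings = read_sensors()          # sense: raw signals -> values
        action = choose_action(readings)   # think: interpret and decide
        perform(action)                    # act: feeds back into sensing
        log.append(action)
    return log
```

In a real machine the loop would run until shutdown rather than for a fixed number of steps, but the shape of the cycle is the same.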

This all sounds very complicated, but it needn't always be so. Say you have a triangular platform supported by three steerable, powered wheels near the corners, all of which always point in the same direction, meaning that they are steered in unison, perhaps under the control of a single motor and a chain drive. This platform sits on a table top, and its purpose is to roll around at random on the table without falling over the edge. All that is required to accomplish this are three edge detectors, basically simple feelers extending beyond the wheels, each producing a simple signal (no clock needed) that lets the processor know when an edge has been detected, telling it not to go any further in that direction and to pick another one that will move the device away from the detected edge. If the device detects edges at two corners at more or less the same time, it knows to move in the direction of the corner from which it is not receiving such a signal. If this is the only challenge the device is presented with, it will happily roll around, without falling off the table, until its batteries can no longer power the circuitry or turn the motors.
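The decision logic for this table-top rover fits in a few lines. Here is a minimal sketch, assuming corners numbered 0 through 2 and a set of triggered feelers reported each cycle; the names and numbering are illustrative, not taken from any particular kit:

```python
import random

# Corners are numbered 0-2. pick_heading maps the set of corners whose
# edge feelers have triggered to a corner that is safe to steer toward.

def pick_heading(triggered, current):
    """Given the set of corners reporting an edge, choose a corner to
    steer toward; with no edge detected, keep the current heading."""
    if not triggered:
        return current
    if len(triggered) >= 2:
        # Edges at two corners: head for the silent third corner.
        return ({0, 1, 2} - set(triggered)).pop()
    # One edge: pick either of the other two corners at random.
    return random.choice(sorted({0, 1, 2} - set(triggered)))
```

Called once per loop iteration, this is the entirety of the "think" stage for such a device.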

While this example isn't particularly useful, except perhaps for keeping small children or pets entertained, there are even simpler devices which are, such as a hose-following lawn sprinkler. Even when you're setting out to design a full-blown robotic system, it's good practice to make a first pass using the simplest approach that will do a passable job of whatever it is you're out to accomplish.

That said, let's dive into the discussion of enabling machines to garner some information about their environments.

One of the most important categories of information for a machine that moves about is location, with respect to any boundaries it should not venture beyond and to any significant features within those boundaries.

For some purposes, knowing where it is to within a few yards might be enough. Say you wanted a lawn sprinkler that moved itself about more intelligently than one that just follows a hose, and you're only going to use it in the back yard, so you don't need to worry about it sprinkling visitors or your mail carrier. It's going to need a time source, so you can tell it when to start sprinkling, a map of the back yard, and some means of determining where it is within that map. One obvious way to determine location would be GPS. There are GPS receivers available for single-board computers and microcontrollers, typically as plug-in boards called shields, and, if your yard is fenced in and you don't mind some imprecision, GPS might be good enough.
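As a sketch of the "where am I within the map" step, here is the standard ray-casting point-in-polygon test, assuming the yard boundary has been surveyed into a list of (x, y) corner coordinates; the units and coordinate origin are up to you:

```python
def inside_yard(x, y, boundary):
    """Ray-casting point-in-polygon test: is (x, y) within the yard
    boundary, given as an ordered list of (x, y) corner coordinates?"""
    inside = False
    j = len(boundary) - 1
    for i in range(len(boundary)):
        xi, yi = boundary[i]
        xj, yj = boundary[j]
        # Count crossings of a ray cast from (x, y) toward +x.
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside
```

GPS coordinates would need converting to a local planar frame first, but once that's done, the test itself is this simple.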

For other purposes, like edging the lawn along walks and around garden spaces, GPS alone doesn't come close to being precise enough, and you might wish to rely upon some other positioning technology, or upon a hybrid system, perhaps utilizing technology more usually applied indoors. One approach would be to use an array of ZigBee protocol nodes spread around the perimeter of your yard, triangulating position based on signal strengths from those nodes, although this too might not be precise enough for edging.
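Finding a position from known distances to fixed nodes (strictly speaking, trilateration) reduces to a small linear system: subtracting one circle equation from the others eliminates the squared unknowns. The sketch below assumes the distances have already been estimated from signal strengths, which in practice is the noisy, difficult part:

```python
def trilaterate(p1, d1, p2, d2, p3, d3):
    """Estimate (x, y) from three anchor positions and distances by
    subtracting the first circle equation from the other two, which
    yields two linear equations in x and y."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero when the anchors are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

With more than three nodes, a least-squares fit over all pairs would smooth out some of the signal-strength noise.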

For rectangular garden spaces and raised beds, the rail and gantry approach employed by FarmBot provides enough precision for most operations, and provides a good foundation for greater precision based on imagery and force control, topics beyond the scope of this installment.

Returning to our lawn sprinkler example, you might want to take soil moisture levels into account, but incorporating a soil moisture sensor into your sprinkler would make it considerably more complicated, and, in any case, healthy turf can be very difficult to penetrate, so maybe you'd prefer to distribute several of these sensors around your yard and network them together using WiFi, Bluetooth, or ZigBee. These soil moisture sensing nodes could also be used to provide a local positioning system, as described above.
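The decision logic for such a network can be very simple. This sketch assumes each node reports a single moisture value and waters only when enough of the reporting nodes read dry; the threshold, quorum, and dict-of-readings interface are all assumptions for illustration:

```python
def needs_watering(readings, dry_threshold, quorum=0.5):
    """Decide to run the sprinkler when at least `quorum` of the
    reporting nodes read at or below the dryness threshold.
    `readings` maps node IDs to moisture values; nodes that failed
    to report are simply absent."""
    values = list(readings.values())
    if not values:
        return False  # no data: stay off rather than water blindly
    dry = sum(1 for v in values if v <= dry_threshold)
    return dry / len(values) >= quorum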

But what if you have children who leave their toys strewn about? Those toys are going to get wet; there's no helping that at this level of sophistication, but we'd like to be able to detect their presence, to avoid running into them and, if possible, to avoid wrapping the water hose around them. Several fixed ultrasonic range finders, or a single one on a motorized mount that sweeps from side to side, can provide good information about such obstacles, if they can be made to operate while sealed against water. Whiskers connected to microswitches may be a more practical solution.
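Ultrasonic range finders measure the round-trip time of an echo, so converting that time to a distance is a one-line calculation. A sketch, assuming times in seconds and the speed of sound in dry air at roughly 20 °C:

```python
SPEED_OF_SOUND_M_S = 343.0  # dry air at about 20 degrees Celsius

def echo_to_distance(echo_time_s):
    """Convert a round-trip ultrasonic echo time to meters.
    The pulse travels out and back, so halve the total path."""
    return SPEED_OF_SOUND_M_S * echo_time_s / 2

def obstacle_ahead(echo_time_s, limit_m=0.5):
    """True when the nearest echo falls within the stopping distance."""
    return echo_to_distance(echo_time_s) < limit_m
```

The 0.5 m stopping distance is an arbitrary example; the right value depends on the machine's speed and braking.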

There are many more types of sensors available, but all have one thing in common: they convert some bit of information about the physical world into an electrical signal, which then becomes digital grist for the mill of some processor and the code running on it, providing a basis for choosing what, if anything, to do next.

Taken in order, the next installment would be about that processing, but I've already gone into some detail about processing hardware and software, and have mentioned ROS in passing, so it would make more sense for me to skip on to the subject of actuators. However, because the topic of actuators and end effectors for performing detailed manipulations of living plants and their environments is nearly as unexplored for roboticists as it is for gardeners and farmers, I think it is time to bring this series to a close and begin a new one that attempts to bring these two audiences together, probably including explanations of new terms in brief glossaries at the bottom of the installments in which they are introduced, linking to these and to supplementary material from the text.

Previous installments

Thursday, July 14, 2016

TED talk by Emma Marris

I first learned about Emma Marris from another video, posted in conjunction with the publication of her book Rambunctious Garden...

...which I have previously linked to here.

The reason I believe her vision and my own are complementary is that devices using cultivation techniques sufficiently meticulous and noninvasive to enable mechanization of intensive polycultures could also allow some selective wildness (something other than aggressive and/or noxious weeds) back onto land used for production, intermixed with crops grown for harvest.

Tuesday, July 12, 2016

FarmBot open-source CNC 'cultibot'

They call it a ‘farming machine’ and I see no reason it couldn't be scaled up to be that, but at its current scale it's more of a gardening machine, which is fine. The point is that they're using the open source paradigm, with the stated intention of pushing the technology forward. The basic design is, apparently, quite easy to use, but it's also easy to extend in various ways. This is a great project, and I do hope they get the support they need to carry it forward!

Saturday, July 09, 2016

Why Is There A Seed Vault In The Arctic Circle? | DNews Plus

Maintaining genetic diversity would be an easier matter if agricultural practice weren't (effectively) working so hard to diminish it. Robotics can bring back the attention to detail needed for diversity-supportive practices to flourish.

Sunday, July 03, 2016

Robotics for Gardeners and Farmers, Part 5

This is not meant to be a comprehensive list of resources, far from it, just enough to get you over the hump of having no idea where to start.

First, let me quickly mention three sources from which you can get parts and kits, in alphabetical order: Adafruit, RobotShop, and SparkFun. You should also know about Make: and DIY Drones.

With the exception of DIY Drones, in addition to their own websites, these also have active YouTube channels: Adafruit, RobotShop TV, SparkFun, and Make:.

Next I'll briefly describe two computing platform families that are very popular and widely available, including from the vendors mentioned above: Arduino and Raspberry Pi.

Arduino
Arduino had its beginnings in the Master's thesis of a Colombian student at the Interaction Design Institute Ivrea. That project consisted of a development platform designed around Atmel's ATmega128, which is itself based on Atmel's AVR architecture. The Master's project went on to become the Wiring project, which, after being adapted to the less expensive ATmega8 processor, was forked as the Arduino project. Arduino is probably best classed as a single-board microcontroller. Arduino the Documentary is a short film that tells the story of how Arduino came to be.

Raspberry Pi
Similar in concept, the Raspberry Pi, developed by the Raspberry Pi Foundation, is designed around processors using the ARM architecture, also found in most smartphones. Because even the least powerful version of this platform can accommodate a keyboard and monitor, and because its processors are powerful enough to run application software on full-blown operating systems, the Raspberry Pi should be thought of as a single-board computer.

This really only scratches the surface of what's available, but these two platforms both have vibrant ecosystems, which means an abundance of related resources. For any particular project, there might be another platform that is a better fit for purpose, but the smaller the ecosystem surrounding any such alternative, the more expertise is likely to be required to use it.

This has been a very short installment, but we'll come back to the topic of the processing component of the sense-think-act cycle.

Next we enter the beginning of that cycle with a more detailed discussion of sensors, exploring the collection of information about environments composed of soil, plants, and critters.

Previous installments

Sunday, June 26, 2016

Robotics for Gardeners and Farmers, Part 4

What follows will begin with a whirlwind tour of topics at or near the bottom of the computing stack (the realm of bits and bytes), in the hope of tying up some loose ends at that level, followed by a few steps upwards, towards the sorts of things that technicians and hobbyists deal with directly.

Registers, Memory, Address Spaces, & Cache
I previously mentioned registers in the context of processor operation. A register is simply a temporary storage location very closely tied to the circuitry that performs logical and numerical operations, so closely that most processors can perform at least some of their operations, fetching one or two values, performing an operation, and storing the result, in one clock cycle (essentially one beat of its tiny, very fast processor heart). Memory, also called Random Access Memory (RAM), may be on the order of a billion times more abundant, but takes more time to access, typically several clock cycles, although several shorter values (a fraction of the bits that will fit through the channel to memory at once) may be read or written together, and a series of subsequent sequential addresses may only add one cycle each. A processor's address space may lead to more than just RAM; it is the entire range of values the processor is capable of placing on that channel to memory as an address, and using part of that range for communication with other hardware is common practice. Cache is intermediate between registers and RAM, and its purpose is to speed access to the instructions and data located in RAM. Access to cache is slower than access to a register, but faster than access to RAM. Sometimes there are two or more levels of cache, with the fastest level being the least abundant and the slowest level the most abundant.

A/D, D/A, & GPIO
While it's possible to do abstract mathematics without being concerned with any data not included in or generated by the running program, computers are most useful when they are able to import information from outside themselves and export the results of the computational work they perform, referred to as input/output (I/O, or simply IO). This subject is particularly relevant to robotics, in which the ability of a machine to interact with its physical environment is fundamental. That environment typically includes quantities which vary continuously rather than taking discrete values. Measuring such quantities produces analog signals, which must be converted to digital signals by devices called analog-to-digital converters (ADC, A/D) before they can be used in digital processing. Similarly, to properly drive hardware requiring analog signals, digital output must be converted to analog form using digital-to-analog converters (DAC, D/A). As with floating-point processors and memory management units, both of these were initially separate devices, but these functions have moved closer and closer to the main processing cores, sometimes now being located on the same integrated circuits (chips), although it is still common to have separate chips which handle A/D and D/A conversion for multiple channels. Such chips have made flexible general-purpose input/output (GPIO) commonplace on the single-board microcontrollers and single-board computers that have become the bread and butter of robotics hobbyists. GPIO doesn't necessarily include A/D and D/A functionality, but it often does, so pay attention to the details when considering a purchase. As is always the case with electronic devices, voltage and power compatibility is vital, so additional circuitry may be required to connect I/O pins to your hardware. It's best to start with kits or detailed plans crafted by experienced designers.
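The arithmetic an ideal A/D converter performs is easy to model. This sketch assumes a 10-bit converter with a 3.3 V reference, common defaults on hobbyist boards, though real converters add noise and nonlinearity:

```python
def adc_read(voltage, v_ref=3.3, bits=10):
    """Model an ideal n-bit ADC: map 0..v_ref onto 0..2**bits - 1."""
    voltage = min(max(voltage, 0.0), v_ref)  # clamp out-of-range input
    return round(voltage / v_ref * (2 ** bits - 1))

def dac_write(code, v_ref=3.3, bits=10):
    """The inverse operation, as an ideal DAC would perform it."""
    return code / (2 ** bits - 1) * v_ref
```

Round-tripping a voltage through both functions shows the quantization error you can expect: with 10 bits over 3.3 V, each step is about 3.2 millivolts.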

Now let's delve into software.

I've also already mentioned machine code in the context of the various uses of strings of bits. The earliest digital computers (there actually is another kind) had to be programmed directly in machine code, a tedious and error-prone process. The first major advancement in making programming easier for humans to comprehend and perform was assembly language, which came in a different dialect for each type of computer and instruction set. The beauty of assembly language was that, with practice, it was readable, and programs written in it were automatically translated into machine code by programs called assemblers. Abstractions which were later codified into the syntax of higher level computer languages, such as subroutines and data structures, existed in assembly only as idioms (programming practices), which constrained what it could reasonably be used to create. Nevertheless, many of the ideas of computer science first took form in assembly code.

Higher Level Languages
Once assembly code became available, one of the uses to which it was put was the creation of programs, called compilers, capable of translating code less closely tied to the details of computer processor operation into assembly code, from which it could be converted to machine code. That higher-level code was written in new languages that were easier for programmers to use, were more independent of particular computer hardware, and which systematized some of the low-level programming patterns already in use by assembly programmers, by incorporating those patterns into their syntax. Once these early languages became available, further progress became even easier, and many new languages followed, implementing many new ideas. Then, in the 1970s, came the C language, which was initially joined at the hip to the Unix operating system, a version of which, called BSD, quickly became popular, particularly in academia, driven in no small part by its use on minicomputers from DEC and, later, workstations from Sun Microsystems. In a sense, C was a step backwards, back towards the hardware, but it was still much easier to use than assembler, and well written C code translated to very efficient machine code, making good use of the limited hardware of the time. Moreover, the combination of C and Unix proved formidable, with each leveraging the other. It would be hard to overestimate the impact C has had on computing, between having been ported to just about every computing platform in existence, various versions aimed at specific applications, superset and derivative languages (Objective-C and C++), and languages with C-inspired syntax. Even now, compilers and interpreters for newer languages are very likely to be written in C or C++ themselves. C's biggest downside is that it makes writing buggy code all too easy, and finding those bugs can be like looking for a needle in a haystack, so following good programming practice is all the more important when using it.

Operating Systems
A computer operating system is code that runs directly on the hardware, handling the most tedious and ubiquitous aspects of computing and providing a less complicated environment and basic services to application software. The environment created by an operating system is potentially independent of particular hardware. In the most minimal case, the operating system may exist as one or more source code files included with the application source code at compile time, or as precompiled code linked with the application code after it has been compiled, then loaded onto the device by firmware. Not every device has or needs an operating system, but those that run application software typically do, and typically their operating systems are always running, from some early stage of boot-up until the machine is shut down or disconnected from power. There are also systems that run multiple instances of one or more operating systems on multiple virtual hardware environments, but these are really beyond the scope of what I'll be addressing here.

Next up, actual hardware you can buy and tinker with.

Previous installments