Sunday, June 26, 2016

Robotics for Gardeners and Farmers, Part 4

What follows will begin with a whirlwind tour of topics at or near the bottom of the computing stack (the realm of bits and bytes), in the hope of tying up some loose ends at that level, followed by a few steps upwards, towards the sorts of things that technicians and hobbyists deal with directly.

Registers, Memory, Address Spaces, & Cache
I previously mentioned registers in the context of processor operation. A register is simply a temporary storage location that is very closely tied to the circuitry that performs logical and numerical operations, so closely that most processors can perform at least some of their operations, fetching one or two values, performing an operation, and storing the result, in one clock cycle (essentially one beat of its tiny, very fast processor heart). Memory, also called Random Access Memory (RAM), may be on the order of a billion times more abundant but takes more time to access, typically several clock cycles, although several shorter values (a fraction of the bits that will fit through the channel to memory at once) may be read or written together, and a series of sequential addresses may only add one cycle each. A processor's address space may lead to more than just RAM; it is the entire range of values the processor is capable of placing on that channel to memory as an address. Using part of that range for communication with other hardware is common practice. Cache is intermediate between registers and RAM, and its purpose is to speed access to the instructions and data located in RAM. Access to cache is slower than access to a register, but faster than access to RAM. Sometimes there are two or more levels of cache, with the fastest level being the least abundant and the slowest level the most abundant.

A/D, D/A, & GPIO
While it's possible to do abstract mathematics without being concerned with any data not included in or generated by the running program, computers are most useful when they are able to import information from outside themselves and export the results of the computational work they perform, referred to as input/output (I/O, or simply IO). This subject is particularly relevant to robotics, in which the ability of a machine to interact with its physical environment is fundamental. That environment typically includes elements which vary continuously rather than having discrete values. Measurements of these elements arrive as analog signals, which must be converted to digital signals by devices called analog-to-digital converters (ADC, A/D) before they can be used in digital processing. Similarly, to properly drive hardware requiring analog signals, digital output must be converted to analog form using digital-to-analog converters (DAC, D/A). As with floating-point processors and memory management units, both of these were initially separate devices, but these functions have moved closer and closer to the main processing cores, sometimes now being located on the same integrated circuits (chips), although it is still common to have separate chips which handle A/D and D/A conversion for multiple channels. Such chips have made flexible general-purpose input/output (GPIO) commonplace on the single-board microcontrollers and single-board computers that have become the bread-and-butter of robotics hobbyists. GPIO doesn't necessarily include A/D and D/A functionality, but it often does, so pay attention to the details when considering a purchase. As is always the case with electronic devices, voltage and power compatibility is vital, so additional circuitry may be required in connecting I/O pins to your hardware. Best to start with kits or detailed plans crafted by experienced designers.
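The quantization an A/D converter performs can be sketched in software (real converters are analog circuitry; the 10-bit resolution and 3.3 V reference below are assumptions, chosen because they are typical of hobbyist boards):

```python
def adc_read(voltage, v_ref=3.3, bits=10):
    """Quantize an analog voltage into an integer ADC reading.

    A 10-bit converter maps 0..v_ref onto the integers 0..1023,
    so each step represents v_ref / 1024 volts.
    """
    steps = 1 << bits                     # 2**10 = 1024 levels
    code = int(voltage / v_ref * steps)   # truncate to the step below
    return max(0, min(steps - 1, code))   # clamp to the representable range

def dac_write(code, v_ref=3.3, bits=10):
    """The inverse: convert a digital code back to an approximate voltage."""
    return code / (1 << bits) * v_ref

print(adc_read(1.65))     # 512, i.e. halfway up the range
print(dac_write(512))     # 1.65 (approximately)
```

Note that the conversion is lossy: every voltage within one step of the reference scale maps to the same code, which is why converters with more bits are prized for precise measurements.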

Now let's delve into software.

Assembler
I've also already mentioned machine code in the context of the various uses of strings of bits. The earliest digital computers (there actually is another kind) had to be programmed directly in machine code, a tedious and error-prone process. The first major advancement in making programming easier for humans to comprehend and perform was assembly language, which came in a different dialect for each type of computer and instruction set. The beauty of assembly language was that, with practice, it was readable, and programs written in it were automatically translated into machine code by programs called assemblers. Abstractions which were later codified into the syntax of higher level computer languages, such as subroutines and data structures, existed in assembly only as idioms (programming practices), which constrained what it could reasonably be used to create. Nevertheless, many of the ideas of computer science first took form in assembly code.

Higher Level Languages
Once assembly code became available, one of the uses to which it was put was the creation of programs, called compilers, capable of translating code less closely tied to the details of computer processor operation into assembly code, from which it could be converted to machine code. That higher-level code was written in new languages that were easier for programmers to use, were more independent of particular computer hardware, and which systematized some of the low-level programming patterns already in use by assembly programmers, by incorporating those patterns into their syntax. Once these early languages became available, further progress became even easier, and many new languages followed, implementing many new ideas. Then, in the 1970s, came the C language, which was initially joined at the hip to the Unix operating system, a version of which, called BSD, quickly became popular, particularly in academia, driven in no small part by its use on minicomputers from DEC and workstations from Sun Microsystems. In a sense, C was a step backwards, back towards the hardware, but it was still much easier to use than assembler, and well written C code translated to very efficient machine code, making good use of the limited hardware of the time. Moreover, the combination of C and Unix proved formidable, with each leveraging the other. It would be hard to overestimate the impact C has had on computing, between having been ported to just about every computing platform in existence, various versions aimed at specific applications, superset and derivative languages (Objective-C and C++), and languages with C-inspired syntax. Even now, compilers and interpreters for newer languages are very likely to be written in C or C++ themselves. C's biggest downside is that it makes writing buggy code all too easy, and finding those bugs can be like looking for a needle in a haystack, so following good programming practice is all the more important when using it.

Operating Systems
A computer operating system is code that runs directly on the hardware, handling the most tedious and ubiquitous aspects of computing and providing a less complicated environment and basic services to application software. The environment created by an operating system is potentially independent of particular hardware. In the most minimal example, the operating system may exist as one or more source code files, which are included with application source code at compile time or precompiled code which is linked with the application code after it has been compiled, then loaded onto the device by firmware. Not every device has or needs an operating system, but those that run application software typically do, and typically their operating systems are always running, from some early stage of boot-up until the machine is shut down or disconnected from power. There are also systems that run multiple instances of one or more operating systems on multiple virtual hardware environments, but these are really beyond the scope of what I'll be addressing here.

Next up, actual hardware you can buy and tinker with.

Previous installments

Sunday, June 12, 2016

Robotics for Gardeners and Farmers, Part 3

From this point on I'm going to assume that anyone who's still with me isn't intimidated by technical terms and discussions, and I'll stop apologizing for including them. If I fail to explain any new term so you can understand how I'm using it, please say so in a comment.

Before diving back down to the level of fundamentals, there's a bit more to say about serial communications.

Serial Ports & Communication Protocols
Serial ports, on a microcontroller or single board computer, are made up of a set of pins or solder pads that work together to handle a single, typically bidirectional serial connection with some other device (see also UART). Serial ports on enclosed devices like laptop or desktop computers are standardized connectors with standardized signals on particular pins or contacts. Examples include RS-232 and USB ports. While such ports have their own protocols, communication protocols also include layers that ride on top of those of physical connections. One example of such a protocol that I expect to become increasingly important in the future is RapidIO. An even higher level protocol, used by ROS, the Robot Operating System, is rosbridge.

Okay, now back down to the bottom of the stack for a look at how computers do what they do. This will be more than you need to know to just use a computer, but when you're wiring up sensors or other hardware to or programming a microcontroller or single board computer it could come in handy.

Binary logic
Once again, think simple. At the binary level, logic operations are about taking one or two bits as input and producing a single bit as output. Binary NOT simply changes a 1 to a 0 or a 0 to a 1. Binary AND produces a 1 as output if and only if ("iff") both of two inputs are 1. Binary OR produces 1 as an output if either of its two inputs is 1, or if both are 1. Binary NAND is like running the output of an AND operation through a NOT operation. Likewise, NOR is like running the output of an OR through a NOT. XOR, also called Exclusive OR, produces a 1 as output if either of two inputs is 1, but not if both are 1 or if both are 0. Implementations of these binary logic operations in circuitry are referred to as "gates" — AND gate, OR gate, and so forth. When processing cores perform binary logic operations, they typically do so on entire strings of bits at the same time.
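These operations can be tried directly in Python, whose bitwise operators (a software sketch of the hardware gates described above) work on whole strings of bits at once:

```python
# Python's bitwise operators apply these logic operations to whole
# strings of bits at the same time, just as a processing core does.
a, b = 0b1100, 0b1010

print(bin(a & b))        # AND: 1 only where both inputs are 1 -> 0b1000
print(bin(a | b))        # OR:  1 where either (or both) is 1  -> 0b1110
print(bin(a ^ b))        # XOR: 1 where exactly one input is 1 -> 0b110
print(bin(~a & 0b1111))  # NOT, masked to 4 bits               -> 0b11

# NAND and NOR are just AND and OR followed by NOT:
print(bin(~(a & b) & 0b1111))  # NAND -> 0b111
print(bin(~(a | b) & 0b1111))  # NOR  -> 0b1
```

The masking with `0b1111` is needed because Python's integers are unbounded; a real 4-bit register would invert within its fixed width automatically.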

Bit shift
Moving all of the bits in a string of bits one position to the left, inserting a 0 at the right end, is equivalent to multiplying by 2, unless there was already a 1 in the left-most (most significant) position, with no place to go, which is called overflow. Moving all of the bits one position to the right, inserting a 0 at the left end, is equivalent to dividing by 2, unless there was already a 1 in the right-most (least significant) position, with no place to go; that bit, the remainder of the division, is simply lost, a situation sometimes loosely called underflow. Sometimes overflow or underflow are errors, and sometimes they are not, depending on the context in which they occur.
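A quick sketch of shifts as multiplication and division; since Python's own integers are unbounded, a mask is used to simulate a fixed-width register (the 8-bit width is an assumption for illustration):

```python
x = 0b0011              # 3
print(x << 1)           # 6: left shift multiplies by 2
print(x >> 1)           # 1: right shift divides by 2, discarding the low bit

# In a fixed-width register (say 8 bits) a shifted-out bit is lost.
# Python integers grow without bound, so we mask to simulate 8 bits:
y = 0b1000_0000         # 128, a 1 already in the most significant position
print((y << 1) & 0xFF)  # 0: the 1 was shifted out, i.e. overflow
```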

Integer
Integer has the same meaning in computing as it does in arithmetic, except that there are additional constraints. In computers, integers are represented by strings of bits, generally no longer than the number of bits that the processing core(s) can handle in a single operation, usually either 32 or 64 these days. These binary representations of integers come in two basic types, signed or unsigned. A 32-bit unsigned integer can represent any whole value between 0 and 4,294,967,295 (inclusive), whereas a 32-bit signed integer can represent any whole value between −2,147,483,648 and 2,147,483,647 (inclusive). As with left-shift, integer addition and multiplication can result in overflow, and, as with right-shift, integer subtraction can result in underflow. Integer division is a special case; any remainder is typically discarded, but can be accessed by something called the modulo operation.
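A sketch of these limits in Python; again, Python's integers are unbounded, so fixed-width behavior is simulated by masking, whereas real 32-bit hardware does this automatically:

```python
MASK32 = 0xFFFF_FFFF                 # 32 bits, all set to 1

print(MASK32)                        # 4294967295, the largest 32-bit unsigned value
print((MASK32 + 1) & MASK32)         # 0: unsigned addition wraps around on overflow

# Integer division discards the remainder; the modulo operation recovers it.
quotient, remainder = divmod(17, 5)
print(quotient, remainder)           # 3 2
```

The `divmod` call is just Python's packaging of the two operations; `17 // 5` and `17 % 5` give the same results separately.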

Floating point
As with integers, floating point numbers generally come in 32 and 64-bit sizes, with the 64-bit version both having a greater range and being more precise. They have gradually come into more common use as computing hardware capable of performing floating point operations at a reasonable rate became more affordable, eventually being integrated into the central processing units (CPUs) found in most computers.
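A small sketch of the precision difference, using Python's standard `struct` module to round a 64-bit value down to 32-bit storage (Python's own floats are 64-bit):

```python
import struct

# 0.1 has no exact binary representation at any size, but 64 bits get
# much closer than 32. Round-tripping through struct's 'f' format shows
# what survives storage as a 32-bit float.
x64 = 0.1                                         # a 64-bit Python float
x32 = struct.unpack('f', struct.pack('f', x64))[0]

print(f"{x64:.20f}")   # close to 0.1 out to about 16 digits
print(f"{x32:.20f}")   # diverges after about 7 digits
```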

Machine code
Another use for strings of bits is as the code that controls the operation of a processing core. In the simplest case, each bit or short subset of a string of bits forming an instruction is actually a control signal, although its significance may depend on the state of one or more other bits in the string. For example, part of the instruction might specify 32-bit unsigned integer addition, while two other parts specify the registers from which to draw the operands and yet another part specifies the register into which to place the result, with the operation finishing by incrementing the program counter (a pointer to the memory location of the next instruction). This approach can be carried to an extreme in what's called a VLIW (Very Long Instruction Word) architecture. An alternative approach, called microcode, establishes a layer of abstraction between the level of control signals and the code that constitutes a program, and can also allow the same code to run on a range of closely related processor designs with nonidentical control signals. These days most processors found in consumer devices use microcode.
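Here is a toy sketch of that idea; the 16-bit instruction format (a 4-bit opcode plus three 4-bit register fields) is entirely invented for illustration and does not correspond to any real instruction set:

```python
# A toy instruction decoder: 4-bit opcode, then three 4-bit fields
# naming the destination register and the two source registers.
OP_ADD = 0x1

def execute(instruction, registers):
    opcode = (instruction >> 12) & 0xF
    dest   = (instruction >> 8)  & 0xF
    src_a  = (instruction >> 4)  & 0xF
    src_b  =  instruction        & 0xF
    if opcode == OP_ADD:
        # unsigned addition; masking models a 32-bit register width
        registers[dest] = (registers[src_a] + registers[src_b]) & 0xFFFF_FFFF
    return registers

regs = [0] * 16
regs[2], regs[3] = 40, 2
execute(0x1123, regs)   # ADD: r1 <- r2 + r3
print(regs[1])          # 42
```

Real decoders do this field-splitting in wiring rather than arithmetic, but the principle of slicing an instruction into control fields is the same.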

Processing cores
Up until now I've referred to processing cores without having actually defined them. A core is like a knot of circuitry that performs a set of closely related operations. The most basic type of core is an Arithmetic Logic Unit (ALU). These cores handle binary logic, bit shifting, integer arithmetic, and sometimes also floating point operations, although floating point circuitry was initially found on separate chips and only later included on the same chips as ALUs. Another common type of core is concerned with memory in the processor's primary address space (yet another use of strings of bits). Addresses usually take the form of unsigned integers, but ordinary integer operations don't apply to them.

GPU & GPGPU
Graphics Processing Units (GPUs) belong to the more general class called Vector Processors. "Vector" here means the same thing as it does in linear algebra, although GPUs can be very useful in computing geometric vectors. They are at their best when performing the same operation or sequence of operations on a large set of data, and in these sorts of applications they have a huge performance advantage over more conventional processing cores. Robotic applications where you might find a GPU include processing data from a camera or microphone. General purpose computing on GPUs (GPGPU) is a growing trend.

There's a bit (informal use) more to be said about processors and such before working our way back up the stack, but it can wait for the next installment.

Previous installments

Monday, June 06, 2016

Robotics for Gardeners and Farmers, Part 2

In Part 1 of this series I said "you can combine purchased bits with your own bits to create novel devices that perform tasks for which no off-the-shelf solution exists." But why bother, right? Isn't it just a matter of time? Perhaps, but this is something of a chicken-and-egg problem. Investment follows the perception of a potential market. Without the perception of a market into which to sell the fruits of product development, investment is hard to come by, hence little development happens and few products are forthcoming. To really get behind the application of robotics to horticulture and agriculture, in a manner that takes full advantage of the potential of robotics to leverage the very best practices and make them scalable, investors must be convinced that their money will at least accomplish something worthwhile, and preferably that it will bring them a nice return. One way you can contribute to creating that perception of a market is by pushing the envelope of what can be done with what's available now, measuring the results, and talking about it, with friends and neighbors and on the social networks of your choice, preferably accompanied with video that makes clear what your creations do. (I'll come back to the use of social networks later in this installment.)

As I was saying at the close of Part 1, before I can go into much more detail, some additional definitions are in order.

Bit
I'm fond of this word in its informal sense, but, as applied to computers and related technologies, a bit is the smallest unit of information, usually represented by a single binary digit, which can have either of two values, 0 or 1. A bit can be represented physically in many ways, the side of a coin facing up after a toss, for example. It is typically represented electronically by either a high state (a measurable voltage, either + or -) or a low state (usually ground), and while the signal (see below) representing a bit might be constant until changed, it is more commonly compact in time, and both created and retrieved in reference to a clock signal (see second item below).

Four bits taken together are called a nibble, represented by a 4-digit binary number, which can have any of 16 values: 0000, 0001, 0010, 0011, 0100, 0101, 0110, 0111, 1000, 1001, 1010, 1011, 1100, 1101, 1110, or 1111. These sixteen values can each be represented by a single hexadecimal (base 16) digit: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F, respectively. When combining longer strings of binary digits, such as two nibbles to form a byte (8 bits), the use of hexadecimal becomes first a convenience, then a necessity, as longer strings of 0s and 1s are very difficult to parse visually — hexadecimal 00 is equivalent to binary 00000000, and hexadecimal FF is equivalent to 11111111.
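Python can convert between these representations directly, which is handy when you start reading datasheets full of hexadecimal values:

```python
byte = 0b1111_0000          # two nibbles: 1111 (F) and 0000 (0)

print(hex(byte))            # 0xf0
print(f"{byte:08b}")        # 11110000

print(int("FF", 16))        # 255, i.e. binary 11111111
print(f"{0xFF:08b}")        # 11111111
```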

Signal
About thirty years ago, a very bright man posed the question (paraphrased): given that computer monitors were typically higher quality than televisions, why were the images they produced so much more primitive than those produced by TVs? The answer was all about the source of the signals each was being fed. Television signals, at that time, were derived almost entirely from imagery captured from the physical world, whereas the images on monitors had to be generated by the computers they were attached to. There was no contest; even the supercomputers of that time simply weren't up to the task of generating life-like imagery in real time; rather they would spend minutes or hours calculating each frame, and even then the result was cartoonish at best. These days smartphones and game consoles do a passable job of generating moving points of view within dynamic 3-dimensional environments of nontrivial complexity.

Like the simplest form of sensor, the simplest signal is either on or off. The signal from a light switch is either nothing at all or a connection to a power source, usually alternating current (AC) with a voltage (in the U.S.) between 110 and 120. The on/off supply of power to the light is, in that example, inseparable from the on/off signal, but what if the switch only controls the supply of power to a relay (a type of magnetic switch controlled by a small direct current running through a coil of wire), which in turn controls the supply of power to the light? In this case the power to the light is distinct from the signal (the power to the relay) that controls it, although that signal is still of the simplest, on/off type. This becomes clearer if we use another sort of relay, one that is closed (on) by default, making a connection unless there is a current through its magnetic coil, such that the light is on when the switch is off, and off when the switch is on.

Signals may be combined with the supply of power, but they are about the representation/encoding and transmission of information, and signal processing is about the extraction/decoding of information from incoming signals.

Under ideal conditions, the voltage of the AC power arriving at your home from the grid, graphed over time, forms a constant sine wave, with a cycle time (in the U.S.) of 1/60 second. Any perturbations from that perfect sine wave carry information, perhaps from a switch being turned on or off within your home, or perhaps from an event happening elsewhere, even many miles away. Background information of unknown significance from an indeterminate source is usually referred to as noise, although at some epistemological peril. (The constant snow we used to see on tube-type televisions attached to antennas, when they were tuned to an unused channel, turned out to be the background radiation left over from the Big Bang.)

Two common ways of encoding information into an AC signal are amplitude modulation (varying the voltage) and frequency modulation (varying the frequency or cycle time), from which the AM and FM radio bands get those names.
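A rough sketch of the two schemes, computing single samples of modulated waveforms; the frequencies are arbitrary, and the simplified (non-integrating) FM formula is an illustrative assumption, not how a real FM transmitter is built:

```python
import math

def am_sample(t, carrier_hz, message):
    """One sample of an amplitude-modulated signal: the message
    scales the strength (voltage) of a fixed-frequency carrier."""
    return message(t) * math.sin(2 * math.pi * carrier_hz * t)

def fm_sample(t, carrier_hz, deviation_hz, message):
    """One sample of a crudely frequency-modulated signal: the message
    nudges the carrier's frequency up and down instead of its strength.
    (A real modulator integrates the message; this is only a sketch.)"""
    return math.sin(2 * math.pi * (carrier_hz + deviation_hz * message(t)) * t)

steady = lambda t: 1.0            # a constant "message"
print(am_sample(0.025, 10, steady))   # 1.0: carrier at full strength
```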

Clock
Digital devices use a different kind of signal, one more like that of a simple switch, but switching back and forth many times per second. The simplest form of such a signal is called a clock. A clock signal is a constant square wave, rising abruptly, remaining in a high state for an instant, then falling abruptly, remaining in a low state for another instant, over and over in a regular rhythm. Such a clock signal is a reference against which other signals are measured, governing the encoding of information into them and the extraction of information from them.
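A sketch of how a receiving circuit might use such a clock, sampling the data line only at each rising edge (the lists of samples here are invented for illustration):

```python
# Recover bits from a data signal by sampling it on the clock's rising edges.
clock = [0, 1, 0, 1, 0, 1, 0, 1]       # a regular square wave
data  = [1, 1, 0, 0, 1, 1, 0, 0]       # the signal carrying information

bits = []
previous = 0
for c, d in zip(clock, data):
    if previous == 0 and c == 1:       # a rising edge of the clock
        bits.append(d)                 # the data line is valid now; sample it
    previous = c

print(bits)   # [1, 0, 1, 0]
```

Between edges the data line is free to be changing or noisy; only its state at the agreed moment matters, which is the whole point of having a shared clock.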

Serial vs. Parallel
Electronic representation of multi-digit binary numbers can be either serial (one bit at a time) or parallel (several bits as separate signals on separate channels), or both (several bits at a time, combined with a clock signal). Nearly every computer in existence today moves bits around internally at least 4 at a time, and more commonly 32 or 64 at a time, to the beat of a clock running at millions or billions of cycles per second. Externally, over cables and wireless connections between devices, sending one bit at a time is the rule, and sending groups of bits together in lockstep is the exception. Many bits are sent as a string, over such connections, and then reconstituted at the other end.
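A sketch of that serialization and reconstitution, sending the least significant bit first (the order used by common UART-style serial hardware):

```python
def serialize(byte):
    """Send a byte one bit at a time, least significant bit first."""
    return [(byte >> i) & 1 for i in range(8)]

def deserialize(bits):
    """Reconstitute the byte at the receiving end."""
    byte = 0
    for i, bit in enumerate(bits):
        byte |= bit << i
    return byte

stream = serialize(0b1010_0001)
print(stream)                      # [1, 0, 0, 0, 0, 1, 0, 1]
print(bin(deserialize(stream)))    # 0b10100001
```

Real serial links add framing (start and stop bits, sometimes parity) around each byte so the receiver knows where one ends and the next begins; that bookkeeping is omitted here.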

One place where you will find external connections with 4, 8, or even 16 bits in parallel is on the pins or solder pads provided on single board computers for hobbyists, such as the Raspberry Pi. These are typically configurable, capable of operating either singly or together as a group, in parallel, and can frequently also handle analog signals, in which the information content is encoded as a voltage that varies anywhere between ground state and high state, or, more commonly, pulse width modulation (PWM), in which the information is encoded in the timing of changes between ground state and high state.
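The PWM idea can be sketched in software; the 10-slot cycle below is an arbitrary illustration, and real boards generate these waveforms in hardware or firmware through their GPIO libraries:

```python
# Pulse width modulation: the value is carried by the fraction of each
# cycle spent in the high state (the duty cycle), not by the voltage.
def pwm_cycle(duty, slots=10):
    high = round(duty * slots)
    return [1] * high + [0] * (slots - high)

wave = pwm_cycle(0.3)
print(wave)                    # [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]

# Averaging the waveform recovers the encoded value, which is in effect
# what a motor's inertia or an eye watching a dimmed LED does.
print(sum(wave) / len(wave))   # 0.3
```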

That's enough technical talk for one installment. Now back to the discussion of social networks. Even if talk of bits, bytes, processors, and signals leaves you numb, you can act on what follows.

The most important social network is your friends, neighbors, and those you interact with through face-to-face meetings. Find out which of the people in your network is interested in and/or has some competence in technology, either the technologies used in robotics or the biology-based technologies used in organic gardening, agroecology, biological agriculture, or whatever you prefer to call it. Chat them up; find out what they know and what they're interested in, particularly what they're interested in doing themselves. Also find out which online social networks (Facebook, Twitter, etc.) they use, and get connected to them there.

Build on that base. Share your discoveries and projects with this group, and keep up with what they share. If they've done something particularly impressive, maybe do a video-recorded interview and post that. Also nudge your contacts to build out the network by including others they know. Be on the lookout for other such networks, whether intentional or not, and hook up with them as you find them, also any interested individuals you locate online.

Find out if your school or school system has any robotics activities. If you have children, see whether they’re interested. Either way, introduce yourself to the instructor or club sponsor. Chances are they know a few technically adept youth, who would be enthusiastic for a chance to do something real that mattered.

Also introduce yourself to any industrial arts teachers. Robots aren't only computers, but have mechanical components which are essential to what they do, and not all such components can be 3D printed in plastic. Sometimes you might need someone with access to a lathe or a welder or a furnace capable of melting metal for molding, and the skills to use it.

And finally, bug the equipment dealers around you for smaller, lighter, more intelligent, more detail-oriented, less destructive options. Tell them you want to get away from packing down and tearing up the soil, and away from the use of poisons of all types. If they hear this often enough, they'll be passing the message up the chain to their suppliers.

Keep notes, whether on paper or on the device or cloud of your choice, so you don't lose track of what you've already learned.

Get back to contacts periodically.

Not what you were expecting? As I was saying, the perception of a market is critical in motivating investment, and investment can vastly accelerate the development of technology. But money doesn't sit around for long; it gets invested one way or another, into the best option of which the investor is aware. To attract that investment, it's important to make some commotion.

Previous installment

Monday, May 30, 2016

Robotics for Gardeners and Farmers, Part 1

What would you like to do with your garden or farm that you can't make time for, don't have patience for, or just can't imagine how you'd go about getting it done? Weeding without herbicides? Maintaining a continuous canopy of foliage by replacing plants as they mature? Dispensing with rows and using nearly all of the available space nearly all the time? Mixing native flowers in with your vegetables in a random fashion? Selectively harvesting certain plants in a polyculture mix without having to crush others under wheels to do it? Including perennials in your mix? Allowing poultry to range free under the shade of your taller crops, without fear that they'll wander off or be taken by a fox or bobcat? Whatever it is, there may soon be a machine available that makes it not only possible but practical.

Some of the items on this wish list, or others you might have come up with yourself, are probably already practical for those with a bit of knowledge about available technologies and a willingness to tinker. For example, it's not too difficult to imagine a drone (see below) establishing a virtual fence around a flock of chickens, and also keeping any predators that might show up at bay. With a bit more knowledge, some imagination, and persistence, all of the items I listed above can probably be accomplished with technologies that are available now.

Drone
This term has a range of meanings, but is usually applied to aircraft that are either operated remotely or which navigate for themselves. That auto-navigation can be entirely preprogrammed, a combination of preprogramming and flexible routing, or entirely autonomous, based on goals and rules. Drones can resemble either conventional aircraft, with fixed wings and one or more propellers pulling them forward, or helicopters, with one or more (usually at least two) rotors, primarily producing lift, spinning around vertical shafts. The most common configuration, and what most people think of when they hear the word drone outside of a military context, is four such rotors, arranged in a square, with most of the mass of the craft suspended in the space between them, at the center of that square.

Not tomorrow, and probably not the day after, but most likely within the next decade, tiny drones on the scale of moths or butterflies, with enough sophistication to be variously useful, will become available. With appropriate sensors and programming (see below), these should be very helpful in collecting all sorts of information, anything you might want to know about what's happening in your garden or field, and all without any disturbance more significant than occasionally brushing a rotor or wing against a leaf. They should also be capable of performing a wide range of very detailed operations, for example pollination, but possibly even the precise application of tiny amounts of potent substances, which might mean herbicides and pesticides, but might also mean something less noxious, like concentrated sodium hydroxide or phosphoric acid, or an inoculating solution containing some specific bacteria or fungus.

Sensor
The simplest sort of sensor is a switch, which allows current to pass or blocks it from passing, like a light switch. Many light switches do what they do by tipping a tube containing mercury (a metal that is liquid at room temperature) so that it either makes an electrical connection between two wire contacts or does not. That sort of tube, partially filled with mercury, can also be used to detect whether something to which it is attached, like a lamp, has tipped over. There are also magnetic switches that close (make contact to form a circuit) when in close proximity to a magnet and open (break the circuit) if there is no magnet nearby. These are frequently used to detect whether a door or window has been opened. Sometimes a sensor is nothing more than a thin rod, even a feather, connected to such a simple switch, which completes a circuit if the rod is moved far enough, and breaks that circuit again if the rod is allowed to swing back. Sensors can also be a good deal more complex, but I'll need to lay some groundwork before addressing this subject in detail.

Program or Programming
These words are basically interchangeable, and both can be either a noun or a verb. As nouns they refer to the collection of computer code (hand waving pending more detailed discussion) embedded in or available to be loaded into a computer processing core, which you can think of as the chip at the heart of a computer, although processing cores come in many types and sometimes with many on a single chip. As verbs they refer to the act of creating such code.

So there's not a lot I can say without defining some additional terms, a process that's sure to continue at least throughout the next installment, and perhaps several installments. I will attempt to make this a little more interesting than your typical glossary.

Automation
Automation need not involve computers, nor even anything electrical; it can be entirely mechanical. Farmers have been using automation since the advent of the earliest horse drawn sickle mowers, more than 150 years ago, and many forms of automation have become common on farms, from the microwave ovens in kitchens, with their rotating platters and timers that turn them off after a preset time, to combine harvesters that cut, thresh, and temporarily store grain, distributing the chaff back onto the field. Automation is difficult to define, but when you've seen as many examples of it as just about everyone living in the developed world has seen, you're sure to have a pretty good idea of what it's about.

Robot
Robot is even more difficult to define, in large part because people have differing ideas about what the word should mean, and attempting to provide a definition might be considered a fool's errand. There's even a podcast devoted to determining whether particular examples qualify as a robot. Again, you probably have a reasonable sense for what is meant by the word, but I would like to fill out the picture a bit:
  • Sense, Think, Act — The most fundamental attributes of a robot are that it
    1. somehow acquires information (even just a simple on/off signal) from its environment
    2. decides what action to perform (and whether to perform that action) based on the interaction of that information with its programming
    3. performs the action, when the decision is to do so
  • Physicality — While there are 'bots' that exist only as programs, taking in and putting out nothing but data, having physical form, with some sensors and/or some mechanism of its own, is generally considered a requirement for being a robot, and it's devices with this property that we're concerned with here.
There are other properties we might include, for differentiating between a robot and an automaton, or between a robot and an artificial intelligence, but these distinctions aren't particularly relevant in this context, so let's leave it at that.
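The sense-think-act cycle described above can be sketched as a minimal control loop. Everything here is a hypothetical stand-in for real hardware, a toy sensor and a do-nothing actuator, but the three-step shape is the point:

```python
def sense_think_act(read_sensor, act, cycles):
    """Run the sense-think-act cycle a fixed number of times.
    read_sensor and act are stand-ins for real hardware interfaces.
    Returns how many times the loop decided to act."""
    actions_taken = 0
    for _ in range(cycles):
        signal = read_sensor()      # 1. sense: acquire information
        should_act = bool(signal)   # 2. think: here, a trivial policy
        if should_act:              # 3. act, when the decision is to do so
            act()
            actions_taken += 1
    return actions_taken

# A toy on/off sensor that fires on the third and sixth readings
readings = iter([0, 0, 1, 0, 0, 1])
count = sense_think_act(lambda: next(readings), lambda: None, 6)  # count is 2
```

A real robot replaces the trivial "think" step with whatever its programming dictates, but even the most sophisticated examples are elaborations of this loop.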

Robotics
Robotics is the study and practice of everything that goes into creating robots, and is therefore a radically multidisciplinary field. It includes, but is most certainly not limited to, mechanics, electronics, and computer science. Happily, you don't have to know everything about all of the various aspects of robotics to take advantage of the robots created by roboticists, nor even to make valuable contributions to the field. You can combine purchased bits with your own bits to create novel devices that perform tasks for which no off-the-shelf solution exists; in fact, doing this is broadly encouraged, and supported with a wide variety of parts, kits, and code that is free to use. I'll provide some sources for these in a future installment.

Until the next installment, I'd like to suggest that you look around for examples of automation that are already part of your life, and give some thought to what else you might like to automate, if doing so were reasonable and affordable.

Sunday, May 29, 2016

Biological Agriculture for Roboticists, Part 6

In a previous installment, I said that identifying weeds based on what's left standing after a patch of ground has been grazed won't control low-growing plants, using goatheads as an example.

To begin with, what one type of herbivore (cattle) finds distasteful, another (goats) may find delectable, so not everything left standing by a single species is useless, and it's a good idea to run cattle, which strongly prefer grass, together with or immediately followed by another, less picky herbivore, like goats.

Secondly, being unpalatable doesn't automatically make a plant a weed. Weeds are plants that move aggressively into disturbed ground, smother or chemically inhibit other plant life, and/or put most of their energy into producing above-ground growth and seeds rather than roots. They are typically annuals or biennials (producing seed in their second year). If a plant does none of these things and is not toxic to livestock or wildlife, it's probably not accurate to describe it as a weed. Even so, if livestock won't eat it, and it's neither a candidate for protection as a rare, endangered, or threatened species nor vital to some animal that is, you probably don't want it taking up ground that could be producing something more useful in your pasture. So what's left standing after grazing isn't such a bad indication, but, as already mentioned, this test won't catch low-growing plants.

So, how to deal with those low-growing plants? Good question, and a good subject for further research. First you have to be able to detect their presence and distinguish them from the grass stubble left behind by grazing. Then there's the matter of locating the main stem and the point where it connects to the root system. If a plant is lying on the ground, supported by it and not swaying in the breeze, the modeling of its branching structure from video of its motion I referenced earlier won't work. One way to locate the stem might be to use a vacuum that pulls in a large enough volume of air to pick up the vining tendrils and suck them in, and if you have a serious infestation of this sort of weed then using such equipment might be a reasonable choice. Another way might be a pincer-like manipulator, with cylindrical counter-rotating rotary rasps for fingers, pinching the vine at any point, determining which direction to rotate by trial and error, then using the resulting tension to guide the manipulator to the main stem so it can be uprooted.

Such a manipulator might be generally better at uprooting than a simple grasping manipulator, since the rotation of the fingers would replace retracting the robotic arm, potentially making the overall operation more efficient. A variation on the theme that might prove more generally useful would have low points on each finger matched by shallow indentations on the other finger, at the end furthest from the motors driving finger rotation, progressing to protruding hooks matched by deep indentations at the end nearest the motors. This would allow the same attachment to be used both for ordinary uprooting and for gathering up something like goatheads, simply by adjusting where along the length of the rotating fingers it grasped the plant.
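The trial-and-error choice of rotation direction could be sketched as follows. This is entirely hypothetical, no such manipulator exists that I know of, and the tension sensor, rotation command, and threshold value are all invented for illustration:

```python
def choose_rotation_direction(measure_tension, rotate, threshold=0.1):
    """Hypothetical sketch of the trial-and-error step: pinch the vine,
    rotate briefly in one direction, and keep that direction only if
    tension on the vine builds; otherwise reverse."""
    baseline = measure_tension()
    rotate(+1, steps=5)                  # trial rotation in one direction
    if measure_tension() - baseline > threshold:
        return +1                        # tension built: this direction winds the vine
    rotate(-1, steps=5)                  # undo the trial...
    return -1                            # ...and wind the other way

# Toy stand-in for the hardware: in this simulation, direction +1
# happens to be the one that winds the vine and builds tension.
tension = [0.0]
def measure():
    return tension[0]
def rotate(direction, steps=1):
    tension[0] = max(0.0, tension[0] + 0.05 * direction * steps)

chosen = choose_rotation_direction(measure, rotate)  # chosen is +1
```

The same logic would work regardless of which direction turns out to be correct; only the simulated hardware here is rigged in advance.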


I also promised to get back to the use of sound, in the context of fauna management and pest control. This by itself could easily be the subject of a lengthy book. Information about the environment can be gleaned from ambient sounds as well as from active sonar, and a robot might also emit sounds for the effects they can produce.

Sonar is already widely used in robotics as a way of detecting and determining the distance to obstacles. While thus far more sophisticated technologies, such as synthetic aperture sonar, have primarily been developed for underwater use, a large market for autonomous robots operating at modest ground speeds in uncontrolled environments might prove incentive enough to justify developing versions for use in air.
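The arithmetic behind basic sonar ranging is simple: emit a pulse, time the echo, and halve the round-trip distance. A minimal sketch, assuming sound in air at roughly room temperature:

```python
SPEED_OF_SOUND_AIR = 343.0  # metres per second, at roughly 20 °C

def sonar_distance(round_trip_seconds, speed=SPEED_OF_SOUND_AIR):
    """Distance to an obstacle from the round-trip time of a sonar ping.
    The pulse travels out and back, hence the division by two."""
    return speed * round_trip_seconds / 2.0

d = sonar_distance(0.010)  # a 10 millisecond echo delay: about 1.7 metres
```

Real units must also correct for temperature (the speed of sound in air varies noticeably with it) and cope with multiple echoes, but this is the core calculation.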

Meanwhile, there is a wealth of information available from simple microphones. From tiny arthropods to passing ungulates, many animals produce characteristic sounds, with familiar examples including crickets, frogs, and all types of birds and mammals. These sounds can help identify not only what species are present but where they are and what they are doing.
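Even a crude measurement can begin to separate sound sources. As a sketch of the idea, the zero-crossing rate of a recording gives a rough pitch estimate, enough to tell a cricket's chirp (kilohertz range) from a bullfrog's call (low hundreds of hertz), though real species identification would need far more sophisticated analysis:

```python
import math

def dominant_frequency(samples, sample_rate):
    """Very rough pitch estimate from the zero-crossing rate of a signal.
    Each full cycle of a tone crosses zero twice, hence the division."""
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0) != (b < 0))
    duration = len(samples) / sample_rate
    return crossings / (2.0 * duration)

# Synthesize one second of a 440 Hz tone sampled at 8 kHz
rate = 8000
tone = [math.sin(2 * math.pi * 440 * n / rate) for n in range(rate)]
freq = dominant_frequency(tone, rate)  # close to 440
```

Field recordings are far messier than a pure tone, so practical systems rely on full spectral analysis and pattern matching, but the principle of extracting identifying features from raw audio is the same.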

Sound can also be used to affect the behavior of animals, for example discouraging deer from spending too much time browsing on your vegetable garden or keeping chickens from venturing too far afield. Through sound, a robot might signal the presence of a predator, or food, or a potential mate.

But it's not just animals; even plants produce sounds. A tree that has sustained wind damage, introducing cracks into its trunk, will sound different from one which has not. A plant with wilted leaves sounds different from one that is fully turgid, and one from which the leaves have fallen sounds different yet.

So far as I'm aware, all such potential uses of sound represent largely unexplored areas of research, so it's hard to know just what a machine might be able to learn about its biological environment by listening and processing the data produced, or in what manner it might use sound to exert some control over that environment.


I've concentrated on tying up loose ends here because I'm eager to get on to the series on Robotics for Gardeners and Farmers. That's not to say that this will be the last installment in this series; after all, I've yet to address planting, pruning, pest control, harvest, or dealing with the plant matter left behind after harvest, not to mention animal husbandry. Whether I eventually get to all of these remains to be seen. Touching on every such topic probably isn't as important as conveying the nature of the opportunities presented by the application of robotics to methods founded in horticulture rather than in conventional agriculture, with an eye to then making them scalable.

Previous installments:

Building Soil Health for Healthy Plants by soil scientist Dr. Elaine Ingham

You might think of this as a mini-course in soil science, with an emphasis on soil microbiology.

Saturday, May 28, 2016

Joel Salatin: Successional Success - Field of Farmers

No mention of robotics here, except as might be implied by portable infrastructure, but this speech is a real eye-opener, well worth the time investment in watching and listening.