Sunday, June 26, 2016

Robotics for Gardeners and Farmers, Part 4

What follows will begin with a whirlwind tour of topics at or near the bottom of the computing stack (the realm of bits and bytes), in the hope of tying up some loose ends at that level, followed by a few steps upwards, towards the sorts of things that technicians and hobbyists deal with directly.

Registers, Memory, Address Spaces, & Cache
I previously mentioned registers in the context of processor operation. A register is simply a temporary storage location that is very closely tied to the circuitry performing logical and numerical operations, so closely that most processors can complete at least some of their operations, fetching one or two values, performing an operation, and storing the result, in a single clock cycle (essentially one beat of the processor's tiny, very fast heart). Memory, also called Random Access Memory (RAM), may be on the order of a billion times more abundant but takes longer to access, typically several clock cycles, although several shorter values (each a fraction of the bits that will fit through the channel to memory at once) may be read or written together, and a series of subsequent sequential addresses may add only one cycle each. A processor's address space may lead to more than just RAM; it is the entire range of values the processor is capable of placing on that channel to memory as an address, and using part of that range for communication with other hardware is common practice. Cache is intermediate between registers and RAM, and its purpose is to speed access to the instructions and data located in RAM. Access to cache is slower than access to a register, but faster than access to RAM. Sometimes there are two or more levels of cache, with the fastest level being the least abundant and the slowest level the most abundant.
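To make the difference concrete, here is a minimal C sketch (my own illustration, not from the original post) that sums the same two-dimensional array twice: once walking memory in the order it is laid out, and once jumping across it. Both loops do identical arithmetic, but the first tends to run noticeably faster on real hardware because it keeps finding values already sitting in cache.

    #include <stdio.h>

    #define ROWS 4096
    #define COLS 4096

    static int grid[ROWS][COLS];   /* laid out row by row in memory */

    int main(void)
    {
        long sum = 0;

        /* Cache-friendly: consecutive iterations touch consecutive addresses. */
        for (int r = 0; r < ROWS; r++)
            for (int c = 0; c < COLS; c++)
                sum += grid[r][c];

        /* Cache-hostile: each iteration jumps a whole row ahead,
           so most accesses miss the cache and wait on RAM. */
        for (int c = 0; c < COLS; c++)
            for (int r = 0; r < ROWS; r++)
                sum += grid[r][c];

        printf("%ld\n", sum);
        return 0;
    }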

A/D, D/A, & GPIO
While it's possible to do abstract mathematics without being concerned with any data not included in or generated by the running program, computers are most useful when they are able to import information from outside themselves and export the results of the computational work they perform, referred to as input/output (I/O, or simply IO). This subject is particularly relevant to robotics, in which the ability of a machine to interact with its physical environment is fundamental. That environment typically includes quantities which vary continuously rather than having discrete values. Measurements of those quantities arrive as analog signals, which must be converted to digital form by devices called analog-to-digital converters (ADC, A/D) before they can be used in digital processing. Similarly, to properly drive hardware requiring analog signals, digital output must be converted to analog form using digital-to-analog converters (DAC, D/A). As with floating-point processors and memory management units, both of these were initially separate devices, but these functions have moved closer and closer to the main processing cores, sometimes now being located on the same integrated circuits (chips), although it is still common to have separate chips which handle A/D and D/A conversion for multiple channels. Such chips have made flexible general-purpose input/output (GPIO) commonplace on the single-board microcontrollers and single-board computers that have become the bread-and-butter of robotics hobbyists. GPIO doesn't necessarily include A/D and D/A functionality, but it often does, so pay attention to the details when considering a purchase. As is always the case with electronic devices, voltage and power compatibility is vital, so additional circuitry may be required in connecting I/O pins to your hardware. Best to start with kits or detailed plans crafted by experienced designers.
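As a small illustration of what an A/D converter hands you, here is a C sketch of my own (not tied to any particular board) that converts the raw count from an assumed 10-bit ADC into volts. The read_adc_counts() stub stands in for whatever call your board's library actually provides; the arithmetic is the part that carries over.

    #include <stdio.h>

    #define ADC_BITS       10      /* a 10-bit converter reports 0..1023 */
    #define ADC_MAX_COUNT  ((1 << ADC_BITS) - 1)
    #define ADC_REF_VOLTS  3.3     /* assumed reference voltage for full scale */

    /* Stub standing in for your board's actual driver call. */
    static unsigned read_adc_counts(int channel)
    {
        (void)channel;
        return 512;                /* pretend the input sits near mid-scale */
    }

    /* Convert a raw count to volts: the count is a fraction of full scale. */
    static double read_volts(int channel)
    {
        return (double)read_adc_counts(channel) * ADC_REF_VOLTS / ADC_MAX_COUNT;
    }

    int main(void)
    {
        printf("channel 0 reads about %.3f V\n", read_volts(0));
        return 0;
    }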

Now let's delve into software.

Assembler
I've already mentioned machine code in the context of the various uses of strings of bits. The earliest digital computers (there actually is another kind) had to be programmed directly in machine code, a tedious and error-prone process. The first major advancement in making programming easier for humans to comprehend and perform was assembly language, which came in a different dialect for each type of computer and instruction set. The beauty of assembly language was that, with practice, it was readable, and programs written in it were automatically translated into machine code by programs called assemblers. Abstractions which were later codified into the syntax of higher-level computer languages, such as subroutines and data structures, existed in assembly only as idioms (programming practices), which constrained what it could reasonably be used to create. Nevertheless, many of the ideas of computer science first took form in assembly code.

Higher Level Languages
Once assembly code became available, one of the uses to which it was put was the creation of programs, called compilers, capable of translating code less closely tied to the details of computer processor operation into assembly code, from which it could be converted to machine code. That higher-level code was written in new languages that were easier for programmers to use, were more independent of particular computer hardware, and which systematized some of the low-level programming patterns already in use by assembly programmers, by incorporating those patterns into their syntax. Once these early languages became available, further progress became even easier, and many new languages followed, implementing many new ideas. Then, in the 1970s, came the C language, which was initially joined at the hip to the Unix operating system, a version of which, called BSD, quickly became popular, particularly in academia, driven in no small part by its use on DEC minicomputers and, later, Sun workstations. In a sense, C was a step backwards, back towards the hardware, but it was still much easier to use than assembler, and well-written C code translated to very efficient machine code, making good use of the limited hardware of the time. Moreover, the combination of C and Unix proved formidable, with each leveraging the other. It would be hard to overestimate the impact C has had on computing, between having been ported to just about every computing platform in existence, various versions aimed at specific applications, superset and derivative languages (Objective-C and C++), and languages with C-inspired syntax. Even now, compilers and interpreters for newer languages are very likely to be written in C or C++ themselves. C's biggest downside is that it makes writing buggy code all too easy, and finding those bugs can be like looking for a needle in a haystack, so following good programming practice is all the more important when using it.
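To give a flavor of what "makes writing buggy code all too easy" means in practice, here is a tiny example of my own (not from the original post): it compiles without complaint, yet writes one element past the end of an array, the kind of defect that may go unnoticed until it corrupts something unrelated.

    #include <stdio.h>

    int main(void)
    {
        int readings[5] = {0};

        /* Off-by-one: valid indices are 0..4, but this loop also
           writes readings[5], silently stomping on adjacent memory. */
        for (int i = 0; i <= 5; i++)
            readings[i] = i * 10;

        printf("%d\n", readings[4]);
        return 0;
    }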

Operating Systems
A computer operating system is code that runs directly on the hardware, handling the most tedious and ubiquitous aspects of computing and providing a less complicated environment and basic services to application software. The environment created by an operating system is potentially independent of particular hardware. In the most minimal case, the operating system may exist as one or more source code files that are included with the application's source code at compile time, or as precompiled code that is linked with the application code after it has been compiled, with the combined result then loaded onto the device by firmware. Not every device has or needs an operating system, but those that run application software typically do, and typically their operating systems are always running, from some early stage of boot-up until the machine is shut down or disconnected from power. There are also systems that run multiple instances of one or more operating systems on multiple virtual hardware environments, but these are really beyond the scope of what I'll be addressing here.

Next up, actual hardware you can buy and tinker with.

Previous installments

Sunday, June 12, 2016

Robotics for Gardeners and Farmers, Part 3

From this point on I'm going to assume that anyone who's still with me isn't intimidated by technical terms and discussions, and I'll stop apologizing for including them. If I fail to explain any new term so you can understand how I'm using it, please say so in a comment.

Before diving back down to the level of fundamentals, there's a bit more to say about serial communications.

Serial Ports & Communication Protocols
On a microcontroller or single-board computer, a serial port is a set of pins or solder pads that work together to handle a single, typically bidirectional, serial connection with some other device (see also UART). Serial ports on enclosed devices like laptop or desktop computers are standardized connectors with standardized signals on particular pins or contacts. Examples include RS-232 and USB ports. While such ports have their own protocols, communication protocols also include layers that ride on top of those of physical connections. One example of such a protocol that I expect to become increasingly important in the future is RapidIO. An even higher-level protocol, used by ROS, the Robot Operating System, is rosbridge.
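For anyone curious what talking to a serial port looks like in code, here is a minimal C sketch for Linux using the standard POSIX termios interface. It is my own illustration, assuming a device that shows up as /dev/ttyUSB0 at 9600 baud; the device name and speed will vary with your hardware.

    #include <fcntl.h>
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        /* Open the serial device (adjust the path for your board or adapter). */
        int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Configure 9600 baud, 8 data bits, no parity, 1 stop bit. */
        struct termios tty;
        tcgetattr(fd, &tty);
        cfsetispeed(&tty, B9600);
        cfsetospeed(&tty, B9600);
        tty.c_cflag = (tty.c_cflag & ~(PARENB | CSTOPB | CSIZE)) | CS8 | CREAD | CLOCAL;
        tty.c_lflag = 0;        /* raw input: no echo, no line editing */
        tty.c_iflag = 0;
        tty.c_oflag = 0;
        tcsetattr(fd, TCSANOW, &tty);

        /* Read whatever the other device sends and echo it to the terminal. */
        char buf[64];
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        close(fd);
        return 0;
    }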

Okay, now back down to the bottom of the stack for a look at how computers do what they do. This will be more than you need to know to just use a computer, but when you're wiring up sensors or other hardware to, or programming, a microcontroller or single-board computer, it could come in handy.

Binary Logic
Once again, think simple. At the binary level, logic operations are about taking one or two bits as input and producing a single bit as output. Binary NOT simply changes a 1 to a 0 or a 0 to a 1. Binary AND produces a 1 as output if and only if ("iff") both of two inputs are 1. Binary OR produces 1 as an output if either of its two inputs is 1, or if both are 1. Binary NAND is like running the output of an AND operation through a NOT operation. Likewise, NOR is like running the output of an OR through a NOT. XOR, also called Exclusive OR, produces a 1 as output if either of two inputs is 1, but not if both are 1 or if both are 0. Implementations of these binary logic operations in circuitry are referred to as "gates" — AND gate, OR gate, and so forth. When processing cores perform binary logic operations, they typically do so on entire strings of bits at the same time.
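C exposes these same operations as bitwise operators, applied to every bit position of a value at once, which mirrors the way a processing core applies them across a whole string of bits. A quick illustration of my own, using 8-bit values printed in hexadecimal:

    #include <stdio.h>

    int main(void)
    {
        unsigned char a = 0xC3;   /* 1100 0011 */
        unsigned char b = 0xA5;   /* 1010 0101 */

        printf("NOT a   = %02X\n", (unsigned)(unsigned char)~a); /* 3C: every bit flipped        */
        printf("a AND b = %02X\n", (unsigned)(a & b));           /* 81: 1 only where both are 1  */
        printf("a OR  b = %02X\n", (unsigned)(a | b));           /* E7: 1 where either is 1      */
        printf("a XOR b = %02X\n", (unsigned)(a ^ b));           /* 66: 1 where exactly one is 1 */
        return 0;
    }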

Bit Shift
Moving all of the bits in a string of bits one position to the left, inserting a 0 at the right end, is equivalent to multiplying by 2, unless there was already a 1 in the left-most (most significant) position, with no place to go, which is called overflow. Moving all of the bits one position to the right, inserting a 0 at the left end, is equivalent to dividing by 2, unless there was already a 1 in the right-most (least significant) position, with no place to go; that lost bit is the remainder of the division, and losing a value off the low end like this is sometimes loosely called underflow. Sometimes overflow or underflow are errors, and sometimes they are not, depending on the context in which they occur.
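In C these are the << and >> operators. A small illustration of my own, using unsigned values so that the behavior on overflow is a well-defined wrap-around rather than anything surprising:

    #include <stdio.h>

    int main(void)
    {
        unsigned char x = 0x29;   /* 0010 1001 = 41 */

        printf("x << 1 = %u\n", (unsigned)(unsigned char)(x << 1)); /* 82: times 2 */
        printf("x >> 1 = %u\n", (unsigned)(x >> 1));                /* 20: divide by 2; the low 1 bit (the remainder) is lost */

        unsigned char y = 0x90;   /* 1001 0000 = 144 */
        /* Left-shifting pushes the top bit out of an 8-bit value: overflow. */
        printf("y << 1 = %u (kept to 8 bits)\n", (unsigned)(unsigned char)(y << 1)); /* 32, not 288 */
        return 0;
    }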

Integer
Integer has the same meaning in computing as it does in arithmetic, except that there are additional constraints. In computers, integers are represented by strings of bits, generally no longer than the number of bits that the processing core(s) can handle in a single operation, usually either 32 or 64 these days. These binary representations of integers come in two basic types, signed or unsigned. A 32-bit unsigned integer can represent any whole value between 0 and 4,294,967,295 (inclusive), whereas a 32-bit signed integer can represent any whole value between −2,147,483,648 and 2,147,483,647 (inclusive). As with left-shift, integer addition and multiplication can result in overflow, and, as with right-shift, integer subtraction can result in underflow. Integer division is a special case; any remainder is typically discarded, but can be accessed by something called the modulo operation.
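Here is a short C illustration of my own showing integer division discarding the remainder, the modulo operator recovering it, and unsigned overflow wrapping around:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        int apples = 17, crates = 5;
        printf("17 / 5 = %d\n", apples / crates);   /* 3: the remainder is discarded   */
        printf("17 %% 5 = %d\n", apples % crates);  /* 2: modulo recovers the remainder */

        unsigned int big = UINT_MAX;                /* 4,294,967,295 where unsigned int is 32 bits */
        printf("UINT_MAX + 1 = %u\n", big + 1u);    /* 0: unsigned overflow wraps around */
        return 0;
    }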

Floating Point
As with integers, floating-point numbers generally come in 32-bit and 64-bit sizes (often called single and double precision), with the 64-bit version having both a greater range and more precision. They have gradually come into more common use as hardware capable of performing floating-point operations at a reasonable rate became more affordable, eventually being integrated into the central processing units (CPUs) found in most computers.
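Floating-point values are approximations, which occasionally surprises newcomers. A small C illustration of my own: the decimal value 0.1 has no exact binary representation, so repeated addition drifts, and drifts less in 64-bit (double) than in 32-bit (float).

    #include <stdio.h>

    int main(void)
    {
        float  f = 0.0f;
        double d = 0.0;

        /* Add 0.1 ten times; in exact arithmetic the answer would be 1.0. */
        for (int i = 0; i < 10; i++) {
            f += 0.1f;
            d += 0.1;
        }

        printf("32-bit float : %.17f\n", f);   /* close to 1, but not exactly 1    */
        printf("64-bit double: %.17f\n", d);   /* closer still, yet still not exact */
        return 0;
    }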

Machine Code
Another use for strings of bits is as the code that controls the operation of a processing core. In the simplest case, each bit or short group of bits within an instruction is actually a control signal, although its significance may depend on the state of one or more other bits in the string. For example, part of the instruction might specify 32-bit unsigned integer addition, while two other parts specify the registers from which to draw the operands and yet another part specifies the register into which to place the result, with the operation finishing by incrementing the program counter (a pointer to the memory location of the next instruction). This approach can be carried to an extreme in what's called a VLIW (Very Long Instruction Word) architecture. An alternative approach, called microcode, establishes a layer of abstraction between the level of control signals and the code that constitutes a program, and can also allow the same code to run on a range of closely related processor designs with nonidentical control signals. These days most processors found in consumer devices use microcode.
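To make the idea of fields within an instruction concrete, here is a toy C sketch of my own, using a made-up 16-bit instruction format (4 bits of opcode and three 4-bit register numbers). Real instruction sets differ in layout and width, but the slicing with shifts and masks is the same idea.

    #include <stdio.h>
    #include <stdint.h>

    /* A made-up 16-bit instruction layout:
       [15..12] opcode   [11..8] destination reg   [7..4] source reg A   [3..0] source reg B */
    int main(void)
    {
        uint16_t instruction = 0x1342;   /* pretend opcode 0x1 means "ADD" */

        unsigned opcode = (instruction >> 12) & 0xF;
        unsigned dest   = (instruction >> 8)  & 0xF;
        unsigned srcA   = (instruction >> 4)  & 0xF;
        unsigned srcB   =  instruction        & 0xF;

        printf("opcode %u: r%u = r%u + r%u\n", opcode, dest, srcA, srcB);
        /* prints: opcode 1: r3 = r4 + r2 */
        return 0;
    }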

Processing Cores
Up until now I've referred to processing cores without having actually defined them. A core is like a knot of circuitry that performs a set of closely related operations. The most basic type of core is an Arithmetic Logic Unit (ALU). These cores handle binary logic, bit shifting, integer arithmetic, and sometimes also floating point operations, although floating point circuitry was initially found on separate chips and only later included on the same chips as ALUs. Another common type of core is concerned with memory in the processor's primary address space (yet another use of strings of bits). Addresses usually take the form of unsigned integers, but ordinary integer operations don't apply to them.

GPU & GPGPU
Graphics Processing Units (GPUs) belong to the more general class called Vector Processors. "Vector" here means the same thing as it does in linear algebra, although GPUs can be very useful in computing geometric vectors. They are at their best when performing the same operation or sequence of operations on a large set of data, and in these sorts of applications they have a huge performance advantage over more conventional processing cores. Robotic applications where you might find a GPU include processing data from a camera or microphone. General purpose computing on GPUs (GPGPU) is a growing trend.
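The classic example of the kind of work GPUs excel at is applying one small formula to every element of a large array. Here it is in plain C (my own sketch); a GPU would, in effect, run the body of this loop for thousands of elements at once instead of one at a time.

    #include <stdio.h>

    #define N 8   /* in a real workload this might be millions of elements */

    int main(void)
    {
        float a = 2.0f;
        float x[N] = {1, 2, 3, 4, 5, 6, 7, 8};
        float y[N] = {10, 10, 10, 10, 10, 10, 10, 10};

        /* The same multiply-add applied independently to every element:
           exactly the shape of work a GPU spreads across its many cores. */
        for (int i = 0; i < N; i++)
            y[i] = a * x[i] + y[i];

        for (int i = 0; i < N; i++)
            printf("%g ", y[i]);
        printf("\n");
        return 0;
    }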

There's a bit (informal use) more to be said about processors and such before working our way back up the stack, but it can wait for the next installment.

Previous installments

Monday, June 06, 2016

Robotics for Gardeners and Farmers, Part 2

In Part 1 of this series I said "you can combine purchased bits with your own bits to create novel devices that perform tasks for which no off-the-shelf solution exists." But why bother, right? Isn't it just a matter of time? Perhaps, but this is something of a chicken-and-egg problem. Investment follows the perception of a potential market. Without the perception of a market into which to sell the fruits of product development, investment is hard to come by, hence little development happens and few products are forthcoming. To really get behind the application of robotics to horticulture and agriculture, in a manner that takes full advantage of the potential of robotics to leverage the very best practices and make them scalable, investors must be convinced that their money will at least accomplish something worthwhile, and preferably that it will bring them a nice return. One way you can contribute to creating that perception of a market is by pushing the envelope of what can be done with what's available now, measuring the results, and talking about it, with friends and neighbors and on the social networks of your choice, preferably accompanied with video that makes clear what your creations do. (I'll come back to the use of social networks later in this installment.)

As I was saying at the close of Part 1, before I can go into much more detail, some additional definitions are in order.

Bit
I'm fond of this word in its informal sense, but, as applied to computers and related technologies, a bit is the smallest unit of information, usually represented by a single binary digit, which can have either of two values, 0 or 1. A bit can be represented physically in many ways, the side of a coin facing up after a toss, for example. It is typically represented electronically by either a high state (a measurable voltage, either + or -) or a low state (usually ground), and while the signal (see below) representing a bit might be held constant until changed, it is more commonly compact in time (a brief pulse), both created and retrieved in reference to a clock signal (see second item below).

Four bits taken together are called a nibble, represented by a 4-digit binary number, which can have any of 16 values: 0000, 0001, 0010, 0011, 0100, 0101, 0110, 0111, 1000, 1001, 1010, 1011, 1100, 1101, 1110, or 1111. These sixteen values can each be represented by a single hexadecimal (base 16) digit: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F, respectively. When combining longer strings of binary digits, such as two nibbles to form a byte (8 bits), the use of hexadecimal becomes first a convenience, then a necessity, as longer strings of 0s and 1s are very difficult to parse visually — hexadecimal 00 is equivalent to binary 00000000, and hexadecimal FF is equivalent to 11111111.
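Most programming languages let you write values in hexadecimal directly. A quick C illustration of my own, showing the same byte written in hex and picked apart nibble by nibble:

    #include <stdio.h>

    int main(void)
    {
        unsigned char value = 0xB6;                /* binary 1011 0110, decimal 182 */

        unsigned high_nibble = (value >> 4) & 0xF; /* 0xB = binary 1011 */
        unsigned low_nibble  = value & 0xF;        /* 0x6 = binary 0110 */

        printf("value = 0x%02X = %u decimal\n", (unsigned)value, (unsigned)value);
        printf("high nibble = 0x%X, low nibble = 0x%X\n", high_nibble, low_nibble);
        return 0;
    }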

Signal
About thirty years ago, a very bright man posed a question (paraphrased here): given that computer monitors were typically of higher quality than televisions, why were the images they produced so much more primitive than those produced by TVs? The answer was all about the source of the signals each was being fed. Television signals, at that time, were derived almost entirely from imagery captured from the physical world, whereas the images on monitors had to be generated by the computers they were attached to. There was no contest; even the supercomputers of that time simply weren't up to the task of generating life-like imagery in real time; rather they would spend minutes or hours calculating each frame, and even then the result was cartoonish at best. These days smartphones and game consoles do a passable job of generating moving points of view within dynamic 3-dimensional environments of nontrivial complexity.

Like the simplest form of sensor, the simplest signal is either on or off. The signal from a light switch is either nothing at all or a connection to a power source, usually alternating current (AC) with a voltage (in the U.S.) between 110 and 120. The on/off supply of power to the light is, in that example, inseparable from the on/off signal, but what if the switch only controls the supply of power to a relay (a type of magnetic switch controlled by a small direct current running through a coil of wire), which in turn controls the supply of power to the light? In that case the power to the light is distinct from the signal (the power to the relay) that controls it, although that signal is still of the simplest, on/off type. This becomes clearer if we use another sort of relay, one that is closed (on) by default, making a connection unless there is a current through its magnetic coil, such that the light is on when the switch is off, and off when the switch is on.

Signals may be combined with the supply of power, but they are about the representation/encoding and transmission of information, and signal processing is about the extraction/decoding of information from incoming signals.

Under ideal conditions, the voltage of the AC power arriving at your home from the grid, graphed over time, forms a constant sine wave, with a cycle time (in the U.S.) of 1/60 second. Any perturbations from that perfect sine wave carry information, perhaps from a switch being turned on or off within your home, or perhaps from an event happening elsewhere, even many miles away. Background information of unknown significance from an indeterminate source is usually referred to as noise, although at some epistemological peril. (The constant snow we used to see on tube-type televisions attached to antennas, when they were tuned to an unused channel, turned out to be the background radiation left over from the Big Bang.)

Two common ways of encoding information into an AC signal are amplitude modulation (varying the voltage) and frequency modulation (varying the frequency or cycle time), from which the AM and FM radio bands get those names.

Clock
Digital devices use a different kind of signal, one more like that of a simple switch, but switching back and forth many times per second. The simplest form of such a signal is called a clock. A clock signal is a constant square wave, rising abruptly, remaining in a high state for an instant, then falling abruptly, remaining in a low state for another instant, over and over in a regular rhythm. Such a clock signal is a reference against which other signals are measured, governing the encoding of information into them and the extraction of information from them.

Serial vs. Parallel
Electronic representation of multi-digit binary numbers can be either serial (one bit at a time) or parallel (several bits as separate signals on separate channels), or both (several bits at a time, combined with a clock signal). Nearly every computer in existence today moves bits around internally at least 4 at a time, and more commonly 32 or 64 at a time, to the beat of a clock running at millions or billions of cycles per second. Externally, over cables and wireless connections between devices, sending one bit at a time is the rule, and sending groups of bits together in lockstep is the exception. Many bits are sent as a string, over such connections, and then reconstituted at the other end.

One place where you will find external connections with 4, 8, or even 16 bits in parallel is on the pins or solder pads provided on single-board computers for hobbyists, such as the Raspberry Pi. These are typically configurable, capable of operating either singly or together as a group, in parallel, and can frequently also handle analog signals, in which the information content is encoded as a voltage that varies anywhere between ground state and high state, or, more commonly, pulse width modulation (PWM), in which the information is encoded in the timing of changes between ground state and high state.
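Here is a rough C sketch of software PWM, of my own devising: gpio_write() and sleep_microseconds() are hypothetical stand-ins for whatever your board's GPIO library and delay routines actually provide, but the structure shows how the ratio of high time to low time within each repeating period carries the information, for example how bright an LED appears or how fast a motor turns.

    /* Software PWM sketch: drive one output pin high for a fraction of each
       period and low for the rest. gpio_write() and sleep_microseconds() are
       hypothetical placeholders for your board's actual GPIO and delay calls. */

    #define PWM_PERIOD_US 1000          /* 1 ms period, i.e. 1 kHz */

    void gpio_write(int pin, int level);        /* provided by your GPIO library */
    void sleep_microseconds(unsigned us);       /* provided by your platform     */

    /* duty_percent of 0 means always off, 100 means always on. */
    void pwm_cycle(int pin, unsigned duty_percent)
    {
        unsigned high_us = PWM_PERIOD_US * duty_percent / 100;

        gpio_write(pin, 1);
        sleep_microseconds(high_us);

        gpio_write(pin, 0);
        sleep_microseconds(PWM_PERIOD_US - high_us);
    }

    /* Call pwm_cycle() repeatedly in a loop to hold a steady output level,
       or vary duty_percent over time to dim/brighten or speed up/slow down. */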

That's enough technical talk for one installment. Now back to the discussion of social networks. Even if talk of bits, bytes, processors, and signals leaves you numb, you can act on what follows.

The most important social network is your friends, neighbors, and those you interact with through face-to-face meetings. Find out which of the people in your network are interested in and/or have some competence in technology, either the technologies used in robotics or the biology-based technologies used in organic gardening, agroecology, biological agriculture, or whatever you prefer to call it. Chat them up; find out what they know and what they're interested in, particularly what they're interested in doing themselves. Also find out which online social networks (Facebook, Twitter, etc.) they use, and get connected to them there.

Build on that base. Share your discoveries and projects with this group, and keep up with what they share. If they've done something particularly impressive, maybe do a video-recorded interview and post that. Also nudge your contacts to build out the network by including others they know. Be on the lookout for other such networks, whether intentional or not, and hook up with them as you find them, also any interested individuals you locate online.

Find out if your school or school system has any robotics activities. If you have children, see whether they're interested. Either way, introduce yourself to the instructor or club sponsor. Chances are they know a few technically adept young people who would be enthusiastic about a chance to do something real that matters.

Also introduce yourself to any industrial arts teachers. Robots aren't only computers, but have mechanical components which are essential to what they do, and not all such components can be 3D printed in plastic. Sometimes you might need someone with access to a lathe, a welder, or a furnace capable of melting metal for casting, and the skills to use them.

And finally, bug the equipment dealers around you for smaller, lighter, more intelligent, more detail-oriented, less destructive options. Tell them you want to get away from packing down and tearing up the soil, and away from the use of poisons of all types. If they hear this often enough, they'll be passing the message up the chain to their suppliers.

Keep notes, whether on paper or on the device or cloud of your choice, so you don't lose track of what you've already learned.

Get back to contacts periodically.

Not what you were expecting? As I was saying, the perception of a market is critical in motivating investment, and investment can vastly accelerate the development of technology. But money doesn't sit around for long; it gets invested one way or another, into the best option of which the investor is aware. To attract that investment, it's important to make some commotion.

Previous installment