From this point on I'm going to assume that anyone who's still with me isn't intimidated by technical terms and discussions, and I'll stop apologizing for including them. If I fail to explain any new term so you can understand how I'm using it, please say so in a comment.
Before diving back down to the level of fundamentals, there's a bit more to say about serial communications.
- Serial Ports & Communication Protocols
- Serial ports, on a microcontroller or single board computer, are made up of a set of pins or solder pads that work together to handle a single, typically bidirectional serial connection with some other device (see also UART). Serial ports on enclosed devices like laptop or desktop computers are standardized connectors carrying standardized signals on particular pins or contacts; examples include RS-232 and USB ports. While such ports have protocols of their own, communication protocols also include layers that ride on top of those of the physical connection. One example of such a protocol that I expect to become increasingly important in the future is RapidIO. An even higher level protocol, used by ROS, the Robot Operating System, is rosbridge.
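To make that concrete, here's a minimal sketch in C of opening and configuring a serial port on a Linux single board computer using the POSIX termios API. The device path /dev/ttyUSB0 and the 9600 baud rate are assumptions for illustration; both vary with the board and the device on the other end.

```c
/* Minimal sketch: open and configure a serial port on Linux
 * using the POSIX termios API. The device path below is an
 * assumption; substitute whatever your board presents. */
#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY); /* hypothetical device */
    if (fd < 0) { perror("open"); return 1; }

    struct termios tty;
    if (tcgetattr(fd, &tty) != 0) { perror("tcgetattr"); close(fd); return 1; }

    cfmakeraw(&tty);           /* raw mode: no line editing or translation */
    cfsetispeed(&tty, B9600);  /* 9600 baud in each direction */
    cfsetospeed(&tty, B9600);
    if (tcsetattr(fd, TCSANOW, &tty) != 0) { perror("tcsetattr"); close(fd); return 1; }

    write(fd, "hello\n", 6);   /* send six bytes out the port */

    char buf[64];
    ssize_t n = read(fd, buf, sizeof buf); /* block until something arrives */
    if (n > 0) fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    return 0;
}
```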
Okay, now back down to the bottom of the stack for a look at how computers do what they do. This is more than you need to know just to use a computer, but when you're wiring sensors or other hardware to a microcontroller or single board computer, or programming one, it can come in handy.
- Binary Logic
- Once again, think simple. At the binary level, logic operations take one or two bits as input and produce a single bit as output. Binary NOT simply changes a 1 to a 0 or a 0 to a 1. Binary AND produces a 1 as output if and only if ("iff") both of its two inputs are 1. Binary OR produces a 1 as output if either of its two inputs is 1, or if both are 1. Binary NAND is like running the output of an AND operation through a NOT operation. Likewise, NOR is like running the output of an OR through a NOT. XOR, short for Exclusive OR, produces a 1 as output if either of its two inputs is 1, but not if both are 1 or both are 0. Implementations of these binary logic operations in circuitry are referred to as "gates": AND gates, OR gates, and so forth. When processing cores perform binary logic operations, they typically do so on entire strings of bits at once, applying the operation to each bit position independently, as in the sketch below.
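Here's how those operations look in C, whose bitwise operators apply the logic to every bit of their operands at once, just as a core's gates do. NAND and NOR have no operators of their own, so they're built by chaining NOT after AND or OR, mirroring the gate construction described above.

```c
/* Bitwise logic in C: each operator is applied to every bit
 * position of its operands independently and simultaneously. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t a = 0xCC;  /* 11001100 */
    uint8_t b = 0xAA;  /* 10101010 */

    printf("NOT a    = 0x%02X\n", (uint8_t)~a);        /* 00110011 */
    printf("a AND b  = 0x%02X\n", (uint8_t)(a & b));   /* 10001000 */
    printf("a OR b   = 0x%02X\n", (uint8_t)(a | b));   /* 11101110 */
    printf("a XOR b  = 0x%02X\n", (uint8_t)(a ^ b));   /* 01100110 */
    /* NAND and NOR are built by negating AND and OR, just as
       the gates are built by chaining a NOT after an AND or OR. */
    printf("a NAND b = 0x%02X\n", (uint8_t)~(a & b));  /* 01110111 */
    printf("a NOR b  = 0x%02X\n", (uint8_t)~(a | b));  /* 00010001 */
    return 0;
}
```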
- Bit Shift
- Moving all of the bits in a string one position to the left, inserting a 0 at the right end, is equivalent to multiplying by 2, unless there was already a 1 in the left-most (most significant) position, with no place to go, which is called overflow. Moving all of the bits one position to the right, inserting a 0 at the left end, is equivalent to dividing by 2, except that a 1 in the right-most (least significant) position gets shifted out and lost; that lost bit is the remainder of the division, and losing it is sometimes loosely called underflow. Sometimes overflow or underflow are errors, and sometimes they are not, depending on the context in which they occur. The sketch below shows both cases.
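A small C sketch of both shifts, using 8-bit values so the overflow is easy to see:

```c
/* Shifting as multiplication and division by 2, with the
 * overflow and discarded-remainder cases called out. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t x = 0x35;  /* 00110101 = 53 */

    printf("53 << 1 = %d\n", (uint8_t)(x << 1));  /* 01101010 = 106 = 53 * 2 */

    /* With a 1 already in the most significant bit, that bit
       has nowhere to go: the left shift overflows. */
    uint8_t big = 0x95;  /* 10010101 = 149 */
    printf("149 << 1 = %d (overflow: 298 doesn't fit in 8 bits)\n",
           (uint8_t)(big << 1));  /* 00101010 = 42 */

    /* Right shift divides by 2; a 1 in the least significant
       bit is the remainder, and it is simply discarded. */
    printf("53 >> 1 = %d (the remainder 1 is lost)\n", x >> 1);  /* 26 */
    return 0;
}
```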
- Integer
- Integer has the same meaning in computing as it does in arithmetic, except that there are additional constraints. In computers, integers are represented by strings of bits, generally no longer than the number of bits the processing core(s) can handle in a single operation, usually either 32 or 64 these days. These binary representations come in two basic types, signed and unsigned. A 32-bit unsigned integer can represent any whole value between 0 and 4,294,967,295 (inclusive), whereas a 32-bit signed integer can represent any whole value between −2,147,483,648 and 2,147,483,647 (inclusive). As with left-shift, integer addition and multiplication can result in overflow, and, as with right-shift, integer subtraction can result in underflow. Integer division is a special case: any remainder is typically discarded, but it can be recovered by something called the modulo operation, as in the sketch below.
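A short C sketch of the limits, the wraparound behavior on overflow, and the modulo operation; it assumes an int is 32 bits, which is true on most platforms these days:

```c
/* Integer limits, overflow, and the modulo operation in C. */
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* A 32-bit unsigned integer wraps around on overflow. */
    uint32_t u = 4294967295u;           /* the unsigned maximum */
    printf("%u + 1 = %u\n", u, u + 1);  /* wraps around to 0 */

    /* Division discards the remainder; % (modulo) recovers it. */
    int dividend = 17, divisor = 5;
    printf("17 / 5 = %d\n", dividend / divisor);   /* 3 */
    printf("17 %% 5 = %d\n", dividend % divisor);  /* 2 */

    /* The signed 32-bit range from the text, straight from
       limits.h (assuming int is 32 bits on this platform): */
    printf("INT_MIN = %d, INT_MAX = %d\n", INT_MIN, INT_MAX);
    return 0;
}
```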
- Floating Point
- As with integers, floating point numbers generally come in 32 and 64-bit sizes, with the 64-bit version having both a greater range and more precision. A floating point number packs a sign, an exponent, and a significand into its string of bits, representing values of the form significand × 2^exponent, which trades exactness for a far wider range than an integer of the same width can cover. Floating point has gradually come into more common use as hardware capable of performing floating point operations at a reasonable rate became more affordable, with that circuitry eventually being integrated into the central processing units (CPUs) found in most computers.
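One way to see the precision difference is to store the same value at both sizes. Neither holds one tenth exactly, since a tenth has no finite binary expansion, but the 64-bit double comes far closer:

```c
/* A 32-bit float carries roughly 7 decimal digits of precision,
 * a 64-bit double roughly 15 to 16; the difference shows up as
 * soon as you store a value with more digits than a float holds. */
#include <stdio.h>

int main(void) {
    float  f = 0.1f;  /* 32-bit */
    double d = 0.1;   /* 64-bit */

    printf("float  0.1 = %.17f\n", f);  /* 0.10000000149011612 */
    printf("double 0.1 = %.17f\n", d);  /* 0.10000000000000001 */
    return 0;
}
```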
- Machine Code
- Another use for strings of bits is as the code that controls the operation of a processing core. In the simplest case, each bit or short subset of the string of bits forming an instruction is actually a control signal, although its significance may depend on the state of one or more other bits in the string. For example, part of the instruction might specify 32-bit unsigned integer addition, while two other parts specify the registers from which to draw the operands and yet another part specifies the register into which to place the result, with the operation finishing by incrementing the program counter (a pointer to the memory location of the next instruction). This approach can be carried to an extreme in what's called a VLIW (Very Long Instruction Word) architecture. An alternative approach, called microcode, establishes a layer of abstraction between the level of control signals and the code that constitutes a program, and can also allow the same code to run on a range of closely related processor designs with nonidentical control signals. These days most processors found in consumer devices use microcode.
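Here's a C sketch that picks an instruction word apart with shifts and masks. The field layout is entirely made up for illustration, not any real machine's encoding, but real instruction decoders do exactly this sort of work:

```c
/* Decoding a hypothetical 32-bit instruction word. The field
 * layout here is invented for illustration:
 *   bits 31-24: opcode (we'll say 0x2A means unsigned 32-bit add)
 *   bits 23-16: destination register
 *   bits 15-8:  first source register
 *   bits 7-0:   second source register */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t instruction = 0x2A030105; /* ADD r3, r1, r5 in our made-up encoding */

    uint32_t opcode = (instruction >> 24) & 0xFF;
    uint32_t rd     = (instruction >> 16) & 0xFF;
    uint32_t rs1    = (instruction >> 8)  & 0xFF;
    uint32_t rs2    = instruction & 0xFF;

    printf("opcode 0x%02X: r%u = r%u + r%u\n", opcode, rd, rs1, rs2);
    /* A real core would now route the contents of rs1 and rs2
       through its adder, latch the sum into rd, and increment
       the program counter to point at the next instruction. */
    return 0;
}
```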
- Processing Cores
- Up until now I've referred to processing cores without having actually defined them. A core is like a knot of circuitry that performs a set of closely related operations. The most basic type of core is an Arithmetic Logic Unit (ALU). These cores handle binary logic, bit shifting, and integer arithmetic, and sometimes also floating point operations, although floating point circuitry was initially found on separate chips and only later included on the same chips as ALUs. Another common type of core is concerned with memory in the processor's primary address space (yet another use of strings of bits). Addresses usually take the form of unsigned integers, but most ordinary integer operations aren't meaningful when applied to them, as the sketch below illustrates.
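If you're curious, here's a small C sketch that exposes addresses as the unsigned integers they are; about the only arithmetic that makes sense on them is taking the difference between two related addresses:

```c
/* Addresses really are unsigned integers under the hood; C lets
 * you look at one via uintptr_t. But, as the text says, most
 * ordinary arithmetic on a raw address is meaningless. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int values[4] = {10, 20, 30, 40};

    uintptr_t addr0 = (uintptr_t)&values[0];
    uintptr_t addr1 = (uintptr_t)&values[1];

    /* Consecutive array elements sit sizeof(int) bytes apart. */
    printf("element 0 at 0x%lx\n", (unsigned long)addr0);
    printf("element 1 at 0x%lx (%lu bytes later)\n",
           (unsigned long)addr1, (unsigned long)(addr1 - addr0));
    return 0;
}
```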
- GPU & GPGPU
- Graphics Processing Units (GPUs) belong to the more general class called Vector Processors. "Vector" here means much what it does in linear algebra, an ordered set of values operated on together, although GPUs can also be very useful in computing geometric vectors. They are at their best when performing the same operation or sequence of operations on a large set of data, and in these sorts of applications they have a huge performance advantage over more conventional processing cores. Robotic applications where you might find a GPU include processing data from a camera or microphone. General purpose computing on GPUs (GPGPU) is a growing trend.
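The data-parallel pattern that suits a GPU can be sketched in plain C, though here the loop runs one element at a time where a vector processor would handle many per step; scaling camera pixel brightness is the sort of work meant above:

```c
/* The data-parallel pattern that suits a GPU: one operation,
 * many data elements. This is plain scalar C, so the loop runs
 * one element at a time; a vector processor applies the same
 * multiply-add to many elements per step. */
#include <stddef.h>
#include <stdio.h>

#define N 8  /* a real camera frame would have millions of elements */

int main(void) {
    float pixels[N] = {0.1f, 0.2f, 0.3f, 0.4f, 0.5f, 0.6f, 0.7f, 0.8f};
    float gain = 1.5f, offset = 0.05f;

    for (size_t i = 0; i < N; i++)
        pixels[i] = pixels[i] * gain + offset;  /* same op, every element */

    for (size_t i = 0; i < N; i++)
        printf("%.2f ", pixels[i]);
    printf("\n");
    return 0;
}
```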
There's a bit (informal use) more to be said about processors and such before working our way back up the stack, but it can wait for the next installment.