What follows will begin with a whirlwind tour of topics at or near the bottom of the computing stack (the realm of bits and bytes), in the hope of tying up some loose ends at that level, followed by a few steps upwards, towards the sorts of things that technicians and hobbyists deal with directly.
- Registers, Memory, Address Spaces, & Cache
- I previously mentioned registers in the context of processor operation. A register is simply a temporary storage location very closely tied to the circuitry that performs logical and numerical operations, so closely that most processors can perform at least some of their operations, fetching one or two values, performing an operation, and storing the result, in a single clock cycle (essentially one beat of the processor's tiny, very fast heart). Memory, also called Random Access Memory (RAM), may be on the order of a billion times more abundant, but it takes longer to access, typically several clock cycles, although several shorter values (each a fraction of the bits that fit through the channel to memory at once) may be read or written together, and a run of sequential addresses may add only one cycle each. A processor's address space may lead to more than just RAM; it is the entire range of values the processor is capable of placing on that channel as an address, and using part of that range for communication with other hardware is common practice. Cache is intermediate between registers and RAM, and its purpose is to speed access to the instructions and data located in RAM; access to cache is slower than access to a register, but faster than access to RAM. Sometimes there are two or more levels of cache, with the fastest level being the smallest and the slowest level the largest. (The first sketch following this list illustrates the practical effect of cache.)
- A/D, D/A, & GPIO
- While it's possible to do abstract mathematics without being concerned with any data not included in or generated by the running program, computers are most useful when they are able to import information from outside themselves and export the results of the computational work they perform, referred to as input/output (I/O, or simply IO). This subject is particularly relevant to robotics, in which the ability of a machine to interact with its physical environment is fundamental. That environment typically includes quantities which vary continuously rather than taking discrete values. Measurements of those quantities arrive as analog signals, which must be converted to digital signals by devices called analog-to-digital converters (ADC, A/D) before they can be used in digital processing. Similarly, to properly drive hardware requiring analog signals, digital output must be converted to analog form using digital-to-analog converters (DAC, D/A). As with floating-point processors and memory management units, both of these were initially separate devices, but these functions have moved closer and closer to the main processing cores, sometimes now being located on the same integrated circuits (chips), although it is still common to have separate chips which handle A/D and D/A conversion for multiple channels. Such chips have made flexible general-purpose input/output (GPIO) commonplace on the single-board microcontrollers and single-board computers that have become the bread-and-butter of robotics hobbyists. GPIO doesn't necessarily include A/D and D/A functionality, but it often does, so pay attention to the details when considering a purchase. As is always the case with electronic devices, voltage and power compatibility is vital, so additional circuitry may be required to connect I/O pins to your hardware. It's best to start with kits or detailed plans crafted by experienced designers. (The second sketch below shows what reading an ADC can look like in code.)
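To make the memory-hierarchy discussion concrete, here is a minimal C sketch (my choice of language for examples in this post) that times a sequential pass over a large array against a strided pass doing the same number of additions. On most machines the strided version runs noticeably slower, because scattered accesses defeat the cache's preference for neighboring addresses; the exact numbers depend entirely on your hardware and compiler.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)      /* 16M ints, far larger than typical caches */
#define STRIDE 4096      /* jump far enough apart to miss the cache often */

int main(void) {
    int *a = malloc(N * sizeof *a);
    if (!a) return 1;
    for (long i = 0; i < N; i++) a[i] = 1;

    /* Sequential pass: each cache line is fetched once, then fully used. */
    clock_t t0 = clock();
    long sum = 0;
    for (long i = 0; i < N; i++) sum += a[i];
    clock_t t1 = clock();

    /* Strided pass: same additions, but the accesses are scattered. */
    long sum2 = 0;
    for (long s = 0; s < STRIDE; s++)
        for (long i = s; i < N; i += STRIDE) sum2 += a[i];
    clock_t t2 = clock();

    printf("sequential: %.3fs  strided: %.3fs  (sums %ld %ld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, sum, sum2);
    free(a);
    return 0;
}
```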
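The A/D and GPIO discussion deserves a sketch too. On many microcontrollers, peripheral registers are memory-mapped, so reading an ADC amounts to poking a few addresses through volatile pointers. Be warned that the addresses and bit layout below are purely hypothetical, invented for illustration; the real ones come from the datasheet of whatever part you actually buy.

```c
#include <stdint.h>

/* Hypothetical memory-mapped ADC registers; real addresses and bit
 * layouts vary by chip and come from the manufacturer's datasheet. */
#define ADC_CTRL   (*(volatile uint32_t *)0x40012000u)
#define ADC_STATUS (*(volatile uint32_t *)0x40012004u)
#define ADC_DATA   (*(volatile uint32_t *)0x40012008u)

#define ADC_START_BIT (1u << 0)   /* write: begin a conversion */
#define ADC_READY_BIT (1u << 0)   /* read: conversion finished */

/* Start a conversion on the given channel and busy-wait for the result.
 * 'volatile' keeps the compiler from optimizing away the repeated
 * status reads, which look redundant but reflect changing hardware. */
uint16_t adc_read(unsigned channel) {
    ADC_CTRL = (channel << 8) | ADC_START_BIT;  /* select channel, go */
    while ((ADC_STATUS & ADC_READY_BIT) == 0)
        ;                                       /* spin until ready */
    return (uint16_t)(ADC_DATA & 0x0FFFu);      /* 12-bit result */
}
```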
Now let's delve into software.
- Assembler
- I've also already mentioned machine code in the context of the various uses of strings of bits. The earliest digital computers (there actually is another kind) had to be programmed directly in machine code, a tedious and error-prone process. The first major advancement in making programming easier for humans to comprehend and perform was assembly language, which came in a different dialect for each type of computer and instruction set. The beauty of assembly language was that, with practice, it was readable, and programs written in it were automatically translated into machine code by programs called assemblers. Abstractions which were later codified into the syntax of higher-level computer languages, such as subroutines and data structures, existed in assembly only as idioms (programming practices), which constrained what it could reasonably be used to create. Nevertheless, many of the ideas of computer science first took form in assembly code. (The first sketch after this list gives a small taste of assembly.)
- Higher Level Languages
- Once assembly code became available, one of the uses to which it was put was the creation of programs, called compilers, capable of translating code less closely tied to the details of computer processor operation into assembly code, from which it could be converted to machine code. That higher-level code was written in new languages that were easier for programmers to use, were more independent of particular computer hardware, and which systematized some of the low-level programming patterns already in use by assembly programmers, by incorporating those patterns into their syntax. Once these early languages became available, further progress became even easier, and many new languages followed, implementing many new ideas. Then, in the 1970s, came the C language, which was initially joined at the hip to the Unix operating system, a version of which, called BSD, quickly became popular, particularly in academia, driven in no small part by its use on DEC minicomputers and, later, Sun Microsystems workstations. In a sense, C was a step backwards, back towards the hardware, but it was still much easier to use than assembler, and well-written C code translated to very efficient machine code, making good use of the limited hardware of the time. Moreover, the combination of C and Unix proved formidable, with each leveraging the other. It would be hard to overestimate the impact C has had on computing, between having been ported to just about every computing platform in existence, various versions aimed at specific applications, superset and derivative languages (Objective-C and C++), and languages with C-inspired syntax. Even now, compilers and interpreters for newer languages are very likely to be written in C or C++ themselves. C's biggest downside is that it makes writing buggy code all too easy, and finding those bugs can be like looking for a needle in a haystack, so following good programming practice is all the more important when using it. (The second sketch after this list shows one classic example.)
- Operating Systems
- A computer operating system is code that runs directly on the hardware, handling the most tedious and ubiquitous aspects of computing and providing a less complicated environment and basic services to application software. The environment created by an operating system is potentially independent of particular hardware. In the most minimal example, the operating system may exist as one or more source code files that are compiled together with the application source code, or as precompiled code that is linked with the application code after it has been compiled, with the result loaded onto the device by firmware. Not every device has or needs an operating system, but those that run application software typically do, and typically their operating systems are always running, from some early stage of boot-up until the machine is shut down or disconnected from power. There are also systems that run multiple instances of one or more operating systems on multiple virtual hardware environments, but these are really beyond the scope of what I'll be addressing here. (The third sketch below shows an application asking an operating system for a basic service.)
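As a small taste of assembly, here is a C sketch that performs one addition with a machine instruction directly, using GCC/Clang extended inline assembly and assuming an x86-64 machine (AT&T syntax). Each mnemonic like `addq` corresponds to one machine-code instruction; translating such mnemonics into bits is exactly the job an assembler automates.

```c
#include <stdio.h>

int main(void) {
    long a = 40, b = 2, sum;
    /* One x86-64 instruction: add operand %2 (b) into operand %0.
     * The "0" constraint seeds the output register with the value of a. */
    __asm__ ("addq %2, %0"
             : "=r" (sum)           /* output in any general register */
             : "0" (a), "r" (b));   /* inputs: a seeds sum; b in a register */
    printf("%ld\n", sum);           /* prints 42 */
    return 0;
}
```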
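The remark above about C making buggy code easy to write deserves one concrete example. The classic off-by-one below compiles without complaint and silently writes past the end of the array, corrupting neighboring memory; it may appear to work until it doesn't. Tools like the `-fsanitize=address` flag supported by GCC and Clang help catch this class of bug at run time.

```c
#include <string.h>

/* BUG: strcpy also copies the terminating '\0', so an 8-character
 * name needs a 9-byte buffer. This writes one byte out of bounds,
 * corrupting whatever happens to sit next to 'name' on the stack. */
void greet(void) {
    char name[8];
    strcpy(name, "Rosalind");  /* 8 chars + '\0' = 9 bytes into 8 */
}

/* FIX: size the buffer for the terminator, and prefer bounded copies. */
void greet_fixed(void) {
    char name[9];
    strncpy(name, "Rosalind", sizeof name - 1);
    name[sizeof name - 1] = '\0';  /* strncpy may not terminate */
}
```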
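Finally, to ground the operating-systems discussion: on a POSIX system, application code obtains basic services by making system calls, usually through thin C library wrappers. This minimal sketch writes to the terminal by asking the kernel directly via write(), the same underlying service that printf() ultimately relies on.

```c
#include <unistd.h>   /* write(): POSIX system-call wrapper */

int main(void) {
    const char msg[] = "hello from a system call\n";
    /* File descriptor 1 is standard output; the kernel, not this
     * program, actually moves the bytes to the terminal. */
    write(1, msg, sizeof msg - 1);  /* -1: don't send the '\0' */
    return 0;
}
```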
Next up, actual hardware you can buy and tinker with.