I've long believed that Augmented Reality (AR) and robotics are closely related. Both model their environments to some degree. Robotics uses that model to guide the behavior of a machine, whereas AR uses it to provide an enhanced sensory experience to a human.
The exact nature of that enhanced experience is bounded only by the available sensing, computation, and output hardware (visual, audio, haptic, ...), and by how usefully the gathered data can be transformed into overlays that augment the natural perception of the human user. What is useful depends on both the content of those overlays and their latency: how much lag is introduced by the computations involved in generating them. Faster computational hardware can produce more detailed overlays at the same latency, or the same overlays at lower latency, than slower hardware.
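As a rough illustration of that tradeoff (the numbers below are my own back-of-envelope assumptions, not measurements from any particular headset), the arithmetic looks something like this: the display's refresh rate fixes a per-frame time budget, tracking and rendering consume part of it, and whatever remains is what overlay generation can spend before lag becomes noticeable.

```python
# Back-of-envelope latency budget for overlay generation (illustrative numbers only).
refresh_rate_hz = 60                      # assumed display refresh rate
frame_budget_ms = 1000 / refresh_rate_hz  # ~16.7 ms available per frame
tracking_ms = 4.0                         # assumed cost of head/world tracking
rendering_ms = 6.0                        # assumed cost of compositing and display

overlay_budget_ms = frame_budget_ms - tracking_ms - rendering_ms
print(f"Time left for overlay generation: {overlay_budget_ms:.1f} ms per frame")
# Doubling compute speed roughly doubles how much overlay detail fits in that budget,
# or lets the same overlays finish sooner.
```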
One important application for AR is making it safer and easier for a human to work in collaboration with robotic hardware. For example, a robot might publish the path it intends to follow and the 3D space through which it intends to pass, and an AR display might convert that information into highlighting of anything occupying that space. Or perhaps a machine wants to direct the attention of its human counterpart to some particular element of the environment, say one specific plant. That too could be highlighted in the display.
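To make the first example concrete, here is a minimal sketch of how that handoff might work; the data layout, class names, and coordinates are hypothetical, not taken from any particular robot or AR toolkit. The robot shares its planned path as a coarse swept volume (a sequence of boxes), and the AR side flags any detected object that intersects it.

```python
# Minimal sketch: highlight objects that sit inside a robot's intended swept volume.
# Everything here (message format, names, coordinates) is a hypothetical illustration.
from dataclasses import dataclass

@dataclass
class Box:
    # Axis-aligned box: (min_x, min_y, min_z) .. (max_x, max_y, max_z), in metres.
    min_corner: tuple
    max_corner: tuple

    def intersects(self, other: "Box") -> bool:
        return all(self.min_corner[i] <= other.max_corner[i] and
                   other.min_corner[i] <= self.max_corner[i]
                   for i in range(3))

def objects_to_highlight(swept_volume, detected_objects):
    """Return names of detected objects that fall inside the robot's planned path."""
    return [name for name, box in detected_objects.items()
            if any(box.intersects(segment) for segment in swept_volume)]

if __name__ == "__main__":
    # The robot's planned path, coarsely sampled as boxes it will pass through.
    swept_volume = [Box((0, 0, 0), (1, 1, 2)), Box((1, 0, 0), (2, 1, 2))]
    # Objects the AR headset (or other sensors) has detected in the field.
    detected_objects = {
        "irrigation_valve": Box((1.5, 0.5, 0), (1.7, 0.7, 0.5)),
        "fence_post": Box((5, 5, 0), (5.2, 5.2, 1.5)),
    }
    print(objects_to_highlight(swept_volume, detected_objects))  # ['irrigation_valve']
```

In practice the swept volume would arrive over whatever messaging layer the robot already uses, and "highlighting" would simply mean rendering those objects differently in the overlay.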
While these examples only scratch the surface of what is possible, they do serve to illustrate that the content of the AR overlays need not be generated entirely from data gathered by sensors attached to the display itself, but can be provided by other sources, including but not limited to other nearby devices. Those sources might include aerial or satellite imagery and information from databases. In the farming context, they might include 3D soil maps produced from core samples.
Examples of overlays that might be useful for a farmer include thermal imagery; current soil moisture content; soil surface porosity and water absorption capacity; exaggerated vertical relief, along with the runoff and resulting erosion to expect under various precipitation scenarios; highlighting of all plants of a particular species, or of all plants exhibiting nutrient deficiencies or other trauma; highlighting of bare soil (no mulch or plant cover); and the presence, activity, and impact of various types of animals. This list could go on and on.
Machines may be better at doing particular manipulations of data, finding correlations, and even at answering well-specified questions, but they're not so good at asking meaningful questions, much less at thinking outside the box. For this reason, the combination of human and machine is more powerful than either alone.
It's still very early days in AR, and there's a great deal of room for improvement. One development that is likely to occur sooner rather than later is voice operation, enabling hands-free control of the AR experience, including which overlays are active and how they are combined. With voice control, a farmer should be able to walk through a field, say what he wants to see, and make modifications to the plan controlling the robotic machinery that actually operates the farm, or issue commands for execution by the first available machine. For most, this will be a more intimate and far richer connection to their land than what they currently experience.
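As a toy illustration of what that hands-free control layer might look like (the overlay names are invented placeholders, and the speech recognition itself is assumed to have already turned audio into text), a command handler could be as simple as mapping a few verbs onto a set of active overlays:

```python
# Minimal sketch of voice-driven overlay control (hypothetical overlay names;
# speech-to-text is assumed to have already produced the command string).
ACTIVE_OVERLAYS = set()
KNOWN_OVERLAYS = {"soil moisture", "thermal", "bare soil", "nutrient deficiency"}

def handle_command(text: str) -> str:
    text = text.lower().strip()
    for overlay in KNOWN_OVERLAYS:
        if overlay in text:
            if text.startswith(("show", "add")):
                ACTIVE_OVERLAYS.add(overlay)
                return f"showing {overlay}"
            if text.startswith(("hide", "remove")):
                ACTIVE_OVERLAYS.discard(overlay)
                return f"hiding {overlay}"
    return "command not recognized"

print(handle_command("show soil moisture"))   # showing soil moisture
print(handle_command("hide thermal"))         # hiding thermal
print(ACTIVE_OVERLAYS)                        # {'soil moisture'}
```

A real system would of course need to handle richer phrasing and confirmation, but the essential idea is just routing recognized intents to the overlay state.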