Augmented Reality

From Gothpoodle


This is the overlaying of virtual reality information onto a user’s perception of the real world. Its basic element is a “virtual interface”: smart glasses incorporating a computer, digital camera, visual head-up display, optical recognition software, cellular modem, and bone induction speaker, all controlled by an infomorph, typically a nonsapient or low-sapient AI.

The system recognizes objects (including faces) and situations, and provides a helpful stream of context-appropriate data, often as audio messages or text boxes in the user’s visual field.

The user accesses the system through voice commands or a virtual reality screen (and when necessary, a keyboard) projected in front of him. Usually, the user just tells the AI what he wants it to do, or it anticipates his needs. However, the system’s camera can also track the user’s finger movements, allowing him to type, move objects, or simulate a mouse, trackball, or other controller in empty air. Virtual interfaces have rendered solid keyboards and computer terminals obsolete. With appropriate programs, the user can manipulate graphic images, or even use his finger as a pen or paintbrush. Infomorphs (AIs and mind emulations) use augmented reality without needing a virtual interface.

Augmented reality is a mature technology in 2100, nearly a century old. The latest advances, not yet ubiquitous, are smarter AIs and the replacement of wearables with brain implants. Popular augmented-reality applications include:


Memory Augmentation

A typical AR program is a “mug shot” database. Different databases are commercially available or Web-accessible, ranging from the commonplace (famous celebrities) to the job-specific (e.g., a cop may have a database of wanted criminals). Most people also accumulate personalized databases of people they meet or expect to meet, co-workers, and so on. If the virtual interface’s camera (or the user’s eyes, if he uses a brain implant) spots someone whose face is in the database, the program will automatically display that person’s name and a brief identifier as he comes into the user’s visual field, unless told not to do so. Similar remembrance-agent programs and databases can be acquired for other tasks, such as recognizing artwork, wildlife, and vehicles.
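The lookup such a remembrance agent performs can be sketched as a nearest-match search over face feature vectors. This is only an illustration: the database entries, vector length, and distance threshold below are all hypothetical, not anything specified by the setting.

```python
# Illustrative sketch of a mug-shot database lookup, assuming faces are
# reduced to fixed-length feature vectors. Names and the threshold are
# hypothetical placeholders.
import math

MUG_SHOTS = {
    "A. Chen": [0.91, 0.12, 0.33],
    "R. Okafor": [0.10, 0.80, 0.45],
}

def identify(face_vector, threshold=0.15):
    """Return the closest database name, or None if no match is near enough."""
    best_name, best_dist = None, float("inf")
    for name, ref in MUG_SHOTS.items():
        dist = math.dist(face_vector, ref)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None
```

The threshold is what lets the system stay silent for strangers instead of forcing a bad match, which is why an unfamiliar face falls through to the public-file search described below.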

Augmented reality can be used with context-relevant data-mining programs that continually search the Web for data with content relevant to the user’s current situation and present that information as appropriate. This will augment existing remembrance-agent databases. For example, if a person is encountered who isn’t in a user’s specific database, his picture and other data still have a very high chance of being in a public file online. True anonymity thus requires either disguise or an appearance nearly identical to thousands of others (fairly common for anyone with a cybershell or bioroid body, for example).

Video and Sensory Processing

Augmented reality can digitally process what the user sees, improving his vision. For example, enhancing the edges in an image helps in face recognition. It can also replace what he sees and hears, immersing him in a virtual reality.
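The edge-enhancement step mentioned above can be sketched as a small sharpening convolution over a grayscale image. This is a minimal pure-Python illustration; a real virtual interface would run such filters on dedicated hardware, and the kernel choice here is an assumption.

```python
# A minimal sketch of edge enhancement: convolve a tiny grayscale image
# with a Laplacian-style sharpening kernel (borders clamped).

KERNEL = [
    [ 0, -1,  0],
    [-1,  5, -1],
    [ 0, -1,  0],
]

def enhance_edges(image):
    """Return a sharpened copy of a 2D list of pixel values (0-255)."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0
            for ky in range(-1, 2):
                for kx in range(-1, 2):
                    # Clamp coordinates at the image borders.
                    sy = min(max(y + ky, 0), h - 1)
                    sx = min(max(x + kx, 0), w - 1)
                    acc += KERNEL[ky + 1][kx + 1] * image[sy][sx]
            out[y][x] = min(max(acc, 0), 255)  # keep a valid pixel range
    return out
```

Uniform regions pass through unchanged while brightness boundaries are exaggerated, which is exactly what makes downstream face recognition easier.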

Personal Navigation

In concert with global positioning satellites (GPS) or local embedded transmitters, the user can receive directions overlaid on his visual field, or call up more complex moving map displays. These are available for all urban and most rural areas on Earth and other inhabited worlds or stations. Outdoor maps (or real-time satellite imagery) are accessible from the Web. Building plans may be available for automatic download upon entering a large building such as an office block or mall. (Secure installations would do so only if the individual was recognized as an authorized visitor.)
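The direction overlay described above reduces to computing a compass bearing from the user's GPS fix to the next waypoint and comparing it with the user's heading. The function names and the 45° "ahead" cone below are illustrative assumptions, not part of the setting.

```python
# Sketch of a direction-overlay computation, assuming the interface has
# GPS coordinates for the user and the next waypoint.
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Compass bearing in degrees from the user (lat1, lon1) to a waypoint."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def arrow_for(bearing, heading):
    """Pick the overlay arrow given the user's current compass heading."""
    rel = (bearing - heading) % 360
    if rel < 45 or rel >= 315:
        return "ahead"
    return "right" if rel < 180 else "left"
```

Indoors, the same comparison would run against the local embedded transmitters instead of GPS, but the overlay logic is unchanged.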

Virtual Tags

Places, things, and even people can be “tagged” with augmented reality positional overlays called virtual tags (“v-tags”). The v-tag files are usually stored in local networks specific to a location in real-space. The position of tagged objects is updated continuously through tiny GPS or radio-frequency locators in contact with the local network. When someone with an augmented reality system approaches any tagged object, his virtual interface will compare his position in real-space with that of the object; if the user is facing it and within a designated range, his interface will be permitted to download the appropriate information, and he will “see” the v-tag data overlay. In short, a v-tag is a virtual signpost.

It’s easy to create v-tags: simply upload an appropriate file. People can attach virtual sticky notes, pictures, etc. to walls, doors, desks, fellow workers – whatever has been coded into the system. Similar v-tags are used in museums, shops, natural parks, billboards, warehouses, etc.
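The two-part test the interface applies before downloading a v-tag (within the designated range, and facing the object) can be sketched as below. The flat 2D positions, 20-metre range, and 60° field of view are illustrative assumptions.

```python
# Sketch of the v-tag visibility test: the interface is permitted to
# download a tag only if the user is within range AND facing the object.
import math

def can_see_vtag(user_pos, facing_deg, tag_pos, max_range=20.0, fov_deg=60.0):
    """True if the tag at tag_pos should be overlaid for this user."""
    dx = tag_pos[0] - user_pos[0]
    dy = tag_pos[1] - user_pos[1]
    if math.hypot(dx, dy) > max_range:
        return False  # outside the tag's designated range
    angle_to_tag = math.degrees(math.atan2(dy, dx)) % 360
    # Smallest angular difference between facing and the tag direction.
    diff = abs((angle_to_tag - facing_deg + 180) % 360 - 180)
    return diff <= fov_deg / 2
```

Turning away from a tagged object, or walking out of range, makes the overlay vanish again, which is what keeps a densely tagged space from flooding the user's visual field.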

Manufactured goods of all varieties are also v-tagged, with virtual labels that allow access to reams of online data, ranging from ingredients to safety instructions.

Some objects may also incorporate actual sensors to monitor their own status, whether that means checking to make sure milk hasn’t spoiled or measuring microstresses in a precision machine. The data from these sensors can be continuously uploaded and available through v-tag access. As all objects transmit positional data, valuable objects may alert humans or software if they are moved without authorization, as well as sending regular updates of their present location – at least, until the signal is jammed or the transmitter removed.
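The unauthorized-movement alert mentioned above can be sketched as an object comparing each position report against its registered resting place. The class name, the 2-metre tolerance, and the alert format are hypothetical.

```python
# Sketch of a tagged object's movement alert: each position update is
# checked against the object's registered "home" position.
import math

class TaggedObject:
    def __init__(self, home_pos, tolerance=2.0):
        self.home_pos = home_pos        # registered resting place
        self.tolerance = tolerance      # metres of drift allowed
        self.authorized_move = False    # set True when a move is approved

    def report_position(self, pos):
        """Return an alert string, or None if the object is where it should be."""
        drift = math.dist(self.home_pos, pos)
        if drift > self.tolerance and not self.authorized_move:
            return f"ALERT: moved {drift:.1f} m without authorization"
        return None
```

As the text notes, this protection only holds while the object can still transmit; jamming the signal or removing the locator silences the alerts.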

Virtual Tutors

A mechanism (anything from a car engine to a prefab house) may have its dozens (or thousands) of different parts individually tagged with microcommunicators and positional sensors similar to v-tags. Integral databases know where each part goes and virtual interface software can track both the parts and the user’s own hand movements, aiding in assembly, disassembly, preparation, or maintenance. For example, when a repair technician (human or machine) walks up to a broken device, the device’s components transmit diagnostics to the tech’s virtual interface. The virtual interface’s augmented reality program locates the 3D position of the object, and overlays real-time step-by-step guides for the technician to follow. Since all the individual parts (and tools) are also tagged, often with additional sensors that monitor things such as stress, current flow, etc., an object-specific “virtual repair manual” can warn the technician if he is taking apart or putting the object back together the wrong way, or if there are internal faults.
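The out-of-order warning such a virtual repair manual would give can be sketched as a check of the tracked installation sequence against the device's integral database. The part names and message format below are hypothetical.

```python
# Sketch of a "virtual repair manual" step check: the guide knows the
# required assembly order and warns when a part is installed out of sequence.
ASSEMBLY_ORDER = ["gasket", "impeller", "housing", "cover-plate"]

def check_step(installed, next_part):
    """Return a warning string, or None if next_part is the correct step."""
    expected = ASSEMBLY_ORDER[len(installed)]
    if next_part != expected:
        return f"warning: expected {expected!r}, not {next_part!r}"
    return None
```

In practice the same check runs in reverse for disassembly, and the per-part sensors layer fault warnings on top of the sequence warnings.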

The same technology can apply to other tasks requiring rote manual actions using specific processes and components, from building a house to fixing an engine. Each widget, brick, pipe, or module has a chip and sensor in it that knows where it goes and whether it’s been installed correctly. Augmented reality has enabled a resurgence in unskilled labor, since these technologies permit untrained individuals to perform complex tasks.
