What is Augmented Reality?
Augmented reality is a technology that expands our physical world by adding layers of digital information onto it. Unlike Virtual Reality (VR), AR does not create a whole artificial environment to replace the real one with a virtual one. Instead, AR appears within the direct view of an existing environment and adds sounds, video, and graphics to it.
In short, AR is a view of the physical, real-world environment with superimposed computer-generated images that change the perception of reality.
The term itself was coined back in 1990, and some of the first commercial uses were in television and the military. With the rise of the Internet and smartphones, AR rolled out its second wave and is now mostly associated with interactive experiences: 3D models are projected onto physical objects or fused with them in real time, and various augmented reality apps shape our habits, social life, and the entertainment industry.
AR apps typically anchor digital animation to a special ‘marker’, or use a phone’s GPS to pinpoint the location. Augmentation happens in real time and within the context of the environment, for example overlaying scores onto a live feed of a sporting event.
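The GPS-anchoring idea can be sketched as a simple proximity check: show a location-anchored annotation only when the user is close enough to it. This is a minimal, hypothetical illustration, not code from any real AR SDK; the haversine helper, the point-of-interest records, and the 50-metre threshold are all assumptions made for the example.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def visible_annotations(user_pos, annotations, radius_m=50.0):
    """Return the location-anchored annotations within radius_m of the user."""
    return [a for a in annotations
            if haversine_m(user_pos[0], user_pos[1], a["lat"], a["lon"]) <= radius_m]

# Hypothetical points of interest near a user standing at (48.8584, 2.2945).
pois = [
    {"name": "cafe sign", "lat": 48.8586, "lon": 2.2950},     # roughly 45 m away
    {"name": "museum label", "lat": 48.8606, "lon": 2.3376},  # a few km away
]
print([a["name"] for a in visible_annotations((48.8584, 2.2945), pois)])
```

A real location-based AR app would also use the compass heading to decide where on screen to draw each annotation; this sketch only answers the "is it near enough?" question.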
There are 4 types of augmented reality today:
- markerless AR
- marker-based AR
- projection-based AR
- superimposition-based AR
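For marker-based AR, once a marker’s four corners have been detected in the camera frame, the app must decide where and how large to draw the digital overlay. The sketch below is a deliberately simplified, hypothetical placement step (real toolkits such as ARToolKit recover a full 3D pose; the corner coordinates here are made up):

```python
import math

def overlay_placement(corners):
    """Given a detected marker's four corner points (x, y) in pixel
    coordinates, return the centre point and an approximate edge
    length for positioning and scaling a 2D overlay."""
    cx = sum(x for x, _ in corners) / 4.0
    cy = sum(y for _, y in corners) / 4.0
    # Average the four edge lengths as a rough scale estimate.
    edges = [math.dist(corners[i], corners[(i + 1) % 4]) for i in range(4)]
    return (cx, cy), sum(edges) / 4.0

# Hypothetical corners of a square marker seen head-on, 100 px across.
corners = [(200, 150), (300, 150), (300, 250), (200, 250)]
center, size = overlay_placement(corners)
print(center, size)  # → (250.0, 200.0) 100.0
```

When the marker is viewed at an angle, the four edges differ in length, which is why production systems estimate a full perspective transform instead of this simple average.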
Brief history of AR
AR in the 1960s. In 1968 Ivan Sutherland and Bob Sproull created the first head-mounted display, which they called The Sword of Damocles. It was a rough device that displayed only primitive computer graphics.
AR in the 1970s. In 1975 Myron Krueger created Videoplace – an artificial reality laboratory. Krueger envisioned interaction with digital objects through human movements, a concept realized with projectors, video cameras, and onscreen silhouettes.
AR in the 1980s. In 1980 Steve Mann developed the first wearable computer, EyeTap, designed to be worn in front of the eye. It recorded the scene, superimposed effects on it, and showed the result to the user, who could also interact with it via head movements. In 1987 Douglas George and Robert Morris developed a prototype heads-up display (HUD) that showed astronomical data over the real sky.
AR in the 1990s. The year 1990 marked the birth of the term “augmented reality”. It first appeared in the work of Thomas Caudell and David Mizell, researchers at Boeing. In 1992 Louis Rosenberg of the US Air Force created an AR system called “Virtual Fixtures”. In 1999, a group of scientists led by Frank Delgado and Mike Abernathy tested new navigation software that overlaid runway and street data onto video from a helicopter.
AR in the 2000s. In 2000 the Japanese scientist Hirokazu Kato developed and published ARToolKit, an open-source SDK; it was later adapted to work with Adobe Flash. In 2004 Trimble Navigation presented an outdoor helmet-mounted AR system. In 2008 Wikitude made the AR Travel Guide for Android mobile devices.
AR today. In 2013 Google beta tested Google Glass, which connected to the Internet via Bluetooth. In 2015 Microsoft presented two brand new technologies: Windows Holographic and HoloLens (AR goggles with numerous sensors for displaying HD holograms). In 2016 Niantic launched the Pokemon Go game for mobile devices. The app blew up the gaming industry and earned $2 million in just its first week.
How does Augmented Reality work?
For many of us, the question “what is augmented reality” implies a technical one: how does AR work? AR can draw on a range of data (images, animations, videos, 3D models), and people see the result in both natural and synthetic light. Unlike in VR, users remain aware of being in the real world, which is enhanced by computer vision.
AR can be displayed on various devices: screens, glasses, handheld devices, mobile phones, and head-mounted displays. It involves technologies such as SLAM (simultaneous localization and mapping) and depth tracking (briefly, sensor data used to calculate the distance to objects), and the following components:
- Cameras and sensors. These collect data about the user’s interactions and send it for processing. A device’s cameras scan the surroundings, and with this information the device locates physical objects and generates 3D models. These may be special-purpose cameras, as in Microsoft HoloLens, or common smartphone cameras used to take pictures and video.
- Processing. AR devices essentially act as small computers, something modern smartphones already do. They likewise require a CPU, a GPU, flash memory, RAM, Bluetooth/Wi-Fi, GPS, and so on, to measure speed, angle, direction, and orientation in space.
- Projection. This refers to a miniature projector on AR headsets, which takes data from the sensors and projects the digital content (the result of processing) onto a surface for viewing. In practice, projection-based AR is not yet mature enough for widespread use in commercial products and services.
- Reflection. Some AR devices have mirrors that help human eyes view virtual images. Some use an array of small curved mirrors, while others use a double-sided mirror that reflects light both to a camera and to the user’s eye. The goal of these reflection paths is to align the image properly.
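The depth tracking mentioned above can be illustrated with the classic stereo-camera relationship: depth = focal length × baseline / disparity. The sketch below uses made-up camera parameters purely for illustration; real depth sensors (time-of-flight, structured light, or stereo rigs) involve calibration and noise handling far beyond this.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth in metres of a point, from the pixel disparity between
    two horizontally offset cameras: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, cameras 6 cm apart.
# A feature shifted 14 px between the two views lies 3 m away.
print(stereo_depth(700.0, 0.06, 14.0))  # → 3.0
```

Note the inverse relationship: the smaller the disparity, the farther the object, which is why stereo depth estimates get less precise with distance.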