While channel-surfing before bedtime, I caught one of the Marvel “Iron Man” movies. I had already seen it, but there was nothing else on and I was looking to set my brain on auto-pilot, so I watched for a while. As in the other “Iron Man” movies, there were scenes where Tony Stark, played by Robert Downey Jr., interacts with his highly intelligent personal assistant, J.A.R.V.I.S.

[Image: Tony Stark interacting with J.A.R.V.I.S.’s mid-air display]

Image source: Touchless Screens On Their Way After Apple Wins Gesture Controls Patent

Stark was flicking, dragging, and expanding tiles in mid-air as fast as J.A.R.V.I.S. could present them. The tiles contained scientific formulas, press clips, news footage, surveillance video – you name it. Stark whipped through them while sizing up the problem at hand. As I watched this interaction, a question popped into my mind: Is J.A.R.V.I.S. the future of technical documentation?

So much for putting my brain on auto-pilot…

Gesture Recognition

Stark’s interactions with J.A.R.V.I.S. highlight two emerging technologies that I think might impact how technical documentation is delivered in the not-so-distant future. The first is gesture recognition, which is defined as “…the mathematical interpretation of a human motion by a computing device” (“gesture recognition,” WhatIs.com).

In a way, this technology has already emerged in the form of touchscreen devices, especially the swiping operations found on virtually all mobile devices. On my Droid Turbo, I swipe up to show the login screen, enter my passcode, and then swipe through the tiles until I find the app I want. Or, I curse repeatedly as I swipe a typo-ridden text message using Swype.

Gesture recognition technology does not stop with the smartphone:

In personal computing, gestures are most often used for input commands. Recognizing gestures as input allows computers to be more accessible for the physically-impaired and makes interaction more natural in a gaming or 3-D virtual reality environment. Hand and body gestures can be amplified by a controller that contains accelerometers and gyroscopes to sense tilting, rotation and acceleration of movement — or the computing device can be outfitted with a camera so that software in the device can recognize and interpret specific gestures. A wave of the hand, for instance, might terminate the program. (“gesture recognition,” WhatIs.com)
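To make the “gestures as input commands” idea concrete, here is a minimal Python sketch of how a recognized gesture might be dispatched to a command, including the “wave of the hand” example from the definition above. The gesture names and handlers are my own invention, not any real recognizer’s API:

```python
# A minimal sketch of "gestures as input commands": a recognizer
# (not shown) emits gesture names, and a dispatch table maps each
# name to a command. All names here are invented for illustration.

def terminate_program():
    print("Wave detected: terminating the program.")

def next_tile():
    print("Swipe detected: advancing to the next tile.")

def zoom_in():
    print("Expand detected: zooming in on the current tile.")

# "A wave of the hand, for instance, might terminate the program."
GESTURE_COMMANDS = {
    "wave": terminate_program,
    "swipe_left": next_tile,
    "expand": zoom_in,
}

def handle_gesture(gesture: str) -> None:
    """Look up a recognized gesture and run its mapped command."""
    command = GESTURE_COMMANDS.get(gesture)
    if command is None:
        print(f"Unrecognized gesture: {gesture!r}")
    else:
        command()

if __name__ == "__main__":
    for g in ["swipe_left", "expand", "wave"]:
        handle_gesture(g)
```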

Companies like Google and ArcSoft are already exploring possible applications of gesture recognition. Project Soli, from Google’s Advanced Technology & Projects (ATAP) group, is researching and developing applications for gesture recognition technology in wearables, phones, computers, and IoT (Internet of Things) devices. Below is a short video on Project Soli.

ArcSoft is a global leader in imaging intelligence technology. According to ArcSoft, they customize solutions to serve the world’s leading device companies and package their best offerings into direct-to-consumer software and apps (“About Us,” ArcSoft). ArcSoft is also pioneering gesture recognition technology: “ArcSoft’s Gesture Technology uses natural human hand gestures with one or both hands, such as wave, grab and move, as well as face, eye and finger motions that allow us to interact with our devices without actually touching them” (“Gesture,” ArcSoft).

[Image: ArcSoft gesture recognition technology in use]

Image source: http://www.arcsoft.com/technology/gesture.html

Biometrics is an established technology that shares characteristics with gesture recognition and is widely used in the industry in which I am employed: access control and security. Biometrics is “…the measurement and statistical analysis of people’s physical and behavioral characteristics. The technology is mainly used for identification and access control, or for identifying individuals that are under surveillance. The basic premise of biometric authentication is that everyone is unique and an individual can be identified by his or her intrinsic physical or behavioral traits” (Rouse, “Biometrics”). My guess is that J.A.R.V.I.S. relies on biometric technology to know whether it’s interacting with Stark, Pepper Potts, or someone else.
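The “basic premise” in that definition lends itself to a toy example. Below is a minimal Python sketch, with entirely made-up feature vectors and a made-up threshold, of comparing a measured biometric sample against enrolled templates and accepting the closest match only if it is similar enough:

```python
# Toy sketch of biometric identification: compare a measured sample
# (here, a tiny made-up feature vector) against enrolled templates
# and accept the closest match only above a similarity threshold.
# All vectors, names, and the threshold are invented.
import math

ENROLLED = {
    "tony_stark":   [0.91, 0.22, 0.47, 0.63],
    "pepper_potts": [0.15, 0.84, 0.39, 0.52],
}

MATCH_THRESHOLD = 0.95  # minimum cosine similarity to accept a match

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(sample):
    """Return the best-matching enrolled identity, or None."""
    best_name, best_score = None, 0.0
    for name, template in ENROLLED.items():
        score = cosine_similarity(sample, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= MATCH_THRESHOLD else None

if __name__ == "__main__":
    probe = [0.89, 0.25, 0.45, 0.60]  # a fresh measurement to identify
    print(identify(probe) or "unknown user")
```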

Augmented Reality

Another emerging technology highlighted by Stark’s use of J.A.R.V.I.S. is augmented reality, which is “…a live direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data” (“Augmented Reality,” Wikipedia). By moving, expanding, and interacting with tiles presented by J.A.R.V.I.S., Stark manipulated the environment around him with computer-generated sensory input.
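As a toy illustration of “computer-generated sensory input” layered onto a view of the real world, here is a minimal Python sketch using OpenCV and NumPy. The frame here is synthetic and the callout text is invented; a real AR app would pull frames from a live camera and anchor the overlay with object tracking:

```python
# Toy sketch of augmenting a real-world view with computer-generated
# input. The "frame" is synthetic and the callout is invented; a real
# AR app would grab frames from a live camera and anchor the overlay
# with object tracking. Requires: pip install opencv-python numpy
import cv2
import numpy as np

# Stand-in for one captured frame of the real-world view (gray 640x480).
frame = np.full((480, 640, 3), 96, dtype=np.uint8)

# Computer-generated sensory input: a highlight box plus a text callout.
cv2.rectangle(frame, (220, 160), (420, 320), (0, 255, 0), 2)
cv2.putText(frame, "Component A: torque to 12 Nm", (180, 140),
            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

# Save the augmented frame; a live app would display it instead.
cv2.imwrite("augmented_frame.png", frame)
print("Wrote augmented_frame.png")
```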

With regard to technical documentation, let’s focus on the “direct or indirect view of a physical, real-world environment whose elements are augmented” part of the above definition. Two jobs ago, I spent a lot of my time developing technical illustrations. They were mostly flat, 2D renderings of product installation or wiring diagrams, done manually in Macromedia Freehand or Adobe Illustrator. We started to produce wordless installation guides for global product releases as a way to combat the cost of translation, and a push was made for more realistic, 3D illustrations for these guides. At that time, we had a talented graphic artist in our department who drew 3D illustrations by hand. She eventually left the company, and we were left without anyone who could update or create these drawings. So we partnered with our company’s CAD team to utilize the 3D models they were creating to test the physical designs of our hardware products. With some combined ingenuity from both departments, we figured out how to export views of the models and create vector illustrations in Freehand and Illustrator. If we needed to change something, the model was preserved and could easily be manipulated as needed.

How does this relate to augmented reality? Simple. For the illustrations I created, I relied on a real system with all of the hardware components and peripherals in addition to the corresponding 3D models. As I moved a hinged plate on the real system to see how components mounted on it, I could then manipulate the models accordingly, turn off unwanted components, and export files to convert into vector art. In Freehand or Illustrator, I then added shading to accentuate important parts, faded out or removed unnecessary parts, and added callouts, wiring connections, and so on. I essentially enhanced, or augmented, a 3D rendering of a real hardware environment in order to help my audience understand something about it. Overall, the illustrations were received favorably.

Today, companies like SolidWorks have taken this idea to the next level by offering the ability to create interactive 3D technical instructions with SolidWorks Composer. The image below shows a 3D model for the SEA-DOO® SEASCOOTER:

[Image: SolidWorks Composer model of the SEA-DOO® SEASCOOTER]

Image source: http://www.solidworks.com/sw/products/technical-communication/3d-communication.htm

To see how SolidWorks Composer can create 3D technical instructions, visit the SolidWorks website and watch their short promotional video.

The video shows how SolidWorks Composer utilizes 3D models to provide an enhanced, or augmented, view to help users better see and understand something.

Use of augmented reality to deliver technical documentation is already here. For example, Audi has replaced the traditional printed vehicle owner’s manual with eKurzinfo, an app that a new Audi owner downloads onto a mobile device. Pointing the device’s camera at the dashboard brings up information on how to use the controls in view. Here is a short video on eKurzinfo:

Impact on Technical Documentation

An excellent article by Jacquie Samuels about how user experience will drive the future of Technical Communication pinpoints how technologies like gesture recognition and augmented reality will influence the development and delivery of user documentation:

People thinking that gesture control is just a fad and won’t really impact the tech comm industry is like thinking that the mouse wouldn’t change the way we navigate through the interwebs…Widespread adoption of gesture control devices will let us control computers, TVs, gaming systems in a visual/physical way, browsing, searching, and finding content by swiping, grabbing, and manipulating information objects…Let me wiggle a finger, rotate a wrist, or blink to find my next piece of information, start or stop a video, exit an application, or troubleshoot a nuclear reactor…This change will, of course, profoundly affect the content we provide—not just what we write, but how users access it, experience it, save it, and recall it.
(Samuels, “The Future Is Now: User Experience Drives Technical Communication”)

Samuels lists several examples of how user experience and emerging technologies will reshape Technical Communication:

  • Gesture control devices will profoundly change the user experience of information of all kinds. Your content will need the ability to be searched, saved, sorted, filtered, and assembled for users easily while they are gesturing through it. Findability and usability will be the twin pillars of importance here.
  • Mobile is king. Our content will have to automatically and dynamically adjust to the user’s device, no matter what device that is, with minimal effort (we can’t afford to code for individual devices—it just doesn’t scale). We have to build to be adjustable.
  • Content that dynamically displays what users want/need. Users will be able to select their roles, experience level, operating systems, etc. and get the content that applies to them—dynamically. This has already been implemented in some large and small corporations, but it will start being demanded by users…and the rest of us will need to implement this basic functionality as well.
  • Direct, real-time feedback from users directly back into the source content. We’re going to open up the dialog (or possibly monologue from user > author) between users and authors and break down the barrier between us.
  • PDF no longer a delivery method, only an assemble-on-the-fly option. PDFs will die as a means of delivering content. Instead, PDF will be an option-on-demand. Users will assemble their own PDFs on the fly and create their own “books.”
  • Silos start to clump. Marketing, technical content, training content, support-generated content will all start to converge into simply “content,” as it should.
    (Samuels, “The Future Is Now: User Experience Drives Technical Communication”)

Here’s how I envision these technologies influencing the future of my field:

The systems that I have written for over the course of nearly 20 years are complex combinations of networked hardware components and software. In the near future, a technician arrives at an installation site. Instead of positioning a tablet-like mobile device in front of the system components, as Audi’s eKurzinfo requires, the technician places it on a flat surface and launches a J.A.R.V.I.S.-like app. A biometrics component logs the technician into the app, and based on the tech’s user profile retrieved from a central server, the app provides all the necessary information for the job site and the required work. The technician can move around tiles that contain video-based instructions for the hardware and software products, contact and interact with technical support specialists as needed (through another emerging technology, telepresence), and then use the app to instruct the end user on how to operate the system.
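To sketch just the content-assembly step of that scenario in Python: once biometrics identifies the technician, the app filters a pool of documentation tiles down to the ones that match the tech’s profile and the products on site. Every name, field, and tile below is hypothetical:

```python
# Hypothetical sketch of the content-assembly step above: after a
# biometric login, filter a pool of documentation tiles down to the
# ones matching the technician's profile and the products on site.
# Every name, field, and tile here is invented for illustration.

TILES = [
    {"title": "Panel wiring video",         "products": {"AX-100"}, "roles": {"installer"}},
    {"title": "Controller firmware update", "products": {"AX-100"}, "roles": {"installer", "support"}},
    {"title": "End-user walkthrough",       "products": {"AX-200"}, "roles": {"trainer"}},
]

def assemble_tiles(profile, site_products):
    """Return only the tiles relevant to this technician and job site."""
    return [
        tile for tile in TILES
        if profile["role"] in tile["roles"] and tile["products"] & site_products
    ]

if __name__ == "__main__":
    tech = {"name": "J. Doe", "role": "installer"}  # from the biometric login
    on_site = {"AX-100"}                            # from the central job-site record
    for tile in assemble_tiles(tech, on_site):
        print("Present tile:", tile["title"])
```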

And that’s how J.A.R.V.I.S. could be the future of technical documentation.
