Using the HoloLens to facilitate plant maintenance

We have been using the HoloLens for 8 months now, and I am still amazed by the quality of its tracking, which enables truly stable and quite robust hologram displays.  So far, many of the use cases demonstrated for the device have shown holograms unrelated to their physical environment: whether building models, TV screens or Minecraft games, such holograms might be displayed on your coffee table, on your bed, in your garage, or at school – it makes no difference.  But the aspect that, in my opinion, gives the HoloLens such great potential in infrastructure engineering is its capacity to augment reality – that is, to display digital information directly related to the physical world.

Let's say, for instance, that you operate a plant that requires regular maintenance.  The procedure is straightforward, and it is always done by the same employee, who knows it so well he could do the job with his eyes closed.  But one day he is ill, and no other employee has been trained to do the work.  What do you do?

You could send someone else with a maintenance handbook (assuming one exists), who will try to follow the procedure by reading the instructions and attempting to execute them.  Not only would this likely be slow, but there is also a risk of mistakes, as establishing a correspondence between written text and the physical handles to operate is error prone.

A quicker and safer solution would be to show him exactly what he has to do, at the right location, directly on the hand valves and instruments that he has to operate or check – some sort of Augmented Reality tutor that would guide him through the work, step by step.  We tried achieving that last summer:

Of course, setting up such an AR tutor system would take time, and the AR procedure would need to be updated from time to time.  So it would likely make sense only where the procedure has to be repeated on a regular basis – unless, of course, its creation could be automated through some sort of analysis process based on the system's P&ID and the maintenance task's goal.
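To make the idea of such an AR procedure a bit more concrete, here is a minimal sketch of how one might model it: an ordered list of steps, each anchored to a tagged piece of equipment from the P&ID, so the tutor knows where to place each instruction. All names here (the `Step` type, the tags, the `run_tutor` function) are illustrative assumptions, not part of any actual HoloLens API.

```python
# Hypothetical sketch: a maintenance procedure as ordered steps, each
# anchored to a P&ID equipment tag so an AR tutor could display the
# instruction at the right physical location.
from dataclasses import dataclass

@dataclass
class Step:
    equipment_tag: str   # P&ID tag of the valve/instrument to operate
    instruction: str     # short text shown next to the hologram

# An illustrative three-step procedure (tags and wording are made up).
procedure = [
    Step("HV-101", "Close the inlet hand valve."),
    Step("PI-102", "Check that pressure reads below 2 bar."),
    Step("HV-103", "Open the drain valve."),
]

def run_tutor(steps):
    """Walk through the procedure one step at a time."""
    for i, step in enumerate(steps, 1):
        # In a real system this would place a hologram at the
        # equipment's tracked position; here we just print.
        print(f"Step {i}/{len(steps)} @ {step.equipment_tag}: {step.instruction}")

run_tutor(procedure)
```

Updating the procedure then amounts to editing this list – which is also what an automated P&ID-based generator would produce.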

But there is a risk in providing such detailed step-by-step instructions.  Let's say, for instance, that the task you want to teach is how to drive a screw into a piece of wood.

Of course, an augmented reality tutor could simply say: “Drive the screw at this location”.

Alternatively, it could also say: “Place the screw on the driver tip and hold both screw and tip together with the fingers of one hand. Apply very little pressure on the driver while turning in a clockwise direction until the screw engages the wood.” (source: http://www.artofmanliness.com/2010/02/18/toolmanship-how-to-use-a-screwdriver/ ) 

Workers know how to use a screwdriver…  Such highly detailed instructions would be way too much information.  It could actually be risky – the danger is the same as with a GPS device: you might end up following the instructions blindly and stop thinking.  Then you run your car into a lake…

Using AR, we wish to give users superpowers, not stupefy them.  That is, we want to give users just the right amount of information to let them do what they are good at: making decisions and acting.
This will be the subject of my next post…  Stay tuned!