Can mixed reality give humans the edge over automation?
There’s a trend that has held since the dawn of the industrial revolution – if you can build a machine that automates a worker’s job (or several workers’ jobs), it will be more efficient in almost every way: cheaper, faster, and more accurate. At first, this trend wasn’t all that worrisome; while some jobs were made obsolete by automation, so many new fields and industries were created that workers had more job opportunities than ever before.
Like any new facet of technology, however, automation slowly (and then not so slowly) began advancing – taking a lot of jobs with it. But workers found a new avenue of growth through the rise of the service industry, and while manufacturing positions were pushed to the wayside, the economy found a way to capitalize on the trend of automation.
This automation, much as during the industrial revolution, opened up entirely new industries with millions of jobs to fill – the stand-out of the 2010s being social media. Best of all, these jobs seemed future-proof: the future was digital, and social media was a new form of digital communication by humans, for humans.
And then along came artificial intelligence.
If you had asked someone in the ’90s, ’00s, or even very early ’10s to imagine A.I., they’d probably have described a giant supercomputer, housed in something akin to the datacenters integral to Silicon Valley and government digital operations around the world, calculating the great philosophical questions that have plagued humanity since the dawn of time: What is the meaning of life? What is the true nature of the universe? Where do we fit in amongst all of this?
Well, it turns out that while A.I. isn’t to the point where it can answer with “42” yet, it is really good at tasks that were largely unexpected just a few years ago – like diagnosing medical conditions, driving cars, and even writing prose. Deep-learning neural networks are just plain better than humans at processing vast quantities of data, and more than likely, if a task is performed through a digital medium, A.I. will, with time, be better at it than humans.
Social media algorithms are already wreaking havoc with elections across the world by influencing millions of users – and with much more efficiency and effectiveness than any marketing team could ever hope to achieve themselves.
If humans were, for instance, Batman, A.I. would be Bane – just as the Dark Knight merely adopted the darkness, humans merely adopted the method of processing data through a digital interface, while A.I. was born into, and is better adapted to, this digital environment. We’re like the first creatures to move from the sea to land: just as our lumbering ancestors struggled against gravity, we interact with digital mediums at very low input/output bandwidth compared to A.I. – most of the time, our input is our two thumbs, and our output is limited to the processing power of one brain.
A.I., on the other hand, inputs and processes information orders of magnitude faster than we can – it was designed to work with 1s and 0s, while we work within a whole host of sensory and emotional data. Plain and simple: given enough time, A.I. will replace almost every job that operates within a medium provided by a computer.
Take, for instance, the SpaceX Falcon 9 landing failure of December 2018. The reusable rocket’s first stage malfunctioned during re-entry – the hydraulic pump that drives the grid fins, which stabilize the rocket, stalled. The flight computer, which assesses the situation during re-entry, detected the malfunction and went into a “safe” landing mode – meaning it aimed for the ocean, not the pad. The flight computer is smart; if an ocean landing is impossible, it is programmed to avoid buildings and people. But what was truly astounding was the flight computer using its algorithms to learn – in real time – a method to stop the seemingly out-of-control spin by using the rocket engines themselves.
This hadn’t been pre-programmed; the rocket took in data, processed it, and improvised – fully automated, without any input from ground control. Let that sink in for a second: the first stage of a reusable rocket malfunctioned upon re-entry and improvised a safe landing, using methods of control it was not programmed to use, in order to slow (and ultimately regain control of) its descent into the ocean – by itself.
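To make the idea concrete, here is a minimal sketch of that kind of logic – not SpaceX’s actual flight software, just a hypothetical illustration of the two behaviors described above: falling back to a safe target when an actuator fails, and iteratively damping a spin with proportional feedback from the engines.

```python
# Hypothetical sketch only - NOT SpaceX's real flight code.
# Illustrates: (1) a pre-programmed safe-mode fallback, and
# (2) a simple feedback loop that damps a spin step by step.

def choose_target(grid_fins_ok: bool) -> str:
    """Pre-programmed fallback: aim for the ocean when the
    grid-fin hydraulics fail, the pad otherwise."""
    return "landing_pad" if grid_fins_ok else "ocean"

def damp_spin(spin_rate: float, gain: float = 0.5, steps: int = 40) -> float:
    """Apply repeated engine corrections proportional to the
    measured spin rate (toy dynamics: each correction simply
    subtracts from the rate)."""
    for _ in range(steps):
        correction = -gain * spin_rate  # push against the rotation
        spin_rate += correction         # rate decays toward zero
    return spin_rate

# A malfunctioning booster picks the safe target, then damps its spin.
target = choose_target(grid_fins_ok=False)
residual = damp_spin(spin_rate=30.0)    # degrees/second, arbitrary start
print(target, abs(residual) < 1e-3)
```

The interesting part of the real event is everything this sketch leaves out – the vehicle had to discover, under real physics and in real time, that engine thrust could substitute for the failed fins at all.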
If this were 2014, that would be the plot of a sci-fi short – not a widely covered news story.
Luckily for us slow-input humans, rocket landings aren’t an everyday occurrence (yet). There are still jobs to fill, work to be done. There are still plenty of things humans do better than A.I. and robots. We still have time to develop ways to keep up (and frankly, in certain fields, catch up) with A.I., robotics, and the ever-looming presence of automation.
Augmented Reality (AR) and Mixed Reality (MxR), as Microsoft’s HoloLens 2 exemplifies, might be our method for keeping up with the advancement of automation in the near future.
Our edge against automation
As Boston Dynamics’ SpotMini has shown us, robots, just like velociraptors, can now open doors – or moonwalk, if you prefer. Atlas can backflip, turn 180 degrees in the air, and even run. But we’re still a lot better at this kind of stuff than robots. We’re much more adaptable: while a robot surgeon has higher dexterity than a human surgeon, it’s not going to build a car anytime soon. With enough training, though, a human could become proficient at both surgery and auto repair.
But that training takes time, which takes money. If a manufacturer can spend $100,000 on a machine that takes the place of a human worker, they will – and a large part of that is due to the robot requiring zero on-the-job training.
Where automation is limited (for now, at least) is the number of functions a single robot can perform. Tesla’s Gigafactory, which was built with the idea of one day being fully automated, exemplifies this with its cautionary tales of too much automation too soon.
There’s no doubt humans are more adaptable than computers – we came to be the dominant species on this planet for a reason – so what would happen if we could learn almost as fast as A.I. algorithms?
To be clear: if and when general-purpose A.I. comes about, this is all null and void – there’s no way we could keep up with an intelligence like that. But for now, MxR might be what gives humans the edge over automation.
We’ve covered the enterprise applications of AR before – but with the release of the HoloLens 2, the power of MxR is becoming rapidly apparent. There have already been a lot of cool methods for bringing MxR into manufacturing, but due to the wearable, hands-free nature of the HoloLens, MxR is now a truly viable option.
First of all, you wear the HoloLens on your head, and when it’s not in use, you can flip the visor up, akin to a welder’s mask. Microsoft put a lot of time and effort into making sure it’s comfortable, allowing workers to wear it for extended periods.
The actual area on which images are projected (called the field of view) is larger than that of the first HoloLens, announced in 2015 – made possible by MEMS mirrors that project visual data at 120 FPS. There have been many other improvements since then as well, a big one being the reduced weight of the HoloLens 2 – and, unlike bulkier AR rigs, it can provide workers with in-the-field, real-time MxR experiences.

Other workers can see what the HoloLens wearer sees via its cameras, and highlight objects in real time. Eye tracking helps focus on objects the wearer is looking at, and learns to predict what users will find interesting. CNET’s Ian Sherr and Scott Stein described this predictive feature as feeling like “practical magic” – almost like having your mind read. The eye-tracking cameras can even read the wearer’s emotions, and can detect who is wearing the headset – allowing shared headsets to switch from one user to the next without spending valuable time setting up personal options. The wearer can even manipulate virtual objects with their hands.
In a demo at Microsoft’s headquarters, Ian Sherr and Scott Stein – who are not auto mechanics – were put in front of an ATV in need of repair. With real-time instructions from the HoloLens 2, they were able to fix it in short order.
These features, and their implications, could have a resounding impact on the structure of the manufacturing and production industries. With MxR, workers don’t need training – they only need to follow real-time visual instructions, with no prior knowledge of the task. They have, essentially, Matrix-like superpowers: information downloaded directly to the brain, without the giant hole in the back of the skull.
And when things fall apart – as they inevitably do – adaptable humans will still be there to improvise in ways automation physically can’t. This might be the crux of what sets MxR-enhanced workers apart from automated robots: a worker with MxR enhancements is efficient and precise, but still has the ability to think, and act, outside the box.
The AR and MxR industry was already worth some $6 billion in 2018, and is expected to grow to $200 billion by 2025. That’s a huge leap – and with it will come advancement and innovation. As A.I. takes over the digital realm, maybe we’ll regress to a manufacturing-based economy, albeit in a more futuristic way.