Don’t Fear the Robots – or should we?

February 6, 2019
By Nathan Bailey, Jeff Kidwell

As we head into February, we wanted to explore the truism ‘don’t fear the robots,’ wondering whether technological advances have really reached the point where we should be paying closer attention, if not the stage of active concern. Fortunately, we have two well-positioned experts to battle it out (sort of) for us! Nathan Bailey, the Managing Director leading FMP’s Center of Excellence for Technology and Tools, shares his insight into this truism, while Jeff Kidwell, the Managing Director leading FMP’s Center of Excellence for Learning and Development, provides a counterpoint argument highlighting all that technology does so well for us. So, let the discussion begin, and maybe have a robot bring you some popcorn while you read.

Point: Technology is amazing. It does so many good things. And…it kinda scares me.

Trust me, I’m not a Luddite. I love our technology-centric world just as much as you do. But I tend to worry that the “inmates are running the asylum.” Traditionally, that phrase means that the least capable people are running an organization, but in our modern society, technology, automation, and “robots” are managing, operating, facilitating, and delivering all forms of information and critical services, from transportation and medicine to education and social media (and everything in between). In almost every way, modern technology helps us and is well suited to these kinds of activities. It tends to be reliable. It has amazing computational power. It doesn’t make stupid mistakes. It doesn’t get stressed. It doesn’t get tired or need breaks. It has sensory and response capabilities well beyond what humans can do. And the list goes on…

But at the same time, it’s hard to know what it is doing and, maybe more importantly, how it does it. It is also difficult to anticipate how it will break and when it will give us unexpected results. So, back to my original assertion that the inmates are running the asylum: it’s not so much that technology isn’t capable as that we don’t fully understand what it’s doing and how it’s doing it. Technology and automation often lack nuance and the ability to deal with novel situations. Their span of control is complex and hard to understand, with layer after layer of code, integrations, updates, and patches that can interact and behave in ways that range from mildly annoying to catastrophic. And unfortunately, because it is so useful and works so well most of the time, we depend on it and trust it without reservation. This trend will only get worse.

When I was in graduate school, I studied something called automation-induced complacency. That’s basically a fancy way of saying that when technology systems are hard to understand and working well, we trust them, we like them, and we stop paying much attention to them. Generally, that’s fine. It’s a good thing for us to delegate difficult, mundane, monotonous, or complex tasks to technology. But the flip side is that the systems don’t always work perfectly, or they may work in ways we don’t expect, especially when things in our world aren’t going as planned. In those situations, we need drivers, pilots, operators, programmers, and people reading their news feeds to pay close attention, think critically, and be skeptical. That is nearly impossible when technology basically trains us to do the exact opposite by making things easy and working well ALMOST all of the time.

Up to this point, I’ve essentially talked about three scenarios where our interactions with technology are less than ideal: 1) it does something unexpected and we don’t notice, 2) it does something in a way we don’t understand and we simply trust and accept the results, and/or 3) it breaks in a totally unexpected way. There are more, but these are three very common situations. Related to the first, one of the most famous cases of complacency (i.e., the not-noticing scenario) was the crash of Eastern Air Lines Flight 401 into the Florida Everglades. Following a rather unusual series of events, including suspicion that the landing gear was not properly locked in place, the pilots put the aircraft in a holding pattern while they troubleshot the problem. Moments later, the aircraft began descending when the pilots inadvertently disengaged the automation that was holding it at a set altitude. The descent was subtle, they were already flying low, and they detected it too late to stop the crash; 96 of the 163 passengers on board died. The situation was complicated, the automation was difficult to understand and acted in an unexpected way, and the events were subtle and hard to detect. Sadly, it all started because of a burned-out indicator bulb that, had it been working, would have shown the pilots that their landing gear was in fact down and locked.

For the second scenario, let’s turn to social media. We all know that Facebook uses a complicated and difficult-to-understand algorithm for delivering content to our news feeds. It’s hard to know what will show up from our friends, news sources, advertisers, and other content providers. In addition, we’ve seen that these systems can be manipulated and that Facebook has a difficult time distinguishing good, authentic content from sources that are less so. There are a number of serious and insidious consequences to this situation, including not seeing content we are interested in, thinking certain content is more important or relevant than it really is, or being manipulated into believing content that is misleading or blatantly untrue. Unfortunately, this often goes completely unnoticed, and we are happy to simply browse our news feeds and accept and delight in the content that is there, despite not really understanding how it got there in the first place.
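To make that opacity concrete, here is a deliberately simplified sketch, in Python, of how a feed-ranking algorithm might work. It is purely illustrative and is not Facebook’s actual algorithm; the signals, weights, and the sponsored-content boost are all hypothetical stand-ins for the thousands of signals, many of them machine-learned, that a real platform combines.

```python
# A toy feed ranker: each post gets a relevance score and the feed
# shows the highest-scoring posts first. The signals and weights here
# are hypothetical -- real platforms combine thousands of signals,
# most of them learned rather than hand-chosen.

from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float  # how often you interact with this author (0-1)
    engagement: float       # likes/comments/shares, normalized (0-1)
    age_hours: float        # how old the post is
    is_sponsored: bool      # paid placement

# Hand-picked weights stand in for what a trained model would learn.
WEIGHTS = {"affinity": 0.5, "engagement": 0.3, "recency": 0.2}

def score(post: Post) -> float:
    recency = 1.0 / (1.0 + post.age_hours)  # newer posts score higher
    base = (WEIGHTS["affinity"] * post.author_affinity
            + WEIGHTS["engagement"] * post.engagement
            + WEIGHTS["recency"] * recency)
    return base * (1.5 if post.is_sponsored else 1.0)  # paid boost

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post(author_affinity=0.9, engagement=0.2, age_hours=8, is_sponsored=False),
        Post(author_affinity=0.1, engagement=0.9, age_hours=1, is_sponsored=True),
    ])
    for p in feed:
        print(round(score(p), 3), p)
```

Even in this toy version, a user who only sees the final ordering has no way to tell whether a post appeared because a close friend shared it, because it was broadly engaging, or because someone paid to boost it, and that is exactly the opacity the scenario describes.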

Finally, related to the third scenario, and ripped from this week’s headlines, consider the recent FaceTime vulnerability discovered in iOS. This is a great, and terrifying, example of complex technology breaking or acting in a completely unexpected way. The basic situation is that the latest updates and patches to iOS combined to create a vulnerability in FaceTime that allowed a caller to turn on the recipient’s microphone and/or camera before the call was ever answered, without that person’s knowledge. Let me repeat that: the interaction of all of this complex code, patches, and updates inadvertently allowed others to access your microphone and camera so they could listen and watch from your phone without you even knowing it! That is scary.

That’s a lot of doom and gloom, and I know I started off by saying that I’m not a Luddite. Even after all of that, I’m still not. But I feel it’s important to be emphatic. We have to remain vigilant when interacting with technology and make an effort to understand how it works. We have to be skeptical of what it provides us in terms of services, answers, and content, and retain our ability to be good critical thinkers and responsible consumers of information. And more broadly, we have to maintain our skills for doing some of the tasks that technology does for us, which helps us understand not only how things work but also the complexity of how it all fits together (and it also comes in handy when the power goes out!).

Technology does amazing things for our lives: it extends our capabilities, makes things easier and more comfortable, and puts a world of information at our fingertips. But there are concerning risks and trends, and we all need to be aware of them so that we get the most out of these technological marvels without falling prey to them.

Counter(ish) point: Technology is evolving more rapidly than we can imagine. Get on the bus, or get run over.

I, too, am slightly afraid of the pace and scale of technological change. I sometimes get the uncomfortable feeling that something as simple as an iOS update will cause hours of wasted time trying to fix something that, to my mind, was not broken in the first place. As an industrial engineer, I began my relationship with automation and technology in computing and industrial automation in the manufacturing field. In the early 1980s, I worked in a factory helping to install and program a robotic system to improve the production of a new type of light bulb. What I did not realize until I walked onto that factory floor for the first time was the extent of manufacturing automation already in place, independent of computing technology. The process of making a basic incandescent light bulb (like just about any other high-volume product) had become so mechanically automated that humans were relegated to making sure the machinery was well maintained and the outputs were as expected. You can see this today on just about every “How It’s Made” episode. To me, it was a revelation. That plant produced a million light bulbs a week with only 50 or so people in the building.

The point is, automation has been evolving for a long time. Humans are constantly trying to improve processes and become more efficient. We happen to be lucky enough to be alive at the point in time when advances in computing, including processing speed, networks, storage, miniaturization, and the evolution of code, are being combined with other systems, both mechanical and virtual, with such innovation that we will all benefit in ways we cannot currently envision.

So, is SkyNet an inevitability? What will we humans do next? I agree that we need to be watchful and wary of the evolution of technology and automation. But we also need to continue to understand how we can evolve to best embrace and optimize the capabilities of the most complex system ever created: the human being. Which brings us to the automation of learning. Artificial Intelligence (AI), machine learning, natural language processing, and the aforementioned advances in other computing technologies now allow us to learn in ways that can be instantly tailored to the individual and adapted in real time to their learning styles and preferences, maximizing knowledge retention and reducing the time it takes to gain it. This is where we stay one step ahead of the machines. Embracing the technology of learning (and maybe some very ethical implementations of gene-editing technologies like CRISPR-Cas9) will enable humans to move to the next level of knowledge, performance, and mastery, in ways that we can currently only imagine in sci-fi movies. So, do we need to keep an eye on our Roomba? Maybe. Or maybe we need to evolve enough to teach it to bring us a cold drink, and relax a bit more.