A rational, optimistic view of AI

Given the success of narrow machine learning applications over the last few years (what the media and general public refer to as “Artificial Intelligence”, or just “AI”) and a healthy dose of hyperbolic, media-candy predictions from a certain egocentric billionaire, concerns about what AI can and will accomplish in the near future have risen dramatically. Cutting through the hype is especially difficult for two reasons. First, given the disparate domains that must be well understood to form an informed opinion, the vast majority of people are completely ignorant of what would be required to substantially remake the world with an artificial general intelligence (AGI). Second, fear is a powerful motivator, and there is no shortage of fear to be had if AGI is introduced to the world. Jobs will change, security and major infrastructure will need to be re-imagined, our understanding of what it means to be alive will be challenged, and even the existential threat of human extinction is on the table. Given the (supposed) stakes, there is plenty of fear to go around.

There is one problem with all this fear: the prerequisite, an actual AGI, doesn’t exist. And while that is accepted as fact today, ignorance still plays a significant role in how society at large understands AGI and its implications: most people seem to believe that AGI is 1) inevitable and 2) imminent. In fact, neither is necessarily true.

It is probably inevitable

I will concede that it is highly probable that humans can achieve AGI some day. We have shown that we can achieve great and previously unimaginable things through specialization and exchange. There is little reason to believe that will not continue, provided our species survives and some government somewhere continues to allow and enable innovation. Later, I will explain why I believe AGI is unlikely to destroy all humans or to do anything other than help us, even once invented.

It is probably not imminent

Regarding the imminent (near-term, i.e. within the next 5-11 years) invention of AGI, leading researchers acknowledge that we really have no idea how close or far away we are from achieving it.

“…many [in] the AI research community itself, while actively pursuing AGI, go to great lengths to emphasize how far we still are — perhaps out of concern that the media hype around AI may lead to dashed hopes and yet another AI nuclear winter.”
Frontier AI: How far are we from artificial “general” intelligence, really?

An article last year from the MIT Technology Review speculates that the most advanced “AIs” of today are in fact built on deep learning “back propagation” techniques that will soon see diminishing returns, or that are approaching a technological dead end with regard to viability for AGI. This remains true one year later.

“It’s worth asking whether we’ve wrung nearly all we can out of backprop. If so, that might mean a plateau for progress in artificial intelligence.”
Is AI Riding a One-Trick Pony?

The current thinking essentially amounts to: we don’t really know what it will take to build AGI. Many researchers are back to the old idea of looking at human learning as the basis for teaching a machine how to learn.

“A growing line of thinking in research is to rethink core principles of AI in light of how the human brain works, including in children… Teaching a machine of how to learn like a child is one of the oldest ideas of AI, going back to Turing and Minsky in the 1950s, but progress is being made as both the field of artificial intelligence and the field of neuroscience are maturing.”
Frontier AI: How far are we from artificial “general” intelligence, really?

And again from the MIT Technology Review:

“If you want to see the next big thing, something that could form the basis of machines with a much more flexible intelligence, you should probably check out research that resembles what you would’ve found had you encountered backprop in the ’80s: smart people plugging away on ideas that don’t really work yet.”
Is AI Riding a One-Trick Pony?

The implementation of back propagation as a viable deep learning technique, first theorized thirty-plus years ago, is the basis for all of the most advanced AIs that exist today. It now seems likely that this technology is limited and will not lead to AGI. The current leading research hypothesis is that we need to dive deeper into strategies around human learning, first proposed sixty-plus years ago, with little understanding of what we may find or accomplish.
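
For readers unfamiliar with the term, “back propagation” is simply the chain-rule procedure for computing how much each weight in a network contributed to the error, so the weights can be nudged downhill. The sketch below is a minimal, self-contained illustration: a tiny two-layer network learning XOR with plain NumPy. The layer sizes, learning rate, and data are illustrative assumptions of mine, not anything drawn from the articles cited above.

```python
import numpy as np

# Minimal two-layer network trained with backpropagation on XOR.
# Sizes, learning rate, and epoch count are illustrative choices only.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(5000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: apply the chain rule to get error gradients
    # at the output layer, then propagate them back to the hidden layer.
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update: nudge each weight against its gradient.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```

That is the whole trick, scaled up by orders of magnitude in today’s systems: compute gradients, adjust weights, repeat.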

To summarize, nothing has changed significantly in the last thirty-plus years except our ability to implement very narrow and basic “learning”, and researchers today are still looking for more insight on theories proposed over sixty years ago. In essence, asserting that AGI is imminent is a guess that is not based on any known data or research and, on balance, appears unlikely.

So you’re saying there’s a chance…

Of course, anyone is free to argue that leaps in our technologies and understanding do happen and could happen in AI. But that argument invokes the cheapest of tricks: you can’t prove a negative, or, in essence, you can’t prove something won’t happen in the future. Many things could happen that would pose an existential threat to the human species, but we simply don’t give a lot of thought to them because they are unlikely to happen. In fact, I would argue, a virtually infinite number of things could significantly impact the human race in the next ten years. But we only consider a handful of them as likely, and research (not to mention history) has shown that humans are not very good at determining what is likely to happen in the future.

The evidence today simply does not support the conclusion that anyone knows when AGI will be available and thus picking any time period for this to occur seems, at best, pure speculation.

But what about what we have?

The back propagation technique and the deep learning implementations and applications we have today are amazing and powerful. And this technology likely has a long way to go with regard to our ability to find new applications for it. I firmly believe we should continue to leverage technology, specifically computer technologies, to increase prosperity and improve outcomes for everyone in the world.

But there is likely a limit to what deep learning can accomplish, and it likely falls well short of AGI. More on this: Deep Learning: A Critical Appraisal

But what if?

Above, I concede that AGI is probably inevitable. It is not guaranteed, but given the immeasurable value it could bring to humans, the hundreds of thousands of incredibly bright people working all angles of the problem (learning research, software, hardware, etc.), and the inexorable march of technological advancement, it seems likely that someday we’ll get there. What then? Well, my faith in the human race not to destroy itself lies in the little-acknowledged fact that were we to realize an AGI, we would almost certainly develop strategies and plans to limit its power and its ability to gain power. This seems so obvious as to go without saying, but it seems lost in the conversations of today. It is not as if AGI is going to pop into a computer terminal one day, take control of all the things and start popping out baby robots to kill us off.

The people working in this field are smart enough to realize the implications and to develop counter-measures. When a person creates something, that person is also intimately aware of that thing’s weaknesses and/or how to destroy it. It will be no different with AGI. And it isn’t hard to find that people are already doing exactly that.

Secondly, if one were to argue that bad actors could get their hands on an AGI and use it outside these constraints, they’d be right. Bad things can happen; no one can prove that this won’t happen. But similar fears dominated much of the 20th century with regard to nuclear bombs and warfare, and yet, here we are. Of course, AGI is different in kind from bombs, but the principle is this: no one who is not insane wants to destroy the world. And insane people rarely have the capabilities needed to acquire weapons of mass destruction that require high levels of technical expertise. Rational people will attempt to build and acquire AGI for an infinite number of reasons. But it will be in literally no one’s best interest to let AGI control human outcomes to the degree it would need to in order to enslave or destroy us. So smart, powerful people will work very hard to prevent this from happening.

Look, the future is unknowable, and no one can prove anything is or is not going to happen. But that said, assuming that the impact of an AGI on humanity will always lead to bad outcomes for humans ignores very real and powerful factors at play. Namely, in general, humans don’t want bad outcomes for humans. We can, should, and likely will develop regulations, policies, and strategies, and use force if necessary, to ensure our survival against AGI. And I would bet it won’t be that hard to do.
