Who knows whether, at this very moment, an AI is reading this article before it gets published? Maybe it has already calculated the probability of the article being published. Maybe it has already decided the article is headed for the trash.
This possibility of critically thinking AI and robots is a real path. And it’s not just in the movies anymore.
Elon Musk’s fear-mongering about artificial intelligence (AI) becoming a weapon for countries with a strong computer science foundation stems from a logical, if far-fetched, idea about AI going down a path of destruction. Automation is one of AI’s most useful features, and one that can be programmed to do anything from industrial manufacturing to chat responses. In Musk’s mind, this is a prelude to machines being programmed to do undesirable things. With machine learning also burgeoning, he sees an AI as capable (read: sentient enough) of declaring war. In support of this, he joined other tech and scientific figureheads, such as Steve Wozniak (co-founder of Apple), Stephen Hawking, and Mustafa Suleyman (co-founder of DeepMind), in signing a petition asking the UN to ban killer robots.
Many figures have opposed his ideas, among them Bill Gates, who can seriously question Musk’s judgments about AI. But as a technological luminary who shares this fear of robots with Stephen Hawking, Musk can’t be ignored. He does know a little about AI: Tesla is one of the biggest proponents of self-driving features in cars, a technology heavily bootstrapped by AI. Tesla’s version of self-driving works so well compared to Waymo’s and Uber’s efforts because it’s more an upgraded version of cruise control than a full-blown AI-is-your-co-driver feature. But it’s close, given its capacity to absorb information and decide whether to speed the car up, slow it down, or stay in the same lane.
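To make that decision loop concrete, here is a toy sketch of cruise-control-style logic: read a few sensor values, then pick one of those three actions. The rules and thresholds are purely hypothetical, for illustration only, and have nothing to do with Tesla’s actual system.

```python
# Toy cruise-control decision logic (hypothetical, not Tesla's system):
# absorb sensor readings, then choose speed_up, slow_down, or stay_in_lane.
def decide(own_speed_kph: float, gap_to_lead_m: float) -> str:
    if gap_to_lead_m < 30:              # too close to the car ahead
        return "slow_down"
    if gap_to_lead_m > 80 and own_speed_kph < 100:
        return "speed_up"               # open road, below cruising speed
    return "stay_in_lane"               # otherwise, hold course

print(decide(own_speed_kph=90, gap_to_lead_m=25))  # -> slow_down
```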
So Close to Sentience or So Close to Nothing?
This ability to gather information and decide is one of AI’s most useful features, and the one people fear most. How? Through machine learning, a programmed ability of AI entities to identify patterns and train themselves on those patterns. Why? To enable automation, scalability, and independent operation, so that an AI entity can function on its own without constant input from humans.
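As a minimal sketch of what “identifying patterns and training on them” means in practice, here is a tiny classifier that learns to label messages from examples. It assumes the scikit-learn library is installed, and the training data is made up for illustration.

```python
# A tiny pattern-learning example: the model trains itself on labeled
# messages, then decides on new ones without further human input.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = ["free prize, click now", "meeting at noon",
            "win money fast", "lunch tomorrow?"]
labels = ["spam", "ham", "spam", "ham"]          # hypothetical training data

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)    # turn text into word counts

model = MultinomialNB()
model.fit(features, labels)                      # learn the patterns

print(model.predict(vectorizer.transform(["claim your free money"])))
# -> ['spam']
```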
So one can argue that some AI entities already operate semi-independently. Like Sophia. Others, people use every day. Like Siri and Google itself. These are a testament to how big a leap humankind has made toward making fully functioning AIs and robots.
On the other side, however, these developments foster the most fundamental fear about AI: what if these AI entities are talking to each other through the Internet and plotting to take over the world? What if it’s happening right now, at this exact moment? What if humankind’s future has already been quantified, and AIs are only waiting for the perfect time to launch coordinated, indefensible attacks to enslave their masters?
What if they’re already sentient? Mustafa Suleyman has said that a moral compass will be a focal point of his company’s goals in 2018. Ethics is a uniquely human trait, so if an AI does successfully integrate a sense of morality, that will be the day an AI can actually think like a human. It could imitate the best of humans, and the worst too.
Nonetheless, it’s all still hypothetical, however problematic it could prove for humans. And for every proclamation of AI apocalypse, there’s also the good and beneficial, like more vacations for employees. Bill Gates says companies can treat AI as simply better software that helps with the workload and fulfills tasks when a human isn’t present. An AI future could also look less like a Terminator situation and more like a kitchen with specific apps to help with each food-preparation task. And a specialized toaster.
Still Very Much a Human Tug of War
In the theoretical world, technologists and scientists foresee a future so reliant on AI that humans become incapable of tackling problems themselves. In reality, no theorizing is needed, because it’s already happening. There are apps available now that help people deal with their mental woes, offering counseling and behavioral analysis whenever one needs it.
But are AIs now holding the hands of their creators and leading them to a robot-centric future? Far from it, very far from it. Elon Musk got it right that there are agencies defining how AIs develop, but they are human agencies (like principles and society) more than governments and private companies.
The difference between Google Assistant and Alisa, a Russian-speaking counterpart developed by the Russian search engine company Yandex, encapsulates the still-human-defined field of AI. A Twitter post that went viral in October 2017 shows how an app developed in the West differs in values from an app developed in the East. Google warmly answered the query “I feel sad” with “I wish I had arms so I could give you a hug.” Alisa responded with, “No one said that life is about fun.” And it isn’t quantifiable metrics gathered by a self-learning machine that defined each AI’s response; it’s the societal values of its developers. Google’s is emotional capitalism (hugging will take away the negativity), as described by sociologist Eva Illouz; Yandex’s is emotional socialism (life is hard, deal with it), as described by sociologist Julia Lerner. One is unmistakably a Western, even American, way of thinking; the other characterizes how an actual Russian person would deal with sadness.
Lest We Forget, It’s Still Early Days
With Amazon releasing a host of products that promote interconnectivity in the home (through the Internet of Things) and the use of AI, many can now experience the practicality of a household presence that can play a TV show, heat food, or lock the doors on command. This is the home people once only saw in movies. The future really is now.
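Under the hood, much of that presence is command-to-action routing. The sketch below is a deliberately simplified, hypothetical dispatcher; the device functions are stand-ins, not any vendor’s real API.

```python
# A hypothetical smart-home dispatcher: map a spoken verb to a device action.
def play_show(name):   print(f"Playing {name} on the TV")
def heat_food(target): print(f"Heating {target} in the microwave")
def lock_doors(_):     print("All doors locked")

commands = {"play": play_show, "heat": heat_food, "lock": lock_doors}

def handle(utterance: str) -> None:
    verb, _, rest = utterance.partition(" ")   # first word picks the action
    commands[verb](rest)

handle("play Stranger Things")  # -> Playing Stranger Things on the TV
handle("lock the doors")        # -> All doors locked
```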
But even then, AI is young. Machine learning is, for all intents and purposes, step one. It could be that doomsayers haven’t heard of deep learning, or of artificial intelligence augmentation. These fields are about helping machines be more helpful to humans, not just helping machines learn by themselves. By developing neural networks that assist in creativity and foster the generation of new ideas, AI won’t be reduced to a mere tool. It could expand how humans think, the best outcome in a world where AI is still in its infancy.
Then there are the simple goals, such as having a better translation tool. The constant flux of information doesn’t have to lead to a world where apps and AI predetermine the movement and actions of people; it could all just mean having more useful tools at our disposal.
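As a sketch of what “a better translation tool” looks like from a developer’s side, here is a minimal example assuming the Hugging Face transformers library is installed (it downloads a default English-to-French model on first use).

```python
# Minimal machine-translation example using the transformers library.
from transformers import pipeline

translator = pipeline("translation_en_to_fr")   # loads a default model
result = translator("The future really is now.")
print(result[0]["translation_text"])            # a French rendering
```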