A.I. is set to improve every facet of our interaction with the world, yet there are still those who caution us about the genuine and present dangers lurking within its endless potential.
At SXSW, Tesla CEO Elon Musk doubled down on his numerous warnings about the technology, declaring A.I. “more dangerous than nukes.” In a year when decades-old fears of nuclear war are making headlines with alarming frequency, whether it's the missile alert in Hawaii or the increasingly worrying relations between the US and North Korea, hearing one of the biggest names in the industry describe A.I. as being even more dangerous should give us pause for thought about where it's all leading.
Musk is not alone. For the better part of the last decade, we've seen the biggest names in science, technology, and pop culture keen to discuss the pragmatic implications of sharing the planet with super-intelligent beings. Sam Harris, Nick Bostrom, and Stephen Hawking have all detailed reasons why we should be extremely cautious about this technology. At a conference at Asilomar organized by the Future of Life Institute last year, Elon Musk, Stuart Russell, Bart Selman, Ray Kurzweil, David Chalmers, Nick Bostrom, Demis Hassabis, Sam Harris, and Jaan Tallinn sat down for an open panel discussion to outline exactly what sharing our planet with superintelligence would mean, and their collective conclusions were not exactly reassuring.
Is this really something you should be worried about?
Is A.I. an enemy waiting to be unleashed or an ally capable of bringing us forward into an almost unrecognizable era of existence? Right now, A.I. is far from qualifying as a superintelligent being, but courtesy of deep learning and machine learning it has been making our lives remarkably easier. The impact of machine learning is difficult to fully comprehend, as the sheer variety of industries and sectors benefiting from it seems to grow daily.
Medicine, construction, agriculture, and the tech industry are all implementing machine learning to increase efficiency, improve accuracy, and cut costs. Researchers recently taught an A.I. to identify over 100 types of brain tumors, beating its human counterparts in diagnostic accuracy. Skycatch's machine-learning drones are now deployed at over 5,000 building sites in Japan, where they survey the site, plan construction, and direct self-driving construction vehicles. A.I. has mapped the moon more accurately than humans; a company called MineSense has brought machine learning into the mining industry, yielding purer ore and fewer waste products; and XpertSea is helping fish farms do more with less human intervention, using machine learning to identify disease and overfeeding and, as a result, produce healthier, stronger fish.
Pick an industry or sector, and machine learning is there, granting us levels of understanding and accuracy we could not achieve on our own.
Reducing waste, eliminating human error, and cutting costs are goals we as a species can universally applaud. But the fear of this tool being wielded by the military-industrial complex now carries legitimate weight: the Pentagon has requested Silicon Valley's help with its work on A.I., and earlier this month Google confirmed it was working on machine learning for drone strike technology.
When DeepMind was acquired by Google, the company specifically said it wanted to use the technology for ethical purposes, and Google has been quick to explain that these machine-learning drones will be used for passive data collection, allowing for enhanced logistics on the battlefield and in troop deployment.
While many question the ethics of this decision by Google, at least an argument can be made that the technology is not malevolent by design.
The same cannot be said of the emergence of DeepFakes. Sure, we all laughed at the Trump DeepFake videos, and why not? They were funny! Now imagine a video surfaces of a digitally edited Trump declaring war. How would you be able to tell the difference? Would you even question whether what you saw was real? Nvidia recently released photos of people who were entirely generated by an A.I. learning system. The process has become so seamless that no casual observer would suspect these are not real photos, much less that the people in them never existed.
Nvidia has also released details of “unsupervised image-to-image translation networks,” which can turn a photo taken on a sunny day into a snowy one. Gone is the reliability of your senses; gone is the certainty that what you saw or heard actually happened. DeepFake pornography has risen dramatically in recent months, and ironically, A.I. is being used to combat the authors of DeepFakes and DeepFake pornography. DeepFake communities, discussions, and content have been banned from Twitter, Reddit, Pornhub, and Discord. So where is this leading?
We are entering an era when A.I. is learning how to create music, video, and art, and it may yet become a contributor to culture and the arts.
Alexa now has the ability to play music created by an A.I. called DeepMusic, letting us listen to the soundscapes of a machine's “mind.” A.I. may create its own social media accounts, and with time its chatter may become indistinguishable from a regular conversation you could have on Messenger. If we continue to explore machine learning and wish to benefit from our collaboration with A.I., then it's clear that the intention behind its application matters: it can yield unimaginable improvements for all, but it requires the collective wisdom of many informed and capable people to guide us into this new era of technology. Elon Musk may have been right.