Artificial intelligence has been a hot topic in Silicon Valley and in the tech scene in general lately. For those of us involved in that scene, there is an incredible energy around the subject, with all kinds of companies putting A.I. at the core of their business. There is also a surge in A.I.-related university courses, bringing a wave of extremely smart new talent into the job market. Nor is this simply my own bias: interest in the topic has been rising steadily since mid-2014.
The noise around the subject will only increase, and to the layman it is all very confusing. Depending on what you read, it's easy to believe that we're headed for Skynet-style apocalyptic destruction at the hands of cold, calculating supercomputers, or that we'll all live forever as purely digital entities in some sort of cloud-based artificial world. In other words, either The Terminator or The Matrix is about to become eerily prescient.
Should we be worried or excited? And what does all this mean?
Will robots rule the world?
When I jumped on the A.I. bandwagon at the end of 2014, I knew very little about it. Although I have been involved in web technologies for over 20 years, I have a degree in English Literature and am more drawn to the business and creative possibilities of the technology than to the science behind it. I was attracted to A.I. because of its positive potential, but when I read warnings from people like Stephen Hawking about the apocalyptic dangers lurking in our future, I was naturally as concerned as anyone else.
So I did what I normally do when I'm worried about something: I started learning about it so I could understand it. Over a year of constant reading, talking, listening, watching, tinkering, and studying has brought me to a pretty good understanding of what it all means, and I want to spend the next few paragraphs sharing that knowledge in the hope of enlightening someone else who is curious but needlessly afraid of this brave new world.
Oh, if you just want the answer to the headline above, the answer is yes, they will. Sorry.
How machines have learned to learn
The first thing I discovered was that artificial intelligence, as an industry term, has been around since 1956 and has seen multiple booms and busts in that time. In the 1960s, the A.I. industry entered a golden age of research, with Western governments, universities, and big corporations pouring huge amounts of money into it in hopes of building a brave new world. But by the mid-1970s, when it became clear that A.I. had failed to deliver on its promise, the bubble burst and funding dried up. In the 1980s, as computers became more popular, another A.I. boom arrived, with similarly mind-boggling levels of investment pouring into various companies. But again, the sector came to nothing and the inevitable crash followed.
To understand why these booms didn't last, you must first understand what artificial intelligence really is. The short answer (and believe me, there are very, very long answers) is that A.I. is a set of different overlapping technologies that broadly tackle the challenge of using data to make a decision about something. It encompasses many different disciplines and technologies (big data or the Internet of Things, anyone?), but the most important is a concept called machine learning.
Machine learning basically involves feeding computers large amounts of data and having them analyze that data to extract patterns from which to draw conclusions. You've probably seen this in action with facial recognition technology (such as on Facebook or in modern digital cameras and smartphones), where the computer can identify and frame human faces in photos. To do this, computers reference a vast library of photographs of people's faces and learn to detect the features of a human face from shapes and colors, averaged over a data set of hundreds of millions of different samples. The process is basically the same for any machine learning application, from fraud detection (analyzing purchase patterns in credit card history) to generative art (analyzing patterns in paintings and randomly generating images using those learned patterns).
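To make the idea concrete, here is a deliberately tiny sketch of "learning a pattern from data", using the fraud-detection example. It is a 1-nearest-neighbour classifier: the "training" step is just storing labelled examples, and a new purchase is labelled like whichever stored example it most resembles. The data points and labels are entirely made up for illustration; real fraud detection uses far richer features and models.

```python
import math

# Hypothetical training set: (purchase amount in dollars, hour of day) -> label.
# These numbers are invented purely to illustrate the idea.
training_data = [
    ((20.0, 14), "normal"),
    ((35.0, 10), "normal"),
    ((15.0, 18), "normal"),
    ((900.0, 3), "fraud"),
    ((1200.0, 4), "fraud"),
]

def classify(point):
    """Label a new purchase with the label of its closest training example."""
    nearest = min(training_data, key=lambda ex: math.dist(ex[0], point))
    return nearest[1]

print(classify((25.0, 12)))   # a small daytime purchase  -> "normal"
print(classify((1000.0, 2)))  # a huge purchase at 2 a.m. -> "fraud"
```

Notice that nothing in the code "knows" what fraud is; the pattern lives entirely in the data, which is the essential point of machine learning.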
As you can imagine, analyzing huge data sets to extract patterns requires LOTS of computer processing power. In the 1960s, there simply weren't machines powerful enough to do it, which is why that boom fizzled. By the 1980s, computers were powerful enough, but it turned out that machines learn effectively only when the amount of data they receive is large enough, and there was no way to gather enough data to feed them.
Then came the internet. Not only did it solve the computing problem once and for all through cloud computing innovations, which essentially let us access as many processors as we need at the touch of a button, but people on the internet now generate more data every day than was produced in the entire prior history of planet Earth. The amount of data being constantly produced is absolutely mind-boggling.
What this means for machine learning is important: we now have more than enough data to properly train our machines. Think about the number of photos on Facebook and you will begin to understand why facial recognition technology is so accurate.
There is now no major barrier (that we currently know of) preventing A.I. from reaching its potential. We're just beginning to figure out what to do with it.
When computers start thinking for themselves
There is a famous scene in the movie 2001: A Space Odyssey in which Dave, the main character, slowly deactivates the artificial intelligence mainframe (called "Hal") after it malfunctions and decides to kill everyone aboard the spacecraft it was designed to run. Hal, the A.I., protests Dave's actions and ominously declares that he is afraid to die.
That film illustrates one of the great fears surrounding A.I. in general: what happens if computers start thinking for themselves instead of being controlled by humans? The fear is justified: we are already working with machine learning constructs called neural networks, whose structures are modeled on the neurons in the human brain. In a neural network, data is fed in and processed through a highly complex web of interconnected points that build connections between concepts in much the same way as associative human memory. This means computers are slowly building up a library not only of patterns, but of concepts, which eventually leads to the basic foundations of understanding rather than mere recognition.
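A real neural network has millions of those interconnected points, but the core mechanism can be sketched with a single artificial "neuron". In this toy example, the neuron starts with random connection strengths (weights) and, as examples flow through it, each weight is nudged to reduce its error, so it gradually learns the logical OR pattern. This is a minimal illustration of the training idea, not how production networks are built.

```python
import math
import random

random.seed(0)

# One artificial neuron: two input connections plus a bias, all random to start.
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)

def sigmoid(x):
    # Squashes any number into the range (0, 1), like a neuron "firing" or not.
    return 1 / (1 + math.exp(-x))

# The pattern to learn: logical OR.
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

learning_rate = 1.0
for _ in range(2000):
    for inputs, target in examples:
        # Forward pass: weighted sum of the inputs, squashed to (0, 1).
        output = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
        # Backward pass: strengthen or weaken each connection to reduce error.
        grad = (output - target) * output * (1 - output)
        weights = [w - learning_rate * grad * x for w, x in zip(weights, inputs)]
        bias -= learning_rate * grad

# After training, the neuron reproduces the pattern it was shown.
for inputs, target in examples:
    prediction = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
    print(inputs, round(prediction))
```

The neuron was never told the rule for OR; it extracted the rule from examples, which is exactly the pattern-learning described above, just at a microscopic scale.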
Imagine looking at a picture of someone's face. When you first see the photo, a lot of things happen in your brain: first, you recognize that it is a human face. Then you can tell whether it is male or female, young or old, black or white, and so on. Your brain will also quickly decide whether it recognizes the face, although sometimes recognition requires deeper thought, depending on how often you have been exposed to that particular face (the familiar experience of recognizing someone but not immediately knowing from where). All of this happens almost instantly, and computers can already do all of it, at almost the same speed. For example, Facebook can not only identify faces, it can also tell you who a face belongs to, if that person is also on Facebook. Google has technology that can identify a person's race, age, and other characteristics from nothing but a photo of their face. We've come a long way since the 1950s.
But true artificial intelligence, known as Artificial General Intelligence (AGI), where the machine is as advanced as a human brain, is still a long way off. Machines can recognize faces, but they still don't really know what a face is. For example, you can look at a human face and infer many things drawn from a huge, intricate network of different memories, lessons, and feelings. You can look at a picture of a woman and guess that she's a mother, which in turn may lead you to believe she's selfless, or even the opposite, depending on your own experiences with mothers and motherhood. A man might look at the same photo and find the woman attractive, which will lead him to make positive assumptions about her personality (bias again), or conversely decide that she looks like a crazy ex-girlfriend, which will irrationally make him feel negatively toward her. These richly varied but often illogical thoughts and experiences are what drive people to the different behaviors, good and bad, that characterize our race. Despair often leads to innovation, fear leads to aggression, and so on.
For computers to be truly dangerous, they would need some of these emotional compulsions, but emotion is such a rich, complex, multi-layered tapestry of different concepts that it is very difficult to train a computer on, no matter how advanced the neural networks become. We'll get there one day, but there's plenty of time to make sure that when computers do reach AGI, we can still shut them down if we need to.