
Artificial Intelligence (AI): Will it help or hurt mankind?


The answer probably lies in how we handle the almost limitless power of this still-nascent technology. One thing, though, is certain: AI will have a profound impact on our jobs and future economic structures.

AI has been thrown into the limelight recently thanks to the much-publicized spat between Mark Zuckerberg and Elon Musk, two tech titans who hold differing views on the future of AI and its impact on humanity. Does AI hold only positive outcomes, as Mr. Zuckerberg argues? Or is there a potential downside, as Mr. Musk warns? In this piece we argue that Mr. Zuckerberg is not entirely right. Though AI as a technology holds immense promise for improving the quality of human life, it is something that must be handled with great care.

"Using the analogy of nuclear research, with its potential for a very dangerous weapon: releasing the energy is easy; containing the energy safely is very difficult." - Elon Musk

AI, Machine Learning, Deep Learning

While reading about AI, you will surely have come across two terms: Machine Learning and Deep Learning. To fully appreciate the power of AI, it is important to understand what they mean.

As Jeremy Howard explains in this wonderful TED talk, programming usually means laying out in great detail every step that you want the computer to take to achieve your goal. But what happens if you want the computer to do something that you don't know how to do yourself? In 1956, Arthur Samuel, the man widely acknowledged as the father of Machine Learning, wanted to teach a computer to beat him at the game of Checkers. But how do you teach a computer to beat yourself? He came up with an idea: he made the computer play against itself thousands of times until it learnt how to play Checkers. And it worked - by 1962 the computer was able to beat the Connecticut champion.
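To make the self-play idea concrete, here is a minimal sketch in Python. It is not Samuel's program - Checkers is far too large for a toy example - but it applies the same principle to single-pile Nim: the program is never told what a good move looks like; it simply plays against itself and keeps a running estimate of how well each move turns out. The game, learning rate and number of practice games are all illustrative choices.

```python
# A minimal sketch of the self-play idea, applied to single-pile Nim rather than
# Checkers (this is NOT Samuel's actual program). Two players alternately take
# 1-3 stones; whoever takes the last stone wins. The program is never told what
# a good move is -- it plays against itself and keeps score of how moves turn out.

import random
from collections import defaultdict

ALPHA, EPSILON, EPISODES = 0.5, 0.2, 20000   # illustrative choices
q = defaultdict(float)                       # learned value of (stones_left, move)

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def choose(stones, explore=True):
    moves = legal_moves(stones)
    if explore and random.random() < EPSILON:
        return random.choice(moves)          # occasionally try something new
    return max(moves, key=lambda m: q[(stones, m)])

for _ in range(EPISODES):
    stones, history = 15, []
    while stones > 0:                        # both "players" share the same brain
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    reward = 1.0                             # the player who moved last has won
    for state, move in reversed(history):
        q[(state, move)] += ALPHA * (reward - q[(state, move)])
        reward = -reward                     # alternate credit between the two sides

# After training, the program should prefer taking 3 from 15 stones,
# which is the known winning move, even though no one ever told it so.
print(choose(15, explore=False))
```

The same principle - play, score the outcome, adjust - is what let Samuel's far more sophisticated Checkers program improve beyond its author.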

Perhaps the first big commercial success of machine learning was Google. Google showed that it was possible to search for information using a computer algorithm, and that algorithm was based on machine learning. Since then there have been many commercial successes using machine learning. Companies such as Amazon and Netflix suggest products you might want to buy and movies you might want to watch. LinkedIn and Facebook tell you who your friends might be, sometimes so accurately that you wonder how they did it. And it is all down to the power of machine learning - algorithms that have learnt how to do this from data rather than being programmed by hand. So we now know that computers can learn: they can learn to do things that we ourselves do not know how to do, or do them better than we can.

Deep Learning is a subset of Machine Learning. It is an approach inspired by how the human brain works, and as a result it has no theoretical limit on what it can achieve: the more data and computation you give it, the better it gets. Using Deep Learning, computers today can listen and understand - Microsoft chief research officer Rick Rashid wowed attendees at a lecture in China with a demonstration of speech software that transcribed his spoken words into English text with an error rate of 7 percent, translated them into Chinese-language text, and then simulated his own voice uttering them in Mandarin.
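To give a rough sense of what "inspired by how the brain works" means, here is a toy neural network in plain Python and NumPy, trained on the classic XOR problem. Real deep learning systems differ mainly in scale - many more layers, far more data and compute - but the basic loop of a forward pass, an error measure and weight updates is the same. The layer size, learning rate and iteration count below are arbitrary illustrative choices.

```python
# A toy two-layer neural network learning XOR -- a problem that cannot be solved
# by a single linear rule, but that layered "neurons" pick up from examples alone.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # desired outputs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer of 8 units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # single output unit
lr = 1.0

for _ in range(10000):
    # forward pass: inputs -> hidden layer -> output
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: nudge every weight to reduce the squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3).ravel())   # should move towards [0, 1, 1, 0] as training proceeds
```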

Computers can also see. In 2012, Google announced that it had built a deep learning algorithm that learnt independently about concepts such as people and cats just by watching YouTube videos. This is much like how humans learn: not by being told what everything is, but largely by figuring things out for themselves. Can computers read? Yes. Deep learning has been used to read Chinese at roughly the level of a native speaker - and the algorithm was developed by researchers based in Switzerland, none of whom speak or even understand Chinese! Can computers write? Again, yes. Deep learning has enabled computers to write surprisingly accurate captions for images.

So now we know why everyone is so excited about Deep Learning. Like humans, deep learning algorithms can learn, listen, understand, see, read and write. Put all of this together and the possibilities seem endless. Machine learning is helping taxi drivers in Tokyo pick up passengers with shorter waiting times. Google's AI is detecting cancer more accurately than pathologists. Self-driving technology promises a future with fewer road accidents. Robots are helping take care of the elderly in Japan. So far the story seems to be headed towards a happy ending.

No one really knows how the most advanced algorithms do what they do. That could be a problem.

In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital's vast database of patient records. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved remarkably good at predicting a wide range of diseases. At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well but offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. But by its nature, deep learning is a particularly dark black box.
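Deep Patient's actual model has not been published, so the sketch below is only a hypothetical stand-in: a generic neural-network classifier (scikit-learn's MLPClassifier) trained on synthetic "patient record" vectors. It is meant to show where the black-box problem comes from - the model returns a risk score, but its thousands of learned weights contain nothing a doctor could read as a rationale.

```python
# Hypothetical stand-in for a Deep Patient-style predictor (not the real system):
# train a neural network on synthetic "patient record" vectors and ask for a risk score.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
n_patients, n_features = 5000, 40              # e.g. coded diagnoses, labs, medications
X = rng.normal(size=(n_patients, n_features))
# synthetic ground truth: risk driven by a hidden, nonlinear mix of a few features
y = ((X[:, 3] * X[:, 7] + X[:, 12] ** 2) > 1.0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X[:4000], y[:4000])                  # "historical" records

new_patient = X[4000:4001]                     # a record the model has never seen
print("predicted risk:", model.predict_proba(new_patient)[0, 1])

# The score may well be accurate, but inspecting model.coefs_ -- layer upon layer of
# raw weights -- gives a doctor no human-readable reason why this patient was flagged.
```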

Just as many aspects of human behavior are impossible to explain in detail, perhaps it won’t be possible for AI to explain everything it does. It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable. This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable?

Are our jobs safe?

AI is still in its infancy. But it’s not hard to see which direction the wind is blowing. The US Postal Service, for example, used to employ humans to sort letters, but for some time now, that’s been done largely by machines that can recognize human handwriting. Netflix does a better job picking movies you might like than a bored video-store clerk. In fact, there’s even a digital sports writer. Doctors should probably be worried as well. Remember Watson, the Jeopardy!-playing computer? It’s now being fed millions of pages of medical information so that it can help physicians do a better job of diagnosing diseases. In another decade, there’s a good chance that Watson will be able to do this without any human help at all.

Computers will play an increasingly important role in our professional lives too. In December 2016, Bridgewater Associates, one of the world’s largest hedge funds with over US$100bn under management, announced a project to automate the day-to-day management of the firm, including hiring, firing and other strategic decision-making.

A future in which human workers are replaced by machines has already become a reality at an insurance firm in Japan, where more than 30 employees are being laid off and replaced with an artificial intelligence system that can calculate payouts to policyholders. Fukoku Mutual Life Insurance believes it will increase productivity by 30% and see a return on its investment in less than two years.

One of the industries that will be severely impacted is my own. According to a recent survey, many jobs in the IT industry will disappear over the next five years. The jobs most at risk are repetitive ones that can be taken over by Artificial Intelligence (AI), such as manual testing, infrastructure management, BPO and system maintenance.

Experts have repeatedly declared that some things will "forever" remain beyond the reach of smart machines. But it turns out that "forever" often means no more than a decade or two. In 2004, Professor Frank Levy from MIT and Professor Richard Murnane from Harvard published research on the job market, listing those professions most likely to undergo automation. Truck driving was given as an example of a job that could not possibly be automated in the foreseeable future. A mere decade later, Google, Uber and Tesla can not only imagine this, but are actually making it happen. The reason is that a taxi driver or a cardiologist works in a very narrow niche: for AI to squeeze humans out of the job market, it need only outperform us in the specific abilities a particular profession demands.

In September 2013, two Oxford researchers, Carl Benedikt Frey and Michael A. Osborne, published “The Future of Employment,” in which they surveyed the likelihood of different professions being taken over by computer algorithms within the next 20 years, and they estimated that 47 percent of US jobs are at high risk. For example, there is a 99 percent probability that by 2033 human telemarketers and insurance underwriters will lose their jobs to algorithms. There is a 98 percent probability that the same will happen to sports referees. Cashiers — 97 percent. Chefs — 96 percent. Waiters — 94 percent. Paralegals — 94 percent. Tour guides — 91 percent. Bakers — 89 percent. Bus drivers — 89 percent. Construction laborers — 88 percent. Veterinary assistants — 86 percent. Security guards — 84 percent. Sailors — 83 percent. There are, of course, some safe jobs. The likelihood that computer algorithms will displace archaeologists by 2033 is only 0.7 percent, because their job requires highly sophisticated types of pattern recognition and does not produce huge profits, so it is improbable that corporations or governments will make the necessary investment to automate archaeology within the next 20 years.

Since we do not know how the job market will look in 2030 or 2040, today we have no idea what to teach our kids. Most of what they currently learn at school will probably be irrelevant by the time they are 40. Traditionally, life has been divided into two main parts: a period of learning, followed by a period of working. Very soon this traditional model will become utterly obsolete. Individuals should position themselves for a lifetime of learning, since the skills demanded by the workplace are changing more rapidly than ever.

Can computers ever become as smart as humans?

"I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence — and exceed it." - Stephen Hawking

Self-driving cars offer a good example of the amount of work that needs to go in before AI systems can reach human-level intelligence, because there are things that humans understand when approaching certain situations that would be very difficult to teach a machine. In a long blog post on autonomous cars, Rodney Brooks brings up a number of such situations, including how an autonomous car might approach a stop sign at a crosswalk in a city neighborhood with an adult and child standing at the corner chatting. The algorithm would probably be tuned to wait for the pedestrians to cross, but what if they had no intention of crossing because they were waiting for a school bus? A human driver could signal to the pedestrians to go, and they in turn could wave the car on, but a driverless car could be stuck there endlessly waiting for the pair to cross, because it has no understanding of these uniquely human signals.
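A purely hypothetical sketch makes Brooks' point concrete. Suppose the car follows a conservative, hand-written rule: never proceed while anyone is standing near the crosswalk. None of this is real autonomous-vehicle code; it only shows how such a rule deadlocks when the pedestrians have no intention of crossing and the car has no way to read a wave-through.

```python
# Hypothetical illustration of Brooks' stop-sign scenario -- not real vehicle code.

from dataclasses import dataclass

@dataclass
class Pedestrian:
    near_crosswalk: bool
    crossing: bool

def may_proceed(pedestrians):
    # Conservative rule: wait while anyone is near the crosswalk, because the car
    # cannot interpret the human signal "go ahead, we're not crossing".
    return not any(p.near_crosswalk for p in pedestrians)

# An adult and a child chatting at the corner, with no intention of crossing.
chatting_pair = [Pedestrian(near_crosswalk=True, crossing=False) for _ in range(2)]

waited = 0
while not may_proceed(chatting_pair):
    waited += 1
    if waited > 3:                      # in reality the car would simply sit here
        print("still waiting: no rule covers being waved through by the pedestrians")
        break
```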

But one thing is certain: given the recent developments in the field of AI, there will come a day when humans are no longer the smartest entities on this planet. What happens then? Will computers annihilate humans or continue to serve them? And what safeguards can we put in place?

One option is to find market solutions, putting up money to fund research into ethical and safe AI, as Musk has done with OpenAI. The other is more dangerous. At a gathering of US governors earlier this month, Musk pressed them to “be proactive about regulation”. What precisely would that entail? Pure research and its practical applications interact constantly to push the field of AI and robotics forward, and government control and red tape to stave off a vague, imprecise threat would be an innovation-killer.

Final Thoughts

The rise of AI could lead to the eradication of disease and poverty and the conquest of climate change. But it could also bring us all sorts of things we don't like - autonomous weapons, economic disruption and machines that develop a will of their own, in conflict with humanity. I am hopeful that in the coming years and decades humans will be able to harness the power of AI while sidestepping some of its potential pitfalls.

Do you agree?

"The rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which." – Stephen Hawking
