
Deep Learning and the Artificial Intelligence Revolution: Part 1


Deep learning and Artificial Intelligence (AI) have moved well beyond science fiction into the cutting edge of internet and enterprise computing.

Access to more computational power in the cloud, advancement of sophisticated algorithms, and the availability of funding are unlocking new possibilities unimaginable just five years ago. But it’s the availability of new, rich data sources that is making deep learning real.

In this 4-part blog series, we are going to explore deep learning, and the role database selection plays in successfully applying deep learning to business problems:

  • In part 1 today we will look at the history of AI, and why it is taking off now
  • In part 2, we will discuss the differences between AI, Machine Learning, and Deep Learning
  • In part 3, we’ll dive deeper into deep learning and evaluate key considerations when selecting a database for new projects
  • We’ll wrap up in part 4 with a discussion on why MongoDB is being used for deep learning, and provide examples of where it is being used

If you want to get started right now, download the complete Deep Learning and Artificial Intelligence white paper.

The History of Artificial Intelligence

We are living in an era where artificial intelligence (AI) has started to scratch the surface of its true potential. Not only does AI create the possibility of disrupting industries and transforming the workplace, but it can also address some of society’s biggest challenges. Autonomous vehicles may save tens of thousands of lives, and increase mobility for the elderly and the disabled. Precision medicine may unlock tailored individual treatment that extends life. Smart buildings may help reduce carbon emissions and save energy. These are just a few of the potential benefits that AI promises, and is starting to deliver upon.

By 2018, Gartner estimates that machines will author 20% of all business content, and an expected 6 billion IoT-connected devices will be generating a deluge of data. AI will be essential to make sense of it all. No longer is AI confined to science fiction movies; artificial intelligence and machine learning are finding real world applicability and adoption.

Artificial intelligence has been a dream for many ever since Alan Turing wrote his seminal 1950 paper “Computing Machinery and Intelligence”. In it, Turing asked the fundamental question, “Can machines think?” and contemplated whether computers could communicate like humans. The AI field was truly born in the summer of 1956, when a group of researchers came together at Dartmouth College to initiate a series of research projects aimed at programming computers to behave like humans. It was at Dartmouth that the term “artificial intelligence” was first coined, and concepts from the conference crystallized into a legitimate interdisciplinary research area.

Over the following decades, progress in AI went through boom and bust cycles as advances in new algorithms were constrained by the limitations of contemporary technologies. In 1968, the science fiction film 2001: A Space Odyssey helped AI leave an indelible impression on mainstream consciousness when a sentient computer – HAL 9000 – uttered the famous line, “I’m sorry Dave, I’m afraid I can’t do that.” In the late 1970s, Star Wars further cemented AI in mainstream culture when a duo of artificially intelligent robots (C-3PO and R2-D2) helped save the galaxy.

But it wasn’t until the late 1990s that AI began to transition from science fiction lore into real-world applicability. Beginning in 1997, when IBM’s Deep Blue chess program beat reigning world champion Garry Kasparov, the late 1990s ushered in a new era of AI in which progress started to accelerate. Researchers began to focus on sub-problems of AI and apply it to real-world tasks such as image recognition and speech processing. Instead of trying to hand-craft logical rules from the knowledge of experts, researchers started to work on how algorithms could learn the logical rules themselves. This trend helped shift research focus toward Artificial Neural Networks (ANNs). First conceptualized in the 1940s, ANNs were invented to “loosely” mimic how the human brain learns. They experienced a resurgence in popularity in 1986, when the backpropagation gradient descent technique was improved. Backpropagation eliminated the need to evaluate a huge number of weight permutations in an ANN, making training far more efficient.
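To make the idea concrete, here is a minimal sketch of backpropagation gradient descent on a tiny network with one hidden layer, written in plain NumPy. The dataset (XOR), layer sizes, and learning rate are illustrative choices for this post, not anything prescribed by the research described above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy dataset: the XOR function, a classic problem a single-layer network cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases for a 2-3-1 network.
W1, b1 = rng.normal(size=(2, 3)), np.zeros((1, 3))
W2, b2 = rng.normal(size=(3, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(20000):
    # Forward pass: propagate the inputs through each layer.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: propagate the error back through the layers with the
    # chain rule, rather than searching over weight combinations.
    output_delta = (output - y) * output * (1 - output)
    hidden_delta = (output_delta @ W2.T) * hidden * (1 - hidden)

    # Gradient descent: nudge each weight and bias against its gradient.
    W2 -= learning_rate * hidden.T @ output_delta
    b2 -= learning_rate * output_delta.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ hidden_delta
    b1 -= learning_rate * hidden_delta.sum(axis=0, keepdims=True)

print(np.round(output, 2))  # typically converges toward [[0], [1], [1], [0]]
```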

Even with advances in new algorithms, neural networks still suffered from the technological limitations that had plagued their adoption over the previous decades. It wasn’t until the mid-2000s that another wave of progress in AI started to take form. In 2006, Geoffrey Hinton of the University of Toronto made a modification to ANNs, which he called deep learning (deep neural networks). Hinton added multiple layers to ANNs and mathematically optimized the results from each layer so that learning accumulated faster up the stack of layers. In 2012, Andrew Ng of Stanford University took deep learning a step further when he built a crude implementation of deep neural networks using Graphics Processing Units (GPUs). Since GPUs have a massively parallel architecture consisting of thousands of cores designed to handle multiple tasks simultaneously, Ng found that a cluster of GPUs could train a deep learning model much faster than general-purpose CPUs. Rather than taking weeks to generate a model with traditional CPUs, he was able to perform the same task in a day with GPUs.
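The sketch below shows what “adding multiple layers” looks like in practice: each layer learns features built on the representations produced by the layer beneath it. It assumes the PyTorch library is available; the layer sizes and activations are illustrative choices, not a reconstruction of Hinton’s original networks.

```python
import torch
import torch.nn as nn

# A deep (multi-layer) network: each Linear + ReLU pair is one layer of the stack.
deep_net = nn.Sequential(
    nn.Linear(784, 512), nn.ReLU(),   # layer 1: raw pixel values -> low-level features
    nn.Linear(512, 256), nn.ReLU(),   # layer 2: combinations of low-level features
    nn.Linear(256, 128), nn.ReLU(),   # layer 3: higher-level abstractions
    nn.Linear(128, 10),               # output layer: scores for 10 classes
)

# One forward pass over a batch of 32 fake 28x28 images flattened to 784 values.
batch = torch.randn(32, 784)
scores = deep_net(batch)
print(scores.shape)  # torch.Size([32, 10])
```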

Essentially, this convergence – advances in software algorithms combined with highly performant hardware – had been brewing for decades, and would usher in the rapid progress AI is currently experiencing.

Why Is AI Taking Off Now?

There are four main factors driving the adoption of AI today:

More Data. AI needs a huge amount of data to learn, and the digitization of society is providing the available raw material to fuel its advances. Big data from sources such as Internet of Things (IoT) sensors, social and mobile computing, science and academia, healthcare, and many more new applications generate data that can be used to train AI models. Not surprisingly, the companies investing most in AI – Amazon, Apple, Baidu, Google, Microsoft, Facebook – are the ones with the most data.

Cheaper Computation. In the past, even as AI algorithms improved, hardware remained a constraining factor. Recent advances in hardware and new computational models, particularly around GPUs, have accelerated the adoption of AI. GPUs gained popularity in the AI community for their ability to handle a high degree of parallel operations and perform matrix multiplications efficiently – both essential to the iterative nature of deep learning algorithms (see the short sketch below). CPUs have also made advances for AI applications. Recently, Intel added new deep learning instructions to its Xeon and Xeon Phi processors to allow for better parallelization and more efficient matrix computation, coupled with improved tools and frameworks in its software development libraries. With the adoption of AI, hardware vendors now also have the chip demand to justify and amortize the large capital costs required to develop, design, and manufacture products exclusively tailored for AI. These advancements result in better hardware designs, performance, and power usage profiles.

More Sophisticated Algorithms. Higher performance and less expensive compute also enable researchers to develop and train more advanced algorithms because they aren’t limited by the hardware constraints of the past. As a result, deep learning is now solving specific problems (e.g., speech recognition, image classification, handwriting recognition, fraud detection) with astonishing accuracy, and more advanced algorithms continue to advance the state of the art in AI.

Broader Investment. Over the past decades, AI research and development was primarily limited to universities and research institutions. Lack of funding combined with the sheer difficulty of the problems associated with AI resulted in minimal progress. Today, AI investment is no longer confined to university laboratories, but is pervasive in many areas – government, venture capital-backed startups, internet giants, and large enterprises across every industry sector.
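As a concrete illustration of why GPUs matter for this workload, the rough sketch below times one large matrix multiplication – the operation deep learning repeats millions of times – on the CPU and, if one is available, on a GPU. It assumes the PyTorch library is installed; exact timings will vary widely across hardware.

```python
import time
import torch

size = 4096
a = torch.randn(size, size)
b = torch.randn(size, size)

# Time the multiplication on the CPU.
start = time.time()
_ = a @ b
print(f"CPU matmul: {time.time() - start:.3f}s")

# Time the same multiplication on a GPU, if one is available.
if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()            # wait for the host-to-device copies
    start = time.time()
    _ = a_gpu @ b_gpu
    torch.cuda.synchronize()            # wait for the kernel to finish
    print(f"GPU matmul: {time.time() - start:.3f}s")
```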

Wrapping Up Part 1

That wraps up the first part of our 4-part blog series. In part 2, we will discuss the differences between AI, Machine Learning, and Deep Learning.

Remember, if you want to get started right now, download the complete Deep Learning and Artificial Intelligence white paper.

