Your Deep Learning Setup

NOTE: This page is not yet finished.

The Big Picture

[Image: Deep Learning Setup.jpg, a flowchart of the overall setup]

Concepts/Ideas

This is the starting point: the idea you'll implement in code. It could be as simple as printing the result of 1 + 1 or as complex as training a deep convolutional neural network. If you need inspiration, check out the other pages in this Wiki.
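
In Python, for example, the simplest version of that first idea is a single line (a trivial sketch, just to show that an "idea implemented in code" can be this small):

    # The simplest possible idea, implemented in code.
    print(1 + 1)  # prints 2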

Code

After arriving at an idea, you'll need to select the "tools" you wish to use. These tools will help you translate your ideas into functioning code. The first two tools you'll need to choose are your programming language and your text editor.

Programming Language

A programming language is what you use to implement your ideas. If you've never written a line of code before, please refer to the many online resources available that can teach you how to write code.

Python is the most popular and recommended programming language for Deep Learning purposes. It's also what is used in the USF Deep Learning Certificate Part 1. If you're just starting out in Deep Learning, it is highly recommended that you use Python. It will also make it easier to share and read code, since the majority of practitioners use it.

Text Editor

A text editor is what you write code in, for example Notepad++ or Sublime. There are many popular text editors to choose from. While any of them can produce the same end result, some offer features (such as syntax highlighting and autocompletion) that make the process noticeably easier.

Aside from a text editor, it is recommended that you use Jupyter Notebook to assist you in the early days of a project. Jupyter Notebook is an interactive experimentation environment and is used extensively in the USF Deep Learning Certificate Part 1. Its advantages include:

- the ability to write Markdown, LaTeX, and Python code in the same file (which must be a Notebook file, extension .ipynb)
- the ability to split your file into "cells", letting you isolate and re-run individual pieces of code
- printed output and error messages displayed directly below the code cell that produced them
- an easy way to thoroughly record your steps and results

It is best to start with a Jupyter Notebook (.ipynb) and then finalize your code in a text editor, saving it as a Python file (.py).
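
As a rough sketch (assuming Jupyter is installed, for example via pip), a single code cell in a notebook might look like the following; when you run the cell, the printed output appears directly below it:

    # A typical Jupyter Notebook code cell: run it and the output appears below the cell.
    import numpy as np

    data = np.random.rand(3, 3)   # a small random matrix
    print(data.mean())            # the printed value shows up underneath this cell

When you are ready to move from the notebook to a plain Python file, one common route is Jupyter's nbconvert tool (for example, jupyter nbconvert --to script your_notebook.ipynb, where your_notebook.ipynb is a placeholder name), although copying the cells into your text editor works just as well.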

Libraries

Libraries, at least in Python, are packages of reusable classes, functions, and code. Although you could implement your ideas in pure Python, it is highly recommended that you use libraries such as Keras, Theano, NumPy, Pandas, and TensorFlow. Popular libraries tend to be highly optimized, computationally efficient, and thoroughly documented.
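
As a small illustration of why this matters, the NumPy version of a simple vector sum is both shorter and far faster than a pure-Python loop (exact timings will vary by machine; the variable names here are just for illustration):

    # Pure Python vs. NumPy for summing a million numbers.
    import numpy as np

    values = list(range(1000000))
    total_python = sum(values)        # pure Python: iterates element by element

    array = np.arange(1000000)
    total_numpy = array.sum()         # NumPy: a single optimized, vectorized call

    print(total_python, total_numpy)  # same result, very different speed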

The Keras Deep Learning library, with a Theano backend, is used in the USF Deep Learning Certificate Part 1.
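
To give a feel for what Keras code looks like, here is a minimal sketch of a small fully connected network using the Sequential API; the layer sizes, optimizer, and input dimension (784, as for flattened 28x28 images) are arbitrary illustrative choices, not part of the course material:

    # Minimal Keras sketch: a small fully connected network for 10-class classification.
    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential()
    model.add(Dense(64, activation='relu', input_dim=784))  # hidden layer
    model.add(Dense(10, activation='softmax'))              # output layer: 10 classes

    model.compile(optimizer='sgd',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])

    model.summary()  # prints the layer-by-layer architecture

With a Theano backend the same code runs unchanged; Keras picks its backend from its configuration file (~/.keras/keras.json) or the KERAS_BACKEND environment variable.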

Hardware

Note: "Hardware" doesn't actually appear in the above flowchart.

So far we've translated our ideas into code; now it's time to run that code. How exactly do you run it? The details can get very complicated very quickly, but this guide covers only what's necessary.

Simply put, you have two options when it comes to running your Deep Learning code: a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit). Although there are some arguments against the use of GPUs (coming mainly from Intel), performance benchmarks have solidified the popularity of GPUs among Deep Learning practitioners.

Warning: Although NVIDIA GPUs may be the most popular choice of hardware as of this writing (December 7th, 2016), this could very well change in the near future, especially with Intel's recent purchase of Nervana and the pending release of HBM2 GPUs and iGPUs to the public. Do your research before you make a large investment in an expensive GPU or other piece of hardware.

CPU

Every traditional computer requires a CPU in order to function. However, CPUs have major disadvantages compared to GPUs for Deep Learning. In particular, they have relatively few cores and perform operations in a largely serial fashion. At the bare minimum, you'll need a somewhat recent quad-core CPU; any recent quad-core CPU should not bottleneck your GPU.
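
If you want to check how many cores your machine exposes before worrying about an upgrade, Python's standard library can tell you (a quick sketch; note that the count includes logical cores, so hyper-threaded CPUs report twice the physical core count):

    # Quick check of how many CPU cores (logical) Python can see on this machine.
    import multiprocessing

    print(multiprocessing.cpu_count())  # e.g. 8 on a quad-core CPU with hyper-threading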

GPU

GPUs were originally invented for video simulation and gaming purposes. Although they are used in combination with CPUs, they are not a necessary piece of a traditional computer the way the CPU is. Their advantage over CPUs is their massively parallel architecture, which is well suited to the vector and matrix operations that Deep Learning relies on.

Recent NVIDIA GPUs (specifically, CUDA-capable ones) dominate the market and the field of Deep Learning, so much so that it is often futile to use anything else: most Deep Learning libraries are built solely and specifically for NVIDIA GPUs.
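
Once Theano is installed, one quick way to confirm whether it is actually set up to use your NVIDIA GPU is to check its configured device (a hedged sketch; the device is typically selected via the THEANO_FLAGS environment variable or a .theanorc file, and 'cpu' means the GPU is not being used yet):

    # Check which device Theano is configured to use ('cpu', 'gpu', ...).
    import theano

    print(theano.config.device)  # should report a GPU device if CUDA is set up correctly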