Alan Turing, in his paper, described a Universal Turing Machine. A universal Turing machine is a Turing machine that can simulate any other Turing machine.

We can extend this idea: what if humans are finite state machines? That leads to the possibility of humans being simulated by computers. This may go against our intuition, since humans behave in a relatively nondeterministic way, but that may simply be due to the enormous number of states a human possesses.

Programming has traditionally been a method of writing down sequential steps to accomplish a task, but with the rise of deep neural networks the method has changed to simulating the way the human brain computes in order to accomplish the task.

This idea began with the introduction of the perceptron, which emulates a single neuron. Basically, it takes inputs, which are weighted, to produce an output. A single perceptron cannot model tasks of every complexity level, so we need to connect perceptrons in multiple layers to capture the complexity of the task.
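As a small illustration, here is a single perceptron sketched in plain NumPy. The weights below are hand-picked (not learned) so that the perceptron computes logical AND:

import numpy as np

def perceptron(x, w, b):
    # weighted sum of the inputs passed through a hard threshold
    return 1 if np.dot(w, x) + b > 0 else 0

# hand-picked weights that make this perceptron compute logical AND
w, b = np.array([1.0, 1.0]), -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", perceptron(np.array(x), w, b))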

Like me, I think most of us fell in love with PyTorch because of its human-readable code and the ability to express our ideas in a few lines of code. PyTorch is the Python library that brings these characteristics to deep learning.

PyTorch is the Python implementation of Torch, a popular deep learning library written in Lua. It implements two things:

  1. a Tensor data type, which is like a NumPy ndarray but with GPU acceleration support (see the sketch after this list)
  2. deep neural networks built on a tape-based autograd system
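As a quick illustration of the first point, Tensors support a NumPy-style API and can be moved to the GPU. This is a minimal sketch; the shapes are arbitrary:

import numpy as np
import torch

a = torch.from_numpy(np.ones((3, 3))).float()  # ndarray -> Tensor
b = torch.randn(3, 3)
c = a + b                          # familiar NumPy-style arithmetic
if torch.cuda.is_available():
    c = c.cuda()                   # same computation, GPU-accelerated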

The autograd system works by recording a graph of how the computed result depends on the input variables. The gradient of the result with respect to each input can then be obtained by traversing this graph backwards, applying the chain rule at each node. This can be seen in the animation below.

(Animation: imperative_graph, showing the graph being built as the computation runs)
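Here is a tiny sketch of autograd in action; the input value is arbitrary:

import torch
from torch.autograd import Variable

x = Variable(torch.Tensor([3.0]), requires_grad=True)
y = x.pow(2).sum()   # y = x^2; the dependency graph is recorded as we compute
y.backward()         # traverse the graph backwards to get dy/dx
print(x.grad)        # dy/dx = 2x = 6.0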

Let's go through the steps to create a first program: linear regression through the origin.
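Concretely, the model is y_pred = w·x, a weighted sum with no bias term, so the fitted line passes through the origin. Training minimizes the squared error L = Σ (y − y_pred)², and autograd computes the gradient ∂L/∂w for us.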

  1. Installation
user@ubuntu:~$ conda install pytorch torchvision cuda80 -c soumith
  2. Import the libraries
import torch
from torch.autograd import Variable
  3. Convert the inputs/labels to Variables (numpy_x and numpy_y are your NumPy data arrays)
dtype = torch.FloatTensor  # or torch.cuda.FloatTensor to run on the GPU
x = Variable(torch.from_numpy(numpy_x).type(dtype), requires_grad=False)
y = Variable(torch.from_numpy(numpy_y).type(dtype), requires_grad=False)
  4. Declare the model weights
w1 = Variable(torch.randn(D_in, D_out).type(dtype), requires_grad=True)  # D_in, D_out: input/output dimensions
  5. Get the output of the model
y_pred = torch.matmul(x.t(), w1)  # forward pass
  6. Get the loss (i.e. the objective you want to minimize)
loss = (y - y_pred.t()).pow(2).sum()
  7. Calculate the gradients
loss.backward()
  8. Update the parameters using the calculated gradients
w1.data -= learning_rate * w1.grad.data
  9. Zero the gradients
w1.grad.data.zero_()
  10. Repeat from Step 5 until you're satisfied with the accuracy of the model (see the complete sketch after this list)
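Putting the steps together, here is a minimal end-to-end sketch. The toy data, the dimensions D_in and D_out, the learning rate, and the number of iterations are all made-up placeholders:

import numpy as np
import torch
from torch.autograd import Variable

dtype = torch.FloatTensor            # torch.cuda.FloatTensor for GPU
learning_rate = 1e-4

# toy data: x is (D_in, N), y is (D_out, N); y = w_true . x passes through the origin
D_in, D_out, N = 3, 1, 64
numpy_x = np.random.randn(D_in, N).astype(np.float32)
w_true = np.random.randn(D_in, D_out).astype(np.float32)
numpy_y = w_true.T.dot(numpy_x)

x = Variable(torch.from_numpy(numpy_x).type(dtype), requires_grad=False)
y = Variable(torch.from_numpy(numpy_y).type(dtype), requires_grad=False)
w1 = Variable(torch.randn(D_in, D_out).type(dtype), requires_grad=True)

for step in range(500):
    y_pred = torch.matmul(x.t(), w1)           # Step 5: forward pass
    loss = (y - y_pred.t()).pow(2).sum()       # Step 6: squared-error loss
    loss.backward()                            # Step 7: compute gradients
    w1.data -= learning_rate * w1.grad.data    # Step 8: gradient-descent update
    w1.grad.data.zero_()                       # Step 9: reset gradients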

Download

  1. IPython notebook source code
  2. Slides

YouTube talk