Simplest Implementation of a Neural Net without Using Deep Learning Libraries

The code is available here

When I was working on a CS assignment to implement a neural net, I looked at some existing implementations, but none of them was cool. So I decided to implement one myself.

Model

  • Only supports a model with fully connected layers

Implementation

If some of the variables are unclear, please refer to the raw code.
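
As a rough guide, here is a minimal sketch of how those variables might be set up. The names (self.n, self.weights, self.biases, self.activation, and so on) match the snippets below, but the details are my assumption, not necessarily what the raw code does.

import numpy as np

class Net:
    def __init__(self, sizes, activation, activationdif, lossdif):
        # sizes: layer widths, e.g. [2, 4, 1] = 2 inputs, 4 hidden units, 1 output
        self.n = len(sizes) - 1  # number of weight layers
        self.weights = [np.random.randn(sizes[i], sizes[i + 1])
                        for i in range(self.n)]
        self.biases = [np.zeros(sizes[i + 1]) for i in range(self.n)]
        self.activation = activation        # elementwise nonlinearity
        self.activationdif = activationdif  # its derivative, applied to the pre-activation
        self.lossdif = lossdif              # dL/dy at the output layer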

Forward Propagation

Below is the implementation of the iterative calculation a = y * W + b followed by y = activation(a), where y on the right-hand side is the output of the previous layer.

def forprop(self, xs):
    # Cache the pre-activations (as_) and activations (ys) of every layer
    # so that backprop can read them back later.
    self.as_ = []
    self.ys = [xs]
    for i in range(self.n):
        a = np.dot(self.ys[-1], self.weights[i]) + self.biases[i]
        y = self.activation(a)
        self.as_.append(a)
        self.ys.append(y)
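
The loop assumes self.activation is vectorized, i.e. it works elementwise on a whole matrix. As an example (and purely my assumption about what the raw code uses), a sigmoid and its derivative could look like this:

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def sigmoiddif(a):
    # sigmoid'(a) = sigmoid(a) * (1 - sigmoid(a))
    s = sigmoid(a)
    return s * (1.0 - s)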

Backward Propagation

The math of backprop is much harder than that of forprop. When you implement it yourself, I strongly recommend asserting the dimensions of your matrices at every step.
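
For example, inside the loop of the backprop method below, a pair of checks like these (just a sketch; i is the index of the layer being updated) catches most indexing mistakes early:

assert dLda.shape == self.as_[i].shape  # one gradient per pre-activation
assert self.weights[i].shape == (self.ys[i].shape[1], dLda.shape[1])  # widths line up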

def backprop(self, xs, ts, eta=1.0):
    # Start from the gradient of the loss with respect to the network output.
    dLdy = self.lossdif(self.ys[-1], ts)
    for i in reversed(range(self.n)):
        dyda = self.activationdif(self.as_[i])
        dLda = dLdy * dyda  # dL/da for layer i
        # Propagate the gradient to the layer below before updating the
        # weights, so that it uses the weights from the forward pass.
        dLdy = np.dot(dLda, self.weights[i].T)
        dadw = self.ys[i].T  # the input of layer i
        self.biases[i] -= eta * np.sum(dLda, axis=0)
        self.weights[i] -= eta * np.dot(dadw, dLda)
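
To see the whole thing in action, here is a hypothetical usage example that trains on XOR. It assumes the Net constructor and the sigmoid pair sketched above; mselossdif and the toy data are my own illustration, not part of the original code.

def mselossdif(ys, ts):
    # dL/dy for L = (1 / batch) * sum over the batch of 0.5 * ||y - t||^2
    return (ys - ts) / len(ys)

xs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
ts = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

net = Net([2, 4, 1], sigmoid, sigmoiddif, mselossdif)
for epoch in range(10000):
    net.forprop(xs)
    net.backprop(xs, ts, eta=1.0)

net.forprop(xs)
print(net.ys[-1])  # should move toward [[0], [1], [1], [0]]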