1. Simulate the activity of a simple RNN.

import numpy as np
import matplotlib.pyplot as plt

# Define the number of time steps and the activity vector h
K = 10
h = np.zeros(K)

# Set the initial value of h
h[0] = "SOMETHING"

# Define recurrent connection
w    = "SOMETHING"

# Simulate the model
for k in np.arange(K-1):

    # Compute the activity at the next step.
    "SOMETHING"

f = plt.figure()
plt.plot(h)
plt.xlabel('Time')
plt.ylabel('activity');
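One possible completion of the exercise above, assuming an initial activity of 1 and a recurrent weight of 0.9 (both values are arbitrary choices, not given in the exercise):

```python
import numpy as np
import matplotlib.pyplot as plt

# Define the number of time steps and the activity vector h
K = 10
h = np.zeros(K)

# Assumed values: initial activity 1, recurrent weight 0.9
h[0] = 1.0
w = 0.9

# Each step multiplies the previous activity by the recurrent weight,
# so the activity decays geometrically when |w| < 1.
for k in np.arange(K - 1):
    h[k + 1] = w * h[k]

plt.plot(h)
plt.xlabel('Time')
plt.ylabel('activity')
```

With |w| < 1 the activity decays toward 0; trying w > 1 (growth) or w < 0 (oscillation) is a useful variation.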

2. Simulate the activity of a feedforward network.

The network has 3 inputs (\(I\) = [1,3,-3]), 3 hidden nodes, and 3 outputs.

Set all biases to 0.

"SOMETHING"
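A sketch of one possible solution. The exercise fixes the input \(I\) = [1,3,-3] and the biases (0), but leaves the weights and activation function open; the weight matrices and the sigmoid activation below are assumptions, not part of the exercise statement.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# The input given in the exercise
I = np.array([1.0, 3.0, -3.0])

# Assumed weight matrices (any 3x3 values would do); biases are 0
W_ih = np.array([[ 1.0, 0.5, -0.5],
                 [ 0.5, 1.0,  0.5],
                 [-0.5, 0.5,  1.0]])
W_ho = np.eye(3)

# Hidden activity: weighted inputs passed through the sigmoid
hidden = sigmoid(W_ih @ I)

# Output activity: weighted hidden activity passed through the sigmoid
output = sigmoid(W_ho @ hidden)
print(output)
```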

3. Simulate an RNN.

  • Consider 1 input, 1 hidden neuron, and 1 output.
  • Make \(I_t\) random noise.
  • Use a sigmoid activation function.
  • Set bias = 0.
  • Set \(W_{ih} = 1\), \(W_{ho} = 1\), and choose something for \(W_{hh}\).
import numpy as np
import matplotlib.pyplot as plt

def sigmoid(x):
    S = 1 / (1+np.exp(-x))
    return S

K  = 100
It = np.random.randn(K)
ht = np.zeros(K)
ot = np.zeros(K)

# Initial state
ht[0] = 0.0
ot[0] = 0.0

# Set parameters
Wih = "SOMETHING"
Whh = "SOMETHING"
Who = "SOMETHING"

# Update h and o at each time.
for k in np.arange(1, K):

    ht[k] = "SOMETHING"
    ot[k] = "SOMETHING"

plt.plot(It, 'b')
plt.plot(ht, 'k')
plt.plot(ot, 'r')
plt.legend(['It', 'ht', 'ot'])
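One possible completion, assuming \(W_{hh} = 0.5\) and a fixed random seed for reproducibility (both are choices the exercise leaves open):

```python
import numpy as np
import matplotlib.pyplot as plt

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

np.random.seed(0)  # assumed, for reproducibility
K  = 100
It = np.random.randn(K)
ht = np.zeros(K)
ot = np.zeros(K)

# Parameters given in the exercise, plus an assumed recurrent weight
Wih = 1.0
Whh = 0.5
Who = 1.0

# Update h and o at each time (biases are 0)
for k in np.arange(1, K):
    # Hidden state combines the current input and the previous hidden state
    ht[k] = sigmoid(Wih * It[k] + Whh * ht[k - 1])
    # Output reads out the current hidden state
    ot[k] = sigmoid(Who * ht[k])

plt.plot(It, 'b')
plt.plot(ht, 'k')
plt.plot(ot, 'r')
plt.legend(['It', 'ht', 'ot'])
```

Because the sigmoid squashes its input into (0, 1), the hidden and output traces stay bounded even though the input is unbounded noise; increasing \(W_{hh}\) makes the hidden state smoother and more history-dependent.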