@giuseppebonaccorso
Created August 19, 2017 15:06
Hodgkin-Huxley spiking neuron model in Python
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
# Set random seed (for reproducibility)
np.random.seed(1000)
# Start and end time (in milliseconds)
tmin = 0.0
tmax = 50.0
# Average potassium channel conductance per unit area (mS/cm^2)
gK = 36.0
# Average sodium channel conductance per unit area (mS/cm^2)
gNa = 120.0
# Average leak channel conductance per unit area (mS/cm^2)
gL = 0.3
# Membrane capacitance per unit area (uF/cm^2)
Cm = 1.0
# Potassium potential (mV)
VK = -12.0
# Sodium potential (mV)
VNa = 115.0
# Leak potential (mV)
Vl = 10.613
# Time values
T = np.linspace(tmin, tmax, 10000)
# Potassium ion-channel rate functions
def alpha_n(Vm):
    return (0.01 * (10.0 - Vm)) / (np.exp(1.0 - (0.1 * Vm)) - 1.0)

def beta_n(Vm):
    return 0.125 * np.exp(-Vm / 80.0)

# Sodium ion-channel rate functions
def alpha_m(Vm):
    return (0.1 * (25.0 - Vm)) / (np.exp(2.5 - (0.1 * Vm)) - 1.0)

def beta_m(Vm):
    return 4.0 * np.exp(-Vm / 18.0)

def alpha_h(Vm):
    return 0.07 * np.exp(-Vm / 20.0)

def beta_h(Vm):
    return 1.0 / (np.exp(3.0 - (0.1 * Vm)) + 1.0)
# n, m, and h steady-state values
def n_inf(Vm=0.0):
    return alpha_n(Vm) / (alpha_n(Vm) + beta_n(Vm))

def m_inf(Vm=0.0):
    return alpha_m(Vm) / (alpha_m(Vm) + beta_m(Vm))

def h_inf(Vm=0.0):
    return alpha_h(Vm) / (alpha_h(Vm) + beta_h(Vm))
# Input stimulus
def Id(t):
    if 0.0 < t < 1.0:
        return 150.0
    elif 10.0 < t < 11.0:
        return 50.0
    return 0.0
# Compute derivatives
def compute_derivatives(y, t0):
    dy = np.zeros((4,))

    Vm = y[0]
    n = y[1]
    m = y[2]
    h = y[3]

    # dVm/dt
    GK = (gK / Cm) * np.power(n, 4.0)
    GNa = (gNa / Cm) * np.power(m, 3.0) * h
    GL = gL / Cm
    dy[0] = (Id(t0) / Cm) - (GK * (Vm - VK)) - (GNa * (Vm - VNa)) - (GL * (Vm - Vl))

    # dn/dt
    dy[1] = (alpha_n(Vm) * (1.0 - n)) - (beta_n(Vm) * n)

    # dm/dt
    dy[2] = (alpha_m(Vm) * (1.0 - m)) - (beta_m(Vm) * m)

    # dh/dt
    dy[3] = (alpha_h(Vm) * (1.0 - h)) - (beta_h(Vm) * h)

    return dy
# State (Vm, n, m, h)
Y = np.array([0.0, n_inf(), m_inf(), h_inf()])
# Solve ODE system
# Vy = (Vm[t0:tmax], n[t0:tmax], m[t0:tmax], h[t0:tmax])
Vy = odeint(compute_derivatives, Y, T)
# Input stimulus
Idv = [Id(t) for t in T]
fig, ax = plt.subplots(figsize=(12, 7))
ax.plot(T, Idv)
ax.set_xlabel('Time (ms)')
ax.set_ylabel(r'Current density (uA/$cm^2$)')
ax.set_title('Stimulus (Current density)')
plt.grid()
# Neuron potential
fig, ax = plt.subplots(figsize=(12, 7))
ax.plot(T, Vy[:, 0])
ax.set_xlabel('Time (ms)')
ax.set_ylabel('Vm (mV)')
ax.set_title('Neuron potential with two spikes')
plt.grid()
# Trajectories with limit cycles
fig, ax = plt.subplots(figsize=(10, 10))
ax.plot(Vy[:, 0], Vy[:, 1], label='Vm - n')
ax.plot(Vy[:, 0], Vy[:, 2], label='Vm - m')
ax.set_title('Limit cycles')
ax.legend()
plt.grid()

# Display the three figures
plt.show()
@Balharbi commented Oct 8, 2018

I like your style, man, very efficient and high-level coding. Can I ask you some questions?
My implementation is very basic, and I would like to improve it! Coming from Java, I find it hard to write efficient code in Python.
1- First, why did you use np.random.seed(1000)? I know it is for reproducibility, but isn't it supposed to take effect only when np.random is used somewhere later in the code? I don't see where it acts in your code. Also, why 1000?

2- Your # Input stimulus loop blew my mind 👍 :) :) I'm still trying to do my own version but I get confused; could you elaborate on the general logic behind it? In my case the user will enter the input stimulus value, the simulation time, and the initial values of Vi, mi, hi, ni, and it produces a single plot. I would like to write an automated function like yours, where the current changes as time moves on. Anyway, here are some questions regarding this piece of code art :)

3- Did you do it mainly to produce the current density plot?

4- Will this run 1000 times here (1 second of the whole window, with t in microsecond steps of .001)?

```python
def Id(t):
    if 0.0 < t < 1.0:
        return 150.0
    elif 10.0 < t < 11.0:
        return 50.0
    return 0.0
```
5- Why is there a gap between 1 and 10?

6- How/when/where do you control for generating only 2 spikes?

7- Finally, in my implementation I did not use odeint; I used a simple for loop with a basic forward Euler estimate of the values at each time step. Since I'm a beginner in numerical analysis, your usage of this function got me interested, because I saw many other implementations use it and that confused me even more. So what is the benefit of using it over a regular scheme like forward Euler, backward Euler, Runge-Kutta, etc.?
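To make question 7 concrete, here is a minimal sketch of the two approaches side by side, assuming compute_derivatives, Y, and T from the gist above are in scope. odeint (SciPy's LSODA wrapper) chooses its internal step sizes and error tolerances automatically, while a hand-rolled forward Euler uses one fixed step and can become inaccurate or unstable if that step is too large.

```python
import numpy as np
from scipy.integrate import odeint

# Adaptive solver: step size and error control are handled internally
sol_odeint = odeint(compute_derivatives, Y, T)

# Fixed-step forward Euler over the same time grid
dt = T[1] - T[0]
sol_euler = np.zeros((len(T), 4))
sol_euler[0] = Y
for k in range(len(T) - 1):
    sol_euler[k + 1] = sol_euler[k] + dt * compute_derivatives(sol_euler[k], T[k])
```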

Here is my code. I plan to extend it to make it spatial; any suggestion from you regarding how I could implement a control loop for the stimulus/time? (See the sketch after the code below.)

```python
'''
Created on Oct 5, 2018

@author: baa87
'''
#===============================================================================

testCase1 = [0.8, 200, -65, 0.4, 0.2, 0.5]

testCase2= [8, 500, -65, 0.4, 0.2, 0.5]

testCase3= [10, 500, -65, 0.5, 0.06, 0.5]

testCase4 = [20, 500, -65, 0.5, 0.06, 0.5]

#===============================================================================
import numpy as np
import matplotlib.pyplot as plt

def alphaM(vs):
    return (2.5 - 0.1 * (vs + 65.0)) / float((np.exp(2.5 - 0.1 * (vs + 65.0)) - 1.0))

def betaM(vs):
    return 4.0 * float(np.exp(-(vs + 65.0) / 18.0))

def alphaH(vs):
    return 0.07 * np.exp(-(vs + 65.0) / float(20.0))

def betaH(vs):
    return 1.0 / float((np.exp(3.0 - 0.1 * (vs + 65.0)) + 1.0))

def alphaN(vs):
    return (0.1 - 0.01 * (vs + 65.0)) / float((np.exp(1.0 - 0.1 * (vs + 65.0)) - 1.0))

def betaN(vs):
    return 0.125 * np.exp(-(vs + 65.0) / float(80.0))

messge1='\n(1)Input current mA (2) time in ms(3)(4)(5)(6) are the initial values of V,m,h and n respectively'
messege2='Usage....test case example = 10, 500, -65, 0.5, 0.06, 0.5 '
print(messege2,messge1)

values = [float(i) for i in input('Enter Initial values: ').split(',')]

if len(values) < 6:
    # Not enough values were provided: print the usage message and stop
    print(messge1, messege2)
    raise SystemExit

InputCurrent = values[0]
tSpan = values[1]

# Set initial values for the variables
Ci = values[2]
mi = values[3]
hi = values[4]
ni = values[5]

dt = 0.001  # time step for forward Euler's method
t = np.arange(0, tSpan, dt)

V = np.zeros(len(t))  # Initial membrane voltage
m = np.zeros(len(t))  # Initial m-value
n = np.zeros(len(t))  # Initial n-value
h = np.zeros(len(t))  # Initial h-value

gNa = 120.0
eNa = 115.0
gK = 36.0
eK = -12.0
gL = 0.3
eL = 10.6

loop = np.arange(0, tSpan, dt)

# Initial values declaration
V[0] = Ci
m[0] = mi
n[0] = ni
h[0] = hi

# Forward Euler integration of the gating variables and membrane voltage
for i in range(0, len(loop) - 1):
    # m(i+1) = m(i) + dt*(alphaM(V(i))*(1-m(i)) - betaM(V(i))*m(i));
    m[i + 1] = m[i] + dt * (alphaM(V[i]) * (1 - m[i]) - betaM(V[i]) * m[i])
    h[i + 1] = h[i] + dt * (alphaH(V[i]) * (1 - h[i]) - betaH(V[i]) * h[i])
    n[i + 1] = n[i] + dt * (alphaN(V[i]) * (1 - n[i]) - betaN(V[i]) * n[i])
    V[i + 1] = V[i] + dt * (gNa * m[i] ** 3 * h[i] * (eNa - (V[i] + 65)) + gK * n[i] ** 4 * (eK - (V[i] + 65)) + gL * (eL - (V[i] + 65)) + InputCurrent)

fig = plt.figure()

fig, ax = plt.subplots(figsize=(12, 7))
ax.plot(t, V)
ax.set_xlabel('Time (ms)')
ax.set_ylabel(r'Current density (uA/$cm^2$)')
ax.set_title('Stimulus (Current density)')
plt.grid()

# Neuron potential
fig, ax = plt.subplots(figsize=(12, 7))
ax.plot(t, V)
ax.set_xlabel('Time (ms)')
ax.set_ylabel('Vm (mV)')
ax.set_title('Neuron potential')
plt.grid()

plt.show()
```
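Regarding a control loop for the stimulus/time, one possible approach is a table-driven current function like the sketch below; the pulse list and the helper name are made up for illustration and follow the same convention as the gist above (times in ms, amplitudes in uA/cm^2).

```python
# Each entry is (start_ms, stop_ms, amplitude); edit the table instead of the code
pulses = [(0.0, 1.0, 150.0), (10.0, 11.0, 50.0), (25.0, 26.0, 80.0)]

def Id_from_table(t, pulses=pulses):
    for start, stop, amplitude in pulses:
        if start < t < stop:
            return amplitude
    return 0.0
```

Calling this in place of Id inside the derivative function (or the Euler loop) lets the current change as time moves on without touching the integration code.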

@psychedelic2007

It was a nice and simple code for Hodgkin-Huxley. The only doubt I have is about the input stimulus: you have given a particular condition so that when t is between 0 and 1 the output shows a certain spike. I wanted to know what the code would be if I wanted to produce multiple neural spikes, or a continuous train of spikes, for a given stimulus?
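As a rough sketch of one option (the onset time and amplitude here are arbitrary choices, not values from the gist): replacing Id with a sustained current keeps the model above its firing threshold, so the solver produces a continuous train of spikes instead of two isolated ones, and several separate pulses give several separate spikes in the same way.

```python
# Sustained stimulus: a constant current density after 5 ms
# (arbitrary values chosen for illustration)
def Id(t):
    if t > 5.0:
        return 10.0  # uA/cm^2, typically enough in this model to sustain repetitive firing
    return 0.0
```

Re-running the odeint call (with a long enough tmax) after redefining Id shows the repetitive firing.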

@CarloCobal

> It was a nice and simple code for Hodgkin-Huxley. The only doubt I have is about the input stimulus: you have given a particular condition so that when t is between 0 and 1 the output shows a certain spike. I wanted to know what the code would be if I wanted to produce multiple neural spikes, or a continuous train of spikes, for a given stimulus?

It would also be cool to add some post-synaptic depression attributes to enable learning and memory via that plasticity!
