@phil8192
Created April 3, 2020 21:02
import numpy as np


class A:

    def __init__(self, x, y, b, pub_key=None):
        self.x = x  # A's vertical partition of X.
        self.y = y  # A's training labels.
        self.b = b  # Reference to Host B.
        self.features = x.shape[1]
        self.pub_key = pub_key

    # Called by the Coordinator with the current model theta for each
    # mini-batch; returns the (encrypted) gradients for Host A and Host B.
    def gradients(self, theta):
        # A's portion of theta.
        a_theta = theta[:self.features]
        # A's part of the gradient (a 1d vector with one entry per
        # training example).
        u = 1/4 * np.dot(self.x, a_theta) - 1/2 * self.y
        # Encrypt u with the public key (a 1d vector of the same length
        # containing encrypted, encoded values).
        u = encrypt(self.pub_key, u)
        # A now sends theta and the encrypted u to Host B. A "blocks",
        # expecting B to return its w (needed to complete A's gradient
        # calculation) along with B's gradient.
        w, gradient_b = self.b.gradients(theta, u)
        # A's gradient.
        gradient_a = np.dot(w, self.x)
        # Return both parts of the (encrypted) gradients to the Coordinator.
        return gradient_a, gradient_b
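
The encrypt() helper called above is not shown in this snippet. A minimal sketch of what it might look like, assuming element-wise Paillier encryption via the python-paillier (phe) library; the function name and signature follow the call above, everything else is an assumption:

import numpy as np
from phe import paillier

def encrypt(pub_key, vector):
    # Encrypt each element of a 1d numpy vector under the Paillier
    # public key. Paillier is additively homomorphic: ciphertexts can be
    # added together and multiplied by plaintext scalars, which is what
    # the gradient computation above relies on.
    return np.array([pub_key.encrypt(float(v)) for v in vector])

# Example key generation (the Coordinator would typically create the
# keypair and share only the public key with the hosts):
# pub_key, priv_key = paillier.generate_paillier_keypair(n_length=1024)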
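
For context, here is a hypothetical counterpart for Host B, inferred only from the call self.b.gradients(theta, u) above. It is not part of this gist; the slicing of theta and the exact form of B's contribution are assumptions:

import numpy as np


class B:

    def __init__(self, x, pub_key=None):
        self.x = x  # B's vertical partition of X (no labels).
        self.features = x.shape[1]
        self.pub_key = pub_key

    def gradients(self, theta, u):
        # B's portion of theta (assumed to be the trailing slice).
        b_theta = theta[-self.features:]
        # Combine A's encrypted u with B's plaintext contribution;
        # adding a plaintext vector to an encrypted vector works
        # element-wise under Paillier's additive homomorphism.
        w = u + 1/4 * np.dot(self.x, b_theta)
        # B's (encrypted) gradient.
        gradient_b = np.dot(w, self.x)
        # Return w so A can complete its own gradient, plus B's gradient.
        return w, gradient_b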