@baraldilorenzo
Last active November 21, 2023 22:41
VGG-16 pre-trained model for Keras

## VGG16 model for Keras

This is the Keras model of the 16-layer network used by the VGG team in the ILSVRC-2014 competition.

It has been obtained by directly converting the Caffe model provided by the authors.

Details about the network architecture can be found in the following arXiv paper:

Very Deep Convolutional Networks for Large-Scale Image Recognition
K. Simonyan, A. Zisserman
arXiv:1409.1556

In the paper, the VGG-16 model is denoted as configuration D. It achieves 7.5% top-5 error on ILSVRC-2012-val, 7.4% top-5 error on ILSVRC-2012-test.

Please cite the paper if you use the models.

### Contents:

- model and usage demo: see `vgg-16_keras.py`

- weights: `vgg16_weights.h5`

from keras.models import Sequential
from keras.layers.core import Flatten, Dense, Dropout
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD
import cv2, numpy as np
def VGG_16(weights_path=None):
    model = Sequential()
    model.add(ZeroPadding2D((1,1), input_shape=(3,224,224)))
    model.add(Convolution2D(64, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(64, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))

    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(128, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(128, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))

    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(256, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(256, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(256, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))

    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))

    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))

    model.add(Flatten())
    model.add(Dense(4096, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(4096, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1000, activation='softmax'))

    if weights_path:
        model.load_weights(weights_path)

    return model

if __name__ == "__main__":
    # Load an image, subtract the ImageNet BGR mean pixel, and reorder to channels-first
    im = cv2.resize(cv2.imread('cat.jpg'), (224, 224)).astype(np.float32)
    im[:,:,0] -= 103.939
    im[:,:,1] -= 116.779
    im[:,:,2] -= 123.68
    im = im.transpose((2,0,1))
    im = np.expand_dims(im, axis=0)

    # Test pretrained model
    model = VGG_16('vgg16_weights.h5')
    sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
    model.compile(optimizer=sgd, loss='categorical_crossentropy')
    out = model.predict(im)
    print(np.argmax(out))
@huma1155 commented Aug 7, 2019

Hey,
I have my own model and weights in TensorFlow. I am trying to assign these weights to a Caffe model, but when I assign them to the fully connected layer defined in the prototxt, I get an error.
`weights1 = data_file[0][0].transpose((3,2,0,1))
bias1 = data_file[0][1]

weights2 = data_file[1][0].transpose((3,2,0,1))
bias2 = data_file[1][1]

weights3 = data_file[2][0].transpose((3,2,0,1))
bias3 = data_file[2][1]

#connecting the tensor after last pooling layer with the first fully-connected layer
#here is the link to the video where this part is explained (https://youtu.be/kvXHOIn3-8s?t=3m38s)
print(np.shape(data_file[3][0]))
fc1_w = data_file[3][0].reshape((28,28,64,120))
fc1_w = fc1_w.transpose((3,2,0,1))
fc1_w = fc1_w.reshape((120,50176))
fc1_b = data_file[3][1]
#fully connected layer format:
#Tensorflow: [number of inputs (0), number of outputs (1)]
#Caffe: [number of outputs (1), number of inputs (0)]
fc2_w = data_file[4][0].transpose((1,0))
fc2_b = data_file[4][1]

fc3_w = data_file[5][0].transpose((1,0))
fc3_b = data_file[5][1]

#define architecture
print('Start creating caffe model')
net = caffe.Net('./LeNet.prototxt', caffe.TEST)
#load parameters

net.params['conv2d_1'][0].data[...] = weights1
net.params['conv2d_1'][1].data[...] = bias1

net.params['conv2d_2'][0].data[...] = weights2
net.params['conv2d_2'][1].data[...] = bias2

net.params['conv2d_3'][0].data[...] = weights3
net.params['conv2d_3'][1].data[...] = bias3

net.params['dense_1'][0].data[...]= fc1_w
net.params['dense_1'][1].data[...] = fc1_b

net.params['dense_2'][0].data[...] = fc2_w
net.params['dense_2'][1].data[...] = fc2_b

net.params['dense_3'][0].data[...] = fc3_w
net.params['dense_3'][1].data[...] = fc3_b
print('End')`

Error
`runfile('/home/rehman/Tensorflow2caffe/create_caffemdoel.py', wdir='/home/rehman/Tensorflow2caffe')
Start creating caffe model
Traceback (most recent call last):

File "", line 1, in
runfile('/home/rehman/Tensorflow2caffe/create_caffemdoel.py', wdir='/home/rehman/Tensorflow2caffe')

File "/usr/lib/python3/dist-packages/spyder/utils/site/sitecustomize.py", line 705, in runfile
execfile(filename, namespace)

File "/usr/lib/python3/dist-packages/spyder/utils/site/sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)

File "/home/rehman/Tensorflow2caffe/create_caffemdoel.py", line 64, in
net.params['dense_1'][0].data[...]= fc1_w

ValueError: could not broadcast input array from shape (120,50176) into shape (120,64)`

Here is my prototxt
`name: 'LeNet'
input: 'data'
input_shape{
dim: 1 #no of images
dim: 3 #no of channels
dim :224 #width
dim :224
}
layer {
name: "conv2d_1"
type: "Convolution"
bottom: "data"
top: "conv2d_1"
convolution_param{
num_output: 64
kernel_size: 5
stride: 2
pad: 0
}
}
layer {
name: "activation_1"
type: "ReLU"
bottom: "conv2d_1"
top: "conv2d_1"
}

layer {
name: "max_pooling2d_1"
type: "Pooling"
bottom: "conv2d_1"
top: "max_pooling2d_1"
pooling_param{
pool :MAX
kernel_size: 5
stride: 2
pad: 0
}
}
layer {
name: "conv2d_2"
type: "Convolution"
bottom: "max_pooling2d_1"
top: "conv2d_2"
convolution_param{
num_output: 16
kernel_size: 5
stride: 2
pad: 0
}
}
layer {
name: "activation_2"
type: "ReLU"
bottom: "conv2d_2"
top: "conv2d_2"
}
layer {
name: "max_pooling2d_2"
type: "Pooling"
bottom: "conv2d_2"
top: "max_pooling2d_2"
pooling_param{
pool :MAX
kernel_size: 5
stride: 2
pad: 0
}
}
layer {
name: "conv2d_3"
type: "Convolution"
bottom: "max_pooling2d_2"
top: "conv2d_3"
convolution_param{
num_output: 64
kernel_size: 5
stride: 2
pad: 0
}
}
layer {
name: "activation_3"
type: "ReLU"
bottom: "conv2d_3"
top: "conv2d_3"
}
layer {
name: "max_pooling2d_3"
type: "Pooling"
bottom: "conv2d_3"
top: "max_pooling2d_3"
pooling_param{
pool :MAX
kernel_size: 5
stride: 2
pad: 0
}
}
layer{
name: "flatten_1"
type: "Reshape"
bottom: "max_pooling2d_3"
top: "flatten_1"
reshape_param{shape:{dim:-1 dim:1 dim:1 dim:1}}
}
layer {
name: "dense_1"
type: "InnerProduct"
bottom:"flatten_1"
top:"dense_1"
inner_product_param {
num_output: 120
}
param {
lr_mult: 1
}
param {
lr_mult: 2
}
}
layer {
name: "activation_4"
type: "ReLU"
bottom: "dense_1"
top: "dense_1"
}
layer {
name: "dense_2"
type: "InnerProduct"
bottom:"dense_1"
top:"dense_2"
inner_product_param {
num_output: 64
}
}
layer {
name: "activation_5"
type: "ReLU"
bottom: "dense_2"
top: "dense_2"
}
layer {
name: "dense_3"
type: "InnerProduct"
bottom:"dense_2"
top:"dense_3"
inner_product_param {
num_output: 10
}
}
layer {
name: "activation_6"
type: "Softmax"
bottom: "dense_3"
top: "dense_3"
}
`
and my model description is this:

`Layer (type)                 Output Shape              Param #
conv2d_1 (Conv2D)            (None, 224, 224, 64)      4864
activation_1 (Activation)    (None, 224, 224, 64)      0
max_pooling2d_1 (MaxPooling2 (None, 112, 112, 64)      0
dropout_1 (Dropout)          (None, 112, 112, 64)      0
conv2d_2 (Conv2D)            (None, 112, 112, 16)      25616
activation_2 (Activation)    (None, 112, 112, 16)      0
max_pooling2d_2 (MaxPooling2 (None, 56, 56, 16)        0
conv2d_3 (Conv2D)            (None, 56, 56, 64)        25664
activation_3 (Activation)    (None, 56, 56, 64)        0
max_pooling2d_3 (MaxPooling2 (None, 28, 28, 64)        0
flatten_1 (Flatten)          (None, 50176)             0
dense_1 (Dense)              (None, 120)               6021240
activation_4 (Activation)    (None, 120)               0
dropout_2 (Dropout)          (None, 120)               0
dense_2 (Dense)              (None, 64)                7744
activation_5 (Activation)    (None, 64)                0
dense_3 (Dense)              (None, 10)                650
activation_6 (Activation)    (None, 10)                0

Total params: 6,085,778
Trainable params: 6,085,778
Non-trainable params: 0
_________________________________________________________________`

@zero-craft

I am getting the error below while loading the model.

Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_4/MaxPool' (op: 'MaxPool') with input shapes: [?,1,112,128].

It can be fixed by setting the image data format to channels-first:

from keras import backend as K
K.set_image_data_format('channels_first')

@zero-craft commented Oct 12, 2019

Getting this error

File "/home/anaconda3/envs/tfod/lib/python3.7/site-packages/keras/engine/saving.py", line 1030, in load_weights_from_hdf5_group
str(len(filtered_layers)) + ' layers.')

ValueError: You are trying to load a weight file containing 0 layers into a model with 16 layers.

replace

if weights_path:
    model.load_weights(weights_path)

by

if weights_path:
    model.load_weights(weights_path,by_name=True)

@zero-craft commented Oct 14, 2019

For the lazy :)

from keras import backend as K
from keras.models import Sequential
from keras.layers.core import Flatten, Dense, Dropout
from keras.layers.convolutional import Conv2D, MaxPooling2D, ZeroPadding2D
from keras.layers.core import Activation
from keras.optimizers import SGD
import cv2, numpy as np

K.set_image_data_format('channels_first')

def VGG_16(weights_path=None):
    model = Sequential()
    
  
    model.add(ZeroPadding2D((1,1),input_shape=(3,224,224)))
    model.add(Conv2D(64, (3, 3),padding='same'))
    model.add(Activation('relu'))  


    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(64, (3, 3),padding='same'))
    model.add(Activation('relu'))     
    model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))

    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(128, (3, 3),padding='same'))
    model.add(Activation('relu')) 

    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(128, (3, 3),padding='same'))
    model.add(Activation('relu')) 
    model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))

    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(256, (3, 3),padding='same'))
    model.add(Activation('relu')) 

    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(256, (3, 3),padding='same'))
    model.add(Activation('relu')) 

    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(256, (3, 3),padding='same'))
    model.add(Activation('relu')) 
    model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))

    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(512, (3, 3),padding='same'))
    model.add(Activation('relu')) 

    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(512, (3, 3),padding='same'))
    model.add(Activation('relu')) 

    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(512, (3, 3),padding='same'))
    model.add(Activation('relu')) 
    model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))

    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(512, (3, 3),padding='same'))
    model.add(Activation('relu')) 

    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(512, (3, 3),padding='same'))
    model.add(Activation('relu')) 

    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(512, (3, 3),padding='same'))
    model.add(Activation('relu')) 
    model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
    

    model.add(Flatten())
    model.add(Dense(4096))
    model.add(Activation('relu')) 
    model.add(Dropout(0.5))
    
    model.add(Dense(4096))
    model.add(Activation('relu')) 
    model.add(Dropout(0.5))

    model.add(Dense(1000))
    model.add(Activation('softmax')) 


    if weights_path:
        model.load_weights(weights_path,by_name=True)

    return model

if __name__ == "__main__":
    im = cv2.resize(cv2.imread('cat.jpg'), (224, 224)).astype(np.float32)
#     im[:,:,0] -= 103.939
#     im[:,:,1] -= 116.779
#     im[:,:,2] -= 123.68
    im = im.transpose((2,0,1))
    im = np.expand_dims(im, axis=0)
    

    model = VGG_16('vgg16_weights.h5')
    sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
    model.compile(optimizer=sgd, loss='categorical_crossentropy')
    out = model.predict(im)
    print (np.argmax(out))

@njerschow

@zero-craft by_name=True simply looks in the weights file for layers whose names match layers in the model and loads only those; layers without a match keep their initial values. This circumvents the error but essentially initializes the model with random weights.

Has anyone been able to get a solution to :
You are trying to load a weight file containing 0 layers into a model with 16 layers.
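
A workaround often reported for this particular (Theano-era) weights file is to bypass load_weights entirely and copy the parameters layer by layer with h5py. The sketch below assumes the legacy HDF5 layout of the original vgg16_weights.h5 (a top-level nb_layers attribute and layer_k groups containing param_p datasets) and that the model is the Sequential definition from this gist, so the layer order matches; verify both against your copy of the file. Note the kernels were saved for the Theano backend, so predictions under a TensorFlow backend may still be off without converting the convolution kernels.

import h5py

def load_legacy_vgg16_weights(model, weights_path='vgg16_weights.h5'):
    # Copy weights layer by layer from the legacy HDF5 file into `model`.
    # Assumed layout: f.attrs['nb_layers'], groups 'layer_0'..'layer_N',
    # each with datasets 'param_0', 'param_1', ... and attr 'nb_params'.
    with h5py.File(weights_path, 'r') as f:
        for k in range(f.attrs['nb_layers']):
            if k >= len(model.layers):
                break  # stop early if the classifier layers were removed from the model
            g = f['layer_{}'.format(k)]
            weights = [g['param_{}'.format(p)][...] for p in range(g.attrs['nb_params'])]
            model.layers[k].set_weights(weights)
    return model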

@FiV0 commented Nov 2, 2019

Loading the model weights from GitHub is extremely slow (in my case). Kaggle also has a mirror: https://www.kaggle.com/keras/vgg16#vgg16_weights_tf_dim_ordering_tf_kernels.h5

@mahboubeh86

Hi,
My dataset is Fashion-MNIST, and my images are 28x28. How can I use the VGG architecture, given that the default input size for this model is 224x224?

@njerschow

@mahboubeh86

If I remember correctly, the VGG model can take a minimum input of 32x32. So what I would do is pad your images with 2 blank pixels all around. Then, when you load the VGG model, you can simply pass in the input_shape parameter (see the sketch below the link).

Check out the code here: https://github.com/keras-team/keras-applications/blob/master/keras_applications/vgg19.py
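
A minimal sketch of that idea, assuming keras.applications and the Fashion-MNIST loader from keras.datasets (the 2-pixel padding and channel stacking are illustrative choices, not something prescribed in this thread):

import numpy as np
from keras.datasets import fashion_mnist
from keras.applications.vgg16 import VGG16

(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

# Pad 28x28 images to 32x32, the smallest input VGG16 accepts
x_train = np.pad(x_train, ((0, 0), (2, 2), (2, 2)), mode='constant')
# Stack the grayscale channel three times to get RGB-shaped input
x_train = np.repeat(x_train[..., np.newaxis], 3, axis=-1).astype('float32') / 255.0

# Convolutional base only; the ImageNet classifier head expects 224x224 inputs
base = VGG16(weights='imagenet', include_top=False, input_shape=(32, 32, 3))
features = base.predict(x_train[:32])
print(features.shape)  # (32, 1, 1, 512)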

@mahboubeh86 commented Dec 19, 2019 via email

@njerschow

Can you show me how you are using the code in your program? Are you getting an error when you run it?

@mahboubeh86 commented Dec 20, 2019 via email

@mahboubeh86 commented Dec 20, 2019 via email

@njerschow

I don't see your code @mahboubeh86

@mahboubeh86 commented Dec 27, 2019 via email

@JyotiRawat29

Hi,
The input size of my images is 28x28. However, I want to train them with VGG-16, where the input size should be 224x224.
How can I do that? I tried several methods but was not successful.
I found the flow_from_directory() function in Keras, but it requires a directory path, whereas I am accessing the data directly from load_data().

Could you please tell me how I can achieve this?
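
One way to do this without flow_from_directory is to resize the in-memory arrays yourself and feed them to the model in batches; a sketch using cv2 (which this gist already uses) and the Fashion-MNIST loader, both assumptions rather than anything from the thread:

import cv2
import numpy as np
from keras.datasets import fashion_mnist

(x_train, y_train), _ = fashion_mnist.load_data()

def resize_batch(images, size=(224, 224)):
    # Resize a batch of 28x28 grayscale images to `size` and stack to 3 channels
    out = np.empty((len(images), size[1], size[0], 3), dtype=np.float32)
    for i, img in enumerate(images):
        big = cv2.resize(img, size, interpolation=cv2.INTER_LINEAR)
        out[i] = np.repeat(big[..., np.newaxis], 3, axis=-1)
    return out

# All 60k images at 224x224 in float32 need roughly 36 GB,
# so resize per batch rather than all at once
batch = resize_batch(x_train[:32])
print(batch.shape)  # (32, 224, 224, 3)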

@yynopenguin

> (quotes the "For the lazy :)" snippet above)

Do you still have the weight file vgg16_weights.h5? The original Google Drive link is broken.

@gitosu67

The file is in the user's trash.

@jpthenasseril commented Jul 20, 2020

@baraldilorenzo (and @Zebreu!) thanks for this!

Is there a way I can use this to actually label images? I can get an index into the class vector (my 'cat.jpg' comes out to class 669) but I've no idea how to actually find the class labels the index refers to...

TensorFlow outputs classes in alphabetical order, so if you have a list of all the classes, sort them and find the label at index 669.
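
For this converted Caffe model in particular, the class ordering follows the Caffe ILSVRC-2012 synset list rather than an alphabetical sort of folder names; a sketch of the lookup, assuming you have downloaded synset_words.txt from the caffe_ilsvrc12 archive linked further down in this thread:

import numpy as np

def class_label(index, synset_path='synset_words.txt'):
    # Each line of synset_words.txt looks like: "n01440764 tench, Tinca tinca"
    with open(synset_path) as f:
        labels = [line.strip().split(' ', 1)[1] for line in f]
    return labels[index]

# With `out = model.predict(im)` from the gist's __main__ block:
# print(class_label(int(np.argmax(out))))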

@mikechen66 commented Sep 8, 2020

The discussion above spans a long history of the VGG16 model and takes time to digest. I am wondering whether the following Keras models can be used instead.

A Concise VGG16 Model
Release Date: Dec 3, 2019
https://github.com/1297rohit/VGG16-In-Keras

Official Keras VGG16 Model by the keras team
Release Date: Mar 29, 2019
https://github.com/keras-team/keras-applications/blob/master/keras_applications/vgg16.py
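
The keras-team version is usually the simplest route today, since it downloads its own weights and ships preprocessing and label decoding; a minimal sketch of that API (the image path is just an example):

import numpy as np
from keras.preprocessing import image
from keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions

model = VGG16(weights='imagenet')  # downloads the ImageNet weights on first use

img = image.load_img('cat.jpg', target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])  # [(synset_id, label, probability), ...]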

@toshniwal-aadhar12

Can you please provide the weight files? The download link above isn't working.

@mikechen66

Thanks for your response. It runs when using the weights provided by François Chollet, so I use Chollet's code with the VGG16 model.

@dantewarriorcp

Is SGD the optimizer used in the paper? I want to reproduce the paper exactly in code, but I did not see SGD mentioned in it. Is this code the same as the paper, including the optimizer?

@dantewarriorcp

@mikedewar You can download it at http://dl.caffe.berkeleyvision.org/caffe_ilsvrc12.tar.gz (there are other files too, but one of them has a thousand lines with synset IDs and names, so you can just use that as an index). Works well in my case.
Also, from the out array, you can do numpy.argmax(out) to get the class with the highest probability.

Also, there is no normalization done in the gist above. If you want accurate results, you better do those steps to any input image:

    img = cv2.resize(cv2.imread('../../Downloads/cat2.jpg'), (224, 224))

    mean_pixel = [103.939, 116.779, 123.68]
    img = img.astype(np.float32, copy=False)
    for c in range(3):
        img[:, :, c] = img[:, :, c] - mean_pixel[c]
    img = img.transpose((2,0,1))
    img = np.expand_dims(img, axis=0)

The mean pixel values are taken from the VGG authors, which are the values computed from the training dataset.

Is the SGD here exactly the optimizer from the paper, like the rest of the code, or does only the VGG16 model match the paper? I want the exact code; I did not see the optimizer mentioned in the paper.
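
For reference, the paper (arXiv:1409.1556, section 3.1) does describe the optimizer: mini-batch SGD with momentum 0.9, batch size 256, weight decay 5e-4, and a learning rate starting at 1e-2 that is divided by 10 when validation accuracy stops improving. The SGD(lr=0.1, ...) line in this gist is only a placeholder needed to compile the model before calling predict. A rough Keras sketch of the paper's settings (simplified; the weight decay and the step-wise learning-rate drop are noted in comments rather than implemented):

from keras.optimizers import SGD
from keras.callbacks import ReduceLROnPlateau

model = VGG_16()  # the Sequential model defined at the top of this gist

# Training hyperparameters as reported in the VGG paper (section 3.1)
sgd = SGD(lr=0.01, momentum=0.9)
model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])

# The 5e-4 weight decay would be added as an L2 kernel regularizer on each layer,
# and the learning rate dropped by 10x when validation accuracy plateaus, e.g.:
lr_drop = ReduceLROnPlateau(monitor='val_acc', factor=0.1, patience=5)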

@qu3ntinprime

Does anyone have a copy of the weights or any working download link? The link provided seems dead at the moment.

@Calmett commented Jan 13, 2022

@qu3ntinprime you can find the weights here: https://github.com/fchollet/deep-learning-models/releases.

@Calmett commented Mar 3, 2022

I got this error when running the VGG16 code: "return tf.random.uniform ( ResourceExhaustedError: failed to allocate memory [Op:AddV2]". Could someone help me?

@satyasaigudla

Unable to download the vgg16.h5 file. Can someone please help with this issue?

@soon-will commented Dec 26, 2022 via email

@agulli commented Dec 27, 2022 via email
