Last active April 10, 2019 17:56
KeystoneJS: Cloudinary Cache => Amazon S3

I had a client I built an ecommerce site for that served a lot of high-resolution images (running about 500 GB/mo of bandwidth). Cloudinary charges about $500/mo for that usage and Amazon charges about $40. I wrote some middleware to wrap my Cloudinary URLs in order to enable caching on S3. This is entirely transparent and still lets you use all the cool Cloudinary effect and resizing functions. Hopefully this is useful to someone!

I think using deasync() here is janky, but I couldn't think of another way to do it that allowed for as easy a fix.

```js
// ./models/Imagecache.js
var keystone = require('keystone'),
    Types = keystone.Field.Types;

var Imagecache = new keystone.List('Imagecache');

Imagecache.add({
    hash: { type: Types.Text, index: true },
    uploaded: { type: Types.Boolean, index: true }
});

Imagecache.register();
```
```js
// add this to ./routes/middleware.js
var keystone = require('keystone');
var crypto = require('crypto');
var request = require('request');
var path = require('path');
var fs = require('fs');
var s3 = require('s3');
var deasync = require('deasync');

var image_cache = keystone.list('Imagecache').model;

var temp_dir = path.join(process.cwd(), 'temp/');
if (!fs.existsSync(temp_dir)) {
    fs.mkdirSync(temp_dir);
}

var s3_client = s3.createClient({
    multipartUploadThreshold: 20971520, // this is the default (20 MB)
    multipartUploadSize: 15728640, // this is the default (15 MB)
    s3Options: {
        accessKeyId: "ACCESS_KEY",
        secretAccessKey: "SECRET"
    }
});

// if you already have an initLocals, just add the gi() function to it
exports.initLocals = function (req, res, next) {
    res.locals.gi = function (img) {
        // console.log('looking for image =>', img)
        var md5 = crypto.createHash('md5');
        var hash = md5.update(img).digest('hex');
        var db_image;

        // block (via deasync) until the mongoose lookup resolves, so that
        // gi() can be called synchronously from templates
        function getImage(hash) {
            var response;
            image_cache.findOne({ hash: hash }, function (err, data) {
                response = data;
            });
            while (response === undefined) {
                deasync.runLoopOnce();
            }
            return response;
        }

        db_image = getImage(hash);

        if (!db_image || !db_image.uploaded) {
            if (!db_image) {
                image_cache.create({ hash: hash, uploaded: false }, function () {});
            }
            // console.log('starting image upload')
            request(img).pipe(fs.createWriteStream(temp_dir + '/' + hash + '.jpg')).on('close', function () {
                var params = {
                    localFile: temp_dir + '/' + hash + '.jpg',
                    s3Params: {
                        Bucket: 'YOUR_BUCKET',
                        Key: hash + '.jpg'
                    }
                };
                var uploader = s3_client.uploadFile(params);
                uploader.on('error', function (err) {
                    console.error('unable to upload:', err.stack);
                });
                uploader.on('end', function () {
                    console.log('successful image upload', img);
                    image_cache.update({ hash: hash }, { uploaded: true }, function () {});
                });
            });
            // not cached yet, so fall back to the cloudinary url
            // console.log('returning image =>', img)
            return img;
        } else {
            // cached: serve from S3 (adjust the host to your bucket/CDN domain)
            // console.log('returning image =>', req.protocol + '://...' + hash + '.jpg')
            return req.protocol + '://YOUR_BUCKET.s3.amazonaws.com/' + hash + '.jpg';
        }
    };
    next();
};

// usage in a view:
// - show a product photo where the product has already been loaded by the controller and put into scope
// - notice the keystone cloudinary photo method simply returns an http://... url to the cloudinary image
// - the gi() method just requests that url and sends it to s3, and then updates the database when it's available
```

I put something together to help with WYSIWYG images that are saved on Cloudinary, as we were being hit by huge bandwidth usage.

Same concept as yours, but it caches content blocks in the model and also checks whether the content has changed.


Just an update on the huge impact on bandwidth of both your script and the one I added: our Cloudinary usage went from 100+ GB a day to 75 MB a day 😃


This is just what I needed. Thanks guys!


molomby commented Oct 5, 2017

This is a neat solution for the Cloudinary images in particular, but I imagine most sites with bandwidth issues would be better served by a basic CDN setup or a reverse proxy. AWS will charge you $0.09–0.25 per GB (for the first 10 TB) for transfers, while a $20/month plan with CloudFlare gives you unlimited transfers. Something like this is quite transparent and can be set up easily.
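For illustration, the reverse-proxy option described above might look like this in nginx (a sketch only; the cache path, zone name, and /img/ prefix are made-up examples):

```nginx
# cache up to 10 GB of images on local disk for 30 days
proxy_cache_path /var/cache/nginx/images levels=1:2
                 keys_zone=imgcache:10m max_size=10g inactive=30d;

server {
    listen 80;

    # proxy image requests to cloudinary and cache the responses,
    # so repeat requests never reach cloudinary (or its bandwidth bill)
    location /img/ {
        proxy_pass https://res.cloudinary.com/;
        proxy_cache imgcache;
        proxy_cache_valid 200 30d;
    }
}
```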

On the flip side, if the middleware code above were refactored to use the official AWS SDK package, you could leverage the ability to create signed URLs for S3 objects. Doing so would let you generate a unique, time-limited URL each time an image was referenced. This has some interesting effects:

  • Still leverages the CDN-like nature of S3 (as the current implementation does)
  • Prevents people deep linking to images on your site (since the URLs could be made to expire after, say, a few minutes)
  • Also, since the Cloudinary URL isn't exposed, and the S3 URLs are temporary, I think this could be used as a workaround for issue #162 (whereby Cloudinary images aren't automatically cleaned up when deleted from Keystone or otherwise unpublished from a site).

Just an idea.
