Using Python's built-in defaultdict we can easily define a tree data structure:
from collections import defaultdict
def tree(): return defaultdict(tree)
That's it!
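As a quick illustration (the keys below are just placeholders), nested paths can be assigned without ever creating the intermediate dicts by hand:

```python
from collections import defaultdict

def tree():
    return defaultdict(tree)

# Each missing key produces a fresh sub-tree on first access,
# so arbitrarily deep paths spring into existence automatically.
t = tree()
t['menu']['breakfast']['eggs'] = 2
```

Every intermediate node is itself a `defaultdict(tree)`, so the structure can keep growing in any direction.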
ffmpeg -i shame-run.mov -r 24/1 test/output%03d.jpg
//Eigen::MatrixXd src = pointsToMatrix(pointsA[i]);
//Eigen::MatrixXd dst = pointsToMatrix(pointsB[j]);
//Eigen::Matrix4d cR_t = Eigen::umeyama(src, dst, true);
//Eigen::Matrix4d R_t = Eigen::umeyama(src, dst, false);
//Eigen::Matrix3d R = R_t.block(0,0,3,3);
//Eigen::Vector3d t = R_t.block(0,3,3,1);
Cython gains most of its benefit from statically typed arguments. Static typing is not required, however; regular Python code is valid Cython (though you shouldn't expect much of a speedup from compiling it unchanged). By incrementally adding type information, the code can be sped up by several factors. This gist provides only a very basic example of Cython usage.
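A minimal sketch of that incremental path, using a hypothetical `fib` function: the untyped version below is plain Python and therefore already valid Cython, and the commented variant shows where `cdef` type declarations would go in a `.pyx` file.

```python
# Valid Cython as-is; compiled unchanged it runs only modestly faster.
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# In a .pyx file, statically typing the argument and loop variables
# is where the real speedup comes from:
#
#     def fib(int n):
#         cdef int i, a = 0, b = 1
#         for i in range(n):
#             a, b = b, a + b
#         return a
```

The typed version lets the loop compile down to C integer arithmetic instead of Python object operations, which is the source of the "several factors" speedup.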
Leó Stefánsson
# The MIT License (MIT)
# Copyright (c) 2014 Lycaon (lycaon.me)
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#include <Adafruit_NeoPixel.h>
#define NUM_PIXELS 60
unsigned long interval = 50;    // the time we need to wait
unsigned long previousMillis = 0;
uint32_t currentColor;          // current color, in case we need it
uint16_t currentPixel = 0;      // what pixel are we operating on
#import <GPUImageThreeInputFilter.h>
extern NSString *const kGPUImageFourInputTextureVertexShaderString;
@interface GPUImageFourInputFilter : GPUImageThreeInputFilter
{
    GPUImageFramebuffer *fourthInputFramebuffer;
    GLint filterFourthTextureCoordinateAttribute;
    GLint filterInputTextureUniform4;
#ifdef GL_ES
#define LOWP lowp
precision mediump float;
#else
#define LOWP
#endif
const float offset = 1.0 / 128.0;
varying vec2 v_texCoords;
uniform sampler2D u_texture;
self.rawG8OutputTarget = [[GPUImageRawDataOutput alloc] initWithImageSize:aspectSize resultsInBGRAFormat:YES];
[filterChain.output addTarget:self.rawG8OutputTarget];
__weak GPUImageRawDataOutput *weakRawOutput = self.rawG8OutputTarget;
[self.rawG8OutputTarget setNewFrameAvailableBlock:^{
    GLubyte *outputBytes = [weakRawOutput rawBytesForImage];
    NSInteger bytesPerRow = [weakRawOutput bytesPerRowInOutput];
    // I use these variables to create the image to be streamed.
}];