Building web apps in Java has changed significantly over the past few years. Many of those changes revolve around keeping your application fast, maintainable, and flexible in the face of ever-changing deployment environments (aka "the cloud") and rapidly changing standards. Dropwizard looks to fit the areas that J2EE does not, and even to replace it in a large number of cases. So if web APIs are your thing, come check out Dropwizard and see how it can help when building production-ready APIs.
package main

import (
	"io/ioutil"

	apns "github.com/joekarl/go-libapns"
)

func main() {
	// load the push certificate and key used for the TLS socket
	// connection to apple's gateway
	certPem, _ := ioutil.ReadFile("cert.pem")
	keyPem, _ := ioutil.ReadFile("key.pem")
	apnsConnection, _ := apns.NewAPNSConnection(&apns.APNSConfig{
		CertificateBytes: certPem,
		KeyBytes:         keyPem,
	})
	_ = apnsConnection
}
go get github.com/joekarl/go-libapns
package main

import (
	"io/ioutil"
	"net"
	"time"

	apns "github.com/joekarl/go-libapns"
)

func main() {
package main

import (
	"fmt"
	"io/ioutil"
	"time"

	apns "github.com/joekarl/go-libapns"
)

// HandleCloseErrorFn handles an error that closed the APNS connection
type HandleCloseErrorFn func(closeError *apns.ConnectionClose)
var async = require('async');

main();

function main() {
    processFiles(function(err, results) {
        if (err) throw err;
        // do something with results
    });
}
So after some more thought, I'm thinking a programmable pipeline would start to look like this.
- say we want to render a thing (a thing being a chunk of memory representing it, i.e. a buffer)
- the vertex shader gets one slice from the buffer (all of the data for a single vertex)
- this necessitates interleaved buffers, but that shouldn't be a problem
- the nice thing about this is we're loading all the data for a vertex into cache nicely for the cpu to play with
- the vertex shader outputs an array of data to be passed to the rasterizer (we'll call each of these a varying)
- the rasterizer takes these outputs (an array per vertex) and does scan conversion on them (i.e. converts the triangle into a set of scan lines in pixel space)
- screen-space clipping is also done in this step
- each scanline will contain the x,y screen coords
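The vertex stage of the pipeline sketched above could start out something like this in Go. All names here are hypothetical, and the rasterizer/scan-conversion step is left out; this just shows slicing an interleaved buffer per vertex and producing an array of varyings for each one:

```go
package main

import "fmt"

// VertexShader maps one vertex's slice of the interleaved buffer
// (position, color, uv, ...) to its output varyings; by convention
// here the first two varyings are the screen-space x, y.
type VertexShader func(vertex []float32) (varyings []float32)

// runVertexStage slices the interleaved buffer into per-vertex chunks
// of `stride` floats and shades each one.
func runVertexStage(buffer []float32, stride int, shade VertexShader) [][]float32 {
	out := make([][]float32, 0, len(buffer)/stride)
	for i := 0; i+stride <= len(buffer); i += stride {
		// one contiguous slice per vertex keeps its attributes cache-friendly
		out = append(out, shade(buffer[i : i+stride]))
	}
	return out
}

func main() {
	// interleaved buffer: x, y, brightness per vertex (stride 3)
	buffer := []float32{0, 0, 1, 4, 0, 0.5, 0, 4, 0.25}
	// toy shader: scale positions by 100 into "pixel space", pass brightness through
	shade := func(v []float32) []float32 {
		return []float32{v[0] * 100, v[1] * 100, v[2]}
	}
	varyings := runVertexStage(buffer, 3, shade)
	fmt.Println(len(varyings), varyings[1]) // one varying array per vertex
}
```

The rasterizer stage would then walk each triangle's three varying arrays, interpolating them along the scanlines it emits.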
Was thinking about collision detection a bit. Particularly looking at this article http://www.wildbunny.co.uk/blog/2011/04/20/collision-detection-for-dummies/ and the section(s) on the Minkowski sum/difference. The gist of this type of collision detection is that rather than moving the objects and then detecting that two objects have collided, we only move the objects as far as they'd be able to go before they'd collide. This is nice, as we can correctly calculate the time and position at which a collision occurs. If that time is less than the total move timestep, we can sim multiple times per timestep to correctly respond to multiple collisions per frame.
The trick (and what I'm having a hard time visualizing) is how to handle collisions of multiple objects moving in a single frame.
What I feel is the correct thing to do: calculate the first contact time within our timestep between all collideable objects, sim the objects forward to that point, resolve the collisions, then rinse and repeat until we've exhausted all of the time in our timestep.
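That loop could be sketched like this, under simplifying assumptions (1D point particles, equal-mass elastic resolution, hypothetical names): find the earliest contact among all pairs, advance everyone to it, resolve, and repeat with the remaining time.

```go
package main

import "fmt"

type particle struct{ x, v float64 }

// firstContact returns the earliest time within dt at which any pair of
// particles meets (and the pair's indices), or dt and -1, -1 if none do.
func firstContact(ps []particle, dt float64) (float64, int, int) {
	tMin, i1, i2 := dt, -1, -1
	for i := 0; i < len(ps); i++ {
		for j := i + 1; j < len(ps); j++ {
			dx := ps[j].x - ps[i].x
			dv := ps[j].v - ps[i].v
			// approaching only if relative velocity closes the gap
			if dx*dv < 0 {
				if t := -dx / dv; t < tMin {
					tMin, i1, i2 = t, i, j
				}
			}
		}
	}
	return tMin, i1, i2
}

// step consumes the timestep in chunks, advancing to each first contact,
// resolving it, and repeating until the time is exhausted.
func step(ps []particle, dt float64) {
	for dt > 0 {
		t, i, j := firstContact(ps, dt)
		for k := range ps {
			ps[k].x += ps[k].v * t // advance everyone to the contact time
		}
		if i < 0 {
			return // no collision in the remaining time
		}
		// resolve: equal-mass elastic collision swaps velocities
		ps[i].v, ps[j].v = ps[j].v, ps[i].v
		dt -= t
	}
}

func main() {
	// two particles head toward each other, collide at t=1, bounce apart
	ps := []particle{{0, 1}, {2, -1}}
	step(ps, 2)
	fmt.Println(ps[0].x, ps[1].x) // 0 2
}
```

The open question from above still stands: with many objects, resolving one contact can create a new earliest contact, which is why the loop has to re-scan all pairs after every resolution rather than sorting contacts once up front.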
syntax on
nnoremap <Left> :echoe "Use h"<CR>
nnoremap <Right> :echoe "Use l"<CR>
nnoremap <Up> :echoe "Use k"<CR>
nnoremap <Down> :echoe "Use j"<CR>
" Recursive path for file search
set path+=**