This filter intercepts the response and runs Flying Saucer's ITextRenderer on it, returning a PDF instead.
Just add the filter to your code and configure, in web.xml, the URL patterns it should run on. The filtered pages will be returned as PDF documents.
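A minimal web.xml mapping might look like the following sketch; the filter class name `com.example.PdfFilter` and the `/reports/*` pattern are placeholders for your own filter class and URLs:

```xml
<!-- Register the filter (filter-class is a placeholder; use your own). -->
<filter>
  <filter-name>pdfFilter</filter-name>
  <filter-class>com.example.PdfFilter</filter-class>
</filter>
<!-- Any request matching this pattern is returned as a PDF. -->
<filter-mapping>
  <filter-name>pdfFilter</filter-name>
  <url-pattern>/reports/*</url-pattern>
</filter-mapping>
```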
High on the long list of reasons why an all-Flash site sucks is this dialogue, which I found somewhere. It is awesome and, unfortunately, quite realistic.
* Dialogue
Ever try selling a client over the phone with a Flash-based site? It goes something like this:
- Ok. Wait for the loader bar to finish. Is it loaded? Great.
- Wait for a second while the intro animation runs.
- Click on the pulsing green button.
- See the dancing donkey in the corner? Click that.
- Click on the donkey's right eye.
$ echo 'export PATH=$PATH:$HOME/.gem/ruby/1.9.1/bin' >> ~/.bashrc && . ~/.bashrc
<?php
function getCEP($psCEP)
{
    echo utf8_encode(urldecode(trim(@file_get_contents('http://cep.republicavirtual.com.br/web_cep.php?cep=' . urlencode($psCEP) . '&formato=javascript'))));
    // Returns: var resultadoCEP = { 'uf' : 'UF', 'cidade' : 'Cidade', 'bairro' : 'Bairro', 'tipo_logradouro' : 'Avenida', 'logradouro' : 'Logradouro', 'resultado' : '1', 'resultado_txt' : 'sucesso - cep completo' }
}
?>
cd ~
sudo apt-get update
sudo apt-get install openjdk-7-jre -y
wget https://github.com/downloads/elasticsearch/elasticsearch/elasticsearch-0.19.8.tar.gz -O elasticsearch.tar.gz
tar -xf elasticsearch.tar.gz
rm elasticsearch.tar.gz
sudo mv elasticsearch-* elasticsearch
sudo mv elasticsearch /usr/local/share
require 'faraday_middleware'
require 'hashie/mash'

# Public: GeoIP service using freegeoip.net
#
# See https://github.com/fiorix/freegeoip#readme
#
# Examples
#
#   res = GeoipService.new.call '173.194.64.19'
class GeoipService
  def call(ip)
    # Sketch of the client the comment above describes: faraday_middleware's
    # :json parses the response body, :mashify wraps it in a Hashie::Mash.
    conn = Faraday.new(url: 'http://freegeoip.net') do |f|
      f.response :mashify
      f.response :json
      f.adapter Faraday.default_adapter
    end
    conn.get("/json/#{ip}").body
  end
end
select:focus,
textarea:focus,
input[type="text"]:focus,
input[type="password"]:focus,
input[type="datetime"]:focus,
input[type="datetime-local"]:focus,
input[type="date"]:focus,
input[type="month"]:focus,
input[type="time"]:focus,
input[type="week"]:focus,
input[type="number"]:focus,
input[type="email"]:focus,
input[type="url"]:focus,
input[type="search"]:focus,
input[type="tel"]:focus,
input[type="color"]:focus {
  border-color: rgba(82, 168, 236, 0.8);
  box-shadow: 0 1px 1px rgba(0, 0, 0, 0.075) inset, 0 0 8px rgb(7, 122, 175);
  outline: 0 none;
}
#!/bin/bash
oldbase="gruber"
newbase="gruberteste"
nomearquivo="$oldbase-$(date +%d%m%Y-%H%M%S).sql"
outputfile="/home/gts/sql-base-fiscal/$nomearquivo"
httpserver="/srv/http/"
echo "==========================================="
echo "Dumping database \"$oldbase\""
require 'lda-ruby'

corpus = Lda::Corpus.new
corpus.add_document(Lda::TextDocument.new(corpus, "a lion is a wild feline animal", []))
corpus.add_document(Lda::TextDocument.new(corpus, "a dog is a friendly animal", []))
corpus.add_document(Lda::TextDocument.new(corpus, "a cat is a feline animal", []))

lda = Lda::Lda.new(corpus)
lda.verbose = false
lda.num_topics = 2
lda.em('random')
class API::V1::BaseController < ApplicationController
  skip_before_filter :verify_authenticity_token
  before_filter :cors_preflight_check
  after_filter :cors_set_access_control_headers

  def cors_set_access_control_headers
    headers['Access-Control-Allow-Origin'] = '*'
    headers['Access-Control-Allow-Methods'] = 'POST, GET, PUT, DELETE, OPTIONS'
  end

  # Answer CORS preflight (OPTIONS) requests before they reach an action;
  # headers are set here because halting the chain skips the after_filter.
  def cors_preflight_check
    if request.request_method == 'OPTIONS'
      cors_set_access_control_headers
      head :ok
    end
  end
end