Suggested process guidelines

DoD (definition of done) - when is a story considered done?

  • Must be reviewed and approved by one or two developers.
  • Automated test coverage must follow this premise:
    • The code must add or extend at least one acceptance (or integration) test scenario whenever the work item is a feature evolution (i.e., not refactoring or a tech chore).
  • The test suite build must be passing.
  • Must be available and tested in a validation/staging environment.
    • QA deploy via Heroku?

A Context Map Should Answer:

  • Where is the technical debt?
  • Where are the areas of technical risk?
  • What knowledge gaps do you have?

Spectrum of cooperation


Queueing Theory references

General content

FWIW: I didn't produce the content present here. I just copy-pasted it from somewhere on the Internet, but I cannot remember the exact original source. I was also unable to find the author's name, so I cannot give him/her proper credit.

Effective Engineer - Notes

What's an Effective Engineer?

sublime.json (excerpt):
"bold_folder_labels": true,
"caret_style": "blink",
"color_scheme": "Packages/One Dark Color Scheme/One Dark.tmTheme",
"ensure_newline_at_eof_on_save": true,

“Fools ignore complexity; pragmatists suffer it; experts avoid it; geniuses remove it.”

– Alan Perlis (Turing Award #1, ALGOL)

“Computer Science is the first engineering discipline in which the complexity of the objects created is limited solely by the skill of the creator, and not by the strength of raw materials.”


Functional objects in the Ruby programming language


  • Fewer questions asked. E.g:
  • More consistent style across class definitions.
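The bullets above refer to the pattern of modeling an operation as a class whose single public entry point is `call`, mirroring how lambdas and procs are invoked. A minimal sketch of the idea (the class name and the discount logic are invented for illustration):

```ruby
# A "functional object": one operation, one public method (`call`).
# PriceWithDiscount is a hypothetical example class.
class PriceWithDiscount
  # Class-level shortcut so callers can write PriceWithDiscount.call(...)
  def self.call(price, rate)
    new(price, rate).call
  end

  def initialize(price, rate)
    @price = price
    @rate = rate
  end

  # The only public instance method: compute the discounted price.
  def call
    @price - (@price * @rate)
  end
end

PriceWithDiscount.call(100.0, 0.1) # => 90.0
```

Because every such class exposes the same `call` interface, instances are interchangeable with lambdas, and there is little to debate about naming or structure, which is where the "fewer questions asked" benefit comes from.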

Sometimes a Controller is Just a Controller


You grok SOLID. You practice TDD. You've read Sandi's book…twice. You rewatch Destroy All Software monthly. You can pronounce GOOS. You know your stuff!

But some of your coworkers say your code is too complex or confusing for them. You might rush to conclude that must be a them problem.

But doubt lingers: what if they're right?


Organizing acceptance tests

We know that acceptance tests (or "feature tests", in RSpec parlance) are the tests closest to the specification designed together by developers, business analysts, stakeholders, and so on. In the agile paradigm, these tests should reflect the acceptance criteria defined in a user story (setting aside, for now, the merits of the test pyramid).

One question seems to come up repeatedly while writing these tests: how do we organize the corresponding files?

Here are some doubts that may arise:

  • Should the files' physical location and names reflect the software's physical organization or its capabilities? E.g:
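To make the trade-off concrete, here is a hypothetical sketch of the two layouts (all file paths invented for illustration):

```
# Option A: mirror the app's physical/module structure
spec/features/admin/users_spec.rb
spec/features/admin/reports_spec.rb

# Option B: name files after business capabilities
spec/features/managing_users_spec.rb
spec/features/generating_monthly_reports_spec.rb
```

Option A is easy to navigate from the code; Option B reads closer to the user stories the tests are meant to reflect.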

Testing front-end for a Sinatra app with RSpec and Capybara

I used Cucumber quite a bit at my last job. It's an excellent tool, and I believe readable tests are the way of the future. But I could never get around to writing effective scenarios, or to maintaining the boatload of text the suite becomes once you reach decent coverage. On top of that, it didn't seem to take much for the suite to become really slow as tests were added.

A while ago I saw a gist by Lachie Cox where he shows how to use RSpec and Capybara for front-end tests. That sounded perfect to me: I love RSpec, I can write my own matchers with little code when I need them, and it reads damn nicely.

So for my Rails Rumble 2010 project, as usual, I rolled a Sinatra app and figured I should give the idea a shot. Below are my findings.
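In that spirit, a minimal sketch of how the wiring might look (the modular app class name `App` and the file paths are assumptions, not the author's code; it needs the sinatra, rspec, and capybara gems):

```ruby
# spec/spec_helper.rb -- wire Capybara to a modular Sinatra app.
require 'capybara/rspec'
require_relative '../app'   # assumed to define the Sinatra::Base subclass App

Capybara.app = App          # point Capybara's drivers at the Rack app under test

# spec/features/home_spec.rb -- `type: :feature` pulls in the Capybara DSL
# (visit, page, matchers) via capybara/rspec.
RSpec.describe 'home page', type: :feature do
  it 'greets the visitor' do
    visit '/'
    expect(page).to have_content('Hello')   # hypothetical page content
  end
end
```

Capybara's default rack-test driver is enough for pages without JavaScript; a JS-capable driver can be swapped in per spec when needed.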