@italoveloso
Created October 5, 2012 20:10
Velocity 2012 notes
These reviews are very biased and mostly reflect my personal
preferences regarding the presented subjects.
Keynote videos are available at: http://velocityconf.com/velocity2012/public/schedule/proceedings
Breaking the mobile web (score: 2/5)
=======================
Estelle Weyl
* Embedded JavaScript/CSS to avoid extra requests.
* Image optimization + CSS masking to avoid transparent PNGs, but AFAIK
only WebKit supports this feature.
* Use optimized font versions that strip the unused symbols from
the font file. Too difficult to implement and error-prone though.
* Careful with text-indent: the browser still allocates memory and
consumes CPU for the element even though it is not being rendered.
* html * { transform: translateZ(0); } is nice because it enables hardware
acceleration on the target element. Do not abuse it, though: this
consumes video memory (and not every device has a lot of video memory
to begin with).
* Use transparency and gradients with care, they are expensive to
compute (duh).
* About accessibility: some people still use the pinch gesture to zoom
in, so it's not always a good idea to disable zooming.
Taming the mobile beast (score: 5/5)
=======================
Matt Welsh, Patrick Meenan (Google)
* Mainly a showcase of upcoming WebKit features and debugging facilities.
* Mobile access is becoming the main access. Rapidly.
* High bandwidth does not necessarily give good web performance. It's
all about the latency!
* Mobile connections can take several seconds to get established.
* The negotiation protocol that cellphones use to establish a connection
to the radio network is costly. The actual cost depends on the carrier,
the number of connected devices, the connection limits imposed by the
carrier and the web browser (e.g. 6 concurrent connections per domain
in Chrome, 256 overall), among other factors.
* Parallel TCP -> saturate the network -> better resource usage. May
not be a good idea on a mobile connection (several connections
fighting for the same radio channel).
* Multiple domains for static files: more round trips to resolve domain
names -> might cause problems on mobile connections.
- Pipelining might help solve this problem, and some browsers have been
using this technique for quite some time (Safari on iOS 5 and the
Android Browser are known to; Chrome doesn't, however).
- Not all browsers implement this.
- Just one problem with pipelining: more severe penalty when a packet
is dropped, since data from multiple resources might get lost.
* Some mobile carriers use transparent web proxying (e.g. by turning on
gzip compression by default. They might do something nastier; nobody
knows).
- This might also affect load times, and you'll probably never know
what caused it.
* JavaScript execution time is getting better, but it's still not
that fast.
* Mobile browsers have a very small cache size (e.g. 8 MB+ in Android).
- Some HTTP libraries have broken cache/redirect behavior:
Java's URLConnection, HttpURLConnection, and HttpClient are
examples of that.
- There might be a limit for the size of an individual object on the
cache.
* WebKit Resource Timing: it's becoming a W3C standard, a spec that
defines data that can be used to build graphs and measure a
website's performance from top to bottom.
- In summary, you will be able to build your own web-based version of
Chrome's Developer Tools.
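As a sketch of what that data enables, the snippet below turns Resource
Timing entries into per-resource durations, the kind of numbers a
waterfall plot is built from. Only performance.getEntriesByType('resource')
(shown in a comment) is part of the spec; the summarize helper and the
mock entries are assumptions for illustration.

```javascript
// Hedged sketch: summarize Resource Timing entries into per-resource
// load durations. In a browser the entries would come from:
//   const entries = performance.getEntriesByType('resource');
// The mock entries below stand in for real browser data.
function summarize(entries) {
  return entries.map(e => ({
    name: e.name,
    duration: e.responseEnd - e.startTime, // total load time in ms
  }));
}

const mockEntries = [
  { name: '/app.js', startTime: 10, responseEnd: 130 },
  { name: '/style.css', startTime: 12, responseEnd: 60 },
];
console.log(summarize(mockEntries));
// → [ { name: '/app.js', duration: 120 }, { name: '/style.css', duration: 48 } ]
```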
Tools (Chrome showcase, mostly):
* Remote debugging Chrome on Android
- Very nice to see what happens when loading the resources of a
webpage through a mobile network (waterfall plots,
request/response headers, etc).
- CPU/memory profile
- iOS 6 is bringing such functionality to Safari, probably
* Bookmarklets
- Firebug Lite
- YSlow Mobile
- csses - looks for unused CSS selectors
- NavigationTimingBookmarklet
Links:
* Mobitest: http://www.blaze.io/mobile/
* WebPageTest - http://www.webpagetest.org
* Mobile connection experiment - http://goo.gl/F5sKV
(Full URL): http://www.stevesouders.com/blog/2011/09/21/making-a-mobile-connection/
* PCAP Web Performance Analyzer: http://pcapperf.appspot.com
How to walk away from your outage looking like a hero (score: 0.5/5)
=====================================================
* Why the outage happened: bugs, infrastructure, environment, human
errors, etc.
* Monitoring has issues: it takes time to detect problems, it may not
catch the issue at all, it takes too long for people to be alerted
about a problem, and cascading errors are tricky to catch.
* Lack of communication to internal/external customers.
* Do log analysis (which is pretty obvious).
* Apart from that, she just shoved a torrent of incident reports down
our throats :-(
Links:
* Their incident analysis template: http://www.teresadietrich.net/?page_id=37
The 90-minute mobile optimization lifecycle (score: 4.5/5)
===========================================
Hooman Beheshti (StrangeLoop)
* At first, just a more succinct version of the Google keynote,
especially the part where the guys mention all the available tools to
measure the performance of a website.
* There was a nice demonstration of the steps to follow in order to
optimize the performance of a website, using a bad 'copy' of the
O'Reilly website.
* As we saw, the impacts of performance problems are different on
desktop (with wireless connection) and on mobile devices.
* New tip: use keep-alive connections, so the cost of creating new
connections gets smaller.
* CDN:
- Improves mostly average first-byte metric.
- The server closest to the client is picked, but this does not seem
to happen when it comes to mobile carriers. The presenter is
going to conduct an experiment to collect more data in order to come
up with a conclusion.
* Mobile FEO (front-end optimization):
- Deferring analytics, marketing tags, ads, and objects not in the
viewport (i.e. images); image optimization/compression.
- Careful when implementing: you might ruin the user experience!
- Thus, sometimes it's better not to defer everything and to implement
a mixed solution (i.e. defer everything beyond two viewports of
content). Improves the user experience.
- Very hard to implement, but improves most of the metrics.
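A small sketch of the mixed-deferral idea above, assuming positions
measured with getBoundingClientRect() in a real page: only images whose
top edge falls within the first couple of viewports are loaded
immediately. The helper name and the data-src convention are
illustrative, not from the talk.

```javascript
// Hedged sketch of viewport-based image deferral: load an image now
// only if its top edge is within `viewports` screens of the top.
// In a browser, `top` would come from el.getBoundingClientRect().top.
function shouldLoadNow(top, viewportHeight, viewports = 2) {
  return top < viewportHeight * viewports;
}

// Usage sketch (browser only): swap data-src into src for eligible images.
// document.querySelectorAll('img[data-src]').forEach(img => {
//   if (shouldLoadNow(img.getBoundingClientRect().top, window.innerHeight)) {
//     img.src = img.dataset.src;
//   }
// });

console.log(shouldLoadNow(500, 640));  // → true  (within two viewports)
console.log(shouldLoadNow(2000, 640)); // → false (deferred)
```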
* Mobile caching, much like what the guys from Google talked about
- Where to put the objects?
- Size of the cache
- Use response headers correctly (max-age, if-modified-since, etc)
* Network type-based optimization
- Perform different optimizations depending on the traffic source
(i.e. wifi, 3G, etc). Android has a property for this; I'm not sure
whether it's possible to do that in other browsers.
* Navigation-based optimization
- It's not a bad idea to make individual pages slower if the desired
navigation flow is made faster for the user.
* Device-specific optimization
- Some optimizations were covered by the first talk.
- Example: use localstorage to cache content, with versioning.
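A sketch of that localStorage-with-versioning technique. The key scheme
and helper names are assumptions; in a browser you would pass
window.localStorage as the storage argument (a plain object stands in
for it here).

```javascript
// Hedged sketch of versioned content caching. `storage` is any object
// with getItem/setItem (e.g. window.localStorage in a browser).
const CACHE_VERSION = '3'; // bump to invalidate every cached entry

function cacheGet(storage, key) {
  const raw = storage.getItem(key);
  if (!raw) return null;
  const { version, content } = JSON.parse(raw);
  return version === CACHE_VERSION ? content : null; // stale if versions differ
}

function cachePut(storage, key, content) {
  storage.setItem(key, JSON.stringify({ version: CACHE_VERSION, content }));
}

// Usage with a plain-object stand-in for localStorage:
const mem = { data: {}, getItem(k) { return this.data[k] ?? null; },
              setItem(k, v) { this.data[k] = v; } };
cachePut(mem, 'home-fragment', '<ul>…</ul>');
console.log(cacheGet(mem, 'home-fragment')); // → '<ul>…</ul>'
```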
* Network configurations matter!
- TCP windows: increasing the init_cwnd setting is a common
performance recommendation. CDNs typically use this configuration to
improve performance. Also, the latest Androids raised the default
value of this setting to 14600 bytes.
- There's also a recommended formula to define the optimal value for
this setting.
Highlights:
* Most of the things he talked about, we already knew we needed to improve.
* Funny thing: the fully optimized website, which has features similar
to the desktop version of GloboTV (with lots of pictures, text, and
CSS), takes 7s+ on average to load (from the user's point of view).
Quite similar to our time.
* We have some significant improvements to do on the mobile version of
GloboTV though.
Links:
* CDN experiment: http://bit.ly/velocitycdntest
* http://webperformancetoday.com
* Similar talk: http://www.slideshare.net/Strangeloopnet/marrying-cdns-with-frontend-optimization
The performance implications of responsive web design (score: 3.5/5)
=====================================================
Jason Grigsby
* There's too much inconsistency among devices (even in Android)
* Fluid grids, Flexible image sizes, etc, but we already know that
* CSS media queries are only one aspect of the problem of creating a
responsive experience.
* Another approach to implementing responsive design is to do it the
other way around: start with the reduced version and then scale it up
to bigger screen sizes.
* Performance tip: make the payload smaller when rendering the website
at smaller screen sizes (smaller images, stylesheets, etc).
* Tips for webdesigners when it comes to responsive design:
- Build mobile first
- Cascade from small to large screen sizes in css mediaqueries, in
this order
- Deliver different images for different screen sizes (we already do
this). But we need something standardized by the W3C, like an img
element mixed with media queries (like Picturefill, although this
solution is NOT permanent and should be replaced as soon as we have
a standard).
- Do not serve high-density images unless you need to (apple.com
serves high-density images all the time, which is not good). Is there
a way to detect whether the website is being rendered on a
high-resolution screen? (e.g. the -webkit-device-pixel-ratio media
feature in WebKit)
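As a sketch of that detection, the snippet below picks an image variant
from the device pixel ratio. The "@2x" URL naming convention is an
assumption for illustration; in a browser the ratio would come from
window.devicePixelRatio (or a -webkit-device-pixel-ratio media query).

```javascript
// Hedged sketch: choose an image variant based on device pixel ratio.
// In a browser: const dpr = window.devicePixelRatio || 1;
function pickImage(baseUrl, dpr) {
  // Insert "@2x" before the file extension for high-density screens.
  return dpr >= 2 ? baseUrl.replace(/(\.\w+)$/, '@2x$1') : baseUrl;
}

console.log(pickImage('/img/hero.png', 1)); // → '/img/hero.png'
console.log(pickImage('/img/hero.png', 2)); // → '/img/hero@2x.png'
```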
Links:
* matchMedia() polyfill: https://github.com/paulirish/matchMedia.js/
* The Boston Globe: http://bostonglobe.com
* Respond.js: https://github.com/scottjehl/Respond
* Picturefill (just an experiment): https://github.com/scottjehl/picturefill
* Foresight.js: https://github.com/adamdbradley/foresight.js
* Southstreet: https://github.com/filamentgroup/Southstreet
* Tim Kadlec Blog: http://timkadlec.com
Real time systems at twitter (score: 2/5)
============================
Raffi Krikorian, Arya Asemanfar
* They had a monolithic Ruby app, even though they were organized in
multiple engineering teams.
- Decompose into multiple services (social graph service, tweetypie,
gizmoduck, tls, and so forth)
- They started by changing the application layer by layer, in this
order:
> storage & retrieval, logic, presentation, and routing.
- Team organization should mimic the software stack organization. If
everything's decomposed properly, they would be able to perform
multiple parallel releases without losing control.
* They built a system to control deployment, where they go to a
dashboard and turn features on/off.
* They are using Java. A lot. 45% of their traffic passes through a
JVM-based app.
* They split their environment into layers:
- Production
- Don't-remember-the-name: receives a very small amount of
production traffic and it's heavily monitored.
- Staging: It's just like the production environment, except that it
doesn't receive external traffic
- Development
Links:
* Finagle: http://twitter.github.com/finagle
* Zipkin (waterfall plot): https://github.com/twitter/zipkin
* Iago (load generator): https://github.com/twitter/iago
Stability Patterns (score: 2/5)
==================
Michael Nygard
* Looks very much like the content of his book.
* The score was low because I had already read his book, so the talk
didn't add anything new. He is a great speaker though.
Links:
* His book: http://pragprog.com/book/mnee/release-it
Understanding hardware acceleration on mobile devices (score: 0/5)
=====================================================
Ariya Hidayat (Sencha)
* This talk was not useful for me at all.
* It covered only implementation details of how hardware acceleration
is done in browsers.
Web vs Apps BoF (score: 2/5)
===========
Ben Galbraith, Dion Almaer (Walmart)
* Tradeoffs between web and native apps, nothing really new to me
Highlight:
* Web application written entirely in JavaScript, with Node.js on the
server (see the link below).
- Very interesting as a proof of concept, nothing more.
Links:
* Function Source (look at the page source!): http://functionsource.com/post/conditional-tier-rendering-the-battle-of-server-innerhtml-vs-js-mvc-json
Browsers (score: 1.5/5)
========
Opera Mini/Opera Mobile (Luz Caballero)
* Information about how Opera Mini and Opera Mobile work
- Not really relevant, since proxy browsers are getting less popular
every day.
Chrome and Chrome for Android (Tony Gentilcore)
* Implementation details and roadmap of upcoming features.
Mobile Firefox (Taras Glek)
* Implementation details and roadmap of upcoming features.
Highlights:
* These talks just showed how Chrome > Firefox >> Opera
Timers, power consumption, and performance (score: 4/5)
==========================================
Nicholas Zakas (Ex-Yahoo, Well Furnished)
* The UI thread does two things: update the UI and execute JavaScript.
- Since there's just one thread, only one thing runs at a given time.
> This happens because you might want to change something in the DOM
that will need to be drawn.
* setTimeout() solves this, but it's often not used properly.
- It doesn't mean "run x() after y ms", but "queue x to be run after
y ms", which is quite different.
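The queueing semantics are easy to see in a tiny example: even with a
0 ms delay, the callback only runs once the current synchronous code
has finished.

```javascript
// Demonstrates that setTimeout(fn, 0) means "queue fn to run once the
// current synchronous code has finished", not "run fn right now".
const order = [];
setTimeout(() => order.push('timer'), 0);
order.push('sync');

console.log(order); // → ['sync']  (the timer callback has not run yet)
setTimeout(() => console.log(order), 10); // → ['sync', 'timer']
```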
* Some work has been done to make setTimeout useful for creating
smooth animations (at least 60 Hz), but this consumes battery on
mobile devices.
- Also, some browsers don't bother running animations when the tab
is not being viewed (in some cases, the refresh rate drops to 1
refresh per second).
- This behavior is a little buggy right now (we actually experienced
some of these bugs in the jQuery Destaque plugin we made for
Globo.tv).
* requestAnimationFrame(): a function to inform the browser that you
intend to run code that needs to be drawn to the screen, so the
browser knows which optimizations to use. It doesn't work
cross-browser yet; just check whether the function is available on
the window object.
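A common feature-detection sketch for that; the vendor-prefixed names
and the ~60 fps fallback interval are conventions of the time, not from
the talk, and globalThis stands in for window so the snippet also runs
outside a browser.

```javascript
// Hedged sketch: feature-detect requestAnimationFrame (with vendor
// prefixes) and fall back to setTimeout at roughly 60 fps.
const g = globalThis; // `window` in a browser
const rawRaf =
  g.requestAnimationFrame ||
  g.webkitRequestAnimationFrame ||
  g.mozRequestAnimationFrame ||
  (cb => setTimeout(() => cb(Date.now()), 1000 / 60));
const raf = rawRaf.bind(g); // bind so the native browser method works too

function animate(timestamp) {
  // ...update state and draw one frame here...
  // raf(animate); // uncomment to schedule the next frame
}
raf(animate);
```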
* setImmediate(): a replacement for setTimeout(fn, 0).
* Use the Worker API when you need to process data (e.g. anything that
doesn't manipulate the DOM) in the background. Workers are optimized
for this kind of work.
Links:
* How JavaScript timers work: http://ejohn.org/blog/how-javascript-timers-work/
* Analysing Timer Performance: http://ejohn.org/blog/analyzing-timer-performance/
* requestAnimationFrame: https://developer.mozilla.org/en/DOM/window.requestAnimationFrame
* setImmediate: https://developer.mozilla.org/en/DOM/window.setImmediate
Scaling Pinterest (score: 3.5/5)
=================
Yashwanth Nelapati, Marty Weiner
* Python-based project
> They also use MySQL, Redis, Memcached, etc, on top of Amazon EC2/S3
* No matter what, things will fail. So keep things simple, in order to
make recovery easier.
* Clustering vs sharding (sharding was the chosen approach)
- Clustering: easier to setup, no single PoF, in theory
> Usually not true in practice (data loss/corruption, improper
balance, etc)
> They seemed to suffer a little bit with buggy tools, and this
was the major reason to abandon clustering
- Sharding: harder to set up, intrusive (you actually have to
implement the whole thing).
> Harder to design and change the data schema; makes it impossible
to run complex queries on relatively unnormalized data; no
transaction support; etc.
* They monitor the load on each database server. If one db server is
under high load, they create a replica and make the new db responsible
for some of the databases.
- They seem to do this manually.
* Very simple shard ID structure: shard id + object type (i.e. pins) +
local id (auto-increment).
- UUIDs are an alternative approach, except that they're not very
cheap to compute.
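That ID structure can be sketched as bit-packing. The 16/10/38 bit
split below is an assumption for illustration, not Pinterest's
published layout.

```javascript
// Hedged sketch: pack shard id + object type + local id into one
// 64-bit-ish integer (BigInt, since JS numbers lose precision there).
const TYPE_BITS = 10n, LOCAL_BITS = 38n;

function packId(shard, type, local) {
  return (BigInt(shard) << (TYPE_BITS + LOCAL_BITS)) |
         (BigInt(type) << LOCAL_BITS) |
         BigInt(local);
}

function unpackId(id) {
  return {
    shard: Number(id >> (TYPE_BITS + LOCAL_BITS)),
    type: Number((id >> LOCAL_BITS) & ((1n << TYPE_BITS) - 1n)),
    local: Number(id & ((1n << LOCAL_BITS) - 1n)),
  };
}

console.log(unpackId(packId(3, 1, 12345))); // → { shard: 3, type: 1, local: 12345 }
```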
* No complex queries; only primary key or index lookups (obviously)
* No schema changes are done
- When a schema change is needed, another database is created with
the new structure, and a migration script populates that new db,
which then becomes the live db.
- They keep the old db for some time before it gets removed, for
safety reasons.
- They made a scripting farm just to run migrations and scripts to
create new shards, due to the huge amount of data (500M pins, 1.6B
follower rows, etc).
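The migrate-into-a-fresh-database approach can be sketched with plain
objects standing in for the real stores; migrate and transformRow are
hypothetical names, not from the talk.

```javascript
// Hedged sketch: the old store is never altered in place. A script
// copies every row into a new store with the new shape, which then
// becomes the live db (the old one is kept around for safety).
function migrate(oldDb, transformRow) {
  const newDb = {};
  for (const [id, row] of Object.entries(oldDb)) {
    newDb[id] = transformRow(row);
  }
  return newDb;
}

const oldDb = { 'pin:1': { title: 'cats' } };
const newDb = migrate(oldDb, row => ({ ...row, schemaVersion: 2 }));
console.log(newDb['pin:1']); // → { title: 'cats', schemaVersion: 2 }
```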
* Caching
- A solution based on Memcached and Redis.
> Redis for its fast data structures to store lists and sets, and
Memcached to store single objects (is there a benchmark to prove
this?).
* Current problems, aside from technology
- Scaling the team. They are trying to allocate small teams to work
on specific parts of the product. Not quite there yet.
Links:
* Pyres (based on Resque): https://github.com/binarydud/pyres
5 essential tools for ui performance (score: 3/5)
====================================
Nicole Sullivan
* There have been some cool improvements in WebKit, IE, and Gecko:
- Style sharing
- Faster/more intelligent element matching
- The 'a > b' selector is faster than 'a b'
- :hover, :active, and similar pseudo-classes used to be very slow, but
browser vendors are increasingly making them faster by
evaluating fewer unrelated elements
- Avoid the ~ and + selectors; they are still very slow.
* Now, about IE 10
- They are improving their engine to make matching commonly used
selectors faster.
- Performance improvement: stick to common selector patterns.
* Now, the tools
- Chrome selector audit tool: shows stats about matched selectors.
Not very consistent, but it gives an idea of what's happening.
- Tilt: 3d visualization of the nested elements of a document
- Dust-me selectors
* Chrome has a "show paint rects" option (the --show-paint-rects flag)
to display what's being redrawn by the browser at any given moment.
- Good for seeing whether a JavaScript function is causing a bigger
redraw than expected, thus impacting performance.
Links:
* Tilt: https://addons.mozilla.org/en-US/firefox/addon/tilt/
* Dust-me selectors: https://addons.mozilla.org/en-US/firefox/addon/dust-me-selectors/?src=search
Living with technical debt (score: x/5)
==========================
Nathan Yergler (Eventbrite)
* Technical debt is something that increases the cognitive overhead of
your software.
- It's not just about "ah, I would have done this differently today".
- It's about the perception that something is holding you back, that
you cannot deliver the amount of value you think is possible.
- It's not about whose fault that was.
- It's that bad decisions happen and accumulate over time, even if
you deliver value and meet the deadlines.
* Makes it harder for new team members, so adding new team members in
order to deliver more value is NOT an alternative.
* Rewrite everything from scratch? That's a business decision.
- A doable alternative is to swap debts, e.g. gradually replace a
more serious debt with a less serious one.
* Redesign and prototype the solution the team thinks is the right one.
- Lateral refactoring: create new stuff separated and start to build
upon it, moving the trash away over time. That way, you can deliver
value and gradually improve the foundation of the software.
> Long-term improvement is better than no improvement at all.
Inertia should not take control.