What's wrong with Netmag's "Optimize your JavaScript" post

Update: The original post on Netmag has been updated since this was written.

I tweeted earlier that this should be retracted. Generally, these performance-related articles are essentially little more than linkbait -- there are perhaps an infinite number of things you should do to improve a page's performance before worrying about the purported perf hit of multiplication vs. division -- but this post went further than most in this genre: it offered patently inaccurate and misleading advice.

Here are a few examples, assembled by some people who actually know what they're talking about (largely Rick Waldron and Ben Alman, with some help from myself and several others from the place that shall be unnamed).

Things that are just plain wrong

  • Calling array.push() five times in a row will never be a "performance improvement." The author has clearly confused creating an array literal ["foo", "bar", "baz"] and then using .join("") on it vs. creating an array, pushing individual items, and then joining. See here for the proof, and see here for a possible explanation.

  • The author sets up a for loop as follows: for(var i = 0; length = 999; i <= length; i++){. This has one semicolon too many and results in a syntax error; presumably a comma was intended after i = 0.

  • The author suggests for(var i = my_array.length; i--) as a shorter version of a for loop. While you can get by with using the decrement as the conditional, omitting the semicolon at the end causes a syntax error. And if someone were to move the semicolon to before the decrement, it would cause an infinite loop. If you were ever to indulge in this style of cleverness, a while loop looks much more sane: var i = my_array.length; while( i-- ) {}.

  • Because JavaScript lacks block scope, variables declared inside blocks are not local to any block. The variable declaration is actually "hoisted" to the beginning of the nearest execution context (function body or global scope) such that var foo; for(...) { foo = 1; } behaves exactly the same as for(...) { var foo = 1; }. It is, however, considered bad practice to declare variables inside of blocks, because novice JavaScript developers infer from it that JavaScript has block scope.

  • Creating a variable to hold the length property in a for loop is typically no faster (and sometimes slower) in Chrome. Making it faster in Firefox doesn't make it magically "faster" everywhere.
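To make the first bullet concrete, here's a minimal sketch (variable names are mine) of the three approaches the article conflates; all produce the same string, and only the assembly differs:

```javascript
// Three ways to build "foobarbaz"; the article confuses the first two.

// 1. Array literal, joined once
var viaLiteral = ["foo", "bar", "baz"].join("");

// 2. Empty array, repeated .push() calls, then a join
var pieces = [];
pieces.push("foo");
pieces.push("bar");
pieces.push("baz");
var viaPush = pieces.join("");

// 3. Plain string concatenation
var viaConcat = "foo" + "bar" + "baz";

console.log(viaLiteral === viaPush && viaPush === viaConcat); // true
```

None of this says anything about speed on its own; that's what the jsperf tests referenced above are for.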
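For reference, here's what syntactically valid versions of the loops criticized above look like (my_array and the counter names are placeholders):

```javascript
var my_array = ["a", "b", "c"];

// What the article presumably meant: a comma in the initializer, not a
// second semicolon.
var visited = 0;
for (var i = 0, length = my_array.length - 1; i <= length; i++) {
  visited++;
}
console.log(visited); // 3

// The decrement-as-condition trick, written as a valid while loop.
var j = my_array.length;
while (j--) {
  // body runs with j === 2, then 1, then 0
}
```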
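A small sketch of the hoisting behaviour described above: both functions are equivalent, because the var declaration is hoisted to the top of the enclosing function either way.

```javascript
// Both functions behave identically: the declaration of `foo` is hoisted to
// the top of the function body no matter where the `var` keyword appears.

function declaredOutside() {
  var foo;
  for (var i = 0; i < 3; i++) {
    foo = i;
  }
  return foo;
}

function declaredInside() {
  for (var i = 0; i < 3; i++) {
    var foo = i;
  }
  return foo; // still in scope: `var` is function-scoped, not block-scoped
}

console.log(declaredOutside() === declaredInside()); // true, both return 2
```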

Things that demonstrate a lack of actual expertise in the subject matter

  • The article mentions DOM perf almost as an afterthought. When it comes to perf, rarely is JavaScript the problem -- DOM reflows are a much more likely culprit for performance issues, but even just modifying elements via the DOM API in memory is still slow. This is one of many reasons people use client-side templates, which the author does not mention. All of that said, if you're looking for real performance gains, take a long, hard look at your HTTP overhead. Chances are that optimizing an image or two will make more difference than any improvements you can make to your JavaScript.

  • The author talks about avoiding unnecessary method calls within conditionals, and then demonstrates by accessing a property, rather than calling a method.

  • The author talks about local vs global variables, and then proceeds to demonstrate the concept with an instance property, not a local variable.

  • The author uses the global Array constructor function to create a new array, rather than using the Array literal [] syntax which is, itself, known to be a performance improvement.

  • The section "Calling methods or properties on array elements" is true, but somewhat misses the point: lookups of any sort should be stored in a variable, rather than being repeated. This guidance has nothing to do with array elements; it's just as true for the result of a function call.

  • In the "encapsulating methods within class declarations" example, the author fails to point out that there are really two performance metrics to be tested: 1) cost while creating a new instance object and 2) cost while dereferencing a method from an instance object. The provided example only discusses #1 without addressing #2. What's interesting is that, typically, #1 and #2 are in tension: improving one tends to worsen the other. When methods are defined directly on an instance object, they consume more memory and cycles (because a new function object must be created for each instance object). When methods are defined on the prototype, however, they need only be defined once, so the initial overhead is less, but each time the method is dereferenced from the instance object, the cost is slightly higher because the prototype chain must be traversed. A compromise can be had whereby a single previously-defined function is assigned as a method on each individual instance, thus reducing memory usage, but the cost of initially assigning the method probably won't offset the cost of accessing the method via the prototype down the road.

  • The "encapsulating methods within class declarations" section also lacks an understanding of the semantic, language-level behaviour and use cases for instance properties and methods versus prototype properties and methods. Comparing instance properties and methods with prototype properties and methods is like comparing a knife with a stealth fighter jet -- yes, they can both be lethal, but one is far more efficient and suited to larger-scale tasks, and the other is suited to individual, "case by case" (i.e. instance) tasks. Additionally, any time that data properties are defined on a prototype, it's likely a mistake that will result in bugs.

  • The section under the heading "Unnecessary method calls within conditionals in For Loops" gives an egregiously absurd example that itself misses the point that should have been made: operations in loops have costs; be mindful of this. In addition to this mistake, there is actually no such thing as a "conditional" in for loops; the author is simply unaware that the actual name is "Expression", as in... IterationStatement, optional Expression.

  • The author advises using Firebug for profiling code, but provides absolutely zero guidance on how to do this, or even a link to any guidance. This article offers step-by-step guidance on using the Chrome developer tools to identify performance bottlenecks -- which, as we mentioned, you should only do after you've done a whole lot of other things you can do to improve your page's performance. The YSlow page offers a good list of things to do to improve page performance.
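On the Array constructor point, a quick sketch of why the literal form is preferred; the single-numeric-argument behaviour is the classic gotcha:

```javascript
// Equivalent arrays; the literal form is shorter and avoids a surprise.
var viaConstructor = new Array("a", "b");
var viaLiteral = ["a", "b"];
console.log(viaConstructor.length === viaLiteral.length); // true

// The gotcha: a single numeric argument sets length, not contents.
var surprise = new Array(3);
console.log(surprise.length); // 3
console.log(surprise[0]);     // undefined
```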
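The two costs described in the "encapsulating methods" bullet can be seen in a sketch like this (the constructor names are mine):

```javascript
// Cost #1 (creation): each InstanceStyle instance gets its own copy of the
// method, so construction does more work and uses more memory.
function InstanceStyle() {
  this.greet = function () { return "hi"; };
}

// Cost #2 (dereference): ProtoStyle instances share one function object, but
// each lookup of c.greet walks the prototype chain to find it.
function ProtoStyle() {}
ProtoStyle.prototype.greet = function () { return "hi"; };

var a = new InstanceStyle();
var b = new InstanceStyle();
var c = new ProtoStyle();
var d = new ProtoStyle();

console.log(a.greet === b.greet); // false: two separate function objects
console.log(c.greet === d.greet); // true: one shared function object
```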

In Summary

Smart people spent their time correcting this article, and this sucks. When misleading and inaccurate content gets published by a prominent site, it falls to a busy community to use its time -- a scarce resource -- to correct said content. Publications that distribute such content should be embarrassed, and while authors who create it should perhaps be commended for their efforts, they should strongly consider enlisting a knowledgeable technical reviewer in the future. Goodness knows the publication won't cover their ass for them.

Want to know more? See some debunktastic perf tests :)

And then there's this gem...

It’s also worth remembering that if you don’t put var in front of your declarations, the code will run and you won’t receive an error (unless you’re using IE6)...

That actually affects IE <= 8, and only occurs if the variable in question shares its name with an id or name of an element in the document.

It's a minor correction compared to the other hilariously, inexcusably egregious errors mentioned above... but, if anything, it further demonstrates the author's misunderstanding of the issues at hand.

Respectfully, the tone of this gist is a real shame. I'm on your side, you're right and I agree with you - but please, read over what you've put here and ask yourself: is this the way to speak to someone who's taken their time to try and help others? Whether they're right or not?

@JamieMason I thought long and hard about this. My issue is with Netmag for publishing content with little regard for its technical quality. I'd never have invested the energy in collecting this list had the author of the post written it on his personal blog. However, when a supposedly reputable publication distributes blatantly inaccurate content, then I do believe it needs to be called out as such, and authors need to know that if they choose to publish content in a publication that has such reckless disregard for accuracy, then they will be subject to public correction.

If I were this author, I would take this as a lesson in the need for a thorough technical review by a qualified person -- as I say at the end, the publication in question clearly has no inclination to ensure you don't end up looking dumb, and so it's in your best interest to ensure your article is accurate. I do commend the author for his efforts, but if I were the one publishing such misleading things, I'd expect nothing less (and actually a whole lot worse) than what was written above.

@JamieMason This post was filled with quantitative judgements, citations, and a delightful lack of bickering (save for one crass comment aimed at publishers). The author might feel bad for being rebuked so accurately, but they and the publisher are the responsible parties by choosing to distribute the work.

@JamieMason This post is objectively criticizing the article on technical merits. I can't find any fault with Rebecca for doing this; as a matter of fact, I commend her for taking the time ("her scarce resource") to do this. Our community (especially in front end development) is developing quite rapidly, and we need to guide other developers with the knowledge that we gain in our everyday experience; however, we must take great care to guide them in the right direction and not steer them in a misguided and wrong direction. Rebecca's attempt was only to right a wrong, and we should all strive to do such. Every time I blog, I make sure that I ask a fellow developer to review it for me, and it seems that this author did not take time to do that.
@rmurphey thanks for doing this!!

Nice writeup. It's unfortunate that stuff like that article keeps spreading. As much as I love JavaScript, I hesitate whenever someone says they want to learn it because of the volumes of bad information out there on it.

I agree with @JamieMason; totally legit on the article's many wrong points, but the same points could have been expressed in a nicer way. As it stands, it's just one developer who knows something really well publicly crushing another. I'm honestly sure it felt really good to write, as I have written similar without caring for others' feelings in the past. Then I realized one day that there's always a smarter guy or gal around the corner, and I should be more humble.

These days I write on topics slightly out of my skill level in an attempt to learn and share with others. (Try explaining the difference between Apache prefork processes vs Nginx's event-based processing model.) I'm occasionally wrong and someone corrects me, and I'm so happy they did, so long as they didn't treat me like an idiot.

I agree with @JamieMason as well. Was an effort made to contact the author or the publisher of the post privately to help with corrections before "taking it to the streets"? The technical rebuttal is sound but the delivery is littered with pride and arrogance; there has to be a more tactful way to do this.

Awesome, publicly addressing public misinformation is insensitive. I wonder if this standard will be applied to all public servants. I predict a double standard.

Wish I could "Like" your post @davemo. Everything is technically correct, but the tone is really discouraging. Surely there are better ways to incite positive changes, like contacting the publisher.

I agree with @davemo with regards to the situation of someone posting their own findings or tutorial on their blog, or perhaps a project they are working on. However, when it's from a source claiming to be authoritative, as well as an individual presenting himself as authoritative, that kind of misinformation spreads quickly and needs to be snuffed out. I've had so many times where helping someone in #jquery gets messy because someone reads an article like this and thinks they have to make strings using Array() and they haven't even learned how to architect an app yet. Thank you for calling attention to the need to be polite -- I hope this post's targeting of Netmag and lack of character attacks is evidence of calling out the facts vs. being inflammatory. Unfortunately, the author was the victim of Netmag not having a good review process; please don't blame this post for that lack of responsibility.

I'm glad @rmurphey "took it to the streets". As a new developer I am constantly trying to pin down what the language's best practices are. Because of my lack of experience and the amount of information out there to absorb, I can't necessarily go and confirm every article's accuracy on my own. When it's a major publication like Netmag or SmashingMag newcomers like me tend to trust these articles with a blind eye (although not so much lately for me). It's good to see that the more experienced developers are taking the time to point out those flaws before the younger developers start blowing shit up.

@adambowman The code was incorrect in the Netmag article, and was copied and pasted here verbatim. Indeed, with a comma, it would be correct, but in the original article it was not.

Good article; while some people disagree with you, I totally agree that you pointed out such flaws in the article by Netmag.
Seems nobody proofreads articles before publishing them anymore; it's all about publishing content as fast as possible: quantity over quality. At the same time, there are few facts backing the statements in the article, such as the speed comparisons between the different operations.

Another funny statement is the one about local vs. global declaration (declaring var inside a for loop); sure, this is correct, but most browsers these days, such as WebKit and Firefox, will see these errors and fix them while parsing the JavaScript source code.

And the optimizations are way out of context with today's browsers, if you ask me. The optimizations pointed out in the article do very little for performance unless you are working with huge amounts of data, and then we are talking about on the order of tens of thousands of items.
Doing animations and draw operations using canvas and/or divs will be the most costly operations.

Other optimizations, such as caching techniques, or more depth on how to use Firebug or other consoles for doing performance tests, could have been covered. Compressing and merging your JavaScript files is an excellent technique for using less bandwidth and serving content faster.

In the end "Premature optimization is the root of all Evil", optimize when needed and when you know why you are doing it.

So @davemo, @interlock, @jamiemason & others are commenting about the 'tone' of the criticism of an extraordinarily uninformed article that was published as an authoritative expert set of tips in an extremely popular website?

I agree with the entire process @rmurphey and others have taken with this gist, the message on the original article and the call for retracting. IMHO any call for retraction of an article must include solid reasons for doing so. This gist provides the facts to back up the call for retracting. It would have been a mistake to call for retraction without facts.

Also, articles get most views soon after publication, so it is important to respond quickly while there is time to debunk. So I agree that publishing the facts publicly was the proper course of action. In addition, as stated, this was not a personal blog but rather a publication that gets picked up by aggregators.

Finally, as stated the author of the original article "should perhaps be commended for their efforts." Personally, I hope he continues to write articles.

[This is taken from a response to a thoughtful and level-headed email I received, and I am glad that the sender wrote it, even though we disagree about the appropriateness of the post above. I hope that it clarifies my rationale for posting this gist a bit further.]

I don't think I'm a hot shit JavaScript developer. I've come to realize that I'm better than many, but worse, in some way, than every single developer I look up to. I'm constantly awed and terrified by the things other people know that I don't. I struggle mightily with impostor syndrome, but do my best to put myself in situations where I don't know stuff and have to learn it, and to surround myself with people from whom I can learn. I also try to do my best to help others add to their knowledge, always remembering that I was in their shoes once, and I wouldn't be where I am today if others had not helped me. When I'm wrong, I for damn sure want someone to tell me so.

On the subject of the Netmag article: as I indicated in another comment on the gist, I thought long and hard about whether posting this gist was the right thing. I took a draft filled with the raw, frustrated, snarky writing of several people, and, with their permission, did my level best to turn it into an appropriate critique.

I feel badly for the author of the article -- a profit-seeking publication asked him to write something, and then did nothing to ensure he didn't end up with egg on his face. The publication could have enlisted the services of qualified technical reviewers, but it didn't. Had this author written this content on his personal blog, it's likely it wouldn't have even been noticed; if it had been noticed, then it's likely that its shortcomings might have been addressed in a comment, or might have just been left alone.

Instead, it was published in a supposedly respectable publication, and tweeted from an account with 31,000 followers. This sort of distribution of misinformation is absolutely detrimental to a community full of members who invest an enormous amount of unpaid time helping beginners. Indeed, many of the people who helped write the post above know each other from the endless hours they've spent helping people in the #jquery IRC channel, often correcting exactly the sort of misinformation that was contained in the article. When publications spread this misinformation for profit, it absolutely trickles down to our daily lives.

One of the biggest reasons I thought long and hard about how to respond to the article, though, is because I spend a fair bit of time trying to encourage other people to participate in the JavaScript community, particularly women. I thought a lot about how I'd react if a woman had written the post (I'd be just as disappointed), and how my reaction might affect others' thinking when it comes to joining this community as a share-er of knowledge. The bottom line is that there is a wide world of people way less nice than what's written above, and, personally, I use this as a very strong motivator to know my shit. I absolutely do not encourage people to write articles or give talks about stuff they do not know; rather, I encourage them to write articles or give talks about things they have learned, and possibly even the process of learning it. I encourage them to get their feet wet in lower-stakes situations, like local meetups and their own blog, or maybe StackOverflow or somesuch -- places where they can get a sense of what they know, what they don't know, and what they don't know they don't know.

I stand by the decision to post what was posted above, and hope you might now understand the reasoning behind it a bit better.

@nimbupani Yeah, like, what I wrote... general conduct.

I don't think anyone would suggest the guy shouldn't have been corrected; I just wanted to ask (without confrontation) what we feel is the right thing to do when a dev trips up in public. Correct them as we kick them on the ground? Or while we're helping them back to their feet?

@JamieMason I say this very genuinely: I would appreciate seeing a fork of this gist that does a better job. Kicking anyone while they're on the ground was not the goal here; forcefully and publicly correcting bad information spread by a for-profit publication was the goal.

Curious here. Has anyone contacted Netmag or the author to find out if they indeed did, or did not, hire a technical reviewer? If they did their due diligence, then perhaps this gist should also be directed at the reviewer.

@rmurphey I don't doubt you had anything but the best of intentions, I just threw in something for consideration that's all - nothing wrong with a helpful suggestion between devs, right? :)

@commadelimited Netmag so far has been non-communicative; we've commented on the post and reached out on Twitter. However, the author has indicated in a comment that the article will be revised. You make a fair point that the gist should be directed at the reviewer, if any, but ultimately the responsibility falls to the publication; if a reviewer did review the article, the publication needs to find a new reviewer :)

@JamieMason no, definitely nothing wrong with a suggestion, but I'd love to see how that suggestion would manifest itself in this particular case.

@JamieMason you "throwing things in for consideration" seems like a slap in the face of all the voluntary hard effort that Rebecca & other well-regarded experts have gone through to point out such egregious errors in a for-profit publication -- one that is likely to float to the top of Google search results for n00bs.

Nobody would have been upset if this was on a personal blog, but the very fact that it appears on a famous site - especially one targeted towards newbies in web development - should be a great cause for concern.

That Rebecca & others took time out of their extremely busy schedules to even write this deserves respect, and not snark or derision.

@nimbupani What wrong can there be in telling someone you're on their side, that they're right, that you agree with them, but "be nice"?

@JamieMason I ask again: can you please take the time to create a fork of this gist and adjust it to how you think it should have been stated? I appreciate the "be nice" advice, but I'd like to see how that would change this post. More to the point, I'd like to see people critical of this post -- which took a non-trivial amount of time to write -- spend a bit of time of their own to make it better, rather than just writing comments about how it should have been better.

@rmurphey and good for you, absolutely. But it seems silly, if I'm honest, to fork an article just for me to be able to justify promoting positive discourse. And you don't need me to; I've seen you speak -- you seem really nice! I just asked for some consideration for the guy's feelings -- is that so bad?! :)

If I were the author, as much as it might hurt to be publicly corrected for an article containing a number of inaccuracies, I would be extremely grateful for Rebecca, Ben and Rick taking their time to point out the errors in my post. I disagree with those that say the tone of this response could have been friendlier or nicer. It's needed to hit home an important point (one which I learned over time).

When you're writing about a topic, whether it's on your personal blog or for a popular publication, you have the potential to influence how many other people interpret that topic for the long term. Imagine if even a few hundred developers began following the initial (uncorrected) advice and used them in production-level systems. Those perceived optimizations could easily have negative consequences.

Every single developer writing material that's going to be published should have at least one other person with more experience than they have look over it. I've written for Netmag and many others, and I don't care if the publication doesn't offer technical reviews. Yes, they should, but as a teacher, it's your responsibility to make sure you aren't spreading misinformation (I say this having made this mistake in the past myself). It might mean your article is delayed for a week or two, but it is so much better getting a handful of people to look at your content beforehand.

At minimum they'll spot the errors Rebecca pointed out in her response and at most, they may even have further input to offer which could improve the article.

NetMag: I would recommend following the same model Smashing Magazine do. They don't always get it right but they have a panel of technical reviewers (including myself, Lea Verou and others) who read through articles and offer corrections and guidance at least a week before it goes live. That might help lower the number of articles with technically inaccurate information in them going out.

@JamieMason My reading of this is that you wish that I'd taken the time to make this nicer, but you aren't willing to take the same time. So, that's frustrating to me, but perhaps it means you understand better why I'm not inclined to take the (additional) time either.

Ok so I feel like I've offended you and that really wasn't my intention, I just wanted to say "I agree, but think about the guy's feelings" and leave it at that (I care about the community, not the gist).

@addyosmani made some valid points about the importance of articles being correct so I concede, and apologise if you were offended.

Could it simply be the fact that there are so many things incorrect in the article that this gist feels "not nice" to the author?

Without giving real examples (saying "tone" is not an example) of how this gist is inconsiderate to the original author, it's really hard to understand your perspective (and that of the others who say this article is in poor taste).

The simple fact is Netmag has produced an article full of misinformation, and they need to correct it or withdraw it, or others need to correct them to stop this madness of crappy JavaScript resources.


I'm still waiting to see the nice fork of this from those who feel that being succinct and to the point is not nice and is aggressive -- an example would help to see what it is y'all expected.

@Codery94 I appreciate your feedback, and I will ask you just like I asked above: please, please, seriously please, fork this gist and show me how it should have been written instead.

@codery94 thank you. Note though that one of your examples is from a comment, and not from the actual article.

@adambowman - so the fact that people that have jobs teaching JavaScript professionally to other professionals, consulting for large companies or otherwise shaping the future of JavaScript and the web took their personal time to do a tech review of an article that a FOR PROFIT company published for the sake of helping less-knowledgeable people not get confused - is beyond arrogant? Seriously?

I forked it to remove some of what I saw as the harsher language - https://gist.github.com/3091497

I don't have a problem with tearing NetMag a new one - they're a for-profit publication selling misinformation to the community; they should be better than this - but I agree that there's no need to be quite so mean to the author themselves.

@Codery94, "We all learnt somewhere and we all made mistakes at some point." is true, however I would hazard a guess that the majority of our mistakes did not get published as facts and good practices on a major website for all to learn our mistakes (note: not learn from them...).

As requested, I've posted a completely updated fork that removes the derisive language and maintains what I believe to be @rmurphey's desired effect: an informative look at some corrections in an effort to prevent the spread of inaccurate information. I think the advice that was given to potential authors about having community leaders proofread technical articles is a great suggestion. I would also encourage you to seek the same guidance from others when authoring posts like this to avoid unnecessarily painting yourselves in a negative light.

As a final note, thanks for putting your own time and effort into this message. The heart was good; it just needed some proofreading to put it in the right direction :)

Cheers,

https://gist.github.com/3093034

Edit: Also, I would be more than happy to spend some of my own time doing some "tone proofreading" on any efforts like this in the future. Shoot me an email or hit me up on Twitter!

I'm too lazy to confirm, but I'm pretty sure calling push 5 times in a row with a large string will be faster in IE6 and possibly 7. We changed EJS to do this 3 years ago and it was much faster. The reason, I believe, was that IE would do unnecessarily expensive string copying every time += was encountered.

And, if chrome / ff / opera / safari are "fast enough" we optimize for IE.

IE6 and IE7 are dead. Any optimizations for them are a complete waste of time. You might as well optimize for IE4 on Mac.

http://www.ie6countdown.com/
http://theie7countdown.com/

@justinbmeyer faster than ['string1', 'string2', 'string3', 'string4'].join('')? Also, why would you optimize for <15% browser market share?

@danheberden ... I was only talking about when you can't know what the final string should look like immediately. If you can do:

['string1', 'string2', 'string3', 'string4'].join('')

why not just do

'string1string2string3string4'

and get the best performance possible. :-)

@mattrogish + @danheberden

Also, why would you optimize for <15% browser market share?
IE6 and IE7 are dead.

This was 3 years ago. I no longer develop much for IE6 thankfully. Yet, a lot of big companies are still on IE6+7. And if you're a big company, 5% more potential revenue is still something to target.

Push at some point was faster. And it still seems slightly faster in FF http://jsperf.com/are-you-kidding-me/4. Labeling using push as "just plain wrong" isn't entirely accurate. It was right, and maybe if you are doing FF-only work it's still right. If you are having to support IE6 (and maybe 7), it still might be better to use push in some places simply because of how terrible += is.

This all gets into a gray area.

@justinbmeyer - I've seen a lot of times people use the array format to make long strings fit well on the screen. As for < IE8, you'd want to do arr[ arr.length ] = "moar strings" instead of arr.push( "moar strings" ) -- so in the case of boasting .push() as THE best way, that statement is indeed "just plain wrong". But I'd wager that 5% of your customers having a slower experience on their shitty browser -- one they have a shitty experience everywhere else with -- is tolerable.

However, while your points are valid in targeting browsers, this article wasn't called "performance tips for your IE6/7 targeted app".

@danheberden

They wouldn't have a terrible experience anywhere else. If something has good performance on IE, it looks amazing in Chrome.

"performance tips for your IE6/7 targeted app".

This post could still include these caveats. arr[ arr.length ] = "moar strings" should probably be part of the critique. But it depends on how you define the "best way" across multiple browsers. If pressed, I'd use Amdahl's law factoring in browser share, but favor the future. Consider two different ways:

way 1

  • IE: 5 seconds
  • Chrome: 0.4 s

way 2

  • IE: 1s
  • Chrome: 0.5 s

If IE has 10% market share vs. Chrome's 90%, you could calculate total slowness:

Way 1: (5 * .1 = .5) + (.4 * .9 = .36) = .86
Way 2: (1 * .1 = .1) + (.5 * .9 = .45) = .55

Using Amdahl's law, Way 2 would be faster for "humanity". But it might not matter, because you couldn't care less about the savages using IE. For us, 3 years ago, this equation seemed to favor push (or arr[arr.length]). To prove += is better than .push (or even arr[arr.length]), it might be better to include browser-share info. Maybe as an improvement to jsPerf.
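The calculation above can be sketched directly; the function name and object shape here are illustrative, not from any library:

```javascript
// Market-share-weighted "total slowness": sum of (share * time)
// over each browser, using the numbers from the comment above.
function weightedTime(measurements) {
    var total = 0;
    for (var i = 0; i < measurements.length; i++) {
        total += measurements[i].share * measurements[i].time;
    }
    return total;
}

var way1 = weightedTime([{ share: 0.1, time: 5 }, { share: 0.9, time: 0.4 }]);
var way2 = weightedTime([{ share: 0.1, time: 1 }, { share: 0.9, time: 0.5 }]);
// way1 ≈ 0.86, way2 ≈ 0.55, so Way 2 wins on the weighted metric
// even though it is slower in Chrome.
```

The interesting property is exactly the one the comment points out: the winner flips depending on the share you assign to each browser.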

String concatenation is generally slow in most languages; if you know how the internals work, it's easy to see why.
Say:

```javascript
var a = "";
for (var i = 0; i < 10000; i++) {
    a += "Some random String…";
}
```

This requires that the string on the left side be expanded every time the string on the right side is appended by the concatenation operation; that is, the length of the right-hand string is evaluated, the left-hand string is expanded, and then the right-hand value is appended.

Instead, when joining an array of strings (basically a set of pointers to strings), the total length of the final string is evaluated first and all the strings in the array are joined in one operation or method (depending on the programming language), which is much faster.
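A minimal sketch of the join approach just described, mirroring the loop count from the concatenation example above:

```javascript
// Collect the pieces in an array, then build the final string once;
// the engine can size the result up front instead of reallocating
// the left-hand string on every append.
var parts = [];
for (var i = 0; i < 10000; i++) {
    parts[parts.length] = "Some random string"; // parts.push(...) works too
}
var joined = parts.join("");
```

Whether this actually beats += depends on the engine, which is the whole argument of this thread; on modern engines += is heavily optimized.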

Still, if you are working with large sections of strings due to manipulation of HTML or text content, why reinvent the wheel? There are enough libraries out there, like jQuery and various WYSIWYG editors, that have already taken care of most performance and cross-browser issues…

@cristobal - if you are writing something like http://embededjs.com or another template engine that does a lot of string concat, it's really important.

Sure @justinbmeyer, but the audience of the article was mainly web developers, not people developing template engines like http://embeddedjs.com/ as you say. I assume that you already know how stuff works in the background; I was just pointing out why it's slow.

Still, if you are writing a template engine, or doing e.g. Huffman encoding or general encoding of some stream in JavaScript (or Java or C++), you would avoid string concatenation, since it's a costly (generally slow) operation in most languages, not just JavaScript…

Yes, the last two articles are about Java, but string handling is mostly the same. The first and last articles also point out "Ropes" as an alternative for string building in JavaScript.
There is even an implementation for Node, https://github.com/josephg/jumprope, though it also states that you need to work with strings longer than 5000 chars to see any performance boost.

The poor performance of string concatenation in IE has to do with IE's JavaScript engine implementation, which is slow compared to WebKit or Firefox. Hopefully IE10 will improve things a bit…

If only we could do the same now with W3Schools; there is a lot of rubbish on there that can cause FUD in developers.

@tanepiper that does exist -- w3fools.com

@joeframbach - true, but W3Schools content still shows up high in search rankings, which is so sad

[Not a big fan of doing this publicly, but here we go. This is the email I sent @rmurphey last night:]

Thanks for your feedback on Joe Angus' JavaScript article we posted yesterday.

I'd like to take this opportunity and point out that, while the use of some techniques may be subjective, we of course always want to avoid factual errors and are very open to correcting things. As you may have noticed Joe has now updated the article:

http://www.netmagazine.com/tutorials/optimise-your-javascript

I would have been delighted had you got in touch with me (or directly with Joe) last night before you posted a public response on GitHub. We do respond to criticism (and we try to move very fast to get an update live), but I fear that if it's handled so publicly, it will put younger developers off sharing their tips with the community. I've actually seen this happen quite a few times lately, and it's a real shame. By the looks of some comments on Twitter and your Gist post, I'm not alone in this opinion.

We're also currently looking for experts to help us peer review articles before they're published (both in print and online). In fact, this was one of the first things I started work on when I took the helm at .net a couple of months ago. However, as you know, well-regarded experts who are very active in the community and often speak at conferences, like you, are unfortunately yet understandably often too busy to write tutorials themselves or help out with peer reviews.

We always approach people of your knowledge and standing in the community to work with us. Sadly, our previous experience in working with such experts has not always been the best. It's okay to say 'no' due to work pressures, but we also had a fair few occasions where somebody agreed to write an article for us and then did not deliver and went quiet on us (Rick, I think you know what I'm talking about :)). As you can imagine, that leaves us in the lurch, especially when it comes to print deadlines.

This means that we have to approach people lower down in the food chain who may not have the same knowledge yet (or they approach us, as Joe did). It also means that, while technical accuracy is very important to us, errors will creep in from time to time if we can't get people knowledgeable enough to review our articles. That .net has a 'reckless disregard for accuracy' is simply not true.

So, as a solution, I'd like to invite you to sit on our panel of peer reviewers. We've already signed up Stephanie Rieger, Christian Heilmann and Chris Mills. We're still working out the details but of course you'd at least get a credit on the article you have reviewed.

Anyway, let me know what you think. We'd love you to contribute to .net in whatever form. We're always learning and it's important to us to make your knowledge accessible to the wider community.

Cheers,

Oliver
Editor
.net

[OK, well, if we're going here, here's my reply.]

I appreciate you getting back to us so quickly, I appreciate that the
article has been updated, and I also appreciate that it would have
been more comfortable for you if this entire exchange had occurred
privately. I'm going to speak very frankly here: I feel it was very
important for it to occur publicly, as the web developer community
needs to know that content in your publication must be read with a
shaker full of salt. Handling this privately would have addressed this
single article, but would not have addressed the more systemic problem
of content being published and promoted without regard for its
accuracy.

I am well aware that there could be chilling effects associated with
calling out bad content, and I gave much thought to that fact before
publishing this. The fact is, I want people to second-guess
themselves before publishing content to such a large audience; if they
are unsure, then they should gain experience in lower-stakes
environments, and solicit feedback from people who know the subject
well. Being inexperienced is not a license to make stuff up.

As I've said repeatedly at this point, had Joe published this on his
personal blog, then a private conversation would have been the most
likely -- and most appropriate -- response. However, when it was
published on a well known site with a large audience, and promoted to
30,000 followers, the need to address this publicly was much more
clear. As I said here
https://gist.github.com/3086328#gistcomment-368792, these sorts of
egregiously inaccurate articles have a real impact on folks like us
who already spend a significant amount of time -- without compensation
-- assisting newcomers to web development.

If you are having difficulty identifying qualified authors and
reviewers, I would suggest the answer is one of economics: pay
attractive rates for good content and good review, and you will get
good content and good review. A subject-matter expert could reasonably
expect $500 to $1000 in exchange for the time required to write an
article for a for-profit publication; a reviewer could reasonably
expect $100-$200 for the time required to review it. Since few
publications are willing to pay these rates, few subject matter
experts are willing to provide these services, opting instead to
publish in a place where they can derive other benefits from their
efforts (and where they have control over the quality of the
surrounding content).

I do not know what your compensation structure is, but publications
that wish to avoid incidents such as the one that unfolded yesterday
do well to appreciate these economic considerations. It may well be
that the only viable business model requires that content be from
sub-par sources, because quality content is simply too expensive
relative to the money that can be extracted from it, and few subject
matter experts are willing to provide their time at a discount to a
for-profit entity. If a publication continues to distribute content
even when they can't reasonably ensure that the content is good, then
that publication should expect that the community will continue to
point out that content's flaws, and I don't see any reason to expect
the community to do so privately when the content in question is
public.

[At this point Oliver wrote back, saying that they didn't have trouble identifying quality authors, but they did have issues with follow-through by a few prominent authors. This was my reply.]

You may not have difficulty identifying quality authors, but you
clearly have issues with them following through. As someone who worked
on the night copy desk at a newspaper for five years, I take deadlines
very seriously, and it doesn't sit well with me when people miss them.
I'm sorry that has happened to you. However, I can also understand how
it might happen. While I can't speak to any individual case,
considering the billable rate of these folks, spending hours writing
quality content for less than $500 -- or hours reviewing an article
for less than $100 -- may not be at the top of their to-do list. That
may be the going rate, but it's not a rate that's going to incentivize
people who can easily bill upwards of $2,000 a day.

What I'm struggling with here is that you seem to expect community
members to work at discounted rates in order to ensure you have
quality content, and that when those community members aren't willing
to do that, your answer is to publish anyway. "Optimise your
JavaScript" never should have seen the light of day in its original
state. Just like being inexperienced isn't an excuse for making things
up, having difficulty getting quality content isn't a justification
for publishing things that are just plain wrong.

@oliverlindberg It's unclear to me why the answer to making sure that the content in a for-profit publication backed by a publicly traded company is properly vetted and timely is something that would fall to an ad-hoc, outside panel of experts who are essentially receiving a token honorarium at irregular intervals, rather than being a staff position.

Since when does computer science involve the sparing of feelings over document accuracy? These folks would have no problem writing and publishing documents that debunked, refuted and rebuked information that was incorrect, inaccurate and/or provably wrong.

I'm struggling to see anything in Rebecca's post that could be construed as hurtful or personally offensive. People are rightly pointing out that, as a profit-making organisation, .net mag have a responsibility to ensure the accuracy of their articles. Following that train of thought, presumably the author of the original article also got paid and shares that responsibility to some extent? If an author is happy to put out an article without taking the personal responsibility to check facts, and furthermore is happy to take a pay cheque for doing so, then as far as I'm concerned they're putting themselves in the firing line. Frankly, if someone's feelings are that easily hurt, they might want to avoid publishing anything else on the internet, because rightly or wrongly, the internet can be cruel and it will hurt your feelings.
