function removeDuplicates(arr) {
  var clean = []
  var cleanLen = 0
  var arrLen = arr.length
  for (var i = 0; i < arrLen; i++) {
    var el = arr[i]
    var duplicate = false
    // linear scan of the deduplicated output collected so far
    for (var j = 0; j < cleanLen; j++) {
      if (el !== clean[j]) continue
      duplicate = true
      break
    }
    if (duplicate) continue
    clean[cleanLen++] = el
  }
  return clean
}
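For example:

removeDuplicates([1, 2, 2, 3, 1]) // [1, 2, 3]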
bendc commented Dec 9, 2014
An even faster way would be to add all items to a Set and then pull them all out again.
Tried out @mathiasbynens' suggestion in a dumb and dirty JSPerf: http://jsperf.com/dumb-remove-duplicates-from-array
Set can be used to improve the performance of a uniquing function by loading it up with values as you go: use set.add(v) instead of new Set(array) for a wider range of support, and then check in your loop whether the value already exists with set.has(v) instead of an indexOf-style linear search. There is a cost to creating and populating the set, so I usually don't kick in the Set optimization until an array is large enough to outweigh that cost, 200 elements in my case, but mileage will vary with your implementation.
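A minimal sketch of that idea (the function names and the exact threshold here are illustrative, not the commenter's actual implementation):

function removeDuplicatesWithSet(arr) {
  var clean = []
  var seen = new Set()
  for (var i = 0; i < arr.length; i++) {
    var el = arr[i]
    if (seen.has(el)) continue // set.has(v) replaces the indexOf-style linear scan
    seen.add(el)
    clean.push(el)
  }
  return clean
}

// only pay the cost of building the Set when the array is large enough for it to win
function removeDuplicatesFast(arr) {
  return arr.length >= 200 ? removeDuplicatesWithSet(arr) : removeDuplicates(arr)
}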
Set is great for uniquing objects but not so great at primitives. Since uniquing numbers is a common case, I special-case numbers using a hash map for better performance.
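Roughly, assuming the input is all numbers (this helper is hypothetical, shown only to illustrate the special case):

function removeDuplicateNumbers(arr) {
  var clean = []
  var seen = Object.create(null) // plain object used as a hash map
  for (var i = 0; i < arr.length; i++) {
    var n = arr[i]
    // object keys are coerced to strings, which is safe when every value is a number
    if (seen[n]) continue
    seen[n] = true
    clean.push(n)
  }
  return clean
}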
Quick follow-up: the following usage of Set seems to be faster than my method for arrays larger than ~130 elements.
var set = new Set(arr)
var clean = []
set.forEach(function(el) { clean.push(el) })
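For comparison, since the Set constructor accepts any iterable and Array.from works on sets, the same result can be had in one line wherever Array.from is available:

var clean = Array.from(new Set(arr))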
Thank you