Remove duplicate values from a JS array [duplicate]

Date: 2022-07-24 07:36:26

This question already has an answer here:

I have a very simple JavaScript array that may or may not contain duplicates.

var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];

I need to remove the duplicates and put the unique values in a new array.

I could post all the code I've tried, but I think it's pointless because none of it works. I accept jQuery solutions too.

Similar question:

54 solutions

#1


359  

Quick and dirty using jQuery:

var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
var uniqueNames = [];
$.each(names, function(i, el){
    if($.inArray(el, uniqueNames) === -1) uniqueNames.push(el);
});

#2


2139  

"Smart" but naïve way

uniqueArray = a.filter(function(item, pos) {
    return a.indexOf(item) == pos;
})

Basically, we iterate over the array and, for each element, check if the first position of this element in the array is equal to the current position. Obviously, these two positions are different for duplicate elements.

Using the 3rd ("this array") parameter of the filter callback we can avoid a closure of the array variable:

uniqueArray = a.filter(function(item, pos, self) {
    return self.indexOf(item) == pos;
})

Although concise, this algorithm is not particularly efficient for large arrays (quadratic time).

Hashtables to the rescue

function uniq(a) {
    var seen = {};
    return a.filter(function(item) {
        return seen.hasOwnProperty(item) ? false : (seen[item] = true);
    });
}

This is how it's usually done. The idea is to place each element in a hashtable and then check for its presence instantly. This gives us linear time, but has at least two drawbacks:

  • since hash keys can only be strings in Javascript, this code doesn't distinguish numbers and "numeric strings". That is, uniq([1,"1"]) will return just [1]
  • for the same reason, all objects will be considered equal: uniq([{foo:1},{foo:2}]) will return just [{foo:1}] (both pitfalls are demonstrated in the sketch below).
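
A quick demonstration of both pitfalls (a minimal sketch, assuming the uniq function above):

console.log(uniq([1, "1"]));            // [1] - the string "1" collides with the number 1
console.log(uniq([{foo:1}, {foo:2}]));  // [{foo:1}] - both objects become the key "[object Object]"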

That said, if your arrays contain only primitives and you don't care about types (e.g. it's always numbers), this solution is optimal.

The best of both worlds

A universal solution combines both approaches: it uses hash lookups for primitives and linear search for objects.

function uniq(a) {
    var prims = {"boolean":{}, "number":{}, "string":{}}, objs = [];

    return a.filter(function(item) {
        var type = typeof item;
        if(type in prims)
            return prims[type].hasOwnProperty(item) ? false : (prims[type][item] = true);
        else
            return objs.indexOf(item) >= 0 ? false : objs.push(item);
    });
}

sort | uniq

Another option is to sort the array first, and then remove each element equal to the preceding one:

function uniq(a) {
    return a.sort().filter(function(item, pos, ary) {
        return !pos || item != ary[pos - 1];
    })
}

Again, this doesn't work with objects (because all objects are equal for sort). Additionally, we silently change the original array as a side effect - not good! However, if your input is already sorted, this is the way to go (just remove sort from the above).

Unique by...

Sometimes it's desired to uniquify a list based on some criteria other than just equality, for example, to filter out objects that are different, but share some property. This can be done elegantly by passing a callback. This "key" callback is applied to each element, and elements with equal "keys" are removed. Since key is expected to return a primitive, hash table will work fine here:

function uniqBy(a, key) {
    var seen = {};
    return a.filter(function(item) {
        var k = key(item);
        return seen.hasOwnProperty(k) ? false : (seen[k] = true);
    })
}

A particularly useful key() is JSON.stringify which will remove objects that are physically different, but "look" the same:

a = [[1,2,3], [4,5,6], [1,2,3]]
b = uniqBy(a, JSON.stringify)
console.log(b) // [[1,2,3], [4,5,6]]

If the key is not primitive, you have to resort to the linear search:

function uniqBy(a, key) {
    var index = [];
    return a.filter(function (item) {
        var k = key(item);
        return index.indexOf(k) >= 0 ? false : index.push(k);
    });
}

or use the Set object in ES6:

function uniqBy(a, key) {
    var seen = new Set();
    return a.filter(item => {
        var k = key(item);
        return seen.has(k) ? false : seen.add(k);
    });
}

(Some people prefer !seen.has(k) && seen.add(k) instead of seen.has(k) ? false : seen.add(k)).

Libraries

Both underscore and Lo-Dash provide uniq methods. Their algorithms are basically similar to the first snippet above and boil down to this:

var result = [];
a.forEach(function(item) {
     if(result.indexOf(item) < 0) {
         result.push(item);
     }
});

This is quadratic, but there are nice additional goodies, like wrapping native indexOf, ability to uniqify by a key (iteratee in their parlance), and optimizations for already sorted arrays.

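For example (assuming Lodash 4, where the key-based variant is _.uniqBy, and Underscore, where the iteratee is the third argument of _.uniq):

_.uniqBy([2.1, 1.2, 2.3], Math.floor);      // Lodash 4   => [2.1, 1.2]
_.uniq([2.1, 1.2, 2.3], false, Math.floor); // Underscore => [2.1, 1.2]
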
If you're using jQuery and can't stand anything without a dollar before it, it goes like this:

  $.uniqArray = function(a) {
        return $.grep(a, function(item, pos) {
            return $.inArray(item, a) === pos;
        });
  }

which is, again, a variation of the first snippet.

Performance

Function calls are expensive in Javascript, therefore the above solutions, as concise as they are, are not particularly efficient. For maximal performance, replace filter with a loop and get rid of other function calls:

function uniq_fast(a) {
    var seen = {};
    var out = [];
    var len = a.length;
    var j = 0;
    for(var i = 0; i < len; i++) {
         var item = a[i];
         if(seen[item] !== 1) {
               seen[item] = 1;
               out[j++] = item;
         }
    }
    return out;
}

This chunk of ugly code does the same as snippet #3 above, but an order of magnitude faster (as of 2017 it's only twice as fast - JS core folks are doing a great job!)

function uniq(a) {
    var seen = {};
    return a.filter(function(item) {
        return seen.hasOwnProperty(item) ? false : (seen[item] = true);
    });
}

function uniq_fast(a) {
    var seen = {};
    var out = [];
    var len = a.length;
    var j = 0;
    for(var i = 0; i < len; i++) {
         var item = a[i];
         if(seen[item] !== 1) {
               seen[item] = 1;
               out[j++] = item;
         }
    }
    return out;
}

/////

var r = [0,1,2,3,4,5,6,7,8,9],
    a = [],
    LEN = 1000,
    LOOPS = 1000;

while(LEN--)
    a = a.concat(r);

var d = new Date();
for(var i = 0; i < LOOPS; i++)
    uniq(a);
document.write('<br>uniq, ms/loop: ' + (new Date() - d)/LOOPS)

var d = new Date();
for(var i = 0; i < LOOPS; i++)
    uniq_fast(a);
document.write('<br>uniq_fast, ms/loop: ' + (new Date() - d)/LOOPS)

ES6

ES6 provides the Set object, which makes things a whole lot easier:

function uniq(a) {
   return Array.from(new Set(a));
}

or

let uniq = a => [...new Set(a)];

Note that, unlike in Python, ES6 sets are iterated in insertion order, so this code preserves the order of the original array.

However, if you need an array with unique elements, why not use sets right from the beginning?

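A minimal sketch of that approach: keep a Set as the working collection from the start, and only materialize an array when some API requires one.

const names = new Set();

names.add("Mike");
names.add("Nancy");
names.add("Nancy");       // silently ignored - a Set never holds duplicates

console.log(names.size);  // 2
console.log([...names]);  // ["Mike", "Nancy"] - convert only when an array is needed
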
Generators

A "lazy", generator-based version of uniq can be built on the same basis:

  • take the next value from the argument
  • if it's been seen already, skip it
  • otherwise, yield it and add it to the set of already seen values

function* uniqIter(a) {
    let seen = new Set();

    for (let x of a) {
        if (!seen.has(x)) {
            seen.add(x);
            yield x;
        }
    }
}

// example:

function* randomsBelow(limit) {
    while (1)
        yield Math.floor(Math.random() * limit);
}

// note that randomsBelow is endless

count = 20;
limit = 30;

for (let r of uniqIter(randomsBelow(limit))) {
    console.log(r);
    if (--count === 0)
        break
}

// exercise for the reader: what happens if we set `limit` less than `count` and why

#3


257  

Got tired of seeing all bad examples with for-loops or jQuery. Javascript has the perfect tools for this nowadays: sort, map and reduce.

Uniq reduce while keeping existing order

var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];

var uniq = names.reduce(function(a,b){
    if (a.indexOf(b) < 0 ) a.push(b);
    return a;
  },[]);

console.log(uniq, names) // [ 'Mike', 'Matt', 'Nancy', 'Adam', 'Jenny', 'Carl' ]

// one liner
return names.reduce(function(a,b){if(a.indexOf(b)<0)a.push(b);return a;},[]);

Faster uniq with sorting

There are probably faster ways but this one is pretty decent.

var uniq = names.slice() // slice makes copy of array before sorting it
  .sort(function(a,b){
    return a > b ? 1 : a < b ? -1 : 0; // a comparator must return a number; a bare boolean is not spec-compliant
  })
  .reduce(function(a,b){
    if (a.slice(-1)[0] !== b) a.push(b); // slice(-1)[0] means last item in array without removing it (like .pop())
    return a;
  },[]); // this empty array becomes the starting value for a

// one liner
return names.slice().sort(function(a,b){return a > b ? 1 : a < b ? -1 : 0}).reduce(function(a,b){if (a.slice(-1)[0] !== b) a.push(b);return a;},[]);

Update 2015: ES6 version:

In ES6 you have Sets and Spread which makes it very easy and performant to remove all duplicates:

var uniq = [ ...new Set(names) ]; // [ 'Mike', 'Matt', 'Nancy', 'Adam', 'Jenny', 'Carl' ]

Sort based on occurrence:

Someone asked about ordering the results based on how many unique names there are:

var names = ['Mike', 'Matt', 'Nancy', 'Adam', 'Jenny', 'Nancy', 'Carl']

var uniq = names
  .map((name) => {
    return {count: 1, name: name}
  })
  .reduce((a, b) => {
    a[b.name] = (a[b.name] || 0) + b.count
    return a
  }, {})

var sorted = Object.keys(uniq).sort((a, b) => uniq[b] - uniq[a]) // numeric comparator: sorts names by count, descending

console.log(sorted)

#4


70  

Vanilla JS: Remove duplicates using an Object like a Set

You can always try putting it into an object, and then iterating through its keys:

function remove_duplicates(arr) {
    var obj = {};
    var ret_arr = [];
    for (var i = 0; i < arr.length; i++) {
        obj[arr[i]] = true;
    }
    for (var key in obj) {
        ret_arr.push(key);
    }
    return ret_arr;
}

Vanilla JS: Remove duplicates by tracking already seen values (order-safe)

Or, for an order-safe version, use an object to store all previously seen values, and check values against it before adding to an array.

function remove_duplicates_safe(arr) {
    var seen = {};
    var ret_arr = [];
    for (var i = 0; i < arr.length; i++) {
        if (!(arr[i] in seen)) {
            ret_arr.push(arr[i]);
            seen[arr[i]] = true;
        }
    }
    return ret_arr;
}

ECMAScript 6: Use the new Set data structure (order-safe)

ECMAScript 6 adds the new Set Data-Structure, which lets you store values of any type. Set.values returns elements in insertion order.

ECMAScript 6添加了新的数据结构,允许您存储任何类型的值。值按插入顺序返回元素。

function remove_duplicates_es6(arr) {
    let s = new Set(arr);
    let it = s.values();
    return Array.from(it);
}

Example usage:

a = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];

b = remove_duplicates(a);
// b:
// ["Adam", "Carl", "Jenny", "Matt", "Mike", "Nancy"]

c = remove_duplicates_safe(a);
// c:
// ["Mike", "Matt", "Nancy", "Adam", "Jenny", "Carl"]

d = remove_duplicates_es6(a);
// d:
// ["Mike", "Matt", "Nancy", "Adam", "Jenny", "Carl"]

#5


67  

Use Underscore.js

It's a library with a host of functions for manipulating arrays.

It's the tie to go along with jQuery's tux, and Backbone.js's suspenders.

_.uniq

_.uniq(array, [isSorted], [iterator]) Alias: unique
Produces a duplicate-free version of the array, using === to test object equality. If you know in advance that the array is sorted, passing true for isSorted will run a much faster algorithm. If you want to compute unique items based on a transformation, pass an iterator function.

Example

var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];

alert(_.uniq(names, false));
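
The iterator argument quoted above can be handy too; for instance, to dedupe case-insensitively (a sketch using the same _.uniq signature):

_.uniq(["Mike", "MIKE", "Nancy"], false, function(name) {
    return name.toLowerCase();
}); // => ["Mike", "Nancy"] - first occurrences win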

Note: Lo-Dash (an underscore competitor) also offers a comparable .uniq implementation.

#6


60  

A single line version using array filter and indexOf functions:

arr = arr.filter (function (value, index, array) { 
    return array.indexOf (value) == index;
});

#7


46  

You can simply do it in JavaScript, with the help of the second - index - parameter of the filter method:

var a = [2,3,4,5,5,4];
a.filter(function(value, index){ return a.indexOf(value) == index });

or in shorthand:

a.filter((v,i) => a.indexOf(v) == i)

#8


28  

The most concise way to remove duplicates from an array using native JavaScript functions is to use a sequence like below:

vals.sort().reduce(function(a, b){ if (b != a[0]) a.unshift(b); return a }, []) // note: unshift builds the result in reverse (descending) order

There's no need for slice or indexOf within the reduce function, like I've seen in other examples! It makes sense to use it along with a filter function, though:

vals.filter(function(v, i, a){ return i == a.indexOf(v) })

Yet another ES6 (2015) way of doing this that already works in a few browsers is:

Array.from(new Set(vals))

or even using the spread operator:

[...new Set(vals)]

cheers!

#9


24  

One line:

let names = ['Mike','Matt','Nancy','Adam','Jenny','Nancy','Carl', 'Nancy'];
let dup = [...new Set(names)];
console.log(dup);

#10


19  

Go for this one:

var uniqueArray = duplicateArray.filter(function(elem, pos) {
    return duplicateArray.indexOf(elem) == pos;
}); 

Now uniqueArray contains no duplicates.

#11


17  

The simplest one I've run into so far, in ES6:

 var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl", "Mike", "Nancy"]

 var noDupe = Array.from(new Set(names))

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set

#12


16  

I had done a detailed comparison of duplicate removal at some other question, but having noticed that this is the real place, I just wanted to share it here as well.

I believe this is the best way to do this

var myArray = [100, 200, 100, 200, 100, 100, 200, 200, 200, 200],
    reduced = Object.keys(myArray.reduce((p,c) => (p[c] = true,p),{}));
console.log(reduced); // ["100", "200"] - note that Object.keys always yields strings

OK... even though this one is O(n) and the others are O(n²), I was curious to see a benchmark comparison between this reduce / lookup table and the filter/indexOf combo (I chose Jeetendra's very nice implementation https://*.com/a/37441144/4543207). I prepared a 100K item array filled with random positive integers in the range 0-9999, and it removes the duplicates. I repeated the test 10 times, and the average of the results shows that they are no match in performance.

  • In firefox v47 reduce & lut : 14.85ms vs filter & indexOf : 2836ms
  • In chrome v51 reduce & lut : 23.90ms vs filter & indexOf : 1066ms

Well, OK, so far so good. But let's do it properly this time, in the ES6 style. It looks so cool..! But as of now, how it will perform against the powerful lut solution is a mystery to me. Let's first see the code and then benchmark it.

var myArray = [100, 200, 100, 200, 100, 100, 200, 200, 200, 200],
    reduced = [...myArray.reduce((p,c) => p.set(c,true),new Map()).keys()];
console.log(reduced); // [100, 200] - Map keys keep their original types

Wow, that was short..! But how about the performance..? It's beautiful... With the heavy weight of filter/indexOf lifted off our shoulders, now I can test an array of 1M random items of positive integers in the range 0..99999 to get an average from 10 consecutive tests. I can say this time it's a real match. See the result for yourself :)

var ranar = [],
     red1 = a => Object.keys(a.reduce((p,c) => (p[c] = true,p),{})),
     red2 = a => reduced = [...a.reduce((p,c) => p.set(c,true),new Map()).keys()],
     avg1 = [],
     avg2 = [],
       ts = 0,
       te = 0,
     res1 = [],
     res2 = [],
     count= 10;
for (var i = 0; i<count; i++){
  ranar = (new Array(1000000).fill(true)).map(e => Math.floor(Math.random()*100000));
  ts = performance.now();
  res1 = red1(ranar);
  te = performance.now();
  avg1.push(te-ts);
  ts = performance.now();
  res2 = red2(ranar);
  te = performance.now();
  avg2.push(te-ts);
}

avg1 = avg1.reduce((p,c) => p+c)/count;
avg2 = avg2.reduce((p,c) => p+c)/count;

console.log("reduce & lut took: " + avg1 + "msec");
console.log("map & spread took: " + avg2 + "msec");

Which one would you use..? Well, not so fast...! Don't be deceived. Map is playing away from home. Now look... in all of the above cases, we fill an array of size n with numbers in a range < n. I mean, we have an array of size 100 and we fill it with random numbers 0..9, so there are definite duplicates and "almost" definitely each number has a duplicate. How about if we fill an array of size 100 with random numbers 0..9999? Let's now see Map playing at home. This time an array of 100K items, but the random number range is 0..100M. We will do 100 consecutive tests to average the results. OK, let's see the bets..! <- no typo

var ranar = [],
     red1 = a => Object.keys(a.reduce((p,c) => (p[c] = true,p),{})),
     red2 = a => reduced = [...a.reduce((p,c) => p.set(c,true),new Map()).keys()],
     avg1 = [],
     avg2 = [],
       ts = 0,
       te = 0,
     res1 = [],
     res2 = [],
     count= 100;
for (var i = 0; i<count; i++){
  ranar = (new Array(100000).fill(true)).map(e => Math.floor(Math.random()*100000000));
  ts = performance.now();
  res1 = red1(ranar);
  te = performance.now();
  avg1.push(te-ts);
  ts = performance.now();
  res2 = red2(ranar);
  te = performance.now();
  avg2.push(te-ts);
}

avg1 = avg1.reduce((p,c) => p+c)/count;
avg2 = avg2.reduce((p,c) => p+c)/count;

console.log("reduce & lut took: " + avg1 + "msec");
console.log("map & spread took: " + avg2 + "msec");

Now this is the spectacular comeback of Map()..! Maybe now you can make a better decision for when you want to remove the dupes.

Well, OK, we are all happy now. But the lead role always comes last, with some applause. I am sure some of you wonder what the Set object would do. Now that we are open to ES6 and we know Map is the winner of the previous games, let us compare Map with Set as a final. A typical Real Madrid vs Barcelona game this time... or is it? Let's see who will win El Clásico :)

var ranar = [],
     red1 = a => reduced = [...a.reduce((p,c) => p.set(c,true),new Map()).keys()],
     red2 = a => Array.from(new Set(a)),
     avg1 = [],
     avg2 = [],
       ts = 0,
       te = 0,
     res1 = [],
     res2 = [],
     count= 100;
for (var i = 0; i<count; i++){
  ranar = (new Array(100000).fill(true)).map(e => Math.floor(Math.random()*10000000));
  ts = performance.now();
  res1 = red1(ranar);
  te = performance.now();
  avg1.push(te-ts);
  ts = performance.now();
  res2 = red2(ranar);
  te = performance.now();
  avg2.push(te-ts);
}

avg1 = avg1.reduce((p,c) => p+c)/count;
avg2 = avg2.reduce((p,c) => p+c)/count;

console.log("map & spread took: " + avg1 + "msec");
console.log("set & A.from took: " + avg2 + "msec");

Wow.. man..! Well, unexpectedly it didn't turn out to be an El Clásico at all. More like Barcelona FC against CA Osasuna :))

#13


15  

Use Array.filter() like this:

var actualArr = ['Apple', 'Apple', 'Banana', 'Mango', 'Strawberry', 'Banana'];

console.log('Actual Array: ' + actualArr);

var filteredArr = actualArr.filter(function(item, index) {
  // return the comparison itself: returning `item` would silently drop
  // falsy values (0, '', false) even on their first occurrence
  return actualArr.indexOf(item) == index;
});

console.log('Filtered Array: ' + filteredArr);

This can be made shorter in ES6 to:

actualArr.filter((item,index,self) => self.indexOf(item)==index);

Here is a nice explanation of Array.filter().

#14


14  

The following is more than 80% faster than the jQuery method listed (see tests below). It is an answer from a similar question a few years ago. If I come across the person who originally proposed it I will post credit. Pure JS.

function dedupeKeys(array) { // hypothetical wrapper name: the original snippet assumed an enclosing function for its bare `return`
  var temp = {};
  for (var i = 0; i < array.length; i++)
    temp[array[i]] = true;
  var r = [];
  for (var k in temp)
    r.push(k);
  return r;
}

My test case comparison: http://jsperf.com/remove-duplicate-array-tests

#15


13  

Here is a simple answer to the question.

var names = ["Alex","Tony","James","Suzane", "Marie", "Laurence", "Alex", "Suzane", "Marie", "Marie", "James", "Tony", "Alex"];
var uniqueNames = [];

for (var i in names) {
    if (uniqueNames.indexOf(names[i]) === -1) {
        uniqueNames.push(names[i]);
    }
}

#16


12  

In ECMAScript 6 (aka ECMAScript 2015), Set can be used to filter out duplicates. Then it can be converted back to an array using the spread operator.

var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"],
    unique = [...new Set(names)];

#17


10  

A simple but effective technique is to use the filter method in combination with the filter function(value, index){ return this.indexOf(value) == index }.

Code example:

var data = [2,3,4,5,5,4];
var filter = function(value, index){ return this.indexOf(value) == index };
var filteredData = data.filter(filter, data );

document.body.innerHTML = '<pre>' + JSON.stringify(filteredData, null, '\t') +  '</pre>';

See also this Fiddle.

#18


9  

The top answers have complexity of O(n²), but this can be done with just O(n) by using an object as a hash:

function getDistinctArray(arr) {
    var dups = {};
    return arr.filter(function(el) {
        var hash = el.valueOf();
        var isDup = dups[hash];
        dups[hash] = true;
        return !isDup;
    });
}

This will work for strings, numbers, and dates. If your array contains complex objects (ie, they have to be compared with ===), the above solution won't work. You can get an O(n) implementation for objects by setting a flag on the object itself:

function getDistinctObjArray(arr) {
    var distinctArr = arr.filter(function(el) {
        var isDup = el.inArray;
        el.inArray = true;
        return !isDup;
    });
    distinctArr.forEach(function(el) {
        delete el.inArray;
    });
    return distinctArr;
}

#19


9  

Here is a simple method without any special libraries or special functions:

var name_list = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"]; // `var` added to avoid implicit globals
var get_uniq = name_list.filter(function(val,ind) { return name_list.indexOf(val) == ind; });

console.log("Original name list:"+name_list.length, name_list)
console.log("\n Unique name list:"+get_uniq.length, get_uniq)

#20


8  

Apart from being a simpler, more terse solution than the current answers (minus the future-looking ES6 ones), I perf tested this and it was much faster as well:

var uniqueArray = dupeArray.filter(function(item, i, self){
  return self.lastIndexOf(item) == i;
});

One caveat: Array.lastIndexOf() was added in IE9, so if you need to go lower than that, you'll need to look elsewhere. Note also that keeping the item whose position matches lastIndexOf retains the last occurrence of each value, so the output order can differ from the first-occurrence approaches above.

#21


8  

So the options are:

let a = [11,22,11,22];
let b = []


b = [ ...new Set(a) ];     
// b = [11, 22]

b = Array.from( new Set(a))   
// b = [11, 22]

b = a.filter((val,i)=>{
  return a.indexOf(val)==i
})                        
// b = [11, 22]

#22


7  

Solution 1

Array.prototype.unique = function() {
    var a = [];
    for (var i = 0; i < this.length; i++) { // `var` added: the original leaked `i` as a global
        var current = this[i];
        if (a.indexOf(current) < 0) a.push(current);
    }
    return a;
}

Solution 2 (using Set)

Array.prototype.unique = function() {
    return Array.from(new Set(this));
}

Test

var x=[1,2,3,3,2,1];
x.unique() //[1,2,3]

Performance

When I tested both implementation (with and without Set) for performance in chrome, I found that the one with Set is much much faster!

Array.prototype.unique1 = function() {
    var a = [];
    for (var i = 0; i < this.length; i++) { // `var` added here as well
        var current = this[i];
        if (a.indexOf(current) < 0) a.push(current);
    }
    return a;
}


Array.prototype.unique2 = function() {
    return Array.from(new Set(this));
}

var x=[];
for(var i=0;i<10000;i++){
    x.push("x"+i); x.push("x"+(i+1));
}

console.time("unique1");
console.log(x.unique1());
console.timeEnd("unique1");



console.time("unique2");
console.log(x.unique2());
console.timeEnd("unique2");

#23


6  

Generic Functional Approach

Here is a generic and strictly functional approach with ES2015:

// small, reusable auxiliary functions

const apply = f => a => f(a);

const flip = f => b => a => f(a) (b);

const uncurry = f => (a, b) => f(a) (b);

const push = x => xs => (xs.push(x), xs);

const foldl = f => acc => xs => xs.reduce(uncurry(f), acc);

const some = f => xs => xs.some(apply(f));


// the actual de-duplicate function

const uniqueBy = f => foldl(
   acc => x => some(f(x)) (acc)
    ? acc
    : push(x) (acc)
 ) ([]);


// comparators

const eq = y => x => x === y;

// string equality case insensitive :D
const seqCI = y => x => x.toLowerCase() === y.toLowerCase();


// mock data

const xs = [1,2,3,1,2,3,4];

const ys = ["a", "b", "c", "A", "B", "C", "D"];


console.log( uniqueBy(eq) (xs) );

console.log( uniqueBy(seqCI) (ys) );

We can easily derive unique from uniqueBy, or use the faster implementation utilizing Sets:

const unique = uniqueBy(eq);

// const unique = xs => Array.from(new Set(xs));

Benefits of this approach:

  • generic solution by using a separate comparator function
  • declarative and succinct implementation
  • reuse of other small, generic functions

Performance Considerations

uniqueBy isn't as fast as an imperative implementation with loops, but it is way more expressive due to its genericity.

If you identify uniqueBy as the cause of a concrete performance penalty in your app, replace it with optimized code. That is, write your code first in a functional, declarative way. Afterwards, if you encounter performance issues, optimize the code at the locations that actually cause the problem.
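
A minimal sketch of such an optimization (uniqueByFast is a hypothetical name; it keeps the same curried comparator contract and reuses the eq comparator defined earlier):

// hypothetical optimized variant of uniqueBy above
const uniqueByFast = f => xs => {
    const acc = [];
    for (const x of xs) {            // plain loops avoid the per-element closure allocations
        let dup = false;
        for (const y of acc) {
            if (f(x) (y)) { dup = true; break; }
        }
        if (!dup) acc.push(x);
    }
    return acc;
};

console.log( uniqueByFast(eq) ([1,2,3,1,2,3,4]) ); // [1,2,3,4]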

Memory Consumption and Garbage Collection

uniqueBy utilizes mutations (push(x) (acc)) hidden inside its body. It reuses the accumulator instead of throwing it away after each iteration. This reduces memory consumption and GC pressure. Since this side effect is wrapped inside the function, everything outside remains pure.

#24


4  

// Dedupes arr1 and arr2 separately, then collects the values common to both
// (i.e. the intersection of the two deduped arrays)
$(document).ready(function() {

    var arr1=["dog","dog","fish","cat","cat","fish","apple","orange"]

    var arr2=["cat","fish","mango","apple"]

    var uniquevalue=[];
    var seconduniquevalue=[];
    var finalarray=[];

    $.each(arr1,function(key,value){

       if($.inArray (value,uniquevalue) === -1)
       {
           uniquevalue.push(value)

       }

    });

     $.each(arr2,function(key,value){

       if($.inArray (value,seconduniquevalue) === -1)
       {
           seconduniquevalue.push(value)

       }

    });

    $.each(uniquevalue,function(ikey,ivalue){

        $.each(seconduniquevalue,function(ukey,uvalue){

            if( ivalue == uvalue)

            {
                finalarray.push(ivalue);
            }   

        });

    });
    alert(finalarray);
});

#25


4  

// assumes `originalArray` is defined; collects its unique values into newArray
var newArray = [];
for (var i = 0; i < originalArray.length; i++) {  
    if (!newArray.includes(originalArray[i])) {
        newArray.push(originalArray[i]); 
    }
}

#26


3  

If by any chance you were using

D3.js

You could do

d3.set(["foo", "bar", "foo", "baz"]).values() ==> ["foo", "bar", "baz"]

https://github.com/mbostock/d3/wiki/Arrays#set_values

#27


3  

https://jsfiddle.net/2w0k5tz8/

function remove_duplicates(array_){
    var ret_array = new Array();
    for (var a = array_.length - 1; a >= 0; a--) {
        for (var b = array_.length - 1; b >= 0; b--) {
            if(array_[a] == array_[b] && a != b){
                delete array_[b];
            }
        };
        if(array_[a] != undefined)
            ret_array.push(array_[a]);
    };
    return ret_array;
}

console.log(remove_duplicates(Array(1,1,1,2,2,2,3,3,3)));

Loop through and remove duplicates, collecting the survivors into a clone array, because deleting from the original array leaves holes rather than reindexing it.

Loop backward for better performance (the loop won't need to keep re-checking the length of your array).

#28


3  

This is just another solution, but different from the rest: note that it keeps only the values that appear exactly once in the merged, sorted array; any value that occurs more than once is dropped entirely (see the example after the code).

function diffArray(arr1, arr2) {
  var newArr = arr1.concat(arr2);
  newArr.sort();
  var finalArr = [];
  for(var i = 0;i<newArr.length;i++) {
   if(!(newArr[i] === newArr[i+1] || newArr[i] === newArr[i-1])) {
     finalArr.push(newArr[i]);
   } 
  }
  return finalArr;
}
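
For example (illustrating that duplicated values are dropped entirely rather than deduplicated):

console.log(diffArray([1, 2, 3], [2, 3, 4])); // [1, 4] - 2 and 3 occur twice and are removed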

#29


3  

Here is code that is very simple to understand, and it works anywhere (even in PhotoshopScript). Check it!

var peoplenames = new Array("Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl");

peoplenames = unique(peoplenames);
alert(peoplenames);

function unique(array){
    var len = array.length;
    for(var i = 0; i < len; i++) for(var j = i + 1; j < len; j++) 
        if(array[j] == array[i]){
            array.splice(j,1);
            j--;
            len--;
        }
    return array;
}

//*result* peoplenames == ["Mike","Matt","Nancy","Adam","Jenny","Carl"]

#30


3  

A slight modification of thg435's excellent answer to use a custom comparator:

function contains(array, obj) {
    for (var i = 0; i < array.length; i++) {
        if (isEqual(array[i], obj)) return true;
    }
    return false;
}
//comparator
function isEqual(obj1, obj2) {
    if (obj1.name == obj2.name) return true;
    return false;
}
function removeDuplicates(ary) {
    var arr = [];
    return ary.filter(function(x) {
        return !contains(arr, x) && arr.push(x);
    });
}
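
For instance, with the name-based comparator above (a hypothetical sample):

var people = [{name: "john", age: 30}, {name: "jane"}, {name: "john", age: 45}]; // hypothetical sample data
console.log(removeDuplicates(people)); // keeps {name:"john", age:30} and {name:"jane"}; the second "john" is dropped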

#1


359  

Quick and dirty using jQuery:

使用jQuery快速而肮脏:

var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
var uniqueNames = [];
$.each(names, function(i, el){
    if($.inArray(el, uniqueNames) === -1) uniqueNames.push(el);
});

#2


2139  

"Smart" but naïve way

uniqueArray = a.filter(function(item, pos) {
    return a.indexOf(item) == pos;
})

Basically, we iterate over the array and, for each element, check if the first position of this element in the array is equal to the current position. Obviously, these two positions are different for duplicate elements.

基本上,我们遍历数组,对于每个元素,检查这个元素在数组中的第一个位置是否等于当前位置。显然,对于重复的元素,这两个位置是不同的。

Using the 3rd ("this array") parameter of the filter callback we can avoid a closure of the array variable:

使用filter回调的第3个(“这个数组”)参数,我们可以避免数组变量的关闭:

uniqueArray = a.filter(function(item, pos, self) {
    return self.indexOf(item) == pos;
})

Although concise, this algorithm is not particularly efficient for large arrays (quadratic time).

尽管该算法简洁,但对于大型数组(二次时间)并不是特别有效。

Hashtables to the rescue

function uniq(a) {
    var seen = {};
    return a.filter(function(item) {
        return seen.hasOwnProperty(item) ? false : (seen[item] = true);
    });
}

This is how it's usually done. The idea is to place each element in a hashtable and then check for its presence instantly. This gives us linear time, but has at least two drawbacks:

这就是通常的做法。其思想是将每个元素放在一个hashtable中,然后立即检查其存在性。这给了我们线性时间,但至少有两个缺点:

  • since hash keys can only be strings in Javascript, this code doesn't distinguish numbers and "numeric strings". That is, uniq([1,"1"]) will return just [1]
  • 由于散列键只能是Javascript中的字符串,因此该代码不区分数字和“数字字符串”。也就是说,uniq([1,"1"])只返回[1]
  • for the same reason, all objects will be considered equal: uniq([{foo:1},{foo:2}]) will return just [{foo:1}].
  • 出于同样的原因,所有对象都将被认为是相等的:uniq([{foo:1},{foo:2}])将只返回[{foo:1}]。

That said, if your arrays contain only primitives and you don't care about types (e.g. it's always numbers), this solution is optimal.

也就是说,如果数组只包含原语,而不关心类型(例如,它总是数字),那么这个解决方案是最优的。

The best from two worlds

A universal solution combines both approaches: it uses hash lookups for primitives and linear search for objects.

通用解决方案结合了这两种方法:它对原语使用散列查找,对对象使用线性搜索。

function uniq(a) {
    var prims = {"boolean":{}, "number":{}, "string":{}}, objs = [];

    return a.filter(function(item) {
        var type = typeof item;
        if(type in prims)
            return prims[type].hasOwnProperty(item) ? false : (prims[type][item] = true);
        else
            return objs.indexOf(item) >= 0 ? false : objs.push(item);
    });
}

sort | uniq

Another option is to sort the array first, and then remove each element equal to the preceding one:

另一种选择是先对数组进行排序,然后删除与前一个元素相等的每个元素:

function uniq(a) {
    return a.sort().filter(function(item, pos, ary) {
        return !pos || item != ary[pos - 1];
    })
}

Again, this doesn't work with objects (because all objects are equal for sort). Additionally, we silently change the original array as a side effect - not good! However, if your input is already sorted, this is the way to go (just remove sort from the above).

同样,这对对象不起作用(因为所有对象对于排序都是相等的)。此外,我们悄悄地改变原来的数组作为副作用-不好!但是,如果您的输入已经被排序,那么这就是解决问题的方法(从上面删除sort)。

Unique by...

Sometimes it's desired to uniquify a list based on some criteria other than just equality, for example, to filter out objects that are different, but share some property. This can be done elegantly by passing a callback. This "key" callback is applied to each element, and elements with equal "keys" are removed. Since key is expected to return a primitive, hash table will work fine here:

有时,我们希望根据一些标准而不仅仅是相等来对列表进行统一,例如,过滤出不同但共享某些属性的对象。这可以通过传递回调来优雅地完成。这个“key”回调应用于每个元素,并删除具有相同“key”的元素。由于key期望返回一个原语,所以散列表在这里可以正常工作:

function uniqBy(a, key) {
    var seen = {};
    return a.filter(function(item) {
        var k = key(item);
        return seen.hasOwnProperty(k) ? false : (seen[k] = true);
    })
}

A particularly useful key() is JSON.stringify which will remove objects that are physically different, but "look" the same:

一个特别有用的键()是JSON。stringify将删除物理上不同的对象,但“看”相同:

a = [[1,2,3], [4,5,6], [1,2,3]]
b = uniqBy(a, JSON.stringify)
console.log(b) // [[1,2,3], [4,5,6]]

If the key is not primitive, you have to resort to the linear search:

如果键不是原语,则必须采用线性搜索:

function uniqBy(a, key) {
    var index = [];
    return a.filter(function (item) {
        var k = key(item);
        return index.indexOf(k) >= 0 ? false : index.push(k);
    });
}

or use the Set object in ES6:

或使用ES6中的Set对象:

function uniqBy(a, key) {
    var seen = new Set();
    return a.filter(item => {
        var k = key(item);
        return seen.has(k) ? false : seen.add(k);
    });
}

(Some people prefer !seen.has(k) && seen.add(k) instead of seen.has(k) ? false : seen.add(k)).

(有些人喜欢seen.has(k) && seen.add(k),而不是seen.has(k) ?假:seen.add(k))。

Libraries

Both underscore and Lo-Dash provide uniq methods. Their algorithms are basically similar to the first snippet above and boil down to this:

下划线和Lo-Dash都提供了uniq方法。他们的算法基本上与上面的第一个片段相似,并归结为:

var result = [];
a.forEach(function(item) {
     if(result.indexOf(item) < 0) {
         result.push(item);
     }
});

This is quadratic, but there are nice additional goodies, like wrapping native indexOf, ability to uniqify by a key (iteratee in their parlance), and optimizations for already sorted arrays.

这是二次的,但是还有一些很好的优点,比如封装本机索引、通过键(用他们的话说就是iteratee)进行统一的能力,以及对已经排序的数组进行优化。

If you're using jQuery and can't stand anything without a dollar before it, it goes like this:

如果你用的是jQuery,没有一美元你无法忍受,它是这样的:

  $.uniqArray = function(a) {
        return $.grep(a, function(item, pos) {
            return $.inArray(item, a) === pos;
        });
  }

which is, again, a variation of the first snippet.

这又是第一个片段的变体。

Performance

Function calls are expensive in Javascript, therefore the above solutions, as concise as they are, are not particularly efficient. For maximal performance, replace filter with a loop and get rid of other function calls:

函数调用在Javascript中非常昂贵,因此上面的解决方案虽然简洁,但效率并不高。为了获得最大的性能,用循环替换过滤器,并摆脱其他函数调用:

function uniq_fast(a) {
    var seen = {};
    var out = [];
    var len = a.length;
    var j = 0;
    for(var i = 0; i < len; i++) {
         var item = a[i];
         if(seen[item] !== 1) {
               seen[item] = 1;
               out[j++] = item;
         }
    }
    return out;
}

This chunk of ugly code does the same as the snippet #3 above, but an order of magnitude faster (as of 2017 it's only twice as fast - JS core folks are doing a great job!)

这段丑陋的代码和上面的代码片段#3所做的一样,但是速度快了一个数量级(到2017年,速度只有原来的两倍——JS核心人员做得很好!)

function uniq(a) {
    var seen = {};
    return a.filter(function(item) {
        return seen.hasOwnProperty(item) ? false : (seen[item] = true);
    });
}

function uniq_fast(a) {
    var seen = {};
    var out = [];
    var len = a.length;
    var j = 0;
    for(var i = 0; i < len; i++) {
         var item = a[i];
         if(seen[item] !== 1) {
               seen[item] = 1;
               out[j++] = item;
         }
    }
    return out;
}

/////

var r = [0,1,2,3,4,5,6,7,8,9],
    a = [],
    LEN = 1000,
    LOOPS = 1000;

while(LEN--)
    a = a.concat(r);

var d = new Date();
for(var i = 0; i < LOOPS; i++)
    uniq(a);
document.write('<br>uniq, ms/loop: ' + (new Date() - d)/LOOPS)

var d = new Date();
for(var i = 0; i < LOOPS; i++)
    uniq_fast(a);
document.write('<br>uniq_fast, ms/loop: ' + (new Date() - d)/LOOPS)

ES6

ES6 provides the Set object, which makes things a whole lot easier:

ES6提供Set对象,使事情变得更简单:

function uniq(a) {
   return Array.from(new Set(a));
}

or

let uniq = a => [...new Set(a)];

Note that, unlike in python, ES6 sets are iterated in insertion order, so this code preserves the order of the original array.

注意,与python不同,ES6集是按插入顺序迭代的,因此此代码保留了原始数组的顺序。

However, if you need an array with unique elements, why not use sets right from the beginning?

但是,如果您需要一个具有独特元素的数组,为什么不从一开始就使用集合?

Generators

A "lazy", generator-based version of uniq can be built on the same basis:

uniq的“懒惰”、基于生成器的版本可以在同样的基础上构建:

  • take the next value from the argument
  • 从参数中取下一个值
  • if it's been seen already, skip it
  • 如果已经看到了,跳过它
  • otherwise, yield it and add it to the set of already seen values
  • 否则,将它赋值并将其添加到已经看到的值集合中。

function* uniqIter(a) {
    let seen = new Set();

    for (let x of a) {
        if (!seen.has(x)) {
            seen.add(x);
            yield x;
        }
    }
}

// example:

function* randomsBelow(limit) {
    while (1)
        yield Math.floor(Math.random() * limit);
}

// note that randomsBelow is endless

count = 20;
limit = 30;

for (let r of uniqIter(randomsBelow(limit))) {
    console.log(r);
    if (--count === 0)
        break
}

// exercise for the reader: what happens if we set `limit` less than `count` and why

#3


257  

Got tired of seeing all bad examples with for-loops or jQuery. Javascript has the perfect tools for this nowadays: sort, map and reduce.

厌倦了用for循环或jQuery看到所有糟糕的例子。Javascript现在有完美的工具:排序、映射和减少。

Uniq reduce while keeping existing order

var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];

var uniq = names.reduce(function(a,b){
    if (a.indexOf(b) < 0 ) a.push(b);
    return a;
  },[]);

console.log(uniq, names) // [ 'Mike', 'Matt', 'Nancy', 'Adam', 'Jenny', 'Carl' ]

// one liner
return names.reduce(function(a,b){if(a.indexOf(b)<0)a.push(b);return a;},[]);

Faster uniq with sorting

There are probably faster ways but this one is pretty decent.

可能有更快的方法,但是这个很不错。

var uniq = names.slice() // slice makes copy of array before sorting it
  .sort(function(a,b){
    return a > b;
  })
  .reduce(function(a,b){
    if (a.slice(-1)[0] !== b) a.push(b); // slice(-1)[0] means last item in array without removing it (like .pop())
    return a;
  },[]); // this empty array becomes the starting value for a

// one liner
return names.slice().sort(function(a,b){return a > b}).reduce(function(a,b){if (a.slice(-1)[0] !== b) a.push(b);return a;},[]);

Update 2015: ES6 version:

In ES6 you have Sets and Spread which makes it very easy and performant to remove all duplicates:

在ES6中,您有一套和扩展,这使得删除所有副本非常容易和有效:

var uniq = [ ...new Set(names) ]; // [ 'Mike', 'Matt', 'Nancy', 'Adam', 'Jenny', 'Carl' ]

Sort based on occurrence:

Someone asked about ordering the results based on how many unique names there are:

有人问,根据有多少个独特的名字排序结果:

var names = ['Mike', 'Matt', 'Nancy', 'Adam', 'Jenny', 'Nancy', 'Carl']

var uniq = names
  .map((name) => {
    return {count: 1, name: name}
  })
  .reduce((a, b) => {
    a[b.name] = (a[b.name] || 0) + b.count
    return a
  }, {})

var sorted = Object.keys(uniq).sort((a, b) => uniq[a] < uniq[b])

console.log(sorted)

#4


70  

Vanilla JS: Remove duplicates using an Object like a Set

Vanilla JS:使用一个对象来删除重复的内容,比如集合

You can always try putting it into an object, and then iterating through its keys:

你可以尝试把它放入一个对象中,然后遍历它的键:

function remove_duplicates(arr) {
    var obj = {};
    var ret_arr = [];
    for (var i = 0; i < arr.length; i++) {
        obj[arr[i]] = true;
    }
    for (var key in obj) {
        ret_arr.push(key);
    }
    return ret_arr;
}

Vanilla JS: Remove duplicates by tracking already seen values (order-safe)

Vanilla JS:通过跟踪已经看到的值(订单安全)来删除重复的数据

Or, for an order-safe version, use an object to store all previously seen values, and check values against it before before adding to an array.

或者,对于订单安全的版本,使用对象来存储以前看到的所有值,并在添加到数组之前对其进行值检查。

function remove_duplicates_safe(arr) {
    var seen = {};
    var ret_arr = [];
    for (var i = 0; i < arr.length; i++) {
        if (!(arr[i] in seen)) {
            ret_arr.push(arr[i]);
            seen[arr[i]] = true;
        }
    }
    return ret_arr;

}

ECMAScript 6: Use the new Set data structure (order-safe)

ECMAScript 6:使用新的Set数据结构(订单安全)

ECMAScript 6 adds the new Set Data-Structure, which lets you store values of any type. Set.values returns elements in insertion order.

ECMAScript 6添加了新的数据结构,允许您存储任何类型的值。值按插入顺序返回元素。

function remove_duplicates_es6(arr) {
    let s = new Set(arr);
    let it = s.values();
    return Array.from(it);
}

Example usage:

使用示例:

a = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];

b = remove_duplicates(a);
// b:
// ["Adam", "Carl", "Jenny", "Matt", "Mike", "Nancy"]

c = remove_duplicates_safe(a);
// c:
// ["Mike", "Matt", "Nancy", "Adam", "Jenny", "Carl"]

d = remove_duplicates_es6(a);
// d:
// ["Mike", "Matt", "Nancy", "Adam", "Jenny", "Carl"]

#5


67  

Use Underscore.js

It's a library with a host of functions for manipulating arrays.

它是一个有许多操作数组函数的库。

It's the tie to go along with jQuery's tux, and Backbone.js's suspenders.

它与jQuery的tux和主干紧密相连。js的背带。

_.uniq

_.uniq

_.uniq(array, [isSorted], [iterator]) Alias: unique
Produces a duplicate-free version of the array, using === to test object equality. If you know in advance that the array is sorted, passing true for isSorted will run a much faster algorithm. If you want to compute unique items based on a transformation, pass an iterator function.

_。uniq(数组,[is排序],[iterator])别名:unique产生了一个无复制版本的数组,使用===测试对象的相等性。如果您事先知道数组已排序,那么为isordered传递true将会运行更快的算法。如果您希望基于转换计算惟一项,请传递一个迭代器函数。

Example

例子

var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];

alert(_.uniq(names, false));

Note: Lo-Dash (an underscore competitor) also offers a comparable .uniq implementation.

注意:Lo-Dash(下划线竞争对手)也提供了类似的。uniq实现。

#6


60  

A single line version using array filter and indexOf functions:

使用数组过滤器和函数索引的单行版本:

arr = arr.filter (function (value, index, array) { 
    return array.indexOf (value) == index;
});

#7


46  

You can simply do it in JavaScript, with the help of the second - index - parameter of the filter method:

你可以简单地用JavaScript来做,借助过滤器方法的第二个索引参数:

var a = [2,3,4,5,5,4];
a.filter(function(value, index){ return a.indexOf(value) == index });

or in short hand

或简而言之手

a.filter((v,i) => a.indexOf(v) == i)

#8


28  

The most concise way to remove duplicates from an array using native javascript functions is to use a sequence like below:

使用本机javascript函数从数组中删除重复内容的最简洁方法是使用如下所示的序列:

vals.sort().reduce(function(a, b){ if (b != a[0]) a.unshift(b); return a }, [])

there's no need for slice nor indexOf within the reduce function, like i've seen in other examples! it makes sense to use it along with a filter function though:

在reduce函数中不需要slice或indexOf,就像我在其他例子中看到的那样!它与一个过滤器函数一起使用是有意义的:

vals.filter(function(v, i, a){ return i == a.indexOf(v) })

Yet another ES6(2015) way of doing this that already works on a few browsers is:

另一种ES6(2015)方法已经在一些浏览器上使用:

Array.from(new Set(vals))

or even using the spread operator:

甚至使用扩展运算符:

[...new Set(vals)]

cheers!

干杯!

#9


24  

One line:

一行:

let names = ['Mike','Matt','Nancy','Adam','Jenny','Nancy','Carl', 'Nancy'];
let dup = [...new Set(names)];
console.log(dup);

#10


19  

Go for this one:

去这一个:

var uniqueArray = duplicateArray.filter(function(elem, pos) {
    return duplicateArray.indexOf(elem) == pos;
}); 

Now uniqueArray contains no duplicates.

现在,uniqueArray表示“独一无二的地球不存在任何重复现象”。

#11


17  

Simplest One I've run into so far. In es6.

到目前为止我遇到的最简单的一个。在es6。

 var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl", "Mike", "Nancy"]

 var noDupe = Array.from(new Set(names))

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set

#12


16  

I had done a detailed comparison of dupes removal at some other question but having noticed that this is the real place i just wanted to share it here as well.

我已经做了一个详细的比较,在一些其他的问题,但注意到这是真正的地方,我只是想在这里分享它。

I believe this is the best way to do this

我相信这是最好的方法

var myArray = [100, 200, 100, 200, 100, 100, 200, 200, 200, 200],
    reduced = Object.keys(myArray.reduce((p,c) => (p[c] = true,p),{}));
console.log(reduced);

OK .. even though this one is O(n) and the others are O(n^2) i was curious to see benchmark comparison between this reduce / look up table and filter/indexOf combo (I choose Jeetendras very nice implementation https://*.com/a/37441144/4543207). I prepare a 100K item array filled with random positive integers in range 0-9999 and and it removes the duplicates. I repeat the test for 10 times and the average of the results show that they are no match in performance.

好吧. .尽管这是O(n)和其他人是O(n ^ 2)我很好奇看到基准对比这减少/查找表和过滤器/ indexOf组合(我选择Jeetendras很好的实现https://*.com/a/37441144/4543207)。我准备了一个100K的项目数组,其中包含0-9999范围内的随机正整数,它将删除重复的整数。我重复测试了10次,结果的平均值表明它们在性能上并不匹配。

  • In firefox v47 reduce & lut : 14.85ms vs filter & indexOf : 2836ms
  • 在firefox v47中,reduce & lut: 14.85ms vs filter & indexOf: 2836ms
  • In chrome v51 reduce & lut : 23.90ms vs filter & indexOf : 1066ms
  • 在chrome v51减少& lut: 23.90ms vs过滤器& indexOf: 1066ms。

Well ok so far so good. But let's do it properly this time in the ES6 style. It looks so cool..! But as of now how it will perform against the powerful lut solution is a mystery to me. Lets first see the code and then benchmark it.

到目前为止还不错。但是这次我们用ES6格式来做一下。它看起来太酷了…!但是现在,它将如何对抗强大的lut解决方案对我来说是个谜。让我们先看看代码,然后进行基准测试。

var myArray = [100, 200, 100, 200, 100, 100, 200, 200, 200, 200],
    reduced = [...myArray.reduce((p,c) => p.set(c,true),new Map()).keys()];
console.log(reduced);

Wow that was short..! But how about the performance..? It's beautiful... Since the heavy weight of the filter / indexOf lifted over our shoulders now i can test an array 1M random items of positive integers in range 0..99999 to get an average from 10 consecutive tests. I can say this time it's a real match. See the result for yourself :)

哇这是短. . !但是演出怎么样?很漂亮……由于过滤器/索引的重量很重,现在我可以在0范围内测试一个1米的正整数随机项目。99999,从连续10次测试中获得平均成绩。我可以说这一次是一场真正的比赛。你自己看看结果:)

var ranar = [],
     red1 = a => Object.keys(a.reduce((p,c) => (p[c] = true,p),{})),
     red2 = a => reduced = [...a.reduce((p,c) => p.set(c,true),new Map()).keys()],
     avg1 = [],
     avg2 = [],
       ts = 0,
       te = 0,
     res1 = [],
     res2 = [],
     count= 10;
for (var i = 0; i<count; i++){
  ranar = (new Array(1000000).fill(true)).map(e => Math.floor(Math.random()*100000));
  ts = performance.now();
  res1 = red1(ranar);
  te = performance.now();
  avg1.push(te-ts);
  ts = performance.now();
  res2 = red2(ranar);
  te = performance.now();
  avg2.push(te-ts);
}

avg1 = avg1.reduce((p,c) => p+c)/count;
avg2 = avg2.reduce((p,c) => p+c)/count;

console.log("reduce & lut took: " + avg1 + "msec");
console.log("map & spread took: " + avg2 + "msec");

Which one would you use..? Well not so fast...! Don't be deceived. Map is at displacement. Now look... in all of the above cases we fill an array of size n with numbers of range < n. I mean we have an array of size 100 and we fill with random numbers 0..9 so there are definite duplicates and "almost" definitely each number has a duplicate. How about if we fill the array in size 100 with random numbers 0..9999. Let's now see Map playing at home. This time an Array of 100K items but random number range is 0..100M. We will do 100 consecutive tests to average the results. OK let's see the bets..! <- no typo

你会用哪一个?没那么快…好!不要被欺骗。地图在位移。现在看来……在以上的所有情况中,我们将一个大小为n的数组填充为范围小于n的数。9所以有一定的重复,而且几乎肯定每个数字都有重复。如果我们用随机数0..9999来填充数组大小为100。现在让我们看看地图在家里播放。这一次,一个包含100K个项目的数组,但随机数范围是0..100M。我们将做100个连续的测试来平均结果。好吧,让我们来打赌。<——没有错误

var ranar = [],
     red1 = a => Object.keys(a.reduce((p,c) => (p[c] = true,p),{})),
     red2 = a => reduced = [...a.reduce((p,c) => p.set(c,true),new Map()).keys()],
     avg1 = [],
     avg2 = [],
       ts = 0,
       te = 0,
     res1 = [],
     res2 = [],
     count= 100;
for (var i = 0; i<count; i++){
  ranar = (new Array(100000).fill(true)).map(e => Math.floor(Math.random()*100000000));
  ts = performance.now();
  res1 = red1(ranar);
  te = performance.now();
  avg1.push(te-ts);
  ts = performance.now();
  res2 = red2(ranar);
  te = performance.now();
  avg2.push(te-ts);
}

avg1 = avg1.reduce((p,c) => p+c)/count;
avg2 = avg2.reduce((p,c) => p+c)/count;

console.log("reduce & lut took: " + avg1 + "msec");
console.log("map & spread took: " + avg2 + "msec");

Now this is the spectacular comeback of Map()..! May be now you can make a better decision when you want to remove the dupes.

现在这是Map()的壮观回归。也许现在你可以做出一个更好的决定,当你想要删除的dupes。

Well ok we are all happy now. But the lead role always comes last with some applause. I am sure some of you wonder what Set object would do. Now that since we are open to ES6 and we know Map is the winner of the previous games let us compare Map with Set as a final. A typical Real Madrid vs Barcelona game this time... or is it? Let's see who will win the el classico :)

好吧,我们现在都很开心。但是主角总是在最后才会有掌声。我相信你们有些人想知道集合对象会做什么。既然我们对ES6开放,而且我们知道Map是之前游戏的获胜者,那么让我们将Map与Set作最后的比较。这是一场典型的皇马对阵巴塞罗那的比赛……或者是吗?让我们来看看谁会赢得经典赛:)

var ranar = [],
     red1 = a => reduced = [...a.reduce((p,c) => p.set(c,true),new Map()).keys()],
     red2 = a => Array.from(new Set(a)),
     avg1 = [],
     avg2 = [],
       ts = 0,
       te = 0,
     res1 = [],
     res2 = [],
     count= 100;
for (var i = 0; i<count; i++){
  ranar = (new Array(100000).fill(true)).map(e => Math.floor(Math.random()*10000000));
  ts = performance.now();
  res1 = red1(ranar);
  te = performance.now();
  avg1.push(te-ts);
  ts = performance.now();
  res2 = red2(ranar);
  te = performance.now();
  avg2.push(te-ts);
}

avg1 = avg1.reduce((p,c) => p+c)/count;
avg2 = avg2.reduce((p,c) => p+c)/count;

console.log("map & spread took: " + avg1 + "msec");
console.log("set & A.from took: " + avg2 + "msec");

Wow.. man..! Well unexpectedly it didn't turn out to be an el classico at all. More like Barcelona FC against CA Osasuna :))

哇. .男人. . !出乎意料的是,它根本就不是el经典赛。更像是巴塞罗那对阵CA Osasuna:)

#13


15  

use Array.filter() like this

像这样使用Array.filter()

var actualArr = ['Apple', 'Apple', 'Banana', 'Mango', 'Strawberry', 'Banana'];

console.log('Actual Array: ' + actualArr);

var filteredArr = actualArr.filter(function(item, index) {
  if (actualArr.indexOf(item) == index)
    return item;
});

console.log('Filtered Array: ' + filteredArr);

this can be made shorter in ES6 to

这可以在ES6 to中缩短

actualArr.filter((item,index,self) => self.indexOf(item)==index);

Here is nice explanation of Array.filter()

下面是对Array.filter()的很好的解释

#14


14  

The following is more than 80% faster than the jQuery method listed (see tests below). It is an answer from a similar question a few years ago. If I come across the person who originally proposed it I will post credit. Pure JS.

以下代码比列出的jQuery方法快80%以上(参见下面的测试)。这是几年前一个类似问题的答案。如果我遇到最初提出这个建议的人,我会把功劳记在上面。纯粹的JS。

var temp = {};
for (var i = 0; i < array.length; i++)
  temp[array[i]] = true;
var r = [];
for (var k in temp)
  r.push(k);
return r;

My test case comparison: http://jsperf.com/remove-duplicate-array-tests

我的测试用例比较:http://jsperf.com/remove-duplicate-array-tests

#15


13  

Here is a simple answer to the question.

这个问题有一个简单的答案。

var names = ["Alex","Tony","James","Suzane", "Marie", "Laurence", "Alex", "Suzane", "Marie", "Marie", "James", "Tony", "Alex"];
var uniqueNames = [];

    for(var i in names){
        if(uniqueNames.indexOf(names[i]) === -1){
            uniqueNames.push(names[i]);
        }
    }

#16


12  

In ECMAScript 6 (aka ECMAScript 2015), Set can be used to filter out duplicates. Then it can be converted back to an array using the spread operator.

var names = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"],
    unique = [...new Set(names)];
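
One nice side effect worth noting (my addition, not part of the original answer): Set compares values with the SameValueZero algorithm, so it even handles NaN sensibly, which none of the indexOf-based answers do:

var withNaN = [NaN, NaN, 1, 1];
var deduped = [...new Set(withNaN)];
// deduped is [NaN, 1]; since NaN !== NaN, indexOf(NaN) is always -1,
// so the filter/indexOf approaches silently drop NaN entries entirely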

#17


10  

A simple but effective technique is to use the filter method in combination with the filter function(value, index){ return this.indexOf(value) == index }, passing the array itself as the thisArg.

Code example:

var data = [2,3,4,5,5,4];
var filter = function(value, index){ return this.indexOf(value) == index };
var filteredData = data.filter(filter, data );

document.body.innerHTML = '<pre>' + JSON.stringify(filteredData, null, '\t') +  '</pre>';

See also this Fiddle.

#18


9  

The top answers have complexity of O(n²), but this can be done with just O(n) by using an object as a hash:

function getDistinctArray(arr) {
    var dups = {};
    return arr.filter(function(el) {
        var hash = el.valueOf();
        var isDup = dups[hash];
        dups[hash] = true;
        return !isDup;
    });
}

This will work for strings, numbers, and dates. If your array contains complex objects (i.e., objects that have to be compared with ===), the above solution won't work. You can get an O(n) implementation for objects by setting a flag on the object itself:

function getDistinctObjArray(arr) {
    var distinctArr = arr.filter(function(el) {
        var isDup = el.inArray;
        el.inArray = true;
        return !isDup;
    });
    distinctArr.forEach(function(el) {
        delete el.inArray;
    });
    return distinctArr;
}
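
If mutating the elements is not an option, a non-mutating sketch of the same O(n) idea is possible in ES6 by tracking seen references in a Set (my addition, not part of the original answer; it still compares with ===):

function getDistinctObjArraySet(arr) {
    var seen = new Set();
    return arr.filter(function(el) {
        if (seen.has(el)) return false; // this exact reference was seen before
        seen.add(el);
        return true;
    });
}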

#19


9  

Here is a simple method that requires no special libraries or special functions:

var name_list = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
var get_uniq = name_list.filter(function(val, ind) { return name_list.indexOf(val) == ind; });

console.log("Original name list:" + name_list.length, name_list);
console.log("\n Unique name list:" + get_uniq.length, get_uniq);

#20


8  

Apart from being a simpler, more terse solution than the current answers (the future-looking ES6 ones aside), I perf-tested this and it was much faster as well:

var uniqueArray = dupeArray.filter(function(item, i, self){
  return self.lastIndexOf(item) == i;
});

One caveat: Array.lastIndexOf() was added in IE9, so if you need to go lower than that, you'll need to look elsewhere.

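If you do need to go lower, here is a rough sketch of the same idea with plain loops (my addition; note that filter and indexOf are also IE9+, so everything ES5 has to go):

function uniqueLegacy(arr) {
    var result = [];
    for (var i = 0; i < arr.length; i++) {
        var found = false;
        // linear scan stands in for indexOf/lastIndexOf
        for (var j = 0; j < result.length; j++) {
            if (result[j] === arr[i]) { found = true; break; }
        }
        if (!found) result.push(arr[i]);
    }
    return result;
}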

#21


8  

So the options are:

let a = [11,22,11,22];
let b = []


b = [ ...new Set(a) ];     
// b = [11, 22]

b = Array.from( new Set(a))   
// b = [11, 22]

b = a.filter((val,i)=>{
  return a.indexOf(val)==i
})                        
// b = [11, 22]

#22


7  

Solution 1

Array.prototype.unique = function() {
    var a = [];
    for (var i = 0; i < this.length; i++) {
        var current = this[i];
        if (a.indexOf(current) < 0) a.push(current);
    }
    return a;
}

Solution 2 (using Set)

Array.prototype.unique = function() {
    return Array.from(new Set(this));
}

Test

var x=[1,2,3,3,2,1];
x.unique() //[1,2,3]

Performance

When I tested both implementations (with and without Set) for performance in Chrome, I found that the one with Set is much, much faster!

Array.prototype.unique1 = function() {
    var a = [];
    for (var i = 0; i < this.length; i++) {
        var current = this[i];
        if (a.indexOf(current) < 0) a.push(current);
    }
    return a;
}


Array.prototype.unique2 = function() {
    return Array.from(new Set(this));
}

var x = [];
for (var i = 0; i < 10000; i++) {
    x.push("x" + i);
    x.push("x" + (i + 1));
}

console.time("unique1");
console.log(x.unique1());
console.timeEnd("unique1");



console.time("unique2");
console.log(x.unique2());
console.timeEnd("unique2");

#23


6  

Generic Functional Approach

Here is a generic and strictly functional approach with ES2015:

// small, reusable auxiliary functions

const apply = f => a => f(a);

const flip = f => b => a => f(a) (b);

const uncurry = f => (a, b) => f(a) (b);

const push = x => xs => (xs.push(x), xs);

const foldl = f => acc => xs => xs.reduce(uncurry(f), acc);

const some = f => xs => xs.some(apply(f));


// the actual de-duplicate function

const uniqueBy = f => foldl(
   acc => x => some(f(x)) (acc)
    ? acc
    : push(x) (acc)
 ) ([]);


// comparators

const eq = y => x => x === y;

// string equality case insensitive :D
const seqCI = y => x => x.toLowerCase() === y.toLowerCase();


// mock data

const xs = [1,2,3,1,2,3,4];

const ys = ["a", "b", "c", "A", "B", "C", "D"];


console.log( uniqueBy(eq) (xs) );

console.log( uniqueBy(seqCI) (ys) );

We can easily derive unique from uniqueBy, or use the faster implementation utilizing Sets:

const unique = uniqueBy(eq);

// const unique = xs => Array.from(new Set(xs));

Benefits of this approach:

  • generic solution by using a separate comparator function
  • declarative and succinct implementation
  • reuse of other small, generic functions

Performance Considerations

uniqueBy isn't as fast as an imperative implementation with loops, but it is way more expressive due to its genericity.

If you identify uniqueBy as the cause of a concrete performance penalty in your app, replace it with optimized code. That is, write your code first in a functional, declarative way. Afterwards, if you encounter performance issues, optimize the code at the locations that cause the problem.

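For illustration, an imperative drop-in replacement for uniqueBy with the same comparator interface could look like this (my sketch of the optimization step described above, not part of the original code):

const uniqueByFast = f => xs => {
  const acc = [];
  for (const x of xs) {
    // same semantics as uniqueBy: skip x if some accumulated element
    // compares equal under the supplied comparator f
    if (!acc.some(y => f(x)(y))) acc.push(x);
  }
  return acc;
};

// uniqueByFast(eq) ([1,2,3,1,2,3,4]) yields [1,2,3,4], like uniqueBy(eq)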

Memory Consumption and Garbage Collection

uniqueBy utilizes mutations (push(x) (acc)) hidden inside its body. It reuses the accumulator instead of throwing it away after each iteration. This reduces memory consumption and GC pressure. Since this side effect is wrapped inside the function, everything outside remains pure.

#24


4  

$(document).ready(function() {

    var arr1 = ["dog","dog","fish","cat","cat","fish","apple","orange"];
    var arr2 = ["cat","fish","mango","apple"];

    var uniquevalue = [];
    var seconduniquevalue = [];
    var finalarray = [];

    // de-duplicate arr1
    $.each(arr1, function(key, value) {
        if ($.inArray(value, uniquevalue) === -1) {
            uniquevalue.push(value);
        }
    });

    // de-duplicate arr2
    $.each(arr2, function(key, value) {
        if ($.inArray(value, seconduniquevalue) === -1) {
            seconduniquevalue.push(value);
        }
    });

    // collect the values present in both de-duplicated arrays
    $.each(uniquevalue, function(ikey, ivalue) {
        $.each(seconduniquevalue, function(ukey, uvalue) {
            if (ivalue == uvalue) {
                finalarray.push(ivalue);
            }
        });
    });

    alert(finalarray);
});

#25


4  

// declarations added to make the snippet runnable;
// the sample input is the array from the question
var originalArray = ["Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"];
var newArray = [];
for (var i = 0; i < originalArray.length; i++) {
    if (!newArray.includes(originalArray[i])) {
        newArray.push(originalArray[i]);
    }
}

#26


3  

If by any chance you are using D3.js, you could do:

d3.set(["foo", "bar", "foo", "baz"]).values(); // ["foo", "bar", "baz"]

https://github.com/mbostock/d3/wiki/Arrays#set_values

#27


3  

https://jsfiddle.net/2w0k5tz8/

function remove_duplicates(array_) {
    var ret_array = new Array();
    for (var a = array_.length - 1; a >= 0; a--) {
        // delete every other occurrence of the current value
        for (var b = array_.length - 1; b >= 0; b--) {
            if (array_[a] == array_[b] && a != b) {
                delete array_[b];
            }
        }
        // keep the slot only if it survived the deletions
        if (array_[a] != undefined)
            ret_array.push(array_[a]);
    }
    return ret_array;
}

console.log(remove_duplicates(Array(1,1,1,2,2,2,3,3,3)));

Loop through, remove duplicates, and push the survivors into a clone array, because delete leaves holes and the array indices are not updated.

Loop backward for better performance (your loop won't need to keep checking the length of your array).

#28


3  

This is just another solution, but different from the rest.

function diffArray(arr1, arr2) {
  var newArr = arr1.concat(arr2);
  newArr.sort();
  var finalArr = [];
  for(var i = 0;i<newArr.length;i++) {
   if(!(newArr[i] === newArr[i+1] || newArr[i] === newArr[i-1])) {
     finalArr.push(newArr[i]);
   } 
  }
  return finalArr;
}
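
Be aware (my note, not the original author's): because every element equal to a neighbour is skipped, values that occur more than once are dropped entirely rather than kept once, so this behaves more like a symmetric difference than a de-duplication:

console.log(diffArray([1, 2, 3], [2, 3, 4])); // logs [1, 4]; the shared 2 and 3 disappear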

#29


3  

Here is code that is very simple to understand and works anywhere (even in PhotoshopScript). Check it!

var peoplenames = new Array("Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl");

peoplenames = unique(peoplenames);
alert(peoplenames);

function unique(array){
    var len = array.length;
    for(var i = 0; i < len; i++) for(var j = i + 1; j < len; j++) 
        if(array[j] == array[i]){
            array.splice(j,1);
            j--;
            len--;
        }
    return array;
}

//*result* peoplenames == ["Mike","Matt","Nancy","Adam","Jenny","Carl"]

#30


3  

A slight modification of thg435's excellent answer to use a custom comparator:

function contains(array, obj) {
    for (var i = 0; i < array.length; i++) {
        if (isEqual(array[i], obj)) return true;
    }
    return false;
}
//comparator
function isEqual(obj1, obj2) {
    if (obj1.name == obj2.name) return true;
    return false;
}
function removeDuplicates(ary) {
    var arr = [];
    return ary.filter(function(x) {
        // push returns the new length (always truthy), so this keeps x
        // the first time it is seen and records it in arr
        return !contains(arr, x) && arr.push(x);
    });
}
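
A quick usage example (my addition), using the name-based comparator above:

var people = [{name: "Ann"}, {name: "Bob"}, {name: "Ann"}];
console.log(removeDuplicates(people)); // [{name: "Ann"}, {name: "Bob"}]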