I have an array of integers from which I need to remove duplicates while maintaining the order of the first occurrence of each integer. I can see doing it like this, but I imagine there is a better way that makes better use of STL algorithms. The insertion is out of my control, so I cannot check for duplicates before inserting.
int unsortedRemoveDuplicates(std::vector<int> &numbers) {
    std::set<int> uniqueNumbers;
    std::vector<int>::iterator allItr = numbers.begin();
    std::vector<int>::iterator unique = allItr;
    std::vector<int>::iterator endItr = numbers.end();
    for (; allItr != endItr; ++allItr) {
        const bool isUnique = uniqueNumbers.insert(*allItr).second;
        if (isUnique) {
            *unique = *allItr;
            ++unique;
        }
    }
    const int duplicates = endItr - unique;
    numbers.erase(unique, endItr);
    return duplicates;
}
How can this be done using STL algorithms?
8 Answers
#1
10
The naive way is to use std::set as everyone tells you. It's overkill and has poor cache locality (slow). The smart* way is to use std::vector appropriately (make sure to see footnote at bottom):
#include <algorithm>
#include <vector>

struct target_less
{
    template<class It>
    bool operator()(It const &a, It const &b) const { return *a < *b; }
};

struct target_equal
{
    template<class It>
    bool operator()(It const &a, It const &b) const { return *a == *b; }
};

template<class It> It uniquify(It begin, It const end)
{
    // Collect an iterator to every element.
    std::vector<It> v;
    v.reserve(static_cast<size_t>(std::distance(begin, end)));
    for (It i = begin; i != end; ++i)
    { v.push_back(i); }
    // Sort the iterators by the values they point to, then keep only one iterator per distinct value.
    std::sort(v.begin(), v.end(), target_less());
    v.erase(std::unique(v.begin(), v.end(), target_equal()), v.end());
    // Restore positional order among the surviving iterators.
    std::sort(v.begin(), v.end());
    // Swap each surviving element to the front, preserving relative order.
    size_t j = 0;
    for (It i = begin; i != end && j != v.size(); ++i)
    {
        if (i == v[j])
        {
            using std::iter_swap; iter_swap(i, begin);
            ++j;
            ++begin;
        }
    }
    return begin;
}
Then you can use it like:
int main()
{
    std::vector<int> v;
    v.push_back(6);
    v.push_back(5);
    v.push_back(5);
    v.push_back(8);
    v.push_back(5);
    v.push_back(8);
    v.erase(uniquify(v.begin(), v.end()), v.end());
}
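For this input, the intended result is that v is left holding 6 5 8: one element per distinct value, in order of first appearance.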
*Note: That's the smart way in typical cases, where the number of duplicates isn't too high. For a more thorough performance analysis, see this answer to a related question.
#2
15
Sounds like a job for std::copy_if. Define a predicate that keeps track of elements that have already been processed and returns false for those.
If you don't have C++11 support, you can use the clumsily named std::remove_copy_if and invert the logic (a sketch of that variant is included at the end of this answer).
This is an untested example:
template <typename T>
struct NotDuplicate {
    bool operator()(const T& element) {
        return s_.insert(element).second; // true if the insert succeeded, i.e. the element was not seen before
    }
private:
    std::set<T> s_;
};
Then
std::vector<int> uniqueNumbers;
NotDuplicate<int> pred;
std::copy_if(numbers.begin(), numbers.end(),
             std::back_inserter(uniqueNumbers),
             std::ref(pred));
where an std::ref has been used to avoid potential problems with the algorithm internally copying what is a stateful functor, although std::copy_if does not place any requirements on side-effects of the functor being applied.
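For reference, here is a minimal, untested sketch of the remove_copy_if variant mentioned above, with the predicate's logic inverted. The predicate name IsDuplicate and the wrapper function removeDuplicatesPreserveOrder are made up for illustration; note that std::ref is itself C++11, so strictly pre-C++11 code would need boost::ref or a hand-written reference wrapper instead.
#include <algorithm>
#include <functional>
#include <iterator>
#include <set>
#include <vector>

// Inverse of NotDuplicate: returns true for values that have already been seen,
// so std::remove_copy_if skips them while copying.
template <typename T>
struct IsDuplicate {
    bool operator()(const T& element) {
        return !s_.insert(element).second; // insertion failed => duplicate
    }
private:
    std::set<T> s_;
};

void removeDuplicatesPreserveOrder(const std::vector<int>& numbers,
                                   std::vector<int>& uniqueNumbers) {
    IsDuplicate<int> pred;
    std::remove_copy_if(numbers.begin(), numbers.end(),
                        std::back_inserter(uniqueNumbers),
                        std::ref(pred));
}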
#3
7
int unsortedRemoveDuplicates(std::vector<int>& numbers)
{
    std::set<int> seenNums; //log(n) existence check
    auto itr = begin(numbers);
    while (itr != end(numbers))
    {
        if (seenNums.find(*itr) != end(seenNums)) //seen? erase it
            itr = numbers.erase(itr); //itr now points to next element
        else
        {
            seenNums.insert(*itr);
            itr++;
        }
    }
    return seenNums.size();
}
//3 6 3 8 9 5 6 8
//3 6 8 9 5
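For completeness, a minimal driver for the function above, using the sample values from the comments (this main is illustrative and not part of the original answer):
#include <iostream>
#include <set>
#include <vector>

// ... unsortedRemoveDuplicates as defined above ...

int main()
{
    std::vector<int> numbers = {3, 6, 3, 8, 9, 5, 6, 8};
    int uniqueCount = unsortedRemoveDuplicates(numbers); // returns the number of distinct values
    std::cout << uniqueCount << "\n";                    // 5
    for (int n : numbers)
        std::cout << n << ' ';                           // 3 6 8 9 5
    std::cout << "\n";
}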
#4
5
Fast and simple, C++11:
template<typename T>
size_t RemoveDuplicatesKeepOrder(std::vector<T>& vec)
{
    std::set<T> seen;
    auto newEnd = std::remove_if(vec.begin(), vec.end(), [&seen](const T& value)
    {
        if (seen.find(value) != std::end(seen))
            return true;
        seen.insert(value);
        return false;
    });
    vec.erase(newEnd, vec.end());
    return vec.size();
}
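A small usage sketch (the sample values and the surrounding main are illustrative, not part of the original answer):
#include <algorithm>
#include <iostream>
#include <set>
#include <vector>

// ... RemoveDuplicatesKeepOrder as defined above ...

int main()
{
    std::vector<int> v = {1, 7, 1, 3, 7, 7, 9};
    size_t kept = RemoveDuplicatesKeepOrder(v);    // v becomes 1 7 3 9
    std::cout << "kept " << kept << " elements\n"; // kept 4 elements
}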
#5
2
To verify the performance of the proposed solutions, I've tested three of them, listed below. The tests use random vectors with 1 million elements and different ratios of duplicates (0%, 1%, 2%, ..., 10%, ..., 90%, 100%).
-
Mehrdad's solution, currently the accepted answer:

void uniquifyWithOrder_sort(const vector<int>&, vector<int>& output)
{
    using It = vector<int>::iterator;
    struct target_less
    {
        bool operator()(It const &a, It const &b) const { return *a < *b; }
    };
    struct target_equal
    {
        bool operator()(It const &a, It const &b) const { return *a == *b; }
    };
    auto begin = output.begin();
    auto const end = output.end();
    {
        vector<It> v;
        v.reserve(static_cast<size_t>(distance(begin, end)));
        for (auto i = begin; i != end; ++i)
        {
            v.push_back(i);
        }
        sort(v.begin(), v.end(), target_less());
        v.erase(unique(v.begin(), v.end(), target_equal()), v.end());
        sort(v.begin(), v.end());
        size_t j = 0;
        for (auto i = begin; i != end && j != v.size(); ++i)
        {
            if (i == v[j])
            {
                using std::iter_swap; iter_swap(i, begin);
                ++j;
                ++begin;
            }
        }
    }
    output.erase(begin, output.end());
}
-
juanchopanza's solution:

void uniquifyWithOrder_set_copy_if(const vector<int>& input, vector<int>& output)
{
    struct NotADuplicate
    {
        bool operator()(const int& element) { return _s.insert(element).second; }
    private:
        set<int> _s;
    };
    vector<int> uniqueNumbers;
    NotADuplicate pred;
    output.clear();
    output.reserve(input.size());
    copy_if(
        input.begin(), input.end(),
        back_inserter(output),
        ref(pred));
}
-
Leviathan's solution:

void uniquifyWithOrder_set_remove_if(const vector<int>& input, vector<int>& output)
{
    set<int> seen;
    auto newEnd = remove_if(output.begin(), output.end(), [&seen](const int& value)
    {
        if (seen.find(value) != end(seen))
            return true;
        seen.insert(value);
        return false;
    });
    output.erase(newEnd, output.end());
}
They are slightly modified for simplicity, and to allow comparing in-place solutions with non-in-place ones. The full code used for testing is available here.
The results suggest that if you know you'll have at least 1% duplicates, the remove_if solution with std::set is the best one. Otherwise, you should go with the sort solution:
// Intel(R) Core(TM) i7-2600 CPU @ 3.40 GHz 3.40 GHz
// 16 GB RAM, Windows 7, 64 bit
//
// cl 19
// /GS /GL /W3 /Gy /Zc:wchar_t /Zi /Gm- /O2 /Zc:inline /fp:precise /D "NDEBUG" /D "_CONSOLE" /D "_UNICODE" /D "UNICODE" /WX- /Zc:forScope /Gd /Oi /MD /EHsc /nologo /Ot
//
// 1000 random vectors with 1 000 000 elements each.
// 11 tests: with 0%, 10%, 20%, ..., 90%, 100% duplicates in vectors.
// Ratio: 0
// set_copy_if : Time : 618.162 ms +- 18.7261 ms
// set_remove_if : Time : 650.453 ms +- 10.0107 ms
// sort : Time : 212.366 ms +- 5.27977 ms
// Ratio : 0.1
// set_copy_if : Time : 34.1907 ms +- 1.51335 ms
// set_remove_if : Time : 24.2709 ms +- 0.517165 ms
// sort : Time : 43.735 ms +- 1.44966 ms
// Ratio : 0.2
// set_copy_if : Time : 29.5399 ms +- 1.32403 ms
// set_remove_if : Time : 20.4138 ms +- 0.759438 ms
// sort : Time : 36.4204 ms +- 1.60568 ms
// Ratio : 0.3
// set_copy_if : Time : 32.0227 ms +- 1.25661 ms
// set_remove_if : Time : 22.3386 ms +- 0.950855 ms
// sort : Time : 38.1551 ms +- 1.12852 ms
// Ratio : 0.4
// set_copy_if : Time : 30.2714 ms +- 1.28494 ms
// set_remove_if : Time : 20.8338 ms +- 1.06292 ms
// sort : Time : 35.282 ms +- 2.12884 ms
// Ratio : 0.5
// set_copy_if : Time : 24.3247 ms +- 1.21664 ms
// set_remove_if : Time : 16.1621 ms +- 1.27802 ms
// sort : Time : 27.3166 ms +- 2.12964 ms
// Ratio : 0.6
// set_copy_if : Time : 27.3268 ms +- 1.06058 ms
// set_remove_if : Time : 18.4379 ms +- 1.1438 ms
// sort : Time : 30.6846 ms +- 2.52412 ms
// Ratio : 0.7
// set_copy_if : Time : 30.3871 ms +- 0.887492 ms
// set_remove_if : Time : 20.6315 ms +- 0.899802 ms
// sort : Time : 33.7643 ms +- 2.2336 ms
// Ratio : 0.8
// set_copy_if : Time : 33.3077 ms +- 0.746272 ms
// set_remove_if : Time : 22.9459 ms +- 0.921515 ms
// sort : Time : 37.119 ms +- 2.20924 ms
// Ratio : 0.9
// set_copy_if : Time : 36.0888 ms +- 0.763978 ms
// set_remove_if : Time : 24.7002 ms +- 0.465711 ms
// sort : Time : 40.8233 ms +- 2.59826 ms
// Ratio : 1
// set_copy_if : Time : 21.5609 ms +- 1.48986 ms
// set_remove_if : Time : 14.2934 ms +- 0.535431 ms
// sort : Time : 24.2485 ms +- 0.710269 ms
//
// A second set of results, with 0%, 1%, 2%, ..., 10% duplicates:
// Ratio: 0
// set_copy_if : Time: 666.962 ms +- 23.7445 ms
// set_remove_if : Time: 736.088 ms +- 39.8122 ms
// sort : Time: 223.796 ms +- 5.27345 ms
// Ratio: 0.01
// set_copy_if : Time: 60.4075 ms +- 3.4673 ms
// set_remove_if : Time: 43.3095 ms +- 1.31252 ms
// sort : Time: 70.7511 ms +- 2.27826 ms
// Ratio: 0.02
// set_copy_if : Time: 50.2605 ms +- 2.70371 ms
// set_remove_if : Time: 36.2877 ms +- 1.14266 ms
// sort : Time: 62.9786 ms +- 2.69163 ms
// Ratio: 0.03
// set_copy_if : Time: 46.9797 ms +- 2.43009 ms
// set_remove_if : Time: 34.0161 ms +- 0.839472 ms
// sort : Time: 59.5666 ms +- 1.34078 ms
// Ratio: 0.04
// set_copy_if : Time: 44.3423 ms +- 2.271 ms
// set_remove_if : Time: 32.2404 ms +- 1.02162 ms
// sort : Time: 57.0583 ms +- 2.9226 ms
// Ratio: 0.05
// set_copy_if : Time: 41.758 ms +- 2.57589 ms
// set_remove_if : Time: 29.9927 ms +- 0.935529 ms
// sort : Time: 54.1474 ms +- 1.63311 ms
// Ratio: 0.06
// set_copy_if : Time: 40.289 ms +- 1.85715 ms
// set_remove_if : Time: 29.2604 ms +- 0.593869 ms
// sort : Time: 57.5436 ms +- 5.52807 ms
// Ratio: 0.07
// set_copy_if : Time: 40.5035 ms +- 1.80952 ms
// set_remove_if : Time: 29.1187 ms +- 0.63127 ms
// sort : Time: 53.622 ms +- 1.91357 ms
// Ratio: 0.08
// set_copy_if : Time: 38.8139 ms +- 1.9811 ms
// set_remove_if : Time: 27.9989 ms +- 0.600543 ms
// sort : Time: 50.5743 ms +- 1.35296 ms
// Ratio: 0.09
// set_copy_if : Time: 39.0751 ms +- 1.71393 ms
// set_remove_if : Time: 28.2332 ms +- 0.607895 ms
// sort : Time: 51.2829 ms +- 1.21077 ms
// Ratio: 0.1
// set_copy_if : Time: 35.6847 ms +- 1.81495 ms
// set_remove_if : Time: 25.204 ms +- 0.538245 ms
// sort : Time: 46.4127 ms +- 2.66714 ms
#6
0
Here is what WilliamKF is searching for. It uses erase. This code is good for lists but isn't good for vectors; for vectors you should not use erase like this.
//makes uniques in one shot without sorting !!
template<class listtype> inline
void uniques(listtype* In)
{
    typename listtype::iterator it  = In->begin();
    typename listtype::iterator it2 = In->begin();
    int tmpsize = In->size();
    while (it != In->end())
    {
        it2 = it;
        it2++;
        while (it2 != In->end())
        {
            if ((*it) == (*it2))
                In->erase(it2++);
            else
                ++it2;
        }
        it++;
    }
}
What I have tried for vectors, without using sort, is this:
//makes vectors as fast as possible unique
template<typename T> inline
void vectoruniques(std::vector<T>* In)
{
    int tmpsize = In->size();
    for (typename std::vector<T>::iterator it = In->begin(); it < In->end() - 1; it++)
    {
        T tmp = *it;
        for (typename std::vector<T>::iterator it2 = it + 1; it2 < In->end(); it2++)
        {
            if (*it2 != *it)
                tmp = *it2;
            else
                *it2 = tmp;
        }
    }
    typename std::vector<T>::iterator it = std::unique(In->begin(), In->end());
    int newsize = std::distance(In->begin(), it);
    In->resize(newsize);
}
Somehow it looks like this would work. I tested it a bit; maybe somebody can tell whether it really works! This solution doesn't need any greater-than operator. I mean, why use a greater-than operator for searching for unique elements? Usage for vectors:
int myints[] = {21,10,20,20,20,30,21,31,20,20,2};
std::vector<int> abc(myints , myints+11);
vectoruniques(&abc);
#7
0
Here's something that handles POD and non-POD types with move support. Uses default operator== or a custom equality predicate. Does not require sorting/operator<, key generation, or a separate set. No idea if this is more efficient than the other methods described above.
template <typename Cnt, typename _Pr = std::equal_to<typename Cnt::value_type>>
void remove_duplicates( Cnt& cnt, _Pr cmp = _Pr() )
{
    Cnt result;
    result.reserve( std::size( cnt ) ); // or cnt.size() if compiler doesn't support std::size()
    std::copy_if(
        std::make_move_iterator( std::begin( cnt ) )
        , std::make_move_iterator( std::end( cnt ) )
        , std::back_inserter( result )
        , [&]( const typename Cnt::value_type& what )
        {
            return std::find_if(
                std::begin( result )
                , std::end( result )
                , [&]( const typename Cnt::value_type& existing ) { return cmp( what, existing ); }
            ) == std::end( result );
        }
    ); // copy_if
    cnt = std::move( result ); // place result in cnt param
} // remove_duplicates
Usage/tests:
{
    std::vector<int> ints{ 0,1,1,2,3,4 };
    remove_duplicates( ints );
    assert( ints.size() == 5 );
}
{
    struct data
    {
        std::string foo;
        bool operator==( const data& rhs ) const { return this->foo == rhs.foo; }
    };
    std::vector<data>
        mydata{ { "hello" }, {"hello"}, {"world"} }
        , mydata2 = mydata
        ;
    // use operator==
    remove_duplicates( mydata );
    assert( mydata.size() == 2 );
    // use custom predicate
    remove_duplicates( mydata2, []( const data& left, const data& right ) { return left.foo == right.foo; } );
    assert( mydata2.size() == 2 );
}
#8
0
Here is a C++11 generic version that works with iterators and doesn't allocate additional storage. It may have the disadvantage of being O(n^2), but it is likely faster for smaller input sizes.
template<typename Iter>
Iter removeDuplicates(Iter begin, Iter end)
{
    auto it = begin;
    while (it != end)
    {
        auto next = std::next(it);
        if (next == end)
        {
            break;
        }
        end = std::remove(next, end, *it);
        it = next;
    }
    return end;
}
....
vec.erase(removeDuplicates(vec.begin(), vec.end()), vec.end());
Sample Code: http://cpp.sh/5kg5n