What is the fastest known sort algorithm for absolute worst case? I don't care about best case and am assuming a gigantic data set if that even matters.
17 Answers
#1
17
Make sure you have seen this:
visualizing sort algorithms - it helped me decide which sort algorithm to use.
#2
9
Depends on the data. For example, for integers (or anything that can be expressed as an integer) the fastest is radix sort, which for fixed-length values has a worst-case complexity of O(n). The best general comparison sort algorithms have a complexity of O(n log n).
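For illustration, a minimal sketch of an LSD radix sort on non-negative fixed-width integers (the byte-at-a-time radix and the function name are my own illustrative choices, not from the answer):

```python
def radix_sort(nums, key_bytes=4):
    """LSD radix sort for non-negative integers that fit in key_bytes bytes.

    Runs in O(key_bytes * n) time, i.e. O(n) for fixed-length keys.
    """
    for shift in range(0, key_bytes * 8, 8):
        buckets = [[] for _ in range(256)]        # one bucket per byte value
        for x in nums:
            buckets[(x >> shift) & 0xFF].append(x)
        nums = [x for bucket in buckets for x in bucket]  # stable gather
    return nums

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```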
#3
7
If you are using binary comparisons, the best possible sort algorithm takes O(N log N) comparisons to complete. If you're looking for something with good worst-case performance, I'd look at MergeSort and HeapSort, since they are O(N log N) algorithms in all cases.
HeapSort is nice if all your data fits in memory, while MergeSort allows you to do on-disk sorts better (but takes more space overall).
There are other less-well-known algorithms mentioned on the Wikipedia sorting algorithm page that all have O(n log n) worst-case performance. (Based on a comment from mmyers.)
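As a sketch of the in-memory case, heapsort falls out of Python's standard heapq module (an illustration of the O(N log N) guarantee rather than a tuned implementation):

```python
import heapq

def heap_sort(items):
    """O(n log n) in all cases: O(n) heapify plus n pops of O(log n) each."""
    heap = list(items)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]

print(heap_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```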
#4
5
For the man with a limitless budget
Facetious but correct: sorting networks trade space (in real hardware terms) for better-than-O(n log n) sorting!
Without resorting to such hardware (which is unlikely to be available), you have a lower bound of O(n log n) for the best comparison sorts.
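To make the sorting-network idea concrete, here is a 5-comparator network for four elements (the comparator schedule is the standard optimal network for n = 4; in real hardware the comparators within a stage run in parallel, which is where the speedup comes from; this software version is just a sketch):

```python
def sort4(a):
    """Sort a 4-element list with a fixed 5-comparator sorting network."""
    def cmp_exchange(i, j):
        if a[i] > a[j]:
            a[i], a[j] = a[j], a[i]

    cmp_exchange(0, 1); cmp_exchange(2, 3)  # stage 1: independent, parallel in hardware
    cmp_exchange(0, 2); cmp_exchange(1, 3)  # stage 2: independent, parallel in hardware
    cmp_exchange(1, 2)                      # stage 3
    return a

print(sort4([3, 1, 4, 2]))  # [1, 2, 3, 4]
```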
O(n log n) worst-case performance (in no particular order):
- Binary Tree Sort
- Merge Sort
- Heap Sort
- Smooth Sort
- Intro Sort
Beating the n log n
If your data is amenable to it, you can beat the n log n restriction, but then you care about the number of bits in the input data as well.
Radix and bucket sort are probably the best-known examples of this. Without more information about your particular requirements, it is not fruitful to consider these in more depth.
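For completeness, a textbook bucket sort sketch, assuming keys roughly uniform in [0, 1) (that uniformity assumption is what makes the expected O(n) bound work):

```python
def bucket_sort(xs):
    """Bucket sort for floats assumed roughly uniform in [0, 1).

    Expected O(n) under the uniformity assumption; degrades toward the
    per-bucket sort's complexity when the keys cluster.
    """
    n = len(xs)
    buckets = [[] for _ in range(n)]
    for x in xs:
        buckets[min(int(x * n), n - 1)].append(x)  # scatter by key range
    return [x for b in buckets for x in sorted(b)]  # sort and gather

print(bucket_sort([0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12]))
```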
#5
2
Quicksort is usually the fastest, but if you want good worst-case time, try Heapsort or Mergesort. These both have O(n log n) worst-case time performance.
#6
2
If you have a gigantic data set (i.e. much larger than available memory) you likely have your data on disk/tape/something-with-expensive-random-access, so you need an external sort.
Merge sort works well in that case; unlike most other sorts, it doesn't involve random reads/writes.
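A minimal external-sort sketch in Python, assuming the input is a text file with one integer per line (the chunk size and file layout are illustrative assumptions, not anything prescribed by the answer):

```python
import heapq
import os
import tempfile

def external_sort(in_path, out_path, chunk_size=100_000):
    """Sort a file of one integer per line using bounded memory.

    Phase 1: sort fixed-size chunks in memory and spill them to temp files.
    Phase 2: k-way merge the sorted runs using only sequential reads.
    """
    runs = []
    with open(in_path) as f:
        while True:
            chunk = [int(line) for _, line in zip(range(chunk_size), f)]
            if not chunk:
                break
            chunk.sort()
            tmp = tempfile.NamedTemporaryFile("w", delete=False, suffix=".run")
            tmp.writelines(f"{x}\n" for x in chunk)
            tmp.close()
            runs.append(tmp.name)

    files = [open(r) for r in runs]
    with open(out_path, "w") as out:
        # heapq.merge streams the k sorted runs without loading them fully.
        for x in heapq.merge(*[(int(line) for line in f) for f in files]):
            out.write(f"{x}\n")
    for f in files:
        f.close()
    for r in runs:
        os.remove(r)
```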
#7
1
It is largely related to the size of your dataset and whether or not the set is already ordered (or what order it is currently in).
Entire books have been written on search/sort algorithms. You aren't going to find an "absolute fastest" assuming a worst-case scenario, because different sorts have different worst-case situations.
#8
1
If you have a sufficiently huge data set, you're probably looking at sorting individual bins of data, then using merge-sort to merge those bins. But at this point, we're talking data sets huge enough to be VASTLY larger than main memory.
I guess the most correct answer would be "it depends".
#9
1
It depends both on the type of data and the type of resources. For example, there are parallel algorithms that beat Quicksort, but given how you asked the question it's unlikely you have access to them. There are times when the "worst case" for one algorithm is the "best case" for another (nearly sorted data is problematic for Quick and Merge, but fast with much simpler techniques).
#10
1
It depends on the size of the input, the n in Big O notation.
Here is a list of sorting algorithms with their BEST AND WORST CASES for you to compare. My preference is the two-way MergeSort.
#11
0
Assuming randomly sorted data, quicksort.
O(n log n) average case, O(n^2) in the worst case, but that worst case requires highly non-random data.
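A short sketch of the standard mitigation: pick pivots at random, so the O(n^2) case depends on unlucky draws rather than on how the input happens to be arranged (out-of-place version for clarity; a production quicksort partitions in place):

```python
import random

def quicksort(xs):
    """Randomized quicksort: expected O(n log n) on any input; the O(n^2)
    worst case now requires unlucky pivot draws, not adversarial data."""
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([9, 3, 7, 1, 8, 2]))  # [1, 2, 3, 7, 8, 9]
```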
You might want to describe your data set characteristics.
#12
0
See Quick Sort Vs Merge Sort for a comparison of Quicksort and Mergesort, which are two of the better algorithms in most cases.
#13
0
It all depends on the data you're trying to sort. Different algorithms have different speeds for different data. An O(n) algorithm may be slower than an O(n^2) algorithm, depending on what kind of data you're working with.
#14
0
I've always preferred merge sort, as it's stable (meaning that if two elements are equal from a sorting perspective, then their relative order is explicitly preserved), but quicksort is good as well.
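A quick illustration of what stability buys (using Python's built-in sorted, which happens to be stable, as the stand-in):

```python
# Records listed in an existing order; sorting by age preserves the
# original relative order among equal ages because the sort is stable.
records = [("alice", 30), ("bob", 25), ("carol", 30), ("dave", 25)]
by_age = sorted(records, key=lambda r: r[1])
print(by_age)
# [('bob', 25), ('dave', 25), ('alice', 30), ('carol', 30)]
# bob still precedes dave, and alice still precedes carol, as in the input.
```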
#15
0
The lowest upper bound on Turing machines is achieved by merge sort, that is, O(n log n). Though quicksort might be better on some datasets.
You can't go lower than O(n log n) unless you're using special hardware (e.g. hardware-supported bead sort, or other non-comparison sorts).
#16
0
On the importance of specifying your problem: radix sort might be the fastest, but it's only usable when your data has fixed-length keys that can be broken down into independent small pieces. That limits its usefulness in the general case, and explains why more people haven't heard of it.
http://en.wikipedia.org/wiki/Radix_sort
P.S. This is an O(k*n) algorithm, where k is the size of the key.
#17
-2
This massively depends on the characteristics of the data.