Merging K sorted arrays - comparing two solutions

Date: 2022-04-06 19:15:51

If I have K sorted arrays of N elements each, e.g.

[0, 1, 2]
[1, 6, 8]
[10, 11, 12]

I know that I can use a heap to merge them by iterating over all of the lists, inserting every element into the heap, and then popping the minimum back out each time, in O(KN * log(KN)).

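For concreteness, here is a minimal Python sketch of what I mean by the first approach (illustrative only; the function name is mine):

import heapq

def merge_all_into_one_heap(lists):
    # Push every element of every list into a single heap: O(KN * log(KN)).
    heap = []
    for lst in lists:
        for x in lst:
            heapq.heappush(heap, x)
    # Pop everything back out in sorted order.
    return [heapq.heappop(heap) for _ in range(len(heap))]

# merge_all_into_one_heap([[0, 1, 2], [1, 6, 8], [10, 11, 12]])
# => [0, 1, 1, 2, 6, 8, 10, 11, 12]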

I checked on the internet, and another popular solution seems to be using a min-heap of only K elements: insert the first item of each of the K lists into the heap, then repeatedly extract the minimum and advance the pointer in the list that owned that minimum element, pushing its next item onto the heap.

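And a sketch of the second approach, keeping at most K entries in the heap (again, illustrative names):

import heapq

def merge_with_k_sized_heap(lists):
    # Seed the heap with the first element of each non-empty list: at most K entries.
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    out = []
    while heap:
        value, which, idx = heapq.heappop(heap)
        out.append(value)
        if idx + 1 < len(lists[which]):
            # Advance the pointer of the list that owned the minimum element.
            heapq.heappush(heap, (lists[which][idx + 1], which, idx + 1))
    return out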

Aside from the lower memory requirement (O(K) in the second case), is the second method more efficient time-wise?

Optional bonus points: is there an even better algorithm than the ones above?

3 answers

#1


The second version should have a runtime of O(KN * log(K)), since you perform a heapify operation (log(K)) for each of the N*K elements. So yes, it is faster. I cannot think of a more efficient way to solve this problem.

#2


The first method is fine when you have enough memory to perform the sorting of all the input lists, but it'd be even simpler to just perform a k-way merge between the already-sorted lists, with a bit of extra space (a list of K elements) for keeping track of the index you're at in each input list. That's an O(K^2 * N) solution.

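A sketch of that heap-free k-way merge might look like this (illustrative code, not part of the original answer): keep one index per input list and pick the smallest head with a linear scan over the K candidates.

def kway_merge_linear_scan(lists):
    pos = [0] * len(lists)                 # current index into each input list
    out = []
    for _ in range(sum(len(lst) for lst in lists)):
        best = None
        for i, lst in enumerate(lists):    # scan up to K heads per output element
            if pos[i] < len(lst) and (best is None or lst[pos[i]] < lists[best][pos[best]]):
                best = i
        out.append(lists[best][pos[best]])
        pos[best] += 1
    return out                             # O(K^2 * N) time, O(K) extra space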

Which is better - the first method or the k-way merge - depends on how big K is compared to N, and let's not forget the O(KN) cost of building a heap for the first method. To give an idea:

k=5; n=100
k*n*log(k*n)
=> 3107
k*k*n
=> 2500

k=100; n=100
k*n*log(k*n)
=> 92103
k*k*n
=> 1000000
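
These figures appear to use the natural logarithm; they can be reproduced with a couple of lines of Python:

from math import log

for k, n in [(5, 100), (100, 100)]:
    print(round(k * n * log(k * n)), k * k * n)
# => 3107 2500
# => 92103 1000000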

The second method uses less memory, and that's very important! It's the way to go when the input lists don't fit in memory: we'd take one element from each list, put it in the heap, determine the next one that goes into the final result, write it to the output, and update the heap accordingly. That's O(KN * log(K)) in complexity. Again, to give an idea:

k=5; n=100
k*n*log(k)
=> 804

k=100; n=100
k*n*log(k)
=> 46051
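
Incidentally, Python's standard library ships this streaming merge as heapq.merge, which consumes the input iterables lazily, so the lists never have to be materialized in memory all at once:

import heapq

merged = heapq.merge([0, 1, 2], [1, 6, 8], [10, 11, 12])
print(list(merged))
# => [0, 1, 1, 2, 6, 8, 10, 11, 12]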

Bottom line: Use a k-way merge instead of the first method when the input fits in memory and k is small, and as @btilly points out, the second method is theoretically the best of them all, but practical considerations might make a k-way merge faster. As usual: the best strategy is to profile with some real data, and pick the winner!

#3


The first answer is O(KN * log(KN)). The second is O(KN * log(K)), and so is better. In general, it is impossible to do better than that.

That said, you can sometimes improve on it in practice. Instead of dumping the minimum elements into a heap, create a tree of merges like merge sort does. Then add logic so that, when you seem to be pulling from one side of the merge, you try to jump ahead and look for a run.

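A minimal sketch of the merge-tree part of that idea (the run-skipping logic described above is left out for brevity):

def merge_two(a, b):
    # Plain two-way merge of two sorted lists.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:]); out.extend(b[j:])
    return out

def merge_tree(lists):
    # Merge the lists pairwise, level by level, like the tree of merges
    # in merge sort; each element goes through O(log K) two-way merges,
    # for O(KN * log(K)) total.
    lists = [lst for lst in lists if lst]
    if not lists:
        return []
    while len(lists) > 1:
        nxt = [merge_two(lists[i], lists[i + 1]) for i in range(0, len(lists) - 1, 2)]
        if len(lists) % 2:
            nxt.append(lists[-1])
        lists = nxt
    return lists[0]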

The win can be significant if K is large, comparisons are expensive, and your data has lots of runs.

See https://en.wikipedia.org/wiki/Timsort for an example of a sorting algorithm which tries something like this, and has been finely tuned for a lot of real world use cases.
