As per the question title: the array is of odd length, and the array elements are numbered from 1 to 10.
Example,
3 6 8 1 3 7 7 9 4 1
3 6 8 1 3 7 9 4 1
I was thinking of using heapsort. Since it is an array, merge sort and insertion sort require shifting and would not be as efficient.
7 Answers
#1 (29 votes)
the array elements are numbered from 1 - 10.
With this restriction, counting sort will be far more efficient than any general-purpose sorting algorithm: it is O(n).
#2 (7 votes)
This is my counting sort example:
static int[] countingSort(int[] numbers) {
    // Find the largest value to size the count array.
    int max = numbers[0];
    for (int i = 1; i < numbers.length; i++) {
        if (numbers[i] > max)
            max = numbers[i];
    }
    // sortedNumbers[v] holds the number of occurrences of the value v.
    int[] sortedNumbers = new int[max + 1];
    for (int i = 0; i < numbers.length; i++) {
        sortedNumbers[numbers[i]]++;
    }
    // Overwrite the input with each value repeated by its count.
    int insertPosition = 0;
    for (int i = 0; i <= max; i++) {
        for (int j = 0; j < sortedNumbers[i]; j++) {
            numbers[insertPosition] = i;
            insertPosition++;
        }
    }
    return numbers;
}
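For reference, here is a minimal driver for the method above, using the array from the question. The method is repeated so the snippet compiles on its own, and the class name CountingSortDemo is just for the example:

```java
import java.util.Arrays;

public class CountingSortDemo {
    // Same idea as the answer above: tally each value, then write the
    // values back in ascending order, each repeated by its count.
    static int[] countingSort(int[] numbers) {
        int max = numbers[0];
        for (int i = 1; i < numbers.length; i++) {
            if (numbers[i] > max) {
                max = numbers[i];
            }
        }
        int[] counts = new int[max + 1];
        for (int n : numbers) {
            counts[n]++;
        }
        int insertPosition = 0;
        for (int value = 0; value <= max; value++) {
            for (int j = 0; j < counts[value]; j++) {
                numbers[insertPosition++] = value;
            }
        }
        return numbers;
    }

    public static void main(String[] args) {
        int[] data = {3, 6, 8, 1, 3, 7, 7, 9, 4, 1}; // the array from the question
        System.out.println(Arrays.toString(countingSort(data)));
        // prints [1, 1, 3, 3, 4, 6, 7, 7, 8, 9]
    }
}
```

Note that, as written, the method assumes a non-empty array of non-negative values.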
#3 (2 votes)
If there are only 10 elements, it isn't worth your while to even worry about it. If there are a million, it might start to become significant.
#4 (2 votes)
Edit: A counting sort is probably optimal given the constraint that the elements only range from 1-10. A counting sort applied to this problem will run in O(n) time, while a merge sort (as I recommended below) will run in no better than O(n log n) time.

Parallelizing a counting sort could be interesting. Simply assign a subarray with n/p elements to each processor. Each processor would have its own count array of size 10 (one counter per possible value). This step should take O(n/p) time. Then consolidate all the count arrays into a single array, which should take O(p) time. I haven't fully thought through the last step of the counting sort, where the elements are placed in order, but as long as the elements of the count array are atomic, you could assign n/p sections of the original array to individual processors and achieve some parallelization. There would be contention at individual elements of the count array, however, potentially substantially reducing concurrency. You might be able to assign subsections of the count array to p processors and get back to O(n/p) runtime if the elements are distributed fairly evenly, but you would be limited to 10 processors; if the elements are not distributed evenly, one or more processors could end up doing a larger proportion of the work. This is a great question: can you do a counting sort in O(n/p) time?
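The counting phase described above can be sketched with plain threads. Everything here (the ParallelCountingSort class and the thread-per-chunk layout) is an illustrative assumption, not code from the original answer. Each thread tallies its own chunk into a private count array, so no atomics are needed until the consolidation step:

```java
import java.util.Arrays;

public class ParallelCountingSort {
    // Sort values in 0..maxValue using p threads for the counting phase.
    public static void sort(int[] a, int maxValue, int p) {
        int[][] localCounts = new int[p][maxValue + 1]; // one private count array per thread
        Thread[] threads = new Thread[p];
        int chunk = (a.length + p - 1) / p;
        for (int t = 0; t < p; t++) {
            final int id = t;
            threads[t] = new Thread(() -> {
                int lo = id * chunk;
                int hi = Math.min(a.length, lo + chunk);
                for (int i = lo; i < hi; i++) {
                    localCounts[id][a[i]]++;    // no contention: private array
                }
            });
            threads[t].start();
        }
        for (Thread th : threads) {
            try {
                th.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        // Consolidate the per-thread counts, then write back sequentially.
        int[] counts = new int[maxValue + 1];
        for (int[] lc : localCounts) {
            for (int v = 0; v <= maxValue; v++) {
                counts[v] += lc[v];
            }
        }
        int pos = 0;
        for (int v = 0; v <= maxValue; v++) {
            for (int j = 0; j < counts[v]; j++) {
                a[pos++] = v;
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {3, 6, 8, 1, 3, 7, 7, 9, 4, 1};
        sort(data, 10, 4);
        System.out.println(Arrays.toString(data));
        // prints [1, 1, 3, 3, 4, 6, 7, 7, 8, 9]
    }
}
```

The write-back pass is left sequential here; parallelizing it per value range runs into the uneven-distribution caveat mentioned above.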
Quicksort is a great in-place sorting algorithm that runs fast and conserves memory. However, given that the elements only range from 1-10, if you are sorting large numbers of elements you will end up with long runs of the same value, either initially or at interim points during the sort. Already-ordered arrays or sub-arrays can really bog down Quicksort's performance.
If you don't care about memory, a simple Mergesort would suffice; Mergesort is up there with the fastest standard sorting algorithms.
The default Collections.sort() implementation in Java 7 is a merge sort adapted from TimSort. The default Arrays.sort() implementation in Java 7 for primitive arrays is a dual-pivot Quicksort.
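Both defaults can be invoked directly; this short demo uses only standard-library calls:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class BuiltInSortDemo {
    public static void main(String[] args) {
        // Arrays.sort on a primitive array: dual-pivot Quicksort.
        int[] primitives = {3, 6, 8, 1, 3, 7, 7, 9, 4, 1};
        Arrays.sort(primitives);
        System.out.println(Arrays.toString(primitives));
        // prints [1, 1, 3, 3, 4, 6, 7, 7, 8, 9]

        // Collections.sort on a List: the TimSort-derived merge sort.
        List<Integer> boxed = new ArrayList<>(Arrays.asList(9, 4, 1, 7));
        Collections.sort(boxed);
        System.out.println(boxed);
        // prints [1, 4, 7, 9]
    }
}
```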
If you would like to go parallel, a parallel Quicksort can achieve good results on large arrays with small numbers of processors, but with the same limitations as the sequential Quicksort. PSRS (Parallel Sorting by Regular Sampling) can help scale to larger numbers of processors.
#5 (1 vote)
You should consider looking at this Complexity Comparison Chart for sorting algorithms.
The comparison of sorting algorithms is based on their best, average, and worst-case time and space complexity. Based on this chart, you can see the counting sort approach is best in both space and time complexity. The other comparable method is radix sort.
Worst-case [time, space] complexity of counting sort: [O(n+k), O(k)].
Worst-case [time, space] complexity of radix sort: [O(nk), O(n+k)].
#6 (0 votes)
This is an example of a simple sort (insertion sort):
private static void insertionSort(int[] a) {
    // e.g. [230, 23, 45, 34, 98]; requires java.util.Arrays
    for (int j = 1; j < a.length; j++) {   // start at 1, not 2: a[0] alone is already sorted
        int key = a[j];
        int i = j - 1;
        while (i >= 0 && a[i] > key) {     // i >= 0, not i > 0, so a[0] is compared too
            a[i + 1] = a[i];
            i--;
        }
        a[i + 1] = key;
    }
    System.out.println("sorted array: " + Arrays.toString(a));
}
#7 (0 votes)
Counting sort will be best in this scenario.
Assuming the data are integers in the range 0-k: create an array of size k+1 to keep track of how many times each value appears (3 items with value 0, 4 items with value 1, etc.). Given this count, you can tell the position of an item: all the 1's must come after the 0's, of which there are 3, so the 1's start at item #4. Thus we can scan the items and insert them into their proper positions.
Creating the count array is O(N), and inserting items into their proper positions is O(N). I oversimplified here: there is also a running summation of the counts, and a last-to-first pass over the input, which keeps the sort stable.
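The stable, positional variant sketched in this answer can be written out as follows. This is a generic textbook formulation assuming values in 0..k, and the class name StableCountingSort is just for the example. The prefix-sum pass turns raw counts into positions, and walking the input from last to first keeps equal values in their original relative order:

```java
import java.util.Arrays;

public class StableCountingSort {
    // Stable counting sort for integer values in 0..k.
    static int[] sort(int[] a, int k) {
        int[] count = new int[k + 1];
        for (int v : a) {
            count[v]++;                       // O(n): tally each value
        }
        for (int v = 1; v <= k; v++) {
            count[v] += count[v - 1];         // prefix sums: count[v] = items <= v
        }
        int[] out = new int[a.length];
        for (int i = a.length - 1; i >= 0; i--) {
            out[--count[a[i]]] = a[i];        // last-to-first pass keeps it stable
        }
        return out;
    }

    public static void main(String[] args) {
        int[] data = {3, 6, 8, 1, 3, 7, 7, 9, 4, 1};
        System.out.println(Arrays.toString(sort(data, 10)));
        // prints [1, 1, 3, 3, 4, 6, 7, 7, 8, 9]
    }
}
```

Stability does not matter for bare ints, but it does if the keys carry satellite data, which is why this variant returns a new array rather than overwriting the input.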