I have a large array (10,000,000+ rows) and I need to shuffle each row individually. For example:
[[1,2,3]
[1,2,3]
[1,2,3]
...
[1,2,3]]
to
[[3,1,2]
[2,1,3]
[1,3,2]
...
[1,2,3]]
I'm currently using
map(numpy.random.shuffle, array)
But it's a Python (not NumPy) loop, and it's taking 99% of my execution time. Sadly, the PyPy JIT doesn't implement numpypy.random, so I'm out of luck. Is there any faster way? I'm willing to use any library (pandas, scikit-learn, scipy, theano, etc.), as long as it uses a NumPy ndarray or a derivative.
If not, I suppose I'll resort to Cython or C++.
4 Answers
#1 (score 6)
Here are some ideas:
In [10]: a=np.zeros(shape=(1000,3))
In [12]: a[:,0]=1
In [13]: a[:,1]=2
In [14]: a[:,2]=3
In [17]: %timeit map(np.random.shuffle, a)
100 loops, best of 3: 4.65 ms per loop
In [21]: all_perm=np.array((list(itertools.permutations([0,1,2]))))
In [22]: b=all_perm[np.random.randint(0,6,size=1000)]
In [25]: %timeit (a.flatten()[(b+3*np.arange(1000)[...,np.newaxis]).flatten()]).reshape(a.shape)
1000 loops, best of 3: 393 us per loop
If there are only a few columns, the number of possible permutations is much smaller than the number of rows in the array (in this case, with only 3 columns, there are just 6 possible permutations). A way to make it faster is to generate all the permutations up front, then rearrange each row by randomly picking one of them.
It still appears to be about 10 times faster, even with a much larger array:
#adjust a accordingly
In [32]: b=all_perm[np.random.randint(0,6,size=1000000)]
In [33]: %timeit (a.flatten()[(b+3*np.arange(1000000)[...,np.newaxis]).flatten()]).reshape(a.shape)
1 loops, best of 3: 348 ms per loop
In [34]: %timeit map(np.random.shuffle, a)
1 loops, best of 3: 4.64 s per loop
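Pulling the transcript above together into a single reusable function (a sketch; generalizing the hard-coded `3` to `ncols` is my addition, so it works for any row width):

```python
import itertools
import numpy as np

def shuffle_rows_via_perms(array):
    """Shuffle each row by picking a random column permutation per row.

    Only practical when ncols is small, since it enumerates all
    ncols! column permutations up front.
    """
    nrows, ncols = array.shape
    # Table of all possible column orderings, shape (ncols!, ncols).
    all_perm = np.array(list(itertools.permutations(range(ncols))))
    # One randomly chosen permutation (a row of column indices) per row.
    b = all_perm[np.random.randint(0, len(all_perm), size=nrows)]
    # Convert per-row column indices into flat indices and gather.
    flat_idx = (b + ncols * np.arange(nrows)[:, np.newaxis]).ravel()
    return array.ravel()[flat_idx].reshape(array.shape)

a = np.tile([1, 2, 3], (1000, 1))
shuffled = shuffle_rows_via_perms(a)
```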
#2 (score 7)
If the permutations of the columns are enumerable, then you could do this:
import itertools as IT
import numpy as np

def using_perms(array):
    nrows, ncols = array.shape
    perms = np.array(list(IT.permutations(range(ncols))))
    choices = np.random.randint(len(perms), size=nrows)
    i = np.arange(nrows).reshape(-1, 1)
    return array[i, perms[choices]]

N = 10**7
array = np.tile(np.arange(1, 4), (N, 1))
print(using_perms(array))
yields (something like)
[[3 2 1]
[3 1 2]
[2 3 1]
[1 2 3]
[3 1 2]
...
[1 3 2]
[3 1 2]
[3 2 1]
[2 1 3]
[1 3 2]]
Here is a benchmark comparing it to
def using_shuffle(array):
    # relies on Python 2's eager map; on Python 3 this would need list(map(...))
    map(np.random.shuffle, array)
    return array
In [151]: %timeit using_shuffle(array)
1 loops, best of 3: 7.17 s per loop
In [152]: %timeit using_perms(array)
1 loops, best of 3: 2.78 s per loop
Edit: CT Zhu's method is faster than mine:
def using_Zhu(array):
    nrows, ncols = array.shape
    all_perm = np.array(list(IT.permutations(range(ncols))))
    b = all_perm[np.random.randint(0, all_perm.shape[0], size=nrows)]
    # use ncols (not a hard-coded 3) so this works for any row width
    return (array.flatten()[(b + ncols*np.arange(nrows)[..., np.newaxis]).flatten()]
            ).reshape(array.shape)
In [177]: %timeit using_Zhu(array)
1 loops, best of 3: 1.7 s per loop
Here is a slight variation of Zhu's method which may be even a bit faster:
def using_Zhu2(array):
    nrows, ncols = array.shape
    all_perm = np.array(list(IT.permutations(range(ncols))))
    b = all_perm[np.random.randint(0, all_perm.shape[0], size=nrows)]
    return array.take((b + ncols*np.arange(nrows)[..., np.newaxis]).ravel()).reshape(array.shape)
In [201]: %timeit using_Zhu2(array)
1 loops, best of 3: 1.46 s per loop
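For reference, NumPy has since grown a vectorized primitive for exactly this operation: `Generator.permuted` with an `axis` argument (NumPy ≥ 1.20), which shuffles each slice along the given axis independently, entirely in compiled code:

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.tile([1, 2, 3], (1_000_000, 1))

# Shuffle each row independently, without a Python-level loop.
shuffled = rng.permuted(a, axis=1)
```

On a modern NumPy this is usually the simplest and fastest option, and it works for any number of columns.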
#3 (score 0)
You can also try the apply function in pandas:

import numpy as np
import pandas as pd

df = pd.DataFrame(array)
# np.random.shuffle shuffles in place and returns None, so `or x` makes
# the lambda return the (now shuffled) row
df = df.apply(lambda x: np.random.shuffle(x) or x, axis=1)
Then extract the NumPy array from the DataFrame:

print(df.values)
#4 (score 0)
I believe I have an alternate, equivalent strategy, building upon the previous answers:
import itertools
import numpy as np

# original sequence
a0 = np.arange(3) + 1
# length of original sequence
L = a0.shape[0]
# number of random samples/shuffles (an int, as required by randint's size)
N_samp = 10**4
# from above: all L! column permutations
all_perm = np.array(list(itertools.permutations(np.arange(L))))
b = all_perm[np.random.randint(0, len(all_perm), size=N_samp)]
# index a0 with b for each row of b and collapse down to the expected dimension
a_samp = a0[np.newaxis, b][0]
I'm not sure how this compares performance-wise, but I like it for its readability.
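The permutation-table answers above all assume the number of columns is small enough to enumerate every permutation. A standard fully vectorized alternative that avoids this limit (my addition, not from the answers) is to sort each row by an independent random key:

```python
import numpy as np

a = np.tile([1, 2, 3, 4, 5], (100_000, 1))

# Draw one random key per element; the argsort of each row's keys
# is a uniformly random permutation of that row's column indices.
keys = np.random.rand(*a.shape)
col_order = np.argsort(keys, axis=1)

# Gather each row in its own random column order.
shuffled = np.take_along_axis(a, col_order, axis=1)
```

This costs an O(k log k) sort per row of width k, but runs entirely inside NumPy and needs no permutation table.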