After my previous question on finding toes within each paw, I started loading up other measurements to see how it would hold up. Unfortunately, I quickly ran into a problem with one of the preceding steps: recognizing the paws.
You see, my proof of concept basically takes the maximal pressure of each sensor over time and starts looking at the sum of each row until it finds one that is != 0.0, cutting off again as soon as it finds more than 2 rows that are zero. Then it does the same for the columns. It stores the minimal and maximal row and column values to some index.
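For context, a minimal sketch of what that proof of concept does (the names and the 2-row gap here are illustrative, not my actual script):

import numpy as np

def find_spans(sums, max_gap=2):
    """Sketch: find start/end indices of non-zero runs in a 1D array of row or
    column sums, allowing up to `max_gap` zero entries inside a run."""
    spans, start, gap = [], None, 0
    for i, s in enumerate(sums):
        if s != 0.0:
            if start is None:
                start = i
            gap = 0
        elif start is not None:
            gap += 1
            if gap > max_gap:              # too many empty lines: close this run
                spans.append((start, i - gap))
                start, gap = None, 0
    if start is not None:                  # run that reaches the edge of the plate
        spans.append((start, len(sums) - 1 - gap))
    return spans

def naive_paw_boxes(max_pressure, max_gap=2):
    """Crude bounding boxes: every combination of a row span and a column span.
    This is exactly where grouped/overlapping paws end up merged into one box."""
    row_spans = find_spans(max_pressure.sum(axis=1), max_gap)
    col_spans = find_spans(max_pressure.sum(axis=0), max_gap)
    return [(r0, r1, c0, c1) for r0, r1 in row_spans for c0, c1 in col_spans]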
As you can see in the figure, this works quite well in most cases. However, there are a lot of downsides to this approach (other than being very primitive):
-
Humans can have 'hollow feet' which means there are several empty rows within the footprint itself. Since I feared this could happen with (large) dogs too, I waited for at least 2 or 3 empty rows before cutting off the paw.
This creates a problem if another contact is made in a different column before several empty rows are reached, thus expanding the area. I figure I could compare the columns and, if the gap between them exceeds a certain value, treat them as separate paws.
-
The problem gets worse when the dog is very small or walks at a higher pace. What happens is that the front paw's toes are still making contact, while the hind paw's toes just start to make contact within the same area as the front paw!
With my simple script, it won't be able to split these two, because it would have to determine which frames of that area belong to which paw, whereas currently it only has to look at the maximal values over all frames.
Examples of where it starts going wrong:
So now I'm looking for a better way of recognizing and separating the paws (after which I'll get to the problem of deciding which paw it is!).
Update:
I've been tinkering to get Joe's (awesome!) answer implemented, but I'm having difficulties extracting the actual paw data from my files.
When applied to the maximal pressure image (see above), coded_paws shows me all the different paws. However, the solution goes over each frame (to separate overlapping paws) and sets the attributes of the four Rectangles, such as coordinates and height/width.
I can't figure out how to take these attributes and store them in some variable that I can apply to the measurement data, since I need to know, for each paw, its location during which frames, and to couple this to which paw it is (front/hind, left/right).
So how can I use the Rectangle attributes to extract these values for each paw?
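To make it concrete, this is roughly the bookkeeping I have in mind (just a rough sketch that leans on the find_paws() and paw_file() functions from Joe's answer below; the dictionary keys are placeholders I made up):

def collect_paw_locations(filename):
    """Sketch only: record, for each frame, where find_paws() sees a paw,
    so the contacts can later be matched to front/hind and left/right."""
    locations = []
    for frame_number, (time, frame) in enumerate(paw_file(filename)):
        for dy, dx in find_paws(frame):
            locations.append({'frame': frame_number,
                              'time': time,
                              'x': dx.start,
                              'y': dy.start,
                              'width': dx.stop - dx.start,
                              'height': dy.stop - dy.start})
    return locations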
I've put the measurements I used in this question in my public Dropbox folder (example 1, example 2, example 3). For anyone interested, I also set up a blog to keep you up to date :-)
3 Answers
#1
345
If you're just wanting (semi) contiguous regions, there's already an easy implementation in Python: SciPy's ndimage.morphology module. This is a fairly common image morphology operation.
Basically, you have 5 steps:
def find_paws(data, smooth_radius=5, threshold=0.0001):
    data = sp.ndimage.uniform_filter(data, smooth_radius)
    thresh = data > threshold
    filled = sp.ndimage.morphology.binary_fill_holes(thresh)
    coded_paws, num_paws = sp.ndimage.label(filled)
    data_slices = sp.ndimage.find_objects(coded_paws)
    return data_slices
-
Blur the input data a bit to make sure the paws have a continuous footprint. (It would be more efficient to just use a larger kernel (the structure kwarg to the various scipy.ndimage.morphology functions), but this isn't quite working properly for some reason...)
-
Threshold the array so that you have a boolean array of places where the pressure is over some threshold value (i.e. thresh = data > value).
-
Fill any internal holes, so that you have cleaner regions (filled = sp.ndimage.morphology.binary_fill_holes(thresh)).
-
Find the separate contiguous regions (coded_paws, num_paws = sp.ndimage.label(filled)). This returns an array with the regions coded by number (each region is a contiguous area of a unique integer, 1 up to the number of paws, with zeros everywhere else).
-
Isolate the contiguous regions using data_slices = sp.ndimage.find_objects(coded_paws). This returns a list of tuples of slice objects, so you could get the region of the data for each paw with [data[x] for x in data_slices]. Instead, we'll draw a rectangle based on these slices, which takes slightly more work (see the short sketch just below this list).
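For example, a small sketch (not part of the main example below; it assumes frame is a single 2D array of pressures) of how to turn those slices into corner/size numbers and per-paw sub-arrays:

# Sketch: convert the slice pairs from find_paws() into corner/size numbers
# and pull out the sub-array of pressure data for each detected paw.
for dy, dx in find_paws(frame):
    print('corner (x, y): (%i, %i), width: %i, height: %i'
          % (dx.start, dy.start, dx.stop - dx.start, dy.stop - dy.start))
    paw_data = frame[dy, dx]   # 2D pressure data for just this paw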
The two animations below show your "Overlapping Paws" and "Grouped Paws" example data. This method seems to be working perfectly. (And for whatever it's worth, this runs much more smoothly than the GIF images below on my machine, so the paw detection algorithm is fairly fast...)
Here's a full example (now with much more detailed explanations). The vast majority of this is reading the input and making an animation. The actual paw detection is only 5 lines of code.
import numpy as np
import scipy as sp
import scipy.ndimage
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

def animate(input_filename):
    """Detects paws and animates the position and raw data of each frame
    in the input file"""
    # With matplotlib, it's much, much faster to just update the properties
    # of a display object than it is to create a new one, so we'll just update
    # the data and position of the same objects throughout this animation...

    infile = paw_file(input_filename)

    # Since we're making an animation with matplotlib, we need
    # ion() instead of show()...
    plt.ion()
    fig = plt.figure()
    ax = fig.add_subplot(111)
    fig.suptitle(input_filename)

    # Make an image based on the first frame that we'll update later
    # (The first frame is never actually displayed)
    im = ax.imshow(infile.next()[1])

    # Make 4 rectangles that we can later move to the position of each paw
    rects = [Rectangle((0,0), 1,1, fc='none', ec='red') for i in range(4)]
    [ax.add_patch(rect) for rect in rects]

    title = ax.set_title('Time 0.0 ms')

    # Process and display each frame
    for time, frame in infile:
        paw_slices = find_paws(frame)

        # Hide any rectangles that might be visible
        [rect.set_visible(False) for rect in rects]

        # Set the position and size of a rectangle for each paw and display it
        for slice, rect in zip(paw_slices, rects):
            dy, dx = slice
            rect.set_xy((dx.start, dy.start))
            rect.set_width(dx.stop - dx.start + 1)
            rect.set_height(dy.stop - dy.start + 1)
            rect.set_visible(True)

        # Update the image data and title of the plot
        title.set_text('Time %0.2f ms' % time)
        im.set_data(frame)
        im.set_clim([frame.min(), frame.max()])
        fig.canvas.draw()

def find_paws(data, smooth_radius=5, threshold=0.0001):
    """Detects and isolates contiguous regions in the input array"""
    # Blur the input data a bit so the paws have a continuous footprint
    data = sp.ndimage.uniform_filter(data, smooth_radius)
    # Threshold the blurred data (this needs to be a bit > 0 due to the blur)
    thresh = data > threshold
    # Fill any interior holes in the paws to get cleaner regions...
    filled = sp.ndimage.morphology.binary_fill_holes(thresh)
    # Label each contiguous paw
    coded_paws, num_paws = sp.ndimage.label(filled)
    # Isolate the extent of each paw
    data_slices = sp.ndimage.find_objects(coded_paws)
    return data_slices

def paw_file(filename):
    """Returns an iterator that yields the time and data in each frame

    The infile is an ascii file of timesteps formatted similar to this:

    Frame 0 (0.00 ms)
    0.0 0.0 0.0
    0.0 0.0 0.0

    Frame 1 (0.53 ms)
    0.0 0.0 0.0
    0.0 0.0 0.0
    ...
    """
    with open(filename) as infile:
        while True:
            try:
                time, data = read_frame(infile)
                yield time, data
            except StopIteration:
                break

def read_frame(infile):
    """Reads a frame from the infile."""
    frame_header = infile.next().strip().split()
    time = float(frame_header[-2][1:])
    data = []
    while True:
        line = infile.next().strip().split()
        if line == []:
            break
        data.append(line)
    return time, np.array(data, dtype=np.float)

if __name__ == '__main__':
    animate('Overlapping paws.bin')
    animate('Grouped up paws.bin')
    animate('Normal measurement.bin')
Update: As far as identifying which paw is in contact with the sensor at what times, the simplest solution is to just do the same analysis, but use all of the data at once. (i.e. stack the input into a 3D array, and work with it, instead of the individual time frames.) Because SciPy's ndimage functions are meant to work with n-dimensional arrays, we don't have to modify the original paw-finding function at all.
# This uses functions (and imports) in the previous code example!!
def paw_regions(infile):
    # Read in and stack all data together into a 3D array
    data, time = [], []
    for t, frame in paw_file(infile):
        time.append(t)
        data.append(frame)
    data = np.dstack(data)
    time = np.asarray(time)

    # Find the paw impacts (find_paws returns the bounding slices)
    data_slices = find_paws(data, smooth_radius=4)

    # Sort by time of initial paw impact... This way we can determine which
    # paws are which relative to the first paw with a simple modulo 4.
    # (Assuming a 4-legged dog, where all 4 paws contacted the sensor)
    data_slices.sort(key=lambda dat_slice: dat_slice[2].start)

    # Plot up a simple analysis
    fig = plt.figure()
    ax1 = fig.add_subplot(2,1,1)
    annotate_paw_prints(time, data, data_slices, ax=ax1)
    ax2 = fig.add_subplot(2,1,2)
    plot_paw_impacts(time, data_slices, ax=ax2)
    fig.suptitle(infile)

def plot_paw_impacts(time, data_slices, ax=None):
    if ax is None:
        ax = plt.gca()

    # Group impacts by paw...
    for i, dat_slice in enumerate(data_slices):
        dx, dy, dt = dat_slice
        paw = i%4 + 1

        # Draw a bar over the time interval where each paw is in contact
        ax.barh(bottom=paw, width=time[dt].ptp(), height=0.2,
                left=time[dt].min(), align='center', color='red')
    ax.set_yticks(range(1, 5))
    ax.set_yticklabels(['Paw 1', 'Paw 2', 'Paw 3', 'Paw 4'])
    ax.set_xlabel('Time (ms) Since Beginning of Experiment')
    ax.yaxis.grid(True)
    ax.set_title('Periods of Paw Contact')

def annotate_paw_prints(time, data, data_slices, ax=None):
    if ax is None:
        ax = plt.gca()

    # Display all paw impacts (sum over time)
    ax.imshow(data.sum(axis=2).T)

    # Annotate each impact with which paw it is
    # (Relative to the first paw to hit the sensor)
    x, y = [], []
    for i, region in enumerate(data_slices):
        dx, dy, dz = region
        # Get x,y center of slice...
        x0 = 0.5 * (dx.start + dx.stop)
        y0 = 0.5 * (dy.start + dy.stop)
        x.append(x0); y.append(y0)

        # Annotate the paw impacts
        ax.annotate('Paw %i' % (i%4 +1), (x0, y0),
                    color='red', ha='center', va='bottom')

    # Plot line connecting paw impacts
    ax.plot(x,y, '-wo')
    ax.axis('image')
    ax.set_title('Order of Steps')
#2
3
I'm no expert in image detection, and I don't know Python, but I'll give it a whack...
To detect individual paws, you should first only select everything with a pressure greater than some small threshold, very close to no pressure at all. Every pixel/point that is above this should be "marked." Then, every pixel adjacent to all "marked" pixels becomes marked, and this process is repeated a few times. This forms totally connected masses, so you have distinct objects. Then, each "object" has a minimum and maximum x and y value, so bounding boxes can be packed neatly around them.
Pseudocode:
1. (MARK) ALL PIXELS ABOVE (0.5)
2. (MARK) ALL PIXELS (ADJACENT) TO (MARK) PIXELS
3. REPEAT (STEP 2) (5) TIMES
4. SEPARATE EACH TOTALLY CONNECTED MASS INTO A SINGLE OBJECT
5. MARK THE EDGES OF EACH OBJECT, AND CUT APART TO FORM SLICES.
That should about do it.
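A hedged Python/SciPy sketch of that pseudocode (not the answerer's code; the 0.5 threshold and the 5 passes are just the placeholder values from the pseudocode):

import scipy.ndimage as ndimage

def mark_and_box(frame, threshold=0.5, n_passes=5):
    """Sketch of the pseudocode: mark pixels over a threshold, grow the marked
    set a few times so nearby pixels join up, then split the connected masses
    into separate objects and take their bounding slices."""
    marked = frame > threshold                                      # step 1
    marked = ndimage.binary_dilation(marked, iterations=n_passes)   # steps 2-3
    objects, n_objects = ndimage.label(marked)                      # step 4
    return ndimage.find_objects(objects)                            # step 5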
#3
0
Note: I say pixel, but this could be regions using an average of the pixels. Optimization is another issue...
Sounds like you need to analyze a function (pressure over time) for each pixel and determine where the function turns (when it changes by more than X in the other direction, it is considered a turn, to counter noise/errors).
If you know at which frames it turns, you will know the frame where the pressure was hardest and the frame where it was softest between the two paws. In theory, you would then know the two frames where the paws pressed hardest and could calculate an average of those intervals.
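A hedged sketch of what that turning-point detection could look like (my reading of the idea, not the answerer's code; the per-frame pressure curve and the threshold X are assumptions):

import numpy as np

def turning_frames(pressure_per_frame, min_change=0.1):
    """Sketch: find the frames where the pressure curve reverses direction by
    more than `min_change` (the 'X' above), so the minimum between two peaks
    can be used to split two overlapping contacts."""
    turns, direction = [], 0
    for i, change in enumerate(np.diff(pressure_per_frame)):
        if abs(change) < min_change:       # ignore small wiggles (noise)
            continue
        new_direction = 1 if change > 0 else -1
        if direction != 0 and new_direction != direction:
            turns.append(i)                # the curve turned at this frame
        direction = new_direction
    return turns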
after which I'll get to the problem of deciding which paw it is!
This is the same as before: knowing when each paw applies the most pressure helps you decide.