From the NIfTI header it's easy to get the affine matrix. The DICOM header, however, contains lots of entries, and it's unclear to me which of them describe the transformation of which parameter into which new space.
I have found a tutorial which is quite detailed, but I can't find the entries it refers to. Also, that tutorial is written for Python, not Matlab. It lists these header entries:
Entries needed:
Image Position (0020,0032)
Image Orientation (0020,0037)
Pixel Spacing (0028,0030)
I can't find these if I load the header with dicominfo(). Maybe they are vendor-specific, or maybe they are nested away somewhere in the struct. Also, the Pixel Spacing they refer to consists of only two values, so I think their tutorial will only work for single-slice transformations; more header entries about slice thickness and slice gap would be needed. It's also not easy to calculate the correct transformation for the z coordinates.
Does anybody know how to find these entries, or how to transform image coordinates to patient coordinates using other information from a DICOM header? I use Matlab.
1 Answer
#1
OK, so they were nested away in what might be a vendor-specific entry of the struct. When loaded in Matlab, the nesting starts at inf.PerFrameFunctionalGroupsSequence.Item_X, where X is the frame number, followed by some more nesting that is straightforward/self-explanatory enough that I won't spell it out here; just search for the entries you need there. The slice spacing is called SpacingBetweenSlices (or SliceThickness in the single-slice case), the pixel spacing is called PixelSpacing, and then there are ImagePositionPatient for the translation and ImageOrientationPatient for the rotation. Below is the code I wrote while following the steps from the nipy link.
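For example, to see what is stored per frame you can do something like this (a quick sketch; it assumes your file has the same per-frame layout as mine):
%sketch: poke around the per-frame nesting to find the tags you need
info = dicominfo(filename);%or with your vendor-specific dictionary, as below
frame1 = info.PerFrameFunctionalGroupsSequence.Item_1;
disp(fieldnames(frame1));%e.g. PlanePositionSequence, PlaneOrientationSequence, PixelMeasuresSequence
disp(frame1.PlanePositionSequence.Item_1.ImagePositionPatient);%translation
disp(frame1.PlaneOrientationSequence.Item_1.ImageOrientationPatient);%rotation (direction cosines)
disp(frame1.PixelMeasuresSequence.Item_1.PixelSpacing);%in-plane scaling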
What happens is: you load the direction cosines into a rotation matrix to align the basis vectors, you load the pixel spacing and slice spacing into a matrix to scale the basis vectors, and you load the image position to translate the new coordinate system. Finding the direction cosines for the z direction takes some calculation, because DICOM apparently was designed for 2D images. In the single-slice case the z direction cosine is the unit vector orthogonal to the x and y direction cosines (the cross product of the two), and in the multi-slice case you can calculate it from the differences in the translations between the slices. After this you still need to apply the transformation, which is also not immediately straightforward.
%load the header (note: the variable name 'inf' shadows Matlab's built-in inf)
inf = dicominfo(filename, 'dictionary', yourvendorspecificdictfilehere);
nSl = double(inf.MRSeriesNrOfSlices);%number of slices (vendor-specific; requires the dictionary passed above)
nY = double(inf.Height);
nX = double(inf.Width);
T1 = double(inf.PerFrameFunctionalGroupsSequence.Item_1.PlanePositionSequence.Item_1.ImagePositionPatient);
%load pixel spacing / scaling / resolution
RowColSpacing = double(inf.PerFrameFunctionalGroupsSequence.Item_1.PixelMeasuresSequence.Item_1.PixelSpacing);
%or inf.PerFrameFunctionalGroupsSequence.Item_1.PrivatePerFrameSq.Item_1.Pixel_Spacing;
dx = double(RowColSpacing(1));
dX = [1; 1; 1].*dx;%cols
dy = double(RowColSpacing(2));
dY = [1; 1; 1].*dy;%rows
dz = double(inf.SpacingBetweenSlices);%or inf.PerFrameFunctionalGroupsSequence.Item_1.PrivatePerFrameSq.Item_1.SliceThickness; %thickness or spacing?
dZ = [1; 1; 1].*dz;
%directional cosines per basis vector
dircosXY = double(inf.PerFrameFunctionalGroupsSequence.Item_1.PlaneOrientationSequence.Item_1.ImageOrientationPatient);
dircosX = dircosXY(1:3);
dircosY = dircosXY(4:6);
if nSl == 1
    dircosZ = cross(dircosX,dircosY);%orthogonal to the other two direction cosines
else
    N = nSl;%or double(inf.NumberOfFrames)
    %position of the last slice, via dynamic field names instead of eval
    TN = double(inf.PerFrameFunctionalGroupsSequence.(sprintf('Item_%d', N)).PlanePositionSequence.Item_1.ImagePositionPatient);
    dircosZ = ((TN - T1)./(N - 1))./dZ;%unit vector along the slice direction
end
%all dircos together
dimensionmixing = [dircosX dircosY dircosZ];
%all spacing together
dimensionscaling = [dX dY dZ];
%mixing and spacing of dimensions together
R = dimensionmixing.*dimensionscaling;%maps from image basis to patient basis (same as dimensionmixing*diag([dx dy dz]))
%offset and R together
A = [[R T1];[0 0 0 1]];
%you probably want to switch X and Y
%(depending on how you load your dicom into a matlab array)
Aold = A;
A(:,1) = Aold(:,2);
A(:,2) = Aold(:,1);
This results in this affine formula:
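Written out in terms of the variable names above (a reconstruction in my notation):

$$
A =
\begin{bmatrix}
\mathrm{dircosX}\,dx & \mathrm{dircosY}\,dy & \mathrm{dircosZ}\,dz & T_1 \\
0 & 0 & 0 & 1
\end{bmatrix},
\qquad
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}_{\mathrm{patient}}
= A
\begin{bmatrix} i \\ j \\ k \\ 1 \end{bmatrix}
$$

where i, j and k are zero-based column, row and slice indices (before the optional X/Y swap) and T1 is the ImagePositionPatient of the first frame.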
So basically I followed this tutorial. The biggest struggle was getting the Z direction and the translation correct. Also, finding, identifying, and converting the correct entries was not straightforward for me. I do think my answer adds something to that tutorial though, because it was pretty hard to find the entries they refer to, and now there is some Matlab code for getting the affine matrix from a DICOM header. Before using the resulting affine matrix you might also need to find the Z coordinates for all of your frames, which might not be trivial if your dataset has more than four dimensions (dicomread puts all higher dimensions into one big fourth dimension).
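For completeness, applying the affine to one voxel could look something like this (a sketch, not part of the original code; it assumes the column-swapped A above, i.e. index order [row; column; slice], and subtracts 1 because Matlab indices are one-based while the affine expects zero-based indices):
%sketch: map one voxel of the Matlab array to patient coordinates
r = 10; c = 20; s = 3;            %hypothetical (row, column, slice) index
voxel = [r - 1; c - 1; s - 1; 1]; %zero-based homogeneous index
patient = A*voxel;                %4x1 homogeneous patient coordinate
xyz = patient(1:3);               %x, y, z in mm in the patient coordinate system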
-Edit- Corrected Z direction and translation of the transformation