I am currently working with Bezier curves and surfaces to draw the famous Utah teapot. Using Bezier patches of 16 control points, I have been able to draw the teapot and display it using a 'world to camera' function which gives the ability to rotate the resulting teapot, and am currently using an orthographic projection.
The result is that I have a 'flat' teapot, which is expected as the purpose of an orthographic projection is to preserve parallel lines.
However, I would like to use a perspective projection to give the teapot depth. My question is: how does one take the 3D xyz vertex returned from the 'world to camera' function and convert it into a 2D coordinate? I want to use the projection plane at z=0, and allow the user to determine the focal length and image size using the arrow keys on the keyboard.
I am programming this in Java and have all of the input event handlers set up, and have also written a matrix class which handles basic matrix multiplication. I've been reading through Wikipedia and other resources for a while, but I can't quite get a handle on how one performs this transformation.
10 Answers
#1 (81 votes)
I see this question is a bit old, but I decided to give an answer anyway for those who find this question by searching.
The standard way to represent 2D/3D transformations nowadays is by using homogeneous coordinates. [x,y,w] for 2D, and [x,y,z,w] for 3D. Since you have three axes in 3D as well as translation, that information fits perfectly in a 4x4 transformation matrix. I will use column-major matrix notation in this explanation. All matrices are 4x4 unless noted otherwise.
The stages from 3D points to a rasterized point, line or polygon look like this:
- Transform your 3D points with the inverse camera matrix, followed by whatever transformations they need. If you have surface normals, transform them as well, but with w set to zero, as you don't want to translate normals. The matrix you transform normals with must be isotropic; scaling and shearing make the normals malformed.
- Transform the point with a clip space matrix. This matrix scales x and y with the field-of-view and aspect ratio, scales z by the near and far clipping planes, and plugs the 'old' z into w. After the transformation, you should divide x, y and z by w. This is called the perspective divide.
- Now your vertices are in clip space, and you want to perform clipping so you don't render any pixels outside the viewport bounds. Sutherland-Hodgman clipping is the most widespread clipping algorithm in use.
- Transform x and y with respect to w and the half-width and half-height. Your x and y coordinates are now in viewport coordinates. w is discarded, but 1/w and z are usually saved because 1/w is required to do perspective-correct interpolation across the polygon surface, and z is stored in the z-buffer and used for depth testing.
This stage is the actual projection, because z isn't used as a component in the position any more.
The algorithms:
Calculation of field-of-view
This calculates the field-of-view. Whether tan takes radians or degrees is irrelevant, but angle must match. Notice that the result reaches infinity as angle nears 180 degrees. This is a singularity, as it is impossible to have a focal point that wide. If you want numerical stability, keep angle less than or equal to 179 degrees.
fov = 1.0 / tan(angle/2.0)
Also notice that 1.0 / tan(45) = 1. Someone else here suggested just dividing by z. The result here is clear: you would get a 90 degree FOV and an aspect ratio of 1:1. Using homogeneous coordinates like this has several other advantages as well; we can, for example, perform clipping against the near and far planes without treating it as a special case.
Calculation of the clip matrix
This is the layout of the clip matrix. aspectRatio is Width/Height, so the FOV for the x component is scaled based on the FOV for y. far and near are the distances to the far and near clipping planes.
[fov * aspectRatio][ 0 ][ 0 ][ 0 ]
[ 0 ][ fov ][ 0 ][ 0 ]
[ 0 ][ 0 ][(far+near)/(far-near) ][ 1 ]
[ 0 ][ 0 ][(2*near*far)/(near-far)][ 0 ]
Screen Projection
After clipping, this is the final transformation to get our screen coordinates.
new_x = (x * Width ) / (2.0 * w) + halfWidth;
new_y = (y * Height) / (2.0 * w) + halfHeight;
Trivial example implementation in C++
#include <vector>
#include <cmath>
#include <stdexcept>
#include <algorithm>
struct Vector
{
float x, y, z, w;
Vector() : x(0),y(0),z(0),w(1){}
Vector(float a, float b, float c) : x(a),y(b),z(c),w(1){}
/* Assume proper operator overloads here, with vectors and scalars */
float Length() const
{
return std::sqrt(x*x + y*y + z*z);
}
Vector Unit() const
{
const float epsilon = 1e-6;
float mag = Length();
if(mag < epsilon){
std::out_of_range e("");
throw e;
}
return *this / mag;
}
};
inline float Dot(const Vector& v1, const Vector& v2)
{
return v1.x*v2.x + v1.y*v2.y + v1.z*v2.z;
}
class Matrix
{
public:
Matrix() : data(16)
{
Identity();
}
void Identity()
{
std::fill(data.begin(), data.end(), float(0));
data[0] = data[5] = data[10] = data[15] = 1.0f;
}
float& operator[](size_t index)
{
if(index >= 16){
std::out_of_range e("");
throw e;
}
return data[index];
}
Matrix operator*(const Matrix& m) const
{
Matrix dst;
std::fill(dst.data.begin(), dst.data.end(), 0.0f); /* start the product from zero, not identity */
int col;
for(int y=0; y<4; ++y){
col = y*4;
for(int x=0; x<4; ++x){
for(int i=0; i<4; ++i){
dst[x+col] += m[i+col]*data[x+i*4];
}
}
}
return dst;
}
Matrix& operator*=(const Matrix& m)
{
*this = (*this) * m;
return *this;
}
/* The interesting stuff */
void SetupClipMatrix(float fov, float aspectRatio, float near, float far)
{
Identity();
float f = 1.0f / std::tan(fov * 0.5f);
data[0] = f*aspectRatio;
data[5] = f;
data[10] = (far+near) / (far-near);
data[11] = 1.0f; /* this 'plugs' the old z into w */
data[14] = (2.0f*near*far) / (near-far);
data[15] = 0.0f;
}
std::vector<float> data;
};
inline Vector operator*(const Vector& v, const Matrix& m)
{
Vector dst;
dst.x = v.x*m[0] + v.y*m[4] + v.z*m[8 ] + v.w*m[12];
dst.y = v.x*m[1] + v.y*m[5] + v.z*m[9 ] + v.w*m[13];
dst.z = v.x*m[2] + v.y*m[6] + v.z*m[10] + v.w*m[14];
dst.w = v.x*m[3] + v.y*m[7] + v.z*m[11] + v.w*m[15];
return dst;
}
typedef std::vector<Vector> VecArr;
VecArr ProjectAndClip(int width, int height, float near, float far, const VecArr& vertex)
{
float halfWidth = (float)width * 0.5f;
float halfHeight = (float)height * 0.5f;
float aspect = (float)width / (float)height;
Vector v;
Matrix clipMatrix;
VecArr dst;
clipMatrix.SetupClipMatrix(60.0f * (M_PI / 180.0f), aspect, near, far);
/* Here, after the perspective divide, you perform Sutherland-Hodgeman clipping
by checking if the x, y and z components are inside the range of [-w, w].
One checks each vector component seperately against each plane. Per-vertex
data like colours, normals and texture coordinates need to be linearly
interpolated for clipped edges to reflect the change. If the edge (v0,v1)
is tested against the positive x plane, and v1 is outside, the interpolant
becomes: (v1.x - w) / (v1.x - v0.x)
I skip this stage all together to be brief.
*/
for(VecArr::iterator i=vertex.begin(); i!=vertex.end(); ++i){
v = (*i) * clipMatrix;
v /= v.w; /* Don't get confused here. I assume the divide leaves v.w alone.*/
dst.push_back(v);
}
/* TODO: Clipping here */
for(VecArr::iterator i=dst.begin(); i!=dst.end(); ++i){
i->x = (i->x * (float)width) / (2.0f * i->w) + halfWidth;
i->y = (i->y * (float)height) / (2.0f * i->w) + halfHeight;
}
return dst;
}
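Since the question is in Java, here is a minimal sketch of the same two steps (build the clip matrix, then divide by w and map to screen coordinates) in that language. The class and method names (PerspectiveSketch, makeClipMatrix, project) are only placeholders, the matrix is a plain column-major float[16] like the data array above, the input vertex is assumed to already be in camera space with w = 1, and clipping is left out for brevity:
final class PerspectiveSketch {
    // Builds the clip matrix described above from the vertical FOV (radians),
    // the aspect ratio (width/height) and the near/far plane distances.
    static float[] makeClipMatrix(float fovRadians, float aspectRatio, float near, float far) {
        float f = (float) (1.0 / Math.tan(fovRadians * 0.5));
        float[] m = new float[16];               // starts as all zeros
        m[0]  = f * aspectRatio;                 // scale x
        m[5]  = f;                               // scale y
        m[10] = (far + near) / (far - near);     // map z to the clip range
        m[11] = 1.0f;                            // plug the old z into w
        m[14] = (2.0f * near * far) / (near - far);
        return m;
    }

    // Takes a camera-space vertex (x, y, z, 1), multiplies it by the clip
    // matrix, and applies the screen projection from above.
    // Returns {screenX, screenY}.
    static float[] project(float x, float y, float z, float[] m, int width, int height) {
        float cx = x * m[0] + y * m[4] + z * m[8]  + m[12];
        float cy = x * m[1] + y * m[5] + z * m[9]  + m[13];
        float cw = x * m[3] + y * m[7] + z * m[11] + m[15];
        // new_x = (x * Width) / (2 * w) + halfWidth, and likewise for y.
        float sx = (cx * width)  / (2.0f * cw) + width  * 0.5f;
        float sy = (cy * height) / (2.0f * cw) + height * 0.5f;
        return new float[] { sx, sy };
    }
}
For a 60 degree FOV you would call project(vx, vy, vz, makeClipMatrix((float) Math.toRadians(60), width / (float) height, near, far), width, height) for each camera-space vertex; the clip matrix and the screen formulas are exactly the ones listed above.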
If you are still pondering this, the OpenGL specification is a really nice reference for the maths involved. The DevMaster forums at http://www.devmaster.net/ have a lot of nice articles related to software rasterizers as well.
#2 (8 votes)
I think this will probably answer your question. Here's what I wrote there:
Here's a very general answer. Say the camera's at (Xc, Yc, Zc) and the point you want to project is P = (X, Y, Z). The distance from the camera to the 2D plane onto which you are projecting is F (so the equation of the plane is Z-Zc=F). The 2D coordinates of P projected onto the plane are (X', Y').
Then, very simply:
X' = ((X - Xc) * (F/(Z - Zc))) + Xc
Y' = ((Y - Yc) * (F/(Z - Zc))) + Yc
If your camera is the origin, then this simplifies to:
X' = X * (F/Z)
Y' = Y * (F/Z)
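A tiny Java sketch of these formulas for the camera-at-origin case (the class and method names are only for illustration; F is the distance to the projection plane):
final class SimpleProjector {
    // Projects a camera-space point (x, y, z) onto the plane at distance F
    // in front of a camera at the origin looking down +z. The caller must
    // make sure z > 0; points at or behind the camera cannot be projected.
    static float[] projectPoint(float x, float y, float z, float F) {
        float scale = F / z;
        return new float[] { x * scale, y * scale };
    }
}
For example, a point at (1, 2, 4) with F = 2 projects to (0.5, 1.0).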
#3 (5 votes)
You can project a 3D point to 2D using Commons Math: The Apache Commons Mathematics Library, with just two classes.
Example for Java Swing.
import org.apache.commons.math3.geometry.euclidean.threed.Plane;
import org.apache.commons.math3.geometry.euclidean.threed.Vector3D;
import java.awt.Graphics;
import java.awt.Graphics2D;
Plane planeX = new Plane(new Vector3D(1, 0, 0));
Plane planeY = new Plane(new Vector3D(0, 1, 0)); // Must be orthogonal plane of planeX
void drawPoint(Graphics2D g2, Vector3D v) {
g2.drawLine(0, 0,
(int) (world.unit * planeX.getOffset(v)),
(int) (world.unit * planeY.getOffset(v)));
}
protected void paintComponent(Graphics g) {
super.paintComponent(g);
Graphics2D g2 = (Graphics2D) g; // obtain the 2D graphics context used by drawPoint
drawPoint(g2, new Vector3D(2, 1, 0));
drawPoint(g2, new Vector3D(0, 2, 0));
drawPoint(g2, new Vector3D(0, 0, 2));
drawPoint(g2, new Vector3D(1, 1, 1));
}
Now you only need to update planeX and planeY to change the perspective projection.
#4 (3 votes)
To obtain the perspective-corrected co-ordinates, just divide by the z co-ordinate:
xc = x / z
yc = y / z
The above works assuming that the camera is at (0, 0, 0) and you are projecting onto the plane at z = 1 -- you need to translate the co-ords relative to the camera otherwise.
There are some complications for curves, insofar as projecting the points of a 3D Bezier curve will not in general give you the same points as drawing a 2D Bezier curve through the projected points.
#5 (2 votes)
I'm not sure at what level you're asking this question. It sounds as if you've found the formulas online, and are just trying to understand what they do. On that reading of your question, I offer:
- Imagine a ray from the viewer (at point V) directly towards the center of the projection plane (call it C).
- Imagine a second ray from the viewer to a point in the image (P) which also intersects the projection plane at some point (Q)
- The viewer and the two points of intersection on the view plane form a triangle (VCQ); the sides are the two rays and the line between the points in the plane.
- The formulas use this triangle to find the coordinates of Q, which is where the projected pixel will go; the resulting relation is written out below the list.
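Written out with this answer's points, and assuming the view direction is along z with the plane at distance F from V (so C = V + (0, 0, F)), the similar triangles give:
Qx = Vx + (Px - Vx) * F / (Pz - Vz)
Qy = Vy + (Py - Vy) * F / (Pz - Vz)
which is the same relation as in the earlier answers, just expressed relative to the viewer.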
#6 (2 votes)
My previous answer was wrong and garbage.
Here is a redo:
Looking at the screen from the top, you get the x and z axes.
Looking at the screen from the side, you get the y and z axes.
Using trigonometry, calculate the focal lengths of the top and side views: the focal length is the distance between the eye and the middle of the screen, and it is determined by the screen's field of view. This makes the shape of two right triangles back to back.
hw = screen_width / 2
hh = screen_height / 2
fl_top = hw / tan(θ/2)
fl_side = hh / tan(θ/2)
Then take the average focal length.
fl_average = (fl_top + fl_side) / 2
Now calculate the new x and new y with basic arithmetic, since the larger right triangle made from the 3d point and the eye point is similar to the smaller triangle made by the 2d point and the eye point.
x' = (x * fl_top) / (z + fl_top)
y' = (y * fl_top) / (z + fl_top)
Or you can simply set
x' = x / (z + 1)
and
y' = y / (z + 1)
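A small Java sketch of the focal-length approach above (assuming theta is the field of view in radians and z is measured from the screen plane, as in the formulas above; the class and method names are only illustrative):
final class ScreenFocalProjector {
    // fl = hw / tan(theta / 2): the focal length in pixels derived from the
    // horizontal field of view and the screen width.
    static float focalLength(float screenWidthPx, float thetaRadians) {
        return (screenWidthPx * 0.5f) / (float) Math.tan(thetaRadians * 0.5f);
    }

    // x' = (x * fl) / (z + fl) and y' = (y * fl) / (z + fl), as above.
    static float[] project(float x, float y, float z, float fl) {
        float s = fl / (z + fl);
        return new float[] { x * s, y * s };
    }
}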
#7 (1 vote)
All of the answers address the question posed in the title. However, I would like to add a caveat that is implicit in the text. Bézier patches are used to represent the surface, but you cannot just transform the points of the patch and tessellate the patch into polygons, because this will result in distorted geometry. You can, however, tessellate the patch first into polygons using a transformed screen tolerance and then transform the polygons, or you can convert the Bézier patches to rational Bézier patches, then tessellate those using a screen-space tolerance. The former is easier, but the latter is better for a production system.
I suspect that you want the easier way. For this, you would scale the screen tolerance by the norm of the Jacobian of the inverse perspective transformation and use that to determine the amount of tessellation that you need in model space (it might be easier to compute the forward Jacobian, invert that, then take the norm). Note that this norm is position-dependent, and you may want to evaluate this at several locations, depending on the perspective. Also remember that since the projective transformation is rational, you need to apply the quotient rule to compute the derivatives.
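As a very rough Java sketch of the easier route, here is one way a screen-space tolerance could be mapped back to model space for the simple pinhole projection x' = f*x/z, y' = f*y/z (not the full homogeneous pipeline). The Frobenius norm and the method name are my own choices for illustration; a production system would want a tighter bound, evaluated at several locations as noted above:
final class TessellationTolerance {
    // Forward Jacobian of (f*x/z, f*y/z), obtained with the quotient rule:
    //   [ f/z   0    -f*x/z^2 ]
    //   [ 0    f/z   -f*y/z^2 ]
    // Its Frobenius norm upper-bounds how much the projection stretches
    // lengths at (x, y, z), so dividing the screen tolerance by it gives a
    // conservative model-space tolerance to drive the tessellation.
    static float modelSpaceTolerance(float screenTol, float f, float x, float y, float z) {
        float a = f / z;
        float b = -f * x / (z * z);
        float c = -f * y / (z * z);
        float norm = (float) Math.sqrt(2 * a * a + b * b + c * c);
        return screenTol / norm;
    }
}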
#8 (0 votes)
I know it's an old topic, but your illustration is not correct; the source code sets up the clip matrix correctly:
[fov * aspectRatio][ 0 ][ 0 ][ 0 ]
[ 0 ][ fov ][ 0 ][ 0 ]
[ 0 ][ 0 ][(far+near)/(far-near) ][(2*near*far)/(near-far)]
[ 0 ][ 0 ][ 1 ][ 0 ]
Some additions to your answer:
This clip matrix works only if you are projecting onto a static 2D plane; if you want to add camera movement and rotation:
viewMatrix = clipMatrix * cameraTranslationMatrix4x4 * cameraRotationMatrix4x4;
This lets you rotate the 2D plane and move it around.
#9 (0 votes)
You might want to debug your system with spheres to determine whether or not you have a good field of view. If you make it too wide, the spheres will deform at the edges of the screen into more oval forms pointed toward the center of the frame. The solution to this problem is to zoom in on the frame, by multiplying the x and y coordinates for the 3-dimensional point by a scalar and then shrinking your object or world down by a similar factor. Then you get a nice, even, round sphere across the entire frame.
I'm almost embarrassed that it took me all day to figure this one out and I was almost convinced that there was some spooky mysterious geometric phenomenon going on here that demanded a different approach.
Yet, the importance of calibrating the zoom-frame-of-view coefficient by rendering spheres cannot be overstated. If you do not know where the "habitable zone" of your universe is, you will end up walking on the sun and scrapping the project. You want to be able to render a sphere anywhere in your frame of view and have it appear round. In my project, the unit sphere is massive compared to the region that I'm describing.
Also, the obligatory wikipedia entry: Spherical Coordinate System
#10 (0 votes)
Thanks to @Mads Elvenheim for a proper code example. I have fixed the minor syntax errors in the code (just a few const problems and obvious missing operators). Also, near and far have vastly different meanings in VS, so they are not used as parameter names below.
For your pleasure, here is the compilable (MSVC2013) version. Have fun. Mind that I have made NEAR_Z and FAR_Z constants; you probably don't want it like that.
#include <vector>
#include <cmath>
#include <stdexcept>
#include <algorithm>
#define M_PI 3.14159
#define NEAR_Z 0.5
#define FAR_Z 2.5
struct Vector
{
float x;
float y;
float z;
float w;
Vector() : x( 0 ), y( 0 ), z( 0 ), w( 1 ) {}
Vector( float a, float b, float c ) : x( a ), y( b ), z( c ), w( 1 ) {}
/* Assume proper operator overloads here, with vectors and scalars */
float Length() const
{
return std::sqrt( x*x + y*y + z*z );
}
Vector& operator*=(float fac) noexcept
{
x *= fac;
y *= fac;
z *= fac;
return *this;
}
Vector operator*(float fac) const noexcept
{
return Vector(*this)*=fac;
}
Vector& operator/=(float div) noexcept
{
return operator*=(1/div); // avoid divisions: they are much
// more costly than multiplications
}
Vector Unit() const
{
const float epsilon = 1e-6;
float mag = Length();
if (mag < epsilon) {
std::out_of_range e( "" );
throw e;
}
return Vector(*this)/=mag;
}
};
inline float Dot( const Vector& v1, const Vector& v2 )
{
return v1.x*v2.x + v1.y*v2.y + v1.z*v2.z;
}
class Matrix
{
public:
Matrix() : data( 16 )
{
Identity();
}
void Identity()
{
std::fill( data.begin(), data.end(), float( 0 ) );
data[0] = data[5] = data[10] = data[15] = 1.0f;
}
float& operator[]( size_t index )
{
if (index >= 16) {
std::out_of_range e( "" );
throw e;
}
return data[index];
}
const float& operator[]( size_t index ) const
{
if (index >= 16) {
std::out_of_range e( "" );
throw e;
}
return data[index];
}
Matrix operator*( const Matrix& m ) const
{
Matrix dst;
std::fill( dst.data.begin(), dst.data.end(), 0.0f ); /* start the product from zero, not identity */
int col;
for (int y = 0; y<4; ++y) {
col = y * 4;
for (int x = 0; x<4; ++x) {
for (int i = 0; i<4; ++i) {
dst[x + col] += m[i + col] * data[x + i * 4];
}
}
}
return dst;
}
Matrix& operator*=( const Matrix& m )
{
*this = (*this) * m;
return *this;
}
/* The interesting stuff */
void SetupClipMatrix( float fov, float aspectRatio )
{
Identity();
float f = 1.0f / std::tan( fov * 0.5f );
data[0] = f*aspectRatio;
data[5] = f;
data[10] = (FAR_Z + NEAR_Z) / (FAR_Z- NEAR_Z);
data[11] = 1.0f; /* this 'plugs' the old z into w */
data[14] = (2.0f*NEAR_Z*FAR_Z) / (NEAR_Z - FAR_Z);
data[15] = 0.0f;
}
std::vector<float> data;
};
inline Vector operator*( const Vector& v, Matrix& m )
{
Vector dst;
dst.x = v.x*m[0] + v.y*m[4] + v.z*m[8] + v.w*m[12];
dst.y = v.x*m[1] + v.y*m[5] + v.z*m[9] + v.w*m[13];
dst.z = v.x*m[2] + v.y*m[6] + v.z*m[10] + v.w*m[14];
dst.w = v.x*m[3] + v.y*m[7] + v.z*m[11] + v.w*m[15];
return dst;
}
typedef std::vector<Vector> VecArr;
VecArr ProjectAndClip( int width, int height, const VecArr& vertex )
{
float halfWidth = (float)width * 0.5f;
float halfHeight = (float)height * 0.5f;
float aspect = (float)width / (float)height;
Vector v;
Matrix clipMatrix;
VecArr dst;
clipMatrix.SetupClipMatrix( 60.0f * (M_PI / 180.0f), aspect);
/* Here, after the perspective divide, you perform Sutherland-Hodgeman clipping
by checking if the x, y and z components are inside the range of [-w, w].
One checks each vector component seperately against each plane. Per-vertex
data like colours, normals and texture coordinates need to be linearly
interpolated for clipped edges to reflect the change. If the edge (v0,v1)
is tested against the positive x plane, and v1 is outside, the interpolant
becomes: (v1.x - w) / (v1.x - v0.x)
I skip this stage all together to be brief.
*/
for (VecArr::const_iterator i = vertex.begin(); i != vertex.end(); ++i) {
v = (*i) * clipMatrix;
v /= v.w; /* Don't get confused here. I assume the divide leaves v.w alone.*/
dst.push_back( v );
}
/* TODO: Clipping here */
for (VecArr::iterator i = dst.begin(); i != dst.end(); ++i) {
i->x = (i->x * (float)width) / (2.0f * i->w) + halfWidth;
i->y = (i->y * (float)height) / (2.0f * i->w) + halfHeight;
}
return dst;
}
#pragma once