I am using an iPhone X and ARKit's face tracking to capture the user's face. The goal is to texture the face mesh with the user's image.
I'm only looking at a single frame (an ARFrame) from the AR session. From ARFaceGeometry, I have a set of vertices that describe the face. I make a JPEG representation of the current frame's capturedImage.
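For context, the JPEG conversion is roughly along these lines (a minimal sketch assuming a CIContext-based conversion; the helper name jpegData(from:) and the color space choice are my own, and the exact options may differ):
import ARKit
import CoreImage

// Minimal sketch: convert the frame's capturedImage (a CVPixelBuffer) to JPEG data.
func jpegData(from frame: ARFrame) -> Data? {
    let ciImage = CIImage(cvPixelBuffer: frame.capturedImage)
    let context = CIContext()
    guard let colorSpace = CGColorSpace(name: CGColorSpace.sRGB) else { return nil }
    return context.jpegRepresentation(of: ciImage, colorSpace: colorSpace)
}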
I then want to find the texture coordinates that map the created JPEG onto the mesh vertices. I want to: 1. map the vertices from model space to world space; 2. map the vertices from world space to camera space; 3. divide by the image dimensions to get pixel coordinates for the texture.
let geometry: ARFaceGeometry = contentUpdater.faceGeometry!
guard let camera = session.currentFrame?.camera else { return }
let faceAnchorNode: SCNNode = contentUpdater.faceNode
// simdTransform is already a float4x4 mapping model space to world space
let anchorTransform = faceAnchorNode.simdTransform

var textureVs = ""
for vertex in geometry.vertices {
    // Step 1: model space to world space, using the face anchor's transform
    let vertex4 = simd_float4(vertex.x, vertex.y, vertex.z, 1.0)
    let worldSpace = anchorTransform * vertex4

    // Step 2: world space to image space, using the camera's projection
    let world3 = simd_float3(worldSpace.x, worldSpace.y, worldSpace.z)
    let projectedPt = camera.projectPoint(world3,
                                          orientation: .landscapeRight,
                                          viewportSize: camera.imageResolution)

    // Step 3: divide by image width/height to get normalized texture coordinates
    let vtx = projectedPt.x / camera.imageResolution.width
    let vty = projectedPt.y / camera.imageResolution.height
    textureVs += "vt \(vtx) \(vty)\n"
}
This is not working; instead it gets me a very funky-looking face! Where am I going wrong?
1 Answer
#1
The starting point is different: projectPoint returns pixel coordinates whose origin is the top-left corner of the image, while the vt texture coordinates are measured from the bottom-left, so the y value has to be flipped.
Apply the following change to your code:
//let vty = projectedPt.y / camera.imageResolution.height
let vty = (camera.imageResolution.height - projectedPt.y) / camera.imageResolution.height
With that, you get a normal-looking face.
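For completeness, a minimal sketch of the loop with the flipped y applied, under the same assumptions as the question's code (camera, anchorTransform, and geometry set up as above):
// Sketch: normalized texture coordinates with the flipped y origin applied.
var textureVs = ""
for vertex in geometry.vertices {
    let worldSpace = anchorTransform * simd_float4(vertex.x, vertex.y, vertex.z, 1.0)
    let world3 = simd_float3(worldSpace.x, worldSpace.y, worldSpace.z)
    let projectedPt = camera.projectPoint(world3,
                                          orientation: .landscapeRight,
                                          viewportSize: camera.imageResolution)
    let vtx = projectedPt.x / camera.imageResolution.width
    // Flip y: projectPoint's origin is the top-left, texture coordinates start at the bottom-left.
    let vty = (camera.imageResolution.height - projectedPt.y) / camera.imageResolution.height
    textureVs += "vt \(vtx) \(vty)\n"
}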