I want to access the iPad's camera from the Swift Playgrounds iPad app. I have found that it is not possible to capture video data, even though my playground runs without errors.
The delegate method captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) of the AVCaptureVideoDataOutputSampleBufferDelegate protocol is never called (probably because no video data is coming in), while it is called in my iOS app.
The view in my playground is supposed to display the FaceTime camera feed. Why can't I display the camera output, even though Apple explicitly says this is allowed? Also, the Playgrounds app asks me for camera permission as soon as I open my playground, so camera access should be permitted in some way.
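One way to confirm that the permission prompt actually resulted in access (rather than only appearing) is to query the authorization status directly before starting the session. This is a minimal sketch, assuming the same Swift 3-era AVFoundation API names used in the code below:

```swift
import AVFoundation

// Sketch: check the camera authorization state before starting a capture session.
// If the status is .notDetermined, requesting access triggers the system prompt.
let status = AVCaptureDevice.authorizationStatus(forMediaType: AVMediaTypeVideo)
switch status {
case .authorized:
    print("camera access granted")
case .notDetermined:
    AVCaptureDevice.requestAccess(forMediaType: AVMediaTypeVideo) { granted in
        print("camera access granted: \(granted)")
    }
default:
    // .denied or .restricted: the session will run, but no frames will be delivered.
    print("camera access denied or restricted")
}
```

If this prints "camera access granted" and the delegate is still never called, the problem lies elsewhere (for example, in the Playgrounds app itself).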
import UIKit
import CoreImage
import AVFoundation
import ImageIO
import PlaygroundSupport

class Visage: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    var visageCameraView: UIView = UIView()
    fileprivate var faceDetector: CIDetector?
    fileprivate var videoDataOutput: AVCaptureVideoDataOutput?
    fileprivate var videoDataOutputQueue: DispatchQueue?
    fileprivate var cameraPreviewLayer: AVCaptureVideoPreviewLayer?
    fileprivate var captureSession: AVCaptureSession = AVCaptureSession()
    fileprivate let notificationCenter: NotificationCenter = NotificationCenter.default

    override init() {
        super.init()
        self.captureSetup(AVCaptureDevicePosition.front)
        var faceDetectorOptions: [String : AnyObject]?
        faceDetectorOptions = [CIDetectorAccuracy: CIDetectorAccuracyHigh as AnyObject]
        self.faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: faceDetectorOptions)
    }

    func beginFaceDetection() {
        self.captureSession.startRunning()
    }

    func endFaceDetection() {
        self.captureSession.stopRunning()
    }

    fileprivate func captureSetup(_ position: AVCaptureDevicePosition) {
        var captureError: NSError?
        var captureDevice: AVCaptureDevice!

        for testedDevice in AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo) {
            if ((testedDevice as AnyObject).position == position) {
                captureDevice = testedDevice as! AVCaptureDevice
            }
        }
        if (captureDevice == nil) {
            captureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
        }

        var deviceInput: AVCaptureDeviceInput?
        do {
            deviceInput = try AVCaptureDeviceInput(device: captureDevice)
        } catch let error as NSError {
            captureError = error
            deviceInput = nil
        }

        captureSession.sessionPreset = AVCaptureSessionPresetHigh

        if (captureError == nil) {
            if (captureSession.canAddInput(deviceInput)) {
                captureSession.addInput(deviceInput)
            }

            self.videoDataOutput = AVCaptureVideoDataOutput()
            self.videoDataOutput!.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable: Int(kCVPixelFormatType_32BGRA)]
            self.videoDataOutput!.alwaysDiscardsLateVideoFrames = true
            self.videoDataOutputQueue = DispatchQueue(label: "VideoDataOutputQueue", attributes: [])
            self.videoDataOutput!.setSampleBufferDelegate(self, queue: self.videoDataOutputQueue!)

            if (captureSession.canAddOutput(self.videoDataOutput)) {
                captureSession.addOutput(self.videoDataOutput)
            }
        }

        visageCameraView.frame = UIScreen.main.bounds

        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer?.frame = UIScreen.main.bounds
        previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
        visageCameraView.layer.addSublayer(previewLayer!)
    }

    // NOT CALLED
    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
        print("delegate method called!")
    }
}

class SmileView: UIView {
    let smileView = UIView()
    var smileRec: Visage!

    override init(frame: CGRect) {
        super.init(frame: frame)
        self.addSubview(smileView)
        self.translatesAutoresizingMaskIntoConstraints = false

        smileRec = Visage()
        smileRec.beginFaceDetection()

        let cameraView = smileRec.visageCameraView
        self.addSubview(cameraView)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}

let frame = CGRect(x: 0, y: 0, width: UIScreen.main.bounds.width, height: UIScreen.main.bounds.height)
let sView = SmileView(frame: frame)
PlaygroundPage.current.liveView = sView
2 Answers
#1

Edit: this should have been fixed :)

---

Edit: this was confirmed to be a bug by Apple. I have filed a bug report, and I will update this answer when new official information comes in.
#2

I think you need to set the needsIndefiniteExecution property so that execution is not stopped after your top-level code completes. From Apple:

By default, all top-level code is executed, and then execution is terminated. When working with asynchronous code, enable indefinite execution to allow execution to continue after the end of the playground's top-level code is reached. This, in turn, gives threads and callbacks time to execute. Editing the playground automatically stops execution, even when indefinite execution is enabled. Set needsIndefiniteExecution to true to continue execution after the end of top-level code. Set it to false to stop execution at that point.
So the code at the end could become:
let frame = CGRect(x: 0, y: 0, width: UIScreen.main.bounds.width, height: UIScreen.main.bounds.height)
let sView = SmileView(frame: frame)
PlaygroundPage.current.needsIndefiniteExecution = true
PlaygroundPage.current.liveView = sView