Source: Apple's original documentation, the AVFoundation Programming Guide (Editing chapter).
The AVFoundation framework provides a feature-rich set of classes to facilitate the editing of audiovisual assets. At the heart of AVFoundation’s editing API are compositions. A composition is simply a collection of tracks from one or more different media assets. The AVMutableComposition class provides an interface for inserting and removing tracks, as well as managing their temporal orderings. Figure 3-1 shows how a new composition is pieced together from a combination of existing assets to form a new asset. If all you want to do is merge multiple assets together sequentially into a single file, that is as much detail as you need. If you want to perform any custom audio or video processing on the tracks in your composition, you need to incorporate an audio mix or a video composition, respectively.
![Figure 3-1 AVMutableComposition assembles assets together](http://ww4.sinaimg.cn/large/a9c4d5f6gw1f6fu4l7pu1j20wu0goaal.jpg)
Using the AVMutableAudioMix class, you can perform custom audio processing on the audio tracks in your composition, as shown in Figure 3-2. Currently, you can specify a maximum volume or set a volume ramp for an audio track.
![Figure 3-2 AVMutableAudioMix performs audio mixing](http://ww1.sinaimg.cn/large/a9c4d5f6gw1f6fu9qkxtjj20mo0giq3g.jpg)
You can use the AVMutableVideoComposition class to work directly with the video tracks in your composition for the purposes of editing, shown in Figure 3-3. With a single video composition, you can specify the desired render size and scale, as well as the frame duration, for the output video. Through a video composition’s instructions (represented by the AVMutableVideoCompositionInstruction class), you can modify the background color of your video and apply layer instructions. These layer instructions (represented by the AVMutableVideoCompositionLayerInstruction class) can be used to apply transforms, transform ramps, opacity and opacity ramps to the video tracks within your composition. The video composition class also gives you the ability to introduce effects from the Core Animation framework into your video using the animationTool property.
![Figure 3-3 AVMutableVideoComposition](http://ww3.sinaimg.cn/large/a9c4d5f6gw1f6gafmhl5zj214v0lf0ti.jpg)
To combine your composition with an audio mix and a video composition, you use an AVAssetExportSession object, as shown in Figure 3-4. You initialize the export session with your composition and then simply assign your audio mix and video composition to the audioMix and videoComposition properties respectively.
![Figure 3-4 Use AVAssetExportSession to combine media elements into an output file](http://ww2.sinaimg.cn/large/a9c4d5f6gw1f6galdybkvj20zc0nnab4.jpg)
Creating a Composition
To create your own composition, you use the AVMutableComposition class. To add media data to your composition, you must add one or more composition tracks, represented by the AVMutableCompositionTrack class. The simplest case is creating a mutable composition with one video track and one audio track:
```objc
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *mutableCompositionVideoTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
```
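The prose above mentions an audio track as well; a minimal sketch of the matching call, with an illustrative variable name:

```objc
// Add the audio composition track in the same way; passing kCMPersistentTrackID_Invalid
// lets AVFoundation generate a unique track ID for the new track.
AVMutableCompositionTrack *mutableCompositionAudioTrack =
    [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio
                                     preferredTrackID:kCMPersistentTrackID_Invalid];
```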
Options for Initializing a Composition Track
When adding new tracks to a composition, you must provide both a media type and a track ID. Although audio and video are the most commonly used media types, you can specify other media types as well, such as AVMediaTypeSubtitle or AVMediaTypeText.
Every track associated with some audiovisual data has a unique identifier referred to as a track ID. If you specify kCMPersistentTrackID_Invalid as the preferred track ID, a unique identifier is automatically generated for you and associated with the track.
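As an illustration, a hypothetical subtitle track could be requested like this (a sketch; mutableComposition is the composition created earlier):

```objc
// Request a subtitle track; kCMPersistentTrackID_Invalid asks the composition
// to generate and associate a unique track ID automatically.
AVMutableCompositionTrack *subtitleTrack =
    [mutableComposition addMutableTrackWithMediaType:AVMediaTypeSubtitle
                                     preferredTrackID:kCMPersistentTrackID_Invalid];
```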
Adding Audiovisual Data to a Composition
Once you have a composition with one or more tracks, you can begin adding your media data to the appropriate tracks. To add media data to a composition track, you need access to the AVAsset object where the media data is located. You can use the mutable composition track interface to place multiple tracks with the same underlying media type together on the same track. The following example illustrates how to add two different video asset tracks in sequence to the same composition track:
```objc
// You can retrieve AVAssets from a number of places, like the camera roll for example.
```
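A sketch of the sequential insertion described above, assuming firstVideoAsset and secondVideoAsset are AVAsset objects with at least one video track each, and mutableCompositionVideoTrack is the composition track created earlier:

```objc
// Grab the first video track from each asset.
AVAssetTrack *firstVideoAssetTrack = [[firstVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *secondVideoAssetTrack = [[secondVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
// Insert the first track at the beginning of the composition track.
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration)
                                      ofTrack:firstVideoAssetTrack
                                       atTime:kCMTimeZero
                                        error:nil];
// Insert the second track immediately after the first one.
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, secondVideoAssetTrack.timeRange.duration)
                                      ofTrack:secondVideoAssetTrack
                                       atTime:firstVideoAssetTrack.timeRange.duration
                                        error:nil];
```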
Retrieving Compatible Composition Tracks
Where possible, you should have only one composition track for each media type. This unification of compatible asset tracks leads to a minimal amount of resource usage. When presenting media data serially, you should place any media data of the same type on the same composition track. You can query a mutable composition to find out if there are any composition tracks compatible with your desired asset track:
```objc
AVMutableCompositionTrack *compatibleCompositionTrack = [mutableComposition mutableTrackCompatibleWithTrack:<#the AVAssetTrack you want to insert#>];
```
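A sketch of how the query might be used, assuming assetTrack is the AVAssetTrack you want to insert (the fallback branch is an assumption of this sketch):

```objc
AVMutableCompositionTrack *compositionTrack = [mutableComposition mutableTrackCompatibleWithTrack:assetTrack];
if (!compositionTrack) {
    // No compatible track exists yet, so create one with the same media type.
    compositionTrack = [mutableComposition addMutableTrackWithMediaType:assetTrack.mediaType
                                                        preferredTrackID:kCMPersistentTrackID_Invalid];
}
// Insert the asset track's media data into the chosen composition track.
[compositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetTrack.timeRange.duration)
                          ofTrack:assetTrack
                           atTime:kCMTimeZero
                            error:nil];
```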
Note: Placing multiple video segments on the same composition track can potentially lead to dropping frames at the transitions between video segments, especially on embedded devices. Choosing the number of composition tracks for your video segments depends entirely on the design of your app and its intended platform.
Generating a Volume Ramp
A single AVMutableAudioMix object can perform custom audio processing on all of the audio tracks in your composition individually. You create an audio mix using the audioMix class method, and you use instances of the AVMutableAudioMixInputParameters class to associate the audio mix with specific tracks within your composition. An audio mix can be used to vary the volume of an audio track. The following example displays how to set a volume ramp on a specific audio track to slowly fade the audio out over the duration of the composition:
```objc
AVMutableAudioMix *mutableAudioMix = [AVMutableAudioMix audioMix];
```
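Continuing from the line above, a sketch of the fade-out described in the text, assuming the mutableCompositionAudioTrack and mutableComposition from the earlier snippets:

```objc
// Create the audio mix input parameters object for the composition's audio track.
AVMutableAudioMixInputParameters *mixParameters =
    [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:mutableCompositionAudioTrack];
// Set a volume ramp that slowly fades the audio out over the duration of the composition.
[mixParameters setVolumeRampFromStartVolume:1.f
                                toEndVolume:0.f
                                  timeRange:CMTimeRangeMake(kCMTimeZero, mutableComposition.duration)];
// Attach the input parameters to the audio mix.
mutableAudioMix.inputParameters = @[mixParameters];
```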
Performing Custom Video Processing
As with an audio mix, you only need one AVMutableVideoComposition object to perform all of your custom video processing on your composition’s video tracks. Using a video composition, you can directly set the appropriate render size, scale, and frame rate for your composition’s video tracks. For a detailed example of setting appropriate values for these properties, see Setting the Render Size and Frame Duration.
Changing the Composition’s Background Color
All video compositions must also have an array of AVVideoCompositionInstruction objects containing at least one video composition instruction. You use the AVMutableVideoCompositionInstruction class to create your own video composition instructions. Using video composition instructions, you can modify the composition’s background color, specify whether post processing is needed or apply layer instructions.
The following example illustrates how to create a video composition instruction that changes the background color to red for the entire composition.
```objc
AVMutableVideoCompositionInstruction *mutableVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
```
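Continuing from the line above, a sketch of the red background instruction, assuming the mutableComposition from the earlier snippets (the video composition variable name is illustrative):

```objc
// Make the instruction span the entire composition.
mutableVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, mutableComposition.duration);
// Change the background color to red.
mutableVideoCompositionInstruction.backgroundColor = [[UIColor redColor] CGColor];
// Attach the instruction to a video composition.
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[mutableVideoCompositionInstruction];
```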
Applying Opacity Ramps
Video composition instructions can also be used to apply video composition layer instructions. An AVMutableVideoCompositionLayerInstruction object can apply transforms, transform ramps, opacity and opacity ramps to a certain video track within a composition. The order of the layer instructions in a video composition instruction’s layerInstructions array determines how video frames from source tracks should be layered and composed for the duration of that composition instruction. The following code fragment shows how to set an opacity ramp to slowly fade out the first video in a composition before transitioning to the second video:
```objc
AVAssetTrack *firstVideoAssetTrack = <#AVAssetTrack representing the first video segment played in the composition#>;
```
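Continuing from the line above, a sketch of the fade-out ramp on the first video segment, assuming the mutableCompositionVideoTrack from the earlier snippets:

```objc
// Create the first video composition instruction and make it span the first segment.
AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction =
    [AVMutableVideoCompositionInstruction videoCompositionInstruction];
firstVideoCompositionInstruction.timeRange =
    CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
// Create a layer instruction tied to the composition's video track.
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction =
    [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Fade the first video out over its entire duration, before the second video begins.
[firstVideoLayerInstruction setOpacityRampFromStartOpacity:1.f
                                              toEndOpacity:0.f
                                                 timeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration)];
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
```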
Incorporating Core Animation Effects
A video composition can add the power of Core Animation to your composition through the animationTool property. Through this animation tool, you can accomplish tasks such as watermarking video and adding titles or animating overlays. Core Animation can be used in two different ways with video compositions: You can add a Core Animation layer as its own individual composition track, or you can render Core Animation effects (using a Core Animation layer) into the video frames in your composition directly. The following code displays the latter option by adding a watermark to the center of the video:
```objc
CALayer *watermarkLayer = <#CALayer representing your desired watermark image#>;
```
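A sketch of rendering the watermark into the composition's video frames through animationTool, assuming an AVMutableVideoComposition named mutableVideoComposition as in the earlier sketches (layer names are illustrative):

```objc
// Build a parent layer that contains the video layer plus the watermark layer.
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
videoLayer.frame = parentLayer.frame;
[parentLayer addSublayer:videoLayer];
// Center the watermark over the video.
watermarkLayer.position = CGPointMake(mutableVideoComposition.renderSize.width / 2,
                                      mutableVideoComposition.renderSize.height / 2);
[parentLayer addSublayer:watermarkLayer];
// Render the Core Animation layers directly into the composition's video frames.
mutableVideoComposition.animationTool =
    [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer
                                                                                                  inLayer:parentLayer];
```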
Putting It All Together: Combining Multiple Assets and Saving the Result to the Camera Roll
This brief code example illustrates how you can combine two video asset tracks and an audio asset track to create a single video file. It shows how to:
- Create an AVMutableComposition object and add multiple AVMutableCompositionTrack objects
- Add time ranges of AVAssetTrack objects to compatible composition tracks
- Check the preferredTransform property of a video asset track to determine the video’s orientation
- Use AVMutableVideoCompositionLayerInstruction objects to apply transforms to the video tracks within a composition
- Set appropriate values for the renderSize and frameDuration properties of a video composition
- Use a composition in conjunction with a video composition when exporting to a video file
- Save a video file to the Camera Roll
Note: To focus on the most relevant code, this example omits several aspects of a complete app, such as memory management and error handling. To use AVFoundation, you are expected to have enough experience with Cocoa to infer the missing pieces.
Creating the Composition
To piece together tracks from separate assets, you use an AVMutableComposition object. Create the composition and add one audio and one video track.
```objc
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
```
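Continuing from the line above, a sketch of the two composition tracks; the names videoCompositionTrack and audioCompositionTrack are illustrative and are reused in the sketches that follow:

```objc
// Add one video track and one audio track to the composition.
AVMutableCompositionTrack *videoCompositionTrack =
    [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *audioCompositionTrack =
    [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
```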
Adding the Assets
An empty composition does you no good. Add the two video asset tracks and the audio asset track to the composition.
```objc
AVAssetTrack *firstVideoAssetTrack = [[firstVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
```
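Continuing from the line above, a sketch of the remaining insertions, assuming secondVideoAsset and audioAsset are the other two assets described in the note below:

```objc
AVAssetTrack *secondVideoAssetTrack = [[secondVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
// Place the two video segments back to back on the same composition video track.
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration)
                               ofTrack:firstVideoAssetTrack
                                atTime:kCMTimeZero
                                 error:nil];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, secondVideoAssetTrack.timeRange.duration)
                               ofTrack:secondVideoAssetTrack
                                atTime:firstVideoAssetTrack.timeRange.duration
                                 error:nil];
// Lay the audio track under the combined duration of both video segments.
[audioCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero,
                                                       CMTimeAdd(firstVideoAssetTrack.timeRange.duration,
                                                                 secondVideoAssetTrack.timeRange.duration))
                               ofTrack:[[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]
                                atTime:kCMTimeZero
                                 error:nil];
```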
Note: This assumes that you have two assets that contain at least one video track each and a third asset that contains at least one audio track. The videos can be retrieved from the Camera Roll, and the audio track can be retrieved from the music library or the videos themselves.
Checking the Video Orientations
Once you add your video and audio tracks to the composition, you need to ensure that the orientations of both video tracks are correct. By default, all video tracks are assumed to be in landscape mode. If your video track was taken in portrait mode, the video will not be oriented properly when it is exported. Likewise, if you try to combine a video shot in portrait mode with a video shot in landscape mode, the export session will fail to complete.
```objc
BOOL isFirstVideoPortrait = NO;
```
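Continuing from the line above, one way to sketch the orientation check using each track's preferredTransform (the matrix test for a 90- or 270-degree rotation is an assumption of this sketch):

```objc
CGAffineTransform firstTransform = firstVideoAssetTrack.preferredTransform;
// A 90- or 270-degree rotation swaps the a/d and b/c entries of the transform.
if (firstTransform.a == 0 && firstTransform.d == 0 &&
    (firstTransform.b == 1.0 || firstTransform.b == -1.0) &&
    (firstTransform.c == 1.0 || firstTransform.c == -1.0)) {
    isFirstVideoPortrait = YES;
}
BOOL isSecondVideoPortrait = NO;
CGAffineTransform secondTransform = secondVideoAssetTrack.preferredTransform;
if (secondTransform.a == 0 && secondTransform.d == 0 &&
    (secondTransform.b == 1.0 || secondTransform.b == -1.0) &&
    (secondTransform.c == 1.0 || secondTransform.c == -1.0)) {
    isSecondVideoPortrait = YES;
}
// Mixing portrait and landscape segments would cause the export session to fail.
if (isFirstVideoPortrait != isSecondVideoPortrait) {
    NSLog(@"Cannot combine a portrait video with a landscape video.");
    return;
}
```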
Applying the Video Composition Layer Instructions
Once you know the video segments have compatible orientations, you can apply the necessary layer instructions to each one and add these layer instructions to the video composition.
```objc
AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
```
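Continuing from the line above, a sketch of the two instructions and their layer instructions, reusing the variables from the previous steps:

```objc
// The first instruction spans the first video segment.
firstVideoCompositionInstruction.timeRange =
    CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
// The second instruction spans the second segment, which starts where the first one ends.
AVMutableVideoCompositionInstruction *secondVideoCompositionInstruction =
    [AVMutableVideoCompositionInstruction videoCompositionInstruction];
secondVideoCompositionInstruction.timeRange =
    CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
// Apply each asset track's preferred transform through a layer instruction on the composition video track.
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction =
    [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
[firstVideoLayerInstruction setTransform:firstTransform atTime:kCMTimeZero];
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction =
    [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
[secondVideoLayerInstruction setTransform:secondTransform atTime:firstVideoAssetTrack.timeRange.duration];
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
// Collect the instructions in a video composition.
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];
```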
All AVAssetTrack objects have a preferredTransform property that contains the orientation information for that asset track. This transform is applied whenever the asset track is displayed onscreen. In the previous code, the layer instruction’s transform is set to the asset track’s transform so that the video in the new composition displays properly once you adjust its render size.
Setting the Render Size and Frame Duration
To complete the video orientation fix, you must adjust the renderSize property accordingly. You should also pick a suitable value for the frameDuration property, such as 1/30th of a second (or 30 frames per second). By default, the renderScale property is set to 1.0, which is appropriate for this composition.
```objc
CGSize naturalSizeFirst, naturalSizeSecond;
```
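Continuing from the line above, a sketch that picks a render size large enough for both tracks and a 1/30-second frame duration:

```objc
// If the tracks are portrait, swap width and height so they display properly.
if (isFirstVideoPortrait) {
    naturalSizeFirst = CGSizeMake(firstVideoAssetTrack.naturalSize.height, firstVideoAssetTrack.naturalSize.width);
    naturalSizeSecond = CGSizeMake(secondVideoAssetTrack.naturalSize.height, secondVideoAssetTrack.naturalSize.width);
} else {
    naturalSizeFirst = firstVideoAssetTrack.naturalSize;
    naturalSizeSecond = secondVideoAssetTrack.naturalSize;
}
// Render at the larger of the two widths and heights.
CGFloat renderWidth = MAX(naturalSizeFirst.width, naturalSizeSecond.width);
CGFloat renderHeight = MAX(naturalSizeFirst.height, naturalSizeSecond.height);
mutableVideoComposition.renderSize = CGSizeMake(renderWidth, renderHeight);
// 1/30th of a second per frame, i.e. 30 frames per second.
mutableVideoComposition.frameDuration = CMTimeMake(1, 30);
```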
Exporting the Composition and Saving it to the Camera Roll
The final step in this process involves exporting the entire composition into a single video file and saving that video to the camera roll. You use an AVAssetExportSession object to create the new video file and you pass to it your desired URL for the output file. You can then use the ALAssetsLibrary class to save the resulting video file to the Camera Roll.
```objc
// Create a static date formatter so we only have to initialize it once.
```
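Continuing from the comment above, a sketch of the export and save steps; the output file name and location are illustrative, and error handling is omitted as in the rest of the example:

```objc
static NSDateFormatter *kDateFormatter;
if (!kDateFormatter) {
    kDateFormatter = [[NSDateFormatter alloc] init];
    kDateFormatter.dateStyle = NSDateFormatterMediumStyle;
    kDateFormatter.timeStyle = NSDateFormatterShortStyle;
}
// Create the export session with the composition and the highest-quality preset.
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition
                                                                  presetName:AVAssetExportPresetHighestQuality];
// Write a QuickTime movie named after the current date into the temporary directory.
NSString *fileName = [[kDateFormatter stringFromDate:[NSDate date]] stringByAppendingPathExtension:@"mov"];
exporter.outputURL = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:fileName]];
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.shouldOptimizeForNetworkUse = YES;
// Use the video composition (and, if you built one, the audio mix) for the export.
exporter.videoComposition = mutableVideoComposition;
// Export asynchronously, then save the resulting file to the Camera Roll.
[exporter exportAsynchronouslyWithCompletionHandler:^{
    dispatch_async(dispatch_get_main_queue(), ^{
        if (exporter.status == AVAssetExportSessionStatusCompleted) {
            ALAssetsLibrary *assetsLibrary = [[ALAssetsLibrary alloc] init];
            if ([assetsLibrary videoAtPathIsCompatibleWithSavedPhotosAlbum:exporter.outputURL]) {
                [assetsLibrary writeVideoAtPathToSavedPhotosAlbum:exporter.outputURL completionBlock:NULL];
            }
        }
    });
}];
```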