I have added this (https://github.com/kewlbear/FFmpeg-iOS-build-script) version of ffmpeg to my project. I can't see the entry point to the library in the included headers.
How do I get access to the same text-command-based system that the standalone application has, or an equivalent?
I would also be happy if someone could point me towards documentation that allows you to use FFmpeg without the command line interface.
This is what I am trying to execute (I have it working on Windows and Android using the CLI version of ffmpeg):
ffmpeg -framerate 30 -i snap%03d.jpg -itsoffset 00:00:03.23333 -itsoffset 00:00:05 -i soundEffect.WAV -c:v libx264 -vf fps=30 -pix_fmt yuv420p result.mp4
2 Answers
#1
5
Actually, you can build the ffmpeg library so that it includes the code of the ffmpeg binary itself (ffmpeg.c). The only thing to take care of is renaming the function main(int argc, char **argv), for example to ffmpeg_main(int argc, char **argv); then you can call it with arguments just as if you were executing the ffmpeg binary. Note that argv[0] should contain the program name; just "ffmpeg" should work.
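For illustration, here is a minimal sketch of what calling that renamed entry point could look like, using the command from the question. The ffmpeg_main name and prototype are just the rename described above, not something exported by the library, and be aware that ffmpeg.c may call exit() when it finishes or hits an error, which would terminate an iOS app unless you also patch that path; its static state also makes repeated in-process calls fragile.

int ffmpeg_main(int argc, char **argv);   /* the renamed main() from ffmpeg.c */

static void encode_with_ffmpeg(void)
{
    /* Arguments copied from the command in the question; argv[0] is just the program name. */
    char *argv[] = {
        "ffmpeg",
        "-framerate", "30",
        "-i", "snap%03d.jpg",
        "-itsoffset", "00:00:03.23333",
        "-itsoffset", "00:00:05",
        "-i", "soundEffect.WAV",
        "-c:v", "libx264",
        "-vf", "fps=30",
        "-pix_fmt", "yuv420p",
        "result.mp4",
        NULL
    };
    int argc = (int)(sizeof(argv) / sizeof(argv[0])) - 1;

    ffmpeg_main(argc, argv);
}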
The same approach was used in the library VideoKit for Android.
#2
0
To do what you want, you have to use your compiled FFmpeg library in your code.
What you are looking for is exactly the code provided by the FFmpeg documentation in libavformat/output-example.c (which means the AVFormat and AVCodec FFmpeg libraries in general).
* is not a "do it for me please" platform, so I prefer to explain here what you have to do, and I will try to be precise and to answer all your questions.
I assume that you already know how to link your compiled (static or shared) library to your Xcode project; that is not the topic here.
So, let's talk about this code. It creates a video (containing a randomly generated video stream and audio stream) of a given duration. You want to create a video from a list of pictures and a sound file. Perfect: there are only three main modifications you have to make:
- The end condition is no longer reaching a duration, but reaching the end of your file list (the code already has a #define STREAM_NB_FRAMES you can use to iterate over all your frames).
- Replace the dummy void fill_yuv_image with your own method that loads and decodes the image buffer from a file (see the sketch below).
- Replace the dummy void write_audio_frame with your own method that loads and decodes the audio buffer from your file.
(You can find a "how to load audio file content" example in the documentation starting at line 271, which is easily adaptable for video content.)
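To make the fill_yuv_image point more concrete, here is a hypothetical sketch: decode one still image with libavformat/libavcodec and convert it to YUV420P with libswscale. The function name and error handling are mine, and the calls assume a reasonably recent FFmpeg build (the old example itself uses older API names, as noted further down).

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>

/* Decode one still image (e.g. "snap001.jpg") and convert it into an
 * already-allocated YUV420P destination frame sized like the output video.
 * Returns 0 on success, a negative AVERROR code otherwise. */
static int load_image_as_yuv420p(const char *path, AVFrame *dst)
{
    AVFormatContext *fmt = NULL;
    AVCodecContext *dec_ctx = NULL;
    AVFrame *decoded = av_frame_alloc();
    AVPacket *pkt = av_packet_alloc();
    struct SwsContext *sws = NULL;
    int ret, stream;

    if ((ret = avformat_open_input(&fmt, path, NULL, NULL)) < 0)
        goto end;
    if ((ret = avformat_find_stream_info(fmt, NULL)) < 0)
        goto end;
    stream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (stream < 0) { ret = stream; goto end; }

    const AVCodec *dec = avcodec_find_decoder(fmt->streams[stream]->codecpar->codec_id);
    dec_ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(dec_ctx, fmt->streams[stream]->codecpar);
    if ((ret = avcodec_open2(dec_ctx, dec, NULL)) < 0)
        goto end;

    /* Read packets until the decoder hands back the picture. */
    while ((ret = av_read_frame(fmt, pkt)) >= 0) {
        int got = 0;
        if (pkt->stream_index == stream) {
            avcodec_send_packet(dec_ctx, pkt);
            got = (avcodec_receive_frame(dec_ctx, decoded) == 0);
        }
        av_packet_unref(pkt);
        if (got) { ret = 0; break; }
    }
    if (ret < 0)
        goto end;

    /* JPEGs usually decode to a YUVJ format; rescale/convert into the
     * encoder's YUV420P frame so it can feed the existing encoding loop. */
    sws = sws_getContext(decoded->width, decoded->height, decoded->format,
                         dst->width, dst->height, AV_PIX_FMT_YUV420P,
                         SWS_BILINEAR, NULL, NULL, NULL);
    if (!sws) { ret = AVERROR(ENOMEM); goto end; }
    sws_scale(sws, (const uint8_t * const *)decoded->data, decoded->linesize,
              0, decoded->height, dst->data, dst->linesize);

end:
    sws_freeContext(sws);
    av_packet_free(&pkt);
    av_frame_free(&decoded);
    avcodec_free_context(&dec_ctx);
    avformat_close_input(&fmt);
    return ret;
}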
In this code, compared to your CLI command, you can see that:
- const char *filename; in main should be your output file, "result.mp4".
- #define STREAM_FRAME_RATE 25 should be replaced by 30.
- For MP4 generation, video frames will be encoded as H.264 by default (in this code, the GOP size is 12), so there is no need to specify libx264.
- #define STREAM_PIX_FMT PIX_FMT_YUV420P represents your desired yuv420p pixel format.
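Putting those together, the corresponding tweaks in output-example.c could look like this (constant names are the ones from the example; hard-coding the filename instead of reading it from argv[1] is just for illustration):

#define STREAM_FRAME_RATE 30              /* was 25; matches -framerate 30 and -vf fps=30 */
#define STREAM_PIX_FMT PIX_FMT_YUV420P    /* matches -pix_fmt yuv420p */

const char *filename = "result.mp4";      /* output file; the MP4 muxer encodes video as H.264 by default */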
Now, with these official examples and the related documentation, you can achieve what you want. Be careful: there are some differences between the FFmpeg version used in these examples and the current FFmpeg version. For example:
st = av_new_stream(oc, 1); // line 60
Could be replaced by:
st = avformat_new_stream(oc, NULL);
st->id = 1;
Or:
if (avcodec_open(c, codec) < 0) { // line 97
Could be replaced by:
if (avcodec_open2(c, codec, NULL) < 0) {
Or again:
dump_format(oc, 0, filename, 1); // line 483
Could be replaced by:
av_dump_format(oc, 0, filename, 1);
Or CODEC_ID_NONE by AV_CODEC_ID_NONE... etc.
Ask your questions if anything is unclear, but you have all the keys! :)