I'm trying to mux h264 and aac created with MediaCodec using FFMPEG, and also use FFMPEG's RTMP support to send to YouTube. I've created two pipes, and am writing to them from Java (Android) through WritableByteChannels. I can send to one pipe just fine (accepting null audio) like this:
./ffmpeg -f lavfi -i aevalsrc=0 -i "files/camera-test.h264" -acodec aac -vcodec copy -bufsize 512k -f flv "rtmp://a.rtmp.youtube.com/live2/XXXX"
YouTube streaming works perfectly (but I have no audio). Using two pipes this is my command:
./ffmpeg \
-i "files/camera-test.h264" \
-i "files/audio-test.aac" \
-vcodec copy \
-acodec copy \
-map 0:v:0 -map 1:a:0 \
-f flv "rtmp://a.rtmp.youtube.com/live2/XXXX"
The pipes are created with mkfifo, and opened from Java like this:
pipeWriterVideo = Channels.newChannel(new FileOutputStream(outputFileVideo.toString()));
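For reference, the drain loop that feeds such a channel has to account for WritableByteChannel.write() making partial writes. A minimal sketch in plain Java (the path and the start-code bytes are placeholders; in the app the path would be the video FIFO and the buffer would come from MediaCodec):

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.WritableByteChannel;

public class PipeWriter {

    // write() may consume only part of the buffer, so loop until drained.
    public static void writeFully(WritableByteChannel ch, ByteBuffer buf) throws IOException {
        while (buf.hasRemaining()) {
            ch.write(buf);
        }
    }

    public static void main(String[] args) throws IOException {
        // In the app this path would be the video FIFO; a plain file stands in here.
        String path = args.length > 0 ? args[0] : "camera-test.h264";
        try (FileOutputStream fos = new FileOutputStream(path)) {
            // e.g. one encoded buffer drained from the MediaCodec output queue
            writeFully(Channels.newChannel(fos), ByteBuffer.wrap(new byte[]{0, 0, 0, 1}));
        }
    }
}
```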
The order of execution (for now, in my test phase) is: create the files, start ffmpeg (through adb shell), and then start recording, which opens the channels. ffmpeg will immediately open the h264 stream and then wait; since it is reading from the pipe, the first channel open (for video) runs successfully. When I try to open the audio channel the same way, it fails because ffmpeg has not actually started reading from that pipe. If I open a second terminal window and cat the audio file, my app spits out what I hope is encoded AAC, but ffmpeg fails, usually just sitting there waiting. Here is the verbose output:
ffmpeg version N-78385-g855d9d2 Copyright (c) 2000-2016 the FFmpeg
developers
built with gcc 4.8 (GCC)
configuration: --prefix=/home/dev/svn/android-ffmpeg-with-rtmp/src/ffmpeg/android/arm
--enable-shared --disable-static --disable-doc --disable-ffplay
--disable-ffprobe --disable-ffserver --disable-symver
--cross-prefix=/home/dev/dev/android-ndk-r10e/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64/bin/arm-linux-androideabi-
--target-os=linux --arch=arm --enable-cross-compile
--enable-librtmp --enable-pic --enable-decoder=h264
--sysroot=/home/dev/dev/android-ndk-r10e/platforms/android-19/arch-arm
--extra-cflags='-Os -fpic -marm'
--extra-ldflags='-L/home/dev/svn/android-ffmpeg-with-rtmp/src/openssl-android/libs/armeabi '
--extra-ldexeflags=-pie --pkg-config=/usr/bin/pkg-config
libavutil 55. 17.100 / 55. 17.100
libavcodec 57. 24.102 / 57. 24.102
libavformat 57. 25.100 / 57. 25.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 31.100 / 6. 31.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
matched as AVOption 'debug' with argument 'verbose'.
Trailing options were found on the commandline.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option async (audio sync method) with argument 1.
Successfully parsed a group of options.
Parsing a group of options: input file files/camera-test.h264.
Successfully parsed a group of options.
Opening an input file: files/camera-test.h264.
[file @ 0xb503b100] Setting default whitelist 'file'
I think if I could just get ffmpeg to start listening to both pipes, the rest would work out!
Thanks for your time.
EDIT: I've made progress by decoupling the audio pipe connection from the encoding, but now, as soon as the video stream has been passed, it errors on audio. I started a separate thread to create the WritableByteChannel for audio, and it never gets past the FileOutputStream creation.
matched as AVOption 'debug' with argument 'verbose'.
Trailing options were found on the commandline.
Finished splitting the commandline.
Parsing a group of options: global .
Successfully parsed a group of options.
Parsing a group of options: input file files/camera-test.h264.
Successfully parsed a group of options.
Opening an input file: files/camera-test.h264.
[file @ 0xb503b100] Setting default whitelist 'file'
[h264 @ 0xb503c400] Format h264 probed with size=2048 and score=51
[h264 @ 0xb503c400] Before avformat_find_stream_info() pos: 0 bytes read:15719 seeks:0
[h264 @ 0xb5027400] Current profile doesn't provide more RBSP data in PPS, skipping
[h264 @ 0xb503c400] max_analyze_duration 5000000 reached at 5000000 microseconds st:0
[h264 @ 0xb503c400] After avformat_find_stream_info() pos: 545242 bytes read:546928 seeks:0 frames:127
Input #0, h264, from 'files/camera-test.h264':
Duration: N/A, bitrate: N/A
Stream #0:0, 127, 1/1200000: Video: h264 (Baseline), 1 reference frame, yuv420p(left), 854x480 (864x480), 1/50, 25 fps, 25 tbr, 1200k tbn, 50 tbc
Successfully opened the file.
Parsing a group of options: input file files/audio-test.aac.
Applying option vcodec (force video codec ('copy' to copy stream)) with argument copy.
Successfully parsed a group of options.
Opening an input file: files/audio-test.aac.
Unknown decoder 'copy'
[AVIOContext @ 0xb5054020] Statistics: 546928 bytes read, 0 seeks
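The "Unknown decoder 'copy'" line is a clue in itself: ffmpeg applies each option to the next file on the command line, and this log shows `-vcodec copy` being applied while parsing the audio input, i.e. as an input (decoder) option. If the command that actually produced this log had the codec options before the second `-i`, moving them after all inputs (a sketch, assuming the same paths and stream layout as above) avoids that error:

```shell
# Options bind to the file that follows them, so "-vcodec copy" placed
# before an input is read as a decoder name for that input ("Unknown
# decoder 'copy'"). Keep all -i inputs first, then the output options:
./ffmpeg \
    -i "files/camera-test.h264" \
    -i "files/audio-test.aac" \
    -map 0:v:0 -map 1:a:0 \
    -vcodec copy -acodec copy \
    -f flv "rtmp://a.rtmp.youtube.com/live2/XXXX"
```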
Here is where I attempt to open the audio pipe.
new Thread() {
    public void run() {
        Log.d("Audio", "pre thread");
        FileOutputStream fs = null;
        try {
            fs = new FileOutputStream("/data/data/android.com.android.grafika/files/audio-test.aac");
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
        Log.d("Audio", "made fileoutputstream"); // never hits here
        mVideoEncoder.pipeWriterAudio = Channels.newChannel(fs);
        Log.d("Audio", "made it past opening audio pipe");
    }
}.start();
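One property of FIFOs worth knowing here: `new FileOutputStream(fifoPath)` blocks until a reader (in this case ffmpeg) opens the other end, which is consistent with the thread above never getting past that line. A sketch of pushing the open onto a helper thread and waiting with a timeout (class and method names are illustrative; a regular file, which opens immediately, stands in for the FIFO when trying this outside the app):

```java
import java.io.FileOutputStream;
import java.nio.channels.Channels;
import java.nio.channels.WritableByteChannel;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class AsyncPipeOpener {

    private static final ExecutorService EXEC = Executors.newCachedThreadPool(r -> {
        Thread t = new Thread(r, "pipe-opener");
        t.setDaemon(true); // don't keep the process alive for a pipe that never opens
        return t;
    });

    // Opening the write end of a FIFO blocks until the read end is opened,
    // so the open must happen off the recording thread.
    public static Future<WritableByteChannel> openAsync(String path) {
        return EXEC.submit(() -> Channels.newChannel(new FileOutputStream(path)));
    }

    public static void main(String[] args) throws Exception {
        String path = args.length > 0 ? args[0] : "audio-test.aac";
        WritableByteChannel ch = openAsync(path).get(5, TimeUnit.SECONDS);
        System.out.println("opened: " + ch.isOpen());
        ch.close();
    }
}
```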
Thanks.
1 Answer
Your explanation isn't very clear; I can see you're trying to explain exactly what you're doing, but it isn't coming across.
First: can you describe the actual issue? I had to read halfway into your post, deep into an 8-line paragraph, before it looked like you were suggesting ffmpeg is hanging. Is that the issue? You really want to be explicit about that.
Secondly: how do you pipe data into the FIFOs? This matters. Your post is unclear on this: you seem to suggest ffmpeg reads the entire video file and then moves on to the audio. Is that correct? Or are both streams fed to ffmpeg concurrently?
Lastly: if ffmpeg hangs, it's likely because one of your input pipes is blocking (you're pushing data to FIFO-1 and the buffer is full, but ffmpeg wants data from FIFO-2 and the buffer is empty). Both FIFOs need to always be independently filled with data.
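For the blocking scenario described above, a common pattern is one dedicated feeder thread per FIFO, each draining its own queue, so the encoder callbacks never write to a pipe directly and a stalled pipe cannot starve the other. A sketch under those assumptions (FifoFeeder is an illustrative name; regular files stand in for the FIFOs when testing outside the app):

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.WritableByteChannel;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// One feeder per FIFO: encoder threads enqueue buffers, and each feeder
// drains its own queue into its own pipe, so a full video FIFO can never
// block audio delivery (and vice versa).
public class FifoFeeder extends Thread {

    private static final ByteBuffer EOS = ByteBuffer.allocate(0); // end-of-stream sentinel

    private final String path;
    private final BlockingQueue<ByteBuffer> queue = new LinkedBlockingQueue<>();

    public FifoFeeder(String path) {
        this.path = path;
    }

    public void submit(ByteBuffer buf) throws InterruptedException {
        queue.put(buf);
    }

    public void finish() throws InterruptedException {
        queue.put(EOS);
    }

    @Override
    public void run() {
        // Opening the write end blocks until ffmpeg opens the read end.
        try (WritableByteChannel ch = Channels.newChannel(new FileOutputStream(path))) {
            ByteBuffer buf;
            while ((buf = queue.take()) != EOS) {
                while (buf.hasRemaining()) {
                    ch.write(buf); // write() may be partial, so loop
                }
            }
        } catch (IOException | InterruptedException e) {
            e.printStackTrace();
        }
    }
}
```

Usage would be `new FifoFeeder(videoFifoPath)` and `new FifoFeeder(audioFifoPath)`, started before launching ffmpeg, with the MediaCodec output callbacks calling `submit()`.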