This post covers only the client side.
Looking through live555, the only bundled example that comes close is openRTSP.
The article title is:
Live555 + FFMPEG + ddraw: receiving, decoding, and displaying an H264 stream
Link:
Article body:
1) The H264 stream is received with live555. live555 delivers SPS, PPS, I-frames and P-frames as separate NAL units, so after receiving a buffer you have to assemble frames yourself. live555 does reassemble fragmented I- and P-frames, but before handing data to ffmpeg you must insert the 00 00 00 01 start code in front of each NAL unit, and for an I-frame you must pass the SPS and PPS along with it, or ffmpeg cannot decode. So the live555 buffers need to be assembled into frames (a sketch of this assembly step follows the class declaration below).
The key live555 work is getting hold of that buffer. Refer to the openRTSP and testRTSPClient examples: openRTSP has FileSink and H264VideoFileSink, testRTSPClient has DummySink. You can modify that sink to do the frame assembly and then call ffmpeg to decode.
class DummySink: public MediaSink {
public:
static DummySink* createNew(UsageEnvironment& env,
MediaSubsession& subsession, // identifies the kind of data that's being received
char const* streamId = NULL); // identifies the stream itself (optional)
private:
DummySink(UsageEnvironment& env, MediaSubsession& subsession, char const* streamId);
// called only by "createNew()"
virtual ~DummySink();
static void afterGettingFrame(void* clientData, unsigned frameSize,
unsigned numTruncatedBytes,
struct timeval presentationTime,
unsigned durationInMicroseconds);
void afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes,
struct timeval presentationTime, unsigned durationInMicroseconds);
private:
// redefined virtual functions:
virtual Boolean continuePlaying();
private:
u_int8_t* fReceiveBuffer;
MediaSubsession& fSubsession;
char* fStreamId;
};
// The above is the DummySink declaration from the live555 source (testRTSPClient.cpp).
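Here is what the frame-assembly step described in 1) might look like. This is only a sketch: the helper names (buildAnnexB, appendNal) and the caching of sps/pps are illustrative, not taken from the original article.

#include <cstddef>
#include <cstdint>
#include <vector>

static const uint8_t kStartCode[4] = {0x00, 0x00, 0x00, 0x01};

// Append one NAL unit to 'out', preceded by the Annex-B start code.
static void appendNal(std::vector<uint8_t>& out, const uint8_t* nal, size_t nalSize) {
  out.insert(out.end(), kStartCode, kStartCode + 4);
  out.insert(out.end(), nal, nal + nalSize);
}

// Build the buffer that is actually fed to the decoder. 'nal'/'nalSize' is the
// NAL unit live555 delivered into fReceiveBuffer; 'sps'/'pps' are NAL units
// cached earlier from the same stream.
std::vector<uint8_t> buildAnnexB(const uint8_t* nal, size_t nalSize,
                                 const std::vector<uint8_t>& sps,
                                 const std::vector<uint8_t>& pps) {
  std::vector<uint8_t> out;
  const bool isIdr = (nal[0] & 0x1f) == 5;   // nal_unit_type 5 = IDR slice
  if (isIdr) {                               // an IDR frame needs sps/pps in front
    appendNal(out, sps.data(), sps.size());
    appendNal(out, pps.data(), pps.size());
  }
  appendNal(out, nal, nalSize);
  return out;
}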
2) Decode with ffmpeg. There is not much to say here; my earlier posts contain examples.
We can rewrite the sink's frame callback like this:
void DummySink::afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes,
                                  struct timeval presentationTime, unsigned /*durationInMicroseconds*/) {
  unsigned char const start_code[4] = {0x00, 0x00, 0x00, 0x01};
  // ... (frame assembly omitted) ...
  // pH264ReceiveBuff is the buffer holding the assembled frame; hand it to FFMPEG here
  int imageWidth = 0;
  int imageHeight = 0;
  if (H264Status == H264STATUS_IFRAME || H264Status == H264STATUS_PFRAME)
  {
    // wrapper around the H264 decode function
    bool bRet = H264DecodeClass.H264DecodeProcess((unsigned char*)pH264ReceiveBuff, frameSize,
                                                  (unsigned char*)DecodeBuff, imageWidth, imageHeight);
    if (bRet && imageWidth > 0 && imageHeight > 0)
    {
      TRACE("receive a frame, frameSize=%d\n", frameSize);
      // call DDRAW here to display the image
    }
  }
  // Then continue, to request the next frame of data:
  continuePlaying();
}
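H264DecodeClass and H264DecodeProcess() above are the article author's own wrapper, whose implementation is not shown. As a rough idea of what such a wrapper can look like with FFmpeg's send/receive decoding API (the class below is an assumption, not the author's code):

extern "C" {
#include <libavcodec/avcodec.h>
}
#include <cstdint>

// A minimal H264 decoder wrapper. decode() expects an Annex-B frame
// (start code + NAL unit(s), with sps/pps in front of IDR frames).
class H264Decoder {
public:
  bool open() {
    const AVCodec* codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    if (codec == nullptr) return false;
    ctx_ = avcodec_alloc_context3(codec);
    return ctx_ != nullptr && avcodec_open2(ctx_, codec, nullptr) == 0;
  }
  // Returns true when a decoded picture (normally YUV420P) is available in 'out'.
  bool decode(const uint8_t* data, int size, AVFrame* out) {
    AVPacket* pkt = av_packet_alloc();
    pkt->data = const_cast<uint8_t*>(data);
    pkt->size = size;
    bool got = avcodec_send_packet(ctx_, pkt) == 0 &&
               avcodec_receive_frame(ctx_, out) == 0;
    av_packet_free(&pkt);
    return got;
  }
private:
  AVCodecContext* ctx_ = nullptr;
};

Call decode() with an AVFrame obtained from av_frame_alloc(); the decoded frame carries width, height and the three YUV planes (data[0..2] with linesize[0..2]), which is exactly what the ddraw display step in 3) needs.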
3) Display the yuv420p data directly with ddraw. First create two surfaces, a primary surface and an off-screen surface; copy the yuv420p data into the off-screen surface, then Blt it onto the primary surface to draw. This is also covered on my blog;
interested readers can refer to the earlier article on ddraw YUV video display.
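That earlier ddraw article is not reproduced here, so the following is only a rough sketch of the create / copy / Blt sequence just described, under the assumption of a windowed YV12 off-screen surface (YV12 stores the V plane before the U plane, and the chroma pitch is assumed to be half the luma pitch):

#include <windows.h>
#include <ddraw.h>
#include <cstring>
#pragma comment(lib, "ddraw.lib")
#pragma comment(lib, "dxguid.lib")

static LPDIRECTDRAW7        g_pDD        = NULL;
static LPDIRECTDRAWSURFACE7 g_pPrimary   = NULL;   // primary surface
static LPDIRECTDRAWSURFACE7 g_pOffscreen = NULL;   // off-screen YV12 surface

bool InitDDraw(HWND hwnd, int width, int height) {
  if (FAILED(DirectDrawCreateEx(NULL, (void**)&g_pDD, IID_IDirectDraw7, NULL)))
    return false;
  g_pDD->SetCooperativeLevel(hwnd, DDSCL_NORMAL);

  DDSURFACEDESC2 ddsd;                               // primary surface
  ZeroMemory(&ddsd, sizeof(ddsd));
  ddsd.dwSize  = sizeof(ddsd);
  ddsd.dwFlags = DDSD_CAPS;
  ddsd.ddsCaps.dwCaps = DDSCAPS_PRIMARYSURFACE;
  if (FAILED(g_pDD->CreateSurface(&ddsd, &g_pPrimary, NULL))) return false;

  LPDIRECTDRAWCLIPPER pClipper = NULL;               // clip blits to our window
  g_pDD->CreateClipper(0, &pClipper, NULL);
  pClipper->SetHWnd(0, hwnd);
  g_pPrimary->SetClipper(pClipper);

  ZeroMemory(&ddsd, sizeof(ddsd));                   // off-screen YV12 surface
  ddsd.dwSize   = sizeof(ddsd);
  ddsd.dwFlags  = DDSD_CAPS | DDSD_WIDTH | DDSD_HEIGHT | DDSD_PIXELFORMAT;
  ddsd.dwWidth  = width;
  ddsd.dwHeight = height;
  ddsd.ddsCaps.dwCaps = DDSCAPS_OFFSCREENPLAIN;
  ddsd.ddpfPixelFormat.dwSize   = sizeof(DDPIXELFORMAT);
  ddsd.ddpfPixelFormat.dwFlags  = DDPF_FOURCC;
  ddsd.ddpfPixelFormat.dwFourCC = MAKEFOURCC('Y', 'V', '1', '2');
  return SUCCEEDED(g_pDD->CreateSurface(&ddsd, &g_pOffscreen, NULL));
}

// Copy one decoded YUV420P frame into the off-screen surface, then Blt it
// to the primary surface. YV12 stores V before U, so the ffmpeg U/V planes
// are swapped here; the chroma pitch is assumed to be lPitch / 2.
void DrawYUV420P(HWND hwnd, const BYTE* y, const BYTE* u, const BYTE* v,
                 int width, int height) {
  DDSURFACEDESC2 ddsd;
  ZeroMemory(&ddsd, sizeof(ddsd));
  ddsd.dwSize = sizeof(ddsd);
  if (FAILED(g_pOffscreen->Lock(NULL, &ddsd, DDLOCK_WAIT, NULL))) return;

  BYTE* dstY = (BYTE*)ddsd.lpSurface;
  for (int i = 0; i < height; ++i)                   // luma plane
    memcpy(dstY + i * ddsd.lPitch, y + i * width, width);
  BYTE* dstV = dstY + ddsd.lPitch * height;
  BYTE* dstU = dstV + (ddsd.lPitch / 2) * (height / 2);
  for (int i = 0; i < height / 2; ++i) {             // chroma planes
    memcpy(dstV + i * (ddsd.lPitch / 2), v + i * (width / 2), width / 2);
    memcpy(dstU + i * (ddsd.lPitch / 2), u + i * (width / 2), width / 2);
  }
  g_pOffscreen->Unlock(NULL);

  RECT rc;                                           // client rect in screen coords
  GetClientRect(hwnd, &rc);
  ClientToScreen(hwnd, (LPPOINT)&rc);
  ClientToScreen(hwnd, (LPPOINT)&rc + 1);
  g_pPrimary->Blt(&rc, g_pOffscreen, NULL, DDBLT_WAIT, NULL);
}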
-------------------------------------
2.
Blog post: live555 + ffmpeg decoding of an H264 RTSP stream
Blog body:
I needed to implement a web client that decodes an H264 RTSP stream, so my first thought was live555 + ffmpeg: live555 receives the RTSP stream, ffmpeg decodes the H264 for display. Looking through live555, the only example that comes close is openRTSP, but it merely receives the RTSP stream and stores it in a file. I first wrote an ffmpeg program that decodes an H264 file and got it debugged; now all that is left is to modify the live555 example and join the two programs together. The key point is to find the place where openRTSP writes to the file: take the data there, decode it, and display it.
The project keeps me busy, so I can only spare a little time to write this down.
The main() function is in playCommon.cpp. Its flow is simple and not very different from the server side: create the task scheduler, create the usage environment, process the user's arguments (the RTSP URL), create an RTSPClient instance, send the first RTSP request (which may be OPTIONS or DESCRIBE), and enter the event loop.
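For reference, here is the same flow reduced to the live555 calls involved. This is only a sketch in the style of testRTSPClient.cpp; continueAfterDESCRIBE is a response handler you have to fill in yourself, and all error handling is omitted:

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

char eventLoopWatchVariable = 0;

// Response handler for the DESCRIBE request; in a real client this is where
// the MediaSession and its subsessions get set up (see testRTSPClient.cpp).
void continueAfterDESCRIBE(RTSPClient* /*rtspClient*/, int /*resultCode*/, char* resultString) {
  delete[] resultString;
}

int main(int argc, char** argv) {
  // Task scheduler and usage environment:
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

  // Create the RTSPClient from the URL passed on the command line:
  RTSPClient* rtspClient = RTSPClient::createNew(*env, argv[1], 1 /*verbosity*/, "myClient");

  // Send the first RTSP request, then enter the event loop:
  rtspClient->sendDescribeCommand(continueAfterDESCRIBE);
  env->taskScheduler().doEventLoop(&eventLoopWatchVariable);
  return 0;
}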
The part we care most about is how the RTPSource is created, in MediaSubsession::createSourceObjects(). Let's take a look:
Boolean MediaSubsession::createSourceObjects(int useSpecialRTPoffset) {
do {
// First, check "fProtocolName"
if (strcmp(fProtocolName, "UDP") == 0) {
// A UDP-packetized stream (*not* a RTP stream)
fReadSource = BasicUDPSource::createNew(env(), fRTPSocket);
fRTPSource = NULL; // Note!
if (strcmp(fCodecName, "MP2T") == 0) { // MPEG-2 Transport Stream
fReadSource = MPEG2TransportStreamFramer::createNew(env(), fReadSource);
// this sets "durationInMicroseconds" correctly, based on the PCR values
}
} else {
// Check "fCodecName" against the set of codecs that we support,
// and create our RTP source accordingly
// (Later make this code more efficient, as this set grows #####)
// (Also, add more fmts that can be implemented by SimpleRTPSource#####)
Boolean createSimpleRTPSource = False; // by default; can be changed below
Boolean doNormalMBitRule = False; // default behavior if "createSimpleRTPSource" is True
if (strcmp(fCodecName, "QCELP") == 0) { // QCELP audio
fReadSource =
QCELPAudioRTPSource::createNew(env(), fRTPSocket, fRTPSource,
fRTPPayloadFormat,
fRTPTimestampFrequency);
// Note that fReadSource will differ from fRTPSource in this case
} else if (strcmp(fCodecName, "AMR") == 0) { // AMR audio (narrowband)
fReadSource =
AMRAudioRTPSource::createNew(env(), fRTPSocket, fRTPSource,
fRTPPayloadFormat, 0 /*isWideband*/,
fNumChannels, fOctetalign, fInterleaving,
fRobustsorting, fCRC);
// Note that fReadSource will differ from fRTPSource in this case
} else if (strcmp(fCodecName, "AMR-WB") == 0) { // AMR audio (wideband)
fReadSource =
AMRAudioRTPSource::createNew(env(), fRTPSocket, fRTPSource,
fRTPPayloadFormat, 1 /*isWideband*/,
fNumChannels, fOctetalign, fInterleaving,
fRobustsorting, fCRC);
// Note that fReadSource will differ from fRTPSource in this case
} else if (strcmp(fCodecName, "MPA") == 0) { // MPEG-1 or 2 audio
fReadSource = fRTPSource
= MPEG1or2AudioRTPSource::createNew(env(), fRTPSocket,
fRTPPayloadFormat,
fRTPTimestampFrequency);
} else if (strcmp(fCodecName, "MPA-ROBUST") == 0) { // robust MP3 audio
fReadSource = fRTPSource
= MP3ADURTPSource::createNew(env(), fRTPSocket, fRTPPayloadFormat,
fRTPTimestampFrequency);
if (fRTPSource == NULL) break;
if (!fReceiveRawMP3ADUs) {
// Add a filter that deinterleaves the ADUs after depacketizing them:
MP3ADUdeinterleaver* deinterleaver
= MP3ADUdeinterleaver::createNew(env(), fRTPSource);
if (deinterleaver == NULL) break;
// Add another filter that converts these ADUs to MP3 frames:
fReadSource = MP3FromADUSource::createNew(env(), deinterleaver);
}
} else if (strcmp(fCodecName, "X-MP3-DRAFT-00") == 0) {
// a non-standard variant of "MPA-ROBUST" used by RealNetworks
// (one 'ADU'ized MP3 frame per packet; no headers)
fRTPSource
= SimpleRTPSource::createNew(env(), fRTPSocket, fRTPPayloadFormat,
fRTPTimestampFrequency,
"audio/MPA-ROBUST" /*hack*/);
if (fRTPSource == NULL) break;
// Add a filter that converts these ADUs to MP3 frames:
fReadSource = MP3FromADUSource::createNew(env(), fRTPSource,
False /*no ADU header*/);
} else if (strcmp(fCodecName, "MP4A-LATM") == 0) { // MPEG-4 LATM audio
fReadSource = fRTPSource
= MPEG4LATMAudioRTPSource::createNew(env(), fRTPSocket,
fRTPPayloadFormat,
fRTPTimestampFrequency);
} else if (strcmp(fCodecName, "VORBIS") == 0) { // Vorbis audio
fReadSource = fRTPSource
= VorbisAudioRTPSource::createNew(env(), fRTPSocket,
fRTPPayloadFormat,
fRTPTimestampFrequency);
} else if (strcmp(fCodecName, "VP8") == 0) { // VP8 video
fReadSource = fRTPSource
= VP8VideoRTPSource::createNew(env(), fRTPSocket,
fRTPPayloadFormat,
fRTPTimestampFrequency);
} else if (strcmp(fCodecName, "AC3") == 0 || strcmp(fCodecName, "EAC3") == 0) { // AC3 audio
fReadSource = fRTPSource
= AC3AudioRTPSource::createNew(env(), fRTPSocket,
fRTPPayloadFormat,
fRTPTimestampFrequency);
} else if (strcmp(fCodecName, "MP4V-ES") == 0) { // MPEG-4 Elementary Stream video
fReadSource = fRTPSource
= MPEG4ESVideoRTPSource::createNew(env(), fRTPSocket,
fRTPPayloadFormat,
fRTPTimestampFrequency);
} else if (strcmp(fCodecName, "MPEG4-GENERIC") == 0) {
fReadSource = fRTPSource
= MPEG4GenericRTPSource::createNew(env(), fRTPSocket,
fRTPPayloadFormat,
fRTPTimestampFrequency,
fMediumName, fMode,
fSizelength, fIndexlength,
fIndexdeltalength);
} else if (strcmp(fCodecName, "MPV") == 0) { // MPEG-1 or 2 video
fReadSource = fRTPSource
= MPEG1or2VideoRTPSource::createNew(env(), fRTPSocket,
fRTPPayloadFormat,
fRTPTimestampFrequency);
} else if (strcmp(fCodecName, "MP2T") == 0) { // MPEG-2 Transport Stream
fRTPSource = SimpleRTPSource::createNew(env(), fRTPSocket, fRTPPayloadFormat,
fRTPTimestampFrequency, "video/MP2T",
0, False);
fReadSource = MPEG2TransportStreamFramer::createNew(env(), fRTPSource);
// this sets "durationInMicroseconds" correctly, based on the PCR values
} else if (strcmp(fCodecName, "H261") == 0) { // H.261
fReadSource = fRTPSource
= H261VideoRTPSource::createNew(env(), fRTPSocket,
fRTPPayloadFormat,
fRTPTimestampFrequency);
} else if (strcmp(fCodecName, "H263-1998") == 0 ||
strcmp(fCodecName, "H263-2000") == 0) { // H.263+
fReadSource = fRTPSource
= H263plusVideoRTPSource::createNew(env(), fRTPSocket,
fRTPPayloadFormat,
fRTPTimestampFrequency);
} else if (strcmp(fCodecName, "H264") == 0) {
fReadSource = fRTPSource
= H264VideoRTPSource::createNew(env(), fRTPSocket,
fRTPPayloadFormat,
fRTPTimestampFrequency);
} else if (strcmp(fCodecName, "DV") == 0) {
fReadSource = fRTPSource
= DVVideoRTPSource::createNew(env(), fRTPSocket,
fRTPPayloadFormat,
fRTPTimestampFrequency);
} else if (strcmp(fCodecName, "JPEG") == 0) { // motion JPEG
fReadSource = fRTPSource
= JPEGVideoRTPSource::createNew(env(), fRTPSocket,
fRTPPayloadFormat,
fRTPTimestampFrequency,
videoWidth(),
videoHeight());
} else if (strcmp(fCodecName, "X-QT") == 0
|| strcmp(fCodecName, "X-QUICKTIME") == 0) {
// Generic QuickTime streams, as defined in
// <http://developer.apple.com/quicktime/icefloe/dispatch026.html>
char* mimeType
= new char[strlen(mediumName()) + strlen(codecName()) + 2] ;
sprintf(mimeType, "%s/%s", mediumName(), codecName());
fReadSource = fRTPSource
= QuickTimeGenericRTPSource::createNew(env(), fRTPSocket,
fRTPPayloadFormat,
fRTPTimestampFrequency,
mimeType);
delete[] mimeType;
} else if ( strcmp(fCodecName, "PCMU") == 0 // PCM u-law audio
|| strcmp(fCodecName, "GSM") == 0 // GSM audio
|| strcmp(fCodecName, "DVI4") == 0 // DVI4 (IMA ADPCM) audio
|| strcmp(fCodecName, "PCMA") == 0 // PCM a-law audio
|| strcmp(fCodecName, "MP1S") == 0 // MPEG-1 System Stream
|| strcmp(fCodecName, "MP2P") == 0 // MPEG-2 Program Stream
|| strcmp(fCodecName, "L8") == 0 // 8-bit linear audio
|| strcmp(fCodecName, "L16") == 0 // 16-bit linear audio
|| strcmp(fCodecName, "L20") == 0 // 20-bit linear audio (RFC 3190)
|| strcmp(fCodecName, "L24") == 0 // 24-bit linear audio (RFC 3190)
|| strcmp(fCodecName, "G726-16") == 0 // G.726, 16 kbps
|| strcmp(fCodecName, "G726-24") == 0 // G.726, 24 kbps
|| strcmp(fCodecName, "G726-32") == 0 // G.726, 32 kbps
|| strcmp(fCodecName, "G726-40") == 0 // G.726, 40 kbps
|| strcmp(fCodecName, "SPEEX") == 0 // SPEEX audio
|| strcmp(fCodecName, "T140") == 0 // T.140 text (RFC 4103)
|| strcmp(fCodecName, "DAT12") == 0 // 12-bit nonlinear audio (RFC 3190)
) {
createSimpleRTPSource = True;
useSpecialRTPoffset = 0;
} else if (useSpecialRTPoffset >= 0) {
// We don't know this RTP payload format, but try to receive
// it using a 'SimpleRTPSource' with the specified header offset:
createSimpleRTPSource = True;
} else {
env().setResultMsg("RTP payload format unknown or not supported");
break;
}
if (createSimpleRTPSource) {
char* mimeType
= new char[strlen(mediumName()) + strlen(codecName()) + 2] ;
sprintf(mimeType, "%s/%s", mediumName(), codecName());
fReadSource = fRTPSource
= SimpleRTPSource::createNew(env(), fRTPSocket, fRTPPayloadFormat,
fRTPTimestampFrequency, mimeType,
(unsigned)useSpecialRTPoffset,
doNormalMBitRule);
delete[] mimeType;
}
}
return True;
} while (0);
return False; // an error occurred
}
3. 固本培元's column (放牛娃不吃草)
This blogger's posts have well over ten thousand views.
Article title:
Live555 receives H264 and decodes it to YUV420 with ffmpeg
Blog address:
Excerpt:
1.0
The live555 client
Building live555 produces a number of example programs, among them a client example that is suitable for modification.
This article modifies the testRTSPClient.cpp example.
The live555 official documentation describes it: see the link.
2.0
Saving an h264 file with live555:
live555 omits the start code when transporting an h264 stream. If you want to store the h264 stream and be able to play it with VLC, just add the start code back in front of each NAL unit.
Start code: 0x00 0x00 0x00 0x01
(note: the 0x01 byte is at the highest address)
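In code this amounts to nothing more than writing the four start-code bytes before each NAL unit; a minimal sketch (the FILE* and the helper name are illustrative):

#include <cstdio>

// Write one NAL unit (as delivered by live555 into fReceiveBuffer) to an
// already-open file, restoring the Annex-B start code that RTP omits.
void writeNalToFile(FILE* fOut, const unsigned char* nal, unsigned nalSize) {
  static const unsigned char start_code[4] = {0x00, 0x00, 0x00, 0x01};
  fwrite(start_code, 1, sizeof(start_code), fOut);  // start code first
  fwrite(nal, 1, nalSize, fOut);                    // then the raw NAL unit
}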
-----------------------------------------------------------
4. Blog title:
Live555 + h264 + ffmpeg client decoding notes
The client uses ffmpeg to decode the h264 frames in an mp4 file and plays them with SDL2.0.
1. First, let's understand what PPS and SPS are; see the link:
(link)
The analysis in that article is very thorough.
When transporting H264 over RTP, the SDP description needs two items: the Sequence Parameter Set (SPS) and the Picture Parameter Set (PPS).
Where do they come from? From the H264 bitstream itself. In an H264 stream every NAL unit begins with a "0x00 0x00 0x01" or "0x00 0x00 0x00 0x01" start code. After finding a start code,
check the low 5 bits of the first byte that follows it: 7 means SPS, 8 means PPS, i.e. for a 4-byte start code (data[4] & 0x1f) == 7 || (data[4] & 0x1f) == 8.
Then strip the start code from each NAL unit and base64-encode the rest;
the result is what goes into the SDP. The SPS and PPS entries are separated by a comma.
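A small sketch of that check (the helper below is illustrative, not from the linked article):

#include <cstddef>
#include <cstdint>

// Returns the NAL type of the unit starting right after a start code at 'pos'
// (7 = SPS, 8 = PPS, 5 = IDR slice, ...), or -1 if no start code is found there.
static int nalTypeAt(const uint8_t* data, size_t size, size_t pos) {
  if (pos + 4 < size && data[pos] == 0 && data[pos + 1] == 0 &&
      data[pos + 2] == 0 && data[pos + 3] == 1) {
    return data[pos + 4] & 0x1f;          // 4-byte start code
  }
  if (pos + 3 < size && data[pos] == 0 && data[pos + 1] == 0 && data[pos + 2] == 1) {
    return data[pos + 3] & 0x1f;          // 3-byte start code
  }
  return -1;
}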
2. Now let's look at how the live555 client obtains the sps and pps and decodes.
The "testRTSPClient" demo application receives each (video and/or audio) frame into a memory buffer, but does not do anything with the frame data.
You can, however, use this code as a model for a 'media player' application that decodes and renders these frames.
Note, in particular, the "DummySink" class that the "testRTSPClient" demo application uses - and the (non-static) "DummySink::afterGettingFrame()" function.
When this function is called, a complete 'frame' (for H.264 or H.265, this will be a "NAL unit") will have already been delivered into "fReceiveBuffer".
Note that our "DummySink" implementation doesn't actually do anything with this data; that's why it's called a 'dummy' sink.
If you want to decode (or otherwise process) these frames, you would replace "DummySink" with your own "MediaSink" subclass.
Its "afterGettingFrame()" function would pass the data (at "fReceiveBuffer", of length "frameSize") to a decoder.
(A decoder would also use the "presentationTime" timestamp to properly time the rendering of each frame, and to synchronize audio and video.)
Link: see the original post's link.
As noted above, the client has to do some preparation before decoding:
1. Call MediaSubsession::fmtp_spropparametersets() to get the base64-encoded sps and pps;
2. Call SPropRecord* parseSPropParameterSets(char const* sPropParameterSetsStr, unsigned& numSPropRecords); note that this is not a class member function.
parseSPropParameterSets(...) returns an SPropRecord*.
What it actually returns is an array (a block of memory) whose elements are of type SPropRecord.
In my tests the returned array has length 2: the first element is the sps and the second is the pps.
Source code:
SPropRecord* parseSPropParameterSets(char const* sPropParameterSetsStr,
// result parameter:
unsigned& numSPropRecords) {
// Make a copy of the input string, so we can replace the commas with '\0's:
char* inStr = strDup(sPropParameterSetsStr);
if (inStr == NULL) {
numSPropRecords = 0;
return NULL;
}
// Count the number of commas (and thus the number of parameter sets):
numSPropRecords = 1;
char* s;
for (s = inStr; *s != '\0'; ++s) {
if (*s == ',') {
++numSPropRecords;
*s = '\0';
}
}
// Allocate and fill in the result array:
SPropRecord* resultArray = new SPropRecord[numSPropRecords]; // ****** note: the result array is allocated here ******
s = inStr;
for (unsigned i = 0; i < numSPropRecords; ++i) {
resultArray[i].sPropBytes = base64Decode(s, resultArray[i].sPropLength);
s += strlen(s) + 1;
}
delete[] inStr;
return resultArray;
}
Let's continue; this part of the code is on the client side:
void DummySink::afterGettingFrame1(unsigned frameSize, unsigned numTruncatedBytes,
                                   struct timeval presentationTime, unsigned /*durationInMicroseconds*/)
{
  unsigned int Num = 0;
  unsigned int &SPropRecords = Num;
  // Note: the sps/pps only need to be parsed once (e.g. for the first frame), not for every frame.
  SPropRecord *p_record = parseSPropParameterSets(fSubsession.fmtp_spropparametersets(), SPropRecords);
  SPropRecord &sps = p_record[0];
  SPropRecord &pps = p_record[1];
  m_player->setSDPInfo(sps.sPropBytes, sps.sPropLength, pps.sPropBytes, pps.sPropLength); // pass sps/pps to the player to initialize the decoder
  m_player->renderOneFrame(frameSize); // signal the player that a frame is ready to render
  // Then continue, to request the next frame of data:
  continuePlaying();
}
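setSDPInfo() belongs to the author's player class and its body is not shown. One plausible way for such a function to hand the sps/pps to FFmpeg is to pack them, with start codes, into the codec context's extradata before opening the decoder; a sketch under that assumption (not the author's code):

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/mem.h>
}
#include <cstring>

// Build Annex-B extradata from the two SPropRecords and open an H264 decoder.
AVCodecContext* openH264Decoder(const unsigned char* sps, unsigned spsLen,
                                const unsigned char* pps, unsigned ppsLen) {
  const AVCodec* codec = avcodec_find_decoder(AV_CODEC_ID_H264);
  if (codec == nullptr) return nullptr;
  AVCodecContext* ctx = avcodec_alloc_context3(codec);

  static const unsigned char sc[4] = {0, 0, 0, 1};
  int size = 4 + spsLen + 4 + ppsLen;
  ctx->extradata = (uint8_t*)av_mallocz(size + AV_INPUT_BUFFER_PADDING_SIZE);
  ctx->extradata_size = size;
  unsigned char* p = ctx->extradata;
  memcpy(p, sc, 4);        p += 4;
  memcpy(p, sps, spsLen);  p += spsLen;
  memcpy(p, sc, 4);        p += 4;
  memcpy(p, pps, ppsLen);

  if (avcodec_open2(ctx, codec, nullptr) < 0) {
    avcodec_free_context(&ctx);
    return nullptr;
  }
  return ctx;   // decoded frames are typically AV_PIX_FMT_YUV420P
}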