Dissecting the Ox: Live555 Source Code Demystified (RTP Packetization)

Date: 2022-08-28 14:26:30

This article explains live555's server-side RTP packetization flow, traced through the MediaServer example. Before reading it, please first go through the companion article linked below:

Dissecting the Ox: Live555 Source Code Demystified (the RTSP setup process, explained through MediaServer)

http://blog.csdn.net/smilestone_322/article/details/18923139

After receiving the client's PLAY command, the server calls startStream to start the stream:

void OnDemandServerMediaSubsession::startStream(unsigned clientSessionId,
                                                void* streamToken,
                                                TaskFunc* rtcpRRHandler,
                                                void* rtcpRRHandlerClientData,
                                                unsigned short& rtpSeqNum,
                                                unsigned& rtpTimestamp,
                                                ServerRequestAlternativeByteHandler* serverRequestAlternativeByteHandler,
                                                void* serverRequestAlternativeByteHandlerClientData) {
  StreamState* streamState = (StreamState*)streamToken;
  Destinations* destinations
    = (Destinations*)(fDestinationsHashTable->Lookup((char const*)clientSessionId));
  if (streamState != NULL) {
    // Start the stream:
    streamState->startPlaying(destinations,
                              rtcpRRHandler, rtcpRRHandlerClientData,
                              serverRequestAlternativeByteHandler, serverRequestAlternativeByteHandlerClientData);
    RTPSink* rtpSink = streamState->rtpSink(); // alias
    if (rtpSink != NULL) {
      // Fetch the current sequence number and timestamp:
      rtpSeqNum = rtpSink->currentSeqNo();
      rtpTimestamp = rtpSink->presetNextTimestamp();
    }
  }
}

Next, follow the startPlaying function of the StreamState class; its source is as follows:

void StreamState
::startPlaying(Destinations* dests,
               TaskFunc* rtcpRRHandler, void* rtcpRRHandlerClientData,
               ServerRequestAlternativeByteHandler* serverRequestAlternativeByteHandler,
               void* serverRequestAlternativeByteHandlerClientData) {
  if (dests == NULL) return;

  if (fRTCPInstance == NULL && fRTPSink != NULL) {
    // Create (and start) a 'RTCP instance' for this RTP sink:
    // It is used to send RTCP packets.
    fRTCPInstance
      = RTCPInstance::createNew(fRTPSink->envir(), fRTCPgs,
                                fTotalBW, (unsigned char*)fMaster.fCNAME,
                                fRTPSink, NULL /* we're a server */);
    // Note: This starts RTCP running automatically
  }

  if (dests->isTCP) {
    // Change RTP and RTCP to use the TCP socket instead of UDP:
    // Which transport is used is decided by the client; it tells the server
    // at SETUP time whether to stream over TCP or UDP.
    if (fRTPSink != NULL) {
      fRTPSink->addStreamSocket(dests->tcpSocketNum, dests->rtpChannelId);
      RTPInterface
        ::setServerRequestAlternativeByteHandler(fRTPSink->envir(), dests->tcpSocketNum,
                                                 serverRequestAlternativeByteHandler, serverRequestAlternativeByteHandlerClientData);
      // So that we continue to handle RTSP commands from the client
    }
    if (fRTCPInstance != NULL) {
      fRTCPInstance->addStreamSocket(dests->tcpSocketNum, dests->rtcpChannelId);
      fRTCPInstance->setSpecificRRHandler(dests->tcpSocketNum, dests->rtcpChannelId,
                                          rtcpRRHandler, rtcpRRHandlerClientData);
    }
  } else {
    // Tell the RTP and RTCP 'groupsocks' about this destination
    // (in case they don't already have it):
    if (fRTPgs != NULL) fRTPgs->addDestination(dests->addr, dests->rtpPort);
    if (fRTCPgs != NULL) fRTCPgs->addDestination(dests->addr, dests->rtcpPort);
    if (fRTCPInstance != NULL) {
      fRTCPInstance->setSpecificRRHandler(dests->addr.s_addr, dests->rtcpPort,
                                          rtcpRRHandler, rtcpRRHandlerClientData);
    }
  }

  if (fRTCPInstance != NULL) {
    // Hack: Send an initial RTCP "SR" packet, before the initial RTP packet, so that receivers will (likely) be able to
    // get RTCP-synchronized presentation times immediately:
    fRTCPInstance->sendReport();
  }

  if (!fAreCurrentlyPlaying && fMediaSource != NULL) {
    if (fRTPSink != NULL) {
      // Start the stream:
      fRTPSink->startPlaying(*fMediaSource, afterPlayingStreamState, this);
      fAreCurrentlyPlaying = True;
    } else if (fUDPSink != NULL) {
      fUDPSink->startPlaying(*fMediaSource, afterPlayingStreamState, this);
      fAreCurrentlyPlaying = True;
    }
  }
}
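When the client chose TCP at SETUP (dests->isTCP), each RTP and RTCP packet is interleaved on the RTSP TCP connection with a 4-byte '$' header (RFC 2326 §10.12). A minimal sketch of that framing, using a hypothetical helper (sendInterleaved is not a live555 function):

#include <sys/socket.h>

// Hypothetical helper (not live555 source) showing the RFC 2326 "$" framing
// used for RTP-over-TCP when dests->isTCP is true:
void sendInterleaved(int tcpSocket, unsigned char channelId,
                     unsigned char const* packet, unsigned packetSize) {
  unsigned char framing[4];
  framing[0] = '$';                               // magic byte
  framing[1] = channelId;                         // rtpChannelId / rtcpChannelId from SETUP
  framing[2] = (unsigned char)(packetSize >> 8);  // 16-bit big-endian payload length
  framing[3] = (unsigned char)(packetSize & 0xFF);
  send(tcpSocket, framing, sizeof framing, 0);
  send(tcpSocket, packet, packetSize, 0);
}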

The line to focus on is:

fRTPSink->startPlaying(*fMediaSource, afterPlayingStreamState, this);

fRTPSink is declared as RTPSink* fRTPSink. RTPSink inherits from MediaSink, and startPlaying is defined in MediaSink, so this call lands in MediaSink::startPlaying. Stepping into it:

Boolean MediaSink::startPlaying(MediaSource& source,
                                afterPlayingFunc* afterFunc,
                                void* afterClientData) {
  // Make sure we're not already being played:
  if (fSource != NULL) {
    envir().setResultMsg("This sink is already being played");
    return False;
  }

  // Make sure our source is compatible:
  if (!sourceIsCompatibleWithUs(source)) {
    envir().setResultMsg("MediaSink::startPlaying(): source is not compatible!");
    return False;
  }

  // Save these parameters for later use:
  fSource = (FramedSource*)&source;
  fAfterFunc = afterFunc;
  fAfterClientData = afterClientData;

  return continuePlaying();
}

This function is much the same for client and server: the sink pulls data from the source. On the server side the source reads from a file or a live stream and hands the data to the sink, which packetizes and sends it; on the client side the source receives packets from the network and reassembles them into frames, while the sink handles decoding and rendering. A minimal sketch of this pull contract follows.
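To make the pull model concrete, here is a minimal sketch of the FramedSource contract that every source in this chain follows (the class DummyFramedSource and its canned frame are hypothetical, not live555 source):

#include "FramedSource.hh"
#include <string.h>
#include <sys/time.h>

class DummyFramedSource : public FramedSource {
public:
  DummyFramedSource(UsageEnvironment& env) : FramedSource(env) {}

protected:
  // Called (indirectly) by the sink's getNextFrame(); must deliver at most
  // fMaxSize bytes into fTo and then call afterGetting() exactly once:
  virtual void doGetNextFrame() {
    unsigned char fakeFrame[] = { 0x67, 0x42, 0x00, 0x1F }; // e.g. an SPS NAL unit
    fFrameSize = sizeof fakeFrame;
    if (fFrameSize > fMaxSize) {      // never write past fTo + fMaxSize
      fNumTruncatedBytes = fFrameSize - fMaxSize;
      fFrameSize = fMaxSize;
    }
    memmove(fTo, fakeFrame, fFrameSize);
    gettimeofday(&fPresentationTime, NULL);
    // Hand the frame back; this invokes the sink's afterGettingFrame callback:
    FramedSource::afterGetting(this);
  }
};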

Next, follow continuePlaying(). It is declared in MediaSink as a pure virtual function, virtual Boolean continuePlaying() = 0;, so the implementation lives in a subclass. Stepping in shows which subclass implements it:

Boolean MultiFramedRTPSink::continuePlaying() {
  // Send the first packet.
  // (This will also schedule any future sends.)
  buildAndSendPacket(True);

  return True;
}

continuePlaying() is found in MultiFramedRTPSink. It is trivial: it just calls buildAndSendPacket(True). MultiFramedRTPSink is a frame-oriented class that obtains one frame at a time from its source; buildAndSendPacket, as the name suggests, builds an RTP packet and sends it.

void MultiFramedRTPSink::buildAndSendPacket(Boolean isFirstPacket) {
  fIsFirstPacket = isFirstPacket;

  // Set up the RTP header (the fixed 12-byte header fields):
  unsigned rtpHdr = 0x80000000; // RTP version 2; marker ('M') bit not set (by default; it can be set later)
  rtpHdr |= (fRTPPayloadType<<16); // payload type
  rtpHdr |= fSeqNo; // sequence number
  // Append rtpHdr to the packet buffer:
  fOutBuf->enqueueWord(rtpHdr);

  // Note where the RTP timestamp will go.
  // (We can't fill this in until we start packing payload frames.)
  fTimestampPosition = fOutBuf->curPacketSize();
  // Leave a 4-byte hole in the buffer; the timestamp is filled in later:
  fOutBuf->skipBytes(4); // leave a hole for the timestamp

  // Write the SSRC into the buffer:
  fOutBuf->enqueueWord(SSRC());

  // Allow for a special, payload-format-specific header following the
  // RTP header:
  fSpecialHeaderPosition = fOutBuf->curPacketSize();
  fSpecialHeaderSize = specialHeaderSize();
  fOutBuf->skipBytes(fSpecialHeaderSize);

  // Begin packing as many (complete) frames into the packet as we can:
  fTotalFrameSpecificHeaderSizes = 0;
  fNoFramesLeft = False;
  fNumFramesUsedSoFar = 0;
  // Everything above filled in the RTP header; packFrame() now packs the payload:
  packFrame();
}
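As a quick sanity check on the header arithmetic, here is what the first 32-bit word works out to for an illustrative dynamic payload type 96 and sequence number 0x1234 (the concrete values are only an example):

unsigned rtpHdr = 0x80000000; // version=2, P=0, X=0, CC=0, M=0
rtpHdr |= (96 << 16);         // 7-bit payload type goes in bits 16..22
rtpHdr |= 0x1234;             // 16-bit sequence number in the low bits
// rtpHdr == 0x80601234; the 32-bit timestamp and 32-bit SSRC follow,
// completing the fixed 12-byte RTP header.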

The source of packFrame is as follows:

void MultiFramedRTPSink::packFrame() {
  // Get the next frame.

  // First, see if we have an overflow frame that was too big for the last pkt
  if (fOutBuf->haveOverflowData()) {
    // The previous frame was too big and overflowed.
    // Use this frame before reading a new one from the source
    unsigned frameSize = fOutBuf->overflowDataSize();
    struct timeval presentationTime = fOutBuf->overflowPresentationTime();
    unsigned durationInMicroseconds = fOutBuf->overflowDurationInMicroseconds();
    fOutBuf->useOverflowData();

    afterGettingFrame1(frameSize, 0, presentationTime, durationInMicroseconds);
  } else {
    // Normal case: we need to read a new frame from the source
    if (fSource == NULL) return;

    // Update the buffer position bookkeeping:
    fCurFrameSpecificHeaderPosition = fOutBuf->curPacketSize();
    fCurFrameSpecificHeaderSize = frameSpecificHeaderSize();
    fOutBuf->skipBytes(fCurFrameSpecificHeaderSize);
    fTotalFrameSpecificHeaderSizes += fCurFrameSpecificHeaderSize;

    // Ask the source for the next frame. fOutBuf->curPtr() is where the data
    // will be written; the second argument is the remaining buffer size;
    // afterGettingFrame is the callback invoked once a frame has been
    // delivered; ourHandleClosure is called when the source closes (e.g. at
    // end of file).
    fSource->getNextFrame(fOutBuf->curPtr(), fOutBuf->totalBytesAvailable(),
                          afterGettingFrame, this, ourHandleClosure, this);
  }
}

getNextFrame asks the source to read one frame of data from a file or a device (e.g., an IP camera); when the read completes, the frame is handed back to the sink via the afterGettingFrame callback. Here is getNextFrame:

void FramedSource::getNextFrame(unsigned char* to, unsigned maxSize,
                                afterGettingFunc* afterGettingFunc,
                                void* afterGettingClientData,
                                onCloseFunc* onCloseFunc,
                                void* onCloseClientData) {
  // Make sure we're not already being read:
  if (fIsCurrentlyAwaitingData) {
    envir() << "FramedSource[" << this << "]::getNextFrame(): attempting to read more than once at the same time!\n";
    envir().internalError();
  }

  // Save these parameters:
  fTo = to;
  fMaxSize = maxSize;
  fNumTruncatedBytes = 0; // by default; could be changed by doGetNextFrame()
  fDurationInMicroseconds = 0; // by default; could be changed by doGetNextFrame()
  fAfterGettingFunc = afterGettingFunc;
  fAfterGettingClientData = afterGettingClientData;
  fOnCloseFunc = onCloseFunc;
  fOnCloseClientData = onCloseClientData;
  fIsCurrentlyAwaitingData = True;

  doGetNextFrame();
}

It then calls doGetNextFrame() to fetch the next frame of data.

The H264FUAFragmenter class is used inside H264VideoRTPSink as a member variable. H264VideoRTPSink derives from VideoRTPSink, which derives from MultiFramedRTPSink, which in turn derives from MediaSink. H264FUAFragmenter replaces H264VideoStreamFramer as the RTPSink's immediate source: when the RTPSink wants a frame, it gets it from the H264FUAFragmenter.
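The interposition happens in H264VideoRTPSink::continuePlaying(); a paraphrased sketch of the idea (not verbatim live555 source; the constructor arguments here are an approximation):

Boolean H264VideoRTPSink::continuePlaying() {
  if (fOurFragmenter == NULL) {
    // Wrap the original source (the H264VideoStreamFramer) in a fragmenter,
    // and make the fragmenter our source from now on:
    fOurFragmenter = new H264FUAFragmenter(envir(), fSource,
                                           OutPacketBuffer::maxSize,
                                           ourMaxPacketSize() - 12/*RTP hdr*/);
    fSource = fOurFragmenter;
  }
  // Then proceed with the generic frame-packing logic:
  return MultiFramedRTPSink::continuePlaying();
}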

void H264FUAFragmenter::doGetNextFrame() {
  if (fNumValidDataBytes == 1) {
    // We have no NAL unit data currently in the buffer.  Read a new one:
    // fInputSource is the H264VideoStreamFramer; its getNextFrame() drives
    // H264VideoStreamParser's parsing, which in turn reads data from the
    // ByteStreamFileSource.
    fInputSource->getNextFrame(&fInputBuffer[1], fInputBufferSize - 1,
                               afterGettingFrame, this,
                               FramedSource::handleClosure, this);
  } else {
    // We have NAL unit data in the buffer.  There are three cases to consider:
    // 1. There is a new NAL unit in the buffer, and it's small enough to deliver
    //    to the RTP sink (as is).
    // 2. There is a new NAL unit in the buffer, but it's too large to deliver to
    //    the RTP sink in its entirety.  Deliver the first fragment of this data,
    //    as a FU-A packet, with one extra preceding header byte.
    // 3. There is a NAL unit in the buffer, and we've already delivered some
    //    fragment(s) of this.  Deliver the next fragment of this data,
    //    as a FU-A packet, with two extra preceding header bytes.

    if (fMaxSize < fMaxOutputPacketSize) { // shouldn't happen
      envir() << "H264FUAFragmenter::doGetNextFrame(): fMaxSize ("
              << fMaxSize << ") is smaller than expected\n";
    } else {
      fMaxSize = fMaxOutputPacketSize;
    }

    fLastFragmentCompletedNALUnit = True; // by default
    if (fCurDataOffset == 1) { // case 1 or 2
      if (fNumValidDataBytes - 1 <= fMaxSize) { // case 1
        // Case 1: a single-NAL-unit packet; the whole NAL unit fits as is.
        memmove(fTo, &fInputBuffer[1], fNumValidDataBytes - 1);
        fFrameSize = fNumValidDataBytes - 1;
        fCurDataOffset = fNumValidDataBytes;
      } else { // case 2
        // We need to send the NAL unit data as FU-A packets.  Deliver the first
        // packet now.  Note that we add FU indicator and FU header bytes to the front
        // of the packet (reusing the existing NAL header byte for the FU header).
        // Case 2: the first FU-A fragment.
        fInputBuffer[0] = (fInputBuffer[1] & 0xE0) | 28; // FU indicator
        fInputBuffer[1] = 0x80 | (fInputBuffer[1] & 0x1F); // FU header (with S bit)
        memmove(fTo, fInputBuffer, fMaxSize);
        fFrameSize = fMaxSize;
        fCurDataOffset += fMaxSize - 1;
        fLastFragmentCompletedNALUnit = False;
      }
    } else { // case 3
      // We are sending this NAL unit data as FU-A packets.  We've already sent the
      // first packet (fragment).  Now, send the next fragment.  Note that we add
      // FU indicator and FU header bytes to the front.  (We reuse these bytes that
      // we already sent for the first fragment, but clear the S bit, and add the E
      // bit if this is the last fragment.)
      // Case 3: a middle (or final) FU-A fragment; reuse the FU indicator and
      // FU header, clearing the S (start) bit:
      fInputBuffer[fCurDataOffset-2] = fInputBuffer[0]; // FU indicator
      fInputBuffer[fCurDataOffset-1] = fInputBuffer[1]&~0x80; // FU header (no S bit)
      unsigned numBytesToSend = 2 + fNumValidDataBytes - fCurDataOffset;
      if (numBytesToSend > fMaxSize) {
        // We can't send all of the remaining data this time:
        numBytesToSend = fMaxSize;
        fLastFragmentCompletedNALUnit = False;
      } else {
        // This is the last fragment:
        // Set the E (end) bit in the FU header so the client knows the NAL
        // unit is complete and can reassemble the frame:
        fInputBuffer[fCurDataOffset-1] |= 0x40; // set the E bit in the FU header
        fNumTruncatedBytes = fSaveNumTruncatedBytes;
      }
      memmove(fTo, &fInputBuffer[fCurDataOffset-2], numBytesToSend);
      fFrameSize = numBytesToSend;
      fCurDataOffset += numBytesToSend - 2;
    }

    if (fCurDataOffset >= fNumValidDataBytes) {
      // We're done with this data.  Reset the pointers for receiving new data:
      fNumValidDataBytes = fCurDataOffset = 1;
    }

    // Complete delivery to the client:
    FramedSource::afterGetting(this);
  }
}

The else branch of this function performs the actual RTP payload splitting. live555 handles only two cases here: 1) a standalone packet (e.g., SPS or PPS, where one packet carries one complete NAL unit), and 2) a NAL unit too large for a single packet, which is split into fragments using FU-A. See the RTP packetization reference:

http://blog.csdn.net/smilestone_322/article/details/7574253
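As a worked example of the FU-A byte manipulation above (values follow RFC 6184; the IDR NAL unit is only illustrative):

unsigned char nalHdr      = 0x65;                    // F=0, NRI=3, type=5 (IDR slice)
unsigned char fuIndicator = (nalHdr & 0xE0) | 28;    // keep F+NRI, type=28 (FU-A) -> 0x7C
unsigned char fuHdrFirst  = 0x80 | (nalHdr & 0x1F);  // S=1, E=0, type=5 -> 0x85
unsigned char fuHdrMiddle =        nalHdr & 0x1F;    // S=0, E=0          -> 0x05
unsigned char fuHdrLast   = 0x40 | (nalHdr & 0x1F);  // S=0, E=1          -> 0x45
// Each fragment is sent as: fuIndicator, fuHdr, then a slice of the NAL payload.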

After fInputSource->getNextFrame() completes, the following callback is invoked:

void H264FUAFragmenter::afterGettingFrame(void* clientData, unsigned frameSize,
                                          unsigned numTruncatedBytes,
                                          struct timeval presentationTime,
                                          unsigned durationInMicroseconds) {
  H264FUAFragmenter* fragmenter = (H264FUAFragmenter*)clientData;
  fragmenter->afterGettingFrame1(frameSize, numTruncatedBytes, presentationTime,
                                 durationInMicroseconds);
}

void H264FUAFragmenter::afterGettingFrame1(unsigned frameSize,
                                           unsigned numTruncatedBytes,
                                           struct timeval presentationTime,
                                           unsigned durationInMicroseconds) {
  fNumValidDataBytes += frameSize;
  fSaveNumTruncatedBytes = numTruncatedBytes;
  fPresentationTime = presentationTime;
  fDurationInMicroseconds = durationInMicroseconds;

  // Deliver data to the client:
  doGetNextFrame();
}

Once a frame has been received, doGetNextFrame() is invoked again on the H264FUAFragmenter to analyze the data and package it for delivery to the client; this time execution takes the else branch of doGetNextFrame().

The source of MultiFramedRTPSink::afterGettingFrame is as follows:

void MultiFramedRTPSink
::afterGettingFrame(void* clientData, unsigned numBytesRead,
                    unsigned numTruncatedBytes,
                    struct timeval presentationTime,
                    unsigned durationInMicroseconds) {
  MultiFramedRTPSink* sink = (MultiFramedRTPSink*)clientData;
  sink->afterGettingFrame1(numBytesRead, numTruncatedBytes,
                           presentationTime, durationInMicroseconds);
}

afterGettingFrame in turn calls afterGettingFrame1 to consume the data: it packs the frame into the outgoing packet and, once the packet is full, sends it. Here is the source:

void MultiFramedRTPSink
::afterGettingFrame1(unsigned frameSize, unsigned numTruncatedBytes,
                     struct timeval presentationTime,
                     unsigned durationInMicroseconds) {
  if (fIsFirstPacket) {
    // Record the fact that we're starting to play now:
    gettimeofday(&fNextSendTime, NULL);
  }

  fMostRecentPresentationTime = presentationTime;
  if (fInitialPresentationTime.tv_sec == 0 && fInitialPresentationTime.tv_usec == 0) {
    fInitialPresentationTime = presentationTime;
  }

  // The buffer was too small and the frame got truncated; warn the user to
  // increase the buffer size:
  if (numTruncatedBytes > 0) {
    unsigned const bufferSize = fOutBuf->totalBytesAvailable();
    envir() << "MultiFramedRTPSink::afterGettingFrame1(): The input frame data was too large for our buffer size ("
            << bufferSize << ").  "
            << numTruncatedBytes << " bytes of trailing data was dropped!  Correct this by increasing \"OutPacketBuffer::maxSize\" to at least "
            << OutPacketBuffer::maxSize + numTruncatedBytes << ", *before* creating this 'RTPSink'.  (Current value is "
            << OutPacketBuffer::maxSize << ".)\n";
  }

  unsigned curFragmentationOffset = fCurFragmentationOffset;
  unsigned numFrameBytesToUse = frameSize;
  unsigned overflowBytes = 0;

  // If we have already packed one or more frames into this packet,
  // check whether this new frame is eligible to be packed after them.
  // (This is independent of whether the packet has enough room for this
  // new frame; that check comes later.)
  if (fNumFramesUsedSoFar > 0) {
    if ((fPreviousFrameEndedFragmentation
         && !allowOtherFramesAfterLastFragment())
        || !frameCanAppearAfterPacketStart(fOutBuf->curPtr(), frameSize)) {
      // Save away this frame for next time:
      numFrameBytesToUse = 0;
      fOutBuf->setOverflowData(fOutBuf->curPacketSize(), frameSize,
                               presentationTime, durationInMicroseconds);
    }
  }
  fPreviousFrameEndedFragmentation = False;

  if (numFrameBytesToUse > 0) {
    // Check whether this frame overflows the packet
    if (fOutBuf->wouldOverflow(frameSize)) {
      // Don't use this frame now; instead, save it as overflow data, and
      // send it in the next packet instead.  However, if the frame is too
      // big to fit in a packet by itself, then we need to fragment it (and
      // use some of it in this packet, if the payload format permits this.)
      if (isTooBigForAPacket(frameSize)
          && (fNumFramesUsedSoFar == 0 || allowFragmentationAfterStart())) {
        // We need to fragment this frame, and use some of it now:
        overflowBytes = computeOverflowForNewFrame(frameSize);
        numFrameBytesToUse -= overflowBytes;
        fCurFragmentationOffset += numFrameBytesToUse;
      } else {
        // We don't use any of this frame now:
        overflowBytes = frameSize;
        numFrameBytesToUse = 0;
      }
      fOutBuf->setOverflowData(fOutBuf->curPacketSize() + numFrameBytesToUse,
                               overflowBytes, presentationTime, durationInMicroseconds);
    } else if (fCurFragmentationOffset > 0) {
      // This is the last fragment of a frame that was fragmented over
      // more than one packet.  Do any special handling for this case:
      fCurFragmentationOffset = 0;
      fPreviousFrameEndedFragmentation = True;
    }
  }

  if (numFrameBytesToUse == 0 && frameSize > 0) {
    // Send our packet now, because we have filled it up:
    sendPacketIfNecessary();
  } else {
    // Use this frame in our outgoing packet:
    unsigned char* frameStart = fOutBuf->curPtr();
    fOutBuf->increment(numFrameBytesToUse);
    // do this now, in case "doSpecialFrameHandling()" calls "setFramePadding()" to append padding bytes

    // Here's where any payload format specific processing gets done:
    doSpecialFrameHandling(curFragmentationOffset, frameStart,
                           numFrameBytesToUse, presentationTime,
                           overflowBytes);

    ++fNumFramesUsedSoFar;

    // Update the time at which the next packet should be sent, based
    // on the duration of the frame that we just packed into it.
    // However, if this frame has overflow data remaining, then don't
    // count its duration yet.
    // (This advances fNextSendTime, the send pacing clock.)
    if (overflowBytes == 0) {
      fNextSendTime.tv_usec += durationInMicroseconds;
      fNextSendTime.tv_sec += fNextSendTime.tv_usec/1000000;
      fNextSendTime.tv_usec %= 1000000;
    }

    // Send our packet now if (i) it's already at our preferred size, or
    // (ii) (heuristic) another frame of the same size as the one we just
    //      read would overflow the packet, or
    // (iii) it contains the last fragment of a fragmented frame, and we
    //      don't allow anything else to follow this or
    // (iv) one frame per packet is allowed:
    if (fOutBuf->isPreferredSize()
        || fOutBuf->wouldOverflow(numFrameBytesToUse)
        || (fPreviousFrameEndedFragmentation &&
            !allowOtherFramesAfterLastFragment())
        || !frameCanAppearAfterPacketStart(fOutBuf->curPtr() - frameSize,
                                           frameSize)) {
      // The packet is ready to be sent now:
      sendPacketIfNecessary();
    } else {
      // There's room for more frames; try getting another:
      packFrame();
    }
  }
}

Next, look at the function that actually sends the packet:

void MultiFramedRTPSink::sendPacketIfNecessary() {
  if (fNumFramesUsedSoFar > 0) {
    // Send the packet:
#ifdef TEST_LOSS
    if ((our_random()%10) != 0) // simulate 10% packet loss #####
#endif
      if (!fRTPInterface.sendPacket(fOutBuf->packet(), fOutBuf->curPacketSize())) {
        // if failure handler has been specified, call it
        if (fOnSendErrorFunc != NULL) (*fOnSendErrorFunc)(fOnSendErrorData);
      }
    ++fPacketCount;
    fTotalOctetCount += fOutBuf->curPacketSize();
    fOctetCount += fOutBuf->curPacketSize()
      - rtpHeaderSize - fSpecialHeaderSize - fTotalFrameSpecificHeaderSizes;

    ++fSeqNo; // for next time
  }

  if (fOutBuf->haveOverflowData()
      && fOutBuf->totalBytesAvailable() > fOutBuf->totalBufferSize()/2) {
    // Efficiency hack: Reset the packet start pointer to just in front of
    // the overflow data (allowing for the RTP header and special headers),
    // so that we probably don't have to "memmove()" the overflow data
    // into place when building the next packet:
    unsigned newPacketStart = fOutBuf->curPacketSize()
      - (rtpHeaderSize + fSpecialHeaderSize + frameSpecificHeaderSize());
    fOutBuf->adjustPacketStart(newPacketStart);
  } else {
    // Normal case: Reset the packet start pointer back to the start:
    fOutBuf->resetPacketStart();
  }

  fOutBuf->resetOffset();
  fNumFramesUsedSoFar = 0;

  if (fNoFramesLeft) {
    // We're done:
    onSourceClosure(this);
  } else {
    // We have more frames left to send.  Figure out when the next frame
    // is due to start playing, then make sure that we wait this long before
    // sending the next packet.
    struct timeval timeNow;
    gettimeofday(&timeNow, NULL);
    int secsDiff = fNextSendTime.tv_sec - timeNow.tv_sec;
    int64_t uSecondsToGo = secsDiff*1000000 + (fNextSendTime.tv_usec - timeNow.tv_usec);
    if (uSecondsToGo < 0 || secsDiff < 0) { // sanity check: Make sure that the time-to-delay is non-negative:
      uSecondsToGo = 0;
    }

    // Delay this amount of time:
    nextTask() = envir().taskScheduler().scheduleDelayedTask(uSecondsToGo, (TaskFunc*)sendNext, this);
  }
}

The send function uses a delayed task to pace transmission: rather than sending packets back to back, it schedules the next build-and-send cycle for when the next packet is due. Here is sendNext:

void MultiFramedRTPSink::sendNext(void* firstArg) {
  MultiFramedRTPSink* sink = (MultiFramedRTPSink*)firstArg;
  sink->buildAndSendPacket(False);
}

It calls buildAndSendPacket again. The parameter marks whether this is the first packet: True means the first packet, in which case the actual playback start time is recorded; subsequent calls pass False. In afterGettingFrame1 there is:

if (fIsFirstPacket) {
  // Record the fact that we're starting to play now:
  gettimeofday(&fNextSendTime, NULL);
}
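The pacing relies on live555's task scheduler; here is a minimal standalone sketch of the same scheduleDelayedTask pattern (the task printHello is hypothetical, not part of live555):

#include "BasicUsageEnvironment.hh"

void printHello(void* clientData) {
  UsageEnvironment* env = (UsageEnvironment*)clientData;
  *env << "delayed task fired\n";
}

int main() {
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);
  // Fire printHello one second from now, just as sendPacketIfNecessary()
  // schedules sendNext() to run uSecondsToGo from now:
  scheduler->scheduleDelayedTask(1000000, (TaskFunc*)printHello, env);
  env->taskScheduler().doEventLoop(); // does not return
  return 0;
}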

In MultiFramedRTPSink, packets and frames share the same output buffer; a handful of flags plus pointer adjustments drive both frame packing and packet sending. Note: if a frame overflows into the next packet, the timestamp computation can become inaccurate.
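As a concrete example of the pacing arithmetic (the frame rate is illustrative, not from the source): for 25 fps H.264 video on the standard 90 kHz RTP clock,

unsigned durationInMicroseconds = 40000;  // one frame at 25 fps = 40 ms
unsigned rtpTimestampFrequency  = 90000;  // standard RTP clock for video
// fNextSendTime advances by 40 ms per packed frame, and the RTP timestamp
// advances per frame by:
unsigned tsIncrement = (unsigned)
    ((double)durationInMicroseconds * rtpTimestampFrequency / 1000000.0); // = 3600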

From: http://blog.csdn.net/smilestone_322/article/details/18923711
