DirectShow Source Code Analysis: The Push Model

Posted: 2021-10-05 09:20:02

The Windows SDK ships a push-model source filter sample, located under samples\C++\Directshow\Ball in the SDK installation directory.

Here is a brief analysis.

Functionality: as a live source, the filter continuously generates video frames that show a ball bouncing around inside an enclosed box. A Filter Graph built with this filter, and its rendered output, are shown below:

[Figure: the Filter Graph built around the Ball source filter]

[Figure: the rendered output of the graph, a ball bouncing inside the frame]
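
For context, here is a minimal sketch of how such a graph could be built from an application. It assumes the sample filter is already registered and uses CLSID_BouncingBall, the CLSID the filter's constructor (shown later) passes to CSource; the helper name BuildBallGraph and the omission of error handling and cleanup are my own simplifications, not part of the sample.

#include <dshow.h>

// Hypothetical helper: build "Ball source -> video renderer" and run it.
HRESULT BuildBallGraph()
{
    CoInitialize(NULL);

    IGraphBuilder *pGraph = NULL;
    IBaseFilter   *pBall  = NULL;
    IMediaControl *pCtrl  = NULL;

    // Create the Filter Graph Manager and the Ball source filter.
    CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                     IID_IGraphBuilder, (void **)&pGraph);
    CoCreateInstance(CLSID_BouncingBall, NULL, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, (void **)&pBall);

    pGraph->AddFilter(pBall, L"Bouncing Ball");

    // Find the single output pin and let Intelligent Connect render it,
    // which pulls in a video renderer automatically.
    IEnumPins *pEnum = NULL;
    IPin *pPin = NULL;
    pBall->EnumPins(&pEnum);
    pEnum->Next(1, &pPin, NULL);
    HRESULT hr = pGraph->Render(pPin);

    pGraph->QueryInterface(IID_IMediaControl, (void **)&pCtrl);
    pCtrl->Run();

    // ... wait for playback, then pCtrl->Stop() and release everything ...
    return hr;
}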

The class hierarchy is:

CBouncingBall derives from CSource

CBallStream derives from CSourceStream

From this we can see that a filter derived from CSource drives its output pin with a worker thread that keeps pushing samples downstream. (Note: the code snippets below come from a version of the sample in which the classes have been renamed RtpStreamFilter and RtpStream, but the structure is identical to the Ball sample.) The push loop is implemented as follows:

//
// DoBufferProcessingLoop
//
// Grabs a buffer and calls the user's processing function.
// Overridable, so that different delivery styles can be catered for.
HRESULT CSourceStream::DoBufferProcessingLoop(void) {

    Command com;

    // Called before data delivery starts; subclasses can override this
    // to do any per-run initialization.
    OnThreadStartPlay();

    do {
        while (!CheckRequest(&com)) {

            IMediaSample *pSample;

            // Get an empty sample from the output pin's allocator.
            HRESULT hr = GetDeliveryBuffer(&pSample, NULL, NULL, 0);
            if (FAILED(hr)) {
                Sleep(1);
                continue;   // go round again. Perhaps the error will go away
                            // or the allocator is decommited & we will be asked to
                            // exit soon.
            }

            // FillBuffer must be implemented by the derived class; it fills
            // in the actual contents of the sample.
            hr = FillBuffer(pSample);

            if (hr == S_OK) {
                // Deliver the sample to the downstream filter.
                hr = Deliver(pSample);
                pSample->Release();

                // downstream filter returns S_FALSE if it wants us to
                // stop or an error if it's reporting an error.
                if (hr != S_OK)
                {
                    DbgLog((LOG_TRACE, 2, TEXT("Deliver() returned %08x; stopping"), hr));
                    return S_OK;
                }

            } else if (hr == S_FALSE) {
                // derived class wants us to stop pushing data
                pSample->Release();
                DeliverEndOfStream();
                return S_OK;
            } else {
                // derived class encountered an error
                pSample->Release();
                DbgLog((LOG_ERROR, 1, TEXT("Error %08lX from FillBuffer!!!"), hr));
                DeliverEndOfStream();
                m_pFilter->NotifyEvent(EC_ERRORABORT, hr, 0);
                return hr;
            }

            // all paths release the sample
        }

        // For all commands sent to us there must be a Reply call!

        if (com == CMD_RUN || com == CMD_PAUSE) {
            Reply(NOERROR);
        } else if (com != CMD_STOP) {
            Reply((DWORD) E_UNEXPECTED);
            DbgLog((LOG_ERROR, 1, TEXT("Unexpected command!!!")));
        }
    } while (com != CMD_STOP);

    return S_FALSE;
}
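
Everything the loop above needs from the derived class goes through a handful of virtual functions. Pieced together from the definitions analyzed in the rest of this article (this is my reconstruction, not the sample's actual header, and the member variable types are guesses based on how they are used), the pin class looks roughly like this:

class RtpStream : public CSourceStream
{
public:
    RtpStream(HRESULT *phr, CSource *pFilter, LPCWSTR pName);

    // Required override: fill each media sample with one video frame.
    HRESULT FillBuffer(IMediaSample *pSample);

    // Media-type negotiation on the output pin.
    HRESULT GetMediaType(int iPosition, CMediaType *pmt);
    HRESULT CheckMediaType(const CMediaType *pMediaType);

    // Allocator negotiation: how many buffers and how large.
    HRESULT DecideBufferSize(IMemAllocator *pAlloc,
                             ALLOCATOR_PROPERTIES *pProperties);

    // Quality-control messages coming back from downstream.
    STDMETHODIMP Notify(IBaseFilter *pSender, Quality q);

private:
    // Types inferred from usage in the snippets below.
    CBall    *m_Ball;           // computes and draws the ball
    CCritSec  m_cSharedState;   // protects the shared state below
    CRefTime  m_rtSampleTime;   // running timestamp for the next sample
    int       m_iRepeatTime;    // frame interval in milliseconds
    int       m_iPixelSize;     // bytes per pixel of the negotiated format
    int       m_iImageWidth;    // negotiated image width
    int       m_iImageHeight;   // negotiated image height
};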

The FillBuffer implementation of the CSourceStream-derived pin class (FillBuffer is a pure virtual of CSourceStream, not of CBaseFilter):

HRESULT RtpStream::FillBuffer(IMediaSample *pms)
{
    CheckPointer(pms, E_POINTER);
    ASSERT(m_Ball);

    BYTE *pData;
    long lDataLen;

    pms->GetPointer(&pData);
    lDataLen = pms->GetSize();

    // Clear the frame to black.
    ZeroMemory(pData, lDataLen);
    {
        CAutoLock cAutoLockShared(&m_cSharedState);

        // If we haven't just cleared the buffer delete the old
        // ball and move the ball on

        m_Ball->MoveBall(m_rtSampleTime - (LONG)m_iRepeatTime);

        // Draw the ball at its current position in the frame.
        m_Ball->PlotBall(pData, m_BallPixel, m_iPixelSize);

        // The current time is the sample's start
        CRefTime rtStart = m_rtSampleTime;

        // Increment to find the finish time
        m_rtSampleTime += (LONG)m_iRepeatTime;

        // Stamp the sample with its start and stop times.
        pms->SetTime((REFERENCE_TIME *)&rtStart, (REFERENCE_TIME *)&m_rtSampleTime);
    }

    pms->SetSyncPoint(TRUE);
    return NOERROR;
}

 

As the code shows, m_Ball is a CBall object responsible for computing the ball's position and drawing it. To render the ball at a new position, the code first calls MoveBall to compute the new position and then calls PlotBall to draw the ball there. m_iRepeatTime is the frame interval in milliseconds; the (LONG) casts make CRefTime interpret the value as milliseconds, so each FillBuffer call advances the sample timestamps by one frame interval.

Below are the key points for understanding this sample.

1. How the output pin is created. Because the filter/pin pair uses the CSource/CSourceStream class structure, the output pin is normally created in the filter's constructor:

RtpStreamFilter::RtpStreamFilter(LPUNKNOWN lpunk, HRESULT *phr) :
    CSource(NAME("Rtp Source Filter"), lpunk, CLSID_BouncingBall)
{
    ASSERT(phr);
    CAutoLock cAutoLock(&m_cStateLock);

    // Allocate the pin array first. m_paStreams is an array of pointers;
    // since there is only one pin, allocate a single slot.
    m_paStreams = (CSourceStream **) new RtpStream*[1];
    if (m_paStreams == NULL)
    {
        if (phr)
            *phr = E_OUTOFMEMORY;

        return;
    }

    // Create the CSourceStream-derived pin instance.
    m_paStreams[0] = new RtpStream(phr, this, L"A Rtp Source Filter!");
    if (m_paStreams[0] == NULL)
    {
        if (phr)
            *phr = E_OUTOFMEMORY;

        return;
    }
}
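
The constructor above is reached through the standard base-class factory machinery: a static CreateInstance method plus a CFactoryTemplate entry. A rough sketch of that boilerplate (reconstructed from the usual pattern, not copied from the sample; registration data via the AMOVIESETUP structures is omitted):

// Static factory method invoked by the base-class class factory.
CUnknown * WINAPI RtpStreamFilter::CreateInstance(LPUNKNOWN lpunk, HRESULT *phr)
{
    CUnknown *punk = new RtpStreamFilter(lpunk, phr);
    if (punk == NULL && phr)
        *phr = E_OUTOFMEMORY;
    return punk;
}

// Table consumed by the base-class DLL entry points (DllGetClassObject etc.).
CFactoryTemplate g_Templates[] =
{
    { L"Rtp Source Filter", &CLSID_BouncingBall, RtpStreamFilter::CreateInstance, NULL, NULL }
};
int g_cTemplates = sizeof(g_Templates) / sizeof(g_Templates[0]);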

 

2. How the output pin offers its list of preferred media types, including RGB32, RGB24, RGB565, RGB555 and RGB8. This is handled by the following code:

//
// GetMediaType
//
// I _prefer_ 5 formats - 8, 16 (*2), 24 or 32 bits per pixel and
// I will suggest these with an image size of 320x240. However
// I can accept any image size which gives me some space to bounce.
//
// A bit of fun:
// 8 bit displays get red balls
// 16 bit displays get blue
// 24 bit see green
// And 32 bit see yellow
//
// Preferred types should be ordered by quality, zero as highest quality
// Therefore iPosition =
// 0 return a 32bit mediatype
// 1 return a 24bit mediatype
// 2 return a 16bit mediatype (rgb565)
// 3 return a 16bit mediatype (rgb555)
// 4 return an 8 bit palettised format
// (iPosition > 4 is invalid)
//
HRESULT RtpStream::GetMediaType(int iPosition, CMediaType *pmt)
{
    CheckPointer(pmt, E_POINTER);   // return E_POINTER immediately if pmt is NULL

    CAutoLock cAutoLock(m_pFilter->pStateLock());
    if (iPosition < 0)
    {
        return E_INVALIDARG;
    }

    // Have we run off the end of types?

    if (iPosition > 4)
    {
        return VFW_S_NO_MORE_ITEMS;
    }

    VIDEOINFO *pvi = (VIDEOINFO *)pmt->AllocFormatBuffer(sizeof(VIDEOINFO));
    if (NULL == pvi)
        return E_OUTOFMEMORY;

    ZeroMemory(pvi, sizeof(VIDEOINFO));

    switch (iPosition)
    {
    case 0:
    {
        // Return our highest quality 32bit format

        // since we use RGB888 (the default for 32 bit), there is
        // no reason to use BI_BITFIELDS to specify the RGB
        // masks. Also, not everything supports BI_BITFIELDS

        SetPaletteEntries(Yellow);
        pvi->bmiHeader.biCompression = BI_RGB;
        pvi->bmiHeader.biBitCount    = 32;
        break;
    }

    case 1:
    {
        // Return our 24bit format

        SetPaletteEntries(Green);
        pvi->bmiHeader.biCompression = BI_RGB;
        pvi->bmiHeader.biBitCount    = 24;
        break;
    }

    case 2:
    {
        // 16 bit per pixel RGB565

        // Place the RGB masks as the first 3 doublewords in the palette area
        for (int i = 0; i < 3; i++)
            pvi->TrueColorInfo.dwBitMasks[i] = bits565[i];

        SetPaletteEntries(Blue);
        pvi->bmiHeader.biCompression = BI_BITFIELDS;
        pvi->bmiHeader.biBitCount    = 16;
        break;
    }

    case 3:
    {
        // 16 bits per pixel RGB555

        // Place the RGB masks as the first 3 doublewords in the palette area
        for (int i = 0; i < 3; i++)
            pvi->TrueColorInfo.dwBitMasks[i] = bits555[i];

        SetPaletteEntries(Blue);
        pvi->bmiHeader.biCompression = BI_BITFIELDS;
        pvi->bmiHeader.biBitCount    = 16;
        break;
    }

    case 4:
    {
        // 8 bit palettised

        SetPaletteEntries(Red);
        pvi->bmiHeader.biCompression = BI_RGB;
        pvi->bmiHeader.biBitCount    = 8;
        pvi->bmiHeader.biClrUsed     = iPALETTE_COLORS;
        break;
    }
    }

    // (Adjust the parameters common to all formats...)

    // put the optimal palette in place
    for (int i = 0; i < iPALETTE_COLORS; i++)
    {
        pvi->TrueColorInfo.bmiColors[i].rgbRed      = m_Palette[i].peRed;
        pvi->TrueColorInfo.bmiColors[i].rgbBlue     = m_Palette[i].peBlue;
        pvi->TrueColorInfo.bmiColors[i].rgbGreen    = m_Palette[i].peGreen;
        pvi->TrueColorInfo.bmiColors[i].rgbReserved = 0;
    }

    pvi->bmiHeader.biSize         = sizeof(BITMAPINFOHEADER);
    pvi->bmiHeader.biWidth        = m_iImageWidth;
    pvi->bmiHeader.biHeight       = m_iImageHeight;
    pvi->bmiHeader.biPlanes       = 1;
    pvi->bmiHeader.biSizeImage    = GetBitmapSize(&pvi->bmiHeader);
    pvi->bmiHeader.biClrImportant = 0;

    SetRectEmpty(&(pvi->rcSource)); // we want the whole image area rendered.
    SetRectEmpty(&(pvi->rcTarget)); // no particular destination rectangle

    pmt->SetType(&MEDIATYPE_Video);
    pmt->SetFormatType(&FORMAT_VideoInfo);
    pmt->SetTemporalCompression(FALSE);

    // Work out the GUID for the subtype from the header info.
    const GUID SubTypeGUID = GetBitmapSubtype(&pvi->bmiHeader);
    pmt->SetSubtype(&SubTypeGUID);
    pmt->SetSampleSize(pvi->bmiHeader.biSizeImage);

    return NOERROR;
}
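
Seen from the application side, these preferred types come back through IPin::EnumMediaTypes, which the base classes drive by calling GetMediaType with an increasing iPosition. A small sketch of listing them (pBallOutPin is the output pin obtained as in the graph-building sketch earlier; DeleteMediaType is the helper from the DirectShow base classes):

#include <stdio.h>

void DumpPreferredTypes(IPin *pBallOutPin)
{
    IEnumMediaTypes *pEnumTypes = NULL;
    if (FAILED(pBallOutPin->EnumMediaTypes(&pEnumTypes)))
        return;

    AM_MEDIA_TYPE *pmt = NULL;
    while (pEnumTypes->Next(1, &pmt, NULL) == S_OK)   // walks GetMediaType(0..4)
    {
        if (pmt->formattype == FORMAT_VideoInfo && pmt->pbFormat != NULL)
        {
            VIDEOINFOHEADER *pvih = (VIDEOINFOHEADER *)pmt->pbFormat;
            printf("%d bpp, %ld x %ld\n",
                   (int)pvih->bmiHeader.biBitCount,
                   pvih->bmiHeader.biWidth,
                   pvih->bmiHeader.biHeight);
        }
        DeleteMediaType(pmt);   // frees the format block and the structure
    }
    pEnumTypes->Release();
}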

3. How media types are checked when the output pin connects. See the following code:

//
// CheckMediaType
//
// We will accept 8, 16, 24 or 32 bit video formats, in any
// image size that gives room to bounce.
// Returns E_INVALIDARG if the mediatype is not acceptable
//
HRESULT RtpStream::CheckMediaType(const CMediaType *pMediaType)
{
    CheckPointer(pMediaType, E_POINTER);

    if ((*(pMediaType->Type()) != MEDIATYPE_Video) ||   // we only output video
        !(pMediaType->IsFixedSize()))                    // in fixed size samples
    {
        return E_INVALIDARG;
    }

    // Check for the subtypes we support
    const GUID *SubType = pMediaType->Subtype();
    if (SubType == NULL)
        return E_INVALIDARG;

    if ((*SubType != MEDIASUBTYPE_RGB8)
        && (*SubType != MEDIASUBTYPE_RGB565)
        && (*SubType != MEDIASUBTYPE_RGB555)
        && (*SubType != MEDIASUBTYPE_RGB24)
        && (*SubType != MEDIASUBTYPE_RGB32))
    {
        return E_INVALIDARG;
    }

    // Get the format area of the media type
    VIDEOINFO *pvi = (VIDEOINFO *)pMediaType->Format();

    if (pvi == NULL)
        return E_INVALIDARG;

    // Check the image size. As my default ball is 10 pixels big
    // look for at least a 20x20 image. This is an arbitrary size constraint,
    // but it avoids balls that are bigger than the picture...

    if ((pvi->bmiHeader.biWidth < 20) || (abs(pvi->bmiHeader.biHeight) < 20))
    {
        return E_INVALIDARG;
    }

    // Check if the image width & height have changed
    if (pvi->bmiHeader.biWidth != m_Ball->GetImageWidth() ||
        abs(pvi->bmiHeader.biHeight) != m_Ball->GetImageHeight())
    {
        // If the image width/height is changed, fail CheckMediaType() to force
        // the renderer to resize the image.
        return E_INVALIDARG;
    }

    return S_OK; // This format is acceptable.
}
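
Once a type passes CheckMediaType and the connection completes, the pin's SetMediaType override is called. As far as I recall, the sample uses it to record the pixel size of the agreed format and to (re)create the CBall for the agreed image dimensions. A rough sketch of that idea, not a verbatim copy of the sample (the CBall constructor arguments and the pixel-size mapping are assumptions; the real code also chooses m_BallPixel here):

HRESULT RtpStream::SetMediaType(const CMediaType *pMediaType)
{
    CAutoLock cAutoLock(m_pFilter->pStateLock());

    // Let the base class store the agreed type in m_mt first.
    HRESULT hr = CSourceStream::SetMediaType(pMediaType);
    if (FAILED(hr))
        return hr;

    VIDEOINFO *pvi = (VIDEOINFO *)m_mt.Format();
    if (pvi == NULL)
        return E_UNEXPECTED;

    // Remember how many bytes one pixel occupies; FillBuffer passes this
    // to PlotBall together with the pixel value used to draw the ball.
    switch (pvi->bmiHeader.biBitCount)
    {
    case 8:  m_iPixelSize = 1; break;
    case 16: m_iPixelSize = 2; break;
    case 24: m_iPixelSize = 3; break;
    case 32: m_iPixelSize = 4; break;
    default: return E_INVALIDARG;
    }

    // Recreate the ball for the negotiated image dimensions.
    delete m_Ball;
    m_Ball = new CBall(pvi->bmiHeader.biWidth, abs(pvi->bmiHeader.biHeight));
    return (m_Ball != NULL) ? NOERROR : E_OUTOFMEMORY;
}
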
4. After the output pin connects successfully, the pin must negotiate the properties of the samples it will deliver, such as the amount of memory each sample uses.

//
// DecideBufferSize
//
// This will always be called after the format has been successfully
// negotiated. So we have a look at m_mt to see what size image we agreed.
// Then we can ask for buffers of the correct size to contain them.
//
HRESULT RtpStream::DecideBufferSize(IMemAllocator *pAlloc,
                                    ALLOCATOR_PROPERTIES *pProperties)
{
    CheckPointer(pAlloc, E_POINTER);
    CheckPointer(pProperties, E_POINTER);

    CAutoLock cAutoLock(m_pFilter->pStateLock());
    HRESULT hr = NOERROR;

    VIDEOINFO *pvi = (VIDEOINFO *)m_mt.Format();
    pProperties->cBuffers = 1;
    pProperties->cbBuffer = pvi->bmiHeader.biSizeImage;

    ASSERT(pProperties->cbBuffer);

    // Ask the allocator to reserve us some sample memory, NOTE the function
    // can succeed (that is return NOERROR) but still not have allocated the
    // memory that we requested, so we must check we got whatever we wanted

    ALLOCATOR_PROPERTIES Actual;
    hr = pAlloc->SetProperties(pProperties, &Actual);
    if (FAILED(hr))
    {
        return hr;
    }

    // Is this allocator unsuitable

    if (Actual.cbBuffer < pProperties->cbBuffer)
    {
        return E_FAIL;
    }

    // Make sure that we have only 1 buffer (we erase the ball in the
    // old buffer to save having to zero a 200k+ buffer every time
    // we draw a frame)

    ASSERT(Actual.cBuffers == 1);
    return NOERROR;
}
5. How to respond to quality-control messages. The output pin implements the Notify method for this:

STDMETHODIMP RtpStream::Notify(IBaseFilter *pSender, Quality q)
{
    // Adjust the repeat rate.
    if (q.Proportion <= 0)
    {
        m_iRepeatTime = 1000;        // We don't go slower than 1 per second
    }
    else
    {
        m_iRepeatTime = m_iRepeatTime * 1000 / q.Proportion;
        if (m_iRepeatTime > 1000)
        {
            m_iRepeatTime = 1000;    // We don't go slower than 1 per second
        }
        else if (m_iRepeatTime < 10)
        {
            m_iRepeatTime = 10;      // We don't go faster than 100/sec
        }
    }

    // skip forwards
    if (q.Late > 0)
        m_rtSampleTime += q.Late;

    return NOERROR;
} // Notify
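
A quick worked example of the arithmetic above. Quality.Proportion is expressed in units of 0.1 percent of the nominal rate (1000 means "run at full rate"), and Quality.Late is a REFERENCE_TIME in 100-ns units; the 50 ms starting interval below is just an illustrative number, not a value taken from the sample.

// Suppose the renderer asks for half the nominal rate: q.Proportion = 500.
// With a current interval of m_iRepeatTime = 50 ms:
//     m_iRepeatTime = 50 * 1000 / 500 = 100 ms   (frame rate halves)
// The result is then clamped to [10, 1000] ms, so the filter never pushes
// faster than 100 frames/sec or slower than 1 frame/sec.
// If q.Late > 0, m_rtSampleTime jumps forward by q.Late so that subsequent
// timestamps catch up with the reference clock.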


That's all for now; I'll add more as things come to mind.