How can I use hardware-accelerated video (H.264) decoding with DirectX 11 on Windows 7?

Posted: 2022-03-01 04:17:52

I've been researching all day and haven't gotten very far. I'm on Windows 7, using DirectX 11. (My final output is to be a frame of video rendered onto a DX11 texture.) I want to decode some very large H.264 video files, and CPU decoding (using libav) doesn't cut it.


I've looked at the hwaccel capabilities of libav using DXVA2, but hit a road block when I need to create an IDirectXVideoDecoder, which can only be created with a D3D9 interface (which I don't have, since I'm using DX11).


Whenever I've looked up DXVA documentation, it doesn't reference DX11. Was this removed in DX10 or 11? (I can't find any confirmation of this, nor anywhere that says DXVA2 is redundant; possibly it's been superseded by DXVA-HD?)


Then I've looked into the Media Foundation SDK, as that looks like what I'm supposed to use with DX11... But none of the types exist in my headers (the docs say to just include a single header, but this yields nothing). They also specify a minimum of Windows 8 to use it.


I believe that to use MF I need the Windows 8 SDK, which now includes all the DirectX libs/headers.


So this leaves a gap with Windows 7... Is it possible to get hardware-accelerated video decoding? And if so, which API am I supposed to be using?


2 Answers

#1



D3D11 features a video API which is basically DXVA2 with a slightly altered interface on top. You need a good understanding of H.264 bitstreams to proceed (really!), i.e. make sure you have an H.264 parser at hand to extract fields of the SPS and PPS structures and all slices of an encoded frame.


1) Obtain an ID3D11VideoDevice instance from your ID3D11Device, and an ID3D11VideoContext from your immediate D3D11 device context. NOTE: On Win7, you have to create your device with feature level 9_3 to get video support! (In Win8 it just works.)

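A minimal sketch of step 1, assuming `device` and `immediateContext` were created elsewhere (e.g. via D3D11CreateDevice); both video interfaces are obtained via QueryInterface and will fail if the driver offers no video support:

```cpp
#include <d3d11.h>

HRESULT GetVideoInterfaces(ID3D11Device* device,
                           ID3D11DeviceContext* immediateContext,
                           ID3D11VideoDevice** videoDevice,
                           ID3D11VideoContext** videoContext)
{
    // The video device hangs off the D3D11 device...
    HRESULT hr = device->QueryInterface(__uuidof(ID3D11VideoDevice),
                                        reinterpret_cast<void**>(videoDevice));
    if (FAILED(hr))
        return hr;
    // ...and the video context hangs off the immediate device context.
    return immediateContext->QueryInterface(__uuidof(ID3D11VideoContext),
                                            reinterpret_cast<void**>(videoContext));
}
```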

2) Create an ID3D11VideoDecoder instance for H.264. Use ID3D11VideoDevice::GetVideoDecoderProfileCount, GetVideoDecoderProfile, CheckVideoDecoderFormat... to iterate through all supported profiles, and find one with GUID D3D11_DECODER_PROFILE_H264_VLD_NOFGT for H.264 without film grain. As OutputFormat your best bet is DXGI_FORMAT_NV12.

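This enumeration and creation could look roughly like the sketch below. It assumes a `videoDevice` obtained as in step 1 and the coded `width`/`height` of the stream; error handling is abbreviated, and picking config index 0 is a simplification (a real player would inspect the reported configs):

```cpp
#include <d3d11.h>

ID3D11VideoDecoder* CreateH264Decoder(ID3D11VideoDevice* videoDevice,
                                      UINT width, UINT height)
{
    // Find the H.264 VLD (no film grain) profile among the supported ones.
    GUID chosenProfile = GUID_NULL;
    UINT profileCount = videoDevice->GetVideoDecoderProfileCount();
    for (UINT i = 0; i < profileCount; ++i) {
        GUID profile;
        if (SUCCEEDED(videoDevice->GetVideoDecoderProfile(i, &profile)) &&
            profile == D3D11_DECODER_PROFILE_H264_VLD_NOFGT) {
            BOOL supported = FALSE;
            videoDevice->CheckVideoDecoderFormat(&profile, DXGI_FORMAT_NV12, &supported);
            if (supported) { chosenProfile = profile; break; }
        }
    }
    if (chosenProfile == GUID_NULL)
        return nullptr; // no suitable hardware profile

    D3D11_VIDEO_DECODER_DESC desc = {};
    desc.Guid         = chosenProfile;
    desc.SampleWidth  = width;
    desc.SampleHeight = height;
    desc.OutputFormat = DXGI_FORMAT_NV12;

    // Take the first decoder configuration reported for this description.
    UINT configCount = 0;
    videoDevice->GetVideoDecoderConfigCount(&desc, &configCount);
    if (configCount == 0)
        return nullptr;
    D3D11_VIDEO_DECODER_CONFIG config = {};
    videoDevice->GetVideoDecoderConfig(&desc, 0, &config);

    ID3D11VideoDecoder* decoder = nullptr;
    videoDevice->CreateVideoDecoder(&desc, &config, &decoder);
    return decoder;
}
```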

3) Decode the individual frames; see Supporting Direct3D 11 Video Decoding in Media Foundation:


  • ID3D11VideoContext::DecoderBeginFrame( decoder, outputView -> decoded frame texture )
  • Fill buffers:
    • D3D11_VIDEO_DECODER_BUFFER_PICTURE_PARAMETERS
    • D3D11_VIDEO_DECODER_BUFFER_INVERSE_QUANTIZATION_MATRIX
    • D3D11_VIDEO_DECODER_BUFFER_BITSTREAM
    • D3D11_VIDEO_DECODER_BUFFER_SLICE_CONTROL

The buffers are filled with the corresponding DXVA2 structures (see dxva2.h). You'll need the full DXVA2 spec to map the H.264 SPS/PPS fields accordingly.



Then:


  • ID3D11VideoContext::SubmitDecoderBuffers to commit all filled buffers
  • ID3D11VideoContext::DecoderEndFrame to finish the current frame
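The per-frame submission described above could be sketched as follows. This assumes a `videoContext`, `decoder`, and `outputView` (an ID3D11VideoDecoderOutputView over the target NV12 texture) created earlier, plus DXVA2 picture-parameter and bitstream data already prepared by your H.264 parser; the quant-matrix and slice-control buffers follow the same map/copy/release pattern and are elided:

```cpp
#include <d3d11.h>
#include <cstring>

HRESULT DecodeFrame(ID3D11VideoContext* videoContext,
                    ID3D11VideoDecoder* decoder,
                    ID3D11VideoDecoderOutputView* outputView,
                    const void* picParams, UINT picParamsSize,
                    const void* bitstream, UINT bitstreamSize)
{
    HRESULT hr = videoContext->DecoderBeginFrame(decoder, outputView, 0, nullptr);
    if (FAILED(hr)) return hr;

    // Each buffer type is mapped, filled, and released before submission.
    void* mapped = nullptr;
    UINT  mappedSize = 0;
    videoContext->GetDecoderBuffer(decoder, D3D11_VIDEO_DECODER_BUFFER_PICTURE_PARAMETERS,
                                   &mappedSize, &mapped);
    memcpy(mapped, picParams, picParamsSize);
    videoContext->ReleaseDecoderBuffer(decoder, D3D11_VIDEO_DECODER_BUFFER_PICTURE_PARAMETERS);

    videoContext->GetDecoderBuffer(decoder, D3D11_VIDEO_DECODER_BUFFER_BITSTREAM,
                                   &mappedSize, &mapped);
    memcpy(mapped, bitstream, bitstreamSize);
    videoContext->ReleaseDecoderBuffer(decoder, D3D11_VIDEO_DECODER_BUFFER_BITSTREAM);
    // ...same pattern for INVERSE_QUANTIZATION_MATRIX and SLICE_CONTROL...

    // Describe what was filled, then submit and close the frame.
    D3D11_VIDEO_DECODER_BUFFER_DESC bufferDescs[2] = {};
    bufferDescs[0].BufferType = D3D11_VIDEO_DECODER_BUFFER_PICTURE_PARAMETERS;
    bufferDescs[0].DataSize   = picParamsSize;
    bufferDescs[1].BufferType = D3D11_VIDEO_DECODER_BUFFER_BITSTREAM;
    bufferDescs[1].DataSize   = bitstreamSize;

    hr = videoContext->SubmitDecoderBuffers(decoder, 2, bufferDescs);
    if (FAILED(hr)) return hr;
    return videoContext->DecoderEndFrame(decoder);
}
```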

3) The D3D11_VIDEO_DECODER_BUFFER_PICTURE_PARAMETERS buffer also contains info on all reference frames/surfaces - you need to manage them yourself, i.e. make sure the surfaces/textures are available to the GPU!


It's quite complicated; check ffmpeg and Media Player Classic - they both have DXVA2 (though not via DX11) support.


4) Convert from NV12 to RGB(A). Some GPUs (D3D11 feature levels) allow using NV12 as shader input, some don't. In case it's not possible to use NV12 directly, have a look at the ID3D11VideoProcessor interfaces, which offer NV12/YUV420->RGB conversion on all GPUs with D3D11 support.


The conversion could be performed in code like this:


// Setup ID3D11Video* interfaces (created/obtained elsewhere)
ID3D11VideoProcessor           * d3dVideoProc     = ...;
ID3D11VideoDevice              * d3dVideoDevice   = ...;
ID3D11VideoProcessorEnumerator * d3dVideoProcEnum = ...;
ID3D11VideoContext             * d3dVideoCtx      = ...;

ID3D11Texture2D * srcTextureNV12Fmt = ...;
ID3D11Texture2D * dstTextureRGBFmt  = ...;

// Create views for the video processor input/output
ID3D11VideoProcessorInputView  * videoProcInputView;
ID3D11VideoProcessorOutputView * videoProcOutputView;

{
    D3D11_VIDEO_PROCESSOR_INPUT_VIEW_DESC inputViewDesc = { 0 };
    inputViewDesc.ViewDimension = D3D11_VPIV_DIMENSION_TEXTURE2D;
    inputViewDesc.Texture2D.MipSlice = 0;
    inputViewDesc.Texture2D.ArraySlice = arraySliceIdx; // slice of the decoder's texture array
    hr = d3dVideoDevice->CreateVideoProcessorInputView(
        srcTextureNV12Fmt, d3dVideoProcEnum, &inputViewDesc, &videoProcInputView);
}

{
    D3D11_VIDEO_PROCESSOR_OUTPUT_VIEW_DESC outputViewDesc = { D3D11_VPOV_DIMENSION_TEXTURE2D };
    outputViewDesc.Texture2D.MipSlice = 0;
    hr = d3dVideoDevice->CreateVideoProcessorOutputView(
        dstTextureRGBFmt, d3dVideoProcEnum, &outputViewDesc, &videoProcOutputView);
}

// Setup the stream
D3D11_VIDEO_PROCESSOR_STREAM streams = { 0 };
streams.Enable = TRUE;
streams.pInputSurface = videoProcInputView;

RECT srcRect = { /* source rectangle in pixels */ };
RECT dstRect = { /* destination rectangle in pixels */ };

// Perform the video processor blit (with color conversion)
hr = d3dVideoCtx->VideoProcessorBlt(d3dVideoProc, videoProcOutputView, 0, 1, &streams);

#2



As a follow-up, I am currently using Media Foundation with Windows 7, 8 and 10, with DirectX (or just the Windows SDK in the case of 8+).


It supports far fewer formats (or rather, resolutions/profile levels), and currently I'm not exactly sure whether it's using hardware acceleration or not...


But this API is compatible, which was the original question.

