Qt + GStreamer: how to take a snapshot while playing live video

Date: 2022-12-25 08:44:26

I've developed a video player based on Qt and QtGStreamer. It is used to play live streams (RTSP). I have to add the possibility for the user to take snapshots while playing a live stream, without disturbing the video playback.

Here is the graph of the pipeline I've made:

                                -->queue-->autovideosink
uridecodebin-->videoflip-->tee--|
            |                   -->queue-->videoconvert-->pngenc-->filesink
            |
            |->audioconvert-->autoaudiosink

I use the pad-added signal from uridecodebin to dynamically add my elements to the pipeline and link them, depending on the received caps.
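
For context, here is a minimal sketch (not taken from the question; the uridecodebin setup and the URI are assumptions) of how that signal is typically connected with QtGStreamer:

// Hypothetical setup: create uridecodebin, add it to the pipeline and connect
// its pad-added signal to the handler below.
QGst::ElementPtr decoder = QGst::ElementFactory::make("uridecodebin");
decoder->setProperty("uri", QString("rtsp://example.com/live"));  // assumed stream URI
m_player->pipeline->add(decoder);
QGlib::connect(decoder, "pad-added", this, &Player::onPadAdded);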

void Player::onPadAdded(const QGst::PadPtr &pad)
{
    QGst::CapsPtr caps = pad->currentCaps();
    if (caps->toString().startsWith("video/x-raw")) {
        qDebug("Received 'video/x-raw' caps");
        handleNewVideoPad(pad);
    }
    else if (caps->toString().startsWith("audio/x-raw")) {
        qDebug("Received 'audio/x-raw' caps");
        if (!m_audioEnabled) {
            qDebug("Audio is disabled in the player. Ignoring...");
            return;
        }
        handleNewAudioPad(pad);
    }
    else {
        qWarning("Unsuported caps, arborting ...!");
        return;
    }
}

[...]

void Player::handleNewVideoPad(QGst::PadPtr pad)
{
    m_player->videoTeeVideoSrcPad = m_player->videoTee->getRequestPad("src_%u");

    // Add video elements
    m_player->pipeline->add(m_player->videoFlip);
    m_player->pipeline->add(m_player->videoTee);
    m_player->pipeline->add(m_player->videoQueue);
    m_player->pipeline->add(m_player->videoSink);

    // Add snap elements
    m_player->pipeline->add(m_player->snapQueue);
    m_player->pipeline->add(m_player->snapConverter);
    m_player->pipeline->add(m_player->snapEncoder);
    m_player->pipeline->add(m_player->snapSink);

    // Link video elements
    m_player->videoFlip->link(m_player->videoTee);
    m_player->videoQueue->link(m_player->videoSink);

    // Link snap elements
    m_player->snapQueue->link(m_player->snapConverter);
    m_player->snapConverter->link(m_player->snapEncoder);
    m_player->snapEncoder->link(m_player->snapSink);

    // Lock snap elements
    m_player->snapQueue->setStateLocked(true);
    m_player->snapConverter->setStateLocked(true);
    m_player->snapEncoder->setStateLocked(true);
    m_player->snapSink->setStateLocked(true);

    m_player->videoFlip->setState(QGst::StatePlaying);
    m_player->videoTee->setState(QGst::StatePlaying);
    m_player->videoQueue->setState(QGst::StatePlaying);
    m_player->videoSink->setState(QGst::StatePlaying);

    // Link pads
    m_player->videoTeeVideoSrcPad->link(m_player->videoQueue->getStaticPad("sink"));
    pad->link(m_player->videoSinkPad);

    m_player->videoLinked = true;
}

The method to take a snapshot:

void Player::takeSnapshot()
{
    QDateTime dateTime = QDateTime::currentDateTime();
    QString snapLocation = QString("/%1/snap_%2.png").arg(m_snapDir).arg(dateTime.toString(Qt::ISODate));

    m_player->inSnapshotCaputre = true;

    if (m_player->videoTeeSnapSrcPad) {
        m_player->videoTee->releaseRequestPad(m_player->videoTeeSnapSrcPad);
        m_player->videoTeeSnapSrcPad.clear();
    }
    m_player->videoTeeSnapSrcPad = m_player->videoTee->getRequestPad("src_%u");

    // Stop the snapshot branch
    m_player->snapQueue->setState(QGst::StateNull);
    m_player->snapConverter->setState(QGst::StateNull);
    m_player->snapEncoder->setState(QGst::StateNull);
    m_player->snapSink->setState(QGst::StateNull);

    // Link Tee src pad to snap queue sink pad
    m_player->videoTeeSnapSrcPad->link(m_player->snapQueue->getStaticPad("sink"));

    // Set the snapshot location property
    m_player->snapSink->setProperty("location", snapLocation);

    // Unlock snapshot branch
    m_player->snapQueue->setStateLocked(false);
    m_player->snapConverter->setStateLocked(false);
    m_player->snapEncoder->setStateLocked(false);
    m_player->snapSink->setStateLocked(false);
    m_player->videoTeeSnapSrcPad->setActive(true);

    // Synch snapshot branch state with parent
    m_player->snapQueue->syncStateWithParent();
    m_player->snapConverter->syncStateWithParent();
    m_player->snapEncoder->syncStateWithParent();
    m_player->snapSink->syncStateWithParent();
}

The bus message callback:

void Player::onBusMessage(const QGst::MessagePtr & message)
{
    QGst::ElementPtr source = message->source().staticCast<QGst::Element>();
    switch (message->type()) {
    case QGst::MessageEos: { //End of stream. We reached the end of the file.
        qDebug("Message End Off Stream");
        if (m_player->inSnapshotCaputre) {
            blockSignals(true);
            pause();
            play();
            blockSignals(false);
            m_player->inSnapshotCaputre = false;
        }
        else {
            m_eos = true;
            stop();
        }
        break;
    }
    [...]
}

The problem is:

  • When I set the snapshot property of the pngenc element to true (see the one-liner after this list), I receive the EOS event, which stops my pipeline, so I have to restart it. This freezes the video playback for about half a second, which is not acceptable in my case.
  • When I set the snapshot property of the pngenc element to false, the pipeline is not disturbed, but my PNG file keeps growing until I call the Player::takeSnapshot() method again.
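
For reference, a minimal example (not from the question) of toggling that property, using the element name from handleNewVideoPad(); pngenc's snapshot property makes the encoder push EOS after the first encoded frame:

// snapshot = true: pngenc emits EOS after one frame; snapshot = false: it keeps
// encoding, so filesink appends every incoming PNG frame to the same file.
m_player->snapEncoder->setProperty("snapshot", true);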

Where am I wrong? Is there a better way to do it? I've tried, without success, to wrap my snapshot branch in a QGst::Bin element. What about a pad probe?

Thanks in advance

2 Answers

#1 (3 votes)

You can take the last-sample property on any sink, e.g. your video sink. This contains a GstSample, which has a buffer with the very latest video frame in it. You can take that as a snapshot, and e.g. with gst_video_convert_sample() or the async variant of it, convert it to a PNG/JPG/whatever.

See https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer-libs/html/GstBaseSink.html#GstBaseSink--last-sample and https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-libs/html/gst-plugins-base-libs-gstvideo.html#gst-video-convert-sample

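As an illustration of that suggestion, here is a hedged sketch (untested, plain GStreamer C API; the output path and function names are mine, not from the answer) that reads last-sample and converts it straight to image/png with the async variant:

#include <gst/gst.h>
#include <gst/video/video.h>

// Invoked when the conversion finishes (or fails); the converted sample's
// buffer then holds a complete PNG image.
static void onSnapshotConverted(GstSample *converted, GError *error, gpointer /*userData*/)
{
    if (converted == NULL) {
        g_printerr("Snapshot conversion failed: %s\n", error ? error->message : "unknown");
        return;
    }

    GstBuffer *buffer = gst_sample_get_buffer(converted);
    GstMapInfo map;
    if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
        g_file_set_contents("/tmp/snapshot.png", (const gchar *) map.data, map.size, NULL);
        gst_buffer_unmap(buffer, &map);
    }
    gst_sample_unref(converted);  // assuming ownership of the converted sample is transferred
}

static void takeSnapshotAsync(GstElement *videoSink)
{
    GstSample *sample = NULL;
    g_object_get(videoSink, "last-sample", &sample, NULL);
    if (sample == NULL)
        return;  // nothing rendered yet

    // gst_video_convert_sample() also accepts encoded image caps such as image/png.
    GstCaps *pngCaps = gst_caps_new_empty_simple("image/png");
    gst_video_convert_sample_async(sample, pngCaps, GST_SECOND, onSnapshotConverted, NULL, NULL);
    gst_caps_unref(pngCaps);
    gst_sample_unref(sample);
}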

Alternatively, you would have to shut down the filesink snapshot pipeline after the first frame, for example by using a pad probe to know when the first frame has passed and then injecting an EOS event to prevent further PNG frames from being appended to the same file (a rough sketch of this follows below).
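
A rough sketch of that alternative (untested; plain GStreamer C API, with the element roles from the question's pipeline and hypothetical helper names): a buffer probe on the tee's snapshot src pad lets exactly one buffer reach pngenc/filesink, injects EOS into the snapshot branch so the file is finalized, and drops everything that follows so the main playback branch keeps running:

#include <gst/gst.h>

// Hypothetical bookkeeping kept by the player for one snapshot request.
struct SnapProbeData {
    GstPad  *branchSinkPad;   // the snap queue's sink pad
    gboolean frameSaved;
    gboolean eosSent;
};

static GstPadProbeReturn snapshotProbe(GstPad * /*teeSrcPad*/, GstPadProbeInfo * /*info*/, gpointer userData)
{
    SnapProbeData *data = static_cast<SnapProbeData *>(userData);

    if (!data->frameSaved) {
        data->frameSaved = TRUE;           // let exactly one buffer through
        return GST_PAD_PROBE_OK;
    }

    if (!data->eosSent) {
        // Finalize the PNG by sending EOS into the snapshot branch only.
        gst_pad_send_event(data->branchSinkPad, gst_event_new_eos());
        data->eosSent = TRUE;
    }

    return GST_PAD_PROBE_DROP;             // keep further buffers out of the branch
}

// Install the probe when a snapshot is requested; "data" must outlive the probe.
static void installSnapshotProbe(GstPad *teeSnapSrcPad, GstPad *snapQueueSinkPad, SnapProbeData *data)
{
    data->branchSinkPad = snapQueueSinkPad;
    data->frameSaved = FALSE;
    data->eosSent = FALSE;
    gst_pad_add_probe(teeSnapSrcPad, GST_PAD_PROBE_TYPE_BUFFER, snapshotProbe, data, NULL);
}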

#2 (1 vote)

Thanks to @sebastian-droge's answer, I found the solution, using gst_video_convert_sample and the last-sample property of my video sink.

The solution I've implemented is:

void Player::takeSnapshot()
{
    QDateTime currentDate = QDateTime::currentDateTime();
    QString location = QString("%1/snap_%2.png").arg(QDir::homePath()).arg(currentDate.toString(Qt::ISODate));
    QImage snapShot;
    QImage::Format snapFormat;
    QGlib::Value val = m_videoSink->property("last-sample");
    GstSample *videoSample = (GstSample *)g_value_get_boxed(val);
    QGst::SamplePtr sample = QGst::SamplePtr::wrap(videoSample);
    QGst::SamplePtr convertedSample;
    QGst::BufferPtr buffer;
    QGst::CapsPtr caps = sample->caps();
    QGst::MapInfo mapInfo;
    GError *err = NULL;
    GstCaps * capsTo = NULL;
    const QGst::StructurePtr structure = caps->internalStructure(0);
    int width, height;

    width = structure.data()->value("width").get<int>();
    height = structure.data()->value("height").get<int>();

    qDebug() << "Sample caps:" << structure.data()->toString();

    /*
     * { QImage::Format_RGBX8888, GST_VIDEO_FORMAT_RGBx  },
     * { QImage::Format_RGBA8888, GST_VIDEO_FORMAT_RGBA  },
     * { QImage::Format_RGB888  , GST_VIDEO_FORMAT_RGB   },
     * { QImage::Format_RGB16   , GST_VIDEO_FORMAT_RGB16 }
     */
    snapFormat = QImage::Format_RGB888;
    capsTo = gst_caps_new_simple("video/x-raw",
                                 "format", G_TYPE_STRING, "RGB",
                                 "width", G_TYPE_INT, width,
                                 "height", G_TYPE_INT, height,
                                 NULL);

    convertedSample = QGst::SamplePtr::wrap(gst_video_convert_sample(videoSample, capsTo, GST_SECOND, &err));
    if (convertedSample.isNull()) {
        qWarning() << "gst_video_convert_sample Failed:" << err->message;
    }
    else {
        qDebug() << "Converted sample caps:" << convertedSample->caps()->toString();

        buffer = convertedSample->buffer();
        buffer->map(mapInfo, QGst::MapRead);

        snapShot = QImage((const uchar *)mapInfo.data(),
                          width,
                          height,
                          snapFormat);

        qDebug() << "Saving snap to" << location;
        snapShot.save(location);

        buffer->unmap(mapInfo);
    }

    val.clear();
    sample.clear();
    convertedSample.clear();
    buffer.clear();
    caps.clear();
    g_clear_error(&err);
    if (capsTo)
        gst_caps_unref(capsTo);
}

I've created a simple test application which implements this solution. The code is available on my GitHub.
