Asked by: 小点点

JavaCV: decoding an H.264 live stream from a Red5 server on an Android device


Here is my problem: I have implemented a server-side application with Red5 that sends an H.264-encoded live stream. On the client side the stream is received as a byte[]. To decode it on the Android client I followed the JavaCV/FFmpeg library. The decoding code is as follows:

public Frame decodeVideo(byte[] data, long timestamp) {
    frame.image = null;
    frame.samples = null;
    avcodec.av_init_packet(pkt);
    BytePointer video_data = new BytePointer(data);
    avcodec.AVCodec codec = avcodec.avcodec_find_decoder(codec_id);
    video_c = null;
    video_c = avcodec.avcodec_alloc_context3(codec);
    video_c.width(320);
    video_c.height(240);
    video_c.pix_fmt(0);
    video_c.flags2(video_c.flags2() | avcodec.CODEC_FLAG2_CHUNKS);
    avcodec.avcodec_open2(video_c, codec, null);
    picture = avcodec.avcodec_alloc_frame();
    pkt.data(video_data);
    pkt.size(data.length);
    int len = avcodec.avcodec_decode_video2(video_c, picture, got_frame, pkt);
    if ((len >= 0) && (got_frame[0] != 0)) {
        // ... process the decoded frame into an IplImage of JavaCV
        // and render it with an ImageView on Android
    }
}

The data received from the server looks like this. A few frames have the following pattern:
17 01 00 00 00 00 00 00 00 00 02 09 10 00 00 00 0F 06 00 01 C0 01 07 09 08 04 9A 00 00 03 00 80 00 00 16 EF 65 88 80 07 00 05 6C 98 90 00...

Many frames have the following pattern:
27 01 00 00 00 00 00 00 00 00 00 00 02 09 30 00 00 00 00 0C 06 01 07 09 08 05 9A 00 00 03 00 80 00 00 0D 77 41 9A 02 04 15 B5 06 20 E3 11 E2 3C 46....
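These bytes look like FLV/RTMP video tag payloads: the high nibble of the first byte is the frame type (1 = keyframe, 2 = inter frame), the low nibble is the codec ID (7 = AVC/H.264), and the second byte is the AVCPacketType (0 = sequence header with decoder configuration, 1 = NALU data). A minimal sketch of parsing that header (the class name `VideoTagHeader` is hypothetical, not part of the original code):

```java
// Minimal parser for the first two bytes of an FLV/RTMP video payload.
// Field layout follows the FLV specification's VIDEODATA tag.
public class VideoTagHeader {
    public final int frameType;      // high nibble of byte 0: 1 = keyframe, 2 = inter frame
    public final int codecId;        // low nibble of byte 0: 7 = AVC (H.264)
    public final int avcPacketType;  // byte 1: 0 = sequence header, 1 = NALU

    public VideoTagHeader(byte[] payload) {
        frameType = (payload[0] >> 4) & 0x0F;
        codecId = payload[0] & 0x0F;
        avcPacketType = payload[1] & 0xFF;
    }

    public static void main(String[] args) {
        // first pattern above: 17 01 ... = keyframe, AVC, NALU packet
        VideoTagHeader key = new VideoTagHeader(new byte[] {0x17, 0x01});
        // second pattern above: 27 01 ... = inter frame, AVC, NALU packet
        VideoTagHeader inter = new VideoTagHeader(new byte[] {0x27, 0x01});
        System.out.println(key.frameType + " " + key.codecId + " " + key.avcPacketType);     // 1 7 1
        System.out.println(inter.frameType + " " + inter.codecId + " " + inter.avcPacketType); // 2 7 1
    }
}
```

Under this reading, `17 01 …` is an AVC keyframe NALU packet and `27 01 …` an AVC inter-frame NALU packet; a packet starting with `17 00` would carry the decoder configuration (SPS/PPS).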

With the H.264 codec, the decoder returns a length > 0 but got_frame stays 0; with the MPEG-1 codec, the length is > 0 and got_frame > 0, but the output image is green or distorted.

However, following the code of JavaCV's FFmpegFrameGrabber, I am able to decode a local (H.264-encoded) file with similar code.

I would like to know what detail I am missing: some header-related manipulation of the data, or settings needed to make the codec suitable for decoding.

Any suggestions would be appreciated.
Thanks in advance.


1 Answer

Anonymous user

Finally... after a lot of R&D, it is working.
What I was missing was an analysis of the structure of the video packets. The stream consists of configuration packets and frame packets: when the second byte of the payload is 0, the packet is an AVC sequence header, i.e. the decoder configuration (SPS/PPS) that describes how the following frames are encoded; when it is 1, the packet carries an actual video frame.
So the frame packets have to be decoded with respect to the information from the configuration packet. The final code looks like this:

public IplImage decodeFromVideo(byte[] data, long timeStamp) {
    avcodec.av_init_packet(reveivedVideoPacket); // empty AVPacket

    /*
     * The second byte of the payload is the AVCPacketType:
     * 0 = AVC sequence header (decoder configuration), 1 = AVC NALU.
     */
    byte frameFlag = data[1];
    // Skip the 5-byte header (frame type/codec ID, AVCPacketType, composition time)
    byte[] subData = Arrays.copyOfRange(data, 5, data.length);

    BytePointer videoData = new BytePointer(subData);
    if (frameFlag == 0) {
        avcodec.AVCodec codec = avcodec
                .avcodec_find_decoder(avcodec.AV_CODEC_ID_H264);
        if (codec != null) {
            videoCodecContext = null;
            videoCodecContext = avcodec.avcodec_alloc_context3(codec);
            videoCodecContext.width(320);
            videoCodecContext.height(240);
            videoCodecContext.pix_fmt(avutil.AV_PIX_FMT_YUV420P);
            videoCodecContext.codec_type(avutil.AVMEDIA_TYPE_VIDEO);
            // The sequence-header payload becomes the decoder's extradata (SPS/PPS)
            videoCodecContext.extradata(videoData);
            videoCodecContext.extradata_size(videoData.capacity());

            videoCodecContext.flags2(videoCodecContext.flags2()
                    | avcodec.CODEC_FLAG2_CHUNKS);
            avcodec.avcodec_open2(videoCodecContext, codec,
                    (PointerPointer) null);

            if ((videoCodecContext.time_base().num() > 1000)
                    && (videoCodecContext.time_base().den() == 1)) {
                videoCodecContext.time_base().den(1000);
            }
        } else {
            Log.e("test", "Codec could not be opened");
        }
    }

    if ((decodedPicture = avcodec.avcodec_alloc_frame()) != null) {
        if ((processedPicture = avcodec.avcodec_alloc_frame()) != null) {
            int width = getImageWidth() > 0 ? getImageWidth()
                    : videoCodecContext.width();
            int height = getImageHeight() > 0 ? getImageHeight()
                    : videoCodecContext.height();

            switch (imageMode) {
            case COLOR:
            case GRAY:
                int fmt = 3;
                int size = avcodec.avpicture_get_size(fmt, width, height);
                processPictureBuffer = new BytePointer(avutil.av_malloc(size));
                avcodec.avpicture_fill(new AVPicture(processedPicture),
                        processPictureBuffer, fmt, width, height);
                returnImageFrame = opencv_core.IplImage.createHeader(320, 240, 8, 1);
                break;
            case RAW:
                processPictureBuffer = null;
                returnImageFrame = opencv_core.IplImage.createHeader(320, 240, 8, 1);
                break;
            default:
                Log.d("showit", "Unexpected image mode: " + imageMode);
            }

            reveivedVideoPacket.data(videoData);
            reveivedVideoPacket.size(videoData.capacity());
            reveivedVideoPacket.pts(timeStamp);
            videoCodecContext.pix_fmt(avutil.AV_PIX_FMT_YUV420P);
            decodedFrameLength = avcodec.avcodec_decode_video2(videoCodecContext,
                    decodedPicture, isVideoDecoded, reveivedVideoPacket);

            if ((decodedFrameLength >= 0) && (isVideoDecoded[0] != 0)) {
                // ... process the image the same way JavaCV does ...
            }
        }
    }
    return returnImageFrame;
}
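For completeness: the payload stored as extradata when `data[1] == 0` is an AVCDecoderConfigurationRecord ("avcC", defined in ISO/IEC 14496-15), which carries the SPS/PPS parameter sets and the size of the NALU length prefix. A minimal dependency-free sketch of reading it (the class name `AvcConfigParser` is hypothetical, not part of the code above):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of a parser for the AVCDecoderConfigurationRecord ("avcC").
// Layout: version, profile, compatibility, level, lengthSizeMinusOne,
// then counted lists of SPS and PPS NAL units, each with a 16-bit length.
public class AvcConfigParser {
    public final int nalLengthSize;        // usually 4: size of each NALU length prefix
    public final List<byte[]> spsList = new ArrayList<>();
    public final List<byte[]> ppsList = new ArrayList<>();

    public AvcConfigParser(byte[] avcC) {
        nalLengthSize = (avcC[4] & 0x03) + 1;   // lengthSizeMinusOne + 1
        int pos = 5;
        int numSps = avcC[pos++] & 0x1F;        // lower 5 bits = SPS count
        for (int i = 0; i < numSps; i++) {
            int len = ((avcC[pos] & 0xFF) << 8) | (avcC[pos + 1] & 0xFF);
            pos += 2;
            spsList.add(Arrays.copyOfRange(avcC, pos, pos + len));
            pos += len;
        }
        int numPps = avcC[pos++] & 0xFF;        // PPS count
        for (int i = 0; i < numPps; i++) {
            int len = ((avcC[pos] & 0xFF) << 8) | (avcC[pos + 1] & 0xFF);
            pos += 2;
            ppsList.add(Arrays.copyOfRange(avcC, pos, pos + len));
            pos += len;
        }
    }
}
```

This is only needed if you want to inspect or repackage the parameter sets yourself; when the whole avcC record is handed to the decoder as extradata, as in the code above, FFmpeg parses it internally.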

Hope it helps someone.