How to create ABR content with FFmpeg in one pass

I was once informed that it would be nigh impossible to create ABR content in one pass using FFmpeg.  Challenge accepted!

I remembered that statement when I wanted to calculate a bit per pixel density encoding matrix for different video resolutions. The problem is that every source video is different, often from one minute to the next, and I did not want to encode multiple different video files multiple times just to generate this matrix. Desperation, being the mother of invention, intervened on my behalf. Once I figured out how to perform single pass ABR encoding, I ran some ABR encodes at CRF 21 to find an approximate bit per pixel density. As expected this yielded a fair amount of data, since each one minute section of the source video I encoded was different. From that data I validated a trend I had been seeing for a while.

To retain relatively sane visual quality, the lower the resolution of the video, the higher the bit per pixel density should be. If you run a one minute CRF encode against each minute of your 1080p source you end up with a bit per pixel density matrix. Using the matrix below, assuming you do not want to make one of your own, you should be able to find an approximate bitrate for your other resolutions. Please note that the 1080p bit per pixel density shown below is measured post encode and is not from the source file. Best practice for a full encode is to perform a single pass encode against your 1080p source to generate a bit per pixel density that you can then use to encode your ABR content. A sketch of how to build such a matrix follows.
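Here is a minimal sketch of that per minute measurement, assuming a source named source1080p.mp4 and the MediaInfo CLI on your path; the filenames and the one minute slice length are my own examples, not a fixed recipe:

src=source1080p.mp4
# Total duration in whole seconds, so we know how many slices to cut.
dur=$(ffprobe -v error -show_entries format=duration -of csv=p=0 "$src" | cut -d. -f1)
for ((start=0; start<dur; start+=60)); do
ffmpeg -ss "$start" -t 60 -i "$src" -an -vcodec libx264 -crf 21 -preset veryfast -profile:v baseline -y "slice_$start.mp4"
# Bits/(Pixel*Frame) is the BPP figure MediaInfo reports for each slice.
mediainfo "slice_$start.mp4" | grep "Bits/(Pixel\*Frame)"
done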

RES     BPP      RES     BPP      RES     BPP      RES      BPP
360p    0.277    480p    0.226    720p    0.185    1080p    0.161
360p    0.244    480p    0.207    720p    0.176    1080p    0.155
360p    0.208    480p    0.169    720p    0.139    1080p    0.120
360p    0.215    480p    0.164    720p    0.128    1080p    0.111
360p    0.194    480p    0.158    720p    0.131    1080p    0.117
360p    0.164    480p    0.136    720p    0.115    1080p    0.106
360p    0.136    480p    0.110    720p    0.091    1080p    0.082
360p    0.152    480p    0.117    720p    0.092    1080p    0.080
360p    0.160    480p    0.120    720p    0.093    1080p    0.079
360p    0.134    480p    0.108    720p    0.089    1080p    0.079
360p    0.126    480p    0.100    720p    0.081    1080p    0.071
360p    0.125    480p    0.097    720p    0.078    1080p    0.069
360p    0.118    480p    0.091    720p    0.074    1080p    0.065
360p    0.103    480p    0.084    720p    0.070    1080p    0.065
360p    0.103    480p    0.083    720p    0.068    1080p    0.063
360p    0.110    480p    0.085    720p    0.068    1080p    0.062
360p    0.105    480p    0.082    720p    0.066    1080p    0.061
360p    0.094    480p    0.074    720p    0.061    1080p    0.057
360p    0.100    480p    0.075    720p    0.059    1080p    0.054
360p    0.077    480p    0.062    720p    0.051    1080p    0.049
360p    0.078    480p    0.060    720p    0.050    1080p    0.048
360p    0.077    480p    0.061    720p    0.049    1080p    0.044
360p    0.072    480p    0.055    720p    0.044    1080p    0.041
360p    0.056    480p    0.045    720p    0.038    1080p    0.041
360p    0.063    480p    0.052    720p    0.043    1080p    0.040
360p    0.051    480p    0.042    720p    0.035    1080p    0.038
360p    0.046    480p    0.038    720p    0.033    1080p    0.037

Yes, entropy encoding is interesting. You probably noticed that the numbers in the matrix above are not as uniform across the resolutions as we would like. Using that matrix as a guideline I rounded the bit per pixel densities, ran the calculations below, and then rounded the resulting bitrates. The formula in each line is width * height * fps * BPP / 1024 = kbps.

1920*1080*23.976/1024*0.070  ==  3398.598kbps
1280*720*23.976/1024*0.080   ==  1726.272kbps
854*480*23.976/1024*0.100    ==  959.78925kbps
480*360*23.976/1024*0.125    ==  505.74375kbps
426*240*23.976/1024*0.133    ==  318.38254875kbps
284*160*23.976/1024*0.150    ==  159.59025kbps

With all of that said, 1080p comes out to about 3400kbps, 720p to about 1725kbps, 480p to about 960kbps, 360p to about 510kbps, 240p to about 320kbps, and 160p to about 160kbps.
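If you want to run the numbers yourself, here is a tiny helper, a sketch using awk for the floating point math; the function name and argument order are my own invention:

kbps() { awk -v w="$1" -v h="$2" -v f="$3" -v b="$4" 'BEGIN { printf "%.3f kbps\n", w * h * f * b / 1024 }'; }
kbps 1920 1080 23.976 0.070
kbps 1280 720 23.976 0.080

The first call prints 3398.598 kbps, matching the 1080p calculation above.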

Now, how did I do it?  With likely the longest FFmpeg command line I have ever assembled.  I have broken out the different outputs to their own separate lines for readability.

ffmpeg.exe -i sourcefile.mp4

-pix_fmt yuv420p -r 23.976 -vcodec libx264 -vf "scale=1920:1080" -b:v 3400k -preset veryfast -profile:v baseline -keyint_min 24 -g 48 -x264opts no-scenecut -strict experimental -acodec aac -b:a 96k -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" -map_metadata -1 -f mp4 1080p.mp4

-pix_fmt yuv420p -r 23.976 -vcodec libx264 -vf "scale=1280:720" -b:v 1725k -preset veryfast -profile:v baseline -keyint_min 24 -g 48 -x264opts no-scenecut -strict experimental -acodec aac -b:a 96k -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" -map_metadata -1 -f mp4 720p.mp4

-pix_fmt yuv420p -r 23.976 -vcodec libx264 -vf "scale=854:480" -b:v 960k -preset veryfast -profile:v baseline -keyint_min 24 -g 48 -x264opts no-scenecut -strict experimental -acodec aac -b:a 96k -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" -map_metadata -1 -f mp4 480p.mp4

-pix_fmt yuv420p -r 23.976 -vcodec libx264 -vf "scale=480:360" -b:v 510k -preset veryfast -profile:v baseline -keyint_min 24 -g 48 -x264opts no-scenecut -strict experimental -acodec aac -b:a 96k -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" -map_metadata -1 -f mp4 360p.mp4

-pix_fmt yuv420p -r 23.976 -vcodec libx264 -vf "scale=426:240" -b:v 320k -preset veryfast -profile:v baseline -keyint_min 24 -g 48 -x264opts no-scenecut -strict experimental -acodec aac -b:a 96k -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" -map_metadata -1 -f mp4 240p.mp4

-pix_fmt yuv420p -r 23.976 -vcodec libx264 -vf "scale=284:160" -b:v 160k -preset veryfast -profile:v baseline -keyint_min 24 -g 48 -x264opts no-scenecut -strict experimental -acodec aac -b:a 96k -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" -map_metadata -1 -f mp4 160p.mp4

Or the long version if you prefer:

ffmpeg.exe -i sourcefile.mp4 -pix_fmt yuv420p -r 23.976 -vcodec libx264 -vf "scale=1920:1080" -b:v 3400k -preset veryfast -profile:v baseline -keyint_min 24 -g 48 -x264opts no-scenecut -strict experimental -acodec aac -b:a 96k -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" -map_metadata -1 -f mp4 1080p.mp4 -pix_fmt yuv420p -r 23.976 -vcodec libx264 -vf "scale=1280:720" -b:v 1725k -preset veryfast -profile:v baseline -keyint_min 24 -g 48 -x264opts no-scenecut -strict experimental -acodec aac -b:a 96k -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" -map_metadata -1 -f mp4 720p.mp4 -pix_fmt yuv420p -r 23.976 -vcodec libx264 -vf "scale=854:480" -b:v 960k -preset veryfast -profile:v baseline -keyint_min 24 -g 48 -x264opts no-scenecut -strict experimental -acodec aac -b:a 96k -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" -map_metadata -1 -f mp4 480p.mp4 -pix_fmt yuv420p -r 23.976 -vcodec libx264 -vf "scale=480:360" -b:v 510k -preset veryfast -profile:v baseline -keyint_min 24 -g 48 -x264opts no-scenecut -strict experimental -acodec aac -b:a 96k -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" -map_metadata -1 -f mp4 360p.mp4 -pix_fmt yuv420p -r 23.976 -vcodec libx264 -vf "scale=426:240" -b:v 320k -preset veryfast -profile:v baseline -keyint_min 24 -g 48 -x264opts no-scenecut -strict experimental -acodec aac -b:a 96k -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" -map_metadata -1 -f mp4 240p.mp4 -pix_fmt yuv420p -r 23.976 -vcodec libx264 -vf "scale=284:160" -b:v 160k -preset veryfast -profile:v baseline -keyint_min 24 -g 48 -x264opts no-scenecut -strict experimental -acodec aac -b:a 96k -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" -map_metadata -1 -f mp4 160p.mp4 -strict experimental -acodec aac -vn -b:a 96k -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" -f mp4 AudioOnly.mp4

Now go forth and streamline your production environment. If you plan on performing two pass encoding you are on your own, unless you build your own bit per pixel density encoding matrix from your unique and varied content.

How to get a live u-Law WAV stream to Cisco VOIP servers (Updated 2017-07-23)

I’m probably going to get some of the specifics on the Cisco VOIP server a bit wrong, but the following is what I pieced together over the years from multiple customers who did not yet know how to set up their Cisco VOIP server with on hold audio. As scary as it may seem, I think I was better at setting up on hold music using Helix Server than anyone working at Cisco or any of their customers, even though Cisco recommended Helix Server for many years until it was discontinued. Then again, it was my job to know these sorts of things.

In a typical scenario a Cisco engineer and one of their customers would get on a phone call with me to work through configuring Helix Server for streaming their on hold audio. For everything else Cisco I remain a knuckle-dragging troglodyte.

When I was working at RealNetworks supporting Helix Server we had a high volume of customers using Cisco VOIP phone systems and they all needed two things:

1) Looped on demand u-Law WAV files. Helix Server supported this using its Simulated Live Transfer Agent (SLTA).

The Cisco VOIP server lets you upload audio files, which then get converted to u-Law WAV files, to play when a customer is on hold in a cost center (customer service, billing, legal, etc.) so that callers hear customized music or advertisements for a product the customer might be interested in. I was never fully sure why anyone used SLTA for this, unless they had a low end Cisco server that lacked the function, had more cost centers than the Cisco server supported, or, even worse, did not know that their Cisco server had that functionality.

Cisco did not seem to have any really good documentation on how to make the u-Law WAV files you needed for SLTA, but this forum post works. Sadly FFmpeg, my encoding tool of choice, supports looping images only. There is currently no audio equivalent that I know of.

-loop_input
Loop over the input stream. Currently it works only for image streams. This option is used for automatic FFserver testing. This option is deprecated, use -loop 1.

I have a few theoretical hacks to get looped content working, but they are difficult to set up and very unstable. In other words, they are not ready for deployment in an enterprise environment that demands high uptime. If I find a free solution that is stable I will post it here. One such fragile hack is sketched below.
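Purely to illustrate the sort of hack I mean, here is a sketch using FFmpeg's concat demuxer with a playlist that repeats the same file thousands of times to fake an endless source; the filenames are my own examples, the stream still ends when the playlist runs out, and this is not a production recommendation:

# Build a playlist that repeats the same u-Law file 10000 times.
for i in $(seq 1 10000); do echo "file 'out.wav'"; done > loop.txt
ffmpeg -re -f concat -i loop.txt -acodec pcm_mulaw -ac 1 -ar 8000 -f rtp rtp://224.224.224.224:21414/live.sdp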

2) Live u-Law WAV stream. Helix Server did not support this. I performed extensive and exhaustive testing and it was unable to properly repacketize an incoming live u-Law stream to either unicast RTSP or multicast SDP no matter the input method. I was hoping that this would get fixed, however our group was laid off before that could happen.

Cisco used to have an audio capture card in their hardware VOIP servers that customers would use to pipe in their satellite Muzak feed (stereo or mono DIN input, if I remember correctly), but that was discontinued; they now apparently only provide an image that goes into a VM, so there is no capture card. Their customers had to settle for SLTA.

With that said I can provide option number two for companies that have a Cisco VOIP server that they use. For a proper live u-Law WAV delivery configuration you need to know a few things:

A) A little bit about FFmpeg, or at least a willingness to learn. You can download already compiled versions for Windows over at Zeranoe’s website. If you are on Linux you can either compile FFmpeg yourself or head to the FFmpeg website for static Linux builds.

B) How to create a u-Law file for testing a pseudo live feed.

C) How to create a multicast SDP file with FFmpeg using the u-Law file created above.

D) How to modify the multicast SDP file to work with VLC as a player. This is the first test to see if you have things right for live streaming.

E) How to connect via DirectShow on Windows or ALSA on Linux to an audio source. This is the final step in testing that your device works. If you start here then you may never really know if your device is working, if the SDP file is working, or if you even created the output correctly.

You will learn all of the above in this article.

I just finished converting an MP3 file to u-Law using the following command line:

ffmpeg -i in.mp3 -acodec pcm_mulaw -b:a 64k -ac 1 -ar 8000 -f wav -y out.wav

You can deliver a pseudo live non looping feed of that file:

ffmpeg -re -i out.wav -f rtp rtp://224.224.224.224:21414/live.sdp

You can also use the source file if you want:

ffmpeg -re -i in.mp3 -acodec pcm_mulaw -b:a 64k -ac 1 -ar 8000 -f rtp rtp://224.224.224.224:21414/live.sdp

FFmpeg is nice in that it dumps the SDP information for the RTP stream to the command prompt even though no SDP file is created:

v=0
o=- 0 0 IN IP4 127.0.0.1
s=Your File Metadata
c=IN IP4 224.224.224.224
t=0 0
a=tool:libavformat 57.23.100
m=audio 21414 RTP/AVP 0
b=AS:64

Sadly, playback of that SDP output occasionally stutters and then cuts out, whether listening to the stream with VLC, QuickTime for the PC, or RealPlayer. If you read through all of the RFCs you might get an idea of the complexity of the RTP/SDP specifications.

Or not. I don’t know about you, but reading those RFCs puts me right to sleep. A slightly easier to digest article on SDP structure can be found here. The article covers why your technically 100 percent compliant SDP file doesn’t work with Helix Server: to use your SDP file with that server you are required to add optional flags.

Sadly the mostly working SDP file that FFmpeg creates is missing one important item:

a=rtpmap:0 PCMU/8000/1

The “rtpmap” attribute maps the audio defined in the “m” (media) line to the network RTP output, defining the codec (payload type), its clock rate, and, for an audio stream, the number of channels in use. This is sort of important for devices, players, and receivers to know what to listen for and how to decode it, especially when two or more streams are described in the SDP file. For payload type 0 the codec is PCMU at a clock rate of 8000 Hz with one channel.

Playing that modified SDP file fixes everything, at least for VLC and QuickTime:

v=0
o=- 0 0 IN IP4 127.0.0.1
s=Your File Metadata
c=IN IP4 224.224.224.224
t=0 0
a=tool:libavformat 55.0.100
m=audio 21414 RTP/AVP 0
b=AS:64
a=rtpmap:0 PCMU/8000/1

Please note that if you have multiple live streams running, each SDP file and each encoder needs to be configured with a different port number for the audio. I make sure to increase the port number by two in each SDP, for example 21414, 21416, 21418, and so on, since the odd numbered port above each RTP port is reserved for RTCP. An example of two side by side feeds is shown below.
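Here is a minimal sketch of two simultaneous feeds; the source filenames and SDP filenames are my own examples:

ffmpeg -re -i hold1.wav -f rtp rtp://224.224.224.224:21414/live1.sdp &
ffmpeg -re -i hold2.wav -f rtp rtp://224.224.224.224:21416/live2.sdp &

live1.sdp then advertises m=audio 21414 RTP/AVP 0 while live2.sdp advertises m=audio 21416 RTP/AVP 0.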

Now that you have something that works with a file, let us try a live source. On Windows you will need to have FFmpeg connect via DirectShow. To find the list of DirectShow devices on your computer, use the command line shown below:

ffmpeg -list_devices true -f dshow -i dummy

Now feel free to try it with your audio device.

ffmpeg -f dshow -i audio="Microphone (HD Pro Webcam C920)" -acodec pcm_mulaw -b:a 64k -ac 1 -ar 8000 -f rtp rtp://224.224.224.224:21414/live.sdp

The line above works well for me, especially as FFmpeg now supports crossbar devices.

If you are on Linux you may want to use ALSA to connect to your live feed, but again you need to find the device you want to use first. This will show you the ALSA devices your system has:

$ arecord -L

$ ffmpeg -f alsa -i default:CARD=U0x46d0x809 -acodec pcm_mulaw -b:a 64k -ac 1 -ar 8000 -f rtp rtp://224.224.224.224:21414/live.sdp

On a side note, you will probably want to host your SDP file or files on a robust web server, or perhaps even behind a load balancer. From the logs that I have parsed over the years, the Cisco VOIP server retrieves the SDP file every time a caller is put back into the queue. The highest number of connections I recall seeing was around 3,000 per second, so people who have to support a high volume call center or a large corporation should prepare for this behavior by putting up a web server dedicated to delivering their SDP files, or several web servers behind a load balancer.

The only way this DDoS like effect could be mitigated or resolved is if the Cisco VOIP server were modified to grab the multicast information from the SDP file, retain it for use among the clients, and then re-check the multicast SDP file every minute or so in case the structure of the audio feed changed or was updated along with the associated SDP file. Frankly, I just don’t see that happening.

And for those few who are interested in what a scalable multicast u-Law SDP file generated by Helix Server looks like, or for whom the SDP file format I describe above doesn’t work for some reason, look no further than the output below:

v=0
o=- 275648743 275648743 IN IP4 0.0.0.0
s=War Pigs+Luke’s Wall
i=Black Sabbath
c=IN IP4 0.0.0.0
t=0 0
a=SdpplinVersion:1610641560
a=StreamCount:integer;1
a=control:*
a=Flags:integer;1
a=LatencyMode:integer;0
a=LiveStream:integer;1
a=Timeout:integer;30
a=Author:buffer;"QmxhY2sgU2FiYmF0aAA="
a=Title:buffer;"V2FyIFBpZ3MrTHVrZSdzIFdhbGwA"
a=ASMRuleBook:string;"#($Bandwidth >= 0),Stream0Bandwidth = 64000;"
a=FileName:string;"warpigs"
a=range:npt=0-
m=audio 21414 RTP/AVP 0
c=IN IP4 224.224.224.224/16
b=AS:90
b=TIAS:64000
b=RR:2400
b=RS:800
a=maxprate:50.000000
a=control:streamid=1
a=range:npt=0-
a=length:npt=0
a=rtpmap:0 PCMU/8000/1
a=fmtp:0
a=mimetype:string;"audio/PCMU"
a=ASMRuleBook:string;"marker=0, AverageBandwidth=64000, Priority=9, timestampdelivery=true;"
a=3GPP-Adaptation-Support:1
a=Helix-Adaptation-Support:1
a=AvgBitRate:integer;64000
a=AvgPacketSize:integer;160
a=BitsPerSample:integer;16
a=LiveStream:integer;1
a=MaxBitRate:integer;64000
a=MaxPacketSize:integer;160
a=Preroll:integer;2000
a=RuleNumber:integer;0
a=StartTime:integer;0
a=StreamId:integer;0
a=OpaqueData:buffer;”AAB2dwAHAAEAAB9AAAAfQAABABAAAA==”

2017-07-23 Update:
If you want to deliver your live stream through a streaming server and have your Cisco server pick up an RTSP feed from that streaming server instead, then take a look at the following command. This method is both easier and more reliable than the direct SDP method shown above.

$ ffmpeg -f dshow -i audio="Microphone (HD Pro Webcam C920)" -acodec pcm_mulaw -b:a 64k -ac 1 -ar 8000 -f rtsp rtsp://username:password@[server_address]:[port]/live/audiostream

Dear Netflix

As long as you are busy re-encoding your content, can you please fix Star Trek: Voyager? It makes my eyes bleed.

The method that I use when converting content is to never trust what the content provider has told you, but instead to analyze every piece of content that is to be converted, even if it is from the same series, from the same publisher, using the same media type.

I use the command line version of MediaInfo and some output from FFmpeg to get things done. I prefer Bash shell scripting as it is what I am most familiar with.

Get your information from MediaInfo:
mediainfo "$inputfile" > info.tmp

Capture the frames per second from the video:
fps=$(cat info.tmp | grep Frame | grep [Rr]ate | grep -v [Mm]ode | cut -d ":" -f2 | tr -d " fps" | head -1)

If the FPS reports as either empty or Variable then force a framerate that works: if I know that the content came from Europe I force it to 25fps, whereas if it came from the US I force it to 23.976fps. You may need to review your content post encode to make sure you did not introduce telecine judder. A sketch of that fallback is shown below.
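A minimal sketch of that override, using the $fps variable captured above; the defaults are the ones I just mentioned, not universal constants:

if [ "$fps" == "" ] || [ "$fps" == "Variable" ]; then
fps="23.976"   # use 25 instead for European (PAL) sources
fi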

Check to see if your content is interlaced, progressive, or uses the MBAFF method of interlacing:
scan=$(cat info.tmp | grep "\(Interlaced\|Progressive\|MBAFF\)" | head -1 | cut -d ":" -f2 | tr -d " ")

If the content is in an MPEG Program Stream container, reports as 29.970fps, and does not announce whether it is interlaced, progressive, or MBAFF, then the content is actually 23.976fps using soft telecine. Here $mpegps holds the container format, captured from MediaInfo's General section in the same way as the values above:
if [ "$fps" == "29.970" ] && [ "$scan" == "" ] && [ "$mpegps" == "MPEG-PS" ] ;
then
fps="23.976"
scan="Progressive"
fi

The odds are high that your media group received content from your provider in an MPEG-PS VOB container and did not look for interlaced content.

Detecting everything mentioned above ensures that fewer frames are encoded, eliminates telecine judder, saves you from encoding interlacing artifacts, allows for a more optimized bit per pixel density, and helps provide higher video quality for the customer.

In addition, order of operations can be important when encoding content. I always deinterlace first if necessary, then force the detected or overridden FPS, crop the content, resize or scale the content, and finally rotate the content. An example from my script is as follows:
ffmpeg -fpsprobesize $gop -i $inputfile -pix_fmt yuv420p $totaltime -vsync 1 -sn -vcodec libx264 -map $vtrack $scan -r $fps -vf “crop=$w1:$h1:$x1:$y1,scale=$fixedwidth:$fixedheight$fixrotation” -threads 0 -b:v:$vtrack $averagevideobitrate -bufsize $buffer -maxrate $maximumvideobitrate -minrate $minimumvideobitrate -strict experimental -acodec aac -map $audio -b:a:$audio $audiobitrate -ac 2 -ar $audiofrequency -af “aresample=async=1:min_hard_comp=0.100000:first_pts=0” -pass 1 -preset $newpreset -profile:v $defaultprofile -qmin 0 -qmax 63 -keyint_min $minkeyframe -g $gop $newtune -x264opts no-scenecut -map_metadata -1 -f mp4 -y $outputfile

Now go forth and encode.

Intelligent video encoding

I have been saying this for a few years now. Netflix has finally gotten on the bandwagon.

I worked at RealNetworks for over six years and became their onsite encoding expert for creating H.264 video with AAC audio in an MP4 container using FFmpeg after just three years. Our group was laid off when their Helix Streaming Media Server, which I supported, was discontinued.

I have converted most of my Blu-ray and DVD content, including one HD-DVD, to MP4 files and have found, just as the article says, that not all video is created equal. Why? Movement is expensive. In addition, grain is movement. Please do not get me started on encoding artifacts in the source media. NeatVideo, if you know how to use it, can help with both grain and encoding artifacts without having to resort to sharpening. The use of sharpening is, in my opinion, the refuge of the inept unless the source is so low quality that it looks like a blur, and even then it should be used sparingly and only if absolutely needed. If you want a challenge, run NeatVideo against the movie Fight Club.

As an example, encode for yourself both a high action video and a low action video with x264 at CRF 21 using the veryfast preset and the baseline profile. When you are finished, use MediaInfo to look at the bit per pixel density (BPP) of the output video. The action video will have a much higher bitrate and BPP density than the low action video. As such you should target what the video requires.

My procedure for finding a decent bitrate is as follows:

1) Encode the video using the veryfast preset and the baseline profile to grab what the bit per pixel density is at CRF 21.

2) Perform a two pass encode with the medium preset and the high444 profile, targeting the BPP value found in step one. You will see that both the initial CRF encoded video and the two pass video are about the same size and, obviously, have the same BPP density. The output “CRF” value, as reported by FFmpeg, will be about 19.4 due to compression. I have covered this before. Don’t take my word for it; use the Moscow University Video Quality Measurement Tool. A sketch of both steps is shown below.
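Here is a minimal sketch of the two step procedure, assuming an input named in.mp4; the 1500k bitrate is a placeholder for whatever your measured BPP works out to at your resolution and framerate, not a recommendation:

ffmpeg -i in.mp4 -an -vcodec libx264 -crf 21 -preset veryfast -profile:v baseline -y crf21.mp4
mediainfo crf21.mp4 | grep "Bits/(Pixel\*Frame)"
ffmpeg -i in.mp4 -an -vcodec libx264 -b:v 1500k -preset medium -profile:v high444 -pass 1 -f mp4 -y /dev/null
ffmpeg -i in.mp4 -an -vcodec libx264 -b:v 1500k -preset medium -profile:v high444 -pass 2 -y twopass.mp4

On Windows replace /dev/null with NUL.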

The reason for the medium preset is that mobile devices and other hardware decoders (Roku, Apple TV, etc…) all have limitations on playing H.264 video content that has more than three reference frames. To date I have found no device that cannot handle the high444 profile, which prioritizes the luma (Y’) channel over chrominance (Cb, Cr), even though manufacturers state that they only support the main profile with CABAC. The only devices that I have not tested are the old school BlackBerry phones.

On a side note, use the information that MediaInfo puts out as well as what FFmpeg puts out to find out what the width, height, and FPS of the source is as well as what the source audio frequency and bitrate are. If you know what you are doing you can detect telecine content in MPEG-PS containers (VOB) so that you do not duplicate frames when encoding. In addition, forcing the frame rate to what the source media says it is will keep the framerate solid. Advanced class is performing automatic crop detection (beware “The Right Stuff” and “Tron Legacy”), and audio normalization if your hearing is poor like mine is.

How will this affect your production workflow? If you decide to implement it, not much. All you need to do is perform a test encode to find the BPP density and then have your MBR content encoded to the same BPP density. If you are converting a series, do a test convert of a few episodes and find the right bitrate for you.

Extreme encoding settings, quality, and size.

I’ve been meaning to do some output quality testing and have finally gotten around to it. Because I like my content to be streamable via RTSP, RTMP, and HTTP (HLS or DASH), I encode to bitrate, as RTSP can be sensitive to bitrate fluctuation. I do my testing using CRF 21 for consistency of output and speed. For this testing I used the MSU Video Quality Measurement Tool, which will put out bad frames, a spreadsheet, and even a video showing you the differences between one video and another.

My typical encode is done using the medium preset, which uses a distance of three reference frames and is therefore compatible with hardware decoders.[1] I also encode using the high444 profile, which, while technically unsupported by mobile phones, does in fact work. To date I have had zero problems with those settings when I tested multiple handsets from multiple manufacturers during my time at RealNetworks supporting their former product Helix Server.

When I am going to encode to bitrate, I do a first pass using CRF so that I can get a better idea of the bit per pixel density, but I encode it using the veryfast preset and the baseline profile. When I perform my two pass encode I target the bit per pixel density that MediaInfo reported for the CRF file. If you look at the first pass of a two pass encode, it will be smaller than the second pass, as the second pass puts back the bits lost to the compression used on the first pass. This behavior got me thinking.

The tests that I just ran were:
1) Encode using the veryfast preset and the baseline profile at CRF 21.

2) Encode using the medium preset and the high444 profile at CRF 21 with the following options:
-x264opts b-adapt=2:direct=auto:me=tesa:subme=11:aq-mode=2:aq-strength=1.0:fast_pskip=0:rc_lookahead=72:partitions=p8x8:trellis=2:weightp=2:merange=64:bframes=8
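Spelled out as full commands, the two test encodes might look like this; a sketch, with in.mp4 standing in for my one minute sample:

ffmpeg -i in.mp4 -an -vcodec libx264 -crf 21 -preset veryfast -profile:v baseline -y test-baseline.mp4
ffmpeg -i in.mp4 -an -vcodec libx264 -crf 21 -preset medium -profile:v high444 -x264opts b-adapt=2:direct=auto:me=tesa:subme=11:aq-mode=2:aq-strength=1.0:fast_pskip=0:rc_lookahead=72:partitions=p8x8:trellis=2:weightp=2:merange=64:bframes=8 -y test-high444.mp4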

I took the files and then remuxed them into AVI as MSUVQMT was having issues with the MP4 container.

ffmpeg -i input.mp4 -vcodec copy -an output.avi

Note that the input file framerate was 23.976fps and the output framerate became 47.952fps. Did this invalidate my test?[2] Possibly, but MediaInfo only looks at a small part of the video stream. If your video mixes 29.970fps interlaced content with 23.976fps content then it will know nothing of the 23.976fps content later in the video stream. Yes, I have seen this issue happen with several MPEG-PS files.

After remuxing the files and running them through MSUVQMT I was not surprised to see that there were no quality differences between the baseline file and the high444 file. The SSIM reported in the spreadsheet from MSUVQMT was “AVG: 0.97723”, which I feel is in line with entropy encoding, and the only other difference was the size of the video stream.

The baseline file, as reported by MediaInfo, is as follows:
----------------------------------------
Video
ID                                       : 0
Format                                   : AVC
Format/Info                              : Advanced Video Codec
Format profile                           : Baseline@L3.0
Format settings, CABAC                   : No
Format settings, ReFrames                : 1 frame
Codec ID                                 : avc1
Duration                                 : 1mn 0s
Bit rate                                 : 1 459 Kbps
Width                                    : 854 pixels
Height                                   : 322 pixels
Display aspect ratio                     : 2.35:1
Frame rate mode                          : Variable
Frame rate                               : 47.952 fps
Color space                              : YUV
Chroma subsampling                       : 4:2:0
Bit depth                                : 8 bits
Scan type                                : Progressive
Bits/(Pixel*Frame)                       : 0.111
Stream size                              : 10.4 MiB (99%)
Writing library                          : x264 core 142 r2479 dd79a61
Encoding settings                        : cabac=0 / ref=1 / deblock=1:-1:-1 / analyse=0x1:0x111 / me=hex / subme=2 / psy=1 / psy_rd=1.00:0.15 / mixed_ref=0 / me_range=16 / chroma_me=1 / trellis=0 / 8x8dct=0 / cqm=0 / deadzone=21,11 / fast_pskip=1 / chroma_qp_offset=0 / threads=8 / lookahead_threads=2 / sliced_threads=0 / nr=0 / decimate=1 / interlaced=0 / bluray_compat=0 / constrained_intra=0 / bframes=0 / weightp=0 / keyint=120 / keyint_min=12 / scenecut=40 / intra_refresh=0 / rc_lookahead=10 / rc=crf / mbtree=1 / crf=21.0 / qcomp=0.60 / qpmin=0 / qpmax=69 / qpstep=4 / ip_ratio=1.40 / aq=1:1.00
----------------------------------------

The high444 profile with the extra x264 options looks like this:

----------------------------------------
Video
ID                                       : 0
Format                                   : AVC
Format/Info                              : Advanced Video Codec
Format profile                           : High@L3.0
Format settings, CABAC                   : Yes
Format settings, ReFrames                : 4 frames
Codec ID                                 : avc1
Duration                                 : 59s 997ms
Bit rate                                 : 1 364 Kbps
Width                                    : 854 pixels
Height                                   : 322 pixels
Display aspect ratio                     : 2.35:1
Frame rate mode                          : Variable
Frame rate                               : 47.952 fps
Color space                              : YUV
Chroma subsampling                       : 4:2:0
Bit depth                                : 8 bits
Scan type                                : Progressive
Bits/(Pixel*Frame)                       : 0.103
Stream size                              : 9.76 MiB (99%)
Writing library                          : x264 core 142 r2479 dd79a61
Encoding settings                        : cabac=1 / ref=3 / deblock=1:-1:-1 / analyse=0x3:0x10 / me=tesa / subme=11 / psy=1 / psy_rd=1.00:0.15 / mixed_ref=1 / me_range=64 / chroma_me=1 / trellis=2 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=0 / chroma_qp_offset=-3 / threads=8 / lookahead_threads=1 / sliced_threads=0 / nr=0 / decimate=1 / interlaced=0 / bluray_compat=0 / constrained_intra=0 / bframes=8 / b_pyramid=2 / b_adapt=2 / b_bias=0 / direct=3 / weightb=1 / open_gop=0 / weightp=2 / keyint=120 / keyint_min=12 / scenecut=40 / intra_refresh=0 / rc_lookahead=72 / rc=crf / mbtree=1 / crf=21.0 / qcomp=0.60 / qpmin=0 / qpmax=69 / qpstep=4 / ip_ratio=1.40 / aq=2:1.00
----------------------------------------

Note that the bit per pixel density is lower on the more compressed version. This is expected because the video stream is smaller due to higher compression. As noted above, with two pass encoding the bits are put back and your bit per pixel density returns to what is expected.

What did I learn here? Video quality is directly affected by bitrate, while compression merely makes the video stream smaller with no visible increase in quality. With two pass encoding to the target bit per pixel density the quality will be higher at the same bitrate, though the results may still differ between methods. For example, I converted the fight scene from They Live many years ago using two similar bitrate based methods and they did not come out the same. You can see that video on YouTube here.

The question we are left with: how much time do you really want to spend making the file just a bit smaller at the exact same quality? Me, not that much time.

1) I will always remember that three reference frames are the maximum distance by remembering a scene in Monty Python and the Holy Grail.

…And Saint Attila raised the hand grenade up on high, saying, “O LORD, bless this Thy hand grenade that with it Thou mayest blow Thine enemies to tiny bits, in Thy mercy.” And the LORD did grin and the people did feast upon the lambs and sloths and carp and anchovies and orangutans and breakfast cereals, and fruit bats and large chu… [At this point, the friar is urged by Brother Maynard to “skip a bit, brother”]… And the LORD spake, saying, “First shalt thou take out the Holy Pin, then shalt thou count to three, no more, no less. Three shall be the number thou shalt count, and the number of the counting shall be three. Four shalt thou not count, neither count thou two, excepting that thou then proceed to three. Five is right out. Once the number three, being the third number, be reached, then lobbest thou thy Holy Hand Grenade of Antioch towards thy foe, who being naughty in My sight, shall snuff it.”

2) 23.976 * 2 == 47.952
ffprobe.exe sw4-gout-test-crf-baseline.avi
ffprobe version N-67742-g3f07dd6 Copyright (c) 2007-2014 the FFmpeg developers
built on Nov 16 2014 22:10:05 with gcc 4.9.2 (GCC)
configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-zlib
libavutil      54. 13.100 / 54. 13.100
libavcodec     56. 12.101 / 56. 12.101
libavformat    56. 13.100 / 56. 13.100
libavdevice    56.  3.100 / 56.  3.100
libavfilter     5.  2.103 /  5.  2.103
libswscale      3.  1.101 /  3.  1.101
libswresample   1.  1.100 /  1.  1.100
libpostproc    53.  3.100 / 53.  3.100
Input #0, avi, from 'sw4-gout-test-crf-baseline.avi':
Metadata:
encoder         : Lavf56.13.100
Duration: 00:01:00.02, start: 0.000000, bitrate: 1469 kb/s
Stream #0:0: Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 854x322 [SAR 920:1037 DAR 40:17], 1459 kb/s, 47.95 fps, 23.98 tbr, 47.95 tbn, 47.95 tbc

Star Wars Episode 4

I have three versions of Star Wars Episode 4 and four images in each of the screenshots below, which should provide an overview of the challenges involved in performing color correction. Clockwise from the top left:

1) RAW VOB file from the Star Wars Ep 4 “GOUT” edition.

2) GOUT modified to MP4 in Sony Vegas with no filters. Vegas hates MPEG-PS audio tracks and that VOB reports time incorrectly.

3) Despecialized version 2.5 by Harmy.

4) Editdroid’s version from the 1993 Laserdisc in VOB format.

You will note that the GOUT version in Vegas looks washed out. This is caused by a levels issue stemming from the NTSC video range (16-235) versus the full RGB range (0-255). I can change the levels in Vegas with one of the built in presets to make it look exactly the same as it does outside of Vegas. A sketch of the equivalent fix in FFmpeg follows.
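Incidentally, if you wanted to perform that same 16-235 to 0-255 range expansion with FFmpeg rather than Vegas, the scale filter can remap levels; a sketch, with the filenames being placeholders:

ffmpeg -i gout.vob -vf "scale=in_range=tv:out_range=pc" -vcodec libx264 -crf 21 -y gout-fullrange.mp4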

The color palette used in GOUT is the same as the Laserdisc because they both came from the same master. I would love to get my hands on the 1985 Laserdisc release, but that thing is beyond rare.

The Despecialized edition suffers from the f’ing Hollywood look, with teal and orange slathered all over it as well as oversaturated colors. To fix issues like that I have to skew cyan towards blue, which addresses some of it. Desaturating yellow and red helps fix the New Jersey fake tan look seen in most movies. Green is occasionally oversaturated. Couple all of that with lightness adjustments for cyan, yellow, magenta, red, green, and blue, and things begin to get complicated quickly. Wait, levels are sometimes off as well. Add that to the mix.

I will be using the Editdroid version as my source and I am hoping to alter the color palette to be more in line with GOUT. Preliminary results at this point do not look promising at all. It looks like Harmy used the AAV ColorLab plugin, which I have, to modify colors. Sadly that plugin seems to have the side effect, at least on my machine, of screwing up some shades of orange, like traffic cones and the orange Pinto in The Blues Brothers. My monitor is color balanced using a Spyder3Pro.

From the research that I have done, there is no longer any such thing as a “correct” version of Star Wars that a mere mortal like me can get their hands on. My hope is that Disney will fix the color issues in any reissues it may put out, but that is a pipe dream at best.

http://en.wikipedia.org/wiki/List_of_changes_in_Star_Wars_re-releases

[Screenshot gallery: sixteen four-up comparison frames (GOUT / GOUT in Vegas / Despecialized / Editdroid), 001 through 016.]