My method when converting content is to never trust what the content provider tells you. Instead, analyze every piece of content that is to be converted, even if it comes from the same series, from the same publisher, using the same media type.
I use the command line version of MediaInfo and some output from FFmpeg to get things done. I prefer Bash shell scripting as it is what I am most familiar with.
Get your information from MediaInfo:
mediainfo "$inputfile" > info.tmp
Capture the frames per second from the video:
fps=$(cat info.tmp | grep Frame | grep "[Rr]ate" | grep -v "[Mm]ode" | cut -d ":" -f2 | tr -d " fps" | head -1)
If the FPS reports as either empty or Variable then force a framerate that works. If I know that the content came from Europe I force it to 25fps whereas if it came from the US I force it to 23.976fps. You may need to review your content post encode to make sure you did not introduce telecine judder.
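That fallback can be sketched in shell. Note that `$region` is a hypothetical variable added here for illustration; the original script does not show how the content's origin is determined:

```shell
fps="Variable"   # example value, as MediaInfo might report it
region="EU"      # hypothetical: set this from what you know about the source

# If MediaInfo reported no rate at all, or a variable rate, force a sane one.
if [ -z "$fps" ] || [ "$fps" = "Variable" ]; then
    if [ "$region" = "EU" ]; then
        fps="25"        # PAL-derived content
    else
        fps="23.976"    # NTSC film-derived content
    fi
fi
```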
Check to see if your content is interlaced, progressive, or uses the MBAFF method of interlacing:
scan=$(cat info.tmp | grep "\(Interlaced\|Progressive\|MBAFF\)" | head -1 | cut -d ":" -f2 | tr -d " ")
If the content is in an MPEG Program Stream container, reports as 29.970fps, and does not announce whether it is interlaced, progressive, or MBAFF, then the content is actually 23.976fps using soft telecine.
if [ "$fps" == "29.970" ] && [ "$scan" == "" ] && [ "$mpegps" == "MPEG-PS" ]; then
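A completed sketch of that conditional follows; the body (forcing the film rate) is my reading of the paragraph above, since soft-telecined content carries true 23.976fps frames and needs no deinterlacing:

```shell
# Example values standing in for the detection performed above.
fps="29.970"; scan=""; mpegps="MPEG-PS"

# Soft telecine: the container advertises 29.970fps but the frames are film rate.
if [ "$fps" = "29.970" ] && [ "$scan" = "" ] && [ "$mpegps" = "MPEG-PS" ]; then
    fps="23.976"   # encode at the true film rate
fi
```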
The odds are high that your media group received content from your provider in an MPEG-PS VOB container and did not look for interlaced content.
Detecting everything mentioned above ensures that fewer frames are encoded, eliminates telecine judder, avoids encoding interlacing artifacts, allows a more optimized bits-per-pixel density, and ultimately delivers higher video quality to the customer.
In addition, the order of operations matters when encoding content. I always deinterlace the content if necessary before I force the detected or overridden FPS, then crop, then resize or scale, and finally rotate. An example from my script is as follows:
ffmpeg -fpsprobesize $gop -i $inputfile -pix_fmt yuv420p $totaltime -vsync 1 -sn -vcodec libx264 -map $vtrack $scan -r $fps -vf "crop=$w1:$h1:$x1:$y1,scale=$fixedwidth:$fixedheight$fixrotation" -threads 0 -b:v:$vtrack $averagevideobitrate -bufsize $buffer -maxrate $maximumvideobitrate -minrate $minimumvideobitrate -strict experimental -acodec aac -map $audio -b:a:$audio $audiobitrate -ac 2 -ar $audiofrequency -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" -pass 1 -preset $newpreset -profile:v $defaultprofile -qmin 0 -qmax 63 -keyint_min $minkeyframe -g $gop $newtune -x264opts no-scenecut -map_metadata -1 -f mp4 -y $outputfile
Now go forth and encode.