Some useful FFmpeg commands
- FFmpeg intro
- FFprobe file info
- FFplay video viewing
- FFmpeg video editing
- Convert MKV to MP4
- Convert MP4 to M4A (audio only mp4)
- Edit metadata (add chapters)
- Add thumbnail
- Add subtitles
- Extract frames
- Create video from frames
- crop video
- scale video
- compress video
- cut video
- loop video
- Reverse video and/or Audio
- Concatenate multiple videos into one
- Create/download video with m3u8 playlist
- find silence parts in video
- Libavfilter virtual input device (lavfi filtergraph)
- sierpinski (pan)
- mandelbrot (zoom)
- (elementary) cellular automaton
- life (Cellular automaton)
- mptestsrc (animated test patterns)
- empty (input)
- color (input)
- smptebars (input)
- smptehdbars (input)
- testsrc (input)
- testsrc2 (input)
- rgbtestsrc (input)
- yuvtestsrc (input)
- colorspectrum (input)
- colorchart (input)
- allrgb (input)
- allyuv (input)
Important
The order of (some) parameters/flags matters: whether they appear before or after a given input or output changes what they apply to (FFmpeg supports multiple input files and multiple outputs in one command).
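For example, the same flag can act as an input option or an output option depending on where it is placed (a minimal sketch; INPUT.mp4/OUTPUT.mp4 are placeholders):
# -ss before -i is an input option: seek (fast) before decoding starts
ffmpeg -ss 10 -i INPUT.mp4 -c copy OUTPUT.mp4
# -ss after -i is an output option: decode and discard everything up to 0:00:10
ffmpeg -i INPUT.mp4 -ss 10 -c copy OUTPUT.mp4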
- Full FFmpeg documentation @ https://ffmpeg.org/ffmpeg-all.html (archive)
- Full FFplay documentation @ https://ffmpeg.org/ffplay-all.html (archive)
- Full FFprobe documentation @ https://ffmpeg.org/ffprobe-all.html (archive)
Get FFmpeg from https://ffmpeg.org/download.html
I currently use the FFmpeg builds from https://www.gyan.dev/ffmpeg/builds/ (for Windows 7+)
under the release builds section the file ffmpeg-release-full.7z
or directly https://www.gyan.dev/ffmpeg/builds/ffmpeg-release-full.7z
(don't forget 7-Zip for unpacking the .7z file)
see Generic options
ffmpeg -h # CLI help
ffmpeg -L # FFmpeg licence
ffmpeg -version # FFmpeg build version
ffmpeg -buildconf # FFmpeg build configuration
ffmpeg -formats # files (and devices) de-/muxing support
ffmpeg -pix_fmts # pixel formats (in/out/hardware acceleration/palette/bitstream)
ffmpeg -protocols # protocols (in/out) like: file, https, sftp
ffmpeg -codecs # video/audio/subtitle/data en-/decoders (also shows if lossy or lossless)
ffmpeg -filters # video/audio (libavfilter) filters like: avgblur V->V (video in; video out)
ffmpeg -bsfs # bitstream filters like: null, h264_metadata, hevc_metadata
ffmpeg -dispositions # how a stream is added to an output file; for example attached_pic is the file thumbnail/cover art for video files like MP4 (visible in file explorer)
ffmpeg -colors # color names with their hex value; for example: Lime #00ff00
ffmpeg -hide_banner # does not log version/copyright/buildconfig
ffmpeg -v level+warning # only log warnings and worse and shows level: "[warning] ..."
# debug > verbose > info (default) > warning > error > fatal > quiet (nothing)
# banner is info so -hide_banner is not needed with warning or less
ffmpeg -stats # always show stats (en-/decoding progress), even when log level is less than info

CLI keyboard hotkeys mid-process:
| key | function |
|---|---|
| ? | show this table |
| + | increase verbosity (logging level) |
| - | decrease verbosity (logging level) |
| q | quit |
| c | Send command to first matching filter supporting it |
| C | Send/Queue command to all matching filters |
| D | cycle through available debug modes |
| h | dump packets/hex (press repeatedly to cycle through the 3 states) |
| s | Show QP histogram |
I didn't find official documentation for these...
Output of the following commands goes to stdout (standard output) unless -o OUTPUT.log is specified
# list all audio streams and their language-tag (if set) in compact format (separator ":", no field-names, and omit section name)
ffprobe -v level+warning -i INPUT.mp4 -show_entries stream=index:stream_tags=language -select_streams a -of compact=s=\::nk=1:p=0
# [stream_index]:[tag_language]
# 1:eng

Note
FFprobe uses 3 letter ISO 639-2 language codes
# list all streams with index, codec_name, codec_type, and language-tag (if set) in compact format (separator ":", no field-names, and omit section name)
ffprobe -v level+warning -i INPUT.mp4 -show_entries stream=index,codec_type,codec_name:stream_tags=language -of compact=s=\::nk=1:p=0
# [stream_index]:[codec_name]:[codec_type]:[tag_language]
# 0:h264:video:und (undetermined)
# 1:aac:audio:eng
# 2:png:video (DISPOSITION:attached_pic ie the file thumbnail)

see how to Add thumbnail
# show all chapters (if any) of this file with id, start_time, end_time, and title (time in seconds) in compact format (separator ":", no field-names, and omit section name)
ffprobe -v level+warning -i INPUT.mp4 -show_entries chapter=id,start_time,end_time:chapter_tags=title -of compact=s=\::nk=1:p=0
# [chapter_id]:[start_time]:[end_time]:[chapter_title]
# 0:0.000000:10.000000:First 10 seconds
# 1:60.000000:120.000000:Second minute

see how to Edit metadata (add chapters)
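Similarly, a single value can be pulled out on its own, for example only the container duration in seconds (a minimal sketch; the default writer is used with wrappers and keys turned off):
ffprobe -v level+warning -i INPUT.mp4 -show_entries format=duration -of default=noprint_wrappers=1:nokey=1
# 123.456000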
# most relevant metadata in JSON format (specify output file instead of stdout)
ffprobe -v level+warning -i INPUT.mp4 -o OUTPUT.json -show_entries stream=index,codec_type,codec_name,width,height,display_aspect_ratio,avg_frame_rate,start_time,duration,bit_rate,sample_rate,channels,channel_layout:stream_tags=language,title:stream_disposition:chapter=id,start_time,end_time:chapter_tags=title:format=filename,duration,size,bit_rate:format_tags=title,artist,date,comment -of json
# see JSON structure with type info and some extra notes below

Click to show -show_entries value with added whitespace for readability
stream = index, codec_type, codec_name, width, height, display_aspect_ratio, avg_frame_rate,
start_time, duration, bit_rate, sample_rate, channels, channel_layout:
stream_tags = language , title:
stream_disposition:
chapter = id, start_time, end_time:
chapter_tags = title:
format = filename, duration, size, bit_rate:
format_tags = title, artist, date, comment
Click to show JSON format/type info (only for above command)
Not a general list of keys, just those that are output by the above command
type FFprobeJSON = {
programs: never[]; // (all via -show_programs)
stream_groups: never[]; // (all via -show_stream_groups)
streams: { // (all via -show_streams)
index: number;
codec_name: string; // "h264"/"png"/"aac"/...
codec_type: string; // "video"/"audio"/...
width: number;
height: number;
display_aspect_ratio: string; // usually "16:9"
avg_frame_rate: string;
start_time: string; // time in seconds
duration: string; // time in seconds
bit_rate: string;
disposition: { // only one should be 1 (if any)
default: 0|1;
dub: 0|1;
original: 0|1;
comment: 0|1;
lyrics: 0|1;
karaoke: 0|1;
forced: 0|1;
hearing_impaired: 0|1;
visual_impaired: 0|1;
clean_effects: 0|1;
attached_pic: 0|1; // for cover-images/file-thumbnails
timed_thumbnails: 0|1;
non_diegetic: 0|1;
captions: 0|1;
descriptions: 0|1;
metadata: 0|1;
dependent: 0|1;
still_image: 0|1;
multilayer: 0|1;
};
tags: {
language: string; // 3 letter (ISO 639-2) language tag
title: string;
};
sample_rate: string;
channels: number; // usually 2 for audio streams (left right)
channel_layout: string; // usually stereo for audio streams
}[];
chapters: { // (all via -show_chapters)
id: number;
start_time: string; // time in seconds
end_time: string; // time in seconds
tags: {
title: string; // name of chapter
};
}[];
format: { // (all via -show_format)
filename: string; // exact path/URL given to FFprobe
duration: string; // entire file duration in seconds
size: string; // file size in bytes
bit_rate: string;
tags: {
title: string;
artist: string;
date: string;
comment: string;
};
};
};

Also, custom metadata added via -metadata field=value
can be read with -show_entries format_tags=field (then {}.format.tags.field (string) in JSON) or within -show_format as TAG:field=value (if no output format was specified; value may span multiple lines)
Keep in mind that whitespace will not be ignored, so -metadata "field = data" has key "field " and value " data"
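A minimal sketch of that round trip (the field name mykey and its value are arbitrary placeholders; for the MP4/MOV muxer, -movflags use_metadata_tags is needed so arbitrary keys are kept instead of dropped):
# write a custom metadata field while copying all streams
ffmpeg -v level+warning -stats -i INPUT.mp4 -c copy -movflags use_metadata_tags -metadata mykey="some value" OUTPUT.mp4
# read it back
ffprobe -v level+warning -i OUTPUT.mp4 -show_entries format_tags=mykey -of default=noprint_wrappers=1:nokey=1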
- -v documentation
- -i documentation
- -o documentation
- -show_entries documentation
- -select_streams documentation
- -of documentation
- -show_programs documentation
- -show_stream_groups documentation
- -show_streams documentation
- -show_chapters documentation
- -show_format documentation
ffplay -v level+warning -stats -loop -1 INPUT.mp4

A window will show the video looping infinitely (see FFplay video controls)
# random simulation, window size 1280*960 (4 times 320*240, which is the default size)
ffplay -v level+warning -stats -f lavfi life=mold=25:life_color=\#00ff00:death_color=\#aa0000,scale=4*iw:-1:flags=neighbor

A window will show the simulation infinitely (see FFplay video controls)
Note
seeking is not available, only pause/resume or frame-by-frame playback
- -v documentation
- -stats documentation
- see life (cellular automaton) section below
- scale filter documentation
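FFplay can also be told to stop on its own, which is handy for a quick preview (a minimal sketch; -autoexit closes the window when playback ends and -t limits playback to 10 seconds):
ffplay -v level+warning -stats -autoexit -t 10 INPUT.mp4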
| Key | Action |
|---|---|
| Q / ESC | Quit |
| F or left mouse double-click | Toggle full screen |
| P / SPACE | Pause/Resume |
| S | Step to next frame (and pause) frame-by-frame |
| ← / → | Seek back-/forward 10 seconds |
| ↓ / ↑ | Seek back-/forward 1 minute |
| PAGE DOWN / PAGE UP | Seek to the previous/next chapter (or 10 minutes) |
| right mouse click | Seek to percentage by click position (of window width) |
| 9 / 0 or / / * | De-/increase volume |
| M | Toggle mute |
| A | Cycle audio channel (current program) |
| T | Cycle subtitle channel (current program) |
| C | Cycle program |
| V | Cycle video channel |
| W | Cycle video filter/show modes |
- Convert MKV to MP4
- Convert MP4 to M4A (audio only mp4)
- Edit metadata (add chapters)
- Add thumbnail
- Add subtitles
- Extract frames
- Create video from frames
- crop video
- scale video
- compress video
- cut video
- loop video
- Reverse video and/or Audio
- Concatenate multiple videos into one
- Create/download video with m3u8 playlist
- find silence parts in video
Scroll TOP
the MKV video file format is recommended when streaming or recording (via OBS) since it can be easily recovered
# Audio codec already is AAC, so it can be copied to save some time
# Also use some compression to shrink the file size a bit
ffmpeg -v level+warning -stats -i INPUT.mkv -c:a copy -c:v libx264 -crf 12 OUTPUT.mp4

- -v documentation
- -stats documentation
- -c documentation
- -crf documentation (the best description is under libaom-AV1 but it's also in other encoders like MPEG-4)
- also see this guide for CRF with libx264
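If the video codec is already MP4-compatible (e.g. H.264), a plain remux without re-encoding is much faster (a minimal sketch):
ffmpeg -v level+warning -stats -i INPUT.mkv -c copy OUTPUT.mp4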
only include audio and subtitles (if present)
ffmpeg -v level+warning -stats -i INPUT.mp4 -c copy -map 0:a -map 0:s? OUTPUT.m4a

or only exclude video
ffmpeg -v level+warning -stats -i INPUT.mp4 -c copy -map 0 -map -0:v OUTPUT.m4a
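To get a plain audio file in another format instead, the audio can be re-encoded (a minimal sketch, assuming the build includes libmp3lame; -vn drops the video and -q:a 2 is a common VBR quality setting):
ffmpeg -v level+warning -stats -i INPUT.mp4 -vn -c:a libmp3lame -q:a 2 OUTPUT.mp3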
export all metadata to a file

ffmpeg -v level+warning -stats -i INPUT.mp4 -f ffmetadata FFMETADATAFILE.txt

it looks something like this
;FFMETADATA1
# empty lines or lines starting with ; or # will be ignored
# whitespace will not be ignored so "title = A" would be interpreted as key "title " and value " A"
title=Video Title
artist=Artist Name
# newlines and other special characters like = ; # \ must be escaped with a \
description=Text\
Line two\
\
\
Line five\
Line with Û̕͝͡n̊̑̓̊i͚͚ͬ́c̗͕̈́̀o̵̯ͣ͊ḑ̴̱̐ḛ̯̓̒
# then adding chapters is very simple | order does not matter (chapters must not overlap, of course), so the easiest is to append them to the end of the file
[CHAPTER]
# fractions of a second so 1/1000 says the following START and END are in milliseconds
TIMEBASE=1/1000
# start and end might change a bit when reinserting (snaps to nearest frame when video stream is copied and not encoded)
START=0
END=10000
title=0 to 10sec of the video
[CHAPTER]
TIMEBASE=1/1000
START=10000
END=20000
title=10sec to 20sec of the video

then to reinsert the edited metadata file
ffmpeg -v level+warning -stats -i INPUT.mp4 -i FFMETADATAFILE.txt -map_metadata 1 -c copy OUTPUT.mp4

- -v documentation
- -stats documentation
- full metadata documentation
- You might also want to look at the -metadata documentation
ffmpeg -v level+warning -stats -i INPUT.mp4 -i IMAGE.png -map 0 -map 1 -c copy -c:v:1 png -disposition:v:1 attached_pic OUTPUT.mp4

- -v documentation
- -stats documentation
- -c documentation
- -map documentation
- -disposition documentation
- How to add an embedded cover/thumbnail (within the -disposition documentation)
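If no separate image is at hand, a frame from the video itself can serve as IMAGE.png for the command above (a minimal sketch; 0:00:05 is an arbitrary timestamp):
ffmpeg -v level+warning -stats -ss 5 -i INPUT.mp4 -frames:v 1 IMAGE.png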
Adding subtitles as an extra stream so they can be turned on and off.
Needs a video player that supports this feature like VLC.
# for mkv output
ffmpeg -v level+warning -stats -i INPUT.mp4 -i SUB.srt -c copy OUTPUT.mkv
# for mp4 output
ffmpeg -v level+warning -stats -i INPUT.mp4 -i SUB.srt -c copy -c:s mov_text OUTPUT.mp4
# ... with multiple subtitle files
ffmpeg -v level+warning -stats -i INPUT.mp4 -i SUB_ENG.srt -i SUB_GER.srt -map 0:0 -map 1:0 -map 2:0 -c copy -c:s mov_text OUTPUT.mp4
# ... with language codes
ffmpeg -v level+warning -stats -i INPUT.mp4 -i SUB_ENG.srt -i SUB_GER.srt -map 0:0 -map 1:0 -map 2:0 -c copy -c:s mov_text -metadata:s:s:0 language=eng -metadata:s:s:1 language=ger OUTPUT.mp4

A subtitle file (.srt) may look like this:
1
00:00:00,000 --> 00:00:03,000
hello there
2
00:00:04,000 --> 00:00:08,000
general kenobi
3
00:00:10,000 --> 00:01:00,000
multi
line
subtitles

displayed like in file → new line in SRT = new line in video
Unicode can be used → tested with z̵̢͎̟͛ͥ̄͑̐͐a̡͈̳̟ͧ̑̓͆̔ͬl̗̠̭͖͓͚ͭ̐͊͊ģ͖͈̍̓ͭͩ̚͝͞ơ̢̞̫̜̞̓͗͊ͪ text and it "pushed" the subtitles off screen (big line height)
Note
Not all subtitle files are supported by FFmpeg.
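Going the other way, an embedded subtitle stream can be extracted back into an .srt file (a minimal sketch; 0:s:0 selects the first subtitle stream and the output extension selects the SubRip format):
ffmpeg -v level+warning -stats -i INPUT.mkv -map 0:s:0 SUB.srt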
# dump ALL frames
ffmpeg -v level+warning -stats -i INPUT.mp4 ./_dump/frame%03d.png
# dump frames with custom frame rate (here 1fps)
ffmpeg -v level+warning -stats -i INPUT.mp4 -r 1 ./_dump/frame%03d.png
# dump custom number of frames
ffmpeg -v level+warning -stats -i INPUT.mp4 -frames:v 3 ./_dump/frame%03d.png
# dump all frames in a timeframe (here from 0:00:02 to 0:00:05)
ffmpeg -v level+warning -stats -ss 2 -i INPUT.mp4 -t 3 ./_dump/frame%03d.png
ffmpeg -v level+warning -stats -ss 2 -i INPUT.mp4 -to 5 ./_dump/frame%03d.png

Important
The directory path must exist, ie, folders must be created beforehand.
- png is a good middle ground (lossless compression, but supports fewer colors)
- jpeg is slower but has good compression (lossy compression)
- bmp is faster but has large file size (uncompressed)
The format frame%03d.png means files will be named: frame001.png, frame002.png, ..., frame050.png, ..., frame1000.png, and so on
Tip
use -start_number 0 (before output) to start at frame000.png
- -v documentation
- -stats documentation
- image file muxer (output)
- -r documentation
- -ss documentation
- -t documentation
- -to documentation
-ss, -t, and -to expect a specific time format
in short [-][HH:]MM:SS[.m...] or [-]S+[.m...][s|ms|us]
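Instead of -r on the output, the fps filter can also be used to thin out the frames, for example one frame every 10 seconds (a minimal sketch):
ffmpeg -v level+warning -stats -i INPUT.mp4 -vf fps=1/10 ./_dump/frame%03d.png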
# uses files INPUT000.png, INPUT001.png, etc to create the mp4 video (with 24fps)
ffmpeg -v level+warning -stats -framerate 24 -i INPUT%03d.png OUTPUT.mp4
# uses every png file that starts with INPUT (at 24fps)
# note: -pattern_type glob may not be available in Windows builds
ffmpeg -v level+warning -stats -framerate 24 -pattern_type glob -i "INPUT*.png" OUTPUT.mp4
# uses every png file (at 24fps)
ffmpeg -v level+warning -stats -framerate 24 -pattern_type glob -i "*.png" OUTPUT.mp4
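To combine such an image sequence with an audio track in one go (a minimal sketch; AUDIO.mp3 is a placeholder, -pix_fmt yuv420p keeps the MP4 widely playable, and -shortest stops the output when the shorter input ends):
ffmpeg -v level+warning -stats -framerate 24 -i INPUT%03d.png -i AUDIO.mp3 -c:v libx264 -pix_fmt yuv420p -c:a aac -shortest OUTPUT.mp4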
ffmpeg -v level+warning -stats -i INPUT.mp4 -vf crop=WIDTH:HEIGHT:POSX:POSY OUTPUT.mp4

- WIDTH - the width of the cropped window
- HEIGHT - the height of the cropped window
- POSX - the X position of the cropped window (can be omitted = auto center)
- POSY - the Y position of the cropped window (can be omitted = auto center)
- all values are in pixels, but there is no "px" after it (or an expression that gets calculated each frame)
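For example, the expressions iw and ih (input width/height) can be used to crop out the centered quarter of the frame, whatever the input resolution (a minimal sketch):
ffmpeg -v level+warning -stats -i INPUT.mp4 -vf "crop=iw/2:ih/2" OUTPUT.mp4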
# scale to WIDTH*HEIGHT (stretches to new resolution)
ffmpeg -v level+warning -stats -i INPUT.mp4 -vf "scale=WIDTH:HEIGHT" OUTPUT.mp4
# preserve aspect ratio with letter-/pillarbox (black bars)
ffmpeg -v level+warning -stats -i INPUT.mp4 -vf "scale=WIDTH:HEIGHT:force_original_aspect_ratio=decrease,pad=WIDTH:HEIGHT:POSX:POSY,setsar=1" OUTPUT.mp4

- WIDTH - the width of the desired output resolution
- HEIGHT - the height of the desired output resolution
- POSX - the X position of the video in the padding region (-1 to auto center; default is 0)
- POSY - the Y position of the video in the padding region (-1 to auto center; default is 0)
- all values are in pixels, but there is no "px" after it (or an expression that gets calculated each frame, like (ow-iw)/2 for POSX to center horizontally)
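A common shortcut is to give only one dimension and let the other follow the input aspect ratio; -2 rounds it to an even number, which most encoders require (a minimal sketch scaling to 720p height):
ffmpeg -v level+warning -stats -i INPUT.mp4 -vf "scale=-2:720" OUTPUT.mp4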
lower CRF values mean better quality (higher bitrate), but also lead to larger file sizes
# for `h.264` values from 18 to 23 are very good
ffmpeg -v level+warning -stats -i INPUT.mp4 -c copy -c:v libx264 -crf 20 OUTPUT.mp4
# for `h.265` values from 24 to 30 are very good
ffmpeg -v level+warning -stats -i INPUT.mp4 -c copy -c:v libx265 -crf 25 OUTPUT.mp4

faster with GPU hardware acceleration / NVIDIA CUDA
# for h.264 → h264_nvenc with NVIDIA CUDA
ffmpeg -v level+warning -stats -hwaccel cuda -hwaccel_output_format cuda -i INPUT.mp4 -c copy -c:v h264_nvenc -fps_mode passthrough -b_ref_mode disabled -preset medium -tune hq -rc vbr -multipass disabled -qp 20 OUTPUT.mp4
# for h.265 → hevc_nvenc with NVIDIA CUDA
ffmpeg -v level+warning -stats -hwaccel cuda -hwaccel_output_format cuda -i INPUT.mp4 -c copy -c:v hevc_nvenc -fps_mode passthrough -b_ref_mode disabled -preset medium -tune hq -rc vbr -multipass disabled -qp 25 OUTPUT.mp4

and even better by specifying the (output) bitrate manually
# for h.264 → h264_nvenc with NVIDIA CUDA
# bitrate ~ 4M with limit 500k - 8M and buffer size 8M (should be larger than input video bitrate)
# and QP 4 (high quality/less lossy compression)
ffmpeg -v level+warning -stats -hwaccel cuda -hwaccel_output_format cuda -i INPUT.mp4 -c copy -c:v h264_nvenc -preset p7 -tune hq -profile:v high -level:v auto -rc vbr -b:v 4M -minrate:v 500k -maxrate:v 8M -bufsize:v 8M -multipass disabled -fps_mode passthrough -b_ref_mode:v disabled -rc-lookahead:v 32 -qp 4 OUTPUT.mp4

Click to show formatted codec arguments from last command above
-c copy
-c:v h264_nvenc
-preset p7
-tune hq
-profile:v high
-level:v auto
-rc vbr
-b:v 4M
-minrate:v 500k
-maxrate:v 8M
-bufsize:v 8M
-multipass disabled
-fps_mode passthrough
-b_ref_mode:v disabled
-rc-lookahead:v 32
-qp 4
Note
also, note that if there is an attached_pic (ie a file thumbnail), the :v stream specifier will include it too, so FFmpeg tries to encode it with NVENC as well, which will give an error; instead, specify further which video stream to encode:
add -map 0 before the -c copy to copy over every stream (of the first input) and its dispositions (thus also the attached_pic), and then use :v:0 to select the first video stream as the one to encode with NVENC
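A minimal sketch of what this note describes (only the first video stream is re-encoded; everything else, including the attached_pic, is copied):
ffmpeg -v level+warning -stats -hwaccel cuda -hwaccel_output_format cuda -i INPUT.mp4 -map 0 -c copy -c:v:0 h264_nvenc -fps_mode passthrough -qp 20 OUTPUT.mp4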
- -v documentation
- -stats documentation
- -map documentation
- -c documentation
- -crf documentation (the best description is under libaom-AV1 but it's also in other encoders like MPEG-4)
  - also see this FFmpeg guide for CRF with libx264
- CUDA ignores -crf, so it's -qp for the hardware acceleration
  - also see this FFmpeg guide for CRF with libx264
  - also "this FFmpeg guide" for hardware acceleration with different OS/hardware (specifically section CUDA (NVENC/NVDEC))
  - and "Using FFmpeg with NVIDIA GPU Hardware Acceleration" on the NVIDIA Documentation Hub
# start at 0:00:01 and stop at 0:00:10
ffmpeg -v level+warning -stats -ss 1 -i INPUT.mp4 -to 10 -c copy OUTPUT.mp4
# start at 0:00:10 and stop at 0:00:20 (0:00:10 duration)
ffmpeg -v level+warning -stats -ss 10 -i INPUT.mp4 -t 10 -c copy OUTPUT.mp4
# caps output to be 0:00:30 max
ffmpeg -v level+warning -stats -i INPUT.mp4 -t 30 -c copy OUTPUT.mp4

timing from -ss, -to, and -t snaps to the nearest keyframe and not the exact timestamp when the stream is copied (like here)
if exact time is needed the video needs to be re-encoded (-c:v libx264 after or instead of -c copy) which obviously takes longer
when -ss is after -i it will decode and discard the video until the time is reached,
when it's before -i like here it will seek into the video without decoding it first (during the seek) so it will be faster.
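A minimal sketch of such a frame-accurate cut with re-encoding (video re-encoded with libx264, audio copied; the CRF value is an arbitrary choice):
ffmpeg -v level+warning -stats -ss 10 -i INPUT.mp4 -t 10 -c:v libx264 -crf 18 -c:a copy OUTPUT.mp4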
- -v documentation
- -stats documentation
- -ss documentation
- -to documentation
- -t documentation
- -c documentation
# loop video infinitely but stop after 0:00:30
ffmpeg -v level+warning -stats -stream_loop -1 -i INPUT.mp4 -t 30 -c copy OUTPUT.mp4
# loop video to length of audio
ffmpeg -v level+warning -stats -stream_loop -1 -i INPUT.mp4 -i INPUT.mp3 -shortest -map 0:v -map 1:a OUTPUT.mp4
# loop audio to length of video
ffmpeg -v level+warning -stats -i INPUT.mp4 -stream_loop -1 -i INPUT.mp3 -shortest -map 0:v -map 1:a OUTPUT.mp4

if exact timing is needed, it is better to re-encode the video (-c:v libx264 after or instead of -c copy)
Note
Looping once means two playthroughs.
- -v documentation
- -stats documentation
- -stream_loop documentation
- -t documentation
- -c documentation
- -shortest documentation
- -map documentation
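Related: a single still image can be looped into a video for the length of an audio track (a minimal sketch; IMAGE.png and AUDIO.mp3 are placeholders, -loop 1 is an image demuxer option, and -tune stillimage helps libx264 with static content):
ffmpeg -v level+warning -stats -loop 1 -i IMAGE.png -i AUDIO.mp3 -c:v libx264 -tune stillimage -pix_fmt yuv420p -c:a aac -shortest OUTPUT.mp4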
Warning: these filters require a lot of memory (they buffer the entire clip), so it's suggested to also use the trim filter as shown
# reverse video only (first 5sec)
ffmpeg -v level+warning -stats -i INPUT.mp4 -vf trim=end=5,reverse OUTPUT.mp4
# reverse audio only (first 5sec)
ffmpeg -v level+warning -stats -i INPUT.mp4 -af atrim=end=5,areverse OUTPUT.mp4

- -v documentation
- -stats documentation
- reverse filter documentation
- trim filter documentation
- areverse filter documentation
- atrim filter documentation
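Both can also be combined in one command to reverse video and audio together (a minimal sketch, again limited to the first 5 seconds):
ffmpeg -v level+warning -stats -i INPUT.mp4 -vf trim=end=5,reverse -af atrim=end=5,areverse OUTPUT.mp4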
# using filter complex and the concat filter (if video formats are not the same add `:unsafe` to the `concat` filter)
ffmpeg -v level+warning -stats -i INPUT_0.mp4 -i INPUT_1.mp4 -filter_complex "[0:v] [0:a] [1:v] [1:a] concat=n=2:v=1:a=1 [v1] [a1]" -map "[v1]" -map "[a1]" OUTPUT.mp4
# using a list file and demuxer
ffmpeg -v level+warning -stats -safe 0 -f concat -i VIDEO_LIST.txt -c copy OUTPUT.mp4

content of VIDEO_LIST.txt as follows
file 'INPUT_0.mp4'
file 'INPUT_1.mp4'
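The list file can also be generated instead of written by hand (a minimal sketch for a POSIX shell; on Windows a similar loop can be done in PowerShell):
for f in ./*.mp4; do echo "file '$f'" >> VIDEO_LIST.txt; done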
Note
all videos need to have the same resolution and fps for concat
see scale video to scale all videos to the same resolution
and -fps_mode vfr/passthrough to allow for variable frame rate
Cut clips and concat them (with re-encoding) as follows (video and audio are cut and combined separately).
# 00:00 to 00:02 video and audio of INPUT_0.mp4
# 00:04 to 00:08 video and audio of INPUT_0.mp4
# 00:01 to 00:05 video and audio of INPUT_1.mp4
# 00:06 to 00:08 video and audio of INPUT_1.mp4
ffmpeg -v level+warning -stats -i INPUT_0.mp4 -i INPUT_1.mp4 -filter_complex "[0:v]trim=0:2,setpts=PTS-STARTPTS[i0v0];[0:a]atrim=0:2,asetpts=PTS-STARTPTS[i0a0];[0:v]trim=4:8,setpts=PTS-STARTPTS[i0v1];[0:a]atrim=4:8,asetpts=PTS-STARTPTS[i0a1];[1:v]trim=1:5,setpts=PTS-STARTPTS[i1v0];[1:a]atrim=1:5,asetpts=PTS-STARTPTS[i1a0];[1:v]trim=6:8,setpts=PTS-STARTPTS[i1v1];[1:a]atrim=6:8,asetpts=PTS-STARTPTS[i1a1];[i0v0][i0a0][i0v1][i0a1][i1v0][i1a0][i1v1][i1a1]concat=n=4:v=1:a=1[cv][ca]" -map "[cv]" -map "[ca]" OUTPUT.mp4
# scale to WIDTH*HEIGHT with auto letter-/pilarbox and variable fps
ffmpeg -v level+warning -stats -i INPUT_0.mp4 -i INPUT_1.mp4 -filter_complex "[0:v]trim=0:2,setpts=PTS-STARTPTS,scale=WIDTH:HEIGHT:force_original_aspect_ratio=decrease,pad=WIDTH:HEIGHT:-1:-1,setsar=1[i0v0];[0:a]atrim=0:2,asetpts=PTS-STARTPTS[i0a0];[0:v]trim=4:8,setpts=PTS-STARTPTS,scale=WIDTH:HEIGHT:force_original_aspect_ratio=decrease,pad=WIDTH:HEIGHT:-1:-1,setsar=1[i0v1];[0:a]atrim=4:8,asetpts=PTS-STARTPTS[i0a1];[1:v]trim=1:5,setpts=PTS-STARTPTS,scale=WIDTH:HEIGHT:force_original_aspect_ratio=decrease,pad=WIDTH:HEIGHT:-1:-1,setsar=1[i1v0];[1:a]atrim=1:5,asetpts=PTS-STARTPTS[i1a0];[1:v]trim=6:8,setpts=PTS-STARTPTS,scale=WIDTH:HEIGHT:force_original_aspect_ratio=decrease,pad=WIDTH:HEIGHT:-1:-1,setsar=1[i1v1];[1:a]atrim=6:8,asetpts=PTS-STARTPTS[i1a1];[i0v0][i0a0][i0v1][i0a1][i1v0][i1a0][i1v1][i1a1]concat=n=4:v=1:a=1[cv][ca]" -map "[cv]" -fps_mode vfr -map "[ca]" OUTPUT.mp4
# for newer versions of ffmpeg, bicubic scaler is used, otherwise add `:flags=bicubic` to the scale filter to explicitly select it
# (if you want to only scale once and reuse the stream `[...]`, then it needs to be duplicated via `[...]split=N[..0][..1][..N]` filter to use in N places)

Click to show formatted first command
ffmpeg
-v level+warning
-stats
-i INPUT_0.mp4
-i INPUT_1.mp4
-filter_complex "
[0:v] trim=0:2, setpts=PTS-STARTPTS[i0v0];
[0:a]atrim=0:2,asetpts=PTS-STARTPTS[i0a0];
[0:v] trim=4:8, setpts=PTS-STARTPTS[i0v1];
[0:a]atrim=4:8,asetpts=PTS-STARTPTS[i0a1];
[1:v] trim=1:5, setpts=PTS-STARTPTS[i1v0];
[1:a]atrim=1:5,asetpts=PTS-STARTPTS[i1a0];
[1:v] trim=6:8, setpts=PTS-STARTPTS[i1v1];
[1:a]atrim=6:8,asetpts=PTS-STARTPTS[i1a1];
[i0v0][i0a0]
[i0v1][i0a1]
[i1v0][i1a0]
[i1v1][i1a1]
concat=n=4:v=1:a=1
[cv][ca]
"
-map "[cv]"
-map "[ca]"
OUTPUT.mp4
Click to show formatted second command
ffmpeg
-v level+warning
-stats
-i INPUT_0.mp4
-i INPUT_1.mp4
-filter_complex "
[0:v]
trim=0:2,
setpts=PTS-STARTPTS,
scale=WIDTH:HEIGHT:force_original_aspect_ratio=decrease,
pad=WIDTH:HEIGHT:-1:-1,
setsar=1
[i0v0];
[0:a]atrim=0:2,asetpts=PTS-STARTPTS[i0a0];
[0:v] trim=4:8, setpts=PTS-STARTPTS,scale=WIDTH:HEIGHT:force_original_aspect_ratio=decrease,pad=WIDTH:HEIGHT:-1:-1,setsar=1[i0v1];
[0:a]atrim=4:8,asetpts=PTS-STARTPTS[i0a1];
[1:v] trim=1:5, setpts=PTS-STARTPTS,scale=WIDTH:HEIGHT:force_original_aspect_ratio=decrease,pad=WIDTH:HEIGHT:-1:-1,setsar=1[i1v0];
[1:a]atrim=1:5,asetpts=PTS-STARTPTS[i1a0];
[1:v] trim=6:8, setpts=PTS-STARTPTS,scale=WIDTH:HEIGHT:force_original_aspect_ratio=decrease,pad=WIDTH:HEIGHT:-1:-1,setsar=1[i1v1];
[1:a]atrim=6:8,asetpts=PTS-STARTPTS[i1a1];
[i0v0][i0a0]
[i0v1][i0a1]
[i1v0][i1a0]
[i1v1][i1a1]
concat=n=4:v=1:a=1
[cv][ca]
"
-map "[cv]"
-fps_mode vfr
-map "[ca]"
OUTPUT.mp4
Note
if you want to use hardware acceleration like NVIDIA CUDA, the -hwaccel cuda -hwaccel_output_format cuda options must be placed in front of every -i INPUT_N.mp4
- -v documentation
- -stats documentation
- -filter_complex documentation
  - can also be read from a file via -filter_complex_script with path/to/file.txt, although this is not mentioned in the official documentation
  - trim multimedia filter
  - atrim multimedia filter
  - setpts/asetpts
  - scale filter documentation
    - scaler options
    - also, see the scale video section
  - pad filter documentation
  - setsar filter documentation
  - concat multimedia filter
- -map documentation
- concat demuxer documentation (concat via text file)
  - -safe option for concat demuxer
- -c documentation
  - also, see the compress video section (specifically with GPU hardware acceleration / NVIDIA CUDA)
# this will whitelist urls (`-i`) for files available via file, http/s, tcp, tls, or crypto protocol (for this command, not permanent)
ffmpeg -v level+warning -stats -protocol_whitelist file,http,https,tcp,tls,crypto -i INPUT.m3u8 -c copy OUTPUT.mp4
ffmpeg -v level+warning -stats -protocol_whitelist file,http,https,tcp,tls,crypto -i https://example.com/INPUT.m3u8 -c copy OUTPUT.mp4
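For the "create" direction, a video can be split into an HLS playlist plus segments (a minimal sketch; -hls_time sets the target segment length in seconds and -hls_list_size 0 keeps all segments listed in the playlist):
ffmpeg -v level+warning -stats -i INPUT.mp4 -c copy -f hls -hls_time 10 -hls_list_size 0 OUTPUT.m3u8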
# finds sections min 240sec long and max -70dB loud and writes them to LOG.txt
ffmpeg -v level+warning -stats -i INPUT.mp4 -af silencedetect=noise=-70dB:d=240 -f null - 2> LOG.txt

look for [silencedetect @ * lines in the log file
[silencedetect @ 0000000000******] silence_start: 01:00:02.500
[silencedetect @ 0000000000******] silence_end: 01:10:02.500 | silence_duration: 00:09:59.989
[silencedetect @ 000000000*******] silence_start: 02:00:02.500
[silencedetect @ 000000000*******] silence_end: 02:10:02.500 | silence_duration: 00:09:59.989
[...]
Create new videos via lavfi virtual input device and a video source.
- sierpinski (pan)
- mandelbrot (zoom)
- (elementary) cellular automaton
- life (Cellular automaton)
- mptestsrc (animated test patterns)
- empty (input)
- color (input)
- smptebars (input)
- smptehdbars (input)
- testsrc (input)
- testsrc2 (input)
- rgbtestsrc (input)
- yuvtestsrc (input)
- colorspectrum (input)
- colorchart (input)
- allrgb (input)
- allyuv (input)
Honorable mention: ddagrab, which can be used to capture the (Windows) desktop screen (or a cutout of it).
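A rough sketch of such a desktop capture, assuming a Windows build with ddagrab support (hwdownload,format=bgra downloads the captured GPU frames to system memory so a software encoder like libx264 can be used; framerate and duration are arbitrary):
ffmpeg -v level+warning -stats -f lavfi -i ddagrab=framerate=30,hwdownload,format=bgra -c:v libx264 -crf 20 -t 10 OUTPUT.mp4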
Random pan of sierpinski carpet/triangle fractal.
defaults: s=640x480, r=25 (fps), and type=carpet
ffmpeg -v level+warning -stats -f lavfi -i sierpinski OUTPUT.mp4
ffmpeg -v level+warning -stats -f lavfi -i sierpinski=type=triangle OUTPUT.mp4

- -v documentation
- -stats documentation
- -f documentation
- lavfi virtual input device
- sierpinski video source
lavfi_sierpinski.mp4
ffmpeg -v level+warning -stats -f lavfi -i sierpinski -t 60 lavfi_sierpinski.mp4
lavfi_sierpinski_triangle.mp4
ffmpeg -v level+warning -stats -f lavfi -i sierpinski=type=triangle -t 60 lavfi_sierpinski_triangle.mp4
Continuous zoom into the Mandelbrot set.
# Mandelbrot with the "inside" set to black (how it usually is displayed)
# default: 640*480 25fps and position:
# X = -0.743643887037158704752191506114774 (real axis)
# Y = -0.131825904205311970493132056385139 (imaginary axis, inverted to how it usually is displayed)
ffmpeg -v level+warning -stats -f lavfi -i mandelbrot=inner=black -t 60 OUTPUT.mp4
# limited to 60sec
# ! the frame generation gets slower the further in the zoom-vdocumentation-statsdocumentation-fdocumentationlavfivirtual input devicemandelbrotvideo source
lavfi_mandelbrot_black_blur.mp4
sped up and blurred to decrease file size
ffmpeg -v level+warning -stats -f lavfi -i mandelbrot=inner=black:s=300x300:end_pts=75,avgblur=1 -t 43 lavfi_mandelbrot_black_blur.mp4
Also, see the same zoom (position, vertically flipped so it looks the same) in my (interactive) Mandelbrot viewer:
Source code and documentation (controls): https://github.com/MAZ01001/AlmondBreadErkunder
"Waterfall" of a 1D cellular automaton.
# random seed, no custom pattern, rule 18, start with an empty screen
# fallback/defaults: s=320x508 r=24 rule=110
ffmpeg -v level+warning -stats -f lavfi -i cellauto=full=0 -t 60 OUTPUT.mp4
# limited to 60sec

- -v documentation
- -stats documentation
- -f documentation
- lavfi virtual input device
- cellauto video source
lavfi_cellauto_3.mp4
ffmpeg -v level+warning -stats -f lavfi -i cellauto=full=0:seed=3 -t 60 lavfi_cellauto_3.mp4
2D cellular automaton.
# Conway's Game of Life
# default: random grid 320*240 25fps rule S23/B3 (stay alive with 2/3 neighbors and born with 3 neighbors)
ffmpeg -v level+warning -stats -f lavfi -i life -t 60 OUTPUT.mp4
# limited to 60sec
# as above but with green color and a red afterglow of dying cells
ffmpeg -v level+warning -stats -f lavfi -i life=mold=25:life_color=\#00ff00:death_color=\#aa0000 -t 60 OUTPUT.mp4
# limited to 60sec

lavfi_life_3_200x200_scaled.mp4
smaller initial size and scaled up 4x (nearest neighbor) to reduce file size
ffmpeg -v level+warning -stats -f lavfi -i life=mold=25:life_color=\#00ff00:death_color=\#aa0000:seed=3:s=200x200,scale=4*iw:-1:flags=neighbor -t 124 lavfi_life_3_200x200_scaled.mp4
These patterns are equal to those from the MPlayer test filter.
default: r=25 (fps) t=all (all 10 tests repeating) m=30 (frames per test) d=-1 (infinite duration)
tests: dc_luma, dc_chroma, freq_luma, freq_chroma, amp_luma, amp_chroma, cbp, mv, ring1, and ring2.
# 60sec, all tests (each 3sec)
ffmpeg -v level+warning -stats -f lavfi -i mptestsrc=m=3*25:d=60 OUTPUT.mp4

- -v documentation
- -stats documentation
- -f documentation
- lavfi virtual input device
- mptestsrc video source
lavfi_mptestsrc_all_3s.mp4
ffmpeg -v level+warning -stats -f lavfi -i mptestsrc=m=3*25:d=60 lavfi_mptestsrc_all_3s.mp4
default: s=320x240, r=25 (fps), and d=-1 (infinite duration)
# 1sec 1920*1080 60fps nothing (green)
ffmpeg -v level+warning -stats -f lavfi -i nullsrc=s=1920x1080:r=60:d=1 OUTPUT.mp4

- -v documentation
- -stats documentation
- -f documentation
- lavfi virtual input device
- nullsrc video source
default: s=320x240, r=25 (fps), and d=-1 (infinite duration)
# 1sec solid color #ff9900
# default: 320*240 25 fps
ffmpeg -v level+warning -stats -f lavfi -i color=c=\#ff9900:d=1 OUTPUT.mp4

- -v documentation
- -stats documentation
- -f documentation
- lavfi virtual input device
- color video source
default: s=320x240, r=25 (fps), and d=-1 (infinite duration)
# color bars pattern, based on the SMPTE Engineering Guideline EG 1-1990
ffmpeg -v level+warning -stats -f lavfi -i smptebars OUTPUT.mp4

- -v documentation
- -stats documentation
- -f documentation
- lavfi virtual input device
- smptebars video source
ffmpeg -v level+warning -stats -f lavfi -i smptebars -frames 1 lavfi_smptebars.png
default: s=320x240, r=25 (fps), and d=-1 (infinite duration)
# color bars pattern, based on the SMPTE RP 219-2002
ffmpeg -v level+warning -stats -f lavfi -i smptehdbars OUTPUT.mp4

- -v documentation
- -stats documentation
- -f documentation
- lavfi virtual input device
- smptehdbars video source
ffmpeg -v level+warning -stats -f lavfi -i smptehdbars -frames 1 lavfi_smptehdbars.png
default: s=320x240, r=25 (fps), d=-1 (infinite duration), and n=0 (number of decimals in the displayed timestamp)
n=0 shows timestamp in seconds and n=3 shows timestamp in milliseconds
# test pattern with animated gradient and timecode (seconds)
ffmpeg -v level+warning -stats -f lavfi -i testsrc OUTPUT.mp4

- -v documentation
- -stats documentation
- -f documentation
- lavfi virtual input device
- testsrc video source
lavfi_testsrc_n3.mp4
ffmpeg -v level+warning -stats -f lavfi -i testsrc=n=3:d=60 lavfi_testsrc_n3.mp4
default: s=320x240, r=25 (fps), d=-1 (infinite duration), and alpha=255 (opacity of background, 0 to 255)
I couldn't see a difference with different alpha values, at least for mp4/webm/webp/png/gif file-formats
# animated test pattern
ffmpeg -v level+warning -stats -f lavfi -i testsrc2 OUTPUT.mp4

- -v documentation
- -stats documentation
- -f documentation
- lavfi virtual input device
- testsrc2 video source
lavfi_testsrc2.mp4
ffmpeg -v level+warning -stats -f lavfi -i testsrc2=d=60 lavfi_testsrc2.mp4
default: s=320x240, r=25 (fps), and d=-1 (infinite duration)
# RGB test pattern (useful for detecting RGB vs BGR issues)
ffmpeg -v level+warning -stats -f lavfi -i rgbtestsrc OUTPUT.mp4
# there should be red, green, and blue stripes from top to bottom

- -v documentation
- -stats documentation
- -f documentation
- lavfi virtual input device
- rgbtestsrc video source
ffmpeg -v level+warning -stats -f lavfi -i rgbtestsrc -frames 1 lavfi_rgbtestsrc.png
default: s=320x240, r=25 (fps), and d=-1 (infinite duration)
# YUV test pattern
ffmpeg -v level+warning -stats -f lavfi -i yuvtestsrc OUTPUT.mp4
# Y (luminance, black/white)
# Cb (blue-difference chroma, yellow/grey/blue)
# Cr (red-difference chroma, turquoise/grey/red)

- -v documentation
- -stats documentation
- -f documentation
- lavfi virtual input device
- yuvtestsrc video source
ffmpeg -v level+warning -stats -f lavfi -i yuvtestsrc -frames 1 lavfi_yuvtestsrc.png
default: s=320x240, r=25 (fps), d=-1 (infinite duration), and type=black (black/white/all)
ffmpeg -v level+warning -stats -f lavfi -i colorspectrum OUTPUT.mp4

- -v documentation
- -stats documentation
- -f documentation
- lavfi virtual input device
- colorspectrum video source
ffmpeg -v level+warning -stats -f lavfi -i colorspectrum -frames 1 lavfi_colorspectrum_black.png
ffmpeg -v level+warning -stats -f lavfi -i colorspectrum=type=white -frames 1 lavfi_colorspectrum_white.png
ffmpeg -v level+warning -stats -f lavfi -i colorspectrum=type=all -frames 1 lavfi_colorspectrum_all.png
default: s=320x240, r=25 (fps), d=-1 (infinite duration), preset=reference (reference/skintones), and patch_size=64x64 (size of each tile)
# color checker chart (6↔ * 4↕ = 24 tiles)
ffmpeg -v level+warning -stats -f lavfi -i colorchart OUTPUT.mp4

- -v documentation
- -stats documentation
- -f documentation
- lavfi virtual input device
- colorchart video source
ffmpeg -v level+warning -stats -f lavfi -i colorchart=patch_size=32x32 -frames 1 lavfi_colorchart_reference_32x32.png
ffmpeg -v level+warning -stats -f lavfi -i colorchart=preset=skintones:patch_size=32x32 -frames 1 lavfi_colorchart_skintones_32x32.png
default: r=25 (fps), and d=-1 (infinite duration)
Important
fixed size of 4096x4096 (use scale filter to change size)
# all rgb colors (static 4096x4096 frames)
ffmpeg -v level+warning -stats -f lavfi -i allrgb OUTPUT.mp4

- -v documentation
- -stats documentation
- -f documentation
- lavfi virtual input device
- allrgb video source
scaled down to half size (bicubic) to reduce file size
ffmpeg -v level+warning -stats -f lavfi -i allrgb,scale=iw/2:-1 -frames 1 lavfi_allrgb_halfed.png
default: r=25 (fps), and d=-1 (infinite duration)
Important
fixed size of 4096x4096 (use scale filter to change size)
# all yuv colors (static 4096x4096 frames)
ffmpeg -v level+warning -stats -f lavfi -i allyuv OUTPUT.mp4

- -v documentation
- -stats documentation
- -f documentation
- lavfi virtual input device
- allyuv video source
scaled down to half size (bicubic) to reduce file size
ffmpeg -v level+warning -stats -f lavfi -i allyuv,scale=iw/2:-1 -frames 1 lavfi_allyuv_halfed.png










