From nobody Mon Nov 08 17:39:35 2021
Date: Mon, 8 Nov 2021 17:39:35 GMT
Message-Id: <202111081739.1A8HdZjV042497@gitrepo.freebsd.org>
To: ports-committers@FreeBSD.org, dev-commits-ports-all@FreeBSD.org, dev-commits-ports-main@FreeBSD.org
From: Jan Beich
Subject: git: 5e728b3087b2 - main - multimedia/ffmpeg: switch VMAF to v2 API (built-in models)
List-Id: Commits to the main branch of the FreeBSD ports repository
List-Archive: https://lists.freebsd.org/archives/dev-commits-ports-main
Sender: owner-dev-commits-ports-main@freebsd.org
X-BeenThere: dev-commits-ports-main@freebsd.org
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
X-Git-Committer: jbeich
X-Git-Repository: ports
X-Git-Refname: refs/heads/main
X-Git-Reftype: branch
X-Git-Commit: 5e728b3087b23cd52c25ae5ea5caed4c91c59226
Auto-Submitted: auto-generated
X-ThisMailContainsUnwantedMimeParts: N

The branch main has been updated by jbeich:

URL: https://cgit.FreeBSD.org/ports/commit/?id=5e728b3087b23cd52c25ae5ea5caed4c91c59226

commit 5e728b3087b23cd52c25ae5ea5caed4c91c59226
Author: Jan Beich
AuthorDate: 2021-11-08 14:47:49 +0000
Commit: Jan Beich
CommitDate: 2021-11-08 17:33:41 +0000

    multimedia/ffmpeg: switch VMAF to v2 API (built-in models)
---
 multimedia/ffmpeg/Makefile | 2 +-
 multimedia/ffmpeg/files/patch-vmaf | 820 ++++++++++++++++++++++++++++++++++++-
 2 files changed, 809 insertions(+), 13 deletions(-)

diff --git a/multimedia/ffmpeg/Makefile b/multimedia/ffmpeg/Makefile
index ee7b7ce9179d..ba4de1fc5fcd 100644
--- a/multimedia/ffmpeg/Makefile
+++ b/multimedia/ffmpeg/Makefile
@@ -2,7 +2,7 @@
 
 PORTNAME=	ffmpeg
 PORTVERSION=	4.4.1
-PORTREVISION=	1
+PORTREVISION=	2
 PORTEPOCH=	1
 CATEGORIES=	multimedia audio net
 MASTER_SITES=	https://ffmpeg.org/releases/
diff --git a/multimedia/ffmpeg/files/patch-vmaf b/multimedia/ffmpeg/files/patch-vmaf
index b7b205fd9eb3..b503561803c1 100644
--- a/multimedia/ffmpeg/files/patch-vmaf
+++ b/multimedia/ffmpeg/files/patch-vmaf
@@ -1,32 +1,828 @@
-https://github.com/Netflix/vmaf/commit/122089fa3d23
+https://lists.ffmpeg.org/pipermail/ffmpeg-devel/2021-June/281878.html
+--- configure.orig	2021-10-24 20:47:11 UTC
++++ configure
+@@ -3663,7 +3663,7 @@ uspp_filter_deps="gpl avcodec"
+ vaguedenoiser_filter_deps="gpl"
+ vidstabdetect_filter_deps="libvidstab"
+ vidstabtransform_filter_deps="libvidstab"
+-libvmaf_filter_deps="libvmaf pthreads"
++libvmaf_filter_deps="libvmaf"
+ 
zmq_filter_deps="libzmq" + zoompan_filter_deps="swscale" + zscale_filter_deps="libzimg const_nan" +@@ -6441,7 +6441,7 @@ enabled libtwolame && require libtwolame twolam + enabled libuavs3d && require_pkg_config libuavs3d "uavs3d >= 1.1.41" uavs3d.h uavs3d_decode + enabled libv4l2 && require_pkg_config libv4l2 libv4l2 libv4l2.h v4l2_ioctl + enabled libvidstab && require_pkg_config libvidstab "vidstab >= 0.98" vid.stab/libvidstab.h vsMotionDetectInit +-enabled libvmaf && require_pkg_config libvmaf "libvmaf >= 1.5.2" libvmaf.h compute_vmaf ++enabled libvmaf && require_pkg_config libvmaf "libvmaf >= 2.0.0" libvmaf.h vmaf_init + enabled libvo_amrwbenc && require libvo_amrwbenc vo-amrwbenc/enc_if.h E_IF_init -lvo-amrwbenc + enabled libvorbis && require_pkg_config libvorbis vorbis vorbis/codec.h vorbis_info_init && + require_pkg_config libvorbisenc vorbisenc vorbis/vorbisenc.h vorbis_encode_init --- doc/filters.texi.orig 2021-10-24 20:47:07 UTC +++ doc/filters.texi -@@ -13875,14 +13875,14 @@ The obtained VMAF score is printed through the logging +@@ -13867,66 +13867,37 @@ ffmpeg -i input.mov -vf lensfun=make=Canon:model="Cano + + @section libvmaf + +-Obtain the VMAF (Video Multi-Method Assessment Fusion) +-score between two input videos. ++Calculate the VMAF (Video Multi-Method Assessment Fusion) score for a ++reference/distorted pair of input videos. + + The obtained VMAF score is printed through the logging system. + It requires Netflix's vmaf library (libvmaf) as a pre-requisite. After installing the library it can be enabled using: @code{./configure --enable-libvmaf}. -If no model path is specified it uses the default model: @code{vmaf_v0.6.1.pkl}. +If no model path is specified it uses the default model: @code{vmaf_v0.6.1.json}. The filter has following options: @table @option - @item model_path - Set the model path which is to be used for SVM. +-@item model_path +-Set the model path which is to be used for SVM. 
-Default value: @code{"/usr/local/share/model/vmaf_v0.6.1.pkl"} -+Default value: @code{"/usr/local/share/model/vmaf_v0.6.1.json"} ++@item model ++A `|` delimited list of vmaf models. Each model can be configured with a number of parameters. ++Default value: @code{"version=vmaf_v0.6.1"} ++@item feature ++A `|` delimited list of features. Each feature can be configured with a number of parameters. ++ @item log_path - Set the file path to be used to store logs. +-Set the file path to be used to store logs. ++Set the file path to be used to store log files. + + @item log_fmt +-Set the format of the log file (csv, json or xml). ++Set the format of the log file (xml, json, csv, or sub). + +-@item enable_transform +-This option can enable/disable the @code{score_transform} applied to the final predicted VMAF score, +-if you have specified score_transform option in the input parameter file passed to @code{run_vmaf_training.py} +-Default value: @code{false} +- +-@item phone_model +-Invokes the phone model which will generate VMAF scores higher than in the +-regular model, which is more suitable for laptop, TV, etc. viewing conditions. +-Default value: @code{false} +- +-@item psnr +-Enables computing psnr along with vmaf. +-Default value: @code{false} +- +-@item ssim +-Enables computing ssim along with vmaf. +-Default value: @code{false} +- +-@item ms_ssim +-Enables computing ms_ssim along with vmaf. +-Default value: @code{false} +- +-@item pool +-Set the pool method to be used for computing vmaf. +-Options are @code{min}, @code{harmonic_mean} or @code{mean} (default). +- + @item n_threads +-Set number of threads to be used when computing vmaf. +-Default value: @code{0}, which makes use of all available logical processors. ++Set number of threads to be used when initializing libvmaf. ++Default value: @code{0}, no threads. + + @item n_subsample +-Set interval for frame subsampling used when computing vmaf. 
+-Default value: @code{1} +- +-@item enable_conf_interval +-Enables confidence interval. +-Default value: @code{false} ++Set frame subsampling interval to be used. + @end table + + This filter also supports the @ref{framesync} options. +@@ -13934,23 +13905,31 @@ This filter also supports the @ref{framesync} options. + @subsection Examples + @itemize + @item +-On the below examples the input file @file{main.mpg} being processed is +-compared with the reference file @file{ref.mpg}. ++In the examples below, a distorted video @file{distorted.mpg} is ++compared with a reference file @file{reference.mpg}. + ++@item ++Basic usage: + @example +-ffmpeg -i main.mpg -i ref.mpg -lavfi libvmaf -f null - ++ffmpeg -i distorted.mpg -i reference.mpg -lavfi libvmaf=log_path=output.xml -f null - + @end example + + @item +-Example with options: ++Example with multiple models: + @example +-ffmpeg -i main.mpg -i ref.mpg -lavfi libvmaf="psnr=1:log_fmt=json" -f null - ++ffmpeg -i distorted.mpg -i reference.mpg -lavfi libvmaf='model=version=vmaf_v0.6.1\\:name=vmaf|version=vmaf_v0.6.1neg\\:name=vmaf_neg:log_path=output.xml' -f null - + @end example + + @item ++Example with multiple additional features: ++@example ++ffmpeg -i distorted.mpg -i reference.mpg -lavfi libvmaf='feature=name=psnr|name=ciede:log_path=output.xml' -f null - ++@end example ++ ++@item + Example with options and different containers: + @example +-ffmpeg -i main.mpg -i ref.mkv -lavfi "[0:v]settb=AVTB,setpts=PTS-STARTPTS[main];[1:v]settb=AVTB,setpts=PTS-STARTPTS[ref];[main][ref]libvmaf=psnr=1:log_fmt=json" -f null - ++ffmpeg -i distorted.mpg -i reference.mkv -lavfi "[0:v]settb=AVTB,setpts=PTS-STARTPTS[main];[1:v]settb=AVTB,setpts=PTS-STARTPTS[ref];[main][ref]libvmaf=log_fmt=json:log_path=output.json" -f null - + @end example + @end itemize + --- libavfilter/vf_libvmaf.c.orig 2021-10-24 20:47:07 UTC +++ libavfilter/vf_libvmaf.c -@@ -24,8 +24,8 @@ + * Calculate the VMAF 
between two input videos. + */ + +-#include + #include ++ + #include "libavutil/avstring.h" + #include "libavutil/opt.h" + #include "libavutil/pixdesc.h" +@@ -39,211 +39,372 @@ + typedef struct LIBVMAFContext { + const AVClass *class; + FFFrameSync fs; +- const AVPixFmtDescriptor *desc; +- int width; +- int height; +- double vmaf_score; +- int vmaf_thread_created; +- pthread_t vmaf_thread; +- pthread_mutex_t lock; +- pthread_cond_t cond; +- int eof; +- AVFrame *gmain; +- AVFrame *gref; +- int frame_set; +- char *model_path; + char *log_path; + char *log_fmt; +- int disable_clip; +- int disable_avx; +- int enable_transform; +- int phone_model; +- int psnr; +- int ssim; +- int ms_ssim; +- char *pool; + int n_threads; + int n_subsample; +- int enable_conf_interval; +- int error; ++ char *model_cfg; ++ char *feature_cfg; ++ VmafContext *vmaf; ++ VmafModel **model; ++ unsigned model_cnt; ++ unsigned frame_cnt; ++ unsigned bpc; + } LIBVMAFContext; + + #define OFFSET(x) offsetof(LIBVMAFContext, x) #define FLAGS AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_VIDEO_PARAM static const AVOption libvmaf_options[] = { - {"model_path", "Set the model to be used for computing vmaf.", OFFSET(model_path), AV_OPT_TYPE_STRING, {.str="/usr/local/share/model/vmaf_v0.6.1.pkl"}, 0, 1, FLAGS}, -+ {"model_path", "Set the model to be used for computing vmaf.", OFFSET(model_path), AV_OPT_TYPE_STRING, {.str="/usr/local/share/model/vmaf_v0.6.1.json"}, 0, 1, FLAGS}, - {"log_path", "Set the file path to be used to store logs.", OFFSET(log_path), AV_OPT_TYPE_STRING, {.str=NULL}, 0, 1, FLAGS}, - {"log_fmt", "Set the format of the log (csv, json or xml).", OFFSET(log_fmt), AV_OPT_TYPE_STRING, {.str=NULL}, 0, 1, FLAGS}, - {"enable_transform", "Enables transform for computing vmaf.", OFFSET(enable_transform), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1, FLAGS}, +- {"log_path", "Set the file path to be used to store logs.", OFFSET(log_path), AV_OPT_TYPE_STRING, {.str=NULL}, 0, 1, FLAGS}, +- {"log_fmt", "Set the format 
of the log (csv, json or xml).", OFFSET(log_fmt), AV_OPT_TYPE_STRING, {.str=NULL}, 0, 1, FLAGS}, +- {"enable_transform", "Enables transform for computing vmaf.", OFFSET(enable_transform), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1, FLAGS}, +- {"phone_model", "Invokes the phone model that will generate higher VMAF scores.", OFFSET(phone_model), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1, FLAGS}, +- {"psnr", "Enables computing psnr along with vmaf.", OFFSET(psnr), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1, FLAGS}, +- {"ssim", "Enables computing ssim along with vmaf.", OFFSET(ssim), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1, FLAGS}, +- {"ms_ssim", "Enables computing ms-ssim along with vmaf.", OFFSET(ms_ssim), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1, FLAGS}, +- {"pool", "Set the pool method to be used for computing vmaf.", OFFSET(pool), AV_OPT_TYPE_STRING, {.str=NULL}, 0, 1, FLAGS}, +- {"n_threads", "Set number of threads to be used when computing vmaf.", OFFSET(n_threads), AV_OPT_TYPE_INT, {.i64=0}, 0, UINT_MAX, FLAGS}, +- {"n_subsample", "Set interval for frame subsampling used when computing vmaf.", OFFSET(n_subsample), AV_OPT_TYPE_INT, {.i64=1}, 1, UINT_MAX, FLAGS}, +- {"enable_conf_interval", "Enables confidence interval.", OFFSET(enable_conf_interval), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1, FLAGS}, ++ {"model", "Set the model to be used for computing vmaf.", OFFSET(model_cfg), AV_OPT_TYPE_STRING, {.str="version=vmaf_v0.6.1"}, 0, 1, FLAGS}, ++ {"feature", "Set the feature to be used for computing vmaf.", OFFSET(feature_cfg), AV_OPT_TYPE_STRING, {.str=NULL}, 0, 1, FLAGS}, ++ {"log_path", "Set the file path to be used to write log.", OFFSET(log_path), AV_OPT_TYPE_STRING, {.str=NULL}, 0, 1, FLAGS}, ++ {"log_fmt", "Set the format of the log (csv, json, xml, or sub).", OFFSET(log_fmt), AV_OPT_TYPE_STRING, {.str="xml"}, 0, 1, FLAGS}, ++ {"n_threads", "Set number of threads to be used when computing vmaf.", OFFSET(n_threads), AV_OPT_TYPE_INT, {.i64=0}, 0, UINT_MAX, FLAGS}, ++ {"n_subsample", "Set interval for frame 
subsampling used when computing vmaf.", OFFSET(n_subsample), AV_OPT_TYPE_INT, {.i64=1}, 1, UINT_MAX, FLAGS}, + { NULL } + }; + + FRAMESYNC_DEFINE_CLASS(libvmaf, LIBVMAFContext, fs); + +-#define read_frame_fn(type, bits) \ +- static int read_frame_##bits##bit(float *ref_data, float *main_data, \ +- float *temp_data, int stride, void *ctx) \ +-{ \ +- LIBVMAFContext *s = (LIBVMAFContext *) ctx; \ +- int ret; \ +- \ +- pthread_mutex_lock(&s->lock); \ +- \ +- while (!s->frame_set && !s->eof) { \ +- pthread_cond_wait(&s->cond, &s->lock); \ +- } \ +- \ +- if (s->frame_set) { \ +- int ref_stride = s->gref->linesize[0]; \ +- int main_stride = s->gmain->linesize[0]; \ +- \ +- const type *ref_ptr = (const type *) s->gref->data[0]; \ +- const type *main_ptr = (const type *) s->gmain->data[0]; \ +- \ +- float *ptr = ref_data; \ +- float factor = 1.f / (1 << (bits - 8)); \ +- \ +- int h = s->height; \ +- int w = s->width; \ +- \ +- int i,j; \ +- \ +- for (i = 0; i < h; i++) { \ +- for ( j = 0; j < w; j++) { \ +- ptr[j] = ref_ptr[j] * factor; \ +- } \ +- ref_ptr += ref_stride / sizeof(*ref_ptr); \ +- ptr += stride / sizeof(*ptr); \ +- } \ +- \ +- ptr = main_data; \ +- \ +- for (i = 0; i < h; i++) { \ +- for (j = 0; j < w; j++) { \ +- ptr[j] = main_ptr[j] * factor; \ +- } \ +- main_ptr += main_stride / sizeof(*main_ptr); \ +- ptr += stride / sizeof(*ptr); \ +- } \ +- } \ +- \ +- ret = !s->frame_set; \ +- \ +- av_frame_unref(s->gref); \ +- av_frame_unref(s->gmain); \ +- s->frame_set = 0; \ +- \ +- pthread_cond_signal(&s->cond); \ +- pthread_mutex_unlock(&s->lock); \ +- \ +- if (ret) { \ +- return 2; \ +- } \ +- \ +- return 0; \ ++static enum VmafPixelFormat pix_fmt_map(enum AVPixelFormat av_pix_fmt) ++{ ++ switch (av_pix_fmt) { ++ case AV_PIX_FMT_YUV420P: ++ case AV_PIX_FMT_YUV420P10LE: ++ case AV_PIX_FMT_YUV420P12LE: ++ case AV_PIX_FMT_YUV420P16LE: ++ return VMAF_PIX_FMT_YUV420P; ++ case AV_PIX_FMT_YUV422P: ++ case AV_PIX_FMT_YUV422P10LE: ++ case AV_PIX_FMT_YUV422P12LE: ++ case 
AV_PIX_FMT_YUV422P16LE: ++ return VMAF_PIX_FMT_YUV422P; ++ case AV_PIX_FMT_YUV444P: ++ case AV_PIX_FMT_YUV444P10LE: ++ case AV_PIX_FMT_YUV444P12LE: ++ case AV_PIX_FMT_YUV444P16LE: ++ return VMAF_PIX_FMT_YUV444P; ++ default: ++ return VMAF_PIX_FMT_UNKNOWN; ++ } + } + +-read_frame_fn(uint8_t, 8); +-read_frame_fn(uint16_t, 10); ++static int copy_picture_data(AVFrame *src, VmafPicture *dst, unsigned bpc) ++{ ++ int err = vmaf_picture_alloc(dst, pix_fmt_map(src->format), bpc, ++ src->width, src->height); ++ if (err) return AVERROR(ENOMEM); + +-static void compute_vmaf_score(LIBVMAFContext *s) ++ for (unsigned i = 0; i < 3; i++) { ++ uint8_t *src_data = src->data[i]; ++ uint8_t *dst_data = dst->data[i]; ++ for (unsigned j = 0; j < dst->h[i]; j++) { ++ memcpy(dst_data, src_data, sizeof(*dst_data) * dst->w[i]); ++ src_data += src->linesize[i]; ++ dst_data += dst->stride[i]; ++ } ++ } ++ ++ return 0; ++} ++ ++static int do_vmaf(FFFrameSync *fs) + { +- int (*read_frame)(float *ref_data, float *main_data, float *temp_data, +- int stride, void *ctx); +- char *format; ++ AVFilterContext *ctx = fs->parent; ++ LIBVMAFContext *s = ctx->priv; ++ AVFrame *ref, *dist; ++ int err = 0; + +- if (s->desc->comp[0].depth <= 8) { +- read_frame = read_frame_8bit; +- } else { +- read_frame = read_frame_10bit; ++ int ret = ff_framesync_dualinput_get(fs, &dist, &ref); ++ if (ret < 0) ++ return ret; ++ if (ctx->is_disabled || !ref) ++ return ff_filter_frame(ctx->outputs[0], dist); ++ ++ VmafPicture pic_ref; ++ err = copy_picture_data(ref, &pic_ref, s->bpc); ++ if (err) { ++ av_log(s, AV_LOG_ERROR, "problem during vmaf_picture_alloc.\n"); ++ return AVERROR(ENOMEM); + } + +- format = (char *) s->desc->name; ++ VmafPicture pic_dist; ++ err = copy_picture_data(dist, &pic_dist, s->bpc); ++ if (err) { ++ av_log(s, AV_LOG_ERROR, "problem during vmaf_picture_alloc.\n"); ++ vmaf_picture_unref(&pic_ref); ++ return AVERROR(ENOMEM); ++ } + +- s->error = compute_vmaf(&s->vmaf_score, format, s->width, 
s->height, +- read_frame, s, s->model_path, s->log_path, +- s->log_fmt, 0, 0, s->enable_transform, +- s->phone_model, s->psnr, s->ssim, +- s->ms_ssim, s->pool, +- s->n_threads, s->n_subsample, s->enable_conf_interval); ++ err = vmaf_read_pictures(s->vmaf, &pic_ref, &pic_dist, s->frame_cnt++); ++ if (err) { ++ av_log(s, AV_LOG_ERROR, "problem during vmaf_read_pictures.\n"); ++ return AVERROR(EINVAL); ++ } ++ ++ return ff_filter_frame(ctx->outputs[0], dist); + } + +-static void *call_vmaf(void *ctx) ++static AVDictionary **delimited_dict_parse(char *str, unsigned *cnt) + { +- LIBVMAFContext *s = (LIBVMAFContext *) ctx; +- compute_vmaf_score(s); +- if (!s->error) { +- av_log(ctx, AV_LOG_INFO, "VMAF score: %f\n",s->vmaf_score); +- } else { +- pthread_mutex_lock(&s->lock); +- pthread_cond_signal(&s->cond); +- pthread_mutex_unlock(&s->lock); ++ int err = 0; ++ if (!str) return NULL; ++ ++ *cnt = 1; ++ for (char *p = str; *p; p++) { ++ if (*p == '|') ++ (*cnt)++; + } +- pthread_exit(NULL); ++ ++ AVDictionary **dict = av_calloc(*cnt, sizeof(*dict)); ++ if (!dict) goto fail; ++ ++ char *str_copy = av_strdup(str); ++ if (!str_copy) goto fail; ++ ++ char *saveptr = NULL; ++ for (unsigned i = 0; i < *cnt; i++) { ++ char *s = av_strtok(i == 0 ? 
str_copy : NULL, "|", &saveptr); ++ err = av_dict_parse_string(&dict[i], s, "=", ":", 0); ++ if (err) goto fail; ++ } ++ ++ av_free(str_copy); ++ return dict; ++ ++fail: ++ if (dict) { ++ for (unsigned i = 0; i < *cnt; i++) { ++ if (dict[i]) ++ av_dict_free(&dict[i]); ++ } ++ av_free(dict); ++ } ++ if (str_copy) ++ av_free(str_copy); ++ *cnt = 0; + return NULL; + } + +-static int do_vmaf(FFFrameSync *fs) ++static int parse_features(AVFilterContext *ctx) + { +- AVFilterContext *ctx = fs->parent; + LIBVMAFContext *s = ctx->priv; +- AVFrame *master, *ref; +- int ret; ++ if (!s->feature_cfg) return 0; + +- ret = ff_framesync_dualinput_get(fs, &master, &ref); +- if (ret < 0) +- return ret; +- if (!ref) +- return ff_filter_frame(ctx->outputs[0], master); ++ int err = 0; + +- pthread_mutex_lock(&s->lock); ++ unsigned dict_cnt = 0; ++ AVDictionary **dict = delimited_dict_parse(s->feature_cfg, &dict_cnt); ++ if (!dict) { ++ av_log(ctx, AV_LOG_ERROR, ++ "could not parse feature config: %s\n", s->feature_cfg); ++ return AVERROR(EINVAL); ++ } + +- while (s->frame_set && !s->error) { +- pthread_cond_wait(&s->cond, &s->lock); ++ for (unsigned i = 0; i < dict_cnt; i++) { ++ char *feature_name = NULL; ++ VmafFeatureDictionary *feature_opts_dict = NULL; ++ AVDictionaryEntry *e = NULL; ++ ++ while (e = av_dict_get(dict[i], "", e, AV_DICT_IGNORE_SUFFIX)) { ++ if (av_stristr(e->key, "name")) { ++ feature_name = e->value; ++ continue; ++ } ++ ++ err = vmaf_feature_dictionary_set(&feature_opts_dict, e->key, ++ e->value); ++ if (err) { ++ av_log(ctx, AV_LOG_ERROR, ++ "could not set feature option: %s.%s=%s\n", ++ feature_name, e->key, e->value); ++ goto exit; ++ } ++ } ++ ++ err = vmaf_use_feature(s->vmaf, feature_name, feature_opts_dict); ++ if (err) { ++ av_log(ctx, AV_LOG_ERROR, ++ "problem during vmaf_use_feature: %s\n", feature_name); ++ goto exit; ++ } + } + +- if (s->error) { ++exit: ++ for (unsigned i = 0; i < dict_cnt; i++) { ++ if (dict[i]) ++ av_dict_free(&dict[i]); ++ } ++ 
av_free(dict); ++ return err; ++} ++ ++static int parse_models(AVFilterContext *ctx) ++{ ++ LIBVMAFContext *s = ctx->priv; ++ if (!s->model_cfg) return 0; ++ ++ int err = 0; ++ ++ unsigned dict_cnt = 0; ++ AVDictionary **dict = delimited_dict_parse(s->model_cfg, &dict_cnt); ++ if (!dict) { + av_log(ctx, AV_LOG_ERROR, +- "libvmaf encountered an error, check log for details\n"); +- pthread_mutex_unlock(&s->lock); ++ "could not parse model config: %s\n", s->model_cfg); + return AVERROR(EINVAL); + } + +- av_frame_ref(s->gref, ref); +- av_frame_ref(s->gmain, master); ++ s->model_cnt = dict_cnt; ++ s->model = av_calloc(s->model_cnt, sizeof(*s->model)); ++ if (!s->model) return AVERROR(ENOMEM); + +- s->frame_set = 1; ++ for (unsigned i = 0; i < dict_cnt; i++) { ++ VmafModelConfig model_cfg = {0}; ++ AVDictionaryEntry *e = NULL; ++ char *version, *path; + +- pthread_cond_signal(&s->cond); +- pthread_mutex_unlock(&s->lock); ++ while (e = av_dict_get(dict[i], "", e, AV_DICT_IGNORE_SUFFIX)) { ++ if (av_stristr(e->key, "disable_clip")) { ++ model_cfg.flags |= av_stristr(e->value, "true") ? ++ VMAF_MODEL_FLAG_DISABLE_CLIP : 0; ++ continue; ++ } + +- return ff_filter_frame(ctx->outputs[0], master); ++ if (av_stristr(e->key, "enable_transform")) { ++ model_cfg.flags |= av_stristr(e->value, "true") ? 
++ VMAF_MODEL_FLAG_ENABLE_TRANSFORM : 0; ++ continue; ++ } ++ ++ if (av_stristr(e->key, "name")) { ++ model_cfg.name = e->value; ++ continue; ++ } ++ ++ if (av_stristr(e->key, "version")) { ++ version = e->value; ++ continue; ++ } ++ ++ if (av_stristr(e->key, "path")) { ++ path = e->value; ++ continue; ++ } ++ } ++ ++ if (version) { ++ err = vmaf_model_load(&s->model[i], &model_cfg, version); ++ if (err) { ++ av_log(ctx, AV_LOG_ERROR, ++ "could not load libvmaf model with version: %s\n", ++ version); ++ goto exit; ++ } ++ } ++ ++ if (path && !s->model[i]) { ++ err = vmaf_model_load_from_path(&s->model[i], &model_cfg, path); ++ if (err) { ++ av_log(ctx, AV_LOG_ERROR, ++ "could not load libvmaf model with path: %s\n", ++ e->value); ++ goto exit; ++ } ++ } ++ ++ if (!s->model[i]) { ++ av_log(ctx, AV_LOG_ERROR, ++ "could not load libvmaf model with config: %s\n", ++ s->model_cfg); ++ goto exit; ++ } ++ ++ e = NULL; ++ while (e = av_dict_get(dict[i], "", e, AV_DICT_IGNORE_SUFFIX)) { ++ char *feature_opt = NULL; ++ char *feature_name = av_strtok(e->key, ".", &feature_opt); ++ if (!feature_opt) continue; ++ ++ VmafFeatureDictionary *feature_opts_dict = NULL; ++ err = vmaf_feature_dictionary_set(&feature_opts_dict, ++ feature_opt, e->value); ++ if (err) { ++ av_log(ctx, AV_LOG_ERROR, ++ "could not set feature option: %s.%s=%s\n", ++ feature_name, feature_opt, e->value); ++ err = AVERROR(EINVAL); ++ goto exit; ++ } ++ ++ err = vmaf_model_feature_overload(s->model[i], feature_name, ++ feature_opts_dict); ++ if (err) { ++ av_log(ctx, AV_LOG_ERROR, ++ "could not overload feature: %s\n", feature_name); ++ err = AVERROR(EINVAL); ++ goto exit; ++ } ++ } ++ } ++ ++ for (unsigned i = 0; i < s->model_cnt; i++) { ++ err = vmaf_use_features_from_model(s->vmaf, s->model[i]); ++ if (err) { ++ av_log(ctx, AV_LOG_ERROR, ++ "problem during vmaf_use_features_from_model\n"); ++ err = AVERROR(EINVAL); ++ goto exit; ++ } ++ } ++ ++exit: ++ for (unsigned i = 0; i < dict_cnt; i++) { ++ if 
(dict[i]) ++ av_dict_free(&dict[i]); ++ } ++ av_free(dict); ++ return err; + } + ++static enum VmafLogLevel log_level_map(int log_level) ++{ ++ switch (log_level) { ++ case AV_LOG_QUIET: ++ return VMAF_LOG_LEVEL_NONE; ++ case AV_LOG_ERROR: ++ return VMAF_LOG_LEVEL_ERROR; ++ case AV_LOG_WARNING: ++ return VMAF_LOG_LEVEL_WARNING; ++ case AV_LOG_INFO: ++ return VMAF_LOG_LEVEL_INFO; ++ case AV_LOG_DEBUG: ++ return VMAF_LOG_LEVEL_DEBUG; ++ default: ++ return VMAF_LOG_LEVEL_INFO; ++ } ++} ++ + static av_cold int init(AVFilterContext *ctx) + { + LIBVMAFContext *s = ctx->priv; ++ int err = 0; + +- s->gref = av_frame_alloc(); +- s->gmain = av_frame_alloc(); +- if (!s->gref || !s->gmain) +- return AVERROR(ENOMEM); ++ VmafConfiguration cfg = { ++ .log_level = log_level_map(av_log_get_level()), ++ .n_subsample = s->n_subsample, ++ .n_threads = s->n_threads, ++ }; + +- s->error = 0; ++ err = vmaf_init(&s->vmaf, cfg); ++ if (err) return AVERROR(EINVAL); + +- s->vmaf_thread_created = 0; +- pthread_mutex_init(&s->lock, NULL); +- pthread_cond_init (&s->cond, NULL); ++ err = parse_models(ctx); ++ if (err) return err; + ++ err = parse_features(ctx); ++ if (err) return err; ++ + s->fs.on_event = do_vmaf; + return 0; + } +@@ -251,8 +412,9 @@ static av_cold int init(AVFilterContext *ctx) + static int query_formats(AVFilterContext *ctx) + { + static const enum AVPixelFormat pix_fmts[] = { +- AV_PIX_FMT_YUV444P, AV_PIX_FMT_YUV422P, AV_PIX_FMT_YUV420P, +- AV_PIX_FMT_YUV444P10LE, AV_PIX_FMT_YUV422P10LE, AV_PIX_FMT_YUV420P10LE, ++ AV_PIX_FMT_YUV420P, AV_PIX_FMT_YUV420P16LE, ++ AV_PIX_FMT_YUV422P, AV_PIX_FMT_YUV422P16LE, ++ AV_PIX_FMT_YUV444P, AV_PIX_FMT_YUV444P16LE, + AV_PIX_FMT_NONE + }; + +@@ -262,33 +424,32 @@ static int query_formats(AVFilterContext *ctx) + return ff_set_common_formats(ctx, fmts_list); + } + +- + static int config_input_ref(AVFilterLink *inlink) + { +- AVFilterContext *ctx = inlink->dst; ++ AVFilterContext *ctx = inlink->dst; + LIBVMAFContext *s = ctx->priv; +- int th; + 
+- if (ctx->inputs[0]->w != ctx->inputs[1]->w || +- ctx->inputs[0]->h != ctx->inputs[1]->h) { +- av_log(ctx, AV_LOG_ERROR, "Width and height of input videos must be same.\n"); +- return AVERROR(EINVAL); ++ int err = 0; ++ ++ if (ctx->inputs[0]->w != ctx->inputs[1]->w) { ++ av_log(ctx, AV_LOG_ERROR, "input width must match.\n"); ++ err |= AVERROR(EINVAL); + } ++ ++ if (ctx->inputs[0]->h != ctx->inputs[1]->h) { ++ av_log(ctx, AV_LOG_ERROR, "input height must match.\n"); ++ err |= AVERROR(EINVAL); ++ } ++ + if (ctx->inputs[0]->format != ctx->inputs[1]->format) { +- av_log(ctx, AV_LOG_ERROR, "Inputs must be of same pixel format.\n"); +- return AVERROR(EINVAL); ++ av_log(ctx, AV_LOG_ERROR, "input pix_fmt must match.\n"); ++ err |= AVERROR(EINVAL); + } + +- s->desc = av_pix_fmt_desc_get(inlink->format); +- s->width = ctx->inputs[0]->w; +- s->height = ctx->inputs[0]->h; ++ if (err) return err; + +- th = pthread_create(&s->vmaf_thread, NULL, call_vmaf, (void *) s); +- if (th) { +- av_log(ctx, AV_LOG_ERROR, "Thread creation failed.\n"); +- return AVERROR(EINVAL); +- } +- s->vmaf_thread_created = 1; ++ const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(inlink->format); ++ s->bpc = desc->comp[0].depth; + + return 0; + } +@@ -320,35 +481,72 @@ static int activate(AVFilterContext *ctx) + return ff_framesync_activate(&s->fs); + } + ++static enum VmafOutputFormat log_fmt_map(const char *log_fmt) ++{ ++ if (av_stristr(log_fmt, "xml")) ++ return VMAF_OUTPUT_FORMAT_XML; ++ if (av_stristr(log_fmt, "json")) ++ return VMAF_OUTPUT_FORMAT_JSON; ++ if (av_stristr(log_fmt, "csv")) ++ return VMAF_OUTPUT_FORMAT_CSV; ++ if (av_stristr(log_fmt, "sub")) ++ return VMAF_OUTPUT_FORMAT_SUB; ++ ++ return VMAF_OUTPUT_FORMAT_XML; ++} ++ + static av_cold void uninit(AVFilterContext *ctx) + { + LIBVMAFContext *s = ctx->priv; + ++ int err = 0; ++ + ff_framesync_uninit(&s->fs); + +- pthread_mutex_lock(&s->lock); +- s->eof = 1; +- pthread_cond_signal(&s->cond); +- pthread_mutex_unlock(&s->lock); ++ if 
(!s->frame_cnt) goto clean_up; + +- if (s->vmaf_thread_created) +- { +- pthread_join(s->vmaf_thread, NULL); +- s->vmaf_thread_created = 0; ++ err = vmaf_read_pictures(s->vmaf, NULL, NULL, 0); ++ if (err) { ++ av_log(ctx, AV_LOG_ERROR, ++ "problem flushing libvmaf context.\n"); + } + +- av_frame_free(&s->gref); +- av_frame_free(&s->gmain); ++ for (unsigned i = 0; i < s->model_cnt; i++) { ++ double vmaf_score; ++ err = vmaf_score_pooled(s->vmaf, s->model[i], VMAF_POOL_METHOD_MEAN, ++ &vmaf_score, 0, s->frame_cnt - 1); ++ if (err) { ++ av_log(ctx, AV_LOG_ERROR, ++ "problem getting pooled vmaf score.\n"); ++ } + +- pthread_mutex_destroy(&s->lock); +- pthread_cond_destroy(&s->cond); ++ av_log(ctx, AV_LOG_INFO, "VMAF score: %f\n", vmaf_score); ++ } ++ ++ if (s->vmaf) { ++ if (s->log_path && !err) ++ vmaf_write_output(s->vmaf, s->log_path, log_fmt_map(s->log_fmt)); ++ } ++ ++clean_up: ++ if (s->model) { ++ for (unsigned i = 0; i < s->model_cnt; i++) { ++ if (s->model[i]) ++ vmaf_model_destroy(s->model[i]); ++ } ++ av_free(s->model); ++ } ++ ++ if (s->vmaf) ++ vmaf_close(s->vmaf); + } + + static const AVFilterPad libvmaf_inputs[] = { + { + .name = "main", + .type = AVMEDIA_TYPE_VIDEO, +- },{ ++ }, ++ { + .name = "reference", + .type = AVMEDIA_TYPE_VIDEO, + .config_props = config_input_ref,
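With the v2 API, the filter's new `model` and `feature` options each take a `|`-delimited list of entries, where one entry is a `:`-separated set of `key=value` pairs (see `delimited_dict_parse()` in the patched vf_libvmaf.c). A minimal sketch of that two-level layout, written in Python for brevity; the function name is illustrative and not part of the patch:

```python
def parse_delimited(opts):
    """Split a libvmaf v2 option string: '|' separates entries
    (models or features), ':' separates key=value pairs inside one
    entry -- mirroring what delimited_dict_parse() builds with
    av_strtok() and av_dict_parse_string() in vf_libvmaf.c."""
    entries = []
    for entry in opts.split("|"):
        pairs = {}
        for kv in entry.split(":"):
            key, _, value = kv.partition("=")
            pairs[key] = value
        entries.append(pairs)
    return entries

# A value such as 'version=vmaf_v0.6.1neg:name=vmaf_neg' (with ':'
# escaped as '\:' at the filtergraph level) parses into one entry
# with two key/value pairs:
print(parse_delimited("version=vmaf_v0.6.1neg:name=vmaf_neg"))
# [{'version': 'vmaf_v0.6.1neg', 'name': 'vmaf_neg'}]
```

The real parser delegates the inner step to `av_dict_parse_string()`, which also handles quoting and escaping; this sketch only illustrates the delimiter structure that the documentation examples above rely on.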