{"id":1116141,"date":"2025-01-08T18:45:16","date_gmt":"2025-01-09T02:45:16","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-academic-program&#038;p=1116141"},"modified":"2025-06-08T12:03:13","modified_gmt":"2025-06-08T19:03:13","slug":"ntire-2025-vqe","status":"publish","type":"msr-academic-program","link":"https:\/\/www.microsoft.com\/en-us\/research\/academic-program\/ntire-2025-vqe\/","title":{"rendered":"CVPR 2025 Challenge on Video Quality Enhancement for Video Conferencing"},"content":{"rendered":"\n\n<p>NTIRE Workshop at CVPR 2025<\/p>\n\n\n\n\n\n\n<h2 class=\"wp-block-heading\" id=\"introduction\">Introduction<\/h2>\n\n\n\n<p>Design a Video Quality Enhancement (VQE) model to enhance video quality in video conferencing scenarios by (a) improving lighting, (b) enhancing colors, (c) reducing noise, and (d) enhancing sharpness in video calls \u2013 giving a professional studio-like effect.<\/p>\n\n\n\n<p>We provide participants with a differentiable Video Quality Assessment (VQA) model, training and test videos. The participants submit enhanced videos which are used for evaluation in our crowdsourced framework.<\/p>\n\n\n\n<div style=\"height:60px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"motivation\">Motivation<\/h2>\n\n\n\n<p class=\"has-text-align-left\">Light is a crucial component of visual expression and key to controlling texture, appearance, and composition. Professional photographers often have sophisticated studio-lights and reflectors to illuminate their subjects such that the true visual cues are expressed and captured. Similarly, tech-savvy users with modern desk setups employ a sophisticated combination of key and fill lights to give themselves control over their illumination and shadow characteristics.<\/p>\n\n\n\n<p>On the other hand, many users are constrained by their physical environment which may lead to poor positioning of ambient lighting or lack thereof. It is also commonplace to encounter flares, scattering, and specular reflections that may come from windows or mirror-like surfaces. Problems can be compounded by poor-quality cameras that may introduce sensor noise. This leads to poor visual experience during video calls and may have a negative impact on downstream tasks such as face detection and segmentation.<\/p>\n\n\n\n<p>The current production light correction solution in Microsoft Teams &#8211; called AutoAdjust, finds a global mapping of input to output colors which is updated sporadically. Since this mapping is global, the method is sometimes unable to find a correction that works well for both foreground and background. A better approach may be Image Relighting which only performs local correction in the foreground and gives users the option to dim their background &#8211; creating a pop-out effect. A possible side effect of local correction can be the reduction of local contrast which often serves as a proxy to convey depth in 2D images \u2013 thereby making people appear dull in some cases.<\/p>\n\n\n\n<div style=\"height:60px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"registration\">Registration<\/h2>\n\n\n\n<p>Participants are required to register on the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/codalab.lisn.upsaclay.fr\/competitions\/21291\">CodaLab website<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>. Email used during registration will be used to add participants to our <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/cvpr2025ntire-5sk3590.slack.com\" target=\"_blank\" rel=\"noopener noreferrer\">Slack workspace<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> which will be the default mode of day-to-day communication and where participants will submit their videos for subjective evaluation. For objective evaluation, please make your submission to the CodaLab website contains the correct (a) team name, (b) names, emails & affiliations of your team members, and (c) team captain.<\/p>\n\n\n\n<p>Please reach out to the challenge organizers at <a href=\"mailto:jain.varun@microsoft.com\" target=\"_blank\" rel=\"noreferrer noopener\">jain.varun@microsoft.com<\/a> if you need assistance with registration. <\/p>\n\n\n\n<div style=\"height:60px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"awards-paper-submission\">Awards & Paper Submission<\/h2>\n\n\n\n<p>Top-ranking participants will receive a winner certificate. They will also be invited to submit their paper to NTIRE 2025 and participate in the challenge report &#8211; both of which will be included in the archived proceedings of the NTIRE workshop at CVPR 2025.<\/p>\n\n\n\n<div style=\"height:60px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"introduction\">Citation<\/h2>\n\n\n\n<p>If you use our method, data, or code in your research, please cite:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>@inproceedings{ntire2025vqe,\n  title={{NTIRE} 2025 Challenge on Video Quality Enhancement for Video Conferencing: Datasets, Methods and Results},\n  author={Varun Jain and Zongwei Wu and Quan Zou and Louis Florentin and Henrik Turbell and Sandeep Siddhartha and Radu Timofte and others},\n  booktitle={Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},\n  year={2025}\n}<\/code><\/pre>\n\n\n\n<div style=\"height:60px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"awards-paper-submission\">Subjective Results<\/h2>\n\n\n\n<p>We received 5 complete submissions for both the mid-point and final evaluations. For each team&#8217;s submission, we utilized our crowdsourced framework to evaluate their 3,000-video test set. This involved presenting human raters with 270K side-by-side video comparisons. Raters were asked to provide a preference rating on a scale of 1 to 5, where 1 and 5 represent strong preference for the left and right video respectively, and 2 and 4 represent weak preference. A rating of 3 indicates no preference. Furthermore, raters were prompted to specify if their decision was primarily influenced by (a) colors, (b) image brightness, or (c) skin tone.<\/p>\n\n\n\n<p>Here are the Bradley\u2013Terry scores for each team that maximize the likelihood of the observed P.910 voting:<\/p>\n\n\n\n<p class=\"has-text-align-center\"><img loading=\"lazy\" decoding=\"async\" width=\"7035\" height=\"3411\" class=\"wp-image-1135419\" style=\"width: 1200px\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/results_combined_mid_final.png\" alt=\"results_combined_mid_final\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/results_combined_mid_final.png 7035w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/results_combined_mid_final-300x145.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/results_combined_mid_final-1024x496.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/results_combined_mid_final-768x372.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/results_combined_mid_final-1536x745.png 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/results_combined_mid_final-2048x993.png 2048w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/results_combined_mid_final-240x116.png 240w\" sizes=\"auto, (max-width: 7035px) 100vw, 7035px\" \/><\/p>\n\n\n\n<p class=\"has-text-align-center\"><em>Figure 5: Interval plots illustrating the mean P.910 Bradley-Terry scores and their corresponding 95% confidence intervals for the 5 submissions, input videos, and the provided baseline. (Top) Overall preference, and (bottom) factors influencing preference.<\/em><\/p>\n\n\n\n<div style=\"height:60px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><a id=\"_msocom_2\"><\/a><\/p>\n\n\n\n\n\n<h2 class=\"wp-block-heading\" id=\"problem-statement\">Problem Statement<\/h2>\n\n\n\n<p class=\"has-text-align-center\"><img loading=\"lazy\" decoding=\"async\" width=\"3245\" height=\"376\" class=\"wp-image-1119612\" style=\"width: 900px\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/Picture3.png\" alt=\"Linear figure showing p910 scores of various methods\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/Picture3.png 3245w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/Picture3-300x35.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/Picture3-1024x119.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/Picture3-768x89.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/Picture3-1536x178.png 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/Picture3-2048x237.png 2048w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/Picture3-240x28.png 240w\" sizes=\"auto, (max-width: 3245px) 100vw, 3245px\" \/><\/p>\n\n\n\n<p class=\"has-text-align-center\"><em>Figure 1: P.910 study indicates that people prefer AutoAdjust over auto-corrected images (L), which are both preferred over the Image Relighting approach.<\/em><\/p>\n\n\n\n<p>We ran three P.910 <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/academic-program\/ntire-2025-vqe\/timeline\/#:~:text=1952)%3A%20324%2D345.-,%5B2%5D,-Naderi%2C%20Babak%2C%20and\">[2]<\/a> studies totaling ~350,000 pairwise comparisons that measured people\u2019s preference for AutoAdjust (A) and Image Relighting (R*) over No effect (N) and images auto-corrected using Lightroom (L). We used the Bradley\u2013Terry model <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/academic-program\/ntire-2025-vqe\/timeline\/#:~:text=References-,%5B1%5D,-Bradley%2C%20Ralph%20Allan\">[1]<\/a> to compute per-method scores and observed that people preferred our current AutoAdjust more than any other method in all three studies.<\/p>\n\n\n\n<p>To take the next step towards achieving studio-grade video quality, one would need to (a) understand what people prefer and construct a differentiable Video Quality Assessment (VQA) metric, and (b) be able to train a Video Quality Enhancement (VQE) model that optimizes this metric. To solve the first problem, we used the abovementioned P.910 data and trained a VQA model that, given a pair of videos x<sub>1<\/sub> and x<sub>2<\/sub>, gives the probability of x<sub>1<\/sub> being better than x<sub>2<\/sub>. Given a test set, this information can be used to construct a ranking order of a given set of methods.<\/p>\n\n\n\n<p>We would now like to invite researchers to participate in a challenge aimed at developing Neural Processing Unit (NPU) friendly VQE models that leverage our trained VQA model to improve video quality.<\/p>\n\n\n\n<p>We look at the following properties of a video to judge its studio-grade quality:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Foreground illumination<\/strong> \u2013 the person (all body parts and clothing) should be optimally lit.<\/li>\n\n\n\n<li><strong>Natural colors<\/strong> \u2013 correction may make local or global color changes to make videos pleasing.<\/li>\n\n\n\n<li><strong>Temporal noise<\/strong> \u2013 correct for image and video encoding artefacts and sensor noise.<\/li>\n\n\n\n<li><strong>Sharpness <\/strong>\u2013 to make sure that correction algorithms do not introduce softness, the final image should at least be as sharp as the input.<\/li>\n<\/ol>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p>We realize that there may be many other aspects to a good video. For simplicity, we discount all except the ones mentioned above. Specifically, submissions are not judged on:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Egocentric motion \u2013 unstable camera may introduce sweeping motion or small vibrations that we do not aim to correct.<\/li>\n\n\n\n<li>Masking of Background \u2013 spatial modification of background such as blurring or replacement with minimal changes to the foreground may improve subjective scores but we consider these out of domain. <\/li>\n\n\n\n<li>Makeup and beautification \u2013 it is commonplace for users to apply beautification filters that alter their skin tone and facial features such as those found on Instagram and Snapchat. We do not aim for that aesthetic.<\/li>\n\n\n\n<li>Removal of reflection on glasses and lens flare \u2013 despite being a common occurrence in video teleconference scenarios, we do not aim to remove reflections that may come from screens and other light sources onto users\u2019 glasses due to the risk associated with altering the users\u2019 eyes and gaze direction.<\/li>\n\n\n\n<li>Avatars \u2013 A solution that synthesizes a photorealistic avatar of the subject and drives it based on the input video may score the highest in terms of noise, illumination and color. If it indeed minimizes the total cost function that takes into account all these factors, it is acceptable.<\/li>\n<\/ol>\n\n\n\n<p>Solutions that significantly rely on altering properties other than what are discussed above will be asked to resubmit. Ensembles of models are allowed. Manual tweaking of hyperparameters for individual videos would lead to disqualification.<\/p>\n\n\n\n<div style=\"height:60px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"baseline-solution-starter-code\">Baseline Solution & Starter Code<\/h2>\n\n\n\n<p>Since AutoAdjust was ranked higher than expert-edited images and Image Relighting methods, we will provide the participants with a baseline solution so that they can reproduce the AutoAdjust feature as currently shipped in Microsoft Teams.<\/p>\n\n\n\n<p>More details can be found at our repository: <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/github.com\/varunj\/cvpr-vqe\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/github.com\/varunj\/cvpr-vqe<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<div style=\"height:60px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"compute-constraints\">Compute Constraints<\/h2>\n\n\n\n<p>The goal is to have a computationally efficient solution that can be offloaded to NPU for CoreML inference. We establish a qualifying criterion of CoreML uint8 or fp16 models with at most 20.0&#215;10<sup>9<\/sup> MACs\/frame for an input resolution of 1280*720. We anticipate such a model to have a per frame processing time of ~9ms on an M1 Ultra powered Mac Studio and ~5ms on an M4 Pro powered Mac Mini for the given input resolution. Submissions not meeting this criterion will not be considered for evaluation.<\/p>\n\n\n\n<div style=\"height:60px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n\n\n<h2 class=\"wp-block-heading\" id=\"dataset\">Dataset<\/h2>\n\n\n\n<p class=\"has-text-align-center\"><img loading=\"lazy\" decoding=\"async\" width=\"3163\" height=\"1593\" class=\"wp-image-1120653\" style=\"width: 900px\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/data.png\" alt=\"Image showing input and ground truth images\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/data.png 3163w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/data-300x151.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/data-1024x516.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/data-768x387.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/data-1536x774.png 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/data-2048x1031.png 2048w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/data-240x121.png 240w\" sizes=\"auto, (max-width: 3163px) 100vw, 3163px\" \/><\/p>\n\n\n\n<p class=\"has-text-align-center\"><em>Figure 2: Ground truth from (top) our synthetics framework, (bottom) our AutoAdjust solution. The top row shows the input with suboptimal foreground illumination which is fixed by adding a studio light setup in front of the subject which is simulated in synthetics and predicted via global changes in the real data<\/em>.<\/p>\n\n\n\n<p>We provide 13,000 real videos for training, validation and testing of VQE methods. The videos are 10s long, encoded at 30 FPS and amount to a total of 3,900,000 frames. We keep 3,000 (23%) videos for testing and ranking submissions and make 10,000 (77%) available to the teams. They can choose to split it between training and validation sets as they desire. The teams are also free to use other publicly available open datasets but will have to be mindful about data drift.<\/p>\n\n\n\n<p>In addition to this data, we also provide some paired data for supervised training as shown in Fig 2.<\/p>\n\n\n\n<p>Note that it is possible, and encouraged, to learn a correction that is different from these ground truth labels and achieves a higher MOS score. Hence, these labels should be treated as suggestive improvements and not the global optima.<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Real data\n<ul class=\"wp-block-list\">\n<li>Of the 13,000 videos, we selected 300 high quality videos where P.910 voters vote strongly in favor of AutoAdjust. We assume these to be the ground truth.<\/li>\n\n\n\n<li>P.910 done on these videos shows a MOS of 3.58 in favor of the target.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>Synthetic Portrait Relighting data\n<ul class=\"wp-block-list\">\n<li>1,500 videos for training and 500 videos for testing. 5s long encoded at 30 FPS.<\/li>\n\n\n\n<li>The source image only has lighting from the HDRI scene. For the target, we add 2 diffuse light sources to simulate a studio lighting setup.<\/li>\n\n\n\n<li>P.910 done on these videos shows a MOS of 4.06 in favor of the target indicating that these make for a better target compared to the current baseline AutoAdjust solution.<\/li>\n\n\n\n<li>Some examples of these pairs are shown in Fig 2. and more details about the rendering framework can be found at <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/academic-program\/ntire-2025-vqe\/timeline\/#:~:text=2814.%20IEEE%2C%202024.-,%5B3%5D,-Hewitt%2C%20Charlie%2C%20Tadas\">[3]<\/a>.<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\">\n<p class=\"has-text-align-center\"><strong>Dataset<\/strong><\/p>\n\n\n\n<p>train.tar.gz <\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">   train_supervised_synthetic.tar.gz<\/pre>\n\n\n\n<pre class=\"wp-block-preformatted\">   train_supervised_real.tar.gz<\/pre>\n\n\n\n<pre class=\"wp-block-preformatted\">   train_unsupervised.tar.gz<\/pre>\n\n\n\n<p>test.tar.gz<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">   test_supervised_synthetic.tar.gz<\/pre>\n\n\n\n<pre class=\"wp-block-preformatted\">   test_unsupervised.tar.gz<\/pre>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-style-default is-layout-flow wp-block-column-is-layout-flow\">\n<p class=\"has-text-align-center\"><strong>Azure Blob Storage Link<\/strong><\/p>\n\n\n\n<p class=\"has-text-align-center\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/ic3mi.z22.web.core.windows.net\/vqe\/train.tar.gz\">link<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p class=\"has-text-align-center\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/ic3mi.z22.web.core.windows.net\/vqe\/train_supervised_synthetic.tar.gz\">link<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p class=\"has-text-align-center\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/ic3mi.z22.web.core.windows.net\/vqe\/train_supervised_real.tar.gz\">link<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p class=\"has-text-align-center\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/ic3mi.z22.web.core.windows.net\/vqe\/train_unsupervised.tar.gz\">link<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p class=\"has-text-align-center\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/ic3mi.z22.web.core.windows.net\/vqe\/test.tar.gz\">link<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p class=\"has-text-align-center\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/ic3mi.z22.web.core.windows.net\/vqe\/test_supervised_synthetic.tar.gz\">link<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p class=\"has-text-align-center\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/ic3mi.z22.web.core.windows.net\/vqe\/test_unsupervised.tar.gz\">link<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow\">\n<p class=\"has-text-align-center\"><strong>Google Drive Link<\/strong><\/p>\n\n\n\n<p class=\"has-text-align-center\">&#8211;<\/p>\n\n\n\n<p class=\"has-text-align-center\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/drive.google.com\/file\/d\/1suD8yefmGZA-QRTECmLQX749JFmmqOyA\/view?usp=sharing\">link<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p class=\"has-text-align-center\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/drive.google.com\/file\/d\/1_AQyNgUbS2b6tV8N1pDn9g6MkUZPJ28w\/view?usp=sharing\">link<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p class=\"has-text-align-center\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/drive.google.com\/file\/d\/11CFQX1tE-n_mcg5W7XKJ_SgrLNuEnM7q\/view?usp=sharing\">link<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p class=\"has-text-align-center\">&#8211;<\/p>\n\n\n\n<p class=\"has-text-align-center\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/drive.google.com\/file\/d\/1EUOpzN4KXcDFeBT_aYQUYDCHeAPrUOiW\/view?usp=sharing\">link<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p class=\"has-text-align-center\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/drive.google.com\/file\/d\/1HxGioJL3hcWM1V8PeQGlSCpLSh96znqi\/view?usp=sharing\">link<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<\/div>\n<\/div>\n\n\n\n<div style=\"height:60px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"folder-structure-for-dataset-and-submissions\">Folder Structure for Dataset and Submissions<\/h2>\n\n\n\n<p class=\"has-text-align-center\"><img loading=\"lazy\" decoding=\"async\" width=\"876\" height=\"811\" class=\"wp-image-1123473\" style=\"width: 475px\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/folder_structure_train.png\" alt=\"folder_structure_train\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/folder_structure_train.png 876w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/folder_structure_train-300x278.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/folder_structure_train-768x711.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/folder_structure_train-194x180.png 194w\" sizes=\"auto, (max-width: 876px) 100vw, 876px\" \/>  <img loading=\"lazy\" decoding=\"async\" width=\"876\" height=\"287\" class=\"wp-image-1123470\" style=\"width: 475px\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/folder_structure_test.png\" alt=\"folder_structure_train\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/folder_structure_test.png 876w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/folder_structure_test-300x98.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/folder_structure_test-768x252.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/folder_structure_test-240x79.png 240w\" sizes=\"auto, (max-width: 876px) 100vw, 876px\" \/><\/p>\n\n\n\n<p class=\"has-text-align-center\"><em>Figure 3: Folder structure for the provided dataset and the expected submissions.<\/em><\/p>\n\n\n\n<p>To ensure efficient evaluation, please organize your submissions according to the folder structure depicted in Fig. 3. Additionally, please strictly adhere to the video coding guidelines detailed within the starter code.<\/p>\n\n\n\n<div style=\"height:60px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"metrics-evaluating-submissions\">Metrics & Evaluating Submissions<\/h2>\n\n\n\n<p>Our final goal is to rank submissions based on P.910 scores. We will require the teams to submit their predictions on the 3,000-video real test set. We then compare the submissions relative to the given input as well as against each other. Similar to Fig 1, comparison using the Bradley\u2013Terry model gives us the score for each submission that maximizes the likelihood of the observed P.910 voting. Our current P.910 framework has a throughput of ~210K votes per week.<\/p>\n\n\n\n<p>In case two methods have statistically insignificant difference in subjective scores, we will use individual objective metric discussed below to break ties.<\/p>\n\n\n\n<p class=\"has-text-align-center\"><img loading=\"lazy\" decoding=\"async\" width=\"439\" height=\"119\" class=\"wp-image-1123479\" style=\"width: 300px\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/Screenshot-2025-01-22-150338.png\" alt=\"objective metric is the mean VQA score\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/Screenshot-2025-01-22-150338.png 439w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/Screenshot-2025-01-22-150338-300x81.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/Screenshot-2025-01-22-150338-240x65.png 240w\" sizes=\"auto, (max-width: 439px) 100vw, 439px\" \/><\/p>\n\n\n\n<p>Due to the infeasibility of getting P.910 scores in real-time, the teams can use the mean VQA score S<sub>obj<\/sub> given by the provided VQA model as shown above for continuous & independent evaluation.<\/p>\n\n\n\n<p>For the 3,000 unsupervised videos, teams are required to submit the per-video VQA score along with the 11 auxiliary scores predicted by the VQA model as shown in Fig 3. For the synthetic test set, teams should report the per-video Root Mean Squared Error (RMSE). These scores will also be published on the leaderboard so that participants can track their progress relative to other teams. However, we do not rank teams based on these objective metrics since it might be possible to learn a correction that is different from and subjectively better than the ground truth provided.<\/p>\n\n\n\n<div style=\"height:60px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"metrics-evaluating-submissions\">How to Submit Videos<\/h2>\n\n\n\n<p>To submit test videos for subjective evaluation, create a zip archive of all 3,000 real videos. Then, make an entry in <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/docs.google.com\/spreadsheets\/d\/1r4xdrKz7pKM_NnDdHv7AYuIh3EqR-TAPivXNPbpPTS0\/edit?usp=sharing\">this spreadsheet<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> to notify the organizers of your submission. Teams can choose one of the following options:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Upload to your own cloud storage:<\/strong> Upload the zip file to your Google Drive, OneDrive, or other cloud storage service. Grant read and download access to <a href=\"mailto:jain.varun@microsoft.com\" target=\"_blank\" rel=\"noreferrer noopener\">jain.varun@microsoft.com<\/a> and include the link in the spreadsheet.<\/li>\n\n\n\n<li><strong>Upload to Azure Storage using azcopy:<\/strong> This option allows you to avoid using your own cloud storage. Refer to <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/storage\/common\/storage-use-azcopy-v10\">this link<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> for instructions on installing the azcopy command-line interface (CLI) tool.\n<ul class=\"wp-block-list\">\n<li>Upload your file: <code>azcopy cp --check-length=false team_name.zip \"https:\/\/ic3midata.blob.core.windows.net\/cvpr-2025-ntire-vqe\/final\/?sv=2023-01-03&st=2025-02-19T06%3A25%3A24Z&se=2025-05-01T05%3A25%3A00Z&sr=c&sp=wl&sig=34%2Ff%2BtkD8hYYVD1u4d00m0PSnW%2Fkn5XhOByFqWnhZDA%3D\"<\/code><\/li>\n\n\n\n<li>To check your submission: <code>azcopy ls \"https:\/\/ic3midata.blob.core.windows.net\/cvpr-2025-ntire-vqe?sv=2023-01-03&st=2025-02-19T06%3A25%3A24Z&se=2025-05-01T05%3A25%3A00Z&sr=c&sp=wl&sig=34%2Ff%2BtkD8hYYVD1u4d00m0PSnW%2Fkn5XhOByFqWnhZDA%3D\"<\/code><\/li>\n<\/ul>\n<\/li>\n<\/ol>\n\n\n\n<div style=\"height:60px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n\n\n<h2 class=\"wp-block-heading\" id=\"timeline\">Timeline<\/h2>\n\n\n\n<p class=\"has-text-align-center\"><img loading=\"lazy\" decoding=\"async\" width=\"3181\" height=\"1011\" class=\"wp-image-1134968\" style=\"width: 960px\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/Picture1-2.png\" alt=\"timeline\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/Picture1-2.png 3181w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/Picture1-2-300x95.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/Picture1-2-1024x325.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/Picture1-2-768x244.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/Picture1-2-1536x488.png 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/Picture1-2-2048x651.png 2048w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2025\/01\/Picture1-2-240x76.png 240w\" sizes=\"auto, (max-width: 3181px) 100vw, 3181px\" \/><\/p>\n\n\n\n<p class=\"has-text-align-center\"><em>Figure 4: Timeline for the challenge.<\/em><\/p>\n\n\n\n<div style=\"height:0px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div style=\"height:60px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading has-text-align-left\" id=\"references\">References<\/h2>\n\n\n\n<p>[1] Bradley, Ralph Allan, and Milton E. Terry. &#8220;Rank analysis of incomplete block designs: I. The method of paired comparisons.&#8221;&nbsp;<em>Biometrika<\/em>&nbsp;39, no. 3\/4 (1952): 324-345.<br>[2] Naderi, Babak, and Ross Cutler. &#8220;A crowdsourcing approach to video quality assessment.&#8221; In&nbsp;<em>ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)<\/em>, pp. 2810-2814. IEEE, 2024.<br>[3] Hewitt, Charlie, Fatemeh Saleh, Sadegh Aliakbarian, Lohit Petikam, Shideh Rezaeifar, Louis Florentin, Zafiirah Hosenie et al. &#8220;Look Ma, no markers: holistic performance capture without the hassle.&#8221;&nbsp;<em>ACM Transactions on Graphics (TOG)<\/em>&nbsp;43, no. 6 (2024): 1-12.<\/p>\n\n\n\n<div style=\"height:60px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n\n\n<h2 class=\"wp-block-heading\" id=\"vqe-challenge-terms-conditions\">VQE Challenge Terms & Conditions<\/h2>\n\n\n\n<p>Participants agree to the following terms of use by participating in the challenge. The Provider grants a limited, non-exclusive, personal, non-transferable, non-sublicensable, and revocable license to access, download and use the Dataset and the VQA Model &#8211; hereinafter referred to as the Model, for internal and research purposes only, during the specified term. The Participant is required to comply with the Provider&#8217;s reasonable instructions, as well as all applicable statutes, laws, and regulations.<\/p>\n\n\n\n<p>By submitting your entry, you grant us a broad license to utilize your submission, including but not limited to: review, analysis, testing, and use in various media (now known or later developed) for any purpose, including marketing and product development. This license is non-exclusive, royalty-free, and worldwide. You acknowledge that we may have independently developed similar ideas and waive any claims related to such similarities. You will not receive compensation for this use beyond what is specified in the Rules. We may publicly display your entry, but we are not responsible for any unauthorized use by others. We are under no obligation to use your submission, even if it is selected as a winner.<\/p>\n\n\n\n<p>For any questions about the Dataset or the Model, please contact <a href=\"mailto:jain.varun@microsoft.com\">jain.varun@microsoft.com<\/a><\/p>\n\n\n\n<div style=\"height:60px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"ntire-workshop-terms-conditions\">NTIRE Workshop Terms & Conditions<\/h2>\n\n\n\n<pre class=\"wp-block-code\"><code>Terms and Conditions for the New Trends in Image Restoration and Enhancement (NTIRE) workshop challenges, and AIS: Vision, Graphics and AI for Streaming (AIS) workshop challenges.\n\nThese are the official rules (terms and conditions) that govern how the NTIRE and AIS Challenges will operate. This challenge will be simply referred to as the \"challenge\" or the \"contest\" throughout the remaining part of these rules and may be named as \"NTIRE\" or \"AIS\" benchmark, challenge, or contest, elsewhere (our webpage, our documentation, other publications).\n\nIn these rules, \"we\", \"our\", and \"us\" refer to the organizers (Marcos Conde (marcos.conde &#091;at] uni-wuerzburg.de) and Radu.Timofte &#091;at] uni-wuerzburg.de) of the NTIRE and AIS challenge and \"you\" and \"yourself\" refer to an eligible contest participant.\n\nNote that these official rules can change during the contest until the start of the final phase. If at any point during the contest the registered participant considers that can not anymore meet the eligibility criteria or does not agree with the changes in the official terms and conditions then it is the responsibility of the participant to send an email to the organizers such that to be removed from all the records. Once the contest is over no change is possible in the status of the registered participants and their entries.\n\n\n1. Contest description\nThis is a skill-based contest and chance plays no part in the determination of the winner (s).\n\nThe goal of the contest is explained in its corresponding website.\n\nFocus of the contest: it will be made available a dataset adapted for the specific needs of the challenge. The participants will not have access to the ground truth images from the test data. The ranking of the participants is according to the performance of their methods on the internal test data. The participants will provide descriptions of their methods, details on (run)time complexity, platform and (extra) data used for modeling. The winners will be determined according to their entries, the reproducibility of the results and uploaded codes or executables, and the above mentioned criteria as judged by the organizers.\n\n\n2. Tentative contest schedule\nThe registered participants will be notified by email if any changes are made to the schedule. The schedule is available on the NTIRE\/AIS workshop web page and on the Overview of the competition website.\n\n\n3. Eligibility\nYou are eligible to register and compete in this contest only if you meet all the following requirements:\n\nyou are an individual or a team of people willing to contribute to the open tasks, who accepts to follow the rules of this contest\nyou are not an NTIRE\/AIS challenge organizer or an employee of NTIRE\/AIS challenge organizers\nyou are not involved in any part of the administration and execution of this contest\nyou are not a first-degree relative, partner, household member of an employee or of an organizer of NTIRE\/AIS challenge or of a person involved in any part of the administration and execution of this contest\nThis contest is void wherever it is prohibited by law.\n\nEntries submitted but not qualified to enter the contest, it is considered voluntary and for any entry you submit NTIRE\/AIS reserves the right to evaluate it for scientific purposes, however under no circumstances will such entries qualify for sponsored prizes. If you are an employee, affiliated with or representant of any of the NTIRE\/AIS challenge sponsors then you are allowed to enter in the contest and get ranked, however, if you will rank among the winners with eligible entries you will receive only a diploma award and none of the sponsored money, products or travel grants.\n\nNOTE: industry and research labs are allowed to submit entries and to compete in both validation phase and final test phase. However, in order to get officially ranked on the final test leaderboard and to be eligible for awards the reproducibility of the results is a must and, therefore, the participants need to make available and submit their codes or executables. All the top entries will be checked for reproducibility and marked accordingly.\n\nWe will have 3 categories of entries in the final test ranking:\n\t1) checked with publicly released codes \n\t2) checked with publicly released executable\n\t3) unchecked (with or without released codes or executables)\n\n\n4. Entry\nIn order to be eligible for judging, an entry must meet all the following requirements:\n\nEntry contents: the participants are required to submit image results and code or executables. To be eligible for prizes, the top ranking participants should publicly release their code or executables under a license of their choice, taken among popular OSI-approved licenses (http:\/\/opensource.org\/licenses) and make their code or executables online accessible for a period of not less than one year following the end of the challenge (applies only for top three ranked participants of the competition). To enter the final ranking the participants will need to fill out a survey (fact sheet) briefly describing their method. All the participants are also invited (not mandatory) to submit a paper for peer-reviewing and publication at the NTIRE or AIS Workshop and Challenges. To be eligible for prizes, the participants score must improve the baseline performance provided by the challenge organizers.\n\nUse of data provided: all data provided by NTIRE\/AIS are freely available to the participants from the website of the challenge under license terms provided with the data. The data are available only for open research and educational purposes, within the scope of the challenge. NTIRE\/AIS and the organizers make no warranties regarding the database, including but not limited to warranties of non-infringement or fitness for a particular purpose. The copyright of the images remains in property of their respective owners. By downloading and making use of the data, you accept full responsibility for using the data. You shall defend and indemnify NTIRE\/AIS and the organizers, including their employees, Trustees, officers and agents, against any and all claims arising from your use of the data. You agree not to redistribute the data without this notice.\n\nTest data: The organizers will use the test data for the final evaluation and ranking of the entries. The ground truth test data will not be made available to the participants during the contest.\nTraining and validation data: The organizers will make available to the participants a training dataset with ground truth images and a validation dataset without ground truth images. At the start of the final phase the test data without ground truth images will be made available to the registered participants.\nPost-challenge analyses: the organizers may also perform additional post-challenge analyses using extra-data, but without effect on the challenge ranking.\nSubmission: the entries will be online submitted via the CMT web platform. During development phase, while the validation server is online, the participants will receive immediate feedback on validation data. The final perceptual evaluation will be computed on the test data submissions, the final scores will be released after the challenge is over.\nOriginal work, permissions: In addition, by submitting your entry into this contest you confirm that, to the best of your knowledge: - your entry is your own original work; and - your entry only includes material that you own, or that you have permission to use.\n\n\n5. Potential use of entry\nOther than what is set forth below, we are not claiming any ownership rights to your entry. However, by submitting your entry, you:\n\nAre granting us an irrevocable, worldwide right and license, in exchange for your opportunity to participate in the contest and potential prize awards, for the duration of the protection of the copyrights to:\n\nUse, review, assess, test and otherwise analyze results submitted or produced by your code or executable and other material submitted by you in connection with this contest and any future research or contests by the organizers; and\nFeature your entry and all its content in connection with the promotion of this contest in all media (now known or later developed);\nAgree to sign any necessary documentation that may be required for us and our designees to make use of the rights you granted above;\n\nUnderstand and acknowledge that us and other entrants may have developed or commissioned materials similar or identical to your submission and you waive any claims you may have resulting from any similarities to your entry;\n\nUnderstand that we cannot control the incoming information you will disclose to our representatives or our co-sponsor\u2019s representatives in the course of entering, or what our representatives will remember about your entry. You also understand that we will not restrict work assignments of representatives or our co-sponsor\u2019s representatives who have had access to your entry. By entering this contest, you agree that use of information in our representatives\u2019 or our co-sponsor\u2019s representatives unaided memories in the development or deployment of our products or services does not create liability for us under this agreement or copyright or trade secret law;\n\nUnderstand that you will not receive any compensation or credit for use of your entry, other than what is described in these official rules.\n\nIf you do not want to grant us these rights to your entry, please do not enter this contest.\n\n\n6. Submission of entries\nThe participants will follow the instructions on the competition website to submit entries\n\nThe participants will be registered as mutually exclusive teams. Each team is allowed to submit only one single final entry. We are not responsible for entries that we do not receive for any reason, or for entries that we receive but do not work properly.\n\nThe participants must follow the instructions and the rules. We will automatically disqualify incomplete or invalid entries.\n\n\n7. Judging the entries\nThe board of NTIRE and AIS will select a panel of judges to judge the entries; all judges will be forbidden to enter the contest and will be experts in causality, statistics, machine learning, computer vision, or a related field, or experts in challenge organization. A list of the judges will be made available upon request. The judges will review all eligible entries received and select (three) winners for each or for both of the competition tracks based upon the prediction score on test data. The judges will verify that the winners complied with the rules, including that they documented their method by filling out a fact sheet.\n\nThe decisions of these judges are final and binding. The distribution of prizes according to the decisions made by the judges will be made within three (3) months after completion of the last round of the contest. If we do not receive a sufficient number of entries meeting the entry requirements, we may, at our discretion based on the above criteria, not award any or all of the contest prizes below. In the event of a tie between any eligible entries, the tie will be broken by giving preference to the earliest submission, using the time stamp of the submission platform.\n\n\n8. Prizes and Awards\nThe financial sponsors of this contest are listed on NTIRE\/AIS web page. There will be economic incentive prizes and travel grants for the winners (based on availability) to boost contest participation; these prizes will not require participants to enter into an IP agreement with any of the sponsors. The participants affiliated with the industry sponsors agree to not receive any sponsored money, product or travel grant in the case they will be among the winners.\n\n\n9. Other Sponsored Events\nPublishing papers is optional and will not be a condition to entering the challenge or winning prizes. The top ranking participants are invited to submit a paper following CVPRauthor rules, for peer-reviewing to NTIRE\/AIS workshop.\n\nThe results of the challenge will be published together with NTIRE\/AIS workshop papers in the CVPR Workshops proceedings of the corresponding year.\n\nThe top ranked participants and participants contributing interesting and novel methods to the challenge will be invited to be co-authors of the challenge report paper which will be published in the CVPR Workshops proceedings (of the corresponding year). A detailed description of the ranked solution as well as the reproducibility of the results are a must to be an eligible co-author.\n\n\n10. Notifications\nIf there is any change to data, schedule, instructions of participation, or these rules, the registered participants will be notified on the competition page and\/or at the email they provided with the registration.\n\nWithin seven days following the determination of winners we will send a notification to the potential winners. If the notification that we send is returned as undeliverable, or you are otherwise unreachable for any reason, we may award the prize to an alternate winner, unless forbidden by applicable law.\n\nThe prize such as money, product, or travel grant will be delivered to the registered team leader given that the team is not affiliated with any of the sponsors. It is up to the team to share the prize. If this person becomes unavailable for any reason, the prize will be delivered to be the authorized account holder of the e-mail address used to make the winning entry.\n\nIf you are a potential winner, we may require you to sign a declaration of eligibility, use, indemnity and liability\/publicity release and applicable tax forms. If you are a potential winner and are a minor in your place of residence, and we require that your parent or legal guardian will be designated as the winner, and we may require that they sign a declaration of eligibility, use, indemnity and liability\/publicity release on your behalf. If you, (or your parent\/legal guardian if applicable), do not sign and return these required forms within the time period listed on the winner notification message, we may disqualify you (or the designated parent\/legal guardian) and select an alternate selected winner.\n\nThe terms and conditions are inspired by and use verbatim text from the `Terms and conditions' of ChaLearn Looking at People Challenges and of the NTIRE 2017, 2018, 2019, 2020, 2021, 2022, 2023, and 2024 challenges and of the AIM 2019, 2020, 2021, 2022, and 2024 challenges.<\/code><\/pre>\n\n\n\n<div style=\"height:60px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n","protected":false},"featured_media":1119516,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_hide_image_in_river":null,"footnotes":""},"msr-opportunity-type":[187426],"msr-region":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[269151],"msr-impact-theme":[],"class_list":["post-1116141","msr-academic-program","type-msr-academic-program","status-publish","has-post-thumbnail","hentry","msr-opportunity-type-challenges","msr-locale-en_us","msr-post-option-reject-for-river"],"msr_description":"","msr_social_media":[],"related-researchers":[{"type":"user_nicename","display_name":"Varun Jain","user_id":43791,"people_section":"Section name 0","alias":"varjain"},{"type":"guest","display_name":"Zongwei Wu","user_id":1123032,"people_section":"Section name 0","alias":""},{"type":"guest","display_name":"Quan Zou","user_id":1119534,"people_section":"Section name 0","alias":""},{"type":"guest","display_name":"Louis Florentin","user_id":1119687,"people_section":"Section name 0","alias":""},{"type":"user_nicename","display_name":"Henrik Turbell","user_id":43794,"people_section":"Section name 0","alias":"heturbel"},{"type":"guest","display_name":"Sandeep Siddhartha","user_id":1120491,"people_section":"Section name 0","alias":""},{"type":"guest","display_name":"Radu Timofte","user_id":1123035,"people_section":"Section name 0","alias":""}],"tab-content":[],"msr_impact_theme":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-academic-program\/1116141","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-academic-program"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-academic-program"}],"version-history":[{"count":163,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-academic-program\/1116141\/revisions"}],"predecessor-version":[{"id":1141603,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-academic-program\/1116141\/revisions\/1141603"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/1119516"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=1116141"}],"wp:term":[{"taxonomy":"msr-opportunity-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-opportunity-type?post=1116141"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=1116141"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=1116141"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=1116141"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=1116141"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=1116141"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}