Learning-Based
Image Inpainting


Please visit the challenge participation page [link] for more details.

Challenge description


Image inpainting, also known as image completion, is the process of filling in the missing areas of an incomplete image so that the completed image is visually plausible. While this task is indispensable in many applications, such as disocclusion, object removal, and error concealment, it remains very difficult. Traditionally, several different approaches have been proposed for image inpainting, including partial differential equation-based inpainting, constrained texture synthesis, structure propagation, and database-assisted methods.

In recent years, deep learning has revolutionized image inpainting research, and a number of deep models have been designed. Nonetheless, the lack of a public, widely acknowledged dataset has been a significant obstacle to developing advanced, learning-based inpainting solutions.

This challenge is meant to consolidate research efforts on learning-based image inpainting, especially deep learning approaches. We will prepare two tracks: error concealment (EC) and object removal (OR). In the EC track, we simulate transmission errors that cause missing areas (usually square blocks) in a decoded image. In the OR track, we carefully select objects in an image to be removed, producing missing areas with irregular shapes. In both tracks, we challenge participants to inpaint the incomplete image. The major difference between the two tracks lies in the goal: in the EC track, we want the missing areas to be recovered so that the completed image is close to the original (although this can be very difficult!), while in the OR track, we are satisfied as long as the completed image is visually plausible and pleasing.
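For concreteness, the following minimal Python sketch shows how an incomplete image can be formed from a complete image and a binary mask. The square-block mask generator only mimics the EC setting; the mask convention (1 = missing), the block size, and the function names are illustrative assumptions, not the challenge's official data format.

    # Illustrative sketch only: the mask convention (1 = missing), block size,
    # and function names are assumptions, not the challenge's official format.
    import numpy as np

    def make_block_mask(height, width, block=64, rng=None):
        """Generate a mask with one square missing block, mimicking the EC track."""
        rng = rng or np.random.default_rng()
        mask = np.zeros((height, width), dtype=np.uint8)
        top = int(rng.integers(0, height - block))
        left = int(rng.integers(0, width - block))
        mask[top:top + block, left:left + block] = 1
        return mask

    def apply_mask(image, mask):
        """Zero out the masked (missing) pixels of an (H, W[, C]) image."""
        incomplete = image.copy()
        incomplete[mask.astype(bool)] = 0
        return incomplete

An OR-style mask would differ only in shape (irregular object silhouettes instead of square blocks); the way it is applied to an image is the same.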

To the best of our knowledge, if this challenge is accepted at ICME 2019, it will be the first challenge on general-purpose image inpainting held in conjunction with a related top-tier conference (such as ICME, CVPR, ICCV, ECCV, NIPS, ICML, or ICIP).

Datasets


We will prepare a large-scale, high-quality, carefully crafted dataset. The dataset consists of three parts: training, validation, and test. The training and validation images are available for download: train_img and valid_img.

The training set consists of around 1500 high-definition, complete natural images.

The validation set has two subsets corresponding to the two tracks, EC and OR. For each validation image, we will provide a mask to indicate the missing areas.

The test set is similar in scale to the validation set; for each test image, we will provide the incomplete image together with a mask. The ground truth of the test images is not public.

Evaluation criteria


We set different criteria for the two tracks.

For the EC track, we will use a combination of PSNR and SSIM, calculated between the original and the completed images within and around the missing areas.
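As an illustration, the Python sketch below computes PSNR and SSIM restricted to the missing region plus a dilated border around it, assuming 8-bit grayscale images. The dilation radius and the way the two metrics are combined are assumptions; the challenge's exact formula is not specified here.

    # A minimal sketch of region-restricted quality metrics. Assumes 8-bit
    # grayscale numpy arrays and a binary mask (1 = missing); the dilation
    # radius approximating "around the missing areas" is an assumption.
    import numpy as np
    from scipy import ndimage
    from skimage.metrics import structural_similarity

    def masked_psnr(original, completed, mask, dilate=4):
        """PSNR over the missing area plus a dilated border around it."""
        region = ndimage.binary_dilation(mask.astype(bool), iterations=dilate)
        diff = original.astype(np.float64) - completed.astype(np.float64)
        mse = np.mean(diff[region] ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

    def masked_ssim(original, completed, mask, dilate=4):
        """Mean of the per-pixel SSIM map over the same region."""
        region = ndimage.binary_dilation(mask.astype(bool), iterations=dilate)
        _, ssim_map = structural_similarity(original, completed,
                                            data_range=255, full=True)
        return float(ssim_map[region].mean())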

For the OR track, we will use the mean opinion score (MOS), i.e. we will perform subjective evaluation.

More specifically, a team of about 40-50 subjects will evaluate the completed images. We will use pairwise evaluation, i.e. each subject is shown two images at once and asked to select the better of the two.
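To sketch how such pairwise judgments could be aggregated into per-method scores, the snippet below computes simple win rates. This is only one possible aggregation (alternatives such as the Bradley-Terry model exist); the challenge does not commit to a particular scheme here.

    # Illustrative aggregation of pairwise judgments into win rates.
    from collections import Counter
    from itertools import chain

    def win_rates(comparisons):
        """comparisons: list of (winner, loser) pairs of method names.
        Returns each method's fraction of won comparisons."""
        wins = Counter(winner for winner, _ in comparisons)
        appearances = Counter(chain.from_iterable(comparisons))
        return {m: wins[m] / appearances[m] for m in appearances}

    # Hypothetical judgments over two submissions "A" and "B":
    print(win_rates([("A", "B"), ("A", "B"), ("B", "A")]))
    # -> {'A': 0.666..., 'B': 0.333...}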

Submission deadlines


  • April 8, 2019: Paper submission deadline
  • April 8, 2019: Test image submission deadline
  • April 22, 2019: Paper acceptance notification
  • April 22, 2019: Evaluation results announcement
  • April 29, 2019: Camera-ready paper submission deadline

Submission guidelines


By April 8, 2019, we ask all participants to submit their paper and test results (i.e. the completed images).

By April 9, 2019, we ask all participants to submit their (testing) code and models, which serve verification purposes.

We will provide an upload service for participants.

Coordinators


Prof. Dong Liu
University of Science and Technology of China
Email: dongeliu@ustc.edu.cn
Homepage: http://staff.ustc.edu.cn/~dongeliu/
Prof. Ming-Hsuan Yang
University of California at Merced
Email: mhyang@ucmerced.edu
Homepage: http://faculty.ucmerced.edu/mhyang/