[update] The Generation task ranking is available here. The Spotting task ranking is available here.

Notification of Acceptance: 18 July 2022 (updated from 7 July 2022)

[update] The decision is based on the merit of the method and on its performance (the ranking).

[update] Supplementary file submission: supplementary materials must be submitted on OpenReview. See the submission requirements here.

[update] Submission platform: https://openreview.net/group?id=acmmm.org/ACMMM/2022/Track/Grand_Challenges. See the submission requirements here.

[update] The submission deadline has been extended to 25 June 2022 (23:59 AoE).

[note] Papers and results should be submitted together; both are due 25 June 2022 (23:59 AoE).

[note for spotting task] Challenge participants may make up to 5 submissions per day on the Leaderboard: https://megc2022.grand-challenge.org/.

ME generation task

Recommended Training Databases

  • SAMM with 159 MEs at 200 fps.
    • To download the dataset, please visit: http://www2.docm.mmu.ac.uk/STAFF/M.Yap/dataset.php. Download and fill in the license agreement form, then email it to M.Yap@mmu.ac.uk with the email subject: SAMM.
    • Reference: Davison, A. K., Lansley, C., Costen, N., Tan, K., & Yap, M. H. (2016). SAMM: A spontaneous micro-facial movement dataset. IEEE Transactions on Affective Computing, 9(1), 116-129.
  • CASME II with 247 FMEs at 200 fps.
    • To download the dataset, please visit: http://fu.psych.ac.cn/CASME/casme2-en.php. Download and fill in the license agreement form, then upload it through this link: https://www.wjx.top/vj/hSaLoan.aspx.
    • Reference: Yan, W. J., Li, X., Wang, S. J., Zhao, G., Liu, Y. J., Chen, Y. H., & Fu, X. (2014). CASME II: An improved spontaneous micro-expression database and the baseline evaluation. PLoS ONE, 9(1), e86041.
  • SMIC: SMIC-VIS and SMIC-NIR with 71 MEs at 25 fps; SMIC-HS with 164 MEs at 100 fps.
    • To download the dataset, please visit: https://www.oulu.fi/cmvs/node/41319. Download and fill in the license agreement form (please indicate which version/subset you need), then email it to Xiaobai.Li@oulu.fi.
    • Reference: Li, X., Pfister, T., Huang, X., Zhao, G., & Pietikäinen, M. (2013, April). A spontaneous micro-expression database: Inducement, collection and baseline. In 2013 10th IEEE International Conference and Workshops on Automatic face and gesture recognition (fg) (pp. 1-6). IEEE.

Template faces and Designated micro-expressions for generation

  • To download the MEGC2022-synthesis dataset, fill in the license agreement form and upload it through this link: https://www.wjx.top/vj/rWcxZXc.aspx.
Evaluation Protocol

  • Guidelines: Download the file here.
  • Three experts, all FACS AU coders, will evaluate the results independently of one another.

    Evaluation Result

  • The submissions with the generated videos and the corresponding evaluations can be downloaded here.
  • The final evaluation of the generated videos from the three submissions:

    Normalized by the perfect score (9 × 36 = 324):

             #36    #46    #63
    coder1   0.61   0.73   0.74
    coder2   0.56   0.55   0.58
    coder3   0.44   0.58   0.66
    total    1.61   1.86   1.98

    Normalized by each coder's maximum score across submissions:

             #36    #46    #63
    coder1   0.83   1.00   1.00
    coder2   0.97   0.95   1.00
    coder3   0.66   0.88   1.00
    total    2.46   2.82   3.00
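The two normalizations in the table above can be sketched in a few lines. The raw per-coder scores below are hypothetical placeholders (not the actual challenge data); only the perfect score, 9 points × 36 designated micro-expressions = 324, comes from the table:

```python
# Sketch of the two normalizations used in the evaluation table.
# The raw per-coder scores are hypothetical placeholders.
PERFECT = 9 * 36  # 9 points per item x 36 designated micro-expressions = 324

raw = {
    "coder1": {"#36": 198, "#46": 237, "#63": 240},
    "coder2": {"#36": 181, "#46": 178, "#63": 188},
    "coder3": {"#36": 143, "#46": 188, "#63": 214},
}

# Normalization 1: divide each score by the perfect score.
norm_perfect = {c: {s: v / PERFECT for s, v in row.items()} for c, row in raw.items()}

# Normalization 2: divide each score by the same coder's best submission score.
norm_per_coder = {
    c: {s: v / max(row.values()) for s, v in row.items()} for c, row in raw.items()
}

# A submission's total is the sum of its normalized scores over the three coders.
total = {s: sum(norm_perfect[c][s] for c in raw) for s in raw["coder1"]}
```

The second normalization gives every coder's best submission a score of 1.00, so a submission ranked first by all three coders totals 3.00, as in the table.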
ME and Macro-expression Spotting task

    Recommended Training Databases

    • SAMM Long Videos with 147 long videos at 200 fps (average duration: 35.5s).
      • To download the dataset, please visit: http://www2.docm.mmu.ac.uk/STAFF/M.Yap/dataset.php. Download and fill in the license agreement form, then email it to M.Yap@mmu.ac.uk with the email subject: SAMM long videos.
      • Reference: Yap, C. H., Kendrick, C., & Yap, M. H. (2020, November). SAMM long videos: A spontaneous facial micro-and macro-expressions dataset. In 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020) (pp. 771-776). IEEE.
    • CAS(ME)2 with 97 long videos at 30 fps (average duration: 148s).
      • To download the dataset, please visit: http://fu.psych.ac.cn/CASME/casme2-en.php. Download and fill in the license agreement form, then upload it through this link: https://www.wjx.top/vj/QR147Sq.aspx.
      • Reference: Qu, F., Wang, S. J., Yan, W. J., Li, H., Wu, S., & Fu, X. (2017). CAS(ME)²: A database for spontaneous macro-expression and micro-expression spotting and recognition. IEEE Transactions on Affective Computing, 9(4), 424-436.
    • SMIC-E-long with 162 long videos at 100 fps (average duration: 22s).
      • To download the dataset, please visit: https://www.oulu.fi/cmvs/node/41319. Download and fill in the license agreement form (please indicate which version/subset you need), email to Xiaobai.Li@oulu.fi.
      • Reference: Tran, T. K., Vo, Q. N., Hong, X., Li, X., & Zhao, G. (2021). Micro-expression spotting: A new benchmark. Neurocomputing, 443, 356-368.

    Unseen Test Dataset

    • The unseen test set (MEGC2022-testSet) contains 10 long videos: 5 long videos from SAMM (the SAMM Challenge dataset) and 5 clips cropped from different videos in CAS(ME)3. The frame rate is 200 fps for the SAMM Challenge dataset and 30 fps for CAS(ME)3. Participants should test on this unseen dataset.
    • To download the MEGC2022-testSet, fill in both the license agreement form of the SAMM Challenge dataset and the license agreement form of CAS(ME)3_clip, then upload them through this link: https://www.wjx.top/vj/wMAN302.aspx.
      • For requests from a bank or company, participants need to have their director or CEO sign the form.
      • Reference:
        1. Li, J., Dong, Z., Lu, S., Wang, S.J., Yan, W.J., Ma, Y., Liu, Y., Huang, C. and Fu, X. (2022). CAS(ME)3: A Third Generation Facial Spontaneous Micro-Expression Database with Depth Information and High Ecological Validity. IEEE Transactions on Pattern Analysis and Machine Intelligence, doi: 10.1109/TPAMI.2022.3174895.
        2. Davison, A. K., Lansley, C., Costen, N., Tan, K., & Yap, M. H. (2016). SAMM: A spontaneous micro-facial movement dataset. IEEE Transactions on Affective Computing, 9(1), 116-129.

    Evaluation Protocol

    • Participants should test their proposed algorithm on the unseen dataset and upload the results to the Leaderboard for evaluation.
    • Baseline Method:
      Please cite:
      Yap, C.H., Yap, M.H., Davison, A.K., Cunningham, R. (2021), 3D-CNN for Facial Micro-and Macro-expression Spotting on Long Video Sequences using Temporal Oriented Reference Frame, arXiv:2105.06340 [cs.CV], https://arxiv.org/abs/2105.06340.
    • Baseline result: Available on the Leaderboard
    • Leaderboard: https://megc2022.grand-challenge.org/.
      • Please submit a zip file containing your predicted csv files with the following names:
        • cas_pred.csv
        • samm_pred.csv
      • An example submission can be seen at example_submission and example_submission_withoutExpType.
      • Note: for submissions without expression-type labels (me or mae), labelling will be done automatically using an ME duration threshold of 0.5 s (15 frames for CAS and 100 frames for SAMM).
    • Submission stage: 23 May – 25 June 2022 (23:59 AoE) (updated from 18 June)
      • Participants can upload their results, and the Leaderboard will calculate the metrics.
      • Update: please contact lijt@psych.ac.cn to request your own evaluation result, with the mail subject: [Evaluation Result Request] MEGC2022 - Spotting task - [user name] - [submission time].
      • Other participants' evaluation results and the ranking will not be provided during this stage; you can compare your result against the provided baseline result.
    • Live Leaderboard stage: from 26 June 2022 (updated from 19 June 2022)
      • Results uploaded after 25 June will not be considered by ACM MEGC2022 for the final ranking of the competition.
      • However, any research team interested in the spotting task can upload results to validate the performance of their method.
      • The leaderboard will calculate and display the uploaded results and real-time ranking.
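The automatic me/mae labelling described in the submission note above (a 0.5 s threshold: 15 frames for CAS at 30 fps, 100 frames for SAMM at 200 fps) can be sketched as follows. The (onset, offset) frame-index representation and the inclusive frame count are assumptions for illustration, not the official csv schema:

```python
# Sketch: label a spotted interval as micro ("me") or macro ("mae") expression
# from its duration, per the 0.5 s rule. The (onset, offset) frame-index
# representation and inclusive frame counting are assumptions.
ME_THRESHOLD_FRAMES = {"cas": 15, "samm": 100}  # 0.5 s at 30 fps / 200 fps

def label_interval(onset: int, offset: int, dataset: str) -> str:
    """Return "me" if the interval lasts at most 0.5 s, else "mae"."""
    duration = offset - onset + 1  # inclusive frame count (an assumption)
    return "me" if duration <= ME_THRESHOLD_FRAMES[dataset] else "mae"
```

For example, a CAS interval spanning frames 100 to 112 (13 frames, roughly 0.43 s at 30 fps) would be labelled "me".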

    Submission

    Please note: The submission deadline is at 11:59 p.m. of the stated deadline date Anywhere on Earth.

    • Submission platform: https://openreview.net/group?id=acmmm.org/ACMMM/2022/Track/Grand_Challenges.
    • Submission Deadline: 25 June 2022 (updated from 18 June)
    • Notification: 18 July 2022 (updated from 7 July)
    • Camera-ready: 23 July 2022 (updated from 20 July)
    • Submission guidelines:
      • Submitted papers (.pdf format) must use the ACM Article Template (https://www.acm.org/publications/proceedings-template), as used by regular ACMMM submissions. Please prepare your submission with the template in the traditional double-column format. For example, Word users may use the Word Interim Template, and LaTeX users may use the sample-sigconf template.
      • Grand challenge papers will go through a single-blind review process. Each grand challenge paper submission is limited to 4 pages with 1-2 extra pages for references only.
      • All task-specific files other than the paper should be packaged in a single zip file and uploaded to the submission system as supplementary material:
        • a GitHub repository URL containing the code of your implemented method, plus all other relevant files such as feature/parameter data;
        • for the generation task: the generated videos;
        • for the spotting task: the two csv files reporting the results, i.e., cas_pred.csv and samm_pred.csv.

    Frequently Asked Questions

    1. Q: How should overlapping spotted intervals be handled?
      A: We consider that each ground-truth interval corresponds to at most one spotted interval. If your algorithm detects multiple overlapping intervals, you should merge them into a single optimal interval. The fusion method is part of your algorithm; the final evaluation considers only the resulting merged interval.
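One straightforward fusion strategy consistent with this answer is to union overlapping detections into single disjoint intervals. This is only an illustrative sketch; the challenge leaves the choice of fusion method to each participant:

```python
# Sketch: merge overlapping spotted (onset, offset) intervals into single
# disjoint intervals. This union-based fusion is one possible strategy;
# the challenge leaves the choice of fusion method to each algorithm.
def merge_overlapping(intervals):
    """Return sorted, disjoint intervals covering the input intervals."""
    merged = []
    for onset, offset in sorted(intervals):
        if merged and onset <= merged[-1][1]:  # overlaps the previous interval
            merged[-1] = (merged[-1][0], max(merged[-1][1], offset))
        else:
            merged.append((onset, offset))
    return merged
```

For example, detections (10, 20) and (15, 30) would be fused into the single interval (10, 30) before evaluation.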