To be held at IJCAI 2026, Bremen, Germany.
We are jointly holding the First Joint Workshop on Human Behavior Analysis and Interaction for Emotional Intelligence, together with the 4th MiGA Challenge, at IJCAI 2026 on 15th August 2026 in Bremen, Germany.
We warmly welcome your contribution and participation!
Topic: EI-MiGA2026's Zoom Meeting
Time: Aug 15/16/17, 2026, 08:45 AM (Germany) (TBD)
Meeting ID: 620 1539 6579
Join Zoom
1st April 2026: The EI-MiGA workshop & challenge website is live! Call for Challenge is now open.
We are organizing the First Joint Workshop on Human Behavior Analysis and Interaction for Emotional Intelligence, with the 4th MiGA Challenge, to be held at IJCAI 2026, Bremen, Germany.
Emotional intelligence in Artificial Intelligence (AI) is evolving from affect sensing toward holistic understanding and empathetic interaction. Beyond recognizing emotions, next-generation systems aim to reason about affective experiences and generate meaningful and context-aware responses. This workshop covers the full spectrum of Emotion AI, including multimodal affect sensing from speech, language, vision, and physiological biosignals, as well as data-efficient learning, LLM/MLLM-based reasoning, and affect-aware interaction. The workshop also emphasizes trustworthiness, including fairness, robustness, and ethical considerations in affective computing.
To foster a unified platform for methodological advances and benchmark-driven evaluation, we welcome high-quality submissions spanning diverse aspects of emotional intelligence in AI. The topics include, but are not limited to:
Please see the Workshop Page for full details.
The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.
Emotional intelligence in Artificial Intelligence (AI) is undergoing a paradigm shift from sensing human affective states (including emotions and related affective phenomena) toward holistic understanding and empowerment. While existing systems focus on detecting discrete or dimensional emotions, future AI will evolve to reason about affective experiences and generate nuanced, empathetic, and interactive responses.
This workshop explores the full spectrum of Emotion AI within affective computing, starting with multimodal affect sensing from speech, language, vision (including facial expressions and body gestures), and physiological biosignals. A key bottleneck in this field is the scarcity of high-quality annotated multimodal data; therefore, we welcome complementary strategies such as data augmentation, self-/semi-supervised learning, active learning, and the use of Large Language Models (LLMs) for automated annotation. With the emergence of Multimodal LLMs (MLLMs), AI is increasingly capable of reasoning about the intentions and underlying causes of affect embedded in complex signals. By integrating LLM-based contextual reasoning and interaction, together with advanced expressive speech synthesis models, this workshop aims to advance affect-aware systems that not only perceive affective states but also understand and respond to them in an empathetic manner. Beyond technical advances, the workshop places particular emphasis on model trustworthiness, bridging technical innovation with ethical, fair, responsible, and robust Human-Computer Interaction (HCI). Our ultimate goal is to foster the development of next-generation affect-aware interfaces that are both effective and socially responsible.
The workshop covers a wider scope than the challenge: any high-quality paper related to human behavior analysis and emotional intelligence in AI is welcome, including both original research papers and challenge papers. Topics are categorized into (but not limited to) the following three themes:
Affect Perception and Data Strategies
Holistic Understanding and Reasoning
Empathetic Interaction and Applications
- Papers must comply with the Springer LNCS paper style and can fall into one of the following categories:
- Full research papers (minimum 7 pages)
- Short research papers (4-6 pages)
- Position papers (2 pages) (Position papers argue for a specific viewpoint or research direction rather than reporting completed work.)
- The Springer LNCS template can be found on the Springer LNCS guidelines page.
- Accepted papers (after blind review by 2-3 experts) will be published in Springer Lecture Notes in Computer Science (LNCS). We also plan to organize a special issue, and the authors of the most interesting and relevant papers will be invited to submit an extended manuscript.
- Workshop submissions will be handled by the CMT submission system; the submission link is as follows: Paper Submission.
The workshop (including the challenge) will be held as a one-day event on August 15, 16, or 17, 2026 (exact day TBD), in Bremen, Germany.
The MiGA Challenge is an ongoing annual event featuring multiple challenge tasks each year, with progressively larger-scale datasets. This year, in addition to the two fundamental micro-gesture analysis tasks (classification and online recognition), we are taking MiGA a step further by introducing a new task: behavior-based emotion understanding. In the future, the MiGA tracks will be further extended to leverage such identity-insensitive cues for hidden emotion understanding. The challenge is based on two spontaneous datasets: the SMG dataset, introduced in the IJCV 2023 paper "SMG: A Micro-Gesture Dataset Towards Spontaneous Body Gestures for Emotional Stress State Analysis," and the iMiGUE dataset, published in the CVPR 2021 paper "iMiGUE: An Identity-free Video Dataset for Micro-Gesture Understanding and Emotion Analysis." For MiGA 2026, we offer three challenge tasks (Tracks), and participating teams are welcome to compete in one or more of them. The challenge will be hosted on the Kaggle website.
Track 1: Micro-gesture classification from short video clips. The MG datasets were collected in in-the-wild settings. Compared to ordinary action/gesture data, MGs involve more fine-grained and subtle body movements that occur spontaneously in practical interactions. The main challenges to be addressed are therefore learning these fine-grained body-movement patterns, handling the imbalanced sample distribution of MGs, and distinguishing highly heterogeneous MG samples across classes.
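As a purely illustrative sketch of the imbalance challenge above: inverse-frequency class weighting is one common mitigation. The class count, label tensor, and loss setup below are placeholders, not an official baseline.

```python
import torch
import torch.nn as nn

def inverse_frequency_weights(labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Weight each class by the inverse of its training-sample count."""
    counts = torch.bincount(labels, minlength=num_classes).float()
    return counts.sum() / (num_classes * counts.clamp(min=1.0))

# Hypothetical example: 17 MG classes as in SMG; the labels here are random placeholders.
train_labels = torch.randint(0, 17, (3692,))
criterion = nn.CrossEntropyLoss(weight=inverse_frequency_weights(train_labels, 17))
```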
Track 2: Multimodality-based online micro-gesture recognition from long video sequences. Unlike existing online action/gesture recognition datasets, in which samples are well aligned and performed in sequence, MG samples occur spontaneously in any combination or order, just as in daily communicative scenarios. Online micro-gesture recognition therefore requires handling more complex body-movement transition patterns (e.g., co-occurrence of multiple MGs, incomplete MGs, and complicated transitions between MGs) and detecting fine-grained MGs among irrelevant/contextual body movements, which poses new challenges that have not been considered in previous gesture research.
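To make the online setting concrete, here is a minimal sketch that merges per-frame predictions into temporal MG segments. It assumes a per-frame classifier and a background (non-MG) class id, both hypothetical; this is only one possible post-processing step, not the required pipeline.

```python
from typing import List, Tuple

def frames_to_segments(frame_labels: List[int], background: int = 0,
                       min_len: int = 5) -> List[Tuple[int, int, int]]:
    """Merge consecutive identical frame labels into (start, end, label) segments."""
    segments, start = [], 0
    for i in range(1, len(frame_labels) + 1):
        if i == len(frame_labels) or frame_labels[i] != frame_labels[start]:
            label = frame_labels[start]
            # keep only sufficiently long, non-background runs
            if label != background and i - start >= min_len:
                segments.append((start, i - 1, label))
            start = i
    return segments

print(frames_to_segments([0, 0, 3, 3, 3, 3, 3, 0, 7, 7]))  # [(2, 6, 3)]
```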
Track 3: Multimodality behavior-based emotion recognition from long video sequences. In this track, participants predict whether a tennis player won or lost the match based on their body behaviors in videos of post-match press conferences. Unlike existing body-behavior-based emotion recognition datasets, in which emotions or behaviors are acted or performed intentionally, this task is about recognizing subjects' hidden emotions from body behaviors in in-the-wild press interviews. Behavior-based emotion recognition therefore involves more complex challenges, e.g., modeling reliable emotion-behavior probabilistic relationships with suitable algorithms and mining fine-grained MGs from irrelevant/contextual body movements to achieve reliable emotion analysis, which have not been considered in previous gesture research.
1. Datasets
Datasets for the proposed challenge are available; the MiGA challenge is planned as a continuous annual event. Two benchmark datasets, published at IJCV 2023 and CVPR 2021, will be used for the challenge. The first is the Spontaneous Micro-Gesture (SMG) dataset, published at IJCV 2023, which consists of 3,692 samples of 17 MGs. The MG clips are annotated from 40 long video sequences (10-15 minutes each) with 821,056 frames in total. The data were collected from 40 subjects who each narrated a fake and a real story to elicit emotional states. The participants were recorded with Kinect, resulting in four modalities: RGB, 3D skeletal joints, depth, and silhouette. In this challenge, participants may use the skeleton modality, the RGB modality, or both. Details about SMG dataset
The second dataset is iMiGUE, published at CVPR 2021. The Micro-Gesture Understanding and Emotion analysis (iMiGUE) dataset covers 32 MGs plus one non-MG class, collected from post-match press conference videos of famous tennis players. The dataset consists of 18,499 MG samples for detecting negative and positive emotions. The MG clips are annotated from 359 long video sequences (0.5-26 minutes each) with 3,765,600 frames in total. The dataset contains the RGB modality and 2D skeletal joints extracted with OpenPose. In this challenge, participants may use the skeleton modality, the RGB modality, or both. Details about iMiGUE dataset
Note that: 1) not all data are used for the challenge; 2) parts of the data will be selected and tailored for the different challenge tasks. Please follow the Kaggle competition links to obtain and process the datasets.
2. Evaluation
We deploy a cross-subject evaluation protocol. For the MG classification track, 13,936 and 3,692 MG clips from the iMiGUE and SMG datasets will be used for training and validation, and the remaining 4,563 MG clips from iMiGUE will be used for testing. For the MG online recognition track, 252 and 40 long sequences from the iMiGUE and SMG datasets will be used for training and validation, and the remaining 104 long sequences from iMiGUE will be used for testing.
MG classification track: We report Top-1 accuracy on the testing set of the iMiGUE dataset. Submissions will be ranked by Top-1 accuracy on the overall split (if Top-1 results are tied, Top-5 accuracy will be used to break the tie).
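For illustration, Top-1/Top-5 accuracy can be computed from per-clip class scores as below. The array shapes and names are assumptions; the official scoring runs on the Kaggle server.

```python
import numpy as np

def topk_accuracy(scores: np.ndarray, labels: np.ndarray, k: int) -> float:
    """scores: (num_clips, num_classes); labels: (num_clips,) ground-truth ids."""
    topk = np.argsort(scores, axis=1)[:, -k:]        # k highest-scoring classes per clip
    hits = (topk == labels[:, None]).any(axis=1)     # ground truth among the top k?
    return float(hits.mean())

scores = np.random.rand(100, 32)                     # placeholder predictions
labels = np.random.randint(0, 32, size=100)          # placeholder ground truth
print(topk_accuracy(scores, labels, 1), topk_accuracy(scores, labels, 5))
```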
MG online recognition track: We jointly evaluate the detection and classification performance of algorithms using the F1 score, defined as F1 = 2 * Precision * Recall / (Precision + Recall). Given a long video sequence to be evaluated, Precision is the fraction of correctly classified MGs among all gestures retrieved in the sequence by the algorithm, while Recall (or sensitivity) is the fraction of MGs correctly retrieved over the total number of annotated MGs.
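The sketch below illustrates this segment-level F1. The matching criterion used here (temporal IoU of at least 0.5 with a same-class ground-truth segment) is our assumption for illustration; the official matching rule is defined on the Kaggle page.

```python
def iou(a, b):
    """Temporal IoU of two (start, end) intervals."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def segment_f1(preds, gts, thr=0.5):
    """preds/gts: lists of (start, end, label) segments for one long sequence."""
    matched, used = 0, set()
    for p in preds:
        for j, g in enumerate(gts):
            # a prediction counts once, for an unmatched same-class segment
            if j not in used and p[2] == g[2] and iou(p[:2], g[:2]) >= thr:
                matched += 1
                used.add(j)
                break
    precision = matched / len(preds) if preds else 0.0
    recall = matched / len(gts) if gts else 0.0
    return 2 * precision * recall / (precision + recall) if matched else 0.0
```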
Behavior-based emotion prediction track: We evaluate the submitted algorithms by binary classification (win/lose) accuracy. The algorithm must predict the correct emotional state (positive/negative) corresponding to the match result (win/lose) of the tennis player based on their body behaviors during the press interview. Submissions will be ranked by accuracy on the testing set.
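Concretely, this metric is plain binary accuracy; the sequence ids and labels below are placeholders for illustration only.

```python
pred = {"seq_01": 1, "seq_02": 0, "seq_03": 1}   # 1 = positive/win, 0 = negative/lose
truth = {"seq_01": 1, "seq_02": 1, "seq_03": 1}  # placeholder ground truth
accuracy = sum(pred[k] == truth[k] for k in truth) / len(truth)
print(accuracy)  # 2 of 3 correct -> 0.666...
```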
Submission format for all three tracks: Participants must submit their predictions as a single .csv file on the Kaggle platform; more detailed instructions for each track can be found on Kaggle. Results will be evaluated on the server and displayed on the leaderboard in real time. The organizing team reserves the right to examine participants' source code to ensure the reproducibility of the algorithms. The final results and ranking will be confirmed and announced by the organizers.
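As a hypothetical example of producing such a file: the exact column names and id format are track-specific and defined on Kaggle, so the header and ids below are placeholders.

```python
import csv

predictions = {"clip_0001": 12, "clip_0002": 3}    # id -> predicted class (placeholder)

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "label"])               # placeholder header; see Kaggle
    for clip_id, label in predictions.items():
        writer.writerow([clip_id, label])
```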
Please visit the Kaggle websites to join the competitions:
The 4th MiGA-IJCAI Challenge Track 1: Micro-gesture Classification
The 4th MiGA-IJCAI Challenge Track 2: Micro-gesture Online Recognition
The 4th MiGA-IJCAI Challenge Track 3: Behavior-based Emotion Prediction
The rankings for the MiGA 2026 challenge will be published after the competition concludes. Once all submissions have been thoroughly reviewed, the final standings will be determined and announced. Stay tuned for updates!
University of Oulu, FI
University of Bremen, DE
Technical University of Munich, DE
University of Michigan, USA
Stanford University, USA
Imperial College London, UK
Technical University of Munich, DE
University of Bremen, DE
University of Oulu, FI
You are welcome to join our Telegram group and discuss with peers:
https://t.me/+Uc0soGQkOVRkOGUy
The contact info is listed as follows: