Preprint 2026

Skill-Aligned Annotation for Reliable Evaluation in Text-to-Image Generation

Abdelrahman Eldesokey, Merey Ramazanova, Ahmad Sait, Ansar Khangeldin, Karen Sanchez, Tong Zhang, Bernard Ghanem

King Abdullah University of Science and Technology (KAUST)

We revisit Text-to-Image evaluation through the lens of skill-aligned annotation, where each strategy reflects the underlying characteristics of its evaluation skill — yielding higher inter-annotator agreement and more stable model comparisons than uniform Likert or binary-QA baselines.

Why revisit Text-to-Image evaluation?

Limitation

As T2I models converge in quality, reliable evaluation becomes critical yet harder. Existing practices apply uniform annotation mechanisms — Likert-scale ratings or binary question answering — across heterogeneous evaluation skills, despite fundamental differences in their nature.

Skill-Aligned Annotation

We pair each evaluation skill with an annotation strategy that reflects its underlying characteristics. Compared to uniform baselines, this design produces more consistent evaluation signals, with higher inter-annotator agreement and improved stability across models.
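As a concrete illustration, the sketch below shows one way a skill-to-strategy mapping could be dispatched in code. The skill names and the strategy assignments are illustrative assumptions, not the taxonomy used in the paper.

```python
# Minimal sketch of skill-aligned annotation dispatch.
# Skill names and strategy choices below are hypothetical examples.
from dataclasses import dataclass
from typing import Literal

Strategy = Literal["anchored_likert", "binary_qa"]

@dataclass
class AnnotationTask:
    skill: str
    strategy: Strategy
    question: str

# Assumption: graded, holistic skills get anchored Likert scales,
# while skills with a verifiable yes/no answer get targeted binary questions.
SKILL_TO_STRATEGY: dict[str, Strategy] = {
    "aesthetics": "anchored_likert",
    "prompt_faithfulness": "anchored_likert",
    "object_count": "binary_qa",
    "spatial_relations": "binary_qa",
}

def build_task(skill: str, question: str) -> AnnotationTask:
    """Pair an evaluation skill with its aligned annotation strategy."""
    return AnnotationTask(skill=skill, strategy=SKILL_TO_STRATEGY[skill], question=question)
```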

Automated Pipeline

We present an automated pipeline that instantiates the proposed evaluation protocol, enabling scalable, fine-grained evaluation with spatially grounded feedback — without simply scaling annotation effort.
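The sketch below outlines one possible shape for such a pipeline: a per-skill judge returns a score together with image regions that ground the judgment. The function names, the judge interface, and the feedback schema are assumptions for illustration only.

```python
# Minimal sketch of an automated, skill-aligned evaluation pipeline.
# The judge interface and feedback fields are hypothetical.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SkillFeedback:
    skill: str
    score: float  # Likert-style score, or 0/1 for a binary-QA skill
    regions: list[tuple[int, int, int, int]] = field(default_factory=list)  # (x, y, w, h) boxes grounding the judgment

def evaluate_image(
    prompt: str,
    image_path: str,
    judges: dict[str, Callable[[str, str], SkillFeedback]],
) -> list[SkillFeedback]:
    """Run one skill-aligned judge per skill and collect spatially grounded feedback."""
    return [judge(prompt, image_path) for judge in judges.values()]
```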

Full Abstract

Text-to-image (T2I) generation has advanced rapidly, making reliable evaluation critical as performance differences between models narrow. Existing evaluation practices typically apply uniform annotation mechanisms, such as Likert-scale ratings or binary question answering (BQA), across heterogeneous evaluation skills, despite fundamental differences in their nature. In this work, we revisit T2I evaluation through the lens of skill-aligned annotation, where annotation strategies reflect the underlying characteristics of each evaluation skill.

We systematically compare skill-aligned annotation against uniform baselines and show that it produces more consistent evaluation signals, with higher inter-annotator agreement and improved stability across models. Finally, we present an automated pipeline that instantiates the proposed evaluation protocol, enabling scalable and fine-grained evaluation with spatially grounded feedback.

Our work highlights that improving the foundations of image evaluation can increase reliability and efficiency without simply scaling annotation effort. We hope this motivates further research on refining evaluation protocols as a central component of reliable model assessment.

Evaluation tools in practice

The following demonstrations illustrate the tools we developed to operationalize the skill-aligned evaluation framework.

Evaluation App

An interactive annotation tool that implements the proposed skill-aligned annotation strategies. Each evaluation skill is presented through its corresponding annotation interface, including anchored Likert scales and targeted binary questions, designed to elicit accurate and consistent human judgments.
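To make the two interface types concrete, the snippet below sketches how an anchored Likert scale and a targeted binary question might be specified. The anchor wording and the example question are illustrative, not the app's actual rubric.

```python
# Hypothetical interface specifications for the two annotation strategies.
# Anchored Likert: every score is tied to an explicit textual anchor.
ANCHORED_LIKERT_ALIGNMENT = {
    1: "The image ignores the prompt.",
    2: "Only a minor part of the prompt is depicted.",
    3: "Roughly half of the prompt's elements are depicted correctly.",
    4: "Most elements are correct, with small errors.",
    5: "Every element of the prompt is depicted correctly.",
}

# Targeted binary question: the decision hinges on one verifiable fact.
BINARY_QUESTION = {
    "skill": "object_count",
    "question": "Are there exactly three cats in the image?",
    "answers": ["yes", "no"],
}
```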

Prompts Analysis App

A visualization tool for examining how evaluation prompts are annotated with relevant skills and subskills. Users can browse the prompt collection, inspect the associated validation questions for each skill, and review how the benchmark covers the full spectrum of T2I generation capabilities.
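The record below sketches the kind of prompt metadata such a tool could surface; the field names, skill and subskill labels, and questions are assumptions chosen for illustration.

```python
# Hypothetical prompt record with skill/subskill tags and validation questions.
prompt_record = {
    "prompt": "Three cats sitting on a red sofa next to a window",
    "skills": {
        "counting": ["exact_count"],
        "attribute_binding": ["color"],
        "spatial_relations": ["next_to"],
    },
    "validation_questions": [
        {"skill": "counting", "question": "Are there exactly three cats?"},
        {"skill": "attribute_binding", "question": "Is the sofa red?"},
    ],
}
```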