Standard clothing asset generation involves restoring forward-facing, flat-lay garment images on a clean background by extracting clothing information from diverse real-world contexts. This task is highly challenging because the target assets follow a tightly standardized structural sampling distribution, while clothing semantics are often absent in complex scenes. Existing models have limited spatial perception and often exhibit structural hallucinations and texture distortion in this high-specification generative task.
To address this issue, we propose a novel Retrieval-Augmented Generation (RAG) framework, termed RAGDiffusion, to enhance structure determinacy and mitigate hallucinations by assimilating knowledge from language models and external databases. RAGDiffusion consists of two processes: (1) Retrieval-based structure aggregation, which employs contrastive learning and Structure Locally Linear Embedding (SLLE) to derive global structure and spatial landmarks, providing both soft and hard guidance to counteract structural ambiguities; and (2) Omni-level faithful garment generation, which introduces a coarse-to-fine texture alignment that ensures fidelity of pattern and detail components within the diffusion process. Extensive experiments on challenging real-world datasets demonstrate that RAGDiffusion synthesizes structure- and texture-faithful clothing assets with significant performance improvements, representing a pioneering effort in high-specification faithful generation that uses RAG to confront intrinsic hallucinations and enhance fidelity.
Overall framework of RAGDiffusion. StructureNet provides latent structural embeddings, while SLLE performs embedding fusion and landmark retrieval. The generative model assimilates multiple conditions to achieve omni-level, high-fidelity generation.
Retrieval-based Structure Aggregation: Contrastive learning is first introduced to train a dual-tower network that extracts multi-modal structure embeddings from the images of the two branches as well as from attributes derived by a frozen LLM. Structure Locally Linear Embedding (SLLE) then projects the predicted structure embedding towards a standard manifold and offers a silhouette landmark, as sketched below.
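A minimal sketch of this retrieval-and-projection step, assuming the standard locally linear embedding formulation; the names (slle_project, db_embs, db_landmarks) are illustrative, not the released implementation.

import torch
import torch.nn.functional as F

def slle_project(query_emb, db_embs, db_landmarks, k=5, reg=1e-3):
    # Sketch of SLLE: project a predicted structure embedding (D,) onto the
    # manifold spanned by its k nearest neighbours in the external memory
    # database (N, D), and return the closest neighbour's silhouette
    # landmark as hard guidance. Names and shapes are assumptions.
    sims = F.normalize(db_embs, dim=-1) @ F.normalize(query_emb, dim=-1)
    topk = sims.topk(k).indices
    neighbours = db_embs[topk]                               # (k, D)

    # Classic LLE local step: reconstruction weights that best express the
    # query as a combination of its neighbours, constrained to sum to one.
    diffs = neighbours - query_emb                           # (k, D)
    gram = diffs @ diffs.T
    gram = gram + reg * gram.trace() * torch.eye(k)          # regularise for stability
    w = torch.linalg.solve(gram, torch.ones(k))
    w = w / w.sum()

    projected = w @ neighbours        # soft guidance: embedding projected onto the manifold
    landmark = db_landmarks[topk[0]]  # hard guidance: retrieved silhouette landmark
    return projected, landmark

The projected embedding plays the role of soft guidance, while the retrieved landmark supplies hard spatial guidance, mirroring the soft/hard conditioning described above.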
Omni-level Faithful Garment Generation: RAGDiffusion establishes a coarse-to-fine texture alignment to achieve pattern faithfulness and detail faithfulness in the generated garment. We ensure the generated pattern matches the conditioning through ReferenceNet. To mitigate the reconstruction distortions inherent in the original VAE, we propose Parameter Gradual Encoding Adaptation (PGEA) to adapt the SDXL backbone to a more powerful VAE.
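The exact PGEA schedule is not detailed here; purely as a hedged sketch, one plausible reading is a staged unfreezing of the SDXL UNet, starting from the latent-facing layers, while the backbone adapts to the new VAE's latent space. The function name pgea_unfreeze_schedule and the stage list are assumptions, not the authors' implementation.

import torch

def pgea_unfreeze_schedule(unet: torch.nn.Module, step: int, total_steps: int,
                           stages=("conv_in", "conv_out", "down_blocks",
                                   "up_blocks", "mid_block")):
    # Hypothetical PGEA-style schedule: progressively make more parameter
    # groups of the UNet trainable as training proceeds, beginning with the
    # latent-facing convolutions so the diffusion prior is not destroyed by
    # an abrupt change of latent space. This is an assumption about PGEA.
    active = min(len(stages), int(len(stages) * step / max(total_steps, 1)) + 1)
    trainable = stages[:active]
    for name, param in unet.named_parameters():
        param.requires_grad = any(name.startswith(prefix) for prefix in trainable)

The stage prefixes follow the parameter naming of a diffusers-style SDXL UNet (conv_in, down_blocks, mid_block, up_blocks, conv_out); a different backbone would use different prefixes.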
Faithfulness: RAGDiffusion achieves omni-level faithful preservation of garments at the structural, pattern, and decoding levels, preserving fine details without loss and reaching e-commerce-ready quality.
Generalizability: Benefiting from the integration of RAG, RAGDiffusion demonstrates strong generalization to unseen scenarios through expansion of the retrieval database.
Human-interpretable control: By leveraging manipulation landmarks, our method enables users to perform fine-grained, intuitive control over the generated garment structure.
FAITHFULNESS: RAGDiffusion delivers both faithful structures and superior details in challenging layered and side-view scenarios.
GENERALIZABILITY: RAGDiffusion is able to produce accurate results in the lower-body domain, showcasing its strong out-of-distribution compatibility and generalization ability.
HUMAN CONTROL: By leveraging manipulation landmarks, our method enables users to perform fine-grained, intuitive control over the generated garment structure.
RAGDiffusion assimilates high-quality contour landmarks and structure embeddings as external priors to produce visually compelling results with enhanced depth and realism.
Since RAGDiffusion works with real-world data, we provide additional results to showcase how it handles highly diverse garments, such as intricate patterns and unconventional designs.
Additional cross-dataset visual results from RAGDiffusion on the unseen datasets VITON-HD and DressCode, as well as on the untrained lower-body and dress categories, validating the enhanced generalizability brought by RAG.
Without landmarks, RAGDiffusion suffers from inaccurate shapes. Embeddings from StructureNet improve the inner structure, while PGEA enhances detail preservation.
Distribution of cosine similarity between a given sample and images from the external memory database.
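For reference, the similarity scores behind such a distribution can be computed as follows; sample_emb and db_embs are illustrative names for the query structure embedding and the stacked database embeddings.

import torch.nn.functional as F

def similarity_distribution(sample_emb, db_embs):
    # Cosine similarity between one sample embedding (D,) and every entry of
    # the external memory database (N, D); the returned (N,) vector is what
    # gets histogrammed in the figure above.
    return F.normalize(db_embs, dim=-1) @ F.normalize(sample_emb, dim=-1)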
@inproceedings{li2024ragdiffusion,
title={RAGDiffusion: Faithful Cloth Generation via External Knowledge Assimilation},
author={Li, Yuhan and Tan, Xianfeng and Shang, Wenxiang and Wu, Yubo and Wang, Jian and Chen, Xuanhong and Zhang, Yi and Lin, Ran and Ni, Bingbing},
booktitle={International Conference on Computer Vision (ICCV Highlight)},
year={2025}
}