DisControlFace: Adding Disentangled Control to Diffusion Autoencoder for
One-shot Explicit Facial Image Editing
ACM MM 2024

1Huawei Cloud Computing Technologies Co., Ltd    2Northwestern Polytechnical University    3Tsinghua University
teaser image
Our DisControlFace can realistically edit the input portrait according to different explicit parametric controls while faithfully generating the personalized facial details of an individual. Our model also supports other editing tasks, e.g., inpainting and semantic manipulation.

Abstract

In this work, we focus on exploring explicit fine-grained control of generative facial image editing while generating faithful facial appearances and consistent semantic details, which is quite challenging and has not been extensively explored, especially in a one-shot scenario. We identify the key challenge as achieving disentangled conditional control between high-level semantics and explicit parameters (e.g., 3DMM) in the generation process, and accordingly propose a novel diffusion-based editing framework named DisControlFace. Specifically, we leverage a Diffusion Autoencoder (Diff-AE) as the semantic reconstruction backbone. To enable explicit face editing, we construct an Exp-FaceNet that is compatible with Diff-AE to generate spatial-wise explicit control conditions based on estimated 3DMM parameters. Different from current diffusion-based editing methods that train the whole conditional generative model from scratch, we freeze the pre-trained weights of the Diff-AE to maintain its semantically deterministic conditioning capability and accordingly propose a random semantic masking (RSM) strategy to effectively train Exp-FaceNet independently. This setting endows the model with disentangled face control while reducing semantic information shift during editing. Our model can be trained on 2D in-the-wild portrait images without requiring 3D or video data, and it can perform robust editing on any new facial image through a simple one-shot fine-tuning. Comprehensive experiments demonstrate that DisControlFace generates realistic facial images with better editing accuracy and identity preservation than state-of-the-art methods.
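To make the editing flow described above concrete, below is a minimal PyTorch-style sketch of how an edited portrait could be sampled after the one-shot fine-tuning, under assumed interfaces: the frozen semantic encoder produces a subject-specific code, Exp-FaceNet turns a snapshot rendered from the edited 3DMM parameters into control features, and the DDIM decoder denoises under both conditions. All function and argument names here (e.g., render_snapshot, ddim_update, the z_sem/control keywords) are hypothetical placeholders for illustration, not the released API.

import torch

@torch.no_grad()
def edit_portrait(image, edited_3dmm, semantic_encoder, ddim_decoder,
                  exp_facenet, ddim_steps=50):
    # Subject-specific semantic code from the (one-shot fine-tuned) semantic encoder.
    z_sem = semantic_encoder(image)
    # Pixel-aligned snapshot rendered from the edited explicit (3DMM) parameters.
    snapshot = render_snapshot(edited_3dmm)          # hypothetical renderer
    # Start DDIM sampling from noise (a Diff-AE can alternatively stochastically
    # encode x_T from the input image to better preserve low-level details).
    x_t = torch.randn_like(image)
    for t in reversed(range(ddim_steps)):
        t_batch = torch.full((image.size(0),), t, device=image.device, dtype=torch.long)
        control = exp_facenet(snapshot, t_batch)     # multi-scale control features
        eps = ddim_decoder(x_t, t_batch, z_sem=z_sem, control=control)
        x_t = ddim_update(x_t, eps, t_batch)         # hypothetical deterministic DDIM step
    return x_t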
pipeline image
Pipeline overview. Our DisControlFace leverages a Diffusion Autoencoder (Diff-AE) as the reconstruction backbone and freezes its pre-trained weights to maintain the semantically deterministic conditioning capability, which is effective in reducing semantic information shift during the editing of the input portrait image. Then, an explicit face control network, Exp-FaceNet, compatible with the Diff-AE is constructed; it takes pixel-aligned snapshots rendered from estimated explicit parameters as inputs and generates multi-scale control features to condition the DDIM decoder. Moreover, a random semantic masking (RSM) training strategy is designed to enable disentangled explicit face control by Exp-FaceNet.
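The following is a minimal PyTorch-style sketch of one training step under this setup, assuming the pre-trained Diff-AE (semantic encoder and DDIM decoder) is frozen and only Exp-FaceNet receives gradients; the random semantic masking is approximated here as patch masking of the image fed to the semantic encoder. Helper names such as random_patch_mask and q_sample, and the decoder's keyword arguments, are hypothetical placeholders rather than the released implementation.

import torch
import torch.nn.functional as F

def rsm_training_step(x0, snapshot, semantic_encoder, ddim_decoder,
                      exp_facenet, optimizer, mask_ratio=0.5, num_timesteps=1000):
    """One training step of Exp-FaceNet with random semantic masking (RSM).

    x0:       clean portrait images, (B, 3, H, W)
    snapshot: pixel-aligned rendering from estimated 3DMM parameters, (B, 3, H, W)
    """
    # Frozen semantic branch: randomly mask patches of the input before encoding,
    # weakening the semantic condition so Exp-FaceNet must supply explicit structure.
    with torch.no_grad():
        masked_x0 = random_patch_mask(x0, ratio=mask_ratio)   # hypothetical helper
        z_sem = semantic_encoder(masked_x0)                    # semantic code

    # Standard forward-diffusion noising of the clean image.
    t = torch.randint(0, num_timesteps, (x0.size(0),), device=x0.device)
    noise = torch.randn_like(x0)
    x_t = q_sample(x0, t, noise)                               # hypothetical noising helper

    # Trainable branch: multi-scale control features from the explicit snapshot.
    control_feats = exp_facenet(snapshot, t)

    # Frozen DDIM decoder predicts the noise, conditioned on both controls.
    eps_pred = ddim_decoder(x_t, t, z_sem=z_sem, control=control_feats)

    loss = F.mse_loss(eps_pred, noise)
    optimizer.zero_grad()
    loss.backward()        # gradients reach only Exp-FaceNet; Diff-AE weights stay frozen
    optimizer.step()
    return loss.item()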


Video

BibTeX

@inproceedings{jia2024discontrolface,
  title={DisControlFace: Adding Disentangled Control to Diffusion Autoencoder for One-shot Explicit Facial Image Editing},
  author={Jia, Haozhe and Li, Yan and Cui, Hengfei and Xu, Di and Yang, Changpeng and Wang, Yuwang and Yu, Tao},
  booktitle={Proceedings of the ACM International Conference on Multimedia},
  year={2024}
}