Intrinsic image decomposition is the task of recovering the image formation components, i.e. reflectance and shading, from an image. Previous methods either employ explicit priors to constrain the problem, or rely on implicit constraints formulated through the losses of deep learning models. These methods can be negatively influenced by strong illumination conditions, causing shading-reflectance leakage.
Therefore, in this paper, an end-to-end edge-driven hybrid CNN approach is proposed for intrinsic image decomposition. Edges correspond to illumination-invariant gradients. To handle hard negative illumination transitions, a hierarchical approach is taken, combining global and local refinement layers. Attention layers are used to further strengthen the learning process.
An extensive ablation study and large-scale experiments show that it is beneficial for edge-driven hybrid IID networks to use illumination-invariant descriptors, and that separating global and local cues improves network performance. Finally, it is shown that the proposed method achieves state-of-the-art performance and generalises well to real-world images.
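To illustrate why edges derived from photometric invariants are attractive, the sketch below computes an edge map from log-chromaticity gradients, which are unaffected by a uniform scaling of the illumination under a Lambertian model. This is a generic invariant descriptor written for illustration only; the function name and implementation are assumptions, not the paper's exact formulation.

```python
import numpy as np

def invariant_edges(img, eps=1e-6):
    """Edge map from log-chromaticity gradients.

    Under a Lambertian model, a change in illumination intensity scales
    all colour channels by the same factor, so gradients of log(R/G) and
    log(B/G) are invariant to that scaling. Illustrative sketch, not the
    descriptor used in PIE-Net.
    """
    img = img.astype(np.float64) + eps  # avoid log(0)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # Log-chromaticity channels: per-pixel intensity cancels in the ratio.
    c1 = np.log(r / g)
    c2 = np.log(b / g)

    def grad_mag(c):
        # Simple finite-difference gradient magnitude.
        gy, gx = np.gradient(c)
        return np.hypot(gx, gy)

    return grad_mag(c1) + grad_mag(c2)

# A uniform illumination change leaves the edge map (nearly) unchanged:
rng = np.random.default_rng(0)
img = 0.1 + 0.9 * rng.random((8, 8, 3))  # keep values away from zero
e_bright = invariant_edges(img)
e_dim = invariant_edges(0.5 * img)
print(np.allclose(e_bright, e_dim, atol=1e-4))  # True
```

In contrast, gradients taken directly on RGB intensities change with the illumination, which is why strong shading transitions can leak into the reflectance estimate.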
Results
Citation
@inproceedings{dasPIENet,
  title     = {PIE-Net: Photometric Invariant Edge Guided Network for Intrinsic Image Decomposition},
  author    = {Partha Das and Sezer Karaoglu and Theo Gevers},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2022}
}
Paper
PIE-Net: Photometric Invariant Edge Guided Network for Intrinsic Image Decomposition