StyleShade3D Tool
Our method integrates deep learning models, such as the Segment Anything Model (SAM), into a traditional 3D mesh and material reconstruction pipeline. The process begins with generating a segmentation atlas (a 2D parameterized segmentation map over the 3D surface), which is then used for semantic shading: applying different shading models and material assets to different segments of the 3D model, as well as stylizing the meshes.
The 3D model is processed in BlenderProc, where multi-view images are generated through an all-around camera animation to ensure comprehensive coverage for the segmentation atlas. Taking inspiration from path-planning approaches, we select a minimal set of views that covers the 3D surface. From these views, we create multi-view segmentation maps (segmented representations of the object from multiple viewpoints) using the tracking feature of SAM2.
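As a rough sketch of the rendering step, the following BlenderProc script orbits a camera around the model and renders an image at each pose. The file path, orbit radius, elevations, and view count are illustrative placeholders; the actual view selection follows the path-planning criterion described above rather than a fixed orbit.

```python
import blenderproc as bproc
import numpy as np

bproc.init()

# Load the CAD model (path is illustrative)
objs = bproc.loader.load_obj("statue.obj")

# Point of interest: the object's center, which every camera looks at
poi = bproc.object.compute_poi(objs)

# All-around orbit: evenly spaced azimuths at two elevations, a simple
# stand-in for the path-planned minimal view set described in the text
for elevation in (0.3, 1.2):
    for azimuth in np.linspace(0, 2 * np.pi, 24, endpoint=False):
        location = poi + np.array([
            3.0 * np.cos(azimuth),
            3.0 * np.sin(azimuth),
            elevation,
        ])
        rotation = bproc.camera.rotation_from_forward_vec(poi - location)
        cam2world = bproc.math.build_transformation_mat(location, rotation)
        bproc.camera.add_camera_pose(cam2world)

# Render the multi-view images that SAM2 will segment and track
data = bproc.renderer.render()
```

The ordered image sequence can then be treated as a video so that SAM2's tracking propagates a segment prompt across all views. The checkpoint and config paths, frame directory, and prompt coordinates below are assumptions for illustration.

```python
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

# Checkpoint/config paths are illustrative
predictor = build_sam2_video_predictor(
    "configs/sam2.1/sam2.1_hiera_l.yaml", "checkpoints/sam2.1_hiera_large.pt"
)

with torch.inference_mode():
    # The multi-view renders, saved as ordered frames, act as a "video"
    state = predictor.init_state(video_path="renders/")

    # Prompt one segment with a foreground click on the first view
    # (coordinates are hypothetical)
    predictor.add_new_points_or_box(
        state, frame_idx=0, obj_id=1,
        points=np.array([[420, 310]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),  # 1 = foreground click
    )

    # Propagate the prompt through all views to get per-view masks
    masks = {}
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        masks[frame_idx] = (mask_logits[0] > 0).cpu().numpy()
```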
Since CAD data often lacks texture atlas coordinates, we generate a texture atlas using Blender’s ‘Smart UV Project’ algorithm. We then create the segmentation atlas by projecting the model from the different views and mapping the segmentation into the 2D atlas.
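A minimal sketch of the unwrapping step with Blender's Python API is shown below; the angle limit and island margin are illustrative values, not the ones used in the pipeline.

```python
import bpy

# Assumes the CAD mesh has been imported and is the active object
obj = bpy.context.active_object
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')

# Smart UV Project: automatic unwrap for CAD data that lacks atlas
# coordinates (angle_limit is in radians in recent Blender versions)
bpy.ops.uv.smart_project(angle_limit=1.15, island_margin=0.02)

bpy.ops.object.mode_set(mode='OBJECT')
```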
We leverage the semantic atlas, deep stylization models, and various shading models for region-specific stylization and shading. StyleShade3D is a WebGL-based tool that consumes this segmentation atlas and lets us stylize and shade different regions of a 3D model.
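The per-region lookup the WebGL tool performs on the GPU can be illustrated on the CPU with NumPy: each texel's segment ID indexes a material table. The file name, segment IDs, and albedo values below are hypothetical.

```python
import numpy as np

# Segmentation atlas: per-texel integer segment IDs (file is illustrative)
segmentation_atlas = np.load("segmentation_atlas.npy")  # shape (H, W)

# Hypothetical material table: segment ID -> RGB albedo
materials = {
    0: np.array([0.80, 0.80, 0.75]),  # e.g. marble body
    1: np.array([0.30, 0.20, 0.10]),  # e.g. hair
    2: np.array([0.50, 0.10, 0.10]),  # e.g. drape
}

# Paint each segment's texels with its assigned material color
albedo_atlas = np.zeros((*segmentation_atlas.shape, 3))
for seg_id, albedo in materials.items():
    albedo_atlas[segmentation_atlas == seg_id] = albedo
```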
The StyleShade3D tool allows us to stylize (marble) and shade (hair and drape) different regions of the statue.
Credits
Saptarshi Neil Sinha, Andreas Zapf (Fraunhofer IGD), Isabel Yoko Arteaga Kiyomoto (NTNU), Donata Magrini, Roberta Iannaccone (CNR ISPC)
Learn more
- Sinha, Saptarshi Neil, Paul Julius Kühn, Pavel Rojtberg, Holger Graf, Arjan Kuijper, and Michael Weinmann. 2024. Semantic Stylization and Shading via Segmentation Atlas Utilizing Deep Learning Approaches. The Eurographics Association. https://doi.org/10.2312/STAG.20241352.