{"id":3329,"date":"2025-08-27T14:35:10","date_gmt":"2025-08-27T12:35:10","guid":{"rendered":"https:\/\/perceive-horizon.eu\/?page_id=3329"},"modified":"2025-09-24T17:18:20","modified_gmt":"2025-09-24T15:18:20","slug":"tool-styleshade3d","status":"publish","type":"page","link":"https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/tool-styleshade3d\/","title":{"rendered":"Tool StyleShade3D"},"content":{"rendered":"<h2 style=\"text-align: center; margin-bottom: 80px;\">StyleShade3D Tool<\/h2>\n<div class=\"perceive-container\">\n<p class=\"custom-font\">Our novel method integrates deep learning-based models, such as the Segment Anything Model (SAM), into a traditional 3D mesh and material reconstruction pipeline. This process begins with generating a segmentation atlas (a 2D parameterized segmentation map over the 3D surface), which is then used for semantic shading. This means applying different shading models and material assets to various segments of the 3D model, as well as stylizing the meshes.<\/p>\n<\/div>\n<p><!-- =========================\n\n\n\n<section class=\"nl-banner\" role=\"region\" aria-label=\"Evaluate\">\n  \n\n<div class=\"nl-banner__inner\">\n    \n\n<h2 class=\"nl-banner__title\">Your opinion matters<\/h2>\n\n\n    \n\n<p class=\"nl-banner__tagline\">Fill in the following form to test the PERCEIVE StyleShade3D Tool<\/p>\n\n\n    \n\n<div class=\"nl-banner__actions\">\n      <a class=\"nl-banner__cta\" href=\"https:\/\/perceive-tools.igd.fraunhofer.de\/venus\/\" target=\"_blank\">Try the tool<\/a>\n      <a class=\"nl-banner__cta\" href=\"https:\/\/forms.cloud.microsoft\/e\/urcwWmPf1S\">Fill in the form<\/a>\n      <a class=\"nl-banner__cta nl-banner__cta--outline\" href=\"#nl-modal\">Show QR Code<\/a>\n    <\/div>\n\n\n  <\/div>\n\n\n<\/section>\n\n\n\n========================= --><\/p>\n<div class=\"perceive-container\">\n<p class=\"custom-font\">The 3D model is processed in BlenderProc, where multi-view images are generated through an all-around 
camera animation to ensure comprehensive coverage for the segmentation atlas. We take inspiration from path-planning approaches to determine a minimal set of views that covers the 3D surface. We create multi-view segmentation maps\u2014segmented representations of an object from multiple viewpoints\u2014using the tracking feature of SAM2.<\/p>\n<p class=\"custom-font\">Since CAD data often lacks texture atlas coordinates, we generate the texture atlas using Blender&#8217;s &#8216;Smart UV Project&#8217; algorithm. Then, we create the segmentation atlas by projecting the model from different views and mapping the texture into a 2D atlas.<\/p>\n<p class=\"custom-font\">We leverage a semantic atlas, deep stylization models, and various shading models for region-specific stylization and shading. StyleShade3D itself is a WebGL-based tool that consumes this segmentation atlas and allows us to stylize and shade different regions of a 3D model.<\/p>\n<p><img decoding=\"async\" class=\"centered-img\" src=\"http:\/\/perceive-horizon.eu\/wp-content\/uploads\/2025\/08\/styleShade3D.png\"><\/p>\n<p class=\"custom-font\"><small>The StyleShade3D tool allows us to stylize (marble) and shade (hair and drape) different regions of the statue.<\/small><\/p>\n<\/div>\n<div class=\"perceive-container\">\n<h6>Credits<\/h6>\n<p class=\"custom-font\">\nSaptarshi Neil Sinha <a href=\"https:\/\/orcid.org\/0000-0001-6637-0379\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/perceive-horizon.eu\/wp-content\/uploads\/2025\/08\/orcid.png\" alt=\"ORCID\" width=\"18\" height=\"18\"><\/a>, Andreas Zapf <a href=\"https:\/\/orcid.org\/0000-0002-5916-6406\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/perceive-horizon.eu\/wp-content\/uploads\/2025\/08\/orcid.png\" alt=\"ORCID\" width=\"18\" height=\"18\"><\/a> (Fraunhofer IGD), Isabel Yoko Arteaga Kiyomoto <a href=\"https:\/\/orcid.org\/0000-0002-6787-789X\" 
target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/perceive-horizon.eu\/wp-content\/uploads\/2025\/08\/orcid.png\" alt=\"ORCID\" width=\"18\" height=\"18\"><\/a> (NTNU), Donata Magrini <a href=\"https:\/\/orcid.org\/0000-0001-8639-3244\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/perceive-horizon.eu\/wp-content\/uploads\/2025\/08\/orcid.png\" alt=\"ORCID\" width=\"18\" height=\"18\"><\/a>, Roberta Iannaccone <a href=\"https:\/\/orcid.org\/0000-0002-8931-1969\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/perceive-horizon.eu\/wp-content\/uploads\/2025\/08\/orcid.png\" alt=\"ORCID\" width=\"18\" height=\"18\"><\/a> (CNR ISPC)<\/p>\n<h6>Learn more<\/h6>\n<ul>\n<li>Sinha, Saptarshi Neil, Paul Julius K\u00fchn, Pavel Rojtberg, Holger Graf, Arjan Kuijper, and Michael Weinmann. 2024. <em>Semantic Stylization and Shading via Segmentation Atlas Utilizing Deep Learning Approaches<\/em>. The Eurographics Association. 
<a href=\"https:\/\/diglib.eg.org\/items\/8e062f02-ccd6-4467-9df1-8dd7d23d9965\">https:\/\/doi.org\/10.2312\/STAG.20241352<\/a>.<\/li>\n<\/ul>\n<\/div>\n<div class=\"button-wrapper\" style=\"margin: 2rem;\"><a class=\"wp-block-button__link white-button has-background has-medium-font-size has-custom-font-size wp-element-button\" style=\"border-radius: 52px; padding: 15px 20px;\" href=\"http:\/\/perceive-horizon.eu\/index.php\/perceive-tools\/\"> Go back to all<br \/>Tools &#038; Demonstrators <\/a><\/div>\n<p><!-- Modal (opens with :target) --><\/p>\n<div id=\"nl-modal\" class=\"nl-modal\" aria-hidden=\"true\">\n  <!-- Click overlay to close -->\n  <a class=\"nl-modal__overlay\" href=\"#\" aria-hidden=\"true\"><\/a>\n<div class=\"nl-modal__dialog\" role=\"dialog\" aria-modal=\"true\"\n       aria-labelledby=\"nl-modal-title\" aria-describedby=\"nl-modal-desc\">\n<header class=\"nl-modal__header\">\n<h3 id=\"nl-modal-title\" class=\"nl-modal__title\">Scan the QR Code<\/h3>\n<\/header>\n<div class=\"nl-modal__content\" id=\"nl-modal-desc\">\n<div class=\"nl-modal__media\">\n<p class=\"custom-font\">\n        <img decoding=\"async\" class=\"qr\" src=\"http:\/\/perceive-horizon.eu\/wp-content\/uploads\/2025\/09\/StyleShade3D_1.png\" alt=\"QR code\">\n       <\/p>\n<\/div>\n<div class=\"nl-modal__actions\">\n        <a class=\"nl-banner__cta nl-banner__cta--muted\" href=\"#\" aria-label=\"Dismiss\">Maybe later<\/a><\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>StyleShade3D Tool Our novel method integrates deep learning-based models, such as the Segment Anything Model (SAM), into a traditional 3D mesh and material reconstruction pipeline. This process begins with generating a segmentation atlas (a 2D parameterized segmentation map over the 3D surface), which is then used for semantic shading. 
This means applying different shading models and material assets to various [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":2793,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"_uag_custom_page_level_css":"","footnotes":""},"class_list":["post-3329","page","type-page","status-publish","hentry"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v24.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Tool StyleShade3D - PERCEIVE<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/tool-styleshade3d\/\" \/>\n<meta property=\"og:locale\" content=\"en_GB\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Tool StyleShade3D - PERCEIVE\" \/>\n<meta property=\"og:description\" content=\"StyleShade3D Tool Our novel method integrates deep learning-based models, like Segment anything (SAM), into a traditional 3D mesh and material reconstruction pipeline. This process begins with generating a segmentation atlas (2D parameterized segmentation-map over the 3D surface), which is then used for semantic shading. 
This means applying different shading models and material assets to various [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/tool-styleshade3d\/\" \/>\n<meta property=\"og:site_name\" content=\"PERCEIVE\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/PERCEIVEhorizon\" \/>\n<meta property=\"article:modified_time\" content=\"2025-09-24T15:18:20+00:00\" \/>\n<meta property=\"og:image\" content=\"http:\/\/perceive-horizon.eu\/wp-content\/uploads\/2025\/08\/styleShade3D.png\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:site\" content=\"@PERCEIVE_info\" \/>\n<meta name=\"twitter:label1\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/tool-styleshade3d\/\",\"url\":\"https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/tool-styleshade3d\/\",\"name\":\"Tool StyleShade3D - 
PERCEIVE\",\"isPartOf\":{\"@id\":\"https:\/\/perceive-horizon.eu\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/tool-styleshade3d\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/tool-styleshade3d\/#primaryimage\"},\"thumbnailUrl\":\"http:\/\/perceive-horizon.eu\/wp-content\/uploads\/2025\/08\/styleShade3D.png\",\"datePublished\":\"2025-08-27T12:35:10+00:00\",\"dateModified\":\"2025-09-24T15:18:20+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/tool-styleshade3d\/#breadcrumb\"},\"inLanguage\":\"en-GB\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/tool-styleshade3d\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-GB\",\"@id\":\"https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/tool-styleshade3d\/#primaryimage\",\"url\":\"http:\/\/perceive-horizon.eu\/wp-content\/uploads\/2025\/08\/styleShade3D.png\",\"contentUrl\":\"http:\/\/perceive-horizon.eu\/wp-content\/uploads\/2025\/08\/styleShade3D.png\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/tool-styleshade3d\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/perceive-horizon.eu\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Results\",\"item\":\"https:\/\/perceive-horizon.eu\/index.php\/results\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"PERCEIVE Tool &#038; Demonstrators\",\"item\":\"https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/\"},{\"@type\":\"ListItem\",\"position\":4,\"name\":\"Tool StyleShade3D\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/perceive-horizon.eu\/#website\",\"url\":\"https:\/\/perceive-horizon.eu\/\",\"name\":\"PERCEIVE\",\"description\":\"Perceptive 
Enhanced Realities of Colored collEctions\\u0003through aI and Virtual Experiences\",\"publisher\":{\"@id\":\"https:\/\/perceive-horizon.eu\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/perceive-horizon.eu\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-GB\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/perceive-horizon.eu\/#organization\",\"name\":\"PERCEIVE-horizon\",\"url\":\"https:\/\/perceive-horizon.eu\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-GB\",\"@id\":\"https:\/\/perceive-horizon.eu\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/perceive-horizon.eu\/wp-content\/uploads\/2023\/02\/Fichier-2@2x.png\",\"contentUrl\":\"https:\/\/perceive-horizon.eu\/wp-content\/uploads\/2023\/02\/Fichier-2@2x.png\",\"width\":400,\"height\":58,\"caption\":\"PERCEIVE-horizon\"},\"image\":{\"@id\":\"https:\/\/perceive-horizon.eu\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/PERCEIVEhorizon\",\"https:\/\/x.com\/PERCEIVE_info\",\"https:\/\/www.instagram.com\/perceive_horizon\/\",\"https:\/\/www.linkedin.com\/company\/perceive-horizon\/\",\"https:\/\/www.youtube.com\/@Perceive_horizon\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Tool StyleShade3D - PERCEIVE","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/tool-styleshade3d\/","og_locale":"en_GB","og_type":"article","og_title":"Tool StyleShade3D - PERCEIVE","og_description":"StyleShade3D Tool Our novel method integrates deep learning-based models, like Segment anything (SAM), into a traditional 3D mesh and material reconstruction pipeline. 
This process begins with generating a segmentation atlas (2D parameterized segmentation-map over the 3D surface), which is then used for semantic shading. This means applying different shading models and material assets to various [&hellip;]","og_url":"https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/tool-styleshade3d\/","og_site_name":"PERCEIVE","article_publisher":"https:\/\/www.facebook.com\/PERCEIVEhorizon","article_modified_time":"2025-09-24T15:18:20+00:00","og_image":[{"url":"http:\/\/perceive-horizon.eu\/wp-content\/uploads\/2025\/08\/styleShade3D.png","type":"","width":"","height":""}],"twitter_card":"summary_large_image","twitter_site":"@PERCEIVE_info","twitter_misc":{"Estimated reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/tool-styleshade3d\/","url":"https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/tool-styleshade3d\/","name":"Tool StyleShade3D - 
PERCEIVE","isPartOf":{"@id":"https:\/\/perceive-horizon.eu\/#website"},"primaryImageOfPage":{"@id":"https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/tool-styleshade3d\/#primaryimage"},"image":{"@id":"https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/tool-styleshade3d\/#primaryimage"},"thumbnailUrl":"http:\/\/perceive-horizon.eu\/wp-content\/uploads\/2025\/08\/styleShade3D.png","datePublished":"2025-08-27T12:35:10+00:00","dateModified":"2025-09-24T15:18:20+00:00","breadcrumb":{"@id":"https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/tool-styleshade3d\/#breadcrumb"},"inLanguage":"en-GB","potentialAction":[{"@type":"ReadAction","target":["https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/tool-styleshade3d\/"]}]},{"@type":"ImageObject","inLanguage":"en-GB","@id":"https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/tool-styleshade3d\/#primaryimage","url":"http:\/\/perceive-horizon.eu\/wp-content\/uploads\/2025\/08\/styleShade3D.png","contentUrl":"http:\/\/perceive-horizon.eu\/wp-content\/uploads\/2025\/08\/styleShade3D.png"},{"@type":"BreadcrumbList","@id":"https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/tool-styleshade3d\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/perceive-horizon.eu\/"},{"@type":"ListItem","position":2,"name":"Results","item":"https:\/\/perceive-horizon.eu\/index.php\/results\/"},{"@type":"ListItem","position":3,"name":"PERCEIVE Tool &#038; Demonstrators","item":"https:\/\/perceive-horizon.eu\/index.php\/results\/perceive-tools\/"},{"@type":"ListItem","position":4,"name":"Tool StyleShade3D"}]},{"@type":"WebSite","@id":"https:\/\/perceive-horizon.eu\/#website","url":"https:\/\/perceive-horizon.eu\/","name":"PERCEIVE","description":"Perceptive Enhanced Realities of Colored collEctions\u0003through aI and Virtual 
Experiences","publisher":{"@id":"https:\/\/perceive-horizon.eu\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/perceive-horizon.eu\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-GB"},{"@type":"Organization","@id":"https:\/\/perceive-horizon.eu\/#organization","name":"PERCEIVE-horizon","url":"https:\/\/perceive-horizon.eu\/","logo":{"@type":"ImageObject","inLanguage":"en-GB","@id":"https:\/\/perceive-horizon.eu\/#\/schema\/logo\/image\/","url":"https:\/\/perceive-horizon.eu\/wp-content\/uploads\/2023\/02\/Fichier-2@2x.png","contentUrl":"https:\/\/perceive-horizon.eu\/wp-content\/uploads\/2023\/02\/Fichier-2@2x.png","width":400,"height":58,"caption":"PERCEIVE-horizon"},"image":{"@id":"https:\/\/perceive-horizon.eu\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/PERCEIVEhorizon","https:\/\/x.com\/PERCEIVE_info","https:\/\/www.instagram.com\/perceive_horizon\/","https:\/\/www.linkedin.com\/company\/perceive-horizon\/","https:\/\/www.youtube.com\/@Perceive_horizon"]}]}},"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false},"uagb_author_info":{"display_name":"admin6543","author_link":"https:\/\/perceive-horizon.eu\/index.php\/author\/admin6543\/"},"uagb_comment_info":0,"uagb_excerpt":"StyleShade3D Tool Our novel method integrates deep learning-based models, like Segment anything (SAM), into a traditional 3D mesh and material reconstruction pipeline. This process begins with generating a segmentation atlas (2D parameterized segmentation-map over the 3D surface), which is then used for semantic shading. 
This means applying different shading models and material assets to various&hellip;","_links":{"self":[{"href":"https:\/\/perceive-horizon.eu\/index.php\/wp-json\/wp\/v2\/pages\/3329","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/perceive-horizon.eu\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/perceive-horizon.eu\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/perceive-horizon.eu\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/perceive-horizon.eu\/index.php\/wp-json\/wp\/v2\/comments?post=3329"}],"version-history":[{"count":10,"href":"https:\/\/perceive-horizon.eu\/index.php\/wp-json\/wp\/v2\/pages\/3329\/revisions"}],"predecessor-version":[{"id":3529,"href":"https:\/\/perceive-horizon.eu\/index.php\/wp-json\/wp\/v2\/pages\/3329\/revisions\/3529"}],"up":[{"embeddable":true,"href":"https:\/\/perceive-horizon.eu\/index.php\/wp-json\/wp\/v2\/pages\/2793"}],"wp:attachment":[{"href":"https:\/\/perceive-horizon.eu\/index.php\/wp-json\/wp\/v2\/media?parent=3329"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}