Information

  • Publication Type: Journal Paper (without talk)
  • Workgroup(s)/Project(s): not specified
  • Date: May 2024
  • Article Number: e15016
  • DOI: 10.1111/cgf.15016
  • ISSN: 1467-8659
  • Journal: Computer Graphics Forum
  • Number: 2
  • Pages: 13
  • Volume: 43
  • Publisher: WILEY
  • Keywords: Deep learning (DL), Computer Graphics, Texture Synthesis

Abstract

Mesh texture synthesis is a key component in the automatic generation of 3D content. Existing learning-based methods have drawbacks—either by disregarding the shape manifold during texture generation or by requiring a large number of different views to mitigate occlusion-related inconsistencies. In this paper, we present a novel surface-aware approach for mesh texture synthesis that overcomes these drawbacks by leveraging the pre-trained weights of 2D Convolutional Neural Networks (CNNs) with the same architecture, but with convolutions designed for 3D meshes. Our proposed network keeps track of the oriented patches surrounding each texel, enabling seamless texture synthesis and retaining local similarity to classical 2D convolutions with square kernels. Our approach allows us to synthesize textures that account for the geometric content of mesh surfaces, eliminating discontinuities and achieving comparable quality to 2D image synthesis algorithms. We compare our approach with state-of-the-art methods where, through qualitative and quantitative evaluations, we demonstrate that our approach is more effective for a variety of meshes and styles, while also producing visually appealing and consistent textures on meshes.
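The abstract notes that the method gathers the oriented patch around each texel so that pre-trained 2D kernels can be reused, "retaining local similarity to classical 2D convolutions with square kernels". A minimal sketch (not the authors' code; all names and shapes here are illustrative assumptions) of why patch gathering suffices: on a flat region, applying a flattened 2D kernel to each gathered K×K patch reproduces an ordinary 2D convolution.

```python
import numpy as np

# Hedged illustration, not the paper's implementation: gathering the K x K
# neighbourhood around each texel ("patch tracking") and taking a dot product
# with the flattened kernel is the im2col view of 2D convolution. On a flat
# surface this matches direct cross-correlation exactly, which is why a
# pre-trained 2D kernel can be reused unchanged on gathered mesh patches.

rng = np.random.default_rng(0)
H, W, K = 6, 6, 3                      # texture size and kernel size (assumed)
texture = rng.standard_normal((H, W))
kernel = rng.standard_normal((K, K))   # stand-in for one pre-trained 2D filter

# Gather the K x K patch around each interior texel.
patches = np.empty((H - K + 1, W - K + 1, K * K))
for y in range(H - K + 1):
    for x in range(W - K + 1):
        patches[y, x] = texture[y:y + K, x:x + K].ravel()

# Flattened kernel applied per patch...
via_patches = patches @ kernel.ravel()

# ...equals direct (valid-mode) 2D cross-correlation with the same kernel.
direct = np.array([[np.sum(texture[y:y + K, x:x + K] * kernel)
                    for x in range(W - K + 1)] for y in range(H - K + 1)])

assert np.allclose(via_patches, direct)
```

On a curved surface the gathered patches are oriented along the mesh rather than an image grid, which is what the paper's surface-aware convolutions add on top of this equivalence.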

Additional Files and Images

No additional files or images.

Weblinks

  • https://www.cg.tuwien.ac.at/research/publications/2024/kovacs-2024-smt/

BibTeX

@article{kovacs-2024-smt,
  title =      "Surface-aware Mesh Texture Synthesis with Pre-trained 2D
               CNNs",
  author =     "Áron Samuel Kovács and Pedro Hermosilla-Casajus and
               Renata Raidou",
  year =       "2024",
  abstract =   "Mesh texture synthesis is a key component in the automatic
               generation of 3D content. Existing learning-based methods
               have drawbacks—either by disregarding the shape manifold
               during texture generation or by requiring a large number of
               different views to mitigate occlusion-related
               inconsistencies. In this paper, we present a novel
               surface-aware approach for mesh texture synthesis that
               overcomes these drawbacks by leveraging the pre-trained
               weights of 2D Convolutional Neural Networks (CNNs) with the
               same architecture, but with convolutions designed for 3D
               meshes. Our proposed network keeps track of the oriented
               patches surrounding each texel, enabling seamless texture
               synthesis and retaining local similarity to classical 2D
               convolutions with square kernels. Our approach allows us to
               synthesize textures that account for the geometric content
               of mesh surfaces, eliminating discontinuities and achieving
               comparable quality to 2D image synthesis algorithms. We
               compare our approach with state-of-the-art methods where,
               through qualitative and quantitative evaluations, we
               demonstrate that our approach is more effective for a
               variety of meshes and styles, while also producing visually
               appealing and consistent textures on meshes.",
  month =      may,
  articleno =  "e15016",
  doi =        "10.1111/cgf.15016",
  issn =       "1467-8659",
  journal =    "Computer Graphics Forum",
  number =     "2",
  pages =      "13",
  volume =     "43",
  publisher =  "WILEY",
  keywords =   "Deep learning (DL), Computer Graphics, Texture Synthesis",
  URL =        "https://www.cg.tuwien.ac.at/research/publications/2024/kovacs-2024-smt/",
}