[Defense] Object Oriented 3D Modeling from a Single View Sketch
Friday, July 16, 2021
1:00 pm - 3:00 pm CT
will defend his dissertation
Object Oriented 3D Modeling from a Single View Sketch
Sketch-based modeling has been an active area of research for the past two decades. To bridge the gap between a 2D sketch and a 3D model, previous work utilizes multi-view 2D input or contours with semantic meaning to infer the depth information needed to build the corresponding 3D model. However, these techniques require carefully aligned input or complex interaction. The goal of this dissertation is to reduce the input complexity of sketch-based modeling and to improve the modeling results, including surface details. With the techniques proposed in this dissertation, two single-view sketch-based modeling frameworks are introduced.
First, a framework is proposed that builds a novel space jointly embedding both 2D occluding contours and 3D shapes via a variational autoencoder (VAE) and a volumetric autoencoder. Given a dataset of 3D shapes, their occluding contours are extracted via projections from random views, and these contours are used to train the VAE. The resulting continuous embedding space, where each point is a latent vector representing an occluding contour, can then be used to measure the similarity between occluding contours. Next, the volumetric autoencoder is trained to first map 3D shapes onto the embedding space through a supervised learning process and then decode the merged latent vectors of three occluding contours (from three different views) of a 3D shape into its 3D voxel representation. To ensure the extensibility of the embedding space and the usefulness of the output voxels, the 3D modeling ability is enhanced for categories not present in the training dataset by adding contours extracted from web images to the embedding space, and both symmetry-based and assembly-based refinement are employed to improve the quality of the 3D modeling results.
Second, to improve the generation of surface details, a novel object-oriented approach is proposed to model 3D objects with geometric details from a single-view sketch input. Specifically, a novel differentiable sketch renderer is introduced to learn the geometric relation between normal maps and 2D strokes. On top of this differentiable sketch renderer, an end-to-end framework is presented that generates 3D models with plausible geometric details from a single-view sketch, where two novel losses, based on silhouette-based confidence maps and regression similarities, are introduced for better convergence. The framework back-propagates gradients between the rendered sketch and the input sketch and updates the learnable weights to enhance the geometric details of the predicted 3D object.
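The first framework's pipeline (encode each of three occluding contours into a shared latent space, merge the latents, decode to voxels) can be sketched as follows. This is a minimal PyTorch-style illustration: the layer sizes, the 64x64 contour resolution, the 32^3 voxel grid, and the mean-merge of the three contour latents are assumptions for the sketch, not the dissertation's exact architecture.

```python
# Illustrative sketch of the contour-VAE + volumetric-decoder pipeline.
# All sizes and the mean-merge strategy are assumptions, not the paper's spec.
import torch
import torch.nn as nn

LATENT = 128  # assumed dimensionality of the shared embedding space

class ContourVAE(nn.Module):
    """Encodes a 2D occluding-contour image into the shared latent space."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(), # 32 -> 16
            nn.Flatten(),
        )
        self.mu = nn.Linear(64 * 16 * 16, LATENT)
        self.logvar = nn.Linear(64 * 16 * 16, LATENT)

    def encode(self, contour):  # contour: (B, 1, 64, 64)
        h = self.enc(contour)
        return self.mu(h), self.logvar(h)

class VolumetricDecoder(nn.Module):
    """Decodes a merged latent vector into a 32^3 voxel occupancy grid."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT, 64 * 4 * 4 * 4)
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 4 -> 8
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1), nn.Sigmoid() # 16 -> 32
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 4, 4, 4)
        return self.dec(h)

def reconstruct(vae, decoder, views):
    """views: three contour renderings of one shape, shape (B, 3, 1, 64, 64)."""
    latents = [vae.encode(views[:, i])[0] for i in range(3)]
    z = torch.stack(latents).mean(dim=0)  # merge the three contour latents
    return decoder(z)                     # (B, 1, 32, 32, 32) occupancy
```

The key structural point is that both the contour encoder and the 3D-shape encoder (omitted here) target the same `LATENT`-dimensional space, so contour similarity and shape retrieval operate on comparable vectors.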
Online via MS Teams (code: ufqlgv2)
Dr. Zhigang Deng, dissertation advisor
Faculty, students and the general public are invited.