Novel View Synthesis for Large-scale Scene using Adversarial Loss

Authors

  • Xiaochuan Yin
  • Henglai Wei
  • Penghong Lin
  • Xiangwei Wang
  • Qijun Chen
Abstract

Novel view synthesis aims to synthesize new images from different viewpoints of given images. Most previous works focus on generating novel views of certain objects against a fixed background. However, for some applications, such as virtual reality or robotic manipulation, large changes in the background may occur due to the egomotion of the camera. Images of a large-scale environment generated from novel views may be distorted if the structure of the environment is not taken into account. In this work, we propose a novel fully convolutional network that exploits structural information explicitly by incorporating inverse depth features. The inverse depth features are obtained from CNNs trained with sparsely labeled depth values. This framework can easily fuse multiple images from different viewpoints. To fill in the missing textures of the generated image, an adversarial loss is applied, which also improves overall image quality. Our method is evaluated on the KITTI dataset. The results show that our method can generate novel views of large-scale scenes without distortion. The effectiveness of our approach is demonstrated through qualitative and quantitative evaluation.
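The abstract combines a pixel-level reconstruction objective with an adversarial loss on the synthesized view. A minimal sketch of such a combined generator loss is shown below; the L1 term, the non-saturating adversarial term, and the weight `lam` are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def generator_loss(pred, target, disc_on_fake, lam=0.01):
    """Combined loss: L1 reconstruction plus a non-saturating adversarial term.

    pred, target   : synthesized and ground-truth views, arrays in [0, 1]
    disc_on_fake   : discriminator probabilities on the synthesized view
    lam            : adversarial weight (hypothetical value, not from the paper)
    """
    recon = np.abs(pred - target).mean()                   # L1 photometric loss
    adv = -np.log(np.clip(disc_on_fake, 1e-8, 1.0)).mean() # -log D(G(x))
    return recon + lam * adv

# Tiny worked example: uniform 0.1 pixel error, discriminator output 0.5.
pred = np.full((4, 4, 3), 0.5)
target = np.full((4, 4, 3), 0.6)
loss = generator_loss(pred, target, np.array([0.5]))
```

The adversarial term pushes the generator to produce textures the discriminator cannot distinguish from real images, which is what lets it plausibly fill regions that have no correspondence in the source views.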


Similar Articles

Cross-View Image Synthesis using Conditional GANs

Learning to generate natural scenes has always been a challenging task in computer vision. It is even more painstaking when the generation is conditioned on images with drastically different views. This is mainly because understanding, corresponding, and transforming appearance and semantic information across views is not trivial. In this paper, we attempt to solve the novel problem of cross-vi...


TUCH: Turning Cross-view Hashing into Single-view Hashing via Generative Adversarial Nets

Cross-view retrieval, which focuses on searching images in response to text queries or vice versa, has received increasing attention recently. Cross-view hashing efficiently solves the cross-view retrieval problem with binary hash codes. Most existing works on cross-view hashing exploit multiview embedding methods to tackle this problem, which inevitably causes information loss in both i...


Stacked Conditional Generative Adversarial Networks for Jointly Learning Shadow Detection and Shadow Removal

Understanding shadows from a single image naturally divides into two tasks in previous studies: shadow detection and shadow removal. In this paper, we present a multi-task perspective, not embraced by any existing work, to jointly learn both detection and removal in an end-to-end fashion, aiming to enjoy the mutual benefits each task confers on the other. Our fr...


3D Reconstruction and Rendering from Image Sequences

This contribution describes a system for 3D surface reconstruction and novel view synthesis from image streams of an unknown but static scene. The system operates fully automatically and estimates camera pose and 3D scene geometry using Structure-from-Motion and dense multi-camera stereo reconstruction. From these estimates, novel views of the scene can be rendered at interactive rates.
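The rendering step described above amounts to reprojecting the reconstructed geometry into a novel camera pose. A minimal pinhole-projection sketch, with illustrative intrinsics and poses (not taken from the paper), looks like this:

```python
import numpy as np

def project(points, K, R, t):
    """Project Nx3 world points into a camera with intrinsics K and pose (R, t)."""
    cam = points @ R.T + t          # world -> camera coordinates
    uv = cam @ K.T                  # apply pinhole intrinsics
    return uv[:, :2] / uv[:, 2:3]   # perspective divide

# Illustrative intrinsics: focal length 500 px, principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
pts = np.array([[0.0, 0.0, 5.0]])  # one point 5 m in front of the reference camera

# Reference view: identity pose. Novel view: camera translated 0.5 m to the right,
# so the point appears shifted to the left in the image.
uv_ref = project(pts, K, np.eye(3), np.zeros(3))
uv_new = project(pts, K, np.eye(3), np.array([-0.5, 0.0, 0.0]))
```

Given dense depth from the stereo reconstruction, every reference pixel can be lifted to 3D and reprojected this way, which is how such systems synthesize novel views at interactive rates.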


Novel View Synthesis using Needle-Map Correspondence

Interest in view interpolation and novel view synthesis is growing. In this paper we show how dense correspondence can be found between needle-maps generated using shape-from-shading, which in turn can be used to generate new needle-maps. From these we can produce novel intermediate views, and also estimates of how each intermediate view would look under different lighting conditions. The appro...



Journal:
  • CoRR

Volume: abs/1802.07064

Publication date: 2018