Deep Learning-Based Generation of 3D Environments Using Earth Observation Data
Deep generative models, such as GANs (generative adversarial networks), are used to simulate object heights from Earth observation data such as satellite imagery. The model is capable of generating a 3D environment directly from satellite imagery, which was previously only possible using LiDAR or stereo imagery.
Project status: Under Development
Groups
Student Developers for AI
Intel Technologies
Intel Integrated Graphics
Overview / Usage
This deep learning-based geospatial data processing pipeline uses a conditional generative adversarial network (cGAN) to simulate object surface elevation directly from satellite imagery. In the process, it generates a 3D environment as a point cloud from multi-spectral satellite imagery. Currently, expensive aerial LiDAR surveys are performed to generate point clouds for monitoring the Earth's surface and for 3D mapping. With this approach, a 3D model of a location can be generated directly from multi-spectral imagery, which is available for every location on Earth. In the future, 3D environments for self-driving cars could also be generated from multi-spectral satellite imagery of unknown locations.
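Because the network predicts an elevation value per pixel, turning that raster into a point cloud is a simple reprojection step. The sketch below illustrates only that step; the function name, the dummy data, and the pixel_size ground-resolution parameter are assumptions made for illustration, not details taken from the project.

```python
import numpy as np

def elevation_to_point_cloud(elevation, pixel_size=1.0):
    """Convert a predicted elevation raster (H x W, in metres) into an
    N x 3 point cloud. `pixel_size` is the assumed ground resolution
    in metres per pixel."""
    h, w = elevation.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    points = np.stack(
        [xs.ravel() * pixel_size,   # easting
         ys.ravel() * pixel_size,   # northing
         elevation.ravel()],        # height above ground
        axis=1,
    )
    return points

# Example: a dummy 256 x 256 elevation map at an assumed 0.5 m resolution
dummy_dsm = np.random.rand(256, 256) * 30.0   # heights up to 30 m
cloud = elevation_to_point_cloud(dummy_dsm, pixel_size=0.5)
print(cloud.shape)   # (65536, 3)
```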
Methodology / Approach
The architecture implements an encoder-decoder network with skip connections.
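As a concrete illustration, below is a minimal sketch of such a generator, assuming a pix2pix-style setup in which it maps a multi-spectral image tile to a single-channel elevation map. The band count, network depth, and channel widths are illustrative assumptions, not the project's actual configuration.

```python
import torch
import torch.nn as nn

class UNetGenerator(nn.Module):
    """Encoder-decoder with skip connections (U-Net style), mapping an
    assumed 4-band multi-spectral tile to a 1-channel elevation map."""

    def __init__(self, in_ch=4, out_ch=1, base=64):
        super().__init__()
        # Encoder: strided convolutions halve the spatial resolution.
        self.enc1 = self._down(in_ch, base)         # 256 -> 128
        self.enc2 = self._down(base, base * 2)      # 128 -> 64
        self.enc3 = self._down(base * 2, base * 4)  # 64  -> 32
        self.enc4 = self._down(base * 4, base * 8)  # 32  -> 16
        # Decoder: transposed convolutions upsample; each step also
        # receives the matching encoder feature map via concatenation.
        self.dec4 = self._up(base * 8, base * 4)
        self.dec3 = self._up(base * 4 * 2, base * 2)
        self.dec2 = self._up(base * 2 * 2, base)
        self.dec1 = nn.Sequential(
            nn.ConvTranspose2d(base * 2, out_ch, 4, stride=2, padding=1),
            nn.Tanh(),  # elevation normalised to [-1, 1]
        )

    @staticmethod
    def _down(i, o):
        return nn.Sequential(
            nn.Conv2d(i, o, 4, stride=2, padding=1),
            nn.BatchNorm2d(o),
            nn.LeakyReLU(0.2, inplace=True),
        )

    @staticmethod
    def _up(i, o):
        return nn.Sequential(
            nn.ConvTranspose2d(i, o, 4, stride=2, padding=1),
            nn.BatchNorm2d(o),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        e4 = self.enc4(e3)
        d4 = self.dec4(e4)
        d3 = self.dec3(torch.cat([d4, e3], dim=1))    # skip connection
        d2 = self.dec2(torch.cat([d3, e2], dim=1))    # skip connection
        return self.dec1(torch.cat([d2, e1], dim=1))  # skip connection

# Shape check with a dummy 4-band 256 x 256 tile
g = UNetGenerator()
print(g(torch.randn(1, 4, 256, 256)).shape)  # torch.Size([1, 1, 256, 256])
```

The skip connections carry fine spatial detail from the encoder directly to the decoder, which matters here because building edges and small objects would otherwise be lost in the bottleneck.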