This course is an applied study of deep learning methods for extracting information from geospatial data, such as aerial imagery, multispectral imagery, digital terrain data, and other digital cartographic representations. We provide background and conceptual information in the form of lecture modules, video examples using ArcGIS Pro, and coding examples in Python (PyTorch) and R (geodl). We primarily focus on convolutional neural networks (CNNs) and semantic segmentation methods. As we continue to expand the course, we plan to add more material on transformer architectures.
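To give a sense of what semantic segmentation with PyTorch looks like in practice, the sketch below (not taken from the course materials) defines a tiny fully convolutional network that maps a hypothetical 4-band image chip to per-pixel class logits; the band count, class count, and layer sizes are illustrative assumptions only.

```python
# Minimal illustrative sketch of semantic segmentation in PyTorch.
# Assumes a 4-band multispectral chip and 3 output classes (both hypothetical).
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_channels=4, num_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, num_classes, kernel_size=1),  # per-pixel class scores
        )

    def forward(self, x):
        return self.net(x)

model = TinySegNet()
chip = torch.rand(1, 4, 128, 128)   # one 128 x 128 chip with 4 bands
logits = model(chip)                # shape: (1, num_classes, 128, 128)
prediction = logits.argmax(dim=1)   # per-pixel class labels
print(prediction.shape)             # torch.Size([1, 128, 128])
```

Real course examples use deeper encoder-decoder architectures, but the core idea is the same: the network outputs a class score for every pixel rather than a single label for the whole image.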
The material is divided into four sections:
This course assumes prior experience coding in Python; the R language is also used, but to a lesser extent. If you do not have coding experience, please take a look at our Methods in Open Science and Open-Source Spatial Analytics (R) courses, which explore coding in Python and R.
After completing this course, you will be able to:
If you have any questions or suggestions, feel free to contact us. We plan to continue updating and improving this course.
This course was produced by West Virginia View (http://www.wvview.org/) with support from AmericaView (https://americaview.org/). This material is based upon work supported by the U.S. Geological Survey under Grant/Cooperative Agreement No. G18AP00077. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the opinions or policies of the U.S. Geological Survey. Mention of trade names or commercial products does not constitute endorsement by the U.S. Geological Survey.

This course and associated materials were also supported by the National Science Foundation (NSF) (Federal Award ID No. 2046059: “CAREER: Mapping Anthropocene Geomorphology with Deep Learning, Big Data Spatial Analytics, and LiDAR”). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.