Semantic Segmentation Code for Buildings in Remote Sensing Images
This repository does not contain the comparison test code.
We use the ISPRS Potsdam and Vaihingen remote sensing datasets for our experiments. Dataset download address:
https://www.isprs.org/education/benchmarks/UrbanSemLab/2d-sem-label-potsdam.aspx
The Vaihingen dataset contains 33 patches (of different sizes), each consisting of a true orthophoto (TOP) extracted from a larger TOP mosaic and a DSM. For further information about the original input data, please refer to the data description of the object detection and 3D reconstruction benchmark. The Potsdam dataset contains 38 patches (of the same size), each likewise consisting of a true orthophoto (TOP) extracted from a larger TOP mosaic and a DSM.
After downloading the datasets from the official website, crop the images into 520×520 tiles and split them into training, validation, and test sets in an 8:1:1 ratio.
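A minimal preprocessing sketch under these assumptions (non-overlapping tiling and a random 8:1:1 split; the directory layout and the `crop_to_tiles` helper are hypothetical, not part of this repository):

```python
import random
from pathlib import Path
from PIL import Image

TILE = 520  # tile size described above

def crop_to_tiles(image_path, out_dir):
    """Crop one large TOP patch into non-overlapping 520x520 tiles."""
    img = Image.open(image_path)
    out_dir.mkdir(parents=True, exist_ok=True)
    w, h = img.size
    for x in range(0, w - TILE + 1, TILE):
        for y in range(0, h - TILE + 1, TILE):
            tile = img.crop((x, y, x + TILE, y + TILE))
            tile.save(out_dir / f"{image_path.stem}_{x}_{y}.png")

if __name__ == "__main__":
    # Hypothetical layout: raw TOP images in data/raw, tiles written to data/tiles
    for p in sorted(Path("data/raw").glob("*.tif")):
        crop_to_tiles(p, Path("data/tiles"))

    # 8:1:1 split of the resulting tiles
    tiles = sorted(Path("data/tiles").glob("*.png"))
    random.seed(0)
    random.shuffle(tiles)
    n = len(tiles)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    splits = {
        "train": tiles[:n_train],
        "val": tiles[n_train:n_train + n_val],
        "test": tiles[n_train + n_val:],
    }
    for name, files in splits.items():
        with open(f"{name}.txt", "w") as f:
            f.write("\n".join(str(p) for p in files))
```

The same cropping and split must be applied to the label images so that each tile keeps its matching mask.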
Data image | Data label
---|---
![]() | ![]()
The main dependencies are torch==1.10.0 and torchvision==0.11.1; see the requirements.txt file for the full environment configuration.
Download the pretrained weights resnet50.pt.
Modify the pretrained weight path in train.py and then run train.py.
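The following is a rough sketch of what loading the downloaded backbone weights can look like (illustrative only; train.py performs the actual loading once the weight path is set there, and `strict=False` simply ignores keys that do not match):

```python
import torch
from torchvision.models import resnet50

# Illustrative only: train.py handles this after you set the pretrained weight path.
backbone = resnet50(pretrained=False)
state_dict = torch.load("resnet50.pt", map_location="cpu")
missing, unexpected = backbone.load_state_dict(state_dict, strict=False)
print("missing keys:", len(missing), "unexpected keys:", len(unexpected))
```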
Log files are saved in the log folder.
Modify the test set path and the trained model file path in vaildation.py to test the performance of the trained model.
The src folder is used to store the network structure files used in the project. These files contain the code that defines the deep learning model: the network architecture, layer definitions, model configuration, and so on. Keeping them in the src directory makes the project easier to use and manage.
- mffnet.py: MFF-Net network structure.
- backbone.py: ResNet backbone network.
- mobilenet-backbone.py: MobileNetV2 backbone network.
- cbamblock.py: CBAM attention mechanism module.
- ParallelConv2d.py: dual-channel mask module.
- .................
This folder is used to hold the model files used in the project, such as trained models, pretrained weights, and other related model files, kept here for easy project use and management.
The train_utils folder contains utility functions and modules related to training the models. It provides evaluation and validation functions that help assess a model's performance on unseen data and verify its generalization ability, along with evaluation metrics commonly used in machine learning tasks such as classification, regression, and clustering. Some of the evaluation metrics included in train_utils are:
- Accuracy
- Precision
- Recall
- F1-score
- Mean Absolute Error (MAE)
Feel free to explore the files in the train_utils folder to utilize these evaluation and validation functions, as well as the evaluation metrics, in your machine learning projects.
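As an illustration of how such metrics can be computed for a binary building mask, here is a small self-contained sketch (this is not the exact code in train_utils, only an example of the metrics listed above):

```python
import torch

def binary_metrics(pred, target, eps=1e-7):
    """Compute accuracy, precision, recall, F1, and MAE for binary masks.

    pred, target: tensors of 0/1 values with the same shape.
    Illustrative helper, not the implementation shipped in train_utils.
    """
    pred = pred.float()
    target = target.float()
    tp = ((pred == 1) & (target == 1)).sum().float()
    tn = ((pred == 0) & (target == 0)).sum().float()
    fp = ((pred == 1) & (target == 0)).sum().float()
    fn = ((pred == 0) & (target == 1)).sum().float()

    accuracy = (tp + tn) / (tp + tn + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    mae = (pred - target).abs().mean()
    return {"accuracy": accuracy.item(), "precision": precision.item(),
            "recall": recall.item(), "f1": f1.item(), "mae": mae.item()}

# Example usage on a dummy 520x520 prediction/label pair
pred = torch.randint(0, 2, (520, 520))
target = torch.randint(0, 2, (520, 520))
print(binary_metrics(pred, target))
```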
The feature map analysis Python files in this repository provide tools and functions to analyze and visualize feature maps generated by deep learning models. These files help in understanding the learned representations within the neural network and gaining insights into the model's decision-making process.
- Functions to extract and visualize feature maps at different layers of a neural network.
- Heatmap generation to highlight important regions of the feature maps.
- Activation maximization techniques to visualize the patterns that maximally activate specific neurons.
- Grad-CAM (Gradient-weighted Class Activation Mapping) implementation for visualizing important regions contributing to model predictions.
Feel free to explore the feature map analysis Python files to gain a deeper understanding of your models and analyze the learned representations.
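Below is a minimal sketch of the first item, extracting a feature map with a forward hook and saving a simple mean-activation heatmap. It uses a plain torchvision ResNet-50 and an assumed layer name purely for illustration; the scripts listed below work on the trained MFF-Net and may differ in model, layers, and plotting.

```python
import torch
import matplotlib.pyplot as plt
from pathlib import Path
from torchvision.models import resnet50

# Illustrative feature-map extraction; analyze_feature_map.py and
# analyze_heatmap.py may differ in model, layer names, and plotting.
model = resnet50(pretrained=False).eval()
features = {}

def save_output(name):
    def hook(module, inputs, output):
        features[name] = output.detach()
    return hook

model.layer1.register_forward_hook(save_output("layer1"))

x = torch.randn(1, 3, 520, 520)          # dummy 520x520 input tile
with torch.no_grad():
    model(x)

fmap = features["layer1"][0]             # (C, H, W) feature map
heat = fmap.mean(dim=0)                  # channel-wise mean as a simple heatmap

Path("heatmap").mkdir(exist_ok=True)
plt.imshow(heat.numpy(), cmap="jet")
plt.axis("off")
plt.savefig("heatmap/layer1_mean.png", bbox_inches="tight")
```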
- analyze_feature_map.py
- analyze_heatmap.py
- analyze_kernel_weight.py
The heat map results are saved in the heatmap folder.
Thanks to the ISPRS community for providing data support, to WZMIAOMIAO for providing code support, and to everyone else who contributed to this work and the community.