API reference

Classes

These classes perform analyses used in the Panel-Segmentation project.

panel_detection.PanelDetection([…])

A class for training a deep learning architecture, detecting solar arrays from a satellite image, performing spectral clustering, predicting azimuth, and classifying mounting type and configuration.

panel_detection.PanelDetection.generateSatelliteImage(…)

Generates a satellite image via Google Maps, using a set of latitude-longitude coordinates.
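
A minimal usage sketch follows. The import path and the keyword names (latitude, longitude, file name to save, Google Maps API key) are assumptions for illustration and should be checked against the method docstring:

    from panel_segmentation.panel_detection import PanelDetection  # assumed import path

    # Instantiate the detection class (assumes default bundled model weights).
    pc = PanelDetection()

    # Keyword names are illustrative: a latitude-longitude pair, a path to save
    # the image, and a Google Maps API key.
    satellite_img = pc.generateSatelliteImage(latitude=39.7407,
                                              longitude=-105.1694,
                                              file_name_save="site_image.png",
                                              google_maps_api_key="YOUR_API_KEY")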

panel_detection.PanelDetection.classifyMountingConfiguration(…)

This function is used to detect and classify the mounting configuration of solar installations in satellite imagery.

panel_detection.PanelDetection.diceCoeff(…)

This function is used as the metric of similarity between the predicted mask and ground truth.

panel_detection.PanelDetection.diceCoeffLoss(…)

This function is a loss function that can be used when training the segmentation model.
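
For reference, the Dice coefficient measures the overlap between the predicted and ground-truth masks, and the loss is its complement. The sketch below shows a generic formulation with a smoothing term, not necessarily the exact implementation used by these methods:

    import numpy as np

    def dice_coeff(y_true, y_pred, smooth=1.0):
        # Overlap between the predicted mask and the ground truth, scaled to [0, 1].
        intersection = np.sum(y_true * y_pred)
        return (2.0 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

    def dice_coeff_loss(y_true, y_pred):
        # The loss decreases as the predicted mask overlaps more with the ground truth.
        return 1.0 - dice_coeff(y_true, y_pred)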

panel_detection.PanelDetection.testBatch(…)

This function is used to predict the mask of a batch of test satellite images.

panel_detection.PanelDetection.testSingle(…)

This function is used to predict the mask corresponding to a single test image.

panel_detection.PanelDetection.hasPanels(…)

This function is used to predict whether or not an image contains solar panels.

panel_detection.PanelDetection.detectAzimuth(in_img)

This function uses Canny edge detection to first extract the edges of the input image.

panel_detection.PanelDetection.cropPanels(…)

This function isolates regions with solar panels in a satellite image using the predicted mask.

panel_detection.PanelDetection.plotEdgeAz(…)

This function is used to generate plots of the image with its azimuth. It can generate three figures or one.

panel_detection.PanelDetection.clusterPanels(…)

This function uses object detection outputs to cluster the panels.
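
The single-image methods above can be chained into a simple detection workflow. The sketch below is illustrative only; the expected argument types (normalization, batch dimension) and return values are assumptions and should be confirmed against the docstrings:

    import numpy as np
    from PIL import Image

    from panel_segmentation.panel_detection import PanelDetection  # assumed import path

    pc = PanelDetection()

    # Load a satellite image as a numpy array (scaling to [0, 1] is an assumption).
    img = np.array(Image.open("site_image.png"))[:, :, :3] / 255.0

    if pc.hasPanels(img):                    # panel presence (boolean return assumed)
        mask = pc.testSingle(img)            # predicted segmentation mask
        panels = pc.cropPanels(img, mask)    # isolate panel regions using the mask
        azimuth = pc.detectAzimuth(panels)   # Canny-edge-based azimuth estimate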

panel_detection.PanelDetection.runSiteAnalysisPipeline(…)

This function runs a full site analysis for a site, given its latitude and longitude coordinates. It includes the following steps:

1. If generate_image = True, a satellite image of the site location is taken from Google Maps, based on its latitude-longitude coordinates. The satellite image is then saved under the 'file_name_save_img' path.
2. The satellite image is run through the mounting configuration/type pipeline. The associated mount predictions are returned, and the most frequently occurring mounting configuration among the predictions is selected. The associated labeled image is stored under the 'file_name_save_mount' path.
3. The satellite image is run through the azimuth estimation algorithm. A default single azimuth is calculated in this pipeline for simplicity. The detected azimuth image is saved under the 'file_path_save_azimuth' path.
4. If the mounting configuration is detected as a single-axis tracker, an azimuth correction of 90 degrees is applied, because azimuth runs parallel to the installation rather than perpendicular to it.
5. A final dictionary of analyzed site metadata is returned, including latitude, longitude, detected azimuth, and mounting configuration.
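
A hedged end-to-end sketch: the generate_image, file_name_save_img, file_name_save_mount, and file_path_save_azimuth parameters are taken from the step descriptions above, while the remaining keyword names and the import path are illustrative assumptions:

    from panel_segmentation.panel_detection import PanelDetection  # assumed import path

    pc = PanelDetection()

    # Keyword names follow the step descriptions above; confirm against the docstring.
    site_metadata = pc.runSiteAnalysisPipeline(
        latitude=39.7407,
        longitude=-105.1694,
        google_maps_api_key="YOUR_API_KEY",
        generate_image=True,
        file_name_save_img="site_image.png",
        file_name_save_mount="site_mount.png",
        file_path_save_azimuth="site_azimuth.png")

    # The returned dictionary includes latitude, longitude, detected azimuth,
    # and mounting configuration.
    print(site_metadata)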

panel_train.TrainPanelSegmentationModel(…)

A class for training a deep learning architecture to perform image segmentation on satellite images to detect solar arrays in the image.

panel_train.TrainPanelSegmentationModel.loadImagesToNumpyArray(…)

Load a set of images from a folder into a 4D numpy array, with dimensions (number of images, 640, 640, 3).
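
An illustrative call; the constructor hyperparameters, their names, and the folder path are assumptions made for this sketch:

    from panel_segmentation.panel_train import TrainPanelSegmentationModel  # assumed import path

    # Hyperparameter names and values are illustrative.
    trainer = TrainPanelSegmentationModel(batch_size=16, no_epochs=10, learning_rate=1e-5)

    # Hypothetical folder of 640x640 RGB training images.
    train_imgs = trainer.loadImagesToNumpyArray("./training_images/")
    print(train_imgs.shape)  # (number of images, 640, 640, 3)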

panel_train.TrainPanelSegmentationModel.diceCoeff(…)

This function is used as the metric of similarity between the predicted mask and ground truth; it is used because the plain accuracy metric is overly optimistic when panel pixels make up only a small fraction of the image.

panel_train.TrainPanelSegmentationModel.diceCoeffLoss(…)

This function is a loss function that can be used when training the segmentation model.

panel_train.TrainPanelSegmentationModel.trainSegmentation(…)

This function uses VGG16 as the base network and as a transfer learning framework to train a model that segments solar panels from a satellite image.

panel_train.TrainPanelSegmentationModel.trainPanelClassifier(…)

This function uses VGG16 as the base network and as a transfer learning framework to train a model that predicts the presence of solar panels in a satellite image.

panel_train.TrainPanelSegmentationModel.trainMountingConfigClassifier(…)

This function uses Faster R-CNN ResNet50 FPN as the base network and as a transfer learning framework to train a model that performs object detection on the mounting configuration of solar arrays.

panel_train.TrainPanelSegmentationModel.trainingStatistics(…)

This function prints the training statistics, such as training loss and accuracy and validation loss and accuracy.
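
A hedged sketch tying the training methods together. The placeholder arrays, the argument order of trainSegmentation, and the assumption that trainingStatistics can be called with no arguments after training are all illustrative and should be checked against the docstrings:

    import numpy as np

    from panel_segmentation.panel_train import TrainPanelSegmentationModel  # assumed import path

    trainer = TrainPanelSegmentationModel(batch_size=16, no_epochs=10, learning_rate=1e-5)  # illustrative

    # Placeholder arrays standing in for real training/validation images and masks
    # (in practice, built with loadImagesToNumpyArray or an equivalent mask loader).
    train_imgs = np.zeros((8, 640, 640, 3))
    train_masks = np.zeros((8, 640, 640, 1))
    val_imgs = np.zeros((2, 640, 640, 3))
    val_masks = np.zeros((2, 640, 640, 1))

    # Argument order is an assumption; check the docstring.
    seg_model = trainer.trainSegmentation(train_imgs, train_masks, val_imgs, val_masks)

    # Print training and validation loss/accuracy recorded during training
    # (the no-argument call is an assumption).
    trainer.trainingStatistics()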

Models

The following deep learning models are included in the Panel-Segmentation package.