API reference¶
Classes¶
These classes perform analyses used in the Panel-Segmentation project.
A class for training a deep learning architecture that detects solar arrays in a satellite image, performs spectral clustering, predicts azimuth, and classifies mounting type and configuration.

Generates a satellite image via Google Maps from a set of latitude-longitude coordinates.
This function detects and classifies the mounting configuration of solar installations in satellite imagery.

This function is used as the metric of similarity between the predicted mask and the ground-truth mask.

This function is a loss function that can be used when training the segmentation model.
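As a concrete illustration, the Dice similarity coefficient and the loss derived from it can be sketched in plain NumPy. This is only an illustrative sketch of the idea behind the metric; the function and parameter names here are not the package's API.

```python
import numpy as np

def dice_coefficient(y_true, y_pred, smooth=1.0):
    """Dice similarity between a ground-truth and a predicted mask.
    `smooth` avoids division by zero on empty masks."""
    y_true = np.asarray(y_true, dtype=float).ravel()
    y_pred = np.asarray(y_pred, dtype=float).ravel()
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

def dice_loss(y_true, y_pred):
    """Loss = 1 - Dice, so perfect overlap gives zero loss."""
    return 1.0 - dice_coefficient(y_true, y_pred)
```

Because Dice measures overlap between the two masks directly, it behaves better than pixel accuracy when the panel class covers only a small fraction of the image.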
This function predicts the masks of a batch of test satellite images.

This function predicts the mask corresponding to a single test image.

This function predicts whether or not a panel is present in an image.

This function uses Canny edge detection to first extract the edges of the input image.
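The idea behind an edge-based azimuth estimate can be sketched without OpenCV by using raw intensity gradients in place of a true Canny detector: each pixel votes for an edge orientation, weighted by gradient magnitude, and the dominant orientation is read off a histogram. This is a simplified stand-in, and `dominant_edge_orientation` is a hypothetical name, not the package's API.

```python
import numpy as np

def dominant_edge_orientation(img, n_bins=180):
    """Estimate the dominant edge orientation (degrees, mod 180) of a
    grayscale image from its intensity gradients."""
    gy, gx = np.gradient(img.astype(float))
    magnitude = np.hypot(gx, gy)
    # Edge direction is perpendicular to the gradient direction.
    grad_angle = np.degrees(np.arctan2(gy, gx))
    edge_angle = (grad_angle + 90.0) % 180.0
    # Weight each pixel's orientation vote by its gradient magnitude.
    hist, bin_edges = np.histogram(edge_angle, bins=n_bins,
                                   range=(0.0, 180.0), weights=magnitude)
    return bin_edges[np.argmax(hist)]

# Horizontal stripes -> edges run along 0 degrees.
rows = np.sin(np.linspace(0, 20 * np.pi, 128))
stripes = np.tile(rows[:, None], (1, 128))
angle = dominant_edge_orientation(stripes)
```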
This function isolates regions with solar panels in a satellite image using the predicted mask.

This function generates plots of the image with its azimuth; it can generate three figures or one.

This function uses object detection outputs to cluster the panels.
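A minimal, pure-NumPy sketch of such clustering: group detected bounding boxes whose centers lie close to one another. The real implementation may differ; `cluster_boxes` and the `max_gap` threshold are illustrative names, not the package's API.

```python
import numpy as np

def cluster_boxes(boxes, max_gap=50.0):
    """Group bounding boxes (x0, y0, x1, y1) whose centers lie within
    `max_gap` pixels of another box in the same group.  Returns one
    cluster label per box; merges are transitive."""
    boxes = np.asarray(boxes, dtype=float)
    centers = np.column_stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                               (boxes[:, 1] + boxes[:, 3]) / 2])
    n = len(centers)
    labels = np.arange(n)          # each box starts in its own cluster
    dists = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    for i in range(n):
        for j in range(i + 1, n):
            if dists[i, j] <= max_gap:
                # Merge box j's whole cluster into box i's cluster.
                labels[labels == labels[j]] = labels[i]
    return labels
```

Boxes chained together by successive near neighbors end up in the same cluster, which is the behavior needed for long rows of panels.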
This function runs a site analysis on a site, given its latitude and longitude coordinates. It includes the following steps:

1. If generate_image = True, take a satellite image of the site location from Google Maps, based on its latitude-longitude coordinates. The satellite image is saved under the file_name_save_img path.
2. Run the satellite image through the mounting configuration/type pipeline. The associated mount predictions are returned, and the most frequently occurring mounting configuration among the predictions is selected. The associated labeled image is stored under the file_name_save_mount path.
3. Run the satellite image through the azimuth estimation algorithm. For simplicity, a single default azimuth is calculated in this pipeline. The detected azimuth image is saved under the file_path_save_azimuth path.
4. If the mounting configuration is detected as a single-axis tracker, an azimuth correction of 90 degrees is applied, because the detected azimuth runs parallel to the installation rather than perpendicular to it.
5. A final dictionary of analyzed site metadata is returned, including latitude, longitude, detected azimuth, and mounting configuration.
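Two of the steps above (selecting the consensus mount in step 2 and the tracker azimuth correction in step 4) can be sketched in a few lines. The helper names and the mounting-label string convention are hypothetical, not the package's API.

```python
from collections import Counter

def consensus_mount(predictions):
    """Step 2: keep the most frequently occurring mounting prediction."""
    return Counter(predictions).most_common(1)[0][0]

def correct_azimuth(azimuth_deg, mounting_config):
    """Step 4: for single-axis trackers the detected azimuth runs
    parallel to the installation, so it is rotated 90 degrees (mod 360)."""
    if "single_axis" in mounting_config:   # hypothetical label convention
        return (azimuth_deg + 90) % 360
    return azimuth_deg
```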
A class for training a deep learning architecture to perform image segmentation on satellite images and detect solar arrays in them.

Loads a set of images from a folder into a 4-D numpy array with dimensions (number of images, 640, 640, 3).
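The resulting array layout can be illustrated with a small NumPy sketch. Decoding images from disk (e.g. with PIL) is omitted here, and `images_to_batch` is an illustrative name, not the package's API.

```python
import numpy as np

def images_to_batch(images):
    """Stack individually loaded (640, 640, 3) image arrays into the
    4-D (num_images, 640, 640, 3) batch shape the model expects."""
    batch = np.stack([np.asarray(img, dtype=np.uint8) for img in images])
    assert batch.shape[1:] == (640, 640, 3)
    return batch
```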
The plain accuracy metric is overly optimistic, so this function provides a similarity metric between the predicted and ground-truth masks instead.

This function is a loss function that can be used when training the segmentation model.
This function uses VGG16 as the base network in a transfer learning framework to train a model that segments solar panels from a satellite image.

This function uses VGG16 as the base network in a transfer learning framework to train a model that predicts the presence of solar panels in a satellite image.

This function uses Faster R-CNN ResNet50 FPN as the base network in a transfer learning framework to train a model that performs object detection on the mounting configuration of solar arrays.

This function prints training statistics such as training loss and accuracy and validation loss and accuracy.