
Satellite Image Classification Dataset

During synthetic data generation, duplication may occur due to the close resemblance to the original dataset, a common issue in such processes. To minimize this, consider generating more data than initially required.
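As an illustration only, the sketch below shows one way such near-duplicates could be filtered after generation using perceptual hashing; the imagehash package, the folder layout, and the distance threshold are assumptions, not part of this product.

    # Hypothetical near-duplicate filter using perceptual hashing.
    # Requires the Pillow and imagehash packages; paths and threshold are illustrative.
    from pathlib import Path

    import imagehash
    from PIL import Image

    def filter_near_duplicates(image_dir, max_hamming=4):
        """Keep only images whose perceptual hash differs from every kept image."""
        kept_hashes, kept_paths = [], []
        for path in sorted(Path(image_dir).glob("*.png")):
            h = imagehash.phash(Image.open(path))
            if all(h - other > max_hamming for other in kept_hashes):
                kept_hashes.append(h)
                kept_paths.append(path)
        return kept_paths

    # e.g. generate ~20% more images than needed, then keep the first 1000 unique ones:
    # unique_images = filter_near_duplicates("synthetic_images")[:1000]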

  • All products are priced including VAT.
  • Premium datasets are custom-made, and quality improvement takes approximately two weeks from the date of application.
  • Common datasets can be checked on My Page after purchase.

Basic Price

USD 5,800 VAT included

A realistic representation of real-world conditions with a comprehensive view of diverse landform features, making the dataset suitable for classification experiments across different environments and for evaluating model generalization performance.

  • Labeling type: Land Surface Type
  • Data Format: Image

About Dataset

1) Data Introduction

• The Satellite Image Classification Dataset is a benchmark image classification dataset constructed using satellite remote sensing imagery. It includes a total of four land surface classes—cloudy, desert, green_area, and water—collected from various sensor-based images and Google Maps snapshots. The dataset is designed for training and evaluating image-based scene recognition models.

2) Data Utilization

(1) Characteristics of the Satellite Image Classification Dataset:
  • The dataset was collected with the aim of automatic interpretation of satellite imagery and consists of a combination of sensor-based images and map snapshots, offering a realistic representation of real-world conditions.
  • All images are of fixed resolution and include diverse landform features, making the dataset suitable for classification experiments across different environments and for evaluating model generalization performance.

(2) Applications of the Satellite Image Classification Dataset:
  • Land surface classification model training: the dataset can be used in experiments to classify various types of terrain such as buildings, farmland, and roads (see the loading sketch after this list).
  • Research and application in geospatial information analysis: useful for developing models that support spatial decision-making through tasks such as land use monitoring, urban structure analysis, and land surface inference.
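
A minimal loading sketch for such experiments, assuming the images are unpacked into class-named folders (cloudy/, desert/, green_area/, water/) and that PyTorch/torchvision are used; the directory path and tooling are assumptions, not part of the dataset specification.

    # Load the four land surface classes with torchvision's ImageFolder.
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    dataset = datasets.ImageFolder("satellite_images", transform=tfm)  # hypothetical path
    loader = DataLoader(dataset, batch_size=32, shuffle=True)

    print(dataset.classes)  # expected: ['cloudy', 'desert', 'green_area', 'water']
    print(len(dataset))     # 1000 items according to the metadata below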

Meta Data

  • Domain: etc
  • Zoodata formats: Image
  • Zoodata volume: 1000 items
  • Registration date: 2025.05.28
  • Zoodata type: Synthetic data
  • Existence of labeling: Exist
  • Labeling type: Land Surface Type
  • Labeling formats: json

  • Performance 1: Good (scale: Normal / Good / Outstanding)
  • Performance 2: 100

Data Samples (5)

Utility

              Downstream Classification (▲)  KID (▼)  One Class Classification (▼)
Total         0                              0        0
Suitability   OK                             OK       OK

The higher the value, the better (▲)

Model Performance

Downstream classification accuracy is an indicator used to evaluate the usefulness of synthetic data: it measures whether a model trained on synthetic data performs similarly to one trained on real data. The method involves training the same model separately on real data and on synthetic data, and then comparing the accuracies of the two models. Interpretation: an accuracy for the synthetic-data model that is close to the accuracy of the real-data model indicates that the synthetic data is of high quality and represents the real data well.
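
A minimal sketch of that comparison, assuming a PyTorch pipeline with ImageFolder-style directories for real training data, synthetic training data, and a shared real test set; the folder names, the ResNet-18 backbone, and the hyperparameters are illustrative assumptions rather than the evaluation pipeline actually used for this report.

    # Train the same architecture on real and on synthetic data, evaluate both
    # on the same held-out real test set, and compare accuracies.
    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

    def train_and_evaluate(train_dir, test_dir, epochs=5):
        train_dl = DataLoader(datasets.ImageFolder(train_dir, tfm), batch_size=32, shuffle=True)
        test_dl = DataLoader(datasets.ImageFolder(test_dir, tfm), batch_size=32)

        model = models.resnet18(num_classes=4)   # trained from scratch in this sketch
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        loss_fn = nn.CrossEntropyLoss()

        model.train()
        for _ in range(epochs):
            for x, y in train_dl:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()

        model.eval()
        correct = total = 0
        with torch.no_grad():
            for x, y in test_dl:
                correct += (model(x).argmax(dim=1) == y).sum().item()
                total += y.numel()
        return correct / total

    acc_real = train_and_evaluate("data/real_train", "data/real_test")
    acc_synth = train_and_evaluate("data/synthetic_train", "data/real_test")
    print(f"real: {acc_real:.3f}  synthetic: {acc_synth:.3f}  gap: {acc_real - acc_synth:.3f}")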

The closer to zero or the lower the value, the better (▼)

Quality

KID (Kernel Inception Distance) is a metric used to evaluate the similarity between generated images and real images. It compares the differences between the two sample distributions using Kernel Mean Embedding, without assuming a normal distribution. Interpretation: A lower KID score suggests that generated images are more similar to real images, with a score close to 0 being ideal. Specifically, a score below 0.01 indicates very high similarity.
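
A small sketch of the underlying computation on pre-extracted Inception features (shape [n, d]), using the standard unbiased polynomial-kernel MMD² estimator; the feature-extraction step and the 2048-dimensional feature size are assumed, and reported KID scores are typically averaged over several random subsets.

    # KID as unbiased MMD^2 with the cubic polynomial kernel on Inception features.
    import numpy as np

    def polynomial_kernel(x, y):
        d = x.shape[1]
        return (x @ y.T / d + 1.0) ** 3

    def kid(real_feats, fake_feats):
        """Unbiased MMD^2 between real and generated feature sets."""
        m, n = len(real_feats), len(fake_feats)
        k_rr = polynomial_kernel(real_feats, real_feats)
        k_ff = polynomial_kernel(fake_feats, fake_feats)
        k_rf = polynomial_kernel(real_feats, fake_feats)
        # Exclude diagonal terms for the unbiased estimate.
        term_rr = (k_rr.sum() - np.trace(k_rr)) / (m * (m - 1))
        term_ff = (k_ff.sum() - np.trace(k_ff)) / (n * (n - 1))
        term_rf = 2.0 * k_rf.mean()
        return float(term_rr + term_ff - term_rf)

    # Example with random stand-ins for 2048-dim Inception features:
    # score = kid(np.random.randn(500, 2048), np.random.randn(500, 2048))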


Privacy

              LPIPS (▲)  SSIM (▼)
Total         0          0
Suitability   OK         OK

The higher the value, the better (▲)

Perceptual Similarity

Learned Perceptual Image Patch Similarity (LPIPS) is a metric that measures the perceptual distance between two images by extracting deep features with a neural network and computing the distance between them. High LPIPS value: indicates that synthetic images are perceptually different from real images, lowering the risk of sensitive information leakage. Low LPIPS value: indicates high perceptual similarity between synthetic and real images, raising the risk of information leakage.
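
A minimal sketch using the open-source lpips package, which implements this metric; the file names are placeholders, and inputs are scaled to [-1, 1] as the package expects.

    # Score perceptual distance between one real and one synthetic image.
    import lpips
    import torch
    from PIL import Image
    from torchvision import transforms

    loss_fn = lpips.LPIPS(net="alex")  # AlexNet-based LPIPS

    to_tensor = transforms.Compose([
        transforms.Resize((256, 256)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),  # map [0, 1] -> [-1, 1]
    ])

    real = to_tensor(Image.open("real.png").convert("RGB")).unsqueeze(0)
    synth = to_tensor(Image.open("synthetic.png").convert("RGB")).unsqueeze(0)

    with torch.no_grad():
        distance = loss_fn(real, synth).item()
    print(f"LPIPS distance: {distance:.4f}")  # higher = more perceptually different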

The closer to zero or the lower the value, the better (▼)

Structural Similarity

The Structural Similarity Index Measure (SSIM) is a metric used to assess the similarity between two images. It is primarily used to compare the quality of a restored or compressed image with the original image. SSIM measures visual similarity by considering brightness, contrast, and structure. High SSIM value (0.9 or above): Indicates that the synthetic image is very similar to the real image, which may increase the risk of information leakage.
Low SSIM value (0.6 or below): Indicates low similarity and reduced risk of leakage.
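
A minimal sketch computing SSIM with scikit-image on a grayscale image pair; the file names are placeholders.

    # Structural similarity between one real and one synthetic image.
    import numpy as np
    from PIL import Image
    from skimage.metrics import structural_similarity

    real = np.asarray(Image.open("real.png").convert("L"))       # grayscale copies
    synth = np.asarray(Image.open("synthetic.png").convert("L"))

    score = structural_similarity(real, synth, data_range=255)
    print(f"SSIM: {score:.3f}")  # near 1.0 = almost identical; below ~0.6 = low similarity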

Premium Report Information

If you purchase the premium report product, you will be able to view more detailed analysis results for the dataset.