
Forest Fire Dataset

During synthetic data generation, duplication may occur due to the close resemblance to the original dataset, a common issue in such processes. To minimize this, consider generating more data than initially required.
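As a rough sketch of the deduplication step this implies, the hypothetical helper below drops byte-identical images from a folder of generated files (the directory name and file pattern are placeholders, not part of this product); detecting near-duplicates rather than exact copies would require perceptual hashing or embedding distances instead.

```python
import hashlib
from pathlib import Path

def drop_exact_duplicates(image_dir: str) -> list[Path]:
    """Keep only the first occurrence of each byte-identical image file."""
    seen: set[str] = set()
    kept: list[Path] = []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest not in seen:  # first time this exact image content appears
            seen.add(digest)
            kept.append(path)
    return kept

# Example: request ~20% more images than needed, deduplicate, then trim to the target count.
# unique_images = drop_exact_duplicates("synthetic_fire_images")[:1000]
```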

  • All products are priced including VAT.
  • Premium datasets are custom-made, and quality improvement takes approximately two weeks from the date of application.
  • Common datasets can be checked on My Page after purchase.

Basic Price

USD 8,700 (VAT included)

Comprehensive visual training material for developing fire and smoke detection algorithms, enabling accurate classification of wildfire and smoke images.

  • Labeling type: Fire and Smoke
  • Data Format: Image


About Dataset

1) Data Introduction

• The Forest Fire Dataset is an image classification dataset consisting of images related to wildfires and smoke. It is designed to serve as visual training material for the development of fire and smoke detection algorithms. The dataset includes two classification labels: 'fire' for wildfire images and 'smoke' for smoke-related images.

2) Data Utilization

(1) Characteristics of the Forest Fire Dataset:

  • The dataset contains images of fire and smoke captured in a variety of environments, making it suitable for developing early detection and classification systems.
  • Most of the images are sourced from the wildfire detection dataset released by the University of Science and Technology of China (USTC) and cover a wide range of visual features reflecting real wildfire scenarios.

(2) Applications of the Forest Fire Dataset:

  • Wildfire and smoke recognition AI models: the data can be used to train image-based AI models that automatically classify the presence of fire or smoke.
  • Disaster response system development: useful as foundational data for building technologies such as forest surveillance, CCTV video analysis, and real-time alert systems.
  • Environmental research and climate change applications: can be used to analyze wildfire occurrence patterns and assess the effectiveness of fire detection algorithms under climate change scenarios.

Meta Data

  • Domain: etc
  • Data format: Image
  • Data volume: 1,000 items
  • Registration date: 2025.05.28
  • Data type: Synthetic data
  • Labeling: Exists
  • Labeling type: Fire and Smoke
  • Labeling format: JSON

[Performance gauges: Performance 1 rated Excellent, Performance 2 score 100 (rating scale: Normal / Excellent / Outstanding)]

Data Samples (5)

Utility

                 Downstream Classification (▲)   KID (▼)   One Class Classification (▼)
  Total          0                               0         0
  Suitability    OK                              OK        OK

The higher the value, the better (▲)

Model Performance

Downstream classification accuracy is an indicator used to evaluate the usefulness of synthetic data: it measures whether a model trained on synthetic data performs similarly to one trained on real data. The same model architecture is trained separately on the real data and on the synthetic data, and the accuracies of the two models are then compared on a common test set. Interpretation: if the accuracy of the model trained on synthetic data is close to that of the model trained on real data, the synthetic data is of high quality and represents the real data well.
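As a rough illustration of this comparison, the sketch below trains one fixed scikit-learn classifier twice, once on real features and once on synthetic features, and evaluates both on the same real test set. The variable names (X_real, X_syn, and so on) are placeholders, not part of this product, and the model choice is arbitrary.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def downstream_accuracy(X_train, y_train, X_test, y_test) -> float:
    """Train one fixed model on the given training set and score it on a real test set."""
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)
    return accuracy_score(y_test, model.predict(X_test))

# Same model, same real test set; only the training data differs.
# acc_real = downstream_accuracy(X_real, y_real, X_test, y_test)
# acc_syn  = downstream_accuracy(X_syn,  y_syn,  X_test, y_test)
# A small gap between acc_real and acc_syn suggests the synthetic data is a usable substitute.
```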

The closer to zero or the lower the value, the better (▼)

Quality

KID (Kernel Inception Distance) is a metric used to evaluate the similarity between generated images and real images. It compares the differences between the two sample distributions using Kernel Mean Embedding, without assuming a normal distribution. Interpretation: A lower KID score suggests that generated images are more similar to real images, with a score close to 0 being ideal. Specifically, a score below 0.01 indicates very high similarity.
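For reference, a minimal NumPy sketch of a KID-style estimate is shown below, assuming Inception-v3 pooling features have already been extracted for the real and generated images. It follows the standard unbiased MMD^2 estimator with a degree-3 polynomial kernel and is not necessarily the exact implementation behind the score above.

```python
import numpy as np

def polynomial_kernel(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Degree-3 polynomial kernel: k(a, b) = (a.b / d + 1)^3, with d the feature dimension."""
    d = x.shape[1]
    return (x @ y.T / d + 1.0) ** 3

def kid_score(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """Unbiased MMD^2 estimate between real and generated feature sets."""
    m, n = len(feats_real), len(feats_gen)
    k_rr = polynomial_kernel(feats_real, feats_real)
    k_gg = polynomial_kernel(feats_gen, feats_gen)
    k_rg = polynomial_kernel(feats_real, feats_gen)
    # Diagonal terms are excluded so the estimator is unbiased.
    term_rr = (k_rr.sum() - np.trace(k_rr)) / (m * (m - 1))
    term_gg = (k_gg.sum() - np.trace(k_gg)) / (n * (n - 1))
    term_rg = 2.0 * k_rg.mean()
    return float(term_rr + term_gg - term_rg)

# feats_real / feats_gen would be Inception-v3 pooling features (e.g. 2048-dim rows).
# score = kid_score(feats_real, feats_gen)   # closer to 0 means more similar distributions
```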


Privacy

                 LPIPS (▲)   SSIM (▼)
  Total          0           0
  Suitability    OK          OK

The higher the value, the better (▲)

Perceptual Similarity

Learned Perceptual Image Patch Similarity (LPIPS) measures the perceptual distance between two images by extracting deep features with a neural network and computing the distance between them. Low LPIPS value: the synthetic image is perceptually very close to a real image, raising the risk of sensitive information leakage. High LPIPS value: the synthetic images are perceptually different from the real images, indicating a lower risk of leakage.
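As an illustration, one widely used open-source implementation is the lpips PyTorch package; the sketch below assumes it is installed and uses random tensors as stand-ins for matched real and synthetic images, so it is not necessarily the tooling used for this report.

```python
import torch
import lpips  # pip install lpips

# AlexNet-backed LPIPS; inputs are RGB tensors scaled to [-1, 1], shape (N, 3, H, W).
loss_fn = lpips.LPIPS(net='alex')

real_img = torch.rand(1, 3, 256, 256) * 2 - 1       # placeholder for a real image
synthetic_img = torch.rand(1, 3, 256, 256) * 2 - 1  # placeholder for a synthetic image

distance = loss_fn(real_img, synthetic_img).item()
# Larger distance = less perceptually similar, i.e. lower risk of leaking a specific real image.
print(f"LPIPS distance: {distance:.4f}")
```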

The closer to zero or the lower the value, the better (▼)

Structural Similarity

The Structural Similarity Index Measure (SSIM) is a metric used to assess the similarity between two images. It is primarily used to compare the quality of a restored or compressed image with the original image. SSIM measures visual similarity by considering brightness, contrast, and structure. High SSIM value (0.9 or above): Indicates that the synthetic image is very similar to the real image, which may increase the risk of information leakage.
Low SSIM value (0.6 or below): Indicates low similarity and reduced risk of leakage.
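As an illustration, SSIM can be computed with scikit-image; the sketch below uses random grayscale arrays as stand-ins for a matched real/synthetic pair, so it only shows the metric call, not the report's actual pipeline.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

# Placeholder grayscale images in [0, 1]; a real check would compare matched
# real / synthetic image pairs, optionally per colour channel.
real_img = np.random.rand(256, 256)
synthetic_img = np.random.rand(256, 256)

score = ssim(real_img, synthetic_img, data_range=1.0)
# >= 0.9 suggests near-identical images (leakage risk); <= 0.6 suggests low similarity.
print(f"SSIM: {score:.3f}")
```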

Premium Report Information

If you purchase the premium report product, you will be able to view more detailed analysis results for the dataset.