
Forest Fire Detection using RCNN


Model Overview



Forest fires are a serious concern since they wreak havoc on the environment, property, and people's lives. Early detection of a forest fire is therefore critical: it helps protect vegetation, wildlife habitat, and the surrounding region and resources, and it makes it possible to control the spread of the fire in its early phases, which is why forest monitoring plays such an important role. Forest fires are caused by a variety of factors, both natural and man-made; among the natural causes, high air temperatures, lightning, and dryness (low humidity) provide an ideal environment for a fire to start.

Dataset Description:

The data was gathered to train a model that can distinguish between images that contain fire (fire images) and regular images (non-fire images), so the task is a simple binary classification problem. The data is divided into two folders: one for outdoor fire photographs, which comprises 755 images (some of which contain smoke), and another for non-fire images, which contains 244 nature images (e.g., forests, trees, grass, rivers, people, lakes, animals, roads, and waterfalls). Note: the data is skewed, meaning the two classes (folders) do not have an equal number of samples, so make sure the validation set has an equal number of images per class (e.g., 40 images from each of the fire and non-fire classes).
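Since the classes are imbalanced, one simple way to honor the note above is to sample the same number of images per class for validation before training. A minimal sketch, assuming a DataFrame with PNG and CATEGORY columns like the one built below; balanced_validation_split is a hypothetical helper, and the per-class count of 40 is just the figure suggested in the dataset note:

import pandas as pd

def balanced_validation_split(df, per_class=40, seed=42):
    # Sample `per_class` rows from each CATEGORY for validation;
    # everything else remains available for training.
    val = (df.groupby("CATEGORY", group_keys=False)
             .apply(lambda g: g.sample(n=per_class, random_state=seed)))
    train = df.drop(val.index)
    return train, val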


Let's import the required libraries for the use case.

import pandas as pd
import numpy as np
import datetime as dt
import os
import os.path
from pathlib import Path
import glob
import cv2
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense, BatchNormalization
from tensorflow.keras.layers import SpatialDropout2D
from tensorflow.keras.callbacks import EarlyStopping
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report, roc_auc_score, roc_curve
from tensorflow.keras.utils import plot_model
from tensorflow.keras.preprocessing import image
from PIL import Image
from tensorflow.keras.preprocessing.image import ImageDataGenerator, img_to_array, load_img
import tensorflow as tf

Let us now create the data frame.

Fire_Dataset_Path = Path('E:\\fire\\fire_dataset')
PNG_Path = list(Fire_Dataset_Path.glob(r"*/*.png"))
PNG_Labels = list(map(lambda x: os.path.split(os.path.split(x)[0])[1],PNG_Path))

print("FIRE: ", PNG_Labels.count("fire_images"))
print("NO_FIRE: ", PNG_Labels.count("non_fire_images"))



PNG_Path_Series = pd.Series(PNG_Path,name="PNG").astype(str)
PNG_Labels_Series = pd.Series(PNG_Labels,name="CATEGORY")
print(PNG_Path_Series)


Let us now rename the label values to more readable names.

PNG_Labels_Series.replace({"non_fire_images":"NO_FIRE","fire_images":"FIRE"},inplace=True)
print(PNG_Labels_Series)



Main_Train_Data = pd.concat([PNG_Path_Series,PNG_Labels_Series],axis=1)
print(Main_Train_Data.head(-1))
Main_Train_Data = Main_Train_Data.sample(frac=1).reset_index(drop=True)
print(Main_Train_Data.head(-1))
print(Main_Train_Data["PNG"][2])
print(Main_Train_Data["CATEGORY"][2])
print(Main_Train_Data["PNG"][200])
print(Main_Train_Data["CATEGORY"][200])
print(Main_Train_Data["PNG"][45])
print(Main_Train_Data["CATEGORY"][45])
print(Main_Train_Data["PNG"][852])
print(Main_Train_Data["CATEGORY"][852])

We need to remove the image 'non_fire.189.png', as its format is incompatible with the image loader.

remove_PNG = ("E:\\fire\\fire_dataset\\non_fire_images\\non_fire.189.png")
Main_Train_Data = Main_Train_Data.loc[~(Main_Train_Data.loc[:,'PNG'] == remove_PNG),:]
print(Main_Train_Data.loc[Main_Train_Data.loc[:,'PNG'] == remove_PNG,:])
print(Main_Train_Data.head(-1))
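Rather than hard-coding the offending file, one could also scan the whole dataset for unreadable images up front. A minimal sketch using PIL (already imported above); find_bad_images is a hypothetical helper, not part of the original code:

from PIL import Image

def find_bad_images(paths):
    # Return the paths of images that PIL cannot parse.
    bad = []
    for p in paths:
        try:
            with Image.open(p) as img:
                img.verify()  # raises if the file is truncated or corrupt
        except Exception:
            bad.append(p)
    return bad

# Usage: drop every unreadable file from the DataFrame
# bad = find_bad_images(Main_Train_Data["PNG"])
# Main_Train_Data = Main_Train_Data[~Main_Train_Data["PNG"].isin(bad)]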


Let us now visualize the class distribution with a bar plot.

plt.style.use("dark_background")
sns.countplot(x="CATEGORY", data=Main_Train_Data)  # pass the column by keyword; positional data is deprecated in newer seaborn
plt.show()



Main_Train_Data['CATEGORY'].value_counts().plot.pie(figsize=(5,5))
plt.show()


Let us now look at a few images from the dataset.

# Preview a few sample images. Note: cv2.imread returns BGR channel
# order, so colors look swapped in these previews; the grid further
# below corrects this.
for idx in [0, 993, 20, 48]:
    plt.figure(figsize=(10,10))
    x = cv2.imread(Main_Train_Data["PNG"][idx])
    plt.imshow(x)
    plt.xlabel(x.shape)
    plt.title(Main_Train_Data["CATEGORY"][idx])
plt.show()







fig, axes = plt.subplots(nrows=5, ncols=5, figsize=(10,10),
                         subplot_kw={"xticks":[], "yticks":[]})

for i, ax in enumerate(axes.flat):
    ax.imshow(cv2.imread(Main_Train_Data["PNG"][i]))
    ax.set_title(Main_Train_Data["CATEGORY"][i])
plt.tight_layout()
plt.show()





fig, axes = plt.subplots(nrows=5, ncols=5, figsize=(10,10),
                         subplot_kw={"xticks":[], "yticks":[]})

for i, ax in enumerate(axes.flat):
    x = cv2.imread(Main_Train_Data["PNG"][i])
    x = cv2.cvtColor(x, cv2.COLOR_BGR2RGB)  # cv2 loads BGR; convert for matplotlib
    ax.imshow(x)
    ax.set_title(Main_Train_Data["CATEGORY"][i])
plt.tight_layout()
plt.show()



Train/Test Split and Image Generators:


Train_Generator = ImageDataGenerator(rescale=1./255,
                                     shear_range=0.3,
                                     zoom_range=0.2,
                                     brightness_range=[0.2,0.9],
                                     rotation_range=30,
                                     horizontal_flip=True,
                                     vertical_flip=True,
                                     fill_mode="nearest",
                                     validation_split=0.1)

Test_Generator = ImageDataGenerator(rescale=1./255)

Let us split the data into training and testing sets.


Train_Data,Test_Data = train_test_split(Main_Train_Data,train_size=0.9,random_state=42,shuffle=True)
print("TRAIN SHAPE: ",Train_Data.shape)
print("TEST SHAPE: ",Test_Data.shape)



print(Train_Data.head(-1))
print("----"*20)
print(Test_Data.head(-1))

print(Test_Data["CATEGORY"].value_counts())


Let us convert the labels to numeric format with a LabelEncoder; we will use these encoded labels to evaluate the predictions later.

encode = LabelEncoder()
For_Prediction_Class = encode.fit_transform(Test_Data["CATEGORY"])
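It is worth checking how the encoder maps class names to integers, since the predicted class indices must line up with this mapping later. LabelEncoder sorts class names alphabetically, which matches the alphabetical class indices that flow_from_dataframe assigns below:

print(encode.classes_)  # expected: ['FIRE' 'NO_FIRE'], i.e. FIRE -> 0, NO_FIRE -> 1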

Let us see what an augmented image from the generator looks like.

example_Image = Train_Data["PNG"][99]
Load_Image = image.load_img(example_Image, target_size=(200,200))
Array_Image = image.img_to_array(Load_Image)
Array_Image = Array_Image.reshape((1,) + Array_Image.shape)

# Draw four augmented variants of the same image from the generator
i = 0
for batch in Train_Generator.flow(Array_Image, batch_size=1):
    plt.figure(i)
    IMG = plt.imshow(image.array_to_img(batch[0]))
    i += 1
    if i % 4 == 0:
        break
plt.show()








Let us now feed the DataFrames through the generators to produce batched image tensors.

Train_IMG_Set = Train_Generator.flow_from_dataframe(dataframe=Train_Data,
                                                    x_col="PNG",
                                                    y_col="CATEGORY",
                                                    color_mode="rgb",
                                                    class_mode="categorical",
                                                    batch_size=32,
                                                    subset="training")



Validation_IMG_Set = Train_Generator.flow_from_dataframe(dataframe=Train_Data,
                                                         x_col="PNG",
                                                         y_col="CATEGORY",
                                                         color_mode="rgb",
                                                         class_mode="categorical",
                                                         batch_size=32,
                                                         subset="validation")



Test_IMG_Set = Test_Generator.flow_from_dataframe(dataframe=Test_Data,
                                                  x_col="PNG",
                                                  y_col="CATEGORY",
                                                  color_mode="rgb",
                                                  class_mode="categorical",
                                                  batch_size=32,
                                                  shuffle=False)  # keep order so predictions align with the encoded labels


Let us now build the model: a recurrent CNN (RCNN) that stacks convolutional feature extractors followed by bidirectional LSTM and GRU layers.

from tensorflow.keras.layers import (Dense, Dropout, Flatten, Conv2D, MaxPooling2D,
                                     BatchNormalization, TimeDistributed, Bidirectional,
                                     GRU, LSTM)

Model_Three = Sequential()

# Convolutional feature extractor
Model_Three.add(Conv2D(12, (3,3), activation="relu", input_shape=(256,256,3)))
Model_Three.add(BatchNormalization())
Model_Three.add(MaxPooling2D((2,2)))

Model_Three.add(Conv2D(24, (3,3), activation="relu"))
Model_Three.add(Dropout(0.2))
Model_Three.add(MaxPooling2D((2,2)))

# Treat the height axis of the feature map as a sequence of rows and
# flatten each row, so the recurrent layers can scan it top to bottom
Model_Three.add(TimeDistributed(Flatten()))
Model_Three.add(Bidirectional(LSTM(32, return_sequences=True,
                                   dropout=0.5, recurrent_dropout=0.5)))
Model_Three.add(Bidirectional(GRU(32, return_sequences=True,
                                  dropout=0.5, recurrent_dropout=0.5)))

# Classification head
Model_Three.add(Flatten())
Model_Three.add(Dense(256, activation="relu"))
Model_Three.add(Dropout(0.5))
Model_Three.add(Dense(2, activation="softmax"))

Call_Back = tf.keras.callbacks.EarlyStopping(monitor="loss", patience=5, mode="min")
Model_Three.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

RCNN_Model = Model_Three.fit(Train_IMG_Set,
                             validation_data=Validation_IMG_Set,
                             callbacks=[Call_Back],
                             epochs=100)


Let us now evaluate the model on the test set and visualize the training and validation accuracy and loss curves.

Model_Results_Three = Model_Three.evaluate(Test_IMG_Set)
print("LOSS: " + "%.4f" % Model_Results_Three[0])
print("ACCURACY: " + "%.2f" % Model_Results_Three[1])



plt.plot(RCNN_Model.history["accuracy"], label="train")
plt.plot(RCNN_Model.history["val_accuracy"], label="validation")
plt.xlabel("EPOCH")
plt.ylabel("ACCURACY")
plt.legend()
plt.show()



plt.plot(RCNN_Model.history["loss"], label="train")
plt.plot(RCNN_Model.history["val_loss"], label="validation")
plt.xlabel("EPOCH")
plt.ylabel("LOSS")
plt.legend()
plt.show()


Let us now generate predictions on the test set.

Prediction_Three = Model_Three.predict(Test_IMG_Set)
Prediction_Three = Prediction_Three.argmax(axis=-1)
print(Prediction_Three)
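Since the test generator was created with shuffle=False, the predicted class indices line up with the encoded labels from earlier, so we can score the predictions directly. A short sketch using the sklearn metrics already imported above; it assumes the encoder's FIRE/NO_FIRE order matches the generator's class indices, which holds here since both sort class names alphabetically:

print("Test accuracy: %.4f" % accuracy_score(For_Prediction_Class, Prediction_Three))
print(confusion_matrix(For_Prediction_Class, Prediction_Three))
print(classification_report(For_Prediction_Class, Prediction_Three,
                            target_names=encode.classes_))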


Saving the model.

Model_Three.save("E:\\firedetection\\model3.h5")
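To use the saved model later, it can be reloaded and run on a single image. A minimal sketch, with a hypothetical sample-image path; the preprocessing mirrors the test generator (resize to 256x256 and rescale by 1/255):

loaded = tf.keras.models.load_model("E:\\firedetection\\model3.h5")
img = image.load_img("E:\\fire\\sample.png", target_size=(256,256))  # hypothetical path
arr = image.img_to_array(img) / 255.0
pred = loaded.predict(arr.reshape((1,) + arr.shape)).argmax(axis=-1)[0]
print("FIRE" if pred == 0 else "NO_FIRE")  # class 0 = FIRE (alphabetical class order)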
