
Building Surface Crack Detection


Model Overview

Cracks in Structures and Buildings:

Cracking on the surface of a structural frame has long been a major concern for engineers, because cracks can significantly affect a structure's safety, serviceability, and reliability. As cracks form and propagate, the effective load-bearing cross-section is reduced, which raises stress concentrations and can lead to the eventual collapse of concrete or other structures. Since concrete deteriorates over time, cracking can appear in almost any concrete element; concrete blocks, columns, slabs, and brick walls are common examples. The shape, number, width, and length of cracks on a structural surface indicate the early degree of degradation and the remaining load-bearing capacity of the concrete framework. Broadly, cracks fall into two categories: structural cracks, which threaten the integrity of the member, and non-structural cracks, which are often caused by internal fissures. Both can vary greatly in size.




Project Implementation:


Construction companies can use the project to detect cracks and help determine the health of a concrete structure.


Dataset:


The dataset contains images of various concrete surfaces with and without cracks. The images are divided into 'negative' (without crack) and 'positive' (with crack) folders for image classification. Each class has 19,950 images, i.e. 39,900 images in total, each 227 x 227 pixels with RGB channels. The dataset was generated from 458 high-resolution images (4032 x 3024 pixels) using the method proposed by Zhang et al. (2016).
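
As a quick sanity check on the download, counting the files in each class folder should match the figures above. This is a minimal sketch, assuming the two class folders are named 'Negative' and 'Positive' inside the data directory used later in this walkthrough:


import os

data_dir = '/Building Surface Crack Detection/Data'  # same path as in the walkthrough below

# Assumed folder names; adjust if your copy of the dataset differs.
for class_name in ['Negative', 'Positive']:
    class_dir = os.path.join(data_dir, class_name)
    print(class_name, ':', len(os.listdir(class_dir)), 'images')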





VGG16:


'VGG16' is a convolutional neural network model proposed by K. Simonyan and A. Zisserman of the University of Oxford in the paper 'Very Deep Convolutional Networks for Large-Scale Image Recognition.' The model achieves 92.7% top-5 test accuracy on 'ImageNet,' a dataset of over 14 million images belonging to 1,000 classes, and was one of the top models submitted to ILSVRC-2014. It improves on 'AlexNet' by replacing large kernel-sized filters with multiple 3×3 filters stacked one after another. 'VGG16' was trained for weeks on NVIDIA Titan Black GPUs.
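
To make the stacked-3×3 idea concrete, here is a small illustrative sketch (not part of the original write-up): two stacked 3×3 convolutions cover the same 5×5 receptive field as a single 5×5 convolution, but with fewer parameters. The channel count and input size below are arbitrary:


from tensorflow.keras import layers, models

C = 64  # channel count, arbitrary for illustration

# One 5x5 convolution: 5*5*C*C + C parameters.
single_5x5 = models.Sequential([
    layers.Conv2D(C, (5, 5), padding='same', activation='relu',
                  input_shape=(32, 32, C)),
])

# Two stacked 3x3 convolutions: 2*(3*3*C*C + C) parameters,
# same 5x5 receptive field, plus an extra non-linearity in between.
stacked_3x3 = models.Sequential([
    layers.Conv2D(C, (3, 3), padding='same', activation='relu',
                  input_shape=(32, 32, C)),
    layers.Conv2D(C, (3, 3), padding='same', activation='relu'),
])

print('one 5x5:', single_5x5.count_params())   # 102,464
print('two 3x3:', stacked_3x3.count_params())  # 73,856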

Understanding the code:


First, let us import the required libraries for our project.


import os
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from matplotlib.image import imread
import cv2

# All Keras imports come from tensorflow.keras to avoid mixing the
# standalone keras package with tf.keras.
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input


Now, load the data into our system.


path = '/Building Surface Crack Detection/Data'

datagen = ImageDataGenerator(rescale=1.0/255, validation_split=0.3)

train_data = datagen.flow_from_directory(path,
                                         target_size=(227, 227),
                                         batch_size=64,
                                         class_mode='categorical',
                                         subset='training')

# shuffle=False keeps the prediction order aligned with test_data.classes,
# which the classification report at the end relies on.
test_data = datagen.flow_from_directory(path,
                                        target_size=(227, 227),
                                        batch_size=64,
                                        class_mode='categorical',
                                        shuffle=False,
                                        subset='validation')

As you can see, I used the 'ImageDataGenerator' class to rescale pixel values to the [0, 1] range and to reserve 30% of the images as a validation split.

Also, I loaded the train and test data into the kernel using the 'flow_from_directory' function.
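
Note that the generator above only rescales pixels and carves out the validation split; if heavier augmentation were desired, 'ImageDataGenerator' also supports random transforms. A hypothetical example (not used in this training run), with illustrative transform values:


from tensorflow.keras.preprocessing.image import ImageDataGenerator

aug_datagen = ImageDataGenerator(
    rescale=1.0/255,
    rotation_range=15,        # small random rotations
    width_shift_range=0.1,    # random horizontal shifts
    height_shift_range=0.1,   # random vertical shifts
    horizontal_flip=True,     # surface cracks have no canonical left/right
    validation_split=0.3,
)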



Next, let us dive into the modelling part of the project.


vgg = VGG16(include_top=False, weights='imagenet', input_shape=(227,227,3))
vgg.summary()

So, I chose the 'VGG16' model to get the best out of our data.
As you can see, I initialized the model using the 'VGG16' function with 'imagenet' weights; 'include_top=False' drops VGG16's original fully connected classifier so that we can attach our own head.


# Freeze the pre-trained convolutional base so only the new head is trained.
for layer in vgg.layers:
    layer.trainable = False

# Attach a new classification head on top of the frozen base.
x = Flatten()(vgg.output)
x = Dense(512, activation='relu')(x)
x = Dense(512, activation='relu')(x)
prediction = Dense(2, activation='sigmoid')(x)
model = Model(inputs=vgg.input, outputs=prediction)
model.summary()

I added 'Dense' layers on top of the frozen base to adapt the model to our two-class use case.
Finally, I used the 'sigmoid' activation in the output layer to produce per-class probabilities.


model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# fit_generator is deprecated; model.fit accepts generators directly.
history = model.fit(train_data,
                    validation_data=test_data,
                    epochs=5,
                    steps_per_epoch=len(train_data),
                    validation_steps=len(test_data))

I used 'binary_crossentropy' as the loss function and 'accuracy' as the metric to track the model's performance.
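
A side note on this design choice: because the two classes are mutually exclusive, an equally valid head would be a single 'softmax' output layer paired with 'categorical_crossentropy'. This hypothetical alternative is not what was used above:


prediction = Dense(2, activation='softmax')(x)  # softmax: the two probabilities sum to 1
model = Model(inputs=vgg.input, outputs=prediction)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])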

Now let us look at the model's performance graphs.


plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()

I used the 'history' object returned by 'fit' together with Matplotlib's 'plot' function to display the model's performance graphs.



Finally, let us have a peek into our model's classification report.


# Classification Report

from sklearn.metrics import classification_report

# predict_generator is deprecated; model.predict accepts generators directly.
test_labels = test_data.classes
predictions = model.predict(test_data, verbose=1)
y_pred = np.argmax(predictions, axis=-1)
print(classification_report(test_labels, y_pred))
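
Since 'cv2' and 'load_model' were imported at the top but never used, here is a minimal sketch of how the trained model could be saved and later reused to score a single image. The file names ('crack_detector.h5', 'sample_surface.jpg') are hypothetical:


# Save the trained model (hypothetical file name).
model.save('crack_detector.h5')

# Later: reload the model and classify one image.
loaded = load_model('crack_detector.h5')

img = cv2.imread('sample_surface.jpg')        # hypothetical image; loaded as BGR
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)    # match the RGB order used in training
img = cv2.resize(img, (227, 227)) / 255.0     # same target size and rescaling as training
probs = loaded.predict(img[np.newaxis, ...])  # shape (1, 2)

class_names = list(train_data.class_indices.keys())  # e.g. ['Negative', 'Positive']
print(class_names[int(np.argmax(probs))])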



Thank you for your time.



