Lyme Disease Detection


Model Overview

What is Lyme disease?




Lyme disease is the most common vector-borne disease in the United States. It is caused by the bacterium Borrelia burgdorferi and, rarely, by Borrelia mayonii.


It is transmitted to humans through the bite of infected black-legged ticks. Typical symptoms include fever, headache, fatigue, and a characteristic skin rash called erythema migrans. If left untreated, the infection can spread to joints, the heart, and the nervous system.


 Dataset Description:


The dataset contains images of erythema migrans (EM), also known as the "bull's-eye rash", which is one of the most prominent symptoms of Lyme disease. It also contains several other types of rashes that are often confused with the EM rash by doctors and other medical professionals.


You can access the following link to download the dataset -  https://drive.google.com/file/d/1QgqbgdGZ7CGPFjmojeinYC0Mg_Jb_z1e/view?usp=sharing

What is ResNet-50?




ResNet, short for Residual Network, is a classic neural network used as a backbone for many computer vision tasks. It won the ImageNet challenge in 2015.




The fundamental breakthrough with ResNet was that it allowed us to successfully train extremely deep neural networks with 150+ layers. Prior to ResNet, training very deep networks was difficult due to the problem of vanishing gradients; the residual (skip) connections give gradients a shortcut path around each block.
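
To make the idea of a residual connection concrete, here is a minimal sketch of a single residual block written with the Keras functional API. It only illustrates the shortcut idea; the layer sizes are illustrative and are not taken from the actual ResNet-50 architecture used below.

from tensorflow.keras import layers

def residual_block(x, filters=64):
    # two convolutions, then the block's input is added back in,
    # giving gradients a direct shortcut path through the network
    shortcut = x
    y = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    y = layers.Conv2D(filters, 3, padding='same')(y)
    y = layers.Add()([shortcut, y])  # assumes x already has `filters` channels
    return layers.Activation('relu')(y)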



Understanding the code:


The following are all the required libraries.


import os
import pandas as pd
import numpy as np
from numpy import asarray
from PIL import Image
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array, array_to_img
# preprocess_input comes from resnet50 to match the ResNet-50 backbone used below
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from tensorflow.keras.layers import Dense, Activation, Flatten, Dropout
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import SGD, Adam
from tensorflow.keras.callbacks import TensorBoard, ModelCheckpoint

 


Now, before getting deep into the project, let us have a look at a few reference images from the dataset.






Data preprocessing was performed on the images, followed by data augmentation.

Here we define the parameters used for data augmentation and model training.


HEIGHT = 128
WIDTH = 128
BATCH_SIZE = 8

class_list = ["class_1", "class_2"]
FC_LAYERS = [1024, 512, 256]
dropout = 0.5
NUM_EPOCHS = 200

 


train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input,
                                   rotation_range=90,
                                   horizontal_flip=True,
                                   vertical_flip=True,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   zoom_range=0.1)

test_datagen = ImageDataGenerator(preprocessing_function=preprocess_input,
                                  rotation_range=90,
                                  horizontal_flip=True,
                                  vertical_flip=False)
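
The model is later fitted with train_generator and test_generator, but the post does not show how they are built. A minimal sketch, assuming the downloaded images are organised into one sub-folder per class under hypothetical data/train and data/test directories:

train_generator = train_datagen.flow_from_directory("data/train",  # hypothetical path
                                                    target_size=(HEIGHT, WIDTH),
                                                    batch_size=BATCH_SIZE,
                                                    class_mode="categorical")

test_generator = test_datagen.flow_from_directory("data/test",  # hypothetical path
                                                  target_size=(HEIGHT, WIDTH),
                                                  batch_size=BATCH_SIZE,
                                                  class_mode="categorical")

Using class_mode="categorical" yields one-hot labels, matching the two-unit softmax head built below.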

 


Coming to the modelling part, ResNet-50 is used to classify our images and obtain the most accurate predictions possible.


Pre-trained ImageNet weights are used to get the most out of our model.


def build_model(base_model, dropout, fc_layers, num_classes):
    # freeze the pre-trained ResNet-50 layers
    for layer in base_model.layers:
        layer.trainable = False

    x = base_model.output
    x = Flatten()(x)
    # stack fully connected layers with dropout on top of the backbone
    for fc in fc_layers:
        print(fc)
        x = Dense(fc, activation='relu')(x)
        x = Dropout(dropout)(x)
    predictions = Dense(num_classes, activation='softmax')(x)
    finetune_model = Model(inputs=base_model.input, outputs=predictions)
    return finetune_model

base_model_1 = ResNet50(weights='imagenet',
                        include_top=False,
                        input_shape=(HEIGHT, WIDTH, 3))

resnet50_model = build_model(base_model_1,
                             dropout=dropout,
                             fc_layers=FC_LAYERS,
                             num_classes=len(class_list))

adam = Adam(learning_rate=0.00001)
resnet50_model.compile(adam, loss="binary_crossentropy", metrics=["accuracy"])

filepath = "./checkpoints" + "ResNet50" + "_model_weights.h5"
checkpoint = ModelCheckpoint(filepath, monitor="accuracy", verbose=1, mode="max")
cb = TensorBoard(log_dir="/home/ubuntu/")
callbacks_list = [checkpoint, cb]

# print(train_generator.class_indices)

resnet50_model.summary()

 


history = resnet50_model.fit(train_generator, epochs=NUM_EPOCHS, steps_per_epoch=100,
                             shuffle=True, validation_data=test_generator,
                             callbacks=callbacks_list)


Finally, let us check the training and validation accuracy and loss of our model. As you can see, we achieved about 90% accuracy.


 


acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss=history.history['loss']
val_loss=history.history['val_loss']

epochs_range = range(NUM_EPOCHS)

plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
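
As a final illustration, here is a minimal sketch of running the trained model on a single new image. The file name is hypothetical, and the order of class_list is assumed to match train_generator.class_indices.

img = load_img("sample_rash.jpg", target_size=(HEIGHT, WIDTH))  # hypothetical image file
x = img_to_array(img)
x = preprocess_input(np.expand_dims(x, axis=0))  # same ResNet-50 preprocessing as training
probs = resnet50_model.predict(x)[0]
print(dict(zip(class_list, probs)))  # predicted probability per class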






Thank You for your time.


 

