
White Blood Cell classification


Model Overview


Blood cell images

Blood cell images consist of white blood cells (the large, purple-shaded regions), red blood cells, and platelets (the small, light purple spots).


Usage

Blood cell images assist in identifying various deficiencies and illnesses. Red blood cells carry oxygen throughout the body, and a shortage of them causes anemia. Depletion of platelets can cause excessive bleeding. White blood cells are responsible for the body's immunity, and their disorders can weaken the immune system and open the door to a host of infections.


Model

Data Source


The dataset was taken from the BCCD dataset. The image folder consists of three subfolders: TRAIN, TEST_SIMPLE, and TEST. Each of the three is divided into four categories: LYMPHOCYTE, MONOCYTE, NEUTROPHIL, and EOSINOPHIL. The images in the TRAIN and TEST folders are augmented.
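Given that layout, collecting the image paths and class labels for one split is straightforward. Below is a minimal sketch; the `list_images` helper and the `.jpeg` extension are assumptions for illustration, not part of the original code:

```python
from pathlib import Path

# The four white-blood-cell classes, matching the subfolder names
CLASSES = ["EOSINOPHIL", "LYMPHOCYTE", "MONOCYTE", "NEUTROPHIL"]

def list_images(root, split):
    """Collect (path, label-index) pairs for one split
    (TRAIN, TEST_SIMPLE, or TEST)."""
    pairs = []
    for idx, name in enumerate(CLASSES):
        for p in sorted((Path(root) / split / name).glob("*.jpeg")):
            pairs.append((p, idx))
    return pairs
```

The label index simply follows the alphabetical order of the class folders, which is also the order most directory-based loaders would assign.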




Data Preprocessing

The images were resized to 128x128x3, and a mask was created for each by manually thresholding the pixel values. The masks were then dilated with a 3x3 kernel. Below is the code:

import cv2
import numpy as np

def mask(path):
    # Read the image, convert BGR -> RGB, and resize to 128x128
    x = cv2.resize(cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB), (128, 128))

    # Start from an all-white mask, then zero out the background:
    # pixels that are bright in all three channels are treated as background
    m = np.ones(x.shape) * 255.0
    m[(x[:, :, 0] >= 160) & (x[:, :, 1] >= 140) & (x[:, :, 2] >= 140)] = [0.0, 0.0, 0.0]

    # Dilate the mask with a 3x3 kernel to fill small gaps
    kernel = np.ones((3, 3), dtype=np.uint8)
    m = cv2.dilate(m, kernel, iterations=2)

    # Keep only the pixels covered by the mask
    return cv2.bitwise_and(x, np.array(m, np.uint8))
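The thresholding step can be illustrated on a tiny synthetic image without OpenCV: pixels that are bright in all three channels (R >= 160, G >= 140, B >= 140) are treated as background and zeroed, while darker, stained pixels survive. This is a pure-NumPy restatement of the masking logic above, for illustration only:

```python
import numpy as np

def threshold_mask(x):
    """Background suppression as in mask(): bright pixels in all three
    channels are zeroed; everything else is kept as white (255)."""
    m = np.full(x.shape, 255.0)
    m[(x[:, :, 0] >= 160) & (x[:, :, 1] >= 140) & (x[:, :, 2] >= 140)] = 0.0
    return m

# Synthetic 2x2 image: one bright background-like pixel,
# one dark nucleus-like pixel, two black pixels
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = [200, 200, 200]   # bright in all channels -> masked out
img[0, 1] = [120, 60, 150]    # dark in R and G -> kept
```

Applying `threshold_mask(img)` zeroes only the bright pixel; the stained white-blood-cell regions, being darker, pass through.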

Output images




Model

The model is a Sequential network of convolutional and fully connected layers. A dropout of 0.4 is applied to the fully connected layer, and L2 weight regularization is added to keep the weights small and help the model generalize better. Four convolutional layers with 3x3 kernels and a stride of 1 are used, each followed by a max-pooling layer that reduces the image dimensions.


from tensorflow.keras.layers import (Input, Conv2D, Dense, Flatten, Dropout,
                                     BatchNormalization, LeakyReLU, MaxPooling2D)
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam

def conv_layer(filters):
    # One convolutional block: 3x3 conv (stride 1, 'same' padding,
    # L2 regularization) -> batch norm -> LeakyReLU -> 2x2 max-pooling
    model = Sequential()
    model.add(Conv2D(filters, (3, 3), strides=1, padding='same',
                     kernel_regularizer='l2'))
    model.add(BatchNormalization())
    model.add(LeakyReLU())
    model.add(MaxPooling2D((2, 2)))
    return model

def dens_layer(hiddenx):
    # Fully connected block: L2-regularized dense layer -> batch norm
    # -> dropout of 0.4 -> LeakyReLU
    model = Sequential()
    model.add(Dense(hiddenx, kernel_regularizer='l2'))
    model.add(BatchNormalization())
    model.add(Dropout(0.4))
    model.add(LeakyReLU())
    return model

def cnn(filter1, filter2, filter3, filter4, hidden1):
    # Four convolutional blocks, then one dense block
    # and a 4-way softmax over the cell classes
    model = Sequential([
        Input((128, 128, 3)),
        conv_layer(filter1),
        conv_layer(filter2),
        conv_layer(filter3),
        conv_layer(filter4),
        Flatten(),
        dens_layer(hidden1),
        Dense(4, activation='softmax'),
    ])
    model.compile(loss='categorical_crossentropy',
                  optimizer=Adam(learning_rate=0.0001),
                  metrics=['accuracy'])
    return model
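Since each convolutional block ends with 2x2 max-pooling (and the stride-1, 'same'-padded convolutions leave the spatial size unchanged), the 128x128 input shrinks by half per block: 128 -> 64 -> 32 -> 16 -> 8. The vector produced by Flatten() therefore has 8 * 8 * filter4 elements. A quick sanity check of this arithmetic in pure Python; the filter counts used here are illustrative, since the source does not give the actual values:

```python
def flattened_size(input_side=128, n_blocks=4, last_filters=128):
    """Each 2x2 max-pool halves the spatial side, so after n_blocks
    pools the side is input_side // 2**n_blocks; Flatten() then
    yields side * side * last_filters features."""
    side = input_side // 2 ** n_blocks
    return side * side * last_filters
```

For example, with a last convolutional layer of 128 filters the dense block receives 8 * 8 * 128 = 8192 inputs.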


Performance

Validation data



Testing data



 

