
Live Emotion Detection

PROBLEM:

Detecting the emotion of a person in a live video stream.

SOLUTION:

This problem can be solved using machine learning: we train a classifier on a data set of face images labelled with different emotions.

We have trained our model with two emotions, "HAPPY" and "SURPRISED".

PROCEDURE:

  • Collect the data for the different emotions required (happy, sad, surprised, sleepy, etc.).
  • Put the images for each emotion in a separate folder, and place all of these folders inside one main folder (see the example layout after this list).
  • Convert each image to grayscale and detect the faces using a Haar cascade.
  • Crop the faces, split the data into training and testing sets, and train the model.
  • The model can be any classifier of your choice (we have chosen the k-nearest neighbours classifier).
  • Test the model and check the training and testing accuracy.
  • Capture live video and detect the emotion on your face using the trained model.
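
For example, with the two emotions used here, the main folder might look like this (the folder and file names are only illustrative):

new\
    happy\
        img001.jpg
        img002.jpg
        ...
    surprised\
        img001.jpg
        img002.jpg
        ...

Note that load_data in the code below simply takes the first two sub-folders it finds, so the sub-folder names determine which label each image receives.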

Emotion Recognition Algorithm

Code:

The code below trains the model and checks the training and testing accuracy.

import cv2
import os
import numpy

# Path of the main folder which contains the images in sub-folders
# named after their emotions (here: happy and surprised)
mainfolder = r"C:\Users\dell\Desktop\new"

def face_detect(img):
    # Detect faces with a Haar cascade and return the first face region,
    # or None if no face is found
    fd = cv2.CascadeClassifier(r"C:\Users\dell\Desktop\ml\ebooks\haarcascade_frontalface_default.xml")
    faces = fd.detectMultiScale(img, 1.3, 5)
    if len(faces) == 0:
        return None
    (x, y, w, h) = faces[0]
    return img[y:y+h, x:x+w]

def load_data(mainfolder):
    folders = os.listdir(mainfolder)
    happy_path = mainfolder + '\\' + folders[0]
    surprised_path = mainfolder + '\\' + folders[1]
    # add more folders here if you take more than two emotions
    happyfiles = os.listdir(happy_path)
    surprisedfiles = os.listdir(surprised_path)
    faces = []
    labels = []
    for file in happyfiles:
        # read in grayscale (flag 0) so training matches the live pipeline
        img = cv2.imread(happy_path + '\\' + file, 0)
        face = face_detect(img)
        if face is not None:
            labels.append('happy')
            # normalise, resize to a fixed shape and flatten into a feature vector
            img3 = face / face.max()
            img4 = img3.astype(numpy.float32)
            img5 = cv2.resize(img4, (250, 250))
            faces.append(numpy.ndarray.flatten(img5))
    for file in surprisedfiles:
        img = cv2.imread(surprised_path + '\\' + file, 0)
        face = face_detect(img)
        if face is not None:
            labels.append('surprised')
            img6 = face / face.max()
            img7 = img6.astype(numpy.float32)
            img8 = cv2.resize(img7, (250, 250))
            faces.append(numpy.ndarray.flatten(img8))
    return faces, labels

faces, labels = load_data(mainfolder)
faces = numpy.array(faces)

from sklearn.model_selection import train_test_split
xtr, xts, ytr, yts = train_test_split(faces, labels, test_size=0.2)

from sklearn.neighbors import KNeighborsClassifier
alg = KNeighborsClassifier(n_neighbors=5)
alg.fit(xtr, ytr)

# testing accuracy
accuracy = alg.score(xts, yts)
print(accuracy)

# training accuracy
accuracy1 = alg.score(xtr, ytr)
print(accuracy1)

# sklearn.externals.joblib was removed in recent scikit-learn versions;
# import joblib directly instead
import joblib
joblib.dump(alg, 'model.pkl')
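
The choice of n_neighbors=5 above is arbitrary. A quick way to sanity-check it is cross-validation; the following is a minimal sketch, assuming faces and labels have already been built by load_data as above:

from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# try a few values of k and print the mean cross-validated accuracy for each
for k in (1, 3, 5, 7, 9):
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=k), faces, labels, cv=5)
    print(k, scores.mean())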

 

Now detect your live emotion using the code below:

import numpy
import cv2
import joblib  # sklearn.externals.joblib was removed in recent scikit-learn

alg = joblib.load('model.pkl')

# load the Haar cascade once, outside the loop, instead of on every frame
fd = cv2.CascadeClassifier(r"C:\Users\dell\Desktop\ml\ebooks\haarcascade_frontalface_default.xml")

p = cv2.VideoCapture(0)
ret, a = p.read()
while ret:
    ret, a = p.read()
    # convert the frame to grayscale, as in training
    a1 = cv2.cvtColor(a, cv2.COLOR_BGR2GRAY)
    faces = fd.detectMultiScale(a1, 1.3, 5)
    if len(faces) == 0:
        print('none')
    else:
        (x, y, w, h) = faces[0]
        cv2.rectangle(a, (x, y), (x+w, y+h), [0, 0, 255], 3)
        # crop the face and preprocess it exactly as during training
        img2 = a1[y:y+h, x:x+w]
        ig3 = img2 / img2.max()
        ig4 = ig3.astype(numpy.float32)
        ig5 = cv2.resize(ig4, (250, 250))
        ig5 = ig5.reshape(1, -1)
        y1 = alg.predict(ig5)
        print(y1)
        font = cv2.FONT_HERSHEY_SIMPLEX
        if y1[0] == 'happy':
            cv2.putText(a, 'HAPPY', (90, 50), font, 2, (0, 255, 255), 2, cv2.LINE_AA)
        else:
            cv2.putText(a, 'SURPRISED', (90, 50), font, 2, (0, 0, 255), 2, cv2.LINE_AA)
    cv2.imshow('img', a)
    if (cv2.waitKey(1) & 0xFF) == ord('q'):  # hit `q` to exit
        break

p.release()
cv2.destroyAllWindows()
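
If you want to sanity-check the saved model before pointing it at a webcam, you can run it on a single photo. Below is a minimal sketch, assuming the same cascade path as above; 'test.jpg' is a placeholder for any face image of your own:

import cv2
import numpy
import joblib

alg = joblib.load('model.pkl')
fd = cv2.CascadeClassifier(r"C:\Users\dell\Desktop\ml\ebooks\haarcascade_frontalface_default.xml")

# read the test image in grayscale, as in training ('test.jpg' is a placeholder)
img = cv2.imread('test.jpg', 0)
faces = fd.detectMultiScale(img, 1.3, 5)
if len(faces) == 0:
    print('no face found')
else:
    (x, y, w, h) = faces[0]
    face = img[y:y+h, x:x+w]
    # preprocess exactly as in the training pipeline
    face = (face / face.max()).astype(numpy.float32)
    face = cv2.resize(face, (250, 250)).reshape(1, -1)
    print(alg.predict(face))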

 
