Real-Time Emotion Detection Using Python🐍

Introduction

 
Detecting a person's emotion in real time from camera input is one of the more advanced applications of machine learning. Recognizing emotions from a camera feed is useful for a range of research and analytics purposes. The detection is done with a machine learning model: you train it on labeled examples of the different emotions yourself, or you use a dataset that is already available on the internet. In this article, we will build a Python program that detects the real-time emotion of a human being using the camera.
 

Installing Dependencies

 
To use this machine learning approach, you need to install several dependencies on your system from the command prompt. The machine learning framework I used is TensorFlow, which was developed by Google for machine learning tasks. Before analyzing a face, you first need to detect it; to know more about detecting faces using Python, you can refer to my article by clicking here. You also need a Haar cascade file for this process, which you can download from my GitHub page or from the download section. A quick sanity check for the installation is sketched after the install commands below.
 
You can install the dependencies using the commands given below. The Adam optimizer ships with Keras, so it does not need a separate install:

pip install opencv-python
pip install tensorflow
pip install numpy
pip install pandas
pip install keras
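
Once the packages are installed, it is worth confirming that everything imports and that the cascade file is in place. The snippet below is only a minimal sanity check, assuming the haarcascade_frontalface_default.xml file has already been downloaded into your working directory:

# quick sanity check for the installed dependencies and the cascade file
import cv2
import numpy as np
import pandas as pd
import tensorflow as tf

print("OpenCV version:", cv2.__version__)
print("TensorFlow version:", tf.__version__)

# the Haar cascade file must sit in the working directory (or give its full path)
cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
if cascade.empty():
    print("Cascade file not found or failed to load - download haarcascade_frontalface_default.xml first")
else:
    print("Haar cascade loaded successfully")

If the cascade reports as missing, download it again and keep it next to your scripts, because the detection code later loads it by file name.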

Training the Dataset

 
For training, I use a predefined dataset CSV file (FER2013) as the input for training the model. You can use the code given below to train the model on this dataset. Before that, make sure that all required files are in the same directory as the program; otherwise it will throw an error. You can download the dataset by clicking here.
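
Before running the full training script, it can help to take a quick look at the CSV and confirm it contains the emotion, pixels, and Usage columns that the code below relies on. This is a small optional sketch, assuming the file is saved as fer2013.csv in the working directory:

# quick look at the FER2013 CSV before training
import pandas as pd

df = pd.read_csv('fer2013.csv')

print(df.columns.tolist())            # expected: ['emotion', 'pixels', 'Usage']
print(df['Usage'].value_counts())     # how many rows are Training / PublicTest / PrivateTest
print(df['emotion'].value_counts())   # distribution of the 7 emotion labels
print(df.head(3))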
 
 
import sys, os
import pandas as pd
import numpy as np

from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization, AveragePooling2D
from keras.losses import categorical_crossentropy
from keras.optimizers import Adam
from keras.regularizers import l2
from keras.utils import np_utils

# pd.set_option('display.max_rows', 500)
# pd.set_option('display.max_columns', 500)
# pd.set_option('display.width', 1000)

df = pd.read_csv('fer2013.csv')

# print(df.info())
# print(df["Usage"].value_counts())
# print(df.head())

X_train, train_y, X_test, test_y = [], [], [], []

# split the rows into training and test sets based on the Usage column
for index, row in df.iterrows():
    val = row['pixels'].split(" ")
    try:
        if 'Training' in row['Usage']:
            X_train.append(np.array(val, 'float32'))
            train_y.append(row['emotion'])
        elif 'PublicTest' in row['Usage']:
            X_test.append(np.array(val, 'float32'))
            test_y.append(row['emotion'])
    except:
        print(f"error occurred at index: {index} and row: {row}")

num_features = 64
num_labels = 7
batch_size = 64
epochs = 30
width, height = 48, 48

X_train = np.array(X_train, 'float32')
train_y = np.array(train_y, 'float32')
X_test = np.array(X_test, 'float32')
test_y = np.array(test_y, 'float32')

# one-hot encode the emotion labels
train_y = np_utils.to_categorical(train_y, num_classes=num_labels)
test_y = np_utils.to_categorical(test_y, num_classes=num_labels)

# standardizing the data (zero mean, unit variance)
X_train -= np.mean(X_train, axis=0)
X_train /= np.std(X_train, axis=0)

X_test -= np.mean(X_test, axis=0)
X_test /= np.std(X_test, axis=0)

# reshape each row into a 48x48 single-channel image
X_train = X_train.reshape(X_train.shape[0], width, height, 1)
X_test = X_test.reshape(X_test.shape[0], width, height, 1)

# print(f"shape:{X_train.shape}")

# designing the CNN
# 1st convolution layer
model = Sequential()

model.add(Conv2D(64, kernel_size=(3, 3), activation='relu', input_shape=(X_train.shape[1:])))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
# model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Dropout(0.5))

# 2nd convolution layer
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
# model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Dropout(0.5))

# 3rd convolution layer
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(Conv2D(128, (3, 3), activation='relu'))
# model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))

model.add(Flatten())

# fully connected layers
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.2))

model.add(Dense(num_labels, activation='softmax'))

# model.summary()

# compiling the model
model.compile(loss=categorical_crossentropy,
              optimizer=Adam(),
              metrics=['accuracy'])

# training the model
model.fit(X_train, train_y,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(X_test, test_y),
          shuffle=True)

# saving the model to use it later on
fer_json = model.to_json()
with open("fer.json", "w") as json_file:
    json_file.write(fer_json)
model.save_weights("fer.h5")
 
 

Detecting Real-Time Emotion

 
To detect emotions, first run the train.py program to train the model. Training saves the model architecture to fer.json and the weights to fer.h5, which the detection script loads. Then you can use the code given below:
 
 
import os
import cv2
import numpy as np
from keras.models import model_from_json
from keras.preprocessing import image

# load model architecture
model = model_from_json(open("fer.json", "r").read())
# load weights
model.load_weights('fer.h5')

face_haar_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)

while True:
    ret, test_img = cap.read()  # captures a frame and returns a boolean value and the captured image
    if not ret:
        continue
    gray_img = cv2.cvtColor(test_img, cv2.COLOR_BGR2GRAY)

    faces_detected = face_haar_cascade.detectMultiScale(gray_img, 1.32, 5)

    for (x, y, w, h) in faces_detected:
        cv2.rectangle(test_img, (x, y), (x + w, y + h), (255, 0, 0), thickness=7)
        roi_gray = gray_img[y:y + h, x:x + w]  # cropping region of interest i.e. face area from image
        roi_gray = cv2.resize(roi_gray, (48, 48))
        img_pixels = image.img_to_array(roi_gray)
        img_pixels = np.expand_dims(img_pixels, axis=0)
        img_pixels /= 255

        predictions = model.predict(img_pixels)

        # find the index of the highest-scoring emotion
        max_index = np.argmax(predictions[0])

        emotions = ('angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral')
        predicted_emotion = emotions[max_index]

        cv2.putText(test_img, predicted_emotion, (int(x), int(y)), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)

    resized_img = cv2.resize(test_img, (1000, 700))
    cv2.imshow('Facial emotion analysis ', resized_img)

    if cv2.waitKey(10) == ord('q'):  # wait until the 'q' key is pressed
        break

cap.release()
cv2.destroyAllWindows()

Output Verification

 
Now you can run the videoTester.py program. Your camera turns on automatically and the program detects the emotion on your face.
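
If the camera window does not appear, the usual culprit is a missing file or an unavailable camera rather than the code itself. The sketch below is an optional pre-flight check that simply looks for the files the detection script opens and confirms the webcam can be accessed; the file names match the scripts above:

# optional pre-flight check before running videoTester.py
import os
import cv2

# files that the detection script opens by name
required_files = ['fer.json', 'fer.h5', 'haarcascade_frontalface_default.xml']
for name in required_files:
    status = "found" if os.path.isfile(name) else "MISSING"
    print(f"{name}: {status}")

# confirm the default webcam can be opened
cap = cv2.VideoCapture(0)
print("Camera opened:", cap.isOpened())
cap.release()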
 
 
 
 
 

Conclusion

 
This is just a first step in face and emotion detection. You can download the program files from my GitHub link by clicking here. Feel free to use this project for future enhancements.

