Speech-Based Depression Detection Using Convolutional Neural Networks
Swathy Krishna1, Anju J2

1Swathy Krishna, Department of Computer Science, Lal Bahadur Shastri Institute of Technology for Women, Thiruvananthapuram, Kerala, India.
2Anju J, Assistant Professor, Department of Computer Science, Lal Bahadur Shastri Institute of Technology for Women, Thiruvananthapuram, Kerala, India.
Manuscript received on June 15, 2020. | Revised Manuscript received on June 25, 2020. | Manuscript published on July 10, 2020. | PP: 405-408 | Volume-9 Issue-9, July 2020 | Retrieval Number: 100.1/ijitee.I7076079920 | DOI: 10.35940/ijitee.I7076.079920
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Depression has become a serious mental disorder affecting people of almost all age groups. Loss of interest in daily activities, a constant feeling of isolation and hopelessness cause significant impairment in daily life. The illness affects the physical and mental health of the individual and undermines his or her emotional stability. Emotions express one's state of mind in the form of thoughts, feelings or behavioural responses, and for a depressed individual these emotions are often negative in nature. Diagnosis of depression is a complex task, as the disease may go unrecognised by the patients themselves, and sometimes the patient may be reluctant to consult a doctor. Long-term ignorance of the illness may worsen the mental health of the one suffering from it; thus, early diagnosis of depression is of great significance. With the emergence of neural networks and pattern recognition, many researchers have worked on detecting depression by analysing non-verbal cues such as facial expressions, gestures, body language and tone of voice. Recent studies have shown that speech emotion analysis can effectively distinguish emotional features, and depressed speech differs from normal speech to a great extent: a depressed patient typically speaks in a low voice, slowly, sometimes stuttering or whispering, trying several times before speaking up or falling silent in the middle of a sentence. This paper proposes a CNN architecture that learns audio features from speech spectrograms to detect depression, identify emotions and infer the emotional severity of the individual. It also reviews some of the existing research methods in the field of depression analysis.
Keywords: CNN, Depression, Spectrogram, Speech.
Scope of the Article: Deep Learning
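To illustrate the kind of pipeline the abstract describes, the sketch below shows speech clips converted to log-mel spectrograms and classified as depressed vs. non-depressed by a small 2-D CNN. It is not the authors' implementation: the sample rate, number of mel bands, clip length, layer sizes and all other hyperparameters are illustrative assumptions rather than values from the paper.

import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras import layers, models

SR = 16000          # assumed sample rate
N_MELS = 64         # assumed number of mel bands
CLIP_FRAMES = 256   # assumed fixed number of spectrogram frames per clip

def speech_to_logmel(wav_path: str) -> np.ndarray:
    """Load a speech clip and convert it to a fixed-size log-mel spectrogram."""
    y, _ = librosa.load(wav_path, sr=SR)
    mel = librosa.feature.melspectrogram(y=y, sr=SR, n_mels=N_MELS)
    logmel = librosa.power_to_db(mel, ref=np.max)
    # Pad or truncate along the time axis to a fixed width.
    if logmel.shape[1] < CLIP_FRAMES:
        pad = CLIP_FRAMES - logmel.shape[1]
        logmel = np.pad(logmel, ((0, 0), (0, pad)), mode="constant")
    else:
        logmel = logmel[:, :CLIP_FRAMES]
    return logmel[..., np.newaxis]  # add a channel dimension for the CNN

def build_depression_cnn() -> tf.keras.Model:
    """A small 2-D CNN over (mel bands x time frames x 1) spectrogram 'images'."""
    model = models.Sequential([
        layers.Input(shape=(N_MELS, CLIP_FRAMES, 1)),
        layers.Conv2D(16, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),  # probability of depressed speech
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

In use, speech_to_logmel would be applied to each labelled clip to build the training set, and build_depression_cnn().fit(...) trained on the resulting spectrogram tensors; extending the final layer to multiple outputs would allow emotion categories or severity levels to be predicted in the same way.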