- Emotion and Mood Recognition
- Speech and Audio Processing
- Speech Recognition and Synthesis
- EEG and Brain-Computer Interfaces
- Speech and Dialogue Systems
- Music and Audio Processing
- Face and Expression Recognition
- Phonetics and Phonology Research
- Sentiment Analysis and Opinion Mining
- Neural Dynamics and Brain Function
- ECG Monitoring and Analysis
- Machine Learning and ELM
- Neural Networks and Applications
- Social Robot Interaction and HRI
- Hand Gesture Recognition Systems
- Software Testing and Debugging Techniques
- Advanced Memory and Neural Computing
- Industrial Vision Systems and Defect Detection
- Advanced Malware Detection Techniques
- Iron and Steelmaking Processes
- Network Security and Intrusion Detection
- Blind Source Separation Techniques
- Robotics and Automated Systems
- Brain Tumor Detection and Classification
- COVID-19 Diagnosis Using AI
Diyarbakır Gazi Yaşargil Training and Research Hospital
2021
University of Health Sciences
2021
Adana Science and Technology University
2017-2021
İskenderun Technical University
2013-2016
Mustafa Kemal University
2009-2015
University of Southern California
2003-2009
Southern California University for Professional Studies
2005-2006
The interaction between human beings and computers will be more natural if computers are able to perceive and respond to non-verbal communication such as emotions. Although several approaches have been proposed to recognize emotions based on facial expressions or speech, relatively limited work has been done to fuse these two, and other, modalities to improve the accuracy and robustness of the emotion recognition system. This paper analyzes the strengths and limitations of systems based only on acoustic information. It also discusses two approaches used...
Recognizing human emotions/attitudes from speech cues has gained increased attention recently. Most previous work has focused primarily on suprasegmental prosodic features calculated at the utterance level for this purpose. Notably, not much attention is paid to details at the segmental phoneme level in emotion modeling. Based on the hypothesis that different emotions have varying effects on the properties of speech sounds, this paper investigates the usefulness of phoneme-level modeling for the classification of emotional states from speech. Hidden Markov models (HMM)...
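The HMM-based classification described in this abstract can be illustrated with a minimal sketch: one discrete-observation HMM per emotion, scored with the scaled forward algorithm, and the input assigned to the highest-likelihood model. All parameter values below are toy numbers for illustration, not the paper's models.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM).

    obs : list of observation symbol indices
    pi  : (S,) initial state distribution
    A   : (S, S) transitions, A[i, j] = P(state j | state i)
    B   : (S, V) emissions,   B[s, v] = P(symbol v | state s)
    """
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for t in range(1, len(obs)):
        alpha = (alpha @ A) * B[:, obs[t]]
        scale = alpha.sum()          # rescale each step to avoid underflow
        log_lik += np.log(scale)
        alpha = alpha / scale
    return log_lik

def classify(obs, models):
    """Pick the label whose HMM assigns the highest likelihood.

    models : dict mapping label -> (pi, A, B) tuples
    """
    return max(models, key=lambda m: forward_log_likelihood(obs, *models[m]))
```

In a phoneme-level setup, a separate model (or phoneme-class-dependent model set) would be trained per emotion and the utterance scored against each.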
In this paper, we propose an approach for music emotion recognition based on a convolutional long short-term memory deep neural network (CLDNN) architecture. In addition, we construct a new Turkish emotional music database composed of 124 traditional music excerpts with a duration of 30 s each, and the performance of the proposed approach is evaluated on the constructed database. We utilize features obtained by feeding convolutional neural network (CNN) layers with log-mel filterbank energies and mel frequency cepstral coefficients (MFCCs) in addition to standard acoustic...
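As a rough illustration of the front end this abstract mentions, the sketch below computes log-mel filterbank energies and MFCCs from a raw waveform using NumPy only. The frame size, hop, and filter counts are arbitrary illustrative values, not the paper's configuration.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular mel filters over the positive-frequency FFT bins."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):                    # rising slope
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                    # falling slope
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def log_mel_and_mfcc(signal, sr=16000, n_fft=512, hop=160,
                     n_filters=26, n_mfcc=13):
    """Frame the signal, take |FFT|^2, apply mel filters, log, then DCT-II."""
    window = np.hamming(n_fft)
    frames = [signal[s:s + n_fft] * window
              for s in range(0, len(signal) - n_fft + 1, hop)]
    spec = np.abs(np.fft.rfft(np.array(frames), n_fft)) ** 2
    fb = mel_filterbank(n_filters, n_fft, sr)
    log_mel = np.log(spec @ fb.T + 1e-10)        # log-mel filterbank energies
    # DCT-II over the filter axis gives the MFCCs
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_mfcc), 2 * n + 1)
                   / (2 * n_filters))
    return log_mel, log_mel @ basis.T
```

Either feature matrix (frames x coefficients) could then be stacked as the 2-D input to CNN layers.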
In this study, we investigate acoustic properties of speech associated with four different emotions (sadness, anger, happiness, and neutral) intentionally expressed in speech by an actress. The aim is to obtain detailed knowledge on how speech is modulated when the speaker's emotion changes from the neutral state to a certain emotional state. The study is based on measurements of acoustic parameters related to prosody, vowel articulation and spectral energy distribution. Acoustic similarities and differences among the emotions are then explored using mutual information...
Few studies exist on the topic of emotion encoding in the speech articulatory domain. In this report, we analyze data collected during simulated emotional speech production and investigate differences in articulation among four emotion types: neutral, anger, sadness and happiness. The movements of the tongue tip, jaw and lower lip, along with the speech signal, were obtained from a subject using an electromagnetic articulography (EMA) system. The effectiveness of the articulatory parameters for emotion classification was also investigated. A general behavior observed was that...
ABSTRACT Problem: Brain tumors are among the most prevalent and lethal diseases. Early diagnosis and precise treatment are crucial. However, manual classification of brain tumors is a laborious and complex task. Aim: This study aimed to develop a fusion model to address certain limitations of previous works, such as covering diverse image modalities in various datasets. Method: We presented a hybrid transfer learning model, Fusion‐Brain‐Net, aimed at automatic brain tumor classification. The proposed method included four stages:...
Affective computing, especially from speech, is one of the key steps toward building more natural and effective human-machine interaction. In recent years, several emotional speech corpora in different languages have been collected; however, Turkish is not among the languages that have been investigated in the context of emotion recognition. For this purpose, a new Turkish emotional speech database, which includes 5,100 utterances extracted from 55 movies, was constructed. Each utterance in the database is labeled with emotion categories (happy, surprised, sad, angry,...
In this study, we investigate politeness and frustration behavior of children during their spoken interaction with computer characters in a game. We focus on automatically detecting frustrated, polite and neutral attitudes from the child's speech communication cues (acoustic and language), and we study the differences as a function of age and gender. The study is based on a Wizard-of-Oz dialog corpus of 103 children playing a voice activated game. Statistical analysis revealed that there was a significant gender effect, with girls' data exhibiting more...
Recent studies in our lab show that emotions in speech are manifested as, besides supra-segmental trends, distinct variations in phoneme-level prosodic and spectral parameters. In this paper, we further investigate the significance of this finding in the context of emotional speech synthesis. Specifically, we study signal property manipulation for transforming the emotional information conveyed by an utterance. We analyze the effect of individual and combined modifications of F0, duration, energy and spectrum using data recorded by a professional actress with...
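Two of the signal manipulations mentioned here, energy and duration modification, can be sketched in a few lines; F0 and spectral modification require pitch-synchronous (e.g. PSOLA) or vocoder processing and are omitted from this toy illustration, which is not the paper's method.

```python
import numpy as np

def scale_energy(x, gain_db):
    """Uniform energy modification: apply an amplitude gain given in dB."""
    return x * (10.0 ** (gain_db / 20.0))

def change_duration(x, factor):
    """Naive duration modification by linear-interpolation resampling.

    factor > 1 lengthens the signal. Note that played back at the same
    sample rate this also shifts pitch; keeping F0 fixed would require
    PSOLA or a vocoder, which is beyond this sketch.
    """
    n_out = int(round(len(x) * factor))
    t_out = np.linspace(0.0, len(x) - 1.0, n_out)
    return np.interp(t_out, np.arange(len(x)), x)
```

In practice such modifications would be applied per phoneme segment rather than to the whole utterance, matching the phoneme-level finding the abstract describes.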
The presence of disfluencies in spontaneous speech, while posing a challenge for robust automatic speech recognition, also offers a means of gaining additional insights into understanding a speaker's communicative and cognitive state. This paper analyzes children's disfluencies in the context of spoken dialog based computer game play, and addresses the detection of disfluency boundaries. Although several approaches have been proposed to detect disfluencies, relatively little work has been done to utilize visual information to improve performance and robustness...
Epileptic seizure detection and prediction from electroencephalography (EEG) signals is a vital area of research. In this study, the Second-Order Difference Plot (SODP) is used to extract features based on consecutive differences of time domain values for three states of EEG signals (pre-ictal, ictal and inter-ictal), and a Multi-Layer Neural Network classifier is used to classify these classes. The proposed technique is tested on a publicly available database, and the extracted features are also classified with Naive Bayes and k-nearest neighbor classifiers for comparison. As a result, it is shown that the overall...
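The SODP feature extraction this abstract describes can be sketched as follows: plot the first differences of the signal against the next first differences, and summarize the scatter with a confidence-ellipse area. The ellipse-area formula below (with the 1.7321 factor) is one published SODP convention, not necessarily the exact variant used in the paper.

```python
import numpy as np

def sodp_ellipse_area(x):
    """Second-Order Difference Plot summary feature of a 1-D signal.

    The SODP scatters d1(n) = x(n+1) - x(n) against d2(n) = x(n+2) - x(n+1);
    the area of the 95% confidence ellipse of that scatter is a common
    single-number summary used as a classifier feature.
    """
    d1 = np.diff(x)[:-1]                      # x(n+1) - x(n)
    d2 = np.diff(x)[1:]                       # x(n+2) - x(n+1)
    sx = np.sqrt(np.mean(d1 ** 2))
    sy = np.sqrt(np.mean(d2 ** 2))
    sxy = np.mean(d1 * d2)
    D = np.sqrt((sx ** 2 + sy ** 2) ** 2
                - 4.0 * (sx ** 2 * sy ** 2 - sxy ** 2))
    a = 1.7321 * np.sqrt(sx ** 2 + sy ** 2 + D)          # semi-major axis
    b = 1.7321 * np.sqrt(max(sx ** 2 + sy ** 2 - D, 0.0))  # semi-minor axis
    return np.pi * a * b
```

Computed per EEG segment, this area (optionally alongside the semi-axes themselves) yields a compact feature vector for the downstream classifiers.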
In this study, computer vision and a robot arm are used together to design a smart system which can identify objects from images and automatically perform given tasks. A serving application, in which specific tableware can be identified and lifted from a table, is presented in this work. A new database was created using images of meal tableware. The system consists of two phases: the first phase includes recognition of the objects through image processing algorithms and determination of the specified objects' coordinates; the second is the realization of the robot arm's movement. An artificial neural network for object...
The purpose of this study is to optimize the mass of a 1.5 MW wind turbine steel tower by applying the Genetic Algorithm (GA) method. In accordance with ASCE 7-98, AISC-89 and IEC 61400-1, the impact loads on the tower are calculated under the highest safety conditions, and the buckling strength of each section is checked against them by means of GA codes. The stiffness along the tower is ensured entirely while the mass is reduced through optimization.
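The GA-based constrained mass minimization described above can be illustrated with a minimal real-coded GA. The objective below is a hypothetical stand-in (mass proportional to thickness times diameter, with a penalty for violating a simplified "buckling" margin), not the paper's ASCE/AISC/IEC load model.

```python
import random

def genetic_minimize(objective, bounds, pop_size=60, generations=200,
                     mutation_rate=0.1, seed=0):
    """Minimal real-coded GA: elitism, tournament selection,
    blend crossover, Gaussian mutation. `objective` maps a vector to a cost."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=objective)
        new_pop = scored[:2]                               # keep the 2 elites
        while len(new_pop) < pop_size:
            p1 = min(rng.sample(scored, 3), key=objective)  # tournament pick
            p2 = min(rng.sample(scored, 3), key=objective)
            child = [a + rng.random() * (b - a) for a, b in zip(p1, p2)]
            for i, (lo, hi) in enumerate(bounds):           # mutate and clip
                if rng.random() < mutation_rate:
                    child[i] += rng.gauss(0.0, 0.1 * (hi - lo))
                child[i] = min(max(child[i], lo), hi)
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=objective)

def tower_cost(x):
    """Hypothetical objective: 'mass' grows with wall thickness t and
    diameter d; a large penalty enforces a toy buckling margin t*d >= 0.5."""
    t, d = x
    mass = 1000.0 * t * d
    penalty = 1e6 * max(0.0, 0.5 - t * d)
    return mass + penalty

best = genetic_minimize(tower_cost, bounds=[(0.01, 0.2), (2.0, 6.0)])
```

With the penalty formulation, the GA is driven toward the constraint boundary, where the toy optimum (cost 500) lies; the real study replaces this objective with code-compliant load and buckling checks per tower section.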
The success of an emotion recognition system from speech signals is directly dependent on the database used in system modeling, as in any pattern recognition problem. In this work, we give a detailed description of the new Turkish emotional database we created. Our database consists of 5304 utterances and their textual contents, extracted from 55 movies. The speech signals are labeled by numerous evaluators both categorically (happy, surprised, sad, angry, fear, neutral and other) and in a 3-dimensional space (valence, activation and dominance). We believe that our database will be very...
Automatic detection of the emotional states of people is one of the difficult tasks for human-machine interfaces. EEG signals, which are very difficult for a person to control, are also used in emotion recognition. In this study, an analysis and classification study was conducted using EEG signals recorded under different types of stimuli. The combination of audio and video information was shown to be more effective. Using features extracted by the wavelet transform from the EEG signals, a true positive rate of 81.6% was obtained for the positive/negative (high/low) valence dimension. Information found...
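The wavelet-based feature extraction mentioned above can be sketched with a minimal multi-level Haar decomposition plus per-band relative energies, a common EEG feature recipe. The Haar wavelet and three-level depth are illustrative choices; the paper's wavelet family and decomposition depth are not specified in this excerpt.

```python
import numpy as np

def haar_dwt(signal, levels=3):
    """Multi-level Haar wavelet decomposition.

    Returns [cA_n, cD_n, ..., cD_1]: the final approximation followed by
    the detail coefficients of each level, coarsest first.
    """
    s = 1.0 / np.sqrt(2.0)
    x = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        if len(x) % 2:                      # pad odd-length inputs
            x = np.append(x, x[-1])
        approx = s * (x[0::2] + x[1::2])    # low-pass branch
        detail = s * (x[0::2] - x[1::2])    # high-pass branch
        details.append(detail)
        x = approx
    return [x] + details[::-1]

def band_energies(coeffs):
    """Relative energy per sub-band: a simple EEG feature vector."""
    e = np.array([np.sum(c ** 2) for c in coeffs])
    return e / e.sum()
```

Each EEG channel segment would be decomposed this way and the band-energy vectors fed to a classifier for the valence/arousal decision.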