In a recent development in artificial intelligence (AI), scientists at Ankara’s Bilkent University, working with psychiatrists, have developed an AI-based technology that can detect signs of depression and analyze personality from data such as voice, conversation content, facial expressions and body language.
Hamdi Dibeklioğlu, assistant professor in the Department of Computer Engineering at Bilkent University, emphasized that his research has long focused on artificial intelligence, particularly the automated analysis of human behavior.
The project, in the field known as affective computing, will use machine learning techniques to analyze human behavior and identify patterns in speech content, volume, tone of voice, facial expressions and posture, Dibeklioğlu said.
Dibeklioğlu pointed to the growing interest in affective computing driven by the proliferation of AI models such as ChatGPT, and highlighted the team’s recent progress in developing algorithms that measure the severity of depression.
“Clinical psychologists and psychiatrists typically make diagnoses based on observation. Similarly, AI can be used to analyze a variety of data, from facial expressions, tone of voice and speaking patterns to body language. We aim to determine the level of depression through this analysis. While the expert conducts the interview, the AI simultaneously processes the data and shares its assessment with the expert,” Dibeklioğlu explained.
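The article does not describe the team’s implementation, but the pipeline Dibeklioğlu outlines, per-modality features fused into a single severity estimate, can be sketched in a few lines. Everything below (the feature extractors, the severity scale, the regressor) is a hypothetical placeholder, not the Bilkent system.

```python
# Minimal, hypothetical sketch of multimodal depression-severity estimation:
# features from face, voice and speech content are concatenated and fed to a
# regressor trained on clinician-rated severity scores. All values are
# synthetic stand-ins, not the team's data or model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_interviews = 200

# Placeholder per-modality features for each recorded interview.
face_feats = rng.normal(size=(n_interviews, 16))   # e.g. facial-expression statistics
voice_feats = rng.normal(size=(n_interviews, 8))   # e.g. pitch/energy statistics
text_feats = rng.normal(size=(n_interviews, 32))   # e.g. speech-content embeddings

X = np.hstack([face_feats, voice_feats, text_feats])  # simple feature-level fusion
y = rng.uniform(0, 27, size=n_interviews)             # e.g. questionnaire-style severity ratings

model = RandomForestRegressor(n_estimators=200, random_state=0)
mae = -cross_val_score(model, X, y, scoring="neg_mean_absolute_error", cv=5).mean()
print("cross-validated mean absolute error:", round(mae, 2))
```

In such a setup the model’s output would only be shown to the clinician alongside the interview, in line with the assistive role described above.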
Dibeklioglu further detailed that the research, which was started during his time in the Netherlands and continued upon his return to Turkiye, adhered to strict ethical standards and obtained approval from both patients and hospitals at each step. Ta. He emphasized the importance of data privacy and consent, reassuring that the system only operates with the explicit permission of individuals and protects sensitive data through strict protocols.
“We call this confidential or sensitive data, and its use requires very careful procedures,” Dibeklioğlu said, noting the seriousness and the ethical considerations inherent in this pioneering work.
Use of clinical data
“We’re trying to understand the relationship between behavior and levels of depression, and our findings overlap with theory. A parent might ask, ‘My child cries a lot, is he depressed?’ But in depression the general expectation is desensitization: the person does not try to communicate with anyone and withdraws from the social environment. The patterns we capture are the same. If we look at the literature, we come across this avoidance of social interaction in depression; in other words, the model makes its own diagnosis. In this way, the AI can find what is being overlooked,” he explained.
Lie detection
Dibeklioğlu discussed another project centered on determining the extent of deception from data points such as text, tone of voice and visual cues. Ethical approval was also obtained for these studies.
When analyzing diverse video material, the team focused on assessing the authenticity of conversations, distinguishing truth from lies and cross-referencing the results against multiple sources of information. The content of speech was evaluated with a “natural language processing” model, and the tone of voice with “frequency analysis.”
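As a rough illustration of how those two signal streams could be cross-referenced (not the researchers’ actual models), the sketch below pairs a simple bag-of-words text representation with basic spectral statistics from the audio and trains a single classifier on both; transcripts, audio and labels are synthetic placeholders.

```python
# Hypothetical sketch: fuse text features (a simple NLP stand-in) with
# frequency-domain voice features, then classify truthful vs. deceptive clips.
# All data are synthetic; a real system would use far richer models on
# ethically collected recordings.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
transcripts = ["i was at home all evening", "i have never seen that document",
               "we met at noon as planned", "i did not take the money"] * 25
labels = rng.integers(0, 2, size=len(transcripts))   # 1 = deceptive (placeholder labels)

# "Natural language processing" stand-in: TF-IDF features over the transcript.
text_X = TfidfVectorizer().fit_transform(transcripts).toarray()

# "Frequency analysis" stand-in: magnitude-spectrum statistics of each audio clip.
def spectral_stats(clip: np.ndarray) -> np.ndarray:
    spectrum = np.abs(np.fft.rfft(clip))
    return np.array([spectrum.mean(), spectrum.std(), spectrum.argmax()])

audio_clips = rng.normal(size=(len(transcripts), 16000))  # 1 s of fake audio at 16 kHz
audio_X = np.vstack([spectral_stats(clip) for clip in audio_clips])

X = np.hstack([text_X, audio_X])                    # cross-reference the two streams
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```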
“Although artificial intelligence has contributed to solving this challenge, it is impossible to achieve 100% accurate predictions. Even so, we have achieved a significantly high success rate,” Dibeklioğlu said.
“It is important to understand that this lie detection system cannot be used directly in legal proceedings or in decisions that affect a person’s life, because of its inherent error rate. However, its applications extend to a variety of fields; AI-based lie detection may also be useful in educational settings, for example with students.”
Personality detection
Dibeklioğlu elaborated on assessing personality across multiple dimensions, such as openness to the outside world and innovativeness. The team gathers personality cues through visual and auditory data, interprets them in interaction with individuals, and aims to give machines the same ability. Although machines are increasingly capable of detailed observation and complex operations, the key lies in training the algorithms accurately.
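To make the idea of scoring several personality dimensions at once concrete, here is a hedged sketch: one multi-output regressor maps fused audiovisual features to a handful of trait scores. The trait names, features and ratings are illustrative assumptions, not the project’s actual dimensions or data.

```python
# Hypothetical sketch of multi-dimensional personality estimation: a single
# model predicts several trait scores from audiovisual features. Features and
# trait annotations are random placeholders.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
traits = ["openness", "innovativeness", "extraversion"]  # illustrative dimensions only

X = rng.normal(size=(150, 24))                  # fused visual + auditory features
Y = rng.uniform(1, 5, size=(150, len(traits)))  # annotator trait ratings on a 1-5 scale

model = Ridge(alpha=1.0).fit(X, Y)              # Ridge handles multi-output targets directly
scores = model.predict(X[:1])[0]                # trait estimates for one new interaction
for name, value in zip(traits, scores):
    print(f"{name}: {value:.2f}")
```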
He emphasized the need for caution in the realm of human behavior: “Absolute precision is not possible in this area, which directly affects everyday life, and mistakes can lead to serious problems when individuals are held responsible for them. Ethical approval in behavioral analysis requires careful attention. The aim is for AI to assist us; that does not mean handing all decision-making over to AI and distancing ourselves from responsibility.”
Pain level detection
Dibeklioğlu emphasized that a comparable system is used to identify “pain level,” which is important when determining medication dosage.
He highlighted the potential importance of this capability, particularly in the treatment of children and infants: “With children and infants, it is often difficult to learn the level of pain directly from them. In such cases, we rely on interpreting facial expressions and behavior to determine the degree of pain they may be experiencing.”
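One common way the pain literature maps facial expressions to a pain level is by scoring facial action unit (AU) intensities, for instance with the Prkachin–Solomon pain intensity (PSPI) rule. The sketch below assumes AU intensities are already available from a facial-analysis tool and is offered only as an illustration of that general idea, not as the method described in the article.

```python
# Hedged illustration: turning facial action unit (AU) intensities into a pain
# score with the Prkachin-Solomon pain intensity (PSPI) rule from the pain
# literature. The AU values below are made up, and this is not the system
# described in the article.
def pspi(au: dict[str, float]) -> float:
    """PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43."""
    return (au["AU4"]                      # brow lowering
            + max(au["AU6"], au["AU7"])    # cheek raising / lid tightening
            + max(au["AU9"], au["AU10"])   # nose wrinkling / upper-lip raising
            + au["AU43"])                  # eye closure

frame = {"AU4": 2.0, "AU6": 1.0, "AU7": 3.0, "AU9": 0.0, "AU10": 1.5, "AU43": 1.0}
print("estimated pain level for this frame:", pspi(frame))
```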