Authors: Sourav Banerjee, Ayushi Agarwal, Ayush Kumar Bar
Abstract
In today’s workplace, mental health is gaining importance. As a result, AI-powered mental health chatbots have emerged as first-aid solutions to support employees. However, privacy and security risks, such as spoofing, tampering, and information disclosure, need to be addressed before these chatbots can be implemented.
The objective of this study is to explore and establish privacy protocols and risk-mitigation strategies specifically designed for AI-driven mental health chatbots in corporate environments. These protocols aim to ensure the ethical usage of such chatbots. To achieve this goal, the research analyses key aspects of security, including authentication, authorisation, end-to-end encryption (E2EE), and compliance with regulations such as the General Data Protection Regulation (GDPR), together with the newer Digital Services Act (DSA) and Data Governance Act (DGA).
This analysis combines security evaluation with policy review to provide comprehensive insights. The findings highlight strategies that can enhance the security and privacy of interactions with these chatbots. Organisations are incorporating heightened security measures, including the adoption of Two-Factor Authentication (2FA) and Multi-Factor Authentication (MFA), the integration of end-to-end encryption, and the use of self-destructing messages. Together with an emphasis on regulatory compliance, these measures contribute to a robust security framework. The study underscores the critical importance of balancing innovative advancements in AI-driven mental health chatbots against the stringent safeguarding of user data.
It concludes that establishing comprehensive privacy protocols is essential for the successful integration of these chatbots into workplace environments. While these chatbots offer significant avenues for mental health support, privacy and security concerns must be handled effectively to ensure their ethical usage and efficacy.
Future research directions include advancing privacy protection measures, conducting longitudinal impact studies to assess long-term effects, optimising user experience and interface, expanding multilingual and cultural capabilities, and integrating these tools with other wellness programs. Additionally, continual updates to ethical guidelines and compliance with regulatory standards are imperative.
Research into leveraging AI advancements for personalised support and understanding the impact on organisational culture will further enhance the effectiveness and acceptance of these mental health solutions in the corporate sector.
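To make the measures summarised above more concrete, the sketch below (Python, using the cryptography and pyotp libraries) illustrates encrypted message handling with a self-destruct window and a TOTP-based second factor. It is an illustrative assumption only, not the implementation evaluated in this study: Fernet provides symmetric encryption, and its time-to-live check merely approximates end-to-end encryption and self-destructing messages as they would appear in a production chatbot.

```python
# Illustrative sketch only (not the authors' implementation) of two measures
# named in the abstract: encrypted messages with a self-destruct window, and
# a TOTP-based second factor. Requires the `cryptography` and `pyotp` packages;
# key exchange, storage, and full E2EE protocols are out of scope here.

import pyotp
from cryptography.fernet import Fernet, InvalidToken

# --- Encrypted, self-destructing chat message --------------------------------
key = Fernet.generate_key()      # in practice, derived or exchanged per user
cipher = Fernet(key)

token = cipher.encrypt(b"I have been feeling anxious about deadlines.")

# Decryption succeeds only within the time-to-live window (60 seconds here);
# afterwards the ciphertext is effectively "self-destructed".
try:
    plaintext = cipher.decrypt(token, ttl=60)
    print("Message readable:", plaintext.decode())
except InvalidToken:
    print("Message expired or tampered with - access denied.")

# --- TOTP-based second factor (2FA) -------------------------------------------
secret = pyotp.random_base32()   # provisioned once per employee account
totp = pyotp.TOTP(secret)

code_from_user = totp.now()      # normally entered from an authenticator app
print("2FA check passed:", totp.verify(code_from_user))
```

In a deployed system, the symmetric key above would be replaced by per-conversation keys negotiated on the client side, and message expiry would also purge the stored ciphertext rather than relying on the TTL check alone.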