ComVas: Contextual Moral Values Alignment System
DOI: 10.24963/ijcai.2024/1026
Publication Date: 2024-07-26
AUTHORS (11)
ABSTRACT
In contemporary society, the integration of artificial intelligence (AI) systems into various aspects of daily life raises significant ethical concerns. One critical aspect is ensuring that AI systems align with the moral values of their end-users. To that end, we introduce the Contextual Moral Value Alignment System, ComVas. Unlike traditional AI systems, whose moral values are predefined, ComVas empowers users to dynamically select and customize the desired moral values, thereby guiding the system's decision-making process. Through a user-friendly interface, individuals can specify their preferred moral values, allowing the system to steer the model's responses and actions accordingly. ComVas utilizes advanced natural language processing techniques to engage users in meaningful dialogue, understand their preferences, and reason about moral dilemmas in diverse contexts. This demo article showcases the functionality of ComVas, illustrating its potential to foster ethical decision-making in AI systems while respecting individual autonomy and promoting user-centric design principles.
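The abstract describes users selecting and weighting moral values that then steer a model's responses. As a minimal sketch of that idea, the fragment below builds a value profile and turns it into a steering preamble for a language model; all names, the weighting scheme, and the prompt format are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of value-conditioned response steering, loosely
# inspired by the ComVas idea of user-selected moral values guiding a
# model's outputs. Names and prompt format are assumptions for
# illustration only.

from dataclasses import dataclass, field


@dataclass
class ValueProfile:
    """A user's selected moral values with per-value weights in [0, 1]."""
    values: dict = field(default_factory=dict)

    def select(self, name: str, weight: float = 1.0) -> None:
        # Users can dynamically add or re-weight a value at any time.
        self.values[name] = max(0.0, min(1.0, weight))


def build_system_prompt(profile: ValueProfile) -> str:
    """Turn the profile into a steering preamble for a language model."""
    if not profile.values:
        return "Answer helpfully."
    # Highest-priority values are listed first.
    ranked = sorted(profile.values.items(), key=lambda kv: -kv[1])
    lines = [f"- {name} (priority {w:.1f})" for name, w in ranked]
    return ("When answering, weigh these user-selected moral values:\n"
            + "\n".join(lines))


# Example: a user prioritizes honesty over loyalty.
profile = ValueProfile()
profile.select("honesty", 0.9)
profile.select("loyalty", 0.4)
prompt = build_system_prompt(profile)
```

In a full system the generated preamble would be prepended to each model query, so that re-selecting values in the interface immediately changes how subsequent responses are steered.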