[08/09/2022] From websites that collect our personal data in cookies to IoT devices that collect, store, and process many different types of user data, we make decisions about our privacy every day. And they are important decisions: if we make the wrong choices, we risk our data being leaked and exploited. In a recent paper, Nadin Kökciyan (University of Edinburgh) and Pinar Yolum (Utrecht University) propose a new way to deal with everyday privacy questions: personal assistants that can help users configure their privacy settings.

In many countries, the law requires parties that collect and store personal data to ask users for permission to do so, but this places a substantial decision load on users, who usually end up ignoring the details and just clicking "Accept all". It gets no easier when users interact with IoT (Internet of Things) devices such as smart speakers: more often than not they must decide whether to trust a particular device in an unfamiliar privacy situation.

This is where privacy assistants (i.e. software agents) can help: they can make sharing decisions on behalf of the user, or with the user, by modelling trust in multiple contexts. When a situation is inherently ambiguous and the agent is not sure what to advise, it delegates the decision to the user. Humans and agents thus work together to manage the user's privacy.

Kökciyan and Yolum's agent-based privacy assistant, PAS, builds a personalised computational model of the user by analysing past user-device interactions to model trust in various contexts. Intuitively, PAS derives contexts automatically from the situations the user has previously experienced, and it determines the trust for a context based on the user's positive and negative privacy experiences in that context. By explicitly modelling inconsistencies between the user's experiences, and capturing uncertainty using a subjective-logic-based approach, PAS determines when to make a decision on behalf of the user and when to abstain and delegate the decision to the user (a simplified sketch of this decision rule follows below).

An example of a situation in which PAS could be used is a visit to a department store that uses sensors to detect whether customers are present on each floor. The store management uses this data to track the number of visitors in the shop and to determine whether staffing can be reduced at quiet times. How long the data will be retained is not disclosed to the customer. If the PAS representing this customer knows that they are not comfortable revealing their data in similar privacy situations, it will not share their data with the presence sensor.

Kökciyan and Yolum's contributions include (i) mathematically formalising the notions of situations, contexts, and experiences; and (ii) designing a practical algorithm that intelligent agents can use to make privacy decisions together with humans. Their work, including an experimental evaluation on real-world data, was presented at the International Joint Conference on Artificial Intelligence (IJCAI) 2022 in Vienna.
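To make the subjective-logic idea concrete, here is a minimal sketch in Python of how such an assistant might turn a context's positive and negative experiences into a share/deny/delegate decision. The binomial-opinion formulas (belief, disbelief, uncertainty, and the projected probability b + a·u with prior weight W = 2) are standard subjective logic; the threshold values, the `decide` function, and the simple counting of experiences are illustrative assumptions, not the authors' exact algorithm.

```python
from dataclasses import dataclass

# W and BASE_RATE are standard subjective-logic defaults; the two
# thresholds below are illustrative assumptions, not values from the paper.
W = 2.0                 # non-informative prior weight
BASE_RATE = 0.5         # prior probability that the device is trustworthy
UNCERTAINTY_CAP = 0.3   # above this, the assistant delegates to the user
SHARE_THRESHOLD = 0.7   # minimum expected trust required to share data


@dataclass
class Opinion:
    """A binomial subjective-logic opinion derived from past experiences."""
    belief: float
    disbelief: float
    uncertainty: float

    @classmethod
    def from_experiences(cls, positive: int, negative: int) -> "Opinion":
        # Evidence mapping: belief and disbelief grow with observed
        # experiences, while the uncertainty mass W / (r + s + W) shrinks.
        total = positive + negative + W
        return cls(positive / total, negative / total, W / total)

    @property
    def expected_trust(self) -> float:
        # Projected probability: belief plus the base rate's share of
        # the remaining uncertainty.
        return self.belief + BASE_RATE * self.uncertainty


def decide(positive: int, negative: int) -> str:
    """Return SHARE, DENY, or DELEGATE for a context with this history."""
    op = Opinion.from_experiences(positive, negative)
    if op.uncertainty > UNCERTAINTY_CAP:
        return "DELEGATE"  # too little evidence: ask the user
    return "SHARE" if op.expected_trust >= SHARE_THRESHOLD else "DENY"


# Department-store example: the user has mostly negative experiences with
# presence sensors in similar retail contexts, so the assistant denies.
print(decide(positive=1, negative=9))   # DENY
# A brand-new context with no history is maximally uncertain, so delegate.
print(decide(positive=0, negative=0))   # DELEGATE
```

Because the uncertainty mass shrinks only as evidence accumulates, a rule of this shape acts confidently in familiar contexts and abstains in unfamiliar ones, which mirrors the human-agent collaboration the paper describes.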
"Many studies show that humans are happy to collaborate with AI systems when they are given the opportunity. People are concerned about their privacy; we should give them the practical tools they need so that they can protect their privacy online. Here we propose privacy assistants that could collaborate with humans in making sharing decisions. Our next step is to deploy privacy assistants in the real world, and I am looking forward to seeing that happen."

Dr Nadin Kökciyan, School of Informatics

This story was developed with the help of Ricky Zhu from InfGang (the Informatics Science Communications Group).

Related links:
Nadin Kökciyan's personal page
Full paper (PDF)