Behavior Accountability of Agents Responsible for Privacy Negotiation in Social Networks

Cited by: 0

Authors
Gu T.-L. [1 ,2 ]
Hao F.-R. [2 ]
Li L. [1 ,2 ]
Li J.-J. [1 ]
Chang L. [2 ]
Affiliations
[1] College of Information Science and Technology, College of Cyber Security, Jinan University, Guangzhou
[2] Guangxi Key Laboratory of Trusted Software, Guilin University of Electronic Technology, Guilin
Source
Ruan Jian Xue Bao/Journal of Software | 2022 / Vol. 33 / No. 9
Keywords
accountability; agents; privacy negotiation; privacy protection; social networks;
DOI
10.13328/j.cnki.jos.006364
Abstract
Privacy negotiation plays a preventive role against privacy disclosure, as it helps social network users reach a consensus on privacy protection before information is shared. Accountability is the property that a subject can be held responsible for an action or its consequences, and it is an important aspect of transparent and explainable artificial intelligence applications. Accountability in the privacy negotiation process in social networks is therefore of great significance for improving the transparency and explainability of application platforms and systems. Although Kekulluoglu et al. proposed an agent-based reciprocal privacy negotiation system, they did not discuss accountability for the behaviors of agents. To address this gap, a novel system for agent behavior accountability during privacy negotiation in social networks is designed and implemented, and both qualitative and quantitative accountability methods are developed, together with the requirements and behavior indicators needed to achieve accountability. Specifically, the qualitative method can accurately determine whether a privacy negotiation agent has misbehaved and pinpoint the specific location of the misbehavior, while the quantitative methods (simple quantification, weighted Mahalanobis distance, and an improved MinHash) quantify the severity of the agent’s misbehavior. Experimental data demonstrate the validity and rationality of the proposed system and methods. © 2022 Chinese Academy of Sciences. All rights reserved.
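The abstract names a weighted Mahalanobis distance as one of the quantitative accountability methods. The paper's exact formulation is not given here, so the following is only a minimal sketch of the general idea under illustrative assumptions: misbehavior severity is measured as a weighted, variance-normalized distance between an agent's observed behavior-indicator vector and the indicator vector a compliant agent would produce. The indicator values, weights, and variances below are hypothetical, and a diagonal covariance is assumed for simplicity.

```python
import math

def weighted_mahalanobis(observed, expected, variances, weights):
    """Weighted Mahalanobis-style distance for the diagonal-covariance case:
    the square root of the weighted sum of squared, variance-normalized
    deviations across behavior indicators."""
    return math.sqrt(sum(
        w * (o - e) ** 2 / v
        for o, e, v, w in zip(observed, expected, variances, weights)
    ))

# Illustrative three-indicator example (all numbers are assumptions):
observed  = [1.0, 0.0, 2.0]   # agent's logged behavior indicators
expected  = [1.0, 1.0, 1.0]   # behavior required by the negotiation protocol
variances = [1.0, 1.0, 1.0]   # per-indicator variances (identity covariance)
weights   = [1.0, 2.0, 0.5]   # relative importance of each indicator

severity = weighted_mahalanobis(observed, expected, variances, weights)
```

A larger severity value would indicate a more serious deviation from compliant behavior; with a full covariance matrix the same idea generalizes to the standard Mahalanobis form.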
References
24 in total
  • [1] Tran HY, Hu JK., Privacy-preserving big data analytics a comprehensive survey, Journal of Parallel and Distributed Computing, 134, pp. 207-218, (2019)
  • [2] Zhu TQ, Ye DY, Wang W, Zhou WL, Yu PS., More than privacy: Applying differential privacy in key areas of artificial intelligence, (2020)
  • [3] Zhao JW, Chen YF, Zhang W., Differential privacy preservation in deep learning: Challenges, opportunities and solutions, IEEE Access, 7, pp. 48901-48911, (2019)
  • [4] Chouldechova A, Roth A., The frontiers of fairness in machine learning, (2018)
  • [5] Tan ZW, Zhang LF., Survey on privacy preserving techniques for machine learning, Ruan Jian Xue Bao/Journal of Software, 31, 7, pp. 2127-2156, (2020)
  • [6] Ji SL, Du TY, Li JF, Shen C, Li B., Security and privacy of machine learning models: A survey, Ruan Jian Xue Bao/Journal of Software, 32, 1, pp. 41-67, (2021)
  • [7] Hasan R, Crandall D, Fritz M, Kapadia A., Automatically detecting bystanders in photos to reduce privacy risks, Proc. of the 2020 IEEE Symp. on Security and Privacy, pp. 318-335, (2020)
  • [8] Yang JH, Chakrabarti A, Vorobeychik Y., Protecting geolocation privacy of photo collections, Proc. of the 35th AAAI Conf. on Artificial Intelligence, pp. 524-531, (2020)
  • [9] Kekulluoglu D, Kokciyan N, Yolum P., Preserving privacy as social responsibility in online social networks, ACM Trans. on Internet Technology, 18, 4, (2018)
  • [10] Ethics guidelines for trustworthy AI, (2019)