Transaction: ed30bf4230ba184fd6c7c8fb72c2fd7cb7b2aedcca0b48ceefc1da5acf4aec8a
Timestamp (UTC): 2024-05-30 07:06:02
Fee Paid: 0.00000005 BSV (0.00330120 BSV - 0.00330115 BSV)
Fee Rate: 2.439 sat/KB
Version: 1
Confirmations: 88,288
Size Stats: 2,050 B

3 Outputs
Total Output: 0.00330115 BSV
  • jmetaB03cc418419fbae49a6b92ce498ebf53a6734f9e135af6950a636e5cdeac701fd17@f9851c2160f6d07cecd53064b44b75ae180251462771b97e11dbfad2c8f6cbe5 rss.item metarss.netMr
    <item>
      <title>Protecting Split Learning by Potential Energy Loss</title>
      <link>https://arxiv.org/abs/2210.09617</link>
      <description>arXiv:2210.09617v2 Announce Type: replace-cross Abstract: As a practical privacy-preserving learning method, split learning has drawn much attention in academia and industry. However, its security is constantly being questioned since the intermediate results are shared during training and inference. In this paper, we focus on the privacy leakage from the forward embeddings of split learning. Specifically, since the forward embeddings contain too much information about the label, the attacker can either use a few labeled samples to fine-tune the top model or perform unsupervised attacks such as clustering to infer the true labels from the forward embeddings. To prevent such kind of privacy leakage, we propose the potential energy loss to make the forward embeddings become more 'complicated', by pushing embeddings of the same class towards the decision boundary. Therefore, it is hard for the attacker to learn from the forward embeddings. Experiment results show that our method significantly lowers the performance of both fine-tuning attacks and clustering attacks.</description>
      <guid isPermaLink="false">oai:arXiv.org:2210.09617v2</guid>
      <category>cs.CR</category>
      <category>cs.AI</category>
      <category>cs.DC</category>
      <category>cs.LG</category>
      <arxiv:announce_type>replace-cross</arxiv:announce_type>
      <dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights>
      <dc:creator>Fei Zheng, Chaochao Chen, Lingjuan Lyu, Xinyi Fu, Xing Fu, Weiqiang Wang, Xiaolin Zheng, Jianwei Yin</dc:creator>
    </item>
    https://whatsonchain.com/tx/ed30bf4230ba184fd6c7c8fb72c2fd7cb7b2aedcca0b48ceefc1da5acf4aec8a
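The abstract stored in this transaction describes a "potential energy loss" that pushes same-class forward embeddings apart so clustering and fine-tuning attacks fail. The paper itself does not appear in this record beyond the abstract, so the following is only a minimal sketch of the physics analogy it names: treating same-class embeddings as like charges and summing inverse pairwise distances, so that minimizing the loss repels them from one another. The function name and the exact inverse-distance form are assumptions for illustration, not the paper's published formulation.

```python
import numpy as np

def potential_energy_loss(embeddings, labels, eps=1e-8):
    """Coulomb-style repulsion among same-class embeddings.

    Illustrative sketch only: sums 1/distance over all pairs of
    embeddings that share a label, so minimizing this quantity
    spreads each class's embeddings apart (flattening the clusters
    an attacker would exploit). The real loss in arXiv:2210.09617
    may be defined differently.
    """
    embeddings = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    loss = 0.0
    for c in np.unique(labels):
        pts = embeddings[labels == c]  # all embeddings of class c
        n = len(pts)
        for i in range(n):
            for j in range(i + 1, n):
                d = np.linalg.norm(pts[i] - pts[j])
                loss += 1.0 / (d + eps)  # like charges: closer pairs cost more
    return loss
```

Under this sketch, a tightly clustered class yields a large loss and a well-spread class a small one, matching the abstract's goal of making same-class embeddings hard to cluster.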