Transaction
08027d2086d6800a5a496da681073d9ab3caab65f1d4f27ecb237828c41d8b32

Timestamp (UTC): 2024-06-19 06:36:40
Fee Paid: 0.00000007 BSV (0.00324127 BSV - 0.00324120 BSV)
Fee Rate: 2.424 sat/KB
Version: 1
Confirmations: 85,422
Size Stats: 2,887 B
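The fee figures above are consistent with each other: the fee is the input total minus the output total, and the fee rate is that fee divided by the serialized size. A quick sanity check (assuming 1 BSV = 100,000,000 satoshis; the variable names are illustrative):

```python
# Sanity-check the displayed fee and fee rate for this transaction.
SAT_PER_BSV = 100_000_000

total_in  = 0.00324127   # BSV, sum of inputs
total_out = 0.00324120   # BSV, sum of outputs
size_b    = 2887         # serialized transaction size in bytes

fee_sat = round((total_in - total_out) * SAT_PER_BSV)  # 7 satoshis
rate = fee_sat / (size_b / 1000)                       # satoshis per kilobyte
print(fee_sat, "sat,", f"{rate:.4f}", "sat/KB")
```

This yields 7 sat and roughly 2.4247 sat/KB; the explorer truncates rather than rounds, displaying 2.424 sat/KB.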

3 Outputs

Total Output: 0.00324120 BSV
  • jmetaB0231c1fb5bbad8e7b86d4f92ee0ac414e3c0180c71c3efaf6c0fd852645f1bb209@104e04f4dc7bbb58b675a0be8ec8a2392cd828cadc0c1b85347e2d4ab003150e
    rss.item metarss.net
    <item>
      <title>Ents: An Efficient Three-party Training Framework for Decision Trees by Communication Optimization</title>
      <link>https://arxiv.org/abs/2406.07948</link>
      <description>arXiv:2406.07948v2 Announce Type: replace Abstract: Multi-party training frameworks for decision trees based on secure multi-party computation enable multiple parties to train high-performance models on distributed private data with privacy preservation. The training process essentially involves frequent dataset splitting according to the splitting criterion (e.g. Gini impurity). However, existing multi-party training frameworks for decision trees demonstrate communication inefficiency due to the following issues: (1) They suffer from huge communication overhead in securely splitting a dataset with continuous attributes. (2) They suffer from huge communication overhead due to performing almost all the computations on a large ring to accommodate the secure computations for the splitting criterion. In this paper, we are motivated to present an efficient three-party training framework, namely Ents, for decision trees by communication optimization. For the first issue, we present a series of training protocols based on the secure radix sort protocols to efficiently and securely split a dataset with continuous attributes. For the second issue, we propose an efficient share conversion protocol to convert shares between a small ring and a large ring to reduce the communication overhead incurred by performing almost all the computations on a large ring. Experimental results from eight widely used datasets show that Ents outperforms state-of-the-art frameworks by $5.5\times \sim 9.3\times$ in communication sizes and $3.9\times \sim 5.3\times$ in communication rounds. In terms of training time, Ents yields an improvement of $3.5\times \sim 6.7\times$. To demonstrate its practicality, Ents requires less than three hours to securely train a decision tree on a widely used real-world dataset (Skin Segmentation) with more than 245,000 samples in the WAN setting.</description>
      <guid isPermaLink="false">oai:arXiv.org:2406.07948v2</guid>
      <category>cs.CR</category>
      <category>cs.AI</category>
      <arxiv:announce_type>replace</arxiv:announce_type>
      <dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights>
      <arxiv:DOI>10.1145/3658644.3670274</arxiv:DOI>
      <dc:creator>Guopeng Lin, Weili Han, Wenqiang Ruan, Ruisheng Zhou, Lushan Song, Bingshuai Li, Yunfeng Shao</dc:creator>
    </item>
    https://whatsonchain.com/tx/08027d2086d6800a5a496da681073d9ab3caab65f1d4f27ecb237828c41d8b32
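The abstract above motivates share conversion by ring size: additive shares over a small ring Z_{2^k} cost fewer bits per value on the wire than shares over a large ring. A toy sketch of additive secret sharing over such rings (illustrative only; the function names and the three-party split are assumptions, and this is not the paper's actual conversion protocol):

```python
# Toy additive secret sharing over rings Z_{2^k} (NOT the Ents protocol):
# a value shared over 2**8 costs 1 byte per share, over 2**64 it costs 8 bytes,
# which is why computing on a small ring reduces communication.
import secrets

def share(x, k, parties=3):
    """Split x into `parties` additive shares modulo 2**k."""
    mod = 1 << k
    shares = [secrets.randbelow(mod) for _ in range(parties - 1)]
    shares.append((x - sum(shares)) % mod)  # last share makes the sum correct
    return shares

def reconstruct(shares, k):
    """Recover the secret by summing all shares modulo 2**k."""
    return sum(shares) % (1 << k)

secret = 42
small = share(secret, 8)    # three 1-byte shares
large = share(secret, 64)   # three 8-byte shares
assert reconstruct(small, 8) == secret
assert reconstruct(large, 64) == secret
# Naively reinterpreting small-ring shares in the large ring loses the
# mod-2**8 wrap-around, so reconstruction generally fails -- hence the need
# for a dedicated small-to-large share conversion protocol such as Ents'.
```

The design point the sketch makes concrete: correctness of additive sharing is per-ring, so moving between rings requires a protocol, not a cast.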