Transaction

81ef7a2a54cd5d6cc6e7e00bbb99ac974bea958df2fb0ecd678d5d1fc5c7de6d
Timestamp (UTC)
2024-06-18 06:41:17
Fee Paid
0.00000007 BSV (0.00324763 BSV - 0.00324756 BSV)
Fee Rate
2.386 sat/KB
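
The fee and rate above follow directly from the record: fee = total inputs - total outputs, and rate = fee / size. A minimal Python sketch of that arithmetic, using the values shown on this page (variable names are illustrative):

    # Fee arithmetic for this transaction, using the figures shown above.
    SATS_PER_BSV = 100_000_000

    total_in_bsv  = 0.00324763   # sum of inputs
    total_out_bsv = 0.00324756   # sum of outputs
    size_bytes    = 2933         # serialized transaction size

    fee_sats = round((total_in_bsv - total_out_bsv) * SATS_PER_BSV)
    fee_rate = fee_sats / (size_bytes / 1000)  # satoshis per kilobyte

    print(fee_sats)           # 7
    print(f"{fee_rate:.3f}")  # 2.387 (the page shows 2.386, consistent with truncation)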
Version
1
Confirmations
84,559
Size Stats
2,933 B

3 Outputs

Total Output:
0.00324756 BSV
  • OP_RETURN data (decoded pushdata fields; script opcode and length bytes stripped):
    meta
    026de9762dd324554c0cb8eee3af9b0641cf702766ca122a267c2ce44d484fd3ee
    104e04f4dc7bbb58b675a0be8ec8a2392cd828cadc0c1b85347e2d4ab003150e
    rss.item
    metarss.net
    <item>
      <title>A First Physical-World Trajectory Prediction Attack via LiDAR-induced Deceptions in Autonomous Driving</title>
      <link>https://arxiv.org/abs/2406.11707</link>
      <description>arXiv:2406.11707v1 Announce Type: new Abstract: Trajectory prediction forecasts nearby agents' moves based on their historical trajectories. Accurate trajectory prediction is crucial for autonomous vehicles. Existing attacks compromise the prediction model of a victim AV by directly manipulating the historical trajectory of an attacker AV, which has limited real-world applicability. This paper, for the first time, explores an indirect attack approach that induces prediction errors via attacks against the perception module of a victim AV. Although it has been shown that physically realizable attacks against LiDAR-based perception are possible by placing a few objects at strategic locations, it is still an open challenge to find an object location from the vast search space in order to launch effective attacks against prediction under varying victim AV velocities. Through analysis, we observe that a prediction model is prone to an attack focusing on a single point in the scene. Consequently, we propose a novel two-stage attack framework to realize the single-point attack. The first stage of prediction-side attack efficiently identifies, guided by the distribution of detection results under object-based attacks against perception, the state perturbations for the prediction model that are effective and velocity-insensitive. In the second stage of location matching, we match the feasible object locations with the found state perturbations. Our evaluation using a public autonomous driving dataset shows that our attack causes a collision rate of up to 63% and various hazardous responses of the victim AV. The effectiveness of our attack is also demonstrated on a real testbed car. To the best of our knowledge, this study is the first security analysis spanning from LiDAR-based perception to prediction in autonomous driving, leading to a realistic attack on prediction. To counteract the proposed attack, potential defenses are discussed.</description>
      <guid isPermaLink="false">oai:arXiv.org:2406.11707v1</guid>
      <category>cs.CR</category>
      <category>cs.CV</category>
      <category>cs.LG</category>
      <arxiv:announce_type>new</arxiv:announce_type>
      <dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights>
      <dc:creator>Yang Lou, Yi Zhu, Qun Song, Rui Tan, Chunming Qiao, Wei-Bin Lee, Jianping Wang</dc:creator>
    </item>
    https://whatsonchain.com/tx/81ef7a2a54cd5d6cc6e7e00bbb99ac974bea958df2fb0ecd678d5d1fc5c7de6d
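
The transaction and this OP_RETURN payload can also be retrieved programmatically. A hedged sketch against the public WhatsOnChain v1 API follows; the endpoint path and the vout/scriptPubKey/asm field names are assumptions to verify against the API documentation before relying on them:

    # Fetch the transaction and print the decoded OP_RETURN pushdata chunks.
    import requests

    TXID = "81ef7a2a54cd5d6cc6e7e00bbb99ac974bea958df2fb0ecd678d5d1fc5c7de6d"
    url = f"https://api.whatsonchain.com/v1/bsv/main/tx/hash/{TXID}"

    tx = requests.get(url, timeout=10).json()

    for vout in tx["vout"]:
        asm = vout.get("scriptPubKey", {}).get("asm", "")
        if "OP_RETURN" not in asm:
            continue
        # Tokens after OP_RETURN are hex-encoded pushdata; decode each as UTF-8.
        for token in asm.split("OP_RETURN", 1)[1].split():
            try:
                print(bytes.fromhex(token).decode("utf-8", errors="replace"))
            except ValueError:
                print(token)  # not plain hex (e.g. an opcode name); print as-is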