Transaction

TXID: 1269a9837c46ba533005b5998eb86987b5d5d99bb6e8ac1dd5f9d5254e48c2ee
Time: 2024-03-22 00:58:37
Fee: 0.00000016 BSV (0.00262698 BSV - 0.00262682 BSV)
Fee rate: 10.24 sat/KB
Inputs: 1
Confirmations: 70,859
Size: 1,561 B
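
The fee figures above are related by simple arithmetic: the fee is the total input value minus the total output value, and the fee rate is that fee divided by the serialized transaction size. A quick check of those numbers (the variable names and the sat-per-BSV constant below are ours, not the explorer's; the displayed 10.24 sat/KB appears to be the same figure truncated rather than rounded):

```python
SATS_PER_BSV = 100_000_000

total_in_bsv = 0.00262698   # sum of input values shown above
total_out_bsv = 0.00262682  # sum of output values shown above
size_bytes = 1_561          # serialized transaction size

fee_sats = round((total_in_bsv - total_out_bsv) * SATS_PER_BSV)  # 16 sat = 0.00000016 BSV
fee_rate = fee_sats / size_bytes * 1_000                         # ~10.25 sat/KB

print(fee_sats, round(fee_rate, 2))
```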

2 Outputs

Total Output: 0.00262682 BSV
  • Data output (prefix 1LAnZuoQdcKCkpDBKQMCgziGMoPC4VQUckM, text/html):

    "Maybe someone with a little background in this statistics/math stuff can shed some light on this..

    The way this thing works is it takes a (basically random) block of data and alters a 32-bit field inside it by starting at 1 and incrementing. The block of data also contains a timestamp, and that's incremented occasionally just to keep mixing it up (but the incrementing field isn't restarted when the timestamp is updated). If you get a new block from the network you sort of end up having to start over with the incrementing field at 1 again.. however, all the other data changed too, so it's not the same thing you're hashing anyway.

    The way I understand it, since the data that's being hashed is pretty much random and because the hashing algorithm exhibits the 'avalanche effect', it probably doesn't matter if you keep starting with 1 and incrementing it or if you use pseudo-random values instead, but I was wondering if anyone could support this or disprove it.

    Can you increase your likelihood of finding a low numerical value hash by doing something other than just sequentially incrementing that piece of data in the input? Or is this equivalent to trying to increase your chances of rolling a 6 (with dice) by using your other hand?"
    https://whatsonchain.com/tx/1269a9837c46ba533005b5998eb86987b5d5d99bb6e8ac1dd5f9d5254e48c2ee
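
The embedded post above is asking whether the order in which nonce values are tried changes the expected time to find a hash below the target. A minimal sketch of the two strategies it compares, run against a toy header and a deliberately easy target (the header bytes, target value, and function names are illustrative assumptions, not anything taken from the transaction or from Bitcoin's real header layout):

```python
import hashlib
import os
import struct

# Toy stand-in for a block header: fixed bytes plus a 4-byte little-endian
# nonce field, hashed with double SHA-256 (the hash Bitcoin mining uses).
# HEADER_BASE and TARGET are made-up values chosen so the loop finishes fast.
HEADER_BASE = b"prev hash | merkle root | timestamp | bits (toy data)"
TARGET = 2 ** 240  # roughly one success per 65,536 attempts

def attempt(nonce: int) -> int:
    data = HEADER_BASE + struct.pack("<I", nonce)
    digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
    return int.from_bytes(digest, "big")

def search_sequential(max_tries: int) -> int | None:
    # What the post describes: start the 32-bit field at 1 and increment.
    for nonce in range(1, max_tries + 1):
        if attempt(nonce) < TARGET:
            return nonce
    return None

def search_random(max_tries: int) -> int | None:
    # What the post asks about: pseudo-random values for the same field.
    for _ in range(max_tries):
        nonce = int.from_bytes(os.urandom(4), "little")
        if attempt(nonce) < TARGET:
            return nonce
    return None

if __name__ == "__main__":
    print("sequential:", search_sequential(1_000_000))
    print("random:    ", search_random(1_000_000))
```

Since double SHA-256 behaves, for practical purposes, like a uniform random function of its input, each attempt succeeds with probability TARGET / 2**256 regardless of how the nonce was chosen. The only difference is that random selection can occasionally repeat a nonce for the same header, wasting a small fraction of attempts, so sequential incrementing is at least as good; this supports the post's dice analogy.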