
0 sats \ 0 replies \ @VzxPLnHqr OP 31 May \ parent \ on: p2share: how to turn any sh*tcoin/network into a bitcoin mining pool bitcoin_Mining
Thanks for reading and for your reply. At first that is what I thought too. Obviously we do not know for sure unless/until this design is tried in the real world.
However, on further reflection, it does not seem so clear cut. If you imagine that the sharechain has some notable feature set that people want to use, then it is reasonable to expect the price to be (perhaps barely) positive. Additionally, since the sharechain has its own difficulty adjustment, it seems natural to think that some sort of price/difficulty equilibrium would be reached.
The edit window closed, and there is a classic typo in one of the footnote anchors: "assing_to_self", which should have been "assign_to_self" and would have linked to this footnote content:
Mary could, of course, try to game the system and find a nonce for her block (`block_{i+1}`) which would cause the sharechain network to "choose" her as the assignee of the bitcoin reward for the next block (`block_{i+2}`). This may be a problem in the beginning, when the sharechain network difficulty is extremely low. However, as soon as the sharechain difficulty finds equilibrium, this attack becomes expensive in that it requires significant extra work to pull off. That work is probably better spent simply mining according to consensus, which often will mean assigning a shareholder other than oneself as the would-be winner of the bitcoin block reward. After all, for every valid sharechain block Mary mines, she does get to assign herself the shares!
13 sats \ 0 replies \ @VzxPLnHqr 12 Jul 2024 \ parent \ on: Mempool Accelerator™ Is Now Live bitcoin
Thanks for your reply. Having fees be implicit in a transaction, as opposed to existing as a separate output, saves space in the block (and in the utxo set), but maybe in the future people will just pay their fees to an output of `<k blocks> OP_CSV`, thereby emulating (with `k = 100`) the traditional incentives.

It is also interesting to think about what might happen if people started extending `k` to longer durations. Rationally, for larger `k`, this "explicit fee output" would cost the sender more sats (time value of money) to get the transaction confirmed (assuming miners would even be on the lookout for transactions with these outputs), but if the mempools were to contain a lot of otherwise equal-value transactions with different `k`s, then we might be able to get a near-real-time gauge of miner time preference/myopia.

One observation about these out-of-band non-atomic accelerators is that the fee is earned/spendable immediately by miners. This is because the fee is not included in the coinbase output and hence not locked for 100 blocks the way that mining rewards are supposed to be.
It may not be something to worry about right now, but in the far future, if somehow most transactions end up paying their fees out of band, then network security may suffer. I would be curious to get @petertodd 's take on this.
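As a rough illustration of the time-value argument (all numbers here are made-up assumptions, not anything from the protocol), a sender paying into a `k`-block-locked fee output would need to over-pay by roughly the miner's discount over the lock period:

```python
# Hypothetical sketch: how much extra a sender might need to pay so that a
# fee output locked for k blocks is worth a target amount to the miner today.
# The annual discount rate is an assumption for illustration only.

BLOCK_MINUTES = 10
MINUTES_PER_YEAR = 60 * 24 * 365

def required_fee(target_sats: float, k: int, annual_rate: float) -> float:
    """Fee the sender must lock so its present value to the miner is target_sats."""
    years_locked = k * BLOCK_MINUTES / MINUTES_PER_YEAR
    discount = (1 + annual_rate) ** years_locked
    return target_sats * discount

# With a (made-up) 5% annual rate, a 100-block lock costs almost nothing extra,
# while a one-year lock (~52,560 blocks) adds the full 5%.
print(required_fee(10_000, 100, 0.05))
print(required_fee(10_000, 52_560, 0.05))
```

The interesting part is the ratio between transactions that differ only in `k`: that spread is the "gauge of miner time preference" mentioned above.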
33 sats \ 0 replies \ @VzxPLnHqr 13 Jun 2024 \ parent \ on: How to MAKE your own Cold Wallet 🕶️ bitcoin
It is experimental, but there is also nixos-airgapped.
I agree that we need to be very very cautious about changing anything with regard to the incentive structure.
That being said, it does feel like current incentives are not as aligned as they could be. I have not yet read the specific post that you linked on bitcoin-dev, but here is one of my favorite (soft-fork) ideas which might better align incentives:
example soft-fork w/ burning: require all transactions over a certain size (including witness) to include an op_return output which burns a certain number of sats/byte
Burning sats is not a very popular idea. If need be, we could probably replace burning with a suitably long timelocked `anyone_can_spend` output; something like 10 or 15 years would be my preference. Then it is almost like burning, but the sats are still available to be (re)mined in the future.

This mechanism still has some vulnerabilities (for example, people would probably just chop their larger transactions into smaller transactions to get under the limit, so more research is certainly necessary), but the idea of burning sats is at least interesting to me because it fairly (proportionally and instantly) rewards all remaining hodlers -- and the node operators we care most about and want to protect are the hodlers. These are the people who are in bitcoin for the long run.
At least then when your node is verifying all these large transactions, every time you get to the op_return output, you get a little bit richer and, by definition, the creator of that offending transaction gets a little bit poorer.
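A toy sketch of the proposed rule (the size threshold and burn rate are hypothetical numbers, not a concrete proposal):

```python
# Hypothetical parameters for the sketched soft fork: any transaction larger
# than SIZE_THRESHOLD bytes (including witness) must carry an OP_RETURN output
# burning BURN_RATE sats per byte of total size.

SIZE_THRESHOLD = 400   # bytes, made-up for illustration
BURN_RATE = 2          # sats per byte, made-up for illustration

def required_burn(tx_size_bytes: int) -> int:
    """Sats that must be provably burned for a transaction of this size."""
    if tx_size_bytes <= SIZE_THRESHOLD:
        return 0
    return tx_size_bytes * BURN_RATE

print(required_burn(300))    # small tx: no burn required
print(required_burn(1_000))  # large tx: burn scales with size
```

The "chop into smaller transactions" vulnerability mentioned above is visible here: two 300-byte transactions burn nothing, while one 600-byte transaction burns 1,200 sats.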
This seems like a very reasonable and useful solution. Then each LN node simply runs a nostr client. Is anyone working on that?
Right, that is actually why I am trying to focus more on the error-correction aspect of this. Using a secure hash function for checksum calculation (as in the bip39 spec) helps determine that there is an error but it does not identify where the error is nor help correct it.
Reed-Solomon error correcting codes are more what I am seeking, but it is non-trivial to make those work for permutations.
Anyway, there seems to be less interest/activity in this bounty thread than I was anticipating, so I went ahead and awarded the bounty to you!
Thanks for engaging, and good questions. Please help me firm up the definitions.
> How do you measure the difference between two orderings? As the number of inversions?
Number of inversions probably makes sense, but I guess it depends. If we come up with some clever way of doing this (e.g. with a non-standard deck of cards) then maybe there is some other measure we could use?
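For concreteness, counting inversions between two orderings amounts to mapping one ordering onto the other and counting out-of-order pairs. A simple O(n²) sketch (a merge-sort-based count would bring this to O(n log n)):

```python
def inversions_between(a: list, b: list) -> int:
    """Number of pairwise order disagreements between two orderings of the
    same items: 0 when identical, maximal when one reverses the other."""
    pos = {item: i for i, item in enumerate(b)}
    seq = [pos[item] for item in a]  # where each item of a sits in b
    return sum(
        1
        for i in range(len(seq))
        for j in range(i + 1, len(seq))
        if seq[i] > seq[j]
    )

print(inversions_between([1, 2, 3, 4], [1, 2, 3, 4]))  # 0
print(inversions_between([1, 2, 3, 4], [2, 1, 3, 4]))  # 1 (one adjacent swap)
```

This measure is also known as the Kendall tau distance between two permutations.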
> And what are you trying to maximize here? The number of cards you can lose? Can any subset of card indices be lost?
It would be great if some subset of the larger `M`-card deck (`M = 54` in the example) could be lost and yet we could still reconstruct the data, where the data in the example is a number between `0` and `factorial(N)`, with `N = 13`. I was representing that number as a permutation of 13 objects, but I suppose that part is not very important.

Think about `data` as being the entropy+checksum for a bip39 seed. What I want is definitions for `encode` and `decode` such that:

`encode(data) = f`
`decode(g) = data`

where `f` and `g` are not equal, but `f` and `g` are permutations of `{1,2,3,4,...,M}`.

Does that help clarify?
Thanks. I understand what you mean now. However, it does not quite capture what I am seeking. I want it to be a single message which is comprised of `data` (e.g. entropy for a cold wallet, which itself may include some of its own checksums, etc), and I want the whole thing to be represented as a permutation of `{1,2,3,4,5,...,M}`. So what I want is definitions for `encode` and `decode` such that:

`encode(data) = f`
`decode(g) = data`

where `f` and `g` are not equal, but `f` and `g` are permutations of `{1,2,3,4,...,M}`.

Continuing with the cold storage example: the sender "sends" `f` by putting the cards in the order determined by `f` and then destroying the underlying original entropy (the secret), since it is now represented by `f`. Maybe the sender then buries this deck of cards in a hole in the ground or something :-).

Then, some time later, the receiver (which could be the sender itself) digs up the deck of cards and writes down their order. Unfortunately the order the cards are now in, for whatever reason, turns out to be `g`, which is not equal to `f`. In other words, there was an error in the transmission. However, all hope should not be lost.

So, the question is: can we endow `f` with enough error-correction capability that even if the receiver receives `g`, the receiver can still find the underlying original secret `data` from `g`?

> Ngl I find it a little difficult to understand what you even want.
Thanks for your answer and for the feedback regarding clarity. I should have given a more explicit example of the type of transmission I am after:
better example / motivation -- cold storage of entropy
Imagine storing the entropy for a cold wallet in the form of a carefully ordered deck of cards. It is not hard to take a number and represent it as a permutation of `N` objects. This is what a Lehmer code does, but Lehmer codes alone do not get us all the way there from a redundancy/error-correcting standpoint.

Where I get stumped is how to build error correction into the permutation itself, so that you can recover the seed entropy even if your permutation (the message) got a little jumbled up. My thought is that there is probably a way to do it by using an even larger collection of `M` objects, where `M > N`.

> a method that's widely used in many protocols in computer science for transmission.
Regarding your specific solution, I am confused. I want the final encoding of the message (e.g. what is actually transmitted) to be in the form of a permutation of objects, nothing else. Does your method do that? I think your merkle tree idea assumes that the receiver somehow already knows/learns the root hash, but that would be outside of the rules.
If you have an idea for how to do this robustly but which needs a non-standard deck of cards, that is fine, please share your idea!
The important thing is that the final encoding is solely in terms of a permutation of `M` physical objects (in the example `M = 54`) and that such an encoding stores information about a permutation of `N` objects (in the example `N = 13`) with some robustness against errors.

Maybe you have a clever way of doing it with a permutation of polynomials or something crazy like that?
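For reference, the Lehmer-code step mentioned above (number ↔ permutation, with no error correction yet) can be sketched via the factorial number system:

```python
def number_to_permutation(num: int, n: int) -> list:
    """Lehmer decode: map an integer in [0, n!) to a permutation of range(n)."""
    items = list(range(n))
    perm = []
    for i in range(n, 0, -1):
        # Peel off one factorial-base digit at a time: the digit selects
        # which of the remaining items comes next.
        fact = 1
        for j in range(1, i):
            fact *= j
        idx, num = divmod(num, fact)
        perm.append(items.pop(idx))
    return perm

def permutation_to_number(perm: list) -> int:
    """Lehmer encode: the inverse of number_to_permutation."""
    items = sorted(perm)
    num = 0
    for i, p in enumerate(perm):
        idx = items.index(p)
        items.pop(idx)
        fact = 1
        for j in range(1, len(perm) - i):
            fact *= j
        num += idx * fact
    return num

secret = 123456
perm = number_to_permutation(secret, 13)      # 13! ≈ 6.2 billion, plenty of room
assert permutation_to_number(perm) == secret  # round-trips, but with zero error tolerance
```

This round-trips perfectly but tolerates no errors at all: swap any two cards and decoding yields a completely different number, which is exactly the gap the question is about.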
I added a test demonstrating 1 of n oblivious signing. It does not quite solve the problem you point out, but it gets closer (notice how Alice can privately shuffle the messages and yet Bob still receives one and only one valid signature, while Alice still remains oblivious as to which one he receives).
Thanks for engaging on this. I hope it is helpful and we each further our own understanding of these concepts. I see what you are saying, and you very well might be correct.
Still, it feels like there is probably some way to do it with oblivious signing that would satisfy your objection.
Ok, I am shooting from the hip here quickly (so what follows could very well be wrong), but let's try this:
- do you see how it is easy to extend a 1 of 2 oblivious transfer (oblivious signing in our case) to a 1 of n?
- if so, then Alice funds the output with amount `U` and can sell a ticket to each Bob for an amount slightly more than `U / n` (slightly more so that, in expectation, she can earn a profit)
- if one of the Bobs wins, they will, naturally, move/claim the utxo
- if more than one Bob wins, they will have a fee fight
I think for large enough `n` and slow enough ticket sales, these issues are surmountable, no?

Yes. There is no need for every e-commerce merchant to know your physical address. There have been a couple of explorations, but a lot of work still needs to be done to make it practical. See this SN thread
If done right, it feels like we should be able to take the Amazon Prime model and flip it inside out.
Good idea!
As a step towards making it trustless, you might want to check out "oblivious signing"1, which uses adaptor signatures and something called oblivious transfer to achieve a trustless off-chain coinflip, and which can probably be generalized into a lottery.
I also implemented a simple demonstration/test2 of the underlying ecc math that makes oblivious signing work and which you might also find helpful.