I'm not understanding why dominant assurance contracts are so special

VitalikButerin Administrator Posts: 84 admin
edited April 2014 in Smart Contracts and Dapps
After reading the post on the dominant assurance contract implementation, I read through the original DAC paper, and I'm trying to understand what it is about DACs that makes them effective in funding public goods where traditional ACs are not. Tabarrok claims in http://mason.gmu.edu/~atabarro/PrivateProvision.pdf that they solve the public goods provision problem via a solution with one equilibrium, and argues that even under imperfect information they can be used to provide public goods with 1/2 probability.

I tried to do the math behind the contracts in a simplified form that makes things clearer, but I'm arriving at a completely different conclusion: DACs are not a single bit better than traditional ACs.

First, the definition. The way an AC works is that there are N participants, a public good with per-person cost C, per-person reward ~V (that's shorthand for "a distribution centered around V") with V > C, and a contract is set up where people can pledge R, and if at least C/R people contribute then the contract spends the funds and produces the public good, and if fewer people contribute then everyone is refunded. From the point of view of a participant, there are three possible scenarios:

1. The number of people who will pledge not including them is less than C/R - 1. In this case, if they pledge or don't pledge their return is 0.
2. The number of people who will pledge not including them is exactly C/R - 1. In this case, the return is V - R for pledging and 0 for not pledging.
3. The number of people who will pledge including them exceeds C/R. In this case, the return is V-R for pledging and V for not pledging.
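
For concreteness, here are the three cases as a minimal Python sketch (the function is purely illustrative; V and R are as defined above):

def ac_payoffs(case, V, R):
    """Return (payoff if you pledge, payoff if you don't) for the three cases above."""
    if case == 1:   # too few others pledge either way: the contract refunds everyone
        return (0, 0)
    if case == 2:   # you are pivotal: your pledge pushes the contract over the threshold
        return (V - R, 0)
    if case == 3:   # enough others pledge anyway
        return (V - R, V)
    raise ValueError("case must be 1, 2 or 3")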

Presuppose that the probability for (1) is 1/2-p/2, for (2) is p, and for (3) is 1/2-p/2. By the central limit theorem, p ~= 1 / sqrt(N). Then, the expected return from pledging is:

p * (V - R) - R*(1/2-p/2)

People will contribute if that value is greater than zero. Then, we have:
p * (V - R) - R/2+pR/2 > 0
2pV - pR - R+pR > 0
2pV > R

Since R = 2C, that's:

pV > C

Hence, people have the incentive to contribute if the probability that they are "pivotal", i.e. the chance the goal will be reached with them and fail without them, is greater than the inverse of the social return coefficient (V/C). Thus, a public good with 100x social return will succeed with up to N = 10000 people, 10x social return with up to N = 100 people, etc.
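
As a numeric sanity check on that bound, here is a small Python sketch (purely illustrative, using the same simplifications as above: p ~= 1/sqrt(N) and R = 2C):

import math

def ac_relative_gain(N, V, C):
    # expected gain of pledging over not pledging: p*(V - R) - R*(1/2 - p/2)
    p = 1 / math.sqrt(N)
    R = 2 * C
    return p * (V - R) - R * (0.5 - p / 2)

# a good with 100x social return should stop working somewhere around N = (V/C)**2 = 10000
V, C = 100.0, 1.0
for N in (100, 1000, 9000, 11000, 100000):
    print(N, ac_relative_gain(N, V, C) > 0)

The sign flips between N = 9000 and N = 11000, i.e. roughly at (V/C)^2 as claimed.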

Now, let's try a dominant assurance contract. This time, there is an entrepreneur who makes the following deal: if at least kN people (let k = 1/2) sign up, then each of them is required to pay S to the entrepreneur, and the entrepreneur pays C per person for the public good, which has value ~V per person. Otherwise, the entrepreneur pays F to everyone who signed up. The idea is that if everyone expects the contract to fail, then everyone has an incentive to sign up in order to receive F; but if that happens, the contract will end up succeeding at least part of the time. Now, let's look at the incentives of the entrepreneur (who controls F and S), with success probability 1/2 as before.

1/2 * S - 1/2 * F - C > 0
S > F + 2C
S - F > 2C

Now, given that, let's look at the individual participants:

p * (V - S) + F * (1/2 - p/2) - S * (1/2 - p/2) > 0
2p * (V - S) + (F - S) - p * (F - S) > 0
2p * (V - S) + p * (S - F) > S - F
2p(V-S) > (1-p)(S-F)
2p(V-S) > (1-p)2C
p(V-S) > (1-p)C
p(V-C) > C - pC (since S > C for the entrepreneur to be profitable, V - C > V - S, so this follows a fortiori)
pV - pC > C - pC
pV > C

Exactly the same inequality. What in my above analysis is wrong?
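
For what it's worth, here is the chain above as a quick numeric check in Python (purely illustrative; S - F is set at the entrepreneur's break-even value 2C from above):

def dac_relative_gain(p, V, S, F):
    # gain of pledging over not pledging, case by case:
    #   contract fails either way, prob (1-p)/2: pledger gets F, non-pledger gets 0
    #   pivotal, prob p:                         pledger gets V - S, non-pledger gets 0
    #   succeeds either way, prob (1-p)/2:       pledger gets V - S, non-pledger gets V
    return p * (V - S) + (1 - p) / 2 * F - (1 - p) / 2 * S

V, C, F = 100.0, 10.0, 5.0
S = F + 2 * C   # entrepreneur's break-even boundary from above
for i in range(1, 100):
    p = i / 100
    if dac_relative_gain(p, V, S, F) > 0:
        assert p * V > C   # the necessary condition derived above
print("whenever pledging pays, p*V > C held")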

Comments

  • aatkin Member Posts: 75 ✭✭
    Hey @vitalik. Thanks for taking a look. I'm not a math guy, but I'm looking at your vanilla AC example. Of course, this might not change anything. A note from the paper:

    -This works best for "lumpy" goods where we have an idea of time and materials required. This isn't a good model for funding the right amount of poetry in the world.

    -It is very close to an AC, with some levers. For non-profits, S=C, for loss leaders, S < C. I think the main idea is to manipulate F and S in such a way that p >= 0.5. Another optimization would be to maximize profit: http://www.stanford.edu/~jacobt/writing/dac.pdf

    -Honestly, the math might end up being equivalent on these, but the behavioral motivation drivers might make it a more successful model. There are very few real-world test cases, so I thought we might try it or some of its variants out on Ethereum.

    Vanilla Assurance Contract
    1. The number of people who will pledge not including them is less than C/R - 1. In this case, if they pledge or don't pledge their return is 0.
    2. The number of people who will pledge not including them is exactly C/R - 1. In this case, the return is V - R for pledging and 0 for not pledging.
    3. The number of people who will pledge including them exceeds C/R. In this case, the return is V-R for pledging and V for not pledging.

    1. This is true in the pledge model where we trust the donor to honor the pledge at the end of the contract. The non-trust model (which I prefer) is to require escrow up front. In this situation the funders lose the opportunity cost Q because their money is tied up for the length of the campaign with a 0 ROI.

    2. In the pivotal case of C/R - 1, the return is V-R for funding, and V for not funding if it is a non-excludable public good (e.g. everyone can use the bridge). It is 0 for an excludable public good (e.g. only funders can use the bridge, all others pay a toll).

    3. As in #2 return for not funding is V for a non-excludable public good and 0 for an excludable public good.

    Assume all donations are equal. X is the donor count, S is the donation, and Q is the opportunity cost of escrowing the donation.

    Dominant Assurance Contract
    1. Missed funding target:
    a) Entrepreneur loses F (the prize)
    b) Donor gains F (the prize) - Q
    c) Non-Contributor gains 0

    2. Met or exceeded funding target
    a) Entrepreneur profits by XS-C
    b) Donor gains V - S - Q
    c) Non Contributor gains 0 for excludable good, V for non-excludable good.
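
    For what it's worth, the above as a rough Python sketch (names are mine and purely illustrative; excludable switches between the two bridge examples):

    def dac_outcome(met_target, role, V, S, Q, F, C, X, excludable=False):
        if not met_target:                  # 1. missed funding target
            if role == "entrepreneur":
                return -F                   # pays out the prize
            if role == "donor":
                return F - Q
            return 0.0                      # non-contributor
        # 2. met or exceeded funding target
        if role == "entrepreneur":
            return X * S - C
        if role == "donor":
            return V - S - Q
        return 0.0 if excludable else V     # non-contributor gets V only if non-excludable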

  • VitalikButerin Administrator Posts: 84 admin
    If k is very close to 0 or 1, then p scales like 1/sqrt(kN) or 1/sqrt(N - kN) rather than 1/sqrt(N), so p would be larger. This is an expected result; in the limit, if k = 1, then the public goods problem disappears, but setting k = 1 is impractical due to coordination concerns (some people disagree that the public good is valuable, often you don't even know what the population is, etc). Setting k ~= 0 doesn't work because that requires you to set S or R very high. I agree that the best kind of contract will probably be based on behavioral economics in some fashion, combined with recursive social incentives; aside from that, it seems like (V/C)^2 might be a fundamental bound on the number of people at which an isolated contract is a workable solution.
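
    A quick numeric illustration of the point about k near 0 or 1 (Python, purely illustrative), under the simple model that each of the other N - 1 agents pledges independently with probability k:

    import math

    def pivotal_prob(N, k):
        # probability that exactly round(k*N) - 1 of the other N - 1 agents pledge
        threshold, m = round(k * N), N - 1
        j = threshold - 1
        return math.comb(m, j) * k**j * (1 - k)**(m - j)

    N = 1000
    for k in (0.05, 0.1, 0.5, 0.9, 0.95):
        print(k, round(pivotal_prob(N, k), 4))

    At N = 1000 the pivotal probability at k = 0.05 or 0.95 comes out a bit more than twice what it is at k = 0.5, consistent with p growing as k moves toward 0 or 1.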
  • yoyo Member Posts: 34 ✭✭✭
    edited April 2014
    @vitalik:

    Disclosure : not an expert in game theory.
    Quotes are in italics.

    First, one very important difference between the game-theoretic setup and the real-life setup is that in Ethereum the game will not be played simultaneously.
    Each agent would know how many agents have entered so far, and would know whether he is indeed the pivotal voter or not. This may change everything with regard to incentives.

    Vitalik wrote: 
    First, the definition. The way an AC works is that there are N participants, a public good with per-person cost C, per-person reward ~V (that's shorthand for "a distribution centered around V") with V > C, and a contract is set up where people can pledge R, and if at least C/R people contribute then the contract spends the funds and produces the public good, and if fewer people contribute then everyone is refunded


    In this paragraph and seemingly in other parts of your post, C alternatively means the per-person cost and the total cost.

    Vitalik wrote:
    The expected return from pledging is:
    p * (V - R) - R*(1/2-p/2)


    I don't follow this. For me the expected return from pledging, combining (1), (2) and (3), is:

    (((1-p)/2) * 0) + (p * (V - R)) + (((1-p)/2) * (V - R))

    or

    p * (V - R) + ((1-p)/2) * (V - R)

    Vitalik wrote:
    p * (V - R) - R/2+pR/2 > 0
    2pV - pR - R+pR > 0
    2pV > R


    I think the second line should be 2pV - 2pR - R + pR > 0

    Vitalik wrote:
    Since R = 2C


    Where is this coming from? I can't find this notion anywhere.


    Regarding the dominant contract analysis, the use of different naming conventions for some quantities is confusing.
    From the paper:
    - N is the total number of agents.
    - X is the number of agents that actually accepts the contract.
    - K is the quorum of agents needed for the contract to succeed.
    - F is the individual payoff paid to accepting agents if the contract fails.
    - S is the contribution made by each accepting agent if the contract succeeds.
    - C is the total cost to make the public good.
    - V is the individual payoff value of the public good.

    if X < K the contract fails, otherwise it succeeds.

    Vitalik wrote:
    Now, let's look at the incentives of the entrepreneur (who controls F and S), with success probability 1/2 as before.
    1/2 * S - 1/2 * F - C > 0


    I do not follow. For me the entrepreneur's expectations are:
    XS - C, if X >= K
    -XF, if X < K

    I'm not sure these can be normalized per-agent.

    C is the total cost and is not necessarily correlated with K. K is defined by the entrepreneur.

    For each agent, if he pledges (accepts the contract):

    from case (1) : ((1-p)/2) * F
    from case (2) : p * (V - S)
    from case (3) : ((1-p)/2) * (V - S)

    Vitalik wrote:
    p(V-C) > C - pC (since obviously S > C for the entrepreneur to be profitable)


    The entrepreneur can also be profitable by demanding that the quorum be more than what would be strictly needed to cover the costs of producing the public good.

  • VitalikButerin Administrator Posts: 84 admin
    All amounts are per-person.

    > The expected return from pledging is: p * (V - R) - R*(1/2-p/2) I don't follow this.

    There are three cases:

    Case 1 (probability 1/2 - p/2): not enough people pledge even if you do. Then the reward is 0 in both cases.
    Case 2 (probability p): you are the pivotal member in determining whether enough people pledge. Then the reward is V - R if you pledge and 0 if you don't pledge, so pledging gains you V - R if this happens.
    Case 3 (probability 1/2 - p/2): enough people pledge with or without you. Then the reward is V if you don't pledge, V - R if you do.

    Hence, the relative gain from pledging is (1/2 - p/2) * 0 + p * (V - R) + (1/2 - p/2) * (-R), which is equivalent to my above formula. You're providing the absolute return of pledging, when the pertinent thing is the gain of pledging vs not pledging.
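
    To spell the distinction out as a small Python sketch (function names are mine, purely illustrative): yoyo's expression is the absolute expected return of pledging, while the decision-relevant quantity is the difference between pledging and not pledging.

    def expected_return_pledge(p, V, R):
        return p * (V - R) + (1 - p) / 2 * (V - R) + (1 - p) / 2 * 0

    def expected_return_no_pledge(p, V, R):
        return p * 0 + (1 - p) / 2 * V + (1 - p) / 2 * 0

    def relative_gain(p, V, R):
        # reduces to p*(V - R) - R*(1/2 - p/2), the expression in the original post
        return expected_return_pledge(p, V, R) - expected_return_no_pledge(p, V, R)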

    > Since R = 2C Where is this coming from? I can't find this notion anywhere.

    C is the per-person cost, half of people are contributing, so R (the per-person cost among those who actually contribute) = 2C

    > I'm not sure these can be normalized per-agent.

    I'm assuming that there is a normal distribution around the minimal threshold, and since the standard deviation is sqrt(n) the effects of the deviation on the denominator are insignificant. It's a good simplification.

    > The entrepreneur can also be profitable by demanding that the quorum be more that what would be strictly needed to cover the costs of producing the public good.

    That's equivalent to setting S > C.
  • sjenkins Member Posts: 28
    >Presuppose that the probability for (1) is 1/2-p/2, for (2) is p, and for (3) is 1/2-p/2.

    [I am not a game theory expert! But...] is this supposition the problem?

    What strategies are the various members of the population employing to produce a certain p (whatever it is) and an even split around it? And given a particular p and an even split, does it benefit *any* player to change their own strategy accordingly? How would this affect p and the split?

    In a traditional AC, the entire population deciding not to pledge is a stable scenario in which:

    The probability for (1) is 1, for (2) is 0, and for (3) is 0.

    Nobody has an incentive to change their "don't pledge" strategy here, so they don't change it.

    In a DAC on the other hand there *is* an incentive to switch strategy if nobody is pledging. There's also an incentive to switch strategy if too many people are pledging. I'm not doing the math but this at least suggests things could move towards a situation where the right number of people are pledging, as people adjust their strategies according to previous results.
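
    To make that concrete, a tiny Python sketch (parameter names follow the thread, values arbitrary): the payoff to a single agent as a function of how many others pledge, and the gain from unilaterally deviating when nobody else pledges.

    def ac_payoff(pledge, others, K, V, R):
        funded = others + (1 if pledge else 0) >= K
        if not funded:
            return 0.0                    # refunded, good not produced
        return V - R if pledge else V

    def dac_payoff(pledge, others, K, V, S, F):
        funded = others + (1 if pledge else 0) >= K
        if not funded:
            return F if pledge else 0.0   # a failed contract still pays pledgers the prize F
        return V - S if pledge else V

    K, V, R, S, F = 50, 100.0, 20.0, 25.0, 5.0
    print(ac_payoff(True, 0, K, V, R) - ac_payoff(False, 0, K, V, R))          # 0.0 -> stable
    print(dac_payoff(True, 0, K, V, S, F) - dac_payoff(False, 0, K, V, S, F))  # 5.0 -> not stable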

    Yoyo's point about how the blockchain affects the game is also well worth noting. The probability of (3) would likely be zero because nobody's got an incentive to pledge after they see K pledges already made.
  • aatkin Member Posts: 75 ✭✭
    If donors are offered rewards a la Kickstarter I don't see why they would behave any differently for case 3 in a DAC.
  • Jasper Eindhoven, the Netherlands Member Posts: 514 ✭✭✭
    Yes, but the rewards also have a price on them. That said, social or 'emotional' rewards in particular could be worth more than the price. But this promotion might be moot, because other players will consider the (potential of) those rewards, increase their estimate of the likelihood of success, and decide free-riding is better.

    Tend to agree with Vitalik, but note that here p is the success probability, not the distance from halfway. Payoffs (guy, entrepreneur):
    Fails (1-p): F, -XF
    Success (p): V-S, XS-C
    Success, abstaining: V, 0
    Participate if:
    Av_Entrepreneur = -(1-p)XF + p(XS-C) > 0
    Av_Guy = (1-p)F + p(V-S) > Av_Guy_abstain = pV

    So for the first inequality I get XF < p(XF+XS-C).
    For the latter, note that pV drops out; that's the problem with getting people on board with the regular assurance contract for a public good. Anyway, F > p(S+F).

    So S+F < F/p < F+S - C/X, i.e. we need C/X < 0, or C < 0, for there to be an F/p that both parties agree to. Initially I thought it was absurd (I also accidentally flipped the inequality sign), but I am thinking maybe it is right: the fact that pV essentially drops out makes it a simple bet, except that it has a cost to one of the parties, C. If they agree on the probability, that cost makes one of them always lose money on average by accepting the bet (bets are essentially always about differences in the estimated probabilities on the two sides). With different probabilities, the bet can make sense if F/p1 - F/p2 > C/X; with the claim of expecting p ~ 1/2 that won't be the case.
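
    To double-check that conclusion numerically, a Python sketch (illustrative, and using the two inequalities exactly as written here, i.e. with the pivotal case left out):

    def entrepreneur_ok(p, X, S, F, C):
        return -(1 - p) * X * F + p * (X * S - C) > 0

    def participant_ok(p, V, S, F):
        return (1 - p) * F + p * (V - S) > p * V

    X, V, C = 100, 10.0, 500.0
    found = False
    for p in [i / 20 for i in range(1, 20)]:
        for S in [i / 2 for i in range(1, 21)]:
            for F in [i / 2 for i in range(0, 21)]:
                if entrepreneur_ok(p, X, S, F, C) and participant_ok(p, V, S, F):
                    found = True
    print("both conditions satisfiable with C > 0:", found)   # False on this grid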

    The thing that made it feel absurd is probably the idea that the Entrepreneur can charge more, but the potential participants respond to it.
  • sjenkins Member Posts: 28
    >the cost makes one of them always on average lose money in accepting the bet

    From the paper:

    "The entrepreneur’s profit maximizing decision, therefore, implies that a necessary and sufficient condition for the entrepreneur to produce the public good is that it be efficient to do so, i.e. that VN > C."

    So it's not a zero-sum game: if the contract is accepted then N people have gained V, yet it has not cost the entrepreneur VN to produce that. The entrepreneur's profit and the population's net gain in utility both come from the efficiency of undertaking the project, not from each other.


  • Jasper Eindhoven, the Netherlands Member Posts: 514 ✭✭✭
    Hmm, figuring out my apparently different conclusion, I see I have completely overlooked the point of a probability of being pivotal. Sorry about that! Will try again...

    @sjenkins It's not about it being a 'zero sum game' or not... The problem is that for the people participating, it is not to their advantage to participate. And then the thing creating the value never happens. I hate to bring politics into it (well, you kind of did), but the mistake of libertarians when they proclaim charity shouldn't go via the state is that people being charitable are disadvantaged by being so. Same for creating the bridge here, except this creates public value. Though charity like getting drug addicts off the streets, or education, also creates public value. /politics
  • sjenkins Member Posts: 28
    This isn't a game where all parties agree on a probability p and then a random result is generated according to that probability. In fact there's no random element at all. The players have complete information about the contract and then simultaneously get to make their move. The result is 100% determined by the moves the players actually make (pledge or no pledge) and 0% determined by some probability "p".

    Sure, some players may model their uncertainty about the other players' strategies in terms of probability, but modelling uncertainty about your opponents is not the same as modelling the game itself. (Like a chess player may estimate a 1 in 6 chance their opponent will play the king's gambit and prepare accordingly, but the actual opening will be decided by the opponent, not by the roll of a die. 1 in 6 is NOT the actual chance of getting a king's gambit here).

    So what is "p" in all that maths up there?





  • Jasper Eindhoven, the Netherlands Member Posts: 514 ✭✭✭
    Hmm, what if it is one by one? You can tell whether you are pivotal ahead of time. If you are not pivotal, the above estimation with probability p is in effect. So then the above approach is correct, until it is pivotal?

    If you are pivotal and the probability is q that someone else goes first if you wait, then either the other guy comes, +V, or not, 0; the average is qV. If you participate it is V-S for sure, so you do it if qV < V-S. But that is moot if you can never get to the state where it is pivotal.
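
    That decision rule as a one-line Python sketch (illustrative):

    def pivotal_agent_pledges(q, V, S):
        # pledge now if the sure payoff V - S beats the expected payoff q*V of waiting
        return V - S > q * V

    print(pivotal_agent_pledges(q=0.5, V=100.0, S=20.0))   # True: 80 > 50
    print(pivotal_agent_pledges(q=0.9, V=100.0, S=20.0))   # False: 80 < 90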

    @sjenkins: potential investors are going to try to figure out if it will fail or succeed. For p=0 my argument doesn't work (because of the division by zero); then an investor jumps in. But if this investor thinks others think alike, he knows that they are also going to do so. And eventually there will be a pivotal guy, and qV < V-S is eventually going to be true by someone's estimation, because if it isn't, your estimate of q is going down, so the pivotal guy will play along. So investors can't truly think p=0.

    Maybe this stuff is really realistic, or the right approach. I think some parties kind of have running interests in the thing. They may have competitors to worry about outside of town, and although the benefit V is 'public inside the town', it is 'private to the town'. This is probably the reason you so often see politicians talk about infrastructure helping the economy.

    That said, 'private to the town' isn't as beneficial to everyone as it could be. For instance, a distro that shares its software with everyone is clearly more beneficial than one that works as a subscription and keeps the software to itself. In that case, it seems that, essentially, donation is better as a culture. But then you run into the problem that donation is self-disadvantaging again, so maybe we do need a solution to it. Going more political, one option is to simply make money not so much about personal success, like UBI...
  • sjenkins Member Posts: 28
    @jasper:

    >potential investors are going to try figure if it will fail or succeed..

    Yes, but they won't figure it out correctly by assuming the other investors' actions are random. The other investors aren't acting randomly; they're all actively trying to game each other for advantage. For a very clear illustration of the difference, consider the following 2 games:

    Rock, Paper, Scissors: Arthritic Death Match Edition
    Variant 1
    • You are to play 100 rounds of Rock, Paper, Scissors.
    • If you win less than 25 rounds you will be shot dead at the end of the game.
    • If you win 25 rounds or more you will walk free.
    • You have arthritis which hurts your hand a bit when you make Scissors or Paper.
    • Your opponent is a fair and balanced random number generator.
    Variant 2
    • This is exactly like Variant 1...
    • ...except this time your opponent is a human being who wants you dead.

    So the best strategy for Variant 1 is to simply play Rock 100 times in a row: it's got the same good chance of survival as any other sequence, plus it doesn't hurt the hand. But taking this same strategy into Variant 2 is almost literal suicide! The probability calculations which provide the correct answer for Variant 1 simply don't work on Variant 2, because the assumption of opponent randomness is no longer valid.
  • aatkin Member Posts: 75 ✭✭
    I used to be an experimental psych student. What if we designed a "study" where we promote a link to the Kickstarter-style Ethereum contract for a given campaign along with a DAC contract for the same campaign? Half the emails have the DAC link on top, the other half reversed.

    The point I'm trying to make is that this sort of thing needs to be tested empirically. People don't behave rationally in general (e.g. state lotteries, casinos). I'd like to see how successful a DAC is in practice as the existing result set is very small (the quora case is the only one I know of).

    From what I remember from the paper, p isn't a constant; it's theorized to be a Bayesian distribution, as different contributors value it differently. Also, the paper models the case where information is hidden; in our case it wouldn't be. It also assumes a fixed all-or-nothing funding level, e.g. $0 or $100. Would the math change at all if you could invest a variable amount?
  • sjenkins Member Posts: 28
    @aatkin The paper analyses a case where the value is uncertain in a separate section after analysing the case where it is known. It may be worth noting that the word "probability" does not even appear in the paper until we get to that section. The main analysis is done with game theory not probability theory.

    I agree there are plenty of psychological biases (and political ones as @jasper has mentioned) which might affect this, especially if they turned out to be very common in the population.



  • Jasper Eindhoven, the Netherlands Member Posts: 514 ✭✭✭
    I agree with @aatkin! Well, caveats... You can get stuck in all sorts of problems where the group you are testing is not representative. Besides that, wealth doesn't follow the typical behavior either.

    Ultimately, the problem here is that donation is essentially self-defeating. What if you are rational? You shouldn't be asked to act (overly) irrationally. I think there may be a political dimension to this thing.

    Also, we could indeed try to go more complicated: people having different use values V[n], and allowing each agent to choose S arbitrarily, getting F = αS for some α. I don't really want to go there (I have the feeling it could be pointless, or I am going to bump into something that makes it moot anyway).

    Assuming the game is a row of agents playing in order, and it gets to the potential pivot: if the pivot is the last guy, it is either V-S or 0. So the last guy will do it! The second-to-last guy can see that happening and has the same consideration. If you are the K+1'th behind the last guy, you can see beforehand that you do not need to invest; the other guys will do it before you.

    Note that in that view, it doesn't matter whether or not you get paid F if it fails, either.
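
    Here is that 'row of agents' argument as a backward-induction sketch in Python (illustrative; payoffs as used in this thread: V - S if you pledged and it succeeds, V if it succeeds without you, F if you pledged and it fails, 0 otherwise; assumes V - S > 0):

    from functools import lru_cache

    def spe_path(N, K, V, S, F):
        """Decisions (True = pledge) of agents 1..N under backward induction."""
        @lru_cache(maxsize=None)
        def succeeds(remaining, needed):
            # does the continuation reach the quorum, given rational play?
            if needed <= 0:
                return True
            if remaining == 0:
                return False
            pledge_pay = (V - S) if succeeds(remaining - 1, needed - 1) else F
            pass_pay = V if succeeds(remaining - 1, needed) else 0.0
            next_needed = needed - 1 if pledge_pay > pass_pay else needed
            return succeeds(remaining - 1, next_needed)

        decisions, needed = [], K
        for remaining in range(N, 0, -1):
            pledge_pay = (V - S) if succeeds(remaining - 1, needed - 1) else F
            pass_pay = V if succeeds(remaining - 1, needed) else 0.0
            pledge = pledge_pay > pass_pay
            decisions.append(pledge)
            if pledge:
                needed -= 1
        return decisions

    # the first N - K agents free-ride and the last K pledge, whatever F is:
    print(spe_path(10, 4, V=100.0, S=20.0, F=0.0))
    print(spe_path(10, 4, V=100.0, S=20.0, F=5.0))

    With those numbers both runs print six False followed by four True, so F never actually gets paid on the equilibrium path.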

    @sjenkins so you are right as well, I suppose. The problem is that the game is ill-defined, and probably does not fit the socioeconomic one. Kind of, we don't know what the order is, or what ability people have to get there. The row of agents could be ordered by the time it takes them to 'fill out the form', I suppose. But do they know that about each other too? If you're the K+1'th, what if someone in front is irrational? I suppose that, given the new 'row' approach, it probably looks different than above, but 'realistic' approaches there probably involve probability...
  • VitalikButerin Administrator Posts: 84 admin
    > In a traditional AC, the entire population deciding not to pledge is a stable scenario in which:
    > The probability for (1) is 1, for (2) is 0, and for (3) is 0.
    > Nobody has an incentive to change their "don't pledge" strategy here, so they don't change it.
    > In a DAC on the other hand there *is* an incentive to switch strategy if nobody is pledging.

    You are correct. However, that does not preclude my point from being correct. These two scenarios can be simultaneously satisfied by one equilibrium: no entrepreneur bothers to create a DAC because it's not usually feasible/profitable in the first place. And this is indeed the equilibrium that we seem to see in reality.

    > That said social or 'emotional' rewards in particular could be worth more than the price.

    Agreed. A lot of these economic tricks succeed primarily because they're an excellent form of socially reinforced gamification, not necessarily because of the underlying economics.

    > So what is "p" in all that maths up there?

    It basically is modeling the randomness of other players. Here's how I'm modeling the game:

    (i) Presuppose that the DAC works, and there have been many rounds of DACs, and on average one observes that 50% of people contribute to a DAC.
    (ii) A DAC starts. All agents, each with a choice to contribute either $0 or $X, are sitting behind their computer screens, with no information about what other contributors are doing other than the above
    (iii) All agents make their decision simultaneously

    If the 50% is changed to another percentage, say 13%, then the same math will still work; it's just that some of the constants will change a bit and all the changes will roughly cancel out. "p" is the probability that exactly the minimal sufficient number of agents will participate. From (ii) and (iii) it's reasonable to assume that the number of participants will follow a Gaussian distribution with some standard deviation, and the standard deviation is proportional to the square root of the number of players.
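
    That model as a quick numeric sketch in Python (purely illustrative): treat each of the other N - 1 agents as contributing independently with probability 1/2 and take "p" to be the chance that exactly threshold - 1 of them contribute, so that your own contribution is exactly what tips the contract over.

    import math

    def p_pivotal(N, rate=0.5):
        threshold = round(rate * N)
        m, j = N - 1, threshold - 1
        # binomial pmf computed in log space to avoid overflow for large N
        log_comb = math.lgamma(m + 1) - math.lgamma(j + 1) - math.lgamma(m - j + 1)
        return math.exp(log_comb + j * math.log(rate) + (m - j) * math.log(1 - rate))

    for N in (100, 400, 1600, 6400):
        # p * sqrt(N) stays roughly constant, i.e. p shrinks like 1/sqrt(N)
        print(N, round(p_pivotal(N), 4), round(p_pivotal(N) * math.sqrt(N), 3))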

    Now, if we model the DAC as a multi-round game where people can see the time remaining and the score going up over time, then the mechanics might become more interesting, and the constant factor on the p=k/sqrt(N) might increase to some extent, though it would still remain proportional to 1/sqrt(N). Thus, I am open to the possibility that perhaps DACs bring some constant-factor improvement to traditional ACs. I would love to see an evolutionary-agent-based model of the game (but be careful not to accidentally reward altruism through kin selection) and see what strategies prove the most profitable.
  • sjenkins Member Posts: 28
    @vitalik

    >(i) Presuppose that the DAC works, and there have been many rounds of DACs, and on average one observes that 50% of people contribute to a DAC.

    You have the entire history of the rounds and know that every result in every round is the completely deterministic outcome of the strategies used by the players in that round. Why would you discard information by averaging this?

    >(ii) A DAC starts. All agents, each with a choice to contribute either $0 or $X, are sitting behind their computer screens, with no information about anything that other contributors are doing than the above

    They don't just know where the others are sitting. They also know that the others are all evaluating their existing DAC strategies (a history of whose results you have in your possession) and trying to decide whether to continue with them or to change them.

    >"p" is the probability that exactly the minimal sufficient number of agents will participate. From (ii) and (iii) it's reasonable to assume that it will be a Gaussian distribution with some standard deviation, and the standard deviation is proportional to the square root of the number of players.

    "p" is not a probability, and you not knowing what the number is doesn't make it start behaving like one.
  • zawy Member Posts: 26
    edited July 2014
    "no entrepreneur bothers to create a DAC because it's not usually feasible/profitable in the first place. And this is indeed the equilibrium that we seem to see in reality."

    Example of it in reality:
    Voters and Government can be viewed as the entrepreneur. Taxes are the "buy-in". Not going to jail is the payoff for paying taxes. So government is the DAC-like solution for where ACs fail. The government version of the DAC avoids being equivalent to an AC by going ahead and doing the projects from the pool of taxes rather than waiting for sufficient participants. There is no "fail" outcome, which is pretty much like saying it is the "dominant" outcome. Not paying fair taxes and getting away with it (legally or not) means there are still free-riders. Determining if people are paying their "fair share" is a problem, and supposedly the excuse for a complex tax code.

    Religion was the first government (DAC-like). There were direct tax-like and investment-like structures from the beginning in religion, but there was also a great public good in the social glue it provided. An example was being able to travel vast distances without electronic communication and know ahead of time the rules by which people would play. It prevented war and enabled a sense of "family" way beyond the local tribal experience for which we evolved (kin-selection altruism). The cost was maybe some loss of individualism and beliefs in superstitions that may cause a person to be more self-sacrificing for the common good than evolution directly designed him for. The payoff for buying in to the religion was being able to be part of the society. Again, there is no probability of "fail" to calculate, so it is a DAC that works better than an AC.
  • VitalikButerin Administrator Posts: 84 admin
    @sjenkins

    I just don't think that the kind of logic you're advocating is a strategy that people will realistically follow. I see this as similar to the finite-iterated prisoner's dilemma case, where game theoretically everyone "should" defect from the first round because of the recursive "well obviously the other guy's optimal strategy on round n+1 is to defect regardless of what I do now", and yet imperfect information and imperfect rationality mean that such outcomes never happen in the real world. That's basically why I think the Gaussian-0.5-median model is the only reasonable one.

    @zawy

    What you're describing is a recursive punishment system, not a DAC. The mechanisms are quite different. Recursive punishment systems I have no theoretical objections to from an economic standpoint, only implementation-specific moral and practical issues.
  • zawy Member Posts: 26
    edited July 2014
    Clearly a system of forced taxes as the optional buy-in and "voters and government" as the entrepreneur make the comparison to a DAC "imaginative", but the differences are why it should overcome your disproof of the DAC's supposed advantages. Punishing the non-contributors is the double negative of rewarding the contributors, so I am not sure there is a logical difference from that key part of the DAC. Training children and pets with rewards is supposed to be better, so there is an extra psychological dimension that makes my negation possibly not so swell.

    Systems of contracts on Ethereum could function as "governments" that avoid your implementation-specific objections. If a person decides to join a specific "ethereum government", it means he accepts the "punishments". Viewing tax-evasion imprisonment (or much more likely, financial penalties) as forced punishment makes the assumption that the person was forced into the societal agreement. This is true, he is forced, even in a democracy he did not choose to join but was born into. Democracy is "tyranny of the slightest majority" on any particular issue and it results in everyone losing reasonable freedoms. So my suggestion is that Ethereum-based governments can achieve the goal of DACs, working better than ACs. People would choose to join them, so the punishments, if they exist, are not as forced as in current governments. (Although some societies have achieved high taxes and very low imprisonment).

    You cited the lack of DACs in reality as an arguing point against their usefulness. I cite the existence of governments as being necessary and clearly improvable, such as by removing the occluded "actors" (the non-useful politicians) and contrary special interests through transparent programming.

  • VitalikButerin Administrator Posts: 84 admin
    edited July 2014
    So a protocol as follows might work:

    1. Everyone who contributes at least C to public good P is in class G
    2. Everyone who offers an X% discount on non-resellable goods and services to members of class G is in class G
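
    A very rough sketch of those two membership rules in Python (all names hypothetical, just to make the rules concrete):

    def class_g(contributions, discount_offers, C, X):
        """contributions: {person: amount contributed to public good P};
           discount_offers: {person: % discount offered to class-G members on
                             non-resellable goods and services}."""
        via_contribution = {p for p, amount in contributions.items() if amount >= C}
        via_discount = {p for p, pct in discount_offers.items() if pct >= X}
        return via_contribution | via_discount

    print(class_g({"alice": 12}, {"bob": 10}, C=10, X=5))   # contains both 'alice' and 'bob'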

    That's the proper mirror image of a government-style recursive punishment system, and as far as I know it has never been _formally_ implemented on any significant scale in history, probably in large part because up until very recently it has been very hard to keep track of who has what classes in an urban/globalized society. But that's not a DAC. The primary difference between the two is that DACs ultimately rest all of their power on the public good itself; if you follow the math with the public good being useless then all you have left is a zero-expected-value gambling game. Recursive punishment/reward systems, however, can take on a life entirely of their own, which is both good (because it allows for much higher magnification factors) and potentially dangerous (because you can set up recursive systems around anything irrespective of whether it's good or bad, which is why we get evil authoritarian governments, wars, religious conflicts, etc), although the conjecture is that recursive _reward_ systems are less likely to do this because the lack of special coercive power usually forces a lower bound of zero on their negative consequences toward any specific entity.
  • sjenkins Member Posts: 28
    @Vitalik

    > That's basically why I think the Gaussian-0.5-median model is the only reasonable one.

    It isn't a reasonable model; it's the mistake you asked us to find, i.e. the reason why your maths didn't reproduce the published result.

    A variable being hidden does not make it random. Ignorance of other players' strategies does not cause them to magically start producing results normally distributed around the precise value of 0.5. Providing more reasons why a variable is hidden ("imperfect information", "imperfect rationality") does not provide more reason to think that it has or adopts these properties.

    The game theory approach isn't making the too-strong assumption that everyone will behave perfectly logically and rationally, but neither is it making the too-weak assumption that everyone will just toss a coin when deciding what to do. Rather, it is making the "just-right" assumption that people are playing *some* strategy (informed or not, rational or not) and that those playing bad ones will either change them over time or get gamed by people who do.

    The difference between the DAC and AC shows up when too many people are playing the "don't pledge" strategy. In this scenario the DAC provides an incentive for some to switch to the "do pledge" strategy whereas the AC does not. Your maths doesn't show this difference because it only covers the case where everyone is playing the "toss a coin" strategy.
  • zawy Member Posts: 26
    edited July 2014
    That would explain why negative reinforcement as a training technique is not considered optimal: it harms the thing you're training, at least against what it believes is best for itself, which should be accurate enough. But you want maximum output, which is contrary to harming it. When there is an excess supply of the thing you're "training" (getting output from), though, "using them up" has been employed (worker death camps). If the thing being trained seeks, above all things, self-reproduction against "what's best" for all, and yet the rules governing all are designed to do what's best for all, then there is an inherent conflict that could result in "necessary" negative reinforcement. There's a sense of "I, Robot" in this, where the systems of contracts are the robots.
  • zawy Member Posts: 26
    @vitalik, isn't your 1) and 2) nearly a definition of religion? There are "public goods" (and bads from an EXTERNAL non-G view) religions are engaged in, but the primary "public good" religion seeks is to spread itself. There are some philosophies that the only good is the desire for survival which can imply reproduction, so I can't define it as bad. Religion primarily seems to use positive reinforcement and is intensively concerned with the health of its members. The problems historically have come from religions that evolved separately (no communication like we have today) so conflict would arise. This is why I promote world government (aka, one God, aka, one governing system of contracts at the top level only concerned with vast macro results such as overpopulation) instead of crypto-anarchy.

    I have the sense that DAC-like attempts at a greater good are somehow trying to work from rigid rules at the bottom to produce a higher level of intelligence in the system. That does not seem to be how things have evolved (I do not believe in "emergent" intelligence, only emergent complexity). It seems "higher intelligence" comes from competing (evolving) complex systems of "governed" systems. How they get there seems to be a long and messy process, with cells being an example.
  • zawy Member Posts: 26
    edited July 2014
    To clarify the parallel to religion, contribution C is a mildly enforced percentage of income ("tax") of 2.5% for Unitarian Universalists, 5% for Seventh-day Adventists, and 10% for a lot of Methodists and Baptists. The X% discount on non-resellable goods and services is along a lot of vague lines but might generally be summed up as "love thy Christian neighbor, but not thy Muslim neighbor" and vice versa, even if they both exclaim this is not how they think. So once again, there is a negative reinforcement implied for the non-commits if members of G simply show a preference for others in G. Society has a finite set of resources to distribute even if they can't be resold or sold in the first place (e.g., spending time taking care of someone dying for free), so there is a normalization of total "funds" that turns into a negative for non-G members, unless P > total X% + C (the group's project(s) add(s) to society more than it discounts its members and spends on P). Non-members get a free ride on P with less overall profit due to not getting X%, but only if X% > C. Although I like keeping definitions flexible in order not to limit possible new ideas (Tim Berners-Lee mentioned this as important in the semantic web), I see that your mathematical delineation of this enables good clarification of the relation between P, C, and X%.

    By relaxing your definitions of C and X% a little, it seems to describe a general class of societally beneficial "team" efforts, where there is forgiveness among team members (X%) and self-sacrifice to the project (C) of money, time, or social interaction.
  • zawy Member Posts: 26
    edited July 2014
    Doh. To what extent do 1) and 2) not describe a modern company? P is the net public good of their product beyond competitors. It is a public good since it has outcompeted less-effective products or services. C is stock investments, G are shareholders. X% are dividends, although different from what you described (from the buyers instead of other shareholders). I don't understand your requirement for non-resellable. Per my last post, total X% should eventually be greater than total C for the shareholders' success, and P should be > total X% + total C to prevent a net negative of externalities due to reduction in system-wide resources. Total X% > total C for shareholders to profit, and P > 2*total C for shareholders to gain and yet not take system-wide resources away from non-shareholders. This is the key to preventing a concentration of wealth in the hands of a few. I have not figured out how to measure P in this context, because "beyond nearest competitor" seems way too small. Is it as much as gross sales plus this benefit beyond nearest competitor?

    So the role of government (law) is to ensure this last inequality, to prevent shareholders from being ripped off via fraud, and to prevent externalities. If there is no competition, then government might need to step in to break up a monopoly that is preventing the P > total X% + C from being true. Evolving competition and critical core thinkers are needed elements in companies to satisfy 2 of my guidelines for how NP-hard optimization problems are usually slowly and approximately solved. Government's role should not only be to uphold the inequality, but to maximize it. An example of this is research assistance to fundamental technologies that many companies can use. Another example of how to maximize it: advertising and marketing should be restricted (especially prevention of fraud) because it subtracts from P by promoting the company based on things other than its product, such that better products more wisely investing in the product itself (or their services) get left behind. Amazon enables better products without the producers having to spend on advertising because of customer reviews. Customer reviews are a way to bypass something I have said government needs to do. The people replace the governor and advertiser in the marketplace. Ethereum might enable all functions of government to be replaced like this. And replace Amazon too.

    Bitcoin has been jumping like gold on any war-like news in the Ukraine. Today's jump was based on the airplane shot down. Imagine if you needed to transfer wealth into or out of the Ukraine in a time of war. Gold won't do you much good.
  • zawy Member Posts: 26
    edited July 2014
    The main thing is that P < total X% + total C is parasitic against society and specifically against only the non-G's if X%>C.
  • zawy Member Posts: 26
    By charging customers to provide X% to shareholders "G" instead of discounts only between G members, there is an ongoing feedback mechanism (the price signal) that is missing in your 1) and 2) about the extent to which it is actually a "public good". Companies can be more aggressive, but externalities are not prevented in either method, so a measurement and governing mechanism is still required to replace the "faith" among G members that the public good is indeed a public good.
  • cusknee Member Posts: 1
    This chat has continued here: http://forum.truthcoin.info/index.php/topic,132.0.html
    Its conclusion was quite similar to sjenkins's: that p is inapt for a DAC.
  • aatkin Member Posts: 75 ✭✭
    edited September 2014
    Interesting. I'll follow the conversation at the link. I wonder what Alex Tabarrok might have to say on the subject. Just to reiterate my opinion: many human beings are not economically rational (pervasiveness of gambling, lotteries, etc.), and Ethereum offers us a way of easily testing the effectiveness of this funding model compared to others. In fact, I think Ethereum would be a valuable tool for social science and economics experiments.