I think I'm finally getting the idea of Ethereum and the possibilities it provides. I get the contracts, scripting, enforcement by code, lack of trust, no dependence on third parties, etc. But I think the Autonomous Agent part is the potentially weak area. To use a historical example: "Dewey Defeats Truman". For those too young to remember, a major US newspaper proclaimed that Dewey had won the US Presidency. This was not only completely incorrect, but it caused people to take actions they would not otherwise have taken. The same thing happened more recently when Al Gore was proclaimed the winner of the US Presidential election before people went to bed, and they woke up to a different President.
So if a news organization or blog or radio station makes a mistake, or an official site (say ESPN) screws up, and an Autonomous Agent relies on this information, contracts can be settled incorrectly, with no recourse. Worse, there are often Internet-wide "stories" that develop, that everyone reports on, that are later determined to be wrong. This is just what happens by accident. If malicious actors, with real money at play, are involved, much more active efforts at misinformation could be attempted. This could range from a mass distributed hacking and denial-of-service attack (change the info where you can, shut it down where you can't) to an insider at Bloomberg manipulating a stock ticker relied upon worldwide for personal profit or political gain.
It's just like social engineering. You never try to break the cryptography, you just present different information in the right places, at the right time, to achieve the result you are looking for. Even if human arbiters are involved, they can be bribed, threatened, blackmailed, extorted or even eliminated. This already happens in the case of theft, fixing sports events, affecting political contests, or influencing the verdict in a trial. Add the case of an actual war, and all of this is dramatically escalated.
So, I think the code and protocols, if done right, are probably robust, but it's the interface with the real world where the problem comes in. Because you can't *really* trust anything: not the New York Stock Exchange, not sports scores, not combat casualty counts, not news reports. They can all be falsified or manipulated, at least to some extent, for some period of time, if there is sufficient motivation.
If I had to summarize my point it would be: "Autonomous Agents are gullible." Is this a concern that has been acknowledged and is there any work towards addressing it?
Thanks very much,
Chris
Comments
That said, in the world we currently live in people do actually make contracts like these despite the problems you discuss, so some level of risk is clearly acceptable to the marketplace. The question is then how you mitigate the risk. This is a complex topic, but a few observations:
1) Pulling data feeds from a source that isn't intended to be used for the purposes of settling contracts, then trusting it unconditionally, is dangerous and potentially irresponsible, because such sources aren't designed or secured with these purposes in mind, and you'd incentivize people to interfere with them. This is why Reality Keys doesn't do this.
2) As with the Dewey vs Truman case, there are trade-offs between accuracy and speed. Users need to be free to choose what the right trade-offs for them are. I suspect people betting on sports would rather get paid quickly at the cost of getting the wrong result from time to time. But if you're buying a house and you don't want the payment to go through until it's been confirmed that ownership has been transferred, the seller shouldn't mind waiting a couple of days until they get their money if that's what it takes to reduce the risk of getting the wrong result from 0.1% to 0.001%.
3) Error and coercion can be mitigated (but not eliminated) with diversity. Users should be able to combine different arbiters. (Users of Reality Keys can do this already, although we don't yet have any direct competitors so you'd have to use a general-purpose human escrow guy.) Arbiters can build in some jurisdictional and geographical redundancy by having people in semi-autonomous organizations in different jurisdictions, and distributing the keys required to sign any given fact. (Reality Keys isn't yet set up like this, but we're thinking about it.)
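To make the redundancy idea in 3) concrete, here's a minimal sketch (my own illustration, not how Reality Keys actually works) of settling only when a threshold of independent arbiters report the same outcome, so a single mistaken or coerced arbiter can't settle the contract on its own:

```python
# Hypothetical m-of-n arbiter combination: a contract settles only when
# at least `threshold` of the independent arbiters agree on one outcome.
from collections import Counter

def combined_verdict(verdicts, threshold):
    """verdicts: list of outcomes reported by independent arbiters.
    Returns the agreed outcome if at least `threshold` arbiters
    reported it, otherwise None (no settlement yet)."""
    outcome, count = Counter(verdicts).most_common(1)[0]
    return outcome if count >= threshold else None

# Three arbiters, two must agree:
print(combined_verdict(["Yes", "Yes", "No"], 2))    # -> Yes
print(combined_verdict(["Yes", "No", "Maybe"], 2))  # -> None
```

The trade-off from 2) shows up here too: requiring more arbiters to agree lowers the chance of a wrong result but slows settlement down.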
4) Bribery and corruption can be discouraged (but not eliminated) with transparency. This is one of the reasons why Reality Keys is set up to make the actual judgements we make 100% public, with a single, shared pair of keys per fact that anyone can check up on. If you pay us to cheat your counter-party, everybody will be able to see us cheating your counter-party, and we'll also have to cheat anybody else sharing the same key.
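The transparency property in 4) can be sketched in a few lines (again a simplified illustration, not the actual Reality Keys mechanism): the arbiter publishes exactly one judgement per fact in a public log, and every counter-party checks the answer it received against that single shared record, so cheating one party means publishing a visibly different answer to everyone:

```python
# Hypothetical public judgement log: one entry per fact, shared by all
# contracts that reference that fact. Any party can check the answer it
# was given against the public record, so selective cheating is visible.
import hashlib

public_log = {}  # fact_id -> sha256 digest of the published judgement

def publish_judgement(fact_id, judgement):
    """The arbiter commits publicly to a single judgement per fact."""
    public_log[fact_id] = hashlib.sha256(judgement.encode()).hexdigest()

def verify_judgement(fact_id, judgement_received):
    """Anyone can confirm the answer they were given matches the one
    everyone else can see in the public log."""
    digest = hashlib.sha256(judgement_received.encode()).hexdigest()
    return public_log.get(fact_id) == digest

publish_judgement("team-a-wins-final", "Yes")
print(verify_judgement("team-a-wins-final", "Yes"))  # -> True
print(verify_judgement("team-a-wins-final", "No"))   # -> False
```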
5) The PKI model with certificate authorities in browsers (or in this case nodes) is nasty, but it may well be better than the alternatives. In particular, the ability to revoke certificates and allow contracts to fall back on some other combination of authorities would be very valuable, especially if revocation was managed properly. However, doing this in a system that's distributed but nevertheless has to achieve consensus creates some interesting problems.
I say "after a period of time" because information tends to be more accurate once time has passed and all the data is available ("Dewey Defeats Truman" could never have happened 24 hours later). Not only that, but people can only corrupt data for so long; maintaining the corruption of a feed for weeks or months would be difficult and expensive.