I'm interested to hear why Ethereum chose a stack machine for contract execution. I think stack machines make sense when you're building an abstraction that will execute close to physical hardware, but that's not the case for Ethereum. And while a stack machine certainly makes it easier for a C-like language compiler to target, I think there are some unintended consequences to this architecture that might seriously handicap Ethereum.
First of all, there is no way for a miner to produce (or be rewarded for) a partial proof of work when executing a contract. This means that contract execution is an all-or-nothing risk for a miner. Keeping track of STEPCOUNT/STEPFEE strikes me as an inelegant (and possibly ineffective) tool for minimizing the wasted work done by miners (wasted work being any work they are not rewarded for). And I don't think it will be an effective safeguard against a DDoS attack on the network using a contract with an infinite loop. Because the contract's balance can be "double-spent" by every miner trying to execute that contract simultaneously, the fee the attacker pays is far smaller than the combined cost to the whole network of miners. One of these DDoS contracts could tie up a large part of the network for a comparatively small fee relative to the total work wasted by all miners.
In my opinion, something similar to a dataflow architecture would be better suited for contract execution. It could allow partial proof-of-work rewards when a miner executes only a portion of a contract. Here's a hypothetical scenario of how such a machine might work:
Taking the C-like language compiler, let's alter it so that instead of producing a flat sequence of instructions, it produces an unordered list of basic blocks and a table of named registers. A basic block would be an ordered set of instructions. The load instruction(s) in a block come first and each takes a value from a named register (clearing the register). The store instruction(s) come at the end of a block and write a value to a named register. In between the loads and stores are the rest of the instructions. They would be provided a local stack, indexable registers, or a temporal belt (http://ootbcomp.com/docs/belt/) for storing and feeding values between instructions in the block.
This would provide enough context for the contract executor (referred to here as the miner) to build up the dependency relationships between basic blocks and named registers. The miner would determine whether a block is executable by scanning the named register table for the registers the block loads. If every one of those registers has a value, the block is executable; if any one of them is empty, the block cannot execute.
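To make that executability check concrete, here's a tiny Python sketch (the function and variable names are mine, purely illustrative): a block is executable exactly when every register it loads currently holds a value.

```python
# Illustrative sketch: a block is executable exactly when every register
# it loads currently holds a value in the named register table.

def executable(block_loads, register_table):
    """block_loads: register names the block's load instructions read.
    register_table: dict of register name -> value (absent = empty)."""
    return all(name in register_table for name in block_loads)

registers = {"c": 7, "d": 3}
print(executable(["c", "d"], registers))        # True: this block can run
print(executable(["b", "branch-1"], registers)) # False: still waiting
```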
Control flow would be implemented not with jump instructions, but by conditionally setting the name of a register in a store instruction. For example, let's say we wanted to execute a bit of C code: "a = (c > d ? b + 1 : b * c)". This could be implemented with three basic blocks and six named registers (@a, @b, @c, @d, @branch-1, @branch-2), using, in this example, an internal stack for each block:
# basic block 1
load @c # pushes the value of the named register 'c' onto the stack
load @d # pushes the value of the named register 'd' onto the stack
gt # pops the two values off the stack; if @c > @d, pushes 1 onto the stack, otherwise 0
cpushc 'branch-1' # conditionally push the constant 'branch-1' on the stack if the top is 1 (c is > d)
npushc 'branch-2' # conditionally push the constant 'branch-2' on the stack if the top is 0 (c is not > d)
pushc 0 # push a constant value onto the stack, the value doesn't matter
swap # swap the value and the register name on the stack
pstore # pop the top two values off the stack, store the value (S[-2]) in the named register (S[-1])
# basic block 2
load @b # load the named register 'b'
load @branch-1 # load the named register 'branch-1'. This only happens if (c > d)
pop # throw away the value of @branch-1, it's not needed
pushc 1 # push the constant value '1' onto the stack
add # add 1 to @b
store @a # store the result of the addition to @a
# basic block 3
load @b # load the named register 'b'
load @branch-2 # load the named register 'branch-2'. This only happens if (c is not > d)
pop # throw away the value of @branch-2, it's not needed
load @c # load 'c' (block 1 would need to re-store @c, since its load cleared it)
mul # multiply @b by @c
store @a # store the result of the multiplication to @a
This is a pretty trivial example (and an internal stack for each block might not be the best implementation), but it hopefully shows how a conditional statement can be broken up into basic blocks. A for loop would similarly need multiple blocks: one for managing the iterations and one for executing the body of the loop. There would be a sort of flip-flop effect where each one would read and write to the other's named registers.
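For illustration, here's a toy Python scheduler running the conditional above (my own sketch, not a spec; note that block 1 re-stores @c after loading it, since loads clear registers and the multiply branch would otherwise have nothing to read):

```python
# Toy dataflow scheduler for the conditional example.
# Each block declares the registers it loads; when all are present it
# fires, consuming (clearing) them and writing its stores back.

def block1(v):
    # decide the branch; re-store @c so the multiply branch could use it
    branch = "branch-1" if v["c"] > v["d"] else "branch-2"
    return {branch: 0, "c": v["c"]}

def block2(v):
    return {"a": v["b"] + 1}        # the (c > d) arm: a = b + 1

def block3(v):
    return {"a": v["b"] * v["c"]}   # the else arm: a = b * c

blocks = [(["c", "d"], block1),
          (["b", "branch-1"], block2),
          (["b", "branch-2", "c"], block3)]

registers = {"c": 7, "d": 3, "b": 10}
progress = True
while progress:
    progress = False
    for loads, fn in blocks:
        if all(r in registers for r in loads):
            values = {r: registers.pop(r) for r in loads}  # loads clear
            registers.update(fn(values))
            progress = True
print(registers["a"])  # 11: c > d, so the b + 1 branch fired
```

Only one of the two branch blocks ever becomes executable, because only one of @branch-1/@branch-2 ever receives a value.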
This should also simplify the book-keeping for the state of a contract. The state of a contract would be contained entirely in the named register table, stored as a patricia tree. A blockchain could then be used to track the state changes of a contract's register table, updated after every basic block that is executed. This has several nice side effects, but before I get to those, there is still a problem with the initial load of a contract.
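As a rough sketch of that book-keeping, here's a plain hash chain standing in for the patricia tree (Python stdlib only; register values are illustrative), appending one state commitment per executed block:

```python
# Sketch of per-block state commitments: a plain hash chain stands in
# for the patricia tree described above.
import hashlib
import json

def state_hash(register_table, prev_hash):
    snapshot = json.dumps(register_table, sort_keys=True)
    return hashlib.sha256((prev_hash + snapshot).encode()).hexdigest()

chain = ["0" * 64]  # genesis entry for this contract's state chain
for table in ({"c": 7, "d": 3}, {"branch-1": 0, "b": 10}, {"a": 11}):
    chain.append(state_hash(table, chain[-1]))  # one entry per block run

print(len(chain))  # 4: genesis plus three executed blocks
```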
If the execution of a contract is driven entirely by the state of its registers, then there must be a way to set initial values in the registers to get things going. So a contract would need a property for specifying the initial values of named registers. But a drawback is that all input to a contract would have to be known at creation time. This means contracts would lose their autonomy, because they would no longer be waiting for something to happen on the network.
A possible solution would be to also alter transactions so that a transaction could set the value of a named register in a contract. This would allow contracts to partially execute, then halt until an incoming transaction arrives. It also provides a simple cue for miners that a contract is likely to be (at least partially) executable. These transactions will likely not be received in a consistent order, so a contract would not want to allow transactions to set (or overwrite) a named register that is updated internally. To enforce this, the contract should be created with metadata for the register table.
Along with the basic blocks and named register table, a contract would have metadata that tags each named register as internal, input, or output. Internal registers are only usable by the basic blocks and can be set/updated multiple times. An input register is only writable by a transaction, and the metadata could contain rules about which address(es) may set a value on that register. A transaction that attempts to set a value for a register it does not have access to is invalid, just like any other invalid transaction on the network.
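A hypothetical validity check for such a transaction might look like this (the metadata field names and addresses are my own invention, purely illustrative):

```python
# Hypothetical metadata check: a transaction may only write an input
# register that its sender is authorized for.

REGISTER_META = {
    "b": {"kind": "input", "allowed": {"0xabc"}},
    "a": {"kind": "internal"},
}

def tx_valid(register, sender):
    meta = REGISTER_META.get(register)
    if meta is None or meta["kind"] != "input":
        return False  # unknown or internal registers reject external writes
    return sender in meta["allowed"]

print(tx_valid("b", "0xabc"))  # True: authorized input write
print(tx_valid("a", "0xabc"))  # False: internal register
print(tx_valid("b", "0xdef"))  # False: sender not on the allow list
```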
An output register could be treated as the input to a predefined but uncalculated field in a transaction. E.g. when the contract is created, the metadata could contain a transaction that says "send the value in output register A to the address in output register B". That transaction would not be valid until all of its output registers have a value.
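Sketching that in Python (register names and template fields are illustrative): the predefined transaction becomes valid only once every output register it references holds a value.

```python
# A predefined transaction template that references output registers;
# it is ready to broadcast only once every referenced register is filled.

template = {"value_from": "A", "to_address_from": "B"}

def template_ready(template, registers):
    return all(reg in registers for reg in template.values())

print(template_ready(template, {"A": 42}))                # False: B empty
print(template_ready(template, {"A": 42, "B": "0xdef"}))  # True
```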
There are some other nice side effects of such a system.
First off, miners could receive partial compensation from the contract fee for advancing the contract's blockchain. They would not need to run a contract to the end of its life to be compensated, so no more all-or-nothing reward.
And because the entire state of the contract would be stored as it executes, detecting cycles (where the same values for all registers have already been observed) would be a simple check of whether the register table's root hash has already been recorded in the blockchain. Simple infinite loops could be detected cheaply.
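That cycle check could be as simple as this Python sketch (hashing a JSON snapshot of the register table here in place of a patricia root):

```python
# Cycle detection as described: if a state's hash has been seen before,
# the register table has returned to a prior state, i.e. a loop.
import hashlib
import json

def detect_cycle(states):
    seen = set()
    for table in states:
        snapshot = json.dumps(table, sort_keys=True).encode()
        h = hashlib.sha256(snapshot).hexdigest()
        if h in seen:
            return True  # same register state observed twice
        seen.add(h)
    return False

print(detect_cycle([{"i": 0}, {"i": 1}, {"i": 0}]))  # True: state repeats
print(detect_cycle([{"i": 0}, {"i": 1}, {"i": 2}]))  # False
```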
It might even be possible to implement the GHOST protocol for the contracts' blockchains. That way an uncle could be compensated for advancing a contract that the winning miner did not attempt to execute. This might even lead to the bulk of payouts on the network going to miners who execute contracts rather than those working on the Ethereum blockchain itself.
By breaking up the registers into internal, input, and output, we provide a sort of IPC mechanism for contracts. Let's say a contract needs to do three things: add two very large numbers, factor the sum into primes, and split the sum between two addresses based on the largest prime factor. Factoring the sum is a lot of work, so instead of performing it on the network, the contract wants a third party to factor the number. So the contract would receive the two numbers in a pair of input registers, add them together, then place the sum in an output register. The contract's metadata has a transaction associated with that output register that says "send this transaction to this address with the sum and the register name 'largest-prime'". That address happens to belong to a quantum computer that provides prime-factoring-as-a-service. It sees the transaction, factors the number, then sends back a transaction that sets the "largest-prime" register of the contract to the value it calculated. At this point the contract can continue executing the basic blocks that split the sum between the two accounts.
I'll wrap this up by saying that a dataflow architecture for contracts might address some of the flaws, vulnerabilities, and inefficiencies present in a stack-based architecture, and would also enable contracts to do some really interesting additional things.