Starting a geth instance, I tend to download 256 blocks every second or so. Then, after a while, it drops to 256 blocks every 10 seconds, and then every minute. Killing the process and starting over typically gets me back to the original speed. I wish geth could monitor itself and reinitialize when things slow down too much, instead of me having to babysit it.
Is anyone experiencing similar issues?
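In case it helps anyone, here's a rough sketch of the kind of external watchdog I have in mind. It assumes geth is started with --rpc so that eth_blockNumber is reachable on localhost:8545 (adjust to your build's defaults), and the restart command (a systemd unit called "geth" here) is just a placeholder for however you run the node:

package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
	"os/exec"
	"strconv"
	"time"
)

const (
	rpcURL      = "http://localhost:8545" // assumes geth --rpc on the default port
	interval    = 5 * time.Minute         // how often to check sync progress
	minProgress = 10                      // restart if fewer new blocks than this arrived
)

// blockNumber asks geth for its current head block via JSON-RPC.
func blockNumber() (uint64, error) {
	req := []byte(`{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}`)
	resp, err := http.Post(rpcURL, "application/json", bytes.NewReader(req))
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	var out struct {
		Result string `json:"result"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return 0, err
	}
	return strconv.ParseUint(out.Result, 0, 64) // result is hex, e.g. "0x9b3f2"
}

func main() {
	last, _ := blockNumber()
	for range time.Tick(interval) {
		cur, err := blockNumber()
		if err != nil || cur < last+minProgress {
			log.Printf("sync stalled (last=%d, cur=%d, err=%v); restarting geth", last, cur, err)
			// Placeholder restart: swap in whatever supervises your geth process.
			if out, err := exec.Command("systemctl", "restart", "geth").CombinedOutput(); err != nil {
				log.Printf("restart failed: %v (%s)", err, out)
			}
		}
		last = cur
	}
}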
Comments
The dev team is on it and working on a fix.
Does killing the process and starting it over (presumably repeatedly) work for you? Do you get to a point where the blockchain is downloaded completely? I had almost resigned myself to the fact that I need to go through (days of) downloading at glacial speed.
Fatal. LastBlock not found. Report this issue
It stopped functioning last night at some point. Here is the tail of the log; the last few lines were generated by attempting to start geth:
I0610 02:37:33.169128 18312 worker.go:413] commit new work on block 559153 with 2 txs & 0 uncles
I0610 02:37:44.467386 18312 chain_manager.go:664] imported 8 block(s) (0 queued 0 ignored) in 17.145277081s. #559160 [88615630 / 602f2b7f]
I0610 02:37:53.584098 18312 worker.go:413] commit new work on block 559161 with 2 txs & 0 uncles
I0610 03:40:00.930895 18312 chain_manager.go:664] imported 1 block(s) (0 queued 0 ignored) in 2.143233986s. #559161 [e20fa3f2 / e20fa3f2]
I0610 03:40:00.931082 18312 worker.go:256]
I0618 12:34:36.946324 6790 chain_manager.go:801] Bad block #586729 (d0b0aa0fff075630ef417cb604f795edb99092fc9c312db71b4cced0cdc31795)
I0618 12:34:36.946449 6790 chain_manager.go:802] Block's parent unknown 736150561260143d4bd3e415e782431205955b18af04f47c468c4995f4ddda09
I0618 12:34:36.954800 6790 downloader.go:222] Synchronisation failed: block downloading canceled (requested)
I0618 12:35:41.763687 6790 chain_manager.go:518] Chain manager stopped
I0618 12:35:41.763827 6790 handler.go:136] Stopping ethereum protocol handler...
I0618 12:35:41.763883 6790 handler.go:146] Ethereum protocol handler stopped
I0618 12:35:41.763933 6790 transaction_pool.go:120] TX Pool stopped
I0618 12:35:41.765213 6790 backend.go:635] Automatic pregeneration of ethash DAG OFF (ethash dir: /home/gcolbourn/.ethash)
^C
I0618 12:35:41.952965 6790 database.go:78] flushed and closed db: /home/gcolbourn/.ethereum/blockchain
I0618 12:35:42.025026 6790 database.go:78] flushed and closed db: /home/gcolbourn/.ethereum/state
I0618 12:35:42.044314 6790 database.go:78] flushed and closed db: /home/gcolbourn/.ethereum/extra
$ geth console
I0618 12:35:45.677395 8427 backend.go:269] Protocol Version: 60, Network Id: 0
I0618 12:35:45.677548 8427 backend.go:279] Blockchain DB Version: 3
F0618 12:35:45.682816 8427 chain_manager.go:238] Fatal. LastBlock not found. Please run removedb and resync
If it's nearly impossible to sync the blockchain, then mining looks impractical for most people starting now.
My guess is that two things are going on:
1. There aren't enough healthy peers to download from during this prolonged stress testing. 256 blocks is equivalent to 40-50 minutes of chain time (at roughly 10-12 seconds per block, 256 × 12 s ≈ 51 minutes), which means that if it's taking longer than that to download them, the node is effectively going backwards...
2. The EVM is still processing contracts during this time, presumably re-validating every block's transactions (?) before it can download the next batch. On top of that, I think it's still trying to assemble pending transactions into blocks that will never be mined.
So from that standpoint, EVM processing power also looks to be a major limiting factor, slowing down block propagation.
For example, Geth is using 60% of my (aged) Core 2 Quad 2.66 GHz CPU and 1.13 GB of my 4 GB of RAM. Not mining, because I can't sync. The last block I downloaded was 634121, while the best block is currently 637426. That's 3,305 blocks behind, or about 10 hours. I'd be curious what kind of utilisation and times more powerful machines are seeing.
If I'm very lucky I get 256 blocks at 20-to-30-minute intervals, but mostly my peers crash and I never see another block until I restart Geth.
If anyone has an enode URL for a healthy peer, I'd like to add it statically and see if it improves matters.
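(For anyone in the same boat, this is how I understand static peers to work, so treat it as an assumption: put a static-nodes.json file in the data directory, e.g. ~/.ethereum/static-nodes.json, containing a JSON array of enode URLs. The URL below is a made-up placeholder, not a real node:)

[
  "enode://<node-public-key>@203.0.113.10:30303"
]

(I believe admin.addPeer("enode://...") from the geth console does the same thing for the current session only, without persisting across restarts.)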
I am not able to provide true technical insight here, but anecdotally, your analysis completely matches my experience. In particular, among my three mining rigs, the one with the best-performing CPU is also the one keeping up best with the blockchain, and vice versa (GPU power is irrelevant in this regard).
And yes, when a machine starts lagging behind, it often coincides with it importing/verifying transactions for past blocks.
All this gives me hope that these are bugs that should be relatively straightforward to address in the code, rather than a fundamental design flaw in Ethereum.
As for my geth version, I run apt-get upgrade from the PPA repository daily.
Anyway, I don't know what the devs have done, but the chain seems healthy again now. I even synced last night.
Anyway, I hope the devs now have a good metric for the blockchain's load rating.
I0620 16:01:15.016514 4963 chain_manager.go:690] imported 256 block(s) (0 queued 0 ignored) including 255 txs in 2m11.656166459s. #587494 [ebfb4b2c / 93709046]
I0620 16:04:00.428273 4963 chain_manager.go:690] imported 256 block(s) (0 queued 0 ignored) including 267 txs in 2m45.411565501s. #587750 [4f74a443 / 00c3c0f1]
I0620 16:05:29.832868 4963 chain_manager.go:801] Bad block #587888 (75ab3709793eb5db3250b881167c1559386f590e9c968200a812b72ea40dbe99)
I0620 16:05:29.832995 4963 chain_manager.go:802] Block's parent unknown 54e0fa7b17311295fa2dea1dbce8ba0ec7bb1bfa9c9d7487d2cc9519de1cf3cc
I0620 16:29:22.059160 4963 chain_manager.go:801] Bad block #587888 (75ab3709793eb5db3250b881167c1559386f590e9c968200a812b72ea40dbe99)
I0620 16:29:22.059282 4963 chain_manager.go:802] Block's parent unknown 54e0fa7b17311295fa2dea1dbce8ba0ec7bb1bfa9c9d7487d2cc9519de1cf3cc
I0620 16:37:34.400321 4963 chain_manager.go:690] imported 1 block(s) (0 queued 0 ignored) including 1 txs in 473.266039ms. #587887 [54e0fa7b / 54e0fa7b]
I0620 16:37:39.542756 4963 chain_manager.go:801] Bad block #587888 (75ab3709793eb5db3250b881167c1559386f590e9c968200a812b72ea40dbe99)
I0620 16:37:39.552280 4963 chain_manager.go:802] Block's parent unknown 54e0fa7b17311295fa2dea1dbce8ba0ec7bb1bfa9c9d7487d2cc9519de1cf3cc
I0620 16:37:39.564350 4963 downloader.go:224] Synchronisation failed: block downloading canceled (requested)
^C
I0620 16:54:32.929742 4963 chain_manager.go:518] Chain manager stopped
I0620 16:54:32.929863 4963 handler.go:141] Stopping ethereum protocol handler...
I0620 16:54:32.929918 4963 handler.go:151] Ethereum protocol handler stopped
I0620 16:54:32.929966 4963 transaction_pool.go:122] TX Pool stopped
I0620 16:54:32.931926 4963 backend.go:635] Automatic pregeneration of ethash DAG OFF (ethash dir: /home/gcolbourn/.ethash)
I0620 16:54:33.013186 4963 database.go:78] flushed and closed db: /home/gcolbourn/.ethereum/blockchain
I0620 16:54:33.248276 4963 database.go:78] flushed and closed db: /home/gcolbourn/.ethereum/state
I0620 16:54:33.300674 4963 database.go:78] flushed and closed db: /home/gcolbourn/.ethereum/extra
$ geth console
I0620 17:03:08.780938 5325 backend.go:269] Protocol Version: 60, Network Id: 0
I0620 17:03:08.784355 5325 backend.go:279] Blockchain DB Version: 3
F0620 17:03:08.789305 5325 chain_manager.go:238] Fatal. LastBlock not found. Please run removedb and resync
$
Is there any way to recover the blockchain without having to start again?
I am at around block 636K, so at the current rate, it will take me a bit under 9 hours to catch up with where the chain is now. Of course, the chain will have added another 2.7K blocks by then, so, add another hour. Unless I have to turn off the machine for some time...
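(A rough aside on the arithmetic, with made-up numbers since everyone's import rate differs: because the chain keeps growing while you import, the catch-up time is deficit / (import rate − block production rate) rather than just deficit / import rate, e.g.:)

package main

import "fmt"

func main() {
	// Assumed example figures, not measurements:
	deficit := 3000.0    // blocks behind the network head
	importRate := 650.0  // blocks imported per hour
	produceRate := 300.0 // new blocks the network adds per hour (~12 s/block)

	naive := deficit / importRate                  // ignores chain growth
	actual := deficit / (importRate - produceRate) // accounts for it
	fmt.Printf("naive: %.1f h, with chain growth: %.1f h\n", naive, actual)
}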
I0621 19:57:46.068902 1608 queue.go:142] Hash 67d7ed8cf16944928613f9a02e1802eeb0747e5c590535f2bc29203abcbf6a3b already scheduled at index 6113
I0621 19:57:46.068902 1608 queue.go:142] Hash 94203d14baf4515514b4111a5304ca6dcdd739bd74250b561b1a8fd919b0d992 already scheduled at index 6114
I0621 19:57:46.068902 1608 queue.go:142] Hash a0956211a0782658f59ea3520a10b71514f2943c4002626c6c181b54051b6aaa already scheduled at index 6115
I0621 19:57:46.068902 1608 queue.go:142] Hash 2c59f5893fc978b84bc5700c6072143cdedb352c28e98372db01e17094b30e44 already scheduled at index 6116
I compiled from source. Does it make sense to compile a different branch or revision?
... and now this:
I0630 12:29:36.039752 1315 backend.go:301] Protocol Version: 60, Network Id: 0
I0630 12:29:36.039825 1315 backend.go:311] Blockchain DB Version: 3
F0630 12:29:36.094242 1315 chain_manager.go:251] Fatal. LastBlock not found. Please run removedb and resync
Network hashrate has dropped to below 500 MH/s, so I'm wondering whether a cluster of really fast peers has developed that is racing ahead and starving the rest of the network...
fetcher.go:366] Peer 46a62e601b891398: discarded block #756518 [7f764ff1], distance 136
Obviously the distance is the number of blocks beyond my own best block, but I don't understand why it keeps discarding them. Even at verbosity 9 I don't see much activity apart from peer talk, so why isn't it ever catching up? What's stopping it from importing the necessary blocks in between? Instead it only dribbles in one or a few blocks on each import...
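If it helps, my reading of eth/fetcher (treat this as an assumption, not a confirmed explanation) is that the fetcher only queues announced blocks that sit close to the local head and leaves bulk catch-up to the downloader, so anything too far ahead gets the "discarded block ... distance N" line. A sketch of that kind of check, with illustrative constants rather than the real ones:

package main

import "fmt"

// Illustrative only: roughly the shape of the distance check that appears
// to produce the "discarded block #N [hash], distance D" log line.
const (
	maxUncleDist = 7  // assumed: how far behind the head a block may still be queued
	maxQueueDist = 32 // assumed: how far ahead of the head a block may still be queued
)

// shouldQueue decides whether an announced block is close enough to the
// local chain head for the fetcher to handle; otherwise it is discarded
// and left for the downloader to fetch during a normal sync.
func shouldQueue(blockNumber, chainHeight uint64) bool {
	dist := int64(blockNumber) - int64(chainHeight)
	return dist >= -maxUncleDist && dist <= maxQueueDist
}

func main() {
	// The case from the log above: announced block 756518, local head 136 blocks back.
	fmt.Println(shouldQueue(756518, 756518-136)) // false: too far ahead for the fetcher
}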