eth full node running errors

My first message here. Greetings to everybody.

I built cpp-ethereum from source (RelWithDebInfo) and started it in full-node mode on Windows 10.

--------------
eth
cpp-ethereum, a C++ Ethereum client
cpp-ethereum 1.3.0
By cpp-ethereum contributors, (c) 2013-2016.
--------------

The first day of running was OK; the datastore grew to around 25 GB. On the second day I found "Guru Meditation" errors, e.g.

-----------------
X 19:27:54.411|p2p|07afddaa…|Parity/v1.6.3-beta-ccc5732-20170314/x86_64-linux-gnu/rustc1.15.1 Ignoring malformed block: C:\a03_libs\cpp-ethereum\libethashseal\Ethash.cpp(97): Throw in function void __cdecl dev::eth::Ethash::verify(enum dev::eth::Strictness,const class dev::eth::BlockHeader &,const class dev::eth::BlockHeader &,class dev::vector_ref) constDynamic exception type: class boost::exception_detail::clone_implstd::exception::what: ExtraDataIncorrect[struct dev::eth::tag_block * __ptr64] = [ type: class std::vector >, size: 24, dump: f0 22 e6 97 a7 01 00 00 c2 26 e6 97 a7 01 00 00 ][struct dev::tag_comment * __ptr64] = Received block from the wrong fork (invalid extradata).[struct dev::tag_extraData * __ptr64] = [ type: class std::vector >, size: 24, dump: 10 e6 f5 8e a7 01 00 00 1f e6 f5 8e a7 01 00 00 ][struct dev::eth::tag_now * __ptr64] = 1491434874[struct dev::eth::tag_phase * __ptr64] = 1
....
X 19:27:47.844|p2p|07afddaa…|Parity/v1.6.3-beta-ccc5732-20170314/x86_64-linux-gnu/rustc1.15.1
Import Failure ExtraDataIncorrect Guru Meditation #01920000.94365e3a…
...
-----------------
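
If I read the error right, the failing check is a fork-pinning rule: in a short window after the DAO fork block (#1920000, which matches the block number in the Guru Meditation above) the client expects a specific extraData value, and a block arriving from the other side of the fork gets rejected. Below is a minimal Go sketch of such a rule; the function name, constants, and marker value are my own illustration, not the actual cpp-ethereum code.

-----------------
package main

import (
    "bytes"
    "errors"
    "fmt"
)

// Illustrative fork-pinning check -- NOT the actual cpp-ethereum code.
// Within a short window after the fork block a header's extraData must
// equal a fixed marker; otherwise the block is assumed to come from the
// other side of the fork and is rejected.
const (
    forkBlock  uint64 = 1920000 // block number seen in the Guru Meditation above
    forkWindow uint64 = 10      // assumed length of the pinned window
)

var (
    forkExtraData = []byte("dao-hard-fork") // assumed marker value
    errWrongFork  = errors.New("ExtraDataIncorrect: block from the wrong fork")
)

type header struct {
    Number    uint64
    ExtraData []byte
}

func verifyForkExtraData(h header) error {
    // Outside the pinned window this rule does not constrain extraData.
    if h.Number < forkBlock || h.Number >= forkBlock+forkWindow {
        return nil
    }
    if !bytes.Equal(h.ExtraData, forkExtraData) {
        return errWrongFork
    }
    return nil
}

func main() {
    good := header{Number: 1920000, ExtraData: []byte("dao-hard-fork")}
    bad := header{Number: 1920000, ExtraData: []byte("some other chain")}
    fmt.Println(verifyForkExtraData(good)) // <nil>
    fmt.Println(verifyForkExtraData(bad))  // ExtraDataIncorrect: block from the wrong fork
}
-----------------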

After the restart I see a lot of

-----------------------------
X 19:37:34.510|verifier0 BlockQueue missing our job: was there a GM?
-----------------------------

Overnight the datastore only grew by ~1 GB and is currently ~26.1 GB.
From the information on the net I gather the full datastore should be around 100 GB by now.

So, looks like I'm stuck.

Any suggestions?

Comments

  • mintar Member Posts: 5
    For those interested: I built geth from source and ran it on the same Windows machine for comparison.
    I started a full node with geth on 2017-04-07, and by 2017-04-10 it had loaded 41.2 GB of data, which amounts to about 2.3 mln blocks (out of ~3.5 mln), i.e. about 65%.
    I found that synchronisation becomes progressively slower as the database grows. At the start it took geth just 3 hours to load 1 mln blocks (~9.5 GB). Over the last night it only got from 40.9 GB to 41.2 GB, i.e. a dramatic slowdown. I wonder if it is because of Patricia tree insert inefficiency?
    Another thing to note is that sometimes the number of peer nodes drops to 2-3, at which point synchronization cannot proceed.
    When I'm done with geth, I'll test eth again. Maybe everything is just the way it's supposed to be.
  • mintar Member Posts: 5
    Overall experience with geth is way better so far. It builds in no time and runs without any "Guru Meditation" errors. No wonder it holds the lion's share of the user base.
  • mintar Member Posts: 5
    Patricia tree insert/lookup/delete is O(log(n))
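    To make that concrete, here is a toy hex-trie sketch in Go (purely illustrative, not go-ethereum's actual Merkle Patricia trie): each insert/lookup touches one node per nibble of the key, so the work per operation grows with the traversed depth, roughly log16(n) for n random keys, rather than with the total number of keys.

    package main

    import "fmt"

    // Toy 16-way (hex) trie -- illustrative only, not go-ethereum's trie.
    type node struct {
        children [16]*node
        value    []byte
    }

    // insert walks one node per nibble of the key, creating nodes as needed.
    func insert(root *node, key, value []byte) {
        n := root
        for _, b := range key {
            for _, nib := range []byte{b >> 4, b & 0x0f} {
                if n.children[nib] == nil {
                    n.children[nib] = &node{}
                }
                n = n.children[nib]
            }
        }
        n.value = value
    }

    // lookup walks the same path and stops as soon as it is missing.
    func lookup(root *node, key []byte) []byte {
        n := root
        for _, b := range key {
            for _, nib := range []byte{b >> 4, b & 0x0f} {
                if n.children[nib] == nil {
                    return nil
                }
                n = n.children[nib]
            }
        }
        return n.value
    }

    func main() {
        root := &node{}
        insert(root, []byte("acct1"), []byte("state of account 1"))
        fmt.Printf("%s\n", lookup(root, []byte("acct1"))) // state of account 1
        fmt.Println(lookup(root, []byte("acct2")) == nil) // true (not present)
    }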
  • mintar Member Posts: 5
    Here are my statistics on full-node synchronisation:

    block number range        sync speed (blocks/hour)
    --------------------------------------------------
    0..1045923                348,641
    1045923..1726874           54,129
    1726874..2322736           15,777
    2322736..2344564            1,175
    ...
    To me it looks like a fundamental problem with the blockchain: a pretty much exponential drop in sync speed as the datastore grows.
    There are still more than a million blocks to go, and the speed by now is a measly ~1,000 blocks per hour. Even without any further drop it will take about 1,000 hours to finish (see the quick calculation sketched below); if the drop continues, it may never happen. That's with just 3.5 mln blocks. What about 3.5 bln?
    My laptop has an i7 at 2.20 GHz with 4 cores, 12 GB RAM, and a broadband connection.
    I see mentions on the net of running a full node on a Raspberry Pi. I guess that's history by now. If the chain ever gets to 3.5 bln blocks, one will need a supercomputer for a full node.
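    A quick back-of-the-envelope script for the extrapolation above; the heights and rates are just the figures from my table, and the ~3.5 mln chain head is an approximation:

    package main

    import "fmt"

    func main() {
        // Measured sync rates from the table above: (height reached, blocks/hour).
        samples := []struct {
            upToBlock float64
            perHour   float64
        }{
            {1045923, 348641},
            {1726874, 54129},
            {2322736, 15777},
            {2344564, 1175},
        }

        const chainHead = 3500000.0 // rough current chain height (approximate)

        for _, s := range samples {
            fmt.Printf("synced to block %.0f at %.0f blocks/hour\n", s.upToBlock, s.perHour)
        }

        last := samples[len(samples)-1]
        remaining := chainHead - last.upToBlock

        // Optimistic estimate: assume the rate stops dropping at the last measured value.
        fmt.Printf("remaining blocks: ~%.0f\n", remaining)
        fmt.Printf("hours to finish at %.0f blocks/hour: ~%.0f\n", last.perHour, remaining/last.perHour)
    }

    That comes out at roughly 980 hours, i.e. about six weeks of continuous syncing, and only if the rate stops dropping.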