View Issue Details
|ID|Project|Category|View Status|Date Submitted|Last Update|
|0003903|GNUnet|datastore service|public|2015-07-18 15:44|2019-02-28 11:17|
|Reporter|Christian Grothoff|Assigned To|amatus|
|Priority|normal|Severity|major|Reproducibility|have not tried|
|Platform|i7|OS|Debian GNU/Linux|OS Version|squeeze|
|Product Version|SVN HEAD|
|Target Version||Fixed in Version|0.11.0|
|Summary|0003903: datastore performs badly once at capacity|
|Description|Daniel Golle writes:|
Once the datastore size gets close to the quota limit, I start getting:
Jul 16 20:35:22-249294 fs-4121 WARNING Message `Datastore lookup already took 1 m!' repeated 109 times in the last 19 s
Jul 16 20:35:22-249294 fs-4121 WARNING Datastore lookup already took 2 m!
Once the datastore is close to quota, additional work happens to 'purge', i.e. remove, expired data. So if some index is not working properly, this is quite plausible.
It would be good to know which backend Daniel is using.
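To illustrate why a missing or broken index would cause exactly this symptom, here is a small self-contained sketch using Python's sqlite3 module. The table and column names (`gn_data`, `expire`) are illustrative, not GNUnet's actual sqlite schema; the point is only that the "find the next expired row" query degrades to a full table scan without an index on the expiration column:

```python
import sqlite3

# Hypothetical schema, loosely modeled on a datastore with an
# expiration timestamp per entry.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gn_data (hash BLOB, expire INTEGER, value BLOB)")
conn.executemany("INSERT INTO gn_data VALUES (?, ?, ?)",
                 [(bytes(32), i, b"x") for i in range(1000)])

# Without an index on 'expire', locating the next expired row
# requires scanning (and sorting) the whole table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT rowid FROM gn_data "
    "WHERE expire < 500 ORDER BY expire LIMIT 1").fetchall()
print(plan)  # plan shows a SCAN of gn_data

# With an index on 'expire', the same lookup becomes a cheap
# index seek, independent of table size.
conn.execute("CREATE INDEX idx_expire ON gn_data (expire)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT rowid FROM gn_data "
    "WHERE expire < 500 ORDER BY expire LIMIT 1").fetchall()
print(plan)  # plan shows a SEARCH using idx_expire
```

At quota, this purge query runs on every insert, so a full scan per insert compounds quickly and would explain the multi-minute lookup warnings.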
|Tags|No tags attached.|
sqlite, with a quite harsh quota of 4 MB, which is maybe just way too small...
Well, that's good; it should make this easier to reproduce. Besides, with 4 MB I'd hope that the inherent speed of the DB is not that shabby if we do things right; after all, that fits into buffers...
I've extended the "perf_datastore_api_sqlite" test to cover the 'above quota' situation. You can also modify test_datastore_api_data_sqlite.conf to set the quota to 4 MB (the default is 10 MB). In either case, the test does NOT reveal an unexpected drop in performance once the quota is reached and content needs to be dropped.
Three possible explanations:
1) the test doesn't reproduce a representative database (i.e. the data size/priority distribution is particularly unfavorable in the real world, compared to the random data generated by the test)
2) there is something about your setup that is different, and the test would show the issue on your system
3) we're still not testing the "correct" issue.
It would be helpful (with respect to excluding (2)) if you could run the testcase (with 4 or 10 MB quotas) and report the output.
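For reference when reproducing, the quota change mentioned above is a one-line edit in the test configuration. This is a sketch assuming GNUnet's usual ini-style configuration syntax; the section and option names are taken from the datastore service's standard options and should be checked against your tree:

```ini
# Hypothetical excerpt of test_datastore_api_data_sqlite.conf:
# lower the datastore quota from the 10 MB default to 4 MB to
# exercise the 'above quota' code path.
[datastore]
QUOTA = 4 MB
```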
I may have fixed this issue in 0d8e5fb7a.
That bug isn't directly related to the database being full, but any slowdown in the database makes it more likely for that bug to be triggered.
Marking as 'fixed', as we don't know what backend Daniel was using, the issue no longer seems to be reproducible, and amatus says he might have fixed it.
|Date Modified|Username|Field|Change|
|2015-07-18 15:44|Christian Grothoff|New Issue|
|2015-07-20 03:09|dangole|Note Added: 0009466|
|2015-07-22 10:11|Christian Grothoff|Note Added: 0009474|
|2015-07-22 10:11|Christian Grothoff|Assigned To|=> Christian Grothoff|
|2015-07-22 10:11|Christian Grothoff|Status|new => assigned|
|2015-07-22 10:11|Christian Grothoff|Target Version|=> 0.11.0pre66|
|2015-08-03 12:47|Christian Grothoff|Note Added: 0009520|
|2015-08-09 00:45|Christian Grothoff|Status|assigned => feedback|
|2015-09-09 19:40|Christian Grothoff|Target Version|0.11.0pre66 =>|
|2017-02-26 02:07|Christian Grothoff|Assigned To|Christian Grothoff =>|
|2017-03-19 22:39|amatus|Assigned To|=> amatus|
|2017-03-19 22:39|amatus|Status|feedback => assigned|
|2017-11-11 00:49|amatus|Note Added: 0012577|
|2019-02-20 12:32|Christian Grothoff|Status|assigned => resolved|
|2019-02-20 12:32|Christian Grothoff|Resolution|open => fixed|
|2019-02-20 12:32|Christian Grothoff|Fixed in Version|=> 0.11.0|
|2019-02-20 12:32|Christian Grothoff|Note Added: 0013899|
|2019-02-28 11:17|Christian Grothoff|Status|resolved => closed|