View Issue Details
ID | Project | Category | View Status | Date Submitted | Last Update |
---|---|---|---|---|---|
0002875 | GNUnet | datastore service | public | 2013-04-27 22:15 | 2013-12-24 20:55 |
Reporter | LRN | Assigned To | Christian Grothoff | | |
Priority | none | Severity | feature | Reproducibility | N/A |
Status | closed | Resolution | fixed | | |
Target Version | 0.10.0 | Fixed in Version | 0.10.0 | | |
Summary | 0002875: Add sneakernet support | | | | |
Description | The idea is to be able to dump the datastore into a file, move that file by some means (usually by physically moving the media on which the file is stored), then import the contents of that file into the datastore of another node. This differs from the existing approach, where you download files into your datastore from one network (or insert them), then move your node into another network that is not connected to the first one, and share the contents of the datastore there. The difference is that you don't have to move the node, only the data. | | | | |
Additional Information | Additional features: 1) Segmented dumps (dump a 1400MB datastore as two 700MB dump files). 1.1) Non-simultaneous segmented dumps (dump 700MB, store the list of hashes of the dumped blocks, later dump another 700MB, using the list to make sure the same blocks aren't dumped again, and adding the newly dumped blocks to that list; rinse and repeat). 2) Partial dumps (dump only particular files (SBlock trees; GNUnet directories are also traversed, so starting with a root directory will optionally dump not just the directory but its contents too) and the KBlocks that point at them (enabled by default, can be disabled)). 3) Mounted read-only data dumps. This is not "partial imports", because the importing node does not know which blocks to import and which to ignore. Instead it should mount the data dump as a special form of datastore, be able to search it normally for KBlocks, and download the corresponding SBlocks (and KBlocks) into its own normal datastore (special handling of the read-only datastore will ensure that the transfer actually happens: GNUnet shouldn't decide that blocks needn't be downloaded just because they are already available in a local read-only datastore). 3.1) Making it non-read-only (being able to write into it). Not very useful, IMO. | | | | |
Tags | No tags attached. | ||||
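Feature 1.1 in the additional information above (non-simultaneous segmented dumps) amounts to keeping a persistent hash list across dump runs. A minimal sketch, assuming blocks are opaque byte strings; the length-prefixed segment format, the JSON hash list, and all names here are invented for illustration (GNUnet itself hashes content with SHA-512, which is the only detail borrowed from the real system):

```python
import hashlib
import json
import os

def segmented_dump(blocks, seen_path, out_path, limit_bytes):
    """Dump blocks not written by an earlier segment, up to limit_bytes of
    payload, and update the on-disk hash list so later runs skip them."""
    seen = set()
    if os.path.exists(seen_path):
        with open(seen_path) as f:
            seen = set(json.load(f))
    dumped = 0
    with open(out_path, "wb") as out:
        for block in blocks:
            h = hashlib.sha512(block).hexdigest()
            if h in seen:
                continue  # already written to an earlier segment
            if dumped + len(block) > limit_bytes:
                break     # segment full; remaining blocks go in the next run
            out.write(len(block).to_bytes(4, "big") + block)
            dumped += len(block)
            seen.add(h)
    with open(seen_path, "w") as f:
        json.dump(sorted(seen), f)
```

Running this twice over the same datastore with a 700MB limit would yield two disjoint segments, which is exactly the "rinse and repeat" of feature 1.1.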
Note 0007066 | Christian Grothoff | 2013-04-28 00:48
I don't understand what you want to add here. GNUnet stores inserted files in a database. So all you need to do is back up that database (possibly using a multi-volume backup), and then you can restore it (possibly on one or on multiple peers, doesn't matter). In any case, this stuff is possible with MySQL and likely Postgres already (and with a bit of trickery you can likely do it with sqlite + zip), so I don't see this as a feature for GNUnet.
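For the sqlite case, the kind of database-level backup meant here can be sketched as follows. This is an illustration with invented names and schema, not GNUnet's actual tooling; it only assumes the datastore lives in a single sqlite file:

```python
import sqlite3

def dump_sql(db_path, out_path):
    """Write a plain-SQL backup (CREATE/INSERT statements) of a
    sqlite database to a text file."""
    con = sqlite3.connect(db_path)
    with open(out_path, "w") as f:
        for stmt in con.iterdump():  # yields one complete SQL statement per line
            f.write(stmt + "\n")
    con.close()
```

A multi-volume backup then falls out of splitting the resulting SQL file at statement boundaries (e.g. with split(1) on line boundaries, since iterdump emits one statement per line).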
Note 0007067 | LRN | 2013-04-28 01:10
"and then you can restore it" - meaning that node2 will have its datastore replaced by the datastore that node1 backed up, right? That is, node2 can have, at any given moment, either its own datastore or the datastore that node1's owner backed up and gave to node2's owner (you shut down node2, move the old datastore out of the way, place the datastore of node1 in its place, start node2). The only way for node2 to have access to both its own datastore and the datastore of node1 at the same time is to set up an extra dummy node that communicates only with node2, and give that node the datastore from node1. This is awkward (but, on the other hand, achievable with just some creative configuration, since running two nodes on the same machine, with one of them communicating only locally, is pretty much supported already, AFAIU).

If the datastore of node1 is not "restored" as in "replace everything", but rather in a "merge"-like mode, then that matches the basic description of this feature request (dump on node1 -> move the file -> merge into node2), possibly with the "Segmented dumps" feature too (if a multi-volume backup is used; that said, if the datastore can't be reliably cut into pieces, then restoring a multi-volume backup would require access to ALL volumes, so partial restores would be impossible; considering the latency sneakernet has, that would not be good).

That said, I foresee that such backups and restores might have difficulties related to file formats: both nodes must use compatible datastores; if datastore content is machine-dependent, both nodes must have the same architecture; and if datastore content depends on the DB version, both nodes must use compatible DB versions. Using a special "dump" format ensures that data can be moved around. It ensures, at the very least, that data can be appended to the datastore and will not just replace its contents, and that all the extra features listed above are possible and aren't blocked by the capabilities of the database actually used to store the data.
Note 0007070 | Christian Grothoff | 2013-04-28 11:04
'restore' can easily be done using 'merge into'. Most SQL backups can be done so that they're essentially a sequence of 'INSERT INTO table' statements, so by simply not first deleting the data that is there, restoring from backup means merging with the existing database. Converting between different databases might require a bit more work, but that's then more about asking for a migration tool (we used to have 'gnunet-convert', which then became 'gnunet-update', which could do such things in 0.7/0.8). The simplest way to do this is to load both plugins and use one to iterate over the existing data and then call insert on the other plugin. This should be relatively easy.
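The "restore as merge" idea can be sketched for the sqlite case. This assumes a plain-SQL backup with one statement per line (as sqlite's .dump or iterdump produce) and an invented blocks table with a primary key on the block hash; none of this is GNUnet's real schema. Replaying only the INSERT statements against a live database merges instead of replacing, and OR IGNORE silently skips blocks both sides already have:

```python
import sqlite3

def merge_restore(backup_sql_path, db_path):
    """Replay only the INSERT statements from a plain-SQL backup into an
    existing database, merging its rows rather than replacing them."""
    con = sqlite3.connect(db_path)
    with open(backup_sql_path) as f:
        sql = f.read()
    for stmt in sql.split(";\n"):
        s = stmt.strip()
        if s.upper().startswith("INSERT INTO"):
            # OR IGNORE skips rows whose primary key (the block hash)
            # already exists, i.e. duplicate blocks are not re-inserted
            con.execute(s.replace("INSERT INTO", "INSERT OR IGNORE INTO", 1))
    con.commit()
    con.close()
```

The cross-database case Christian mentions (iterate with one plugin, insert with the other) sidesteps the SQL-dialect issue entirely, since the blocks only exist as key/value pairs at the plugin API level.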
Note 0007141 | Christian Grothoff | 2013-05-30 21:10
Fixed in SVN 27342.
Date Modified | Username | Field | Change |
---|---|---|---|
2013-04-27 22:15 | LRN | New Issue | |
2013-04-28 00:48 | Christian Grothoff | Note Added: 0007066 | |
2013-04-28 00:49 | Christian Grothoff | Assigned To | => Christian Grothoff |
2013-04-28 00:49 | Christian Grothoff | Status | new => feedback |
2013-04-28 01:10 | LRN | Note Added: 0007067 | |
2013-04-28 01:10 | LRN | Status | feedback => assigned |
2013-04-28 11:04 | Christian Grothoff | Note Added: 0007070 | |
2013-04-28 11:04 | Christian Grothoff | Note Edited: 0007070 | |
2013-04-28 11:07 | Christian Grothoff | Note Edited: 0007070 | |
2013-05-30 21:10 | Christian Grothoff | Note Added: 0007141 | |
2013-05-30 21:10 | Christian Grothoff | Status | assigned => resolved |
2013-05-30 21:10 | Christian Grothoff | Fixed in Version | => 0.10.0 |
2013-05-30 21:10 | Christian Grothoff | Resolution | open => fixed |
2013-05-30 21:10 | Christian Grothoff | Target Version | => 0.10.0 |
2013-12-24 20:55 | Christian Grothoff | Status | resolved => closed |