View Issue Details
ID | Project | Category | View Status | Date Submitted | Last Update |
---|---|---|---|---|---|
0002594 | GNUnet | file-sharing service | public | 2012-10-19 23:34 | 2012-11-05 18:33 |
Reporter | bratao | Assigned To | Christian Grothoff | ||
Priority | normal | Severity | major | Reproducibility | always |
Status | closed | Resolution | fixed | ||
Platform | W32 | OS | Windows | OS Version | 8 |
Product Version | 0.9.3 | ||||
Target Version | 0.9.4 | Fixed in Version | 0.9.4 | ||
Summary | 0002594: 100% usage on gnunet-service-fs with gnunet-fs-gtk | ||||
Description | The gnunet-service-fs process stays at 100% CPU usage for long periods while gnunet-fs-gtk is open. The gnunet-fs-gtk process also stays at 100% usage. If I close gnunet-fs-gtk, CPU usage returns to 1% immediately. A simple (and possibly wrong) profiling run shows that GNUNET_LOAD_update is called hundreds of thousands of times. It may be worth mentioning that my datastore size is set to 100MB and is fully utilized. | ||||
Tags | No tags attached. | ||||
Attached Files | 2min.txt (8,572 bytes)
transport # acknowledgements sent for fragment: 622 ats # active addresses: 31 hostlist # active connections: 29 ats # address requests received: 90 transport # address revalidations started: 244 ats # address suggestions made: 559 ats # address updates received: 1900 ats # addresses destroyed: 1 fs # artificial delays introduced (ms): 1027 fs # average retransmission delay (ms): 18 core # avg payload per encrypted message: 2569 ats # backoff reset requests received: 90 ats # bandwidth recalculations performed: 33 transport # bytes currently in TCP buffers: 0 core # bytes decrypted: 5006765 transport # bytes discarded by TCP (disconnect): 39708 hostlist # bytes downloaded from hostlist servers: 20448 core # bytes encrypted: 2221996 transport # bytes in message queue for other peers: 127124 dht # Bytes of bandwdith requested from core: 223523 core # bytes of payload decrypted: 4979909 transport # bytes payload received: 5125981 transport # bytes received via TCP: 3204652 datastore-api # bytes sent to datastore: 1776 datacache # bytes stored: 238196 ! datastore # bytes stored: 83062457 transport # bytes total received: 5233777 dht # Bytes transmitted to other peers: 698314 transport # bytes transmitted via TCP: 177672 ! 
datastore # bytes used in file-sharing datastore `sqlite': 85566758 datastore # cache size: 12500000 fs # client searches active: 337 fs # client searches received: 673 topology # connect requests issued to transport: 70 fs # Datastore `PUT' failures: 53 fs # Datastore lookups concluded (error queueing): 3911 fs # Datastore lookups concluded (found last result): 47 fs # Datastore lookups concluded (no results): 921 fs # Datastore lookups initiated: 4228 fs # delay heap timeout: 198 dht # DHT requests combined: 53 transport # DISCONNECT messages sent: 3 transport # duplicate fragments received: 403 dht # Duplicate REPLIES matched against routing table: 2089 core # encrypted bytes given to transport: 2218796 dht # Entries added to routing table: 91 nse # estimated network diameter: 8 dht # FIND PEER messages initiated: 2 nse # flood messages received: 49 nse # flood messages started: 6 nse # flood messages transmitted: 64 transport # fragment acknowledgements received: 439 transport # fragmentation transmissions completed: 262 transport # fragments received: 1801 transport # fragments retransmitted: 1165 transport # fragments transmitted: 2565 transport # fragments wrap arounds: 450 fs # GAP PUT messages received: 84 dht # GET messages queued for transmission: 83 datastore-api # GET REPLICATION requests executed: 27 datastore # GET REPLICATION requests received: 30 datastore-api # GET requests executed: 1731 fs # GET requests received (from other peers): 5675 datastore # GET requests received: 4313 dht # GET requests routed: 93 datastore-api # GET ZERO ANONYMITY requests executed: 3 datastore # GET ZERO ANONYMITY requests received: 3 dht # Good REPLIES matched against routing table: 848 topology # HELLO messages gossipped: 30 topology # HELLO messages received: 30 hostlist # hostlist downloads initiated: 1 transport # IPv4 broadcast HELLO beacons received via udp: 32 transport # IPv6 multicast HELLO beacons received via udp: 8 dht # ITEMS stored in datacache: 359 
transport # keepalives sent: 88 core # key exchanges initiated: 30 core # key exchanges stopped: 1 fs # Loopback routes suppressed: 2835 transport # messages defragmented: 208 transport # messages fragmented: 266 transport # messages transmitted to other peers: 997 fs # migration stop messages received: 12 fs # migration stop messages sent: 20 ! hostlist # milliseconds between hostlist downloads: 3600000 core # neighbour entries allocated: 29 dht # Network size estimates received: 6 transport # network-level TCP disconnect events: 78 nse # nodes in the network (estimate): 50 transport # other peer asked to disconnect from us: 1 dht # P2P FIND PEER requests processed: 62 dht # P2P GET requests ONLY routed: 29 dht # P2P GET requests received: 91 dht # P2P messages dropped due to full queue: 21 dht # P2P PUT requests received: 358 fs # P2P query messages received and processed: 4542 dht # P2P RESULTS received: 863 fs # P2P searches active: 3883 fs # P2P searches destroyed due to ultimate reply: 8 dht # Peer selection failed: 66 transport # peers connected: 29 nse # peers connected: 30 topology # peers connected: 29 dht # peers connected: 29 core # peers connected: 29 fs # peers connected: 28 dht # Peers excluded from routing due to Bloomfilter: 3326 peerinfo # peers known: 82 mesh # peers: 29 fs # Pending requests active: 5002 fs # Pending requests created: 4153 ats # performance updates given to clients: 184 core # PING messages received: 34 transport # PING messages received: 125 transport # PING without HELLO messages sent: 243 core # PONG messages created: 34 core # PONG messages decrypted: 34 core # PONG messages received: 47 transport # PONG messages received: 117 transport # PONGs unicast via reliable transport: 103 ats # preference change requests processed: 88 dht # Preference updates given to core: 17 dht # PUT messages queued for transmission: 351 datastore-api # PUT requests executed: 87 dht # PUT requests received from clients: 1 dht # PUT requests 
routed: 359 fs # query messages sent to other peers: 1535 fs # query plan entries: 42871 fs # query plans executed: 44059 datastore-api # queue entries created: 1854 datastore-api # queue overflows: 9759 datastore # quota: 100000000 transport # refreshed my HELLO: 1 dht # REPLIES ignored for CLIENTS (no match): 1222 fs # replies received and matched: 51 fs # replies received for local clients: 44 fs # replies received for other peers: 7 fs # replies transmitted to other peers: 1 transport # REQUEST CONNECT messages received: 172 fs # requested DBLOCK or IBLOCK not found: 903 fs # requests done for free (low load): 4542 fs # requests dropped due to missing reverse route: 1086 datastore-api # Requests dropped from datastore queue: 24 datastore # requests filtered by bloomfilter: 1431 ats # reservation requests processed: 2751 datastore # reserved: 0 dht # RESULT messages queued for transmission: 889 datastore # results found: 2912 datastore-api # Results received: 264 fs # running average P2P latency (ms): 286 core # session keys confirmed via PONG: 30 core # session keys received: 31 core # sessions terminated by transport disconnect: 1 core # SET_KEY and PING messages created: 30 core # SET_KEY messages decrypted: 31 datastore-api # status messages received: 83 transport # TCP sessions active: 93 transport # TCP WELCOME messages received: 95 transport # total size of fragmented messages: 1726763 core # type map refreshes sent: 30 core # type maps received: 30 transport # UDP, ACK msgs, bytes overhead, sent, success: 54736 transport # UDP, ACK msgs, messages, sent, success: 622 transport # UDP, fragmented msgs, bytes payload, attempt: 1707611 transport # UDP, fragmented msgs, fragments, sent, success: 2545 transport # UDP, fragmented msgs, messages, attempt: 266 transport # UDP, fragmented msgs, messages, pending: 4 transport # UDP, fragmented msgs, messages, sent, success: 262 transport # UDP, sessions active: 86 transport # UDP, total, bytes in buffers: 
18279380813176 transport # UDP, total, bytes overhead, sent: 1171039 transport # UDP, total, bytes payload, sent: 2045604 transport # UDP, total, bytes, received: 2707324 transport # UDP, total, bytes, sent, success: 3870699 transport # UDP, total, messages, sent, success: 4237 transport # UDP, total, msgs in buffers: 1 transport # UDP, unfragmented msgs, bytes payload, attempt: 465117 transport # UDP, unfragmented msgs, messages, attempt: 1070 transport # UDP, unfragmented msgs, messages, sent, success: 1070 transport # unexpected SESSION ACK messages: 16 core # updates to my type map: 5 hostlist # valid HELLOs downloaded from hostlist servers: 62 ! hostlist bytes in hostlist: 15864 ! hostlist expired addresses encountered: 11936 ! hostlist hostlist requests processed: 2 transport # UDP, fragmented msgs, bytes overhead, sent, success: 1039263 transport # UDP, fragmented msgs, bytes payload, sent, success: 1580487 transport # UDP, fragmented msgs, fragments bytes, sent, success: 3273806 transport # UDP, unfragmented msgs, bytes overhead, sent, success: 77040 transport # UDP, unfragmented msgs, bytes payload, sent, success: 465117 3min.txt (8,997 bytes)
transport # acknowledgements sent for fragment: 962 ats # active addresses: 31 hostlist # active connections: 29 transport # address records discarded: 113 ats # address requests received: 94 transport # address revalidations started: 255 ats # address suggestions made: 559 ats # address updates received: 2323 ats # addresses destroyed: 1 fs # artificial delays introduced (ms): 1027 fs # average retransmission delay (ms): 18 core # avg payload per encrypted message: 3393 ats # backoff reset requests received: 94 ats # bandwidth recalculations performed: 33 transport # bytes currently in TCP buffers: 0 core # bytes decrypted: 6815227 transport # bytes discarded by TCP (disconnect): 39708 hostlist # bytes downloaded from hostlist servers: 20448 core # bytes encrypted: 4313327 transport # bytes in message queue for other peers: 62857 dht # Bytes of bandwdith requested from core: 305469 core # bytes of payload decrypted: 6781971 transport # bytes payload received: 6963243 transport # bytes received via TCP: 4428784 datastore-api # bytes sent to datastore: 1776 datacache # bytes stored: 326324 ! datastore # bytes stored: 83144893 transport # bytes total received: 7086679 dht # Bytes transmitted to other peers: 983899 transport # bytes transmitted via TCP: 389432 ! 
datastore # bytes used in file-sharing datastore `sqlite': 85826614 datastore # cache size: 12500000 fs # client searches active: 337 fs # client searches received: 673 topology # connect requests issued to transport: 70 fs # Datastore `PUT' failures: 53 fs # Datastore lookups concluded (error queueing): 3911 fs # Datastore lookups concluded (found last result): 47 fs # Datastore lookups concluded (no results): 921 fs # Datastore lookups initiated: 4228 fs # delay heap timeout: 198 dht # DHT requests combined: 77 transport # DISCONNECT messages sent: 3 transport # duplicate fragments received: 688 dht # Duplicate REPLIES matched against routing table: 3058 core # encrypted bytes given to transport: 4386185 dht # Entries added to routing table: 121 nse # estimated network diameter: 8 dht # FIND PEER messages initiated: 3 dht # FIND PEER requests ignored due to Bloomfilter: 1 nse # flood messages received: 49 nse # flood messages started: 6 nse # flood messages transmitted: 64 transport # fragment acknowledgements received: 717 transport # fragmentation transmissions completed: 420 transport # fragments received: 2518 transport # fragments retransmitted: 1802 transport # fragments transmitted: 4575 transport # fragments wrap arounds: 720 fs # GAP PUT messages received: 84 dht # GET messages queued for transmission: 111 datastore-api # GET REPLICATION requests executed: 27 datastore # GET REPLICATION requests received: 40 datastore-api # GET requests executed: 1731 fs # GET requests received (from other peers): 5675 datastore # GET requests received: 5522 dht # GET requests routed: 124 datastore-api # GET ZERO ANONYMITY requests executed: 3 datastore # GET ZERO ANONYMITY requests received: 6 dht # Good REPLIES matched against routing table: 1238 topology # HELLO messages gossipped: 30 topology # HELLO messages received: 30 hostlist # hostlist downloads initiated: 1 transport # IPv4 broadcast HELLO beacons received via udp: 44 transport # IPv6 multicast HELLO beacons 
received via udp: 11 dht # ITEMS stored in datacache: 504 transport # keepalives sent: 92 core # key exchanges initiated: 30 core # key exchanges stopped: 1 fs # Loopback routes suppressed: 2835 transport # messages defragmented: 258 transport # messages dismissed due to timeout: 5 transport # messages fragmented: 425 transport # messages transmitted to other peers: 1394 fs # migration stop messages received: 12 fs # migration stop messages sent: 20 ! hostlist # milliseconds between hostlist downloads: 3600000 core # neighbour entries allocated: 29 dht # Network size estimates received: 6 transport # network-level TCP disconnect events: 132 nse # nodes in the network (estimate): 50 transport # other peer asked to disconnect from us: 1 dht # P2P FIND PEER requests processed: 80 dht # P2P GET requests ONLY routed: 41 dht # P2P GET requests received: 121 dht # P2P messages dropped due to full queue: 21 dht # P2P PUT requests received: 503 fs # P2P query messages received and processed: 4542 dht # P2P RESULTS received: 1198 fs # P2P searches active: 3883 fs # P2P searches destroyed due to ultimate reply: 8 dht # Peer selection failed: 85 transport # peers connected: 29 nse # peers connected: 30 topology # peers connected: 29 dht # peers connected: 29 core # peers connected: 29 fs # peers connected: 28 dht # Peers excluded from routing due to Bloomfilter: 4423 peerinfo # peers known: 82 mesh # peers: 29 fs # Pending requests active: 5002 fs # Pending requests created: 4153 ats # performance updates given to clients: 184 transport # PING messages received: 142 core # PING messages received: 34 transport # PING without HELLO messages sent: 254 core # PONG messages created: 34 core # PONG messages decrypted: 34 transport # PONG messages received: 124 core # PONG messages received: 47 transport # PONGs unicast via reliable transport: 118 ats # preference change requests processed: 122 dht # Preference updates given to core: 24 dht # PUT messages queued for transmission: 496 
datastore-api # PUT requests executed: 87 dht # PUT requests received from clients: 1 dht # PUT requests routed: 504 fs # query messages sent to other peers: 1535 fs # query plan entries: 42871 fs # query plans executed: 44059 datastore-api # queue entries created: 1854 datastore-api # queue overflows: 9759 datastore # quota: 100000000 transport # refreshed my HELLO: 1 dht # REPLIES ignored for CLIENTS (no match): 1702 fs # replies received and matched: 51 fs # replies received for local clients: 44 fs # replies received for other peers: 7 fs # replies transmitted to other peers: 1 transport # REQUEST CONNECT messages received: 200 fs # requested DBLOCK or IBLOCK not found: 903 fs # requests done for free (low load): 4542 fs # requests dropped due to missing reverse route: 1086 datastore-api # Requests dropped from datastore queue: 24 datastore # requests filtered by bloomfilter: 2572 ats # reservation requests processed: 5356 datastore # reserved: 0 dht # RESULT messages queued for transmission: 1296 datastore # results found: 2990 datastore-api # Results received: 264 fs # running average P2P latency (ms): 286 core # session keys confirmed via PONG: 30 core # session keys received: 31 core # sessions terminated by transport disconnect: 1 core # SET_KEY and PING messages created: 30 core # SET_KEY messages decrypted: 31 datastore-api # status messages received: 83 transport # TCP sessions active: 30 transport # TCP WELCOME messages received: 97 transport # total size of fragmented messages: 3574557 core # type map refreshes sent: 31 core # type maps received: 31 transport # UDP, ACK msgs, bytes overhead, sent, success: 84216 transport # UDP, ACK msgs, messages, sent, success: 957 transport # UDP, fragmented msgs, bytes payload, attempt: 3543957 transport # UDP, fragmented msgs, fragments, sent, success: 4542 transport # UDP, fragmented msgs, messages, attempt: 425 transport # UDP, fragmented msgs, messages, pending: 5 transport # UDP, fragmented msgs, messages, 
sent, success: 420 transport # UDP, sessions active: 46 transport # UDP, total, bytes in buffers: 29343216574272 transport # UDP, total, bytes overhead, sent: 2604114 transport # UDP, total, bytes payload, sent: 4073474 transport # UDP, total, bytes, received: 3790066 transport # UDP, total, bytes, sent, success: 6679052 transport # UDP, total, messages, sent, success: 6798 transport # UDP, total, msgs in buffers: 15 transport # UDP, unfragmented msgs, bytes payload, attempt: 594724 transport # UDP, unfragmented msgs, messages, attempt: 1310 transport # UDP, unfragmented msgs, messages, sent, failure: 5 transport # UDP, unfragmented msgs, messages, sent, success: 1299 transport # unexpected SESSION ACK messages: 22 core # updates to my type map: 5 datastore # utilization by current datastore: 85826614 hostlist # valid HELLOs downloaded from hostlist servers: 62 ! hostlist bytes in hostlist: 15864 ! hostlist expired addresses encountered: 11936 ! hostlist hostlist requests processed: 2 transport # UDP, fragmented msgs, bytes overhead, sent, success: 2426370 transport # UDP, fragmented msgs, bytes payload, sent, success: 3482628 transport # UDP, fragmented msgs, fragments bytes, sent, success: 5910462 transport # UDP, unfragmented msgs, bytes overhead, sent, failure: 360 transport # UDP, unfragmented msgs, bytes overhead, sent, success: 93528 transport # UDP, unfragmented msgs, bytes payload, sent, failure: 2350 transport # UDP, unfragmented msgs, bytes payload, sent, success: 590846 10min.txt (9,090 bytes)
transport # acknowledgements sent for fragment: 3180 ats # active addresses: 31 hostlist # active connections: 29 transport # address records discarded: 224 ats # address requests received: 122 transport # address revalidations started: 550 ats # address suggestions made: 559 ats # address updates received: 4608 ats # addresses destroyed: 1 fs # artificial delays introduced (ms): 1027 fs # average retransmission delay (ms): 18 core # avg payload per encrypted message: 3942 ats # backoff reset requests received: 122 ats # bandwidth recalculations performed: 33 transport # bytes currently in TCP buffers: 0 core # bytes decrypted: 17699778 transport # bytes discarded by TCP (disconnect): 80810 hostlist # bytes downloaded from hostlist servers: 20448 core # bytes encrypted: 13457633 transport # bytes in message queue for other peers: 72685 dht # Bytes of bandwdith requested from core: 808645 core # bytes of payload decrypted: 17637378 transport # bytes payload received: 17978626 transport # bytes received via TCP: 11539817 datastore-api # bytes sent to datastore: 1776 datacache # bytes stored: 827302 ! datastore # bytes stored: 83358567 transport # bytes total received: 18243389 dht # Bytes transmitted to other peers: 2698663 transport # bytes transmitted via TCP: 1104398 ! 
datastore # bytes used in file-sharing datastore `sqlite': 85826614 datastore # cache size: 12500000 fs # client searches active: 337 fs # client searches received: 673 topology # connect requests issued to transport: 70 fs # Datastore `PUT' failures: 53 fs # Datastore lookups concluded (error queueing): 3911 fs # Datastore lookups concluded (found last result): 47 fs # Datastore lookups concluded (no results): 3295 fs # Datastore lookups initiated: 4228 fs # delay heap timeout: 198 dht # DHT requests combined: 239 transport # DISCONNECT messages sent: 3 transport # duplicate fragments received: 2627 dht # Duplicate REPLIES matched against routing table: 13637 core # encrypted bytes given to transport: 13701961 dht # Entries added to routing table: 321 nse # estimated network diameter: 8 dht # FIND PEER messages initiated: 4 dht # FIND PEER requests ignored due to Bloomfilter: 13 nse # flood messages received: 77 nse # flood messages started: 6 nse # flood messages transmitted: 64 transport # fragment acknowledgements received: 2118 transport # fragmentation transmissions completed: 1272 transport # fragments received: 7346 transport # fragments retransmitted: 5304 transport # fragments transmitted: 14300 transport # fragments wrap arounds: 2101 fs # GAP PUT messages received: 84 dht # GET messages queued for transmission: 320 datastore-api # GET REPLICATION requests executed: 27 datastore # GET REPLICATION requests received: 49 datastore-api # GET requests executed: 1731 fs # GET requests received (from other peers): 5675 datastore # GET requests received: 9726 dht # GET requests routed: 325 datastore-api # GET ZERO ANONYMITY requests executed: 3 datastore # GET ZERO ANONYMITY requests received: 9 dht # Good REPLIES matched against routing table: 3462 topology # HELLO messages gossipped: 88 topology # HELLO messages received: 87 hostlist # hostlist downloads initiated: 1 transport # IPv4 broadcast HELLO beacons received via udp: 104 transport # IPv6 multicast 
HELLO beacons received via udp: 26 dht # ITEMS stored in datacache: 1309 core # keepalive messages sent: 2 transport # keepalives sent: 234 core # key exchanges initiated: 30 core # key exchanges stopped: 1 fs # Loopback routes suppressed: 2835 transport # messages defragmented: 522 transport # messages dismissed due to timeout: 27 transport # messages fragmented: 1274 transport # messages transmitted to other peers: 3600 fs # migration stop messages received: 12 fs # migration stop messages sent: 20 ! hostlist # milliseconds between hostlist downloads: 3600000 core # neighbour entries allocated: 29 dht # Network size estimates received: 7 transport # network-level TCP disconnect events: 259 nse # nodes in the network (estimate): 176 transport # other peer asked to disconnect from us: 1 dht # P2P FIND PEER requests processed: 201 dht # P2P GET requests ONLY routed: 120 dht # P2P GET requests received: 321 dht # P2P messages dropped due to full queue: 21 dht # P2P PUT requests received: 1308 fs # P2P query messages received and processed: 4542 dht # P2P RESULTS received: 3616 fs # P2P searches active: 3883 fs # P2P searches destroyed due to ultimate reply: 8 dht # Peer selection failed: 158 transport # peers connected: 29 nse # peers connected: 30 topology # peers connected: 29 dht # peers connected: 29 core # peers connected: 29 fs # peers connected: 28 dht # Peers excluded from routing due to Bloomfilter: 10488 peerinfo # peers known: 93 mesh # peers: 29 fs # Pending requests active: 5002 fs # Pending requests created: 4153 ats # performance updates given to clients: 184 transport # PING messages received: 305 core # PING messages received: 34 transport # PING without HELLO messages sent: 546 core # PONG messages created: 34 core # PONG messages decrypted: 35 transport # PONG messages received: 244 core # PONG messages received: 48 transport # PONGs unicast via reliable transport: 240 ats # preference change requests processed: 320 dht # Preference updates given 
to core: 52 dht # PUT messages queued for transmission: 1325 datastore-api # PUT requests executed: 87 dht # PUT requests received from clients: 1 dht # PUT requests routed: 1309 fs # query messages sent to other peers: 1535 fs # query plan entries: 144026 fs # query plans executed: 44059 datastore-api # queue entries created: 1854 datastore-api # queue overflows: 9759 datastore # quota: 100000000 transport # refreshed my HELLO: 1 dht # REPLIES ignored for CLIENTS (no match): 4925 fs # replies received and matched: 3085 fs # replies received for local clients: 44 fs # replies received for other peers: 7 fs # replies transmitted to other peers: 1 transport # REQUEST CONNECT messages received: 416 fs # requested DBLOCK or IBLOCK not found: 5715 fs # requests done for free (low load): 4542 fs # requests dropped due to missing reverse route: 1086 datastore-api # Requests dropped from datastore queue: 24 datastore # requests filtered by bloomfilter: 6635 ats # reservation requests processed: 14757 datastore # reserved: 0 dht # RESULT messages queued for transmission: 3629 datastore # results found: 3140 datastore-api # Results received: 264 fs # running average P2P latency (ms): 286 core # session keys confirmed via PONG: 30 core # session keys received: 31 core # sessions terminated by transport disconnect: 1 core # SET_KEY and PING messages created: 30 core # SET_KEY messages decrypted: 31 datastore-api # status messages received: 83 transport # TCP sessions active: 63 transport # TCP WELCOME messages received: 184 transport # total size of fragmented messages: 11567392 core # type map refreshes sent: 88 core # type maps received: 87 transport # UDP, ACK msgs, bytes overhead, sent, success: 279752 transport # UDP, ACK msgs, messages, sent, success: 3179 transport # UDP, fragmented msgs, bytes payload, attempt: 11475664 transport # UDP, fragmented msgs, fragments, sent, success: 14241 transport # UDP, fragmented msgs, messages, attempt: 1274 transport # UDP, fragmented 
msgs, messages, pending: 2 transport # UDP, fragmented msgs, messages, sent, success: 1272 transport # UDP, sessions active: 82 transport # UDP, total, bytes in buffers: 88089779244061 transport # UDP, total, bytes overhead, sent: 7648380 transport # UDP, total, bytes payload, sent: 12813610 transport # UDP, total, bytes, received: 10817323 transport # UDP, total, bytes, sent, success: 20461990 transport # UDP, total, messages, sent, success: 20426 transport # UDP, total, msgs in buffers: 4 transport # UDP, unfragmented msgs, bytes payload, attempt: 1422529 transport # UDP, unfragmented msgs, messages, attempt: 3034 transport # UDP, unfragmented msgs, messages, sent, failure: 27 transport # UDP, unfragmented msgs, messages, sent, success: 3006 transport # unexpected SESSION ACK messages: 58 core # updates to my type map: 5 datastore # utilization by current datastore: 85826614 hostlist # valid HELLOs downloaded from hostlist servers: 62 ! hostlist bytes in hostlist: 15864 ! hostlist expired addresses encountered: 11936 ! hostlist hostlist requests processed: 2 transport # UDP, fragmented msgs, bytes overhead, sent, success: 7152196 transport # UDP, fragmented msgs, bytes payload, sent, success: 11403120 transport # UDP, fragmented msgs, fragments bytes, sent, success: 18555316 transport # UDP, unfragmented msgs, bytes overhead, sent, failure: 1944 transport # UDP, unfragmented msgs, bytes overhead, sent, success: 216432 transport # UDP, unfragmented msgs, bytes payload, sent, failure: 11898 transport # UDP, unfragmented msgs, bytes payload, sent, success: 1410490 | ||||
|
I don't get this behavior. Are you having any searches/downloads open, or is this just a plain gnunet-fs-gtk window that is open? Can you run 'gnunet-statistics -s fs' to see what the values are? Are any increasing rapidly? Can you attach gdb to gnunet-service-fs to see what it is doing? |
|
Well, I attached files with 2 min, 3 min and 10 min of data from gnunet-statistics. The "# UDP, total, bytes in buffers: 88089779244061" value seems really off! But the bug happens even with the UDP plugin disabled. This behavior only occurs when a search panel with a lot of results is open. With "GPL" (2 results) open, nothing happens; with "mp3" (930 results) open, the bug happens every time. Both gnunet-fs-gtk and gnunet-service-fs go to 100% usage, and if I close gnunet-fs-gtk everything returns to normal. So far I have had no luck debugging/profiling on W32. I will check whether this happens on Linux. |
|
I attached the DEBUG log from the fs GUI client. I am still trying to understand what is happening, but at one point GNUnet has 3k tasks in the queue, which looks suspicious to me. It is trying to connect to the fs service like crazy; we see hundreds of thousands of: Trying to connect using address `localhost:2094/::1:2094' Also: Failed to connect to `fs', automatically trying again |
|
Which version is this? SVN HEAD or 0.9.3? I still cannot reproduce anything, and I've not seen anything terrible in the stats yet. 3k tasks is of course bad, but I cannot tell which tasks these are :-(. |
|
Maybe related question: how much bandwidth did you configure your peer to use? |
|
Oh, good, finally got the problem after running a peer for hours with gnunet-fs-gtk running at the same time! |
|
Found only two things so far: (1) statistics_update was called a bit often on my system (for TOO many events), causing the API to consume so much CPU that it was pretty much always active (statistics API adapts to high load, so if you use it too much you get 100% CPU load but the system will still work). However, this issue should not relate to gnunet-fs-gtk running or not, so I'm not sure if that fixes "everything". (2) requests from clients continued to be tracked even if the client no longer cared; however, they were set to 'inactive' and should thus not have caused any more load on the system (just wasting memory). So again, I'm not sure if this is "THE" issue. |
|
Final update: before fixing what I discussed in note 6477, the CPU load does NOT drop on my system after I close gnunet-fs-gtk, it continues at 100% (as the note would suggest). So what I reproduced was not necessarily the same behavior originally reported, as it does not depend on gnunet-fs-gtk running. Anyway, patches for this are now in SVN HEAD. |
|
I confirm that it is not reproducible on Debian. But this makes GNUnet completely unusable on W32. I will try to debug and mess around to see if I find something. |
|
SVN 24495 fixes a possibly closely related issue --- I profiled gnunet-service-fs and found that the stats calls were STILL very expensive. The cause was a recently added call to the (platform-dependent!) 'mallinfo' routine, which on GNU/Linux was taking 60% of the CPU time! The call was only there to gather some statistics and is now optional (and OFF by default). So if 'mallinfo' is even more expensive on W32, this may still explain the issue. Try SVN 24495. |
|
It is now very-well behaved on my system. |
|
Nice work, grothoff, with all the optimizations. The 100% CPU bug is still biting me (every time I have some HUGE searches saved, like two 900-result "mp3" searches). I suspect it is the probe searches that are getting out of hand, because after 10 minutes the availability for all results updates at once and CPU usage returns to normal. But nevertheless, fs runs much more smoothly for me after this round of optimizations. |
|
Ok, I optimized the BAADF00D gnunet-free code today, resulting in another 25% reduction of CPU use by FS (and other GNUnet processes). |
|
Note on probes: the code _should_ limit the number of parallel probes it performs to not thrash the system. You can also try without probes. Just find the line with GNUNET_FS_FLAGS_PERSISTENCE | GNUNET_FS_FLAGS_DO_PROBES in gnunet-fs-gtk.c and remove the "DO_PROBES" flag part. |
|
The 2599 change makes this much more subtle. I think this can be marked as fixed, although the behaviour of loading thousands of connections to fs still seems fishy to me. |
|
At this point, I'm thinking that FS just happens to have that much work to do, so that it HAS to take 100% CPU on your system given your workload. |
Date Modified | Username | Field | Change |
---|---|---|---|
2012-10-19 23:34 | bratao | New Issue | |
2012-10-20 15:32 | Christian Grothoff | Assigned To | => Christian Grothoff |
2012-10-20 15:32 | Christian Grothoff | Status | new => assigned |
2012-10-20 15:34 | Christian Grothoff | Note Added: 0006458 | |
2012-10-20 15:34 | Christian Grothoff | Status | assigned => feedback |
2012-10-20 20:33 | bratao | File Added: 2min.txt | |
2012-10-20 20:34 | bratao | File Added: 3min.txt | |
2012-10-20 20:34 | bratao | File Added: 10min.txt | |
2012-10-20 20:40 | bratao | Note Added: 0006459 | |
2012-10-20 20:40 | bratao | Status | feedback => assigned |
2012-10-21 02:04 | bratao | File Added: fs-debug-log.zip | |
2012-10-21 02:05 | bratao | Note Added: 0006460 | |
2012-10-21 02:09 | bratao | Note Edited: 0006460 | |
2012-10-21 02:11 | bratao | Note Edited: 0006460 | |
2012-10-23 09:48 | Christian Grothoff | Note Added: 0006470 | |
2012-10-23 09:52 | Christian Grothoff | Note Added: 0006471 | |
2012-10-23 15:06 | Christian Grothoff | Note Added: 0006476 | |
2012-10-23 15:36 | Christian Grothoff | Note Added: 0006477 | |
2012-10-23 15:38 | Christian Grothoff | Note Added: 0006478 | |
2012-10-23 19:26 | bratao | Note Added: 0006480 | |
2012-10-23 20:44 | Christian Grothoff | Note Added: 0006481 | |
2012-10-23 20:45 | Christian Grothoff | Note Added: 0006482 | |
2012-10-24 07:47 | bratao | Note Added: 0006484 | |
2012-10-24 12:01 | Christian Grothoff | Note Added: 0006487 | |
2012-10-24 12:32 | Christian Grothoff | Note Added: 0006488 | |
2012-10-24 12:32 | Christian Grothoff | Status | assigned => feedback |
2012-10-25 02:48 | bratao | Note Added: 0006489 | |
2012-10-25 02:48 | bratao | Status | feedback => assigned |
2012-11-04 22:26 | Christian Grothoff | Note Added: 0006507 | |
2012-11-04 22:26 | Christian Grothoff | Status | assigned => resolved |
2012-11-04 22:26 | Christian Grothoff | Resolution | open => fixed |
2012-11-04 22:26 | Christian Grothoff | Product Version | => 0.9.3 |
2012-11-04 22:26 | Christian Grothoff | Fixed in Version | => 0.9.4 |
2012-11-04 22:26 | Christian Grothoff | Target Version | => 0.9.4 |
2012-11-05 18:33 | Christian Grothoff | Status | resolved => closed |