View Issue Details

ID: 0003247
Project: GNUnet
Category: file-sharing service
View Status: public
Last Update: 2018-06-07 00:25
Reporter: bratao
Assigned To: Christian Grothoff
Priority: high
Severity: major
Reproducibility: always
Status: closed
Resolution: fixed
Platform: W32
OS: Windows
OS Version: 8.1
Product Version: SVN HEAD
Target Version: 0.11.0pre66
Fixed in Version: 0.11.0pre66
Summary: 0003247: 100% CPU usage in MESH and FS after search
Description: After starting my peer, connected to 5 nodes, everything is OK.

But if I start some searches for popular files (test, mp3), then gnunet-service-fs, gnunet-service-mesh, and gnunet-service-statistics stay at 100% CPU use and do not stop.

It does not matter whether it is a 0-anonymity or a 1-anonymity search, and it does not stop after closing the search.

This makes a dual-core CPU unusable after performing a search.

Profiling the call stack, I can see that in fs:
GNUNET_MESH_channel_destroy
GSF_put_done_
GSF_put_done_
GNUNET_SCHEDULER_shutdown
GNUNET_SCHEDULER_run

is the most-executed path and is responsible for the high CPU usage.

In mesh:
GNUNET_xfree_
GNUNET_NETWORK_fdset_destroy
GNUNET_SCHEDULER_shutdown
GNUNET_SCHEDULER_shutdown
GNUNET_SCHEDULER_run

is the hot spot.
Tags: No tags attached.

Relationships

related to 0003596 (closed, Bart Polot): CADET "kills" gnunet-service-dht with > 20000 new requests / minute (on startup)

Activities

bratao

2013-12-29 17:57

developer   ~0007950

Last edited: 2013-12-29 17:57


After looking further, it does not happen only during a search. After being connected for a while, it starts this behavior by itself.

Worth saying that I have more than 100 indexed files shared (0 anonymity), and gnunet-service-fs has high memory use (144 MB, going up to 250 MB and coming back down to around 150 MB after a while).

Christian Grothoff

2014-01-02 11:38

manager   ~0007957

I've not seen this yet; can you (1) see any values being incremented quickly in gnunet-statistics? (2) see anything odd in the log files? (3) see any process be restarted (often) by ARM? (4) attach GDB to the process using lots of CPU to try to figure out where it does so? (or use any other method to evaluate where the code loops; I'm not sure what tools work on W32...).

Bart Polot

2014-01-07 18:21

manager   ~0007968

This is for high CORE/FS usage; mesh seems to behave...

[bart@voyager ~]$ gnunet-statistics | sort
          ats # active performance clients: 2
          ats # addresses: 66
          ats # addresses destroyed: 737
          ats # address suggestions made: 135497
          ats # address updates received: 101669
          ats # ATS active addresses LAN total: 3
          ats # ATS active addresses total: 801
          ats # ATS active addresses UNSPECIFIED total: 42
          ats # ATS active addresses WAN total: 29
          ats # ATS addresses LAN total: 18
          ats # ATS addresses total: 0
          ats # ATS addresses UNSPECIFIED total: 13
          ats # ATS addresses WAN total: 35
          ats # performance updates given to clients: 172246
          ats # preference change requests processed: 6396
          ats # reservation requests processed: 94411
         core # avg payload per encrypted message: 21470
         core # bytes decrypted: 634923473
         core # bytes encrypted: 412460213
         core # bytes of messages of type 137 received: 13988080
         core # bytes of messages of type 138 received: 396727346
         core # bytes of messages of type 139 received: 4032
         core # bytes of messages of type 146 received: 408702
         core # bytes of messages of type 147 received: 248124
         core # bytes of messages of type 148 received: 906798
         core # bytes of messages of type 17 received: 2794
         core # bytes of messages of type 256 received: 528
         core # bytes of messages of type 257 received: 288
         core # bytes of messages of type 262 received: 7292
         core # bytes of messages of type 268 received: 288
         core # bytes of messages of type 280 received: 356
         core # bytes of messages of type 322 received: 660
         core # bytes of payload decrypted: 634424605
         core # DATA message dropped (out of order): 197
         core # encrypted bytes given to transport: 413668518
         core # EPHEMERAL_KEY messages received: 22
         core # ephemeral keys received: 22
         core # keepalive messages sent: 24
         core # key exchanges initiated: 20
         core # key exchanges stopped: 12
         core # messages discarded (expired prior to transmission): 1
         core # messages discarded (session disconnected): 1
         core # neighbour entries allocated: 8
         core # old ephemeral keys ignored: 20
         core # peers connected: 6
         core # PING messages received: 41
         core # PONG messages created: 41
         core # PONG messages decrypted: 20
         core # PONG messages received: 20
         core # send requests dropped (disconnected): 2
         core # session keys confirmed via PONG: 20
         core # sessions terminated by timeout: 4
         core # sessions terminated by transport disconnect: 12
         core # type map refreshes sent: 118
         core # type maps received: 114
         core # updates to my type map: 9
    datacache # bytes stored: 86858
    datacache # items stored: 364
    datacache # requests filtered by bloom filter: 1
    datacache # requests received: 1
 datastore-api # GET REPLICATION requests executed: 153
 datastore-api # GET ZERO ANONYMITY requests executed: 40
 datastore-api # PUT requests executed: 19394
 datastore-api # queue overflows: 6718
 datastore-api # status messages received: 19394
! datastore # bytes stored: 144623083
! datastore # bytes used in file-sharing datastore `sqlite': 145564798
    datastore # cache size: 625000000
    datastore # GET REPLICATION requests received: 153
    datastore # GET requests received: 131557
    datastore # GET ZERO ANONYMITY requests received: 40
    datastore # quota: 5000000000
    datastore # requests filtered by bloomfilter: 24661
    datastore # results found: 107049
    datastore # utilization by current datastore: 145564798
          dht # Bytes of bandwidth requested from core: 771195
          dht # Bytes transmitted to other peers: 1563624
          dht # DHT requests combined: 1221
          dht # Duplicate REPLIES matched against routing table: 8266
          dht # Entries added to routing table: 1289
          dht # FIND PEER messages initiated: 83
          dht # FIND PEER requests ignored due to Bloomfilter: 549
          dht # GET messages queued for transmission: 1196
          dht # GET requests from clients injected: 5
          dht # GET requests given to datacache: 1
          dht # GET requests received from clients: 1
          dht # GET requests routed: 1377
          dht # Good REPLIES matched against routing table: 2970
          dht # ITEMS stored in datacache: 3228
          dht # Network size estimates received: 5
          dht # P2P FIND PEER requests processed: 959
          dht # P2P GET bytes received: 448572
          dht # P2P GET requests ONLY routed: 330
          dht # P2P GET requests received: 1289
          dht # P2P messages dropped due to full queue: 703
          dht # P2P PUT bytes received: 856495
          dht # P2P PUT requests received: 1042
          dht # P2P RESULT bytes received: 693213
          dht # P2P RESULTS received: 2182
          dht # peers connected: 6
          dht # Peer selection failed: 947
          dht # Peers excluded from routing due to Bloomfilter: 6945
          dht # Preference updates given to core: 164
          dht # PUT messages queued for transmission: 684
          dht # PUT requests received from clients: 4
          dht # PUT requests routed: 1046
          dht # Queued messages discarded (peer disconnected): 135
          dht # REPLIES ignored for CLIENTS (no match): 3228
          dht # RESULT messages queued for transmission: 3033
           fs # average retransmission delay (ms): 19722
           fs # Datastore lookups concluded (error queueing): 6587
           fs # Datastore lookups concluded (found last result): 98090
           fs # Datastore lookups concluded (load too high): 103
           fs # Datastore lookups concluded (seen all): 740
           fs # Datastore `PUT' failures: 19466
           fs # delay heap timeout (ms): 599
           fs # GAP PUT messages received: 19525
           fs # GET requests received (from other peers): 143809
           fs # Loopback routes suppressed: 42548
           fs # migration stop messages received: 449
           fs # migration stop messages sent: 252
           fs # P2P query messages received and processed: 143809
           fs # P2P searches active: 332
           fs # P2P searches destroyed due to ultimate reply: 98608
           fs # peers connected: 6
           fs # Pending requests active: 65536
           fs # query messages sent to other peers: 153916
           fs # query plan entries: 7804
           fs # replies dropped: 86870
           fs # replies received and matched: 105802
           fs # replies received for other peers: 99653
           fs # replies transmitted to other peers: 12546
           fs # results found locally: 1017
           fs # running average P2P latency (ms): 4
          gns Current zone iteration interval (in ms): 14400000
          gns Number of public records in DHT: 0
          gns Number of zone iterations: 1
         mesh # channels: 0
         mesh # clients: 6
         mesh # connections: 0
         mesh # data retransmitted: 1
         mesh # duplicate PONG messages: 13
         mesh # messages received: 47
         mesh # peers: 5
          nse # estimated network diameter: 2
          nse # flood messages not generated (no proof yet): 5
          nse # flood messages received: 40
          nse # flood messages transmitted: 5
          nse # nodes in the network (estimate): 176
          nse # peers connected: 6
     peerinfo # peers known: 41
   revocation # peers connected: 1
     topology # connect requests issued to transport: 533
     topology # HELLO messages gossipped: 68
     topology # HELLO messages received: 67
     topology # peers connected: 5
    transport # acknowledgements sent for fragment: 31564
    transport # address records discarded: 870
    transport # address revalidations started: 1401
    transport # bandwidth quota violations by other peers: 1916
    transport # bytes currently in TCP buffers: 1002
    transport # bytes discarded by TCP (disconnect): 240817
    transport # bytes discarded by TCP (timeout): 440
    transport # bytes in message queue for other peers: 371191
    transport # bytes payload received: 688470506
    transport # bytes received via TCP: 587886880
    transport # bytes total received: 688793283
    transport # bytes transmitted via TCP: 194811035
    transport # CONNECT_ACK messages received: 10
    transport # CONNECT_ACK messages sent: 2513
    transport # CONNECT messages received: 2483
    transport # DISCONNECT messages received: 2
    transport # DISCONNECT messages sent: 103
    transport # duplicate fragments received: 27795
    transport # fragment acknowledgements received: 6896
    transport # fragmentation transmissions completed: 6794
    transport # fragments received: 98698
    transport # fragments retransmitted: 42825
    transport # fragments transmitted: 200756
    transport # fragments wrap arounds: 7575
    transport # IPv4 broadcast HELLO beacons received via udp: 935
    transport # keepalives sent: 1660
    transport # messages defragmented: 3890
    transport # messages dismissed due to timeout: 2
    transport # messages fragmented: 6802
    transport # messages transmitted to other peers: 19405
    transport # ms throttling suggested: 1890855
    transport # network-level TCP disconnect events: 595
    transport # other peer asked to disconnect from us: 1
    transport # peers connected: 8
    transport # PING messages received: 141
    transport # PING without HELLO messages sent: 1051
    transport # PONG messages received: 305
    transport # PONGs unicast via reliable transport: 141
    transport # refreshed my HELLO: 14
    transport # REQUEST CONNECT messages received: 2592
    transport # requests to create session with invalid address: 72
    transport # SESSION_ACK messages received: 16
    transport # SESSION_CONNECT messages sent: 29709
    transport # successful address checks during validation: 141
    transport # TCP sessions active: 10
    transport # TCP WELCOME messages received: 32
    transport # total size of fragmented messages: 216180001
    transport # transmission failures for messages to other peers: 4
    transport # transport-service disconnect requests for TCP: 62
    transport # UDP, ACK msgs, bytes overhead, sent, success: 1767584
    transport # UDP, ACK msgs, messages, sent, success: 31564
    transport # UDP, fragmented msgs, bytes overhead, sent, success: 61081920
    transport # UDP, fragmented msgs, bytes payload, attempt: 215907921
    transport # UDP, fragmented msgs, bytes payload, sent, failure: 1009204
    transport # UDP, fragmented msgs, bytes payload, sent, success: 215644153
    transport # UDP, fragmented msgs, fragments bytes, sent, failure: 121800
    transport # UDP, fragmented msgs, fragments bytes, sent, success: 277501721
    transport # UDP, fragmented msgs, fragments, sent, failure: 87
    transport # UDP, fragmented msgs, fragments, sent, success: 200669
    transport # UDP, fragmented msgs, messages, attempt: 6802
    transport # UDP, fragmented msgs, messages, pending: 5
    transport # UDP, fragmented msgs, messages, sent, failure: 3
    transport # UDP, fragmented msgs, messages, sent, success: 6794
    transport # UDP, sessions active: 16
    transport # UDP, total, bytes in buffers: 0
    transport # UDP, total, bytes overhead, sent: 63802384
    transport # UDP, total, bytes payload, sent: 218999311
    transport # UDP, total, bytes, received: 143798670
    transport # UDP, total, bytes, sent, failure: 0
    transport # UDP, total, bytes, sent, success: 283577343
    transport # UDP, total, bytes, sent, timeout: 590
    transport # UDP, total, messages, sent, failure: 18
    transport # UDP, total, messages, sent, success: 256055
    transport # UDP, total, messages, sent, timeout: 2
    transport # UDP, total, msgs in buffers: 0
    transport # UDP, unfragmented msgs, bytes overhead, sent, failure: 680
    transport # UDP, unfragmented msgs, bytes overhead, sent, success: 952880
    transport # UDP, unfragmented msgs, bytes payload, attempt: 3355860
    transport # UDP, unfragmented msgs, bytes payload, sent, failure: 702
    transport # UDP, unfragmented msgs, bytes payload, sent, success: 3355158
    transport # UDP, unfragmented msgs, bytes, sent, timeout: 510
    transport # UDP, unfragmented msgs, messages, attempt: 23839
    transport # UDP, unfragmented msgs, messages, sent, failure: 17
    transport # UDP, unfragmented msgs, messages, sent, success: 23822
    transport # UDP, unfragmented msgs, messages, sent, timeout: 2
    transport # unexpected SESSION_ACK messages: 5

Bart Polot

2014-01-07 18:26

manager   ~0007969

GDB sampling shows core often in the GSC_KX_encrypt_and_transmit or GSC_KX_handle_encrypted_message functions (deep down in libgcrypt kdf/hmac, usually).

Bart Polot

2014-01-07 18:29

manager   ~0007970

Bandwidth usage doesn't seem very high either:
[bart@voyager ~]$ date; gnunet-statistics -s core | grep given
Tue 7 Jan 18:26:40 CET 2014
         core # encrypted bytes given to transport: 428205656
[bart@voyager ~]$ date; gnunet-statistics -s core | grep given
Tue 7 Jan 18:26:46 CET 2014
         core # encrypted bytes given to transport: 428307657
[bart@voyager ~]$ date; gnunet-statistics -s core | grep given
Tue 7 Jan 18:26:54 CET 2014
         core # encrypted bytes given to transport: 428609866
[bart@voyager ~]$ date; gnunet-statistics -s core | grep given
Tue 7 Jan 18:27:01 CET 2014
         core # encrypted bytes given to transport: 428678820

Conky shows 30k up / 50k down for the whole system.

Bart Polot

2014-01-07 18:32

manager   ~0007971

Here are some backtraces:


(gdb) bt
#0 0x00007fe952d08bf2 in transform_blk (ctx=0x7fffb19b27d0, data=0xbb1758 "") at rmd160.c:331
#1 0x00007fe952d0978c in transform (c=0x7fffb19b27d0, data=0xbb1758 "", nblks=51) at rmd160.c:396
#2 0x00007fe952d097d5 in _gcry_rmd160_mixblock (hd=0x7fffb19b27d0, blockof64byte=0xbb1418) at rmd160.c:416
#3 0x00007fe952d10f66 in mix_pool (
    pool=0xbb11c0 "eb1\242\272ӈ\375\202\070\225`\nt\023}\352\255\207\310\032\204\361>gp\210\060\274\211\234\006\217\226ʾW\211\232\255\213\252\002\203\244=\275<\025X\023\340O\221\314u\216\353\253\341\317\061\225!>\240\336nn\234ܶuE\037f2@\356(\273I\210\245\365\v\034NɊ\036]\301\006?\343\375E\267q\003\307\062\243vl\021V\262\005z\n\273\336sq\034ѕ߹\017\f\325\062\247t-\377\060W\247\226\032\rA\267S\370 \304g,") at random-csprng.c:656
#4 0x00007fe952d11c66 in add_randomness (buffer=0x7fffb19b29a0, length=0, origin=RANDOM_ORIGIN_FASTPOLL)
    at random-csprng.c:1102
#5 0x00007fe952d11e82 in do_fast_random_poll () at random-csprng.c:1250
#6 0x00007fe952d11eb4 in _gcry_rngcsprng_fast_poll () at random-csprng.c:1275
#7 0x00007fe952d105bc in _gcry_fast_random_poll () at random.c:415
#8 0x00007fe952c5f3de in md_open (h=0x7fffb19b2a60, algo=10, secure=0, hmac=0) at md.c:348
#9 0x00007fe952c5fefa in md_final (a=0xbdbb30) at md.c:664
#10 0x00007fe952c601b3 in _gcry_md_ctl (hd=0xbdbb30, cmd=5, buffer=0x0, buflen=0) at md.c:728
#11 0x00007fe952c60373 in _gcry_md_read (hd=0xbdbb30, algo=10) at md.c:814
#12 0x00007fe952c46b3e in gcry_md_read (hd=0xbdbb30, algo=10) at visibility.c:1151
#13 0x00007fe952f8d3e9 in GNUNET_CRYPTO_hmac (key=0x7fffb19b2ef0, plaintext=0x7fffb19b325c, plaintext_len=760,
    hmac=0x7fffb19b2f60) at crypto_hash.c:573
#14 0x000000000040a1b8 in GSC_KX_handle_encrypted_message (kx=0xbfe3b0, msg=0x7fffb19b3214)
    at gnunet-service-core_kx.c:1352
#15 0x00000000004068b0 in handle_transport_receive (cls=0x0, peer=0x7fffb19b31f4, message=0x7fffb19b3214)
    at gnunet-service-core_neighbours.c:404
#16 0x00007fe9533d7955 in demultiplexer (cls=0xbb5590, msg=0x7fffb19b31f0) at transport_api.c:667
#17 0x00007fe952f6d13a in receive_task (cls=0xbb9090, tc=0x7fffb19b3620) at client.c:595
#18 0x00007fe952fa5e29 in run_ready (rs=0xbc9520, ws=0xbc95b0) at scheduler.c:595
#19 0x00007fe952fa66b7 in GNUNET_SCHEDULER_run (task=0x7fe952fb2901 <service_task>, task_cls=0x7fffb19b39c0)
    at scheduler.c:817
#20 0x00007fe952fb4670 in GNUNET_SERVICE_run (argc=3, argv=0x7fffb19b3c48, service_name=0x40daf9 "core",
    options=GNUNET_SERVICE_OPTION_NONE, task=0x4029d2 <run>, task_cls=0x0) at service.c:1490
#21 0x0000000000402cca in main (argc=3, argv=0x7fffb19b3c48) at gnunet-service-core.c:142

#0 0x00007fe952d06d43 in rol (x=441912565, n=5) at bithelp.h:31
#1 0x00007fe952d092a3 in transform_blk (ctx=0x7fffb19b2660, data=0xbb1d58 '\001' <repeats 104 times>, "\002")
    at rmd160.c:357
#2 0x00007fe952d0978c in transform (c=0x7fffb19b2660, data=0xbb1d58 '\001' <repeats 104 times>, "\002", nblks=27)
    at rmd160.c:396
#3 0x00007fe952d097d5 in _gcry_rmd160_mixblock (hd=0x7fffb19b2660, blockof64byte=0xbb1418) at rmd160.c:416
#4 0x00007fe952d10f66 in mix_pool (pool=0xbb11c0 "\366U\334\a\262\005\265\220\300f\367'") at random-csprng.c:656
#5 0x00007fe952d11c66 in add_randomness (buffer=0x7fffb19b27a0, length=104, origin=RANDOM_ORIGIN_FASTPOLL)
    at random-csprng.c:1102
#6 0x00007fe952d11e26 in do_fast_random_poll () at random-csprng.c:1232
#7 0x00007fe952d11eb4 in _gcry_rngcsprng_fast_poll () at random-csprng.c:1275
#8 0x00007fe952d105bc in _gcry_fast_random_poll () at random.c:415
#9 0x00007fe952c5f3de in md_open (h=0x7fffb19b28f0, algo=8, secure=0, hmac=2) at md.c:348
#10 0x00007fe952c5f464 in _gcry_md_open (h=0x7fffb19b29d8, algo=8, flags=2) at md.c:379
#11 0x00007fe952c469d3 in gcry_md_open (h=0x7fffb19b29d8, algo=8, flags=2) at visibility.c:1095
#12 0x00007fe952f8d611 in GNUNET_CRYPTO_hkdf_v (result=0x7fffb19b3000, out_len=16, xtr_algo=10, prf_algo=8,
    xts=0x7fffb19b2b18, xts_len=8, skm=0xbb94bc, skm_len=32, argp=0x7fffb19b2be8) at crypto_hkdf.c:161
#13 0x00007fe952f8dd01 in GNUNET_CRYPTO_kdf_v (result=0x7fffb19b3000, out_len=16, xts=0x7fffb19b2b18, xts_len=8,
    skm=0xbb94bc, skm_len=32, argp=0x7fffb19b2be8) at crypto_kdf.c:62
#14 0x00007fe952f85f28 in GNUNET_CRYPTO_symmetric_derive_iv_v (iv=0x7fffb19b3000, skey=0xbb94bc, salt=0x7fffb19b2cec,
    salt_len=4, argp=0x7fffb19b2be8) at crypto_symmetric.c:225
#15 0x00007fe952f85db5 in GNUNET_CRYPTO_symmetric_derive_iv (iv=0x7fffb19b3000, skey=0xbb94bc, salt=0x7fffb19b2cec,
    salt_len=4) at crypto_symmetric.c:199
#16 0x0000000000406ec3 in derive_iv (iv=0x7fffb19b3000, skey=0xbb94bc, seed=3479056035,
    identity=0x622b60 <GSC_my_identity>) at gnunet-service-core_kx.c:454
#17 0x000000000040a29c in GSC_KX_handle_encrypted_message (kx=0xbb9400, msg=0x7fffb19b32e4)
    at gnunet-service-core_kx.c:1362
#18 0x00000000004068b0 in handle_transport_receive (cls=0x0, peer=0x7fffb19b32c4, message=0x7fffb19b32e4)
    at gnunet-service-core_neighbours.c:404
#19 0x00007fe9533d7955 in demultiplexer (cls=0xbb5590, msg=0x7fffb19b32c0) at transport_api.c:667
#20 0x00007fe952f6d13a in receive_task (cls=0xbb9090, tc=0x7fffb19b3620) at client.c:595
#21 0x00007fe952fa5e29 in run_ready (rs=0xbc9520, ws=0xbc95b0) at scheduler.c:595
#22 0x00007fe952fa66b7 in GNUNET_SCHEDULER_run (task=0x7fe952fb2901 <service_task>, task_cls=0x7fffb19b39c0)
    at scheduler.c:817
#23 0x00007fe952fb4670 in GNUNET_SERVICE_run (argc=3, argv=0x7fffb19b3c48, service_name=0x40daf9 "core",
    options=GNUNET_SERVICE_OPTION_NONE, task=0x4029d2 <run>, task_cls=0x0) at service.c:1490
#24 0x0000000000402cca in main (argc=3, argv=0x7fffb19b3c48) at gnunet-service-core.c:142


(gdb) bt
#0 0x00007fe952d08a94 in transform_blk (ctx=0x7fffb19b2730, data=0xbb1718 "!") at rmd160.c:326
#1 0x00007fe952d0978c in transform (c=0x7fffb19b2730, data=0xbb1718 "!", nblks=52) at rmd160.c:396
#2 0x00007fe952d097d5 in _gcry_rmd160_mixblock (hd=0x7fffb19b2730, blockof64byte=0xbb1418) at rmd160.c:416
#3 0x00007fe952d10f66 in mix_pool (
    pool=0xbb11c0 "C)\241\022\272\250\267\366=\264\214\364O\252\004\\\350ͦ\353\315dD|\277\373#,\274@\322DL\330R\276c\223\024㹃\353\300\221\242E}J\216M\003Xeq5\270\233\060\252\234\203G\245\363\002\276 \341\315RY#QG]\376ꊦ\262\301CDfD)%\315

Bart Polot

2014-01-07 18:46

manager   ~0007973

BTW, after detaching GDB, core is down to 15% CPU and FS down to 0.4% (still, CORE seems really high for such a small amount of traffic).

Christian Grothoff

2014-01-07 20:04

manager   ~0007974

I've removed the repeated calls to gcry_md_open in SVN 31827. That should speed things up a bit (hard to say how much), but I doubt it addresses anything fundamental at all.
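The note does not say exactly how SVN 31827 restructured the code; as a rough sketch of the general pattern (hypothetical md_open/md_reset stand-ins, not the actual libgcrypt or GNUnet code), caching a handle helps because every gcry_md_open in the backtraces above triggers a slow entropy-pool mix via _gcry_fast_random_poll:

```c
/* Hypothetical stand-in for an expensive handle setup such as
 * libgcrypt's gcry_md_open(); counts how many expensive setups run. */
static int open_calls;

struct md_ctx { int opened; };

static void md_open(struct md_ctx *c)
{
  open_calls++;        /* expensive: entropy poll, allocation, ... */
  c->opened = 1;
}

static void md_reset(struct md_ctx *c)
{
  (void) c;            /* cheap: only clears internal hash state */
}

/* Naive pattern: a fresh handle per message => n expensive setups. */
static void hmac_n_messages_naive(int n)
{
  for (int i = 0; i < n; i++) {
    struct md_ctx c = { 0 };
    md_open(&c);
  }
}

/* Cached pattern: open once, reset per message => 1 expensive setup. */
static void hmac_n_messages_cached(int n)
{
  static struct md_ctx c = { 0 };
  if (!c.opened)
    md_open(&c);
  for (int i = 0; i < n; i++)
    md_reset(&c);
}
```

With 143809 P2P query messages processed (per the statistics above), turning per-message setup into a one-time cost removes a large constant factor, even if, as noted, it does not address anything fundamental.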

Christian Grothoff

2014-12-24 01:14

manager   ~0008733

I see some terrible behavior with respect to GNUNET_SCHEDULER_cancel that must be fixed.

Christian Grothoff

2014-12-24 01:14

manager   ~0008734

Essentially, the pending_timeout list becomes very long and is very active, because tasks are added and removed for tons of active datastore operations. First order of the day: make GNUNET_SCHEDULER_cancel() an O(1) operation instead of O(n).

Christian Grothoff

2014-12-24 02:23

manager   ~0008735

SVN 34779 changes GNUNET_SCHEDULER_cancel operations to perform in O(1), offering the potential of huge performance benefits if there are lots of tasks scheduled with GNUNET_SCHEDULER_add_delayed(), which may be the problem that caused high CPU usage by FS.
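The O(1)-cancel idea can be sketched in plain C (a simplified illustration with hypothetical Task/add_task/cancel_task names, not the actual GNUnet scheduler code): each task handle carries its own prev/next links in a doubly-linked list, so cancellation is a direct unlink instead of a scan of the pending_timeout list.

```c
#include <stdlib.h>

/* A pending task; the handle itself stores its list links, so no
 * traversal is ever needed to remove it. */
struct Task
{
  struct Task *prev;
  struct Task *next;
  int id;
};

static struct Task *head; /* front of the pending-task list */

static struct Task *add_task(int id)
{
  struct Task *t = calloc(1, sizeof *t);
  t->id = id;
  t->next = head;          /* push onto the front of the list */
  if (head)
    head->prev = t;
  head = t;
  return t;                /* handle returned to the caller */
}

/* O(1): unlink via the handle's own prev/next pointers. */
static void cancel_task(struct Task *t)
{
  if (t->prev)
    t->prev->next = t->next;
  else
    head = t->next;
  if (t->next)
    t->next->prev = t->prev;
  free(t);
}
```

Before the change, each cancel had to walk the pending list to locate the task, so with tons of active datastore operations the cost per cancel grew with the list length; with per-task links it stays constant.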

Christian Grothoff

2015-01-17 16:04

manager   ~0008802

Considering this 'resolved' for now, as we identified issues in CADET/MESH, the scheduler, and FS, and those have been addressed. If the issue resurfaces, a new bug should be filed.

Issue History

Date Modified Username Field Change
2013-12-29 17:42 bratao New Issue
2013-12-29 17:57 bratao Note Added: 0007950
2013-12-29 17:57 bratao Note Edited: 0007950 View Revisions
2014-01-02 11:38 Christian Grothoff Note Added: 0007957
2014-01-02 11:38 Christian Grothoff Assigned To => Christian Grothoff
2014-01-02 11:38 Christian Grothoff Status new => feedback
2014-01-07 18:21 Bart Polot Note Added: 0007968
2014-01-07 18:26 Bart Polot Note Added: 0007969
2014-01-07 18:29 Bart Polot Note Added: 0007970
2014-01-07 18:32 Bart Polot Note Added: 0007971
2014-01-07 18:46 Bart Polot Note Added: 0007973
2014-01-07 20:04 Christian Grothoff Note Added: 0007974
2014-04-11 15:21 Christian Grothoff Target Version => 0.11.0pre66
2014-04-11 15:27 Christian Grothoff Priority normal => high
2014-11-07 18:45 Christian Grothoff Target Version 0.11.0pre66 =>
2014-11-07 18:55 Christian Grothoff Assigned To Christian Grothoff =>
2014-12-24 01:13 Christian Grothoff Assigned To => Christian Grothoff
2014-12-24 01:13 Christian Grothoff Status feedback => assigned
2014-12-24 01:14 Christian Grothoff Note Added: 0008733
2014-12-24 01:14 Christian Grothoff Note Added: 0008734
2014-12-24 01:14 Christian Grothoff Target Version => 0.11.0pre66
2014-12-24 02:23 Christian Grothoff Note Added: 0008735
2014-12-24 02:53 Christian Grothoff Relationship added related to 0003596
2015-01-17 16:04 Christian Grothoff Note Added: 0008802
2015-01-17 16:04 Christian Grothoff Status assigned => resolved
2015-01-17 16:04 Christian Grothoff Fixed in Version => 0.11.0pre66
2015-01-17 16:04 Christian Grothoff Resolution open => fixed
2018-06-07 00:25 Christian Grothoff Status resolved => closed