View Issue Details

ID: 0003568
Project: GNUnet
Category: file-sharing service
View Status: public
Last Update: 2018-06-07 00:25
Reporter: cy1
Assigned To: Christian Grothoff
Priority: normal
Severity: crash
Reproducibility: always
Status: closed
Resolution: fixed
Platform: Linux
OS: Arch
Product Version: SVN HEAD
Target Version: 0.11.0pre66
Fixed in Version: 0.11.0pre66
Summary: 0003568: clean_request called twice, uses freed memory then tries to double free
Description: In gnunet-service-fs_pr.c, clean_request is called twice on a cancelled request. This causes a segfault: clean_request frees the "pr" structure at the end, so on the second call pr points to freed memory. It looks like process_local_reply and client_request_destroy are both cancelling the request when only one should. The local-reply path does not clean up enough, so when the client disconnects via a cancel, the client-disconnect handler ends up finalizing state that process_local_reply already finalized.
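The failure mode above can be sketched in miniature: two independent code paths each believe they own the final cleanup, so whichever cancels second operates on freed memory. The following is a minimal, hypothetical model, not GNUnet's actual API (the names PendingRequest, cancel_request, and the slot argument are illustrative; the real fix landed in SVN 34496). It shows the usual guard for this class of bug: unregister the request before freeing it, so a second cancel through the same registration point becomes a no-op.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for struct GSF_PendingRequest; the real
 * structure in gnunet-service-fs_pr.c carries far more state. */
struct PendingRequest {
  int dummy;
};

static int free_count; /* counts how often cleanup actually runs */

/* Stand-in for clean_request: releases the request's memory.
 * Running this twice on the same pointer is the reported crash. */
static void clean_request(struct PendingRequest *pr) {
  free_count++;
  free(pr);
}

/* Guarded cancel: 'slot' is the one place the request is registered
 * (a stand-in for the request map / client request list). Clearing
 * the slot before freeing makes a second cancel through the same
 * slot a harmless no-op instead of a double free. */
static void cancel_request(struct PendingRequest **slot) {
  struct PendingRequest *pr = *slot;
  if (NULL == pr)
    return; /* already finalized by the other path */
  *slot = NULL; /* unregister first, then free */
  clean_request(pr);
}
```

With this shape it no longer matters whether the local-reply path or the client-disconnect handler cancels first; whichever runs second finds an empty slot and returns.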
Steps To Reproduce:
1) Watch gnunet-service-fs, e.g. under gdb.
2) Run gnunet-search, then cancel with ^C before any results arrive.
3) gnunet-service-fs segfaults.
Additional Information: Here are the stack traces from the two calls to clean_request:

#0 clean_request (cls=<optimized out>, key=0x649bf0, value=0x649bf0) at gnunet-service-fs_pr.c:649
#1 0x000000000041069d in GSF_pending_request_cancel_ (pr=0x0, full_cleanup=0) at gnunet-service-fs_pr.c:700
#2 0x000000000040fa6c in process_local_reply (cls=0x649bf0, key=<optimized out>, size=0, data=0x0, type=GNUNET_BLOCK_TYPE_ANY, priority=0,
    anonymity=0, expiration=..., uid=0) at gnunet-service-fs_pr.c:1517
#3 0x00007ffff79a0668 in process_result_message (cls=0x632f80, msg=0x649bf0) at datastore_api.c:1199
#4 0x00007ffff62ec037 in receive_task (cls=0x61f7f0, tc=<optimized out>) at client.c:595
#5 0x00007ffff631d6cb in run_ready (ws=<optimized out>, rs=<optimized out>) at scheduler.c:595
#6 GNUNET_SCHEDULER_run (task=0x3, task_cls=0x64a9d0) at scheduler.c:817
#7 0x00007ffff63272eb in GNUNET_SERVICE_run (argc=2, argv=0x7fffffffe680, service_name=0x416010 "fs", options=GNUNET_SERVICE_OPTION_NONE,
    task=0x404e50 <run>, task_cls=0x61d8b0) at service.c:1498
#8 0x00000000004046d9 in main (argc=<optimized out>, argv=<optimized out>) at gnunet-service-fs.c:737
(gdb) c
Continuing.
Dec 07 02:33:12-948714 fs-6906 DEBUG Asking datastore for content for replication (queue size: 4)

Breakpoint 1, clean_request (cls=0x0, key=0x649bf0, value=0x649bf0) at gnunet-service-fs_pr.c:591
591 {
(gdb) bt
#0 clean_request (cls=0x0, key=0x649bf0, value=0x649bf0) at gnunet-service-fs_pr.c:591
#1 0x000000000041069d in GSF_pending_request_cancel_ (pr=0x0, full_cleanup=6593520, full_cleanup@entry=1) at gnunet-service-fs_pr.c:700
#2 0x000000000040ae75 in client_request_destroy (cls=0x64aff0, tc=<optimized out>) at gnunet-service-fs_lc.c:200
#3 0x000000000040bec7 in GSF_client_disconnect_handler_ (cls=<optimized out>, client=<optimized out>) at gnunet-service-fs_lc.c:492
#4 0x00007ffff631fe0a in GNUNET_SERVER_client_disconnect (client=0x64a610) at server.c:1509
#5 0x00007ffff62f6d68 in receive_ready (cls=0x64a540, tc=<optimized out>) at connection.c:1095
#6 0x00007ffff631d6cb in run_ready (ws=<optimized out>, rs=<optimized out>) at scheduler.c:595
#7 GNUNET_SCHEDULER_run (task=0x3, task_cls=0x649a00) at scheduler.c:817
#8 0x00007ffff63272eb in GNUNET_SERVICE_run (argc=2, argv=0x7fffffffe680, service_name=0x416010 "fs", options=GNUNET_SERVICE_OPTION_NONE,
    task=0x404e50 <run>, task_cls=0x61d8b0) at service.c:1498
#9 0x00000000004046d9 in main (argc=<optimized out>, argv=<optimized out>) at gnunet-service-fs.c:737
Tags: No tags attached.

Activities

Christian Grothoff (manager), 2014-12-07 22:23, note 0008654

Hmm. From what I see, the crash should only happen if you called 'gnunet-search' with the '-n' option. In that case, I think I can explain & fix it. Did you pass '-n'? I've committed a fix for the crash I can reproduce (using -n) in SVN 34496.

cy1 (reporter), 2014-12-07 23:58, note 0008655

Oh, yes, I was passing -n to just test some local files. You probably got it right on the nose!

cy1 (reporter), 2014-12-08 00:09, note 0008656

The fix worked for me. Thanks so much!

Issue History

Date Modified Username Field Change
2014-12-07 11:40 cy1 New Issue
2014-12-07 21:49 Christian Grothoff Assigned To => Christian Grothoff
2014-12-07 21:49 Christian Grothoff Status new => assigned
2014-12-07 22:23 Christian Grothoff Note Added: 0008654
2014-12-07 22:23 Christian Grothoff Status assigned => feedback
2014-12-07 22:23 Christian Grothoff Target Version => 0.11.0pre66
2014-12-07 23:58 cy1 Note Added: 0008655
2014-12-07 23:58 cy1 Status feedback => assigned
2014-12-08 00:09 cy1 Note Added: 0008656
2014-12-08 00:15 Christian Grothoff Status assigned => resolved
2014-12-08 00:15 Christian Grothoff Fixed in Version => 0.11.0pre66
2014-12-08 00:15 Christian Grothoff Resolution open => fixed
2018-06-07 00:25 Christian Grothoff Status resolved => closed