- utilization can (easily, e.g. after a restart?) go out of control (grow very large),
  causing the content expiration job to go wild and delete everything!
* FS: [CG]
- T gnunet-publish cannot be aborted using CTRL-C
- on some systems, keyword search does not find locally published content
  (need a testcase for the command-line tools! - would also be good to cover the getopt API!)
  [could be related to the datastore issue above!]
- 2-peer download is still too slow (why?)
- advanced FS API parts
- T gnunet-download (directory-file download [easy])
- T fs_download (recursive download; bounded parallelism)
- T indexing: index-failure-cleanup [easy]
- + gnunet-service-fs (remove failing on-demand blocks, hot-path routing,
- load-based routing, nitpicks)
+ + pick correct filenames for recursive downloads (mkdir, .gnd)
+ + support recursive download even if filename is NULL and we hence
+ do not generate files on disk (use temp_filename)
+ + bound parallelism (# fs downloads)
+ + distinguish in performance tracking and event signalling between
+ downloads that are actually running and those that are merely in the queue
+ + gnunet-service-fs (hot-path routing, load-based routing, nitpicks)
- [gnunet-service-fs.c:208]: member 'LocalGetContext::results_bf_size' is never used
- [gnunet-service-fs.c:501]: member 'PendingRequest::used_pids_size' is never used
- [gnunet-service-fs.c:654]: member 'ConnectedPeer::last_client_replies' is never used
- test churn generation
- consider changing API for peer-group termination to
call continuation when done
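  For the "bound parallelism" and "queued vs. running" items above, the needed
  bookkeeping is basically a counter plus a waiting list in front of the 'fs'
  service connection.  A minimal stand-alone sketch (names and the limit are
  assumptions, not GNUnet code):

  #define SKETCH_MAX_PARALLEL_DOWNLOADS 8

  struct SketchDownload
  {
    struct SketchDownload *next;
    int is_active;              /* running vs. merely queued (for event signalling) */
  };

  static struct SketchDownload *sketch_queue_head;
  static unsigned int sketch_active_count;

  /* Call whenever a download is added or an active one finishes. */
  static void
  sketch_process_queue (void)
  {
    struct SketchDownload *d;

    for (d = sketch_queue_head; NULL != d; d = d->next)
    {
      if (sketch_active_count >= SKETCH_MAX_PARALLEL_DOWNLOADS)
        break;
      if (d->is_active)
        continue;
      d->is_active = 1;         /* here the real code would connect to 'fs' */
      sketch_active_count++;
    }
  }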
+* NAT/UPNP: [MW]
+ - finalize API design
+ - code clean up
+ - testing
+ - integration with transport service
+* MYSQL database backends: [CG]
+ - datacache
+ - datastore
0.9.0:
* new webpage:
enable developers to publish TGZs nicely
- port "contact" page
- add content type for "todo" items?
-* Plugins to implement: [CG]
- - MySQL database backends
- + datacache
- + datastore
- - Postgres database backends
- + datacache
- + datastore
-* VPN
+* POSTGRES database backends: [CG]
+ - datacache
+ - datastore
* Determine RC bugs and fix those!
0.9.x:
we have not 'used' (for their public keys) in a while; need a way
to track actual 'use')
- make sure we also trigger notifications whenever HELLOs expire
+* VPN
- [./transport/gnunet-service-transport.c:173]: (style) struct or union member 'TransportPlugin::rebuild' is never used (related to TCP not refreshing external addresses?)
* DATACACHE:
- add stats (# bytes available, # bytes used, # PUTs, # GETs, # GETs satisfied)
-
+* FS:
+ - support inline data in directories for recursive file downloads (fs_download)
[default daemon config directory (/etc)]),
[gn_daemon_config_dir=$withval])
AC_SUBST(GN_DAEMON_CONFIG_DIR, $gn_daemon_config_dir)
-gn_daemon_pidfile="/var/run/gnunetd/pid"
-AC_ARG_WITH(daemon-pidfile,
- AC_HELP_STRING(
- [--with-daemon-pidfile=FILE],
- [default daemon pidfile (/var/run/gnunetd/pid)]),
- [gn_daemon_pidfile=$withval])
-AC_SUBST(GN_DAEMON_PIDFILE, $gn_daemon_pidfile)
GN_INTLINCL=""
GN_LIBINTL="$LTLIBINTL"
\fB\-c \fIFILENAME\fR, \fB\-\-config=FILENAME\fR
use config file (defaults: ~/.gnunet/gnunet.conf)
.TP
-\fB\-d, \fB\-\-directory\fR
-download a GNUnet directory that has already been downloaded. Requires that a filename of an existing file is specified instead of the URI. The download will only download the top\-level files in the directory unless the `\-R' option is also specified.
-.TP
\fB\-D, \fB\-\-delete\-incomplete\fR
causes gnunet\-download to delete incomplete downloads when aborted with CTRL\-C. Note that complete files that are part of an incomplete recursive download will not be deleted even with this option. Without this option, terminating gnunet\-download with a signal will cause incomplete downloads to stay on disk. If gnunet\-download runs to (normal) completion finishing the download, this option has no effect.
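For example (URI abbreviated), a recursive download that removes incomplete files when interrupted with CTRL\-C might look like:
.nf
 gnunet\-download \-R \-D gnunet://fs/chk/...
.fi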
.TP
check_SCRIPTS = \
test_gnunet_arm.sh
-TESTS = $(check_PROGRAMS) $(check_SCRIPTS)
+TESTS = $(check_PROGRAMS)
+#$(check_SCRIPTS)
test_arm_api_SOURCES = \
test_arm_api.c
*/
struct GNUNET_FS_DownloadContext *parent;
+ /**
+ * Head of list of child downloads.
+ */
+ struct GNUNET_FS_DownloadContext *child_head;
+
+ /**
+ * Tail of list of child downloads.
+ */
+ struct GNUNET_FS_DownloadContext *child_tail;
+
+ /**
+ * Previous download belonging to the same parent.
+ */
+ struct GNUNET_FS_DownloadContext *prev;
+
+ /**
+ * Next download belonging to the same parent.
+ */
+ struct GNUNET_FS_DownloadContext *next;
+
/**
* Context kept for the client.
*/
*/
char *filename;
+ /**
+ * Where are we writing the data temporarily (name of the
+ * file, can be NULL!); used if we do not have a permanent
+ * name and we are a directory and we do a recursive download.
+ */
+ char *temp_filename;
+
/**
* Map of active requests (those waiting
* for a response). The key is the hash
/*
This file is part of GNUnet.
- (C) 2001, 2002, 2003, 2004, 2005, 2006, 2008, 2009 Christian Grothoff (and other contributing authors)
+ (C) 2001, 2002, 2003, 2004, 2005, 2006, 2008, 2009, 2010 Christian Grothoff (and other contributing authors)
GNUnet is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published
* TODO:
* - handle recursive downloads (need directory &
* fs-level download-parallelism management)
+ * - handle recursive downloads where directory file is
+ * NOT saved on disk (need temporary file instead then!)
 * - location URI support (can wait, easy)
* - check if blocks exist already (can wait, easy)
* - check if iblocks can be computed from existing blocks (can wait, hard)
#define DEBUG_DOWNLOAD GNUNET_NO
/**
- * We're storing the IBLOCKS after the
- * DBLOCKS on disk (so that we only have
- * to truncate the file once we're done).
+ * We're storing the IBLOCKS after the DBLOCKS on disk (so that we
+ * only have to truncate the file once we're done).
*
- * Given the offset of a block (with respect
- * to the DBLOCKS) and its depth, return the
- * offset where we would store this block
- * in the file.
-
+ * Given the offset of a block (with respect to the DBLOCKS) and its
+ * depth, return the offset where we would store this block in the
+ * file.
*
* @param fsize overall file size
* @param off offset of the block in the file
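To make the layout described above concrete, here is a stand-alone sketch of such
an offset computation (not the library's actual code; 32 KiB DBLOCKs, 128-byte
CHKs, 256 CHKs per IBLOCK and full-size IBLOCKs on disk are all assumptions):

#include <stdint.h>

#define SKETCH_DBLOCK_SIZE    32768ULL
#define SKETCH_CHK_SIZE       128ULL
#define SKETCH_CHK_PER_INODE  (SKETCH_DBLOCK_SIZE / SKETCH_CHK_SIZE)

/* Return the on-disk offset for the block at file offset 'off' and tree
   depth 'depth' (0 == DBLOCK), with IBLOCKs stored after the DBLOCKs. */
static uint64_t
sketch_compute_disk_offset (uint64_t fsize, uint64_t off, unsigned int depth)
{
  const uint64_t iblock_size = SKETCH_CHK_PER_INODE * SKETCH_CHK_SIZE;
  uint64_t region;   /* start of the IBLOCK area for the current depth */
  uint64_t blocks;   /* number of blocks at the level below that depth */
  uint64_t span;     /* file bytes covered by one IBLOCK at that depth */
  unsigned int d;

  if (depth == 0)
    return off;      /* DBLOCKs keep their natural file offset */
  /* depth-1 IBLOCKs start right after the DBLOCK area, rounded up */
  region = ((fsize + SKETCH_DBLOCK_SIZE - 1) / SKETCH_DBLOCK_SIZE) * SKETCH_DBLOCK_SIZE;
  blocks = (fsize + SKETCH_DBLOCK_SIZE - 1) / SKETCH_DBLOCK_SIZE;
  span = SKETCH_DBLOCK_SIZE * SKETCH_CHK_PER_INODE;
  for (d = 1; d < depth; d++)
  {
    blocks = (blocks + SKETCH_CHK_PER_INODE - 1) / SKETCH_CHK_PER_INODE;
    region += blocks * iblock_size;    /* skip over the depth-'d' IBLOCKs */
    span *= SKETCH_CHK_PER_INODE;
  }
  return region + (off / span) * iblock_size;
}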
GNUNET_CONSTANTS_SERVICE_TIMEOUT,
GNUNET_NO,
&transmit_download_request,
- dc);
-
+ dc);
}
};
+/**
+ * We found an entry in a directory. Check if the respective child
+ * already exists and if not create the respective child download.
+ *
+ * @param cls the parent download
+ * @param filename name of the file in the directory
+ * @param uri URI of the file (CHK or LOC)
+ * @param meta meta data of the file
+ * @param length number of bytes in data
+ * @param data contents of the file (or NULL if they were not inlined)
+ */
+static void
+trigger_recursive_download (void *cls,
+ const char *filename,
+ const struct GNUNET_FS_Uri *uri,
+ const struct GNUNET_CONTAINER_MetaData *meta,
+ size_t length,
+ const void *data)
+{
+ struct GNUNET_FS_DownloadContext *dc = cls;
+ struct GNUNET_FS_DownloadContext *cpos;
+
+ cpos = dc->child_head;
+ while (cpos != NULL)
+ {
+ if (0 == strcmp (cpos->filename,
+ filename))
+ {
+ GNUNET_break_op (GNUNET_FS_uri_test_equal (uri,
+ cpos->uri));
+ break;
+ }
+ cpos = cpos->next;
+ }
+ if (cpos != NULL)
+ return; /* already exists */
+ if (data != NULL)
+ {
+ /* determine on-disk filename, write data! */
+ GNUNET_break (0); // FIXME: not implemented
+ }
+ GNUNET_FS_download_start (dc->h,
+ uri,
+ meta,
+ filename, /* FIXME: prepend directory name! */
+ 0,
+ GNUNET_FS_uri_chk_get_file_size (uri),
+ dc->anonymity,
+ dc->options,
+ NULL,
+ dc);
+}
+
+
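A note on the "data != NULL" FIXME above: for inlined directory entries the
contents only need to be written to the target path instead of starting a
network download.  A minimal stand-alone sketch using plain stdio (the real
implementation would presumably go through the GNUNET_DISK API and prepend
the parent directory's name):

#include <stdio.h>

/* Write the inlined entry 'data' (length bytes) to 'target_path';
   returns 0 on success, -1 on error. */
static int
sketch_write_inline_entry (const char *target_path,
                           const void *data,
                           size_t length)
{
  FILE *f;
  int ret;

  f = fopen (target_path, "wb");
  if (NULL == f)
    return -1;
  ret = (fwrite (data, 1, length, f) == length) ? 0 : -1;
  if (0 != fclose (f))
    ret = -1;
  return ret;
}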
+/**
+ * We're done downloading a directory. Open the file and
+ * trigger all of the (remaining) child downloads.
+ *
+ * @param dc context of download that just completed
+ */
+static void
+full_recursive_download (struct GNUNET_FS_DownloadContext *dc)
+{
+ size_t size;
+ uint64_t size64;
+ void *data;
+ struct GNUNET_DISK_FileHandle *h;
+ struct GNUNET_DISK_MapHandle *m;
+
+ size64 = GNUNET_FS_uri_chk_get_file_size (dc->uri);
+ size = (size_t) size64;
+ if (size64 != (uint64_t) size)
+ {
+ GNUNET_log (GNUNET_ERROR_TYPE_ERROR,
+ _("Recursive downloads of directories larger than 4 GB are not supported on 32-bit systems\n"));
+ return;
+ }
+ if (dc->filename != NULL)
+ {
+ h = GNUNET_DISK_file_open (dc->filename,
+ GNUNET_DISK_OPEN_READ,
+ GNUNET_DISK_PERM_NONE);
+ }
+ else
+ {
+ /* FIXME: need to initialize (and use) temp_filename
+ in various places in order for this assertion to
+ not fail; right now, it will always fail! */
+ GNUNET_assert (dc->temp_filename != NULL);
+ h = GNUNET_DISK_file_open (dc->temp_filename,
+ GNUNET_DISK_OPEN_READ,
+ GNUNET_DISK_PERM_NONE);
+ }
+ if (h == NULL)
+ return; /* oops */
+ data = GNUNET_DISK_file_map (h, &m, GNUNET_DISK_MAP_TYPE_READ, size);
+ if (data == NULL)
+ {
+ GNUNET_log (GNUNET_ERROR_TYPE_ERROR,
+ _("Directory too large for system address space\n"));
+ }
+ else
+ {
+ GNUNET_FS_directory_list_contents (size,
+ data,
+ 0,
+ &trigger_recursive_download,
+ dc);
+ GNUNET_DISK_file_unmap (m);
+ }
+ GNUNET_DISK_file_close (h);
+ if (dc->filename == NULL)
+ {
+ if (0 != UNLINK (dc->temp_filename))
+ GNUNET_log_strerror_file (GNUNET_ERROR_TYPE_WARNING,
+ "unlink",
+ dc->temp_filename);
+ GNUNET_free (dc->temp_filename);
+ dc->temp_filename = NULL;
+ }
+}
+
+
/**
* Iterator over entries in the pending requests in the 'active' map for the
* reply that we just got.
app -= (sm->offset + prc->size) - (dc->offset + dc->length);
}
dc->completed += app;
+
+ if ( (0 != (dc->options & GNUNET_FS_DOWNLOAD_OPTION_RECURSIVE)) &&
+ (GNUNET_YES == GNUNET_FS_meta_data_test_for_directory (dc->meta)) )
+ {
+ GNUNET_FS_directory_list_contents (prc->size,
+ pt,
+ off,
+ &trigger_recursive_download,
+ dc);
+ }
+
}
pi.status = GNUNET_FS_STATUS_DOWNLOAD_PROGRESS;
"truncate",
dc->filename);
}
- /* signal completion */
- pi.status = GNUNET_FS_STATUS_DOWNLOAD_COMPLETED;
- make_download_status (&pi, dc);
- dc->client_info = dc->h->upcb (dc->h->upcb_cls,
- &pi);
+
+ if ( (0 != (dc->options & GNUNET_FS_DOWNLOAD_OPTION_RECURSIVE)) &&
+ (GNUNET_YES == GNUNET_FS_meta_data_test_for_directory (dc->meta)) )
+ full_recursive_download (dc);
+ if (dc->child_head == NULL)
+ {
+ /* signal completion */
+ pi.status = GNUNET_FS_STATUS_DOWNLOAD_COMPLETED;
+ make_download_status (&pi, dc);
+ dc->client_info = dc->h->upcb (dc->h->upcb_cls,
+ &pi);
+ }
GNUNET_assert (sm->depth == dc->treedepth);
}
// FIXME: make persistent
GNUNET_break (0);
return NULL;
}
- client = GNUNET_CLIENT_connect (h->sched,
- "fs",
- h->cfg);
- if (NULL == client)
- return NULL;
// FIXME: add support for "loc" URIs!
#if DEBUG_DOWNLOAD
GNUNET_log (GNUNET_ERROR_TYPE_DEBUG,
#endif
dc = GNUNET_malloc (sizeof(struct GNUNET_FS_DownloadContext));
dc->h = h;
- dc->client = client;
dc->parent = parent;
+ if (parent != NULL)
+ {
+ GNUNET_CONTAINER_DLL_insert (parent->child_head,
+ parent->child_tail,
+ dc);
+ }
dc->uri = GNUNET_FS_uri_dup (uri);
dc->meta = GNUNET_CONTAINER_meta_data_duplicate (meta);
dc->client_info = cctx;
dc->treedepth);
#endif
// FIXME: make persistent
+
+ // FIXME: bound parallelism here!
+ client = GNUNET_CLIENT_connect (h->sched,
+ "fs",
+ h->cfg);
+ dc->client = client;
schedule_block_download (dc,
&dc->uri->data.chk.chk,
0,
{
struct GNUNET_FS_ProgressInfo pi;
+ while (NULL != dc->child_head)
+ GNUNET_FS_download_stop (dc->child_head,
+ do_delete);
// FIXME: make unpersistent
+ if (dc->parent != NULL)
+ GNUNET_CONTAINER_DLL_remove (dc->parent->child_head,
+ dc->parent->child_tail,
+ dc);
+
pi.status = GNUNET_FS_STATUS_DOWNLOAD_STOPPED;
make_download_status (&pi, dc);
dc->client_info = dc->h->upcb (dc->h->upcb_cls,
* @author Krista Bennett
* @author James Blackwell
* @author Igor Wronsky
- *
- * TODO:
- * - download-directory option support (do_directory)
*/
#include "platform.h"
#include "gnunet_fs_service.h"
static int do_recursive;
-static int do_directory;
-
static char *filename;
info->value.download.filename,
s);
GNUNET_free (s);
- if (do_directory)
- {
- GNUNET_break (0); //FIXME: not implemented
- }
- else
- {
- if (info->value.download.dc == dc)
- GNUNET_SCHEDULER_shutdown (sched);
- }
+ if (info->value.download.dc == dc)
+ GNUNET_SCHEDULER_shutdown (sched);
break;
case GNUNET_FS_STATUS_DOWNLOAD_STOPPED:
if (info->value.download.dc == dc)
enum GNUNET_FS_DownloadOptions options;
sched = s;
- if (do_directory)
+ uri = GNUNET_FS_uri_parse (args[0],
+ &emsg);
+ if (NULL == uri)
{
- GNUNET_break (0); //FIXME: not implemented
+ fprintf (stderr,
+ _("Failed to parse URI: %s\n"),
+ emsg);
+ GNUNET_free (emsg);
+ ret = 1;
+ return;
}
- else
+ if (! GNUNET_FS_uri_test_chk (uri))
{
- uri = GNUNET_FS_uri_parse (args[0],
- &emsg);
- if (NULL == uri)
- {
- fprintf (stderr,
- _("Failed to parse URI: %s\n"),
- emsg);
- GNUNET_free (emsg);
- ret = 1;
- return;
- }
- if (! GNUNET_FS_uri_test_chk (uri))
- {
- fprintf (stderr,
- "Only CHK URIs supported right now.\n");
- ret = 1;
- GNUNET_FS_uri_destroy (uri);
- return;
- }
+ fprintf (stderr,
+ "Only CHK URIs supported right now.\n");
+ ret = 1;
+ GNUNET_FS_uri_destroy (uri);
+ return;
}
if (NULL == filename)
{
options = GNUNET_FS_DOWNLOAD_OPTION_NONE;
if (do_recursive)
options |= GNUNET_FS_DOWNLOAD_OPTION_RECURSIVE;
- if (do_directory)
+ dc = GNUNET_FS_download_start (ctx,
+ uri,
+ NULL,
+ filename,
+ 0,
+ GNUNET_FS_uri_chk_get_file_size (uri),
+ anonymity,
+ options,
+ NULL,
+ NULL);
+ GNUNET_FS_uri_destroy (uri);
+ if (dc == NULL)
{
- GNUNET_break (0); //FIXME: not implemented
- }
- else
- {
- dc = GNUNET_FS_download_start (ctx,
- uri,
- NULL,
- filename,
- 0,
- GNUNET_FS_uri_chk_get_file_size (uri),
- anonymity,
- options,
- NULL,
- NULL);
- GNUNET_FS_uri_destroy (uri);
- if (dc == NULL)
- {
- GNUNET_FS_stop (ctx);
- ctx = NULL;
- return;
- }
+ GNUNET_FS_stop (ctx);
+ ctx = NULL;
+ return;
}
GNUNET_SCHEDULER_add_delayed (sched,
GNUNET_TIME_UNIT_FOREVER_REL,
{'a', "anonymity", "LEVEL",
gettext_noop ("set the desired LEVEL of receiver-anonymity"),
1, &GNUNET_GETOPT_set_uint, &anonymity},
- {'d', "directory", NULL,
- gettext_noop
- ("download a GNUnet directory that has already been downloaded. Requires that a filename of an existing file is specified instead of the URI. The download will only download the top-level files in the directory unless the `-R' option is also specified."),
- 0, &GNUNET_GETOPT_set_one, &do_directory},
{'D', "delete-incomplete", NULL,
gettext_noop ("delete incomplete downloads (when aborted with CTRL-C)"),
0, &GNUNET_GETOPT_set_one, &delete_incomplete},
/*
This file is part of GNUnet.
- (C) 2001, 2002, 2004, 2005, 2006, 2007, 2009 Christian Grothoff (and other contributing authors)
+ (C) 2001, 2002, 2004, 2005, 2006, 2007, 2009, 2010 Christian Grothoff (and other contributing authors)
GNUnet is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published
static int do_disable_creation_time;
+static GNUNET_SCHEDULER_TaskIdentifier kill_task;
+
static void
do_stop_task (void *cls,
fprintf (stderr,
_("Error publishing: %s.\n"),
info->value.publish.specifics.error.message);
+ if (kill_task != GNUNET_SCHEDULER_NO_TASK)
+ {
+ GNUNET_SCHEDULER_cancel (sched,
+ kill_task);
+ kill_task = GNUNET_SCHEDULER_NO_TASK;
+ }
GNUNET_SCHEDULER_add_continuation (sched,
&do_stop_task,
NULL,
fprintf (stdout,
_("Publishing `%s' done.\n"),
info->value.publish.filename);
+ s = GNUNET_FS_uri_to_string (info->value.publish.specifics.completed.chk_uri);
+ fprintf (stdout,
+ _("URI is `%s'.\n"),
+ s);
+ GNUNET_free (s);
if (info->value.publish.pctx == NULL)
- GNUNET_SCHEDULER_add_continuation (sched,
- &do_stop_task,
- NULL,
- GNUNET_SCHEDULER_REASON_PREREQ_DONE);
+ {
+ if (kill_task != GNUNET_SCHEDULER_NO_TASK)
+ {
+ GNUNET_SCHEDULER_cancel (sched,
+ kill_task);
+ kill_task = GNUNET_SCHEDULER_NO_TASK;
+ }
+ GNUNET_SCHEDULER_add_continuation (sched,
+ &do_stop_task,
+ NULL,
+ GNUNET_SCHEDULER_REASON_PREREQ_DONE);
+ }
break;
case GNUNET_FS_STATUS_PUBLISH_STOPPED:
GNUNET_break (NULL == pc);
ret = 1;
return;
}
+ kill_task = GNUNET_SCHEDULER_add_delayed (sched,
+ GNUNET_TIME_UNIT_FOREVER_REL,
+ &do_stop_task,
+ NULL);
}
* @file fs/gnunet-service-fs_indexing.c
* @brief program that provides indexing functions of the file-sharing service
* @author Christian Grothoff
- *
- * TODO:
- * - indexed files/blocks not removed on errors
*/
#include "platform.h"
#include <float.h>
}
if (GNUNET_NO == GNUNET_DISK_file_test (fn))
{
- /* no index info yet */
+ /* no index info yet */
GNUNET_free (fn);
return;
}
STRERROR (errno));
if (fh != NULL)
GNUNET_DISK_file_close (fh);
- /* FIXME: if this happens often, we need
- to remove the OnDemand block from the DS! */
+ GNUNET_FS_drq_remove (key,
+ size,
+ data,
+ &remove_cont,
+ NULL,
+ GNUNET_TIME_UNIT_FOREVER_REL);
return GNUNET_SYSERR;
}
GNUNET_DISK_file_close (fh);
_("Indexed file `%s' changed at offset %llu\n"),
fn,
(unsigned long long) off);
- /* FIXME: if this happens often, we need
- to remove the OnDemand block from the DS! */
+ GNUNET_FS_drq_remove (key,
+ size,
+ data,
+ &remove_cont,
+ NULL,
+ GNUNET_TIME_UNIT_FOREVER_REL);
return GNUNET_SYSERR;
}
#if DEBUG_FS
* @param fn file name to be opened
* @param flags opening flags, a combination of GNUNET_DISK_OPEN_xxx bit flags
* @param perm permissions for the newly created file, use
- * GNUNET_DISK_PERM_USER_NONE if a file could not be created by this
+ * GNUNET_DISK_PERM_NONE if a file could not be created by this
* call (because of flags)
* @return IO handle on success, NULL on error
*/
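For example, when a file is opened read-only no file can be created, so the
permission argument is irrelevant and GNUNET_DISK_PERM_NONE is passed (the
same pattern used by the recursive-download code earlier in this patch):

  struct GNUNET_DISK_FileHandle *fh;

  fh = GNUNET_DISK_file_open (fn,
                              GNUNET_DISK_OPEN_READ,
                              GNUNET_DISK_PERM_NONE);
  if (NULL == fh)
    return; /* open failed */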