ci_udc.c allocates only a single buffer for each endpoint, and
ci_ep_alloc_request() returns that hard-coded buffer rather than
allocating one dynamically. Consequently, storage_common.c must limit
itself to using a single buffer at a time. Add a special case
to the definition of FSG_NUM_BUFFERS for this.
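
For illustration, here is a minimal sketch of that pattern (the struct and
field names are assumptions, not copied from ci_udc.c): the request is
embedded in the per-endpoint state and handed back directly, so at most one
request, and hence one buffer, can ever exist per endpoint:

#include <linux/usb/gadget.h>	/* struct usb_ep, struct usb_request */

/* Sketch only: assumed per-endpoint state with one embedded request */
struct ci_ep {
	struct usb_ep ep;		/* generic endpoint; kept as first member */
	struct usb_request req;		/* the single, fixed request */
};

static struct usb_request *ci_ep_alloc_request(struct usb_ep *ep,
					       unsigned int gfp_flags)
{
	/* ep is the first member of struct ci_ep, so this cast is valid */
	struct ci_ep *ci_ep = (struct ci_ep *)ep;

	/* No allocation: every call returns the same embedded request */
	return &ci_ep->req;
}
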
Another option would be to fix ci_ep_alloc_request() to dynamically
allocate the buffers like some/all(?) other device mode drivers do.
However, I don't believe ci_ep_queue() supports queueing multiple
buffers yet either, and I'm not familiar enough with the controller
to implement that. As it stands, any attempt to use multiple buffers
simply results in data corruption and other errors.
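
For comparison, a hedged sketch of what that alternative might look like,
again with assumed names; note that by itself this would not be enough,
since ci_ep_queue() would also have to cope with more than one outstanding
request per endpoint:

#include <malloc.h>			/* U-Boot's calloc()/free() */
#include <linux/usb/gadget.h>

/* Sketch only: assumed per-request wrapper, allocated on demand */
struct ci_req {
	struct usb_request req;		/* must remain the first member */
	/* per-request DMA/bounce-buffer bookkeeping would live here */
};

static struct usb_request *ci_ep_alloc_request(struct usb_ep *ep,
					       unsigned int gfp_flags)
{
	struct ci_req *ci_req = calloc(1, sizeof(*ci_req));

	return ci_req ? &ci_req->req : NULL;
}

static void ci_ep_free_request(struct usb_ep *ep, struct usb_request *req)
{
	/* req is the first member of struct ci_req, so freeing it frees all */
	free(req);
}
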
Signed-off-by: Stephen Warren <swarren@nvidia.com>
 #define DELAYED_STATUS (EP0_BUFSIZE + 999) /* An impossibly large value */
 /* Number of buffers we will use. 2 is enough for double-buffering */
+#ifndef CONFIG_CI_UDC
 #define FSG_NUM_BUFFERS 2
+#else
+#define FSG_NUM_BUFFERS 1 /* ci_udc only allows 1 req per ep at present */
+#endif
 /* Default size of buffer length. */
 #define FSG_BUFLEN ((u32)16384)
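
For context, a rough outline of how the define is consumed (the structure
names approximate the storage gadget code and are not copied from it): the
mass storage function sizes its ring of buffer heads from FSG_NUM_BUFFERS,
each backed by an FSG_BUFLEN-byte buffer, so with CONFIG_CI_UDC the ring
degenerates to a single buffer and only one request is queued at a time:

/* Sketch only: approximate shape of the buffer-head bookkeeping */
struct fsg_buffhd {
	void			*buf;		/* FSG_BUFLEN bytes of data */
	struct usb_request	*inreq;		/* request used for IN transfers */
	struct usb_request	*outreq;	/* request used for OUT transfers */
};

struct fsg_common {
	/* ... */
	struct fsg_buffhd	buffhds[FSG_NUM_BUFFERS];	/* 1 with CONFIG_CI_UDC */
};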