The DMA may attempt to write to an RX DMA descriptor in the ring while
the CPU is updating it. By first setting the descriptor buffer address
to 0, the DMA is guaranteed not to use that descriptor, and the
descriptor can then be updated without any interference.
Reviewed-by: Patrick Delaunay <patrick.delaunay@st.com>
Signed-off-by: Marek Vasut <marex@denx.de>
Cc: Joe Hershberger <joe.hershberger@ni.com>
Cc: Patrice Chotard <patrice.chotard@st.com>
Cc: Patrick Delaunay <patrick.delaunay@st.com>
Cc: Ramon Fried <rfried.dev@gmail.com>
Cc: Stephen Warren <swarren@nvidia.com>
 	rx_desc = &(eqos->rx_descs[eqos->rx_desc_idx]);
+	rx_desc->des0 = 0;
+	mb();
+	eqos->config->ops->eqos_flush_desc(rx_desc);
 	eqos->config->ops->eqos_inval_buffer(packet, length);
-
 	rx_desc->des0 = (u32)(ulong)packet;
 	rx_desc->des1 = 0;
 	rx_desc->des2 = 0;
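
For illustration, below is a minimal standalone C sketch of the resulting
RX descriptor refill ordering. It is a simplified model, not the driver
code: the helpers barrier(), flush_desc() and inval_buffer(), the
struct rx_desc layout and the DESC3_OWN bit are assumed stand-ins for
the driver's mb(), eqos_flush_desc(), eqos_inval_buffer() and the real
descriptor definitions.

#include <stddef.h>
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

struct rx_desc {
	uint32_t des0;	/* buffer address (low 32 bits) */
	uint32_t des1;
	uint32_t des2;
	uint32_t des3;	/* ownership/control bits */
};

/* Hypothetical stand-ins for the driver's barrier/cache-maintenance ops. */
static void barrier(void)
{
	__sync_synchronize();		/* full memory barrier */
}

static void flush_desc(struct rx_desc *desc)
{
	(void)desc;			/* would clean the descriptor's cache line */
}

static void inval_buffer(void *buf, size_t len)
{
	(void)buf;
	(void)len;			/* would invalidate the buffer's cache lines */
}

#define DESC3_OWN	(1u << 31)	/* assumed OWN bit position, illustration only */

static void refill_rx_desc(struct rx_desc *desc, void *packet, size_t length)
{
	/* Detach the buffer first so the DMA cannot use a half-updated descriptor. */
	desc->des0 = 0;
	barrier();
	flush_desc(desc);

	/* Prepare the data buffer for the next reception. */
	inval_buffer(packet, length);

	/* Fill in the new buffer address, then hand the descriptor back last. */
	desc->des0 = (uint32_t)(uintptr_t)packet;
	desc->des1 = 0;
	desc->des2 = 0;
	barrier();
	desc->des3 = DESC3_OWN;
	flush_desc(desc);
}

int main(void)
{
	static uint8_t buf[2048];
	struct rx_desc desc = { 0 };

	refill_rx_desc(&desc, buf, sizeof(buf));
	printf("des0=0x%08" PRIx32 " des3=0x%08" PRIx32 "\n", desc.des0, desc.des3);

	return 0;
}

The key point of the ordering is that the buffer address is cleared and
the descriptor flushed before any other field is touched, and ownership
is handed back to the DMA only after the remaining fields are written
and a barrier has been issued.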