The current code polls the RX descriptor ring for new packets by reading
the RX descriptor status. This only works by accident, because the RX
descriptors are usually placed in non-cacheable memory. However, the
driver also supports placing the RX descriptors in cacheable memory.

This patch adds the missing RX descriptor cache invalidation, which
ensures the CPU reads a fresh copy of the RX descriptor instead of a
stale cached one.
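For reference, the per-SoC eqos_inval_desc hook amounts to a
cache-line-aligned invalidate of the descriptor. The following is a
minimal sketch of such a callback, assuming U-Boot's
invalidate_dcache_range() helper together with ARCH_DMA_MINALIGN and a
driver-local EQOS_DESCRIPTOR_SIZE constant; the function name and exact
shape are illustrative, not quoted from the tree:

#include <cpu_func.h>		/* invalidate_dcache_range() */
#include <asm/cache.h>		/* ARCH_DMA_MINALIGN */
#include <linux/kernel.h>	/* ALIGN() */

static void eqos_inval_desc_generic(void *desc)
{
	/* Round the descriptor out to full cache lines before invalidating */
	unsigned long start = (unsigned long)desc & ~(ARCH_DMA_MINALIGN - 1);
	unsigned long end = ALIGN(start + EQOS_DESCRIPTOR_SIZE,
				  ARCH_DMA_MINALIGN);

	invalidate_dcache_range(start, end);
}

Rounding the range out to ARCH_DMA_MINALIGN matters because cache
maintenance operates on whole cache lines; a partially covered line
could otherwise leave stale descriptor data in the cache.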
Reviewed-by: Patrick Delaunay <patrick.delaunay@st.com>
Signed-off-by: Marek Vasut <marex@denx.de>
Cc: Joe Hershberger <joe.hershberger@ni.com>
Cc: Patrice Chotard <patrice.chotard@st.com>
Cc: Patrick Delaunay <patrick.delaunay@st.com>
Cc: Ramon Fried <rfried.dev@gmail.com>
Cc: Stephen Warren <swarren@nvidia.com>
debug("%s(dev=%p, flags=%x):\n", __func__, dev, flags);
rx_desc = &(eqos->rx_descs[eqos->rx_desc_idx]);
+ eqos->config->ops->eqos_inval_desc(rx_desc);
if (rx_desc->des3 & EQOS_DESC3_OWN) {
debug("%s: RX packet not available\n", __func__);
return -EAGAIN;