1 From 2a6f0dd5425cf43b8c09a8203e6ee64ba2b3868d Mon Sep 17 00:00:00 2001
2 From: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
3 Date: Tue, 12 Jan 2016 08:58:40 +0200
4 Subject: [PATCH 202/226] staging: fsl-dpaa2: eth: code cleanup for
5 upstreaming
6
7 This is a squash of cleanup commits (see QLINUX-5338); all commit logs
8 are below.
9
10 Signed-off-by: Stuart Yoder <stuart.yoder@nxp.com>
11
12 ---------------------------------------------------------------------
13
14 fsl-dpaa2: eth: Drain queues upon ifconfig down
15
16 MC firmware assists in draining the Tx FQs; the Eth driver flushes the
17 Rx and TxConfirm queues then empties the buffer pool.
18
19 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
20
21 fsl-dpaa2: eth: Don't use magic numbers
22
23 Add a define instead of hard-coding the maximum number
24 of buffers released/acquired through a single QBMan command.
25
26 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
27
28 dpaa2-eth: Remove cpumask_rr macro
29
30 It's only used in one place and is not very intuitive.
31
32 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
33
34 fsl-dpaa2: eth: Rename a variable
35
36 The old name was a leftover and non-intuitive.
37
38 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
39
40 fsl-dpaa2: eth: Rearrange code
41
42 Rearrange the conditional statements in several functions
43 to avoid excessive indenting, with no change in functionality.
44
45 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
46
47 fsl-dpaa2: eth: Remove incorrect check
48
49 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
50
51 fsl-dpaa2: eth: Fix bug on error path
52
53 We were not doing a DMA unmap on the error path of dpaa2_dpni_setup.
54 Reorganize the code a bit to avoid this.
55
56 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
57
58 fsl-dpaa2: eth: Error messages cleanup
59
60 This commit cleans up and improves uniformity of messages on
61 error paths throughout the Ethernet driver:
62
63 * don't use WARN/WARN_ON/WARN_ONCE for warning messages, as
64 we don't need a stack dump
65 * give up using the DPAA2_ETH_WARN_IF_ERR custom macro
66 * ensure dev_err and netdev_err are each used where needed and
67 not randomly
68 * remove error messages on memory allocation failures; the kernel
69 is quite capable of dumping a detailed message when that happens
70 * remove error messages on the fast path; we don't want to flood
71 the console and we already increment counters in most error cases
72 * ratelimit error messages where appropriate
73
74 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
75
76 fsl-dpaa2: eth: Fix name of ethtool counters
77
78 Rename counters in ethtool -S from "portal busy" to "dequeue portal
79 busy" and from "tx portal busy" to "enqueue portal busy", so it's
80 less confusing for the user.
81
82 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
83
84 fsl-dpaa2: eth: Retry DAN rearm if portal busy
85
86 There's a chance the data available notification rearming will
87 fail if the QBMan portal is busy. Keep retrying until the portal
88 becomes available again, like we do for buffer release and
89 pull dequeue operations.
90
91 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
92
93 fsl-dpaa2: eth: Add cpu_relax() to portal busy loops
94
95 For several DPIO operations, we may need to repeatedly try
96 until the QBMan portal is no longer busy. Add a cpu_relax() to
97 those loops, like we were already doing when seeding buffers.
98
99 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
100
101 fsl-dpaa2: eth: Add a counter for channel pull errors
102
103 We no longer print an error message in this case, so add an error
104 counter so we can at least know something went wrong.
105
106 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
107
108 fsl-dpaa2: eth: Function renames
109
110 Attempt to provide more uniformity for the DPAA2 Ethernet
111 driver function naming conventions:
112 * major functions (ndo_ops, driver ops, ethtool, etc) all have
113 the "dpaa2_eth" prefix
114 * non-static functions also start with "dpaa2_eth"
115 * static helper functions don't get any prefix in order to avoid
116 very long names
117 * some functions get more intuitive and/or explicit names
118 * don't use names starting with an underscore
119
120 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
121
122 fsl-dpaa2: eth: Structure and macro renames
123
124 Some more renaming:
125 * defines of error/status bits in the frame annotation status
126 word get a "DPAA2_FAS" prefix instead of "DPAA2_ETH_FAS", as they're
127 not really specific to the ethernet driver. We may consider moving
128 these defines to a separate header file in the future
129 * DPAA2_ETH_RX_BUFFER_SIZE is renamed to DPAA2_ETH_RX_BUF_SIZE
130 to better match the naming style of other defines
131 * structure "dpaa2_eth_stats" becomes "dpaa2_eth_drv_stats" to
132 make it clear these are driver specific statistics
133
134 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
135
136 fsl-dpaa2: eth: Cosmetics
137
138 Various coding style fixes and other minor cosmetics,
139 with no functional impact. Also remove a couple of unused
140 defines and a structure field.
141
142 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
143
144 fsl-dpaa2: eth: Move function call
145
146 Move call to set_fq_affinity() from probe to setup_fqs(), as it
147 logically belongs there.
148
149 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
150
151 fsl-dpaa2: eth: Comments cleanup
152
153 Add relevant comments where needed, remove obsolete or
154 useless ones.
155
156 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
157
158 fsl-dpaa2: eth: Remove link poll Kconfig option
159
160 Always try to use interrupts, but if they are not available
161 fall back to polling the link state.
162
163 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
164
165 fsl-dpaa2: eth: Remove message level
166
167 We were defining the netif message level, but we weren't using
168 it when printing error/info messages, so remove it for now.
169
170 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
171
172 fsl-dpaa2: eth: fix compile error on 4.5 uprev
173
174 Signed-off-by: Stuart Yoder <stuart.yoder@nxp.com>
175 ---
176 drivers/staging/fsl-dpaa2/ethernet/Kconfig | 6 -
177 .../staging/fsl-dpaa2/ethernet/dpaa2-eth-debugfs.c | 6 +-
178 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c | 992 ++++++++++----------
179 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.h | 133 +--
180 drivers/staging/fsl-dpaa2/ethernet/dpaa2-ethtool.c | 226 ++---
181 5 files changed, 693 insertions(+), 670 deletions(-)
182
183 --- a/drivers/staging/fsl-dpaa2/ethernet/Kconfig
184 +++ b/drivers/staging/fsl-dpaa2/ethernet/Kconfig
185 @@ -16,12 +16,6 @@ menuconfig FSL_DPAA2_ETH
186 driver, using the Freescale MC bus driver.
187
188 if FSL_DPAA2_ETH
189 -config FSL_DPAA2_ETH_LINK_POLL
190 - bool "Use polling mode for link state"
191 - default n
192 - ---help---
193 - Poll for detecting link state changes instead of using
194 - interrupts.
195
196 config FSL_DPAA2_ETH_USE_ERR_QUEUE
197 bool "Enable Rx error queue"
198 --- a/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-debugfs.c
199 +++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-debugfs.c
200 @@ -30,7 +30,6 @@
201 * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
202 */
203
204 -
205 #include <linux/module.h>
206 #include <linux/debugfs.h>
207 #include "dpaa2-eth.h"
208 @@ -38,14 +37,13 @@
209
210 #define DPAA2_ETH_DBG_ROOT "dpaa2-eth"
211
212 -
213 static struct dentry *dpaa2_dbg_root;
214
215 static int dpaa2_dbg_cpu_show(struct seq_file *file, void *offset)
216 {
217 struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)file->private;
218 struct rtnl_link_stats64 *stats;
219 - struct dpaa2_eth_stats *extras;
220 + struct dpaa2_eth_drv_stats *extras;
221 int i;
222
223 seq_printf(file, "Per-CPU stats for %s\n", priv->net_dev->name);
224 @@ -200,7 +198,7 @@ static ssize_t dpaa2_dbg_reset_write(str
225 {
226 struct dpaa2_eth_priv *priv = file->private_data;
227 struct rtnl_link_stats64 *percpu_stats;
228 - struct dpaa2_eth_stats *percpu_extras;
229 + struct dpaa2_eth_drv_stats *percpu_extras;
230 struct dpaa2_eth_fq *fq;
231 struct dpaa2_eth_channel *ch;
232 int i;
233 --- a/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
234 +++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
235 @@ -53,26 +53,14 @@ MODULE_LICENSE("Dual BSD/GPL");
236 MODULE_AUTHOR("Freescale Semiconductor, Inc");
237 MODULE_DESCRIPTION("Freescale DPAA2 Ethernet Driver");
238
239 -static int debug = -1;
240 -module_param(debug, int, S_IRUGO);
241 -MODULE_PARM_DESC(debug, "Module/Driver verbosity level");
242 -
243 /* Oldest DPAA2 objects version we are compatible with */
244 #define DPAA2_SUPPORTED_DPNI_VERSION 6
245 #define DPAA2_SUPPORTED_DPBP_VERSION 2
246 #define DPAA2_SUPPORTED_DPCON_VERSION 2
247
248 -/* Iterate through the cpumask in a round-robin fashion. */
249 -#define cpumask_rr(cpu, maskptr) \
250 -do { \
251 - (cpu) = cpumask_next((cpu), (maskptr)); \
252 - if ((cpu) >= nr_cpu_ids) \
253 - (cpu) = cpumask_first((maskptr)); \
254 -} while (0)
255 -
256 -static void dpaa2_eth_rx_csum(struct dpaa2_eth_priv *priv,
257 - u32 fd_status,
258 - struct sk_buff *skb)
259 +static void validate_rx_csum(struct dpaa2_eth_priv *priv,
260 + u32 fd_status,
261 + struct sk_buff *skb)
262 {
263 skb_checksum_none_assert(skb);
264
265 @@ -81,8 +69,8 @@ static void dpaa2_eth_rx_csum(struct dpa
266 return;
267
268 /* Read checksum validation bits */
269 - if (!((fd_status & DPAA2_ETH_FAS_L3CV) &&
270 - (fd_status & DPAA2_ETH_FAS_L4CV)))
271 + if (!((fd_status & DPAA2_FAS_L3CV) &&
272 + (fd_status & DPAA2_FAS_L4CV)))
273 return;
274
275 /* Inform the stack there's no need to compute L3/L4 csum anymore */
276 @@ -92,53 +80,55 @@ static void dpaa2_eth_rx_csum(struct dpa
277 /* Free a received FD.
278 * Not to be used for Tx conf FDs or on any other paths.
279 */
280 -static void dpaa2_eth_free_rx_fd(struct dpaa2_eth_priv *priv,
281 - const struct dpaa2_fd *fd,
282 - void *vaddr)
283 +static void free_rx_fd(struct dpaa2_eth_priv *priv,
284 + const struct dpaa2_fd *fd,
285 + void *vaddr)
286 {
287 struct device *dev = priv->net_dev->dev.parent;
288 dma_addr_t addr = dpaa2_fd_get_addr(fd);
289 u8 fd_format = dpaa2_fd_get_format(fd);
290 + struct dpaa2_sg_entry *sgt;
291 + void *sg_vaddr;
292 + int i;
293
294 - if (fd_format == dpaa2_fd_sg) {
295 - struct dpaa2_sg_entry *sgt = vaddr + dpaa2_fd_get_offset(fd);
296 - void *sg_vaddr;
297 - int i;
298 + /* If single buffer frame, just free the data buffer */
299 + if (fd_format == dpaa2_fd_single)
300 + goto free_buf;
301
302 - for (i = 0; i < DPAA2_ETH_MAX_SG_ENTRIES; i++) {
303 - dpaa2_sg_le_to_cpu(&sgt[i]);
304 + /* For S/G frames, we first need to free all SG entries */
305 + sgt = vaddr + dpaa2_fd_get_offset(fd);
306 + for (i = 0; i < DPAA2_ETH_MAX_SG_ENTRIES; i++) {
307 + dpaa2_sg_le_to_cpu(&sgt[i]);
308
309 - addr = dpaa2_sg_get_addr(&sgt[i]);
310 - dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUFFER_SIZE,
311 - DMA_FROM_DEVICE);
312 + addr = dpaa2_sg_get_addr(&sgt[i]);
313 + dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUF_SIZE,
314 + DMA_FROM_DEVICE);
315
316 - sg_vaddr = phys_to_virt(addr);
317 - put_page(virt_to_head_page(sg_vaddr));
318 + sg_vaddr = phys_to_virt(addr);
319 + put_page(virt_to_head_page(sg_vaddr));
320
321 - if (dpaa2_sg_is_final(&sgt[i]))
322 - break;
323 - }
324 + if (dpaa2_sg_is_final(&sgt[i]))
325 + break;
326 }
327
328 +free_buf:
329 put_page(virt_to_head_page(vaddr));
330 }
331
332 /* Build a linear skb based on a single-buffer frame descriptor */
333 -static struct sk_buff *dpaa2_eth_build_linear_skb(struct dpaa2_eth_priv *priv,
334 - struct dpaa2_eth_channel *ch,
335 - const struct dpaa2_fd *fd,
336 - void *fd_vaddr)
337 +static struct sk_buff *build_linear_skb(struct dpaa2_eth_priv *priv,
338 + struct dpaa2_eth_channel *ch,
339 + const struct dpaa2_fd *fd,
340 + void *fd_vaddr)
341 {
342 struct sk_buff *skb = NULL;
343 u16 fd_offset = dpaa2_fd_get_offset(fd);
344 u32 fd_length = dpaa2_fd_get_len(fd);
345
346 - skb = build_skb(fd_vaddr, DPAA2_ETH_RX_BUFFER_SIZE +
347 + skb = build_skb(fd_vaddr, DPAA2_ETH_RX_BUF_SIZE +
348 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
349 - if (unlikely(!skb)) {
350 - netdev_err(priv->net_dev, "build_skb() failed\n");
351 + if (unlikely(!skb))
352 return NULL;
353 - }
354
355 skb_reserve(skb, fd_offset);
356 skb_put(skb, fd_length);
357 @@ -149,9 +139,9 @@ static struct sk_buff *dpaa2_eth_build_l
358 }
359
360 /* Build a non linear (fragmented) skb based on a S/G table */
361 -static struct sk_buff *dpaa2_eth_build_frag_skb(struct dpaa2_eth_priv *priv,
362 - struct dpaa2_eth_channel *ch,
363 - struct dpaa2_sg_entry *sgt)
364 +static struct sk_buff *build_frag_skb(struct dpaa2_eth_priv *priv,
365 + struct dpaa2_eth_channel *ch,
366 + struct dpaa2_sg_entry *sgt)
367 {
368 struct sk_buff *skb = NULL;
369 struct device *dev = priv->net_dev->dev.parent;
370 @@ -168,66 +158,57 @@ static struct sk_buff *dpaa2_eth_build_f
371
372 dpaa2_sg_le_to_cpu(sge);
373
374 - /* We don't support anything else yet! */
375 - if (unlikely(dpaa2_sg_get_format(sge) != dpaa2_sg_single)) {
376 - dev_warn_once(dev, "Unsupported S/G entry format: %d\n",
377 - dpaa2_sg_get_format(sge));
378 - return NULL;
379 - }
380 + /* NOTE: We only support SG entries in dpaa2_sg_single format,
381 + * but this is the only format we may receive from HW anyway
382 + */
383
384 - /* Get the address, offset and length from the S/G entry */
385 + /* Get the address and length from the S/G entry */
386 sg_addr = dpaa2_sg_get_addr(sge);
387 - dma_unmap_single(dev, sg_addr, DPAA2_ETH_RX_BUFFER_SIZE,
388 + dma_unmap_single(dev, sg_addr, DPAA2_ETH_RX_BUF_SIZE,
389 DMA_FROM_DEVICE);
390 - if (unlikely(dma_mapping_error(dev, sg_addr))) {
391 - netdev_err(priv->net_dev, "DMA unmap failed\n");
392 - return NULL;
393 - }
394 +
395 sg_vaddr = phys_to_virt(sg_addr);
396 sg_length = dpaa2_sg_get_len(sge);
397
398 if (i == 0) {
399 /* We build the skb around the first data buffer */
400 - skb = build_skb(sg_vaddr, DPAA2_ETH_RX_BUFFER_SIZE +
401 + skb = build_skb(sg_vaddr, DPAA2_ETH_RX_BUF_SIZE +
402 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
403 - if (unlikely(!skb)) {
404 - netdev_err(priv->net_dev, "build_skb failed\n");
405 + if (unlikely(!skb))
406 return NULL;
407 - }
408 +
409 sg_offset = dpaa2_sg_get_offset(sge);
410 skb_reserve(skb, sg_offset);
411 skb_put(skb, sg_length);
412 } else {
413 - /* Subsequent data in SGEntries are stored at
414 - * offset 0 in their buffers, we don't need to
415 - * compute sg_offset.
416 - */
417 - WARN_ONCE(dpaa2_sg_get_offset(sge) != 0,
418 - "Non-zero offset in SGE[%d]!\n", i);
419 -
420 /* Rest of the data buffers are stored as skb frags */
421 page = virt_to_page(sg_vaddr);
422 head_page = virt_to_head_page(sg_vaddr);
423
424 - /* Offset in page (which may be compound) */
425 + /* Offset in page (which may be compound).
426 + * Data in subsequent SG entries is stored from the
427 + * beginning of the buffer, so we don't need to add the
428 + * sg_offset.
429 + */
430 page_offset = ((unsigned long)sg_vaddr &
431 (PAGE_SIZE - 1)) +
432 (page_address(page) - page_address(head_page));
433
434 skb_add_rx_frag(skb, i - 1, head_page, page_offset,
435 - sg_length, DPAA2_ETH_RX_BUFFER_SIZE);
436 + sg_length, DPAA2_ETH_RX_BUF_SIZE);
437 }
438
439 if (dpaa2_sg_is_final(sge))
440 break;
441 }
442
443 - /* Count all data buffers + sgt buffer */
444 + /* Count all data buffers + SG table buffer */
445 ch->buf_count -= i + 2;
446
447 return skb;
448 }
449
450 +/* Main Rx frame processing routine */
451 static void dpaa2_eth_rx(struct dpaa2_eth_priv *priv,
452 struct dpaa2_eth_channel *ch,
453 const struct dpaa2_fd *fd,
454 @@ -238,7 +219,7 @@ static void dpaa2_eth_rx(struct dpaa2_et
455 void *vaddr;
456 struct sk_buff *skb;
457 struct rtnl_link_stats64 *percpu_stats;
458 - struct dpaa2_eth_stats *percpu_extras;
459 + struct dpaa2_eth_drv_stats *percpu_extras;
460 struct device *dev = priv->net_dev->dev.parent;
461 struct dpaa2_fas *fas;
462 u32 status = 0;
463 @@ -246,7 +227,7 @@ static void dpaa2_eth_rx(struct dpaa2_et
464 /* Tracing point */
465 trace_dpaa2_rx_fd(priv->net_dev, fd);
466
467 - dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUFFER_SIZE, DMA_FROM_DEVICE);
468 + dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUF_SIZE, DMA_FROM_DEVICE);
469 vaddr = phys_to_virt(addr);
470
471 prefetch(vaddr + priv->buf_layout.private_data_size);
472 @@ -256,32 +237,30 @@ static void dpaa2_eth_rx(struct dpaa2_et
473 percpu_extras = this_cpu_ptr(priv->percpu_extras);
474
475 if (fd_format == dpaa2_fd_single) {
476 - skb = dpaa2_eth_build_linear_skb(priv, ch, fd, vaddr);
477 + skb = build_linear_skb(priv, ch, fd, vaddr);
478 } else if (fd_format == dpaa2_fd_sg) {
479 struct dpaa2_sg_entry *sgt =
480 vaddr + dpaa2_fd_get_offset(fd);
481 - skb = dpaa2_eth_build_frag_skb(priv, ch, sgt);
482 + skb = build_frag_skb(priv, ch, sgt);
483 put_page(virt_to_head_page(vaddr));
484 percpu_extras->rx_sg_frames++;
485 percpu_extras->rx_sg_bytes += dpaa2_fd_get_len(fd);
486 } else {
487 /* We don't support any other format */
488 - netdev_err(priv->net_dev, "Received invalid frame format\n");
489 goto err_frame_format;
490 }
491
492 - if (unlikely(!skb)) {
493 - dev_err_once(dev, "error building skb\n");
494 + if (unlikely(!skb))
495 goto err_build_skb;
496 - }
497
498 prefetch(skb->data);
499
500 + /* Get the timestamp value */
501 if (priv->ts_rx_en) {
502 struct skb_shared_hwtstamps *shhwtstamps = skb_hwtstamps(skb);
503 - u64 *ns = (u64 *) (vaddr +
504 - priv->buf_layout.private_data_size +
505 - sizeof(struct dpaa2_fas));
506 + u64 *ns = (u64 *)(vaddr +
507 + priv->buf_layout.private_data_size +
508 + sizeof(struct dpaa2_fas));
509
510 *ns = DPAA2_PTP_NOMINAL_FREQ_PERIOD_NS * (*ns);
511 memset(shhwtstamps, 0, sizeof(*shhwtstamps));
512 @@ -293,7 +272,7 @@ static void dpaa2_eth_rx(struct dpaa2_et
513 fas = (struct dpaa2_fas *)
514 (vaddr + priv->buf_layout.private_data_size);
515 status = le32_to_cpu(fas->status);
516 - dpaa2_eth_rx_csum(priv, status, skb);
517 + validate_rx_csum(priv, status, skb);
518 }
519
520 skb->protocol = eth_type_trans(skb, priv->net_dev);
521 @@ -309,11 +288,14 @@ static void dpaa2_eth_rx(struct dpaa2_et
522 return;
523 err_frame_format:
524 err_build_skb:
525 - dpaa2_eth_free_rx_fd(priv, fd, vaddr);
526 + free_rx_fd(priv, fd, vaddr);
527 percpu_stats->rx_dropped++;
528 }
529
530 #ifdef CONFIG_FSL_DPAA2_ETH_USE_ERR_QUEUE
531 +/* Processing of Rx frames received on the error FQ
532 + * We check and print the error bits and then free the frame
533 + */
534 static void dpaa2_eth_rx_err(struct dpaa2_eth_priv *priv,
535 struct dpaa2_eth_channel *ch,
536 const struct dpaa2_fd *fd,
537 @@ -326,21 +308,18 @@ static void dpaa2_eth_rx_err(struct dpaa
538 struct dpaa2_fas *fas;
539 u32 status = 0;
540
541 - dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUFFER_SIZE, DMA_FROM_DEVICE);
542 + dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUF_SIZE, DMA_FROM_DEVICE);
543 vaddr = phys_to_virt(addr);
544
545 if (fd->simple.frc & DPAA2_FD_FRC_FASV) {
546 fas = (struct dpaa2_fas *)
547 (vaddr + priv->buf_layout.private_data_size);
548 status = le32_to_cpu(fas->status);
549 -
550 - /* All frames received on this queue should have at least
551 - * one of the Rx error bits set */
552 - WARN_ON_ONCE((status & DPAA2_ETH_RX_ERR_MASK) == 0);
553 - netdev_dbg(priv->net_dev, "Rx frame error: 0x%08x\n",
554 - status & DPAA2_ETH_RX_ERR_MASK);
555 + if (net_ratelimit())
556 + netdev_warn(priv->net_dev, "Rx frame error: 0x%08x\n",
557 + status & DPAA2_ETH_RX_ERR_MASK);
558 }
559 - dpaa2_eth_free_rx_fd(priv, fd, vaddr);
560 + free_rx_fd(priv, fd, vaddr);
561
562 percpu_stats = this_cpu_ptr(priv->percpu_stats);
563 percpu_stats->rx_errors++;
564 @@ -353,7 +332,7 @@ static void dpaa2_eth_rx_err(struct dpaa
565 *
566 * Observance of NAPI budget is not our concern, leaving that to the caller.
567 */
568 -static int dpaa2_eth_store_consume(struct dpaa2_eth_channel *ch)
569 +static int consume_frames(struct dpaa2_eth_channel *ch)
570 {
571 struct dpaa2_eth_priv *priv = ch->priv;
572 struct dpaa2_eth_fq *fq;
573 @@ -365,20 +344,14 @@ static int dpaa2_eth_store_consume(struc
574 do {
575 dq = dpaa2_io_store_next(ch->store, &is_last);
576 if (unlikely(!dq)) {
577 - if (unlikely(!is_last)) {
578 - netdev_dbg(priv->net_dev,
579 - "Channel %d reqturned no valid frames\n",
580 - ch->ch_id);
581 - /* MUST retry until we get some sort of
582 - * valid response token (be it "empty dequeue"
583 - * or a valid frame).
584 - */
585 - continue;
586 - }
587 - break;
588 + /* If we're here, we *must* have placed a
589 + * volatile dequeue command, so keep reading through
590 + * the store until we get some sort of valid response
591 + * token (either a valid frame or an "empty dequeue")
592 + */
593 + continue;
594 }
595
596 - /* Obtain FD and process it */
597 fd = dpaa2_dq_fd(dq);
598 fq = (struct dpaa2_eth_fq *)dpaa2_dq_fqd_ctx(dq);
599 fq->stats.frames++;
600 @@ -390,9 +363,10 @@ static int dpaa2_eth_store_consume(struc
601 return cleaned;
602 }
603
604 -static int dpaa2_eth_build_sg_fd(struct dpaa2_eth_priv *priv,
605 - struct sk_buff *skb,
606 - struct dpaa2_fd *fd)
607 +/* Create a frame descriptor based on a fragmented skb */
608 +static int build_sg_fd(struct dpaa2_eth_priv *priv,
609 + struct sk_buff *skb,
610 + struct dpaa2_fd *fd)
611 {
612 struct device *dev = priv->net_dev->dev.parent;
613 void *sgt_buf = NULL;
614 @@ -404,14 +378,16 @@ static int dpaa2_eth_build_sg_fd(struct
615 struct scatterlist *scl, *crt_scl;
616 int num_sg;
617 int num_dma_bufs;
618 - struct dpaa2_eth_swa *bps;
619 + struct dpaa2_eth_swa *swa;
620
621 /* Create and map scatterlist.
622 * We don't advertise NETIF_F_FRAGLIST, so skb_to_sgvec() will not have
623 * to go beyond nr_frags+1.
624 * Note: We don't support chained scatterlists
625 */
626 - WARN_ON(PAGE_SIZE / sizeof(struct scatterlist) < nr_frags + 1);
627 + if (unlikely(PAGE_SIZE / sizeof(struct scatterlist) < nr_frags + 1))
628 + return -EINVAL;
629 +
630 scl = kcalloc(nr_frags + 1, sizeof(struct scatterlist), GFP_ATOMIC);
631 if (unlikely(!scl))
632 return -ENOMEM;
633 @@ -420,7 +396,6 @@ static int dpaa2_eth_build_sg_fd(struct
634 num_sg = skb_to_sgvec(skb, scl, 0, skb->len);
635 num_dma_bufs = dma_map_sg(dev, scl, num_sg, DMA_TO_DEVICE);
636 if (unlikely(!num_dma_bufs)) {
637 - netdev_err(priv->net_dev, "dma_map_sg() error\n");
638 err = -ENOMEM;
639 goto dma_map_sg_failed;
640 }
641 @@ -430,7 +405,6 @@ static int dpaa2_eth_build_sg_fd(struct
642 sizeof(struct dpaa2_sg_entry) * (1 + num_dma_bufs);
643 sgt_buf = kzalloc(sgt_buf_size + DPAA2_ETH_TX_BUF_ALIGN, GFP_ATOMIC);
644 if (unlikely(!sgt_buf)) {
645 - netdev_err(priv->net_dev, "failed to allocate SGT buffer\n");
646 err = -ENOMEM;
647 goto sgt_buf_alloc_failed;
648 }
649 @@ -462,19 +436,19 @@ static int dpaa2_eth_build_sg_fd(struct
650 * Fit the scatterlist and the number of buffers alongside the
651 * skb backpointer in the SWA. We'll need all of them on Tx Conf.
652 */
653 - bps = (struct dpaa2_eth_swa *)sgt_buf;
654 - bps->skb = skb;
655 - bps->scl = scl;
656 - bps->num_sg = num_sg;
657 - bps->num_dma_bufs = num_dma_bufs;
658 + swa = (struct dpaa2_eth_swa *)sgt_buf;
659 + swa->skb = skb;
660 + swa->scl = scl;
661 + swa->num_sg = num_sg;
662 + swa->num_dma_bufs = num_dma_bufs;
663
664 + /* Hardware expects the SG table to be in little endian format */
665 for (j = 0; j < i; j++)
666 dpaa2_sg_cpu_to_le(&sgt[j]);
667
668 /* Separately map the SGT buffer */
669 addr = dma_map_single(dev, sgt_buf, sgt_buf_size, DMA_TO_DEVICE);
670 if (unlikely(dma_mapping_error(dev, addr))) {
671 - netdev_err(priv->net_dev, "dma_map_single() failed\n");
672 err = -ENOMEM;
673 goto dma_map_single_failed;
674 }
675 @@ -484,7 +458,7 @@ static int dpaa2_eth_build_sg_fd(struct
676 dpaa2_fd_set_len(fd, skb->len);
677
678 fd->simple.ctrl = DPAA2_FD_CTRL_ASAL | DPAA2_FD_CTRL_PTA |
679 - DPAA2_FD_CTRL_PTV1;
680 + DPAA2_FD_CTRL_PTV1;
681
682 return 0;
683
684 @@ -497,9 +471,10 @@ dma_map_sg_failed:
685 return err;
686 }
687
688 -static int dpaa2_eth_build_single_fd(struct dpaa2_eth_priv *priv,
689 - struct sk_buff *skb,
690 - struct dpaa2_fd *fd)
691 +/* Create a frame descriptor based on a linear skb */
692 +static int build_single_fd(struct dpaa2_eth_priv *priv,
693 + struct sk_buff *skb,
694 + struct dpaa2_fd *fd)
695 {
696 struct device *dev = priv->net_dev->dev.parent;
697 u8 *buffer_start;
698 @@ -524,14 +499,11 @@ static int dpaa2_eth_build_single_fd(str
699 skbh = (struct sk_buff **)buffer_start;
700 *skbh = skb;
701
702 - addr = dma_map_single(dev,
703 - buffer_start,
704 + addr = dma_map_single(dev, buffer_start,
705 skb_tail_pointer(skb) - buffer_start,
706 DMA_TO_DEVICE);
707 - if (unlikely(dma_mapping_error(dev, addr))) {
708 - dev_err(dev, "dma_map_single() failed\n");
709 - return -EINVAL;
710 - }
711 + if (unlikely(dma_mapping_error(dev, addr)))
712 + return -ENOMEM;
713
714 dpaa2_fd_set_addr(fd, addr);
715 dpaa2_fd_set_offset(fd, (u16)(skb->data - buffer_start));
716 @@ -539,21 +511,23 @@ static int dpaa2_eth_build_single_fd(str
717 dpaa2_fd_set_format(fd, dpaa2_fd_single);
718
719 fd->simple.ctrl = DPAA2_FD_CTRL_ASAL | DPAA2_FD_CTRL_PTA |
720 - DPAA2_FD_CTRL_PTV1;
721 + DPAA2_FD_CTRL_PTV1;
722
723 return 0;
724 }
725
726 -/* DMA-unmap and free FD and possibly SGT buffer allocated on Tx. The skb
727 +/* FD freeing routine on the Tx path
728 + *
729 + * DMA-unmap and free FD and possibly SGT buffer allocated on Tx. The skb
730 * back-pointed to is also freed.
731 * This can be called either from dpaa2_eth_tx_conf() or on the error path of
732 * dpaa2_eth_tx().
733 * Optionally, return the frame annotation status word (FAS), which needs
734 * to be checked if we're on the confirmation path.
735 */
736 -static void dpaa2_eth_free_fd(const struct dpaa2_eth_priv *priv,
737 - const struct dpaa2_fd *fd,
738 - u32 *status)
739 +static void free_tx_fd(const struct dpaa2_eth_priv *priv,
740 + const struct dpaa2_fd *fd,
741 + u32 *status)
742 {
743 struct device *dev = priv->net_dev->dev.parent;
744 dma_addr_t fd_addr;
745 @@ -562,7 +536,7 @@ static void dpaa2_eth_free_fd(const stru
746 int unmap_size;
747 struct scatterlist *scl;
748 int num_sg, num_dma_bufs;
749 - struct dpaa2_eth_swa *bps;
750 + struct dpaa2_eth_swa *swa;
751 bool fd_single;
752 struct dpaa2_fas *fas;
753
754 @@ -580,11 +554,11 @@ static void dpaa2_eth_free_fd(const stru
755 skb_tail_pointer(skb) - buffer_start,
756 DMA_TO_DEVICE);
757 } else {
758 - bps = (struct dpaa2_eth_swa *)skbh;
759 - skb = bps->skb;
760 - scl = bps->scl;
761 - num_sg = bps->num_sg;
762 - num_dma_bufs = bps->num_dma_bufs;
763 + swa = (struct dpaa2_eth_swa *)skbh;
764 + skb = swa->skb;
765 + scl = swa->scl;
766 + num_sg = swa->num_sg;
767 + num_dma_bufs = swa->num_dma_bufs;
768
769 /* Unmap the scatterlist */
770 dma_unmap_sg(dev, scl, num_sg, DMA_TO_DEVICE);
771 @@ -596,6 +570,7 @@ static void dpaa2_eth_free_fd(const stru
772 dma_unmap_single(dev, fd_addr, unmap_size, DMA_TO_DEVICE);
773 }
774
775 + /* Get the timestamp value */
776 if (priv->ts_tx_en && skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) {
777 struct skb_shared_hwtstamps shhwtstamps;
778 u64 *ns;
779 @@ -610,8 +585,9 @@ static void dpaa2_eth_free_fd(const stru
780 skb_tstamp_tx(skb, &shhwtstamps);
781 }
782
783 - /* Check the status from the Frame Annotation after we unmap the first
784 - * buffer but before we free it.
785 + /* Read the status from the Frame Annotation after we unmap the first
786 + * buffer but before we free it. The caller function is responsible
787 + * for checking the status value.
788 */
789 if (status && (fd->simple.frc & DPAA2_FD_FRC_FASV)) {
790 fas = (struct dpaa2_fas *)
791 @@ -632,24 +608,16 @@ static int dpaa2_eth_tx(struct sk_buff *
792 struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
793 struct dpaa2_fd fd;
794 struct rtnl_link_stats64 *percpu_stats;
795 - struct dpaa2_eth_stats *percpu_extras;
796 + struct dpaa2_eth_drv_stats *percpu_extras;
797 + u16 queue_mapping, flow_id;
798 int err, i;
799 - /* TxConf FQ selection primarily based on cpu affinity; this is
800 - * non-migratable context, so it's safe to call smp_processor_id().
801 - */
802 - u16 queue_mapping = smp_processor_id() % priv->dpni_attrs.max_senders;
803
804 percpu_stats = this_cpu_ptr(priv->percpu_stats);
805 percpu_extras = this_cpu_ptr(priv->percpu_extras);
806
807 - /* Setup the FD fields */
808 - memset(&fd, 0, sizeof(fd));
809 -
810 if (unlikely(skb_headroom(skb) < DPAA2_ETH_NEEDED_HEADROOM(priv))) {
811 struct sk_buff *ns;
812
813 - dev_info_once(net_dev->dev.parent,
814 - "skb headroom too small, must realloc.\n");
815 ns = skb_realloc_headroom(skb, DPAA2_ETH_NEEDED_HEADROOM(priv));
816 if (unlikely(!ns)) {
817 percpu_stats->tx_dropped++;
818 @@ -664,18 +632,20 @@ static int dpaa2_eth_tx(struct sk_buff *
819 */
820 skb = skb_unshare(skb, GFP_ATOMIC);
821 if (unlikely(!skb)) {
822 - netdev_err(net_dev, "Out of memory for skb_unshare()");
823 /* skb_unshare() has already freed the skb */
824 percpu_stats->tx_dropped++;
825 return NETDEV_TX_OK;
826 }
827
828 + /* Setup the FD fields */
829 + memset(&fd, 0, sizeof(fd));
830 +
831 if (skb_is_nonlinear(skb)) {
832 - err = dpaa2_eth_build_sg_fd(priv, skb, &fd);
833 + err = build_sg_fd(priv, skb, &fd);
834 percpu_extras->tx_sg_frames++;
835 percpu_extras->tx_sg_bytes += skb->len;
836 } else {
837 - err = dpaa2_eth_build_single_fd(priv, skb, &fd);
838 + err = build_single_fd(priv, skb, &fd);
839 }
840
841 if (unlikely(err)) {
842 @@ -686,19 +656,22 @@ static int dpaa2_eth_tx(struct sk_buff *
843 /* Tracing point */
844 trace_dpaa2_tx_fd(net_dev, &fd);
845
846 + /* TxConf FQ selection primarily based on cpu affinity; this is
847 + * non-migratable context, so it's safe to call smp_processor_id().
848 + */
849 + queue_mapping = smp_processor_id() % priv->dpni_attrs.max_senders;
850 + flow_id = priv->fq[queue_mapping].flowid;
851 for (i = 0; i < (DPAA2_ETH_MAX_TX_QUEUES << 1); i++) {
852 err = dpaa2_io_service_enqueue_qd(NULL, priv->tx_qdid, 0,
853 - priv->fq[queue_mapping].flowid,
854 - &fd);
855 + flow_id, &fd);
856 if (err != -EBUSY)
857 break;
858 }
859 percpu_extras->tx_portal_busy += i;
860 if (unlikely(err < 0)) {
861 - netdev_dbg(net_dev, "error enqueueing Tx frame\n");
862 percpu_stats->tx_errors++;
863 /* Clean up everything, including freeing the skb */
864 - dpaa2_eth_free_fd(priv, &fd, NULL);
865 + free_tx_fd(priv, &fd, NULL);
866 } else {
867 percpu_stats->tx_packets++;
868 percpu_stats->tx_bytes += skb->len;
869 @@ -713,13 +686,14 @@ err_alloc_headroom:
870 return NETDEV_TX_OK;
871 }
872
873 +/* Tx confirmation frame processing routine */
874 static void dpaa2_eth_tx_conf(struct dpaa2_eth_priv *priv,
875 struct dpaa2_eth_channel *ch,
876 const struct dpaa2_fd *fd,
877 struct napi_struct *napi __always_unused)
878 {
879 struct rtnl_link_stats64 *percpu_stats;
880 - struct dpaa2_eth_stats *percpu_extras;
881 + struct dpaa2_eth_drv_stats *percpu_extras;
882 u32 status = 0;
883
884 /* Tracing point */
885 @@ -729,18 +703,16 @@ static void dpaa2_eth_tx_conf(struct dpa
886 percpu_extras->tx_conf_frames++;
887 percpu_extras->tx_conf_bytes += dpaa2_fd_get_len(fd);
888
889 - dpaa2_eth_free_fd(priv, fd, &status);
890 + free_tx_fd(priv, fd, &status);
891
892 if (unlikely(status & DPAA2_ETH_TXCONF_ERR_MASK)) {
893 - netdev_err(priv->net_dev, "TxConf frame error(s): 0x%08x\n",
894 - status & DPAA2_ETH_TXCONF_ERR_MASK);
895 percpu_stats = this_cpu_ptr(priv->percpu_stats);
896 /* Tx-conf logically pertains to the egress path. */
897 percpu_stats->tx_errors++;
898 }
899 }
900
901 -static int dpaa2_eth_set_rx_csum(struct dpaa2_eth_priv *priv, bool enable)
902 +static int set_rx_csum(struct dpaa2_eth_priv *priv, bool enable)
903 {
904 int err;
905
906 @@ -763,7 +735,7 @@ static int dpaa2_eth_set_rx_csum(struct
907 return 0;
908 }
909
910 -static int dpaa2_eth_set_tx_csum(struct dpaa2_eth_priv *priv, bool enable)
911 +static int set_tx_csum(struct dpaa2_eth_priv *priv, bool enable)
912 {
913 struct dpaa2_eth_fq *fq;
914 struct dpni_tx_flow_cfg tx_flow_cfg;
915 @@ -793,37 +765,38 @@ static int dpaa2_eth_set_tx_csum(struct
916 return 0;
917 }
918
919 -static int dpaa2_bp_add_7(struct dpaa2_eth_priv *priv, u16 bpid)
920 +/* Perform a single release command to add buffers
921 + * to the specified buffer pool
922 + */
923 +static int add_bufs(struct dpaa2_eth_priv *priv, u16 bpid)
924 {
925 struct device *dev = priv->net_dev->dev.parent;
926 - u64 buf_array[7];
927 + u64 buf_array[DPAA2_ETH_BUFS_PER_CMD];
928 void *buf;
929 dma_addr_t addr;
930 int i;
931
932 - for (i = 0; i < 7; i++) {
933 + for (i = 0; i < DPAA2_ETH_BUFS_PER_CMD; i++) {
934 /* Allocate buffer visible to WRIOP + skb shared info +
935 * alignment padding
936 */
937 buf = napi_alloc_frag(DPAA2_ETH_BUF_RAW_SIZE);
938 - if (unlikely(!buf)) {
939 - dev_err(dev, "buffer allocation failed\n");
940 + if (unlikely(!buf))
941 goto err_alloc;
942 - }
943 +
944 buf = PTR_ALIGN(buf, DPAA2_ETH_RX_BUF_ALIGN);
945
946 - addr = dma_map_single(dev, buf, DPAA2_ETH_RX_BUFFER_SIZE,
947 + addr = dma_map_single(dev, buf, DPAA2_ETH_RX_BUF_SIZE,
948 DMA_FROM_DEVICE);
949 - if (unlikely(dma_mapping_error(dev, addr))) {
950 - dev_err(dev, "dma_map_single() failed\n");
951 + if (unlikely(dma_mapping_error(dev, addr)))
952 goto err_map;
953 - }
954 +
955 buf_array[i] = addr;
956
957 /* tracing point */
958 trace_dpaa2_eth_buf_seed(priv->net_dev,
959 buf, DPAA2_ETH_BUF_RAW_SIZE,
960 - addr, DPAA2_ETH_RX_BUFFER_SIZE,
961 + addr, DPAA2_ETH_RX_BUF_SIZE,
962 bpid);
963 }
964
965 @@ -850,59 +823,57 @@ err_alloc:
966 return 0;
967 }
968
969 -static int dpaa2_dpbp_seed(struct dpaa2_eth_priv *priv, u16 bpid)
970 +static int seed_pool(struct dpaa2_eth_priv *priv, u16 bpid)
971 {
972 int i, j;
973 int new_count;
974
975 /* This is the lazy seeding of Rx buffer pools.
976 - * dpaa2_bp_add_7() is also used on the Rx hotpath and calls
977 + * add_bufs() is also used on the Rx hotpath and calls
978 * napi_alloc_frag(). The trouble with that is that it in turn ends up
979 * calling this_cpu_ptr(), which mandates execution in atomic context.
980 * Rather than splitting up the code, do a one-off preempt disable.
981 */
982 preempt_disable();
983 for (j = 0; j < priv->num_channels; j++) {
984 - for (i = 0; i < DPAA2_ETH_NUM_BUFS; i += 7) {
985 - new_count = dpaa2_bp_add_7(priv, bpid);
986 + for (i = 0; i < DPAA2_ETH_NUM_BUFS;
987 + i += DPAA2_ETH_BUFS_PER_CMD) {
988 + new_count = add_bufs(priv, bpid);
989 priv->channel[j]->buf_count += new_count;
990
991 - if (new_count < 7) {
992 + if (new_count < DPAA2_ETH_BUFS_PER_CMD) {
993 preempt_enable();
994 - goto out_of_memory;
995 + return -ENOMEM;
996 }
997 }
998 }
999 preempt_enable();
1000
1001 return 0;
1002 -
1003 -out_of_memory:
1004 - return -ENOMEM;
1005 }
1006
1007 /**
1008 * Drain the specified number of buffers from the DPNI's private buffer pool.
1009 - * @count must not exceeed 7
1010 + * @count must not exceed DPAA2_ETH_BUFS_PER_CMD
1011 */
1012 -static void dpaa2_dpbp_drain_cnt(struct dpaa2_eth_priv *priv, int count)
1013 +static void drain_bufs(struct dpaa2_eth_priv *priv, int count)
1014 {
1015 struct device *dev = priv->net_dev->dev.parent;
1016 - u64 buf_array[7];
1017 + u64 buf_array[DPAA2_ETH_BUFS_PER_CMD];
1018 void *vaddr;
1019 int ret, i;
1020
1021 do {
1022 ret = dpaa2_io_service_acquire(NULL, priv->dpbp_attrs.bpid,
1023 - buf_array, count);
1024 + buf_array, count);
1025 if (ret < 0) {
1026 - pr_err("dpaa2_io_service_acquire() failed\n");
1027 + netdev_err(priv->net_dev, "dpaa2_io_service_acquire() failed\n");
1028 return;
1029 }
1030 for (i = 0; i < ret; i++) {
1031 /* Same logic as on regular Rx path */
1032 dma_unmap_single(dev, buf_array[i],
1033 - DPAA2_ETH_RX_BUFFER_SIZE,
1034 + DPAA2_ETH_RX_BUF_SIZE,
1035 DMA_FROM_DEVICE);
1036 vaddr = phys_to_virt(buf_array[i]);
1037 put_page(virt_to_head_page(vaddr));
1038 @@ -910,12 +881,12 @@ static void dpaa2_dpbp_drain_cnt(struct
1039 } while (ret);
1040 }
1041
1042 -static void __dpaa2_dpbp_free(struct dpaa2_eth_priv *priv)
1043 +static void drain_pool(struct dpaa2_eth_priv *priv)
1044 {
1045 int i;
1046
1047 - dpaa2_dpbp_drain_cnt(priv, 7);
1048 - dpaa2_dpbp_drain_cnt(priv, 1);
1049 + drain_bufs(priv, DPAA2_ETH_BUFS_PER_CMD);
1050 + drain_bufs(priv, 1);
1051
1052 for (i = 0; i < priv->num_channels; i++)
1053 priv->channel[i]->buf_count = 0;
1054 @@ -924,50 +895,55 @@ static void __dpaa2_dpbp_free(struct dpa
1055 /* Function is called from softirq context only, so we don't need to guard
1056 * the access to percpu count
1057 */
1058 -static int dpaa2_dpbp_refill(struct dpaa2_eth_priv *priv,
1059 - struct dpaa2_eth_channel *ch,
1060 - u16 bpid)
1061 +static int refill_pool(struct dpaa2_eth_priv *priv,
1062 + struct dpaa2_eth_channel *ch,
1063 + u16 bpid)
1064 {
1065 int new_count;
1066 - int err = 0;
1067
1068 - if (unlikely(ch->buf_count < DPAA2_ETH_REFILL_THRESH)) {
1069 - do {
1070 - new_count = dpaa2_bp_add_7(priv, bpid);
1071 - if (unlikely(!new_count)) {
1072 - /* Out of memory; abort for now, we'll
1073 - * try later on
1074 - */
1075 - break;
1076 - }
1077 - ch->buf_count += new_count;
1078 - } while (ch->buf_count < DPAA2_ETH_NUM_BUFS);
1079 + if (likely(ch->buf_count >= DPAA2_ETH_REFILL_THRESH))
1080 + return 0;
1081
1082 - if (unlikely(ch->buf_count < DPAA2_ETH_NUM_BUFS))
1083 - err = -ENOMEM;
1084 - }
1085 + do {
1086 + new_count = add_bufs(priv, bpid);
1087 + if (unlikely(!new_count)) {
1088 + /* Out of memory; abort for now, we'll try later on */
1089 + break;
1090 + }
1091 + ch->buf_count += new_count;
1092 + } while (ch->buf_count < DPAA2_ETH_NUM_BUFS);
1093
1094 - return err;
1095 + if (unlikely(ch->buf_count < DPAA2_ETH_NUM_BUFS))
1096 + return -ENOMEM;
1097 +
1098 + return 0;
1099 }
1100
1101 -static int __dpaa2_eth_pull_channel(struct dpaa2_eth_channel *ch)
1102 +static int pull_channel(struct dpaa2_eth_channel *ch)
1103 {
1104 int err;
1105 int dequeues = -1;
1106 - struct dpaa2_eth_priv *priv = ch->priv;
1107
1108 /* Retry while portal is busy */
1109 do {
1110 err = dpaa2_io_service_pull_channel(NULL, ch->ch_id, ch->store);
1111 dequeues++;
1112 + cpu_relax();
1113 } while (err == -EBUSY);
1114 - if (unlikely(err))
1115 - netdev_err(priv->net_dev, "dpaa2_io_service_pull err %d", err);
1116
1117 ch->stats.dequeue_portal_busy += dequeues;
1118 + if (unlikely(err))
1119 + ch->stats.pull_err++;
1120 +
1121 return err;
1122 }
1123
1124 +/* NAPI poll routine
1125 + *
1126 + * Frames are dequeued from the QMan channel associated with this NAPI context.
1127 + * Rx, Tx confirmation and (if configured) Rx error frames all count
1128 + * towards the NAPI budget.
1129 + */
1130 static int dpaa2_eth_poll(struct napi_struct *napi, int budget)
1131 {
1132 struct dpaa2_eth_channel *ch;
1133 @@ -978,32 +954,32 @@ static int dpaa2_eth_poll(struct napi_st
1134 ch = container_of(napi, struct dpaa2_eth_channel, napi);
1135 priv = ch->priv;
1136
1137 - __dpaa2_eth_pull_channel(ch);
1138 + while (cleaned < budget) {
1139 + err = pull_channel(ch);
1140 + if (unlikely(err))
1141 + break;
1142
1143 - do {
1144 /* Refill pool if appropriate */
1145 - dpaa2_dpbp_refill(priv, ch, priv->dpbp_attrs.bpid);
1146 + refill_pool(priv, ch, priv->dpbp_attrs.bpid);
1147
1148 - store_cleaned = dpaa2_eth_store_consume(ch);
1149 + store_cleaned = consume_frames(ch);
1150 cleaned += store_cleaned;
1151
1152 + /* If we have enough budget left for a full store,
1153 + * try a new pull dequeue, otherwise we're done here
1154 + */
1155 if (store_cleaned == 0 ||
1156 cleaned > budget - DPAA2_ETH_STORE_SIZE)
1157 break;
1158 -
1159 - /* Try to dequeue some more */
1160 - err = __dpaa2_eth_pull_channel(ch);
1161 - if (unlikely(err))
1162 - break;
1163 - } while (1);
1164 + }
1165
1166 if (cleaned < budget) {
1167 napi_complete_done(napi, cleaned);
1168 - err = dpaa2_io_service_rearm(NULL, &ch->nctx);
1169 - if (unlikely(err))
1170 - netdev_err(priv->net_dev,
1171 - "Notif rearm failed for channel %d\n",
1172 - ch->ch_id);
1173 + /* Re-enable data available notifications */
1174 + do {
1175 + err = dpaa2_io_service_rearm(NULL, &ch->nctx);
1176 + cpu_relax();
1177 + } while (err == -EBUSY);
1178 }
1179
1180 ch->stats.frames += cleaned;
1181 @@ -1011,7 +987,7 @@ static int dpaa2_eth_poll(struct napi_st
1182 return cleaned;
1183 }
1184
1185 -static void dpaa2_eth_napi_enable(struct dpaa2_eth_priv *priv)
1186 +static void enable_ch_napi(struct dpaa2_eth_priv *priv)
1187 {
1188 struct dpaa2_eth_channel *ch;
1189 int i;
1190 @@ -1022,7 +998,7 @@ static void dpaa2_eth_napi_enable(struct
1191 }
1192 }
1193
1194 -static void dpaa2_eth_napi_disable(struct dpaa2_eth_priv *priv)
1195 +static void disable_ch_napi(struct dpaa2_eth_priv *priv)
1196 {
1197 struct dpaa2_eth_channel *ch;
1198 int i;
1199 @@ -1033,7 +1009,7 @@ static void dpaa2_eth_napi_disable(struc
1200 }
1201 }
1202
1203 -static int dpaa2_link_state_update(struct dpaa2_eth_priv *priv)
1204 +static int link_state_update(struct dpaa2_eth_priv *priv)
1205 {
1206 struct dpni_link_state state;
1207 int err;
1208 @@ -1069,7 +1045,7 @@ static int dpaa2_eth_open(struct net_dev
1209 struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
1210 int err;
1211
1212 - err = dpaa2_dpbp_seed(priv, priv->dpbp_attrs.bpid);
1213 + err = seed_pool(priv, priv->dpbp_attrs.bpid);
1214 if (err) {
1215 /* Not much to do; the buffer pool, though not filled up,
1216 * may still contain some buffers which would enable us
1217 @@ -1084,7 +1060,7 @@ static int dpaa2_eth_open(struct net_dev
1218 * immediately after dpni_enable();
1219 */
1220 netif_tx_stop_all_queues(net_dev);
1221 - dpaa2_eth_napi_enable(priv);
1222 + enable_ch_napi(priv);
1223 /* Also, explicitly set carrier off, otherwise netif_carrier_ok() will
1224 * return true and cause 'ip link show' to report the LOWER_UP flag,
1225 * even though the link notification wasn't even received.
1226 @@ -1093,16 +1069,16 @@ static int dpaa2_eth_open(struct net_dev
1227
1228 err = dpni_enable(priv->mc_io, 0, priv->mc_token);
1229 if (err < 0) {
1230 - dev_err(net_dev->dev.parent, "dpni_enable() failed\n");
1231 + netdev_err(net_dev, "dpni_enable() failed\n");
1232 goto enable_err;
1233 }
1234
1235 /* If the DPMAC object has already processed the link up interrupt,
1236 * we have to learn the link state ourselves.
1237 */
1238 - err = dpaa2_link_state_update(priv);
1239 + err = link_state_update(priv);
1240 if (err < 0) {
1241 - dev_err(net_dev->dev.parent, "Can't update link state\n");
1242 + netdev_err(net_dev, "Can't update link state\n");
1243 goto link_state_err;
1244 }
1245
1246 @@ -1110,26 +1086,84 @@ static int dpaa2_eth_open(struct net_dev
1247
1248 link_state_err:
1249 enable_err:
1250 - dpaa2_eth_napi_disable(priv);
1251 - __dpaa2_dpbp_free(priv);
1252 + disable_ch_napi(priv);
1253 + drain_pool(priv);
1254 return err;
1255 }
1256
1257 +/* The DPIO store must be empty when we call this,
1258 + * at the end of every NAPI cycle.
1259 + */
1260 +static u32 drain_channel(struct dpaa2_eth_priv *priv,
1261 + struct dpaa2_eth_channel *ch)
1262 +{
1263 + u32 drained = 0, total = 0;
1264 +
1265 + do {
1266 + pull_channel(ch);
1267 + drained = consume_frames(ch);
1268 + total += drained;
1269 + } while (drained);
1270 +
1271 + return total;
1272 +}
1273 +
1274 +static u32 drain_ingress_frames(struct dpaa2_eth_priv *priv)
1275 +{
1276 + struct dpaa2_eth_channel *ch;
1277 + int i;
1278 + u32 drained = 0;
1279 +
1280 + for (i = 0; i < priv->num_channels; i++) {
1281 + ch = priv->channel[i];
1282 + drained += drain_channel(priv, ch);
1283 + }
1284 +
1285 + return drained;
1286 +}
1287 +
1288 static int dpaa2_eth_stop(struct net_device *net_dev)
1289 {
1290 struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
1291 + int dpni_enabled;
1292 + int retries = 10;
1293 + u32 drained;
1294
1295 - /* Stop Tx and Rx traffic */
1296 netif_tx_stop_all_queues(net_dev);
1297 netif_carrier_off(net_dev);
1298 - dpni_disable(priv->mc_io, 0, priv->mc_token);
1299
1300 - msleep(500);
1301 + /* Loop while dpni_disable() attempts to drain the egress FQs
1302 + * and confirm them back to us.
1303 + */
1304 + do {
1305 + dpni_disable(priv->mc_io, 0, priv->mc_token);
1306 + dpni_is_enabled(priv->mc_io, 0, priv->mc_token, &dpni_enabled);
1307 + if (dpni_enabled)
1308 + /* Allow the MC some slack */
1309 + msleep(100);
1310 + } while (dpni_enabled && --retries);
1311 + if (!retries) {
1312 + netdev_warn(net_dev, "Retry count exceeded disabling DPNI\n");
1313 + /* Must go on and disable NAPI nonetheless, so we don't crash at
1314 + * the next "ifconfig up"
1315 + */
1316 + }
1317
1318 - dpaa2_eth_napi_disable(priv);
1319 - msleep(100);
1320 + /* Wait for NAPI to complete on every core and disable it.
1321 + * In particular, this will also prevent NAPI from being rescheduled if
1322 + * a new CDAN is serviced, effectively discarding the CDAN. We therefore
1323 + * don't even need to disarm the channels, except perhaps for the case
1324 + * of a huge coalescing value.
1325 + */
1326 + disable_ch_napi(priv);
1327 +
1328 + /* Manually drain the Rx and TxConf queues */
1329 + drained = drain_ingress_frames(priv);
1330 + if (drained)
1331 + netdev_dbg(net_dev, "Drained %d frames.\n", drained);
1332
1333 - __dpaa2_dpbp_free(priv);
1334 + /* Empty the buffer pool */
1335 + drain_pool(priv);
1336
1337 return 0;
1338 }
1339 @@ -1138,7 +1172,7 @@ static int dpaa2_eth_init(struct net_dev
1340 {
1341 u64 supported = 0;
1342 u64 not_supported = 0;
1343 - const struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
1344 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
1345 u32 options = priv->dpni_attrs.options;
1346
1347 /* Capabilities listing */
1348 @@ -1230,7 +1264,7 @@ static int dpaa2_eth_change_mtu(struct n
1349 err = dpni_set_max_frame_length(priv->mc_io, 0, priv->mc_token,
1350 (u16)DPAA2_ETH_L2_MAX_FRM(mtu));
1351 if (err) {
1352 - netdev_err(net_dev, "dpni_set_mfl() failed\n");
1353 + netdev_err(net_dev, "dpni_set_max_frame_length() failed\n");
1354 return err;
1355 }
1356
1357 @@ -1238,18 +1272,11 @@ static int dpaa2_eth_change_mtu(struct n
1358 return 0;
1359 }
1360
1361 -/* Convenience macro to make code littered with error checking more readable */
1362 -#define DPAA2_ETH_WARN_IF_ERR(err, netdevp, format, ...) \
1363 -do { \
1364 - if (err) \
1365 - netdev_warn(netdevp, format, ##__VA_ARGS__); \
1366 -} while (0)
1367 -
1368 /* Copy mac unicast addresses from @net_dev to @priv.
1369 * Its sole purpose is to make dpaa2_eth_set_rx_mode() more readable.
1370 */
1371 -static void _dpaa2_eth_hw_add_uc_addr(const struct net_device *net_dev,
1372 - struct dpaa2_eth_priv *priv)
1373 +static void add_uc_hw_addr(const struct net_device *net_dev,
1374 + struct dpaa2_eth_priv *priv)
1375 {
1376 struct netdev_hw_addr *ha;
1377 int err;
1378 @@ -1257,17 +1284,18 @@ static void _dpaa2_eth_hw_add_uc_addr(co
1379 netdev_for_each_uc_addr(ha, net_dev) {
1380 err = dpni_add_mac_addr(priv->mc_io, 0, priv->mc_token,
1381 ha->addr);
1382 - DPAA2_ETH_WARN_IF_ERR(err, priv->net_dev,
1383 - "Could not add ucast MAC %pM to the filtering table (err %d)\n",
1384 - ha->addr, err);
1385 + if (err)
1386 + netdev_warn(priv->net_dev,
1387 + "Could not add ucast MAC %pM to the filtering table (err %d)\n",
1388 + ha->addr, err);
1389 }
1390 }
1391
1392 /* Copy mac multicast addresses from @net_dev to @priv
1393 * Its sole purpose is to make dpaa2_eth_set_rx_mode() more readable.
1394 */
1395 -static void _dpaa2_eth_hw_add_mc_addr(const struct net_device *net_dev,
1396 - struct dpaa2_eth_priv *priv)
1397 +static void add_mc_hw_addr(const struct net_device *net_dev,
1398 + struct dpaa2_eth_priv *priv)
1399 {
1400 struct netdev_hw_addr *ha;
1401 int err;
1402 @@ -1275,9 +1303,10 @@ static void _dpaa2_eth_hw_add_mc_addr(co
1403 netdev_for_each_mc_addr(ha, net_dev) {
1404 err = dpni_add_mac_addr(priv->mc_io, 0, priv->mc_token,
1405 ha->addr);
1406 - DPAA2_ETH_WARN_IF_ERR(err, priv->net_dev,
1407 - "Could not add mcast MAC %pM to the filtering table (err %d)\n",
1408 - ha->addr, err);
1409 + if (err)
1410 + netdev_warn(priv->net_dev,
1411 + "Could not add mcast MAC %pM to the filtering table (err %d)\n",
1412 + ha->addr, err);
1413 }
1414 }
1415
1416 @@ -1296,11 +1325,11 @@ static void dpaa2_eth_set_rx_mode(struct
1417 /* Basic sanity checks; these probably indicate a misconfiguration */
1418 if (!(options & DPNI_OPT_UNICAST_FILTER) && max_uc != 0)
1419 netdev_info(net_dev,
1420 - "max_unicast_filters=%d, you must have DPNI_OPT_UNICAST_FILTER in the DPL\n",
1421 + "max_unicast_filters=%d, DPNI_OPT_UNICAST_FILTER option must be enabled\n",
1422 max_uc);
1423 if (!(options & DPNI_OPT_MULTICAST_FILTER) && max_mc != 0)
1424 netdev_info(net_dev,
1425 - "max_multicast_filters=%d, you must have DPNI_OPT_MULTICAST_FILTER in the DPL\n",
1426 + "max_multicast_filters=%d, DPNI_OPT_MULTICAST_FILTER option must be enabled\n",
1427 max_mc);
1428
1429 /* Force promiscuous if the uc or mc counts exceed our capabilities. */
1430 @@ -1318,9 +1347,9 @@ static void dpaa2_eth_set_rx_mode(struct
1431 }
1432
1433 /* Adjust promisc settings due to flag combinations */
1434 - if (net_dev->flags & IFF_PROMISC) {
1435 + if (net_dev->flags & IFF_PROMISC)
1436 goto force_promisc;
1437 - } else if (net_dev->flags & IFF_ALLMULTI) {
1438 + if (net_dev->flags & IFF_ALLMULTI) {
1439 /* First, rebuild unicast filtering table. This should be done
1440 * in promisc mode, in order to avoid frame loss while we
1441 * progressively add entries to the table.
1442 @@ -1329,16 +1358,19 @@ static void dpaa2_eth_set_rx_mode(struct
1443 * nonetheless.
1444 */
1445 err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 1);
1446 - DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't set uc promisc\n");
1447 + if (err)
1448 + netdev_warn(net_dev, "Can't set uc promisc\n");
1449
1450 /* Actual uc table reconstruction. */
1451 err = dpni_clear_mac_filters(mc_io, 0, mc_token, 1, 0);
1452 - DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't clear uc filters\n");
1453 - _dpaa2_eth_hw_add_uc_addr(net_dev, priv);
1454 + if (err)
1455 + netdev_warn(net_dev, "Can't clear uc filters\n");
1456 + add_uc_hw_addr(net_dev, priv);
1457
1458 /* Finally, clear uc promisc and set mc promisc as requested. */
1459 err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 0);
1460 - DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't clear uc promisc\n");
1461 + if (err)
1462 + netdev_warn(net_dev, "Can't clear uc promisc\n");
1463 goto force_mc_promisc;
1464 }
1465
1466 @@ -1346,32 +1378,39 @@ static void dpaa2_eth_set_rx_mode(struct
1467 * For now, rebuild mac filtering tables while forcing both of them on.
1468 */
1469 err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 1);
1470 - DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't set uc promisc (%d)\n", err);
1471 + if (err)
1472 + netdev_warn(net_dev, "Can't set uc promisc (%d)\n", err);
1473 err = dpni_set_multicast_promisc(mc_io, 0, mc_token, 1);
1474 - DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't set mc promisc (%d)\n", err);
1475 + if (err)
1476 + netdev_warn(net_dev, "Can't set mc promisc (%d)\n", err);
1477
1478 /* Actual mac filtering tables reconstruction */
1479 err = dpni_clear_mac_filters(mc_io, 0, mc_token, 1, 1);
1480 - DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't clear mac filters\n");
1481 - _dpaa2_eth_hw_add_mc_addr(net_dev, priv);
1482 - _dpaa2_eth_hw_add_uc_addr(net_dev, priv);
1483 + if (err)
1484 + netdev_warn(net_dev, "Can't clear mac filters\n");
1485 + add_mc_hw_addr(net_dev, priv);
1486 + add_uc_hw_addr(net_dev, priv);
1487
1488 /* Now we can clear both ucast and mcast promisc, without risking
1489 * to drop legitimate frames anymore.
1490 */
1491 err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 0);
1492 - DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't clear ucast promisc\n");
1493 + if (err)
1494 + netdev_warn(net_dev, "Can't clear ucast promisc\n");
1495 err = dpni_set_multicast_promisc(mc_io, 0, mc_token, 0);
1496 - DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't clear mcast promisc\n");
1497 + if (err)
1498 + netdev_warn(net_dev, "Can't clear mcast promisc\n");
1499
1500 return;
1501
1502 force_promisc:
1503 err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 1);
1504 - DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't set ucast promisc\n");
1505 + if (err)
1506 + netdev_warn(net_dev, "Can't set ucast promisc\n");
1507 force_mc_promisc:
1508 err = dpni_set_multicast_promisc(mc_io, 0, mc_token, 1);
1509 - DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't set mcast promisc\n");
1510 + if (err)
1511 + netdev_warn(net_dev, "Can't set mcast promisc\n");
1512 }
1513
1514 static int dpaa2_eth_set_features(struct net_device *net_dev,
1515 @@ -1379,20 +1418,19 @@ static int dpaa2_eth_set_features(struct
1516 {
1517 struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
1518 netdev_features_t changed = features ^ net_dev->features;
1519 + bool enable;
1520 int err;
1521
1522 if (changed & NETIF_F_RXCSUM) {
1523 - bool enable = !!(features & NETIF_F_RXCSUM);
1524 -
1525 - err = dpaa2_eth_set_rx_csum(priv, enable);
1526 + enable = !!(features & NETIF_F_RXCSUM);
1527 + err = set_rx_csum(priv, enable);
1528 if (err)
1529 return err;
1530 }
1531
1532 if (changed & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) {
1533 - bool enable = !!(features &
1534 - (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM));
1535 - err = dpaa2_eth_set_tx_csum(priv, enable);
1536 + enable = !!(features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM));
1537 + err = set_tx_csum(priv, enable);
1538 if (err)
1539 return err;
1540 }
1541 @@ -1419,9 +1457,9 @@ static int dpaa2_eth_ts_ioctl(struct net
1542 return -ERANGE;
1543 }
1544
1545 - if (config.rx_filter == HWTSTAMP_FILTER_NONE)
1546 + if (config.rx_filter == HWTSTAMP_FILTER_NONE) {
1547 priv->ts_rx_en = false;
1548 - else {
1549 + } else {
1550 priv->ts_rx_en = true;
1551 /* TS is set for all frame types, not only those requested */
1552 config.rx_filter = HWTSTAMP_FILTER_ALL;
1553 @@ -1435,8 +1473,8 @@ static int dpaa2_eth_ioctl(struct net_de
1554 {
1555 if (cmd == SIOCSHWTSTAMP)
1556 return dpaa2_eth_ts_ioctl(dev, rq, cmd);
1557 - else
1558 - return -EINVAL;
1559 +
1560 + return -EINVAL;
1561 }
1562
1563 static const struct net_device_ops dpaa2_eth_ops = {
1564 @@ -1452,7 +1490,7 @@ static const struct net_device_ops dpaa2
1565 .ndo_do_ioctl = dpaa2_eth_ioctl,
1566 };
1567
1568 -static void dpaa2_eth_cdan_cb(struct dpaa2_io_notification_ctx *ctx)
1569 +static void cdan_cb(struct dpaa2_io_notification_ctx *ctx)
1570 {
1571 struct dpaa2_eth_channel *ch;
1572
1573 @@ -1464,37 +1502,9 @@ static void dpaa2_eth_cdan_cb(struct dpa
1574 napi_schedule_irqoff(&ch->napi);
1575 }
1576
1577 -static void dpaa2_eth_setup_fqs(struct dpaa2_eth_priv *priv)
1578 -{
1579 - int i;
1580 -
1581 - /* We have one TxConf FQ per Tx flow */
1582 - for (i = 0; i < priv->dpni_attrs.max_senders; i++) {
1583 - priv->fq[priv->num_fqs].netdev_priv = priv;
1584 - priv->fq[priv->num_fqs].type = DPAA2_TX_CONF_FQ;
1585 - priv->fq[priv->num_fqs].consume = dpaa2_eth_tx_conf;
1586 - priv->fq[priv->num_fqs++].flowid = DPNI_NEW_FLOW_ID;
1587 - }
1588 -
1589 - /* The number of Rx queues (Rx distribution width) may be different from
1590 - * the number of cores.
1591 - * We only support one traffic class for now.
1592 - */
1593 - for (i = 0; i < dpaa2_queue_count(priv); i++) {
1594 - priv->fq[priv->num_fqs].netdev_priv = priv;
1595 - priv->fq[priv->num_fqs].type = DPAA2_RX_FQ;
1596 - priv->fq[priv->num_fqs].consume = dpaa2_eth_rx;
1597 - priv->fq[priv->num_fqs++].flowid = (u16)i;
1598 - }
1599 -
1600 -#ifdef CONFIG_FSL_DPAA2_ETH_USE_ERR_QUEUE
1601 - /* We have exactly one Rx error queue per DPNI */
1602 - priv->fq[priv->num_fqs].netdev_priv = priv;
1603 - priv->fq[priv->num_fqs].type = DPAA2_RX_ERR_FQ;
1604 - priv->fq[priv->num_fqs++].consume = dpaa2_eth_rx_err;
1605 -#endif
1606 -}
1607 -
1608 +/* Verify that the FLIB API version of various MC objects is supported
1609 + * by our driver
1610 + */
1611 static int check_obj_version(struct fsl_mc_device *ls_dev, u16 mc_version)
1612 {
1613 char *name = ls_dev->obj_desc.type;
1614 @@ -1517,8 +1527,7 @@ static int check_obj_version(struct fsl_
1615
1616 /* Check that the FLIB-defined version matches the one reported by MC */
1617 if (mc_version != flib_version) {
1618 - dev_err(dev,
1619 - "%s FLIB version mismatch: MC reports %d, we have %d\n",
1620 + dev_err(dev, "%s FLIB version mismatch: MC reports %d, we have %d\n",
1621 name, mc_version, flib_version);
1622 return -EINVAL;
1623 }
1624 @@ -1534,7 +1543,8 @@ static int check_obj_version(struct fsl_
1625 return 0;
1626 }
1627
1628 -static struct fsl_mc_device *dpaa2_dpcon_setup(struct dpaa2_eth_priv *priv)
1629 +/* Allocate and configure a DPCON object */
1630 +static struct fsl_mc_device *setup_dpcon(struct dpaa2_eth_priv *priv)
1631 {
1632 struct fsl_mc_device *dpcon;
1633 struct device *dev = priv->net_dev->dev.parent;
1634 @@ -1582,8 +1592,8 @@ err_open:
1635 return NULL;
1636 }
1637
1638 -static void dpaa2_dpcon_free(struct dpaa2_eth_priv *priv,
1639 - struct fsl_mc_device *dpcon)
1640 +static void free_dpcon(struct dpaa2_eth_priv *priv,
1641 + struct fsl_mc_device *dpcon)
1642 {
1643 dpcon_disable(priv->mc_io, 0, dpcon->mc_handle);
1644 dpcon_close(priv->mc_io, 0, dpcon->mc_handle);
1645 @@ -1591,7 +1601,7 @@ static void dpaa2_dpcon_free(struct dpaa
1646 }
1647
1648 static struct dpaa2_eth_channel *
1649 -dpaa2_alloc_channel(struct dpaa2_eth_priv *priv)
1650 +alloc_channel(struct dpaa2_eth_priv *priv)
1651 {
1652 struct dpaa2_eth_channel *channel;
1653 struct dpcon_attr attr;
1654 @@ -1599,12 +1609,10 @@ dpaa2_alloc_channel(struct dpaa2_eth_pri
1655 int err;
1656
1657 channel = kzalloc(sizeof(*channel), GFP_ATOMIC);
1658 - if (!channel) {
1659 - dev_err(dev, "Memory allocation failed\n");
1660 + if (!channel)
1661 return NULL;
1662 - }
1663
1664 - channel->dpcon = dpaa2_dpcon_setup(priv);
1665 + channel->dpcon = setup_dpcon(priv);
1666 if (!channel->dpcon)
1667 goto err_setup;
1668
1669 @@ -1622,20 +1630,23 @@ dpaa2_alloc_channel(struct dpaa2_eth_pri
1670 return channel;
1671
1672 err_get_attr:
1673 - dpaa2_dpcon_free(priv, channel->dpcon);
1674 + free_dpcon(priv, channel->dpcon);
1675 err_setup:
1676 kfree(channel);
1677 return NULL;
1678 }
1679
1680 -static void dpaa2_free_channel(struct dpaa2_eth_priv *priv,
1681 - struct dpaa2_eth_channel *channel)
1682 +static void free_channel(struct dpaa2_eth_priv *priv,
1683 + struct dpaa2_eth_channel *channel)
1684 {
1685 - dpaa2_dpcon_free(priv, channel->dpcon);
1686 + free_dpcon(priv, channel->dpcon);
1687 kfree(channel);
1688 }
1689
1690 -static int dpaa2_dpio_setup(struct dpaa2_eth_priv *priv)
1691 +/* DPIO setup: allocate and configure QBMan channels, setup core affinity
1692 + * and register data availability notifications
1693 + */
1694 +static int setup_dpio(struct dpaa2_eth_priv *priv)
1695 {
1696 struct dpaa2_io_notification_ctx *nctx;
1697 struct dpaa2_eth_channel *channel;
1698 @@ -1652,7 +1663,7 @@ static int dpaa2_dpio_setup(struct dpaa2
1699 cpumask_clear(&priv->dpio_cpumask);
1700 for_each_online_cpu(i) {
1701 /* Try to allocate a channel */
1702 - channel = dpaa2_alloc_channel(priv);
1703 + channel = alloc_channel(priv);
1704 if (!channel)
1705 goto err_alloc_ch;
1706
1707 @@ -1660,7 +1671,7 @@ static int dpaa2_dpio_setup(struct dpaa2
1708
1709 nctx = &channel->nctx;
1710 nctx->is_cdan = 1;
1711 - nctx->cb = dpaa2_eth_cdan_cb;
1712 + nctx->cb = cdan_cb;
1713 nctx->id = channel->ch_id;
1714 nctx->desired_cpu = i;
1715
1716 @@ -1671,7 +1682,7 @@ static int dpaa2_dpio_setup(struct dpaa2
1717 /* This core doesn't have an affine DPIO, but there's
1718 * a chance another one does, so keep trying
1719 */
1720 - dpaa2_free_channel(priv, channel);
1721 + free_channel(priv, channel);
1722 continue;
1723 }
1724
1725 @@ -1693,7 +1704,7 @@ static int dpaa2_dpio_setup(struct dpaa2
1726 cpumask_set_cpu(i, &priv->dpio_cpumask);
1727 priv->num_channels++;
1728
1729 - if (priv->num_channels == dpaa2_max_channels(priv))
1730 + if (priv->num_channels == dpaa2_eth_max_channels(priv))
1731 break;
1732 }
1733
1734 @@ -1706,7 +1717,7 @@ static int dpaa2_dpio_setup(struct dpaa2
1735
1736 err_set_cdan:
1737 dpaa2_io_service_deregister(NULL, nctx);
1738 - dpaa2_free_channel(priv, channel);
1739 + free_channel(priv, channel);
1740 err_alloc_ch:
1741 if (cpumask_empty(&priv->dpio_cpumask)) {
1742 dev_err(dev, "No cpu with an affine DPIO/DPCON\n");
1743 @@ -1717,7 +1728,7 @@ err_alloc_ch:
1744 return 0;
1745 }
1746
1747 -static void dpaa2_dpio_free(struct dpaa2_eth_priv *priv)
1748 +static void free_dpio(struct dpaa2_eth_priv *priv)
1749 {
1750 int i;
1751 struct dpaa2_eth_channel *ch;
1752 @@ -1726,12 +1737,12 @@ static void dpaa2_dpio_free(struct dpaa2
1753 for (i = 0; i < priv->num_channels; i++) {
1754 ch = priv->channel[i];
1755 dpaa2_io_service_deregister(NULL, &ch->nctx);
1756 - dpaa2_free_channel(priv, ch);
1757 + free_channel(priv, ch);
1758 }
1759 }
1760
1761 -static struct dpaa2_eth_channel *
1762 -dpaa2_get_channel_by_cpu(struct dpaa2_eth_priv *priv, int cpu)
1763 +static struct dpaa2_eth_channel *get_affine_channel(struct dpaa2_eth_priv *priv,
1764 + int cpu)
1765 {
1766 struct device *dev = priv->net_dev->dev.parent;
1767 int i;
1768 @@ -1748,11 +1759,11 @@ dpaa2_get_channel_by_cpu(struct dpaa2_et
1769 return priv->channel[0];
1770 }
1771
1772 -static void dpaa2_set_fq_affinity(struct dpaa2_eth_priv *priv)
1773 +static void set_fq_affinity(struct dpaa2_eth_priv *priv)
1774 {
1775 struct device *dev = priv->net_dev->dev.parent;
1776 struct dpaa2_eth_fq *fq;
1777 - int rx_cpu, txconf_cpu;
1778 + int rx_cpu, txc_cpu;
1779 int i;
1780
1781 /* For each FQ, pick one channel/CPU to deliver frames to.
1782 @@ -1760,7 +1771,7 @@ static void dpaa2_set_fq_affinity(struct
1783 * through direct user intervention.
1784 */
1785 rx_cpu = cpumask_first(&priv->dpio_cpumask);
1786 - txconf_cpu = cpumask_first(&priv->txconf_cpumask);
1787 + txc_cpu = cpumask_first(&priv->txconf_cpumask);
1788
1789 for (i = 0; i < priv->num_fqs; i++) {
1790 fq = &priv->fq[i];
1791 @@ -1768,20 +1779,56 @@ static void dpaa2_set_fq_affinity(struct
1792 case DPAA2_RX_FQ:
1793 case DPAA2_RX_ERR_FQ:
1794 fq->target_cpu = rx_cpu;
1795 - cpumask_rr(rx_cpu, &priv->dpio_cpumask);
1796 + rx_cpu = cpumask_next(rx_cpu, &priv->dpio_cpumask);
1797 + if (rx_cpu >= nr_cpu_ids)
1798 + rx_cpu = cpumask_first(&priv->dpio_cpumask);
1799 break;
1800 case DPAA2_TX_CONF_FQ:
1801 - fq->target_cpu = txconf_cpu;
1802 - cpumask_rr(txconf_cpu, &priv->txconf_cpumask);
1803 + fq->target_cpu = txc_cpu;
1804 + txc_cpu = cpumask_next(txc_cpu, &priv->txconf_cpumask);
1805 + if (txc_cpu >= nr_cpu_ids)
1806 + txc_cpu = cpumask_first(&priv->txconf_cpumask);
1807 break;
1808 default:
1809 dev_err(dev, "Unknown FQ type: %d\n", fq->type);
1810 }
1811 - fq->channel = dpaa2_get_channel_by_cpu(priv, fq->target_cpu);
1812 + fq->channel = get_affine_channel(priv, fq->target_cpu);
1813 + }
1814 +}
1815 +
1816 +static void setup_fqs(struct dpaa2_eth_priv *priv)
1817 +{
1818 + int i;
1819 +
1820 + /* We have one TxConf FQ per Tx flow */
1821 + for (i = 0; i < priv->dpni_attrs.max_senders; i++) {
1822 + priv->fq[priv->num_fqs].type = DPAA2_TX_CONF_FQ;
1823 + priv->fq[priv->num_fqs].consume = dpaa2_eth_tx_conf;
1824 + priv->fq[priv->num_fqs++].flowid = DPNI_NEW_FLOW_ID;
1825 + }
1826 +
1827 + /* The number of Rx queues (Rx distribution width) may be different from
1828 + * the number of cores.
1829 + * We only support one traffic class for now.
1830 + */
1831 + for (i = 0; i < dpaa2_eth_queue_count(priv); i++) {
1832 + priv->fq[priv->num_fqs].type = DPAA2_RX_FQ;
1833 + priv->fq[priv->num_fqs].consume = dpaa2_eth_rx;
1834 + priv->fq[priv->num_fqs++].flowid = (u16)i;
1835 }
1836 +
1837 +#ifdef CONFIG_FSL_DPAA2_ETH_USE_ERR_QUEUE
1838 + /* We have exactly one Rx error queue per DPNI */
1839 + priv->fq[priv->num_fqs].type = DPAA2_RX_ERR_FQ;
1840 + priv->fq[priv->num_fqs++].consume = dpaa2_eth_rx_err;
1841 +#endif
1842 +
1843 + /* For each FQ, decide on which core to process incoming frames */
1844 + set_fq_affinity(priv);
1845 }
1846
1847 -static int dpaa2_dpbp_setup(struct dpaa2_eth_priv *priv)
1848 +/* Allocate and configure one buffer pool for each interface */
1849 +static int setup_dpbp(struct dpaa2_eth_priv *priv)
1850 {
1851 int err;
1852 struct fsl_mc_device *dpbp_dev;
1853 @@ -1833,15 +1880,16 @@ err_open:
1854 return err;
1855 }
1856
1857 -static void dpaa2_dpbp_free(struct dpaa2_eth_priv *priv)
1858 +static void free_dpbp(struct dpaa2_eth_priv *priv)
1859 {
1860 - __dpaa2_dpbp_free(priv);
1861 + drain_pool(priv);
1862 dpbp_disable(priv->mc_io, 0, priv->dpbp_dev->mc_handle);
1863 dpbp_close(priv->mc_io, 0, priv->dpbp_dev->mc_handle);
1864 fsl_mc_object_free(priv->dpbp_dev);
1865 }
1866
1867 -static int dpaa2_dpni_setup(struct fsl_mc_device *ls_dev)
1868 +/* Configure the DPNI object this interface is associated with */
1869 +static int setup_dpni(struct fsl_mc_device *ls_dev)
1870 {
1871 struct device *dev = &ls_dev->dev;
1872 struct dpaa2_eth_priv *priv;
1873 @@ -1854,7 +1902,7 @@ static int dpaa2_dpni_setup(struct fsl_m
1874
1875 priv->dpni_id = ls_dev->obj_desc.id;
1876
1877 - /* and get a handle for the DPNI this interface is associate with */
1878 + /* get a handle for the DPNI object */
1879 err = dpni_open(priv->mc_io, 0, priv->dpni_id, &priv->mc_token);
1880 if (err) {
1881 dev_err(dev, "dpni_open() failed\n");
1882 @@ -1864,7 +1912,10 @@ static int dpaa2_dpni_setup(struct fsl_m
1883 ls_dev->mc_io = priv->mc_io;
1884 ls_dev->mc_handle = priv->mc_token;
1885
1886 - dma_mem = kzalloc(DPAA2_EXT_CFG_SIZE, GFP_DMA | GFP_KERNEL);
1887 + /* Map a memory region which will be used by MC to pass us an
1888 + * attribute structure
1889 + */
1890 + dma_mem = kzalloc(DPAA2_EXT_CFG_SIZE, GFP_DMA | GFP_KERNEL);
1891 if (!dma_mem)
1892 goto err_alloc;
1893
1894 @@ -1878,10 +1929,15 @@ static int dpaa2_dpni_setup(struct fsl_m
1895
1896 err = dpni_get_attributes(priv->mc_io, 0, priv->mc_token,
1897 &priv->dpni_attrs);
1898 +
1899 + /* We'll check the return code after unmapping, as we need to
1900 + * do this anyway
1901 + */
1902 + dma_unmap_single(dev, priv->dpni_attrs.ext_cfg_iova,
1903 + DPAA2_EXT_CFG_SIZE, DMA_FROM_DEVICE);
1904 +
1905 if (err) {
1906 dev_err(dev, "dpni_get_attributes() failed (err=%d)\n", err);
1907 - dma_unmap_single(dev, priv->dpni_attrs.ext_cfg_iova,
1908 - DPAA2_EXT_CFG_SIZE, DMA_FROM_DEVICE);
1909 goto err_get_attr;
1910 }
1911
1912 @@ -1889,9 +1945,6 @@ static int dpaa2_dpni_setup(struct fsl_m
1913 if (err)
1914 goto err_dpni_ver;
1915
1916 - dma_unmap_single(dev, priv->dpni_attrs.ext_cfg_iova,
1917 - DPAA2_EXT_CFG_SIZE, DMA_FROM_DEVICE);
1918 -
1919 memset(&priv->dpni_ext_cfg, 0, sizeof(priv->dpni_ext_cfg));
1920 err = dpni_extract_extended_cfg(&priv->dpni_ext_cfg, dma_mem);
1921 if (err) {
1922 @@ -1909,15 +1962,15 @@ static int dpaa2_dpni_setup(struct fsl_m
1923 priv->buf_layout.private_data_size = DPAA2_ETH_SWA_SIZE;
1924 /* HW erratum mandates data alignment in multiples of 256 */
1925 priv->buf_layout.data_align = DPAA2_ETH_RX_BUF_ALIGN;
1926 - /* ...rx, ... */
1927 +
1928 + /* rx buffer */
1929 err = dpni_set_rx_buffer_layout(priv->mc_io, 0, priv->mc_token,
1930 &priv->buf_layout);
1931 if (err) {
1932 dev_err(dev, "dpni_set_rx_buffer_layout() failed");
1933 goto err_buf_layout;
1934 }
1935 - /* ... tx, ... */
1936 - /* remove Rx-only options */
1937 + /* tx buffer: remove Rx-only options */
1938 priv->buf_layout.options &= ~(DPNI_BUF_LAYOUT_OPT_DATA_ALIGN |
1939 DPNI_BUF_LAYOUT_OPT_PARSER_RESULT);
1940 err = dpni_set_tx_buffer_layout(priv->mc_io, 0, priv->mc_token,
1941 @@ -1926,7 +1979,7 @@ static int dpaa2_dpni_setup(struct fsl_m
1942 dev_err(dev, "dpni_set_tx_buffer_layout() failed");
1943 goto err_buf_layout;
1944 }
1945 - /* ... tx-confirm. */
1946 + /* tx-confirm: same options as tx */
1947 priv->buf_layout.options &= ~DPNI_BUF_LAYOUT_OPT_PRIVATE_DATA_SIZE;
1948 priv->buf_layout.options |= DPNI_BUF_LAYOUT_OPT_TIMESTAMP;
1949 priv->buf_layout.pass_timestamp = 1;
1950 @@ -1946,8 +1999,9 @@ static int dpaa2_dpni_setup(struct fsl_m
1951 goto err_data_offset;
1952 }
1953
1954 - /* Warn in case TX data offset is not multiple of 64 bytes. */
1955 - WARN_ON(priv->tx_data_offset % 64);
1956 + if ((priv->tx_data_offset % 64) != 0)
1957 + dev_warn(dev, "Tx data offset (%d) not a multiple of 64B",
1958 + priv->tx_data_offset);
1959
1960 /* Accommodate SWA space. */
1961 priv->tx_data_offset += DPAA2_ETH_SWA_SIZE;
1962 @@ -1976,7 +2030,7 @@ err_open:
1963 return err;
1964 }
1965
1966 -static void dpaa2_dpni_free(struct dpaa2_eth_priv *priv)
1967 +static void free_dpni(struct dpaa2_eth_priv *priv)
1968 {
1969 int err;
1970
1971 @@ -1988,8 +2042,8 @@ static void dpaa2_dpni_free(struct dpaa2
1972 dpni_close(priv->mc_io, 0, priv->mc_token);
1973 }
1974
1975 -static int dpaa2_rx_flow_setup(struct dpaa2_eth_priv *priv,
1976 - struct dpaa2_eth_fq *fq)
1977 +static int setup_rx_flow(struct dpaa2_eth_priv *priv,
1978 + struct dpaa2_eth_fq *fq)
1979 {
1980 struct device *dev = priv->net_dev->dev.parent;
1981 struct dpni_queue_attr rx_queue_attr;
1982 @@ -2023,8 +2077,8 @@ static int dpaa2_rx_flow_setup(struct dp
1983 return 0;
1984 }
1985
1986 -static int dpaa2_tx_flow_setup(struct dpaa2_eth_priv *priv,
1987 - struct dpaa2_eth_fq *fq)
1988 +static int setup_tx_flow(struct dpaa2_eth_priv *priv,
1989 + struct dpaa2_eth_fq *fq)
1990 {
1991 struct device *dev = priv->net_dev->dev.parent;
1992 struct dpni_tx_flow_cfg tx_flow_cfg;
1993 @@ -2070,15 +2124,16 @@ static int dpaa2_tx_flow_setup(struct dp
1994 }
1995
1996 #ifdef CONFIG_FSL_DPAA2_ETH_USE_ERR_QUEUE
1997 -static int dpaa2_rx_err_setup(struct dpaa2_eth_priv *priv,
1998 - struct dpaa2_eth_fq *fq)
1999 +static int setup_rx_err_flow(struct dpaa2_eth_priv *priv,
2000 + struct dpaa2_eth_fq *fq)
2001 {
2002 struct dpni_queue_attr queue_attr;
2003 struct dpni_queue_cfg queue_cfg;
2004 int err;
2005
2006 /* Configure the Rx error queue to generate CDANs,
2007 - * just like the Rx queues */
2008 + * just like the Rx queues
2009 + */
2010 queue_cfg.options = DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST;
2011 queue_cfg.dest_cfg.dest_type = DPNI_DEST_DPCON;
2012 queue_cfg.dest_cfg.priority = 1;
2013 @@ -2091,7 +2146,8 @@ static int dpaa2_rx_err_setup(struct dpa
2014 }
2015
2016 /* Get the FQID */
2017 - err = dpni_get_rx_err_queue(priv->mc_io, 0, priv->mc_token, &queue_attr);
2018 + err = dpni_get_rx_err_queue(priv->mc_io, 0, priv->mc_token,
2019 + &queue_attr);
2020 if (err) {
2021 netdev_err(priv->net_dev, "dpni_get_rx_err_queue() failed\n");
2022 return err;
2023 @@ -2102,7 +2158,10 @@ static int dpaa2_rx_err_setup(struct dpa
2024 }
2025 #endif
2026
2027 -static int dpaa2_dpni_bind(struct dpaa2_eth_priv *priv)
2028 +/* Bind the DPNI to its needed objects and resources: buffer pool, DPIOs,
2029 + * frame queues and channels
2030 + */
2031 +static int bind_dpni(struct dpaa2_eth_priv *priv)
2032 {
2033 struct net_device *net_dev = priv->net_dev;
2034 struct device *dev = net_dev->dev.parent;
2035 @@ -2114,20 +2173,20 @@ static int dpaa2_dpni_bind(struct dpaa2_
2036 pools_params.num_dpbp = 1;
2037 pools_params.pools[0].dpbp_id = priv->dpbp_dev->obj_desc.id;
2038 pools_params.pools[0].backup_pool = 0;
2039 - pools_params.pools[0].buffer_size = DPAA2_ETH_RX_BUFFER_SIZE;
2040 + pools_params.pools[0].buffer_size = DPAA2_ETH_RX_BUF_SIZE;
2041 err = dpni_set_pools(priv->mc_io, 0, priv->mc_token, &pools_params);
2042 if (err) {
2043 dev_err(dev, "dpni_set_pools() failed\n");
2044 return err;
2045 }
2046
2047 - dpaa2_cls_check(net_dev);
2048 + check_fs_support(net_dev);
2049
2050 /* have the interface implicitly distribute traffic based on supported
2051 * header fields
2052 */
2053 if (dpaa2_eth_hash_enabled(priv)) {
2054 - err = dpaa2_set_hash(net_dev, DPAA2_RXH_SUPPORTED);
2055 + err = dpaa2_eth_set_hash(net_dev, DPAA2_RXH_SUPPORTED);
2056 if (err)
2057 return err;
2058 }
2059 @@ -2151,14 +2210,14 @@ static int dpaa2_dpni_bind(struct dpaa2_
2060 for (i = 0; i < priv->num_fqs; i++) {
2061 switch (priv->fq[i].type) {
2062 case DPAA2_RX_FQ:
2063 - err = dpaa2_rx_flow_setup(priv, &priv->fq[i]);
2064 + err = setup_rx_flow(priv, &priv->fq[i]);
2065 break;
2066 case DPAA2_TX_CONF_FQ:
2067 - err = dpaa2_tx_flow_setup(priv, &priv->fq[i]);
2068 + err = setup_tx_flow(priv, &priv->fq[i]);
2069 break;
2070 #ifdef CONFIG_FSL_DPAA2_ETH_USE_ERR_QUEUE
2071 case DPAA2_RX_ERR_FQ:
2072 - err = dpaa2_rx_err_setup(priv, &priv->fq[i]);
2073 + err = setup_rx_err_flow(priv, &priv->fq[i]);
2074 break;
2075 #endif
2076 default:
2077 @@ -2178,7 +2237,8 @@ static int dpaa2_dpni_bind(struct dpaa2_
2078 return 0;
2079 }
2080
2081 -static int dpaa2_eth_alloc_rings(struct dpaa2_eth_priv *priv)
2082 +/* Allocate rings for storing incoming frame descriptors */
2083 +static int alloc_rings(struct dpaa2_eth_priv *priv)
2084 {
2085 struct net_device *net_dev = priv->net_dev;
2086 struct device *dev = net_dev->dev.parent;
2087 @@ -2205,7 +2265,7 @@ err_ring:
2088 return -ENOMEM;
2089 }
2090
2091 -static void dpaa2_eth_free_rings(struct dpaa2_eth_priv *priv)
2092 +static void free_rings(struct dpaa2_eth_priv *priv)
2093 {
2094 int i;
2095
2096 @@ -2213,7 +2273,7 @@ static void dpaa2_eth_free_rings(struct
2097 dpaa2_io_store_destroy(priv->channel[i]->store);
2098 }
2099
2100 -static int dpaa2_eth_netdev_init(struct net_device *net_dev)
2101 +static int netdev_init(struct net_device *net_dev)
2102 {
2103 int err;
2104 struct device *dev = net_dev->dev.parent;
2105 @@ -2223,7 +2283,9 @@ static int dpaa2_eth_netdev_init(struct
2106
2107 net_dev->netdev_ops = &dpaa2_eth_ops;
2108
2109 - /* If the DPL contains all-0 mac_addr, set a random hardware address */
2110 + /* If the DPNI attributes contain an all-0 mac_addr,
2111 + * set a random hardware address
2112 + */
2113 err = dpni_get_primary_mac_addr(priv->mc_io, 0, priv->mc_token,
2114 mac_addr);
2115 if (err) {
2116 @@ -2281,14 +2343,13 @@ static int dpaa2_eth_netdev_init(struct
2117 return 0;
2118 }
2119
2120 -#ifdef CONFIG_FSL_DPAA2_ETH_LINK_POLL
2121 -static int dpaa2_poll_link_state(void *arg)
2122 +static int poll_link_state(void *arg)
2123 {
2124 struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)arg;
2125 int err;
2126
2127 while (!kthread_should_stop()) {
2128 - err = dpaa2_link_state_update(priv);
2129 + err = link_state_update(priv);
2130 if (unlikely(err))
2131 return err;
2132
2133 @@ -2297,7 +2358,7 @@ static int dpaa2_poll_link_state(void *a
2134
2135 return 0;
2136 }
2137 -#else
2138 +
2139 static irqreturn_t dpni_irq0_handler(int irq_num, void *arg)
2140 {
2141 return IRQ_WAKE_THREAD;
2142 @@ -2312,7 +2373,6 @@ static irqreturn_t dpni_irq0_handler_thr
2143 struct net_device *net_dev = dev_get_drvdata(dev);
2144 int err;
2145
2146 - netdev_dbg(net_dev, "IRQ %d received\n", irq_num);
2147 err = dpni_get_irq_status(dpni_dev->mc_io, 0, dpni_dev->mc_handle,
2148 irq_index, &status);
2149 if (unlikely(err)) {
2150 @@ -2323,7 +2383,7 @@ static irqreturn_t dpni_irq0_handler_thr
2151
2152 if (status & DPNI_IRQ_EVENT_LINK_CHANGED) {
2153 clear |= DPNI_IRQ_EVENT_LINK_CHANGED;
2154 - dpaa2_link_state_update(netdev_priv(net_dev));
2155 + link_state_update(netdev_priv(net_dev));
2156 }
2157
2158 out:
2159 @@ -2332,17 +2392,18 @@ out:
2160 return IRQ_HANDLED;
2161 }
2162
2163 -static int dpaa2_eth_setup_irqs(struct fsl_mc_device *ls_dev)
2164 +static int setup_irqs(struct fsl_mc_device *ls_dev)
2165 {
2166 int err = 0;
2167 struct fsl_mc_device_irq *irq;
2168 - int irq_count = ls_dev->obj_desc.irq_count;
2169 u8 irq_index = DPNI_IRQ_INDEX;
2170 u32 mask = DPNI_IRQ_EVENT_LINK_CHANGED;
2171
2172 - /* The only interrupt supported now is the link state notification. */
2173 - if (WARN_ON(irq_count != 1))
2174 - return -EINVAL;
2175 + err = fsl_mc_allocate_irqs(ls_dev);
2176 + if (err) {
2177 + dev_err(&ls_dev->dev, "MC irqs allocation failed\n");
2178 + return err;
2179 + }
2180
2181 irq = ls_dev->irqs[0];
2182 err = devm_request_threaded_irq(&ls_dev->dev, irq->msi_desc->irq,
2183 @@ -2352,28 +2413,34 @@ static int dpaa2_eth_setup_irqs(struct f
2184 dev_name(&ls_dev->dev), &ls_dev->dev);
2185 if (err < 0) {
2186 dev_err(&ls_dev->dev, "devm_request_threaded_irq(): %d", err);
2187 - return err;
2188 + goto free_mc_irq;
2189 }
2190
2191 err = dpni_set_irq_mask(ls_dev->mc_io, 0, ls_dev->mc_handle,
2192 irq_index, mask);
2193 if (err < 0) {
2194 dev_err(&ls_dev->dev, "dpni_set_irq_mask(): %d", err);
2195 - return err;
2196 + goto free_irq;
2197 }
2198
2199 err = dpni_set_irq_enable(ls_dev->mc_io, 0, ls_dev->mc_handle,
2200 irq_index, 1);
2201 if (err < 0) {
2202 dev_err(&ls_dev->dev, "dpni_set_irq_enable(): %d", err);
2203 - return err;
2204 + goto free_irq;
2205 }
2206
2207 return 0;
2208 +
2209 +free_irq:
2210 + devm_free_irq(&ls_dev->dev, irq->msi_desc->irq, &ls_dev->dev);
2211 +free_mc_irq:
2212 + fsl_mc_free_irqs(ls_dev);
2213 +
2214 + return err;
2215 }
2216 -#endif
2217
2218 -static void dpaa2_eth_napi_add(struct dpaa2_eth_priv *priv)
2219 +static void add_ch_napi(struct dpaa2_eth_priv *priv)
2220 {
2221 int i;
2222 struct dpaa2_eth_channel *ch;
2223 @@ -2386,7 +2453,7 @@ static void dpaa2_eth_napi_add(struct dp
2224 }
2225 }
2226
2227 -static void dpaa2_eth_napi_del(struct dpaa2_eth_priv *priv)
2228 +static void del_ch_napi(struct dpaa2_eth_priv *priv)
2229 {
2230 int i;
2231 struct dpaa2_eth_channel *ch;
2232 @@ -2398,7 +2465,6 @@ static void dpaa2_eth_napi_del(struct dp
2233 }
2234
2235 /* SysFS support */
2236 -
2237 static ssize_t dpaa2_eth_show_tx_shaping(struct device *dev,
2238 struct device_attribute *attr,
2239 char *buf)
2240 @@ -2482,22 +2548,21 @@ static ssize_t dpaa2_eth_write_txconf_cp
2241 }
2242
2243 /* Set the new TxConf FQ affinities */
2244 - dpaa2_set_fq_affinity(priv);
2245 + set_fq_affinity(priv);
2246
2247 -#ifdef CONFIG_FSL_DPAA2_ETH_LINK_POLL
2248 /* dpaa2_eth_open() below will *stop* the Tx queues until an explicit
2249 * link up notification is received. Give the polling thread enough time
2250 * to detect the link state change, or else we'll end up with the
2251 * transmission side forever shut down.
2252 */
2253 - msleep(2 * DPAA2_ETH_LINK_STATE_REFRESH);
2254 -#endif
2255 + if (priv->do_link_poll)
2256 + msleep(2 * DPAA2_ETH_LINK_STATE_REFRESH);
2257
2258 for (i = 0; i < priv->num_fqs; i++) {
2259 fq = &priv->fq[i];
2260 if (fq->type != DPAA2_TX_CONF_FQ)
2261 continue;
2262 - dpaa2_tx_flow_setup(priv, fq);
2263 + setup_tx_flow(priv, fq);
2264 }
2265
2266 if (running) {
2267 @@ -2568,7 +2633,6 @@ static int dpaa2_eth_probe(struct fsl_mc
2268
2269 priv = netdev_priv(net_dev);
2270 priv->net_dev = net_dev;
2271 - priv->msg_enable = netif_msg_init(debug, -1);
2272
2273 /* Obtain a MC portal */
2274 err = fsl_mc_portal_allocate(dpni_dev, FSL_MC_IO_ATOMIC_CONTEXT_PORTAL,
2275 @@ -2578,39 +2642,27 @@ static int dpaa2_eth_probe(struct fsl_mc
2276 goto err_portal_alloc;
2277 }
2278
2279 -#ifndef CONFIG_FSL_DPAA2_ETH_LINK_POLL
2280 - err = fsl_mc_allocate_irqs(dpni_dev);
2281 - if (err) {
2282 - dev_err(dev, "MC irqs allocation failed\n");
2283 - goto err_irqs_alloc;
2284 - }
2285 -#endif
2286 -
2287 - /* DPNI initialization */
2288 - err = dpaa2_dpni_setup(dpni_dev);
2289 - if (err < 0)
2290 + /* MC objects initialization and configuration */
2291 + err = setup_dpni(dpni_dev);
2292 + if (err)
2293 goto err_dpni_setup;
2294
2295 - /* DPIO */
2296 - err = dpaa2_dpio_setup(priv);
2297 + err = setup_dpio(priv);
2298 if (err)
2299 goto err_dpio_setup;
2300
2301 - /* FQs */
2302 - dpaa2_eth_setup_fqs(priv);
2303 - dpaa2_set_fq_affinity(priv);
2304 + setup_fqs(priv);
2305
2306 - /* DPBP */
2307 - err = dpaa2_dpbp_setup(priv);
2308 + err = setup_dpbp(priv);
2309 if (err)
2310 goto err_dpbp_setup;
2311
2312 - /* DPNI binding to DPIO and DPBPs */
2313 - err = dpaa2_dpni_bind(priv);
2314 + err = bind_dpni(priv);
2315 if (err)
2316 goto err_bind;
2317
2318 - dpaa2_eth_napi_add(priv);
2319 + /* Add a NAPI context for each channel */
2320 + add_ch_napi(priv);
2321
2322 /* Percpu statistics */
2323 priv->percpu_stats = alloc_percpu(*priv->percpu_stats);
2324 @@ -2635,38 +2687,37 @@ static int dpaa2_eth_probe(struct fsl_mc
2325 dev_warn(&net_dev->dev, "using name \"%s\"\n", net_dev->name);
2326 }
2327
2328 - err = dpaa2_eth_netdev_init(net_dev);
2329 + err = netdev_init(net_dev);
2330 if (err)
2331 goto err_netdev_init;
2332
2333 /* Configure checksum offload based on current interface flags */
2334 - err = dpaa2_eth_set_rx_csum(priv,
2335 - !!(net_dev->features & NETIF_F_RXCSUM));
2336 + err = set_rx_csum(priv, !!(net_dev->features & NETIF_F_RXCSUM));
2337 if (err)
2338 goto err_csum;
2339
2340 - err = dpaa2_eth_set_tx_csum(priv,
2341 - !!(net_dev->features &
2342 - (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)));
2343 + err = set_tx_csum(priv, !!(net_dev->features &
2344 + (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)));
2345 if (err)
2346 goto err_csum;
2347
2348 - err = dpaa2_eth_alloc_rings(priv);
2349 + err = alloc_rings(priv);
2350 if (err)
2351 goto err_alloc_rings;
2352
2353 net_dev->ethtool_ops = &dpaa2_ethtool_ops;
2354
2355 -#ifdef CONFIG_FSL_DPAA2_ETH_LINK_POLL
2356 - priv->poll_thread = kthread_run(dpaa2_poll_link_state, priv,
2357 - "%s_poll_link", net_dev->name);
2358 -#else
2359 - err = dpaa2_eth_setup_irqs(dpni_dev);
2360 + err = setup_irqs(dpni_dev);
2361 if (err) {
2362 - netdev_err(net_dev, "ERROR %d setting up interrupts", err);
2363 - goto err_setup_irqs;
2364 + netdev_warn(net_dev, "Failed to set link interrupt, fall back to polling\n");
2365 + priv->poll_thread = kthread_run(poll_link_state, priv,
2366 + "%s_poll_link", net_dev->name);
2367 + if (IS_ERR(priv->poll_thread)) {
2368 + netdev_err(net_dev, "Error starting polling thread\n");
2369 + goto err_poll_thread;
2370 + }
2371 + priv->do_link_poll = true;
2372 }
2373 -#endif
2374
2375 dpaa2_eth_sysfs_init(&net_dev->dev);
2376 dpaa2_dbg_add(priv);
2377 @@ -2674,10 +2725,8 @@ static int dpaa2_eth_probe(struct fsl_mc
2378 dev_info(dev, "Probed interface %s\n", net_dev->name);
2379 return 0;
2380
2381 -#ifndef CONFIG_FSL_DPAA2_ETH_LINK_POLL
2382 -err_setup_irqs:
2383 -#endif
2384 - dpaa2_eth_free_rings(priv);
2385 +err_poll_thread:
2386 + free_rings(priv);
2387 err_alloc_rings:
2388 err_csum:
2389 unregister_netdev(net_dev);
2390 @@ -2686,19 +2735,15 @@ err_netdev_init:
2391 err_alloc_percpu_extras:
2392 free_percpu(priv->percpu_stats);
2393 err_alloc_percpu_stats:
2394 - dpaa2_eth_napi_del(priv);
2395 + del_ch_napi(priv);
2396 err_bind:
2397 - dpaa2_dpbp_free(priv);
2398 + free_dpbp(priv);
2399 err_dpbp_setup:
2400 - dpaa2_dpio_free(priv);
2401 + free_dpio(priv);
2402 err_dpio_setup:
2403 kfree(priv->cls_rule);
2404 dpni_close(priv->mc_io, 0, priv->mc_token);
2405 err_dpni_setup:
2406 -#ifndef CONFIG_FSL_DPAA2_ETH_LINK_POLL
2407 - fsl_mc_free_irqs(dpni_dev);
2408 -err_irqs_alloc:
2409 -#endif
2410 fsl_mc_portal_free(priv->mc_io);
2411 err_portal_alloc:
2412 dev_set_drvdata(dev, NULL);
2413 @@ -2723,22 +2768,21 @@ static int dpaa2_eth_remove(struct fsl_m
2414 unregister_netdev(net_dev);
2415 dev_info(net_dev->dev.parent, "Removed interface %s\n", net_dev->name);
2416
2417 - dpaa2_dpio_free(priv);
2418 - dpaa2_eth_free_rings(priv);
2419 - dpaa2_eth_napi_del(priv);
2420 - dpaa2_dpbp_free(priv);
2421 - dpaa2_dpni_free(priv);
2422 + free_dpio(priv);
2423 + free_rings(priv);
2424 + del_ch_napi(priv);
2425 + free_dpbp(priv);
2426 + free_dpni(priv);
2427
2428 fsl_mc_portal_free(priv->mc_io);
2429
2430 free_percpu(priv->percpu_stats);
2431 free_percpu(priv->percpu_extras);
2432
2433 -#ifdef CONFIG_FSL_DPAA2_ETH_LINK_POLL
2434 - kthread_stop(priv->poll_thread);
2435 -#else
2436 - fsl_mc_free_irqs(ls_dev);
2437 -#endif
2438 + if (priv->do_link_poll)
2439 + kthread_stop(priv->poll_thread);
2440 + else
2441 + fsl_mc_free_irqs(ls_dev);
2442
2443 kfree(priv->cls_rule);
2444
2445 --- a/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.h
2446 +++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.h
2447 @@ -49,8 +49,10 @@
2448
2449 #define DPAA2_ETH_STORE_SIZE 16
2450
2451 -/* Maximum receive frame size is 64K */
2452 -#define DPAA2_ETH_MAX_SG_ENTRIES ((64 * 1024) / DPAA2_ETH_RX_BUFFER_SIZE)
2453 +/* Maximum number of scatter-gather entries in an ingress frame,
2454 + * considering the maximum receive frame size is 64K
2455 + */
2456 +#define DPAA2_ETH_MAX_SG_ENTRIES ((64 * 1024) / DPAA2_ETH_RX_BUF_SIZE)
2457
2458 /* Maximum acceptable MTU value. It is in direct relation with the MC-enforced
2459 * Max Frame Length (currently 10k).
2460 @@ -75,17 +77,26 @@
2461 #define DPAA2_ETH_NUM_BUFS (DPAA2_ETH_MAX_FRAMES_PER_QUEUE + 256)
2462 #define DPAA2_ETH_REFILL_THRESH DPAA2_ETH_MAX_FRAMES_PER_QUEUE
2463
2464 +/* Maximum number of buffers that can be acquired/released through a single
2465 + * QBMan command
2466 + */
2467 +#define DPAA2_ETH_BUFS_PER_CMD 7
2468 +
2469 /* Hardware requires alignment for ingress/egress buffer addresses
2470 * and ingress buffer lengths.
2471 */
2472 -#define DPAA2_ETH_RX_BUFFER_SIZE 2048
2473 +#define DPAA2_ETH_RX_BUF_SIZE 2048
2474 #define DPAA2_ETH_TX_BUF_ALIGN 64
2475 #define DPAA2_ETH_RX_BUF_ALIGN 256
2476 #define DPAA2_ETH_NEEDED_HEADROOM(p_priv) \
2477 ((p_priv)->tx_data_offset + DPAA2_ETH_TX_BUF_ALIGN)
2478
2479 +/* Hardware only sees DPAA2_ETH_RX_BUF_SIZE, but we need to allocate ingress
2480 + * buffers large enough to allow building an skb around them and also account
2481 + * for alignment restrictions
2482 + */
2483 #define DPAA2_ETH_BUF_RAW_SIZE \
2484 - (DPAA2_ETH_RX_BUFFER_SIZE + \
2485 + (DPAA2_ETH_RX_BUF_SIZE + \
2486 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) + \
2487 DPAA2_ETH_RX_BUF_ALIGN)
2488
2489 @@ -127,57 +138,56 @@ struct dpaa2_fas {
2490 __le32 status;
2491 } __packed;
2492
2493 +/* Error and status bits in the frame annotation status word */
2494 /* Debug frame, otherwise supposed to be discarded */
2495 -#define DPAA2_ETH_FAS_DISC 0x80000000
2496 +#define DPAA2_FAS_DISC 0x80000000
2497 /* MACSEC frame */
2498 -#define DPAA2_ETH_FAS_MS 0x40000000
2499 -#define DPAA2_ETH_FAS_PTP 0x08000000
2500 +#define DPAA2_FAS_MS 0x40000000
2501 +#define DPAA2_FAS_PTP 0x08000000
2502 /* Ethernet multicast frame */
2503 -#define DPAA2_ETH_FAS_MC 0x04000000
2504 +#define DPAA2_FAS_MC 0x04000000
2505 /* Ethernet broadcast frame */
2506 -#define DPAA2_ETH_FAS_BC 0x02000000
2507 -#define DPAA2_ETH_FAS_KSE 0x00040000
2508 -#define DPAA2_ETH_FAS_EOFHE 0x00020000
2509 -#define DPAA2_ETH_FAS_MNLE 0x00010000
2510 -#define DPAA2_ETH_FAS_TIDE 0x00008000
2511 -#define DPAA2_ETH_FAS_PIEE 0x00004000
2512 +#define DPAA2_FAS_BC 0x02000000
2513 +#define DPAA2_FAS_KSE 0x00040000
2514 +#define DPAA2_FAS_EOFHE 0x00020000
2515 +#define DPAA2_FAS_MNLE 0x00010000
2516 +#define DPAA2_FAS_TIDE 0x00008000
2517 +#define DPAA2_FAS_PIEE 0x00004000
2518 /* Frame length error */
2519 -#define DPAA2_ETH_FAS_FLE 0x00002000
2520 -/* Frame physical error; our favourite pastime */
2521 -#define DPAA2_ETH_FAS_FPE 0x00001000
2522 -#define DPAA2_ETH_FAS_PTE 0x00000080
2523 -#define DPAA2_ETH_FAS_ISP 0x00000040
2524 -#define DPAA2_ETH_FAS_PHE 0x00000020
2525 -#define DPAA2_ETH_FAS_BLE 0x00000010
2526 +#define DPAA2_FAS_FLE 0x00002000
2527 +/* Frame physical error */
2528 +#define DPAA2_FAS_FPE 0x00001000
2529 +#define DPAA2_FAS_PTE 0x00000080
2530 +#define DPAA2_FAS_ISP 0x00000040
2531 +#define DPAA2_FAS_PHE 0x00000020
2532 +#define DPAA2_FAS_BLE 0x00000010
2533 /* L3 csum validation performed */
2534 -#define DPAA2_ETH_FAS_L3CV 0x00000008
2535 +#define DPAA2_FAS_L3CV 0x00000008
2536 /* L3 csum error */
2537 -#define DPAA2_ETH_FAS_L3CE 0x00000004
2538 +#define DPAA2_FAS_L3CE 0x00000004
2539 /* L4 csum validation performed */
2540 -#define DPAA2_ETH_FAS_L4CV 0x00000002
2541 +#define DPAA2_FAS_L4CV 0x00000002
2542 /* L4 csum error */
2543 -#define DPAA2_ETH_FAS_L4CE 0x00000001
2544 -/* These bits always signal errors */
2545 -#define DPAA2_ETH_RX_ERR_MASK (DPAA2_ETH_FAS_KSE | \
2546 - DPAA2_ETH_FAS_EOFHE | \
2547 - DPAA2_ETH_FAS_MNLE | \
2548 - DPAA2_ETH_FAS_TIDE | \
2549 - DPAA2_ETH_FAS_PIEE | \
2550 - DPAA2_ETH_FAS_FLE | \
2551 - DPAA2_ETH_FAS_FPE | \
2552 - DPAA2_ETH_FAS_PTE | \
2553 - DPAA2_ETH_FAS_ISP | \
2554 - DPAA2_ETH_FAS_PHE | \
2555 - DPAA2_ETH_FAS_BLE | \
2556 - DPAA2_ETH_FAS_L3CE | \
2557 - DPAA2_ETH_FAS_L4CE)
2558 -/* Unsupported features in the ingress */
2559 -#define DPAA2_ETH_RX_UNSUPP_MASK DPAA2_ETH_FAS_MS
2560 +#define DPAA2_FAS_L4CE 0x00000001
2561 +/* Possible errors on the ingress path */
2562 +#define DPAA2_ETH_RX_ERR_MASK (DPAA2_FAS_KSE | \
2563 + DPAA2_FAS_EOFHE | \
2564 + DPAA2_FAS_MNLE | \
2565 + DPAA2_FAS_TIDE | \
2566 + DPAA2_FAS_PIEE | \
2567 + DPAA2_FAS_FLE | \
2568 + DPAA2_FAS_FPE | \
2569 + DPAA2_FAS_PTE | \
2570 + DPAA2_FAS_ISP | \
2571 + DPAA2_FAS_PHE | \
2572 + DPAA2_FAS_BLE | \
2573 + DPAA2_FAS_L3CE | \
2574 + DPAA2_FAS_L4CE)
2575 /* Tx errors */
2576 -#define DPAA2_ETH_TXCONF_ERR_MASK (DPAA2_ETH_FAS_KSE | \
2577 - DPAA2_ETH_FAS_EOFHE | \
2578 - DPAA2_ETH_FAS_MNLE | \
2579 - DPAA2_ETH_FAS_TIDE)
2580 +#define DPAA2_ETH_TXCONF_ERR_MASK (DPAA2_FAS_KSE | \
2581 + DPAA2_FAS_EOFHE | \
2582 + DPAA2_FAS_MNLE | \
2583 + DPAA2_FAS_TIDE)
2584
2585 /* Time in milliseconds between link state updates */
2586 #define DPAA2_ETH_LINK_STATE_REFRESH 1000
2587 @@ -185,7 +195,7 @@ struct dpaa2_fas {
2588 /* Driver statistics, other than those in struct rtnl_link_stats64.
2589 * These are usually collected per-CPU and aggregated by ethtool.
2590 */
2591 -struct dpaa2_eth_stats {
2592 +struct dpaa2_eth_drv_stats {
2593 __u64 tx_conf_frames;
2594 __u64 tx_conf_bytes;
2595 __u64 tx_sg_frames;
2596 @@ -210,15 +220,17 @@ struct dpaa2_eth_ch_stats {
2597 __u64 cdan;
2598 /* Number of frames received on queues from this channel */
2599 __u64 frames;
2600 + /* Pull errors */
2601 + __u64 pull_err;
2602 };
2603
2604 -/* Maximum number of Rx queues associated with a DPNI */
2605 +/* Maximum number of queues associated with a DPNI */
2606 #define DPAA2_ETH_MAX_RX_QUEUES 16
2607 #define DPAA2_ETH_MAX_TX_QUEUES NR_CPUS
2608 #define DPAA2_ETH_MAX_RX_ERR_QUEUES 1
2609 -#define DPAA2_ETH_MAX_QUEUES (DPAA2_ETH_MAX_RX_QUEUES + \
2610 - DPAA2_ETH_MAX_TX_QUEUES + \
2611 - DPAA2_ETH_MAX_RX_ERR_QUEUES)
2612 +#define DPAA2_ETH_MAX_QUEUES (DPAA2_ETH_MAX_RX_QUEUES + \
2613 + DPAA2_ETH_MAX_TX_QUEUES + \
2614 + DPAA2_ETH_MAX_RX_ERR_QUEUES)
2615
2616 #define DPAA2_ETH_MAX_DPCONS NR_CPUS
2617
2618 @@ -241,7 +253,6 @@ struct dpaa2_eth_fq {
2619 struct dpaa2_eth_channel *,
2620 const struct dpaa2_fd *,
2621 struct napi_struct *);
2622 - struct dpaa2_eth_priv *netdev_priv; /* backpointer */
2623 struct dpaa2_eth_fq_stats stats;
2624 };
2625
2626 @@ -258,16 +269,16 @@ struct dpaa2_eth_channel {
2627 struct dpaa2_eth_ch_stats stats;
2628 };
2629
2630 -struct dpaa2_cls_rule {
2631 +struct dpaa2_eth_cls_rule {
2632 struct ethtool_rx_flow_spec fs;
2633 bool in_use;
2634 };
2635
2636 +/* Driver private data */
2637 struct dpaa2_eth_priv {
2638 struct net_device *net_dev;
2639
2640 u8 num_fqs;
2641 - /* First queue is tx conf, the rest are rx */
2642 struct dpaa2_eth_fq fq[DPAA2_ETH_MAX_QUEUES];
2643
2644 u8 num_channels;
2645 @@ -299,12 +310,12 @@ struct dpaa2_eth_priv {
2646 /* Standard statistics */
2647 struct rtnl_link_stats64 __percpu *percpu_stats;
2648 /* Extra stats, in addition to the ones known by the kernel */
2649 - struct dpaa2_eth_stats __percpu *percpu_extras;
2650 - u32 msg_enable; /* net_device message level */
2651 + struct dpaa2_eth_drv_stats __percpu *percpu_extras;
2652
2653 u16 mc_token;
2654
2655 struct dpni_link_state link_state;
2656 + bool do_link_poll;
2657 struct task_struct *poll_thread;
2658
2659 /* enabled ethtool hashing bits */
2660 @@ -315,7 +326,7 @@ struct dpaa2_eth_priv {
2661 #endif
2662
2663 /* array of classification rules */
2664 - struct dpaa2_cls_rule *cls_rule;
2665 + struct dpaa2_eth_cls_rule *cls_rule;
2666
2667 struct dpni_tx_shaping_cfg shaping_cfg;
2668
2669 @@ -341,9 +352,9 @@ struct dpaa2_eth_priv {
2670
2671 extern const struct ethtool_ops dpaa2_ethtool_ops;
2672
2673 -int dpaa2_set_hash(struct net_device *net_dev, u64 flags);
2674 +int dpaa2_eth_set_hash(struct net_device *net_dev, u64 flags);
2675
2676 -static int dpaa2_queue_count(struct dpaa2_eth_priv *priv)
2677 +static int dpaa2_eth_queue_count(struct dpaa2_eth_priv *priv)
2678 {
2679 if (!dpaa2_eth_hash_enabled(priv))
2680 return 1;
2681 @@ -351,16 +362,16 @@ static int dpaa2_queue_count(struct dpaa
2682 return priv->dpni_ext_cfg.tc_cfg[0].max_dist;
2683 }
2684
2685 -static inline int dpaa2_max_channels(struct dpaa2_eth_priv *priv)
2686 +static inline int dpaa2_eth_max_channels(struct dpaa2_eth_priv *priv)
2687 {
2688 /* Ideally, we want a number of channels large enough
2689 * to accommodate both the Rx distribution size
2690 * and the max number of Tx confirmation queues
2691 */
2692 - return max_t(int, dpaa2_queue_count(priv),
2693 + return max_t(int, dpaa2_eth_queue_count(priv),
2694 priv->dpni_attrs.max_senders);
2695 }
2696
2697 -void dpaa2_cls_check(struct net_device *);
2698 +void check_fs_support(struct net_device *);
2699
2700 #endif /* __DPAA2_H */
2701 --- a/drivers/staging/fsl-dpaa2/ethernet/dpaa2-ethtool.c
2702 +++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-ethtool.c
2703 @@ -52,7 +52,7 @@ char dpaa2_ethtool_stats[][ETH_GSTRING_L
2704
2705 #define DPAA2_ETH_NUM_STATS ARRAY_SIZE(dpaa2_ethtool_stats)
2706
2707 -/* To be kept in sync with 'struct dpaa2_eth_stats' */
2708 +/* To be kept in sync with 'struct dpaa2_eth_drv_stats' */
2709 char dpaa2_ethtool_extras[][ETH_GSTRING_LEN] = {
2710 /* per-cpu stats */
2711
2712 @@ -63,12 +63,12 @@ char dpaa2_ethtool_extras[][ETH_GSTRING_
2713 "rx sg frames",
2714 "rx sg bytes",
2715 /* how many times we had to retry the enqueue command */
2716 - "tx portal busy",
2717 + "enqueue portal busy",
2718
2719 /* Channel stats */
2720 -
2721 /* How many times we had to retry the volatile dequeue command */
2722 - "portal busy",
2723 + "dequeue portal busy",
2724 + "channel pull errors",
2725 /* Number of notifications received */
2726 "cdan",
2727 #ifdef CONFIG_FSL_QBMAN_DEBUG
2728 @@ -83,8 +83,8 @@ char dpaa2_ethtool_extras[][ETH_GSTRING_
2729
2730 #define DPAA2_ETH_NUM_EXTRA_STATS ARRAY_SIZE(dpaa2_ethtool_extras)
2731
2732 -static void dpaa2_get_drvinfo(struct net_device *net_dev,
2733 - struct ethtool_drvinfo *drvinfo)
2734 +static void dpaa2_eth_get_drvinfo(struct net_device *net_dev,
2735 + struct ethtool_drvinfo *drvinfo)
2736 {
2737 struct mc_version mc_ver;
2738 struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
2739 @@ -112,20 +112,8 @@ static void dpaa2_get_drvinfo(struct net
2740 sizeof(drvinfo->bus_info));
2741 }
2742
2743 -static u32 dpaa2_get_msglevel(struct net_device *net_dev)
2744 -{
2745 - return ((struct dpaa2_eth_priv *)netdev_priv(net_dev))->msg_enable;
2746 -}
2747 -
2748 -static void dpaa2_set_msglevel(struct net_device *net_dev,
2749 - u32 msg_enable)
2750 -{
2751 - ((struct dpaa2_eth_priv *)netdev_priv(net_dev))->msg_enable =
2752 - msg_enable;
2753 -}
2754 -
2755 -static int dpaa2_get_settings(struct net_device *net_dev,
2756 - struct ethtool_cmd *cmd)
2757 +static int dpaa2_eth_get_settings(struct net_device *net_dev,
2758 + struct ethtool_cmd *cmd)
2759 {
2760 struct dpni_link_state state = {0};
2761 int err = 0;
2762 @@ -152,8 +140,8 @@ out:
2763 return err;
2764 }
2765
2766 -static int dpaa2_set_settings(struct net_device *net_dev,
2767 - struct ethtool_cmd *cmd)
2768 +static int dpaa2_eth_set_settings(struct net_device *net_dev,
2769 + struct ethtool_cmd *cmd)
2770 {
2771 struct dpni_link_cfg cfg = {0};
2772 struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
2773 @@ -190,8 +178,8 @@ static int dpaa2_set_settings(struct net
2774 return err;
2775 }
2776
2777 -static void dpaa2_get_strings(struct net_device *netdev, u32 stringset,
2778 - u8 *data)
2779 +static void dpaa2_eth_get_strings(struct net_device *netdev, u32 stringset,
2780 + u8 *data)
2781 {
2782 u8 *p = data;
2783 int i;
2784 @@ -210,7 +198,7 @@ static void dpaa2_get_strings(struct net
2785 }
2786 }
2787
2788 -static int dpaa2_get_sset_count(struct net_device *net_dev, int sset)
2789 +static int dpaa2_eth_get_sset_count(struct net_device *net_dev, int sset)
2790 {
2791 switch (sset) {
2792 case ETH_SS_STATS: /* ethtool_get_stats(), ethtool_get_drvinfo() */
2793 @@ -222,9 +210,9 @@ static int dpaa2_get_sset_count(struct n
2794
2795 /** Fill in hardware counters, as returned by the MC firmware.
2796 */
2797 -static void dpaa2_get_ethtool_stats(struct net_device *net_dev,
2798 - struct ethtool_stats *stats,
2799 - u64 *data)
2800 +static void dpaa2_eth_get_ethtool_stats(struct net_device *net_dev,
2801 + struct ethtool_stats *stats,
2802 + u64 *data)
2803 {
2804 int i; /* Current index in the data array */
2805 int j, k, err;
2806 @@ -236,9 +224,9 @@ static void dpaa2_get_ethtool_stats(stru
2807 u32 buf_cnt;
2808 #endif
2809 u64 cdan = 0;
2810 - u64 portal_busy = 0;
2811 + u64 portal_busy = 0, pull_err = 0;
2812 struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
2813 - struct dpaa2_eth_stats *extras;
2814 + struct dpaa2_eth_drv_stats *extras;
2815 struct dpaa2_eth_ch_stats *ch_stats;
2816
2817 memset(data, 0,
2818 @@ -266,16 +254,18 @@ static void dpaa2_get_ethtool_stats(stru
2819 ch_stats = &priv->channel[j]->stats;
2820 cdan += ch_stats->cdan;
2821 portal_busy += ch_stats->dequeue_portal_busy;
2822 + pull_err += ch_stats->pull_err;
2823 }
2824
2825 *(data + i++) = portal_busy;
2826 + *(data + i++) = pull_err;
2827 *(data + i++) = cdan;
2828
2829 #ifdef CONFIG_FSL_QBMAN_DEBUG
2830 for (j = 0; j < priv->num_fqs; j++) {
2831 /* Print FQ instantaneous counts */
2832 err = dpaa2_io_query_fq_count(NULL, priv->fq[j].fqid,
2833 - &fcnt, &bcnt);
2834 + &fcnt, &bcnt);
2835 if (err) {
2836 netdev_warn(net_dev, "FQ query error %d", err);
2837 return;
2838 @@ -303,12 +293,12 @@ static void dpaa2_get_ethtool_stats(stru
2839 #endif
2840 }
2841
2842 -static const struct dpaa2_hash_fields {
2843 +static const struct dpaa2_eth_hash_fields {
2844 u64 rxnfc_field;
2845 enum net_prot cls_prot;
2846 int cls_field;
2847 int size;
2848 -} dpaa2_hash_fields[] = {
2849 +} hash_fields[] = {
2850 {
2851 /* L2 header */
2852 .rxnfc_field = RXH_L2DA,
2853 @@ -353,55 +343,53 @@ static const struct dpaa2_hash_fields {
2854 },
2855 };
2856
2857 -static int dpaa2_cls_is_enabled(struct net_device *net_dev, u64 flag)
2858 +static int cls_is_enabled(struct net_device *net_dev, u64 flag)
2859 {
2860 struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
2861
2862 return !!(priv->rx_hash_fields & flag);
2863 }
2864
2865 -static int dpaa2_cls_key_off(struct net_device *net_dev, u64 flag)
2866 +static int cls_key_off(struct net_device *net_dev, u64 flag)
2867 {
2868 int i, off = 0;
2869
2870 - for (i = 0; i < ARRAY_SIZE(dpaa2_hash_fields); i++) {
2871 - if (dpaa2_hash_fields[i].rxnfc_field & flag)
2872 + for (i = 0; i < ARRAY_SIZE(hash_fields); i++) {
2873 + if (hash_fields[i].rxnfc_field & flag)
2874 return off;
2875 - if (dpaa2_cls_is_enabled(net_dev,
2876 - dpaa2_hash_fields[i].rxnfc_field))
2877 - off += dpaa2_hash_fields[i].size;
2878 + if (cls_is_enabled(net_dev, hash_fields[i].rxnfc_field))
2879 + off += hash_fields[i].size;
2880 }
2881
2882 return -1;
2883 }
2884
2885 -static u8 dpaa2_cls_key_size(struct net_device *net_dev)
2886 +static u8 cls_key_size(struct net_device *net_dev)
2887 {
2888 u8 i, size = 0;
2889
2890 - for (i = 0; i < ARRAY_SIZE(dpaa2_hash_fields); i++) {
2891 - if (!dpaa2_cls_is_enabled(net_dev,
2892 - dpaa2_hash_fields[i].rxnfc_field))
2893 + for (i = 0; i < ARRAY_SIZE(hash_fields); i++) {
2894 + if (!cls_is_enabled(net_dev, hash_fields[i].rxnfc_field))
2895 continue;
2896 - size += dpaa2_hash_fields[i].size;
2897 + size += hash_fields[i].size;
2898 }
2899
2900 return size;
2901 }
2902
2903 -static u8 dpaa2_cls_max_key_size(struct net_device *net_dev)
2904 +static u8 cls_max_key_size(struct net_device *net_dev)
2905 {
2906 u8 i, size = 0;
2907
2908 - for (i = 0; i < ARRAY_SIZE(dpaa2_hash_fields); i++)
2909 - size += dpaa2_hash_fields[i].size;
2910 + for (i = 0; i < ARRAY_SIZE(hash_fields); i++)
2911 + size += hash_fields[i].size;
2912
2913 return size;
2914 }
2915
2916 -void dpaa2_cls_check(struct net_device *net_dev)
2917 +void check_fs_support(struct net_device *net_dev)
2918 {
2919 - u8 key_size = dpaa2_cls_max_key_size(net_dev);
2920 + u8 key_size = cls_max_key_size(net_dev);
2921 struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
2922
2923 if (priv->dpni_attrs.options & DPNI_OPT_DIST_FS &&
2924 @@ -417,7 +405,7 @@ void dpaa2_cls_check(struct net_device *
2925 /* Set RX hash options
2926 * flags is a combination of RXH_ bits
2927 */
2928 -int dpaa2_set_hash(struct net_device *net_dev, u64 flags)
2929 +int dpaa2_eth_set_hash(struct net_device *net_dev, u64 flags)
2930 {
2931 struct device *dev = net_dev->dev.parent;
2932 struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
2933 @@ -441,11 +429,11 @@ int dpaa2_set_hash(struct net_device *ne
2934
2935 memset(&cls_cfg, 0, sizeof(cls_cfg));
2936
2937 - for (i = 0; i < ARRAY_SIZE(dpaa2_hash_fields); i++) {
2938 + for (i = 0; i < ARRAY_SIZE(hash_fields); i++) {
2939 struct dpkg_extract *key =
2940 &cls_cfg.extracts[cls_cfg.num_extracts];
2941
2942 - if (!(flags & dpaa2_hash_fields[i].rxnfc_field))
2943 + if (!(flags & hash_fields[i].rxnfc_field))
2944 continue;
2945
2946 if (cls_cfg.num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
2947 @@ -454,14 +442,12 @@ int dpaa2_set_hash(struct net_device *ne
2948 }
2949
2950 key->type = DPKG_EXTRACT_FROM_HDR;
2951 - key->extract.from_hdr.prot =
2952 - dpaa2_hash_fields[i].cls_prot;
2953 + key->extract.from_hdr.prot = hash_fields[i].cls_prot;
2954 key->extract.from_hdr.type = DPKG_FULL_FIELD;
2955 - key->extract.from_hdr.field =
2956 - dpaa2_hash_fields[i].cls_field;
2957 + key->extract.from_hdr.field = hash_fields[i].cls_field;
2958 cls_cfg.num_extracts++;
2959
2960 - enabled_flags |= dpaa2_hash_fields[i].rxnfc_field;
2961 + enabled_flags |= hash_fields[i].rxnfc_field;
2962 }
2963
2964 dma_mem = kzalloc(DPAA2_CLASSIFIER_DMA_SIZE, GFP_DMA | GFP_KERNEL);
2965 @@ -486,7 +472,7 @@ int dpaa2_set_hash(struct net_device *ne
2966 return -ENOMEM;
2967 }
2968
2969 - dist_cfg.dist_size = dpaa2_queue_count(priv);
2970 + dist_cfg.dist_size = dpaa2_eth_queue_count(priv);
2971 if (dpaa2_eth_fs_enabled(priv)) {
2972 dist_cfg.dist_mode = DPNI_DIST_MODE_FS;
2973 dist_cfg.fs_cfg.miss_action = DPNI_FS_MISS_HASH;
2974 @@ -508,14 +494,14 @@ int dpaa2_set_hash(struct net_device *ne
2975 return 0;
2976 }
2977
2978 -static int dpaa2_cls_prep_rule(struct net_device *net_dev,
2979 - struct ethtool_rx_flow_spec *fs,
2980 - void *key)
2981 +static int prep_cls_rule(struct net_device *net_dev,
2982 + struct ethtool_rx_flow_spec *fs,
2983 + void *key)
2984 {
2985 struct ethtool_tcpip4_spec *l4ip4_h, *l4ip4_m;
2986 struct ethhdr *eth_h, *eth_m;
2987 struct ethtool_flow_ext *ext_h, *ext_m;
2988 - const u8 key_size = dpaa2_cls_key_size(net_dev);
2989 + const u8 key_size = cls_key_size(net_dev);
2990 void *msk = key + key_size;
2991
2992 memset(key, 0, key_size * 2);
2993 @@ -546,51 +532,47 @@ l4ip4:
2994 "ToS is not supported for IPv4 L4\n");
2995 return -EOPNOTSUPP;
2996 }
2997 - if (l4ip4_m->ip4src &&
2998 - !dpaa2_cls_is_enabled(net_dev, RXH_IP_SRC)) {
2999 + if (l4ip4_m->ip4src && !cls_is_enabled(net_dev, RXH_IP_SRC)) {
3000 netdev_err(net_dev, "IP SRC not supported!\n");
3001 return -EOPNOTSUPP;
3002 }
3003 - if (l4ip4_m->ip4dst &&
3004 - !dpaa2_cls_is_enabled(net_dev, RXH_IP_DST)) {
3005 + if (l4ip4_m->ip4dst && !cls_is_enabled(net_dev, RXH_IP_DST)) {
3006 netdev_err(net_dev, "IP DST not supported!\n");
3007 return -EOPNOTSUPP;
3008 }
3009 - if (l4ip4_m->psrc &&
3010 - !dpaa2_cls_is_enabled(net_dev, RXH_L4_B_0_1)) {
3011 + if (l4ip4_m->psrc && !cls_is_enabled(net_dev, RXH_L4_B_0_1)) {
3012 netdev_err(net_dev, "PSRC not supported, ignored\n");
3013 return -EOPNOTSUPP;
3014 }
3015 - if (l4ip4_m->pdst &&
3016 - !dpaa2_cls_is_enabled(net_dev, RXH_L4_B_2_3)) {
3017 + if (l4ip4_m->pdst && !cls_is_enabled(net_dev, RXH_L4_B_2_3)) {
3018 netdev_err(net_dev, "PDST not supported, ignored\n");
3019 return -EOPNOTSUPP;
3020 }
3021
3022 - if (dpaa2_cls_is_enabled(net_dev, RXH_IP_SRC)) {
3023 - *(u32 *)(key + dpaa2_cls_key_off(net_dev, RXH_IP_SRC))
3024 + if (cls_is_enabled(net_dev, RXH_IP_SRC)) {
3025 + *(u32 *)(key + cls_key_off(net_dev, RXH_IP_SRC))
3026 = l4ip4_h->ip4src;
3027 - *(u32 *)(msk + dpaa2_cls_key_off(net_dev, RXH_IP_SRC))
3028 + *(u32 *)(msk + cls_key_off(net_dev, RXH_IP_SRC))
3029 = l4ip4_m->ip4src;
3030 }
3031 - if (dpaa2_cls_is_enabled(net_dev, RXH_IP_DST)) {
3032 - *(u32 *)(key + dpaa2_cls_key_off(net_dev, RXH_IP_DST))
3033 + if (cls_is_enabled(net_dev, RXH_IP_DST)) {
3034 + *(u32 *)(key + cls_key_off(net_dev, RXH_IP_DST))
3035 = l4ip4_h->ip4dst;
3036 - *(u32 *)(msk + dpaa2_cls_key_off(net_dev, RXH_IP_DST))
3037 + *(u32 *)(msk + cls_key_off(net_dev, RXH_IP_DST))
3038 = l4ip4_m->ip4dst;
3039 }
3040
3041 - if (dpaa2_cls_is_enabled(net_dev, RXH_L4_B_0_1)) {
3042 - *(u32 *)(key + dpaa2_cls_key_off(net_dev, RXH_L4_B_0_1))
3043 + if (cls_is_enabled(net_dev, RXH_L4_B_0_1)) {
3044 + *(u32 *)(key + cls_key_off(net_dev, RXH_L4_B_0_1))
3045 = l4ip4_h->psrc;
3046 - *(u32 *)(msk + dpaa2_cls_key_off(net_dev, RXH_L4_B_0_1))
3047 + *(u32 *)(msk + cls_key_off(net_dev, RXH_L4_B_0_1))
3048 = l4ip4_m->psrc;
3049 }
3050
3051 - if (dpaa2_cls_is_enabled(net_dev, RXH_L4_B_2_3)) {
3052 - *(u32 *)(key + dpaa2_cls_key_off(net_dev, RXH_L4_B_2_3))
3053 + if (cls_is_enabled(net_dev, RXH_L4_B_2_3)) {
3054 + *(u32 *)(key + cls_key_off(net_dev, RXH_L4_B_2_3))
3055 = l4ip4_h->pdst;
3056 - *(u32 *)(msk + dpaa2_cls_key_off(net_dev, RXH_L4_B_2_3))
3057 + *(u32 *)(msk + cls_key_off(net_dev, RXH_L4_B_2_3))
3058 = l4ip4_m->pdst;
3059 }
3060 break;
3061 @@ -609,12 +591,10 @@ l4ip4:
3062 return -EOPNOTSUPP;
3063 }
3064
3065 - if (dpaa2_cls_is_enabled(net_dev, RXH_L2DA)) {
3066 - ether_addr_copy(key
3067 - + dpaa2_cls_key_off(net_dev, RXH_L2DA),
3068 + if (cls_is_enabled(net_dev, RXH_L2DA)) {
3069 + ether_addr_copy(key + cls_key_off(net_dev, RXH_L2DA),
3070 eth_h->h_dest);
3071 - ether_addr_copy(msk
3072 - + dpaa2_cls_key_off(net_dev, RXH_L2DA),
3073 + ether_addr_copy(msk + cls_key_off(net_dev, RXH_L2DA),
3074 eth_m->h_dest);
3075 } else {
3076 if (!is_zero_ether_addr(eth_m->h_dest)) {
3077 @@ -639,12 +619,10 @@ l4ip4:
3078 ext_h = &fs->h_ext;
3079 ext_m = &fs->m_ext;
3080
3081 - if (dpaa2_cls_is_enabled(net_dev, RXH_L2DA)) {
3082 - ether_addr_copy(key
3083 - + dpaa2_cls_key_off(net_dev, RXH_L2DA),
3084 + if (cls_is_enabled(net_dev, RXH_L2DA)) {
3085 + ether_addr_copy(key + cls_key_off(net_dev, RXH_L2DA),
3086 ext_h->h_dest);
3087 - ether_addr_copy(msk
3088 - + dpaa2_cls_key_off(net_dev, RXH_L2DA),
3089 + ether_addr_copy(msk + cls_key_off(net_dev, RXH_L2DA),
3090 ext_m->h_dest);
3091 } else {
3092 if (!is_zero_ether_addr(ext_m->h_dest)) {
3093 @@ -657,9 +635,9 @@ l4ip4:
3094 return 0;
3095 }
3096
3097 -static int dpaa2_do_cls(struct net_device *net_dev,
3098 - struct ethtool_rx_flow_spec *fs,
3099 - bool add)
3100 +static int do_cls(struct net_device *net_dev,
3101 + struct ethtool_rx_flow_spec *fs,
3102 + bool add)
3103 {
3104 struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
3105 const int rule_cnt = DPAA2_CLASSIFIER_ENTRY_COUNT;
3106 @@ -674,19 +652,19 @@ static int dpaa2_do_cls(struct net_devic
3107 }
3108
3109 if ((fs->ring_cookie != RX_CLS_FLOW_DISC &&
3110 - fs->ring_cookie >= dpaa2_queue_count(priv)) ||
3111 + fs->ring_cookie >= dpaa2_eth_queue_count(priv)) ||
3112 fs->location >= rule_cnt)
3113 return -EINVAL;
3114
3115 memset(&rule_cfg, 0, sizeof(rule_cfg));
3116 - rule_cfg.key_size = dpaa2_cls_key_size(net_dev);
3117 + rule_cfg.key_size = cls_key_size(net_dev);
3118
3119 /* allocate twice the key size, for the actual key and for mask */
3120 dma_mem = kzalloc(rule_cfg.key_size * 2, GFP_DMA | GFP_KERNEL);
3121 if (!dma_mem)
3122 return -ENOMEM;
3123
3124 - err = dpaa2_cls_prep_rule(net_dev, fs, dma_mem);
3125 + err = prep_cls_rule(net_dev, fs, dma_mem);
3126 if (err)
3127 goto err_free_mem;
3128
3129 @@ -735,13 +713,13 @@ err_free_mem:
3130 return err;
3131 }
3132
3133 -static int dpaa2_add_cls(struct net_device *net_dev,
3134 - struct ethtool_rx_flow_spec *fs)
3135 +static int add_cls(struct net_device *net_dev,
3136 + struct ethtool_rx_flow_spec *fs)
3137 {
3138 struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
3139 int err;
3140
3141 - err = dpaa2_do_cls(net_dev, fs, true);
3142 + err = do_cls(net_dev, fs, true);
3143 if (err)
3144 return err;
3145
3146 @@ -751,12 +729,12 @@ static int dpaa2_add_cls(struct net_devi
3147 return 0;
3148 }
3149
3150 -static int dpaa2_del_cls(struct net_device *net_dev, int location)
3151 +static int del_cls(struct net_device *net_dev, int location)
3152 {
3153 struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
3154 int err;
3155
3156 - err = dpaa2_do_cls(net_dev, &priv->cls_rule[location].fs, false);
3157 + err = do_cls(net_dev, &priv->cls_rule[location].fs, false);
3158 if (err)
3159 return err;
3160
3161 @@ -765,7 +743,7 @@ static int dpaa2_del_cls(struct net_devi
3162 return 0;
3163 }
3164
3165 -static void dpaa2_clear_cls(struct net_device *net_dev)
3166 +static void clear_cls(struct net_device *net_dev)
3167 {
3168 struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
3169 int i, err;
3170 @@ -774,7 +752,7 @@ static void dpaa2_clear_cls(struct net_d
3171 if (!priv->cls_rule[i].in_use)
3172 continue;
3173
3174 - err = dpaa2_del_cls(net_dev, i);
3175 + err = del_cls(net_dev, i);
3176 if (err)
3177 netdev_warn(net_dev,
3178 "err trying to delete classification entry %d\n",
3179 @@ -782,8 +760,8 @@ static void dpaa2_clear_cls(struct net_d
3180 }
3181 }
3182
3183 -static int dpaa2_set_rxnfc(struct net_device *net_dev,
3184 - struct ethtool_rxnfc *rxnfc)
3185 +static int dpaa2_eth_set_rxnfc(struct net_device *net_dev,
3186 + struct ethtool_rxnfc *rxnfc)
3187 {
3188 int err = 0;
3189
3190 @@ -792,19 +770,19 @@ static int dpaa2_set_rxnfc(struct net_de
3191 /* first off clear ALL classification rules, chaging key
3192 * composition will break them anyway
3193 */
3194 - dpaa2_clear_cls(net_dev);
3195 + clear_cls(net_dev);
3196 /* we purposely ignore cmd->flow_type for now, because the
3197 * classifier only supports a single set of fields for all
3198 * protocols
3199 */
3200 - err = dpaa2_set_hash(net_dev, rxnfc->data);
3201 + err = dpaa2_eth_set_hash(net_dev, rxnfc->data);
3202 break;
3203 case ETHTOOL_SRXCLSRLINS:
3204 - err = dpaa2_add_cls(net_dev, &rxnfc->fs);
3205 + err = add_cls(net_dev, &rxnfc->fs);
3206 break;
3207
3208 case ETHTOOL_SRXCLSRLDEL:
3209 - err = dpaa2_del_cls(net_dev, rxnfc->fs.location);
3210 + err = del_cls(net_dev, rxnfc->fs.location);
3211 break;
3212
3213 default:
3214 @@ -814,8 +792,8 @@ static int dpaa2_set_rxnfc(struct net_de
3215 return err;
3216 }
3217
3218 -static int dpaa2_get_rxnfc(struct net_device *net_dev,
3219 - struct ethtool_rxnfc *rxnfc, u32 *rule_locs)
3220 +static int dpaa2_eth_get_rxnfc(struct net_device *net_dev,
3221 + struct ethtool_rxnfc *rxnfc, u32 *rule_locs)
3222 {
3223 struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
3224 const int rule_cnt = DPAA2_CLASSIFIER_ENTRY_COUNT;
3225 @@ -831,7 +809,7 @@ static int dpaa2_get_rxnfc(struct net_de
3226 break;
3227
3228 case ETHTOOL_GRXRINGS:
3229 - rxnfc->data = dpaa2_queue_count(priv);
3230 + rxnfc->data = dpaa2_eth_queue_count(priv);
3231 break;
3232
3233 case ETHTOOL_GRXCLSRLCNT:
3234 @@ -868,15 +846,13 @@ static int dpaa2_get_rxnfc(struct net_de
3235 }
3236
3237 const struct ethtool_ops dpaa2_ethtool_ops = {
3238 - .get_drvinfo = dpaa2_get_drvinfo,
3239 - .get_msglevel = dpaa2_get_msglevel,
3240 - .set_msglevel = dpaa2_set_msglevel,
3241 + .get_drvinfo = dpaa2_eth_get_drvinfo,
3242 .get_link = ethtool_op_get_link,
3243 - .get_settings = dpaa2_get_settings,
3244 - .set_settings = dpaa2_set_settings,
3245 - .get_sset_count = dpaa2_get_sset_count,
3246 - .get_ethtool_stats = dpaa2_get_ethtool_stats,
3247 - .get_strings = dpaa2_get_strings,
3248 - .get_rxnfc = dpaa2_get_rxnfc,
3249 - .set_rxnfc = dpaa2_set_rxnfc,
3250 + .get_settings = dpaa2_eth_get_settings,
3251 + .set_settings = dpaa2_eth_set_settings,
3252 + .get_sset_count = dpaa2_eth_get_sset_count,
3253 + .get_ethtool_stats = dpaa2_eth_get_ethtool_stats,
3254 + .get_strings = dpaa2_eth_get_strings,
3255 + .get_rxnfc = dpaa2_eth_get_rxnfc,
3256 + .set_rxnfc = dpaa2_eth_set_rxnfc,
3257 };