From e588172442093fe22374dc1bfc88a7da751d6b30 Mon Sep 17 00:00:00 2001
From: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Date: Tue, 15 Sep 2015 10:14:16 -0500
Subject: [PATCH 201/226] staging: dpaa2-eth: initial commit of dpaa2-eth

commit 3106ece5d96784b63a4eabb26661baaefedd164f

This is a squash of the cumulative dpaa2-eth patches in the SDK 2.0
kernel as of 3/7/2016.

flib,dpaa2-eth: flib header update (Rebasing onto kernel 3.19, MC 0.6)

This patch was moved from the 4.0 branch.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
[Stuart: split into multiple patches]
Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>
Integrated-by: Jilong Guo <jilong.guo@nxp.com>

flib,dpaa2-eth: updated Eth (was: Rebasing onto kernel 3.19, MC 0.6)

Updated the Ethernet driver from the 4.0 branch.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
[Stuart: cherry-picked patch from 4.0 and split it up]
Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>

Conflicts:
	drivers/staging/Makefile

Signed-off-by: Stuart Yoder <stuart.yoder@nxp.com>

dpaa2-eth: Adjust 'options' size

The 'options' field of various MC configuration structures has changed
from u64 to u32 as of MC firmware version 7.0.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I9ba0c19fc22f745e6be6cc40862afa18fa3ac3db
Reviewed-on: http://git.am.freescale.net:8181/35579
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Selectively disable preemption

Temporary workaround for an MC Bus API quirk which only allows us to
specify either a spinlock-protected MC portal or a mutex-protected
one, but then tries to match the runtime context in order to enforce
their usage.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: Ida2ec1fdbdebfd2e427f96ddad7582880146fda9
Reviewed-on: http://git.am.freescale.net:8181/35580
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Fix ethtool bug

We were writing beyond the end of the allocated data area for ethtool
statistics.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Change-Id: I6b77498a78dad06970508ebbed7144be73854f7f
Reviewed-on: http://git.am.freescale.net:8181/35583
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Retry read if store unexpectedly empty

After we place a volatile dequeue command, we might get to inquire the
store before the DMA has actually completed. In such cases, we must
retry, lest the store be overwritten by the next legitimate volatile
dequeue.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I314fbb8b4d9f589715e42d35fc6677d726b8f5ba
Reviewed-on: http://git.am.freescale.net:8181/35584
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

flib: Fix "missing braces around initializer" warning

GCC does not (yet?) accept the = {0} initializer for an array of
structs without a warning. Fix the FLib in order to make the warning
go away.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I8782ecb714c032cfeeecf4c8323cf9dbb702b10f
Reviewed-on: http://git.am.freescale.net:8181/35586
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
Tested-by: Stuart Yoder <stuart.yoder@freescale.com>

Revert "dpaa2-eth: Selectively disable preemption"

This reverts commit e1455823c33b8dd48b5d2d50a7e8a11d3934cc0d.

dpaa2-eth: Fix memory leak

A buffer kmalloc'ed at probe time was not freed after it was no
longer needed.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Change-Id: Iba197209e9203ed306449729c6dcd23ec95f094d
Reviewed-on: http://git.am.freescale.net:8181/35756
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Remove unused field in ldpaa_eth_priv structure

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Change-Id: I124c3e4589b6420b1ea5cc05a03a51ea938b2bea
Reviewed-on: http://git.am.freescale.net:8181/35757
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Fix "NOHZ: local_softirq_pending" warning

Explicitly run softirqs after we enable NAPI. In particular this gets
rid of the "NOHZ: local_softirq_pending" warnings, but it also solves a
couple of other problems, among them fluctuating performance and high
CPU load.

Downsides:
- This will prevent us from timely processing notifications and
  other "non-frame events" coming into the software portal. So far,
  though, we only expect Dequeue Available Notifications, so this patch
  is good enough for now.
- A degradation in console responsiveness is expected, especially in
  cases where the bottom half runs on the same CPU as the console.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Change-Id: Ia6f11da433024e80ee59e821c9eabfa5068df5e5
Reviewed-on: http://git.am.freescale.net:8181/35830
Reviewed-by: Alexandru Marginean <Alexandru.Marginean@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
Tested-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Add polling mode for link state changes

Add a Kconfig option for using a thread to poll the link state
instead of relying on interrupts from the MC.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Change-Id: If2fe66fc5c0fbee2568d7afa15d43ea33f92e8e2
Reviewed-on: http://git.am.freescale.net:8181/35967
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Update copyright years.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I7e00eecfc5569027c908124726edaf06be357c02
Reviewed-on: http://git.am.freescale.net:8181/37666
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Drain bpools when netdev is down

In a data path layout with potentially a dozen interfaces, not all of
them may be up at the same time, yet they may consume a fair amount of
buffer space.
Drain the buffer pool upon ifdown and re-seed it at ifup.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I24a379b643c8b5161a33b966c3314cf91024ed4a
Reviewed-on: http://git.am.freescale.net:8181/37667
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Interrupts cleanup

Add the code for cleaning up interrupts on driver removal.
This was lost during the transition from kernel 3.16 to 3.19.

Also, there is no need to call devm_free_irq() if probe fails,
as the kernel will release all driver resources.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Change-Id: Ifd404bbf399d5ba62e2896371076719c1d6b4214
Reviewed-on: http://git.am.freescale.net:8181/36199
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Reviewed-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
Reviewed-on: http://git.am.freescale.net:8181/37690

dpaa2-eth: Ethtool support for hashing

Only one set of header fields is supported for all protocols; the
driver silently replaces the previous configuration regardless of the
user-selected protocol.

The following fields are supported:
L4 bytes 0 & 1 [TCP/UDP src port]
L4 bytes 2 & 3 [TCP/UDP dst port]

Signed-off-by: Alex Marginean <alexandru.marginean@freescale.com>
Change-Id: I97c9dac1b842fe6bc7115e40c08c42f67dee8c9c
Reviewed-on: http://git.am.freescale.net:8181/37260
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Fix maximum number of FQs

The maximum number of Rx/Tx conf FQs associated with a DPNI was not
updated when the implementation changed. It just happened to work.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I03e30e0121a40d0d15fcdc4bee1fb98caa17c0ef
Reviewed-on: http://git.am.freescale.net:8181/37668
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Fix Rx buffer address alignment

We need to align the start address of the Rx buffers to
LDPAA_ETH_BUF_ALIGN bytes. We were using SMP_CACHE_BYTES instead.
It happened to work because both defines have the value of 64,
but this may change at some point.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I08a0f3f18f82c5581c491bd395e3ad066b25bcf5
Reviewed-on: http://git.am.freescale.net:8181/37669
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Add buffer count to ethtool statistics

Print the number of buffers available in the pool for a certain DPNI
along with the rest of the ethtool -S stats.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: Ia1f5cf341c8414ae2058a73f6bc81490ef134592
Reviewed-on: http://git.am.freescale.net:8181/37671
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Add Rx error queue

Add a Kconfig option that allows Rx error frames to be
enqueued on an error FQ. By default error frames are discarded,
but for debug purposes we may want to process them at driver
level.

Note: Checkpatch issues a false positive about complex macros that
should be parenthesized.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I7d19d00b5d5445514ebd112c886ce8ccdbb1f0da
Reviewed-on: http://git.am.freescale.net:8181/37672
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
Tested-by: Stuart Yoder <stuart.yoder@freescale.com>

staging: fsl-dpaa2: FLib headers cleanup

Going with the flow of moving fsl-dpaa2 headers into the drivers'
location rather than keeping them all in one place.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: Ia2870cd019a4934c7835d38752a46b2a0045f30e
Reviewed-on: http://git.am.freescale.net:8181/37674
Reviewed-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
Tested-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Klocwork fixes

Fix several issues reported by Klocwork.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I1e23365765f3b0ff9b6474d8207df7c1f2433ccd
Reviewed-on: http://git.am.freescale.net:8181/37675
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Probe devices with no hash support

Don't fail at probe if the DPNI doesn't have the hash distribution
option enabled. Instead, initialize a single Rx frame queue and
use it for all incoming traffic.

Rx flow hashing configuration through ethtool will not work in this
case.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: Iaf17e05b15946e6901c39a21b5344b89e9f1d797
Reviewed-on: http://git.am.freescale.net:8181/37676
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Process frames in IRQ context

Stop using threaded IRQs and move back to hardirq top halves.
This is the first patch of a small series adapting the DPIO and Ethernet
code to these changes.

Signed-off-by: Roy Pledge <roy.pledge@freescale.com>
Tested-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Tested-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
[Stuart: split dpio and eth into separate patches, updated subject]
Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Fix bug in NAPI poll

We incorrectly rearmed FQDAN notifications at the end of a NAPI cycle,
regardless of whether the NAPI budget was consumed or not. We only need
to rearm notifications if the NAPI cycle cleaned fewer frames than its
budget; otherwise a new NAPI poll will be scheduled anyway.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: Ib55497bdbd769047420b3150668f2e2aef3c93f6
Reviewed-on: http://git.am.freescale.net:8181/38317
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Use dma_map_sg on Tx

Use the simpler dma_map_sg() along with the scatterlist API if the
egress frame is scatter-gather, at the cost of keeping some extra
information in the frame's software annotation area.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: If293aeabbd58d031f21456704357d4ff7e53c559
Reviewed-on: http://git.am.freescale.net:8181/37681
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Reduce retries if Tx portal busy

Too many retries due to Tx portal contention led to a significant
waste of cycles and a reduction in performance.
Reduce the number of enqueue retries and drop the frame if the
enqueue still fails.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: Ib111ec61cd4294a7632348c25fa3d7f4002be0c0
Reviewed-on: http://git.am.freescale.net:8181/37682
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Add sysfs support for TxConf affinity change

This adds support in sysfs for affining Tx Confirmation queues to GPPs,
via the affine DPIO objects.

The user can specify a cpu list in /sys/class/net/ni<X>/txconf_affinity
to which the Ethernet driver will affine the TxConf FQs, in round-robin
fashion. This is naturally a bit coarse, because there is no "official"
mapping of the transmitting CPUs to Tx Confirmation queues.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I4b3da632e202ceeb22986c842d746aafe2a87a81
Reviewed-on: http://git.am.freescale.net:8181/37684
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Implement ndo_select_queue

Use a very simple selection function for the egress FQ. The purpose
behind this is to more evenly distribute Tx Confirmation traffic,
especially in the case of multiple egress flows, when bundling it all on
CPU 0 would make that CPU a bottleneck.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: Ibfe8aad7ad5c719cc95d7817d7de6d2094f0f7ed
Reviewed-on: http://git.am.freescale.net:8181/37685
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Reduce TxConf NAPI weight back to 64

It turns out that not only did the kernel frown upon the old budget of
256, but the measured values were well below that anyway.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I62ddd3ea1dbfd8b51e2bcb2286e0d5eb10ac7f27
Reviewed-on: http://git.am.freescale.net:8181/37688
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Try refilling the buffer pool less often

We used to check if the buffer pool needs refilling at each Rx
frame. Instead, do that check (and the actual buffer release if
needed) only after a pull dequeue.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Change-Id: Id52fab83873c40a711b8cadfcf909eb7e2e210f3
Reviewed-on: http://git.am.freescale.net:8181/38318
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Stay in NAPI if exact budget is met

An off-by-one bug would cause premature exiting from the NAPI cycle.
Performance degradation is particularly severe in IPFWD cases.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Tested-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I9de2580c7ff8e46cbca9613890b03737add35e26
Reviewed-on: http://git.am.freescale.net:8181/37908
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Minor changes to FQ stats

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Change-Id: I0ced0e7b2eee28599cdea79094336c0d44f0d32b
Reviewed-on: http://git.am.freescale.net:8181/38319
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Support fewer DPIOs than CPUs

The previous DPIO functions would transparently choose a (perhaps
non-affine) CPU if the required CPU was not available. Now that their API
contract is enforced, we must make an explicit request for *any* DPIO if
the request for an *affine* DPIO has failed.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: Ib08047ffa33518993b1ffa4671d0d4f36d6793d0
Reviewed-on: http://git.am.freescale.net:8181/38320
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: cosmetic changes in hashing code

Signed-off-by: Alex Marginean <alexandru.marginean@freescale.com>
Change-Id: I79e21a69a6fb68cdbdb8d853c059661f8988dbf9
Reviewed-on: http://git.am.freescale.net:8181/37258
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Prefetch data before initial access

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Change-Id: Ie8f0163651aea7e3e197a408f89ca98d296d4b8b
Reviewed-on: http://git.am.freescale.net:8181/38753
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Use netif_receive_skb

netif_rx() is a leftover from our pre-NAPI codebase.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Change-Id: I02ff0a059862964df1bf81b247853193994c2dfe
Reviewed-on: http://git.am.freescale.net:8181/38754
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Use napi_alloc_frag() on Rx.

It is a bit better suited than netdev_alloc_frag().

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Change-Id: I8863a783502db963e5dc968f049534c36ad484e2
Reviewed-on: http://git.am.freescale.net:8181/38755
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Silence skb_realloc_headroom() warning

pktgen tests tend to be too noisy because pktgen does not observe the
net device's needed_headroom specification, and we used to be pretty
loud about that. Print the warning message just once.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I3c12eba29c79aa9c487307d367f6d9f4dbe447a3
Reviewed-on: http://git.am.freescale.net:8181/38756
Reviewed-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@freescale.com>
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Print message upon device unplugging

Give a console notification when a DPNI is unplugged. This is useful for
automated tests to know the operation (which is not instantaneous) has
completed.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: If33033201fcee7671ad91c2b56badf3fb56a9e3e
Reviewed-on: http://git.am.freescale.net:8181/38757
Reviewed-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@freescale.com>
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Add debugfs support

Add debugfs entries for showing detailed per-CPU and per-FQ
counters for each network interface. Also add a knob for
resetting these stats.
The aggregated interface statistics were already available through
ethtool.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I55f5bfe07a15b0d1bf0c6175d8829654163a4318
Reviewed-on: http://git.am.freescale.net:8181/38758
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
Tested-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: limited support for flow steering

Steering is supported on a subset of fields, including DMAC and IP SRC.

Steering and hashing configurations depend on each other, which makes
the whole thing tricky to configure. Currently FS can be configured
using only the fields selected for hashing, and all the hashing fields
must be included in the match key - masking doesn't work yet.

Signed-off-by: Alex Marginean <alexandru.marginean@freescale.com>
Change-Id: I9fa3199f7818a9a5f9d69d3483ffd839056cc468
Reviewed-on: http://git.am.freescale.net:8181/38759
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Rename files into the dpaa2 nomenclature

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I1c3d62e2f19a59d4b65727234fd7df2dfd8683d9
Reviewed-on: http://git.am.freescale.net:8181/38965
Reviewed-by: Alexandru Marginean <Alexandru.Marginean@freescale.com>
Reviewed-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
Tested-by: Stuart Yoder <stuart.yoder@freescale.com>

staging: dpaa2-eth: migrated remaining flibs for MC fw 8.0.0

Signed-off-by: J. German Rivera <German.Rivera@freescale.com>
[Stuart: split eth part into separate patch, updated subject]
Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Clear 'backup_pool' attribute

The new MC 0.7 firmware allows specifying an alternate buffer pool, but
we are not currently using that feature.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I0a6e6626512b7bbddfef732c71f1400b67f3e619
Reviewed-on: http://git.am.freescale.net:8181/39149
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Do programming of MSIs in devm_request_threaded_irq()

With the new dprc_set_obj_irq() we can now program MSIs in the device
in the callback invoked from devm_request_threaded_irq().
Since this callback is invoked with interrupts disabled, we need to
use an atomic portal instead of the root DPRC's built-in portal.

Signed-off-by: Itai Katz <itai.katz@freescale.com>
Signed-off-by: J. German Rivera <German.Rivera@freescale.com>
[Stuart: split original patch into multiple patches]
Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Do not map beyond skb tail

On Tx, do dma_map only up to skb->tail, rather than skb->end.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>

dpaa2-eth: Declare NETIF_F_LLTX as a capability

We are effectively doing lockless Tx.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>

dpaa2-eth: Avoid bitcopy of 'backpointers' struct

Make 'struct ldpaa_eth_swa bps' a pointer and avoid copying it on both
the Tx and Tx conf paths.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>

dpaa2-eth: Use CDANs instead of FQDANs

Use Channel Dequeue Available Notifications (CDANs) instead of
Frame Queue notifications. We allocate a QMan channel (or DPCON
object) for each available CPU and assign to it the Rx and Tx conf
queues associated with that CPU.

We usually want to have affine DPIOs and DPCONs (one for each core).
If this is not possible due to insufficient resources, we distribute
all ingress traffic on the cores with affine DPIOs.

NAPI instances are now one per channel instead of one per FQ, as the
interrupt source changes. Statistics counters change accordingly.

Note that after this commit is applied, one needs to provide sufficient
DPCON objects (either through the DPL or restool) in order for the
Ethernet driver to probe.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>

dpaa2-eth: Cleanup debugfs statistics

Several minor changes to statistics reporting:
* Fix print alignment of statistics counters
* Fix a naming ambiguity in the cpu_stats debugfs ops
* Add Rx/Tx error counters; these were already used, but not
  reported in the per-CPU stats

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>

dpaa2-eth: Add tx shaping configuration in sysfs

Egress traffic can be shaped via a per-DPNI sysfs entry:
echo M N > /sys/class/net/ni<X>/tx_shaping

M is the maximum throughput, expressed in Mbps.
N is the maximum burst size, expressed in bytes, at most 64000.

To remove shaping, use M=0, N=0.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>

dpaa2-eth: Fix "Tx busy" counter

Under heavy egress load, when a large number of the transmitted packets
cannot be sent because of high portal contention, the "Tx busy" counter
was not properly incremented.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>

dpaa2-eth: Fix memory cleanup in case of Tx congestion

The error path of ldpaa_eth_tx() was not properly freeing the SGT buffer
if the enqueue had failed because of congestion. DMA unmapping was
missing as well.

Factor the code originally inside the TxConf callback out into a
separate function that is called on both the TxConf and Tx paths.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>

dpaa2-eth: Use napi_gro_receive()

Call napi_gro_receive(), effectively enabling GRO.
NOTE: We could further optimize this by looking ahead in the parse
results received from hardware and only using GRO when the L3+L4
combination is appropriate.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>

dpaa2-eth: Fix compilation of Rx Error FQ code

Conditionally compiled code slipped through the cracks when the FLibs
were updated.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>

fsl-dpaa2: Add Kconfig dependency on DEBUG_FS

The driver's debugfs support depends on the generic CONFIG_DEBUG_FS.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>

dpaa2-eth: Fix interface down/up bug

If a networking interface was brought down while still receiving
ingress traffic, the delay between DPNI disable and NAPI disable
was not enough to ensure all in-flight frames got processed.
Instead, some frames were left pending in the Rx queues. If the
net device was then removed (i.e. restool unbind/unplug), the
call to dpni_reset() silently failed and the kernel crashed.

Fix this by increasing the FQ drain time. Also, at ifconfig up
we enable NAPI before starting the DPNI, to make sure we don't
miss any early CDANs.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>

dpaa2-eth: Iterate only through initialized channels

The number of DPIO objects available to a DPNI may be fewer than the
number of online cores. A typical example would be a DPNI with a
distribution size smaller than 8. Since we only initialize as many
channels (DPCONs) as there are DPIOs, iterating through all online cpus
would produce a nasty oops when retrieving ethtool stats.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>

net: pktgen: Observe needed_headroom of the device

Allocate enough space so as not to force the outgoing net device to do
skb_realloc_headroom().

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

dpaa2-eth: Trace buffer pool seeding

Add ftrace support for buffer pool seeding. Individual buffers are
described by virtual and dma addresses and sizes, as well as by bpid.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>

dpaa2-eth: Explicitly set carrier off at ifconfig up

If we don't, netif_carrier_ok() will still return true even if the link
state is marked as LINKWATCH_PENDING, which in a dpni-2-dpni case may
last indefinitely. This will cause "ifconfig up" followed by "ip
link show" to report LOWER_UP when the peer DPNI is still down (and in
fact before we've even received any link notification at all).

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>

749 dpaa2-eth: Fix FQ type in stats print
751 Fix a bug where the type of the Rx error queue was printed
752 incorrectly in the debugfs statistics
754 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
756 dpaa2-eth: Don't build debugfs support as a separate module
758 Instead have module init and exit functions declared explicitly for
759 the Ethernet driver and initialize/destroy the debugfs directory there.
761 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
763 dpaa2-eth: Remove debugfs #ifdefs from dpaa2-eth.c
765 Instead of conditionally compiling the calls to debugfs init
766 functions in dpaa2-eth.c, define no-op stubs for these functions
767 in case the debugfs Kconfig option is not enabled. This makes
768 the code more readable.
770 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
771 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
773 dpaa2-eth: Use napi_complete_done()
775 Replace napi_complete() with napi_complete_done().
777 Together with setting /sys/class/net/ethX/gro_flush_timeout, this
778 allows us to take better advantage of GRO coalescing, improving
779 throughput and reducing cpu load in TCP termination tests.
781 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
783 dpaa2-eth: Fix error path in probe
785 NAPI delete was called at the wrong place when exiting the probe
786 function on an error path.
788 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
790 dpaa2-eth: Allocate channels based on queue count
792 Limit the number of channels allocated per DPNI to the maximum
793 between the number of Rx queues per traffic class (distribution size)
794 and Tx confirmation queues (number of tx flows).
795 If this happens to be larger than the number of available cores, only
796 allocate one channel for each core and distribute the frame queues on
797 the cores/channels in a round robin fashion.
799 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
800 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
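The round-robin fallback described above can be modeled in plain C. This is a userspace sketch only; `fq_to_channel` and its parameters are illustrative names, not the driver's actual identifiers:

```c
/* Userspace model of the described policy (names are illustrative,
 * not the driver's identifiers): when there are more frame queues
 * than channels, queue i is serviced by channel i % num_channels,
 * spreading the queues over the available channels round-robin. */
static int fq_to_channel(int fq_index, int num_channels)
{
	return fq_index % num_channels;
}
```

With 8 Rx queues and 4 channels, queues 0 and 4 land on channel 0, queues 1 and 5 on channel 1, and so on.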
802 dpaa2-eth: Use DPNI setting for number of Tx flows
804 Instead of creating one Tx flow for each online cpu, use the DPNI
805 attributes for deciding how many senders we have.
807 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
809 dpaa2-eth: Renounce sentinel in enum dpni_counter
811 Bring back the Flib header dpni.h to its initial content by removing the
812 sentinel value in enum dpni_counter.
814 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
816 dpaa2-eth: Fix Rx queue count
818 We were missing a roundup to the next power of 2 in order to be in sync
819 with the MC implementation.
820 Also, move that logic into a separate function, which we'll remove
821 once the MC API is updated.
823 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
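The missing adjustment amounts to the kernel's roundup_pow_of_two(); a plain-C equivalent of that rounding (shown only to illustrate the fix) looks like:

```c
/* Plain-C equivalent of the kernel's roundup_pow_of_two(): the Rx
 * queue count must be rounded up to the next power of 2 to stay in
 * sync with the MC implementation. (This sketch returns 1 for n == 0;
 * the kernel helper's behavior for 0 is undefined.) */
static unsigned int roundup_pow_of_two_u32(unsigned int n)
{
	unsigned int p = 1;

	while (p < n)
		p <<= 1;
	return p;
}
```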
825 dpaa2-eth: Unmap the S/G table outside ldpaa_eth_free_rx_fd
827 The Scatter-Gather table is already unmapped outside ldpaa_eth_free_rx_fd
828 so there is no need to unmap it again.
830 Signed-off-by: Cristian Sovaiala <cristian.sovaiala@freescale.com>
832 dpaa2-eth: Use napi_schedule_irqoff()
834 At the time we schedule NAPI, the Dequeue Available Notifications (which
835 are the de facto triggers of NAPI processing) are already disabled.
837 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
838 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
840 net: Fix ethernet Kconfig
842 Re-add missing 'source' directive. This exists on the integration
843 branch, but was mistakenly removed by an earlier dpaa2-eth rebase.
845 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
847 dpaa2-eth: Manually update link state at ifup
849 The DPMAC may have handled the link state notification before the DPNI
850 is up. A new PHY state transition may not subsequently occur, so the
851 DPNI must initiate a read of the DPMAC state.
853 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
855 dpaa2-eth: Stop carrier upon ifdown
857 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
859 dpaa2-eth: Fix print messages in link state handling code
861 Avoid an "(uninitialized)" message during DPNI probe by replacing
862 netdev_info() with its corresponding dev_info().
863 Purge some related comments and add some netdev messages to assist
864 link state debugging.
865 Remove an excessively defensive assertion.
867 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
869 dpaa2-eth: Do not allow ethtool settings change while the NI is up
871 Due to an MC limitation, link state changes while the DPNI is enabled
872 will fail. For now, we'll just prevent the call from going down to the MC
873 if we know it will fail.
875 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
877 dpaa2-eth: Reduce ethtool messages verbosity
879 Transform a couple of netdev_info() calls into netdev_dbg().
881 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
883 dpaa2-eth: Only unmask IRQs that we actually handle
885 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
887 dpaa2-eth: Produce fewer boot log messages
889 No longer print one line for each all-zero hwaddr that was replaced with
890 a random MAC address; just inform the user once that this has occurred.
891 Also reduce the redundancy of some boot log printouts.
893 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
895 dpaa2-eth: Fix big endian issue
897 We were not doing any endianness conversions on the scatter gather
898 table entries, which caused problems on big endian kernels.
900 For frame descriptors the QMan driver takes care of this transparently,
901 but in the case of SG entries we need to do it ourselves.
903 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
905 dpaa2-eth: Force atomic context for lazy bpool seeding
907 We use the same ldpaa_bp_add_7() function for initial buffer pool
908 seeding (from .ndo_open) and for hotpath pool replenishing. The function
909 is using napi_alloc_frag() as an optimization for the Rx datapath, but
910 that turns out to require atomic execution because of a this_cpu_ptr()
911 call.
912 This patch temporarily disables preemption around the initial seeding of
913 the buffer pool.
915 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
917 dpaa-eth: Integrate Flib version 0.7.1.2
919 Although API-compatible with 0.7.1.1, there are some ABI changes
920 that warrant a new integration.
922 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
924 dpaa2-eth: No longer adjust max_dist_per_tc
926 MC firmware versions up to 0.7.1.1/8.0.2 require that
927 max_dist_per_tc have the value expected by the hardware, which would be
928 different from what the user expects. MC firmware 0.7.1.2/8.0.5 fixes
929 that, so we remove our transparent conversion.
931 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
933 dpaa2-eth: Enforce 256-byte Rx alignment
935 Hardware erratum enforced by MC requires that Rx buffer lengths and
936 addresses be 256-byte aligned.
938 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
940 dpaa2-eth: Rename Tx buf alignment macro
942 The existing "BUF_ALIGN" macro remained confined to Tx usage, after
943 separate alignment was introduced for Rx. Rename it accordingly.
945 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
947 dpaa2-eth: Fix hashing distribution size
949 Commit be3fb62623e4338e60fb60019f134b6055cbc127
950 Author: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
951 Date: Fri Oct 23 18:26:44 2015 +0300
953 dpaa2-eth: No longer adjust max_dist_per_tc
955 missed one usage of the ldpaa_queue_count() function, making
956 distribution size inadvertently lower.
958 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
960 dpaa2-eth: Remove ndo_select_queue
962 Our implementation of ndo_select_queue would lead to questions regarding
963 our support for qdiscs. Until we find an optimal way to select the txq
964 without breaking future qdisc integration, just remove the
965 ndo_select_queue callback entirely and let the stack figure out the
966 queue mapping.
967 This incurs a ~2-3% penalty on some performance tests.
969 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
971 dpaa2-eth: Select TxConf FQ based on processor id
973 Use smp_processor_id instead of skb queue mapping to determine the tx
974 flow id and implicitly the confirmation queue.
976 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
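The selection policy above reduces to a modulo over the number of Tx flows; a userspace model (names are illustrative, not the driver's) of picking the flow id, and hence the confirmation queue, from the CPU id:

```c
/* Model of the policy described above (names are illustrative): the Tx
 * flow id -- and implicitly the Tx confirmation queue -- is derived
 * from the id of the CPU doing the transmit rather than from the skb
 * queue mapping, wrapped to the number of configured Tx flows. */
static unsigned int tx_flow_for_cpu(unsigned int cpu, unsigned int num_tx_flows)
{
	return cpu % num_tx_flows;
}
```

Tying the flow to the transmitting CPU keeps confirmations on the same core that sent the frame, avoiding cross-core traffic for Tx cleanup.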
978 dpaa2-eth: Reduce number of buffers in bpool
980 Reduce the maximum number of buffers in each buffer pool associated
981 with a DPNI. This in turn reduces the number of memory allocations
982 performed in a single batch when buffers fall below a certain
983 threshold.
985 Provides a significant performance boost (~5-10% increase) on both
986 termination and forwarding scenarios, while also reducing the driver
987 memory footprint.
989 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
991 dpaa2-eth: Replace "ldpaa" with "dpaa2"
993 Replace all instances of "ldpaa"/"LDPAA" in the Ethernet driver
994 (names of functions, structures, macros, etc), with "dpaa2"/"DPAA2",
995 except for DPIO API function calls.
997 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
999 dpaa2-eth: rename ldpaa to dpaa2
1001 Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>
1002 (Stuart: this patch was split out from the original global rename patch)
1003 Signed-off-by: Stuart Yoder <stuart.yoder@nxp.com>
1005 dpaa2-eth: Rename dpaa_io_query_fq_count to dpaa2_io_query_fq_count
1007 Signed-off-by: Cristian Sovaiala <cristian.sovaiala@freescale.com>
1009 fsl-dpio: rename dpaa_* structure to dpaa2_*
1011 Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>
1013 dpaa2-eth, dpni, fsl-mc: Updates for MC0.8.0
1015 Several changes need to be performed in sync for supporting
1016 the newest MC version:
1018 * Update the dpni binary interface to v6.0
1019 * Update the DPAA2 Eth driver to account for several API changes
1021 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1023 staging: fsl-dpaa2: ethernet: add support for hardware timestamping
1025 Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com>
1027 fsl-dpaa2: eth: Do not set bpid in egress fd
1029 We don't do FD recycling on egress, so the BPID is not necessary.
1031 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1033 fsl-dpaa2: eth: Amend buffer refill comment
1035 A change request has been pending for placing an upper bound on the
1036 buffer replenish logic on Rx. However, short of practical alternatives,
1037 resort to amending the relevant comment and rely on ksoftirqd to
1038 guarantee interactivity.
1040 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1042 fsl-dpaa2: eth: Configure a taildrop threshold for each Rx frame queue.
1044 The selected value allows for Rx jumbo (10K) frame processing
1045 while at the same time helping to balance the system in the case of
1048 Also compute the number of buffers in the pool based on the TD
1049 threshold to avoid starving some of the ingress queues in
1050 small-frame, high-throughput scenarios.
1052 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1054 fsl-dpaa2: eth: Check objects' FLIB version
1056 Make sure we support the DPNI, DPCON and DPBP versions, otherwise
1057 abort probing early on and provide an error message.
1059 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1061 fsl-dpaa2: eth: Remove likely/unlikely from cold paths
1063 Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
1064 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1066 fsl-dpaa2: eth: Remove __cold attribute
1068 Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
1070 fsl-dpaa2: eth: Replace netdev_XXX with dev_XXX before register_netdevice()
1072 Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
1073 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1075 fsl-dpaa2: eth: Fix coccinelle issue
1077 drivers/staging/fsl-dpaa2/ethernet/dpaa2-ethtool.c:687:1-36: WARNING:
1078 Assignment of bool to 0/1
1080 Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
1082 fsl-dpaa2: eth: Fix minor spelling issue
1084 Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
1086 fsl-dpaa2: eth: Add a couple of 'unlikely' on hot path
1088 Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
1089 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1091 fsl-dpaa2: eth: Fix a bunch of minor issues found by static analysis tools
1093 As found by Klocwork and checkpatch:
1095 - Integer type replacements
1096 - Unchecked memory allocations
1097 - Whitespace, alignment and newlining
1099 Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
1100 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1102 fsl-dpaa2: eth: Remove "inline" keyword from static functions
1104 Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
1106 fsl-dpaa2: eth: Remove BUG/BUG_ONs
1108 Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
1109 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1111 fsl-dpaa2: eth: Use NAPI_POLL_WEIGHT
1113 No need to define our own macro as long as we're using the
1114 default value of 64.
1116 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1118 dpaa2-eth: Move dpaa2_eth_swa structure to header file
1120 It was the only structure defined inside dpaa2-eth.c
1122 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1124 fsl-dpaa2: eth: Replace uintX_t with uX
1126 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1127 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1129 fsl-dpaa2: eth: Minor fixes & cosmetics
1131 - Make driver log level an int, because this is what
1132 netif_msg_init expects.
1133 - Remove driver description macro as it was used only once,
1134 immediately after being defined.
1135 - Remove include comment
1137 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1139 dpaa2-eth: Move bcast address setup to dpaa2_eth_netdev_init
1141 It seems to fit better there than directly in probe.
1143 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1145 dpaa2-eth: Fix DMA mapping bug
1147 During hashing/flow steering configuration via ethtool, we were
1148 doing a DMA unmap from the wrong address. Fix the issue by using
1149 the DMA address that was initially mapped.
1151 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1153 dpaa2-eth: Associate buffer counting to queues instead of cpu
1155 Move the buffer counters from being percpu variables to being
1156 associated with QMan channels. This is more natural as we need
1157 to dimension the buffer pool count based on distribution size
1158 rather than number of online cores.
1160 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1162 fsl-dpaa2: eth: Provide driver and fw version to ethtool
1164 Read fw version from the MC and interpret DPNI FLib major.minor as the
1165 driver's version. Report these in 'ethtool -i'.
1167 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1169 fsl-dpaa2: eth: Remove dependency on GCOV_KERNEL
1171 Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
1173 fsl-dpaa2: eth: Remove FIXME/TODO comments from the code
1175 Some of the concerns had already been addressed, and a couple are being
1177 Left a few TODOs related to the flow-steering code, which needs to be
1178 revisited before upstreaming anyway.
1180 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1182 fsl-dpaa2: eth: Remove forward declarations
1184 Instead, move the functions such that they are defined prior to
1185 their callers.
1187 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1189 fsl-dpaa2: eth: Remove dead code in IRQ handler
1191 If any of those conditions were met, it is unlikely we'd ever be there
1194 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1196 fsl-dpaa2: eth: Remove dpaa2_dpbp_drain()
1198 Its sole caller was __dpaa2_dpbp_free(), so move its content and get rid
1199 of one function call.
1201 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1203 fsl-dpaa2: eth: Remove duplicate define
1205 We somehow ended up with two defines for the maximum number
1208 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1210 fsl-dpaa2: eth: Move header comment to .c file
1212 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1214 fsl-dpaa2: eth: Make DPCON allocation failure produce a benign message
1216 The number of DPCONs may be smaller than the number of CPUs in several
1217 valid scenarios. One such scenario is when the DPNI's distribution width
1218 is smaller than the number of cores and we just don't want to
1219 over-allocate DPCONs.
1220 Make the DPCON allocation failure less menacing by changing the logged
1223 While at it, remove an unused parameter from a function prototype.
1225 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1227 dpaa2 eth: irq update
1229 Signed-off-by: Stuart Yoder <stuart.yoder@nxp.com>
1232 drivers/staging/Kconfig
1233 drivers/staging/Makefile
1236 drivers/staging/Kconfig | 2 +
1237 drivers/staging/Makefile | 1 +
1238 drivers/staging/fsl-dpaa2/Kconfig | 11 +
1239 drivers/staging/fsl-dpaa2/Makefile | 5 +
1240 drivers/staging/fsl-dpaa2/ethernet/Kconfig | 42 +
1241 drivers/staging/fsl-dpaa2/ethernet/Makefile | 21 +
1242 .../staging/fsl-dpaa2/ethernet/dpaa2-eth-debugfs.c | 319 +++
1243 .../staging/fsl-dpaa2/ethernet/dpaa2-eth-debugfs.h | 61 +
1244 .../staging/fsl-dpaa2/ethernet/dpaa2-eth-trace.h | 185 ++
1245 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c | 2793 ++++++++++++++++++++
1246 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.h | 366 +++
1247 drivers/staging/fsl-dpaa2/ethernet/dpaa2-ethtool.c | 882 +++++++
1248 drivers/staging/fsl-dpaa2/ethernet/dpkg.h | 175 ++
1249 drivers/staging/fsl-dpaa2/ethernet/dpni-cmd.h | 1058 ++++++++
1250 drivers/staging/fsl-dpaa2/ethernet/dpni.c | 1907 +++++++++++++
1251 drivers/staging/fsl-dpaa2/ethernet/dpni.h | 2581 ++++++++++++++++++
1252 drivers/staging/fsl-mc/include/mc-cmd.h | 5 +-
1253 drivers/staging/fsl-mc/include/net.h | 481 ++++
1254 net/core/pktgen.c | 1 +
1255 20 files changed, 10910 insertions(+), 1 deletion(-)
1256 create mode 100644 drivers/staging/fsl-dpaa2/Kconfig
1257 create mode 100644 drivers/staging/fsl-dpaa2/Makefile
1258 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/Kconfig
1259 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/Makefile
1260 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-debugfs.c
1261 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-debugfs.h
1262 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-trace.h
1263 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
1264 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.h
1265 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpaa2-ethtool.c
1266 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpkg.h
1267 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpni-cmd.h
1268 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpni.c
1269 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpni.h
1270 create mode 100644 drivers/staging/fsl-mc/include/net.h
1274 @@ -4539,6 +4539,21 @@ L: linux-kernel@vger.kernel.org
1276 F: drivers/staging/fsl-mc/
1278 +FREESCALE DPAA2 ETH DRIVER
1279 +M: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1280 +M: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
1281 +M: Cristian Sovaiala <cristian.sovaiala@freescale.com>
1282 +L: linux-kernel@vger.kernel.org
1284 +F: drivers/staging/fsl-dpaa2/ethernet/
1286 +FREESCALE QORIQ MANAGEMENT COMPLEX RESTOOL DRIVER
1287 +M: Lijun Pan <Lijun.Pan@freescale.com>
1288 +L: linux-kernel@vger.kernel.org
1290 +F: drivers/staging/fsl-mc/bus/mc-ioctl.h
1291 +F: drivers/staging/fsl-mc/bus/mc-restool.c
1294 M: Christoph Hellwig <hch@infradead.org>
1295 W: ftp://ftp.openlinux.org/pub/people/hch/vxfs
1296 --- a/drivers/staging/Kconfig
1297 +++ b/drivers/staging/Kconfig
1298 @@ -114,4 +114,6 @@ source "drivers/staging/most/Kconfig"
1300 source "drivers/staging/fsl_ppfe/Kconfig"
1302 +source "drivers/staging/fsl-dpaa2/Kconfig"
1305 --- a/drivers/staging/Makefile
1306 +++ b/drivers/staging/Makefile
1307 @@ -49,3 +49,4 @@ obj-$(CONFIG_FSL_DPA) += fsl_q
1308 obj-$(CONFIG_WILC1000) += wilc1000/
1309 obj-$(CONFIG_MOST) += most/
1310 obj-$(CONFIG_FSL_PPFE) += fsl_ppfe/
1311 +obj-$(CONFIG_FSL_DPAA2) += fsl-dpaa2/
1313 +++ b/drivers/staging/fsl-dpaa2/Kconfig
1316 +# Freescale device configuration
1320 + bool "Freescale DPAA2 devices"
1321 + depends on FSL_MC_BUS
1323 + Build drivers for Freescale DataPath Acceleration Architecture (DPAA2) family of SoCs.
1324 +# TODO move DPIO driver in-here?
1325 +source "drivers/staging/fsl-dpaa2/ethernet/Kconfig"
1327 +++ b/drivers/staging/fsl-dpaa2/Makefile
1330 +# Makefile for the Freescale network device drivers.
1333 +obj-$(CONFIG_FSL_DPAA2_ETH) += ethernet/
1335 +++ b/drivers/staging/fsl-dpaa2/ethernet/Kconfig
1338 +# Freescale DPAA Ethernet driver configuration
1340 +# Copyright (C) 2014-2015 Freescale Semiconductor, Inc.
1342 +# This file is released under the GPLv2
1345 +menuconfig FSL_DPAA2_ETH
1346 + tristate "Freescale DPAA2 Ethernet"
1347 + depends on FSL_DPAA2 && FSL_MC_BUS && FSL_MC_DPIO
1348 + select FSL_DPAA2_MAC
1351 + Freescale Data Path Acceleration Architecture Ethernet
1352 + driver, using the Freescale MC bus driver.
1355 +config FSL_DPAA2_ETH_LINK_POLL
1356 + bool "Use polling mode for link state"
1359 + Poll for detecting link state changes instead of using
1362 +config FSL_DPAA2_ETH_USE_ERR_QUEUE
1363 + bool "Enable Rx error queue"
1366 + Allow Rx error frames to be enqueued on an error queue
1367 + and processed by the driver (by default they are dropped
1369 + This may impact performance, recommended for debugging
1372 +config FSL_DPAA2_ETH_DEBUGFS
1373 + depends on DEBUG_FS && FSL_QBMAN_DEBUG
1374 + bool "Enable debugfs support"
1377 + Enable advanced statistics through debugfs interface.
1380 +++ b/drivers/staging/fsl-dpaa2/ethernet/Makefile
1383 +# Makefile for the Freescale DPAA Ethernet controllers
1385 +# Copyright (C) 2014-2015 Freescale Semiconductor, Inc.
1387 +# This file is released under the GPLv2
1390 +ccflags-y += -DVERSION=\"\"
1392 +obj-$(CONFIG_FSL_DPAA2_ETH) += fsl-dpaa2-eth.o
1394 +fsl-dpaa2-eth-objs := dpaa2-eth.o dpaa2-ethtool.o dpni.o
1395 +fsl-dpaa2-eth-${CONFIG_FSL_DPAA2_ETH_DEBUGFS} += dpaa2-eth-debugfs.o
1397 +#Needed by the tracing framework
1398 +CFLAGS_dpaa2-eth.o := -I$(src)
1400 +ifeq ($(CONFIG_FSL_DPAA2_ETH_GCOV),y)
1404 +++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-debugfs.c
1407 +/* Copyright 2015 Freescale Semiconductor Inc.
1409 + * Redistribution and use in source and binary forms, with or without
1410 + * modification, are permitted provided that the following conditions are met:
1411 + * * Redistributions of source code must retain the above copyright
1412 + * notice, this list of conditions and the following disclaimer.
1413 + * * Redistributions in binary form must reproduce the above copyright
1414 + * notice, this list of conditions and the following disclaimer in the
1415 + * documentation and/or other materials provided with the distribution.
1416 + * * Neither the name of Freescale Semiconductor nor the
1417 + * names of its contributors may be used to endorse or promote products
1418 + * derived from this software without specific prior written permission.
1421 + * ALTERNATIVELY, this software may be distributed under the terms of the
1422 + * GNU General Public License ("GPL") as published by the Free Software
1423 + * Foundation, either version 2 of that License or (at your option) any
1426 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
1427 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
1428 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
1429 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
1430 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
1431 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
1432 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
1433 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
1434 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
1435 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
1439 +#include <linux/module.h>
1440 +#include <linux/debugfs.h>
1441 +#include "dpaa2-eth.h"
1442 +#include "dpaa2-eth-debugfs.h"
1444 +#define DPAA2_ETH_DBG_ROOT "dpaa2-eth"
1447 +static struct dentry *dpaa2_dbg_root;
1449 +static int dpaa2_dbg_cpu_show(struct seq_file *file, void *offset)
1451 + struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)file->private;
1452 + struct rtnl_link_stats64 *stats;
1453 + struct dpaa2_eth_stats *extras;
1456 + seq_printf(file, "Per-CPU stats for %s\n", priv->net_dev->name);
1457 + seq_printf(file, "%s%16s%16s%16s%16s%16s%16s%16s%16s\n",
1458 + "CPU", "Rx", "Rx Err", "Rx SG", "Tx", "Tx Err", "Tx conf",
1459 + "Tx SG", "Enq busy");
1461 + for_each_online_cpu(i) {
1462 + stats = per_cpu_ptr(priv->percpu_stats, i);
1463 + extras = per_cpu_ptr(priv->percpu_extras, i);
1464 + seq_printf(file, "%3d%16llu%16llu%16llu%16llu%16llu%16llu%16llu%16llu\n",
1466 + stats->rx_packets,
1468 + extras->rx_sg_frames,
1469 + stats->tx_packets,
1471 + extras->tx_conf_frames,
1472 + extras->tx_sg_frames,
1473 + extras->tx_portal_busy);
1479 +static int dpaa2_dbg_cpu_open(struct inode *inode, struct file *file)
1482 + struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)inode->i_private;
1484 + err = single_open(file, dpaa2_dbg_cpu_show, priv);
1486 + netdev_err(priv->net_dev, "single_open() failed\n");
1491 +static const struct file_operations dpaa2_dbg_cpu_ops = {
1492 + .open = dpaa2_dbg_cpu_open,
1494 + .llseek = seq_lseek,
1495 + .release = single_release,
1498 +static char *fq_type_to_str(struct dpaa2_eth_fq *fq)
1500 + switch (fq->type) {
1503 + case DPAA2_TX_CONF_FQ:
1505 + case DPAA2_RX_ERR_FQ:
1512 +static int dpaa2_dbg_fqs_show(struct seq_file *file, void *offset)
1514 + struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)file->private;
1515 + struct dpaa2_eth_fq *fq;
1519 + seq_printf(file, "FQ stats for %s:\n", priv->net_dev->name);
1520 + seq_printf(file, "%s%16s%16s%16s%16s\n",
1521 + "VFQID", "CPU", "Type", "Frames", "Pending frames");
1523 + for (i = 0; i < priv->num_fqs; i++) {
1524 + fq = &priv->fq[i];
1525 + err = dpaa2_io_query_fq_count(NULL, fq->fqid, &fcnt, &bcnt);
1529 + seq_printf(file, "%5d%16d%16s%16llu%16u\n",
1532 + fq_type_to_str(fq),
1540 +static int dpaa2_dbg_fqs_open(struct inode *inode, struct file *file)
1543 + struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)inode->i_private;
1545 + err = single_open(file, dpaa2_dbg_fqs_show, priv);
1547 + netdev_err(priv->net_dev, "single_open() failed\n");
1552 +static const struct file_operations dpaa2_dbg_fq_ops = {
1553 + .open = dpaa2_dbg_fqs_open,
1555 + .llseek = seq_lseek,
1556 + .release = single_release,
1559 +static int dpaa2_dbg_ch_show(struct seq_file *file, void *offset)
1561 + struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)file->private;
1562 + struct dpaa2_eth_channel *ch;
1565 + seq_printf(file, "Channel stats for %s:\n", priv->net_dev->name);
1566 + seq_printf(file, "%s%16s%16s%16s%16s%16s\n",
1567 + "CHID", "CPU", "Deq busy", "Frames", "CDANs",
1570 + for (i = 0; i < priv->num_channels; i++) {
1571 + ch = priv->channel[i];
1572 + seq_printf(file, "%4d%16d%16llu%16llu%16llu%16llu\n",
1574 + ch->nctx.desired_cpu,
1575 + ch->stats.dequeue_portal_busy,
1578 + ch->stats.frames / ch->stats.cdan);
1584 +static int dpaa2_dbg_ch_open(struct inode *inode, struct file *file)
1587 + struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)inode->i_private;
1589 + err = single_open(file, dpaa2_dbg_ch_show, priv);
1591 + netdev_err(priv->net_dev, "single_open() failed\n");
1596 +static const struct file_operations dpaa2_dbg_ch_ops = {
1597 + .open = dpaa2_dbg_ch_open,
1599 + .llseek = seq_lseek,
1600 + .release = single_release,
1603 +static ssize_t dpaa2_dbg_reset_write(struct file *file, const char __user *buf,
1604 + size_t count, loff_t *offset)
1606 + struct dpaa2_eth_priv *priv = file->private_data;
1607 + struct rtnl_link_stats64 *percpu_stats;
1608 + struct dpaa2_eth_stats *percpu_extras;
1609 + struct dpaa2_eth_fq *fq;
1610 + struct dpaa2_eth_channel *ch;
1613 + for_each_online_cpu(i) {
1614 + percpu_stats = per_cpu_ptr(priv->percpu_stats, i);
1615 + memset(percpu_stats, 0, sizeof(*percpu_stats));
1617 + percpu_extras = per_cpu_ptr(priv->percpu_extras, i);
1618 + memset(percpu_extras, 0, sizeof(*percpu_extras));
1621 + for (i = 0; i < priv->num_fqs; i++) {
1622 + fq = &priv->fq[i];
1623 + memset(&fq->stats, 0, sizeof(fq->stats));
1626 + for_each_cpu(i, &priv->dpio_cpumask) {
1627 + ch = priv->channel[i];
1628 + memset(&ch->stats, 0, sizeof(ch->stats));
1634 +static const struct file_operations dpaa2_dbg_reset_ops = {
1635 + .open = simple_open,
1636 + .write = dpaa2_dbg_reset_write,
1639 +void dpaa2_dbg_add(struct dpaa2_eth_priv *priv)
1641 + if (!dpaa2_dbg_root)
1644 + /* Create a directory for the interface */
1645 + priv->dbg.dir = debugfs_create_dir(priv->net_dev->name,
1647 + if (!priv->dbg.dir) {
1648 + netdev_err(priv->net_dev, "debugfs_create_dir() failed\n");
1652 + /* per-cpu stats file */
1653 + priv->dbg.cpu_stats = debugfs_create_file("cpu_stats", S_IRUGO,
1654 + priv->dbg.dir, priv,
1655 + &dpaa2_dbg_cpu_ops);
1656 + if (!priv->dbg.cpu_stats) {
1657 + netdev_err(priv->net_dev, "debugfs_create_file() failed\n");
1658 + goto err_cpu_stats;
1661 + /* per-fq stats file */
1662 + priv->dbg.fq_stats = debugfs_create_file("fq_stats", S_IRUGO,
1663 + priv->dbg.dir, priv,
1664 + &dpaa2_dbg_fq_ops);
1665 + if (!priv->dbg.fq_stats) {
1666 + netdev_err(priv->net_dev, "debugfs_create_file() failed\n");
1667 + goto err_fq_stats;
1670 + /* per-channel stats file */
1671 + priv->dbg.ch_stats = debugfs_create_file("ch_stats", S_IRUGO,
1672 + priv->dbg.dir, priv,
1673 + &dpaa2_dbg_ch_ops);
1674 + if (!priv->dbg.ch_stats) {
1675 + netdev_err(priv->net_dev, "debugfs_create_file() failed\n");
1676 + goto err_ch_stats;
1680 + priv->dbg.reset_stats = debugfs_create_file("reset_stats", S_IWUSR,
1681 + priv->dbg.dir, priv,
1682 + &dpaa2_dbg_reset_ops);
1683 + if (!priv->dbg.reset_stats) {
1684 + netdev_err(priv->net_dev, "debugfs_create_file() failed\n");
1685 + goto err_reset_stats;
1691 + debugfs_remove(priv->dbg.ch_stats);
1693 + debugfs_remove(priv->dbg.fq_stats);
1695 + debugfs_remove(priv->dbg.cpu_stats);
1697 + debugfs_remove(priv->dbg.dir);
1700 +void dpaa2_dbg_remove(struct dpaa2_eth_priv *priv)
1702 + debugfs_remove(priv->dbg.reset_stats);
1703 + debugfs_remove(priv->dbg.fq_stats);
1704 + debugfs_remove(priv->dbg.ch_stats);
1705 + debugfs_remove(priv->dbg.cpu_stats);
1706 + debugfs_remove(priv->dbg.dir);
1709 +void dpaa2_eth_dbg_init(void)
1711 + dpaa2_dbg_root = debugfs_create_dir(DPAA2_ETH_DBG_ROOT, NULL);
1712 + if (!dpaa2_dbg_root) {
1713 + pr_err("DPAA2-ETH: debugfs create failed\n");
1717 + pr_info("DPAA2-ETH: debugfs created\n");
1720 +void __exit dpaa2_eth_dbg_exit(void)
1722 + debugfs_remove(dpaa2_dbg_root);
1726 +++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-debugfs.h
1728 +/* Copyright 2015 Freescale Semiconductor Inc.
1730 + * Redistribution and use in source and binary forms, with or without
1731 + * modification, are permitted provided that the following conditions are met:
1732 + * * Redistributions of source code must retain the above copyright
1733 + * notice, this list of conditions and the following disclaimer.
1734 + * * Redistributions in binary form must reproduce the above copyright
1735 + * notice, this list of conditions and the following disclaimer in the
1736 + * documentation and/or other materials provided with the distribution.
1737 + * * Neither the name of Freescale Semiconductor nor the
1738 + * names of its contributors may be used to endorse or promote products
1739 + * derived from this software without specific prior written permission.
1742 + * ALTERNATIVELY, this software may be distributed under the terms of the
1743 + * GNU General Public License ("GPL") as published by the Free Software
1744 + * Foundation, either version 2 of that License or (at your option) any
1747 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
1748 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
1749 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
1750 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
1751 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
1752 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
1753 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
1754 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
1755 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
1756 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
1759 +#ifndef DPAA2_ETH_DEBUGFS_H
1760 +#define DPAA2_ETH_DEBUGFS_H
1762 +#include <linux/dcache.h>
1763 +#include "dpaa2-eth.h"
1765 +extern struct dpaa2_eth_priv *priv;
1767 +struct dpaa2_debugfs {
1768 + struct dentry *dir;
1769 + struct dentry *fq_stats;
1770 + struct dentry *ch_stats;
1771 + struct dentry *cpu_stats;
1772 + struct dentry *reset_stats;
1775 +#ifdef CONFIG_FSL_DPAA2_ETH_DEBUGFS
1776 +void dpaa2_eth_dbg_init(void);
1777 +void dpaa2_eth_dbg_exit(void);
1778 +void dpaa2_dbg_add(struct dpaa2_eth_priv *priv);
1779 +void dpaa2_dbg_remove(struct dpaa2_eth_priv *priv);
1781 +static inline void dpaa2_eth_dbg_init(void) {}
1782 +static inline void dpaa2_eth_dbg_exit(void) {}
1783 +static inline void dpaa2_dbg_add(struct dpaa2_eth_priv *priv) {}
1784 +static inline void dpaa2_dbg_remove(struct dpaa2_eth_priv *priv) {}
1785 +#endif /* CONFIG_FSL_DPAA2_ETH_DEBUGFS */
1787 +#endif /* DPAA2_ETH_DEBUGFS_H */
1790 +++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-trace.h
1792 +/* Copyright 2014-2015 Freescale Semiconductor Inc.
1794 + * Redistribution and use in source and binary forms, with or without
1795 + * modification, are permitted provided that the following conditions are met:
1796 + * * Redistributions of source code must retain the above copyright
1797 + * notice, this list of conditions and the following disclaimer.
1798 + * * Redistributions in binary form must reproduce the above copyright
1799 + * notice, this list of conditions and the following disclaimer in the
1800 + * documentation and/or other materials provided with the distribution.
1801 + * * Neither the name of Freescale Semiconductor nor the
1802 + * names of its contributors may be used to endorse or promote products
1803 + * derived from this software without specific prior written permission.
1806 + * ALTERNATIVELY, this software may be distributed under the terms of the
1807 + * GNU General Public License ("GPL") as published by the Free Software
1808 + * Foundation, either version 2 of that License or (at your option) any
1811 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
1812 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
1813 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
1814 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
1815 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
1816 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
1817 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
1818 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
1819 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
1820 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
1823 +#undef TRACE_SYSTEM
1824 +#define TRACE_SYSTEM dpaa2_eth
1826 +#if !defined(_DPAA2_ETH_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
1827 +#define _DPAA2_ETH_TRACE_H
1829 +#include <linux/skbuff.h>
1830 +#include <linux/netdevice.h>
1831 +#include "dpaa2-eth.h"
1832 +#include <linux/tracepoint.h>
1834 +#define TR_FMT "[%s] fd: addr=0x%llx, len=%u, off=%u"
1835 +/* trace_printk format for raw buffer event class */
1836 +#define TR_BUF_FMT "[%s] vaddr=%p size=%zu dma_addr=%pad map_size=%zu bpid=%d"
1838 +/* This is used to declare a class of events.
1839 + * Individual events of this type will be defined below.
1842 +/* Store details about a frame descriptor */
1843 +DECLARE_EVENT_CLASS(dpaa2_eth_fd,
1844 + /* Trace function prototype */
1845 + TP_PROTO(struct net_device *netdev,
1846 + const struct dpaa2_fd *fd),
1848 + /* Repeat argument list here */
1849 + TP_ARGS(netdev, fd),
1851 + /* A structure containing the relevant information we want
1852 + * to record. Declare name and type for each normal element,
1853 + * name, type and size for arrays. Use __string for variable
1857 + __field(u64, fd_addr)
1858 + __field(u32, fd_len)
1859 + __field(u16, fd_offset)
1860 + __string(name, netdev->name)
1863 + /* The function that assigns values to the above declared
1867 + __entry->fd_addr = dpaa2_fd_get_addr(fd);
1868 + __entry->fd_len = dpaa2_fd_get_len(fd);
1869 + __entry->fd_offset = dpaa2_fd_get_offset(fd);
1870 + __assign_str(name, netdev->name);
1873 + /* This is what gets printed when the trace event is
1880 + __entry->fd_offset)
1883 +/* Now declare events of the above type. Format is:
1884 + * DEFINE_EVENT(class, name, proto, args), with proto and args same as for class
1887 +/* Tx (egress) fd */
1888 +DEFINE_EVENT(dpaa2_eth_fd, dpaa2_tx_fd,
1889 + TP_PROTO(struct net_device *netdev,
1890 + const struct dpaa2_fd *fd),
1892 + TP_ARGS(netdev, fd)
1896 +DEFINE_EVENT(dpaa2_eth_fd, dpaa2_rx_fd,
1897 + TP_PROTO(struct net_device *netdev,
1898 + const struct dpaa2_fd *fd),
1900 + TP_ARGS(netdev, fd)
1903 +/* Tx confirmation fd */
1904 +DEFINE_EVENT(dpaa2_eth_fd, dpaa2_tx_conf_fd,
1905 + TP_PROTO(struct net_device *netdev,
1906 + const struct dpaa2_fd *fd),
1908 + TP_ARGS(netdev, fd)
1911 +/* Log data about raw buffers. Useful for tracing DPBP content. */
1912 +TRACE_EVENT(dpaa2_eth_buf_seed,
1913 + /* Trace function prototype */
1914 + TP_PROTO(struct net_device *netdev,
1915 + /* virtual address and size */
1918 + /* dma map address and size */
1919 + dma_addr_t dma_addr,
1921 + /* buffer pool id, if relevant */
1924 + /* Repeat argument list here */
1925 + TP_ARGS(netdev, vaddr, size, dma_addr, map_size, bpid),
1927 + /* A structure containing the relevant information we want
1928 + * to record. Declare name and type for each normal element,
1929 + * name, type and size for arrays. Use __string for variable
1933 + __field(void *, vaddr)
1934 + __field(size_t, size)
1935 + __field(dma_addr_t, dma_addr)
1936 + __field(size_t, map_size)
1937 + __field(u16, bpid)
1938 + __string(name, netdev->name)
1941 + /* The function that assigns values to the above declared
1945 + __entry->vaddr = vaddr;
1946 + __entry->size = size;
1947 + __entry->dma_addr = dma_addr;
1948 + __entry->map_size = map_size;
1949 + __entry->bpid = bpid;
1950 + __assign_str(name, netdev->name);
1953 + /* This is what gets printed when the trace event is
1956 + TP_printk(TR_BUF_FMT,
1960 + &__entry->dma_addr,
1961 + __entry->map_size,
1965 +/* If only one event of a certain type needs to be declared, use TRACE_EVENT().
1966 + * The syntax is the same as for DECLARE_EVENT_CLASS().
1969 +#endif /* _DPAA2_ETH_TRACE_H */
1971 +/* This must be outside ifdef _DPAA2_ETH_TRACE_H */
1972 +#undef TRACE_INCLUDE_PATH
1973 +#define TRACE_INCLUDE_PATH .
1974 +#undef TRACE_INCLUDE_FILE
1975 +#define TRACE_INCLUDE_FILE dpaa2-eth-trace
1976 +#include <trace/define_trace.h>
1978 +++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
1980 +/* Copyright 2014-2015 Freescale Semiconductor Inc.
1982 + * Redistribution and use in source and binary forms, with or without
1983 + * modification, are permitted provided that the following conditions are met:
1984 + * * Redistributions of source code must retain the above copyright
1985 + * notice, this list of conditions and the following disclaimer.
1986 + * * Redistributions in binary form must reproduce the above copyright
1987 + * notice, this list of conditions and the following disclaimer in the
1988 + * documentation and/or other materials provided with the distribution.
1989 + * * Neither the name of Freescale Semiconductor nor the
1990 + * names of its contributors may be used to endorse or promote products
1991 + * derived from this software without specific prior written permission.
1994 + * ALTERNATIVELY, this software may be distributed under the terms of the
1995 + * GNU General Public License ("GPL") as published by the Free Software
1996 + * Foundation, either version 2 of that License or (at your option) any
1999 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
2000 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
2001 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
2002 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
2003 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
2004 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
2005 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
2006 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
2007 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
2008 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
2010 +#include <linux/init.h>
2011 +#include <linux/module.h>
2012 +#include <linux/platform_device.h>
2013 +#include <linux/etherdevice.h>
2014 +#include <linux/of_net.h>
2015 +#include <linux/interrupt.h>
2016 +#include <linux/msi.h>
2017 +#include <linux/debugfs.h>
2018 +#include <linux/kthread.h>
2019 +#include <linux/net_tstamp.h>
2021 +#include "../../fsl-mc/include/mc.h"
2022 +#include "../../fsl-mc/include/mc-sys.h"
2023 +#include "dpaa2-eth.h"
2025 +/* CREATE_TRACE_POINTS only needs to be defined once. Other dpa files
2026 + * using trace events only need to #include <trace/events/sched.h>
2028 +#define CREATE_TRACE_POINTS
2029 +#include "dpaa2-eth-trace.h"
2031 +MODULE_LICENSE("Dual BSD/GPL");
2032 +MODULE_AUTHOR("Freescale Semiconductor, Inc");
2033 +MODULE_DESCRIPTION("Freescale DPAA2 Ethernet Driver");
2035 +static int debug = -1;
2036 +module_param(debug, int, S_IRUGO);
2037 +MODULE_PARM_DESC(debug, "Module/Driver verbosity level");
2039 +/* Oldest DPAA2 objects version we are compatible with */
2040 +#define DPAA2_SUPPORTED_DPNI_VERSION 6
2041 +#define DPAA2_SUPPORTED_DPBP_VERSION 2
2042 +#define DPAA2_SUPPORTED_DPCON_VERSION 2
2044 +/* Iterate through the cpumask in a round-robin fashion. */
2045 +#define cpumask_rr(cpu, maskptr) \
2047 + (cpu) = cpumask_next((cpu), (maskptr)); \
2048 + if ((cpu) >= nr_cpu_ids) \
2049 + (cpu) = cpumask_first((maskptr)); \
2052 +static void dpaa2_eth_rx_csum(struct dpaa2_eth_priv *priv,
2054 + struct sk_buff *skb)
2056 + skb_checksum_none_assert(skb);
2058 + /* HW checksum validation is disabled, nothing to do here */
2059 + if (!(priv->net_dev->features & NETIF_F_RXCSUM))
2062 + /* Read checksum validation bits */
2063 + if (!((fd_status & DPAA2_ETH_FAS_L3CV) &&
2064 + (fd_status & DPAA2_ETH_FAS_L4CV)))
2067 + /* Inform the stack there's no need to compute L3/L4 csum anymore */
2068 + skb->ip_summed = CHECKSUM_UNNECESSARY;
2071 +/* Free a received FD.
2072 + * Not to be used for Tx conf FDs or on any other paths.
2074 +static void dpaa2_eth_free_rx_fd(struct dpaa2_eth_priv *priv,
2075 + const struct dpaa2_fd *fd,
2078 + struct device *dev = priv->net_dev->dev.parent;
2079 + dma_addr_t addr = dpaa2_fd_get_addr(fd);
2080 + u8 fd_format = dpaa2_fd_get_format(fd);
2082 + if (fd_format == dpaa2_fd_sg) {
2083 + struct dpaa2_sg_entry *sgt = vaddr + dpaa2_fd_get_offset(fd);
2087 + for (i = 0; i < DPAA2_ETH_MAX_SG_ENTRIES; i++) {
2088 + dpaa2_sg_le_to_cpu(&sgt[i]);
2090 + addr = dpaa2_sg_get_addr(&sgt[i]);
2091 + dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUFFER_SIZE,
2094 + sg_vaddr = phys_to_virt(addr);
2095 + put_page(virt_to_head_page(sg_vaddr));
2097 + if (dpaa2_sg_is_final(&sgt[i]))
2102 + put_page(virt_to_head_page(vaddr));
2105 +/* Build a linear skb based on a single-buffer frame descriptor */
2106 +static struct sk_buff *dpaa2_eth_build_linear_skb(struct dpaa2_eth_priv *priv,
2107 + struct dpaa2_eth_channel *ch,
2108 + const struct dpaa2_fd *fd,
2111 + struct sk_buff *skb = NULL;
2112 + u16 fd_offset = dpaa2_fd_get_offset(fd);
2113 + u32 fd_length = dpaa2_fd_get_len(fd);
2115 + skb = build_skb(fd_vaddr, DPAA2_ETH_RX_BUFFER_SIZE +
2116 + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
2117 + if (unlikely(!skb)) {
2118 + netdev_err(priv->net_dev, "build_skb() failed\n");
2122 + skb_reserve(skb, fd_offset);
2123 + skb_put(skb, fd_length);
2130 +/* Build a non linear (fragmented) skb based on a S/G table */
2131 +static struct sk_buff *dpaa2_eth_build_frag_skb(struct dpaa2_eth_priv *priv,
2132 + struct dpaa2_eth_channel *ch,
2133 + struct dpaa2_sg_entry *sgt)
2135 + struct sk_buff *skb = NULL;
2136 + struct device *dev = priv->net_dev->dev.parent;
2138 + dma_addr_t sg_addr;
2141 + struct page *page, *head_page;
2145 + for (i = 0; i < DPAA2_ETH_MAX_SG_ENTRIES; i++) {
2146 + struct dpaa2_sg_entry *sge = &sgt[i];
2148 + dpaa2_sg_le_to_cpu(sge);
2150 + /* We don't support anything else yet! */
2151 + if (unlikely(dpaa2_sg_get_format(sge) != dpaa2_sg_single)) {
2152 + dev_warn_once(dev, "Unsupported S/G entry format: %d\n",
2153 + dpaa2_sg_get_format(sge));
2157 + /* Get the address, offset and length from the S/G entry */
2158 + sg_addr = dpaa2_sg_get_addr(sge);
2159 + dma_unmap_single(dev, sg_addr, DPAA2_ETH_RX_BUFFER_SIZE,
2161 + if (unlikely(dma_mapping_error(dev, sg_addr))) {
2162 + netdev_err(priv->net_dev, "DMA unmap failed\n");
2165 + sg_vaddr = phys_to_virt(sg_addr);
2166 + sg_length = dpaa2_sg_get_len(sge);
2169 + /* We build the skb around the first data buffer */
2170 + skb = build_skb(sg_vaddr, DPAA2_ETH_RX_BUFFER_SIZE +
2171 + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
2172 + if (unlikely(!skb)) {
2173 + netdev_err(priv->net_dev, "build_skb failed\n");
2176 + sg_offset = dpaa2_sg_get_offset(sge);
2177 + skb_reserve(skb, sg_offset);
2178 + skb_put(skb, sg_length);
2180 + /* Subsequent data in SGEntries are stored at
2181 + * offset 0 in their buffers, we don't need to
2182 + * compute sg_offset.
2184 + WARN_ONCE(dpaa2_sg_get_offset(sge) != 0,
2185 + "Non-zero offset in SGE[%d]!\n", i);
2187 + /* Rest of the data buffers are stored as skb frags */
2188 + page = virt_to_page(sg_vaddr);
2189 + head_page = virt_to_head_page(sg_vaddr);
2191 + /* Offset in page (which may be compound) */
2192 + page_offset = ((unsigned long)sg_vaddr &
2193 + (PAGE_SIZE - 1)) +
2194 + (page_address(page) - page_address(head_page));
2196 + skb_add_rx_frag(skb, i - 1, head_page, page_offset,
2197 + sg_length, DPAA2_ETH_RX_BUFFER_SIZE);
2200 + if (dpaa2_sg_is_final(sge))
2204 + /* Count all data buffers + sgt buffer */
2205 + ch->buf_count -= i + 2;
2210 +static void dpaa2_eth_rx(struct dpaa2_eth_priv *priv,
2211 + struct dpaa2_eth_channel *ch,
2212 + const struct dpaa2_fd *fd,
2213 + struct napi_struct *napi)
2215 + dma_addr_t addr = dpaa2_fd_get_addr(fd);
2216 + u8 fd_format = dpaa2_fd_get_format(fd);
2218 + struct sk_buff *skb;
2219 + struct rtnl_link_stats64 *percpu_stats;
2220 + struct dpaa2_eth_stats *percpu_extras;
2221 + struct device *dev = priv->net_dev->dev.parent;
2222 + struct dpaa2_fas *fas;
2225 + /* Tracing point */
2226 + trace_dpaa2_rx_fd(priv->net_dev, fd);
2228 + dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUFFER_SIZE, DMA_FROM_DEVICE);
2229 + vaddr = phys_to_virt(addr);
2231 + prefetch(vaddr + priv->buf_layout.private_data_size);
2232 + prefetch(vaddr + dpaa2_fd_get_offset(fd));
2234 + percpu_stats = this_cpu_ptr(priv->percpu_stats);
2235 + percpu_extras = this_cpu_ptr(priv->percpu_extras);
2237 + if (fd_format == dpaa2_fd_single) {
2238 + skb = dpaa2_eth_build_linear_skb(priv, ch, fd, vaddr);
2239 + } else if (fd_format == dpaa2_fd_sg) {
2240 + struct dpaa2_sg_entry *sgt =
2241 + vaddr + dpaa2_fd_get_offset(fd);
2242 + skb = dpaa2_eth_build_frag_skb(priv, ch, sgt);
2243 + put_page(virt_to_head_page(vaddr));
2244 + percpu_extras->rx_sg_frames++;
2245 + percpu_extras->rx_sg_bytes += dpaa2_fd_get_len(fd);
2247 + /* We don't support any other format */
2248 + netdev_err(priv->net_dev, "Received invalid frame format\n");
2249 + goto err_frame_format;
2252 + if (unlikely(!skb)) {
2253 + dev_err_once(dev, "error building skb\n");
2254 + goto err_build_skb;
2257 + prefetch(skb->data);
2259 + if (priv->ts_rx_en) {
2260 + struct skb_shared_hwtstamps *shhwtstamps = skb_hwtstamps(skb);
2261 + u64 *ns = (u64 *) (vaddr +
2262 + priv->buf_layout.private_data_size +
2263 + sizeof(struct dpaa2_fas));
2265 + *ns = DPAA2_PTP_NOMINAL_FREQ_PERIOD_NS * (*ns);
2266 + memset(shhwtstamps, 0, sizeof(*shhwtstamps));
2267 + shhwtstamps->hwtstamp = ns_to_ktime(*ns);
2270 + /* Check if we need to validate the L4 csum */
2271 + if (likely(fd->simple.frc & DPAA2_FD_FRC_FASV)) {
2272 + fas = (struct dpaa2_fas *)
2273 + (vaddr + priv->buf_layout.private_data_size);
2274 + status = le32_to_cpu(fas->status);
2275 + dpaa2_eth_rx_csum(priv, status, skb);
2278 + skb->protocol = eth_type_trans(skb, priv->net_dev);
2280 + percpu_stats->rx_packets++;
2281 + percpu_stats->rx_bytes += skb->len;
2283 + if (priv->net_dev->features & NETIF_F_GRO)
2284 + napi_gro_receive(napi, skb);
2286 + netif_receive_skb(skb);
2291 + dpaa2_eth_free_rx_fd(priv, fd, vaddr);
2292 + percpu_stats->rx_dropped++;
2295 +#ifdef CONFIG_FSL_DPAA2_ETH_USE_ERR_QUEUE
2296 +static void dpaa2_eth_rx_err(struct dpaa2_eth_priv *priv,
2297 + struct dpaa2_eth_channel *ch,
2298 + const struct dpaa2_fd *fd,
2299 + struct napi_struct *napi __always_unused)
2301 + struct device *dev = priv->net_dev->dev.parent;
2302 + dma_addr_t addr = dpaa2_fd_get_addr(fd);
2304 + struct rtnl_link_stats64 *percpu_stats;
2305 + struct dpaa2_fas *fas;
2308 + dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUFFER_SIZE, DMA_FROM_DEVICE);
2309 + vaddr = phys_to_virt(addr);
2311 + if (fd->simple.frc & DPAA2_FD_FRC_FASV) {
2312 + fas = (struct dpaa2_fas *)
2313 + (vaddr + priv->buf_layout.private_data_size);
2314 + status = le32_to_cpu(fas->status);
2316 + /* All frames received on this queue should have at least
2317 + * one of the Rx error bits set */
2318 + WARN_ON_ONCE((status & DPAA2_ETH_RX_ERR_MASK) == 0);
2319 + netdev_dbg(priv->net_dev, "Rx frame error: 0x%08x\n",
2320 + status & DPAA2_ETH_RX_ERR_MASK);
2322 + dpaa2_eth_free_rx_fd(priv, fd, vaddr);
2324 + percpu_stats = this_cpu_ptr(priv->percpu_stats);
2325 + percpu_stats->rx_errors++;
2329 +/* Consume all frames pull-dequeued into the store. This is the simplest way to
2330 + * make sure we don't accidentally issue another volatile dequeue which would
2331 + * overwrite (leak) frames already in the store.
2333 + * Observance of NAPI budget is not our concern, leaving that to the caller.
2335 +static int dpaa2_eth_store_consume(struct dpaa2_eth_channel *ch)
2337 + struct dpaa2_eth_priv *priv = ch->priv;
2338 + struct dpaa2_eth_fq *fq;
2339 + struct dpaa2_dq *dq;
2340 + const struct dpaa2_fd *fd;
2345 + dq = dpaa2_io_store_next(ch->store, &is_last);
2346 + if (unlikely(!dq)) {
2347 + if (unlikely(!is_last)) {
2348 + netdev_dbg(priv->net_dev,
2349 +				   "Channel %d returned no valid frames\n",
2351 + /* MUST retry until we get some sort of
2352 + * valid response token (be it "empty dequeue"
2353 + * or a valid frame).
2360 + /* Obtain FD and process it */
2361 + fd = dpaa2_dq_fd(dq);
2362 + fq = (struct dpaa2_eth_fq *)dpaa2_dq_fqd_ctx(dq);
2363 + fq->stats.frames++;
2365 + fq->consume(priv, ch, fd, &ch->napi);
2367 + } while (!is_last);
2372 +static int dpaa2_eth_build_sg_fd(struct dpaa2_eth_priv *priv,
2373 + struct sk_buff *skb,
2374 + struct dpaa2_fd *fd)
2376 + struct device *dev = priv->net_dev->dev.parent;
2377 + void *sgt_buf = NULL;
2379 + int nr_frags = skb_shinfo(skb)->nr_frags;
2380 + struct dpaa2_sg_entry *sgt;
2383 + struct scatterlist *scl, *crt_scl;
2386 + struct dpaa2_eth_swa *bps;
2388 + /* Create and map scatterlist.
2389 + * We don't advertise NETIF_F_FRAGLIST, so skb_to_sgvec() will not have
2390 + * to go beyond nr_frags+1.
2391 + * Note: We don't support chained scatterlists
2393 + WARN_ON(PAGE_SIZE / sizeof(struct scatterlist) < nr_frags + 1);
2394 + scl = kcalloc(nr_frags + 1, sizeof(struct scatterlist), GFP_ATOMIC);
2395 + if (unlikely(!scl))
2398 + sg_init_table(scl, nr_frags + 1);
2399 + num_sg = skb_to_sgvec(skb, scl, 0, skb->len);
2400 + num_dma_bufs = dma_map_sg(dev, scl, num_sg, DMA_TO_DEVICE);
2401 + if (unlikely(!num_dma_bufs)) {
2402 + netdev_err(priv->net_dev, "dma_map_sg() error\n");
2404 + goto dma_map_sg_failed;
2407 + /* Prepare the HW SGT structure */
2408 + sgt_buf_size = priv->tx_data_offset +
2409 + sizeof(struct dpaa2_sg_entry) * (1 + num_dma_bufs);
2410 + sgt_buf = kzalloc(sgt_buf_size + DPAA2_ETH_TX_BUF_ALIGN, GFP_ATOMIC);
2411 + if (unlikely(!sgt_buf)) {
2412 + netdev_err(priv->net_dev, "failed to allocate SGT buffer\n");
2414 + goto sgt_buf_alloc_failed;
2416 + sgt_buf = PTR_ALIGN(sgt_buf, DPAA2_ETH_TX_BUF_ALIGN);
2418 + /* PTA from egress side is passed as is to the confirmation side so
2419 + * we need to clear some fields here in order to find consistent values
2420 + * on TX confirmation. We are clearing FAS (Frame Annotation Status)
2423 + memset(sgt_buf + priv->buf_layout.private_data_size, 0, 8);
2425 + sgt = (struct dpaa2_sg_entry *)(sgt_buf + priv->tx_data_offset);
2427 + /* Fill in the HW SGT structure.
2429 + * sgt_buf is zeroed out, so the following fields are implicit
2430 + * in all sgt entries:
2432 + * - format is 'dpaa2_sg_single'
2434 + for_each_sg(scl, crt_scl, num_dma_bufs, i) {
2435 + dpaa2_sg_set_addr(&sgt[i], sg_dma_address(crt_scl));
2436 + dpaa2_sg_set_len(&sgt[i], sg_dma_len(crt_scl));
2438 + dpaa2_sg_set_final(&sgt[i - 1], true);
2440 + /* Store the skb backpointer in the SGT buffer.
2441 + * Fit the scatterlist and the number of buffers alongside the
2442 + * skb backpointer in the SWA. We'll need all of them on Tx Conf.
2444 + bps = (struct dpaa2_eth_swa *)sgt_buf;
2447 + bps->num_sg = num_sg;
2448 + bps->num_dma_bufs = num_dma_bufs;
2450 + for (j = 0; j < i; j++)
2451 + dpaa2_sg_cpu_to_le(&sgt[j]);
2453 + /* Separately map the SGT buffer */
2454 + addr = dma_map_single(dev, sgt_buf, sgt_buf_size, DMA_TO_DEVICE);
2455 + if (unlikely(dma_mapping_error(dev, addr))) {
2456 + netdev_err(priv->net_dev, "dma_map_single() failed\n");
2458 + goto dma_map_single_failed;
2460 + dpaa2_fd_set_offset(fd, priv->tx_data_offset);
2461 + dpaa2_fd_set_format(fd, dpaa2_fd_sg);
2462 + dpaa2_fd_set_addr(fd, addr);
2463 + dpaa2_fd_set_len(fd, skb->len);
2465 + fd->simple.ctrl = DPAA2_FD_CTRL_ASAL | DPAA2_FD_CTRL_PTA |
2466 + DPAA2_FD_CTRL_PTV1;
2470 +dma_map_single_failed:
2472 +sgt_buf_alloc_failed:
2473 + dma_unmap_sg(dev, scl, num_sg, DMA_TO_DEVICE);
2479 +static int dpaa2_eth_build_single_fd(struct dpaa2_eth_priv *priv,
2480 + struct sk_buff *skb,
2481 + struct dpaa2_fd *fd)
2483 + struct device *dev = priv->net_dev->dev.parent;
2485 + struct sk_buff **skbh;
2488 + buffer_start = PTR_ALIGN(skb->data - priv->tx_data_offset -
2489 + DPAA2_ETH_TX_BUF_ALIGN,
2490 + DPAA2_ETH_TX_BUF_ALIGN);
2492 + /* PTA from egress side is passed as is to the confirmation side so
2493 + * we need to clear some fields here in order to find consistent values
2494 + * on TX confirmation. We are clearing FAS (Frame Annotation Status)
2497 + memset(buffer_start + priv->buf_layout.private_data_size, 0, 8);
2499 + /* Store a backpointer to the skb at the beginning of the buffer
2500 + * (in the private data area) such that we can release it
2503 + skbh = (struct sk_buff **)buffer_start;
2506 + addr = dma_map_single(dev,
2508 + skb_tail_pointer(skb) - buffer_start,
2510 + if (unlikely(dma_mapping_error(dev, addr))) {
2511 + dev_err(dev, "dma_map_single() failed\n");
2515 + dpaa2_fd_set_addr(fd, addr);
2516 + dpaa2_fd_set_offset(fd, (u16)(skb->data - buffer_start));
2517 + dpaa2_fd_set_len(fd, skb->len);
2518 + dpaa2_fd_set_format(fd, dpaa2_fd_single);
2520 + fd->simple.ctrl = DPAA2_FD_CTRL_ASAL | DPAA2_FD_CTRL_PTA |
2521 + DPAA2_FD_CTRL_PTV1;
2526 +/* DMA-unmap and free FD and possibly SGT buffer allocated on Tx. The skb
2527 + * back-pointed to is also freed.
2528 + * This can be called either from dpaa2_eth_tx_conf() or on the error path of
2530 + * Optionally, return the frame annotation status word (FAS), which needs
2531 + * to be checked if we're on the confirmation path.
2533 +static void dpaa2_eth_free_fd(const struct dpaa2_eth_priv *priv,
2534 + const struct dpaa2_fd *fd,
2537 + struct device *dev = priv->net_dev->dev.parent;
2538 + dma_addr_t fd_addr;
2539 + struct sk_buff **skbh, *skb;
2540 + unsigned char *buffer_start;
2542 + struct scatterlist *scl;
2543 + int num_sg, num_dma_bufs;
2544 + struct dpaa2_eth_swa *bps;
2546 + struct dpaa2_fas *fas;
2548 + fd_addr = dpaa2_fd_get_addr(fd);
2549 + skbh = phys_to_virt(fd_addr);
2550 + fd_single = (dpaa2_fd_get_format(fd) == dpaa2_fd_single);
2554 + buffer_start = (unsigned char *)skbh;
2555 + /* Accessing the skb buffer is safe before dma unmap, because
2556 + * we didn't map the actual skb shell.
2558 + dma_unmap_single(dev, fd_addr,
2559 + skb_tail_pointer(skb) - buffer_start,
2562 + bps = (struct dpaa2_eth_swa *)skbh;
2565 + num_sg = bps->num_sg;
2566 + num_dma_bufs = bps->num_dma_bufs;
2568 + /* Unmap the scatterlist */
2569 + dma_unmap_sg(dev, scl, num_sg, DMA_TO_DEVICE);
2572 + /* Unmap the SGT buffer */
2573 + unmap_size = priv->tx_data_offset +
2574 + sizeof(struct dpaa2_sg_entry) * (1 + num_dma_bufs);
2575 + dma_unmap_single(dev, fd_addr, unmap_size, DMA_TO_DEVICE);
2578 + if (priv->ts_tx_en && skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) {
2579 + struct skb_shared_hwtstamps shhwtstamps;
2582 + memset(&shhwtstamps, 0, sizeof(shhwtstamps));
2584 + ns = (u64 *)((void *)skbh +
2585 + priv->buf_layout.private_data_size +
2586 + sizeof(struct dpaa2_fas));
2587 + *ns = DPAA2_PTP_NOMINAL_FREQ_PERIOD_NS * (*ns);
2588 + shhwtstamps.hwtstamp = ns_to_ktime(*ns);
2589 + skb_tstamp_tx(skb, &shhwtstamps);
2592 + /* Check the status from the Frame Annotation after we unmap the first
2593 + * buffer but before we free it.
2595 + if (status && (fd->simple.frc & DPAA2_FD_FRC_FASV)) {
2596 + fas = (struct dpaa2_fas *)
2597 + ((void *)skbh + priv->buf_layout.private_data_size);
2598 + *status = le32_to_cpu(fas->status);
2601 + /* Free SGT buffer kmalloc'ed on tx */
2605 + /* Move on with skb release */
2606 + dev_kfree_skb(skb);
2609 +static int dpaa2_eth_tx(struct sk_buff *skb, struct net_device *net_dev)
2611 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
2612 + struct dpaa2_fd fd;
2613 + struct rtnl_link_stats64 *percpu_stats;
2614 + struct dpaa2_eth_stats *percpu_extras;
2616 + /* TxConf FQ selection primarily based on cpu affinity; this is
2617 + * non-migratable context, so it's safe to call smp_processor_id().
2619 + u16 queue_mapping = smp_processor_id() % priv->dpni_attrs.max_senders;
2621 + percpu_stats = this_cpu_ptr(priv->percpu_stats);
2622 + percpu_extras = this_cpu_ptr(priv->percpu_extras);
2624 + /* Setup the FD fields */
2625 + memset(&fd, 0, sizeof(fd));
2627 + if (unlikely(skb_headroom(skb) < DPAA2_ETH_NEEDED_HEADROOM(priv))) {
2628 + struct sk_buff *ns;
2630 + dev_info_once(net_dev->dev.parent,
2631 + "skb headroom too small, must realloc.\n");
2632 + ns = skb_realloc_headroom(skb, DPAA2_ETH_NEEDED_HEADROOM(priv));
2633 + if (unlikely(!ns)) {
2634 + percpu_stats->tx_dropped++;
2635 + goto err_alloc_headroom;
2637 + dev_kfree_skb(skb);
2641 + /* We'll be holding a back-reference to the skb until Tx Confirmation;
2642 + * we don't want that overwritten by a concurrent Tx with a cloned skb.
2644 + skb = skb_unshare(skb, GFP_ATOMIC);
2645 + if (unlikely(!skb)) {
2646 +		netdev_err(net_dev, "Out of memory for skb_unshare()\n");
2647 + /* skb_unshare() has already freed the skb */
2648 + percpu_stats->tx_dropped++;
2649 + return NETDEV_TX_OK;
2652 + if (skb_is_nonlinear(skb)) {
2653 + err = dpaa2_eth_build_sg_fd(priv, skb, &fd);
2654 + percpu_extras->tx_sg_frames++;
2655 + percpu_extras->tx_sg_bytes += skb->len;
2657 + err = dpaa2_eth_build_single_fd(priv, skb, &fd);
2660 + if (unlikely(err)) {
2661 + percpu_stats->tx_dropped++;
2662 + goto err_build_fd;
2665 + /* Tracing point */
2666 + trace_dpaa2_tx_fd(net_dev, &fd);
2668 + for (i = 0; i < (DPAA2_ETH_MAX_TX_QUEUES << 1); i++) {
2669 + err = dpaa2_io_service_enqueue_qd(NULL, priv->tx_qdid, 0,
2670 + priv->fq[queue_mapping].flowid,
2672 + if (err != -EBUSY)
2675 + percpu_extras->tx_portal_busy += i;
2676 + if (unlikely(err < 0)) {
2677 + netdev_dbg(net_dev, "error enqueueing Tx frame\n");
2678 + percpu_stats->tx_errors++;
2679 + /* Clean up everything, including freeing the skb */
2680 + dpaa2_eth_free_fd(priv, &fd, NULL);
2682 + percpu_stats->tx_packets++;
2683 + percpu_stats->tx_bytes += skb->len;
2686 + return NETDEV_TX_OK;
2689 +err_alloc_headroom:
2690 + dev_kfree_skb(skb);
2692 + return NETDEV_TX_OK;
2695 +static void dpaa2_eth_tx_conf(struct dpaa2_eth_priv *priv,
2696 + struct dpaa2_eth_channel *ch,
2697 + const struct dpaa2_fd *fd,
2698 + struct napi_struct *napi __always_unused)
2700 + struct rtnl_link_stats64 *percpu_stats;
2701 + struct dpaa2_eth_stats *percpu_extras;
2704 + /* Tracing point */
2705 + trace_dpaa2_tx_conf_fd(priv->net_dev, fd);
2707 + percpu_extras = this_cpu_ptr(priv->percpu_extras);
2708 + percpu_extras->tx_conf_frames++;
2709 + percpu_extras->tx_conf_bytes += dpaa2_fd_get_len(fd);
2711 + dpaa2_eth_free_fd(priv, fd, &status);
2713 + if (unlikely(status & DPAA2_ETH_TXCONF_ERR_MASK)) {
2714 + netdev_err(priv->net_dev, "TxConf frame error(s): 0x%08x\n",
2715 + status & DPAA2_ETH_TXCONF_ERR_MASK);
2716 + percpu_stats = this_cpu_ptr(priv->percpu_stats);
2717 + /* Tx-conf logically pertains to the egress path. */
2718 + percpu_stats->tx_errors++;
2722 +static int dpaa2_eth_set_rx_csum(struct dpaa2_eth_priv *priv, bool enable)
2726 + err = dpni_set_l3_chksum_validation(priv->mc_io, 0, priv->mc_token,
2729 + netdev_err(priv->net_dev,
2730 + "dpni_set_l3_chksum_validation() failed\n");
2734 + err = dpni_set_l4_chksum_validation(priv->mc_io, 0, priv->mc_token,
2737 + netdev_err(priv->net_dev,
2738 + "dpni_set_l4_chksum_validation failed\n");
2745 +static int dpaa2_eth_set_tx_csum(struct dpaa2_eth_priv *priv, bool enable)
2747 + struct dpaa2_eth_fq *fq;
2748 + struct dpni_tx_flow_cfg tx_flow_cfg;
2752 + memset(&tx_flow_cfg, 0, sizeof(tx_flow_cfg));
2753 + tx_flow_cfg.options = DPNI_TX_FLOW_OPT_L3_CHKSUM_GEN |
2754 + DPNI_TX_FLOW_OPT_L4_CHKSUM_GEN;
2755 + tx_flow_cfg.l3_chksum_gen = enable;
2756 + tx_flow_cfg.l4_chksum_gen = enable;
2758 + for (i = 0; i < priv->num_fqs; i++) {
2759 + fq = &priv->fq[i];
2760 + if (fq->type != DPAA2_TX_CONF_FQ)
2763 + /* The Tx flowid is kept in the corresponding TxConf FQ. */
2764 + err = dpni_set_tx_flow(priv->mc_io, 0, priv->mc_token,
2765 + &fq->flowid, &tx_flow_cfg);
2767 + netdev_err(priv->net_dev, "dpni_set_tx_flow failed\n");
2775 +static int dpaa2_bp_add_7(struct dpaa2_eth_priv *priv, u16 bpid)
2777 + struct device *dev = priv->net_dev->dev.parent;
2783 + for (i = 0; i < 7; i++) {
2784 + /* Allocate buffer visible to WRIOP + skb shared info +
2785 + * alignment padding
2787 + buf = napi_alloc_frag(DPAA2_ETH_BUF_RAW_SIZE);
2788 + if (unlikely(!buf)) {
2789 + dev_err(dev, "buffer allocation failed\n");
2792 + buf = PTR_ALIGN(buf, DPAA2_ETH_RX_BUF_ALIGN);
2794 + addr = dma_map_single(dev, buf, DPAA2_ETH_RX_BUFFER_SIZE,
2796 + if (unlikely(dma_mapping_error(dev, addr))) {
2797 + dev_err(dev, "dma_map_single() failed\n");
2800 + buf_array[i] = addr;
2802 + /* tracing point */
2803 + trace_dpaa2_eth_buf_seed(priv->net_dev,
2804 + buf, DPAA2_ETH_BUF_RAW_SIZE,
2805 + addr, DPAA2_ETH_RX_BUFFER_SIZE,
2810 + /* In case the portal is busy, retry until successful.
2811 + * The buffer release function would only fail if the QBMan portal
2812 + * was busy, which implies portal contention (i.e. more CPUs than
2813 + * portals, i.e. GPPs w/o affine DPIOs). For all practical purposes,
2814 + * there is little we can realistically do, short of giving up -
2815 + * in which case we'd risk depleting the buffer pool and never again
2816 + * receiving the Rx interrupt which would kick-start the refill logic.
2817 + * So just keep retrying, at the risk of being moved to ksoftirqd.
2819 + while (dpaa2_io_service_release(NULL, bpid, buf_array, i))
2824 + put_page(virt_to_head_page(buf));
2827 + goto release_bufs;
2832 +static int dpaa2_dpbp_seed(struct dpaa2_eth_priv *priv, u16 bpid)
2837 + /* This is the lazy seeding of Rx buffer pools.
2838 + * dpaa2_bp_add_7() is also used on the Rx hotpath and calls
2839 + * napi_alloc_frag(). The trouble is that it in turn ends up
2840 + * calling this_cpu_ptr(), which mandates execution in atomic context.
2841 + * Rather than splitting up the code, do a one-off preempt disable.
2843 + preempt_disable();
2844 + for (j = 0; j < priv->num_channels; j++) {
2845 + for (i = 0; i < DPAA2_ETH_NUM_BUFS; i += 7) {
2846 + new_count = dpaa2_bp_add_7(priv, bpid);
2847 + priv->channel[j]->buf_count += new_count;
2849 + if (new_count < 7) {
2851 + goto out_of_memory;
2864 + * Drain the specified number of buffers from the DPNI's private buffer pool.
2865 + * @count must not exceed 7
2867 +static void dpaa2_dpbp_drain_cnt(struct dpaa2_eth_priv *priv, int count)
2869 + struct device *dev = priv->net_dev->dev.parent;
2875 + ret = dpaa2_io_service_acquire(NULL, priv->dpbp_attrs.bpid,
2876 + buf_array, count);
2878 + pr_err("dpaa2_io_service_acquire() failed\n");
2881 + for (i = 0; i < ret; i++) {
2882 + /* Same logic as on regular Rx path */
2883 + dma_unmap_single(dev, buf_array[i],
2884 + DPAA2_ETH_RX_BUFFER_SIZE,
2886 + vaddr = phys_to_virt(buf_array[i]);
2887 + put_page(virt_to_head_page(vaddr));
2892 +static void __dpaa2_dpbp_free(struct dpaa2_eth_priv *priv)
2896 + dpaa2_dpbp_drain_cnt(priv, 7);
2897 + dpaa2_dpbp_drain_cnt(priv, 1);
2899 + for (i = 0; i < priv->num_channels; i++)
2900 + priv->channel[i]->buf_count = 0;
2903 +/* Function is called from softirq context only, so we don't need to guard
2904 + * the access to percpu count
2906 +static int dpaa2_dpbp_refill(struct dpaa2_eth_priv *priv,
2907 + struct dpaa2_eth_channel *ch,
2913 + if (unlikely(ch->buf_count < DPAA2_ETH_REFILL_THRESH)) {
2915 + new_count = dpaa2_bp_add_7(priv, bpid);
2916 + if (unlikely(!new_count)) {
2917 + /* Out of memory; abort for now, we'll
2922 + ch->buf_count += new_count;
2923 + } while (ch->buf_count < DPAA2_ETH_NUM_BUFS);
2925 + if (unlikely(ch->buf_count < DPAA2_ETH_NUM_BUFS))
2932 +static int __dpaa2_eth_pull_channel(struct dpaa2_eth_channel *ch)
2935 + int dequeues = -1;
2936 + struct dpaa2_eth_priv *priv = ch->priv;
2938 + /* Retry while portal is busy */
2940 + err = dpaa2_io_service_pull_channel(NULL, ch->ch_id, ch->store);
2942 + } while (err == -EBUSY);
2943 + if (unlikely(err))
2944 + netdev_err(priv->net_dev, "dpaa2_io_service_pull err %d\n", err);
2946 + ch->stats.dequeue_portal_busy += dequeues;
2950 +static int dpaa2_eth_poll(struct napi_struct *napi, int budget)
2952 + struct dpaa2_eth_channel *ch;
2953 + int cleaned = 0, store_cleaned;
2954 + struct dpaa2_eth_priv *priv;
2957 + ch = container_of(napi, struct dpaa2_eth_channel, napi);
2960 + __dpaa2_eth_pull_channel(ch);
2963 + /* Refill pool if appropriate */
2964 + dpaa2_dpbp_refill(priv, ch, priv->dpbp_attrs.bpid);
2966 + store_cleaned = dpaa2_eth_store_consume(ch);
2967 + cleaned += store_cleaned;
2969 + if (store_cleaned == 0 ||
2970 + cleaned > budget - DPAA2_ETH_STORE_SIZE)
2973 + /* Try to dequeue some more */
2974 + err = __dpaa2_eth_pull_channel(ch);
2975 + if (unlikely(err))
2979 + if (cleaned < budget) {
2980 + napi_complete_done(napi, cleaned);
2981 + err = dpaa2_io_service_rearm(NULL, &ch->nctx);
2982 + if (unlikely(err))
2983 + netdev_err(priv->net_dev,
2984 + "Notif rearm failed for channel %d\n",
2988 + ch->stats.frames += cleaned;
2993 +static void dpaa2_eth_napi_enable(struct dpaa2_eth_priv *priv)
2995 + struct dpaa2_eth_channel *ch;
2998 + for (i = 0; i < priv->num_channels; i++) {
2999 + ch = priv->channel[i];
3000 + napi_enable(&ch->napi);
3004 +static void dpaa2_eth_napi_disable(struct dpaa2_eth_priv *priv)
3006 + struct dpaa2_eth_channel *ch;
3009 + for (i = 0; i < priv->num_channels; i++) {
3010 + ch = priv->channel[i];
3011 + napi_disable(&ch->napi);
3015 +static int dpaa2_link_state_update(struct dpaa2_eth_priv *priv)
3017 + struct dpni_link_state state;
3020 + err = dpni_get_link_state(priv->mc_io, 0, priv->mc_token, &state);
3021 + if (unlikely(err)) {
3022 + netdev_err(priv->net_dev,
3023 + "dpni_get_link_state() failed\n");
3027 + /* Check link state; speed / duplex changes are not treated yet */
3028 + if (priv->link_state.up == state.up)
3031 + priv->link_state = state;
3033 + netif_carrier_on(priv->net_dev);
3034 + netif_tx_start_all_queues(priv->net_dev);
3036 + netif_tx_stop_all_queues(priv->net_dev);
3037 + netif_carrier_off(priv->net_dev);
3040 + netdev_info(priv->net_dev, "Link Event: state %s\n",
3041 + state.up ? "up" : "down");
3046 +static int dpaa2_eth_open(struct net_device *net_dev)
3048 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
3051 + err = dpaa2_dpbp_seed(priv, priv->dpbp_attrs.bpid);
3053 + /* Not much to do; the buffer pool, though not filled up,
3054 + * may still contain some buffers which would enable us
3057 + netdev_err(net_dev, "Buffer seeding failed for DPBP %d (bpid=%d)\n",
3058 + priv->dpbp_dev->obj_desc.id, priv->dpbp_attrs.bpid);
3061 + /* We'll only start the txqs when the link is actually ready; make sure
3062 + * we don't race against the link up notification, which may come
3063 + * immediately after dpni_enable().
3065 + netif_tx_stop_all_queues(net_dev);
3066 + dpaa2_eth_napi_enable(priv);
3067 + /* Also, explicitly set carrier off, otherwise netif_carrier_ok() will
3068 + * return true and cause 'ip link show' to report the LOWER_UP flag,
3069 + * even though the link notification wasn't even received.
3071 + netif_carrier_off(net_dev);
3073 + err = dpni_enable(priv->mc_io, 0, priv->mc_token);
3075 + dev_err(net_dev->dev.parent, "dpni_enable() failed\n");
3079 + /* If the DPMAC object has already processed the link up interrupt,
3080 + * we have to learn the link state ourselves.
3082 + err = dpaa2_link_state_update(priv);
3084 + dev_err(net_dev->dev.parent, "Can't update link state\n");
3085 + goto link_state_err;
3092 + dpaa2_eth_napi_disable(priv);
3093 + __dpaa2_dpbp_free(priv);
3097 +static int dpaa2_eth_stop(struct net_device *net_dev)
3099 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
3101 + /* Stop Tx and Rx traffic */
3102 + netif_tx_stop_all_queues(net_dev);
3103 + netif_carrier_off(net_dev);
3104 + dpni_disable(priv->mc_io, 0, priv->mc_token);
3108 + dpaa2_eth_napi_disable(priv);
3111 + __dpaa2_dpbp_free(priv);
3116 +static int dpaa2_eth_init(struct net_device *net_dev)
3118 + u64 supported = 0;
3119 + u64 not_supported = 0;
3120 + const struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
3121 + u32 options = priv->dpni_attrs.options;
3123 + /* Capabilities listing */
3124 + supported |= IFF_LIVE_ADDR_CHANGE | IFF_PROMISC | IFF_ALLMULTI;
3126 + if (options & DPNI_OPT_UNICAST_FILTER)
3127 + supported |= IFF_UNICAST_FLT;
3129 + not_supported |= IFF_UNICAST_FLT;
3131 + if (options & DPNI_OPT_MULTICAST_FILTER)
3132 + supported |= IFF_MULTICAST;
3134 + not_supported |= IFF_MULTICAST;
3136 + net_dev->priv_flags |= supported;
3137 + net_dev->priv_flags &= ~not_supported;
3140 + net_dev->features = NETIF_F_RXCSUM |
3141 + NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
3142 + NETIF_F_SG | NETIF_F_HIGHDMA |
3144 + net_dev->hw_features = net_dev->features;
3149 +static int dpaa2_eth_set_addr(struct net_device *net_dev, void *addr)
3151 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
3152 + struct device *dev = net_dev->dev.parent;
3155 + err = eth_mac_addr(net_dev, addr);
3157 + dev_err(dev, "eth_mac_addr() failed with error %d\n", err);
3161 + err = dpni_set_primary_mac_addr(priv->mc_io, 0, priv->mc_token,
3162 + net_dev->dev_addr);
3164 + dev_err(dev, "dpni_set_primary_mac_addr() failed (%d)\n", err);
3171 +/* Fill in counters maintained by the GPP driver. These may be different from
3172 + * the hardware counters obtained by ethtool.
3174 +static struct rtnl_link_stats64
3175 +*dpaa2_eth_get_stats(struct net_device *net_dev,
3176 + struct rtnl_link_stats64 *stats)
3178 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
3179 + struct rtnl_link_stats64 *percpu_stats;
3181 + u64 *netstats = (u64 *)stats;
3183 + int num = sizeof(struct rtnl_link_stats64) / sizeof(u64);
3185 + for_each_possible_cpu(i) {
3186 + percpu_stats = per_cpu_ptr(priv->percpu_stats, i);
3187 + cpustats = (u64 *)percpu_stats;
3188 + for (j = 0; j < num; j++)
3189 + netstats[j] += cpustats[j];
3195 +static int dpaa2_eth_change_mtu(struct net_device *net_dev, int mtu)
3197 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
3200 + if (mtu < 68 || mtu > DPAA2_ETH_MAX_MTU) {
3201 + netdev_err(net_dev, "Invalid MTU %d. Valid range is: 68..%d\n",
3202 + mtu, DPAA2_ETH_MAX_MTU);
3206 + /* Set the maximum Rx frame length to match the transmit side;
3207 + * account for L2 headers when computing the MFL
3209 + err = dpni_set_max_frame_length(priv->mc_io, 0, priv->mc_token,
3210 + (u16)DPAA2_ETH_L2_MAX_FRM(mtu));
3212 + netdev_err(net_dev, "dpni_set_max_frame_length() failed\n");
3216 + net_dev->mtu = mtu;
3220 +/* Convenience macro to make code littered with error checking more readable */
3221 +#define DPAA2_ETH_WARN_IF_ERR(err, netdevp, format, ...) \
3224 + netdev_warn(netdevp, format, ##__VA_ARGS__); \
3227 +/* Copy mac unicast addresses from @net_dev to @priv.
3228 + * Its sole purpose is to make dpaa2_eth_set_rx_mode() more readable.
3230 +static void _dpaa2_eth_hw_add_uc_addr(const struct net_device *net_dev,
3231 + struct dpaa2_eth_priv *priv)
3233 + struct netdev_hw_addr *ha;
3236 + netdev_for_each_uc_addr(ha, net_dev) {
3237 + err = dpni_add_mac_addr(priv->mc_io, 0, priv->mc_token,
3239 + DPAA2_ETH_WARN_IF_ERR(err, priv->net_dev,
3240 + "Could not add ucast MAC %pM to the filtering table (err %d)\n",
3245 +/* Copy mac multicast addresses from @net_dev to @priv
3246 + * Its sole purpose is to make dpaa2_eth_set_rx_mode() more readable.
3248 +static void _dpaa2_eth_hw_add_mc_addr(const struct net_device *net_dev,
3249 + struct dpaa2_eth_priv *priv)
3251 + struct netdev_hw_addr *ha;
3254 + netdev_for_each_mc_addr(ha, net_dev) {
3255 + err = dpni_add_mac_addr(priv->mc_io, 0, priv->mc_token,
3257 + DPAA2_ETH_WARN_IF_ERR(err, priv->net_dev,
3258 + "Could not add mcast MAC %pM to the filtering table (err %d)\n",
3263 +static void dpaa2_eth_set_rx_mode(struct net_device *net_dev)
3265 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
3266 + int uc_count = netdev_uc_count(net_dev);
3267 + int mc_count = netdev_mc_count(net_dev);
3268 + u8 max_uc = priv->dpni_attrs.max_unicast_filters;
3269 + u8 max_mc = priv->dpni_attrs.max_multicast_filters;
3270 + u32 options = priv->dpni_attrs.options;
3271 + u16 mc_token = priv->mc_token;
3272 + struct fsl_mc_io *mc_io = priv->mc_io;
3275 + /* Basic sanity checks; these probably indicate a misconfiguration */
3276 + if (!(options & DPNI_OPT_UNICAST_FILTER) && max_uc != 0)
3277 + netdev_info(net_dev,
3278 + "max_unicast_filters=%d, you must have DPNI_OPT_UNICAST_FILTER in the DPL\n",
3280 + if (!(options & DPNI_OPT_MULTICAST_FILTER) && max_mc != 0)
3281 + netdev_info(net_dev,
3282 + "max_multicast_filters=%d, you must have DPNI_OPT_MULTICAST_FILTER in the DPL\n",
3285 + /* Force promiscuous if the uc or mc counts exceed our capabilities. */
3286 + if (uc_count > max_uc) {
3287 + netdev_info(net_dev,
3288 + "Unicast addr count reached %d, max allowed is %d; forcing promisc\n",
3289 + uc_count, max_uc);
3290 + goto force_promisc;
3292 + if (mc_count > max_mc) {
3293 + netdev_info(net_dev,
3294 + "Multicast addr count reached %d, max allowed is %d; forcing promisc\n",
3295 + mc_count, max_mc);
3296 + goto force_mc_promisc;
3299 + /* Adjust promisc settings due to flag combinations */
3300 + if (net_dev->flags & IFF_PROMISC) {
3301 + goto force_promisc;
3302 + } else if (net_dev->flags & IFF_ALLMULTI) {
3303 + /* First, rebuild unicast filtering table. This should be done
3304 + * in promisc mode, in order to avoid frame loss while we
3305 + * progressively add entries to the table.
3306 + * We don't know whether we had been in promisc already, and
3307 + * making an MC call to find it is expensive; so set uc promisc
3310 + err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 1);
3311 + DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't set uc promisc\n");
3313 + /* Actual uc table reconstruction. */
3314 + err = dpni_clear_mac_filters(mc_io, 0, mc_token, 1, 0);
3315 + DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't clear uc filters\n");
3316 + _dpaa2_eth_hw_add_uc_addr(net_dev, priv);
3318 + /* Finally, clear uc promisc and set mc promisc as requested. */
3319 + err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 0);
3320 + DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't clear uc promisc\n");
3321 + goto force_mc_promisc;
3324 + /* Neither unicast nor multicast promisc will be on... eventually.
3325 + * For now, rebuild mac filtering tables while forcing both of them on.
3327 + err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 1);
3328 + DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't set uc promisc (%d)\n", err);
3329 + err = dpni_set_multicast_promisc(mc_io, 0, mc_token, 1);
3330 + DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't set mc promisc (%d)\n", err);
3332 + /* Actual mac filtering tables reconstruction */
3333 + err = dpni_clear_mac_filters(mc_io, 0, mc_token, 1, 1);
3334 + DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't clear mac filters\n");
3335 + _dpaa2_eth_hw_add_mc_addr(net_dev, priv);
3336 + _dpaa2_eth_hw_add_uc_addr(net_dev, priv);
3338 + /* Now we can clear both ucast and mcast promisc, without risking
3339 + * to drop legitimate frames anymore.
3341 + err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 0);
3342 + DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't clear ucast promisc\n");
3343 + err = dpni_set_multicast_promisc(mc_io, 0, mc_token, 0);
3344 + DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't clear mcast promisc\n");
3349 + err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 1);
3350 + DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't set ucast promisc\n");
3352 + err = dpni_set_multicast_promisc(mc_io, 0, mc_token, 1);
3353 + DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't set mcast promisc\n");
3356 +static int dpaa2_eth_set_features(struct net_device *net_dev,
3357 + netdev_features_t features)
3359 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
3360 + netdev_features_t changed = features ^ net_dev->features;
3363 + if (changed & NETIF_F_RXCSUM) {
3364 + bool enable = !!(features & NETIF_F_RXCSUM);
3366 + err = dpaa2_eth_set_rx_csum(priv, enable);
3371 + if (changed & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) {
3372 + bool enable = !!(features &
3373 + (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM));
3374 + err = dpaa2_eth_set_tx_csum(priv, enable);
3382 +static int dpaa2_eth_ts_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
3384 + struct dpaa2_eth_priv *priv = netdev_priv(dev);
3385 + struct hwtstamp_config config;
3387 + if (copy_from_user(&config, rq->ifr_data, sizeof(config)))
3390 + switch (config.tx_type) {
3391 + case HWTSTAMP_TX_OFF:
3392 + priv->ts_tx_en = false;
3394 + case HWTSTAMP_TX_ON:
3395 + priv->ts_tx_en = true;
3401 + if (config.rx_filter == HWTSTAMP_FILTER_NONE)
3402 + priv->ts_rx_en = false;
3404 + priv->ts_rx_en = true;
3405 + /* TS is set for all frame types, not only those requested */
3406 + config.rx_filter = HWTSTAMP_FILTER_ALL;
3409 + return copy_to_user(rq->ifr_data, &config, sizeof(config)) ?
3413 +static int dpaa2_eth_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
3415 + if (cmd == SIOCSHWTSTAMP)
3416 + return dpaa2_eth_ts_ioctl(dev, rq, cmd);
3421 +static const struct net_device_ops dpaa2_eth_ops = {
3422 + .ndo_open = dpaa2_eth_open,
3423 + .ndo_start_xmit = dpaa2_eth_tx,
3424 + .ndo_stop = dpaa2_eth_stop,
3425 + .ndo_init = dpaa2_eth_init,
3426 + .ndo_set_mac_address = dpaa2_eth_set_addr,
3427 + .ndo_get_stats64 = dpaa2_eth_get_stats,
3428 + .ndo_change_mtu = dpaa2_eth_change_mtu,
3429 + .ndo_set_rx_mode = dpaa2_eth_set_rx_mode,
3430 + .ndo_set_features = dpaa2_eth_set_features,
3431 + .ndo_do_ioctl = dpaa2_eth_ioctl,
3434 +static void dpaa2_eth_cdan_cb(struct dpaa2_io_notification_ctx *ctx)
3436 + struct dpaa2_eth_channel *ch;
3438 + ch = container_of(ctx, struct dpaa2_eth_channel, nctx);
3440 + /* Update NAPI statistics */
3443 + napi_schedule_irqoff(&ch->napi);
3446 +static void dpaa2_eth_setup_fqs(struct dpaa2_eth_priv *priv)
3450 + /* We have one TxConf FQ per Tx flow */
3451 + for (i = 0; i < priv->dpni_attrs.max_senders; i++) {
3452 + priv->fq[priv->num_fqs].netdev_priv = priv;
3453 + priv->fq[priv->num_fqs].type = DPAA2_TX_CONF_FQ;
3454 + priv->fq[priv->num_fqs].consume = dpaa2_eth_tx_conf;
3455 + priv->fq[priv->num_fqs++].flowid = DPNI_NEW_FLOW_ID;
3458 + /* The number of Rx queues (Rx distribution width) may be different from
3459 + * the number of cores.
3460 + * We only support one traffic class for now.
3462 + for (i = 0; i < dpaa2_queue_count(priv); i++) {
3463 + priv->fq[priv->num_fqs].netdev_priv = priv;
3464 + priv->fq[priv->num_fqs].type = DPAA2_RX_FQ;
3465 + priv->fq[priv->num_fqs].consume = dpaa2_eth_rx;
3466 + priv->fq[priv->num_fqs++].flowid = (u16)i;
3469 +#ifdef CONFIG_FSL_DPAA2_ETH_USE_ERR_QUEUE
3470 + /* We have exactly one Rx error queue per DPNI */
3471 + priv->fq[priv->num_fqs].netdev_priv = priv;
3472 + priv->fq[priv->num_fqs].type = DPAA2_RX_ERR_FQ;
3473 + priv->fq[priv->num_fqs++].consume = dpaa2_eth_rx_err;
3477 +static int check_obj_version(struct fsl_mc_device *ls_dev, u16 mc_version)
3479 + char *name = ls_dev->obj_desc.type;
3480 + struct device *dev = &ls_dev->dev;
3481 + u16 supported_version, flib_version;
3483 + if (strcmp(name, "dpni") == 0) {
3484 + flib_version = DPNI_VER_MAJOR;
3485 + supported_version = DPAA2_SUPPORTED_DPNI_VERSION;
3486 + } else if (strcmp(name, "dpbp") == 0) {
3487 + flib_version = DPBP_VER_MAJOR;
3488 + supported_version = DPAA2_SUPPORTED_DPBP_VERSION;
3489 + } else if (strcmp(name, "dpcon") == 0) {
3490 + flib_version = DPCON_VER_MAJOR;
3491 + supported_version = DPAA2_SUPPORTED_DPCON_VERSION;
3493 + dev_err(dev, "invalid object type (%s)\n", name);
3497 + /* Check that the FLIB-defined version matches the one reported by MC */
3498 + if (mc_version != flib_version) {
3500 + "%s FLIB version mismatch: MC reports %d, we have %d\n",
3501 + name, mc_version, flib_version);
3505 + /* ... and that we actually support it */
3506 + if (mc_version < supported_version) {
3507 + dev_err(dev, "Unsupported %s FLIB version (%d)\n",
3508 + name, mc_version);
3511 + dev_dbg(dev, "Using %s FLIB version %d\n", name, mc_version);
3516 +static struct fsl_mc_device *dpaa2_dpcon_setup(struct dpaa2_eth_priv *priv)
3518 + struct fsl_mc_device *dpcon;
3519 + struct device *dev = priv->net_dev->dev.parent;
3520 + struct dpcon_attr attrs;
3523 + err = fsl_mc_object_allocate(to_fsl_mc_device(dev),
3524 + FSL_MC_POOL_DPCON, &dpcon);
3526 + dev_info(dev, "Not enough DPCONs, will go on as-is\n");
3530 + err = dpcon_open(priv->mc_io, 0, dpcon->obj_desc.id, &dpcon->mc_handle);
3532 + dev_err(dev, "dpcon_open() failed\n");
3536 + err = dpcon_get_attributes(priv->mc_io, 0, dpcon->mc_handle, &attrs);
3538 + dev_err(dev, "dpcon_get_attributes() failed\n");
3539 + goto err_get_attr;
3542 + err = check_obj_version(dpcon, attrs.version.major);
3544 + goto err_dpcon_ver;
3546 + err = dpcon_enable(priv->mc_io, 0, dpcon->mc_handle);
3548 + dev_err(dev, "dpcon_enable() failed\n");
3557 + dpcon_close(priv->mc_io, 0, dpcon->mc_handle);
3559 + fsl_mc_object_free(dpcon);
3564 +static void dpaa2_dpcon_free(struct dpaa2_eth_priv *priv,
3565 + struct fsl_mc_device *dpcon)
3567 + dpcon_disable(priv->mc_io, 0, dpcon->mc_handle);
3568 + dpcon_close(priv->mc_io, 0, dpcon->mc_handle);
3569 + fsl_mc_object_free(dpcon);
3572 +static struct dpaa2_eth_channel *
3573 +dpaa2_alloc_channel(struct dpaa2_eth_priv *priv)
3575 + struct dpaa2_eth_channel *channel;
3576 + struct dpcon_attr attr;
3577 + struct device *dev = priv->net_dev->dev.parent;
3580 + channel = kzalloc(sizeof(*channel), GFP_ATOMIC);
3582 + dev_err(dev, "Memory allocation failed\n");
3586 + channel->dpcon = dpaa2_dpcon_setup(priv);
3587 + if (!channel->dpcon)
3590 + err = dpcon_get_attributes(priv->mc_io, 0, channel->dpcon->mc_handle,
3593 + dev_err(dev, "dpcon_get_attributes() failed\n");
3594 + goto err_get_attr;
3597 + channel->dpcon_id = attr.id;
3598 + channel->ch_id = attr.qbman_ch_id;
3599 + channel->priv = priv;
3604 + dpaa2_dpcon_free(priv, channel->dpcon);
3610 +static void dpaa2_free_channel(struct dpaa2_eth_priv *priv,
3611 + struct dpaa2_eth_channel *channel)
3613 + dpaa2_dpcon_free(priv, channel->dpcon);
3617 +static int dpaa2_dpio_setup(struct dpaa2_eth_priv *priv)
3619 + struct dpaa2_io_notification_ctx *nctx;
3620 + struct dpaa2_eth_channel *channel;
3621 + struct dpcon_notification_cfg dpcon_notif_cfg;
3622 + struct device *dev = priv->net_dev->dev.parent;
3625 + /* Don't allocate more channels than strictly necessary and assign
3626 + * them to cores starting from the first one available in
3627 + * cpu_online_mask.
3628 + * If the number of channels is lower than the number of cores,
3629 + * there will be no rx/tx conf processing on the last cores in the mask.
3631 + cpumask_clear(&priv->dpio_cpumask);
3632 + for_each_online_cpu(i) {
3633 + /* Try to allocate a channel */
3634 + channel = dpaa2_alloc_channel(priv);
3636 + goto err_alloc_ch;
3638 + priv->channel[priv->num_channels] = channel;
3640 + nctx = &channel->nctx;
3641 + nctx->is_cdan = 1;
3642 + nctx->cb = dpaa2_eth_cdan_cb;
3643 + nctx->id = channel->ch_id;
3644 + nctx->desired_cpu = i;
3646 + /* Register the new context */
3647 + err = dpaa2_io_service_register(NULL, nctx);
3649 + dev_info(dev, "No affine DPIO for core %d\n", i);
3650 + /* This core doesn't have an affine DPIO, but there's
3651 + * a chance another one does, so keep trying
3653 + dpaa2_free_channel(priv, channel);
3657 + /* Register DPCON notification with MC */
3658 + dpcon_notif_cfg.dpio_id = nctx->dpio_id;
3659 + dpcon_notif_cfg.priority = 0;
3660 + dpcon_notif_cfg.user_ctx = nctx->qman64;
3661 + err = dpcon_set_notification(priv->mc_io, 0,
3662 + channel->dpcon->mc_handle,
3663 + &dpcon_notif_cfg);
3665 + dev_err(dev, "dpcon_set_notification() failed\n");
3666 + goto err_set_cdan;
3669 + /* If we managed to allocate a channel and also found an affine
3670 + * DPIO for this core, add it to the final mask
3672 + cpumask_set_cpu(i, &priv->dpio_cpumask);
3673 + priv->num_channels++;
3675 + if (priv->num_channels == dpaa2_max_channels(priv))
3679 + /* Tx confirmation queues can only be serviced by cpus
3680 + * with an affine DPIO/channel
3682 + cpumask_copy(&priv->txconf_cpumask, &priv->dpio_cpumask);
3687 + dpaa2_io_service_deregister(NULL, nctx);
3688 + dpaa2_free_channel(priv, channel);
3690 + if (cpumask_empty(&priv->dpio_cpumask)) {
3691 + dev_err(dev, "No cpu with an affine DPIO/DPCON\n");
3694 + cpumask_copy(&priv->txconf_cpumask, &priv->dpio_cpumask);
3699 +static void dpaa2_dpio_free(struct dpaa2_eth_priv *priv)
3702 + struct dpaa2_eth_channel *ch;
3704 + /* deregister CDAN notifications and free channels */
3705 + for (i = 0; i < priv->num_channels; i++) {
3706 + ch = priv->channel[i];
3707 + dpaa2_io_service_deregister(NULL, &ch->nctx);
3708 + dpaa2_free_channel(priv, ch);
3712 +static struct dpaa2_eth_channel *
3713 +dpaa2_get_channel_by_cpu(struct dpaa2_eth_priv *priv, int cpu)
3715 + struct device *dev = priv->net_dev->dev.parent;
3718 + for (i = 0; i < priv->num_channels; i++)
3719 + if (priv->channel[i]->nctx.desired_cpu == cpu)
3720 + return priv->channel[i];
3722 + /* We should never get here. Issue a warning and return
3723 + * the first channel, because it's still better than nothing
3725 + dev_warn(dev, "No affine channel found for cpu %d\n", cpu);
3727 + return priv->channel[0];
3730 +static void dpaa2_set_fq_affinity(struct dpaa2_eth_priv *priv)
3732 + struct device *dev = priv->net_dev->dev.parent;
3733 + struct dpaa2_eth_fq *fq;
3734 + int rx_cpu, txconf_cpu;
3737 + /* For each FQ, pick one channel/CPU to deliver frames to.
3738 + * This may well change at runtime, either through irqbalance or
3739 + * through direct user intervention.
3741 + rx_cpu = cpumask_first(&priv->dpio_cpumask);
3742 + txconf_cpu = cpumask_first(&priv->txconf_cpumask);
3744 + for (i = 0; i < priv->num_fqs; i++) {
3745 + fq = &priv->fq[i];
3746 + switch (fq->type) {
3748 + case DPAA2_RX_ERR_FQ:
3749 + fq->target_cpu = rx_cpu;
3750 + cpumask_rr(rx_cpu, &priv->dpio_cpumask);
3752 + case DPAA2_TX_CONF_FQ:
3753 + fq->target_cpu = txconf_cpu;
3754 + cpumask_rr(txconf_cpu, &priv->txconf_cpumask);
3757 + dev_err(dev, "Unknown FQ type: %d\n", fq->type);
3759 + fq->channel = dpaa2_get_channel_by_cpu(priv, fq->target_cpu);
3763 +static int dpaa2_dpbp_setup(struct dpaa2_eth_priv *priv)
3766 + struct fsl_mc_device *dpbp_dev;
3767 + struct device *dev = priv->net_dev->dev.parent;
3769 + err = fsl_mc_object_allocate(to_fsl_mc_device(dev), FSL_MC_POOL_DPBP,
3772 + dev_err(dev, "DPBP device allocation failed\n");
3776 + priv->dpbp_dev = dpbp_dev;
3778 + err = dpbp_open(priv->mc_io, 0, priv->dpbp_dev->obj_desc.id,
3779 + &dpbp_dev->mc_handle);
3781 + dev_err(dev, "dpbp_open() failed\n");
3785 + err = dpbp_enable(priv->mc_io, 0, dpbp_dev->mc_handle);
3787 + dev_err(dev, "dpbp_enable() failed\n");
3791 + err = dpbp_get_attributes(priv->mc_io, 0, dpbp_dev->mc_handle,
3792 + &priv->dpbp_attrs);
3794 + dev_err(dev, "dpbp_get_attributes() failed\n");
3795 + goto err_get_attr;
3798 + err = check_obj_version(dpbp_dev, priv->dpbp_attrs.version.major);
3800 + goto err_dpbp_ver;
3806 + dpbp_disable(priv->mc_io, 0, dpbp_dev->mc_handle);
3808 + dpbp_close(priv->mc_io, 0, dpbp_dev->mc_handle);
3810 + fsl_mc_object_free(dpbp_dev);
3815 +static void dpaa2_dpbp_free(struct dpaa2_eth_priv *priv)
3817 + __dpaa2_dpbp_free(priv);
3818 + dpbp_disable(priv->mc_io, 0, priv->dpbp_dev->mc_handle);
3819 + dpbp_close(priv->mc_io, 0, priv->dpbp_dev->mc_handle);
3820 + fsl_mc_object_free(priv->dpbp_dev);
3823 +static int dpaa2_dpni_setup(struct fsl_mc_device *ls_dev)
3825 + struct device *dev = &ls_dev->dev;
3826 + struct dpaa2_eth_priv *priv;
3827 + struct net_device *net_dev;
3831 + net_dev = dev_get_drvdata(dev);
3832 + priv = netdev_priv(net_dev);
3834 + priv->dpni_id = ls_dev->obj_desc.id;
3836 + /* and get a handle for the DPNI this interface is associated with */
3837 + err = dpni_open(priv->mc_io, 0, priv->dpni_id, &priv->mc_token);
3839 + dev_err(dev, "dpni_open() failed\n");
3843 + ls_dev->mc_io = priv->mc_io;
3844 + ls_dev->mc_handle = priv->mc_token;
3846 + dma_mem = kzalloc(DPAA2_EXT_CFG_SIZE, GFP_DMA | GFP_KERNEL);
3850 + priv->dpni_attrs.ext_cfg_iova = dma_map_single(dev, dma_mem,
3851 + DPAA2_EXT_CFG_SIZE,
3853 + if (dma_mapping_error(dev, priv->dpni_attrs.ext_cfg_iova)) {
3854 + dev_err(dev, "dma mapping for dpni_ext_cfg failed\n");
3858 + err = dpni_get_attributes(priv->mc_io, 0, priv->mc_token,
3859 + &priv->dpni_attrs);
3861 + dev_err(dev, "dpni_get_attributes() failed (err=%d)\n", err);
3862 + dma_unmap_single(dev, priv->dpni_attrs.ext_cfg_iova,
3863 + DPAA2_EXT_CFG_SIZE, DMA_FROM_DEVICE);
3864 + goto err_get_attr;
3867 + err = check_obj_version(ls_dev, priv->dpni_attrs.version.major);
3869 + goto err_dpni_ver;
3871 + dma_unmap_single(dev, priv->dpni_attrs.ext_cfg_iova,
3872 + DPAA2_EXT_CFG_SIZE, DMA_FROM_DEVICE);
3874 + memset(&priv->dpni_ext_cfg, 0, sizeof(priv->dpni_ext_cfg));
3875 + err = dpni_extract_extended_cfg(&priv->dpni_ext_cfg, dma_mem);
3877 + dev_err(dev, "dpni_extract_extended_cfg() failed\n");
3881 + /* Configure our buffers' layout */
3882 + priv->buf_layout.options = DPNI_BUF_LAYOUT_OPT_PARSER_RESULT |
3883 + DPNI_BUF_LAYOUT_OPT_FRAME_STATUS |
3884 + DPNI_BUF_LAYOUT_OPT_PRIVATE_DATA_SIZE |
3885 + DPNI_BUF_LAYOUT_OPT_DATA_ALIGN;
3886 + priv->buf_layout.pass_parser_result = true;
3887 + priv->buf_layout.pass_frame_status = true;
3888 + priv->buf_layout.private_data_size = DPAA2_ETH_SWA_SIZE;
3889 + /* HW erratum mandates data alignment in multiples of 256 */
3890 + priv->buf_layout.data_align = DPAA2_ETH_RX_BUF_ALIGN;
3892 + err = dpni_set_rx_buffer_layout(priv->mc_io, 0, priv->mc_token,
3893 + &priv->buf_layout);
3895 + dev_err(dev, "dpni_set_rx_buffer_layout() failed\n");
3896 + goto err_buf_layout;
3899 + /* remove Rx-only options */
3900 + priv->buf_layout.options &= ~(DPNI_BUF_LAYOUT_OPT_DATA_ALIGN |
3901 + DPNI_BUF_LAYOUT_OPT_PARSER_RESULT);
3902 + err = dpni_set_tx_buffer_layout(priv->mc_io, 0, priv->mc_token,
3903 + &priv->buf_layout);
3905 + dev_err(dev, "dpni_set_tx_buffer_layout() failed\n");
3906 + goto err_buf_layout;
3908 + /* ... tx-confirm. */
3909 + priv->buf_layout.options &= ~DPNI_BUF_LAYOUT_OPT_PRIVATE_DATA_SIZE;
3910 + priv->buf_layout.options |= DPNI_BUF_LAYOUT_OPT_TIMESTAMP;
3911 + priv->buf_layout.pass_timestamp = 1;
3912 + err = dpni_set_tx_conf_buffer_layout(priv->mc_io, 0, priv->mc_token,
3913 + &priv->buf_layout);
3915 + dev_err(dev, "dpni_set_tx_conf_buffer_layout() failed\n");
3916 + goto err_buf_layout;
3918 + /* Now that we've set our tx buffer layout, retrieve the minimum
3919 + * required tx data offset.
3921 + err = dpni_get_tx_data_offset(priv->mc_io, 0, priv->mc_token,
3922 + &priv->tx_data_offset);
3924 + dev_err(dev, "dpni_get_tx_data_offset() failed\n");
3925 + goto err_data_offset;
3928 + /* Warn in case TX data offset is not a multiple of 64 bytes. */
3929 + WARN_ON(priv->tx_data_offset % 64);
3931 + /* Accommodate SWA space. */
3932 + priv->tx_data_offset += DPAA2_ETH_SWA_SIZE;
3934 + /* allocate classification rule space */
3935 + priv->cls_rule = kzalloc(sizeof(*priv->cls_rule) *
3936 + DPAA2_CLASSIFIER_ENTRY_COUNT, GFP_KERNEL);
3937 + if (!priv->cls_rule)
3938 + goto err_cls_rule;
3953 + dpni_close(priv->mc_io, 0, priv->mc_token);
3958 +static void dpaa2_dpni_free(struct dpaa2_eth_priv *priv)
3962 + err = dpni_reset(priv->mc_io, 0, priv->mc_token);
3964 + netdev_warn(priv->net_dev, "dpni_reset() failed (err %d)\n",
3967 + dpni_close(priv->mc_io, 0, priv->mc_token);
3970 +static int dpaa2_rx_flow_setup(struct dpaa2_eth_priv *priv,
3971 + struct dpaa2_eth_fq *fq)
3973 + struct device *dev = priv->net_dev->dev.parent;
3974 + struct dpni_queue_attr rx_queue_attr;
3975 + struct dpni_queue_cfg queue_cfg;
3978 + memset(&queue_cfg, 0, sizeof(queue_cfg));
3979 + queue_cfg.options = DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST |
3980 + DPNI_QUEUE_OPT_TAILDROP_THRESHOLD;
3981 + queue_cfg.dest_cfg.dest_type = DPNI_DEST_DPCON;
3982 + queue_cfg.dest_cfg.priority = 1;
3983 + queue_cfg.user_ctx = (u64)fq;
3984 + queue_cfg.dest_cfg.dest_id = fq->channel->dpcon_id;
3985 + queue_cfg.tail_drop_threshold = DPAA2_ETH_TAILDROP_THRESH;
3986 + err = dpni_set_rx_flow(priv->mc_io, 0, priv->mc_token, 0, fq->flowid,
3989 + dev_err(dev, "dpni_set_rx_flow() failed\n");
3993 + /* Get the actual FQID that was assigned by MC */
3994 + err = dpni_get_rx_flow(priv->mc_io, 0, priv->mc_token, 0, fq->flowid,
3997 + dev_err(dev, "dpni_get_rx_flow() failed\n");
4000 + fq->fqid = rx_queue_attr.fqid;
4005 +static int dpaa2_tx_flow_setup(struct dpaa2_eth_priv *priv,
4006 + struct dpaa2_eth_fq *fq)
4008 + struct device *dev = priv->net_dev->dev.parent;
4009 + struct dpni_tx_flow_cfg tx_flow_cfg;
4010 + struct dpni_tx_conf_cfg tx_conf_cfg;
4011 + struct dpni_tx_conf_attr tx_conf_attr;
4014 + memset(&tx_flow_cfg, 0, sizeof(tx_flow_cfg));
4015 + tx_flow_cfg.options = DPNI_TX_FLOW_OPT_TX_CONF_ERROR;
4016 + tx_flow_cfg.use_common_tx_conf_queue = 0;
4017 + err = dpni_set_tx_flow(priv->mc_io, 0, priv->mc_token,
4018 + &fq->flowid, &tx_flow_cfg);
4020 + dev_err(dev, "dpni_set_tx_flow() failed\n");
4024 + tx_conf_cfg.errors_only = 0;
4025 + tx_conf_cfg.queue_cfg.options = DPNI_QUEUE_OPT_USER_CTX |
4026 + DPNI_QUEUE_OPT_DEST;
4027 + tx_conf_cfg.queue_cfg.user_ctx = (u64)fq;
4028 + tx_conf_cfg.queue_cfg.dest_cfg.dest_type = DPNI_DEST_DPCON;
4029 + tx_conf_cfg.queue_cfg.dest_cfg.dest_id = fq->channel->dpcon_id;
4030 + tx_conf_cfg.queue_cfg.dest_cfg.priority = 0;
4032 + err = dpni_set_tx_conf(priv->mc_io, 0, priv->mc_token, fq->flowid,
4035 + dev_err(dev, "dpni_set_tx_conf() failed\n");
4039 + err = dpni_get_tx_conf(priv->mc_io, 0, priv->mc_token, fq->flowid,
4042 + dev_err(dev, "dpni_get_tx_conf() failed\n");
4046 + fq->fqid = tx_conf_attr.queue_attr.fqid;
4051 +#ifdef CONFIG_FSL_DPAA2_ETH_USE_ERR_QUEUE
4052 +static int dpaa2_rx_err_setup(struct dpaa2_eth_priv *priv,
4053 + struct dpaa2_eth_fq *fq)
4055 + struct dpni_queue_attr queue_attr;
4056 + struct dpni_queue_cfg queue_cfg;
4059 + /* Configure the Rx error queue to generate CDANs,
4060 + * just like the Rx queues */
4061 + queue_cfg.options = DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST;
4062 + queue_cfg.dest_cfg.dest_type = DPNI_DEST_DPCON;
4063 + queue_cfg.dest_cfg.priority = 1;
4064 + queue_cfg.user_ctx = (u64)fq;
4065 + queue_cfg.dest_cfg.dest_id = fq->channel->dpcon_id;
4066 + err = dpni_set_rx_err_queue(priv->mc_io, 0, priv->mc_token, &queue_cfg);
4068 + netdev_err(priv->net_dev, "dpni_set_rx_err_queue() failed\n");
4072 + /* Get the FQID */
4073 + err = dpni_get_rx_err_queue(priv->mc_io, 0, priv->mc_token, &queue_attr);
4075 + netdev_err(priv->net_dev, "dpni_get_rx_err_queue() failed\n");
4078 + fq->fqid = queue_attr.fqid;
4084 +static int dpaa2_dpni_bind(struct dpaa2_eth_priv *priv)
4086 + struct net_device *net_dev = priv->net_dev;
4087 + struct device *dev = net_dev->dev.parent;
4088 + struct dpni_pools_cfg pools_params;
4089 + struct dpni_error_cfg err_cfg;
4093 + pools_params.num_dpbp = 1;
4094 + pools_params.pools[0].dpbp_id = priv->dpbp_dev->obj_desc.id;
4095 + pools_params.pools[0].backup_pool = 0;
4096 + pools_params.pools[0].buffer_size = DPAA2_ETH_RX_BUFFER_SIZE;
4097 + err = dpni_set_pools(priv->mc_io, 0, priv->mc_token, &pools_params);
4099 + dev_err(dev, "dpni_set_pools() failed\n");
4103 + dpaa2_cls_check(net_dev);
4105 +	/* Have the interface implicitly distribute traffic based on supported
4108 + if (dpaa2_eth_hash_enabled(priv)) {
4109 + err = dpaa2_set_hash(net_dev, DPAA2_RXH_SUPPORTED);
4114 + /* Configure handling of error frames */
4115 + err_cfg.errors = DPAA2_ETH_RX_ERR_MASK;
4116 + err_cfg.set_frame_annotation = 1;
4117 +#ifdef CONFIG_FSL_DPAA2_ETH_USE_ERR_QUEUE
4118 + err_cfg.error_action = DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE;
4120 + err_cfg.error_action = DPNI_ERROR_ACTION_DISCARD;
4122 + err = dpni_set_errors_behavior(priv->mc_io, 0, priv->mc_token,
4125 + dev_err(dev, "dpni_set_errors_behavior failed\n");
4129 + /* Configure Rx and Tx conf queues to generate CDANs */
4130 + for (i = 0; i < priv->num_fqs; i++) {
4131 + switch (priv->fq[i].type) {
4133 + err = dpaa2_rx_flow_setup(priv, &priv->fq[i]);
4135 + case DPAA2_TX_CONF_FQ:
4136 + err = dpaa2_tx_flow_setup(priv, &priv->fq[i]);
4138 +#ifdef CONFIG_FSL_DPAA2_ETH_USE_ERR_QUEUE
4139 + case DPAA2_RX_ERR_FQ:
4140 + err = dpaa2_rx_err_setup(priv, &priv->fq[i]);
4144 + dev_err(dev, "Invalid FQ type %d\n", priv->fq[i].type);
4151 + err = dpni_get_qdid(priv->mc_io, 0, priv->mc_token, &priv->tx_qdid);
4153 + dev_err(dev, "dpni_get_qdid() failed\n");
4160 +static int dpaa2_eth_alloc_rings(struct dpaa2_eth_priv *priv)
4162 + struct net_device *net_dev = priv->net_dev;
4163 + struct device *dev = net_dev->dev.parent;
4166 + for (i = 0; i < priv->num_channels; i++) {
4167 + priv->channel[i]->store =
4168 + dpaa2_io_store_create(DPAA2_ETH_STORE_SIZE, dev);
4169 + if (!priv->channel[i]->store) {
4170 + netdev_err(net_dev, "dpaa2_io_store_create() failed\n");
4178 + for (i = 0; i < priv->num_channels; i++) {
4179 + if (!priv->channel[i]->store)
4181 + dpaa2_io_store_destroy(priv->channel[i]->store);
4187 +static void dpaa2_eth_free_rings(struct dpaa2_eth_priv *priv)
4191 + for (i = 0; i < priv->num_channels; i++)
4192 + dpaa2_io_store_destroy(priv->channel[i]->store);
4195 +static int dpaa2_eth_netdev_init(struct net_device *net_dev)
4198 + struct device *dev = net_dev->dev.parent;
4199 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
4200 + u8 mac_addr[ETH_ALEN];
4201 + u8 bcast_addr[ETH_ALEN];
4203 + net_dev->netdev_ops = &dpaa2_eth_ops;
4205 + /* If the DPL contains all-0 mac_addr, set a random hardware address */
4206 + err = dpni_get_primary_mac_addr(priv->mc_io, 0, priv->mc_token,
4209 +		dev_err(dev, "dpni_get_primary_mac_addr() failed (%d)\n", err);
4212 + if (is_zero_ether_addr(mac_addr)) {
4213 + /* Fills in net_dev->dev_addr, as required by
4214 + * register_netdevice()
4216 + eth_hw_addr_random(net_dev);
4217 + /* Make the user aware, without cluttering the boot log */
4218 +		pr_info_once(KBUILD_MODNAME " device(s) have all-zero hwaddr, replaced with random\n");
4219 + err = dpni_set_primary_mac_addr(priv->mc_io, 0, priv->mc_token,
4220 + net_dev->dev_addr);
4222 + dev_err(dev, "dpni_set_primary_mac_addr(): %d\n", err);
4225 + /* Override NET_ADDR_RANDOM set by eth_hw_addr_random(); for all
4226 + * practical purposes, this will be our "permanent" mac address,
4227 + * at least until the next reboot. This move will also permit
4228 +		 * register_netdevice() to properly fill in net_dev->perm_addr.
4230 + net_dev->addr_assign_type = NET_ADDR_PERM;
4232 + /* NET_ADDR_PERM is default, all we have to do is
4233 + * fill in the device addr.
4235 + memcpy(net_dev->dev_addr, mac_addr, net_dev->addr_len);
4238 + /* Explicitly add the broadcast address to the MAC filtering table;
4239 + * the MC won't do that for us.
4241 + eth_broadcast_addr(bcast_addr);
4242 + err = dpni_add_mac_addr(priv->mc_io, 0, priv->mc_token, bcast_addr);
4244 + dev_warn(dev, "dpni_add_mac_addr() failed (%d)\n", err);
4245 +		/* Not a fatal error; egress traffic still works */
4248 + /* Reserve enough space to align buffer as per hardware requirement;
4249 + * NOTE: priv->tx_data_offset MUST be initialized at this point.
4251 + net_dev->needed_headroom = DPAA2_ETH_NEEDED_HEADROOM(priv);
4253 + /* Our .ndo_init will be called herein */
4254 + err = register_netdev(net_dev);
4256 + dev_err(dev, "register_netdev() = %d\n", err);
4263 +#ifdef CONFIG_FSL_DPAA2_ETH_LINK_POLL
4264 +static int dpaa2_poll_link_state(void *arg)
4266 + struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)arg;
4269 + while (!kthread_should_stop()) {
4270 + err = dpaa2_link_state_update(priv);
4271 + if (unlikely(err))
4274 + msleep(DPAA2_ETH_LINK_STATE_REFRESH);
4280 +static irqreturn_t dpni_irq0_handler(int irq_num, void *arg)
4282 + return IRQ_WAKE_THREAD;
4285 +static irqreturn_t dpni_irq0_handler_thread(int irq_num, void *arg)
4287 + u8 irq_index = DPNI_IRQ_INDEX;
4288 + u32 status, clear = 0;
4289 + struct device *dev = (struct device *)arg;
4290 + struct fsl_mc_device *dpni_dev = to_fsl_mc_device(dev);
4291 + struct net_device *net_dev = dev_get_drvdata(dev);
4294 + netdev_dbg(net_dev, "IRQ %d received\n", irq_num);
4295 + err = dpni_get_irq_status(dpni_dev->mc_io, 0, dpni_dev->mc_handle,
4296 + irq_index, &status);
4297 + if (unlikely(err)) {
4298 +		netdev_err(net_dev, "Can't get irq status (err %d)\n", err);
4299 + clear = 0xffffffff;
4303 + if (status & DPNI_IRQ_EVENT_LINK_CHANGED) {
4304 + clear |= DPNI_IRQ_EVENT_LINK_CHANGED;
4305 + dpaa2_link_state_update(netdev_priv(net_dev));
4309 + dpni_clear_irq_status(dpni_dev->mc_io, 0, dpni_dev->mc_handle,
4310 + irq_index, clear);
4311 + return IRQ_HANDLED;
4314 +static int dpaa2_eth_setup_irqs(struct fsl_mc_device *ls_dev)
4317 + struct fsl_mc_device_irq *irq;
4318 + int irq_count = ls_dev->obj_desc.irq_count;
4319 + u8 irq_index = DPNI_IRQ_INDEX;
4320 + u32 mask = DPNI_IRQ_EVENT_LINK_CHANGED;
4322 + /* The only interrupt supported now is the link state notification. */
4323 + if (WARN_ON(irq_count != 1))
4326 + irq = ls_dev->irqs[0];
4327 + err = devm_request_threaded_irq(&ls_dev->dev, irq->msi_desc->irq,
4328 + dpni_irq0_handler,
4329 + dpni_irq0_handler_thread,
4330 + IRQF_NO_SUSPEND | IRQF_ONESHOT,
4331 + dev_name(&ls_dev->dev), &ls_dev->dev);
4333 +		dev_err(&ls_dev->dev, "devm_request_threaded_irq(): %d\n", err);
4337 + err = dpni_set_irq_mask(ls_dev->mc_io, 0, ls_dev->mc_handle,
4340 +		dev_err(&ls_dev->dev, "dpni_set_irq_mask(): %d\n", err);
4344 + err = dpni_set_irq_enable(ls_dev->mc_io, 0, ls_dev->mc_handle,
4347 +		dev_err(&ls_dev->dev, "dpni_set_irq_enable(): %d\n", err);
4355 +static void dpaa2_eth_napi_add(struct dpaa2_eth_priv *priv)
4358 + struct dpaa2_eth_channel *ch;
4360 + for (i = 0; i < priv->num_channels; i++) {
4361 + ch = priv->channel[i];
4362 + /* NAPI weight *MUST* be a multiple of DPAA2_ETH_STORE_SIZE */
4363 + netif_napi_add(priv->net_dev, &ch->napi, dpaa2_eth_poll,
4364 + NAPI_POLL_WEIGHT);
4368 +static void dpaa2_eth_napi_del(struct dpaa2_eth_priv *priv)
4371 + struct dpaa2_eth_channel *ch;
4373 + for (i = 0; i < priv->num_channels; i++) {
4374 + ch = priv->channel[i];
4375 + netif_napi_del(&ch->napi);
4379 +/* SysFS support */
4381 +static ssize_t dpaa2_eth_show_tx_shaping(struct device *dev,
4382 + struct device_attribute *attr,
4385 + struct dpaa2_eth_priv *priv = netdev_priv(to_net_dev(dev));
4386 +	/* No MC API to query the shaping config, so we keep a local copy */
4387 + struct dpni_tx_shaping_cfg *scfg = &priv->shaping_cfg;
4389 + return sprintf(buf, "%u %hu\n", scfg->rate_limit, scfg->max_burst_size);
4392 +static ssize_t dpaa2_eth_write_tx_shaping(struct device *dev,
4393 + struct device_attribute *attr,
4398 + struct dpaa2_eth_priv *priv = netdev_priv(to_net_dev(dev));
4399 + struct dpni_tx_shaping_cfg scfg;
4401 + items = sscanf(buf, "%u %hu", &scfg.rate_limit, &scfg.max_burst_size);
4403 + pr_err("Expected format: \"rate_limit(Mbps) max_burst_size(bytes)\"\n");
4406 + /* Size restriction as per MC API documentation */
4407 + if (scfg.max_burst_size > 64000) {
4408 +		pr_err("max_burst_size must be <= 64000\n");
4412 + err = dpni_set_tx_shaping(priv->mc_io, 0, priv->mc_token, &scfg);
4414 + dev_err(dev, "dpni_set_tx_shaping() failed\n");
4417 + /* If successful, save the current configuration for future inquiries */
4418 + priv->shaping_cfg = scfg;
4423 +static ssize_t dpaa2_eth_show_txconf_cpumask(struct device *dev,
4424 + struct device_attribute *attr,
4427 + struct dpaa2_eth_priv *priv = netdev_priv(to_net_dev(dev));
4429 + return cpumap_print_to_pagebuf(1, buf, &priv->txconf_cpumask);
4432 +static ssize_t dpaa2_eth_write_txconf_cpumask(struct device *dev,
4433 + struct device_attribute *attr,
4437 + struct dpaa2_eth_priv *priv = netdev_priv(to_net_dev(dev));
4438 + struct dpaa2_eth_fq *fq;
4439 + bool running = netif_running(priv->net_dev);
4442 + err = cpulist_parse(buf, &priv->txconf_cpumask);
4446 + /* Only accept CPUs that have an affine DPIO */
4447 + if (!cpumask_subset(&priv->txconf_cpumask, &priv->dpio_cpumask)) {
4448 + netdev_info(priv->net_dev,
4449 + "cpumask must be a subset of 0x%lx\n",
4450 + *cpumask_bits(&priv->dpio_cpumask));
4451 + cpumask_and(&priv->txconf_cpumask, &priv->dpio_cpumask,
4452 + &priv->txconf_cpumask);
4455 + /* Rewiring the TxConf FQs requires interface shutdown.
4458 + err = dpaa2_eth_stop(priv->net_dev);
4463 + /* Set the new TxConf FQ affinities */
4464 + dpaa2_set_fq_affinity(priv);
4466 +#ifdef CONFIG_FSL_DPAA2_ETH_LINK_POLL
4467 + /* dpaa2_eth_open() below will *stop* the Tx queues until an explicit
4468 + * link up notification is received. Give the polling thread enough time
4469 + * to detect the link state change, or else we'll end up with the
4470 + * transmission side forever shut down.
4472 + msleep(2 * DPAA2_ETH_LINK_STATE_REFRESH);
4475 + for (i = 0; i < priv->num_fqs; i++) {
4476 + fq = &priv->fq[i];
4477 + if (fq->type != DPAA2_TX_CONF_FQ)
4479 + dpaa2_tx_flow_setup(priv, fq);
4483 + err = dpaa2_eth_open(priv->net_dev);
4491 +static struct device_attribute dpaa2_eth_attrs[] = {
4492 + __ATTR(txconf_cpumask,
4493 + S_IRUSR | S_IWUSR,
4494 + dpaa2_eth_show_txconf_cpumask,
4495 + dpaa2_eth_write_txconf_cpumask),
4497 + __ATTR(tx_shaping,
4498 + S_IRUSR | S_IWUSR,
4499 + dpaa2_eth_show_tx_shaping,
4500 + dpaa2_eth_write_tx_shaping),
4503 +void dpaa2_eth_sysfs_init(struct device *dev)
4507 + for (i = 0; i < ARRAY_SIZE(dpaa2_eth_attrs); i++) {
4508 + err = device_create_file(dev, &dpaa2_eth_attrs[i]);
4510 + dev_err(dev, "ERROR creating sysfs file\n");
4518 + device_remove_file(dev, &dpaa2_eth_attrs[--i]);
4521 +void dpaa2_eth_sysfs_remove(struct device *dev)
4525 + for (i = 0; i < ARRAY_SIZE(dpaa2_eth_attrs); i++)
4526 + device_remove_file(dev, &dpaa2_eth_attrs[i]);
4529 +static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev)
4531 + struct device *dev;
4532 + struct net_device *net_dev = NULL;
4533 + struct dpaa2_eth_priv *priv = NULL;
4536 + dev = &dpni_dev->dev;
4539 + net_dev = alloc_etherdev_mq(sizeof(*priv), DPAA2_ETH_MAX_TX_QUEUES);
4541 + dev_err(dev, "alloc_etherdev_mq() failed\n");
4545 + SET_NETDEV_DEV(net_dev, dev);
4546 + dev_set_drvdata(dev, net_dev);
4548 + priv = netdev_priv(net_dev);
4549 + priv->net_dev = net_dev;
4550 + priv->msg_enable = netif_msg_init(debug, -1);
4552 + /* Obtain a MC portal */
4553 + err = fsl_mc_portal_allocate(dpni_dev, FSL_MC_IO_ATOMIC_CONTEXT_PORTAL,
4556 + dev_err(dev, "MC portal allocation failed\n");
4557 + goto err_portal_alloc;
4560 +#ifndef CONFIG_FSL_DPAA2_ETH_LINK_POLL
4561 + err = fsl_mc_allocate_irqs(dpni_dev);
4563 + dev_err(dev, "MC irqs allocation failed\n");
4564 + goto err_irqs_alloc;
4568 + /* DPNI initialization */
4569 + err = dpaa2_dpni_setup(dpni_dev);
4571 + goto err_dpni_setup;
4574 + err = dpaa2_dpio_setup(priv);
4576 + goto err_dpio_setup;
4579 + dpaa2_eth_setup_fqs(priv);
4580 + dpaa2_set_fq_affinity(priv);
4583 + err = dpaa2_dpbp_setup(priv);
4585 + goto err_dpbp_setup;
4587 + /* DPNI binding to DPIO and DPBPs */
4588 + err = dpaa2_dpni_bind(priv);
4592 + dpaa2_eth_napi_add(priv);
4594 + /* Percpu statistics */
4595 + priv->percpu_stats = alloc_percpu(*priv->percpu_stats);
4596 + if (!priv->percpu_stats) {
4597 + dev_err(dev, "alloc_percpu(percpu_stats) failed\n");
4599 + goto err_alloc_percpu_stats;
4601 + priv->percpu_extras = alloc_percpu(*priv->percpu_extras);
4602 + if (!priv->percpu_extras) {
4603 + dev_err(dev, "alloc_percpu(percpu_extras) failed\n");
4605 + goto err_alloc_percpu_extras;
4608 + snprintf(net_dev->name, IFNAMSIZ, "ni%d", dpni_dev->obj_desc.id);
4609 + if (!dev_valid_name(net_dev->name)) {
4610 + dev_warn(&net_dev->dev,
4611 +			 "netdevice name \"%s\" cannot be used, reverting to default\n",
4613 + dev_alloc_name(net_dev, "eth%d");
4614 + dev_warn(&net_dev->dev, "using name \"%s\"\n", net_dev->name);
4617 + err = dpaa2_eth_netdev_init(net_dev);
4619 + goto err_netdev_init;
4621 + /* Configure checksum offload based on current interface flags */
4622 + err = dpaa2_eth_set_rx_csum(priv,
4623 + !!(net_dev->features & NETIF_F_RXCSUM));
4627 + err = dpaa2_eth_set_tx_csum(priv,
4628 + !!(net_dev->features &
4629 + (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)));
4633 + err = dpaa2_eth_alloc_rings(priv);
4635 + goto err_alloc_rings;
4637 + net_dev->ethtool_ops = &dpaa2_ethtool_ops;
4639 +#ifdef CONFIG_FSL_DPAA2_ETH_LINK_POLL
4640 + priv->poll_thread = kthread_run(dpaa2_poll_link_state, priv,
4641 + "%s_poll_link", net_dev->name);
4643 + err = dpaa2_eth_setup_irqs(dpni_dev);
4645 +		netdev_err(net_dev, "ERROR %d setting up interrupts\n", err);
4646 + goto err_setup_irqs;
4650 + dpaa2_eth_sysfs_init(&net_dev->dev);
4651 + dpaa2_dbg_add(priv);
4653 + dev_info(dev, "Probed interface %s\n", net_dev->name);
4656 +#ifndef CONFIG_FSL_DPAA2_ETH_LINK_POLL
4659 + dpaa2_eth_free_rings(priv);
4662 + unregister_netdev(net_dev);
4664 + free_percpu(priv->percpu_extras);
4665 +err_alloc_percpu_extras:
4666 + free_percpu(priv->percpu_stats);
4667 +err_alloc_percpu_stats:
4668 + dpaa2_eth_napi_del(priv);
4670 + dpaa2_dpbp_free(priv);
4672 + dpaa2_dpio_free(priv);
4674 + kfree(priv->cls_rule);
4675 + dpni_close(priv->mc_io, 0, priv->mc_token);
4677 +#ifndef CONFIG_FSL_DPAA2_ETH_LINK_POLL
4678 + fsl_mc_free_irqs(dpni_dev);
4681 + fsl_mc_portal_free(priv->mc_io);
4683 + dev_set_drvdata(dev, NULL);
4684 + free_netdev(net_dev);
4689 +static int dpaa2_eth_remove(struct fsl_mc_device *ls_dev)
4691 + struct device *dev;
4692 + struct net_device *net_dev;
4693 + struct dpaa2_eth_priv *priv;
4695 + dev = &ls_dev->dev;
4696 + net_dev = dev_get_drvdata(dev);
4697 + priv = netdev_priv(net_dev);
4699 + dpaa2_dbg_remove(priv);
4700 + dpaa2_eth_sysfs_remove(&net_dev->dev);
4702 + unregister_netdev(net_dev);
4703 + dev_info(net_dev->dev.parent, "Removed interface %s\n", net_dev->name);
4705 + dpaa2_dpio_free(priv);
4706 + dpaa2_eth_free_rings(priv);
4707 + dpaa2_eth_napi_del(priv);
4708 + dpaa2_dpbp_free(priv);
4709 + dpaa2_dpni_free(priv);
4711 + fsl_mc_portal_free(priv->mc_io);
4713 + free_percpu(priv->percpu_stats);
4714 + free_percpu(priv->percpu_extras);
4716 +#ifdef CONFIG_FSL_DPAA2_ETH_LINK_POLL
4717 + kthread_stop(priv->poll_thread);
4719 + fsl_mc_free_irqs(ls_dev);
4722 + kfree(priv->cls_rule);
4724 + dev_set_drvdata(dev, NULL);
4725 + free_netdev(net_dev);
4730 +static const struct fsl_mc_device_match_id dpaa2_eth_match_id_table[] = {
4732 + .vendor = FSL_MC_VENDOR_FREESCALE,
4733 + .obj_type = "dpni",
4734 + .ver_major = DPNI_VER_MAJOR,
4735 + .ver_minor = DPNI_VER_MINOR
4740 +static struct fsl_mc_driver dpaa2_eth_driver = {
4742 + .name = KBUILD_MODNAME,
4743 + .owner = THIS_MODULE,
4745 + .probe = dpaa2_eth_probe,
4746 + .remove = dpaa2_eth_remove,
4747 + .match_id_table = dpaa2_eth_match_id_table
4750 +static int __init dpaa2_eth_driver_init(void)
4754 + dpaa2_eth_dbg_init();
4756 + err = fsl_mc_driver_register(&dpaa2_eth_driver);
4758 + dpaa2_eth_dbg_exit();
4765 +static void __exit dpaa2_eth_driver_exit(void)
4767 + fsl_mc_driver_unregister(&dpaa2_eth_driver);
4768 + dpaa2_eth_dbg_exit();
4771 +module_init(dpaa2_eth_driver_init);
4772 +module_exit(dpaa2_eth_driver_exit);
4774 +++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.h
4776 +/* Copyright 2014-2015 Freescale Semiconductor Inc.
4778 + * Redistribution and use in source and binary forms, with or without
4779 + * modification, are permitted provided that the following conditions are met:
4780 + * * Redistributions of source code must retain the above copyright
4781 + * notice, this list of conditions and the following disclaimer.
4782 + * * Redistributions in binary form must reproduce the above copyright
4783 + * notice, this list of conditions and the following disclaimer in the
4784 + * documentation and/or other materials provided with the distribution.
4785 + * * Neither the name of Freescale Semiconductor nor the
4786 + * names of its contributors may be used to endorse or promote products
4787 + * derived from this software without specific prior written permission.
4790 + * ALTERNATIVELY, this software may be distributed under the terms of the
4791 + * GNU General Public License ("GPL") as published by the Free Software
4792 + * Foundation, either version 2 of that License or (at your option) any
4795 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
4796 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
4797 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
4798 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
4799 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
4800 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
4801 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
4802 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
4803 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
4804 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
4807 +#ifndef __DPAA2_ETH_H
4808 +#define __DPAA2_ETH_H
4810 +#include <linux/netdevice.h>
4811 +#include <linux/if_vlan.h>
4812 +#include "../../fsl-mc/include/fsl_dpaa2_io.h"
4813 +#include "../../fsl-mc/include/fsl_dpaa2_fd.h"
4814 +#include "../../fsl-mc/include/dpbp.h"
4815 +#include "../../fsl-mc/include/dpbp-cmd.h"
4816 +#include "../../fsl-mc/include/dpcon.h"
4817 +#include "../../fsl-mc/include/dpcon-cmd.h"
4818 +#include "../../fsl-mc/include/dpmng.h"
4820 +#include "dpni-cmd.h"
4822 +#include "dpaa2-eth-trace.h"
4823 +#include "dpaa2-eth-debugfs.h"
4825 +#define DPAA2_ETH_STORE_SIZE 16
4827 +/* Maximum receive frame size is 64K */
4828 +#define DPAA2_ETH_MAX_SG_ENTRIES ((64 * 1024) / DPAA2_ETH_RX_BUFFER_SIZE)
4830 +/* Maximum acceptable MTU value. It is directly related to the MC-enforced
4831 + * Max Frame Length (currently 10k).
4833 +#define DPAA2_ETH_MFL (10 * 1024)
4834 +#define DPAA2_ETH_MAX_MTU (DPAA2_ETH_MFL - VLAN_ETH_HLEN)
4835 +/* Convert L3 MTU to L2 MFL */
4836 +#define DPAA2_ETH_L2_MAX_FRM(mtu) (mtu + VLAN_ETH_HLEN)
4838 +/* Set the taildrop threshold (in bytes) to allow the enqueue of several jumbo
4839 + * frames in the Rx queues (length of the current frame is not
4840 + * taken into account when making the taildrop decision)
4842 +#define DPAA2_ETH_TAILDROP_THRESH (64 * 1024)
4844 +/* Buffer quota per queue. Must be large enough that, for minimum-sized
4845 + * frames, taildrop kicks in before the bpool gets depleted, so we compute
4846 + * how many 64B frames fit inside the taildrop threshold and add a margin
4847 + * to accommodate the buffer refill delay.
4849 +#define DPAA2_ETH_MAX_FRAMES_PER_QUEUE (DPAA2_ETH_TAILDROP_THRESH / 64)
4850 +#define DPAA2_ETH_NUM_BUFS (DPAA2_ETH_MAX_FRAMES_PER_QUEUE + 256)
4851 +#define DPAA2_ETH_REFILL_THRESH DPAA2_ETH_MAX_FRAMES_PER_QUEUE
4853 +/* Hardware requires alignment for ingress/egress buffer addresses
4854 + * and ingress buffer lengths.
4856 +#define DPAA2_ETH_RX_BUFFER_SIZE 2048
4857 +#define DPAA2_ETH_TX_BUF_ALIGN 64
4858 +#define DPAA2_ETH_RX_BUF_ALIGN 256
4859 +#define DPAA2_ETH_NEEDED_HEADROOM(p_priv) \
4860 + ((p_priv)->tx_data_offset + DPAA2_ETH_TX_BUF_ALIGN)
4862 +#define DPAA2_ETH_BUF_RAW_SIZE \
4863 + (DPAA2_ETH_RX_BUFFER_SIZE + \
4864 + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) + \
4865 + DPAA2_ETH_RX_BUF_ALIGN)
4867 +/* PTP nominal frequency 1MHz */
4868 +#define DPAA2_PTP_NOMINAL_FREQ_PERIOD_NS 1000
4870 +/* We are accommodating a skb backpointer and some S/G info
4871 + * in the frame's software annotation. The hardware
4872 + * options are either 0 or 64, so we choose the latter.
4874 +#define DPAA2_ETH_SWA_SIZE 64
4876 +/* Must keep this struct smaller than DPAA2_ETH_SWA_SIZE */
4877 +struct dpaa2_eth_swa {
4878 + struct sk_buff *skb;
4879 + struct scatterlist *scl;
4884 +/* Annotation valid bits in FD FRC */
4885 +#define DPAA2_FD_FRC_FASV 0x8000
4886 +#define DPAA2_FD_FRC_FAEADV 0x4000
4887 +#define DPAA2_FD_FRC_FAPRV 0x2000
4888 +#define DPAA2_FD_FRC_FAIADV 0x1000
4889 +#define DPAA2_FD_FRC_FASWOV 0x0800
4890 +#define DPAA2_FD_FRC_FAICFDV 0x0400
4892 +/* Annotation bits in FD CTRL */
4893 +#define DPAA2_FD_CTRL_ASAL 0x00020000 /* ASAL = 128 */
4894 +#define DPAA2_FD_CTRL_PTA 0x00800000
4895 +#define DPAA2_FD_CTRL_PTV1 0x00400000
4897 +/* Frame annotation status */
4906 +/* Debug frame; would otherwise be discarded */
4906 +#define DPAA2_ETH_FAS_DISC 0x80000000
4908 +#define DPAA2_ETH_FAS_MS 0x40000000
4909 +#define DPAA2_ETH_FAS_PTP 0x08000000
4910 +/* Ethernet multicast frame */
4911 +#define DPAA2_ETH_FAS_MC 0x04000000
4912 +/* Ethernet broadcast frame */
4913 +#define DPAA2_ETH_FAS_BC 0x02000000
4914 +#define DPAA2_ETH_FAS_KSE 0x00040000
4915 +#define DPAA2_ETH_FAS_EOFHE 0x00020000
4916 +#define DPAA2_ETH_FAS_MNLE 0x00010000
4917 +#define DPAA2_ETH_FAS_TIDE 0x00008000
4918 +#define DPAA2_ETH_FAS_PIEE 0x00004000
4919 +/* Frame length error */
4920 +#define DPAA2_ETH_FAS_FLE 0x00002000
4922 +/* Frame physical error */
4922 +#define DPAA2_ETH_FAS_FPE 0x00001000
4923 +#define DPAA2_ETH_FAS_PTE 0x00000080
4924 +#define DPAA2_ETH_FAS_ISP 0x00000040
4925 +#define DPAA2_ETH_FAS_PHE 0x00000020
4926 +#define DPAA2_ETH_FAS_BLE 0x00000010
4927 +/* L3 csum validation performed */
4928 +#define DPAA2_ETH_FAS_L3CV 0x00000008
4929 +/* L3 csum error */
4930 +#define DPAA2_ETH_FAS_L3CE 0x00000004
4931 +/* L4 csum validation performed */
4932 +#define DPAA2_ETH_FAS_L4CV 0x00000002
4933 +/* L4 csum error */
4934 +#define DPAA2_ETH_FAS_L4CE 0x00000001
4935 +/* These bits always signal errors */
4936 +#define DPAA2_ETH_RX_ERR_MASK (DPAA2_ETH_FAS_KSE | \
4937 + DPAA2_ETH_FAS_EOFHE | \
4938 + DPAA2_ETH_FAS_MNLE | \
4939 + DPAA2_ETH_FAS_TIDE | \
4940 + DPAA2_ETH_FAS_PIEE | \
4941 + DPAA2_ETH_FAS_FLE | \
4942 + DPAA2_ETH_FAS_FPE | \
4943 + DPAA2_ETH_FAS_PTE | \
4944 + DPAA2_ETH_FAS_ISP | \
4945 + DPAA2_ETH_FAS_PHE | \
4946 + DPAA2_ETH_FAS_BLE | \
4947 + DPAA2_ETH_FAS_L3CE | \
4948 + DPAA2_ETH_FAS_L4CE)
4949 +/* Unsupported features in the ingress */
4950 +#define DPAA2_ETH_RX_UNSUPP_MASK DPAA2_ETH_FAS_MS
4952 +#define DPAA2_ETH_TXCONF_ERR_MASK (DPAA2_ETH_FAS_KSE | \
4953 + DPAA2_ETH_FAS_EOFHE | \
4954 + DPAA2_ETH_FAS_MNLE | \
4955 + DPAA2_ETH_FAS_TIDE)
4957 +/* Time in milliseconds between link state updates */
4958 +#define DPAA2_ETH_LINK_STATE_REFRESH 1000
4960 +/* Driver statistics, other than those in struct rtnl_link_stats64.
4961 + * These are usually collected per-CPU and aggregated by ethtool.
4963 +struct dpaa2_eth_stats {
4964 + __u64 tx_conf_frames;
4965 + __u64 tx_conf_bytes;
4966 + __u64 tx_sg_frames;
4967 + __u64 tx_sg_bytes;
4968 + __u64 rx_sg_frames;
4969 + __u64 rx_sg_bytes;
4970 + /* Enqueues retried due to portal busy */
4971 + __u64 tx_portal_busy;
4974 +/* Per-FQ statistics */
4975 +struct dpaa2_eth_fq_stats {
4976 + /* Number of frames received on this queue */
4980 +/* Per-channel statistics */
4981 +struct dpaa2_eth_ch_stats {
4982 + /* Volatile dequeues retried due to portal busy */
4983 + __u64 dequeue_portal_busy;
4984 + /* Number of CDANs; useful to estimate avg NAPI len */
4986 + /* Number of frames received on queues from this channel */
4990 +/* Maximum number of Rx queues associated with a DPNI */
4991 +#define DPAA2_ETH_MAX_RX_QUEUES 16
4992 +#define DPAA2_ETH_MAX_TX_QUEUES NR_CPUS
4993 +#define DPAA2_ETH_MAX_RX_ERR_QUEUES 1
4994 +#define DPAA2_ETH_MAX_QUEUES (DPAA2_ETH_MAX_RX_QUEUES + \
4995 + DPAA2_ETH_MAX_TX_QUEUES + \
4996 + DPAA2_ETH_MAX_RX_ERR_QUEUES)
4998 +#define DPAA2_ETH_MAX_DPCONS NR_CPUS
5000 +enum dpaa2_eth_fq_type {
5006 +struct dpaa2_eth_priv;
5008 +struct dpaa2_eth_fq {
5012 + struct dpaa2_eth_channel *channel;
5013 + enum dpaa2_eth_fq_type type;
5015 + void (*consume)(struct dpaa2_eth_priv *,
5016 + struct dpaa2_eth_channel *,
5017 + const struct dpaa2_fd *,
5018 + struct napi_struct *);
5019 + struct dpaa2_eth_priv *netdev_priv; /* backpointer */
5020 + struct dpaa2_eth_fq_stats stats;
5023 +struct dpaa2_eth_channel {
5024 + struct dpaa2_io_notification_ctx nctx;
5025 + struct fsl_mc_device *dpcon;
5029 + struct napi_struct napi;
5030 + struct dpaa2_io_store *store;
5031 + struct dpaa2_eth_priv *priv;
5033 + struct dpaa2_eth_ch_stats stats;
5036 +struct dpaa2_cls_rule {
5037 + struct ethtool_rx_flow_spec fs;
5041 +struct dpaa2_eth_priv {
5042 + struct net_device *net_dev;
5045 + /* First queue is tx conf, the rest are rx */
5046 + struct dpaa2_eth_fq fq[DPAA2_ETH_MAX_QUEUES];
5049 + struct dpaa2_eth_channel *channel[DPAA2_ETH_MAX_DPCONS];
5052 + struct dpni_attr dpni_attrs;
5053 + struct dpni_extended_cfg dpni_ext_cfg;
5054 +	/* As far as the MC is concerned, we use a single buffer layout for
5055 +	 * all three buffer types (Rx, Tx, Tx-Conf).
5057 + struct dpni_buffer_layout buf_layout;
5058 + u16 tx_data_offset;
5060 + struct fsl_mc_device *dpbp_dev;
5061 + struct dpbp_attr dpbp_attrs;
5064 + struct fsl_mc_io *mc_io;
5065 + /* SysFS-controlled affinity mask for TxConf FQs */
5066 + struct cpumask txconf_cpumask;
5067 + /* Cores which have an affine DPIO/DPCON.
5068 + * This is the cpu set on which Rx frames are processed;
5069 + * Tx confirmation frames are processed on a subset of this,
5070 + * depending on user settings.
5072 + struct cpumask dpio_cpumask;
5074 + /* Standard statistics */
5075 + struct rtnl_link_stats64 __percpu *percpu_stats;
5076 + /* Extra stats, in addition to the ones known by the kernel */
5077 + struct dpaa2_eth_stats __percpu *percpu_extras;
5078 + u32 msg_enable; /* net_device message level */
5082 + struct dpni_link_state link_state;
5083 + struct task_struct *poll_thread;
5085 + /* enabled ethtool hashing bits */
5086 + u64 rx_hash_fields;
5088 +#ifdef CONFIG_FSL_DPAA2_ETH_DEBUGFS
5089 + struct dpaa2_debugfs dbg;
5092 + /* array of classification rules */
5093 + struct dpaa2_cls_rule *cls_rule;
5095 + struct dpni_tx_shaping_cfg shaping_cfg;
5097 + bool ts_tx_en; /* Tx timestamping enabled */
5098 + bool ts_rx_en; /* Rx timestamping enabled */
5101 +/* default Rx hash options, set during probing */
5102 +#define DPAA2_RXH_SUPPORTED (RXH_L2DA | RXH_VLAN | RXH_L3_PROTO \
5103 + | RXH_IP_SRC | RXH_IP_DST | RXH_L4_B_0_1 \
5106 +#define dpaa2_eth_hash_enabled(priv) \
5107 + ((priv)->dpni_attrs.options & DPNI_OPT_DIST_HASH)
5109 +#define dpaa2_eth_fs_enabled(priv) \
5110 + ((priv)->dpni_attrs.options & DPNI_OPT_DIST_FS)
5112 +#define DPAA2_CLASSIFIER_ENTRY_COUNT 16
5114 +/* Required by struct dpni_attr::ext_cfg_iova */
5115 +#define DPAA2_EXT_CFG_SIZE 256
5117 +extern const struct ethtool_ops dpaa2_ethtool_ops;
5119 +int dpaa2_set_hash(struct net_device *net_dev, u64 flags);
5121 +static inline int dpaa2_queue_count(struct dpaa2_eth_priv *priv)
5123 + if (!dpaa2_eth_hash_enabled(priv))
5126 + return priv->dpni_ext_cfg.tc_cfg[0].max_dist;
5129 +static inline int dpaa2_max_channels(struct dpaa2_eth_priv *priv)
5131 + /* Ideally, we want a number of channels large enough
5132 + * to accommodate both the Rx distribution size
5133 + * and the max number of Tx confirmation queues
5135 + return max_t(int, dpaa2_queue_count(priv),
5136 + priv->dpni_attrs.max_senders);
5139 +void dpaa2_cls_check(struct net_device *);
5141 +#endif /* __DPAA2_ETH_H */
5143 +++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-ethtool.c
5145 +/* Copyright 2014-2015 Freescale Semiconductor Inc.
5147 + * Redistribution and use in source and binary forms, with or without
5148 + * modification, are permitted provided that the following conditions are met:
5149 + * * Redistributions of source code must retain the above copyright
5150 + * notice, this list of conditions and the following disclaimer.
5151 + * * Redistributions in binary form must reproduce the above copyright
5152 + * notice, this list of conditions and the following disclaimer in the
5153 + * documentation and/or other materials provided with the distribution.
5154 + * * Neither the name of Freescale Semiconductor nor the
5155 + * names of its contributors may be used to endorse or promote products
5156 + * derived from this software without specific prior written permission.
5159 + * ALTERNATIVELY, this software may be distributed under the terms of the
5160 + * GNU General Public License ("GPL") as published by the Free Software
5161 + * Foundation, either version 2 of that License or (at your option) any
5164 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
5165 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
5166 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
5167 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
5168 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
5169 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
5170 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
5171 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
5172 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
5173 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
5176 +#include "dpni.h" /* DPNI_LINK_OPT_* */
5177 +#include "dpaa2-eth.h"
5179 +/* size of DMA memory used to pass configuration to classifier, in bytes */
5180 +#define DPAA2_CLASSIFIER_DMA_SIZE 256
5182 +/* To be kept in sync with 'enum dpni_counter' */
5183 +char dpaa2_ethtool_stats[][ETH_GSTRING_LEN] = {
5186 + "rx frames dropped",
5188 + "rx mcast frames",
5190 + "rx bcast frames",
5197 +#define DPAA2_ETH_NUM_STATS ARRAY_SIZE(dpaa2_ethtool_stats)
5199 +/* To be kept in sync with 'struct dpaa2_eth_stats' */
5200 +char dpaa2_ethtool_extras[][ETH_GSTRING_LEN] = {
5201 + /* per-cpu stats */
5209 +	/* How many times we had to retry the enqueue command */
5212 + /* Channel stats */
5214 + /* How many times we had to retry the volatile dequeue command */
5216 + /* Number of notifications received */
5218 +#ifdef CONFIG_FSL_QBMAN_DEBUG
5220 + "rx pending frames",
5221 + "rx pending bytes",
5222 + "tx conf pending frames",
5223 + "tx conf pending bytes",
5228 +#define DPAA2_ETH_NUM_EXTRA_STATS ARRAY_SIZE(dpaa2_ethtool_extras)
5230 +static void dpaa2_get_drvinfo(struct net_device *net_dev,
5231 + struct ethtool_drvinfo *drvinfo)
5233 + struct mc_version mc_ver;
5234 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5235 + char fw_version[ETHTOOL_FWVERS_LEN];
5239 + err = mc_get_version(priv->mc_io, 0, &mc_ver);
5241 + strlcpy(drvinfo->fw_version, "Error retrieving MC version",
5242 + sizeof(drvinfo->fw_version));
5244 + scnprintf(fw_version, sizeof(fw_version), "%d.%d.%d",
5245 + mc_ver.major, mc_ver.minor, mc_ver.revision);
5246 + strlcpy(drvinfo->fw_version, fw_version,
5247 + sizeof(drvinfo->fw_version));
5250 + scnprintf(version, sizeof(version), "%d.%d", DPNI_VER_MAJOR,
5252 + strlcpy(drvinfo->version, version, sizeof(drvinfo->version));
5254 + strlcpy(drvinfo->driver, KBUILD_MODNAME, sizeof(drvinfo->driver));
5255 + strlcpy(drvinfo->bus_info, dev_name(net_dev->dev.parent->parent),
5256 + sizeof(drvinfo->bus_info));
5259 +static u32 dpaa2_get_msglevel(struct net_device *net_dev)
5261 + return ((struct dpaa2_eth_priv *)netdev_priv(net_dev))->msg_enable;
5264 +static void dpaa2_set_msglevel(struct net_device *net_dev,
5267 + ((struct dpaa2_eth_priv *)netdev_priv(net_dev))->msg_enable =
5271 +static int dpaa2_get_settings(struct net_device *net_dev,
5272 + struct ethtool_cmd *cmd)
5274 + struct dpni_link_state state = {0};
5276 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5278 + err = dpni_get_link_state(priv->mc_io, 0, priv->mc_token, &state);
5280 + netdev_err(net_dev, "ERROR %d getting link state", err);
5284 + /* At the moment, we have no way of interrogating the DPMAC
5285 + * from the DPNI side - and for that matter there may exist
5286 + * no DPMAC at all. So for now we just don't report anything
5287 + * beyond the DPNI attributes.
5289 + if (state.options & DPNI_LINK_OPT_AUTONEG)
5290 + cmd->autoneg = AUTONEG_ENABLE;
5291 + if (!(state.options & DPNI_LINK_OPT_HALF_DUPLEX))
5292 + cmd->duplex = DUPLEX_FULL;
5293 + ethtool_cmd_speed_set(cmd, state.rate);
5299 +static int dpaa2_set_settings(struct net_device *net_dev,
5300 + struct ethtool_cmd *cmd)
5302 + struct dpni_link_cfg cfg = {0};
5303 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5306 + netdev_dbg(net_dev, "Setting link parameters...");
5308 +	/* Due to a temporary firmware limitation, the DPNI must be down
5309 +	 * before its link settings can be changed, so let the user know
5310 +	 * about it.
5312 + if (netif_running(net_dev)) {
5313 + netdev_info(net_dev, "Sorry, interface must be brought down first.\n");
5317 + cfg.rate = ethtool_cmd_speed(cmd);
5318 + if (cmd->autoneg == AUTONEG_ENABLE)
5319 + cfg.options |= DPNI_LINK_OPT_AUTONEG;
5321 + cfg.options &= ~DPNI_LINK_OPT_AUTONEG;
5322 + if (cmd->duplex == DUPLEX_HALF)
5323 + cfg.options |= DPNI_LINK_OPT_HALF_DUPLEX;
5325 + cfg.options &= ~DPNI_LINK_OPT_HALF_DUPLEX;
5327 + err = dpni_set_link_cfg(priv->mc_io, 0, priv->mc_token, &cfg);
5329 + /* ethtool will be loud enough if we return an error; no point
5330 + * in putting our own error message on the console by default
5332 + netdev_dbg(net_dev, "ERROR %d setting link cfg", err);
5337 +static void dpaa2_get_strings(struct net_device *netdev, u32 stringset,
5343 + switch (stringset) {
5344 + case ETH_SS_STATS:
5345 + for (i = 0; i < DPAA2_ETH_NUM_STATS; i++) {
5346 + strlcpy(p, dpaa2_ethtool_stats[i], ETH_GSTRING_LEN);
5347 + p += ETH_GSTRING_LEN;
5349 + for (i = 0; i < DPAA2_ETH_NUM_EXTRA_STATS; i++) {
5350 + strlcpy(p, dpaa2_ethtool_extras[i], ETH_GSTRING_LEN);
5351 + p += ETH_GSTRING_LEN;
5357 +static int dpaa2_get_sset_count(struct net_device *net_dev, int sset)
5360 + case ETH_SS_STATS: /* ethtool_get_stats(), ethtool_get_drvinfo() */
5361 + return DPAA2_ETH_NUM_STATS + DPAA2_ETH_NUM_EXTRA_STATS;
5363 + return -EOPNOTSUPP;
5367 +/* Fill in hardware counters, as returned by the MC firmware. */
5369 +static void dpaa2_get_ethtool_stats(struct net_device *net_dev,
5370 + struct ethtool_stats *stats,
5373 + int i; /* Current index in the data array */
5376 +#ifdef CONFIG_FSL_QBMAN_DEBUG
5378 + u32 fcnt_rx_total = 0, fcnt_tx_total = 0;
5379 + u32 bcnt_rx_total = 0, bcnt_tx_total = 0;
5383 + u64 portal_busy = 0;
5384 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5385 + struct dpaa2_eth_stats *extras;
5386 + struct dpaa2_eth_ch_stats *ch_stats;
5389 + sizeof(u64) * (DPAA2_ETH_NUM_STATS + DPAA2_ETH_NUM_EXTRA_STATS));
5391 + /* Print standard counters, from DPNI statistics */
5392 + for (i = 0; i < DPAA2_ETH_NUM_STATS; i++) {
5393 + err = dpni_get_counter(priv->mc_io, 0, priv->mc_token, i,
5396 + netdev_warn(net_dev, "Err %d getting DPNI counter %d",
5400 + /* Print per-cpu extra stats */
5401 + for_each_online_cpu(k) {
5402 + extras = per_cpu_ptr(priv->percpu_extras, k);
5403 + for (j = 0; j < sizeof(*extras) / sizeof(__u64); j++)
5404 + *((__u64 *)data + i + j) += *((__u64 *)extras + j);
5408 + /* We may be using fewer DPIOs than actual CPUs */
5409 + for_each_cpu(j, &priv->dpio_cpumask) {
5410 + ch_stats = &priv->channel[j]->stats;
5411 + cdan += ch_stats->cdan;
5412 + portal_busy += ch_stats->dequeue_portal_busy;
5415 + *(data + i++) = portal_busy;
5416 + *(data + i++) = cdan;
5418 +#ifdef CONFIG_FSL_QBMAN_DEBUG
5419 + for (j = 0; j < priv->num_fqs; j++) {
5420 + /* Print FQ instantaneous counts */
5421 + err = dpaa2_io_query_fq_count(NULL, priv->fq[j].fqid,
5424 + netdev_warn(net_dev, "FQ query error %d", err);
5428 + if (priv->fq[j].type == DPAA2_TX_CONF_FQ) {
5429 + fcnt_tx_total += fcnt;
5430 + bcnt_tx_total += bcnt;
5432 + fcnt_rx_total += fcnt;
5433 + bcnt_rx_total += bcnt;
5436 + *(data + i++) = fcnt_rx_total;
5437 + *(data + i++) = bcnt_rx_total;
5438 + *(data + i++) = fcnt_tx_total;
5439 + *(data + i++) = bcnt_tx_total;
5441 + err = dpaa2_io_query_bp_count(NULL, priv->dpbp_attrs.bpid, &buf_cnt);
5443 + netdev_warn(net_dev, "Buffer count query error %d\n", err);
5446 + *(data + i++) = buf_cnt;
5450 +static const struct dpaa2_hash_fields {
5452 + enum net_prot cls_prot;
5455 +} dpaa2_hash_fields[] = {
5458 + .rxnfc_field = RXH_L2DA,
5459 + .cls_prot = NET_PROT_ETH,
5460 + .cls_field = NH_FLD_ETH_DA,
5464 + .rxnfc_field = RXH_VLAN,
5465 + .cls_prot = NET_PROT_VLAN,
5466 + .cls_field = NH_FLD_VLAN_TCI,
5470 + .rxnfc_field = RXH_IP_SRC,
5471 + .cls_prot = NET_PROT_IP,
5472 + .cls_field = NH_FLD_IP_SRC,
5475 + .rxnfc_field = RXH_IP_DST,
5476 + .cls_prot = NET_PROT_IP,
5477 + .cls_field = NH_FLD_IP_DST,
5480 + .rxnfc_field = RXH_L3_PROTO,
5481 + .cls_prot = NET_PROT_IP,
5482 + .cls_field = NH_FLD_IP_PROTO,
5485 + /* Using UDP ports, this is functionally equivalent to raw
5486 +	 * byte pairs from the L4 header.
5488 + .rxnfc_field = RXH_L4_B_0_1,
5489 + .cls_prot = NET_PROT_UDP,
5490 + .cls_field = NH_FLD_UDP_PORT_SRC,
5493 + .rxnfc_field = RXH_L4_B_2_3,
5494 + .cls_prot = NET_PROT_UDP,
5495 + .cls_field = NH_FLD_UDP_PORT_DST,
5500 +static int dpaa2_cls_is_enabled(struct net_device *net_dev, u64 flag)
5502 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5504 + return !!(priv->rx_hash_fields & flag);
5507 +static int dpaa2_cls_key_off(struct net_device *net_dev, u64 flag)
5511 + for (i = 0; i < ARRAY_SIZE(dpaa2_hash_fields); i++) {
5512 + if (dpaa2_hash_fields[i].rxnfc_field & flag)
5514 + if (dpaa2_cls_is_enabled(net_dev,
5515 + dpaa2_hash_fields[i].rxnfc_field))
5516 + off += dpaa2_hash_fields[i].size;
5522 +static u8 dpaa2_cls_key_size(struct net_device *net_dev)
5526 + for (i = 0; i < ARRAY_SIZE(dpaa2_hash_fields); i++) {
5527 + if (!dpaa2_cls_is_enabled(net_dev,
5528 + dpaa2_hash_fields[i].rxnfc_field))
5530 + size += dpaa2_hash_fields[i].size;
5536 +static u8 dpaa2_cls_max_key_size(struct net_device *net_dev)
5540 + for (i = 0; i < ARRAY_SIZE(dpaa2_hash_fields); i++)
5541 + size += dpaa2_hash_fields[i].size;
5546 +void dpaa2_cls_check(struct net_device *net_dev)
5548 + u8 key_size = dpaa2_cls_max_key_size(net_dev);
5549 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5551 + if (priv->dpni_attrs.options & DPNI_OPT_DIST_FS &&
5552 + priv->dpni_attrs.max_dist_key_size < key_size) {
5553 + dev_err(&net_dev->dev,
5554 + "max_dist_key_size = %d, expected %d. Steering is disabled\n",
5555 + priv->dpni_attrs.max_dist_key_size,
5557 + priv->dpni_attrs.options &= ~DPNI_OPT_DIST_FS;
5561 +/* Set RX hash options
5562 + * flags is a combination of RXH_ bits
5564 +int dpaa2_set_hash(struct net_device *net_dev, u64 flags)
5566 + struct device *dev = net_dev->dev.parent;
5567 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5568 + struct dpkg_profile_cfg cls_cfg;
5569 + struct dpni_rx_tc_dist_cfg dist_cfg;
5571 + u64 enabled_flags = 0;
5575 + if (!dpaa2_eth_hash_enabled(priv)) {
5576 + dev_err(dev, "Hashing support is not enabled\n");
5577 + return -EOPNOTSUPP;
5580 + if (flags & ~DPAA2_RXH_SUPPORTED) {
5581 + /* RXH_DISCARD is not supported */
5582 + dev_err(dev, "unsupported option selected, supported options are: mvtsdfn\n");
5583 + return -EOPNOTSUPP;
5586 + memset(&cls_cfg, 0, sizeof(cls_cfg));
5588 + for (i = 0; i < ARRAY_SIZE(dpaa2_hash_fields); i++) {
5589 + struct dpkg_extract *key =
5590 + &cls_cfg.extracts[cls_cfg.num_extracts];
5592 + if (!(flags & dpaa2_hash_fields[i].rxnfc_field))
5595 + if (cls_cfg.num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
5596 + dev_err(dev, "error adding key extraction rule, too many rules?\n");
5600 + key->type = DPKG_EXTRACT_FROM_HDR;
5601 + key->extract.from_hdr.prot =
5602 + dpaa2_hash_fields[i].cls_prot;
5603 + key->extract.from_hdr.type = DPKG_FULL_FIELD;
5604 + key->extract.from_hdr.field =
5605 + dpaa2_hash_fields[i].cls_field;
5606 + cls_cfg.num_extracts++;
5608 + enabled_flags |= dpaa2_hash_fields[i].rxnfc_field;
5611 + dma_mem = kzalloc(DPAA2_CLASSIFIER_DMA_SIZE, GFP_DMA | GFP_KERNEL);
5615 + err = dpni_prepare_key_cfg(&cls_cfg, dma_mem);
5617 + dev_err(dev, "dpni_prepare_key_cfg error %d", err);
5621 + memset(&dist_cfg, 0, sizeof(dist_cfg));
5623 + /* Prepare for setting the rx dist */
5624 + dist_cfg.key_cfg_iova = dma_map_single(net_dev->dev.parent, dma_mem,
5625 + DPAA2_CLASSIFIER_DMA_SIZE,
5627 + if (dma_mapping_error(net_dev->dev.parent, dist_cfg.key_cfg_iova)) {
5628 + dev_err(dev, "DMA mapping failed\n");
5633 + dist_cfg.dist_size = dpaa2_queue_count(priv);
5634 + if (dpaa2_eth_fs_enabled(priv)) {
5635 + dist_cfg.dist_mode = DPNI_DIST_MODE_FS;
5636 + dist_cfg.fs_cfg.miss_action = DPNI_FS_MISS_HASH;
5638 + dist_cfg.dist_mode = DPNI_DIST_MODE_HASH;
5641 + err = dpni_set_rx_tc_dist(priv->mc_io, 0, priv->mc_token, 0, &dist_cfg);
5642 + dma_unmap_single(net_dev->dev.parent, dist_cfg.key_cfg_iova,
5643 + DPAA2_CLASSIFIER_DMA_SIZE, DMA_TO_DEVICE);
5646 + dev_err(dev, "dpni_set_rx_tc_dist() error %d\n", err);
5650 + priv->rx_hash_fields = enabled_flags;
5655 +static int dpaa2_cls_prep_rule(struct net_device *net_dev,
5656 + struct ethtool_rx_flow_spec *fs,
5659 + struct ethtool_tcpip4_spec *l4ip4_h, *l4ip4_m;
5660 + struct ethhdr *eth_h, *eth_m;
5661 + struct ethtool_flow_ext *ext_h, *ext_m;
5662 + const u8 key_size = dpaa2_cls_key_size(net_dev);
5663 + void *msk = key + key_size;
5665 + memset(key, 0, key_size * 2);
5667 +	/* This code is a major mess; it has to be cleaned up after the
5668 +	 * classification mask issue is fixed and the key format is made static
5671 + switch (fs->flow_type & 0xff) {
5673 + l4ip4_h = &fs->h_u.tcp_ip4_spec;
5674 + l4ip4_m = &fs->m_u.tcp_ip4_spec;
5675 + /* TODO: ethertype to match IPv4 and protocol to match TCP */
5679 + l4ip4_h = &fs->h_u.udp_ip4_spec;
5680 + l4ip4_m = &fs->m_u.udp_ip4_spec;
5683 + case SCTP_V4_FLOW:
5684 + l4ip4_h = &fs->h_u.sctp_ip4_spec;
5685 + l4ip4_m = &fs->m_u.sctp_ip4_spec;
5688 + if (l4ip4_m->tos) {
5689 + netdev_err(net_dev,
5690 + "ToS is not supported for IPv4 L4\n");
5691 + return -EOPNOTSUPP;
5693 + if (l4ip4_m->ip4src &&
5694 + !dpaa2_cls_is_enabled(net_dev, RXH_IP_SRC)) {
5695 + netdev_err(net_dev, "IP SRC not supported!\n");
5696 + return -EOPNOTSUPP;
5698 + if (l4ip4_m->ip4dst &&
5699 + !dpaa2_cls_is_enabled(net_dev, RXH_IP_DST)) {
5700 + netdev_err(net_dev, "IP DST not supported!\n");
5701 + return -EOPNOTSUPP;
5703 + if (l4ip4_m->psrc &&
5704 + !dpaa2_cls_is_enabled(net_dev, RXH_L4_B_0_1)) {
5705 +			netdev_err(net_dev, "PSRC not supported!\n");
5706 + return -EOPNOTSUPP;
5708 + if (l4ip4_m->pdst &&
5709 + !dpaa2_cls_is_enabled(net_dev, RXH_L4_B_2_3)) {
5710 +			netdev_err(net_dev, "PDST not supported!\n");
5711 + return -EOPNOTSUPP;
5714 + if (dpaa2_cls_is_enabled(net_dev, RXH_IP_SRC)) {
5715 + *(u32 *)(key + dpaa2_cls_key_off(net_dev, RXH_IP_SRC))
5716 + = l4ip4_h->ip4src;
5717 + *(u32 *)(msk + dpaa2_cls_key_off(net_dev, RXH_IP_SRC))
5718 + = l4ip4_m->ip4src;
5720 + if (dpaa2_cls_is_enabled(net_dev, RXH_IP_DST)) {
5721 + *(u32 *)(key + dpaa2_cls_key_off(net_dev, RXH_IP_DST))
5722 + = l4ip4_h->ip4dst;
5723 + *(u32 *)(msk + dpaa2_cls_key_off(net_dev, RXH_IP_DST))
5724 + = l4ip4_m->ip4dst;
5727 + if (dpaa2_cls_is_enabled(net_dev, RXH_L4_B_0_1)) {
5728 + *(u32 *)(key + dpaa2_cls_key_off(net_dev, RXH_L4_B_0_1))
5730 + *(u32 *)(msk + dpaa2_cls_key_off(net_dev, RXH_L4_B_0_1))
5734 + if (dpaa2_cls_is_enabled(net_dev, RXH_L4_B_2_3)) {
5735 + *(u32 *)(key + dpaa2_cls_key_off(net_dev, RXH_L4_B_2_3))
5737 + *(u32 *)(msk + dpaa2_cls_key_off(net_dev, RXH_L4_B_2_3))
5743 + eth_h = &fs->h_u.ether_spec;
5744 + eth_m = &fs->m_u.ether_spec;
5746 + if (eth_m->h_proto) {
5747 + netdev_err(net_dev, "Ethertype is not supported!\n");
5748 + return -EOPNOTSUPP;
5751 + if (!is_zero_ether_addr(eth_m->h_source)) {
5752 + netdev_err(net_dev, "ETH SRC is not supported!\n");
5753 + return -EOPNOTSUPP;
5756 + if (dpaa2_cls_is_enabled(net_dev, RXH_L2DA)) {
5757 + ether_addr_copy(key
5758 + + dpaa2_cls_key_off(net_dev, RXH_L2DA),
5760 + ether_addr_copy(msk
5761 + + dpaa2_cls_key_off(net_dev, RXH_L2DA),
5764 + if (!is_zero_ether_addr(eth_m->h_dest)) {
5765 + netdev_err(net_dev,
5766 + "ETH DST is not supported!\n");
5767 + return -EOPNOTSUPP;
5773 + /* TODO: IP user flow, AH, ESP */
5774 + return -EOPNOTSUPP;
5777 + if (fs->flow_type & FLOW_EXT) {
5778 + /* TODO: ETH data, VLAN ethertype, VLAN TCI .. */
5779 + return -EOPNOTSUPP;
5782 + if (fs->flow_type & FLOW_MAC_EXT) {
5783 + ext_h = &fs->h_ext;
5784 + ext_m = &fs->m_ext;
5786 + if (dpaa2_cls_is_enabled(net_dev, RXH_L2DA)) {
5787 + ether_addr_copy(key
5788 + + dpaa2_cls_key_off(net_dev, RXH_L2DA),
5790 + ether_addr_copy(msk
5791 + + dpaa2_cls_key_off(net_dev, RXH_L2DA),
5794 + if (!is_zero_ether_addr(ext_m->h_dest)) {
5795 + netdev_err(net_dev,
5796 + "ETH DST is not supported!\n");
5797 + return -EOPNOTSUPP;
5804 +static int dpaa2_do_cls(struct net_device *net_dev,
5805 + struct ethtool_rx_flow_spec *fs,
5808 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5809 + const int rule_cnt = DPAA2_CLASSIFIER_ENTRY_COUNT;
5810 + struct dpni_rule_cfg rule_cfg;
5814 + if (!dpaa2_eth_fs_enabled(priv)) {
5815 + netdev_err(net_dev, "dev does not support steering!\n");
5817 + return -EOPNOTSUPP;
5820 + if ((fs->ring_cookie != RX_CLS_FLOW_DISC &&
5821 + fs->ring_cookie >= dpaa2_queue_count(priv)) ||
5822 + fs->location >= rule_cnt)
5825 + memset(&rule_cfg, 0, sizeof(rule_cfg));
5826 + rule_cfg.key_size = dpaa2_cls_key_size(net_dev);
5828 + /* allocate twice the key size, for the actual key and for mask */
5829 + dma_mem = kzalloc(rule_cfg.key_size * 2, GFP_DMA | GFP_KERNEL);
5833 + err = dpaa2_cls_prep_rule(net_dev, fs, dma_mem);
5835 + goto err_free_mem;
5837 + rule_cfg.key_iova = dma_map_single(net_dev->dev.parent, dma_mem,
5838 + rule_cfg.key_size * 2,
5841 + rule_cfg.mask_iova = rule_cfg.key_iova + rule_cfg.key_size;
5843 + if (!(priv->dpni_attrs.options & DPNI_OPT_FS_MASK_SUPPORT)) {
5845 + u8 *mask = dma_mem + rule_cfg.key_size;
5847 + /* check that nothing is masked out, otherwise it won't work */
5848 + for (i = 0; i < rule_cfg.key_size; i++) {
5849 + if (mask[i] == 0xff)
5851 + netdev_err(net_dev, "dev does not support masking!\n");
5852 + err = -EOPNOTSUPP;
5853 + goto err_free_mem;
5855 + rule_cfg.mask_iova = 0;
5858 + /* No way to control rule order in firmware */
5860 + err = dpni_add_fs_entry(priv->mc_io, 0, priv->mc_token, 0,
5861 + &rule_cfg, (u16)fs->ring_cookie);
5863 + err = dpni_remove_fs_entry(priv->mc_io, 0, priv->mc_token, 0,
5866 + dma_unmap_single(net_dev->dev.parent, rule_cfg.key_iova,
5867 + rule_cfg.key_size * 2, DMA_TO_DEVICE);
5869 +		netdev_err(net_dev, "dpaa2_do_cls() error %d\n", err);
5870 + goto err_free_mem;
5873 + priv->cls_rule[fs->location].fs = *fs;
5874 + priv->cls_rule[fs->location].in_use = true;
5882 +static int dpaa2_add_cls(struct net_device *net_dev,
5883 + struct ethtool_rx_flow_spec *fs)
5885 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5888 + err = dpaa2_do_cls(net_dev, fs, true);
5892 + priv->cls_rule[fs->location].in_use = true;
5893 + priv->cls_rule[fs->location].fs = *fs;
5898 +static int dpaa2_del_cls(struct net_device *net_dev, int location)
5900 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5903 + err = dpaa2_do_cls(net_dev, &priv->cls_rule[location].fs, false);
5907 + priv->cls_rule[location].in_use = false;
5912 +static void dpaa2_clear_cls(struct net_device *net_dev)
5914 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5917 + for (i = 0; i < DPAA2_CLASSIFIER_ENTRY_COUNT; i++) {
5918 + if (!priv->cls_rule[i].in_use)
5921 + err = dpaa2_del_cls(net_dev, i);
5923 + netdev_warn(net_dev,
5924 + "err trying to delete classification entry %d\n",
5929 +static int dpaa2_set_rxnfc(struct net_device *net_dev,
5930 + struct ethtool_rxnfc *rxnfc)
5934 + switch (rxnfc->cmd) {
5935 + case ETHTOOL_SRXFH:
5936 +		/* first, clear ALL classification rules; changing the key
5937 +		 * composition will break them anyway
5939 + dpaa2_clear_cls(net_dev);
5940 +		/* we purposely ignore cmd->flow_type for now, because the
5941 +		 * classifier only supports a single set of fields for all
5942 +		 * protocols */
5944 + err = dpaa2_set_hash(net_dev, rxnfc->data);
5946 + case ETHTOOL_SRXCLSRLINS:
5947 + err = dpaa2_add_cls(net_dev, &rxnfc->fs);
5950 + case ETHTOOL_SRXCLSRLDEL:
5951 + err = dpaa2_del_cls(net_dev, rxnfc->fs.location);
5955 + err = -EOPNOTSUPP;
5961 +static int dpaa2_get_rxnfc(struct net_device *net_dev,
5962 + struct ethtool_rxnfc *rxnfc, u32 *rule_locs)
5964 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5965 + const int rule_cnt = DPAA2_CLASSIFIER_ENTRY_COUNT;
5968 + switch (rxnfc->cmd) {
5969 + case ETHTOOL_GRXFH:
5970 +		/* we purposely ignore cmd->flow_type for now, because the
5971 +		 * classifier only supports a single set of fields for all
5972 +		 * protocols */
5974 + rxnfc->data = priv->rx_hash_fields;
5977 + case ETHTOOL_GRXRINGS:
5978 + rxnfc->data = dpaa2_queue_count(priv);
5981 + case ETHTOOL_GRXCLSRLCNT:
5982 + for (i = 0, rxnfc->rule_cnt = 0; i < rule_cnt; i++)
5983 + if (priv->cls_rule[i].in_use)
5984 + rxnfc->rule_cnt++;
5985 + rxnfc->data = rule_cnt;
5988 + case ETHTOOL_GRXCLSRULE:
5989 + if (!priv->cls_rule[rxnfc->fs.location].in_use)
5992 + rxnfc->fs = priv->cls_rule[rxnfc->fs.location].fs;
5995 + case ETHTOOL_GRXCLSRLALL:
5996 + for (i = 0, j = 0; i < rule_cnt; i++) {
5997 + if (!priv->cls_rule[i].in_use)
5999 + if (j == rxnfc->rule_cnt)
6001 + rule_locs[j++] = i;
6003 + rxnfc->rule_cnt = j;
6004 + rxnfc->data = rule_cnt;
6008 + return -EOPNOTSUPP;
6014 +const struct ethtool_ops dpaa2_ethtool_ops = {
6015 + .get_drvinfo = dpaa2_get_drvinfo,
6016 + .get_msglevel = dpaa2_get_msglevel,
6017 + .set_msglevel = dpaa2_set_msglevel,
6018 + .get_link = ethtool_op_get_link,
6019 + .get_settings = dpaa2_get_settings,
6020 + .set_settings = dpaa2_set_settings,
6021 + .get_sset_count = dpaa2_get_sset_count,
6022 + .get_ethtool_stats = dpaa2_get_ethtool_stats,
6023 + .get_strings = dpaa2_get_strings,
6024 + .get_rxnfc = dpaa2_get_rxnfc,
6025 + .set_rxnfc = dpaa2_set_rxnfc,
6028 +++ b/drivers/staging/fsl-dpaa2/ethernet/dpkg.h
6030 +/* Copyright 2013-2015 Freescale Semiconductor Inc.
6032 + * Redistribution and use in source and binary forms, with or without
6033 + * modification, are permitted provided that the following conditions are met:
6034 + * * Redistributions of source code must retain the above copyright
6035 + * notice, this list of conditions and the following disclaimer.
6036 + * * Redistributions in binary form must reproduce the above copyright
6037 + * notice, this list of conditions and the following disclaimer in the
6038 + * documentation and/or other materials provided with the distribution.
6039 + * * Neither the name of the above-listed copyright holders nor the
6040 + * names of any contributors may be used to endorse or promote products
6041 + * derived from this software without specific prior written permission.
6044 + * ALTERNATIVELY, this software may be distributed under the terms of the
6045 + * GNU General Public License ("GPL") as published by the Free Software
6046 + * Foundation, either version 2 of that License or (at your option) any
6049 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
6050 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
6051 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
6052 + * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
6053 + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
6054 + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
6055 + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
6056 + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
6057 + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
6058 + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
6059 + * POSSIBILITY OF SUCH DAMAGE.
6061 +#ifndef __FSL_DPKG_H_
6062 +#define __FSL_DPKG_H_
6064 +#include <linux/types.h>
6065 +#include "../../fsl-mc/include/net.h"
6067 +/* Data Path Key Generator API
6068 + * Contains initialization APIs and runtime APIs for the Key Generator
6071 +/** Key Generator properties */
6074 + * Number of masks per key extraction
6076 +#define DPKG_NUM_OF_MASKS 4
6078 + * Number of extractions per key profile
6080 +#define DPKG_MAX_NUM_OF_EXTRACTS 10
6083 + * enum dpkg_extract_from_hdr_type - Selecting extraction by header types
6084 + * @DPKG_FROM_HDR: Extract selected bytes from header, by offset
6085 + * @DPKG_FROM_FIELD: Extract selected bytes from header, by offset from field
6086 + * @DPKG_FULL_FIELD: Extract a full field
6088 +enum dpkg_extract_from_hdr_type {
6089 + DPKG_FROM_HDR = 0,
6090 + DPKG_FROM_FIELD = 1,
6091 + DPKG_FULL_FIELD = 2
6095 + * enum dpkg_extract_type - Enumeration for selecting extraction type
6096 + * @DPKG_EXTRACT_FROM_HDR: Extract from the header
6097 + * @DPKG_EXTRACT_FROM_DATA: Extract from data not in specific header
6098 + * @DPKG_EXTRACT_FROM_PARSE: Extract from parser-result;
6099 + * e.g. can be used to extract header existence;
6100 + * please refer to 'Parse Result definition' section in the parser BG
6102 +enum dpkg_extract_type {
6103 + DPKG_EXTRACT_FROM_HDR = 0,
6104 + DPKG_EXTRACT_FROM_DATA = 1,
6105 + DPKG_EXTRACT_FROM_PARSE = 3
6109 + * struct dpkg_mask - A structure for defining a single extraction mask
6110 + * @mask: Byte mask for the extracted content
6111 + * @offset: Offset within the extracted content
6119 + * struct dpkg_extract - A structure for defining a single extraction
6120 + * @type: Determines how the union below is interpreted:
6121 + * DPKG_EXTRACT_FROM_HDR: selects 'from_hdr';
6122 + * DPKG_EXTRACT_FROM_DATA: selects 'from_data';
6123 + * DPKG_EXTRACT_FROM_PARSE: selects 'from_parse'
6124 + * @extract: Selects extraction method
6125 + * @num_of_byte_masks: Defines the number of valid entries in the array below;
6126 + * This is also the number of bytes to be used as masks
6127 + * @masks: Masks parameters
6129 +struct dpkg_extract {
6130 + enum dpkg_extract_type type;
6132 + * union extract - Selects extraction method
6133 + * @from_hdr - Used when 'type = DPKG_EXTRACT_FROM_HDR'
6134 + * @from_data - Used when 'type = DPKG_EXTRACT_FROM_DATA'
6135 + * @from_parse - Used when 'type = DPKG_EXTRACT_FROM_PARSE'
6139 + * struct from_hdr - Used when 'type = DPKG_EXTRACT_FROM_HDR'
6140 + * @prot: Any of the supported headers
6141 + * @type: Defines the type of header extraction:
6142 + * DPKG_FROM_HDR: use size & offset below;
6143 + * DPKG_FROM_FIELD: use field, size and offset below;
6144 + * DPKG_FULL_FIELD: use field below
6145 + * @field: One of the supported fields (NH_FLD_)
6147 + * @size: Size in bytes
6148 + * @offset: Byte offset
6149 + * @hdr_index: Clear for cases not listed below;
6150 + * Used for protocols that may have more than a single
6151 + * header, 0 indicates an outer header;
6152 + * Supported protocols (possible values):
6153 + * NET_PROT_VLAN (0, HDR_INDEX_LAST);
6154 + * NET_PROT_MPLS (0, 1, HDR_INDEX_LAST);
6155 + * NET_PROT_IP(0, HDR_INDEX_LAST);
6156 + * NET_PROT_IPv4(0, HDR_INDEX_LAST);
6157 + * NET_PROT_IPv6(0, HDR_INDEX_LAST);
6161 + enum net_prot prot;
6162 + enum dpkg_extract_from_hdr_type type;
6166 + uint8_t hdr_index;
6169 + * struct from_data - Used when 'type = DPKG_EXTRACT_FROM_DATA'
6170 + * @size: Size in bytes
6171 + * @offset: Byte offset
6179 + * struct from_parse - Used when 'type = DPKG_EXTRACT_FROM_PARSE'
6180 + * @size: Size in bytes
6181 + * @offset: Byte offset
6189 + uint8_t num_of_byte_masks;
6190 + struct dpkg_mask masks[DPKG_NUM_OF_MASKS];
6194 + * struct dpkg_profile_cfg - A structure for defining a full Key Generation
6196 + * @num_extracts: Defines the number of valid entries in the array below
6197 + * @extracts: Array of required extractions
6199 +struct dpkg_profile_cfg {
6200 + uint8_t num_extracts;
6201 + struct dpkg_extract extracts[DPKG_MAX_NUM_OF_EXTRACTS];
6204 +#endif /* __FSL_DPKG_H_ */
6206 +++ b/drivers/staging/fsl-dpaa2/ethernet/dpni-cmd.h
6208 +/* Copyright 2013-2015 Freescale Semiconductor Inc.
6210 + * Redistribution and use in source and binary forms, with or without
6211 + * modification, are permitted provided that the following conditions are met:
6212 + * * Redistributions of source code must retain the above copyright
6213 + * notice, this list of conditions and the following disclaimer.
6214 + * * Redistributions in binary form must reproduce the above copyright
6215 + * notice, this list of conditions and the following disclaimer in the
6216 + * documentation and/or other materials provided with the distribution.
6217 + * * Neither the name of the above-listed copyright holders nor the
6218 + * names of any contributors may be used to endorse or promote products
6219 + * derived from this software without specific prior written permission.
6222 + * ALTERNATIVELY, this software may be distributed under the terms of the
6223 + * GNU General Public License ("GPL") as published by the Free Software
6224 + * Foundation, either version 2 of that License or (at your option) any
6227 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
6228 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
6229 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
6230 + * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
6231 + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
6232 + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
6233 + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
6234 + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
6235 + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
6236 + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
6237 + * POSSIBILITY OF SUCH DAMAGE.
6239 +#ifndef _FSL_DPNI_CMD_H
6240 +#define _FSL_DPNI_CMD_H
6243 +#define DPNI_VER_MAJOR 6
6244 +#define DPNI_VER_MINOR 0
6247 +#define DPNI_CMDID_OPEN 0x801
6248 +#define DPNI_CMDID_CLOSE 0x800
6249 +#define DPNI_CMDID_CREATE 0x901
6250 +#define DPNI_CMDID_DESTROY 0x900
6252 +#define DPNI_CMDID_ENABLE 0x002
6253 +#define DPNI_CMDID_DISABLE 0x003
6254 +#define DPNI_CMDID_GET_ATTR 0x004
6255 +#define DPNI_CMDID_RESET 0x005
6256 +#define DPNI_CMDID_IS_ENABLED 0x006
6258 +#define DPNI_CMDID_SET_IRQ 0x010
6259 +#define DPNI_CMDID_GET_IRQ 0x011
6260 +#define DPNI_CMDID_SET_IRQ_ENABLE 0x012
6261 +#define DPNI_CMDID_GET_IRQ_ENABLE 0x013
6262 +#define DPNI_CMDID_SET_IRQ_MASK 0x014
6263 +#define DPNI_CMDID_GET_IRQ_MASK 0x015
6264 +#define DPNI_CMDID_GET_IRQ_STATUS 0x016
6265 +#define DPNI_CMDID_CLEAR_IRQ_STATUS 0x017
6267 +#define DPNI_CMDID_SET_POOLS 0x200
+#define DPNI_CMDID_GET_RX_BUFFER_LAYOUT			0x201
+#define DPNI_CMDID_SET_RX_BUFFER_LAYOUT			0x202
+#define DPNI_CMDID_GET_TX_BUFFER_LAYOUT			0x203
+#define DPNI_CMDID_SET_TX_BUFFER_LAYOUT			0x204
+#define DPNI_CMDID_SET_TX_CONF_BUFFER_LAYOUT		0x205
+#define DPNI_CMDID_GET_TX_CONF_BUFFER_LAYOUT		0x206
+#define DPNI_CMDID_SET_L3_CHKSUM_VALIDATION		0x207
+#define DPNI_CMDID_GET_L3_CHKSUM_VALIDATION		0x208
+#define DPNI_CMDID_SET_L4_CHKSUM_VALIDATION		0x209
+#define DPNI_CMDID_GET_L4_CHKSUM_VALIDATION		0x20A
+#define DPNI_CMDID_SET_ERRORS_BEHAVIOR			0x20B
+#define DPNI_CMDID_SET_TX_CONF_REVOKE			0x20C
+
+#define DPNI_CMDID_GET_QDID				0x210
+#define DPNI_CMDID_GET_SP_INFO				0x211
+#define DPNI_CMDID_GET_TX_DATA_OFFSET			0x212
+#define DPNI_CMDID_GET_COUNTER				0x213
+#define DPNI_CMDID_SET_COUNTER				0x214
+#define DPNI_CMDID_GET_LINK_STATE			0x215
+#define DPNI_CMDID_SET_MAX_FRAME_LENGTH			0x216
+#define DPNI_CMDID_GET_MAX_FRAME_LENGTH			0x217
+#define DPNI_CMDID_SET_MTU				0x218
+#define DPNI_CMDID_GET_MTU				0x219
+#define DPNI_CMDID_SET_LINK_CFG				0x21A
+#define DPNI_CMDID_SET_TX_SHAPING			0x21B
+
+#define DPNI_CMDID_SET_MCAST_PROMISC			0x220
+#define DPNI_CMDID_GET_MCAST_PROMISC			0x221
+#define DPNI_CMDID_SET_UNICAST_PROMISC			0x222
+#define DPNI_CMDID_GET_UNICAST_PROMISC			0x223
+#define DPNI_CMDID_SET_PRIM_MAC				0x224
+#define DPNI_CMDID_GET_PRIM_MAC				0x225
+#define DPNI_CMDID_ADD_MAC_ADDR				0x226
+#define DPNI_CMDID_REMOVE_MAC_ADDR			0x227
+#define DPNI_CMDID_CLR_MAC_FILTERS			0x228
+
+#define DPNI_CMDID_SET_VLAN_FILTERS			0x230
+#define DPNI_CMDID_ADD_VLAN_ID				0x231
+#define DPNI_CMDID_REMOVE_VLAN_ID			0x232
+#define DPNI_CMDID_CLR_VLAN_FILTERS			0x233
+
+#define DPNI_CMDID_SET_RX_TC_DIST			0x235
+#define DPNI_CMDID_SET_TX_FLOW				0x236
+#define DPNI_CMDID_GET_TX_FLOW				0x237
+#define DPNI_CMDID_SET_RX_FLOW				0x238
+#define DPNI_CMDID_GET_RX_FLOW				0x239
+#define DPNI_CMDID_SET_RX_ERR_QUEUE			0x23A
+#define DPNI_CMDID_GET_RX_ERR_QUEUE			0x23B
+
+#define DPNI_CMDID_SET_RX_TC_POLICING			0x23E
+#define DPNI_CMDID_SET_RX_TC_EARLY_DROP			0x23F
+
+#define DPNI_CMDID_SET_QOS_TBL				0x240
+#define DPNI_CMDID_ADD_QOS_ENT				0x241
+#define DPNI_CMDID_REMOVE_QOS_ENT			0x242
+#define DPNI_CMDID_CLR_QOS_TBL				0x243
+#define DPNI_CMDID_ADD_FS_ENT				0x244
+#define DPNI_CMDID_REMOVE_FS_ENT			0x245
+#define DPNI_CMDID_CLR_FS_ENT				0x246
+#define DPNI_CMDID_SET_VLAN_INSERTION			0x247
+#define DPNI_CMDID_SET_VLAN_REMOVAL			0x248
+#define DPNI_CMDID_SET_IPR				0x249
+#define DPNI_CMDID_SET_IPF				0x24A
+
+#define DPNI_CMDID_SET_TX_SELECTION			0x250
+#define DPNI_CMDID_GET_RX_TC_POLICING			0x251
+#define DPNI_CMDID_GET_RX_TC_EARLY_DROP			0x252
+#define DPNI_CMDID_SET_RX_TC_CONGESTION_NOTIFICATION	0x253
+#define DPNI_CMDID_GET_RX_TC_CONGESTION_NOTIFICATION	0x254
+#define DPNI_CMDID_SET_TX_TC_CONGESTION_NOTIFICATION	0x255
+#define DPNI_CMDID_GET_TX_TC_CONGESTION_NOTIFICATION	0x256
+#define DPNI_CMDID_SET_TX_CONF				0x257
+#define DPNI_CMDID_GET_TX_CONF				0x258
+#define DPNI_CMDID_SET_TX_CONF_CONGESTION_NOTIFICATION	0x259
+#define DPNI_CMDID_GET_TX_CONF_CONGESTION_NOTIFICATION	0x25A
+#define DPNI_CMDID_SET_TX_TC_EARLY_DROP			0x25B
+#define DPNI_CMDID_GET_TX_TC_EARLY_DROP			0x25C
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_OPEN(cmd, dpni_id) \
+	MC_CMD_OP(cmd, 0, 0, 32, int, dpni_id)
+
+#define DPNI_PREP_EXTENDED_CFG(ext, cfg) \
+do { \
+	MC_PREP_OP(ext, 0, 0, 16, uint16_t, cfg->tc_cfg[0].max_dist); \
+	MC_PREP_OP(ext, 0, 16, 16, uint16_t, cfg->tc_cfg[0].max_fs_entries); \
+	MC_PREP_OP(ext, 0, 32, 16, uint16_t, cfg->tc_cfg[1].max_dist); \
+	MC_PREP_OP(ext, 0, 48, 16, uint16_t, cfg->tc_cfg[1].max_fs_entries); \
+	MC_PREP_OP(ext, 1, 0, 16, uint16_t, cfg->tc_cfg[2].max_dist); \
+	MC_PREP_OP(ext, 1, 16, 16, uint16_t, cfg->tc_cfg[2].max_fs_entries); \
+	MC_PREP_OP(ext, 1, 32, 16, uint16_t, cfg->tc_cfg[3].max_dist); \
+	MC_PREP_OP(ext, 1, 48, 16, uint16_t, cfg->tc_cfg[3].max_fs_entries); \
+	MC_PREP_OP(ext, 2, 0, 16, uint16_t, cfg->tc_cfg[4].max_dist); \
+	MC_PREP_OP(ext, 2, 16, 16, uint16_t, cfg->tc_cfg[4].max_fs_entries); \
+	MC_PREP_OP(ext, 2, 32, 16, uint16_t, cfg->tc_cfg[5].max_dist); \
+	MC_PREP_OP(ext, 2, 48, 16, uint16_t, cfg->tc_cfg[5].max_fs_entries); \
+	MC_PREP_OP(ext, 3, 0, 16, uint16_t, cfg->tc_cfg[6].max_dist); \
+	MC_PREP_OP(ext, 3, 16, 16, uint16_t, cfg->tc_cfg[6].max_fs_entries); \
+	MC_PREP_OP(ext, 3, 32, 16, uint16_t, cfg->tc_cfg[7].max_dist); \
+	MC_PREP_OP(ext, 3, 48, 16, uint16_t, cfg->tc_cfg[7].max_fs_entries); \
+	MC_PREP_OP(ext, 4, 0, 16, uint16_t, \
+		   cfg->ipr_cfg.max_open_frames_ipv4); \
+	MC_PREP_OP(ext, 4, 16, 16, uint16_t, \
+		   cfg->ipr_cfg.max_open_frames_ipv6); \
+	MC_PREP_OP(ext, 4, 32, 16, uint16_t, \
+		   cfg->ipr_cfg.max_reass_frm_size); \
+	MC_PREP_OP(ext, 5, 0, 16, uint16_t, \
+		   cfg->ipr_cfg.min_frag_size_ipv4); \
+	MC_PREP_OP(ext, 5, 16, 16, uint16_t, \
+		   cfg->ipr_cfg.min_frag_size_ipv6); \
+} while (0)
+
+#define DPNI_EXT_EXTENDED_CFG(ext, cfg) \
+do { \
+	MC_EXT_OP(ext, 0, 0, 16, uint16_t, cfg->tc_cfg[0].max_dist); \
+	MC_EXT_OP(ext, 0, 16, 16, uint16_t, cfg->tc_cfg[0].max_fs_entries); \
+	MC_EXT_OP(ext, 0, 32, 16, uint16_t, cfg->tc_cfg[1].max_dist); \
+	MC_EXT_OP(ext, 0, 48, 16, uint16_t, cfg->tc_cfg[1].max_fs_entries); \
+	MC_EXT_OP(ext, 1, 0, 16, uint16_t, cfg->tc_cfg[2].max_dist); \
+	MC_EXT_OP(ext, 1, 16, 16, uint16_t, cfg->tc_cfg[2].max_fs_entries); \
+	MC_EXT_OP(ext, 1, 32, 16, uint16_t, cfg->tc_cfg[3].max_dist); \
+	MC_EXT_OP(ext, 1, 48, 16, uint16_t, cfg->tc_cfg[3].max_fs_entries); \
+	MC_EXT_OP(ext, 2, 0, 16, uint16_t, cfg->tc_cfg[4].max_dist); \
+	MC_EXT_OP(ext, 2, 16, 16, uint16_t, cfg->tc_cfg[4].max_fs_entries); \
+	MC_EXT_OP(ext, 2, 32, 16, uint16_t, cfg->tc_cfg[5].max_dist); \
+	MC_EXT_OP(ext, 2, 48, 16, uint16_t, cfg->tc_cfg[5].max_fs_entries); \
+	MC_EXT_OP(ext, 3, 0, 16, uint16_t, cfg->tc_cfg[6].max_dist); \
+	MC_EXT_OP(ext, 3, 16, 16, uint16_t, cfg->tc_cfg[6].max_fs_entries); \
+	MC_EXT_OP(ext, 3, 32, 16, uint16_t, cfg->tc_cfg[7].max_dist); \
+	MC_EXT_OP(ext, 3, 48, 16, uint16_t, cfg->tc_cfg[7].max_fs_entries); \
+	MC_EXT_OP(ext, 4, 0, 16, uint16_t, \
+		  cfg->ipr_cfg.max_open_frames_ipv4); \
+	MC_EXT_OP(ext, 4, 16, 16, uint16_t, \
+		  cfg->ipr_cfg.max_open_frames_ipv6); \
+	MC_EXT_OP(ext, 4, 32, 16, uint16_t, \
+		  cfg->ipr_cfg.max_reass_frm_size); \
+	MC_EXT_OP(ext, 5, 0, 16, uint16_t, \
+		  cfg->ipr_cfg.min_frag_size_ipv4); \
+	MC_EXT_OP(ext, 5, 16, 16, uint16_t, \
+		  cfg->ipr_cfg.min_frag_size_ipv6); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_CREATE(cmd, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0, 8, uint8_t, cfg->adv.max_tcs); \
+	MC_CMD_OP(cmd, 0, 8, 8, uint8_t, cfg->adv.max_senders); \
+	MC_CMD_OP(cmd, 0, 16, 8, uint8_t, cfg->mac_addr[5]); \
+	MC_CMD_OP(cmd, 0, 24, 8, uint8_t, cfg->mac_addr[4]); \
+	MC_CMD_OP(cmd, 0, 32, 8, uint8_t, cfg->mac_addr[3]); \
+	MC_CMD_OP(cmd, 0, 40, 8, uint8_t, cfg->mac_addr[2]); \
+	MC_CMD_OP(cmd, 0, 48, 8, uint8_t, cfg->mac_addr[1]); \
+	MC_CMD_OP(cmd, 0, 56, 8, uint8_t, cfg->mac_addr[0]); \
+	MC_CMD_OP(cmd, 1, 0, 32, uint32_t, cfg->adv.options); \
+	MC_CMD_OP(cmd, 2, 0, 8, uint8_t, cfg->adv.max_unicast_filters); \
+	MC_CMD_OP(cmd, 2, 8, 8, uint8_t, cfg->adv.max_multicast_filters); \
+	MC_CMD_OP(cmd, 2, 16, 8, uint8_t, cfg->adv.max_vlan_filters); \
+	MC_CMD_OP(cmd, 2, 24, 8, uint8_t, cfg->adv.max_qos_entries); \
+	MC_CMD_OP(cmd, 2, 32, 8, uint8_t, cfg->adv.max_qos_key_size); \
+	MC_CMD_OP(cmd, 2, 48, 8, uint8_t, cfg->adv.max_dist_key_size); \
+	MC_CMD_OP(cmd, 2, 56, 8, enum net_prot, cfg->adv.start_hdr); \
+	MC_CMD_OP(cmd, 4, 48, 8, uint8_t, cfg->adv.max_policers); \
+	MC_CMD_OP(cmd, 4, 56, 8, uint8_t, cfg->adv.max_congestion_ctrl); \
+	MC_CMD_OP(cmd, 5, 0, 64, uint64_t, cfg->adv.ext_cfg_iova); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_POOLS(cmd, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0, 8, uint8_t, cfg->num_dpbp); \
+	MC_CMD_OP(cmd, 0, 8, 1, int, cfg->pools[0].backup_pool); \
+	MC_CMD_OP(cmd, 0, 9, 1, int, cfg->pools[1].backup_pool); \
+	MC_CMD_OP(cmd, 0, 10, 1, int, cfg->pools[2].backup_pool); \
+	MC_CMD_OP(cmd, 0, 11, 1, int, cfg->pools[3].backup_pool); \
+	MC_CMD_OP(cmd, 0, 12, 1, int, cfg->pools[4].backup_pool); \
+	MC_CMD_OP(cmd, 0, 13, 1, int, cfg->pools[5].backup_pool); \
+	MC_CMD_OP(cmd, 0, 14, 1, int, cfg->pools[6].backup_pool); \
+	MC_CMD_OP(cmd, 0, 15, 1, int, cfg->pools[7].backup_pool); \
+	MC_CMD_OP(cmd, 0, 32, 32, int, cfg->pools[0].dpbp_id); \
+	MC_CMD_OP(cmd, 4, 32, 16, uint16_t, cfg->pools[0].buffer_size);\
+	MC_CMD_OP(cmd, 1, 0, 32, int, cfg->pools[1].dpbp_id); \
+	MC_CMD_OP(cmd, 4, 48, 16, uint16_t, cfg->pools[1].buffer_size);\
+	MC_CMD_OP(cmd, 1, 32, 32, int, cfg->pools[2].dpbp_id); \
+	MC_CMD_OP(cmd, 5, 0, 16, uint16_t, cfg->pools[2].buffer_size);\
+	MC_CMD_OP(cmd, 2, 0, 32, int, cfg->pools[3].dpbp_id); \
+	MC_CMD_OP(cmd, 5, 16, 16, uint16_t, cfg->pools[3].buffer_size);\
+	MC_CMD_OP(cmd, 2, 32, 32, int, cfg->pools[4].dpbp_id); \
+	MC_CMD_OP(cmd, 5, 32, 16, uint16_t, cfg->pools[4].buffer_size);\
+	MC_CMD_OP(cmd, 3, 0, 32, int, cfg->pools[5].dpbp_id); \
+	MC_CMD_OP(cmd, 5, 48, 16, uint16_t, cfg->pools[5].buffer_size);\
+	MC_CMD_OP(cmd, 3, 32, 32, int, cfg->pools[6].dpbp_id); \
+	MC_CMD_OP(cmd, 6, 0, 16, uint16_t, cfg->pools[6].buffer_size);\
+	MC_CMD_OP(cmd, 4, 0, 32, int, cfg->pools[7].dpbp_id); \
+	MC_CMD_OP(cmd, 6, 16, 16, uint16_t, cfg->pools[7].buffer_size);\
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_IS_ENABLED(cmd, en) \
+	MC_RSP_OP(cmd, 0, 0, 1, int, en)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_IRQ(cmd, irq_index, irq_cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0, 32, uint32_t, irq_cfg->val); \
+	MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index); \
+	MC_CMD_OP(cmd, 1, 0, 64, uint64_t, irq_cfg->addr); \
+	MC_CMD_OP(cmd, 2, 0, 32, int, irq_cfg->irq_num); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_GET_IRQ(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_IRQ(cmd, type, irq_cfg) \
+do { \
+	MC_RSP_OP(cmd, 0, 0, 32, uint32_t, irq_cfg->val); \
+	MC_RSP_OP(cmd, 1, 0, 64, uint64_t, irq_cfg->addr); \
+	MC_RSP_OP(cmd, 2, 0, 32, int, irq_cfg->irq_num); \
+	MC_RSP_OP(cmd, 2, 32, 32, int, type); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_IRQ_ENABLE(cmd, irq_index, en) \
+do { \
+	MC_CMD_OP(cmd, 0, 0, 8, uint8_t, en); \
+	MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index);\
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_GET_IRQ_ENABLE(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_IRQ_ENABLE(cmd, en) \
+	MC_RSP_OP(cmd, 0, 0, 8, uint8_t, en)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_IRQ_MASK(cmd, irq_index, mask) \
+do { \
+	MC_CMD_OP(cmd, 0, 0, 32, uint32_t, mask); \
+	MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index);\
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_GET_IRQ_MASK(cmd, irq_index) \
+	MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_IRQ_MASK(cmd, mask) \
+	MC_RSP_OP(cmd, 0, 0, 32, uint32_t, mask)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_GET_IRQ_STATUS(cmd, irq_index, status) \
+do { \
+	MC_CMD_OP(cmd, 0, 0, 32, uint32_t, status);\
+	MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index);\
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_IRQ_STATUS(cmd, status) \
+	MC_RSP_OP(cmd, 0, 0, 32, uint32_t, status)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_CLEAR_IRQ_STATUS(cmd, irq_index, status) \
+do { \
+	MC_CMD_OP(cmd, 0, 0, 32, uint32_t, status); \
+	MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index);\
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_GET_ATTR(cmd, attr) \
+	MC_CMD_OP(cmd, 6, 0, 64, uint64_t, attr->ext_cfg_iova)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_ATTR(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 0, 32, int, attr->id);\
+	MC_RSP_OP(cmd, 0, 32, 8, uint8_t, attr->max_tcs); \
+	MC_RSP_OP(cmd, 0, 40, 8, uint8_t, attr->max_senders); \
+	MC_RSP_OP(cmd, 0, 48, 8, enum net_prot, attr->start_hdr); \
+	MC_RSP_OP(cmd, 1, 0, 32, uint32_t, attr->options); \
+	MC_RSP_OP(cmd, 2, 0, 8, uint8_t, attr->max_unicast_filters); \
+	MC_RSP_OP(cmd, 2, 8, 8, uint8_t, attr->max_multicast_filters);\
+	MC_RSP_OP(cmd, 2, 16, 8, uint8_t, attr->max_vlan_filters); \
+	MC_RSP_OP(cmd, 2, 24, 8, uint8_t, attr->max_qos_entries); \
+	MC_RSP_OP(cmd, 2, 32, 8, uint8_t, attr->max_qos_key_size); \
+	MC_RSP_OP(cmd, 2, 40, 8, uint8_t, attr->max_dist_key_size); \
+	MC_RSP_OP(cmd, 4, 48, 8, uint8_t, attr->max_policers); \
+	MC_RSP_OP(cmd, 4, 56, 8, uint8_t, attr->max_congestion_ctrl); \
+	MC_RSP_OP(cmd, 5, 32, 16, uint16_t, attr->version.major);\
+	MC_RSP_OP(cmd, 5, 48, 16, uint16_t, attr->version.minor);\
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_ERRORS_BEHAVIOR(cmd, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0, 32, uint32_t, cfg->errors); \
+	MC_CMD_OP(cmd, 0, 32, 4, enum dpni_error_action, cfg->error_action); \
+	MC_CMD_OP(cmd, 0, 36, 1, int, cfg->set_frame_annotation); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_RX_BUFFER_LAYOUT(cmd, layout) \
+do { \
+	MC_RSP_OP(cmd, 0, 0, 16, uint16_t, layout->private_data_size); \
+	MC_RSP_OP(cmd, 0, 16, 16, uint16_t, layout->data_align); \
+	MC_RSP_OP(cmd, 1, 0, 1, int, layout->pass_timestamp); \
+	MC_RSP_OP(cmd, 1, 1, 1, int, layout->pass_parser_result); \
+	MC_RSP_OP(cmd, 1, 2, 1, int, layout->pass_frame_status); \
+	MC_RSP_OP(cmd, 1, 16, 16, uint16_t, layout->data_head_room); \
+	MC_RSP_OP(cmd, 1, 32, 16, uint16_t, layout->data_tail_room); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_RX_BUFFER_LAYOUT(cmd, layout) \
+do { \
+	MC_CMD_OP(cmd, 0, 0, 16, uint16_t, layout->private_data_size); \
+	MC_CMD_OP(cmd, 0, 16, 16, uint16_t, layout->data_align); \
+	MC_CMD_OP(cmd, 0, 32, 32, uint32_t, layout->options); \
+	MC_CMD_OP(cmd, 1, 0, 1, int, layout->pass_timestamp); \
+	MC_CMD_OP(cmd, 1, 1, 1, int, layout->pass_parser_result); \
+	MC_CMD_OP(cmd, 1, 2, 1, int, layout->pass_frame_status); \
+	MC_CMD_OP(cmd, 1, 16, 16, uint16_t, layout->data_head_room); \
+	MC_CMD_OP(cmd, 1, 32, 16, uint16_t, layout->data_tail_room); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_TX_BUFFER_LAYOUT(cmd, layout) \
+do { \
+	MC_RSP_OP(cmd, 0, 0, 16, uint16_t, layout->private_data_size); \
+	MC_RSP_OP(cmd, 0, 16, 16, uint16_t, layout->data_align); \
+	MC_RSP_OP(cmd, 1, 0, 1, int, layout->pass_timestamp); \
+	MC_RSP_OP(cmd, 1, 1, 1, int, layout->pass_parser_result); \
+	MC_RSP_OP(cmd, 1, 2, 1, int, layout->pass_frame_status); \
+	MC_RSP_OP(cmd, 1, 16, 16, uint16_t, layout->data_head_room); \
+	MC_RSP_OP(cmd, 1, 32, 16, uint16_t, layout->data_tail_room); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_TX_BUFFER_LAYOUT(cmd, layout) \
+do { \
+	MC_CMD_OP(cmd, 0, 0, 16, uint16_t, layout->private_data_size); \
+	MC_CMD_OP(cmd, 0, 16, 16, uint16_t, layout->data_align); \
+	MC_CMD_OP(cmd, 0, 32, 32, uint32_t, layout->options); \
+	MC_CMD_OP(cmd, 1, 0, 1, int, layout->pass_timestamp); \
+	MC_CMD_OP(cmd, 1, 1, 1, int, layout->pass_parser_result); \
+	MC_CMD_OP(cmd, 1, 2, 1, int, layout->pass_frame_status); \
+	MC_CMD_OP(cmd, 1, 16, 16, uint16_t, layout->data_head_room); \
+	MC_CMD_OP(cmd, 1, 32, 16, uint16_t, layout->data_tail_room); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_TX_CONF_BUFFER_LAYOUT(cmd, layout) \
+do { \
+	MC_RSP_OP(cmd, 0, 0, 16, uint16_t, layout->private_data_size); \
+	MC_RSP_OP(cmd, 0, 16, 16, uint16_t, layout->data_align); \
+	MC_RSP_OP(cmd, 1, 0, 1, int, layout->pass_timestamp); \
+	MC_RSP_OP(cmd, 1, 1, 1, int, layout->pass_parser_result); \
+	MC_RSP_OP(cmd, 1, 2, 1, int, layout->pass_frame_status); \
+	MC_RSP_OP(cmd, 1, 16, 16, uint16_t, layout->data_head_room); \
+	MC_RSP_OP(cmd, 1, 32, 16, uint16_t, layout->data_tail_room); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_TX_CONF_BUFFER_LAYOUT(cmd, layout) \
+do { \
+	MC_CMD_OP(cmd, 0, 0, 16, uint16_t, layout->private_data_size); \
+	MC_CMD_OP(cmd, 0, 16, 16, uint16_t, layout->data_align); \
+	MC_CMD_OP(cmd, 0, 32, 32, uint32_t, layout->options); \
+	MC_CMD_OP(cmd, 1, 0, 1, int, layout->pass_timestamp); \
+	MC_CMD_OP(cmd, 1, 1, 1, int, layout->pass_parser_result); \
+	MC_CMD_OP(cmd, 1, 2, 1, int, layout->pass_frame_status); \
+	MC_CMD_OP(cmd, 1, 16, 16, uint16_t, layout->data_head_room); \
+	MC_CMD_OP(cmd, 1, 32, 16, uint16_t, layout->data_tail_room); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_L3_CHKSUM_VALIDATION(cmd, en) \
+	MC_CMD_OP(cmd, 0, 0, 1, int, en)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_L3_CHKSUM_VALIDATION(cmd, en) \
+	MC_RSP_OP(cmd, 0, 0, 1, int, en)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_L4_CHKSUM_VALIDATION(cmd, en) \
+	MC_CMD_OP(cmd, 0, 0, 1, int, en)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_L4_CHKSUM_VALIDATION(cmd, en) \
+	MC_RSP_OP(cmd, 0, 0, 1, int, en)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_QDID(cmd, qdid) \
+	MC_RSP_OP(cmd, 0, 0, 16, uint16_t, qdid)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_SP_INFO(cmd, sp_info) \
+do { \
+	MC_RSP_OP(cmd, 0, 0, 16, uint16_t, sp_info->spids[0]); \
+	MC_RSP_OP(cmd, 0, 16, 16, uint16_t, sp_info->spids[1]); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_TX_DATA_OFFSET(cmd, data_offset) \
+	MC_RSP_OP(cmd, 0, 0, 16, uint16_t, data_offset)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_GET_COUNTER(cmd, counter) \
+	MC_CMD_OP(cmd, 0, 0, 16, enum dpni_counter, counter)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_COUNTER(cmd, value) \
+	MC_RSP_OP(cmd, 1, 0, 64, uint64_t, value)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_COUNTER(cmd, counter, value) \
+do { \
+	MC_CMD_OP(cmd, 0, 0, 16, enum dpni_counter, counter); \
+	MC_CMD_OP(cmd, 1, 0, 64, uint64_t, value); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_LINK_CFG(cmd, cfg) \
+do { \
+	MC_CMD_OP(cmd, 1, 0, 32, uint32_t, cfg->rate);\
+	MC_CMD_OP(cmd, 2, 0, 64, uint64_t, cfg->options);\
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_LINK_STATE(cmd, state) \
+do { \
+	MC_RSP_OP(cmd, 0, 32, 1, int, state->up);\
+	MC_RSP_OP(cmd, 1, 0, 32, uint32_t, state->rate);\
+	MC_RSP_OP(cmd, 2, 0, 64, uint64_t, state->options);\
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_TX_SHAPING(cmd, tx_shaper) \
+do { \
+	MC_CMD_OP(cmd, 0, 0, 16, uint16_t, tx_shaper->max_burst_size);\
+	MC_CMD_OP(cmd, 1, 0, 32, uint32_t, tx_shaper->rate_limit);\
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_MAX_FRAME_LENGTH(cmd, max_frame_length) \
+	MC_CMD_OP(cmd, 0, 0, 16, uint16_t, max_frame_length)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_MAX_FRAME_LENGTH(cmd, max_frame_length) \
+	MC_RSP_OP(cmd, 0, 0, 16, uint16_t, max_frame_length)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_MTU(cmd, mtu) \
+	MC_CMD_OP(cmd, 0, 0, 16, uint16_t, mtu)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_MTU(cmd, mtu) \
+	MC_RSP_OP(cmd, 0, 0, 16, uint16_t, mtu)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_MULTICAST_PROMISC(cmd, en) \
+	MC_CMD_OP(cmd, 0, 0, 1, int, en)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_MULTICAST_PROMISC(cmd, en) \
+	MC_RSP_OP(cmd, 0, 0, 1, int, en)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_UNICAST_PROMISC(cmd, en) \
+	MC_CMD_OP(cmd, 0, 0, 1, int, en)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_UNICAST_PROMISC(cmd, en) \
+	MC_RSP_OP(cmd, 0, 0, 1, int, en)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_PRIMARY_MAC_ADDR(cmd, mac_addr) \
+do { \
+	MC_CMD_OP(cmd, 0, 16, 8, uint8_t, mac_addr[5]); \
+	MC_CMD_OP(cmd, 0, 24, 8, uint8_t, mac_addr[4]); \
+	MC_CMD_OP(cmd, 0, 32, 8, uint8_t, mac_addr[3]); \
+	MC_CMD_OP(cmd, 0, 40, 8, uint8_t, mac_addr[2]); \
+	MC_CMD_OP(cmd, 0, 48, 8, uint8_t, mac_addr[1]); \
+	MC_CMD_OP(cmd, 0, 56, 8, uint8_t, mac_addr[0]); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_PRIMARY_MAC_ADDR(cmd, mac_addr) \
+do { \
+	MC_RSP_OP(cmd, 0, 16, 8, uint8_t, mac_addr[5]); \
+	MC_RSP_OP(cmd, 0, 24, 8, uint8_t, mac_addr[4]); \
+	MC_RSP_OP(cmd, 0, 32, 8, uint8_t, mac_addr[3]); \
+	MC_RSP_OP(cmd, 0, 40, 8, uint8_t, mac_addr[2]); \
+	MC_RSP_OP(cmd, 0, 48, 8, uint8_t, mac_addr[1]); \
+	MC_RSP_OP(cmd, 0, 56, 8, uint8_t, mac_addr[0]); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_ADD_MAC_ADDR(cmd, mac_addr) \
+do { \
+	MC_CMD_OP(cmd, 0, 16, 8, uint8_t, mac_addr[5]); \
+	MC_CMD_OP(cmd, 0, 24, 8, uint8_t, mac_addr[4]); \
+	MC_CMD_OP(cmd, 0, 32, 8, uint8_t, mac_addr[3]); \
+	MC_CMD_OP(cmd, 0, 40, 8, uint8_t, mac_addr[2]); \
+	MC_CMD_OP(cmd, 0, 48, 8, uint8_t, mac_addr[1]); \
+	MC_CMD_OP(cmd, 0, 56, 8, uint8_t, mac_addr[0]); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_REMOVE_MAC_ADDR(cmd, mac_addr) \
+do { \
+	MC_CMD_OP(cmd, 0, 16, 8, uint8_t, mac_addr[5]); \
+	MC_CMD_OP(cmd, 0, 24, 8, uint8_t, mac_addr[4]); \
+	MC_CMD_OP(cmd, 0, 32, 8, uint8_t, mac_addr[3]); \
+	MC_CMD_OP(cmd, 0, 40, 8, uint8_t, mac_addr[2]); \
+	MC_CMD_OP(cmd, 0, 48, 8, uint8_t, mac_addr[1]); \
+	MC_CMD_OP(cmd, 0, 56, 8, uint8_t, mac_addr[0]); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_CLEAR_MAC_FILTERS(cmd, unicast, multicast) \
+do { \
+	MC_CMD_OP(cmd, 0, 0, 1, int, unicast); \
+	MC_CMD_OP(cmd, 0, 1, 1, int, multicast); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_VLAN_FILTERS(cmd, en) \
+	MC_CMD_OP(cmd, 0, 0, 1, int, en)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_ADD_VLAN_ID(cmd, vlan_id) \
+	MC_CMD_OP(cmd, 0, 32, 16, uint16_t, vlan_id)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_REMOVE_VLAN_ID(cmd, vlan_id) \
+	MC_CMD_OP(cmd, 0, 32, 16, uint16_t, vlan_id)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_TX_SELECTION(cmd, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0, 16, uint16_t, cfg->tc_sched[0].delta_bandwidth);\
+	MC_CMD_OP(cmd, 0, 16, 4, enum dpni_tx_schedule_mode, \
+		  cfg->tc_sched[0].mode); \
+	MC_CMD_OP(cmd, 0, 32, 16, uint16_t, cfg->tc_sched[1].delta_bandwidth);\
+	MC_CMD_OP(cmd, 0, 48, 4, enum dpni_tx_schedule_mode, \
+		  cfg->tc_sched[1].mode); \
+	MC_CMD_OP(cmd, 1, 0, 16, uint16_t, cfg->tc_sched[2].delta_bandwidth);\
+	MC_CMD_OP(cmd, 1, 16, 4, enum dpni_tx_schedule_mode, \
+		  cfg->tc_sched[2].mode); \
+	MC_CMD_OP(cmd, 1, 32, 16, uint16_t, cfg->tc_sched[3].delta_bandwidth);\
+	MC_CMD_OP(cmd, 1, 48, 4, enum dpni_tx_schedule_mode, \
+		  cfg->tc_sched[3].mode); \
+	MC_CMD_OP(cmd, 2, 0, 16, uint16_t, cfg->tc_sched[4].delta_bandwidth);\
+	MC_CMD_OP(cmd, 2, 16, 4, enum dpni_tx_schedule_mode, \
+		  cfg->tc_sched[4].mode); \
+	MC_CMD_OP(cmd, 2, 32, 16, uint16_t, cfg->tc_sched[5].delta_bandwidth);\
+	MC_CMD_OP(cmd, 2, 48, 4, enum dpni_tx_schedule_mode, \
+		  cfg->tc_sched[5].mode); \
+	MC_CMD_OP(cmd, 3, 0, 16, uint16_t, cfg->tc_sched[6].delta_bandwidth);\
+	MC_CMD_OP(cmd, 3, 16, 4, enum dpni_tx_schedule_mode, \
+		  cfg->tc_sched[6].mode); \
+	MC_CMD_OP(cmd, 3, 32, 16, uint16_t, cfg->tc_sched[7].delta_bandwidth);\
+	MC_CMD_OP(cmd, 3, 48, 4, enum dpni_tx_schedule_mode, \
+		  cfg->tc_sched[7].mode); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_RX_TC_DIST(cmd, tc_id, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0, 16, uint16_t, cfg->dist_size); \
+	MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id); \
+	MC_CMD_OP(cmd, 0, 24, 4, enum dpni_dist_mode, cfg->dist_mode); \
+	MC_CMD_OP(cmd, 0, 28, 4, enum dpni_fs_miss_action, \
+		  cfg->fs_cfg.miss_action); \
+	MC_CMD_OP(cmd, 0, 48, 16, uint16_t, cfg->fs_cfg.default_flow_id); \
+	MC_CMD_OP(cmd, 6, 0, 64, uint64_t, cfg->key_cfg_iova); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_TX_FLOW(cmd, flow_id, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 43, 1, int, cfg->l3_chksum_gen);\
+	MC_CMD_OP(cmd, 0, 44, 1, int, cfg->l4_chksum_gen);\
+	MC_CMD_OP(cmd, 0, 45, 1, int, cfg->use_common_tx_conf_queue);\
+	MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id);\
+	MC_CMD_OP(cmd, 2, 0, 32, uint32_t, cfg->options);\
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_SET_TX_FLOW(cmd, flow_id) \
+	MC_RSP_OP(cmd, 0, 48, 16, uint16_t, flow_id)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_GET_TX_FLOW(cmd, flow_id) \
+	MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_TX_FLOW(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 43, 1, int, attr->l3_chksum_gen);\
+	MC_RSP_OP(cmd, 0, 44, 1, int, attr->l4_chksum_gen);\
+	MC_RSP_OP(cmd, 0, 45, 1, int, attr->use_common_tx_conf_queue);\
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_RX_FLOW(cmd, tc_id, flow_id, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0, 32, int, cfg->dest_cfg.dest_id); \
+	MC_CMD_OP(cmd, 0, 32, 8, uint8_t, cfg->dest_cfg.priority);\
+	MC_CMD_OP(cmd, 0, 40, 2, enum dpni_dest, cfg->dest_cfg.dest_type);\
+	MC_CMD_OP(cmd, 0, 42, 1, int, cfg->order_preservation_en);\
+	MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id); \
+	MC_CMD_OP(cmd, 1, 0, 64, uint64_t, cfg->user_ctx); \
+	MC_CMD_OP(cmd, 2, 16, 8, uint8_t, tc_id); \
+	MC_CMD_OP(cmd, 2, 32, 32, uint32_t, cfg->options); \
+	MC_CMD_OP(cmd, 3, 0, 4, enum dpni_flc_type, cfg->flc_cfg.flc_type); \
+	MC_CMD_OP(cmd, 3, 4, 4, enum dpni_stash_size, \
+		  cfg->flc_cfg.frame_data_size);\
+	MC_CMD_OP(cmd, 3, 8, 4, enum dpni_stash_size, \
+		  cfg->flc_cfg.flow_context_size);\
+	MC_CMD_OP(cmd, 3, 32, 32, uint32_t, cfg->flc_cfg.options);\
+	MC_CMD_OP(cmd, 4, 0, 64, uint64_t, cfg->flc_cfg.flow_context);\
+	MC_CMD_OP(cmd, 5, 0, 32, uint32_t, cfg->tail_drop_threshold); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_GET_RX_FLOW(cmd, tc_id, flow_id) \
+do { \
+	MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id); \
+	MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_RX_FLOW(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 0, 32, int, attr->dest_cfg.dest_id); \
+	MC_RSP_OP(cmd, 0, 32, 8, uint8_t, attr->dest_cfg.priority);\
+	MC_RSP_OP(cmd, 0, 40, 2, enum dpni_dest, attr->dest_cfg.dest_type); \
+	MC_RSP_OP(cmd, 0, 42, 1, int, attr->order_preservation_en);\
+	MC_RSP_OP(cmd, 1, 0, 64, uint64_t, attr->user_ctx); \
+	MC_RSP_OP(cmd, 2, 0, 32, uint32_t, attr->tail_drop_threshold); \
+	MC_RSP_OP(cmd, 2, 32, 32, uint32_t, attr->fqid); \
+	MC_RSP_OP(cmd, 3, 0, 4, enum dpni_flc_type, attr->flc_cfg.flc_type); \
+	MC_RSP_OP(cmd, 3, 4, 4, enum dpni_stash_size, \
+		  attr->flc_cfg.frame_data_size);\
+	MC_RSP_OP(cmd, 3, 8, 4, enum dpni_stash_size, \
+		  attr->flc_cfg.flow_context_size);\
+	MC_RSP_OP(cmd, 3, 32, 32, uint32_t, attr->flc_cfg.options);\
+	MC_RSP_OP(cmd, 4, 0, 64, uint64_t, attr->flc_cfg.flow_context);\
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_RX_ERR_QUEUE(cmd, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 0, 32, int, cfg->dest_cfg.dest_id); \
+	MC_CMD_OP(cmd, 0, 32, 8, uint8_t, cfg->dest_cfg.priority);\
+	MC_CMD_OP(cmd, 0, 40, 2, enum dpni_dest, cfg->dest_cfg.dest_type);\
+	MC_CMD_OP(cmd, 0, 42, 1, int, cfg->order_preservation_en);\
+	MC_CMD_OP(cmd, 1, 0, 64, uint64_t, cfg->user_ctx); \
+	MC_CMD_OP(cmd, 2, 0, 32, uint32_t, cfg->options); \
+	MC_CMD_OP(cmd, 2, 32, 32, uint32_t, cfg->tail_drop_threshold); \
+	MC_CMD_OP(cmd, 3, 0, 4, enum dpni_flc_type, cfg->flc_cfg.flc_type); \
+	MC_CMD_OP(cmd, 3, 4, 4, enum dpni_stash_size, \
+		  cfg->flc_cfg.frame_data_size);\
+	MC_CMD_OP(cmd, 3, 8, 4, enum dpni_stash_size, \
+		  cfg->flc_cfg.flow_context_size);\
+	MC_CMD_OP(cmd, 3, 32, 32, uint32_t, cfg->flc_cfg.options);\
+	MC_CMD_OP(cmd, 4, 0, 64, uint64_t, cfg->flc_cfg.flow_context);\
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_RSP_GET_RX_ERR_QUEUE(cmd, attr) \
+do { \
+	MC_RSP_OP(cmd, 0, 0, 32, int, attr->dest_cfg.dest_id); \
+	MC_RSP_OP(cmd, 0, 32, 8, uint8_t, attr->dest_cfg.priority);\
+	MC_RSP_OP(cmd, 0, 40, 2, enum dpni_dest, attr->dest_cfg.dest_type);\
+	MC_RSP_OP(cmd, 0, 42, 1, int, attr->order_preservation_en);\
+	MC_RSP_OP(cmd, 1, 0, 64, uint64_t, attr->user_ctx); \
+	MC_RSP_OP(cmd, 2, 0, 32, uint32_t, attr->tail_drop_threshold); \
+	MC_RSP_OP(cmd, 2, 32, 32, uint32_t, attr->fqid); \
+	MC_RSP_OP(cmd, 3, 0, 4, enum dpni_flc_type, attr->flc_cfg.flc_type); \
+	MC_RSP_OP(cmd, 3, 4, 4, enum dpni_stash_size, \
+		  attr->flc_cfg.frame_data_size);\
+	MC_RSP_OP(cmd, 3, 8, 4, enum dpni_stash_size, \
+		  attr->flc_cfg.flow_context_size);\
+	MC_RSP_OP(cmd, 3, 32, 32, uint32_t, attr->flc_cfg.options);\
+	MC_RSP_OP(cmd, 4, 0, 64, uint64_t, attr->flc_cfg.flow_context);\
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_TX_CONF_REVOKE(cmd, revoke) \
+	MC_CMD_OP(cmd, 0, 0, 1, int, revoke)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_QOS_TABLE(cmd, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 32, 8, uint8_t, cfg->default_tc); \
+	MC_CMD_OP(cmd, 0, 40, 1, int, cfg->discard_on_miss); \
+	MC_CMD_OP(cmd, 6, 0, 64, uint64_t, cfg->key_cfg_iova); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_ADD_QOS_ENTRY(cmd, cfg, tc_id) \
+do { \
+	MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id); \
+	MC_CMD_OP(cmd, 0, 24, 8, uint8_t, cfg->key_size); \
+	MC_CMD_OP(cmd, 1, 0, 64, uint64_t, cfg->key_iova); \
+	MC_CMD_OP(cmd, 2, 0, 64, uint64_t, cfg->mask_iova); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_REMOVE_QOS_ENTRY(cmd, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 24, 8, uint8_t, cfg->key_size); \
+	MC_CMD_OP(cmd, 1, 0, 64, uint64_t, cfg->key_iova); \
+	MC_CMD_OP(cmd, 2, 0, 64, uint64_t, cfg->mask_iova); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_ADD_FS_ENTRY(cmd, tc_id, cfg, flow_id) \
+do { \
+	MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id); \
+	MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id); \
+	MC_CMD_OP(cmd, 0, 24, 8, uint8_t, cfg->key_size); \
+	MC_CMD_OP(cmd, 1, 0, 64, uint64_t, cfg->key_iova); \
+	MC_CMD_OP(cmd, 2, 0, 64, uint64_t, cfg->mask_iova); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_REMOVE_FS_ENTRY(cmd, tc_id, cfg) \
+do { \
+	MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id); \
+	MC_CMD_OP(cmd, 0, 24, 8, uint8_t, cfg->key_size); \
+	MC_CMD_OP(cmd, 1, 0, 64, uint64_t, cfg->key_iova); \
+	MC_CMD_OP(cmd, 2, 0, 64, uint64_t, cfg->mask_iova); \
+} while (0)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_CLEAR_FS_ENTRIES(cmd, tc_id) \
+	MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_VLAN_INSERTION(cmd, en) \
+	MC_CMD_OP(cmd, 0, 0, 1, int, en)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_VLAN_REMOVAL(cmd, en) \
+	MC_CMD_OP(cmd, 0, 0, 1, int, en)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_IPR(cmd, en) \
+	MC_CMD_OP(cmd, 0, 0, 1, int, en)
+
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_IPF(cmd, en) \
+	MC_CMD_OP(cmd, 0, 0, 1, int, en)
+
7028 +/* cmd, param, offset, width, type, arg_name */
7029 +#define DPNI_CMD_SET_RX_TC_POLICING(cmd, tc_id, cfg) \
7031 + MC_CMD_OP(cmd, 0, 0, 4, enum dpni_policer_mode, cfg->mode); \
7032 + MC_CMD_OP(cmd, 0, 4, 4, enum dpni_policer_color, cfg->default_color); \
7033 + MC_CMD_OP(cmd, 0, 8, 4, enum dpni_policer_unit, cfg->units); \
7034 + MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id); \
7035 + MC_CMD_OP(cmd, 0, 32, 32, uint32_t, cfg->options); \
7036 + MC_CMD_OP(cmd, 1, 0, 32, uint32_t, cfg->cir); \
7037 + MC_CMD_OP(cmd, 1, 32, 32, uint32_t, cfg->cbs); \
7038 + MC_CMD_OP(cmd, 2, 0, 32, uint32_t, cfg->eir); \
7039 + MC_CMD_OP(cmd, 2, 32, 32, uint32_t, cfg->ebs);\
7042 +/* cmd, param, offset, width, type, arg_name */
7043 +#define DPNI_CMD_GET_RX_TC_POLICING(cmd, tc_id) \
7044 + MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id)
7046 +/* cmd, param, offset, width, type, arg_name */
7047 +#define DPNI_RSP_GET_RX_TC_POLICING(cmd, cfg) \
7049 + MC_RSP_OP(cmd, 0, 0, 4, enum dpni_policer_mode, cfg->mode); \
7050 + MC_RSP_OP(cmd, 0, 4, 4, enum dpni_policer_color, cfg->default_color); \
7051 + MC_RSP_OP(cmd, 0, 8, 4, enum dpni_policer_unit, cfg->units); \
7052 + MC_RSP_OP(cmd, 0, 32, 32, uint32_t, cfg->options); \
7053 + MC_RSP_OP(cmd, 1, 0, 32, uint32_t, cfg->cir); \
7054 + MC_RSP_OP(cmd, 1, 32, 32, uint32_t, cfg->cbs); \
7055 + MC_RSP_OP(cmd, 2, 0, 32, uint32_t, cfg->eir); \
7056 + MC_RSP_OP(cmd, 2, 32, 32, uint32_t, cfg->ebs);\
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_PREP_EARLY_DROP(ext, cfg) \
+ MC_PREP_OP(ext, 0, 0, 2, enum dpni_early_drop_mode, cfg->mode); \
+ MC_PREP_OP(ext, 0, 2, 2, \
+ enum dpni_congestion_unit, cfg->units); \
+ MC_PREP_OP(ext, 0, 32, 32, uint32_t, cfg->tail_drop_threshold); \
+ MC_PREP_OP(ext, 1, 0, 8, uint8_t, cfg->green.drop_probability); \
+ MC_PREP_OP(ext, 2, 0, 64, uint64_t, cfg->green.max_threshold); \
+ MC_PREP_OP(ext, 3, 0, 64, uint64_t, cfg->green.min_threshold); \
+ MC_PREP_OP(ext, 5, 0, 8, uint8_t, cfg->yellow.drop_probability);\
+ MC_PREP_OP(ext, 6, 0, 64, uint64_t, cfg->yellow.max_threshold); \
+ MC_PREP_OP(ext, 7, 0, 64, uint64_t, cfg->yellow.min_threshold); \
+ MC_PREP_OP(ext, 9, 0, 8, uint8_t, cfg->red.drop_probability); \
+ MC_PREP_OP(ext, 10, 0, 64, uint64_t, cfg->red.max_threshold); \
+ MC_PREP_OP(ext, 11, 0, 64, uint64_t, cfg->red.min_threshold); \
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_EXT_EARLY_DROP(ext, cfg) \
+ MC_EXT_OP(ext, 0, 0, 2, enum dpni_early_drop_mode, cfg->mode); \
+ MC_EXT_OP(ext, 0, 2, 2, \
+ enum dpni_congestion_unit, cfg->units); \
+ MC_EXT_OP(ext, 0, 32, 32, uint32_t, cfg->tail_drop_threshold); \
+ MC_EXT_OP(ext, 1, 0, 8, uint8_t, cfg->green.drop_probability); \
+ MC_EXT_OP(ext, 2, 0, 64, uint64_t, cfg->green.max_threshold); \
+ MC_EXT_OP(ext, 3, 0, 64, uint64_t, cfg->green.min_threshold); \
+ MC_EXT_OP(ext, 5, 0, 8, uint8_t, cfg->yellow.drop_probability);\
+ MC_EXT_OP(ext, 6, 0, 64, uint64_t, cfg->yellow.max_threshold); \
+ MC_EXT_OP(ext, 7, 0, 64, uint64_t, cfg->yellow.min_threshold); \
+ MC_EXT_OP(ext, 9, 0, 8, uint8_t, cfg->red.drop_probability); \
+ MC_EXT_OP(ext, 10, 0, 64, uint64_t, cfg->red.max_threshold); \
+ MC_EXT_OP(ext, 11, 0, 64, uint64_t, cfg->red.min_threshold); \
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_RX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova) \
+ MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id); \
+ MC_CMD_OP(cmd, 1, 0, 64, uint64_t, early_drop_iova); \
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_GET_RX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova) \
+ MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id); \
+ MC_CMD_OP(cmd, 1, 0, 64, uint64_t, early_drop_iova); \
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_SET_TX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova) \
+ MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id); \
+ MC_CMD_OP(cmd, 1, 0, 64, uint64_t, early_drop_iova); \
+/* cmd, param, offset, width, type, arg_name */
+#define DPNI_CMD_GET_TX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova) \
+ MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id); \
+ MC_CMD_OP(cmd, 1, 0, 64, uint64_t, early_drop_iova); \
+#define DPNI_CMD_SET_RX_TC_CONGESTION_NOTIFICATION(cmd, tc_id, cfg) \
+ MC_CMD_OP(cmd, 0, 0, 2, enum dpni_congestion_unit, cfg->units); \
+ MC_CMD_OP(cmd, 0, 4, 4, enum dpni_dest, cfg->dest_cfg.dest_type); \
+ MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id); \
+ MC_CMD_OP(cmd, 0, 16, 8, uint8_t, cfg->dest_cfg.priority); \
+ MC_CMD_OP(cmd, 1, 0, 32, uint32_t, cfg->threshold_entry); \
+ MC_CMD_OP(cmd, 1, 32, 32, uint32_t, cfg->threshold_exit); \
+ MC_CMD_OP(cmd, 2, 0, 16, uint16_t, cfg->options); \
+ MC_CMD_OP(cmd, 2, 32, 32, int, cfg->dest_cfg.dest_id); \
+ MC_CMD_OP(cmd, 3, 0, 64, uint64_t, cfg->message_ctx); \
+ MC_CMD_OP(cmd, 4, 0, 64, uint64_t, cfg->message_iova); \
+#define DPNI_CMD_GET_RX_TC_CONGESTION_NOTIFICATION(cmd, tc_id) \
+ MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id)
+#define DPNI_RSP_GET_RX_TC_CONGESTION_NOTIFICATION(cmd, cfg) \
+ MC_RSP_OP(cmd, 0, 0, 2, enum dpni_congestion_unit, cfg->units); \
+ MC_RSP_OP(cmd, 0, 4, 4, enum dpni_dest, cfg->dest_cfg.dest_type); \
+ MC_RSP_OP(cmd, 0, 16, 8, uint8_t, cfg->dest_cfg.priority); \
+ MC_RSP_OP(cmd, 1, 0, 32, uint32_t, cfg->threshold_entry); \
+ MC_RSP_OP(cmd, 1, 32, 32, uint32_t, cfg->threshold_exit); \
+ MC_RSP_OP(cmd, 2, 0, 16, uint16_t, cfg->options); \
+ MC_RSP_OP(cmd, 2, 32, 32, int, cfg->dest_cfg.dest_id); \
+ MC_RSP_OP(cmd, 3, 0, 64, uint64_t, cfg->message_ctx); \
+ MC_RSP_OP(cmd, 4, 0, 64, uint64_t, cfg->message_iova); \
+#define DPNI_CMD_SET_TX_TC_CONGESTION_NOTIFICATION(cmd, tc_id, cfg) \
+ MC_CMD_OP(cmd, 0, 0, 2, enum dpni_congestion_unit, cfg->units); \
+ MC_CMD_OP(cmd, 0, 4, 4, enum dpni_dest, cfg->dest_cfg.dest_type); \
+ MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id); \
+ MC_CMD_OP(cmd, 0, 16, 8, uint8_t, cfg->dest_cfg.priority); \
+ MC_CMD_OP(cmd, 1, 0, 32, uint32_t, cfg->threshold_entry); \
+ MC_CMD_OP(cmd, 1, 32, 32, uint32_t, cfg->threshold_exit); \
+ MC_CMD_OP(cmd, 2, 0, 16, uint16_t, cfg->options); \
+ MC_CMD_OP(cmd, 2, 32, 32, int, cfg->dest_cfg.dest_id); \
+ MC_CMD_OP(cmd, 3, 0, 64, uint64_t, cfg->message_ctx); \
+ MC_CMD_OP(cmd, 4, 0, 64, uint64_t, cfg->message_iova); \
+#define DPNI_CMD_GET_TX_TC_CONGESTION_NOTIFICATION(cmd, tc_id) \
+ MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id)
+#define DPNI_RSP_GET_TX_TC_CONGESTION_NOTIFICATION(cmd, cfg) \
+ MC_RSP_OP(cmd, 0, 0, 2, enum dpni_congestion_unit, cfg->units); \
+ MC_RSP_OP(cmd, 0, 4, 4, enum dpni_dest, cfg->dest_cfg.dest_type); \
+ MC_RSP_OP(cmd, 0, 16, 8, uint8_t, cfg->dest_cfg.priority); \
+ MC_RSP_OP(cmd, 1, 0, 32, uint32_t, cfg->threshold_entry); \
+ MC_RSP_OP(cmd, 1, 32, 32, uint32_t, cfg->threshold_exit); \
+ MC_RSP_OP(cmd, 2, 0, 16, uint16_t, cfg->options); \
+ MC_RSP_OP(cmd, 2, 32, 32, int, cfg->dest_cfg.dest_id); \
+ MC_RSP_OP(cmd, 3, 0, 64, uint64_t, cfg->message_ctx); \
+ MC_RSP_OP(cmd, 4, 0, 64, uint64_t, cfg->message_iova); \
+#define DPNI_CMD_SET_TX_CONF(cmd, flow_id, cfg) \
+ MC_CMD_OP(cmd, 0, 32, 8, uint8_t, cfg->queue_cfg.dest_cfg.priority); \
+ MC_CMD_OP(cmd, 0, 40, 2, enum dpni_dest, \
+ cfg->queue_cfg.dest_cfg.dest_type); \
+ MC_CMD_OP(cmd, 0, 42, 1, int, cfg->errors_only); \
+ MC_CMD_OP(cmd, 0, 46, 1, int, cfg->queue_cfg.order_preservation_en); \
+ MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id); \
+ MC_CMD_OP(cmd, 1, 0, 64, uint64_t, cfg->queue_cfg.user_ctx); \
+ MC_CMD_OP(cmd, 2, 0, 32, uint32_t, cfg->queue_cfg.options); \
+ MC_CMD_OP(cmd, 2, 32, 32, int, cfg->queue_cfg.dest_cfg.dest_id); \
+ MC_CMD_OP(cmd, 3, 0, 32, uint32_t, \
+ cfg->queue_cfg.tail_drop_threshold); \
+ MC_CMD_OP(cmd, 4, 0, 4, enum dpni_flc_type, \
+ cfg->queue_cfg.flc_cfg.flc_type); \
+ MC_CMD_OP(cmd, 4, 4, 4, enum dpni_stash_size, \
+ cfg->queue_cfg.flc_cfg.frame_data_size); \
+ MC_CMD_OP(cmd, 4, 8, 4, enum dpni_stash_size, \
+ cfg->queue_cfg.flc_cfg.flow_context_size); \
+ MC_CMD_OP(cmd, 4, 32, 32, uint32_t, cfg->queue_cfg.flc_cfg.options); \
+ MC_CMD_OP(cmd, 5, 0, 64, uint64_t, \
+ cfg->queue_cfg.flc_cfg.flow_context); \
+#define DPNI_CMD_GET_TX_CONF(cmd, flow_id) \
+ MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id)
+#define DPNI_RSP_GET_TX_CONF(cmd, attr) \
+ MC_RSP_OP(cmd, 0, 32, 8, uint8_t, \
+ attr->queue_attr.dest_cfg.priority); \
+ MC_RSP_OP(cmd, 0, 40, 2, enum dpni_dest, \
+ attr->queue_attr.dest_cfg.dest_type); \
+ MC_RSP_OP(cmd, 0, 42, 1, int, attr->errors_only); \
+ MC_RSP_OP(cmd, 0, 46, 1, int, \
+ attr->queue_attr.order_preservation_en); \
+ MC_RSP_OP(cmd, 1, 0, 64, uint64_t, attr->queue_attr.user_ctx); \
+ MC_RSP_OP(cmd, 2, 32, 32, int, attr->queue_attr.dest_cfg.dest_id); \
+ MC_RSP_OP(cmd, 3, 0, 32, uint32_t, \
+ attr->queue_attr.tail_drop_threshold); \
+ MC_RSP_OP(cmd, 3, 32, 32, uint32_t, attr->queue_attr.fqid); \
+ MC_RSP_OP(cmd, 4, 0, 4, enum dpni_flc_type, \
+ attr->queue_attr.flc_cfg.flc_type); \
+ MC_RSP_OP(cmd, 4, 4, 4, enum dpni_stash_size, \
+ attr->queue_attr.flc_cfg.frame_data_size); \
+ MC_RSP_OP(cmd, 4, 8, 4, enum dpni_stash_size, \
+ attr->queue_attr.flc_cfg.flow_context_size); \
+ MC_RSP_OP(cmd, 4, 32, 32, uint32_t, attr->queue_attr.flc_cfg.options); \
+ MC_RSP_OP(cmd, 5, 0, 64, uint64_t, \
+ attr->queue_attr.flc_cfg.flow_context); \
+#define DPNI_CMD_SET_TX_CONF_CONGESTION_NOTIFICATION(cmd, flow_id, cfg) \
+ MC_CMD_OP(cmd, 0, 0, 2, enum dpni_congestion_unit, cfg->units); \
+ MC_CMD_OP(cmd, 0, 4, 4, enum dpni_dest, cfg->dest_cfg.dest_type); \
+ MC_CMD_OP(cmd, 0, 16, 8, uint8_t, cfg->dest_cfg.priority); \
+ MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id); \
+ MC_CMD_OP(cmd, 1, 0, 32, uint32_t, cfg->threshold_entry); \
+ MC_CMD_OP(cmd, 1, 32, 32, uint32_t, cfg->threshold_exit); \
+ MC_CMD_OP(cmd, 2, 0, 16, uint16_t, cfg->options); \
+ MC_CMD_OP(cmd, 2, 32, 32, int, cfg->dest_cfg.dest_id); \
+ MC_CMD_OP(cmd, 3, 0, 64, uint64_t, cfg->message_ctx); \
+ MC_CMD_OP(cmd, 4, 0, 64, uint64_t, cfg->message_iova); \
+#define DPNI_CMD_GET_TX_CONF_CONGESTION_NOTIFICATION(cmd, flow_id) \
+ MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id)
+#define DPNI_RSP_GET_TX_CONF_CONGESTION_NOTIFICATION(cmd, cfg) \
+ MC_RSP_OP(cmd, 0, 0, 2, enum dpni_congestion_unit, cfg->units); \
+ MC_RSP_OP(cmd, 0, 4, 4, enum dpni_dest, cfg->dest_cfg.dest_type); \
+ MC_RSP_OP(cmd, 0, 16, 8, uint8_t, cfg->dest_cfg.priority); \
+ MC_RSP_OP(cmd, 1, 0, 32, uint32_t, cfg->threshold_entry); \
+ MC_RSP_OP(cmd, 1, 32, 32, uint32_t, cfg->threshold_exit); \
+ MC_RSP_OP(cmd, 2, 0, 16, uint16_t, cfg->options); \
+ MC_RSP_OP(cmd, 2, 32, 32, int, cfg->dest_cfg.dest_id); \
+ MC_RSP_OP(cmd, 3, 0, 64, uint64_t, cfg->message_ctx); \
+ MC_RSP_OP(cmd, 4, 0, 64, uint64_t, cfg->message_iova); \
+#endif /* _FSL_DPNI_CMD_H */
+++ b/drivers/staging/fsl-dpaa2/ethernet/dpni.c
+/* Copyright 2013-2015 Freescale Semiconductor Inc.
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+#include "../../fsl-mc/include/mc-sys.h"
+#include "../../fsl-mc/include/mc-cmd.h"
+#include "dpni-cmd.h"
+int dpni_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
+ uint8_t *key_cfg_buf)
+ uint64_t *params = (uint64_t *)key_cfg_buf;
+ if (!key_cfg_buf || !cfg)
+ params[0] |= mc_enc(0, 8, cfg->num_extracts);
+ params[0] = cpu_to_le64(params[0]);
+ if (cfg->num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS)
+ for (i = 0; i < cfg->num_extracts; i++) {
+ switch (cfg->extracts[i].type) {
+ case DPKG_EXTRACT_FROM_HDR:
+ params[param] |= mc_enc(0, 8,
+ cfg->extracts[i].extract.from_hdr.prot);
+ params[param] |= mc_enc(8, 4,
+ cfg->extracts[i].extract.from_hdr.type);
+ params[param] |= mc_enc(16, 8,
+ cfg->extracts[i].extract.from_hdr.size);
+ params[param] |= mc_enc(24, 8,
+ cfg->extracts[i].extract.
+ params[param] |= mc_enc(32, 32,
+ cfg->extracts[i].extract.
+ params[param] = cpu_to_le64(params[param]);
+ params[param] |= mc_enc(0, 8,
+ cfg->extracts[i].extract.
+ from_hdr.hdr_index);
+ case DPKG_EXTRACT_FROM_DATA:
+ params[param] |= mc_enc(16, 8,
+ cfg->extracts[i].extract.
+ params[param] |= mc_enc(24, 8,
+ cfg->extracts[i].extract.
+ from_data.offset);
+ params[param] = cpu_to_le64(params[param]);
+ case DPKG_EXTRACT_FROM_PARSE:
+ params[param] |= mc_enc(16, 8,
+ cfg->extracts[i].extract.
+ params[param] |= mc_enc(24, 8,
+ cfg->extracts[i].extract.
+ from_parse.offset);
+ params[param] = cpu_to_le64(params[param]);
+ params[param] |= mc_enc(
+ 24, 8, cfg->extracts[i].num_of_byte_masks);
+ params[param] |= mc_enc(32, 4, cfg->extracts[i].type);
+ params[param] = cpu_to_le64(params[param]);
+ for (offset = 0, j = 0;
+ j < DPKG_NUM_OF_MASKS;
+ offset += 16, j++) {
+ params[param] |= mc_enc(
+ (offset), 8, cfg->extracts[i].masks[j].mask);
+ params[param] |= mc_enc(
+ cfg->extracts[i].masks[j].offset);
+ params[param] = cpu_to_le64(params[param]);
+int dpni_prepare_extended_cfg(const struct dpni_extended_cfg *cfg,
+ uint8_t *ext_cfg_buf)
+ uint64_t *ext_params = (uint64_t *)ext_cfg_buf;
+ DPNI_PREP_EXTENDED_CFG(ext_params, cfg);
+int dpni_extract_extended_cfg(struct dpni_extended_cfg *cfg,
+ const uint8_t *ext_cfg_buf)
+ uint64_t *ext_params = (uint64_t *)ext_cfg_buf;
+ DPNI_EXT_EXTENDED_CFG(ext_params, cfg);
+int dpni_open(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_OPEN,
+ DPNI_CMD_OPEN(cmd, dpni_id);
+ /* send command to mc*/
+ err = mc_send_command(mc_io, &cmd);
+ /* retrieve response parameters */
+ *token = MC_CMD_HDR_READ_TOKEN(cmd.header);
+int dpni_close(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLOSE,
+ /* send command to mc*/
+ return mc_send_command(mc_io, &cmd);
+int dpni_create(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ const struct dpni_cfg *cfg,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_CREATE,
+ DPNI_CMD_CREATE(cmd, cfg);
+ /* send command to mc*/
+ err = mc_send_command(mc_io, &cmd);
+ /* retrieve response parameters */
+ *token = MC_CMD_HDR_READ_TOKEN(cmd.header);
+int dpni_destroy(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_DESTROY,
+ /* send command to mc*/
+ return mc_send_command(mc_io, &cmd);
+int dpni_set_pools(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ const struct dpni_pools_cfg *cfg)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_POOLS,
+ DPNI_CMD_SET_POOLS(cmd, cfg);
+ /* send command to mc*/
+ return mc_send_command(mc_io, &cmd);
+int dpni_enable(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_ENABLE,
+ /* send command to mc*/
+ return mc_send_command(mc_io, &cmd);
+int dpni_disable(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_DISABLE,
+ /* send command to mc*/
+ return mc_send_command(mc_io, &cmd);
+int dpni_is_enabled(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_IS_ENABLED, cmd_flags,
+ /* send command to mc*/
+ err = mc_send_command(mc_io, &cmd);
+ /* retrieve response parameters */
+ DPNI_RSP_IS_ENABLED(cmd, *en);
+int dpni_reset(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_RESET,
+ /* send command to mc*/
+ return mc_send_command(mc_io, &cmd);
+int dpni_set_irq(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint8_t irq_index,
+ struct dpni_irq_cfg *irq_cfg)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_IRQ,
+ DPNI_CMD_SET_IRQ(cmd, irq_index, irq_cfg);
+ /* send command to mc*/
+ return mc_send_command(mc_io, &cmd);
+int dpni_get_irq(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint8_t irq_index,
+ struct dpni_irq_cfg *irq_cfg)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_IRQ,
+ DPNI_CMD_GET_IRQ(cmd, irq_index);
+ /* send command to mc*/
+ err = mc_send_command(mc_io, &cmd);
+ /* retrieve response parameters */
+ DPNI_RSP_GET_IRQ(cmd, *type, irq_cfg);
+int dpni_set_irq_enable(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint8_t irq_index,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_IRQ_ENABLE,
+ DPNI_CMD_SET_IRQ_ENABLE(cmd, irq_index, en);
+ /* send command to mc*/
+ return mc_send_command(mc_io, &cmd);
+int dpni_get_irq_enable(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint8_t irq_index,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_IRQ_ENABLE,
+ DPNI_CMD_GET_IRQ_ENABLE(cmd, irq_index);
+ /* send command to mc*/
+ err = mc_send_command(mc_io, &cmd);
+ /* retrieve response parameters */
+ DPNI_RSP_GET_IRQ_ENABLE(cmd, *en);
+int dpni_set_irq_mask(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint8_t irq_index,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_IRQ_MASK,
+ DPNI_CMD_SET_IRQ_MASK(cmd, irq_index, mask);
+ /* send command to mc*/
+ return mc_send_command(mc_io, &cmd);
+int dpni_get_irq_mask(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint8_t irq_index,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_IRQ_MASK,
+ DPNI_CMD_GET_IRQ_MASK(cmd, irq_index);
+ /* send command to mc*/
+ err = mc_send_command(mc_io, &cmd);
+ /* retrieve response parameters */
+ DPNI_RSP_GET_IRQ_MASK(cmd, *mask);
+int dpni_get_irq_status(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint8_t irq_index,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_IRQ_STATUS,
+ DPNI_CMD_GET_IRQ_STATUS(cmd, irq_index, *status);
+ /* send command to mc*/
+ err = mc_send_command(mc_io, &cmd);
+ /* retrieve response parameters */
+ DPNI_RSP_GET_IRQ_STATUS(cmd, *status);
+int dpni_clear_irq_status(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint8_t irq_index,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLEAR_IRQ_STATUS,
+ DPNI_CMD_CLEAR_IRQ_STATUS(cmd, irq_index, status);
+ /* send command to mc*/
+ return mc_send_command(mc_io, &cmd);
+int dpni_get_attributes(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct dpni_attr *attr)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_ATTR,
+ DPNI_CMD_GET_ATTR(cmd, attr);
+ /* send command to mc*/
+ err = mc_send_command(mc_io, &cmd);
+ /* retrieve response parameters */
+ DPNI_RSP_GET_ATTR(cmd, attr);
+int dpni_set_errors_behavior(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct dpni_error_cfg *cfg)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_ERRORS_BEHAVIOR,
+ DPNI_CMD_SET_ERRORS_BEHAVIOR(cmd, cfg);
+ /* send command to mc*/
+ return mc_send_command(mc_io, &cmd);
+int dpni_get_rx_buffer_layout(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct dpni_buffer_layout *layout)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_RX_BUFFER_LAYOUT,
+ /* send command to mc*/
+ err = mc_send_command(mc_io, &cmd);
+ /* retrieve response parameters */
+ DPNI_RSP_GET_RX_BUFFER_LAYOUT(cmd, layout);
+int dpni_set_rx_buffer_layout(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ const struct dpni_buffer_layout *layout)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_RX_BUFFER_LAYOUT,
+ DPNI_CMD_SET_RX_BUFFER_LAYOUT(cmd, layout);
+ /* send command to mc*/
+ return mc_send_command(mc_io, &cmd);
+int dpni_get_tx_buffer_layout(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct dpni_buffer_layout *layout)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_BUFFER_LAYOUT,
+ /* send command to mc*/
+ err = mc_send_command(mc_io, &cmd);
+ /* retrieve response parameters */
+ DPNI_RSP_GET_TX_BUFFER_LAYOUT(cmd, layout);
+int dpni_set_tx_buffer_layout(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ const struct dpni_buffer_layout *layout)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_BUFFER_LAYOUT,
+ DPNI_CMD_SET_TX_BUFFER_LAYOUT(cmd, layout);
+ /* send command to mc*/
+ return mc_send_command(mc_io, &cmd);
+int dpni_get_tx_conf_buffer_layout(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct dpni_buffer_layout *layout)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_CONF_BUFFER_LAYOUT,
+ /* send command to mc*/
+ err = mc_send_command(mc_io, &cmd);
+ /* retrieve response parameters */
+ DPNI_RSP_GET_TX_CONF_BUFFER_LAYOUT(cmd, layout);
+int dpni_set_tx_conf_buffer_layout(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ const struct dpni_buffer_layout *layout)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_CONF_BUFFER_LAYOUT,
+ DPNI_CMD_SET_TX_CONF_BUFFER_LAYOUT(cmd, layout);
+ /* send command to mc*/
+ return mc_send_command(mc_io, &cmd);
+int dpni_get_l3_chksum_validation(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_L3_CHKSUM_VALIDATION,
+ /* send command to mc*/
+ err = mc_send_command(mc_io, &cmd);
+ /* retrieve response parameters */
+ DPNI_RSP_GET_L3_CHKSUM_VALIDATION(cmd, *en);
+int dpni_set_l3_chksum_validation(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_L3_CHKSUM_VALIDATION,
+ DPNI_CMD_SET_L3_CHKSUM_VALIDATION(cmd, en);
+ /* send command to mc*/
+ return mc_send_command(mc_io, &cmd);
+int dpni_get_l4_chksum_validation(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_L4_CHKSUM_VALIDATION,
+ /* send command to mc*/
+ err = mc_send_command(mc_io, &cmd);
+ /* retrieve response parameters */
+ DPNI_RSP_GET_L4_CHKSUM_VALIDATION(cmd, *en);
+int dpni_set_l4_chksum_validation(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_L4_CHKSUM_VALIDATION,
+ DPNI_CMD_SET_L4_CHKSUM_VALIDATION(cmd, en);
+ /* send command to mc*/
+ return mc_send_command(mc_io, &cmd);
+int dpni_get_qdid(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_QDID,
+ /* send command to mc*/
+ err = mc_send_command(mc_io, &cmd);
+ /* retrieve response parameters */
+ DPNI_RSP_GET_QDID(cmd, *qdid);
+int dpni_get_sp_info(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct dpni_sp_info *sp_info)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_SP_INFO,
+ /* send command to mc*/
+ err = mc_send_command(mc_io, &cmd);
+ /* retrieve response parameters */
+ DPNI_RSP_GET_SP_INFO(cmd, sp_info);
+int dpni_get_tx_data_offset(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint16_t *data_offset)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_DATA_OFFSET,
+ /* send command to mc*/
+ err = mc_send_command(mc_io, &cmd);
+ /* retrieve response parameters */
+ DPNI_RSP_GET_TX_DATA_OFFSET(cmd, *data_offset);
+int dpni_get_counter(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ enum dpni_counter counter,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_COUNTER,
+ DPNI_CMD_GET_COUNTER(cmd, counter);
+ /* send command to mc*/
+ err = mc_send_command(mc_io, &cmd);
+ /* retrieve response parameters */
+ DPNI_RSP_GET_COUNTER(cmd, *value);
+int dpni_set_counter(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ enum dpni_counter counter,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_COUNTER,
+ DPNI_CMD_SET_COUNTER(cmd, counter, value);
+ /* send command to mc*/
+ return mc_send_command(mc_io, &cmd);
8111 + uint32_t cmd_flags,
8113 + const struct dpni_link_cfg *cfg)
8115 + struct mc_command cmd = { 0 };
8117 + /* prepare command */
8118 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_LINK_CFG,
8121 + DPNI_CMD_SET_LINK_CFG(cmd, cfg);
8123 + /* send command to mc*/
8124 + return mc_send_command(mc_io, &cmd);
8127 +int dpni_get_link_state(struct fsl_mc_io *mc_io,
8128 + uint32_t cmd_flags,
8130 + struct dpni_link_state *state)
8132 + struct mc_command cmd = { 0 };
8135 + /* prepare command */
8136 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_LINK_STATE,
8140 + /* send command to mc*/
8141 + err = mc_send_command(mc_io, &cmd);
8145 + /* retrieve response parameters */
8146 + DPNI_RSP_GET_LINK_STATE(cmd, state);
8151 +int dpni_set_tx_shaping(struct fsl_mc_io *mc_io,
8152 + uint32_t cmd_flags,
8154 + const struct dpni_tx_shaping_cfg *tx_shaper)
8156 + struct mc_command cmd = { 0 };
8158 + /* prepare command */
8159 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_SHAPING,
8162 + DPNI_CMD_SET_TX_SHAPING(cmd, tx_shaper);
8164 + /* send command to mc*/
8165 + return mc_send_command(mc_io, &cmd);
8168 +int dpni_set_max_frame_length(struct fsl_mc_io *mc_io,
8169 + uint32_t cmd_flags,
8171 + uint16_t max_frame_length)
8173 + struct mc_command cmd = { 0 };
8175 + /* prepare command */
8176 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_MAX_FRAME_LENGTH,
8179 + DPNI_CMD_SET_MAX_FRAME_LENGTH(cmd, max_frame_length);
8181 + /* send command to mc*/
8182 + return mc_send_command(mc_io, &cmd);
8185 +int dpni_get_max_frame_length(struct fsl_mc_io *mc_io,
8186 + uint32_t cmd_flags,
8188 + uint16_t *max_frame_length)
8190 + struct mc_command cmd = { 0 };
8193 + /* prepare command */
8194 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_MAX_FRAME_LENGTH,
8198 + /* send command to mc*/
8199 + err = mc_send_command(mc_io, &cmd);
8203 + /* retrieve response parameters */
8204 + DPNI_RSP_GET_MAX_FRAME_LENGTH(cmd, *max_frame_length);
8209 +int dpni_set_mtu(struct fsl_mc_io *mc_io,
8210 + uint32_t cmd_flags,
8214 + struct mc_command cmd = { 0 };
8216 + /* prepare command */
8217 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_MTU,
8220 + DPNI_CMD_SET_MTU(cmd, mtu);
8222 + /* send command to mc*/
8223 + return mc_send_command(mc_io, &cmd);
8226 +int dpni_get_mtu(struct fsl_mc_io *mc_io,
8227 + uint32_t cmd_flags,
8231 + struct mc_command cmd = { 0 };
8234 + /* prepare command */
8235 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_MTU,
8239 + /* send command to mc*/
8240 + err = mc_send_command(mc_io, &cmd);
8244 + /* retrieve response parameters */
8245 + DPNI_RSP_GET_MTU(cmd, *mtu);
8250 +int dpni_set_multicast_promisc(struct fsl_mc_io *mc_io,
8251 + uint32_t cmd_flags,
8255 + struct mc_command cmd = { 0 };
8257 + /* prepare command */
8258 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_MCAST_PROMISC,
8261 + DPNI_CMD_SET_MULTICAST_PROMISC(cmd, en);
8263 + /* send command to mc*/
8264 + return mc_send_command(mc_io, &cmd);
8267 +int dpni_get_multicast_promisc(struct fsl_mc_io *mc_io,
8268 + uint32_t cmd_flags,
8272 + struct mc_command cmd = { 0 };
8275 + /* prepare command */
8276 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_MCAST_PROMISC,
8280 + /* send command to mc*/
8281 + err = mc_send_command(mc_io, &cmd);
8285 + /* retrieve response parameters */
8286 + DPNI_RSP_GET_MULTICAST_PROMISC(cmd, *en);
8291 +int dpni_set_unicast_promisc(struct fsl_mc_io *mc_io,
8292 + uint32_t cmd_flags,
8296 + struct mc_command cmd = { 0 };
8298 + /* prepare command */
8299 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_UNICAST_PROMISC,
8302 + DPNI_CMD_SET_UNICAST_PROMISC(cmd, en);
8304 + /* send command to mc*/
8305 + return mc_send_command(mc_io, &cmd);
8308 +int dpni_get_unicast_promisc(struct fsl_mc_io *mc_io,
8309 + uint32_t cmd_flags,
8313 + struct mc_command cmd = { 0 };
8316 + /* prepare command */
8317 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_UNICAST_PROMISC,
8321 + /* send command to mc*/
8322 + err = mc_send_command(mc_io, &cmd);
8326 + /* retrieve response parameters */
8327 + DPNI_RSP_GET_UNICAST_PROMISC(cmd, *en);
8332 +int dpni_set_primary_mac_addr(struct fsl_mc_io *mc_io,
8333 + uint32_t cmd_flags,
8335 + const uint8_t mac_addr[6])
8337 + struct mc_command cmd = { 0 };
8339 + /* prepare command */
8340 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_PRIM_MAC,
8343 + DPNI_CMD_SET_PRIMARY_MAC_ADDR(cmd, mac_addr);
8345 + /* send command to mc*/
8346 + return mc_send_command(mc_io, &cmd);
8349 +int dpni_get_primary_mac_addr(struct fsl_mc_io *mc_io,
8350 + uint32_t cmd_flags,
8352 + uint8_t mac_addr[6])
8354 + struct mc_command cmd = { 0 };
8357 + /* prepare command */
8358 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_PRIM_MAC,
8362 + /* send command to mc*/
8363 + err = mc_send_command(mc_io, &cmd);
8367 + /* retrieve response parameters */
8368 + DPNI_RSP_GET_PRIMARY_MAC_ADDR(cmd, mac_addr);
8373 +int dpni_add_mac_addr(struct fsl_mc_io *mc_io,
8374 + uint32_t cmd_flags,
8376 + const uint8_t mac_addr[6])
8378 + struct mc_command cmd = { 0 };
8380 + /* prepare command */
8381 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_ADD_MAC_ADDR,
8384 + DPNI_CMD_ADD_MAC_ADDR(cmd, mac_addr);
8386 + /* send command to mc*/
8387 + return mc_send_command(mc_io, &cmd);
8390 +int dpni_remove_mac_addr(struct fsl_mc_io *mc_io,
8391 + uint32_t cmd_flags,
8393 + const uint8_t mac_addr[6])
8395 + struct mc_command cmd = { 0 };
8397 + /* prepare command */
8398 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_REMOVE_MAC_ADDR,
8401 + DPNI_CMD_REMOVE_MAC_ADDR(cmd, mac_addr);
8403 + /* send command to mc*/
8404 + return mc_send_command(mc_io, &cmd);
8407 +int dpni_clear_mac_filters(struct fsl_mc_io *mc_io,
8408 + uint32_t cmd_flags,
8413 + struct mc_command cmd = { 0 };
8415 + /* prepare command */
8416 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_MAC_FILTERS,
8419 + DPNI_CMD_CLEAR_MAC_FILTERS(cmd, unicast, multicast);
8421 + /* send command to mc*/
8422 + return mc_send_command(mc_io, &cmd);
8425 +int dpni_set_vlan_filters(struct fsl_mc_io *mc_io,
8426 + uint32_t cmd_flags,
8430 + struct mc_command cmd = { 0 };
8432 + /* prepare command */
8433 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_VLAN_FILTERS,
8436 + DPNI_CMD_SET_VLAN_FILTERS(cmd, en);
8438 + /* send command to mc*/
8439 + return mc_send_command(mc_io, &cmd);
8442 +int dpni_add_vlan_id(struct fsl_mc_io *mc_io,
8443 + uint32_t cmd_flags,
8447 + struct mc_command cmd = { 0 };
8449 + /* prepare command */
8450 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_ADD_VLAN_ID,
8453 + DPNI_CMD_ADD_VLAN_ID(cmd, vlan_id);
8455 + /* send command to mc*/
8456 + return mc_send_command(mc_io, &cmd);
8459 +int dpni_remove_vlan_id(struct fsl_mc_io *mc_io,
8460 + uint32_t cmd_flags,
8464 + struct mc_command cmd = { 0 };
8466 + /* prepare command */
8467 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_REMOVE_VLAN_ID,
8470 + DPNI_CMD_REMOVE_VLAN_ID(cmd, vlan_id);
8472 + /* send command to mc*/
8473 + return mc_send_command(mc_io, &cmd);
8476 +int dpni_clear_vlan_filters(struct fsl_mc_io *mc_io,
8477 + uint32_t cmd_flags,
8480 + struct mc_command cmd = { 0 };
8482 + /* prepare command */
8483 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_VLAN_FILTERS,
8487 + /* send command to mc*/
8488 + return mc_send_command(mc_io, &cmd);
8491 +int dpni_set_tx_selection(struct fsl_mc_io *mc_io,
8492 + uint32_t cmd_flags,
8494 + const struct dpni_tx_selection_cfg *cfg)
8496 + struct mc_command cmd = { 0 };
8498 + /* prepare command */
8499 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_SELECTION,
8502 + DPNI_CMD_SET_TX_SELECTION(cmd, cfg);
8504 + /* send command to mc*/
8505 + return mc_send_command(mc_io, &cmd);
8508 +int dpni_set_rx_tc_dist(struct fsl_mc_io *mc_io,
8509 + uint32_t cmd_flags,
8512 + const struct dpni_rx_tc_dist_cfg *cfg)
8514 + struct mc_command cmd = { 0 };
8516 + /* prepare command */
8517 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_RX_TC_DIST,
8520 + DPNI_CMD_SET_RX_TC_DIST(cmd, tc_id, cfg);
8522 + /* send command to mc*/
8523 + return mc_send_command(mc_io, &cmd);
8526 +int dpni_set_tx_flow(struct fsl_mc_io *mc_io,
8527 + uint32_t cmd_flags,
8529 + uint16_t *flow_id,
8530 + const struct dpni_tx_flow_cfg *cfg)
8532 + struct mc_command cmd = { 0 };
8535 + /* prepare command */
8536 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_FLOW,
8539 + DPNI_CMD_SET_TX_FLOW(cmd, *flow_id, cfg);
8541 + /* send command to mc*/
8542 + err = mc_send_command(mc_io, &cmd);
8546 + /* retrieve response parameters */
8547 + DPNI_RSP_SET_TX_FLOW(cmd, *flow_id);
+int dpni_get_tx_flow(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct dpni_tx_flow_attr *attr)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_FLOW,
+ DPNI_CMD_GET_TX_FLOW(cmd, flow_id);
+ /* send command to mc */
+ err = mc_send_command(mc_io, &cmd);
+ /* retrieve response parameters */
+ DPNI_RSP_GET_TX_FLOW(cmd, attr);
+int dpni_set_rx_flow(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ const struct dpni_queue_cfg *cfg)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_RX_FLOW,
+ DPNI_CMD_SET_RX_FLOW(cmd, tc_id, flow_id, cfg);
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_get_rx_flow(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct dpni_queue_attr *attr)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_RX_FLOW,
+ DPNI_CMD_GET_RX_FLOW(cmd, tc_id, flow_id);
+ /* send command to mc */
+ err = mc_send_command(mc_io, &cmd);
+ /* retrieve response parameters */
+ DPNI_RSP_GET_RX_FLOW(cmd, attr);
+int dpni_set_rx_err_queue(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ const struct dpni_queue_cfg *cfg)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_RX_ERR_QUEUE,
+ DPNI_CMD_SET_RX_ERR_QUEUE(cmd, cfg);
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_get_rx_err_queue(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct dpni_queue_attr *attr)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_RX_ERR_QUEUE,
+ /* send command to mc */
+ err = mc_send_command(mc_io, &cmd);
+ /* retrieve response parameters */
+ DPNI_RSP_GET_RX_ERR_QUEUE(cmd, attr);
+int dpni_set_tx_conf_revoke(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_CONF_REVOKE,
+ DPNI_CMD_SET_TX_CONF_REVOKE(cmd, revoke);
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_set_qos_table(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ const struct dpni_qos_tbl_cfg *cfg)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_QOS_TBL,
+ DPNI_CMD_SET_QOS_TABLE(cmd, cfg);
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_add_qos_entry(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ const struct dpni_rule_cfg *cfg,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_ADD_QOS_ENT,
+ DPNI_CMD_ADD_QOS_ENTRY(cmd, cfg, tc_id);
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_remove_qos_entry(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ const struct dpni_rule_cfg *cfg)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_REMOVE_QOS_ENT,
+ DPNI_CMD_REMOVE_QOS_ENTRY(cmd, cfg);
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_clear_qos_table(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_QOS_TBL,
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_add_fs_entry(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ const struct dpni_rule_cfg *cfg,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_ADD_FS_ENT,
+ DPNI_CMD_ADD_FS_ENTRY(cmd, tc_id, cfg, flow_id);
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_remove_fs_entry(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ const struct dpni_rule_cfg *cfg)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_REMOVE_FS_ENT,
+ DPNI_CMD_REMOVE_FS_ENTRY(cmd, tc_id, cfg);
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_clear_fs_entries(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_FS_ENT,
+ DPNI_CMD_CLEAR_FS_ENTRIES(cmd, tc_id);
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_set_vlan_insertion(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_VLAN_INSERTION,
+ cmd_flags, token);
+ DPNI_CMD_SET_VLAN_INSERTION(cmd, en);
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_set_vlan_removal(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_VLAN_REMOVAL,
+ cmd_flags, token);
+ DPNI_CMD_SET_VLAN_REMOVAL(cmd, en);
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_set_ipr(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_IPR,
+ DPNI_CMD_SET_IPR(cmd, en);
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_set_ipf(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_IPF,
+ DPNI_CMD_SET_IPF(cmd, en);
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_set_rx_tc_policing(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ const struct dpni_rx_tc_policing_cfg *cfg)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_RX_TC_POLICING,
+ DPNI_CMD_SET_RX_TC_POLICING(cmd, tc_id, cfg);
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_get_rx_tc_policing(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct dpni_rx_tc_policing_cfg *cfg)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_RX_TC_POLICING,
+ DPNI_CMD_GET_RX_TC_POLICING(cmd, tc_id);
+ /* send command to mc */
+ err = mc_send_command(mc_io, &cmd);
+ DPNI_RSP_GET_RX_TC_POLICING(cmd, cfg);
+void dpni_prepare_early_drop(const struct dpni_early_drop_cfg *cfg,
+ uint8_t *early_drop_buf)
+ uint64_t *ext_params = (uint64_t *)early_drop_buf;
+ DPNI_PREP_EARLY_DROP(ext_params, cfg);
+void dpni_extract_early_drop(struct dpni_early_drop_cfg *cfg,
+ const uint8_t *early_drop_buf)
+ uint64_t *ext_params = (uint64_t *)early_drop_buf;
+ DPNI_EXT_EARLY_DROP(ext_params, cfg);
+int dpni_set_rx_tc_early_drop(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint64_t early_drop_iova)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_RX_TC_EARLY_DROP,
+ DPNI_CMD_SET_RX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova);
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_get_rx_tc_early_drop(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint64_t early_drop_iova)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_RX_TC_EARLY_DROP,
+ DPNI_CMD_GET_RX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova);
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_set_tx_tc_early_drop(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint64_t early_drop_iova)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_TC_EARLY_DROP,
+ DPNI_CMD_SET_TX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova);
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_get_tx_tc_early_drop(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint64_t early_drop_iova)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_TC_EARLY_DROP,
+ DPNI_CMD_GET_TX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova);
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_set_rx_tc_congestion_notification(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ const struct dpni_congestion_notification_cfg *cfg)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(
+ DPNI_CMDID_SET_RX_TC_CONGESTION_NOTIFICATION,
+ DPNI_CMD_SET_RX_TC_CONGESTION_NOTIFICATION(cmd, tc_id, cfg);
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_get_rx_tc_congestion_notification(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct dpni_congestion_notification_cfg *cfg)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(
+ DPNI_CMDID_GET_RX_TC_CONGESTION_NOTIFICATION,
+ DPNI_CMD_GET_RX_TC_CONGESTION_NOTIFICATION(cmd, tc_id);
+ /* send command to mc */
+ err = mc_send_command(mc_io, &cmd);
+ DPNI_RSP_GET_RX_TC_CONGESTION_NOTIFICATION(cmd, cfg);
+int dpni_set_tx_tc_congestion_notification(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ const struct dpni_congestion_notification_cfg *cfg)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(
+ DPNI_CMDID_SET_TX_TC_CONGESTION_NOTIFICATION,
+ DPNI_CMD_SET_TX_TC_CONGESTION_NOTIFICATION(cmd, tc_id, cfg);
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_get_tx_tc_congestion_notification(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct dpni_congestion_notification_cfg *cfg)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(
+ DPNI_CMDID_GET_TX_TC_CONGESTION_NOTIFICATION,
+ DPNI_CMD_GET_TX_TC_CONGESTION_NOTIFICATION(cmd, tc_id);
+ /* send command to mc */
+ err = mc_send_command(mc_io, &cmd);
+ DPNI_RSP_GET_TX_TC_CONGESTION_NOTIFICATION(cmd, cfg);
+int dpni_set_tx_conf(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ const struct dpni_tx_conf_cfg *cfg)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_CONF,
+ DPNI_CMD_SET_TX_CONF(cmd, flow_id, cfg);
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_get_tx_conf(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct dpni_tx_conf_attr *attr)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_CONF,
+ DPNI_CMD_GET_TX_CONF(cmd, flow_id);
+ /* send command to mc */
+ err = mc_send_command(mc_io, &cmd);
+ DPNI_RSP_GET_TX_CONF(cmd, attr);
+int dpni_set_tx_conf_congestion_notification(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ const struct dpni_congestion_notification_cfg *cfg)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(
+ DPNI_CMDID_SET_TX_CONF_CONGESTION_NOTIFICATION,
+ DPNI_CMD_SET_TX_CONF_CONGESTION_NOTIFICATION(cmd, flow_id, cfg);
+ /* send command to mc */
+ return mc_send_command(mc_io, &cmd);
+int dpni_get_tx_conf_congestion_notification(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ struct dpni_congestion_notification_cfg *cfg)
+ struct mc_command cmd = { 0 };
+ /* prepare command */
+ cmd.header = mc_encode_cmd_header(
+ DPNI_CMDID_GET_TX_CONF_CONGESTION_NOTIFICATION,
+ DPNI_CMD_GET_TX_CONF_CONGESTION_NOTIFICATION(cmd, flow_id);
+ /* send command to mc */
+ err = mc_send_command(mc_io, &cmd);
+ DPNI_RSP_GET_TX_CONF_CONGESTION_NOTIFICATION(cmd, cfg);
+++ b/drivers/staging/fsl-dpaa2/ethernet/dpni.h
+/* Copyright 2013-2015 Freescale Semiconductor Inc.
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of the above-listed copyright holders nor the
+ * names of any contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+#ifndef __FSL_DPNI_H
+#define __FSL_DPNI_H
+ * Data Path Network Interface API
+ * Contains initialization APIs and runtime control APIs for DPNI
+/** General DPNI macros */
+ * Maximum number of traffic classes
+#define DPNI_MAX_TC 8
+ * Maximum number of buffer pools per DPNI
+#define DPNI_MAX_DPBP 8
+ * Maximum number of storage-profiles per DPNI
+#define DPNI_MAX_SP 2
+ * All traffic classes considered; see dpni_set_rx_flow()
+#define DPNI_ALL_TCS (uint8_t)(-1)
+ * All flows within traffic class considered; see dpni_set_rx_flow()
+#define DPNI_ALL_TC_FLOWS (uint16_t)(-1)
+ * Generate new flow ID; see dpni_set_tx_flow()
+#define DPNI_NEW_FLOW_ID (uint16_t)(-1)
+/* use for common tx-conf queue; see dpni_set_tx_conf_<x>() */
+#define DPNI_COMMON_TX_CONF (uint16_t)(-1)
+ * dpni_open() - Open a control session for the specified object
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @dpni_id: DPNI unique ID
+ * @token: Returned token; use in subsequent API calls
+ * This function can be used to open a control session for an
+ * already created object; an object may have been declared in
+ * the DPL or by calling the dpni_create() function.
+ * This function returns a unique authentication token,
+ * associated with the specific object ID and the specific MC
+ * portal; this token must be used in all subsequent commands for
+ * this specific object.
+ * Return: '0' on Success; Error code otherwise.
+int dpni_open(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ * dpni_close() - Close the control session of the object
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPNI object
+ * After this function is called, no further operations are
+ * allowed on the object without opening a new control session.
+ * Return: '0' on Success; Error code otherwise.
+int dpni_close(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+/* DPNI configuration options */
+ * Allow different distribution key profiles for different traffic classes;
+ * if not set, a single key profile is assumed
+#define DPNI_OPT_ALLOW_DIST_KEY_PER_TC 0x00000001
+ * Disable all non-error transmit confirmation; error frames are reported
+ * back to a common Tx error queue
+#define DPNI_OPT_TX_CONF_DISABLED 0x00000002
+ * Disable per-sender private Tx confirmation/error queue
+#define DPNI_OPT_PRIVATE_TX_CONF_ERROR_DISABLED 0x00000004
+ * Support distribution based on hashed key;
+ * allows statistical distribution over receive queues in a traffic class
+#define DPNI_OPT_DIST_HASH 0x00000010
+ * DEPRECATED - if this flag is selected and all new 'max_fs_entries' are
+ * '0', then backward compatibility is preserved;
+ * Support distribution based on flow steering;
+ * allows explicit control of distribution over receive queues in a traffic
+#define DPNI_OPT_DIST_FS 0x00000020
+ * Unicast filtering support
+#define DPNI_OPT_UNICAST_FILTER 0x00000080
+ * Multicast filtering support
+#define DPNI_OPT_MULTICAST_FILTER 0x00000100
+ * VLAN filtering support
+#define DPNI_OPT_VLAN_FILTER 0x00000200
+ * Support IP reassembly on received packets
+#define DPNI_OPT_IPR 0x00000800
+ * Support IP fragmentation on transmitted packets
+#define DPNI_OPT_IPF 0x00001000
+ * VLAN manipulation support
+#define DPNI_OPT_VLAN_MANIPULATION 0x00010000
+ * Support masking of QoS lookup keys
+#define DPNI_OPT_QOS_MASK_SUPPORT 0x00020000
+ * Support masking of Flow Steering lookup keys
+#define DPNI_OPT_FS_MASK_SUPPORT 0x00040000
+ * struct dpni_extended_cfg - Structure representing extended DPNI configuration
+ * @tc_cfg: TCs configuration
+ * @ipr_cfg: IP reassembly configuration
+struct dpni_extended_cfg {
+ * struct tc_cfg - TC configuration
+ * @max_dist: Maximum distribution size for Rx traffic class;
+ * supported values: 1,2,3,4,6,7,8,12,14,16,24,28,32,48,56,64,96,
+ * 112,128,192,224,256,384,448,512,768,896,1024;
+ * value '0' will be treated as '1';
+ * other unsupported values will be rounded down to the nearest
+ * supported value.
+ * @max_fs_entries: Maximum FS entries for Rx traffic class;
+ * '0' means no support for this TC;
+ uint16_t max_dist;
+ uint16_t max_fs_entries;
+ } tc_cfg[DPNI_MAX_TC];
+ * struct ipr_cfg - Structure representing IP reassembly configuration
+ * @max_reass_frm_size: Maximum size of the reassembled frame
+ * @min_frag_size_ipv4: Minimum fragment size of IPv4 fragments
+ * @min_frag_size_ipv6: Minimum fragment size of IPv6 fragments
+ * @max_open_frames_ipv4: Maximum concurrent IPv4 packets in reassembly
+ * @max_open_frames_ipv6: Maximum concurrent IPv6 packets in reassembly
+ uint16_t max_reass_frm_size;
+ uint16_t min_frag_size_ipv4;
+ uint16_t min_frag_size_ipv6;
+ uint16_t max_open_frames_ipv4;
+ uint16_t max_open_frames_ipv6;
+ * dpni_prepare_extended_cfg() - Prepare the extended configuration parameters
+ * @cfg: Extended configuration structure
+ * @ext_cfg_buf: 256-byte buffer; must be zeroed before being mapped for DMA
+ * This function has to be called before dpni_create()
+int dpni_prepare_extended_cfg(const struct dpni_extended_cfg *cfg,
+ uint8_t *ext_cfg_buf);
+ * struct dpni_cfg - Structure representing DPNI configuration
+ * @mac_addr: Primary MAC address
+ * @adv: Advanced parameters; default is all zeros;
+ * use this structure to change default settings
+ uint8_t mac_addr[6];
+ * struct adv - Advanced parameters
+ * @options: Mask of available options; use 'DPNI_OPT_<X>' values
+ * @start_hdr: Selects the packet starting header for parsing;
+ * 'NET_PROT_NONE' is treated as default: 'NET_PROT_ETH'
+ * @max_senders: Maximum number of different senders; used as the number
+ * of dedicated Tx flows; Non-power-of-2 values are rounded
+ * up to the next power-of-2 value as hardware demands it;
+ * '0' will be treated as '1'
+ * @max_tcs: Maximum number of traffic classes (for both Tx and Rx);
+ * '0' will be treated as '1'
+ * @max_unicast_filters: Maximum number of unicast filters;
+ * '0' is treated as '16'
+ * @max_multicast_filters: Maximum number of multicast filters;
+ * '0' is treated as '64'
+ * @max_qos_entries: if 'max_tcs > 1', declares the maximum entries in
+ * the QoS table; '0' is treated as '64'
+ * @max_qos_key_size: Maximum key size for the QoS look-up;
+ * '0' is treated as '24' which is enough for IPv4
+ * @max_dist_key_size: Maximum key size for the distribution;
+ * '0' is treated as '24' which is enough for IPv4 5-tuple
+ * @max_policers: Maximum number of policers;
+ * should be between '0' and max_tcs
+ * @max_congestion_ctrl: Maximum number of congestion control groups
+ * (CGs); covers early drop and congestion notification
+ * should be between '0' and ('max_tcs' + 'max_senders')
+ * @ext_cfg_iova: I/O virtual address of 256 bytes DMA-able memory
+ * filled with the extended configuration by calling
+ * dpni_prepare_extended_cfg()
+ enum net_prot start_hdr;
+ uint8_t max_senders;
+ uint8_t max_unicast_filters;
+ uint8_t max_multicast_filters;
+ uint8_t max_vlan_filters;
+ uint8_t max_qos_entries;
+ uint8_t max_qos_key_size;
+ uint8_t max_dist_key_size;
+ uint8_t max_policers;
+ uint8_t max_congestion_ctrl;
+ uint64_t ext_cfg_iova;
+ * dpni_create() - Create the DPNI object
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @cfg: Configuration structure
+ * @token: Returned token; use in subsequent API calls
+ * Create the DPNI object, allocate required resources and
+ * perform required initialization.
+ * The object can be created either by declaring it in the
+ * DPL file, or by calling this function.
+ * This function returns a unique authentication token,
+ * associated with the specific object ID and the specific MC
+ * portal; this token must be used in all subsequent calls to
+ * this specific object. For objects that are created using the
+ * DPL file, call dpni_open() function to get an authentication
+ * Return: '0' on Success; Error code otherwise.
+int dpni_create(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ const struct dpni_cfg *cfg,
+ * dpni_destroy() - Destroy the DPNI object and release all its resources.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPNI object
+ * Return: '0' on Success; error code otherwise.
+int dpni_destroy(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ * struct dpni_pools_cfg - Structure representing buffer pools configuration
+ * @num_dpbp: Number of DPBPs
+ * @pools: Array of buffer pools parameters; The number of valid entries
+ * must match 'num_dpbp' value
+struct dpni_pools_cfg {
+ * struct pools - Buffer pools parameters
+ * @dpbp_id: DPBP object ID
+ * @buffer_size: Buffer size
+ * @backup_pool: Backup pool
+ uint16_t buffer_size;
+ } pools[DPNI_MAX_DPBP];
+ * dpni_set_pools() - Set buffer pools configuration
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPNI object
+ * @cfg: Buffer pools configuration
+ * This configuration is mandatory for DPNI operation.
+ * Warning: Allowed only when DPNI is disabled
+ * Return: '0' on Success; Error code otherwise.
+int dpni_set_pools(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ const struct dpni_pools_cfg *cfg);
+ * dpni_enable() - Enable the DPNI, allow sending and receiving frames.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPNI object
+ * Return: '0' on Success; Error code otherwise.
+int dpni_enable(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ * dpni_disable() - Disable the DPNI, stop sending and receiving frames.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPNI object
+ * Return: '0' on Success; Error code otherwise.
+int dpni_disable(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ * dpni_is_enabled() - Check if the DPNI is enabled.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPNI object
+ * @en: Returns '1' if object is enabled; '0' otherwise
+ * Return: '0' on Success; Error code otherwise.
+int dpni_is_enabled(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ * dpni_reset() - Reset the DPNI, returns the object to initial state.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPNI object
+ * Return: '0' on Success; Error code otherwise.
+int dpni_reset(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ * DPNI IRQ Index and Events
+#define DPNI_IRQ_INDEX 0
+ * IRQ event - indicates a change in link state
+#define DPNI_IRQ_EVENT_LINK_CHANGED 0x00000001
+ * struct dpni_irq_cfg - IRQ configuration
+ * @addr: Address that must be written to signal a message-based interrupt
+ * @val: Value to write into irq_addr address
+ * @irq_num: A user defined number associated with this IRQ
+struct dpni_irq_cfg {
+ * dpni_set_irq() - Set IRQ information for the DPNI to trigger an interrupt.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPNI object
+ * @irq_index: Identifies the interrupt index to configure
+ * @irq_cfg: IRQ configuration
+ * Return: '0' on Success; Error code otherwise.
+int dpni_set_irq(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint8_t irq_index,
+ struct dpni_irq_cfg *irq_cfg);
+ * dpni_get_irq() - Get IRQ information from the DPNI.
+ * @mc_io: Pointer to MC portal's I/O object
+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token: Token of DPNI object
+ * @irq_index: The interrupt index to configure
+ * @type: Interrupt type: 0 represents message interrupt
+ * type (both irq_addr and irq_val are valid)
+ * @irq_cfg: IRQ attributes
+ * Return: '0' on Success; Error code otherwise.
+int dpni_get_irq(struct fsl_mc_io *mc_io,
+ uint32_t cmd_flags,
+ uint8_t irq_index,
+ struct dpni_irq_cfg *irq_cfg);
+ * dpni_set_irq_enable() - Set overall interrupt state.
9652 + * @mc_io: Pointer to MC portal's I/O object
9653 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9654 + * @token: Token of DPNI object
9655 + * @irq_index: The interrupt index to configure
9656 + * @en: Interrupt state: - enable = 1, disable = 0
9658 + * Allows GPP software to control when interrupts are generated.
9659 + * Each interrupt can have up to 32 causes. The enable/disable controls the
9660 + * overall interrupt state. If the interrupt is disabled no causes will cause
9663 + * Return: '0' on Success; Error code otherwise.
9665 +int dpni_set_irq_enable(struct fsl_mc_io *mc_io,
9666 + uint32_t cmd_flags,
9668 + uint8_t irq_index,
9672 + * dpni_get_irq_enable() - Get overall interrupt state
9673 + * @mc_io: Pointer to MC portal's I/O object
9674 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9675 + * @token: Token of DPNI object
9676 + * @irq_index: The interrupt index to configure
9677 + * @en: Returned interrupt state - enable = 1, disable = 0
9679 + * Return: '0' on Success; Error code otherwise.
9681 +int dpni_get_irq_enable(struct fsl_mc_io *mc_io,
9682 + uint32_t cmd_flags,
9684 + uint8_t irq_index,
9688 + * dpni_set_irq_mask() - Set interrupt mask.
9689 + * @mc_io: Pointer to MC portal's I/O object
9690 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9691 + * @token: Token of DPNI object
9692 + * @irq_index: The interrupt index to configure
9693 + * @mask: event mask to trigger interrupt;
9695 + * 0 = ignore event
9696 + * 1 = consider event for asserting IRQ
9698 + * Every interrupt can have up to 32 causes and the interrupt model supports
9699 + * masking/unmasking each cause independently
9701 + * Return: '0' on Success; Error code otherwise.
9703 +int dpni_set_irq_mask(struct fsl_mc_io *mc_io,
9704 + uint32_t cmd_flags,
9706 + uint8_t irq_index,
9710 + * dpni_get_irq_mask() - Get interrupt mask.
9711 + * @mc_io: Pointer to MC portal's I/O object
9712 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9713 + * @token: Token of DPNI object
9714 + * @irq_index: The interrupt index to configure
9715 + * @mask: Returned event mask to trigger interrupt
9717 + * Every interrupt can have up to 32 causes and the interrupt model supports
9718 + * masking/unmasking each cause independently
9720 + * Return: '0' on Success; Error code otherwise.
9722 +int dpni_get_irq_mask(struct fsl_mc_io *mc_io,
9723 + uint32_t cmd_flags,
9725 + uint8_t irq_index,
9729 + * dpni_get_irq_status() - Get the current status of any pending interrupts.
9730 + * @mc_io: Pointer to MC portal's I/O object
9731 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9732 + * @token: Token of DPNI object
9733 + * @irq_index: The interrupt index to configure
9734 + * @status: Returned interrupts status - one bit per cause:
9735 + * 0 = no interrupt pending
9736 + * 1 = interrupt pending
9738 + * Return: '0' on Success; Error code otherwise.
9740 +int dpni_get_irq_status(struct fsl_mc_io *mc_io,
9741 + uint32_t cmd_flags,
9743 + uint8_t irq_index,
9744 + uint32_t *status);
9747 + * dpni_clear_irq_status() - Clear a pending interrupt's status
9748 + * @mc_io: Pointer to MC portal's I/O object
9749 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9750 + * @token: Token of DPNI object
9751 + * @irq_index: The interrupt index to configure
9752 + * @status: bits to clear (W1C) - one bit per cause:
9753 + * 0 = don't change
9754 + * 1 = clear status bit
9756 + * Return: '0' on Success; Error code otherwise.
9758 +int dpni_clear_irq_status(struct fsl_mc_io *mc_io,
9759 + uint32_t cmd_flags,
9761 + uint8_t irq_index,
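The W1C ("write 1 to clear") convention described for @status can be shown in isolation; this sketch models only the bit semantics, not the MC command exchange:

```c
#include <stdint.h>

/* Write-1-to-clear: bits set in 'clear_mask' are cleared in the pending
 * status word; bits written as 0 leave the corresponding cause untouched. */
static uint32_t w1c_clear(uint32_t status, uint32_t clear_mask)
{
	return status & ~clear_mask;
}
```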
9765 + * struct dpni_attr - Structure representing DPNI attributes
9766 + * @id: DPNI object ID
9767 + * @version: DPNI version
9768 + * @start_hdr: Indicates the packet starting header for parsing
9769 + * @options: Mask of available options; reflects the value as was given in
9770 + * object's creation
9771 + * @max_senders: Maximum number of different senders; used as the number
9772 + * of dedicated Tx flows;
9773 + * @max_tcs: Maximum number of traffic classes (for both Tx and Rx)
9774 + * @max_unicast_filters: Maximum number of unicast filters
9775 + * @max_multicast_filters: Maximum number of multicast filters
9776 + * @max_vlan_filters: Maximum number of VLAN filters
9777 + * @max_qos_entries: if 'max_tcs > 1', declares the maximum entries in QoS table
9778 + * @max_qos_key_size: Maximum key size for the QoS look-up
9779 + * @max_dist_key_size: Maximum key size for the distribution look-up
9780 + * @max_policers: Maximum number of policers;
9781 + * @max_congestion_ctrl: Maximum number of congestion control groups (CGs);
9782 + * @ext_cfg_iova: I/O virtual address of 256 bytes DMA-able memory;
9783 + * call dpni_extract_extended_cfg() to extract the extended configuration
9788 + * struct version - DPNI version
9789 + * @major: DPNI major version
9790 + * @minor: DPNI minor version
9796 + enum net_prot start_hdr;
9798 + uint8_t max_senders;
9800 + uint8_t max_unicast_filters;
9801 + uint8_t max_multicast_filters;
9802 + uint8_t max_vlan_filters;
9803 + uint8_t max_qos_entries;
9804 + uint8_t max_qos_key_size;
9805 + uint8_t max_dist_key_size;
9806 + uint8_t max_policers;
9807 + uint8_t max_congestion_ctrl;
9808 + uint64_t ext_cfg_iova;
9812 + * dpni_get_attributes() - Retrieve DPNI attributes.
9813 + * @mc_io: Pointer to MC portal's I/O object
9814 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9815 + * @token: Token of DPNI object
9816 + * @attr: Object's attributes
9818 + * Return: '0' on Success; Error code otherwise.
9820 +int dpni_get_attributes(struct fsl_mc_io *mc_io,
9821 + uint32_t cmd_flags,
9823 + struct dpni_attr *attr);
9826 + * dpni_extract_extended_cfg() - Extract the extended parameters
9827 + * @cfg: extended structure
9828 + * @ext_cfg_buf: 256 bytes of DMA-able memory
9830 + * This function has to be called after dpni_get_attributes()
9832 +int dpni_extract_extended_cfg(struct dpni_extended_cfg *cfg,
9833 + const uint8_t *ext_cfg_buf);
9840 + * Extract out of frame header error
9842 +#define DPNI_ERROR_EOFHE 0x00020000
9844 + * Frame length error
9846 +#define DPNI_ERROR_FLE 0x00002000
9848 + * Frame physical error
9850 +#define DPNI_ERROR_FPE 0x00001000
9852 + * Parsing header error
9854 +#define DPNI_ERROR_PHE 0x00000020
9856 + * Parser L3 checksum error
9858 +#define DPNI_ERROR_L3CE 0x00000004
9860 + * Parser L4 checksum error
9862 +#define DPNI_ERROR_L4CE 0x00000001
9865 + * enum dpni_error_action - Defines DPNI behavior for errors
9866 + * @DPNI_ERROR_ACTION_DISCARD: Discard the frame
9867 + * @DPNI_ERROR_ACTION_CONTINUE: Continue with the normal flow
9868 + * @DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE: Send the frame to the error queue
9870 +enum dpni_error_action {
9871 + DPNI_ERROR_ACTION_DISCARD = 0,
9872 + DPNI_ERROR_ACTION_CONTINUE = 1,
9873 + DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE = 2
9877 + * struct dpni_error_cfg - Structure representing DPNI errors treatment
9878 + * @errors: Errors mask; use 'DPNI_ERROR_<X>'
9879 + * @error_action: The desired action for the errors mask
9880 + * @set_frame_annotation: Set to '1' to mark the errors in frame annotation
9881 + * status (FAS); relevant only for the non-discard action
9883 +struct dpni_error_cfg {
9885 + enum dpni_error_action error_action;
9886 + int set_frame_annotation;
9890 + * dpni_set_errors_behavior() - Set errors behavior
9891 + * @mc_io: Pointer to MC portal's I/O object
9892 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9893 + * @token: Token of DPNI object
9894 + * @cfg: Errors configuration
9896 + * This function may be called numerous times with different
9899 + * Return: '0' on Success; Error code otherwise.
9901 +int dpni_set_errors_behavior(struct fsl_mc_io *mc_io,
9902 + uint32_t cmd_flags,
9904 + struct dpni_error_cfg *cfg);
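To show how the DPNI_ERROR_<X> masks above combine into a `dpni_error_cfg`, here is a hedged sketch; the macro values are copied from this header, while the struct's 'errors' field type (uint32_t) is an assumption since that line is elided above.

```c
#include <stdint.h>

/* Masks as defined in this header */
#define DPNI_ERROR_FPE  0x00001000
#define DPNI_ERROR_L3CE 0x00000004
#define DPNI_ERROR_L4CE 0x00000001

enum dpni_error_action {
	DPNI_ERROR_ACTION_DISCARD = 0,
	DPNI_ERROR_ACTION_CONTINUE = 1,
	DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE = 2
};

struct dpni_error_cfg {
	uint32_t errors;			/* type assumed here */
	enum dpni_error_action error_action;
	int set_frame_annotation;
};

/* Discard frames with L3/L4 checksum or frame physical errors; FAS
 * marking is left off since it is irrelevant for the discard action. */
static struct dpni_error_cfg checksum_discard_cfg(void)
{
	struct dpni_error_cfg cfg = {
		.errors = DPNI_ERROR_L3CE | DPNI_ERROR_L4CE | DPNI_ERROR_FPE,
		.error_action = DPNI_ERROR_ACTION_DISCARD,
		.set_frame_annotation = 0,
	};
	return cfg;
}
```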
9907 + * DPNI buffer layout modification options
9911 + * Select to modify the time-stamp setting
9913 +#define DPNI_BUF_LAYOUT_OPT_TIMESTAMP 0x00000001
9915 + * Select to modify the parser-result setting; not applicable for Tx
9917 +#define DPNI_BUF_LAYOUT_OPT_PARSER_RESULT 0x00000002
9919 + * Select to modify the frame-status setting
9921 +#define DPNI_BUF_LAYOUT_OPT_FRAME_STATUS 0x00000004
9923 + * Select to modify the private-data-size setting
9925 +#define DPNI_BUF_LAYOUT_OPT_PRIVATE_DATA_SIZE 0x00000008
9927 + * Select to modify the data-alignment setting
9929 +#define DPNI_BUF_LAYOUT_OPT_DATA_ALIGN 0x00000010
9931 + * Select to modify the data-head-room setting
9933 +#define DPNI_BUF_LAYOUT_OPT_DATA_HEAD_ROOM 0x00000020
9935 + * Select to modify the data-tail-room setting
9937 +#define DPNI_BUF_LAYOUT_OPT_DATA_TAIL_ROOM 0x00000040
9940 + * struct dpni_buffer_layout - Structure representing DPNI buffer layout
9941 + * @options: Flags representing the suggested modifications to the buffer
9942 + * layout; Use any combination of 'DPNI_BUF_LAYOUT_OPT_<X>' flags
9943 + * @pass_timestamp: Pass timestamp value
9944 + * @pass_parser_result: Pass parser results
9945 + * @pass_frame_status: Pass frame status
9946 + * @private_data_size: Size kept for private data (in bytes)
9947 + * @data_align: Data alignment
9948 + * @data_head_room: Data head room
9949 + * @data_tail_room: Data tail room
9951 +struct dpni_buffer_layout {
9953 + int pass_timestamp;
9954 + int pass_parser_result;
9955 + int pass_frame_status;
9956 + uint16_t private_data_size;
9957 + uint16_t data_align;
9958 + uint16_t data_head_room;
9959 + uint16_t data_tail_room;
9963 + * dpni_get_rx_buffer_layout() - Retrieve Rx buffer layout attributes.
9964 + * @mc_io: Pointer to MC portal's I/O object
9965 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9966 + * @token: Token of DPNI object
9967 + * @layout: Returns buffer layout attributes
9969 + * Return: '0' on Success; Error code otherwise.
9971 +int dpni_get_rx_buffer_layout(struct fsl_mc_io *mc_io,
9972 + uint32_t cmd_flags,
9974 + struct dpni_buffer_layout *layout);
9977 + * dpni_set_rx_buffer_layout() - Set Rx buffer layout configuration.
9978 + * @mc_io: Pointer to MC portal's I/O object
9979 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9980 + * @token: Token of DPNI object
9981 + * @layout: Buffer layout configuration
9983 + * Return: '0' on Success; Error code otherwise.
9985 + * @warning Allowed only when DPNI is disabled
9987 +int dpni_set_rx_buffer_layout(struct fsl_mc_io *mc_io,
9988 + uint32_t cmd_flags,
9990 + const struct dpni_buffer_layout *layout);
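The 'options' mask decides which `dpni_buffer_layout` fields the firmware actually applies; fields whose OPT bit is clear are ignored. A self-contained sketch, with the struct mirrored from this header and the 'options' field type (uint32_t) assumed since that line is elided above:

```c
#include <stdint.h>

/* Flag values as defined in this header */
#define DPNI_BUF_LAYOUT_OPT_FRAME_STATUS 0x00000004
#define DPNI_BUF_LAYOUT_OPT_DATA_ALIGN   0x00000010

struct dpni_buffer_layout {
	uint32_t options;		/* type assumed here */
	int pass_timestamp;
	int pass_parser_result;
	int pass_frame_status;
	uint16_t private_data_size;
	uint16_t data_align;
	uint16_t data_head_room;
	uint16_t data_tail_room;
};

/* Request only frame status and 64-byte data alignment; every other
 * layout field is left untouched because its OPT bit is not set. */
static struct dpni_buffer_layout rx_layout_status_aligned(void)
{
	struct dpni_buffer_layout layout = {
		.options = DPNI_BUF_LAYOUT_OPT_FRAME_STATUS |
			   DPNI_BUF_LAYOUT_OPT_DATA_ALIGN,
		.pass_frame_status = 1,
		.data_align = 64,
	};
	return layout;
}
```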
9993 + * dpni_get_tx_buffer_layout() - Retrieve Tx buffer layout attributes.
9994 + * @mc_io: Pointer to MC portal's I/O object
9995 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9996 + * @token: Token of DPNI object
9997 + * @layout: Returns buffer layout attributes
9999 + * Return: '0' on Success; Error code otherwise.
10001 +int dpni_get_tx_buffer_layout(struct fsl_mc_io *mc_io,
10002 + uint32_t cmd_flags,
10004 + struct dpni_buffer_layout *layout);
10007 + * dpni_set_tx_buffer_layout() - Set Tx buffer layout configuration.
10008 + * @mc_io: Pointer to MC portal's I/O object
10009 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10010 + * @token: Token of DPNI object
10011 + * @layout: Buffer layout configuration
10013 + * Return: '0' on Success; Error code otherwise.
10015 + * @warning Allowed only when DPNI is disabled
10017 +int dpni_set_tx_buffer_layout(struct fsl_mc_io *mc_io,
10018 + uint32_t cmd_flags,
10020 + const struct dpni_buffer_layout *layout);
10023 + * dpni_get_tx_conf_buffer_layout() - Retrieve Tx confirmation buffer layout
10025 + * @mc_io: Pointer to MC portal's I/O object
10026 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10027 + * @token: Token of DPNI object
10028 + * @layout: Returns buffer layout attributes
10030 + * Return: '0' on Success; Error code otherwise.
10032 +int dpni_get_tx_conf_buffer_layout(struct fsl_mc_io *mc_io,
10033 + uint32_t cmd_flags,
10035 + struct dpni_buffer_layout *layout);
10038 + * dpni_set_tx_conf_buffer_layout() - Set Tx confirmation buffer layout
10040 + * @mc_io: Pointer to MC portal's I/O object
10041 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10042 + * @token: Token of DPNI object
10043 + * @layout: Buffer layout configuration
10045 + * Return: '0' on Success; Error code otherwise.
10047 + * @warning Allowed only when DPNI is disabled
10049 +int dpni_set_tx_conf_buffer_layout(struct fsl_mc_io *mc_io,
10050 + uint32_t cmd_flags,
10052 + const struct dpni_buffer_layout *layout);
10055 + * dpni_set_l3_chksum_validation() - Enable/disable L3 checksum validation
10056 + * @mc_io: Pointer to MC portal's I/O object
10057 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10058 + * @token: Token of DPNI object
10059 + * @en: Set to '1' to enable; '0' to disable
10061 + * Return: '0' on Success; Error code otherwise.
10063 +int dpni_set_l3_chksum_validation(struct fsl_mc_io *mc_io,
10064 + uint32_t cmd_flags,
10069 + * dpni_get_l3_chksum_validation() - Get L3 checksum validation mode
10070 + * @mc_io: Pointer to MC portal's I/O object
10071 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10072 + * @token: Token of DPNI object
10073 + * @en: Returns '1' if enabled; '0' otherwise
10075 + * Return: '0' on Success; Error code otherwise.
10077 +int dpni_get_l3_chksum_validation(struct fsl_mc_io *mc_io,
10078 + uint32_t cmd_flags,
10083 + * dpni_set_l4_chksum_validation() - Enable/disable L4 checksum validation
10084 + * @mc_io: Pointer to MC portal's I/O object
10085 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10086 + * @token: Token of DPNI object
10087 + * @en: Set to '1' to enable; '0' to disable
10089 + * Return: '0' on Success; Error code otherwise.
10091 +int dpni_set_l4_chksum_validation(struct fsl_mc_io *mc_io,
10092 + uint32_t cmd_flags,
10097 + * dpni_get_l4_chksum_validation() - Get L4 checksum validation mode
10098 + * @mc_io: Pointer to MC portal's I/O object
10099 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10100 + * @token: Token of DPNI object
10101 + * @en: Returns '1' if enabled; '0' otherwise
10103 + * Return: '0' on Success; Error code otherwise.
10105 +int dpni_get_l4_chksum_validation(struct fsl_mc_io *mc_io,
10106 + uint32_t cmd_flags,
10111 + * dpni_get_qdid() - Get the Queuing Destination ID (QDID) that should be used
10112 + * for enqueue operations
10113 + * @mc_io: Pointer to MC portal's I/O object
10114 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10115 + * @token: Token of DPNI object
10116 + * @qdid: Returned virtual QDID value that should be used as an argument
10117 + * in all enqueue operations
10119 + * Return: '0' on Success; Error code otherwise.
10121 +int dpni_get_qdid(struct fsl_mc_io *mc_io,
10122 + uint32_t cmd_flags,
10127 + * struct dpni_sp_info - Structure representing DPNI storage-profile information
10128 + * (relevant only for DPNI owned by AIOP)
10129 + * @spids: array of storage-profiles
10131 +struct dpni_sp_info {
10132 + uint16_t spids[DPNI_MAX_SP];
10136 + * dpni_get_sp_info() - Get the AIOP storage profile IDs associated with the DPNI
10137 + * @mc_io: Pointer to MC portal's I/O object
10138 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10139 + * @token: Token of DPNI object
10140 + * @sp_info: Returned AIOP storage-profile information
10142 + * Return: '0' on Success; Error code otherwise.
10144 + * @warning Only relevant for a DPNI that belongs to an AIOP container.
10146 +int dpni_get_sp_info(struct fsl_mc_io *mc_io,
10147 + uint32_t cmd_flags,
10149 + struct dpni_sp_info *sp_info);
10152 + * dpni_get_tx_data_offset() - Get the Tx data offset (from start of buffer)
10153 + * @mc_io: Pointer to MC portal's I/O object
10154 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10155 + * @token: Token of DPNI object
10156 + * @data_offset: Tx data offset (from start of buffer)
10158 + * Return: '0' on Success; Error code otherwise.
10160 +int dpni_get_tx_data_offset(struct fsl_mc_io *mc_io,
10161 + uint32_t cmd_flags,
10163 + uint16_t *data_offset);
10166 + * enum dpni_counter - DPNI counter types
10167 + * @DPNI_CNT_ING_FRAME: Counts ingress frames
10168 + * @DPNI_CNT_ING_BYTE: Counts ingress bytes
10169 + * @DPNI_CNT_ING_FRAME_DROP: Counts ingress frames dropped due to explicit
10171 + * @DPNI_CNT_ING_FRAME_DISCARD: Counts ingress frames discarded due to errors
10172 + * @DPNI_CNT_ING_MCAST_FRAME: Counts ingress multicast frames
10173 + * @DPNI_CNT_ING_MCAST_BYTE: Counts ingress multicast bytes
10174 + * @DPNI_CNT_ING_BCAST_FRAME: Counts ingress broadcast frames
10175 + * @DPNI_CNT_ING_BCAST_BYTES: Counts ingress broadcast bytes
10176 + * @DPNI_CNT_EGR_FRAME: Counts egress frames
10177 + * @DPNI_CNT_EGR_BYTE: Counts egress bytes
10178 + * @DPNI_CNT_EGR_FRAME_DISCARD: Counts egress frames discarded due to errors
10180 +enum dpni_counter {
10181 + DPNI_CNT_ING_FRAME = 0x0,
10182 + DPNI_CNT_ING_BYTE = 0x1,
10183 + DPNI_CNT_ING_FRAME_DROP = 0x2,
10184 + DPNI_CNT_ING_FRAME_DISCARD = 0x3,
10185 + DPNI_CNT_ING_MCAST_FRAME = 0x4,
10186 + DPNI_CNT_ING_MCAST_BYTE = 0x5,
10187 + DPNI_CNT_ING_BCAST_FRAME = 0x6,
10188 + DPNI_CNT_ING_BCAST_BYTES = 0x7,
10189 + DPNI_CNT_EGR_FRAME = 0x8,
10190 + DPNI_CNT_EGR_BYTE = 0x9,
10191 + DPNI_CNT_EGR_FRAME_DISCARD = 0xa
10195 + * dpni_get_counter() - Read a specific DPNI counter
10196 + * @mc_io: Pointer to MC portal's I/O object
10197 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10198 + * @token: Token of DPNI object
10199 + * @counter: The requested counter
10200 + * @value: Returned counter's current value
10202 + * Return: '0' on Success; Error code otherwise.
10204 +int dpni_get_counter(struct fsl_mc_io *mc_io,
10205 + uint32_t cmd_flags,
10207 + enum dpni_counter counter,
10208 + uint64_t *value);
10211 + * dpni_set_counter() - Set (or clear) a specific DPNI counter
10212 + * @mc_io: Pointer to MC portal's I/O object
10213 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10214 + * @token: Token of DPNI object
10215 + * @counter: The requested counter
10216 + * @value: New counter value; typically pass '0' for resetting
10219 + * Return: '0' on Success; Error code otherwise.
10221 +int dpni_set_counter(struct fsl_mc_io *mc_io,
10222 + uint32_t cmd_flags,
10224 + enum dpni_counter counter,
10228 + * Enable auto-negotiation
10230 +#define DPNI_LINK_OPT_AUTONEG 0x0000000000000001ULL
10232 + * Enable half-duplex mode
10234 +#define DPNI_LINK_OPT_HALF_DUPLEX 0x0000000000000002ULL
10236 + * Enable pause frames
10238 +#define DPNI_LINK_OPT_PAUSE 0x0000000000000004ULL
10240 + * Enable asymmetric pause frames
10242 +#define DPNI_LINK_OPT_ASYM_PAUSE 0x0000000000000008ULL
10245 + * struct dpni_link_cfg - Structure representing DPNI link configuration
10247 + * @options: Mask of available options; use 'DPNI_LINK_OPT_<X>' values
10249 +struct dpni_link_cfg {
10251 + uint64_t options;
10255 + * dpni_set_link_cfg() - Set the link configuration.
10256 + * @mc_io: Pointer to MC portal's I/O object
10257 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10258 + * @token: Token of DPNI object
10259 + * @cfg: Link configuration
10261 + * Return: '0' on Success; Error code otherwise.
10263 +int dpni_set_link_cfg(struct fsl_mc_io *mc_io,
10264 + uint32_t cmd_flags,
10266 + const struct dpni_link_cfg *cfg);
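The DPNI_LINK_OPT_<X> flags are plain bit masks in the 64-bit 'options' word, so building a link configuration and inspecting a returned link state is ordinary bit arithmetic. A small self-contained sketch (mask values copied from the defines above):

```c
#include <stdint.h>

/* Mask values as defined in this header */
#define DPNI_LINK_OPT_AUTONEG 0x0000000000000001ULL
#define DPNI_LINK_OPT_PAUSE   0x0000000000000004ULL

/* Request auto-negotiation with symmetric pause frames enabled. */
static uint64_t autoneg_pause_options(void)
{
	return DPNI_LINK_OPT_AUTONEG | DPNI_LINK_OPT_PAUSE;
}

/* Test a single option flag in an options mask (e.g. one returned
 * in dpni_link_state by dpni_get_link_state()). */
static int link_opt_set(uint64_t options, uint64_t flag)
{
	return (options & flag) != 0;
}
```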
10269 + * struct dpni_link_state - Structure representing DPNI link state
10271 + * @options: Mask of available options; use 'DPNI_LINK_OPT_<X>' values
10272 + * @up: Link state; '0' for down, '1' for up
10274 +struct dpni_link_state {
10276 + uint64_t options;
10281 + * dpni_get_link_state() - Return the link state (either up or down)
10282 + * @mc_io: Pointer to MC portal's I/O object
10283 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10284 + * @token: Token of DPNI object
10285 + * @state: Returned link state;
10287 + * Return: '0' on Success; Error code otherwise.
10289 +int dpni_get_link_state(struct fsl_mc_io *mc_io,
10290 + uint32_t cmd_flags,
10292 + struct dpni_link_state *state);
10295 + * struct dpni_tx_shaping_cfg - Structure representing DPNI Tx shaping configuration
10296 + * @rate_limit: rate in Mbps
10297 + * @max_burst_size: burst size in bytes (up to 64KB)
10299 +struct dpni_tx_shaping_cfg {
10300 + uint32_t rate_limit;
10301 + uint16_t max_burst_size;
10305 + * dpni_set_tx_shaping() - Set the transmit shaping
10306 + * @mc_io: Pointer to MC portal's I/O object
10307 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10308 + * @token: Token of DPNI object
10309 + * @tx_shaper: tx shaping configuration
10311 + * Return: '0' on Success; Error code otherwise.
10313 +int dpni_set_tx_shaping(struct fsl_mc_io *mc_io,
10314 + uint32_t cmd_flags,
10316 + const struct dpni_tx_shaping_cfg *tx_shaper);
10319 + * dpni_set_max_frame_length() - Set the maximum received frame length.
10320 + * @mc_io: Pointer to MC portal's I/O object
10321 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10322 + * @token: Token of DPNI object
10323 + * @max_frame_length: Maximum received frame length (in
10324 + * bytes); frame is discarded if its
10325 + * length exceeds this value
10327 + * Return: '0' on Success; Error code otherwise.
10329 +int dpni_set_max_frame_length(struct fsl_mc_io *mc_io,
10330 + uint32_t cmd_flags,
10332 + uint16_t max_frame_length);
10335 + * dpni_get_max_frame_length() - Get the maximum received frame length.
10336 + * @mc_io: Pointer to MC portal's I/O object
10337 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10338 + * @token: Token of DPNI object
10339 + * @max_frame_length: Maximum received frame length (in
10340 + * bytes); frame is discarded if its
10341 + * length exceeds this value
10343 + * Return: '0' on Success; Error code otherwise.
10345 +int dpni_get_max_frame_length(struct fsl_mc_io *mc_io,
10346 + uint32_t cmd_flags,
10348 + uint16_t *max_frame_length);
10351 + * dpni_set_mtu() - Set the MTU for the interface.
10352 + * @mc_io: Pointer to MC portal's I/O object
10353 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10354 + * @token: Token of DPNI object
10355 + * @mtu: MTU length (in bytes)
10357 + * MTU determines the maximum fragment size for performing IP
10358 + * fragmentation on egress packets.
10359 + * Return: '0' on Success; Error code otherwise.
10361 +int dpni_set_mtu(struct fsl_mc_io *mc_io,
10362 + uint32_t cmd_flags,
10367 + * dpni_get_mtu() - Get the MTU.
10368 + * @mc_io: Pointer to MC portal's I/O object
10369 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10370 + * @token: Token of DPNI object
10371 + * @mtu: Returned MTU length (in bytes)
10373 + * Return: '0' on Success; Error code otherwise.
10375 +int dpni_get_mtu(struct fsl_mc_io *mc_io,
10376 + uint32_t cmd_flags,
10381 + * dpni_set_multicast_promisc() - Enable/disable multicast promiscuous mode
10382 + * @mc_io: Pointer to MC portal's I/O object
10383 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10384 + * @token: Token of DPNI object
10385 + * @en: Set to '1' to enable; '0' to disable
10387 + * Return: '0' on Success; Error code otherwise.
10389 +int dpni_set_multicast_promisc(struct fsl_mc_io *mc_io,
10390 + uint32_t cmd_flags,
10395 + * dpni_get_multicast_promisc() - Get multicast promiscuous mode
10396 + * @mc_io: Pointer to MC portal's I/O object
10397 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10398 + * @token: Token of DPNI object
10399 + * @en: Returns '1' if enabled; '0' otherwise
10401 + * Return: '0' on Success; Error code otherwise.
10403 +int dpni_get_multicast_promisc(struct fsl_mc_io *mc_io,
10404 + uint32_t cmd_flags,
10409 + * dpni_set_unicast_promisc() - Enable/disable unicast promiscuous mode
10410 + * @mc_io: Pointer to MC portal's I/O object
10411 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10412 + * @token: Token of DPNI object
10413 + * @en: Set to '1' to enable; '0' to disable
10415 + * Return: '0' on Success; Error code otherwise.
10417 +int dpni_set_unicast_promisc(struct fsl_mc_io *mc_io,
10418 + uint32_t cmd_flags,
10423 + * dpni_get_unicast_promisc() - Get unicast promiscuous mode
10424 + * @mc_io: Pointer to MC portal's I/O object
10425 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10426 + * @token: Token of DPNI object
10427 + * @en: Returns '1' if enabled; '0' otherwise
10429 + * Return: '0' on Success; Error code otherwise.
10431 +int dpni_get_unicast_promisc(struct fsl_mc_io *mc_io,
10432 + uint32_t cmd_flags,
10437 + * dpni_set_primary_mac_addr() - Set the primary MAC address
10438 + * @mc_io: Pointer to MC portal's I/O object
10439 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10440 + * @token: Token of DPNI object
10441 + * @mac_addr: MAC address to set as primary address
10443 + * Return: '0' on Success; Error code otherwise.
10445 +int dpni_set_primary_mac_addr(struct fsl_mc_io *mc_io,
10446 + uint32_t cmd_flags,
10448 + const uint8_t mac_addr[6]);
10451 + * dpni_get_primary_mac_addr() - Get the primary MAC address
10452 + * @mc_io: Pointer to MC portal's I/O object
10453 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10454 + * @token: Token of DPNI object
10455 + * @mac_addr: Returned MAC address
10457 + * Return: '0' on Success; Error code otherwise.
10459 +int dpni_get_primary_mac_addr(struct fsl_mc_io *mc_io,
10460 + uint32_t cmd_flags,
10462 + uint8_t mac_addr[6]);
10465 + * dpni_add_mac_addr() - Add MAC address filter
10466 + * @mc_io: Pointer to MC portal's I/O object
10467 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10468 + * @token: Token of DPNI object
10469 + * @mac_addr: MAC address to add
10471 + * Return: '0' on Success; Error code otherwise.
10473 +int dpni_add_mac_addr(struct fsl_mc_io *mc_io,
10474 + uint32_t cmd_flags,
10476 + const uint8_t mac_addr[6]);
10479 + * dpni_remove_mac_addr() - Remove MAC address filter
10480 + * @mc_io: Pointer to MC portal's I/O object
10481 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10482 + * @token: Token of DPNI object
10483 + * @mac_addr: MAC address to remove
10485 + * Return: '0' on Success; Error code otherwise.
10487 +int dpni_remove_mac_addr(struct fsl_mc_io *mc_io,
10488 + uint32_t cmd_flags,
10490 + const uint8_t mac_addr[6]);
10493 + * dpni_clear_mac_filters() - Clear all unicast and/or multicast MAC filters
10494 + * @mc_io: Pointer to MC portal's I/O object
10495 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10496 + * @token: Token of DPNI object
10497 + * @unicast: Set to '1' to clear unicast addresses
10498 + * @multicast: Set to '1' to clear multicast addresses
10500 + * The primary MAC address is not cleared by this operation.
10502 + * Return: '0' on Success; Error code otherwise.
10504 +int dpni_clear_mac_filters(struct fsl_mc_io *mc_io,
10505 + uint32_t cmd_flags,
10511 + * dpni_set_vlan_filters() - Enable/disable VLAN filtering mode
10512 + * @mc_io: Pointer to MC portal's I/O object
10513 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10514 + * @token: Token of DPNI object
10515 + * @en: Set to '1' to enable; '0' to disable
10517 + * Return: '0' on Success; Error code otherwise.
10519 +int dpni_set_vlan_filters(struct fsl_mc_io *mc_io,
10520 + uint32_t cmd_flags,
10525 + * dpni_add_vlan_id() - Add VLAN ID filter
10526 + * @mc_io: Pointer to MC portal's I/O object
10527 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10528 + * @token: Token of DPNI object
10529 + * @vlan_id: VLAN ID to add
10531 + * Return: '0' on Success; Error code otherwise.
10533 +int dpni_add_vlan_id(struct fsl_mc_io *mc_io,
10534 + uint32_t cmd_flags,
10536 + uint16_t vlan_id);
10539 + * dpni_remove_vlan_id() - Remove VLAN ID filter
10540 + * @mc_io: Pointer to MC portal's I/O object
10541 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10542 + * @token: Token of DPNI object
10543 + * @vlan_id: VLAN ID to remove
10545 + * Return: '0' on Success; Error code otherwise.
10547 +int dpni_remove_vlan_id(struct fsl_mc_io *mc_io,
10548 + uint32_t cmd_flags,
10550 + uint16_t vlan_id);
10553 + * dpni_clear_vlan_filters() - Clear all VLAN filters
10554 + * @mc_io: Pointer to MC portal's I/O object
10555 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10556 + * @token: Token of DPNI object
10558 + * Return: '0' on Success; Error code otherwise.
10560 +int dpni_clear_vlan_filters(struct fsl_mc_io *mc_io,
10561 + uint32_t cmd_flags,
10565 + * enum dpni_tx_schedule_mode - DPNI Tx scheduling mode
10566 + * @DPNI_TX_SCHED_STRICT_PRIORITY: strict priority
10567 + * @DPNI_TX_SCHED_WEIGHTED: weighted based scheduling
10569 +enum dpni_tx_schedule_mode {
10570 + DPNI_TX_SCHED_STRICT_PRIORITY,
10571 + DPNI_TX_SCHED_WEIGHTED,
10575 + * struct dpni_tx_schedule_cfg - Structure representing Tx
10576 + * scheduling configuration
10577 + * @mode: scheduling mode
10578 + * @delta_bandwidth: Bandwidth represented in weights from 100 to 10000;
10579 + * not applicable for 'strict-priority' mode;
10581 +struct dpni_tx_schedule_cfg {
10582 + enum dpni_tx_schedule_mode mode;
10583 + uint16_t delta_bandwidth;
10587 + * struct dpni_tx_selection_cfg - Structure representing transmission
10588 + * selection configuration
10589 + * @tc_sched: An array of traffic-class scheduling configurations
10591 +struct dpni_tx_selection_cfg {
10592 + struct dpni_tx_schedule_cfg tc_sched[DPNI_MAX_TC];
10596 + * dpni_set_tx_selection() - Set transmission selection configuration
10597 + * @mc_io: Pointer to MC portal's I/O object
10598 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10599 + * @token: Token of DPNI object
10600 + * @cfg: transmission selection configuration
10602 + * warning: Allowed only when DPNI is disabled
10604 + * Return: '0' on Success; Error code otherwise.
10606 +int dpni_set_tx_selection(struct fsl_mc_io *mc_io,
10607 + uint32_t cmd_flags,
10609 + const struct dpni_tx_selection_cfg *cfg);
+
+/**
+ * enum dpni_dist_mode - DPNI distribution mode
+ * @DPNI_DIST_MODE_NONE: No distribution
+ * @DPNI_DIST_MODE_HASH: Use hash distribution; only relevant if
+ *	the 'DPNI_OPT_DIST_HASH' option was set at DPNI creation
+ * @DPNI_DIST_MODE_FS: Use explicit flow steering; only relevant if
+ *	the 'DPNI_OPT_DIST_FS' option was set at DPNI creation
+ */
+enum dpni_dist_mode {
+	DPNI_DIST_MODE_NONE = 0,
+	DPNI_DIST_MODE_HASH = 1,
+	DPNI_DIST_MODE_FS = 2
+};
+
+/**
+ * enum dpni_fs_miss_action - DPNI Flow Steering miss action
+ * @DPNI_FS_MISS_DROP: In case of no-match, drop the frame
+ * @DPNI_FS_MISS_EXPLICIT_FLOWID: In case of no-match, use explicit flow-id
+ * @DPNI_FS_MISS_HASH: In case of no-match, distribute using hash
+ */
+enum dpni_fs_miss_action {
+	DPNI_FS_MISS_DROP = 0,
+	DPNI_FS_MISS_EXPLICIT_FLOWID = 1,
+	DPNI_FS_MISS_HASH = 2
+};
+
+/**
+ * struct dpni_fs_tbl_cfg - Flow Steering table configuration
+ * @miss_action: Miss action selection
+ * @default_flow_id: Used when 'miss_action = DPNI_FS_MISS_EXPLICIT_FLOWID'
+ */
+struct dpni_fs_tbl_cfg {
+	enum dpni_fs_miss_action	miss_action;
+	uint16_t			default_flow_id;
+};
+
+/**
+ * dpni_prepare_key_cfg() - function to prepare extract parameters
+ * @cfg: defining a full Key Generation profile (rule)
+ * @key_cfg_buf: Zeroed 256 bytes of memory before mapping it to DMA
+ *
+ * This function has to be called before the following functions:
+ *	- dpni_set_rx_tc_dist()
+ *	- dpni_set_qos_table()
+ */
+int dpni_prepare_key_cfg(const struct dpkg_profile_cfg	*cfg,
+			 uint8_t			*key_cfg_buf);
+
+/**
+ * struct dpni_rx_tc_dist_cfg - Rx traffic class distribution configuration
+ * @dist_size: Set the distribution size;
+ *	supported values: 1,2,3,4,6,7,8,12,14,16,24,28,32,48,56,64,96,
+ *	112,128,192,224,256,384,448,512,768,896,1024
+ * @dist_mode: Distribution mode
+ * @key_cfg_iova: I/O virtual address of 256 bytes DMA-able memory filled with
+ *	the extractions to be used for the distribution key by calling
+ *	dpni_prepare_key_cfg(); relevant only when
+ *	'dist_mode != DPNI_DIST_MODE_NONE', otherwise it can be '0'
+ * @fs_cfg: Flow Steering table configuration; only relevant if
+ *	'dist_mode = DPNI_DIST_MODE_FS'
+ */
+struct dpni_rx_tc_dist_cfg {
+	uint16_t		dist_size;
+	enum dpni_dist_mode	dist_mode;
+	uint64_t		key_cfg_iova;
+	struct dpni_fs_tbl_cfg	fs_cfg;
+};
+
+/**
+ * dpni_set_rx_tc_dist() - Set Rx traffic class distribution configuration
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @tc_id:	Traffic class selection (0-7)
+ * @cfg:	Traffic class distribution configuration
+ *
+ * warning: if 'dist_mode != DPNI_DIST_MODE_NONE', call dpni_prepare_key_cfg()
+ *	first to prepare the key_cfg_iova parameter
+ *
+ * Return:	'0' on Success; error code otherwise.
+ */
+int dpni_set_rx_tc_dist(struct fsl_mc_io			*mc_io,
+			uint32_t				cmd_flags,
+			uint16_t				token,
+			uint8_t					tc_id,
+			const struct dpni_rx_tc_dist_cfg	*cfg);
+
+/**
+ * Set to select color aware mode (otherwise - color blind)
+ */
+#define DPNI_POLICER_OPT_COLOR_AWARE	0x00000001
+/**
+ * Set to discard frame with RED color
+ */
+#define DPNI_POLICER_OPT_DISCARD_RED	0x00000002
+
+/**
+ * enum dpni_policer_mode - selecting the policer mode
+ * @DPNI_POLICER_MODE_NONE: Policer is disabled
+ * @DPNI_POLICER_MODE_PASS_THROUGH: Policer pass through
+ * @DPNI_POLICER_MODE_RFC_2698: Policer algorithm RFC 2698
+ * @DPNI_POLICER_MODE_RFC_4115: Policer algorithm RFC 4115
+ */
+enum dpni_policer_mode {
+	DPNI_POLICER_MODE_NONE = 0,
+	DPNI_POLICER_MODE_PASS_THROUGH,
+	DPNI_POLICER_MODE_RFC_2698,
+	DPNI_POLICER_MODE_RFC_4115
+};
+
+/**
+ * enum dpni_policer_unit - DPNI policer units
+ * @DPNI_POLICER_UNIT_BYTES: bytes units
+ * @DPNI_POLICER_UNIT_FRAMES: frames units
+ */
+enum dpni_policer_unit {
+	DPNI_POLICER_UNIT_BYTES = 0,
+	DPNI_POLICER_UNIT_FRAMES
+};
+
+/**
+ * enum dpni_policer_color - selecting the policer color
+ * @DPNI_POLICER_COLOR_GREEN: Green color
+ * @DPNI_POLICER_COLOR_YELLOW: Yellow color
+ * @DPNI_POLICER_COLOR_RED: Red color
+ */
+enum dpni_policer_color {
+	DPNI_POLICER_COLOR_GREEN = 0,
+	DPNI_POLICER_COLOR_YELLOW,
+	DPNI_POLICER_COLOR_RED
+};
+
+/**
+ * struct dpni_rx_tc_policing_cfg - Policer configuration
+ * @options: Mask of available options; use 'DPNI_POLICER_OPT_<X>' values
+ * @mode: Policer mode
+ * @default_color: For pass-through mode the policer re-colors with this
+ *	color any incoming packets. For color-aware non-pass-through mode:
+ *	policer re-colors with this color all packets with FD[DROPP]>2.
+ * @units: Bytes or Packets
+ * @cir: Committed information rate (CIR) in Kbps or packets/second
+ * @cbs: Committed burst size (CBS) in bytes or packets
+ * @eir: Peak information rate (PIR, rfc2698) in Kbps or packets/second
+ *	Excess information rate (EIR, rfc4115) in Kbps or packets/second
+ * @ebs: Peak burst size (PBS, rfc2698) in bytes or packets
+ *	Excess burst size (EBS, rfc4115) in bytes or packets
+ */
+struct dpni_rx_tc_policing_cfg {
+	uint32_t			options;
+	enum dpni_policer_mode		mode;
+	enum dpni_policer_unit		units;
+	enum dpni_policer_color		default_color;
+	uint32_t			cir;
+	uint32_t			cbs;
+	uint32_t			eir;
+	uint32_t			ebs;
+};
+
+/**
+ * dpni_set_rx_tc_policing() - Set Rx traffic class policing configuration
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @tc_id:	Traffic class selection (0-7)
+ * @cfg:	Traffic class policing configuration
+ *
+ * Return:	'0' on Success; error code otherwise.
+ */
+int dpni_set_rx_tc_policing(struct fsl_mc_io				*mc_io,
+			    uint32_t					cmd_flags,
+			    uint16_t					token,
+			    uint8_t					tc_id,
+			    const struct dpni_rx_tc_policing_cfg	*cfg);
+
+/**
+ * dpni_get_rx_tc_policing() - Get Rx traffic class policing configuration
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @tc_id:	Traffic class selection (0-7)
+ * @cfg:	Traffic class policing configuration
+ *
+ * Return:	'0' on Success; error code otherwise.
+ */
+int dpni_get_rx_tc_policing(struct fsl_mc_io			*mc_io,
+			    uint32_t				cmd_flags,
+			    uint16_t				token,
+			    uint8_t				tc_id,
+			    struct dpni_rx_tc_policing_cfg	*cfg);
+
+/**
+ * enum dpni_congestion_unit - DPNI congestion units
+ * @DPNI_CONGESTION_UNIT_BYTES: bytes units
+ * @DPNI_CONGESTION_UNIT_FRAMES: frames units
+ */
+enum dpni_congestion_unit {
+	DPNI_CONGESTION_UNIT_BYTES = 0,
+	DPNI_CONGESTION_UNIT_FRAMES
+};
+
+/**
+ * enum dpni_early_drop_mode - DPNI early drop mode
+ * @DPNI_EARLY_DROP_MODE_NONE: early drop is disabled
+ * @DPNI_EARLY_DROP_MODE_TAIL: early drop in taildrop mode
+ * @DPNI_EARLY_DROP_MODE_WRED: early drop in WRED mode
+ */
+enum dpni_early_drop_mode {
+	DPNI_EARLY_DROP_MODE_NONE = 0,
+	DPNI_EARLY_DROP_MODE_TAIL,
+	DPNI_EARLY_DROP_MODE_WRED
+};
+
+/**
+ * struct dpni_wred_cfg - WRED configuration
+ * @max_threshold: maximum threshold; above this threshold all packets are
+ *	discarded; must be less than 2^39; approximated to be expressed as
+ *	(x+256)*2^(y-1) due to HW implementation.
+ * @min_threshold: minimum threshold at which packets may start to be discarded
+ * @drop_probability: probability that a packet will be discarded (1-100,
+ *	associated with the max_threshold).
+ */
+struct dpni_wred_cfg {
+	uint64_t	max_threshold;
+	uint64_t	min_threshold;
+	uint8_t		drop_probability;
+};
+
+/**
+ * struct dpni_early_drop_cfg - early-drop configuration
+ * @mode: drop mode
+ * @units: units type
+ * @green: WRED - 'green' configuration
+ * @yellow: WRED - 'yellow' configuration
+ * @red: WRED - 'red' configuration
+ * @tail_drop_threshold: tail drop threshold
+ */
+struct dpni_early_drop_cfg {
+	enum dpni_early_drop_mode	mode;
+	enum dpni_congestion_unit	units;
+	struct dpni_wred_cfg		green;
+	struct dpni_wred_cfg		yellow;
+	struct dpni_wred_cfg		red;
+	uint32_t			tail_drop_threshold;
+};
+
+/**
+ * dpni_prepare_early_drop() - prepare an early-drop configuration
+ * @cfg: Early-drop configuration
+ * @early_drop_buf: Zeroed 256 bytes of memory before mapping it to DMA
+ *
+ * This function has to be called before dpni_set_rx_tc_early_drop() or
+ * dpni_set_tx_tc_early_drop()
+ */
+void dpni_prepare_early_drop(const struct dpni_early_drop_cfg	*cfg,
+			     uint8_t				*early_drop_buf);
+
+/**
+ * dpni_extract_early_drop() - extract the early-drop configuration
+ * @cfg: Early-drop configuration
+ * @early_drop_buf: Zeroed 256 bytes of memory before mapping it to DMA
+ *
+ * This function has to be called after dpni_get_rx_tc_early_drop() or
+ * dpni_get_tx_tc_early_drop()
+ */
+void dpni_extract_early_drop(struct dpni_early_drop_cfg	*cfg,
+			     const uint8_t		*early_drop_buf);
+
+/**
+ * dpni_set_rx_tc_early_drop() - Set Rx traffic class early-drop configuration
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @tc_id:	Traffic class selection (0-7)
+ * @early_drop_iova: I/O virtual address of 256 bytes DMA-able memory filled
+ *	with the early-drop configuration by calling dpni_prepare_early_drop()
+ *
+ * warning: Before calling this function, call dpni_prepare_early_drop() to
+ *	prepare the early_drop_iova parameter
+ *
+ * Return:	'0' on Success; error code otherwise.
+ */
+int dpni_set_rx_tc_early_drop(struct fsl_mc_io	*mc_io,
+			      uint32_t		cmd_flags,
+			      uint16_t		token,
+			      uint8_t		tc_id,
+			      uint64_t		early_drop_iova);
+
+/**
+ * dpni_get_rx_tc_early_drop() - Get Rx traffic class early-drop configuration
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @tc_id:	Traffic class selection (0-7)
+ * @early_drop_iova: I/O virtual address of 256 bytes DMA-able memory
+ *
+ * warning: After calling this function, call dpni_extract_early_drop() to
+ *	get the early drop configuration
+ *
+ * Return:	'0' on Success; error code otherwise.
+ */
+int dpni_get_rx_tc_early_drop(struct fsl_mc_io	*mc_io,
+			      uint32_t		cmd_flags,
+			      uint16_t		token,
+			      uint8_t		tc_id,
+			      uint64_t		early_drop_iova);
+
+/**
+ * dpni_set_tx_tc_early_drop() - Set Tx traffic class early-drop configuration
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @tc_id:	Traffic class selection (0-7)
+ * @early_drop_iova: I/O virtual address of 256 bytes DMA-able memory filled
+ *	with the early-drop configuration by calling dpni_prepare_early_drop()
+ *
+ * warning: Before calling this function, call dpni_prepare_early_drop() to
+ *	prepare the early_drop_iova parameter
+ *
+ * Return:	'0' on Success; error code otherwise.
+ */
+int dpni_set_tx_tc_early_drop(struct fsl_mc_io	*mc_io,
+			      uint32_t		cmd_flags,
+			      uint16_t		token,
+			      uint8_t		tc_id,
+			      uint64_t		early_drop_iova);
+
+/**
+ * dpni_get_tx_tc_early_drop() - Get Tx traffic class early-drop configuration
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @tc_id:	Traffic class selection (0-7)
+ * @early_drop_iova: I/O virtual address of 256 bytes DMA-able memory
+ *
+ * warning: After calling this function, call dpni_extract_early_drop() to
+ *	get the early drop configuration
+ *
+ * Return:	'0' on Success; error code otherwise.
+ */
+int dpni_get_tx_tc_early_drop(struct fsl_mc_io	*mc_io,
+			      uint32_t		cmd_flags,
+			      uint16_t		token,
+			      uint8_t		tc_id,
+			      uint64_t		early_drop_iova);
+
+/**
+ * enum dpni_dest - DPNI destination types
+ * @DPNI_DEST_NONE: Unassigned destination; The queue is set in parked mode and
+ *	does not generate FQDAN notifications; user is expected to
+ *	dequeue from the queue based on polling or other user-defined
+ *	method
+ * @DPNI_DEST_DPIO: The queue is set in schedule mode and generates FQDAN
+ *	notifications to the specified DPIO; user is expected to dequeue
+ *	from the queue only after notification is received
+ * @DPNI_DEST_DPCON: The queue is set in schedule mode and does not generate
+ *	FQDAN notifications, but is connected to the specified DPCON
+ *	object; user is expected to dequeue from the DPCON channel
+ */
+enum dpni_dest {
+	DPNI_DEST_NONE = 0,
+	DPNI_DEST_DPIO = 1,
+	DPNI_DEST_DPCON = 2
+};
+
+/**
+ * struct dpni_dest_cfg - Structure representing DPNI destination parameters
+ * @dest_type: Destination type
+ * @dest_id: Either DPIO ID or DPCON ID, depending on the destination type
+ * @priority: Priority selection within the DPIO or DPCON channel; valid values
+ *	are 0-1 or 0-7, depending on the number of priorities in that
+ *	channel; not relevant for 'DPNI_DEST_NONE' option
+ */
+struct dpni_dest_cfg {
+	enum dpni_dest	dest_type;
+	int		dest_id;
+	uint8_t		priority;
+};
+
+/* DPNI congestion options */
+
+/**
+ * CSCN message is written to message_iova once entering a
+ * congestion state (see 'threshold_entry')
+ */
+#define DPNI_CONG_OPT_WRITE_MEM_ON_ENTER	0x00000001
+/**
+ * CSCN message is written to message_iova once exiting a
+ * congestion state (see 'threshold_exit')
+ */
+#define DPNI_CONG_OPT_WRITE_MEM_ON_EXIT		0x00000002
+/**
+ * CSCN write will attempt to allocate into a cache (coherent write);
+ * valid only if 'DPNI_CONG_OPT_WRITE_MEM_<X>' is selected
+ */
+#define DPNI_CONG_OPT_COHERENT_WRITE		0x00000004
+/**
+ * if 'dest_cfg.dest_type != DPNI_DEST_NONE' CSCN message is sent to
+ * DPIO/DPCON's WQ channel once entering a congestion state
+ * (see 'threshold_entry')
+ */
+#define DPNI_CONG_OPT_NOTIFY_DEST_ON_ENTER	0x00000008
+/**
+ * if 'dest_cfg.dest_type != DPNI_DEST_NONE' CSCN message is sent to
+ * DPIO/DPCON's WQ channel once exiting a congestion state
+ * (see 'threshold_exit')
+ */
+#define DPNI_CONG_OPT_NOTIFY_DEST_ON_EXIT	0x00000010
+/**
+ * if 'dest_cfg.dest_type != DPNI_DEST_NONE' when the CSCN is written to the
+ * sw-portal's DQRR, the DQRI interrupt is asserted immediately (if enabled)
+ */
+#define DPNI_CONG_OPT_INTR_COALESCING_DISABLED	0x00000020
+
+/**
+ * struct dpni_congestion_notification_cfg - congestion notification
+ *	configuration
+ * @units: units type
+ * @threshold_entry: above this threshold we enter a congestion state.
+ *	set it to '0' to disable it
+ * @threshold_exit: below this threshold we exit the congestion state.
+ * @message_ctx: The context that will be part of the CSCN message
+ * @message_iova: I/O virtual address (must be in DMA-able memory),
+ *	must be 16B aligned; valid only if 'DPNI_CONG_OPT_WRITE_MEM_<X>' is
+ *	contained in 'options'
+ * @dest_cfg: CSCN can be sent to either DPIO or DPCON WQ channel
+ * @options: Mask of available options; use 'DPNI_CONG_OPT_<X>' values
+ */
+struct dpni_congestion_notification_cfg {
+	enum dpni_congestion_unit	units;
+	uint32_t			threshold_entry;
+	uint32_t			threshold_exit;
+	uint64_t			message_ctx;
+	uint64_t			message_iova;
+	struct dpni_dest_cfg		dest_cfg;
+	uint16_t			options;
+};
+
+/**
+ * dpni_set_rx_tc_congestion_notification() - Set Rx traffic class congestion
+ *	notification configuration
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @tc_id:	Traffic class selection (0-7)
+ * @cfg:	congestion notification configuration
+ *
+ * Return:	'0' on Success; error code otherwise.
+ */
+int dpni_set_rx_tc_congestion_notification(struct fsl_mc_io	*mc_io,
+					   uint32_t		cmd_flags,
+					   uint16_t		token,
+					   uint8_t		tc_id,
+			const struct dpni_congestion_notification_cfg *cfg);
+
+/**
+ * dpni_get_rx_tc_congestion_notification() - Get Rx traffic class congestion
+ *	notification configuration
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @tc_id:	Traffic class selection (0-7)
+ * @cfg:	congestion notification configuration
+ *
+ * Return:	'0' on Success; error code otherwise.
+ */
+int dpni_get_rx_tc_congestion_notification(struct fsl_mc_io	*mc_io,
+					   uint32_t		cmd_flags,
+					   uint16_t		token,
+					   uint8_t		tc_id,
+				struct dpni_congestion_notification_cfg *cfg);
+
+/**
+ * dpni_set_tx_tc_congestion_notification() - Set Tx traffic class congestion
+ *	notification configuration
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @tc_id:	Traffic class selection (0-7)
+ * @cfg:	congestion notification configuration
+ *
+ * Return:	'0' on Success; error code otherwise.
+ */
+int dpni_set_tx_tc_congestion_notification(struct fsl_mc_io	*mc_io,
+					   uint32_t		cmd_flags,
+					   uint16_t		token,
+					   uint8_t		tc_id,
+			const struct dpni_congestion_notification_cfg *cfg);
+
+/**
+ * dpni_get_tx_tc_congestion_notification() - Get Tx traffic class congestion
+ *	notification configuration
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @tc_id:	Traffic class selection (0-7)
+ * @cfg:	congestion notification configuration
+ *
+ * Return:	'0' on Success; error code otherwise.
+ */
+int dpni_get_tx_tc_congestion_notification(struct fsl_mc_io	*mc_io,
+					   uint32_t		cmd_flags,
+					   uint16_t		token,
+					   uint8_t		tc_id,
+				struct dpni_congestion_notification_cfg *cfg);
+
+/**
+ * enum dpni_flc_type - DPNI FLC types
+ * @DPNI_FLC_USER_DEFINED: select the FLC to be used for user defined value
+ * @DPNI_FLC_STASH: select the FLC to be used for stash control
+ */
+enum dpni_flc_type {
+	DPNI_FLC_USER_DEFINED = 0,
+	DPNI_FLC_STASH = 1,
+};
+
+/**
+ * enum dpni_stash_size - DPNI FLC stashing size
+ * @DPNI_STASH_SIZE_0B: no stash
+ * @DPNI_STASH_SIZE_64B: stashes 64 bytes
+ * @DPNI_STASH_SIZE_128B: stashes 128 bytes
+ * @DPNI_STASH_SIZE_192B: stashes 192 bytes
+ */
+enum dpni_stash_size {
+	DPNI_STASH_SIZE_0B = 0,
+	DPNI_STASH_SIZE_64B = 1,
+	DPNI_STASH_SIZE_128B = 2,
+	DPNI_STASH_SIZE_192B = 3,
+};
+
+/* DPNI FLC stash options */
+
+/**
+ * stashes the whole annotation area (up to 192 bytes)
+ */
+#define DPNI_FLC_STASH_FRAME_ANNOTATION	0x00000001
+
+/**
+ * struct dpni_flc_cfg - Structure representing DPNI FLC configuration
+ * @flc_type: FLC type
+ * @options: Mask of available options;
+ *	use 'DPNI_FLC_STASH_<X>' values
+ * @frame_data_size: Size of frame data to be stashed
+ * @flow_context_size: Size of flow context to be stashed
+ * @flow_context: 1. In case flc_type is 'DPNI_FLC_USER_DEFINED':
+ *	this value will be provided in the frame descriptor
+ *	2. In case flc_type is 'DPNI_FLC_STASH':
+ *	this value will be the I/O virtual address of the
+ *	flow context;
+ *	Must be cacheline-aligned and DMA-able memory
+ */
+struct dpni_flc_cfg {
+	enum dpni_flc_type	flc_type;
+	uint32_t		options;
+	enum dpni_stash_size	frame_data_size;
+	enum dpni_stash_size	flow_context_size;
+	uint64_t		flow_context;
+};
+
+/**
+ * DPNI queue modification options
+ */
+
+/**
+ * Select to modify the user's context associated with the queue
+ */
+#define DPNI_QUEUE_OPT_USER_CTX			0x00000001
+/**
+ * Select to modify the queue's destination
+ */
+#define DPNI_QUEUE_OPT_DEST			0x00000002
+/**
+ * Select to modify the flow-context parameters;
+ * not applicable for Tx-conf/Err queues as the FD comes from the user
+ */
+#define DPNI_QUEUE_OPT_FLC			0x00000004
+/**
+ * Select to modify the queue's order preservation
+ */
+#define DPNI_QUEUE_OPT_ORDER_PRESERVATION	0x00000008
+/* Select to modify the queue's tail-drop threshold */
+#define DPNI_QUEUE_OPT_TAILDROP_THRESHOLD	0x00000010
+
+/**
+ * struct dpni_queue_cfg - Structure representing queue configuration
+ * @options: Flags representing the suggested modifications to the queue;
+ *	Use any combination of 'DPNI_QUEUE_OPT_<X>' flags
+ * @user_ctx: User context value provided in the frame descriptor of each
+ *	dequeued frame; valid only if 'DPNI_QUEUE_OPT_USER_CTX'
+ *	is contained in 'options'
+ * @dest_cfg: Queue destination parameters;
+ *	valid only if 'DPNI_QUEUE_OPT_DEST' is contained in 'options'
+ * @flc_cfg: Flow context configuration; in case the TC's distribution
+ *	is either NONE or HASH the FLC's settings of flow#0 are used;
+ *	in the case of FS (flow-steering) the flow's FLC settings
+ *	are used;
+ *	valid only if 'DPNI_QUEUE_OPT_FLC' is contained in 'options'
+ * @order_preservation_en: enable/disable order preservation;
+ *	valid only if 'DPNI_QUEUE_OPT_ORDER_PRESERVATION' is contained
+ *	in 'options'
+ * @tail_drop_threshold: set the queue's tail drop threshold in bytes;
+ *	'0' value disables the threshold; maximum value is 0xE000000;
+ *	valid only if 'DPNI_QUEUE_OPT_TAILDROP_THRESHOLD' is contained
+ *	in 'options'
+ */
+struct dpni_queue_cfg {
+	uint32_t		options;
+	uint64_t		user_ctx;
+	struct dpni_dest_cfg	dest_cfg;
+	struct dpni_flc_cfg	flc_cfg;
+	int			order_preservation_en;
+	uint32_t		tail_drop_threshold;
+};
+
+/**
+ * struct dpni_queue_attr - Structure representing queue attributes
+ * @user_ctx: User context value provided in the frame descriptor of each
+ *	dequeued frame
+ * @dest_cfg: Queue destination configuration
+ * @flc_cfg: Flow context configuration
+ * @order_preservation_en: enable/disable order preservation
+ * @tail_drop_threshold: queue's tail drop threshold in bytes
+ * @fqid: Virtual FQID value to be used for dequeue operations
+ */
+struct dpni_queue_attr {
+	uint64_t		user_ctx;
+	struct dpni_dest_cfg	dest_cfg;
+	struct dpni_flc_cfg	flc_cfg;
+	int			order_preservation_en;
+	uint32_t		tail_drop_threshold;
+	uint32_t		fqid;
+};
+
+/**
+ * DPNI Tx flow modification options
+ */
+
+/**
+ * Select to modify the settings for dedicated Tx confirmation/error
+ * queue
+ */
+#define DPNI_TX_FLOW_OPT_TX_CONF_ERROR	0x00000001
+/**
+ * Select to modify the L3 checksum generation setting
+ */
+#define DPNI_TX_FLOW_OPT_L3_CHKSUM_GEN	0x00000010
+/**
+ * Select to modify the L4 checksum generation setting
+ */
+#define DPNI_TX_FLOW_OPT_L4_CHKSUM_GEN	0x00000020
+
+/**
+ * struct dpni_tx_flow_cfg - Structure representing Tx flow configuration
+ * @options: Flags representing the suggested modifications to the Tx flow;
+ *	Use any combination of 'DPNI_TX_FLOW_OPT_<X>' flags
+ * @use_common_tx_conf_queue: Set to '1' to use the common (default) Tx
+ *	confirmation and error queue; Set to '0' to use the private
+ *	Tx confirmation and error queue; valid only if
+ *	'DPNI_OPT_PRIVATE_TX_CONF_ERROR_DISABLED' wasn't set at DPNI creation
+ *	and 'DPNI_TX_FLOW_OPT_TX_CONF_ERROR' is contained in 'options'
+ * @l3_chksum_gen: Set to '1' to enable L3 checksum generation; '0' to disable;
+ *	valid only if 'DPNI_TX_FLOW_OPT_L3_CHKSUM_GEN' is contained in 'options'
+ * @l4_chksum_gen: Set to '1' to enable L4 checksum generation; '0' to disable;
+ *	valid only if 'DPNI_TX_FLOW_OPT_L4_CHKSUM_GEN' is contained in 'options'
+ */
+struct dpni_tx_flow_cfg {
+	uint32_t	options;
+	int		use_common_tx_conf_queue;
+	int		l3_chksum_gen;
+	int		l4_chksum_gen;
+};
+
+/**
+ * dpni_set_tx_flow() - Set Tx flow configuration
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @flow_id:	Provides (or returns) the sender's flow ID;
+ *	for each new sender set (*flow_id) to 'DPNI_NEW_FLOW_ID' to generate
+ *	a new flow_id; this ID should be used as the QDBIN argument
+ *	in enqueue operations
+ * @cfg: Tx flow configuration
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_set_tx_flow(struct fsl_mc_io			*mc_io,
+		     uint32_t				cmd_flags,
+		     uint16_t				token,
+		     uint16_t				*flow_id,
+		     const struct dpni_tx_flow_cfg	*cfg);
+
+/**
+ * struct dpni_tx_flow_attr - Structure representing Tx flow attributes
+ * @use_common_tx_conf_queue: '1' if using common (default) Tx confirmation and
+ *	error queue; '0' if using private Tx confirmation and error queue
+ * @l3_chksum_gen: '1' if L3 checksum generation is enabled; '0' if disabled
+ * @l4_chksum_gen: '1' if L4 checksum generation is enabled; '0' if disabled
+ */
+struct dpni_tx_flow_attr {
+	int	use_common_tx_conf_queue;
+	int	l3_chksum_gen;
+	int	l4_chksum_gen;
+};
+
+/**
+ * dpni_get_tx_flow() - Get Tx flow attributes
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @flow_id:	The sender's flow ID, as returned by the
+ *	dpni_set_tx_flow() function
+ * @attr: Returned Tx flow attributes
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_get_tx_flow(struct fsl_mc_io		*mc_io,
+		     uint32_t			cmd_flags,
+		     uint16_t			token,
+		     uint16_t			flow_id,
+		     struct dpni_tx_flow_attr	*attr);
+
+/**
+ * struct dpni_tx_conf_cfg - Structure representing Tx conf configuration
+ * @errors_only: Set to '1' to report back only error frames;
+ *	Set to '0' to confirm transmission/error for all transmitted frames
+ * @queue_cfg: Queue configuration
+ */
+struct dpni_tx_conf_cfg {
+	int			errors_only;
+	struct dpni_queue_cfg	queue_cfg;
+};
+
+/**
+ * dpni_set_tx_conf() - Set Tx confirmation and error queue configuration
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @flow_id:	The sender's flow ID, as returned by the
+ *	dpni_set_tx_flow() function;
+ *	use 'DPNI_COMMON_TX_CONF' for common tx-conf
+ * @cfg: Queue configuration
+ *
+ * If either 'DPNI_OPT_TX_CONF_DISABLED' or
+ * 'DPNI_OPT_PRIVATE_TX_CONF_ERROR_DISABLED' were selected at DPNI creation,
+ * this function can ONLY be used with 'flow_id == DPNI_COMMON_TX_CONF',
+ * i.e. it only serves the common tx-conf-err queue.
+ * If 'DPNI_OPT_TX_CONF_DISABLED' was selected, only error frames are reported
+ * back - successfully transmitted frames are not confirmed. Otherwise, all
+ * transmitted frames are sent for confirmation.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_set_tx_conf(struct fsl_mc_io			*mc_io,
+		     uint32_t				cmd_flags,
+		     uint16_t				token,
+		     uint16_t				flow_id,
+		     const struct dpni_tx_conf_cfg	*cfg);
+
+/**
+ * struct dpni_tx_conf_attr - Structure representing Tx conf attributes
+ * @errors_only: '1' if only error frames are reported back; '0' if all
+ *	transmitted frames are confirmed
+ * @queue_attr: Queue attributes
+ */
+struct dpni_tx_conf_attr {
+	int			errors_only;
+	struct dpni_queue_attr	queue_attr;
+};
+
+/**
+ * dpni_get_tx_conf() - Get Tx confirmation and error queue attributes
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @flow_id:	The sender's flow ID, as returned by the
+ *	dpni_set_tx_flow() function;
+ *	use 'DPNI_COMMON_TX_CONF' for common tx-conf
+ * @attr: Returned tx-conf attributes
+ *
+ * If either 'DPNI_OPT_TX_CONF_DISABLED' or
+ * 'DPNI_OPT_PRIVATE_TX_CONF_ERROR_DISABLED' were selected at DPNI creation,
+ * this function can ONLY be used with 'flow_id == DPNI_COMMON_TX_CONF',
+ * i.e. it only serves the common tx-conf-err queue.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_get_tx_conf(struct fsl_mc_io		*mc_io,
+		     uint32_t			cmd_flags,
+		     uint16_t			token,
+		     uint16_t			flow_id,
+		     struct dpni_tx_conf_attr	*attr);
+
+/**
+ * dpni_set_tx_conf_congestion_notification() - Set Tx conf congestion
+ *	notification configuration
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @flow_id:	The sender's flow ID, as returned by the
+ *	dpni_set_tx_flow() function;
+ *	use 'DPNI_COMMON_TX_CONF' for common tx-conf
+ * @cfg: congestion notification configuration
+ *
+ * If either 'DPNI_OPT_TX_CONF_DISABLED' or
+ * 'DPNI_OPT_PRIVATE_TX_CONF_ERROR_DISABLED' were selected at DPNI creation,
+ * this function can ONLY be used with 'flow_id == DPNI_COMMON_TX_CONF',
+ * i.e. it only serves the common tx-conf-err queue.
+ *
+ * Return:	'0' on Success; error code otherwise.
+ */
+int dpni_set_tx_conf_congestion_notification(struct fsl_mc_io	*mc_io,
+					     uint32_t		cmd_flags,
+					     uint16_t		token,
+					     uint16_t		flow_id,
+			const struct dpni_congestion_notification_cfg *cfg);
+
+/**
+ * dpni_get_tx_conf_congestion_notification() - Get Tx conf congestion
+ *	notification configuration
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @flow_id:	The sender's flow ID, as returned by the
+ *	dpni_set_tx_flow() function;
+ *	use 'DPNI_COMMON_TX_CONF' for common tx-conf
+ * @cfg: congestion notification configuration
+ *
+ * If either 'DPNI_OPT_TX_CONF_DISABLED' or
+ * 'DPNI_OPT_PRIVATE_TX_CONF_ERROR_DISABLED' were selected at DPNI creation,
+ * this function can ONLY be used with 'flow_id == DPNI_COMMON_TX_CONF',
+ * i.e. it only serves the common tx-conf-err queue.
+ *
+ * Return:	'0' on Success; error code otherwise.
+ */
+int dpni_get_tx_conf_congestion_notification(struct fsl_mc_io	*mc_io,
+					     uint32_t		cmd_flags,
+					     uint16_t		token,
+					     uint16_t		flow_id,
+				struct dpni_congestion_notification_cfg *cfg);
+
+/**
+ * dpni_set_tx_conf_revoke() - Tx confirmation revocation
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @revoke:	revoke or not
+ *
+ * This function is useful only when 'DPNI_OPT_TX_CONF_DISABLED' is not
+ * selected at DPNI creation.
+ * Calling this function with 'revoke' set to '1' disables all transmit
+ * confirmation (including the private confirmation queues), regardless of
+ * previous settings; Note that in this case, Tx error frames are still
+ * enqueued to the general transmit errors queue.
+ * Calling this function with 'revoke' set to '0' restores the previous
+ * settings for both general and private transmit confirmation.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_set_tx_conf_revoke(struct fsl_mc_io	*mc_io,
+			    uint32_t		cmd_flags,
+			    uint16_t		token,
+			    int			revoke);
+
+/**
+ * dpni_set_rx_flow() - Set Rx flow configuration
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @tc_id:	Traffic class selection (0-7);
+ *	use 'DPNI_ALL_TCS' to set all TCs and all flows
+ * @flow_id: Rx flow ID within the traffic class; use
+ *	'DPNI_ALL_TC_FLOWS' to set all flows within
+ *	this tc_id; ignored if tc_id is set to
+ *	'DPNI_ALL_TCS'
+ * @cfg: Rx flow configuration
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_set_rx_flow(struct fsl_mc_io		*mc_io,
+		     uint32_t			cmd_flags,
+		     uint16_t			token,
+		     uint8_t			tc_id,
+		     uint16_t			flow_id,
+		     const struct dpni_queue_cfg *cfg);
11502 + * dpni_get_rx_flow() - Get Rx flow attributes
11503 + * @mc_io: Pointer to MC portal's I/O object
11504 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11505 + * @token: Token of DPNI object
11506 + * @tc_id: Traffic class selection (0-7)
11507 + * @flow_id: Rx flow id within the traffic class
11508 + * @attr: Returned Rx flow attributes
11510 + * Return: '0' on Success; Error code otherwise.
11512 +int dpni_get_rx_flow(struct fsl_mc_io *mc_io,
11513 + uint32_t cmd_flags,
11516 + uint16_t flow_id,
11517 + struct dpni_queue_attr *attr);
11520 + * dpni_set_rx_err_queue() - Set Rx error queue configuration
11521 + * @mc_io: Pointer to MC portal's I/O object
11522 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11523 + * @token: Token of DPNI object
11524 + * @cfg: Queue configuration
11526 + * Return: '0' on Success; Error code otherwise.
11528 +int dpni_set_rx_err_queue(struct fsl_mc_io *mc_io,
11529 + uint32_t cmd_flags,
11531 + const struct dpni_queue_cfg *cfg);
11534 + * dpni_get_rx_err_queue() - Get Rx error queue attributes
11535 + * @mc_io: Pointer to MC portal's I/O object
11536 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11537 + * @token: Token of DPNI object
11538 + * @attr: Returned Queue attributes
11540 + * Return: '0' on Success; Error code otherwise.
11542 +int dpni_get_rx_err_queue(struct fsl_mc_io *mc_io,
11543 + uint32_t cmd_flags,
11545 + struct dpni_queue_attr *attr);
11548 + * struct dpni_qos_tbl_cfg - Structure representing QOS table configuration
11549 + * @key_cfg_iova: I/O virtual address of 256 bytes of DMA-able memory filled
11550 + * key extractions to be used as the QoS criteria by calling
11551 + * dpni_prepare_key_cfg()
11552 + * @discard_on_miss: Set to '1' to discard frames in case of no match (miss);
11553 + * '0' to use the 'default_tc' in such cases
11554 + * @default_tc: Used in case of no match, when 'discard_on_miss' = 0
11556 +struct dpni_qos_tbl_cfg {
11557 + uint64_t key_cfg_iova;
11558 + int discard_on_miss;
11559 + uint8_t default_tc;
11563 + * dpni_set_qos_table() - Set QoS mapping table
11564 + * @mc_io: Pointer to MC portal's I/O object
11565 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11566 + * @token: Token of DPNI object
11567 + * @cfg: QoS table configuration
11569 + * This function and all QoS-related functions require that
11570 + * 'max_tcs > 1' was set at DPNI creation.
11572 + * Warning: Before calling this function, call dpni_prepare_key_cfg() to
11573 + * prepare the key_cfg_iova parameter.
11575 + * Return: '0' on Success; Error code otherwise.
11577 +int dpni_set_qos_table(struct fsl_mc_io *mc_io,
11578 + uint32_t cmd_flags,
11580 + const struct dpni_qos_tbl_cfg *cfg);
11583 + * struct dpni_rule_cfg - Rule configuration for table lookup
11584 + * @key_iova: I/O virtual address of the key (must be in DMA-able memory)
11585 + * @mask_iova: I/O virtual address of the mask (must be in DMA-able memory)
11586 + * @key_size: key and mask size (in bytes)
11588 +struct dpni_rule_cfg {
11589 + uint64_t key_iova;
11590 + uint64_t mask_iova;
11591 + uint8_t key_size;
11595 + * dpni_add_qos_entry() - Add QoS mapping entry (to select a traffic class)
11596 + * @mc_io: Pointer to MC portal's I/O object
11597 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11598 + * @token: Token of DPNI object
11599 + * @cfg: QoS rule to add
11600 + * @tc_id: Traffic class selection (0-7)
11602 + * Return: '0' on Success; Error code otherwise.
11604 +int dpni_add_qos_entry(struct fsl_mc_io *mc_io,
11605 + uint32_t cmd_flags,
11607 + const struct dpni_rule_cfg *cfg,
11611 + * dpni_remove_qos_entry() - Remove QoS mapping entry
11612 + * @mc_io: Pointer to MC portal's I/O object
11613 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11614 + * @token: Token of DPNI object
11615 + * @cfg: QoS rule to remove
11617 + * Return: '0' on Success; Error code otherwise.
11619 +int dpni_remove_qos_entry(struct fsl_mc_io *mc_io,
11620 + uint32_t cmd_flags,
11622 + const struct dpni_rule_cfg *cfg);
11625 + * dpni_clear_qos_table() - Clear all QoS mapping entries
11626 + * @mc_io: Pointer to MC portal's I/O object
11627 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11628 + * @token: Token of DPNI object
11630 + * Following this function call, all frames are directed to
11631 + * the default traffic class (0)
11633 + * Return: '0' on Success; Error code otherwise.
11635 +int dpni_clear_qos_table(struct fsl_mc_io *mc_io,
11636 + uint32_t cmd_flags,
11640 + * dpni_add_fs_entry() - Add Flow Steering entry for a specific traffic class
11641 + * (to select a flow ID)
11642 + * @mc_io: Pointer to MC portal's I/O object
11643 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11644 + * @token: Token of DPNI object
11645 + * @tc_id: Traffic class selection (0-7)
11646 + * @cfg: Flow steering rule to add
11647 + * @flow_id: Flow id selection (must be smaller than the
11648 + * distribution size of the traffic class)
11650 + * Return: '0' on Success; Error code otherwise.
11652 +int dpni_add_fs_entry(struct fsl_mc_io *mc_io,
11653 + uint32_t cmd_flags,
11656 + const struct dpni_rule_cfg *cfg,
11657 + uint16_t flow_id);
11660 + * dpni_remove_fs_entry() - Remove Flow Steering entry from a specific
11662 + * @mc_io: Pointer to MC portal's I/O object
11663 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11664 + * @token: Token of DPNI object
11665 + * @tc_id: Traffic class selection (0-7)
11666 + * @cfg: Flow steering rule to remove
11668 + * Return: '0' on Success; Error code otherwise.
11670 +int dpni_remove_fs_entry(struct fsl_mc_io *mc_io,
11671 + uint32_t cmd_flags,
11674 + const struct dpni_rule_cfg *cfg);
11677 + * dpni_clear_fs_entries() - Clear all Flow Steering entries of a specific
11679 + * @mc_io: Pointer to MC portal's I/O object
11680 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11681 + * @token: Token of DPNI object
11682 + * @tc_id: Traffic class selection (0-7)
11684 + * Return: '0' on Success; Error code otherwise.
11686 +int dpni_clear_fs_entries(struct fsl_mc_io *mc_io,
11687 + uint32_t cmd_flags,
11692 + * dpni_set_vlan_insertion() - Enable/disable VLAN insertion for egress frames
11693 + * @mc_io: Pointer to MC portal's I/O object
11694 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11695 + * @token: Token of DPNI object
11696 + * @en: Set to '1' to enable; '0' to disable
11698 + * Requires that the 'DPNI_OPT_VLAN_MANIPULATION' option is set
11699 + * at DPNI creation.
11701 + * Return: '0' on Success; Error code otherwise.
11703 +int dpni_set_vlan_insertion(struct fsl_mc_io *mc_io,
11704 + uint32_t cmd_flags,
11709 + * dpni_set_vlan_removal() - Enable/disable VLAN removal for ingress frames
11710 + * @mc_io: Pointer to MC portal's I/O object
11711 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11712 + * @token: Token of DPNI object
11713 + * @en: Set to '1' to enable; '0' to disable
11715 + * Requires that the 'DPNI_OPT_VLAN_MANIPULATION' option is set
11716 + * at DPNI creation.
11718 + * Return: '0' on Success; Error code otherwise.
11720 +int dpni_set_vlan_removal(struct fsl_mc_io *mc_io,
11721 + uint32_t cmd_flags,
11726 + * dpni_set_ipr() - Enable/disable IP reassembly of ingress frames
11727 + * @mc_io: Pointer to MC portal's I/O object
11728 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11729 + * @token: Token of DPNI object
11730 + * @en: Set to '1' to enable; '0' to disable
11732 + * Requires that the 'DPNI_OPT_IPR' option is set at DPNI creation.
11734 + * Return: '0' on Success; Error code otherwise.
11736 +int dpni_set_ipr(struct fsl_mc_io *mc_io,
11737 + uint32_t cmd_flags,
11742 + * dpni_set_ipf() - Enable/disable IP fragmentation of egress frames
11743 + * @mc_io: Pointer to MC portal's I/O object
11744 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11745 + * @token: Token of DPNI object
11746 + * @en: Set to '1' to enable; '0' to disable
11748 + * Requires that the 'DPNI_OPT_IPF' option is set at DPNI
11749 + * creation. Fragmentation is performed according to the MTU value
11750 + * set by the dpni_set_mtu() function.
11752 + * Return: '0' on Success; Error code otherwise.
11754 +int dpni_set_ipf(struct fsl_mc_io *mc_io,
11755 + uint32_t cmd_flags,
11759 +#endif /* __FSL_DPNI_H */
11760 --- a/drivers/staging/fsl-mc/include/mc-cmd.h
11761 +++ b/drivers/staging/fsl-mc/include/mc-cmd.h
11762 @@ -103,8 +103,11 @@ enum mc_cmd_status {
11763 #define MC_CMD_HDR_READ_FLAGS(_hdr) \
11764 ((u32)mc_dec((_hdr), MC_CMD_HDR_FLAGS_O, MC_CMD_HDR_FLAGS_S))
11766 +#define MC_PREP_OP(_ext, _param, _offset, _width, _type, _arg) \
11767 + ((_ext)[_param] |= cpu_to_le64(mc_enc((_offset), (_width), _arg)))
11769 #define MC_EXT_OP(_ext, _param, _offset, _width, _type, _arg) \
11770 - ((_ext)[_param] |= mc_enc((_offset), (_width), _arg))
11771 + (_arg = (_type)mc_dec(cpu_to_le64(_ext[_param]), (_offset), (_width)))
11773 #define MC_CMD_OP(_cmd, _param, _offset, _width, _type, _arg) \
11774 ((_cmd).params[_param] |= mc_enc((_offset), (_width), _arg))
11776 +++ b/drivers/staging/fsl-mc/include/net.h
11778 +/* Copyright 2013-2015 Freescale Semiconductor Inc.
11780 + * Redistribution and use in source and binary forms, with or without
11781 + * modification, are permitted provided that the following conditions are met:
11782 + * * Redistributions of source code must retain the above copyright
11783 + * notice, this list of conditions and the following disclaimer.
11784 + * * Redistributions in binary form must reproduce the above copyright
11785 + * notice, this list of conditions and the following disclaimer in the
11786 + * documentation and/or other materials provided with the distribution.
11787 + * * Neither the name of the above-listed copyright holders nor the
11788 + * names of any contributors may be used to endorse or promote products
11789 + * derived from this software without specific prior written permission.
11792 + * ALTERNATIVELY, this software may be distributed under the terms of the
11793 + * GNU General Public License ("GPL") as published by the Free Software
11794 + * Foundation, either version 2 of that License or (at your option) any
11797 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
11798 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
11799 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
11800 + * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
11801 + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
11802 + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
11803 + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
11804 + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
11805 + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
11806 + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
11807 + * POSSIBILITY OF SUCH DAMAGE.
11809 +#ifndef __FSL_NET_H
11810 +#define __FSL_NET_H
11812 +#define LAST_HDR_INDEX 0xFFFFFFFF
11814 +/*****************************************************************************/
11815 +/* Protocol fields */
11816 +/*****************************************************************************/
11818 +/************************* Ethernet fields *********************************/
11819 +#define NH_FLD_ETH_DA (1)
11820 +#define NH_FLD_ETH_SA (NH_FLD_ETH_DA << 1)
11821 +#define NH_FLD_ETH_LENGTH (NH_FLD_ETH_DA << 2)
11822 +#define NH_FLD_ETH_TYPE (NH_FLD_ETH_DA << 3)
11823 +#define NH_FLD_ETH_FINAL_CKSUM (NH_FLD_ETH_DA << 4)
11824 +#define NH_FLD_ETH_PADDING (NH_FLD_ETH_DA << 5)
11825 +#define NH_FLD_ETH_ALL_FIELDS ((NH_FLD_ETH_DA << 6) - 1)
11827 +#define NH_FLD_ETH_ADDR_SIZE 6
11829 +/*************************** VLAN fields ***********************************/
11830 +#define NH_FLD_VLAN_VPRI (1)
11831 +#define NH_FLD_VLAN_CFI (NH_FLD_VLAN_VPRI << 1)
11832 +#define NH_FLD_VLAN_VID (NH_FLD_VLAN_VPRI << 2)
11833 +#define NH_FLD_VLAN_LENGTH (NH_FLD_VLAN_VPRI << 3)
11834 +#define NH_FLD_VLAN_TYPE (NH_FLD_VLAN_VPRI << 4)
11835 +#define NH_FLD_VLAN_ALL_FIELDS ((NH_FLD_VLAN_VPRI << 5) - 1)
11837 +#define NH_FLD_VLAN_TCI (NH_FLD_VLAN_VPRI | \
11838 + NH_FLD_VLAN_CFI | \
11841 +/************************ IP (generic) fields ******************************/
11842 +#define NH_FLD_IP_VER (1)
11843 +#define NH_FLD_IP_DSCP (NH_FLD_IP_VER << 2)
11844 +#define NH_FLD_IP_ECN (NH_FLD_IP_VER << 3)
11845 +#define NH_FLD_IP_PROTO (NH_FLD_IP_VER << 4)
11846 +#define NH_FLD_IP_SRC (NH_FLD_IP_VER << 5)
11847 +#define NH_FLD_IP_DST (NH_FLD_IP_VER << 6)
11848 +#define NH_FLD_IP_TOS_TC (NH_FLD_IP_VER << 7)
11849 +#define NH_FLD_IP_ID (NH_FLD_IP_VER << 8)
11850 +#define NH_FLD_IP_ALL_FIELDS ((NH_FLD_IP_VER << 9) - 1)
11852 +#define NH_FLD_IP_PROTO_SIZE 1
11854 +/***************************** IPV4 fields *********************************/
11855 +#define NH_FLD_IPV4_VER (1)
11856 +#define NH_FLD_IPV4_HDR_LEN (NH_FLD_IPV4_VER << 1)
11857 +#define NH_FLD_IPV4_TOS (NH_FLD_IPV4_VER << 2)
11858 +#define NH_FLD_IPV4_TOTAL_LEN (NH_FLD_IPV4_VER << 3)
11859 +#define NH_FLD_IPV4_ID (NH_FLD_IPV4_VER << 4)
11860 +#define NH_FLD_IPV4_FLAG_D (NH_FLD_IPV4_VER << 5)
11861 +#define NH_FLD_IPV4_FLAG_M (NH_FLD_IPV4_VER << 6)
11862 +#define NH_FLD_IPV4_OFFSET (NH_FLD_IPV4_VER << 7)
11863 +#define NH_FLD_IPV4_TTL (NH_FLD_IPV4_VER << 8)
11864 +#define NH_FLD_IPV4_PROTO (NH_FLD_IPV4_VER << 9)
11865 +#define NH_FLD_IPV4_CKSUM (NH_FLD_IPV4_VER << 10)
11866 +#define NH_FLD_IPV4_SRC_IP (NH_FLD_IPV4_VER << 11)
11867 +#define NH_FLD_IPV4_DST_IP (NH_FLD_IPV4_VER << 12)
11868 +#define NH_FLD_IPV4_OPTS (NH_FLD_IPV4_VER << 13)
11869 +#define NH_FLD_IPV4_OPTS_COUNT (NH_FLD_IPV4_VER << 14)
11870 +#define NH_FLD_IPV4_ALL_FIELDS ((NH_FLD_IPV4_VER << 15) - 1)
11872 +#define NH_FLD_IPV4_ADDR_SIZE 4
11873 +#define NH_FLD_IPV4_PROTO_SIZE 1
11875 +/***************************** IPV6 fields *********************************/
11876 +#define NH_FLD_IPV6_VER (1)
11877 +#define NH_FLD_IPV6_TC (NH_FLD_IPV6_VER << 1)
11878 +#define NH_FLD_IPV6_SRC_IP (NH_FLD_IPV6_VER << 2)
11879 +#define NH_FLD_IPV6_DST_IP (NH_FLD_IPV6_VER << 3)
11880 +#define NH_FLD_IPV6_NEXT_HDR (NH_FLD_IPV6_VER << 4)
11881 +#define NH_FLD_IPV6_FL (NH_FLD_IPV6_VER << 5)
11882 +#define NH_FLD_IPV6_HOP_LIMIT (NH_FLD_IPV6_VER << 6)
11883 +#define NH_FLD_IPV6_ID (NH_FLD_IPV6_VER << 7)
11884 +#define NH_FLD_IPV6_ALL_FIELDS ((NH_FLD_IPV6_VER << 8) - 1)
11886 +#define NH_FLD_IPV6_ADDR_SIZE 16
11887 +#define NH_FLD_IPV6_NEXT_HDR_SIZE 1
11889 +/***************************** ICMP fields *********************************/
11890 +#define NH_FLD_ICMP_TYPE (1)
11891 +#define NH_FLD_ICMP_CODE (NH_FLD_ICMP_TYPE << 1)
11892 +#define NH_FLD_ICMP_CKSUM (NH_FLD_ICMP_TYPE << 2)
11893 +#define NH_FLD_ICMP_ID (NH_FLD_ICMP_TYPE << 3)
11894 +#define NH_FLD_ICMP_SQ_NUM (NH_FLD_ICMP_TYPE << 4)
11895 +#define NH_FLD_ICMP_ALL_FIELDS ((NH_FLD_ICMP_TYPE << 5) - 1)
11897 +#define NH_FLD_ICMP_CODE_SIZE 1
11898 +#define NH_FLD_ICMP_TYPE_SIZE 1
11900 +/***************************** IGMP fields *********************************/
11901 +#define NH_FLD_IGMP_VERSION (1)
11902 +#define NH_FLD_IGMP_TYPE (NH_FLD_IGMP_VERSION << 1)
11903 +#define NH_FLD_IGMP_CKSUM (NH_FLD_IGMP_VERSION << 2)
11904 +#define NH_FLD_IGMP_DATA (NH_FLD_IGMP_VERSION << 3)
11905 +#define NH_FLD_IGMP_ALL_FIELDS ((NH_FLD_IGMP_VERSION << 4) - 1)
11907 +/***************************** TCP fields **********************************/
11908 +#define NH_FLD_TCP_PORT_SRC (1)
11909 +#define NH_FLD_TCP_PORT_DST (NH_FLD_TCP_PORT_SRC << 1)
11910 +#define NH_FLD_TCP_SEQ (NH_FLD_TCP_PORT_SRC << 2)
11911 +#define NH_FLD_TCP_ACK (NH_FLD_TCP_PORT_SRC << 3)
11912 +#define NH_FLD_TCP_OFFSET (NH_FLD_TCP_PORT_SRC << 4)
11913 +#define NH_FLD_TCP_FLAGS (NH_FLD_TCP_PORT_SRC << 5)
11914 +#define NH_FLD_TCP_WINDOW (NH_FLD_TCP_PORT_SRC << 6)
11915 +#define NH_FLD_TCP_CKSUM (NH_FLD_TCP_PORT_SRC << 7)
11916 +#define NH_FLD_TCP_URGPTR (NH_FLD_TCP_PORT_SRC << 8)
11917 +#define NH_FLD_TCP_OPTS (NH_FLD_TCP_PORT_SRC << 9)
11918 +#define NH_FLD_TCP_OPTS_COUNT (NH_FLD_TCP_PORT_SRC << 10)
11919 +#define NH_FLD_TCP_ALL_FIELDS ((NH_FLD_TCP_PORT_SRC << 11) - 1)
11921 +#define NH_FLD_TCP_PORT_SIZE 2
11923 +/***************************** UDP fields **********************************/
11924 +#define NH_FLD_UDP_PORT_SRC (1)
11925 +#define NH_FLD_UDP_PORT_DST (NH_FLD_UDP_PORT_SRC << 1)
11926 +#define NH_FLD_UDP_LEN (NH_FLD_UDP_PORT_SRC << 2)
11927 +#define NH_FLD_UDP_CKSUM (NH_FLD_UDP_PORT_SRC << 3)
11928 +#define NH_FLD_UDP_ALL_FIELDS ((NH_FLD_UDP_PORT_SRC << 4) - 1)
11930 +#define NH_FLD_UDP_PORT_SIZE 2
11932 +/*************************** UDP-lite fields *******************************/
11933 +#define NH_FLD_UDP_LITE_PORT_SRC (1)
11934 +#define NH_FLD_UDP_LITE_PORT_DST (NH_FLD_UDP_LITE_PORT_SRC << 1)
11935 +#define NH_FLD_UDP_LITE_ALL_FIELDS \
11936 + ((NH_FLD_UDP_LITE_PORT_SRC << 2) - 1)
11938 +#define NH_FLD_UDP_LITE_PORT_SIZE 2
11940 +/*************************** UDP-encap-ESP fields **************************/
11941 +#define NH_FLD_UDP_ENC_ESP_PORT_SRC (1)
11942 +#define NH_FLD_UDP_ENC_ESP_PORT_DST (NH_FLD_UDP_ENC_ESP_PORT_SRC << 1)
11943 +#define NH_FLD_UDP_ENC_ESP_LEN (NH_FLD_UDP_ENC_ESP_PORT_SRC << 2)
11944 +#define NH_FLD_UDP_ENC_ESP_CKSUM (NH_FLD_UDP_ENC_ESP_PORT_SRC << 3)
11945 +#define NH_FLD_UDP_ENC_ESP_SPI (NH_FLD_UDP_ENC_ESP_PORT_SRC << 4)
11946 +#define NH_FLD_UDP_ENC_ESP_SEQUENCE_NUM (NH_FLD_UDP_ENC_ESP_PORT_SRC << 5)
11947 +#define NH_FLD_UDP_ENC_ESP_ALL_FIELDS \
11948 + ((NH_FLD_UDP_ENC_ESP_PORT_SRC << 6) - 1)
11950 +#define NH_FLD_UDP_ENC_ESP_PORT_SIZE 2
11951 +#define NH_FLD_UDP_ENC_ESP_SPI_SIZE 4
11953 +/***************************** SCTP fields *********************************/
11954 +#define NH_FLD_SCTP_PORT_SRC (1)
11955 +#define NH_FLD_SCTP_PORT_DST (NH_FLD_SCTP_PORT_SRC << 1)
11956 +#define NH_FLD_SCTP_VER_TAG (NH_FLD_SCTP_PORT_SRC << 2)
11957 +#define NH_FLD_SCTP_CKSUM (NH_FLD_SCTP_PORT_SRC << 3)
11958 +#define NH_FLD_SCTP_ALL_FIELDS ((NH_FLD_SCTP_PORT_SRC << 4) - 1)
11960 +#define NH_FLD_SCTP_PORT_SIZE 2
11962 +/***************************** DCCP fields *********************************/
11963 +#define NH_FLD_DCCP_PORT_SRC (1)
11964 +#define NH_FLD_DCCP_PORT_DST (NH_FLD_DCCP_PORT_SRC << 1)
11965 +#define NH_FLD_DCCP_ALL_FIELDS ((NH_FLD_DCCP_PORT_SRC << 2) - 1)
11967 +#define NH_FLD_DCCP_PORT_SIZE 2
11969 +/***************************** IPHC fields *********************************/
11970 +#define NH_FLD_IPHC_CID (1)
11971 +#define NH_FLD_IPHC_CID_TYPE (NH_FLD_IPHC_CID << 1)
11972 +#define NH_FLD_IPHC_HCINDEX (NH_FLD_IPHC_CID << 2)
11973 +#define NH_FLD_IPHC_GEN (NH_FLD_IPHC_CID << 3)
11974 +#define NH_FLD_IPHC_D_BIT (NH_FLD_IPHC_CID << 4)
11975 +#define NH_FLD_IPHC_ALL_FIELDS ((NH_FLD_IPHC_CID << 5) - 1)
11977 +/***************************** SCTP fields *********************************/
11978 +#define NH_FLD_SCTP_CHUNK_DATA_TYPE (1)
11979 +#define NH_FLD_SCTP_CHUNK_DATA_FLAGS (NH_FLD_SCTP_CHUNK_DATA_TYPE << 1)
11980 +#define NH_FLD_SCTP_CHUNK_DATA_LENGTH (NH_FLD_SCTP_CHUNK_DATA_TYPE << 2)
11981 +#define NH_FLD_SCTP_CHUNK_DATA_TSN (NH_FLD_SCTP_CHUNK_DATA_TYPE << 3)
11982 +#define NH_FLD_SCTP_CHUNK_DATA_STREAM_ID (NH_FLD_SCTP_CHUNK_DATA_TYPE << 4)
11983 +#define NH_FLD_SCTP_CHUNK_DATA_STREAM_SQN (NH_FLD_SCTP_CHUNK_DATA_TYPE << 5)
11984 +#define NH_FLD_SCTP_CHUNK_DATA_PAYLOAD_PID (NH_FLD_SCTP_CHUNK_DATA_TYPE << 6)
11985 +#define NH_FLD_SCTP_CHUNK_DATA_UNORDERED (NH_FLD_SCTP_CHUNK_DATA_TYPE << 7)
11986 +#define NH_FLD_SCTP_CHUNK_DATA_BEGGINING (NH_FLD_SCTP_CHUNK_DATA_TYPE << 8)
11987 +#define NH_FLD_SCTP_CHUNK_DATA_END (NH_FLD_SCTP_CHUNK_DATA_TYPE << 9)
11988 +#define NH_FLD_SCTP_CHUNK_DATA_ALL_FIELDS \
11989 + ((NH_FLD_SCTP_CHUNK_DATA_TYPE << 10) - 1)
11991 +/*************************** L2TPV2 fields *********************************/
11992 +#define NH_FLD_L2TPV2_TYPE_BIT (1)
11993 +#define NH_FLD_L2TPV2_LENGTH_BIT (NH_FLD_L2TPV2_TYPE_BIT << 1)
11994 +#define NH_FLD_L2TPV2_SEQUENCE_BIT (NH_FLD_L2TPV2_TYPE_BIT << 2)
11995 +#define NH_FLD_L2TPV2_OFFSET_BIT (NH_FLD_L2TPV2_TYPE_BIT << 3)
11996 +#define NH_FLD_L2TPV2_PRIORITY_BIT (NH_FLD_L2TPV2_TYPE_BIT << 4)
11997 +#define NH_FLD_L2TPV2_VERSION (NH_FLD_L2TPV2_TYPE_BIT << 5)
11998 +#define NH_FLD_L2TPV2_LEN (NH_FLD_L2TPV2_TYPE_BIT << 6)
11999 +#define NH_FLD_L2TPV2_TUNNEL_ID (NH_FLD_L2TPV2_TYPE_BIT << 7)
12000 +#define NH_FLD_L2TPV2_SESSION_ID (NH_FLD_L2TPV2_TYPE_BIT << 8)
12001 +#define NH_FLD_L2TPV2_NS (NH_FLD_L2TPV2_TYPE_BIT << 9)
12002 +#define NH_FLD_L2TPV2_NR (NH_FLD_L2TPV2_TYPE_BIT << 10)
12003 +#define NH_FLD_L2TPV2_OFFSET_SIZE (NH_FLD_L2TPV2_TYPE_BIT << 11)
12004 +#define NH_FLD_L2TPV2_FIRST_BYTE (NH_FLD_L2TPV2_TYPE_BIT << 12)
12005 +#define NH_FLD_L2TPV2_ALL_FIELDS \
12006 + ((NH_FLD_L2TPV2_TYPE_BIT << 13) - 1)
12008 +/*************************** L2TPV3 fields *********************************/
12009 +#define NH_FLD_L2TPV3_CTRL_TYPE_BIT (1)
12010 +#define NH_FLD_L2TPV3_CTRL_LENGTH_BIT (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 1)
12011 +#define NH_FLD_L2TPV3_CTRL_SEQUENCE_BIT (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 2)
12012 +#define NH_FLD_L2TPV3_CTRL_VERSION (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 3)
12013 +#define NH_FLD_L2TPV3_CTRL_LENGTH (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 4)
12014 +#define NH_FLD_L2TPV3_CTRL_CONTROL (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 5)
12015 +#define NH_FLD_L2TPV3_CTRL_SENT (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 6)
12016 +#define NH_FLD_L2TPV3_CTRL_RECV (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 7)
12017 +#define NH_FLD_L2TPV3_CTRL_FIRST_BYTE (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 8)
12018 +#define NH_FLD_L2TPV3_CTRL_ALL_FIELDS \
12019 + ((NH_FLD_L2TPV3_CTRL_TYPE_BIT << 9) - 1)
12021 +#define NH_FLD_L2TPV3_SESS_TYPE_BIT (1)
12022 +#define NH_FLD_L2TPV3_SESS_VERSION (NH_FLD_L2TPV3_SESS_TYPE_BIT << 1)
12023 +#define NH_FLD_L2TPV3_SESS_ID (NH_FLD_L2TPV3_SESS_TYPE_BIT << 2)
12024 +#define NH_FLD_L2TPV3_SESS_COOKIE (NH_FLD_L2TPV3_SESS_TYPE_BIT << 3)
12025 +#define NH_FLD_L2TPV3_SESS_ALL_FIELDS \
12026 + ((NH_FLD_L2TPV3_SESS_TYPE_BIT << 4) - 1)
12028 +/**************************** PPP fields ***********************************/
12029 +#define NH_FLD_PPP_PID (1)
12030 +#define NH_FLD_PPP_COMPRESSED (NH_FLD_PPP_PID << 1)
12031 +#define NH_FLD_PPP_ALL_FIELDS ((NH_FLD_PPP_PID << 2) - 1)
12033 +/************************** PPPoE fields ***********************************/
12034 +#define NH_FLD_PPPOE_VER (1)
12035 +#define NH_FLD_PPPOE_TYPE (NH_FLD_PPPOE_VER << 1)
12036 +#define NH_FLD_PPPOE_CODE (NH_FLD_PPPOE_VER << 2)
12037 +#define NH_FLD_PPPOE_SID (NH_FLD_PPPOE_VER << 3)
12038 +#define NH_FLD_PPPOE_LEN (NH_FLD_PPPOE_VER << 4)
12039 +#define NH_FLD_PPPOE_SESSION (NH_FLD_PPPOE_VER << 5)
12040 +#define NH_FLD_PPPOE_PID (NH_FLD_PPPOE_VER << 6)
12041 +#define NH_FLD_PPPOE_ALL_FIELDS ((NH_FLD_PPPOE_VER << 7) - 1)
12043 +/************************* PPP-Mux fields **********************************/
12044 +#define NH_FLD_PPPMUX_PID (1)
12045 +#define NH_FLD_PPPMUX_CKSUM (NH_FLD_PPPMUX_PID << 1)
12046 +#define NH_FLD_PPPMUX_COMPRESSED (NH_FLD_PPPMUX_PID << 2)
12047 +#define NH_FLD_PPPMUX_ALL_FIELDS ((NH_FLD_PPPMUX_PID << 3) - 1)
12049 +/*********************** PPP-Mux sub-frame fields **************************/
12050 +#define NH_FLD_PPPMUX_SUBFRM_PFF (1)
12051 +#define NH_FLD_PPPMUX_SUBFRM_LXT (NH_FLD_PPPMUX_SUBFRM_PFF << 1)
12052 +#define NH_FLD_PPPMUX_SUBFRM_LEN (NH_FLD_PPPMUX_SUBFRM_PFF << 2)
12053 +#define NH_FLD_PPPMUX_SUBFRM_PID (NH_FLD_PPPMUX_SUBFRM_PFF << 3)
12054 +#define NH_FLD_PPPMUX_SUBFRM_USE_PID (NH_FLD_PPPMUX_SUBFRM_PFF << 4)
12055 +#define NH_FLD_PPPMUX_SUBFRM_ALL_FIELDS \
12056 + ((NH_FLD_PPPMUX_SUBFRM_PFF << 5) - 1)
12058 +/*************************** LLC fields ************************************/
12059 +#define NH_FLD_LLC_DSAP (1)
12060 +#define NH_FLD_LLC_SSAP (NH_FLD_LLC_DSAP << 1)
12061 +#define NH_FLD_LLC_CTRL (NH_FLD_LLC_DSAP << 2)
12062 +#define NH_FLD_LLC_ALL_FIELDS ((NH_FLD_LLC_DSAP << 3) - 1)
12064 +/*************************** NLPID fields **********************************/
12065 +#define NH_FLD_NLPID_NLPID (1)
12066 +#define NH_FLD_NLPID_ALL_FIELDS ((NH_FLD_NLPID_NLPID << 1) - 1)
12068 +/*************************** SNAP fields ***********************************/
12069 +#define NH_FLD_SNAP_OUI (1)
12070 +#define NH_FLD_SNAP_PID (NH_FLD_SNAP_OUI << 1)
12071 +#define NH_FLD_SNAP_ALL_FIELDS ((NH_FLD_SNAP_OUI << 2) - 1)
12073 +/*************************** LLC SNAP fields *******************************/
12074 +#define NH_FLD_LLC_SNAP_TYPE (1)
12075 +#define NH_FLD_LLC_SNAP_ALL_FIELDS ((NH_FLD_LLC_SNAP_TYPE << 1) - 1)
12077 +#define NH_FLD_ARP_HTYPE (1)
12078 +#define NH_FLD_ARP_PTYPE (NH_FLD_ARP_HTYPE << 1)
12079 +#define NH_FLD_ARP_HLEN (NH_FLD_ARP_HTYPE << 2)
12080 +#define NH_FLD_ARP_PLEN (NH_FLD_ARP_HTYPE << 3)
12081 +#define NH_FLD_ARP_OPER (NH_FLD_ARP_HTYPE << 4)
12082 +#define NH_FLD_ARP_SHA (NH_FLD_ARP_HTYPE << 5)
12083 +#define NH_FLD_ARP_SPA (NH_FLD_ARP_HTYPE << 6)
12084 +#define NH_FLD_ARP_THA (NH_FLD_ARP_HTYPE << 7)
12085 +#define NH_FLD_ARP_TPA (NH_FLD_ARP_HTYPE << 8)
12086 +#define NH_FLD_ARP_ALL_FIELDS ((NH_FLD_ARP_HTYPE << 9) - 1)
12088 +/*************************** RFC2684 fields ********************************/
12089 +#define NH_FLD_RFC2684_LLC (1)
12090 +#define NH_FLD_RFC2684_NLPID (NH_FLD_RFC2684_LLC << 1)
12091 +#define NH_FLD_RFC2684_OUI (NH_FLD_RFC2684_LLC << 2)
12092 +#define NH_FLD_RFC2684_PID (NH_FLD_RFC2684_LLC << 3)
12093 +#define NH_FLD_RFC2684_VPN_OUI (NH_FLD_RFC2684_LLC << 4)
12094 +#define NH_FLD_RFC2684_VPN_IDX (NH_FLD_RFC2684_LLC << 5)
12095 +#define NH_FLD_RFC2684_ALL_FIELDS ((NH_FLD_RFC2684_LLC << 6) - 1)
12097 +/*************************** User defined fields ***************************/
12098 +#define NH_FLD_USER_DEFINED_SRCPORT (1)
12099 +#define NH_FLD_USER_DEFINED_PCDID (NH_FLD_USER_DEFINED_SRCPORT << 1)
12100 +#define NH_FLD_USER_DEFINED_ALL_FIELDS \
12101 + ((NH_FLD_USER_DEFINED_SRCPORT << 2) - 1)
12103 +/*************************** Payload fields ********************************/
12104 +#define NH_FLD_PAYLOAD_BUFFER (1)
12105 +#define NH_FLD_PAYLOAD_SIZE (NH_FLD_PAYLOAD_BUFFER << 1)
12106 +#define NH_FLD_MAX_FRM_SIZE (NH_FLD_PAYLOAD_BUFFER << 2)
12107 +#define NH_FLD_MIN_FRM_SIZE (NH_FLD_PAYLOAD_BUFFER << 3)
12108 +#define NH_FLD_PAYLOAD_TYPE (NH_FLD_PAYLOAD_BUFFER << 4)
12109 +#define NH_FLD_FRAME_SIZE (NH_FLD_PAYLOAD_BUFFER << 5)
12110 +#define NH_FLD_PAYLOAD_ALL_FIELDS ((NH_FLD_PAYLOAD_BUFFER << 6) - 1)
12112 +/*************************** GRE fields ************************************/
12113 +#define NH_FLD_GRE_TYPE (1)
12114 +#define NH_FLD_GRE_ALL_FIELDS ((NH_FLD_GRE_TYPE << 1) - 1)
12116 +/*************************** MINENCAP fields *******************************/
12117 +#define NH_FLD_MINENCAP_SRC_IP (1)
12118 +#define NH_FLD_MINENCAP_DST_IP (NH_FLD_MINENCAP_SRC_IP << 1)
12119 +#define NH_FLD_MINENCAP_TYPE (NH_FLD_MINENCAP_SRC_IP << 2)
12120 +#define NH_FLD_MINENCAP_ALL_FIELDS \
12121 + ((NH_FLD_MINENCAP_SRC_IP << 3) - 1)
12123 +/*************************** IPSEC AH fields *******************************/
12124 +#define NH_FLD_IPSEC_AH_SPI (1)
12125 +#define NH_FLD_IPSEC_AH_NH (NH_FLD_IPSEC_AH_SPI << 1)
12126 +#define NH_FLD_IPSEC_AH_ALL_FIELDS ((NH_FLD_IPSEC_AH_SPI << 2) - 1)
12128 +/*************************** IPSEC ESP fields ******************************/
12129 +#define NH_FLD_IPSEC_ESP_SPI (1)
12130 +#define NH_FLD_IPSEC_ESP_SEQUENCE_NUM (NH_FLD_IPSEC_ESP_SPI << 1)
12131 +#define NH_FLD_IPSEC_ESP_ALL_FIELDS ((NH_FLD_IPSEC_ESP_SPI << 2) - 1)
12133 +#define NH_FLD_IPSEC_ESP_SPI_SIZE 4
12135 +/*************************** MPLS fields ***********************************/
12136 +#define NH_FLD_MPLS_LABEL_STACK (1)
12137 +#define NH_FLD_MPLS_LABEL_STACK_ALL_FIELDS \
12138 + ((NH_FLD_MPLS_LABEL_STACK << 1) - 1)
12140 +/*************************** MACSEC fields *********************************/
12141 +#define NH_FLD_MACSEC_SECTAG (1)
12142 +#define NH_FLD_MACSEC_ALL_FIELDS ((NH_FLD_MACSEC_SECTAG << 1) - 1)
12144 +/*************************** GTP fields ************************************/
12145 +#define NH_FLD_GTP_TEID (1)
12148 +/* Protocol options */
12150 +/* Ethernet options */
12151 +#define NH_OPT_ETH_BROADCAST 1
12152 +#define NH_OPT_ETH_MULTICAST 2
12153 +#define NH_OPT_ETH_UNICAST 3
12154 +#define NH_OPT_ETH_BPDU 4
12156 +#define NH_ETH_IS_MULTICAST_ADDR(addr) (addr[0] & 0x01)
12157 +/* also applicable for broadcast */
12159 +/* VLAN options */
12160 +#define NH_OPT_VLAN_CFI 1
12162 +/* IPV4 options */
12163 +#define NH_OPT_IPV4_UNICAST 1
12164 +#define NH_OPT_IPV4_MULTICAST 2
12165 +#define NH_OPT_IPV4_BROADCAST 3
12166 +#define NH_OPT_IPV4_OPTION 4
12167 +#define NH_OPT_IPV4_FRAG 5
12168 +#define NH_OPT_IPV4_INITIAL_FRAG 6
12170 +/* IPV6 options */
12171 +#define NH_OPT_IPV6_UNICAST 1
12172 +#define NH_OPT_IPV6_MULTICAST 2
12173 +#define NH_OPT_IPV6_OPTION 3
12174 +#define NH_OPT_IPV6_FRAG 4
12175 +#define NH_OPT_IPV6_INITIAL_FRAG 5
12177 +/* General IP options (may be used for any version) */
12178 +#define NH_OPT_IP_FRAG 1
12179 +#define NH_OPT_IP_INITIAL_FRAG 2
12180 +#define NH_OPT_IP_OPTION 3
12182 +/* Minenc. options */
12183 +#define NH_OPT_MINENCAP_SRC_ADDR_PRESENT 1
12185 +/* GRE options */
12186 +#define NH_OPT_GRE_ROUTING_PRESENT 1
12189 +#define NH_OPT_TCP_OPTIONS 1
12190 +#define NH_OPT_TCP_CONTROL_HIGH_BITS 2
12191 +#define NH_OPT_TCP_CONTROL_LOW_BITS 3
12193 +/* CAPWAP options */
12194 +#define NH_OPT_CAPWAP_DTLS 1
12197 + NET_PROT_NONE = 0,
12198 + NET_PROT_PAYLOAD,
12206 + NET_PROT_UDP_LITE,
12209 + NET_PROT_SCTP_CHUNK_DATA,
12213 + NET_PROT_PPPMUX_SUBFRM,
12215 + NET_PROT_L2TPV3_CTRL,
12216 + NET_PROT_L2TPV3_SESS,
12218 + NET_PROT_LLC_SNAP,
12222 + NET_PROT_IPSEC_AH,
12223 + NET_PROT_IPSEC_ESP,
12224 + NET_PROT_UDP_ENC_ESP, /* RFC 3948 */
12227 + NET_PROT_MINENCAP,
12232 + NET_PROT_CAPWAP_DATA,
12233 + NET_PROT_CAPWAP_CTRL,
12234 + NET_PROT_RFC2684,
12240 + NET_PROT_USER_DEFINED_L2,
12241 + NET_PROT_USER_DEFINED_L3,
12242 + NET_PROT_USER_DEFINED_L4,
12243 + NET_PROT_USER_DEFINED_L5,
12244 + NET_PROT_USER_DEFINED_SHIM1,
12245 + NET_PROT_USER_DEFINED_SHIM2,
12247 + NET_PROT_DUMMY_LAST
12251 +#define NH_IEEE8021Q_ETYPE 0x8100
12252 +#define NH_IEEE8021Q_HDR(etype, pcp, dei, vlan_id) \
12253 + ((((uint32_t)(etype & 0xFFFF)) << 16) | \
12254 + (((uint32_t)(pcp & 0x07)) << 13) | \
12255 + (((uint32_t)(dei & 0x01)) << 12) | \
12256 + (((uint32_t)(vlan_id & 0xFFF))))
12258 +#endif /* __FSL_NET_H */
12259 --- a/net/core/pktgen.c
12260 +++ b/net/core/pktgen.c
12261 @@ -2790,6 +2790,7 @@ static struct sk_buff *pktgen_alloc_skb(
12263 skb = __netdev_alloc_skb(dev, size, GFP_NOWAIT);
12265 + skb_reserve(skb, LL_RESERVED_SPACE(dev));
12267 /* the caller pre-fetches from skb->data and reserves for the mac hdr */