layerscape: add ls1088ardb device support
[openwrt/staging/lynxis/omap.git] / target / linux / layerscape / patches-4.4 / 7201-staging-dpaa2-eth-initial-commit-of-dpaa2-eth-driver.patch
From e588172442093fe22374dc1bfc88a7da751d6b30 Mon Sep 17 00:00:00 2001
From: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Date: Tue, 15 Sep 2015 10:14:16 -0500
Subject: [PATCH 201/226] staging: dpaa2-eth: initial commit of dpaa2-eth
 driver

commit 3106ece5d96784b63a4eabb26661baaefedd164f
[context adjustment]

This is a squash of the cumulative dpaa2-eth patches in the sdk 2.0
kernel as of 3/7/2016.

flib,dpaa2-eth: flib header update (Rebasing onto kernel 3.19, MC 0.6)

This patch was moved from the 4.0 branch.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
[Stuart: split into multiple patches]
Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>
Integrated-by: Jilong Guo <jilong.guo@nxp.com>

flib,dpaa2-eth: updated Eth (was: Rebasing onto kernel 3.19, MC 0.6)

Updated Ethernet driver from the 4.0 branch.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
[Stuart: cherry-picked patch from 4.0 and split it up]
Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>

Conflicts:

	drivers/staging/Makefile

Signed-off-by: Stuart Yoder <stuart.yoder@nxp.com>

dpaa2-eth: Adjust 'options' size

The 'options' field of various MC configuration structures has changed
from u64 to u32 as of MC firmware version 7.0.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I9ba0c19fc22f745e6be6cc40862afa18fa3ac3db
Reviewed-on: http://git.am.freescale.net:8181/35579
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Selectively disable preemption

Temporary workaround for an MC Bus API quirk that forces us to choose
between a spinlock-protected MC portal and a mutex-protected one, but
then tries to match the runtime context in order to enforce their
usage.

To be reverted.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: Ida2ec1fdbdebfd2e427f96ddad7582880146fda9
Reviewed-on: http://git.am.freescale.net:8181/35580
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Fix ethtool bug

We were writing beyond the end of the allocated data area for ethtool
statistics.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Change-Id: I6b77498a78dad06970508ebbed7144be73854f7f
Reviewed-on: http://git.am.freescale.net:8181/35583
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Retry read if store unexpectedly empty

After we place a volatile dequeue command, we might inquire the store
before the DMA has actually completed. In such cases, we must retry,
lest the store be overwritten by the next legitimate volatile dequeue.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I314fbb8b4d9f589715e42d35fc6677d726b8f5ba
Reviewed-on: http://git.am.freescale.net:8181/35584
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

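The retry-on-empty-store policy above can be sketched as a bounded polling loop: an empty store before DMA completion means "try again", not "dequeue finished". This is a userspace model with made-up names, not the driver's actual QBMan code.

```c
#include <assert.h>

/* Hypothetical model of the store-retry logic: after a volatile dequeue
 * command, the store may still be empty because the DMA has not landed,
 * so the reader polls again instead of treating the empty result as the
 * end of the dequeue. */
struct store {
	int dma_done;	/* set once the dequeue results have landed */
	int frames;	/* number of frames the dequeue produced */
};

/* Returns the number of frames read, or -1 if we gave up. */
static int store_read(struct store *s, int max_polls)
{
	for (int i = 0; i < max_polls; i++) {
		if (s->dma_done)
			return s->frames;
		/* Store is *unexpectedly* empty: retry rather than
		 * declare the dequeue finished. Simulate the DMA
		 * completing before the next poll. */
		s->dma_done = 1;
	}
	return -1;
}
```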
flib: Fix "missing braces around initializer" warning

GCC does not (yet?) support the = {0} initializer in the case of an
array of structs. Fix the FLib in order to make the warning go away.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I8782ecb714c032cfeeecf4c8323cf9dbb702b10f
Reviewed-on: http://git.am.freescale.net:8181/35586
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
Tested-by: Stuart Yoder <stuart.yoder@freescale.com>

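The warning in question comes from `-Wmissing-braces`: initializing an array of structs with a bare `= {0}` triggers it, while the fully braced form zero-initializes identically without complaint. A standalone illustration (not the actual FLib structs):

```c
#include <assert.h>

struct cmd {
	int id;
	int flags;
};

/* With -Wmissing-braces, GCC warns on: struct cmd cmds[4] = {0};
 * The fully braced form below is warning-free and still zeroes
 * every element. */
static struct cmd cmds[4] = { { 0 } };

static int all_zero(void)
{
	for (int i = 0; i < 4; i++)
		if (cmds[i].id || cmds[i].flags)
			return 0;
	return 1;
}
```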
Revert "dpaa2-eth: Selectively disable preemption"

This reverts commit e1455823c33b8dd48b5d2d50a7e8a11d3934cc0d.

dpaa2-eth: Fix memory leak

A buffer kmalloc'ed at probe time was not freed after it was no
longer needed.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Change-Id: Iba197209e9203ed306449729c6dcd23ec95f094d
Reviewed-on: http://git.am.freescale.net:8181/35756
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Remove unused field in ldpaa_eth_priv structure

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Change-Id: I124c3e4589b6420b1ea5cc05a03a51ea938b2bea
Reviewed-on: http://git.am.freescale.net:8181/35757
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Fix "NOHZ: local_softirq_pending" warning

Explicitly run softirqs after we enable NAPI. This in particular gets
rid of the "NOHZ: local_softirq_pending" warnings, but it also solves a
couple of other problems, among which fluctuating performance and high
ping latencies.

Notes:
- This will prevent us from timely processing notifications and
  other "non-frame events" coming into the software portal. So far,
  though, we only expect Dequeue Available Notifications, so this patch
  is good enough for now.
- A degradation in console responsiveness is expected, especially in
  cases where the bottom half runs on the same CPU as the console.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Change-Id: Ia6f11da433024e80ee59e821c9eabfa5068df5e5
Reviewed-on: http://git.am.freescale.net:8181/35830
Reviewed-by: Alexandru Marginean <Alexandru.Marginean@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
Tested-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Add polling mode for link state changes

Add a Kconfig option to use a thread for polling on the link state
instead of relying on interrupts from the MC.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Change-Id: If2fe66fc5c0fbee2568d7afa15d43ea33f92e8e2
Reviewed-on: http://git.am.freescale.net:8181/35967
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Update copyright years.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I7e00eecfc5569027c908124726edaf06be357c02
Reviewed-on: http://git.am.freescale.net:8181/37666
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Drain bpools when netdev is down

In a data path layout with potentially a dozen interfaces, not all of
them may be up at the same time, yet they may consume a fair amount of
buffer space.
Drain the buffer pool upon ifdown and re-seed it at ifup.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I24a379b643c8b5161a33b966c3314cf91024ed4a
Reviewed-on: http://git.am.freescale.net:8181/37667
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Interrupts cleanup

Add the code for cleaning up interrupts on driver removal.
This was lost during the transition from kernel 3.16 to 3.19.

Also, there's no need to call devm_free_irq() if probe fails,
as the kernel will release all driver resources.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Change-Id: Ifd404bbf399d5ba62e2896371076719c1d6b4214
Reviewed-on: http://git.am.freescale.net:8181/36199
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Reviewed-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
Reviewed-on: http://git.am.freescale.net:8181/37690

dpaa2-eth: Ethtool support for hashing

Only one set of header fields is supported for all protocols; the
driver silently replaces the previous configuration regardless of the
user-selected protocol.
The following fields are supported:
    L2DA
    VLAN tag
    L3 proto
    IP SA
    IP DA
    L4 bytes 0 & 1 [TCP/UDP src port]
    L4 bytes 2 & 3 [TCP/UDP dst port]

Signed-off-by: Alex Marginean <alexandru.marginean@freescale.com>

Change-Id: I97c9dac1b842fe6bc7115e40c08c42f67dee8c9c
Reviewed-on: http://git.am.freescale.net:8181/37260
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

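The "silently replaces" semantics above can be modeled as a single global field set that each configuration request overwrites rather than merges. Flag names below are illustrative, not the driver's:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of the hashing configuration: one field set is
 * kept for all protocols, so each new request replaces the previous
 * one regardless of which protocol the user selected. */
#define HASH_L2DA	(1u << 0)
#define HASH_VLAN	(1u << 1)
#define HASH_L3_PROTO	(1u << 2)
#define HASH_IP_SA	(1u << 3)
#define HASH_IP_DA	(1u << 4)
#define HASH_L4_SRC	(1u << 5)
#define HASH_L4_DST	(1u << 6)

static uint32_t hash_fields;	/* the one global configuration */

static void set_hash_fields(uint32_t fields)
{
	hash_fields = fields;	/* replace, not merge */
}
```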
dpaa2-eth: Fix maximum number of FQs

The maximum number of Rx/Tx conf FQs associated with a DPNI was not
updated when the implementation changed. It just happened to work
by accident.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I03e30e0121a40d0d15fcdc4bee1fb98caa17c0ef
Reviewed-on: http://git.am.freescale.net:8181/37668
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Fix Rx buffer address alignment

We need to align the start address of the Rx buffers to
LDPAA_ETH_BUF_ALIGN bytes. We were using SMP_CACHE_BYTES instead.
It happened to work because both defines have the value 64,
but this may change at some point.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I08a0f3f18f82c5581c491bd395e3ad066b25bcf5
Reviewed-on: http://git.am.freescale.net:8181/37669
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

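The alignment fix amounts to rounding the buffer start up to the DPAA2-mandated boundary. A userspace sketch of the align-up computation (mirroring the kernel's `ALIGN()` for power-of-two values; the constant happens to equal the cache-line size, which is exactly why the bug was latent):

```c
#include <assert.h>
#include <stdint.h>

/* Align Rx buffer addresses to the hardware-mandated boundary rather
 * than the cache-line size. Both are 64 on this platform, but only
 * LDPAA_ETH_BUF_ALIGN is the actual hardware requirement. */
#define LDPAA_ETH_BUF_ALIGN 64u

static uintptr_t buf_align(uintptr_t addr)
{
	return (addr + LDPAA_ETH_BUF_ALIGN - 1) &
	       ~(uintptr_t)(LDPAA_ETH_BUF_ALIGN - 1);
}
```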
dpaa2-eth: Add buffer count to ethtool statistics

Print the number of buffers available in the pool for a certain DPNI
along with the rest of the ethtool -S stats.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: Ia1f5cf341c8414ae2058a73f6bc81490ef134592
Reviewed-on: http://git.am.freescale.net:8181/37671
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Add Rx error queue

Add a Kconfig option that allows Rx error frames to be
enqueued on an error FQ. By default error frames are discarded,
but for debug purposes we may want to process them at driver
level.

Note: Checkpatch issues a false positive about complex macros that
should be parenthesized.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I7d19d00b5d5445514ebd112c886ce8ccdbb1f0da
Reviewed-on: http://git.am.freescale.net:8181/37672
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
Tested-by: Stuart Yoder <stuart.yoder@freescale.com>

staging: fsl-dpaa2: FLib headers cleanup

Going with the flow of moving fsl-dpaa2 headers into the drivers'
location rather than keeping them all in one place.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: Ia2870cd019a4934c7835d38752a46b2a0045f30e
Reviewed-on: http://git.am.freescale.net:8181/37674
Reviewed-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
Tested-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Klocwork fixes

Fix several issues reported by Klocwork.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I1e23365765f3b0ff9b6474d8207df7c1f2433ccd
Reviewed-on: http://git.am.freescale.net:8181/37675
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Probe devices with no hash support

Don't fail at probe if the DPNI doesn't have the hash distribution
option enabled. Instead, initialize a single Rx frame queue and
use it for all incoming traffic.

Rx flow hashing configuration through ethtool will not work
in this case.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: Iaf17e05b15946e6901c39a21b5344b89e9f1d797
Reviewed-on: http://git.am.freescale.net:8181/37676
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Process frames in IRQ context

Stop using threaded IRQs and move back to hardirq top halves.
This is the first patch of a small series adapting the DPIO and Ethernet
code to these changes.

Signed-off-by: Roy Pledge <roy.pledge@freescale.com>
Tested-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Tested-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
[Stuart: split dpio and eth into separate patches, updated subject]
Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Fix bug in NAPI poll

We incorrectly rearmed FQDAN notifications at the end of a NAPI cycle,
regardless of whether the NAPI budget was consumed or not. We only need
to rearm notifications if the NAPI cycle cleaned fewer frames than its
budget; otherwise a new NAPI poll will be scheduled anyway.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: Ib55497bdbd769047420b3150668f2e2aef3c93f6
Reviewed-on: http://git.am.freescale.net:8181/38317
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

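The rearm condition described in the NAPI fix can be sketched as follows: complete NAPI and rearm notifications only when the poll cleaned fewer frames than its budget; at full budget another poll is already guaranteed, so rearming would be redundant (and was the bug). Names are illustrative, not the driver's:

```c
#include <assert.h>

/* Model of the corrected NAPI-poll exit logic. Returns 1 if NAPI
 * should be completed (and notifications rearmed), 0 if the poll
 * should stay scheduled. */
static int napi_poll_done(int cleaned, int budget, int *rearmed)
{
	if (cleaned < budget) {
		*rearmed = 1;	/* no more work expected: rearm FQDAN */
		return 1;	/* complete NAPI */
	}
	*rearmed = 0;		/* budget exhausted: a new poll follows */
	return 0;
}
```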
dpaa2-eth: Use dma_map_sg on Tx

Use the simpler dma_map_sg() along with the scatterlist API if the
egress frame is scatter-gather, at the cost of keeping some extra
information in the frame's software annotation area.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: If293aeabbd58d031f21456704357d4ff7e53c559
Reviewed-on: http://git.am.freescale.net:8181/37681
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Reduce retries if Tx portal busy

Too many retries due to Tx portal contention led to significant cycle
waste and reduced performance.
Reduce the number of enqueue retries and drop the frame if still
unsuccessful.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: Ib111ec61cd4294a7632348c25fa3d7f4002be0c0
Reviewed-on: http://git.am.freescale.net:8181/37682
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

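The bounded-retry policy above amounts to attempting the enqueue a small fixed number of times and dropping the frame (while counting it) if the portal stays busy. A userspace model; the retry limit and names are assumptions, not the driver's actual values:

```c
#include <assert.h>

#define TX_ENQ_RETRIES 3	/* illustrative limit */

struct portal {
	int busy_count;		/* remaining simulated -EBUSY results */
};

static int portal_enqueue(struct portal *p)
{
	if (p->busy_count > 0) {
		p->busy_count--;
		return -1;	/* portal contended */
	}
	return 0;		/* frame enqueued */
}

/* Try a bounded number of times; on persistent contention, drop the
 * frame instead of spinning on the portal. */
static int tx_enqueue(struct portal *p, int *tx_dropped)
{
	for (int i = 0; i < TX_ENQ_RETRIES; i++)
		if (portal_enqueue(p) == 0)
			return 0;
	(*tx_dropped)++;
	return -1;
}
```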
dpaa2-eth: Add sysfs support for TxConf affinity change

This adds support in sysfs for affining Tx Confirmation queues to GPPs,
via the affine DPIO objects.

The user can specify a cpu list in /sys/class/net/ni<X>/txconf_affinity
to which the Ethernet driver will affine the TxConf FQs, in round-robin
fashion. This is naturally a bit coarse, because there is no "official"
mapping of the transmitting CPUs to Tx Confirmation queues.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I4b3da632e202ceeb22986c842d746aafe2a87a81
Reviewed-on: http://git.am.freescale.net:8181/37684
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Implement ndo_select_queue

Use a very simple selection function for the egress FQ. The purpose
behind this is to more evenly distribute Tx Confirmation traffic,
especially in the case of multiple egress flows, when bundling it all on
CPU 0 would make that CPU a bottleneck.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: Ibfe8aad7ad5c719cc95d7817d7de6d2094f0f7ed
Reviewed-on: http://git.am.freescale.net:8181/37685
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

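A "very simple selection function" of the kind described can be modeled as spreading flows across queues by sender CPU; using the CPU id modulo the queue count is an assumption for illustration, not necessarily the driver's exact formula:

```c
#include <assert.h>

/* Minimal model of egress FQ selection: spread Tx (and thus Tx
 * Confirmation) traffic across all queues instead of piling
 * everything onto queue 0. */
static unsigned int select_queue(unsigned int cpu, unsigned int num_queues)
{
	return cpu % num_queues;
}
```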
dpaa2-eth: Reduce TxConf NAPI weight back to 64

It turns out that not only did the kernel frown upon the old budget of
256, but the measured values were well below that anyway.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I62ddd3ea1dbfd8b51e2bcb2286e0d5eb10ac7f27
Reviewed-on: http://git.am.freescale.net:8181/37688
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Try refilling the buffer pool less often

We used to check whether the buffer pool needed refilling on each Rx
frame. Instead, do that check (and the actual buffer release if
needed) only after a pull dequeue.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Change-Id: Id52fab83873c40a711b8cadfcf909eb7e2e210f3
Reviewed-on: http://git.am.freescale.net:8181/38318
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Stay in NAPI if exact budget is met

An off-by-one bug would cause premature exiting from the NAPI cycle.
Performance degradation is particularly severe in IPFWD cases.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Tested-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I9de2580c7ff8e46cbca9613890b03737add35e26
Reviewed-on: http://git.am.freescale.net:8181/37908
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

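The off-by-one can be pictured directly: NAPI must stay scheduled when exactly `budget` frames were cleaned; a strict `>` comparison (the bug, as reconstructed here for illustration) would complete NAPI at exact budget and exit prematurely:

```c
#include <assert.h>

/* Corrected condition: stay in the NAPI cycle when the poll cleaned
 * at least its full budget. The buggy variant effectively used
 * `cleaned > budget`, exiting early at exact budget. */
static int should_stay_in_napi(int cleaned, int budget)
{
	return cleaned >= budget;
}
```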
dpaa2-eth: Minor changes to FQ stats

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Change-Id: I0ced0e7b2eee28599cdea79094336c0d44f0d32b
Reviewed-on: http://git.am.freescale.net:8181/38319
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Support fewer DPIOs than CPUs

The previous DPIO functions would transparently choose a (perhaps
non-affine) CPU if the required CPU was not available. Now that their API
contract is enforced, we must make an explicit request for *any* DPIO if
the request for an *affine* DPIO has failed.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: Ib08047ffa33518993b1ffa4671d0d4f36d6793d0
Reviewed-on: http://git.am.freescale.net:8181/38320
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: cosmetic changes in hashing code

Signed-off-by: Alex Marginean <alexandru.marginean@freescale.com>
Change-Id: I79e21a69a6fb68cdbdb8d853c059661f8988dbf9
Reviewed-on: http://git.am.freescale.net:8181/37258
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Prefetch data before initial access

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Change-Id: Ie8f0163651aea7e3e197a408f89ca98d296d4b8b
Reviewed-on: http://git.am.freescale.net:8181/38753
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Use netif_receive_skb

netif_rx() is a leftover from our pre-NAPI codebase.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Change-Id: I02ff0a059862964df1bf81b247853193994c2dfe
Reviewed-on: http://git.am.freescale.net:8181/38754
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Use napi_alloc_frag() on Rx.

A bit better suited than netdev_alloc_frag().

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Change-Id: I8863a783502db963e5dc968f049534c36ad484e2
Reviewed-on: http://git.am.freescale.net:8181/38755
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Silence skb_realloc_headroom() warning

pktgen tests tend to be too noisy because pktgen does not observe the
net device's needed_headroom specification and we used to be pretty loud
about that. We'll print the warning message just once.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I3c12eba29c79aa9c487307d367f6d9f4dbe447a3
Reviewed-on: http://git.am.freescale.net:8181/38756
Reviewed-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@freescale.com>
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Print message upon device unplugging

Give a console notification when a DPNI is unplugged. This is useful for
automated tests to know when the operation (which is not instantaneous)
has finished.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: If33033201fcee7671ad91c2b56badf3fb56a9e3e
Reviewed-on: http://git.am.freescale.net:8181/38757
Reviewed-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@freescale.com>
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Add debugfs support

Add debugfs entries for showing detailed per-CPU and per-FQ
counters for each network interface. Also add a knob for
resetting these stats.
The aggregated interface statistics were already available through
ethtool -S.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I55f5bfe07a15b0d1bf0c6175d8829654163a4318
Reviewed-on: http://git.am.freescale.net:8181/38758
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
Tested-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: limited support for flow steering

Steering is supported on a subset of fields, including DMAC, IP SRC
and DST, and L4 ports.
Steering and hashing configurations depend on each other, which makes
the whole thing tricky to configure. Currently FS can be configured
using only the fields selected for hashing, and all the hashing fields
must be included in the match key - masking doesn't work yet.

Signed-off-by: Alex Marginean <alexandru.marginean@freescale.com>
Change-Id: I9fa3199f7818a9a5f9d69d3483ffd839056cc468
Reviewed-on: http://git.am.freescale.net:8181/38759
Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Rename files into the dpaa2 nomenclature

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I1c3d62e2f19a59d4b65727234fd7df2dfd8683d9
Reviewed-on: http://git.am.freescale.net:8181/38965
Reviewed-by: Alexandru Marginean <Alexandru.Marginean@freescale.com>
Reviewed-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
Tested-by: Stuart Yoder <stuart.yoder@freescale.com>

staging: dpaa2-eth: migrated remaining flibs for MC fw 8.0.0

Signed-off-by: J. German Rivera <German.Rivera@freescale.com>
[Stuart: split eth part into separate patch, updated subject]
Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Clear 'backup_pool' attribute

The new MC-0.7 firmware allows specifying an alternate buffer pool, but
we are not currently using that feature.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Change-Id: I0a6e6626512b7bbddfef732c71f1400b67f3e619
Reviewed-on: http://git.am.freescale.net:8181/39149
Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Do programming of MSIs in devm_request_threaded_irq()

With the new dprc_set_obj_irq() we can now program MSIs in the device
in the callback invoked from devm_request_threaded_irq().
Since this callback is invoked with interrupts disabled, we need to
use an atomic portal, instead of the root DPRC's built-in portal,
which is non-atomic.

Signed-off-by: Itai Katz <itai.katz@freescale.com>
Signed-off-by: J. German Rivera <German.Rivera@freescale.com>
[Stuart: split original patch into multiple patches]
Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>

dpaa2-eth: Do not map beyond skb tail

On Tx, dma_map only up to skb->tail, rather than skb->end.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>

dpaa2-eth: Declare NETIF_F_LLTX as a capability

We are effectively doing lock-less Tx.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>

dpaa2-eth: Avoid bitcopy of 'backpointers' struct

Make 'struct ldpaa_eth_swa bps' a pointer and avoid copying it on both
Tx and TxConf.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>

dpaa2-eth: Use CDANs instead of FQDANs

Use Channel Dequeue Available Notifications (CDANs) instead of
Frame Queue notifications. We allocate a QMan channel (or DPCON
object) for each available cpu and assign to it the Rx and Tx conf
queues associated with that cpu.

We usually want to have affine DPIOs and DPCONs (one for each core).
If this is not possible due to insufficient resources, we distribute
all ingress traffic on the cores with affine DPIOs.

NAPI instances are now one per channel instead of one per FQ, as the
interrupt source changes. Statistics counters change accordingly.

Note that after this commit is applied, one needs to provide sufficient
DPCON objects (either through DPL or restool) in order for the Ethernet
interfaces to work.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>

dpaa2-eth: Cleanup debugfs statistics

Several minor changes to statistics reporting:
* Fix print alignment of statistics counters
* Fix a naming ambiguity in the cpu_stats debugfs ops
* Add Rx/Tx error counters; these were already used, but not
  reported in the per-CPU stats

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>

dpaa2-eth: Add tx shaping configuration in sysfs

Egress traffic can be shaped via a per-DPNI sysfs entry:
    echo M N > /sys/class/net/ni<X>/tx_shaping
where:
    M is the maximum throughput, expressed in Mbps.
    N is the maximum burst size, expressed in bytes, at most 64000.

To remove shaping, use M=0, N=0.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>

dpaa2-eth: Fix "Tx busy" counter

Under heavy egress load, when a large number of the transmitted packets
could not be sent because of high portal contention, the "Tx busy"
counter was not properly incremented.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>

dpaa2-eth: Fix memory cleanup in case of Tx congestion

The error path of ldpaa_eth_tx() was not properly freeing the SGT buffer
if the enqueue had failed because of congestion. DMA unmapping was
missing, too.

Factor the code originally inside the TxConf callback out into a
separate function that is called on both the TxConf and Tx paths.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>

dpaa2-eth: Use napi_gro_receive()

Call napi_gro_receive(), effectively enabling GRO.
NOTE: We could further optimize this by looking ahead in the parse
results received from hardware and only using GRO when the L3+L4
combination is appropriate.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>

dpaa2-eth: Fix compilation of Rx Error FQ code

Conditionally compiled code slipped through the cracks when the FLibs
were updated.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>

fsl-dpaa2: Add Kconfig dependency on DEBUG_FS

The driver's debugfs support depends on the generic CONFIG_DEBUG_FS.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>

dpaa2-eth: Fix interface down/up bug

If a networking interface was brought down while still receiving
ingress traffic, the delay between DPNI disable and NAPI disable
was not enough to ensure all in-flight frames got processed.
Instead, some frames were left pending in the Rx queues. If the
net device was then removed (i.e. restool unbind/unplug), the
call to dpni_reset() silently failed and the kernel crashed on
device replugging.

Fix this by increasing the FQ drain time. Also, at ifconfig up
we enable NAPI before starting the DPNI, to make sure we don't
miss any early CDANs.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>

dpaa2-eth: Iterate only through initialized channels

The number of DPIO objects available to a DPNI may be fewer than the
number of online cores. A typical example would be a DPNI with a
distribution size smaller than 8. Since we only initialize as many
channels (DPCONs) as there are DPIOs, iterating through all online cpus
would produce a nasty oops when retrieving ethtool stats.

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>

net: pktgen: Observe needed_headroom of the device

Allocate enough space so as not to force the outgoing net device to do
skb_realloc_headroom().

Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

732 dpaa2-eth: Trace buffer pool seeding
733
734 Add ftrace support for buffer pool seeding. Individual buffers are
735 described by virtual and dma addresses and sizes, as well as by bpid.
736
737 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
738
739 dpaa2-eth: Explicitly set carrier off at ifconfig up
740
741 If we don't, netif_carrier_ok() will still return true even if the link
742 state is marked as LINKWATCH_PENDING, which in a dpni-2-dpni case may
743 last indefinitely. This will cause "ifconfig up" followed by "ip
744 link show" to report LOWER_UP when the peer DPNI is still down (and in
745 fact before we've even received any link notification at all).
746
747 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
748
749 dpaa2-eth: Fix FQ type in stats print
750
751 Fix a bug where the type of the Rx error queue was printed
752 incorrectly in the debugfs statistics.
753
754 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
755
756 dpaa2-eth: Don't build debugfs support as a separate module
757
758 Instead have module init and exit functions declared explicitly for
759 the Ethernet driver and initialize/destroy the debugfs directory there.
760
761 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
762
763 dpaa2-eth: Remove debugfs #ifdefs from dpaa2-eth.c
764
765 Instead of conditionally compiling the calls to debugfs init
766 functions in dpaa2-eth.c, define no-op stubs for these functions
767 in case the debugfs Kconfig option is not enabled. This makes
768 the code more readable.
769
770 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
771 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
772
773 dpaa2-eth: Use napi_complete_done()
774
775 Replace napi_complete() with napi_complete_done().
776
777 Together with setting /sys/class/net/ethX/gro_flush_timeout, this
778 allows us to take better advantage of GRO coalescing and improves
779 throughput and cpu load in TCP termination tests.
780
781 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
782
783 dpaa2-eth: Fix error path in probe
784
785 NAPI delete was called at the wrong place when exiting probe
786 function on an error path
787
788 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
789
790 dpaa2-eth: Allocate channels based on queue count
791
792 Limit the number of channels allocated per DPNI to the maximum
793 between the number of Rx queues per traffic class (distribution size)
794 and Tx confirmation queues (number of tx flows).
795 If this happens to be larger than the number of available cores, only
796 allocate one channel for each core and distribute the frame queues on
797 the cores/channels in a round robin fashion.
798
799 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
800 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
801
802 dpaa2-eth: Use DPNI setting for number of Tx flows
803
804 Instead of creating one Tx flow for each online cpu, use the DPNI
805 attributes for deciding how many senders we have.
806
807 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
808
809 dpaa2-eth: Renounce sentinel in enum dpni_counter
810
811 Bring back the Flib header dpni.h to its initial content by removing the
812 sentinel value in enum dpni_counter.
813
814 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
815
816 dpaa2-eth: Fix Rx queue count
817
818 We were missing a roundup to the next power of 2 in order to be in sync
819 with the MC implementation.
820 Actually, moved that logic in a separate function which we'll remove
821 once the MC API is updated.
822
823 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
824
825 dpaa2-eth: Unmap the S/G table outside ldpaa_eth_free_rx_fd
826
827 The Scatter-Gather table is already unmapped outside ldpaa_eth_free_rx_fd
828 so no need to try to unmap it once more.
829
830 Signed-off-by: Cristian Sovaiala <cristian.sovaiala@freescale.com>
831
832 dpaa2-eth: Use napi_schedule_irqoff()
833
834 At the time we schedule NAPI, the Dequeue Available Notifications (which
835 are the de facto triggers of NAPI processing) are already disabled.
836
837 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
838 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
839
840 net: Fix ethernet Kconfig
841
842 Re-add missing 'source' directive. This exists on the integration
843 branch, but was mistakenly removed by an earlier dpaa2-eth rebase.
844
845 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
846
847 dpaa2-eth: Manually update link state at ifup
848
849 The DPMAC may have handled the link state notification before the DPNI
850 is up. A new PHY state transition may not subsequently occur, so the
851 DPNI must initiate a read of the DPMAC state.
852
853 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
854
855 dpaa2-eth: Stop carrier upon ifdown
856
857 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
858
859 dpaa2-eth: Fix print messages in link state handling code
860
861 Avoid an "(uninitialized)" message during DPNI probe by replacing
862 netdev_info() with its corresponding dev_info().
863 Purge some related comments and add some netdev messages to assist
864 link state debugging.
865 Remove an excessively defensive assertion.
866
867 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
868
869 dpaa2-eth: Do not allow ethtool settings change while the NI is up
870
871 Due to a MC limitation, link state changes while the DPNI is enabled
872 will fail. For now, we'll just prevent the call from going down to the MC
873 if we know it will fail.
874
875 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
876
877 dpaa2-eth: Reduce ethtool messages verbosity
878
879 Transform a couple of netdev_info() calls into netdev_dbg().
880
881 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
882
883 dpaa2-eth: Only unmask IRQs that we actually handle
884
885 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
886
887 dpaa2-eth: Produce fewer boot log messages
888
889 No longer print one line for each all-zero hwaddr that was replaced with
890 a random MAC address; just inform the user once that this has occurred.
891 And reduce redundancy of some printouts in the bootlog.
892
893 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
894
895 dpaa2-eth: Fix big endian issue
896
897 We were not doing any endianness conversions on the scatter gather
898 table entries, which caused problems on big endian kernels.
899
900 For frame descriptors the QMan driver takes care of this transparently,
901 but in the case of SG entries we need to do it ourselves.
902
903 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
904
905 dpaa2-eth: Force atomic context for lazy bpool seeding
906
907 We use the same ldpaa_bp_add_7() function for initial buffer pool
908 seeding (from .ndo_open) and for hotpath pool replenishing. The function
909 is using napi_alloc_frag() as an optimization for the Rx datapath, but
910 that turns out to require atomic execution because of a this_cpu_ptr()
911 call down its stack.
912 This patch temporarily disables preemption around the initial seeding of
913 the Rx buffer pool.
914
915 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
916
917 dpaa-eth: Integrate Flib version 0.7.1.2
918
919 Although API-compatible with 0.7.1.1, there are some ABI changes
920 that warrant a new integration.
921
922 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
923
924 dpaa2-eth: No longer adjust max_dist_per_tc
925
926 The MC firmware until version 0.7.1.1/8.0.2 requires that
927 max_dist_per_tc have the value expected by the hardware, which would be
928 different from what the user expects. MC firmware 0.7.1.2/8.0.5 fixes
929 that, so we remove our transparent conversion.
930
931 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
932
933 dpaa2-eth: Enforce 256-byte Rx alignment
934
935 Hardware erratum enforced by MC requires that Rx buffer lengths and
936 addresses be 256-byte aligned.
937
938 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
939
940 dpaa2-eth: Rename Tx buf alignment macro
941
942 The existing "BUF_ALIGN" macro remained confined to Tx usage, after
943 separate alignment was introduced for Rx. Renaming accordingly.
944
945 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
946
947 dpaa2-eth: Fix hashing distribution size
948
949 Commit be3fb62623e4338e60fb60019f134b6055cbc127
950 Author: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
951 Date: Fri Oct 23 18:26:44 2015 +0300
952
953 dpaa2-eth: No longer adjust max_dist_per_tc
954
955 missed one usage of the ldpaa_queue_count() function, making
956 distribution size inadvertently lower.
957
958 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
959
960 dpaa2-eth: Remove ndo_select_queue
961
962 Our implementation of ndo_select_queue would lead to questions regarding
963 our support for qdiscs. Until we find an optimal way to select the txq
964 without breaking future qdisc integration, just remove the
965 ndo_select_queue callback entirely and let the stack figure out the
966 flow.
967 This incurs a ~2-3% penalty on some performance tests.
968
969 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
970
971 dpaa2-eth: Select TxConf FQ based on processor id
972
973 Use smp_processor_id instead of skb queue mapping to determine the tx
974 flow id and implicitly the confirmation queue.
975
976 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
977
978 dpaa2-eth: Reduce number of buffers in bpool
979
980 Reduce the maximum number of buffers in each buffer pool associated
981 with a DPNI. This in turn reduces the number of memory allocations
982 performed in a single batch when buffers fall below a certain
983 threshold.
984
985 Provides a significant performance boost (~5-10% increase) on both
986 termination and forwarding scenarios, while also reducing the driver
987 memory footprint.
988
989 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
990
991 dpaa2-eth: Replace "ldpaa" with "dpaa2"
992
993 Replace all instances of "ldpaa"/"LDPAA" in the Ethernet driver
994 (names of functions, structures, macros, etc), with "dpaa2"/"DPAA2",
995 except for DPIO API function calls.
996
997 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
998
999 dpaa2-eth: rename ldpaa to dpaa2
1000
1001 Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>
1002 (Stuart: this patch was split out from the original global rename patch)
1003 Signed-off-by: Stuart Yoder <stuart.yoder@nxp.com>
1004
1005 dpaa2-eth: Rename dpaa_io_query_fq_count to dpaa2_io_query_fq_count
1006
1007 Signed-off-by: Cristian Sovaiala <cristian.sovaiala@freescale.com>
1008
1009 fsl-dpio: rename dpaa_* structure to dpaa2_*
1010
1011 Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>
1012
1013 dpaa2-eth, dpni, fsl-mc: Updates for MC0.8.0
1014
1015 Several changes need to be performed in sync for supporting
1016 the newest MC version:
1017 * Update mc-cmd.h
1018 * Update the dpni binary interface to v6.0
1019 * Update the DPAA2 Eth driver to account for several API changes
1020
1021 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1022
1023 staging: fsl-dpaa2: ethernet: add support for hardware timestamping
1024
1025 Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com>
1026
1027 fsl-dpaa2: eth: Do not set bpid in egress fd
1028
1029 We don't do FD recycling on egress, BPID is therefore not necessary.
1030
1031 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1032
1033 fsl-dpaa2: eth: Amend buffer refill comment
1034
1035 A change request has been pending for placing an upper bound to the
1036 buffer replenish logic on Rx. However, short of practical alternatives,
1037 resort to amending the relevant comment and rely on ksoftirqd to
1038 guarantee interactivity.
1039
1040 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1041
1042 fsl-dpaa2: eth: Configure a taildrop threshold for each Rx frame queue.
1043
1044 The selected value allows for Rx jumbo (10K) frames processing
1045 while at the same time helps balance the system in the case of
1046 IP forwarding.
1047
1048 Also compute the number of buffers in the pool based on the TD
1049 threshold to avoid starving some of the ingress queues in small
1050 frames, high throughput scenarios.
1051
1052 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1053
1054 fsl-dpaa2: eth: Check objects' FLIB version
1055
1056 Make sure we support the DPNI, DPCON and DPBP version, otherwise
1057 abort probing early on and provide an error message.
1058
1059 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1060
1061 fsl-dpaa2: eth: Remove likely/unlikely from cold paths
1062
1063 Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
1064 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1065
1066 fsl-dpaa2: eth: Remove __cold attribute
1067
1068 Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
1069
1070 fsl-dpaa2: eth: Replace netdev_XXX with dev_XXX before register_netdevice()
1071
1072 Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
1073 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1074
1075 fsl-dpaa2: eth: Fix coccinelle issue
1076
1077 drivers/staging/fsl-dpaa2/ethernet/dpaa2-ethtool.c:687:1-36: WARNING:
1078 Assignment of bool to 0/1
1079
1080 Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
1081
1082 fsl-dpaa2: eth: Fix minor spelling issue
1083
1084 Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
1085
1086 fsl-dpaa2: eth: Add a couple of 'unlikely' on hot path
1087
1088 Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
1089 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1090
1091 fsl-dpaa2: eth: Fix a bunch of minor issues found by static analysis tools
1092
1093 As found by Klocworks and Checkpatch:
1094 - Unused variables
1095 - Integer type replacements
1096 - Unchecked memory allocations
1097 - Whitespace, alignment and newlining
1098
1099 Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
1100 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1101
1102 fsl-dpaa2: eth: Remove "inline" keyword from static functions
1103
1104 Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
1105
1106 fsl-dpaa2: eth: Remove BUG/BUG_ONs
1107
1108 Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
1109 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1110
1111 fsl-dpaa2: eth: Use NAPI_POLL_WEIGHT
1112
1113 No need to define our own macro as long as we're using the
1114 default value of 64.
1115
1116 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1117
1118 dpaa2-eth: Move dpaa2_eth_swa structure to header file
1119
1120 It was the only structure defined inside dpaa2-eth.c
1121
1122 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1123
1124 fsl-dpaa2: eth: Replace uintX_t with uX
1125
1126 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1127 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1128
1129 fsl-dpaa2: eth: Minor fixes & cosmetics
1130
1131 - Make driver log level an int, because this is what
1132 netif_msg_init expects.
1133 - Remove driver description macro as it was used only once,
1134 immediately after being defined
1135 - Remove include comment
1136
1137 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1138
1139 dpaa2-eth: Move bcast address setup to dpaa2_eth_netdev_init
1140
1141 It seems to fit better there than directly in probe.
1142
1143 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1144
1145 dpaa2-eth: Fix DMA mapping bug
1146
1147 During hashing/flow steering configuration via ethtool, we were
1148 doing a DMA unmap from the wrong address. Fix the issue by using
1149 the DMA address that was initially mapped.
1150
1151 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1152
1153 dpaa2-eth: Associate buffer counting to queues instead of cpu
1154
1155 Move the buffer counters from being percpu variables to being
1156 associated with QMan channels. This is more natural as we need
1157 to dimension the buffer pool count based on distribution size
1158 rather than number of online cores.
1159
1160 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1161
1162 fsl-dpaa2: eth: Provide driver and fw version to ethtool
1163
1164 Read fw version from the MC and interpret DPNI FLib major.minor as the
1165 driver's version. Report these in 'ethtool -i'.
1166
1167 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1168
1169 fsl-dpaa2: eth: Remove dependency on GCOV_KERNEL
1170
1171 Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
1172
1173 fsl-dpaa2: eth: Remove FIXME/TODO comments from the code
1174
1175 Some of the concerns had already been addressed, a couple are being
1176 fixed in place.
1177 Left a few TODOs related to the flow-steering code, which needs to be
1178 revisited before upstreaming anyway.
1179
1180 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1181
1182 fsl-dpaa2: eth: Remove forward declarations
1183
1184 Instead move the functions such that they are defined prior to
1185 being used.
1186
1187 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1188
1189 fsl-dpaa2: eth: Remove dead code in IRQ handler
1190
1191 If any of those conditions were met, it is unlikely we'd ever be there
1192 in the first place.
1193
1194 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1195
1196 fsl-dpaa2: eth: Remove dpaa2_dpbp_drain()
1197
1198 Its sole caller was __dpaa2_dpbp_free(), so move its content and get rid
1199 of one function call.
1200
1201 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1202
1203 fsl-dpaa2: eth: Remove duplicate define
1204
1205 We somehow ended up with two defines for the maximum number
1206 of tx queues.
1207
1208 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1209
1210 fsl-dpaa2: eth: Move header comment to .c file
1211
1212 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1213
1214 fsl-dpaa2: eth: Make DPCON allocation failure produce a benign message
1215
1216 Number of DPCONs may be smaller than the number of CPUs in a number of
1217 valid scenarios. One such scenario is when the DPNI's distribution width
1218 is smaller than the number of cores and we just don't want to
1219 over-allocate DPCONs.
1220 Make the DPCON allocation failure less menacing by changing the logged
1221 message.
1222
1223 While at it, remove an unused parameter in the function prototype.
1224
1225 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
1226
1227 dpaa2 eth: irq update
1228
1229 Signed-off-by: Stuart Yoder <stuart.yoder@nxp.com>
1230
1231 Conflicts:
1232 drivers/staging/Kconfig
1233 drivers/staging/Makefile
1234 ---
1235 MAINTAINERS | 15 +
1236 drivers/staging/Kconfig | 2 +
1237 drivers/staging/Makefile | 1 +
1238 drivers/staging/fsl-dpaa2/Kconfig | 11 +
1239 drivers/staging/fsl-dpaa2/Makefile | 5 +
1240 drivers/staging/fsl-dpaa2/ethernet/Kconfig | 42 +
1241 drivers/staging/fsl-dpaa2/ethernet/Makefile | 21 +
1242 .../staging/fsl-dpaa2/ethernet/dpaa2-eth-debugfs.c | 319 +++
1243 .../staging/fsl-dpaa2/ethernet/dpaa2-eth-debugfs.h | 61 +
1244 .../staging/fsl-dpaa2/ethernet/dpaa2-eth-trace.h | 185 ++
1245 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c | 2793 ++++++++++++++++++++
1246 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.h | 366 +++
1247 drivers/staging/fsl-dpaa2/ethernet/dpaa2-ethtool.c | 882 +++++++
1248 drivers/staging/fsl-dpaa2/ethernet/dpkg.h | 175 ++
1249 drivers/staging/fsl-dpaa2/ethernet/dpni-cmd.h | 1058 ++++++++
1250 drivers/staging/fsl-dpaa2/ethernet/dpni.c | 1907 +++++++++++++
1251 drivers/staging/fsl-dpaa2/ethernet/dpni.h | 2581 ++++++++++++++++++
1252 drivers/staging/fsl-mc/include/mc-cmd.h | 5 +-
1253 drivers/staging/fsl-mc/include/net.h | 481 ++++
1254 net/core/pktgen.c | 1 +
1255 20 files changed, 10910 insertions(+), 1 deletion(-)
1256 create mode 100644 drivers/staging/fsl-dpaa2/Kconfig
1257 create mode 100644 drivers/staging/fsl-dpaa2/Makefile
1258 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/Kconfig
1259 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/Makefile
1260 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-debugfs.c
1261 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-debugfs.h
1262 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-trace.h
1263 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
1264 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.h
1265 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpaa2-ethtool.c
1266 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpkg.h
1267 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpni-cmd.h
1268 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpni.c
1269 create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpni.h
1270 create mode 100644 drivers/staging/fsl-mc/include/net.h
1271
1272 --- a/MAINTAINERS
1273 +++ b/MAINTAINERS
1274 @@ -4539,6 +4539,21 @@ L: linux-kernel@vger.kernel.org
1275 S: Maintained
1276 F: drivers/staging/fsl-mc/
1277
1278 +FREESCALE DPAA2 ETH DRIVER
1279 +M: Ioana Radulescu <ruxandra.radulescu@freescale.com>
1280 +M: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
1281 +M: Cristian Sovaiala <cristian.sovaiala@freescale.com>
1282 +L: linux-kernel@vger.kernel.org
1283 +S: Maintained
1284 +F: drivers/staging/fsl-dpaa2/ethernet/
1285 +
1286 +FREESCALE QORIQ MANAGEMENT COMPLEX RESTOOL DRIVER
1287 +M: Lijun Pan <Lijun.Pan@freescale.com>
1288 +L: linux-kernel@vger.kernel.org
1289 +S: Maintained
1290 +F: drivers/staging/fsl-mc/bus/mc-ioctl.h
1291 +F: drivers/staging/fsl-mc/bus/mc-restool.c
1292 +
1293 FREEVXFS FILESYSTEM
1294 M: Christoph Hellwig <hch@infradead.org>
1295 W: ftp://ftp.openlinux.org/pub/people/hch/vxfs
1296 --- a/drivers/staging/Kconfig
1297 +++ b/drivers/staging/Kconfig
1298 @@ -114,4 +114,6 @@ source "drivers/staging/most/Kconfig"
1299
1300 source "drivers/staging/fsl_ppfe/Kconfig"
1301
1302 +source "drivers/staging/fsl-dpaa2/Kconfig"
1303 +
1304 endif # STAGING
1305 --- a/drivers/staging/Makefile
1306 +++ b/drivers/staging/Makefile
1307 @@ -49,3 +49,4 @@ obj-$(CONFIG_FSL_DPA) += fsl_q
1308 obj-$(CONFIG_WILC1000) += wilc1000/
1309 obj-$(CONFIG_MOST) += most/
1310 obj-$(CONFIG_FSL_PPFE) += fsl_ppfe/
1311 +obj-$(CONFIG_FSL_DPAA2) += fsl-dpaa2/
1312 --- /dev/null
1313 +++ b/drivers/staging/fsl-dpaa2/Kconfig
1314 @@ -0,0 +1,11 @@
1315 +#
1316 +# Freescale device configuration
1317 +#
1318 +
1319 +config FSL_DPAA2
1320 + bool "Freescale DPAA2 devices"
1321 + depends on FSL_MC_BUS
1322 + ---help---
1323 + Build drivers for Freescale DataPath Acceleration Architecture (DPAA2) family of SoCs.
1324 +# TODO move DPIO driver in-here?
1325 +source "drivers/staging/fsl-dpaa2/ethernet/Kconfig"
1326 --- /dev/null
1327 +++ b/drivers/staging/fsl-dpaa2/Makefile
1328 @@ -0,0 +1,5 @@
1329 +#
1330 +# Makefile for the Freescale network device drivers.
1331 +#
1332 +
1333 +obj-$(CONFIG_FSL_DPAA2_ETH) += ethernet/
1334 --- /dev/null
1335 +++ b/drivers/staging/fsl-dpaa2/ethernet/Kconfig
1336 @@ -0,0 +1,42 @@
1337 +#
1338 +# Freescale DPAA Ethernet driver configuration
1339 +#
1340 +# Copyright (C) 2014-2015 Freescale Semiconductor, Inc.
1341 +#
1342 +# This file is released under the GPLv2
1343 +#
1344 +
1345 +menuconfig FSL_DPAA2_ETH
1346 + tristate "Freescale DPAA2 Ethernet"
1347 + depends on FSL_DPAA2 && FSL_MC_BUS && FSL_MC_DPIO
1348 + select FSL_DPAA2_MAC
1349 + default y
1350 + ---help---
1351 + Freescale Data Path Acceleration Architecture Ethernet
1352 + driver, using the Freescale MC bus driver.
1353 +
1354 +if FSL_DPAA2_ETH
1355 +config FSL_DPAA2_ETH_LINK_POLL
1356 + bool "Use polling mode for link state"
1357 + default n
1358 + ---help---
1359 + Poll for detecting link state changes instead of using
1360 + interrupts.
1361 +
1362 +config FSL_DPAA2_ETH_USE_ERR_QUEUE
1363 + bool "Enable Rx error queue"
1364 + default n
1365 + ---help---
1366 + Allow Rx error frames to be enqueued on an error queue
1367 + and processed by the driver (by default they are dropped
1368 + in hardware).
1369 + This may impact performance, recommended for debugging
1370 + purposes only.
1371 +
1372 +config FSL_DPAA2_ETH_DEBUGFS
1373 + depends on DEBUG_FS && FSL_QBMAN_DEBUG
1374 + bool "Enable debugfs support"
1375 + default n
1376 + ---help---
1377 + Enable advanced statistics through debugfs interface.
1378 +endif
1379 --- /dev/null
1380 +++ b/drivers/staging/fsl-dpaa2/ethernet/Makefile
1381 @@ -0,0 +1,21 @@
1382 +#
1383 +# Makefile for the Freescale DPAA Ethernet controllers
1384 +#
1385 +# Copyright (C) 2014-2015 Freescale Semiconductor, Inc.
1386 +#
1387 +# This file is released under the GPLv2
1388 +#
1389 +
1390 +ccflags-y += -DVERSION=\"\"
1391 +
1392 +obj-$(CONFIG_FSL_DPAA2_ETH) += fsl-dpaa2-eth.o
1393 +
1394 +fsl-dpaa2-eth-objs := dpaa2-eth.o dpaa2-ethtool.o dpni.o
1395 +fsl-dpaa2-eth-${CONFIG_FSL_DPAA2_ETH_DEBUGFS} += dpaa2-eth-debugfs.o
1396 +
1397 +#Needed by the tracing framework
1398 +CFLAGS_dpaa2-eth.o := -I$(src)
1399 +
1400 +ifeq ($(CONFIG_FSL_DPAA2_ETH_GCOV),y)
1401 + GCOV_PROFILE := y
1402 +endif
1403 --- /dev/null
1404 +++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-debugfs.c
1405 @@ -0,0 +1,319 @@
1406 +
1407 +/* Copyright 2015 Freescale Semiconductor Inc.
1408 + *
1409 + * Redistribution and use in source and binary forms, with or without
1410 + * modification, are permitted provided that the following conditions are met:
1411 + * * Redistributions of source code must retain the above copyright
1412 + * notice, this list of conditions and the following disclaimer.
1413 + * * Redistributions in binary form must reproduce the above copyright
1414 + * notice, this list of conditions and the following disclaimer in the
1415 + * documentation and/or other materials provided with the distribution.
1416 + * * Neither the name of Freescale Semiconductor nor the
1417 + * names of its contributors may be used to endorse or promote products
1418 + * derived from this software without specific prior written permission.
1419 + *
1420 + *
1421 + * ALTERNATIVELY, this software may be distributed under the terms of the
1422 + * GNU General Public License ("GPL") as published by the Free Software
1423 + * Foundation, either version 2 of that License or (at your option) any
1424 + * later version.
1425 + *
1426 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
1427 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
1428 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
1429 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
1430 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
1431 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
1432 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
1433 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
1434 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
1435 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
1436 + */
1437 +
1438 +
1439 +#include <linux/module.h>
1440 +#include <linux/debugfs.h>
1441 +#include "dpaa2-eth.h"
1442 +#include "dpaa2-eth-debugfs.h"
1443 +
1444 +#define DPAA2_ETH_DBG_ROOT "dpaa2-eth"
1445 +
1446 +
1447 +static struct dentry *dpaa2_dbg_root;
1448 +
1449 +static int dpaa2_dbg_cpu_show(struct seq_file *file, void *offset)
1450 +{
1451 + struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)file->private;
1452 + struct rtnl_link_stats64 *stats;
1453 + struct dpaa2_eth_stats *extras;
1454 + int i;
1455 +
1456 + seq_printf(file, "Per-CPU stats for %s\n", priv->net_dev->name);
1457 + seq_printf(file, "%s%16s%16s%16s%16s%16s%16s%16s%16s\n",
1458 + "CPU", "Rx", "Rx Err", "Rx SG", "Tx", "Tx Err", "Tx conf",
1459 + "Tx SG", "Enq busy");
1460 +
1461 + for_each_online_cpu(i) {
1462 + stats = per_cpu_ptr(priv->percpu_stats, i);
1463 + extras = per_cpu_ptr(priv->percpu_extras, i);
1464 + seq_printf(file, "%3d%16llu%16llu%16llu%16llu%16llu%16llu%16llu%16llu\n",
1465 + i,
1466 + stats->rx_packets,
1467 + stats->rx_errors,
1468 + extras->rx_sg_frames,
1469 + stats->tx_packets,
1470 + stats->tx_errors,
1471 + extras->tx_conf_frames,
1472 + extras->tx_sg_frames,
1473 + extras->tx_portal_busy);
1474 + }
1475 +
1476 + return 0;
1477 +}
1478 +
1479 +static int dpaa2_dbg_cpu_open(struct inode *inode, struct file *file)
1480 +{
1481 + int err;
1482 + struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)inode->i_private;
1483 +
1484 + err = single_open(file, dpaa2_dbg_cpu_show, priv);
1485 + if (err < 0)
1486 + netdev_err(priv->net_dev, "single_open() failed\n");
1487 +
1488 + return err;
1489 +}
1490 +
1491 +static const struct file_operations dpaa2_dbg_cpu_ops = {
1492 + .open = dpaa2_dbg_cpu_open,
1493 + .read = seq_read,
1494 + .llseek = seq_lseek,
1495 + .release = single_release,
1496 +};
1497 +
1498 +static char *fq_type_to_str(struct dpaa2_eth_fq *fq)
1499 +{
1500 + switch (fq->type) {
1501 + case DPAA2_RX_FQ:
1502 + return "Rx";
1503 + case DPAA2_TX_CONF_FQ:
1504 + return "Tx conf";
1505 + case DPAA2_RX_ERR_FQ:
1506 + return "Rx err";
1507 + default:
1508 + return "N/A";
1509 + }
1510 +}
1511 +
1512 +static int dpaa2_dbg_fqs_show(struct seq_file *file, void *offset)
1513 +{
1514 + struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)file->private;
1515 + struct dpaa2_eth_fq *fq;
1516 + u32 fcnt, bcnt;
1517 + int i, err;
1518 +
1519 + seq_printf(file, "FQ stats for %s:\n", priv->net_dev->name);
1520 + seq_printf(file, "%s%16s%16s%16s%16s\n",
1521 + "VFQID", "CPU", "Type", "Frames", "Pending frames");
1522 +
1523 + for (i = 0; i < priv->num_fqs; i++) {
1524 + fq = &priv->fq[i];
1525 + err = dpaa2_io_query_fq_count(NULL, fq->fqid, &fcnt, &bcnt);
1526 + if (err)
1527 + fcnt = 0;
1528 +
1529 + seq_printf(file, "%5d%16d%16s%16llu%16u\n",
1530 + fq->fqid,
1531 + fq->target_cpu,
1532 + fq_type_to_str(fq),
1533 + fq->stats.frames,
1534 + fcnt);
1535 + }
1536 +
1537 + return 0;
1538 +}
1539 +
1540 +static int dpaa2_dbg_fqs_open(struct inode *inode, struct file *file)
1541 +{
1542 + int err;
1543 + struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)inode->i_private;
1544 +
1545 + err = single_open(file, dpaa2_dbg_fqs_show, priv);
1546 + if (err < 0)
1547 + netdev_err(priv->net_dev, "single_open() failed\n");
1548 +
1549 + return err;
1550 +}
1551 +
1552 +static const struct file_operations dpaa2_dbg_fq_ops = {
1553 + .open = dpaa2_dbg_fqs_open,
1554 + .read = seq_read,
1555 + .llseek = seq_lseek,
1556 + .release = single_release,
1557 +};
1558 +
1559 +static int dpaa2_dbg_ch_show(struct seq_file *file, void *offset)
1560 +{
1561 + struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)file->private;
1562 + struct dpaa2_eth_channel *ch;
1563 + int i;
1564 +
1565 + seq_printf(file, "Channel stats for %s:\n", priv->net_dev->name);
1566 + seq_printf(file, "%s%16s%16s%16s%16s%16s\n",
1567 + "CHID", "CPU", "Deq busy", "Frames", "CDANs",
1568 + "Avg frm/CDAN");
1569 +
1570 + for (i = 0; i < priv->num_channels; i++) {
1571 + ch = priv->channel[i];
1572 + seq_printf(file, "%4d%16d%16llu%16llu%16llu%16llu\n",
1573 + ch->ch_id,
1574 + ch->nctx.desired_cpu,
1575 + ch->stats.dequeue_portal_busy,
1576 + ch->stats.frames,
1577 + ch->stats.cdan,
1578 + ch->stats.frames / ch->stats.cdan);
1579 + }
1580 +
1581 + return 0;
1582 +}
1583 +
1584 +static int dpaa2_dbg_ch_open(struct inode *inode, struct file *file)
1585 +{
1586 + int err;
1587 + struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)inode->i_private;
1588 +
1589 + err = single_open(file, dpaa2_dbg_ch_show, priv);
1590 + if (err < 0)
1591 + netdev_err(priv->net_dev, "single_open() failed\n");
1592 +
1593 + return err;
1594 +}
1595 +
1596 +static const struct file_operations dpaa2_dbg_ch_ops = {
1597 + .open = dpaa2_dbg_ch_open,
1598 + .read = seq_read,
1599 + .llseek = seq_lseek,
1600 + .release = single_release,
1601 +};
1602 +
1603 +static ssize_t dpaa2_dbg_reset_write(struct file *file, const char __user *buf,
1604 + size_t count, loff_t *offset)
1605 +{
1606 + struct dpaa2_eth_priv *priv = file->private_data;
1607 + struct rtnl_link_stats64 *percpu_stats;
1608 + struct dpaa2_eth_stats *percpu_extras;
1609 + struct dpaa2_eth_fq *fq;
1610 + struct dpaa2_eth_channel *ch;
1611 + int i;
1612 +
1613 + for_each_online_cpu(i) {
1614 + percpu_stats = per_cpu_ptr(priv->percpu_stats, i);
1615 + memset(percpu_stats, 0, sizeof(*percpu_stats));
1616 +
1617 + percpu_extras = per_cpu_ptr(priv->percpu_extras, i);
1618 + memset(percpu_extras, 0, sizeof(*percpu_extras));
1619 + }
1620 +
1621 + for (i = 0; i < priv->num_fqs; i++) {
1622 + fq = &priv->fq[i];
1623 + memset(&fq->stats, 0, sizeof(fq->stats));
1624 + }
1625 +
1626 +   for (i = 0; i < priv->num_channels; i++) {
1627 + ch = priv->channel[i];
1628 + memset(&ch->stats, 0, sizeof(ch->stats));
1629 + }
1630 +
1631 + return count;
1632 +}
1633 +
1634 +static const struct file_operations dpaa2_dbg_reset_ops = {
1635 + .open = simple_open,
1636 + .write = dpaa2_dbg_reset_write,
1637 +};
1638 +
1639 +void dpaa2_dbg_add(struct dpaa2_eth_priv *priv)
1640 +{
1641 + if (!dpaa2_dbg_root)
1642 + return;
1643 +
1644 + /* Create a directory for the interface */
1645 + priv->dbg.dir = debugfs_create_dir(priv->net_dev->name,
1646 + dpaa2_dbg_root);
1647 + if (!priv->dbg.dir) {
1648 + netdev_err(priv->net_dev, "debugfs_create_dir() failed\n");
1649 + return;
1650 + }
1651 +
1652 + /* per-cpu stats file */
1653 + priv->dbg.cpu_stats = debugfs_create_file("cpu_stats", S_IRUGO,
1654 + priv->dbg.dir, priv,
1655 + &dpaa2_dbg_cpu_ops);
1656 + if (!priv->dbg.cpu_stats) {
1657 + netdev_err(priv->net_dev, "debugfs_create_file() failed\n");
1658 + goto err_cpu_stats;
1659 + }
1660 +
1661 + /* per-fq stats file */
1662 + priv->dbg.fq_stats = debugfs_create_file("fq_stats", S_IRUGO,
1663 + priv->dbg.dir, priv,
1664 + &dpaa2_dbg_fq_ops);
1665 + if (!priv->dbg.fq_stats) {
1666 + netdev_err(priv->net_dev, "debugfs_create_file() failed\n");
1667 + goto err_fq_stats;
1668 + }
1669 +
1670 +   /* per-channel stats file */
1671 + priv->dbg.ch_stats = debugfs_create_file("ch_stats", S_IRUGO,
1672 + priv->dbg.dir, priv,
1673 + &dpaa2_dbg_ch_ops);
1674 +   if (!priv->dbg.ch_stats) {
1675 + netdev_err(priv->net_dev, "debugfs_create_file() failed\n");
1676 + goto err_ch_stats;
1677 + }
1678 +
1679 + /* reset stats */
1680 + priv->dbg.reset_stats = debugfs_create_file("reset_stats", S_IWUSR,
1681 + priv->dbg.dir, priv,
1682 + &dpaa2_dbg_reset_ops);
1683 + if (!priv->dbg.reset_stats) {
1684 + netdev_err(priv->net_dev, "debugfs_create_file() failed\n");
1685 + goto err_reset_stats;
1686 + }
1687 +
1688 + return;
1689 +
1690 +err_reset_stats:
1691 + debugfs_remove(priv->dbg.ch_stats);
1692 +err_ch_stats:
1693 + debugfs_remove(priv->dbg.fq_stats);
1694 +err_fq_stats:
1695 + debugfs_remove(priv->dbg.cpu_stats);
1696 +err_cpu_stats:
1697 + debugfs_remove(priv->dbg.dir);
1698 +}
1699 +
1700 +void dpaa2_dbg_remove(struct dpaa2_eth_priv *priv)
1701 +{
1702 + debugfs_remove(priv->dbg.reset_stats);
1703 + debugfs_remove(priv->dbg.fq_stats);
1704 + debugfs_remove(priv->dbg.ch_stats);
1705 + debugfs_remove(priv->dbg.cpu_stats);
1706 + debugfs_remove(priv->dbg.dir);
1707 +}
1708 +
1709 +void dpaa2_eth_dbg_init(void)
1710 +{
1711 + dpaa2_dbg_root = debugfs_create_dir(DPAA2_ETH_DBG_ROOT, NULL);
1712 + if (!dpaa2_dbg_root) {
1713 + pr_err("DPAA2-ETH: debugfs create failed\n");
1714 + return;
1715 + }
1716 +
1717 + pr_info("DPAA2-ETH: debugfs created\n");
1718 +}
1719 +
1720 +void __exit dpaa2_eth_dbg_exit(void)
1721 +{
1722 + debugfs_remove(dpaa2_dbg_root);
1723 +}
1724 +
1725 --- /dev/null
1726 +++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-debugfs.h
1727 @@ -0,0 +1,61 @@
1728 +/* Copyright 2015 Freescale Semiconductor Inc.
1729 + *
1730 + * Redistribution and use in source and binary forms, with or without
1731 + * modification, are permitted provided that the following conditions are met:
1732 + * * Redistributions of source code must retain the above copyright
1733 + * notice, this list of conditions and the following disclaimer.
1734 + * * Redistributions in binary form must reproduce the above copyright
1735 + * notice, this list of conditions and the following disclaimer in the
1736 + * documentation and/or other materials provided with the distribution.
1737 + * * Neither the name of Freescale Semiconductor nor the
1738 + * names of its contributors may be used to endorse or promote products
1739 + * derived from this software without specific prior written permission.
1740 + *
1741 + *
1742 + * ALTERNATIVELY, this software may be distributed under the terms of the
1743 + * GNU General Public License ("GPL") as published by the Free Software
1744 + * Foundation, either version 2 of that License or (at your option) any
1745 + * later version.
1746 + *
1747 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
1748 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
1749 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
1750 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
1751 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
1752 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
1753 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
1754 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
1755 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
1756 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
1757 + */
1758 +
1759 +#ifndef DPAA2_ETH_DEBUGFS_H
1760 +#define DPAA2_ETH_DEBUGFS_H
1761 +
1762 +#include <linux/dcache.h>
1763 +#include "dpaa2-eth.h"
1764 +
1765 +extern struct dpaa2_eth_priv *priv;
1766 +
1767 +struct dpaa2_debugfs {
1768 + struct dentry *dir;
1769 + struct dentry *fq_stats;
1770 + struct dentry *ch_stats;
1771 + struct dentry *cpu_stats;
1772 + struct dentry *reset_stats;
1773 +};
1774 +
1775 +#ifdef CONFIG_FSL_DPAA2_ETH_DEBUGFS
1776 +void dpaa2_eth_dbg_init(void);
1777 +void dpaa2_eth_dbg_exit(void);
1778 +void dpaa2_dbg_add(struct dpaa2_eth_priv *priv);
1779 +void dpaa2_dbg_remove(struct dpaa2_eth_priv *priv);
1780 +#else
1781 +static inline void dpaa2_eth_dbg_init(void) {}
1782 +static inline void dpaa2_eth_dbg_exit(void) {}
1783 +static inline void dpaa2_dbg_add(struct dpaa2_eth_priv *priv) {}
1784 +static inline void dpaa2_dbg_remove(struct dpaa2_eth_priv *priv) {}
1785 +#endif /* CONFIG_FSL_DPAA2_ETH_DEBUGFS */
1786 +
1787 +#endif /* DPAA2_ETH_DEBUGFS_H */
1788 +
1789 --- /dev/null
1790 +++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-trace.h
1791 @@ -0,0 +1,185 @@
1792 +/* Copyright 2014-2015 Freescale Semiconductor Inc.
1793 + *
1794 + * Redistribution and use in source and binary forms, with or without
1795 + * modification, are permitted provided that the following conditions are met:
1796 + * * Redistributions of source code must retain the above copyright
1797 + * notice, this list of conditions and the following disclaimer.
1798 + * * Redistributions in binary form must reproduce the above copyright
1799 + * notice, this list of conditions and the following disclaimer in the
1800 + * documentation and/or other materials provided with the distribution.
1801 + * * Neither the name of Freescale Semiconductor nor the
1802 + * names of its contributors may be used to endorse or promote products
1803 + * derived from this software without specific prior written permission.
1804 + *
1805 + *
1806 + * ALTERNATIVELY, this software may be distributed under the terms of the
1807 + * GNU General Public License ("GPL") as published by the Free Software
1808 + * Foundation, either version 2 of that License or (at your option) any
1809 + * later version.
1810 + *
1811 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
1812 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
1813 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
1814 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
1815 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
1816 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
1817 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
1818 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
1819 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
1820 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
1821 + */
1822 +
1823 +#undef TRACE_SYSTEM
1824 +#define TRACE_SYSTEM dpaa2_eth
1825 +
1826 +#if !defined(_DPAA2_ETH_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
1827 +#define _DPAA2_ETH_TRACE_H
1828 +
1829 +#include <linux/skbuff.h>
1830 +#include <linux/netdevice.h>
1831 +#include "dpaa2-eth.h"
1832 +#include <linux/tracepoint.h>
1833 +
1834 +#define TR_FMT "[%s] fd: addr=0x%llx, len=%u, off=%u"
1835 +/* trace_printk format for raw buffer event class */
1836 +#define TR_BUF_FMT "[%s] vaddr=%p size=%zu dma_addr=%pad map_size=%zu bpid=%d"
1837 +
1838 +/* This is used to declare a class of events.
1839 + * Individual events of this type will be defined below.
1840 + */
1841 +
1842 +/* Store details about a frame descriptor */
1843 +DECLARE_EVENT_CLASS(dpaa2_eth_fd,
1844 + /* Trace function prototype */
1845 + TP_PROTO(struct net_device *netdev,
1846 + const struct dpaa2_fd *fd),
1847 +
1848 + /* Repeat argument list here */
1849 + TP_ARGS(netdev, fd),
1850 +
1851 + /* A structure containing the relevant information we want
1852 + * to record. Declare name and type for each normal element,
1853 + * name, type and size for arrays. Use __string for variable
1854 + * length strings.
1855 + */
1856 + TP_STRUCT__entry(
1857 + __field(u64, fd_addr)
1858 + __field(u32, fd_len)
1859 + __field(u16, fd_offset)
1860 + __string(name, netdev->name)
1861 + ),
1862 +
1863 + /* The function that assigns values to the above declared
1864 + * fields
1865 + */
1866 + TP_fast_assign(
1867 + __entry->fd_addr = dpaa2_fd_get_addr(fd);
1868 + __entry->fd_len = dpaa2_fd_get_len(fd);
1869 + __entry->fd_offset = dpaa2_fd_get_offset(fd);
1870 + __assign_str(name, netdev->name);
1871 + ),
1872 +
1873 + /* This is what gets printed when the trace event is
1874 + * triggered.
1875 + */
1876 + TP_printk(TR_FMT,
1877 + __get_str(name),
1878 + __entry->fd_addr,
1879 + __entry->fd_len,
1880 + __entry->fd_offset)
1881 +);
1882 +
1883 +/* Now declare events of the above type. Format is:
1884 + * DEFINE_EVENT(class, name, proto, args), with proto and args same as for class
1885 + */
1886 +
1887 +/* Tx (egress) fd */
1888 +DEFINE_EVENT(dpaa2_eth_fd, dpaa2_tx_fd,
1889 + TP_PROTO(struct net_device *netdev,
1890 + const struct dpaa2_fd *fd),
1891 +
1892 + TP_ARGS(netdev, fd)
1893 +);
1894 +
1895 +/* Rx fd */
1896 +DEFINE_EVENT(dpaa2_eth_fd, dpaa2_rx_fd,
1897 + TP_PROTO(struct net_device *netdev,
1898 + const struct dpaa2_fd *fd),
1899 +
1900 + TP_ARGS(netdev, fd)
1901 +);
1902 +
1903 +/* Tx confirmation fd */
1904 +DEFINE_EVENT(dpaa2_eth_fd, dpaa2_tx_conf_fd,
1905 + TP_PROTO(struct net_device *netdev,
1906 + const struct dpaa2_fd *fd),
1907 +
1908 + TP_ARGS(netdev, fd)
1909 +);
1910 +
1911 +/* Log data about raw buffers. Useful for tracing DPBP content. */
1912 +TRACE_EVENT(dpaa2_eth_buf_seed,
1913 + /* Trace function prototype */
1914 + TP_PROTO(struct net_device *netdev,
1915 + /* virtual address and size */
1916 + void *vaddr,
1917 + size_t size,
1918 + /* dma map address and size */
1919 + dma_addr_t dma_addr,
1920 + size_t map_size,
1921 + /* buffer pool id, if relevant */
1922 + u16 bpid),
1923 +
1924 + /* Repeat argument list here */
1925 + TP_ARGS(netdev, vaddr, size, dma_addr, map_size, bpid),
1926 +
1927 + /* A structure containing the relevant information we want
1928 + * to record. Declare name and type for each normal element,
1929 + * name, type and size for arrays. Use __string for variable
1930 + * length strings.
1931 + */
1932 + TP_STRUCT__entry(
1933 + __field(void *, vaddr)
1934 + __field(size_t, size)
1935 + __field(dma_addr_t, dma_addr)
1936 + __field(size_t, map_size)
1937 + __field(u16, bpid)
1938 + __string(name, netdev->name)
1939 + ),
1940 +
1941 + /* The function that assigns values to the above declared
1942 + * fields
1943 + */
1944 + TP_fast_assign(
1945 + __entry->vaddr = vaddr;
1946 + __entry->size = size;
1947 + __entry->dma_addr = dma_addr;
1948 + __entry->map_size = map_size;
1949 + __entry->bpid = bpid;
1950 + __assign_str(name, netdev->name);
1951 + ),
1952 +
1953 + /* This is what gets printed when the trace event is
1954 + * triggered.
1955 + */
1956 + TP_printk(TR_BUF_FMT,
1957 + __get_str(name),
1958 + __entry->vaddr,
1959 + __entry->size,
1960 + &__entry->dma_addr,
1961 + __entry->map_size,
1962 + __entry->bpid)
1963 +);
1964 +
1965 +/* If only one event of a certain type needs to be declared, use TRACE_EVENT().
1966 + * The syntax is the same as for DECLARE_EVENT_CLASS().
1967 + */
1968 +
1969 +#endif /* _DPAA2_ETH_TRACE_H */
1970 +
1971 +/* This must be outside ifdef _DPAA2_ETH_TRACE_H */
1972 +#undef TRACE_INCLUDE_PATH
1973 +#define TRACE_INCLUDE_PATH .
1974 +#undef TRACE_INCLUDE_FILE
1975 +#define TRACE_INCLUDE_FILE dpaa2-eth-trace
1976 +#include <trace/define_trace.h>
1977 --- /dev/null
1978 +++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
1979 @@ -0,0 +1,2793 @@
1980 +/* Copyright 2014-2015 Freescale Semiconductor Inc.
1981 + *
1982 + * Redistribution and use in source and binary forms, with or without
1983 + * modification, are permitted provided that the following conditions are met:
1984 + * * Redistributions of source code must retain the above copyright
1985 + * notice, this list of conditions and the following disclaimer.
1986 + * * Redistributions in binary form must reproduce the above copyright
1987 + * notice, this list of conditions and the following disclaimer in the
1988 + * documentation and/or other materials provided with the distribution.
1989 + * * Neither the name of Freescale Semiconductor nor the
1990 + * names of its contributors may be used to endorse or promote products
1991 + * derived from this software without specific prior written permission.
1992 + *
1993 + *
1994 + * ALTERNATIVELY, this software may be distributed under the terms of the
1995 + * GNU General Public License ("GPL") as published by the Free Software
1996 + * Foundation, either version 2 of that License or (at your option) any
1997 + * later version.
1998 + *
1999 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
2000 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
2001 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
2002 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
2003 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
2004 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
2005 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
2006 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
2007 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
2008 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
2009 + */
2010 +#include <linux/init.h>
2011 +#include <linux/module.h>
2012 +#include <linux/platform_device.h>
2013 +#include <linux/etherdevice.h>
2014 +#include <linux/of_net.h>
2015 +#include <linux/interrupt.h>
2016 +#include <linux/msi.h>
2017 +#include <linux/debugfs.h>
2018 +#include <linux/kthread.h>
2019 +#include <linux/net_tstamp.h>
2020 +
2021 +#include "../../fsl-mc/include/mc.h"
2022 +#include "../../fsl-mc/include/mc-sys.h"
2023 +#include "dpaa2-eth.h"
2024 +
2025 +/* CREATE_TRACE_POINTS only needs to be defined once. Other dpa files
2026 + * using trace events only need to #include <trace/events/sched.h>
2027 + */
2028 +#define CREATE_TRACE_POINTS
2029 +#include "dpaa2-eth-trace.h"
2030 +
2031 +MODULE_LICENSE("Dual BSD/GPL");
2032 +MODULE_AUTHOR("Freescale Semiconductor, Inc");
2033 +MODULE_DESCRIPTION("Freescale DPAA2 Ethernet Driver");
2034 +
2035 +static int debug = -1;
2036 +module_param(debug, int, S_IRUGO);
2037 +MODULE_PARM_DESC(debug, "Module/Driver verbosity level");
2038 +
2039 +/* Oldest DPAA2 objects version we are compatible with */
2040 +#define DPAA2_SUPPORTED_DPNI_VERSION 6
2041 +#define DPAA2_SUPPORTED_DPBP_VERSION 2
2042 +#define DPAA2_SUPPORTED_DPCON_VERSION 2
2043 +
2044 +/* Iterate through the cpumask in a round-robin fashion. */
2045 +#define cpumask_rr(cpu, maskptr) \
2046 +do { \
2047 + (cpu) = cpumask_next((cpu), (maskptr)); \
2048 + if ((cpu) >= nr_cpu_ids) \
2049 + (cpu) = cpumask_first((maskptr)); \
2050 +} while (0)
2051 +
2052 +static void dpaa2_eth_rx_csum(struct dpaa2_eth_priv *priv,
2053 + u32 fd_status,
2054 + struct sk_buff *skb)
2055 +{
2056 + skb_checksum_none_assert(skb);
2057 +
2058 + /* HW checksum validation is disabled, nothing to do here */
2059 + if (!(priv->net_dev->features & NETIF_F_RXCSUM))
2060 + return;
2061 +
2062 + /* Read checksum validation bits */
2063 + if (!((fd_status & DPAA2_ETH_FAS_L3CV) &&
2064 + (fd_status & DPAA2_ETH_FAS_L4CV)))
2065 + return;
2066 +
2067 + /* Inform the stack there's no need to compute L3/L4 csum anymore */
2068 + skb->ip_summed = CHECKSUM_UNNECESSARY;
2069 +}
2070 +
2071 +/* Free a received FD.
2072 + * Not to be used for Tx conf FDs or on any other paths.
2073 + */
2074 +static void dpaa2_eth_free_rx_fd(struct dpaa2_eth_priv *priv,
2075 + const struct dpaa2_fd *fd,
2076 + void *vaddr)
2077 +{
2078 + struct device *dev = priv->net_dev->dev.parent;
2079 + dma_addr_t addr = dpaa2_fd_get_addr(fd);
2080 + u8 fd_format = dpaa2_fd_get_format(fd);
2081 +
2082 + if (fd_format == dpaa2_fd_sg) {
2083 + struct dpaa2_sg_entry *sgt = vaddr + dpaa2_fd_get_offset(fd);
2084 + void *sg_vaddr;
2085 + int i;
2086 +
2087 + for (i = 0; i < DPAA2_ETH_MAX_SG_ENTRIES; i++) {
2088 + dpaa2_sg_le_to_cpu(&sgt[i]);
2089 +
2090 + addr = dpaa2_sg_get_addr(&sgt[i]);
2091 + dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUFFER_SIZE,
2092 + DMA_FROM_DEVICE);
2093 +
2094 + sg_vaddr = phys_to_virt(addr);
2095 + put_page(virt_to_head_page(sg_vaddr));
2096 +
2097 + if (dpaa2_sg_is_final(&sgt[i]))
2098 + break;
2099 + }
2100 + }
2101 +
2102 + put_page(virt_to_head_page(vaddr));
2103 +}
2104 +
2105 +/* Build a linear skb based on a single-buffer frame descriptor */
2106 +static struct sk_buff *dpaa2_eth_build_linear_skb(struct dpaa2_eth_priv *priv,
2107 + struct dpaa2_eth_channel *ch,
2108 + const struct dpaa2_fd *fd,
2109 + void *fd_vaddr)
2110 +{
2111 + struct sk_buff *skb = NULL;
2112 + u16 fd_offset = dpaa2_fd_get_offset(fd);
2113 + u32 fd_length = dpaa2_fd_get_len(fd);
2114 +
2115 + skb = build_skb(fd_vaddr, DPAA2_ETH_RX_BUFFER_SIZE +
2116 + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
2117 + if (unlikely(!skb)) {
2118 + netdev_err(priv->net_dev, "build_skb() failed\n");
2119 + return NULL;
2120 + }
2121 +
2122 + skb_reserve(skb, fd_offset);
2123 + skb_put(skb, fd_length);
2124 +
2125 + ch->buf_count--;
2126 +
2127 + return skb;
2128 +}
2129 +
2130 +/* Build a non-linear (fragmented) skb based on an S/G table */
2131 +static struct sk_buff *dpaa2_eth_build_frag_skb(struct dpaa2_eth_priv *priv,
2132 + struct dpaa2_eth_channel *ch,
2133 + struct dpaa2_sg_entry *sgt)
2134 +{
2135 + struct sk_buff *skb = NULL;
2136 + struct device *dev = priv->net_dev->dev.parent;
2137 + void *sg_vaddr;
2138 + dma_addr_t sg_addr;
2139 + u16 sg_offset;
2140 + u32 sg_length;
2141 + struct page *page, *head_page;
2142 + int page_offset;
2143 + int i;
2144 +
2145 + for (i = 0; i < DPAA2_ETH_MAX_SG_ENTRIES; i++) {
2146 + struct dpaa2_sg_entry *sge = &sgt[i];
2147 +
2148 + dpaa2_sg_le_to_cpu(sge);
2149 +
2150 + /* We don't support anything else yet! */
2151 + if (unlikely(dpaa2_sg_get_format(sge) != dpaa2_sg_single)) {
2152 + dev_warn_once(dev, "Unsupported S/G entry format: %d\n",
2153 + dpaa2_sg_get_format(sge));
2154 + return NULL;
2155 + }
2156 +
2157 + /* Get the address, offset and length from the S/G entry */
2158 + sg_addr = dpaa2_sg_get_addr(sge);
2159 + dma_unmap_single(dev, sg_addr, DPAA2_ETH_RX_BUFFER_SIZE,
2160 + DMA_FROM_DEVICE);
2161 + if (unlikely(dma_mapping_error(dev, sg_addr))) {
2162 + netdev_err(priv->net_dev, "DMA unmap failed\n");
2163 + return NULL;
2164 + }
2165 + sg_vaddr = phys_to_virt(sg_addr);
2166 + sg_length = dpaa2_sg_get_len(sge);
2167 +
2168 + if (i == 0) {
2169 + /* We build the skb around the first data buffer */
2170 + skb = build_skb(sg_vaddr, DPAA2_ETH_RX_BUFFER_SIZE +
2171 + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
2172 + if (unlikely(!skb)) {
2173 + netdev_err(priv->net_dev, "build_skb failed\n");
2174 + return NULL;
2175 + }
2176 + sg_offset = dpaa2_sg_get_offset(sge);
2177 + skb_reserve(skb, sg_offset);
2178 + skb_put(skb, sg_length);
2179 + } else {
2180 + /* Subsequent data in SGEntries are stored at
2181 +                   * offset 0 in their buffers, so we don't need to
2182 + * compute sg_offset.
2183 + */
2184 + WARN_ONCE(dpaa2_sg_get_offset(sge) != 0,
2185 + "Non-zero offset in SGE[%d]!\n", i);
2186 +
2187 + /* Rest of the data buffers are stored as skb frags */
2188 + page = virt_to_page(sg_vaddr);
2189 + head_page = virt_to_head_page(sg_vaddr);
2190 +
2191 + /* Offset in page (which may be compound) */
2192 + page_offset = ((unsigned long)sg_vaddr &
2193 + (PAGE_SIZE - 1)) +
2194 + (page_address(page) - page_address(head_page));
2195 +
2196 + skb_add_rx_frag(skb, i - 1, head_page, page_offset,
2197 + sg_length, DPAA2_ETH_RX_BUFFER_SIZE);
2198 + }
2199 +
2200 + if (dpaa2_sg_is_final(sge))
2201 + break;
2202 + }
2203 +
2204 + /* Count all data buffers + sgt buffer */
2205 + ch->buf_count -= i + 2;
2206 +
2207 + return skb;
2208 +}
2209 +
2210 +static void dpaa2_eth_rx(struct dpaa2_eth_priv *priv,
2211 + struct dpaa2_eth_channel *ch,
2212 + const struct dpaa2_fd *fd,
2213 + struct napi_struct *napi)
2214 +{
2215 + dma_addr_t addr = dpaa2_fd_get_addr(fd);
2216 + u8 fd_format = dpaa2_fd_get_format(fd);
2217 + void *vaddr;
2218 + struct sk_buff *skb;
2219 + struct rtnl_link_stats64 *percpu_stats;
2220 + struct dpaa2_eth_stats *percpu_extras;
2221 + struct device *dev = priv->net_dev->dev.parent;
2222 + struct dpaa2_fas *fas;
2223 + u32 status = 0;
2224 +
2225 + /* Tracing point */
2226 + trace_dpaa2_rx_fd(priv->net_dev, fd);
2227 +
2228 + dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUFFER_SIZE, DMA_FROM_DEVICE);
2229 + vaddr = phys_to_virt(addr);
2230 +
2231 + prefetch(vaddr + priv->buf_layout.private_data_size);
2232 + prefetch(vaddr + dpaa2_fd_get_offset(fd));
2233 +
2234 + percpu_stats = this_cpu_ptr(priv->percpu_stats);
2235 + percpu_extras = this_cpu_ptr(priv->percpu_extras);
2236 +
2237 + if (fd_format == dpaa2_fd_single) {
2238 + skb = dpaa2_eth_build_linear_skb(priv, ch, fd, vaddr);
2239 + } else if (fd_format == dpaa2_fd_sg) {
2240 + struct dpaa2_sg_entry *sgt =
2241 + vaddr + dpaa2_fd_get_offset(fd);
2242 + skb = dpaa2_eth_build_frag_skb(priv, ch, sgt);
2243 + put_page(virt_to_head_page(vaddr));
2244 + percpu_extras->rx_sg_frames++;
2245 + percpu_extras->rx_sg_bytes += dpaa2_fd_get_len(fd);
2246 + } else {
2247 + /* We don't support any other format */
2248 + netdev_err(priv->net_dev, "Received invalid frame format\n");
2249 + goto err_frame_format;
2250 + }
2251 +
2252 + if (unlikely(!skb)) {
2253 + dev_err_once(dev, "error building skb\n");
2254 + goto err_build_skb;
2255 + }
2256 +
2257 + prefetch(skb->data);
2258 +
2259 + if (priv->ts_rx_en) {
2260 + struct skb_shared_hwtstamps *shhwtstamps = skb_hwtstamps(skb);
2261 + u64 *ns = (u64 *) (vaddr +
2262 + priv->buf_layout.private_data_size +
2263 + sizeof(struct dpaa2_fas));
2264 +
2265 + *ns = DPAA2_PTP_NOMINAL_FREQ_PERIOD_NS * (*ns);
2266 + memset(shhwtstamps, 0, sizeof(*shhwtstamps));
2267 + shhwtstamps->hwtstamp = ns_to_ktime(*ns);
2268 + }
2269 +
2270 + /* Check if we need to validate the L4 csum */
2271 + if (likely(fd->simple.frc & DPAA2_FD_FRC_FASV)) {
2272 + fas = (struct dpaa2_fas *)
2273 + (vaddr + priv->buf_layout.private_data_size);
2274 + status = le32_to_cpu(fas->status);
2275 + dpaa2_eth_rx_csum(priv, status, skb);
2276 + }
2277 +
2278 + skb->protocol = eth_type_trans(skb, priv->net_dev);
2279 +
2280 + percpu_stats->rx_packets++;
2281 + percpu_stats->rx_bytes += skb->len;
2282 +
2283 + if (priv->net_dev->features & NETIF_F_GRO)
2284 + napi_gro_receive(napi, skb);
2285 + else
2286 + netif_receive_skb(skb);
2287 +
2288 + return;
2289 +err_frame_format:
2290 +err_build_skb:
2291 + dpaa2_eth_free_rx_fd(priv, fd, vaddr);
2292 + percpu_stats->rx_dropped++;
2293 +}
2294 +
2295 +#ifdef CONFIG_FSL_DPAA2_ETH_USE_ERR_QUEUE
2296 +static void dpaa2_eth_rx_err(struct dpaa2_eth_priv *priv,
2297 + struct dpaa2_eth_channel *ch,
2298 + const struct dpaa2_fd *fd,
2299 + struct napi_struct *napi __always_unused)
2300 +{
2301 + struct device *dev = priv->net_dev->dev.parent;
2302 + dma_addr_t addr = dpaa2_fd_get_addr(fd);
2303 + void *vaddr;
2304 + struct rtnl_link_stats64 *percpu_stats;
2305 + struct dpaa2_fas *fas;
2306 + u32 status = 0;
2307 +
2308 + dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUFFER_SIZE, DMA_FROM_DEVICE);
2309 + vaddr = phys_to_virt(addr);
2310 +
2311 + if (fd->simple.frc & DPAA2_FD_FRC_FASV) {
2312 + fas = (struct dpaa2_fas *)
2313 + (vaddr + priv->buf_layout.private_data_size);
2314 + status = le32_to_cpu(fas->status);
2315 +
2316 + /* All frames received on this queue should have at least
2317 + * one of the Rx error bits set */
2318 + WARN_ON_ONCE((status & DPAA2_ETH_RX_ERR_MASK) == 0);
2319 + netdev_dbg(priv->net_dev, "Rx frame error: 0x%08x\n",
2320 + status & DPAA2_ETH_RX_ERR_MASK);
2321 + }
2322 + dpaa2_eth_free_rx_fd(priv, fd, vaddr);
2323 +
2324 + percpu_stats = this_cpu_ptr(priv->percpu_stats);
2325 + percpu_stats->rx_errors++;
2326 +}
2327 +#endif
2328 +
2329 +/* Consume all frames pull-dequeued into the store. This is the simplest way to
2330 + * make sure we don't accidentally issue another volatile dequeue which would
2331 + * overwrite (leak) frames already in the store.
2332 + *
2333 + * Observance of NAPI budget is not our concern, leaving that to the caller.
2334 + */
2335 +static int dpaa2_eth_store_consume(struct dpaa2_eth_channel *ch)
2336 +{
2337 + struct dpaa2_eth_priv *priv = ch->priv;
2338 + struct dpaa2_eth_fq *fq;
2339 + struct dpaa2_dq *dq;
2340 + const struct dpaa2_fd *fd;
2341 + int cleaned = 0;
2342 + int is_last;
2343 +
2344 + do {
2345 + dq = dpaa2_io_store_next(ch->store, &is_last);
2346 + if (unlikely(!dq)) {
2347 + if (unlikely(!is_last)) {
2348 + netdev_dbg(priv->net_dev,
2349 +                                          "Channel %d returned no valid frames\n",
2350 + ch->ch_id);
2351 + /* MUST retry until we get some sort of
2352 + * valid response token (be it "empty dequeue"
2353 + * or a valid frame).
2354 + */
2355 + continue;
2356 + }
2357 + break;
2358 + }
2359 +
2360 + /* Obtain FD and process it */
2361 + fd = dpaa2_dq_fd(dq);
2362 + fq = (struct dpaa2_eth_fq *)dpaa2_dq_fqd_ctx(dq);
2363 + fq->stats.frames++;
2364 +
2365 + fq->consume(priv, ch, fd, &ch->napi);
2366 + cleaned++;
2367 + } while (!is_last);
2368 +
2369 + return cleaned;
2370 +}
2371 +
2372 +static int dpaa2_eth_build_sg_fd(struct dpaa2_eth_priv *priv,
2373 + struct sk_buff *skb,
2374 + struct dpaa2_fd *fd)
2375 +{
2376 + struct device *dev = priv->net_dev->dev.parent;
2377 + void *sgt_buf = NULL;
2378 + dma_addr_t addr;
2379 + int nr_frags = skb_shinfo(skb)->nr_frags;
2380 + struct dpaa2_sg_entry *sgt;
2381 + int i, j, err;
2382 + int sgt_buf_size;
2383 + struct scatterlist *scl, *crt_scl;
2384 + int num_sg;
2385 + int num_dma_bufs;
2386 + struct dpaa2_eth_swa *bps;
2387 +
2388 + /* Create and map scatterlist.
2389 + * We don't advertise NETIF_F_FRAGLIST, so skb_to_sgvec() will not have
2390 + * to go beyond nr_frags+1.
2391 + * Note: We don't support chained scatterlists
2392 + */
2393 + WARN_ON(PAGE_SIZE / sizeof(struct scatterlist) < nr_frags + 1);
2394 + scl = kcalloc(nr_frags + 1, sizeof(struct scatterlist), GFP_ATOMIC);
2395 + if (unlikely(!scl))
2396 + return -ENOMEM;
2397 +
2398 + sg_init_table(scl, nr_frags + 1);
2399 + num_sg = skb_to_sgvec(skb, scl, 0, skb->len);
2400 + num_dma_bufs = dma_map_sg(dev, scl, num_sg, DMA_TO_DEVICE);
2401 + if (unlikely(!num_dma_bufs)) {
2402 + netdev_err(priv->net_dev, "dma_map_sg() error\n");
2403 + err = -ENOMEM;
2404 + goto dma_map_sg_failed;
2405 + }
2406 +
2407 + /* Prepare the HW SGT structure */
2408 + sgt_buf_size = priv->tx_data_offset +
2409 + sizeof(struct dpaa2_sg_entry) * (1 + num_dma_bufs);
2410 + sgt_buf = kzalloc(sgt_buf_size + DPAA2_ETH_TX_BUF_ALIGN, GFP_ATOMIC);
2411 + if (unlikely(!sgt_buf)) {
2412 + netdev_err(priv->net_dev, "failed to allocate SGT buffer\n");
2413 + err = -ENOMEM;
2414 + goto sgt_buf_alloc_failed;
2415 + }
2416 + sgt_buf = PTR_ALIGN(sgt_buf, DPAA2_ETH_TX_BUF_ALIGN);
2417 +
2418 + /* PTA from egress side is passed as is to the confirmation side so
2419 + * we need to clear some fields here in order to find consistent values
2420 + * on TX confirmation. We are clearing FAS (Frame Annotation Status)
2421 + * field here.
2422 + */
2423 + memset(sgt_buf + priv->buf_layout.private_data_size, 0, 8);
2424 +
2425 + sgt = (struct dpaa2_sg_entry *)(sgt_buf + priv->tx_data_offset);
2426 +
2427 + /* Fill in the HW SGT structure.
2428 + *
2429 + * sgt_buf is zeroed out, so the following fields are implicit
2430 + * in all sgt entries:
2431 + * - offset is 0
2432 + * - format is 'dpaa2_sg_single'
2433 + */
2434 + for_each_sg(scl, crt_scl, num_dma_bufs, i) {
2435 + dpaa2_sg_set_addr(&sgt[i], sg_dma_address(crt_scl));
2436 + dpaa2_sg_set_len(&sgt[i], sg_dma_len(crt_scl));
2437 + }
2438 + dpaa2_sg_set_final(&sgt[i - 1], true);
2439 +
2440 + /* Store the skb backpointer in the SGT buffer.
2441 + * Fit the scatterlist and the number of buffers alongside the
2442 + * skb backpointer in the SWA. We'll need all of them on Tx Conf.
2443 + */
2444 + bps = (struct dpaa2_eth_swa *)sgt_buf;
2445 + bps->skb = skb;
2446 + bps->scl = scl;
2447 + bps->num_sg = num_sg;
2448 + bps->num_dma_bufs = num_dma_bufs;
2449 +
2450 + for (j = 0; j < i; j++)
2451 + dpaa2_sg_cpu_to_le(&sgt[j]);
2452 +
2453 + /* Separately map the SGT buffer */
2454 + addr = dma_map_single(dev, sgt_buf, sgt_buf_size, DMA_TO_DEVICE);
2455 + if (unlikely(dma_mapping_error(dev, addr))) {
2456 + netdev_err(priv->net_dev, "dma_map_single() failed\n");
2457 + err = -ENOMEM;
2458 + goto dma_map_single_failed;
2459 + }
2460 + dpaa2_fd_set_offset(fd, priv->tx_data_offset);
2461 + dpaa2_fd_set_format(fd, dpaa2_fd_sg);
2462 + dpaa2_fd_set_addr(fd, addr);
2463 + dpaa2_fd_set_len(fd, skb->len);
2464 +
2465 + fd->simple.ctrl = DPAA2_FD_CTRL_ASAL | DPAA2_FD_CTRL_PTA |
2466 + DPAA2_FD_CTRL_PTV1;
2467 +
2468 + return 0;
2469 +
2470 +dma_map_single_failed:
2471 + kfree(sgt_buf);
2472 +sgt_buf_alloc_failed:
2473 + dma_unmap_sg(dev, scl, num_sg, DMA_TO_DEVICE);
2474 +dma_map_sg_failed:
2475 + kfree(scl);
2476 + return err;
2477 +}
2478 +
2479 +static int dpaa2_eth_build_single_fd(struct dpaa2_eth_priv *priv,
2480 + struct sk_buff *skb,
2481 + struct dpaa2_fd *fd)
2482 +{
2483 + struct device *dev = priv->net_dev->dev.parent;
2484 + u8 *buffer_start;
2485 + struct sk_buff **skbh;
2486 + dma_addr_t addr;
2487 +
2488 + buffer_start = PTR_ALIGN(skb->data - priv->tx_data_offset -
2489 + DPAA2_ETH_TX_BUF_ALIGN,
2490 + DPAA2_ETH_TX_BUF_ALIGN);
2491 +
2492 + /* PTA from egress side is passed as is to the confirmation side so
2493 + * we need to clear some fields here in order to find consistent values
2494 + * on TX confirmation. We are clearing FAS (Frame Annotation Status)
2495 + * field here.
2496 + */
2497 + memset(buffer_start + priv->buf_layout.private_data_size, 0, 8);
2498 +
2499 + /* Store a backpointer to the skb at the beginning of the buffer
2500 + * (in the private data area) such that we can release it
2501 + * on Tx confirm
2502 + */
2503 + skbh = (struct sk_buff **)buffer_start;
2504 + *skbh = skb;
2505 +
2506 + addr = dma_map_single(dev,
2507 + buffer_start,
2508 + skb_tail_pointer(skb) - buffer_start,
2509 + DMA_TO_DEVICE);
2510 + if (unlikely(dma_mapping_error(dev, addr))) {
2511 + dev_err(dev, "dma_map_single() failed\n");
2512 + return -EINVAL;
2513 + }
2514 +
2515 + dpaa2_fd_set_addr(fd, addr);
2516 + dpaa2_fd_set_offset(fd, (u16)(skb->data - buffer_start));
2517 + dpaa2_fd_set_len(fd, skb->len);
2518 + dpaa2_fd_set_format(fd, dpaa2_fd_single);
2519 +
2520 + fd->simple.ctrl = DPAA2_FD_CTRL_ASAL | DPAA2_FD_CTRL_PTA |
2521 + DPAA2_FD_CTRL_PTV1;
2522 +
2523 + return 0;
2524 +}
2525 +
2526 +/* DMA-unmap and free FD and possibly SGT buffer allocated on Tx. The skb
2527 + * back-pointed to is also freed.
2528 + * This can be called either from dpaa2_eth_tx_conf() or on the error path of
2529 + * dpaa2_eth_tx().
2530 + * Optionally, return the frame annotation status word (FAS), which needs
2531 + * to be checked if we're on the confirmation path.
2532 + */
2533 +static void dpaa2_eth_free_fd(const struct dpaa2_eth_priv *priv,
2534 + const struct dpaa2_fd *fd,
2535 + u32 *status)
2536 +{
2537 + struct device *dev = priv->net_dev->dev.parent;
2538 + dma_addr_t fd_addr;
2539 + struct sk_buff **skbh, *skb;
2540 + unsigned char *buffer_start;
2541 + int unmap_size;
2542 + struct scatterlist *scl;
2543 + int num_sg, num_dma_bufs;
2544 + struct dpaa2_eth_swa *bps;
2545 + bool fd_single;
2546 + struct dpaa2_fas *fas;
2547 +
2548 + fd_addr = dpaa2_fd_get_addr(fd);
2549 + skbh = phys_to_virt(fd_addr);
2550 + fd_single = (dpaa2_fd_get_format(fd) == dpaa2_fd_single);
2551 +
2552 + if (fd_single) {
2553 + skb = *skbh;
2554 + buffer_start = (unsigned char *)skbh;
2555 + /* Accessing the skb buffer is safe before dma unmap, because
2556 + * we didn't map the actual skb shell.
2557 + */
2558 + dma_unmap_single(dev, fd_addr,
2559 + skb_tail_pointer(skb) - buffer_start,
2560 + DMA_TO_DEVICE);
2561 + } else {
2562 + bps = (struct dpaa2_eth_swa *)skbh;
2563 + skb = bps->skb;
2564 + scl = bps->scl;
2565 + num_sg = bps->num_sg;
2566 + num_dma_bufs = bps->num_dma_bufs;
2567 +
2568 + /* Unmap the scatterlist */
2569 + dma_unmap_sg(dev, scl, num_sg, DMA_TO_DEVICE);
2570 + kfree(scl);
2571 +
2572 + /* Unmap the SGT buffer */
2573 + unmap_size = priv->tx_data_offset +
2574 + sizeof(struct dpaa2_sg_entry) * (1 + num_dma_bufs);
2575 + dma_unmap_single(dev, fd_addr, unmap_size, DMA_TO_DEVICE);
2576 + }
2577 +
2578 + if (priv->ts_tx_en && skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) {
2579 + struct skb_shared_hwtstamps shhwtstamps;
2580 + u64 *ns;
2581 +
2582 + memset(&shhwtstamps, 0, sizeof(shhwtstamps));
2583 +
2584 + ns = (u64 *)((void *)skbh +
2585 + priv->buf_layout.private_data_size +
2586 + sizeof(struct dpaa2_fas));
2587 + *ns = DPAA2_PTP_NOMINAL_FREQ_PERIOD_NS * (*ns);
2588 + shhwtstamps.hwtstamp = ns_to_ktime(*ns);
2589 + skb_tstamp_tx(skb, &shhwtstamps);
2590 + }
2591 +
2592 + /* Check the status from the Frame Annotation after we unmap the first
2593 + * buffer but before we free it.
2594 + */
2595 + if (status && (fd->simple.frc & DPAA2_FD_FRC_FASV)) {
2596 + fas = (struct dpaa2_fas *)
2597 + ((void *)skbh + priv->buf_layout.private_data_size);
2598 + *status = le32_to_cpu(fas->status);
2599 + }
2600 +
2601 + /* Free SGT buffer kmalloc'ed on tx */
2602 + if (!fd_single)
2603 + kfree(skbh);
2604 +
2605 + /* Move on with skb release */
2606 + dev_kfree_skb(skb);
2607 +}
2608 +
2609 +static int dpaa2_eth_tx(struct sk_buff *skb, struct net_device *net_dev)
2610 +{
2611 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
2612 + struct dpaa2_fd fd;
2613 + struct rtnl_link_stats64 *percpu_stats;
2614 + struct dpaa2_eth_stats *percpu_extras;
2615 + int err, i;
2616 + /* TxConf FQ selection primarily based on cpu affinity; this is
2617 + * non-migratable context, so it's safe to call smp_processor_id().
2618 + */
2619 + u16 queue_mapping = smp_processor_id() % priv->dpni_attrs.max_senders;
2620 +
2621 + percpu_stats = this_cpu_ptr(priv->percpu_stats);
2622 + percpu_extras = this_cpu_ptr(priv->percpu_extras);
2623 +
2624 + /* Setup the FD fields */
2625 + memset(&fd, 0, sizeof(fd));
2626 +
2627 + if (unlikely(skb_headroom(skb) < DPAA2_ETH_NEEDED_HEADROOM(priv))) {
2628 + struct sk_buff *ns;
2629 +
2630 + dev_info_once(net_dev->dev.parent,
2631 + "skb headroom too small, must realloc.\n");
2632 + ns = skb_realloc_headroom(skb, DPAA2_ETH_NEEDED_HEADROOM(priv));
2633 + if (unlikely(!ns)) {
2634 + percpu_stats->tx_dropped++;
2635 + goto err_alloc_headroom;
2636 + }
2637 + dev_kfree_skb(skb);
2638 + skb = ns;
2639 + }
2640 +
2641 + /* We'll be holding a back-reference to the skb until Tx Confirmation;
2642 + * we don't want that overwritten by a concurrent Tx with a cloned skb.
2643 + */
2644 + skb = skb_unshare(skb, GFP_ATOMIC);
2645 + if (unlikely(!skb)) {
2646 + netdev_err(net_dev, "Out of memory for skb_unshare()");
2647 + /* skb_unshare() has already freed the skb */
2648 + percpu_stats->tx_dropped++;
2649 + return NETDEV_TX_OK;
2650 + }
2651 +
2652 + if (skb_is_nonlinear(skb)) {
2653 + err = dpaa2_eth_build_sg_fd(priv, skb, &fd);
2654 + percpu_extras->tx_sg_frames++;
2655 + percpu_extras->tx_sg_bytes += skb->len;
2656 + } else {
2657 + err = dpaa2_eth_build_single_fd(priv, skb, &fd);
2658 + }
2659 +
2660 + if (unlikely(err)) {
2661 + percpu_stats->tx_dropped++;
2662 + goto err_build_fd;
2663 + }
2664 +
2665 + /* Tracing point */
2666 + trace_dpaa2_tx_fd(net_dev, &fd);
2667 +
2668 + for (i = 0; i < (DPAA2_ETH_MAX_TX_QUEUES << 1); i++) {
2669 + err = dpaa2_io_service_enqueue_qd(NULL, priv->tx_qdid, 0,
2670 + priv->fq[queue_mapping].flowid,
2671 + &fd);
2672 + if (err != -EBUSY)
2673 + break;
2674 + }
2675 + percpu_extras->tx_portal_busy += i;
2676 + if (unlikely(err < 0)) {
2677 + netdev_dbg(net_dev, "error enqueueing Tx frame\n");
2678 + percpu_stats->tx_errors++;
2679 + /* Clean up everything, including freeing the skb */
2680 + dpaa2_eth_free_fd(priv, &fd, NULL);
2681 + } else {
2682 + percpu_stats->tx_packets++;
2683 + percpu_stats->tx_bytes += skb->len;
2684 + }
2685 +
2686 + return NETDEV_TX_OK;
2687 +
2688 +err_build_fd:
2689 +err_alloc_headroom:
2690 + dev_kfree_skb(skb);
2691 +
2692 + return NETDEV_TX_OK;
2693 +}
2694 +
2695 +static void dpaa2_eth_tx_conf(struct dpaa2_eth_priv *priv,
2696 + struct dpaa2_eth_channel *ch,
2697 + const struct dpaa2_fd *fd,
2698 + struct napi_struct *napi __always_unused)
2699 +{
2700 + struct rtnl_link_stats64 *percpu_stats;
2701 + struct dpaa2_eth_stats *percpu_extras;
2702 + u32 status = 0;
2703 +
2704 + /* Tracing point */
2705 + trace_dpaa2_tx_conf_fd(priv->net_dev, fd);
2706 +
2707 + percpu_extras = this_cpu_ptr(priv->percpu_extras);
2708 + percpu_extras->tx_conf_frames++;
2709 + percpu_extras->tx_conf_bytes += dpaa2_fd_get_len(fd);
2710 +
2711 + dpaa2_eth_free_fd(priv, fd, &status);
2712 +
2713 + if (unlikely(status & DPAA2_ETH_TXCONF_ERR_MASK)) {
2714 + netdev_err(priv->net_dev, "TxConf frame error(s): 0x%08x\n",
2715 + status & DPAA2_ETH_TXCONF_ERR_MASK);
2716 + percpu_stats = this_cpu_ptr(priv->percpu_stats);
2717 + /* Tx-conf logically pertains to the egress path. */
2718 + percpu_stats->tx_errors++;
2719 + }
2720 +}
2721 +
2722 +static int dpaa2_eth_set_rx_csum(struct dpaa2_eth_priv *priv, bool enable)
2723 +{
2724 + int err;
2725 +
2726 + err = dpni_set_l3_chksum_validation(priv->mc_io, 0, priv->mc_token,
2727 + enable);
2728 + if (err) {
2729 + netdev_err(priv->net_dev,
2730 + "dpni_set_l3_chksum_validation() failed\n");
2731 + return err;
2732 + }
2733 +
2734 + err = dpni_set_l4_chksum_validation(priv->mc_io, 0, priv->mc_token,
2735 + enable);
2736 + if (err) {
2737 + netdev_err(priv->net_dev,
2738 +                         "dpni_set_l4_chksum_validation() failed\n");
2739 + return err;
2740 + }
2741 +
2742 + return 0;
2743 +}
2744 +
2745 +static int dpaa2_eth_set_tx_csum(struct dpaa2_eth_priv *priv, bool enable)
2746 +{
2747 + struct dpaa2_eth_fq *fq;
2748 + struct dpni_tx_flow_cfg tx_flow_cfg;
2749 + int err;
2750 + int i;
2751 +
2752 + memset(&tx_flow_cfg, 0, sizeof(tx_flow_cfg));
2753 + tx_flow_cfg.options = DPNI_TX_FLOW_OPT_L3_CHKSUM_GEN |
2754 + DPNI_TX_FLOW_OPT_L4_CHKSUM_GEN;
2755 + tx_flow_cfg.l3_chksum_gen = enable;
2756 + tx_flow_cfg.l4_chksum_gen = enable;
2757 +
2758 + for (i = 0; i < priv->num_fqs; i++) {
2759 + fq = &priv->fq[i];
2760 + if (fq->type != DPAA2_TX_CONF_FQ)
2761 + continue;
2762 +
2763 + /* The Tx flowid is kept in the corresponding TxConf FQ. */
2764 + err = dpni_set_tx_flow(priv->mc_io, 0, priv->mc_token,
2765 + &fq->flowid, &tx_flow_cfg);
2766 + if (err) {
2767 + netdev_err(priv->net_dev, "dpni_set_tx_flow failed\n");
2768 + return err;
2769 + }
2770 + }
2771 +
2772 + return 0;
2773 +}
2774 +
2775 +static int dpaa2_bp_add_7(struct dpaa2_eth_priv *priv, u16 bpid)
2776 +{
2777 + struct device *dev = priv->net_dev->dev.parent;
2778 + u64 buf_array[7];
2779 + void *buf;
2780 + dma_addr_t addr;
2781 + int i;
2782 +
2783 + for (i = 0; i < 7; i++) {
2784 + /* Allocate buffer visible to WRIOP + skb shared info +
2785 + * alignment padding
2786 + */
2787 + buf = napi_alloc_frag(DPAA2_ETH_BUF_RAW_SIZE);
2788 + if (unlikely(!buf)) {
2789 + dev_err(dev, "buffer allocation failed\n");
2790 + goto err_alloc;
2791 + }
2792 + buf = PTR_ALIGN(buf, DPAA2_ETH_RX_BUF_ALIGN);
2793 +
2794 + addr = dma_map_single(dev, buf, DPAA2_ETH_RX_BUFFER_SIZE,
2795 + DMA_FROM_DEVICE);
2796 + if (unlikely(dma_mapping_error(dev, addr))) {
2797 + dev_err(dev, "dma_map_single() failed\n");
2798 + goto err_map;
2799 + }
2800 + buf_array[i] = addr;
2801 +
2802 + /* tracing point */
2803 + trace_dpaa2_eth_buf_seed(priv->net_dev,
2804 + buf, DPAA2_ETH_BUF_RAW_SIZE,
2805 + addr, DPAA2_ETH_RX_BUFFER_SIZE,
2806 + bpid);
2807 + }
2808 +
2809 +release_bufs:
2810 + /* In case the portal is busy, retry until successful.
2811 + * The buffer release function would only fail if the QBMan portal
2812 + * was busy, which implies portal contention (i.e. more CPUs than
2813 + * portals, i.e. GPPs w/o affine DPIOs). For all practical purposes,
2814 + * there is little we can realistically do, short of giving up -
2815 + * in which case we'd risk depleting the buffer pool and never again
2816 + * receiving the Rx interrupt which would kick-start the refill logic.
2817 + * So just keep retrying, at the risk of being moved to ksoftirqd.
2818 + */
2819 + while (dpaa2_io_service_release(NULL, bpid, buf_array, i))
2820 + cpu_relax();
2821 + return i;
2822 +
2823 +err_map:
2824 + put_page(virt_to_head_page(buf));
2825 +err_alloc:
2826 + if (i)
2827 + goto release_bufs;
2828 +
2829 + return 0;
2830 +}
2831 +
2832 +static int dpaa2_dpbp_seed(struct dpaa2_eth_priv *priv, u16 bpid)
2833 +{
2834 + int i, j;
2835 + int new_count;
2836 +
2837 + /* This is the lazy seeding of Rx buffer pools.
2838 + * dpaa2_bp_add_7() is also used on the Rx hotpath and calls
2839 + * napi_alloc_frag(). The trouble with that is that it in turn ends up
2840 + * calling this_cpu_ptr(), which mandates execution in atomic context.
2841 + * Rather than splitting up the code, do a one-off preempt disable.
2842 + */
2843 + preempt_disable();
2844 + for (j = 0; j < priv->num_channels; j++) {
2845 + for (i = 0; i < DPAA2_ETH_NUM_BUFS; i += 7) {
2846 + new_count = dpaa2_bp_add_7(priv, bpid);
2847 + priv->channel[j]->buf_count += new_count;
2848 +
2849 + if (new_count < 7) {
2850 + preempt_enable();
2851 + goto out_of_memory;
2852 + }
2853 + }
2854 + }
2855 + preempt_enable();
2856 +
2857 + return 0;
2858 +
2859 +out_of_memory:
2860 + return -ENOMEM;
2861 +}
2862 +
2863 +/**
2864 + * Drain the specified number of buffers from the DPNI's private buffer pool.
2865 + * @count must not exceed 7
2866 + */
2867 +static void dpaa2_dpbp_drain_cnt(struct dpaa2_eth_priv *priv, int count)
2868 +{
2869 + struct device *dev = priv->net_dev->dev.parent;
2870 + u64 buf_array[7];
2871 + void *vaddr;
2872 + int ret, i;
2873 +
2874 + do {
2875 + ret = dpaa2_io_service_acquire(NULL, priv->dpbp_attrs.bpid,
2876 + buf_array, count);
2877 + if (ret < 0) {
2878 + pr_err("dpaa2_io_service_acquire() failed\n");
2879 + return;
2880 + }
2881 + for (i = 0; i < ret; i++) {
2882 + /* Same logic as on regular Rx path */
2883 + dma_unmap_single(dev, buf_array[i],
2884 + DPAA2_ETH_RX_BUFFER_SIZE,
2885 + DMA_FROM_DEVICE);
2886 + vaddr = phys_to_virt(buf_array[i]);
2887 + put_page(virt_to_head_page(vaddr));
2888 + }
2889 + } while (ret);
2890 +}
2891 +
2892 +static void __dpaa2_dpbp_free(struct dpaa2_eth_priv *priv)
2893 +{
2894 + int i;
2895 +
2896 + dpaa2_dpbp_drain_cnt(priv, 7);
2897 + dpaa2_dpbp_drain_cnt(priv, 1);
2898 +
2899 + for (i = 0; i < priv->num_channels; i++)
2900 + priv->channel[i]->buf_count = 0;
2901 +}
2902 +
2903 +/* Function is called from softirq context only, so we don't need to guard
2904 + * the access to percpu count
2905 + */
2906 +static int dpaa2_dpbp_refill(struct dpaa2_eth_priv *priv,
2907 + struct dpaa2_eth_channel *ch,
2908 + u16 bpid)
2909 +{
2910 + int new_count;
2911 + int err = 0;
2912 +
2913 + if (unlikely(ch->buf_count < DPAA2_ETH_REFILL_THRESH)) {
2914 + do {
2915 + new_count = dpaa2_bp_add_7(priv, bpid);
2916 + if (unlikely(!new_count)) {
2917 + /* Out of memory; abort for now, we'll
2918 + * try later on
2919 + */
2920 + break;
2921 + }
2922 + ch->buf_count += new_count;
2923 + } while (ch->buf_count < DPAA2_ETH_NUM_BUFS);
2924 +
2925 + if (unlikely(ch->buf_count < DPAA2_ETH_NUM_BUFS))
2926 + err = -ENOMEM;
2927 + }
2928 +
2929 + return err;
2930 +}
2931 +
2932 +static int __dpaa2_eth_pull_channel(struct dpaa2_eth_channel *ch)
2933 +{
2934 + int err;
2935 + int dequeues = -1;
2936 + struct dpaa2_eth_priv *priv = ch->priv;
2937 +
2938 + /* Retry while portal is busy */
2939 + do {
2940 + err = dpaa2_io_service_pull_channel(NULL, ch->ch_id, ch->store);
2941 + dequeues++;
2942 + } while (err == -EBUSY);
2943 + if (unlikely(err))
2944 + netdev_err(priv->net_dev, "dpaa2_io_service_pull err %d", err);
2945 +
2946 + ch->stats.dequeue_portal_busy += dequeues;
2947 + return err;
2948 +}
2949 +
2950 +static int dpaa2_eth_poll(struct napi_struct *napi, int budget)
2951 +{
2952 + struct dpaa2_eth_channel *ch;
2953 + int cleaned = 0, store_cleaned;
2954 + struct dpaa2_eth_priv *priv;
2955 + int err;
2956 +
2957 + ch = container_of(napi, struct dpaa2_eth_channel, napi);
2958 + priv = ch->priv;
2959 +
2960 + __dpaa2_eth_pull_channel(ch);
2961 +
2962 + do {
2963 + /* Refill pool if appropriate */
2964 + dpaa2_dpbp_refill(priv, ch, priv->dpbp_attrs.bpid);
2965 +
2966 + store_cleaned = dpaa2_eth_store_consume(ch);
2967 + cleaned += store_cleaned;
2968 +
2969 + if (store_cleaned == 0 ||
2970 + cleaned > budget - DPAA2_ETH_STORE_SIZE)
2971 + break;
2972 +
2973 + /* Try to dequeue some more */
2974 + err = __dpaa2_eth_pull_channel(ch);
2975 + if (unlikely(err))
2976 + break;
2977 + } while (1);
2978 +
2979 + if (cleaned < budget) {
2980 + napi_complete_done(napi, cleaned);
2981 + err = dpaa2_io_service_rearm(NULL, &ch->nctx);
2982 + if (unlikely(err))
2983 + netdev_err(priv->net_dev,
2984 + "Notif rearm failed for channel %d\n",
2985 + ch->ch_id);
2986 + }
2987 +
2988 + ch->stats.frames += cleaned;
2989 +
2990 + return cleaned;
2991 +}
2992 +
2993 +static void dpaa2_eth_napi_enable(struct dpaa2_eth_priv *priv)
2994 +{
2995 + struct dpaa2_eth_channel *ch;
2996 + int i;
2997 +
2998 + for (i = 0; i < priv->num_channels; i++) {
2999 + ch = priv->channel[i];
3000 + napi_enable(&ch->napi);
3001 + }
3002 +}
3003 +
3004 +static void dpaa2_eth_napi_disable(struct dpaa2_eth_priv *priv)
3005 +{
3006 + struct dpaa2_eth_channel *ch;
3007 + int i;
3008 +
3009 + for (i = 0; i < priv->num_channels; i++) {
3010 + ch = priv->channel[i];
3011 + napi_disable(&ch->napi);
3012 + }
3013 +}
3014 +
3015 +static int dpaa2_link_state_update(struct dpaa2_eth_priv *priv)
3016 +{
3017 + struct dpni_link_state state;
3018 + int err;
3019 +
3020 + err = dpni_get_link_state(priv->mc_io, 0, priv->mc_token, &state);
3021 + if (unlikely(err)) {
3022 + netdev_err(priv->net_dev,
3023 + "dpni_get_link_state() failed\n");
3024 + return err;
3025 + }
3026 +
3027 +       /* Check link state; speed / duplex changes are not treated yet */
3028 + if (priv->link_state.up == state.up)
3029 + return 0;
3030 +
3031 + priv->link_state = state;
3032 + if (state.up) {
3033 + netif_carrier_on(priv->net_dev);
3034 + netif_tx_start_all_queues(priv->net_dev);
3035 + } else {
3036 + netif_tx_stop_all_queues(priv->net_dev);
3037 + netif_carrier_off(priv->net_dev);
3038 + }
3039 +
3040 + netdev_info(priv->net_dev, "Link Event: state %s",
3041 + state.up ? "up" : "down");
3042 +
3043 + return 0;
3044 +}
3045 +
3046 +static int dpaa2_eth_open(struct net_device *net_dev)
3047 +{
3048 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
3049 + int err;
3050 +
3051 + err = dpaa2_dpbp_seed(priv, priv->dpbp_attrs.bpid);
3052 + if (err) {
3053 + /* Not much to do; the buffer pool, though not filled up,
3054 + * may still contain some buffers which would enable us
3055 + * to limp on.
3056 + */
3057 + netdev_err(net_dev, "Buffer seeding failed for DPBP %d (bpid=%d)\n",
3058 + priv->dpbp_dev->obj_desc.id, priv->dpbp_attrs.bpid);
3059 + }
3060 +
3061 + /* We'll only start the txqs when the link is actually ready; make sure
3062 + * we don't race against the link up notification, which may come
3063 + * immediately after dpni_enable();
3064 + */
3065 + netif_tx_stop_all_queues(net_dev);
3066 + dpaa2_eth_napi_enable(priv);
3067 + /* Also, explicitly set carrier off, otherwise netif_carrier_ok() will
3068 + * return true and cause 'ip link show' to report the LOWER_UP flag,
3069 + * even though the link notification wasn't even received.
3070 + */
3071 + netif_carrier_off(net_dev);
3072 +
3073 + err = dpni_enable(priv->mc_io, 0, priv->mc_token);
3074 + if (err < 0) {
3075 + dev_err(net_dev->dev.parent, "dpni_enable() failed\n");
3076 + goto enable_err;
3077 + }
3078 +
3079 + /* If the DPMAC object has already processed the link up interrupt,
3080 + * we have to learn the link state ourselves.
3081 + */
3082 + err = dpaa2_link_state_update(priv);
3083 + if (err < 0) {
3084 + dev_err(net_dev->dev.parent, "Can't update link state\n");
3085 + goto link_state_err;
3086 + }
3087 +
3088 + return 0;
3089 +
3090 +link_state_err:
3091 +enable_err:
3092 + dpaa2_eth_napi_disable(priv);
3093 + __dpaa2_dpbp_free(priv);
3094 + return err;
3095 +}
3096 +
3097 +static int dpaa2_eth_stop(struct net_device *net_dev)
3098 +{
3099 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
3100 +
3101 + /* Stop Tx and Rx traffic */
3102 + netif_tx_stop_all_queues(net_dev);
3103 + netif_carrier_off(net_dev);
3104 + dpni_disable(priv->mc_io, 0, priv->mc_token);
3105 +
3106 + msleep(500);
3107 +
3108 + dpaa2_eth_napi_disable(priv);
3109 + msleep(100);
3110 +
3111 + __dpaa2_dpbp_free(priv);
3112 +
3113 + return 0;
3114 +}
3115 +
3116 +static int dpaa2_eth_init(struct net_device *net_dev)
3117 +{
3118 + u64 supported = 0;
3119 + u64 not_supported = 0;
3120 + const struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
3121 + u32 options = priv->dpni_attrs.options;
3122 +
3123 + /* Capabilities listing */
3124 + supported |= IFF_LIVE_ADDR_CHANGE | IFF_PROMISC | IFF_ALLMULTI;
3125 +
3126 + if (options & DPNI_OPT_UNICAST_FILTER)
3127 + supported |= IFF_UNICAST_FLT;
3128 + else
3129 + not_supported |= IFF_UNICAST_FLT;
3130 +
3131 + if (options & DPNI_OPT_MULTICAST_FILTER)
3132 + supported |= IFF_MULTICAST;
3133 + else
3134 + not_supported |= IFF_MULTICAST;
3135 +
3136 + net_dev->priv_flags |= supported;
3137 + net_dev->priv_flags &= ~not_supported;
3138 +
3139 + /* Features */
3140 + net_dev->features = NETIF_F_RXCSUM |
3141 + NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
3142 + NETIF_F_SG | NETIF_F_HIGHDMA |
3143 + NETIF_F_LLTX;
3144 + net_dev->hw_features = net_dev->features;
3145 +
3146 + return 0;
3147 +}
3148 +
3149 +static int dpaa2_eth_set_addr(struct net_device *net_dev, void *addr)
3150 +{
3151 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
3152 + struct device *dev = net_dev->dev.parent;
3153 + int err;
3154 +
3155 + err = eth_mac_addr(net_dev, addr);
3156 + if (err < 0) {
3157 + dev_err(dev, "eth_mac_addr() failed with error %d\n", err);
3158 + return err;
3159 + }
3160 +
3161 + err = dpni_set_primary_mac_addr(priv->mc_io, 0, priv->mc_token,
3162 + net_dev->dev_addr);
3163 + if (err) {
3164 + dev_err(dev, "dpni_set_primary_mac_addr() failed (%d)\n", err);
3165 + return err;
3166 + }
3167 +
3168 + return 0;
3169 +}
3170 +
3171 +/** Fill in counters maintained by the GPP driver. These may be different from
3172 + * the hardware counters obtained by ethtool.
3173 + */
3174 +static struct rtnl_link_stats64
3175 +*dpaa2_eth_get_stats(struct net_device *net_dev,
3176 + struct rtnl_link_stats64 *stats)
3177 +{
3178 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
3179 + struct rtnl_link_stats64 *percpu_stats;
3180 + u64 *cpustats;
3181 + u64 *netstats = (u64 *)stats;
3182 + int i, j;
3183 + int num = sizeof(struct rtnl_link_stats64) / sizeof(u64);
3184 +
3185 + for_each_possible_cpu(i) {
3186 + percpu_stats = per_cpu_ptr(priv->percpu_stats, i);
3187 + cpustats = (u64 *)percpu_stats;
3188 + for (j = 0; j < num; j++)
3189 + netstats[j] += cpustats[j];
3190 + }
3191 +
3192 + return stats;
3193 +}
3194 +
3195 +static int dpaa2_eth_change_mtu(struct net_device *net_dev, int mtu)
3196 +{
3197 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
3198 + int err;
3199 +
3200 + if (mtu < 68 || mtu > DPAA2_ETH_MAX_MTU) {
3201 + netdev_err(net_dev, "Invalid MTU %d. Valid range is: 68..%d\n",
3202 + mtu, DPAA2_ETH_MAX_MTU);
3203 + return -EINVAL;
3204 + }
3205 +
3206 + /* Set the maximum Rx frame length to match the transmit side;
3207 + * account for L2 headers when computing the MFL
3208 + */
3209 + err = dpni_set_max_frame_length(priv->mc_io, 0, priv->mc_token,
3210 + (u16)DPAA2_ETH_L2_MAX_FRM(mtu));
3211 + if (err) {
3212 + netdev_err(net_dev, "dpni_set_mfl() failed\n");
3213 + return err;
3214 + }
3215 +
3216 + net_dev->mtu = mtu;
3217 + return 0;
3218 +}
3219 +
3220 +/* Convenience macro to make code littered with error checking more readable */
3221 +#define DPAA2_ETH_WARN_IF_ERR(err, netdevp, format, ...) \
3222 +do { \
3223 + if (err) \
3224 + netdev_warn(netdevp, format, ##__VA_ARGS__); \
3225 +} while (0)
3226 +
3227 +/* Copy mac unicast addresses from @net_dev to @priv.
3228 + * Its sole purpose is to make dpaa2_eth_set_rx_mode() more readable.
3229 + */
3230 +static void _dpaa2_eth_hw_add_uc_addr(const struct net_device *net_dev,
3231 + struct dpaa2_eth_priv *priv)
3232 +{
3233 + struct netdev_hw_addr *ha;
3234 + int err;
3235 +
3236 + netdev_for_each_uc_addr(ha, net_dev) {
3237 + err = dpni_add_mac_addr(priv->mc_io, 0, priv->mc_token,
3238 + ha->addr);
3239 + DPAA2_ETH_WARN_IF_ERR(err, priv->net_dev,
3240 + "Could not add ucast MAC %pM to the filtering table (err %d)\n",
3241 + ha->addr, err);
3242 + }
3243 +}
3244 +
3245 +/* Copy mac multicast addresses from @net_dev to @priv
3246 + * Its sole purpose is to make dpaa2_eth_set_rx_mode() more readable.
3247 + */
3248 +static void _dpaa2_eth_hw_add_mc_addr(const struct net_device *net_dev,
3249 + struct dpaa2_eth_priv *priv)
3250 +{
3251 + struct netdev_hw_addr *ha;
3252 + int err;
3253 +
3254 + netdev_for_each_mc_addr(ha, net_dev) {
3255 + err = dpni_add_mac_addr(priv->mc_io, 0, priv->mc_token,
3256 + ha->addr);
3257 + DPAA2_ETH_WARN_IF_ERR(err, priv->net_dev,
3258 + "Could not add mcast MAC %pM to the filtering table (err %d)\n",
3259 + ha->addr, err);
3260 + }
3261 +}
3262 +
3263 +static void dpaa2_eth_set_rx_mode(struct net_device *net_dev)
3264 +{
3265 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
3266 + int uc_count = netdev_uc_count(net_dev);
3267 + int mc_count = netdev_mc_count(net_dev);
3268 + u8 max_uc = priv->dpni_attrs.max_unicast_filters;
3269 + u8 max_mc = priv->dpni_attrs.max_multicast_filters;
3270 + u32 options = priv->dpni_attrs.options;
3271 + u16 mc_token = priv->mc_token;
3272 + struct fsl_mc_io *mc_io = priv->mc_io;
3273 + int err;
3274 +
3275 + /* Basic sanity checks; these probably indicate a misconfiguration */
3276 + if (!(options & DPNI_OPT_UNICAST_FILTER) && max_uc != 0)
3277 + netdev_info(net_dev,
3278 + "max_unicast_filters=%d, you must have DPNI_OPT_UNICAST_FILTER in the DPL\n",
3279 + max_uc);
3280 + if (!(options & DPNI_OPT_MULTICAST_FILTER) && max_mc != 0)
3281 + netdev_info(net_dev,
3282 + "max_multicast_filters=%d, you must have DPNI_OPT_MULTICAST_FILTER in the DPL\n",
3283 + max_mc);
3284 +
3285 + /* Force promiscuous if the uc or mc counts exceed our capabilities. */
3286 + if (uc_count > max_uc) {
3287 + netdev_info(net_dev,
3288 + "Unicast addr count reached %d, max allowed is %d; forcing promisc\n",
3289 + uc_count, max_uc);
3290 + goto force_promisc;
3291 + }
3292 + if (mc_count > max_mc) {
3293 + netdev_info(net_dev,
3294 + "Multicast addr count reached %d, max allowed is %d; forcing promisc\n",
3295 + mc_count, max_mc);
3296 + goto force_mc_promisc;
3297 + }
3298 +
3299 + /* Adjust promisc settings due to flag combinations */
3300 + if (net_dev->flags & IFF_PROMISC) {
3301 + goto force_promisc;
3302 + } else if (net_dev->flags & IFF_ALLMULTI) {
3303 + /* First, rebuild unicast filtering table. This should be done
3304 + * in promisc mode, in order to avoid frame loss while we
3305 + * progressively add entries to the table.
3306 + * We don't know whether we had been in promisc already, and
3307 + * making an MC call to find it is expensive; so set uc promisc
3308 + * nonetheless.
3309 + */
3310 + err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 1);
3311 + DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't set uc promisc\n");
3312 +
3313 + /* Actual uc table reconstruction. */
3314 + err = dpni_clear_mac_filters(mc_io, 0, mc_token, 1, 0);
3315 + DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't clear uc filters\n");
3316 + _dpaa2_eth_hw_add_uc_addr(net_dev, priv);
3317 +
3318 + /* Finally, clear uc promisc and set mc promisc as requested. */
3319 + err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 0);
3320 + DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't clear uc promisc\n");
3321 + goto force_mc_promisc;
3322 + }
3323 +
3324 + /* Neither unicast, nor multicast promisc will be on... eventually.
3325 + * For now, rebuild mac filtering tables while forcing both of them on.
3326 + */
3327 + err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 1);
3328 + DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't set uc promisc (%d)\n", err);
3329 + err = dpni_set_multicast_promisc(mc_io, 0, mc_token, 1);
3330 + DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't set mc promisc (%d)\n", err);
3331 +
3332 + /* Actual mac filtering tables reconstruction */
3333 + err = dpni_clear_mac_filters(mc_io, 0, mc_token, 1, 1);
3334 + DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't clear mac filters\n");
3335 + _dpaa2_eth_hw_add_mc_addr(net_dev, priv);
3336 + _dpaa2_eth_hw_add_uc_addr(net_dev, priv);
3337 +
3338 + /* Now we can clear both ucast and mcast promisc, without risking
3339 + * to drop legitimate frames anymore.
3340 + */
3341 + err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 0);
3342 + DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't clear ucast promisc\n");
3343 + err = dpni_set_multicast_promisc(mc_io, 0, mc_token, 0);
3344 + DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't clear mcast promisc\n");
3345 +
3346 + return;
3347 +
3348 +force_promisc:
3349 + err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 1);
3350 + DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't set ucast promisc\n");
3351 +force_mc_promisc:
3352 + err = dpni_set_multicast_promisc(mc_io, 0, mc_token, 1);
3353 + DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't set mcast promisc\n");
3354 +}
3355 +
3356 +static int dpaa2_eth_set_features(struct net_device *net_dev,
3357 + netdev_features_t features)
3358 +{
3359 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
3360 + netdev_features_t changed = features ^ net_dev->features;
3361 + int err;
3362 +
3363 + if (changed & NETIF_F_RXCSUM) {
3364 + bool enable = !!(features & NETIF_F_RXCSUM);
3365 +
3366 + err = dpaa2_eth_set_rx_csum(priv, enable);
3367 + if (err)
3368 + return err;
3369 + }
3370 +
3371 + if (changed & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) {
3372 + bool enable = !!(features &
3373 + (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM));
3374 + err = dpaa2_eth_set_tx_csum(priv, enable);
3375 + if (err)
3376 + return err;
3377 + }
3378 +
3379 + return 0;
3380 +}
3381 +
3382 +static int dpaa2_eth_ts_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
3383 +{
3384 + struct dpaa2_eth_priv *priv = netdev_priv(dev);
3385 + struct hwtstamp_config config;
3386 +
3387 + if (copy_from_user(&config, rq->ifr_data, sizeof(config)))
3388 + return -EFAULT;
3389 +
3390 + switch (config.tx_type) {
3391 + case HWTSTAMP_TX_OFF:
3392 + priv->ts_tx_en = false;
3393 + break;
3394 + case HWTSTAMP_TX_ON:
3395 + priv->ts_tx_en = true;
3396 + break;
3397 + default:
3398 + return -ERANGE;
3399 + }
3400 +
3401 + if (config.rx_filter == HWTSTAMP_FILTER_NONE)
3402 + priv->ts_rx_en = false;
3403 + else {
3404 + priv->ts_rx_en = true;
3405 + /* TS is set for all frame types, not only those requested */
3406 + config.rx_filter = HWTSTAMP_FILTER_ALL;
3407 + }
3408 +
3409 + return copy_to_user(rq->ifr_data, &config, sizeof(config)) ?
3410 + -EFAULT : 0;
3411 +}
3412 +
3413 +static int dpaa2_eth_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
3414 +{
3415 + if (cmd == SIOCSHWTSTAMP)
3416 + return dpaa2_eth_ts_ioctl(dev, rq, cmd);
3417 + else
3418 + return -EINVAL;
3419 +}
3420 +
3421 +static const struct net_device_ops dpaa2_eth_ops = {
3422 + .ndo_open = dpaa2_eth_open,
3423 + .ndo_start_xmit = dpaa2_eth_tx,
3424 + .ndo_stop = dpaa2_eth_stop,
3425 + .ndo_init = dpaa2_eth_init,
3426 + .ndo_set_mac_address = dpaa2_eth_set_addr,
3427 + .ndo_get_stats64 = dpaa2_eth_get_stats,
3428 + .ndo_change_mtu = dpaa2_eth_change_mtu,
3429 + .ndo_set_rx_mode = dpaa2_eth_set_rx_mode,
3430 + .ndo_set_features = dpaa2_eth_set_features,
3431 + .ndo_do_ioctl = dpaa2_eth_ioctl,
3432 +};
3433 +
3434 +static void dpaa2_eth_cdan_cb(struct dpaa2_io_notification_ctx *ctx)
3435 +{
3436 + struct dpaa2_eth_channel *ch;
3437 +
3438 + ch = container_of(ctx, struct dpaa2_eth_channel, nctx);
3439 +
3440 + /* Update NAPI statistics */
3441 + ch->stats.cdan++;
3442 +
3443 + napi_schedule_irqoff(&ch->napi);
3444 +}
3445 +
3446 +static void dpaa2_eth_setup_fqs(struct dpaa2_eth_priv *priv)
3447 +{
3448 + int i;
3449 +
3450 + /* We have one TxConf FQ per Tx flow */
3451 + for (i = 0; i < priv->dpni_attrs.max_senders; i++) {
3452 + priv->fq[priv->num_fqs].netdev_priv = priv;
3453 + priv->fq[priv->num_fqs].type = DPAA2_TX_CONF_FQ;
3454 + priv->fq[priv->num_fqs].consume = dpaa2_eth_tx_conf;
3455 + priv->fq[priv->num_fqs++].flowid = DPNI_NEW_FLOW_ID;
3456 + }
3457 +
3458 + /* The number of Rx queues (Rx distribution width) may be different from
3459 + * the number of cores.
3460 + * We only support one traffic class for now.
3461 + */
3462 + for (i = 0; i < dpaa2_queue_count(priv); i++) {
3463 + priv->fq[priv->num_fqs].netdev_priv = priv;
3464 + priv->fq[priv->num_fqs].type = DPAA2_RX_FQ;
3465 + priv->fq[priv->num_fqs].consume = dpaa2_eth_rx;
3466 + priv->fq[priv->num_fqs++].flowid = (u16)i;
3467 + }
3468 +
3469 +#ifdef CONFIG_FSL_DPAA2_ETH_USE_ERR_QUEUE
3470 + /* We have exactly one Rx error queue per DPNI */
3471 + priv->fq[priv->num_fqs].netdev_priv = priv;
3472 + priv->fq[priv->num_fqs].type = DPAA2_RX_ERR_FQ;
3473 + priv->fq[priv->num_fqs++].consume = dpaa2_eth_rx_err;
3474 +#endif
3475 +}
3476 +
3477 +static int check_obj_version(struct fsl_mc_device *ls_dev, u16 mc_version)
3478 +{
3479 + char *name = ls_dev->obj_desc.type;
3480 + struct device *dev = &ls_dev->dev;
3481 + u16 supported_version, flib_version;
3482 +
3483 + if (strcmp(name, "dpni") == 0) {
3484 + flib_version = DPNI_VER_MAJOR;
3485 + supported_version = DPAA2_SUPPORTED_DPNI_VERSION;
3486 + } else if (strcmp(name, "dpbp") == 0) {
3487 + flib_version = DPBP_VER_MAJOR;
3488 + supported_version = DPAA2_SUPPORTED_DPBP_VERSION;
3489 + } else if (strcmp(name, "dpcon") == 0) {
3490 + flib_version = DPCON_VER_MAJOR;
3491 + supported_version = DPAA2_SUPPORTED_DPCON_VERSION;
3492 + } else {
3493 + dev_err(dev, "invalid object type (%s)\n", name);
3494 + return -EINVAL;
3495 + }
3496 +
3497 + /* Check that the FLIB-defined version matches the one reported by MC */
3498 + if (mc_version != flib_version) {
3499 + dev_err(dev,
3500 + "%s FLIB version mismatch: MC reports %d, we have %d\n",
3501 + name, mc_version, flib_version);
3502 + return -EINVAL;
3503 + }
3504 +
3505 + /* ... and that we actually support it */
3506 + if (mc_version < supported_version) {
3507 + dev_err(dev, "Unsupported %s FLIB version (%d)\n",
3508 + name, mc_version);
3509 + return -EINVAL;
3510 + }
3511 + dev_dbg(dev, "Using %s FLIB version %d\n", name, mc_version);
3512 +
3513 + return 0;
3514 +}
3515 +
3516 +static struct fsl_mc_device *dpaa2_dpcon_setup(struct dpaa2_eth_priv *priv)
3517 +{
3518 + struct fsl_mc_device *dpcon;
3519 + struct device *dev = priv->net_dev->dev.parent;
3520 + struct dpcon_attr attrs;
3521 + int err;
3522 +
3523 + err = fsl_mc_object_allocate(to_fsl_mc_device(dev),
3524 + FSL_MC_POOL_DPCON, &dpcon);
3525 + if (err) {
3526 + dev_info(dev, "Not enough DPCONs, will go on as-is\n");
3527 + return NULL;
3528 + }
3529 +
3530 + err = dpcon_open(priv->mc_io, 0, dpcon->obj_desc.id, &dpcon->mc_handle);
3531 + if (err) {
3532 + dev_err(dev, "dpcon_open() failed\n");
3533 + goto err_open;
3534 + }
3535 +
3536 + err = dpcon_get_attributes(priv->mc_io, 0, dpcon->mc_handle, &attrs);
3537 + if (err) {
3538 + dev_err(dev, "dpcon_get_attributes() failed\n");
3539 + goto err_get_attr;
3540 + }
3541 +
3542 + err = check_obj_version(dpcon, attrs.version.major);
3543 + if (err)
3544 + goto err_dpcon_ver;
3545 +
3546 + err = dpcon_enable(priv->mc_io, 0, dpcon->mc_handle);
3547 + if (err) {
3548 + dev_err(dev, "dpcon_enable() failed\n");
3549 + goto err_enable;
3550 + }
3551 +
3552 + return dpcon;
3553 +
3554 +err_enable:
3555 +err_dpcon_ver:
3556 +err_get_attr:
3557 + dpcon_close(priv->mc_io, 0, dpcon->mc_handle);
3558 +err_open:
3559 + fsl_mc_object_free(dpcon);
3560 +
3561 + return NULL;
3562 +}
3563 +
3564 +static void dpaa2_dpcon_free(struct dpaa2_eth_priv *priv,
3565 + struct fsl_mc_device *dpcon)
3566 +{
3567 + dpcon_disable(priv->mc_io, 0, dpcon->mc_handle);
3568 + dpcon_close(priv->mc_io, 0, dpcon->mc_handle);
3569 + fsl_mc_object_free(dpcon);
3570 +}
3571 +
3572 +static struct dpaa2_eth_channel *
3573 +dpaa2_alloc_channel(struct dpaa2_eth_priv *priv)
3574 +{
3575 + struct dpaa2_eth_channel *channel;
3576 + struct dpcon_attr attr;
3577 + struct device *dev = priv->net_dev->dev.parent;
3578 + int err;
3579 +
3580 + channel = kzalloc(sizeof(*channel), GFP_ATOMIC);
3581 + if (!channel) {
3582 + dev_err(dev, "Memory allocation failed\n");
3583 + return NULL;
3584 + }
3585 +
3586 + channel->dpcon = dpaa2_dpcon_setup(priv);
3587 + if (!channel->dpcon)
3588 + goto err_setup;
3589 +
3590 + err = dpcon_get_attributes(priv->mc_io, 0, channel->dpcon->mc_handle,
3591 + &attr);
3592 + if (err) {
3593 + dev_err(dev, "dpcon_get_attributes() failed\n");
3594 + goto err_get_attr;
3595 + }
3596 +
3597 + channel->dpcon_id = attr.id;
3598 + channel->ch_id = attr.qbman_ch_id;
3599 + channel->priv = priv;
3600 +
3601 + return channel;
3602 +
3603 +err_get_attr:
3604 + dpaa2_dpcon_free(priv, channel->dpcon);
3605 +err_setup:
3606 + kfree(channel);
3607 + return NULL;
3608 +}
3609 +
3610 +static void dpaa2_free_channel(struct dpaa2_eth_priv *priv,
3611 + struct dpaa2_eth_channel *channel)
3612 +{
3613 + dpaa2_dpcon_free(priv, channel->dpcon);
3614 + kfree(channel);
3615 +}
3616 +
3617 +static int dpaa2_dpio_setup(struct dpaa2_eth_priv *priv)
3618 +{
3619 + struct dpaa2_io_notification_ctx *nctx;
3620 + struct dpaa2_eth_channel *channel;
3621 + struct dpcon_notification_cfg dpcon_notif_cfg;
3622 + struct device *dev = priv->net_dev->dev.parent;
3623 + int i, err;
3624 +
3625 + /* Don't allocate more channels than strictly necessary and assign
3626 + * them to cores starting from the first one available in
3627 + * cpu_online_mask.
3628 + * If the number of channels is lower than the number of cores,
3629 + * there will be no rx/tx conf processing on the last cores in the mask.
3630 + */
3631 + cpumask_clear(&priv->dpio_cpumask);
3632 + for_each_online_cpu(i) {
3633 + /* Try to allocate a channel */
3634 + channel = dpaa2_alloc_channel(priv);
3635 + if (!channel)
3636 + goto err_alloc_ch;
3637 +
3638 + priv->channel[priv->num_channels] = channel;
3639 +
3640 + nctx = &channel->nctx;
3641 + nctx->is_cdan = 1;
3642 + nctx->cb = dpaa2_eth_cdan_cb;
3643 + nctx->id = channel->ch_id;
3644 + nctx->desired_cpu = i;
3645 +
3646 + /* Register the new context */
3647 + err = dpaa2_io_service_register(NULL, nctx);
3648 + if (err) {
3649 + dev_info(dev, "No affine DPIO for core %d\n", i);
3650 + /* This core doesn't have an affine DPIO, but there's
3651 + * a chance another one does, so keep trying
3652 + */
3653 + dpaa2_free_channel(priv, channel);
3654 + continue;
3655 + }
3656 +
3657 + /* Register DPCON notification with MC */
3658 + dpcon_notif_cfg.dpio_id = nctx->dpio_id;
3659 + dpcon_notif_cfg.priority = 0;
3660 + dpcon_notif_cfg.user_ctx = nctx->qman64;
3661 + err = dpcon_set_notification(priv->mc_io, 0,
3662 + channel->dpcon->mc_handle,
3663 + &dpcon_notif_cfg);
3664 + if (err) {
3665 + dev_err(dev, "dpcon_set_notification() failed\n");
3666 + goto err_set_cdan;
3667 + }
3668 +
3669 + /* If we managed to allocate a channel and also found an affine
3670 + * DPIO for this core, add it to the final mask
3671 + */
3672 + cpumask_set_cpu(i, &priv->dpio_cpumask);
3673 + priv->num_channels++;
3674 +
3675 + if (priv->num_channels == dpaa2_max_channels(priv))
3676 + break;
3677 + }
3678 +
3679 + /* Tx confirmation queues can only be serviced by cpus
3680 + * with an affine DPIO/channel
3681 + */
3682 + cpumask_copy(&priv->txconf_cpumask, &priv->dpio_cpumask);
3683 +
3684 + return 0;
3685 +
3686 +err_set_cdan:
3687 + dpaa2_io_service_deregister(NULL, nctx);
3688 + dpaa2_free_channel(priv, channel);
3689 +err_alloc_ch:
3690 + if (cpumask_empty(&priv->dpio_cpumask)) {
3691 + dev_err(dev, "No cpu with an affine DPIO/DPCON\n");
3692 + return -ENODEV;
3693 + }
3694 + cpumask_copy(&priv->txconf_cpumask, &priv->dpio_cpumask);
3695 +
3696 + return 0;
3697 +}
3698 +
3699 +static void dpaa2_dpio_free(struct dpaa2_eth_priv *priv)
3700 +{
3701 + int i;
3702 + struct dpaa2_eth_channel *ch;
3703 +
3704 + /* deregister CDAN notifications and free channels */
3705 + for (i = 0; i < priv->num_channels; i++) {
3706 + ch = priv->channel[i];
3707 + dpaa2_io_service_deregister(NULL, &ch->nctx);
3708 + dpaa2_free_channel(priv, ch);
3709 + }
3710 +}
3711 +
3712 +static struct dpaa2_eth_channel *
3713 +dpaa2_get_channel_by_cpu(struct dpaa2_eth_priv *priv, int cpu)
3714 +{
3715 + struct device *dev = priv->net_dev->dev.parent;
3716 + int i;
3717 +
3718 + for (i = 0; i < priv->num_channels; i++)
3719 + if (priv->channel[i]->nctx.desired_cpu == cpu)
3720 + return priv->channel[i];
3721 +
3722 + /* We should never get here. Issue a warning and return
3723 + * the first channel, because it's still better than nothing
3724 + */
3725 + dev_warn(dev, "No affine channel found for cpu %d\n", cpu);
3726 +
3727 + return priv->channel[0];
3728 +}
3729 +
3730 +static void dpaa2_set_fq_affinity(struct dpaa2_eth_priv *priv)
3731 +{
3732 + struct device *dev = priv->net_dev->dev.parent;
3733 + struct dpaa2_eth_fq *fq;
3734 + int rx_cpu, txconf_cpu;
3735 + int i;
3736 +
3737 + /* For each FQ, pick one channel/CPU to deliver frames to.
3738 + * This may well change at runtime, either through irqbalance or
3739 + * through direct user intervention.
3740 + */
3741 + rx_cpu = cpumask_first(&priv->dpio_cpumask);
3742 + txconf_cpu = cpumask_first(&priv->txconf_cpumask);
3743 +
3744 + for (i = 0; i < priv->num_fqs; i++) {
3745 + fq = &priv->fq[i];
3746 + switch (fq->type) {
3747 + case DPAA2_RX_FQ:
3748 + case DPAA2_RX_ERR_FQ:
3749 + fq->target_cpu = rx_cpu;
3750 + cpumask_rr(rx_cpu, &priv->dpio_cpumask);
3751 + break;
3752 + case DPAA2_TX_CONF_FQ:
3753 + fq->target_cpu = txconf_cpu;
3754 + cpumask_rr(txconf_cpu, &priv->txconf_cpumask);
3755 + break;
3756 + default:
3757 + dev_err(dev, "Unknown FQ type: %d\n", fq->type);
3758 + }
3759 + fq->channel = dpaa2_get_channel_by_cpu(priv, fq->target_cpu);
3760 + }
3761 +}
3762 +
3763 +static int dpaa2_dpbp_setup(struct dpaa2_eth_priv *priv)
3764 +{
3765 + int err;
3766 + struct fsl_mc_device *dpbp_dev;
3767 + struct device *dev = priv->net_dev->dev.parent;
3768 +
3769 + err = fsl_mc_object_allocate(to_fsl_mc_device(dev), FSL_MC_POOL_DPBP,
3770 + &dpbp_dev);
3771 + if (err) {
3772 + dev_err(dev, "DPBP device allocation failed\n");
3773 + return err;
3774 + }
3775 +
3776 + priv->dpbp_dev = dpbp_dev;
3777 +
3778 + err = dpbp_open(priv->mc_io, 0, priv->dpbp_dev->obj_desc.id,
3779 + &dpbp_dev->mc_handle);
3780 + if (err) {
3781 + dev_err(dev, "dpbp_open() failed\n");
3782 + goto err_open;
3783 + }
3784 +
3785 + err = dpbp_enable(priv->mc_io, 0, dpbp_dev->mc_handle);
3786 + if (err) {
3787 + dev_err(dev, "dpbp_enable() failed\n");
3788 + goto err_enable;
3789 + }
3790 +
3791 + err = dpbp_get_attributes(priv->mc_io, 0, dpbp_dev->mc_handle,
3792 + &priv->dpbp_attrs);
3793 + if (err) {
3794 + dev_err(dev, "dpbp_get_attributes() failed\n");
3795 + goto err_get_attr;
3796 + }
3797 +
3798 + err = check_obj_version(dpbp_dev, priv->dpbp_attrs.version.major);
3799 + if (err)
3800 + goto err_dpbp_ver;
3801 +
3802 + return 0;
3803 +
3804 +err_dpbp_ver:
3805 +err_get_attr:
3806 + dpbp_disable(priv->mc_io, 0, dpbp_dev->mc_handle);
3807 +err_enable:
3808 + dpbp_close(priv->mc_io, 0, dpbp_dev->mc_handle);
3809 +err_open:
3810 + fsl_mc_object_free(dpbp_dev);
3811 +
3812 + return err;
3813 +}
3814 +
3815 +static void dpaa2_dpbp_free(struct dpaa2_eth_priv *priv)
3816 +{
3817 + __dpaa2_dpbp_free(priv);
3818 + dpbp_disable(priv->mc_io, 0, priv->dpbp_dev->mc_handle);
3819 + dpbp_close(priv->mc_io, 0, priv->dpbp_dev->mc_handle);
3820 + fsl_mc_object_free(priv->dpbp_dev);
3821 +}
3822 +
3823 +static int dpaa2_dpni_setup(struct fsl_mc_device *ls_dev)
3824 +{
3825 + struct device *dev = &ls_dev->dev;
3826 + struct dpaa2_eth_priv *priv;
3827 + struct net_device *net_dev;
3828 + void *dma_mem;
3829 + int err;
3830 +
3831 + net_dev = dev_get_drvdata(dev);
3832 + priv = netdev_priv(net_dev);
3833 +
3834 + priv->dpni_id = ls_dev->obj_desc.id;
3835 +
3836 + /* and get a handle for the DPNI this interface is associated with */
3837 + err = dpni_open(priv->mc_io, 0, priv->dpni_id, &priv->mc_token);
3838 + if (err) {
3839 + dev_err(dev, "dpni_open() failed\n");
3840 + goto err_open;
3841 + }
3842 +
3843 + ls_dev->mc_io = priv->mc_io;
3844 + ls_dev->mc_handle = priv->mc_token;
3845 +
3846 + dma_mem = kzalloc(DPAA2_EXT_CFG_SIZE, GFP_DMA | GFP_KERNEL);
3847 + if (!dma_mem)
3848 + goto err_alloc;
3849 +
3850 + priv->dpni_attrs.ext_cfg_iova = dma_map_single(dev, dma_mem,
3851 + DPAA2_EXT_CFG_SIZE,
3852 + DMA_FROM_DEVICE);
3853 + if (dma_mapping_error(dev, priv->dpni_attrs.ext_cfg_iova)) {
3854 + dev_err(dev, "dma mapping for dpni_ext_cfg failed\n");
3855 + goto err_dma_map;
3856 + }
3857 +
3858 + err = dpni_get_attributes(priv->mc_io, 0, priv->mc_token,
3859 + &priv->dpni_attrs);
3860 + if (err) {
3861 + dev_err(dev, "dpni_get_attributes() failed (err=%d)\n", err);
3862 + dma_unmap_single(dev, priv->dpni_attrs.ext_cfg_iova,
3863 + DPAA2_EXT_CFG_SIZE, DMA_FROM_DEVICE);
3864 + goto err_get_attr;
3865 + }
3866 +
3867 + err = check_obj_version(ls_dev, priv->dpni_attrs.version.major);
3868 + if (err)
3869 + goto err_dpni_ver;
3870 +
3871 + dma_unmap_single(dev, priv->dpni_attrs.ext_cfg_iova,
3872 + DPAA2_EXT_CFG_SIZE, DMA_FROM_DEVICE);
3873 +
3874 + memset(&priv->dpni_ext_cfg, 0, sizeof(priv->dpni_ext_cfg));
3875 + err = dpni_extract_extended_cfg(&priv->dpni_ext_cfg, dma_mem);
3876 + if (err) {
3877 + dev_err(dev, "dpni_extract_extended_cfg() failed\n");
3878 + goto err_extract;
3879 + }
3880 +
3881 + /* Configure our buffers' layout */
3882 + priv->buf_layout.options = DPNI_BUF_LAYOUT_OPT_PARSER_RESULT |
3883 + DPNI_BUF_LAYOUT_OPT_FRAME_STATUS |
3884 + DPNI_BUF_LAYOUT_OPT_PRIVATE_DATA_SIZE |
3885 + DPNI_BUF_LAYOUT_OPT_DATA_ALIGN;
3886 + priv->buf_layout.pass_parser_result = true;
3887 + priv->buf_layout.pass_frame_status = true;
3888 + priv->buf_layout.private_data_size = DPAA2_ETH_SWA_SIZE;
3889 + /* HW erratum mandates data alignment in multiples of 256 */
3890 + priv->buf_layout.data_align = DPAA2_ETH_RX_BUF_ALIGN;
3891 + /* ...rx, ... */
3892 + err = dpni_set_rx_buffer_layout(priv->mc_io, 0, priv->mc_token,
3893 + &priv->buf_layout);
3894 + if (err) {
3895 + dev_err(dev, "dpni_set_rx_buffer_layout() failed");
3896 + goto err_buf_layout;
3897 + }
3898 + /* ... tx, ... */
3899 + /* remove Rx-only options */
3900 + priv->buf_layout.options &= ~(DPNI_BUF_LAYOUT_OPT_DATA_ALIGN |
3901 + DPNI_BUF_LAYOUT_OPT_PARSER_RESULT);
3902 + err = dpni_set_tx_buffer_layout(priv->mc_io, 0, priv->mc_token,
3903 + &priv->buf_layout);
3904 + if (err) {
3905 + dev_err(dev, "dpni_set_tx_buffer_layout() failed");
3906 + goto err_buf_layout;
3907 + }
3908 + /* ... tx-confirm. */
3909 + priv->buf_layout.options &= ~DPNI_BUF_LAYOUT_OPT_PRIVATE_DATA_SIZE;
3910 + priv->buf_layout.options |= DPNI_BUF_LAYOUT_OPT_TIMESTAMP;
3911 + priv->buf_layout.pass_timestamp = 1;
3912 + err = dpni_set_tx_conf_buffer_layout(priv->mc_io, 0, priv->mc_token,
3913 + &priv->buf_layout);
3914 + if (err) {
3915 + dev_err(dev, "dpni_set_tx_conf_buffer_layout() failed");
3916 + goto err_buf_layout;
3917 + }
3918 + /* Now that we've set our tx buffer layout, retrieve the minimum
3919 + * required tx data offset.
3920 + */
3921 + err = dpni_get_tx_data_offset(priv->mc_io, 0, priv->mc_token,
3922 + &priv->tx_data_offset);
3923 + if (err) {
3924 + dev_err(dev, "dpni_get_tx_data_offset() failed\n");
3925 + goto err_data_offset;
3926 + }
3927 +
3928 + /* Warn in case TX data offset is not a multiple of 64 bytes. */
3929 + WARN_ON(priv->tx_data_offset % 64);
3930 +
3931 + /* Accommodate SWA space. */
3932 + priv->tx_data_offset += DPAA2_ETH_SWA_SIZE;
3933 +
3934 + /* allocate classification rule space */
3935 + priv->cls_rule = kzalloc(sizeof(*priv->cls_rule) *
3936 + DPAA2_CLASSIFIER_ENTRY_COUNT, GFP_KERNEL);
3937 + if (!priv->cls_rule)
3938 + goto err_cls_rule;
3939 +
3940 + kfree(dma_mem);
3941 +
3942 + return 0;
3943 +
3944 +err_cls_rule:
3945 +err_data_offset:
3946 +err_buf_layout:
3947 +err_extract:
3948 +err_dpni_ver:
3949 +err_get_attr:
3950 +err_dma_map:
3951 + kfree(dma_mem);
3952 +err_alloc:
3953 + dpni_close(priv->mc_io, 0, priv->mc_token);
3954 +err_open:
3955 + return err;
3956 +}
3957 +
3958 +static void dpaa2_dpni_free(struct dpaa2_eth_priv *priv)
3959 +{
3960 + int err;
3961 +
3962 + err = dpni_reset(priv->mc_io, 0, priv->mc_token);
3963 + if (err)
3964 + netdev_warn(priv->net_dev, "dpni_reset() failed (err %d)\n",
3965 + err);
3966 +
3967 + dpni_close(priv->mc_io, 0, priv->mc_token);
3968 +}
3969 +
3970 +static int dpaa2_rx_flow_setup(struct dpaa2_eth_priv *priv,
3971 + struct dpaa2_eth_fq *fq)
3972 +{
3973 + struct device *dev = priv->net_dev->dev.parent;
3974 + struct dpni_queue_attr rx_queue_attr;
3975 + struct dpni_queue_cfg queue_cfg;
3976 + int err;
3977 +
3978 + memset(&queue_cfg, 0, sizeof(queue_cfg));
3979 + queue_cfg.options = DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST |
3980 + DPNI_QUEUE_OPT_TAILDROP_THRESHOLD;
3981 + queue_cfg.dest_cfg.dest_type = DPNI_DEST_DPCON;
3982 + queue_cfg.dest_cfg.priority = 1;
3983 + queue_cfg.user_ctx = (u64)fq;
3984 + queue_cfg.dest_cfg.dest_id = fq->channel->dpcon_id;
3985 + queue_cfg.tail_drop_threshold = DPAA2_ETH_TAILDROP_THRESH;
3986 + err = dpni_set_rx_flow(priv->mc_io, 0, priv->mc_token, 0, fq->flowid,
3987 + &queue_cfg);
3988 + if (err) {
3989 + dev_err(dev, "dpni_set_rx_flow() failed\n");
3990 + return err;
3991 + }
3992 +
3993 + /* Get the actual FQID that was assigned by MC */
3994 + err = dpni_get_rx_flow(priv->mc_io, 0, priv->mc_token, 0, fq->flowid,
3995 + &rx_queue_attr);
3996 + if (err) {
3997 + dev_err(dev, "dpni_get_rx_flow() failed\n");
3998 + return err;
3999 + }
4000 + fq->fqid = rx_queue_attr.fqid;
4001 +
4002 + return 0;
4003 +}
4004 +
4005 +static int dpaa2_tx_flow_setup(struct dpaa2_eth_priv *priv,
4006 + struct dpaa2_eth_fq *fq)
4007 +{
4008 + struct device *dev = priv->net_dev->dev.parent;
4009 + struct dpni_tx_flow_cfg tx_flow_cfg;
4010 + struct dpni_tx_conf_cfg tx_conf_cfg;
4011 + struct dpni_tx_conf_attr tx_conf_attr;
4012 + int err;
4013 +
4014 + memset(&tx_flow_cfg, 0, sizeof(tx_flow_cfg));
4015 + tx_flow_cfg.options = DPNI_TX_FLOW_OPT_TX_CONF_ERROR;
4016 + tx_flow_cfg.use_common_tx_conf_queue = 0;
4017 + err = dpni_set_tx_flow(priv->mc_io, 0, priv->mc_token,
4018 + &fq->flowid, &tx_flow_cfg);
4019 + if (err) {
4020 + dev_err(dev, "dpni_set_tx_flow() failed\n");
4021 + return err;
4022 + }
4023 +
4024 + tx_conf_cfg.errors_only = 0;
4025 + tx_conf_cfg.queue_cfg.options = DPNI_QUEUE_OPT_USER_CTX |
4026 + DPNI_QUEUE_OPT_DEST;
4027 + tx_conf_cfg.queue_cfg.user_ctx = (u64)fq;
4028 + tx_conf_cfg.queue_cfg.dest_cfg.dest_type = DPNI_DEST_DPCON;
4029 + tx_conf_cfg.queue_cfg.dest_cfg.dest_id = fq->channel->dpcon_id;
4030 + tx_conf_cfg.queue_cfg.dest_cfg.priority = 0;
4031 +
4032 + err = dpni_set_tx_conf(priv->mc_io, 0, priv->mc_token, fq->flowid,
4033 + &tx_conf_cfg);
4034 + if (err) {
4035 + dev_err(dev, "dpni_set_tx_conf() failed\n");
4036 + return err;
4037 + }
4038 +
4039 + err = dpni_get_tx_conf(priv->mc_io, 0, priv->mc_token, fq->flowid,
4040 + &tx_conf_attr);
4041 + if (err) {
4042 + dev_err(dev, "dpni_get_tx_conf() failed\n");
4043 + return err;
4044 + }
4045 +
4046 + fq->fqid = tx_conf_attr.queue_attr.fqid;
4047 +
4048 + return 0;
4049 +}
4050 +
4051 +#ifdef CONFIG_FSL_DPAA2_ETH_USE_ERR_QUEUE
4052 +static int dpaa2_rx_err_setup(struct dpaa2_eth_priv *priv,
4053 + struct dpaa2_eth_fq *fq)
4054 +{
4055 + struct dpni_queue_attr queue_attr;
4056 + struct dpni_queue_cfg queue_cfg;
4057 + int err;
4058 +
4059 + /* Configure the Rx error queue to generate CDANs,
4060 + * just like the Rx queues */
4061 + queue_cfg.options = DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST;
4062 + queue_cfg.dest_cfg.dest_type = DPNI_DEST_DPCON;
4063 + queue_cfg.dest_cfg.priority = 1;
4064 + queue_cfg.user_ctx = (u64)fq;
4065 + queue_cfg.dest_cfg.dest_id = fq->channel->dpcon_id;
4066 + err = dpni_set_rx_err_queue(priv->mc_io, 0, priv->mc_token, &queue_cfg);
4067 + if (err) {
4068 + netdev_err(priv->net_dev, "dpni_set_rx_err_queue() failed\n");
4069 + return err;
4070 + }
4071 +
4072 + /* Get the FQID */
4073 + err = dpni_get_rx_err_queue(priv->mc_io, 0, priv->mc_token, &queue_attr);
4074 + if (err) {
4075 + netdev_err(priv->net_dev, "dpni_get_rx_err_queue() failed\n");
4076 + return err;
4077 + }
4078 + fq->fqid = queue_attr.fqid;
4079 +
4080 + return 0;
4081 +}
4082 +#endif
4083 +
4084 +static int dpaa2_dpni_bind(struct dpaa2_eth_priv *priv)
4085 +{
4086 + struct net_device *net_dev = priv->net_dev;
4087 + struct device *dev = net_dev->dev.parent;
4088 + struct dpni_pools_cfg pools_params;
4089 + struct dpni_error_cfg err_cfg;
4090 + int err = 0;
4091 + int i;
4092 +
4093 + pools_params.num_dpbp = 1;
4094 + pools_params.pools[0].dpbp_id = priv->dpbp_dev->obj_desc.id;
4095 + pools_params.pools[0].backup_pool = 0;
4096 + pools_params.pools[0].buffer_size = DPAA2_ETH_RX_BUFFER_SIZE;
4097 + err = dpni_set_pools(priv->mc_io, 0, priv->mc_token, &pools_params);
4098 + if (err) {
4099 + dev_err(dev, "dpni_set_pools() failed\n");
4100 + return err;
4101 + }
4102 +
4103 + dpaa2_cls_check(net_dev);
4104 +
4105 + /* have the interface implicitly distribute traffic based on supported
4106 + * header fields
4107 + */
4108 + if (dpaa2_eth_hash_enabled(priv)) {
4109 + err = dpaa2_set_hash(net_dev, DPAA2_RXH_SUPPORTED);
4110 + if (err)
4111 + return err;
4112 + }
4113 +
4114 + /* Configure handling of error frames */
4115 + err_cfg.errors = DPAA2_ETH_RX_ERR_MASK;
4116 + err_cfg.set_frame_annotation = 1;
4117 +#ifdef CONFIG_FSL_DPAA2_ETH_USE_ERR_QUEUE
4118 + err_cfg.error_action = DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE;
4119 +#else
4120 + err_cfg.error_action = DPNI_ERROR_ACTION_DISCARD;
4121 +#endif
4122 + err = dpni_set_errors_behavior(priv->mc_io, 0, priv->mc_token,
4123 + &err_cfg);
4124 + if (err) {
4125 + dev_err(dev, "dpni_set_errors_behavior() failed\n");
4126 + return err;
4127 + }
4128 +
4129 + /* Configure Rx and Tx conf queues to generate CDANs */
4130 + for (i = 0; i < priv->num_fqs; i++) {
4131 + switch (priv->fq[i].type) {
4132 + case DPAA2_RX_FQ:
4133 + err = dpaa2_rx_flow_setup(priv, &priv->fq[i]);
4134 + break;
4135 + case DPAA2_TX_CONF_FQ:
4136 + err = dpaa2_tx_flow_setup(priv, &priv->fq[i]);
4137 + break;
4138 +#ifdef CONFIG_FSL_DPAA2_ETH_USE_ERR_QUEUE
4139 + case DPAA2_RX_ERR_FQ:
4140 + err = dpaa2_rx_err_setup(priv, &priv->fq[i]);
4141 + break;
4142 +#endif
4143 + default:
4144 + dev_err(dev, "Invalid FQ type %d\n", priv->fq[i].type);
4145 + return -EINVAL;
4146 + }
4147 + if (err)
4148 + return err;
4149 + }
4150 +
4151 + err = dpni_get_qdid(priv->mc_io, 0, priv->mc_token, &priv->tx_qdid);
4152 + if (err) {
4153 + dev_err(dev, "dpni_get_qdid() failed\n");
4154 + return err;
4155 + }
4156 +
4157 + return 0;
4158 +}
4159 +
4160 +static int dpaa2_eth_alloc_rings(struct dpaa2_eth_priv *priv)
4161 +{
4162 + struct net_device *net_dev = priv->net_dev;
4163 + struct device *dev = net_dev->dev.parent;
4164 + int i;
4165 +
4166 + for (i = 0; i < priv->num_channels; i++) {
4167 + priv->channel[i]->store =
4168 + dpaa2_io_store_create(DPAA2_ETH_STORE_SIZE, dev);
4169 + if (!priv->channel[i]->store) {
4170 + netdev_err(net_dev, "dpaa2_io_store_create() failed\n");
4171 + goto err_ring;
4172 + }
4173 + }
4174 +
4175 + return 0;
4176 +
4177 +err_ring:
4178 + for (i = 0; i < priv->num_channels; i++) {
4179 + if (!priv->channel[i]->store)
4180 + break;
4181 + dpaa2_io_store_destroy(priv->channel[i]->store);
4182 + }
4183 +
4184 + return -ENOMEM;
4185 +}
4186 +
4187 +static void dpaa2_eth_free_rings(struct dpaa2_eth_priv *priv)
4188 +{
4189 + int i;
4190 +
4191 + for (i = 0; i < priv->num_channels; i++)
4192 + dpaa2_io_store_destroy(priv->channel[i]->store);
4193 +}
4194 +
4195 +static int dpaa2_eth_netdev_init(struct net_device *net_dev)
4196 +{
4197 + int err;
4198 + struct device *dev = net_dev->dev.parent;
4199 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
4200 + u8 mac_addr[ETH_ALEN];
4201 + u8 bcast_addr[ETH_ALEN];
4202 +
4203 + net_dev->netdev_ops = &dpaa2_eth_ops;
4204 +
4205 + /* If the DPL contains all-0 mac_addr, set a random hardware address */
4206 + err = dpni_get_primary_mac_addr(priv->mc_io, 0, priv->mc_token,
4207 + mac_addr);
4208 + if (err) {
4209 + dev_err(dev, "dpni_get_primary_mac_addr() failed (%d)", err);
4210 + return err;
4211 + }
4212 + if (is_zero_ether_addr(mac_addr)) {
4213 + /* Fills in net_dev->dev_addr, as required by
4214 + * register_netdevice()
4215 + */
4216 + eth_hw_addr_random(net_dev);
4217 + /* Make the user aware, without cluttering the boot log */
4218 + pr_info_once(KBUILD_MODNAME " device(s) have all-zero hwaddr, replaced with random");
4219 + err = dpni_set_primary_mac_addr(priv->mc_io, 0, priv->mc_token,
4220 + net_dev->dev_addr);
4221 + if (err) {
4222 + dev_err(dev, "dpni_set_primary_mac_addr(): %d\n", err);
4223 + return err;
4224 + }
4225 + /* Override NET_ADDR_RANDOM set by eth_hw_addr_random(); for all
4226 + * practical purposes, this will be our "permanent" mac address,
4227 + * at least until the next reboot. This move will also permit
4228 + * register_netdevice() to properly fill up net_dev->perm_addr.
4229 + */
4230 + net_dev->addr_assign_type = NET_ADDR_PERM;
4231 + } else {
4232 + /* NET_ADDR_PERM is default, all we have to do is
4233 + * fill in the device addr.
4234 + */
4235 + memcpy(net_dev->dev_addr, mac_addr, net_dev->addr_len);
4236 + }
4237 +
4238 + /* Explicitly add the broadcast address to the MAC filtering table;
4239 + * the MC won't do that for us.
4240 + */
4241 + eth_broadcast_addr(bcast_addr);
4242 + err = dpni_add_mac_addr(priv->mc_io, 0, priv->mc_token, bcast_addr);
4243 + if (err) {
4244 + dev_warn(dev, "dpni_add_mac_addr() failed (%d)\n", err);
4245 + /* Won't return an error; at least, we'd have egress traffic */
4246 + }
4247 +
4248 + /* Reserve enough space to align buffer as per hardware requirement;
4249 + * NOTE: priv->tx_data_offset MUST be initialized at this point.
4250 + */
4251 + net_dev->needed_headroom = DPAA2_ETH_NEEDED_HEADROOM(priv);
4252 +
4253 + /* Our .ndo_init will be called herein */
4254 + err = register_netdev(net_dev);
4255 + if (err < 0) {
4256 + dev_err(dev, "register_netdev() = %d\n", err);
4257 + return err;
4258 + }
4259 +
4260 + return 0;
4261 +}
4262 +
4263 +#ifdef CONFIG_FSL_DPAA2_ETH_LINK_POLL
4264 +static int dpaa2_poll_link_state(void *arg)
4265 +{
4266 + struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)arg;
4267 + int err;
4268 +
4269 + while (!kthread_should_stop()) {
4270 + err = dpaa2_link_state_update(priv);
4271 + if (unlikely(err))
4272 + return err;
4273 +
4274 + msleep(DPAA2_ETH_LINK_STATE_REFRESH);
4275 + }
4276 +
4277 + return 0;
4278 +}
4279 +#else
4280 +static irqreturn_t dpni_irq0_handler(int irq_num, void *arg)
4281 +{
4282 + return IRQ_WAKE_THREAD;
4283 +}
4284 +
4285 +static irqreturn_t dpni_irq0_handler_thread(int irq_num, void *arg)
4286 +{
4287 + u8 irq_index = DPNI_IRQ_INDEX;
4288 + u32 status, clear = 0;
4289 + struct device *dev = (struct device *)arg;
4290 + struct fsl_mc_device *dpni_dev = to_fsl_mc_device(dev);
4291 + struct net_device *net_dev = dev_get_drvdata(dev);
4292 + int err;
4293 +
4294 + netdev_dbg(net_dev, "IRQ %d received\n", irq_num);
4295 + err = dpni_get_irq_status(dpni_dev->mc_io, 0, dpni_dev->mc_handle,
4296 + irq_index, &status);
4297 + if (unlikely(err)) {
4298 + netdev_err(net_dev, "Can't get irq status (err %d)", err);
4299 + clear = 0xffffffff;
4300 + goto out;
4301 + }
4302 +
4303 + if (status & DPNI_IRQ_EVENT_LINK_CHANGED) {
4304 + clear |= DPNI_IRQ_EVENT_LINK_CHANGED;
4305 + dpaa2_link_state_update(netdev_priv(net_dev));
4306 + }
4307 +
4308 +out:
4309 + dpni_clear_irq_status(dpni_dev->mc_io, 0, dpni_dev->mc_handle,
4310 + irq_index, clear);
4311 + return IRQ_HANDLED;
4312 +}
4313 +
4314 +static int dpaa2_eth_setup_irqs(struct fsl_mc_device *ls_dev)
4315 +{
4316 + int err = 0;
4317 + struct fsl_mc_device_irq *irq;
4318 + int irq_count = ls_dev->obj_desc.irq_count;
4319 + u8 irq_index = DPNI_IRQ_INDEX;
4320 + u32 mask = DPNI_IRQ_EVENT_LINK_CHANGED;
4321 +
4322 + /* The only interrupt supported now is the link state notification. */
4323 + if (WARN_ON(irq_count != 1))
4324 + return -EINVAL;
4325 +
4326 + irq = ls_dev->irqs[0];
4327 + err = devm_request_threaded_irq(&ls_dev->dev, irq->msi_desc->irq,
4328 + dpni_irq0_handler,
4329 + dpni_irq0_handler_thread,
4330 + IRQF_NO_SUSPEND | IRQF_ONESHOT,
4331 + dev_name(&ls_dev->dev), &ls_dev->dev);
4332 + if (err < 0) {
4333 + dev_err(&ls_dev->dev, "devm_request_threaded_irq(): %d", err);
4334 + return err;
4335 + }
4336 +
4337 + err = dpni_set_irq_mask(ls_dev->mc_io, 0, ls_dev->mc_handle,
4338 + irq_index, mask);
4339 + if (err < 0) {
4340 + dev_err(&ls_dev->dev, "dpni_set_irq_mask(): %d", err);
4341 + return err;
4342 + }
4343 +
4344 + err = dpni_set_irq_enable(ls_dev->mc_io, 0, ls_dev->mc_handle,
4345 + irq_index, 1);
4346 + if (err < 0) {
4347 + dev_err(&ls_dev->dev, "dpni_set_irq_enable(): %d", err);
4348 + return err;
4349 + }
4350 +
4351 + return 0;
4352 +}
4353 +#endif
4354 +
4355 +static void dpaa2_eth_napi_add(struct dpaa2_eth_priv *priv)
4356 +{
4357 + int i;
4358 + struct dpaa2_eth_channel *ch;
4359 +
4360 + for (i = 0; i < priv->num_channels; i++) {
4361 + ch = priv->channel[i];
4362 + /* NAPI weight *MUST* be a multiple of DPAA2_ETH_STORE_SIZE */
4363 + netif_napi_add(priv->net_dev, &ch->napi, dpaa2_eth_poll,
4364 + NAPI_POLL_WEIGHT);
4365 + }
4366 +}
4367 +
4368 +static void dpaa2_eth_napi_del(struct dpaa2_eth_priv *priv)
4369 +{
4370 + int i;
4371 + struct dpaa2_eth_channel *ch;
4372 +
4373 + for (i = 0; i < priv->num_channels; i++) {
4374 + ch = priv->channel[i];
4375 + netif_napi_del(&ch->napi);
4376 + }
4377 +}
4378 +
4379 +/* SysFS support */
4380 +
4381 +static ssize_t dpaa2_eth_show_tx_shaping(struct device *dev,
4382 + struct device_attribute *attr,
4383 + char *buf)
4384 +{
4385 + struct dpaa2_eth_priv *priv = netdev_priv(to_net_dev(dev));
4386 + /* No MC API for getting the shaping config. We're stateful. */
4387 + struct dpni_tx_shaping_cfg *scfg = &priv->shaping_cfg;
4388 +
4389 + return sprintf(buf, "%u %hu\n", scfg->rate_limit, scfg->max_burst_size);
4390 +}
4391 +
4392 +static ssize_t dpaa2_eth_write_tx_shaping(struct device *dev,
4393 + struct device_attribute *attr,
4394 + const char *buf,
4395 + size_t count)
4396 +{
4397 + int err, items;
4398 + struct dpaa2_eth_priv *priv = netdev_priv(to_net_dev(dev));
4399 + struct dpni_tx_shaping_cfg scfg;
4400 +
4401 + items = sscanf(buf, "%u %hu", &scfg.rate_limit, &scfg.max_burst_size);
4402 + if (items != 2) {
4403 + pr_err("Expected format: \"rate_limit(Mbps) max_burst_size(bytes)\"\n");
4404 + return -EINVAL;
4405 + }
4406 + /* Size restriction as per MC API documentation */
4407 + if (scfg.max_burst_size > 64000) {
4408 + pr_err("max_burst_size must be <= 64000\n");
4409 + return -EINVAL;
4410 + }
4411 +
4412 + err = dpni_set_tx_shaping(priv->mc_io, 0, priv->mc_token, &scfg);
4413 + if (err) {
4414 + dev_err(dev, "dpni_set_tx_shaping() failed\n");
4415 + return -EPERM;
4416 + }
4417 + /* If successful, save the current configuration for future inquiries */
4418 + priv->shaping_cfg = scfg;
4419 +
4420 + return count;
4421 +}
4422 +
4423 +static ssize_t dpaa2_eth_show_txconf_cpumask(struct device *dev,
4424 + struct device_attribute *attr,
4425 + char *buf)
4426 +{
4427 + struct dpaa2_eth_priv *priv = netdev_priv(to_net_dev(dev));
4428 +
4429 + return cpumap_print_to_pagebuf(1, buf, &priv->txconf_cpumask);
4430 +}
4431 +
4432 +static ssize_t dpaa2_eth_write_txconf_cpumask(struct device *dev,
4433 + struct device_attribute *attr,
4434 + const char *buf,
4435 + size_t count)
4436 +{
4437 + struct dpaa2_eth_priv *priv = netdev_priv(to_net_dev(dev));
4438 + struct dpaa2_eth_fq *fq;
4439 + bool running = netif_running(priv->net_dev);
4440 + int i, err;
4441 +
4442 + err = cpulist_parse(buf, &priv->txconf_cpumask);
4443 + if (err)
4444 + return err;
4445 +
4446 + /* Only accept CPUs that have an affine DPIO */
4447 + if (!cpumask_subset(&priv->txconf_cpumask, &priv->dpio_cpumask)) {
4448 + netdev_info(priv->net_dev,
4449 + "cpumask must be a subset of 0x%lx\n",
4450 + *cpumask_bits(&priv->dpio_cpumask));
4451 + cpumask_and(&priv->txconf_cpumask, &priv->dpio_cpumask,
4452 + &priv->txconf_cpumask);
4453 + }
4454 +
4455 + /* Rewiring the TxConf FQs requires interface shutdown.
4456 + */
4457 + if (running) {
4458 + err = dpaa2_eth_stop(priv->net_dev);
4459 + if (err)
4460 + return -ENODEV;
4461 + }
4462 +
4463 + /* Set the new TxConf FQ affinities */
4464 + dpaa2_set_fq_affinity(priv);
4465 +
4466 +#ifdef CONFIG_FSL_DPAA2_ETH_LINK_POLL
4467 + /* dpaa2_eth_open() below will *stop* the Tx queues until an explicit
4468 + * link up notification is received. Give the polling thread enough time
4469 + * to detect the link state change, or else we'll end up with the
4470 + * transmission side forever shut down.
4471 + */
4472 + msleep(2 * DPAA2_ETH_LINK_STATE_REFRESH);
4473 +#endif
4474 +
4475 + for (i = 0; i < priv->num_fqs; i++) {
4476 + fq = &priv->fq[i];
4477 + if (fq->type != DPAA2_TX_CONF_FQ)
4478 + continue;
4479 + dpaa2_tx_flow_setup(priv, fq);
4480 + }
4481 +
4482 + if (running) {
4483 + err = dpaa2_eth_open(priv->net_dev);
4484 + if (err)
4485 + return -ENODEV;
4486 + }
4487 +
4488 + return count;
4489 +}
4490 +
4491 +static struct device_attribute dpaa2_eth_attrs[] = {
4492 + __ATTR(txconf_cpumask,
4493 + S_IRUSR | S_IWUSR,
4494 + dpaa2_eth_show_txconf_cpumask,
4495 + dpaa2_eth_write_txconf_cpumask),
4496 +
4497 + __ATTR(tx_shaping,
4498 + S_IRUSR | S_IWUSR,
4499 + dpaa2_eth_show_tx_shaping,
4500 + dpaa2_eth_write_tx_shaping),
4501 +};
4502 +
4503 +void dpaa2_eth_sysfs_init(struct device *dev)
4504 +{
4505 + int i, err;
4506 +
4507 + for (i = 0; i < ARRAY_SIZE(dpaa2_eth_attrs); i++) {
4508 + err = device_create_file(dev, &dpaa2_eth_attrs[i]);
4509 + if (err) {
4510 + dev_err(dev, "ERROR creating sysfs file\n");
4511 + goto undo;
4512 + }
4513 + }
4514 + return;
4515 +
4516 +undo:
4517 + while (i > 0)
4518 + device_remove_file(dev, &dpaa2_eth_attrs[--i]);
4519 +}
4520 +
4521 +void dpaa2_eth_sysfs_remove(struct device *dev)
4522 +{
4523 + int i;
4524 +
4525 + for (i = 0; i < ARRAY_SIZE(dpaa2_eth_attrs); i++)
4526 + device_remove_file(dev, &dpaa2_eth_attrs[i]);
4527 +}
4528 +
4529 +static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev)
4530 +{
4531 + struct device *dev;
4532 + struct net_device *net_dev = NULL;
4533 + struct dpaa2_eth_priv *priv = NULL;
4534 + int err = 0;
4535 +
4536 + dev = &dpni_dev->dev;
4537 +
4538 + /* Net device */
4539 + net_dev = alloc_etherdev_mq(sizeof(*priv), DPAA2_ETH_MAX_TX_QUEUES);
4540 + if (!net_dev) {
4541 + dev_err(dev, "alloc_etherdev_mq() failed\n");
4542 + return -ENOMEM;
4543 + }
4544 +
4545 + SET_NETDEV_DEV(net_dev, dev);
4546 + dev_set_drvdata(dev, net_dev);
4547 +
4548 + priv = netdev_priv(net_dev);
4549 + priv->net_dev = net_dev;
4550 + priv->msg_enable = netif_msg_init(debug, -1);
4551 +
4552 + /* Obtain a MC portal */
4553 + err = fsl_mc_portal_allocate(dpni_dev, FSL_MC_IO_ATOMIC_CONTEXT_PORTAL,
4554 + &priv->mc_io);
4555 + if (err) {
4556 + dev_err(dev, "MC portal allocation failed\n");
4557 + goto err_portal_alloc;
4558 + }
4559 +
4560 +#ifndef CONFIG_FSL_DPAA2_ETH_LINK_POLL
4561 + err = fsl_mc_allocate_irqs(dpni_dev);
4562 + if (err) {
4563 + dev_err(dev, "MC irqs allocation failed\n");
4564 + goto err_irqs_alloc;
4565 + }
4566 +#endif
4567 +
4568 + /* DPNI initialization */
4569 + err = dpaa2_dpni_setup(dpni_dev);
4570 + if (err < 0)
4571 + goto err_dpni_setup;
4572 +
4573 + /* DPIO */
4574 + err = dpaa2_dpio_setup(priv);
4575 + if (err)
4576 + goto err_dpio_setup;
4577 +
4578 + /* FQs */
4579 + dpaa2_eth_setup_fqs(priv);
4580 + dpaa2_set_fq_affinity(priv);
4581 +
4582 + /* DPBP */
4583 + err = dpaa2_dpbp_setup(priv);
4584 + if (err)
4585 + goto err_dpbp_setup;
4586 +
4587 + /* DPNI binding to DPIO and DPBPs */
4588 + err = dpaa2_dpni_bind(priv);
4589 + if (err)
4590 + goto err_bind;
4591 +
4592 + dpaa2_eth_napi_add(priv);
4593 +
4594 + /* Percpu statistics */
4595 + priv->percpu_stats = alloc_percpu(*priv->percpu_stats);
4596 + if (!priv->percpu_stats) {
4597 + dev_err(dev, "alloc_percpu(percpu_stats) failed\n");
4598 + err = -ENOMEM;
4599 + goto err_alloc_percpu_stats;
4600 + }
4601 + priv->percpu_extras = alloc_percpu(*priv->percpu_extras);
4602 + if (!priv->percpu_extras) {
4603 + dev_err(dev, "alloc_percpu(percpu_extras) failed\n");
4604 + err = -ENOMEM;
4605 + goto err_alloc_percpu_extras;
4606 + }
4607 +
4608 + snprintf(net_dev->name, IFNAMSIZ, "ni%d", dpni_dev->obj_desc.id);
4609 + if (!dev_valid_name(net_dev->name)) {
4610 + dev_warn(&net_dev->dev,
4611 +			 "netdevice name \"%s\" cannot be used, reverting to default...\n",

4612 + net_dev->name);
4613 + dev_alloc_name(net_dev, "eth%d");
4614 + dev_warn(&net_dev->dev, "using name \"%s\"\n", net_dev->name);
4615 + }
4616 +
4617 + err = dpaa2_eth_netdev_init(net_dev);
4618 + if (err)
4619 + goto err_netdev_init;
4620 +
4621 + /* Configure checksum offload based on current interface flags */
4622 + err = dpaa2_eth_set_rx_csum(priv,
4623 + !!(net_dev->features & NETIF_F_RXCSUM));
4624 + if (err)
4625 + goto err_csum;
4626 +
4627 + err = dpaa2_eth_set_tx_csum(priv,
4628 + !!(net_dev->features &
4629 + (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)));
4630 + if (err)
4631 + goto err_csum;
4632 +
4633 + err = dpaa2_eth_alloc_rings(priv);
4634 + if (err)
4635 + goto err_alloc_rings;
4636 +
4637 + net_dev->ethtool_ops = &dpaa2_ethtool_ops;
4638 +
4639 +#ifdef CONFIG_FSL_DPAA2_ETH_LINK_POLL
4640 + priv->poll_thread = kthread_run(dpaa2_poll_link_state, priv,
4641 + "%s_poll_link", net_dev->name);
4642 +#else
4643 + err = dpaa2_eth_setup_irqs(dpni_dev);
4644 + if (err) {
4645 + netdev_err(net_dev, "ERROR %d setting up interrupts", err);
4646 + goto err_setup_irqs;
4647 + }
4648 +#endif
4649 +
4650 + dpaa2_eth_sysfs_init(&net_dev->dev);
4651 + dpaa2_dbg_add(priv);
4652 +
4653 + dev_info(dev, "Probed interface %s\n", net_dev->name);
4654 + return 0;
4655 +
4656 +#ifndef CONFIG_FSL_DPAA2_ETH_LINK_POLL
4657 +err_setup_irqs:
4658 +#endif
4659 + dpaa2_eth_free_rings(priv);
4660 +err_alloc_rings:
4661 +err_csum:
4662 + unregister_netdev(net_dev);
4663 +err_netdev_init:
4664 + free_percpu(priv->percpu_extras);
4665 +err_alloc_percpu_extras:
4666 + free_percpu(priv->percpu_stats);
4667 +err_alloc_percpu_stats:
4668 + dpaa2_eth_napi_del(priv);
4669 +err_bind:
4670 + dpaa2_dpbp_free(priv);
4671 +err_dpbp_setup:
4672 + dpaa2_dpio_free(priv);
4673 +err_dpio_setup:
4674 + kfree(priv->cls_rule);
4675 + dpni_close(priv->mc_io, 0, priv->mc_token);
4676 +err_dpni_setup:
4677 +#ifndef CONFIG_FSL_DPAA2_ETH_LINK_POLL
4678 + fsl_mc_free_irqs(dpni_dev);
4679 +err_irqs_alloc:
4680 +#endif
4681 + fsl_mc_portal_free(priv->mc_io);
4682 +err_portal_alloc:
4683 + dev_set_drvdata(dev, NULL);
4684 + free_netdev(net_dev);
4685 +
4686 + return err;
4687 +}
4688 +
4689 +static int dpaa2_eth_remove(struct fsl_mc_device *ls_dev)
4690 +{
4691 + struct device *dev;
4692 + struct net_device *net_dev;
4693 + struct dpaa2_eth_priv *priv;
4694 +
4695 + dev = &ls_dev->dev;
4696 + net_dev = dev_get_drvdata(dev);
4697 + priv = netdev_priv(net_dev);
4698 +
4699 + dpaa2_dbg_remove(priv);
4700 + dpaa2_eth_sysfs_remove(&net_dev->dev);
4701 +
4702 + unregister_netdev(net_dev);
4703 + dev_info(net_dev->dev.parent, "Removed interface %s\n", net_dev->name);
4704 +
4705 + dpaa2_dpio_free(priv);
4706 + dpaa2_eth_free_rings(priv);
4707 + dpaa2_eth_napi_del(priv);
4708 + dpaa2_dpbp_free(priv);
4709 + dpaa2_dpni_free(priv);
4710 +
4711 + fsl_mc_portal_free(priv->mc_io);
4712 +
4713 + free_percpu(priv->percpu_stats);
4714 + free_percpu(priv->percpu_extras);
4715 +
4716 +#ifdef CONFIG_FSL_DPAA2_ETH_LINK_POLL
4717 + kthread_stop(priv->poll_thread);
4718 +#else
4719 + fsl_mc_free_irqs(ls_dev);
4720 +#endif
4721 +
4722 + kfree(priv->cls_rule);
4723 +
4724 + dev_set_drvdata(dev, NULL);
4725 + free_netdev(net_dev);
4726 +
4727 + return 0;
4728 +}
4729 +
4730 +static const struct fsl_mc_device_match_id dpaa2_eth_match_id_table[] = {
4731 + {
4732 + .vendor = FSL_MC_VENDOR_FREESCALE,
4733 + .obj_type = "dpni",
4734 + .ver_major = DPNI_VER_MAJOR,
4735 + .ver_minor = DPNI_VER_MINOR
4736 + },
4737 + { .vendor = 0x0 }
4738 +};
4739 +
4740 +static struct fsl_mc_driver dpaa2_eth_driver = {
4741 + .driver = {
4742 + .name = KBUILD_MODNAME,
4743 + .owner = THIS_MODULE,
4744 + },
4745 + .probe = dpaa2_eth_probe,
4746 + .remove = dpaa2_eth_remove,
4747 + .match_id_table = dpaa2_eth_match_id_table
4748 +};
4749 +
4750 +static int __init dpaa2_eth_driver_init(void)
4751 +{
4752 + int err;
4753 +
4754 + dpaa2_eth_dbg_init();
4755 +
4756 + err = fsl_mc_driver_register(&dpaa2_eth_driver);
4757 + if (err) {
4758 + dpaa2_eth_dbg_exit();
4759 + return err;
4760 + }
4761 +
4762 + return 0;
4763 +}
4764 +
4765 +static void __exit dpaa2_eth_driver_exit(void)
4766 +{
4767 + fsl_mc_driver_unregister(&dpaa2_eth_driver);
4768 + dpaa2_eth_dbg_exit();
4769 +}
4770 +
4771 +module_init(dpaa2_eth_driver_init);
4772 +module_exit(dpaa2_eth_driver_exit);
4773 --- /dev/null
4774 +++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.h
4775 @@ -0,0 +1,366 @@
4776 +/* Copyright 2014-2015 Freescale Semiconductor Inc.
4777 + *
4778 + * Redistribution and use in source and binary forms, with or without
4779 + * modification, are permitted provided that the following conditions are met:
4780 + * * Redistributions of source code must retain the above copyright
4781 + * notice, this list of conditions and the following disclaimer.
4782 + * * Redistributions in binary form must reproduce the above copyright
4783 + * notice, this list of conditions and the following disclaimer in the
4784 + * documentation and/or other materials provided with the distribution.
4785 + * * Neither the name of Freescale Semiconductor nor the
4786 + * names of its contributors may be used to endorse or promote products
4787 + * derived from this software without specific prior written permission.
4788 + *
4789 + *
4790 + * ALTERNATIVELY, this software may be distributed under the terms of the
4791 + * GNU General Public License ("GPL") as published by the Free Software
4792 + * Foundation, either version 2 of that License or (at your option) any
4793 + * later version.
4794 + *
4795 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
4796 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
4797 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
4798 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
4799 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
4800 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
4801 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
4802 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
4803 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
4804 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
4805 + */
4806 +
4807 +#ifndef __DPAA2_ETH_H
4808 +#define __DPAA2_ETH_H
4809 +
4810 +#include <linux/netdevice.h>
4811 +#include <linux/if_vlan.h>
4812 +#include "../../fsl-mc/include/fsl_dpaa2_io.h"
4813 +#include "../../fsl-mc/include/fsl_dpaa2_fd.h"
4814 +#include "../../fsl-mc/include/dpbp.h"
4815 +#include "../../fsl-mc/include/dpbp-cmd.h"
4816 +#include "../../fsl-mc/include/dpcon.h"
4817 +#include "../../fsl-mc/include/dpcon-cmd.h"
4818 +#include "../../fsl-mc/include/dpmng.h"
4819 +#include "dpni.h"
4820 +#include "dpni-cmd.h"
4821 +
4822 +#include "dpaa2-eth-trace.h"
4823 +#include "dpaa2-eth-debugfs.h"
4824 +
4825 +#define DPAA2_ETH_STORE_SIZE 16
4826 +
4827 +/* Maximum receive frame size is 64K */
4828 +#define DPAA2_ETH_MAX_SG_ENTRIES ((64 * 1024) / DPAA2_ETH_RX_BUFFER_SIZE)
4829 +
4830 +/* Maximum acceptable MTU value. It is in direct relation with the MC-enforced
4831 + * Max Frame Length (currently 10k).
4832 + */
4833 +#define DPAA2_ETH_MFL (10 * 1024)
4834 +#define DPAA2_ETH_MAX_MTU (DPAA2_ETH_MFL - VLAN_ETH_HLEN)
4835 +/* Convert L3 MTU to L2 MFL */
4836 +#define DPAA2_ETH_L2_MAX_FRM(mtu)	((mtu) + VLAN_ETH_HLEN)
4837 +
4838 +/* Set the taildrop threshold (in bytes) to allow the enqueue of several jumbo
4839 + * frames in the Rx queues (length of the current frame is not
4840 + * taken into account when making the taildrop decision)
4841 + */
4842 +#define DPAA2_ETH_TAILDROP_THRESH (64 * 1024)
4843 +
4844 +/* Buffer quota per queue. Must be large enough such that for minimum sized
4845 + * frames taildrop kicks in before the bpool gets depleted, so we compute
4846 + * how many 64B frames fit inside the taildrop threshold and add a margin
4847 + * to accommodate the buffer refill delay.
4848 + */
4849 +#define DPAA2_ETH_MAX_FRAMES_PER_QUEUE (DPAA2_ETH_TAILDROP_THRESH / 64)
4850 +#define DPAA2_ETH_NUM_BUFS (DPAA2_ETH_MAX_FRAMES_PER_QUEUE + 256)
4851 +#define DPAA2_ETH_REFILL_THRESH DPAA2_ETH_MAX_FRAMES_PER_QUEUE
4852 +
4853 +/* Hardware requires alignment for ingress/egress buffer addresses
4854 + * and ingress buffer lengths.
4855 + */
4856 +#define DPAA2_ETH_RX_BUFFER_SIZE 2048
4857 +#define DPAA2_ETH_TX_BUF_ALIGN 64
4858 +#define DPAA2_ETH_RX_BUF_ALIGN 256
4859 +#define DPAA2_ETH_NEEDED_HEADROOM(p_priv) \
4860 + ((p_priv)->tx_data_offset + DPAA2_ETH_TX_BUF_ALIGN)
4861 +
4862 +#define DPAA2_ETH_BUF_RAW_SIZE \
4863 + (DPAA2_ETH_RX_BUFFER_SIZE + \
4864 + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) + \
4865 + DPAA2_ETH_RX_BUF_ALIGN)
4866 +
4867 +/* PTP nominal frequency 1MHz */
4868 +#define DPAA2_PTP_NOMINAL_FREQ_PERIOD_NS 1000
4869 +
4870 +/* We are accommodating a skb backpointer and some S/G info
4871 + * in the frame's software annotation. The hardware
4872 + * options are either 0 or 64, so we choose the latter.
4873 + */
4874 +#define DPAA2_ETH_SWA_SIZE 64
4875 +
4876 +/* Must keep this struct smaller than DPAA2_ETH_SWA_SIZE */
4877 +struct dpaa2_eth_swa {
4878 + struct sk_buff *skb;
4879 + struct scatterlist *scl;
4880 + int num_sg;
4881 + int num_dma_bufs;
4882 +};
4883 +
4884 +/* Annotation valid bits in FD FRC */
4885 +#define DPAA2_FD_FRC_FASV 0x8000
4886 +#define DPAA2_FD_FRC_FAEADV 0x4000
4887 +#define DPAA2_FD_FRC_FAPRV 0x2000
4888 +#define DPAA2_FD_FRC_FAIADV 0x1000
4889 +#define DPAA2_FD_FRC_FASWOV 0x0800
4890 +#define DPAA2_FD_FRC_FAICFDV 0x0400
4891 +
4892 +/* Annotation bits in FD CTRL */
4893 +#define DPAA2_FD_CTRL_ASAL 0x00020000 /* ASAL = 128 */
4894 +#define DPAA2_FD_CTRL_PTA 0x00800000
4895 +#define DPAA2_FD_CTRL_PTV1 0x00400000
4896 +
4897 +/* Frame annotation status */
4898 +struct dpaa2_fas {
4899 + u8 reserved;
4900 + u8 ppid;
4901 + __le16 ifpid;
4902 + __le32 status;
4903 +} __packed;
4904 +
4905 +/* Debug frame, otherwise supposed to be discarded */
4906 +#define DPAA2_ETH_FAS_DISC 0x80000000
4907 +/* MACSEC frame */
4908 +#define DPAA2_ETH_FAS_MS 0x40000000
4909 +#define DPAA2_ETH_FAS_PTP 0x08000000
4910 +/* Ethernet multicast frame */
4911 +#define DPAA2_ETH_FAS_MC 0x04000000
4912 +/* Ethernet broadcast frame */
4913 +#define DPAA2_ETH_FAS_BC 0x02000000
4914 +#define DPAA2_ETH_FAS_KSE 0x00040000
4915 +#define DPAA2_ETH_FAS_EOFHE 0x00020000
4916 +#define DPAA2_ETH_FAS_MNLE 0x00010000
4917 +#define DPAA2_ETH_FAS_TIDE 0x00008000
4918 +#define DPAA2_ETH_FAS_PIEE 0x00004000
4919 +/* Frame length error */
4920 +#define DPAA2_ETH_FAS_FLE 0x00002000
4921 +/* Frame physical error; our favourite pastime */
4922 +#define DPAA2_ETH_FAS_FPE 0x00001000
4923 +#define DPAA2_ETH_FAS_PTE 0x00000080
4924 +#define DPAA2_ETH_FAS_ISP 0x00000040
4925 +#define DPAA2_ETH_FAS_PHE 0x00000020
4926 +#define DPAA2_ETH_FAS_BLE 0x00000010
4927 +/* L3 csum validation performed */
4928 +#define DPAA2_ETH_FAS_L3CV 0x00000008
4929 +/* L3 csum error */
4930 +#define DPAA2_ETH_FAS_L3CE 0x00000004
4931 +/* L4 csum validation performed */
4932 +#define DPAA2_ETH_FAS_L4CV 0x00000002
4933 +/* L4 csum error */
4934 +#define DPAA2_ETH_FAS_L4CE 0x00000001
4935 +/* These bits always signal errors */
4936 +#define DPAA2_ETH_RX_ERR_MASK (DPAA2_ETH_FAS_KSE | \
4937 + DPAA2_ETH_FAS_EOFHE | \
4938 + DPAA2_ETH_FAS_MNLE | \
4939 + DPAA2_ETH_FAS_TIDE | \
4940 + DPAA2_ETH_FAS_PIEE | \
4941 + DPAA2_ETH_FAS_FLE | \
4942 + DPAA2_ETH_FAS_FPE | \
4943 + DPAA2_ETH_FAS_PTE | \
4944 + DPAA2_ETH_FAS_ISP | \
4945 + DPAA2_ETH_FAS_PHE | \
4946 + DPAA2_ETH_FAS_BLE | \
4947 + DPAA2_ETH_FAS_L3CE | \
4948 + DPAA2_ETH_FAS_L4CE)
4949 +/* Unsupported features in the ingress */
4950 +#define DPAA2_ETH_RX_UNSUPP_MASK DPAA2_ETH_FAS_MS
4951 +/* Tx errors */
4952 +#define DPAA2_ETH_TXCONF_ERR_MASK (DPAA2_ETH_FAS_KSE | \
4953 + DPAA2_ETH_FAS_EOFHE | \
4954 + DPAA2_ETH_FAS_MNLE | \
4955 + DPAA2_ETH_FAS_TIDE)
4956 +
4957 +/* Time in milliseconds between link state updates */
4958 +#define DPAA2_ETH_LINK_STATE_REFRESH 1000
4959 +
4960 +/* Driver statistics, other than those in struct rtnl_link_stats64.
4961 + * These are usually collected per-CPU and aggregated by ethtool.
4962 + */
4963 +struct dpaa2_eth_stats {
4964 + __u64 tx_conf_frames;
4965 + __u64 tx_conf_bytes;
4966 + __u64 tx_sg_frames;
4967 + __u64 tx_sg_bytes;
4968 + __u64 rx_sg_frames;
4969 + __u64 rx_sg_bytes;
4970 + /* Enqueues retried due to portal busy */
4971 + __u64 tx_portal_busy;
4972 +};
4973 +
4974 +/* Per-FQ statistics */
4975 +struct dpaa2_eth_fq_stats {
4976 + /* Number of frames received on this queue */
4977 + __u64 frames;
4978 +};
4979 +
4980 +/* Per-channel statistics */
4981 +struct dpaa2_eth_ch_stats {
4982 + /* Volatile dequeues retried due to portal busy */
4983 + __u64 dequeue_portal_busy;
4984 + /* Number of CDANs; useful to estimate avg NAPI len */
4985 + __u64 cdan;
4986 + /* Number of frames received on queues from this channel */
4987 + __u64 frames;
4988 +};
4989 +
4990 +/* Maximum number of Rx queues associated with a DPNI */
4991 +#define DPAA2_ETH_MAX_RX_QUEUES 16
4992 +#define DPAA2_ETH_MAX_TX_QUEUES NR_CPUS
4993 +#define DPAA2_ETH_MAX_RX_ERR_QUEUES 1
4994 +#define DPAA2_ETH_MAX_QUEUES (DPAA2_ETH_MAX_RX_QUEUES + \
4995 + DPAA2_ETH_MAX_TX_QUEUES + \
4996 + DPAA2_ETH_MAX_RX_ERR_QUEUES)
4997 +
4998 +#define DPAA2_ETH_MAX_DPCONS NR_CPUS
4999 +
5000 +enum dpaa2_eth_fq_type {
5001 + DPAA2_RX_FQ = 0,
5002 + DPAA2_TX_CONF_FQ,
5003 + DPAA2_RX_ERR_FQ
5004 +};
5005 +
5006 +struct dpaa2_eth_priv;
5007 +
5008 +struct dpaa2_eth_fq {
5009 + u32 fqid;
5010 + u16 flowid;
5011 + int target_cpu;
5012 + struct dpaa2_eth_channel *channel;
5013 + enum dpaa2_eth_fq_type type;
5014 +
5015 + void (*consume)(struct dpaa2_eth_priv *,
5016 + struct dpaa2_eth_channel *,
5017 + const struct dpaa2_fd *,
5018 + struct napi_struct *);
5019 + struct dpaa2_eth_priv *netdev_priv; /* backpointer */
5020 + struct dpaa2_eth_fq_stats stats;
5021 +};
5022 +
5023 +struct dpaa2_eth_channel {
5024 + struct dpaa2_io_notification_ctx nctx;
5025 + struct fsl_mc_device *dpcon;
5026 + int dpcon_id;
5027 + int ch_id;
5028 + int dpio_id;
5029 + struct napi_struct napi;
5030 + struct dpaa2_io_store *store;
5031 + struct dpaa2_eth_priv *priv;
5032 + int buf_count;
5033 + struct dpaa2_eth_ch_stats stats;
5034 +};
5035 +
5036 +struct dpaa2_cls_rule {
5037 + struct ethtool_rx_flow_spec fs;
5038 + bool in_use;
5039 +};
5040 +
5041 +struct dpaa2_eth_priv {
5042 + struct net_device *net_dev;
5043 +
5044 + u8 num_fqs;
5045 + /* First queue is tx conf, the rest are rx */
5046 + struct dpaa2_eth_fq fq[DPAA2_ETH_MAX_QUEUES];
5047 +
5048 + u8 num_channels;
5049 + struct dpaa2_eth_channel *channel[DPAA2_ETH_MAX_DPCONS];
5050 +
5051 + int dpni_id;
5052 + struct dpni_attr dpni_attrs;
5053 + struct dpni_extended_cfg dpni_ext_cfg;
5054 + /* Insofar as the MC is concerned, we're using one layout on all 3 types
5055 + * of buffers (Rx, Tx, Tx-Conf).
5056 + */
5057 + struct dpni_buffer_layout buf_layout;
5058 + u16 tx_data_offset;
5059 +
5060 + struct fsl_mc_device *dpbp_dev;
5061 + struct dpbp_attr dpbp_attrs;
5062 +
5063 + u16 tx_qdid;
5064 + struct fsl_mc_io *mc_io;
5065 + /* SysFS-controlled affinity mask for TxConf FQs */
5066 + struct cpumask txconf_cpumask;
5067 + /* Cores which have an affine DPIO/DPCON.
5068 + * This is the cpu set on which Rx frames are processed;
5069 + * Tx confirmation frames are processed on a subset of this,
5070 + * depending on user settings.
5071 + */
5072 + struct cpumask dpio_cpumask;
5073 +
5074 + /* Standard statistics */
5075 + struct rtnl_link_stats64 __percpu *percpu_stats;
5076 + /* Extra stats, in addition to the ones known by the kernel */
5077 + struct dpaa2_eth_stats __percpu *percpu_extras;
5078 + u32 msg_enable; /* net_device message level */
5079 +
5080 + u16 mc_token;
5081 +
5082 + struct dpni_link_state link_state;
5083 + struct task_struct *poll_thread;
5084 +
5085 + /* enabled ethtool hashing bits */
5086 + u64 rx_hash_fields;
5087 +
5088 +#ifdef CONFIG_FSL_DPAA2_ETH_DEBUGFS
5089 + struct dpaa2_debugfs dbg;
5090 +#endif
5091 +
5092 + /* array of classification rules */
5093 + struct dpaa2_cls_rule *cls_rule;
5094 +
5095 + struct dpni_tx_shaping_cfg shaping_cfg;
5096 +
5097 + bool ts_tx_en; /* Tx timestamping enabled */
5098 + bool ts_rx_en; /* Rx timestamping enabled */
5099 +};
5100 +
5101 +/* default Rx hash options, set during probing */
5102 +#define DPAA2_RXH_SUPPORTED (RXH_L2DA | RXH_VLAN | RXH_L3_PROTO \
5103 + | RXH_IP_SRC | RXH_IP_DST | RXH_L4_B_0_1 \
5104 + | RXH_L4_B_2_3)
5105 +
5106 +#define dpaa2_eth_hash_enabled(priv) \
5107 + ((priv)->dpni_attrs.options & DPNI_OPT_DIST_HASH)
5108 +
5109 +#define dpaa2_eth_fs_enabled(priv) \
5110 + ((priv)->dpni_attrs.options & DPNI_OPT_DIST_FS)
5111 +
5112 +#define DPAA2_CLASSIFIER_ENTRY_COUNT 16
5113 +
5114 +/* Required by struct dpni_attr::ext_cfg_iova */
5115 +#define DPAA2_EXT_CFG_SIZE 256
5116 +
5117 +extern const struct ethtool_ops dpaa2_ethtool_ops;
5118 +
5119 +int dpaa2_set_hash(struct net_device *net_dev, u64 flags);
5120 +
5121 +static inline int dpaa2_queue_count(struct dpaa2_eth_priv *priv)
5122 +{
5123 + if (!dpaa2_eth_hash_enabled(priv))
5124 + return 1;
5125 +
5126 + return priv->dpni_ext_cfg.tc_cfg[0].max_dist;
5127 +}
5128 +
5129 +static inline int dpaa2_max_channels(struct dpaa2_eth_priv *priv)
5130 +{
5131 + /* Ideally, we want a number of channels large enough
5132 + * to accommodate both the Rx distribution size
5133 + * and the max number of Tx confirmation queues
5134 + */
5135 + return max_t(int, dpaa2_queue_count(priv),
5136 + priv->dpni_attrs.max_senders);
5137 +}
5138 +
5139 +void dpaa2_cls_check(struct net_device *);
5140 +
5141 +#endif /* __DPAA2_ETH_H */
5142 --- /dev/null
5143 +++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-ethtool.c
5144 @@ -0,0 +1,882 @@
5145 +/* Copyright 2014-2015 Freescale Semiconductor Inc.
5146 + *
5147 + * Redistribution and use in source and binary forms, with or without
5148 + * modification, are permitted provided that the following conditions are met:
5149 + * * Redistributions of source code must retain the above copyright
5150 + * notice, this list of conditions and the following disclaimer.
5151 + * * Redistributions in binary form must reproduce the above copyright
5152 + * notice, this list of conditions and the following disclaimer in the
5153 + * documentation and/or other materials provided with the distribution.
5154 + * * Neither the name of Freescale Semiconductor nor the
5155 + * names of its contributors may be used to endorse or promote products
5156 + * derived from this software without specific prior written permission.
5157 + *
5158 + *
5159 + * ALTERNATIVELY, this software may be distributed under the terms of the
5160 + * GNU General Public License ("GPL") as published by the Free Software
5161 + * Foundation, either version 2 of that License or (at your option) any
5162 + * later version.
5163 + *
5164 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
5165 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
5166 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
5167 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
5168 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
5169 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
5170 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
5171 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
5172 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
5173 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
5174 + */
5175 +
5176 +#include "dpni.h" /* DPNI_LINK_OPT_* */
5177 +#include "dpaa2-eth.h"
5178 +
5179 +/* size of DMA memory used to pass configuration to classifier, in bytes */
5180 +#define DPAA2_CLASSIFIER_DMA_SIZE 256
5181 +
5182 +/* To be kept in sync with 'enum dpni_counter' */
5183 +char dpaa2_ethtool_stats[][ETH_GSTRING_LEN] = {
5184 + "rx frames",
5185 + "rx bytes",
5186 + "rx frames dropped",
5187 + "rx err frames",
5188 + "rx mcast frames",
5189 + "rx mcast bytes",
5190 + "rx bcast frames",
5191 + "rx bcast bytes",
5192 + "tx frames",
5193 + "tx bytes",
5194 + "tx err frames",
5195 +};
5196 +
5197 +#define DPAA2_ETH_NUM_STATS ARRAY_SIZE(dpaa2_ethtool_stats)
5198 +
5199 +/* To be kept in sync with 'struct dpaa2_eth_stats' */
5200 +char dpaa2_ethtool_extras[][ETH_GSTRING_LEN] = {
5201 + /* per-cpu stats */
5202 +
5203 + "tx conf frames",
5204 + "tx conf bytes",
5205 + "tx sg frames",
5206 + "tx sg bytes",
5207 + "rx sg frames",
5208 + "rx sg bytes",
5209 + /* how many times we had to retry the enqueue command */
5210 + "tx portal busy",
5211 +
5212 + /* Channel stats */
5213 +
5214 + /* How many times we had to retry the volatile dequeue command */
5215 + "portal busy",
5216 + /* Number of notifications received */
5217 + "cdan",
5218 +#ifdef CONFIG_FSL_QBMAN_DEBUG
5219 + /* FQ stats */
5220 + "rx pending frames",
5221 + "rx pending bytes",
5222 + "tx conf pending frames",
5223 + "tx conf pending bytes",
5224 + "buffer count"
5225 +#endif
5226 +};
5227 +
5228 +#define DPAA2_ETH_NUM_EXTRA_STATS ARRAY_SIZE(dpaa2_ethtool_extras)
5229 +
5230 +static void dpaa2_get_drvinfo(struct net_device *net_dev,
5231 + struct ethtool_drvinfo *drvinfo)
5232 +{
5233 + struct mc_version mc_ver;
5234 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5235 + char fw_version[ETHTOOL_FWVERS_LEN];
5236 + char version[32];
5237 + int err;
5238 +
5239 + err = mc_get_version(priv->mc_io, 0, &mc_ver);
5240 + if (err) {
5241 + strlcpy(drvinfo->fw_version, "Error retrieving MC version",
5242 + sizeof(drvinfo->fw_version));
5243 + } else {
5244 + scnprintf(fw_version, sizeof(fw_version), "%d.%d.%d",
5245 + mc_ver.major, mc_ver.minor, mc_ver.revision);
5246 + strlcpy(drvinfo->fw_version, fw_version,
5247 + sizeof(drvinfo->fw_version));
5248 + }
5249 +
5250 + scnprintf(version, sizeof(version), "%d.%d", DPNI_VER_MAJOR,
5251 + DPNI_VER_MINOR);
5252 + strlcpy(drvinfo->version, version, sizeof(drvinfo->version));
5253 +
5254 + strlcpy(drvinfo->driver, KBUILD_MODNAME, sizeof(drvinfo->driver));
5255 + strlcpy(drvinfo->bus_info, dev_name(net_dev->dev.parent->parent),
5256 + sizeof(drvinfo->bus_info));
5257 +}
5258 +
5259 +static u32 dpaa2_get_msglevel(struct net_device *net_dev)
5260 +{
5261 + return ((struct dpaa2_eth_priv *)netdev_priv(net_dev))->msg_enable;
5262 +}
5263 +
5264 +static void dpaa2_set_msglevel(struct net_device *net_dev,
5265 + u32 msg_enable)
5266 +{
5267 + ((struct dpaa2_eth_priv *)netdev_priv(net_dev))->msg_enable =
5268 + msg_enable;
5269 +}
5270 +
5271 +static int dpaa2_get_settings(struct net_device *net_dev,
5272 + struct ethtool_cmd *cmd)
5273 +{
5274 + struct dpni_link_state state = {0};
5275 + int err = 0;
5276 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5277 +
5278 + err = dpni_get_link_state(priv->mc_io, 0, priv->mc_token, &state);
5279 + if (err) {
5280 + netdev_err(net_dev, "ERROR %d getting link state", err);
5281 + goto out;
5282 + }
5283 +
5284 + /* At the moment, we have no way of interrogating the DPMAC
5285 + * from the DPNI side - and for that matter there may exist
5286 + * no DPMAC at all. So for now we just don't report anything
5287 + * beyond the DPNI attributes.
5288 + */
5289 + if (state.options & DPNI_LINK_OPT_AUTONEG)
5290 + cmd->autoneg = AUTONEG_ENABLE;
5291 + if (!(state.options & DPNI_LINK_OPT_HALF_DUPLEX))
5292 + cmd->duplex = DUPLEX_FULL;
5293 + ethtool_cmd_speed_set(cmd, state.rate);
5294 +
5295 +out:
5296 + return err;
5297 +}
5298 +
5299 +static int dpaa2_set_settings(struct net_device *net_dev,
5300 + struct ethtool_cmd *cmd)
5301 +{
5302 + struct dpni_link_cfg cfg = {0};
5303 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5304 + int err = 0;
5305 +
5306 + netdev_dbg(net_dev, "Setting link parameters...");
5307 +
5308 + /* Due to a temporary firmware limitation, the DPNI must be down
5309 + * in order to be able to change link settings. Taking steps to let
5310 + * the user know that.
5311 + */
5312 + if (netif_running(net_dev)) {
5313 + netdev_info(net_dev, "Sorry, interface must be brought down first.\n");
5314 + return -EACCES;
5315 + }
5316 +
5317 + cfg.rate = ethtool_cmd_speed(cmd);
5318 + if (cmd->autoneg == AUTONEG_ENABLE)
5319 + cfg.options |= DPNI_LINK_OPT_AUTONEG;
5320 + else
5321 + cfg.options &= ~DPNI_LINK_OPT_AUTONEG;
5322 + if (cmd->duplex == DUPLEX_HALF)
5323 + cfg.options |= DPNI_LINK_OPT_HALF_DUPLEX;
5324 + else
5325 + cfg.options &= ~DPNI_LINK_OPT_HALF_DUPLEX;
5326 +
5327 + err = dpni_set_link_cfg(priv->mc_io, 0, priv->mc_token, &cfg);
5328 + if (err)
5329 + /* ethtool will be loud enough if we return an error; no point
5330 + * in putting our own error message on the console by default
5331 + */
5332 + netdev_dbg(net_dev, "ERROR %d setting link cfg", err);
5333 +
5334 + return err;
5335 +}
5336 +
5337 +static void dpaa2_get_strings(struct net_device *netdev, u32 stringset,
5338 + u8 *data)
5339 +{
5340 + u8 *p = data;
5341 + int i;
5342 +
5343 + switch (stringset) {
5344 + case ETH_SS_STATS:
5345 + for (i = 0; i < DPAA2_ETH_NUM_STATS; i++) {
5346 + strlcpy(p, dpaa2_ethtool_stats[i], ETH_GSTRING_LEN);
5347 + p += ETH_GSTRING_LEN;
5348 + }
5349 + for (i = 0; i < DPAA2_ETH_NUM_EXTRA_STATS; i++) {
5350 + strlcpy(p, dpaa2_ethtool_extras[i], ETH_GSTRING_LEN);
5351 + p += ETH_GSTRING_LEN;
5352 + }
5353 + break;
5354 + }
5355 +}
5356 +
5357 +static int dpaa2_get_sset_count(struct net_device *net_dev, int sset)
5358 +{
5359 + switch (sset) {
5360 + case ETH_SS_STATS: /* ethtool_get_stats(), ethtool_get_drvinfo() */
5361 + return DPAA2_ETH_NUM_STATS + DPAA2_ETH_NUM_EXTRA_STATS;
5362 + default:
5363 + return -EOPNOTSUPP;
5364 + }
5365 +}
5366 +
5367 +/** Fill in hardware counters, as returned by the MC firmware.
5368 + */
5369 +static void dpaa2_get_ethtool_stats(struct net_device *net_dev,
5370 + struct ethtool_stats *stats,
5371 + u64 *data)
5372 +{
5373 + int i; /* Current index in the data array */
5374 + int j, k, err;
5375 +
5376 +#ifdef CONFIG_FSL_QBMAN_DEBUG
5377 + u32 fcnt, bcnt;
5378 + u32 fcnt_rx_total = 0, fcnt_tx_total = 0;
5379 + u32 bcnt_rx_total = 0, bcnt_tx_total = 0;
5380 + u32 buf_cnt;
5381 +#endif
5382 + u64 cdan = 0;
5383 + u64 portal_busy = 0;
5384 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5385 + struct dpaa2_eth_stats *extras;
5386 + struct dpaa2_eth_ch_stats *ch_stats;
5387 +
5388 + memset(data, 0,
5389 + sizeof(u64) * (DPAA2_ETH_NUM_STATS + DPAA2_ETH_NUM_EXTRA_STATS));
5390 +
5391 + /* Print standard counters, from DPNI statistics */
5392 + for (i = 0; i < DPAA2_ETH_NUM_STATS; i++) {
5393 + err = dpni_get_counter(priv->mc_io, 0, priv->mc_token, i,
5394 + data + i);
5395 + if (err != 0)
5396 + netdev_warn(net_dev, "Err %d getting DPNI counter %d",
5397 + err, i);
5398 + }
5399 +
5400 + /* Print per-cpu extra stats */
5401 + for_each_online_cpu(k) {
5402 + extras = per_cpu_ptr(priv->percpu_extras, k);
5403 + for (j = 0; j < sizeof(*extras) / sizeof(__u64); j++)
5404 + *((__u64 *)data + i + j) += *((__u64 *)extras + j);
5405 + }
5406 + i += j;
5407 +
5408 + /* We may be using fewer DPIOs than actual CPUs */
5409 + for_each_cpu(j, &priv->dpio_cpumask) {
5410 + ch_stats = &priv->channel[j]->stats;
5411 + cdan += ch_stats->cdan;
5412 + portal_busy += ch_stats->dequeue_portal_busy;
5413 + }
5414 +
5415 + *(data + i++) = portal_busy;
5416 + *(data + i++) = cdan;
5417 +
5418 +#ifdef CONFIG_FSL_QBMAN_DEBUG
5419 + for (j = 0; j < priv->num_fqs; j++) {
5420 + /* Print FQ instantaneous counts */
5421 + err = dpaa2_io_query_fq_count(NULL, priv->fq[j].fqid,
5422 + &fcnt, &bcnt);
5423 + if (err) {
5424 + netdev_warn(net_dev, "FQ query error %d", err);
5425 + return;
5426 + }
5427 +
5428 + if (priv->fq[j].type == DPAA2_TX_CONF_FQ) {
5429 + fcnt_tx_total += fcnt;
5430 + bcnt_tx_total += bcnt;
5431 + } else {
5432 + fcnt_rx_total += fcnt;
5433 + bcnt_rx_total += bcnt;
5434 + }
5435 + }
5436 + *(data + i++) = fcnt_rx_total;
5437 + *(data + i++) = bcnt_rx_total;
5438 + *(data + i++) = fcnt_tx_total;
5439 + *(data + i++) = bcnt_tx_total;
5440 +
5441 + err = dpaa2_io_query_bp_count(NULL, priv->dpbp_attrs.bpid, &buf_cnt);
5442 + if (err) {
5443 + netdev_warn(net_dev, "Buffer count query error %d\n", err);
5444 + return;
5445 + }
5446 + *(data + i++) = buf_cnt;
5447 +#endif
5448 +}
5449 +
5450 +static const struct dpaa2_hash_fields {
5451 + u64 rxnfc_field;
5452 + enum net_prot cls_prot;
5453 + int cls_field;
5454 + int size;
5455 +} dpaa2_hash_fields[] = {
5456 + {
5457 + /* L2 header */
5458 + .rxnfc_field = RXH_L2DA,
5459 + .cls_prot = NET_PROT_ETH,
5460 + .cls_field = NH_FLD_ETH_DA,
5461 + .size = 6,
5462 + }, {
5463 + /* VLAN header */
5464 + .rxnfc_field = RXH_VLAN,
5465 + .cls_prot = NET_PROT_VLAN,
5466 + .cls_field = NH_FLD_VLAN_TCI,
5467 + .size = 2,
5468 + }, {
5469 + /* IP header */
5470 + .rxnfc_field = RXH_IP_SRC,
5471 + .cls_prot = NET_PROT_IP,
5472 + .cls_field = NH_FLD_IP_SRC,
5473 + .size = 4,
5474 + }, {
5475 + .rxnfc_field = RXH_IP_DST,
5476 + .cls_prot = NET_PROT_IP,
5477 + .cls_field = NH_FLD_IP_DST,
5478 + .size = 4,
5479 + }, {
5480 + .rxnfc_field = RXH_L3_PROTO,
5481 + .cls_prot = NET_PROT_IP,
5482 + .cls_field = NH_FLD_IP_PROTO,
5483 + .size = 1,
5484 + }, {
5485 + /* Using UDP ports, this is functionally equivalent to raw
5486 + * byte pairs from L4 header.
5487 + */
5488 + .rxnfc_field = RXH_L4_B_0_1,
5489 + .cls_prot = NET_PROT_UDP,
5490 + .cls_field = NH_FLD_UDP_PORT_SRC,
5491 + .size = 2,
5492 + }, {
5493 + .rxnfc_field = RXH_L4_B_2_3,
5494 + .cls_prot = NET_PROT_UDP,
5495 + .cls_field = NH_FLD_UDP_PORT_DST,
5496 + .size = 2,
5497 + },
5498 +};
5499 +
5500 +static int dpaa2_cls_is_enabled(struct net_device *net_dev, u64 flag)
5501 +{
5502 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5503 +
5504 + return !!(priv->rx_hash_fields & flag);
5505 +}
5506 +
5507 +static int dpaa2_cls_key_off(struct net_device *net_dev, u64 flag)
5508 +{
5509 + int i, off = 0;
5510 +
5511 + for (i = 0; i < ARRAY_SIZE(dpaa2_hash_fields); i++) {
5512 + if (dpaa2_hash_fields[i].rxnfc_field & flag)
5513 + return off;
5514 + if (dpaa2_cls_is_enabled(net_dev,
5515 + dpaa2_hash_fields[i].rxnfc_field))
5516 + off += dpaa2_hash_fields[i].size;
5517 + }
5518 +
5519 + return -1;
5520 +}
5521 +
5522 +static u8 dpaa2_cls_key_size(struct net_device *net_dev)
5523 +{
5524 + u8 i, size = 0;
5525 +
5526 + for (i = 0; i < ARRAY_SIZE(dpaa2_hash_fields); i++) {
5527 + if (!dpaa2_cls_is_enabled(net_dev,
5528 + dpaa2_hash_fields[i].rxnfc_field))
5529 + continue;
5530 + size += dpaa2_hash_fields[i].size;
5531 + }
5532 +
5533 + return size;
5534 +}
5535 +
5536 +static u8 dpaa2_cls_max_key_size(struct net_device *net_dev)
5537 +{
5538 + u8 i, size = 0;
5539 +
5540 + for (i = 0; i < ARRAY_SIZE(dpaa2_hash_fields); i++)
5541 + size += dpaa2_hash_fields[i].size;
5542 +
5543 + return size;
5544 +}
5545 +
5546 +void dpaa2_cls_check(struct net_device *net_dev)
5547 +{
5548 + u8 key_size = dpaa2_cls_max_key_size(net_dev);
5549 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5550 +
5551 + if (priv->dpni_attrs.options & DPNI_OPT_DIST_FS &&
5552 + priv->dpni_attrs.max_dist_key_size < key_size) {
5553 + dev_err(&net_dev->dev,
5554 + "max_dist_key_size = %d, expected %d. Steering is disabled\n",
5555 + priv->dpni_attrs.max_dist_key_size,
5556 + key_size);
5557 + priv->dpni_attrs.options &= ~DPNI_OPT_DIST_FS;
5558 + }
5559 +}
5560 +
5561 +/* Set RX hash options
5562 + * flags is a combination of RXH_ bits
5563 + */
5564 +int dpaa2_set_hash(struct net_device *net_dev, u64 flags)
5565 +{
5566 + struct device *dev = net_dev->dev.parent;
5567 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5568 + struct dpkg_profile_cfg cls_cfg;
5569 + struct dpni_rx_tc_dist_cfg dist_cfg;
5570 + u8 *dma_mem;
5571 + u64 enabled_flags = 0;
5572 + int i;
5573 + int err = 0;
5574 +
5575 + if (!dpaa2_eth_hash_enabled(priv)) {
5576 + dev_err(dev, "Hashing support is not enabled\n");
5577 + return -EOPNOTSUPP;
5578 + }
5579 +
5580 + if (flags & ~DPAA2_RXH_SUPPORTED) {
5581 + /* RXH_DISCARD is not supported */
5582 + dev_err(dev, "unsupported option selected, supported options are: mvtsdfn\n");
5583 + return -EOPNOTSUPP;
5584 + }
5585 +
5586 + memset(&cls_cfg, 0, sizeof(cls_cfg));
5587 +
5588 + for (i = 0; i < ARRAY_SIZE(dpaa2_hash_fields); i++) {
5589 + struct dpkg_extract *key =
5590 + &cls_cfg.extracts[cls_cfg.num_extracts];
5591 +
5592 + if (!(flags & dpaa2_hash_fields[i].rxnfc_field))
5593 + continue;
5594 +
5595 + if (cls_cfg.num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
5596 + dev_err(dev, "error adding key extraction rule, too many rules?\n");
5597 + return -E2BIG;
5598 + }
5599 +
5600 + key->type = DPKG_EXTRACT_FROM_HDR;
5601 + key->extract.from_hdr.prot =
5602 + dpaa2_hash_fields[i].cls_prot;
5603 + key->extract.from_hdr.type = DPKG_FULL_FIELD;
5604 + key->extract.from_hdr.field =
5605 + dpaa2_hash_fields[i].cls_field;
5606 + cls_cfg.num_extracts++;
5607 +
5608 + enabled_flags |= dpaa2_hash_fields[i].rxnfc_field;
5609 + }
5610 +
5611 + dma_mem = kzalloc(DPAA2_CLASSIFIER_DMA_SIZE, GFP_DMA | GFP_KERNEL);
5612 + if (!dma_mem)
5613 + return -ENOMEM;
5614 +
5615 +	err = dpni_prepare_key_cfg(&cls_cfg, dma_mem);
5616 +	if (err) {
5617 +		dev_err(dev, "dpni_prepare_key_cfg error %d\n", err);
5618 +		kfree(dma_mem);
5619 +		return err;
5620 +	}
5620 +
5621 + memset(&dist_cfg, 0, sizeof(dist_cfg));
5622 +
5623 + /* Prepare for setting the rx dist */
5624 + dist_cfg.key_cfg_iova = dma_map_single(net_dev->dev.parent, dma_mem,
5625 + DPAA2_CLASSIFIER_DMA_SIZE,
5626 + DMA_TO_DEVICE);
5627 + if (dma_mapping_error(net_dev->dev.parent, dist_cfg.key_cfg_iova)) {
5628 + dev_err(dev, "DMA mapping failed\n");
5629 + kfree(dma_mem);
5630 + return -ENOMEM;
5631 + }
5632 +
5633 + dist_cfg.dist_size = dpaa2_queue_count(priv);
5634 + if (dpaa2_eth_fs_enabled(priv)) {
5635 + dist_cfg.dist_mode = DPNI_DIST_MODE_FS;
5636 + dist_cfg.fs_cfg.miss_action = DPNI_FS_MISS_HASH;
5637 + } else {
5638 + dist_cfg.dist_mode = DPNI_DIST_MODE_HASH;
5639 + }
5640 +
5641 + err = dpni_set_rx_tc_dist(priv->mc_io, 0, priv->mc_token, 0, &dist_cfg);
5642 + dma_unmap_single(net_dev->dev.parent, dist_cfg.key_cfg_iova,
5643 + DPAA2_CLASSIFIER_DMA_SIZE, DMA_TO_DEVICE);
5644 + kfree(dma_mem);
5645 + if (err) {
5646 + dev_err(dev, "dpni_set_rx_tc_dist() error %d\n", err);
5647 + return err;
5648 + }
5649 +
5650 + priv->rx_hash_fields = enabled_flags;
5651 +
5652 + return 0;
5653 +}
5654 +
5655 +static int dpaa2_cls_prep_rule(struct net_device *net_dev,
5656 + struct ethtool_rx_flow_spec *fs,
5657 + void *key)
5658 +{
5659 + struct ethtool_tcpip4_spec *l4ip4_h, *l4ip4_m;
5660 + struct ethhdr *eth_h, *eth_m;
5661 + struct ethtool_flow_ext *ext_h, *ext_m;
5662 + const u8 key_size = dpaa2_cls_key_size(net_dev);
5663 + void *msk = key + key_size;
5664 +
5665 + memset(key, 0, key_size * 2);
5666 +
5667 +	/* This code is a major mess; it must be cleaned up once the
5668 +	 * classification mask issue is fixed and the key format is made
5669 +	 * static.
5670 +	 */
5670 +
5671 + switch (fs->flow_type & 0xff) {
5672 + case TCP_V4_FLOW:
5673 + l4ip4_h = &fs->h_u.tcp_ip4_spec;
5674 + l4ip4_m = &fs->m_u.tcp_ip4_spec;
5675 + /* TODO: ethertype to match IPv4 and protocol to match TCP */
5676 + goto l4ip4;
5677 +
5678 + case UDP_V4_FLOW:
5679 + l4ip4_h = &fs->h_u.udp_ip4_spec;
5680 + l4ip4_m = &fs->m_u.udp_ip4_spec;
5681 + goto l4ip4;
5682 +
5683 + case SCTP_V4_FLOW:
5684 + l4ip4_h = &fs->h_u.sctp_ip4_spec;
5685 + l4ip4_m = &fs->m_u.sctp_ip4_spec;
5686 +
5687 +l4ip4:
5688 + if (l4ip4_m->tos) {
5689 + netdev_err(net_dev,
5690 + "ToS is not supported for IPv4 L4\n");
5691 + return -EOPNOTSUPP;
5692 + }
5693 + if (l4ip4_m->ip4src &&
5694 + !dpaa2_cls_is_enabled(net_dev, RXH_IP_SRC)) {
5695 + netdev_err(net_dev, "IP SRC not supported!\n");
5696 + return -EOPNOTSUPP;
5697 + }
5698 + if (l4ip4_m->ip4dst &&
5699 + !dpaa2_cls_is_enabled(net_dev, RXH_IP_DST)) {
5700 + netdev_err(net_dev, "IP DST not supported!\n");
5701 + return -EOPNOTSUPP;
5702 + }
5703 + if (l4ip4_m->psrc &&
5704 + !dpaa2_cls_is_enabled(net_dev, RXH_L4_B_0_1)) {
5705 +			netdev_err(net_dev, "PSRC not supported!\n");
5706 + return -EOPNOTSUPP;
5707 + }
5708 + if (l4ip4_m->pdst &&
5709 + !dpaa2_cls_is_enabled(net_dev, RXH_L4_B_2_3)) {
5710 +			netdev_err(net_dev, "PDST not supported!\n");
5711 + return -EOPNOTSUPP;
5712 + }
5713 +
5714 + if (dpaa2_cls_is_enabled(net_dev, RXH_IP_SRC)) {
5715 + *(u32 *)(key + dpaa2_cls_key_off(net_dev, RXH_IP_SRC))
5716 + = l4ip4_h->ip4src;
5717 + *(u32 *)(msk + dpaa2_cls_key_off(net_dev, RXH_IP_SRC))
5718 + = l4ip4_m->ip4src;
5719 + }
5720 + if (dpaa2_cls_is_enabled(net_dev, RXH_IP_DST)) {
5721 + *(u32 *)(key + dpaa2_cls_key_off(net_dev, RXH_IP_DST))
5722 + = l4ip4_h->ip4dst;
5723 + *(u32 *)(msk + dpaa2_cls_key_off(net_dev, RXH_IP_DST))
5724 + = l4ip4_m->ip4dst;
5725 + }
5726 +
5727 + if (dpaa2_cls_is_enabled(net_dev, RXH_L4_B_0_1)) {
5728 + *(u32 *)(key + dpaa2_cls_key_off(net_dev, RXH_L4_B_0_1))
5729 + = l4ip4_h->psrc;
5730 + *(u32 *)(msk + dpaa2_cls_key_off(net_dev, RXH_L4_B_0_1))
5731 + = l4ip4_m->psrc;
5732 + }
5733 +
5734 + if (dpaa2_cls_is_enabled(net_dev, RXH_L4_B_2_3)) {
5735 + *(u32 *)(key + dpaa2_cls_key_off(net_dev, RXH_L4_B_2_3))
5736 + = l4ip4_h->pdst;
5737 + *(u32 *)(msk + dpaa2_cls_key_off(net_dev, RXH_L4_B_2_3))
5738 + = l4ip4_m->pdst;
5739 + }
5740 + break;
5741 +
5742 + case ETHER_FLOW:
5743 + eth_h = &fs->h_u.ether_spec;
5744 + eth_m = &fs->m_u.ether_spec;
5745 +
5746 + if (eth_m->h_proto) {
5747 + netdev_err(net_dev, "Ethertype is not supported!\n");
5748 + return -EOPNOTSUPP;
5749 + }
5750 +
5751 + if (!is_zero_ether_addr(eth_m->h_source)) {
5752 + netdev_err(net_dev, "ETH SRC is not supported!\n");
5753 + return -EOPNOTSUPP;
5754 + }
5755 +
5756 + if (dpaa2_cls_is_enabled(net_dev, RXH_L2DA)) {
5757 + ether_addr_copy(key
5758 + + dpaa2_cls_key_off(net_dev, RXH_L2DA),
5759 + eth_h->h_dest);
5760 + ether_addr_copy(msk
5761 + + dpaa2_cls_key_off(net_dev, RXH_L2DA),
5762 + eth_m->h_dest);
5763 + } else {
5764 + if (!is_zero_ether_addr(eth_m->h_dest)) {
5765 + netdev_err(net_dev,
5766 + "ETH DST is not supported!\n");
5767 + return -EOPNOTSUPP;
5768 + }
5769 + }
5770 + break;
5771 +
5772 + default:
5773 + /* TODO: IP user flow, AH, ESP */
5774 + return -EOPNOTSUPP;
5775 + }
5776 +
5777 + if (fs->flow_type & FLOW_EXT) {
5778 + /* TODO: ETH data, VLAN ethertype, VLAN TCI .. */
5779 + return -EOPNOTSUPP;
5780 + }
5781 +
5782 + if (fs->flow_type & FLOW_MAC_EXT) {
5783 + ext_h = &fs->h_ext;
5784 + ext_m = &fs->m_ext;
5785 +
5786 + if (dpaa2_cls_is_enabled(net_dev, RXH_L2DA)) {
5787 + ether_addr_copy(key
5788 + + dpaa2_cls_key_off(net_dev, RXH_L2DA),
5789 + ext_h->h_dest);
5790 + ether_addr_copy(msk
5791 + + dpaa2_cls_key_off(net_dev, RXH_L2DA),
5792 + ext_m->h_dest);
5793 + } else {
5794 + if (!is_zero_ether_addr(ext_m->h_dest)) {
5795 + netdev_err(net_dev,
5796 + "ETH DST is not supported!\n");
5797 + return -EOPNOTSUPP;
5798 + }
5799 + }
5800 + }
5801 + return 0;
5802 +}
5803 +
5804 +static int dpaa2_do_cls(struct net_device *net_dev,
5805 + struct ethtool_rx_flow_spec *fs,
5806 + bool add)
5807 +{
5808 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5809 + const int rule_cnt = DPAA2_CLASSIFIER_ENTRY_COUNT;
5810 + struct dpni_rule_cfg rule_cfg;
5811 + void *dma_mem;
5812 + int err = 0;
5813 +
5814 + if (!dpaa2_eth_fs_enabled(priv)) {
5815 +		netdev_err(net_dev, "dev does not support steering!\n");
5816 +		return -EOPNOTSUPP;
5818 + }
5819 +
5820 + if ((fs->ring_cookie != RX_CLS_FLOW_DISC &&
5821 + fs->ring_cookie >= dpaa2_queue_count(priv)) ||
5822 + fs->location >= rule_cnt)
5823 + return -EINVAL;
5824 +
5825 + memset(&rule_cfg, 0, sizeof(rule_cfg));
5826 + rule_cfg.key_size = dpaa2_cls_key_size(net_dev);
5827 +
5828 + /* allocate twice the key size, for the actual key and for mask */
5829 + dma_mem = kzalloc(rule_cfg.key_size * 2, GFP_DMA | GFP_KERNEL);
5830 + if (!dma_mem)
5831 + return -ENOMEM;
5832 +
5833 + err = dpaa2_cls_prep_rule(net_dev, fs, dma_mem);
5834 + if (err)
5835 + goto err_free_mem;
5836 +
5837 +	rule_cfg.key_iova = dma_map_single(net_dev->dev.parent, dma_mem,
5838 +					   rule_cfg.key_size * 2,
5839 +					   DMA_TO_DEVICE);
5840 +	if (dma_mapping_error(net_dev->dev.parent, rule_cfg.key_iova)) {
5841 +		netdev_err(net_dev, "DMA mapping failed\n");
5842 +		err = -ENOMEM;
5843 +		goto err_free_mem;
5844 +	}
5845 +
5841 + rule_cfg.mask_iova = rule_cfg.key_iova + rule_cfg.key_size;
5842 +
5843 + if (!(priv->dpni_attrs.options & DPNI_OPT_FS_MASK_SUPPORT)) {
5844 + int i;
5845 + u8 *mask = dma_mem + rule_cfg.key_size;
5846 +
5847 + /* check that nothing is masked out, otherwise it won't work */
5848 + for (i = 0; i < rule_cfg.key_size; i++) {
5849 + if (mask[i] == 0xff)
5850 + continue;
5851 + netdev_err(net_dev, "dev does not support masking!\n");
5852 + err = -EOPNOTSUPP;
5853 + goto err_free_mem;
5854 + }
5855 + rule_cfg.mask_iova = 0;
5856 + }
5857 +
5858 + /* No way to control rule order in firmware */
5859 + if (add)
5860 + err = dpni_add_fs_entry(priv->mc_io, 0, priv->mc_token, 0,
5861 + &rule_cfg, (u16)fs->ring_cookie);
5862 + else
5863 + err = dpni_remove_fs_entry(priv->mc_io, 0, priv->mc_token, 0,
5864 + &rule_cfg);
5865 +
5866 + dma_unmap_single(net_dev->dev.parent, rule_cfg.key_iova,
5867 + rule_cfg.key_size * 2, DMA_TO_DEVICE);
5868 + if (err) {
5869 +		netdev_err(net_dev, "dpni_%s_fs_entry() error %d\n",
5870 +			   add ? "add" : "remove", err);
5870 + goto err_free_mem;
5871 + }
5872 +
5873 +	if (add) {
5874 +		priv->cls_rule[fs->location].fs = *fs;
5875 +		priv->cls_rule[fs->location].in_use = true;
5876 +	}
5875 +
5876 +err_free_mem:
5877 + kfree(dma_mem);
5878 +
5879 + return err;
5880 +}
5881 +
5882 +static int dpaa2_add_cls(struct net_device *net_dev,
5883 +			 struct ethtool_rx_flow_spec *fs)
5884 +{
5885 +	return dpaa2_do_cls(net_dev, fs, true);
5886 +}
5897 +
5898 +static int dpaa2_del_cls(struct net_device *net_dev, int location)
5899 +{
5900 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5901 + int err;
5902 +
5903 + err = dpaa2_do_cls(net_dev, &priv->cls_rule[location].fs, false);
5904 + if (err)
5905 + return err;
5906 +
5907 + priv->cls_rule[location].in_use = false;
5908 +
5909 + return 0;
5910 +}
5911 +
5912 +static void dpaa2_clear_cls(struct net_device *net_dev)
5913 +{
5914 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5915 + int i, err;
5916 +
5917 + for (i = 0; i < DPAA2_CLASSIFIER_ENTRY_COUNT; i++) {
5918 + if (!priv->cls_rule[i].in_use)
5919 + continue;
5920 +
5921 + err = dpaa2_del_cls(net_dev, i);
5922 + if (err)
5923 + netdev_warn(net_dev,
5924 + "err trying to delete classification entry %d\n",
5925 + i);
5926 + }
5927 +}
5928 +
5929 +static int dpaa2_set_rxnfc(struct net_device *net_dev,
5930 + struct ethtool_rxnfc *rxnfc)
5931 +{
5932 + int err = 0;
5933 +
5934 + switch (rxnfc->cmd) {
5935 + case ETHTOOL_SRXFH:
5936 +		/* First off, clear ALL classification rules; changing the
5937 +		 * key composition will break them anyway
5938 +		 */
5939 + dpaa2_clear_cls(net_dev);
5940 + /* we purposely ignore cmd->flow_type for now, because the
5941 + * classifier only supports a single set of fields for all
5942 + * protocols
5943 + */
5944 + err = dpaa2_set_hash(net_dev, rxnfc->data);
5945 + break;
5946 + case ETHTOOL_SRXCLSRLINS:
5947 + err = dpaa2_add_cls(net_dev, &rxnfc->fs);
5948 + break;
5949 +
5950 + case ETHTOOL_SRXCLSRLDEL:
5951 + err = dpaa2_del_cls(net_dev, rxnfc->fs.location);
5952 + break;
5953 +
5954 + default:
5955 + err = -EOPNOTSUPP;
5956 + }
5957 +
5958 + return err;
5959 +}
5960 +
5961 +static int dpaa2_get_rxnfc(struct net_device *net_dev,
5962 + struct ethtool_rxnfc *rxnfc, u32 *rule_locs)
5963 +{
5964 + struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
5965 + const int rule_cnt = DPAA2_CLASSIFIER_ENTRY_COUNT;
5966 + int i, j;
5967 +
5968 + switch (rxnfc->cmd) {
5969 + case ETHTOOL_GRXFH:
5970 + /* we purposely ignore cmd->flow_type for now, because the
5971 + * classifier only supports a single set of fields for all
5972 + * protocols
5973 + */
5974 + rxnfc->data = priv->rx_hash_fields;
5975 + break;
5976 +
5977 + case ETHTOOL_GRXRINGS:
5978 + rxnfc->data = dpaa2_queue_count(priv);
5979 + break;
5980 +
5981 + case ETHTOOL_GRXCLSRLCNT:
5982 + for (i = 0, rxnfc->rule_cnt = 0; i < rule_cnt; i++)
5983 + if (priv->cls_rule[i].in_use)
5984 + rxnfc->rule_cnt++;
5985 + rxnfc->data = rule_cnt;
5986 + break;
5987 +
5988 + case ETHTOOL_GRXCLSRULE:
5989 + if (!priv->cls_rule[rxnfc->fs.location].in_use)
5990 + return -EINVAL;
5991 +
5992 + rxnfc->fs = priv->cls_rule[rxnfc->fs.location].fs;
5993 + break;
5994 +
5995 + case ETHTOOL_GRXCLSRLALL:
5996 + for (i = 0, j = 0; i < rule_cnt; i++) {
5997 + if (!priv->cls_rule[i].in_use)
5998 + continue;
5999 + if (j == rxnfc->rule_cnt)
6000 + return -EMSGSIZE;
6001 + rule_locs[j++] = i;
6002 + }
6003 + rxnfc->rule_cnt = j;
6004 + rxnfc->data = rule_cnt;
6005 + break;
6006 +
6007 + default:
6008 + return -EOPNOTSUPP;
6009 + }
6010 +
6011 + return 0;
6012 +}
6013 +
6014 +const struct ethtool_ops dpaa2_ethtool_ops = {
6015 + .get_drvinfo = dpaa2_get_drvinfo,
6016 + .get_msglevel = dpaa2_get_msglevel,
6017 + .set_msglevel = dpaa2_set_msglevel,
6018 + .get_link = ethtool_op_get_link,
6019 + .get_settings = dpaa2_get_settings,
6020 + .set_settings = dpaa2_set_settings,
6021 + .get_sset_count = dpaa2_get_sset_count,
6022 + .get_ethtool_stats = dpaa2_get_ethtool_stats,
6023 + .get_strings = dpaa2_get_strings,
6024 + .get_rxnfc = dpaa2_get_rxnfc,
6025 + .set_rxnfc = dpaa2_set_rxnfc,
6026 +};
6027 --- /dev/null
6028 +++ b/drivers/staging/fsl-dpaa2/ethernet/dpkg.h
6029 @@ -0,0 +1,175 @@
6030 +/* Copyright 2013-2015 Freescale Semiconductor Inc.
6031 + *
6032 + * Redistribution and use in source and binary forms, with or without
6033 + * modification, are permitted provided that the following conditions are met:
6034 + * * Redistributions of source code must retain the above copyright
6035 + * notice, this list of conditions and the following disclaimer.
6036 + * * Redistributions in binary form must reproduce the above copyright
6037 + * notice, this list of conditions and the following disclaimer in the
6038 + * documentation and/or other materials provided with the distribution.
6039 + * * Neither the name of the above-listed copyright holders nor the
6040 + * names of any contributors may be used to endorse or promote products
6041 + * derived from this software without specific prior written permission.
6042 + *
6043 + *
6044 + * ALTERNATIVELY, this software may be distributed under the terms of the
6045 + * GNU General Public License ("GPL") as published by the Free Software
6046 + * Foundation, either version 2 of that License or (at your option) any
6047 + * later version.
6048 + *
6049 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
6050 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
6051 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
6052 + * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
6053 + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
6054 + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
6055 + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
6056 + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
6057 + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
6058 + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
6059 + * POSSIBILITY OF SUCH DAMAGE.
6060 + */
6061 +#ifndef __FSL_DPKG_H_
6062 +#define __FSL_DPKG_H_
6063 +
6064 +#include <linux/types.h>
6065 +#include "../../fsl-mc/include/net.h"
6066 +
6067 +/* Data Path Key Generator API
6068 + * Contains initialization APIs and runtime APIs for the Key Generator
6069 + */
6070 +
6071 +/** Key Generator properties */
6072 +
6073 +/**
6074 + * Number of masks per key extraction
6075 + */
6076 +#define DPKG_NUM_OF_MASKS 4
6077 +/**
6078 + * Number of extractions per key profile
6079 + */
6080 +#define DPKG_MAX_NUM_OF_EXTRACTS 10
6081 +
6082 +/**
6083 + * enum dpkg_extract_from_hdr_type - Selecting extraction by header types
6084 + * @DPKG_FROM_HDR: Extract selected bytes from header, by offset
6085 + * @DPKG_FROM_FIELD: Extract selected bytes from header, by offset from field
6086 + * @DPKG_FULL_FIELD: Extract a full field
6087 + */
6088 +enum dpkg_extract_from_hdr_type {
6089 + DPKG_FROM_HDR = 0,
6090 + DPKG_FROM_FIELD = 1,
6091 + DPKG_FULL_FIELD = 2
6092 +};
6093 +
6094 +/**
6095 + * enum dpkg_extract_type - Enumeration for selecting extraction type
6096 + * @DPKG_EXTRACT_FROM_HDR: Extract from the header
6097 + * @DPKG_EXTRACT_FROM_DATA: Extract from data not in specific header
6098 + * @DPKG_EXTRACT_FROM_PARSE: Extract from parser-result;
6099 + * e.g. can be used to extract header existence;
6100 + * please refer to 'Parse Result definition' section in the parser BG
6101 + */
6102 +enum dpkg_extract_type {
6103 + DPKG_EXTRACT_FROM_HDR = 0,
6104 + DPKG_EXTRACT_FROM_DATA = 1,
6105 + DPKG_EXTRACT_FROM_PARSE = 3
6106 +};
6107 +
6108 +/**
6109 + * struct dpkg_mask - A structure for defining a single extraction mask
6110 + * @mask: Byte mask for the extracted content
6111 + * @offset: Offset within the extracted content
6112 + */
6113 +struct dpkg_mask {
6114 + uint8_t mask;
6115 + uint8_t offset;
6116 +};
6117 +
6118 +/**
6119 + * struct dpkg_extract - A structure for defining a single extraction
6120 + * @type: Determines how the union below is interpreted:
6121 + * DPKG_EXTRACT_FROM_HDR: selects 'from_hdr';
6122 + * DPKG_EXTRACT_FROM_DATA: selects 'from_data';
6123 + * DPKG_EXTRACT_FROM_PARSE: selects 'from_parse'
6124 + * @extract: Selects extraction method
6125 + * @num_of_byte_masks: Defines the number of valid entries in the array below;
6126 + * This is also the number of bytes to be used as masks
6127 + * @masks: Masks parameters
6128 + */
6129 +struct dpkg_extract {
6130 + enum dpkg_extract_type type;
6131 + /**
6132 + * union extract - Selects extraction method
6133 + * @from_hdr - Used when 'type = DPKG_EXTRACT_FROM_HDR'
6134 + * @from_data - Used when 'type = DPKG_EXTRACT_FROM_DATA'
6135 + * @from_parse - Used when 'type = DPKG_EXTRACT_FROM_PARSE'
6136 + */
6137 + union {
6138 + /**
6139 + * struct from_hdr - Used when 'type = DPKG_EXTRACT_FROM_HDR'
6140 + * @prot: Any of the supported headers
6141 + * @type: Defines the type of header extraction:
6142 + * DPKG_FROM_HDR: use size & offset below;
6143 + * DPKG_FROM_FIELD: use field, size and offset below;
6144 + * DPKG_FULL_FIELD: use field below
6145 + * @field: One of the supported fields (NH_FLD_)
6146 + *
6147 + * @size: Size in bytes
6148 + * @offset: Byte offset
6149 + * @hdr_index: Clear for cases not listed below;
6150 + * Used for protocols that may have more than a single
6151 + * header, 0 indicates an outer header;
6152 + * Supported protocols (possible values):
6153 + * NET_PROT_VLAN (0, HDR_INDEX_LAST);
6154 + * NET_PROT_MPLS (0, 1, HDR_INDEX_LAST);
6155 + * NET_PROT_IP(0, HDR_INDEX_LAST);
6156 + * NET_PROT_IPv4(0, HDR_INDEX_LAST);
6157 + * NET_PROT_IPv6(0, HDR_INDEX_LAST);
6158 + */
6159 +
6160 + struct {
6161 + enum net_prot prot;
6162 + enum dpkg_extract_from_hdr_type type;
6163 + uint32_t field;
6164 + uint8_t size;
6165 + uint8_t offset;
6166 + uint8_t hdr_index;
6167 + } from_hdr;
6168 + /**
6169 + * struct from_data - Used when 'type = DPKG_EXTRACT_FROM_DATA'
6170 + * @size: Size in bytes
6171 + * @offset: Byte offset
6172 + */
6173 + struct {
6174 + uint8_t size;
6175 + uint8_t offset;
6176 + } from_data;
6177 +
6178 + /**
6179 + * struct from_parse - Used when 'type = DPKG_EXTRACT_FROM_PARSE'
6180 + * @size: Size in bytes
6181 + * @offset: Byte offset
6182 + */
6183 + struct {
6184 + uint8_t size;
6185 + uint8_t offset;
6186 + } from_parse;
6187 + } extract;
6188 +
6189 + uint8_t num_of_byte_masks;
6190 + struct dpkg_mask masks[DPKG_NUM_OF_MASKS];
6191 +};
6192 +
6193 +/**
6194 + * struct dpkg_profile_cfg - A structure for defining a full Key Generation
6195 + * profile (rule)
6196 + * @num_extracts: Defines the number of valid entries in the array below
6197 + * @extracts: Array of required extractions
6198 + */
6199 +struct dpkg_profile_cfg {
6200 + uint8_t num_extracts;
6201 + struct dpkg_extract extracts[DPKG_MAX_NUM_OF_EXTRACTS];
6202 +};
6203 +
6204 +#endif /* __FSL_DPKG_H_ */
6205 --- /dev/null
6206 +++ b/drivers/staging/fsl-dpaa2/ethernet/dpni-cmd.h
6207 @@ -0,0 +1,1058 @@
6208 +/* Copyright 2013-2015 Freescale Semiconductor Inc.
6209 + *
6210 + * Redistribution and use in source and binary forms, with or without
6211 + * modification, are permitted provided that the following conditions are met:
6212 + * * Redistributions of source code must retain the above copyright
6213 + * notice, this list of conditions and the following disclaimer.
6214 + * * Redistributions in binary form must reproduce the above copyright
6215 + * notice, this list of conditions and the following disclaimer in the
6216 + * documentation and/or other materials provided with the distribution.
6217 + * * Neither the name of the above-listed copyright holders nor the
6218 + * names of any contributors may be used to endorse or promote products
6219 + * derived from this software without specific prior written permission.
6220 + *
6221 + *
6222 + * ALTERNATIVELY, this software may be distributed under the terms of the
6223 + * GNU General Public License ("GPL") as published by the Free Software
6224 + * Foundation, either version 2 of that License or (at your option) any
6225 + * later version.
6226 + *
6227 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
6228 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
6229 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
6230 + * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
6231 + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
6232 + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
6233 + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
6234 + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
6235 + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
6236 + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
6237 + * POSSIBILITY OF SUCH DAMAGE.
6238 + */
6239 +#ifndef _FSL_DPNI_CMD_H
6240 +#define _FSL_DPNI_CMD_H
6241 +
6242 +/* DPNI Version */
6243 +#define DPNI_VER_MAJOR 6
6244 +#define DPNI_VER_MINOR 0
6245 +
6246 +/* Command IDs */
6247 +#define DPNI_CMDID_OPEN 0x801
6248 +#define DPNI_CMDID_CLOSE 0x800
6249 +#define DPNI_CMDID_CREATE 0x901
6250 +#define DPNI_CMDID_DESTROY 0x900
6251 +
6252 +#define DPNI_CMDID_ENABLE 0x002
6253 +#define DPNI_CMDID_DISABLE 0x003
6254 +#define DPNI_CMDID_GET_ATTR 0x004
6255 +#define DPNI_CMDID_RESET 0x005
6256 +#define DPNI_CMDID_IS_ENABLED 0x006
6257 +
6258 +#define DPNI_CMDID_SET_IRQ 0x010
6259 +#define DPNI_CMDID_GET_IRQ 0x011
6260 +#define DPNI_CMDID_SET_IRQ_ENABLE 0x012
6261 +#define DPNI_CMDID_GET_IRQ_ENABLE 0x013
6262 +#define DPNI_CMDID_SET_IRQ_MASK 0x014
6263 +#define DPNI_CMDID_GET_IRQ_MASK 0x015
6264 +#define DPNI_CMDID_GET_IRQ_STATUS 0x016
6265 +#define DPNI_CMDID_CLEAR_IRQ_STATUS 0x017
6266 +
6267 +#define DPNI_CMDID_SET_POOLS 0x200
6268 +#define DPNI_CMDID_GET_RX_BUFFER_LAYOUT 0x201
6269 +#define DPNI_CMDID_SET_RX_BUFFER_LAYOUT 0x202
6270 +#define DPNI_CMDID_GET_TX_BUFFER_LAYOUT 0x203
6271 +#define DPNI_CMDID_SET_TX_BUFFER_LAYOUT 0x204
6272 +#define DPNI_CMDID_SET_TX_CONF_BUFFER_LAYOUT 0x205
6273 +#define DPNI_CMDID_GET_TX_CONF_BUFFER_LAYOUT 0x206
6274 +#define DPNI_CMDID_SET_L3_CHKSUM_VALIDATION 0x207
6275 +#define DPNI_CMDID_GET_L3_CHKSUM_VALIDATION 0x208
6276 +#define DPNI_CMDID_SET_L4_CHKSUM_VALIDATION 0x209
6277 +#define DPNI_CMDID_GET_L4_CHKSUM_VALIDATION 0x20A
6278 +#define DPNI_CMDID_SET_ERRORS_BEHAVIOR 0x20B
6279 +#define DPNI_CMDID_SET_TX_CONF_REVOKE 0x20C
6280 +
6281 +#define DPNI_CMDID_GET_QDID 0x210
6282 +#define DPNI_CMDID_GET_SP_INFO 0x211
6283 +#define DPNI_CMDID_GET_TX_DATA_OFFSET 0x212
6284 +#define DPNI_CMDID_GET_COUNTER 0x213
6285 +#define DPNI_CMDID_SET_COUNTER 0x214
6286 +#define DPNI_CMDID_GET_LINK_STATE 0x215
6287 +#define DPNI_CMDID_SET_MAX_FRAME_LENGTH 0x216
6288 +#define DPNI_CMDID_GET_MAX_FRAME_LENGTH 0x217
6289 +#define DPNI_CMDID_SET_MTU 0x218
6290 +#define DPNI_CMDID_GET_MTU 0x219
6291 +#define DPNI_CMDID_SET_LINK_CFG 0x21A
6292 +#define DPNI_CMDID_SET_TX_SHAPING 0x21B
6293 +
6294 +#define DPNI_CMDID_SET_MCAST_PROMISC 0x220
6295 +#define DPNI_CMDID_GET_MCAST_PROMISC 0x221
6296 +#define DPNI_CMDID_SET_UNICAST_PROMISC 0x222
6297 +#define DPNI_CMDID_GET_UNICAST_PROMISC 0x223
6298 +#define DPNI_CMDID_SET_PRIM_MAC 0x224
6299 +#define DPNI_CMDID_GET_PRIM_MAC 0x225
6300 +#define DPNI_CMDID_ADD_MAC_ADDR 0x226
6301 +#define DPNI_CMDID_REMOVE_MAC_ADDR 0x227
6302 +#define DPNI_CMDID_CLR_MAC_FILTERS 0x228
6303 +
6304 +#define DPNI_CMDID_SET_VLAN_FILTERS 0x230
6305 +#define DPNI_CMDID_ADD_VLAN_ID 0x231
6306 +#define DPNI_CMDID_REMOVE_VLAN_ID 0x232
6307 +#define DPNI_CMDID_CLR_VLAN_FILTERS 0x233
6308 +
6309 +#define DPNI_CMDID_SET_RX_TC_DIST 0x235
6310 +#define DPNI_CMDID_SET_TX_FLOW 0x236
6311 +#define DPNI_CMDID_GET_TX_FLOW 0x237
6312 +#define DPNI_CMDID_SET_RX_FLOW 0x238
6313 +#define DPNI_CMDID_GET_RX_FLOW 0x239
6314 +#define DPNI_CMDID_SET_RX_ERR_QUEUE 0x23A
6315 +#define DPNI_CMDID_GET_RX_ERR_QUEUE 0x23B
6316 +
6317 +#define DPNI_CMDID_SET_RX_TC_POLICING 0x23E
6318 +#define DPNI_CMDID_SET_RX_TC_EARLY_DROP 0x23F
6319 +
6320 +#define DPNI_CMDID_SET_QOS_TBL 0x240
6321 +#define DPNI_CMDID_ADD_QOS_ENT 0x241
6322 +#define DPNI_CMDID_REMOVE_QOS_ENT 0x242
6323 +#define DPNI_CMDID_CLR_QOS_TBL 0x243
6324 +#define DPNI_CMDID_ADD_FS_ENT 0x244
6325 +#define DPNI_CMDID_REMOVE_FS_ENT 0x245
6326 +#define DPNI_CMDID_CLR_FS_ENT 0x246
6327 +#define DPNI_CMDID_SET_VLAN_INSERTION 0x247
6328 +#define DPNI_CMDID_SET_VLAN_REMOVAL 0x248
6329 +#define DPNI_CMDID_SET_IPR 0x249
6330 +#define DPNI_CMDID_SET_IPF 0x24A
6331 +
6332 +#define DPNI_CMDID_SET_TX_SELECTION 0x250
6333 +#define DPNI_CMDID_GET_RX_TC_POLICING 0x251
6334 +#define DPNI_CMDID_GET_RX_TC_EARLY_DROP 0x252
6335 +#define DPNI_CMDID_SET_RX_TC_CONGESTION_NOTIFICATION 0x253
6336 +#define DPNI_CMDID_GET_RX_TC_CONGESTION_NOTIFICATION 0x254
6337 +#define DPNI_CMDID_SET_TX_TC_CONGESTION_NOTIFICATION 0x255
6338 +#define DPNI_CMDID_GET_TX_TC_CONGESTION_NOTIFICATION 0x256
6339 +#define DPNI_CMDID_SET_TX_CONF 0x257
6340 +#define DPNI_CMDID_GET_TX_CONF 0x258
6341 +#define DPNI_CMDID_SET_TX_CONF_CONGESTION_NOTIFICATION 0x259
6342 +#define DPNI_CMDID_GET_TX_CONF_CONGESTION_NOTIFICATION 0x25A
6343 +#define DPNI_CMDID_SET_TX_TC_EARLY_DROP 0x25B
6344 +#define DPNI_CMDID_GET_TX_TC_EARLY_DROP 0x25C
6345 +
6346 +/* cmd, param, offset, width, type, arg_name */
6347 +#define DPNI_CMD_OPEN(cmd, dpni_id) \
6348 + MC_CMD_OP(cmd, 0, 0, 32, int, dpni_id)
6349 +
6350 +#define DPNI_PREP_EXTENDED_CFG(ext, cfg) \
6351 +do { \
6352 + MC_PREP_OP(ext, 0, 0, 16, uint16_t, cfg->tc_cfg[0].max_dist); \
6353 + MC_PREP_OP(ext, 0, 16, 16, uint16_t, cfg->tc_cfg[0].max_fs_entries); \
6354 + MC_PREP_OP(ext, 0, 32, 16, uint16_t, cfg->tc_cfg[1].max_dist); \
6355 + MC_PREP_OP(ext, 0, 48, 16, uint16_t, cfg->tc_cfg[1].max_fs_entries); \
6356 + MC_PREP_OP(ext, 1, 0, 16, uint16_t, cfg->tc_cfg[2].max_dist); \
6357 + MC_PREP_OP(ext, 1, 16, 16, uint16_t, cfg->tc_cfg[2].max_fs_entries); \
6358 + MC_PREP_OP(ext, 1, 32, 16, uint16_t, cfg->tc_cfg[3].max_dist); \
6359 + MC_PREP_OP(ext, 1, 48, 16, uint16_t, cfg->tc_cfg[3].max_fs_entries); \
6360 + MC_PREP_OP(ext, 2, 0, 16, uint16_t, cfg->tc_cfg[4].max_dist); \
6361 + MC_PREP_OP(ext, 2, 16, 16, uint16_t, cfg->tc_cfg[4].max_fs_entries); \
6362 + MC_PREP_OP(ext, 2, 32, 16, uint16_t, cfg->tc_cfg[5].max_dist); \
6363 + MC_PREP_OP(ext, 2, 48, 16, uint16_t, cfg->tc_cfg[5].max_fs_entries); \
6364 + MC_PREP_OP(ext, 3, 0, 16, uint16_t, cfg->tc_cfg[6].max_dist); \
6365 + MC_PREP_OP(ext, 3, 16, 16, uint16_t, cfg->tc_cfg[6].max_fs_entries); \
6366 + MC_PREP_OP(ext, 3, 32, 16, uint16_t, cfg->tc_cfg[7].max_dist); \
6367 + MC_PREP_OP(ext, 3, 48, 16, uint16_t, cfg->tc_cfg[7].max_fs_entries); \
6368 + MC_PREP_OP(ext, 4, 0, 16, uint16_t, \
6369 + cfg->ipr_cfg.max_open_frames_ipv4); \
6370 + MC_PREP_OP(ext, 4, 16, 16, uint16_t, \
6371 + cfg->ipr_cfg.max_open_frames_ipv6); \
6372 + MC_PREP_OP(ext, 4, 32, 16, uint16_t, \
6373 + cfg->ipr_cfg.max_reass_frm_size); \
6374 + MC_PREP_OP(ext, 5, 0, 16, uint16_t, \
6375 + cfg->ipr_cfg.min_frag_size_ipv4); \
6376 + MC_PREP_OP(ext, 5, 16, 16, uint16_t, \
6377 + cfg->ipr_cfg.min_frag_size_ipv6); \
6378 +} while (0)
6379 +
6380 +#define DPNI_EXT_EXTENDED_CFG(ext, cfg) \
6381 +do { \
6382 + MC_EXT_OP(ext, 0, 0, 16, uint16_t, cfg->tc_cfg[0].max_dist); \
6383 + MC_EXT_OP(ext, 0, 16, 16, uint16_t, cfg->tc_cfg[0].max_fs_entries); \
6384 + MC_EXT_OP(ext, 0, 32, 16, uint16_t, cfg->tc_cfg[1].max_dist); \
6385 + MC_EXT_OP(ext, 0, 48, 16, uint16_t, cfg->tc_cfg[1].max_fs_entries); \
6386 + MC_EXT_OP(ext, 1, 0, 16, uint16_t, cfg->tc_cfg[2].max_dist); \
6387 + MC_EXT_OP(ext, 1, 16, 16, uint16_t, cfg->tc_cfg[2].max_fs_entries); \
6388 + MC_EXT_OP(ext, 1, 32, 16, uint16_t, cfg->tc_cfg[3].max_dist); \
6389 + MC_EXT_OP(ext, 1, 48, 16, uint16_t, cfg->tc_cfg[3].max_fs_entries); \
6390 + MC_EXT_OP(ext, 2, 0, 16, uint16_t, cfg->tc_cfg[4].max_dist); \
6391 + MC_EXT_OP(ext, 2, 16, 16, uint16_t, cfg->tc_cfg[4].max_fs_entries); \
6392 + MC_EXT_OP(ext, 2, 32, 16, uint16_t, cfg->tc_cfg[5].max_dist); \
6393 + MC_EXT_OP(ext, 2, 48, 16, uint16_t, cfg->tc_cfg[5].max_fs_entries); \
6394 + MC_EXT_OP(ext, 3, 0, 16, uint16_t, cfg->tc_cfg[6].max_dist); \
6395 + MC_EXT_OP(ext, 3, 16, 16, uint16_t, cfg->tc_cfg[6].max_fs_entries); \
6396 + MC_EXT_OP(ext, 3, 32, 16, uint16_t, cfg->tc_cfg[7].max_dist); \
6397 + MC_EXT_OP(ext, 3, 48, 16, uint16_t, cfg->tc_cfg[7].max_fs_entries); \
6398 + MC_EXT_OP(ext, 4, 0, 16, uint16_t, \
6399 + cfg->ipr_cfg.max_open_frames_ipv4); \
6400 + MC_EXT_OP(ext, 4, 16, 16, uint16_t, \
6401 + cfg->ipr_cfg.max_open_frames_ipv6); \
6402 + MC_EXT_OP(ext, 4, 32, 16, uint16_t, \
6403 + cfg->ipr_cfg.max_reass_frm_size); \
6404 + MC_EXT_OP(ext, 5, 0, 16, uint16_t, \
6405 + cfg->ipr_cfg.min_frag_size_ipv4); \
6406 + MC_EXT_OP(ext, 5, 16, 16, uint16_t, \
6407 + cfg->ipr_cfg.min_frag_size_ipv6); \
6408 +} while (0)
6409 +
6410 +/* cmd, param, offset, width, type, arg_name */
6411 +#define DPNI_CMD_CREATE(cmd, cfg) \
6412 +do { \
6413 + MC_CMD_OP(cmd, 0, 0, 8, uint8_t, cfg->adv.max_tcs); \
6414 + MC_CMD_OP(cmd, 0, 8, 8, uint8_t, cfg->adv.max_senders); \
6415 + MC_CMD_OP(cmd, 0, 16, 8, uint8_t, cfg->mac_addr[5]); \
6416 + MC_CMD_OP(cmd, 0, 24, 8, uint8_t, cfg->mac_addr[4]); \
6417 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, cfg->mac_addr[3]); \
6418 + MC_CMD_OP(cmd, 0, 40, 8, uint8_t, cfg->mac_addr[2]); \
6419 + MC_CMD_OP(cmd, 0, 48, 8, uint8_t, cfg->mac_addr[1]); \
6420 + MC_CMD_OP(cmd, 0, 56, 8, uint8_t, cfg->mac_addr[0]); \
6421 + MC_CMD_OP(cmd, 1, 0, 32, uint32_t, cfg->adv.options); \
6422 + MC_CMD_OP(cmd, 2, 0, 8, uint8_t, cfg->adv.max_unicast_filters); \
6423 + MC_CMD_OP(cmd, 2, 8, 8, uint8_t, cfg->adv.max_multicast_filters); \
6424 + MC_CMD_OP(cmd, 2, 16, 8, uint8_t, cfg->adv.max_vlan_filters); \
6425 + MC_CMD_OP(cmd, 2, 24, 8, uint8_t, cfg->adv.max_qos_entries); \
6426 + MC_CMD_OP(cmd, 2, 32, 8, uint8_t, cfg->adv.max_qos_key_size); \
6427 + MC_CMD_OP(cmd, 2, 48, 8, uint8_t, cfg->adv.max_dist_key_size); \
6428 + MC_CMD_OP(cmd, 2, 56, 8, enum net_prot, cfg->adv.start_hdr); \
6429 + MC_CMD_OP(cmd, 4, 48, 8, uint8_t, cfg->adv.max_policers); \
6430 + MC_CMD_OP(cmd, 4, 56, 8, uint8_t, cfg->adv.max_congestion_ctrl); \
6431 + MC_CMD_OP(cmd, 5, 0, 64, uint64_t, cfg->adv.ext_cfg_iova); \
6432 +} while (0)
6433 +
6434 +/* cmd, param, offset, width, type, arg_name */
6435 +#define DPNI_CMD_SET_POOLS(cmd, cfg) \
6436 +do { \
6437 + MC_CMD_OP(cmd, 0, 0, 8, uint8_t, cfg->num_dpbp); \
6438 + MC_CMD_OP(cmd, 0, 8, 1, int, cfg->pools[0].backup_pool); \
6439 + MC_CMD_OP(cmd, 0, 9, 1, int, cfg->pools[1].backup_pool); \
6440 + MC_CMD_OP(cmd, 0, 10, 1, int, cfg->pools[2].backup_pool); \
6441 + MC_CMD_OP(cmd, 0, 11, 1, int, cfg->pools[3].backup_pool); \
6442 + MC_CMD_OP(cmd, 0, 12, 1, int, cfg->pools[4].backup_pool); \
6443 + MC_CMD_OP(cmd, 0, 13, 1, int, cfg->pools[5].backup_pool); \
6444 + MC_CMD_OP(cmd, 0, 14, 1, int, cfg->pools[6].backup_pool); \
6445 + MC_CMD_OP(cmd, 0, 15, 1, int, cfg->pools[7].backup_pool); \
6446 + MC_CMD_OP(cmd, 0, 32, 32, int, cfg->pools[0].dpbp_id); \
6447 + MC_CMD_OP(cmd, 4, 32, 16, uint16_t, cfg->pools[0].buffer_size);\
6448 + MC_CMD_OP(cmd, 1, 0, 32, int, cfg->pools[1].dpbp_id); \
6449 + MC_CMD_OP(cmd, 4, 48, 16, uint16_t, cfg->pools[1].buffer_size);\
6450 + MC_CMD_OP(cmd, 1, 32, 32, int, cfg->pools[2].dpbp_id); \
6451 + MC_CMD_OP(cmd, 5, 0, 16, uint16_t, cfg->pools[2].buffer_size);\
6452 + MC_CMD_OP(cmd, 2, 0, 32, int, cfg->pools[3].dpbp_id); \
6453 + MC_CMD_OP(cmd, 5, 16, 16, uint16_t, cfg->pools[3].buffer_size);\
6454 + MC_CMD_OP(cmd, 2, 32, 32, int, cfg->pools[4].dpbp_id); \
6455 + MC_CMD_OP(cmd, 5, 32, 16, uint16_t, cfg->pools[4].buffer_size);\
6456 + MC_CMD_OP(cmd, 3, 0, 32, int, cfg->pools[5].dpbp_id); \
6457 + MC_CMD_OP(cmd, 5, 48, 16, uint16_t, cfg->pools[5].buffer_size);\
6458 + MC_CMD_OP(cmd, 3, 32, 32, int, cfg->pools[6].dpbp_id); \
6459 + MC_CMD_OP(cmd, 6, 0, 16, uint16_t, cfg->pools[6].buffer_size);\
6460 + MC_CMD_OP(cmd, 4, 0, 32, int, cfg->pools[7].dpbp_id); \
6461 + MC_CMD_OP(cmd, 6, 16, 16, uint16_t, cfg->pools[7].buffer_size);\
6462 +} while (0)
6463 +
6464 +/* cmd, param, offset, width, type, arg_name */
6465 +#define DPNI_RSP_IS_ENABLED(cmd, en) \
6466 + MC_RSP_OP(cmd, 0, 0, 1, int, en)
6467 +
6468 +/* cmd, param, offset, width, type, arg_name */
6469 +#define DPNI_CMD_SET_IRQ(cmd, irq_index, irq_cfg) \
6470 +do { \
6471 + MC_CMD_OP(cmd, 0, 0, 32, uint32_t, irq_cfg->val); \
6472 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index); \
6473 + MC_CMD_OP(cmd, 1, 0, 64, uint64_t, irq_cfg->addr); \
6474 + MC_CMD_OP(cmd, 2, 0, 32, int, irq_cfg->irq_num); \
6475 +} while (0)
6476 +
6477 +/* cmd, param, offset, width, type, arg_name */
6478 +#define DPNI_CMD_GET_IRQ(cmd, irq_index) \
6479 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index)
6480 +
6481 +/* cmd, param, offset, width, type, arg_name */
6482 +#define DPNI_RSP_GET_IRQ(cmd, type, irq_cfg) \
6483 +do { \
6484 + MC_RSP_OP(cmd, 0, 0, 32, uint32_t, irq_cfg->val); \
6485 + MC_RSP_OP(cmd, 1, 0, 64, uint64_t, irq_cfg->addr); \
6486 + MC_RSP_OP(cmd, 2, 0, 32, int, irq_cfg->irq_num); \
6487 + MC_RSP_OP(cmd, 2, 32, 32, int, type); \
6488 +} while (0)
6489 +
6490 +/* cmd, param, offset, width, type, arg_name */
6491 +#define DPNI_CMD_SET_IRQ_ENABLE(cmd, irq_index, en) \
6492 +do { \
6493 + MC_CMD_OP(cmd, 0, 0, 8, uint8_t, en); \
6494 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index);\
6495 +} while (0)
6496 +
6497 +/* cmd, param, offset, width, type, arg_name */
6498 +#define DPNI_CMD_GET_IRQ_ENABLE(cmd, irq_index) \
6499 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index)
6500 +
6501 +/* cmd, param, offset, width, type, arg_name */
6502 +#define DPNI_RSP_GET_IRQ_ENABLE(cmd, en) \
6503 + MC_RSP_OP(cmd, 0, 0, 8, uint8_t, en)
6504 +
6505 +/* cmd, param, offset, width, type, arg_name */
6506 +#define DPNI_CMD_SET_IRQ_MASK(cmd, irq_index, mask) \
6507 +do { \
6508 + MC_CMD_OP(cmd, 0, 0, 32, uint32_t, mask); \
6509 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index);\
6510 +} while (0)
6511 +
6512 +/* cmd, param, offset, width, type, arg_name */
6513 +#define DPNI_CMD_GET_IRQ_MASK(cmd, irq_index) \
6514 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index)
6515 +
6516 +/* cmd, param, offset, width, type, arg_name */
6517 +#define DPNI_RSP_GET_IRQ_MASK(cmd, mask) \
6518 + MC_RSP_OP(cmd, 0, 0, 32, uint32_t, mask)
6519 +
6520 +/* cmd, param, offset, width, type, arg_name */
6521 +#define DPNI_CMD_GET_IRQ_STATUS(cmd, irq_index, status) \
6522 +do { \
6523 + MC_CMD_OP(cmd, 0, 0, 32, uint32_t, status);\
6524 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index);\
6525 +} while (0)
6526 +
6527 +/* cmd, param, offset, width, type, arg_name */
6528 +#define DPNI_RSP_GET_IRQ_STATUS(cmd, status) \
6529 + MC_RSP_OP(cmd, 0, 0, 32, uint32_t, status)
6530 +
6531 +/* cmd, param, offset, width, type, arg_name */
6532 +#define DPNI_CMD_CLEAR_IRQ_STATUS(cmd, irq_index, status) \
6533 +do { \
6534 + MC_CMD_OP(cmd, 0, 0, 32, uint32_t, status); \
6535 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index);\
6536 +} while (0)
6537 +
6538 +/* cmd, param, offset, width, type, arg_name */
6539 +#define DPNI_CMD_GET_ATTR(cmd, attr) \
6540 + MC_CMD_OP(cmd, 6, 0, 64, uint64_t, attr->ext_cfg_iova)
6541 +
6542 +/* cmd, param, offset, width, type, arg_name */
6543 +#define DPNI_RSP_GET_ATTR(cmd, attr) \
6544 +do { \
6545 + MC_RSP_OP(cmd, 0, 0, 32, int, attr->id);\
6546 + MC_RSP_OP(cmd, 0, 32, 8, uint8_t, attr->max_tcs); \
6547 + MC_RSP_OP(cmd, 0, 40, 8, uint8_t, attr->max_senders); \
6548 + MC_RSP_OP(cmd, 0, 48, 8, enum net_prot, attr->start_hdr); \
6549 + MC_RSP_OP(cmd, 1, 0, 32, uint32_t, attr->options); \
6550 + MC_RSP_OP(cmd, 2, 0, 8, uint8_t, attr->max_unicast_filters); \
6551 + MC_RSP_OP(cmd, 2, 8, 8, uint8_t, attr->max_multicast_filters);\
6552 + MC_RSP_OP(cmd, 2, 16, 8, uint8_t, attr->max_vlan_filters); \
6553 + MC_RSP_OP(cmd, 2, 24, 8, uint8_t, attr->max_qos_entries); \
6554 + MC_RSP_OP(cmd, 2, 32, 8, uint8_t, attr->max_qos_key_size); \
6555 + MC_RSP_OP(cmd, 2, 40, 8, uint8_t, attr->max_dist_key_size); \
6556 + MC_RSP_OP(cmd, 4, 48, 8, uint8_t, attr->max_policers); \
6557 + MC_RSP_OP(cmd, 4, 56, 8, uint8_t, attr->max_congestion_ctrl); \
6558 + MC_RSP_OP(cmd, 5, 32, 16, uint16_t, attr->version.major);\
6559 + MC_RSP_OP(cmd, 5, 48, 16, uint16_t, attr->version.minor);\
6560 +} while (0)
6561 +
6562 +/* cmd, param, offset, width, type, arg_name */
6563 +#define DPNI_CMD_SET_ERRORS_BEHAVIOR(cmd, cfg) \
6564 +do { \
6565 + MC_CMD_OP(cmd, 0, 0, 32, uint32_t, cfg->errors); \
6566 + MC_CMD_OP(cmd, 0, 32, 4, enum dpni_error_action, cfg->error_action); \
6567 + MC_CMD_OP(cmd, 0, 36, 1, int, cfg->set_frame_annotation); \
6568 +} while (0)
6569 +
6570 +/* cmd, param, offset, width, type, arg_name */
6571 +#define DPNI_RSP_GET_RX_BUFFER_LAYOUT(cmd, layout) \
6572 +do { \
6573 + MC_RSP_OP(cmd, 0, 0, 16, uint16_t, layout->private_data_size); \
6574 + MC_RSP_OP(cmd, 0, 16, 16, uint16_t, layout->data_align); \
6575 + MC_RSP_OP(cmd, 1, 0, 1, int, layout->pass_timestamp); \
6576 + MC_RSP_OP(cmd, 1, 1, 1, int, layout->pass_parser_result); \
6577 + MC_RSP_OP(cmd, 1, 2, 1, int, layout->pass_frame_status); \
6578 + MC_RSP_OP(cmd, 1, 16, 16, uint16_t, layout->data_head_room); \
6579 + MC_RSP_OP(cmd, 1, 32, 16, uint16_t, layout->data_tail_room); \
6580 +} while (0)
6581 +
6582 +/* cmd, param, offset, width, type, arg_name */
6583 +#define DPNI_CMD_SET_RX_BUFFER_LAYOUT(cmd, layout) \
6584 +do { \
6585 + MC_CMD_OP(cmd, 0, 0, 16, uint16_t, layout->private_data_size); \
6586 + MC_CMD_OP(cmd, 0, 16, 16, uint16_t, layout->data_align); \
6587 + MC_CMD_OP(cmd, 0, 32, 32, uint32_t, layout->options); \
6588 + MC_CMD_OP(cmd, 1, 0, 1, int, layout->pass_timestamp); \
6589 + MC_CMD_OP(cmd, 1, 1, 1, int, layout->pass_parser_result); \
6590 + MC_CMD_OP(cmd, 1, 2, 1, int, layout->pass_frame_status); \
6591 + MC_CMD_OP(cmd, 1, 16, 16, uint16_t, layout->data_head_room); \
6592 + MC_CMD_OP(cmd, 1, 32, 16, uint16_t, layout->data_tail_room); \
6593 +} while (0)
6594 +
6595 +/* cmd, param, offset, width, type, arg_name */
6596 +#define DPNI_RSP_GET_TX_BUFFER_LAYOUT(cmd, layout) \
6597 +do { \
6598 + MC_RSP_OP(cmd, 0, 0, 16, uint16_t, layout->private_data_size); \
6599 + MC_RSP_OP(cmd, 0, 16, 16, uint16_t, layout->data_align); \
6600 + MC_RSP_OP(cmd, 1, 0, 1, int, layout->pass_timestamp); \
6601 + MC_RSP_OP(cmd, 1, 1, 1, int, layout->pass_parser_result); \
6602 + MC_RSP_OP(cmd, 1, 2, 1, int, layout->pass_frame_status); \
6603 + MC_RSP_OP(cmd, 1, 16, 16, uint16_t, layout->data_head_room); \
6604 + MC_RSP_OP(cmd, 1, 32, 16, uint16_t, layout->data_tail_room); \
6605 +} while (0)
6606 +
6607 +/* cmd, param, offset, width, type, arg_name */
6608 +#define DPNI_CMD_SET_TX_BUFFER_LAYOUT(cmd, layout) \
6609 +do { \
6610 + MC_CMD_OP(cmd, 0, 0, 16, uint16_t, layout->private_data_size); \
6611 + MC_CMD_OP(cmd, 0, 16, 16, uint16_t, layout->data_align); \
6612 + MC_CMD_OP(cmd, 0, 32, 32, uint32_t, layout->options); \
6613 + MC_CMD_OP(cmd, 1, 0, 1, int, layout->pass_timestamp); \
6614 + MC_CMD_OP(cmd, 1, 1, 1, int, layout->pass_parser_result); \
6615 + MC_CMD_OP(cmd, 1, 2, 1, int, layout->pass_frame_status); \
6616 + MC_CMD_OP(cmd, 1, 16, 16, uint16_t, layout->data_head_room); \
6617 + MC_CMD_OP(cmd, 1, 32, 16, uint16_t, layout->data_tail_room); \
6618 +} while (0)
6619 +
6620 +/* cmd, param, offset, width, type, arg_name */
6621 +#define DPNI_RSP_GET_TX_CONF_BUFFER_LAYOUT(cmd, layout) \
6622 +do { \
6623 + MC_RSP_OP(cmd, 0, 0, 16, uint16_t, layout->private_data_size); \
6624 + MC_RSP_OP(cmd, 0, 16, 16, uint16_t, layout->data_align); \
6625 + MC_RSP_OP(cmd, 1, 0, 1, int, layout->pass_timestamp); \
6626 + MC_RSP_OP(cmd, 1, 1, 1, int, layout->pass_parser_result); \
6627 + MC_RSP_OP(cmd, 1, 2, 1, int, layout->pass_frame_status); \
6628 + MC_RSP_OP(cmd, 1, 16, 16, uint16_t, layout->data_head_room); \
6629 + MC_RSP_OP(cmd, 1, 32, 16, uint16_t, layout->data_tail_room); \
6630 +} while (0)
6631 +
6632 +/* cmd, param, offset, width, type, arg_name */
6633 +#define DPNI_CMD_SET_TX_CONF_BUFFER_LAYOUT(cmd, layout) \
6634 +do { \
6635 + MC_CMD_OP(cmd, 0, 0, 16, uint16_t, layout->private_data_size); \
6636 + MC_CMD_OP(cmd, 0, 16, 16, uint16_t, layout->data_align); \
6637 + MC_CMD_OP(cmd, 0, 32, 32, uint32_t, layout->options); \
6638 + MC_CMD_OP(cmd, 1, 0, 1, int, layout->pass_timestamp); \
6639 + MC_CMD_OP(cmd, 1, 1, 1, int, layout->pass_parser_result); \
6640 + MC_CMD_OP(cmd, 1, 2, 1, int, layout->pass_frame_status); \
6641 + MC_CMD_OP(cmd, 1, 16, 16, uint16_t, layout->data_head_room); \
6642 + MC_CMD_OP(cmd, 1, 32, 16, uint16_t, layout->data_tail_room); \
6643 +} while (0)
6644 +
6645 +/* cmd, param, offset, width, type, arg_name */
6646 +#define DPNI_CMD_SET_L3_CHKSUM_VALIDATION(cmd, en) \
6647 + MC_CMD_OP(cmd, 0, 0, 1, int, en)
6648 +
6649 +/* cmd, param, offset, width, type, arg_name */
6650 +#define DPNI_RSP_GET_L3_CHKSUM_VALIDATION(cmd, en) \
6651 + MC_RSP_OP(cmd, 0, 0, 1, int, en)
6652 +
6653 +/* cmd, param, offset, width, type, arg_name */
6654 +#define DPNI_CMD_SET_L4_CHKSUM_VALIDATION(cmd, en) \
6655 + MC_CMD_OP(cmd, 0, 0, 1, int, en)
6656 +
6657 +/* cmd, param, offset, width, type, arg_name */
6658 +#define DPNI_RSP_GET_L4_CHKSUM_VALIDATION(cmd, en) \
6659 + MC_RSP_OP(cmd, 0, 0, 1, int, en)
6660 +
6661 +/* cmd, param, offset, width, type, arg_name */
6662 +#define DPNI_RSP_GET_QDID(cmd, qdid) \
6663 + MC_RSP_OP(cmd, 0, 0, 16, uint16_t, qdid)
6664 +
6665 +/* cmd, param, offset, width, type, arg_name */
6666 +#define DPNI_RSP_GET_SP_INFO(cmd, sp_info) \
6667 +do { \
6668 + MC_RSP_OP(cmd, 0, 0, 16, uint16_t, sp_info->spids[0]); \
6669 + MC_RSP_OP(cmd, 0, 16, 16, uint16_t, sp_info->spids[1]); \
6670 +} while (0)
6671 +
6672 +/* cmd, param, offset, width, type, arg_name */
6673 +#define DPNI_RSP_GET_TX_DATA_OFFSET(cmd, data_offset) \
6674 + MC_RSP_OP(cmd, 0, 0, 16, uint16_t, data_offset)
6675 +
6676 +/* cmd, param, offset, width, type, arg_name */
6677 +#define DPNI_CMD_GET_COUNTER(cmd, counter) \
6678 + MC_CMD_OP(cmd, 0, 0, 16, enum dpni_counter, counter)
6679 +
6680 +/* cmd, param, offset, width, type, arg_name */
6681 +#define DPNI_RSP_GET_COUNTER(cmd, value) \
6682 + MC_RSP_OP(cmd, 1, 0, 64, uint64_t, value)
6683 +
6684 +/* cmd, param, offset, width, type, arg_name */
6685 +#define DPNI_CMD_SET_COUNTER(cmd, counter, value) \
6686 +do { \
6687 + MC_CMD_OP(cmd, 0, 0, 16, enum dpni_counter, counter); \
6688 + MC_CMD_OP(cmd, 1, 0, 64, uint64_t, value); \
6689 +} while (0)
6690 +
6691 +/* cmd, param, offset, width, type, arg_name */
6692 +#define DPNI_CMD_SET_LINK_CFG(cmd, cfg) \
6693 +do { \
6694 + MC_CMD_OP(cmd, 1, 0, 32, uint32_t, cfg->rate);\
6695 + MC_CMD_OP(cmd, 2, 0, 64, uint64_t, cfg->options);\
6696 +} while (0)
6697 +
6698 +/* cmd, param, offset, width, type, arg_name */
6699 +#define DPNI_RSP_GET_LINK_STATE(cmd, state) \
6700 +do { \
6701 + MC_RSP_OP(cmd, 0, 32, 1, int, state->up);\
6702 + MC_RSP_OP(cmd, 1, 0, 32, uint32_t, state->rate);\
6703 + MC_RSP_OP(cmd, 2, 0, 64, uint64_t, state->options);\
6704 +} while (0)
6705 +
6706 +/* cmd, param, offset, width, type, arg_name */
6707 +#define DPNI_CMD_SET_TX_SHAPING(cmd, tx_shaper) \
6708 +do { \
6709 + MC_CMD_OP(cmd, 0, 0, 16, uint16_t, tx_shaper->max_burst_size);\
6710 + MC_CMD_OP(cmd, 1, 0, 32, uint32_t, tx_shaper->rate_limit);\
6711 +} while (0)
6712 +
6713 +/* cmd, param, offset, width, type, arg_name */
6714 +#define DPNI_CMD_SET_MAX_FRAME_LENGTH(cmd, max_frame_length) \
6715 + MC_CMD_OP(cmd, 0, 0, 16, uint16_t, max_frame_length)
6716 +
6717 +/* cmd, param, offset, width, type, arg_name */
6718 +#define DPNI_RSP_GET_MAX_FRAME_LENGTH(cmd, max_frame_length) \
6719 + MC_RSP_OP(cmd, 0, 0, 16, uint16_t, max_frame_length)
6720 +
6721 +/* cmd, param, offset, width, type, arg_name */
6722 +#define DPNI_CMD_SET_MTU(cmd, mtu) \
6723 + MC_CMD_OP(cmd, 0, 0, 16, uint16_t, mtu)
6724 +
6725 +/* cmd, param, offset, width, type, arg_name */
6726 +#define DPNI_RSP_GET_MTU(cmd, mtu) \
6727 + MC_RSP_OP(cmd, 0, 0, 16, uint16_t, mtu)
6728 +
6729 +/* cmd, param, offset, width, type, arg_name */
6730 +#define DPNI_CMD_SET_MULTICAST_PROMISC(cmd, en) \
6731 + MC_CMD_OP(cmd, 0, 0, 1, int, en)
6732 +
6733 +/* cmd, param, offset, width, type, arg_name */
6734 +#define DPNI_RSP_GET_MULTICAST_PROMISC(cmd, en) \
6735 + MC_RSP_OP(cmd, 0, 0, 1, int, en)
6736 +
6737 +/* cmd, param, offset, width, type, arg_name */
6738 +#define DPNI_CMD_SET_UNICAST_PROMISC(cmd, en) \
6739 + MC_CMD_OP(cmd, 0, 0, 1, int, en)
6740 +
6741 +/* cmd, param, offset, width, type, arg_name */
6742 +#define DPNI_RSP_GET_UNICAST_PROMISC(cmd, en) \
6743 + MC_RSP_OP(cmd, 0, 0, 1, int, en)
6744 +
6745 +/* cmd, param, offset, width, type, arg_name */
6746 +#define DPNI_CMD_SET_PRIMARY_MAC_ADDR(cmd, mac_addr) \
6747 +do { \
6748 + MC_CMD_OP(cmd, 0, 16, 8, uint8_t, mac_addr[5]); \
6749 + MC_CMD_OP(cmd, 0, 24, 8, uint8_t, mac_addr[4]); \
6750 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, mac_addr[3]); \
6751 + MC_CMD_OP(cmd, 0, 40, 8, uint8_t, mac_addr[2]); \
6752 + MC_CMD_OP(cmd, 0, 48, 8, uint8_t, mac_addr[1]); \
6753 + MC_CMD_OP(cmd, 0, 56, 8, uint8_t, mac_addr[0]); \
6754 +} while (0)
6755 +
6756 +/* cmd, param, offset, width, type, arg_name */
6757 +#define DPNI_RSP_GET_PRIMARY_MAC_ADDR(cmd, mac_addr) \
6758 +do { \
6759 + MC_RSP_OP(cmd, 0, 16, 8, uint8_t, mac_addr[5]); \
6760 + MC_RSP_OP(cmd, 0, 24, 8, uint8_t, mac_addr[4]); \
6761 + MC_RSP_OP(cmd, 0, 32, 8, uint8_t, mac_addr[3]); \
6762 + MC_RSP_OP(cmd, 0, 40, 8, uint8_t, mac_addr[2]); \
6763 + MC_RSP_OP(cmd, 0, 48, 8, uint8_t, mac_addr[1]); \
6764 + MC_RSP_OP(cmd, 0, 56, 8, uint8_t, mac_addr[0]); \
6765 +} while (0)
6766 +
6767 +/* cmd, param, offset, width, type, arg_name */
6768 +#define DPNI_CMD_ADD_MAC_ADDR(cmd, mac_addr) \
6769 +do { \
6770 + MC_CMD_OP(cmd, 0, 16, 8, uint8_t, mac_addr[5]); \
6771 + MC_CMD_OP(cmd, 0, 24, 8, uint8_t, mac_addr[4]); \
6772 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, mac_addr[3]); \
6773 + MC_CMD_OP(cmd, 0, 40, 8, uint8_t, mac_addr[2]); \
6774 + MC_CMD_OP(cmd, 0, 48, 8, uint8_t, mac_addr[1]); \
6775 + MC_CMD_OP(cmd, 0, 56, 8, uint8_t, mac_addr[0]); \
6776 +} while (0)
6777 +
6778 +/* cmd, param, offset, width, type, arg_name */
6779 +#define DPNI_CMD_REMOVE_MAC_ADDR(cmd, mac_addr) \
6780 +do { \
6781 + MC_CMD_OP(cmd, 0, 16, 8, uint8_t, mac_addr[5]); \
6782 + MC_CMD_OP(cmd, 0, 24, 8, uint8_t, mac_addr[4]); \
6783 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, mac_addr[3]); \
6784 + MC_CMD_OP(cmd, 0, 40, 8, uint8_t, mac_addr[2]); \
6785 + MC_CMD_OP(cmd, 0, 48, 8, uint8_t, mac_addr[1]); \
6786 + MC_CMD_OP(cmd, 0, 56, 8, uint8_t, mac_addr[0]); \
6787 +} while (0)
6788 +
6789 +/* cmd, param, offset, width, type, arg_name */
6790 +#define DPNI_CMD_CLEAR_MAC_FILTERS(cmd, unicast, multicast) \
6791 +do { \
6792 + MC_CMD_OP(cmd, 0, 0, 1, int, unicast); \
6793 + MC_CMD_OP(cmd, 0, 1, 1, int, multicast); \
6794 +} while (0)
6795 +
6796 +/* cmd, param, offset, width, type, arg_name */
6797 +#define DPNI_CMD_SET_VLAN_FILTERS(cmd, en) \
6798 + MC_CMD_OP(cmd, 0, 0, 1, int, en)
6799 +
6800 +/* cmd, param, offset, width, type, arg_name */
6801 +#define DPNI_CMD_ADD_VLAN_ID(cmd, vlan_id) \
6802 + MC_CMD_OP(cmd, 0, 32, 16, uint16_t, vlan_id)
6803 +
6804 +/* cmd, param, offset, width, type, arg_name */
6805 +#define DPNI_CMD_REMOVE_VLAN_ID(cmd, vlan_id) \
6806 + MC_CMD_OP(cmd, 0, 32, 16, uint16_t, vlan_id)
6807 +
6808 +/* cmd, param, offset, width, type, arg_name */
6809 +#define DPNI_CMD_SET_TX_SELECTION(cmd, cfg) \
6810 +do { \
6811 + MC_CMD_OP(cmd, 0, 0, 16, uint16_t, cfg->tc_sched[0].delta_bandwidth);\
6812 + MC_CMD_OP(cmd, 0, 16, 4, enum dpni_tx_schedule_mode, \
6813 + cfg->tc_sched[0].mode); \
6814 + MC_CMD_OP(cmd, 0, 32, 16, uint16_t, cfg->tc_sched[1].delta_bandwidth);\
6815 + MC_CMD_OP(cmd, 0, 48, 4, enum dpni_tx_schedule_mode, \
6816 + cfg->tc_sched[1].mode); \
6817 + MC_CMD_OP(cmd, 1, 0, 16, uint16_t, cfg->tc_sched[2].delta_bandwidth);\
6818 + MC_CMD_OP(cmd, 1, 16, 4, enum dpni_tx_schedule_mode, \
6819 + cfg->tc_sched[2].mode); \
6820 + MC_CMD_OP(cmd, 1, 32, 16, uint16_t, cfg->tc_sched[3].delta_bandwidth);\
6821 + MC_CMD_OP(cmd, 1, 48, 4, enum dpni_tx_schedule_mode, \
6822 + cfg->tc_sched[3].mode); \
6823 + MC_CMD_OP(cmd, 2, 0, 16, uint16_t, cfg->tc_sched[4].delta_bandwidth);\
6824 + MC_CMD_OP(cmd, 2, 16, 4, enum dpni_tx_schedule_mode, \
6825 + cfg->tc_sched[4].mode); \
6826 + MC_CMD_OP(cmd, 2, 32, 16, uint16_t, cfg->tc_sched[5].delta_bandwidth);\
6827 + MC_CMD_OP(cmd, 2, 48, 4, enum dpni_tx_schedule_mode, \
6828 + cfg->tc_sched[5].mode); \
6829 + MC_CMD_OP(cmd, 3, 0, 16, uint16_t, cfg->tc_sched[6].delta_bandwidth);\
6830 + MC_CMD_OP(cmd, 3, 16, 4, enum dpni_tx_schedule_mode, \
6831 + cfg->tc_sched[6].mode); \
6832 + MC_CMD_OP(cmd, 3, 32, 16, uint16_t, cfg->tc_sched[7].delta_bandwidth);\
6833 + MC_CMD_OP(cmd, 3, 48, 4, enum dpni_tx_schedule_mode, \
6834 + cfg->tc_sched[7].mode); \
6835 +} while (0)
6836 +
6837 +/* cmd, param, offset, width, type, arg_name */
6838 +#define DPNI_CMD_SET_RX_TC_DIST(cmd, tc_id, cfg) \
6839 +do { \
6840 + MC_CMD_OP(cmd, 0, 0, 16, uint16_t, cfg->dist_size); \
6841 + MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id); \
6842 + MC_CMD_OP(cmd, 0, 24, 4, enum dpni_dist_mode, cfg->dist_mode); \
6843 + MC_CMD_OP(cmd, 0, 28, 4, enum dpni_fs_miss_action, \
6844 + cfg->fs_cfg.miss_action); \
6845 + MC_CMD_OP(cmd, 0, 48, 16, uint16_t, cfg->fs_cfg.default_flow_id); \
6846 + MC_CMD_OP(cmd, 6, 0, 64, uint64_t, cfg->key_cfg_iova); \
6847 +} while (0)
6848 +
6849 +/* cmd, param, offset, width, type, arg_name */
6850 +#define DPNI_CMD_SET_TX_FLOW(cmd, flow_id, cfg) \
6851 +do { \
6852 + MC_CMD_OP(cmd, 0, 43, 1, int, cfg->l3_chksum_gen);\
6853 + MC_CMD_OP(cmd, 0, 44, 1, int, cfg->l4_chksum_gen);\
6854 + MC_CMD_OP(cmd, 0, 45, 1, int, cfg->use_common_tx_conf_queue);\
6855 + MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id);\
6856 + MC_CMD_OP(cmd, 2, 0, 32, uint32_t, cfg->options);\
6857 +} while (0)
6858 +
6859 +/* cmd, param, offset, width, type, arg_name */
6860 +#define DPNI_RSP_SET_TX_FLOW(cmd, flow_id) \
6861 + MC_RSP_OP(cmd, 0, 48, 16, uint16_t, flow_id)
6862 +
6863 +/* cmd, param, offset, width, type, arg_name */
6864 +#define DPNI_CMD_GET_TX_FLOW(cmd, flow_id) \
6865 + MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id)
6866 +
6867 +/* cmd, param, offset, width, type, arg_name */
6868 +#define DPNI_RSP_GET_TX_FLOW(cmd, attr) \
6869 +do { \
6870 + MC_RSP_OP(cmd, 0, 43, 1, int, attr->l3_chksum_gen);\
6871 + MC_RSP_OP(cmd, 0, 44, 1, int, attr->l4_chksum_gen);\
6872 + MC_RSP_OP(cmd, 0, 45, 1, int, attr->use_common_tx_conf_queue);\
6873 +} while (0)
6874 +
6875 +/* cmd, param, offset, width, type, arg_name */
6876 +#define DPNI_CMD_SET_RX_FLOW(cmd, tc_id, flow_id, cfg) \
6877 +do { \
6878 + MC_CMD_OP(cmd, 0, 0, 32, int, cfg->dest_cfg.dest_id); \
6879 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, cfg->dest_cfg.priority);\
6880 + MC_CMD_OP(cmd, 0, 40, 2, enum dpni_dest, cfg->dest_cfg.dest_type);\
6881 + MC_CMD_OP(cmd, 0, 42, 1, int, cfg->order_preservation_en);\
6882 + MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id); \
6883 + MC_CMD_OP(cmd, 1, 0, 64, uint64_t, cfg->user_ctx); \
6884 + MC_CMD_OP(cmd, 2, 16, 8, uint8_t, tc_id); \
6885 + MC_CMD_OP(cmd, 2, 32, 32, uint32_t, cfg->options); \
6886 + MC_CMD_OP(cmd, 3, 0, 4, enum dpni_flc_type, cfg->flc_cfg.flc_type); \
6887 + MC_CMD_OP(cmd, 3, 4, 4, enum dpni_stash_size, \
6888 + cfg->flc_cfg.frame_data_size);\
6889 + MC_CMD_OP(cmd, 3, 8, 4, enum dpni_stash_size, \
6890 + cfg->flc_cfg.flow_context_size);\
6891 + MC_CMD_OP(cmd, 3, 32, 32, uint32_t, cfg->flc_cfg.options);\
6892 + MC_CMD_OP(cmd, 4, 0, 64, uint64_t, cfg->flc_cfg.flow_context);\
6893 + MC_CMD_OP(cmd, 5, 0, 32, uint32_t, cfg->tail_drop_threshold); \
6894 +} while (0)
6895 +
6896 +/* cmd, param, offset, width, type, arg_name */
6897 +#define DPNI_CMD_GET_RX_FLOW(cmd, tc_id, flow_id) \
6898 +do { \
6899 + MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id); \
6900 + MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id); \
6901 +} while (0)
6902 +
6903 +/* cmd, param, offset, width, type, arg_name */
6904 +#define DPNI_RSP_GET_RX_FLOW(cmd, attr) \
6905 +do { \
6906 + MC_RSP_OP(cmd, 0, 0, 32, int, attr->dest_cfg.dest_id); \
6907 + MC_RSP_OP(cmd, 0, 32, 8, uint8_t, attr->dest_cfg.priority);\
6908 + MC_RSP_OP(cmd, 0, 40, 2, enum dpni_dest, attr->dest_cfg.dest_type); \
6909 + MC_RSP_OP(cmd, 0, 42, 1, int, attr->order_preservation_en);\
6910 + MC_RSP_OP(cmd, 1, 0, 64, uint64_t, attr->user_ctx); \
6911 + MC_RSP_OP(cmd, 2, 0, 32, uint32_t, attr->tail_drop_threshold); \
6912 + MC_RSP_OP(cmd, 2, 32, 32, uint32_t, attr->fqid); \
6913 + MC_RSP_OP(cmd, 3, 0, 4, enum dpni_flc_type, attr->flc_cfg.flc_type); \
6914 + MC_RSP_OP(cmd, 3, 4, 4, enum dpni_stash_size, \
6915 + attr->flc_cfg.frame_data_size);\
6916 + MC_RSP_OP(cmd, 3, 8, 4, enum dpni_stash_size, \
6917 + attr->flc_cfg.flow_context_size);\
6918 + MC_RSP_OP(cmd, 3, 32, 32, uint32_t, attr->flc_cfg.options);\
6919 + MC_RSP_OP(cmd, 4, 0, 64, uint64_t, attr->flc_cfg.flow_context);\
6920 +} while (0)
6921 +
6922 +/* cmd, param, offset, width, type, arg_name */
6923 +#define DPNI_CMD_SET_RX_ERR_QUEUE(cmd, cfg) \
6924 +do { \
6925 + MC_CMD_OP(cmd, 0, 0, 32, int, cfg->dest_cfg.dest_id); \
6926 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, cfg->dest_cfg.priority);\
6927 + MC_CMD_OP(cmd, 0, 40, 2, enum dpni_dest, cfg->dest_cfg.dest_type);\
6928 + MC_CMD_OP(cmd, 0, 42, 1, int, cfg->order_preservation_en);\
6929 + MC_CMD_OP(cmd, 1, 0, 64, uint64_t, cfg->user_ctx); \
6930 + MC_CMD_OP(cmd, 2, 0, 32, uint32_t, cfg->options); \
6931 + MC_CMD_OP(cmd, 2, 32, 32, uint32_t, cfg->tail_drop_threshold); \
6932 + MC_CMD_OP(cmd, 3, 0, 4, enum dpni_flc_type, cfg->flc_cfg.flc_type); \
6933 + MC_CMD_OP(cmd, 3, 4, 4, enum dpni_stash_size, \
6934 + cfg->flc_cfg.frame_data_size);\
6935 + MC_CMD_OP(cmd, 3, 8, 4, enum dpni_stash_size, \
6936 + cfg->flc_cfg.flow_context_size);\
6937 + MC_CMD_OP(cmd, 3, 32, 32, uint32_t, cfg->flc_cfg.options);\
6938 + MC_CMD_OP(cmd, 4, 0, 64, uint64_t, cfg->flc_cfg.flow_context);\
6939 +} while (0)
6940 +
6941 +/* cmd, param, offset, width, type, arg_name */
6942 +#define DPNI_RSP_GET_RX_ERR_QUEUE(cmd, attr) \
6943 +do { \
6944 + MC_RSP_OP(cmd, 0, 0, 32, int, attr->dest_cfg.dest_id); \
6945 + MC_RSP_OP(cmd, 0, 32, 8, uint8_t, attr->dest_cfg.priority);\
6946 + MC_RSP_OP(cmd, 0, 40, 2, enum dpni_dest, attr->dest_cfg.dest_type);\
6947 + MC_RSP_OP(cmd, 0, 42, 1, int, attr->order_preservation_en);\
6948 + MC_RSP_OP(cmd, 1, 0, 64, uint64_t, attr->user_ctx); \
6949 + MC_RSP_OP(cmd, 2, 0, 32, uint32_t, attr->tail_drop_threshold); \
6950 + MC_RSP_OP(cmd, 2, 32, 32, uint32_t, attr->fqid); \
6951 + MC_RSP_OP(cmd, 3, 0, 4, enum dpni_flc_type, attr->flc_cfg.flc_type); \
6952 + MC_RSP_OP(cmd, 3, 4, 4, enum dpni_stash_size, \
6953 + attr->flc_cfg.frame_data_size);\
6954 + MC_RSP_OP(cmd, 3, 8, 4, enum dpni_stash_size, \
6955 + attr->flc_cfg.flow_context_size);\
6956 + MC_RSP_OP(cmd, 3, 32, 32, uint32_t, attr->flc_cfg.options);\
6957 + MC_RSP_OP(cmd, 4, 0, 64, uint64_t, attr->flc_cfg.flow_context);\
6958 +} while (0)
6959 +
6960 +/* cmd, param, offset, width, type, arg_name */
6961 +#define DPNI_CMD_SET_TX_CONF_REVOKE(cmd, revoke) \
6962 + MC_CMD_OP(cmd, 0, 0, 1, int, revoke)
6963 +
6964 +/* cmd, param, offset, width, type, arg_name */
6965 +#define DPNI_CMD_SET_QOS_TABLE(cmd, cfg) \
6966 +do { \
6967 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, cfg->default_tc); \
6968 + MC_CMD_OP(cmd, 0, 40, 1, int, cfg->discard_on_miss); \
6969 + MC_CMD_OP(cmd, 6, 0, 64, uint64_t, cfg->key_cfg_iova); \
6970 +} while (0)
6971 +
6972 +/* cmd, param, offset, width, type, arg_name */
6973 +#define DPNI_CMD_ADD_QOS_ENTRY(cmd, cfg, tc_id) \
6974 +do { \
6975 + MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id); \
6976 + MC_CMD_OP(cmd, 0, 24, 8, uint8_t, cfg->key_size); \
6977 + MC_CMD_OP(cmd, 1, 0, 64, uint64_t, cfg->key_iova); \
6978 + MC_CMD_OP(cmd, 2, 0, 64, uint64_t, cfg->mask_iova); \
6979 +} while (0)
6980 +
6981 +/* cmd, param, offset, width, type, arg_name */
6982 +#define DPNI_CMD_REMOVE_QOS_ENTRY(cmd, cfg) \
6983 +do { \
6984 + MC_CMD_OP(cmd, 0, 24, 8, uint8_t, cfg->key_size); \
6985 + MC_CMD_OP(cmd, 1, 0, 64, uint64_t, cfg->key_iova); \
6986 + MC_CMD_OP(cmd, 2, 0, 64, uint64_t, cfg->mask_iova); \
6987 +} while (0)
6988 +
6989 +/* cmd, param, offset, width, type, arg_name */
6990 +#define DPNI_CMD_ADD_FS_ENTRY(cmd, tc_id, cfg, flow_id) \
6991 +do { \
6992 + MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id); \
6993 + MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id); \
6994 + MC_CMD_OP(cmd, 0, 24, 8, uint8_t, cfg->key_size); \
6995 + MC_CMD_OP(cmd, 1, 0, 64, uint64_t, cfg->key_iova); \
6996 + MC_CMD_OP(cmd, 2, 0, 64, uint64_t, cfg->mask_iova); \
6997 +} while (0)
6998 +
6999 +/* cmd, param, offset, width, type, arg_name */
7000 +#define DPNI_CMD_REMOVE_FS_ENTRY(cmd, tc_id, cfg) \
7001 +do { \
7002 + MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id); \
7003 + MC_CMD_OP(cmd, 0, 24, 8, uint8_t, cfg->key_size); \
7004 + MC_CMD_OP(cmd, 1, 0, 64, uint64_t, cfg->key_iova); \
7005 + MC_CMD_OP(cmd, 2, 0, 64, uint64_t, cfg->mask_iova); \
7006 +} while (0)
7007 +
7008 +/* cmd, param, offset, width, type, arg_name */
7009 +#define DPNI_CMD_CLEAR_FS_ENTRIES(cmd, tc_id) \
7010 + MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id)
7011 +
7012 +/* cmd, param, offset, width, type, arg_name */
7013 +#define DPNI_CMD_SET_VLAN_INSERTION(cmd, en) \
7014 + MC_CMD_OP(cmd, 0, 0, 1, int, en)
7015 +
7016 +/* cmd, param, offset, width, type, arg_name */
7017 +#define DPNI_CMD_SET_VLAN_REMOVAL(cmd, en) \
7018 + MC_CMD_OP(cmd, 0, 0, 1, int, en)
7019 +
7020 +/* cmd, param, offset, width, type, arg_name */
7021 +#define DPNI_CMD_SET_IPR(cmd, en) \
7022 + MC_CMD_OP(cmd, 0, 0, 1, int, en)
7023 +
7024 +/* cmd, param, offset, width, type, arg_name */
7025 +#define DPNI_CMD_SET_IPF(cmd, en) \
7026 + MC_CMD_OP(cmd, 0, 0, 1, int, en)
7027 +
7028 +/* cmd, param, offset, width, type, arg_name */
7029 +#define DPNI_CMD_SET_RX_TC_POLICING(cmd, tc_id, cfg) \
7030 +do { \
7031 + MC_CMD_OP(cmd, 0, 0, 4, enum dpni_policer_mode, cfg->mode); \
7032 + MC_CMD_OP(cmd, 0, 4, 4, enum dpni_policer_color, cfg->default_color); \
7033 + MC_CMD_OP(cmd, 0, 8, 4, enum dpni_policer_unit, cfg->units); \
7034 + MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id); \
7035 + MC_CMD_OP(cmd, 0, 32, 32, uint32_t, cfg->options); \
7036 + MC_CMD_OP(cmd, 1, 0, 32, uint32_t, cfg->cir); \
7037 + MC_CMD_OP(cmd, 1, 32, 32, uint32_t, cfg->cbs); \
7038 + MC_CMD_OP(cmd, 2, 0, 32, uint32_t, cfg->eir); \
7039 + MC_CMD_OP(cmd, 2, 32, 32, uint32_t, cfg->ebs);\
7040 +} while (0)
7041 +
7042 +/* cmd, param, offset, width, type, arg_name */
7043 +#define DPNI_CMD_GET_RX_TC_POLICING(cmd, tc_id) \
7044 + MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id)
7045 +
7046 +/* cmd, param, offset, width, type, arg_name */
7047 +#define DPNI_RSP_GET_RX_TC_POLICING(cmd, cfg) \
7048 +do { \
7049 + MC_RSP_OP(cmd, 0, 0, 4, enum dpni_policer_mode, cfg->mode); \
7050 + MC_RSP_OP(cmd, 0, 4, 4, enum dpni_policer_color, cfg->default_color); \
7051 + MC_RSP_OP(cmd, 0, 8, 4, enum dpni_policer_unit, cfg->units); \
7052 + MC_RSP_OP(cmd, 0, 32, 32, uint32_t, cfg->options); \
7053 + MC_RSP_OP(cmd, 1, 0, 32, uint32_t, cfg->cir); \
7054 + MC_RSP_OP(cmd, 1, 32, 32, uint32_t, cfg->cbs); \
7055 + MC_RSP_OP(cmd, 2, 0, 32, uint32_t, cfg->eir); \
7056 + MC_RSP_OP(cmd, 2, 32, 32, uint32_t, cfg->ebs);\
7057 +} while (0)
7058 +
7059 +/* cmd, param, offset, width, type, arg_name */
7060 +#define DPNI_PREP_EARLY_DROP(ext, cfg) \
7061 +do { \
7062 + MC_PREP_OP(ext, 0, 0, 2, enum dpni_early_drop_mode, cfg->mode); \
7063 + MC_PREP_OP(ext, 0, 2, 2, \
7064 + enum dpni_congestion_unit, cfg->units); \
7065 + MC_PREP_OP(ext, 0, 32, 32, uint32_t, cfg->tail_drop_threshold); \
7066 + MC_PREP_OP(ext, 1, 0, 8, uint8_t, cfg->green.drop_probability); \
7067 + MC_PREP_OP(ext, 2, 0, 64, uint64_t, cfg->green.max_threshold); \
7068 + MC_PREP_OP(ext, 3, 0, 64, uint64_t, cfg->green.min_threshold); \
7069 + MC_PREP_OP(ext, 5, 0, 8, uint8_t, cfg->yellow.drop_probability);\
7070 + MC_PREP_OP(ext, 6, 0, 64, uint64_t, cfg->yellow.max_threshold); \
7071 + MC_PREP_OP(ext, 7, 0, 64, uint64_t, cfg->yellow.min_threshold); \
7072 + MC_PREP_OP(ext, 9, 0, 8, uint8_t, cfg->red.drop_probability); \
7073 + MC_PREP_OP(ext, 10, 0, 64, uint64_t, cfg->red.max_threshold); \
7074 + MC_PREP_OP(ext, 11, 0, 64, uint64_t, cfg->red.min_threshold); \
7075 +} while (0)
7076 +
7077 +/* cmd, param, offset, width, type, arg_name */
7078 +#define DPNI_EXT_EARLY_DROP(ext, cfg) \
7079 +do { \
7080 + MC_EXT_OP(ext, 0, 0, 2, enum dpni_early_drop_mode, cfg->mode); \
7081 + MC_EXT_OP(ext, 0, 2, 2, \
7082 + enum dpni_congestion_unit, cfg->units); \
7083 + MC_EXT_OP(ext, 0, 32, 32, uint32_t, cfg->tail_drop_threshold); \
7084 + MC_EXT_OP(ext, 1, 0, 8, uint8_t, cfg->green.drop_probability); \
7085 + MC_EXT_OP(ext, 2, 0, 64, uint64_t, cfg->green.max_threshold); \
7086 + MC_EXT_OP(ext, 3, 0, 64, uint64_t, cfg->green.min_threshold); \
7087 + MC_EXT_OP(ext, 5, 0, 8, uint8_t, cfg->yellow.drop_probability);\
7088 + MC_EXT_OP(ext, 6, 0, 64, uint64_t, cfg->yellow.max_threshold); \
7089 + MC_EXT_OP(ext, 7, 0, 64, uint64_t, cfg->yellow.min_threshold); \
7090 + MC_EXT_OP(ext, 9, 0, 8, uint8_t, cfg->red.drop_probability); \
7091 + MC_EXT_OP(ext, 10, 0, 64, uint64_t, cfg->red.max_threshold); \
7092 + MC_EXT_OP(ext, 11, 0, 64, uint64_t, cfg->red.min_threshold); \
7093 +} while (0)
7094 +
7095 +/* cmd, param, offset, width, type, arg_name */
7096 +#define DPNI_CMD_SET_RX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova) \
7097 +do { \
7098 + MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id); \
7099 + MC_CMD_OP(cmd, 1, 0, 64, uint64_t, early_drop_iova); \
7100 +} while (0)
7101 +
7102 +/* cmd, param, offset, width, type, arg_name */
7103 +#define DPNI_CMD_GET_RX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova) \
7104 +do { \
7105 + MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id); \
7106 + MC_CMD_OP(cmd, 1, 0, 64, uint64_t, early_drop_iova); \
7107 +} while (0)
7108 +
7109 +/* cmd, param, offset, width, type, arg_name */
7110 +#define DPNI_CMD_SET_TX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova) \
7111 +do { \
7112 + MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id); \
7113 + MC_CMD_OP(cmd, 1, 0, 64, uint64_t, early_drop_iova); \
7114 +} while (0)
7115 +
7116 +/* cmd, param, offset, width, type, arg_name */
7117 +#define DPNI_CMD_GET_TX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova) \
7118 +do { \
7119 + MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id); \
7120 + MC_CMD_OP(cmd, 1, 0, 64, uint64_t, early_drop_iova); \
7121 +} while (0)
7122 +
7123 +#define DPNI_CMD_SET_RX_TC_CONGESTION_NOTIFICATION(cmd, tc_id, cfg) \
7124 +do { \
7125 + MC_CMD_OP(cmd, 0, 0, 2, enum dpni_congestion_unit, cfg->units); \
7126 + MC_CMD_OP(cmd, 0, 4, 4, enum dpni_dest, cfg->dest_cfg.dest_type); \
7127 + MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id); \
7128 + MC_CMD_OP(cmd, 0, 16, 8, uint8_t, cfg->dest_cfg.priority); \
7129 + MC_CMD_OP(cmd, 1, 0, 32, uint32_t, cfg->threshold_entry); \
7130 + MC_CMD_OP(cmd, 1, 32, 32, uint32_t, cfg->threshold_exit); \
7131 + MC_CMD_OP(cmd, 2, 0, 16, uint16_t, cfg->options); \
7132 + MC_CMD_OP(cmd, 2, 32, 32, int, cfg->dest_cfg.dest_id); \
7133 + MC_CMD_OP(cmd, 3, 0, 64, uint64_t, cfg->message_ctx); \
7134 + MC_CMD_OP(cmd, 4, 0, 64, uint64_t, cfg->message_iova); \
7135 +} while (0)
7136 +
7137 +#define DPNI_CMD_GET_RX_TC_CONGESTION_NOTIFICATION(cmd, tc_id) \
7138 + MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id)
7139 +
7140 +#define DPNI_RSP_GET_RX_TC_CONGESTION_NOTIFICATION(cmd, cfg) \
7141 +do { \
7142 + MC_RSP_OP(cmd, 0, 0, 2, enum dpni_congestion_unit, cfg->units); \
7143 + MC_RSP_OP(cmd, 0, 4, 4, enum dpni_dest, cfg->dest_cfg.dest_type); \
7144 + MC_RSP_OP(cmd, 0, 16, 8, uint8_t, cfg->dest_cfg.priority); \
7145 + MC_RSP_OP(cmd, 1, 0, 32, uint32_t, cfg->threshold_entry); \
7146 + MC_RSP_OP(cmd, 1, 32, 32, uint32_t, cfg->threshold_exit); \
7147 + MC_RSP_OP(cmd, 2, 0, 16, uint16_t, cfg->options); \
7148 + MC_RSP_OP(cmd, 2, 32, 32, int, cfg->dest_cfg.dest_id); \
7149 + MC_RSP_OP(cmd, 3, 0, 64, uint64_t, cfg->message_ctx); \
7150 + MC_RSP_OP(cmd, 4, 0, 64, uint64_t, cfg->message_iova); \
7151 +} while (0)
7152 +
7153 +#define DPNI_CMD_SET_TX_TC_CONGESTION_NOTIFICATION(cmd, tc_id, cfg) \
7154 +do { \
7155 + MC_CMD_OP(cmd, 0, 0, 2, enum dpni_congestion_unit, cfg->units); \
7156 + MC_CMD_OP(cmd, 0, 4, 4, enum dpni_dest, cfg->dest_cfg.dest_type); \
7157 + MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id); \
7158 + MC_CMD_OP(cmd, 0, 16, 8, uint8_t, cfg->dest_cfg.priority); \
7159 + MC_CMD_OP(cmd, 1, 0, 32, uint32_t, cfg->threshold_entry); \
7160 + MC_CMD_OP(cmd, 1, 32, 32, uint32_t, cfg->threshold_exit); \
7161 + MC_CMD_OP(cmd, 2, 0, 16, uint16_t, cfg->options); \
7162 + MC_CMD_OP(cmd, 2, 32, 32, int, cfg->dest_cfg.dest_id); \
7163 + MC_CMD_OP(cmd, 3, 0, 64, uint64_t, cfg->message_ctx); \
7164 + MC_CMD_OP(cmd, 4, 0, 64, uint64_t, cfg->message_iova); \
7165 +} while (0)
7166 +
7167 +#define DPNI_CMD_GET_TX_TC_CONGESTION_NOTIFICATION(cmd, tc_id) \
7168 + MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id)
7169 +
7170 +#define DPNI_RSP_GET_TX_TC_CONGESTION_NOTIFICATION(cmd, cfg) \
7171 +do { \
7172 + MC_RSP_OP(cmd, 0, 0, 2, enum dpni_congestion_unit, cfg->units); \
7173 + MC_RSP_OP(cmd, 0, 4, 4, enum dpni_dest, cfg->dest_cfg.dest_type); \
7174 + MC_RSP_OP(cmd, 0, 16, 8, uint8_t, cfg->dest_cfg.priority); \
7175 + MC_RSP_OP(cmd, 1, 0, 32, uint32_t, cfg->threshold_entry); \
7176 + MC_RSP_OP(cmd, 1, 32, 32, uint32_t, cfg->threshold_exit); \
7177 + MC_RSP_OP(cmd, 2, 0, 16, uint16_t, cfg->options); \
7178 + MC_RSP_OP(cmd, 2, 32, 32, int, cfg->dest_cfg.dest_id); \
7179 + MC_RSP_OP(cmd, 3, 0, 64, uint64_t, cfg->message_ctx); \
7180 + MC_RSP_OP(cmd, 4, 0, 64, uint64_t, cfg->message_iova); \
7181 +} while (0)
7182 +
7183 +#define DPNI_CMD_SET_TX_CONF(cmd, flow_id, cfg) \
7184 +do { \
7185 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, cfg->queue_cfg.dest_cfg.priority); \
7186 + MC_CMD_OP(cmd, 0, 40, 2, enum dpni_dest, \
7187 + cfg->queue_cfg.dest_cfg.dest_type); \
7188 + MC_CMD_OP(cmd, 0, 42, 1, int, cfg->errors_only); \
7189 + MC_CMD_OP(cmd, 0, 46, 1, int, cfg->queue_cfg.order_preservation_en); \
7190 + MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id); \
7191 + MC_CMD_OP(cmd, 1, 0, 64, uint64_t, cfg->queue_cfg.user_ctx); \
7192 + MC_CMD_OP(cmd, 2, 0, 32, uint32_t, cfg->queue_cfg.options); \
7193 + MC_CMD_OP(cmd, 2, 32, 32, int, cfg->queue_cfg.dest_cfg.dest_id); \
7194 + MC_CMD_OP(cmd, 3, 0, 32, uint32_t, \
7195 + cfg->queue_cfg.tail_drop_threshold); \
7196 + MC_CMD_OP(cmd, 4, 0, 4, enum dpni_flc_type, \
7197 + cfg->queue_cfg.flc_cfg.flc_type); \
7198 + MC_CMD_OP(cmd, 4, 4, 4, enum dpni_stash_size, \
7199 + cfg->queue_cfg.flc_cfg.frame_data_size); \
7200 + MC_CMD_OP(cmd, 4, 8, 4, enum dpni_stash_size, \
7201 + cfg->queue_cfg.flc_cfg.flow_context_size); \
7202 + MC_CMD_OP(cmd, 4, 32, 32, uint32_t, cfg->queue_cfg.flc_cfg.options); \
7203 + MC_CMD_OP(cmd, 5, 0, 64, uint64_t, \
7204 + cfg->queue_cfg.flc_cfg.flow_context); \
7205 +} while (0)
7206 +
7207 +#define DPNI_CMD_GET_TX_CONF(cmd, flow_id) \
7208 + MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id)
7209 +
7210 +#define DPNI_RSP_GET_TX_CONF(cmd, attr) \
7211 +do { \
7212 + MC_RSP_OP(cmd, 0, 32, 8, uint8_t, \
7213 + attr->queue_attr.dest_cfg.priority); \
7214 + MC_RSP_OP(cmd, 0, 40, 2, enum dpni_dest, \
7215 + attr->queue_attr.dest_cfg.dest_type); \
7216 + MC_RSP_OP(cmd, 0, 42, 1, int, attr->errors_only); \
7217 + MC_RSP_OP(cmd, 0, 46, 1, int, \
7218 + attr->queue_attr.order_preservation_en); \
7219 + MC_RSP_OP(cmd, 1, 0, 64, uint64_t, attr->queue_attr.user_ctx); \
7220 + MC_RSP_OP(cmd, 2, 32, 32, int, attr->queue_attr.dest_cfg.dest_id); \
7221 + MC_RSP_OP(cmd, 3, 0, 32, uint32_t, \
7222 + attr->queue_attr.tail_drop_threshold); \
7223 + MC_RSP_OP(cmd, 3, 32, 32, uint32_t, attr->queue_attr.fqid); \
7224 + MC_RSP_OP(cmd, 4, 0, 4, enum dpni_flc_type, \
7225 + attr->queue_attr.flc_cfg.flc_type); \
7226 + MC_RSP_OP(cmd, 4, 4, 4, enum dpni_stash_size, \
7227 + attr->queue_attr.flc_cfg.frame_data_size); \
7228 + MC_RSP_OP(cmd, 4, 8, 4, enum dpni_stash_size, \
7229 + attr->queue_attr.flc_cfg.flow_context_size); \
7230 + MC_RSP_OP(cmd, 4, 32, 32, uint32_t, attr->queue_attr.flc_cfg.options); \
7231 + MC_RSP_OP(cmd, 5, 0, 64, uint64_t, \
7232 + attr->queue_attr.flc_cfg.flow_context); \
7233 +} while (0)
7234 +
7235 +#define DPNI_CMD_SET_TX_CONF_CONGESTION_NOTIFICATION(cmd, flow_id, cfg) \
7236 +do { \
7237 + MC_CMD_OP(cmd, 0, 0, 2, enum dpni_congestion_unit, cfg->units); \
7238 + MC_CMD_OP(cmd, 0, 4, 4, enum dpni_dest, cfg->dest_cfg.dest_type); \
7239 + MC_CMD_OP(cmd, 0, 16, 8, uint8_t, cfg->dest_cfg.priority); \
7240 + MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id); \
7241 + MC_CMD_OP(cmd, 1, 0, 32, uint32_t, cfg->threshold_entry); \
7242 + MC_CMD_OP(cmd, 1, 32, 32, uint32_t, cfg->threshold_exit); \
7243 + MC_CMD_OP(cmd, 2, 0, 16, uint16_t, cfg->options); \
7244 + MC_CMD_OP(cmd, 2, 32, 32, int, cfg->dest_cfg.dest_id); \
7245 + MC_CMD_OP(cmd, 3, 0, 64, uint64_t, cfg->message_ctx); \
7246 + MC_CMD_OP(cmd, 4, 0, 64, uint64_t, cfg->message_iova); \
7247 +} while (0)
7248 +
7249 +#define DPNI_CMD_GET_TX_CONF_CONGESTION_NOTIFICATION(cmd, flow_id) \
7250 + MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id)
7251 +
7252 +#define DPNI_RSP_GET_TX_CONF_CONGESTION_NOTIFICATION(cmd, cfg) \
7253 +do { \
7254 + MC_RSP_OP(cmd, 0, 0, 2, enum dpni_congestion_unit, cfg->units); \
7255 + MC_RSP_OP(cmd, 0, 4, 4, enum dpni_dest, cfg->dest_cfg.dest_type); \
7256 + MC_RSP_OP(cmd, 0, 16, 8, uint8_t, cfg->dest_cfg.priority); \
7257 + MC_RSP_OP(cmd, 1, 0, 32, uint32_t, cfg->threshold_entry); \
7258 + MC_RSP_OP(cmd, 1, 32, 32, uint32_t, cfg->threshold_exit); \
7259 + MC_RSP_OP(cmd, 2, 0, 16, uint16_t, cfg->options); \
7260 + MC_RSP_OP(cmd, 2, 32, 32, int, cfg->dest_cfg.dest_id); \
7261 + MC_RSP_OP(cmd, 3, 0, 64, uint64_t, cfg->message_ctx); \
7262 + MC_RSP_OP(cmd, 4, 0, 64, uint64_t, cfg->message_iova); \
7263 +} while (0)
7264 +
7265 +#endif /* _FSL_DPNI_CMD_H */
7266 --- /dev/null
7267 +++ b/drivers/staging/fsl-dpaa2/ethernet/dpni.c
7268 @@ -0,0 +1,1907 @@
7269 +/* Copyright 2013-2015 Freescale Semiconductor Inc.
7270 + *
7271 + * Redistribution and use in source and binary forms, with or without
7272 + * modification, are permitted provided that the following conditions are met:
7273 + * * Redistributions of source code must retain the above copyright
7274 + * notice, this list of conditions and the following disclaimer.
7275 + * * Redistributions in binary form must reproduce the above copyright
7276 + * notice, this list of conditions and the following disclaimer in the
7277 + * documentation and/or other materials provided with the distribution.
7278 + * * Neither the name of the above-listed copyright holders nor the
7279 + * names of any contributors may be used to endorse or promote products
7280 + * derived from this software without specific prior written permission.
7281 + *
7282 + *
7283 + * ALTERNATIVELY, this software may be distributed under the terms of the
7284 + * GNU General Public License ("GPL") as published by the Free Software
7285 + * Foundation, either version 2 of that License or (at your option) any
7286 + * later version.
7287 + *
7288 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
7289 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
7290 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
7291 + * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
7292 + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
7293 + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
7294 + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
7295 + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
7296 + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
7297 + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
7298 + * POSSIBILITY OF SUCH DAMAGE.
7299 + */
7300 +#include "../../fsl-mc/include/mc-sys.h"
7301 +#include "../../fsl-mc/include/mc-cmd.h"
7302 +#include "dpni.h"
7303 +#include "dpni-cmd.h"
7304 +
7305 +int dpni_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
7306 + uint8_t *key_cfg_buf)
7307 +{
7308 + int i, j;
7309 + int offset = 0;
7310 + int param = 1;
7311 + uint64_t *params = (uint64_t *)key_cfg_buf;
7312 +
7313 + if (!key_cfg_buf || !cfg)
7314 + return -EINVAL;
7315 +
7316 +	if (cfg->num_extracts > DPKG_MAX_NUM_OF_EXTRACTS)
7317 +		return -EINVAL;
7318 +
7319 +	params[0] |= mc_enc(0, 8, cfg->num_extracts);
7320 +	params[0] = cpu_to_le64(params[0]);
7321 +
7322 + for (i = 0; i < cfg->num_extracts; i++) {
7323 + switch (cfg->extracts[i].type) {
7324 + case DPKG_EXTRACT_FROM_HDR:
7325 + params[param] |= mc_enc(0, 8,
7326 + cfg->extracts[i].extract.from_hdr.prot);
7327 + params[param] |= mc_enc(8, 4,
7328 + cfg->extracts[i].extract.from_hdr.type);
7329 + params[param] |= mc_enc(16, 8,
7330 + cfg->extracts[i].extract.from_hdr.size);
7331 + params[param] |= mc_enc(24, 8,
7332 + cfg->extracts[i].extract.
7333 + from_hdr.offset);
7334 + params[param] |= mc_enc(32, 32,
7335 + cfg->extracts[i].extract.
7336 + from_hdr.field);
7337 + params[param] = cpu_to_le64(params[param]);
7338 + param++;
7339 + params[param] |= mc_enc(0, 8,
7340 + cfg->extracts[i].extract.
7341 + from_hdr.hdr_index);
7342 + break;
7343 + case DPKG_EXTRACT_FROM_DATA:
7344 + params[param] |= mc_enc(16, 8,
7345 + cfg->extracts[i].extract.
7346 + from_data.size);
7347 + params[param] |= mc_enc(24, 8,
7348 + cfg->extracts[i].extract.
7349 + from_data.offset);
7350 + params[param] = cpu_to_le64(params[param]);
7351 + param++;
7352 + break;
7353 + case DPKG_EXTRACT_FROM_PARSE:
7354 + params[param] |= mc_enc(16, 8,
7355 + cfg->extracts[i].extract.
7356 + from_parse.size);
7357 + params[param] |= mc_enc(24, 8,
7358 + cfg->extracts[i].extract.
7359 + from_parse.offset);
7360 + params[param] = cpu_to_le64(params[param]);
7361 + param++;
7362 + break;
7363 + default:
7364 + return -EINVAL;
7365 + }
7366 + params[param] |= mc_enc(
7367 + 24, 8, cfg->extracts[i].num_of_byte_masks);
7368 + params[param] |= mc_enc(32, 4, cfg->extracts[i].type);
7369 + params[param] = cpu_to_le64(params[param]);
7370 + param++;
7371 + for (offset = 0, j = 0;
7372 + j < DPKG_NUM_OF_MASKS;
7373 + offset += 16, j++) {
7374 + params[param] |= mc_enc(
7375 + (offset), 8, cfg->extracts[i].masks[j].mask);
7376 + params[param] |= mc_enc(
7377 + (offset + 8), 8,
7378 + cfg->extracts[i].masks[j].offset);
7379 + }
7380 + params[param] = cpu_to_le64(params[param]);
7381 + param++;
7382 + }
7383 + return 0;
7384 +}
7385 +
7386 +int dpni_prepare_extended_cfg(const struct dpni_extended_cfg *cfg,
7387 + uint8_t *ext_cfg_buf)
7388 +{
7389 + uint64_t *ext_params = (uint64_t *)ext_cfg_buf;
7390 +
7391 + DPNI_PREP_EXTENDED_CFG(ext_params, cfg);
7392 +
7393 + return 0;
7394 +}
7395 +
7396 +int dpni_extract_extended_cfg(struct dpni_extended_cfg *cfg,
7397 + const uint8_t *ext_cfg_buf)
7398 +{
7399 + uint64_t *ext_params = (uint64_t *)ext_cfg_buf;
7400 +
7401 + DPNI_EXT_EXTENDED_CFG(ext_params, cfg);
7402 +
7403 + return 0;
7404 +}
7405 +
7406 +int dpni_open(struct fsl_mc_io *mc_io,
7407 + uint32_t cmd_flags,
7408 + int dpni_id,
7409 + uint16_t *token)
7410 +{
7411 + struct mc_command cmd = { 0 };
7412 + int err;
7413 +
7414 + /* prepare command */
7415 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_OPEN,
7416 + cmd_flags,
7417 + 0);
7418 + DPNI_CMD_OPEN(cmd, dpni_id);
7419 +
7420 + /* send command to mc*/
7421 + err = mc_send_command(mc_io, &cmd);
7422 + if (err)
7423 + return err;
7424 +
7425 + /* retrieve response parameters */
7426 + *token = MC_CMD_HDR_READ_TOKEN(cmd.header);
7427 +
7428 + return 0;
7429 +}
7430 +
7431 +int dpni_close(struct fsl_mc_io *mc_io,
7432 + uint32_t cmd_flags,
7433 + uint16_t token)
7434 +{
7435 + struct mc_command cmd = { 0 };
7436 +
7437 + /* prepare command */
7438 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLOSE,
7439 + cmd_flags,
7440 + token);
7441 +
7442 + /* send command to mc*/
7443 + return mc_send_command(mc_io, &cmd);
7444 +}
7445 +
7446 +int dpni_create(struct fsl_mc_io *mc_io,
7447 + uint32_t cmd_flags,
7448 + const struct dpni_cfg *cfg,
7449 + uint16_t *token)
7450 +{
7451 + struct mc_command cmd = { 0 };
7452 + int err;
7453 +
7454 + /* prepare command */
7455 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_CREATE,
7456 + cmd_flags,
7457 + 0);
7458 + DPNI_CMD_CREATE(cmd, cfg);
7459 +
7460 + /* send command to mc*/
7461 + err = mc_send_command(mc_io, &cmd);
7462 + if (err)
7463 + return err;
7464 +
7465 + /* retrieve response parameters */
7466 + *token = MC_CMD_HDR_READ_TOKEN(cmd.header);
7467 +
7468 + return 0;
7469 +}
7470 +
7471 +int dpni_destroy(struct fsl_mc_io *mc_io,
7472 + uint32_t cmd_flags,
7473 + uint16_t token)
7474 +{
7475 + struct mc_command cmd = { 0 };
7476 +
7477 + /* prepare command */
7478 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_DESTROY,
7479 + cmd_flags,
7480 + token);
7481 +
7482 + /* send command to mc*/
7483 + return mc_send_command(mc_io, &cmd);
7484 +}
7485 +
7486 +int dpni_set_pools(struct fsl_mc_io *mc_io,
7487 + uint32_t cmd_flags,
7488 + uint16_t token,
7489 + const struct dpni_pools_cfg *cfg)
7490 +{
7491 + struct mc_command cmd = { 0 };
7492 +
7493 + /* prepare command */
7494 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_POOLS,
7495 + cmd_flags,
7496 + token);
7497 + DPNI_CMD_SET_POOLS(cmd, cfg);
7498 +
7499 + /* send command to mc*/
7500 + return mc_send_command(mc_io, &cmd);
7501 +}
7502 +
7503 +int dpni_enable(struct fsl_mc_io *mc_io,
7504 + uint32_t cmd_flags,
7505 + uint16_t token)
7506 +{
7507 + struct mc_command cmd = { 0 };
7508 +
7509 + /* prepare command */
7510 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_ENABLE,
7511 + cmd_flags,
7512 + token);
7513 +
7514 + /* send command to mc*/
7515 + return mc_send_command(mc_io, &cmd);
7516 +}
7517 +
7518 +int dpni_disable(struct fsl_mc_io *mc_io,
7519 + uint32_t cmd_flags,
7520 + uint16_t token)
7521 +{
7522 + struct mc_command cmd = { 0 };
7523 +
7524 + /* prepare command */
7525 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_DISABLE,
7526 + cmd_flags,
7527 + token);
7528 +
7529 + /* send command to mc*/
7530 + return mc_send_command(mc_io, &cmd);
7531 +}
7532 +
7533 +int dpni_is_enabled(struct fsl_mc_io *mc_io,
7534 + uint32_t cmd_flags,
7535 + uint16_t token,
7536 + int *en)
7537 +{
7538 + struct mc_command cmd = { 0 };
7539 + int err;
7540 + /* prepare command */
7541 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_IS_ENABLED, cmd_flags,
7542 + token);
7543 +
7544 + /* send command to mc*/
7545 + err = mc_send_command(mc_io, &cmd);
7546 + if (err)
7547 + return err;
7548 +
7549 + /* retrieve response parameters */
7550 + DPNI_RSP_IS_ENABLED(cmd, *en);
7551 +
7552 + return 0;
7553 +}
7554 +
7555 +int dpni_reset(struct fsl_mc_io *mc_io,
7556 + uint32_t cmd_flags,
7557 + uint16_t token)
7558 +{
7559 + struct mc_command cmd = { 0 };
7560 +
7561 + /* prepare command */
7562 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_RESET,
7563 + cmd_flags,
7564 + token);
7565 +
7566 + /* send command to mc*/
7567 + return mc_send_command(mc_io, &cmd);
7568 +}
7569 +
7570 +int dpni_set_irq(struct fsl_mc_io *mc_io,
7571 + uint32_t cmd_flags,
7572 + uint16_t token,
7573 + uint8_t irq_index,
7574 + struct dpni_irq_cfg *irq_cfg)
7575 +{
7576 + struct mc_command cmd = { 0 };
7577 +
7578 + /* prepare command */
7579 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_IRQ,
7580 + cmd_flags,
7581 + token);
7582 + DPNI_CMD_SET_IRQ(cmd, irq_index, irq_cfg);
7583 +
7584 + /* send command to mc*/
7585 + return mc_send_command(mc_io, &cmd);
7586 +}
7587 +
7588 +int dpni_get_irq(struct fsl_mc_io *mc_io,
7589 + uint32_t cmd_flags,
7590 + uint16_t token,
7591 + uint8_t irq_index,
7592 + int *type,
7593 + struct dpni_irq_cfg *irq_cfg)
7594 +{
7595 + struct mc_command cmd = { 0 };
7596 + int err;
7597 +
7598 + /* prepare command */
7599 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_IRQ,
7600 + cmd_flags,
7601 + token);
7602 + DPNI_CMD_GET_IRQ(cmd, irq_index);
7603 +
7604 + /* send command to mc*/
7605 + err = mc_send_command(mc_io, &cmd);
7606 + if (err)
7607 + return err;
7608 +
7609 + /* retrieve response parameters */
7610 + DPNI_RSP_GET_IRQ(cmd, *type, irq_cfg);
7611 +
7612 + return 0;
7613 +}
7614 +
7615 +int dpni_set_irq_enable(struct fsl_mc_io *mc_io,
7616 + uint32_t cmd_flags,
7617 + uint16_t token,
7618 + uint8_t irq_index,
7619 + uint8_t en)
7620 +{
7621 + struct mc_command cmd = { 0 };
7622 +
7623 + /* prepare command */
7624 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_IRQ_ENABLE,
7625 + cmd_flags,
7626 + token);
7627 + DPNI_CMD_SET_IRQ_ENABLE(cmd, irq_index, en);
7628 +
7629 + /* send command to mc*/
7630 + return mc_send_command(mc_io, &cmd);
7631 +}
7632 +
7633 +int dpni_get_irq_enable(struct fsl_mc_io *mc_io,
7634 + uint32_t cmd_flags,
7635 + uint16_t token,
7636 + uint8_t irq_index,
7637 + uint8_t *en)
7638 +{
7639 + struct mc_command cmd = { 0 };
7640 + int err;
7641 +
7642 + /* prepare command */
7643 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_IRQ_ENABLE,
7644 + cmd_flags,
7645 + token);
7646 + DPNI_CMD_GET_IRQ_ENABLE(cmd, irq_index);
7647 +
7648 + /* send command to mc*/
7649 + err = mc_send_command(mc_io, &cmd);
7650 + if (err)
7651 + return err;
7652 +
7653 + /* retrieve response parameters */
7654 + DPNI_RSP_GET_IRQ_ENABLE(cmd, *en);
7655 +
7656 + return 0;
7657 +}
7658 +
7659 +int dpni_set_irq_mask(struct fsl_mc_io *mc_io,
7660 + uint32_t cmd_flags,
7661 + uint16_t token,
7662 + uint8_t irq_index,
7663 + uint32_t mask)
7664 +{
7665 + struct mc_command cmd = { 0 };
7666 +
7667 + /* prepare command */
7668 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_IRQ_MASK,
7669 + cmd_flags,
7670 + token);
7671 + DPNI_CMD_SET_IRQ_MASK(cmd, irq_index, mask);
7672 +
7673 + /* send command to mc*/
7674 + return mc_send_command(mc_io, &cmd);
7675 +}
7676 +
7677 +int dpni_get_irq_mask(struct fsl_mc_io *mc_io,
7678 + uint32_t cmd_flags,
7679 + uint16_t token,
7680 + uint8_t irq_index,
7681 + uint32_t *mask)
7682 +{
7683 + struct mc_command cmd = { 0 };
7684 + int err;
7685 +
7686 + /* prepare command */
7687 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_IRQ_MASK,
7688 + cmd_flags,
7689 + token);
7690 + DPNI_CMD_GET_IRQ_MASK(cmd, irq_index);
7691 +
7692 + /* send command to mc*/
7693 + err = mc_send_command(mc_io, &cmd);
7694 + if (err)
7695 + return err;
7696 +
7697 + /* retrieve response parameters */
7698 + DPNI_RSP_GET_IRQ_MASK(cmd, *mask);
7699 +
7700 + return 0;
7701 +}
7702 +
7703 +int dpni_get_irq_status(struct fsl_mc_io *mc_io,
7704 + uint32_t cmd_flags,
7705 + uint16_t token,
7706 + uint8_t irq_index,
7707 + uint32_t *status)
7708 +{
7709 + struct mc_command cmd = { 0 };
7710 + int err;
7711 +
7712 + /* prepare command */
7713 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_IRQ_STATUS,
7714 + cmd_flags,
7715 + token);
7716 + DPNI_CMD_GET_IRQ_STATUS(cmd, irq_index, *status);
7717 +
7718 + /* send command to mc*/
7719 + err = mc_send_command(mc_io, &cmd);
7720 + if (err)
7721 + return err;
7722 +
7723 + /* retrieve response parameters */
7724 + DPNI_RSP_GET_IRQ_STATUS(cmd, *status);
7725 +
7726 + return 0;
7727 +}
7728 +
7729 +int dpni_clear_irq_status(struct fsl_mc_io *mc_io,
7730 + uint32_t cmd_flags,
7731 + uint16_t token,
7732 + uint8_t irq_index,
7733 + uint32_t status)
7734 +{
7735 + struct mc_command cmd = { 0 };
7736 +
7737 + /* prepare command */
7738 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLEAR_IRQ_STATUS,
7739 + cmd_flags,
7740 + token);
7741 + DPNI_CMD_CLEAR_IRQ_STATUS(cmd, irq_index, status);
7742 +
7743 + /* send command to mc*/
7744 + return mc_send_command(mc_io, &cmd);
7745 +}
7746 +
7747 +int dpni_get_attributes(struct fsl_mc_io *mc_io,
7748 + uint32_t cmd_flags,
7749 + uint16_t token,
7750 + struct dpni_attr *attr)
7751 +{
7752 + struct mc_command cmd = { 0 };
7753 + int err;
7754 +
7755 + /* prepare command */
7756 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_ATTR,
7757 + cmd_flags,
7758 + token);
7759 + DPNI_CMD_GET_ATTR(cmd, attr);
7760 +
7761 + /* send command to mc*/
7762 + err = mc_send_command(mc_io, &cmd);
7763 + if (err)
7764 + return err;
7765 +
7766 + /* retrieve response parameters */
7767 + DPNI_RSP_GET_ATTR(cmd, attr);
7768 +
7769 + return 0;
7770 +}
7771 +
7772 +int dpni_set_errors_behavior(struct fsl_mc_io *mc_io,
7773 + uint32_t cmd_flags,
7774 + uint16_t token,
7775 + struct dpni_error_cfg *cfg)
7776 +{
7777 + struct mc_command cmd = { 0 };
7778 +
7779 + /* prepare command */
7780 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_ERRORS_BEHAVIOR,
7781 + cmd_flags,
7782 + token);
7783 + DPNI_CMD_SET_ERRORS_BEHAVIOR(cmd, cfg);
7784 +
7785 + /* send command to mc*/
7786 + return mc_send_command(mc_io, &cmd);
7787 +}
7788 +
7789 +int dpni_get_rx_buffer_layout(struct fsl_mc_io *mc_io,
7790 + uint32_t cmd_flags,
7791 + uint16_t token,
7792 + struct dpni_buffer_layout *layout)
7793 +{
7794 + struct mc_command cmd = { 0 };
7795 + int err;
7796 +
7797 + /* prepare command */
7798 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_RX_BUFFER_LAYOUT,
7799 + cmd_flags,
7800 + token);
7801 +
7802 + /* send command to mc*/
7803 + err = mc_send_command(mc_io, &cmd);
7804 + if (err)
7805 + return err;
7806 +
7807 + /* retrieve response parameters */
7808 + DPNI_RSP_GET_RX_BUFFER_LAYOUT(cmd, layout);
7809 +
7810 + return 0;
7811 +}
7812 +
7813 +int dpni_set_rx_buffer_layout(struct fsl_mc_io *mc_io,
7814 + uint32_t cmd_flags,
7815 + uint16_t token,
7816 + const struct dpni_buffer_layout *layout)
7817 +{
7818 + struct mc_command cmd = { 0 };
7819 +
7820 + /* prepare command */
7821 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_RX_BUFFER_LAYOUT,
7822 + cmd_flags,
7823 + token);
7824 + DPNI_CMD_SET_RX_BUFFER_LAYOUT(cmd, layout);
7825 +
7826 + /* send command to mc*/
7827 + return mc_send_command(mc_io, &cmd);
7828 +}
7829 +
7830 +int dpni_get_tx_buffer_layout(struct fsl_mc_io *mc_io,
7831 + uint32_t cmd_flags,
7832 + uint16_t token,
7833 + struct dpni_buffer_layout *layout)
7834 +{
7835 + struct mc_command cmd = { 0 };
7836 + int err;
7837 +
7838 + /* prepare command */
7839 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_BUFFER_LAYOUT,
7840 + cmd_flags,
7841 + token);
7842 +
7843 + /* send command to mc*/
7844 + err = mc_send_command(mc_io, &cmd);
7845 + if (err)
7846 + return err;
7847 +
7848 + /* retrieve response parameters */
7849 + DPNI_RSP_GET_TX_BUFFER_LAYOUT(cmd, layout);
7850 +
7851 + return 0;
7852 +}
7853 +
7854 +int dpni_set_tx_buffer_layout(struct fsl_mc_io *mc_io,
7855 + uint32_t cmd_flags,
7856 + uint16_t token,
7857 + const struct dpni_buffer_layout *layout)
7858 +{
7859 + struct mc_command cmd = { 0 };
7860 +
7861 + /* prepare command */
7862 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_BUFFER_LAYOUT,
7863 + cmd_flags,
7864 + token);
7865 + DPNI_CMD_SET_TX_BUFFER_LAYOUT(cmd, layout);
7866 +
7867 + /* send command to mc*/
7868 + return mc_send_command(mc_io, &cmd);
7869 +}
7870 +
7871 +int dpni_get_tx_conf_buffer_layout(struct fsl_mc_io *mc_io,
7872 + uint32_t cmd_flags,
7873 + uint16_t token,
7874 + struct dpni_buffer_layout *layout)
7875 +{
7876 + struct mc_command cmd = { 0 };
7877 + int err;
7878 +
7879 + /* prepare command */
7880 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_CONF_BUFFER_LAYOUT,
7881 + cmd_flags,
7882 + token);
7883 +
7884 + /* send command to mc*/
7885 + err = mc_send_command(mc_io, &cmd);
7886 + if (err)
7887 + return err;
7888 +
7889 + /* retrieve response parameters */
7890 + DPNI_RSP_GET_TX_CONF_BUFFER_LAYOUT(cmd, layout);
7891 +
7892 + return 0;
7893 +}
7894 +
7895 +int dpni_set_tx_conf_buffer_layout(struct fsl_mc_io *mc_io,
7896 + uint32_t cmd_flags,
7897 + uint16_t token,
7898 + const struct dpni_buffer_layout *layout)
7899 +{
7900 + struct mc_command cmd = { 0 };
7901 +
7902 + /* prepare command */
7903 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_CONF_BUFFER_LAYOUT,
7904 + cmd_flags,
7905 + token);
7906 + DPNI_CMD_SET_TX_CONF_BUFFER_LAYOUT(cmd, layout);
7907 +
7908 + /* send command to mc*/
7909 + return mc_send_command(mc_io, &cmd);
7910 +}
7911 +
7912 +int dpni_get_l3_chksum_validation(struct fsl_mc_io *mc_io,
7913 + uint32_t cmd_flags,
7914 + uint16_t token,
7915 + int *en)
7916 +{
7917 + struct mc_command cmd = { 0 };
7918 + int err;
7919 +
7920 + /* prepare command */
7921 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_L3_CHKSUM_VALIDATION,
7922 + cmd_flags,
7923 + token);
7924 +
7925 + /* send command to mc*/
7926 + err = mc_send_command(mc_io, &cmd);
7927 + if (err)
7928 + return err;
7929 +
7930 + /* retrieve response parameters */
7931 + DPNI_RSP_GET_L3_CHKSUM_VALIDATION(cmd, *en);
7932 +
7933 + return 0;
7934 +}
7935 +
7936 +int dpni_set_l3_chksum_validation(struct fsl_mc_io *mc_io,
7937 + uint32_t cmd_flags,
7938 + uint16_t token,
7939 + int en)
7940 +{
7941 + struct mc_command cmd = { 0 };
7942 +
7943 + /* prepare command */
7944 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_L3_CHKSUM_VALIDATION,
7945 + cmd_flags,
7946 + token);
7947 + DPNI_CMD_SET_L3_CHKSUM_VALIDATION(cmd, en);
7948 +
7949 + /* send command to mc*/
7950 + return mc_send_command(mc_io, &cmd);
7951 +}
7952 +
7953 +int dpni_get_l4_chksum_validation(struct fsl_mc_io *mc_io,
7954 + uint32_t cmd_flags,
7955 + uint16_t token,
7956 + int *en)
7957 +{
7958 + struct mc_command cmd = { 0 };
7959 + int err;
7960 +
7961 + /* prepare command */
7962 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_L4_CHKSUM_VALIDATION,
7963 + cmd_flags,
7964 + token);
7965 +
7966 + /* send command to mc*/
7967 + err = mc_send_command(mc_io, &cmd);
7968 + if (err)
7969 + return err;
7970 +
7971 + /* retrieve response parameters */
7972 + DPNI_RSP_GET_L4_CHKSUM_VALIDATION(cmd, *en);
7973 +
7974 + return 0;
7975 +}
7976 +
7977 +int dpni_set_l4_chksum_validation(struct fsl_mc_io *mc_io,
7978 + uint32_t cmd_flags,
7979 + uint16_t token,
7980 + int en)
7981 +{
7982 + struct mc_command cmd = { 0 };
7983 +
7984 + /* prepare command */
7985 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_L4_CHKSUM_VALIDATION,
7986 + cmd_flags,
7987 + token);
7988 + DPNI_CMD_SET_L4_CHKSUM_VALIDATION(cmd, en);
7989 +
7990 + /* send command to mc*/
7991 + return mc_send_command(mc_io, &cmd);
7992 +}
7993 +
7994 +int dpni_get_qdid(struct fsl_mc_io *mc_io,
7995 + uint32_t cmd_flags,
7996 + uint16_t token,
7997 + uint16_t *qdid)
7998 +{
7999 + struct mc_command cmd = { 0 };
8000 + int err;
8001 +
8002 + /* prepare command */
8003 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_QDID,
8004 + cmd_flags,
8005 + token);
8006 +
8007 + /* send command to mc*/
8008 + err = mc_send_command(mc_io, &cmd);
8009 + if (err)
8010 + return err;
8011 +
8012 + /* retrieve response parameters */
8013 + DPNI_RSP_GET_QDID(cmd, *qdid);
8014 +
8015 + return 0;
8016 +}
8017 +
8018 +int dpni_get_sp_info(struct fsl_mc_io *mc_io,
8019 + uint32_t cmd_flags,
8020 + uint16_t token,
8021 + struct dpni_sp_info *sp_info)
8022 +{
8023 + struct mc_command cmd = { 0 };
8024 + int err;
8025 +
8026 + /* prepare command */
8027 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_SP_INFO,
8028 + cmd_flags,
8029 + token);
8030 +
8031 + /* send command to mc*/
8032 + err = mc_send_command(mc_io, &cmd);
8033 + if (err)
8034 + return err;
8035 +
8036 + /* retrieve response parameters */
8037 + DPNI_RSP_GET_SP_INFO(cmd, sp_info);
8038 +
8039 + return 0;
8040 +}
8041 +
8042 +int dpni_get_tx_data_offset(struct fsl_mc_io *mc_io,
8043 + uint32_t cmd_flags,
8044 + uint16_t token,
8045 + uint16_t *data_offset)
8046 +{
8047 + struct mc_command cmd = { 0 };
8048 + int err;
8049 +
8050 + /* prepare command */
8051 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_DATA_OFFSET,
8052 + cmd_flags,
8053 + token);
8054 +
8055 + /* send command to mc*/
8056 + err = mc_send_command(mc_io, &cmd);
8057 + if (err)
8058 + return err;
8059 +
8060 + /* retrieve response parameters */
8061 + DPNI_RSP_GET_TX_DATA_OFFSET(cmd, *data_offset);
8062 +
8063 + return 0;
8064 +}
8065 +
8066 +int dpni_get_counter(struct fsl_mc_io *mc_io,
8067 + uint32_t cmd_flags,
8068 + uint16_t token,
8069 + enum dpni_counter counter,
8070 + uint64_t *value)
8071 +{
8072 + struct mc_command cmd = { 0 };
8073 + int err;
8074 +
8075 + /* prepare command */
8076 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_COUNTER,
8077 + cmd_flags,
8078 + token);
8079 + DPNI_CMD_GET_COUNTER(cmd, counter);
8080 +
8081 + /* send command to mc*/
8082 + err = mc_send_command(mc_io, &cmd);
8083 + if (err)
8084 + return err;
8085 +
8086 + /* retrieve response parameters */
8087 + DPNI_RSP_GET_COUNTER(cmd, *value);
8088 +
8089 + return 0;
8090 +}
8091 +
8092 +int dpni_set_counter(struct fsl_mc_io *mc_io,
8093 + uint32_t cmd_flags,
8094 + uint16_t token,
8095 + enum dpni_counter counter,
8096 + uint64_t value)
8097 +{
8098 + struct mc_command cmd = { 0 };
8099 +
8100 + /* prepare command */
8101 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_COUNTER,
8102 + cmd_flags,
8103 + token);
8104 + DPNI_CMD_SET_COUNTER(cmd, counter, value);
8105 +
8106 + /* send command to mc*/
8107 + return mc_send_command(mc_io, &cmd);
8108 +}
8109 +
8110 +int dpni_set_link_cfg(struct fsl_mc_io *mc_io,
8111 + uint32_t cmd_flags,
8112 + uint16_t token,
8113 + const struct dpni_link_cfg *cfg)
8114 +{
8115 + struct mc_command cmd = { 0 };
8116 +
8117 + /* prepare command */
8118 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_LINK_CFG,
8119 + cmd_flags,
8120 + token);
8121 + DPNI_CMD_SET_LINK_CFG(cmd, cfg);
8122 +
8123 + /* send command to mc*/
8124 + return mc_send_command(mc_io, &cmd);
8125 +}
8126 +
8127 +int dpni_get_link_state(struct fsl_mc_io *mc_io,
8128 + uint32_t cmd_flags,
8129 + uint16_t token,
8130 + struct dpni_link_state *state)
8131 +{
8132 + struct mc_command cmd = { 0 };
8133 + int err;
8134 +
8135 + /* prepare command */
8136 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_LINK_STATE,
8137 + cmd_flags,
8138 + token);
8139 +
8140 + /* send command to mc*/
8141 + err = mc_send_command(mc_io, &cmd);
8142 + if (err)
8143 + return err;
8144 +
8145 + /* retrieve response parameters */
8146 + DPNI_RSP_GET_LINK_STATE(cmd, state);
8147 +
8148 + return 0;
8149 +}
8150 +
8151 +int dpni_set_tx_shaping(struct fsl_mc_io *mc_io,
8152 + uint32_t cmd_flags,
8153 + uint16_t token,
8154 + const struct dpni_tx_shaping_cfg *tx_shaper)
8155 +{
8156 + struct mc_command cmd = { 0 };
8157 +
8158 + /* prepare command */
8159 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_SHAPING,
8160 + cmd_flags,
8161 + token);
8162 + DPNI_CMD_SET_TX_SHAPING(cmd, tx_shaper);
8163 +
8164 + /* send command to mc*/
8165 + return mc_send_command(mc_io, &cmd);
8166 +}
8167 +
8168 +int dpni_set_max_frame_length(struct fsl_mc_io *mc_io,
8169 + uint32_t cmd_flags,
8170 + uint16_t token,
8171 + uint16_t max_frame_length)
8172 +{
8173 + struct mc_command cmd = { 0 };
8174 +
8175 + /* prepare command */
8176 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_MAX_FRAME_LENGTH,
8177 + cmd_flags,
8178 + token);
8179 + DPNI_CMD_SET_MAX_FRAME_LENGTH(cmd, max_frame_length);
8180 +
8181 + /* send command to mc*/
8182 + return mc_send_command(mc_io, &cmd);
8183 +}
8184 +
8185 +int dpni_get_max_frame_length(struct fsl_mc_io *mc_io,
8186 + uint32_t cmd_flags,
8187 + uint16_t token,
8188 + uint16_t *max_frame_length)
8189 +{
8190 + struct mc_command cmd = { 0 };
8191 + int err;
8192 +
8193 + /* prepare command */
8194 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_MAX_FRAME_LENGTH,
8195 + cmd_flags,
8196 + token);
8197 +
8198 + /* send command to mc*/
8199 + err = mc_send_command(mc_io, &cmd);
8200 + if (err)
8201 + return err;
8202 +
8203 + /* retrieve response parameters */
8204 + DPNI_RSP_GET_MAX_FRAME_LENGTH(cmd, *max_frame_length);
8205 +
8206 + return 0;
8207 +}
8208 +
8209 +int dpni_set_mtu(struct fsl_mc_io *mc_io,
8210 + uint32_t cmd_flags,
8211 + uint16_t token,
8212 + uint16_t mtu)
8213 +{
8214 + struct mc_command cmd = { 0 };
8215 +
8216 + /* prepare command */
8217 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_MTU,
8218 + cmd_flags,
8219 + token);
8220 + DPNI_CMD_SET_MTU(cmd, mtu);
8221 +
8222 + /* send command to mc*/
8223 + return mc_send_command(mc_io, &cmd);
8224 +}
8225 +
8226 +int dpni_get_mtu(struct fsl_mc_io *mc_io,
8227 + uint32_t cmd_flags,
8228 + uint16_t token,
8229 + uint16_t *mtu)
8230 +{
8231 + struct mc_command cmd = { 0 };
8232 + int err;
8233 +
8234 + /* prepare command */
8235 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_MTU,
8236 + cmd_flags,
8237 + token);
8238 +
8239 + /* send command to mc*/
8240 + err = mc_send_command(mc_io, &cmd);
8241 + if (err)
8242 + return err;
8243 +
8244 + /* retrieve response parameters */
8245 + DPNI_RSP_GET_MTU(cmd, *mtu);
8246 +
8247 + return 0;
8248 +}
8249 +
8250 +int dpni_set_multicast_promisc(struct fsl_mc_io *mc_io,
8251 + uint32_t cmd_flags,
8252 + uint16_t token,
8253 + int en)
8254 +{
8255 + struct mc_command cmd = { 0 };
8256 +
8257 + /* prepare command */
8258 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_MCAST_PROMISC,
8259 + cmd_flags,
8260 + token);
8261 + DPNI_CMD_SET_MULTICAST_PROMISC(cmd, en);
8262 +
8263 + /* send command to mc*/
8264 + return mc_send_command(mc_io, &cmd);
8265 +}
8266 +
8267 +int dpni_get_multicast_promisc(struct fsl_mc_io *mc_io,
8268 + uint32_t cmd_flags,
8269 + uint16_t token,
8270 + int *en)
8271 +{
8272 + struct mc_command cmd = { 0 };
8273 + int err;
8274 +
8275 + /* prepare command */
8276 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_MCAST_PROMISC,
8277 + cmd_flags,
8278 + token);
8279 +
8280 + /* send command to mc*/
8281 + err = mc_send_command(mc_io, &cmd);
8282 + if (err)
8283 + return err;
8284 +
8285 + /* retrieve response parameters */
8286 + DPNI_RSP_GET_MULTICAST_PROMISC(cmd, *en);
8287 +
8288 + return 0;
8289 +}
8290 +
8291 +int dpni_set_unicast_promisc(struct fsl_mc_io *mc_io,
8292 + uint32_t cmd_flags,
8293 + uint16_t token,
8294 + int en)
8295 +{
8296 + struct mc_command cmd = { 0 };
8297 +
8298 + /* prepare command */
8299 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_UNICAST_PROMISC,
8300 + cmd_flags,
8301 + token);
8302 + DPNI_CMD_SET_UNICAST_PROMISC(cmd, en);
8303 +
8304 + /* send command to mc*/
8305 + return mc_send_command(mc_io, &cmd);
8306 +}
8307 +
8308 +int dpni_get_unicast_promisc(struct fsl_mc_io *mc_io,
8309 + uint32_t cmd_flags,
8310 + uint16_t token,
8311 + int *en)
8312 +{
8313 + struct mc_command cmd = { 0 };
8314 + int err;
8315 +
8316 + /* prepare command */
8317 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_UNICAST_PROMISC,
8318 + cmd_flags,
8319 + token);
8320 +
8321 + /* send command to mc*/
8322 + err = mc_send_command(mc_io, &cmd);
8323 + if (err)
8324 + return err;
8325 +
8326 + /* retrieve response parameters */
8327 + DPNI_RSP_GET_UNICAST_PROMISC(cmd, *en);
8328 +
8329 + return 0;
8330 +}
8331 +
8332 +int dpni_set_primary_mac_addr(struct fsl_mc_io *mc_io,
8333 + uint32_t cmd_flags,
8334 + uint16_t token,
8335 + const uint8_t mac_addr[6])
8336 +{
8337 + struct mc_command cmd = { 0 };
8338 +
8339 + /* prepare command */
8340 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_PRIM_MAC,
8341 + cmd_flags,
8342 + token);
8343 + DPNI_CMD_SET_PRIMARY_MAC_ADDR(cmd, mac_addr);
8344 +
8345 + /* send command to mc*/
8346 + return mc_send_command(mc_io, &cmd);
8347 +}
8348 +
8349 +int dpni_get_primary_mac_addr(struct fsl_mc_io *mc_io,
8350 + uint32_t cmd_flags,
8351 + uint16_t token,
8352 + uint8_t mac_addr[6])
8353 +{
8354 + struct mc_command cmd = { 0 };
8355 + int err;
8356 +
8357 + /* prepare command */
8358 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_PRIM_MAC,
8359 + cmd_flags,
8360 + token);
8361 +
8362 + /* send command to mc*/
8363 + err = mc_send_command(mc_io, &cmd);
8364 + if (err)
8365 + return err;
8366 +
8367 + /* retrieve response parameters */
8368 + DPNI_RSP_GET_PRIMARY_MAC_ADDR(cmd, mac_addr);
8369 +
8370 + return 0;
8371 +}
8372 +
8373 +int dpni_add_mac_addr(struct fsl_mc_io *mc_io,
8374 + uint32_t cmd_flags,
8375 + uint16_t token,
8376 + const uint8_t mac_addr[6])
8377 +{
8378 + struct mc_command cmd = { 0 };
8379 +
8380 + /* prepare command */
8381 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_ADD_MAC_ADDR,
8382 + cmd_flags,
8383 + token);
8384 + DPNI_CMD_ADD_MAC_ADDR(cmd, mac_addr);
8385 +
8386 + /* send command to mc*/
8387 + return mc_send_command(mc_io, &cmd);
8388 +}
8389 +
8390 +int dpni_remove_mac_addr(struct fsl_mc_io *mc_io,
8391 + uint32_t cmd_flags,
8392 + uint16_t token,
8393 + const uint8_t mac_addr[6])
8394 +{
8395 + struct mc_command cmd = { 0 };
8396 +
8397 + /* prepare command */
8398 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_REMOVE_MAC_ADDR,
8399 + cmd_flags,
8400 + token);
8401 + DPNI_CMD_REMOVE_MAC_ADDR(cmd, mac_addr);
8402 +
8403 + /* send command to mc*/
8404 + return mc_send_command(mc_io, &cmd);
8405 +}
8406 +
8407 +int dpni_clear_mac_filters(struct fsl_mc_io *mc_io,
8408 + uint32_t cmd_flags,
8409 + uint16_t token,
8410 + int unicast,
8411 + int multicast)
8412 +{
8413 + struct mc_command cmd = { 0 };
8414 +
8415 + /* prepare command */
8416 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_MAC_FILTERS,
8417 + cmd_flags,
8418 + token);
8419 + DPNI_CMD_CLEAR_MAC_FILTERS(cmd, unicast, multicast);
8420 +
8421 + /* send command to mc*/
8422 + return mc_send_command(mc_io, &cmd);
8423 +}
8424 +
8425 +int dpni_set_vlan_filters(struct fsl_mc_io *mc_io,
8426 + uint32_t cmd_flags,
8427 + uint16_t token,
8428 + int en)
8429 +{
8430 + struct mc_command cmd = { 0 };
8431 +
8432 + /* prepare command */
8433 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_VLAN_FILTERS,
8434 + cmd_flags,
8435 + token);
8436 + DPNI_CMD_SET_VLAN_FILTERS(cmd, en);
8437 +
8438 + /* send command to mc*/
8439 + return mc_send_command(mc_io, &cmd);
8440 +}
8441 +
8442 +int dpni_add_vlan_id(struct fsl_mc_io *mc_io,
8443 + uint32_t cmd_flags,
8444 + uint16_t token,
8445 + uint16_t vlan_id)
8446 +{
8447 + struct mc_command cmd = { 0 };
8448 +
8449 + /* prepare command */
8450 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_ADD_VLAN_ID,
8451 + cmd_flags,
8452 + token);
8453 + DPNI_CMD_ADD_VLAN_ID(cmd, vlan_id);
8454 +
8455 + /* send command to mc*/
8456 + return mc_send_command(mc_io, &cmd);
8457 +}
8458 +
8459 +int dpni_remove_vlan_id(struct fsl_mc_io *mc_io,
8460 + uint32_t cmd_flags,
8461 + uint16_t token,
8462 + uint16_t vlan_id)
8463 +{
8464 + struct mc_command cmd = { 0 };
8465 +
8466 + /* prepare command */
8467 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_REMOVE_VLAN_ID,
8468 + cmd_flags,
8469 + token);
8470 + DPNI_CMD_REMOVE_VLAN_ID(cmd, vlan_id);
8471 +
8472 + /* send command to mc*/
8473 + return mc_send_command(mc_io, &cmd);
8474 +}
8475 +
8476 +int dpni_clear_vlan_filters(struct fsl_mc_io *mc_io,
8477 + uint32_t cmd_flags,
8478 + uint16_t token)
8479 +{
8480 + struct mc_command cmd = { 0 };
8481 +
8482 + /* prepare command */
8483 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_VLAN_FILTERS,
8484 + cmd_flags,
8485 + token);
8486 +
8487 + /* send command to mc*/
8488 + return mc_send_command(mc_io, &cmd);
8489 +}
8490 +
8491 +int dpni_set_tx_selection(struct fsl_mc_io *mc_io,
8492 + uint32_t cmd_flags,
8493 + uint16_t token,
8494 + const struct dpni_tx_selection_cfg *cfg)
8495 +{
8496 + struct mc_command cmd = { 0 };
8497 +
8498 + /* prepare command */
8499 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_SELECTION,
8500 + cmd_flags,
8501 + token);
8502 + DPNI_CMD_SET_TX_SELECTION(cmd, cfg);
8503 +
8504 + /* send command to mc*/
8505 + return mc_send_command(mc_io, &cmd);
8506 +}
8507 +
8508 +int dpni_set_rx_tc_dist(struct fsl_mc_io *mc_io,
8509 + uint32_t cmd_flags,
8510 + uint16_t token,
8511 + uint8_t tc_id,
8512 + const struct dpni_rx_tc_dist_cfg *cfg)
8513 +{
8514 + struct mc_command cmd = { 0 };
8515 +
8516 + /* prepare command */
8517 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_RX_TC_DIST,
8518 + cmd_flags,
8519 + token);
8520 + DPNI_CMD_SET_RX_TC_DIST(cmd, tc_id, cfg);
8521 +
8522 + /* send command to mc*/
8523 + return mc_send_command(mc_io, &cmd);
8524 +}
8525 +
8526 +int dpni_set_tx_flow(struct fsl_mc_io *mc_io,
8527 + uint32_t cmd_flags,
8528 + uint16_t token,
8529 + uint16_t *flow_id,
8530 + const struct dpni_tx_flow_cfg *cfg)
8531 +{
8532 + struct mc_command cmd = { 0 };
8533 + int err;
8534 +
8535 + /* prepare command */
8536 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_FLOW,
8537 + cmd_flags,
8538 + token);
8539 + DPNI_CMD_SET_TX_FLOW(cmd, *flow_id, cfg);
8540 +
8541 + /* send command to mc*/
8542 + err = mc_send_command(mc_io, &cmd);
8543 + if (err)
8544 + return err;
8545 +
8546 + /* retrieve response parameters */
8547 + DPNI_RSP_SET_TX_FLOW(cmd, *flow_id);
8548 +
8549 + return 0;
8550 +}
8551 +
8552 +int dpni_get_tx_flow(struct fsl_mc_io *mc_io,
8553 + uint32_t cmd_flags,
8554 + uint16_t token,
8555 + uint16_t flow_id,
8556 + struct dpni_tx_flow_attr *attr)
8557 +{
8558 + struct mc_command cmd = { 0 };
8559 + int err;
8560 +
8561 + /* prepare command */
8562 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_FLOW,
8563 + cmd_flags,
8564 + token);
8565 + DPNI_CMD_GET_TX_FLOW(cmd, flow_id);
8566 +
8567 + /* send command to mc*/
8568 + err = mc_send_command(mc_io, &cmd);
8569 + if (err)
8570 + return err;
8571 +
8572 + /* retrieve response parameters */
8573 + DPNI_RSP_GET_TX_FLOW(cmd, attr);
8574 +
8575 + return 0;
8576 +}
8577 +
8578 +int dpni_set_rx_flow(struct fsl_mc_io *mc_io,
8579 + uint32_t cmd_flags,
8580 + uint16_t token,
8581 + uint8_t tc_id,
8582 + uint16_t flow_id,
8583 + const struct dpni_queue_cfg *cfg)
8584 +{
8585 + struct mc_command cmd = { 0 };
8586 +
8587 + /* prepare command */
8588 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_RX_FLOW,
8589 + cmd_flags,
8590 + token);
8591 + DPNI_CMD_SET_RX_FLOW(cmd, tc_id, flow_id, cfg);
8592 +
8593 + /* send command to mc*/
8594 + return mc_send_command(mc_io, &cmd);
8595 +}
8596 +
8597 +int dpni_get_rx_flow(struct fsl_mc_io *mc_io,
8598 + uint32_t cmd_flags,
8599 + uint16_t token,
8600 + uint8_t tc_id,
8601 + uint16_t flow_id,
8602 + struct dpni_queue_attr *attr)
8603 +{
8604 + struct mc_command cmd = { 0 };
8605 + int err;
8606 + /* prepare command */
8607 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_RX_FLOW,
8608 + cmd_flags,
8609 + token);
8610 + DPNI_CMD_GET_RX_FLOW(cmd, tc_id, flow_id);
8611 +
8612 + /* send command to mc*/
8613 + err = mc_send_command(mc_io, &cmd);
8614 + if (err)
8615 + return err;
8616 +
8617 + /* retrieve response parameters */
8618 + DPNI_RSP_GET_RX_FLOW(cmd, attr);
8619 +
8620 + return 0;
8621 +}
8622 +
8623 +int dpni_set_rx_err_queue(struct fsl_mc_io *mc_io,
8624 + uint32_t cmd_flags,
8625 + uint16_t token,
8626 + const struct dpni_queue_cfg *cfg)
8627 +{
8628 + struct mc_command cmd = { 0 };
8629 +
8630 + /* prepare command */
8631 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_RX_ERR_QUEUE,
8632 + cmd_flags,
8633 + token);
8634 + DPNI_CMD_SET_RX_ERR_QUEUE(cmd, cfg);
8635 +
8636 + /* send command to mc*/
8637 + return mc_send_command(mc_io, &cmd);
8638 +}
8639 +
8640 +int dpni_get_rx_err_queue(struct fsl_mc_io *mc_io,
8641 + uint32_t cmd_flags,
8642 + uint16_t token,
8643 + struct dpni_queue_attr *attr)
8644 +{
8645 + struct mc_command cmd = { 0 };
8646 + int err;
8647 +
8648 + /* prepare command */
8649 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_RX_ERR_QUEUE,
8650 + cmd_flags,
8651 + token);
8652 +
8653 + /* send command to mc*/
8654 + err = mc_send_command(mc_io, &cmd);
8655 + if (err)
8656 + return err;
8657 +
8658 + /* retrieve response parameters */
8659 + DPNI_RSP_GET_RX_ERR_QUEUE(cmd, attr);
8660 +
8661 + return 0;
8662 +}
8663 +
8664 +int dpni_set_tx_conf_revoke(struct fsl_mc_io *mc_io,
8665 + uint32_t cmd_flags,
8666 + uint16_t token,
8667 + int revoke)
8668 +{
8669 + struct mc_command cmd = { 0 };
8670 +
8671 + /* prepare command */
8672 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_CONF_REVOKE,
8673 + cmd_flags,
8674 + token);
8675 + DPNI_CMD_SET_TX_CONF_REVOKE(cmd, revoke);
8676 +
8677 + /* send command to mc*/
8678 + return mc_send_command(mc_io, &cmd);
8679 +}
8680 +
8681 +int dpni_set_qos_table(struct fsl_mc_io *mc_io,
8682 + uint32_t cmd_flags,
8683 + uint16_t token,
8684 + const struct dpni_qos_tbl_cfg *cfg)
8685 +{
8686 + struct mc_command cmd = { 0 };
8687 +
8688 + /* prepare command */
8689 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_QOS_TBL,
8690 + cmd_flags,
8691 + token);
8692 + DPNI_CMD_SET_QOS_TABLE(cmd, cfg);
8693 +
8694 + /* send command to mc*/
8695 + return mc_send_command(mc_io, &cmd);
8696 +}
8697 +
8698 +int dpni_add_qos_entry(struct fsl_mc_io *mc_io,
8699 + uint32_t cmd_flags,
8700 + uint16_t token,
8701 + const struct dpni_rule_cfg *cfg,
8702 + uint8_t tc_id)
8703 +{
8704 + struct mc_command cmd = { 0 };
8705 +
8706 + /* prepare command */
8707 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_ADD_QOS_ENT,
8708 + cmd_flags,
8709 + token);
8710 + DPNI_CMD_ADD_QOS_ENTRY(cmd, cfg, tc_id);
8711 +
8712 + /* send command to mc*/
8713 + return mc_send_command(mc_io, &cmd);
8714 +}
8715 +
8716 +int dpni_remove_qos_entry(struct fsl_mc_io *mc_io,
8717 + uint32_t cmd_flags,
8718 + uint16_t token,
8719 + const struct dpni_rule_cfg *cfg)
8720 +{
8721 + struct mc_command cmd = { 0 };
8722 +
8723 + /* prepare command */
8724 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_REMOVE_QOS_ENT,
8725 + cmd_flags,
8726 + token);
8727 + DPNI_CMD_REMOVE_QOS_ENTRY(cmd, cfg);
8728 +
8729 + /* send command to mc*/
8730 + return mc_send_command(mc_io, &cmd);
8731 +}
8732 +
8733 +int dpni_clear_qos_table(struct fsl_mc_io *mc_io,
8734 + uint32_t cmd_flags,
8735 + uint16_t token)
8736 +{
8737 + struct mc_command cmd = { 0 };
8738 +
8739 + /* prepare command */
8740 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_QOS_TBL,
8741 + cmd_flags,
8742 + token);
8743 +
8744 + /* send command to mc*/
8745 + return mc_send_command(mc_io, &cmd);
8746 +}
8747 +
8748 +int dpni_add_fs_entry(struct fsl_mc_io *mc_io,
8749 + uint32_t cmd_flags,
8750 + uint16_t token,
8751 + uint8_t tc_id,
8752 + const struct dpni_rule_cfg *cfg,
8753 + uint16_t flow_id)
8754 +{
8755 + struct mc_command cmd = { 0 };
8756 +
8757 + /* prepare command */
8758 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_ADD_FS_ENT,
8759 + cmd_flags,
8760 + token);
8761 + DPNI_CMD_ADD_FS_ENTRY(cmd, tc_id, cfg, flow_id);
8762 +
8763 + /* send command to mc*/
8764 + return mc_send_command(mc_io, &cmd);
8765 +}
8766 +
8767 +int dpni_remove_fs_entry(struct fsl_mc_io *mc_io,
8768 + uint32_t cmd_flags,
8769 + uint16_t token,
8770 + uint8_t tc_id,
8771 + const struct dpni_rule_cfg *cfg)
8772 +{
8773 + struct mc_command cmd = { 0 };
8774 +
8775 + /* prepare command */
8776 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_REMOVE_FS_ENT,
8777 + cmd_flags,
8778 + token);
8779 + DPNI_CMD_REMOVE_FS_ENTRY(cmd, tc_id, cfg);
8780 +
8781 + /* send command to mc*/
8782 + return mc_send_command(mc_io, &cmd);
8783 +}
8784 +
8785 +int dpni_clear_fs_entries(struct fsl_mc_io *mc_io,
8786 + uint32_t cmd_flags,
8787 + uint16_t token,
8788 + uint8_t tc_id)
8789 +{
8790 + struct mc_command cmd = { 0 };
8791 +
8792 + /* prepare command */
8793 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_FS_ENT,
8794 + cmd_flags,
8795 + token);
8796 + DPNI_CMD_CLEAR_FS_ENTRIES(cmd, tc_id);
8797 +
8798 + /* send command to mc*/
8799 + return mc_send_command(mc_io, &cmd);
8800 +}
8801 +
8802 +int dpni_set_vlan_insertion(struct fsl_mc_io *mc_io,
8803 + uint32_t cmd_flags,
8804 + uint16_t token,
8805 + int en)
8806 +{
8807 + struct mc_command cmd = { 0 };
8808 +
8809 + /* prepare command */
8810 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_VLAN_INSERTION,
8811 + cmd_flags, token);
8812 + DPNI_CMD_SET_VLAN_INSERTION(cmd, en);
8813 +
8814 + /* send command to mc*/
8815 + return mc_send_command(mc_io, &cmd);
8816 +}
8817 +
8818 +int dpni_set_vlan_removal(struct fsl_mc_io *mc_io,
8819 + uint32_t cmd_flags,
8820 + uint16_t token,
8821 + int en)
8822 +{
8823 + struct mc_command cmd = { 0 };
8824 +
8825 + /* prepare command */
8826 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_VLAN_REMOVAL,
8827 + cmd_flags, token);
8828 + DPNI_CMD_SET_VLAN_REMOVAL(cmd, en);
8829 +
8830 + /* send command to mc*/
8831 + return mc_send_command(mc_io, &cmd);
8832 +}
8833 +
8834 +int dpni_set_ipr(struct fsl_mc_io *mc_io,
8835 + uint32_t cmd_flags,
8836 + uint16_t token,
8837 + int en)
8838 +{
8839 + struct mc_command cmd = { 0 };
8840 +
8841 + /* prepare command */
8842 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_IPR,
8843 + cmd_flags,
8844 + token);
8845 + DPNI_CMD_SET_IPR(cmd, en);
8846 +
8847 + /* send command to mc*/
8848 + return mc_send_command(mc_io, &cmd);
8849 +}
8850 +
8851 +int dpni_set_ipf(struct fsl_mc_io *mc_io,
8852 + uint32_t cmd_flags,
8853 + uint16_t token,
8854 + int en)
8855 +{
8856 + struct mc_command cmd = { 0 };
8857 +
8858 + /* prepare command */
8859 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_IPF,
8860 + cmd_flags,
8861 + token);
8862 + DPNI_CMD_SET_IPF(cmd, en);
8863 +
8864 + /* send command to mc*/
8865 + return mc_send_command(mc_io, &cmd);
8866 +}
8867 +
8868 +int dpni_set_rx_tc_policing(struct fsl_mc_io *mc_io,
8869 + uint32_t cmd_flags,
8870 + uint16_t token,
8871 + uint8_t tc_id,
8872 + const struct dpni_rx_tc_policing_cfg *cfg)
8873 +{
8874 + struct mc_command cmd = { 0 };
8875 +
8876 + /* prepare command */
8877 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_RX_TC_POLICING,
8878 + cmd_flags,
8879 + token);
8880 + DPNI_CMD_SET_RX_TC_POLICING(cmd, tc_id, cfg);
8881 +
8882 + /* send command to mc*/
8883 + return mc_send_command(mc_io, &cmd);
8884 +}
8885 +
8886 +int dpni_get_rx_tc_policing(struct fsl_mc_io *mc_io,
8887 + uint32_t cmd_flags,
8888 + uint16_t token,
8889 + uint8_t tc_id,
8890 + struct dpni_rx_tc_policing_cfg *cfg)
8891 +{
8892 + struct mc_command cmd = { 0 };
8893 + int err;
8894 +
8895 + /* prepare command */
8896 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_RX_TC_POLICING,
8897 + cmd_flags,
8898 + token);
8899 + DPNI_CMD_GET_RX_TC_POLICING(cmd, tc_id);
8900 +
8901 + /* send command to mc*/
8902 + err = mc_send_command(mc_io, &cmd);
8903 + if (err)
8904 + return err;
8905 +
8906 + DPNI_RSP_GET_RX_TC_POLICING(cmd, cfg);
8907 +
8908 + return 0;
8909 +}
8910 +
8911 +void dpni_prepare_early_drop(const struct dpni_early_drop_cfg *cfg,
8912 + uint8_t *early_drop_buf)
8913 +{
8914 + uint64_t *ext_params = (uint64_t *)early_drop_buf;
8915 +
8916 + DPNI_PREP_EARLY_DROP(ext_params, cfg);
8917 +}
8918 +
8919 +void dpni_extract_early_drop(struct dpni_early_drop_cfg *cfg,
8920 + const uint8_t *early_drop_buf)
8921 +{
8922 + uint64_t *ext_params = (uint64_t *)early_drop_buf;
8923 +
8924 + DPNI_EXT_EARLY_DROP(ext_params, cfg);
8925 +}
8926 +
8927 +int dpni_set_rx_tc_early_drop(struct fsl_mc_io *mc_io,
8928 + uint32_t cmd_flags,
8929 + uint16_t token,
8930 + uint8_t tc_id,
8931 + uint64_t early_drop_iova)
8932 +{
8933 + struct mc_command cmd = { 0 };
8934 +
8935 + /* prepare command */
8936 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_RX_TC_EARLY_DROP,
8937 + cmd_flags,
8938 + token);
8939 + DPNI_CMD_SET_RX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova);
8940 +
8941 + /* send command to mc*/
8942 + return mc_send_command(mc_io, &cmd);
8943 +}
8944 +
8945 +int dpni_get_rx_tc_early_drop(struct fsl_mc_io *mc_io,
8946 + uint32_t cmd_flags,
8947 + uint16_t token,
8948 + uint8_t tc_id,
8949 + uint64_t early_drop_iova)
8950 +{
8951 + struct mc_command cmd = { 0 };
8952 +
8953 + /* prepare command */
8954 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_RX_TC_EARLY_DROP,
8955 + cmd_flags,
8956 + token);
8957 + DPNI_CMD_GET_RX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova);
8958 +
8959 + /* send command to mc*/
8960 + return mc_send_command(mc_io, &cmd);
8961 +}
8962 +
8963 +int dpni_set_tx_tc_early_drop(struct fsl_mc_io *mc_io,
8964 + uint32_t cmd_flags,
8965 + uint16_t token,
8966 + uint8_t tc_id,
8967 + uint64_t early_drop_iova)
8968 +{
8969 + struct mc_command cmd = { 0 };
8970 +
8971 + /* prepare command */
8972 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_TC_EARLY_DROP,
8973 + cmd_flags,
8974 + token);
8975 + DPNI_CMD_SET_TX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova);
8976 +
8977 + /* send command to mc*/
8978 + return mc_send_command(mc_io, &cmd);
8979 +}
8980 +
8981 +int dpni_get_tx_tc_early_drop(struct fsl_mc_io *mc_io,
8982 + uint32_t cmd_flags,
8983 + uint16_t token,
8984 + uint8_t tc_id,
8985 + uint64_t early_drop_iova)
8986 +{
8987 + struct mc_command cmd = { 0 };
8988 +
8989 + /* prepare command */
8990 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_TC_EARLY_DROP,
8991 + cmd_flags,
8992 + token);
8993 + DPNI_CMD_GET_TX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova);
8994 +
8995 + /* send command to mc*/
8996 + return mc_send_command(mc_io, &cmd);
8997 +}
8998 +
8999 +int dpni_set_rx_tc_congestion_notification(struct fsl_mc_io *mc_io,
9000 + uint32_t cmd_flags,
9001 + uint16_t token,
9002 + uint8_t tc_id,
9003 + const struct dpni_congestion_notification_cfg *cfg)
9004 +{
9005 + struct mc_command cmd = { 0 };
9006 +
9007 + /* prepare command */
9008 + cmd.header = mc_encode_cmd_header(
9009 + DPNI_CMDID_SET_RX_TC_CONGESTION_NOTIFICATION,
9010 + cmd_flags,
9011 + token);
9012 + DPNI_CMD_SET_RX_TC_CONGESTION_NOTIFICATION(cmd, tc_id, cfg);
9013 +
9014 + /* send command to mc*/
9015 + return mc_send_command(mc_io, &cmd);
9016 +}
9017 +
9018 +int dpni_get_rx_tc_congestion_notification(struct fsl_mc_io *mc_io,
9019 + uint32_t cmd_flags,
9020 + uint16_t token,
9021 + uint8_t tc_id,
9022 + struct dpni_congestion_notification_cfg *cfg)
9023 +{
9024 + struct mc_command cmd = { 0 };
9025 + int err;
9026 +
9027 + /* prepare command */
9028 + cmd.header = mc_encode_cmd_header(
9029 + DPNI_CMDID_GET_RX_TC_CONGESTION_NOTIFICATION,
9030 + cmd_flags,
9031 + token);
9032 + DPNI_CMD_GET_RX_TC_CONGESTION_NOTIFICATION(cmd, tc_id);
9033 +
9034 + /* send command to mc*/
9035 + err = mc_send_command(mc_io, &cmd);
9036 + if (err)
9037 + return err;
9038 +
9039 + DPNI_RSP_GET_RX_TC_CONGESTION_NOTIFICATION(cmd, cfg);
9040 +
9041 + return 0;
9042 +}
9043 +
9044 +int dpni_set_tx_tc_congestion_notification(struct fsl_mc_io *mc_io,
9045 + uint32_t cmd_flags,
9046 + uint16_t token,
9047 + uint8_t tc_id,
9048 + const struct dpni_congestion_notification_cfg *cfg)
9049 +{
9050 + struct mc_command cmd = { 0 };
9051 +
9052 + /* prepare command */
9053 + cmd.header = mc_encode_cmd_header(
9054 + DPNI_CMDID_SET_TX_TC_CONGESTION_NOTIFICATION,
9055 + cmd_flags,
9056 + token);
9057 + DPNI_CMD_SET_TX_TC_CONGESTION_NOTIFICATION(cmd, tc_id, cfg);
9058 +
9059 + /* send command to mc*/
9060 + return mc_send_command(mc_io, &cmd);
9061 +}
9062 +
9063 +int dpni_get_tx_tc_congestion_notification(struct fsl_mc_io *mc_io,
9064 + uint32_t cmd_flags,
9065 + uint16_t token,
9066 + uint8_t tc_id,
9067 + struct dpni_congestion_notification_cfg *cfg)
9068 +{
9069 + struct mc_command cmd = { 0 };
9070 + int err;
9071 +
9072 + /* prepare command */
9073 + cmd.header = mc_encode_cmd_header(
9074 + DPNI_CMDID_GET_TX_TC_CONGESTION_NOTIFICATION,
9075 + cmd_flags,
9076 + token);
9077 + DPNI_CMD_GET_TX_TC_CONGESTION_NOTIFICATION(cmd, tc_id);
9078 +
9079 + /* send command to mc*/
9080 + err = mc_send_command(mc_io, &cmd);
9081 + if (err)
9082 + return err;
9083 +
9084 + DPNI_RSP_GET_TX_TC_CONGESTION_NOTIFICATION(cmd, cfg);
9085 +
9086 + return 0;
9087 +}
9088 +
9089 +int dpni_set_tx_conf(struct fsl_mc_io *mc_io,
9090 + uint32_t cmd_flags,
9091 + uint16_t token,
9092 + uint16_t flow_id,
9093 + const struct dpni_tx_conf_cfg *cfg)
9094 +{
9095 + struct mc_command cmd = { 0 };
9096 +
9097 + /* prepare command */
9098 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_CONF,
9099 + cmd_flags,
9100 + token);
9101 + DPNI_CMD_SET_TX_CONF(cmd, flow_id, cfg);
9102 +
9103 + /* send command to mc*/
9104 + return mc_send_command(mc_io, &cmd);
9105 +}
9106 +
9107 +int dpni_get_tx_conf(struct fsl_mc_io *mc_io,
9108 + uint32_t cmd_flags,
9109 + uint16_t token,
9110 + uint16_t flow_id,
9111 + struct dpni_tx_conf_attr *attr)
9112 +{
9113 + struct mc_command cmd = { 0 };
9114 + int err;
9115 +
9116 + /* prepare command */
9117 + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_CONF,
9118 + cmd_flags,
9119 + token);
9120 + DPNI_CMD_GET_TX_CONF(cmd, flow_id);
9121 +
9122 + /* send command to mc*/
9123 + err = mc_send_command(mc_io, &cmd);
9124 + if (err)
9125 + return err;
9126 +
9127 + DPNI_RSP_GET_TX_CONF(cmd, attr);
9128 +
9129 + return 0;
9130 +}
9131 +
9132 +int dpni_set_tx_conf_congestion_notification(struct fsl_mc_io *mc_io,
9133 + uint32_t cmd_flags,
9134 + uint16_t token,
9135 + uint16_t flow_id,
9136 + const struct dpni_congestion_notification_cfg *cfg)
9137 +{
9138 + struct mc_command cmd = { 0 };
9139 +
9140 + /* prepare command */
9141 + cmd.header = mc_encode_cmd_header(
9142 + DPNI_CMDID_SET_TX_CONF_CONGESTION_NOTIFICATION,
9143 + cmd_flags,
9144 + token);
9145 + DPNI_CMD_SET_TX_CONF_CONGESTION_NOTIFICATION(cmd, flow_id, cfg);
9146 +
9147 + /* send command to mc*/
9148 + return mc_send_command(mc_io, &cmd);
9149 +}
9150 +
9151 +int dpni_get_tx_conf_congestion_notification(struct fsl_mc_io *mc_io,
9152 + uint32_t cmd_flags,
9153 + uint16_t token,
9154 + uint16_t flow_id,
9155 + struct dpni_congestion_notification_cfg *cfg)
9156 +{
9157 + struct mc_command cmd = { 0 };
9158 + int err;
9159 +
9160 + /* prepare command */
9161 + cmd.header = mc_encode_cmd_header(
9162 + DPNI_CMDID_GET_TX_CONF_CONGESTION_NOTIFICATION,
9163 + cmd_flags,
9164 + token);
9165 + DPNI_CMD_GET_TX_CONF_CONGESTION_NOTIFICATION(cmd, flow_id);
9166 +
9167 + /* send command to mc*/
9168 + err = mc_send_command(mc_io, &cmd);
9169 + if (err)
9170 + return err;
9171 +
9172 + DPNI_RSP_GET_TX_CONF_CONGESTION_NOTIFICATION(cmd, cfg);
9173 +
9174 + return 0;
9175 +}
9176 --- /dev/null
9177 +++ b/drivers/staging/fsl-dpaa2/ethernet/dpni.h
9178 @@ -0,0 +1,2581 @@
9179 +/* Copyright 2013-2015 Freescale Semiconductor Inc.
9180 + *
9181 + * Redistribution and use in source and binary forms, with or without
9182 + * modification, are permitted provided that the following conditions are met:
9183 + * * Redistributions of source code must retain the above copyright
9184 + * notice, this list of conditions and the following disclaimer.
9185 + * * Redistributions in binary form must reproduce the above copyright
9186 + * notice, this list of conditions and the following disclaimer in the
9187 + * documentation and/or other materials provided with the distribution.
9188 + * * Neither the name of the above-listed copyright holders nor the
9189 + * names of any contributors may be used to endorse or promote products
9190 + * derived from this software without specific prior written permission.
9191 + *
9192 + *
9193 + * ALTERNATIVELY, this software may be distributed under the terms of the
9194 + * GNU General Public License ("GPL") as published by the Free Software
9195 + * Foundation, either version 2 of that License or (at your option) any
9196 + * later version.
9197 + *
9198 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
9199 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
9200 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
9201 + * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
9202 + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
9203 + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
9204 + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
9205 + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
9206 + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
9207 + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
9208 + * POSSIBILITY OF SUCH DAMAGE.
9209 + */
9210 +#ifndef __FSL_DPNI_H
9211 +#define __FSL_DPNI_H
9212 +
9213 +#include "dpkg.h"
9214 +
9215 +struct fsl_mc_io;
9216 +
9217 +/**
9218 + * Data Path Network Interface API
9219 + * Contains initialization APIs and runtime control APIs for DPNI
9220 + */
9221 +
9222 +/** General DPNI macros */
9223 +
9224 +/**
9225 + * Maximum number of traffic classes
9226 + */
9227 +#define DPNI_MAX_TC 8
9228 +/**
9229 + * Maximum number of buffer pools per DPNI
9230 + */
9231 +#define DPNI_MAX_DPBP 8
9232 +/**
9233 + * Maximum number of storage-profiles per DPNI
9234 + */
9235 +#define DPNI_MAX_SP 2
9236 +
9237 +/**
9238 + * All traffic classes considered; see dpni_set_rx_flow()
9239 + */
9240 +#define DPNI_ALL_TCS (uint8_t)(-1)
9241 +/**
9242 + * All flows within traffic class considered; see dpni_set_rx_flow()
9243 + */
9244 +#define DPNI_ALL_TC_FLOWS (uint16_t)(-1)
9245 +/**
9246 + * Generate new flow ID; see dpni_set_tx_flow()
9247 + */
9248 +#define DPNI_NEW_FLOW_ID (uint16_t)(-1)
9249 +/* use for common tx-conf queue; see dpni_set_tx_conf_<x>() */
9250 +#define DPNI_COMMON_TX_CONF (uint16_t)(-1)
9251 +
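The sentinel macros above rely on C's unsigned conversion rules: casting -1 to an unsigned type yields the all-ones bit pattern, i.e. the type's maximum value. A minimal standalone sketch (the macro definitions are copied verbatim from this header):

```c
#include <stdint.h>

/* Sentinel values from dpni.h: (uintN_t)(-1) wraps to the maximum
 * value of the type, giving an out-of-band "all/any" marker. */
#define DPNI_ALL_TCS      (uint8_t)(-1)   /* all traffic classes */
#define DPNI_ALL_TC_FLOWS (uint16_t)(-1)  /* all flows within a TC */
#define DPNI_NEW_FLOW_ID  (uint16_t)(-1)  /* request a new flow ID */
```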
9252 +/**
9253 + * dpni_open() - Open a control session for the specified object
9254 + * @mc_io: Pointer to MC portal's I/O object
9255 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9256 + * @dpni_id: DPNI unique ID
9257 + * @token: Returned token; use in subsequent API calls
9258 + *
9259 + * This function can be used to open a control session for an
9260 + * already created object; an object may have been declared in
9261 + * the DPL or by calling the dpni_create() function.
9262 + * This function returns a unique authentication token,
9263 + * associated with the specific object ID and the specific MC
9264 + * portal; this token must be used in all subsequent commands for
9265 + * this specific object.
9266 + *
9267 + * Return: '0' on Success; Error code otherwise.
9268 + */
9269 +int dpni_open(struct fsl_mc_io *mc_io,
9270 + uint32_t cmd_flags,
9271 + int dpni_id,
9272 + uint16_t *token);
9273 +
9274 +/**
9275 + * dpni_close() - Close the control session of the object
9276 + * @mc_io: Pointer to MC portal's I/O object
9277 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9278 + * @token: Token of DPNI object
9279 + *
9280 + * After this function is called, no further operations are
9281 + * allowed on the object without opening a new control session.
9282 + *
9283 + * Return: '0' on Success; Error code otherwise.
9284 + */
9285 +int dpni_close(struct fsl_mc_io *mc_io,
9286 + uint32_t cmd_flags,
9287 + uint16_t token);
9288 +
9289 +/* DPNI configuration options */
9290 +
9291 +/**
9292 + * Allow different distribution key profiles for different traffic classes;
9293 + * if not set, a single key profile is assumed
9294 + */
9295 +#define DPNI_OPT_ALLOW_DIST_KEY_PER_TC 0x00000001
9296 +
9297 +/**
9298 + * Disable all non-error transmit confirmation; error frames are reported
9299 + * back to a common Tx error queue
9300 + */
9301 +#define DPNI_OPT_TX_CONF_DISABLED 0x00000002
9302 +
9303 +/**
9304 + * Disable per-sender private Tx confirmation/error queue
9305 + */
9306 +#define DPNI_OPT_PRIVATE_TX_CONF_ERROR_DISABLED 0x00000004
9307 +
9308 +/**
9309 + * Support distribution based on hashed key;
9310 + * allows statistical distribution over receive queues in a traffic class
9311 + */
9312 +#define DPNI_OPT_DIST_HASH 0x00000010
9313 +
9314 +/**
9315 + * DEPRECATED - if this flag is selected and all new 'max_fs_entries' are
9316 + * '0' then backward compatibility is preserved;
9317 + * Support distribution based on flow steering;
9318 + * allows explicit control of distribution over receive queues in a traffic
9319 + * class
9320 + */
9321 +#define DPNI_OPT_DIST_FS 0x00000020
9322 +
9323 +/**
9324 + * Unicast filtering support
9325 + */
9326 +#define DPNI_OPT_UNICAST_FILTER 0x00000080
9327 +/**
9328 + * Multicast filtering support
9329 + */
9330 +#define DPNI_OPT_MULTICAST_FILTER 0x00000100
9331 +/**
9332 + * VLAN filtering support
9333 + */
9334 +#define DPNI_OPT_VLAN_FILTER 0x00000200
9335 +/**
9336 + * Support IP reassembly on received packets
9337 + */
9338 +#define DPNI_OPT_IPR 0x00000800
9339 +/**
9340 + * Support IP fragmentation on transmitted packets
9341 + */
9342 +#define DPNI_OPT_IPF 0x00001000
9343 +/**
9344 + * VLAN manipulation support
9345 + */
9346 +#define DPNI_OPT_VLAN_MANIPULATION 0x00010000
9347 +/**
9348 + * Support masking of QoS lookup keys
9349 + */
9350 +#define DPNI_OPT_QOS_MASK_SUPPORT 0x00020000
9351 +/**
9352 + * Support masking of Flow Steering lookup keys
9353 + */
9354 +#define DPNI_OPT_FS_MASK_SUPPORT 0x00040000
9355 +
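The `DPNI_OPT_*` values are single-bit flags meant to be OR-ed into the `options` word of the configuration structures (a `u32` as of MC firmware 7.0, per the patch description). A minimal sketch, with the flag values copied from this header and a hypothetical helper name:

```c
#include <stdint.h>

/* DPNI_OPT_* flag bits copied from dpni.h so the sketch stands alone. */
#define DPNI_OPT_DIST_HASH        0x00000010
#define DPNI_OPT_UNICAST_FILTER   0x00000080
#define DPNI_OPT_MULTICAST_FILTER 0x00000100
#define DPNI_OPT_VLAN_FILTER      0x00000200

/* Build an example options word by OR-ing the desired feature bits;
 * individual features are then testable with a bitwise AND. */
static inline uint32_t dpni_example_options(void)
{
	return DPNI_OPT_DIST_HASH |
	       DPNI_OPT_UNICAST_FILTER |
	       DPNI_OPT_MULTICAST_FILTER;
}
```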
9356 +/**
9357 + * struct dpni_extended_cfg - Structure representing extended DPNI configuration
9358 + * @tc_cfg: TCs configuration
9359 + * @ipr_cfg: IP reassembly configuration
9360 + */
9361 +struct dpni_extended_cfg {
9362 + /**
9363 + * struct tc_cfg - TC configuration
9364 + * @max_dist: Maximum distribution size for Rx traffic class;
9365 + * supported values: 1,2,3,4,6,7,8,12,14,16,24,28,32,48,56,64,96,
9366 + * 112,128,192,224,256,384,448,512,768,896,1024;
9367 + * value '0' will be treated as '1'.
9368 + * other unsupported values will be rounded down to the nearest
9369 + * supported value.
9370 + * @max_fs_entries: Maximum FS entries for Rx traffic class;
9371 + * '0' means no support for this TC;
9372 + */
9373 + struct {
9374 + uint16_t max_dist;
9375 + uint16_t max_fs_entries;
9376 + } tc_cfg[DPNI_MAX_TC];
9377 + /**
9378 + * struct ipr_cfg - Structure representing IP reassembly configuration
9379 + * @max_reass_frm_size: Maximum size of the reassembled frame
9380 + * @min_frag_size_ipv4: Minimum fragment size of IPv4 fragments
9381 + * @min_frag_size_ipv6: Minimum fragment size of IPv6 fragments
9382 + * @max_open_frames_ipv4: Maximum concurrent IPv4 packets in reassembly
9383 + * process
9384 + * @max_open_frames_ipv6: Maximum concurrent IPv6 packets in reassembly
9385 + * process
9386 + */
9387 + struct {
9388 + uint16_t max_reass_frm_size;
9389 + uint16_t min_frag_size_ipv4;
9390 + uint16_t min_frag_size_ipv6;
9391 + uint16_t max_open_frames_ipv4;
9392 + uint16_t max_open_frames_ipv6;
9393 + } ipr_cfg;
9394 +};
9395 +
9396 +/**
9397 + * dpni_prepare_extended_cfg() - Prepare the extended parameters
9398 + * @cfg: extended structure
9399 + * @ext_cfg_buf: Zeroed 256 bytes of memory before mapping it to DMA
9400 + *
9401 + * This function has to be called before dpni_create()
9402 + */
9403 +int dpni_prepare_extended_cfg(const struct dpni_extended_cfg *cfg,
9404 + uint8_t *ext_cfg_buf);
9405 +
9406 +/**
9407 + * struct dpni_cfg - Structure representing DPNI configuration
9408 + * @mac_addr: Primary MAC address
9409 + * @adv: Advanced parameters; default is all zeros;
9410 + * use this structure to change default settings
9411 + */
9412 +struct dpni_cfg {
9413 + uint8_t mac_addr[6];
9414 + /**
9415 + * struct adv - Advanced parameters
9416 + * @options: Mask of available options; use 'DPNI_OPT_<X>' values
9417 + * @start_hdr: Selects the packet starting header for parsing;
9418 + * 'NET_PROT_NONE' is treated as default: 'NET_PROT_ETH'
9419 + * @max_senders: Maximum number of different senders; used as the number
9420 + * of dedicated Tx flows; Non-power-of-2 values are rounded
9421 + * up to the next power-of-2 value as hardware demands it;
9422 + * '0' will be treated as '1'
9423 + * @max_tcs: Maximum number of traffic classes (for both Tx and Rx);
9424 + * '0' will be treated as '1'
9425 + * @max_unicast_filters: Maximum number of unicast filters;
9426 + * '0' is treated as '16'
9427 + * @max_multicast_filters: Maximum number of multicast filters;
9428 + * '0' is treated as '64'
9429 + * @max_qos_entries: if 'max_tcs > 1', declares the maximum entries in
9430 + * the QoS table; '0' is treated as '64'
9431 + * @max_qos_key_size: Maximum key size for the QoS look-up;
9432 + * '0' is treated as '24' which is enough for IPv4
9433 + * 5-tuple
9434 + * @max_dist_key_size: Maximum key size for the distribution;
9435 + * '0' is treated as '24' which is enough for IPv4 5-tuple
9436 + * @max_policers: Maximum number of policers;
9437 + * should be between '0' and max_tcs
9438 + * @max_congestion_ctrl: Maximum number of congestion control groups
9439 + * (CGs); covers early drop and congestion notification
9440 + * requirements;
9441 + * should be between '0' and ('max_tcs' + 'max_senders')
9442 + * @ext_cfg_iova: I/O virtual address of 256 bytes DMA-able memory
9443 + * filled with the extended configuration by calling
9444 + * dpni_prepare_extended_cfg()
9445 + */
9446 + struct {
9447 + uint32_t options;
9448 + enum net_prot start_hdr;
9449 + uint8_t max_senders;
9450 + uint8_t max_tcs;
9451 + uint8_t max_unicast_filters;
9452 + uint8_t max_multicast_filters;
9453 + uint8_t max_vlan_filters;
9454 + uint8_t max_qos_entries;
9455 + uint8_t max_qos_key_size;
9456 + uint8_t max_dist_key_size;
9457 + uint8_t max_policers;
9458 + uint8_t max_congestion_ctrl;
9459 + uint64_t ext_cfg_iova;
9460 + } adv;
9461 +};
9462 +
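The `max_senders` documentation above says the hardware rounds non-power-of-2 values up to the next power of 2, and treats '0' as '1'. A sketch of that rounding behavior (the helper name is hypothetical, not part of the MC API):

```c
/* Model of the rounding the hardware applies to max_senders:
 * '0' becomes '1', and any other value is rounded up to the
 * next power of 2. */
static inline unsigned int dpni_roundup_senders(unsigned int n)
{
	unsigned int p = 1;

	if (n == 0)
		return 1;	/* '0' will be treated as '1' */
	while (p < n)
		p <<= 1;	/* next power of 2 >= n */
	return p;
}
```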
9463 +/**
9464 + * dpni_create() - Create the DPNI object
9465 + * @mc_io: Pointer to MC portal's I/O object
9466 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9467 + * @cfg: Configuration structure
9468 + * @token: Returned token; use in subsequent API calls
9469 + *
9470 + * Create the DPNI object, allocate required resources and
9471 + * perform required initialization.
9472 + *
9473 + * The object can be created either by declaring it in the
9474 + * DPL file, or by calling this function.
9475 + *
9476 + * This function returns a unique authentication token,
9477 + * associated with the specific object ID and the specific MC
9478 + * portal; this token must be used in all subsequent calls to
9479 + * this specific object. For objects that are created using the
9480 + * DPL file, call dpni_open() function to get an authentication
9481 + * token first.
9482 + *
9483 + * Return: '0' on Success; Error code otherwise.
9484 + */
9485 +int dpni_create(struct fsl_mc_io *mc_io,
9486 + uint32_t cmd_flags,
9487 + const struct dpni_cfg *cfg,
9488 + uint16_t *token);
9489 +
9490 +/**
9491 + * dpni_destroy() - Destroy the DPNI object and release all its resources.
9492 + * @mc_io: Pointer to MC portal's I/O object
9493 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9494 + * @token: Token of DPNI object
9495 + *
9496 + * Return: '0' on Success; error code otherwise.
9497 + */
9498 +int dpni_destroy(struct fsl_mc_io *mc_io,
9499 + uint32_t cmd_flags,
9500 + uint16_t token);
9501 +
9502 +/**
9503 + * struct dpni_pools_cfg - Structure representing buffer pools configuration
9504 + * @num_dpbp: Number of DPBPs
9505 + * @pools: Array of buffer pools parameters; The number of valid entries
9506 + * must match 'num_dpbp' value
9507 + */
9508 +struct dpni_pools_cfg {
9509 + uint8_t num_dpbp;
9510 + /**
9511 + * struct pools - Buffer pools parameters
9512 + * @dpbp_id: DPBP object ID
9513 + * @buffer_size: Buffer size
9514 + * @backup_pool: Backup pool
9515 + */
9516 + struct {
9517 + int dpbp_id;
9518 + uint16_t buffer_size;
9519 + int backup_pool;
9520 + } pools[DPNI_MAX_DPBP];
9521 +};
9522 +
9523 +/**
9524 + * dpni_set_pools() - Set buffer pools configuration
9525 + * @mc_io: Pointer to MC portal's I/O object
9526 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9527 + * @token: Token of DPNI object
9528 + * @cfg: Buffer pools configuration
9529 + *
9530 + * This function is mandatory for DPNI operation.
9531 + * @warning Allowed only when DPNI is disabled
9532 + *
9533 + * Return: '0' on Success; Error code otherwise.
9534 + */
9535 +int dpni_set_pools(struct fsl_mc_io *mc_io,
9536 + uint32_t cmd_flags,
9537 + uint16_t token,
9538 + const struct dpni_pools_cfg *cfg);
9539 +
9540 +/**
9541 + * dpni_enable() - Enable the DPNI, allow sending and receiving frames.
9542 + * @mc_io: Pointer to MC portal's I/O object
9543 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9544 + * @token: Token of DPNI object
9545 + *
9546 + * Return: '0' on Success; Error code otherwise.
9547 + */
9548 +int dpni_enable(struct fsl_mc_io *mc_io,
9549 + uint32_t cmd_flags,
9550 + uint16_t token);
9551 +
9552 +/**
9553 + * dpni_disable() - Disable the DPNI, stop sending and receiving frames.
9554 + * @mc_io: Pointer to MC portal's I/O object
9555 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9556 + * @token: Token of DPNI object
9557 + *
9558 + * Return: '0' on Success; Error code otherwise.
9559 + */
9560 +int dpni_disable(struct fsl_mc_io *mc_io,
9561 + uint32_t cmd_flags,
9562 + uint16_t token);
9563 +
9564 +/**
9565 + * dpni_is_enabled() - Check if the DPNI is enabled.
9566 + * @mc_io: Pointer to MC portal's I/O object
9567 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9568 + * @token: Token of DPNI object
9569 + * @en: Returns '1' if object is enabled; '0' otherwise
9570 + *
9571 + * Return: '0' on Success; Error code otherwise.
9572 + */
9573 +int dpni_is_enabled(struct fsl_mc_io *mc_io,
9574 + uint32_t cmd_flags,
9575 + uint16_t token,
9576 + int *en);
9577 +
9578 +/**
9579 + * dpni_reset() - Reset the DPNI, returns the object to initial state.
9580 + * @mc_io: Pointer to MC portal's I/O object
9581 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9582 + * @token: Token of DPNI object
9583 + *
9584 + * Return: '0' on Success; Error code otherwise.
9585 + */
9586 +int dpni_reset(struct fsl_mc_io *mc_io,
9587 + uint32_t cmd_flags,
9588 + uint16_t token);
9589 +
9590 +/**
9591 + * DPNI IRQ Index and Events
9592 + */
9593 +
9594 +/**
9595 + * IRQ index
9596 + */
9597 +#define DPNI_IRQ_INDEX 0
9598 +/**
9599 + * IRQ event - indicates a change in link state
9600 + */
9601 +#define DPNI_IRQ_EVENT_LINK_CHANGED 0x00000001
9602 +
9603 +/**
9604 + * struct dpni_irq_cfg - IRQ configuration
9605 + * @addr: Address that must be written to signal a message-based interrupt
9606 + * @val: Value to write into irq_addr address
9607 + * @irq_num: A user defined number associated with this IRQ
9608 + */
9609 +struct dpni_irq_cfg {
9610 + uint64_t addr;
9611 + uint32_t val;
9612 + int irq_num;
9613 +};
9614 +
9615 +/**
9616 + * dpni_set_irq() - Set IRQ information for the DPNI to trigger an interrupt.
9617 + * @mc_io: Pointer to MC portal's I/O object
9618 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9619 + * @token: Token of DPNI object
9620 + * @irq_index: Identifies the interrupt index to configure
9621 + * @irq_cfg: IRQ configuration
9622 + *
9623 + * Return: '0' on Success; Error code otherwise.
9624 + */
9625 +int dpni_set_irq(struct fsl_mc_io *mc_io,
9626 + uint32_t cmd_flags,
9627 + uint16_t token,
9628 + uint8_t irq_index,
9629 + struct dpni_irq_cfg *irq_cfg);
9630 +
9631 +/**
9632 + * dpni_get_irq() - Get IRQ information from the DPNI.
9633 + * @mc_io: Pointer to MC portal's I/O object
9634 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9635 + * @token: Token of DPNI object
9636 + * @irq_index: The interrupt index to configure
9637 + * @type: Interrupt type: 0 represents message interrupt
9638 + * type (both irq_addr and irq_val are valid)
9639 + * @irq_cfg: IRQ attributes
9640 + *
9641 + * Return: '0' on Success; Error code otherwise.
9642 + */
9643 +int dpni_get_irq(struct fsl_mc_io *mc_io,
9644 + uint32_t cmd_flags,
9645 + uint16_t token,
9646 + uint8_t irq_index,
9647 + int *type,
9648 + struct dpni_irq_cfg *irq_cfg);
9649 +
9650 +/**
9651 + * dpni_set_irq_enable() - Set overall interrupt state.
9652 + * @mc_io: Pointer to MC portal's I/O object
9653 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9654 + * @token: Token of DPNI object
9655 + * @irq_index: The interrupt index to configure
9656 + * @en: Interrupt state: - enable = 1, disable = 0
9657 + *
9658 + * Allows GPP software to control when interrupts are generated.
9659 + * Each interrupt can have up to 32 causes. The enable/disable setting controls
9660 + * the overall interrupt state: if the interrupt is disabled, none of its causes
9661 + * can generate an interrupt.
9662 + *
9663 + * Return: '0' on Success; Error code otherwise.
9664 + */
9665 +int dpni_set_irq_enable(struct fsl_mc_io *mc_io,
9666 + uint32_t cmd_flags,
9667 + uint16_t token,
9668 + uint8_t irq_index,
9669 + uint8_t en);
9670 +
9671 +/**
9672 + * dpni_get_irq_enable() - Get overall interrupt state
9673 + * @mc_io: Pointer to MC portal's I/O object
9674 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9675 + * @token: Token of DPNI object
9676 + * @irq_index: The interrupt index to configure
9677 + * @en: Returned interrupt state - enable = 1, disable = 0
9678 + *
9679 + * Return: '0' on Success; Error code otherwise.
9680 + */
9681 +int dpni_get_irq_enable(struct fsl_mc_io *mc_io,
9682 + uint32_t cmd_flags,
9683 + uint16_t token,
9684 + uint8_t irq_index,
9685 + uint8_t *en);
9686 +
9687 +/**
9688 + * dpni_set_irq_mask() - Set interrupt mask.
9689 + * @mc_io: Pointer to MC portal's I/O object
9690 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9691 + * @token: Token of DPNI object
9692 + * @irq_index: The interrupt index to configure
9693 + * @mask: event mask to trigger interrupt;
9694 + * each bit:
9695 + * 0 = ignore event
9696 + * 1 = consider event for asserting IRQ
9697 + *
9698 + * Every interrupt can have up to 32 causes and the interrupt model supports
9699 + * masking/unmasking each cause independently
9700 + *
9701 + * Return: '0' on Success; Error code otherwise.
9702 + */
9703 +int dpni_set_irq_mask(struct fsl_mc_io *mc_io,
9704 + uint32_t cmd_flags,
9705 + uint16_t token,
9706 + uint8_t irq_index,
9707 + uint32_t mask);
9708 +
9709 +/**
9710 + * dpni_get_irq_mask() - Get interrupt mask.
9711 + * @mc_io: Pointer to MC portal's I/O object
9712 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9713 + * @token: Token of DPNI object
9714 + * @irq_index: The interrupt index to configure
9715 + * @mask: Returned event mask to trigger interrupt
9716 + *
9717 + * Every interrupt can have up to 32 causes and the interrupt model supports
9718 + * masking/unmasking each cause independently
9719 + *
9720 + * Return: '0' on Success; Error code otherwise.
9721 + */
9722 +int dpni_get_irq_mask(struct fsl_mc_io *mc_io,
9723 + uint32_t cmd_flags,
9724 + uint16_t token,
9725 + uint8_t irq_index,
9726 + uint32_t *mask);
9727 +
9728 +/**
9729 + * dpni_get_irq_status() - Get the current status of any pending interrupts.
9730 + * @mc_io: Pointer to MC portal's I/O object
9731 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9732 + * @token: Token of DPNI object
9733 + * @irq_index: The interrupt index to configure
9734 + * @status: Returned interrupts status - one bit per cause:
9735 + * 0 = no interrupt pending
9736 + * 1 = interrupt pending
9737 + *
9738 + * Return: '0' on Success; Error code otherwise.
9739 + */
9740 +int dpni_get_irq_status(struct fsl_mc_io *mc_io,
9741 + uint32_t cmd_flags,
9742 + uint16_t token,
9743 + uint8_t irq_index,
9744 + uint32_t *status);
9745 +
9746 +/**
9747 + * dpni_clear_irq_status() - Clear a pending interrupt's status
9748 + * @mc_io: Pointer to MC portal's I/O object
9749 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9750 + * @token: Token of DPNI object
9751 + * @irq_index: The interrupt index to configure
9752 + * @status: bits to clear (W1C) - one bit per cause:
9753 + * 0 = don't change
9754 + * 1 = clear status bit
9755 + *
9756 + * Return: '0' on Success; Error code otherwise.
9757 + */
9758 +int dpni_clear_irq_status(struct fsl_mc_io *mc_io,
9759 + uint32_t cmd_flags,
9760 + uint16_t token,
9761 + uint8_t irq_index,
9762 + uint32_t status);
9763 +
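The IRQ status returned by dpni_get_irq_status() carries one bit per cause, and dpni_clear_irq_status() is write-1-to-clear (W1C). A minimal sketch of the usual handle-then-clear pattern, with the cause bit copied from this header and hypothetical helper names:

```c
#include <stdint.h>

/* IRQ cause bit from dpni.h. */
#define DPNI_IRQ_EVENT_LINK_CHANGED 0x00000001

/* Test a single cause bit in the status word. */
static inline int dpni_link_changed(uint32_t status)
{
	return (status & DPNI_IRQ_EVENT_LINK_CHANGED) != 0;
}

/* Model of W1C clearing: writing 1s in 'w1c' clears exactly those
 * status bits, leaving unhandled causes pending. */
static inline uint32_t dpni_status_after_clear(uint32_t status, uint32_t w1c)
{
	return status & ~w1c;
}
```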
9764 +/**
9765 + * struct dpni_attr - Structure representing DPNI attributes
9766 + * @id: DPNI object ID
9767 + * @version: DPNI version
9768 + * @start_hdr: Indicates the packet starting header for parsing
9769 + * @options: Mask of available options; reflects the value as was given in
9770 + * object's creation
9771 + * @max_senders: Maximum number of different senders; used as the number
9772 + * of dedicated Tx flows;
9773 + * @max_tcs: Maximum number of traffic classes (for both Tx and Rx)
9774 + * @max_unicast_filters: Maximum number of unicast filters
9775 + * @max_multicast_filters: Maximum number of multicast filters
9776 + * @max_vlan_filters: Maximum number of VLAN filters
9777 + * @max_qos_entries: if 'max_tcs > 1', declares the maximum entries in QoS table
9778 + * @max_qos_key_size: Maximum key size for the QoS look-up
9779 + * @max_dist_key_size: Maximum key size for the distribution look-up
9780 + * @max_policers: Maximum number of policers;
9781 + * @max_congestion_ctrl: Maximum number of congestion control groups (CGs);
9782 + * @ext_cfg_iova: I/O virtual address of 256 bytes DMA-able memory;
9783 + * call dpni_extract_extended_cfg() to extract the extended configuration
9784 + */
9785 +struct dpni_attr {
9786 + int id;
9787 + /**
9788 + * struct version - DPNI version
9789 + * @major: DPNI major version
9790 + * @minor: DPNI minor version
9791 + */
9792 + struct {
9793 + uint16_t major;
9794 + uint16_t minor;
9795 + } version;
9796 + enum net_prot start_hdr;
9797 + uint32_t options;
9798 + uint8_t max_senders;
9799 + uint8_t max_tcs;
9800 + uint8_t max_unicast_filters;
9801 + uint8_t max_multicast_filters;
9802 + uint8_t max_vlan_filters;
9803 + uint8_t max_qos_entries;
9804 + uint8_t max_qos_key_size;
9805 + uint8_t max_dist_key_size;
9806 + uint8_t max_policers;
9807 + uint8_t max_congestion_ctrl;
9808 + uint64_t ext_cfg_iova;
9809 +};
9810 +
9811 +/**
9812 + * dpni_get_attributes() - Retrieve DPNI attributes.
9813 + * @mc_io: Pointer to MC portal's I/O object
9814 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9815 + * @token: Token of DPNI object
9816 + * @attr: Object's attributes
9817 + *
9818 + * Return: '0' on Success; Error code otherwise.
9819 + */
9820 +int dpni_get_attributes(struct fsl_mc_io *mc_io,
9821 + uint32_t cmd_flags,
9822 + uint16_t token,
9823 + struct dpni_attr *attr);
9824 +
9825 +/**
9826 + * dpni_extract_extended_cfg() - extract the extended parameters
9827 + * @cfg: extended structure
9828 + * @ext_cfg_buf: 256 bytes of DMA-able memory
9829 + *
9830 + * This function has to be called after dpni_get_attributes()
9831 + */
9832 +int dpni_extract_extended_cfg(struct dpni_extended_cfg *cfg,
9833 + const uint8_t *ext_cfg_buf);
9834 +
9835 +/**
9836 + * DPNI errors
9837 + */
9838 +
9839 +/**
9840 + * Extract out of frame header error
9841 + */
9842 +#define DPNI_ERROR_EOFHE 0x00020000
9843 +/**
9844 + * Frame length error
9845 + */
9846 +#define DPNI_ERROR_FLE 0x00002000
9847 +/**
9848 + * Frame physical error
9849 + */
9850 +#define DPNI_ERROR_FPE 0x00001000
9851 +/**
9852 + * Parsing header error
9853 + */
9854 +#define DPNI_ERROR_PHE 0x00000020
9855 +/**
9856 + * Parser L3 checksum error
9857 + */
9858 +#define DPNI_ERROR_L3CE 0x00000004
9859 +/**
9860 + * Parser L4 checksum error
9861 + */
9862 +#define DPNI_ERROR_L4CE 0x00000001
9863 +
9864 +/**
9865 + * enum dpni_error_action - Defines DPNI behavior for errors
9866 + * @DPNI_ERROR_ACTION_DISCARD: Discard the frame
9867 + * @DPNI_ERROR_ACTION_CONTINUE: Continue with the normal flow
9868 + * @DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE: Send the frame to the error queue
9869 + */
9870 +enum dpni_error_action {
9871 + DPNI_ERROR_ACTION_DISCARD = 0,
9872 + DPNI_ERROR_ACTION_CONTINUE = 1,
9873 + DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE = 2
9874 +};
9875 +
9876 +/**
9877 + * struct dpni_error_cfg - Structure representing DPNI errors treatment
9878 + * @errors: Errors mask; use 'DPNI_ERROR_<X>'
9879 + * @error_action: The desired action for the errors mask
9880 + * @set_frame_annotation: Set to '1' to mark the errors in frame annotation
9881 + * status (FAS); relevant only for the non-discard action
9882 + */
9883 +struct dpni_error_cfg {
9884 + uint32_t errors;
9885 + enum dpni_error_action error_action;
9886 + int set_frame_annotation;
9887 +};
9888 +
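Since dpni_set_errors_behavior() may be called several times with different masks, distinct error classes can get distinct actions, e.g. one mask for checksum errors sent to the error queue and another for physical/length errors that are discarded. A sketch of composing such a mask (bit values copied from this header, helper name hypothetical):

```c
#include <stdint.h>

/* DPNI_ERROR_* bits copied from dpni.h for a standalone sketch. */
#define DPNI_ERROR_FLE  0x00002000	/* frame length error */
#define DPNI_ERROR_FPE  0x00001000	/* frame physical error */
#define DPNI_ERROR_L3CE 0x00000004	/* parser L3 checksum error */
#define DPNI_ERROR_L4CE 0x00000001	/* parser L4 checksum error */

/* Mask selecting both parser checksum errors, e.g. for a call that
 * routes them to the error queue. */
static inline uint32_t dpni_csum_error_mask(void)
{
	return DPNI_ERROR_L3CE | DPNI_ERROR_L4CE;
}
```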
9889 +/**
9890 + * dpni_set_errors_behavior() - Set errors behavior
9891 + * @mc_io: Pointer to MC portal's I/O object
9892 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9893 + * @token: Token of DPNI object
9894 + * @cfg: Errors configuration
9895 + *
9896 + * This function may be called multiple times with different
9897 + * error masks
9898 + *
9899 + * Return: '0' on Success; Error code otherwise.
9900 + */
9901 +int dpni_set_errors_behavior(struct fsl_mc_io *mc_io,
9902 + uint32_t cmd_flags,
9903 + uint16_t token,
9904 + struct dpni_error_cfg *cfg);
9905 +
9906 +/**
9907 + * DPNI buffer layout modification options
9908 + */
9909 +
9910 +/**
9911 + * Select to modify the time-stamp setting
9912 + */
9913 +#define DPNI_BUF_LAYOUT_OPT_TIMESTAMP 0x00000001
9914 +/**
9915 + * Select to modify the parser-result setting; not applicable for Tx
9916 + */
9917 +#define DPNI_BUF_LAYOUT_OPT_PARSER_RESULT 0x00000002
9918 +/**
9919 + * Select to modify the frame-status setting
9920 + */
9921 +#define DPNI_BUF_LAYOUT_OPT_FRAME_STATUS 0x00000004
9922 +/**
9923 + * Select to modify the private-data-size setting
9924 + */
9925 +#define DPNI_BUF_LAYOUT_OPT_PRIVATE_DATA_SIZE 0x00000008
9926 +/**
9927 + * Select to modify the data-alignment setting
9928 + */
9929 +#define DPNI_BUF_LAYOUT_OPT_DATA_ALIGN 0x00000010
9930 +/**
9931 + * Select to modify the data-head-room setting
9932 + */
9933 +#define DPNI_BUF_LAYOUT_OPT_DATA_HEAD_ROOM 0x00000020
9934 +/**
9935 + * Select to modify the data-tail-room setting
9936 + */
9937 +#define DPNI_BUF_LAYOUT_OPT_DATA_TAIL_ROOM 0x00000040
9938 +
9939 +/**
9940 + * struct dpni_buffer_layout - Structure representing DPNI buffer layout
9941 + * @options: Flags representing the suggested modifications to the buffer
9942 + * layout; Use any combination of 'DPNI_BUF_LAYOUT_OPT_<X>' flags
9943 + * @pass_timestamp: Pass timestamp value
9944 + * @pass_parser_result: Pass parser results
9945 + * @pass_frame_status: Pass frame status
9946 + * @private_data_size: Size kept for private data (in bytes)
9947 + * @data_align: Data alignment
9948 + * @data_head_room: Data head room
9949 + * @data_tail_room: Data tail room
9950 + */
9951 +struct dpni_buffer_layout {
9952 + uint32_t options;
9953 + int pass_timestamp;
9954 + int pass_parser_result;
9955 + int pass_frame_status;
9956 + uint16_t private_data_size;
9957 + uint16_t data_align;
9958 + uint16_t data_head_room;
9959 + uint16_t data_tail_room;
9960 +};
9961 +
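In dpni_buffer_layout, only the fields whose `DPNI_BUF_LAYOUT_OPT_<X>` bit is set in `options` are applied; the rest keep their current values. A minimal standalone sketch (option bits and a trimmed copy of the struct taken from this header; the example alignment value is hypothetical):

```c
#include <stdint.h>

/* Buffer layout option bits from dpni.h. */
#define DPNI_BUF_LAYOUT_OPT_FRAME_STATUS 0x00000004
#define DPNI_BUF_LAYOUT_OPT_DATA_ALIGN   0x00000010

/* Trimmed copy of struct dpni_buffer_layout for the sketch. */
struct dpni_buffer_layout_sketch {
	uint32_t options;
	int pass_frame_status;
	uint16_t data_align;
};

/* Request changes only to frame status and data alignment; fields
 * without a matching option bit are left untouched by the MC. */
static inline struct dpni_buffer_layout_sketch dpni_example_rx_layout(void)
{
	struct dpni_buffer_layout_sketch l = {
		.options = DPNI_BUF_LAYOUT_OPT_FRAME_STATUS |
			   DPNI_BUF_LAYOUT_OPT_DATA_ALIGN,
		.pass_frame_status = 1,
		.data_align = 64,	/* hypothetical alignment */
	};
	return l;
}
```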
9962 +/**
9963 + * dpni_get_rx_buffer_layout() - Retrieve Rx buffer layout attributes.
9964 + * @mc_io: Pointer to MC portal's I/O object
9965 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9966 + * @token: Token of DPNI object
9967 + * @layout: Returns buffer layout attributes
9968 + *
9969 + * Return: '0' on Success; Error code otherwise.
9970 + */
9971 +int dpni_get_rx_buffer_layout(struct fsl_mc_io *mc_io,
9972 + uint32_t cmd_flags,
9973 + uint16_t token,
9974 + struct dpni_buffer_layout *layout);
9975 +
9976 +/**
9977 + * dpni_set_rx_buffer_layout() - Set Rx buffer layout configuration.
9978 + * @mc_io: Pointer to MC portal's I/O object
9979 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9980 + * @token: Token of DPNI object
9981 + * @layout: Buffer layout configuration
9982 + *
9983 + * Return: '0' on Success; Error code otherwise.
9984 + *
9985 + * @warning Allowed only when DPNI is disabled
9986 + */
9987 +int dpni_set_rx_buffer_layout(struct fsl_mc_io *mc_io,
9988 + uint32_t cmd_flags,
9989 + uint16_t token,
9990 + const struct dpni_buffer_layout *layout);
9991 +
9992 +/**
9993 + * dpni_get_tx_buffer_layout() - Retrieve Tx buffer layout attributes.
9994 + * @mc_io: Pointer to MC portal's I/O object
9995 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
9996 + * @token: Token of DPNI object
9997 + * @layout: Returns buffer layout attributes
9998 + *
9999 + * Return: '0' on Success; Error code otherwise.
10000 + */
10001 +int dpni_get_tx_buffer_layout(struct fsl_mc_io *mc_io,
10002 + uint32_t cmd_flags,
10003 + uint16_t token,
10004 + struct dpni_buffer_layout *layout);
10005 +
10006 +/**
10007 + * dpni_set_tx_buffer_layout() - Set Tx buffer layout configuration.
10008 + * @mc_io: Pointer to MC portal's I/O object
10009 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10010 + * @token: Token of DPNI object
10011 + * @layout: Buffer layout configuration
10012 + *
10013 + * Return: '0' on Success; Error code otherwise.
10014 + *
10015 + * @warning Allowed only when DPNI is disabled
10016 + */
10017 +int dpni_set_tx_buffer_layout(struct fsl_mc_io *mc_io,
10018 + uint32_t cmd_flags,
10019 + uint16_t token,
10020 + const struct dpni_buffer_layout *layout);
10021 +
10022 +/**
10023 + * dpni_get_tx_conf_buffer_layout() - Retrieve Tx confirmation buffer layout
10024 + * attributes.
10025 + * @mc_io: Pointer to MC portal's I/O object
10026 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10027 + * @token: Token of DPNI object
10028 + * @layout: Returns buffer layout attributes
10029 + *
10030 + * Return: '0' on Success; Error code otherwise.
10031 + */
10032 +int dpni_get_tx_conf_buffer_layout(struct fsl_mc_io *mc_io,
10033 + uint32_t cmd_flags,
10034 + uint16_t token,
10035 + struct dpni_buffer_layout *layout);
10036 +
10037 +/**
10038 + * dpni_set_tx_conf_buffer_layout() - Set Tx confirmation buffer layout
10039 + * configuration.
10040 + * @mc_io: Pointer to MC portal's I/O object
10041 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10042 + * @token: Token of DPNI object
10043 + * @layout: Buffer layout configuration
10044 + *
10045 + * Return: '0' on Success; Error code otherwise.
10046 + *
10047 + * @warning Allowed only when DPNI is disabled
10048 + */
10049 +int dpni_set_tx_conf_buffer_layout(struct fsl_mc_io *mc_io,
10050 + uint32_t cmd_flags,
10051 + uint16_t token,
10052 + const struct dpni_buffer_layout *layout);
10053 +
10054 +/**
10055 + * dpni_set_l3_chksum_validation() - Enable/disable L3 checksum validation
10056 + * @mc_io: Pointer to MC portal's I/O object
10057 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10058 + * @token: Token of DPNI object
10059 + * @en: Set to '1' to enable; '0' to disable
10060 + *
10061 + * Return: '0' on Success; Error code otherwise.
10062 + */
10063 +int dpni_set_l3_chksum_validation(struct fsl_mc_io *mc_io,
10064 + uint32_t cmd_flags,
10065 + uint16_t token,
10066 + int en);
10067 +
10068 +/**
10069 + * dpni_get_l3_chksum_validation() - Get L3 checksum validation mode
10070 + * @mc_io: Pointer to MC portal's I/O object
10071 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10072 + * @token: Token of DPNI object
10073 + * @en: Returns '1' if enabled; '0' otherwise
10074 + *
10075 + * Return: '0' on Success; Error code otherwise.
10076 + */
10077 +int dpni_get_l3_chksum_validation(struct fsl_mc_io *mc_io,
10078 + uint32_t cmd_flags,
10079 + uint16_t token,
10080 + int *en);
10081 +
10082 +/**
10083 + * dpni_set_l4_chksum_validation() - Enable/disable L4 checksum validation
10084 + * @mc_io: Pointer to MC portal's I/O object
10085 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10086 + * @token: Token of DPNI object
10087 + * @en: Set to '1' to enable; '0' to disable
10088 + *
10089 + * Return: '0' on Success; Error code otherwise.
10090 + */
10091 +int dpni_set_l4_chksum_validation(struct fsl_mc_io *mc_io,
10092 + uint32_t cmd_flags,
10093 + uint16_t token,
10094 + int en);
10095 +
10096 +/**
10097 + * dpni_get_l4_chksum_validation() - Get L4 checksum validation mode
10098 + * @mc_io: Pointer to MC portal's I/O object
10099 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10100 + * @token: Token of DPNI object
10101 + * @en: Returns '1' if enabled; '0' otherwise
10102 + *
10103 + * Return: '0' on Success; Error code otherwise.
10104 + */
10105 +int dpni_get_l4_chksum_validation(struct fsl_mc_io *mc_io,
10106 + uint32_t cmd_flags,
10107 + uint16_t token,
10108 + int *en);
10109 +
10110 +/**
10111 + * dpni_get_qdid() - Get the Queuing Destination ID (QDID) that should be used
10112 + * for enqueue operations
10113 + * @mc_io: Pointer to MC portal's I/O object
10114 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10115 + * @token: Token of DPNI object
10116 + * @qdid: Returned virtual QDID value that should be used as an argument
10117 + * in all enqueue operations
10118 + *
10119 + * Return: '0' on Success; Error code otherwise.
10120 + */
10121 +int dpni_get_qdid(struct fsl_mc_io *mc_io,
10122 + uint32_t cmd_flags,
10123 + uint16_t token,
10124 + uint16_t *qdid);
10125 +
10126 +/**
10127 + * struct dpni_sp_info - Structure representing DPNI storage-profile information
10128 + * (relevant only for DPNI owned by AIOP)
10129 + * @spids: array of storage-profiles
10130 + */
10131 +struct dpni_sp_info {
10132 + uint16_t spids[DPNI_MAX_SP];
10133 +};
10134 +
10135 +/**
10136 + * dpni_get_sp_info() - Get the AIOP storage profile IDs associated with the DPNI
10137 + * @mc_io: Pointer to MC portal's I/O object
10138 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10139 + * @token: Token of DPNI object
10140 + * @sp_info: Returned AIOP storage-profile information
10141 + *
10142 + * Return: '0' on Success; Error code otherwise.
10143 + *
10144 + * @warning Only relevant for DPNI that belongs to AIOP container.
10145 + */
10146 +int dpni_get_sp_info(struct fsl_mc_io *mc_io,
10147 + uint32_t cmd_flags,
10148 + uint16_t token,
10149 + struct dpni_sp_info *sp_info);
10150 +
10151 +/**
10152 + * dpni_get_tx_data_offset() - Get the Tx data offset (from start of buffer)
10153 + * @mc_io: Pointer to MC portal's I/O object
10154 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10155 + * @token: Token of DPNI object
10156 + * @data_offset: Tx data offset (from start of buffer)
10157 + *
10158 + * Return: '0' on Success; Error code otherwise.
10159 + */
10160 +int dpni_get_tx_data_offset(struct fsl_mc_io *mc_io,
10161 + uint32_t cmd_flags,
10162 + uint16_t token,
10163 + uint16_t *data_offset);
10164 +
10165 +/**
10166 + * enum dpni_counter - DPNI counter types
10167 + * @DPNI_CNT_ING_FRAME: Counts ingress frames
10168 + * @DPNI_CNT_ING_BYTE: Counts ingress bytes
10169 + * @DPNI_CNT_ING_FRAME_DROP: Counts ingress frames dropped due to explicit
10170 + * 'drop' setting
10171 + * @DPNI_CNT_ING_FRAME_DISCARD: Counts ingress frames discarded due to errors
10172 + * @DPNI_CNT_ING_MCAST_FRAME: Counts ingress multicast frames
10173 + * @DPNI_CNT_ING_MCAST_BYTE: Counts ingress multicast bytes
10174 + * @DPNI_CNT_ING_BCAST_FRAME: Counts ingress broadcast frames
10175 + * @DPNI_CNT_ING_BCAST_BYTES: Counts ingress broadcast bytes
10176 + * @DPNI_CNT_EGR_FRAME: Counts egress frames
10177 + * @DPNI_CNT_EGR_BYTE: Counts egress bytes
10178 + * @DPNI_CNT_EGR_FRAME_DISCARD: Counts egress frames discarded due to errors
10179 + */
10180 +enum dpni_counter {
10181 + DPNI_CNT_ING_FRAME = 0x0,
10182 + DPNI_CNT_ING_BYTE = 0x1,
10183 + DPNI_CNT_ING_FRAME_DROP = 0x2,
10184 + DPNI_CNT_ING_FRAME_DISCARD = 0x3,
10185 + DPNI_CNT_ING_MCAST_FRAME = 0x4,
10186 + DPNI_CNT_ING_MCAST_BYTE = 0x5,
10187 + DPNI_CNT_ING_BCAST_FRAME = 0x6,
10188 + DPNI_CNT_ING_BCAST_BYTES = 0x7,
10189 + DPNI_CNT_EGR_FRAME = 0x8,
10190 + DPNI_CNT_EGR_BYTE = 0x9,
10191 + DPNI_CNT_EGR_FRAME_DISCARD = 0xa
10192 +};
10193 +
10194 +/**
10195 + * dpni_get_counter() - Read a specific DPNI counter
10196 + * @mc_io: Pointer to MC portal's I/O object
10197 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10198 + * @token: Token of DPNI object
10199 + * @counter: The requested counter
10200 + * @value: Returned counter's current value
10201 + *
10202 + * Return: '0' on Success; Error code otherwise.
10203 + */
10204 +int dpni_get_counter(struct fsl_mc_io *mc_io,
10205 + uint32_t cmd_flags,
10206 + uint16_t token,
10207 + enum dpni_counter counter,
10208 + uint64_t *value);
10209 +
10210 +/**
10211 + * dpni_set_counter() - Set (or clear) a specific DPNI counter
10212 + * @mc_io: Pointer to MC portal's I/O object
10213 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10214 + * @token: Token of DPNI object
10215 + * @counter: The requested counter
10216 + * @value: New counter value; typically pass '0' for resetting
10217 + * the counter.
10218 + *
10219 + * Return: '0' on Success; Error code otherwise.
10220 + */
10221 +int dpni_set_counter(struct fsl_mc_io *mc_io,
10222 + uint32_t cmd_flags,
10223 + uint16_t token,
10224 + enum dpni_counter counter,
10225 + uint64_t value);
10226 +
10227 +/**
10228 + * Enable auto-negotiation
10229 + */
10230 +#define DPNI_LINK_OPT_AUTONEG 0x0000000000000001ULL
10231 +/**
10232 + * Enable half-duplex mode
10233 + */
10234 +#define DPNI_LINK_OPT_HALF_DUPLEX 0x0000000000000002ULL
10235 +/**
10236 + * Enable pause frames
10237 + */
10238 +#define DPNI_LINK_OPT_PAUSE 0x0000000000000004ULL
10239 +/**
10240 + * Enable asymmetric pause frames
10241 + */
10242 +#define DPNI_LINK_OPT_ASYM_PAUSE 0x0000000000000008ULL
10243 +
10244 +/**
10245 + * struct dpni_link_cfg - Structure representing DPNI link configuration
10246 + * @rate: Rate
10247 + * @options: Mask of available options; use 'DPNI_LINK_OPT_<X>' values
10248 + */
10249 +struct dpni_link_cfg {
10250 + uint32_t rate;
10251 + uint64_t options;
10252 +};
10253 +
10254 +/**
10255 + * dpni_set_link_cfg() - set the link configuration.
10256 + * @mc_io: Pointer to MC portal's I/O object
10257 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10258 + * @token: Token of DPNI object
10259 + * @cfg: Link configuration
10260 + *
10261 + * Return: '0' on Success; Error code otherwise.
10262 + */
10263 +int dpni_set_link_cfg(struct fsl_mc_io *mc_io,
10264 + uint32_t cmd_flags,
10265 + uint16_t token,
10266 + const struct dpni_link_cfg *cfg);
10267 +
10268 +/**
10269 + * struct dpni_link_state - Structure representing DPNI link state
10270 + * @rate: Rate
10271 + * @options: Mask of available options; use 'DPNI_LINK_OPT_<X>' values
10272 + * @up: Link state; '0' for down, '1' for up
10273 + */
10274 +struct dpni_link_state {
10275 + uint32_t rate;
10276 + uint64_t options;
10277 + int up;
10278 +};
10279 +
10280 +/**
10281 + * dpni_get_link_state() - Return the link state (either up or down)
10282 + * @mc_io: Pointer to MC portal's I/O object
10283 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10284 + * @token: Token of DPNI object
10285 + * @state: Returned link state
10286 + *
10287 + * Return: '0' on Success; Error code otherwise.
10288 + */
10289 +int dpni_get_link_state(struct fsl_mc_io *mc_io,
10290 + uint32_t cmd_flags,
10291 + uint16_t token,
10292 + struct dpni_link_state *state);
10293 +
10294 +/**
10295 + * struct dpni_tx_shaping - Structure representing DPNI tx shaping configuration
10296 + * @rate_limit: rate in Mbps
10297 + * @max_burst_size: burst size in bytes (up to 64KB)
10298 + */
10299 +struct dpni_tx_shaping_cfg {
10300 + uint32_t rate_limit;
10301 + uint16_t max_burst_size;
10302 +};
10303 +
10304 +/**
10305 + * dpni_set_tx_shaping() - Set the transmit shaping
10306 + * @mc_io: Pointer to MC portal's I/O object
10307 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10308 + * @token: Token of DPNI object
10309 + * @tx_shaper: tx shaping configuration
10310 + *
10311 + * Return: '0' on Success; Error code otherwise.
10312 + */
10313 +int dpni_set_tx_shaping(struct fsl_mc_io *mc_io,
10314 + uint32_t cmd_flags,
10315 + uint16_t token,
10316 + const struct dpni_tx_shaping_cfg *tx_shaper);
10317 +
10318 +/**
10319 + * dpni_set_max_frame_length() - Set the maximum received frame length.
10320 + * @mc_io: Pointer to MC portal's I/O object
10321 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10322 + * @token: Token of DPNI object
10323 + * @max_frame_length: Maximum received frame length (in
10324 + * bytes); frame is discarded if its
10325 + * length exceeds this value
10326 + *
10327 + * Return: '0' on Success; Error code otherwise.
10328 + */
10329 +int dpni_set_max_frame_length(struct fsl_mc_io *mc_io,
10330 + uint32_t cmd_flags,
10331 + uint16_t token,
10332 + uint16_t max_frame_length);
10333 +
10334 +/**
10335 + * dpni_get_max_frame_length() - Get the maximum received frame length.
10336 + * @mc_io: Pointer to MC portal's I/O object
10337 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10338 + * @token: Token of DPNI object
10339 + * @max_frame_length: Maximum received frame length (in
10340 + * bytes); frame is discarded if its
10341 + * length exceeds this value
10342 + *
10343 + * Return: '0' on Success; Error code otherwise.
10344 + */
10345 +int dpni_get_max_frame_length(struct fsl_mc_io *mc_io,
10346 + uint32_t cmd_flags,
10347 + uint16_t token,
10348 + uint16_t *max_frame_length);
10349 +
10350 +/**
10351 + * dpni_set_mtu() - Set the MTU for the interface.
10352 + * @mc_io: Pointer to MC portal's I/O object
10353 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10354 + * @token: Token of DPNI object
10355 + * @mtu: MTU length (in bytes)
10356 + *
10357 + * MTU determines the maximum fragment size for performing IP
10358 + * fragmentation on egress packets.
10359 + * Return: '0' on Success; Error code otherwise.
10360 + */
10361 +int dpni_set_mtu(struct fsl_mc_io *mc_io,
10362 + uint32_t cmd_flags,
10363 + uint16_t token,
10364 + uint16_t mtu);
10365 +
10366 +/**
10367 + * dpni_get_mtu() - Get the MTU.
10368 + * @mc_io: Pointer to MC portal's I/O object
10369 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10370 + * @token: Token of DPNI object
10371 + * @mtu: Returned MTU length (in bytes)
10372 + *
10373 + * Return: '0' on Success; Error code otherwise.
10374 + */
10375 +int dpni_get_mtu(struct fsl_mc_io *mc_io,
10376 + uint32_t cmd_flags,
10377 + uint16_t token,
10378 + uint16_t *mtu);
10379 +
10380 +/**
10381 + * dpni_set_multicast_promisc() - Enable/disable multicast promiscuous mode
10382 + * @mc_io: Pointer to MC portal's I/O object
10383 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10384 + * @token: Token of DPNI object
10385 + * @en: Set to '1' to enable; '0' to disable
10386 + *
10387 + * Return: '0' on Success; Error code otherwise.
10388 + */
10389 +int dpni_set_multicast_promisc(struct fsl_mc_io *mc_io,
10390 + uint32_t cmd_flags,
10391 + uint16_t token,
10392 + int en);
10393 +
10394 +/**
10395 + * dpni_get_multicast_promisc() - Get multicast promiscuous mode
10396 + * @mc_io: Pointer to MC portal's I/O object
10397 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10398 + * @token: Token of DPNI object
10399 + * @en: Returns '1' if enabled; '0' otherwise
10400 + *
10401 + * Return: '0' on Success; Error code otherwise.
10402 + */
10403 +int dpni_get_multicast_promisc(struct fsl_mc_io *mc_io,
10404 + uint32_t cmd_flags,
10405 + uint16_t token,
10406 + int *en);
10407 +
10408 +/**
10409 + * dpni_set_unicast_promisc() - Enable/disable unicast promiscuous mode
10410 + * @mc_io: Pointer to MC portal's I/O object
10411 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10412 + * @token: Token of DPNI object
10413 + * @en: Set to '1' to enable; '0' to disable
10414 + *
10415 + * Return: '0' on Success; Error code otherwise.
10416 + */
10417 +int dpni_set_unicast_promisc(struct fsl_mc_io *mc_io,
10418 + uint32_t cmd_flags,
10419 + uint16_t token,
10420 + int en);
10421 +
10422 +/**
10423 + * dpni_get_unicast_promisc() - Get unicast promiscuous mode
10424 + * @mc_io: Pointer to MC portal's I/O object
10425 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10426 + * @token: Token of DPNI object
10427 + * @en: Returns '1' if enabled; '0' otherwise
10428 + *
10429 + * Return: '0' on Success; Error code otherwise.
10430 + */
10431 +int dpni_get_unicast_promisc(struct fsl_mc_io *mc_io,
10432 + uint32_t cmd_flags,
10433 + uint16_t token,
10434 + int *en);
10435 +
10436 +/**
10437 + * dpni_set_primary_mac_addr() - Set the primary MAC address
10438 + * @mc_io: Pointer to MC portal's I/O object
10439 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10440 + * @token: Token of DPNI object
10441 + * @mac_addr: MAC address to set as primary address
10442 + *
10443 + * Return: '0' on Success; Error code otherwise.
10444 + */
10445 +int dpni_set_primary_mac_addr(struct fsl_mc_io *mc_io,
10446 + uint32_t cmd_flags,
10447 + uint16_t token,
10448 + const uint8_t mac_addr[6]);
10449 +
10450 +/**
10451 + * dpni_get_primary_mac_addr() - Get the primary MAC address
10452 + * @mc_io: Pointer to MC portal's I/O object
10453 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10454 + * @token: Token of DPNI object
10455 + * @mac_addr: Returned MAC address
10456 + *
10457 + * Return: '0' on Success; Error code otherwise.
10458 + */
10459 +int dpni_get_primary_mac_addr(struct fsl_mc_io *mc_io,
10460 + uint32_t cmd_flags,
10461 + uint16_t token,
10462 + uint8_t mac_addr[6]);
10463 +
10464 +/**
10465 + * dpni_add_mac_addr() - Add MAC address filter
10466 + * @mc_io: Pointer to MC portal's I/O object
10467 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10468 + * @token: Token of DPNI object
10469 + * @mac_addr: MAC address to add
10470 + *
10471 + * Return: '0' on Success; Error code otherwise.
10472 + */
10473 +int dpni_add_mac_addr(struct fsl_mc_io *mc_io,
10474 + uint32_t cmd_flags,
10475 + uint16_t token,
10476 + const uint8_t mac_addr[6]);
10477 +
10478 +/**
10479 + * dpni_remove_mac_addr() - Remove MAC address filter
10480 + * @mc_io: Pointer to MC portal's I/O object
10481 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10482 + * @token: Token of DPNI object
10483 + * @mac_addr: MAC address to remove
10484 + *
10485 + * Return: '0' on Success; Error code otherwise.
10486 + */
10487 +int dpni_remove_mac_addr(struct fsl_mc_io *mc_io,
10488 + uint32_t cmd_flags,
10489 + uint16_t token,
10490 + const uint8_t mac_addr[6]);
10491 +
10492 +/**
10493 + * dpni_clear_mac_filters() - Clear all unicast and/or multicast MAC filters
10494 + * @mc_io: Pointer to MC portal's I/O object
10495 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10496 + * @token: Token of DPNI object
10497 + * @unicast: Set to '1' to clear unicast addresses
10498 + * @multicast: Set to '1' to clear multicast addresses
10499 + *
10500 + * The primary MAC address is not cleared by this operation.
10501 + *
10502 + * Return: '0' on Success; Error code otherwise.
10503 + */
10504 +int dpni_clear_mac_filters(struct fsl_mc_io *mc_io,
10505 + uint32_t cmd_flags,
10506 + uint16_t token,
10507 + int unicast,
10508 + int multicast);
10509 +
10510 +/**
10511 + * dpni_set_vlan_filters() - Enable/disable VLAN filtering mode
10512 + * @mc_io: Pointer to MC portal's I/O object
10513 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10514 + * @token: Token of DPNI object
10515 + * @en: Set to '1' to enable; '0' to disable
10516 + *
10517 + * Return: '0' on Success; Error code otherwise.
10518 + */
10519 +int dpni_set_vlan_filters(struct fsl_mc_io *mc_io,
10520 + uint32_t cmd_flags,
10521 + uint16_t token,
10522 + int en);
10523 +
10524 +/**
10525 + * dpni_add_vlan_id() - Add VLAN ID filter
10526 + * @mc_io: Pointer to MC portal's I/O object
10527 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10528 + * @token: Token of DPNI object
10529 + * @vlan_id: VLAN ID to add
10530 + *
10531 + * Return: '0' on Success; Error code otherwise.
10532 + */
10533 +int dpni_add_vlan_id(struct fsl_mc_io *mc_io,
10534 + uint32_t cmd_flags,
10535 + uint16_t token,
10536 + uint16_t vlan_id);
10537 +
10538 +/**
10539 + * dpni_remove_vlan_id() - Remove VLAN ID filter
10540 + * @mc_io: Pointer to MC portal's I/O object
10541 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10542 + * @token: Token of DPNI object
10543 + * @vlan_id: VLAN ID to remove
10544 + *
10545 + * Return: '0' on Success; Error code otherwise.
10546 + */
10547 +int dpni_remove_vlan_id(struct fsl_mc_io *mc_io,
10548 + uint32_t cmd_flags,
10549 + uint16_t token,
10550 + uint16_t vlan_id);
10551 +
10552 +/**
10553 + * dpni_clear_vlan_filters() - Clear all VLAN filters
10554 + * @mc_io: Pointer to MC portal's I/O object
10555 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10556 + * @token: Token of DPNI object
10557 + *
10558 + * Return: '0' on Success; Error code otherwise.
10559 + */
10560 +int dpni_clear_vlan_filters(struct fsl_mc_io *mc_io,
10561 + uint32_t cmd_flags,
10562 + uint16_t token);
10563 +
10564 +/**
10565 + * enum dpni_tx_schedule_mode - DPNI Tx scheduling mode
10566 + * @DPNI_TX_SCHED_STRICT_PRIORITY: strict priority
10567 + * @DPNI_TX_SCHED_WEIGHTED: weighted based scheduling
10568 + */
10569 +enum dpni_tx_schedule_mode {
10570 + DPNI_TX_SCHED_STRICT_PRIORITY,
10571 + DPNI_TX_SCHED_WEIGHTED,
10572 +};
10573 +
10574 +/**
10575 + * struct dpni_tx_schedule_cfg - Structure representing Tx
10576 + * scheduling configuration
10577 + * @mode: scheduling mode
10578 + * @delta_bandwidth: Bandwidth represented in weights from 100 to 10000;
10579 + * not applicable for 'strict-priority' mode
10580 + */
10581 +struct dpni_tx_schedule_cfg {
10582 + enum dpni_tx_schedule_mode mode;
10583 + uint16_t delta_bandwidth;
10584 +};
10585 +
10586 +/**
10587 + * struct dpni_tx_selection_cfg - Structure representing transmission
10588 + * selection configuration
10589 + * @tc_sched: an array of traffic-classes
10590 + */
10591 +struct dpni_tx_selection_cfg {
10592 + struct dpni_tx_schedule_cfg tc_sched[DPNI_MAX_TC];
10593 +};
10594 +
10595 +/**
10596 + * dpni_set_tx_selection() - Set transmission selection configuration
10597 + * @mc_io: Pointer to MC portal's I/O object
10598 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10599 + * @token: Token of DPNI object
10600 + * @cfg: transmission selection configuration
10601 + *
10602 + * warning: Allowed only when DPNI is disabled
10603 + *
10604 + * Return: '0' on Success; Error code otherwise.
10605 + */
10606 +int dpni_set_tx_selection(struct fsl_mc_io *mc_io,
10607 + uint32_t cmd_flags,
10608 + uint16_t token,
10609 + const struct dpni_tx_selection_cfg *cfg);
10610 +
10611 +/**
10612 + * enum dpni_dist_mode - DPNI distribution mode
10613 + * @DPNI_DIST_MODE_NONE: No distribution
10614 + * @DPNI_DIST_MODE_HASH: Use hash distribution; only relevant if
10615 + * the 'DPNI_OPT_DIST_HASH' option was set at DPNI creation
10616 + * @DPNI_DIST_MODE_FS: Use explicit flow steering; only relevant if
10617 + * the 'DPNI_OPT_DIST_FS' option was set at DPNI creation
10618 + */
10619 +enum dpni_dist_mode {
10620 + DPNI_DIST_MODE_NONE = 0,
10621 + DPNI_DIST_MODE_HASH = 1,
10622 + DPNI_DIST_MODE_FS = 2
10623 +};
10624 +
10625 +/**
10626 + * enum dpni_fs_miss_action - DPNI Flow Steering miss action
10627 + * @DPNI_FS_MISS_DROP: In case of no-match, drop the frame
10628 + * @DPNI_FS_MISS_EXPLICIT_FLOWID: In case of no-match, use explicit flow-id
10629 + * @DPNI_FS_MISS_HASH: In case of no-match, distribute using hash
10630 + */
10631 +enum dpni_fs_miss_action {
10632 + DPNI_FS_MISS_DROP = 0,
10633 + DPNI_FS_MISS_EXPLICIT_FLOWID = 1,
10634 + DPNI_FS_MISS_HASH = 2
10635 +};
10636 +
10637 +/**
10638 + * struct dpni_fs_tbl_cfg - Flow Steering table configuration
10639 + * @miss_action: Miss action selection
10640 + * @default_flow_id: Used when 'miss_action = DPNI_FS_MISS_EXPLICIT_FLOWID'
10641 + */
10642 +struct dpni_fs_tbl_cfg {
10643 + enum dpni_fs_miss_action miss_action;
10644 + uint16_t default_flow_id;
10645 +};
10646 +
10647 +/**
10648 + * dpni_prepare_key_cfg() - prepare extract parameters
10649 + * @cfg: Key Generation profile defining a full rule
10650 + * @key_cfg_buf: Zeroed 256 bytes of memory before mapping it to DMA
10651 + *
10652 + * This function has to be called before the following functions:
10653 + * - dpni_set_rx_tc_dist()
10654 + * - dpni_set_qos_table()
10655 + */
10656 +int dpni_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
10657 + uint8_t *key_cfg_buf);
10658 +
10659 +/**
10660 + * struct dpni_rx_tc_dist_cfg - Rx traffic class distribution configuration
10661 + * @dist_size: Set the distribution size;
10662 + * supported values: 1,2,3,4,6,7,8,12,14,16,24,28,32,48,56,64,96,
10663 + * 112,128,192,224,256,384,448,512,768,896,1024
10664 + * @dist_mode: Distribution mode
10665 + * @key_cfg_iova: I/O virtual address of 256 bytes DMA-able memory filled with
10666 + * the extractions to be used for the distribution key, by calling
10667 + * dpni_prepare_key_cfg(); relevant only when
10668 + * 'dist_mode != DPNI_DIST_MODE_NONE', otherwise it can be '0'
10669 + * @fs_cfg: Flow Steering table configuration; only relevant if
10670 + * 'dist_mode = DPNI_DIST_MODE_FS'
10671 + */
10672 +struct dpni_rx_tc_dist_cfg {
10673 + uint16_t dist_size;
10674 + enum dpni_dist_mode dist_mode;
10675 + uint64_t key_cfg_iova;
10676 + struct dpni_fs_tbl_cfg fs_cfg;
10677 +};
10678 +
10679 +/**
10680 + * dpni_set_rx_tc_dist() - Set Rx traffic class distribution configuration
10681 + * @mc_io: Pointer to MC portal's I/O object
10682 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10683 + * @token: Token of DPNI object
10684 + * @tc_id: Traffic class selection (0-7)
10685 + * @cfg: Traffic class distribution configuration
10686 + *
10687 + * warning: if 'dist_mode != DPNI_DIST_MODE_NONE', call dpni_prepare_key_cfg()
10688 + * first to prepare the key_cfg_iova parameter
10689 + *
10690 + * Return: '0' on Success; error code otherwise.
10691 + */
10692 +int dpni_set_rx_tc_dist(struct fsl_mc_io *mc_io,
10693 + uint32_t cmd_flags,
10694 + uint16_t token,
10695 + uint8_t tc_id,
10696 + const struct dpni_rx_tc_dist_cfg *cfg);
10697 +
10698 +/**
10699 + * Set to select color aware mode (otherwise - color blind)
10700 + */
10701 +#define DPNI_POLICER_OPT_COLOR_AWARE 0x00000001
10702 +/**
10703 + * Set to discard frame with RED color
10704 + */
10705 +#define DPNI_POLICER_OPT_DISCARD_RED 0x00000002
10706 +
10707 +/**
10708 + * enum dpni_policer_mode - selecting the policer mode
10709 + * @DPNI_POLICER_MODE_NONE: Policer is disabled
10710 + * @DPNI_POLICER_MODE_PASS_THROUGH: Policer pass through
10711 + * @DPNI_POLICER_MODE_RFC_2698: Policer algorithm RFC 2698
10712 + * @DPNI_POLICER_MODE_RFC_4115: Policer algorithm RFC 4115
10713 + */
10714 +enum dpni_policer_mode {
10715 + DPNI_POLICER_MODE_NONE = 0,
10716 + DPNI_POLICER_MODE_PASS_THROUGH,
10717 + DPNI_POLICER_MODE_RFC_2698,
10718 + DPNI_POLICER_MODE_RFC_4115
10719 +};
10720 +
10721 +/**
10722 + * enum dpni_policer_unit - DPNI policer units
10723 + * @DPNI_POLICER_UNIT_BYTES: bytes units
10724 + * @DPNI_POLICER_UNIT_FRAMES: frames units
10725 + */
10726 +enum dpni_policer_unit {
10727 + DPNI_POLICER_UNIT_BYTES = 0,
10728 + DPNI_POLICER_UNIT_FRAMES
10729 +};
10730 +
10731 +/**
10732 + * enum dpni_policer_color - selecting the policer color
10733 + * @DPNI_POLICER_COLOR_GREEN: Green color
10734 + * @DPNI_POLICER_COLOR_YELLOW: Yellow color
10735 + * @DPNI_POLICER_COLOR_RED: Red color
10736 + */
10737 +enum dpni_policer_color {
10738 + DPNI_POLICER_COLOR_GREEN = 0,
10739 + DPNI_POLICER_COLOR_YELLOW,
10740 + DPNI_POLICER_COLOR_RED
10741 +};
10742 +
10743 +/**
10744 + * struct dpni_rx_tc_policing_cfg - Policer configuration
10745 + * @options: Mask of available options; use 'DPNI_POLICER_OPT_<X>' values
10746 + * @mode: policer mode
10747 + * @default_color: In pass-through mode, the policer re-colors any incoming
10748 + * packet with this color. In color-aware non-pass-through mode, the
10749 + * policer re-colors with this color all packets with FD[DROPP] > 2.
10750 + * @units: Bytes or Packets
10751 + * @cir: Committed information rate (CIR) in Kbps or packets/second
10752 + * @cbs: Committed burst size (CBS) in bytes or packets
10753 + * @eir: Peak information rate (PIR, rfc2698) in Kbps or packets/second
10754 + * Excess information rate (EIR, rfc4115) in Kbps or packets/second
10755 + * @ebs: Peak burst size (PBS, rfc2698) in bytes or packets
10756 + * Excess burst size (EBS, rfc4115) in bytes or packets
10757 + */
10758 +struct dpni_rx_tc_policing_cfg {
10759 + uint32_t options;
10760 + enum dpni_policer_mode mode;
10761 + enum dpni_policer_unit units;
10762 + enum dpni_policer_color default_color;
10763 + uint32_t cir;
10764 + uint32_t cbs;
10765 + uint32_t eir;
10766 + uint32_t ebs;
10767 +};
10768 +
10769 +/**
10770 + * dpni_set_rx_tc_policing() - Set Rx traffic class policing configuration
10771 + * @mc_io: Pointer to MC portal's I/O object
10772 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10773 + * @token: Token of DPNI object
10774 + * @tc_id: Traffic class selection (0-7)
10775 + * @cfg: Traffic class policing configuration
10776 + *
10777 + * Return: '0' on Success; error code otherwise.
10778 + */
10779 +int dpni_set_rx_tc_policing(struct fsl_mc_io *mc_io,
10780 + uint32_t cmd_flags,
10781 + uint16_t token,
10782 + uint8_t tc_id,
10783 + const struct dpni_rx_tc_policing_cfg *cfg);
10784 +
10785 +/**
10786 + * dpni_get_rx_tc_policing() - Get Rx traffic class policing configuration
10787 + * @mc_io: Pointer to MC portal's I/O object
10788 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10789 + * @token: Token of DPNI object
10790 + * @tc_id: Traffic class selection (0-7)
10791 + * @cfg: Traffic class policing configuration
10792 + *
10793 + * Return: '0' on Success; error code otherwise.
10794 + */
10795 +int dpni_get_rx_tc_policing(struct fsl_mc_io *mc_io,
10796 + uint32_t cmd_flags,
10797 + uint16_t token,
10798 + uint8_t tc_id,
10799 + struct dpni_rx_tc_policing_cfg *cfg);
10800 +
10801 +/**
10802 + * enum dpni_congestion_unit - DPNI congestion units
10803 + * @DPNI_CONGESTION_UNIT_BYTES: bytes units
10804 + * @DPNI_CONGESTION_UNIT_FRAMES: frames units
10805 + */
10806 +enum dpni_congestion_unit {
10807 + DPNI_CONGESTION_UNIT_BYTES = 0,
10808 + DPNI_CONGESTION_UNIT_FRAMES
10809 +};
10810 +
10811 +/**
10812 + * enum dpni_early_drop_mode - DPNI early drop mode
10813 + * @DPNI_EARLY_DROP_MODE_NONE: early drop is disabled
10814 + * @DPNI_EARLY_DROP_MODE_TAIL: early drop in taildrop mode
10815 + * @DPNI_EARLY_DROP_MODE_WRED: early drop in WRED mode
10816 + */
10817 +enum dpni_early_drop_mode {
10818 + DPNI_EARLY_DROP_MODE_NONE = 0,
10819 + DPNI_EARLY_DROP_MODE_TAIL,
10820 + DPNI_EARLY_DROP_MODE_WRED
10821 +};
10822 +
10823 +/**
10824 + * struct dpni_wred_cfg - WRED configuration
10825 + * @max_threshold: maximum threshold at which packets may be discarded; above
10826 + * this threshold all packets are discarded; must be less than 2^39;
10827 + * approximated as (x+256)*2^(y-1) due to the HW
10828 + * implementation.
10829 + * @min_threshold: minimum threshold at which packets may be discarded
10830 + * @drop_probability: probability that a packet will be discarded (1-100,
10831 + * associated with the max_threshold).
10832 + */
10833 +struct dpni_wred_cfg {
10834 + uint64_t max_threshold;
10835 + uint64_t min_threshold;
10836 + uint8_t drop_probability;
10837 +};
10838 +
10839 +/**
10840 + * struct dpni_early_drop_cfg - early-drop configuration
10841 + * @mode: drop mode
10842 + * @units: units type
10843 + * @green: WRED - 'green' configuration
10844 + * @yellow: WRED - 'yellow' configuration
10845 + * @red: WRED - 'red' configuration
10846 + * @tail_drop_threshold: tail drop threshold
10847 + */
10848 +struct dpni_early_drop_cfg {
10849 + enum dpni_early_drop_mode mode;
10850 + enum dpni_congestion_unit units;
10851 +
10852 + struct dpni_wred_cfg green;
10853 + struct dpni_wred_cfg yellow;
10854 + struct dpni_wred_cfg red;
10855 +
10856 + uint32_t tail_drop_threshold;
10857 +};
10858 +
10859 +/**
10860 + * dpni_prepare_early_drop() - prepare an early-drop configuration buffer.
10861 + * @cfg: Early-drop configuration
10862 + * @early_drop_buf: Zeroed 256 bytes of memory before mapping it to DMA
10863 + *
10864 + * This function has to be called before dpni_set_rx_tc_early_drop or
10865 + * dpni_set_tx_tc_early_drop
10866 + *
10867 + */
10868 +void dpni_prepare_early_drop(const struct dpni_early_drop_cfg *cfg,
10869 + uint8_t *early_drop_buf);
10870 +
10871 +/**
10872 + * dpni_extract_early_drop() - extract the early drop configuration.
10873 + * @cfg: Early-drop configuration
10874 + * @early_drop_buf: Zeroed 256 bytes of memory before mapping it to DMA
10875 + *
10876 + * This function has to be called after dpni_get_rx_tc_early_drop or
10877 + * dpni_get_tx_tc_early_drop
10878 + *
10879 + */
10880 +void dpni_extract_early_drop(struct dpni_early_drop_cfg *cfg,
10881 + const uint8_t *early_drop_buf);
10882 +
10883 +/**
10884 + * dpni_set_rx_tc_early_drop() - Set Rx traffic class early-drop configuration
10885 + * @mc_io: Pointer to MC portal's I/O object
10886 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10887 + * @token: Token of DPNI object
10888 + * @tc_id: Traffic class selection (0-7)
10889 + * @early_drop_iova: I/O virtual address of 256 bytes DMA-able memory filled
10890 + * with the early-drop configuration by calling dpni_prepare_early_drop()
10891 + *
10892 + * warning: Before calling this function, call dpni_prepare_early_drop() to
10893 + * prepare the early_drop_iova parameter
10894 + *
10895 + * Return: '0' on Success; error code otherwise.
10896 + */
10897 +int dpni_set_rx_tc_early_drop(struct fsl_mc_io *mc_io,
10898 + uint32_t cmd_flags,
10899 + uint16_t token,
10900 + uint8_t tc_id,
10901 + uint64_t early_drop_iova);
10902 +
10903 +/**
10904 + * dpni_get_rx_tc_early_drop() - Get Rx traffic class early-drop configuration
10905 + * @mc_io: Pointer to MC portal's I/O object
10906 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10907 + * @token: Token of DPNI object
10908 + * @tc_id: Traffic class selection (0-7)
10909 + * @early_drop_iova: I/O virtual address of 256 bytes DMA-able memory
10910 + *
10911 + * warning: After calling this function, call dpni_extract_early_drop() to
10912 + * get the early drop configuration
10913 + *
10914 + * Return: '0' on Success; error code otherwise.
10915 + */
10916 +int dpni_get_rx_tc_early_drop(struct fsl_mc_io *mc_io,
10917 + uint32_t cmd_flags,
10918 + uint16_t token,
10919 + uint8_t tc_id,
10920 + uint64_t early_drop_iova);
10921 +
10922 +/**
10923 + * dpni_set_tx_tc_early_drop() - Set Tx traffic class early-drop configuration
10924 + * @mc_io: Pointer to MC portal's I/O object
10925 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10926 + * @token: Token of DPNI object
10927 + * @tc_id: Traffic class selection (0-7)
10928 + * @early_drop_iova: I/O virtual address of 256 bytes DMA-able memory filled
10929 + * with the early-drop configuration by calling dpni_prepare_early_drop()
10930 + *
10931 + * warning: Before calling this function, call dpni_prepare_early_drop() to
10932 + * prepare the early_drop_iova parameter
10933 + *
10934 + * Return: '0' on Success; error code otherwise.
10935 + */
10936 +int dpni_set_tx_tc_early_drop(struct fsl_mc_io *mc_io,
10937 + uint32_t cmd_flags,
10938 + uint16_t token,
10939 + uint8_t tc_id,
10940 + uint64_t early_drop_iova);
10941 +
10942 +/**
10943 + * dpni_get_tx_tc_early_drop() - Get Tx traffic class early-drop configuration
10944 + * @mc_io: Pointer to MC portal's I/O object
10945 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
10946 + * @token: Token of DPNI object
10947 + * @tc_id: Traffic class selection (0-7)
10948 + * @early_drop_iova: I/O virtual address of 256 bytes DMA-able memory
10949 + *
10950 + * warning: After calling this function, call dpni_extract_early_drop() to
10951 + * get the early drop configuration
10952 + *
10953 + * Return: '0' on Success; error code otherwise.
10954 + */
10955 +int dpni_get_tx_tc_early_drop(struct fsl_mc_io *mc_io,
10956 + uint32_t cmd_flags,
10957 + uint16_t token,
10958 + uint8_t tc_id,
10959 + uint64_t early_drop_iova);
10960 +
10961 +/**
10962 + * enum dpni_dest - DPNI destination types
10963 + * @DPNI_DEST_NONE: Unassigned destination; The queue is set in parked mode and
10964 + * does not generate FQDAN notifications; user is expected to
10965 + * dequeue from the queue based on polling or other user-defined
10966 + * method
10967 + * @DPNI_DEST_DPIO: The queue is set in schedule mode and generates FQDAN
10968 + * notifications to the specified DPIO; user is expected to dequeue
10969 + * from the queue only after notification is received
10970 + * @DPNI_DEST_DPCON: The queue is set in schedule mode and does not generate
10971 + * FQDAN notifications, but is connected to the specified DPCON
10972 + * object; user is expected to dequeue from the DPCON channel
10973 + */
10974 +enum dpni_dest {
10975 + DPNI_DEST_NONE = 0,
10976 + DPNI_DEST_DPIO = 1,
10977 + DPNI_DEST_DPCON = 2
10978 +};
10979 +
10980 +/**
10981 + * struct dpni_dest_cfg - Structure representing DPNI destination parameters
10982 + * @dest_type: Destination type
10983 + * @dest_id: Either DPIO ID or DPCON ID, depending on the destination type
10984 + * @priority: Priority selection within the DPIO or DPCON channel; valid values
10985 + * are 0-1 or 0-7, depending on the number of priorities in that
10986 + * channel; not relevant for 'DPNI_DEST_NONE' option
10987 + */
10988 +struct dpni_dest_cfg {
10989 + enum dpni_dest dest_type;
10990 + int dest_id;
10991 + uint8_t priority;
10992 +};
10993 +
10994 +/* DPNI congestion options */
10995 +
10996 +/**
10997 + * CSCN message is written to message_iova once entering a
10998 + * congestion state (see 'threshold_entry')
10999 + */
11000 +#define DPNI_CONG_OPT_WRITE_MEM_ON_ENTER 0x00000001
11001 +/**
11002 + * CSCN message is written to message_iova once exiting a
11003 + * congestion state (see 'threshold_exit')
11004 + */
11005 +#define DPNI_CONG_OPT_WRITE_MEM_ON_EXIT 0x00000002
11006 +/**
11007 + * CSCN write will attempt to allocate into a cache (coherent write);
11008 + * valid only if 'DPNI_CONG_OPT_WRITE_MEM_<X>' is selected
11009 + */
11010 +#define DPNI_CONG_OPT_COHERENT_WRITE 0x00000004
11011 +/**
11012 + * if 'dest_cfg.dest_type != DPNI_DEST_NONE' CSCN message is sent to
11013 + * DPIO/DPCON's WQ channel once entering a congestion state
11014 + * (see 'threshold_entry')
11015 + */
11016 +#define DPNI_CONG_OPT_NOTIFY_DEST_ON_ENTER 0x00000008
11017 +/**
11018 + * if 'dest_cfg.dest_type != DPNI_DEST_NONE' CSCN message is sent to
11019 + * DPIO/DPCON's WQ channel once exiting a congestion state
11020 + * (see 'threshold_exit')
11021 + */
11022 +#define DPNI_CONG_OPT_NOTIFY_DEST_ON_EXIT 0x00000010
11023 +/**
11024 + * if 'dest_cfg.dest_type != DPNI_DEST_NONE' when the CSCN is written to the
11025 + * sw-portal's DQRR, the DQRI interrupt is asserted immediately (if enabled)
11026 + */
11027 +#define DPNI_CONG_OPT_INTR_COALESCING_DISABLED 0x00000020
11028 +
11029 +/**
11030 + * struct dpni_congestion_notification_cfg - congestion notification
11031 + * configuration
11032 + * @units: units type
11033 + * @threshold_entry: above this threshold we enter a congestion state.
11034 + * set it to '0' to disable it
11035 + * @threshold_exit: below this threshold we exit the congestion state.
11036 + * @message_ctx: The context that will be part of the CSCN message
11037 + * @message_iova: I/O virtual address (must be in DMA-able memory),
11038 + * must be 16B aligned; valid only if 'DPNI_CONG_OPT_WRITE_MEM_<X>' is
11039 + * contained in 'options'
11040 + * @dest_cfg: CSCN can be sent to either DPIO or DPCON WQ channel
11041 + * @options: Mask of available options; use 'DPNI_CONG_OPT_<X>' values
11042 + */
11044 +struct dpni_congestion_notification_cfg {
11045 + enum dpni_congestion_unit units;
11046 + uint32_t threshold_entry;
11047 + uint32_t threshold_exit;
11048 + uint64_t message_ctx;
11049 + uint64_t message_iova;
11050 + struct dpni_dest_cfg dest_cfg;
11051 + uint16_t options;
11052 +};
11053 +
11054 +/**
11055 + * dpni_set_rx_tc_congestion_notification() - Set Rx traffic class congestion
11056 + * notification configuration
11057 + * @mc_io: Pointer to MC portal's I/O object
11058 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11059 + * @token: Token of DPNI object
11060 + * @tc_id: Traffic class selection (0-7)
11061 + * @cfg: congestion notification configuration
11062 + *
11063 + * Return: '0' on Success; error code otherwise.
11064 + */
11065 +int dpni_set_rx_tc_congestion_notification(struct fsl_mc_io *mc_io,
11066 + uint32_t cmd_flags,
11067 + uint16_t token,
11068 + uint8_t tc_id,
11069 + const struct dpni_congestion_notification_cfg *cfg);
11070 +
11071 +/**
11072 + * dpni_get_rx_tc_congestion_notification() - Get Rx traffic class congestion
11073 + * notification configuration
11074 + * @mc_io: Pointer to MC portal's I/O object
11075 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11076 + * @token: Token of DPNI object
11077 + * @tc_id: Traffic class selection (0-7)
11078 + * @cfg: congestion notification configuration
11079 + *
11080 + * Return: '0' on Success; error code otherwise.
11081 + */
11082 +int dpni_get_rx_tc_congestion_notification(struct fsl_mc_io *mc_io,
11083 + uint32_t cmd_flags,
11084 + uint16_t token,
11085 + uint8_t tc_id,
11086 + struct dpni_congestion_notification_cfg *cfg);
11087 +
11088 +/**
11089 + * dpni_set_tx_tc_congestion_notification() - Set Tx traffic class congestion
11090 + * notification configuration
11091 + * @mc_io: Pointer to MC portal's I/O object
11092 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11093 + * @token: Token of DPNI object
11094 + * @tc_id: Traffic class selection (0-7)
11095 + * @cfg: congestion notification configuration
11096 + *
11097 + * Return: '0' on Success; error code otherwise.
11098 + */
11099 +int dpni_set_tx_tc_congestion_notification(struct fsl_mc_io *mc_io,
11100 + uint32_t cmd_flags,
11101 + uint16_t token,
11102 + uint8_t tc_id,
11103 + const struct dpni_congestion_notification_cfg *cfg);
11104 +
11105 +/**
11106 + * dpni_get_tx_tc_congestion_notification() - Get Tx traffic class congestion
11107 + * notification configuration
11108 + * @mc_io: Pointer to MC portal's I/O object
11109 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11110 + * @token: Token of DPNI object
11111 + * @tc_id: Traffic class selection (0-7)
11112 + * @cfg: congestion notification configuration
11113 + *
11114 + * Return: '0' on Success; error code otherwise.
11115 + */
11116 +int dpni_get_tx_tc_congestion_notification(struct fsl_mc_io *mc_io,
11117 + uint32_t cmd_flags,
11118 + uint16_t token,
11119 + uint8_t tc_id,
11120 + struct dpni_congestion_notification_cfg *cfg);
11121 +
11122 +/**
11123 + * enum dpni_flc_type - DPNI FLC types
11124 + * @DPNI_FLC_USER_DEFINED: select the FLC to be used for user defined value
11125 + * @DPNI_FLC_STASH: select the FLC to be used for stash control
11126 + */
11127 +enum dpni_flc_type {
11128 + DPNI_FLC_USER_DEFINED = 0,
11129 + DPNI_FLC_STASH = 1,
11130 +};
11131 +
11132 +/**
11133 + * enum dpni_stash_size - DPNI FLC stashing size
11134 + * @DPNI_STASH_SIZE_0B: no stash
11135 + * @DPNI_STASH_SIZE_64B: stashes 64 bytes
11136 + * @DPNI_STASH_SIZE_128B: stashes 128 bytes
11137 + * @DPNI_STASH_SIZE_192B: stashes 192 bytes
11138 + */
11139 +enum dpni_stash_size {
11140 + DPNI_STASH_SIZE_0B = 0,
11141 + DPNI_STASH_SIZE_64B = 1,
11142 + DPNI_STASH_SIZE_128B = 2,
11143 + DPNI_STASH_SIZE_192B = 3,
11144 +};
11145 +
11146 +/* DPNI FLC stash options */
11147 +
11148 +/**
11149 + * stashes the whole annotation area (up to 192 bytes)
11150 + */
11151 +#define DPNI_FLC_STASH_FRAME_ANNOTATION 0x00000001
11152 +
11153 +/**
11154 + * struct dpni_flc_cfg - Structure representing DPNI FLC configuration
11155 + * @flc_type: FLC type
11156 + * @options: Mask of available options;
11157 + * use 'DPNI_FLC_STASH_<X>' values
11158 + * @frame_data_size: Size of frame data to be stashed
11159 + * @flow_context_size: Size of flow context to be stashed
11160 + * @flow_context: 1. In case flc_type is 'DPNI_FLC_USER_DEFINED':
11161 + * this value will be provided in the frame descriptor
11162 + * (FD[FLC])
11163 + * 2. In case flc_type is 'DPNI_FLC_STASH':
11164 + * this value will be I/O virtual address of the
11165 + * flow-context;
11166 + * Must be cacheline-aligned and DMA-able memory
11167 + */
11168 +struct dpni_flc_cfg {
11169 + enum dpni_flc_type flc_type;
11170 + uint32_t options;
11171 + enum dpni_stash_size frame_data_size;
11172 + enum dpni_stash_size flow_context_size;
11173 + uint64_t flow_context;
11174 +};
11175 +
11176 +/**
11177 + * DPNI queue modification options
11178 + */
11179 +
11180 +/**
11181 + * Select to modify the user's context associated with the queue
11182 + */
11183 +#define DPNI_QUEUE_OPT_USER_CTX 0x00000001
11184 +/**
11185 + * Select to modify the queue's destination
11186 + */
11187 +#define DPNI_QUEUE_OPT_DEST 0x00000002
11188 +/** Select to modify the flow-context parameters;
11189 + * not applicable for Tx-conf/Err queues as the FD comes from the user
11190 + */
11191 +#define DPNI_QUEUE_OPT_FLC 0x00000004
11192 +/**
11193 + * Select to modify the queue's order preservation
11194 + */
11195 +#define DPNI_QUEUE_OPT_ORDER_PRESERVATION 0x00000008
11196 +/* Select to modify the queue's tail-drop threshold */
11197 +#define DPNI_QUEUE_OPT_TAILDROP_THRESHOLD 0x00000010
11198 +
11199 +/**
11200 + * struct dpni_queue_cfg - Structure representing queue configuration
11201 + * @options: Flags representing the suggested modifications to the queue;
11202 + * Use any combination of 'DPNI_QUEUE_OPT_<X>' flags
11203 + * @user_ctx: User context value provided in the frame descriptor of each
11204 + * dequeued frame; valid only if 'DPNI_QUEUE_OPT_USER_CTX'
11205 + * is contained in 'options'
11206 + * @dest_cfg: Queue destination parameters;
11207 + * valid only if 'DPNI_QUEUE_OPT_DEST' is contained in 'options'
11208 + * @flc_cfg: Flow context configuration; in case the TC's distribution
11209 + * is either NONE or HASH, the FLC settings of flow#0 are used;
11210 + * in the case of FS (flow-steering), the flow's own FLC settings
11211 + * are used;
11212 + * valid only if 'DPNI_QUEUE_OPT_FLC' is contained in 'options'
11213 + * @order_preservation_en: enable/disable order preservation;
11214 + * valid only if 'DPNI_QUEUE_OPT_ORDER_PRESERVATION' is contained
11215 + * in 'options'
11216 + * @tail_drop_threshold: set the queue's tail drop threshold in bytes;
11217 + * a value of '0' disables the threshold; maximum value is 0xE000000;
11218 + * valid only if 'DPNI_QUEUE_OPT_TAILDROP_THRESHOLD' is contained
11219 + * in 'options'
11220 + */
11221 +struct dpni_queue_cfg {
11222 + uint32_t options;
11223 + uint64_t user_ctx;
11224 + struct dpni_dest_cfg dest_cfg;
11225 + struct dpni_flc_cfg flc_cfg;
11226 + int order_preservation_en;
11227 + uint32_t tail_drop_threshold;
11228 +};
11229 +
11230 +/**
11231 + * struct dpni_queue_attr - Structure representing queue attributes
11232 + * @user_ctx: User context value provided in the frame descriptor of each
11233 + * dequeued frame
11234 + * @dest_cfg: Queue destination configuration
11235 + * @flc_cfg: Flow context configuration
11236 + * @order_preservation_en: enable/disable order preservation
11237 + * @tail_drop_threshold: queue's tail drop threshold in bytes;
11238 + * @fqid: Virtual fqid value to be used for dequeue operations
11239 + */
11240 +struct dpni_queue_attr {
11241 + uint64_t user_ctx;
11242 + struct dpni_dest_cfg dest_cfg;
11243 + struct dpni_flc_cfg flc_cfg;
11244 + int order_preservation_en;
11245 + uint32_t tail_drop_threshold;
11246 +
11247 + uint32_t fqid;
11248 +};
11249 +
11250 +/**
11251 + * DPNI Tx flow modification options
11252 + */
11253 +
11254 +/**
11255 + * Select to modify the settings for the dedicated Tx confirmation/error queue
11256 + */
11257 +#define DPNI_TX_FLOW_OPT_TX_CONF_ERROR 0x00000001
11258 +/**
11259 + * Select to modify the L3 checksum generation setting
11260 + */
11261 +#define DPNI_TX_FLOW_OPT_L3_CHKSUM_GEN 0x00000010
11262 +/**
11263 + * Select to modify the L4 checksum generation setting
11264 + */
11265 +#define DPNI_TX_FLOW_OPT_L4_CHKSUM_GEN 0x00000020
11266 +
11267 +/**
11268 + * struct dpni_tx_flow_cfg - Structure representing Tx flow configuration
11269 + * @options: Flags representing the suggested modifications to the Tx flow;
11270 + * Use any combination 'DPNI_TX_FLOW_OPT_<X>' flags
11271 + * @use_common_tx_conf_queue: Set to '1' to use the common (default) Tx
11272 + * confirmation and error queue; Set to '0' to use the private
11273 + * Tx confirmation and error queue; valid only if
11274 + * 'DPNI_OPT_PRIVATE_TX_CONF_ERROR_DISABLED' wasn't set at DPNI creation
11275 + * and 'DPNI_TX_FLOW_OPT_TX_CONF_ERROR' is contained in 'options'
11276 + * @l3_chksum_gen: Set to '1' to enable L3 checksum generation; '0' to disable;
11277 + * valid only if 'DPNI_TX_FLOW_OPT_L3_CHKSUM_GEN' is contained in 'options'
11278 + * @l4_chksum_gen: Set to '1' to enable L4 checksum generation; '0' to disable;
11279 + * valid only if 'DPNI_TX_FLOW_OPT_L4_CHKSUM_GEN' is contained in 'options'
11280 + */
11281 +struct dpni_tx_flow_cfg {
11282 + uint32_t options;
11283 + int use_common_tx_conf_queue;
11284 + int l3_chksum_gen;
11285 + int l4_chksum_gen;
11286 +};
11287 +
11288 +/**
11289 + * dpni_set_tx_flow() - Set Tx flow configuration
11290 + * @mc_io: Pointer to MC portal's I/O object
11291 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11292 + * @token: Token of DPNI object
11293 + * @flow_id: Provides (or returns) the sender's flow ID;
11294 + * for each new sender set (*flow_id) to 'DPNI_NEW_FLOW_ID' to generate
11295 + * a new flow_id; this ID should be used as the QDBIN argument
11296 + * in enqueue operations
11297 + * @cfg: Tx flow configuration
11298 + *
11299 + * Return: '0' on Success; Error code otherwise.
11300 + */
11301 +int dpni_set_tx_flow(struct fsl_mc_io *mc_io,
11302 + uint32_t cmd_flags,
11303 + uint16_t token,
11304 + uint16_t *flow_id,
11305 + const struct dpni_tx_flow_cfg *cfg);
11306 +
11307 +/**
11308 + * struct dpni_tx_flow_attr - Structure representing Tx flow attributes
11309 + * @use_common_tx_conf_queue: '1' if using common (default) Tx confirmation and
11310 + * error queue; '0' if using private Tx confirmation and error queue
11311 + * @l3_chksum_gen: '1' if L3 checksum generation is enabled; '0' if disabled
11312 + * @l4_chksum_gen: '1' if L4 checksum generation is enabled; '0' if disabled
11313 + */
11314 +struct dpni_tx_flow_attr {
11315 + int use_common_tx_conf_queue;
11316 + int l3_chksum_gen;
11317 + int l4_chksum_gen;
11318 +};
11319 +
11320 +/**
11321 + * dpni_get_tx_flow() - Get Tx flow attributes
11322 + * @mc_io: Pointer to MC portal's I/O object
11323 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11324 + * @token: Token of DPNI object
11325 + * @flow_id: The sender's flow ID, as returned by the
11326 + * dpni_set_tx_flow() function
11327 + * @attr: Returned Tx flow attributes
11328 + *
11329 + * Return: '0' on Success; Error code otherwise.
11330 + */
11331 +int dpni_get_tx_flow(struct fsl_mc_io *mc_io,
11332 + uint32_t cmd_flags,
11333 + uint16_t token,
11334 + uint16_t flow_id,
11335 + struct dpni_tx_flow_attr *attr);
11336 +
11337 +/**
11338 + * struct dpni_tx_conf_cfg - Structure representing Tx conf configuration
11339 + * @errors_only: Set to '1' to report back only error frames;
11340 + * Set to '0' to confirm transmission/error for all transmitted frames;
11341 + * @queue_cfg: Queue configuration
11342 + */
11343 +struct dpni_tx_conf_cfg {
11344 + int errors_only;
11345 + struct dpni_queue_cfg queue_cfg;
11346 +};
11347 +
11348 +/**
11349 + * dpni_set_tx_conf() - Set Tx confirmation and error queue configuration
11350 + * @mc_io: Pointer to MC portal's I/O object
11351 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11352 + * @token: Token of DPNI object
11353 + * @flow_id: The sender's flow ID, as returned by the
11354 + * dpni_set_tx_flow() function;
11355 + * use 'DPNI_COMMON_TX_CONF' for common tx-conf
11356 + * @cfg: Queue configuration
11357 + *
11358 + * If either 'DPNI_OPT_TX_CONF_DISABLED' or
11359 + * 'DPNI_OPT_PRIVATE_TX_CONF_ERROR_DISABLED' were selected at DPNI creation,
11360 + * this function can ONLY be used with 'flow_id == DPNI_COMMON_TX_CONF';
11361 + * i.e. it can serve only the common tx-conf-err queue;
11362 + * if 'DPNI_OPT_TX_CONF_DISABLED' was selected, only error frames are reported
11363 + * back - successfully transmitted frames are not confirmed. Otherwise, all
11364 + * transmitted frames are sent for confirmation.
11365 + *
11366 + * Return: '0' on Success; Error code otherwise.
11367 + */
11368 +int dpni_set_tx_conf(struct fsl_mc_io *mc_io,
11369 + uint32_t cmd_flags,
11370 + uint16_t token,
11371 + uint16_t flow_id,
11372 + const struct dpni_tx_conf_cfg *cfg);
11373 +
11374 +/**
11375 + * struct dpni_tx_conf_attr - Structure representing Tx conf attributes
11376 + * @errors_only: '1' if only error frames are reported back; '0' if all
11377 + * transmitted frames are confirmed
11378 + * @queue_attr: Queue attributes
11379 + */
11380 +struct dpni_tx_conf_attr {
11381 + int errors_only;
11382 + struct dpni_queue_attr queue_attr;
11383 +};
11384 +
11385 +/**
11386 + * dpni_get_tx_conf() - Get Tx confirmation and error queue attributes
11387 + * @mc_io: Pointer to MC portal's I/O object
11388 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11389 + * @token: Token of DPNI object
11390 + * @flow_id: The sender's flow ID, as returned by the
11391 + * dpni_set_tx_flow() function;
11392 + * use 'DPNI_COMMON_TX_CONF' for common tx-conf
11393 + * @attr: Returned tx-conf attributes
11394 + *
11395 + * If either 'DPNI_OPT_TX_CONF_DISABLED' or
11396 + * 'DPNI_OPT_PRIVATE_TX_CONF_ERROR_DISABLED' were selected at DPNI creation,
11397 + * this function can ONLY be used with 'flow_id == DPNI_COMMON_TX_CONF';
11398 + * i.e. it can serve only the common tx-conf-err queue;
11399 + *
11400 + * Return: '0' on Success; Error code otherwise.
11401 + */
11402 +int dpni_get_tx_conf(struct fsl_mc_io *mc_io,
11403 + uint32_t cmd_flags,
11404 + uint16_t token,
11405 + uint16_t flow_id,
11406 + struct dpni_tx_conf_attr *attr);
11407 +
11408 +/**
11409 + * dpni_set_tx_conf_congestion_notification() - Set Tx conf congestion
11410 + * notification configuration
11411 + * @mc_io: Pointer to MC portal's I/O object
11412 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11413 + * @token: Token of DPNI object
11414 + * @flow_id: The sender's flow ID, as returned by the
11415 + * dpni_set_tx_flow() function;
11416 + * use 'DPNI_COMMON_TX_CONF' for common tx-conf
11417 + * @cfg: congestion notification configuration
11418 + *
11419 + * If either 'DPNI_OPT_TX_CONF_DISABLED' or
11420 + * 'DPNI_OPT_PRIVATE_TX_CONF_ERROR_DISABLED' were selected at DPNI creation,
11421 + * this function can ONLY be used with 'flow_id == DPNI_COMMON_TX_CONF';
11422 + * i.e. it can serve only the common tx-conf-err queue;
11423 + *
11424 + * Return: '0' on Success; error code otherwise.
11425 + */
11426 +int dpni_set_tx_conf_congestion_notification(struct fsl_mc_io *mc_io,
11427 + uint32_t cmd_flags,
11428 + uint16_t token,
11429 + uint16_t flow_id,
11430 + const struct dpni_congestion_notification_cfg *cfg);
11431 +
11432 +/**
11433 + * dpni_get_tx_conf_congestion_notification() - Get Tx conf congestion
11434 + * notification configuration
11435 + * @mc_io: Pointer to MC portal's I/O object
11436 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11437 + * @token: Token of DPNI object
11438 + * @flow_id: The sender's flow ID, as returned by the
11439 + * dpni_set_tx_flow() function;
11440 + * use 'DPNI_COMMON_TX_CONF' for common tx-conf
11441 + * @cfg: congestion notification
11442 + *
11443 + * If either 'DPNI_OPT_TX_CONF_DISABLED' or
11444 + * 'DPNI_OPT_PRIVATE_TX_CONF_ERROR_DISABLED' were selected at DPNI creation,
11445 + * this function can ONLY be used with 'flow_id == DPNI_COMMON_TX_CONF';
11446 + * i.e. it can serve only the common tx-conf-err queue;
11447 + *
11448 + * Return: '0' on Success; error code otherwise.
11449 + */
11450 +int dpni_get_tx_conf_congestion_notification(struct fsl_mc_io *mc_io,
11451 + uint32_t cmd_flags,
11452 + uint16_t token,
11453 + uint16_t flow_id,
11454 + struct dpni_congestion_notification_cfg *cfg);
11455 +
11456 +/**
11457 + * dpni_set_tx_conf_revoke() - Tx confirmation revocation
11458 + * @mc_io: Pointer to MC portal's I/O object
11459 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11460 + * @token: Token of DPNI object
11461 + * @revoke: Set to '1' to revoke Tx confirmation; '0' to restore previous settings
11462 + *
11463 + * This function is useful only when 'DPNI_OPT_TX_CONF_DISABLED' is not
11464 + * selected at DPNI creation.
11465 + * Calling this function with 'revoke' set to '1' disables all transmit
11466 + * confirmation (including the private confirmation queues), regardless of
11467 + * previous settings; Note that in this case, Tx error frames are still
11468 + * enqueued to the general transmit errors queue.
11469 + * Calling this function with 'revoke' set to '0' restores the previous
11470 + * settings for both general and private transmit confirmation.
11471 + *
11472 + * Return: '0' on Success; Error code otherwise.
11473 + */
11474 +int dpni_set_tx_conf_revoke(struct fsl_mc_io *mc_io,
11475 + uint32_t cmd_flags,
11476 + uint16_t token,
11477 + int revoke);
11478 +
11479 +/**
11480 + * dpni_set_rx_flow() - Set Rx flow configuration
11481 + * @mc_io: Pointer to MC portal's I/O object
11482 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11483 + * @token: Token of DPNI object
11484 + * @tc_id: Traffic class selection (0-7);
11485 + * use 'DPNI_ALL_TCS' to set all TCs and all flows
11486 + * @flow_id: Rx flow id within the traffic class; use
11487 + * 'DPNI_ALL_TC_FLOWS' to set all flows within
11488 + * this tc_id; ignored if tc_id is set to
11489 + * 'DPNI_ALL_TCS';
11490 + * @cfg: Rx flow configuration
11491 + *
11492 + * Return: '0' on Success; Error code otherwise.
11493 + */
11494 +int dpni_set_rx_flow(struct fsl_mc_io *mc_io,
11495 + uint32_t cmd_flags,
11496 + uint16_t token,
11497 + uint8_t tc_id,
11498 + uint16_t flow_id,
11499 + const struct dpni_queue_cfg *cfg);
11500 +
11501 +/**
11502 + * dpni_get_rx_flow() - Get Rx flow attributes
11503 + * @mc_io: Pointer to MC portal's I/O object
11504 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11505 + * @token: Token of DPNI object
11506 + * @tc_id: Traffic class selection (0-7)
11507 + * @flow_id: Rx flow id within the traffic class
11508 + * @attr: Returned Rx flow attributes
11509 + *
11510 + * Return: '0' on Success; Error code otherwise.
11511 + */
11512 +int dpni_get_rx_flow(struct fsl_mc_io *mc_io,
11513 + uint32_t cmd_flags,
11514 + uint16_t token,
11515 + uint8_t tc_id,
11516 + uint16_t flow_id,
11517 + struct dpni_queue_attr *attr);
11518 +
11519 +/**
11520 + * dpni_set_rx_err_queue() - Set Rx error queue configuration
11521 + * @mc_io: Pointer to MC portal's I/O object
11522 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11523 + * @token: Token of DPNI object
11524 + * @cfg: Queue configuration
11525 + *
11526 + * Return: '0' on Success; Error code otherwise.
11527 + */
11528 +int dpni_set_rx_err_queue(struct fsl_mc_io *mc_io,
11529 + uint32_t cmd_flags,
11530 + uint16_t token,
11531 + const struct dpni_queue_cfg *cfg);
11532 +
11533 +/**
11534 + * dpni_get_rx_err_queue() - Get Rx error queue attributes
11535 + * @mc_io: Pointer to MC portal's I/O object
11536 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11537 + * @token: Token of DPNI object
11538 + * @attr: Returned Queue attributes
11539 + *
11540 + * Return: '0' on Success; Error code otherwise.
11541 + */
11542 +int dpni_get_rx_err_queue(struct fsl_mc_io *mc_io,
11543 + uint32_t cmd_flags,
11544 + uint16_t token,
11545 + struct dpni_queue_attr *attr);
11546 +
11547 +/**
11548 + * struct dpni_qos_tbl_cfg - Structure representing QOS table configuration
11549 + * @key_cfg_iova: I/O virtual address of 256 bytes DMA-able memory filled with
11550 + * key extractions to be used as the QoS criteria by calling
11551 + * dpni_prepare_key_cfg()
11552 + * @discard_on_miss: Set to '1' to discard frames in case of no match (miss);
11553 + * '0' to use the 'default_tc' in such cases
11554 + * @default_tc: Used in case of no match and 'discard_on_miss' = 0
11555 + */
11556 +struct dpni_qos_tbl_cfg {
11557 + uint64_t key_cfg_iova;
11558 + int discard_on_miss;
11559 + uint8_t default_tc;
11560 +};
11561 +
11562 +/**
11563 + * dpni_set_qos_table() - Set QoS mapping table
11564 + * @mc_io: Pointer to MC portal's I/O object
11565 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11566 + * @token: Token of DPNI object
11567 + * @cfg: QoS table configuration
11568 + *
11569 + * This function and all QoS-related functions require that
11570 + * 'max_tcs > 1' was set at DPNI creation.
11571 + *
11572 + * warning: Before calling this function, call dpni_prepare_key_cfg() to
11573 + * prepare the key_cfg_iova parameter
11574 + *
11575 + * Return: '0' on Success; Error code otherwise.
11576 + */
11577 +int dpni_set_qos_table(struct fsl_mc_io *mc_io,
11578 + uint32_t cmd_flags,
11579 + uint16_t token,
11580 + const struct dpni_qos_tbl_cfg *cfg);
11581 +
11582 +/**
11583 + * struct dpni_rule_cfg - Rule configuration for table lookup
11584 + * @key_iova: I/O virtual address of the key (must be in DMA-able memory)
11585 + * @mask_iova: I/O virtual address of the mask (must be in DMA-able memory)
11586 + * @key_size: key and mask size (in bytes)
11587 + */
11588 +struct dpni_rule_cfg {
11589 + uint64_t key_iova;
11590 + uint64_t mask_iova;
11591 + uint8_t key_size;
11592 +};
11593 +
11594 +/**
11595 + * dpni_add_qos_entry() - Add QoS mapping entry (to select a traffic class)
11596 + * @mc_io: Pointer to MC portal's I/O object
11597 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11598 + * @token: Token of DPNI object
11599 + * @cfg: QoS rule to add
11600 + * @tc_id: Traffic class selection (0-7)
11601 + *
11602 + * Return: '0' on Success; Error code otherwise.
11603 + */
11604 +int dpni_add_qos_entry(struct fsl_mc_io *mc_io,
11605 + uint32_t cmd_flags,
11606 + uint16_t token,
11607 + const struct dpni_rule_cfg *cfg,
11608 + uint8_t tc_id);
11609 +
11610 +/**
11611 + * dpni_remove_qos_entry() - Remove QoS mapping entry
11612 + * @mc_io: Pointer to MC portal's I/O object
11613 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11614 + * @token: Token of DPNI object
11615 + * @cfg: QoS rule to remove
11616 + *
11617 + * Return: '0' on Success; Error code otherwise.
11618 + */
11619 +int dpni_remove_qos_entry(struct fsl_mc_io *mc_io,
11620 + uint32_t cmd_flags,
11621 + uint16_t token,
11622 + const struct dpni_rule_cfg *cfg);
11623 +
11624 +/**
11625 + * dpni_clear_qos_table() - Clear all QoS mapping entries
11626 + * @mc_io: Pointer to MC portal's I/O object
11627 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11628 + * @token: Token of DPNI object
11629 + *
11630 + * Following this function call, all frames are directed to
11631 + * the default traffic class (0)
11632 + *
11633 + * Return: '0' on Success; Error code otherwise.
11634 + */
11635 +int dpni_clear_qos_table(struct fsl_mc_io *mc_io,
11636 + uint32_t cmd_flags,
11637 + uint16_t token);
11638 +
11639 +/**
11640 + * dpni_add_fs_entry() - Add Flow Steering entry for a specific traffic class
11641 + * (to select a flow ID)
11642 + * @mc_io: Pointer to MC portal's I/O object
11643 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11644 + * @token: Token of DPNI object
11645 + * @tc_id: Traffic class selection (0-7)
11646 + * @cfg: Flow steering rule to add
11647 + * @flow_id: Flow id selection (must be smaller than the
11648 + * distribution size of the traffic class)
11649 + *
11650 + * Return: '0' on Success; Error code otherwise.
11651 + */
11652 +int dpni_add_fs_entry(struct fsl_mc_io *mc_io,
11653 + uint32_t cmd_flags,
11654 + uint16_t token,
11655 + uint8_t tc_id,
11656 + const struct dpni_rule_cfg *cfg,
11657 + uint16_t flow_id);
11658 +
11659 +/**
11660 + * dpni_remove_fs_entry() - Remove Flow Steering entry from a specific
11661 + * traffic class
11662 + * @mc_io: Pointer to MC portal's I/O object
11663 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11664 + * @token: Token of DPNI object
11665 + * @tc_id: Traffic class selection (0-7)
11666 + * @cfg: Flow steering rule to remove
11667 + *
11668 + * Return: '0' on Success; Error code otherwise.
11669 + */
11670 +int dpni_remove_fs_entry(struct fsl_mc_io *mc_io,
11671 + uint32_t cmd_flags,
11672 + uint16_t token,
11673 + uint8_t tc_id,
11674 + const struct dpni_rule_cfg *cfg);
11675 +
11676 +/**
11677 + * dpni_clear_fs_entries() - Clear all Flow Steering entries of a specific
11678 + * traffic class
11679 + * @mc_io: Pointer to MC portal's I/O object
11680 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11681 + * @token: Token of DPNI object
11682 + * @tc_id: Traffic class selection (0-7)
11683 + *
11684 + * Return: '0' on Success; Error code otherwise.
11685 + */
11686 +int dpni_clear_fs_entries(struct fsl_mc_io *mc_io,
11687 + uint32_t cmd_flags,
11688 + uint16_t token,
11689 + uint8_t tc_id);
11690 +
11691 +/**
11692 + * dpni_set_vlan_insertion() - Enable/disable VLAN insertion for egress frames
11693 + * @mc_io: Pointer to MC portal's I/O object
11694 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11695 + * @token: Token of DPNI object
11696 + * @en: Set to '1' to enable; '0' to disable
11697 + *
11698 + * Requires that the 'DPNI_OPT_VLAN_MANIPULATION' option is set
11699 + * at DPNI creation.
11700 + *
11701 + * Return: '0' on Success; Error code otherwise.
11702 + */
11703 +int dpni_set_vlan_insertion(struct fsl_mc_io *mc_io,
11704 + uint32_t cmd_flags,
11705 + uint16_t token,
11706 + int en);
11707 +
11708 +/**
11709 + * dpni_set_vlan_removal() - Enable/disable VLAN removal for ingress frames
11710 + * @mc_io: Pointer to MC portal's I/O object
11711 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11712 + * @token: Token of DPNI object
11713 + * @en: Set to '1' to enable; '0' to disable
11714 + *
11715 + * Requires that the 'DPNI_OPT_VLAN_MANIPULATION' option is set
11716 + * at DPNI creation.
11717 + *
11718 + * Return: '0' on Success; Error code otherwise.
11719 + */
11720 +int dpni_set_vlan_removal(struct fsl_mc_io *mc_io,
11721 + uint32_t cmd_flags,
11722 + uint16_t token,
11723 + int en);
11724 +
11725 +/**
11726 + * dpni_set_ipr() - Enable/disable IP reassembly of ingress frames
11727 + * @mc_io: Pointer to MC portal's I/O object
11728 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11729 + * @token: Token of DPNI object
11730 + * @en: Set to '1' to enable; '0' to disable
11731 + *
11732 + * Requires that the 'DPNI_OPT_IPR' option is set at DPNI creation.
11733 + *
11734 + * Return: '0' on Success; Error code otherwise.
11735 + */
11736 +int dpni_set_ipr(struct fsl_mc_io *mc_io,
11737 + uint32_t cmd_flags,
11738 + uint16_t token,
11739 + int en);
11740 +
11741 +/**
11742 + * dpni_set_ipf() - Enable/disable IP fragmentation of egress frames
11743 + * @mc_io: Pointer to MC portal's I/O object
11744 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
11745 + * @token: Token of DPNI object
11746 + * @en: Set to '1' to enable; '0' to disable
11747 + *
11748 + * Requires that the 'DPNI_OPT_IPF' option is set at DPNI
11749 + * creation. Fragmentation is performed according to the MTU value
11750 + * set by the dpni_set_mtu() function.
11751 + *
11752 + * Return: '0' on Success; Error code otherwise.
11753 + */
11754 +int dpni_set_ipf(struct fsl_mc_io *mc_io,
11755 + uint32_t cmd_flags,
11756 + uint16_t token,
11757 + int en);
11758 +
11759 +#endif /* __FSL_DPNI_H */
11760 --- a/drivers/staging/fsl-mc/include/mc-cmd.h
11761 +++ b/drivers/staging/fsl-mc/include/mc-cmd.h
11762 @@ -103,8 +103,11 @@ enum mc_cmd_status {
11763 #define MC_CMD_HDR_READ_FLAGS(_hdr) \
11764 ((u32)mc_dec((_hdr), MC_CMD_HDR_FLAGS_O, MC_CMD_HDR_FLAGS_S))
11765
11766 +#define MC_PREP_OP(_ext, _param, _offset, _width, _type, _arg) \
11767 + ((_ext)[_param] |= cpu_to_le64(mc_enc((_offset), (_width), _arg)))
11768 +
11769 #define MC_EXT_OP(_ext, _param, _offset, _width, _type, _arg) \
11770 - ((_ext)[_param] |= mc_enc((_offset), (_width), _arg))
11771 + (_arg = (_type)mc_dec(cpu_to_le64(_ext[_param]), (_offset), (_width)))
11772
11773 #define MC_CMD_OP(_cmd, _param, _offset, _width, _type, _arg) \
11774 ((_cmd).params[_param] |= mc_enc((_offset), (_width), _arg))
11775 --- /dev/null
11776 +++ b/drivers/staging/fsl-mc/include/net.h
11777 @@ -0,0 +1,481 @@
11778 +/* Copyright 2013-2015 Freescale Semiconductor Inc.
11779 + *
11780 + * Redistribution and use in source and binary forms, with or without
11781 + * modification, are permitted provided that the following conditions are met:
11782 + * * Redistributions of source code must retain the above copyright
11783 + * notice, this list of conditions and the following disclaimer.
11784 + * * Redistributions in binary form must reproduce the above copyright
11785 + * notice, this list of conditions and the following disclaimer in the
11786 + * documentation and/or other materials provided with the distribution.
11787 + * * Neither the name of the above-listed copyright holders nor the
11788 + * names of any contributors may be used to endorse or promote products
11789 + * derived from this software without specific prior written permission.
11790 + *
11791 + *
11792 + * ALTERNATIVELY, this software may be distributed under the terms of the
11793 + * GNU General Public License ("GPL") as published by the Free Software
11794 + * Foundation, either version 2 of that License or (at your option) any
11795 + * later version.
11796 + *
11797 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
11798 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
11799 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
11800 + * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
11801 + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
11802 + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
11803 + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
11804 + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
11805 + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
11806 + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
11807 + * POSSIBILITY OF SUCH DAMAGE.
11808 + */
11809 +#ifndef __FSL_NET_H
11810 +#define __FSL_NET_H
11811 +
11812 +#define LAST_HDR_INDEX 0xFFFFFFFF
11813 +
11814 +/*****************************************************************************/
11815 +/* Protocol fields */
11816 +/*****************************************************************************/
11817 +
11818 +/************************* Ethernet fields *********************************/
11819 +#define NH_FLD_ETH_DA (1)
11820 +#define NH_FLD_ETH_SA (NH_FLD_ETH_DA << 1)
11821 +#define NH_FLD_ETH_LENGTH (NH_FLD_ETH_DA << 2)
11822 +#define NH_FLD_ETH_TYPE (NH_FLD_ETH_DA << 3)
11823 +#define NH_FLD_ETH_FINAL_CKSUM (NH_FLD_ETH_DA << 4)
11824 +#define NH_FLD_ETH_PADDING (NH_FLD_ETH_DA << 5)
11825 +#define NH_FLD_ETH_ALL_FIELDS ((NH_FLD_ETH_DA << 6) - 1)
11826 +
11827 +#define NH_FLD_ETH_ADDR_SIZE 6
11828 +
11829 +/*************************** VLAN fields ***********************************/
11830 +#define NH_FLD_VLAN_VPRI (1)
11831 +#define NH_FLD_VLAN_CFI (NH_FLD_VLAN_VPRI << 1)
11832 +#define NH_FLD_VLAN_VID (NH_FLD_VLAN_VPRI << 2)
11833 +#define NH_FLD_VLAN_LENGTH (NH_FLD_VLAN_VPRI << 3)
11834 +#define NH_FLD_VLAN_TYPE (NH_FLD_VLAN_VPRI << 4)
11835 +#define NH_FLD_VLAN_ALL_FIELDS ((NH_FLD_VLAN_VPRI << 5) - 1)
11836 +
11837 +#define NH_FLD_VLAN_TCI (NH_FLD_VLAN_VPRI | \
11838 + NH_FLD_VLAN_CFI | \
11839 + NH_FLD_VLAN_VID)
11840 +
11841 +/************************ IP (generic) fields ******************************/
11842 +#define NH_FLD_IP_VER (1)
11843 +#define NH_FLD_IP_DSCP (NH_FLD_IP_VER << 2)
11844 +#define NH_FLD_IP_ECN (NH_FLD_IP_VER << 3)
11845 +#define NH_FLD_IP_PROTO (NH_FLD_IP_VER << 4)
11846 +#define NH_FLD_IP_SRC (NH_FLD_IP_VER << 5)
11847 +#define NH_FLD_IP_DST (NH_FLD_IP_VER << 6)
11848 +#define NH_FLD_IP_TOS_TC (NH_FLD_IP_VER << 7)
11849 +#define NH_FLD_IP_ID (NH_FLD_IP_VER << 8)
11850 +#define NH_FLD_IP_ALL_FIELDS ((NH_FLD_IP_VER << 9) - 1)
11851 +
11852 +#define NH_FLD_IP_PROTO_SIZE 1
11853 +
11854 +/***************************** IPV4 fields *********************************/
11855 +#define NH_FLD_IPV4_VER (1)
11856 +#define NH_FLD_IPV4_HDR_LEN (NH_FLD_IPV4_VER << 1)
11857 +#define NH_FLD_IPV4_TOS (NH_FLD_IPV4_VER << 2)
11858 +#define NH_FLD_IPV4_TOTAL_LEN (NH_FLD_IPV4_VER << 3)
11859 +#define NH_FLD_IPV4_ID (NH_FLD_IPV4_VER << 4)
11860 +#define NH_FLD_IPV4_FLAG_D (NH_FLD_IPV4_VER << 5)
11861 +#define NH_FLD_IPV4_FLAG_M (NH_FLD_IPV4_VER << 6)
11862 +#define NH_FLD_IPV4_OFFSET (NH_FLD_IPV4_VER << 7)
11863 +#define NH_FLD_IPV4_TTL (NH_FLD_IPV4_VER << 8)
11864 +#define NH_FLD_IPV4_PROTO (NH_FLD_IPV4_VER << 9)
11865 +#define NH_FLD_IPV4_CKSUM (NH_FLD_IPV4_VER << 10)
11866 +#define NH_FLD_IPV4_SRC_IP (NH_FLD_IPV4_VER << 11)
11867 +#define NH_FLD_IPV4_DST_IP (NH_FLD_IPV4_VER << 12)
11868 +#define NH_FLD_IPV4_OPTS (NH_FLD_IPV4_VER << 13)
11869 +#define NH_FLD_IPV4_OPTS_COUNT (NH_FLD_IPV4_VER << 14)
11870 +#define NH_FLD_IPV4_ALL_FIELDS ((NH_FLD_IPV4_VER << 15) - 1)
11871 +
11872 +#define NH_FLD_IPV4_ADDR_SIZE 4
11873 +#define NH_FLD_IPV4_PROTO_SIZE 1
11874 +
11875 +/***************************** IPV6 fields *********************************/
11876 +#define NH_FLD_IPV6_VER (1)
11877 +#define NH_FLD_IPV6_TC (NH_FLD_IPV6_VER << 1)
11878 +#define NH_FLD_IPV6_SRC_IP (NH_FLD_IPV6_VER << 2)
11879 +#define NH_FLD_IPV6_DST_IP (NH_FLD_IPV6_VER << 3)
11880 +#define NH_FLD_IPV6_NEXT_HDR (NH_FLD_IPV6_VER << 4)
11881 +#define NH_FLD_IPV6_FL (NH_FLD_IPV6_VER << 5)
11882 +#define NH_FLD_IPV6_HOP_LIMIT (NH_FLD_IPV6_VER << 6)
11883 +#define NH_FLD_IPV6_ID (NH_FLD_IPV6_VER << 7)
11884 +#define NH_FLD_IPV6_ALL_FIELDS ((NH_FLD_IPV6_VER << 8) - 1)
11885 +
11886 +#define NH_FLD_IPV6_ADDR_SIZE 16
11887 +#define NH_FLD_IPV6_NEXT_HDR_SIZE 1
11888 +
11889 +/***************************** ICMP fields *********************************/
11890 +#define NH_FLD_ICMP_TYPE (1)
11891 +#define NH_FLD_ICMP_CODE (NH_FLD_ICMP_TYPE << 1)
11892 +#define NH_FLD_ICMP_CKSUM (NH_FLD_ICMP_TYPE << 2)
11893 +#define NH_FLD_ICMP_ID (NH_FLD_ICMP_TYPE << 3)
11894 +#define NH_FLD_ICMP_SQ_NUM (NH_FLD_ICMP_TYPE << 4)
11895 +#define NH_FLD_ICMP_ALL_FIELDS ((NH_FLD_ICMP_TYPE << 5) - 1)
11896 +
11897 +#define NH_FLD_ICMP_CODE_SIZE 1
11898 +#define NH_FLD_ICMP_TYPE_SIZE 1
11899 +
11900 +/***************************** IGMP fields *********************************/
11901 +#define NH_FLD_IGMP_VERSION (1)
11902 +#define NH_FLD_IGMP_TYPE (NH_FLD_IGMP_VERSION << 1)
11903 +#define NH_FLD_IGMP_CKSUM (NH_FLD_IGMP_VERSION << 2)
11904 +#define NH_FLD_IGMP_DATA (NH_FLD_IGMP_VERSION << 3)
11905 +#define NH_FLD_IGMP_ALL_FIELDS ((NH_FLD_IGMP_VERSION << 4) - 1)
11906 +
11907 +/***************************** TCP fields **********************************/
11908 +#define NH_FLD_TCP_PORT_SRC (1)
11909 +#define NH_FLD_TCP_PORT_DST (NH_FLD_TCP_PORT_SRC << 1)
11910 +#define NH_FLD_TCP_SEQ (NH_FLD_TCP_PORT_SRC << 2)
11911 +#define NH_FLD_TCP_ACK (NH_FLD_TCP_PORT_SRC << 3)
11912 +#define NH_FLD_TCP_OFFSET (NH_FLD_TCP_PORT_SRC << 4)
11913 +#define NH_FLD_TCP_FLAGS (NH_FLD_TCP_PORT_SRC << 5)
11914 +#define NH_FLD_TCP_WINDOW (NH_FLD_TCP_PORT_SRC << 6)
11915 +#define NH_FLD_TCP_CKSUM (NH_FLD_TCP_PORT_SRC << 7)
11916 +#define NH_FLD_TCP_URGPTR (NH_FLD_TCP_PORT_SRC << 8)
11917 +#define NH_FLD_TCP_OPTS (NH_FLD_TCP_PORT_SRC << 9)
11918 +#define NH_FLD_TCP_OPTS_COUNT (NH_FLD_TCP_PORT_SRC << 10)
11919 +#define NH_FLD_TCP_ALL_FIELDS ((NH_FLD_TCP_PORT_SRC << 11) - 1)
11920 +
11921 +#define NH_FLD_TCP_PORT_SIZE 2
11922 +
11923 +/***************************** UDP fields **********************************/
11924 +#define NH_FLD_UDP_PORT_SRC (1)
11925 +#define NH_FLD_UDP_PORT_DST (NH_FLD_UDP_PORT_SRC << 1)
11926 +#define NH_FLD_UDP_LEN (NH_FLD_UDP_PORT_SRC << 2)
11927 +#define NH_FLD_UDP_CKSUM (NH_FLD_UDP_PORT_SRC << 3)
11928 +#define NH_FLD_UDP_ALL_FIELDS ((NH_FLD_UDP_PORT_SRC << 4) - 1)
11929 +
11930 +#define NH_FLD_UDP_PORT_SIZE 2
11931 +
11932 +/*************************** UDP-lite fields *******************************/
11933 +#define NH_FLD_UDP_LITE_PORT_SRC (1)
11934 +#define NH_FLD_UDP_LITE_PORT_DST (NH_FLD_UDP_LITE_PORT_SRC << 1)
11935 +#define NH_FLD_UDP_LITE_ALL_FIELDS \
11936 + ((NH_FLD_UDP_LITE_PORT_SRC << 2) - 1)
11937 +
11938 +#define NH_FLD_UDP_LITE_PORT_SIZE 2
11939 +
11940 +/*************************** UDP-encap-ESP fields **************************/
11941 +#define NH_FLD_UDP_ENC_ESP_PORT_SRC (1)
11942 +#define NH_FLD_UDP_ENC_ESP_PORT_DST (NH_FLD_UDP_ENC_ESP_PORT_SRC << 1)
11943 +#define NH_FLD_UDP_ENC_ESP_LEN (NH_FLD_UDP_ENC_ESP_PORT_SRC << 2)
11944 +#define NH_FLD_UDP_ENC_ESP_CKSUM (NH_FLD_UDP_ENC_ESP_PORT_SRC << 3)
11945 +#define NH_FLD_UDP_ENC_ESP_SPI (NH_FLD_UDP_ENC_ESP_PORT_SRC << 4)
11946 +#define NH_FLD_UDP_ENC_ESP_SEQUENCE_NUM (NH_FLD_UDP_ENC_ESP_PORT_SRC << 5)
11947 +#define NH_FLD_UDP_ENC_ESP_ALL_FIELDS \
11948 + ((NH_FLD_UDP_ENC_ESP_PORT_SRC << 6) - 1)
11949 +
11950 +#define NH_FLD_UDP_ENC_ESP_PORT_SIZE 2
11951 +#define NH_FLD_UDP_ENC_ESP_SPI_SIZE 4
11952 +
11953 +/***************************** SCTP fields *********************************/
11954 +#define NH_FLD_SCTP_PORT_SRC (1)
11955 +#define NH_FLD_SCTP_PORT_DST (NH_FLD_SCTP_PORT_SRC << 1)
11956 +#define NH_FLD_SCTP_VER_TAG (NH_FLD_SCTP_PORT_SRC << 2)
11957 +#define NH_FLD_SCTP_CKSUM (NH_FLD_SCTP_PORT_SRC << 3)
11958 +#define NH_FLD_SCTP_ALL_FIELDS ((NH_FLD_SCTP_PORT_SRC << 4) - 1)
11959 +
11960 +#define NH_FLD_SCTP_PORT_SIZE 2
11961 +
11962 +/***************************** DCCP fields *********************************/
11963 +#define NH_FLD_DCCP_PORT_SRC (1)
11964 +#define NH_FLD_DCCP_PORT_DST (NH_FLD_DCCP_PORT_SRC << 1)
11965 +#define NH_FLD_DCCP_ALL_FIELDS ((NH_FLD_DCCP_PORT_SRC << 2) - 1)
11966 +
11967 +#define NH_FLD_DCCP_PORT_SIZE 2
11968 +
11969 +/***************************** IPHC fields *********************************/
11970 +#define NH_FLD_IPHC_CID (1)
11971 +#define NH_FLD_IPHC_CID_TYPE (NH_FLD_IPHC_CID << 1)
11972 +#define NH_FLD_IPHC_HCINDEX (NH_FLD_IPHC_CID << 2)
11973 +#define NH_FLD_IPHC_GEN (NH_FLD_IPHC_CID << 3)
11974 +#define NH_FLD_IPHC_D_BIT (NH_FLD_IPHC_CID << 4)
11975 +#define NH_FLD_IPHC_ALL_FIELDS ((NH_FLD_IPHC_CID << 5) - 1)
11976 +
11977 +/************************ SCTP chunk data fields ***************************/
11978 +#define NH_FLD_SCTP_CHUNK_DATA_TYPE (1)
11979 +#define NH_FLD_SCTP_CHUNK_DATA_FLAGS (NH_FLD_SCTP_CHUNK_DATA_TYPE << 1)
11980 +#define NH_FLD_SCTP_CHUNK_DATA_LENGTH (NH_FLD_SCTP_CHUNK_DATA_TYPE << 2)
11981 +#define NH_FLD_SCTP_CHUNK_DATA_TSN (NH_FLD_SCTP_CHUNK_DATA_TYPE << 3)
11982 +#define NH_FLD_SCTP_CHUNK_DATA_STREAM_ID (NH_FLD_SCTP_CHUNK_DATA_TYPE << 4)
11983 +#define NH_FLD_SCTP_CHUNK_DATA_STREAM_SQN (NH_FLD_SCTP_CHUNK_DATA_TYPE << 5)
11984 +#define NH_FLD_SCTP_CHUNK_DATA_PAYLOAD_PID (NH_FLD_SCTP_CHUNK_DATA_TYPE << 6)
11985 +#define NH_FLD_SCTP_CHUNK_DATA_UNORDERED (NH_FLD_SCTP_CHUNK_DATA_TYPE << 7)
11986 +#define NH_FLD_SCTP_CHUNK_DATA_BEGGINING (NH_FLD_SCTP_CHUNK_DATA_TYPE << 8)
11987 +#define NH_FLD_SCTP_CHUNK_DATA_END (NH_FLD_SCTP_CHUNK_DATA_TYPE << 9)
11988 +#define NH_FLD_SCTP_CHUNK_DATA_ALL_FIELDS \
11989 + ((NH_FLD_SCTP_CHUNK_DATA_TYPE << 10) - 1)
11990 +
11991 +/*************************** L2TPV2 fields *********************************/
11992 +#define NH_FLD_L2TPV2_TYPE_BIT (1)
11993 +#define NH_FLD_L2TPV2_LENGTH_BIT (NH_FLD_L2TPV2_TYPE_BIT << 1)
11994 +#define NH_FLD_L2TPV2_SEQUENCE_BIT (NH_FLD_L2TPV2_TYPE_BIT << 2)
11995 +#define NH_FLD_L2TPV2_OFFSET_BIT (NH_FLD_L2TPV2_TYPE_BIT << 3)
11996 +#define NH_FLD_L2TPV2_PRIORITY_BIT (NH_FLD_L2TPV2_TYPE_BIT << 4)
11997 +#define NH_FLD_L2TPV2_VERSION (NH_FLD_L2TPV2_TYPE_BIT << 5)
11998 +#define NH_FLD_L2TPV2_LEN (NH_FLD_L2TPV2_TYPE_BIT << 6)
11999 +#define NH_FLD_L2TPV2_TUNNEL_ID (NH_FLD_L2TPV2_TYPE_BIT << 7)
12000 +#define NH_FLD_L2TPV2_SESSION_ID (NH_FLD_L2TPV2_TYPE_BIT << 8)
12001 +#define NH_FLD_L2TPV2_NS (NH_FLD_L2TPV2_TYPE_BIT << 9)
12002 +#define NH_FLD_L2TPV2_NR (NH_FLD_L2TPV2_TYPE_BIT << 10)
12003 +#define NH_FLD_L2TPV2_OFFSET_SIZE (NH_FLD_L2TPV2_TYPE_BIT << 11)
12004 +#define NH_FLD_L2TPV2_FIRST_BYTE (NH_FLD_L2TPV2_TYPE_BIT << 12)
12005 +#define NH_FLD_L2TPV2_ALL_FIELDS \
12006 + ((NH_FLD_L2TPV2_TYPE_BIT << 13) - 1)
12007 +
12008 +/*************************** L2TPV3 fields *********************************/
12009 +#define NH_FLD_L2TPV3_CTRL_TYPE_BIT (1)
12010 +#define NH_FLD_L2TPV3_CTRL_LENGTH_BIT (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 1)
12011 +#define NH_FLD_L2TPV3_CTRL_SEQUENCE_BIT (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 2)
12012 +#define NH_FLD_L2TPV3_CTRL_VERSION (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 3)
12013 +#define NH_FLD_L2TPV3_CTRL_LENGTH (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 4)
12014 +#define NH_FLD_L2TPV3_CTRL_CONTROL (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 5)
12015 +#define NH_FLD_L2TPV3_CTRL_SENT (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 6)
12016 +#define NH_FLD_L2TPV3_CTRL_RECV (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 7)
12017 +#define NH_FLD_L2TPV3_CTRL_FIRST_BYTE (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 8)
12018 +#define NH_FLD_L2TPV3_CTRL_ALL_FIELDS \
12019 + ((NH_FLD_L2TPV3_CTRL_TYPE_BIT << 9) - 1)
12020 +
12021 +#define NH_FLD_L2TPV3_SESS_TYPE_BIT (1)
12022 +#define NH_FLD_L2TPV3_SESS_VERSION (NH_FLD_L2TPV3_SESS_TYPE_BIT << 1)
12023 +#define NH_FLD_L2TPV3_SESS_ID (NH_FLD_L2TPV3_SESS_TYPE_BIT << 2)
12024 +#define NH_FLD_L2TPV3_SESS_COOKIE (NH_FLD_L2TPV3_SESS_TYPE_BIT << 3)
12025 +#define NH_FLD_L2TPV3_SESS_ALL_FIELDS \
12026 + ((NH_FLD_L2TPV3_SESS_TYPE_BIT << 4) - 1)
12027 +
12028 +/**************************** PPP fields ***********************************/
12029 +#define NH_FLD_PPP_PID (1)
12030 +#define NH_FLD_PPP_COMPRESSED (NH_FLD_PPP_PID << 1)
12031 +#define NH_FLD_PPP_ALL_FIELDS ((NH_FLD_PPP_PID << 2) - 1)
12032 +
12033 +/************************** PPPoE fields ***********************************/
12034 +#define NH_FLD_PPPOE_VER (1)
12035 +#define NH_FLD_PPPOE_TYPE (NH_FLD_PPPOE_VER << 1)
12036 +#define NH_FLD_PPPOE_CODE (NH_FLD_PPPOE_VER << 2)
12037 +#define NH_FLD_PPPOE_SID (NH_FLD_PPPOE_VER << 3)
12038 +#define NH_FLD_PPPOE_LEN (NH_FLD_PPPOE_VER << 4)
12039 +#define NH_FLD_PPPOE_SESSION (NH_FLD_PPPOE_VER << 5)
12040 +#define NH_FLD_PPPOE_PID (NH_FLD_PPPOE_VER << 6)
12041 +#define NH_FLD_PPPOE_ALL_FIELDS ((NH_FLD_PPPOE_VER << 7) - 1)
12042 +
12043 +/************************* PPP-Mux fields **********************************/
12044 +#define NH_FLD_PPPMUX_PID (1)
12045 +#define NH_FLD_PPPMUX_CKSUM (NH_FLD_PPPMUX_PID << 1)
12046 +#define NH_FLD_PPPMUX_COMPRESSED (NH_FLD_PPPMUX_PID << 2)
12047 +#define NH_FLD_PPPMUX_ALL_FIELDS ((NH_FLD_PPPMUX_PID << 3) - 1)
12048 +
12049 +/*********************** PPP-Mux sub-frame fields **************************/
12050 +#define NH_FLD_PPPMUX_SUBFRM_PFF (1)
12051 +#define NH_FLD_PPPMUX_SUBFRM_LXT (NH_FLD_PPPMUX_SUBFRM_PFF << 1)
12052 +#define NH_FLD_PPPMUX_SUBFRM_LEN (NH_FLD_PPPMUX_SUBFRM_PFF << 2)
12053 +#define NH_FLD_PPPMUX_SUBFRM_PID (NH_FLD_PPPMUX_SUBFRM_PFF << 3)
12054 +#define NH_FLD_PPPMUX_SUBFRM_USE_PID (NH_FLD_PPPMUX_SUBFRM_PFF << 4)
12055 +#define NH_FLD_PPPMUX_SUBFRM_ALL_FIELDS \
12056 + ((NH_FLD_PPPMUX_SUBFRM_PFF << 5) - 1)
12057 +
12058 +/*************************** LLC fields ************************************/
12059 +#define NH_FLD_LLC_DSAP (1)
12060 +#define NH_FLD_LLC_SSAP (NH_FLD_LLC_DSAP << 1)
12061 +#define NH_FLD_LLC_CTRL (NH_FLD_LLC_DSAP << 2)
12062 +#define NH_FLD_LLC_ALL_FIELDS ((NH_FLD_LLC_DSAP << 3) - 1)
12063 +
12064 +/*************************** NLPID fields **********************************/
12065 +#define NH_FLD_NLPID_NLPID (1)
12066 +#define NH_FLD_NLPID_ALL_FIELDS ((NH_FLD_NLPID_NLPID << 1) - 1)
12067 +
12068 +/*************************** SNAP fields ***********************************/
12069 +#define NH_FLD_SNAP_OUI (1)
12070 +#define NH_FLD_SNAP_PID (NH_FLD_SNAP_OUI << 1)
12071 +#define NH_FLD_SNAP_ALL_FIELDS ((NH_FLD_SNAP_OUI << 2) - 1)
12072 +
12073 +/*************************** LLC SNAP fields *******************************/
12074 +#define NH_FLD_LLC_SNAP_TYPE (1)
12075 +#define NH_FLD_LLC_SNAP_ALL_FIELDS ((NH_FLD_LLC_SNAP_TYPE << 1) - 1)
12076 +
12077 +#define NH_FLD_ARP_HTYPE (1)
12078 +#define NH_FLD_ARP_PTYPE (NH_FLD_ARP_HTYPE << 1)
12079 +#define NH_FLD_ARP_HLEN (NH_FLD_ARP_HTYPE << 2)
12080 +#define NH_FLD_ARP_PLEN (NH_FLD_ARP_HTYPE << 3)
12081 +#define NH_FLD_ARP_OPER (NH_FLD_ARP_HTYPE << 4)
12082 +#define NH_FLD_ARP_SHA (NH_FLD_ARP_HTYPE << 5)
12083 +#define NH_FLD_ARP_SPA (NH_FLD_ARP_HTYPE << 6)
12084 +#define NH_FLD_ARP_THA (NH_FLD_ARP_HTYPE << 7)
12085 +#define NH_FLD_ARP_TPA (NH_FLD_ARP_HTYPE << 8)
12086 +#define NH_FLD_ARP_ALL_FIELDS ((NH_FLD_ARP_HTYPE << 9) - 1)
12087 +
12088 +/*************************** RFC2684 fields ********************************/
12089 +#define NH_FLD_RFC2684_LLC (1)
12090 +#define NH_FLD_RFC2684_NLPID (NH_FLD_RFC2684_LLC << 1)
12091 +#define NH_FLD_RFC2684_OUI (NH_FLD_RFC2684_LLC << 2)
12092 +#define NH_FLD_RFC2684_PID (NH_FLD_RFC2684_LLC << 3)
12093 +#define NH_FLD_RFC2684_VPN_OUI (NH_FLD_RFC2684_LLC << 4)
12094 +#define NH_FLD_RFC2684_VPN_IDX (NH_FLD_RFC2684_LLC << 5)
12095 +#define NH_FLD_RFC2684_ALL_FIELDS ((NH_FLD_RFC2684_LLC << 6) - 1)
12096 +
12097 +/*************************** User defined fields ***************************/
12098 +#define NH_FLD_USER_DEFINED_SRCPORT (1)
12099 +#define NH_FLD_USER_DEFINED_PCDID (NH_FLD_USER_DEFINED_SRCPORT << 1)
12100 +#define NH_FLD_USER_DEFINED_ALL_FIELDS \
12101 + ((NH_FLD_USER_DEFINED_SRCPORT << 2) - 1)
12102 +
12103 +/*************************** Payload fields ********************************/
12104 +#define NH_FLD_PAYLOAD_BUFFER (1)
12105 +#define NH_FLD_PAYLOAD_SIZE (NH_FLD_PAYLOAD_BUFFER << 1)
12106 +#define NH_FLD_MAX_FRM_SIZE (NH_FLD_PAYLOAD_BUFFER << 2)
12107 +#define NH_FLD_MIN_FRM_SIZE (NH_FLD_PAYLOAD_BUFFER << 3)
12108 +#define NH_FLD_PAYLOAD_TYPE (NH_FLD_PAYLOAD_BUFFER << 4)
12109 +#define NH_FLD_FRAME_SIZE (NH_FLD_PAYLOAD_BUFFER << 5)
12110 +#define NH_FLD_PAYLOAD_ALL_FIELDS ((NH_FLD_PAYLOAD_BUFFER << 6) - 1)
12111 +
12112 +/*************************** GRE fields ************************************/
12113 +#define NH_FLD_GRE_TYPE (1)
12114 +#define NH_FLD_GRE_ALL_FIELDS ((NH_FLD_GRE_TYPE << 1) - 1)
12115 +
12116 +/*************************** MINENCAP fields *******************************/
12117 +#define NH_FLD_MINENCAP_SRC_IP (1)
12118 +#define NH_FLD_MINENCAP_DST_IP (NH_FLD_MINENCAP_SRC_IP << 1)
12119 +#define NH_FLD_MINENCAP_TYPE (NH_FLD_MINENCAP_SRC_IP << 2)
12120 +#define NH_FLD_MINENCAP_ALL_FIELDS \
12121 + ((NH_FLD_MINENCAP_SRC_IP << 3) - 1)
12122 +
12123 +/*************************** IPSEC AH fields *******************************/
12124 +#define NH_FLD_IPSEC_AH_SPI (1)
12125 +#define NH_FLD_IPSEC_AH_NH (NH_FLD_IPSEC_AH_SPI << 1)
12126 +#define NH_FLD_IPSEC_AH_ALL_FIELDS ((NH_FLD_IPSEC_AH_SPI << 2) - 1)
12127 +
12128 +/*************************** IPSEC ESP fields ******************************/
12129 +#define NH_FLD_IPSEC_ESP_SPI (1)
12130 +#define NH_FLD_IPSEC_ESP_SEQUENCE_NUM (NH_FLD_IPSEC_ESP_SPI << 1)
12131 +#define NH_FLD_IPSEC_ESP_ALL_FIELDS ((NH_FLD_IPSEC_ESP_SPI << 2) - 1)
12132 +
12133 +#define NH_FLD_IPSEC_ESP_SPI_SIZE 4
12134 +
12135 +/*************************** MPLS fields ***********************************/
12136 +#define NH_FLD_MPLS_LABEL_STACK (1)
12137 +#define NH_FLD_MPLS_LABEL_STACK_ALL_FIELDS \
12138 + ((NH_FLD_MPLS_LABEL_STACK << 1) - 1)
12139 +
12140 +/*************************** MACSEC fields *********************************/
12141 +#define NH_FLD_MACSEC_SECTAG (1)
12142 +#define NH_FLD_MACSEC_ALL_FIELDS ((NH_FLD_MACSEC_SECTAG << 1) - 1)
12143 +
12144 +/*************************** GTP fields ************************************/
12145 +#define NH_FLD_GTP_TEID (1)
12146 +
12147 +
12148 +/* Protocol options */
12149 +
12150 +/* Ethernet options */
12151 +#define NH_OPT_ETH_BROADCAST 1
12152 +#define NH_OPT_ETH_MULTICAST 2
12153 +#define NH_OPT_ETH_UNICAST 3
12154 +#define NH_OPT_ETH_BPDU 4
12155 +
12156 +#define NH_ETH_IS_MULTICAST_ADDR(addr) (addr[0] & 0x01)
12157 +/* also applicable for broadcast */
12158 +
12159 +/* VLAN options */
12160 +#define NH_OPT_VLAN_CFI 1
12161 +
12162 +/* IPV4 options */
12163 +#define NH_OPT_IPV4_UNICAST 1
12164 +#define NH_OPT_IPV4_MULTICAST 2
12165 +#define NH_OPT_IPV4_BROADCAST 3
12166 +#define NH_OPT_IPV4_OPTION 4
12167 +#define NH_OPT_IPV4_FRAG 5
12168 +#define NH_OPT_IPV4_INITIAL_FRAG 6
12169 +
12170 +/* IPV6 options */
12171 +#define NH_OPT_IPV6_UNICAST 1
12172 +#define NH_OPT_IPV6_MULTICAST 2
12173 +#define NH_OPT_IPV6_OPTION 3
12174 +#define NH_OPT_IPV6_FRAG 4
12175 +#define NH_OPT_IPV6_INITIAL_FRAG 5
12176 +
12177 +/* General IP options (may be used for any version) */
12178 +#define NH_OPT_IP_FRAG 1
12179 +#define NH_OPT_IP_INITIAL_FRAG 2
12180 +#define NH_OPT_IP_OPTION 3
12181 +
12182 +/* Minenc options */
12183 +#define NH_OPT_MINENCAP_SRC_ADDR_PRESENT 1
12184 +
12185 +/* GRE options */
12186 +#define NH_OPT_GRE_ROUTING_PRESENT 1
12187 +
12188 +/* TCP options */
12189 +#define NH_OPT_TCP_OPTIONS 1
12190 +#define NH_OPT_TCP_CONTROL_HIGH_BITS 2
12191 +#define NH_OPT_TCP_CONTROL_LOW_BITS 3
12192 +
12193 +/* CAPWAP options */
12194 +#define NH_OPT_CAPWAP_DTLS 1
12195 +
12196 +enum net_prot {
12197 + NET_PROT_NONE = 0,
12198 + NET_PROT_PAYLOAD,
12199 + NET_PROT_ETH,
12200 + NET_PROT_VLAN,
12201 + NET_PROT_IPV4,
12202 + NET_PROT_IPV6,
12203 + NET_PROT_IP,
12204 + NET_PROT_TCP,
12205 + NET_PROT_UDP,
12206 + NET_PROT_UDP_LITE,
12207 + NET_PROT_IPHC,
12208 + NET_PROT_SCTP,
12209 + NET_PROT_SCTP_CHUNK_DATA,
12210 + NET_PROT_PPPOE,
12211 + NET_PROT_PPP,
12212 + NET_PROT_PPPMUX,
12213 + NET_PROT_PPPMUX_SUBFRM,
12214 + NET_PROT_L2TPV2,
12215 + NET_PROT_L2TPV3_CTRL,
12216 + NET_PROT_L2TPV3_SESS,
12217 + NET_PROT_LLC,
12218 + NET_PROT_LLC_SNAP,
12219 + NET_PROT_NLPID,
12220 + NET_PROT_SNAP,
12221 + NET_PROT_MPLS,
12222 + NET_PROT_IPSEC_AH,
12223 + NET_PROT_IPSEC_ESP,
12224 + NET_PROT_UDP_ENC_ESP, /* RFC 3948 */
12225 + NET_PROT_MACSEC,
12226 + NET_PROT_GRE,
12227 + NET_PROT_MINENCAP,
12228 + NET_PROT_DCCP,
12229 + NET_PROT_ICMP,
12230 + NET_PROT_IGMP,
12231 + NET_PROT_ARP,
12232 + NET_PROT_CAPWAP_DATA,
12233 + NET_PROT_CAPWAP_CTRL,
12234 + NET_PROT_RFC2684,
12235 + NET_PROT_ICMPV6,
12236 + NET_PROT_FCOE,
12237 + NET_PROT_FIP,
12238 + NET_PROT_ISCSI,
12239 + NET_PROT_GTP,
12240 + NET_PROT_USER_DEFINED_L2,
12241 + NET_PROT_USER_DEFINED_L3,
12242 + NET_PROT_USER_DEFINED_L4,
12243 + NET_PROT_USER_DEFINED_L5,
12244 + NET_PROT_USER_DEFINED_SHIM1,
12245 + NET_PROT_USER_DEFINED_SHIM2,
12246 +
12247 + NET_PROT_DUMMY_LAST
12248 +};
12249 +
12250 +/*! IEEE 802.1Q */
12251 +#define NH_IEEE8021Q_ETYPE 0x8100
12252 +#define NH_IEEE8021Q_HDR(etype, pcp, dei, vlan_id) \
12253 + ((((uint32_t)(etype & 0xFFFF)) << 16) | \
12254 + (((uint32_t)(pcp & 0x07)) << 13) | \
12255 + (((uint32_t)(dei & 0x01)) << 12) | \
12256 + (((uint32_t)(vlan_id & 0xFFF))))
12257 +
12258 +#endif /* __FSL_NET_H */
12259 --- a/net/core/pktgen.c
12260 +++ b/net/core/pktgen.c
12261 @@ -2790,6 +2790,7 @@ static struct sk_buff *pktgen_alloc_skb(
12262 } else {
12263 skb = __netdev_alloc_skb(dev, size, GFP_NOWAIT);
12264 }
12265 + skb_reserve(skb, LL_RESERVED_SPACE(dev));
12266
12267 /* the caller pre-fetches from skb->data and reserves for the mac hdr */
12268 if (likely(skb))