1 From ab06204b9ae48324ed5b7e7026cce47ecd0a376d Mon Sep 17 00:00:00 2001
2 From: Martin Schiller <ms@dev.tdt.de>
3 Date: Mon, 8 Nov 2021 14:56:10 +0100
4 Subject: [PATCH] staging: add fsl_ppfe driver
5 MIME-Version: 1.0
6 Content-Type: text/plain; charset=UTF-8
7 Content-Transfer-Encoding: 8bit
8
9 This patch is the squashed version of all ppfe related commits from
10 LSDK-21.08.
11
12 See the following git log for further details:
13
14 commit bc389fa57819620b61f86a39444cf22c70e291ad
15 Author: Calvin Johnson <calvin.johnson@nxp.com>
16 Date: Sat Sep 16 07:05:49 2017 +0530
17
18 net: fsl_ppfe: dts binding for ppfe
19
20 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
21 Signed-off-by: Anjaneyulu Jagarlmudi <anji.jagarlmudi@nxp.com>
22
23 commit d3822b65f897e4c421c72bd215f34e41d8c4a40e
24 Author: Calvin Johnson <calvin.johnson@nxp.com>
25 Date: Sat Sep 16 14:21:37 2017 +0530
26
27 staging: fsl_ppfe/eth: header files for pfe driver
28
29 This patch has all pfe header files.
30
31 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
32 Signed-off-by: Anjaneyulu Jagarlmudi <anji.jagarlmudi@nxp.com>
33
34 commit 9184a85f93816a8b81cb363464925757185b7138
35 Author: Calvin Johnson <calvin.johnson@nxp.com>
36 Date: Sat Sep 16 14:22:17 2017 +0530
37
38 staging: fsl_ppfe/eth: introduce pfe driver
39
40	    This patch introduces Linux support for NXP's LS1012A Packet
41	    Forwarding Engine (pfe_eth). The LS1012A uses a hardware packet
42	    forwarding engine to provide high-performance Ethernet interfaces.
43	    The device includes two Ethernet ports.
44
45 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
46 Signed-off-by: Anjaneyulu Jagarlmudi <anji.jagarlmudi@nxp.com>
47
48 commit 63f6117f1c9af34bc333425507a8b858fcd61951
49 Author: Calvin Johnson <calvin.johnson@nxp.com>
50 Date: Wed Oct 11 19:23:38 2017 +0530
51
52 staging: fsl_ppfe/eth: fix RGMII tx delay issue
53
54	    Recently, the logic to enable RGMII tx delay was changed by
55	    the patch below.
56
57 https://patchwork.kernel.org/patch/9447581/
58
59	    Based on that patch, the corresponding change is made in the PFE driver.
60
61 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
62 Signed-off-by: Anjaneyulu Jagarlmudi <anji.jagarlmudi@nxp.com>
63
64 commit adf71df9d53026513da71892a27c455ae23a6d06
65 Author: Calvin Johnson <calvin.johnson@nxp.com>
66 Date: Wed Oct 18 14:29:30 2017 +0530
67
68 staging: fsl_ppfe/eth: remove unused functions
69
70 Remove unused functions hif_xmit_pkt & hif_lib_xmit_pkt.
71
72 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
73
74 commit ea632882dacf4ec21f84fa73ebff7d89d46fbeb3
75 Author: Calvin Johnson <calvin.johnson@nxp.com>
76 Date: Wed Oct 18 18:34:41 2017 +0530
77
78 staging: fsl_ppfe/eth: fix read/write/ack idx issue
79
80	    While fixing checkpatch errors, some of the index increments
81	    were commented out. They are now re-enabled.
82
83 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
84
85 commit 75dc5856eafe3ec988aa52f498ad683332e5a528
86 Author: Calvin Johnson <calvin.johnson@nxp.com>
87 Date: Fri Oct 27 11:20:47 2017 +0530
88
89 staging: fsl_ppfe/eth: Make phy_ethtool_ksettings_get return void
90
91	    Make the return value void since the function never returns a meaningful value.
92
93 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
94
95 commit fd60dff2329a4c20f5638147db315879d4b92097
96 Author: Calvin Johnson <calvin.johnson@nxp.com>
97 Date: Wed Nov 15 13:45:27 2017 +0530
98
99 staging: fsl_ppfe/eth: add function to update tmu credits
100
101	    The __hif_lib_update_credit function is used to update the tmu credits.
102	    If tx_qos is set, the tmu credit is updated based on the number of
103	    packets transmitted by the tmu.
104
105 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
106 Signed-off-by: Anjaneyulu Jagarlmudi <anji.jagarlmudi@nxp.com>
107
108 commit 259418d3755afeabca062df4c177cb6617be7e2b
109 Author: Kavi Akhila-B46177 <akhila.kavi@nxp.com>
110 Date: Thu Nov 2 12:05:35 2017 +0530
111
112 staging: fsl_ppfe/eth: Avoid packet drop at TMU queues
113
114	    Added flow control between the TMU queues and the PFE Linux driver,
115	    based on TMU credit availability.
116	    Added a tx_qos module parameter to control this behavior.
117	    Use queue-0 as the default queue to transmit packets.
118
119 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
120 Signed-off-by: Akhila Kavi <akhila.kavi@nxp.com>
121 Signed-off-by: Anjaneyulu Jagarlmudi <anji.jagarlmudi@nxp.com>
122
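    A minimal sketch of how such a credit-based check can gate transmission;
    tx_qos and queue-0 come from the commit text, while
    hif_lib_tx_credit_avail() and pfe_hif_xmit() are hypothetical helper names:

	static int tx_qos = 1; /* assumption: flow control on by default */
	module_param(tx_qos, int, 0444);
	MODULE_PARM_DESC(tx_qos, "0: no flow control, 1: hold frames when TMU credits run out");

	static netdev_tx_t pfe_eth_send_packet(struct sk_buff *skb,
					       struct net_device *ndev)
	{
		/* default to queue-0; back off instead of letting the TMU
		 * silently drop the frame */
		if (tx_qos && !hif_lib_tx_credit_avail(ndev, 0)) {
			netif_stop_queue(ndev);
			return NETDEV_TX_BUSY;
		}
		return pfe_hif_xmit(skb, ndev, 0);
	}
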
123 commit 48f8aa50bdcccdec699550d10e274711f9f8cb4d
124 Author: Bhaskar Upadhaya <Bhaskar.Upadhaya@nxp.com>
125 Date: Wed Nov 29 12:08:00 2017 +0530
126
127 staging: fsl_ppfe/eth: Enable PFE in clause 45 mode
128
129	    When we operate in clause 45 mode, we need to call
130	    the function get_phy_device() with its 3rd argument as
131	    "true", and then the resultant phy device needs to be
132	    registered with the phy layer via phy_device_register().
133
134 Signed-off-by: Bhaskar Upadhaya <Bhaskar.Upadhaya@nxp.com>
135
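    A sketch of that probe sequence (the bus and addr variables are
    illustrative; get_phy_device() and phy_device_register() are the kernel's
    mdiobus API named in the commit):

	struct phy_device *phydev;
	int ret;

	phydev = get_phy_device(bus, addr, true); /* true => clause 45 access */
	if (IS_ERR(phydev))
		return PTR_ERR(phydev);

	/* a C45 device probed this way must be registered explicitly */
	ret = phy_device_register(phydev);
	if (ret) {
		phy_device_free(phydev);
		return ret;
	}
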
136 commit 217cf01a1eb7c0a1f44c250ab05bb832287069ca
137 Author: Bhaskar Upadhaya <Bhaskar.Upadhaya@nxp.com>
138 Date: Wed Nov 29 12:21:43 2017 +0530
139
140 staging: fsl_ppfe/eth: Disable autonegotiation for 2.5G SGMII
141
142	    The PCS initialization sequence for the 2.5G SGMII interface requires
143	    auto-negotiation to be disabled.
144
145 Signed-off-by: Bhaskar Upadhaya <Bhaskar.Upadhaya@nxp.com>
146
147 commit 0758955e5d8be44260c3b6877ff78e18c2dc2706
148 Author: Calvin Johnson <calvin.johnson@nxp.com>
149 Date: Thu Mar 8 13:58:38 2018 +0530
150
151 staging: fsl_ppfe/eth: calculate PFE_PKT_SIZE with SKB_DATA_ALIGN
152
153	    The pfe packet size was calculated without considering skb data alignment,
154	    and this resulted in jumbo frames crashing the kernel when the
155	    cacheline size increased from 64 to 128 bytes with
156	    commit 97303480753e ("arm64: Increase the max granular size").
157
158	    Modify the pfe packet size calculation to include the skb data alignment
159	    of sizeof(struct skb_shared_info).
160
161 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
162
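    The shape of the corrected size calculation, as a sketch (the buffer and
    headroom constants are illustrative values, not the driver's):

	#include <linux/skbuff.h>

	#define PFE_BUF_SIZE		2048	/* assumed hardware buffer size */
	#define PFE_PKT_HEADROOM	128	/* assumed reserved headroom */

	/* leave room for the cacheline-aligned skb_shared_info that the stack
	 * appends to the data buffer; with 128-byte cachelines the aligned
	 * size grows, which the old formula ignored */
	#define PFE_PKT_SIZE	(PFE_BUF_SIZE - PFE_PKT_HEADROOM - \
				 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
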
163 commit d336c470781a28fe44a494b4f537a4dbac9fd1dd
164 Author: Akhil Goyal <akhil.goyal@nxp.com>
165 Date: Fri Apr 13 15:41:28 2018 +0530
166
167 staging: fsl_ppfe/eth: support for userspace networking
168
169	    This patch adds userspace mode support to the fsl_ppfe network driver.
170	    In the new mode, basic hardware initialization is performed in the kernel,
171	    while the datapath and HIF handling are the responsibility of userspace.
172	
173	    A new command line parameter is added to initialize the ppfe module
174	    in userspace mode. By default, the module remains in kernel-space
175	    networking mode.
176	    To enable userspace mode, use "insmod pfe.ko us=1".
177
178 Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
179 Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
180
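    A sketch of the module-parameter gating (the us parameter is from the
    commit text; the init helpers are hypothetical names):

	static int us; /* 0 = kernel networking (default), 1 = userspace mode */
	module_param(us, int, 0444);
	MODULE_PARM_DESC(us, "0: kernel-space networking, 1: userspace networking");

	static int pfe_probe_datapath(struct pfe *pfe)
	{
		if (us)
			return 0; /* HIF and datapath are driven from userspace */
		return pfe_eth_init(pfe); /* hypothetical kernel datapath init */
	}
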
181 commit 30f97a6ae76cf7a284fee4b8ad30bce24568ac53
182 Author: Calvin Johnson <calvin.johnson@nxp.com>
183 Date: Mon Apr 30 11:40:01 2018 +0530
184
185 staging: fsl_ppfe/eth: unregister netdev after pfe_phy_exit
186
187	    rmmod pfe.ko throws the below warning:
188
189 kernfs: can not remove 'phydev', no directory
190 ------------[ cut here ]------------
191 WARNING: CPU: 0 PID: 2230 at fs/kernfs/dir.c:1481
192 kernfs_remove_by_name_ns+0x90/0xa0
193
194	    This is caused when the already-unregistered netdev structure is
195	    accessed to disconnect the phy.
196	
197	    Resolve the issue by unregistering the netdev after disconnecting the phy.
198
199 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
200
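    In teardown terms, the fix reorders the two calls (a sketch of the
    ordering described above):

	/* before: unregister_netdev() ran first, so phy_disconnect() touched
	 * sysfs entries that were already gone, triggering the kernfs warning */
	phy_disconnect(ndev->phydev);	/* pfe_phy_exit() equivalent */
	unregister_netdev(ndev);
	free_netdev(ndev);
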
201 commit 1b7daa665cd53fe51f19689c5b209fd89551f131
202 Author: anuj batham <anuj.batham@nxp.com>
203 Date: Fri Apr 27 14:38:09 2018 +0530
204
205 staging: fsl_ppfe/eth: HW parse results for DPDK
206
207 HW Parse results are included in the packet headroom.
208 Length and Offset calculation now accommodates parse info size.
209
210 Signed-off-by: Archana Madhavan <archana.madhavan@nxp.com>
211
212 commit 0aeb9981d44aad6a45eb8f3ead37f91258be173f
213 Author: Calvin Johnson <calvin.johnson@nxp.com>
214 Date: Wed Jun 20 10:22:32 2018 +0530
215
216 staging: fsl_ppfe/eth: reorganize pfe_netdev_ops
217
218 Reorganize members of struct pfe_netdev_ops to match with the order
219 of members in struct net_device_ops defined in include/linux/netdevice.h
220
221 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
222
223 commit 1cbccc5028c337e7c88bf61cf89038cfad449d34
224 Author: Calvin Johnson <calvin.johnson@nxp.com>
225 Date: Wed Jun 20 10:22:50 2018 +0530
226
227 staging: fsl_ppfe/eth: use mask for rx max frame len
228
229 Define and use PFE_RCR_MAX_FL_MASK to properly set Rx max frame
230 length of MAC Receive Control Register.
231
232 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
233
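    A sketch of the masked read-modify-write this enables (the field width,
    shift, and register offset name are assumptions, not the hardware's
    documented layout):

	#define PFE_RCR_MAX_FL_SHIFT	16
	#define PFE_RCR_MAX_FL_MASK	(0x3fff << PFE_RCR_MAX_FL_SHIFT)

	static void gemac_set_rx_max_fl(void __iomem *base, u32 max_fl)
	{
		u32 rcr = readl(base + EMAC_RCNTRL); /* hypothetical offset name */

		rcr &= ~PFE_RCR_MAX_FL_MASK; /* clear only the MAX_FL field */
		rcr |= (max_fl << PFE_RCR_MAX_FL_SHIFT) & PFE_RCR_MAX_FL_MASK;
		writel(rcr, base + EMAC_RCNTRL);
	}
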
234 commit d8c8ed721470bd47ad5414d8fdc5d093cdd247f7
235 Author: Calvin Johnson <calvin.johnson@nxp.com>
236 Date: Wed Jun 20 10:23:01 2018 +0530
237
238 staging: fsl_ppfe/eth: define pfe ndo_change_mtu function
239
240 Define ndo_change_mtu function for pfe. This sets the max Rx frame
241 length to the new mtu.
242
243 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
244
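    A sketch of such an ndo_change_mtu implementation, reusing the
    gemac_set_rx_max_fl() sketch above (the private struct field is
    illustrative):

	static int pfe_eth_change_mtu(struct net_device *ndev, int new_mtu)
	{
		struct pfe_eth_priv_s *priv = netdev_priv(ndev);

		ndev->mtu = new_mtu;
		/* max frame length = MTU plus Ethernet header and FCS */
		gemac_set_rx_max_fl(priv->EMAC_baseaddr,
				    new_mtu + ETH_HLEN + ETH_FCS_LEN);
		return 0;
	}
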
245 commit f5f50edda84cf9305db06310536525c206970d6c
246 Author: Calvin Johnson <calvin.johnson@nxp.com>
247 Date: Wed Jun 20 10:23:16 2018 +0530
248
249 staging: fsl_ppfe/eth: remove jumbo frame enable from gemac init
250
251	    The MAC Receive Control Register was configured to allow jumbo frames.
252	    This is removed, as jumbo frames can be supported at any time by changing
253	    the mtu, which will in turn modify the MAX_FL field of the MAC RCR.
254	    Jumbo frames caused pfe to hang on LS1012A rev 1.0 silicon due to
255	    erratum A-010897.
256
257 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
258
259 commit 53e3c57af87d72ee0299a723499bd911cb1ed25a
260 Author: Calvin Johnson <calvin.johnson@nxp.com>
261 Date: Wed Jun 20 10:23:32 2018 +0530
262
263 staging: fsl_ppfe/eth: disable CRC removal
264
265	    Disable CRC removal from the packet, so that packets are forwarded
266	    as-is to Linux.
267	    The CRC configuration in the MAC will be reflected in the packets
268	    received by Linux.
269
270 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
271
272 commit eb55f7878a6ece7edbecd648e147a5683da18c76
273 Author: Calvin Johnson <calvin.johnson@nxp.com>
274 Date: Wed Jun 20 10:23:41 2018 +0530
275
276 staging: fsl_ppfe/eth: handle ls1012a errata_a010897
277
278	    On LS1012A rev 1.0, jumbo frames are not supported, as they cause
279	    the PFE controller to hang. A reset of the entire chip is required
280	    to resume normal operation.
281	
282	    To handle this erratum, frames with length > 1900 are truncated on
283	    rev 1.0 of LS1012A.
284
285 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
286
287 commit c7b4f5f8f74925ce1d209ee4fcd5973d5cc5b61c
288 Author: Calvin Johnson <calvin.johnson@nxp.com>
289 Date: Thu Oct 4 09:38:34 2018 +0530
290
291 staging: fsl_ppfe/eth: replace magic numbers
292
293 Replace magic numbers and some cosmetic changes.
294
295 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
296
297 commit c0ed379aa248dd70b2acf5dd8908bec1f6de5487
298 Author: Calvin Johnson <calvin.johnson@nxp.com>
299 Date: Thu Oct 4 09:39:00 2018 +0530
300
301 staging: fsl_ppfe/eth: resolve indentation warning
302
303 Resolve the following indentation warning:
304
305 drivers/staging/fsl_ppfe/pfe_ls1012a_platform.c:
306 In function ‘pfe_get_gemac_if_proprties’:
307 drivers/staging/fsl_ppfe/pfe_ls1012a_platform.c:96:2:
308 warning: this ‘else’ clause does not guard...
309 [-Wmisleading-indentation]
310 else
311 ^~~~
312 drivers/staging/fsl_ppfe/pfe_ls1012a_platform.c:98:3:
313 note: ...this statement, but the latter is misleadingly indented as
314 if it were guarded by the ‘else’
315 pdata->ls1012a_eth_pdata[port].mdio_muxval = phy_id;
316 ^~~~~
317
318 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
319
320 commit c509cb585af2848dbb4ab194bf0fa435e356cb0a
321 Author: Calvin Johnson <calvin.johnson@nxp.com>
322 Date: Thu Oct 4 09:38:17 2018 +0530
323
324 staging: fsl_ppfe/eth: add fixed-link support
325
326 In cases where MAC is not connected to a normal MDIO-managed PHY
327 device, and instead to a switch, it is configured as a "fixed-link".
328 Code to handle this scenario is added here.
329
330 phy_node in the dtb is checked to identify a fixed-link.
331 On identification of a fixed-link, it is registered and connected.
332
333 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
334
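    A sketch of the usual fixed-link detection pattern (the adjust-link
    callback and interface variables are illustrative; the of_phy_* calls are
    the kernel's standard API):

	struct device_node *phy_node = of_parse_phandle(np, "phy-handle", 0);

	if (!phy_node && of_phy_is_fixed_link(np)) {
		/* register the emulated PHY; the node itself then acts as
		 * the phy node */
		if (of_phy_register_fixed_link(np) < 0)
			return -ENODEV;
		phy_node = of_node_get(np);
	}
	phydev = of_phy_connect(ndev, phy_node, pfe_eth_adjust_link, 0,
				interface);
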
335 commit 5e93b6ed52d4cdce12a481866c6f211299940734
336 Author: Shreyansh Jain <shreyansh.jain@nxp.com>
337 Date: Wed Jun 6 14:19:34 2018 +0530
338
339 staging: fsl_ppfe: add support for a char dev for link status
340
341	    Read and IOCTL support is added. The application would need to open,
342	    then read/ioctl, the /dev/pfe_us_cdev device.
343	    select support is pending, as it requires a wait_queue.
344
345 Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
346 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
347
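    A sketch of the character-device plumbing (the handler names are
    illustrative; /dev/pfe_us_cdev is from the commit text):

	static const struct file_operations pfe_cdev_fops = {
		.owner		= THIS_MODULE,
		.open		= pfe_cdev_open,
		.read		= pfe_cdev_read,	/* reports link status */
		.unlocked_ioctl	= pfe_cdev_ioctl,
		.release	= pfe_cdev_release,
	};
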
348 commit 48e064df94157b0c34f2d75e164ea5c7f4970b7b
349 Author: Akhil Goyal <akhil.goyal@nxp.com>
350 Date: Thu Jul 5 20:14:21 2018 +0530
351
352 staging: fsl_ppfe: enable hif event from userspace
353
354 HIF interrupts are enabled using ioctl from user space,
355 and epoll wait from user space wakes up when there is an HIF
356 interrupt.
357
358 Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
359
360 commit d16d91d3250abec31422b28ff04a973b8b3d73c5
361 Author: Akhil Goyal <akhil.goyal@nxp.com>
362 Date: Fri Jul 20 16:43:25 2018 +0530
363
364 staging: fsl_ppfe: performance tuning for user space
365
366	    Interrupt coalescing of 100 usec is added.
367
368 Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
369 Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
370
371 commit eadb4c9d3e37c44659284fc9190d7e4f04b12aa0
372 Author: Calvin Johnson <calvin.johnson@nxp.com>
373 Date: Tue Nov 20 21:50:23 2018 +0530
374
375 staging: fsl_ppfe/eth: Update to use SPDX identifiers
376
377 Replace license text with corresponding SPDX identifiers and update the
378 format of existing SPDX identifiers to follow the new guideline
379 Documentation/process/license-rules.rst.
380
381 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
382
383 commit a468be7bda6fd98afab1cccb4e7151f23ca096e9
384 Author: Calvin Johnson <calvin.johnson@nxp.com>
385 Date: Tue Nov 20 21:50:40 2018 +0530
386
387 staging: fsl_ppfe/eth: misc clean up
388
389 - remove redundant hwfeature init
390 - remove unused vars from ls1012a_eth_platform_data
391	    - To handle ls1012a errata_a010897, the PPFE driver requires the GUTS
392	      driver to be compiled in. Select FSL_GUTS when the PPFE driver is compiled.
393
394 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
395
396 commit 0e37bfbeda9510688ad987251aa07e3c88d6ba41
397 Author: Calvin Johnson <calvin.johnson@nxp.com>
398 Date: Tue Nov 20 21:50:51 2018 +0530
399
400 staging: fsl_ppfe/eth: reorganize platform phy parameters
401
402 - Use "phy-handle" and of_* functions to get phy node and fixed-link
403 parameters
404
405 - Reorganize phy parameters and initialize them only if phy-handle
406 or fixed-link is defined in the dtb.
407
408 - correct typo pfe_get_gemac_if_proprties to pfe_get_gemac_if_properties
409
410 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
411
412 commit c8703ab06e644e853b12baf082dd703f6a4440a5
413 Author: Calvin Johnson <calvin.johnson@nxp.com>
414 Date: Fri Nov 23 23:58:28 2018 +0530
415
416 staging: fsl_ppfe/eth: support single interface initialization
417
418 - arrange members of struct mii_bus in sequence matching phy.h
419 - if mdio node is defined, use of_mdiobus_register to register
420 child nodes (phy devices) available on the mdio bus.
421 - remove of_phy_register_fixed_link from pfe_phy_init as it is being
422 handled in pfe_get_gemac_if_properties
423 - remove mdio enabled check
424 - skip phy init, if no PHY or fixed-link
425
426 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
427
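    A sketch of registering the bus children from the mdio subnode (the bus
    variable is illustrative; of_mdiobus_register() is the kernel's standard
    helper):

	struct device_node *mdio_np = of_get_child_by_name(np, "mdio");
	int err;

	if (mdio_np) {
		/* registers every child PHY node found on the bus */
		err = of_mdiobus_register(bus, mdio_np);
		of_node_put(mdio_np);
	} else {
		err = mdiobus_register(bus);
	}
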
428 commit 89804e2d74002d01ea3f174048176e498298329a
429 Author: Calvin Johnson <calvin.johnson@nxp.com>
430 Date: Tue Nov 20 21:51:53 2018 +0530
431
432 net: fsl_ppfe: update dts properties for phy
433
434 Use commonly used phy-handle property and mdio subnode to handle
435 phy properties.
436
437 Deprecate bindings fsl,gemac-phy-id & fsl,pfe-phy-if-flags.
438
439 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
440
441 commit 5b0cc262ba7791fb0fae1f81f9619f54a15a75ba
442 Author: Calvin Johnson <calvin.johnson@nxp.com>
443 Date: Fri Dec 7 19:30:03 2018 +0530
444
445 staging: fsl_ppfe/eth: remove unused code
446
447 - remove gemac-bus-id related code that is unused.
448 - remove unused prototype gemac_set_mdc_div.
449
450 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
451
452 commit a5a237f331ab7fcd85b95466f481ddb7023aecc3
453 Author: Calvin Johnson <calvin.johnson@nxp.com>
454 Date: Mon Dec 10 10:22:33 2018 +0530
455
456 staging: fsl_ppfe/eth: separate mdio init from mac init
457
458 - separate mdio initialization from mac initialization
459 - Define pfe_mdio_priv_s structure to hold mii_bus structure and other
460 related data.
461	    - Modify functions to work with the separated mdio init model.
462
463 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
464
465 commit 262a03c082de5a93efd7f54bec48b39cda9042a8
466 Author: Calvin Johnson <calvin.johnson@nxp.com>
467 Date: Wed Mar 27 13:25:57 2019 +0530
468
469 staging: fsl_ppfe/eth: adapt to link mode based phydev changes
470
471	    Setting link mode bits has changed with the integration of
472	    commit 3c1bcc8 ("net: ethernet: Convert phydev advertize and
473	    supported from u32 to link mode"). Adapt to the new method of
474	    setting and clearing the link mode bits.
475
476 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
477
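    A sketch of the before/after for one bit (which bits the driver actually
    touches is illustrative):

	/* old u32 style (no longer compiles after 3c1bcc8):
	 *	phydev->supported &= ~SUPPORTED_10000baseT_Full;
	 */
	linkmode_clear_bit(ETHTOOL_LINK_MODE_10000baseT_Full_BIT,
			   phydev->supported);
	linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT,
			 phydev->supported);
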
478 commit 75f911d8ae45215f1c188191da92905ad3f7ad4a
479 Author: Calvin Johnson <calvin.johnson@nxp.com>
480 Date: Wed Mar 27 19:31:35 2019 +0530
481
482 staging: fsl_ppfe/eth: use generic soc_device infra instead of fsl_guts_get_svr()
483
484	    Commit ("soc: fsl: guts: make fsl_guts_get_svr() static") has
485	    made fsl_guts_get_svr() static; hence, use the generic soc_device
486	    infrastructure to check the SoC revision.
487
488 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
489
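    A sketch of the soc_device-based revision check (the family and revision
    strings are assumptions):

	#include <linux/sys_soc.h>

	static const struct soc_device_attribute ls1012a_rev1_attr[] = {
		{ .family = "QorIQ LS1012A", .revision = "1.0" }, /* assumed */
		{ /* sentinel */ },
	};

	/* replaces the old fsl_guts_get_svr() comparison */
	if (soc_device_match(ls1012a_rev1_attr))
		pfe_errata_a010897 = true;
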
490 commit 0a0ca3d898d15b3d7a206597a68f28134d4dfebd
491 Author: Calvin Johnson <calvin.johnson@nxp.com>
492 Date: Tue Mar 26 16:52:22 2019 +0530
493
494 staging: fsl_ppfe/eth: use memremap() to map RAM area used by PFE
495
496	    The RAM area used by PFE should be mapped using memremap() instead of
497	    directly translating the physical addr to virtual. This will ensure
498	    proper checks are done before the area is used.
499
500 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
501
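    A sketch of the mapping change (the pfe field names are illustrative):

	/* before: pfe->ddr_baseaddr = phys_to_virt(pfe->ddr_phys_baseaddr); */
	pfe->ddr_baseaddr = memremap(pfe->ddr_phys_baseaddr, pfe->ddr_size,
				     MEMREMAP_WB);
	if (!pfe->ddr_baseaddr) {
		dev_err(dev, "memremap() of PFE DDR region failed\n");
		return -ENOMEM;
	}
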
502 commit bfc0c2fedb76ad9c888db421b435d51e33fe276a
503 Author: Li Yang <leoyang.li@nxp.com>
504 Date: Tue Jun 11 18:24:37 2019 -0500
505
506 staging: fsl_ppfe/eth: remove 'fallback' argument from dev->ndo_select_queue()
507
508 To be consistent with upstream API change.
509
510 Signed-off-by: Li Yang <leoyang.li@nxp.com>
511
512 commit cfd0841983961067efe69379371ddcb49b230dac
513 Author: Ting Liu <ting.liu@nxp.com>
514 Date: Mon Jun 17 09:27:53 2019 +0200
515
516 staging: fsl_ppfe/eth: prefix header search paths with $(srctree)/
517
518	    Recently, the rules for configuring search paths in Kbuild
519	    changed: https://lkml.org/lkml/2019/5/13/37
520	
521	    This leads to the below error:
522
523 fatal error: pfe/pfe.h: No such file or directory
524
525 Fix it by adding $(srctree)/ prefix to the search paths.
526
527 Signed-off-by: Ting Liu <ting.liu@nxp.com>
528
529 commit 4be722099e4b6bdff2e683234ffaa2dc62fc773d
530 Author: Calvin Johnson <calvin.johnson@nxp.com>
531 Date: Wed Nov 1 11:11:30 2017 +0530
532
533 staging: fsl_ppfe/eth: add pfe support to Kconfig and Makefile
534
535 Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
536 [ Aisheng: fix minor conflict due to removed VBOXSF_FS ]
537 Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com>
538
539 commit 5a2c9482ec3959dc2010f99897359b9e4006d2a4
540 Author: Nagesh Koneti <koneti.nagesh@nxp.com>
541 Date: Wed Sep 25 12:01:19 2019 +0530
542
543 staging: fsl_ppfe/eth: Disable termination of CRC fwd.
544
545	    The LS1012A MAC PCS block has an erratum that is seen with a specific
546	    PHY, the AR803x. The issue is triggered by the (spec-compliant) operation
547	    of the AR803x PHY on the LS1012A-FRWY board. Due to this, a good-FCS packet
548	    is reported as an error packet by the MAC, so the FCS of these error packets
549	    should be validated and only real error packets discarded in the PFE Rx path.
550
551 Signed-off-by: Nagesh Koneti <koneti.nagesh@nxp.com>
553
554 commit 36344cfc28f4cbdf606330ab8929e10d0778a087
555 Author: Li Yang <leoyang.li@nxp.com>
556 Date: Sun Dec 8 18:19:18 2019 -0600
557
558 net: ppfe: Cope with of_get_phy_mode() API change
559
560 Signed-off-by: Li Yang <leoyang.li@nxp.com>
561
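    Upstream changed of_get_phy_mode() to return an error code and pass the
    mode back through a pointer; a sketch of the adapted call (the fallback
    mode is an assumption):

	phy_interface_t interface;
	int err;

	err = of_get_phy_mode(np, &interface);
	if (err)
		interface = PHY_INTERFACE_MODE_SGMII; /* assumed default */
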
562 commit ee2a796cf8990ac06f329c580f18fffefbac6a9a
563 Author: Anji Jagarlmudi <anji.jagarlmudi@nxp.com>
564 Date: Wed Jan 8 19:06:12 2020 +0530
565
566 staging: fsl_ppfe/eth: Enhance error checking in platform probe
567
568 Fix the kernel crash when MAC addr is not passed in dtb.
569
570 Signed-off-by: Anji Jagarlmudi <anji.jagarlmudi@nxp.com>
571
572 commit 2e8ca14ea6cc8d6816f2a445f3610f6b5b852e7c
573 Author: Anji Jagarlmudi <anji.jagarlmudi@nxp.com>
574 Date: Mon Apr 6 19:46:05 2020 +0530
575
576 staging: fsl_ppfe/eth: reject unsupported coalescing params
577
578 Set ethtool_ops->supported_coalesce_params to let
579 the core reject unsupported coalescing parameters.
580
581 Signed-off-by: Anji Jagarlmudi <anji.jagarlmudi@nxp.com>
582
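    A sketch of the declaration (only rx-usecs is assumed to be the supported
    parameter, matching the 100 usec coalescing added earlier; the handler
    names are hypothetical):

	static const struct ethtool_ops pfe_ethtool_ops = {
		/* the ethtool core rejects any other coalescing fields */
		.supported_coalesce_params = ETHTOOL_COALESCE_RX_USECS,
		.get_coalesce = pfe_eth_get_coalesce,
		.set_coalesce = pfe_eth_set_coalesce,
	};
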
583 commit 7ce4dd75c94d8a7dbe4e4ad747b4ddc5be6d83b4
584 Author: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
585 Date: Sat Mar 28 00:30:53 2020 +0530
586
587 staging: fsl_ppfe/eth:check "reg" property before pfe_get_gemac_if_properties()
588
589	    It has been observed that the function pfe_get_gemac_if_properties() has
590	    been called blindly for the next two child nodes. There might be some
591	    cases where it may go wrong, and that leads to missing interfaces.
592	    With these changes it is ensured that's not the case.
593
594 Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
595 Signed-off-by: Anji J <anji.jagarlmudi@nxp.com>
596
597 commit 7ba7cfb799c8928f4963f86a1ad47e2dd56022b2
598 Author: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
599 Date: Sat Mar 28 00:34:13 2020 +0530
600
601 staging: fsl_ppfe/eth: "struct firmware" dereference is reduced in many functions
602
603	    The firmware structure's data variable is the actual ELF data. It was
604	    being dereferenced in multiple functions, and this has now been reduced.
605
606 Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
607 Signed-off-by: Anji J <anji.jagarlmudi@nxp.com>
608
609 commit 443538bb09b979fcc98a58e18a3eb8cebe25ad4f
610 Author: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
611 Date: Sun Mar 29 18:05:50 2020 +0530
612
613 staging: fsl_ppfe/eth: LF-27 load pfe binaries from FDT
614
615	    The FDT prepared in u-boot now has the pfe firmware as part of it.
616	    These changes read the firmware from it by default and try to load the
617	    ELF into the PFE PEs. This helps build the pfe driver as part of the kernel.
618
619 Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
620 Signed-off-by: Anji J <anji.jagarlmudi@nxp.com>
621
622 commit 2b1f551c2aee4550ffca5f86031bc4f5e6ccb848
623 Author: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
624 Date: Tue Jun 9 15:33:42 2020 +0530
625
626 staging: fsl_ppfe/eth: proper handling for RGMII delay mode
627
628 The correct setting for the RGMII ports on LS1012ARDB is to
629 enable delay on both Tx and Rx. So the phy mode to be matched
630 is PHY_INTERFACE_MODE_RGMII_ID.
631
632 Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
633 Signed-off-by: Anji Jagarlmudi <anji.jagarlmudi@nxp.com>
634
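    The phy-mode match, in code terms (a sketch; the field and helper names
    are hypothetical):

	/* "rgmii-id" in the dtb maps to PHY_INTERFACE_MODE_RGMII_ID: the PHY
	 * inserts the delay (typically ~2ns) on both TX and RX, so the MAC
	 * must not add its own */
	if (pdata->interface == PHY_INTERFACE_MODE_RGMII_ID)
		pfe_eth_setup_rgmii_delay(priv);
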
635 commit a95d84b87e743bd7c57b287901524f2cffbf38d7
636 Author: Dong Aisheng <aisheng.dong@nxp.com>
637 Date: Wed Jul 15 16:06:24 2020 +0800
638
639 LF-1762-2 staging: fsl_ppfe: replace '---help---' in Kconfig files with 'help'
640
641 Update Kconfig to cope with upstream change
642 commit 84af7a6194e4 ("checkpatch: kconfig: prefer 'help' over
643 '---help---'").
644
645 Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com>
646
647 commit 546ef027c2b0768517d903429d56bcd89b919e6d
648 Author: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
649 Date: Wed Jul 15 10:36:07 2020 +0530
650
651 staging: fsl_ppfe/eth: Nesting level does not match indentation
652
653 corrected nesting level
654 LF-1661 and Coverity CID: 8879316
655
656 Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
657
658 commit c76a95f776e0f3e1a2c8abc1748c662151e29be5
659 Author: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
660 Date: Wed Jul 15 11:19:57 2020 +0530
661
662 staging: fsl_ppfe/eth: Initialized scalar variable
663
664 Proper initialization of scalar variable
665 LF-1657 and Coverity CID: 3335133
666
667 Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
668
669 commit f48286af915f664c15aa327aaf6d9b61d33eea67
670 Author: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
671 Date: Wed Jul 15 11:47:50 2020 +0530
672
673 staging: fsl_ppfe/eth: misspelt variable name
674
675 variable name corrected
676 LF-1656 and Coverity CID: 3335119
677
678 Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
679
680 commit a5e006f71fce8e76831704b31218f3d57c0b9924
681 Author: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
682 Date: Wed Jul 15 12:10:47 2020 +0530
683
684 staging: fsl_ppfe/eth: Avoiding out-of-bound writes
685
686 avoid out-of-bound writes with proper error handling
687 LF-1654, LF-1652 and Coverity CID: 3335106, 3335090
688
689 Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
690
691 commit 957fde83420b6a74a5e96b2f27183274b04211d9
692 Author: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
693 Date: Wed Jul 15 13:09:35 2020 +0530
694
695 staging: fsl_ppfe/eth: Initializing scalar variable
696
697 proper initialization of scalar variable.
698 LF-1653 and Coverity CID: 3335101
699
700 Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
701
702 commit fae95666a4c992b7f9035ee978e3e289495cbd47
703 Author: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
704 Date: Wed Jul 15 13:45:50 2020 +0530
705
706 staging: fsl_ppfe/eth: checking return value
707
708 proper checks added and handled for return value.
709 LF-1644 and Coverity CID: 241888
710
711 Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
712
713 commit 5bf7baddb22ecafd6064c9062a804bcd17a326cc
714 Author: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
715 Date: Wed Jul 15 14:04:10 2020 +0530
716
717 staging: fsl_ppfe/eth: Avoid out-of-bound access
718
719 proper handling to avoid out-of-bound access
720 LF-1642, LF-1641 and Coverity CID: 240910, 240891
721
722 Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
723
724 commit b15d480d66277fc00bc610a91af56263733a4e13
725 Author: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
726 Date: Fri Jul 31 10:19:00 2020 +0530
727
728 staging: fsl_ppfe/eth: Avoiding out-of-bound writes
729
730 avoid out-of-bound writes with proper error handling
731 LF-1654, LF-1652 and Coverity CID: 3335106, 3335090
732
733 Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
734
735 commit 188992fc4d4173c6bb36e924b5833bb092f7d602
736 Author: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
737 Date: Fri Jul 31 10:23:59 2020 +0530
738
739 staging: fsl_ppfe/eth: return value init in error case
740
741 proper err return in error case.
742 LF-1806 and Coverity CID: 10468592
743
744 Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
745
746 commit 26d6dd0bd1331dd07764914033fd8e98777fc165
747 Author: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
748 Date: Thu Aug 13 12:04:48 2020 +0530
749
750 staging: fsl_ppfe/eth: Avoid recursion in header inclusion
751
752 Avoiding header inclusions that are not necessary and also that are
753 causing header inclusion recursion.
754
755 LF-2102 and Coverity CID: 240838
756
757 Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
758
759 commit f18618582d078b87c9ee2e93ebbffee44cd76ec0
760 Author: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
761 Date: Thu Aug 13 12:28:25 2020 +0530
762
763 staging: fsl_ppfe/eth: Avoiding return value overwrite
764
765 avoid return value overwrite at the end of function.
766 LF-2136, LF-2137 and Coverity CID: 8879341, 8879364
767
768 Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
769
770 commit 4cd4e5f325516d91c630311af4d36472ce19124e
771 Author: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
772 Date: Wed Sep 9 17:50:10 2020 +0530
773
774 staging: fsl_ppfe/eth: LF-27 enabling PFE firmware load from FDT
775
776	    The macro "LOAD_PFEFIRMWARE_FROM_FILESYSTEM" has been disabled so that
777	    the firmware is loaded from the FDT by default. Enabling the macro will
778	    load the firmware from the filesystem.
779	
780	    Also, the Makefile is now tuned to build pfe as per the config option.
781
782 Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
783
784 commit 175a310323ccfab0025f2da56799bb3939b65c9d
785 Author: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
786 Date: Thu Apr 9 18:17:48 2020 +0530
787
788 staging: fsl_ppfe/eth: Ethtool stats correction for IEEE_rx_drop counter
789
790	    Due to a carrier-extension bug, the phy's IEEE_rx_drop counter is
791	    sometimes incremented and the phy reports the packet as having a crc error.
792	    Because of this, PFE revalidates all the packets that are marked crc
793	    error by the phy. The counter the phy reports is still bogus, and this
794	    patch decrements the counter by the number of packets PFE revalidated
795	    (and found to be crc ok).
796
797 Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
798
799 commit a4683911f7a7d71762a90dabf72faadab5766774
800 Author: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
801 Date: Wed Sep 30 17:20:19 2020 +0530
802
803 staging: fsl_ppfe/eth: PFE firmware load enhancements
804
805 PFE driver enhancements to load the PE firmware from filesystem
806 when the firmware is not found in FDT.
807
808 Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
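
    A sketch of the resulting fallback order (the node, property, and image
    names are assumptions, not the driver's actual identifiers):

	struct device_node *np;
	const struct firmware *fw = NULL;
	const void *fw_data;
	int len;

	/* prefer the images u-boot packed into the FDT... */
	np = of_find_node_by_name(NULL, "pfe-firmware"); /* assumed node */
	if (np) {
		fw_data = of_get_property(np, "class-firmware", &len); /* assumed */
	} else if (request_firmware(&fw, "ppfe_class_ls1012a.elf", dev)) {
		/* ...otherwise fall back to /lib/firmware */
		return -ENOENT;
	}
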
809 ---
810 .../devicetree/bindings/net/fsl_ppfe/pfe.txt | 199 ++
811 MAINTAINERS | 8 +
812 drivers/staging/Kconfig | 2 +
813 drivers/staging/Makefile | 1 +
814 drivers/staging/fsl_ppfe/Kconfig | 21 +
815 drivers/staging/fsl_ppfe/Makefile | 20 +
816 drivers/staging/fsl_ppfe/TODO | 2 +
817 drivers/staging/fsl_ppfe/include/pfe/cbus.h | 78 +
818 .../staging/fsl_ppfe/include/pfe/cbus/bmu.h | 55 +
819 .../fsl_ppfe/include/pfe/cbus/class_csr.h | 289 ++
820 .../fsl_ppfe/include/pfe/cbus/emac_mtip.h | 242 ++
821 .../staging/fsl_ppfe/include/pfe/cbus/gpi.h | 86 +
822 .../staging/fsl_ppfe/include/pfe/cbus/hif.h | 100 +
823 .../fsl_ppfe/include/pfe/cbus/hif_nocpy.h | 50 +
824 .../fsl_ppfe/include/pfe/cbus/tmu_csr.h | 168 ++
825 .../fsl_ppfe/include/pfe/cbus/util_csr.h | 61 +
826 drivers/staging/fsl_ppfe/include/pfe/pfe.h | 372 +++
827 drivers/staging/fsl_ppfe/pfe_cdev.c | 258 ++
828 drivers/staging/fsl_ppfe/pfe_cdev.h | 41 +
829 drivers/staging/fsl_ppfe/pfe_ctrl.c | 226 ++
830 drivers/staging/fsl_ppfe/pfe_ctrl.h | 100 +
831 drivers/staging/fsl_ppfe/pfe_debugfs.c | 99 +
832 drivers/staging/fsl_ppfe/pfe_debugfs.h | 13 +
833 drivers/staging/fsl_ppfe/pfe_eth.c | 2587 +++++++++++++++++
834 drivers/staging/fsl_ppfe/pfe_eth.h | 175 ++
835 drivers/staging/fsl_ppfe/pfe_firmware.c | 398 +++
836 drivers/staging/fsl_ppfe/pfe_firmware.h | 21 +
837 drivers/staging/fsl_ppfe/pfe_hal.c | 1517 ++++++++++
838 drivers/staging/fsl_ppfe/pfe_hif.c | 1064 +++++++
839 drivers/staging/fsl_ppfe/pfe_hif.h | 199 ++
840 drivers/staging/fsl_ppfe/pfe_hif_lib.c | 628 ++++
841 drivers/staging/fsl_ppfe/pfe_hif_lib.h | 229 ++
842 drivers/staging/fsl_ppfe/pfe_hw.c | 164 ++
843 drivers/staging/fsl_ppfe/pfe_hw.h | 15 +
844 .../staging/fsl_ppfe/pfe_ls1012a_platform.c | 383 +++
845 drivers/staging/fsl_ppfe/pfe_mod.c | 158 +
846 drivers/staging/fsl_ppfe/pfe_mod.h | 103 +
847 drivers/staging/fsl_ppfe/pfe_perfmon.h | 26 +
848 drivers/staging/fsl_ppfe/pfe_sysfs.c | 840 ++++++
849 drivers/staging/fsl_ppfe/pfe_sysfs.h | 17 +
850 40 files changed, 11015 insertions(+)
851 create mode 100644 Documentation/devicetree/bindings/net/fsl_ppfe/pfe.txt
852 create mode 100644 drivers/staging/fsl_ppfe/Kconfig
853 create mode 100644 drivers/staging/fsl_ppfe/Makefile
854 create mode 100644 drivers/staging/fsl_ppfe/TODO
855 create mode 100644 drivers/staging/fsl_ppfe/include/pfe/cbus.h
856 create mode 100644 drivers/staging/fsl_ppfe/include/pfe/cbus/bmu.h
857 create mode 100644 drivers/staging/fsl_ppfe/include/pfe/cbus/class_csr.h
858 create mode 100644 drivers/staging/fsl_ppfe/include/pfe/cbus/emac_mtip.h
859 create mode 100644 drivers/staging/fsl_ppfe/include/pfe/cbus/gpi.h
860 create mode 100644 drivers/staging/fsl_ppfe/include/pfe/cbus/hif.h
861 create mode 100644 drivers/staging/fsl_ppfe/include/pfe/cbus/hif_nocpy.h
862 create mode 100644 drivers/staging/fsl_ppfe/include/pfe/cbus/tmu_csr.h
863 create mode 100644 drivers/staging/fsl_ppfe/include/pfe/cbus/util_csr.h
864 create mode 100644 drivers/staging/fsl_ppfe/include/pfe/pfe.h
865 create mode 100644 drivers/staging/fsl_ppfe/pfe_cdev.c
866 create mode 100644 drivers/staging/fsl_ppfe/pfe_cdev.h
867 create mode 100644 drivers/staging/fsl_ppfe/pfe_ctrl.c
868 create mode 100644 drivers/staging/fsl_ppfe/pfe_ctrl.h
869 create mode 100644 drivers/staging/fsl_ppfe/pfe_debugfs.c
870 create mode 100644 drivers/staging/fsl_ppfe/pfe_debugfs.h
871 create mode 100644 drivers/staging/fsl_ppfe/pfe_eth.c
872 create mode 100644 drivers/staging/fsl_ppfe/pfe_eth.h
873 create mode 100644 drivers/staging/fsl_ppfe/pfe_firmware.c
874 create mode 100644 drivers/staging/fsl_ppfe/pfe_firmware.h
875 create mode 100644 drivers/staging/fsl_ppfe/pfe_hal.c
876 create mode 100644 drivers/staging/fsl_ppfe/pfe_hif.c
877 create mode 100644 drivers/staging/fsl_ppfe/pfe_hif.h
878 create mode 100644 drivers/staging/fsl_ppfe/pfe_hif_lib.c
879 create mode 100644 drivers/staging/fsl_ppfe/pfe_hif_lib.h
880 create mode 100644 drivers/staging/fsl_ppfe/pfe_hw.c
881 create mode 100644 drivers/staging/fsl_ppfe/pfe_hw.h
882 create mode 100644 drivers/staging/fsl_ppfe/pfe_ls1012a_platform.c
883 create mode 100644 drivers/staging/fsl_ppfe/pfe_mod.c
884 create mode 100644 drivers/staging/fsl_ppfe/pfe_mod.h
885 create mode 100644 drivers/staging/fsl_ppfe/pfe_perfmon.h
886 create mode 100644 drivers/staging/fsl_ppfe/pfe_sysfs.c
887 create mode 100644 drivers/staging/fsl_ppfe/pfe_sysfs.h
888
889 --- /dev/null
890 +++ b/Documentation/devicetree/bindings/net/fsl_ppfe/pfe.txt
891 @@ -0,0 +1,199 @@
892 +=============================================================================
893 +NXP Programmable Packet Forwarding Engine Device Bindings
894 +
895 +CONTENTS
896 + - PFE Node
897 + - Ethernet Node
898 +
899 +=============================================================================
900 +PFE Node
901 +
902 +DESCRIPTION
903 +
904 +PFE Node has all the properties associated with Packet Forwarding Engine block.
905 +
906 +PROPERTIES
907 +
908 +- compatible
909 + Usage: required
910 + Value type: <stringlist>
911 + Definition: Must include "fsl,pfe"
912 +
913 +- reg
914 + Usage: required
915 + Value type: <prop-encoded-array>
916 + Definition: A standard property.
917 + Specifies the offset of the following registers:
918 + - PFE configuration registers
919 + - DDR memory used by PFE
920 +
921 +- fsl,pfe-num-interfaces
922 + Usage: required
923 + Value type: <u32>
924 + Definition: Must be present. Value can be either one or two.
925 +
926 +- interrupts
927 + Usage: required
928 + Value type: <prop-encoded-array>
929 + Definition: Three interrupts are specified in this property.
930 + - HIF interrupt
931 + - HIF NO COPY interrupt
932 + - Wake On LAN interrupt
933 +
934 +- interrupt-names
935 + Usage: required
936 + Value type: <stringlist>
937 + Definition: Following strings are defined for the 3 interrupts.
938 + "pfe_hif" - HIF interrupt
939 + "pfe_hif_nocpy" - HIF NO COPY interrupt
940 + "pfe_wol" - Wake On LAN interrupt
941 +
942 +- memory-region
943 + Usage: required
944 + Value type: <phandle>
945 + Definition: phandle to a node describing reserved memory used by pfe.
946 + Refer:- Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
947 +
948 +- fsl,pfe-scfg
949 + Usage: required
950 + Value type: <phandle>
951 + Definition: phandle for scfg.
952 +
953 +- fsl,rcpm-wakeup
954 + Usage: required
955 + Value type: <phandle>
956 + Definition: phandle for rcpm.
957 +
958 +- clocks
959 + Usage: required
960 + Value type: <phandle>
961 + Definition: phandle for clockgen.
962 +
963 +- clock-names
964 + Usage: required
965 + Value type: <string>
966 + Definition: phandle for clock name.
967 +
968 +EXAMPLE
969 +
970 +pfe: pfe@04000000 {
971 + compatible = "fsl,pfe";
972 + reg = <0x0 0x04000000 0x0 0xc00000>, /* AXI 16M */
973 + <0x0 0x83400000 0x0 0xc00000>; /* PFE DDR 12M */
974 + reg-names = "pfe", "pfe-ddr";
975 + fsl,pfe-num-interfaces = <0x2>;
976 + interrupts = <0 172 0x4>, /* HIF interrupt */
977 + <0 173 0x4>, /*HIF_NOCPY interrupt */
978 + <0 174 0x4>; /* WoL interrupt */
979 + interrupt-names = "pfe_hif", "pfe_hif_nocpy", "pfe_wol";
980 + memory-region = <&pfe_reserved>;
981 + fsl,pfe-scfg = <&scfg 0>;
982 + fsl,rcpm-wakeup = <&rcpm 0xf0000020>;
983 + clocks = <&clockgen 4 0>;
984 + clock-names = "pfe";
985 +
986 + status = "okay";
987 + pfe_mac0: ethernet@0 {
988 + };
989 +
990 + pfe_mac1: ethernet@1 {
991 + };
992 +};
993 +
994 +=============================================================================
995 +Ethernet Node
996 +
997 +DESCRIPTION
998 +
999 +Ethernet Node has all the properties associated with PFE used by platforms to
1000 +connect to PHY:
1001 +
1002 +PROPERTIES
1003 +
1004 +- compatible
1005 + Usage: required
1006 + Value type: <stringlist>
1007 + Definition: Must include "fsl,pfe-gemac-port"
1008 +
1009 +- reg
1010 + Usage: required
1011 + Value type: <prop-encoded-array>
1012 + Definition: A standard property.
1013 + Specifies the gemacid of the interface.
1014 +
1015 +- fsl,gemac-bus-id
1016 + Usage: required
1017 + Value type: <u32>
1018 + Definition: Must be present. Value should be the id of the bus
1019 + connected to gemac.
1020 +
1021 +- fsl,gemac-phy-id (deprecated binding)
1022 + Usage: required
1023 + Value type: <u32>
1024 + Definition: This binding shouldn't be used with new platforms.
1025 + Must be present. Value should be the id of the phy
1026 + connected to gemac.
1027 +
1028 +- fsl,mdio-mux-val
1029 + Usage: required
1030 + Value type: <u32>
1031 + Definition: Must be present. Value can be either 0 or 2 or 3.
1032 + This value is used to configure the mux to enable mdio.
1033 +
1034 +- phy-mode
1035 + Usage: required
1036 + Value type: <string>
1037 + Definition: Must include "sgmii"
1038 +
1039 +- fsl,pfe-phy-if-flags (deprecated binding)
1040 + Usage: required
1041 + Value type: <u32>
1042 + Definition: This binding shouldn't be used with new platforms.
1043 + Must be present. Value should be 0 by default.
1044	+ If there is no phy connected, this needs to be 1.
1045 +
1046 +- phy-handle
1047 + Usage: optional
1048 + Value type: <phandle>
1049 + Definition: phandle to the PHY device connected to this device.
1050 +
1051 +- mdio : A required subnode which specifies the mdio bus in the PFE and used as
1052 +a container for phy nodes according to ../phy.txt.
1053 +
1054 +EXAMPLE
1055 +
1056 +ethernet@0 {
1057 + compatible = "fsl,pfe-gemac-port";
1058 + #address-cells = <1>;
1059 + #size-cells = <0>;
1060 + reg = <0x0>; /* GEM_ID */
1061 + fsl,gemac-bus-id = <0x0>; /* BUS_ID */
1062 + fsl,mdio-mux-val = <0x0>;
1063 + phy-mode = "sgmii";
1064 + phy-handle = <&sgmii_phy1>;
1065 +};
1066 +
1067 +
1068 +ethernet@1 {
1069 + compatible = "fsl,pfe-gemac-port";
1070 + #address-cells = <1>;
1071 + #size-cells = <0>;
1072 + reg = <0x1>; /* GEM_ID */
1073 + fsl,gemac-bus-id = <0x1>; /* BUS_ID */
1074 + fsl,mdio-mux-val = <0x0>;
1075 + phy-mode = "sgmii";
1076 + phy-handle = <&sgmii_phy2>;
1077 +};
1078 +
1079 +mdio@0 {
1080 + #address-cells = <1>;
1081 + #size-cells = <0>;
1082 +
1083 + sgmii_phy1: ethernet-phy@2 {
1084 + reg = <0x2>;
1085 + };
1086 +
1087 + sgmii_phy2: ethernet-phy@1 {
1088 + reg = <0x1>;
1089 + };
1090 +};
1091 --- a/MAINTAINERS
1092 +++ b/MAINTAINERS
1093 @@ -7068,6 +7068,14 @@ F: drivers/ptp/ptp_qoriq.c
1094 F: drivers/ptp/ptp_qoriq_debugfs.c
1095 F: include/linux/fsl/ptp_qoriq.h
1096
1097 +FREESCALE QORIQ PPFE ETHERNET DRIVER
1098 +M: Anji Jagarlmudi <anji.jagarlmudi@nxp.com>
1099 +M: Calvin Johnson <calvin.johnson@nxp.com>
1100 +L: netdev@vger.kernel.org
1101 +S: Maintained
1102 +F: drivers/staging/fsl_ppfe
1103 +F: Documentation/devicetree/bindings/net/fsl_ppfe/pfe.txt
1104 +
1105 FREESCALE QUAD SPI DRIVER
1106 M: Han Xu <han.xu@nxp.com>
1107 L: linux-spi@vger.kernel.org
1108 --- a/drivers/staging/Kconfig
1109 +++ b/drivers/staging/Kconfig
1110 @@ -118,4 +118,6 @@ source "drivers/staging/wfx/Kconfig"
1111
1112 source "drivers/staging/hikey9xx/Kconfig"
1113
1114 +source "drivers/staging/fsl_ppfe/Kconfig"
1115 +
1116 endif # STAGING
1117 --- a/drivers/staging/Makefile
1118 +++ b/drivers/staging/Makefile
1119 @@ -49,3 +49,4 @@ obj-$(CONFIG_KPC2000) += kpc2000/
1120 obj-$(CONFIG_QLGE) += qlge/
1121 obj-$(CONFIG_WFX) += wfx/
1122 obj-y += hikey9xx/
1123 +obj-$(CONFIG_FSL_PPFE) += fsl_ppfe/
1124 --- /dev/null
1125 +++ b/drivers/staging/fsl_ppfe/Kconfig
1126 @@ -0,0 +1,21 @@
1127 +#
1128 +# Freescale Programmable Packet Forwarding Engine driver
1129 +#
1130 +config FSL_PPFE
1131 + tristate "Freescale PPFE Driver"
1132 + select FSL_GUTS
1133 + default n
1134 + help
1135 + Freescale LS1012A SoC has a Programmable Packet Forwarding Engine.
1136 + It provides two high performance ethernet interfaces.
1137 + This driver initializes, programs and controls the PPFE.
1138 + Use this driver to enable network connectivity on LS1012A platforms.
1139 +
1140 +if FSL_PPFE
1141 +
1142 +config FSL_PPFE_UTIL_DISABLED
1143 + bool "Disable PPFE UTIL Processor Engine"
1144 + help
1145 + UTIL PE has to be enabled only if required.
1146 +
1147 +endif # FSL_PPFE
1148 --- /dev/null
1149 +++ b/drivers/staging/fsl_ppfe/Makefile
1150 @@ -0,0 +1,20 @@
1151 +#
1152	+# Makefile for Freescale PPFE driver
1153 +#
1154 +
1155 +ccflags-y += -I $(srctree)/$(src)/include -I $(srctree)/$(src)
1156 +
1157 +obj-$(CONFIG_FSL_PPFE) += pfe.o
1158 +
1159 +pfe-y += pfe_mod.o \
1160 + pfe_hw.o \
1161 + pfe_firmware.o \
1162 + pfe_ctrl.o \
1163 + pfe_hif.o \
1164 + pfe_hif_lib.o\
1165 + pfe_eth.o \
1166 + pfe_sysfs.o \
1167 + pfe_debugfs.o \
1168 + pfe_ls1012a_platform.o \
1169 + pfe_hal.o \
1170 + pfe_cdev.o
1171 --- /dev/null
1172 +++ b/drivers/staging/fsl_ppfe/TODO
1173 @@ -0,0 +1,2 @@
1174 +TODO:
1175 + - provide pfe pe monitoring support
1176 --- /dev/null
1177 +++ b/drivers/staging/fsl_ppfe/include/pfe/cbus.h
1178 @@ -0,0 +1,78 @@
1179 +/*
1180 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
1181 + * Copyright 2017 NXP
1182 + *
1183 + * This program is free software; you can redistribute it and/or modify
1184 + * it under the terms of the GNU General Public License as published by
1185 + * the Free Software Foundation; either version 2 of the License, or
1186 + * (at your option) any later version.
1187 + *
1188 + * This program is distributed in the hope that it will be useful,
1189 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
1190 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1191 + * GNU General Public License for more details.
1192 + *
1193 + * You should have received a copy of the GNU General Public License
1194 + * along with this program. If not, see <http://www.gnu.org/licenses/>.
1195 + */
1196 +
1197 +#ifndef _CBUS_H_
1198 +#define _CBUS_H_
1199 +
1200 +#define EMAC1_BASE_ADDR (CBUS_BASE_ADDR + 0x200000)
1201 +#define EGPI1_BASE_ADDR (CBUS_BASE_ADDR + 0x210000)
1202 +#define EMAC2_BASE_ADDR (CBUS_BASE_ADDR + 0x220000)
1203 +#define EGPI2_BASE_ADDR (CBUS_BASE_ADDR + 0x230000)
1204 +#define BMU1_BASE_ADDR (CBUS_BASE_ADDR + 0x240000)
1205 +#define BMU2_BASE_ADDR (CBUS_BASE_ADDR + 0x250000)
1206 +#define ARB_BASE_ADDR (CBUS_BASE_ADDR + 0x260000)
1207 +#define DDR_CONFIG_BASE_ADDR (CBUS_BASE_ADDR + 0x270000)
1208 +#define HIF_BASE_ADDR (CBUS_BASE_ADDR + 0x280000)
1209 +#define HGPI_BASE_ADDR (CBUS_BASE_ADDR + 0x290000)
1210 +#define LMEM_BASE_ADDR (CBUS_BASE_ADDR + 0x300000)
1211 +#define LMEM_SIZE 0x10000
1212 +#define LMEM_END (LMEM_BASE_ADDR + LMEM_SIZE)
1213 +#define TMU_CSR_BASE_ADDR (CBUS_BASE_ADDR + 0x310000)
1214 +#define CLASS_CSR_BASE_ADDR (CBUS_BASE_ADDR + 0x320000)
1215 +#define HIF_NOCPY_BASE_ADDR (CBUS_BASE_ADDR + 0x350000)
1216 +#define UTIL_CSR_BASE_ADDR (CBUS_BASE_ADDR + 0x360000)
1217 +#define CBUS_GPT_BASE_ADDR (CBUS_BASE_ADDR + 0x370000)
1218 +
1219 +/*
1220 + * defgroup XXX_MEM_ACCESS_ADDR PE memory access through CSR
1221 + * XXX_MEM_ACCESS_ADDR register bit definitions.
1222 + */
1223 +#define PE_MEM_ACCESS_WRITE BIT(31) /* Internal Memory Write. */
1224 +#define PE_MEM_ACCESS_IMEM BIT(15)
1225 +#define PE_MEM_ACCESS_DMEM BIT(16)
1226 +
1227	+/* Byte Enables of the Internal memory access. These are interpreted in BE */
1228 +#define PE_MEM_ACCESS_BYTE_ENABLE(offset, size) \
1229 + ({ typeof(size) size_ = (size); \
1230 + (((BIT(size_) - 1) << (4 - (offset) - (size_))) & 0xf) << 24; })
1231 +
1232 +#include "cbus/emac_mtip.h"
1233 +#include "cbus/gpi.h"
1234 +#include "cbus/bmu.h"
1235 +#include "cbus/hif.h"
1236 +#include "cbus/tmu_csr.h"
1237 +#include "cbus/class_csr.h"
1238 +#include "cbus/hif_nocpy.h"
1239 +#include "cbus/util_csr.h"
1240 +
1241 +/* PFE cores states */
1242 +#define CORE_DISABLE 0x00000000
1243 +#define CORE_ENABLE 0x00000001
1244 +#define CORE_SW_RESET 0x00000002
1245 +
1246 +/* LMEM defines */
1247 +#define LMEM_HDR_SIZE 0x0010
1248 +#define LMEM_BUF_SIZE_LN2 0x7
1249 +#define LMEM_BUF_SIZE BIT(LMEM_BUF_SIZE_LN2)
1250 +
1251 +/* DDR defines */
1252 +#define DDR_HDR_SIZE 0x0100
1253 +#define DDR_BUF_SIZE_LN2 0xb
1254 +#define DDR_BUF_SIZE BIT(DDR_BUF_SIZE_LN2)
1255 +
1256 +#endif /* _CBUS_H_ */
1257 --- /dev/null
1258 +++ b/drivers/staging/fsl_ppfe/include/pfe/cbus/bmu.h
1259 @@ -0,0 +1,55 @@
1260 +/*
1261 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
1262 + * Copyright 2017 NXP
1263 + *
1264 + * This program is free software; you can redistribute it and/or modify
1265 + * it under the terms of the GNU General Public License as published by
1266 + * the Free Software Foundation; either version 2 of the License, or
1267 + * (at your option) any later version.
1268 + *
1269 + * This program is distributed in the hope that it will be useful,
1270 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
1271 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1272 + * GNU General Public License for more details.
1273 + *
1274 + * You should have received a copy of the GNU General Public License
1275 + * along with this program. If not, see <http://www.gnu.org/licenses/>.
1276 + */
1277 +
1278 +#ifndef _BMU_H_
1279 +#define _BMU_H_
1280 +
1281 +#define BMU_VERSION 0x000
1282 +#define BMU_CTRL 0x004
1283 +#define BMU_UCAST_CONFIG 0x008
1284 +#define BMU_UCAST_BASE_ADDR 0x00c
1285 +#define BMU_BUF_SIZE 0x010
1286 +#define BMU_BUF_CNT 0x014
1287 +#define BMU_THRES 0x018
1288 +#define BMU_INT_SRC 0x020
1289 +#define BMU_INT_ENABLE 0x024
1290 +#define BMU_ALLOC_CTRL 0x030
1291 +#define BMU_FREE_CTRL 0x034
1292 +#define BMU_FREE_ERR_ADDR 0x038
1293 +#define BMU_CURR_BUF_CNT 0x03c
1294 +#define BMU_MCAST_CNT 0x040
1295 +#define BMU_MCAST_ALLOC_CTRL 0x044
1296 +#define BMU_REM_BUF_CNT 0x048
1297 +#define BMU_LOW_WATERMARK 0x050
1298 +#define BMU_HIGH_WATERMARK 0x054
1299 +#define BMU_INT_MEM_ACCESS 0x100
1300 +
1301 +struct BMU_CFG {
1302 + unsigned long baseaddr;
1303 + u32 count;
1304 + u32 size;
1305 + u32 low_watermark;
1306 + u32 high_watermark;
1307 +};
1308 +
1309 +#define BMU1_BUF_SIZE LMEM_BUF_SIZE_LN2
1310 +#define BMU2_BUF_SIZE DDR_BUF_SIZE_LN2
1311 +
1312 +#define BMU2_MCAST_ALLOC_CTRL (BMU2_BASE_ADDR + BMU_MCAST_ALLOC_CTRL)
1313 +
1314 +#endif /* _BMU_H_ */
1315 --- /dev/null
1316 +++ b/drivers/staging/fsl_ppfe/include/pfe/cbus/class_csr.h
1317 @@ -0,0 +1,289 @@
1318 +/*
1319 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
1320 + * Copyright 2017 NXP
1321 + *
1322 + * This program is free software; you can redistribute it and/or modify
1323 + * it under the terms of the GNU General Public License as published by
1324 + * the Free Software Foundation; either version 2 of the License, or
1325 + * (at your option) any later version.
1326 + *
1327 + * This program is distributed in the hope that it will be useful,
1328 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
1329 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1330 + * GNU General Public License for more details.
1331 + *
1332 + * You should have received a copy of the GNU General Public License
1333 + * along with this program. If not, see <http://www.gnu.org/licenses/>.
1334 + */
1335 +
1336 +#ifndef _CLASS_CSR_H_
1337 +#define _CLASS_CSR_H_
1338 +
1339 +/* @file class_csr.h.
1340 + * class_csr - block containing all the classifier control and status register.
1341 + * Mapped on CBUS and accessible from all PE's and ARM.
1342 + */
1343 +#define CLASS_VERSION (CLASS_CSR_BASE_ADDR + 0x000)
1344 +#define CLASS_TX_CTRL (CLASS_CSR_BASE_ADDR + 0x004)
1345 +#define CLASS_INQ_PKTPTR (CLASS_CSR_BASE_ADDR + 0x010)
1346 +
1347 +/* (ddr_hdr_size[24:16], lmem_hdr_size[5:0]) */
1348 +#define CLASS_HDR_SIZE (CLASS_CSR_BASE_ADDR + 0x014)
1349 +
1350 +/* LMEM header size for the Classifier block.\ Data in the LMEM
1351 + * is written from this offset.
1352 + */
1353 +#define CLASS_HDR_SIZE_LMEM(off) ((off) & 0x3f)
1354 +
1355 +/* DDR header size for the Classifier block.\ Data in the DDR
1356 + * is written from this offset.
1357 + */
1358 +#define CLASS_HDR_SIZE_DDR(off) (((off) & 0x1ff) << 16)
1359 +
1360 +#define CLASS_PE0_QB_DM_ADDR0 (CLASS_CSR_BASE_ADDR + 0x020)
1361 +
1362 +/* DMEM address of first [15:0] and second [31:16] buffers on QB side. */
1363 +#define CLASS_PE0_QB_DM_ADDR1 (CLASS_CSR_BASE_ADDR + 0x024)
1364 +
1365 +/* DMEM address of third [15:0] and fourth [31:16] buffers on QB side. */
1366 +#define CLASS_PE0_RO_DM_ADDR0 (CLASS_CSR_BASE_ADDR + 0x060)
1367 +
1368 +/* DMEM address of first [15:0] and second [31:16] buffers on RO side. */
1369 +#define CLASS_PE0_RO_DM_ADDR1 (CLASS_CSR_BASE_ADDR + 0x064)
1370 +
1371 +/* DMEM address of third [15:0] and fourth [31:16] buffers on RO side. */
1372 +
1373 +/* @name Class PE memory access. Allows external PE's and HOST to
1374 + * read/write PMEM/DMEM memory ranges for each classifier PE.
1375 + */
1376 +/* {sr_pe_mem_cmd[31], csr_pe_mem_wren[27:24], csr_pe_mem_addr[23:0]},
1377 + * See \ref XXX_MEM_ACCESS_ADDR for details.
1378 + */
1379 +#define CLASS_MEM_ACCESS_ADDR (CLASS_CSR_BASE_ADDR + 0x100)
1380 +
1381 +/* Internal Memory Access Write Data [31:0] */
1382 +#define CLASS_MEM_ACCESS_WDATA (CLASS_CSR_BASE_ADDR + 0x104)
1383 +
1384 +/* Internal Memory Access Read Data [31:0] */
1385 +#define CLASS_MEM_ACCESS_RDATA (CLASS_CSR_BASE_ADDR + 0x108)
1386 +#define CLASS_TM_INQ_ADDR (CLASS_CSR_BASE_ADDR + 0x114)
1387 +#define CLASS_PE_STATUS (CLASS_CSR_BASE_ADDR + 0x118)
1388 +
1389 +#define CLASS_PHY1_RX_PKTS (CLASS_CSR_BASE_ADDR + 0x11c)
1390 +#define CLASS_PHY1_TX_PKTS (CLASS_CSR_BASE_ADDR + 0x120)
1391 +#define CLASS_PHY1_LP_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x124)
1392 +#define CLASS_PHY1_INTF_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x128)
1393 +#define CLASS_PHY1_INTF_MATCH_PKTS (CLASS_CSR_BASE_ADDR + 0x12c)
1394 +#define CLASS_PHY1_L3_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x130)
1395 +#define CLASS_PHY1_V4_PKTS (CLASS_CSR_BASE_ADDR + 0x134)
1396 +#define CLASS_PHY1_V6_PKTS (CLASS_CSR_BASE_ADDR + 0x138)
1397 +#define CLASS_PHY1_CHKSUM_ERR_PKTS (CLASS_CSR_BASE_ADDR + 0x13c)
1398 +#define CLASS_PHY1_TTL_ERR_PKTS (CLASS_CSR_BASE_ADDR + 0x140)
1399 +#define CLASS_PHY2_RX_PKTS (CLASS_CSR_BASE_ADDR + 0x144)
1400 +#define CLASS_PHY2_TX_PKTS (CLASS_CSR_BASE_ADDR + 0x148)
1401 +#define CLASS_PHY2_LP_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x14c)
1402 +#define CLASS_PHY2_INTF_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x150)
1403 +#define CLASS_PHY2_INTF_MATCH_PKTS (CLASS_CSR_BASE_ADDR + 0x154)
1404 +#define CLASS_PHY2_L3_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x158)
1405 +#define CLASS_PHY2_V4_PKTS (CLASS_CSR_BASE_ADDR + 0x15c)
1406 +#define CLASS_PHY2_V6_PKTS (CLASS_CSR_BASE_ADDR + 0x160)
1407 +#define CLASS_PHY2_CHKSUM_ERR_PKTS (CLASS_CSR_BASE_ADDR + 0x164)
1408 +#define CLASS_PHY2_TTL_ERR_PKTS (CLASS_CSR_BASE_ADDR + 0x168)
1409 +#define CLASS_PHY3_RX_PKTS (CLASS_CSR_BASE_ADDR + 0x16c)
1410 +#define CLASS_PHY3_TX_PKTS (CLASS_CSR_BASE_ADDR + 0x170)
1411 +#define CLASS_PHY3_LP_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x174)
1412 +#define CLASS_PHY3_INTF_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x178)
1413 +#define CLASS_PHY3_INTF_MATCH_PKTS (CLASS_CSR_BASE_ADDR + 0x17c)
1414 +#define CLASS_PHY3_L3_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x180)
1415 +#define CLASS_PHY3_V4_PKTS (CLASS_CSR_BASE_ADDR + 0x184)
1416 +#define CLASS_PHY3_V6_PKTS (CLASS_CSR_BASE_ADDR + 0x188)
1417 +#define CLASS_PHY3_CHKSUM_ERR_PKTS (CLASS_CSR_BASE_ADDR + 0x18c)
1418 +#define CLASS_PHY3_TTL_ERR_PKTS (CLASS_CSR_BASE_ADDR + 0x190)
1419 +#define CLASS_PHY1_ICMP_PKTS (CLASS_CSR_BASE_ADDR + 0x194)
1420 +#define CLASS_PHY1_IGMP_PKTS (CLASS_CSR_BASE_ADDR + 0x198)
1421 +#define CLASS_PHY1_TCP_PKTS (CLASS_CSR_BASE_ADDR + 0x19c)
1422 +#define CLASS_PHY1_UDP_PKTS (CLASS_CSR_BASE_ADDR + 0x1a0)
1423 +#define CLASS_PHY2_ICMP_PKTS (CLASS_CSR_BASE_ADDR + 0x1a4)
1424 +#define CLASS_PHY2_IGMP_PKTS (CLASS_CSR_BASE_ADDR + 0x1a8)
1425 +#define CLASS_PHY2_TCP_PKTS (CLASS_CSR_BASE_ADDR + 0x1ac)
1426 +#define CLASS_PHY2_UDP_PKTS (CLASS_CSR_BASE_ADDR + 0x1b0)
1427 +#define CLASS_PHY3_ICMP_PKTS (CLASS_CSR_BASE_ADDR + 0x1b4)
1428 +#define CLASS_PHY3_IGMP_PKTS (CLASS_CSR_BASE_ADDR + 0x1b8)
1429 +#define CLASS_PHY3_TCP_PKTS (CLASS_CSR_BASE_ADDR + 0x1bc)
1430 +#define CLASS_PHY3_UDP_PKTS (CLASS_CSR_BASE_ADDR + 0x1c0)
1431 +#define CLASS_PHY4_ICMP_PKTS (CLASS_CSR_BASE_ADDR + 0x1c4)
1432 +#define CLASS_PHY4_IGMP_PKTS (CLASS_CSR_BASE_ADDR + 0x1c8)
1433 +#define CLASS_PHY4_TCP_PKTS (CLASS_CSR_BASE_ADDR + 0x1cc)
1434 +#define CLASS_PHY4_UDP_PKTS (CLASS_CSR_BASE_ADDR + 0x1d0)
1435 +#define CLASS_PHY4_RX_PKTS (CLASS_CSR_BASE_ADDR + 0x1d4)
1436 +#define CLASS_PHY4_TX_PKTS (CLASS_CSR_BASE_ADDR + 0x1d8)
1437 +#define CLASS_PHY4_LP_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x1dc)
1438 +#define CLASS_PHY4_INTF_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x1e0)
1439 +#define CLASS_PHY4_INTF_MATCH_PKTS (CLASS_CSR_BASE_ADDR + 0x1e4)
1440 +#define CLASS_PHY4_L3_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x1e8)
1441 +#define CLASS_PHY4_V4_PKTS (CLASS_CSR_BASE_ADDR + 0x1ec)
1442 +#define CLASS_PHY4_V6_PKTS (CLASS_CSR_BASE_ADDR + 0x1f0)
1443 +#define CLASS_PHY4_CHKSUM_ERR_PKTS (CLASS_CSR_BASE_ADDR + 0x1f4)
1444 +#define CLASS_PHY4_TTL_ERR_PKTS (CLASS_CSR_BASE_ADDR + 0x1f8)
1445 +
1446 +#define CLASS_PE_SYS_CLK_RATIO (CLASS_CSR_BASE_ADDR + 0x200)
1447 +#define CLASS_AFULL_THRES (CLASS_CSR_BASE_ADDR + 0x204)
1448 +#define CLASS_GAP_BETWEEN_READS (CLASS_CSR_BASE_ADDR + 0x208)
1449 +#define CLASS_MAX_BUF_CNT (CLASS_CSR_BASE_ADDR + 0x20c)
1450 +#define CLASS_TSQ_FIFO_THRES (CLASS_CSR_BASE_ADDR + 0x210)
1451 +#define CLASS_TSQ_MAX_CNT (CLASS_CSR_BASE_ADDR + 0x214)
1452 +#define CLASS_IRAM_DATA_0 (CLASS_CSR_BASE_ADDR + 0x218)
1453 +#define CLASS_IRAM_DATA_1 (CLASS_CSR_BASE_ADDR + 0x21c)
1454 +#define CLASS_IRAM_DATA_2 (CLASS_CSR_BASE_ADDR + 0x220)
1455 +#define CLASS_IRAM_DATA_3 (CLASS_CSR_BASE_ADDR + 0x224)
1456 +
1457 +#define CLASS_BUS_ACCESS_ADDR (CLASS_CSR_BASE_ADDR + 0x228)
1458 +
1459 +#define CLASS_BUS_ACCESS_WDATA (CLASS_CSR_BASE_ADDR + 0x22c)
1460 +#define CLASS_BUS_ACCESS_RDATA (CLASS_CSR_BASE_ADDR + 0x230)
1461 +
1462 +/* (route_entry_size[9:0], route_hash_size[23:16]
1463 + * (this is actually ln2(size)))
1464 + */
1465 +#define CLASS_ROUTE_HASH_ENTRY_SIZE (CLASS_CSR_BASE_ADDR + 0x234)
1466 +
1467 +#define CLASS_ROUTE_ENTRY_SIZE(size) ((size) & 0x1ff)
1468 +#define CLASS_ROUTE_HASH_SIZE(hash_bits) (((hash_bits) & 0xff) << 16)
1469 +
1470 +#define CLASS_ROUTE_TABLE_BASE (CLASS_CSR_BASE_ADDR + 0x238)
1471 +
1472 +#define CLASS_ROUTE_MULTI (CLASS_CSR_BASE_ADDR + 0x23c)
1473 +#define CLASS_SMEM_OFFSET (CLASS_CSR_BASE_ADDR + 0x240)
1474 +#define CLASS_LMEM_BUF_SIZE (CLASS_CSR_BASE_ADDR + 0x244)
1475 +#define CLASS_VLAN_ID (CLASS_CSR_BASE_ADDR + 0x248)
1476 +#define CLASS_BMU1_BUF_FREE (CLASS_CSR_BASE_ADDR + 0x24c)
1477 +#define CLASS_USE_TMU_INQ (CLASS_CSR_BASE_ADDR + 0x250)
1478 +#define CLASS_VLAN_ID1 (CLASS_CSR_BASE_ADDR + 0x254)
1479 +
1480 +#define CLASS_BUS_ACCESS_BASE (CLASS_CSR_BASE_ADDR + 0x258)
1481 +#define CLASS_BUS_ACCESS_BASE_MASK (0xFF000000)
1482 +/* bits 31:24 of the PE peripheral address are stored in CLASS_BUS_ACCESS_BASE */
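Illustrative only: given the mask above, a PE peripheral address would be split so its top byte is latched in CLASS_BUS_ACCESS_BASE while the low 24 bits travel in CLASS_BUS_ACCESS_ADDR. The helper below is a hypothetical sketch of that split, not the driver's code:

/* Sketch: set the bus-access window for a full 32-bit PE address. */
static void class_bus_set_window(u32 pe_addr)
{
	/* bits 31:24 select the window... */
	writel(pe_addr & CLASS_BUS_ACCESS_BASE_MASK, CLASS_BUS_ACCESS_BASE);
	/* ...bits 23:0 select the offset within it */
	writel(pe_addr & ~CLASS_BUS_ACCESS_BASE_MASK, CLASS_BUS_ACCESS_ADDR);
}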
1483 +
1484 +#define CLASS_HIF_PARSE (CLASS_CSR_BASE_ADDR + 0x25c)
1485 +
1486 +#define CLASS_HOST_PE0_GP (CLASS_CSR_BASE_ADDR + 0x260)
1487 +#define CLASS_PE0_GP (CLASS_CSR_BASE_ADDR + 0x264)
1488 +#define CLASS_HOST_PE1_GP (CLASS_CSR_BASE_ADDR + 0x268)
1489 +#define CLASS_PE1_GP (CLASS_CSR_BASE_ADDR + 0x26c)
1490 +#define CLASS_HOST_PE2_GP (CLASS_CSR_BASE_ADDR + 0x270)
1491 +#define CLASS_PE2_GP (CLASS_CSR_BASE_ADDR + 0x274)
1492 +#define CLASS_HOST_PE3_GP (CLASS_CSR_BASE_ADDR + 0x278)
1493 +#define CLASS_PE3_GP (CLASS_CSR_BASE_ADDR + 0x27c)
1494 +#define CLASS_HOST_PE4_GP (CLASS_CSR_BASE_ADDR + 0x280)
1495 +#define CLASS_PE4_GP (CLASS_CSR_BASE_ADDR + 0x284)
1496 +#define CLASS_HOST_PE5_GP (CLASS_CSR_BASE_ADDR + 0x288)
1497 +#define CLASS_PE5_GP (CLASS_CSR_BASE_ADDR + 0x28c)
1498 +
1499 +#define CLASS_PE_INT_SRC (CLASS_CSR_BASE_ADDR + 0x290)
1500 +#define CLASS_PE_INT_ENABLE (CLASS_CSR_BASE_ADDR + 0x294)
1501 +
1502 +#define CLASS_TPID0_TPID1 (CLASS_CSR_BASE_ADDR + 0x298)
1503 +#define CLASS_TPID2 (CLASS_CSR_BASE_ADDR + 0x29c)
1504 +
1505 +#define CLASS_L4_CHKSUM_ADDR (CLASS_CSR_BASE_ADDR + 0x2a0)
1506 +
1507 +#define CLASS_PE0_DEBUG (CLASS_CSR_BASE_ADDR + 0x2a4)
1508 +#define CLASS_PE1_DEBUG (CLASS_CSR_BASE_ADDR + 0x2a8)
1509 +#define CLASS_PE2_DEBUG (CLASS_CSR_BASE_ADDR + 0x2ac)
1510 +#define CLASS_PE3_DEBUG (CLASS_CSR_BASE_ADDR + 0x2b0)
1511 +#define CLASS_PE4_DEBUG (CLASS_CSR_BASE_ADDR + 0x2b4)
1512 +#define CLASS_PE5_DEBUG (CLASS_CSR_BASE_ADDR + 0x2b8)
1513 +
1514 +#define CLASS_STATE (CLASS_CSR_BASE_ADDR + 0x2bc)
1515 +
1516 +/* CLASS defines */
1517 +#define CLASS_PBUF_SIZE 0x100 /* Fixed by hardware */
1518 +#define CLASS_PBUF_HEADER_OFFSET 0x80 /* Can be configured */
1519 +
1520 +/* Can be configured */
1521 +#define CLASS_PBUF0_BASE_ADDR 0x000
1522 +/* Can be configured */
1523 +#define CLASS_PBUF1_BASE_ADDR (CLASS_PBUF0_BASE_ADDR + CLASS_PBUF_SIZE)
1524 +/* Can be configured */
1525 +#define CLASS_PBUF2_BASE_ADDR (CLASS_PBUF1_BASE_ADDR + CLASS_PBUF_SIZE)
1526 +/* Can be configured */
1527 +#define CLASS_PBUF3_BASE_ADDR (CLASS_PBUF2_BASE_ADDR + CLASS_PBUF_SIZE)
1528 +
1529 +#define CLASS_PBUF0_HEADER_BASE_ADDR (CLASS_PBUF0_BASE_ADDR + \
1530 + CLASS_PBUF_HEADER_OFFSET)
1531 +#define CLASS_PBUF1_HEADER_BASE_ADDR (CLASS_PBUF1_BASE_ADDR + \
1532 + CLASS_PBUF_HEADER_OFFSET)
1533 +#define CLASS_PBUF2_HEADER_BASE_ADDR (CLASS_PBUF2_BASE_ADDR + \
1534 + CLASS_PBUF_HEADER_OFFSET)
1535 +#define CLASS_PBUF3_HEADER_BASE_ADDR (CLASS_PBUF3_BASE_ADDR + \
1536 + CLASS_PBUF_HEADER_OFFSET)
1537 +
1538 +#define CLASS_PE0_RO_DM_ADDR0_VAL ((CLASS_PBUF1_BASE_ADDR << 16) | \
1539 + CLASS_PBUF0_BASE_ADDR)
1540 +#define CLASS_PE0_RO_DM_ADDR1_VAL ((CLASS_PBUF3_BASE_ADDR << 16) | \
1541 + CLASS_PBUF2_BASE_ADDR)
1542 +
1543 +#define CLASS_PE0_QB_DM_ADDR0_VAL ((CLASS_PBUF1_HEADER_BASE_ADDR << 16) |\
1544 + CLASS_PBUF0_HEADER_BASE_ADDR)
1545 +#define CLASS_PE0_QB_DM_ADDR1_VAL ((CLASS_PBUF3_HEADER_BASE_ADDR << 16) |\
1546 + CLASS_PBUF2_HEADER_BASE_ADDR)
1547 +
1548 +#define CLASS_ROUTE_SIZE 128
1549 +#define CLASS_MAX_ROUTE_SIZE 256
1550 +#define CLASS_ROUTE_HASH_BITS 20
1551 +#define CLASS_ROUTE_HASH_MASK (BIT(CLASS_ROUTE_HASH_BITS) - 1)
1552 +
1553 +/* Can be configured */
1554 +#define CLASS_ROUTE0_BASE_ADDR 0x400
1555 +/* Can be configured */
1556 +#define CLASS_ROUTE1_BASE_ADDR (CLASS_ROUTE0_BASE_ADDR + CLASS_ROUTE_SIZE)
1557 +/* Can be configured */
1558 +#define CLASS_ROUTE2_BASE_ADDR (CLASS_ROUTE1_BASE_ADDR + CLASS_ROUTE_SIZE)
1559 +/* Can be configured */
1560 +#define CLASS_ROUTE3_BASE_ADDR (CLASS_ROUTE2_BASE_ADDR + CLASS_ROUTE_SIZE)
1561 +
1562 +#define CLASS_SA_SIZE 128
1563 +#define CLASS_IPSEC_SA0_BASE_ADDR 0x600
1564 +/* not used */
1565 +#define CLASS_IPSEC_SA1_BASE_ADDR (CLASS_IPSEC_SA0_BASE_ADDR + CLASS_SA_SIZE)
1566 +/* not used */
1567 +#define CLASS_IPSEC_SA2_BASE_ADDR (CLASS_IPSEC_SA1_BASE_ADDR + CLASS_SA_SIZE)
1568 +/* not used */
1569 +#define CLASS_IPSEC_SA3_BASE_ADDR (CLASS_IPSEC_SA2_BASE_ADDR + CLASS_SA_SIZE)
1570 +
1571 +/* general-purpose free DMEM buffer, last portion of the 2K DMEM pbuf */
1572 +#define CLASS_GP_DMEM_BUF_SIZE (2048 - (CLASS_PBUF_SIZE * 4) - \
1573 + (CLASS_ROUTE_SIZE * 4) - (CLASS_SA_SIZE))
1574 +#define CLASS_GP_DMEM_BUF ((void *)(CLASS_IPSEC_SA0_BASE_ADDR + \
1575 + CLASS_SA_SIZE))
1576 +
1577 +#define TWO_LEVEL_ROUTE BIT(0)
1578 +#define PHYNO_IN_HASH BIT(1)
1579 +#define HW_ROUTE_FETCH BIT(3)
1580 +#define HW_BRIDGE_FETCH BIT(5)
1581 +#define IP_ALIGNED BIT(6)
1582 +#define ARC_HIT_CHECK_EN BIT(7)
1583 +#define CLASS_TOE BIT(11)
1584 +#define HASH_NORMAL (0 << 12)
1585 +#define HASH_CRC_PORT BIT(12)
1586 +#define HASH_CRC_IP (2 << 12)
1587 +#define HASH_CRC_PORT_IP (3 << 12)
1588 +#define QB2BUS_LE BIT(15)
1589 +
1590 +#define TCP_CHKSUM_DROP BIT(0)
1591 +#define UDP_CHKSUM_DROP BIT(1)
1592 +#define IPV4_CHKSUM_DROP BIT(9)
1593 +
1594 +/* CLASS_HIF_PARSE bits */
1595 +#define HIF_PKT_CLASS_EN BIT(0)
1596 +#define HIF_PKT_OFFSET(ofst) (((ofst) & 0xF) << 1)
1597 +
1598 +struct class_cfg {
1599 + u32 toe_mode;
1600 + unsigned long route_table_baseaddr;
1601 + u32 route_table_hash_bits;
1602 + u32 pe_sys_clk_ratio;
1603 + u32 resume;
1604 +};
1605 +
1606 +#endif /* _CLASS_CSR_H_ */
1607 --- /dev/null
1608 +++ b/drivers/staging/fsl_ppfe/include/pfe/cbus/emac_mtip.h
1609 @@ -0,0 +1,242 @@
1610 +/*
1611 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
1612 + * Copyright 2017 NXP
1613 + *
1614 + * This program is free software; you can redistribute it and/or modify
1615 + * it under the terms of the GNU General Public License as published by
1616 + * the Free Software Foundation; either version 2 of the License, or
1617 + * (at your option) any later version.
1618 + *
1619 + * This program is distributed in the hope that it will be useful,
1620 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
1621 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1622 + * GNU General Public License for more details.
1623 + *
1624 + * You should have received a copy of the GNU General Public License
1625 + * along with this program. If not, see <http://www.gnu.org/licenses/>.
1626 + */
1627 +
1628 +#ifndef _EMAC_H_
1629 +#define _EMAC_H_
1630 +
1631 +#include <linux/ethtool.h>
1632 +
1633 +#define EMAC_IEVENT_REG 0x004
1634 +#define EMAC_IMASK_REG 0x008
1635 +#define EMAC_R_DES_ACTIVE_REG 0x010
1636 +#define EMAC_X_DES_ACTIVE_REG 0x014
1637 +#define EMAC_ECNTRL_REG 0x024
1638 +#define EMAC_MII_DATA_REG 0x040
1639 +#define EMAC_MII_CTRL_REG 0x044
1640 +#define EMAC_MIB_CTRL_STS_REG 0x064
1641 +#define EMAC_RCNTRL_REG 0x084
1642 +#define EMAC_TCNTRL_REG 0x0C4
1643 +#define EMAC_PHY_ADDR_LOW 0x0E4
1644 +#define EMAC_PHY_ADDR_HIGH 0x0E8
1645 +#define EMAC_GAUR 0x120
1646 +#define EMAC_GALR 0x124
1647 +#define EMAC_TFWR_STR_FWD 0x144
1648 +#define EMAC_RX_SECTION_FULL 0x190
1649 +#define EMAC_RX_SECTION_EMPTY 0x194
1650 +#define EMAC_TX_SECTION_EMPTY 0x1A0
1651 +#define EMAC_TRUNC_FL 0x1B0
1652 +
1653 +#define RMON_T_DROP 0x200 /* Count of frames not counted correctly */
1654 +#define RMON_T_PACKETS 0x204 /* RMON TX packet count */
1655 +#define RMON_T_BC_PKT 0x208 /* RMON TX broadcast pkts */
1656 +#define RMON_T_MC_PKT 0x20c /* RMON TX multicast pkts */
1657 +#define RMON_T_CRC_ALIGN 0x210 /* RMON TX pkts with CRC align err */
1658 +#define RMON_T_UNDERSIZE 0x214 /* RMON TX pkts < 64 bytes, good CRC */
1659 +#define RMON_T_OVERSIZE 0x218 /* RMON TX pkts > MAX_FL bytes good CRC */
1660 +#define RMON_T_FRAG 0x21c /* RMON TX pkts < 64 bytes, bad CRC */
1661 +#define RMON_T_JAB 0x220 /* RMON TX pkts > MAX_FL bytes, bad CRC */
1662 +#define RMON_T_COL 0x224 /* RMON TX collision count */
1663 +#define RMON_T_P64 0x228 /* RMON TX 64 byte pkts */
1664 +#define RMON_T_P65TO127 0x22c /* RMON TX 65 to 127 byte pkts */
1665 +#define RMON_T_P128TO255 0x230 /* RMON TX 128 to 255 byte pkts */
1666 +#define RMON_T_P256TO511 0x234 /* RMON TX 256 to 511 byte pkts */
1667 +#define RMON_T_P512TO1023 0x238 /* RMON TX 512 to 1023 byte pkts */
1668 +#define RMON_T_P1024TO2047 0x23c /* RMON TX 1024 to 2047 byte pkts */
1669 +#define RMON_T_P_GTE2048 0x240 /* RMON TX pkts > 2048 bytes */
1670 +#define RMON_T_OCTETS 0x244 /* RMON TX octets */
1671 +#define IEEE_T_DROP 0x248 /* Count of frames not counted correctly */
1672 +#define IEEE_T_FRAME_OK 0x24c /* Frames tx'd OK */
1673 +#define IEEE_T_1COL 0x250 /* Frames tx'd with single collision */
1674 +#define IEEE_T_MCOL 0x254 /* Frames tx'd with multiple collision */
1675 +#define IEEE_T_DEF 0x258 /* Frames tx'd after deferral delay */
1676 +#define IEEE_T_LCOL 0x25c /* Frames tx'd with late collision */
1677 +#define IEEE_T_EXCOL 0x260 /* Frames tx'd with excessive collisions */
1678 +#define IEEE_T_MACERR 0x264 /* Frames tx'd with TX FIFO underrun */
1679 +#define IEEE_T_CSERR 0x268 /* Frames tx'd with carrier sense err */
1680 +#define IEEE_T_SQE 0x26c /* Frames tx'd with SQE err */
1681 +#define IEEE_T_FDXFC 0x270 /* Flow control pause frames tx'd */
1682 +#define IEEE_T_OCTETS_OK 0x274 /* Octet count for frames tx'd w/o err */
1683 +#define RMON_R_PACKETS 0x284 /* RMON RX packet count */
1684 +#define RMON_R_BC_PKT 0x288 /* RMON RX broadcast pkts */
1685 +#define RMON_R_MC_PKT 0x28c /* RMON RX multicast pkts */
1686 +#define RMON_R_CRC_ALIGN 0x290 /* RMON RX pkts with CRC alignment err */
1687 +#define RMON_R_UNDERSIZE 0x294 /* RMON RX pkts < 64 bytes, good CRC */
1688 +#define RMON_R_OVERSIZE 0x298 /* RMON RX pkts > MAX_FL bytes good CRC */
1689 +#define RMON_R_FRAG 0x29c /* RMON RX pkts < 64 bytes, bad CRC */
1690 +#define RMON_R_JAB 0x2a0 /* RMON RX pkts > MAX_FL bytes, bad CRC */
1691 +#define RMON_R_RESVD_O 0x2a4 /* Reserved */
1692 +#define RMON_R_P64 0x2a8 /* RMON RX 64 byte pkts */
1693 +#define RMON_R_P65TO127 0x2ac /* RMON RX 65 to 127 byte pkts */
1694 +#define RMON_R_P128TO255 0x2b0 /* RMON RX 128 to 255 byte pkts */
1695 +#define RMON_R_P256TO511 0x2b4 /* RMON RX 256 to 511 byte pkts */
1696 +#define RMON_R_P512TO1023 0x2b8 /* RMON RX 512 to 1023 byte pkts */
1697 +#define RMON_R_P1024TO2047 0x2bc /* RMON RX 1024 to 2047 byte pkts */
1698 +#define RMON_R_P_GTE2048 0x2c0 /* RMON RX pkts > 2048 bytes */
1699 +#define RMON_R_OCTETS 0x2c4 /* RMON RX octets */
1700 +#define IEEE_R_DROP 0x2c8 /* Count frames not counted correctly */
1701 +#define IEEE_R_FRAME_OK 0x2cc /* Frames rx'd OK */
1702 +#define IEEE_R_CRC 0x2d0 /* Frames rx'd with CRC err */
1703 +#define IEEE_R_ALIGN 0x2d4 /* Frames rx'd with alignment err */
1704 +#define IEEE_R_MACERR 0x2d8 /* Receive FIFO overflow count */
1705 +#define IEEE_R_FDXFC 0x2dc /* Flow control pause frames rx'd */
1706 +#define IEEE_R_OCTETS_OK 0x2e0 /* Octet cnt for frames rx'd w/o err */
1707 +
1708 +#define EMAC_SMAC_0_0 0x500 /* Supplemental MAC Address 0 (RW). */
1709 +#define EMAC_SMAC_0_1 0x504 /* Supplemental MAC Address 0 (RW). */
1710 +
1711 +/* GEMAC definitions and settings */
1712 +
1713 +#define EMAC_PORT_0 0
1714 +#define EMAC_PORT_1 1
1715 +
1716 +/* GEMAC Bit definitions */
1717 +#define EMAC_IEVENT_HBERR 0x80000000
1718 +#define EMAC_IEVENT_BABR 0x40000000
1719 +#define EMAC_IEVENT_BABT 0x20000000
1720 +#define EMAC_IEVENT_GRA 0x10000000
1721 +#define EMAC_IEVENT_TXF 0x08000000
1722 +#define EMAC_IEVENT_TXB 0x04000000
1723 +#define EMAC_IEVENT_RXF 0x02000000
1724 +#define EMAC_IEVENT_RXB 0x01000000
1725 +#define EMAC_IEVENT_MII 0x00800000
1726 +#define EMAC_IEVENT_EBERR 0x00400000
1727 +#define EMAC_IEVENT_LC 0x00200000
1728 +#define EMAC_IEVENT_RL 0x00100000
1729 +#define EMAC_IEVENT_UN 0x00080000
1730 +
1731 +#define EMAC_IMASK_HBERR 0x80000000
1732 +#define EMAC_IMASK_BABR 0x40000000
1733 +#define EMAC_IMASKT_BABT 0x20000000
1734 +#define EMAC_IMASK_GRA 0x10000000
1735 +#define EMAC_IMASKT_TXF 0x08000000
1736 +#define EMAC_IMASK_TXB 0x04000000
1737 +#define EMAC_IMASKT_RXF 0x02000000
1738 +#define EMAC_IMASK_RXB 0x01000000
1739 +#define EMAC_IMASK_MII 0x00800000
1740 +#define EMAC_IMASK_EBERR 0x00400000
1741 +#define EMAC_IMASK_LC 0x00200000
1742 +#define EMAC_IMASKT_RL 0x00100000
1743 +#define EMAC_IMASK_UN 0x00080000
1744 +
1745 +#define EMAC_RCNTRL_MAX_FL_SHIFT 16
1746 +#define EMAC_RCNTRL_LOOP 0x00000001
1747 +#define EMAC_RCNTRL_DRT 0x00000002
1748 +#define EMAC_RCNTRL_MII_MODE 0x00000004
1749 +#define EMAC_RCNTRL_PROM 0x00000008
1750 +#define EMAC_RCNTRL_BC_REJ 0x00000010
1751 +#define EMAC_RCNTRL_FCE 0x00000020
1752 +#define EMAC_RCNTRL_RGMII 0x00000040
1753 +#define EMAC_RCNTRL_SGMII 0x00000080
1754 +#define EMAC_RCNTRL_RMII 0x00000100
1755 +#define EMAC_RCNTRL_RMII_10T 0x00000200
1756 +#define EMAC_RCNTRL_CRC_FWD 0x00004000
1757 +
1758 +#define EMAC_TCNTRL_GTS 0x00000001
1759 +#define EMAC_TCNTRL_HBC 0x00000002
1760 +#define EMAC_TCNTRL_FDEN 0x00000004
1761 +#define EMAC_TCNTRL_TFC_PAUSE 0x00000008
1762 +#define EMAC_TCNTRL_RFC_PAUSE 0x00000010
1763 +
1764 +#define EMAC_ECNTRL_RESET 0x00000001 /* reset the EMAC */
1765 +#define EMAC_ECNTRL_ETHER_EN 0x00000002 /* enable the EMAC */
1766 +#define EMAC_ECNTRL_MAGIC_ENA 0x00000004
1767 +#define EMAC_ECNTRL_SLEEP 0x00000008
1768 +#define EMAC_ECNTRL_SPEED 0x00000020
1769 +#define EMAC_ECNTRL_DBSWAP 0x00000100
1770 +
1771 +#define EMAC_X_WMRK_STRFWD 0x00000100
1772 +
1773 +#define EMAC_X_DES_ACTIVE_TDAR 0x01000000
1774 +#define EMAC_R_DES_ACTIVE_RDAR 0x01000000
1775 +
1776 +#define EMAC_RX_SECTION_EMPTY_V 0x00010006
1777 +/*
1778 + * The possible operating speeds of the MAC, currently supporting 10, 100 and
1779 + * 1000 Mbps modes (SPEED_1000M_PCS selects 1000 Mbps operation via the PCS).
1780 + */
1781 +enum mac_speed {SPEED_10M, SPEED_100M, SPEED_1000M, SPEED_1000M_PCS};
1782 +
1783 +/* MII-related definitions */
1784 +#define EMAC_MII_DATA_ST 0x40000000 /* Start of frame delimiter */
1785 +#define EMAC_MII_DATA_OP_RD 0x20000000 /* Perform a read operation */
1786 +#define EMAC_MII_DATA_OP_CL45_RD 0x30000000 /* Perform a Clause 45 read */
1787 +#define EMAC_MII_DATA_OP_WR 0x10000000 /* Perform a write operation */
1788 +#define EMAC_MII_DATA_OP_CL45_WR 0x10000000 /* Perform a Clause 45 write */
1789 +#define EMAC_MII_DATA_PA_MSK 0x0f800000 /* PHY Address field mask */
1790 +#define EMAC_MII_DATA_RA_MSK 0x007c0000 /* PHY Register field mask */
1791 +#define EMAC_MII_DATA_TA 0x00020000 /* Turnaround */
1792 +#define EMAC_MII_DATA_DATAMSK 0x0000ffff /* PHY data field */
1793 +
1794 +#define EMAC_MII_DATA_RA_SHIFT 18 /* MII Register address bits */
1795 +#define EMAC_MII_DATA_RA_MASK 0x1F /* MII Register address mask */
1796 +#define EMAC_MII_DATA_PA_SHIFT 23 /* MII PHY address bits */
1797 +#define EMAC_MII_DATA_PA_MASK 0x1F /* MII PHY address mask */
1798 +
1799 +#define EMAC_MII_DATA_RA(v) (((v) & EMAC_MII_DATA_RA_MASK) << \
1800 + EMAC_MII_DATA_RA_SHIFT)
1801 +#define EMAC_MII_DATA_PA(v) (((v) & EMAC_MII_DATA_PA_MASK) << \
1802 + EMAC_MII_DATA_PA_SHIFT)
1803 +#define EMAC_MII_DATA(v) ((v) & 0xffff)
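To make the field encoding concrete, here is a hedged sketch (not from the patch) of a Clause 22 MDIO read built from the macros above; completion is normally detected by polling EMAC_IEVENT_MII, which is elided here:

/* Sketch: issue a Clause 22 MDIO read and extract the 16-bit result. */
static u32 emac_mdio_read(void __iomem *base, int phy_addr, int reg_addr)
{
	u32 cmd = EMAC_MII_DATA_ST | EMAC_MII_DATA_OP_RD | EMAC_MII_DATA_TA |
		  EMAC_MII_DATA_PA(phy_addr) | EMAC_MII_DATA_RA(reg_addr);

	writel(cmd, base + EMAC_MII_DATA_REG);
	/* ... wait for EMAC_IEVENT_MII to assert, then: */
	return EMAC_MII_DATA(readl(base + EMAC_MII_DATA_REG));
}

A Clause 45 access would substitute EMAC_MII_DATA_OP_CL45_RD or EMAC_MII_DATA_OP_CL45_WR for the opcode.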
1804 +
1805 +#define EMAC_MII_SPEED_SHIFT 1
1806 +#define EMAC_HOLDTIME_SHIFT 8
1807 +#define EMAC_HOLDTIME_MASK 0x7
1808 +#define EMAC_HOLDTIME(v) (((v) & EMAC_HOLDTIME_MASK) << \
1809 + EMAC_HOLDTIME_SHIFT)
1810 +
1811 +/*
1812 + * The address organisation for the MAC device. All addresses are split into
1813 + * two 32-bit register fields. The first one (bottom) is the lower 32 bits of
1814 + * the address and the other field holds the high-order bits - this may be
1815 + * 16 bits in the case of MAC addresses, or 32 bits for the hash address.
1816 + * In terms of memory storage, the first item (bottom) is assumed to be at a
1817 + * lower address location than 'top', i.e. top should be at the address of
1818 + * 'bottom' + 4 bytes.
1819 + */
1820 +struct pfe_mac_addr {
1821 + u32 bottom; /* Lower 32-bits of address. */
1822 + u32 top; /* Upper 32-bits of address. */
1823 +};
1824 +
1825 +/*
1826 + * The following is the organisation of the address filters section of the MAC
1827 + * registers. The Cadence MAC contains four possible specific address match
1828 + * addresses, if an incoming frame corresponds to any one of these four
1829 + * addresses then the frame will be copied to memory.
1830 + * It is not necessary for all four of the address match registers to be
1831 + * programmed; this is application-dependent.
1832 + */
1833 +struct spec_addr {
1834 + struct pfe_mac_addr one; /* Specific address register 1. */
1835 + struct pfe_mac_addr two; /* Specific address register 2. */
1836 + struct pfe_mac_addr three; /* Specific address register 3. */
1837 + struct pfe_mac_addr four; /* Specific address register 4. */
1838 +};
1839 +
1840 +struct gemac_cfg {
1841 + u32 mode;
1842 + u32 speed;
1843 + u32 duplex;
1844 +};
1845 +
1846 +/* EMAC Hash size */
1847 +#define EMAC_HASH_REG_BITS 64
1848 +
1849 +#define EMAC_SPEC_ADDR_MAX 4
1850 +
1851 +#endif /* _EMAC_H_ */
1852 --- /dev/null
1853 +++ b/drivers/staging/fsl_ppfe/include/pfe/cbus/gpi.h
1854 @@ -0,0 +1,86 @@
1855 +/*
1856 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
1857 + * Copyright 2017 NXP
1858 + *
1859 + * This program is free software; you can redistribute it and/or modify
1860 + * it under the terms of the GNU General Public License as published by
1861 + * the Free Software Foundation; either version 2 of the License, or
1862 + * (at your option) any later version.
1863 + *
1864 + * This program is distributed in the hope that it will be useful,
1865 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
1866 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1867 + * GNU General Public License for more details.
1868 + *
1869 + * You should have received a copy of the GNU General Public License
1870 + * along with this program. If not, see <http://www.gnu.org/licenses/>.
1871 + */
1872 +
1873 +#ifndef _GPI_H_
1874 +#define _GPI_H_
1875 +
1876 +#define GPI_VERSION 0x00
1877 +#define GPI_CTRL 0x04
1878 +#define GPI_RX_CONFIG 0x08
1879 +#define GPI_HDR_SIZE 0x0c
1880 +#define GPI_BUF_SIZE 0x10
1881 +#define GPI_LMEM_ALLOC_ADDR 0x14
1882 +#define GPI_LMEM_FREE_ADDR 0x18
1883 +#define GPI_DDR_ALLOC_ADDR 0x1c
1884 +#define GPI_DDR_FREE_ADDR 0x20
1885 +#define GPI_CLASS_ADDR 0x24
1886 +#define GPI_DRX_FIFO 0x28
1887 +#define GPI_TRX_FIFO 0x2c
1888 +#define GPI_INQ_PKTPTR 0x30
1889 +#define GPI_DDR_DATA_OFFSET 0x34
1890 +#define GPI_LMEM_DATA_OFFSET 0x38
1891 +#define GPI_TMLF_TX 0x4c
1892 +#define GPI_DTX_ASEQ 0x50
1893 +#define GPI_FIFO_STATUS 0x54
1894 +#define GPI_FIFO_DEBUG 0x58
1895 +#define GPI_TX_PAUSE_TIME 0x5c
1896 +#define GPI_LMEM_SEC_BUF_DATA_OFFSET 0x60
1897 +#define GPI_DDR_SEC_BUF_DATA_OFFSET 0x64
1898 +#define GPI_TOE_CHKSUM_EN 0x68
1899 +#define GPI_OVERRUN_DROPCNT 0x6c
1900 +#define GPI_CSR_MTIP_PAUSE_REG 0x74
1901 +#define GPI_CSR_MTIP_PAUSE_QUANTUM 0x78
1902 +#define GPI_CSR_RX_CNT 0x7c
1903 +#define GPI_CSR_TX_CNT 0x80
1904 +#define GPI_CSR_DEBUG1 0x84
1905 +#define GPI_CSR_DEBUG2 0x88
1906 +
1907 +struct gpi_cfg {
1908 + u32 lmem_rtry_cnt;
1909 + u32 tmlf_txthres;
1910 + u32 aseq_len;
1911 + u32 mtip_pause_reg;
1912 +};
1913 +
1914 +/* GPI common defines */
1915 +#define GPI_LMEM_BUF_EN 0x1
1916 +#define GPI_DDR_BUF_EN 0x1
1917 +
1918 +/* EGPI 1 defines */
1919 +#define EGPI1_LMEM_RTRY_CNT 0x40
1920 +#define EGPI1_TMLF_TXTHRES 0xBC
1921 +#define EGPI1_ASEQ_LEN 0x50
1922 +
1923 +/* EGPI 2 defines */
1924 +#define EGPI2_LMEM_RTRY_CNT 0x40
1925 +#define EGPI2_TMLF_TXTHRES 0xBC
1926 +#define EGPI2_ASEQ_LEN 0x40
1927 +
1928 +/* EGPI 3 defines */
1929 +#define EGPI3_LMEM_RTRY_CNT 0x40
1930 +#define EGPI3_TMLF_TXTHRES 0xBC
1931 +#define EGPI3_ASEQ_LEN 0x40
1932 +
1933 +/* HGPI defines */
1934 +#define HGPI_LMEM_RTRY_CNT 0x40
1935 +#define HGPI_TMLF_TXTHRES 0xBC
1936 +#define HGPI_ASEQ_LEN 0x40
1937 +
1938 +#define EGPI_PAUSE_TIME 0x000007D0
1939 +#define EGPI_PAUSE_ENABLE 0x40000000
1940 +#endif /* _GPI_H_ */
1941 --- /dev/null
1942 +++ b/drivers/staging/fsl_ppfe/include/pfe/cbus/hif.h
1943 @@ -0,0 +1,100 @@
1944 +/*
1945 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
1946 + * Copyright 2017 NXP
1947 + *
1948 + * This program is free software; you can redistribute it and/or modify
1949 + * it under the terms of the GNU General Public License as published by
1950 + * the Free Software Foundation; either version 2 of the License, or
1951 + * (at your option) any later version.
1952 + *
1953 + * This program is distributed in the hope that it will be useful,
1954 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
1955 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1956 + * GNU General Public License for more details.
1957 + *
1958 + * You should have received a copy of the GNU General Public License
1959 + * along with this program. If not, see <http://www.gnu.org/licenses/>.
1960 + */
1961 +
1962 +#ifndef _HIF_H_
1963 +#define _HIF_H_
1964 +
1965 +/* @file hif.h.
1966 + * hif - PFE HIF block control and status registers.
1967 + * Mapped on CBUS and accessible from all PEs and the ARM core.
1968 + */
1969 +#define HIF_VERSION (HIF_BASE_ADDR + 0x00)
1970 +#define HIF_TX_CTRL (HIF_BASE_ADDR + 0x04)
1971 +#define HIF_TX_CURR_BD_ADDR (HIF_BASE_ADDR + 0x08)
1972 +#define HIF_TX_ALLOC (HIF_BASE_ADDR + 0x0c)
1973 +#define HIF_TX_BDP_ADDR (HIF_BASE_ADDR + 0x10)
1974 +#define HIF_TX_STATUS (HIF_BASE_ADDR + 0x14)
1975 +#define HIF_RX_CTRL (HIF_BASE_ADDR + 0x20)
1976 +#define HIF_RX_BDP_ADDR (HIF_BASE_ADDR + 0x24)
1977 +#define HIF_RX_STATUS (HIF_BASE_ADDR + 0x30)
1978 +#define HIF_INT_SRC (HIF_BASE_ADDR + 0x34)
1979 +#define HIF_INT_ENABLE (HIF_BASE_ADDR + 0x38)
1980 +#define HIF_POLL_CTRL (HIF_BASE_ADDR + 0x3c)
1981 +#define HIF_RX_CURR_BD_ADDR (HIF_BASE_ADDR + 0x40)
1982 +#define HIF_RX_ALLOC (HIF_BASE_ADDR + 0x44)
1983 +#define HIF_TX_DMA_STATUS (HIF_BASE_ADDR + 0x48)
1984 +#define HIF_RX_DMA_STATUS (HIF_BASE_ADDR + 0x4c)
1985 +#define HIF_INT_COAL (HIF_BASE_ADDR + 0x50)
1986 +
1987 +/* HIF_INT_SRC/ HIF_INT_ENABLE control bits */
1988 +#define HIF_INT BIT(0)
1989 +#define HIF_RXBD_INT BIT(1)
1990 +#define HIF_RXPKT_INT BIT(2)
1991 +#define HIF_TXBD_INT BIT(3)
1992 +#define HIF_TXPKT_INT BIT(4)
1993 +
1994 +/* HIF_TX_CTRL bits */
1995 +#define HIF_CTRL_DMA_EN BIT(0)
1996 +#define HIF_CTRL_BDP_POLL_CTRL_EN BIT(1)
1997 +#define HIF_CTRL_BDP_CH_START_WSTB BIT(2)
1998 +
1999 +/* HIF_RX_STATUS bits */
2000 +#define BDP_CSR_RX_DMA_ACTV BIT(16)
2001 +
2002 +/* HIF_INT_ENABLE bits */
2003 +#define HIF_INT_EN BIT(0)
2004 +#define HIF_RXBD_INT_EN BIT(1)
2005 +#define HIF_RXPKT_INT_EN BIT(2)
2006 +#define HIF_TXBD_INT_EN BIT(3)
2007 +#define HIF_TXPKT_INT_EN BIT(4)
2008 +
2009 +/* HIF_POLL_CTRL bits */
2010 +#define HIF_RX_POLL_CTRL_CYCLE 0x0400
2011 +#define HIF_TX_POLL_CTRL_CYCLE 0x0400
2012 +
2013 +/* HIF_INT_COAL bits */
2014 +#define HIF_INT_COAL_ENABLE BIT(31)
2015 +
2016 +/* Buffer descriptor control bits */
2017 +#define BD_CTRL_BUFLEN_MASK 0x3fff
2018 +#define BD_BUF_LEN(x) ((x) & BD_CTRL_BUFLEN_MASK)
2019 +#define BD_CTRL_CBD_INT_EN BIT(16)
2020 +#define BD_CTRL_PKT_INT_EN BIT(17)
2021 +#define BD_CTRL_LIFM BIT(18)
2022 +#define BD_CTRL_LAST_BD BIT(19)
2023 +#define BD_CTRL_DIR BIT(20)
2024 +#define BD_CTRL_LMEM_CPY BIT(21) /* Valid only for HIF_NOCPY */
2025 +#define BD_CTRL_PKT_XFER BIT(24)
2026 +#define BD_CTRL_DESC_EN BIT(31)
2027 +#define BD_CTRL_PARSE_DISABLE BIT(25)
2028 +#define BD_CTRL_BRFETCH_DISABLE BIT(26)
2029 +#define BD_CTRL_RTFETCH_DISABLE BIT(27)
2030 +
2031 +/* Buffer descriptor status bits */
2032 +#define BD_STATUS_CONN_ID(x) ((x) & 0xffff)
2033 +#define BD_STATUS_DIR_PROC_ID BIT(16)
2034 +#define BD_STATUS_CONN_ID_EN BIT(17)
2035 +#define BD_STATUS_PE2PROC_ID(x) (((x) & 7) << 18)
2036 +#define BD_STATUS_LE_DATA BIT(21)
2037 +#define BD_STATUS_CHKSUM_EN BIT(22)
2038 +
2039 +/* HIF Buffer descriptor status bits */
2040 +#define DIR_PROC_ID BIT(16)
2041 +#define PROC_ID(id) ((id) << 18)
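As a hedged sketch of how the control bits compose (the field usage is inferred from the bit names above, not from driver code shown here), a single-fragment host TX descriptor could carry:

/* Sketch: ctrl word for a one-fragment TX buffer of 'len' bytes that
 * raises an interrupt on completion and is owned by hardware.
 */
static u32 hif_tx_bd_ctrl(unsigned int len)
{
	return BD_CTRL_DESC_EN | BD_CTRL_LIFM | BD_CTRL_PKT_INT_EN |
	       BD_BUF_LEN(len);
}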
2042 +
2043 +#endif /* _HIF_H_ */
2044 --- /dev/null
2045 +++ b/drivers/staging/fsl_ppfe/include/pfe/cbus/hif_nocpy.h
2046 @@ -0,0 +1,50 @@
2047 +/*
2048 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
2049 + * Copyright 2017 NXP
2050 + *
2051 + * This program is free software; you can redistribute it and/or modify
2052 + * it under the terms of the GNU General Public License as published by
2053 + * the Free Software Foundation; either version 2 of the License, or
2054 + * (at your option) any later version.
2055 + *
2056 + * This program is distributed in the hope that it will be useful,
2057 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
2058 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
2059 + * GNU General Public License for more details.
2060 + *
2061 + * You should have received a copy of the GNU General Public License
2062 + * along with this program. If not, see <http://www.gnu.org/licenses/>.
2063 + */
2064 +
2065 +#ifndef _HIF_NOCPY_H_
2066 +#define _HIF_NOCPY_H_
2067 +
2068 +#define HIF_NOCPY_VERSION (HIF_NOCPY_BASE_ADDR + 0x00)
2069 +#define HIF_NOCPY_TX_CTRL (HIF_NOCPY_BASE_ADDR + 0x04)
2070 +#define HIF_NOCPY_TX_CURR_BD_ADDR (HIF_NOCPY_BASE_ADDR + 0x08)
2071 +#define HIF_NOCPY_TX_ALLOC (HIF_NOCPY_BASE_ADDR + 0x0c)
2072 +#define HIF_NOCPY_TX_BDP_ADDR (HIF_NOCPY_BASE_ADDR + 0x10)
2073 +#define HIF_NOCPY_TX_STATUS (HIF_NOCPY_BASE_ADDR + 0x14)
2074 +#define HIF_NOCPY_RX_CTRL (HIF_NOCPY_BASE_ADDR + 0x20)
2075 +#define HIF_NOCPY_RX_BDP_ADDR (HIF_NOCPY_BASE_ADDR + 0x24)
2076 +#define HIF_NOCPY_RX_STATUS (HIF_NOCPY_BASE_ADDR + 0x30)
2077 +#define HIF_NOCPY_INT_SRC (HIF_NOCPY_BASE_ADDR + 0x34)
2078 +#define HIF_NOCPY_INT_ENABLE (HIF_NOCPY_BASE_ADDR + 0x38)
2079 +#define HIF_NOCPY_POLL_CTRL (HIF_NOCPY_BASE_ADDR + 0x3c)
2080 +#define HIF_NOCPY_RX_CURR_BD_ADDR (HIF_NOCPY_BASE_ADDR + 0x40)
2081 +#define HIF_NOCPY_RX_ALLOC (HIF_NOCPY_BASE_ADDR + 0x44)
2082 +#define HIF_NOCPY_TX_DMA_STATUS (HIF_NOCPY_BASE_ADDR + 0x48)
2083 +#define HIF_NOCPY_RX_DMA_STATUS (HIF_NOCPY_BASE_ADDR + 0x4c)
2084 +#define HIF_NOCPY_RX_INQ0_PKTPTR (HIF_NOCPY_BASE_ADDR + 0x50)
2085 +#define HIF_NOCPY_RX_INQ1_PKTPTR (HIF_NOCPY_BASE_ADDR + 0x54)
2086 +#define HIF_NOCPY_TX_PORT_NO (HIF_NOCPY_BASE_ADDR + 0x60)
2087 +#define HIF_NOCPY_LMEM_ALLOC_ADDR (HIF_NOCPY_BASE_ADDR + 0x64)
2088 +#define HIF_NOCPY_CLASS_ADDR (HIF_NOCPY_BASE_ADDR + 0x68)
2089 +#define HIF_NOCPY_TMU_PORT0_ADDR (HIF_NOCPY_BASE_ADDR + 0x70)
2090 +#define HIF_NOCPY_TMU_PORT1_ADDR (HIF_NOCPY_BASE_ADDR + 0x74)
2091 +#define HIF_NOCPY_TMU_PORT2_ADDR (HIF_NOCPY_BASE_ADDR + 0x7c)
2092 +#define HIF_NOCPY_TMU_PORT3_ADDR (HIF_NOCPY_BASE_ADDR + 0x80)
2093 +#define HIF_NOCPY_TMU_PORT4_ADDR (HIF_NOCPY_BASE_ADDR + 0x84)
2094 +#define HIF_NOCPY_INT_COAL (HIF_NOCPY_BASE_ADDR + 0x90)
2095 +
2096 +#endif /* _HIF_NOCPY_H_ */
2097 --- /dev/null
2098 +++ b/drivers/staging/fsl_ppfe/include/pfe/cbus/tmu_csr.h
2099 @@ -0,0 +1,168 @@
2100 +/*
2101 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
2102 + * Copyright 2017 NXP
2103 + *
2104 + * This program is free software; you can redistribute it and/or modify
2105 + * it under the terms of the GNU General Public License as published by
2106 + * the Free Software Foundation; either version 2 of the License, or
2107 + * (at your option) any later version.
2108 + *
2109 + * This program is distributed in the hope that it will be useful,
2110 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
2111 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
2112 + * GNU General Public License for more details.
2113 + *
2114 + * You should have received a copy of the GNU General Public License
2115 + * along with this program. If not, see <http://www.gnu.org/licenses/>.
2116 + */
2117 +
2118 +#ifndef _TMU_CSR_H_
2119 +#define _TMU_CSR_H_
2120 +
2121 +#define TMU_VERSION (TMU_CSR_BASE_ADDR + 0x000)
2122 +#define TMU_INQ_WATERMARK (TMU_CSR_BASE_ADDR + 0x004)
2123 +#define TMU_PHY_INQ_PKTPTR (TMU_CSR_BASE_ADDR + 0x008)
2124 +#define TMU_PHY_INQ_PKTINFO (TMU_CSR_BASE_ADDR + 0x00c)
2125 +#define TMU_PHY_INQ_FIFO_CNT (TMU_CSR_BASE_ADDR + 0x010)
2126 +#define TMU_SYS_GENERIC_CONTROL (TMU_CSR_BASE_ADDR + 0x014)
2127 +#define TMU_SYS_GENERIC_STATUS (TMU_CSR_BASE_ADDR + 0x018)
2128 +#define TMU_SYS_GEN_CON0 (TMU_CSR_BASE_ADDR + 0x01c)
2129 +#define TMU_SYS_GEN_CON1 (TMU_CSR_BASE_ADDR + 0x020)
2130 +#define TMU_SYS_GEN_CON2 (TMU_CSR_BASE_ADDR + 0x024)
2131 +#define TMU_SYS_GEN_CON3 (TMU_CSR_BASE_ADDR + 0x028)
2132 +#define TMU_SYS_GEN_CON4 (TMU_CSR_BASE_ADDR + 0x02c)
2133 +#define TMU_TEQ_DISABLE_DROPCHK (TMU_CSR_BASE_ADDR + 0x030)
2134 +#define TMU_TEQ_CTRL (TMU_CSR_BASE_ADDR + 0x034)
2135 +#define TMU_TEQ_QCFG (TMU_CSR_BASE_ADDR + 0x038)
2136 +#define TMU_TEQ_DROP_STAT (TMU_CSR_BASE_ADDR + 0x03c)
2137 +#define TMU_TEQ_QAVG (TMU_CSR_BASE_ADDR + 0x040)
2138 +#define TMU_TEQ_WREG_PROB (TMU_CSR_BASE_ADDR + 0x044)
2139 +#define TMU_TEQ_TRANS_STAT (TMU_CSR_BASE_ADDR + 0x048)
2140 +#define TMU_TEQ_HW_PROB_CFG0 (TMU_CSR_BASE_ADDR + 0x04c)
2141 +#define TMU_TEQ_HW_PROB_CFG1 (TMU_CSR_BASE_ADDR + 0x050)
2142 +#define TMU_TEQ_HW_PROB_CFG2 (TMU_CSR_BASE_ADDR + 0x054)
2143 +#define TMU_TEQ_HW_PROB_CFG3 (TMU_CSR_BASE_ADDR + 0x058)
2144 +#define TMU_TEQ_HW_PROB_CFG4 (TMU_CSR_BASE_ADDR + 0x05c)
2145 +#define TMU_TEQ_HW_PROB_CFG5 (TMU_CSR_BASE_ADDR + 0x060)
2146 +#define TMU_TEQ_HW_PROB_CFG6 (TMU_CSR_BASE_ADDR + 0x064)
2147 +#define TMU_TEQ_HW_PROB_CFG7 (TMU_CSR_BASE_ADDR + 0x068)
2148 +#define TMU_TEQ_HW_PROB_CFG8 (TMU_CSR_BASE_ADDR + 0x06c)
2149 +#define TMU_TEQ_HW_PROB_CFG9 (TMU_CSR_BASE_ADDR + 0x070)
2150 +#define TMU_TEQ_HW_PROB_CFG10 (TMU_CSR_BASE_ADDR + 0x074)
2151 +#define TMU_TEQ_HW_PROB_CFG11 (TMU_CSR_BASE_ADDR + 0x078)
2152 +#define TMU_TEQ_HW_PROB_CFG12 (TMU_CSR_BASE_ADDR + 0x07c)
2153 +#define TMU_TEQ_HW_PROB_CFG13 (TMU_CSR_BASE_ADDR + 0x080)
2154 +#define TMU_TEQ_HW_PROB_CFG14 (TMU_CSR_BASE_ADDR + 0x084)
2155 +#define TMU_TEQ_HW_PROB_CFG15 (TMU_CSR_BASE_ADDR + 0x088)
2156 +#define TMU_TEQ_HW_PROB_CFG16 (TMU_CSR_BASE_ADDR + 0x08c)
2157 +#define TMU_TEQ_HW_PROB_CFG17 (TMU_CSR_BASE_ADDR + 0x090)
2158 +#define TMU_TEQ_HW_PROB_CFG18 (TMU_CSR_BASE_ADDR + 0x094)
2159 +#define TMU_TEQ_HW_PROB_CFG19 (TMU_CSR_BASE_ADDR + 0x098)
2160 +#define TMU_TEQ_HW_PROB_CFG20 (TMU_CSR_BASE_ADDR + 0x09c)
2161 +#define TMU_TEQ_HW_PROB_CFG21 (TMU_CSR_BASE_ADDR + 0x0a0)
2162 +#define TMU_TEQ_HW_PROB_CFG22 (TMU_CSR_BASE_ADDR + 0x0a4)
2163 +#define TMU_TEQ_HW_PROB_CFG23 (TMU_CSR_BASE_ADDR + 0x0a8)
2164 +#define TMU_TEQ_HW_PROB_CFG24 (TMU_CSR_BASE_ADDR + 0x0ac)
2165 +#define TMU_TEQ_HW_PROB_CFG25 (TMU_CSR_BASE_ADDR + 0x0b0)
2166 +#define TMU_TDQ_IIFG_CFG (TMU_CSR_BASE_ADDR + 0x0b4)
2167 +/* [9:0] Scheduler enable for each of the schedulers in the TDQ.
2168 + * This is a global enable for all schedulers in PHY0.
2169 + */
2170 +#define TMU_TDQ0_SCH_CTRL (TMU_CSR_BASE_ADDR + 0x0b8)
2171 +
2172 +#define TMU_LLM_CTRL (TMU_CSR_BASE_ADDR + 0x0bc)
2173 +#define TMU_LLM_BASE_ADDR (TMU_CSR_BASE_ADDR + 0x0c0)
2174 +#define TMU_LLM_QUE_LEN (TMU_CSR_BASE_ADDR + 0x0c4)
2175 +#define TMU_LLM_QUE_HEADPTR (TMU_CSR_BASE_ADDR + 0x0c8)
2176 +#define TMU_LLM_QUE_TAILPTR (TMU_CSR_BASE_ADDR + 0x0cc)
2177 +#define TMU_LLM_QUE_DROPCNT (TMU_CSR_BASE_ADDR + 0x0d0)
2178 +#define TMU_INT_EN (TMU_CSR_BASE_ADDR + 0x0d4)
2179 +#define TMU_INT_SRC (TMU_CSR_BASE_ADDR + 0x0d8)
2180 +#define TMU_INQ_STAT (TMU_CSR_BASE_ADDR + 0x0dc)
2181 +#define TMU_CTRL (TMU_CSR_BASE_ADDR + 0x0e0)
2182 +
2183 +/* [31] Mem access command: 0 = internal memory read, 1 = internal memory
2184 + * write. [27:24] Byte enables of the internal memory access. [23:0] Address
2185 + * of the internal memory. This address is used to access both the PM and DM
2186 + * of all the PEs.
2187 + */
2188 +#define TMU_MEM_ACCESS_ADDR (TMU_CSR_BASE_ADDR + 0x0e4)
2189 +
2190 +/* Internal Memory Access Write Data */
2191 +#define TMU_MEM_ACCESS_WDATA (TMU_CSR_BASE_ADDR + 0x0e8)
2192 +/* Internal Memory Access Read Data. The commands are blocked
2193 + * at the mem_access only
2194 + */
2195 +#define TMU_MEM_ACCESS_RDATA (TMU_CSR_BASE_ADDR + 0x0ec)
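A sketch tying the three registers together for an indirect read; the PE index lands in the upper address bits via TMU_DMEM_BASE_ADDR() from pfe.h, and any required completion handshake is an assumption omitted here:

/* Sketch: 4-byte indirect read from TMU PE 'id' DMEM at offset 'addr'.
 * Bit 31 clear = read command, byte enables [27:24] = 0xf (all bytes).
 */
static u32 tmu_dmem_read32(int id, u32 addr)
{
	writel((TMU_DMEM_BASE_ADDR(id) + addr) | (0xf << 24),
	       TMU_MEM_ACCESS_ADDR);
	return readl(TMU_MEM_ACCESS_RDATA);
}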
2196 +
2197 +/* [31:0] PHY0 in queue address (must be initialized with one of the
2198 + * xxx_INQ_PKTPTR cbus addresses)
2199 + */
2200 +#define TMU_PHY0_INQ_ADDR (TMU_CSR_BASE_ADDR + 0x0f0)
2201 +/* [31:0] PHY1 in queue address (must be initialized with one of the
2202 + * xxx_INQ_PKTPTR cbus addresses)
2203 + */
2204 +#define TMU_PHY1_INQ_ADDR (TMU_CSR_BASE_ADDR + 0x0f4)
2205 +/* [31:0] PHY2 in queue address (must be initialized with one of the
2206 + * xxx_INQ_PKTPTR cbus addresses)
2207 + */
2208 +#define TMU_PHY2_INQ_ADDR (TMU_CSR_BASE_ADDR + 0x0f8)
2209 +/* [31:0] PHY3 in queue address (must be initialized with one of the
2210 + * xxx_INQ_PKTPTR cbus addresses)
2211 + */
2212 +#define TMU_PHY3_INQ_ADDR (TMU_CSR_BASE_ADDR + 0x0fc)
2213 +#define TMU_BMU_INQ_ADDR (TMU_CSR_BASE_ADDR + 0x100)
2214 +#define TMU_TX_CTRL (TMU_CSR_BASE_ADDR + 0x104)
2215 +
2216 +#define TMU_BUS_ACCESS_WDATA (TMU_CSR_BASE_ADDR + 0x108)
2217 +#define TMU_BUS_ACCESS (TMU_CSR_BASE_ADDR + 0x10c)
2218 +#define TMU_BUS_ACCESS_RDATA (TMU_CSR_BASE_ADDR + 0x110)
2219 +
2220 +#define TMU_PE_SYS_CLK_RATIO (TMU_CSR_BASE_ADDR + 0x114)
2221 +#define TMU_PE_STATUS (TMU_CSR_BASE_ADDR + 0x118)
2222 +#define TMU_TEQ_MAX_THRESHOLD (TMU_CSR_BASE_ADDR + 0x11c)
2223 +/* [31:0] PHY4 in queue address (must be initialized with one of the
2224 + * xxx_INQ_PKTPTR cbus addresses)
2225 + */
2226 +#define TMU_PHY4_INQ_ADDR (TMU_CSR_BASE_ADDR + 0x134)
2227 +/* [9:0] Scheduler enable for each of the schedulers in the TDQ.
2228 + * This is a global enable for all schedulers in PHY1.
2229 + */
2230 +#define TMU_TDQ1_SCH_CTRL (TMU_CSR_BASE_ADDR + 0x138)
2231 +/* [9:0] Scheduler enable for each of the schedulers in the TDQ.
2232 + * This is a global enable for all schedulers in PHY2.
2233 + */
2234 +#define TMU_TDQ2_SCH_CTRL (TMU_CSR_BASE_ADDR + 0x13c)
2235 +/* [9:0] Scheduler enable for each of the schedulers in the TDQ.
2236 + * This is a global enable for all schedulers in PHY3.
2237 + */
2238 +#define TMU_TDQ3_SCH_CTRL (TMU_CSR_BASE_ADDR + 0x140)
2239 +#define TMU_BMU_BUF_SIZE (TMU_CSR_BASE_ADDR + 0x144)
2240 +/* [31:0] PHY5 in queue address (must be initialized with one of the
2241 + * xxx_INQ_PKTPTR cbus addresses)
2242 + */
2243 +#define TMU_PHY5_INQ_ADDR (TMU_CSR_BASE_ADDR + 0x148)
2244 +
2245 +#define SW_RESET BIT(0) /* Global software reset */
2246 +#define INQ_RESET BIT(2)
2247 +#define TEQ_RESET BIT(3)
2248 +#define TDQ_RESET BIT(4)
2249 +#define PE_RESET BIT(5)
2250 +#define MEM_INIT BIT(6)
2251 +#define MEM_INIT_DONE BIT(7)
2252 +#define LLM_INIT BIT(8)
2253 +#define LLM_INIT_DONE BIT(9)
2254 +#define ECC_MEM_INIT_DONE BIT(10)
2255 +
2256 +struct tmu_cfg {
2257 + u32 pe_sys_clk_ratio;
2258 + unsigned long llm_base_addr;
2259 + u32 llm_queue_len;
2260 +};
2261 +
2262 +/* Non-HW-related defines common to pfe_ctrl / pfe */
2263 +#define DEFAULT_MAX_QDEPTH 80
2264 +#define DEFAULT_Q0_QDEPTH 511 /* We keep one large queue for host TX QoS */
2265 +#define DEFAULT_TMU3_QDEPTH 127
2266 +
2267 +#endif /* _TMU_CSR_H_ */
2268 --- /dev/null
2269 +++ b/drivers/staging/fsl_ppfe/include/pfe/cbus/util_csr.h
2270 @@ -0,0 +1,61 @@
2271 +/*
2272 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
2273 + * Copyright 2017 NXP
2274 + *
2275 + * This program is free software; you can redistribute it and/or modify
2276 + * it under the terms of the GNU General Public License as published by
2277 + * the Free Software Foundation; either version 2 of the License, or
2278 + * (at your option) any later version.
2279 + *
2280 + * This program is distributed in the hope that it will be useful,
2281 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
2282 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
2283 + * GNU General Public License for more details.
2284 + *
2285 + * You should have received a copy of the GNU General Public License
2286 + * along with this program. If not, see <http://www.gnu.org/licenses/>.
2287 + */
2288 +
2289 +#ifndef _UTIL_CSR_H_
2290 +#define _UTIL_CSR_H_
2291 +
2292 +#define UTIL_VERSION (UTIL_CSR_BASE_ADDR + 0x000)
2293 +#define UTIL_TX_CTRL (UTIL_CSR_BASE_ADDR + 0x004)
2294 +#define UTIL_INQ_PKTPTR (UTIL_CSR_BASE_ADDR + 0x010)
2295 +
2296 +#define UTIL_HDR_SIZE (UTIL_CSR_BASE_ADDR + 0x014)
2297 +
2298 +#define UTIL_PE0_QB_DM_ADDR0 (UTIL_CSR_BASE_ADDR + 0x020)
2299 +#define UTIL_PE0_QB_DM_ADDR1 (UTIL_CSR_BASE_ADDR + 0x024)
2300 +#define UTIL_PE0_RO_DM_ADDR0 (UTIL_CSR_BASE_ADDR + 0x060)
2301 +#define UTIL_PE0_RO_DM_ADDR1 (UTIL_CSR_BASE_ADDR + 0x064)
2302 +
2303 +#define UTIL_MEM_ACCESS_ADDR (UTIL_CSR_BASE_ADDR + 0x100)
2304 +#define UTIL_MEM_ACCESS_WDATA (UTIL_CSR_BASE_ADDR + 0x104)
2305 +#define UTIL_MEM_ACCESS_RDATA (UTIL_CSR_BASE_ADDR + 0x108)
2306 +
2307 +#define UTIL_TM_INQ_ADDR (UTIL_CSR_BASE_ADDR + 0x114)
2308 +#define UTIL_PE_STATUS (UTIL_CSR_BASE_ADDR + 0x118)
2309 +
2310 +#define UTIL_PE_SYS_CLK_RATIO (UTIL_CSR_BASE_ADDR + 0x200)
2311 +#define UTIL_AFULL_THRES (UTIL_CSR_BASE_ADDR + 0x204)
2312 +#define UTIL_GAP_BETWEEN_READS (UTIL_CSR_BASE_ADDR + 0x208)
2313 +#define UTIL_MAX_BUF_CNT (UTIL_CSR_BASE_ADDR + 0x20c)
2314 +#define UTIL_TSQ_FIFO_THRES (UTIL_CSR_BASE_ADDR + 0x210)
2315 +#define UTIL_TSQ_MAX_CNT (UTIL_CSR_BASE_ADDR + 0x214)
2316 +#define UTIL_IRAM_DATA_0 (UTIL_CSR_BASE_ADDR + 0x218)
2317 +#define UTIL_IRAM_DATA_1 (UTIL_CSR_BASE_ADDR + 0x21c)
2318 +#define UTIL_IRAM_DATA_2 (UTIL_CSR_BASE_ADDR + 0x220)
2319 +#define UTIL_IRAM_DATA_3 (UTIL_CSR_BASE_ADDR + 0x224)
2320 +
2321 +#define UTIL_BUS_ACCESS_ADDR (UTIL_CSR_BASE_ADDR + 0x228)
2322 +#define UTIL_BUS_ACCESS_WDATA (UTIL_CSR_BASE_ADDR + 0x22c)
2323 +#define UTIL_BUS_ACCESS_RDATA (UTIL_CSR_BASE_ADDR + 0x230)
2324 +
2325 +#define UTIL_INQ_AFULL_THRES (UTIL_CSR_BASE_ADDR + 0x234)
2326 +
2327 +struct util_cfg {
2328 + u32 pe_sys_clk_ratio;
2329 +};
2330 +
2331 +#endif /* _UTIL_CSR_H_ */
2332 --- /dev/null
2333 +++ b/drivers/staging/fsl_ppfe/include/pfe/pfe.h
2334 @@ -0,0 +1,372 @@
2335 +/*
2336 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
2337 + * Copyright 2017 NXP
2338 + *
2339 + * This program is free software; you can redistribute it and/or modify
2340 + * it under the terms of the GNU General Public License as published by
2341 + * the Free Software Foundation; either version 2 of the License, or
2342 + * (at your option) any later version.
2343 + *
2344 + * This program is distributed in the hope that it will be useful,
2345 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
2346 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
2347 + * GNU General Public License for more details.
2348 + *
2349 + * You should have received a copy of the GNU General Public License
2350 + * along with this program. If not, see <http://www.gnu.org/licenses/>.
2351 + */
2352 +
2353 +#ifndef _PFE_H_
2354 +#define _PFE_H_
2355 +
2356 +#include "cbus.h"
2357 +
2358 +#define CLASS_DMEM_BASE_ADDR(i) (0x00000000 | ((i) << 20))
2359 +/*
2360 + * Only valid for mem access register interface
2361 + */
2362 +#define CLASS_IMEM_BASE_ADDR(i) (0x00000000 | ((i) << 20))
2363 +#define CLASS_DMEM_SIZE 0x00002000
2364 +#define CLASS_IMEM_SIZE 0x00008000
2365 +
2366 +#define TMU_DMEM_BASE_ADDR(i) (0x00000000 + ((i) << 20))
2367 +/*
2368 + * Only valid for mem access register interface
2369 + */
2370 +#define TMU_IMEM_BASE_ADDR(i) (0x00000000 + ((i) << 20))
2371 +#define TMU_DMEM_SIZE 0x00000800
2372 +#define TMU_IMEM_SIZE 0x00002000
2373 +
2374 +#define UTIL_DMEM_BASE_ADDR 0x00000000
2375 +#define UTIL_DMEM_SIZE 0x00002000
2376 +
2377 +#define PE_LMEM_BASE_ADDR 0xc3010000
2378 +#define PE_LMEM_SIZE 0x8000
2379 +#define PE_LMEM_END (PE_LMEM_BASE_ADDR + PE_LMEM_SIZE)
2380 +
2381 +#define DMEM_BASE_ADDR 0x00000000
2382 +#define DMEM_SIZE 0x2000 /* TMU has less... */
2383 +#define DMEM_END (DMEM_BASE_ADDR + DMEM_SIZE)
2384 +
2385 +#define PMEM_BASE_ADDR 0x00010000
2386 +#define PMEM_SIZE 0x8000 /* TMU has less... */
2387 +#define PMEM_END (PMEM_BASE_ADDR + PMEM_SIZE)
2388 +
2389 +/* These check memory ranges from PE point of view/memory map */
2390 +#define IS_DMEM(addr, len) \
2391 + ({ typeof(addr) addr_ = (addr); \
2392 + ((unsigned long)(addr_) >= DMEM_BASE_ADDR) && \
2393 + (((unsigned long)(addr_) + (len)) <= DMEM_END); })
2394 +
2395 +#define IS_PMEM(addr, len) \
2396 + ({ typeof(addr) addr_ = (addr); \
2397 + ((unsigned long)(addr_) >= PMEM_BASE_ADDR) && \
2398 + (((unsigned long)(addr_) + (len)) <= PMEM_END); })
2399 +
2400 +#define IS_PE_LMEM(addr, len) \
2401 + ({ typeof(addr) addr_ = (addr); \
2402 + ((unsigned long)(addr_) >= \
2403 + PE_LMEM_BASE_ADDR) && \
2404 + (((unsigned long)(addr_) + \
2405 + (len)) <= PE_LMEM_END); })
2406 +
2407 +#define IS_PFE_LMEM(addr, len) \
2408 + ({ typeof(addr) addr_ = (addr); \
2409 + ((unsigned long)(addr_) >= \
2410 + CBUS_VIRT_TO_PFE(LMEM_BASE_ADDR)) && \
2411 + (((unsigned long)(addr_) + (len)) <= \
2412 + CBUS_VIRT_TO_PFE(LMEM_END)); })
2413 +
2414 +#define __IS_PHYS_DDR(addr, len) \
2415 + ({ typeof(addr) addr_ = (addr); \
2416 + ((unsigned long)(addr_) >= \
2417 + DDR_PHYS_BASE_ADDR) && \
2418 + (((unsigned long)(addr_) + (len)) <= \
2419 + DDR_PHYS_END); })
2420 +
2421 +#define IS_PHYS_DDR(addr, len) __IS_PHYS_DDR(DDR_PFE_TO_PHYS(addr), len)
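For orientation, a hypothetical guard showing how these predicates are meant to be used, e.g. before a loader copies a section into a PE (the function name is illustrative, not from the patch):

/* Sketch: accept a load target only if it falls in a known PE window. */
static int pe_check_load_target(u32 dst, unsigned int len)
{
	if (IS_DMEM(dst, len) || IS_PMEM(dst, len) || IS_PE_LMEM(dst, len))
		return 0;
	return -EINVAL;	/* outside every PE-visible memory range */
}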
2422 +
2423 +/*
2424 + * If using a run-time virtual address for the cbus base address use this code
2425 + */
2426 +extern void *cbus_base_addr;
2427 +extern void *ddr_base_addr;
2428 +extern unsigned long ddr_phys_base_addr;
2429 +extern unsigned int ddr_size;
2430 +
2431 +#define CBUS_BASE_ADDR cbus_base_addr
2432 +#define DDR_PHYS_BASE_ADDR ddr_phys_base_addr
2433 +#define DDR_BASE_ADDR ddr_base_addr
2434 +#define DDR_SIZE ddr_size
2435 +
2436 +#define DDR_PHYS_END (DDR_PHYS_BASE_ADDR + DDR_SIZE)
2437 +
2438 +#define LS1012A_PFE_RESET_WA /*
2439 + * PFE doesn't have a global reset; re-init
2440 + * must take care of a few things to make
2441 + * the PFE functional after reset.
2442 + */
2443 +#define PFE_CBUS_PHYS_BASE_ADDR 0xc0000000 /* CBUS physical base address
2444 + * as seen by PEs.
2445 + */
2446 +/* CBUS physical base address as seen by PEs. */
2447 +#define PFE_CBUS_PHYS_BASE_ADDR_FROM_PFE 0xc0000000
2448 +
2449 +#define DDR_PHYS_TO_PFE(p) (((unsigned long int)(p)) & 0x7FFFFFFF)
2450 +#define DDR_PFE_TO_PHYS(p) (((unsigned long int)(p)) | 0x80000000)
2451 +#define CBUS_PHYS_TO_PFE(p) (((p) - PFE_CBUS_PHYS_BASE_ADDR) + \
2452 + PFE_CBUS_PHYS_BASE_ADDR_FROM_PFE)
2453 +/* Translates to PFE address map */
2454 +
2455 +#define DDR_PHYS_TO_VIRT(p) (((p) - DDR_PHYS_BASE_ADDR) + DDR_BASE_ADDR)
2456 +#define DDR_VIRT_TO_PHYS(v) (((v) - DDR_BASE_ADDR) + DDR_PHYS_BASE_ADDR)
2457 +#define DDR_VIRT_TO_PFE(p) (DDR_PHYS_TO_PFE(DDR_VIRT_TO_PHYS(p)))
2458 +
2459 +#define CBUS_VIRT_TO_PFE(v) (((v) - CBUS_BASE_ADDR) + \
2460 + PFE_CBUS_PHYS_BASE_ADDR)
2461 +#define CBUS_PFE_TO_VIRT(p) (((unsigned long int)(p) - \
2462 + PFE_CBUS_PHYS_BASE_ADDR) + CBUS_BASE_ADDR)
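A brief worked sketch of the DDR translation chain (the base values are illustrative, not from the patch): the PE's view of DDR is simply the physical address with bit 31 cleared, so:

/* Sketch: hand a host-virtual DDR buffer to a PE.
 * Example values: ddr_phys_base_addr = 0x83400000, offset 0x100.
 */
void *buf = ddr_base_addr + 0x100;		/* host virtual address */
unsigned long pe_addr = DDR_VIRT_TO_PFE(buf);	/* 0x83400100 -> 0x03400100 */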
2463 +
2464 +/* The part of the code below is used by the QoS control driver on the host */
2465 +#define TMU_APB_BASE_ADDR 0xc1000000 /* TMU base address as seen by
2466 + * PEs
2467 + */
2468 +
2469 +enum {
2470 + CLASS0_ID = 0,
2471 + CLASS1_ID,
2472 + CLASS2_ID,
2473 + CLASS3_ID,
2474 + CLASS4_ID,
2475 + CLASS5_ID,
2476 + TMU0_ID,
2477 + TMU1_ID,
2478 + TMU2_ID,
2479 + TMU3_ID,
2480 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
2481 + UTIL_ID,
2482 +#endif
2483 + MAX_PE
2484 +};
2485 +
2486 +#define CLASS_MASK (BIT(CLASS0_ID) | BIT(CLASS1_ID) |\
2487 + BIT(CLASS2_ID) | BIT(CLASS3_ID) |\
2488 + BIT(CLASS4_ID) | BIT(CLASS5_ID))
2489 +#define CLASS_MAX_ID CLASS5_ID
2490 +
2491 +#define TMU_MASK (BIT(TMU0_ID) | BIT(TMU1_ID) |\
2492 + BIT(TMU3_ID))
2493 +
2494 +#define TMU_MAX_ID TMU3_ID
2495 +
2496 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
2497 +#define UTIL_MASK BIT(UTIL_ID)
2498 +#endif
2499 +
2500 +struct pe_status {
2501 + u32 cpu_state;
2502 + u32 activity_counter;
2503 + u32 rx;
2504 + union {
2505 + u32 tx;
2506 + u32 tmu_qstatus;
2507 + };
2508 + u32 drop;
2509 +#if defined(CFG_PE_DEBUG)
2510 + u32 debug_indicator;
2511 + u32 debug[16];
2512 +#endif
2513 +} __aligned(16);
2514 +
2515 +struct pe_sync_mailbox {
2516 + u32 stop;
2517 + u32 stopped;
2518 +};
2519 +
2520 +/* Drop counter definitions */
2521 +
2522 +#define CLASS_NUM_DROP_COUNTERS 13
2523 +#define UTIL_NUM_DROP_COUNTERS 8
2524 +
2525 +/* PE information.
2526 + * Structure containing PE-specific information. It is used to create
2527 + * generic C functions common to all PEs.
2528 + * Before using the library functions this structure needs to be initialized
2529 + * with the virtual addresses of the different registers
2530 + * (according to the ARM MMU mapping). The default initialization supports a
2531 + * virtual == physical mapping.
2532 + */
2533 +struct pe_info {
2534 + u32 dmem_base_addr; /* PE's dmem base address */
2535 + u32 pmem_base_addr; /* PE's pmem base address */
2536 + u32 pmem_size; /* PE's pmem size */
2537 +
2538 + void *mem_access_wdata; /* PE's _MEM_ACCESS_WDATA register
2539 + * address
2540 + */
2541 + void *mem_access_addr; /* PE's _MEM_ACCESS_ADDR register
2542 + * address
2543 + */
2544 + void *mem_access_rdata; /* PE's _MEM_ACCESS_RDATA register
2545 + * address
2546 + */
2547 +};
2548 +
2549 +void pe_lmem_read(u32 *dst, u32 len, u32 offset);
2550 +void pe_lmem_write(u32 *src, u32 len, u32 offset);
2551 +
2552 +void pe_dmem_memcpy_to32(int id, u32 dst, const void *src, unsigned int len);
2553 +void pe_pmem_memcpy_to32(int id, u32 dst, const void *src, unsigned int len);
2554 +
2555 +u32 pe_pmem_read(int id, u32 addr, u8 size);
2556 +
2557 +void pe_dmem_write(int id, u32 val, u32 addr, u8 size);
2558 +u32 pe_dmem_read(int id, u32 addr, u8 size);
2559 +void class_pe_lmem_memcpy_to32(u32 dst, const void *src, unsigned int len);
2560 +void class_pe_lmem_memset(u32 dst, int val, unsigned int len);
2561 +void class_bus_write(u32 val, u32 addr, u8 size);
2562 +u32 class_bus_read(u32 addr, u8 size);
2563 +
2564 +#define class_bus_readl(addr) class_bus_read(addr, 4)
2565 +#define class_bus_readw(addr) class_bus_read(addr, 2)
2566 +#define class_bus_readb(addr) class_bus_read(addr, 1)
2567 +
2568 +#define class_bus_writel(val, addr) class_bus_write(val, addr, 4)
2569 +#define class_bus_writew(val, addr) class_bus_write(val, addr, 2)
2570 +#define class_bus_writeb(val, addr) class_bus_write(val, addr, 1)
2571 +
2572 +#define pe_dmem_readl(id, addr) pe_dmem_read(id, addr, 4)
2573 +#define pe_dmem_readw(id, addr) pe_dmem_read(id, addr, 2)
2574 +#define pe_dmem_readb(id, addr) pe_dmem_read(id, addr, 1)
2575 +
2576 +#define pe_dmem_writel(id, val, addr) pe_dmem_write(id, val, addr, 4)
2577 +#define pe_dmem_writew(id, val, addr) pe_dmem_write(id, val, addr, 2)
2578 +#define pe_dmem_writeb(id, val, addr) pe_dmem_write(id, val, addr, 1)
2579 +
2580 +/*int pe_load_elf_section(int id, const void *data, elf32_shdr *shdr); */
2581 +int pe_load_elf_section(int id, const void *data, struct elf32_shdr *shdr,
2582 + struct device *dev);
2583 +
2584 +void pfe_lib_init(void *cbus_base, void *ddr_base, unsigned long ddr_phys_base,
2585 + unsigned int ddr_size);
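To connect the pe_info comment above with this entry point, a hedged sketch of the expected bring-up order; PFE_CCSR_PHYS, PFE_DDR_PHYS and the two sizes are hypothetical platform constants, not values from the patch:

/* Sketch: map the CBUS and DDR windows, then initialize the HAL.
 * The casts mirror the library's void * API; sketch only.
 */
static int pfe_hal_setup(void)
{
	void *cbus = (void *)ioremap(PFE_CCSR_PHYS, PFE_CCSR_SIZE);
	void *ddr = (void *)ioremap(PFE_DDR_PHYS, PFE_DDR_SIZE);

	if (!cbus || !ddr)
		return -ENOMEM;

	pfe_lib_init(cbus, ddr, PFE_DDR_PHYS, PFE_DDR_SIZE);
	return 0;
}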
2586 +void bmu_init(void *base, struct BMU_CFG *cfg);
2587 +void bmu_reset(void *base);
2588 +void bmu_enable(void *base);
2589 +void bmu_disable(void *base);
2590 +void bmu_set_config(void *base, struct BMU_CFG *cfg);
2591 +
2592 +/*
2593 + * An enumerated type for loopback values. This can be one of three values:
2594 + * no loopback (normal operation), local loopback through the MAC's internal
2595 + * loopback module, or PHY loopback through the external PHY.
2596 + */
2597 +#ifndef __MAC_LOOP_ENUM__
2598 +#define __MAC_LOOP_ENUM__
2599 +enum mac_loop {LB_NONE, LB_EXT, LB_LOCAL};
2600 +#endif
2601 +
2602 +void gemac_init(void *base, void *config);
2603 +void gemac_disable_rx_checksum_offload(void *base);
2604 +void gemac_enable_rx_checksum_offload(void *base);
2605 +void gemac_set_speed(void *base, enum mac_speed gem_speed);
2606 +void gemac_set_duplex(void *base, int duplex);
2607 +void gemac_set_mode(void *base, int mode);
2608 +void gemac_enable(void *base);
2609 +void gemac_tx_disable(void *base);
2610 +void gemac_tx_enable(void *base);
2611 +void gemac_disable(void *base);
2612 +void gemac_reset(void *base);
2613 +void gemac_set_address(void *base, struct spec_addr *addr);
2614 +struct spec_addr gemac_get_address(void *base);
2615 +void gemac_set_loop(void *base, enum mac_loop gem_loop);
2616 +void gemac_set_laddr1(void *base, struct pfe_mac_addr *address);
2617 +void gemac_set_laddr2(void *base, struct pfe_mac_addr *address);
2618 +void gemac_set_laddr3(void *base, struct pfe_mac_addr *address);
2619 +void gemac_set_laddr4(void *base, struct pfe_mac_addr *address);
2620 +void gemac_set_laddrN(void *base, struct pfe_mac_addr *address,
2621 + unsigned int entry_index);
2622 +void gemac_clear_laddr1(void *base);
2623 +void gemac_clear_laddr2(void *base);
2624 +void gemac_clear_laddr3(void *base);
2625 +void gemac_clear_laddr4(void *base);
2626 +void gemac_clear_laddrN(void *base, unsigned int entry_index);
2627 +struct pfe_mac_addr gemac_get_hash(void *base);
2628 +void gemac_set_hash(void *base, struct pfe_mac_addr *hash);
2629 +struct pfe_mac_addr gem_get_laddr1(void *base);
2630 +struct pfe_mac_addr gem_get_laddr2(void *base);
2631 +struct pfe_mac_addr gem_get_laddr3(void *base);
2632 +struct pfe_mac_addr gem_get_laddr4(void *base);
2633 +struct pfe_mac_addr gem_get_laddrN(void *base, unsigned int entry_index);
2634 +void gemac_set_config(void *base, struct gemac_cfg *cfg);
2635 +void gemac_allow_broadcast(void *base);
2636 +void gemac_no_broadcast(void *base);
2637 +void gemac_enable_1536_rx(void *base);
2638 +void gemac_disable_1536_rx(void *base);
2639 +void gemac_set_rx_max_fl(void *base, int mtu);
2640 +void gemac_enable_rx_jmb(void *base);
2641 +void gemac_disable_rx_jmb(void *base);
2642 +void gemac_enable_stacked_vlan(void *base);
2643 +void gemac_disable_stacked_vlan(void *base);
2644 +void gemac_enable_pause_rx(void *base);
2645 +void gemac_disable_pause_rx(void *base);
2646 +void gemac_enable_copy_all(void *base);
2647 +void gemac_disable_copy_all(void *base);
2648 +void gemac_set_bus_width(void *base, int width);
2649 +void gemac_set_wol(void *base, u32 wol_conf);
2650 +
2651 +void gpi_init(void *base, struct gpi_cfg *cfg);
2652 +void gpi_reset(void *base);
2653 +void gpi_enable(void *base);
2654 +void gpi_disable(void *base);
2655 +void gpi_set_config(void *base, struct gpi_cfg *cfg);
2656 +
2657 +void class_init(struct class_cfg *cfg);
2658 +void class_reset(void);
2659 +void class_enable(void);
2660 +void class_disable(void);
2661 +void class_set_config(struct class_cfg *cfg);
2662 +
2663 +void tmu_reset(void);
2664 +void tmu_init(struct tmu_cfg *cfg);
2665 +void tmu_enable(u32 pe_mask);
2666 +void tmu_disable(u32 pe_mask);
2667 +u32 tmu_qstatus(u32 if_id);
2668 +u32 tmu_pkts_processed(u32 if_id);
2669 +
2670 +void util_init(struct util_cfg *cfg);
2671 +void util_reset(void);
2672 +void util_enable(void);
2673 +void util_disable(void);
2674 +
2675 +void hif_init(void);
2676 +void hif_tx_enable(void);
2677 +void hif_tx_disable(void);
2678 +void hif_rx_enable(void);
2679 +void hif_rx_disable(void);
2680 +
2681 +/*
2682 + * Get the chip revision level.
2683 + */
2684 +static inline unsigned int CHIP_REVISION(void)
2685 +{
2686 + /* For LS1012A, always return 1 */
2687 + return 1;
2688 +}
2689 +
2690 +/*
2691 + * Start HIF RX DMA.
2692 + */
2693 +static inline void hif_rx_dma_start(void)
2694 +{
2695 + writel(HIF_CTRL_DMA_EN | HIF_CTRL_BDP_CH_START_WSTB, HIF_RX_CTRL);
2696 +}
2697 +
2698 +/*
2699 + * Start HIF TX DMA.
2700 + */
2701 +static inline void hif_tx_dma_start(void)
2702 +{
2703 + writel(HIF_CTRL_DMA_EN | HIF_CTRL_BDP_CH_START_WSTB, HIF_TX_CTRL);
2704 +}
2705 +
2706 +#endif /* _PFE_H_ */
2707 --- /dev/null
2708 +++ b/drivers/staging/fsl_ppfe/pfe_cdev.c
2709 @@ -0,0 +1,258 @@
2710 +// SPDX-License-Identifier: GPL-2.0+
2711 +/*
2712 + * Copyright 2018 NXP
2713 + */
2714 +
2715 +/* @pfe_cdev.c.
2716 + * Dummy device representing the PFE in userspace (US).
2717 + * - used for interacting with the kernel layer for link status
2718 + */
2719 +
2720 +#include <linux/eventfd.h>
2721 +#include <linux/irqreturn.h>
2722 +#include <linux/io.h>
2723 +#include <asm/irq.h>
2724 +
2725 +#include "pfe_cdev.h"
2726 +#include "pfe_mod.h"
2727 +
2728 +static int pfe_majno;
2729 +static struct class *pfe_char_class;
2730 +static struct device *pfe_char_dev;
2731 +struct eventfd_ctx *g_trigger;
2732 +
2733 +struct pfe_shared_info link_states[PFE_CDEV_ETH_COUNT];
2734 +
2735 +static int pfe_cdev_open(struct inode *inp, struct file *fp)
2736 +{
2737 + pr_debug("PFE CDEV device opened.\n");
2738 + return 0;
2739 +}
2740 +
2741 +static ssize_t pfe_cdev_read(struct file *fp, char *buf,
2742 + size_t len, loff_t *off)
2743 +{
2744 + int ret = 0;
2745 +
2746 + pr_info("PFE CDEV attempt copying (%lu) size of user.\n",
2747 + sizeof(link_states));
2748 +
2749 + pr_debug("Dump link_state on screen before copy_to_user\n");
2750 + for (; ret < PFE_CDEV_ETH_COUNT; ret++) {
2751 + pr_debug("%u %u", link_states[ret].phy_id,
2752 + link_states[ret].state);
2753 + pr_debug("\n");
2754 + }
2755 +
2756 + /* Copy to user the value in buffer sized len */
2757 + ret = copy_to_user(buf, &link_states, sizeof(link_states));
2758 + if (ret != 0) {
2759 + pr_err("Failed to send (%d)bytes of (%lu) requested.\n",
2760 + ret, len);
2761 + return -EFAULT;
2762 + }
2763 +
2764 + /* The offset is set back to 0 as there is no contextual read offset */
2765 + *off = 0;
2766 + pr_debug("Read of (%lu) bytes performed.\n", sizeof(link_states));
2767 +
2768 + return sizeof(link_states);
2769 +}
2770 +
2771 +/**
2772 + * This function is for receiving commands from userspace through a
2773 + * non-IOCTL channel. It can be used to configure the device.
2774 + * TODO: to be implemented in the future, if duplex communication with user
2775 + * space is required.
2776 + */
2777 +static ssize_t pfe_cdev_write(struct file *fp, const char *buf,
2778 + size_t len, loff_t *off)
2779 +{
2780 + pr_info("PFE CDEV Write operation not supported!\n");
2781 +
2782 + return -EFAULT;
2783 +}
2784 +
2785 +static int pfe_cdev_release(struct inode *inp, struct file *fp)
2786 +{
2787 + if (g_trigger) {
2788 + free_irq(pfe->hif_irq, g_trigger);
2789 + eventfd_ctx_put(g_trigger);
2790 + g_trigger = NULL;
2791 + }
2792 +
2793 + pr_info("PFE_CDEV: Device successfully closed\n");
2794 + return 0;
2795 +}
2796 +
2797 +/*
2798 + * hif_us_isr -
2799 + * This ISR routine processes Rx/Tx done interrupts from the HIF hardware block
2800 + */
2801 +static irqreturn_t hif_us_isr(int irq, void *arg)
2802 +{
2803 + struct eventfd_ctx *trigger = (struct eventfd_ctx *)arg;
2804 + int int_status;
2805 + int int_enable_mask;
2806 +
2807 + /* Read HIF interrupt source register */
2808 + int_status = readl_relaxed(HIF_INT_SRC);
2809 + int_enable_mask = readl_relaxed(HIF_INT_ENABLE);
2810 +
2811 + if ((int_status & HIF_INT) == 0)
2812 + return IRQ_NONE;
2813 +
2814 + if (int_status & HIF_RXPKT_INT) {
2815 + int_enable_mask &= ~(HIF_RXPKT_INT);
2816 + /* Disable interrupts, they will be enabled after
2817 + * they are serviced
2818 + */
2819 + writel_relaxed(int_enable_mask, HIF_INT_ENABLE);
2820 +
2821 + eventfd_signal(trigger, 1);
2822 + }
2823 +
2824 + return IRQ_HANDLED;
2825 +}
2826 +
2827 +#define PFE_INTR_COAL_USECS 100
2828 +static long pfe_cdev_ioctl(struct file *fp, unsigned int cmd,
2829 + unsigned long arg)
2830 +{
2831 + int ret = -EFAULT;
2832 + int __user *argp = (int __user *)arg;
2833 +
2834 + pr_debug("PFE CDEV IOCTL Called with cmd=(%u)\n", cmd);
2835 +
2836 + switch (cmd) {
2837 + case PFE_CDEV_ETH0_STATE_GET:
2838 + /* Return an unsigned int (link state) for ETH0 */
2839 + *argp = link_states[0].state;
2840 + pr_debug("Returning state=%d for ETH0\n", *argp);
2841 + ret = 0;
2842 + break;
2843 + case PFE_CDEV_ETH1_STATE_GET:
2844 + /* Return an unsigned int (link state) for ETH1 */
2845 + *argp = link_states[1].state;
2846 + pr_debug("Returning state=%d for ETH1\n", *argp);
2847 + ret = 0;
2848 + break;
2849 + case PFE_CDEV_HIF_INTR_EN:
2850 + /* Return success/failure */
2851 + g_trigger = eventfd_ctx_fdget(*argp);
2852 + if (IS_ERR(g_trigger))
2853 + return PTR_ERR(g_trigger);
2854 + ret = request_irq(pfe->hif_irq, hif_us_isr, 0, "pfe_hif",
2855 + g_trigger);
2856 + if (ret) {
2857 + pr_err("%s: failed to get the hif IRQ = %d\n",
2858 + __func__, pfe->hif_irq);
2859 + eventfd_ctx_put(g_trigger);
2860 + g_trigger = NULL;
2861 + }
2862 + writel((PFE_INTR_COAL_USECS * (pfe->ctrl.sys_clk / 1000)) |
2863 + HIF_INT_COAL_ENABLE, HIF_INT_COAL);
2864 +
2865 + pr_debug("request_irq for hif interrupt: %d\n", pfe->hif_irq);
2866 + ret = 0;
2867 + break;
2868 + default:
2869 + pr_info("Unsupport cmd (%d) for PFE CDEV.\n", cmd);
2870 + break;
2871 + }
2872 +
2873 + return ret;
2874 +}
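For context, a hedged userspace sketch of the intended call sequence; the device node name follows PFE_CDEV_NAME, and the PFE_CDEV_* ioctl numbers come from the driver's pfe_cdev.h (a copy of those definitions is assumed available to the program):

/* Userspace sketch: query ETH0 link state, then arm HIF RX notification
 * through an eventfd that the driver signals from hif_us_isr().
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include "pfe_cdev.h"	/* assumed userspace copy of the ioctl defines */

int main(void)
{
	int fd = open("/dev/pfe_us_cdev", O_RDWR);
	int efd = eventfd(0, 0);
	int state = -1;

	if (fd < 0 || efd < 0)
		return 1;

	ioctl(fd, PFE_CDEV_ETH0_STATE_GET, &state);
	ioctl(fd, PFE_CDEV_HIF_INTR_EN, &efd);	/* driver signals efd on RX */
	printf("eth0 link state: %d\n", state);

	close(efd);
	close(fd);
	return 0;
}

A read() on the same descriptor returns the whole link_states array, matching pfe_cdev_read() above.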
2875 +
2876 +static unsigned int pfe_cdev_poll(struct file *fp,
2877 + struct poll_table_struct *wait)
2878 +{
2879 + pr_info("PFE CDEV poll method not supported\n");
2880 + return 0;
2881 +}
2882 +
2883 +static const struct file_operations pfe_cdev_fops = {
2884 + .open = pfe_cdev_open,
2885 + .read = pfe_cdev_read,
2886 + .write = pfe_cdev_write,
2887 + .release = pfe_cdev_release,
2888 + .unlocked_ioctl = pfe_cdev_ioctl,
2889 + .poll = pfe_cdev_poll,
2890 +};
2891 +
2892 +int pfe_cdev_init(void)
2893 +{
2894 + int ret;
2895 +
2896 + pr_debug("PFE CDEV initialization begin\n");
2897 +
2898 + /* Register the major number for the device */
2899 + pfe_majno = register_chrdev(0, PFE_CDEV_NAME, &pfe_cdev_fops);
2900 + if (pfe_majno < 0) {
2901 + pr_err("Unable to register PFE CDEV. PFE CDEV not available\n");
2902 + ret = pfe_majno;
2903 + goto cleanup;
2904 + }
2905 +
2906 + pr_debug("PFE CDEV assigned major number: %d\n", pfe_majno);
2907 +
2908 + /* Register the class for the device */
2909 + pfe_char_class = class_create(THIS_MODULE, PFE_CLASS_NAME);
2910 + if (IS_ERR(pfe_char_class)) {
2911 + pr_err(
2912 + "Failed to init class for PFE CDEV. PFE CDEV not available.\n");
2913 + ret = PTR_ERR(pfe_char_class);
2914 + goto cleanup;
2915 + }
2916 +
2917 + pr_debug("PFE CDEV Class created successfully.\n");
2918 +
2919 + /* Create the device without any parent and without any callback data */
2920 + pfe_char_dev = device_create(pfe_char_class, NULL,
2921 + MKDEV(pfe_majno, 0), NULL,
2922 + PFE_CDEV_NAME);
2923 + if (IS_ERR(pfe_char_dev)) {
2924 + pr_err("Unable to PFE CDEV device. PFE CDEV not available.\n");
2925 + ret = PTR_ERR(pfe_char_dev);
2926 + goto cleanup;
2927 + }
2928 +
2929 + /* Information structure being shared with the userspace */
2930 + memset(link_states, 0, sizeof(struct pfe_shared_info) *
2931 + PFE_CDEV_ETH_COUNT);
2932 +
2933 + pr_info("PFE CDEV created: %s\n", PFE_CDEV_NAME);
2934 +
2935 + ret = 0;
2936 + return ret;
2937 +
2938 +cleanup:
2939 +	if (!IS_ERR_OR_NULL(pfe_char_class))
2940 + class_destroy(pfe_char_class);
2941 +
2942 + if (pfe_majno > 0)
2943 + unregister_chrdev(pfe_majno, PFE_CDEV_NAME);
2944 +
2945 + return ret;
2946 +}
2947 +
2948 +void pfe_cdev_exit(void)
2949 +{
2950 + if (!IS_ERR(pfe_char_dev))
2951 + device_destroy(pfe_char_class, MKDEV(pfe_majno, 0));
2952 +
2953 + if (!IS_ERR(pfe_char_class)) {
2954 + class_unregister(pfe_char_class);
2955 + class_destroy(pfe_char_class);
2956 + }
2957 +
2958 + if (pfe_majno > 0)
2959 + unregister_chrdev(pfe_majno, PFE_CDEV_NAME);
2960 +
2961 + /* reset the variables */
2962 + pfe_majno = 0;
2963 + pfe_char_class = NULL;
2964 + pfe_char_dev = NULL;
2965 +
2966 + pr_info("PFE CDEV Removed.\n");
2967 +}
2968 --- /dev/null
2969 +++ b/drivers/staging/fsl_ppfe/pfe_cdev.h
2970 @@ -0,0 +1,41 @@
2971 +/* SPDX-License-Identifier: GPL-2.0+ */
2972 +/*
2973 + * Copyright 2018 NXP
2974 + */
2975 +
2976 +#ifndef _PFE_CDEV_H_
2977 +#define _PFE_CDEV_H_
2978 +
2979 +#include <linux/init.h>
2980 +#include <linux/device.h>
2981 +#include <linux/err.h>
2982 +#include <linux/kernel.h>
2983 +#include <linux/fs.h>
2984 +#include <linux/uaccess.h>
2985 +#include <linux/poll.h>
2986 +
2987 +#define PFE_CDEV_NAME "pfe_us_cdev"
2988 +#define PFE_CLASS_NAME "ppfe_us"
2989 +
2990 +/* Extracted from ls1012a_pfe_platform_data: there are 3 interfaces
2991 + * supported by the PFE driver. Update this if the number of eth
2992 + * devices changes.
2993 + */
2994 +#define PFE_CDEV_ETH_COUNT 3
2995 +
2996 +struct pfe_shared_info {
2997 + uint32_t phy_id; /* Link phy ID */
2998 +	uint8_t state;   /* Link state: 0 = down, 1 = up */
2999 +};
3000 +
3001 +extern struct pfe_shared_info link_states[PFE_CDEV_ETH_COUNT];
3002 +
3003 +/* IOCTL Commands */
3004 +#define PFE_CDEV_ETH0_STATE_GET _IOR('R', 0, int)
3005 +#define PFE_CDEV_ETH1_STATE_GET _IOR('R', 1, int)
3006 +#define PFE_CDEV_HIF_INTR_EN _IOWR('R', 2, int)
3007 +
3008 +int pfe_cdev_init(void);
3009 +void pfe_cdev_exit(void);
3010 +
3011 +#endif /* _PFE_CDEV_H_ */
3012 --- /dev/null
3013 +++ b/drivers/staging/fsl_ppfe/pfe_ctrl.c
3014 @@ -0,0 +1,226 @@
3015 +// SPDX-License-Identifier: GPL-2.0+
3016 +/*
3017 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
3018 + * Copyright 2017 NXP
3019 + */
3020 +
3021 +#include <linux/kernel.h>
3022 +#include <linux/sched.h>
3023 +#include <linux/module.h>
3024 +#include <linux/list.h>
3025 +#include <linux/kthread.h>
3026 +
3027 +#include "pfe_mod.h"
3028 +#include "pfe_ctrl.h"
3029 +
3030 +#define TIMEOUT_MS 1000
3031 +
3032 +int relax(unsigned long end)
3033 +{
3034 + if (time_after(jiffies, end)) {
3035 + if (time_after(jiffies, end + (TIMEOUT_MS * HZ) / 1000))
3036 + return -1;
3037 +
3038 + if (need_resched())
3039 + schedule();
3040 + }
3041 +
3042 + return 0;
3043 +}
3044 +
3045 +void pfe_ctrl_suspend(struct pfe_ctrl *ctrl)
3046 +{
3047 + int id;
3048 +
3049 + mutex_lock(&ctrl->mutex);
3050 +
3051 + for (id = CLASS0_ID; id <= CLASS_MAX_ID; id++)
3052 + pe_dmem_write(id, cpu_to_be32(0x1), CLASS_DM_RESUME, 4);
3053 +
3054 + for (id = TMU0_ID; id <= TMU_MAX_ID; id++) {
3055 + if (id == TMU2_ID)
3056 + continue;
3057 + pe_dmem_write(id, cpu_to_be32(0x1), TMU_DM_RESUME, 4);
3058 + }
3059 +
3060 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
3061 + pe_dmem_write(UTIL_ID, cpu_to_be32(0x1), UTIL_DM_RESUME, 4);
3062 +#endif
3063 + mutex_unlock(&ctrl->mutex);
3064 +}
3065 +
3066 +void pfe_ctrl_resume(struct pfe_ctrl *ctrl)
3067 +{
3068 + int pe_mask = CLASS_MASK | TMU_MASK;
3069 +
3070 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
3071 + pe_mask |= UTIL_MASK;
3072 +#endif
3073 + mutex_lock(&ctrl->mutex);
3074 + pe_start(&pfe->ctrl, pe_mask);
3075 + mutex_unlock(&ctrl->mutex);
3076 +}
3077 +
3078 +/* PE sync stop.
3079 + * Stops packet processing for a list of PE's (specified using a bitmask).
3080 + * The caller must hold ctrl->mutex.
3081 + *
3082 + * @param ctrl Control context
3083 + * @param pe_mask Mask of PE id's to stop
3084 + *
3085 + */
3086 +int pe_sync_stop(struct pfe_ctrl *ctrl, int pe_mask)
3087 +{
3088 + struct pe_sync_mailbox *mbox;
3089 + int pe_stopped = 0;
3090 + unsigned long end = jiffies + 2;
3091 + int i;
3092 +
3093 + pe_mask &= 0x2FF; /*Exclude Util + TMU2 */
3094 +
3095 + for (i = 0; i < MAX_PE; i++)
3096 + if (pe_mask & (1 << i)) {
3097 + mbox = (void *)ctrl->sync_mailbox_baseaddr[i];
3098 +
3099 +			pe_dmem_write(i, cpu_to_be32(0x1),
3100 +				      (unsigned long)&mbox->stop, 4);
3101 + }
3102 +
3103 + while (pe_stopped != pe_mask) {
3104 + for (i = 0; i < MAX_PE; i++)
3105 + if ((pe_mask & (1 << i)) && !(pe_stopped & (1 << i))) {
3106 + mbox = (void *)ctrl->sync_mailbox_baseaddr[i];
3107 +
3108 +				if (pe_dmem_read(i,
3109 +						 (unsigned long)&mbox->stopped, 4) &
3110 +				    cpu_to_be32(0x1))
3111 + pe_stopped |= (1 << i);
3112 + }
3113 +
3114 + if (relax(end) < 0)
3115 + goto err;
3116 + }
3117 +
3118 + return 0;
3119 +
3120 +err:
3121 + pr_err("%s: timeout, %x %x\n", __func__, pe_mask, pe_stopped);
3122 +
3123 + for (i = 0; i < MAX_PE; i++)
3124 + if (pe_mask & (1 << i)) {
3125 + mbox = (void *)ctrl->sync_mailbox_baseaddr[i];
3126 +
3127 +			pe_dmem_write(i, cpu_to_be32(0x0),
3128 +				      (unsigned long)&mbox->stop, 4);
3129 + }
3130 +
3131 + return -EIO;
3132 +}
3133 +
3134 +/* PE start.
3135 + * Starts packet processing for a list of PE's (specified using a bitmask).
3136 + * The caller must hold ctrl->mutex.
3137 + *
3138 + * @param ctrl Control context
3139 + * @param pe_mask Mask of PE id's to start
3140 + *
3141 + */
3142 +void pe_start(struct pfe_ctrl *ctrl, int pe_mask)
3143 +{
3144 + struct pe_sync_mailbox *mbox;
3145 + int i;
3146 +
3147 + for (i = 0; i < MAX_PE; i++)
3148 + if (pe_mask & (1 << i)) {
3149 + mbox = (void *)ctrl->sync_mailbox_baseaddr[i];
3150 +
3151 +			pe_dmem_write(i, cpu_to_be32(0x0),
3152 +				      (unsigned long)&mbox->stop, 4);
3153 + }
3154 +}
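+
+/*
+ * A typical quiesce/modify/restart sequence built from the two helpers
+ * above (a sketch; both calls expect ctrl->mutex to be held, as their
+ * comments note):
+ *
+ *   mutex_lock(&ctrl->mutex);
+ *   if (!pe_sync_stop(ctrl, CLASS_MASK)) {
+ *       // PEs are stopped: safe to rewrite shared DMEM state here
+ *       pe_start(ctrl, CLASS_MASK);
+ *   }
+ *   mutex_unlock(&ctrl->mutex);
+ */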
3155 +
3156 +/* This function will ensure all PEs are put in to idle state */
3157 +int pe_reset_all(struct pfe_ctrl *ctrl)
3158 +{
3159 + struct pe_sync_mailbox *mbox;
3160 + int pe_stopped = 0;
3161 + unsigned long end = jiffies + 2;
3162 + int i;
3163 + int pe_mask = CLASS_MASK | TMU_MASK;
3164 +
3165 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
3166 + pe_mask |= UTIL_MASK;
3167 +#endif
3168 +
3169 + for (i = 0; i < MAX_PE; i++)
3170 + if (pe_mask & (1 << i)) {
3171 + mbox = (void *)ctrl->sync_mailbox_baseaddr[i];
3172 +
3173 +			pe_dmem_write(i, cpu_to_be32(0x2),
3174 +				      (unsigned long)&mbox->stop, 4);
3175 + }
3176 +
3177 + while (pe_stopped != pe_mask) {
3178 + for (i = 0; i < MAX_PE; i++)
3179 + if ((pe_mask & (1 << i)) && !(pe_stopped & (1 << i))) {
3180 + mbox = (void *)ctrl->sync_mailbox_baseaddr[i];
3181 +
3182 +				if (pe_dmem_read(i,
3183 +						 (unsigned long)&mbox->stopped, 4) &
3184 +				    cpu_to_be32(0x1))
3185 + pe_stopped |= (1 << i);
3186 + }
3187 +
3188 + if (relax(end) < 0)
3189 + goto err;
3190 + }
3191 +
3192 + return 0;
3193 +
3194 +err:
3195 + pr_err("%s: timeout, %x %x\n", __func__, pe_mask, pe_stopped);
3196 + return -EIO;
3197 +}
3198 +
3199 +int pfe_ctrl_init(struct pfe *pfe)
3200 +{
3201 + struct pfe_ctrl *ctrl = &pfe->ctrl;
3202 + int id;
3203 +
3204 + pr_info("%s\n", __func__);
3205 +
3206 + mutex_init(&ctrl->mutex);
3207 + spin_lock_init(&ctrl->lock);
3208 +
3209 + for (id = CLASS0_ID; id <= CLASS_MAX_ID; id++) {
3210 + ctrl->sync_mailbox_baseaddr[id] = CLASS_DM_SYNC_MBOX;
3211 + ctrl->msg_mailbox_baseaddr[id] = CLASS_DM_MSG_MBOX;
3212 + }
3213 +
3214 + for (id = TMU0_ID; id <= TMU_MAX_ID; id++) {
3215 + if (id == TMU2_ID)
3216 + continue;
3217 + ctrl->sync_mailbox_baseaddr[id] = TMU_DM_SYNC_MBOX;
3218 + ctrl->msg_mailbox_baseaddr[id] = TMU_DM_MSG_MBOX;
3219 + }
3220 +
3221 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
3222 + ctrl->sync_mailbox_baseaddr[UTIL_ID] = UTIL_DM_SYNC_MBOX;
3223 + ctrl->msg_mailbox_baseaddr[UTIL_ID] = UTIL_DM_MSG_MBOX;
3224 +#endif
3225 +
3226 + ctrl->hash_array_baseaddr = pfe->ddr_baseaddr + ROUTE_TABLE_BASEADDR;
3227 + ctrl->hash_array_phys_baseaddr = pfe->ddr_phys_baseaddr +
3228 + ROUTE_TABLE_BASEADDR;
3229 +
3230 + ctrl->dev = pfe->dev;
3231 +
3232 + pr_info("%s finished\n", __func__);
3233 +
3234 + return 0;
3235 +}
3236 +
3237 +void pfe_ctrl_exit(struct pfe *pfe)
3238 +{
3239 + pr_info("%s\n", __func__);
3240 +}
3241 --- /dev/null
3242 +++ b/drivers/staging/fsl_ppfe/pfe_ctrl.h
3243 @@ -0,0 +1,100 @@
3244 +/* SPDX-License-Identifier: GPL-2.0+ */
3245 +/*
3246 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
3247 + * Copyright 2017 NXP
3248 + */
3249 +
3250 +#ifndef _PFE_CTRL_H_
3251 +#define _PFE_CTRL_H_
3252 +
3253 +#include <linux/dmapool.h>
3254 +
3255 +#include "pfe/pfe.h"
3256 +
3257 +#define DMA_BUF_SIZE_128	0x80	/* enough for 1 conntrack */
3258 +/* enough for 2 conntracks, 1 bridge entry or 1 multicast entry */
3259 +#define DMA_BUF_SIZE_256	0x100
3260 +/* 512-byte DMA-allocated buffers used by the RTP relay feature */
3261 +#define DMA_BUF_SIZE_512	0x200
3262 +#define DMA_BUF_MIN_ALIGNMENT	8
3263 +/* bursts cannot cross a 4 KiB boundary */
3264 +#define DMA_BUF_BOUNDARY	(4 * 1024)
3265 +
3266 +#define CMD_TX_ENABLE 0x0501
3267 +#define CMD_TX_DISABLE 0x0502
3268 +
3269 +#define CMD_RX_LRO 0x0011
3270 +#define CMD_PKTCAP_ENABLE 0x0d01
3271 +#define CMD_QM_EXPT_RATE 0x020c
3272 +
3273 +#define CLASS_DM_SH_STATIC (0x800)
3274 +#define CLASS_DM_CPU_TICKS (CLASS_DM_SH_STATIC)
3275 +#define CLASS_DM_SYNC_MBOX (0x808)
3276 +#define CLASS_DM_MSG_MBOX (0x810)
3277 +#define CLASS_DM_DROP_CNTR (0x820)
3278 +#define CLASS_DM_RESUME (0x854)
3279 +#define CLASS_DM_PESTATUS (0x860)
3280 +#define CLASS_DM_CRC_VALIDATED (0x14b0)
3281 +
3282 +#define TMU_DM_SH_STATIC (0x80)
3283 +#define TMU_DM_CPU_TICKS (TMU_DM_SH_STATIC)
3284 +#define TMU_DM_SYNC_MBOX (0x88)
3285 +#define TMU_DM_MSG_MBOX (0x90)
3286 +#define TMU_DM_RESUME (0xA0)
3287 +#define TMU_DM_PESTATUS (0xB0)
3288 +#define TMU_DM_CONTEXT (0x300)
3289 +#define TMU_DM_TX_TRANS (0x480)
3290 +
3291 +#define UTIL_DM_SH_STATIC (0x0)
3292 +#define UTIL_DM_CPU_TICKS (UTIL_DM_SH_STATIC)
3293 +#define UTIL_DM_SYNC_MBOX (0x8)
3294 +#define UTIL_DM_MSG_MBOX (0x10)
3295 +#define UTIL_DM_DROP_CNTR (0x20)
3296 +#define UTIL_DM_RESUME (0x40)
3297 +#define UTIL_DM_PESTATUS (0x50)
3298 +
3299 +struct pfe_ctrl {
3300 + struct mutex mutex; /* to serialize pfe control access */
3301 + spinlock_t lock;
3302 +
3303 + void *dma_pool;
3304 + void *dma_pool_512;
3305 + void *dma_pool_128;
3306 +
3307 + struct device *dev;
3308 +
3309 + void *hash_array_baseaddr; /*
3310 + * Virtual base address of
3311 + * the conntrack hash array
3312 + */
3313 + unsigned long hash_array_phys_baseaddr; /*
3314 + * Physical base address of
3315 + * the conntrack hash array
3316 + */
3317 +
3318 + int (*event_cb)(u16, u16, u16*);
3319 +
3320 + unsigned long sync_mailbox_baseaddr[MAX_PE]; /*
3321 + * Sync mailbox PFE
3322 + * internal address,
3323 + * initialized
3324 + * when parsing elf images
3325 + */
3326 + unsigned long msg_mailbox_baseaddr[MAX_PE]; /*
3327 + * Msg mailbox PFE internal
3328 + * address, initialized
3329 + * when parsing elf images
3330 + */
3331 + unsigned int sys_clk; /* AXI clock value, in KHz */
3332 +};
3333 +
3334 +int pfe_ctrl_init(struct pfe *pfe);
3335 +void pfe_ctrl_exit(struct pfe *pfe);
3336 +int pe_sync_stop(struct pfe_ctrl *ctrl, int pe_mask);
3337 +void pe_start(struct pfe_ctrl *ctrl, int pe_mask);
3338 +int pe_reset_all(struct pfe_ctrl *ctrl);
3339 +void pfe_ctrl_suspend(struct pfe_ctrl *ctrl);
3340 +void pfe_ctrl_resume(struct pfe_ctrl *ctrl);
3341 +int relax(unsigned long end);
3342 +
3343 +#endif /* _PFE_CTRL_H_ */
3344 --- /dev/null
3345 +++ b/drivers/staging/fsl_ppfe/pfe_debugfs.c
3346 @@ -0,0 +1,99 @@
3347 +// SPDX-License-Identifier: GPL-2.0+
3348 +/*
3349 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
3350 + * Copyright 2017 NXP
3351 + */
3352 +
3353 +#include <linux/module.h>
3354 +#include <linux/debugfs.h>
3355 +#include <linux/platform_device.h>
3356 +
3357 +#include "pfe_mod.h"
3358 +
3359 +static int dmem_show(struct seq_file *s, void *unused)
3360 +{
3361 + u32 dmem_addr, val;
3362 + int id = (long int)s->private;
3363 + int i;
3364 +
3365 + for (dmem_addr = 0; dmem_addr < CLASS_DMEM_SIZE; dmem_addr += 8 * 4) {
3366 + seq_printf(s, "%04x:", dmem_addr);
3367 +
3368 + for (i = 0; i < 8; i++) {
3369 + val = pe_dmem_read(id, dmem_addr + i * 4, 4);
3370 + seq_printf(s, " %02x %02x %02x %02x", val & 0xff,
3371 + (val >> 8) & 0xff, (val >> 16) & 0xff,
3372 + (val >> 24) & 0xff);
3373 + }
3374 +
3375 + seq_puts(s, "\n");
3376 + }
3377 +
3378 + return 0;
3379 +}
3380 +
3381 +static int dmem_open(struct inode *inode, struct file *file)
3382 +{
3383 + return single_open(file, dmem_show, inode->i_private);
3384 +}
3385 +
3386 +static const struct file_operations dmem_fops = {
3387 + .open = dmem_open,
3388 + .read = seq_read,
3389 + .llseek = seq_lseek,
3390 + .release = single_release,
3391 +};
3392 +
3393 +int pfe_debugfs_init(struct pfe *pfe)
3394 +{
3395 + struct dentry *d;
3396 +
3397 + pr_info("%s\n", __func__);
3398 +
3399 + pfe->dentry = debugfs_create_dir("pfe", NULL);
3400 + if (IS_ERR_OR_NULL(pfe->dentry))
3401 + goto err_dir;
3402 +
3403 + d = debugfs_create_file("pe0_dmem", 0444, pfe->dentry, (void *)0,
3404 + &dmem_fops);
3405 + if (IS_ERR_OR_NULL(d))
3406 + goto err_pe;
3407 +
3408 + d = debugfs_create_file("pe1_dmem", 0444, pfe->dentry, (void *)1,
3409 + &dmem_fops);
3410 + if (IS_ERR_OR_NULL(d))
3411 + goto err_pe;
3412 +
3413 + d = debugfs_create_file("pe2_dmem", 0444, pfe->dentry, (void *)2,
3414 + &dmem_fops);
3415 + if (IS_ERR_OR_NULL(d))
3416 + goto err_pe;
3417 +
3418 + d = debugfs_create_file("pe3_dmem", 0444, pfe->dentry, (void *)3,
3419 + &dmem_fops);
3420 + if (IS_ERR_OR_NULL(d))
3421 + goto err_pe;
3422 +
3423 + d = debugfs_create_file("pe4_dmem", 0444, pfe->dentry, (void *)4,
3424 + &dmem_fops);
3425 + if (IS_ERR_OR_NULL(d))
3426 + goto err_pe;
3427 +
3428 + d = debugfs_create_file("pe5_dmem", 0444, pfe->dentry, (void *)5,
3429 + &dmem_fops);
3430 + if (IS_ERR_OR_NULL(d))
3431 + goto err_pe;
3432 +
3433 + return 0;
3434 +
3435 +err_pe:
3436 + debugfs_remove_recursive(pfe->dentry);
3437 +
3438 +err_dir:
3439 + return -1;
3440 +}
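+
+/*
+ * With debugfs mounted, the files created above expose each PE's DMEM as
+ * a hex dump: dmem_show() prints a "%04x:" offset followed by eight
+ * 32-bit words, least-significant byte first. A sketch (values are
+ * illustrative):
+ *
+ *   # mount -t debugfs none /sys/kernel/debug
+ *   # head -1 /sys/kernel/debug/pfe/pe0_dmem
+ *   0000: 00 00 00 00 01 00 00 00 ...
+ */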
3441 +
3442 +void pfe_debugfs_exit(struct pfe *pfe)
3443 +{
3444 + debugfs_remove_recursive(pfe->dentry);
3445 +}
3446 --- /dev/null
3447 +++ b/drivers/staging/fsl_ppfe/pfe_debugfs.h
3448 @@ -0,0 +1,13 @@
3449 +/* SPDX-License-Identifier: GPL-2.0+ */
3450 +/*
3451 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
3452 + * Copyright 2017 NXP
3453 + */
3454 +
3455 +#ifndef _PFE_DEBUGFS_H_
3456 +#define _PFE_DEBUGFS_H_
3457 +
3458 +int pfe_debugfs_init(struct pfe *pfe);
3459 +void pfe_debugfs_exit(struct pfe *pfe);
3460 +
3461 +#endif /* _PFE_DEBUGFS_H_ */
3462 --- /dev/null
3463 +++ b/drivers/staging/fsl_ppfe/pfe_eth.c
3464 @@ -0,0 +1,2587 @@
3465 +// SPDX-License-Identifier: GPL-2.0+
3466 +/*
3467 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
3468 + * Copyright 2017 NXP
3469 + */
3470 +
3471 +/* @pfe_eth.c.
3472 + * Ethernet driver that handles the exception path for the PFE.
3473 + * - uses HIF functions to send/receive packets.
3474 + * - uses ctrl function to start/stop interfaces.
3475 + * - uses direct register accesses to control phy operation.
3476 + */
3477 +#include <linux/version.h>
3478 +#include <linux/kernel.h>
3479 +#include <linux/interrupt.h>
3480 +#include <linux/dma-mapping.h>
3481 +#include <linux/dmapool.h>
3482 +#include <linux/netdevice.h>
3483 +#include <linux/etherdevice.h>
3484 +#include <linux/ethtool.h>
3485 +#include <linux/mii.h>
3486 +#include <linux/phy.h>
3487 +#include <linux/timer.h>
3488 +#include <linux/hrtimer.h>
3489 +#include <linux/platform_device.h>
3490 +
3491 +#include <net/ip.h>
3492 +#include <net/sock.h>
3493 +
3494 +#include <linux/of.h>
3495 +#include <linux/of_mdio.h>
3496 +
3497 +#include <linux/io.h>
3498 +#include <asm/irq.h>
3499 +#include <linux/delay.h>
3500 +#include <linux/regmap.h>
3501 +#include <linux/i2c.h>
3502 +#include <linux/sys_soc.h>
3503 +
3504 +#if defined(CONFIG_NF_CONNTRACK_MARK)
3505 +#include <net/netfilter/nf_conntrack.h>
3506 +#endif
3507 +
3508 +#include "pfe_mod.h"
3509 +#include "pfe_eth.h"
3510 +#include "pfe_cdev.h"
3511 +
3512 +#define LS1012A_REV_1_0 0x87040010
3513 +
3514 +bool pfe_use_old_dts_phy;
3515 +bool pfe_errata_a010897;
3516 +
3517 +static void *cbus_emac_base[3];
3518 +static void *cbus_gpi_base[3];
3519 +
3520 +/* Forward Declaration */
3521 +static void pfe_eth_exit_one(struct pfe_eth_priv_s *priv);
3522 +static void pfe_eth_flush_tx(struct pfe_eth_priv_s *priv);
3523 +static void pfe_eth_flush_txQ(struct pfe_eth_priv_s *priv, int tx_q_num, int
3524 + from_tx, int n_desc);
3525 +
3526 +/* MDIO registers */
3527 +#define MDIO_SGMII_CR 0x00
3528 +#define MDIO_SGMII_SR 0x01
3529 +#define MDIO_SGMII_DEV_ABIL_SGMII 0x04
3530 +#define MDIO_SGMII_LINK_TMR_L 0x12
3531 +#define MDIO_SGMII_LINK_TMR_H 0x13
3532 +#define MDIO_SGMII_IF_MODE 0x14
3533 +
3534 +/* SGMII Control defines */
3535 +#define SGMII_CR_RST 0x8000
3536 +#define SGMII_CR_AN_EN 0x1000
3537 +#define SGMII_CR_RESTART_AN 0x0200
3538 +#define SGMII_CR_FD 0x0100
3539 +#define SGMII_CR_SPEED_SEL1_1G 0x0040
3540 +#define SGMII_CR_DEF_VAL (SGMII_CR_AN_EN | SGMII_CR_FD | \
3541 + SGMII_CR_SPEED_SEL1_1G)
3542 +
3543 +/* SGMII IF Mode */
3544 +#define SGMII_DUPLEX_HALF 0x10
3545 +#define SGMII_SPEED_10MBPS 0x00
3546 +#define SGMII_SPEED_100MBPS 0x04
3547 +#define SGMII_SPEED_1GBPS 0x08
3548 +#define SGMII_USE_SGMII_AN 0x02
3549 +#define SGMII_EN 0x01
3550 +
3551 +/* SGMII Device Ability for SGMII */
3552 +#define SGMII_DEV_ABIL_ACK 0x4000
3553 +#define SGMII_DEV_ABIL_EEE_CLK_STP_EN 0x0100
3554 +#define SGMII_DEV_ABIL_SGMII 0x0001
3555 +
3556 +unsigned int gemac_regs[] = {
3557 + 0x0004, /* Interrupt event */
3558 + 0x0008, /* Interrupt mask */
3559 + 0x0024, /* Ethernet control */
3560 + 0x0064, /* MIB Control/Status */
3561 + 0x0084, /* Receive control/status */
3562 + 0x00C4, /* Transmit control */
3563 + 0x00E4, /* Physical address low */
3564 + 0x00E8, /* Physical address high */
3565 + 0x0144, /* Transmit FIFO Watermark and Store and Forward Control*/
3566 + 0x0190, /* Receive FIFO Section Full Threshold */
3567 + 0x01A0, /* Transmit FIFO Section Empty Threshold */
3568 + 0x01B0, /* Frame Truncation Length */
3569 +};
3570 +
3571 +const struct soc_device_attribute ls1012a_rev1_soc_attr[] = {
3572 + { .family = "QorIQ LS1012A",
3573 + .soc_id = "svr:0x87040010",
3574 + .revision = "1.0",
3575 + .data = NULL },
3576 + { },
3577 +};
3578 +
3579 +/********************************************************************/
3580 +/* SYSFS INTERFACE */
3581 +/********************************************************************/
3582 +
3583 +#ifdef PFE_ETH_NAPI_STATS
3584 +/*
3585 + * pfe_eth_show_napi_stats
3586 + */
3587 +static ssize_t pfe_eth_show_napi_stats(struct device *dev,
3588 + struct device_attribute *attr,
3589 + char *buf)
3590 +{
3591 + struct pfe_eth_priv_s *priv = netdev_priv(to_net_dev(dev));
3592 + ssize_t len = 0;
3593 +
3594 + len += sprintf(buf + len, "sched: %u\n",
3595 + priv->napi_counters[NAPI_SCHED_COUNT]);
3596 + len += sprintf(buf + len, "poll: %u\n",
3597 + priv->napi_counters[NAPI_POLL_COUNT]);
3598 + len += sprintf(buf + len, "packet: %u\n",
3599 + priv->napi_counters[NAPI_PACKET_COUNT]);
3600 + len += sprintf(buf + len, "budget: %u\n",
3601 + priv->napi_counters[NAPI_FULL_BUDGET_COUNT]);
3602 + len += sprintf(buf + len, "desc: %u\n",
3603 + priv->napi_counters[NAPI_DESC_COUNT]);
3604 +
3605 + return len;
3606 +}
3607 +
3608 +/*
3609 + * pfe_eth_set_napi_stats
3610 + */
3611 +static ssize_t pfe_eth_set_napi_stats(struct device *dev,
3612 + struct device_attribute *attr,
3613 + const char *buf, size_t count)
3614 +{
3615 + struct pfe_eth_priv_s *priv = netdev_priv(to_net_dev(dev));
3616 +
3617 + memset(priv->napi_counters, 0, sizeof(priv->napi_counters));
3618 +
3619 + return count;
3620 +}
3621 +#endif
3622 +#ifdef PFE_ETH_TX_STATS
3623 +/* pfe_eth_show_tx_stats
3624 + *
3625 + */
3626 +static ssize_t pfe_eth_show_tx_stats(struct device *dev,
3627 + struct device_attribute *attr,
3628 + char *buf)
3629 +{
3630 + struct pfe_eth_priv_s *priv = netdev_priv(to_net_dev(dev));
3631 + ssize_t len = 0;
3632 + int i;
3633 +
3634 + len += sprintf(buf + len, "TX queues stats:\n");
3635 +
3636 + for (i = 0; i < emac_txq_cnt; i++) {
3637 + struct netdev_queue *tx_queue = netdev_get_tx_queue(priv->ndev,
3638 + i);
3639 +
3640 + len += sprintf(buf + len, "\n");
3641 + __netif_tx_lock_bh(tx_queue);
3642 +
3643 + hif_tx_lock(&pfe->hif);
3644 + len += sprintf(buf + len,
3645 + "Queue %2d : credits = %10d\n"
3646 + , i, hif_lib_tx_credit_avail(pfe, priv->id, i));
3647 + len += sprintf(buf + len,
3648 + " tx packets = %10d\n"
3649 + , pfe->tmu_credit.tx_packets[priv->id][i]);
3650 + hif_tx_unlock(&pfe->hif);
3651 +
3652 +		/* Don't output additional stats if the queue was never used */
3653 + if (!pfe->tmu_credit.tx_packets[priv->id][i])
3654 + goto skip;
3655 +
3656 + len += sprintf(buf + len,
3657 + " clean_fail = %10d\n"
3658 + , priv->clean_fail[i]);
3659 + len += sprintf(buf + len,
3660 + " stop_queue = %10d\n"
3661 + , priv->stop_queue_total[i]);
3662 + len += sprintf(buf + len,
3663 + " stop_queue_hif = %10d\n"
3664 + , priv->stop_queue_hif[i]);
3665 + len += sprintf(buf + len,
3666 + " stop_queue_hif_client = %10d\n"
3667 + , priv->stop_queue_hif_client[i]);
3668 + len += sprintf(buf + len,
3669 + " stop_queue_credit = %10d\n"
3670 + , priv->stop_queue_credit[i]);
3671 +skip:
3672 + __netif_tx_unlock_bh(tx_queue);
3673 + }
3674 + return len;
3675 +}
3676 +
3677 +/* pfe_eth_set_tx_stats
3678 + *
3679 + */
3680 +static ssize_t pfe_eth_set_tx_stats(struct device *dev,
3681 + struct device_attribute *attr,
3682 + const char *buf, size_t count)
3683 +{
3684 + struct pfe_eth_priv_s *priv = netdev_priv(to_net_dev(dev));
3685 + int i;
3686 +
3687 + for (i = 0; i < emac_txq_cnt; i++) {
3688 + struct netdev_queue *tx_queue = netdev_get_tx_queue(priv->ndev,
3689 + i);
3690 +
3691 + __netif_tx_lock_bh(tx_queue);
3692 + priv->clean_fail[i] = 0;
3693 + priv->stop_queue_total[i] = 0;
3694 + priv->stop_queue_hif[i] = 0;
3695 + priv->stop_queue_hif_client[i] = 0;
3696 + priv->stop_queue_credit[i] = 0;
3697 + __netif_tx_unlock_bh(tx_queue);
3698 + }
3699 +
3700 + return count;
3701 +}
3702 +#endif
3703 +/* pfe_eth_show_txavail
3704 + *
3705 + */
3706 +static ssize_t pfe_eth_show_txavail(struct device *dev,
3707 + struct device_attribute *attr,
3708 + char *buf)
3709 +{
3710 + struct pfe_eth_priv_s *priv = netdev_priv(to_net_dev(dev));
3711 + ssize_t len = 0;
3712 + int i;
3713 +
3714 + for (i = 0; i < emac_txq_cnt; i++) {
3715 + struct netdev_queue *tx_queue = netdev_get_tx_queue(priv->ndev,
3716 + i);
3717 +
3718 + __netif_tx_lock_bh(tx_queue);
3719 +
3720 + len += sprintf(buf + len, "%d",
3721 + hif_lib_tx_avail(&priv->client, i));
3722 +
3723 + __netif_tx_unlock_bh(tx_queue);
3724 +
3725 + if (i == (emac_txq_cnt - 1))
3726 + len += sprintf(buf + len, "\n");
3727 + else
3728 + len += sprintf(buf + len, " ");
3729 + }
3730 +
3731 + return len;
3732 +}
3733 +
3734 +/* pfe_eth_show_default_priority
3735 + *
3736 + */
3737 +static ssize_t pfe_eth_show_default_priority(struct device *dev,
3738 + struct device_attribute *attr,
3739 + char *buf)
3740 +{
3741 + struct pfe_eth_priv_s *priv = netdev_priv(to_net_dev(dev));
3742 + unsigned long flags;
3743 + int rc;
3744 +
3745 + spin_lock_irqsave(&priv->lock, flags);
3746 + rc = sprintf(buf, "%d\n", priv->default_priority);
3747 + spin_unlock_irqrestore(&priv->lock, flags);
3748 +
3749 + return rc;
3750 +}
3751 +
3752 +/* pfe_eth_set_default_priority
3753 + *
3754 + */
3755 +
3756 +static ssize_t pfe_eth_set_default_priority(struct device *dev,
3757 + struct device_attribute *attr,
3758 + const char *buf, size_t count)
3759 +{
3760 + struct pfe_eth_priv_s *priv = netdev_priv(to_net_dev(dev));
3761 +	unsigned long flags, val;
3762 +
3763 +	if (kstrtoul(buf, 0, &val))
+		return -EINVAL;
+
3764 +	spin_lock_irqsave(&priv->lock, flags);
+	priv->default_priority = val;
3765 +	spin_unlock_irqrestore(&priv->lock, flags);
3766 +
3767 + return count;
3768 +}
3769 +
3770 +static DEVICE_ATTR(txavail, 0444, pfe_eth_show_txavail, NULL);
3771 +static DEVICE_ATTR(default_priority, 0644, pfe_eth_show_default_priority,
3772 + pfe_eth_set_default_priority);
3773 +
3774 +#ifdef PFE_ETH_NAPI_STATS
3775 +static DEVICE_ATTR(napi_stats, 0644, pfe_eth_show_napi_stats,
3776 + pfe_eth_set_napi_stats);
3777 +#endif
3778 +
3779 +#ifdef PFE_ETH_TX_STATS
3780 +static DEVICE_ATTR(tx_stats, 0644, pfe_eth_show_tx_stats,
3781 + pfe_eth_set_tx_stats);
3782 +#endif
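+
+/*
+ * The attributes above appear under the netdev's sysfs directory. A
+ * sketch, assuming the interface is named eth0 (napi_stats/tx_stats only
+ * exist when built with PFE_ETH_NAPI_STATS/PFE_ETH_TX_STATS):
+ *
+ *   $ cat /sys/class/net/eth0/txavail   # free TX slots, one per queue
+ *   $ echo 2 > /sys/class/net/eth0/default_priority
+ *   $ echo 0 > /sys/class/net/eth0/napi_stats  # any write resets counters
+ */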
3783 +
3784 +/*
3785 + * pfe_eth_sysfs_init
3786 + *
3787 + */
3788 +static int pfe_eth_sysfs_init(struct net_device *ndev)
3789 +{
3790 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
3791 + int err;
3792 +
3793 + /* Initialize the default values */
3794 +
3795 + /*
3796 + * By default, packets without conntrack will use this default low
3797 + * priority queue
3798 + */
3799 + priv->default_priority = 0;
3800 +
3801 + /* Create our sysfs files */
3802 + err = device_create_file(&ndev->dev, &dev_attr_default_priority);
3803 + if (err) {
3804 + netdev_err(ndev,
3805 + "failed to create default_priority sysfs files\n");
3806 + goto err_priority;
3807 + }
3808 +
3809 + err = device_create_file(&ndev->dev, &dev_attr_txavail);
3810 + if (err) {
3811 + netdev_err(ndev,
3812 + "failed to create default_priority sysfs files\n");
3813 + goto err_txavail;
3814 + }
3815 +
3816 +#ifdef PFE_ETH_NAPI_STATS
3817 + err = device_create_file(&ndev->dev, &dev_attr_napi_stats);
3818 + if (err) {
3819 + netdev_err(ndev, "failed to create napi stats sysfs files\n");
3820 + goto err_napi;
3821 + }
3822 +#endif
3823 +
3824 +#ifdef PFE_ETH_TX_STATS
3825 + err = device_create_file(&ndev->dev, &dev_attr_tx_stats);
3826 + if (err) {
3827 + netdev_err(ndev, "failed to create tx stats sysfs files\n");
3828 + goto err_tx;
3829 + }
3830 +#endif
3831 +
3832 + return 0;
3833 +
3834 +#ifdef PFE_ETH_TX_STATS
3835 +err_tx:
3836 +#endif
3837 +#ifdef PFE_ETH_NAPI_STATS
3838 + device_remove_file(&ndev->dev, &dev_attr_napi_stats);
3839 +
3840 +err_napi:
3841 +#endif
3842 + device_remove_file(&ndev->dev, &dev_attr_txavail);
3843 +
3844 +err_txavail:
3845 + device_remove_file(&ndev->dev, &dev_attr_default_priority);
3846 +
3847 +err_priority:
3848 + return -1;
3849 +}
3850 +
3851 +/* pfe_eth_sysfs_exit
3852 + *
3853 + */
3854 +void pfe_eth_sysfs_exit(struct net_device *ndev)
3855 +{
3856 +#ifdef PFE_ETH_TX_STATS
3857 + device_remove_file(&ndev->dev, &dev_attr_tx_stats);
3858 +#endif
3859 +
3860 +#ifdef PFE_ETH_NAPI_STATS
3861 + device_remove_file(&ndev->dev, &dev_attr_napi_stats);
3862 +#endif
3863 + device_remove_file(&ndev->dev, &dev_attr_txavail);
3864 + device_remove_file(&ndev->dev, &dev_attr_default_priority);
3865 +}
3866 +
3867 +/*************************************************************************/
3868 +/*			ETHTOOL INTERFACE			*/
3869 +/*************************************************************************/
3870 +
3871 +/* MTIP GEMAC */
3872 +static const struct fec_stat {
3873 + char name[ETH_GSTRING_LEN];
3874 + u16 offset;
3875 +} fec_stats[] = {
3876 + /* RMON TX */
3877 + { "tx_dropped", RMON_T_DROP },
3878 + { "tx_packets", RMON_T_PACKETS },
3879 + { "tx_broadcast", RMON_T_BC_PKT },
3880 + { "tx_multicast", RMON_T_MC_PKT },
3881 + { "tx_crc_errors", RMON_T_CRC_ALIGN },
3882 + { "tx_undersize", RMON_T_UNDERSIZE },
3883 + { "tx_oversize", RMON_T_OVERSIZE },
3884 + { "tx_fragment", RMON_T_FRAG },
3885 + { "tx_jabber", RMON_T_JAB },
3886 + { "tx_collision", RMON_T_COL },
3887 + { "tx_64byte", RMON_T_P64 },
3888 + { "tx_65to127byte", RMON_T_P65TO127 },
3889 + { "tx_128to255byte", RMON_T_P128TO255 },
3890 + { "tx_256to511byte", RMON_T_P256TO511 },
3891 + { "tx_512to1023byte", RMON_T_P512TO1023 },
3892 + { "tx_1024to2047byte", RMON_T_P1024TO2047 },
3893 + { "tx_GTE2048byte", RMON_T_P_GTE2048 },
3894 + { "tx_octets", RMON_T_OCTETS },
3895 +
3896 + /* IEEE TX */
3897 + { "IEEE_tx_drop", IEEE_T_DROP },
3898 + { "IEEE_tx_frame_ok", IEEE_T_FRAME_OK },
3899 + { "IEEE_tx_1col", IEEE_T_1COL },
3900 + { "IEEE_tx_mcol", IEEE_T_MCOL },
3901 + { "IEEE_tx_def", IEEE_T_DEF },
3902 + { "IEEE_tx_lcol", IEEE_T_LCOL },
3903 + { "IEEE_tx_excol", IEEE_T_EXCOL },
3904 + { "IEEE_tx_macerr", IEEE_T_MACERR },
3905 + { "IEEE_tx_cserr", IEEE_T_CSERR },
3906 + { "IEEE_tx_sqe", IEEE_T_SQE },
3907 + { "IEEE_tx_fdxfc", IEEE_T_FDXFC },
3908 + { "IEEE_tx_octets_ok", IEEE_T_OCTETS_OK },
3909 +
3910 + /* RMON RX */
3911 + { "rx_packets", RMON_R_PACKETS },
3912 + { "rx_broadcast", RMON_R_BC_PKT },
3913 + { "rx_multicast", RMON_R_MC_PKT },
3914 + { "rx_crc_errors", RMON_R_CRC_ALIGN },
3915 + { "rx_undersize", RMON_R_UNDERSIZE },
3916 + { "rx_oversize", RMON_R_OVERSIZE },
3917 + { "rx_fragment", RMON_R_FRAG },
3918 + { "rx_jabber", RMON_R_JAB },
3919 + { "rx_64byte", RMON_R_P64 },
3920 + { "rx_65to127byte", RMON_R_P65TO127 },
3921 + { "rx_128to255byte", RMON_R_P128TO255 },
3922 + { "rx_256to511byte", RMON_R_P256TO511 },
3923 + { "rx_512to1023byte", RMON_R_P512TO1023 },
3924 + { "rx_1024to2047byte", RMON_R_P1024TO2047 },
3925 + { "rx_GTE2048byte", RMON_R_P_GTE2048 },
3926 + { "rx_octets", RMON_R_OCTETS },
3927 +
3928 + /* IEEE RX */
3929 + { "IEEE_rx_drop", IEEE_R_DROP },
3930 + { "IEEE_rx_frame_ok", IEEE_R_FRAME_OK },
3931 + { "IEEE_rx_crc", IEEE_R_CRC },
3932 + { "IEEE_rx_align", IEEE_R_ALIGN },
3933 + { "IEEE_rx_macerr", IEEE_R_MACERR },
3934 + { "IEEE_rx_fdxfc", IEEE_R_FDXFC },
3935 + { "IEEE_rx_octets_ok", IEEE_R_OCTETS_OK },
3936 +};
3937 +
3938 +static void pfe_eth_fill_stats(struct net_device *ndev, struct ethtool_stats
3939 + *stats, u64 *data)
3940 +{
3941 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
3942 + int i;
3943 + u64 pfe_crc_validated = 0;
3944 + int id;
3945 +
3946 + for (id = CLASS0_ID; id <= CLASS_MAX_ID; id++) {
3947 + pfe_crc_validated += be32_to_cpu(pe_dmem_read(id,
3948 + CLASS_DM_CRC_VALIDATED + (priv->id * 4), 4));
3949 + }
3950 +
3951 + for (i = 0; i < ARRAY_SIZE(fec_stats); i++) {
3952 + data[i] = readl(priv->EMAC_baseaddr + fec_stats[i].offset);
3953 +
3954 + if (fec_stats[i].offset == IEEE_R_DROP)
3955 + data[i] -= pfe_crc_validated;
3956 + }
3957 +}
3958 +
3959 +static void pfe_eth_gstrings(struct net_device *netdev,
3960 + u32 stringset, u8 *data)
3961 +{
3962 + int i;
3963 +
3964 + switch (stringset) {
3965 + case ETH_SS_STATS:
3966 + for (i = 0; i < ARRAY_SIZE(fec_stats); i++)
3967 + memcpy(data + i * ETH_GSTRING_LEN,
3968 + fec_stats[i].name, ETH_GSTRING_LEN);
3969 + break;
3970 + }
3971 +}
3972 +
3973 +static int pfe_eth_stats_count(struct net_device *ndev, int sset)
3974 +{
3975 + switch (sset) {
3976 + case ETH_SS_STATS:
3977 + return ARRAY_SIZE(fec_stats);
3978 + default:
3979 + return -EOPNOTSUPP;
3980 + }
3981 +}
3982 +
3983 +/*
3984 + * pfe_eth_gemac_reglen - Return the length of the register structure.
3985 + *
3986 + */
3987 +static int pfe_eth_gemac_reglen(struct net_device *ndev)
3988 +{
3989 + pr_info("%s()\n", __func__);
3990 +	return sizeof(gemac_regs); /* ethtool expects the length in bytes */
3991 +}
3992 +
3993 +/*
3994 + * pfe_eth_gemac_get_regs - Return the gemac register structure.
3995 + *
3996 + */
3997 +static void pfe_eth_gemac_get_regs(struct net_device *ndev, struct ethtool_regs
3998 + *regs, void *regbuf)
3999 +{
4000 + int i;
4001 +
4002 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4003 + u32 *buf = (u32 *)regbuf;
4004 +
4005 + pr_info("%s()\n", __func__);
4006 +	for (i = 0; i < ARRAY_SIZE(gemac_regs); i++)
4007 + buf[i] = readl(priv->EMAC_baseaddr + gemac_regs[i]);
4008 +}
4009 +
4010 +/*
4011 + * pfe_eth_set_wol - Set the magic packet option, in WoL register.
4012 + *
4013 + */
4014 +static int pfe_eth_set_wol(struct net_device *ndev, struct ethtool_wolinfo *wol)
4015 +{
4016 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4017 +
4018 + if (wol->wolopts & ~WAKE_MAGIC)
4019 + return -EOPNOTSUPP;
4020 +
4021 + /* for MTIP we store wol->wolopts */
4022 + priv->wol = wol->wolopts;
4023 +
4024 + device_set_wakeup_enable(&ndev->dev, wol->wolopts & WAKE_MAGIC);
4025 +
4026 + return 0;
4027 +}
4028 +
4029 +/*
4030 + *
4031 + * pfe_eth_get_wol - Get the WoL options.
4032 + *
4033 + */
4034 +static void pfe_eth_get_wol(struct net_device *ndev, struct ethtool_wolinfo
4035 + *wol)
4036 +{
4037 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4038 +
4039 + wol->supported = WAKE_MAGIC;
4040 + wol->wolopts = 0;
4041 +
4042 + if (priv->wol & WAKE_MAGIC)
4043 + wol->wolopts = WAKE_MAGIC;
4044 +
4045 + memset(&wol->sopass, 0, sizeof(wol->sopass));
4046 +}
4047 +
4048 +/*
4049 + * pfe_eth_get_drvinfo - Fills in the drvinfo structure with some basic info
4050 + *
4051 + */
4052 +static void pfe_eth_get_drvinfo(struct net_device *ndev, struct ethtool_drvinfo
4053 + *drvinfo)
4054 +{
4055 + strlcpy(drvinfo->driver, DRV_NAME, sizeof(drvinfo->driver));
4056 + strlcpy(drvinfo->version, DRV_VERSION, sizeof(drvinfo->version));
4057 + strlcpy(drvinfo->fw_version, "N/A", sizeof(drvinfo->fw_version));
4058 + strlcpy(drvinfo->bus_info, "N/A", sizeof(drvinfo->bus_info));
4059 +}
4060 +
4061 +/*
4062 + * pfe_eth_set_settings - Used to send commands to PHY.
4063 + *
4064 + */
4065 +static int pfe_eth_set_settings(struct net_device *ndev,
4066 + const struct ethtool_link_ksettings *cmd)
4067 +{
4068 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4069 + struct phy_device *phydev = priv->phydev;
4070 +
4071 + if (!phydev)
4072 + return -ENODEV;
4073 +
4074 + return phy_ethtool_ksettings_set(phydev, cmd);
4075 +}
4076 +
4077 +/*
4078 + * pfe_eth_get_settings - Return the current settings in the
4079 + * ethtool_link_ksettings structure.
4080 + *
4081 + */
4082 +static int pfe_eth_get_settings(struct net_device *ndev,
4083 + struct ethtool_link_ksettings *cmd)
4084 +{
4085 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4086 + struct phy_device *phydev = priv->phydev;
4087 +
4088 + if (!phydev)
4089 + return -ENODEV;
4090 +
4091 + phy_ethtool_ksettings_get(phydev, cmd);
4092 +
4093 + return 0;
4094 +}
4095 +
4096 +/*
4097 + * pfe_eth_get_msglevel - Gets the debug message mask.
4098 + *
4099 + */
4100 +static uint32_t pfe_eth_get_msglevel(struct net_device *ndev)
4101 +{
4102 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4103 +
4104 + return priv->msg_enable;
4105 +}
4106 +
4107 +/*
4108 + * pfe_eth_set_msglevel - Sets the debug message mask.
4109 + *
4110 + */
4111 +static void pfe_eth_set_msglevel(struct net_device *ndev, uint32_t data)
4112 +{
4113 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4114 +
4115 + priv->msg_enable = data;
4116 +}
4117 +
4118 +#define HIF_RX_COAL_MAX_CLKS (~(1 << 31))
4119 +#define HIF_RX_COAL_CLKS_PER_USEC (pfe->ctrl.sys_clk / 1000)
4120 +#define HIF_RX_COAL_MAX_USECS (HIF_RX_COAL_MAX_CLKS / \
4121 + HIF_RX_COAL_CLKS_PER_USEC)
4122 +
4123 +/*
4124 + * pfe_eth_set_coalesce - Sets rx interrupt coalescing timer.
4125 + *
4126 + */
4127 +static int pfe_eth_set_coalesce(struct net_device *ndev,
4128 + struct ethtool_coalesce *ec)
4129 +{
4130 + if (ec->rx_coalesce_usecs > HIF_RX_COAL_MAX_USECS)
4131 + return -EINVAL;
4132 +
4133 + if (!ec->rx_coalesce_usecs) {
4134 + writel(0, HIF_INT_COAL);
4135 + return 0;
4136 + }
4137 +
4138 + writel((ec->rx_coalesce_usecs * HIF_RX_COAL_CLKS_PER_USEC) |
4139 + HIF_INT_COAL_ENABLE, HIF_INT_COAL);
4140 +
4141 + return 0;
4142 +}
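+
+/*
+ * Worked example for the conversion above: sys_clk is kept in kHz, so
+ * with a 250 MHz AXI clock (sys_clk = 250000, an illustrative value)
+ * HIF_RX_COAL_CLKS_PER_USEC is 250, and "ethtool -C eth0 rx-usecs 100"
+ * writes 100 * 250 = 25000 clock cycles, plus HIF_INT_COAL_ENABLE, into
+ * HIF_INT_COAL.
+ */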
4143 +
4144 +/*
4145 + * pfe_eth_get_coalesce - Gets rx interrupt coalescing timer value.
4146 + *
4147 + */
4148 +static int pfe_eth_get_coalesce(struct net_device *ndev,
4149 + struct ethtool_coalesce *ec)
4150 +{
4151 + int reg_val = readl(HIF_INT_COAL);
4152 +
4153 + if (reg_val & HIF_INT_COAL_ENABLE)
4154 + ec->rx_coalesce_usecs = (reg_val & HIF_RX_COAL_MAX_CLKS) /
4155 + HIF_RX_COAL_CLKS_PER_USEC;
4156 + else
4157 + ec->rx_coalesce_usecs = 0;
4158 +
4159 + return 0;
4160 +}
4161 +
4162 +/*
4163 + * pfe_eth_set_pauseparam - Sets pause parameters
4164 + *
4165 + */
4166 +static int pfe_eth_set_pauseparam(struct net_device *ndev,
4167 + struct ethtool_pauseparam *epause)
4168 +{
4169 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4170 +
4171 + if (epause->tx_pause != epause->rx_pause) {
4172 + netdev_info(ndev,
4173 + "hardware only support enable/disable both tx and rx\n");
4174 + return -EINVAL;
4175 + }
4176 +
4177 + priv->pause_flag = 0;
4178 + priv->pause_flag |= epause->rx_pause ? PFE_PAUSE_FLAG_ENABLE : 0;
4179 + priv->pause_flag |= epause->autoneg ? PFE_PAUSE_FLAG_AUTONEG : 0;
4180 +
4181 + if (epause->rx_pause || epause->autoneg) {
4182 + gemac_enable_pause_rx(priv->EMAC_baseaddr);
4183 + writel((readl(priv->GPI_baseaddr + GPI_TX_PAUSE_TIME) |
4184 + EGPI_PAUSE_ENABLE),
4185 + priv->GPI_baseaddr + GPI_TX_PAUSE_TIME);
4186 + if (priv->phydev) {
4187 + linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT,
4188 + priv->phydev->supported);
4189 + linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
4190 + priv->phydev->supported);
4191 + linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT,
4192 + priv->phydev->advertising);
4193 + linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
4194 + priv->phydev->advertising);
4195 + }
4196 + } else {
4197 + gemac_disable_pause_rx(priv->EMAC_baseaddr);
4198 + writel((readl(priv->GPI_baseaddr + GPI_TX_PAUSE_TIME) &
4199 + ~EGPI_PAUSE_ENABLE),
4200 + priv->GPI_baseaddr + GPI_TX_PAUSE_TIME);
4201 + if (priv->phydev) {
4202 + linkmode_clear_bit(ETHTOOL_LINK_MODE_Pause_BIT,
4203 + priv->phydev->supported);
4204 + linkmode_clear_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
4205 + priv->phydev->supported);
4206 + linkmode_clear_bit(ETHTOOL_LINK_MODE_Pause_BIT,
4207 + priv->phydev->advertising);
4208 + linkmode_clear_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
4209 + priv->phydev->advertising);
4210 + }
4211 + }
4212 +
4213 + return 0;
4214 +}
4215 +
4216 +/*
4217 + * pfe_eth_get_pauseparam - Gets pause parameters
4218 + *
4219 + */
4220 +static void pfe_eth_get_pauseparam(struct net_device *ndev,
4221 + struct ethtool_pauseparam *epause)
4222 +{
4223 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4224 +
4225 + epause->autoneg = (priv->pause_flag & PFE_PAUSE_FLAG_AUTONEG) != 0;
4226 + epause->tx_pause = (priv->pause_flag & PFE_PAUSE_FLAG_ENABLE) != 0;
4227 + epause->rx_pause = epause->tx_pause;
4228 +}
4229 +
4230 +/*
4231 + * pfe_eth_get_hash
4232 + */
4233 +#define PFE_HASH_BITS 6 /* #bits in hash */
4234 +#define CRC32_POLY 0xEDB88320
4235 +
4236 +static int pfe_eth_get_hash(u8 *addr)
4237 +{
4238 + unsigned int i, bit, data, crc, hash;
4239 +
4240 + /* calculate crc32 value of mac address */
4241 + crc = 0xffffffff;
4242 +
4243 + for (i = 0; i < 6; i++) {
4244 + data = addr[i];
4245 + for (bit = 0; bit < 8; bit++, data >>= 1) {
4246 + crc = (crc >> 1) ^
4247 + (((crc ^ data) & 1) ? CRC32_POLY : 0);
4248 + }
4249 + }
4250 +
4251 + /*
4252 + * only upper 6 bits (PFE_HASH_BITS) are used
4253 + * which point to specific bit in the hash registers
4254 + */
4255 + hash = (crc >> (32 - PFE_HASH_BITS)) & 0x3f;
4256 +
4257 + return hash;
4258 +}
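+
+/*
+ * The 6-bit result selects one of 64 bits spread across the GEMAC's pair
+ * of 32-bit hash registers. A sketch of the usual indexing (the exact
+ * register split is an assumption here; see the gemac helpers for the
+ * real accessors): hash >> 5 picks the upper or lower register and
+ * hash & 0x1f the bit within it, e.g. hash 37 -> bit 5 of the upper one.
+ */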
4259 +
4260 +const struct ethtool_ops pfe_ethtool_ops = {
4261 + .supported_coalesce_params = ETHTOOL_COALESCE_RX_USECS,
4262 + .get_drvinfo = pfe_eth_get_drvinfo,
4263 + .get_regs_len = pfe_eth_gemac_reglen,
4264 + .get_regs = pfe_eth_gemac_get_regs,
4265 + .get_link = ethtool_op_get_link,
4266 + .get_wol = pfe_eth_get_wol,
4267 + .set_wol = pfe_eth_set_wol,
4268 + .set_pauseparam = pfe_eth_set_pauseparam,
4269 + .get_pauseparam = pfe_eth_get_pauseparam,
4270 + .get_strings = pfe_eth_gstrings,
4271 + .get_sset_count = pfe_eth_stats_count,
4272 + .get_ethtool_stats = pfe_eth_fill_stats,
4273 + .get_msglevel = pfe_eth_get_msglevel,
4274 + .set_msglevel = pfe_eth_set_msglevel,
4275 + .set_coalesce = pfe_eth_set_coalesce,
4276 + .get_coalesce = pfe_eth_get_coalesce,
4277 + .get_link_ksettings = pfe_eth_get_settings,
4278 + .set_link_ksettings = pfe_eth_set_settings,
4279 +};
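+
+/*
+ * These ops map onto the standard ethtool CLI, e.g. (assuming the
+ * interface is named eth0):
+ *
+ *   $ ethtool -S eth0              # fec_stats counters via pfe_eth_fill_stats
+ *   $ ethtool -d eth0              # gemac_regs dump via pfe_eth_gemac_get_regs
+ *   $ ethtool -C eth0 rx-usecs 50  # HIF RX coalescing via pfe_eth_set_coalesce
+ *   $ ethtool -s eth0 wol g        # magic-packet WoL via pfe_eth_set_wol
+ */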
4280 +
4281 +/* pfe_eth_mdio_reset
4282 + */
4283 +int pfe_eth_mdio_reset(struct mii_bus *bus)
4284 +{
4285 + struct pfe_mdio_priv_s *priv = (struct pfe_mdio_priv_s *)bus->priv;
4286 + u32 phy_speed;
4287 +
4288 +
4289 + mutex_lock(&bus->mdio_lock);
4290 +
4291 + /*
4292 + * Set MII speed to 2.5 MHz (= clk_get_rate() / 2 * phy_speed)
4293 + *
4294 + * The formula for FEC MDC is 'ref_freq / (MII_SPEED x 2)' while
4295 + * for ENET-MAC is 'ref_freq / ((MII_SPEED + 1) x 2)'.
4296 + */
4297 + phy_speed = (DIV_ROUND_UP((pfe->ctrl.sys_clk * 1000), 4000000)
4298 + << EMAC_MII_SPEED_SHIFT);
4299 + phy_speed |= EMAC_HOLDTIME(0x5);
4300 + __raw_writel(phy_speed, priv->mdio_base + EMAC_MII_CTRL_REG);
4301 +
4302 + mutex_unlock(&bus->mdio_lock);
4303 +
4304 + return 0;
4305 +}
4306 +
4307 +/* pfe_eth_mdio_timeout
4308 + *
4309 + */
4310 +static int pfe_eth_mdio_timeout(struct pfe_mdio_priv_s *priv, int timeout)
4311 +{
4312 + while (!(__raw_readl(priv->mdio_base + EMAC_IEVENT_REG) &
4313 + EMAC_IEVENT_MII)) {
4314 + if (timeout-- <= 0)
4315 + return -1;
4316 + usleep_range(10, 20);
4317 + }
4318 + __raw_writel(EMAC_IEVENT_MII, priv->mdio_base + EMAC_IEVENT_REG);
4319 + return 0;
4320 +}
4321 +
4322 +static int pfe_eth_mdio_mux(u8 muxval)
4323 +{
4324 + struct i2c_adapter *a;
4325 + struct i2c_msg msg;
4326 + unsigned char buf[2];
4327 + int ret;
4328 +
4329 + a = i2c_get_adapter(0);
4330 + if (!a)
4331 + return -ENODEV;
4332 +
4333 +	/* write the mux value to register 0x54 of the chip at i2c address 0x66 */
4334 + buf[0] = 0x54; /* reg number */
4335 + buf[1] = (muxval << 6) | 0x3; /* data */
4336 + msg.addr = 0x66;
4337 + msg.buf = buf;
4338 + msg.len = 2;
4339 + msg.flags = 0;
4340 + ret = i2c_transfer(a, &msg, 1);
4341 + i2c_put_adapter(a);
4342 + if (ret != 1)
4343 + return -ENODEV;
4344 + return 0;
4345 +}
4346 +
4347 +static int pfe_eth_mdio_write_addr(struct mii_bus *bus, int mii_id,
4348 + int dev_addr, int regnum)
4349 +{
4350 + struct pfe_mdio_priv_s *priv = (struct pfe_mdio_priv_s *)bus->priv;
4351 +
4352 + __raw_writel(EMAC_MII_DATA_PA(mii_id) |
4353 + EMAC_MII_DATA_RA(dev_addr) |
4354 + EMAC_MII_DATA_TA | EMAC_MII_DATA(regnum),
4355 + priv->mdio_base + EMAC_MII_DATA_REG);
4356 +
4357 + if (pfe_eth_mdio_timeout(priv, EMAC_MDIO_TIMEOUT)) {
4358 + dev_err(&bus->dev, "phy MDIO address write timeout\n");
4359 + return -1;
4360 + }
4361 +
4362 + return 0;
4363 +}
4364 +
4365 +static int pfe_eth_mdio_write(struct mii_bus *bus, int mii_id, int regnum,
4366 + u16 value)
4367 +{
4368 + struct pfe_mdio_priv_s *priv = (struct pfe_mdio_priv_s *)bus->priv;
4369 +
4370 +	/* To access external PHYs on the QDS board, the mux needs to be configured */
4371 + if ((mii_id) && (pfe->mdio_muxval[mii_id]))
4372 + pfe_eth_mdio_mux(pfe->mdio_muxval[mii_id]);
4373 +
4374 + if (regnum & MII_ADDR_C45) {
4375 + pfe_eth_mdio_write_addr(bus, mii_id, (regnum >> 16) & 0x1f,
4376 + regnum & 0xffff);
4377 + __raw_writel(EMAC_MII_DATA_OP_CL45_WR |
4378 + EMAC_MII_DATA_PA(mii_id) |
4379 + EMAC_MII_DATA_RA((regnum >> 16) & 0x1f) |
4380 + EMAC_MII_DATA_TA | EMAC_MII_DATA(value),
4381 + priv->mdio_base + EMAC_MII_DATA_REG);
4382 + } else {
4383 + /* start a write op */
4384 + __raw_writel(EMAC_MII_DATA_ST | EMAC_MII_DATA_OP_WR |
4385 + EMAC_MII_DATA_PA(mii_id) |
4386 + EMAC_MII_DATA_RA(regnum) |
4387 + EMAC_MII_DATA_TA | EMAC_MII_DATA(value),
4388 + priv->mdio_base + EMAC_MII_DATA_REG);
4389 + }
4390 +
4391 + if (pfe_eth_mdio_timeout(priv, EMAC_MDIO_TIMEOUT)) {
4392 + dev_err(&bus->dev, "%s: phy MDIO write timeout\n", __func__);
4393 + return -1;
4394 + }
4395 + return 0;
4396 +}
4397 +
4398 +static int pfe_eth_mdio_read(struct mii_bus *bus, int mii_id, int regnum)
4399 +{
4400 + struct pfe_mdio_priv_s *priv = (struct pfe_mdio_priv_s *)bus->priv;
4401 + u16 value = 0;
4402 +
4403 +	/* To access external PHYs on the QDS board, the mux needs to be configured */
4404 + if ((mii_id) && (pfe->mdio_muxval[mii_id]))
4405 + pfe_eth_mdio_mux(pfe->mdio_muxval[mii_id]);
4406 +
4407 + if (regnum & MII_ADDR_C45) {
4408 + pfe_eth_mdio_write_addr(bus, mii_id, (regnum >> 16) & 0x1f,
4409 + regnum & 0xffff);
4410 + __raw_writel(EMAC_MII_DATA_OP_CL45_RD |
4411 + EMAC_MII_DATA_PA(mii_id) |
4412 + EMAC_MII_DATA_RA((regnum >> 16) & 0x1f) |
4413 + EMAC_MII_DATA_TA,
4414 + priv->mdio_base + EMAC_MII_DATA_REG);
4415 + } else {
4416 + /* start a read op */
4417 + __raw_writel(EMAC_MII_DATA_ST | EMAC_MII_DATA_OP_RD |
4418 + EMAC_MII_DATA_PA(mii_id) |
4419 + EMAC_MII_DATA_RA(regnum) |
4420 + EMAC_MII_DATA_TA, priv->mdio_base +
4421 + EMAC_MII_DATA_REG);
4422 + }
4423 +
4424 + if (pfe_eth_mdio_timeout(priv, EMAC_MDIO_TIMEOUT)) {
4425 + dev_err(&bus->dev, "%s: phy MDIO read timeout\n", __func__);
4426 + return -1;
4427 + }
4428 +
4429 + value = EMAC_MII_DATA(__raw_readl(priv->mdio_base +
4430 + EMAC_MII_DATA_REG));
4431 + return value;
4432 +}
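+
+/*
+ * Clause-45 accesses use the kernel's MII_ADDR_C45 encoding: the MMD
+ * (device) address rides in bits 16-20 of regnum, matching the
+ * "(regnum >> 16) & 0x1f" decode above. A sketch:
+ *
+ *   // read register 1 of MMD 3 (PCS) on PHY address 2
+ *   int val = pfe_eth_mdio_read(bus, 2, MII_ADDR_C45 | (3 << 16) | 1);
+ */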
4433 +
4434 +static int pfe_eth_mdio_init(struct pfe *pfe,
4435 + struct ls1012a_pfe_platform_data *pfe_info,
4436 + int ii)
4437 +{
4438 + struct pfe_mdio_priv_s *priv = NULL;
4439 + struct ls1012a_mdio_platform_data *mdio_info;
4440 + struct mii_bus *bus;
4441 + struct device_node *mdio_node;
4442 + int rc = 0;
4443 +
4444 + mdio_info = (struct ls1012a_mdio_platform_data *)
4445 + pfe_info->ls1012a_mdio_pdata;
4446 + mdio_info->id = ii;
4447 +
4448 + bus = mdiobus_alloc_size(sizeof(struct pfe_mdio_priv_s));
4449 + if (!bus) {
4450 + pr_err("mdiobus_alloc() failed\n");
4451 + rc = -ENOMEM;
4452 + goto err_mdioalloc;
4453 + }
4454 +
4455 + bus->name = "ls1012a MDIO Bus";
4456 + snprintf(bus->id, MII_BUS_ID_SIZE, "ls1012a-%x", mdio_info->id);
4457 +
4458 + bus->read = &pfe_eth_mdio_read;
4459 + bus->write = &pfe_eth_mdio_write;
4460 + bus->reset = &pfe_eth_mdio_reset;
4461 + bus->parent = pfe->dev;
4462 + bus->phy_mask = mdio_info->phy_mask;
4463 + bus->irq[0] = mdio_info->irq[0];
4464 + priv = bus->priv;
4465 + priv->mdio_base = cbus_emac_base[ii];
4466 +
4467 + priv->mdc_div = mdio_info->mdc_div;
4468 + if (!priv->mdc_div)
4469 + priv->mdc_div = 64;
4470 +
4471 + dev_info(bus->parent, "%s: mdc_div: %d, phy_mask: %x\n",
4472 + __func__, priv->mdc_div, bus->phy_mask);
4473 + mdio_node = of_get_child_by_name(pfe->dev->of_node, "mdio");
4474 + if ((mdio_info->id == 0) && mdio_node) {
4475 + rc = of_mdiobus_register(bus, mdio_node);
4476 + of_node_put(mdio_node);
4477 + } else {
4478 + rc = mdiobus_register(bus);
4479 + }
4480 +
4481 + if (rc) {
4482 + dev_err(bus->parent, "mdiobus_register(%s) failed\n",
4483 + bus->name);
4484 + goto err_mdioregister;
4485 + }
4486 +
4487 + priv->mii_bus = bus;
4488 + pfe->mdio.mdio_priv[ii] = priv;
4489 +
4490 + pfe_eth_mdio_reset(bus);
4491 +
4492 + return 0;
4493 +
4494 +err_mdioregister:
4495 + mdiobus_free(bus);
4496 +err_mdioalloc:
4497 + return rc;
4498 +}
4499 +
4500 +/* pfe_eth_mdio_exit
4501 + */
4502 +static void pfe_eth_mdio_exit(struct pfe *pfe,
4503 + int ii)
4504 +{
4505 + struct pfe_mdio_priv_s *mdio_priv = pfe->mdio.mdio_priv[ii];
4506 + struct mii_bus *bus = mdio_priv->mii_bus;
4507 +
4508 + if (!bus)
4509 + return;
4510 + mdiobus_unregister(bus);
4511 + mdiobus_free(bus);
4512 +}
4513 +
4514 +/* pfe_get_phydev_speed
4515 + */
4516 +static int pfe_get_phydev_speed(struct phy_device *phydev)
4517 +{
4518 + switch (phydev->speed) {
4519 + case 10:
4520 + return SPEED_10M;
4521 + case 100:
4522 + return SPEED_100M;
4523 + case 1000:
4524 + default:
4525 + return SPEED_1000M;
4526 + }
4527 +}
4528 +
4529 +/* pfe_set_rgmii_speed
4530 + */
4531 +#define RGMIIPCR 0x434
4532 +/* RGMIIPCR bit definitions*/
4533 +#define SCFG_RGMIIPCR_EN_AUTO (0x00000008)
4534 +#define SCFG_RGMIIPCR_SETSP_1000M (0x00000004)
4535 +#define SCFG_RGMIIPCR_SETSP_100M (0x00000000)
4536 +#define SCFG_RGMIIPCR_SETSP_10M (0x00000002)
4537 +#define SCFG_RGMIIPCR_SETFD (0x00000001)
4538 +
4539 +#define MDIOSELCR 0x484
4540 +#define MDIOSEL_SERDES 0x0
4541 +#define MDIOSEL_EXTPHY 0x80000000
4542 +
4543 +static void pfe_set_rgmii_speed(struct phy_device *phydev)
4544 +{
4545 + u32 rgmii_pcr;
4546 +
4547 + regmap_read(pfe->scfg, RGMIIPCR, &rgmii_pcr);
4548 + rgmii_pcr &= ~(SCFG_RGMIIPCR_SETSP_1000M | SCFG_RGMIIPCR_SETSP_10M);
4549 +
4550 + switch (phydev->speed) {
4551 + case 10:
4552 + rgmii_pcr |= SCFG_RGMIIPCR_SETSP_10M;
4553 + break;
4554 + case 1000:
4555 + rgmii_pcr |= SCFG_RGMIIPCR_SETSP_1000M;
4556 + break;
4557 + case 100:
4558 + default:
4559 + /* Default is 100M */
4560 + break;
4561 + }
4562 + regmap_write(pfe->scfg, RGMIIPCR, rgmii_pcr);
4563 +}
4564 +
4565 +/* pfe_get_phydev_duplex
4566 + */
4567 +static int pfe_get_phydev_duplex(struct phy_device *phydev)
4568 +{
4569 + /*return (phydev->duplex == DUPLEX_HALF) ? DUP_HALF:DUP_FULL ; */
4570 + return DUPLEX_FULL;
4571 +}
4572 +
4573 +/* pfe_eth_adjust_link
4574 + */
4575 +static void pfe_eth_adjust_link(struct net_device *ndev)
4576 +{
4577 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4578 + unsigned long flags;
4579 + struct phy_device *phydev = priv->phydev;
4580 + int new_state = 0;
4581 +
4582 + netif_info(priv, drv, ndev, "%s\n", __func__);
4583 +
4584 + spin_lock_irqsave(&priv->lock, flags);
4585 +
4586 + if (phydev->link) {
4587 + /*
4588 + * Now we make sure that we can be in full duplex mode.
4589 + * If not, we operate in half-duplex mode.
4590 + */
4591 + if (phydev->duplex != priv->oldduplex) {
4592 + new_state = 1;
4593 + gemac_set_duplex(priv->EMAC_baseaddr,
4594 + pfe_get_phydev_duplex(phydev));
4595 + priv->oldduplex = phydev->duplex;
4596 + }
4597 +
4598 + if (phydev->speed != priv->oldspeed) {
4599 + new_state = 1;
4600 + gemac_set_speed(priv->EMAC_baseaddr,
4601 + pfe_get_phydev_speed(phydev));
4602 + if (priv->einfo->mii_config ==
4603 + PHY_INTERFACE_MODE_RGMII_ID)
4604 + pfe_set_rgmii_speed(phydev);
4605 + priv->oldspeed = phydev->speed;
4606 + }
4607 +
4608 + if (!priv->oldlink) {
4609 + new_state = 1;
4610 + priv->oldlink = 1;
4611 + }
4612 +
4613 + } else if (priv->oldlink) {
4614 + new_state = 1;
4615 + priv->oldlink = 0;
4616 + priv->oldspeed = 0;
4617 + priv->oldduplex = -1;
4618 + }
4619 +
4620 + if (new_state && netif_msg_link(priv))
4621 + phy_print_status(phydev);
4622 +
4623 + spin_unlock_irqrestore(&priv->lock, flags);
4624 +
4625 + /* Now, dump the details to the cdev.
4626 +	 * XXX: Is locking required here? (uniprocessor arch)
4627 + * Or, maybe move it in spinlock above
4628 + */
4629 + if (us && priv->einfo->gem_id < PFE_CDEV_ETH_COUNT) {
4630 + pr_debug("Changing link state from (%u) to (%u) for ID=(%u)\n",
4631 + link_states[priv->einfo->gem_id].state,
4632 + phydev->link,
4633 + priv->einfo->gem_id);
4634 + link_states[priv->einfo->gem_id].phy_id = priv->einfo->gem_id;
4635 + link_states[priv->einfo->gem_id].state = phydev->link;
4636 + }
4637 +}
4638 +
4639 +/* pfe_phy_exit
4640 + */
4641 +static void pfe_phy_exit(struct net_device *ndev)
4642 +{
4643 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4644 +
4645 + netif_info(priv, drv, ndev, "%s\n", __func__);
4646 +
4647 + phy_disconnect(priv->phydev);
4648 + priv->phydev = NULL;
4649 +}
4650 +
4651 +/* pfe_eth_stop
4652 + */
4653 +static void pfe_eth_stop(struct net_device *ndev, int wake)
4654 +{
4655 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4656 +
4657 + netif_info(priv, drv, ndev, "%s\n", __func__);
4658 +
4659 + if (wake) {
4660 + gemac_tx_disable(priv->EMAC_baseaddr);
4661 + } else {
4662 + gemac_disable(priv->EMAC_baseaddr);
4663 + gpi_disable(priv->GPI_baseaddr);
4664 +
4665 + if (priv->phydev)
4666 + phy_stop(priv->phydev);
4667 + }
4668 +}
4669 +
4670 +/* pfe_eth_start
4671 + */
4672 +static int pfe_eth_start(struct pfe_eth_priv_s *priv)
4673 +{
4674 + netif_info(priv, drv, priv->ndev, "%s\n", __func__);
4675 +
4676 + if (priv->phydev)
4677 + phy_start(priv->phydev);
4678 +
4679 + gpi_enable(priv->GPI_baseaddr);
4680 + gemac_enable(priv->EMAC_baseaddr);
4681 +
4682 + return 0;
4683 +}
4684 +
4685 +/*
4686 + * Configure on chip serdes through mdio
4687 + */
4688 +static void ls1012a_configure_serdes(struct net_device *ndev)
4689 +{
4690 + struct pfe_eth_priv_s *eth_priv = netdev_priv(ndev);
4691 + struct pfe_mdio_priv_s *mdio_priv = pfe->mdio.mdio_priv[eth_priv->id];
4692 + int sgmii_2500 = 0;
4693 + struct mii_bus *bus = mdio_priv->mii_bus;
4694 + u16 value = 0;
4695 +
4696 + if (eth_priv->einfo->mii_config == PHY_INTERFACE_MODE_2500SGMII)
4697 + sgmii_2500 = 1;
4698 +
4699 + netif_info(eth_priv, drv, ndev, "%s\n", __func__);
4700 + /* PCS configuration done with corresponding GEMAC */
4701 +
4702 + pfe_eth_mdio_read(bus, 0, MDIO_SGMII_CR);
4703 + pfe_eth_mdio_read(bus, 0, MDIO_SGMII_SR);
4704 +
4705 + pfe_eth_mdio_write(bus, 0, MDIO_SGMII_CR, SGMII_CR_RST);
4706 +
4707 + if (sgmii_2500) {
4708 + pfe_eth_mdio_write(bus, 0, MDIO_SGMII_IF_MODE, SGMII_SPEED_1GBPS
4709 + | SGMII_EN);
4710 + pfe_eth_mdio_write(bus, 0, MDIO_SGMII_DEV_ABIL_SGMII,
4711 + SGMII_DEV_ABIL_ACK | SGMII_DEV_ABIL_SGMII);
4712 + pfe_eth_mdio_write(bus, 0, MDIO_SGMII_LINK_TMR_L, 0xa120);
4713 + pfe_eth_mdio_write(bus, 0, MDIO_SGMII_LINK_TMR_H, 0x7);
4714 +		/* Autonegotiation needs to be disabled for 2.5G SGMII mode */
4715 + value = SGMII_CR_FD | SGMII_CR_SPEED_SEL1_1G;
4716 + pfe_eth_mdio_write(bus, 0, MDIO_SGMII_CR, value);
4717 + } else {
4718 + pfe_eth_mdio_write(bus, 0, MDIO_SGMII_IF_MODE,
4719 + SGMII_SPEED_1GBPS
4720 + | SGMII_USE_SGMII_AN
4721 + | SGMII_EN);
4722 + pfe_eth_mdio_write(bus, 0, MDIO_SGMII_DEV_ABIL_SGMII,
4723 + SGMII_DEV_ABIL_EEE_CLK_STP_EN
4724 + | 0xa0
4725 + | SGMII_DEV_ABIL_SGMII);
4726 + pfe_eth_mdio_write(bus, 0, MDIO_SGMII_LINK_TMR_L, 0x400);
4727 + pfe_eth_mdio_write(bus, 0, MDIO_SGMII_LINK_TMR_H, 0x0);
4728 + value = SGMII_CR_AN_EN | SGMII_CR_FD | SGMII_CR_SPEED_SEL1_1G;
4729 + pfe_eth_mdio_write(bus, 0, MDIO_SGMII_CR, value);
4730 + }
4731 +}
4732 +
4733 +/*
4734 + * pfe_phy_init
4735 + *
4736 + */
4737 +static int pfe_phy_init(struct net_device *ndev)
4738 +{
4739 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4740 + struct phy_device *phydev;
4741 + char phy_id[MII_BUS_ID_SIZE + 3];
4742 + char bus_id[MII_BUS_ID_SIZE];
4743 + phy_interface_t interface;
4744 +
4745 + priv->oldlink = 0;
4746 + priv->oldspeed = 0;
4747 + priv->oldduplex = -1;
4748 +
4749 + snprintf(bus_id, MII_BUS_ID_SIZE, "ls1012a-%d", 0);
4750 + snprintf(phy_id, MII_BUS_ID_SIZE + 3, PHY_ID_FMT, bus_id,
4751 + priv->einfo->phy_id);
4752 + netif_info(priv, drv, ndev, "%s: %s\n", __func__, phy_id);
4753 + interface = priv->einfo->mii_config;
4754 + if ((interface == PHY_INTERFACE_MODE_SGMII) ||
4755 + (interface == PHY_INTERFACE_MODE_2500SGMII)) {
4756 + /*Configure SGMII PCS */
4757 + if (pfe->scfg) {
4758 + /* Config MDIO from serdes */
4759 + regmap_write(pfe->scfg, MDIOSELCR, MDIOSEL_SERDES);
4760 + }
4761 + ls1012a_configure_serdes(ndev);
4762 + }
4763 +
4764 + if (pfe->scfg) {
4765 + /*Config MDIO from PAD */
4766 + regmap_write(pfe->scfg, MDIOSELCR, MDIOSEL_EXTPHY);
4767 + }
4768 +
4769 + priv->oldlink = 0;
4770 + priv->oldspeed = 0;
4771 + priv->oldduplex = -1;
4772 + pr_info("%s interface %x\n", __func__, interface);
4773 +
4774 + if (priv->phy_node) {
4775 + phydev = of_phy_connect(ndev, priv->phy_node,
4776 + pfe_eth_adjust_link, 0,
4777 + priv->einfo->mii_config);
4778 + if (!(phydev)) {
4779 + netdev_err(ndev, "Unable to connect to phy\n");
4780 + return -ENODEV;
4781 + }
4782 +
4783 + } else {
4784 + phydev = phy_connect(ndev, phy_id,
4785 + &pfe_eth_adjust_link, interface);
4786 + if (IS_ERR(phydev)) {
4787 + netdev_err(ndev, "Unable to connect to phy\n");
4788 + return PTR_ERR(phydev);
4789 + }
4790 + }
4791 +
4792 + priv->phydev = phydev;
4793 + phydev->irq = PHY_POLL;
4794 +
4795 + return 0;
4796 +}
4797 +
4798 +/* pfe_gemac_init
4799 + */
4800 +static int pfe_gemac_init(struct pfe_eth_priv_s *priv)
4801 +{
4802 + struct gemac_cfg cfg;
4803 +
4804 + netif_info(priv, ifup, priv->ndev, "%s\n", __func__);
4805 +
4806 + cfg.mode = 0;
4807 + cfg.speed = SPEED_1000M;
4808 + cfg.duplex = DUPLEX_FULL;
4809 +
4810 + gemac_set_config(priv->EMAC_baseaddr, &cfg);
4811 + gemac_allow_broadcast(priv->EMAC_baseaddr);
4812 + gemac_enable_1536_rx(priv->EMAC_baseaddr);
4813 + gemac_enable_stacked_vlan(priv->EMAC_baseaddr);
4814 + gemac_enable_pause_rx(priv->EMAC_baseaddr);
4815 + gemac_set_bus_width(priv->EMAC_baseaddr, 64);
4816 +
4817 + /*GEM will perform checksum verifications*/
4818 + if (priv->ndev->features & NETIF_F_RXCSUM)
4819 + gemac_enable_rx_checksum_offload(priv->EMAC_baseaddr);
4820 + else
4821 + gemac_disable_rx_checksum_offload(priv->EMAC_baseaddr);
4822 +
4823 + return 0;
4824 +}
4825 +
4826 +/* pfe_eth_event_handler
4827 + */
4828 +static int pfe_eth_event_handler(void *data, int event, int qno)
4829 +{
4830 + struct pfe_eth_priv_s *priv = data;
4831 +
4832 + switch (event) {
4833 + case EVENT_RX_PKT_IND:
4834 +
4835 + if (qno == 0) {
4836 + if (napi_schedule_prep(&priv->high_napi)) {
4837 + netif_info(priv, intr, priv->ndev,
4838 + "%s: schedule high prio poll\n"
4839 + , __func__);
4840 +
4841 +#ifdef PFE_ETH_NAPI_STATS
4842 + priv->napi_counters[NAPI_SCHED_COUNT]++;
4843 +#endif
4844 +
4845 + __napi_schedule(&priv->high_napi);
4846 + }
4847 + } else if (qno == 1) {
4848 + if (napi_schedule_prep(&priv->low_napi)) {
4849 + netif_info(priv, intr, priv->ndev,
4850 + "%s: schedule low prio poll\n"
4851 + , __func__);
4852 +
4853 +#ifdef PFE_ETH_NAPI_STATS
4854 + priv->napi_counters[NAPI_SCHED_COUNT]++;
4855 +#endif
4856 + __napi_schedule(&priv->low_napi);
4857 + }
4858 + } else if (qno == 2) {
4859 + if (napi_schedule_prep(&priv->lro_napi)) {
4860 + netif_info(priv, intr, priv->ndev,
4861 + "%s: schedule lro prio poll\n"
4862 + , __func__);
4863 +
4864 +#ifdef PFE_ETH_NAPI_STATS
4865 + priv->napi_counters[NAPI_SCHED_COUNT]++;
4866 +#endif
4867 + __napi_schedule(&priv->lro_napi);
4868 + }
4869 + }
4870 +
4871 + break;
4872 +
4873 + case EVENT_TXDONE_IND:
4874 + pfe_eth_flush_tx(priv);
4875 + hif_lib_event_handler_start(&priv->client, EVENT_TXDONE_IND, 0);
4876 + break;
4877 + case EVENT_HIGH_RX_WM:
4878 + default:
4879 + break;
4880 + }
4881 +
4882 + return 0;
4883 +}
4884 +
4885 +static int pfe_eth_change_mtu(struct net_device *ndev, int new_mtu)
4886 +{
4887 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4888 +
4889 + ndev->mtu = new_mtu;
4890 + new_mtu += ETH_HLEN + ETH_FCS_LEN;
4891 + gemac_set_rx_max_fl(priv->EMAC_baseaddr, new_mtu);
4892 +
4893 + return 0;
4894 +}
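+
+/*
+ * Worked example for the computation above (illustrative): with the
+ * default MTU of 1500, the GEMAC max receive frame length is set to
+ * 1500 + ETH_HLEN (14) + ETH_FCS_LEN (4) = 1518 bytes.
+ */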
4895 +
4896 +/* pfe_eth_open
4897 + */
4898 +static int pfe_eth_open(struct net_device *ndev)
4899 +{
4900 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4901 + struct hif_client_s *client;
4902 + int rc;
4903 +
4904 + netif_info(priv, ifup, ndev, "%s\n", __func__);
4905 +
4906 + /* Register client driver with HIF */
4907 + client = &priv->client;
4908 + memset(client, 0, sizeof(*client));
4909 + client->id = PFE_CL_GEM0 + priv->id;
4910 + client->tx_qn = emac_txq_cnt;
4911 + client->rx_qn = EMAC_RXQ_CNT;
4912 + client->priv = priv;
4913 + client->pfe = priv->pfe;
4914 + client->event_handler = pfe_eth_event_handler;
4915 +
4916 + client->tx_qsize = EMAC_TXQ_DEPTH;
4917 + client->rx_qsize = EMAC_RXQ_DEPTH;
4918 +
4919 + rc = hif_lib_client_register(client);
4920 + if (rc) {
4921 + netdev_err(ndev, "%s: hif_lib_client_register(%d) failed\n",
4922 + __func__, client->id);
4923 + goto err0;
4924 + }
4925 +
4926 + netif_info(priv, drv, ndev, "%s: registered client: %p\n", __func__,
4927 + client);
4928 +
4929 + pfe_gemac_init(priv);
4930 +
4931 + if (!is_valid_ether_addr(ndev->dev_addr)) {
4932 + netdev_err(ndev, "%s: invalid MAC address\n", __func__);
4933 + rc = -EADDRNOTAVAIL;
4934 + goto err1;
4935 + }
4936 +
4937 + gemac_set_laddrN(priv->EMAC_baseaddr,
4938 + (struct pfe_mac_addr *)ndev->dev_addr, 1);
4939 +
4940 + napi_enable(&priv->high_napi);
4941 + napi_enable(&priv->low_napi);
4942 + napi_enable(&priv->lro_napi);
4943 +
4944 + rc = pfe_eth_start(priv);
4945 +
4946 + netif_tx_wake_all_queues(ndev);
4947 +
4948 + return rc;
4949 +
4950 +err1:
4951 + hif_lib_client_unregister(&priv->client);
4952 +
4953 +err0:
4954 + return rc;
4955 +}
4956 +
4957 +/*
4958 + * pfe_eth_shutdown
4959 + */
4960 +int pfe_eth_shutdown(struct net_device *ndev, int wake)
4961 +{
4962 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4963 + int i, qstatus, id;
4964 + unsigned long next_poll = jiffies + 1, end = jiffies +
4965 + (TX_POLL_TIMEOUT_MS * HZ) / 1000;
4966 + int tx_pkts, prv_tx_pkts;
4967 +
4968 + netif_info(priv, ifdown, ndev, "%s\n", __func__);
4969 +
4970 + for (i = 0; i < emac_txq_cnt; i++)
4971 + hrtimer_cancel(&priv->fast_tx_timeout[i].timer);
4972 +
4973 + netif_tx_stop_all_queues(ndev);
4974 +
4975 + do {
4976 + tx_pkts = 0;
4977 + pfe_eth_flush_tx(priv);
4978 +
4979 + for (i = 0; i < emac_txq_cnt; i++)
4980 + tx_pkts += hif_lib_tx_pending(&priv->client, i);
4981 +
4982 + if (tx_pkts) {
4983 + /* Don't wait forever; break if we cross the max timeout */
4984 + if (time_after(jiffies, end)) {
4985 + pr_err(
4986 + "(%s) Tx is not complete after %d msec\n",
4987 + ndev->name, TX_POLL_TIMEOUT_MS);
4988 + break;
4989 + }
4990 +
4991 + pr_info("%s : (%s) Waiting for tx packets to free. Pending tx pkts = %d.\n"
4992 + , __func__, ndev->name, tx_pkts);
4993 + if (need_resched())
4994 + schedule();
4995 + }
4996 +
4997 + } while (tx_pkts);
4998 +
4999 + end = jiffies + (TX_POLL_TIMEOUT_MS * HZ) / 1000;
5000 +
5001 + prv_tx_pkts = tmu_pkts_processed(priv->id);
5002 + /*
5003 + * Wait until the TMU has transmitted all pending packets:
5004 + * poll tmu_qstatus and the TMU packets-processed count every 10ms.
5005 + * Consider the TMU busy if its queue is pending or if it has
5006 + * processed any packets since the last poll.
5007 + */
5008 + while (1) {
5009 + if (time_after(jiffies, next_poll)) {
5010 + tx_pkts = tmu_pkts_processed(priv->id);
5011 + qstatus = tmu_qstatus(priv->id) & 0x7ffff;
5012 +
5013 + if (!qstatus && (tx_pkts == prv_tx_pkts))
5014 + break;
5015 + /* Don't wait forever, break if we cross max
5016 + * timeout(TX_POLL_TIMEOUT_MS)
5017 + */
5018 + if (time_after(jiffies, end)) {
5019 + pr_err("TMU%d is busy after %d msec\n",
5020 + priv->id, TX_POLL_TIMEOUT_MS);
5021 + break;
5022 + }
5023 + prv_tx_pkts = tx_pkts;
5024 + next_poll++;
5025 + }
5026 + if (need_resched())
5027 + schedule();
5028 + }
5029 + /* Wait a little longer for any in-flight packets to finish transmitting */
5030 + next_poll = jiffies + 1;
5031 + while (1) {
5032 + if (time_after(jiffies, next_poll))
5033 + break;
5034 + if (need_resched())
5035 + schedule();
5036 + }
5037 +
5038 + pfe_eth_stop(ndev, wake);
5039 +
5040 + napi_disable(&priv->lro_napi);
5041 + napi_disable(&priv->low_napi);
5042 + napi_disable(&priv->high_napi);
5043 +
5044 + for (id = CLASS0_ID; id <= CLASS_MAX_ID; id++) {
5045 + pe_dmem_write(id, 0, CLASS_DM_CRC_VALIDATED
5046 + + (priv->id * 4), 4);
5047 + }
5048 +
5049 + hif_lib_client_unregister(&priv->client);
5050 +
5051 + return 0;
5052 +}
5053 +
5054 +/* pfe_eth_close
5055 + *
5056 + */
5057 +static int pfe_eth_close(struct net_device *ndev)
5058 +{
5059 + pfe_eth_shutdown(ndev, 0);
5060 +
5061 + return 0;
5062 +}
5063 +
5064 +/* pfe_eth_suspend
5065 + *
5066 + * return value : 1 if netdevice is configured to wakeup system
5067 + * 0 otherwise
5068 + */
5069 +int pfe_eth_suspend(struct net_device *ndev)
5070 +{
5071 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
5072 + int retval = 0;
5073 +
5074 + if (priv->wol) {
5075 + gemac_set_wol(priv->EMAC_baseaddr, priv->wol);
5076 + retval = 1;
5077 + }
5078 + pfe_eth_shutdown(ndev, priv->wol);
5079 +
5080 + return retval;
5081 +}
5082 +
5083 +/* pfe_eth_resume
5084 + *
5085 + */
5086 +int pfe_eth_resume(struct net_device *ndev)
5087 +{
5088 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
5089 +
5090 + if (priv->wol)
5091 + gemac_set_wol(priv->EMAC_baseaddr, 0);
5092 + gemac_tx_enable(priv->EMAC_baseaddr);
5093 +
5094 + return pfe_eth_open(ndev);
5095 +}
5096 +
5097 +/* pfe_eth_get_queuenum
5098 + */
5099 +static int pfe_eth_get_queuenum(struct pfe_eth_priv_s *priv, struct sk_buff
5100 + *skb)
5101 +{
5102 + int queuenum = 0;
5103 + unsigned long flags;
5104 +
5105 + /* Get the Fast Path queue number */
5106 + /*
5107 + * Use conntrack mark (if conntrack exists), then packet mark (if any),
5108 + * then fall back to the default
5109 + */
5110 +#if defined(CONFIG_IP_NF_CONNTRACK_MARK) || defined(CONFIG_NF_CONNTRACK_MARK)
5111 + if (skb->_nfct) {
5112 + enum ip_conntrack_info cinfo;
5113 + struct nf_conn *ct;
5114 +
5115 + ct = nf_ct_get(skb, &cinfo);
5116 +
5117 + if (ct) {
5118 + u32 connmark;
5119 +
5120 + connmark = ct->mark;
5121 +
5122 + if ((connmark & 0x80000000) && priv->id != 0)
5123 + connmark >>= 16;
5124 +
5125 + queuenum = connmark & EMAC_QUEUENUM_MASK;
5126 + }
5127 + } else { /* continued after #endif ... */
5128 +#endif
5129 + if (skb->mark) {
5130 + queuenum = skb->mark & EMAC_QUEUENUM_MASK;
5131 + } else {
5132 + spin_lock_irqsave(&priv->lock, flags);
5133 + queuenum = priv->default_priority & EMAC_QUEUENUM_MASK;
5134 + spin_unlock_irqrestore(&priv->lock, flags);
5135 + }
5136 +#if defined(CONFIG_IP_NF_CONNTRACK_MARK) || defined(CONFIG_NF_CONNTRACK_MARK)
5137 + }
5138 +#endif
5139 + return queuenum;
5140 +}
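+
+/*
+ * Usage sketch (illustrative, not part of the original sources): with
+ * emac_txq_cnt = 16 (EMAC_TXQ_CNT), EMAC_QUEUENUM_MASK is 0xf, so
+ * marking a connection with e.g.
+ * "iptables -t mangle -A POSTROUTING -j CONNMARK --set-mark 2" steers
+ * it to Tx queue 2 via the conntrack branch above, while plain skb
+ * marks (-j MARK) are honoured for untracked traffic.
+ */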
5141 +
5142 +/* pfe_eth_might_stop_tx
5143 + *
5144 + */
5145 +static int pfe_eth_might_stop_tx(struct pfe_eth_priv_s *priv, int queuenum,
5146 + struct netdev_queue *tx_queue,
5147 + unsigned int n_desc,
5148 + unsigned int n_segs)
5149 +{
5150 + ktime_t kt;
5151 + int tried = 0;
5152 +
5153 +try_again:
5154 + if (unlikely((__hif_tx_avail(&pfe->hif) < n_desc) ||
5155 + (hif_lib_tx_avail(&priv->client, queuenum) < n_desc) ||
5156 + (hif_lib_tx_credit_avail(pfe, priv->id, queuenum) < n_segs))) {
5157 + if (!tried) {
5158 + __hif_lib_update_credit(&priv->client, queuenum);
5159 + tried = 1;
5160 + goto try_again;
5161 + }
5162 +#ifdef PFE_ETH_TX_STATS
5163 + if (__hif_tx_avail(&pfe->hif) < n_desc) {
5164 + priv->stop_queue_hif[queuenum]++;
5165 + } else if (hif_lib_tx_avail(&priv->client, queuenum) < n_desc) {
5166 + priv->stop_queue_hif_client[queuenum]++;
5167 + } else if (hif_lib_tx_credit_avail(pfe, priv->id, queuenum) <
5168 + n_segs) {
5169 + priv->stop_queue_credit[queuenum]++;
5170 + }
5171 + priv->stop_queue_total[queuenum]++;
5172 +#endif
5173 + netif_tx_stop_queue(tx_queue);
5174 +
5175 + kt = ktime_set(0, LS1012A_TX_FAST_RECOVERY_TIMEOUT_MS *
5176 + NSEC_PER_MSEC);
5177 + hrtimer_start(&priv->fast_tx_timeout[queuenum].timer, kt,
5178 + HRTIMER_MODE_REL);
5179 + return -1;
5180 + } else {
5181 + return 0;
5182 + }
5183 +}
5184 +
5185 +#define SA_MAX_OP 2
5186 +/* pfe_hif_send_packet
5187 + *
5188 + * At this level if TX fails we drop the packet
5189 + */
5190 +static void pfe_hif_send_packet(struct sk_buff *skb, struct pfe_eth_priv_s
5191 + *priv, int queuenum)
5192 +{
5193 + struct skb_shared_info *sh = skb_shinfo(skb);
5194 + unsigned int nr_frags;
5195 + u32 ctrl = 0;
5196 +
5197 + netif_info(priv, tx_queued, priv->ndev, "%s\n", __func__);
5198 +
5199 + if (skb_is_gso(skb)) {
5200 + priv->stats.tx_dropped++;
5201 + return;
5202 + }
5203 +
5204 + if (skb->ip_summed == CHECKSUM_PARTIAL)
5205 + ctrl = HIF_CTRL_TX_CHECKSUM;
5206 +
5207 + nr_frags = sh->nr_frags;
5208 +
5209 + if (nr_frags) {
5210 + skb_frag_t *f;
5211 + int i;
5212 +
5213 + __hif_lib_xmit_pkt(&priv->client, queuenum, skb->data,
5214 + skb_headlen(skb), ctrl, HIF_FIRST_BUFFER,
5215 + skb);
5216 +
5217 + for (i = 0; i < nr_frags - 1; i++) {
5218 + f = &sh->frags[i];
5219 + __hif_lib_xmit_pkt(&priv->client, queuenum,
5220 + skb_frag_address(f),
5221 + skb_frag_size(f),
5222 + 0x0, 0x0, skb);
5223 + }
5224 +
5225 + f = &sh->frags[i];
5226 +
5227 + __hif_lib_xmit_pkt(&priv->client, queuenum,
5228 + skb_frag_address(f), skb_frag_size(f),
5229 + 0x0, HIF_LAST_BUFFER | HIF_DATA_VALID,
5230 + skb);
5231 +
5232 + netif_info(priv, tx_queued, priv->ndev,
5233 + "%s: pkt sent successfully skb:%p nr_frags:%d len:%d\n",
5234 + __func__, skb, nr_frags, skb->len);
5235 + } else {
5236 + __hif_lib_xmit_pkt(&priv->client, queuenum, skb->data,
5237 + skb->len, ctrl, HIF_FIRST_BUFFER |
5238 + HIF_LAST_BUFFER | HIF_DATA_VALID,
5239 + skb);
5240 + netif_info(priv, tx_queued, priv->ndev,
5241 + "%s: pkt sent successfully skb:%p len:%d\n",
5242 + __func__, skb, skb->len);
5243 + }
5244 + hif_tx_dma_start();
5245 + priv->stats.tx_packets++;
5246 + priv->stats.tx_bytes += skb->len;
5247 + hif_lib_tx_credit_use(pfe, priv->id, queuenum, 1);
5248 +}
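+
+/*
+ * Note on the Tx path above: a fragmented skb is queued to the HIF as a
+ * HIF_FIRST_BUFFER chunk for the linear header, intermediate fragments
+ * with no flags, and a final fragment flagged HIF_LAST_BUFFER |
+ * HIF_DATA_VALID, so that completion handling frees the skb exactly once.
+ */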
5249 +
5250 +/* pfe_eth_flush_txQ
5251 + */
5252 +static void pfe_eth_flush_txQ(struct pfe_eth_priv_s *priv, int tx_q_num, int
5253 + from_tx, int n_desc)
5254 +{
5255 + struct sk_buff *skb;
5256 + struct netdev_queue *tx_queue = netdev_get_tx_queue(priv->ndev,
5257 + tx_q_num);
5258 + unsigned int flags;
5259 +
5260 + netif_info(priv, tx_done, priv->ndev, "%s\n", __func__);
5261 +
5262 + if (!from_tx)
5263 + __netif_tx_lock_bh(tx_queue);
5264 +
5265 + /* Clean HIF and client queue */
5266 + while ((skb = hif_lib_tx_get_next_complete(&priv->client,
5267 + tx_q_num, &flags,
5268 + HIF_TX_DESC_NT))) {
5269 + if (flags & HIF_DATA_VALID)
5270 + dev_kfree_skb_any(skb);
5271 + }
5272 + if (!from_tx)
5273 + __netif_tx_unlock_bh(tx_queue);
5274 +}
5275 +
5276 +/* pfe_eth_flush_tx
5277 + */
5278 +static void pfe_eth_flush_tx(struct pfe_eth_priv_s *priv)
5279 +{
5280 + int ii;
5281 +
5282 + netif_info(priv, tx_done, priv->ndev, "%s\n", __func__);
5283 +
5284 + for (ii = 0; ii < emac_txq_cnt; ii++) {
5285 + pfe_eth_flush_txQ(priv, ii, 0, 0);
5286 + __hif_lib_update_credit(&priv->client, ii);
5287 + }
5288 +}
5289 +
5290 +void pfe_tx_get_req_desc(struct sk_buff *skb, unsigned int *n_desc, unsigned int
5291 + *n_segs)
5292 +{
5293 + struct skb_shared_info *sh = skb_shinfo(skb);
5294 +
5295 + /* Scattered data */
5296 + if (sh->nr_frags) {
5297 + *n_desc = sh->nr_frags + 1;
5298 + *n_segs = 1;
5299 + /* Regular case */
5300 + } else {
5301 + *n_desc = 1;
5302 + *n_segs = 1;
5303 + }
5304 +}
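+
+/*
+ * Worked example (illustrative): a linear skb needs a single HIF
+ * descriptor (n_desc = 1), while an skb with three page fragments needs
+ * nr_frags + 1 = 4 descriptors (header plus fragments); either way it
+ * counts as one segment (n_segs = 1) for Tx credit accounting.
+ */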
5305 +
5306 +/* pfe_eth_send_packet
5307 + */
5308 +static int pfe_eth_send_packet(struct sk_buff *skb, struct net_device *ndev)
5309 +{
5310 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
5311 + int tx_q_num = skb_get_queue_mapping(skb);
5312 + int n_desc, n_segs;
5313 + struct netdev_queue *tx_queue = netdev_get_tx_queue(priv->ndev,
5314 + tx_q_num);
5315 +
5316 + netif_info(priv, tx_queued, ndev, "%s\n", __func__);
5317 +
5318 + if ((!skb_is_gso(skb)) && (skb_headroom(skb) < (PFE_PKT_HEADER_SZ +
5319 + sizeof(unsigned long)))) {
5320 + netif_warn(priv, tx_err, priv->ndev, "%s: copying skb\n",
5321 + __func__);
5322 +
5323 + if (pskb_expand_head(skb, (PFE_PKT_HEADER_SZ + sizeof(unsigned
5324 + long)), 0, GFP_ATOMIC)) {
5325 + /* No need to retransmit; there is no way to recover */
5326 + kfree_skb(skb);
5327 + priv->stats.tx_dropped++;
5328 + return NETDEV_TX_OK;
5329 + }
5330 + }
5331 +
5332 + pfe_tx_get_req_desc(skb, &n_desc, &n_segs);
5333 +
5334 + hif_tx_lock(&pfe->hif);
5335 + if (unlikely(pfe_eth_might_stop_tx(priv, tx_q_num, tx_queue, n_desc,
5336 + n_segs))) {
5337 +#ifdef PFE_ETH_TX_STATS
5338 + if (priv->was_stopped[tx_q_num]) {
5339 + priv->clean_fail[tx_q_num]++;
5340 + priv->was_stopped[tx_q_num] = 0;
5341 + }
5342 +#endif
5343 + hif_tx_unlock(&pfe->hif);
5344 + return NETDEV_TX_BUSY;
5345 + }
5346 +
5347 + pfe_hif_send_packet(skb, priv, tx_q_num);
5348 +
5349 + hif_tx_unlock(&pfe->hif);
5350 +
5351 + tx_queue->trans_start = jiffies;
5352 +
5353 +#ifdef PFE_ETH_TX_STATS
5354 + priv->was_stopped[tx_q_num] = 0;
5355 +#endif
5356 +
5357 + return NETDEV_TX_OK;
5358 +}
5359 +
5360 +/* pfe_eth_select_queue
5361 + *
5362 + */
5363 +static u16 pfe_eth_select_queue(struct net_device *ndev, struct sk_buff *skb,
5364 + struct net_device *sb_dev)
5365 +{
5366 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
5367 +
5368 + return pfe_eth_get_queuenum(priv, skb);
5369 +}
5370 +
5371 +/* pfe_eth_get_stats
5372 + */
5373 +static struct net_device_stats *pfe_eth_get_stats(struct net_device *ndev)
5374 +{
5375 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
5376 +
5377 + netif_info(priv, drv, ndev, "%s\n", __func__);
5378 +
5379 + return &priv->stats;
5380 +}
5381 +
5382 +/* pfe_eth_set_mac_address
5383 + */
5384 +static int pfe_eth_set_mac_address(struct net_device *ndev, void *addr)
5385 +{
5386 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
5387 + struct sockaddr *sa = addr;
5388 +
5389 + netif_info(priv, drv, ndev, "%s\n", __func__);
5390 +
5391 + if (!is_valid_ether_addr(sa->sa_data))
5392 + return -EADDRNOTAVAIL;
5393 +
5394 + memcpy(ndev->dev_addr, sa->sa_data, ETH_ALEN);
5395 +
5396 + gemac_set_laddrN(priv->EMAC_baseaddr,
5397 + (struct pfe_mac_addr *)ndev->dev_addr, 1);
5398 +
5399 + return 0;
5400 +}
5401 +
5402 +/* pfe_eth_enet_addr_byte_mac
5403 + */
5404 +int pfe_eth_enet_addr_byte_mac(u8 *enet_byte_addr,
5405 + struct pfe_mac_addr *enet_addr)
5406 +{
5407 + if (!enet_byte_addr || !enet_addr) {
5408 + return -1;
5409 +
5410 + } else {
5411 + enet_addr->bottom = enet_byte_addr[0] |
5412 + (enet_byte_addr[1] << 8) |
5413 + (enet_byte_addr[2] << 16) |
5414 + (enet_byte_addr[3] << 24);
5415 + enet_addr->top = enet_byte_addr[4] |
5416 + (enet_byte_addr[5] << 8);
5417 + return 0;
5418 + }
5419 +}
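+
+/*
+ * Worked example (illustrative): for the MAC address 00:11:22:33:44:55
+ * the helper above packs bottom = 0x33221100 and top = 0x5544, i.e. the
+ * first four bytes little-endian into 'bottom' and the last two into
+ * 'top'.
+ */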
5420 +
5421 +/* pfe_eth_set_multi
5422 + */
5423 +static void pfe_eth_set_multi(struct net_device *ndev)
5424 +{
5425 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
5426 + struct pfe_mac_addr hash_addr; /* hash register structure */
5427 + /* specific mac address register structure */
5428 + struct pfe_mac_addr spec_addr;
5429 + int result; /* index into hash register to set */
5430 + int uc_count = 0;
5431 + struct netdev_hw_addr *ha;
5432 +
5433 + if (ndev->flags & IFF_PROMISC) {
5434 + netif_info(priv, drv, ndev, "entering promiscuous mode\n");
5435 +
5436 + priv->promisc = 1;
5437 + gemac_enable_copy_all(priv->EMAC_baseaddr);
5438 + } else {
5439 + priv->promisc = 0;
5440 + gemac_disable_copy_all(priv->EMAC_baseaddr);
5441 + }
5442 +
5443 + /* Enable broadcast frame reception if required. */
5444 + if (ndev->flags & IFF_BROADCAST) {
5445 + gemac_allow_broadcast(priv->EMAC_baseaddr);
5446 + } else {
5447 + netif_info(priv, drv, ndev,
5448 + "disabling broadcast frame reception\n");
5449 +
5450 + gemac_no_broadcast(priv->EMAC_baseaddr);
5451 + }
5452 +
5453 + if (ndev->flags & IFF_ALLMULTI) {
5454 + /* Set the hash to rx all multicast frames */
5455 + hash_addr.bottom = 0xFFFFFFFF;
5456 + hash_addr.top = 0xFFFFFFFF;
5457 + gemac_set_hash(priv->EMAC_baseaddr, &hash_addr);
5458 + netdev_for_each_uc_addr(ha, ndev) {
5459 + if (uc_count >= MAX_UC_SPEC_ADDR_REG)
5460 + break;
5461 + pfe_eth_enet_addr_byte_mac(ha->addr, &spec_addr);
5462 + gemac_set_laddrN(priv->EMAC_baseaddr, &spec_addr,
5463 + uc_count + 2);
5464 + uc_count++;
5465 + }
5466 + } else if ((netdev_mc_count(ndev) > 0) || (netdev_uc_count(ndev))) {
5467 + u8 *addr;
5468 +
5469 + hash_addr.bottom = 0;
5470 + hash_addr.top = 0;
5471 +
5472 + netdev_for_each_mc_addr(ha, ndev) {
5473 + addr = ha->addr;
5474 +
5475 + netif_info(priv, drv, ndev,
5476 + "adding multicast address %X:%X:%X:%X:%X:%X to gem filter\n",
5477 + addr[0], addr[1], addr[2],
5478 + addr[3], addr[4], addr[5]);
5479 +
5480 + result = pfe_eth_get_hash(addr);
5481 +
5482 + if (result < EMAC_HASH_REG_BITS) {
5483 + if (result < 32)
5484 + hash_addr.bottom |= (1 << result);
5485 + else
5486 + hash_addr.top |= (1 << (result - 32));
5487 + } else {
5488 + break;
5489 + }
5490 + }
5491 +
5492 + uc_count = -1;
5493 + netdev_for_each_uc_addr(ha, ndev) {
5494 + addr = ha->addr;
5495 +
5496 + if (++uc_count < MAX_UC_SPEC_ADDR_REG) {
5497 + netdev_info(ndev,
5498 + "adding unicast address %02x:%02x:%02x:%02x:%02x:%02x to gem filter\n",
5499 + addr[0], addr[1], addr[2],
5500 + addr[3], addr[4], addr[5]);
5501 + pfe_eth_enet_addr_byte_mac(addr, &spec_addr);
5502 + gemac_set_laddrN(priv->EMAC_baseaddr,
5503 + &spec_addr, uc_count + 2);
5504 + } else {
5505 + netif_info(priv, drv, ndev,
5506 + "adding unicast address %02x:%02x:%02x:%02x:%02x:%02x to gem hash\n",
5507 + addr[0], addr[1], addr[2],
5508 + addr[3], addr[4], addr[5]);
5509 +
5510 + result = pfe_eth_get_hash(addr);
5511 + if (result >= EMAC_HASH_REG_BITS) {
5512 + break;
5513 +
5514 + } else {
5515 + if (result < 32)
5516 + hash_addr.bottom |= (1 <<
5517 + result);
5518 + else
5519 + hash_addr.top |= (1 <<
5520 + (result - 32));
5521 + }
5522 + }
5523 + }
5524 +
5525 + gemac_set_hash(priv->EMAC_baseaddr, &hash_addr);
5526 + }
5527 +
5528 + if (netdev_uc_count(ndev) < MAX_UC_SPEC_ADDR_REG) {
5529 + /*
5530 + * Check if there are any specific address HW registers that
5531 + * need to be flushed
5532 + */
5533 + for (uc_count = netdev_uc_count(ndev); uc_count <
5534 + MAX_UC_SPEC_ADDR_REG; uc_count++)
5535 + gemac_clear_laddrN(priv->EMAC_baseaddr, uc_count + 2);
5536 + }
5537 +
5538 + if (ndev->flags & IFF_LOOPBACK)
5539 + gemac_set_loop(priv->EMAC_baseaddr, LB_LOCAL);
5540 +}
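+
+/*
+ * Filtering strategy used above: the first MAX_UC_SPEC_ADDR_REG unicast
+ * addresses go into the exact-match registers (slots 2 and up; slot 1
+ * holds the station address), while the overflow and all multicast
+ * addresses fall back to the 64-bit (bottom/top) hash filter.
+ */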
5541 +
5542 +/* pfe_eth_set_features
5543 + */
5544 +static int pfe_eth_set_features(struct net_device *ndev, netdev_features_t
5545 + features)
5546 +{
5547 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
5548 + int rc = 0;
5549 +
5550 + if (features & NETIF_F_RXCSUM)
5551 + gemac_enable_rx_checksum_offload(priv->EMAC_baseaddr);
5552 + else
5553 + gemac_disable_rx_checksum_offload(priv->EMAC_baseaddr);
5554 + return rc;
5555 +}
5556 +
5557 +/* pfe_eth_fast_tx_timeout
5558 + */
5559 +static enum hrtimer_restart pfe_eth_fast_tx_timeout(struct hrtimer *timer)
5560 +{
5561 + struct pfe_eth_fast_timer *fast_tx_timeout = container_of(timer, struct
5562 + pfe_eth_fast_timer,
5563 + timer);
5564 + struct pfe_eth_priv_s *priv = container_of(fast_tx_timeout->base,
5565 + struct pfe_eth_priv_s,
5566 + fast_tx_timeout);
5567 + struct netdev_queue *tx_queue = netdev_get_tx_queue(priv->ndev,
5568 + fast_tx_timeout->queuenum);
5569 +
5570 + if (netif_tx_queue_stopped(tx_queue)) {
5571 +#ifdef PFE_ETH_TX_STATS
5572 + priv->was_stopped[fast_tx_timeout->queuenum] = 1;
5573 +#endif
5574 + netif_tx_wake_queue(tx_queue);
5575 + }
5576 +
5577 + return HRTIMER_NORESTART;
5578 +}
5579 +
5580 +/* pfe_eth_fast_tx_timeout_init
5581 + */
5582 +static void pfe_eth_fast_tx_timeout_init(struct pfe_eth_priv_s *priv)
5583 +{
5584 + int i;
5585 +
5586 + for (i = 0; i < emac_txq_cnt; i++) {
5587 + priv->fast_tx_timeout[i].queuenum = i;
5588 + hrtimer_init(&priv->fast_tx_timeout[i].timer, CLOCK_MONOTONIC,
5589 + HRTIMER_MODE_REL);
5590 + priv->fast_tx_timeout[i].timer.function =
5591 + pfe_eth_fast_tx_timeout;
5592 + priv->fast_tx_timeout[i].base = priv->fast_tx_timeout;
5593 + }
5594 +}
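+
+/*
+ * The timers set up above implement the fast Tx recovery path: when
+ * pfe_eth_might_stop_tx() stops a queue, the per-queue hrtimer fires
+ * after LS1012A_TX_FAST_RECOVERY_TIMEOUT_MS and wakes the queue so the
+ * stack retries transmission.
+ */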
5595 +
5596 +static struct sk_buff *pfe_eth_rx_skb(struct net_device *ndev,
5597 + struct pfe_eth_priv_s *priv,
5598 + unsigned int qno)
5599 +{
5600 + void *buf_addr;
5601 + unsigned int rx_ctrl;
5602 + unsigned int desc_ctrl = 0;
5603 + struct hif_ipsec_hdr *ipsec_hdr = NULL;
5604 + struct sk_buff *skb;
5605 + struct sk_buff *skb_frag, *skb_frag_last = NULL;
5606 + int length = 0, offset;
5607 +
5608 + skb = priv->skb_inflight[qno];
5609 +
5610 + if (skb) {
5611 + skb_frag_last = skb_shinfo(skb)->frag_list;
5612 + if (skb_frag_last) {
5613 + while (skb_frag_last->next)
5614 + skb_frag_last = skb_frag_last->next;
5615 + }
5616 + }
5617 +
5618 + while (!(desc_ctrl & CL_DESC_LAST)) {
5619 + buf_addr = hif_lib_receive_pkt(&priv->client, qno, &length,
5620 + &offset, &rx_ctrl, &desc_ctrl,
5621 + (void **)&ipsec_hdr);
5622 + if (!buf_addr)
5623 + goto incomplete;
5624 +
5625 +#ifdef PFE_ETH_NAPI_STATS
5626 + priv->napi_counters[NAPI_DESC_COUNT]++;
5627 +#endif
5628 +
5629 + /* First frag */
5630 + if (desc_ctrl & CL_DESC_FIRST) {
5631 + skb = build_skb(buf_addr, 0);
5632 + if (unlikely(!skb))
5633 + goto pkt_drop;
5634 +
5635 + skb_reserve(skb, offset);
5636 + skb_put(skb, length);
5637 + skb->dev = ndev;
5638 +
5639 + if ((ndev->features & NETIF_F_RXCSUM) && (rx_ctrl &
5640 + HIF_CTRL_RX_CHECKSUMMED))
5641 + skb->ip_summed = CHECKSUM_UNNECESSARY;
5642 + else
5643 + skb_checksum_none_assert(skb);
5644 +
5645 + } else {
5646 + /* Next frags */
5647 + if (unlikely(!skb)) {
5648 + pr_err("%s: NULL skb_inflight\n",
5649 + __func__);
5650 + goto pkt_drop;
5651 + }
5652 +
5653 + skb_frag = build_skb(buf_addr, 0);
5654 +
5655 + if (unlikely(!skb_frag)) {
5656 + kfree(buf_addr);
5657 + goto pkt_drop;
5658 + }
5659 +
5660 + skb_reserve(skb_frag, offset);
5661 + skb_put(skb_frag, length);
5662 +
5663 + skb_frag->dev = ndev;
5664 +
5665 + if (skb_shinfo(skb)->frag_list)
5666 + skb_frag_last->next = skb_frag;
5667 + else
5668 + skb_shinfo(skb)->frag_list = skb_frag;
5669 +
5670 + skb->truesize += skb_frag->truesize;
5671 + skb->data_len += length;
5672 + skb->len += length;
5673 + skb_frag_last = skb_frag;
5674 + }
5675 + }
5676 +
5677 + priv->skb_inflight[qno] = NULL;
5678 + return skb;
5679 +
5680 +incomplete:
5681 + priv->skb_inflight[qno] = skb;
5682 + return NULL;
5683 +
5684 +pkt_drop:
5685 + priv->skb_inflight[qno] = NULL;
5686 +
5687 + if (skb)
5688 + kfree_skb(skb);
5689 + else
5690 + kfree(buf_addr);
5691 +
5692 + priv->stats.rx_errors++;
5693 +
5694 + return NULL;
5695 +}
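+
+/*
+ * Note on the receive path above: multi-buffer packets are reassembled
+ * by chaining continuation buffers onto skb_shinfo(skb)->frag_list; a
+ * packet whose CL_DESC_LAST buffer has not arrived yet is parked in
+ * priv->skb_inflight[qno] and completed on a later poll.
+ */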
5696 +
5697 +/* pfe_eth_poll
5698 + */
5699 +static int pfe_eth_poll(struct pfe_eth_priv_s *priv, struct napi_struct *napi,
5700 + unsigned int qno, int budget)
5701 +{
5702 + struct net_device *ndev = priv->ndev;
5703 + struct sk_buff *skb;
5704 + int work_done = 0;
5705 + unsigned int len;
5706 +
5707 + netif_info(priv, intr, priv->ndev, "%s\n", __func__);
5708 +
5709 +#ifdef PFE_ETH_NAPI_STATS
5710 + priv->napi_counters[NAPI_POLL_COUNT]++;
5711 +#endif
5712 +
5713 + do {
5714 + skb = pfe_eth_rx_skb(ndev, priv, qno);
5715 +
5716 + if (!skb)
5717 + break;
5718 +
5719 + len = skb->len;
5720 +
5721 + /* Packet will be processed */
5722 + skb->protocol = eth_type_trans(skb, ndev);
5723 +
5724 + netif_receive_skb(skb);
5725 +
5726 + priv->stats.rx_packets++;
5727 + priv->stats.rx_bytes += len;
5728 +
5729 + work_done++;
5730 +
5731 +#ifdef PFE_ETH_NAPI_STATS
5732 + priv->napi_counters[NAPI_PACKET_COUNT]++;
5733 +#endif
5734 +
5735 + } while (work_done < budget);
5736 +
5737 + /*
5738 + * If neither Rx nor cleanup work was done, exit polling mode.
5739 + * No netif_running(dev) check is required here, as this is
5740 + * checked in net/core/dev.c (2.6.33.5 kernel specific).
5741 + */
5742 + if (work_done < budget) {
5743 + napi_complete(napi);
5744 +
5745 + hif_lib_event_handler_start(&priv->client, EVENT_RX_PKT_IND,
5746 + qno);
5747 + }
5748 +#ifdef PFE_ETH_NAPI_STATS
5749 + else
5750 + priv->napi_counters[NAPI_FULL_BUDGET_COUNT]++;
5751 +#endif
5752 +
5753 + return work_done;
5754 +}
5755 +
5756 +/*
5757 + * pfe_eth_lro_poll
5758 + */
5759 +static int pfe_eth_lro_poll(struct napi_struct *napi, int budget)
5760 +{
5761 + struct pfe_eth_priv_s *priv = container_of(napi, struct pfe_eth_priv_s,
5762 + lro_napi);
5763 +
5764 + netif_info(priv, intr, priv->ndev, "%s\n", __func__);
5765 +
5766 + return pfe_eth_poll(priv, napi, 2, budget);
5767 +}
5768 +
5769 +/* pfe_eth_low_poll
5770 + */
5771 +static int pfe_eth_low_poll(struct napi_struct *napi, int budget)
5772 +{
5773 + struct pfe_eth_priv_s *priv = container_of(napi, struct pfe_eth_priv_s,
5774 + low_napi);
5775 +
5776 + netif_info(priv, intr, priv->ndev, "%s\n", __func__);
5777 +
5778 + return pfe_eth_poll(priv, napi, 1, budget);
5779 +}
5780 +
5781 +/* pfe_eth_high_poll
5782 + */
5783 +static int pfe_eth_high_poll(struct napi_struct *napi, int budget)
5784 +{
5785 + struct pfe_eth_priv_s *priv = container_of(napi, struct pfe_eth_priv_s,
5786 + high_napi);
5787 +
5788 + netif_info(priv, intr, priv->ndev, "%s\n", __func__);
5789 +
5790 + return pfe_eth_poll(priv, napi, 0, budget);
5791 +}
5792 +
5793 +static const struct net_device_ops pfe_netdev_ops = {
5794 + .ndo_open = pfe_eth_open,
5795 + .ndo_stop = pfe_eth_close,
5796 + .ndo_start_xmit = pfe_eth_send_packet,
5797 + .ndo_select_queue = pfe_eth_select_queue,
5798 + .ndo_set_rx_mode = pfe_eth_set_multi,
5799 + .ndo_set_mac_address = pfe_eth_set_mac_address,
5800 + .ndo_validate_addr = eth_validate_addr,
5801 + .ndo_change_mtu = pfe_eth_change_mtu,
5802 + .ndo_get_stats = pfe_eth_get_stats,
5803 + .ndo_set_features = pfe_eth_set_features,
5804 +};
5805 +
5806 +/* pfe_eth_init_one
5807 + */
5808 +static int pfe_eth_init_one(struct pfe *pfe,
5809 + struct ls1012a_pfe_platform_data *pfe_info,
5810 + int id)
5811 +{
5812 + struct net_device *ndev = NULL;
5813 + struct pfe_eth_priv_s *priv = NULL;
5814 + struct ls1012a_eth_platform_data *einfo;
5815 + int err;
5816 +
5817 + einfo = (struct ls1012a_eth_platform_data *)
5818 + pfe_info->ls1012a_eth_pdata;
5819 +
5820 + /* einfo should never be NULL, but there is no harm in checking */
5821 + if (!einfo) {
5822 + pr_err(
5823 + "%s: pfe missing additional gemacs platform data\n"
5824 + , __func__);
5825 + err = -ENODEV;
5826 + goto err0;
5827 + }
5828 +
5829 + if (us)
5830 + emac_txq_cnt = EMAC_TXQ_CNT;
5831 + /* Create an ethernet device instance */
5832 + ndev = alloc_etherdev_mq(sizeof(*priv), emac_txq_cnt);
5833 +
5834 + if (!ndev) {
5835 + pr_err("%s: gemac %d device allocation failed\n",
5836 + __func__, einfo[id].gem_id);
5837 + err = -ENOMEM;
5838 + goto err0;
5839 + }
5840 +
5841 + priv = netdev_priv(ndev);
5842 + priv->ndev = ndev;
5843 + priv->id = einfo[id].gem_id;
5844 + priv->pfe = pfe;
5845 + priv->phy_node = einfo[id].phy_node;
5846 +
5847 + SET_NETDEV_DEV(priv->ndev, priv->pfe->dev);
5848 +
5849 + pfe->eth.eth_priv[id] = priv;
5850 +
5851 + /* Set the info in the priv to the current info */
5852 + priv->einfo = &einfo[id];
5853 + priv->EMAC_baseaddr = cbus_emac_base[id];
5854 + priv->GPI_baseaddr = cbus_gpi_base[id];
5855 +
5856 + spin_lock_init(&priv->lock);
5857 +
5858 + pfe_eth_fast_tx_timeout_init(priv);
5859 +
5860 + /* Copy the station address into the dev structure */
5861 + memcpy(ndev->dev_addr, einfo[id].mac_addr, ETH_ALEN);
5862 +
5863 + if (us)
5864 + goto phy_init;
5865 +
5866 + ndev->mtu = 1500;
5867 +
5868 + /* Set MTU limits */
5869 + ndev->min_mtu = ETH_MIN_MTU;
5870 +
5871 +/*
5872 + * Jumbo frames are not supported on LS1012A rev 1.0, so the max MTU
5873 + * must be restricted to the supported frame length.
5874 + */
5875 + if (pfe_errata_a010897)
5876 + ndev->max_mtu = JUMBO_FRAME_SIZE_V1 - ETH_HLEN - ETH_FCS_LEN;
5877 + else
5878 + ndev->max_mtu = JUMBO_FRAME_SIZE_V2 - ETH_HLEN - ETH_FCS_LEN;
5879 +
5880 + /* Enable after checksum offload is validated */
5881 + ndev->hw_features = NETIF_F_RXCSUM | NETIF_F_IP_CSUM |
5882 + NETIF_F_IPV6_CSUM | NETIF_F_SG;
5883 +
5884 + /* enabled by default */
5885 + ndev->features = ndev->hw_features;
5886 +
5887 + priv->usr_features = ndev->features;
5888 +
5889 + ndev->netdev_ops = &pfe_netdev_ops;
5890 +
5891 + ndev->ethtool_ops = &pfe_ethtool_ops;
5892 +
5893 + /* Enable basic messages by default */
5894 + priv->msg_enable = NETIF_MSG_IFUP | NETIF_MSG_IFDOWN | NETIF_MSG_LINK |
5895 + NETIF_MSG_PROBE;
5896 +
5897 + netif_napi_add(ndev, &priv->low_napi, pfe_eth_low_poll,
5898 + HIF_RX_POLL_WEIGHT - 16);
5899 + netif_napi_add(ndev, &priv->high_napi, pfe_eth_high_poll,
5900 + HIF_RX_POLL_WEIGHT - 16);
5901 + netif_napi_add(ndev, &priv->lro_napi, pfe_eth_lro_poll,
5902 + HIF_RX_POLL_WEIGHT - 16);
5903 +
5904 + err = register_netdev(ndev);
5905 + if (err) {
5906 + netdev_err(ndev, "register_netdev() failed\n");
5907 + goto err1;
5908 + }
5909 +
5910 + if ((!(pfe_use_old_dts_phy) && !(priv->phy_node)) ||
5911 + ((pfe_use_old_dts_phy) &&
5912 + (priv->einfo->phy_flags & GEMAC_NO_PHY))) {
5913 + pr_info("%s: No PHY or fixed-link\n", __func__);
5914 + goto skip_phy_init;
5915 + }
5916 +
5917 +phy_init:
5918 + device_init_wakeup(&ndev->dev, WAKE_MAGIC);
5919 +
5920 + err = pfe_phy_init(ndev);
5921 + if (err) {
5922 + netdev_err(ndev, "%s: pfe_phy_init() failed\n",
5923 + __func__);
5924 + goto err2;
5925 + }
5926 +
5927 + if (us) {
5928 + if (priv->phydev)
5929 + phy_start(priv->phydev);
5930 + return 0;
5931 + }
5932 +
5933 + netif_carrier_on(ndev);
5934 +
5935 +skip_phy_init:
5936 + /* Create all the sysfs files */
5937 + if (pfe_eth_sysfs_init(ndev))
5938 + goto err3;
5939 +
5940 + netif_info(priv, probe, ndev, "%s: created interface, baseaddr: %p\n",
5941 + __func__, priv->EMAC_baseaddr);
5942 +
5943 + return 0;
5944 +
5945 +err3:
5946 + pfe_phy_exit(priv->ndev);
5947 +err2:
5948 + if (us)
5949 + goto err1;
5950 + unregister_netdev(ndev);
5951 +err1:
5952 + free_netdev(priv->ndev);
5953 +err0:
5954 + return err;
5955 +}
5956 +
5957 +/* pfe_eth_init
5958 + */
5959 +int pfe_eth_init(struct pfe *pfe)
5960 +{
5961 + int ii = 0;
5962 + int err;
5963 + struct ls1012a_pfe_platform_data *pfe_info;
5964 +
5965 + pr_info("%s\n", __func__);
5966 +
5967 + cbus_emac_base[0] = EMAC1_BASE_ADDR;
5968 + cbus_emac_base[1] = EMAC2_BASE_ADDR;
5969 +
5970 + cbus_gpi_base[0] = EGPI1_BASE_ADDR;
5971 + cbus_gpi_base[1] = EGPI2_BASE_ADDR;
5972 +
5973 + pfe_info = (struct ls1012a_pfe_platform_data *)
5974 + pfe->dev->platform_data;
5975 + if (!pfe_info) {
5976 + pr_err("%s: pfe missing additional platform data\n", __func__);
5977 + err = -ENODEV;
5978 + goto err_pdata;
5979 + }
5980 +
5981 + for (ii = 0; ii < NUM_GEMAC_SUPPORT; ii++) {
5982 + err = pfe_eth_mdio_init(pfe, pfe_info, ii);
5983 + if (err) {
5984 + pr_err("%s: pfe_eth_mdio_init() failed\n", __func__);
5985 + goto err_mdio_init;
5986 + }
5987 + }
5988 +
5989 + if (soc_device_match(ls1012a_rev1_soc_attr))
5990 + pfe_errata_a010897 = true;
5991 + else
5992 + pfe_errata_a010897 = false;
5993 +
5994 + for (ii = 0; ii < NUM_GEMAC_SUPPORT; ii++) {
5995 + err = pfe_eth_init_one(pfe, pfe_info, ii);
5996 + if (err)
5997 + goto err_eth_init;
5998 + }
5999 +
6000 + return 0;
6001 +
6002 +err_eth_init:
6003 + while (ii--) {
6004 + pfe_eth_exit_one(pfe->eth.eth_priv[ii]);
6005 + pfe_eth_mdio_exit(pfe, ii);
6006 + }
6007 +
6008 +err_mdio_init:
6009 +err_pdata:
6010 + return err;
6011 +}
6012 +
6013 +/* pfe_eth_exit_one
6014 + */
6015 +static void pfe_eth_exit_one(struct pfe_eth_priv_s *priv)
6016 +{
6017 + netif_info(priv, probe, priv->ndev, "%s\n", __func__);
6018 +
6019 + if (!us)
6020 + pfe_eth_sysfs_exit(priv->ndev);
6021 +
6022 + if ((!(pfe_use_old_dts_phy) && !(priv->phy_node)) ||
6023 + ((pfe_use_old_dts_phy) &&
6024 + (priv->einfo->phy_flags & GEMAC_NO_PHY))) {
6025 + pr_info("%s: No PHY or fixed-link\n", __func__);
6026 + goto skip_phy_exit;
6027 + }
6028 +
6029 + pfe_phy_exit(priv->ndev);
6030 +
6031 +skip_phy_exit:
6032 + if (!us)
6033 + unregister_netdev(priv->ndev);
6034 +
6035 + free_netdev(priv->ndev);
6036 +}
6037 +
6038 +/* pfe_eth_exit
6039 + */
6040 +void pfe_eth_exit(struct pfe *pfe)
6041 +{
6042 + int ii;
6043 +
6044 + pr_info("%s\n", __func__);
6045 +
6046 + for (ii = NUM_GEMAC_SUPPORT - 1; ii >= 0; ii--)
6047 + pfe_eth_exit_one(pfe->eth.eth_priv[ii]);
6048 +
6049 + for (ii = NUM_GEMAC_SUPPORT - 1; ii >= 0; ii--)
6050 + pfe_eth_mdio_exit(pfe, ii);
6051 +}
6052 --- /dev/null
6053 +++ b/drivers/staging/fsl_ppfe/pfe_eth.h
6054 @@ -0,0 +1,175 @@
6055 +/* SPDX-License-Identifier: GPL-2.0+ */
6056 +/*
6057 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
6058 + * Copyright 2017 NXP
6059 + */
6060 +
6061 +#ifndef _PFE_ETH_H_
6062 +#define _PFE_ETH_H_
6063 +#include <linux/kernel.h>
6064 +#include <linux/netdevice.h>
6065 +#include <linux/etherdevice.h>
6066 +#include <linux/ethtool.h>
6067 +#include <linux/mii.h>
6068 +#include <linux/phy.h>
6069 +#include <linux/clk.h>
6070 +#include <linux/interrupt.h>
6071 +#include <linux/time.h>
6072 +
6073 +#define PFE_ETH_NAPI_STATS
6074 +#define PFE_ETH_TX_STATS
6075 +
6076 +#define PFE_ETH_FRAGS_MAX (65536 / HIF_RX_PKT_MIN_SIZE)
6077 +#define LRO_LEN_COUNT_MAX 32
6078 +#define LRO_NB_COUNT_MAX 32
6079 +
6080 +#define PFE_PAUSE_FLAG_ENABLE 1
6081 +#define PFE_PAUSE_FLAG_AUTONEG 2
6082 +
6083 +/* GEMAC configured by SW */
6084 +/* GEMAC configured by phy lines (not for MII/GMII) */
6085 +
6086 +#define GEMAC_SW_FULL_DUPLEX BIT(9)
6087 +#define GEMAC_SW_SPEED_10M (0 << 12)
6088 +#define GEMAC_SW_SPEED_100M BIT(12)
6089 +#define GEMAC_SW_SPEED_1G (2 << 12)
6090 +
6091 +#define GEMAC_NO_PHY BIT(0)
6092 +
6093 +struct ls1012a_eth_platform_data {
6094 + /* board specific information */
6095 + phy_interface_t mii_config;
6096 + u32 phy_flags;
6097 + u32 gem_id;
6098 + u32 phy_id;
6099 + u32 mdio_muxval;
6100 + u8 mac_addr[ETH_ALEN];
6101 + struct device_node *phy_node;
6102 +};
6103 +
6104 +struct ls1012a_mdio_platform_data {
6105 + int id;
6106 + int irq[32];
6107 + u32 phy_mask;
6108 + int mdc_div;
6109 +};
6110 +
6111 +struct ls1012a_pfe_platform_data {
6112 + struct ls1012a_eth_platform_data ls1012a_eth_pdata[3];
6113 + struct ls1012a_mdio_platform_data ls1012a_mdio_pdata[3];
6114 +};
6115 +
6116 +#define NUM_GEMAC_SUPPORT 2
6117 +#define DRV_NAME "pfe-eth"
6118 +#define DRV_VERSION "1.0"
6119 +
6120 +#define LS1012A_TX_FAST_RECOVERY_TIMEOUT_MS 3
6121 +#define TX_POLL_TIMEOUT_MS 1000
6122 +
6123 +#define EMAC_TXQ_CNT 16
6124 +#define EMAC_TXQ_DEPTH (HIF_TX_DESC_NT)
6125 +
6126 +#define JUMBO_FRAME_SIZE_V1 1900
6127 +#define JUMBO_FRAME_SIZE_V2 10258
6128 +/*
6129 + * Client Tx queue threshold, for txQ flush condition.
6130 + * It must be smaller than the queue size (in case we ever change it in the
6131 + * future).
6132 + */
6133 +#define HIF_CL_TX_FLUSH_MARK 32
6134 +
6135 +/*
6136 + * Max number of TX resources (HIF descriptors or skbs) that will be released
6137 + * in a single go during batch recycling.
6138 + * Should be lower than the flush mark so the SW can provide the HW with a
6139 + * continuous stream of packets instead of bursts.
6140 + */
6141 +#define TX_FREE_MAX_COUNT 16
6142 +#define EMAC_RXQ_CNT 3
6143 +#define EMAC_RXQ_DEPTH HIF_RX_DESC_NT
6144 +/* make sure clients can receive a full burst of packets */
6145 +#define EMAC_RMON_TXBYTES_POS 0x00
6146 +#define EMAC_RMON_RXBYTES_POS 0x14
6147 +
6148 +#define EMAC_QUEUENUM_MASK (emac_txq_cnt - 1)
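+/* The mask above assumes emac_txq_cnt is a power of two. */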
6149 +#define EMAC_MDIO_TIMEOUT 1000
6150 +#define MAX_UC_SPEC_ADDR_REG 31
6151 +
6152 +struct pfe_eth_fast_timer {
6153 + int queuenum;
6154 + struct hrtimer timer;
6155 + void *base;
6156 +};
6157 +
6158 +struct pfe_eth_priv_s {
6159 + struct pfe *pfe;
6160 + struct hif_client_s client;
6161 + struct napi_struct lro_napi;
6162 + struct napi_struct low_napi;
6163 + struct napi_struct high_napi;
6164 + int low_tmu_q;
6165 + int high_tmu_q;
6166 + struct net_device_stats stats;
6167 + struct net_device *ndev;
6168 + int id;
6169 + int promisc;
6170 + unsigned int msg_enable;
6171 + unsigned int usr_features;
6172 +
6173 + spinlock_t lock; /* protect member variables */
6174 + unsigned int event_status;
6175 + int irq;
6176 + void *EMAC_baseaddr;
6177 + void *GPI_baseaddr;
6178 + /* PHY stuff */
6179 + struct phy_device *phydev;
6180 + int oldspeed;
6181 + int oldduplex;
6182 + int oldlink;
6183 + struct device_node *phy_node;
6184 + struct clk *gemtx_clk;
6185 + int wol;
6186 + int pause_flag;
6187 +
6188 + int default_priority;
6189 + struct pfe_eth_fast_timer fast_tx_timeout[EMAC_TXQ_CNT];
6190 +
6191 + struct ls1012a_eth_platform_data *einfo;
6192 + struct sk_buff *skb_inflight[EMAC_RXQ_CNT + 6];
6193 +
6194 +#ifdef PFE_ETH_TX_STATS
6195 + unsigned int stop_queue_total[EMAC_TXQ_CNT];
6196 + unsigned int stop_queue_hif[EMAC_TXQ_CNT];
6197 + unsigned int stop_queue_hif_client[EMAC_TXQ_CNT];
6198 + unsigned int stop_queue_credit[EMAC_TXQ_CNT];
6199 + unsigned int clean_fail[EMAC_TXQ_CNT];
6200 + unsigned int was_stopped[EMAC_TXQ_CNT];
6201 +#endif
6202 +
6203 +#ifdef PFE_ETH_NAPI_STATS
6204 + unsigned int napi_counters[NAPI_MAX_COUNT];
6205 +#endif
6206 + unsigned int frags_inflight[EMAC_RXQ_CNT + 6];
6207 +};
6208 +
6209 +struct pfe_eth {
6210 + struct pfe_eth_priv_s *eth_priv[3];
6211 +};
6212 +
6213 +struct pfe_mdio_priv_s {
6214 + void __iomem *mdio_base;
6215 + int mdc_div;
6216 + struct mii_bus *mii_bus;
6217 +};
6218 +
6219 +struct pfe_mdio {
6220 + struct pfe_mdio_priv_s *mdio_priv[3];
6221 +};
6222 +
6223 +int pfe_eth_init(struct pfe *pfe);
6224 +void pfe_eth_exit(struct pfe *pfe);
6225 +int pfe_eth_suspend(struct net_device *dev);
6226 +int pfe_eth_resume(struct net_device *dev);
6227 +int pfe_eth_mdio_reset(struct mii_bus *bus);
6228 +
6229 +#endif /* _PFE_ETH_H_ */
6230 --- /dev/null
6231 +++ b/drivers/staging/fsl_ppfe/pfe_firmware.c
6232 @@ -0,0 +1,398 @@
6233 +// SPDX-License-Identifier: GPL-2.0+
6234 +/*
6235 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
6236 + * Copyright 2017 NXP
6237 + */
6238 +
6239 +/*
6240 + * @file
6241 + * Contains all the functions to handle parsing and loading of PE firmware
6242 + * files.
6243 + */
6244 +#include <linux/firmware.h>
6245 +
6246 +#include "pfe_mod.h"
6247 +#include "pfe_firmware.h"
6248 +#include "pfe/pfe.h"
6249 +#include <linux/of_platform.h>
6250 +#include <linux/of_address.h>
6251 +
6252 +static struct elf32_shdr *get_elf_section_header(const u8 *fw,
6253 + const char *section)
6254 +{
6255 + struct elf32_hdr *elf_hdr = (struct elf32_hdr *)fw;
6256 + struct elf32_shdr *shdr;
6257 + struct elf32_shdr *shdr_shstr;
6258 + Elf32_Off e_shoff = be32_to_cpu(elf_hdr->e_shoff);
6259 + Elf32_Half e_shentsize = be16_to_cpu(elf_hdr->e_shentsize);
6260 + Elf32_Half e_shnum = be16_to_cpu(elf_hdr->e_shnum);
6261 + Elf32_Half e_shstrndx = be16_to_cpu(elf_hdr->e_shstrndx);
6262 + Elf32_Off shstr_offset;
6263 + Elf32_Word sh_name;
6264 + const char *name;
6265 + int i;
6266 +
6267 + /* Section header strings */
6268 + shdr_shstr = (struct elf32_shdr *)((u8 *)elf_hdr + e_shoff + e_shstrndx
6269 + * e_shentsize);
6270 + shstr_offset = be32_to_cpu(shdr_shstr->sh_offset);
6271 +
6272 + for (i = 0; i < e_shnum; i++) {
6273 + shdr = (struct elf32_shdr *)((u8 *)elf_hdr + e_shoff
6274 + + i * e_shentsize);
6275 +
6276 + sh_name = be32_to_cpu(shdr->sh_name);
6277 +
6278 + name = (const char *)((u8 *)elf_hdr + shstr_offset + sh_name);
6279 +
6280 + if (!strcmp(name, section))
6281 + return shdr;
6282 + }
6283 +
6284 + pr_err("%s: didn't find section %s\n", __func__, section);
6285 +
6286 + return NULL;
6287 +}
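+
+/*
+ * Usage sketch (illustrative): callers look sections up by name, e.g.
+ * get_elf_section_header(fw, ".version"), then read the fields they
+ * need with be32_to_cpu()/be16_to_cpu(); the firmware images are
+ * big-endian (pfe_load_elf() rejects anything but ELFDATA2MSB).
+ */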
6288 +
6289 +#if defined(CFG_DIAGS)
6290 +static int pfe_get_diags_info(const u8 *fw, struct pfe_diags_info
6291 + *diags_info)
6292 +{
6293 + struct elf32_shdr *shdr;
6294 + unsigned long offset, size;
6295 +
6296 + shdr = get_elf_section_header(fw, ".pfe_diags_str");
6297 + if (shdr) {
6298 + offset = be32_to_cpu(shdr->sh_offset);
6299 + size = be32_to_cpu(shdr->sh_size);
6300 + diags_info->diags_str_base = be32_to_cpu(shdr->sh_addr);
6301 + diags_info->diags_str_size = size;
6302 + diags_info->diags_str_array = kmalloc(size, GFP_KERNEL);
6303 + memcpy(diags_info->diags_str_array, fw + offset, size);
6304 +
6305 + return 0;
6306 + } else {
6307 + return -1;
6308 + }
6309 +}
6310 +#endif
6311 +
6312 +static void pfe_check_version_info(const u8 *fw)
6313 +{
6315 + const u8 *elf_data = fw;
6316 + static char *version;
6317 +
6318 + struct elf32_shdr *shdr = get_elf_section_header(fw, ".version");
6319 +
6320 + if (shdr) {
6321 + if (!version) {
6322 + /*
6323 + * this is the first fw we load, use its version
6324 + * string as reference (whatever it is)
6325 + */
6326 + version = (char *)(elf_data +
6327 + be32_to_cpu(shdr->sh_offset));
6328 +
6329 + pr_info("PFE binary version: %s\n", version);
6330 + } else {
6331 + /*
6332 + * at least one firmware has already been loaded, so the
6333 + * version check can be performed now
6334 + */
6335 + if (strcmp(version, (char *)(elf_data +
6336 + be32_to_cpu(shdr->sh_offset)))) {
6337 + pr_info(
6338 + "WARNING: PFE firmware binaries from incompatible version\n");
6339 + }
6340 + }
6341 + } else {
6342 + /*
6343 + * version cannot be verified, a potential issue that should
6344 + * be reported
6345 + */
6346 + pr_info(
6347 + "WARNING: PFE firmware binaries from incompatible version\n");
6348 + }
6349 +}
6350 +
6351 +/* PFE elf firmware loader.
6352 + * Loads an elf firmware image into a list of PEs (specified using a bitmask)
6353 + *
6354 + * @param pe_mask Mask of PE id's to load firmware to
6355 + * @param fw Pointer to the firmware image
6356 + *
6357 + * @return 0 on success, a negative value on error
6358 + *
6359 + */
6360 +int pfe_load_elf(int pe_mask, const u8 *fw, struct pfe *pfe)
6361 +{
6362 + struct elf32_hdr *elf_hdr = (struct elf32_hdr *)fw;
6363 + Elf32_Half sections = be16_to_cpu(elf_hdr->e_shnum);
6364 + struct elf32_shdr *shdr = (struct elf32_shdr *)(fw +
6365 + be32_to_cpu(elf_hdr->e_shoff));
6366 + int id, section;
6367 + int rc;
6368 +
6369 + pr_info("%s\n", __func__);
6370 +
6371 + /* Some sanity checks */
6372 + if (strncmp(&elf_hdr->e_ident[EI_MAG0], ELFMAG, SELFMAG)) {
6373 + pr_err("%s: incorrect elf magic number\n", __func__);
6374 + return -EINVAL;
6375 + }
6376 +
6377 + if (elf_hdr->e_ident[EI_CLASS] != ELFCLASS32) {
6378 + pr_err("%s: incorrect elf class(%x)\n", __func__,
6379 + elf_hdr->e_ident[EI_CLASS]);
6380 + return -EINVAL;
6381 + }
6382 +
6383 + if (elf_hdr->e_ident[EI_DATA] != ELFDATA2MSB) {
6384 + pr_err("%s: incorrect elf data(%x)\n", __func__,
6385 + elf_hdr->e_ident[EI_DATA]);
6386 + return -EINVAL;
6387 + }
6388 +
6389 + if (be16_to_cpu(elf_hdr->e_type) != ET_EXEC) {
6390 + pr_err("%s: incorrect elf file type(%x)\n", __func__,
6391 + be16_to_cpu(elf_hdr->e_type));
6392 + return -EINVAL;
6393 + }
6394 +
6395 + for (section = 0; section < sections; section++, shdr++) {
6396 + if (!(be32_to_cpu(shdr->sh_flags) & (SHF_WRITE | SHF_ALLOC |
6397 + SHF_EXECINSTR)))
6398 + continue;
6399 +
6400 + for (id = 0; id < MAX_PE; id++)
6401 + if (pe_mask & (1 << id)) {
6402 + rc = pe_load_elf_section(id, elf_hdr, shdr,
6403 + pfe->dev);
6404 + if (rc < 0)
6405 + goto err;
6406 + }
6407 + }
6408 +
6409 + pfe_check_version_info(fw);
6410 +
6411 + return 0;
6412 +
6413 +err:
6414 + return rc;
6415 +}
6416 +
6417 +int get_firmware_in_fdt(const u8 **pe_fw, const char *name)
6418 +{
6419 + struct device_node *np;
6420 + const unsigned int *len;
6421 + const void *data;
6422 +
6423 + if (!strcmp(name, CLASS_FIRMWARE_FILENAME)) {
6424 + /* The firmware should be inside the device tree. */
6425 + np = of_find_compatible_node(NULL, NULL,
6426 + "fsl,pfe-class-firmware");
6427 + if (!np) {
6428 + pr_info("Failed to find the node\n");
6429 + return -ENOENT;
6430 + }
6431 +
6432 + data = of_get_property(np, "fsl,class-firmware", NULL);
6433 + if (data) {
6434 + len = of_get_property(np, "length", NULL);
6435 + pr_info("CLASS fw of length %d bytes loaded from FDT.\n",
6436 + be32_to_cpu(*len));
6437 + } else {
6438 + pr_info("fsl,class-firmware not found\n");
6439 + return -ENOENT;
6440 + }
6441 + of_node_put(np);
6442 + *pe_fw = data;
6443 + } else if (!strcmp(name, TMU_FIRMWARE_FILENAME)) {
6444 + np = of_find_compatible_node(NULL, NULL,
6445 + "fsl,pfe-tmu-firmware");
6446 + if (!np) {
6447 + pr_info("Failed to find the node\n");
6448 + return -ENOENT;
6449 + }
6450 +
6451 + data = of_get_property(np, "fsl,tmu-firmware", NULL);
6452 + if (data) {
6453 + len = of_get_property(np, "length", NULL);
6454 + pr_info("TMU fw of length %d bytes loaded from FDT.\n",
6455 + be32_to_cpu(*len));
6456 + } else {
6457 + pr_info("fsl,tmu-firmware not found\n");
6458 + return -ENOENT;
6459 + }
6460 + of_node_put(np);
6461 + *pe_fw = data;
6462 + } else if (!strcmp(name, UTIL_FIRMWARE_FILENAME)) {
6463 + np = of_find_compatible_node(NULL, NULL,
6464 + "fsl,pfe-util-firmware");
6465 + if (!np) {
6466 + pr_info("Failed to find the node\n");
6467 + return -ENOENT;
6468 + }
6469 +
6470 + data = of_get_property(np, "fsl,util-firmware", NULL);
6471 + if (data) {
6472 + len = of_get_property(np, "length", NULL);
6473 + pr_info("UTIL fw of length %d bytes loaded from FDT.\n",
6474 + be32_to_cpu(*len));
6475 + } else {
6476 + pr_info("fsl,util-firmware not found\n");
6477 + return -ENOENT;
6478 + }
6479 + of_node_put(np);
6480 + *pe_fw = data;
6481 + } else {
6482 + pr_err("firmware:%s not known\n", name);
6483 + return -EINVAL;
6484 + }
6485 +
6486 + return 0;
6487 +}
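+
+/*
+ * Hypothetical device tree fragment for the class-firmware lookup above
+ * (the compatible string and property names come from the code; the
+ * node name and length value are illustrative):
+ *
+ *	pfe-class-firmware {
+ *		compatible = "fsl,pfe-class-firmware";
+ *		length = <0x1a000>;
+ *		fsl,class-firmware = [7f 45 4c 46];
+ *	};
+ *
+ * where fsl,class-firmware would carry the full ELF image (the four
+ * bytes shown are just the ELF magic).
+ */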
6488 +
6489 +/* PFE firmware initialization.
6490 + * Loads the firmware files from the FDT or the filesystem.
6491 + * Initializes PE IMEM/DMEM and UTIL-PE DDR.
6492 + * Initializes control path symbol addresses (by looking them up in the elf
6493 + * firmware files).
6494 + * Takes the PEs out of reset.
6495 + *
6496 + * @return 0 on success, a negative value on error
6497 + *
6498 + */
6499 +int pfe_firmware_init(struct pfe *pfe)
6500 +{
6501 + const struct firmware *class_fw, *tmu_fw;
6502 + const u8 *class_elf_fw, *tmu_elf_fw;
6503 + int rc = 0, fs_load = 0;
6504 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
6505 + const struct firmware *util_fw;
6506 + const u8 *util_elf_fw;
6507 +
6508 +#endif
6509 +
6510 + pr_info("%s\n", __func__);
6511 +
6512 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
6513 + if (get_firmware_in_fdt(&class_elf_fw, CLASS_FIRMWARE_FILENAME) ||
6514 + get_firmware_in_fdt(&tmu_elf_fw, TMU_FIRMWARE_FILENAME) ||
6515 + get_firmware_in_fdt(&util_elf_fw, UTIL_FIRMWARE_FILENAME))
6516 +#else
6517 + if (get_firmware_in_fdt(&class_elf_fw, CLASS_FIRMWARE_FILENAME) ||
6518 + get_firmware_in_fdt(&tmu_elf_fw, TMU_FIRMWARE_FILENAME))
6519 +#endif
6520 + {
6521 + pr_info("%s: PFE firmware not found in FDT.\n", __func__);
6522 + pr_info("%s: trying to load firmware from the filesystem\n", __func__);
6523 +
6524 + /* look for the firmware in the filesystem */
6525 + fs_load = 1;
6526 + if (request_firmware(&class_fw, CLASS_FIRMWARE_FILENAME, pfe->dev)) {
6527 + pr_err("%s: request firmware %s failed\n", __func__,
6528 + CLASS_FIRMWARE_FILENAME);
6529 + rc = -ETIMEDOUT;
6530 + goto err0;
6531 + }
6532 + class_elf_fw = class_fw->data;
6533 +
6534 + if (request_firmware(&tmu_fw, TMU_FIRMWARE_FILENAME, pfe->dev)) {
6535 + pr_err("%s: request firmware %s failed\n", __func__,
6536 + TMU_FIRMWARE_FILENAME);
6537 + rc = -ETIMEDOUT;
6538 + goto err1;
6539 + }
6540 + tmu_elf_fw = tmu_fw->data;
6541 +
6542 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
6543 + if (request_firmware(&util_fw, UTIL_FIRMWARE_FILENAME, pfe->dev)) {
6544 + pr_err("%s: request firmware %s failed\n", __func__,
6545 + UTIL_FIRMWARE_FILENAME);
6546 + rc = -ETIMEDOUT;
6547 + goto err2;
6548 + }
6549 + util_elf_fw = util_fw->data;
6550 +#endif
6551 + }
6552 +
6553 + rc = pfe_load_elf(CLASS_MASK, class_elf_fw, pfe);
6554 + if (rc < 0) {
6555 + pr_err("%s: class firmware load failed\n", __func__);
6556 + goto err3;
6557 + }
6558 +
6559 +#if defined(CFG_DIAGS)
6560 + rc = pfe_get_diags_info(class_elf_fw, &pfe->diags.class_diags_info);
6561 + if (rc < 0) {
6562 + pr_warn(
6563 + "PFE diags won't be available for class PEs\n");
6564 + rc = 0;
6565 + }
6566 +#endif
6567 +
6568 + rc = pfe_load_elf(TMU_MASK, tmu_elf_fw, pfe);
6569 + if (rc < 0) {
6570 + pr_err("%s: tmu firmware load failed\n", __func__);
6571 + goto err3;
6572 + }
6573 +
6574 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
6575 + rc = pfe_load_elf(UTIL_MASK, util_elf_fw, pfe);
6576 + if (rc < 0) {
6577 + pr_err("%s: util firmware load failed\n", __func__);
6578 + goto err3;
6579 + }
6580 +
6581 +#if defined(CFG_DIAGS)
6582 + rc = pfe_get_diags_info(util_elf_fw, &pfe->diags.util_diags_info);
6583 + if (rc < 0) {
6584 + pr_warn(
6585 + "PFE diags won't be available for util PE\n");
6586 + rc = 0;
6587 + }
6588 +#endif
6589 +
6590 + util_enable();
6591 +#endif
6592 +
6593 + tmu_enable(0xf);
6594 + class_enable();
6595 +
6596 +err3:
6597 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
6598 + if (fs_load)
6599 + release_firmware(util_fw);
6600 +err2:
6601 +#endif
6602 + if (fs_load)
6603 + release_firmware(tmu_fw);
6604 +
6605 +err1:
6606 + if (fs_load)
6607 + release_firmware(class_fw);
6608 +
6609 +err0:
6610 + return rc;
6611 +}
6612 +
6613 +/* PFE firmware cleanup
6614 + * Puts the PEs in reset
6617 + */
6618 +void pfe_firmware_exit(struct pfe *pfe)
6619 +{
6620 + pr_info("%s\n", __func__);
6621 +
6622 + if (pe_reset_all(&pfe->ctrl) != 0)
6623 + pr_err("Error: Failed to stop PEs, PFE reload may not work correctly\n");
6624 +
6625 + class_disable();
6626 + tmu_disable(0xf);
6627 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
6628 + util_disable();
6629 +#endif
6630 +}
6631 --- /dev/null
6632 +++ b/drivers/staging/fsl_ppfe/pfe_firmware.h
6633 @@ -0,0 +1,21 @@
6634 +/* SPDX-License-Identifier: GPL-2.0+ */
6635 +/*
6636 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
6637 + * Copyright 2017 NXP
6638 + */
6639 +
6640 +#ifndef _PFE_FIRMWARE_H_
6641 +#define _PFE_FIRMWARE_H_
6642 +
6643 +#define CLASS_FIRMWARE_FILENAME "ppfe_class_ls1012a.elf"
6644 +#define TMU_FIRMWARE_FILENAME "ppfe_tmu_ls1012a.elf"
6645 +#define UTIL_FIRMWARE_FILENAME "ppfe_util_ls1012a.elf"
6646 +
6647 +#define PFE_FW_CHECK_PASS 0
6648 +#define PFE_FW_CHECK_FAIL 1
6649 +#define NUM_PFE_FW 3
6650 +
6651 +int pfe_firmware_init(struct pfe *pfe);
6652 +void pfe_firmware_exit(struct pfe *pfe);
6653 +
6654 +#endif /* _PFE_FIRMWARE_H_ */
6655 --- /dev/null
6656 +++ b/drivers/staging/fsl_ppfe/pfe_hal.c
6657 @@ -0,0 +1,1517 @@
6658 +// SPDX-License-Identifier: GPL-2.0+
6659 +/*
6660 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
6661 + * Copyright 2017 NXP
6662 + */
6663 +
6664 +#include "pfe_mod.h"
6665 +#include "pfe/pfe.h"
6666 +
6667 +/* A-010897: Jumbo frame is not supported */
6668 +extern bool pfe_errata_a010897;
6669 +
6670 +#define PFE_RCR_MAX_FL_MASK 0xC000FFFF
6671 +
6672 +void *cbus_base_addr;
6673 +void *ddr_base_addr;
6674 +unsigned long ddr_phys_base_addr;
6675 +unsigned int ddr_size;
6676 +
6677 +static struct pe_info pe[MAX_PE];
6678 +
6679 +/* Initializes the PFE library.
6680 + * Must be called before using any of the library functions.
6681 + *
6682 + * @param[in] cbus_base CBUS virtual base address (as mapped in
6683 + * the host CPU address space)
6684 + * @param[in] ddr_base PFE DDR range virtual base address (as
6685 + * mapped in the host CPU address space)
6686 + * @param[in] ddr_phys_base PFE DDR range physical base address (as
6687 + * mapped in platform)
6688 + * @param[in] size PFE DDR range size (as defined by the host
6689 + * software)
6690 + */
6691 +void pfe_lib_init(void *cbus_base, void *ddr_base, unsigned long ddr_phys_base,
6692 + unsigned int size)
6693 +{
6694 + cbus_base_addr = cbus_base;
6695 + ddr_base_addr = ddr_base;
6696 + ddr_phys_base_addr = ddr_phys_base;
6697 + ddr_size = size;
6698 +
6699 + pe[CLASS0_ID].dmem_base_addr = CLASS_DMEM_BASE_ADDR(0);
6700 + pe[CLASS0_ID].pmem_base_addr = CLASS_IMEM_BASE_ADDR(0);
6701 + pe[CLASS0_ID].pmem_size = CLASS_IMEM_SIZE;
6702 + pe[CLASS0_ID].mem_access_wdata = CLASS_MEM_ACCESS_WDATA;
6703 + pe[CLASS0_ID].mem_access_addr = CLASS_MEM_ACCESS_ADDR;
6704 + pe[CLASS0_ID].mem_access_rdata = CLASS_MEM_ACCESS_RDATA;
6705 +
6706 + pe[CLASS1_ID].dmem_base_addr = CLASS_DMEM_BASE_ADDR(1);
6707 + pe[CLASS1_ID].pmem_base_addr = CLASS_IMEM_BASE_ADDR(1);
6708 + pe[CLASS1_ID].pmem_size = CLASS_IMEM_SIZE;
6709 + pe[CLASS1_ID].mem_access_wdata = CLASS_MEM_ACCESS_WDATA;
6710 + pe[CLASS1_ID].mem_access_addr = CLASS_MEM_ACCESS_ADDR;
6711 + pe[CLASS1_ID].mem_access_rdata = CLASS_MEM_ACCESS_RDATA;
6712 +
6713 + pe[CLASS2_ID].dmem_base_addr = CLASS_DMEM_BASE_ADDR(2);
6714 + pe[CLASS2_ID].pmem_base_addr = CLASS_IMEM_BASE_ADDR(2);
6715 + pe[CLASS2_ID].pmem_size = CLASS_IMEM_SIZE;
6716 + pe[CLASS2_ID].mem_access_wdata = CLASS_MEM_ACCESS_WDATA;
6717 + pe[CLASS2_ID].mem_access_addr = CLASS_MEM_ACCESS_ADDR;
6718 + pe[CLASS2_ID].mem_access_rdata = CLASS_MEM_ACCESS_RDATA;
6719 +
6720 + pe[CLASS3_ID].dmem_base_addr = CLASS_DMEM_BASE_ADDR(3);
6721 + pe[CLASS3_ID].pmem_base_addr = CLASS_IMEM_BASE_ADDR(3);
6722 + pe[CLASS3_ID].pmem_size = CLASS_IMEM_SIZE;
6723 + pe[CLASS3_ID].mem_access_wdata = CLASS_MEM_ACCESS_WDATA;
6724 + pe[CLASS3_ID].mem_access_addr = CLASS_MEM_ACCESS_ADDR;
6725 + pe[CLASS3_ID].mem_access_rdata = CLASS_MEM_ACCESS_RDATA;
6726 +
6727 + pe[CLASS4_ID].dmem_base_addr = CLASS_DMEM_BASE_ADDR(4);
6728 + pe[CLASS4_ID].pmem_base_addr = CLASS_IMEM_BASE_ADDR(4);
6729 + pe[CLASS4_ID].pmem_size = CLASS_IMEM_SIZE;
6730 + pe[CLASS4_ID].mem_access_wdata = CLASS_MEM_ACCESS_WDATA;
6731 + pe[CLASS4_ID].mem_access_addr = CLASS_MEM_ACCESS_ADDR;
6732 + pe[CLASS4_ID].mem_access_rdata = CLASS_MEM_ACCESS_RDATA;
6733 +
6734 + pe[CLASS5_ID].dmem_base_addr = CLASS_DMEM_BASE_ADDR(5);
6735 + pe[CLASS5_ID].pmem_base_addr = CLASS_IMEM_BASE_ADDR(5);
6736 + pe[CLASS5_ID].pmem_size = CLASS_IMEM_SIZE;
6737 + pe[CLASS5_ID].mem_access_wdata = CLASS_MEM_ACCESS_WDATA;
6738 + pe[CLASS5_ID].mem_access_addr = CLASS_MEM_ACCESS_ADDR;
6739 + pe[CLASS5_ID].mem_access_rdata = CLASS_MEM_ACCESS_RDATA;
6740 +
6741 + pe[TMU0_ID].dmem_base_addr = TMU_DMEM_BASE_ADDR(0);
6742 + pe[TMU0_ID].pmem_base_addr = TMU_IMEM_BASE_ADDR(0);
6743 + pe[TMU0_ID].pmem_size = TMU_IMEM_SIZE;
6744 + pe[TMU0_ID].mem_access_wdata = TMU_MEM_ACCESS_WDATA;
6745 + pe[TMU0_ID].mem_access_addr = TMU_MEM_ACCESS_ADDR;
6746 + pe[TMU0_ID].mem_access_rdata = TMU_MEM_ACCESS_RDATA;
6747 +
6748 + pe[TMU1_ID].dmem_base_addr = TMU_DMEM_BASE_ADDR(1);
6749 + pe[TMU1_ID].pmem_base_addr = TMU_IMEM_BASE_ADDR(1);
6750 + pe[TMU1_ID].pmem_size = TMU_IMEM_SIZE;
6751 + pe[TMU1_ID].mem_access_wdata = TMU_MEM_ACCESS_WDATA;
6752 + pe[TMU1_ID].mem_access_addr = TMU_MEM_ACCESS_ADDR;
6753 + pe[TMU1_ID].mem_access_rdata = TMU_MEM_ACCESS_RDATA;
6754 +
6755 + pe[TMU3_ID].dmem_base_addr = TMU_DMEM_BASE_ADDR(3);
6756 + pe[TMU3_ID].pmem_base_addr = TMU_IMEM_BASE_ADDR(3);
6757 + pe[TMU3_ID].pmem_size = TMU_IMEM_SIZE;
6758 + pe[TMU3_ID].mem_access_wdata = TMU_MEM_ACCESS_WDATA;
6759 + pe[TMU3_ID].mem_access_addr = TMU_MEM_ACCESS_ADDR;
6760 + pe[TMU3_ID].mem_access_rdata = TMU_MEM_ACCESS_RDATA;
6761 +
6762 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
6763 + pe[UTIL_ID].dmem_base_addr = UTIL_DMEM_BASE_ADDR;
6764 + pe[UTIL_ID].mem_access_wdata = UTIL_MEM_ACCESS_WDATA;
6765 + pe[UTIL_ID].mem_access_addr = UTIL_MEM_ACCESS_ADDR;
6766 + pe[UTIL_ID].mem_access_rdata = UTIL_MEM_ACCESS_RDATA;
6767 +#endif
6768 +}
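/*
 * Editor's sketch (illustrative, not part of the patch): the six CLASS
 * PE entries above differ only in the DMEM/IMEM base macro argument, so
 * the same table could be filled in a loop, assuming CLASS0_ID..CLASS5_ID
 * are consecutive integers as the naming suggests.
 */
static void pfe_lib_init_class_pes(void)
{
	int i;

	for (i = 0; i <= 5; i++) {
		pe[CLASS0_ID + i].dmem_base_addr = CLASS_DMEM_BASE_ADDR(i);
		pe[CLASS0_ID + i].pmem_base_addr = CLASS_IMEM_BASE_ADDR(i);
		pe[CLASS0_ID + i].pmem_size = CLASS_IMEM_SIZE;
		pe[CLASS0_ID + i].mem_access_wdata = CLASS_MEM_ACCESS_WDATA;
		pe[CLASS0_ID + i].mem_access_addr = CLASS_MEM_ACCESS_ADDR;
		pe[CLASS0_ID + i].mem_access_rdata = CLASS_MEM_ACCESS_RDATA;
	}
}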
6769 +
6770 +/* Writes a buffer to PE internal memory from the host
6771 + * through indirect access registers.
6772 + *
6773 + * @param[in] id PE identification (CLASS0_ID, ..., TMU0_ID,
6774 + * ..., UTIL_ID)
6775 + * @param[in] src Buffer source address
6776 + * @param[in] mem_access_addr DMEM destination address (must be 32bit
6777 + * aligned)
6778 + * @param[in] len Number of bytes to copy
6779 + */
6780 +void pe_mem_memcpy_to32(int id, u32 mem_access_addr, const void *src,
6781 +			unsigned int len)
6782 +{
6783 + u32 offset = 0, val, addr;
6784 + unsigned int len32 = len >> 2;
6785 + int i;
6786 +
6787 + addr = mem_access_addr | PE_MEM_ACCESS_WRITE |
6788 + PE_MEM_ACCESS_BYTE_ENABLE(0, 4);
6789 +
6790 + for (i = 0; i < len32; i++, offset += 4, src += 4) {
6791 + val = *(u32 *)src;
6792 + writel(cpu_to_be32(val), pe[id].mem_access_wdata);
6793 + writel(addr + offset, pe[id].mem_access_addr);
6794 + }
6795 +
6796 + len = (len & 0x3);
6797 + if (len) {
6798 + val = 0;
6799 +
6800 + addr = (mem_access_addr | PE_MEM_ACCESS_WRITE |
6801 + PE_MEM_ACCESS_BYTE_ENABLE(0, len)) + offset;
6802 +
6803 + for (i = 0; i < len; i++, src++)
6804 + val |= (*(u8 *)src) << (8 * i);
6805 +
6806 + writel(cpu_to_be32(val), pe[id].mem_access_wdata);
6807 + writel(addr, pe[id].mem_access_addr);
6808 + }
6809 +}
6810 +
6811 +/* Writes a buffer to PE internal data memory (DMEM) from the host
6812 + * through indirect access registers.
6813 + * @param[in] id PE identification (CLASS0_ID, ..., TMU0_ID,
6814 + * ..., UTIL_ID)
6815 + * @param[in] src Buffer source address
6816 + * @param[in] dst DMEM destination address (must be 32bit
6817 + * aligned)
6818 + * @param[in] len Number of bytes to copy
6819 + */
6820 +void pe_dmem_memcpy_to32(int id, u32 dst, const void *src, unsigned int len)
6821 +{
6822 + pe_mem_memcpy_to32(id, pe[id].dmem_base_addr | dst |
6823 + PE_MEM_ACCESS_DMEM, src, len);
6824 +}
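/*
 * Usage sketch (illustrative): pe_mem_memcpy_to32() above copies whole
 * 32-bit words and then finishes any 1-3 trailing bytes with a partial
 * byte-enable write, so a caller only needs a 32-bit aligned DMEM
 * destination, not a 32-bit multiple length. The 0x100 offset is a
 * made-up example value.
 */
static void pe_dmem_copy_example(void)
{
	u8 blob[10] = { 0 };	/* two full words plus a 2-byte tail */

	pe_dmem_memcpy_to32(CLASS0_ID, 0x100, blob, sizeof(blob));
}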
6825 +
6826 +/* Writes a buffer to PE internal program memory (PMEM) from the host
6827 + * through indirect access registers.
6828 + * @param[in] id PE identification (CLASS0_ID, ..., TMU0_ID,
6829 + * ..., TMU3_ID)
6830 + * @param[in] src Buffer source address
6831 + * @param[in] dst PMEM destination address (must be 32bit
6832 + * aligned)
6833 + * @param[in] len Number of bytes to copy
6834 + */
6835 +void pe_pmem_memcpy_to32(int id, u32 dst, const void *src, unsigned int len)
6836 +{
6837 +	pe_mem_memcpy_to32(id, pe[id].pmem_base_addr |
6838 +		(dst & (pe[id].pmem_size - 1)) | PE_MEM_ACCESS_IMEM, src, len);
6839 +}
6840 +
6841 +/* Reads PE internal program memory (IMEM) from the host
6842 + * through indirect access registers.
6843 + * @param[in] id PE identification (CLASS0_ID, ..., TMU0_ID,
6844 + * ..., TMU3_ID)
6845 + * @param[in] addr PMEM read address (must be aligned on size)
6846 + * @param[in] size Number of bytes to read (maximum 4, must not
6847 + * cross 32bit boundaries)
6848 + * @return the data read (in PE endianness, i.e. BE).
6849 + */
6850 +u32 pe_pmem_read(int id, u32 addr, u8 size)
6851 +{
6852 + u32 offset = addr & 0x3;
6853 + u32 mask = 0xffffffff >> ((4 - size) << 3);
6854 + u32 val;
6855 +
6856 + addr = pe[id].pmem_base_addr | ((addr & ~0x3) & (pe[id].pmem_size - 1))
6857 + | PE_MEM_ACCESS_IMEM | PE_MEM_ACCESS_BYTE_ENABLE(offset, size);
6858 +
6859 + writel(addr, pe[id].mem_access_addr);
6860 + val = be32_to_cpu(readl(pe[id].mem_access_rdata));
6861 +
6862 + return (val >> (offset << 3)) & mask;
6863 +}
6864 +
6865 +/* Writes PE internal data memory (DMEM) from the host
6866 + * through indirect access registers.
6867 + * @param[in] id PE identification (CLASS0_ID, ..., TMU0_ID,
6868 + * ..., UTIL_ID)
6869 + * @param[in] addr DMEM write address (must be aligned on size)
6870 + * @param[in] val Value to write (in PE endianness, i.e. BE)
6871 + * @param[in] size Number of bytes to write (maximum 4, must not
6872 + * cross 32bit boundaries)
6873 + */
6874 +void pe_dmem_write(int id, u32 val, u32 addr, u8 size)
6875 +{
6876 + u32 offset = addr & 0x3;
6877 +
6878 + addr = pe[id].dmem_base_addr | (addr & ~0x3) | PE_MEM_ACCESS_WRITE |
6879 + PE_MEM_ACCESS_DMEM | PE_MEM_ACCESS_BYTE_ENABLE(offset, size);
6880 +
6881 +	/* The indirect access interface byte-swaps the data being written */
6882 + writel(cpu_to_be32(val << (offset << 3)), pe[id].mem_access_wdata);
6883 + writel(addr, pe[id].mem_access_addr);
6884 +}
6885 +
6886 +/* Reads PE internal data memory (DMEM) from the host
6887 + * through indirect access registers.
6888 + * @param[in] id PE identification (CLASS0_ID, ..., TMU0_ID,
6889 + * ..., UTIL_ID)
6890 + * @param[in] addr DMEM read address (must be aligned on size)
6891 + * @param[in] size Number of bytes to read (maximum 4, must not
6892 + * cross 32bit boundaries)
6893 + * @return the data read (in PE endianness, i.e. BE).
6894 + */
6895 +u32 pe_dmem_read(int id, u32 addr, u8 size)
6896 +{
6897 + u32 offset = addr & 0x3;
6898 + u32 mask = 0xffffffff >> ((4 - size) << 3);
6899 + u32 val;
6900 +
6901 + addr = pe[id].dmem_base_addr | (addr & ~0x3) | PE_MEM_ACCESS_DMEM |
6902 + PE_MEM_ACCESS_BYTE_ENABLE(offset, size);
6903 +
6904 + writel(addr, pe[id].mem_access_addr);
6905 +
6906 +	/* The indirect access interface byte-swaps the data being read */
6907 + val = be32_to_cpu(readl(pe[id].mem_access_rdata));
6908 +
6909 + return (val >> (offset << 3)) & mask;
6910 +}
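/*
 * Usage sketch (illustrative): a read-modify-write of one DMEM word
 * through the pair above. Per the comments, both sides carry the value
 * in PE endianness (BE), so the modification happens after a swap to
 * host endianness. Offset 0x20 and the flag bit are made-up values.
 */
static void pe_dmem_set_flag_example(void)
{
	u32 v = pe_dmem_read(CLASS0_ID, 0x20, 4);

	v = cpu_to_be32(be32_to_cpu(v) | BIT(0));
	pe_dmem_write(CLASS0_ID, v, 0x20, 4);
}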
6911 +
6912 +/* This function is used to write to CLASS internal bus peripherals (ccu,
6913 + * pe-lem) from the host
6914 + * through indirect access registers.
6915 + * @param[in] val value to write
6916 + * @param[in] addr Address to write to (must be aligned on size)
6917 + * @param[in] size Number of bytes to write (1, 2 or 4)
6918 + *
6919 + */
6920 +void class_bus_write(u32 val, u32 addr, u8 size)
6921 +{
6922 + u32 offset = addr & 0x3;
6923 +
6924 + writel((addr & CLASS_BUS_ACCESS_BASE_MASK), CLASS_BUS_ACCESS_BASE);
6925 +
6926 + addr = (addr & ~CLASS_BUS_ACCESS_BASE_MASK) | PE_MEM_ACCESS_WRITE |
6927 + (size << 24);
6928 +
6929 + writel(cpu_to_be32(val << (offset << 3)), CLASS_BUS_ACCESS_WDATA);
6930 + writel(addr, CLASS_BUS_ACCESS_ADDR);
6931 +}
6932 +
6933 +/* Reads from CLASS internal bus peripherals (ccu, pe-lem) from the host
6934 + * through indirect access registers.
6935 + * @param[in] addr Address to read from (must be aligned on size)
6936 + * @param[in] size Number of bytes to read (1, 2 or 4)
6937 + * @return the read data
6938 + *
6939 + */
6940 +u32 class_bus_read(u32 addr, u8 size)
6941 +{
6942 + u32 offset = addr & 0x3;
6943 + u32 mask = 0xffffffff >> ((4 - size) << 3);
6944 + u32 val;
6945 +
6946 + writel((addr & CLASS_BUS_ACCESS_BASE_MASK), CLASS_BUS_ACCESS_BASE);
6947 +
6948 + addr = (addr & ~CLASS_BUS_ACCESS_BASE_MASK) | (size << 24);
6949 +
6950 + writel(addr, CLASS_BUS_ACCESS_ADDR);
6951 + val = be32_to_cpu(readl(CLASS_BUS_ACCESS_RDATA));
6952 +
6953 + return (val >> (offset << 3)) & mask;
6954 +}
6955 +
6956 +/* Writes data to the cluster memory (PE_LMEM)
6957 + * @param[in] dst PE LMEM destination address (must be 32bit aligned)
6958 + * @param[in] src Buffer source address
6959 + * @param[in] len Number of bytes to copy
6960 + */
6961 +void class_pe_lmem_memcpy_to32(u32 dst, const void *src, unsigned int len)
6962 +{
6963 + u32 len32 = len >> 2;
6964 + int i;
6965 +
6966 + for (i = 0; i < len32; i++, src += 4, dst += 4)
6967 + class_bus_write(*(u32 *)src, dst, 4);
6968 +
6969 + if (len & 0x2) {
6970 + class_bus_write(*(u16 *)src, dst, 2);
6971 + src += 2;
6972 + dst += 2;
6973 + }
6974 +
6975 + if (len & 0x1) {
6976 + class_bus_write(*(u8 *)src, dst, 1);
6977 + src++;
6978 + dst++;
6979 + }
6980 +}
6981 +
6982 +/* Writes value to the cluster memory (PE_LMEM)
6983 + * @param[in] dst PE LMEM destination address (must be 32bit aligned)
6984 + * @param[in] val Value to write
6985 + * @param[in] len Number of bytes to write
6986 + */
6987 +void class_pe_lmem_memset(u32 dst, int val, unsigned int len)
6988 +{
6989 + u32 len32 = len >> 2;
6990 + int i;
6991 +
6992 + val = val | (val << 8) | (val << 16) | (val << 24);
6993 +
6994 + for (i = 0; i < len32; i++, dst += 4)
6995 + class_bus_write(val, dst, 4);
6996 +
6997 + if (len & 0x2) {
6998 + class_bus_write(val, dst, 2);
6999 + dst += 2;
7000 + }
7001 +
7002 + if (len & 0x1) {
7003 + class_bus_write(val, dst, 1);
7004 + dst++;
7005 + }
7006 +}
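/*
 * Usage sketch (illustrative): zeroing and then seeding a hypothetical
 * scratch area in PE LMEM with the two helpers above. The 0x0 offset is
 * a placeholder, not a real LMEM layout.
 */
static void class_pe_lmem_example(void)
{
	static const u8 seed[16] = { 0xaa, 0x55 };

	class_pe_lmem_memset(0x0, 0, sizeof(seed));
	class_pe_lmem_memcpy_to32(0x0, seed, sizeof(seed));
}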
7007 +
7008 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
7009 +
7010 +/* Writes UTIL program memory (DDR) from the host.
7011 + *
7012 + * @param[in] addr Address to write (virtual, must be aligned on size)
7013 + * @param[in] val Value to write (in PE endianness, i.e. BE)
7014 + * @param[in] size Number of bytes to write (2 or 4)
7015 + */
7016 +static void util_pmem_write(u32 val, void *addr, u8 size)
7017 +{
7018 + void *addr64 = (void *)((unsigned long)addr & ~0x7);
7019 + unsigned long off = 8 - ((unsigned long)addr & 0x7) - size;
7020 +
7021 + /*
7022 + * IMEM should be loaded as a 64bit swapped value in a 64bit aligned
7023 + * location
7024 + */
7025 + if (size == 4)
7026 + writel(be32_to_cpu(val), addr64 + off);
7027 + else
7028 + writew(be16_to_cpu((u16)val), addr64 + off);
7029 +}
7030 +
7031 +/* Writes a buffer to UTIL program memory (DDR) from the host.
7032 + *
7033 + * @param[in] dst Address to write (virtual, must be at least 16bit
7034 + * aligned)
7035 + * @param[in] src Buffer to write (in PE endianness, i.e. BE, must have
7036 + * same alignment as dst)
7037 + * @param[in] len Number of bytes to write (must be at least 16bit
7038 + * aligned)
7039 + */
7040 +static void util_pmem_memcpy(void *dst, const void *src, unsigned int len)
7041 +{
7042 + unsigned int len32;
7043 + int i;
7044 +
7045 + if ((unsigned long)src & 0x2) {
7046 + util_pmem_write(*(u16 *)src, dst, 2);
7047 + src += 2;
7048 + dst += 2;
7049 + len -= 2;
7050 + }
7051 +
7052 + len32 = len >> 2;
7053 +
7054 + for (i = 0; i < len32; i++, dst += 4, src += 4)
7055 + util_pmem_write(*(u32 *)src, dst, 4);
7056 +
7057 + if (len & 0x2)
7058 + util_pmem_write(*(u16 *)src, dst, len & 0x2);
7059 +}
7060 +#endif
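/*
 * Worked example (illustrative) of the address math in util_pmem_write():
 * for a 4-byte store to an 8-byte aligned address, off = 8 - 0 - 4 = 4,
 * so the word lands in the high half of the 64-bit slot; a store to
 * (addr & 0x7) == 4 gets off = 0, the low half. Together this produces
 * the 64-bit byte-swapped layout the UTIL IMEM loader expects.
 */
static unsigned long util_pmem_slot_offset(unsigned long addr, u8 size)
{
	return 8 - (addr & 0x7) - size;	/* mirrors util_pmem_write() */
}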
7061 +
7062 +/* Loads an elf section into pmem
7063 + * Code needs to be at least 16bit aligned and only PROGBITS sections are
7064 + * supported
7065 + *
7066 + * @param[in] id PE identification (CLASS0_ID, ..., TMU0_ID, ...,
7067 + * TMU3_ID)
7068 + * @param[in] data pointer to the elf firmware
7069 + * @param[in] shdr pointer to the elf section header
7070 + *
7071 + */
7072 +static int pe_load_pmem_section(int id, const void *data,
7073 + struct elf32_shdr *shdr)
7074 +{
7075 + u32 offset = be32_to_cpu(shdr->sh_offset);
7076 + u32 addr = be32_to_cpu(shdr->sh_addr);
7077 + u32 size = be32_to_cpu(shdr->sh_size);
7078 + u32 type = be32_to_cpu(shdr->sh_type);
7079 +
7080 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
7081 + if (id == UTIL_ID) {
7082 + pr_err("%s: unsupported pmem section for UTIL\n",
7083 + __func__);
7084 + return -EINVAL;
7085 + }
7086 +#endif
7087 +
7088 + if (((unsigned long)(data + offset) & 0x3) != (addr & 0x3)) {
7089 + pr_err(
7090 + "%s: load address(%x) and elf file address(%lx) don't have the same alignment\n"
7091 + , __func__, addr, (unsigned long)data + offset);
7092 +
7093 + return -EINVAL;
7094 + }
7095 +
7096 + if (addr & 0x1) {
7097 + pr_err("%s: load address(%x) is not 16bit aligned\n",
7098 + __func__, addr);
7099 + return -EINVAL;
7100 + }
7101 +
7102 + if (size & 0x1) {
7103 + pr_err("%s: load size(%x) is not 16bit aligned\n",
7104 + __func__, size);
7105 + return -EINVAL;
7106 + }
7107 +
7108 + switch (type) {
7109 + case SHT_PROGBITS:
7110 + pe_pmem_memcpy_to32(id, addr, data + offset, size);
7111 +
7112 + break;
7113 +
7114 + default:
7115 + pr_err("%s: unsupported section type(%x)\n", __func__,
7116 + type);
7117 + return -EINVAL;
7118 + }
7119 +
7120 + return 0;
7121 +}
7122 +
7123 +/* Loads an elf section into dmem
7124 + * Data needs to be at least 32bit aligned, NOBITS sections are correctly
7125 + * initialized to 0
7126 + *
7127 + * @param[in] id PE identification (CLASS0_ID, ..., TMU0_ID,
7128 + * ..., UTIL_ID)
7129 + * @param[in] data pointer to the elf firmware
7130 + * @param[in] shdr pointer to the elf section header
7131 + *
7132 + */
7133 +static int pe_load_dmem_section(int id, const void *data,
7134 + struct elf32_shdr *shdr)
7135 +{
7136 + u32 offset = be32_to_cpu(shdr->sh_offset);
7137 + u32 addr = be32_to_cpu(shdr->sh_addr);
7138 + u32 size = be32_to_cpu(shdr->sh_size);
7139 + u32 type = be32_to_cpu(shdr->sh_type);
7140 + u32 size32 = size >> 2;
7141 + int i;
7142 +
7143 + if (((unsigned long)(data + offset) & 0x3) != (addr & 0x3)) {
7144 + pr_err(
7145 + "%s: load address(%x) and elf file address(%lx) don't have the same alignment\n",
7146 + __func__, addr, (unsigned long)data + offset);
7147 +
7148 + return -EINVAL;
7149 + }
7150 +
7151 + if (addr & 0x3) {
7152 + pr_err("%s: load address(%x) is not 32bit aligned\n",
7153 + __func__, addr);
7154 + return -EINVAL;
7155 + }
7156 +
7157 + switch (type) {
7158 + case SHT_PROGBITS:
7159 + pe_dmem_memcpy_to32(id, addr, data + offset, size);
7160 + break;
7161 +
7162 + case SHT_NOBITS:
7163 + for (i = 0; i < size32; i++, addr += 4)
7164 + pe_dmem_write(id, 0, addr, 4);
7165 +
7166 + if (size & 0x3)
7167 + pe_dmem_write(id, 0, addr, size & 0x3);
7168 +
7169 + break;
7170 +
7171 + default:
7172 + pr_err("%s: unsupported section type(%x)\n", __func__,
7173 + type);
7174 + return -EINVAL;
7175 + }
7176 +
7177 + return 0;
7178 +}
7179 +
7180 +/* Loads an elf section into DDR
7181 + * Data needs to be at least 32bit aligned, NOBITS sections are correctly
7182 + * initialized to 0
7183 + *
7184 + * @param[in] id PE identification (CLASS0_ID, ..., TMU0_ID,
7185 + * ..., UTIL_ID)
7186 + * @param[in] data pointer to the elf firmware
7187 + * @param[in] shdr pointer to the elf section header
7188 + *
7189 + */
7190 +static int pe_load_ddr_section(int id, const void *data,
7191 + struct elf32_shdr *shdr,
7192 + struct device *dev) {
7193 + u32 offset = be32_to_cpu(shdr->sh_offset);
7194 + u32 addr = be32_to_cpu(shdr->sh_addr);
7195 + u32 size = be32_to_cpu(shdr->sh_size);
7196 + u32 type = be32_to_cpu(shdr->sh_type);
7197 + u32 flags = be32_to_cpu(shdr->sh_flags);
7198 +
7199 + switch (type) {
7200 + case SHT_PROGBITS:
7201 + if (flags & SHF_EXECINSTR) {
7202 + if (id <= CLASS_MAX_ID) {
7203 +			/* Do the loading only once into DDR */
7204 + if (id == CLASS0_ID) {
7205 + pr_err(
7206 + "%s: load address(%x) and elf file address(%lx) rcvd\n",
7207 + __func__, addr,
7208 + (unsigned long)data + offset);
7209 + if (((unsigned long)(data + offset)
7210 + & 0x3) != (addr & 0x3)) {
7211 + pr_err(
7212 + "%s: load address(%x) and elf file address(%lx) don't have the same alignment\n"
7213 + , __func__, addr,
7214 + (unsigned long)data + offset);
7215 +
7216 + return -EINVAL;
7217 + }
7218 +
7219 + if (addr & 0x1) {
7220 + pr_err(
7221 + "%s: load address(%x) is not 16bit aligned\n"
7222 + , __func__, addr);
7223 + return -EINVAL;
7224 + }
7225 +
7226 + if (size & 0x1) {
7227 + pr_err(
7228 + "%s: load length(%x) is not 16bit aligned\n"
7229 + , __func__, size);
7230 + return -EINVAL;
7231 + }
7232 + memcpy(DDR_PHYS_TO_VIRT(
7233 + DDR_PFE_TO_PHYS(addr)),
7234 + data + offset, size);
7235 + }
7236 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
7237 + } else if (id == UTIL_ID) {
7238 + if (((unsigned long)(data + offset) & 0x3)
7239 + != (addr & 0x3)) {
7240 + pr_err(
7241 + "%s: load address(%x) and elf file address(%lx) don't have the same alignment\n"
7242 + , __func__, addr,
7243 + (unsigned long)data + offset);
7244 +
7245 + return -EINVAL;
7246 + }
7247 +
7248 + if (addr & 0x1) {
7249 + pr_err(
7250 + "%s: load address(%x) is not 16bit aligned\n"
7251 + , __func__, addr);
7252 + return -EINVAL;
7253 + }
7254 +
7255 + if (size & 0x1) {
7256 + pr_err(
7257 + "%s: load length(%x) is not 16bit aligned\n"
7258 + , __func__, size);
7259 + return -EINVAL;
7260 + }
7261 +
7262 + util_pmem_memcpy(DDR_PHYS_TO_VIRT(
7263 + DDR_PFE_TO_PHYS(addr)),
7264 + data + offset, size);
7265 + }
7266 +#endif
7267 + } else {
7268 + pr_err(
7269 + "%s: unsupported ddr section type(%x) for PE(%d)\n"
7270 + , __func__, type, id);
7271 + return -EINVAL;
7272 + }
7273 +
7274 + } else {
7275 + memcpy(DDR_PHYS_TO_VIRT(DDR_PFE_TO_PHYS(addr)), data
7276 + + offset, size);
7277 + }
7278 +
7279 + break;
7280 +
7281 + case SHT_NOBITS:
7282 + memset(DDR_PHYS_TO_VIRT(DDR_PFE_TO_PHYS(addr)), 0, size);
7283 +
7284 + break;
7285 +
7286 + default:
7287 + pr_err("%s: unsupported section type(%x)\n", __func__,
7288 + type);
7289 + return -EINVAL;
7290 + }
7291 +
7292 + return 0;
7293 +}
7294 +
7295 +/* Loads an elf section into pe lmem
7296 + * Data needs to be at least 32bit aligned, NOBITS sections are correctly
7297 + * initialized to 0
7298 + *
7299 + * @param[in] id PE identification (CLASS0_ID,..., CLASS5_ID)
7300 + * @param[in] data pointer to the elf firmware
7301 + * @param[in] shdr pointer to the elf section header
7302 + *
7303 + */
7304 +static int pe_load_pe_lmem_section(int id, const void *data,
7305 + struct elf32_shdr *shdr)
7306 +{
7307 + u32 offset = be32_to_cpu(shdr->sh_offset);
7308 + u32 addr = be32_to_cpu(shdr->sh_addr);
7309 + u32 size = be32_to_cpu(shdr->sh_size);
7310 + u32 type = be32_to_cpu(shdr->sh_type);
7311 +
7312 + if (id > CLASS_MAX_ID) {
7313 + pr_err(
7314 + "%s: unsupported pe-lmem section type(%x) for PE(%d)\n",
7315 + __func__, type, id);
7316 + return -EINVAL;
7317 + }
7318 +
7319 + if (((unsigned long)(data + offset) & 0x3) != (addr & 0x3)) {
7320 + pr_err(
7321 + "%s: load address(%x) and elf file address(%lx) don't have the same alignment\n",
7322 + __func__, addr, (unsigned long)data + offset);
7323 +
7324 + return -EINVAL;
7325 + }
7326 +
7327 + if (addr & 0x3) {
7328 + pr_err("%s: load address(%x) is not 32bit aligned\n",
7329 + __func__, addr);
7330 + return -EINVAL;
7331 + }
7332 +
7333 + switch (type) {
7334 + case SHT_PROGBITS:
7335 + class_pe_lmem_memcpy_to32(addr, data + offset, size);
7336 + break;
7337 +
7338 + case SHT_NOBITS:
7339 + class_pe_lmem_memset(addr, 0, size);
7340 + break;
7341 +
7342 + default:
7343 + pr_err("%s: unsupported section type(%x)\n", __func__,
7344 + type);
7345 + return -EINVAL;
7346 + }
7347 +
7348 + return 0;
7349 +}
7350 +
7351 +/* Loads an elf section into a PE
7352 + * For now only supports loading a section to dmem (all PEs),
7353 + * pmem (class and tmu PEs),
7354 + * and DDR (util PE code)
7355 + *
7356 + * @param[in] id PE identification (CLASS0_ID, ..., TMU0_ID,
7357 + * ..., UTIL_ID)
7358 + * @param[in] data pointer to the elf firmware
7359 + * @param[in] shdr pointer to the elf section header
7360 + *
7361 + */
7362 +int pe_load_elf_section(int id, const void *data, struct elf32_shdr *shdr,
7363 + struct device *dev) {
7364 + u32 addr = be32_to_cpu(shdr->sh_addr);
7365 + u32 size = be32_to_cpu(shdr->sh_size);
7366 +
7367 + if (IS_DMEM(addr, size))
7368 + return pe_load_dmem_section(id, data, shdr);
7369 + else if (IS_PMEM(addr, size))
7370 + return pe_load_pmem_section(id, data, shdr);
7371 + else if (IS_PFE_LMEM(addr, size))
7372 + return 0;
7373 + else if (IS_PHYS_DDR(addr, size))
7374 + return pe_load_ddr_section(id, data, shdr, dev);
7375 + else if (IS_PE_LMEM(addr, size))
7376 + return pe_load_pe_lmem_section(id, data, shdr);
7377 +
7378 + pr_err("%s: unsupported memory range(%x)\n", __func__,
7379 + addr);
7380 + return 0;
7381 +}
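/*
 * Caller sketch (illustrative, assuming <linux/firmware.h> and
 * <linux/elf.h>): how a firmware loader might walk a big-endian ELF
 * image and feed each allocated section to pe_load_elf_section() above.
 * Error handling and section filtering in the real pfe_firmware.c
 * loader may differ.
 */
static int pe_load_elf_sketch(int id, const struct firmware *fw,
			      struct device *dev)
{
	struct elf32_hdr *ehdr = (struct elf32_hdr *)fw->data;
	struct elf32_shdr *shdr = (struct elf32_shdr *)(fw->data +
						be32_to_cpu(ehdr->e_shoff));
	int i, rc;

	for (i = 0; i < be16_to_cpu(ehdr->e_shnum); i++, shdr++) {
		/* only sections that occupy memory need loading */
		if (!(be32_to_cpu(shdr->sh_flags) &
		      (SHF_WRITE | SHF_ALLOC | SHF_EXECINSTR)))
			continue;

		rc = pe_load_elf_section(id, fw->data, shdr, dev);
		if (rc < 0)
			return rc;
	}

	return 0;
}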
7382 +
7383 +/**************************** BMU ***************************/
7384 +
7385 +/* Initializes a BMU block.
7386 + * @param[in] base BMU block base address
7387 + * @param[in] cfg BMU configuration
7388 + */
7389 +void bmu_init(void *base, struct BMU_CFG *cfg)
7390 +{
7391 + bmu_disable(base);
7392 +
7393 + bmu_set_config(base, cfg);
7394 +
7395 + bmu_reset(base);
7396 +}
7397 +
7398 +/* Resets a BMU block.
7399 + * @param[in] base BMU block base address
7400 + */
7401 +void bmu_reset(void *base)
7402 +{
7403 + writel(CORE_SW_RESET, base + BMU_CTRL);
7404 +
7405 + /* Wait for self clear */
7406 + while (readl(base + BMU_CTRL) & CORE_SW_RESET)
7407 + ;
7408 +}
7409 +
7410 +/* Enables a BMU block.
7411 + * @param[in] base BMU block base address
7412 + */
7413 +void bmu_enable(void *base)
7414 +{
7415 + writel(CORE_ENABLE, base + BMU_CTRL);
7416 +}
7417 +
7418 +/* Disables a BMU block.
7419 + * @param[in] base BMU block base address
7420 + */
7421 +void bmu_disable(void *base)
7422 +{
7423 + writel(CORE_DISABLE, base + BMU_CTRL);
7424 +}
7425 +
7426 +/* Sets the configuration of a BMU block.
7427 + * @param[in] base BMU block base address
7428 + * @param[in] cfg BMU configuration
7429 + */
7430 +void bmu_set_config(void *base, struct BMU_CFG *cfg)
7431 +{
7432 + writel(cfg->baseaddr, base + BMU_UCAST_BASE_ADDR);
7433 + writel(cfg->count & 0xffff, base + BMU_UCAST_CONFIG);
7434 + writel(cfg->size & 0xffff, base + BMU_BUF_SIZE);
7435 +
7436 + /* Interrupts are never used */
7437 + writel(cfg->low_watermark, base + BMU_LOW_WATERMARK);
7438 + writel(cfg->high_watermark, base + BMU_HIGH_WATERMARK);
7439 + writel(0x0, base + BMU_INT_ENABLE);
7440 +}
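/*
 * Usage sketch (illustrative): a typical init/enable sequence for one
 * BMU instance using the helpers above. The count, size and watermark
 * values are placeholders, not the LS1012A provisioning.
 */
static void bmu_bringup_example(void *base)
{
	struct BMU_CFG cfg = {
		.baseaddr = 0,		/* PFE view of the buffer pool base */
		.count = 1024,		/* number of buffers */
		.size = 7,		/* buffer size encoding per HW spec */
		.low_watermark = 10,
		.high_watermark = 15,
	};

	bmu_init(base, &cfg);
	bmu_enable(base);
}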
7441 +
7442 +/**************************** MTIP GEMAC ***************************/
7443 +
7444 +/* Enable Rx Checksum Engine. With this enabled, frames with bad IP,
7445 + * TCP or UDP checksums are discarded
7446 + *
7447 + * @param[in] base GEMAC base address.
7448 + */
7449 +void gemac_enable_rx_checksum_offload(void *base)
7450 +{
7451 +	/* No configuration found to do this */
7452 +}
7453 +
7454 +/* Disable Rx Checksum Engine.
7455 + *
7456 + * @param[in] base GEMAC base address.
7457 + */
7458 +void gemac_disable_rx_checksum_offload(void *base)
7459 +{
7460 +	/* No configuration found to do this */
7461 +}
7462 +
7463 +/* GEMAC set speed.
7464 + * @param[in] base GEMAC base address
7465 + * @param[in] gem_speed GEMAC speed (10, 100 or 1000 Mbps)
7466 + */
7467 +void gemac_set_speed(void *base, enum mac_speed gem_speed)
7468 +{
7469 + u32 ecr = readl(base + EMAC_ECNTRL_REG) & ~EMAC_ECNTRL_SPEED;
7470 + u32 rcr = readl(base + EMAC_RCNTRL_REG) & ~EMAC_RCNTRL_RMII_10T;
7471 +
7472 + switch (gem_speed) {
7473 + case SPEED_10M:
7474 + rcr |= EMAC_RCNTRL_RMII_10T;
7475 + break;
7476 +
7477 + case SPEED_1000M:
7478 + ecr |= EMAC_ECNTRL_SPEED;
7479 + break;
7480 +
7481 + case SPEED_100M:
7482 + default:
7483 + /*It is in 100M mode */
7484 + break;
7485 + }
7486 + writel(ecr, (base + EMAC_ECNTRL_REG));
7487 + writel(rcr, (base + EMAC_RCNTRL_REG));
7488 +}
7489 +
7490 +/* GEMAC set duplex.
7491 + * @param[in] base GEMAC base address
7492 + * @param[in] duplex GEMAC duplex mode (Full, Half)
7493 + */
7494 +void gemac_set_duplex(void *base, int duplex)
7495 +{
7496 + if (duplex == DUPLEX_HALF) {
7497 + writel(readl(base + EMAC_TCNTRL_REG) & ~EMAC_TCNTRL_FDEN, base
7498 + + EMAC_TCNTRL_REG);
7499 + writel(readl(base + EMAC_RCNTRL_REG) | EMAC_RCNTRL_DRT, (base
7500 + + EMAC_RCNTRL_REG));
7501 + } else{
7502 + writel(readl(base + EMAC_TCNTRL_REG) | EMAC_TCNTRL_FDEN, base
7503 + + EMAC_TCNTRL_REG);
7504 + writel(readl(base + EMAC_RCNTRL_REG) & ~EMAC_RCNTRL_DRT, (base
7505 + + EMAC_RCNTRL_REG));
7506 + }
7507 +}
7508 +
7509 +/* GEMAC set mode.
7510 + * @param[in] base GEMAC base address
7511 + * @param[in] mode GEMAC operation mode (MII, RMII, RGMII, SGMII)
7512 + */
7513 +void gemac_set_mode(void *base, int mode)
7514 +{
7515 + u32 val = readl(base + EMAC_RCNTRL_REG);
7516 +
7517 +	/* Remove loopback */
7518 + val &= ~EMAC_RCNTRL_LOOP;
7519 +
7520 +	/* Enable flow control and MII mode. PFE firmware always expects
7521 +	 * the MAC to forward the CRC so it can be validated in software. */
7522 + val |= (EMAC_RCNTRL_FCE | EMAC_RCNTRL_MII_MODE);
7523 +
7524 + writel(val, base + EMAC_RCNTRL_REG);
7525 +}
7526 +
7527 +/* GEMAC enable function.
7528 + * @param[in] base GEMAC base address
7529 + */
7530 +void gemac_enable(void *base)
7531 +{
7532 + writel(readl(base + EMAC_ECNTRL_REG) | EMAC_ECNTRL_ETHER_EN, base +
7533 + EMAC_ECNTRL_REG);
7534 +}
7535 +
7536 +/* GEMAC disable function.
7537 + * @param[in] base GEMAC base address
7538 + */
7539 +void gemac_disable(void *base)
7540 +{
7541 + writel(readl(base + EMAC_ECNTRL_REG) & ~EMAC_ECNTRL_ETHER_EN, base +
7542 + EMAC_ECNTRL_REG);
7543 +}
7544 +
7545 +/* GEMAC TX disable function.
7546 + * @param[in] base GEMAC base address
7547 + */
7548 +void gemac_tx_disable(void *base)
7549 +{
7550 + writel(readl(base + EMAC_TCNTRL_REG) | EMAC_TCNTRL_GTS, base +
7551 + EMAC_TCNTRL_REG);
7552 +}
7553 +
7554 +void gemac_tx_enable(void *base)
7555 +{
7556 + writel(readl(base + EMAC_TCNTRL_REG) & ~EMAC_TCNTRL_GTS, base +
7557 + EMAC_TCNTRL_REG);
7558 +}
7559 +
7560 +/* Sets the hash register of the MAC.
7561 + * This register is used for matching unicast and multicast frames.
7562 + *
7563 + * @param[in] base GEMAC base address.
7564 + * @param[in] hash 64-bit hash to be configured.
7565 + */
7566 +void gemac_set_hash(void *base, struct pfe_mac_addr *hash)
7567 +{
7568 + writel(hash->bottom, base + EMAC_GALR);
7569 + writel(hash->top, base + EMAC_GAUR);
7570 +}
7571 +
7572 +void gemac_set_laddrN(void *base, struct pfe_mac_addr *address,
7573 + unsigned int entry_index)
7574 +{
7575 + if ((entry_index < 1) || (entry_index > EMAC_SPEC_ADDR_MAX))
7576 + return;
7577 +
7578 + entry_index = entry_index - 1;
7579 + if (entry_index < 1) {
7580 + writel(htonl(address->bottom), base + EMAC_PHY_ADDR_LOW);
7581 + writel((htonl(address->top) | 0x8808), base +
7582 + EMAC_PHY_ADDR_HIGH);
7583 + } else {
7584 + writel(htonl(address->bottom), base + ((entry_index - 1) * 8)
7585 + + EMAC_SMAC_0_0);
7586 + writel((htonl(address->top) | 0x8808), base + ((entry_index -
7587 + 1) * 8) + EMAC_SMAC_0_1);
7588 + }
7589 +}
7590 +
7591 +void gemac_clear_laddrN(void *base, unsigned int entry_index)
7592 +{
7593 + if ((entry_index < 1) || (entry_index > EMAC_SPEC_ADDR_MAX))
7594 + return;
7595 +
7596 + entry_index = entry_index - 1;
7597 + if (entry_index < 1) {
7598 + writel(0, base + EMAC_PHY_ADDR_LOW);
7599 + writel(0, base + EMAC_PHY_ADDR_HIGH);
7600 + } else {
7601 + writel(0, base + ((entry_index - 1) * 8) + EMAC_SMAC_0_0);
7602 + writel(0, base + ((entry_index - 1) * 8) + EMAC_SMAC_0_1);
7603 + }
7604 +}
7605 +
7606 +/* Set the loopback mode of the MAC. This can be either no loopback for
7607 + * normal operation, local loopback through MAC internal loopback module or PHY
7608 + * loopback for external loopback through a PHY. This asserts the external
7609 + * loop pin.
7610 + *
7611 + * @param[in] base GEMAC base address.
7612 + * @param[in] gem_loop Loopback mode to be enabled. LB_LOCAL - MAC
7613 + * Loopback,
7614 + * LB_EXT - PHY Loopback.
7615 + */
7616 +void gemac_set_loop(void *base, enum mac_loop gem_loop)
7617 +{
7618 + pr_info("%s()\n", __func__);
7619 + writel(readl(base + EMAC_RCNTRL_REG) | EMAC_RCNTRL_LOOP, (base +
7620 + EMAC_RCNTRL_REG));
7621 +}
7622 +
7623 +/* GEMAC copy all frames (promiscuous mode)
7624 + * @param[in] base GEMAC base address
7625 + */
7626 +void gemac_enable_copy_all(void *base)
7627 +{
7628 + writel(readl(base + EMAC_RCNTRL_REG) | EMAC_RCNTRL_PROM, (base +
7629 + EMAC_RCNTRL_REG));
7630 +}
7631 +
7632 +/* GEMAC stop copying all frames (disable promiscuous mode)
7633 + * @param[in] base GEMAC base address
7634 + */
7635 +void gemac_disable_copy_all(void *base)
7636 +{
7637 + writel(readl(base + EMAC_RCNTRL_REG) & ~EMAC_RCNTRL_PROM, (base +
7638 + EMAC_RCNTRL_REG));
7639 +}
7640 +
7641 +/* GEMAC allow broadcast function.
7642 + * @param[in] base GEMAC base address
7643 + */
7644 +void gemac_allow_broadcast(void *base)
7645 +{
7646 + writel(readl(base + EMAC_RCNTRL_REG) & ~EMAC_RCNTRL_BC_REJ, base +
7647 + EMAC_RCNTRL_REG);
7648 +}
7649 +
7650 +/* GEMAC no broadcast function.
7651 + * @param[in] base GEMAC base address
7652 + */
7653 +void gemac_no_broadcast(void *base)
7654 +{
7655 + writel(readl(base + EMAC_RCNTRL_REG) | EMAC_RCNTRL_BC_REJ, base +
7656 + EMAC_RCNTRL_REG);
7657 +}
7658 +
7659 +/* GEMAC enable 1536 rx function.
7660 + * @param[in] base GEMAC base address
7661 + */
7662 +void gemac_enable_1536_rx(void *base)
7663 +{
7664 + /* Set 1536 as Maximum frame length */
7665 + writel((readl(base + EMAC_RCNTRL_REG) & PFE_RCR_MAX_FL_MASK)
7666 + | (1536 << 16), base + EMAC_RCNTRL_REG);
7667 +}
7668 +
7669 +/* GEMAC set rx Max frame length.
7670 + * @param[in] base GEMAC base address
7671 + * @param[in] mtu new mtu
7672 + */
7673 +void gemac_set_rx_max_fl(void *base, int mtu)
7674 +{
7675 + /* Set mtu as Maximum frame length */
7676 + writel((readl(base + EMAC_RCNTRL_REG) & PFE_RCR_MAX_FL_MASK)
7677 + | (mtu << 16), base + EMAC_RCNTRL_REG);
7678 +}
7679 +
7680 +/* GEMAC enable stacked vlan function.
7681 + * @param[in] base GEMAC base address
7682 + */
7683 +void gemac_enable_stacked_vlan(void *base)
7684 +{
7685 + /* MTIP doesn't support stacked vlan */
7686 +}
7687 +
7688 +/* GEMAC enable pause rx function.
7689 + * @param[in] base GEMAC base address
7690 + */
7691 +void gemac_enable_pause_rx(void *base)
7692 +{
7693 + writel(readl(base + EMAC_RCNTRL_REG) | EMAC_RCNTRL_FCE,
7694 + base + EMAC_RCNTRL_REG);
7695 +}
7696 +
7697 +/* GEMAC disable pause rx function.
7698 + * @param[in] base GEMAC base address
7699 + */
7700 +void gemac_disable_pause_rx(void *base)
7701 +{
7702 + writel(readl(base + EMAC_RCNTRL_REG) & ~EMAC_RCNTRL_FCE,
7703 + base + EMAC_RCNTRL_REG);
7704 +}
7705 +
7706 +/* GEMAC enable pause tx function.
7707 + * @param[in] base GEMAC base address
7708 + */
7709 +void gemac_enable_pause_tx(void *base)
7710 +{
7711 + writel(EMAC_RX_SECTION_EMPTY_V, base + EMAC_RX_SECTION_EMPTY);
7712 +}
7713 +
7714 +/* GEMAC disable pause tx function.
7715 + * @param[in] base GEMAC base address
7716 + */
7717 +void gemac_disable_pause_tx(void *base)
7718 +{
7719 + writel(0x0, base + EMAC_RX_SECTION_EMPTY);
7720 +}
7721 +
7722 +/* GEMAC wol configuration
7723 + * @param[in] base GEMAC base address
7724 + * @param[in] wol_conf WoL register configuration
7725 + */
7726 +void gemac_set_wol(void *base, u32 wol_conf)
7727 +{
7728 + u32 val = readl(base + EMAC_ECNTRL_REG);
7729 +
7730 + if (wol_conf)
7731 + val |= (EMAC_ECNTRL_MAGIC_ENA | EMAC_ECNTRL_SLEEP);
7732 + else
7733 + val &= ~(EMAC_ECNTRL_MAGIC_ENA | EMAC_ECNTRL_SLEEP);
7734 + writel(val, base + EMAC_ECNTRL_REG);
7735 +}
7736 +
7737 +/* Sets the GEMAC bus width (no-op on the MTIP GEMAC)
7738 + * @param[in] base GEMAC base address
7739 + * @param[in] width gemac bus width to be set, possible values are 32/64/128
7740 + */
7741 +void gemac_set_bus_width(void *base, int width)
7742 +{
7743 +}
7744 +
7745 +/* Sets Gemac configuration.
7746 + * @param[in] base GEMAC base address
7747 + * @param[in] cfg GEMAC configuration
7748 + */
7749 +void gemac_set_config(void *base, struct gemac_cfg *cfg)
7750 +{
7751 + /*GEMAC config taken from VLSI */
7752 + writel(0x00000004, base + EMAC_TFWR_STR_FWD);
7753 + writel(0x00000005, base + EMAC_RX_SECTION_FULL);
7754 +
7755 + if (pfe_errata_a010897)
7756 + writel(0x0000076c, base + EMAC_TRUNC_FL);
7757 + else
7758 + writel(0x00003fff, base + EMAC_TRUNC_FL);
7759 +
7760 + writel(0x00000030, base + EMAC_TX_SECTION_EMPTY);
7761 + writel(0x00000000, base + EMAC_MIB_CTRL_STS_REG);
7762 +
7763 + gemac_set_mode(base, cfg->mode);
7764 +
7765 + gemac_set_speed(base, cfg->speed);
7766 +
7767 + gemac_set_duplex(base, cfg->duplex);
7768 +}
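/*
 * Usage sketch (illustrative): programming a GEMAC for 1 Gbps full
 * duplex. Note that gemac_set_mode() above programs the same RCNTRL
 * bits whatever the mode argument is on this MAC, so 0 is used as a
 * placeholder mode value here.
 */
static void gemac_bringup_example(void *base)
{
	struct gemac_cfg cfg = {
		.mode = 0,	/* placeholder; unused by gemac_set_mode() */
		.speed = SPEED_1000M,
		.duplex = DUPLEX_FULL,
	};

	gemac_set_config(base, &cfg);
	gemac_enable(base);
}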
7769 +
7770 +/**************************** GPI ***************************/
7771 +
7772 +/* Initializes a GPI block.
7773 + * @param[in] base GPI base address
7774 + * @param[in] cfg GPI configuration
7775 + */
7776 +void gpi_init(void *base, struct gpi_cfg *cfg)
7777 +{
7778 + gpi_reset(base);
7779 +
7780 + gpi_disable(base);
7781 +
7782 + gpi_set_config(base, cfg);
7783 +}
7784 +
7785 +/* Resets a GPI block.
7786 + * @param[in] base GPI base address
7787 + */
7788 +void gpi_reset(void *base)
7789 +{
7790 + writel(CORE_SW_RESET, base + GPI_CTRL);
7791 +}
7792 +
7793 +/* Enables a GPI block.
7794 + * @param[in] base GPI base address
7795 + */
7796 +void gpi_enable(void *base)
7797 +{
7798 + writel(CORE_ENABLE, base + GPI_CTRL);
7799 +}
7800 +
7801 +/* Disables a GPI block.
7802 + * @param[in] base GPI base address
7803 + */
7804 +void gpi_disable(void *base)
7805 +{
7806 + writel(CORE_DISABLE, base + GPI_CTRL);
7807 +}
7808 +
7809 +/* Sets the configuration of a GPI block.
7810 + * @param[in] base GPI base address
7811 + * @param[in] cfg GPI configuration
7812 + */
7813 +void gpi_set_config(void *base, struct gpi_cfg *cfg)
7814 +{
7815 + writel(CBUS_VIRT_TO_PFE(BMU1_BASE_ADDR + BMU_ALLOC_CTRL), base
7816 + + GPI_LMEM_ALLOC_ADDR);
7817 + writel(CBUS_VIRT_TO_PFE(BMU1_BASE_ADDR + BMU_FREE_CTRL), base
7818 + + GPI_LMEM_FREE_ADDR);
7819 + writel(CBUS_VIRT_TO_PFE(BMU2_BASE_ADDR + BMU_ALLOC_CTRL), base
7820 + + GPI_DDR_ALLOC_ADDR);
7821 + writel(CBUS_VIRT_TO_PFE(BMU2_BASE_ADDR + BMU_FREE_CTRL), base
7822 + + GPI_DDR_FREE_ADDR);
7823 + writel(CBUS_VIRT_TO_PFE(CLASS_INQ_PKTPTR), base + GPI_CLASS_ADDR);
7824 + writel(DDR_HDR_SIZE, base + GPI_DDR_DATA_OFFSET);
7825 + writel(LMEM_HDR_SIZE, base + GPI_LMEM_DATA_OFFSET);
7826 + writel(0, base + GPI_LMEM_SEC_BUF_DATA_OFFSET);
7827 + writel(0, base + GPI_DDR_SEC_BUF_DATA_OFFSET);
7828 + writel((DDR_HDR_SIZE << 16) | LMEM_HDR_SIZE, base + GPI_HDR_SIZE);
7829 + writel((DDR_BUF_SIZE << 16) | LMEM_BUF_SIZE, base + GPI_BUF_SIZE);
7830 +
7831 + writel(((cfg->lmem_rtry_cnt << 16) | (GPI_DDR_BUF_EN << 1) |
7832 + GPI_LMEM_BUF_EN), base + GPI_RX_CONFIG);
7833 + writel(cfg->tmlf_txthres, base + GPI_TMLF_TX);
7834 + writel(cfg->aseq_len, base + GPI_DTX_ASEQ);
7835 + writel(1, base + GPI_TOE_CHKSUM_EN);
7836 +
7837 + if (cfg->mtip_pause_reg) {
7838 + writel(cfg->mtip_pause_reg, base + GPI_CSR_MTIP_PAUSE_REG);
7839 + writel(EGPI_PAUSE_TIME, base + GPI_TX_PAUSE_TIME);
7840 + }
7841 +}
7842 +
7843 +/**************************** CLASSIFIER ***************************/
7844 +
7845 +/* Initializes CLASSIFIER block.
7846 + * @param[in] cfg CLASSIFIER configuration
7847 + */
7848 +void class_init(struct class_cfg *cfg)
7849 +{
7850 + class_reset();
7851 +
7852 + class_disable();
7853 +
7854 + class_set_config(cfg);
7855 +}
7856 +
7857 +/* Resets CLASSIFIER block.
7858 + *
7859 + */
7860 +void class_reset(void)
7861 +{
7862 + writel(CORE_SW_RESET, CLASS_TX_CTRL);
7863 +}
7864 +
7865 +/* Enables all CLASS PE cores.
7866 + *
7867 + */
7868 +void class_enable(void)
7869 +{
7870 + writel(CORE_ENABLE, CLASS_TX_CTRL);
7871 +}
7872 +
7873 +/* Disables all CLASS PE cores.
7874 + *
7875 + */
7876 +void class_disable(void)
7877 +{
7878 + writel(CORE_DISABLE, CLASS_TX_CTRL);
7879 +}
7880 +
7881 +/*
7882 + * Sets the configuration of the CLASSIFIER block.
7883 + * @param[in] cfg CLASSIFIER configuration
7884 + */
7885 +void class_set_config(struct class_cfg *cfg)
7886 +{
7887 + u32 val;
7888 +
7889 + /* Initialize route table */
7890 + if (!cfg->resume)
7891 + memset(DDR_PHYS_TO_VIRT(cfg->route_table_baseaddr), 0, (1 <<
7892 + cfg->route_table_hash_bits) * CLASS_ROUTE_SIZE);
7893 +
7894 +#if !defined(LS1012A_PFE_RESET_WA)
7895 + writel(cfg->pe_sys_clk_ratio, CLASS_PE_SYS_CLK_RATIO);
7896 +#endif
7897 +
7898 + writel((DDR_HDR_SIZE << 16) | LMEM_HDR_SIZE, CLASS_HDR_SIZE);
7899 + writel(LMEM_BUF_SIZE, CLASS_LMEM_BUF_SIZE);
7900 + writel(CLASS_ROUTE_ENTRY_SIZE(CLASS_ROUTE_SIZE) |
7901 + CLASS_ROUTE_HASH_SIZE(cfg->route_table_hash_bits),
7902 + CLASS_ROUTE_HASH_ENTRY_SIZE);
7903 + writel(HIF_PKT_CLASS_EN | HIF_PKT_OFFSET(sizeof(struct hif_hdr)),
7904 + CLASS_HIF_PARSE);
7905 +
7906 + val = HASH_CRC_PORT_IP | QB2BUS_LE;
7907 +
7908 +#if defined(CONFIG_IP_ALIGNED)
7909 + val |= IP_ALIGNED;
7910 +#endif
7911 +
7912 + /*
7913 + * Class PE packet steering will only work if TOE mode, bridge fetch or
7914 + * route fetch are enabled (see class/qb_fet.v). Route fetch would
7915 + * trigger additional memory copies (likely from DDR because of hash
7916 + * table size, which cannot be reduced because PE software still
7917 + * relies on hash value computed in HW), so when not in TOE mode we
7918 + * simply enable HW bridge fetch even though we don't use it.
7919 + */
7920 + if (cfg->toe_mode)
7921 + val |= CLASS_TOE;
7922 + else
7923 + val |= HW_BRIDGE_FETCH;
7924 +
7925 + writel(val, CLASS_ROUTE_MULTI);
7926 +
7927 + writel(DDR_PHYS_TO_PFE(cfg->route_table_baseaddr),
7928 + CLASS_ROUTE_TABLE_BASE);
7929 + writel(CLASS_PE0_RO_DM_ADDR0_VAL, CLASS_PE0_RO_DM_ADDR0);
7930 + writel(CLASS_PE0_RO_DM_ADDR1_VAL, CLASS_PE0_RO_DM_ADDR1);
7931 + writel(CLASS_PE0_QB_DM_ADDR0_VAL, CLASS_PE0_QB_DM_ADDR0);
7932 + writel(CLASS_PE0_QB_DM_ADDR1_VAL, CLASS_PE0_QB_DM_ADDR1);
7933 + writel(CBUS_VIRT_TO_PFE(TMU_PHY_INQ_PKTPTR), CLASS_TM_INQ_ADDR);
7934 +
7935 + writel(23, CLASS_AFULL_THRES);
7936 + writel(23, CLASS_TSQ_FIFO_THRES);
7937 +
7938 + writel(24, CLASS_MAX_BUF_CNT);
7939 + writel(24, CLASS_TSQ_MAX_CNT);
7940 +}
7941 +
7942 +/**************************** TMU ***************************/
7943 +
7944 +void tmu_reset(void)
7945 +{
7946 + writel(SW_RESET, TMU_CTRL);
7947 +}
7948 +
7949 +/* Initializes TMU block.
7950 + * @param[in] cfg TMU configuration
7951 + */
7952 +void tmu_init(struct tmu_cfg *cfg)
7953 +{
7954 + int q, phyno;
7955 +
7956 + tmu_disable(0xF);
7957 + mdelay(10);
7958 +
7959 +#if !defined(LS1012A_PFE_RESET_WA)
7960 + /* keep in soft reset */
7961 + writel(SW_RESET, TMU_CTRL);
7962 +#endif
7963 + writel(0x3, TMU_SYS_GENERIC_CONTROL);
7964 + writel(750, TMU_INQ_WATERMARK);
7965 + writel(CBUS_VIRT_TO_PFE(EGPI1_BASE_ADDR +
7966 + GPI_INQ_PKTPTR), TMU_PHY0_INQ_ADDR);
7967 + writel(CBUS_VIRT_TO_PFE(EGPI2_BASE_ADDR +
7968 + GPI_INQ_PKTPTR), TMU_PHY1_INQ_ADDR);
7969 + writel(CBUS_VIRT_TO_PFE(HGPI_BASE_ADDR +
7970 + GPI_INQ_PKTPTR), TMU_PHY3_INQ_ADDR);
7971 + writel(CBUS_VIRT_TO_PFE(HIF_NOCPY_RX_INQ0_PKTPTR), TMU_PHY4_INQ_ADDR);
7972 + writel(CBUS_VIRT_TO_PFE(UTIL_INQ_PKTPTR), TMU_PHY5_INQ_ADDR);
7973 + writel(CBUS_VIRT_TO_PFE(BMU2_BASE_ADDR + BMU_FREE_CTRL),
7974 + TMU_BMU_INQ_ADDR);
7975 +
7976 +	/* Enable all 10 schedulers [9:0] of each TDQ */
7977 +	writel(0x3FF, TMU_TDQ0_SCH_CTRL);
7980 + writel(0x3FF, TMU_TDQ1_SCH_CTRL);
7981 + writel(0x3FF, TMU_TDQ3_SCH_CTRL);
7982 +
7983 +#if !defined(LS1012A_PFE_RESET_WA)
7984 + writel(cfg->pe_sys_clk_ratio, TMU_PE_SYS_CLK_RATIO);
7985 +#endif
7986 +
7987 +#if !defined(LS1012A_PFE_RESET_WA)
7988 + writel(DDR_PHYS_TO_PFE(cfg->llm_base_addr), TMU_LLM_BASE_ADDR);
7989 + /* Extra packet pointers will be stored from this address onwards */
7990 +
7991 + writel(cfg->llm_queue_len, TMU_LLM_QUE_LEN);
7992 + writel(5, TMU_TDQ_IIFG_CFG);
7993 + writel(DDR_BUF_SIZE, TMU_BMU_BUF_SIZE);
7994 +
7995 + writel(0x0, TMU_CTRL);
7996 +
7997 + /* MEM init */
7998 + pr_info("%s: mem init\n", __func__);
7999 + writel(MEM_INIT, TMU_CTRL);
8000 +
8001 + while (!(readl(TMU_CTRL) & MEM_INIT_DONE))
8002 + ;
8003 +
8004 + /* LLM init */
8005 + pr_info("%s: lmem init\n", __func__);
8006 + writel(LLM_INIT, TMU_CTRL);
8007 +
8008 + while (!(readl(TMU_CTRL) & LLM_INIT_DONE))
8009 + ;
8010 +#endif
8011 + /* set up each queue for tail drop */
8012 + for (phyno = 0; phyno < 4; phyno++) {
8013 + if (phyno == 2)
8014 + continue;
8015 + for (q = 0; q < 16; q++) {
8016 + u32 qdepth;
8017 +
8018 + writel((phyno << 8) | q, TMU_TEQ_CTRL);
8019 + writel(1 << 22, TMU_TEQ_QCFG); /*Enable tail drop */
8020 +
8021 + if (phyno == 3)
8022 + qdepth = DEFAULT_TMU3_QDEPTH;
8023 + else
8024 + qdepth = (q == 0) ? DEFAULT_Q0_QDEPTH :
8025 + DEFAULT_MAX_QDEPTH;
8026 +
8027 + /* LOG: 68855 */
8028 + /*
8029 + * The following is a workaround for the reordered
8030 + * packet and BMU2 buffer leakage issue.
8031 + */
8032 + if (CHIP_REVISION() == 0)
8033 + qdepth = 31;
8034 +
8035 + writel(qdepth << 18, TMU_TEQ_HW_PROB_CFG2);
8036 + writel(qdepth >> 14, TMU_TEQ_HW_PROB_CFG3);
8037 + }
8038 + }
8039 +
8040 +#ifdef CFG_LRO
8041 + /* Set TMU-3 queue 5 (LRO) in no-drop mode */
8042 + writel((3 << 8) | TMU_QUEUE_LRO, TMU_TEQ_CTRL);
8043 + writel(0, TMU_TEQ_QCFG);
8044 +#endif
8045 +
8046 + writel(0x05, TMU_TEQ_DISABLE_DROPCHK);
8047 +
8048 + writel(0x0, TMU_CTRL);
8049 +}
8050 +
8051 +/* Enables TMU-PE cores.
8052 + * @param[in] pe_mask TMU PE mask
8053 + */
8054 +void tmu_enable(u32 pe_mask)
8055 +{
8056 + writel(readl(TMU_TX_CTRL) | (pe_mask & 0xF), TMU_TX_CTRL);
8057 +}
8058 +
8059 +/* Disables TMU cores.
8060 + * @param[in] pe_mask TMU PE mask
8061 + */
8062 +void tmu_disable(u32 pe_mask)
8063 +{
8064 + writel(readl(TMU_TX_CTRL) & ~(pe_mask & 0xF), TMU_TX_CTRL);
8065 +}
8066 +
8067 +/* Returns the TMU queue status
8068 + * @param[in] if_id gem interface id or TMU index
8069 + * @return bit mask of busy queues; zero means all
8070 + * queues are empty
8071 + */
8072 +u32 tmu_qstatus(u32 if_id)
8073 +{
8074 + return cpu_to_be32(pe_dmem_read(TMU0_ID + if_id, TMU_DM_PESTATUS +
8075 + offsetof(struct pe_status, tmu_qstatus), 4));
8076 +}
8077 +
8078 +u32 tmu_pkts_processed(u32 if_id)
8079 +{
8080 + return cpu_to_be32(pe_dmem_read(TMU0_ID + if_id, TMU_DM_PESTATUS +
8081 + offsetof(struct pe_status, rx), 4));
8082 +}
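/*
 * Usage sketch (illustrative): waiting for an interface's TMU queues to
 * drain before stopping the port, by polling tmu_qstatus() until no
 * queue reports busy. The timeout value is arbitrary.
 */
static int tmu_wait_queues_empty(u32 if_id)
{
	int timeout = 100;

	while (tmu_qstatus(if_id) && --timeout)
		mdelay(1);

	return timeout ? 0 : -ETIMEDOUT;
}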
8083 +
8084 +/**************************** UTIL ***************************/
8085 +
8086 +/* Resets UTIL block.
8087 + */
8088 +void util_reset(void)
8089 +{
8090 + writel(CORE_SW_RESET, UTIL_TX_CTRL);
8091 +}
8092 +
8093 +/* Initializes UTIL block.
8094 + * @param[in] cfg UTIL configuration
8095 + */
8096 +void util_init(struct util_cfg *cfg)
8097 +{
8098 + writel(cfg->pe_sys_clk_ratio, UTIL_PE_SYS_CLK_RATIO);
8099 +}
8100 +
8101 +/* Enables UTIL-PE core.
8102 + *
8103 + */
8104 +void util_enable(void)
8105 +{
8106 + writel(CORE_ENABLE, UTIL_TX_CTRL);
8107 +}
8108 +
8109 +/* Disables UTIL-PE core.
8110 + *
8111 + */
8112 +void util_disable(void)
8113 +{
8114 + writel(CORE_DISABLE, UTIL_TX_CTRL);
8115 +}
8116 +
8117 +/**************************** HIF ***************************/
8118 +/* Initializes HIF copy block.
8119 + *
8120 + */
8121 +void hif_init(void)
8122 +{
8123 + /*Initialize HIF registers*/
8124 + writel((HIF_RX_POLL_CTRL_CYCLE << 16) | HIF_TX_POLL_CTRL_CYCLE,
8125 + HIF_POLL_CTRL);
8126 +}
8127 +
8128 +/* Enable hif tx DMA and interrupt
8129 + *
8130 + */
8131 +void hif_tx_enable(void)
8132 +{
8133 + writel(HIF_CTRL_DMA_EN, HIF_TX_CTRL);
8134 + writel((readl(HIF_INT_ENABLE) | HIF_INT_EN | HIF_TXPKT_INT_EN),
8135 + HIF_INT_ENABLE);
8136 +}
8137 +
8138 +/* Disable hif tx DMA and interrupt
8139 + *
8140 + */
8141 +void hif_tx_disable(void)
8142 +{
8143 + u32 hif_int;
8144 +
8145 + writel(0, HIF_TX_CTRL);
8146 +
8147 + hif_int = readl(HIF_INT_ENABLE);
8148 +	hif_int &= ~HIF_TXPKT_INT_EN;
8149 + writel(hif_int, HIF_INT_ENABLE);
8150 +}
8151 +
8152 +/* Enable hif rx DMA and interrupt
8153 + *
8154 + */
8155 +void hif_rx_enable(void)
8156 +{
8157 + hif_rx_dma_start();
8158 + writel((readl(HIF_INT_ENABLE) | HIF_INT_EN | HIF_RXPKT_INT_EN),
8159 + HIF_INT_ENABLE);
8160 +}
8161 +
8162 +/* Disable hif rx DMA and interrupt
8163 + *
8164 + */
8165 +void hif_rx_disable(void)
8166 +{
8167 + u32 hif_int;
8168 +
8169 + writel(0, HIF_RX_CTRL);
8170 +
8171 + hif_int = readl(HIF_INT_ENABLE);
8172 +	hif_int &= ~HIF_RXPKT_INT_EN;
8173 + writel(hif_int, HIF_INT_ENABLE);
8174 +}
8175 --- /dev/null
8176 +++ b/drivers/staging/fsl_ppfe/pfe_hif.c
8177 @@ -0,0 +1,1064 @@
8178 +// SPDX-License-Identifier: GPL-2.0+
8179 +/*
8180 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
8181 + * Copyright 2017 NXP
8182 + */
8183 +
8184 +#include <linux/kernel.h>
8185 +#include <linux/interrupt.h>
8186 +#include <linux/dma-mapping.h>
8187 +#include <linux/dmapool.h>
8188 +#include <linux/sched.h>
8189 +#include <linux/module.h>
8190 +#include <linux/list.h>
8191 +#include <linux/kthread.h>
8192 +#include <linux/slab.h>
8193 +
8194 +#include <linux/io.h>
8195 +#include <asm/irq.h>
8196 +
8197 +#include "pfe_mod.h"
8198 +
8199 +#define HIF_INT_MASK (HIF_INT | HIF_RXPKT_INT | HIF_TXPKT_INT)
8200 +
8201 +unsigned char napi_first_batch;
8202 +
8203 +static void pfe_tx_do_cleanup(unsigned long data);
8204 +
8205 +static int pfe_hif_alloc_descr(struct pfe_hif *hif)
8206 +{
8207 + void *addr;
8208 + dma_addr_t dma_addr;
8209 + int err = 0;
8210 +
8211 + pr_info("%s\n", __func__);
8212 + addr = dma_alloc_coherent(pfe->dev,
8213 + HIF_RX_DESC_NT * sizeof(struct hif_desc) +
8214 + HIF_TX_DESC_NT * sizeof(struct hif_desc),
8215 + &dma_addr, GFP_KERNEL);
8216 +
8217 + if (!addr) {
8218 + pr_err("%s: Could not allocate buffer descriptors!\n"
8219 + , __func__);
8220 + err = -ENOMEM;
8221 + goto err0;
8222 + }
8223 +
8224 + hif->descr_baseaddr_p = dma_addr;
8225 + hif->descr_baseaddr_v = addr;
8226 + hif->rx_ring_size = HIF_RX_DESC_NT;
8227 + hif->tx_ring_size = HIF_TX_DESC_NT;
8228 +
8229 + return 0;
8230 +
8231 +err0:
8232 + return err;
8233 +}
8234 +
8235 +#if defined(LS1012A_PFE_RESET_WA)
8236 +static void pfe_hif_disable_rx_desc(struct pfe_hif *hif)
8237 +{
8238 + int ii;
8239 + struct hif_desc *desc = hif->rx_base;
8240 +
8241 + /*Mark all descriptors as LAST_BD */
8242 + for (ii = 0; ii < hif->rx_ring_size; ii++) {
8243 + desc->ctrl |= BD_CTRL_LAST_BD;
8244 + desc++;
8245 + }
8246 +}
8247 +
8248 +struct class_rx_hdr_t {
8249 + u32 next_ptr; /* ptr to the start of the first DDR buffer */
8250 + u16 length; /* total packet length */
8251 + u16 phyno; /* input physical port number */
8252 + u32 status; /* gemac status bits */
8253 + u32 status2; /* reserved for software usage */
8254 +};
8255 +
8256 +/* STATUS_BAD_FRAME_ERR is set for all errors (including checksums if enabled)
8257 + * except overflow
8258 + */
8259 +#define STATUS_BAD_FRAME_ERR BIT(16)
8260 +#define STATUS_LENGTH_ERR BIT(17)
8261 +#define STATUS_CRC_ERR BIT(18)
8262 +#define STATUS_TOO_SHORT_ERR BIT(19)
8263 +#define STATUS_TOO_LONG_ERR BIT(20)
8264 +#define STATUS_CODE_ERR BIT(21)
8265 +#define STATUS_MC_HASH_MATCH BIT(22)
8266 +#define STATUS_CUMULATIVE_ARC_HIT BIT(23)
8267 +#define STATUS_UNICAST_HASH_MATCH BIT(24)
8268 +#define STATUS_IP_CHECKSUM_CORRECT BIT(25)
8269 +#define STATUS_TCP_CHECKSUM_CORRECT BIT(26)
8270 +#define STATUS_UDP_CHECKSUM_CORRECT BIT(27)
8271 +#define STATUS_OVERFLOW_ERR BIT(28) /* GPI error */
8272 +#define MIN_PKT_SIZE 64
8273 +
8274 +static inline void copy_to_lmem(u32 *dst, u32 *src, int len)
8275 +{
8276 + int i;
8277 +
8278 + for (i = 0; i < len; i += sizeof(u32)) {
8279 + *dst = htonl(*src);
8280 + dst++; src++;
8281 + }
8282 +}
8283 +
8284 +static void send_dummy_pkt_to_hif(void)
8285 +{
8286 + void *lmem_ptr, *ddr_ptr, *lmem_virt_addr;
8287 + u32 physaddr;
8288 + struct class_rx_hdr_t local_hdr;
8289 + static u32 dummy_pkt[] = {
8290 + 0x33221100, 0x2b785544, 0xd73093cb, 0x01000608,
8291 + 0x04060008, 0x2b780200, 0xd73093cb, 0x0a01a8c0,
8292 + 0x33221100, 0xa8c05544, 0x00000301, 0x00000000,
8293 + 0x00000000, 0x00000000, 0x00000000, 0xbe86c51f };
8294 +
8295 + ddr_ptr = (void *)((u64)readl(BMU2_BASE_ADDR + BMU_ALLOC_CTRL));
8296 + if (!ddr_ptr)
8297 + return;
8298 +
8299 + lmem_ptr = (void *)((u64)readl(BMU1_BASE_ADDR + BMU_ALLOC_CTRL));
8300 + if (!lmem_ptr)
8301 + return;
8302 +
8303 + pr_info("Sending a dummy pkt to HIF %p %p\n", ddr_ptr, lmem_ptr);
8304 + physaddr = (u32)DDR_VIRT_TO_PFE(ddr_ptr);
8305 +
8306 + lmem_virt_addr = (void *)CBUS_PFE_TO_VIRT((unsigned long int)lmem_ptr);
8307 +
8308 + local_hdr.phyno = htons(0); /* RX_PHY_0 */
8309 + local_hdr.length = htons(MIN_PKT_SIZE);
8310 +
8311 + local_hdr.next_ptr = htonl((u32)physaddr);
8312 + /*Mark checksum is correct */
8313 + local_hdr.status = htonl((STATUS_IP_CHECKSUM_CORRECT |
8314 + STATUS_UDP_CHECKSUM_CORRECT |
8315 + STATUS_TCP_CHECKSUM_CORRECT |
8316 + STATUS_UNICAST_HASH_MATCH |
8317 + STATUS_CUMULATIVE_ARC_HIT));
8318 + local_hdr.status2 = 0;
8319 +
8320 + copy_to_lmem((u32 *)lmem_virt_addr, (u32 *)&local_hdr,
8321 + sizeof(local_hdr));
8322 +
8323 + copy_to_lmem((u32 *)(lmem_virt_addr + LMEM_HDR_SIZE), (u32 *)dummy_pkt,
8324 + 0x40);
8325 +
8326 + writel((unsigned long int)lmem_ptr, CLASS_INQ_PKTPTR);
8327 +}
8328 +
8329 +void pfe_hif_rx_idle(struct pfe_hif *hif)
8330 +{
8331 + int hif_stop_loop = 10;
8332 + u32 rx_status;
8333 +
8334 + pfe_hif_disable_rx_desc(hif);
8335 + pr_info("Bringing hif to idle state...");
8336 + writel(0, HIF_INT_ENABLE);
8337 + /*If HIF Rx BDP is busy send a dummy packet */
8338 + do {
8339 + rx_status = readl(HIF_RX_STATUS);
8340 + if (rx_status & BDP_CSR_RX_DMA_ACTV)
8341 + send_dummy_pkt_to_hif();
8342 +
8343 + usleep_range(100, 150);
8344 + } while (--hif_stop_loop);
8345 +
8346 + if (readl(HIF_RX_STATUS) & BDP_CSR_RX_DMA_ACTV)
8347 + pr_info("Failed\n");
8348 + else
8349 + pr_info("Done\n");
8350 +}
8351 +#endif
8352 +
8353 +static void pfe_hif_free_descr(struct pfe_hif *hif)
8354 +{
8355 + pr_info("%s\n", __func__);
8356 +
8357 + dma_free_coherent(pfe->dev,
8358 + hif->rx_ring_size * sizeof(struct hif_desc) +
8359 + hif->tx_ring_size * sizeof(struct hif_desc),
8360 + hif->descr_baseaddr_v, hif->descr_baseaddr_p);
8361 +}
8362 +
8363 +void pfe_hif_desc_dump(struct pfe_hif *hif)
8364 +{
8365 + struct hif_desc *desc;
8366 + unsigned long desc_p;
8367 + int ii = 0;
8368 +
8369 + pr_info("%s\n", __func__);
8370 +
8371 + desc = hif->rx_base;
8372 + desc_p = (u32)((u64)desc - (u64)hif->descr_baseaddr_v +
8373 + hif->descr_baseaddr_p);
8374 +
8375 + pr_info("HIF Rx desc base %p physical %x\n", desc, (u32)desc_p);
8376 + for (ii = 0; ii < hif->rx_ring_size; ii++) {
8377 + pr_info("status: %08x, ctrl: %08x, data: %08x, next: %x\n",
8378 + readl(&desc->status), readl(&desc->ctrl),
8379 + readl(&desc->data), readl(&desc->next));
8380 + desc++;
8381 + }
8382 +
8383 + desc = hif->tx_base;
8384 + desc_p = ((u64)desc - (u64)hif->descr_baseaddr_v +
8385 + hif->descr_baseaddr_p);
8386 +
8387 + pr_info("HIF Tx desc base %p physical %x\n", desc, (u32)desc_p);
8388 + for (ii = 0; ii < hif->tx_ring_size; ii++) {
8389 + pr_info("status: %08x, ctrl: %08x, data: %08x, next: %x\n",
8390 + readl(&desc->status), readl(&desc->ctrl),
8391 + readl(&desc->data), readl(&desc->next));
8392 + desc++;
8393 + }
8394 +}
8395 +
8396 +/* pfe_hif_release_buffers */
8397 +static void pfe_hif_release_buffers(struct pfe_hif *hif)
8398 +{
8399 + struct hif_desc *desc;
8400 + int i = 0;
8401 +
8402 + hif->rx_base = hif->descr_baseaddr_v;
8403 +
8404 + pr_info("%s\n", __func__);
8405 +
8406 + /*Free Rx buffers */
8407 + desc = hif->rx_base;
8408 + for (i = 0; i < hif->rx_ring_size; i++) {
8409 + if (readl(&desc->data)) {
8410 + if ((i < hif->shm->rx_buf_pool_cnt) &&
8411 + (!hif->shm->rx_buf_pool[i])) {
8412 + /*
8413 + * dma_unmap_single(hif->dev, desc->data,
8414 + * hif->rx_buf_len[i], DMA_FROM_DEVICE);
8415 + */
8416 + dma_unmap_single(hif->dev,
8417 + DDR_PFE_TO_PHYS(
8418 + readl(&desc->data)),
8419 + hif->rx_buf_len[i],
8420 + DMA_FROM_DEVICE);
8421 + hif->shm->rx_buf_pool[i] = hif->rx_buf_addr[i];
8422 + } else {
8423 + pr_err("%s: buffer pool already full\n"
8424 + , __func__);
8425 + }
8426 + }
8427 +
8428 + writel(0, &desc->data);
8429 + writel(0, &desc->status);
8430 + writel(0, &desc->ctrl);
8431 + desc++;
8432 + }
8433 +}
8434 +
8435 +/*
8436 + * pfe_hif_init_buffers
8437 + * This function initializes the HIF Rx/Tx ring descriptors and
8438 + * initialize Rx queue with buffers.
8439 + */
8440 +static int pfe_hif_init_buffers(struct pfe_hif *hif)
8441 +{
8442 + struct hif_desc *desc, *first_desc_p;
8443 + u32 data;
8444 + int i = 0;
8445 +
8446 + pr_info("%s\n", __func__);
8447 +
8448 +	/* Check that enough Rx buffers are available in the shared memory */
8449 + if (hif->shm->rx_buf_pool_cnt < hif->rx_ring_size)
8450 + return -ENOMEM;
8451 +
8452 + hif->rx_base = hif->descr_baseaddr_v;
8453 + memset(hif->rx_base, 0, hif->rx_ring_size * sizeof(struct hif_desc));
8454 +
8455 + /*Initialize Rx descriptors */
8456 + desc = hif->rx_base;
8457 + first_desc_p = (struct hif_desc *)hif->descr_baseaddr_p;
8458 +
8459 + for (i = 0; i < hif->rx_ring_size; i++) {
8460 + /* Initialize Rx buffers from the shared memory */
8461 +
8462 + data = (u32)dma_map_single(hif->dev, hif->shm->rx_buf_pool[i],
8463 + pfe_pkt_size, DMA_FROM_DEVICE);
8464 + hif->rx_buf_addr[i] = hif->shm->rx_buf_pool[i];
8465 + hif->rx_buf_len[i] = pfe_pkt_size;
8466 + hif->shm->rx_buf_pool[i] = NULL;
8467 +
8468 + if (likely(dma_mapping_error(hif->dev, data) == 0)) {
8469 + writel(DDR_PHYS_TO_PFE(data), &desc->data);
8470 + } else {
8471 + pr_err("%s : low on mem\n", __func__);
8472 +
8473 + goto err;
8474 + }
8475 +
8476 + writel(0, &desc->status);
8477 +
8478 + /*
8479 + * Ensure everything else is written to DDR before
8480 + * writing bd->ctrl
8481 + */
8482 + wmb();
8483 +
8484 + writel((BD_CTRL_PKT_INT_EN | BD_CTRL_LIFM
8485 + | BD_CTRL_DIR | BD_CTRL_DESC_EN
8486 + | BD_BUF_LEN(pfe_pkt_size)), &desc->ctrl);
8487 +
8488 + /* Chain descriptors */
8489 + writel((u32)DDR_PHYS_TO_PFE(first_desc_p + i + 1), &desc->next);
8490 + desc++;
8491 + }
8492 +
8493 + /* Overwrite last descriptor to chain it to first one*/
8494 + desc--;
8495 + writel((u32)DDR_PHYS_TO_PFE(first_desc_p), &desc->next);
8496 +
8497 + hif->rxtoclean_index = 0;
8498 +
8499 + /*Initialize Rx buffer descriptor ring base address */
8500 + writel(DDR_PHYS_TO_PFE(hif->descr_baseaddr_p), HIF_RX_BDP_ADDR);
8501 +
8502 + hif->tx_base = hif->rx_base + hif->rx_ring_size;
8503 + first_desc_p = (struct hif_desc *)hif->descr_baseaddr_p +
8504 + hif->rx_ring_size;
8505 + memset(hif->tx_base, 0, hif->tx_ring_size * sizeof(struct hif_desc));
8506 +
8507 + /*Initialize tx descriptors */
8508 + desc = hif->tx_base;
8509 +
8510 + for (i = 0; i < hif->tx_ring_size; i++) {
8511 + /* Chain descriptors */
8512 + writel((u32)DDR_PHYS_TO_PFE(first_desc_p + i + 1), &desc->next);
8513 + writel(0, &desc->ctrl);
8514 + desc++;
8515 + }
8516 +
8517 + /* Overwrite last descriptor to chain it to first one */
8518 + desc--;
8519 + writel((u32)DDR_PHYS_TO_PFE(first_desc_p), &desc->next);
8520 + hif->txavail = hif->tx_ring_size;
8521 + hif->txtosend = 0;
8522 + hif->txtoclean = 0;
8523 + hif->txtoflush = 0;
8524 +
8525 + /*Initialize Tx buffer descriptor ring base address */
8526 + writel((u32)DDR_PHYS_TO_PFE(first_desc_p), HIF_TX_BDP_ADDR);
8527 +
8528 + return 0;
8529 +
8530 +err:
8531 + pfe_hif_release_buffers(hif);
8532 + return -ENOMEM;
8533 +}
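/*
 * Worked example (illustrative): the Rx BD control word written in
 * pfe_hif_init_buffers() above composes the interrupt, last-fragment,
 * direction, ownership and length fields in one store; isolating it
 * makes the layout easier to see.
 */
static u32 hif_rx_bd_ctrl(u32 buf_len)
{
	return BD_CTRL_PKT_INT_EN | BD_CTRL_LIFM | BD_CTRL_DIR |
	       BD_CTRL_DESC_EN | BD_BUF_LEN(buf_len);
}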
8534 +
8535 +/*
8536 + * pfe_hif_client_register
8537 + *
8538 + * This function is used to register a client driver with the HIF driver.
8539 + *
8540 + * Return value:
8541 + * 0 - on successful registration
8542 + */
8543 +static int pfe_hif_client_register(struct pfe_hif *hif, u32 client_id,
8544 + struct hif_client_shm *client_shm)
8545 +{
8546 + struct hif_client *client = &hif->client[client_id];
8547 + u32 i, cnt;
8548 + struct rx_queue_desc *rx_qbase;
8549 + struct tx_queue_desc *tx_qbase;
8550 + struct hif_rx_queue *rx_queue;
8551 + struct hif_tx_queue *tx_queue;
8552 + int err = 0;
8553 +
8554 + pr_info("%s\n", __func__);
8555 +
8556 + spin_lock_bh(&hif->tx_lock);
8557 +
8558 + if (test_bit(client_id, &hif->shm->g_client_status[0])) {
8559 + pr_err("%s: client %d already registered\n",
8560 + __func__, client_id);
8561 + err = -1;
8562 + goto unlock;
8563 + }
8564 +
8565 + memset(client, 0, sizeof(struct hif_client));
8566 +
8567 + /* Initialize client Rx queues baseaddr, size */
8568 +
8569 + cnt = CLIENT_CTRL_RX_Q_CNT(client_shm->ctrl);
8570 +	/* Check if the client is requesting more queues than supported */
8571 + if (cnt > HIF_CLIENT_QUEUES_MAX)
8572 + cnt = HIF_CLIENT_QUEUES_MAX;
8573 +
8574 + client->rx_qn = cnt;
8575 + rx_qbase = (struct rx_queue_desc *)client_shm->rx_qbase;
8576 + for (i = 0; i < cnt; i++) {
8577 + rx_queue = &client->rx_q[i];
8578 + rx_queue->base = rx_qbase + i * client_shm->rx_qsize;
8579 + rx_queue->size = client_shm->rx_qsize;
8580 + rx_queue->write_idx = 0;
8581 + }
8582 +
8583 + /* Initialize client Tx queues baseaddr, size */
8584 + cnt = CLIENT_CTRL_TX_Q_CNT(client_shm->ctrl);
8585 +
8586 +	/* Check if the client is requesting more queues than supported */
8587 + if (cnt > HIF_CLIENT_QUEUES_MAX)
8588 + cnt = HIF_CLIENT_QUEUES_MAX;
8589 +
8590 + client->tx_qn = cnt;
8591 + tx_qbase = (struct tx_queue_desc *)client_shm->tx_qbase;
8592 + for (i = 0; i < cnt; i++) {
8593 + tx_queue = &client->tx_q[i];
8594 + tx_queue->base = tx_qbase + i * client_shm->tx_qsize;
8595 + tx_queue->size = client_shm->tx_qsize;
8596 + tx_queue->ack_idx = 0;
8597 + }
8598 +
8599 + set_bit(client_id, &hif->shm->g_client_status[0]);
8600 +
8601 +unlock:
8602 + spin_unlock_bh(&hif->tx_lock);
8603 +
8604 + return err;
8605 +}
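+
+/*
+ * Editor's sketch (illustrative, not part of the driver): client_shm->ctrl
+ * packs the Rx queue count in bits 0-7 and the Tx queue count in bits
+ * 8-15 (see the CLIENT_CTRL_* macros in pfe_hif_lib.h), mirroring how
+ * hif_lib_client_register builds it on the client side:
+ */
+static inline u32 hif_client_ctrl_word(u32 rx_qn, u32 tx_qn)
+{
+	return (tx_qn << CLIENT_CTRL_TX_Q_CNT_OFST) |
+	       (rx_qn << CLIENT_CTRL_RX_Q_CNT_OFST);
+}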
8606 +
8607 +/*
8608 + * pfe_hif_client_unregister
8609 + *
8610 + * This function is used to unregister a client from the HIF driver.
8611 + *
8612 + */
8613 +static void pfe_hif_client_unregister(struct pfe_hif *hif, u32 client_id)
8614 +{
8615 + pr_info("%s\n", __func__);
8616 +
8617 + /*
8618 + * Mark client as no longer available (which prevents further packet
8619 + * receive for this client)
8620 + */
8621 + spin_lock_bh(&hif->tx_lock);
8622 +
8623 + if (!test_bit(client_id, &hif->shm->g_client_status[0])) {
8624 + pr_err("%s: client %d not registered\n", __func__,
8625 + client_id);
8626 +
8627 + spin_unlock_bh(&hif->tx_lock);
8628 + return;
8629 + }
8630 +
8631 + clear_bit(client_id, &hif->shm->g_client_status[0]);
8632 +
8633 + spin_unlock_bh(&hif->tx_lock);
8634 +}
8635 +
8636 +/*
8637 + * client_put_rxpacket-
8638 + * This function puts the Rx pkt in the given client Rx queue.
8639 + * It actually swaps the Rx pkt with a free buffer from the client Rx
8640 + * descriptor ring and returns that free buffer.
8641 + *
8642 + * A NULL return value means the client Rx queue is full and the
8643 + * packet could not be handed to the client.
8644 + */
8645 +static void *client_put_rxpacket(struct hif_rx_queue *queue, void *pkt, u32 len,
8646 + u32 flags, u32 client_ctrl, u32 *rem_len)
8647 +{
8648 + void *free_pkt = NULL;
8649 + struct rx_queue_desc *desc = queue->base + queue->write_idx;
8650 +
8651 + if (readl(&desc->ctrl) & CL_DESC_OWN) {
8652 + if (page_mode) {
8653 + int rem_page_size = PAGE_SIZE -
8654 + PRESENT_OFST_IN_PAGE(pkt);
8655 + int cur_pkt_size = ROUND_MIN_RX_SIZE(len +
8656 + pfe_pkt_headroom);
8657 + *rem_len = (rem_page_size - cur_pkt_size);
8658 + if (*rem_len) {
8659 + free_pkt = pkt + cur_pkt_size;
8660 + get_page(virt_to_page(free_pkt));
8661 + } else {
8662 +				free_pkt = (void *)
8663 +					__get_free_page(GFP_ATOMIC | GFP_DMA_PFE);
8664 + *rem_len = pfe_pkt_size;
8665 + }
8666 + } else {
8667 + free_pkt = kmalloc(PFE_BUF_SIZE, GFP_ATOMIC |
8668 + GFP_DMA_PFE);
8669 + *rem_len = PFE_BUF_SIZE - pfe_pkt_headroom;
8670 + }
8671 +
8672 + if (free_pkt) {
8673 + desc->data = pkt;
8674 + desc->client_ctrl = client_ctrl;
8675 + /*
8676 + * Ensure everything else is written to DDR before
8677 + * writing bd->ctrl
8678 + */
8679 + smp_wmb();
8680 + writel(CL_DESC_BUF_LEN(len) | flags, &desc->ctrl);
8681 + queue->write_idx = (queue->write_idx + 1)
8682 + & (queue->size - 1);
8683 +
8684 + free_pkt += pfe_pkt_headroom;
8685 + }
8686 + }
8687 +
8688 + return free_pkt;
8689 +}
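+
+/*
+ * Editor's sketch (illustrative): in page mode the function above carves
+ * rx buffers out of a page in HIF_RX_PKT_MIN_SIZE (2 KiB) steps. Given a
+ * buffer inside a page and the received length, the room left for the
+ * next carve-out is:
+ */
+static inline int hif_page_space_after(void *buf, u32 len)
+{
+	int chunk = ROUND_MIN_RX_SIZE(len + pfe_pkt_headroom);
+
+	return (int)(PAGE_SIZE - PRESENT_OFST_IN_PAGE(buf)) - chunk;
+}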
8690 +
8691 +/*
8692 + * pfe_hif_rx_process-
8693 + * This function does pfe hif rx queue processing.
8694 + * Dequeues packets from the Rx queue and sends them to the corresponding
8695 + * client queues.
8695 + */
8696 +static int pfe_hif_rx_process(struct pfe_hif *hif, int budget)
8697 +{
8698 + struct hif_desc *desc;
8699 + struct hif_hdr *pkt_hdr;
8700 + struct __hif_hdr hif_hdr;
8701 + void *free_buf;
8702 + int rtc, len, rx_processed = 0;
8703 + struct __hif_desc local_desc;
8704 + int flags;
8705 + unsigned int desc_p;
8706 + unsigned int buf_size = 0;
8707 +
8708 + spin_lock_bh(&hif->lock);
8709 +
8710 + rtc = hif->rxtoclean_index;
8711 +
8712 + while (rx_processed < budget) {
8713 + desc = hif->rx_base + rtc;
8714 +
8715 + __memcpy12(&local_desc, desc);
8716 +
8717 + /* ACK pending Rx interrupt */
8718 + if (local_desc.ctrl & BD_CTRL_DESC_EN) {
8719 + writel(HIF_INT | HIF_RXPKT_INT, HIF_INT_SRC);
8720 +
8721 + if (rx_processed == 0) {
8722 + if (napi_first_batch == 1) {
8723 + desc_p = hif->descr_baseaddr_p +
8724 + ((unsigned long int)(desc) -
8725 + (unsigned long
8726 + int)hif->descr_baseaddr_v);
8727 + napi_first_batch = 0;
8728 + }
8729 + }
8730 +
8731 + __memcpy12(&local_desc, desc);
8732 +
8733 + if (local_desc.ctrl & BD_CTRL_DESC_EN)
8734 + break;
8735 + }
8736 +
8737 + napi_first_batch = 0;
8738 +
8739 +#ifdef HIF_NAPI_STATS
8740 + hif->napi_counters[NAPI_DESC_COUNT]++;
8741 +#endif
8742 + len = BD_BUF_LEN(local_desc.ctrl);
8747 + dma_unmap_single(hif->dev, DDR_PFE_TO_PHYS(local_desc.data),
8748 + hif->rx_buf_len[rtc], DMA_FROM_DEVICE);
8749 +
8750 + pkt_hdr = (struct hif_hdr *)hif->rx_buf_addr[rtc];
8751 +
8752 +		/* First buffer of a packet: parse and save the HIF header */
8753 + if (!hif->started) {
8754 + hif->started = 1;
8755 +
8756 + __memcpy8(&hif_hdr, pkt_hdr);
8757 +
8758 + hif->qno = hif_hdr.hdr.q_num;
8759 + hif->client_id = hif_hdr.hdr.client_id;
8760 + hif->client_ctrl = (hif_hdr.hdr.client_ctrl1 << 16) |
8761 + hif_hdr.hdr.client_ctrl;
8762 + flags = CL_DESC_FIRST;
8763 +
8764 + } else {
8765 + flags = 0;
8766 + }
8767 +
8768 + if (local_desc.ctrl & BD_CTRL_LIFM)
8769 + flags |= CL_DESC_LAST;
8770 +
8771 + /* Check for valid client id and still registered */
8772 + if ((hif->client_id >= HIF_CLIENTS_MAX) ||
8773 + !(test_bit(hif->client_id,
8774 + &hif->shm->g_client_status[0]))) {
8775 +			printk_ratelimited(KERN_ERR "%s: packet with invalid client id %d q_num %d\n",
8776 + __func__,
8777 + hif->client_id,
8778 + hif->qno);
8779 +
8780 + free_buf = pkt_hdr;
8781 +
8782 + goto pkt_drop;
8783 + }
8784 +
8785 +		/* Check for a valid queue number */
8786 + if (hif->client[hif->client_id].rx_qn <= hif->qno) {
8787 +			pr_info("%s: packet with invalid queue: %d\n",
8788 +				__func__, hif->qno);
8789 + hif->qno = 0;
8790 + }
8791 +
8792 + free_buf =
8793 + client_put_rxpacket(&hif->client[hif->client_id].rx_q[hif->qno],
8794 + (void *)pkt_hdr, len, flags,
8795 + hif->client_ctrl, &buf_size);
8796 +
8797 + hif_lib_indicate_client(hif->client_id, EVENT_RX_PKT_IND,
8798 + hif->qno);
8799 +
8800 + if (unlikely(!free_buf)) {
8801 +#ifdef HIF_NAPI_STATS
8802 + hif->napi_counters[NAPI_CLIENT_FULL_COUNT]++;
8803 +#endif
8804 + /*
8805 +			 * If we want to stay in polling mode to retry later,
8806 +			 * we need to tell napi that we consumed the full
8807 +			 * budget, or we will hit a livelock: the core code
8808 +			 * would keep this napi instance at the head of the
8809 +			 * list and none of the other instances would get to
8810 +			 * run.
8811 + */
8812 + rx_processed = budget;
8813 +
8814 + if (flags & CL_DESC_FIRST)
8815 + hif->started = 0;
8816 +
8817 + break;
8818 + }
8819 +
8820 +pkt_drop:
8821 + /*Fill free buffer in the descriptor */
8822 + hif->rx_buf_addr[rtc] = free_buf;
8823 + hif->rx_buf_len[rtc] = min(pfe_pkt_size, buf_size);
8824 + writel((DDR_PHYS_TO_PFE
8825 + ((u32)dma_map_single(hif->dev,
8826 + free_buf, hif->rx_buf_len[rtc], DMA_FROM_DEVICE))),
8827 + &desc->data);
8828 + /*
8829 + * Ensure everything else is written to DDR before
8830 + * writing bd->ctrl
8831 + */
8832 + wmb();
8833 + writel((BD_CTRL_PKT_INT_EN | BD_CTRL_LIFM | BD_CTRL_DIR |
8834 + BD_CTRL_DESC_EN | BD_BUF_LEN(hif->rx_buf_len[rtc])),
8835 + &desc->ctrl);
8836 +
8837 + rtc = (rtc + 1) & (hif->rx_ring_size - 1);
8838 +
8839 + if (local_desc.ctrl & BD_CTRL_LIFM) {
8840 + if (!(hif->client_ctrl & HIF_CTRL_RX_CONTINUED)) {
8841 + rx_processed++;
8842 +
8843 +#ifdef HIF_NAPI_STATS
8844 + hif->napi_counters[NAPI_PACKET_COUNT]++;
8845 +#endif
8846 + }
8847 + hif->started = 0;
8848 + }
8849 + }
8850 +
8851 + hif->rxtoclean_index = rtc;
8852 + spin_unlock_bh(&hif->lock);
8853 +
8854 + /* we made some progress, re-start rx dma in case it stopped */
8855 + hif_rx_dma_start();
8856 +
8857 + return rx_processed;
8858 +}
8859 +
8860 +/*
8861 + * client_ack_txpacket-
8862 + * This function acks the Tx packet in the given client Tx queue by
8863 + * resetting the ownership bit in the descriptor.
8864 + */
8865 +static int client_ack_txpacket(struct pfe_hif *hif, unsigned int client_id,
8866 + unsigned int q_no)
8867 +{
8868 + struct hif_tx_queue *queue = &hif->client[client_id].tx_q[q_no];
8869 + struct tx_queue_desc *desc = queue->base + queue->ack_idx;
8870 +
8871 + if (readl(&desc->ctrl) & CL_DESC_OWN) {
8872 + writel((readl(&desc->ctrl) & ~CL_DESC_OWN), &desc->ctrl);
8873 + queue->ack_idx = (queue->ack_idx + 1) & (queue->size - 1);
8874 +
8875 + return 0;
8876 +
8877 + } else {
8878 + /*This should not happen */
8879 + pr_err("%s: %d %d %d %d %d %p %d\n", __func__,
8880 + hif->txtosend, hif->txtoclean, hif->txavail,
8881 + client_id, q_no, queue, queue->ack_idx);
8882 + WARN(1, "%s: doesn't own this descriptor", __func__);
8883 + return 1;
8884 + }
8885 +}
8886 +
8887 +void __hif_tx_done_process(struct pfe_hif *hif, int count)
8888 +{
8889 + struct hif_desc *desc;
8890 + struct hif_desc_sw *desc_sw;
8891 + int ttc, tx_avl;
8892 + int pkts_done[HIF_CLIENTS_MAX] = {0, 0};
8893 +
8894 + ttc = hif->txtoclean;
8895 + tx_avl = hif->txavail;
8896 +
8897 + while ((tx_avl < hif->tx_ring_size) && count--) {
8898 + desc = hif->tx_base + ttc;
8899 +
8900 + if (readl(&desc->ctrl) & BD_CTRL_DESC_EN)
8901 + break;
8902 +
8903 + desc_sw = &hif->tx_sw_queue[ttc];
8904 +
8905 + if (desc_sw->data) {
8910 + dma_unmap_single(hif->dev, desc_sw->data,
8911 + desc_sw->len, DMA_TO_DEVICE);
8912 + }
8913 +
8914 + if (desc_sw->client_id >= HIF_CLIENTS_MAX) {
8915 + pr_err("Invalid cl id %d\n", desc_sw->client_id);
8916 + break;
8917 + }
8918 +
8919 + pkts_done[desc_sw->client_id]++;
8920 +
8921 + client_ack_txpacket(hif, desc_sw->client_id, desc_sw->q_no);
8922 +
8923 + ttc = (ttc + 1) & (hif->tx_ring_size - 1);
8924 + tx_avl++;
8925 + }
8926 +
8927 + if (pkts_done[0])
8928 + hif_lib_indicate_client(0, EVENT_TXDONE_IND, 0);
8929 + if (pkts_done[1])
8930 + hif_lib_indicate_client(1, EVENT_TXDONE_IND, 0);
8931 +
8932 + hif->txtoclean = ttc;
8933 + hif->txavail = tx_avl;
8934 +
8935 + if (!count) {
8936 + tasklet_schedule(&hif->tx_cleanup_tasklet);
8937 + } else {
8938 + /*Enable Tx done interrupt */
8939 + writel(readl_relaxed(HIF_INT_ENABLE) | HIF_TXPKT_INT,
8940 + HIF_INT_ENABLE);
8941 + }
8942 +}
8943 +
8944 +static void pfe_tx_do_cleanup(unsigned long data)
8945 +{
8946 + struct pfe_hif *hif = (struct pfe_hif *)data;
8947 +
8948 + writel(HIF_INT | HIF_TXPKT_INT, HIF_INT_SRC);
8949 +
8950 + hif_tx_done_process(hif, 64);
8951 +}
8952 +
8953 +/*
8954 + * __hif_xmit_pkt -
8955 + * This function puts one packet in the HIF Tx queue
8956 + */
8957 +void __hif_xmit_pkt(struct pfe_hif *hif, unsigned int client_id, unsigned int
8958 + q_no, void *data, u32 len, unsigned int flags)
8959 +{
8960 + struct hif_desc *desc;
8961 + struct hif_desc_sw *desc_sw;
8962 +
8963 + desc = hif->tx_base + hif->txtosend;
8964 + desc_sw = &hif->tx_sw_queue[hif->txtosend];
8965 +
8966 + desc_sw->len = len;
8967 + desc_sw->client_id = client_id;
8968 + desc_sw->q_no = q_no;
8969 + desc_sw->flags = flags;
8970 +
8971 + if (flags & HIF_DONT_DMA_MAP) {
8972 + desc_sw->data = 0;
8973 + writel((u32)DDR_PHYS_TO_PFE(data), &desc->data);
8974 + } else {
8975 + desc_sw->data = dma_map_single(hif->dev, data, len,
8976 + DMA_TO_DEVICE);
8977 + writel((u32)DDR_PHYS_TO_PFE(desc_sw->data), &desc->data);
8978 + }
8979 +
8980 + hif->txtosend = (hif->txtosend + 1) & (hif->tx_ring_size - 1);
8981 + hif->txavail--;
8982 +
8983 +	if (!((flags & HIF_DATA_VALID) && (flags & HIF_LAST_BUFFER)))
8985 + goto skip_tx;
8986 +
8987 + /*
8988 + * Ensure everything else is written to DDR before
8989 + * writing bd->ctrl
8990 + */
8991 + wmb();
8992 +
8993 + do {
8994 + desc_sw = &hif->tx_sw_queue[hif->txtoflush];
8995 + desc = hif->tx_base + hif->txtoflush;
8996 +
8997 + if (desc_sw->flags & HIF_LAST_BUFFER) {
8998 + writel((BD_CTRL_LIFM |
8999 + BD_CTRL_BRFETCH_DISABLE | BD_CTRL_RTFETCH_DISABLE
9000 + | BD_CTRL_PARSE_DISABLE | BD_CTRL_DESC_EN |
9001 + BD_CTRL_PKT_INT_EN | BD_BUF_LEN(desc_sw->len)),
9002 + &desc->ctrl);
9003 + } else {
9004 + writel((BD_CTRL_DESC_EN |
9005 + BD_BUF_LEN(desc_sw->len)), &desc->ctrl);
9006 + }
9007 + hif->txtoflush = (hif->txtoflush + 1) & (hif->tx_ring_size - 1);
9008 +	} while (hif->txtoflush != hif->txtosend);
9011 +
9012 +skip_tx:
9013 + return;
9014 +}
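+
+/*
+ * Editor's sketch (hypothetical usage, not part of the driver): a
+ * two-fragment transmit. Descriptors are only enabled for the hardware
+ * once the fragment carrying HIF_DATA_VALID | HIF_LAST_BUFFER is queued,
+ * so the PFE never sees a partial packet. The caller is assumed to hold
+ * hif->tx_lock (hif_tx_lock()) and to have checked __hif_tx_avail() for
+ * at least two free descriptors.
+ */
+static inline void example_xmit_two_frags(struct pfe_hif *hif,
+					  void *hdr, u32 hlen,
+					  void *payload, u32 plen)
+{
+	__hif_xmit_pkt(hif, PFE_CL_GEM0, 0, hdr, hlen, HIF_FIRST_BUFFER);
+	__hif_xmit_pkt(hif, PFE_CL_GEM0, 0, payload, plen,
+		       HIF_LAST_BUFFER | HIF_DATA_VALID);
+}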
9015 +
9016 +static irqreturn_t wol_isr(int irq, void *dev_id)
9017 +{
9018 + pr_info("WoL\n");
9019 + gemac_set_wol(EMAC1_BASE_ADDR, 0);
9020 + gemac_set_wol(EMAC2_BASE_ADDR, 0);
9021 + return IRQ_HANDLED;
9022 +}
9023 +
9024 +/*
9025 + * hif_isr-
9026 + * This ISR routine processes Rx/Tx done interrupts from the HIF hardware block
9027 + */
9028 +static irqreturn_t hif_isr(int irq, void *dev_id)
9029 +{
9030 + struct pfe_hif *hif = (struct pfe_hif *)dev_id;
9031 + int int_status;
9032 + int int_enable_mask;
9033 +
9034 + /*Read hif interrupt source register */
9035 + int_status = readl_relaxed(HIF_INT_SRC);
9036 + int_enable_mask = readl_relaxed(HIF_INT_ENABLE);
9037 +
9038 + if ((int_status & HIF_INT) == 0)
9039 + return IRQ_NONE;
9040 +
9041 + int_status &= ~(HIF_INT);
9042 +
9043 + if (int_status & HIF_RXPKT_INT) {
9044 + int_status &= ~(HIF_RXPKT_INT);
9045 + int_enable_mask &= ~(HIF_RXPKT_INT);
9046 +
9047 + napi_first_batch = 1;
9048 +
9049 + if (napi_schedule_prep(&hif->napi)) {
9050 +#ifdef HIF_NAPI_STATS
9051 + hif->napi_counters[NAPI_SCHED_COUNT]++;
9052 +#endif
9053 + __napi_schedule(&hif->napi);
9054 + }
9055 + }
9056 +
9057 + if (int_status & HIF_TXPKT_INT) {
9058 + int_status &= ~(HIF_TXPKT_INT);
9059 + int_enable_mask &= ~(HIF_TXPKT_INT);
9060 +		/* Schedule tx cleanup tasklet */
9061 + tasklet_schedule(&hif->tx_cleanup_tasklet);
9062 + }
9063 +
9064 + /*Disable interrupts, they will be enabled after they are serviced */
9065 + writel_relaxed(int_enable_mask, HIF_INT_ENABLE);
9066 +
9067 + if (int_status) {
9068 + pr_info("%s : Invalid interrupt : %d\n", __func__,
9069 + int_status);
9070 + writel(int_status, HIF_INT_SRC);
9071 + }
9072 +
9073 + return IRQ_HANDLED;
9074 +}
9075 +
9076 +void hif_process_client_req(struct pfe_hif *hif, int req, int data1, int data2)
9077 +{
9078 + unsigned int client_id = data1;
9079 +
9080 + if (client_id >= HIF_CLIENTS_MAX) {
9081 + pr_err("%s: client id %d out of bounds\n", __func__,
9082 + client_id);
9083 + return;
9084 + }
9085 +
9086 + switch (req) {
9087 + case REQUEST_CL_REGISTER:
9088 +		/* Request to register a client */
9089 + pr_info("%s: register client_id %d\n",
9090 + __func__, client_id);
9091 + pfe_hif_client_register(hif, client_id, (struct
9092 + hif_client_shm *)&hif->shm->client[client_id]);
9093 + break;
9094 +
9095 + case REQUEST_CL_UNREGISTER:
9096 + pr_info("%s: unregister client_id %d\n",
9097 + __func__, client_id);
9098 +
9099 +		/* Request to unregister a client */
9100 + pfe_hif_client_unregister(hif, client_id);
9101 +
9102 + break;
9103 +
9104 + default:
9105 + pr_err("%s: unsupported request %d\n",
9106 + __func__, req);
9107 + break;
9108 + }
9109 +
9110 + /*
9111 + * Process client Tx queues
9112 +	 * Currently we don't check for pending tx packets
9113 + */
9114 +}
9115 +
9116 +/*
9117 + * pfe_hif_rx_poll
9118 + * This function is NAPI poll function to process HIF Rx queue.
9119 + */
9120 +static int pfe_hif_rx_poll(struct napi_struct *napi, int budget)
9121 +{
9122 + struct pfe_hif *hif = container_of(napi, struct pfe_hif, napi);
9123 + int work_done;
9124 +
9125 +#ifdef HIF_NAPI_STATS
9126 + hif->napi_counters[NAPI_POLL_COUNT]++;
9127 +#endif
9128 +
9129 + work_done = pfe_hif_rx_process(hif, budget);
9130 +
9131 + if (work_done < budget) {
9132 + napi_complete(napi);
9133 + writel(readl_relaxed(HIF_INT_ENABLE) | HIF_RXPKT_INT,
9134 + HIF_INT_ENABLE);
9135 + }
9136 +#ifdef HIF_NAPI_STATS
9137 + else
9138 + hif->napi_counters[NAPI_FULL_BUDGET_COUNT]++;
9139 +#endif
9140 +
9141 + return work_done;
9142 +}
9143 +
9144 +/*
9145 + * pfe_hif_init
9146 + * This function initializes the base addresses, IRQs, etc.
9147 + */
9148 +int pfe_hif_init(struct pfe *pfe)
9149 +{
9150 + struct pfe_hif *hif = &pfe->hif;
9151 + int err;
9152 +
9153 + pr_info("%s\n", __func__);
9154 +
9155 + hif->dev = pfe->dev;
9156 + hif->irq = pfe->hif_irq;
9157 +
9158 + err = pfe_hif_alloc_descr(hif);
9159 + if (err)
9160 + goto err0;
9161 +
9162 + if (pfe_hif_init_buffers(hif)) {
9163 +		pr_err("%s: Could not initialize buffer descriptors\n",
9164 +			__func__);
9165 + err = -ENOMEM;
9166 + goto err1;
9167 + }
9168 +
9169 + /* Initialize NAPI for Rx processing */
9170 + init_dummy_netdev(&hif->dummy_dev);
9171 + netif_napi_add(&hif->dummy_dev, &hif->napi, pfe_hif_rx_poll,
9172 + HIF_RX_POLL_WEIGHT);
9173 + napi_enable(&hif->napi);
9174 +
9175 + spin_lock_init(&hif->tx_lock);
9176 + spin_lock_init(&hif->lock);
9177 +
9178 + hif_init();
9179 + hif_rx_enable();
9180 + hif_tx_enable();
9181 +
9182 + /* Disable tx done interrupt */
9183 + writel(HIF_INT_MASK, HIF_INT_ENABLE);
9184 +
9185 + gpi_enable(HGPI_BASE_ADDR);
9186 +
9187 + err = request_irq(hif->irq, hif_isr, 0, "pfe_hif", hif);
9188 + if (err) {
9189 + pr_err("%s: failed to get the hif IRQ = %d\n",
9190 + __func__, hif->irq);
9191 + goto err1;
9192 + }
9193 +
9194 + err = request_irq(pfe->wol_irq, wol_isr, 0, "pfe_wol", pfe);
9195 + if (err) {
9196 + pr_err("%s: failed to get the wol IRQ = %d\n",
9197 + __func__, pfe->wol_irq);
9198 + goto err1;
9199 + }
9200 +
9201 + tasklet_init(&hif->tx_cleanup_tasklet,
9202 + (void(*)(unsigned long))pfe_tx_do_cleanup,
9203 + (unsigned long)hif);
9204 +
9205 + return 0;
9206 +err1:
9207 + pfe_hif_free_descr(hif);
9208 +err0:
9209 + return err;
9210 +}
9211 +
9212 +/* pfe_hif_exit - releases the resources allocated by pfe_hif_init */
9213 +void pfe_hif_exit(struct pfe *pfe)
9214 +{
9215 + struct pfe_hif *hif = &pfe->hif;
9216 +
9217 + pr_info("%s\n", __func__);
9218 +
9219 + tasklet_kill(&hif->tx_cleanup_tasklet);
9220 +
9221 + spin_lock_bh(&hif->lock);
9222 +	/* Make sure all clients are disabled */
9223 +	hif->shm->g_client_status[0] = 0;
9224 +	hif->shm->g_client_status[1] = 0;
9225 +
9226 + spin_unlock_bh(&hif->lock);
9227 +
9228 + /*Disable Rx/Tx */
9229 + gpi_disable(HGPI_BASE_ADDR);
9230 + hif_rx_disable();
9231 + hif_tx_disable();
9232 +
9233 + napi_disable(&hif->napi);
9234 + netif_napi_del(&hif->napi);
9235 +
9236 + free_irq(pfe->wol_irq, pfe);
9237 + free_irq(hif->irq, hif);
9238 +
9239 + pfe_hif_release_buffers(hif);
9240 + pfe_hif_free_descr(hif);
9241 +}
9242 --- /dev/null
9243 +++ b/drivers/staging/fsl_ppfe/pfe_hif.h
9244 @@ -0,0 +1,199 @@
9245 +/* SPDX-License-Identifier: GPL-2.0+ */
9246 +/*
9247 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
9248 + * Copyright 2017 NXP
9249 + */
9250 +
9251 +#ifndef _PFE_HIF_H_
9252 +#define _PFE_HIF_H_
9253 +
9254 +#include <linux/netdevice.h>
9255 +
9256 +#define HIF_NAPI_STATS
9257 +
9258 +#define HIF_CLIENT_QUEUES_MAX 16
9259 +#define HIF_RX_POLL_WEIGHT 64
9260 +
9261 +#define HIF_RX_PKT_MIN_SIZE 0x800 /* 2KB */
9262 +#define HIF_RX_PKT_MIN_SIZE_MASK ~(HIF_RX_PKT_MIN_SIZE - 1)
9263 +#define ROUND_MIN_RX_SIZE(_sz) (((_sz) + (HIF_RX_PKT_MIN_SIZE - 1)) \
9264 + & HIF_RX_PKT_MIN_SIZE_MASK)
9265 +#define PRESENT_OFST_IN_PAGE(_buf) (((unsigned long int)(_buf) & (PAGE_SIZE \
9266 + - 1)) & HIF_RX_PKT_MIN_SIZE_MASK)
9267 +
9268 +enum {
9269 + NAPI_SCHED_COUNT = 0,
9270 + NAPI_POLL_COUNT,
9271 + NAPI_PACKET_COUNT,
9272 + NAPI_DESC_COUNT,
9273 + NAPI_FULL_BUDGET_COUNT,
9274 + NAPI_CLIENT_FULL_COUNT,
9275 + NAPI_MAX_COUNT
9276 +};
9277 +
9278 +/*
9279 + * HIF_TX_DESC_NT value should always be greater than 4,
9280 + * otherwise HIF_TX_POLL_MARK will become zero.
9281 + */
9282 +#define HIF_RX_DESC_NT 256
9283 +#define HIF_TX_DESC_NT 2048
9284 +
9285 +#define HIF_FIRST_BUFFER BIT(0)
9286 +#define HIF_LAST_BUFFER BIT(1)
9287 +#define HIF_DONT_DMA_MAP BIT(2)
9288 +#define HIF_DATA_VALID BIT(3)
9289 +#define HIF_TSO BIT(4)
9290 +
9291 +enum {
9292 + PFE_CL_GEM0 = 0,
9293 + PFE_CL_GEM1,
9294 + HIF_CLIENTS_MAX
9295 +};
9296 +
9297 +/*structure to store client queue info */
9298 +struct hif_rx_queue {
9299 + struct rx_queue_desc *base;
9300 + u32 size;
9301 + u32 write_idx;
9302 +};
9303 +
9304 +struct hif_tx_queue {
9305 + struct tx_queue_desc *base;
9306 + u32 size;
9307 + u32 ack_idx;
9308 +};
9309 +
9310 +/*Structure to store the client info */
9311 +struct hif_client {
9312 + int rx_qn;
9313 + struct hif_rx_queue rx_q[HIF_CLIENT_QUEUES_MAX];
9314 + int tx_qn;
9315 + struct hif_tx_queue tx_q[HIF_CLIENT_QUEUES_MAX];
9316 +};
9317 +
9318 +/*HIF hardware buffer descriptor */
9319 +struct hif_desc {
9320 + u32 ctrl;
9321 + u32 status;
9322 + u32 data;
9323 + u32 next;
9324 +};
9325 +
9326 +struct __hif_desc {
9327 + u32 ctrl;
9328 + u32 status;
9329 + u32 data;
9330 +};
9331 +
9332 +struct hif_desc_sw {
9333 + dma_addr_t data;
9334 + u16 len;
9335 + u8 client_id;
9336 + u8 q_no;
9337 + u16 flags;
9338 +};
9339 +
9340 +struct hif_hdr {
9341 + u8 client_id;
9342 + u8 q_num;
9343 + u16 client_ctrl;
9344 + u16 client_ctrl1;
9345 +};
9346 +
9347 +struct __hif_hdr {
9348 + union {
9349 + struct hif_hdr hdr;
9350 + u32 word[2];
9351 + };
9352 +};
9353 +
9354 +struct hif_ipsec_hdr {
9355 + u16 sa_handle[2];
9356 +} __packed;
9357 +
9358 +/* HIF_CTRL_TX... defines */
9359 +#define HIF_CTRL_TX_CHECKSUM BIT(2)
9360 +
9361 +/* HIF_CTRL_RX... defines */
9362 +#define HIF_CTRL_RX_OFFSET_OFST (24)
9363 +#define HIF_CTRL_RX_CHECKSUMMED BIT(2)
9364 +#define HIF_CTRL_RX_CONTINUED BIT(1)
9365 +
9366 +struct pfe_hif {
9367 + /* To store registered clients in hif layer */
9368 + struct hif_client client[HIF_CLIENTS_MAX];
9369 + struct hif_shm *shm;
9370 + int irq;
9371 +
9372 + void *descr_baseaddr_v;
9373 + unsigned long descr_baseaddr_p;
9374 +
9375 + struct hif_desc *rx_base;
9376 + u32 rx_ring_size;
9377 + u32 rxtoclean_index;
9378 + void *rx_buf_addr[HIF_RX_DESC_NT];
9379 + int rx_buf_len[HIF_RX_DESC_NT];
9380 + unsigned int qno;
9381 + unsigned int client_id;
9382 + unsigned int client_ctrl;
9383 + unsigned int started;
9384 +
9385 + struct hif_desc *tx_base;
9386 + u32 tx_ring_size;
9387 + u32 txtosend;
9388 + u32 txtoclean;
9389 + u32 txavail;
9390 + u32 txtoflush;
9391 + struct hif_desc_sw tx_sw_queue[HIF_TX_DESC_NT];
9392 +
9393 +/* tx_lock synchronizes hif packet tx as well as pfe_hif structure access */
9394 + spinlock_t tx_lock;
9395 +/* lock synchronizes hif rx queue processing */
9396 + spinlock_t lock;
9397 + struct net_device dummy_dev;
9398 + struct napi_struct napi;
9399 + struct device *dev;
9400 +
9401 +#ifdef HIF_NAPI_STATS
9402 + unsigned int napi_counters[NAPI_MAX_COUNT];
9403 +#endif
9404 + struct tasklet_struct tx_cleanup_tasklet;
9405 +};
9406 +
9407 +void __hif_xmit_pkt(struct pfe_hif *hif, unsigned int client_id, unsigned int
9408 + q_no, void *data, u32 len, unsigned int flags);
9409 +int hif_xmit_pkt(struct pfe_hif *hif, unsigned int client_id, unsigned int q_no,
9410 + void *data, unsigned int len);
9411 +void __hif_tx_done_process(struct pfe_hif *hif, int count);
9412 +void hif_process_client_req(struct pfe_hif *hif, int req, int data1, int
9413 + data2);
9414 +int pfe_hif_init(struct pfe *pfe);
9415 +void pfe_hif_exit(struct pfe *pfe);
9416 +void pfe_hif_rx_idle(struct pfe_hif *hif);
9417 +static inline void hif_tx_done_process(struct pfe_hif *hif, int count)
9418 +{
9419 + spin_lock_bh(&hif->tx_lock);
9420 + __hif_tx_done_process(hif, count);
9421 + spin_unlock_bh(&hif->tx_lock);
9422 +}
9423 +
9424 +static inline void hif_tx_lock(struct pfe_hif *hif)
9425 +{
9426 + spin_lock_bh(&hif->tx_lock);
9427 +}
9428 +
9429 +static inline void hif_tx_unlock(struct pfe_hif *hif)
9430 +{
9431 + spin_unlock_bh(&hif->tx_lock);
9432 +}
9433 +
9434 +static inline int __hif_tx_avail(struct pfe_hif *hif)
9435 +{
9436 + return hif->txavail;
9437 +}
9438 +
9439 +#define __memcpy8(dst, src) memcpy(dst, src, 8)
9440 +#define __memcpy12(dst, src) memcpy(dst, src, 12)
9441 +#define __memcpy(dst, src, len) memcpy(dst, src, len)
9442 +
9443 +#endif /* _PFE_HIF_H_ */
9444 --- /dev/null
9445 +++ b/drivers/staging/fsl_ppfe/pfe_hif_lib.c
9446 @@ -0,0 +1,628 @@
9447 +// SPDX-License-Identifier: GPL-2.0+
9448 +/*
9449 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
9450 + * Copyright 2017 NXP
9451 + */
9452 +
9453 +#include <linux/version.h>
9454 +#include <linux/kernel.h>
9455 +#include <linux/slab.h>
9456 +#include <linux/interrupt.h>
9457 +#include <linux/workqueue.h>
9458 +#include <linux/dma-mapping.h>
9459 +#include <linux/dmapool.h>
9460 +#include <linux/sched.h>
9461 +#include <linux/skbuff.h>
9462 +#include <linux/moduleparam.h>
9463 +#include <linux/cpu.h>
9464 +
9465 +#include "pfe_mod.h"
9466 +#include "pfe_hif.h"
9467 +#include "pfe_hif_lib.h"
9468 +
9469 +unsigned int lro_mode;
9470 +unsigned int page_mode;
9471 +unsigned int tx_qos = 1;
9472 +module_param(tx_qos, uint, 0444);
9473 +MODULE_PARM_DESC(tx_qos, "0: disable,\n"
9474 + "1: enable (default), guarantee no packet drop at TMU level\n");
9475 +unsigned int pfe_pkt_size;
9476 +unsigned int pfe_pkt_headroom;
9477 +unsigned int emac_txq_cnt;
9478 +
9479 +/*
9480 + * @pfe_hif_lib.c
9481 + * Common functions used by HIF client drivers
9482 + */
9483 +
9484 +/*HIF shared memory Global variable */
9485 +struct hif_shm ghif_shm;
9486 +
9487 +/* Cleanup the HIF shared memory, release HIF rx_buffer_pool.
9488 + * This function should be called after pfe_hif_exit
9489 + *
9490 + * @param[in] hif_shm Shared memory address location in DDR
9491 + */
9492 +static void pfe_hif_shm_clean(struct hif_shm *hif_shm)
9493 +{
9494 + int i;
9495 + void *pkt;
9496 +
9497 + for (i = 0; i < hif_shm->rx_buf_pool_cnt; i++) {
9498 + pkt = hif_shm->rx_buf_pool[i];
9499 + if (pkt) {
9500 + hif_shm->rx_buf_pool[i] = NULL;
9501 + pkt -= pfe_pkt_headroom;
9502 +
9503 + if (page_mode)
9504 + put_page(virt_to_page(pkt));
9505 + else
9506 + kfree(pkt);
9507 + }
9508 + }
9509 +}
9510 +
9511 +/* Initialize shared memory used between HIF driver and clients,
9512 + * allocate rx_buffer_pool required for HIF Rx descriptors.
9513 + * This function should be called before initializing HIF driver.
9514 + *
9515 + * @param[in] hif_shm Shared memory address location in DDR
9516 + * @return 0 on success, <0 on failure to initialize
9517 + */
9518 +static int pfe_hif_shm_init(struct hif_shm *hif_shm)
9519 +{
9520 + int i;
9521 + void *pkt;
9522 +
9523 + memset(hif_shm, 0, sizeof(struct hif_shm));
9524 + hif_shm->rx_buf_pool_cnt = HIF_RX_DESC_NT;
9525 +
9526 + for (i = 0; i < hif_shm->rx_buf_pool_cnt; i++) {
9527 + if (page_mode) {
9528 + pkt = (void *)__get_free_page(GFP_KERNEL |
9529 + GFP_DMA_PFE);
9530 + } else {
9531 + pkt = kmalloc(PFE_BUF_SIZE, GFP_KERNEL | GFP_DMA_PFE);
9532 + }
9533 +
9534 + if (pkt)
9535 + hif_shm->rx_buf_pool[i] = pkt + pfe_pkt_headroom;
9536 + else
9537 + goto err0;
9538 + }
9539 +
9540 + return 0;
9541 +
9542 +err0:
9543 + pr_err("%s Low memory\n", __func__);
9544 + pfe_hif_shm_clean(hif_shm);
9545 + return -ENOMEM;
9546 +}
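+
+/*
+ * Editor's sketch (hypothetical, inferred from the comments above): the
+ * buffer pool must exist before the HIF rings are built from it, and may
+ * only be cleaned once HIF is gone, so bring-up pairs the two layers as:
+ */
+static inline int example_hif_bring_up(struct pfe *pfe_dev)
+{
+	int err = pfe_hif_lib_init(pfe_dev);	/* fills rx_buf_pool */
+
+	if (err)
+		return err;
+
+	err = pfe_hif_init(pfe_dev);		/* consumes rx_buf_pool */
+	if (err)
+		pfe_hif_lib_exit(pfe_dev);
+
+	return err;
+}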
9547 +
9548 +/* This function sends an indication to the HIF driver
9549 + *
9550 + * @param[in] hif hif context
9551 + */
9552 +static void hif_lib_indicate_hif(struct pfe_hif *hif, int req, int data1, int
9553 + data2)
9554 +{
9555 + hif_process_client_req(hif, req, data1, data2);
9556 +}
9557 +
9558 +void hif_lib_indicate_client(int client_id, int event_type, int qno)
9559 +{
9560 + struct hif_client_s *client = pfe->hif_client[client_id];
9561 +
9562 + if (!client || (event_type >= HIF_EVENT_MAX) || (qno >=
9563 + HIF_CLIENT_QUEUES_MAX))
9564 + return;
9565 +
9566 + if (!test_and_set_bit(qno, &client->queue_mask[event_type]))
9567 + client->event_handler(client->priv, event_type, qno);
9568 +}
9569 +
9570 +/* This function releases Rx queue descriptor memory and pre-filled buffers
9571 + *
9572 + * @param[in] client hif_client context
9573 + */
9574 +static void hif_lib_client_release_rx_buffers(struct hif_client_s *client)
9575 +{
9576 + struct rx_queue_desc *desc;
9577 + int qno, ii;
9578 + void *buf;
9579 +
9580 + for (qno = 0; qno < client->rx_qn; qno++) {
9581 + desc = client->rx_q[qno].base;
9582 +
9583 + for (ii = 0; ii < client->rx_q[qno].size; ii++) {
9584 + buf = (void *)desc->data;
9585 + if (buf) {
9586 + buf -= pfe_pkt_headroom;
9587 +
9588 + if (page_mode)
9589 + free_page((unsigned long)buf);
9590 + else
9591 + kfree(buf);
9592 +
9593 + desc->ctrl = 0;
9594 + }
9595 +
9596 + desc++;
9597 + }
9598 + }
9599 +
9600 + kfree(client->rx_qbase);
9601 +}
9602 +
9603 +/* This function allocates memory for the rxq descriptors and pre-fills the
9604 + * rx queues with buffers.
9605 + * @param[in] client client context
9606 + * @param[in] q_size size of the rxQ, all queues are of same size
9607 + */
9608 +static int hif_lib_client_init_rx_buffers(struct hif_client_s *client, int
9609 + q_size)
9610 +{
9611 + struct rx_queue_desc *desc;
9612 + struct hif_client_rx_queue *queue;
9613 + int ii, qno;
9614 +
9615 + /*Allocate memory for the client queues */
9616 + client->rx_qbase = kzalloc(client->rx_qn * q_size * sizeof(struct
9617 + rx_queue_desc), GFP_KERNEL);
9618 + if (!client->rx_qbase)
9619 + goto err;
9620 +
9621 + for (qno = 0; qno < client->rx_qn; qno++) {
9622 + queue = &client->rx_q[qno];
9623 +
9624 + queue->base = client->rx_qbase + qno * q_size * sizeof(struct
9625 + rx_queue_desc);
9626 + queue->size = q_size;
9627 + queue->read_idx = 0;
9628 + queue->write_idx = 0;
9629 +
9630 + pr_debug("rx queue: %d, base: %p, size: %d\n", qno,
9631 + queue->base, queue->size);
9632 + }
9633 +
9634 + for (qno = 0; qno < client->rx_qn; qno++) {
9635 + queue = &client->rx_q[qno];
9636 + desc = queue->base;
9637 +
9638 + for (ii = 0; ii < queue->size; ii++) {
9639 + desc->ctrl = CL_DESC_BUF_LEN(pfe_pkt_size) |
9640 + CL_DESC_OWN;
9641 + desc++;
9642 + }
9643 + }
9644 +
9645 + return 0;
9646 +
9647 +err:
9648 + return 1;
9649 +}
9650 +
9651 +
9652 +static void hif_lib_client_cleanup_tx_queue(struct hif_client_tx_queue *queue)
9653 +{
9654 + pr_debug("%s\n", __func__);
9655 +
9656 + /*
9657 +	 * Check if there are any pending packets. The client must flush the
9658 +	 * tx queues before unregistering, by calling
9659 +	 * hif_lib_tx_get_next_complete().
9660 +	 *
9661 +	 * HIF no longer calls us since we are no longer registered.
9662 + */
9663 + if (queue->tx_pending)
9664 + pr_err("%s: pending transmit packets\n", __func__);
9665 +}
9666 +
9667 +static void hif_lib_client_release_tx_buffers(struct hif_client_s *client)
9668 +{
9669 + int qno;
9670 +
9671 + pr_debug("%s\n", __func__);
9672 +
9673 + for (qno = 0; qno < client->tx_qn; qno++)
9674 + hif_lib_client_cleanup_tx_queue(&client->tx_q[qno]);
9675 +
9676 + kfree(client->tx_qbase);
9677 +}
9678 +
9679 +static int hif_lib_client_init_tx_buffers(struct hif_client_s *client, int
9680 + q_size)
9681 +{
9682 + struct hif_client_tx_queue *queue;
9683 + int qno;
9684 +
9685 + client->tx_qbase = kzalloc(client->tx_qn * q_size * sizeof(struct
9686 + tx_queue_desc), GFP_KERNEL);
9687 + if (!client->tx_qbase)
9688 + return 1;
9689 +
9690 + for (qno = 0; qno < client->tx_qn; qno++) {
9691 + queue = &client->tx_q[qno];
9692 +
9693 + queue->base = client->tx_qbase + qno * q_size * sizeof(struct
9694 + tx_queue_desc);
9695 + queue->size = q_size;
9696 + queue->read_idx = 0;
9697 + queue->write_idx = 0;
9698 + queue->tx_pending = 0;
9699 + queue->nocpy_flag = 0;
9700 + queue->prev_tmu_tx_pkts = 0;
9701 + queue->done_tmu_tx_pkts = 0;
9702 +
9703 + pr_debug("tx queue: %d, base: %p, size: %d\n", qno,
9704 + queue->base, queue->size);
9705 + }
9706 +
9707 + return 0;
9708 +}
9709 +
9710 +static int hif_lib_event_dummy(void *priv, int event_type, int qno)
9711 +{
9712 + return 0;
9713 +}
9714 +
9715 +int hif_lib_client_register(struct hif_client_s *client)
9716 +{
9717 + struct hif_shm *hif_shm;
9718 + struct hif_client_shm *client_shm;
9719 + int err, i;
9721 +
9722 + pr_debug("%s\n", __func__);
9723 +
9724 + /*Allocate memory before spin_lock*/
9725 + if (hif_lib_client_init_rx_buffers(client, client->rx_qsize)) {
9726 + err = -ENOMEM;
9727 + goto err_rx;
9728 + }
9729 +
9730 + if (hif_lib_client_init_tx_buffers(client, client->tx_qsize)) {
9731 + err = -ENOMEM;
9732 + goto err_tx;
9733 + }
9734 +
9735 + spin_lock_bh(&pfe->hif.lock);
9736 + if (!(client->pfe) || (client->id >= HIF_CLIENTS_MAX) ||
9737 + (pfe->hif_client[client->id])) {
9738 + err = -EINVAL;
9739 + goto err;
9740 + }
9741 +
9742 + hif_shm = client->pfe->hif.shm;
9743 +
9744 + if (!client->event_handler)
9745 + client->event_handler = hif_lib_event_dummy;
9746 +
9747 + /*Initialize client specific shared memory */
9748 + client_shm = (struct hif_client_shm *)&hif_shm->client[client->id];
9749 + client_shm->rx_qbase = (unsigned long int)client->rx_qbase;
9750 + client_shm->rx_qsize = client->rx_qsize;
9751 + client_shm->tx_qbase = (unsigned long int)client->tx_qbase;
9752 + client_shm->tx_qsize = client->tx_qsize;
9753 + client_shm->ctrl = (client->tx_qn << CLIENT_CTRL_TX_Q_CNT_OFST) |
9754 + (client->rx_qn << CLIENT_CTRL_RX_Q_CNT_OFST);
9755 + /* spin_lock_init(&client->rx_lock); */
9756 +
9757 +	/* By default all events are unmasked */
9758 +	for (i = 0; i < HIF_EVENT_MAX; i++)
9759 +		client->queue_mask[i] = 0;
9763 +
9764 + /*Indicate to HIF driver*/
9765 + hif_lib_indicate_hif(&pfe->hif, REQUEST_CL_REGISTER, client->id, 0);
9766 +
9767 + pr_debug("%s: client: %p, client_id: %d, tx_qsize: %d, rx_qsize: %d\n",
9768 + __func__, client, client->id, client->tx_qsize,
9769 + client->rx_qsize);
9770 +
9771 + client->cpu_id = -1;
9772 +
9773 + pfe->hif_client[client->id] = client;
9774 + spin_unlock_bh(&pfe->hif.lock);
9775 +
9776 + return 0;
9777 +
9778 +err:
9779 + spin_unlock_bh(&pfe->hif.lock);
9780 + hif_lib_client_release_tx_buffers(client);
9781 +
9782 +err_tx:
9783 + hif_lib_client_release_rx_buffers(client);
9784 +
9785 +err_rx:
9786 + return err;
9787 +}
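+
+/*
+ * Editor's sketch (hypothetical, not part of the driver): minimal client
+ * setup as an Ethernet client driver might do it. The queue sizes are
+ * illustrative; they must be powers of two because the queue indices
+ * wrap with "& (size - 1)".
+ */
+static int example_event_handler(void *priv, int event, int qno)
+{
+	/* e.g. schedule NAPI on EVENT_RX_PKT_IND */
+	return 0;
+}
+
+static inline int example_register_client(struct pfe *pfe_dev)
+{
+	static struct hif_client_s client;
+
+	client.id = PFE_CL_GEM0;
+	client.pfe = pfe_dev;
+	client.rx_qn = 2;		/* illustrative */
+	client.tx_qn = emac_txq_cnt;
+	client.rx_qsize = 256;		/* power of two */
+	client.tx_qsize = 256;
+	client.event_handler = example_event_handler;
+	client.priv = NULL;
+
+	return hif_lib_client_register(&client);
+}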
9788 +
9789 +int hif_lib_client_unregister(struct hif_client_s *client)
9790 +{
9791 + struct pfe *pfe = client->pfe;
9792 + u32 client_id = client->id;
9793 +
9794 +	pr_info("%s : client: %p, client_id: %d, txQ_depth: %d, rxQ_depth: %d\n",
9795 +		__func__, client, client->id, client->tx_qsize,
9796 +		client->rx_qsize);
9798 +
9799 + spin_lock_bh(&pfe->hif.lock);
9800 + hif_lib_indicate_hif(&pfe->hif, REQUEST_CL_UNREGISTER, client->id, 0);
9801 +
9802 + hif_lib_client_release_tx_buffers(client);
9803 + hif_lib_client_release_rx_buffers(client);
9804 + pfe->hif_client[client_id] = NULL;
9805 + spin_unlock_bh(&pfe->hif.lock);
9806 +
9807 + return 0;
9808 +}
9809 +
9810 +int hif_lib_event_handler_start(struct hif_client_s *client, int event,
9811 + int qno)
9812 +{
9813 + struct hif_client_rx_queue *queue = &client->rx_q[qno];
9814 + struct rx_queue_desc *desc = queue->base + queue->read_idx;
9815 +
9816 + if ((event >= HIF_EVENT_MAX) || (qno >= HIF_CLIENT_QUEUES_MAX)) {
9817 + pr_debug("%s: Unsupported event : %d queue number : %d\n",
9818 + __func__, event, qno);
9819 + return -1;
9820 + }
9821 +
9822 + test_and_clear_bit(qno, &client->queue_mask[event]);
9823 +
9824 + switch (event) {
9825 + case EVENT_RX_PKT_IND:
9826 + if (!(desc->ctrl & CL_DESC_OWN))
9827 + hif_lib_indicate_client(client->id,
9828 + EVENT_RX_PKT_IND, qno);
9829 + break;
9830 +
9831 + case EVENT_HIGH_RX_WM:
9832 + case EVENT_TXDONE_IND:
9833 + default:
9834 + break;
9835 + }
9836 +
9837 + return 0;
9838 +}
9839 +
9840 +/*
9841 + * This function gets one packet from the specified client queue.
9842 + * It also returns the descriptor ownership to HIF so it can be refilled.
9843 + */
9844 +void *hif_lib_receive_pkt(struct hif_client_s *client, int qno, int *len, int
9845 + *ofst, unsigned int *rx_ctrl,
9846 + unsigned int *desc_ctrl, void **priv_data)
9847 +{
9848 + struct hif_client_rx_queue *queue = &client->rx_q[qno];
9849 + struct rx_queue_desc *desc;
9850 + void *pkt = NULL;
9851 +
9852 + /*
9853 +	 * The following lock protects rx queue access from
9854 +	 * hif_lib_event_handler_start.
9855 +	 * In general the lock is not required, because hif_lib_receive_pkt
9856 +	 * and hif_lib_event_handler_start are called from napi poll, which is
9857 +	 * not re-entrant. But if some client uses them differently, this lock
9858 +	 * is required.
9859 + */
9860 + /*spin_lock_irqsave(&client->rx_lock, flags); */
9861 + desc = queue->base + queue->read_idx;
9862 + if (!(desc->ctrl & CL_DESC_OWN)) {
9863 + pkt = desc->data - pfe_pkt_headroom;
9864 +
9865 + *rx_ctrl = desc->client_ctrl;
9866 + *desc_ctrl = desc->ctrl;
9867 +
9868 + if (desc->ctrl & CL_DESC_FIRST) {
9869 + u16 size = *rx_ctrl >> HIF_CTRL_RX_OFFSET_OFST;
9870 +
9871 + if (size) {
9872 + size += PFE_PARSE_INFO_SIZE;
9873 + *len = CL_DESC_BUF_LEN(desc->ctrl) -
9874 + PFE_PKT_HEADER_SZ - size;
9875 + *ofst = pfe_pkt_headroom + PFE_PKT_HEADER_SZ
9876 + + size;
9877 + *priv_data = desc->data + PFE_PKT_HEADER_SZ;
9878 + } else {
9879 + *len = CL_DESC_BUF_LEN(desc->ctrl) -
9880 + PFE_PKT_HEADER_SZ - PFE_PARSE_INFO_SIZE;
9881 + *ofst = pfe_pkt_headroom
9882 + + PFE_PKT_HEADER_SZ
9883 + + PFE_PARSE_INFO_SIZE;
9884 + *priv_data = NULL;
9885 + }
9886 +
9887 + } else {
9888 + *len = CL_DESC_BUF_LEN(desc->ctrl);
9889 + *ofst = pfe_pkt_headroom;
9890 + }
9891 +
9892 + /*
9893 + * Needed so we don't free a buffer/page
9894 + * twice on module_exit
9895 + */
9896 + desc->data = NULL;
9897 +
9898 + /*
9899 + * Ensure everything else is written to DDR before
9900 + * writing bd->ctrl
9901 + */
9902 + smp_wmb();
9903 +
9904 + desc->ctrl = CL_DESC_BUF_LEN(pfe_pkt_size) | CL_DESC_OWN;
9905 + queue->read_idx = (queue->read_idx + 1) & (queue->size - 1);
9906 + }
9907 +
9908 + /*spin_unlock_irqrestore(&client->rx_lock, flags); */
9909 + return pkt;
9910 +}
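+
+/*
+ * Editor's sketch (hypothetical): draining a client rx queue, e.g. from
+ * an EVENT_RX_PKT_IND handler. The returned pointer is the raw buffer;
+ * the payload starts at pkt + ofst and is len bytes long, and the buffer
+ * now belongs to the caller (typically it is wrapped into an skb).
+ */
+static inline void example_drain_rx_queue(struct hif_client_s *client,
+					  int qno)
+{
+	unsigned int rx_ctrl, desc_ctrl;
+	void *priv_data;
+	void *pkt;
+	int len, ofst;
+
+	while ((pkt = hif_lib_receive_pkt(client, qno, &len, &ofst,
+					  &rx_ctrl, &desc_ctrl,
+					  &priv_data))) {
+		/* hand (pkt + ofst, len) to the stack, then recycle pkt */
+	}
+}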
9911 +
9912 +static inline void hif_hdr_write(struct hif_hdr *pkt_hdr, unsigned int
9913 + client_id, unsigned int qno,
9914 + u32 client_ctrl)
9915 +{
9916 +	/* Optimize the write since the destination may be non-cacheable */
9917 + if (!((unsigned long)pkt_hdr & 0x3)) {
9918 + ((u32 *)pkt_hdr)[0] = (client_ctrl << 16) | (qno << 8) |
9919 + client_id;
9920 + } else {
9921 + ((u16 *)pkt_hdr)[0] = (qno << 8) | (client_id & 0xFF);
9922 + ((u16 *)pkt_hdr)[1] = (client_ctrl & 0xFFFF);
9923 + }
9924 +}
9925 +
9926 +/* This function puts the given packet on the specified client queue */
9927 +void __hif_lib_xmit_pkt(struct hif_client_s *client, unsigned int qno, void
9928 + *data, unsigned int len, u32 client_ctrl,
9929 + unsigned int flags, void *client_data)
9930 +{
9931 + struct hif_client_tx_queue *queue = &client->tx_q[qno];
9932 + struct tx_queue_desc *desc = queue->base + queue->write_idx;
9933 +
9934 + /* First buffer */
9935 + if (flags & HIF_FIRST_BUFFER) {
9936 + data -= sizeof(struct hif_hdr);
9937 + len += sizeof(struct hif_hdr);
9938 +
9939 + hif_hdr_write(data, client->id, qno, client_ctrl);
9940 + }
9941 +
9942 + desc->data = client_data;
9943 + desc->ctrl = CL_DESC_OWN | CL_DESC_FLAGS(flags);
9944 +
9945 + __hif_xmit_pkt(&pfe->hif, client->id, qno, data, len, flags);
9946 +
9947 + queue->write_idx = (queue->write_idx + 1) & (queue->size - 1);
9948 + queue->tx_pending++;
9949 + queue->jiffies_last_packet = jiffies;
9950 +}
9951 +
9952 +void *hif_lib_tx_get_next_complete(struct hif_client_s *client, int qno,
9953 + unsigned int *flags, int count)
9954 +{
9955 + struct hif_client_tx_queue *queue = &client->tx_q[qno];
9956 + struct tx_queue_desc *desc = queue->base + queue->read_idx;
9957 +
9958 + pr_debug("%s: qno : %d rd_indx: %d pending:%d\n", __func__, qno,
9959 + queue->read_idx, queue->tx_pending);
9960 +
9961 + if (!queue->tx_pending)
9962 + return NULL;
9963 +
9964 + if (queue->nocpy_flag && !queue->done_tmu_tx_pkts) {
9965 + u32 tmu_tx_pkts = be32_to_cpu(pe_dmem_read(TMU0_ID +
9966 + client->id, TMU_DM_TX_TRANS, 4));
9967 +
9968 + if (queue->prev_tmu_tx_pkts > tmu_tx_pkts)
9969 +			/* counter wrapped; +1 since UINT_MAX is 2^32 - 1 */
9970 +			queue->done_tmu_tx_pkts = UINT_MAX -
9971 +				queue->prev_tmu_tx_pkts + tmu_tx_pkts + 1;
9971 + else
9972 + queue->done_tmu_tx_pkts = tmu_tx_pkts -
9973 + queue->prev_tmu_tx_pkts;
9974 +
9975 + queue->prev_tmu_tx_pkts = tmu_tx_pkts;
9976 +
9977 + if (!queue->done_tmu_tx_pkts)
9978 + return NULL;
9979 + }
9980 +
9981 + if (desc->ctrl & CL_DESC_OWN)
9982 + return NULL;
9983 +
9984 + queue->read_idx = (queue->read_idx + 1) & (queue->size - 1);
9985 + queue->tx_pending--;
9986 +
9987 + *flags = CL_DESC_GET_FLAGS(desc->ctrl);
9988 +
9989 + if (queue->done_tmu_tx_pkts && (*flags & HIF_LAST_BUFFER))
9990 + queue->done_tmu_tx_pkts--;
9991 +
9992 + return desc->data;
9993 +}
9994 +
9995 +static void hif_lib_tmu_credit_init(struct pfe *pfe)
9996 +{
9997 + int i, q;
9998 +
9999 + for (i = 0; i < NUM_GEMAC_SUPPORT; i++)
10000 + for (q = 0; q < emac_txq_cnt; q++) {
10001 + pfe->tmu_credit.tx_credit_max[i][q] = (q == 0) ?
10002 + DEFAULT_Q0_QDEPTH : DEFAULT_MAX_QDEPTH;
10003 + pfe->tmu_credit.tx_credit[i][q] =
10004 + pfe->tmu_credit.tx_credit_max[i][q];
10005 + }
10006 +}
10007 +
10008 +/* __hif_lib_update_credit
10009 + *
10010 + * @param[in] client hif client context
10011 + * @param[in] queue queue number in match with TMU
10012 + */
10013 +void __hif_lib_update_credit(struct hif_client_s *client, unsigned int queue)
10014 +{
10015 + unsigned int tmu_tx_packets, tmp;
10016 +
10017 + if (tx_qos) {
10018 + tmu_tx_packets = be32_to_cpu(pe_dmem_read(TMU0_ID +
10019 + client->id, (TMU_DM_TX_TRANS + (queue * 4)), 4));
10020 +
10021 + /* tx_packets counter overflowed */
10022 + if (tmu_tx_packets >
10023 + pfe->tmu_credit.tx_packets[client->id][queue]) {
10024 +			/* +1 since UINT_MAX is 2^32 - 1 */
10025 +			tmp = UINT_MAX - tmu_tx_packets + 1 +
10026 +				pfe->tmu_credit.tx_packets[client->id][queue];
10026 +
10027 + pfe->tmu_credit.tx_credit[client->id][queue] =
10028 + pfe->tmu_credit.tx_credit_max[client->id][queue] - tmp;
10029 + } else {
10030 + /* TMU tx <= pfe_eth tx, normal case or both OF since
10031 +			/* TMU tx <= pfe_eth tx: the normal case, or both
10032 +			 * counters wrapped since last time
10033 + pfe->tmu_credit.tx_credit[client->id][queue] =
10034 + pfe->tmu_credit.tx_credit_max[client->id][queue] -
10035 + (pfe->tmu_credit.tx_packets[client->id][queue] -
10036 + tmu_tx_packets);
10037 + }
10038 + }
10039 +}
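+
+/*
+ * Editor's note (illustrative helpers, not part of the driver): the TMU
+ * counters are 32-bit, so the wrap and no-wrap branches above both
+ * compute a modular difference, which a single unsigned subtraction
+ * expresses directly. A hypothetical transmit-path credit check using
+ * the accessors from pfe_hif_lib.h is sketched alongside it.
+ */
+static inline u32 tmu_counter_delta(u32 prev, u32 cur)
+{
+	return cur - prev;	/* well-defined modulo 2^32 */
+}
+
+static inline int example_take_tx_credit(struct pfe *pfe_dev, int id, int qno)
+{
+	if (tx_qos && !hif_lib_tx_credit_avail(pfe_dev, id, qno))
+		return 0;	/* no TMU credit left; back off */
+
+	hif_lib_tx_credit_use(pfe_dev, id, qno, 1);
+	return 1;
+}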
10040 +
10041 +int pfe_hif_lib_init(struct pfe *pfe)
10042 +{
10043 + int rc;
10044 +
10045 + pr_info("%s\n", __func__);
10046 +
10047 + if (lro_mode) {
10048 + page_mode = 1;
10049 + pfe_pkt_size = min(PAGE_SIZE, MAX_PFE_PKT_SIZE);
10050 + pfe_pkt_headroom = 0;
10051 + } else {
10052 + page_mode = 0;
10053 + pfe_pkt_size = PFE_PKT_SIZE;
10054 + pfe_pkt_headroom = PFE_PKT_HEADROOM;
10055 + }
10056 +
10057 + if (tx_qos)
10058 + emac_txq_cnt = EMAC_TXQ_CNT / 2;
10059 + else
10060 + emac_txq_cnt = EMAC_TXQ_CNT;
10061 +
10062 + hif_lib_tmu_credit_init(pfe);
10063 + pfe->hif.shm = &ghif_shm;
10064 + rc = pfe_hif_shm_init(pfe->hif.shm);
10065 +
10066 + return rc;
10067 +}
10068 +
10069 +void pfe_hif_lib_exit(struct pfe *pfe)
10070 +{
10071 + pr_info("%s\n", __func__);
10072 +
10073 + pfe_hif_shm_clean(pfe->hif.shm);
10074 +}
10075 --- /dev/null
10076 +++ b/drivers/staging/fsl_ppfe/pfe_hif_lib.h
10077 @@ -0,0 +1,229 @@
10078 +/* SPDX-License-Identifier: GPL-2.0+ */
10079 +/*
10080 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
10081 + * Copyright 2017 NXP
10082 + */
10083 +
10084 +#ifndef _PFE_HIF_LIB_H_
10085 +#define _PFE_HIF_LIB_H_
10086 +
10087 +#include "pfe_hif.h"
10088 +
10089 +#define HIF_CL_REQ_TIMEOUT 10
10090 +#define GFP_DMA_PFE 0
10091 +#define PFE_PARSE_INFO_SIZE 16
10092 +
10093 +enum {
10094 + REQUEST_CL_REGISTER = 0,
10095 + REQUEST_CL_UNREGISTER,
10096 + HIF_REQUEST_MAX
10097 +};
10098 +
10099 +enum {
10100 +	/* Event to indicate that the client rx queue reached its water mark */
10101 +	EVENT_HIGH_RX_WM = 0,
10102 +	/* Event to indicate that a packet was received for the client */
10103 +	EVENT_RX_PKT_IND,
10104 +	/* Event to indicate that packet tx is done for the client */
10105 +	EVENT_TXDONE_IND,
10106 + HIF_EVENT_MAX
10107 +};
10108 +
10109 +/*structure to store client queue info */
10112 +struct hif_client_rx_queue {
10113 + struct rx_queue_desc *base;
10114 + u32 size;
10115 + u32 read_idx;
10116 + u32 write_idx;
10117 +};
10118 +
10119 +struct hif_client_tx_queue {
10120 + struct tx_queue_desc *base;
10121 + u32 size;
10122 + u32 read_idx;
10123 + u32 write_idx;
10124 + u32 tx_pending;
10125 + unsigned long jiffies_last_packet;
10126 + u32 nocpy_flag;
10127 + u32 prev_tmu_tx_pkts;
10128 + u32 done_tmu_tx_pkts;
10129 +};
10130 +
10131 +struct hif_client_s {
10132 + int id;
10133 + int tx_qn;
10134 + int rx_qn;
10135 + void *rx_qbase;
10136 + void *tx_qbase;
10137 + int tx_qsize;
10138 + int rx_qsize;
10139 + int cpu_id;
10140 + struct hif_client_tx_queue tx_q[HIF_CLIENT_QUEUES_MAX];
10141 + struct hif_client_rx_queue rx_q[HIF_CLIENT_QUEUES_MAX];
10142 + int (*event_handler)(void *priv, int event, int data);
10143 + unsigned long queue_mask[HIF_EVENT_MAX];
10144 + struct pfe *pfe;
10145 + void *priv;
10146 +};
10147 +
10148 +/*
10149 + * Client specific shared memory
10150 + * It contains number of Rx/Tx queues, base addresses and queue sizes
10151 + */
10152 +struct hif_client_shm {
10153 + u32 ctrl; /*0-7: number of Rx queues, 8-15: number of tx queues */
10154 + unsigned long rx_qbase; /*Rx queue base address */
10155 + u32 rx_qsize; /*each Rx queue size, all Rx queues are of same size */
10156 + unsigned long tx_qbase; /* Tx queue base address */
10157 + u32 tx_qsize; /*each Tx queue size, all Tx queues are of same size */
10158 +};
10159 +
10160 +/*Client shared memory ctrl bit description */
10161 +#define CLIENT_CTRL_RX_Q_CNT_OFST 0
10162 +#define CLIENT_CTRL_TX_Q_CNT_OFST 8
10163 +#define CLIENT_CTRL_RX_Q_CNT(ctrl) (((ctrl) >> CLIENT_CTRL_RX_Q_CNT_OFST) \
10164 + & 0xFF)
10165 +#define CLIENT_CTRL_TX_Q_CNT(ctrl) (((ctrl) >> CLIENT_CTRL_TX_Q_CNT_OFST) \
10166 + & 0xFF)
10167 +
10168 +/*
10169 + * Shared memory used to communicate between HIF driver and host/client drivers
10170 + * Before starting the hif driver, rx_buf_pool and rx_buf_pool_cnt should be
10171 + * initialized with host buffers and the buffer count in the pool.
10172 + * rx_buf_pool_cnt should be >= HIF_RX_DESC_NT.
10173 + *
10174 + */
10175 +struct hif_shm {
10176 + u32 rx_buf_pool_cnt; /*Number of rx buffers available*/
10177 + /*Rx buffers required to initialize HIF rx descriptors */
10178 + void *rx_buf_pool[HIF_RX_DESC_NT];
10179 + unsigned long g_client_status[2]; /*Global client status bit mask */
10180 + /* Client specific shared memory */
10181 + struct hif_client_shm client[HIF_CLIENTS_MAX];
10182 +};
10183 +
10184 +/* Ownership bit: the descriptor buffer belongs to the HIF driver */
10185 +#define CL_DESC_OWN BIT(31)
10186 +/* Indicates the last buffer of a multi-buffer packet */
10187 +#define CL_DESC_LAST BIT(30)
10188 +/* Indicates the first buffer of a multi-buffer packet */
10189 +#define CL_DESC_FIRST BIT(29)
10190 +
10191 +#define CL_DESC_BUF_LEN(x) ((x) & 0xFFFF)
10192 +#define CL_DESC_FLAGS(x) (((x) & 0xF) << 16)
10193 +#define CL_DESC_GET_FLAGS(x) (((x) >> 16) & 0xF)
10194 +
10195 +struct rx_queue_desc {
10196 + void *data;
10197 + u32 ctrl; /*0-15bit len, 16-20bit flags, 31bit owner*/
10198 + u32 client_ctrl;
10199 +};
10200 +
10201 +struct tx_queue_desc {
10202 + void *data;
10203 + u32 ctrl; /*0-15bit len, 16-20bit flags, 31bit owner*/
10204 +};
10205 +
10206 +/* HIF Rx does not work properly for 2-byte aligned buffers, and the
10207 + * ip_header should be 4-byte aligned for better performance.
10208 + * "ip_header = 64 + 6 (hif_header) + 14 (MAC header)" will be 4-byte aligned.
10209 + */
10210 +#define PFE_PKT_HEADER_SZ sizeof(struct hif_hdr)
10211 +/* must be big enough for headroom, pkt size and skb shared info */
10212 +#define PFE_BUF_SIZE 2048
10213 +#define PFE_PKT_HEADROOM 128
10214 +
10215 +#define SKB_SHARED_INFO_SIZE SKB_DATA_ALIGN(sizeof(struct skb_shared_info))
10216 +#define PFE_PKT_SIZE (PFE_BUF_SIZE - PFE_PKT_HEADROOM \
10217 + - SKB_SHARED_INFO_SIZE)
10218 +#define MAX_L2_HDR_SIZE 14 /* Not correct for VLAN/PPPoE */
10219 +#define MAX_L3_HDR_SIZE 20 /* Not correct for IPv6 */
10220 +#define MAX_L4_HDR_SIZE 60 /* TCP with maximum options */
10221 +#define MAX_HDR_SIZE (MAX_L2_HDR_SIZE + MAX_L3_HDR_SIZE \
10222 + + MAX_L4_HDR_SIZE)
10223 +/* Used in page mode to clamp packet size to the maximum supported by the hif
10224 + * hw interface (<16 KiB)
10225 + */
10226 +#define MAX_PFE_PKT_SIZE 16380UL
10227 +
10228 +extern unsigned int pfe_pkt_size;
10229 +extern unsigned int pfe_pkt_headroom;
10230 +extern unsigned int page_mode;
10231 +extern unsigned int lro_mode;
10232 +extern unsigned int tx_qos;
10233 +extern unsigned int emac_txq_cnt;
10234 +
10235 +int pfe_hif_lib_init(struct pfe *pfe);
10236 +void pfe_hif_lib_exit(struct pfe *pfe);
10237 +int hif_lib_client_register(struct hif_client_s *client);
10238 +int hif_lib_client_unregister(struct hif_client_s *client);
10239 +void __hif_lib_xmit_pkt(struct hif_client_s *client, unsigned int qno, void
10240 + *data, unsigned int len, u32 client_ctrl,
10241 + unsigned int flags, void *client_data);
10242 +int hif_lib_xmit_pkt(struct hif_client_s *client, unsigned int qno, void *data,
10243 + unsigned int len, u32 client_ctrl, void *client_data);
10244 +void hif_lib_indicate_client(int cl_id, int event, int data);
10245 +int hif_lib_event_handler_start(struct hif_client_s *client, int event, int
10246 + data);
10247 +int hif_lib_tmu_queue_start(struct hif_client_s *client, int qno);
10248 +int hif_lib_tmu_queue_stop(struct hif_client_s *client, int qno);
10249 +void *hif_lib_tx_get_next_complete(struct hif_client_s *client, int qno,
10250 + unsigned int *flags, int count);
10251 +void *hif_lib_receive_pkt(struct hif_client_s *client, int qno, int *len, int
10252 + *ofst, unsigned int *rx_ctrl,
10253 + unsigned int *desc_ctrl, void **priv_data);
10254 +void __hif_lib_update_credit(struct hif_client_s *client, unsigned int queue);
10255 +void hif_lib_set_rx_cpu_affinity(struct hif_client_s *client, int cpu_id);
10256 +void hif_lib_set_tx_queue_nocpy(struct hif_client_s *client, int qno, int
10257 + enable);
10258 +static inline int hif_lib_tx_avail(struct hif_client_s *client, unsigned int
10259 + qno)
10260 +{
10261 + struct hif_client_tx_queue *queue = &client->tx_q[qno];
10262 +
10263 + return (queue->size - queue->tx_pending);
10264 +}
10265 +
10266 +static inline int hif_lib_get_tx_wr_index(struct hif_client_s *client, unsigned
10267 + int qno)
10268 +{
10269 + struct hif_client_tx_queue *queue = &client->tx_q[qno];
10270 +
10271 + return queue->write_idx;
10272 +}
10273 +
10274 +static inline int hif_lib_tx_pending(struct hif_client_s *client, unsigned int
10275 + qno)
10276 +{
10277 + struct hif_client_tx_queue *queue = &client->tx_q[qno];
10278 +
10279 + return queue->tx_pending;
10280 +}
10281 +
10282 +#define hif_lib_tx_credit_avail(pfe, id, qno) \
10283 + ((pfe)->tmu_credit.tx_credit[id][qno])
10284 +
10285 +#define hif_lib_tx_credit_max(pfe, id, qno) \
10286 + ((pfe)->tmu_credit.tx_credit_max[id][qno])
10287 +
10291 +#define hif_lib_tx_credit_use(pfe, id, qno, credit) \
10292 + ({ typeof(pfe) pfe_ = pfe; \
10293 + typeof(id) id_ = id; \
10294 + typeof(qno) qno_ = qno; \
10295 + typeof(credit) credit_ = credit; \
10296 + do { \
10297 + if (tx_qos) { \
10298 + (pfe_)->tmu_credit.tx_credit[id_][qno_]\
10299 + -= credit_; \
10300 + (pfe_)->tmu_credit.tx_packets[id_][qno_]\
10301 + += credit_; \
10302 + } \
10303 + } while (0); \
10304 + })
10305 +
10306 +#endif /* _PFE_HIF_LIB_H_ */
10307 --- /dev/null
10308 +++ b/drivers/staging/fsl_ppfe/pfe_hw.c
10309 @@ -0,0 +1,164 @@
10310 +// SPDX-License-Identifier: GPL-2.0+
10311 +/*
10312 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
10313 + * Copyright 2017 NXP
10314 + */
10315 +
10316 +#include "pfe_mod.h"
10317 +#include "pfe_hw.h"
10318 +
10319 +/* Functions to handle most of pfe hw register initialization */
10320 +int pfe_hw_init(struct pfe *pfe, int resume)
10321 +{
10322 + struct class_cfg class_cfg = {
10323 + .pe_sys_clk_ratio = PE_SYS_CLK_RATIO,
10324 + .route_table_baseaddr = pfe->ddr_phys_baseaddr +
10325 + ROUTE_TABLE_BASEADDR,
10326 + .route_table_hash_bits = ROUTE_TABLE_HASH_BITS,
10327 + };
10328 +
10329 + struct tmu_cfg tmu_cfg = {
10330 + .pe_sys_clk_ratio = PE_SYS_CLK_RATIO,
10331 + .llm_base_addr = pfe->ddr_phys_baseaddr + TMU_LLM_BASEADDR,
10332 + .llm_queue_len = TMU_LLM_QUEUE_LEN,
10333 + };
10334 +
10335 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
10336 + struct util_cfg util_cfg = {
10337 + .pe_sys_clk_ratio = PE_SYS_CLK_RATIO,
10338 + };
10339 +#endif
10340 +
10341 + struct BMU_CFG bmu1_cfg = {
10342 + .baseaddr = CBUS_VIRT_TO_PFE(LMEM_BASE_ADDR +
10343 + BMU1_LMEM_BASEADDR),
10344 + .count = BMU1_BUF_COUNT,
10345 + .size = BMU1_BUF_SIZE,
10346 + .low_watermark = 10,
10347 + .high_watermark = 15,
10348 + };
10349 +
10350 + struct BMU_CFG bmu2_cfg = {
10351 + .baseaddr = DDR_PHYS_TO_PFE(pfe->ddr_phys_baseaddr +
10352 + BMU2_DDR_BASEADDR),
10353 + .count = BMU2_BUF_COUNT,
10354 + .size = BMU2_BUF_SIZE,
10355 + .low_watermark = 250,
10356 + .high_watermark = 253,
10357 + };
10358 +
10359 + struct gpi_cfg egpi1_cfg = {
10360 + .lmem_rtry_cnt = EGPI1_LMEM_RTRY_CNT,
10361 + .tmlf_txthres = EGPI1_TMLF_TXTHRES,
10362 + .aseq_len = EGPI1_ASEQ_LEN,
10363 + .mtip_pause_reg = CBUS_VIRT_TO_PFE(EMAC1_BASE_ADDR +
10364 + EMAC_TCNTRL_REG),
10365 + };
10366 +
10367 + struct gpi_cfg egpi2_cfg = {
10368 + .lmem_rtry_cnt = EGPI2_LMEM_RTRY_CNT,
10369 + .tmlf_txthres = EGPI2_TMLF_TXTHRES,
10370 + .aseq_len = EGPI2_ASEQ_LEN,
10371 + .mtip_pause_reg = CBUS_VIRT_TO_PFE(EMAC2_BASE_ADDR +
10372 + EMAC_TCNTRL_REG),
10373 + };
10374 +
10375 + struct gpi_cfg hgpi_cfg = {
10376 + .lmem_rtry_cnt = HGPI_LMEM_RTRY_CNT,
10377 + .tmlf_txthres = HGPI_TMLF_TXTHRES,
10378 + .aseq_len = HGPI_ASEQ_LEN,
10379 + .mtip_pause_reg = 0,
10380 + };
10381 +
10382 + pr_info("%s\n", __func__);
10383 +
10384 +#if !defined(LS1012A_PFE_RESET_WA)
10385 + /* LS1012A needs this to make PE work correctly */
10386 + writel(0x3, CLASS_PE_SYS_CLK_RATIO);
10387 + writel(0x3, TMU_PE_SYS_CLK_RATIO);
10388 + writel(0x3, UTIL_PE_SYS_CLK_RATIO);
10389 + usleep_range(10, 20);
10390 +#endif
10391 +
10392 + pr_info("CLASS version: %x\n", readl(CLASS_VERSION));
10393 + pr_info("TMU version: %x\n", readl(TMU_VERSION));
10394 +
10395 + pr_info("BMU1 version: %x\n", readl(BMU1_BASE_ADDR +
10396 + BMU_VERSION));
10397 + pr_info("BMU2 version: %x\n", readl(BMU2_BASE_ADDR +
10398 + BMU_VERSION));
10399 +
10400 + pr_info("EGPI1 version: %x\n", readl(EGPI1_BASE_ADDR +
10401 + GPI_VERSION));
10402 + pr_info("EGPI2 version: %x\n", readl(EGPI2_BASE_ADDR +
10403 + GPI_VERSION));
10404 + pr_info("HGPI version: %x\n", readl(HGPI_BASE_ADDR +
10405 + GPI_VERSION));
10406 +
10407 + pr_info("HIF version: %x\n", readl(HIF_VERSION));
10408 +	pr_info("HIF NOCPY version: %x\n", readl(HIF_NOCPY_VERSION));
10409 +
10410 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
10411 + pr_info("UTIL version: %x\n", readl(UTIL_VERSION));
10412 +#endif
10413 + while (!(readl(TMU_CTRL) & ECC_MEM_INIT_DONE))
10414 + ;
10415 +
10416 + hif_rx_disable();
10417 + hif_tx_disable();
10418 +
10419 + bmu_init(BMU1_BASE_ADDR, &bmu1_cfg);
10420 +
10421 + pr_info("bmu_init(1) done\n");
10422 +
10423 + bmu_init(BMU2_BASE_ADDR, &bmu2_cfg);
10424 +
10425 + pr_info("bmu_init(2) done\n");
10426 +
10427 + class_cfg.resume = resume ? 1 : 0;
10428 +
10429 + class_init(&class_cfg);
10430 +
10431 + pr_info("class_init() done\n");
10432 +
10433 + tmu_init(&tmu_cfg);
10434 +
10435 + pr_info("tmu_init() done\n");
10436 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
10437 + util_init(&util_cfg);
10438 +
10439 + pr_info("util_init() done\n");
10440 +#endif
10441 + gpi_init(EGPI1_BASE_ADDR, &egpi1_cfg);
10442 +
10443 + pr_info("gpi_init(1) done\n");
10444 +
10445 + gpi_init(EGPI2_BASE_ADDR, &egpi2_cfg);
10446 +
10447 + pr_info("gpi_init(2) done\n");
10448 +
10449 + gpi_init(HGPI_BASE_ADDR, &hgpi_cfg);
10450 +
10451 + pr_info("gpi_init(hif) done\n");
10452 +
10453 + bmu_enable(BMU1_BASE_ADDR);
10454 +
10455 + pr_info("bmu_enable(1) done\n");
10456 +
10457 + bmu_enable(BMU2_BASE_ADDR);
10458 +
10459 + pr_info("bmu_enable(2) done\n");
10460 +
10461 + return 0;
10462 +}
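The ECC memory-init wait in pfe_hw_init() above spins unbounded if ECC_MEM_INIT_DONE never sets. A bounded variant is sketched below, assuming readl_poll_timeout() from <linux/iopoll.h>; the 10 us poll interval and 100 ms budget are illustrative choices, not values from this patch.

	#include <linux/iopoll.h>

	/* Hypothetical helper: poll TMU_CTRL until ECC init completes,
	 * checking every 10 us and giving up after 100 ms.
	 */
	static int pfe_wait_ecc_init_done(void)
	{
		u32 val;

		return readl_poll_timeout(TMU_CTRL, val,
					  val & ECC_MEM_INIT_DONE, 10, 100000);
	}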
10463 +
10464 +void pfe_hw_exit(struct pfe *pfe)
10465 +{
10466 + pr_info("%s\n", __func__);
10467 +
10468 + bmu_disable(BMU1_BASE_ADDR);
10469 + bmu_reset(BMU1_BASE_ADDR);
10470 +
10471 + bmu_disable(BMU2_BASE_ADDR);
10472 + bmu_reset(BMU2_BASE_ADDR);
10473 +}
10474 --- /dev/null
10475 +++ b/drivers/staging/fsl_ppfe/pfe_hw.h
10476 @@ -0,0 +1,15 @@
10477 +/* SPDX-License-Identifier: GPL-2.0+ */
10478 +/*
10479 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
10480 + * Copyright 2017 NXP
10481 + */
10482 +
10483 +#ifndef _PFE_HW_H_
10484 +#define _PFE_HW_H_
10485 +
10486 +#define PE_SYS_CLK_RATIO 1 /* SYS/AXI = 250MHz, HFE = 500MHz */
10487 +
10488 +int pfe_hw_init(struct pfe *pfe, int resume);
10489 +void pfe_hw_exit(struct pfe *pfe);
10490 +
10491 +#endif /* _PFE_HW_H_ */
10492 --- /dev/null
10493 +++ b/drivers/staging/fsl_ppfe/pfe_ls1012a_platform.c
10494 @@ -0,0 +1,383 @@
10495 +// SPDX-License-Identifier: GPL-2.0+
10496 +/*
10497 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
10498 + * Copyright 2017 NXP
10499 + */
10500 +
10501 +#include <linux/module.h>
10502 +#include <linux/device.h>
10503 +#include <linux/of.h>
10504 +#include <linux/of_net.h>
10505 +#include <linux/of_address.h>
10506 +#include <linux/of_mdio.h>
10507 +#include <linux/platform_device.h>
10508 +#include <linux/slab.h>
10509 +#include <linux/clk.h>
10510 +#include <linux/mfd/syscon.h>
10511 +#include <linux/regmap.h>
10512 +
10513 +#include "pfe_mod.h"
10514 +
10515 +extern bool pfe_use_old_dts_phy;
10516 +struct ls1012a_pfe_platform_data pfe_platform_data;
10517 +
10518 +static int pfe_get_gemac_if_properties(struct device_node *gem,
10519 + int port,
10520 + struct ls1012a_pfe_platform_data *pdata)
10521 +{
10522 + struct device_node *phy_node = NULL;
10523 + int size;
10524 + int phy_id = 0;
10525 + const u32 *addr;
10526 + int err;
10527 +
10528 + addr = of_get_property(gem, "reg", &size);
10529 + if (addr)
10530 + port = be32_to_cpup(addr);
10531 + else
10532 + goto err;
10533 +
10534 + pdata->ls1012a_eth_pdata[port].gem_id = port;
10535 +
10536 + of_get_mac_address(gem, pdata->ls1012a_eth_pdata[port].mac_addr);
10537 +
10538 + phy_node = of_parse_phandle(gem, "phy-handle", 0);
10539 + pdata->ls1012a_eth_pdata[port].phy_node = phy_node;
10540 + if (phy_node) {
10541 + pfe_use_old_dts_phy = false;
10542 + goto process_phynode;
10543 + } else if (of_phy_is_fixed_link(gem)) {
10544 + pfe_use_old_dts_phy = false;
10545 + if (of_phy_register_fixed_link(gem) < 0) {
10546 + pr_err("broken fixed-link specification\n");
10547 + goto err;
10548 + }
10549 + phy_node = of_node_get(gem);
10550 + pdata->ls1012a_eth_pdata[port].phy_node = phy_node;
10551 + } else if (of_get_property(gem, "fsl,pfe-phy-if-flags", &size)) {
10552 + pfe_use_old_dts_phy = true;
10553 + /* Use old dts properties for phy handling */
10554 + addr = of_get_property(gem, "fsl,pfe-phy-if-flags", &size);
10555 + pdata->ls1012a_eth_pdata[port].phy_flags = be32_to_cpup(addr);
10556 +
10557 + addr = of_get_property(gem, "fsl,gemac-phy-id", &size);
10558 + if (!addr) {
10559 + pr_err("%s:%d Invalid gemac-phy-id....\n", __func__,
10560 + __LINE__);
10561 + } else {
10562 + phy_id = be32_to_cpup(addr);
10563 + pdata->ls1012a_eth_pdata[port].phy_id = phy_id;
10564 + pdata->ls1012a_mdio_pdata[0].phy_mask &= ~(1 << phy_id);
10565 + }
10566 +
10567 + /* If PHY is enabled, read mdio properties */
10568 + if (pdata->ls1012a_eth_pdata[port].phy_flags & GEMAC_NO_PHY)
10569 + goto done;
10570 +
10571 + } else {
10572 + pr_info("%s: No PHY or fixed-link\n", __func__);
10573 + return 0;
10574 + }
10575 +
10576 +process_phynode:
10577 + err = of_get_phy_mode(gem, &pdata->ls1012a_eth_pdata[port].mii_config);
10578 + if (err)
10579 + pr_err("%s:%d Incorrect Phy mode....\n", __func__,
10580 + __LINE__);
10581 +
10582 + addr = of_get_property(gem, "fsl,mdio-mux-val", &size);
10583 + if (!addr) {
10584 + pr_err("%s: Invalid mdio-mux-val....\n", __func__);
10585 + } else {
10586 + phy_id = be32_to_cpup(addr);
10587 + pdata->ls1012a_eth_pdata[port].mdio_muxval = phy_id;
10588 + }
10589 +
10590 + if (pdata->ls1012a_eth_pdata[port].phy_id < 32)
10591 + pfe->mdio_muxval[pdata->ls1012a_eth_pdata[port].phy_id] =
10592 + pdata->ls1012a_eth_pdata[port].mdio_muxval;
10593 +
10595 + pdata->ls1012a_mdio_pdata[port].irq[0] = PHY_POLL;
10596 +
10597 +done:
10598 + return 0;
10599 +
10600 +err:
10601 + return -1;
10602 +}
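The phy_mask handling above is easy to miss: pfe_platform_probe() below seeds ls1012a_mdio_pdata[0].phy_mask with all bits set, and each gemac node carrying fsl,gemac-phy-id clears exactly one bit, so the MDIO core later scans only the addresses named in the device tree. A minimal sketch, with illustrative PHY addresses 1 and 2:

	u32 phy_mask = 0xffffffff;	/* seeded in pfe_platform_probe() */

	phy_mask &= ~(1 << 1);		/* gemac0: fsl,gemac-phy-id = <1> */
	phy_mask &= ~(1 << 2);		/* gemac1: fsl,gemac-phy-id = <2> */
	/* phy_mask == 0xfffffff9: only addresses 1 and 2 get probed */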
10603 +
10604 +/*
10605 + * pfe_platform_probe -
10606 + */
10610 +static int pfe_platform_probe(struct platform_device *pdev)
10611 +{
10612 + struct resource res;
10613 + int ii = 0, rc, interface_count = 0, size = 0;
10614 + const u32 *prop;
10615 + struct device_node *np, *gem = NULL;
10616 + struct clk *pfe_clk;
10617 +
10618 + np = pdev->dev.of_node;
10619 +
10620 + if (!np) {
10621 + pr_err("Invalid device node\n");
10622 + return -EINVAL;
10623 + }
10624 +
10625 + pfe = kzalloc(sizeof(*pfe), GFP_KERNEL);
10626 + if (!pfe) {
10627 + rc = -ENOMEM;
10628 + goto err_alloc;
10629 + }
10630 +
10631 + platform_set_drvdata(pdev, pfe);
10632 +
10633 + if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) {
10634 + rc = -ENOMEM;
10635 + pr_err("unable to configure DMA mask.\n");
10636 + goto err_ddr;
10637 + }
10638 +
10639 + if (of_address_to_resource(np, 1, &res)) {
10640 + rc = -ENOMEM;
10641 + pr_err("failed to get ddr resource\n");
10642 + goto err_ddr;
10643 + }
10644 +
10645 + pfe->ddr_phys_baseaddr = res.start;
10646 + pfe->ddr_size = resource_size(&res);
10647 +
10648 + pfe->ddr_baseaddr = memremap(res.start, resource_size(&res),
10649 + MEMREMAP_WB);
10650 + if (!pfe->ddr_baseaddr) {
10651 + pr_err("memremap() ddr failed\n");
10652 + rc = -ENOMEM;
10653 + goto err_ddr;
10654 + }
10655 +
10656 + pfe->scfg =
10657 + syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
10658 + "fsl,pfe-scfg");
10659 + if (IS_ERR(pfe->scfg)) {
10660 + dev_err(&pdev->dev, "No syscfg phandle specified\n");
10661 + rc = PTR_ERR(pfe->scfg);
10662 + goto err_axi;
10663 + }
10663 +
10664 + pfe->cbus_baseaddr = of_iomap(np, 0);
10665 + if (!pfe->cbus_baseaddr) {
10666 + rc = -ENOMEM;
10667 + pr_err("failed to get axi resource\n");
10668 + goto err_axi;
10669 + }
10670 +
10671 + pfe->hif_irq = platform_get_irq(pdev, 0);
10672 + if (pfe->hif_irq < 0) {
10673 + pr_err("platform_get_irq for hif failed\n");
10674 + rc = pfe->hif_irq;
10675 + goto err_hif_irq;
10676 + }
10677 +
10678 + pfe->wol_irq = platform_get_irq(pdev, 2);
10679 + if (pfe->wol_irq < 0) {
10680 + pr_err("platform_get_irq for WoL failed\n");
10681 + rc = pfe->wol_irq;
10682 + goto err_hif_irq;
10683 + }
10684 +
10685 + /* Read interface count */
10686 + prop = of_get_property(np, "fsl,pfe-num-interfaces", &size);
10687 + if (!prop) {
10688 + pr_err("Failed to read number of interfaces\n");
10689 + rc = -ENXIO;
10690 + goto err_prop;
10691 + }
10692 +
10693 + interface_count = be32_to_cpup(prop);
10694 + if (interface_count <= 0) {
10695 + pr_err("No ethernet interface count : %d\n",
10696 + interface_count);
10697 + rc = -ENXIO;
10698 + goto err_prop;
10699 + }
10700 +
10701 + pfe_platform_data.ls1012a_mdio_pdata[0].phy_mask = 0xffffffff;
10702 +
10703 + while ((gem = of_get_next_child(np, gem))) {
10704 + if (of_find_property(gem, "reg", &size)) {
10705 + pfe_get_gemac_if_properties(gem, ii,
10706 + &pfe_platform_data);
10707 + ii++;
10708 + }
10709 + }
10710 +
10711 + if (interface_count != ii)
10712 + pr_info("missing some of gemac interface properties.\n");
10713 +
10714 + pfe->dev = &pdev->dev;
10715 +
10716 + pfe->dev->platform_data = &pfe_platform_data;
10717 +
10718 + /* declare WoL capabilities */
10719 + device_init_wakeup(&pdev->dev, true);
10720 +
10721 + /* find the clocks */
10722 + pfe_clk = devm_clk_get(pfe->dev, "pfe");
10723 + if (IS_ERR(pfe_clk)) {
10724 + rc = PTR_ERR(pfe_clk);
10725 + goto err_hif_irq;
10726 + }
10725 +
10726 + /* PFE clock is (platform clock / 2) */
10727 + /* save sys_clk value as KHz */
10728 + pfe->ctrl.sys_clk = clk_get_rate(pfe_clk) / (2 * 1000);
10729 +
10730 + rc = pfe_probe(pfe);
10731 + if (rc < 0)
10732 + goto err_probe;
10733 +
10734 + return 0;
10735 +
10736 +err_probe:
10737 +err_prop:
10738 +err_hif_irq:
10739 + iounmap(pfe->cbus_baseaddr);
10740 +
10741 +err_axi:
10742 + memunmap(pfe->ddr_baseaddr);
10743 +
10744 +err_ddr:
10745 + platform_set_drvdata(pdev, NULL);
10746 +
10747 + kfree(pfe);
10748 +
10749 +err_alloc:
10750 + return rc;
10751 +}
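The sys_clk line above packs two facts into one divide; a worked example, assuming an illustrative 500 MHz platform clock:

	/*
	 * clk_get_rate(pfe_clk)  == 500000000   (platform clock, Hz)
	 * PFE core clock         == 500000000 / 2    -> 250 MHz
	 * ctrl.sys_clk           == 500000000 / 2000 -> 250000 kHz
	 */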
10752 +
10753 +/*
10754 + * pfe_platform_remove -
10755 + */
10756 +static int pfe_platform_remove(struct platform_device *pdev)
10757 +{
10758 + struct pfe *pfe = platform_get_drvdata(pdev);
10759 + int rc;
10760 +
10761 + pr_info("%s\n", __func__);
10762 +
10763 + rc = pfe_remove(pfe);
10764 +
10765 + iounmap(pfe->cbus_baseaddr);
10766 +
10767 + memunmap(pfe->ddr_baseaddr);
10768 +
10769 + platform_set_drvdata(pdev, NULL);
10770 +
10771 + kfree(pfe);
10772 +
10773 + return rc;
10774 +}
10775 +
10776 +#ifdef CONFIG_PM
10777 +#ifdef CONFIG_PM_SLEEP
10778 +int pfe_platform_suspend(struct device *dev)
10779 +{
10780 + struct pfe *pfe = platform_get_drvdata(to_platform_device(dev));
10781 + struct net_device *netdev;
10782 + int i;
10783 +
10784 + pfe->wake = 0;
10785 +
10786 + for (i = 0; i < (NUM_GEMAC_SUPPORT); i++) {
10787 + netdev = pfe->eth.eth_priv[i]->ndev;
10788 +
10789 + netif_device_detach(netdev);
10790 +
10791 + if (netif_running(netdev))
10792 + if (pfe_eth_suspend(netdev))
10793 + pfe->wake = 1;
10794 + }
10795 +
10796 + /* Shutdown PFE only if we're not waking up the system */
10797 + if (!pfe->wake) {
10798 +#if defined(LS1012A_PFE_RESET_WA)
10799 + pfe_hif_rx_idle(&pfe->hif);
10800 +#endif
10801 + pfe_ctrl_suspend(&pfe->ctrl);
10802 + pfe_firmware_exit(pfe);
10803 +
10804 + pfe_hif_exit(pfe);
10805 + pfe_hif_lib_exit(pfe);
10806 +
10807 + pfe_hw_exit(pfe);
10808 + }
10809 +
10810 + return 0;
10811 +}
10812 +
10813 +static int pfe_platform_resume(struct device *dev)
10814 +{
10815 + struct pfe *pfe = platform_get_drvdata(to_platform_device(dev));
10816 + struct net_device *netdev;
10817 + int i;
10818 +
10819 + if (!pfe->wake) {
10820 + pfe_hw_init(pfe, 1);
10821 + pfe_hif_lib_init(pfe);
10822 + pfe_hif_init(pfe);
10823 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
10824 + util_enable();
10825 +#endif
10826 + tmu_enable(0xf);
10827 + class_enable();
10828 + pfe_ctrl_resume(&pfe->ctrl);
10829 + }
10830 +
10831 + for (i = 0; i < (NUM_GEMAC_SUPPORT); i++) {
10832 + netdev = pfe->eth.eth_priv[i]->ndev;
10833 +
10834 + if (pfe->mdio.mdio_priv[i]->mii_bus)
10835 + pfe_eth_mdio_reset(pfe->mdio.mdio_priv[i]->mii_bus);
10836 +
10837 + if (netif_running(netdev))
10838 + pfe_eth_resume(netdev);
10839 +
10840 + netif_device_attach(netdev);
10841 + }
10842 + return 0;
10843 +}
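The wake flag computed in suspend above decides how much work resume has to do: if any running port armed wake-on-LAN, the engine was left powered and only the netdevs are re-attached; otherwise the whole PFE is rebuilt. In sketch form:

	/*
	 * suspend: wake |= pfe_eth_suspend() result per running port;
	 *          if !wake, tear down firmware, HIF and hw
	 * resume:  if !wake, pfe_hw_init(pfe, 1), HIF, util/tmu/class;
	 *          always: MDIO reset, pfe_eth_resume(), netif attach
	 */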
10844 +#else
10845 +#define pfe_platform_suspend NULL
10846 +#define pfe_platform_resume NULL
10847 +#endif
10848 +
10849 +static const struct dev_pm_ops pfe_platform_pm_ops = {
10850 + SET_SYSTEM_SLEEP_PM_OPS(pfe_platform_suspend, pfe_platform_resume)
10851 +};
10852 +#endif
10853 +
10854 +static const struct of_device_id pfe_match[] = {
10855 + {
10856 + .compatible = "fsl,pfe",
10857 + },
10858 + {},
10859 +};
10860 +MODULE_DEVICE_TABLE(of, pfe_match);
10861 +
10862 +static struct platform_driver pfe_platform_driver = {
10863 + .probe = pfe_platform_probe,
10864 + .remove = pfe_platform_remove,
10865 + .driver = {
10866 + .name = "pfe",
10867 + .of_match_table = pfe_match,
10868 +#ifdef CONFIG_PM
10869 + .pm = &pfe_platform_pm_ops,
10870 +#endif
10871 + },
10872 +};
10873 +
10874 +module_platform_driver(pfe_platform_driver);
10875 +MODULE_LICENSE("GPL");
10876 +MODULE_DESCRIPTION("PFE Ethernet driver");
10877 +MODULE_AUTHOR("NXP DNCPE");
10878 --- /dev/null
10879 +++ b/drivers/staging/fsl_ppfe/pfe_mod.c
10880 @@ -0,0 +1,158 @@
10881 +// SPDX-License-Identifier: GPL-2.0+
10882 +/*
10883 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
10884 + * Copyright 2017 NXP
10885 + */
10886 +
10887 +#include <linux/dma-mapping.h>
10888 +#include "pfe_mod.h"
10889 +#include "pfe_cdev.h"
10890 +
10891 +unsigned int us;
10892 +module_param(us, uint, 0444);
10893 +MODULE_PARM_DESC(us, "0: module enabled for kernel networking (DEFAULT)\n"
10894 + "1: module enabled for userspace networking\n");
10895 +struct pfe *pfe;
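The us parameter above selects the data path for the whole driver and, with 0444 permissions, can only be chosen at module load time. Its effect, as wired up in pfe_probe()/pfe_remove() below:

	/*
	 * us == 0: kernel networking; the HIF RX/TX paths are initialized
	 * us == 1: userspace networking; HIF is skipped and a character
	 *          device is created via pfe_cdev_init() instead
	 */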
10896 +
10897 +/*
10898 + * pfe_probe -
10899 + */
10900 +int pfe_probe(struct pfe *pfe)
10901 +{
10902 + int rc;
10903 +
10904 + if (pfe->ddr_size < DDR_MAX_SIZE) {
10905 + pr_err("%s: required DDR memory (%x) above platform ddr memory (%x)\n",
10906 + __func__, (unsigned int)DDR_MAX_SIZE, pfe->ddr_size);
10907 + rc = -ENOMEM;
10908 + goto err_hw;
10909 + }
10910 +
10911 + if (((int)(pfe->ddr_phys_baseaddr + BMU2_DDR_BASEADDR) &
10912 + (8 * SZ_1M - 1)) != 0) {
10913 + pr_err("%s: BMU2 base address (0x%x) must be aligned on 8MB boundary\n",
10914 + __func__, (int)pfe->ddr_phys_baseaddr +
10915 + BMU2_DDR_BASEADDR);
10916 + rc = -ENOMEM;
10917 + goto err_hw;
10918 + }
10919 +
10920 + pr_info("cbus_baseaddr: %lx, ddr_baseaddr: %lx, ddr_phys_baseaddr: %lx, ddr_size: %x\n",
10921 + (unsigned long)pfe->cbus_baseaddr,
10922 + (unsigned long)pfe->ddr_baseaddr,
10923 + pfe->ddr_phys_baseaddr, pfe->ddr_size);
10924 +
10925 + pfe_lib_init(pfe->cbus_baseaddr, pfe->ddr_baseaddr,
10926 + pfe->ddr_phys_baseaddr, pfe->ddr_size);
10927 +
10928 + rc = pfe_hw_init(pfe, 0);
10929 + if (rc < 0)
10930 + goto err_hw;
10931 +
10932 + if (us)
10933 + goto firmware_init;
10934 +
10935 + rc = pfe_hif_lib_init(pfe);
10936 + if (rc < 0)
10937 + goto err_hif_lib;
10938 +
10939 + rc = pfe_hif_init(pfe);
10940 + if (rc < 0)
10941 + goto err_hif;
10942 +
10943 +firmware_init:
10944 + rc = pfe_firmware_init(pfe);
10945 + if (rc < 0)
10946 + goto err_firmware;
10947 +
10948 + rc = pfe_ctrl_init(pfe);
10949 + if (rc < 0)
10950 + goto err_ctrl;
10951 +
10952 + rc = pfe_eth_init(pfe);
10953 + if (rc < 0)
10954 + goto err_eth;
10955 +
10956 + rc = pfe_sysfs_init(pfe);
10957 + if (rc < 0)
10958 + goto err_sysfs;
10959 +
10960 + rc = pfe_debugfs_init(pfe);
10961 + if (rc < 0)
10962 + goto err_debugfs;
10963 +
10964 + if (us) {
10965 + /* Creating a character device */
10966 + rc = pfe_cdev_init();
10967 + if (rc < 0)
10968 + goto err_cdev;
10969 + }
10970 +
10971 + return 0;
10972 +
10973 +err_cdev:
10974 + pfe_debugfs_exit(pfe);
10975 +
10976 +err_debugfs:
10977 + pfe_sysfs_exit(pfe);
10978 +
10979 +err_sysfs:
10980 + pfe_eth_exit(pfe);
10981 +
10982 +err_eth:
10983 + pfe_ctrl_exit(pfe);
10984 +
10985 +err_ctrl:
10986 + pfe_firmware_exit(pfe);
10987 +
10988 +err_firmware:
10989 + if (us)
10990 + goto err_hif_lib;
10991 +
10992 + pfe_hif_exit(pfe);
10993 +
10994 +err_hif:
10995 + pfe_hif_lib_exit(pfe);
10996 +
10997 +err_hif_lib:
10998 + pfe_hw_exit(pfe);
10999 +
11000 +err_hw:
11001 + return rc;
11002 +}
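pfe_probe() above brings the blocks up in a fixed order and the err_* labels unwind them in exact reverse; with us=1 the two HIF stages are skipped in both directions. A compact restatement of that contract:

	/*
	 * up:   hw -> hif_lib -> hif -> firmware -> ctrl -> eth
	 *       -> sysfs -> debugfs -> cdev (us=1 only)
	 * down: the same list in reverse, via the err_* labels
	 * us=1: hif_lib and hif are bypassed on the way up and down
	 */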
11003 +
11004 +/*
11005 + * pfe_remove -
11006 + */
11007 +int pfe_remove(struct pfe *pfe)
11008 +{
11009 + pr_info("%s\n", __func__);
11010 +
11011 + if (us)
11012 + pfe_cdev_exit();
11013 +
11014 + pfe_debugfs_exit(pfe);
11015 +
11016 + pfe_sysfs_exit(pfe);
11017 +
11018 + pfe_eth_exit(pfe);
11019 +
11020 + pfe_ctrl_exit(pfe);
11021 +
11022 +#if defined(LS1012A_PFE_RESET_WA)
11023 + pfe_hif_rx_idle(&pfe->hif);
11024 +#endif
11025 + pfe_firmware_exit(pfe);
11026 +
11027 + if (us)
11028 + goto hw_exit;
11029 +
11030 + pfe_hif_exit(pfe);
11031 +
11032 + pfe_hif_lib_exit(pfe);
11033 +
11034 +hw_exit:
11035 + pfe_hw_exit(pfe);
11036 +
11037 + return 0;
11038 +}
11039 --- /dev/null
11040 +++ b/drivers/staging/fsl_ppfe/pfe_mod.h
11041 @@ -0,0 +1,103 @@
11042 +/* SPDX-License-Identifier: GPL-2.0+ */
11043 +/*
11044 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
11045 + * Copyright 2017 NXP
11046 + */
11047 +
11048 +#ifndef _PFE_MOD_H_
11049 +#define _PFE_MOD_H_
11050 +
11051 +#include <linux/device.h>
11052 +#include <linux/elf.h>
11053 +
11054 +extern unsigned int us;
11055 +
11056 +struct pfe;
11057 +
11058 +#include "pfe_hw.h"
11059 +#include "pfe_firmware.h"
11060 +#include "pfe_ctrl.h"
11061 +#include "pfe_hif.h"
11062 +#include "pfe_hif_lib.h"
11063 +#include "pfe_eth.h"
11064 +#include "pfe_sysfs.h"
11065 +#include "pfe_perfmon.h"
11066 +#include "pfe_debugfs.h"
11067 +
11068 +#define PHYID_MAX_VAL 32
11069 +
11070 +struct pfe_tmu_credit {
11071 + /* Number of allowed in-flight TX packets, matches TMU queue size */
11072 + unsigned int tx_credit[NUM_GEMAC_SUPPORT][EMAC_TXQ_CNT];
11073 + unsigned int tx_credit_max[NUM_GEMAC_SUPPORT][EMAC_TXQ_CNT];
11074 + unsigned int tx_packets[NUM_GEMAC_SUPPORT][EMAC_TXQ_CNT];
11075 +};
11076 +
11077 +struct pfe {
11078 + struct regmap *scfg;
11079 + unsigned long ddr_phys_baseaddr;
11080 + void *ddr_baseaddr;
11081 + unsigned int ddr_size;
11082 + void *cbus_baseaddr;
11083 + void *apb_baseaddr;
11084 + unsigned long iram_phys_baseaddr;
11085 + void *iram_baseaddr;
11086 + unsigned long ipsec_phys_baseaddr;
11087 + void *ipsec_baseaddr;
11088 + int hif_irq;
11089 + int wol_irq;
11090 + int hif_client_irq;
11091 + struct device *dev;
11092 + struct dentry *dentry;
11093 + struct pfe_ctrl ctrl;
11094 + struct pfe_hif hif;
11095 + struct pfe_eth eth;
11096 + struct pfe_mdio mdio;
11097 + struct hif_client_s *hif_client[HIF_CLIENTS_MAX];
11098 +#if defined(CFG_DIAGS)
11099 + struct pfe_diags diags;
11100 +#endif
11101 + struct pfe_tmu_credit tmu_credit;
11102 + struct pfe_cpumon cpumon;
11103 + struct pfe_memmon memmon;
11104 + int wake;
11105 + int mdio_muxval[PHYID_MAX_VAL];
11106 + struct clk *hfe_clock;
11107 +};
11108 +
11109 +extern struct pfe *pfe;
11110 +
11111 +int pfe_probe(struct pfe *pfe);
11112 +int pfe_remove(struct pfe *pfe);
11113 +
11114 +/* DDR Mapping in reserved memory */
11115 +#define ROUTE_TABLE_BASEADDR 0
11116 +#define ROUTE_TABLE_HASH_BITS 15 /* 32K entries */
11117 +#define ROUTE_TABLE_SIZE ((1 << ROUTE_TABLE_HASH_BITS) \
11118 + * CLASS_ROUTE_SIZE)
11119 +#define BMU2_DDR_BASEADDR (ROUTE_TABLE_BASEADDR + ROUTE_TABLE_SIZE)
11120 +#define BMU2_BUF_COUNT (4096 - 256)
11121 +/* This is to get a total DDR size of 12MiB */
11122 +#define BMU2_DDR_SIZE (DDR_BUF_SIZE * BMU2_BUF_COUNT)
11123 +#define UTIL_CODE_BASEADDR (BMU2_DDR_BASEADDR + BMU2_DDR_SIZE)
11124 +#define UTIL_CODE_SIZE (128 * SZ_1K)
11125 +#define UTIL_DDR_DATA_BASEADDR (UTIL_CODE_BASEADDR + UTIL_CODE_SIZE)
11126 +#define UTIL_DDR_DATA_SIZE (64 * SZ_1K)
11127 +#define CLASS_DDR_DATA_BASEADDR (UTIL_DDR_DATA_BASEADDR + UTIL_DDR_DATA_SIZE)
11128 +#define CLASS_DDR_DATA_SIZE (32 * SZ_1K)
11129 +#define TMU_DDR_DATA_BASEADDR (CLASS_DDR_DATA_BASEADDR + CLASS_DDR_DATA_SIZE)
11130 +#define TMU_DDR_DATA_SIZE (32 * SZ_1K)
11131 +#define TMU_LLM_BASEADDR (TMU_DDR_DATA_BASEADDR + TMU_DDR_DATA_SIZE)
11132 +#define TMU_LLM_QUEUE_LEN (8 * 512)
11133 +/* Must be power of two and at least 16 * 8 = 128 bytes */
11134 +#define TMU_LLM_SIZE (4 * 16 * TMU_LLM_QUEUE_LEN)
11135 +/* (4 TMU's x 16 queues x queue_len) */
11136 +
11137 +#define DDR_MAX_SIZE (TMU_LLM_BASEADDR + TMU_LLM_SIZE)
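The macro chain above is easier to audit with the byte offsets written out. The figures below assume CLASS_ROUTE_SIZE = 128 and DDR_BUF_SIZE = 2048 (both defined elsewhere in this patch); with those values the regions sum to the 12 MiB noted above:

	/*
	 * ROUTE_TABLE   0x000000   4 MiB    (32768 entries * 128 B)
	 * BMU2 bufs     0x400000   7.5 MiB  (3840 buffers * 2048 B)
	 * UTIL code     0xb80000   128 KiB
	 * UTIL data     0xba0000   64 KiB
	 * CLASS data    0xbb0000   32 KiB
	 * TMU data      0xbb8000   32 KiB
	 * TMU LLM       0xbc0000   256 KiB  (4 * 16 * 4096 B)
	 * DDR_MAX_SIZE  0xc00000   == 12 MiB
	 */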
11138 +
11139 +/* LMEM Mapping */
11140 +#define BMU1_LMEM_BASEADDR 0
11141 +#define BMU1_BUF_COUNT 256
11142 +#define BMU1_LMEM_SIZE (LMEM_BUF_SIZE * BMU1_BUF_COUNT)
11143 +
11144 +#endif /* _PFE_MOD_H_ */
11145 --- /dev/null
11146 +++ b/drivers/staging/fsl_ppfe/pfe_perfmon.h
11147 @@ -0,0 +1,26 @@
11148 +/* SPDX-License-Identifier: GPL-2.0+ */
11149 +/*
11150 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
11151 + * Copyright 2017 NXP
11152 + */
11153 +
11154 +#ifndef _PFE_PERFMON_H_
11155 +#define _PFE_PERFMON_H_
11156 +
11157 +#include "pfe/pfe.h"
11158 +
11159 +#define CT_CPUMON_INTERVAL (1 * TIMER_TICKS_PER_SEC)
11160 +
11161 +struct pfe_cpumon {
11162 + u32 cpu_usage_pct[MAX_PE];
11163 + u32 class_usage_pct;
11164 +};
11165 +
11166 +struct pfe_memmon {
11167 + u32 kernel_memory_allocated;
11168 +};
11169 +
11170 +int pfe_perfmon_init(struct pfe *pfe);
11171 +void pfe_perfmon_exit(struct pfe *pfe);
11172 +
11173 +#endif /* _PFE_PERFMON_H_ */
11174 --- /dev/null
11175 +++ b/drivers/staging/fsl_ppfe/pfe_sysfs.c
11176 @@ -0,0 +1,840 @@
11177 +// SPDX-License-Identifier: GPL-2.0+
11178 +/*
11179 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
11180 + * Copyright 2017 NXP
11181 + */
11182 +
11183 +#include <linux/module.h>
11184 +#include <linux/platform_device.h>
11185 +
11186 +#include "pfe_mod.h"
11187 +
11188 +#define PE_EXCEPTION_DUMP_ADDRESS 0x1fa8
11189 +#define NUM_QUEUES 16
11190 +
11191 +static char register_name[20][5] = {
11192 + "EPC", "ECAS", "EID", "ED",
11193 + "r0", "r1", "r2", "r3",
11194 + "r4", "r5", "r6", "r7",
11195 + "r8", "r9", "r10", "r11",
11196 + "r12", "r13", "r14", "r15",
11197 +};
11198 +
11199 +static char exception_name[14][20] = {
11200 + "Reset",
11201 + "HardwareFailure",
11202 + "NMI",
11203 + "InstBreakpoint",
11204 + "DataBreakpoint",
11205 + "Unsupported",
11206 + "PrivilegeViolation",
11207 + "InstBusError",
11208 + "DataBusError",
11209 + "AlignmentError",
11210 + "ArithmeticError",
11211 + "SystemCall",
11212 + "MemoryManagement",
11213 + "Interrupt",
11214 +};
11215 +
11216 +static unsigned long class_do_clear;
11217 +static unsigned long tmu_do_clear;
11218 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
11219 +static unsigned long util_do_clear;
11220 +#endif
11221 +
11222 +static ssize_t display_pe_status(char *buf, int id, u32 dmem_addr, unsigned long
11223 + do_clear)
11224 +{
11225 + ssize_t len = 0;
11226 + u32 val;
11227 + char statebuf[5];
11228 + struct pfe_cpumon *cpumon = &pfe->cpumon;
11229 + u32 debug_indicator;
11230 + u32 debug[20];
11231 +
11232 + if (id < CLASS0_ID || id >= MAX_PE)
11233 + return len;
11234 +
11235 + *(u32 *)statebuf = pe_dmem_read(id, dmem_addr, 4);
11236 + dmem_addr += 4;
11237 +
11238 + statebuf[4] = '\0';
11239 + len += sprintf(buf + len, "state=%4s ", statebuf);
11240 +
11241 + val = pe_dmem_read(id, dmem_addr, 4);
11242 + dmem_addr += 4;
11243 + len += sprintf(buf + len, "ctr=%08x ", cpu_to_be32(val));
11244 +
11245 + val = pe_dmem_read(id, dmem_addr, 4);
11246 + if (do_clear && val)
11247 + pe_dmem_write(id, 0, dmem_addr, 4);
11248 + dmem_addr += 4;
11249 + len += sprintf(buf + len, "rx=%u ", cpu_to_be32(val));
11250 +
11251 + val = pe_dmem_read(id, dmem_addr, 4);
11252 + if (do_clear && val)
11253 + pe_dmem_write(id, 0, dmem_addr, 4);
11254 + dmem_addr += 4;
11255 + if (id >= TMU0_ID && id <= TMU_MAX_ID)
11256 + len += sprintf(buf + len, "qstatus=%x", cpu_to_be32(val));
11257 + else
11258 + len += sprintf(buf + len, "tx=%u", cpu_to_be32(val));
11259 +
11260 + val = pe_dmem_read(id, dmem_addr, 4);
11261 + if (do_clear && val)
11262 + pe_dmem_write(id, 0, dmem_addr, 4);
11263 + dmem_addr += 4;
11264 + if (val)
11265 + len += sprintf(buf + len, " drop=%u", cpu_to_be32(val));
11266 +
11267 + len += sprintf(buf + len, " load=%d%%", cpumon->cpu_usage_pct[id]);
11268 +
11269 + len += sprintf(buf + len, "\n");
11270 +
11271 + debug_indicator = pe_dmem_read(id, dmem_addr, 4);
11272 + dmem_addr += 4;
11273 + if (!strncmp((char *)&debug_indicator, "DBUG", 4)) {
11274 + int j, last = 0;
11275 +
11276 + for (j = 0; j < 16; j++) {
11277 + debug[j] = pe_dmem_read(id, dmem_addr, 4);
11278 + if (debug[j]) {
11279 + if (do_clear)
11280 + pe_dmem_write(id, 0, dmem_addr, 4);
11281 + last = j + 1;
11282 + }
11283 + dmem_addr += 4;
11284 + }
11285 + for (j = 0; j < last; j++) {
11286 + len += sprintf(buf + len, "%08x%s",
11287 + cpu_to_be32(debug[j]),
11288 + (j & 0x7) == 0x7 || j == last - 1 ? "\n" : " ");
11289 + }
11290 + }
11291 +
11292 + if (!strncmp(statebuf, "DEAD", 4)) {
11293 + u32 i, dump = PE_EXCEPTION_DUMP_ADDRESS;
11294 +
11295 + len += sprintf(buf + len, "Exception details:\n");
11296 + for (i = 0; i < 20; i++) {
11297 + debug[i] = pe_dmem_read(id, dump, 4);
11298 + dump += 4;
11299 + if (i == 2)
11300 + len += sprintf(buf + len, "%4s = %08x (=%s) ",
11301 + register_name[i], cpu_to_be32(debug[i]),
11302 + exception_name[min((u32)
11303 + cpu_to_be32(debug[i]), (u32)13)]);
11304 + else
11305 + len += sprintf(buf + len, "%4s = %08x%s",
11306 + register_name[i], cpu_to_be32(debug[i]),
11307 + (i & 0x3) == 0x3 || i == 19 ? "\n" : " ");
11308 + }
11309 + }
11310 +
11311 + return len;
11312 +}
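For orientation, a healthy class PE renders as one line in the format built above, followed by the optional DBUG words; the values here are illustrative:

	state= RUN ctr=00012345 rx=1024 tx=987 load=2%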
11313 +
11314 +static ssize_t class_phy_stats(char *buf, int phy)
11315 +{
11316 + ssize_t len = 0;
11317 + int off1 = phy * 0x28;
11318 + int off2 = phy * 0x10;
11319 +
11320 + if (phy == 3)
11321 + off1 = CLASS_PHY4_RX_PKTS - CLASS_PHY1_RX_PKTS;
11322 +
11323 + len += sprintf(buf + len, "phy: %d\n", phy);
11324 + len += sprintf(buf + len,
11325 + " rx: %10u, tx: %10u, intf: %10u, ipv4: %10u, ipv6: %10u\n",
11326 + readl(CLASS_PHY1_RX_PKTS + off1),
11327 + readl(CLASS_PHY1_TX_PKTS + off1),
11328 + readl(CLASS_PHY1_INTF_MATCH_PKTS + off1),
11329 + readl(CLASS_PHY1_V4_PKTS + off1),
11330 + readl(CLASS_PHY1_V6_PKTS + off1));
11331 +
11332 + len += sprintf(buf + len,
11333 + " icmp: %10u, igmp: %10u, tcp: %10u, udp: %10u\n",
11334 + readl(CLASS_PHY1_ICMP_PKTS + off2),
11335 + readl(CLASS_PHY1_IGMP_PKTS + off2),
11336 + readl(CLASS_PHY1_TCP_PKTS + off2),
11337 + readl(CLASS_PHY1_UDP_PKTS + off2));
11338 +
11339 + len += sprintf(buf + len, " err\n");
11340 + len += sprintf(buf + len,
11341 + " lp: %10u, intf: %10u, l3: %10u, chcksum: %10u, ttl: %10u\n",
11342 + readl(CLASS_PHY1_LP_FAIL_PKTS + off1),
11343 + readl(CLASS_PHY1_INTF_FAIL_PKTS + off1),
11344 + readl(CLASS_PHY1_L3_FAIL_PKTS + off1),
11345 + readl(CLASS_PHY1_CHKSUM_ERR_PKTS + off1),
11346 + readl(CLASS_PHY1_TTL_ERR_PKTS + off1));
11347 +
11348 + return len;
11349 +}
11350 +
11351 +/* qm_read_drop_stat
11352 + * This function is used to read the drop statistics from the TMU
11353 + * hw drop counter. Since the hw counter is always cleared after
11354 + * reading, this function maintains the previous drop count, and
11355 + * adds the new value to it. That value can be retrieved by
11356 + * passing a pointer to it with the total_drops arg.
11357 + *
11358 + * @param tmu TMU number (0 - 3)
11359 + * @param queue queue number (0 - 15)
11360 + * @param total_drops pointer to location to store total drops (or NULL)
11361 + * @param do_reset if TRUE, clear total drops after updating
11362 + */
11363 +u32 qm_read_drop_stat(u32 tmu, u32 queue, u32 *total_drops, int do_reset)
11364 +{
11365 + static u32 qtotal[TMU_MAX_ID + 1][NUM_QUEUES];
11366 + u32 val;
11367 +
11368 + writel((tmu << 8) | queue, TMU_TEQ_CTRL);
11369 + writel((tmu << 8) | queue, TMU_LLM_CTRL);
11370 + val = readl(TMU_TEQ_DROP_STAT);
11371 + qtotal[tmu][queue] += val;
11372 + if (total_drops)
11373 + *total_drops = qtotal[tmu][queue];
11374 + if (do_reset)
11375 + qtotal[tmu][queue] = 0;
11376 + return val;
11377 +}
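A short usage sketch of the accumulate-and-clear contract documented above (tmu/queue numbers illustrative):

	u32 delta, total;

	/* delta = drops since the previous read; total = running sum */
	delta = qm_read_drop_stat(0, 5, &total, 0);

	/* same read, but zero the software running sum afterwards */
	qm_read_drop_stat(0, 5, &total, 1);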
11378 +
11379 +static ssize_t tmu_queue_stats(char *buf, int tmu, int queue)
11380 +{
11381 + ssize_t len = 0;
11382 + u32 drops;
11383 +
11384 + len += sprintf(buf + len, "%d-%02d, ", tmu, queue);
11385 +
11386 + drops = qm_read_drop_stat(tmu, queue, NULL, 0);
11387 +
11388 + /* Select queue */
11389 + writel((tmu << 8) | queue, TMU_TEQ_CTRL);
11390 + writel((tmu << 8) | queue, TMU_LLM_CTRL);
11391 +
11392 + len += sprintf(buf + len,
11393 + "(teq) drop: %10u, tx: %10u (llm) head: %08x, tail: %08x, drop: %10u\n",
11394 + drops, readl(TMU_TEQ_TRANS_STAT),
11395 + readl(TMU_LLM_QUE_HEADPTR), readl(TMU_LLM_QUE_TAILPTR),
11396 + readl(TMU_LLM_QUE_DROPCNT));
11397 +
11398 + return len;
11399 +}
11400 +
11401 +static ssize_t tmu_queues(char *buf, int tmu)
11402 +{
11403 + ssize_t len = 0;
11404 + int queue;
11405 +
11406 + for (queue = 0; queue < 16; queue++)
11407 + len += tmu_queue_stats(buf + len, tmu, queue);
11408 +
11409 + return len;
11410 +}
11411 +
11412 +static ssize_t block_version(char *buf, void *addr)
11413 +{
11414 + ssize_t len = 0;
11415 + u32 val;
11416 +
11417 + val = readl(addr);
11418 + len += sprintf(buf + len, "revision: %x, version: %x, id: %x\n",
11419 + (val >> 24) & 0xff, (val >> 16) & 0xff, val & 0xffff);
11420 +
11421 + return len;
11422 +}
11423 +
11424 +static ssize_t bmu(char *buf, int id, void *base)
11425 +{
11426 + ssize_t len = 0;
11427 +
11428 + len += sprintf(buf + len, "%s: %d\n ", __func__, id);
11429 +
11430 + len += block_version(buf + len, base + BMU_VERSION);
11431 +
11432 + len += sprintf(buf + len, " buf size: %x\n", (1 << readl(base +
11433 + BMU_BUF_SIZE)));
11434 + len += sprintf(buf + len, " buf count: %x\n", readl(base +
11435 + BMU_BUF_CNT));
11436 + len += sprintf(buf + len, " buf rem: %x\n", readl(base +
11437 + BMU_REM_BUF_CNT));
11438 + len += sprintf(buf + len, " buf curr: %x\n", readl(base +
11439 + BMU_CURR_BUF_CNT));
11440 + len += sprintf(buf + len, " free err: %x\n", readl(base +
11441 + BMU_FREE_ERR_ADDR));
11442 +
11443 + return len;
11444 +}
11445 +
11446 +static ssize_t gpi(char *buf, int id, void *base)
11447 +{
11448 + ssize_t len = 0;
11449 + u32 val;
11450 +
11451 + len += sprintf(buf + len, "%s%d:\n ", __func__, id);
11452 + len += block_version(buf + len, base + GPI_VERSION);
11453 +
11454 + len += sprintf(buf + len, " tx under stick: %x\n", readl(base +
11455 + GPI_FIFO_STATUS));
11456 + val = readl(base + GPI_FIFO_DEBUG);
11457 + len += sprintf(buf + len, " tx pkts: %x\n", (val >> 23) &
11458 + 0x3f);
11459 + len += sprintf(buf + len, " rx pkts: %x\n", (val >> 18) &
11460 + 0x3f);
11461 + len += sprintf(buf + len, " tx bytes: %x\n", (val >> 9) &
11462 + 0x1ff);
11463 + len += sprintf(buf + len, " rx bytes: %x\n", (val >> 0) &
11464 + 0x1ff);
11465 + len += sprintf(buf + len, " overrun: %x\n", readl(base +
11466 + GPI_OVERRUN_DROPCNT));
11467 +
11468 + return len;
11469 +}
11470 +
11471 +static ssize_t pfe_set_class(struct device *dev, struct device_attribute *attr,
11472 + const char *buf, size_t count)
11473 +{
11474 + if (kstrtoul(buf, 0, &class_do_clear))
11475 + return -EINVAL;
11475 + return count;
11476 +}
11477 +
11478 +static ssize_t pfe_show_class(struct device *dev, struct device_attribute *attr,
11479 + char *buf)
11480 +{
11481 + ssize_t len = 0;
11482 + int id;
11483 + u32 val;
11484 + struct pfe_cpumon *cpumon = &pfe->cpumon;
11485 +
11486 + len += block_version(buf + len, CLASS_VERSION);
11487 +
11488 + for (id = CLASS0_ID; id <= CLASS_MAX_ID; id++) {
11489 + len += sprintf(buf + len, "%d: ", id - CLASS0_ID);
11490 +
11491 + val = readl(CLASS_PE0_DEBUG + id * 4);
11492 + len += sprintf(buf + len, "pc=1%04x ", val & 0xffff);
11493 +
11494 + len += display_pe_status(buf + len, id, CLASS_DM_PESTATUS,
11495 + class_do_clear);
11496 + }
11497 + len += sprintf(buf + len, "aggregate load=%d%%\n\n",
11498 + cpumon->class_usage_pct);
11499 +
11500 + len += sprintf(buf + len, "pe status: 0x%x\n",
11501 + readl(CLASS_PE_STATUS));
11502 + len += sprintf(buf + len, "max buf cnt: 0x%x afull thres: 0x%x\n",
11503 + readl(CLASS_MAX_BUF_CNT), readl(CLASS_AFULL_THRES));
11504 + len += sprintf(buf + len, "tsq max cnt: 0x%x tsq fifo thres: 0x%x\n",
11505 + readl(CLASS_TSQ_MAX_CNT), readl(CLASS_TSQ_FIFO_THRES));
11506 + len += sprintf(buf + len, "state: 0x%x\n", readl(CLASS_STATE));
11507 +
11508 + len += class_phy_stats(buf + len, 0);
11509 + len += class_phy_stats(buf + len, 1);
11510 + len += class_phy_stats(buf + len, 2);
11511 + len += class_phy_stats(buf + len, 3);
11512 +
11513 + return len;
11514 +}
11515 +
11516 +static ssize_t pfe_set_tmu(struct device *dev, struct device_attribute *attr,
11517 + const char *buf, size_t count)
11518 +{
11519 + if (kstrtoul(buf, 0, &tmu_do_clear))
11520 + return -EINVAL;
11520 + return count;
11521 +}
11522 +
11523 +static ssize_t pfe_show_tmu(struct device *dev, struct device_attribute *attr,
11524 + char *buf)
11525 +{
11526 + ssize_t len = 0;
11527 + int id;
11528 + u32 val;
11529 +
11530 + len += block_version(buf + len, TMU_VERSION);
11531 +
11532 + for (id = TMU0_ID; id <= TMU_MAX_ID; id++) {
11533 + if (id == TMU2_ID)
11534 + continue;
11535 + len += sprintf(buf + len, "%d: ", id - TMU0_ID);
11536 +
11537 + len += display_pe_status(buf + len, id, TMU_DM_PESTATUS,
11538 + tmu_do_clear);
11539 + }
11540 +
11541 + len += sprintf(buf + len, "pe status: %x\n", readl(TMU_PE_STATUS));
11542 + len += sprintf(buf + len, "inq fifo cnt: %x\n",
11543 + readl(TMU_PHY_INQ_FIFO_CNT));
11544 + val = readl(TMU_INQ_STAT);
11545 + len += sprintf(buf + len, "inq wr ptr: %x\n", val & 0x3ff);
11546 + len += sprintf(buf + len, "inq rd ptr: %x\n", val >> 10);
11547 +
11548 + return len;
11549 +}
11550 +
11551 +static unsigned long drops_do_clear;
11552 +static u32 class_drop_counter[CLASS_NUM_DROP_COUNTERS];
11553 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
11554 +static u32 util_drop_counter[UTIL_NUM_DROP_COUNTERS];
11555 +#endif
11556 +
11557 +char *class_drop_description[CLASS_NUM_DROP_COUNTERS] = {
11558 + "ICC",
11559 + "Host Pkt Error",
11560 + "Rx Error",
11561 + "IPsec Outbound",
11562 + "IPsec Inbound",
11563 + "EXPT IPsec Error",
11564 + "Reassembly",
11565 + "Fragmenter",
11566 + "NAT-T",
11567 + "Socket",
11568 + "Multicast",
11569 + "NAT-PT",
11570 + "Tx Disabled",
11571 +};
11572 +
11573 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
11574 +char *util_drop_description[UTIL_NUM_DROP_COUNTERS] = {
11575 + "IPsec Outbound",
11576 + "IPsec Inbound",
11577 + "IPsec Rate Limiter",
11578 + "Fragmenter",
11579 + "Socket",
11580 + "Tx Disabled",
11581 + "Rx Error",
11582 +};
11583 +#endif
11584 +
11585 +static ssize_t pfe_set_drops(struct device *dev, struct device_attribute *attr,
11586 + const char *buf, size_t count)
11587 +{
11588 + if (kstrtoul(buf, 0, &drops_do_clear))
11589 + return -EINVAL;
11589 + return count;
11590 +}
11591 +
11592 +static u32 tmu_drops[4][16];
11593 +static ssize_t pfe_show_drops(struct device *dev, struct device_attribute *attr,
11594 + char *buf)
11595 +{
11596 + ssize_t len = 0;
11597 + int id, dropnum;
11598 + int tmu, queue;
11599 + u32 val;
11600 + u32 dmem_addr;
11601 + int num_class_drops = 0, num_tmu_drops = 0, num_util_drops = 0;
11602 + struct pfe_ctrl *ctrl = &pfe->ctrl;
11603 +
11604 + memset(class_drop_counter, 0, sizeof(class_drop_counter));
11605 + for (id = CLASS0_ID; id <= CLASS_MAX_ID; id++) {
11606 + if (drops_do_clear)
11607 + pe_sync_stop(ctrl, (1 << id));
11608 + for (dropnum = 0; dropnum < CLASS_NUM_DROP_COUNTERS;
11609 + dropnum++) {
11610 + dmem_addr = CLASS_DM_DROP_CNTR + (dropnum * 4);
11611 + val = be32_to_cpu(pe_dmem_read(id, dmem_addr, 4));
11612 + class_drop_counter[dropnum] += val;
11613 + num_class_drops += val;
11614 + if (drops_do_clear)
11615 + pe_dmem_write(id, 0, dmem_addr, 4);
11616 + }
11617 + if (drops_do_clear)
11618 + pe_start(ctrl, (1 << id));
11619 + }
11620 +
11621 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
11622 + if (drops_do_clear)
11623 + pe_sync_stop(ctrl, (1 << UTIL_ID));
11624 + for (dropnum = 0; dropnum < UTIL_NUM_DROP_COUNTERS; dropnum++) {
11625 + dmem_addr = UTIL_DM_DROP_CNTR + (dropnum * 4);
11626 + val = be32_to_cpu(pe_dmem_read(UTIL_ID, dmem_addr, 4));
11627 + util_drop_counter[dropnum] = val;
11628 + num_util_drops += val;
11629 + if (drops_do_clear)
11630 + pe_dmem_write(UTIL_ID, 0, dmem_addr, 4);
11631 + }
11632 + if (drops_do_clear)
11633 + pe_start(ctrl, (1 << UTIL_ID));
11634 +#endif
11635 + for (tmu = 0; tmu < 4; tmu++) {
11636 + for (queue = 0; queue < 16; queue++) {
11637 + qm_read_drop_stat(tmu, queue, &tmu_drops[tmu][queue],
11638 + drops_do_clear);
11639 + num_tmu_drops += tmu_drops[tmu][queue];
11640 + }
11641 + }
11642 +
11643 + if (num_class_drops == 0 && num_util_drops == 0 && num_tmu_drops == 0)
11644 + len += sprintf(buf + len, "No PE drops\n\n");
11645 +
11646 + if (num_class_drops > 0) {
11647 + len += sprintf(buf + len, "Class PE drops --\n");
11648 + for (dropnum = 0; dropnum < CLASS_NUM_DROP_COUNTERS;
11649 + dropnum++) {
11650 + if (class_drop_counter[dropnum] > 0)
11651 + len += sprintf(buf + len, " %s: %d\n",
11652 + class_drop_description[dropnum],
11653 + class_drop_counter[dropnum]);
11654 + }
11655 + len += sprintf(buf + len, "\n");
11656 + }
11657 +
11658 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
11659 + if (num_util_drops > 0) {
11660 + len += sprintf(buf + len, "Util PE drops --\n");
11661 + for (dropnum = 0; dropnum < UTIL_NUM_DROP_COUNTERS; dropnum++) {
11662 + if (util_drop_counter[dropnum] > 0)
11663 + len += sprintf(buf + len, " %s: %d\n",
11664 + util_drop_description[dropnum],
11665 + util_drop_counter[dropnum]);
11666 + }
11667 + len += sprintf(buf + len, "\n");
11668 + }
11669 +#endif
11670 + if (num_tmu_drops > 0) {
11671 + len += sprintf(buf + len, "TMU drops --\n");
11672 + for (tmu = 0; tmu < 4; tmu++) {
11673 + for (queue = 0; queue < 16; queue++) {
11674 + if (tmu_drops[tmu][queue] > 0)
11675 + len += sprintf(buf + len,
11676 + " TMU%d-Q%d: %d\n"
11677 + , tmu, queue, tmu_drops[tmu][queue]);
11678 + }
11679 + }
11680 + len += sprintf(buf + len, "\n");
11681 + }
11682 +
11683 + return len;
11684 +}
11685 +
11686 +static ssize_t pfe_show_tmu0_queues(struct device *dev, struct device_attribute
11687 + *attr, char *buf)
11688 +{
11689 + return tmu_queues(buf, 0);
11690 +}
11691 +
11692 +static ssize_t pfe_show_tmu1_queues(struct device *dev, struct device_attribute
11693 + *attr, char *buf)
11694 +{
11695 + return tmu_queues(buf, 1);
11696 +}
11697 +
11698 +static ssize_t pfe_show_tmu2_queues(struct device *dev, struct device_attribute
11699 + *attr, char *buf)
11700 +{
11701 + return tmu_queues(buf, 2);
11702 +}
11703 +
11704 +static ssize_t pfe_show_tmu3_queues(struct device *dev, struct device_attribute
11705 + *attr, char *buf)
11706 +{
11707 + return tmu_queues(buf, 3);
11708 +}
11709 +
11710 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
11711 +static ssize_t pfe_set_util(struct device *dev, struct device_attribute *attr,
11712 + const char *buf, size_t count)
11713 +{
11714 + if (kstrtoul(buf, 0, &util_do_clear))
11715 + return -EINVAL;
11715 + return count;
11716 +}
11717 +
11718 +static ssize_t pfe_show_util(struct device *dev, struct device_attribute *attr,
11719 + char *buf)
11720 +{
11721 + ssize_t len = 0;
11722 + struct pfe_ctrl *ctrl = &pfe->ctrl;
11723 +
11724 + len += block_version(buf + len, UTIL_VERSION);
11725 +
11726 + pe_sync_stop(ctrl, (1 << UTIL_ID));
11727 + len += display_pe_status(buf + len, UTIL_ID, UTIL_DM_PESTATUS,
11728 + util_do_clear);
11729 + pe_start(ctrl, (1 << UTIL_ID));
11730 +
11731 + len += sprintf(buf + len, "pe status: %x\n", readl(UTIL_PE_STATUS));
11732 + len += sprintf(buf + len, "max buf cnt: %x\n",
11733 + readl(UTIL_MAX_BUF_CNT));
11734 + len += sprintf(buf + len, "tsq max cnt: %x\n",
11735 + readl(UTIL_TSQ_MAX_CNT));
11736 +
11737 + return len;
11738 +}
11739 +#endif
11740 +
11741 +static ssize_t pfe_show_bmu(struct device *dev, struct device_attribute *attr,
11742 + char *buf)
11743 +{
11744 + ssize_t len = 0;
11745 +
11746 + len += bmu(buf + len, 1, BMU1_BASE_ADDR);
11747 + len += bmu(buf + len, 2, BMU2_BASE_ADDR);
11748 +
11749 + return len;
11750 +}
11751 +
11752 +static ssize_t pfe_show_hif(struct device *dev, struct device_attribute *attr,
11753 + char *buf)
11754 +{
11755 + ssize_t len = 0;
11756 +
11757 + len += sprintf(buf + len, "hif:\n ");
11758 + len += block_version(buf + len, HIF_VERSION);
11759 +
11760 + len += sprintf(buf + len, " tx curr bd: %x\n",
11761 + readl(HIF_TX_CURR_BD_ADDR));
11762 + len += sprintf(buf + len, " tx status: %x\n",
11763 + readl(HIF_TX_STATUS));
11764 + len += sprintf(buf + len, " tx dma status: %x\n",
11765 + readl(HIF_TX_DMA_STATUS));
11766 +
11767 + len += sprintf(buf + len, " rx curr bd: %x\n",
11768 + readl(HIF_RX_CURR_BD_ADDR));
11769 + len += sprintf(buf + len, " rx status: %x\n",
11770 + readl(HIF_RX_STATUS));
11771 + len += sprintf(buf + len, " rx dma status: %x\n",
11772 + readl(HIF_RX_DMA_STATUS));
11773 +
11774 + len += sprintf(buf + len, "hif nocopy:\n ");
11775 + len += block_version(buf + len, HIF_NOCPY_VERSION);
11776 +
11777 + len += sprintf(buf + len, " tx curr bd: %x\n",
11778 + readl(HIF_NOCPY_TX_CURR_BD_ADDR));
11779 + len += sprintf(buf + len, " tx status: %x\n",
11780 + readl(HIF_NOCPY_TX_STATUS));
11781 + len += sprintf(buf + len, " tx dma status: %x\n",
11782 + readl(HIF_NOCPY_TX_DMA_STATUS));
11783 +
11784 + len += sprintf(buf + len, " rx curr bd: %x\n",
11785 + readl(HIF_NOCPY_RX_CURR_BD_ADDR));
11786 + len += sprintf(buf + len, " rx status: %x\n",
11787 + readl(HIF_NOCPY_RX_STATUS));
11788 + len += sprintf(buf + len, " rx dma status: %x\n",
11789 + readl(HIF_NOCPY_RX_DMA_STATUS));
11790 +
11791 + return len;
11792 +}
11793 +
11794 +static ssize_t pfe_show_gpi(struct device *dev, struct device_attribute *attr,
11795 + char *buf)
11796 +{
11797 + ssize_t len = 0;
11798 +
11799 + len += gpi(buf + len, 0, EGPI1_BASE_ADDR);
11800 + len += gpi(buf + len, 1, EGPI2_BASE_ADDR);
11801 + len += gpi(buf + len, 3, HGPI_BASE_ADDR);
11802 +
11803 + return len;
11804 +}
11805 +
11806 +static ssize_t pfe_show_pfemem(struct device *dev, struct device_attribute
11807 + *attr, char *buf)
11808 +{
11809 + ssize_t len = 0;
11810 + struct pfe_memmon *memmon = &pfe->memmon;
11811 +
11812 + len += sprintf(buf + len, "Kernel Memory: %d Bytes (%d KB)\n",
11813 + memmon->kernel_memory_allocated,
11814 + (memmon->kernel_memory_allocated + 1023) / 1024);
11815 +
11816 + return len;
11817 +}
11818 +
11819 +static ssize_t pfe_show_crc_revalidated(struct device *dev,
11820 + struct device_attribute *attr,
11821 + char *buf)
11822 +{
11823 + u64 crc_validated = 0;
11824 + ssize_t len = 0;
11825 + int id, phyid;
11826 +
11827 + len += sprintf(buf + len, "FCS re-validated by PFE:\n");
11828 +
11829 + for (phyid = 0; phyid < 2; phyid++) {
11830 + crc_validated = 0;
11831 + for (id = CLASS0_ID; id <= CLASS_MAX_ID; id++) {
11832 + crc_validated += be32_to_cpu(pe_dmem_read(id,
11833 + CLASS_DM_CRC_VALIDATED + (phyid * 4), 4));
11834 + }
11835 + len += sprintf(buf + len, "MAC %d:\n count:%10llu\n",
11836 + phyid, crc_validated);
11837 + }
11838 +
11839 + return len;
11840 +}
11841 +
11842 +#ifdef HIF_NAPI_STATS
11843 +static ssize_t pfe_show_hif_napi_stats(struct device *dev,
11844 + struct device_attribute *attr,
11845 + char *buf)
11846 +{
11847 + struct platform_device *pdev = to_platform_device(dev);
11848 + struct pfe *pfe = platform_get_drvdata(pdev);
11849 + ssize_t len = 0;
11850 +
11851 + len += sprintf(buf + len, "sched: %u\n",
11852 + pfe->hif.napi_counters[NAPI_SCHED_COUNT]);
11853 + len += sprintf(buf + len, "poll: %u\n",
11854 + pfe->hif.napi_counters[NAPI_POLL_COUNT]);
11855 + len += sprintf(buf + len, "packet: %u\n",
11856 + pfe->hif.napi_counters[NAPI_PACKET_COUNT]);
11857 + len += sprintf(buf + len, "budget: %u\n",
11858 + pfe->hif.napi_counters[NAPI_FULL_BUDGET_COUNT]);
11859 + len += sprintf(buf + len, "desc: %u\n",
11860 + pfe->hif.napi_counters[NAPI_DESC_COUNT]);
11861 + len += sprintf(buf + len, "full: %u\n",
11862 + pfe->hif.napi_counters[NAPI_CLIENT_FULL_COUNT]);
11863 +
11864 + return len;
11865 +}
11866 +
11867 +static ssize_t pfe_set_hif_napi_stats(struct device *dev,
11868 + struct device_attribute *attr,
11869 + const char *buf, size_t count)
11870 +{
11871 + struct platform_device *pdev = to_platform_device(dev);
11872 + struct pfe *pfe = platform_get_drvdata(pdev);
11873 +
11874 + memset(pfe->hif.napi_counters, 0, sizeof(pfe->hif.napi_counters));
11875 +
11876 + return count;
11877 +}
11878 +
11879 +static DEVICE_ATTR(hif_napi_stats, 0644, pfe_show_hif_napi_stats,
11880 + pfe_set_hif_napi_stats);
11881 +#endif
11882 +
11883 +static DEVICE_ATTR(class, 0644, pfe_show_class, pfe_set_class);
11884 +static DEVICE_ATTR(tmu, 0644, pfe_show_tmu, pfe_set_tmu);
11885 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
11886 +static DEVICE_ATTR(util, 0644, pfe_show_util, pfe_set_util);
11887 +#endif
11888 +static DEVICE_ATTR(bmu, 0444, pfe_show_bmu, NULL);
11889 +static DEVICE_ATTR(hif, 0444, pfe_show_hif, NULL);
11890 +static DEVICE_ATTR(gpi, 0444, pfe_show_gpi, NULL);
11891 +static DEVICE_ATTR(drops, 0644, pfe_show_drops, pfe_set_drops);
11892 +static DEVICE_ATTR(tmu0_queues, 0444, pfe_show_tmu0_queues, NULL);
11893 +static DEVICE_ATTR(tmu1_queues, 0444, pfe_show_tmu1_queues, NULL);
11894 +static DEVICE_ATTR(tmu2_queues, 0444, pfe_show_tmu2_queues, NULL);
11895 +static DEVICE_ATTR(tmu3_queues, 0444, pfe_show_tmu3_queues, NULL);
11896 +static DEVICE_ATTR(pfemem, 0444, pfe_show_pfemem, NULL);
11897 +static DEVICE_ATTR(fcs_revalidated, 0444, pfe_show_crc_revalidated, NULL);
11898 +
11899 +int pfe_sysfs_init(struct pfe *pfe)
11900 +{
11901 + if (device_create_file(pfe->dev, &dev_attr_class))
11902 + goto err_class;
11903 +
11904 + if (device_create_file(pfe->dev, &dev_attr_tmu))
11905 + goto err_tmu;
11906 +
11907 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
11908 + if (device_create_file(pfe->dev, &dev_attr_util))
11909 + goto err_util;
11910 +#endif
11911 +
11912 + if (device_create_file(pfe->dev, &dev_attr_bmu))
11913 + goto err_bmu;
11914 +
11915 + if (device_create_file(pfe->dev, &dev_attr_hif))
11916 + goto err_hif;
11917 +
11918 + if (device_create_file(pfe->dev, &dev_attr_gpi))
11919 + goto err_gpi;
11920 +
11921 + if (device_create_file(pfe->dev, &dev_attr_drops))
11922 + goto err_drops;
11923 +
11924 + if (device_create_file(pfe->dev, &dev_attr_tmu0_queues))
11925 + goto err_tmu0_queues;
11926 +
11927 + if (device_create_file(pfe->dev, &dev_attr_tmu1_queues))
11928 + goto err_tmu1_queues;
11929 +
11930 + if (device_create_file(pfe->dev, &dev_attr_tmu2_queues))
11931 + goto err_tmu2_queues;
11932 +
11933 + if (device_create_file(pfe->dev, &dev_attr_tmu3_queues))
11934 + goto err_tmu3_queues;
11935 +
11936 + if (device_create_file(pfe->dev, &dev_attr_pfemem))
11937 + goto err_pfemem;
11938 +
11939 + if (device_create_file(pfe->dev, &dev_attr_fcs_revalidated))
11940 + goto err_crc_revalidated;
11941 +
11942 +#ifdef HIF_NAPI_STATS
11943 + if (device_create_file(pfe->dev, &dev_attr_hif_napi_stats))
11944 + goto err_hif_napi_stats;
11945 +#endif
11946 +
11947 + return 0;
11948 +
11949 +#ifdef HIF_NAPI_STATS
11950 +err_hif_napi_stats:
11951 + device_remove_file(pfe->dev, &dev_attr_fcs_revalidated);
11952 +#endif
11953 +
11954 +err_crc_revalidated:
11955 + device_remove_file(pfe->dev, &dev_attr_pfemem);
11956 +
11957 +err_pfemem:
11958 + device_remove_file(pfe->dev, &dev_attr_tmu3_queues);
11959 +
11960 +err_tmu3_queues:
11961 + device_remove_file(pfe->dev, &dev_attr_tmu2_queues);
11962 +
11963 +err_tmu2_queues:
11964 + device_remove_file(pfe->dev, &dev_attr_tmu1_queues);
11965 +
11966 +err_tmu1_queues:
11967 + device_remove_file(pfe->dev, &dev_attr_tmu0_queues);
11968 +
11969 +err_tmu0_queues:
11970 + device_remove_file(pfe->dev, &dev_attr_drops);
11971 +
11972 +err_drops:
11973 + device_remove_file(pfe->dev, &dev_attr_gpi);
11974 +
11975 +err_gpi:
11976 + device_remove_file(pfe->dev, &dev_attr_hif);
11977 +
11978 +err_hif:
11979 + device_remove_file(pfe->dev, &dev_attr_bmu);
11980 +
11981 +err_bmu:
11982 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
11983 + device_remove_file(pfe->dev, &dev_attr_util);
11984 +
11985 +err_util:
11986 +#endif
11987 + device_remove_file(pfe->dev, &dev_attr_tmu);
11988 +
11989 +err_tmu:
11990 + device_remove_file(pfe->dev, &dev_attr_class);
11991 +
11992 +err_class:
11993 + return -1;
11994 +}
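The create/remove ladder above is the long-hand idiom; the same files could be published through a single attribute group, collapsing the error handling to one call pair. A sketch reusing the dev_attr_* objects defined above (the util and NAPI entries, being conditional, are left out for brevity):

	static struct attribute *pfe_attrs[] = {
		&dev_attr_class.attr,
		&dev_attr_tmu.attr,
		&dev_attr_bmu.attr,
		&dev_attr_hif.attr,
		&dev_attr_gpi.attr,
		&dev_attr_drops.attr,
		&dev_attr_tmu0_queues.attr,
		&dev_attr_tmu1_queues.attr,
		&dev_attr_tmu2_queues.attr,
		&dev_attr_tmu3_queues.attr,
		&dev_attr_pfemem.attr,
		&dev_attr_fcs_revalidated.attr,
		NULL,
	};

	static const struct attribute_group pfe_attr_group = {
		.attrs = pfe_attrs,
	};

	/* init: return sysfs_create_group(&pfe->dev->kobj, &pfe_attr_group);
	 * exit: sysfs_remove_group(&pfe->dev->kobj, &pfe_attr_group);
	 */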
11995 +
11996 +void pfe_sysfs_exit(struct pfe *pfe)
11997 +{
11998 +#ifdef HIF_NAPI_STATS
11999 + device_remove_file(pfe->dev, &dev_attr_hif_napi_stats);
12000 +#endif
12001 + device_remove_file(pfe->dev, &dev_attr_fcs_revalidated);
12002 + device_remove_file(pfe->dev, &dev_attr_pfemem);
12003 + device_remove_file(pfe->dev, &dev_attr_tmu3_queues);
12004 + device_remove_file(pfe->dev, &dev_attr_tmu2_queues);
12005 + device_remove_file(pfe->dev, &dev_attr_tmu1_queues);
12006 + device_remove_file(pfe->dev, &dev_attr_tmu0_queues);
12007 + device_remove_file(pfe->dev, &dev_attr_drops);
12008 + device_remove_file(pfe->dev, &dev_attr_gpi);
12009 + device_remove_file(pfe->dev, &dev_attr_hif);
12010 + device_remove_file(pfe->dev, &dev_attr_bmu);
12011 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
12012 + device_remove_file(pfe->dev, &dev_attr_util);
12013 +#endif
12014 + device_remove_file(pfe->dev, &dev_attr_tmu);
12015 + device_remove_file(pfe->dev, &dev_attr_class);
12016 +}
12017 --- /dev/null
12018 +++ b/drivers/staging/fsl_ppfe/pfe_sysfs.h
12019 @@ -0,0 +1,17 @@
12020 +/* SPDX-License-Identifier: GPL-2.0+ */
12021 +/*
12022 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
12023 + * Copyright 2017 NXP
12024 + */
12025 +
12026 +#ifndef _PFE_SYSFS_H_
12027 +#define _PFE_SYSFS_H_
12028 +
12029 +#include <linux/proc_fs.h>
12030 +
12031 +u32 qm_read_drop_stat(u32 tmu, u32 queue, u32 *total_drops, int do_reset);
12032 +
12033 +int pfe_sysfs_init(struct pfe *pfe);
12034 +void pfe_sysfs_exit(struct pfe *pfe);
12035 +
12036 +#endif /* _PFE_SYSFS_H_ */